
AGI safety from first principles: superintelligence

Richard Ngo

AI Alignment Forum, September 28, 2020

Abstract

Future artificial general intelligence (AGI) is likely to surpass human intelligence, potentially leading to the development of “superintelligence.” The path to superintelligence may involve the duplication and cooperation of multiple AGIs, cultural learning among AIs, and recursive improvement, driven by the ability of AIs to improve their own training processes. Such superintelligence could have profound implications for society and its governance, raising complex questions about goals and motivations. – AI-generated abstract
