Reducing long-term catastrophic risks from artificial intelligence

Eliezer Yudkowsky et al.

Machine Intelligence Research Institute, 2010

Abstract

In 1965, I. J. Good proposed that machines would one day be smart enough to make themselves smarter. Having made themselves smarter, they would spot still further opportunities for improvement, quickly leaving human intelligence far behind (Good 1965). He called this the “intelligence explosion.” Later authors have called it the “technological singularity” or simply “the Singularity” (Kurzweil 2005; Vinge 1993). The Singularity Institute aims to reduce the risk of a catastrophe resulting from an intelligence explosion through research, education, and conferences. In this paper, we make the case for taking artificial intelligence (AI) risks seriously, and suggest some strategies to reduce those risks.
