
Communications with Jaan Tallinn

Holden Karnofsky

GiveWell, 2011

Abstract

The following work may or may not contain an abstract. Nonetheless, this AI-generated abstract summarizes a discussion of the output of safety research conducted by the Singularity Institute for Artificial Intelligence (SIAI). These outputs include papers, tools, insights, and conversations that may be useful in developing safer artificial general intelligence (AGI). It is argued that SIAI’s work is unlikely to be rendered moot because a scenario referred to as “hard takeoff” — in which AGI would develop rapidly, without giving humans time to react — will likely play out as SIAI predicts. Lastly, it is emphasized that building AI tools is more important than increasing awareness of the AGI problem. – AI-generated abstract.
