
Corrigendum: Responses to catastrophic AGI risk: A survey (2015 Phys. Scr. 90 018001)

Kaj Sotala and Roman V. Yampolskiy

Physica Scripta, vol. 90, no. 6, 2015, pp. 069501

Abstract

Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale (‘catastrophic risk’). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.
