The Hanson-Yudkowsky AI-Foom Debate

Robin Hanson and Eliezer Yudkowsky

Berkeley, 2013

Abstract

The possibility of an intelligence explosion, in which a machine intelligence surpasses human intelligence in designing new, even more intelligent machines, has been a subject of debate for over half a century. Some believe that such an event is highly likely and could have a catastrophic impact on humanity. Others disagree, citing historical precedents and the robust nature of innovation in society. While both sides agree that the development of machine intelligence deserves significant attention, they disagree about the rate at which it will occur and how far it will go. Whether such an event could be localized, driven by a single AI, is a point of particular contention. The authors analyze the concept of optimization power and consider how it relates to past events such as the advent of human intelligence. They discuss the potential consequences of recursive self-improvement and explore the concept of a “hard takeoff,” in which an AI rapidly becomes superintelligent and gains control of its own development. The authors further consider the roles of architecture and content in intelligence, the potential for a global or a local intelligence explosion, and the difficulty of predicting future events.

– AI-generated abstract