On classic arguments for AI discontinuities
2020
Abstract
This work focuses on two hypotheses about the development of artificial general intelligence (AGI). The first is that AGI systems will emerge suddenly; the second is that, once AGI emerges, its capabilities will transform rapidly, a scenario referred to here as an “explosive aftermath”. Although these hypotheses are often discussed together, they are distinct in their implications and underlying assumptions. The absence of an explicit differentiation between them has encouraged the assumption that they stand or fall together. This assumption is problematic: the two hypotheses rest on separate grounds and must be evaluated independently. Of the two, the “sudden emergence” hypothesis is the less plausible. If AGI instead emerges through a gradual process of incremental advancement, its arrival need not have radical or sudden impacts, and we may have ample time to address the risks associated with its development. The urgency of focusing solely on the “explosive aftermath” is thereby weakened. – AI-generated abstract.
