Intelligence Explosion: Evidence and Import
In Amnon H. Eden et al. (eds.), Singularity Hypotheses: A Scientific and Philosophical Assessment, Berlin, 2012, pp. 15–40.
Abstract
In this chapter we review the evidence for and against three claims: that (1) there is a substantial chance we will create human-level AI before 2100; that (2) if human-level AI is created, there is a good chance vastly superhuman AI will follow via an "intelligence explosion"; and that (3) an uncontrolled intelligence explosion could destroy everything we value, but a controlled intelligence explosion would benefit humanity enormously if we can achieve it. We conclude with recommendations for increasing the odds of a controlled intelligence explosion relative to an uncontrolled intelligence explosion.
