AGI ruin scenarios are likely (and disjunctive)
Effective Altruism Forum, July 27, 2022
Abstract
The probability of artificial general intelligence (AGI) leading to ruin is argued to be high (>90% within our lifetimes), rather than a narrow possibility with large potential harm. Ruin is predicted to follow from a confluence of technical, organizational, and global factors: humanity must navigate a deployment strategy for AGI that prevents global destruction, resolve the technical alignment problem, and ensure that the internal dynamics of the relevant organizations allow AGI to be deployed safely. Unforeseen misuse, though excluded from this estimate, poses additional distinct risks. The argument cautions against overestimating humanity's general competence in handling AGI, pointing to observed responses to crises such as the COVID-19 pandemic. Potential mitigations include the possibility that the AGI alignment problem is avoided altogether, or that technical solutions obviate the need for widespread coordination. A rapid increase in overall human competence in the near future is acknowledged as the single outside-view variable that could contradict this doom-laden prognosis. – AI-generated abstract.
