The Open Letter on AI Doesn't Go Far Enough
TIME, March 29, 2023
Abstract
The development of artificial intelligence systems surpassing human capabilities poses a severe existential risk. The creation of superhuman AI under current conditions, lacking proven alignment techniques and a deep understanding of the systems' internal workings, is highly likely to result in the extinction of humanity and all biological life. Relying on future AI to solve alignment problems is not a viable plan, and progress in AI capabilities significantly outpaces progress in AI safety and alignment research. Proposed temporary moratoriums, such as a six-month pause on training systems more powerful than GPT-4, are insufficient to address the magnitude of the threat. A complete, indefinite, and worldwide shutdown of large-scale AI training runs is required. This necessitates international agreements, the decommissioning of large GPU clusters, stringent controls on the allocation of computing power for AI training, and hardware tracking. Enforcement of the ban must be global and, if necessary, backed by military force, with AI risk mitigation prioritized even above the avoidance of nuclear conflict. – AI-generated abstract.
