Principles for the AGI Race
LessWrong, August 30, 2024
Abstract
This work argues that the development of artificial general intelligence (AGI) presents a significant risk to humanity. To mitigate these risks, the author proposes eight principles for guiding the development of AGI: among them, the need for broad and legitimate authority in decision-making, overwhelming evidence of net benefit before taking actions that impose significant risks on others, an exit strategy for the race toward AGI, and the importance of maintaining accurate race intelligence. The author critiques the actions of OpenAI and Anthropic, arguing that both companies have failed to uphold these principles, and concludes by calling for greater transparency, public engagement, and independent oversight in the development of AGI. – AI-generated abstract.
