Is AI safety ‘rather speculative long-termism’?
Oxford Political Review, August 25, 2019
Abstract
Artificial Intelligence (AI) development poses significant potential risks to humanity, including the risk of an AI with malevolent or hostile intentions. Critics of prioritizing AI safety research argue that the risk is too remote and uncertain to justify diverting resources from other pressing global concerns. This article argues, however, that leading AI experts regard the risk of catastrophic AI as real and imminent, warranting the prioritization of AI safety research even at the expense of other important causes. AI researchers estimate a 50% chance of AI surpassing human capabilities by 2061, and a 31% chance that such a development would lead to catastrophic outcomes. Weighing these probabilities against the value of reducing existential risk, the author concludes that the expected returns on investment in AI safety research justify diverting resources from other important causes. – AI-generated abstract.
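
As a rough back-of-the-envelope illustration of how the two survey figures might combine (a sketch, not a calculation made in the article itself, and resting on the assumption that the 31% estimate is conditional on AI actually surpassing human capabilities):

```latex
% Hedged illustration only: implied joint probability of a catastrophic outcome,
% assuming the 31% figure is conditional on AI surpassing human capabilities by 2061.
P(\text{catastrophe})
  \approx P(\text{AI surpasses humans by 2061}) \times P(\text{catastrophe} \mid \text{surpasses humans})
  = 0.50 \times 0.31
  \approx 0.16
```

On that reading, the implied joint probability is roughly 15%; this is the order of magnitude that the article's cost-benefit argument weighs against the value of other causes.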
