Catastrophic AI misuse
80,000 Hours, 2025
Abstract
Advanced artificial intelligence (AI) is projected to accelerate scientific discovery at an unprecedented rate, potentially compressing decades of research into a few years. While this promises benefits for medicine and technology, such rapid progress risks outpacing global safety protocols and institutional oversight. The primary danger lies in the creation of advanced weapons of mass destruction, particularly engineered bioweapons more lethal and transmissible than naturally occurring pathogens. AI also enhances cyberwarfare capabilities, which could destabilize nuclear deterrence or grant unauthorized access to dangerous technologies. Beyond known threats, accelerated progress in fields like nanotechnology and high-energy physics could create unforeseen catastrophic risks. Together, these developments raise the likelihood of global catastrophes arising from state-level arms races or misuse by non-state actors. Countering these threats requires prompt implementation of international governance frameworks, liability laws, and technical safeguards such as safety-by-design and rigorous biological screening. Proactive coordination is essential to ensure that defensive measures and regulatory structures keep pace with the expanding capabilities of autonomous AI systems. – AI-generated abstract.
