Risks from autonomous weapon systems and military AI
Effective Altruism Forum, May 19, 2022
Abstract
The use and proliferation of autonomous weapon systems appears likely in the near future, but the risks of AI-enabled warfare are under-studied and under-funded. Autonomous weapons, and military applications of AI more broadly (such as early-warning and decision-support systems), could worsen risk factors for a variety of issues, including great power conflict, nuclear stability, and AI safety. Although "killer robots" receive considerable media attention (and funding), autonomous weapons remain a neglected issue for three reasons. First, the largest organizations in this space focus mostly on humanitarian issues. Second, the few researchers who do study risks beyond "slaughterbots" receive even less funding, and there is a talent shortage. Third, the most widely advocated solution, a formal treaty-based arms control agreement or "killer robot ban," is not the most tractable one. Philanthropists therefore have an opportunity to make an outsized impact in this space and reduce the long-term risks to humanity's survival and flourishing.

This report is intended to advise philanthropic donors who wish to reduce the risks from autonomous weapon systems and from military applications of AI more broadly. We argue that much of the problem arises from strategic risks that affect the likelihood of great power conflict and nuclear war, and that good interventions focus on key stakeholders, not multilateralism. We recommend that philanthropists fund research on strategic risks and work on confidence-building measures. – AI-generated abstract.
