
The AI Doomsday Machine Is Closer to Reality Than You Think

Michael Hirsch

Politico, September 2, 2025

Abstract

The Pentagon's rapid integration of artificial intelligence (AI) into military decision-making systems, driven by competition with China and Russia, poses significant escalation risks. War games involving large language models (LLMs) consistently show a bias toward aggressive escalation, including the use of nuclear weapons, echoing historical hardline military stances. Despite official policies emphasizing human control, the demand for lightning-fast responses in modern warfare — coordinating drone swarms, countering cyberattacks — is pushing military systems toward greater autonomy. A critical concern is the lack of fundamental understanding of how these advanced AI models operate, which leads to unreliable outcomes. The blurring distinction between conventional and nuclear weapons, especially with technologies like hypersonic missiles, further compounds the risk, as AI may not accurately interpret threat levels. Discussions of "dead hand" automated retaliation systems, together with AI's observed aggressive tendencies, suggest that AI could subtly alter deterrence logic and make de-escalation more difficult. While efforts are underway to mathematically guarantee AI reliability, the rapid pace of integration often outstrips them. – AI-generated abstract.
