Mustafa Suleyman on getting Washington and Silicon Valley to tame AI

Robert Wiblin and Keiran Harris

80,000 Hours, September 1, 2023

Abstract

The coming wave of artificial intelligence and biotechnology presents a fundamental dilemma for the 21st century, threatening to destabilize the “narrow path” democratic societies walk between chaos and authoritarianism. The primary near-term threat is not from runaway superintelligence, but from the proliferation of powerful capabilities that could empower small groups or individuals to cause catastrophic harm, undermining the nation-state. While misaligned autonomous AI is a legitimate concern, it is a medium-term problem, perhaps a decade away, and not an issue with current large language model architectures. To manage these risks, a “containment” strategy is necessary, including mandatory government regulations such as capability audits for frontier models, restrictions on dangerous knowledge like bioweapon creation, and a ban on using these models for electioneering. While developing ever-larger models seems to accelerate this race, participation is framed as necessary to shape safety standards from the inside, as the technological push is inevitable given geopolitical and commercial incentives. Engaging policymakers and skeptics is more effective when focused on concrete misuse and national security scenarios rather than speculative arguments about superintelligence. – AI-generated abstract.
