12 Tentative Ideas for US AI Policy
Open Philanthropy, April 17, 2023
Abstract
This work outlines 12 tentative US policy ideas aimed at mitigating potential existential risks from transformative artificial intelligence. Acknowledging rapid AI advancements and expert concerns about "extremely bad" outcomes, these proposals seek to increase the odds of beneficial long-term outcomes. Key recommendations include implementing software export controls for frontier AI models, requiring hardware security features on advanced chips, tracking large compute clusters, licensing both such clusters and the development of frontier models, and establishing stringent information security and safety-testing requirements for these systems. Further proposals involve funding research and development in AI alignment, interpretability, and defensive information security; creating antitrust safe harbors for AI safety collaboration; mandating AI incident reporting; clarifying developer liability for AI-related harms; and developing mechanisms for the rapid shutdown of large compute clusters. These ideas represent a personal perspective, highlighting areas for future policy development and advocacy. – AI-generated abstract.