An Interview with Ben Garfinkel, Governance of AI Program Researcher
The Politic, June 20, 2019
Abstract
The interview focuses on the work done at the Governance of AI Program, part of the Future of Humanity Institute at the University of Oxford. Ben Garfinkel, a research fellow at the program, discusses the international security challenges associated with progress in artificial intelligence, highlighting both short-term risks, such as the malicious use of AI in cyberattacks, and long-term concerns related to the possibility of artificial general intelligence. Garfinkel outlines several concrete risks, including the potential use of AI for surveillance, the automation of jobs, and the development of autonomous weapon systems. He also emphasizes the need for collaboration between countries and companies to ensure the safe and ethical development of AI, suggesting that arms control agreements, credible commitments, and forecasting research could be beneficial. Garfinkel further discusses the unique challenges posed by AI in comparison with other dual-use technologies, such as nuclear and biological weapons, and argues that the private sector plays a significantly larger role in AI development than in those fields. – AI-generated abstract.
