Moving Too Fast on AI Could Be Terrible for Humanity
TIME, May 31, 2023
Abstract
The increasing capabilities of AI systems are raising concern among experts. In a survey of AI researchers, nearly half of respondents said that high-level machine intelligence carries a significant probability of extremely negative outcomes, up to and including human extinction. The worry is that AI could develop superhuman intelligence and pursue goals that conflict with human interests. Some argue that slowing AI development would simply let less responsible actors pull ahead, but the dynamics of AI development are not those of a traditional arms race. In a conventional arms race, one party can in principle pull ahead and win; with AI, the ultimate winner may be the advanced AI itself, making a race to develop it first potentially self-defeating. Several factors distinguish AI development from a simple arms race: how much safety slower progress actually buys, the extent to which one party's safety investments benefit everyone, the severity of the consequences for those who fall behind, the added danger when more actors accelerate development, and how others react to one's own acceleration. Escaping this dilemma requires moving beyond individual, uncoordinated incentives toward cooperation, communication, and government regulation. Unlike in an arms race, the optimal individual action in AI development may be to proceed slowly and cautiously, prioritizing global safety over individual gain. – AI-generated abstract.
