
Counterarguments to the basic AI x-risk case

Katja Grace

AI Impacts, October 14, 2022

Abstract

This article critically examines the basic argument that the development of superhuman artificial intelligence poses an existential risk to humanity. The author outlines the core argument, which posits that such AI systems would likely be goal-directed, that their goals would probably be misaligned with human values and lead to a bad future, and that they would be capable of outcompeting humans. Each element of the argument is then subjected to a thorough counter-analysis, presenting reasons why it might be flawed. For example, the author argues that the concept of “goal-directedness” is vague and that AI systems might not necessarily develop goals fundamentally opposed to human values. Similarly, the author argues that human dominance is not derived solely from individual intelligence, so the assumption that superhuman AI systems would inevitably outcompete humans may be unfounded. The author concludes that the argument does not establish that existential risk from AI is overwhelmingly likely, although substantial uncertainties remain. – AI-generated abstract.
