Ideological engineering and social control: A neglected topic in AI safety research?

Geoffrey Miller

Effective Altruism Forum, August 31, 2017

Abstract

The potential for AI to facilitate ideological control by governments and corporations represents a significant, albeit less-discussed, risk. AI systems could enhance existing surveillance infrastructure, automate the creation and dissemination of targeted propaganda, and even manipulate individuals’ experiences in virtual and augmented reality environments. This could lead to the suppression of dissent, the erosion of individual autonomy, and the creation of societies increasingly isolated from reality. While not posing an existential threat, such a scenario represents a serious risk to human welfare and could exacerbate other global catastrophic risks by limiting citizen oversight and enabling the unchecked growth of powerful AI systems. – AI-generated abstract.