Lucia Coulter and Jack Rafferty want to strip the world of lead-based paint
Vox, October 20, 2022
Abstract
The article argues that people concerned with artificial intelligence (AI) risks tend to fall into two camps, skeptics and proponents, and that the author belongs to neither. Currently affiliated with the Centre for Long-Term Resilience, the author notes that those invested in developing AI may face conflicts of interest when working on AI safety and governance; she therefore tries to occupy a position free of such conflicts. While acknowledging trade-offs between concern for AI's present harms and concern for its potential existential risks, she argues that solutions addressing one often address the other. Highlighting the need for oversight and accountability, she encourages greater engagement between AI ethics experts and policymakers. At the same time, she cautions against an excessive focus on extreme risks alone, which could divert attention from more prevalent challenges such as societal inequality, disinformation, and the concentration of power. – AI-generated abstract.
