My highly personal skepticism braindump on existential risk from artificial intelligence
Measure is Unceasing, January 23, 2023
Abstract
This document outlines why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this unease, considerations like selection effects at the level of which arguments get discovered and distributed, community epistemic problems, and increased uncertainty due to chains of reasoning with imperfect concepts strike me as real and important. I still think that existential risk from AGI is important. But I don’t view it as certain or close to certain, and I think that something is going wrong when people see it as all but assured.
