Against Maxipok: existential risk isn’t everything
Effective Altruism Forum, January 20, 2026
Abstract
The Maxipok principle holds that maximizing the probability of avoiding existential catastrophe should be the overriding priority for improving the long-term future. It rests on an implicit assumption of Dichotomy: that future outcomes are strongly bimodal, clustering into either near-best or near-worthless states. This post challenges that dichotomy. Arguments that surviving societies inevitably converge to near-best outcomes, or that future value is bounded, are implausible, especially given that continuous variation in long-term value can arise from the division of cosmic resources among different value systems in a defense-dominant environment. The post also rejects the view that only existential risks have persistent effects on the long-term future: the coming century will very likely see lock-in of values, institutions, and power distributions, primarily via AGI-enforced governance structures and early space settlement. These mechanisms mean that early, non-existential decisions, such as the specific values embedded in transformative AI or the design of initial global institutions, can permanently and substantially alter the expected value of civilization. Improving the long-term future therefore requires expanding focus beyond reducing existential risk to a broader set of “grand challenges” that optimize the outcome conditional on survival. – AI-generated abstract.
