
Existential risk pessimism and the time of perils

David Thorstad

Effective Altruism Forum, August 12, 2022

Abstract

Many EAs endorse two claims about existential risk. First, existential risk is currently high:

(Existential Risk Pessimism) Per-century existential risk is very high.

For example, Toby Ord (2020) puts the risk of existential catastrophe by 2100 at 1/6, and participants at the Oxford Global Catastrophic Risk Conference in 2008 estimated a median 19% chance of human extinction by 2100 (Sandberg and Bostrom 2008). Let's ballpark Pessimism using a 20% estimate of per-century risk. Second, many EAs think that it is very important to mitigate existential risk:

(Astronomical Value Thesis) Efforts to mitigate existential risk have astronomically high expected value.

You might think that Existential Risk Pessimism supports the Astronomical Value Thesis. After all, it is usually more important to mitigate large risks than to mitigate small risks. In this post, I extend a series of models due to Toby Ord and Tom Adamczewski to do five things:

1. I show that across a range of assumptions, Existential Risk Pessimism tends to hamper, not support, the Astronomical Value Thesis.
2. I argue that the most plausible way to combine Existential Risk Pessimism with the Astronomical Value Thesis is through the Time of Perils Hypothesis.
3. I clarify two features that the Time of Perils Hypothesis must have if it is going to vindicate the Astronomical Value Thesis.
4. I suggest that arguments for the Time of Perils Hypothesis which do not appeal to AI are not strong enough to ground the relevant kind of Time of Perils Hypothesis.
5. I draw implications for existential risk mitigation as a cause area.
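The tension between Pessimism and the Astronomical Value Thesis can be seen in the simplest version of the kind of model the post extends: constant per-century risk r and constant per-century value v. The sketch below is mine, not the author's code, and the function names are illustrative. Summing the geometric series gives an expected future value of s·v/r (where s is the chance of surviving this century), and the expected gain from cutting this century's risk by a fraction f works out to f·v, regardless of how high r is.

```python
# Minimal sketch (assumed model, not the post's actual code): constant
# per-century risk r, constant per-century value v.
def expected_future_value(r, v=1.0, first_century_risk=None):
    """Expected value of the future under constant per-century risk r.

    Humanity survives century 1 with probability s = 1 - first_century_risk
    (defaulting to 1 - r), then each later century with probability 1 - r.
    Each survived century contributes v, so the geometric series sums to
    s * v / r.
    """
    s = 1.0 - (r if first_century_risk is None else first_century_risk)
    return s * v / r


def value_of_mitigation(f, r, v=1.0):
    """Expected value gained by cutting this century's risk from r to (1-f)*r."""
    mitigated = expected_future_value(r, v, first_century_risk=(1 - f) * r)
    return mitigated - expected_future_value(r, v)


# The gain equals f * v whether risk is high (Pessimism) or low, so in this
# simple model Pessimism does nothing to boost the value of mitigation.
print(value_of_mitigation(0.1, r=0.2))    # Pessimist: 20% per-century risk
print(value_of_mitigation(0.1, r=0.001))  # Optimist: 0.1% per-century risk
```

Note also that expected_future_value(0.2) is only 4 centuries' worth of value, while expected_future_value(0.001) is 999: a Pessimist expects a much smaller future, which is the intuitive reason Pessimism hampers rather than supports the Astronomical Value Thesis.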
