The power of catastrophic thinking
The New York Review of Books, February 25, 2021
Abstract
The existence of humanity is under threat from several existential risks, ranging from natural phenomena such as supervolcanic eruptions to anthropogenic threats such as engineered pandemics and unaligned artificial intelligence. A careful risk analysis suggests that natural risks pose a negligible threat, that anthropogenic risks are far more significant, and that the greatest threat to humanity comes from unaligned artificial intelligence. However, the author argues that the moral frameworks used to assess existential risk often lead to preposterous conclusions, in which humanity’s potential future, even in distant epochs, weighs heavily on the present. The author proposes instead that we should care about the future of humanity not because of some moral obligation, but because the value of our own lives depends on the continuation of humanity. – AI-generated abstract
