Lucas Perry AI Alignment Podcast: Moral Uncertainty and the Path to AI Alignment with William MacAskill

Abstract

The article examines the existential risks facing humanity in the coming century, estimating a one in six chance of an existential catastrophe. These threats arise from human-driven factors such as nuclear war and climate change, and from emerging technologies such as artificial intelligence and engineered pandemics. It proposes a long-term strategy whose primary goal is achieving existential security before addressing questions of humanity's ultimate purpose. The author argues that the magnitude of these risks demands immediate action, especially in closing knowledge gaps and refining decision-making strategies under uncertainty. – AI-generated abstract.
