AI Alignment Podcast: Moral Uncertainty and the Path to AI Alignment with William MacAskill
Future of Life Institute, September 18, 2018
Abstract
This episode explores the existential risks facing humanity in the coming century, with an estimated one-in-six chance of an existential catastrophe. These threats arise both from human-driven factors such as nuclear war and climate change and from emerging technologies such as artificial intelligence and engineered pandemics. The conversation proposes a long-term strategy whose primary goal is achieving existential security before addressing questions of humanity's ultimate purpose. MacAskill argues that the magnitude of these risks demands immediate action, especially in closing knowledge gaps and refining decision-making under uncertainty. – AI-generated abstract.
