
Our descendants will probably see us as moral monsters. What should we do about that?

Robert Wiblin and Keiran Harris

80,000 Hours, January 19, 2018

Abstract

Moral uncertainty is the state of not knowing which moral theory is correct, a situation that the author argues is common and potentially unavoidable. Given this uncertainty, traditional approaches to moral decision-making based on a single moral view are insufficient. The author proposes instead using expected value reasoning, where one assigns probabilities to different moral theories and calculates the expected value of each action under each theory. However, this approach faces several challenges, including how to compare values across disparate moral viewpoints, how to deal with fanatical views that assign infinite wrongness to specific actions, and how to manage the potential for small probabilities in “wacky” moral views to dominate decision-making. While the author believes the problems associated with expected value reasoning are solvable, he concludes that the practical implications of moral uncertainty are substantial. The author argues that given uncertainty about the most valuable things in the world, one should prioritize actions that seem broadly good for many different moral theories. This includes focusing on long-term future wellbeing and protecting fundamental human rights. Ultimately, he argues that humanity should enter a long reflection period in which moral uncertainty is taken seriously, and where a large portion of civilization dedicates itself to moral research. – AI-generated abstract
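The expected value reasoning the abstract describes can be illustrated with a small sketch: assign credences (probabilities) to rival moral theories, score each action under each theory, and pick the action with the highest credence-weighted score. All theory names, actions, and numbers below are illustrative assumptions, not taken from the article, and the sketch sidesteps the intertheoretic comparison problem the abstract raises by simply assuming the theories share one value scale.

```python
# Hypothetical sketch of decision-making under moral uncertainty.
# Credences over moral theories (illustrative; must sum to 1).
credences = {"utilitarianism": 0.5, "deontology": 0.3, "virtue_ethics": 0.2}

# Choiceworthiness of each action under each theory, on an assumed shared
# scale -- the abstract notes that comparing values across disparate
# theories is itself a hard open problem, which this toy setup ignores.
scores = {
    "donate":     {"utilitarianism": 10, "deontology": 5, "virtue_ethics": 7},
    "do_nothing": {"utilitarianism": 0,  "deontology": 0, "virtue_ethics": -1},
}

def expected_value(action):
    """Credence-weighted sum of an action's score across all theories."""
    return sum(credences[t] * scores[action][t] for t in credences)

# Choose the action maximizing expected choiceworthiness.
best = max(scores, key=expected_value)
# best == "donate" (expected value 7.9 vs. -0.2 for "do_nothing")
```

The abstract's "fanaticism" worry shows up here directly: if any theory assigned an action a score of negative infinity, that single term would dominate the weighted sum no matter how small its credence.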
