Which consequentialism? Machine ethics and moral divergence
In Carson Reynolds and Alvaro Cassinelli (eds.), AP-CAP 2009: The Fifth Asia-Pacific Computing and Philosophy Conference, October 1st–2nd, University of Tokyo, Japan, Proceedings, 2009, pp. 23–25
Abstract
Some researchers in the field of machine ethics have suggested consequentialist or utilitarian theories as organizing principles for Artificial Moral Agents (AMAs) (Wallach, Allen, and Smit 2008) that are ‘full ethical agents’ (Moor 2006), while acknowledging extensive variation among these theories as a serious challenge (Wallach, Allen, and Smit 2008). This paper develops that challenge, beginning with a partial taxonomy of consequentialisms proposed by philosophical ethics. We discuss numerous ‘free variables’ of consequentialism where intuitions conflict about optimal values, and then consider special problems of human-level AMAs designed to implement a particular ethical theory, by comparison to human proponents of the same explicit principles. In conclusion, we suggest that if machine ethics is to fully succeed, it must draw upon the developing field of moral psychology.
