Machine ethics and the idea of a more-than-human moral world
In Michael Anderson and Susan Leigh Anderson (eds.) Machine Ethics, Cambridge, 2011, pp. 115–137
Abstract
“We are the species equivalent of that schizoid pair, Mr Hyde and Dr Jekyll; we have the capacity for disastrous destruction but also the potential to found a magnificent civilization. Hyde led us to use technology badly; we misused energy and overpopulated the earth, but we will not sustain civilization by abandoning technology. We have instead to use it wisely, as Dr Jekyll would do, with the health of the Earth, not the health of people, in mind.” – Lovelock 2006: 6–7

Introduction

In this paper I will discuss some of the broad philosophical issues that apply to the field of machine ethics (ME). ME is often seen primarily as a practical research area involving the modeling and implementation of artificial moral agents. However, this shades into a broader, more theoretical inquiry into the nature of ethical agency and moral value as seen from an AI or information-theoretical point of view, as well as into the extent to which autonomous AI agents can have moral status of different kinds. We can refer to these as practical and philosophical ME respectively.

Practical ME has various kinds of objectives. Some are technically well defined and relatively close to market, such as the development of ethically responsive robot care assistants or of automated advisers for clinicians on medical ethics issues. Other practical ME aims are longer term, such as the design of a general-purpose ethical reasoner/advisor – or perhaps even a “genuine” moral agent with a status equal (or as equal as possible) to that of human moral agents.
