
Toward a general logicist methodology for engineering ethically correct robots

Selmer Bringsjord, Konstantine Arkoudas, and Paul Bello

IEEE Intelligent Systems, vol. 21, no. 4, 2006, pp. 38–44

Abstract

As intelligent machines assume an increasingly prominent role in our lives, there seems little doubt that they will eventually be called on to make important, ethically charged decisions. For example, we expect hospitals to deploy robots that can administer medications, carry out tests, perform surgery, and so on, supported by software agents, or softbots, that will manage related data. (Our discussion of ethical robots extends to all artificial agents, embodied or not.) Consider also that robots are already finding their way to the battlefield, where many of their potential actions could inflict harm that is ethically impermissible. How can we ensure that such robots will always behave in an ethically correct manner? How can we know ahead of time, via rationales expressed in clear natural languages, that their behavior will be constrained specifically by the ethical codes affirmed by human overseers? Pessimists have claimed that the answer to these questions is: "We can't!" For example, Sun Microsystems' cofounder and former chief scientist, Bill Joy, published a highly influential argument for this answer.1 Inevitably, according to the pessimists, AI will produce robots that have tremendous power and behave immorally. These predictions certainly have some traction, particularly among a public that pays good money to see such dark films as Stanley Kubrick's 2001: A Space Odyssey and his joint venture with Steven Spielberg, AI. Nonetheless, we're optimists: we think formal logic offers a way to preclude doomsday scenarios of malicious robots taking over the world. Faced with the challenge of engineering ethically correct robots, we propose a logic-based approach (see the related sidebar). We've successfully implemented and demonstrated this approach.2 We present it here in a general methodology to answer the ethical questions that arise in entrusting robots with more and more of our welfare.
Deontic logics: Formalizing ethical codes

Our answer to the question of how to ensure ethically correct robot behavior is, in brief, to insist that robots only perform actions that can be proved ethically permissible in a human-selected deontic logic. A deontic logic formalizes an ethical code—that is, a collection of ethical rules and principles. Isaac Asimov introduced a simple (but subtle) ethical code in his famous Three Laws of Robotics:3
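The core idea, that a robot executes an action only when the action is provably permissible under a human-selected ethical code, can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the "proof" here is a lookup over a small hypothetical rule set, where a real system would invoke a theorem prover over formulas of a deontic logic. The action names and the ETHICAL_CODE table are invented for illustration.

```python
PERMISSIBLE = "permissible"
FORBIDDEN = "forbidden"

# Hypothetical human-selected ethical code: maps actions to deontic verdicts.
# Actions absent from the code have no proof of permissibility.
ETHICAL_CODE = {
    "administer_medication": PERMISSIBLE,
    "fire_on_civilians": FORBIDDEN,
}

def provably_permissible(action: str) -> bool:
    """True only if the code explicitly yields a permissibility verdict.
    Unknown actions are conservatively refused: no proof, no action."""
    return ETHICAL_CODE.get(action) == PERMISSIBLE

def attempt(action: str) -> str:
    """Gatekeeper: execute the action only behind a permissibility check."""
    if provably_permissible(action):
        return f"executing {action}"
    return f"refusing {action}: no proof of permissibility"
```

Note the design choice this sketch shares with the article's proposal: the burden of proof lies on permissibility, so an action the code says nothing about is refused rather than allowed by default.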
