Robot Makes Ethical Decisions


Robots and other machines equipped with artificial intelligence already fire on military targets, dispense cash (think: ATMs), drive cars and deliver medication to patients, to name a few tasks. If people performed these duties, they would be expected to follow moral and ethical guidelines. But what about robots? They can’t yet think and act of their own accord, so should we expect them to behave morally?

Researchers working in the field of machine ethics say yes and are investigating ways to program machines to behave morally.

Philosopher Susan Anderson and her research partner and husband, computer scientist Michael Anderson, have programmed a robot to behave in an ethical manner. Based on certain facts and outcomes, the robot must weigh a decision and make a choice about what to do. The scenario is rooted in the medical field, where the robot’s duty is to remind patients to take their medicine. A human caregiver would judge how often to remind a patient and whether or not to inform the doctor if the patient refuses the medication. But how do you program a robot to do that?

The Andersons created a software program based on an approach to ethics developed in 1930 by Scottish philosopher W. D. Ross. The so-called prima facie duty approach takes into account the different obligations a person must weigh (being just, doing good, not causing harm, keeping a promise) when deciding how to act morally.
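To make the idea concrete, here is a minimal sketch of how a prima facie duty approach could be represented in code. It is not the Andersons’ actual system; the duty names, weights and scores below are illustrative assumptions. Each candidate action is scored on how well it satisfies or violates each duty, and the action with the best weighted total wins.

```python
from dataclasses import dataclass

# Illustrative prima facie duties and weights (assumptions for this sketch,
# not the Andersons' actual values).
DUTY_WEIGHTS = {
    "beneficence": 2.0,      # doing good
    "non_maleficence": 3.0,  # not causing harm
    "justice": 1.0,          # being just
    "fidelity": 1.0,         # keeping a promise
}

@dataclass
class Action:
    name: str
    duty_scores: dict  # how much this action satisfies (+) or violates (-) each duty, on -1..1

def weighted_score(action: Action) -> float:
    """Sum each duty's satisfaction level, scaled by that duty's weight."""
    return sum(DUTY_WEIGHTS[duty] * score for duty, score in action.duty_scores.items())

def choose_action(actions: list) -> Action:
    """Pick the action whose weighted duty satisfaction is highest."""
    return max(actions, key=weighted_score)

# Example: reminding does good and avoids harm; staying quiet does neither.
remind = Action("remind", {"beneficence": 0.8, "non_maleficence": 0.5, "fidelity": 0.3})
stay_quiet = Action("stay_quiet", {"beneficence": -0.4, "non_maleficence": -0.6})
print(choose_action([remind, stay_quiet]).name)  # -> "remind"
```

In practice, the hard part is deciding where those weights come from; a sketch like this simply assumes them.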

In the case of a patient taking medication, the robot’s program weighs the potential benefit to the patient if she takes her medicine, the harm that may come to her if she doesn’t, and her right to autonomy. In the video demonstration below (complete with funky background music), the robot reminds the patient to take the medication and, after the patient says “no” a couple of times, finally decides to tell the doctor.
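As a rough illustration of that trade-off (again, a sketch with assumed thresholds, not the published system), the robot might re-weigh benefit, harm and autonomy after each refusal and escalate to the doctor only once the accumulating risk of a missed dose outweighs the patient’s right to say no:

```python
def next_step(benefit: float, harm_if_skipped: float,
              autonomy: float, refusals: int) -> str:
    """Decide what the reminder robot does next.

    benefit          -- expected good from taking the medication (0..1)
    harm_if_skipped  -- expected harm from missing the dose (0..1)
    autonomy         -- weight given to the patient's right to refuse (0..1)
    refusals         -- how many times the patient has said "no" so far

    All thresholds here are illustrative assumptions.
    """
    # Each refusal strengthens the case for acting, because the likely
    # harm of a missed dose keeps accumulating.
    escalation_pressure = (benefit + harm_if_skipped) * (1 + refusals)

    if escalation_pressure > autonomy * 4:
        return "notify_doctor"
    if refusals > 0:
        return "remind_again"
    return "remind"


# Example: a patient who has refused twice, where skipping the dose is risky.
print(next_step(benefit=0.6, harm_if_skipped=0.7, autonomy=0.8, refusals=2))
# -> "notify_doctor"
```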

If I were the robot, I’m not sure I would announce this, since the patient could just flip the off switch. Other factors come into play, too, such as lying. What if the patient says she will take the medication but then doesn’t? Just because a robot has morals doesn’t mean a human will.

Source: http://news.discovery.com/tech/robot-makes-ethical-decisions.html