Applying Asimov’s Laws of Robotics

Isaac Asimov introduced his Three Laws of Robotics in the 1942 short story "Runaround," later collected in I, Robot (1950). The three laws have since become the de facto basis for most science fiction dealing with robots, and even for discussions in engineering:

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order law.
Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with a higher-order law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with a higher-order law.


Later, as Asimov integrated his Robot series with his Foundation series, he added a Zeroth Law:

Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

An early real-world counterpart to Asimov's fictional robots is ELIZA, created in 1966 by Professor Joseph Weizenbaum of the Massachusetts Institute of Technology, who described it in his paper "ELIZA: A Computer Program for the Study of Natural Language Communication Between Man and Machine." Initially programmed in about 240 lines of code, ELIZA simulated a psychotherapist by answering questions with questions.
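The question-answering-with-questions trick can be sketched with simple pattern matching. The snippet below is a toy illustration only, not Weizenbaum's original program (which was written in MAD-SLIP); the rules and word lists here are invented for the example:

```python
import re

# Swap first-person words for second-person ones so the echo reads naturally.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule pairs a regex with a question template; the captured text
# is reflected and echoed back inside a new question.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(text: str) -> str:
    """Replace each reflectable word, leaving everything else unchanged."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(statement: str) -> str:
    """Return a question built from the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default keeps the conversation moving

print(respond("I feel anxious about my robot"))
# Why do you feel anxious about your robot?
```

The original ELIZA worked on the same principle at larger scale: a script of ranked decomposition and reassembly rules, with keyword priorities deciding which rule fires.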

Robotics and robot design have now advanced to the stage that engineers are having to revisit Asimov's laws for practical implementation. David Woods, professor of integrated systems engineering at Ohio State University, says: "The philosophy has been, 'sure, people make mistakes, but robots will be better — a perfect version of ourselves.' We wanted to write three new laws to get people thinking about the human-robot relationship in more realistic, grounded ways." He addresses the practical issues in "Beyond Asimov: The Three Laws of Responsible Robotics" by Robin R. Murphy and David D. Woods, IEEE Intelligent Systems, July/August 2009, pp. 14-20.

"Go back to the original context of the stories," Woods says, referring to Asimov's I, Robot among others. "He's using the three laws as a literary device. The plot is driven by the gaps in the laws — the situations in which the laws break down. For those laws to be meaningful, robots have to possess a degree of social intelligence and moral intelligence, and Asimov examines what would happen when that intelligence isn't there."

“His stories are so compelling because they focus on the gap between our aspirations about robots and our actual capabilities. And that’s the irony, isn’t it? When we envision our future with robots, we focus on our hopes and desires and aspirations about robots — not reality.”

In reality, engineers are still struggling to give robots basic vision and language skills. These efforts are hindered in part by our lack of understanding of how these skills are managed in the human brain. We are far from a time when humans can agree on a universal ethical or moral code, and even further from imbuing robots with such a code.

Woods and his coauthor, Robin Murphy of Texas A&M University, composed three new laws that try to put the responsibility back on humans:

  • A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
  • A robot must respond to humans as appropriate for their roles.
  • A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.
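The paper proposes principles, not code, but the shift in responsibility is easy to see if the three new laws are recast as checks in a hypothetical deployment system. Everything below (the `WorkSystem` class, its fields, and the function names) is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class WorkSystem:
    """Hypothetical description of a human-robot work system."""
    meets_safety_standards: bool  # certified to legal/professional standards?
    operator_role: str            # which human role the robot answers to
    autonomy_level: int           # situated autonomy (0 = fully teleoperated)

def may_deploy(system: WorkSystem) -> bool:
    # New First Law: the *human* may not deploy a robot unless the
    # whole work system meets safety and ethics standards.
    return system.meets_safety_standards

def respond_to(system: WorkSystem, requester_role: str) -> str:
    # New Second Law: respond to humans as appropriate for their roles,
    # rather than obeying any order from anyone.
    if requester_role == system.operator_role:
        return "execute command"
    return "defer to operator"

def protect_self(system: WorkSystem, handoff_is_smooth: bool) -> bool:
    # New Third Law: self-protection is allowed only when control can be
    # handed back smoothly, and never against the first two laws.
    return system.autonomy_level > 0 and handoff_is_smooth
```

Note how each check gates on the humans and the surrounding work system, not on the robot's own moral judgment; that is the reframing Woods and Murphy argue for.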

Woods admits that one thing is missing from the new laws: the romance of Asimov’s fiction — the idea of a perfect, moral robot that sets engineers’ hearts fluttering.
