Posts Tagged ‘robots’

“Fantastic Voyage” comes to life – sort of

May 16, 2016
(Image: Raquel Welch in “Fantastic Voyage”)

Old codgers like me will remember the 1966 science fiction film “Fantastic Voyage”, in which a medical team aboard the submarine “Proteus” is shrunk to microscopic size and injected into the blood vessels of a brain-damaged scientist to try to save him. The ship is reduced to one micron in size, but the miniaturisation is temporary and the crew will revert to normal size after one hour. Naturally the team contains one bad guy. But the most memorable part of this film is that Raquel Welch is one of the team (an assistant).

But now comes news from MIT that “researchers at MIT, the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound.”

The new work, which the researchers are presenting this week at the International Conference on Robotics and Automation, builds on a long sequence of papers on origami robots from the research group of Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science.

“It’s really exciting to see our small origami robots doing something with potential important applications to health care,” says Rus, who also directs MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “For applications inside the body, we need a small, controllable, untethered robot system. It’s really difficult to control and place a robot inside the body if the robot is attached to a tether.”

Rules of killing need to be modified to cover drones and robots

May 27, 2014

Should a civilian operator of a killing drone be considered an armed or an unarmed combatant? Can such an operator be targeted in accordance with the Rules of War? Is the US targeting and killing of a US citizen by a drone attack lawful? Can a robot drone ethically be programmed to defend itself, automatically and without any human control, if such defence would require harm to other humans? Asimov’s three laws of robotics come to mind:

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order law.
Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with a higher-order law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with a higher-order law.

The ethics of killing now need to be revisited.

According to the New America Foundation:

  • The CIA drone campaign began in Yemen in 2002 and in Pakistan in 2004.
  • Drone strikes in Pakistan rose steadily after President Barack Obama took office in 2009, peaking at 122 in 2010.
  • Starting in 2011, strikes in Pakistan began to decline, while they spiked in Yemen, particularly as the Obama administration began using drones to support the Yemeni government’s battles against al-Qaeda-linked militants in 2012.
  • The civilian and “unknown” casualty rate from drone strikes has fallen steadily over the life of the program.
  • The casualty rate in Pakistan for civilians and “unknowns” — those who are not identified in news reports definitively as either militants or civilians — was around 40% under President George W. Bush. It has come down to about 7% under President Obama.
  • Only 58 known militant leaders have been killed in drone strikes in Pakistan, representing just 2% of the total deaths (see the quick check after this list).
  • In 2012, 2% of the drones’ victims were characterized as civilians in news reports and 9% were described in a manner that made it ambiguous whether they were militants or civilians.
  • In 2013, civilian casualties are at their lowest ever. That is partly the result of a sharply reduced number of drone strikes in Pakistan — 26 so far in 2013, compared with a record 122 in 2010 — and also more precise targeting.
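
A quick check that these figures hang together: if the 58 known militant leaders killed represent about 2% of all deaths, the implied total is roughly 58 / 0.02 ≈ 2,900 deaths, comfortably inside the 2,200 to 3,300 range from the UN survey quoted below.
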
US drone killings in Pakistan (New America Foundation)

According to a UN survey, civilians have been killed in 33 separate drone attacks around the world. In Pakistan, an estimated 2,200 to 3,300 people have been killed by drone attacks since 2004, 400 of whom were civilians. According to the latest figures from the Pakistani Ministry of Defense, 67 civilians have been killed in drone attacks in the country since 2008.

Of course the Rules of War are notoriously flexible and tend to follow the actions of the strong. They are not much in evidence in Syria. They were largely ignored in the invasion of Iraq. We have heard today about air attacks by the Ukrainian government on armed “rebels” who wish to secede in Donetsk.

From a KTH press release: In her recent thesis on the ethics of automation in war, Linda Johansson, a researcher in robot ethics at Sweden’s KTH Royal Institute of Technology, suggests that it is necessary to reconsider the international laws of war, and to begin examining whether advanced robots should be held accountable for their actions. …

She also questions the ethics of assigning drone operators the task of tracking a targeted person from a safe distance for days, perhaps even a week, before striking. “This is different from ordinary combat soldiers who face their opponents directly,” she says. “The post-traumatic stress syndrome that affects an operator may be just as severe as for a regular soldier.”

Currently drones are still operated remotely by a human being, but technological advancement is so rapid that full automation is more than just a grim science fiction fantasy.

Johansson sketches out a scenario to show how reaching that point presents other ethical questions:

“Soon we may be facing a situation where an operator controls two drones instead of one, on account of cost reasons,” Johansson says. “Add to that the human tendency to rely on technology. Now imagine a situation where very quick decisions must be made. It becomes easy to step out of the decision loop and hand over control to the robot or computer.

“Man becomes the weakest link.”

It could also be argued that robots are not entitled to defend themselves, since under the rules of war they are not in danger of losing their lives. “Does it mean that they have lost the right to kill human soldiers?” she asks.

Robots, especially drones, can also facilitate the conduct of “secret war”, with low transparency and minimal involvement of troops.

Linda Johansson’s research has resulted in a compilation of seven articles. In addition to autonomous systems in war, she has studied other aspects of robotics. One of the articles is about care-giver robots and the ethics around them. Two of her articles focus on the so-called “agent landscape” – whether and when advanced robots can be held responsible for their actions.

Applying Asimov’s Laws of Robotics

August 21, 2010

Isaac Asimov introduced his laws of robotics in the 1942 short story “Runaround” and collected his robot stories in “I, Robot” in 1950. The three laws he developed have become the de facto basis for most science fiction dealing with robots, and even for discussions in engineering:

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order law.
Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with a higher-order law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with a higher-order law.

(Image: Toyota robots – http://www.geekalerts.com/u/toyota-robots.jpg)

Later, as Asimov integrated his Robot series with his Foundation series, he added a Zeroth Law that outranks the other three:

Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
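
The “higher-order law” wording makes the hierarchy itself almost algorithmic: the Zeroth Law outranks the First, which outranks the Second, and so on down to self-preservation. As a toy illustration only (a Python sketch; the predicate names are invented here and nothing like this appears in Asimov), a robot choosing between candidate actions could compare which laws each action violates, highest-priority law first:

    # Toy sketch of Asimov's law hierarchy; every name here is invented.
    # Each predicate reports whether a proposed action breaks one law, and
    # the list orders them Zeroth-first, so comparing violation tuples
    # lexicographically makes a higher-order breach outweigh everything below.

    LAWS = [
        lambda a: a.get("harms_humanity", False),   # Zeroth Law
        lambda a: a.get("harms_human", False),      # First Law
        lambda a: a.get("disobeys_order", False),   # Second Law
        lambda a: a.get("endangers_self", False),   # Third Law
    ]

    def severity(action):
        """Violation profile of an action, highest-priority law first."""
        return tuple(law(action) for law in LAWS)

    def choose(actions):
        """Pick the candidate action whose violations are least severe."""
        return min(actions, key=severity)

    # A robot ordered into danger: obeying violates only the Third Law,
    # refusing violates the Second, so self-preservation yields to obedience.
    obey = {"name": "obey", "endangers_self": True}
    refuse = {"name": "refuse", "disobeys_order": True}
    print(choose([obey, refuse])["name"])   # -> obey

Asimov’s stories, of course, turn on exactly the cases where such a tidy ordering breaks down.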

A real-world counterpart to Asimov’s fictional robots is Eliza. Eliza was created in 1966 by Professor Joseph Weizenbaum of the Massachusetts Institute of Technology, who described it in “ELIZA — A Computer Program for the Study of Natural Language Communication Between Man and Machine”. It was initially programmed with about 240 lines of code to simulate a psychotherapist by answering statements with questions.
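
Eliza’s trick was keyword matching and pronoun reflection rather than any real understanding. A minimal sketch of the idea (toy patterns invented here; the original was written in MAD-SLIP, and its DOCTOR script was far richer) might look like this:

    import re

    # A tiny Eliza-style responder: match a keyword pattern, reflect the
    # pronouns, and answer a statement with a question. Toy rules only.

    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
        (re.compile(r"(.*)"),              "Can you elaborate on that?"),
    ]

    def reflect(fragment):
        """Swap first and second person, so 'my job' becomes 'your job'."""
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

    def respond(utterance):
        for pattern, template in RULES:
            m = pattern.match(utterance.strip())
            if m:
                return template.format(*(reflect(g) for g in m.groups()))

    print(respond("I feel anxious about my work"))
    # -> Why do you feel anxious about your work?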

Robot design has now advanced to the stage where engineers have to revisit Asimov’s laws for practical implementation. David Woods, professor of integrated systems engineering at Ohio State University, says: “The philosophy has been, ‘sure, people make mistakes, but robots will be better — a perfect version of ourselves.’ We wanted to write three new laws to get people thinking about the human-robot relationship in more realistic, grounded ways.” He addresses the practical issues in “Beyond Asimov: The Three Laws of Responsible Robotics” by Robin Murphy and David D. Woods, IEEE Intelligent Systems, July 2009, pp. 14-20.

“Go back to the original context of the stories,” Woods says, referring to “I, Robot” among others. “He’s using the three laws as a literary device. The plot is driven by the gaps in the laws — the situations in which the laws break down. For those laws to be meaningful, robots have to possess a degree of social intelligence and moral intelligence, and Asimov examines what would happen when that intelligence isn’t there.”

“His stories are so compelling because they focus on the gap between our aspirations about robots and our actual capabilities. And that’s the irony, isn’t it? When we envision our future with robots, we focus on our hopes and desires and aspirations about robots — not reality.”

In reality, engineers are still struggling to give robots basic vision and language skills. These efforts are hindered in part by our lack of understanding of how these skills are managed in the human brain. We are far from a time when humans can agree on a universal ethical or moral code and even further away from imbuing such a code into robots.

Woods and his coauthor, Robin Murphy of Texas A&M University, composed three laws that try to put the responsibility back on humans.

The three new laws that Woods and Murphy propose are:

  • A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
  • A robot must respond to humans as appropriate for their roles.
  • A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.

Woods admits that one thing is missing from the new laws: the romance of Asimov’s fiction — the idea of a perfect, moral robot that sets engineers’ hearts fluttering.

