Scientists Trying To Teach Morality To A Robot

on Sunday, 18 May 2014

A group of researchers from Tufts University, Brown University and Rensselaer Polytechnic Institute is collaborating with the US Navy in a multi-year effort to explore how they might create robots endowed with their own sense of morality. If they are successful, they will create an artificial intelligence able to autonomously assess a difficult situation and then make complex ethical decisions that can override the rigid instructions it was given.

Seventy-two years ago, science fiction writer Isaac Asimov introduced "three laws of robotics" that could guide the moral compass of a highly advanced artificial intelligence. Sadly, given that today's most advanced AIs are still rather brittle and clueless about the world around them, one could argue that we are nowhere near building robots that are even able to understand these rules, let alone apply them.
A team of researchers led by Prof. Matthias Scheutz at Tufts University is tackling this very difficult problem by breaking human moral competence down into its basic components and developing a framework for human moral reasoning. The team will then attempt to model this framework in an algorithm that could be embedded in an artificial intelligence. The infrastructure would allow the robot to override its instructions in the face of new evidence and to justify its actions to the humans who control it.
"Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree," says Scheutz. "The question is whether machines – or any other artificial system, for that matter – can emulate and exercise these abilities."
For instance, a robot medic could be ordered to transport urgently needed medication to a nearby facility and encounter a person in critical condition along the way. The robot's "moral compass" would allow it to assess the situation and decide autonomously whether to stop and assist the person or carry on with its original mission.
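
None of the project's software has been published, so the following Python sketch is purely hypothetical: the names (Situation, Decision, decide, the urgency score and threshold) are invented here only to illustrate the kind of override-and-justify behaviour the medic example describes.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    description: str
    urgency: float  # 0.0 (nothing at stake) .. 1.0 (life-threatening)

@dataclass
class Decision:
    action: str
    justification: str  # explanation the robot could offer its operators

def decide(mission: str, situation: Situation, override_threshold: float = 0.8) -> Decision:
    """Return either the original mission or an override, with a human-readable justification."""
    if situation.urgency >= override_threshold:
        return Decision(
            action=f"suspend mission '{mission}' and assist",
            justification=(
                f"Encountered '{situation.description}' with urgency {situation.urgency:.2f}, "
                f"above the override threshold {override_threshold:.2f}; assisting takes priority."
            ),
        )
    return Decision(
        action=f"continue mission '{mission}'",
        justification=(
            f"'{situation.description}' (urgency {situation.urgency:.2f}) "
            f"does not outweigh the original mission."
        ),
    )

# The article's scenario: a robot medic carrying medication meets a person in critical condition.
print(decide("deliver medication to field hospital",
             Situation("person in critical condition", urgency=0.95)))
```
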
If Asimov's novels have taught us anything, it's that no rigid, pre-programmed set of rules can account for every possible scenario, as something unforeseeable is bound to happen sooner or later. Scheutz and colleagues agree, and have devised a two-step process to tackle the problem.
In their vision, every decision the robot makes would first go through a preliminary ethical check, using a system similar to those found in the most advanced question-answering AIs, such as IBM's Watson. If that check cannot resolve the situation, the robot would fall back on the system Scheutz and colleagues are developing, which attempts to model the complexity of human morality.
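
The article gives no implementation details, so purely as an assumption-laden sketch, the two-step process might be structured as a fast first-pass check that escalates undecided cases to a slower, more deliberative fallback. Every function name below is hypothetical.

```python
from typing import Callable, Optional

# Stage 1: a quick check, analogous in spirit to a question-answering system,
# that either returns a verdict or admits it cannot decide.
def quick_ethical_check(proposed_action: str) -> Optional[bool]:
    clearly_forbidden = {"abandon patient en route"}
    clearly_permitted = {"deliver medication"}
    if proposed_action in clearly_forbidden:
        return False
    if proposed_action in clearly_permitted:
        return True
    return None  # undecided: escalate to stage 2

# Stage 2: a placeholder for the richer model of human moral reasoning
# that Scheutz and colleagues are working toward.
def deliberative_moral_reasoning(proposed_action: str) -> bool:
    # A real system would weigh duties, consequences and social conventions;
    # this stand-in simply defaults to caution.
    return False

def is_permissible(proposed_action: str,
                   stage_one: Callable[[str], Optional[bool]] = quick_ethical_check,
                   stage_two: Callable[[str], bool] = deliberative_moral_reasoning) -> bool:
    verdict = stage_one(proposed_action)
    return verdict if verdict is not None else stage_two(proposed_action)

print(is_permissible("deliver medication"))   # resolved by the quick check
print(is_permissible("reroute around convoy"))  # escalated to the deliberative stage
```
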
As the project is being developed in collaboration with the US Navy, the technology could find its first application in medical robots designed to assist soldiers on the battlefield.
