Robot that Chooses to Inflict Pain Sparks Debate about AI Systems

June 19, 2016

A robot built by roboticist Alexander Reben of the University of California, Berkeley uses AI to decide whether or not to inflict pain.

The robot is meant to spark a debate about whether an AI system can get out of control, reminiscent of the Terminator. Its design is deliberately simple, built to serve a single purpose: to decide whether or not to inflict pain. Reben published the work in a scientific journal to prompt discussion of whether artificially intelligent robots could get out of hand if given the opportunity.

“The real concern about AI is that it gets out of control,”

he said.

“[The tech giants] are saying it’s way out there, but let’s think about it now before it’s too late. I am proving that [harmful robots] can exist now. We absolutely have to confront it.”

Reben is exploring the very real possibility that a robot running an algorithm that learns from experience and human interaction could pick up the negative traits of some of the humans it encounters, a potential path by which AI systems become dangerous.

In practice, when a finger is placed near the robot, it may prick it and draw blood. Reben explains that he has no idea when the robot will strike.
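Reben has not published the robot's decision logic, but a minimal sketch (all names and parameters hypothetical) shows how a trigger can be made unpredictable even to its creator: the choice to strike depends on a random draw rather than any fixed rule.

```python
import random

def decide_to_prick(finger_detected: bool, aggression: float = 0.3) -> bool:
    """Hypothetical decision rule: when a finger is present, prick it
    only some of the time, based on a random draw the creator cannot
    predict in advance."""
    if not finger_detected:
        return False
    return random.random() < aggression
```

Over many encounters, roughly the `aggression` fraction would end in a prick, but no individual outcome can be foreseen.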

“The robot makes a decision that I as a creator cannot predict,”

he said,

“I don’t know who it will or will not hurt. It’s intriguing, it’s causing pain that’s not for a useful purpose – we are moving into an ethics question, robots that are specifically built to do things that are ethically dubious.”

The machine cost about US$200 to build and will not be sold at retail. Its primary purpose is to explore the philosophy of the three laws of robotics proposed by Isaac Asimov in 1942, the first of which states that a robot may not hurt a human. Reben's research led him to conclude that an AI "kill switch," similar to the one currently being developed by engineers from Google's artificial intelligence division, DeepMind, and Oxford University, could become incredibly important in the near future.

Reben’s robot pricking a finger [Image Source: Alexander Reben]

The major concern is ensuring that a robot’s algorithm is designed in such a way that it cannot override the kill switch to prevent itself from being turned off, an incredibly dangerous scenario. Current robots are designed for practical purposes, but as they gain more intelligence and more human-like learning algorithms, it will become imperative to put significant safety measures in place to prevent a robot from turning into a Terminator, pricking your finger, or worse.
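The core idea behind the DeepMind and Oxford kill-switch work is that a learning agent should gain no incentive to resist being switched off. A toy sketch of that principle (hypothetical names, not the actual DeepMind implementation): the interruption is checked outside the agent's reward and learning loop, so "being shut down" never enters the signal the agent optimizes.

```python
def run_episode(policy, interrupt_requested, max_steps=100):
    """Toy control loop with an external kill switch.

    The interruption check sits outside the agent's reward stream:
    the policy never observes a reward or penalty for being stopped,
    so learning gives it no incentive to resist the switch.
    """
    history = []
    for step in range(max_steps):
        if interrupt_requested(step):  # human-held kill switch
            break                      # no reward recorded for this event
        action = policy(step)
        history.append(action)
    return history

# Example: a trivial policy, interrupted externally at step 5.
actions = run_episode(policy=lambda s: "move",
                      interrupt_requested=lambda s: s >= 5)
# actions contains the 5 steps taken before the interruption
```

The design choice worth noting is structural: because the switch lives outside the loop that produces training data, an agent trained on `history` alone cannot learn to override it.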

SEE ALSO: Artificial Intelligence Should be Protected by Human Rights