Two professors of psychology offer an interesting take on the classic trolley problem: would you sacrifice a robot to save a human life? The answer might surprise you.
Is Robot Life Worth More Than Human Life?
In a paper published in the journal Social Cognition, psychology professors Sari Nijssen of Radboud University in Nijmegen, the Netherlands, and Markus Paulus of Ludwig-Maximilians-Universitaet (LMU) in Munich present the results of an experiment designed to test the moral stances people take toward robots in different circumstances.
The researchers presented the classic trolley problem to participants: would they be prepared to endanger the life of a single individual in order to save several injured people?
In the study, the individual to be sacrificed to save the many was either a human, a humanoid robot with varying degrees of anthropomorphism, or a robot that was unmistakably a machine. Some scenarios also presented the robot as compassionate toward others, or as a being with its own cognition and sentience.
The trolley problem is normally used to assess how sacrosanct one considers human life: the group may die as the result of an accident, something that happens thousands of times a day, but someone must actively make the decision to kill the individual.
Bad News For The Humans
The researchers found that the more a robot is given human characteristics, the less likely people are to sacrifice it to save human life.
Presenting the participants with stories about the positive personality traits of the robot not only made them less likely to sacrifice it to save humans, but also led a significant number of participants to express a readiness to take an anonymous human life in order to save the life of the robot.
This has profound moral implications for our technology going forward. A major trend on display at CES 2019 was how hundreds of vendors were deliberately trying to imbue their robots with human-like personalities in order to convince consumers to buy them, and, if what they had on display is any guide, they have come a very long way in doing so.
This trend could present serious moral challenges in the future if Nijssen and Paulus are right. "The more the robot was depicted as human, and in particular the more feelings were attributed to the machine, the less our experimental subjects were inclined to sacrifice it," Paulus said.
"This result indicates that our study group attributed a certain moral status to the robot. One possible implication of this finding is that attempts to humanize robots should not go too far. Such efforts could come into conflict with their intended function: to be of help to us."