A group of Italian researchers programmed a robot to "think out loud" in a bid to build trust between robots and humans. In a study, published today, April 21, in the journal iScience, the researchers explain how their work could help users to understand why robots make certain decisions.
Why do we talk to ourselves? Depending on the circumstances, it can serve a useful purpose, allowing us to organize our thought process, make rational decisions, and vocalize our state of mind to others.
It can even play a role in enabling us to carry out basic functions — a study from 2001 showed how blocking our inner monologue by repeatedly saying random words out loud impeded volunteers' ability to carry out simple tasks.
Robots self-speaking for human benefit
Using the principles of self-speak, researchers from the University of Palermo programmed SoftBank's Pepper robot to voice its "thinking process" while carrying out a series of tasks.
"If you were able to hear what the robots are thinking, then the robot might be more trustworthy," co-author Antonio Chella explained in a press release, describing first author Arianna Pipitone's idea that launched the study at the University of Palermo.
"The robots will be easier to understand for laypeople, and you don't need to be a technician or engineer. In a sense, we can communicate and collaborate with the robot better," Chella continued.
Can self-speaking robots increase user trust?
To test their idea, the researchers asked people to set a dinner table with Pepper the robot, using etiquette rules as a guideline. They found that, with the help of Pepper's vocalized inner talk, the robot was more efficient at solving problems.
The researchers compared Pepper's performance with and without the speech function and found that it had a higher task-completion rate when vocalizing its "thoughts."
"People were very surprised by the robot's ability," first author Arianna Pipitone explained. "The approach makes the robot different from typical machines because it has the ability to reason, to think. Inner speech enables alternative solutions for the robots and humans to collaborate and get out of stalemate situations."
In one example, the researchers explained, a user asked Pepper to place a napkin in a spot that contradicted the etiquette guidelines. Pepper asked itself a series of questions and concluded that the user might be confused.
Ultimately, however, the human command came first in this experiment: "Ehm, this situation upsets me. I would never break the rules, but I can't upset him, so I'm doing what he wants," Pepper said.
In this instance, Pepper's vocalized inner voice would allow the user to know that the robot solved a dilemma by prioritizing the human command.
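The conflict-resolution behavior described above can be pictured as a simple rule-versus-command check whose reasoning steps are spoken aloud. Below is a minimal sketch of that idea; all names (`ETIQUETTE`, `resolve_request`) and the toy rule table are hypothetical, and the study's actual cognitive architecture is far richer than this.

```python
# Minimal sketch of an inner-speech style decision loop.
# Hypothetical names throughout; not the researchers' actual code.

ETIQUETTE = {"napkin": "left of the plate"}  # toy etiquette rule


def resolve_request(item: str, requested_spot: str) -> tuple[str, list[str]]:
    """Return the chosen placement plus the vocalized 'inner speech'."""
    inner_speech = []
    proper_spot = ETIQUETTE.get(item)
    if proper_spot is None or requested_spot == proper_spot:
        # No conflict with etiquette: just carry out the request.
        inner_speech.append(f"The {item} goes {requested_spot}; no conflict.")
        return requested_spot, inner_speech
    # The request conflicts with etiquette: reason out loud, then
    # prioritize the human command, as Pepper did in the experiment.
    inner_speech.append(f"Ehm, etiquette says the {item} goes {proper_spot}.")
    inner_speech.append("I would never break the rules, but I can't upset him,")
    inner_speech.append("so I'm doing what he wants.")
    return requested_spot, inner_speech


spot, speech = resolve_request("napkin", "right of the plate")
```

The key design point, mirrored from the article, is that the vocalized reasoning is a side product of the decision itself: the same steps that resolve the conflict are the ones the user hears.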
The researchers said that situations such as the one outlined above could help lay users understand the reasoning behind robot actions as well as build human-robot trust in a world that is increasingly automated.
A new framework for robot-human collaboration
The pandemic has led to a "new normal" that increasingly relies on automated services. Even before it started, however, the International Federation of Robotics reported that worldwide sales of professional-service robots rose 32% to $11.2 billion between 2018 and 2019.
The scientists do concede in their study that the inner voice makes Pepper slower: robots are typically programmed to carry out their internal "thought process" in a matter of milliseconds before getting straight to the task at hand.
Still, they say their work provides a framework for future investigations into the way such self-speak could be used to enhance trust in our mechanized counterparts.