New Models Sense Humans' Trust in Intelligent Machines
As robots and machines become more entwined with our daily lives, the field of improving human-robot interactions is growing. Recent work in the field has developed new “classification models” that gauge how much humans trust the intelligent machines they collaborate with.
The models will go a long way to helping improve the quality of interactions and teamwork.
The recent work by assistant professor Neera Jain and associate professor Tahira Reid, from Purdue University’s School of Mechanical Engineering, is just one step in the overall goal of designing intelligent machines capable of changing their behavior to enhance their human teammates' trust in them.
Robots and humans need to get along
“Intelligent machines, and more broadly, intelligent systems are becoming increasingly common in the everyday lives of humans,” Jain said.
“As humans are increasingly required to interact with intelligent systems, trust becomes an important factor for synergistic interactions.”
Building that trust will improve the efficiency of human-machine interactions; currently, distrust in machines can result in system breakdowns.
Purdue University gives the example of aircraft pilots and industrial workers who are routinely interacting with automated systems but may override the system if they intuit that the system is faltering.
“It is well established that human trust is central to successful interactions between humans and machines,” Reid said.
The researchers have developed two types of “classifier-based empirical trust sensor models,” which put them one step closer to improving the relationship between humans and intelligent machines. The models gather data from human subjects in two ways to gauge trust.
Brainwaves reveal trust in real time
The approach monitors brainwave patterns and also measures changes in the electrical characteristics of the skin, providing psychophysiological “feature sets” correlated with trust. For the study, 45 human subjects wore EEG headsets and a device on one hand to measure galvanic skin response.
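To make the idea of a psychophysiological “feature set” concrete, here is a minimal, hypothetical sketch (not the study's actual feature extraction): it summarizes one short window of raw EEG and galvanic skin response (GSR) samples into a handful of numeric features that a classifier could consume.

```python
# Hypothetical illustration (not the study's actual pipeline): derive
# simple psychophysiological features from raw EEG and GSR samples.
import statistics

def window_features(eeg, gsr):
    """Summarize one time window of raw samples into a feature vector.
    eeg: list of voltage samples from one EEG channel
    gsr: list of skin-conductance samples
    """
    return {
        "eeg_mean": statistics.fmean(eeg),
        "eeg_power": statistics.fmean(s * s for s in eeg),  # signal energy
        "gsr_level": statistics.fmean(gsr),                 # tonic level
        "gsr_slope": (gsr[-1] - gsr[0]) / len(gsr),         # phasic drift
    }

# Example: a roughly flat EEG window and a rising GSR window.
feats = window_features([0.1, -0.2, 0.15, -0.05], [2.0, 2.1, 2.3, 2.6])
```

In a real system, windows like this would be computed continuously as the signals stream in, yielding a feature vector for every time step.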
One model uses the same set of psychophysiological features for all 45 participants, while the other is tailored to each individual. The latter improves accuracy but requires substantially more training time.
The two models had mean accuracies of 71.22 percent and 78.55 percent, respectively. This is the first time EEG has been used to gather trust-related data in real time.
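The general-versus-individualized trade-off can be illustrated with a toy experiment. The sketch below is purely hypothetical (a simple nearest-centroid classifier on synthetic feature vectors, not the authors' actual models): a "general" model is trained on data pooled across all subjects, while an "individualized" model is trained per subject, which lets it adapt to each person's physiological baseline.

```python
# Hypothetical sketch (not the authors' actual models): contrast a
# general classifier trained on pooled data with per-subject classifiers,
# using synthetic feature vectors and a nearest-centroid rule.
import random

random.seed(0)

def make_subject(shift):
    """Synthetic per-subject data: 4-D feature vectors labeled 0 (distrust)
    or 1 (trust). `shift` models individual baseline differences."""
    data = []
    for label in (0, 1):
        for _ in range(30):
            base = 1.0 if label else -1.0
            vec = [base + shift + random.gauss(0, 0.8) for _ in range(4)]
            data.append((vec, label))
    return data

def centroids(data):
    """Mean feature vector per class."""
    sums, counts = {0: [0.0] * 4, 1: [0.0] * 4}, {0: 0, 1: 0}
    for vec, label in data:
        counts[label] += 1
        sums[label] = [s + v for s, v in zip(sums[label], vec)]
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in (0, 1)}

def predict(cents, vec):
    """Assign the class whose centroid is nearest (squared distance)."""
    dist = lambda lab: sum((a - b) ** 2 for a, b in zip(cents[lab], vec))
    return min((0, 1), key=dist)

def accuracy(cents, data):
    return sum(predict(cents, v) == y for v, y in data) / len(data)

subjects = [make_subject(random.gauss(0, 0.6)) for _ in range(5)]

# General model: one set of centroids from all subjects' pooled data.
general = centroids([pt for s in subjects for pt in s])
gen_acc = sum(accuracy(general, s) for s in subjects) / len(subjects)

# Individualized models: centroids fit to each subject separately.
ind_acc = sum(accuracy(centroids(s), s) for s in subjects) / len(subjects)

print(f"general model accuracy:        {gen_acc:.2f}")
print(f"individualized model accuracy: {ind_acc:.2f}")
```

Because the per-subject model re-centers on each individual's baseline, it typically matches or beats the pooled model on that individual, mirroring the accuracy gap reported above, at the cost of collecting and fitting training data for every new user.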
“We are using these data in a very new way,” Jain said. “We are looking at it in sort of a continuous stream as opposed to looking at brain waves after a specific trigger or event.”
“We are interested in using feedback-control principles to design machines that are capable of responding to changes in human trust level in real time to build and manage trust in the human-machine relationship,” Jain explained.
“In order to do this, we require a sensor for estimating human trust level, again in real-time. The results presented in this paper show that psychophysiological measurements could be used to do this.”
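The feedback-control idea Jain describes can be sketched as a simple loop: a trust sensor produces a real-time estimate, and the machine adjusts its behavior accordingly. The function names and thresholds below are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a trust feedback loop (thresholds and responses
# are illustrative assumptions, not from the paper).

def adjust_behavior(trust_estimate, low=0.4, high=0.7):
    """Map an estimated trust level in [0, 1] to a machine response."""
    if trust_estimate < low:
        # Trust is low: explain actions and reduce autonomy to rebuild it.
        return "increase explanations, reduce autonomy"
    if trust_estimate > high:
        # Trust may be excessive: guard against operator complacency.
        return "prompt operator checks"
    return "maintain current behavior"

# Simulated stream of trust estimates from a sensor model.
for t in [0.2, 0.55, 0.9]:
    print(t, "->", adjust_behavior(t))
```

Note that the loop responds to over-trust as well as distrust; over-reliance on automation is a failure mode alongside the system overrides described earlier.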
The work is an important step toward improving future human-robot interactions.
“A first step toward designing intelligent machines that are capable of building and maintaining trust with humans is the design of a sensor that will enable machines to estimate human trust level in real time,” Jain continued.
The study has been published in a special issue of the Association for Computing Machinery’s Transactions on Interactive Intelligent Systems. The journal’s special issue is titled "Trust and Influence in Intelligent Human-Machine Interaction."