In an unusual role reversal, a new Johns Hopkins University study asked whether humans can anticipate the mistakes computers make -- by getting people to think like computers do.
Think like a computer
"Most of the time, research in our field is about getting computers to think like people," says senior author Chaz Firestone, an assistant professor in Johns Hopkins' Department of Psychological and Brain Sciences. "Our project does the opposite -- we're asking whether people can think like computers."
Artificial intelligence systems have long outperformed people at calculation and at storing large quantities of information. Where they have historically failed is at recognizing everyday objects.
Recently, however, neural networks that loosely mimic the human brain have improved machines' ability to identify objects, enabling technological advances in applications such as autonomous cars and facial recognition.
Yet a critical blind spot remains: it is possible to deliberately craft images -- called "adversarial" or "fooling" images -- that neural networks cannot correctly recognize.
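To give a flavor of how such fooling inputs can be made, here is a minimal toy sketch in Python. It uses a hypothetical logistic-regression "classifier" with made-up weights (not the networks or images from the study) and nudges an input in the direction that most lowers its score -- the same gradient-sign idea behind many adversarial attacks, exaggerated here so the flip is visible:

```python
import numpy as np

# Toy "classifier": logistic regression with fixed, made-up weights.
# (Hypothetical example -- not the models or images used in the study.)
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return the model's probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A clean input the model classifies confidently as class 1.
x = np.array([2.0, -1.0, 1.0])
p_clean = predict(x)          # confidently class 1

# Gradient-sign perturbation: push each feature in the direction that
# most decreases the class-1 score. For this linear model, the gradient
# of the logit with respect to x is simply w. The step size eps is
# deliberately large here so the toy example visibly flips the label.
eps = 3.0
x_adv = x - eps * np.sign(w)
p_adv = predict(x_adv)        # now classified as class 0

print(p_clean, p_adv)
```

In real attacks on image classifiers the perturbation is kept tiny, so the altered image looks unchanged to a person while the network's answer flips entirely.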
The new study sought to evaluate whether humans would label these tricky images the same way the computers did.
"These machines seem to be misidentifying objects in ways humans never would," Firestone says. "But surprisingly, nobody has really tested this. How do we know people can't see what the computers did?"
To test this, Firestone and his team asked 1,800 test subjects to "think like a machine." Because such machines choose from only a small vocabulary of labels, Firestone showed people fooling images that had already tricked computers and gave them the same kinds of labeling options the machine had.
Faced with these limited options, humans tended to make the same labeling choices as the computers, agreeing with the computer's answer 75 percent of the time.
When researchers then gave people a choice between the computer's favorite answer and its next-best guess, 91 percent of participants again agreed with the machine's first choice.
"We found if you put a person in the same circumstance as a computer, suddenly the humans tend to agree with the machines," Firestone says. "This is still a problem for artificial intelligence, but it's not like the computer is saying something completely unlike what a human would say."
The study is published in the journal Nature Communications.