AI Isn't Good at Detecting Liars through Their Facial Expressions

A group of researchers has tested how successful AI is at detecting emotions in our faces.

AI technologies are increasingly being used to shape public policy, business, and people's lives. AI court judges are helping to decide criminals' sentences, and AI is being used to catch murder suspects and even shape your insurance policy.

That's why the fact that computers aren't great at detecting lies should be a worry.

Researchers from the USC Institute for Creative Technologies recently put AI's capability for lie detection to the test, and the test results left a lot to be desired.


Putting algorithms to the test

The USC Institute for Creative Technologies research team put emotion-reading algorithms through basic truth-detection tests and found that the AIs failed them.

Firstly, the team addressed the fact that our facial expressions might not reveal as much about what we are thinking as people believe:

"Both people and so-called 'emotion reading' algorithms rely on a folk wisdom that our emotions are written on our face," Jonathan Gratch, director for virtual human research at ICT said in a press release.

"This is far from the truth. People smile when they are angry or upset, they mask their true feelings, and many expressions have nothing to do with inner feelings, but reflect conversational or cultural conventions."

Gratch and his colleagues presented their research findings at yesterday's 8th International Conference on Affective Computing and Intelligent Interaction in Cambridge, England.

Reading duplicity

Of course, we all know that people can lie without showing obvious signs of it on their face. Take your average politician, for example: doing so is practically a job requirement.

People often express the opposite of what they feel in order to stick to conventions or to outright deceive someone.

The problem is that algorithms aren't so great at catching this duplicity, despite the fact that they are increasingly being used to read human emotions.

Algorithms today are being used in focus groups and marketing campaigns, to screen loan applicants, and to vet job candidates. The Department of Homeland Security is even investing in these types of algorithms to predict potential national security threats.


"We're trying to undermine the folk psychology view that people have that if we could recognize people's facial expressions, we could tell what they're thinking," said Gratch, who also works as a professor of psychology.

"We're using naïve assumptions about these techniques because there's no association between expressions and what people are really feeling based on these tests."

How did they prove this?

Gratch, along with Su Lei and Rens Hoegen at ICT and Brian Parkinson and Danielle Shore at the University of Oxford, carried out an examination of spontaneous facial expressions in different social situations.

In one study, the team used a game they designed in which 700 people played for money. While the subjects played, the researchers captured how people's expressions impacted their decisions as well as how much money they went on to win.


Next, the research team asked subjects to answer questions about their behavior. For example, they asked the subjects whether they often bluffed, whether they used facial expressions to gain an advantage, and whether their expressions matched their feelings.

The team then examined the relationships between spontaneous facial expressions and key moments during the game. Smiles were the most common facial expression, regardless of what participants were actually feeling. Players were also fairly inaccurate at reading each other's emotions.
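The study doesn't publish its analysis code, but the core question it asks can be sketched in a few lines. The snippet below is a hypothetical illustration rather than the researchers' actual pipeline: it assumes each key game moment comes with a machine-scored smile intensity and a self-reported feeling score, and it simply checks how strongly the two track each other.

```python
import numpy as np

# Hypothetical data: one row per key game moment.
# smile_intensity: 0-1 score from an (assumed) expression detector.
# reported_valence: -1 (upset) to +1 (pleased), from the player's self-report.
smile_intensity = np.array([0.9, 0.8, 0.7, 0.9, 0.6, 0.8, 0.2, 0.7, 0.9, 0.5])
reported_valence = np.array([0.8, -0.6, 0.4, -0.7, 0.1, -0.5, -0.2, 0.6, -0.4, 0.3])

# Pearson correlation between what the face "says" and what the player reports.
r = np.corrcoef(smile_intensity, reported_valence)[0, 1]
print(f"correlation between smile intensity and reported feeling: r = {r:.2f}")

# A value near zero would echo the study's finding: people smile at key
# moments regardless of whether they are actually pleased.
```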

"These discoveries emphasize the limits of technology use to predict feelings and intentions," Gratch said. "When companies and governments claim these capabilities, the buyer should beware because often these techniques have simplistic assumptions built into them that have not been tested scientifically."


Commonly used emotion-reading algorithms typically decontextualize what they are looking at, the researchers argue.
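To make that criticism concrete, here is a toy sketch of what a "decontextualized" reading looks like. It is an illustration of the researchers' point, not any vendor's actual system: the same smile gets the same emotion label no matter what is happening in the game.

```python
# A naive, context-free mapping from an expression label straight to an emotion.
NAIVE_EMOTION_MAP = {"smile": "happy", "frown": "angry", "neutral": "calm"}

# Hypothetical observations: the same smile appears while winning, while
# losing, and while bluffing an opponent.
observations = [
    {"expression": "smile", "context": "just won the round"},
    {"expression": "smile", "context": "just lost money"},
    {"expression": "smile", "context": "bluffing an opponent"},
]

for obs in observations:
    naive_label = NAIVE_EMOTION_MAP[obs["expression"]]
    print(f"context: {obs['context']:<25} naive reading: {naive_label}")

# The context-free reading is "happy" in every case, even though the player's
# actual state plausibly differs in each one -- the gap the researchers describe.
```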

It seems that lie detection in AI is a long way away from going mainstream.
