A.I. face study reveals a shocking new tipping point for humans
Computers have become very, very good at generating photorealistic images of human faces.
What could possibly go wrong?
A study published last week in the academic journal Proceedings of the National Academy of Sciences confirms just how convincing “faces” produced by artificial intelligence can be.
In that study, more than 300 research participants were asked to determine whether a supplied image was a photo of a real person or a fake generated by an A.I. The human participants got it right less than half the time. That’s worse than flipping a coin.
The results of this study reveal a tipping point for humans that should feel shocking to anybody who thinks they are savvy enough to spot a deepfake when it's put up against the genuine article.
While the researchers say this feat of engineering “should be considered a success for the fields of computer graphics and vision,” they also “encourage those developing these technologies to consider whether the associated risks are greater than their benefits,” citing dangers that range from disinformation campaigns to the nonconsensual creation of synthetic porn.
“[W]e discourage the development of technology simply because it is possible,” they contend.
Neural networks are getting incredibly good
The researchers behind this study started with 400 synthetic faces generated by an open-source A.I. program made by the technology giant NVIDIA. The program is what’s called a generative adversarial network, meaning it uses a pair of neural networks to create the images.
The “generator” starts by creating a completely random image. The “discriminator” uses a huge set of real photos to give feedback to the generator. As the two neural networks go back and forth, the generator improves each time, until the discriminator can’t tell the real images from the fake ones.
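The back-and-forth described above can be caricatured in a few lines of code. This is a deliberately tiny sketch, not NVIDIA's actual system: here the “images” are single numbers, the generator is just a learned shift applied to noise, and the discriminator is a one-feature logistic classifier. The adversarial structure, though, is the same: each network improves only because the other one pushes back.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a 1-D Gaussian standing in for real photos.
real_data = rng.normal(loc=4.0, scale=1.0, size=1000)

# Toy generator: turns random noise into samples via one learned shift.
gen_shift = 0.0  # starts out producing essentially random output


def generate(n, shift):
    return rng.normal(loc=0.0, scale=1.0, size=n) + shift


# Toy discriminator: logistic classifier on a single feature.
w, b = 0.0, 0.0


def discriminate(x, w, b):
    return 1.0 / (1.0 + np.exp(-(w * x + b)))  # estimated P(sample is real)


lr = 0.05
for step in range(2000):
    fake = generate(64, gen_shift)
    real = rng.choice(real_data, 64)

    # Discriminator update: push P(real) toward 1 on real, toward 0 on fake.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = discriminate(x, w, b)
        grad = p - label              # derivative of log-loss w.r.t. the logit
        w -= lr * np.mean(grad * x)
        b -= lr * np.mean(grad)

    # Generator update: nudge output in whichever direction the
    # discriminator currently scores as more "real".
    fake = generate(64, gen_shift)
    p = discriminate(fake, w, b)
    gen_shift += lr * np.mean((1.0 - p) * np.sign(w))

# The generator's shift should drift to the vicinity of 4.0, the real
# data's mean, at which point the discriminator is back to guessing.
print(round(gen_shift, 2))
```

Once the fake and real distributions overlap, the discriminator's feedback carries almost no signal, which is exactly the endpoint the researchers describe: a generator whose output the discriminator can no longer tell apart from the real thing.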
As it turns out, humans aren’t any better.
Three experiments show surprising results
For this study, psychologists built a gender-, age-, and racially inclusive sample of 400 synthetic images that NVIDIA's A.I. had created. It comprised 200 men and 200 women, with 100 faces in each of four racial categories: Black, white, East Asian, and South Asian. For each of those synthetic faces, the researchers chose a demographically similar image from the discriminator's training data.
In the first experiment, more than 300 participants looked at a sample of 128 faces and said if they thought each one was real or fake. They got it right just 48.2 percent of the time.
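To see why 48.2 percent is meaningfully below a coin flip rather than just noise, here is a rough back-of-the-envelope check. The participant count below is an assumption for illustration (the article says only “more than 300”), and real judgments of shared faces aren’t fully independent, so treat this as intuition rather than the paper’s actual statistics.

```python
import math

# Illustrative assumptions: ~315 participants, 128 faces each,
# every judgment treated as an independent coin flip.
participants = 315
faces_per_participant = 128
n = participants * faces_per_participant   # ~40,000 judgments

p_hat = 0.482    # observed accuracy
p0 = 0.5         # chance level

se = math.sqrt(p0 * (1 - p0) / n)   # standard error under pure guessing
z = (p_hat - p0) / se               # how many standard errors below chance

print(n, round(se, 4), round(z, 1))  # → 40320 0.0025 -7.2
```

Under these simplifying assumptions the result sits roughly seven standard errors below chance: with tens of thousands of judgments, even a 1.8-point shortfall is very unlikely to be a fluke.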
The participants didn’t have an equally hard time with all the faces they looked at, though. They did worst at analyzing the white faces, probably because the A.I.’s training data included far more photos of white people. More data means better renderings.
In the second experiment, a new batch of humans got a little bit of help. Before assessing the images, those participants got a short tutorial with clues about how to spot a computer-generated face. Then they started looking at images. After each one, they learned if they’d guessed right or wrong.
The participants in this experiment did a bit better, with an average score of 59.0 percent. Interestingly, all of the improvement seemed to be from the tutorial, rather than learning from the feedback. The participants actually did slightly worse during the second half of the experiment than during the first half.
In the final experiment, participants were asked to rate how trustworthy they found each of the 128 faces on a scale of one to seven. In a stunning result, they said that, on average, the artificial faces seemed 7.7 percent more trustworthy than the faces of real people.
Taken together, these results lead to the conclusion that A.I.-generated faces are “indistinguishable” from, and even rated as more trustworthy than, real faces, the researchers say.
The implications could be huge
These results point to a future that holds the potential for some strange situations involving recognition, memory, and a complete flyover of the Uncanny Valley.
They mean that “[a]nyone can create synthetic content without specialized knowledge of Photoshop or CGI,” says Lancaster University psychologist Sophie Nightingale, a co-author on the study.
The researchers list a number of nefarious ways people might use these “deep fakes” that are virtually indistinguishable from real images. The technology, which works similarly for video and audio, could make for extraordinarily convincing misinformation campaigns. Take the current situation in Ukraine, for example. Imagine how quickly a video showing Vladimir Putin — or, for that matter, Joe Biden — declaring war on a long-time adversary would circulate across social platforms. It could be very hard to convince people that what they saw with their own eyes wasn't real.
Another major concern is synthetic pornography that shows a person performing intimate acts that they never actually did.
The technology also has big implications for real photos.
“Perhaps most pernicious is the consequence that in a digital world in which any image or video can be faked, the authenticity of any inconvenient or unwelcome recording can be called into question," the researchers say.