Adobe Trains AI to Detect Deepfakes and Photoshopped Images
At a time when facial-manipulation tools and deepfakes are more advanced and widespread than ever, American software company Adobe has trained an AI to distinguish these fakes from original facial photos.
A team of researchers from Adobe and UC Berkeley in California, U.S., has worked together to create this tool.
The aim of their work is to restore faith in digital media in an era of countless fakes and touch-ups.
How did the team train AI?
The team studied Adobe's Photoshop feature called Face Aware Liquify, which lets users alter people's faces, eyes and mouths.
They then trained a convolutional neural network (CNN), a type of model used to analyze visual imagery, to pick up the changes made to the faces in the images.
The neural network detected the fake faces with 99 percent accuracy. Compared with the 53 percent achieved by the naked human eye, that is a remarkably strong result.
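To make the idea concrete, here is a minimal sketch of the kind of detector described above: a convolutional feature extractor followed by a logistic head that scores an image patch as "manipulated" or "original". This is an illustrative toy with random, untrained parameters, not Adobe's actual model; the real system is a deep CNN trained on pairs of original and warped face images.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def manipulation_score(image, kernel, weights, bias):
    """Toy score in [0, 1]: probability that the patch was warped."""
    features = relu(conv2d(image, kernel))   # one convolutional layer
    pooled = features.mean()                 # global average pooling
    return sigmoid(weights * pooled + bias)  # logistic classification head

# Toy 16x16 grayscale patch scored with untrained random parameters.
patch = rng.random((16, 16))
kernel = rng.standard_normal((3, 3))
score = manipulation_score(patch, kernel, weights=1.5, bias=-0.5)
print(f"manipulation probability: {score:.3f}")
```

In a trained detector, the kernel and head weights would be learned from labeled original/manipulated pairs, and many such convolutional layers would be stacked before the classification head.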
The tool can then revert the images to their original state. It specifically targets facial manipulation.
As Adobe researcher Richard Zhang stated: "We live in a world where it's becoming harder to trust the digital information we consume."
This is just the start of Adobe's work on detecting facial manipulation. Let's see what will be developed next.