Here's What You Need to Know About Identifying Deepfakes
Every new technology has a dark side.
And not all of the hysteria surrounding deepfake technology is unwarranted: according to a recent study highlighted in a UCL blog post, deepfakes pose the most serious artificial intelligence-enabled threat to society.
And they could become a feature, not a bug, of the modern world.
Deepfake technology is becoming more accessible
The FBI recently warned industry leaders that AI-generated synthetic media is a serious threat, calling deepfakes a tempting tool for bad actors bent on spear phishing or social engineering campaigns. Nina Schick, a deepfake expert, thinks hyper-realistic AI-generated videos, audio files, and images will become common features of the world, which means our fallible human faculties will need to get better at discerning the fakes from the real deal.
This rings true in light of the ongoing commercialization of synthetic media, with apps like FaceSwap, FaceApp, Avatarify, and Zao proliferating globally, reaching the eyes and ears of consumers and bad actors alike. As of writing, content on these and similar apps is protected under the First Amendment of the United States, but arguably that country's most defining feature (its constitutional amendments) was never drafted with the dangers of deepfakes in mind, and the technology has already been used against politicians, organizations, and ordinary citizens. In March of 2019, cybercriminals used a fake audio file to trick the CEO of a U.K. energy firm into moving $243,000 to a Hungarian supplier.
In 2020, a Philadelphia lawyer was the victim of an audio-spoofing attack, and, in 2021, Russian pranksters made fools of European politicians in an attack that allegedly began with a deepfake video. While reflexively blaming old rival nations is the simplest recourse, it's also the most misguided: you don't have to work for a foreign government to want to exacerbate distrust in media content. And as synthetic media becomes more accessible (and more convincing), it will only get harder to know which content is genuine and which is designed to mislead. Either way, the long-term consequence of ubiquitous deepfakes could be to deepen the societal harm already done by the last several years of socio-political upheaval.
Deepfakes could become a feature of media content
Many consumers of internet content already practice what you could describe as "disbelief by default," where skepticism grows until it eclipses the very possibility of credibility in the minds of ordinary people. This could work in the favor of dishonest or corrupt politicians and corporate leaders. With so much disinformation on our timelines, anyone with a modicum of celebrity can deflect information they don't like by declaring it "fake news," or by equating it with some other, more familiar social antagonism, which itself may or may not be real. This is especially worrisome when politicians deflect valid criticism by falsely tying it to another (again, potentially fictional) antagonism, which can leave the public skeptical of dangers that are very real, like climate change.
Luckily, we're not completely out of options, according to the FBI. Deepfakes may be spotted via distortions around the pupils or earlobes of a face. You might also notice unnatural movements of the head and torso, or syncing disparities between the audio and the associated lip movements. The background, too, may suffer distortions, like indistinct or blurry figures. Finally, social media profiles showing nearly identical eye spacing across a wide spectrum of otherwise disparate images are probably suspect. That said, the first rule about deepfakes is that there is no fixed checklist of tells, because the technology keeps changing. But, in time, we may learn to mentally "filter" out older deepfakes the same way some of us already do when consuming conventional media: bracketing repeated phrases and sentiments while weighing the plausibility of the motivations likely behind the content. And, as always, we can remember to ask ourselves: Who benefits?
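Of those tells, the eye-spacing check is simple enough to automate. Below is a minimal sketch, assuming Python with OpenCV (opencv-python) installed; the profile_photos folder and the 0.005 threshold are hypothetical stand-ins for illustration, not values from the FBI's guidance.

```python
import glob
import statistics

import cv2

# Haar cascade files that ship with opencv-python.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def eye_spacing_ratio(path):
    """Inter-eye distance normalized by face width, or None if undetectable."""
    img = cv2.imread(path)
    if img is None:
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) != 1:
        return None  # skip images without exactly one clear face
    x, y, w, h = faces[0]
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
    if len(eyes) < 2:
        return None
    # Take the two largest detections as the eyes; measure center-to-center.
    (x1, y1, w1, h1), (x2, y2, w2, h2) = sorted(
        eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
    dx = (x1 + w1 / 2) - (x2 + w2 / 2)
    dy = (y1 + h1 / 2) - (y2 + h2 / 2)
    return (dx ** 2 + dy ** 2) ** 0.5 / w  # normalize out image resolution

# "profile_photos/" is a hypothetical folder of images from one account.
ratios = [r for p in sorted(glob.glob("profile_photos/*.jpg"))
          if (r := eye_spacing_ratio(p)) is not None]

if len(ratios) >= 3:
    spread = statistics.stdev(ratios)
    print(f"Eye-spacing std dev across {len(ratios)} photos: {spread:.4f}")
    # 0.005 is an illustrative threshold, not a published cutoff.
    if spread < 0.005:
        print("Spacing is suspiciously uniform -- worth a closer look.")
```

Normalizing the eye distance by the detected face width is what makes the comparison meaningful: it keeps the measurement comparable across photos taken at different resolutions and distances, so uniformity in the ratio, rather than the raw pixel distance, is the red flag.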