Are Deepfakes Actually a Threat? Here's What the FBI Thinks
The Federal Bureau of Investigation (FBI) is worried about the threat deepfakes may pose.
In a Private Industry Notification (PIN) sent to companies throughout the U.S., the FBI warned companies that "[m]alicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12 to 18 months."
FBI argues deepfakes may be used to influence political and corporate processes
The FBI's new warning is the first relating to deepfakes — synthetic media recreations generated by AI or machine learning — and comes amid heightened awareness of the rapid spread and possible dangers of disinformation via media sources that misrepresent the "facts on the ground." For example, convincing deepfake videos of Tom Cruise portrayed what appeared to be him playing golf, walking through a store, and performing a magic trick. The videos quickly went viral, suggesting an age of what the FBI calls "flawless forgeries" has arrived.
However, while we live in a world where anyone can engage with deepfakes, creating high-end, specialized synthetic media is more difficult than it seems. Chris Ume, the Belgian VFX specialist behind several of the Tom Cruise videos, thinks the threat of deepfakes is overstated.
"You can't do it just pressing a button," said Ume in a report from The Verge. "That's important, that's a message I want to tell people." For Ume, clips of Tom Cruise deepfakes took weeks of work, using the open-source DeepFaceLab algorithm along with other, more familiar video editing tools. "By combing CGI and VFX with deepfakes, it makes it better. I make sure you don't see any of the glitches."
The FBI disagrees. According to the agency's notification, foreign actors are already using synthetic content in influence campaigns. Additionally, the Bureau thinks AI-enabled media with misleading aims will increasingly be used by "foreign and criminal cyber actors for spearphishing and social engineering" crimes.
Spearphishing is when a fraudulent or malicious agent sends emails that appear to come from a trusted sender in order to trick the recipient into revealing sensitive information or granting access to private networks. And the FBI thinks Russian, Chinese, and Chinese-language "actors" are already employing synthetic profile images to mask fake online accounts (known as sockpuppets), purportedly to push foreign propaganda campaigns.
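One common spearphishing tell is a mismatch between a trusted-looking display name and the actual sending domain. As a minimal sketch of that idea (the function name, domain list, and threshold logic here are illustrative assumptions, not part of the FBI's guidance or any real mail filter), a basic check using Python's standard library might look like this:

```python
from email.utils import parseaddr

def flag_suspicious_sender(from_header: str, trusted_domains: set) -> bool:
    """Flag a message whose display name invokes a trusted organization
    but whose actual address comes from an untrusted domain."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    # A trusted-sounding display name paired with an unknown domain is a red flag.
    name_mentions_trusted = any(
        d.split(".")[0] in display_name.lower() for d in trusted_domains
    )
    return name_mentions_trusted and domain not in trusted_domains

# The display name claims "Acme Payroll," but the address uses a lookalike domain.
print(flag_suspicious_sender(
    '"Acme Payroll" <hr@acme-payrolls.example>', {"acme.example"}
))  # True
```

Real defenses layer checks like this with SPF, DKIM, and DMARC validation; a header heuristic alone is easy to evade, which is why the FBI pairs technical controls with user training.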
FBI urges private and public sectors to practice 'good informational hygiene'
The Bureau also said actors whose origins remain unknown have posed as "journalists" via manufactured profile images, pushing fake articles that were later picked up and boosted by real media outlets.
The FBI then argued that malicious cyber actors will do more than push propaganda campaigns on behalf of foreign parties; they will also use synthetic media and deepfakes to launch attacks on the private sector. Specifically, the Bureau warned that synthetic content could enable a "newly defined cyber-attack vector" known as Business Identity Compromise (BIC), in which deepfake tools are used to create "synthetic corporate personas" or fake employees. Among other things, this could cause "very significant financial and reputational impacts to victim businesses and organizations," claimed the FBI.
To combat the rapidly evolving dangers of deepfakes, the FBI recommends that organizations and the public practice good information hygiene, including multifactor authentication and preparation to identify malicious attempts at social engineering and spearphishing. The Bureau also suggests companies train their employees to use the SIFT media resilience framework, an acronym for Stop, Investigate the information's source, Find trusted coverage, and Trace the original content.
Inspecting the profile photos of online accounts for visual glitches or signs of fabrication, like distortions around pupils or earlobes, or choppy backgrounds, can also help protect users from deepfake or synthetic media manipulation. And while there's no perfect defense against high-end deepfakes specifically designed to manipulate public or private decision-making, the cost and time required to develop convincing deepfakes will likely keep them out of reach for all but the most committed agents of future cybercrime.