“We need to talk about mitigation and limiting harm, not solving this issue,” Bobby Chesney, co-author of the paper “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” told The Verge. “Deepfakes aren’t going to disappear.”
A new report, published by the research institute Data & Society, argues that AI may not be able to protect us from the potentially disruptive effects of deepfakes on society.
Study authors Britt Paris and Joan Donovan claim that deepfakes are part of a long history of media manipulation, an issue that has always required a social as well as a technical fix.
Age-old media manipulation
As Google announces its efforts to help detect deepfakes through AI, Paris and Donovan argue that relying on AI to fix an AI-generated problem could make things worse: it could concentrate even more power, and more data, in the hands of big tech corporations.
The study authors say that deepfakes are very unlikely to be fixed solely by technology. “The relationship between media and truth has never been stable,” the report says.
As examples, the study cites the way courts mistrusted photographic evidence when judges first admitted it in the 1850s, and the way reporters in the 1990s misrepresented the lopsided death toll between U.S. and Iraqi forces in the Gulf War, making the conflict seem evenly fought.
“These images were real images,” the report authors write. “What was manipulative was how they were contextualized, interpreted, and broadcast around the clock on cable television.”
Deepfakes aren't going away
“The panic around deepfakes justifies quick technical solutions that don’t address structural inequality,” Paris told The Verge.
“It’s a massive project, but we need to find solutions that are social as well as political so people without power aren’t left out of the equation.”
Paris and Donovan believe that a focus on technical fixes diverts attention from the legislative changes needed to help the people most at risk of being manipulated.
Unsurprisingly, they are suspicious of the idea that big tech corporations, so many of which have recently been at the center of huge data and privacy scandals, are going to save us from mass manipulation via a new technology.