Deepfakes are no more effective at distorting memory than simple text descriptions
Remember when that video of Jon Snow issuing an apology for the way Game of Thrones season eight ended went viral across social networks?
Well, as it turns out, that video was a deepfake in which the voice was altered to sound like that of Jon Snow, as portrayed by Kit Harington.
The boom in AI has left many people struggling to tell what is real from what is fake in the media they consume.
Deepfakes are the 21st century’s answer to Photoshopping: they use a form of artificial intelligence called deep learning to fabricate images of fake events, hence the name 'deepfake'.
Recently, a team of researchers studied deepfake videos and found that clips of movie remakes that don’t actually exist led participants to falsely remember the non-existent films.
However, simple text descriptions of the fake movies also prompted similar false memory rates, the study states.
Art of faking
The research was conducted by Gillian Murphy of University College Cork, Ireland, and Lero, the Science Foundation Ireland Research Centre for Software, along with colleagues.
The researchers explain: “Deepfake videos are clips that have been altered using artificial intelligence (AI) technology to switch out one person’s voice or face with that of a different person.”
They cited concerns about how cheap and easy deepfakes have become to create, putting the tools within reach of everyday video producers.
The researchers set out to explore the potential risks and benefits of deepfakes through an empirical study with participants, hoping to spark conversations about both the creative applications of the technology and the risks attached to it.
The researchers expressed concern that deepfakes could spread misinformation and manipulate viewers’ memories.
Study method
To test their hypothesis, the researchers invited 436 people to complete an online survey in which participants watched deepfake videos of fake movie remakes starring different actors.
One deepfake depicted Will Smith as Neo, the character originally played by Keanu Reeves, in The Matrix.
Another movie remake portrayed Brad Pitt and Angelina Jolie in The Shining. Additional fake movies included Indiana Jones and Captain Marvel.
For comparison, participants were also shown clips of real remakes such as Charlie & The Chocolate Factory, Total Recall, Carrie, and Tomb Raider.
For some films, participants read text descriptions of the remakes instead of watching deepfake videos.
They were not informed that the deepfakes were false until later in the survey.
Consequential effects
The analysis found that both the deepfake videos and the text descriptions had a substantial impact on participants' memory, generating false memories of the fabricated content.
On average, approximately 49 percent of the participants believed that each individual fake remake was genuine.
Additionally, many participants remembered the fake remakes as being better than the originals. Interestingly, false memory rates were just as high when subjects were shown text descriptions.
The analysis suggests that deepfake technology is no more effective at distorting memory than other more traditional methods.
The authors said, “While deepfakes are of great concern for many reasons, such as non-consensual pornography and bullying, the current study suggests they are not uniquely powerful at distorting our memories of the past. Though deepfakes caused people to form false memories at quite high rates in this study, we achieved the same effects using simple text.”
Notably, the majority of participants reported discomfort with the use of deepfake technology to recast films, citing concerns about disrespecting artistic integrity and disrupting the shared social experience of movies.
The researchers state that the findings could help inform the future design and regulation of deepfake technology.
“In essence, this study shows we don't need technical advances to distort memory, we can do it very easily and effectively using non-technical means,” they said.
The findings were published in the open-access journal PLOS ONE on July 6.
Abstract
There are growing concerns about the potential for deepfake technology to spread misinformation and distort memories, though many also highlight creative applications such as recasting movies using other actors, or younger versions of the same actor. In the current mixed-methods study, we presented participants (N = 436) with deepfake videos of fictitious movie remakes (such as Will Smith starring as Neo in The Matrix). We observed an average false memory rate of 49%, with many participants remembering the fake remake as better than the original film. However, deepfakes were no more effective than simple text descriptions at distorting memory. Though our findings suggest that deepfake technology is not uniquely placed to distort movie memories, our qualitative data suggested most participants were uncomfortable with deepfake recasting. Common concerns were disrespecting artistic integrity, disrupting the shared social experience of films, and a discomfort at the control and options this technology would afford.