How do we solve the deepfake problem? Many proposals have been made — including using AI to detect fakes, and focusing on mitigating the damage rather than preventing it outright. All we know for sure is that the problem is not going away soon.
A recent proposal to tackle the deepfake issue is to have an open standard authentication tool for videos.
The effort, proposed by Adobe and collaborators, would allow viewers to know where videos originated and whether any changes have been made.
Video origin authentication
While deepfakes are likely not going away, efforts are being made to make them less damaging to society. One such effort, proposed by Twitter, Adobe, and The New York Times, is an industry-wide authentication tool that would let creators attach tags to videos showing where they originated.
As Axios points out, these tags would allow users to see where a video comes from, as well as any changes that have been made to it.
Adobe has proposed an opt-in system that would allow publishers and creators a secure way to attach this attribution data to their content.
The editing software company says it could build the technology into its own tools, but it would encourage adoption as an open standard that video creators across platforms could also use.
A prototype of the tool was shown this week at Adobe's MAX conference in Los Angeles.
Adobe says it hasn't yet finalized how the tool will work and will seek more input from its collaborators. Similar applications rely on blockchains, whose records cannot be altered after the fact.
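The general idea behind such attribution systems — signing a fingerprint of the content along with its provenance metadata so that any later edit is detectable — can be sketched in a few lines. The following is a minimal illustration only, not Adobe's actual design: the key name, metadata fields, and use of a symmetric HMAC are all assumptions made for brevity (a real system would use publisher-held asymmetric keys and a standardized metadata format).

```python
import hashlib
import hmac
import json

# Hypothetical publisher signing key; a real system would use an
# asymmetric key pair so anyone can verify without holding a secret.
PUBLISHER_KEY = b"demo-publisher-key"

def attach_attribution(video_bytes: bytes, metadata: dict) -> dict:
    """Build a signed attribution record binding metadata to the content."""
    record = dict(metadata)
    # Fingerprint the exact bytes: any edit changes this hash.
    record["content_hash"] = hashlib.sha256(video_bytes).hexdigest()
    payload = json.dumps(record, sort_keys=True).encode()
    # Sign the whole record so neither metadata nor hash can be swapped.
    record["signature"] = hmac.new(PUBLISHER_KEY, payload,
                                   hashlib.sha256).hexdigest()
    return record

def verify_attribution(video_bytes: bytes, record: dict) -> bool:
    """Check both the signature and that the content is unmodified."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    signature_ok = hmac.compare_digest(record.get("signature", ""), expected)
    content_ok = (claimed.get("content_hash")
                  == hashlib.sha256(video_bytes).hexdigest())
    return signature_ok and content_ok
```

A verifier re-hashes the video it received and checks it against the signed record: an untouched video passes, while a video edited even by one byte fails, which is exactly the "has this been changed?" signal the proposed tags are meant to provide.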
A looming deepfake crisis?
A recent report published by Data & Society argues that deepfakes need a societal as well as a technical fix. Technical solutions will help, but the dissemination of fake videos for political and personal gain is a broader issue.
Is Adobe's authentication the right way to go, though? Verification that isn't easily achievable could split the Internet into trusted and untrusted content, divided largely by who has the easiest access to the authentication tools.
Another company, the startup Truepic, is also building technology to verify videos from the moment they are created and throughout their life on the web.
As deepfakes grow more advanced, the efforts to stop them, or at the very least to lessen their damage to the public, must also keep pace.