In an effort to detect deepfakes online, Facebook, Microsoft and the Partnership on AI are teaming up with academia to launch the Deepfake Detection Challenge.
In a blog post, Mike Schroepfer, Facebook's Chief Technology Officer, said the goal of the challenge is to produce open-source technology that can better detect when artificial intelligence has been used to alter a video in an effort to mislead.
Facebook wants to spur innovations to combat deepfakes
The Deepfake Detection Challenge will include a data set, leaderboard, grants and awards aimed at spurring innovation in this area. Cornell Tech, MIT, University of Oxford, UC Berkeley, University of Maryland, College Park, and University at Albany-SUNY are joining forces with Facebook, Microsoft and the Partnership on AI to make the challenge possible.
"Facebook is partnering with industry leaders and academic researchers to create the DeepFake Detection Challenge, a collaborative effort to build new tools to detect videos that have been manipulated with AI." https://t.co/IlwSVmhQxb pic.twitter.com/XNpR95MniF — Facebook AI (@facebookai) September 5, 2019
Schroepfer said it's important for the data to be freely available for the community to use. As a result, Facebook, which is dedicating more than $10 million to fund the effort, will use paid actors, with the required consent, to contribute to the challenge. No data from the social media giant will be used for the challenge, Schroepfer said.
"Deepfake techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online. Yet the industry doesn't have a great data set or benchmark for detecting them," wrote the executive in the blog post. "We want to catalyze more research and development in this area and ensure that there are better open source tools to detect deepfakes."
Partnership on AI to take the lead on governance
The governance aspects of the challenge will be overseen by the Partnership on AI's new Steering Committee on AI and Media Integrity. That committee includes organizations such as Facebook, Microsoft, and WITNESS, among others. "This is a constantly evolving problem, much like spam or other adversarial challenges, and our hope is that by helping the industry and AI community come together we can make faster progress," Schroepfer said.
Facebook's efforts come at a time when deepfakes are beginning to be used beyond altering videos online. According to a recent Wall Street Journal report, criminals used AI-generated audio mimicking an executive's voice to trick a UK CEO into transferring money. The $243,000 transfer was purported to be a payment to a Hungarian supplier, and the CEO, believing he was speaking with his German boss, made the transfer as requested.
With the U.S. presidential election coming in 2020, fears are growing that political operatives will use deepfakes to smear candidates. Facebook, whose reputation took a big hit after the 2016 election, is now taking steps to prevent a repeat this time around.