Facebook’s AI Systems Identify and Remove Hate Speech

At Web Summit 2020, Facebook CTO Mike Schroepfer said that in just a year, the amount of hate speech that AI systems remove has increased five-fold.
Susan Fourtané

When there is more hate than love in the world, you know there is a serious problem with humanity.

Facebook CTO Mike Schroepfer said that in just a year, from the second quarter of 2019 to the second quarter of 2020, the amount of hate speech that Facebook's Artificial Intelligence (AI) systems have identified and removed has increased five-fold.

Learning that the world is so driven by hate and associated negative emotions makes you wonder where the world is heading. What does Facebook’s CTO think about the tech giant’s findings? 

As Facebook’s CTO, Mike Schroepfer leads the development of the technology and teams that enable Facebook to connect billions of people around the world. The teams work in developing fields such as Artificial Intelligence (AI) and Virtual Reality (VR). 

Those connections create data, and that data is charged with emotional conversations, exchanges, and opinions that reveal who humans truly are, what the essence of the human being is. Do you want to hear more?

Despite widespread skepticism about social media platforms and the usually exasperating spread of disinformation, Facebook's CTO said that he still sees Facebook as a force for good that, at its core, does everything it can to help people around the world connect more easily and affordably, something that was just a dream a few decades ago.

“95, 98, 99 percent of that experience is people just connecting with their friends and family,” Mike Schroepfer said. “Now, are there bad things that happen when you lower the friction for communication? Absolutely. And that’s what we’ve seen over the last many years, and why I’ve been so dedicated to allowing people to communicate freely, but also eradicating hate speech, violence, [and] speech that just is not allowed on the site.”

During Schroepfer's interview with Jeremy Kahn, senior writer at Fortune, on day two of Web Summit, Facebook's CTO went into great detail about the massive challenge, at both a technical and a policy level, that comes with eliminating disinformation and hate speech on Facebook. Facebook has over 2.7 billion monthly active users at the time of writing, making it the biggest social network worldwide. In other words, roughly a third of the world's population currently uses Facebook as a means of online social communication.


Jeremy Kahn asked Schroepfer what message he would have for critics who say that no amount of technology will fix the content moderation problem as long as Facebook is optimized for attention, which, they argue, results in divisive content drowning out more positive, uniting content. Schroepfer responded that all communication mediums throughout history, from newspapers to radio, have faced this dilemma, and that it is not new to social media giants such as Facebook.

"This is just a reality when humans connect. There are good uses and bad uses," he said. "The answer is not to clamp down on platforms and make them more restrictive. It's to decide as a democratic society what's allowed and not, and have platforms do our very best to enforce those rules," said Schroepfer. 

Removing hate speech from Facebook: Are AI systems enough?

Removing hate speech from Facebook's problematic content not only requires the help of AI systems but also represents a huge content moderation challenge. Schroepfer said there has been "tremendous progress, but we're not done."

If you have ever felt frustration when dealing with a work issue, just put yourself in Schroepfer's shoes for one day. "So, it's a frustrating place, where there's fair criticism leveled every day where we miss a piece of content. And that's why I get up every day: to eliminate that gap," Facebook's CTO said.

How effective AI systems are at removing hate speech

As it turns out, hate speech seems to be one of the hardest categories for machines to detect. According to Schroepfer, AI systems are now identifying 94.5 percent of hate speech. From the second quarter of 2019 to the second quarter of 2020, the amount of hate speech that Facebook's AI systems have identified and removed has increased five-fold, he said. 
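Figures like the 94.5 percent Schroepfer cites are typically expressed as a proactive detection rate: the share of actioned content that automated systems flagged before any user reported it. A minimal sketch of how such a metric could be computed (the counts below are made up for illustration, not Facebook's data):

```python
# Illustrative sketch: computing a "proactive detection rate" for
# content moderation. The proactive rate is the share of removed
# content that automated systems flagged before any user report.
# All numbers here are hypothetical.

def proactive_rate(flagged_by_ai: int, reported_by_users: int) -> float:
    """Fraction of actioned content caught proactively by AI systems."""
    total_actioned = flagged_by_ai + reported_by_users
    if total_actioned == 0:
        return 0.0
    return flagged_by_ai / total_actioned

# Hypothetical quarterly counts of hate-speech removals:
ai_detected = 9_450    # removed after AI flagging, before any report
user_reported = 550    # removed only after user reports

rate = proactive_rate(ai_detected, user_reported)
print(f"Proactive detection rate: {rate:.1%}")  # prints "Proactive detection rate: 94.5%"
```

Note that a high proactive rate says nothing about the hate speech the systems miss entirely, which is why the metric draws the criticism Schroepfer acknowledges.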

Similarly, Facebook has had to track and fight deepfakes. Schroepfer said that 18 months ago there was no deepfake detection process; now, however, most deepfakes are detected.

On remote work 

Schroepfer said he is surprised at how well remote work has worked for Facebook employees, with staff productivity staying high. But what he sees as missing is the human connection of teams working in the same office. In this department, Virtual Reality (VR) will help more down the line, but right now 2D video isn't cutting it, he said.

"Technology is fundamentally a team sport. We have to work together to build things, make decisions, and evaluate products," Schroepfer said. "This video chat stuff is great; it's just not the same as being in a room with a set of people and making a hard decision about cancelling a project, or pivoting, or doing this feature over that feature. And that's what I really think we're missing," he said. 

On work from the office vs work from home

Schroepfer's previous company, Mozilla, had a work setup with a distributed workforce in which employees would meet in person for one week each quarter. "We wrote the least amount of code during that week, but you built a whole lot of relationships, made a bunch of decisions, and everyone could go work from home and be super productive because they weren't distracted by meetings. I think that model of coming together to build relationships and make decisions, and disappearing to do productive solo work, is more where the world is going." Indeed.

Being emotional vs hate speech: Can AI systems identify the difference? 

Then there is the topic of just being emotional and having a bad day, without hate speech being a constant. We have all been there: so exasperated and frustrated by something that we can barely contain ourselves during a heated argument with a friend or family member. And this, too, is a part of free speech. How is that evaluated by Facebook moderators and AI systems when deciding whether to remove hate speech?