Facebook Details Efforts at Policing Content on Its Network
Facebook published its third Community Standards Enforcement Report today, detailing the efforts it has made in combating inappropriate, extremist, and harmful content on its network, including removing more than three billion accounts determined to be fake and seven million posts flagged as containing hate speech.
"In total," Facebook said, in a blog post announcing the release, "we are now including metrics across nine policies within our Community Standards: adult nudity and sexual activity, bullying and harassment, child nudity and sexual exploitation of children, fake accounts, hate speech, regulated goods, spam, global terrorist propaganda and violence and graphic content."
When it came to nudity and sexually explicit material shared on its network, Facebook said that for every 10,000 content items viewed by a user, 11 to 14 of those views contained content that violated Facebook's policies. For violence and graphic content, 25 views out of 10,000 violated the company's policies. In reporting content featuring or promoting sexual exploitation, child nudity, or global terrorism, Facebook said that the prevalence of each was "too low to measure using our standard mechanisms, but we are able to estimate that in Q1 2019, for every 10,000 times people viewed content on Facebook, less than three views contained content that violated each policy."
Facebook said it removed more than seven million posts determined to violate its policy on hate speech, including four million posts in the first quarter of 2019.
The company also said that it estimates 5% of its monthly active accounts are fake, having identified and deactivated 1.2 billion fake accounts in the fourth quarter of 2018 and 2.19 billion in the first quarter of 2019.
It isn't enough to remove content after it has been viewed or to deactivate a fake account after it has caused harm, as Facebook itself acknowledges, and so the report also outlines how successful the company has been in proactively detecting violations and taking action before users report them. The company says it caught over 95% of the content it took action on before a user reported it, and in the case of hate speech it was able to identify 65% of offending content before it was reported, up from 24% last year.
Given the often opaque nature of communications by hate groups and the various codes employed to slip past detection algorithms, Facebook said, "we continue to invest in technology to expand our abilities to detect this content across different languages and regions."