Facebook's AI Thought a Video of a Black Man Was 'About Primates'
Artificial intelligence (AI) is the next frontier of technology and promises a brand new future for all of humanity. While many have wondered about a dystopian future in which AI takes over the world, the bigger worry is the inherent biases these systems are being built with.
Recently, a user flagged an AI-generated prompt on Facebook that labeled a video featuring a Black man as content "about Primates." The flag set off a social media uproar and drew an apology from the company, which admitted there was still "progress to make" in its AI algorithms. But the real question is: what other offensive and deeply biased assumptions are developers at Facebook unknowingly building into their programs?
A 2019 report from MIT Technology Review revealed that the company's ad-serving AI had severe biases. While serving ads to two billion users, the AI showed ads for preschool-teacher and secretary positions predominantly to women, while janitorial and taxi-driver positions were served mostly to minority communities. The housing sector fared no better: home-sale ads were shown to white users, while ads seeking tenants were shown to minorities. Although the company committed at the time to rectifying the bias, gender bias persisted into 2021.
The company has resisted audits of its research and AI models, though in 2020 it did set up internal teams to study whether racial bias affects minorities on its platform as well as on Instagram, the photo- and video-sharing app Facebook owns. But those efforts haven't borne the desired results, as this recent incident shows.
Um. This “keep seeing” prompt is unacceptable, @Facebook. And despite the video being more than a year old, a friend got this prompt yesterday. Friends at FB, please escalate. This is egregious. pic.twitter.com/vEHdnvF8ui
— Darci Groves (@tweetsbydarci) September 2, 2021
It would be unfair to single out Facebook here. In 2015, Google's algorithms were found to be equally horrifying when Google Photos labeled photos of Black people as 'gorillas.' A more recent report from AlgorithmWatch showed that Google's Vision Cloud changed the label on a handheld instrument from 'device' to 'gun' as the skin tone of the hand holding it darkened. The company also fired AI ethics researcher Timnit Gebru after she pointed out biases inherited by its large language models.
AI algorithms at another social media site, Twitter, showed their bias when the platform's image-cropping tool favored lighter skin in photos containing both Black and white people. The company responded by rolling out a bug bounty program encouraging users to find and report biases on its platform. We have contacted Facebook to understand how it plans to respond and will update this post when a reply is received.
While it would be easy to blame the technology, it is quite clear that the biases are inherent in us, and more cleansing is needed there before the lines of code can be sorted out.