Racially biased AI can lead to false arrests, warns expert

A Faculty of Law assistant professor is highlighting the many dangers of facial recognition technology.
Loukia Papadopoulos
Representational image of facial recognition technology.


Racially biased artificial intelligence (AI) is not just misleading; it can be downright detrimental, destroying people’s lives. That is the warning University of Alberta Faculty of Law assistant professor Dr. Gideon Christian issued in a press release from the institution.

Christian is most notably the recipient of a $50,000 Office of the Privacy Commissioner Contributions Program grant for a research project called Mitigating Race, Gender and Privacy Impacts of AI Facial Recognition Technology. The initiative seeks to study race issues in AI-based facial recognition technology in Canada. Christian is considered an expert on AI and the law.

“There is this false notion that technology, unlike humans, is not biased. That’s not accurate,” said Christian.

People of color most at risk

Facial recognition technology in particular, he warned, is damaging to people of color.

“Technology has been shown (to) have the capacity to replicate human bias. In some facial recognition technology, there is (an) over 99 percent accuracy rate in recognizing white male faces. But, unfortunately, when it comes to recognizing faces of color, especially the faces of Black women, the technology seems to manifest its highest error rate, which is about 35 percent,” explained Christian.

“Facial recognition technology can wrongly match your face with that of some other person who might have committed a crime. All you see is the police knocking on the door, arresting you for a crime you never committed.”

Although most of the misidentification cases you hear about are in the US, Christian warns there could be plenty in Canada too.

“We know this technology is being used by various police departments in Canada. We can attribute the absence of similar cases to what you have in the US based on the fact that this technology is secretly used by Canadian police. So, records may not exist, or if they do, they may not be publicized,” he said.

“What we have seen in Canada are cases (of) Black women, immigrants who have successfully made refugee claims, having their refugee status stripped on the basis that facial recognition technology matched their face to some other person. Hence, the government argues, they made claims using false identities. Mind you, these are Black women — the same demographic group where this technology has its worst error rate.” 

Christian explained that the technology is never inherently biased; rather, the data used to train the machine learning algorithms is to blame. The tech produces results according to what it is fed.
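The mechanism Christian describes can be seen in a toy simulation (entirely invented numbers, not any real system): a simple classifier whose decision threshold is fit on training data dominated by one demographic group performs well on that group but badly on the underrepresented one.

```python
# Toy sketch of training-data bias, with made-up data: a match-score
# threshold fit on a training set that is 95% group A generalizes
# poorly to group B, whose true matches happen to score lower.

def fit_threshold(samples):
    """Fit a midpoint threshold between mean match and mean non-match scores."""
    pos = [score for score, label in samples if label == 1]
    neg = [score for score, label in samples if label == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def error_rate(samples, threshold):
    """Fraction of samples the threshold classifies incorrectly."""
    wrong = sum((score >= threshold) != bool(label) for score, label in samples)
    return wrong / len(samples)

# Each pair is (match score, true label): 1 = same person, 0 = different person.
group_a = [(0.9, 1), (0.95, 1), (1.0, 1), (0.05, 0), (0.1, 0), (0.0, 0)]
# Group B's true matches score lower overall in this invented example.
group_b = [(0.45, 1), (0.4, 1), (0.35, 1), (0.1, 0), (0.05, 0), (0.15, 0)]

# Training set is dominated by group A: 19 copies of A for every copy of B.
training = group_a * 19 + group_b
threshold = fit_threshold(training)

print(f"group A error rate: {error_rate(group_a, threshold):.2f}")  # 0.00
print(f"group B error rate: {error_rate(group_b, threshold):.2f}")  # 0.50
```

The threshold lands near the scores of the majority group, so every true match in group B falls below it and is rejected: zero error for group A, 50 percent error for group B, from the same algorithm.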

Free of racial bias and discrimination

“The majority of us want to live in a society that is free of racial bias and discrimination,” said Christian. “That is the essence of my research, specifically in the area of AI. I don’t think we want a situation where racial bias, which we have struggled so hard to address, is now subtly being perpetuated by artificial intelligence technology.”

The professor noted that in many ways this is an old problem masquerading as a new issue. He warned that, if left unaddressed, it could destroy years of progress.

“Racial bias is not new,” he noted. “What is new is how these biases are manifesting in artificial intelligence technology. And this technology — and this particular problem with this technology, if unchecked — has the capacity to overturn all the progress we achieved as a result of the civil rights movement.”  
