Autonomous Cars Can't Recognise Pedestrians with Darker Skin Tones

People with darker skin may be at greater risk of being hit by a self-driving vehicle.

A new study suggests that the systems designed to help autonomous cars recognise pedestrians may have trouble recognising people with darker skin tones. The worrying research has been uploaded to the preprint server arXiv.


Evidence already existed that some facial recognition software struggles with darker skin tones. But in the context of autonomous cars, the same kind of bias could have a potentially deadly outcome.

World's best systems show bias

Researchers from Georgia Tech investigated eight AI models used in state-of-the-art object detection systems to complete their study. These systems allow autonomous vehicles to recognize road signs, pedestrians, and other objects as they navigate roads.

They tested these systems by splitting images of pedestrians into two categories based on the Fitzpatrick scale, a scale commonly used to classify human skin tone.
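The paper's own code and data format are not reproduced here, but the basic setup is easy to picture. The following is a minimal sketch, assuming hypothetical annotation fields, of how pedestrian annotations labelled with a Fitzpatrick type might be split into a lighter-skin group (types I–III) and a darker-skin group (types IV–VI).

```python
# Hypothetical sketch: split annotated pedestrian boxes into the two
# Fitzpatrick-scale groups compared in the study (types 1-3 vs. types 4-6).
# The annotation format below is an assumption, not the paper's actual data.

LIGHTER_TYPES = {1, 2, 3}   # Fitzpatrick I-III
DARKER_TYPES = {4, 5, 6}    # Fitzpatrick IV-VI

def split_by_fitzpatrick(annotations):
    """Partition pedestrian annotations into lighter/darker skin-tone groups.

    Each annotation is assumed to be a dict like:
        {"image_id": "0001", "bbox": [x, y, w, h], "fitzpatrick": 4}
    """
    lighter, darker = [], []
    for ann in annotations:
        if ann["fitzpatrick"] in LIGHTER_TYPES:
            lighter.append(ann)
        elif ann["fitzpatrick"] in DARKER_TYPES:
            darker.append(ann)
    return lighter, darker
```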

Darker skin at higher risk

Overall, the accuracy of the systems decreased by 5 percent when they were presented with groups of images of pedestrians with darker skin tones. And according to the paper, the models showed “uniformly poorer performance” when confronted with pedestrians with the three darkest shades on the scale.

These results hold even after adjusting for whether the photos were taken during the day or at night. In short, the report suggests that people with darker skin tones would be less safe near roads dominated by autonomous vehicles than those with lighter skin.
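To make that comparison concrete, here is a minimal sketch of how detection rates could be compared per skin-tone group while controlling for time of day. The field names and the matching logic are illustrative assumptions, not the study's actual evaluation pipeline.

```python
from collections import defaultdict

def detection_rate_by_group(annotations, detected_ids):
    """Fraction of ground-truth pedestrians the detector found,
    broken down by skin-tone group and by day/night.

    annotations: list of dicts like
        {"id": 17, "group": "darker", "time_of_day": "day"}
    detected_ids: set of annotation ids the detector successfully matched.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for ann in annotations:
        key = (ann["group"], ann["time_of_day"])
        totals[key] += 1
        if ann["id"] in detected_ids:
            hits[key] += 1
    return {key: hits[key] / totals[key] for key in totals}
```

Comparing the two groups within the same time-of-day bucket is what allows the gap to be attributed to skin tone rather than simply to darker scenes being harder.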

Bias-elimination starts with diversity in research

The report thankfully gives a brief outline of how to remedy this troubling problem. It starts with simply increasing the number of images of dark-skinned pedestrians in the data sets used to train the systems.

Engineers responsible for developing these systems also need to place more emphasis on this group during training, so that the models reach higher accuracy for it.
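As a rough illustration of the first suggestion, the sketch below oversamples the under-represented group so that both groups contribute a comparable number of training examples. This is a generic rebalancing technique offered as an assumption about one possible fix, not the specific method proposed in the paper.

```python
import random

def oversample_minority(lighter, darker, seed=0):
    """Duplicate examples from the smaller group until both groups are the
    same size. A crude rebalancing strategy; reweighting the training loss
    per group is a common alternative."""
    rng = random.Random(seed)
    if len(darker) < len(lighter):
        small, large = darker, lighter
    else:
        small, large = lighter, darker
    extra = [rng.choice(small) for _ in range(len(large) - len(small))]
    return large + small + extra
```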

The authors say they hope the report provides compelling enough evidence for this critical issue to be addressed before such recognition systems are deployed in the real world. It is also another reminder of the general lack of diversity in the AI field.

Unfortunately, this isn't the first report of damaging racial bias in AI-powered systems. In May 2016, ProPublica reported that software used to assist judges in assessing the risk that a defendant would reoffend was biased against black people.


Racial profiling is lethal

The system, used by judges in criminal sentencing, provides a score indicating how likely a person is to reoffend. A high score suggests they will reoffend; a low score suggests it is less likely.


The investigative journalists assessed the risk scores assigned to more than 7,000 people in Broward County, Florida, in 2013 and 2014, and then checked whether the same people were charged with any new crimes over the next two years.

The algorithm not only proved to be unreliable, with just 20 percent of the people predicted to commit violent crimes going on to do so. It was also racially biased.

Black defendants were more likely to be wrongly flagged as future criminals, at almost twice the rate of white defendants, while white defendants were mislabeled as low risk more often than black defendants.
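That disparity comes down to two simple error rates computed per racial group: the false-positive rate (labelled high risk but did not reoffend) and the false-negative rate (labelled low risk but did reoffend). The sketch below computes both from hypothetical records; the field names are assumptions for illustration, not ProPublica's actual data schema.

```python
def error_rates_by_group(records):
    """Compute false-positive and false-negative rates per group.

    Each record is assumed to look like:
        {"group": "black", "predicted_high_risk": True, "reoffended": False}
    """
    stats = {}
    for rec in records:
        g = stats.setdefault(rec["group"], {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
        if rec["reoffended"]:
            g["pos"] += 1
            if not rec["predicted_high_risk"]:
                g["fn"] += 1
        else:
            g["neg"] += 1
            if rec["predicted_high_risk"]:
                g["fp"] += 1
    return {
        group: {
            "false_positive_rate": g["fp"] / g["neg"] if g["neg"] else None,
            "false_negative_rate": g["fn"] / g["pos"] if g["pos"] else None,
        }
        for group, g in stats.items()
    }
```

A false-positive rate nearly twice as high for black defendants as for white defendants is exactly the kind of disparity ProPublica reported.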

The AI development community must come together and take a public stand against this sort of massively damaging bias.
