An estimated 33 million autonomous vehicles will be on the road by 2040, potentially eliminating some of the dangers posed by fallible human motorists. After all, artificial intelligence isn't prone to road rage, distracted driving, or falling asleep at the wheel.
But there are other concerns to keep in mind when imagining a future where Jarvis takes the wheel: racism, sexism, and ableism.
Skin tone detection
Algorithms aren't perfect. They're designed by fallible humans and can easily reflect the biases of their creators. Algorithms learn from the examples they're given; if those examples don't include enough diverse populations, the systems will struggle to recognize those populations in practice.
In 2021, the Law Commission began drawing up a legal framework for the introduction of autonomous vehicles onto UK roads, warning that they may "struggle to recognize dark-skinned faces in the dark." People with disabilities are also at risk, the report says: "systems may not have been trained to deal with the full variety of wheelchairs and mobility scooters."
A 2019 report had similar findings. Researchers from Georgia Tech investigated eight AI models used in state-of-the-art object-detection systems, which allow autonomous vehicles to recognize road signs, pedestrians, and other objects as they navigate roads.
They tested these systems using two categories based on the Fitzpatrick scale, a scale commonly used to classify human skin color. Overall, the systems' accuracy decreased by five percent when they were presented with groups of images of pedestrians with darker skin tones. The models showed "uniformly poorer performance" on pedestrians with the three darkest shades on the scale.
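The kind of audit described above can be sketched as a per-group evaluation: score the detector separately on each skin-tone group and compare. The function and data below are purely illustrative, not the study's actual code or numbers.

```python
# Illustrative sketch: compute a pedestrian detector's recall separately
# for lighter (Fitzpatrick I-III) and darker (IV-VI) skin-tone groups.
# The sample data is made up to show the shape of the comparison.

def group_recall(samples):
    """Per-group recall. samples: (predicted_pedestrian, is_pedestrian, group)."""
    groups = {}
    for pred, actual, group in samples:
        hits, total = groups.get(group, (0, 0))
        groups[group] = (hits + (pred and actual), total + actual)
    # Recall = detected pedestrians / actual pedestrians, per group
    return {g: hits / total for g, (hits, total) in groups.items() if total}

samples = [
    (True, True, "I-III"), (True, True, "I-III"), (False, True, "I-III"),
    (True, True, "IV-VI"), (False, True, "IV-VI"), (False, True, "IV-VI"),
]
print(group_recall(samples))  # a gap between groups signals biased performance
```

A single aggregate accuracy number can hide exactly this kind of disparity, which is why the researchers broke results out by Fitzpatrick category.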
Beyond the driver's seat
Racism in AI systems isn't limited to cars. Amazon's facial recognition software, Rekognition, for example, struggled to recognize darker skin tones and female faces. It also famously matched Congresspeople's headshots with photos from a mugshot database.
In May 2016, ProPublica reported that software used to help judges assess the risk that a defendant will commit another crime was biased against black people. The system, used in criminal sentencing, provides a score based on whether the person is likely to re-offend: a high score suggests they will, a low score suggests they won't.
The investigative journalists assessed the risk scores assigned to more than 7,000 people in Broward County, Florida, in 2013 and 2014, then checked whether the same people were charged with any new crimes over the next two years.
The algorithm not only proved unreliable (only 20 percent of the people predicted to commit violent crimes did so), it was also racially biased. Black defendants were almost twice as likely as white defendants to be wrongly flagged as future criminals, while white defendants were labeled low risk more often than black defendants.
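The disparity ProPublica measured is, in fairness-metric terms, a gap in false positive rates: among people who did not re-offend, how often was each group wrongly flagged as high risk? The sketch below uses synthetic records, not ProPublica's data, to show how that check works.

```python
# Illustrative false-positive-rate check on synthetic records
# (NOT ProPublica's actual data). Each record is:
# (flagged_high_risk, reoffended, group)

def false_positive_rate(records, group):
    """Among the group's non-reoffenders, the share wrongly flagged high risk."""
    innocent = [r for r in records if r[2] == group and not r[1]]
    if not innocent:
        return 0.0
    return sum(1 for flagged, _, _ in innocent if flagged) / len(innocent)

records = [
    (True, False, "black"), (True, False, "black"), (False, False, "black"),
    (True, False, "white"), (False, False, "white"), (False, False, "white"),
]
print(false_positive_rate(records, "black"))  # higher: more wrongful flags
print(false_positive_rate(records, "white"))
```

A model can have similar overall accuracy for both groups and still distribute its errors unequally, which is why auditors compare error rates per group rather than a single score.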
An unbiased future
The importance of developing unbiased AI systems cannot be overstated. With autonomous vehicles, it starts with simply increasing the number of images of dark-skinned pedestrians in the data sets used to train the systems.
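One common way to increase a group's representation without collecting new footage is to oversample the existing examples so every group appears equally often during training. This is a minimal, generic sketch of that idea, not any manufacturer's actual pipeline; the dataset and grouping key are hypothetical.

```python
# Minimal oversampling sketch (hypothetical data, not a real AV pipeline):
# duplicate examples from underrepresented groups until every group is
# as large as the largest one.
import random

def oversample(dataset, key):
    """Return a dataset where each group (as defined by key) is equally sized."""
    buckets = {}
    for item in dataset:
        buckets.setdefault(key(item), []).append(item)
    target = max(len(items) for items in buckets.values())
    balanced = []
    for items in buckets.values():
        balanced.extend(items)
        # Randomly duplicate this group's examples to reach the target size
        balanced.extend(random.choices(items, k=target - len(items)))
    return balanced
```

Oversampling is a blunt instrument; teams also weight the loss function per group or collect genuinely new data, but rebalancing what the model sees is the usual first step.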
Engineers responsible for developing these systems need to place more emphasis on training them to higher accuracy for this group. Further, hiring diverse teams from the get-go will also set companies up for success.
Every day, AI becomes more integrated into our lives. It's clear that the AI development community must take a stand against this sort of massively damaging bias.