Bias in Artificial Intelligence Requires Discipline and Compassion
The panel titled ‘Solving Bias in Artificial Intelligence’ at CES 2019 avoided scare-mongering headlines and instead aimed for a thoughtful discussion on how to make meaningful change in a rapidly expanding sector.
Like many of the panels so far, the discussion kept to a relatively strict line around regulatory control.
Panel representative of both industry and government
Michael Hayes, Sr. Manager for Government Affairs at the Consumer Technology Association, moderated the talk between panelists Austin Carson of Nvidia; Bari Williams, VP of Legal, Policy and Business Affairs at All Turtles; and Sunmin Kim, Technology Policy Advisor in the Office of Senator Brian Schatz.
Williams kicked off proceedings by talking about the way bias can be viewed from many different angles. She explained that she personally sees bias differently depending on whether she is acting as a mother to her three children, identifying as a black woman, or looking through the lens of her work as a lawyer.
This is a key point, and it is one of the reasons why simply identifying bias can be half of the battle. The panel was quick to point out that, unlike in other parts of the tech industry, bias in AI doesn’t come from bad actors; it comes from bad information.
Bad data, not bad actors
None of the panelists suggested that bias is due to malicious engineering; rather, it is a combination of many factors related to both input and output. The Federal Trade Commission released a report on big data in 2016, asking whether it is inclusive or exclusive.
The report concluded by urging users of big data to consider whether the data set used to build an algorithm is representative of the whole population or only of a certain segment.
Second, it prompts creators of AI tech to ask what the algorithm is actually doing and how it reaches its conclusions. Is it basing its decisions on the intended factors in the data, or on other patterns it identifies that inadvertently introduce bias?
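The representativeness question the FTC report raises can be made concrete. The following is a minimal, hypothetical sketch (the function name, groups, and tolerance are illustrative, not from the report) of how a team might compare a training set's group makeup against reference population shares to flag under- or over-representation:

```python
# Hypothetical sketch: compare the group makeup of a training set
# against reference population shares to flag representation gaps.
from collections import Counter

def representation_gaps(samples, population_shares, tolerance=0.05):
    """Return groups whose share in `samples` deviates from the
    reference population share by more than `tolerance`."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Example: a training set that over-samples one group.
training_labels = ["A"] * 80 + ["B"] * 20
reference = {"A": 0.6, "B": 0.4}
print(representation_gaps(training_labels, reference))  # {'A': 0.2, 'B': -0.2}
```

A check like this only catches skew in the inputs; as the panel noted, bias can also emerge from patterns the algorithm finds on its own.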
In terms of regulations, the panel agreed there was more that the government could do but that having industry expert knowledge inside the decision-making process was key to making progress.
Another point raised and discussed by the panel was the need for diverse teams in tech generally. The more diverse engineering and programming teams are, the more likely bias is to be spotted and dealt with early on, rather than lingering and compounding over time.
Diverse teams mean that bias is more likely to be taken seriously.
Diversity key in addressing bias
Increasing diversity was addressed in a previous panel, and the challenges of bringing more women and people of color into tech continue to simmer at CES.
Another fascinating point raised by the panel was the idea that AI can actually assist in detecting and overcoming bias. Technically, there should be no better machine for spotting biased behavior than intelligent AI.
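One simple form this automated detection can take is comparing a model's outcome rates across groups. The sketch below is a hypothetical illustration (the group names, decisions, and the demographic-parity metric used here are assumptions, not something the panel specified):

```python
# Hypothetical sketch: flag when a model's positive-outcome rate
# differs sharply between groups (a demographic-parity gap).
def positive_rate(decisions):
    """Fraction of decisions that were positive (1)."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Largest difference in positive-outcome rates between any two groups."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Example: hypothetical approval decisions (1 = approved).
decisions = {
    "group_x": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% approved
    "group_y": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% approved
}
print(f"parity gap: {parity_gap(decisions):.2f}")  # parity gap: 0.50
```

A large gap does not by itself prove bias, but it is exactly the kind of pattern an automated audit can surface for human review.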
In conclusion, Hayes congratulated the panel on being present and on addressing the difficult ideas related to bias in AI. He went on to say that the first step in overcoming bias is to recognize it, which the panel did today.