Nearly every day, we hear about new advances in AI that enable new ways to monitor activities and people, transforming many processes in our day-to-day lives.
What we hear almost as often is how AI can exacerbate racial and gender bias and threaten privacy, job security, and economic well-being. In Elon Musk's view, it could even spark a war.
AI-powered facial recognition raises concerns over privacy and bias
As explained in Facial Recognition Concerns: Microsoft's Six Ethical Principles, “The widespread use of Artificial Intelligence-powered facial recognition technology can lead to some new intrusions into people’s privacy.”
Given technology that can capture people's images and identify them on public streets in the name of security, people are rightfully concerned about losing any ability to maintain privacy. That concern extends to school and work environments, as the article details.
A 2018 New York Times article raised another concern with the headline, “Facial Recognition Is Accurate, if You’re a White Guy.” The problem is this:
“The darker the skin, the more errors arise — up to nearly 35 percent for images of darker skinned women, according to a new study that breaks fresh ground by measuring how the technology works on people of different races and gender.”
The source of these figures is Joy Buolamwini, a researcher at the MIT Media Lab and the founder of the Algorithmic Justice League (AJL). She has devoted herself to uncovering how biases seep into AI and skew facial recognition results.
See her TED Talk in this video:
This year, Buolamwini published the findings of her research with Inioluwa Deborah Raji of the University of Toronto in Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products.
According to that study, Amazon's Rekognition software also performed poorly on anyone outside the white-male category. It misidentified women as men almost one in five times, and it incorrectly identified darker-skinned women as men 31 percent of the time.
Buolamwini wrote a Medium post referring to her research and noting that 26 researchers are demanding that Amazon stop selling Rekognition. The ACLU has also stepped up the pressure, urging Amazon shareholders to demand an end to sales of the facial recognition technology.
The pressure has had some measurable results. San Francisco recently voted to ban the technology from the arsenal of tech tools used by law enforcement.
Concern about AI’s bias problem
Others have also pointed out the potential for racial bias being reinforced by AI, most notably, Cathy O'Neil in Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. What she calls “math destruction” is the “result of models that reinforce barriers, keeping particular demographic populations disadvantaged by identifying them as less worthy of credit, education, job opportunities, parole, etc.”
See her TED Talk here:
Whether the problem stems from flaws in the data collected or in the algorithms set up to analyze the data, people are no longer content to trust AI as objective and all-knowing. They are now demanding explainable AI that introduces transparency into the process and allows biases and other flaws to be brought to light and resolved.
“What is vital is to make anything about AI explainable, fair, secure and with lineage, meaning that anyone could very simply see how any application of AI developed and why,” declared Ginni Rometty, IBM's CEO, during her keynote address at CES on January 9, 2019. IBM now offers consultations on reducing bias for those who are building machine learning systems.
You can watch the full address here:
Ethical concerns about AI as an existential threat
“AI is a fundamental risk to the existence of human civilization,” asserted Elon Musk in 2017 during an interview before an audience at the National Governors Association Summer Meeting. You can see it in full in this video:
If you prefer a much shorter take, you can see this video:
No stranger to advanced technology, Musk was arguing from the position of someone who understands both the capabilities and consequences of AI. That’s why he argues that regulations have to be put in place now, not after the technology advances and already presents a danger.
Negative outcomes from past technologies, Musk said, were limited “to individuals” and did not affect “society as a whole.” AI, however, poses a “fundamental, existential risk.”
For Musk, that risk includes starting a war. He posited a “purely hypothetical” example in which AI is set “a goal to maximize the value of a portfolio of stock” that is “long on defense,” and does whatever it takes to achieve that end, including setting things in motion to instigate a war.
On a less catastrophic note, Musk also acknowledged that AI threatens jobs. But for some, that is the real catastrophe to be concerned about.
Ethical concerns about AI and jobs
In July 2017, the Wall Street Journal published the article, Robots Are Replacing Workers Where You Shop. It described some of the tech already replacing human employees at Walmart stores.
The Journal article also featured a table from Citi Research on the threat to jobs from automation by the year 2030. Even the industries expected to suffer the least damage, insurance and finance, face a high-risk rate of 54 percent.
Accommodation and food services do not fare as well, with a high-risk rate of 86 percent. That would amount to a huge number of people pushed out of their jobs, damaging not only the individuals who lose them but the overall economy as well.
In an interview with VIAnews, Anthony Zador, Chairman of Neuroscience and Professor of Biology at the Cold Spring Harbor Laboratory, warned that some of “the consequences for society are going to be, in a lot of ways, devastating.”
That’s primarily due to the fact that we will see “tens of millions of jobs” disappear with no assurance of “what they’re going to be replaced with, if they’re going to be replaced by anything.”
He did hasten to add that he puts the blame here on “society; it’s not the fault of technology.” Regardless of who is to blame, the damage appears inevitable.