This AI Knows Exactly How Racist and Sexist You Can Be

Shelby Rogers

For many innovators, technology serves as a way to bridge disparities, and that ambition carries over into artificial intelligence. Hiring algorithms would theoretically eliminate bias and give women and men of all races an equal chance at work. After all, it's a robot and not a human. The same logic could even apply to policing, as certain minorities often bear the brunt of excessive police force around the world.

How well an AI could pick up on our stereotypes

In recent years, AIs have become increasingly humanlike thanks to faster machine learning. However, that broader base of information can also lead an AI to absorb more of our human thinking, including our biases.


Researchers decided to put this connection to the test to see just how well an AI could pick up on our stereotypes. The team included researchers from around the globe, including several from Princeton University.

"Don’t think that AI is some fairy godmother," said the study's co-author Joanna Bryson. Bryson serves as a computer scientist at the University of Bath in the United Kingdom and Princeton University. "AI is just an extension of our existing culture."

Word association tests

The team found inspiration in existing psychology research, specifically implicit association tests (IATs). In an IAT, a word briefly appears on a screen, and the speed at which people react to it reveals their subconscious associations. Previous IATs have found that names like "Brad" and "Courtney" are associated with positive words like "happy," while names more common in communities of color receive more negative associations.

The team developed an AI system with a similar associative style. Bryson and her colleagues called it the word embedding association test (WEAT). It starts by defining a word based on the context in which that word is used. For example, "ice" and "vapor" would have similar embeddings because both are frequently used alongside "water." However, because the computer sees these words as a series of zeroes and ones, this is a little different from the intuitive understanding humans have of certain word pairings.
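To make the idea concrete, here is a minimal, hypothetical sketch of a WEAT-style score: it measures whether one set of target words (say, male versus female names) sits closer in embedding space to one set of attribute words (say, career versus family terms) than another. The helper names and toy vectors below are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    # Mean similarity of word vector w to attribute set A minus attribute set B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Effect size of the differential association of target sets X and Y
    # with attribute sets A and B (a Cohen's d style statistic).
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc)

# Toy 3-dimensional vectors purely for illustration; real embeddings such as
# GloVe or word2vec have hundreds of dimensions learned from large corpora.
rng = np.random.default_rng(0)
X = [rng.normal(size=3) for _ in range(4)]  # e.g. male names
Y = [rng.normal(size=3) for _ in range(4)]  # e.g. female names
A = [rng.normal(size=3) for _ in range(4)]  # e.g. career words
B = [rng.normal(size=3) for _ in range(4)]  # e.g. family words

print(weat_effect_size(X, Y, A, B))
```

With real embeddings, a score far from zero indicates that one target group is systematically closer to one attribute group, which is exactly the kind of association the IAT measures in people through reaction times.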


"A major reason we chose to study word embeddings is that they have been spectacularly successful in the last few years in helping computers make sense of language," said Arvind Narayanan, a computer scientist at Princeton University.

The paper shows that while common word associations can be statistical and logical, more troubling biases can still creep in. Words like "woman" were associated with the humanities and with the home, while "male" and "man" were associated with math and science.

The machine learning tool developed by the researchers trained on a Common Crawl corpus, which drew on billions of words from material published online. It also trained on data from Google News. For the team, the results didn't come as a surprise.
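Pretrained embeddings of this kind are publicly available, so the raw associations can be probed directly. The snippet below is a hedged sketch using the gensim library and the publicly released Google News word2vec vectors; the file name and example word pairs are illustrative assumptions, not the exact setup used in the study.

```python
# Assumes the released "GoogleNews-vectors-negative300.bin" file has been
# downloaded locally; words chosen here are examples, not the study's lists.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# Words that appear in similar contexts end up with similar vectors.
print(vectors.similarity("ice", "steam"))
print(vectors.similarity("woman", "man"))
```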

Sandra Wachter, a data ethics and algorithms researcher at Oxford, said, "The world is biased, the historical data is biased, hence it is not surprising that we receive biased results."

Bryson also noted in an interview with the Guardian that "a lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it."

This machine learning system can certainly call us on our faults, but does that make any AI inherently racist or sexist? The team doesn't think so. Whereas humans can lie about their reasons for not hiring someone, an algorithm cannot. The numbers and information it processes, however voluminous, carry no feelings or learned prejudices of their own.