Find out How Artificial Intelligence Perceives You Through ImageNet Roulette

New research shows how AI categorizes us in questionable ways.

Thanks to artificial intelligence and facial recognition, you can unlock your phone merely by showing your face to the screen. The technology is impressive, but what's less understood is just how AI classifies you behind the scenes through its algorithms.

Now you can find out thanks to ImageNet Roulette, which lets you upload images of yourself, tags you as a specific type of person, and offers a glimpse of how AI categorizes us. The results are entertaining at times, but sometimes they're rude and borderline racist.


What is ImageNet Roulette?

Created as part of an art exhibition — Training Humans — at the Prada Foundation museum in Milan, ImageNet Roulette was made to show us how we as humans are classified by computer systems or machine learning systems. 

Its neural network was trained on the 'Person' categories from the ImageNet dataset, giving it over 2,500 categories to choose from when classifying humans.
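In broad strokes, the final step of a classifier like this maps an image's features to a score for each category and picks the highest-scoring one. The sketch below is purely illustrative: the category names are a tiny sample, and the weights and features are random stand-ins, not ImageNet Roulette's actual model.

```python
import numpy as np

# Illustrative stand-ins: a handful of the ~2,500 'Person' categories,
# and randomly generated "trained" weights and image features.
CATEGORIES = ["face", "nonsmoker", "psycholinguist"]

rng = np.random.default_rng(0)
weights = rng.normal(size=(len(CATEGORIES), 512))  # stand-in trained weights


def classify(features: np.ndarray) -> str:
    """Score every category and return the highest-scoring label."""
    logits = weights @ features
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()  # softmax: turn scores into probabilities
    return CATEGORIES[int(np.argmax(probs))]


features = rng.normal(size=512)  # stand-in for features extracted from a photo
label = classify(features)
```

Whatever label comes out is simply the category the network scores highest, which is why an unlucky match can produce an offensive tag just as easily as a harmless one.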

The resulting classifications form part of the exhibition at the Prada Foundation, created by Trevor Paglen and Kate Crawford.

ImageNet Roulette is available online: anyone can upload an image of themselves and receive a classification based on it.

ImageNet itself is one of the most significant training sets in artificial intelligence. Launched in 2009, it grew exponentially.

Scouring the Internet for images, the project collected millions of photos and for a time became the world's biggest academic user of Amazon's Mechanical Turk. In the end, ImageNet contained 14 million labeled photos spread across 20,000 categories.

What is controversial about AI classifying us by our images?

Through ImageNet Roulette, it becomes clear that some classifications are relatively harmless, even amusing. Some people are categorized as 'nonsmoker,' 'face,' or even 'psycholinguist.' Nothing too horrible.

However, when some images were taken in the dark, or with darker lighting, the categories jumped to 'black,' 'black person,' 'negro,' and 'negroid.' 

For people with dark skin, the labels jumped to 'mulatto,' 'orphan,' and even 'rape suspect.' All the fun quickly disappears here. 

These categories come from the original ImageNet database, dating back to 2009, and were not added by the ImageNet Roulette creators.

Essentially, the categories are based on how closely the uploaded images match the training images in ImageNet's database.
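One simple way to picture "how closely an image matches the training images" is nearest-neighbor matching: compare the uploaded image's features against labeled training examples and inherit the label of the closest one. Again, this is a hypothetical sketch with random stand-in data, not the system's real method or dataset.

```python
import numpy as np

# Stand-in "training set": random feature vectors with illustrative labels.
rng = np.random.default_rng(1)
train_feats = rng.normal(size=(6, 64))
train_labels = ["face", "face", "nonsmoker",
                "nonsmoker", "psycholinguist", "psycholinguist"]


def nearest_label(query: np.ndarray) -> str:
    """Return the label of the training image most similar to the query,
    using cosine similarity between feature vectors."""
    sims = train_feats @ query / (
        np.linalg.norm(train_feats, axis=1) * np.linalg.norm(query)
    )
    return train_labels[int(np.argmax(sims))]


label = nearest_label(rng.normal(size=64))  # stand-in uploaded-image features
```

Seen this way, the bias problem is concrete: if the training set pairs certain faces with offensive labels, a close match dutifully reproduces those labels.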

This goes to show how biased these AI algorithms can be. The data was collated from a number of sources: the original creators of ImageNet, the society that produced the images, the opinions of Amazon Mechanical Turk workers, and the dictionaries that provided the words in the first place.


The algorithms were, after all, originally created by humans, and humans must be the ones to change them in order to remove the bias.

Ultimately, the website says: "we want to shed light on what happens when technical systems are trained on problematic training data. AI classifications of people are rarely made visible to the people being classified. ImageNet Roulette provides a glimpse into that process – and to show the ways things can go wrong."

