Scientists Create World's First 'Psychopath' AI By Making It Read Reddit Captions

A team of scientists from MIT created the world's first 'psychopathic' AI named Norman.

MIT scientists have successfully created an AI ‘psychopath’ by feeding it content from Reddit. Three deep learning experts banded together to show how the content we feed an AI really shapes its learning outcomes.


To prove their point, MIT engineers and researchers Pinar Yanardag, Manuel Cebrian and Iyad Rahwan had their AI, which they dubbed Norman after the lead character in Psycho, read image captions from a Reddit forum.  

They then showed Norman a series of inkblots from the famous Rorschach test. Where a standard AI shown the same images generally responded with answers such as ‘close up of a wedding cake on a table’, Norman responded with graphic descriptions including ‘man is murdered by machine gun in broad daylight’.


To be clear, the scientists didn’t show the AI the actual images posted on the eerie Reddit forum; it only read the images’ captions. While it may seem a creepy and unnecessary piece of research, the Norman project couldn’t come at a better time.

When asked what motivated them to pursue a psychopathic AI, Norman's creators explained that the project is about raising awareness of how AI develops bias through the content it’s fed. “The data you use to teach a machine learning algorithm can significantly influence its behavior,” the researchers said.

“So when we talk about AI algorithms being biased or unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it.” As AI technology rapidly develops, questions of how it will be monitored and regulated persist. 
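To illustrate the researchers' point in the simplest possible terms, here is a minimal sketch (not the MIT team's actual code, and the captions are invented): the exact same trivial next-word model, fit on two different caption corpora, completes the same prompt very differently.

```python
# A toy illustration of "same algorithm, different data, different behavior".
# The model and all captions below are invented for demonstration purposes.
from collections import Counter, defaultdict

def fit_bigrams(captions):
    """Count word-to-next-word transitions across a list of captions."""
    follows = defaultdict(Counter)
    for caption in captions:
        words = caption.lower().split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
    return follows

def complete(follows, word):
    """Return the most frequent continuation seen after `word`."""
    return follows[word].most_common(1)[0][0] if follows[word] else "<none>"

neutral_captions = ["a person is holding a small umbrella",
                    "a person is cutting a wedding cake"]
disturbing_captions = ["a person is dragged into the dark",
                       "a person is dragged under a train"]

neutral_model = fit_bigrams(neutral_captions)
dark_model = fit_bigrams(disturbing_captions)

# Identical algorithm, identical prompt, different training data:
print(complete(neutral_model, "is"))   # e.g. "holding"
print(complete(dark_model, "is"))      # "dragged"
```

The algorithm never changes; only the captions it was trained on do, which is exactly the point the researchers make about Norman.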


Norman raises big questions about AI ethics

Today, Google released its ethics principles for its work with AI. The company says it won’t work with military organizations to develop weapons.

Norman presents subtle questions to both AI engineers and the wider public about how we monitor the content, and therefore the bias, of new AI systems.


AI used in self-driving cars, for instance, needs to be taught to distinguish pedestrians from inanimate objects. Care must be taken to ensure the data provided to the AI doesn’t encode any one particular look for what counts as human, so that no bias is inadvertently taught.


The same team from MIT have forayed into the dark side of AI before. Their first project was the AI-powered 'Nightmare Machine'. The project tackled the challenge of getting AI to not only detect but induce extreme emotions such as fear in humans. 


They then followed this gruesome project with the development of Shelley, the world's first collaborative AI horror writer. Shelley was ‘raised’ reading scary bedtime stories from the subreddit NoSleep.

She was then able to write over 200 horror stories in collaboration with human writers. In 2017, the team turned away from the dark side briefly to develop Deep Empathy, a project that explored whether AI can increase empathy for victims of far-away disasters by creating images that simulate disasters closer to home. 

