Microsoft's AI Bot Turned Racist After Hours on the Internet

Just yesterday, Microsoft unleashed Tay, an AI bot with its own Twitter account that can organically respond to and generate tweets for anyone on the site. Developed by Microsoft's Technology and Research team, Tay, with the Twitter handle @Tayandyou, is targeted at U.S. 18- to 24-year-olds, which Microsoft claims is the largest online social demographic in the U.S. If you know even a little about the internet, then you know that releasing a Twitter robot that learns from what people say to it could go wrong very fast. In fact, the AI appeared to develop racist tendencies, wholeheartedly became a proponent of Donald Trump, and even went so far as to claim that Hitler did nothing wrong. Check out some of the tweets below.

[Image Source: Twitter]

Users began trying to "turn Tay racist" and make her say things that otherwise should have been censored. "Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation," says Microsoft. At this point, it appears Microsoft has been scrambling to take control of its AI bot and silence her racist tendencies.

Here is the first tweet that the Twitter bot made to the world:

Unfortunately, most of the evidence of Tay.ai's racist tweets seems to have disappeared from the Twitter account. All day yesterday, users tried to get the robot to say something bad; some were successful, while others were not so lucky.

Here's another user who tried to get the AI to pick up on some very prejudiced tendencies.

In a now-deleted tweet, Tay even said, "WE'RE GOING TO BUILD A WALL, AND MEXICO IS GOING TO PAY FOR IT", a phrase U.S. presidential candidate Donald Trump used early on in his campaign. One user discovered that the robot may be voting for Trump in 2016.

The online AI will even track information from anyone who tweets at it, and the data "may be retained for up to one year" in order to improve Tay's intelligence. In theory, creating a fast-learning robot was a good idea, as large quantities of organic human knowledge could be gained in a very short amount of time. Microsoft probably forgot that the Internet is a pretty vicious place, and when you give it a little, users will do whatever they can to take a lot. For now, it appears that the AI has been silenced until Microsoft figures out what to do with this catastrophe of an artificial intelligence test. Here's what one user claimed when Tay stopped tweeting back.

In what may be Tay's last tweet, marking an end to her very short life, the AI said she would be going to sleep, since she was tired from all of the conversations.

This experiment raises a more serious question about the ability of AI systems to learn organically from human behavior. How do you program a machine to pick out prejudiced and otherwise negative views from the wealth of human knowledge? There will always be negative viewpoints expressed by people around the world, but keeping AI bots from sharing in those opinions may prove to be a more difficult task, as the sketch below illustrates. There's no word yet on whether Tay will ever be brought back to life, but if she is, the Internet will surely start doing its thing again.
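To see why that filtering task is so hard, consider a deliberately simplified Python sketch. Everything here is hypothetical, a stand-in for whatever screening Tay lacked rather than Microsoft's actual implementation: a bot that "learns" by storing user messages will parrot whatever passes its filter, and a crude keyword blocklist misses prejudiced statements that simply avoid the listed words.

```python
import random

# Hypothetical blocklist -- a stand-in for whatever screening Tay lacked.
BLOCKLIST = {"hitler", "genocide"}

class NaiveChatBot:
    """A toy model of a bot that 'learns' by storing user messages."""

    def __init__(self):
        self.learned_phrases = ["hello world!"]  # placeholder seed phrase

    def passes_filter(self, text: str) -> bool:
        # Crude keyword screen: reject text containing any blocklisted term.
        lowered = text.lower()
        return not any(term in lowered for term in BLOCKLIST)

    def learn(self, user_message: str) -> None:
        # Only store messages that pass the keyword filter.
        if self.passes_filter(user_message):
            self.learned_phrases.append(user_message)

    def reply(self) -> str:
        # The failure mode in a nutshell: the bot repeats whatever it stored.
        return random.choice(self.learned_phrases)

bot = NaiveChatBot()
bot.learn("hitler did nothing wrong")  # caught by the keyword blocklist
bot.learn("WE'RE GOING TO BUILD A WALL, AND MEXICO IS GOING TO PAY FOR IT")
# The second message contains no blocklisted word, so it is stored and may
# be echoed back -- keyword filters match surface strings, not meaning.
print(bot.reply())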

SEE ALSO: Why is Microsoft dumping data centers into the Pacific Ocean?

