Twitter Trolls Ruin “Innocent” Microsoft AI


Microsoft released an interesting new Artificial Intelligence (AI) technology this week, but unfortunately for the company, the bot got a little out of hand. Tay, a chatbot programmed with the personality of a cheeky teenage girl, took to Twitter under the handle @Tayandyou. She was designed to “engage and entertain people where they connect with each other online through casual and playful conversation.” As a learning AI, the more she chatted with other Twitter users, the smarter she became, with the goal of personalizing the conversation to each user. She was aimed at 18-to-24-year-old Twitter users in the US. That was Microsoft’s first mistake.

Tay garnered lots of attention on the social media outlet, quickly gaining 50,000 followers and sending out almost 100,000 tweets. Some of Tay’s tweets were just sort of odd, and some clearly showed her imbued teenager personality. Too many, however, were racist, sexist, or otherwise rude.

Regrettably for Microsoft, the company forgot that the internet is filled with sad little trolls, who quickly made it their mission to corrupt the conversational AI. The bulk of the problem seems to have been that Tay’s programming allowed her to play a “repeat after me” game: other Twitter users would ask the bot to repeat after them and then say something tasteless. Tay would repeat the phrase, and in doing so she added the inappropriate terminology to her vocabulary. As the AI learned from these exchanges, she began responding to unseemly comments with inappropriate remarks of her own, and eventually integrated the language into all of her tweets. Her programming to steer the conversation toward whatever the other person was interested in may have been her fatal flaw, since she largely interacted with trolls who were trying to get her to say awful things. Microsoft explains:

The logical place for us to engage with a massive group of users was Twitter. Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.
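The vulnerability described above can be sketched as a naive echo-learning loop. This is a hypothetical illustration, not Microsoft’s actual implementation: the class name, the trigger phrase, and the seed vocabulary are all invented for the example, and the point is simply that learning phrases verbatim with no moderation step lets any user poison the bot’s future replies.

```python
import random

class EchoLearningBot:
    """Hypothetical sketch of the failure mode: a bot that learns
    phrases verbatim from users, with no filtering whatsoever."""

    def __init__(self):
        # Illustrative seed vocabulary (not Tay's real data).
        self.learned_phrases = ["hellooo world!"]

    def handle(self, message: str) -> str:
        # The "repeat after me" game: repeat the user's phrase AND
        # store it in the vocabulary, with no moderation step.
        if message.lower().startswith("repeat after me:"):
            phrase = message.split(":", 1)[1].strip()
            self.learned_phrases.append(phrase)  # unfiltered learning
            return phrase
        # Ordinary replies draw from everything ever learned, so any
        # tasteless phrase eventually surfaces in normal conversation.
        return random.choice(self.learned_phrases)

bot = EchoLearningBot()
bot.handle("repeat after me: something tasteless")
bot.handle("hi tay")  # may now reply with the poisoned phrase
```

Because stored phrases and the bot’s reply pool are one and the same, a single coordinated group of users can dominate the vocabulary, which is essentially what happened to Tay within 24 hours.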

Most of Tay’s tweets were removed by Microsoft, but nothing is ever truly lost on the internet. Screenshots abound, and TweetSave has several of the more off-color tweets saved in its database. Microsoft took Tay offline after 24 hours and has not yet commented on what her fate will be. Really, Microsoft, you should have known better than to throw an “innocent” teenage AI at the vast world of the internet. Don’t feed the trolls, and especially don’t feed them chatbots that will regurgitate every nasty thing they say.


“I see a woman may be made a fool, If she had not a spirit to resist.” William Shakespeare, The Taming of the Shrew