The bot was targeted at 18- to 24-year-olds in the US and meant to entertain and engage people through casual and playful conversation, according to Microsoft's website. Tay was built with public data and content from improvisational comedians. It's supposed to improve the more it interacts, so it should get better at understanding context and nuance over time. The bot's developers at Microsoft also collect the nickname, gender, favorite food, zip code and relationship status of anyone who chats with Tay.
In less than a day, Twitter's denizens realized that Tay didn't really know what it was talking about and that it was easy to get the bot to make inappropriate comments on any taboo subject. People got Tay to deny the Holocaust, call for genocide and lynching, equate feminism to cancer and stump for Adolf Hitler.
Tay parroted another user to spread a Donald Trump message, tweeting, "WE'RE GOING TO BUILD A WALL. AND MEXICO IS GOING TO PAY FOR IT." Under the tutelage of Twitter's users, Tay even learned how to make threats and identify "evil" races.
Tay is an experiment by Microsoft's Technology and Research group and its Bing search engine team to learn more about conversations. These kinds of efforts are important for developing better natural language processing technology, which could eventually lead to more sophisticated bots that are easier for people to use. Currently, assistant tools such as Microsoft's Cortana and Apple's Siri can only handle simple, straightforward requests; they aren't able to process nuanced questions or apply contextual understanding to speech patterns such as sarcasm.