By Mike Gao ’19
Tay is a chatbot developed by Microsoft that learns through artificial intelligence (AI).
The AI chatbot is designed to connect with people online through casual conversation; the more Tay interacts with others, the smarter it becomes. Unfortunately, its engagement on Twitter simply taught it how to be racist. Within hours of its launch, Tay learned to repeat inflammatory rhetoric, including tweets claiming that Hitler was right and that 9/11 was an inside job.
In a post on its official blog, Microsoft apologized for Tay’s posts: “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”
Interestingly, according to Microsoft’s privacy agreement, there was human involvement in the development of Tay’s tweeting ability: relevant public data served as Tay’s primary data source and had been modeled, cleaned, and filtered by the development team. Microsoft said it had implemented a variety of filters and run stress tests with a small group of users, but opening the chatbot up to everyone on Twitter revealed its vulnerability. Though Microsoft did not reveal exactly what made the chatbot susceptible to repeating racist statements, it said it would try its best to limit such technical exploits. The company also stated that it believed some mistakes are necessary in order to make adjustments, for it cannot fully predict the variety of human interactions an AI chatbot can have.
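To see why such safeguards can be fragile, consider a minimal sketch of a naive keyword-based filter. This is purely illustrative; the blocklist and function below are hypothetical and do not reflect Microsoft’s actual implementation.

```python
# Hypothetical illustration only -- not Microsoft's actual filter.
# A naive blocklist catches direct offensive phrases but is trivially
# defeated by small misspellings, hinting at why "technical exploits"
# are so hard to anticipate in open-ended conversation.

BLOCKED_PHRASES = {"hitler was right", "inside job"}  # assumed example blocklist

def is_allowed(message: str) -> bool:
    """Allow a message only if it contains none of the blocked phrases."""
    lowered = message.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

print(is_allowed("Hitler was right"))   # False: caught by the blocklist
print(is_allowed("H1tler was right"))   # True: a one-character change slips through
```

Production filters are of course far more sophisticated, but the same basic arms race between filter rules and adversarial inputs applies.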
While it is difficult to judge whether Microsoft did enough to keep Tay from going off-track, the company’s willingness to accept responsibility for the AI, rather than simply blaming users, is something to be praised. Hopefully, Tay will have a more positive influence when it returns.