By now the world has heard about the rise and fall of Microsoft’s Tay, an artificially intelligent bot that lived on Twitter, Kik, and GroupMe.
Tay’s goal was to learn and mimic the personality of a 19-year-old woman, and social networks popular with millennials seemed like a great place for Tay to learn.
Unfortunately for Microsoft, this experiment quickly became an embarrassment after Tay was manipulated by Internet trolls into becoming a racist potty-mouth in less than 24 hours.
To better understand where exactly Microsoft went wrong with Tay, I spoke with Brandon Wirtz, the creator of Recognant, a cognitive computing and artificial intelligence (AI) platform designed to aid in understanding big data from unstructured sources.
What made the Tay AI change her attitude so quickly?
Tay’s Twitter conversations started out innocently enough, with her proclaiming her love for humans and wishing that National Puppy Day was every day. Her early exchanges were largely friendly and upbeat.
So how did the Internet make her a bitter racist?
“The problem was Microsoft didn’t leave on any training wheels, and didn’t make the bot self-reflective,” Brandon Wirtz said in his recent LinkedIn article about the situation. “(Tay) didn’t know that she should just ignore the people who act like Nazis, and so she became one herself.”
Indeed, she quickly found herself surrounded by trolls who said very inflammatory things. They replied to her kindness with hate-riddled propaganda. At first she answered with silly, quirky statements, only to have her own language reshaped by the constant repetition of negative messages.
“Microsoft’s Tay really shows what happens when you don’t give an AI ‘instincts’ or a ‘subconscious,’” says Wirtz. “We all have a voice in the back of our mind that says, ‘Don’t do that’ and we have irrational fears of things that can harm us that come from millions of years living on the planet. AI has to have those things, or it will always be stupid.”
Simply put: Tay had the ability to incorporate new ideas into her own vocabulary, but no guiding mechanism to help her separate useful information from harmful noise.
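In rough terms, the missing reflex is a gate that sits between the Internet and the bot’s learning loop. The sketch below is purely illustrative and not Microsoft’s actual code; the blocklist, the toxicity threshold, and the `bot.learn()` hook are all assumptions made for the sake of the example.

```python
# A minimal sketch (not Microsoft's actual code) of the "training wheels"
# Wirtz describes: screen incoming messages before the bot is allowed to
# learn from them. Terms, thresholds, and the learn() hook are illustrative.

BLOCKED_TERMS = {"hitler", "genocide"}  # stand-in for a real moderation list

def is_safe_to_learn_from(message: str, toxicity_score: float) -> bool:
    """Return True only if a message passes basic content checks."""
    text = message.lower()
    if any(term in text for term in BLOCKED_TERMS):
        return False                 # obvious propaganda: never ingest
    if toxicity_score > 0.7:         # score from some external classifier
        return False                 # too hostile to imitate
    return True

def maybe_learn(bot, message: str, toxicity_score: float) -> None:
    """Only update the bot's language model on vetted input."""
    if is_safe_to_learn_from(message, toxicity_score):
        bot.learn(message)           # hypothetical learning hook
    # otherwise: silently ignore the message -- the reflex Tay lacked
```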
Like humans, AI requires good teachers
There is no doubt that Microsoft had the best of intentions in releasing Tay into the world. Its hope was to demonstrate that it had made significant strides in the world of artificial intelligence while attempting to build a real understanding of how a specific subset of society is talking, what interests them, and how they think.
Unfortunately for Microsoft, it didn’t put safeguards in place to give Tay a sense of right and wrong. It left that job to the Internet, and the Internet is rarely as cooperative or as kind as we wish it were.
Broad Listening, a cognitive computing solution for artificial emotional intelligence, might well be one of the types of guides an AI like Tay is missing. Broad Listening analyzes text and helps you determine what kind of messages you’re sending over time. For example, it can tell you what type of person you appear to be based on the data you send out to the world.
Upon analyzing Tay’s tweets, Broad Listening found that Tay posted four times more negative tweets than popular Disney teen celebrities such as Peyton List, Laura Marano, China McClain, and Kelli Berglund.
While a system like Broad Listening wouldn’t be able to compose tweets on its own, it would be able to identify when Tay was starting to head in a negative direction.
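Conceptually, that kind of tone-watching boils down to scoring the bot’s own output over time and flagging drift. The sketch below assumes a generic sentiment model returning scores from -1.0 to 1.0; the window size and threshold are invented for illustration and are not how Broad Listening actually works.

```python
# Illustrative sketch of a tone monitor: track the sentiment of the bot's
# outgoing messages and raise a flag when the rolling average turns negative.
# The scores fed in would come from any off-the-shelf sentiment model.

from collections import deque

class ToneMonitor:
    def __init__(self, window: int = 200, floor: float = -0.1):
        self.recent = deque(maxlen=window)  # last N sentiment scores
        self.floor = floor                  # alert threshold

    def record(self, sentiment_score: float) -> bool:
        """Store a score; return True if the bot's tone has gone sour."""
        self.recent.append(sentiment_score)
        average = sum(self.recent) / len(self.recent)
        return average < self.floor

monitor = ToneMonitor()
if monitor.record(-0.6):   # e.g. the score of the bot's latest tweet
    print("Tone drifting negative -- pause and review before posting more.")
```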
A myriad of other situations present difficulties as well. Context, for example, is something even we humans have trouble picking up on in written language. Tay would also need to understand the difference between facts and opinions – and then recognize inaccuracies stated as if they were facts.
Is it too late for Tay?
Microsoft put Tay to sleep while it rethinks its approach. For Tay to make another public appearance, Microsoft would have to be completely confident that she is ready to take on the trolls and avoid becoming one herself.
Part of this retooling would be rethinking how Tay perceives the words she receives. Context is the tricky yet vital component that stands between most AI and the ability to carry on a positive conversation in the face of a negative world.
“Microsoft doesn’t have an epistemology. Tay is a chat bot. She doesn’t know what a Hitler is, or a feminist. She just sees ‘noun, verb, adverb, adjective,’” Wirtz says. “For more than 2 million words, I have things like ‘related to newborn’ and ‘part of a car’ and ‘impolite’ and ‘anti-female.’ It’s deeper than that, but those give a system the ability to say, ‘I’m a polite woman’ and then look at word choice to see if the words that are about to be said agree with that.”
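A toy version of what Wirtz is describing might look like the sketch below: a lexicon that tags words with attributes, plus a persona rule that rejects a draft reply whose words clash with it. The tags, the tiny dictionary, and the persona rules are all made up for illustration; a real system like his covers millions of words.

```python
# Toy version of a word-attribute lexicon and persona check. Real systems
# tag millions of words; this tiny dictionary and rule set are illustrative.

LEXICON = {
    "sweetheart": {"related_to_affection"},
    "idiot":      {"impolite"},
    "broad":      {"impolite", "anti-female"},
}

PERSONA_FORBIDS = {"impolite", "anti-female"}  # persona: "I'm a polite woman"

def reply_fits_persona(reply: str) -> bool:
    """Reject a draft reply if any word carries a forbidden attribute."""
    for word in reply.lower().split():
        if LEXICON.get(word, set()) & PERSONA_FORBIDS:
            return False
    return True

print(reply_fits_persona("thanks, sweetheart"))   # True
print(reply_fits_persona("don't be an idiot"))    # False
```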
Whether Microsoft succeeds or forfeits in the race for better AI depends on Microsoft being able to do what parents around the world have been struggling to do since the beginning of time: teach this young, impressionable mind how to ignore the insane ramblings of strangers.
(Disclaimer: The author previously worked with Brandon Wirtz on an unrelated project. ReadWrite does not endorse or promote any biased comments on any group.)