Challenges facing Artificial Intelligence

Technology is changing quickly, especially in the area of artificial intelligence. In 1985, Garry Kasparov played 32 different chess computers simultaneously and defeated them all. In 1997, Kasparov lost a six-game match to IBM's chess computer Deep Blue, with the decisive loss coming in Game 6. This could be considered the tipping point, when computer programs became better than humans at some difficult tasks.

In 2011, IBM once again shocked the world when its Watson computer defeated former champions Brad Rutter and Ken Jennings on Jeopardy! in a two-game match televised over three nights.

Fast forward once again to 2016, when Google DeepMind's AlphaGo program defeated 18-time world champion and 9-dan Go master Lee Sedol by winning four of five games, a feat previously judged to be at least a decade away.

Lee Sedol plays game three of his match against Google's AlphaGo program

Tay’s failure in artificial intelligence

Then there was Microsoft's Tay. Microsoft launched the chatbot to respond to input from users on Twitter, then shut it down within 24 hours, after Tay had become, as The Verge reported, "a racist asshole". Why did this experiment go so badly?

Microsoft shut down Tay, and the account remains protected.

The earlier examples of artificial intelligence all involve controlled environments. Deep Blue and AlphaGo work entirely within the rules of a game board and involve virtually no social interaction. Watson responded to input through a rule-based system that formulated its answers, phrased as questions, by mining static data sets. Tay, by contrast, was released not just onto the internet at large but onto Twitter in particular, a completely unstructured and uncontrolled environment for which it was unprepared.
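To make that contrast concrete, here is a toy sketch of answering from a closed, static data set, in the spirit of (though vastly simpler than) Watson's approach; the facts and matching logic are invented purely for illustration:

```python
# Toy illustration of a rule-based responder over a closed, static data
# set. The "facts" below are invented for demonstration; a real system
# like Watson mined far larger corpora with far richer scoring.

FACTS = {
    "largest planet in our solar system": "Jupiter",
    "chemical element with the symbol au": "gold",
}

def answer(clue: str) -> str:
    """Match a clue against the static facts and phrase the answer
    as a question, as Jeopardy! requires."""
    clue = clue.lower()
    for key, value in FACTS.items():
        if key in clue:
            return f"What is {value}?"
    # In a controlled domain, "no match" is a safe, bounded failure.
    return "No answer found."

print(answer("It's the largest planet in our solar system"))
# -> What is Jupiter?
```

In a closed domain like this, every possible failure is bounded. Tay had no such boundary: its inputs were whatever Twitter chose to feed it.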

Twitter as an uncontrolled environment

I use Twitter myself (@maplemuse). While Twitter can be good for sharing information, it's also home to considerably darker elements. Twitter is a free marketplace for the exchange of ideas, but as noble as that sounds, not all of the ideas are… appropriate for polite company. Microsoft claims this was "a coordinated attack by a subset of people". They just don't understand Twitter.

There are a few fundamental problems with Microsoft's approach. First, they completely failed to impose even rudimentary controls over the environment, or to take into account the "rogue actors" who would find subverting the bot a fun challenge. Second, this seems to have been an open-ended project without a clear product goal; the earlier programs tested progress against clearly defined and measurable goals.

It seems like the answer to Microsoft’s classic slogan “Where do you want to go today?” was something Hunter S. Thompson wrote.

Twitter is bat country!

The Turing Test and Eliza

Clearly, Microsoft was reaching towards passing the classic Turing test, in which a human conversing with a computer program would be unable to determine whether they were talking to a program or to another human. A conversational program hearkens back to Eliza, a chat program written by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory in the mid-1960s. Eliza parrots statements back as questions, mimicking the style of Rogerian psychotherapy.
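The technique is simple enough to sketch in a few lines. Below is a minimal, illustrative Eliza-style responder; the patterns, pronoun reflections, and responses are invented for demonstration and are not Weizenbaum's original DOCTOR script:

```python
import re

# Swap first- and second-person words so a statement can be echoed back.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "are": "am",
}

# (pattern, response template) pairs, tried in order.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Flip pronouns: 'my friends ignore me' -> 'your friends ignore you'."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Return the first matching rule's reply, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # the stereotypical therapist fallback

print(respond("I feel ignored by my friends"))
# -> Why do you feel ignored by your friends?
```

Even this toy version shows why Eliza felt conversational: it never needs to understand the input, only to reshape it.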

While it remains unclear how much of Tay was mimicry and how much was artificial intelligence and rule processing, it is quite clear that no company will release a conversational AI directly onto Twitter ever again.