In early April of 1990, I was a contestant on Jeopardy! If you were watching back then, I was the “Supercomputer Programmer from Aloha, Oregon” who won three games and $38,000 and then lost – badly – in the fourth. So there’s quite a bit of personal history tied in with the news last week that a supercomputer from IBM, called Watson, had beaten two all-time Jeopardy! winners, Brad Rutter and Ken Jennings, in a practice round for the three-day charity competition on Feb. 14, 15 and 16.
A few weeks ago, I predicted that Jennings would win, Watson would place a close second and Rutter would place third in the overall contest, and I’m sticking with that prediction in spite of Watson’s first-place finish in the practice round last week. When I put on my handicapper’s hat, the scores of the practice round – $4,400 for Watson, $3,400 for Jennings and $1,200 for Rutter – are consistent with my assessment that Jennings and Watson are evenly matched and that Rutter is unlikely to win.
The battle for first place will come down to the differences between human and machine intelligence. The machine has three advantages: faster reaction time, no emotions or fatigue and a larger potential knowledge base. But the human has the advantage of being able to decode subtle linguistic clues found in the answers on the screen that a Jeopardy! contestant must question. And humans will write those answers for the tournament knowing that one of the contestants is a machine.
M. Edward (Ed) Borasky is, in order of appearance, a boy genius, computer programmer, applied mathematician, folk singer, actor, professional graduate student, armchair astronaut, algorithmic composer, supercomputer programmer, performance engineer, Linux geek, and social media inactivist. He currently develops virtual appliances for social media analytics and data journalism, and is the publisher of the Borasky Research Journal. His hobby is collecting hobbies.
In 1950, computing pioneer Alan Turing, while pondering the question, “Can A Machine Think?”, devised what has become known as the Turing Test. The original paper, “Computing Machinery and Intelligence,” can be found here. While there’s much philosophical debate about the exact nature of the Turing Test, in my mind, it is simply this: Can a human, communicating with both a machine and a human purely electronically, achieve in the long run a score better than average in distinguishing machine intelligence from human intelligence? If not, then we say that “the machine has passed the Turing Test.”
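That scoring rule can be sketched as a toy Monte-Carlo simulation. Everything here is my own illustration, not anything from Turing’s paper: the respondents are deliberately silly stand-ins, and the only point is the decision rule itself, namely that the machine “passes” when the judge’s long-run accuracy is not meaningfully better than the 50% expected from coin-flip guessing.

```python
import random

def run_turing_trials(respond, judge, n_trials=10_000, seed=42):
    """Estimate a judge's long-run accuracy at the distinguishing task.

    Each trial secretly picks a respondent kind ("machine" or "human"),
    gets a text reply from it, and asks the judge to label the reply.
    Returns the fraction of correct labels over n_trials.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        kind = rng.choice(["machine", "human"])
        reply = respond(kind)
        if judge(reply) == kind:
            correct += 1
    return correct / n_trials

# Toy respondents: the machine's reply has a telltale mechanical format.
def respond(kind):
    return "ANSWER: 42" if kind == "machine" else "hmm, probably 42?"

def sharp_judge(reply):
    # Spots the telltale formatting every time -> machine fails the test.
    return "machine" if reply.startswith("ANSWER") else "human"

def guessing_judge(reply):
    # Cannot tell the two apart, so it guesses -> accuracy near 0.5,
    # which is exactly the "machine has passed" outcome.
    return random.choice(["machine", "human"])

print(run_turing_trials(respond, sharp_judge))     # 1.0
print(run_turing_trials(respond, guessing_judge))  # roughly 0.5
```

The claim I make below about the Feb. 16 match is just this setup with Watson behind the screen: if the best judge we can field does no better than `guessing_judge`, the test has been passed.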
In the years since Turing’s paper, numerous challenges, both theoretical and practical, have been set forth for machine intelligence and numerous technological responses have resulted. Machine intelligence engineers have created practical economic value and machines now perform tasks once thought to require human intelligence. The essence of the question, “Can A Machine Think?” and Turing’s proposed test is this: once you abstract away the physical implementation details of electronic circuitry and software vs. a human body, nervous system and human thought processes, can a machine perform as well or better than a human at solving symbolic problems?
We saw human-competitive performance from machines in checkers in the 1960s and an unbeatable checkers program in 2007. We’ve seen musicians unfamiliar with Chopin’s entire body of work unable to distinguish between a mazurka by Chopin and a mazurka written in Chopin’s style by David Cope’s EMI software. In 1997, we saw a computer defeat the human world champion at chess. We have seen machines compete successfully with humans in patentable innovation. And last year, we saw machines successfully navigate public roads operating motor vehicles in traffic.
During its training against former champions, Watson, the IBM computer system designed to play Jeopardy!, was constantly updated on popular music, movies, television and pop culture references in order to be competitive in these categories. Watch Watson tackle pop culture references during these sparring matches.
In short, every challenge we have thrown at the machine intelligence community to produce human-competitive intelligence has been met. A series of increasingly difficult symbolic problems has been solved by electronic circuitry and software. And on February 16, 2011, I claim we will finally be able to say the Turing Test has been passed – that if the three contestants were placed behind a screen and we could see only their responses in text form and their scores, we would not be able to tell which one was Watson.
And what of the future? There’s no shortage of challenges for the supercomputers we can design and build and the software we can write for them. As IBM VP John E. Kelly III put it, “We really believe — I don’t want to be overly dramatic — but we could save lives with this.” Early diagnosis of disease and the design of effective treatments are among the more obvious ones. Earthquake prediction is another. As we approach the centenary of Turing’s birth, I say it is high time we accepted that the answer to Turing’s question, “Can A Machine Think?” is, “Yes, of course!”