Watson’s humility — and humanity
WHAT MADE “Jeopardy: The IBM Challenge’’ so riveting last week wasn’t the fact that Watson the computer clobbered his two human opponents. It seems a given that a computer on a TV game show would know a decent amount of trivia and have a quick (robotic) finger on the buzzer.
No, the amazing thing was the nonchalant way the humans around him behaved — as if Watson were any other “Jeopardy!’’ contestant who happened to have a plasma screen for a head. Host Alex Trebek offered encouraging words and the occasional snide comment (“I won’t ask,’’ Trebek sighed, when Watson wagered a weird amount on a Daily Double). When Watson put a string of question marks after a “Final Jeopardy’’ answer, the studio audience laughed as if the computer were in on the joke. When it turned out that he had bet a paltry $947, it was a laugh riot.
Watson can’t make jokes, of course. His great innovation is his ability to recognize “natural language,’’ with its puns and double meanings and abstractions. He also has an intriguing set of Jeopardy-specific abilities: he’s programmed to bet strategically, press the buzzer only when he’s confident enough, and search the board for Daily Doubles.
But his ultimate function is mining information, like a super-charged search engine.
IBM made some interesting choices about how to render Watson human-ish, but not precisely human. His artist-designed face is like an abstract, high-tech smiley face: a sphere surrounded by swirling dots that change color according to his confidence in an answer. His voice, culled from an actor, is cheerful and benign. And while it mimics human intonation, it also gets certain words amusingly wrong.
Yet even an imperfect digital voice can affect people’s behavior, said Joseph Tecce, a psychology professor at Boston College. To humanize Watson is an involuntary act, he said in an interview: “It’s obligatory that we treat this computer as a human, if it speaks and it’s smart. There’s no way out of it.’’
We anthropomorphize plenty of things; even our boats get names and genders. But because computers are invested with a form of “intelligence,’’ we’ve always been especially tempted to turn them into people. That’s why we pick so many fights with the lady inside the GPS, and why we’ve been trying, for decades, to build computers that approximate human behavior. In the 1960s, MIT pioneered an early artificial intelligence program called Eliza, which was designed to mimic psychotherapy. (It asked a lot of open-ended questions, such as “Can you elaborate on that?’’)
Pop culture has long wrestled with the implications of computers that are almost-human, or all too human. Would it be worse to have computers that don’t emote, like HAL in “2001,’’ or computers that do, like the religious-fundamentalist Cylons in “Battlestar Galactica’’?
When people got riled up, half-jokingly, over Watson last week, it wasn’t because they feared computers would take over human jobs. Machines have been doing that for more than a century. What Watson represented was a potential personality, a computer that might act more authentically human than ever before.
But in the end, Watson played “Jeopardy!’’ just like a computer would; he was as cool in the face of his silly mistakes as he was when he ended up victorious. What his game-show run did beautifully, in fact, was highlight just how hard it would be to mimic the complexity of people. Humans do very strange things, from a computer’s standpoint. We think it’s clever to phrase answers in the form of a question. We insist on placing bets with numbers that end in 0 and 5.
And we keep trying, contrary to logic, to turn our computers into friends. At the end of Wednesday’s “Jeopardy!’’ show, human contestant Ken Jennings pretended to throttle Watson’s nonexistent neck. Then human contestant Brad Rutter made rabbit ears behind Watson’s virtual head. Watson was unfazed. He didn’t get the joke, even though he took it like a man.
Joanna Weiss can be reached at firstname.lastname@example.org.