The incredible, victorious performance of Watson, an IBM supercomputer, on Jeopardy! this week looks and feels like something out of science fiction. (As now-former champion Ken Jennings put it, writing out his answer to "Final Jeopardy" yesterday: "I for one welcome our new computer overlords.") But what does Watson's victory actually mean for the future of artificial intelligence? Writing at the Public Library of Science, John Rennie explains what Watson has achieved and what remains to be done.
Watson, Rennie argues, is a huge step forward, and we're soon going to see Watson-like systems in everyday life - when we interact with automated phone systems, for example. Watson will easily "scale down" to tackle easier, more circumscribed problems than winning Jeopardy!, like making travel reservations or finding relevant medical research. Rennie points out a major way in which Watson surpasses his predecessors: he can be uncertain. Watson knows when he doesn't have a great answer, and a Watson-like system could ask clarifying questions, instead of just "blurting out" an answer that doesn't make sense - as Watson was forced to do in "Final Jeopardy" when he suggested "Toronto????" as a U.S. city.
So Watson can "scale down" to solve easier problems - that, after all, is why it made business sense for IBM to build him. Can he scale up to solve more general problems - to carry on a conversation, for example? Rennie thinks that the answer is no. Watson takes a "brute force" approach to figuring things out, using massive computing power to analyze a problem in many ways simultaneously. Brains, while they do work "in parallel," are also more organized, in ways neuroscientists don't yet fully understand. Truly intelligent computers will almost certainly have to work by means of some "biologically guided" organizational principle, lest they waste their computational resources. Watson is an incredible feat of human ingenuity. But if we want to make a real artificial mind, we'll need "real breakthroughs in understanding how minds arise from brains."
Joshua Rothman is a graduate student and Teaching Fellow in the Harvard English department, and an Instructor in Public Policy at the Harvard Kennedy School of Government. He teaches novels and political writing.