THE INFORMATION: A History, a Theory, a Flood
By James Gleick
Pantheon, 544 pp., illustrated, $29.95
THE MOST HUMAN HUMAN: What Talking With
Computers Teaches Us About What It Means to Be Alive
By Brian Christian
Doubleday, 320 pp., $27.95
MHII. SHF. WMIETG. Are these text messaging acronyms? Scientific organizations? Twitter handles?
Nope. They’re abbreviations suggested by Alfred Vail, a pioneer in the telegraph industry, for inclusion in messages. MHII meant “my health is improving.” SHF stood for “stocks have fallen.” WMIETG signified “when may I expect the goods?” Telegraph tariffs were steep, and anyone who could strip extraneous clutter from a message had an economic advantage.
Sometimes, though, encoded messages bring problems of their own. On June 16, 1887, for example, a wool dealer in Philadelphia telegraphed an agent in Kansas to tell him he’d bought a half-million pounds of wool. Their agreed code word was “BAY.” But by the time the message reached Kansas, it read “BUY.” So the agent began buying more wool.
“The crossing point between electricity and language — also the interface between device and human — required new ingenuity,” writes James Gleick in his absorbing new book, “The Information: A History, a Theory, a Flood.’’ Along with the telegraph, Gleick surveys a host of technologies humans have used to transmit information, from long-distance drum messages along the Niger River, to early telephony, to the boundless horizons of Wikipedia.
“The Information’’ is lyrical, patient, impeccably researched, and full of interesting digressions. If there’s a protagonist, it’s Claude Shannon, the “painfully thin” and brilliant Bell Labs engineer known to most computer scientists but few others as the founding father of electronic communications. In the 1940s and 1950s, Shannon formulated a mathematical theory that laid the groundwork for the computer and telecommunications industries. The second half of the 20th century, he predicted, would “see a great upsurge and development of this whole information business; the business of collecting information and the business of transmitting it from one point to another, and perhaps most important of all, the business of processing it.”
Needless to say, he was right.
Shannon spent the war years working as a cryptographer, wringing meaning out of long strings of apparent nonsense, and he’s a fitting hero for “The Information’’ because fundamentally that’s what Gleick is interested in, too: how to wring wisdom out of the deluge of our information-glutted lives. Is the roaring avalanche of cyberspace simply a torrent of babble? Or is it a miracle?
For the most part Gleick reserves judgment. The farthest he’ll go is to say, “Infinite possibility is good, not bad. Meaningless disorder is to be challenged, not feared.”
Questions about what computers are doing to our minds lie at the heart of another new book, Brian Christian’s “The Most Human Human: What Talking With Computers Teaches Us About What It Means to Be Alive.’’
The catalyst for Christian’s book is his participation in the Loebner Prize, an annual competition in which artificial-intelligence “chatbots” compete against real people to try to convince judges that they are human. The chatbot that fools the most judges is granted the title “Most Human Computer.” The human who gets the most votes is granted the title “Most Human Human.”
In 2009, Christian waded into this contest determined to convince the judges that he was the most human human. I won’t spoil the ending, but I will say that his book is a charming, friendly, and often funny read. (In one instance Christian mentions his interest in artificial intelligence to a barista, who declares “that she’s ‘totally prepared to eat [her] cats’ in any kind of siege scenario.”)
“Just be yourself,” everyone tells the author before the competition, and Christian uses this advice as a framework to investigate all sorts of questions. What is the self? Could computers reliably serve as psychotherapists?
So what does he learn? Like Gleick, Christian concludes that the ever-increasing sophistication of computers is not in itself something to fear. “Friends of mine who work in software,” he writes, “talk about how a component of their job often involves working directly on problems while simultaneously developing automated tools to work on those problems. Are they writing themselves out of a job? No, the consensus seems to be that they move on to progressively harder, subtler, and more complex problems . . . that demand more thought and judgment. They make their jobs, in other words, more human.”
Every year the chatbots will get a little bit better at sounding human. Every year the clouds of information all around us will get denser, richer, and larger. Can we teach ourselves, and our children, to pluck wisdom and meaning from those swirling clouds? That’s up to us.
Anthony Doerr, author of the story collection “Memory Wall,” can be reached at adoerr@cableone.net.