Security screeners at airports might do a better job spotting weapons if they spent their downtime playing video games - specifically, wasting aliens in lurid first-person shooters like Halo 3.
That's just one of the tentative findings emerging from psychologists trying to boost the human ability to find threatening objects in X-rayed luggage. The subfield, once tiny and obscure, has bloomed in recent years, spawning competing theories and rival labs - and now provocative suggestions about how airport security screening might be improved.
Though baggage screening might seem on the surface like a repetitive and uncomplicated job, it turns out to be devilishly hard. Even well-trained security officers have trouble spotting guns, knives, and plastic explosives amid the tsunami of hair dryers, socks, MP3 players, metal toys, and the occasional cured ham that flows by during a holiday week like this one.
A government report issued last week noted that agents were able to sneak fake bombs past security at 19 airports by creating minor distractions, including carrying a roll of coins to set off a metal detector. And a Transportation Security Administration document obtained by USA Today revealed that when investigators placed simulated explosives into bags at Los Angeles International Airport last year, human screeners missed three-quarters of them.
Since the 9/11 attacks, there's been a spotlight of attention - and a stream of money - focused on solving this problem. Any new solutions would have applications beyond transportation safety: in medicine, for example, radiologists have to find tumors in thousands of mammograms and other X-rays, and have a high failure rate. What psychologists are finding is that the human mind fails at such tasks in very specific ways - and that understanding and compensating for those failures can help as much as new technology.
"The screener folks often get a bum press," says Jeremy Wolfe, a professor of ophthalmology at Harvard Medical School who runs the Vision Attention Lab at Brigham and Women's Hospital. The lab has received $100,000-$150,000 annually from the Department of Homeland Security over the last five years. "But I have seen very little evidence that they are anything other than professionals trying to do a very good job. And the people who are designing the task aren't stupid either - but it's really hard."
One reason it's hard is that the human brain has trouble with rare events. In 2005, Wolfe and two colleagues made news with a paper in the prestigious journal Nature that identified a "prevalence effect" in security screening. No matter how easy an object is to spot, they found, it is harder to spot if it is extremely uncommon. Wolfe and his colleagues found that people were much better at spotting objects that occurred half the time than in spotting the exact same object if it occurred only 1 in 50 times.
This may be a quirk of psychology with evolutionary roots, Wolfe suggests: If you were hunting prey it would make sense that your brain devote more attention to the species that comes along more often. But in the security scenario, where the most important threats occur very infrequently, it backfires.
In a paper now under review by the Journal of Experimental Psychology, Wolfe and his Brigham and Women's Hospital team offer one way around the perceptual glitch. If test subjects are "primed" for two minutes on tests in which knives and guns appear frequently, their high success rate continues when they switch over to a scenario in which the frequency drops. The priming effect lasts at least 20 minutes - probably long enough to get through a typical X-ray scanner shift. (Officers are rotated every half hour, to keep them from zoning out.) Josh Rubinstein, head of the "human factors" program at the Department of Homeland Security's Transportation Security Lab, says he hopes to field-test Wolfe's proposal in 2008.
Another reason baggage screening is hard is that the human visual system is attuned to motion (think hunting). Yet on an X-ray screen, the target objects sit motionless against their background.
So Rubinstein's lab is working on modifying current X-ray machines to produce simulated motion. Images taken from slightly different angles can be presented in sequence, "animating" them enough to make screeners more effective at picking out the potential weapons. Based on the work of a British psychologist, Paul Evans of Nottingham Trent University, and amounting to an inexpensive tweak of old technology, the approach has been shown to improve threat detection by 15 percent, Rubinstein says.
The Halo 3 theory comes from a simple observation made by researchers at Duke University: frequent video-game players seem to be better at picking out threats quickly.
Stephen Mitroff, a Duke professor, and Mathias Fleck, a graduate student, had been following the burgeoning research on the effect of video games on all sorts of cognitive abilities. They compared the performance of gamers with nongamers, defining gamers as people who had spent at least five hours a week for the past six months playing "first-person shooters" - video games that show a world through the player's eyes, moving through a series of threat-filled hallways and landscapes. In one test of people's ability to identify low-frequency threats, gamers had an error rate of 15 percent, compared with 25 percent for nongamers.
The reason for the advantage was unclear, though gamers have obvious experience both in scanning a screen quickly for threats and in improving their visual detection in new arenas. This work has not yet been published, but is under consideration by the journal Psychonomic Bulletin and Review, and Fleck says it "could potentially shift hiring practices or training procedures." Rubinstein says his lab is keeping an eye on such studies, but there are no real-world tests planned.
Befitting a true academic field, security-screening research now has its intramural squabbles. In the November issue of Psychological Science, Mitroff and Fleck argue that Wolfe's "prevalence" theory may be wrong. Mitroff and Fleck tried to replicate the low-prevalence effect, using a version of Wolfe's test, which had people try to pick tools out of an array of images on a computer screen. They succeeded - sort of - but subjects reported knowing that they'd done poorly on the tests. "That alarmed me," Fleck says. "It's hard to call it a 'miss' if they tell me they know they missed it."
So Mitroff and Fleck added a new condition: After the subjects had rendered their judgment about a given screen, they were allowed to correct it - they hit the escape key and went back. Under those conditions, the prevalence effect went away: people were just as good at finding the rare items as the common ones. The "prevalence effect," the researchers suggest, wasn't a fundamental perceptual error; rather, the hand was just quicker than the eye. Visual recognition did kick in, but a split second after the finger made the wrong call. In psychological jargon, this is known as the "Oh, shoot" effect.
Since security officers are already encouraged to stop conveyor belts, or even back them up for a second look at a bag, this would seem to cast doubt on whether "prevalence" could be a tool for improvement. Wolfe, for his part, takes issue with technical details of the Duke tests and stands by his original findings.
The Duke team, however, is now working on other intriguing failures of human observation, such as the "satisfaction of search" problem. This refers to the well-established human tendency to end a search after one potential problem has been identified. Radiologists who detect a problem spot on an X-ray, for example, have a notably high rate of failure in identifying a second problem spot if it appears on the same X-ray.
"I have totally experienced this in the airport. They will spot toothpaste and go right for that," Fleck says, "and I wonder if they would miss something else."
Christopher Shea's column appears regularly in Ideas. E-mail firstname.lastname@example.org.