Blood on the tracks
David Hume wrote that reason is a "slave to the emotions." But new research suggests that in our moral decision-making, reason and emotion duke it out within the mind.
MORAL PHILOSOPHERS and academics interested in studying how humans choose between right and wrong often use thought experiments to tease out the principles that inform our decisions. One particular hypothetical scenario has become quite the rage in some top psychological journals. It involves a runaway trolley, five helpless people on the track, and a large-framed man looking on from a footbridge. He may or may not be about to tumble to his bloody demise: You get to make the call.
That's because in this scenario, you are standing on the footbridge, too. You know that if you push the large man off the bridge onto the tracks, his body will stop the trolley before it kills the five people on the tracks. Of course, he will die in the process. So the question is: Is it morally permissible to kill the man in order to save five others?
In surveys, most people (around 85 percent) say they would not push the man to his death.
Often, this scenario is paired with a similar one: Again, there are five helpless people on the track. But this time, you can pull a switch that will send the runaway trolley onto a side track, where only one person is standing. So again, you can reduce the number of deaths from five to one, but in this case most people say, yes, they would go ahead and pull the lever. Why do we react so differently to the two scenarios?
Moral philosophers, if not the man on the street, can offer a few subtle logical distinctions between the cases. In the first, the fat man is being used essentially as a tool, or instrument, toward another goal. That violates the Kantian principle that human beings are "ends" in themselves and should never be treated as mere instruments. Also, in the second scenario, the death of the innocent man can be viewed as a lamentable side effect of the chief goal, which is getting the train off the main track. This explanation is sometimes called the doctrine of the double effect: You'd pull that switch whether or not someone was on the track.
In a well-known paper that appeared in Science in 2001, however, Joshua D. Greene, then a post-doc in the Princeton psychology department, and four coauthors proposed that, whatever the philosophers said, for ordinary people the main issue was simply that pushing someone to his death, touching him and perhaps looking into his eyes, ignited an intense emotional response, whereas flipping a switch did not.
Greene, now an assistant professor at Harvard, administered MRI scans to subjects who were weighing both scenarios. While both scenarios produced increased activity in areas of the brain associated with intense reasoning, only in subjects considering the footbridge scenario did the regions of the brain associated with emotion "light up."
Greene and his colleagues described the finding as a partial victory for David Hume, the British philosopher who wrote that reason was a "slave to the emotions." But more precisely, they described moral decision-making as a process in which reason and emotion duke it out within the mind. The finding, they added, was also a blow to older theories of human development, which held that as we become adults, we stop making moral decisions with our emotions, as children do.
In the June issue of Psychological Science, Piercarlo Valdesolo, a Northeastern University graduate student in psychology, and David DeSteno, a Northeastern professor, tightened the link between our emotions and our morals. They asked 79 subjects to consider the two trolley scenarios. But first, they had about half the subjects view a five-minute clip of "Saturday Night Live" to put them in a good mood. The others watched a clip of a dry documentary on a Spanish village.
Valdesolo and DeSteno found that the SNL-watchers were more likely to say they would push the large man off the bridge. What seemed to be happening, they wrote, was that the happy mood caused by the video clip partly offset the negative emotions caused by the idea of directly killing a man. "By changing the emotional response," says DeSteno, "I can change your moral judgments."
Philosophers often caution that how we act in real life, never mind the laboratory, shouldn't determine how we ought to act. But Greene, Valdesolo, and DeSteno point out that, at the least, the results should lead us to be skeptical about our snap moral decisions, however natural and obvious they seem, as they may be very much affected by the mood we happen to be in.
. . .
Greene and the Northeastern scholars stress that the up-close-and-personal aspect of pushing the fat man off the bridge, as opposed to philosophical principles, is the most important factor influencing the decisions of ordinary people. But in a forthcoming article in the journal Mind & Language, the Harvard psychologist and biological anthropologist Marc Hauser and four coauthors argue that people are actually very subtle philosophers-at least at a subconscious level.
Hauser and his colleagues have found that people are sensitive to the doctrine of double effect even in thought experiments that don't push their emotional buttons. Even when the dirty work of actually doing the pushing is taken out of the equation, most test subjects say they are more willing to kill someone as a side effect of saving others than to kill that person as a direct means toward that end. And they make this distinction even when they can't explain their preferences afterward.
In his forthcoming book, "Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong" (Ecco), and in other recent papers, Hauser suggests we may have a moral "faculty" in our brains that acts as a sort of in-house philosopher, parsing situations quickly, before emotion or conscious reason comes into play. Hauser compares this faculty to the mental quality that allows human beings to acquire and use language naturally and effortlessly.
It's a suggestive analogy, inviting questions about just how far the similarities run. Is human morality, like language, largely universal (gratuitous killing is bad) but with plenty of room for local variation (in some cultures, killing your daughter if she loses her virginity before marriage is not considered gratuitous)? Is it easy for children to adapt to these local differences, depending on where and how they are raised, but difficult for adults, just as it's hard to learn French at 40?
Whether the analogy to language is "airtight" or "useful because it allows you to ask good questions" is an open issue, Hauser says. But scholars think the answers to these questions are of more than academic interest. "My hope is that by better understanding how we think," Greene writes on his personal website, "we can teach ourselves to think better."
Christopher Shea's column appears biweekly in Ideas. E-mail firstname.lastname@example.org.