How We Decide - Jonah Lehrer (2009)

Chapter 6. The Moral Mind

When John Wayne Gacy was a child, he liked to torture animals. He caught mice in a wire trap and then cut them open with scissors while they were still alive. The blood and guts didn't bother him. Neither did the squeals. Sadism was entertaining.

This streak of cruelty was one of the few noteworthy facts of Gacy's childhood. In just about every other respect, his early years were utterly normal. He grew up in the middle-class suburbs of Chicago, where he was a Boy Scout and delivered the local newspaper. He got good grades in school but didn't want to go to college. When his high school classmates were later asked what they remembered about Gacy, most couldn't remember anything. He blended in with the crowd.

Gacy grew up to become a successful construction contractor and a pillar of the community. He liked to throw big summer barbecues, grill hot dogs and hamburgers and invite the neighbors over. He dressed up as a clown for kids in the hospital and was active in local politics. The local chamber of commerce voted him Man of the Year. He was a typical suburban husband.

The normalcy, however, was a carefully crafted lie. One day, Gacy's wife noticed a pungent odor coming from the crawlspace underneath their house. It was probably just a dead rodent, Gacy said, or maybe a sewage leak. He bought a fifty-pound bag of lime and tried to erase the smell. But the smell wouldn't go away. Gacy filled in the space with concrete. The smell still wouldn't go away. There was something bad underneath those floorboards.

The smell was rotting bodies. On March 12, 1980, John Wayne Gacy was convicted of murdering thirty-three boys. He paid the boys for sex, and if something went awry with the transaction, he would kill them in his living room. Sometimes he killed a boy who raised his price. Or a boy he thought might tell somebody. Or a boy he didn't have enough cash to pay. Sometimes he killed a boy because it seemed like the easiest thing to do. He'd stuff a sock into the boy's mouth, strangle him with a rope, and get rid of the corpse in the middle of the night. When the cops finally searched Gacy's home, they found skeletons everywhere: underneath his garage, in the basement, in the backyard. The graves were shallow, just a few inches deep.

1

John Wayne Gacy was a psychopath. Psychiatrists estimate that about 25 percent of the prison population have psychopathic tendencies, but the vast majority of these people will never kill anybody. While psychopaths are prone to violence—especially when the violence is used to achieve a goal, like satisfying a sexual desire—their neurological condition is best defined in terms of a specific brain malfunction: psychopaths make poor—sometimes disastrous—moral choices.

At first glance, it seems strange to think of psychopaths as decision-makers. We tend to label people like John Wayne Gacy as monsters, horrifying examples of humanity at its most inhuman. But every time Gacy murdered a boy, killing without the slightest sense of unease, he was making a decision. He was willingly violating one of the most ancient of moral laws: thou shalt not kill. And yet Gacy felt no remorse; his conscience was clean, and he slept like a baby.

Psychopaths shed light on a crucial subset of decision-making that's referred to as morality. Morality can be a squishy, vague concept, and yet, at its simplest level, it's nothing but a series of choices about how we treat other people. When you act in a moral manner—when you recoil from violence, treat others fairly, and help strangers in need—you are making decisions that take people besides yourself into account. You are thinking about the feelings of others, sympathizing with their states of mind. This is what psychopaths can't do.

What causes this profound deficit? On most psychological tests, psychopaths appear perfectly normal. Their working memory isn't impaired, they use language normally, and their attention spans aren't reduced. In fact, several studies have found that psychopaths have above-average IQs and reasoning abilities. Their logic is impeccable. But this intact intelligence conceals a devastating disorder: psychopaths are dangerous because they have damaged emotional brains.

Just look at Gacy. According to a court-appointed psychiatrist, Gacy seemed incapable of experiencing regret, sadness, or joy. He never lost his temper or got particularly angry. Instead, his inner life consisted entirely of sexual impulses and ruthless rationality. He felt nothing, but planned everything. (Gacy's meticulous criminal preparations are what allowed him to evade the police for so long.) Alec Wilkinson, a journalist who spent hours interviewing Gacy on death row, described his eerily detached demeanor in The New Yorker:

[Gacy] appears to have no inner being. I often had the feeling that he was like an actor who had created a role and polished it so carefully that he had become the role and the role had become him. In support of his innocence, he often says things that are deranged in their logic, but he says them so calmly that he appears to be rational and reasonable ... Compared to other murderers at the prison, Gacy seemed tranquil.

This sort of emotional emptiness is typical of psychopaths. When normal people are shown staged videos of strangers being subjected to pain—for example, receiving powerful electric shocks—they automatically generate visceral emotional reactions. Their hands start to sweat and their blood pressure surges. But psychopaths feel nothing. It's as if they were watching a blank screen. Most people react differently to emotionally charged verbs such as kill or rape than they do to neutral words such as sit or walk, but that's not the case with psychopaths. For them, the words all seem equivalent. When normal people tell lies, they exhibit the classic symptoms of nervousness; lie detectors work by measuring these signals. But psychopaths are able to consistently fool the machines. Dishonesty doesn't make them anxious because nothing makes them anxious. They can lie with impunity. When criminologists looked at the most violent wife batterers, they discovered that as those men became more and more aggressive, their blood pressure and pulse rates actually dropped. The acts of violence had a calming effect.

"Psychopaths have a fundamental emotional disorder," says James Blair, a cognitive psychologist at the National Institute of Mental Health and coauthor of The Psychopath: Emotion and the Brain. "You know when you see a scared face in a movie and that makes you automatically feel scared, too? Well, psychopaths don't feel that. It's like they don't understand what's going on. This lack of emotion is what causes their dangerous behavior. They are missing the primal emotional cues that the rest of us use as guides when making moral decisions."

When you peer inside the psychopathic brain, you can see this absence of emotion. After being exposed to fearful facial expressions, the emotional parts of the normal human brain show increased levels of activation. So do the cortical areas responsible for recognizing faces. As a result, a frightened face becomes a frightening sight; we naturally internalize the feelings of others. The brain of a psychopath, however, responds to these fearful faces with an utter lack of interest. The emotional areas are unperturbed, and the facial-recognition system is even less interested in fearful faces than it is in perfectly blank stares. The psychopath's brain is bored by expressions of terror.

While the anatomy of evil is still being mapped, neuroscientists are beginning to identify the specific deficits that define the psychopathic brain. The main problem seems to be a broken amygdala, a brain area responsible for propagating aversive emotions such as fear and anxiety. As a result, psychopaths never feel bad when they make other people feel bad. Aggression doesn't make them nervous. Terror isn't terrifying. (Brain-imaging studies have demonstrated that the human amygdala is activated when a person merely thinks about committing a "moral transgression.") This emotional void means that psychopaths never learn from their adverse experiences; they are four times more likely than other prisoners to commit crimes after being released. For a psychopath on parole, there is nothing inherently wrong with violence. Hurting someone else is just another way of getting what he wants, a perfectly reasonable way to satisfy desires. The absence of emotion makes the most basic moral concepts incomprehensible. G. K. Chesterton was right: "The madman is not the man who has lost his reason. The madman is the man who has lost everything except his reason."

AT FIRST GLANCE, the connection between morality and the emotions might be a little unnerving. Moral decisions are supposed to rest on a firm logical and legal foundation. Doing the right thing means carefully weighing competing claims, like a dispassionate judge. These aspirations have a long history. The great rationalist philosophers, such as Descartes and Leibniz, tried to construct a moral system entirely free of feelings. Immanuel Kant argued that doing the right thing was merely a consequence of acting rationally. Immorality, he said, was a result of illogic. "The oftener and more steadily we reflect" on our moral decisions, Kant wrote, the more moral those decisions become. The modern legal system still subscribes to this antiquated set of assumptions and pardons anybody who demonstrates a "defect in rationality"—these people are declared legally insane—since the rational brain is supposedly responsible for distinguishing between right and wrong. If you can't reason, then you shouldn't be punished.

But all of these old conceptions of morality are based on a fundamental mistake. Neuroscience can now see the substrate of moral decisions, and there's nothing rational about it. "Moral judgment is like aesthetic judgment," writes Jonathan Haidt, a psychologist at the University of Virginia. "When you see a painting, you usually know instantly and automatically whether you like it. If someone asks you to explain your judgment, you confabulate ... Moral arguments are much the same: Two people feel strongly about an issue, their feelings come first, and their reasons are invented on the fly, to throw at each other."

Kant and his followers thought the rational brain acted like a scientist: we used reason to arrive at an accurate view of the world. This meant that morality was based on objective values; moral judgments described moral facts. But the mind doesn't work this way. When you are confronted with an ethical dilemma, the unconscious automatically generates an emotional reaction. (This is what psychopaths can't do.) Within a few milliseconds, the brain has made up its mind; you know what is right and what is wrong. These moral instincts aren't rational—they've never heard of Kant—but they are an essential part of what keeps us all from committing unspeakable crimes.

It's only at this point—after the emotions have already made the moral decision—that those rational circuits in the prefrontal cortex are activated. People come up with persuasive reasons to justify their moral intuition. When it comes to making ethical decisions, human rationality isn't a scientist, it's a lawyer. This inner attorney gathers bits of evidence, post hoc justifications, and pithy rhetoric in order to make the automatic reaction seem reasonable. But this reasonableness is just a façade, an elaborate self-delusion. Benjamin Franklin said it best in his autobiography: "So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do."

In other words, our standard view of morality—the philosophical consensus for thousands of years—has been exactly backward. We've assumed that our moral decisions are the byproducts of rational thought, that humanity's moral rules are founded in such things as the Ten Commandments and Kant's categorical imperative. Philosophers and theologians have spilled lots of ink arguing about the precise logic of certain ethical dilemmas. But these arguments miss the central reality of moral decisions, which is that logic and legality have little to do with anything.

Consider this moral scenario, devised by Haidt. Julie and Mark are siblings vacationing together in the south of France. One night, after a lovely day spent exploring the local countryside, they share a delicious dinner and a few bottles of red wine. One thing leads to another and Julie and Mark decide to have sex. Although she's on the pill, Mark uses a condom just in case. They enjoy themselves very much, but decide not to have sex again. The siblings promise to keep the one-night affair secret and discover, over time, that having sex has brought them even closer together. Did Julie and Mark do something wrong?*

If you're like most people, your first reaction is that the brother and sister committed a grave sin. What they did was very wrong. When Haidt asks people to explain their harsh moral judgments, the most common reasons given are the risk of having kids with genetic abnormalities and the possibility that sex will damage the sibling relationship. At this point, Haidt politely points out that Mark and Julie used two types of birth control and that having sex actually improved their relationship. But the facts of the case don't matter. Even when their arguments are disproved, people still cling to the belief that having sex with one's brother or sister is somehow immoral.

"What happens in the experiment," Haidt says, "is [that] people give a reason [why the sex is wrong]. When that reason is stripped from them, they give another reason. When the new reason is stripped from them, they reach for another reason." Eventually, of course, people run out of reasons: they've exhausted their list of moral justifications. The rational defense is forced to rest its case. That's when people start saying things like "Because it's just wrong to have sex with your sister" or "Because it's disgusting, that's why!" Haidt calls this state "moral dumbfounding." People know something seems morally wrong—sibling sex is a terrible idea—but no one can rationally defend the verdict. According to Haidt, this simple story about sibling sex illuminates the two separate processes that are at work when we make moral decisions. The emotional brain generates the verdict. It determines what is wrong and what is right. In the case of Julie and Mark, it refuses to believe that having sex with a sibling is morally permissible, no matter how many forms of birth control are used. The rational brain, on the other hand, explains the verdict. It provides reasons, but those reasons all come after the fact.

This is why psychopaths are so dangerous: they are missing the emotions that guide moral decisions in the first place. There's a dangerous void where their feelings are supposed to be. For people like Gacy, sin is always intellectual, never visceral. As a result, a psychopath is left with nothing but a rational lawyer inside his head, willing to justify any action. Psychopaths commit violent crimes because their emotions never tell them not to.

2

Moral decisions are a unique kind of decision. When you're picking out products in the grocery store, searching for the best possible strawberry jam, you are trying to maximize your own enjoyment. You are the only person who matters; it is your own pleasure that you are trying to satisfy. In this case, selfishness is the ideal strategy. You should listen to those twitchy cells in the orbitofrontal cortex that tell you what you really want.

However, when you are making a moral decision, this egocentric strategy backfires. Moral decisions require taking other people into account. You can't act like a greedy brute or let your anger get out of control; that's a recipe for depravity and jail time. Doing the right thing means thinking about everybody else, using the emotional brain to mirror the emotions of strangers. Selfishness needs to be balanced by some selflessness.

The evolution of morality required a whole new set of decision-making machinery. The mind needed to evolve some structures that would keep it from hurting other people. Instead of just seeking more pleasure, the brain had to become sensitive to the pain and plight of strangers. The new neural structures that developed are a very recent biological adaptation. While people have the same reward pathway as rats—every mammal relies on the dopamine system—moral circuits can be found in only the most social primates. Humans, of course, are the most social primates of all.

The best way to probe the unique brain circuits underlying morality is by using a brain scanner to study people while they are making moral decisions. Consider this elegant experiment, led by neuroscientist Joshua Greene of Harvard. Greene asked his subjects a series of questions involving a runaway trolley, an oversize man, and five maintenance workers. (It might sound like a strange setup, but it's actually based on a well-known philosophical thought puzzle.) The first scenario goes like this:

You are the driver of a runaway trolley. The brakes have failed. The trolley is approaching a fork in the track at top speed. If you do nothing, the trolley will stay left, where it will run over five maintenance workers who are fixing the track. All five workers will die. However, if you steer the trolley right—this involves flicking a switch and turning the wheel—you will swerve onto a track where there is a single maintenance worker, who will be killed. What do you do? Are you willing to intervene and change the path of the trolley?

In this hypothetical case, about 95 percent of people agree that it is morally permissible to turn the trolley. The decision is just simple arithmetic: it's better to kill fewer people. Some moral philosophers even argue that it is immoral not to turn the trolley, since passivity will lead to the death of four more people. But what about this scenario:

You are standing on a footbridge over the trolley track. You see a trolley racing out of control, speeding toward five workmen who are fixing the track. All five men will die unless the trolley can be stopped. Standing next to you on the footbridge is a very large man. He is leaning over the railing, watching the trolley hurtle toward the men. If you sneak up on the man and give him a little push, he will fall over the railing and into the path of the trolley. Because he is so big, he will stop the trolley from killing the maintenance workers. Do you push the man off the footbridge? Or do you allow five men to die?

The brute facts, of course, remain the same: one man must die in order for five men to live. If ethical decisions were perfectly rational, then a person would act the same way in both situations and be as willing to push the man off the bridge as he or she was to turn the trolley. And yet, almost nobody is willing to actively throw another person onto the train tracks. The decisions lead to the same outcome, yet one is moral and one is murder.

Greene argues that pushing the man feels wrong because the killing is direct: you are using your body to hurt his body. He calls it a personal moral situation, since it directly involves another person. In contrast, when you just have to turn the trolley onto a different track, you aren't directly hurting somebody else, you're just shifting the trolley wheels; the resulting death seems indirect. In this case, it's an impersonal moral decision.

What makes this thought experiment so interesting is that the fuzzy moral distinction—the difference between personal and impersonal decisions—is built into the brain. It doesn't matter what culture you live in, or what religion you subscribe to: the two different trolley scenarios trigger distinct patterns of activation. In the first scenario, when a subject was asked whether the trolley should be turned, the rational decision-making machinery was turned on. A network of brain regions assessed the various alternatives, sent their verdict onward to the prefrontal cortex, and the person chose the clearly superior option. The brain quickly realized that it was better to kill one man than five men.

However, when a subject was asked whether he would be willing to push a man onto the tracks, a separate network of brain areas was activated. These folds of gray matter—the superior temporal sulcus, posterior cingulate, and medial frontal gyrus—are responsible for interpreting the thoughts and feelings of other people. As a result, the subject automatically imagined how the poor man would feel as he plunged to his death on the train tracks below. The subject vividly simulated the victim's mind and concluded that pushing him was a capital crime, even if it saved the lives of five other men. The subject couldn't explain the moral decision—the inner lawyer was confused by the inconsistency—but his certainty never wavered. Pushing a man off a bridge just felt wrong.

While stories of Darwinian evolution often stress the amorality of natural selection—we are all Hobbesian brutes, driven to survive by selfish genes—our psychological reality is much less bleak. We aren't angels, but we also aren't depraved hominids. "Our primate ancestors," Greene explains, "had intensely social lives. They evolved mental mechanisms to keep them from doing all the nasty things they might otherwise be interested in doing. This basic primate morality doesn't understand things like tax evasion, but it does understand things like pushing your buddy off of a cliff." As Greene puts it, a personal moral violation can be roughly defined as "me hurts you," a concept simple enough for a primate to understand.

This is a blasphemous idea. Religious believers assume that God invented the moral code. It was given to Moses on Mount Sinai, a list of imperatives inscribed in stone. (As Dostoyevsky put it, "If there is no God, then we are lost in a moral chaos. Everything is permitted.") But this cultural narrative gets the causality backward. Moral emotions existed long before Moses. They are writ into the primate brain. Religion simply allows us to codify these intuitions, to translate the ethics of evolution into a straightforward legal system. Just look at the Ten Commandments. After God makes a series of religious demands—don't worship idols and always keep the Sabbath—He starts to issue moral orders. The first order is the foundation of primate morality: thou shalt not kill. Then comes a short list of moral adjuncts, which are framed in terms of harm to another human being. God doesn't tell us merely not to lie; He tells us not to bear false witness against our neighbor. He doesn't prohibit jealousy only in the abstract; He commands us not to covet our neighbor's "wife or slaves or ox or donkey." The God of the Old Testament understands that our most powerful moral emotions are generated in response to personal moral scenarios, so that's how He frames all of His instructions. The details of the Ten Commandments reflect the details of the evolved moral brain.

These innate emotions are so powerful that they keep people moral even in the most amoral situations. Consider the behavior of soldiers during war. On the battlefield, men are explicitly encouraged to kill one another; the crime of murder is turned into an act of heroism. And yet, even in such violent situations, soldiers often struggle to get past their moral instincts. During World War II, for example, U.S. Army Brigadier General S.L.A. Marshall undertook a survey of thousands of American troops right after they'd been in combat. His shocking conclusion was that less than 20 percent actually shot at the enemy, even when under attack. "It is fear of killing," Marshall wrote, "rather than fear of being killed, that is the most common cause of battle failure in the individual." When soldiers were forced to confront the possibility of directly harming other human beings—this is a personal moral decision—they were literally incapacitated by their emotions. "At the most vital point of battle," Marshall wrote, "the soldier becomes a conscientious objector."

After these findings were published, in 1947, the U.S. Army realized it had a serious problem. It immediately began revamping its training regimen in order to increase the "ratio of fire." New recruits began endlessly rehearsing the kill, firing at anatomically correct targets that dropped backward after being hit. As Lieutenant Colonel Dave Grossman noted, "What is being taught in this environment is the ability to shoot reflexively and instantly ... Soldiers are de-sensitized to the act of killing, until it becomes an automatic response." The army also began emphasizing battlefield tactics, such as high-altitude bombing and long-range artillery, that managed to obscure the personal cost of war. When bombs are dropped from forty thousand feet, the decision to fire is like turning a trolley wheel: people are detached from the resulting deaths.

These new training techniques and tactics had dramatic results. Several years after publishing his study, Marshall conducted a similar survey during the Korean War and discovered that 55 percent of infantrymen were now firing their weapons. In Vietnam, the ratio of fire was nearly 90 percent. The army had managed to turn the most personal of moral situations into an impersonal reflex. Soldiers no longer felt a surge of negative emotions when they fired their weapons. They had been turned, wrote Grossman, into "killing machines."

3

At its core, moral decision-making is about sympathy. We abhor violence because we know violence hurts. We treat others fairly because we know what it feels like to be treated unfairly. We reject suffering because we can imagine what it's like to suffer. Our minds naturally bind us together, so that we can't help but follow the advice of Luke: "And as ye would that men should do to you, do ye also to them likewise."

Feeling sympathetic is not as simple as it might seem. For starters, before you can sympathize with the feelings of other people, you have to figure out what they are feeling. This means you need to develop a theory about what's happening inside their minds so that your emotional brain can imitate the activity of their emotional brains. Sometimes, this act of mind reading is done by interpreting facial expressions. If someone is squinting his eyes and clenching his jaw, you automatically conclude that his amygdala is excited; he must be angry. If he flexes the zygomaticus major—that's the muscle that contracts during a smile—then you assume he's happy. Of course, you don't always have access to a communicative set of facial expressions. When you talk on the phone or write an e-mail or think about someone far away, you are forced to mind read by simulation, by imagining what you would feel in the same situation.

Regardless of how exactly one generates theories of other people's minds, it's clear that these theories profoundly affect moral decisions. Look, for example, at the ultimatum game, a staple of experimental economics. The rules of the game are simple, if a little bit unfair: an experimenter pairs two people and hands one of them ten dollars. This person (the proposer) gets to decide how the ten dollars is divided. The second person (the responder) can either accept the offer, which allows both players to pocket their respective shares, or reject it, in which case both players walk away empty-handed.

When economists first started playing this game in the early 1980s, they assumed that this elementary exchange would always generate the same outcome. The proposer would offer the responder about a dollar—a minimal amount—and the responder would accept it. After all, a rejection leaves both players worse off, and one dollar is better than nothing, so this arrangement would clearly demonstrate our innate selfishness and rationality.

However, the researchers soon realized that their predictions were all wrong. Instead of swallowing their pride and pocketing a small profit, responders typically rejected any offer they perceived as unfair. Furthermore, proposers anticipated this angry rejection and typically tendered an offer of around five dollars. This was such a stunning result that nobody really believed it.

But when other scientists repeated the experiment, the same thing happened. People play this game the same way all over the world, and studies have observed similar patterns of irrationality in Japan, Russia, Germany, France, and Indonesia. No matter where the game was played, people almost always made fair offers. As the economist Robert Frank notes, "Seen through the lens of modern self-interest theory, such behavior is the human equivalent of planets traveling in square orbits."
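The payoff logic behind this puzzle is easy to state. Here is a minimal sketch in Python (the code is illustrative, not taken from the original experiments, and the 30 percent rejection threshold is a hypothetical stand-in for the responder's sense of fairness):

```python
# Illustrative sketch: the ultimatum game's payoff logic.
# A rejected offer leaves both players with nothing.

def play_ultimatum(offer, threshold=0.0, pot=10.0):
    """Return (proposer_payoff, responder_payoff) for a given offer."""
    if offer >= threshold * pot:  # responder accepts a "fair enough" offer
        return pot - offer, offer
    return 0.0, 0.0               # rejection punishes both players

# The textbook "rational" responder (threshold=0) accepts any positive
# offer, so a self-interested proposer should keep almost everything:
print(play_ultimatum(offer=1.0, threshold=0.0))  # -> (9.0, 1.0)

# A human responder who spurns offers below ~30 percent of the pot
# makes greed a losing strategy, so proposers learn to split fairly:
print(play_ultimatum(offer=1.0, threshold=0.3))  # -> (0.0, 0.0)
print(play_ultimatum(offer=5.0, threshold=0.3))  # -> (5.0, 5.0)
```

Seen this way, the "irrational" rejection is precisely what makes the fair offer the proposer's best move.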

Why do proposers engage in such generosity? The answer returns us to the act of sympathy and the unique brain circuits that determine moral decisions. Adam Smith, the eighteenth-century philosopher, was there first. Although Smith is best known for his economic treatise The Wealth of Nations, he was most proud of The Theory of Moral Sentiments, his sprawling investigation into the psychology of morality. Like his friend David Hume, Smith was convinced that our moral decisions were shaped by our emotional instincts. People were good for essentially irrational reasons.

According to Smith, the source of these moral emotions was the imagination, which people used to mirror the minds of others. (The reflective mirror, which had recently become a popular household item in Smith's time, is an important metaphor in his writing on morality.) "As we have no immediate experience of what other men feel," Smith wrote, "we can form no idea of the manner in which they are affected, but by conceiving what we ourselves should feel in the like situation." This mirroring process leads to an instinctive sympathy for one's fellow man—Smith called it "fellow-feeling"—that forms the basis for moral decisions.

Smith was right. The reason a proposer makes a fair offer in the ultimatum game is that he is able to imagine how the responder will feel about an unfair one. (When people play the game against computers, they are never generous.) The proposer knows that a lowball offer will make the other person angry, which will lead him to reject the offer, which will leave everybody with nothing. So the proposer suppresses his greed and equitably splits the ten dollars. That ability to sympathize with the feelings of others leads to fairness.

The sympathetic instinct is also one of the central motivations behind altruism, which is demonstrated when people engage in selfless acts such as donating to charity and helping out perfect strangers. In a recent experiment published in Nature Neuroscience, scientists at Duke University imaged the brains of people as they watched a computer play a simple video game. Because the subjects were told that the computer was playing the game for a specific purpose—it wanted to earn money—their brains automatically treated the computer like an "intentional agent," complete with goals and feelings. (Human minds are so eager to detect other minds that they often imbue inanimate objects, like computers and stuffed animals, with internal mental states.) Once that happened, the scientists were able to detect activity in the superior temporal sulcus and other specialized areas that help each of us theorize and sympathize with the emotions of other people. Even though the subjects knew they were watching a computer, they couldn't help but imagine what the computer was feeling.

Now comes the interesting part: the scientists noticed that there was a lot of individual variation during the experiment. Some people had very active sympathetic brains, while others seemed rather uninterested in thinking about the feelings of someone else. The scientists then conducted a survey of altruistic behavior, asking people how likely they would be to "help a stranger carry a heavy object" or "let a friend borrow a car." That's when the correlation became clear: people who showed more brain activity in their sympathetic regions were also much more likely to exhibit altruistic behavior. Because they intensely imagined the feelings of other people, they wanted to make other people feel better, even if it came at personal expense.

But here's the lovely secret of altruism: it feels good. The brain is designed so that acts of charity are pleasurable; being nice to others makes us feel nice. In a recent brain-imaging experiment, a few dozen people were each given $128 of real money and allowed to choose between keeping the money and donating it to charity. When they chose to give away the money, the reward centers of their brains became active and they experienced the delightful glow of unselfishness. In fact, several subjects showed more reward-related brain activity during acts of altruism than they did when they actually received cash rewards. From the perspective of the brain, it literally was better to give than to receive.

ONE OF THE ways neuroscientists learn about the brain is by studying what happens when something goes wrong with it. For example, scientists learned about the importance of our moral emotions by studying psychopaths; they learned about the crucial role of dopamine by studying people with Parkinson's; and brain tumors in the frontal lobes have helped to illuminate the substrate of rationality. This might seem callous—tragedy is turned into an investigative tool—but it's also extremely effective. The broken mind helps us understand how the normal mind works.

When it comes to the sympathetic circuits in the human brain, scientists have learned a tremendous amount by studying people with autism. When Dr. Leo Kanner first diagnosed a group of eleven children with autism, in 1943, he described the syndrome as one of "extreme aloneness." (The Greek autos means "self," and autism translates to "the state of being unto one's self.") The syndrome afflicts one in every 160 individuals and leaves them emotionally isolated, incapable of engaging in many of the social interactions most people take for granted. As the Cambridge psychologist Simon Baron-Cohen puts it, people with autism are "mind-blind." They have tremendous difficulty interpreting the emotions and mental states of others.*

Scientists have long suspected that autism is a disease of brain development. For some still mysterious reason, the cortex doesn't wire itself correctly during the first year of life. It now appears that one of the neural systems compromised in people with autism is a class of cells known as mirror neurons. The name of the cell type is literal: these neurons mirror the movements of other people. If you see someone else smile, then your mirror neurons will light up as if you were smiling. The same thing happens whenever you see someone scowl, grimace, or cry. These cells reflect, on the inside, the expressions of everybody else. As Giacomo Rizzolatti, one of the scientists who discovered mirror neurons, says, "They [mirror neurons] allow us to grasp the minds of others not through conceptual reasoning but through direct simulation; by feeling, not by thinking."

This is what people with autism have to struggle to do. When scientists at UCLA imaged the brains of autistic people as they looked at photographs of faces in different emotional states, the scientists discovered that the autistic brain, unlike the normal brain, showed no activity in its mirror-neuron area. As a result, the autistic subjects had difficulty interpreting the feelings on display. They saw the angry face as nothing but a set of flexed facial muscles. A happy face was simply a different set of muscles. But neither expression was correlated with a specific emotional state. In other words, they never developed a theory about what was happening inside other people's minds.

A brain-imaging study done by scientists at Yale sheds further light on the anatomical source of autism. The study examined the parts of the brain that were activated when a person looked at a face and when he or she looked at a static object, like a kitchen chair. Normally, the brain reacts very differently to these stimuli. Whenever you see a human face, you use a highly specialized brain region called the fusiform face area (FFA) that is solely devoted to helping you recognize other people. In contrast, when you look at a chair, the brain relies on the inferior temporal gyrus, an area activated by any sort of complex visual scene. However, in the study, people with autism never turned on the fusiform face area. They looked at human faces with the part of the brain that normally recognizes objects. A person was just another thing. A face generated no more emotion than a chair.

These two brain deficits—a silent mirror-neuron circuit and an inactive fusiform face area—help to explain the social difficulties of people with autism. Their "extreme aloneness" is a direct result of not being able to interpret and internalize the emotions of other people. Because of this, they often make decisions that, in the words of one autism researcher, "are so rational they can be hard to understand."

For instance, when people with autism play the ultimatum game, they act just like the hypothetical agents in an economics textbook. They try to apply a rational calculus to the irrational world of human interaction. On average, they make offers that are 80 percent lower than those of normal subjects, with many offering less than a nickel. This greedy strategy ends up being ineffective, since the angry responders tend to reject such unfair offers. But the proposers with autism are unable to anticipate these feelings. Consider this quote from an upset autistic adult whose offer of ten cents in a ten-dollar ultimatum game was spurned: "I did not earn any money because all the other players are stupid! How can you reject a positive amount of money and prefer to get zero? They just did not understand the game! You should have stopped the experiment and explained it to them..."

Autism is a chronic condition, a permanent form of mind blindness. But it's possible to induce a temporary state of mind blindness, in which the brain areas that normally help a person sympathize with others are turned off; our sense of "fellow-feeling" is natural, but it's also very fragile. A simple variation on the ultimatum game, known as the dictator game, makes this clear. Unlike the ultimatum game, in which the responder can decide whether or not to accept the monetary offer, in the dictator game, the proposer simply dictates how much the responder receives. What's surprising is that these tyrants are still rather generous and give away about one-third of the total amount of money. Even when people have absolute power, they remain constrained by their sympathetic instincts.
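Structurally, the only change from the ultimatum game is that the responder's veto is removed. A companion sketch makes the difference plain (again in Python, again purely illustrative; the one-third figure is the behavioral finding just described, not a consequence of the rules):

```python
# Illustrative sketch: the dictator game strips out the acceptance step,
# so whatever the dictator allocates is simply final.

def play_dictator(gift, pot=10.0):
    """Return (dictator_payoff, responder_payoff); no veto exists."""
    return pot - gift, gift

# A purely self-interested dictator would keep the entire pot:
print(play_dictator(gift=0.0))  # -> (10.0, 0.0)

# Yet face to face, people typically give away about a third of it:
print(play_dictator(gift=3.3))  # -> (6.7, 3.3)
```

Nothing in the rules compels generosity; only the sympathetic instincts do.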

However, it takes only one minor alteration for this benevolence to disappear. When the dictator cannot see the responder—the two players are located in separate rooms—the dictator lapses into unfettered greed. Instead of giving away a significant share of the profits, the despots start offering mere pennies and pocketing the rest. Once people become socially isolated, they stop simulating the feelings of other people. Their moral intuitions are never turned on. As a result, the inner Machiavelli takes over, and the sense of sympathy is squashed by selfishness. The UC Berkeley psychologist Dacher Keltner has found that in many social situations, people with power act just like patients with damage to the emotional brain. "The experience of power might be thought of as having someone open up your skull and take out that part of your brain so critical to empathy and socially appropriate behavior," he says. "You become very impulsive and insensitive, which is a bad combination."

Paul Slovic, a psychologist at the University of Oregon, has exposed another blind spot in the sympathetic brain. His experiments are simple: he asks people how much they would be willing to donate to various charitable causes. For example, Slovic found that when people were shown a picture of Rokia, a starving Malian child, they acted with impressive generosity. After looking at Rokia's emaciated body and haunting brown eyes, they each donated, on average, two dollars and fifty cents to the charity Save the Children. However, when other people were provided with a list of statistics about starvation throughout Africa—more than three million children in Malawi are malnourished, more than eleven million people in Ethiopia need immediate food assistance, and so forth—the average donation was 50 percent lower. At first glance, this makes no sense. When people are informed about the true scope of the problem, they should give more money, not less. Rokia's tragic story is just the tip of the iceberg.

According to Slovic, the problem with statistics is that they don't activate our moral emotions. The depressing numbers leave us cold: our minds can't comprehend suffering on such a massive scale. This is why we are riveted when one child falls down a well but turn a blind eye to the millions of people who die every year for lack of clean water. And why we donate thousands of dollars to help a single African war orphan featured on the cover of a magazine but ignore widespread genocides in Rwanda and Darfur. As Mother Teresa put it, "If I look at the mass, I will never act. If I look at the one, I will."

4

The capacity for making moral decisions is innate—the sympathetic circuit is hard-wired, at least in most of us—but it still requires the right kind of experience in order to develop. When everything goes according to plan, the human mind naturally develops a potent set of sympathetic instincts. We will resist pushing the man off the bridge, make fair offers in the ultimatum game, and get deeply disturbed by images of other people in pain.

However, if something goes amiss during the developmental process—if the circuits that underlie moral decisions never mature—the effects can be profound. Sometimes, as with autism, the problem is largely genetic. (Scientists estimate the heritability of autism at somewhere between 80 and 90 percent, which makes it one of the most heritable of all neurological conditions.) But there's another way that the developing brain can be permanently damaged: child abuse. When children are molested or neglected or unloved, their emotional brains are warped. (John Gacy, for example, was physically abused throughout his childhood by his alcoholic father.) The biological program that allows human beings to sympathize with the feelings of others is turned off. Cruelty makes us cruel. Abuse makes us abusive. It's a tragic loop.

The first evidence for this idea came from the work of Harry Harlow.* In the early 1950s, Harlow decided to start a breeding colony of monkeys at the University of Wisconsin. He was studying Pavlovian conditioning in primates, but he needed more data, which meant that he needed more animals. Although nobody had ever successfully bred monkeys in the United States before then, Harlow was determined.

The breeding colony began with just a few pregnant female monkeys. Harlow carefully monitored the expectant mothers; after each gave birth, he immediately isolated the infant in an immaculately clean cage. At first, everything went according to plan. Harlow raised the babies on a formula of sugar and evaporated milk fortified with a slew of vitamins and supplements. He fed the monkeys from sterilized doll bottles every two hours and carefully regulated the cycles of light and dark. In order to minimize the spread of disease, Harlow never let the babies interact with one another. The result was a generation of monkeys that were bigger and stronger than their peers from the wild.

But the physical health of these young monkeys hid a devastating sickness: they had been wrecked by loneliness. Their short lives had been defined by total isolation, and they proved incapable of even the most basic social interactions. They would maniacally rock back and forth in their metal cages, sucking on their thumbs until they bled. When they encountered other monkeys, they would shriek in fear, run to the corners of their cages, and stare at the floor. If they felt threatened, they would lash out in vicious acts of violence. Sometimes these violent tendencies were turned inward. One monkey ripped out its fur in bloody clumps. Another gnawed off its own hand. Because of their early deprivation, these babies had to be isolated for the rest of their lives.

For Harlow, these troubled baby monkeys demonstrated that the developing mind needed more than proper nutrition. But what did it need? The first clue came from watching these primate babies. The scientists had lined their cages with cloth diapers so that the monkeys didn't have to sleep on the cold concrete floor. The motherless babies quickly became obsessed with these cloth rags. They would wrap themselves in the fabric and cling to the diapers if anybody approached the cages. The soft fabric was their sole comfort.

This poignant behavior inspired Harlow to come up with a new experiment. He decided to raise the next generation of baby monkeys with two different pretend mothers. One was a wire mother, formed out of wire mesh, while the other was a mother made out of soft terry cloth. Harlow assumed that all things being equal, the babies would prefer the cloth mothers, since they would be able to cuddle with the fabric. To make the experiment more interesting, Harlow added a slight twist to a few of the cages. Instead of hand-feeding some babies, he put their milk bottles in the hands of the wire mothers. His question was simple: what was more important, food or affection? Which mother would the babies want more?

In the end, it wasn't even close. No matter which mother held the milk, the babies always preferred the cloth mothers. The monkeys would run over to the wire mothers and quickly sate their hunger before immediately returning to the comforting folds of cloth. By the age of six months, the babies were spending more than eighteen hours a day nuzzling with their soft parent. They were with the wire mothers only long enough to eat.

The moral of Harlow's experiment is that primate babies are born with an intense need for attachment. They cuddled with the cloth mothers because they wanted to experience the warmth and tenderness of a real mother. Even more than food, these baby monkeys craved the feeling of affection. "It's as if the animals are programmed to seek out love," Harlow wrote.

When this need for love wasn't met, the babies suffered from a tragic list of side effects. The brain was permanently damaged so that the monkeys with wire mothers didn't know how to deal with others, sympathize with strangers, or behave in a socially acceptable manner. Even the most basic moral decisions were impossible. As Harlow would later write, "If monkeys have taught us anything, it's that you've got to learn how to love before you learn how to live."

Harlow would later test the limits of animal experimentation, remorselessly probing the devastating effects of social isolation. His cruelest experiment was putting baby monkeys in individual cages with nothing—not even a wire mother—for months at a time. The outcome was unspeakably sad. The isolated babies were like primate psychopaths, completely numb to all expressions of emotion. They started fights without provocation and they didn't stop fighting until one of the monkeys had been seriously injured. They were even vicious to their own children. One psychopathic monkey bit off the fingers of her child. Another killed her crying baby by crushing its head in her mouth. Most psychopathic mothers, however, just perpetuated the devastating cycle of cruelty. When their babies tried to cuddle, they would push them away. The confused infants would try again and again, but to no avail. Their mothers felt nothing.

WHAT HAPPENS TO monkeys can happen to people. This is the tragic lesson of Communist Romania. In 1966, Nicolae Ceausescu, the despotic leader of the country, banned all forms of contraception, and the country was suddenly awash in unwanted babies. The predictable result was a surfeit of orphans; poor families surrendered the kids they couldn't afford.

The state-run orphanages of Romania were overwhelmed and underfunded. Babies were left in cribs with nothing but plastic bottles. Toddlers were tied to their beds and never touched. The orphanages lacked heat in the winter. Children with disabilities were consigned to the basement, and some went years without seeing natural light. Older children were drugged so that they would sleep for days at a time. In some orphanages, more than 25 percent of the children died before the age of five.

The children who managed to survive the Romanian orphanages were permanently scarred. Many had stunted bodies, shrunken bones, and untreated infections. But the most devastating legacy of the orphanage system was psychological. Many of the abandoned children suffered from severe emotional impairments. They were often hostile to strangers, abusive to one another, and incapable of even the most basic social interactions. Couples who adopted Romanian orphans from these institutions reported a wide array of behavioral disorders. Some children cried whenever they were touched. Others stared into space for hours and then suddenly flew into violent rages, attacking everything within reach. One Canadian couple walked into the bedroom of their three-year-old son to discover that he had just thrown their new kitten out the window.

When neuroscientists imaged the brain activity of Romanian orphans, they saw reduced activity in regions that are essential for emotion and social interaction, such as the orbitofrontal cortex and the amygdala. The orphans also proved unable to perceive emotions in others and had a pronounced inability to interpret facial expressions. Finally, the neglected children showed significantly reduced levels of vasopressin and oxytocin, two hormones crucial for the development of social attachments. (These hormonal deficiencies persisted for years afterward.) For these victims of abuse, the world of human sympathy was incomprehensible. They struggled to recognize the emotions of others, and they also found it difficult to modulate their own emotions.

Studies of American children who are abused at an early age paint a similarly bleak picture. In the early 1980s, the psychologists Mary Main and Carol George looked at a group of twenty toddlers from "families in stress." Half of these children had been victims of serious physical abuse. The other half were from broken homes—many of them were living with foster parents—but they hadn't been hit or hurt. Main and George wanted to see how these two groups of disadvantaged toddlers responded to a crying classmate. Would they display normal human sympathy? Or would they be unable to relate to the feelings of their peer? The researchers found that almost all the nonabused children reacted to the upset child with concern. Their instinctive sympathy led them to make some attempts to console the child. They were upset by seeing somebody else upset.

Childhood abuse, however, changed everything. The abused toddlers didn't know how to react to their distressed classmate. They occasionally made sympathetic gestures, but these gestures often degenerated into a set of aggressive threats if the other child didn't stop crying. Here is the study's description of Martin, an abused two-and-a-half-year-old: "Martin ... tried to take the hand of the crying other child, and when she resisted, he slapped her on the arm with his open hand. He then turned away from her to look at the ground and began vocalizing very strongly. 'Cut it out! cut it out!,' each time saying it a little faster and louder. He patted her, but when she became disturbed by his patting, he retreated, hissing at her and baring his teeth. He then began patting her on the back again, his patting became beating, and he continued beating her despite her screams." Even when Martin wanted to help, he ended up making things worse. An abused two-year-old named Kate exhibited a similar pattern of behavior. At first she reacted with tenderness to the distressed child and gently caressed him on the back. "Her patting, however, soon became very rough," the researchers wrote, "and she began hitting him hard. She continued to hit him until he crawled away." Because Kate and Martin couldn't understand the feelings of someone else, the world of human interaction had become an impenetrable place.

What these abused children were missing was an education in feeling. Because they had been denied that influx of tender emotion that the brain is built to expect, they were seriously scarred, at least on the inside. It's not that these kids wanted to be cruel or unsympathetic. They were simply missing the patterns of brain activity that normally guide our moral decisions. As a result, they reacted to the toddler in distress just as their abusive parents reacted to their own distress: with threats and violence.

But these tragic examples are exceptions to the rule. We are designed to feel one another's pain so that we're extremely distressed when we hurt others and commit moral transgressions. Sympathy is one of humanity's most basic instincts, which is why evolution lavished so much attention on mirror neurons, the fusiform face area, and those other brain regions that help theorize about other minds. As long as a person is loved as a child and doesn't suffer from any developmental disorders, the human brain will naturally reject violence and make fair offers and try to comfort the crying child. This behavior is just a basic part of who we are. Evolution has programmed us to care about one another.

Consider this poignant experiment: six rhesus monkeys were trained to pull on a pair of chains to get food. If they pulled on one chain, they got a large amount of their favorite food. If they pulled on the other chain, they got a small amount of a less enticing food. As you can probably guess, the monkeys quickly learned to pull on the chain that gave them more of what they wanted. They maximized their reward.

After a few weeks of this happy setup, one of the six monkeys got hungry and pulled on the maximizing chain. This is when something terrible happened: a separate monkey in a different cage was shocked with a painful jolt of electricity. All six monkeys saw it happen. They heard the awful shriek. They watched the monkey grimace and cower in fear. The change in their behavior was immediate. Four of the monkeys decided to stop pulling on the maximizing chain. They were now willing to settle for less food as long as the other monkey wasn't hurt. The fifth monkey stopped pulling on either chain for five days, and the sixth monkey stopped pulling for twelve days. They starved themselves so that a monkey they didn't know wasn't forced to suffer.