How We Decide - Jonah Lehrer (2009)

Chapter 5. Choking on Thought

The lesson of Wag Dodge, television focus groups, and Flight 232 is that a little rational thought can save the day. In such situations, the prefrontal cortex is uniquely designed to come up with creative answers, to generate that flash of insight that leads a person to the right decision. Such narratives fit comfortably with our broad assumption that more deliberation is always better. In general, we believe that carefully studying something leads to better outcomes, since we'll avoid careless errors. Consumers should always comparison shop so that they find the best products. Before we invest in stocks, we are supposed to learn as much as possible about the company. We expect doctors to order numerous diagnostic tests, even if the tests are expensive and invasive. In other words, people believe that a decision that's the result of rational deliberation will always be better than an impulsive decision. This is why one shouldn't judge a book by its cover or propose marriage on the first date. When in doubt, we try to resort to careful analysis and engage the rational circuits of the prefrontal cortex.

This faith in the power of reason is easy to understand. Ever since Plato, we've been assured that a perfectly rational world would be a perfect world, a Shangri-la ruled by statistical equations and empirical evidence. People wouldn't run up credit card debt or take out subprime loans. There would be no biases or prejudices, just cold, hard facts. This is the utopia dreamed of by philosophers and economists.

However, this new science of decision-making (a science rooted in the material details of the brain) is most interesting when the data turns out to contradict the conventional wisdom. Ancient assumptions are revealed as just that: assumptions. Untested theories. Unsubstantiated speculation. Plato, after all, didn't do experiments. He had no way of knowing that the rational brain couldn't solve every problem, or that the prefrontal cortex had severe limitations. The reality of the brain is that, sometimes, rationality can lead us astray.

FOR RENÉE FLEMING, the opera superstar, the first sign of trouble came during a routine performance of Mozart's The Marriage of Figaro at the Lyric Opera of Chicago. Fleming was singing the "Dove sono" aria from act 3, one of the most beloved songs in all of opera. At first, Fleming sang Mozart's plaintive melody with her typical perfection. She made the high notes sound effortless, her voice capturing the intensity of emotion while maintaining her near-perfect pitch. Most sopranos struggle with Mozart's tendency to compose in the passaggio, or the awkward part of the vocal range between registers. But not Fleming. Her performance the night before had earned her a long standing ovation.

But then, just as she neared the most difficult section of the aria—a crescendo of fluttering pitches, in which her voice has to echo the violins—Fleming felt a sudden stab of self-doubt. She couldn't stop thinking that she was about to make a mistake. "It caught me by surprise," she later wrote in her memoir. "That aria was never an easy piece, but it was certainly one with which I had had an enormous amount of experience." In fact, Fleming had performed this piece hundreds of times before. Her first big operatic break had been singing the role of the Countess at the Houston Opera, more than a decade earlier. The tragic "Dove sono" aria, in which the Countess questions the loss of her happiness, had been featured on Fleming's first album and became a standard part of her repertoire. It was, Fleming said, her "signature piece."

And yet now, she could barely breathe. She felt her diaphragm constrict, sucking the power from her voice. Her throat tightened and her pulse started to race. Although Fleming fought her way through the rest of the song, stealing breaths wherever possible—she still managed to get a standing ovation—she was deeply shaken. What had happened to her self-confidence? Why did her favorite aria suddenly make her so nervous?

Before long, Fleming's performance problems became chronic. The songs that used to be second nature were suddenly impossible to sing. Every performance was a struggle against anxiety, against that monologue in her head telling her not to make a mistake. "I had been undermined by a very negative inner voice," she wrote, "a little nattering in my ear that said, 'Don't do that ... Don't do this ... Your breath is tight ... Your tongue has gone back ... Your palate is down ... The top is spread ... Relax your shoulders!'" Eventually, it got so bad that Fleming planned to quit opera altogether. She was one of the most talented performers in the world, and yet she could no longer perform.

Performers call such failures "choking," because a person so frayed by pressure might as well not have any oxygen. What makes choking so morbidly fascinating is that the only thing incapacitating the performer is his or her own thoughts. Fleming, for example, was so worried about hitting the high notes of Mozart's opera that she failed to hit them. The inner debate over proper technique made her voice seize up, and it became impossible to sing with the necessary speed and virtuosity. Her mind was sabotaging itself.

What causes choking? Although it might seem like an amorphous category of failure, or even a case of excess emotion, choking is actually triggered by a specific mental mistake: thinking too much. The sequence of events typically goes like this: When a person gets nervous about performing, he naturally becomes extra self-conscious. He starts to focus on himself, trying to make sure that he doesn't make any mistakes. He begins scrutinizing actions that are best performed on autopilot. Fleming started to think about aspects of singing that she hadn't thought about since she was a beginner, such as where to position her tongue and how to shape her mouth for different pitches. This kind of deliberation can be lethal for a performer. The opera singer forgets how to sing. The pitcher concentrates too much on his motion and loses control of his fastball. The actor gets anxious about his lines and seizes up onstage. In each of these instances, the natural fluidity of performance is lost. The grace of talent disappears.

Consider one of the most famous chokes in sports history: the collapse of Jean Van de Velde on the last hole of the 1999 British Open. Until that point in the tournament, Van de Velde had been playing nearly flawless golf. He had a three-stroke lead entering the eighteenth hole, which meant that he could double-bogey (that is, be two strokes over par) and still win. On his previous two rounds, he'd birdied (been one stroke under par) this very hole.

Now Van de Velde was the only player on the course. He knew that the next few shots could change his life forever, turning a PGA journeyman into an elite golfer. All he had to do was play it safe. During his warm-up swings on the eighteenth, Van de Velde looked nervous. It was a blustery Scottish day, but beads of sweat were glistening on his face. After repeatedly wiping away the perspiration, he stepped up to the tee, planted his feet, and jerked back his club. His swing looked awkward. His hips spun out ahead of his body, so that the face of his driver wasn't straight on the ball. Van de Velde watched the white speck sail away and then bowed his head. He had bent the ball badly to the right, and it ended up twenty yards from the fairway, buried in the rough. On his next shot, he made the same mistake, but this time he sent the ball so far right that it bounced off the grandstands and ended up in a patch of knee-high grass. His third shot was even worse. By this point, his swing was so out of sync that he almost missed the ball; it was launched into the air along with a thick patch of grass. As a result, his shot came up far short and plunged into the water hazard just before the green. Van de Velde grimaced and turned away, as if he couldn't bear to watch his own collapse. After taking a penalty, he was still sixty yards short of the hole. Once again, his tentative swing was too weak, and the ball ended up exactly where he didn't want it: in a sandy bunker. From there, he managed to chip onto the green and, after seven errant shots, finish the round. But it was too late. Van de Velde had lost the British Open.

The pressure of the eighteenth hole was Van de Velde's undoing. When he started thinking about the details of his swing, his swing broke down. On the last seven shots, Van de Velde seemed like a different golfer. He had lost his easy confidence. Instead of playing like a pro on the PGA tour, he started swinging with the cautious deliberation of a beginner with a big handicap. He was suddenly focusing on the mechanics of his stroke, making sure that he didn't torque his wrist or open his hips. He was literally regressing before the crowd, reverting to a mode of explicit thought that he hadn't used on the golf green since he was a child learning how to swing.

Sian Beilock, a professor of psychology at the University of Chicago, has helped illuminate the anatomy of choking. She uses putting on the golf green as her experimental paradigm. When people are first learning how to putt, the activity can seem daunting. There are just so many things to think about. A golfer needs to assess the lay of the green, calculate the line of the ball, and get a feel for the grain of the turf. Then the player has to monitor the putting motion and make sure the ball is hit with a smooth, straight stroke. For an inexperienced player, a golf putt can seem impossibly hard, like a life-size trigonometry problem.

But the mental exertion pays off, at least at first. Beilock has shown that novice putters hit better shots when they consciously reflect on their actions. The more time the beginner spends thinking about the putt, the more likely he is to sink the ball in the hole. By concentrating on the golf game, by paying attention to the mechanics of the stroke, the novice can avoid beginners' mistakes.

A little experience, however, changes everything. After a golfer has learned how to putt—once he or she has memorized the necessary movements—analyzing the stroke is a waste of time. The brain already knows what to do. It automatically computes the slope of the green, settles on the best putting angle, and decides how hard to hit the ball. In fact, Beilock found that when experienced golfers are forced to think about their putts, they hit significantly worse shots. "We bring expert golfers into our lab, and we tell them to pay attention to a particular part of their swing, and they just screw up," Beilock says. "When you are at a high level, your skills become somewhat automated. You don't need to pay attention to every step in what you're doing."

Beilock believes that this is what happens when people "choke." The part of the brain that monitors behavior—a network centered in the prefrontal cortex—starts to interfere with decisions that are normally made without thinking. It begins second-guessing the skills that have been honed through years of diligent practice. The worst part about choking is that it tends to be a downward spiral. The failures build on one another, and a stressful situation is made even more stressful. After Van de Velde lost the British Open, his career hit the skids. Since 1999, he has failed to finish in the top ten in a major tournament.*

Choking is merely a vivid example of the havoc that can be caused by too much thought. It's an illustration of rationality gone awry, of what happens when we rely on the wrong brain areas. For opera singers and golf players, such deliberate thought processes interfere with the trained movements of their muscles, so that their own bodies betray them.

But the problem of thinking too much isn't limited to physical performers. Claude Steele, a professor of psychology at Stanford, studies the effects of performance anxiety on standardized-test scores. When Steele gave a large group of Stanford sophomores a set of questions from the Graduate Record Examination (GRE) and told the students that it would measure their innate intellectual ability, he found that the white students performed significantly better than their black counterparts. This discrepancy—commonly known as the achievement gap—conformed to a large body of data showing that minority students tend to score lower on a wide variety of standardized tests, from the SAT to the IQ test.

However, when Steele gave a separate group of students the same test but stressed that it was not a measure of intelligence—he told them it was merely a preparatory drill—the scores of the white and black students were virtually identical. The achievement gap had been closed. According to Steele, the disparity in test scores was caused by an effect that he calls stereotype threat. When black students are told that they are taking a test to measure their intelligence, it brings to mind, rather forcefully, the ugly and untrue stereotype that blacks are less intelligent than whites. (Steele conducted his experiments soon after The Bell Curve was published, but the same effect also exists when women take a math test that supposedly measures "cognitive differences between the genders" or when white males are exposed to a stereotype about the academic superiority of Asians.) The Stanford sophomores were so worried about being viewed through the lens of a negative stereotype that they performed far below their abilities. "What you tend to see [during stereotype threat] is carefulness and second-guessing," Steele said. "When you go and interview them, you have the sense that when they are in the stereotype-threat condition they say to themselves, 'Look, I'm going to be careful here. I'm not going to mess things up.' Then, after having decided to take that strategy, they calm down and go through the test. But that's not the way to succeed on a standardized test. The more you do that, the more you will get away from the intuitions that help you, the quick processing. They think they did well, and they are trying to do well. But they are not."

The lesson of Renée Fleming, Jean Van de Velde, and these Stanford students is that rational thought can backfire. While reason is a powerful cognitive tool, it's dangerous to rely exclusively on the deliberations of the prefrontal cortex. When the rational brain hijacks the mind, people tend to make all sorts of decision-making mistakes. They hit bad golf shots and choose wrong answers on standardized tests. They ignore the wisdom of their emotions—the knowledge embedded in their dopamine neurons—and start reaching for things that they can explain. (One of the problems with feelings is that even when they are accurate, they can still be hard to articulate.) Instead of going with the option that feels the best, a person starts going with the option that sounds the best, even if it's a very bad idea.

1

When Consumer Reports tests a product, it follows a strict protocol. First, the magazine's staff assembles a field of experts. If they're testing family sedans, they rely on automotive experts; if audio speakers are being scrutinized, the staff members bring in people trained in acoustics. Then the staff gathers all the relevant products in that category and tries to hide the brand names. (This often requires lots of masking tape.) The magazine aspires to objectivity.

Back in the mid-1980s, Consumer Reports decided to conduct a taste test for strawberry jam. As usual, the editors invited several food experts, all of whom were "trained sensory panelists." These experts blindly sampled forty-five different jams, scoring each on sixteen different characteristics, such as sweetness, fruitiness, texture, and spreadability. The scores were then totaled, and the jams were ranked.

A few years later, Timothy Wilson, a psychologist at the University of Virginia, decided to replicate this taste test with his undergraduate students. Would the students have the same preferences as the experts? Did everybody agree on which strawberry jams tasted the best?

Wilson's experiment was simple: he took the first, eleventh, twenty-fourth, thirty-second, and forty-fourth best-tasting jams according to Consumer Reports and asked the students to rank them. In general, the preferences of the college students closely mirrored the preferences of the experts. Both groups thought Knott's Berry Farm and Alpha Beta were the two best-tasting brands, with Featherweight a close third. They also agreed that the worst strawberry jams were Acme and Sorrel Ridge. When Wilson compared the preferences of the students and the Consumer Reports panelists, he found that they had a statistical correlation of .55, which is rather impressive. When it comes to judging jam, we are all natural experts. Our brains are able to automatically pick out the products that provide us with the most pleasure.

But that was only the first part of Wilson's experiment. He repeated the jam taste test with a separate group of college students, only this time he asked them to explain why they preferred one brand over another. As they tasted the jams, the students filled out written questionnaires, which forced them to analyze their first impressions, to consciously explain their impulsive preferences. All this extra analysis seriously warped their jam judgment. The students now preferred Sorrel Ridge—the worst-tasting jam, according to Consumer Reports—to Knott's Berry Farm, which was the experts' favorite jam. The correlation plummeted to .11, which means that there was virtually no relationship between the rankings of the experts and the opinions of these introspective students.
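
The correlation figures Wilson reports can be read as ordinary correlations between two rank orderings of the same jams. Here is a minimal sketch of that calculation in Python; the jam names come from the text, but the rank orderings (and the resulting 0.80) are hypothetical placeholders for illustration, not Wilson's or Consumer Reports' actual data.

```python
# A minimal sketch of how a correlation between two sets of rankings is
# computed. The rank orderings below are illustrative placeholders, not the
# actual data from Consumer Reports or Wilson's study.
import numpy as np

jams = ["Knott's Berry Farm", "Alpha Beta", "Featherweight", "Acme", "Sorrel Ridge"]

expert_rank = [1, 2, 3, 4, 5]    # hypothetical expert ordering, best to worst
student_rank = [2, 1, 3, 5, 4]   # hypothetical ordering from one student group

# Pearson correlation between the two rank vectors (on full rankings this is
# the same as a Spearman rank correlation)
r = np.corrcoef(expert_rank, student_rank)[0, 1]
print(f"correlation between expert and student rankings: {r:.2f}")  # 0.80 here
```

The closer this number is to 1, the more the students' ordering tracks the experts'; a value near 0, like the .11 Wilson found for his introspective tasters, means the two orderings are essentially unrelated.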

Wilson argues that "thinking too much" about strawberry jam causes us to focus on all sorts of variables that don't actually matter. Instead of just listening to our instinctive preferences—the best jam is associated with the most positive feelings—our rational brains search for reasons to prefer one jam over another. For example, someone might notice that the Acme brand is particularly easy to spread, and so he'll give it a high ranking, even if he doesn't actually care about the spreadability of jam. Or a person might notice that Knott's Berry Farm jam has a chunky texture, which seems like a bad thing, even if she's never really thought about the texture of jam before. But having a chunky texture sounds like a plausible reason to dislike a jam, and so she revises her preferences to reflect this convoluted logic. People talk themselves into liking Acme jam more than the Knott's Berry Farm product.

This experiment illuminates the danger of always relying on the rational brain. There is such a thing as too much analysis. When you overthink at the wrong moment, you cut yourself off from the wisdom of your emotions, which are much better at assessing actual preferences. You lose the ability to know what you really want. And then you choose the worst strawberry jam.

WILSON WAS INTRIGUED by the strawberry-jam experiment. It seemed to contradict one of the basic tenets of Western thought, which is that careful self-analysis leads to wisdom. As Socrates famously said, "The unexamined life is not worth living." Socrates clearly didn't know about strawberry jam.

But perhaps food products are unique, since people are notoriously bad at explaining their own preferences. So Wilson came up with another experiment. This time he asked female college students to select their favorite poster. He gave them five options: a Monet landscape, a van Gogh painting of some purple lilies, and three humorous cat posters. Before making their choices, the subjects were divided into two groups. The first was the non-thinking group: they were instructed to simply rate each poster on a scale from 1 to 9. The second group had a tougher task: before they rated the posters, they were given questionnaires that asked them why they liked or disliked each of the five posters. At the end of the experiment, each of the subjects took her favorite poster home.

The two groups of women made very different choices. Ninety-five percent of the non-thinkers chose either the Monet or the van Gogh. They instinctively preferred the fine art. However, subjects who thought about their poster decisions first were almost equally split between the paintings and the humorous cat posters. What accounted for the difference? "When looking at a painting by Monet," Wilson writes, "most people generally have a positive reaction. When thinking about why they feel the way they do, however, what comes to mind and is easiest to verbalize might be that some of the colors are not very pleasing, and that the subject matter, a haystack, is rather boring." As a result, the women ended up selecting the funny feline posters, if only because those posters gave them more grist for their explanatory mill.

Wilson conducted follow-up interviews with the women a few weeks later to see which group had made the better decision. Sure enough, the members of the non-thinking group were much more satisfied with their choice of posters. While 75 percent of the people who had chosen cat posters regretted their selection, nobody regretted selecting the artistic poster. The women who listened to their emotions ended up making much better decisions than the women who relied on their reasoning powers. The more people thought about which posters they wanted, the more misleading their thoughts became. Self-analysis resulted in less self-awareness.

This isn't just a problem for insignificant decisions like choosing jam for a sandwich or selecting a cheap poster. People can also think too much about more important choices, like buying a home. As Ap Dijksterhuis, a psychologist at Radboud University, in the Netherlands, notes, when people are shopping for real estate, they often fall victim to a version of the strawberry-jam error, or what he calls a "weighting mistake." Consider two housing options: a three-bedroom apartment located in the middle of a city that would give you a ten-minute commute, and a five-bedroom McMansion in the suburbs that would result in a forty-five-minute commute. "People will think about this tradeoff for a long time," Dijksterhuis says, "and most of them will eventually choose the large house. After all, a third bathroom or extra bedroom is very important for when Grandma and Grandpa come over for Christmas, whereas driving two hours each day is really not that bad." What's interesting is that the more time people spend deliberating, the more important that extra space becomes. They'll imagine all sorts of scenarios (a big birthday party, Thanksgiving dinner, another child) that turn the suburban house into a necessity. The lengthy commute, meanwhile, will seem less and less significant, at least when it's compared to the lure of an extra bathroom. But as Dijksterhuis points out, the reasoning is backward: "The additional bathroom is a completely superfluous asset for at least 362 or 363 days each year, whereas a long commute does become a burden after a while." For instance, a recent study found that when a person travels more than one hour in each direction, he or she has to make 40 percent more money in order to be as "satisfied with life" as a person with a short commute. Another study, led by Daniel Kahneman and the economist Alan Krueger, surveyed nine hundred working women in Texas and found that commuting was, by far, the least pleasurable part of their day. And yet, despite these gloomy statistics, nearly 20 percent of American workers commute more than forty-five minutes each way. (More than 3.5 million Americans spend more than three hours each day traveling to and from work, and they're the fastest-growing category of commuter.) According to Dijksterhuis, all these people are making themselves miserable because they failed to properly weigh the relevant variables when they were choosing where to live. Just as the strawberry-jam tasters who consciously analyzed their preferences were persuaded by irrelevant factors like spreadability and texture, the deliberative homeowners focused on less important details like square footage and number of bathrooms. (It's easier to consider quantifiable facts than future emotions, such as how you'll feel when you're stuck in a rush-hour traffic jam.) The prospective homeowners assumed a bigger house in the suburbs would make them happy, even if it meant spending an extra hour in the car every day. But they were wrong.

THE BEST WINDOW into this mental process—what's actually happening inside the brain when you talk yourself into choosing the wrong strawberry jam—comes from studies of the placebo effect. It's long been recognized that the placebo effect is extremely powerful; anywhere between 35 and 75 percent of people get better after receiving pretend medical treatments, such as sugar pills. A few years ago, Tor Wager, a neuroscientist at Columbia University, wanted to figure out why placebos were so effective. His experiment was brutally straightforward: he gave college students electric shocks while they were stuck in an fMRI machine. (The subjects were well compensated, at least by undergraduate standards.) Half of the people were then supplied with a fake pain-relieving cream. Even though the cream had no analgesic properties—it was just a hand moisturizer—people given the pretend cream said the shocks were significantly less painful. The placebo effect eased their suffering. Wager then imaged the specific parts of the brain that controlled this psychological process. He discovered that the placebo effect depended entirely on the prefrontal cortex, the center of reflective, deliberate thought. When people were told that they'd just received pain-relieving cream, their frontal lobes responded by inhibiting the activity of their emotional brain areas (like the insula) that normally respond to pain. Because people expected to experience less pain, they ended up experiencing less pain. Their predictions became self-fulfilling prophecies.

The placebo effect is a potent source of self-help. It demonstrates the power of the prefrontal cortex to modulate even the most basic bodily signals. Once this brain area comes up with reasons to experience less pain—the cream is supposed to provide pain relief—those reasons become powerful distortions. Unfortunately, the same rational brain areas responsible for temporarily reducing suffering also mislead us about many daily decisions. The prefrontal cortex can turn off pain signals, but it can also cause a person to ignore the feelings that lead to choosing the best poster. In these situations, conscious thoughts interfere with good decision-making.

Look, for example, at this witty little experiment. Baba Shiv, a neuroeconomist at Stanford, supplied a group of people with Sobe Adrenaline Rush, an "energy" drink that was supposed to make them feel more alert and energetic. (The drink contained a potent brew of sugar and caffeine that, the bottle promised, would impart "superior functionality.") Some participants paid full price for the drinks, while others were offered a discount. After drinking the product, participants were asked to solve a series of word puzzles. Shiv found that people who'd paid discounted prices consistently solved about 30 percent fewer puzzles than the people who'd paid full price for the drinks. The subjects were convinced that the stuff on sale was much less potent, even though all the drinks were identical. "We ran the study again and again, not sure if what we got had happened by chance or fluke," Shiv says. "But every time we ran it, we got the same results."

Why did the cheaper energy drink prove less effective? According to Shiv, consumers typically suffer from a version of the placebo effect. Since they expect cheaper goods to be less effective, they generally are less effective, even if the goods are identical to more expensive products. This is why brand-name aspirin works better than generic aspirin and why Coke tastes better than cheaper colas, even if most consumers can't tell the difference in blind taste tests. "We have these general beliefs about the world—for example, that cheaper products are of lower quality—and they translate into specific expectations about specific products," said Shiv. "Then, once these expectations are activated, they start to really impact our behavior." The rational brain distorts the sense of reality, so the ability to properly assess the alternatives is lost. Instead of listening to the trustworthy opinions generated by our emotional brains, we follow our own false assumptions.

Researchers at Caltech and Stanford recently lifted the veil on this strange process. Their experiment was organized like a wine tasting. Twenty people sampled five cabernet sauvignons that were distinguished solely by their retail prices, with bottles ranging in cost from five dollars to ninety dollars. Although the people were told that all five wines were different, the scientists weren't telling the truth: there were only three different wines. This meant that the same wines often reappeared, but with different price labels. For example, the first wine offered during the tasting—it was a bottle of a cheap California cabernet—was labeled both as a five-dollar wine (its actual retail price) and as a forty-five-dollar wine, nine times its actual price. All of the red wines were sipped by each subject inside an fMRI machine.

Not surprisingly, the subjects consistently reported that the more expensive wines tasted better. They preferred the ninety-dollar bottle to the ten-dollar bottle and thought the forty-five-dollar cabernet was far superior to the five-dollar plonk. By conducting the wine tasting inside an fMRI machine—the drinks were sipped via a network of plastic tubes—the scientists could see how the brains of the subjects responded to the different wines. While a variety of brain regions were activated during the experiment, only one brain region seemed to respond to the price of the wine rather than the wine itself: the prefrontal cortex. In general, more expensive wines made parts of the prefrontal cortex more excited. The scientists argue that the activity of this brain region shifted the preferences of the wine tasters, so that the ninety-dollar cabernet seemed to taste better than the ten-dollar cabernet, even though they were actually the same wine.

Of course, the wine preferences of the subjects were clearly nonsensical. Instead of acting like rational agents—getting the most utility for the lowest possible price—they were choosing to spend more money for an identical product. When the scientists repeated the experiment with members of the Stanford University wine club, they got the same results. In a blind tasting, these semi-experts were also misled by the made-up price tags. "We don't realize how powerful our expectations are," says Antonio Rangel, the neuroeconomist at Caltech who led the study. "They can really modulate every aspect of our experience. And if our expectations are based on false assumptions"—like the assumption that more expensive wine tastes better—"they can be very misleading."

These experiments suggest that, in many circumstances, we could make better consumer decisions by knowing less about the products we are buying. When you walk into a store, you are besieged by information. Even purchases that seem simple can quickly turn into a cognitive quagmire. Look at the jam aisle. A glance at the shelves can inspire a whole range of questions. Should you buy the smooth-textured strawberry jam or the one with less sugar? Does the more expensive jam taste better? What about organic jam? (The typical supermarket contains more than two hundred varieties of jam and jelly.) Rational models of decision-making suggest that the way to find the best product is to take all of this information into account, to carefully analyze the different brands on display. In other words, a person should choose a jam with his or her prefrontal cortex. But this method can backfire. When we spend too much time thinking in the supermarket, we can trick ourselves into choosing the wrong things for the wrong reasons. That's why the best critics, from Consumer Reports to Robert Parker, always insist on blind comparisons. They want to avoid the deceptive thoughts that corrupt decisions. The prefrontal cortex isn't good at picking out jams or energy drinks or bottles of wine. Such decisions are like a golf swing: they are best done with the emotional brain, which generates its verdict automatically.

This "irrational" approach to shopping can save us lots of money. After Rangel and his colleagues finished their brain-imaging experiment, they asked the subjects to taste the five different wines again, only this time the scientists didn't provide any price information. Although the subjects had just listed the ninety-dollar wine as the most pleasant, they now completely reversed their preferences. When the tasting was truly blind, when the subjects were no longer biased by their prefrontal cortex, the cheapest wine got the highest ratings. It wasn't fancy, but it tasted the best.

2

If the mind were an infinitely powerful organ, a limitless supercomputer without constraints, then rational analysis would always be the ideal decision-making strategy. Information would be an unqualified good. We would be foolish to ignore the omniscient opinions of the Platonic charioteer.

The biological reality of the brain, however, is that it's severely bounded, a machine subject to all sorts of shortcomings. This is particularly true of the charioteer, who is tethered to the prefrontal cortex. As the psychologist George Miller demonstrated in his famous essay "The Magical Number Seven, Plus or Minus Two," the conscious brain can only handle about seven pieces of data at any one moment. "There seems to be some limitation built into us by the design of our nervous systems, a limit that keeps our channel capacities in this general range," Miller wrote. While we can control these rational neural circuits—they think about what we tell them to think about—they constitute a relatively small part of the brain, just a few microchips within the vast mainframe of the mind. As a result, even choices that seem straightforward—like choosing a jam in the supermarket—can overwhelm the prefrontal cortex. It gets intimidated by all the jam data. And that's when bad decisions are made.

Consider this experiment. You're sitting in a bare room, with just a table and a chair. A scientist in a white lab coat walks in and says that he's conducting a study of long-term memory. The scientist gives you a seven-digit number to remember and asks you to walk down the hall to the room where your memory will be tested. On the way to the testing room, you pass a refreshment table for subjects taking part in the experiment. You are given a choice between a decadent slice of German chocolate cake and a bowl of fruit salad. What do you choose?

Now let's replay the experiment. You are sitting in the same room. The same scientist gives you the same explanation. The only difference is that instead of being asked to remember a seven-digit number, you are given only two numbers, a far easier mental task. You then walk down the hall and are given the same choice between cake and fruit.

You probably don't think the number of digits will affect your choice; if you choose the chocolate cake, it is because you want cake. But you'd be wrong. The scientist who explained the experiment was lying; this isn't a study of long-term memory, it's a study of self-control.

When the results from the two different memory groups were tallied, the scientists observed a striking shift in behavior. Fifty-nine percent of people trying to remember seven digits chose the cake, compared to only 37 percent of the two-digit subjects. Distracting the brain with a challenging memory task made a person much more likely to give in to temptation and choose the calorie-dense dessert. (The premise is that German chocolate cake is to adults what marshmallows are to four-year-olds.) The subjects' self-control was overwhelmed by five extra numbers.

Why did the two groups behave so differently? According to the Stanford scientists who designed the experiment, the effort required to memorize seven digits drew cognitive resources away from the part of the brain that normally controls emotional urges. Because working memory and rationality share a common cortical source—the prefrontal cortex—a mind trying to remember lots of information is less able to exert control over its impulses. The substrate of reason is so limited that a few extra digits can become an extreme handicap.

The shortcomings of the prefrontal cortex aren't apparent only when memory-storage capacity is exceeded. Other studies have shown that a slight drop in blood-sugar levels can also inhibit self-control, since the frontal lobes require lots of energy in order to function. Look, for example, at this experiment led by Roy Baumeister, a psychologist at Florida State University. The experiment began with a large group of undergraduates performing a mentally taxing activity that involved watching a video while ignoring the text of random words scrolling on the bottom of the screen. (It takes a conscious effort to not pay attention to salient stimuli.) The students were then offered some lemonade. Half of them got lemonade made with real sugar, and the other half got lemonade made with a sugar substitute. After giving the glucose time to enter the bloodstream and perfuse the brain (about fifteen minutes), Baumeister had the students make decisions about apartments. It turned out that the students who were given the drink without real sugar were significantly more likely to rely on instinct and intuition when choosing a place to live, even if that led them to choose the wrong places. The reason, according to Baumeister, is that the rational brains of these students were simply too exhausted to think. They'd needed a restorative sugar fix, and all they'd gotten was Splenda. This research can also help explain why we get cranky when we're hungry and tired: the brain is less able to suppress the negative emotions sparked by small annoyances. A bad mood is really just a rundown prefrontal cortex.

The point of these studies is that the flaws and foibles of the rational brain—the fact that it's an imperfect piece of machinery—are constantly affecting our behavior, leading us to make decisions that seem, in retrospect, quite silly. These mistakes extend far beyond poor self-control. In 2006, psychologists at the University of Pennsylvania decided to conduct an experiment with M&M's in an upscale apartment building. One day, they left out a bowl of the chocolate candies and a small scoop. The next day they refilled the bowl with M&M's but placed a much larger scoop beside it. The result would not surprise anyone who has ever finished a Big Gulp soda or a supersize serving of McDonald's fries: when the scoop size was increased, people took 66 percent more M&M's. Of course, they could have taken just as many candies on the first day; they simply would have had to use a few more scoops. But just as larger serving sizes cause us to eat more, the larger scoop made the residents more gluttonous.

The real lesson of the candy scoop, however, is that people are terrible at measuring stuff. Instead of counting the number of M&M's they eat, they count the number of scoops. The scientists found that most people took a single scoop and ended up consuming however many candies that scoop happened to contain. The same thing happens when people sit down to dinner: they tend to eat whatever is on their plates. If the plate is twice as large (and American serving sizes have grown 40 percent in the last twenty-five years), they'll still polish it off. As an example, a study done by Brian Wansink, a professor of marketing at Cornell, used a bottomless bowl of soup—there was a secret tube that kept on refilling the bowl with soup from below—to demonstrate that how much people eat is largely dependent on serving size. The group with the bottomless bowls ended up consuming nearly 70 percent more soup than the group with normal bowls.

Economists call this sleight of mind mental accounting, since people tend to think about the world in terms of specific accounts, such as scoops of candy or bowls of soup or lines on a budget. While these accounts help people think a little faster—it's easier to count scoops than actual M&M's—they also distort decisions. Richard Thaler, an economist at the University of Chicago, was the first to fully explore the consequences of this irrational behavior. He came up with a simple set of questions that demonstrate mental accounting at work:

Imagine that you have decided to see a movie and have paid the admission price of $10 per ticket. As you enter the theater, you discover that you have lost the ticket. The seat was not marked, and the ticket cannot be recovered. Would you pay $10 for another ticket?

When Thaler conducted this survey, he found that only 46 percent of people would buy another movie ticket. However, when he asked a closely related question, he got a completely different response.

Imagine that you have decided to see a movie where admission is $10, but you have not yet bought the ticket. As you walk to the theater, you discover that you have lost a $10 bill. Would you still pay $10 for a ticket to the movie?

Although the value of the loss in both scenarios is the same—people were still losing ten dollars—88 percent of people said they would now buy a movie ticket. Why the drastic shift? According to Thaler, going to a movie is normally viewed as a transaction in which the cost of a ticket is exchanged for the experience of seeing a movie. Buying a second ticket makes the movie seem too expensive, since a single ticket now "costs" twenty dollars. In contrast, the loss of the cash is not posted to the mental account of the movie, so no one minds forking over another ten bucks.

Of course, this is woefully inconsistent behavior. After losing tickets, most of us become tightwads; when we merely lose cash, we remain spendthrifts. These contradictory decisions violate an important principle of classical economics, which assumes that a dollar is always a dollar. (Money is supposed to be perfectly fungible.) But because the brain engages in mental accounting, we end up treating our dollars very differently. For example, when Thaler asked people whether they would drive twenty minutes out of their way to save five dollars on a fifteen-dollar calculator, 68 percent of respondents said yes. However, when he asked people whether they would drive twenty minutes out of their way to save five dollars on a $125 leather jacket, only 29 percent said they would. Their driving decisions depended less on the absolute amount of money involved (five dollars) than on the particular mental account in which the decision was placed. If the savings activated a mental account with a minuscule amount of money—like buying a cheap calculator—then they were compelled to drive across town. But that same five dollars seemed irrelevant when it was part of a much larger purchase. This principle also explains why car dealers are able to tack on unwanted and expensive extras and why luxury hotels can get away with charging six dollars for a can of peanuts. Because these charges are only small parts of much bigger purchases, we end up paying for things that we wouldn't normally buy.
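
One way to see the asymmetry in the calculator-versus-jacket example is to express the identical five-dollar saving as a share of the purchase it is mentally posted to. The sketch below does only that arithmetic; the "relative saving" framing is one common reading of Thaler's finding, not a formula given in the text.

```python
# Arithmetic behind the calculator-versus-jacket example: the absolute saving
# is the same five dollars, but the share of each mental account it represents
# is very different. The "relative saving" framing is one common reading of
# Thaler's finding, not a formula from the text.

def relative_saving(discount, purchase_price):
    """Return the discount as a fraction of the purchase it is posted to."""
    return discount / purchase_price

for item, price in [("calculator", 15), ("leather jacket", 125)]:
    share = relative_saving(5, price)
    print(f"$5 off the ${price} {item}: {share:.0%} of that purchase")

# Prints roughly 33% for the calculator and 4% for the jacket, which is why
# the same drive across town feels worth it in one case and not the other.
```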

The brain relies on mental accounting because it has such limited processing abilities. As Thaler notes, "These thinking problems come from the fact that we have a slow, erratic CPU [central processing unit] and the fact that we're busy." Since the prefrontal cortex can handle only about seven things at the same time, it's constantly trying to "chunk" stuff together, to make the complexity of life a little more manageable. Instead of thinking about each M&M, we think about the scoops. Instead of counting every dollar we spend, we parcel our dollars into particular purchases, like cars. We rely on misleading shortcuts because we lack the computational power to think any other way.

3

The history of Western thought is so full of paeans to the virtues of rationality that people have neglected to fully consider its limitations. The prefrontal cortex, it turns out, is easy to hoodwink. All it takes is a few additional digits or a slightly bigger candy scoop, and this rational brain region will start making irrational decisions.

A few years ago, a group of MIT economists led by Dan Ariely decided to conduct an auction with their business-school graduate students. (The experiment was later conducted on executives and managers at the MIT Executive Education Program, with similar results.) The researchers were selling a motley group of items, from a fancy bottle of French wine to a cordless keyboard to a box of chocolate truffles. The auction, however, came with a twist: before the students could bid, they were asked to write down the last two digits of their Social Security numbers. Then they were supposed to say whether or not they would be willing to pay that numerical amount for each of the products. For instance, if the last two digits of the number were 55, then the student would have to decide whether the bottle of wine or the cordless keyboard was worth $55. Finally, the students were instructed to write down the maximum amount they were willing to pay for the various items.

If people were perfectly rational agents, if the brain weren't so bounded, then writing down the last two digits of their Social Security numbers should have no effect on their auction bids. In other words, a student whose Social Security number ended with a low-value figure (such as 10) should be willing to pay roughly the same price as someone with a high-value figure (such as 90). But that's not what happened. For instance, look at the bidding for the cordless keyboard. Students with the highest-ending Social Security numbers (80 to 99) made an average bid of fifty-six dollars. In contrast, students with the lowest-ending numbers (1 to 20) made an average bid of a paltry sixteen dollars. A similar trend held for every single item. On average, students with higher numbers were willing to spend 300 percent more than those with low numbers. All of the business students realized, of course, that the last two digits of their Social Security numbers were completely irrelevant. Such a thing shouldn't influence their bids. And yet, it clearly did.

This is known as the anchoring effect, since a meaningless anchor—in this case, a random number—can have a strong impact on subsequent decisions.* While it's easy to mock the irrational bids of the business students, the anchoring effect is actually a common consumer mistake. Consider the price tags in a car dealership. Nobody actually pays the prices listed in bold black ink on the windows. The inflated sticker is merely an anchor that allows the car salesperson to make the real price of the car seem like a better deal. When a person is offered the inevitable discount, the prefrontal cortex is convinced that the car is a bargain.

In essence, the anchoring effect is about the brain's spectacular inability to dismiss irrelevant information. Car shoppers should ignore the manufacturers' suggested retail prices, just as MIT grad students should ignore their Social Security numbers. The problem is that the rational brain isn't good at disregarding facts, even when it knows those facts are useless. And so, if someone is looking at a car, the sticker price serves as a point of comparison, even though it's merely a gimmick. And when a person in the MIT experiment is making a bid on a cordless keyboard, she can't help but tender an offer that takes her Social Security number into account, simply because that number has already been placed into the pertinent decision-making ledger. The random digits are stuck in her prefrontal cortex, occupying valuable cognitive space. As a result, they become a starting point when she thinks about how much she's willing to pay for a computer accessory. "You know you're not supposed to think about these meaningless numbers," Ariely says. "But you just can't help it."

The fragility of the prefrontal cortex means that we all have to be extremely vigilant about not paying attention to unnecessary information. The anchoring effect demonstrates how a single additional fact can systematically distort the reasoning process. Instead of focusing on the important variable—how much is that cordless keyboard really worth?—we get distracted by some meaningless numbers. And then we spend too much money.

This cortical flaw has been exacerbated by modernity. We live in a culture that's awash in information; it's the age of Google, cable news, and free online encyclopedias. We get anxious whenever we are cut off from all this knowledge, as if it's impossible for anyone to make a decision without a search engine. But this abundance comes with some hidden costs. The main problem is that the human brain wasn't designed to deal with such a surfeit of data. As a result, we are constantly exceeding the capacity of our prefrontal cortices, feeding them more facts and figures than they can handle. It's like trying to run a new computer program on an old machine; the antique microchips try to keep up, but eventually they fizzle out.

In the late 1980s, the psychologist Paul Andreassen conducted a simple experiment on MIT business students. (Those poor students at MIT's Sloan School of Management are very popular research subjects. As one scientist joked, "They're like the fruit fly of behavioral economics.") First, Andreassen let each of the students select a portfolio of stock investments. Then he divided the students into two groups. The first group could see only the changes in the prices of their stocks. They had no idea why the share prices rose or fell and had to make their trading decisions based on an extremely limited amount of data. In contrast, the second group was given access to a steady stream of financial information. They could watch CNBC, read the Wall Street Journal, and consult experts for the latest analysis of market trends.

So which group did better? To Andreassen's surprise, the group with less information ended up earning more than twice as much as the well-informed group. Being exposed to extra news was distracting, and the high-information students quickly became focused on the latest rumors and insider gossip. (Herbert Simon said it best: "A wealth of information creates a poverty of attention.") As a result of all the extra input, these students engaged in far more buying and selling than the low-information group. They were convinced that all their knowledge allowed them to anticipate the market. But they were wrong.

The dangers of too much information aren't confined to investors. In another study, college counselors were given a vast amount of information about a group of high school students. The counselors were then asked to predict the grades of these kids during their freshman year in college. The counselors had access to high school transcripts, test scores, the results of personality and vocational tests, and application essays from the students. They were even granted personal interviews so that they could judge the "academic talents" of the students in person. With access to all of this information, the counselors were extremely confident that their judgments were accurate.

The counselors were competing against a rudimentary mathematical formula composed of only two variables: the high school grade point average of the student and his or her score on a single standardized test. Everything else was deliberately ignored. Needless to say, the predictions made by the formula were far more accurate than the predictions made by the counselors. The human experts had looked at so many facts that they lost track of which facts were actually important. They subscribed to illusory correlations ("She wrote a good college essay, so she'll write good essays in college") and were swayed by irrelevant details ("He had such a nice smile"). While the extra information considered by the counselors made them extremely confident, it actually led to worse predictions. Knowledge has diminishing returns, right up until it has negative returns.
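
The "rudimentary mathematical formula" is simply a weighted combination of the two variables. The sketch below shows the general shape of such a model; the student records, the fitted weights, and the applicant are hypothetical, invented for illustration, and are not the study's actual data.

```python
# A minimal sketch of a two-variable prediction formula of the kind described
# above: freshman GPA predicted from high-school GPA and one test score.
# Every number here is hypothetical, for illustration only.
import numpy as np

# columns: high-school GPA, standardized test score (0-100 scale)
X = np.array([[3.9, 88], [3.2, 71], [2.8, 65], [3.6, 80], [2.5, 58]])
y = np.array([3.7, 3.0, 2.6, 3.4, 2.3])   # freshman-year GPA (hypothetical)

# fit predicted_gpa = w1 * hs_gpa + w2 * test_score + b by least squares
A = np.column_stack([X, np.ones(len(X))])
w1, w2, b = np.linalg.lstsq(A, y, rcond=None)[0]

hs_gpa, test_score = 3.4, 75              # a hypothetical applicant
prediction = w1 * hs_gpa + w2 * test_score + b
print(f"predicted freshman GPA: {prediction:.2f}")
```

The point of the comparison is that a formula this crude, which ignores essays, interviews, and smiles altogether, still outpredicted the well-informed human experts.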

This is a counterintuitive idea. When making decisions, people almost always assume that more information is better. Modern corporations are especially beholden to this idea and spend a fortune trying to create "analytic workspaces" that "maximize the informational potential of their decision-makers." These managerial clichés, plucked from the sales brochures of companies such as Oracle and Unisys, are predicated on the assumptions that executives perform better when they have access to more facts and figures and that bad decisions are a result of ignorance.

But it's important to know the limitations of this approach, which are rooted in the limitations of the brain. The prefrontal cortex can handle only so much information at any one time, so when a person gives it too many facts and then asks it to make a decision based on the facts that seem important, that person is asking for trouble. He is going to buy the wrong items at Wal-Mart and pick the wrong stocks. We all need to know about the innate frailties of the prefrontal cortex so that we don't undermine our decisions.

BACK PAIN is a medical epidemic. The numbers are sobering: there's a 70 percent chance that at some point in your life, you'll suffer from it. There's a 30 percent chance that you've suffered from severe back pain in the last thirty days. At any given time, about 1 percent of working-age Americans are completely incapacitated by their lower lumbar regions. Treatment is expensive (more than $26 billion a year) and currently accounts for about 3 percent of total health-care spending. If workers' compensation and disability payments are taken into account, the costs are far higher.

When doctors first started to encounter a surge in patients with back pain—the beginning of the epidemic is generally dated to the late 1960s—they had few answers. The lower back is an exquisitely complicated body area, full of tiny bones, ligaments, spinal discs, and minor muscles. And then there's the spinal cord itself, a thick sheath of sensitive nerves that can be easily upset. There are so many moving parts in the back that doctors had difficulty figuring out what exactly was responsible for the pain. Without a definitive explanation, doctors typically sent patients home with a prescription for bed rest.

But this simple treatment plan was extremely effective. Even when nothing was done to the lower back, about 90 percent of patients with back pain managed to get better within seven weeks. The body healed itself, the inflammation subsided, the nerves relaxed. These patients went back to work and pledged to avoid the sort of physical triggers that had caused the pain in the first place.

Over the next few decades, this hands-off approach to back pain remained the standard medical treatment. Although the vast majority of patients didn't receive a specific diagnosis of what caused the pain—the suffering was typically parceled into a vague category such as "lower lumbar strain"—they still managed to experience significant improvements within a short period of time. "It was a classic case of medicine doing best by doing least," says Dr. Eugene Carragee, a professor of orthopedic surgery at Stanford. "People got better without real medical interventions because doctors didn't know how to intervene."

That all changed with the introduction of magnetic resonance imaging (MRI) in the late 1980s. Within a few years, the MRI machine became a crucial medical tool. It allowed doctors to look, for the first time, at stunningly accurate images of the interior of the body. MRI machines use powerful magnets to make protons in the flesh shift ever so slightly. Different tissues react in slightly different ways to this atomic manipulation; a computer then translates the resulting contrasts into high-resolution images. Thanks to the precise pictures produced by the machine, doctors no longer needed to imagine the layers of matter underneath the skin. They could see everything.

The medical profession hoped that the MRI would revolutionize the treatment of lower back pain. Since doctors could finally image the spine and surrounding soft tissue in lucid detail, they figured they'd be able to offer precise diagnoses of what was causing the pain, locating the aggravated nerves and structural problems. This, in turn, would lead to better medical care.

Unfortunately, MRIs haven't solved the problem of back pain. In fact, the new technology has probably made the problem worse. The machine simply sees too much. Doctors are overwhelmed with information and struggle to distinguish the significant from the irrelevant. Take, for example, spinal disc abnormalities. While x-rays can reveal only tumors and problems with the vertebral bones, MRIs can image spinal discs—the supple buffers between the vertebrae—in meticulous detail. After the imaging machines were first introduced, the diagnoses of various disc abnormalities began to skyrocket. The MRI pictures certainly looked bleak: people with pain seemed to have seriously degenerated discs, which everyone assumed caused inflammation of the local nerves. Doctors began administering epidurals to quiet the pain, and if the pain persisted, they would surgically remove the apparently offending disc tissue.

The vivid images, however, were misleading. Those disc abnormalities are seldom the cause of chronic back pain. In a 1994 study published in the New England Journal of Medicine, a group of researchers imaged the spinal regions of ninety-eight people who had no back pain or back-related problems. The pictures were then sent to doctors who didn't know that the patients weren't in pain. The result was shocking: the doctors reported that two-thirds of these normal patients exhibited "serious problems" such as bulging, protruding, or herniated discs. In 38 percent of these patients, the MRI revealed multiple damaged discs. Nearly 90 percent of these patients exhibited some form of "disc degeneration." These structural abnormalities are often used to justify surgery, and yet nobody would advocate surgery for people without pain. The study concluded that, in most cases, "The discovery by MRI of bulges or protrusions in people with low back pain may be coincidental."

In other words, seeing everything made it harder for the doctors to know what they should be looking at. The very advantage of MRI—its ability to detect tiny defects in tissue—turned out to be a liability, since many of the so-called defects were actually normal parts of the aging process. "A lot of what I do is educate people about what their MRIs are showing," says Dr. Sean Mackey, a professor at the Stanford School of Medicine and associate director of the hospital's pain-management division. "Doctors and patients get so fixated on these slight disc problems, and then they stop thinking about other possible causes for the pain. I always remind my patients that the only perfectly healthy spine is the spine of an eighteen-year-old. Forget about your MRI. What it's showing you is probably not important."

The mistaken explanations for back pain triggered by MRIs inevitably led to an outbreak of bad decisions. A large study published in the Journal of the American Medical Association (JAMA) randomly assigned 380 patients with back pain to one of two types of diagnostic analysis. One group received x-rays. The other group was diagnosed using MRIs, which gave the doctors much more information about the underlying anatomy.

Which group fared better? Did better pictures lead to better treatments? There was no difference in patient outcome: the vast majority of people in both groups got better. More information didn't lead to less pain. But stark differences emerged when the study looked at how the two groups were treated. Nearly 50 percent of MRI patients were diagnosed with some sort of disc abnormality, and this diagnosis led to intensive medical interventions. Patients in the MRI group had more doctor visits, more injections, and more physical therapy, and they were more than twice as likely to undergo surgery. These additional treatments were very expensive, and they had no measurable benefit.

This is the danger of too much information: it can actually interfere with understanding. When the prefrontal cortex is overwhelmed, a person can no longer make sense of the situation. Correlation is confused with causation, and people make theories out of coincidences. They latch on to medical explanations, even when the explanations don't make very much sense. MRIs make it easy for doctors to see all sorts of disc "problems," and so they reasonably conclude that these structural abnormalities are causing the pain. They're usually wrong.

Medical experts are now encouraging doctors not to order MRIs when evaluating back pain. A recent report in the New England Journal of Medicine concluded that MRIs should be used to image the back only under specific clinical circumstances, such as when doctors are examining "patients for whom there is a strong clinical suggestion of underlying infection, cancer, or persistent neurologic deficit." In the latest clinical guidelines issued by the American College of Physicians and the American Pain Society, doctors were "strongly recommended ... not to obtain imaging or other diagnostic tests in patients with nonspecific low back pain." In too many cases, the expensive tests proved worse than useless. All of the extra detail just got in the way. The doctors performed better with less information.

And yet, despite these clear medical recommendations, physicians trying to diagnose the cause of back pain continue to order MRIs routinely. The addiction to information can be hard to break. A 2003 report in JAMA found that even when doctors were aware of medical studies criticizing the use of MRI, they still believed that imaging was necessary for their own patients. They wanted to find a reason for the pain so that the suffering could be given a clear anatomical cause, which could then be fixed with surgery. It didn't seem to matter that these reasons weren't empirically valid, or that the disc problems seen by MRI machines don't actually cause most cases of lower back pain. More data was seen as an unqualified good. The doctors thought it would be irresponsible not to conduct all of the relevant diagnostic tests. After all, wasn't that the rational thing to do? And shouldn't doctors always try to make rational decisions?

The problem of diagnosing the origins of back pain is really just another version of the strawberry-jam problem. In both cases, the rational methods of decision-making cause mistakes. Sometimes, more information and analysis can actually constrict thinking, making people understand less about what's really going on. Instead of focusing on the most pertinent variable—the percentage of patients who get better and experience less pain—doctors got sidetracked by the irrelevant MRI pictures.

When it comes to treating back pain, this wrong-headed approach comes with serious costs. "What's going on now is a disgrace," says Dr. John Sarno, a professor of clinical rehabilitation at New York University Medical Center. "You have well-meaning doctors making structural diagnoses despite a serious lack of evidence that these abnormalities are really causing the chronic pain. But they have these MRI pictures and the pictures seem so convincing. It's amazing how perfectly intelligent people will make foolish decisions if you give them lots of irrelevant stuff to consider."

The powers of the Platonic charioteer are fragile. The prefrontal cortex is a magnificent evolutionary development, but it must be used carefully. It can monitor thoughts and help evaluate emotions, but it can also paralyze, making a person forget the words to an aria or lose a trusty golf swing. When someone falls into the trap of spending too much time thinking about fine-art posters or about the details of an MRI image, the rational brain is being used in the wrong way. The prefrontal cortex can't handle so much complexity by itself.

So far, this book has been about the brain's dueling systems. We've seen how both reason and feeling have important strengths and weaknesses, and how, as a result, different situations require different cognitive strategies. How we decide should depend on what we are deciding.

But before we learn how to take full advantage of our varied mental tools, we are going to explore a separate realm of decision-making. As it happens, some of our most important decisions are about how we treat other people. The human being is a social animal, endowed with a brain that shapes social behavior. By understanding how the brain makes these decisions, we can gain insight into one of the most unique aspects of human nature: morality.