The Brain Is an Argument - How We Decide - Jonah Lehrer

How We Decide - Jonah Lehrer (2009)

Chapter 7. The Brain Is an Argument

One of the most coveted prizes in a presidential primary is the endorsement of the Concord Monitor, a small newspaper in central New Hampshire. During the first months of the 2008 presidential primary campaign, all of the major candidates, from Chris Dodd to Mike Huckabee, sat for interviews with the paper's editorial board. Some candidates, such as Hillary Clinton, Barack Obama, and John McCain, were invited back for follow-up interviews. These sessions would often last for hours, with the politicians facing a barrage of uncomfortable questions. Hillary Clinton was asked about various White House scandals; Barack Obama was asked why he often seemed "bored and low-key" on the stump; McCain was asked about his medical history. "There were a few awkward moments," says Ralph Jimenez, the editorial-page editor. "You could tell they were thinking, Did you just ask me that? Do you know who I am?"

But the process wasn't limited to these interviews. Bill Clinton got in the habit of calling the editors, at home and on their cell phones, and launching into impassioned defenses of his wife. (Some of the editors had unlisted phone numbers, which made Clinton's calls even more impressive.) Obama had his own persistent advocates. The board was visited by former White House staff members, such as Madeleine Albright and Ted Sorensen, and lobbied by a bevy of local elected officials. For the five members of the editorial board, all the attention was flattering, if occasionally annoying. Felice Belman, the executive editor of the Monitor, was awakened by a surprise phone call from Hillary at seven thirty on a Saturday morning. "I was still half asleep," she says. "And I definitely wasn't in the mood to talk about healthcare mandates." (Ralph still has a phone message from Hillary Clinton on his cell phone.)

Twelve days before the primary, on a snowy Thursday afternoon, the editorial board gathered in a back office of the newsroom. They'd postponed the endorsement meeting long enough; it was time to make a decision. Things would be easy on the Republican side: all five members favored John McCain. The Democratic endorsement, however, was a different story. Although the editors had each tried to keep an open mind—"The candidates are here for a year and you don't want to settle on one candidate right away," said Mike Pride, a former editor of the paper—the room was starkly divided into two distinct camps. Ralph Jimenez and Ari Richter, the managing editor, were pushing for an Obama endorsement. Mike Pride and Geordie Wilson, the publisher, favored Clinton. And then there was Felice, the sole undecided vote. "I was waiting to be convinced until the last minute," she says. "I guess I was leaning toward Clinton, but I still felt like I could have been talked into switching sides."

Now came the hard part. The board began by talking about the issues, but there wasn't that much to talk about: Obama and Clinton had virtually identical policy positions. Both candidates were in favor of universal health care, repealing the Bush tax cuts, and withdrawing troops from Iraq as soon as possible. And yet, despite this broad level of agreement, the editors were fiercely loyal to their chosen candidates, even if they couldn't explain why they were so loyal. "You just know who you prefer," Ralph says. "For most of the meeting, the level of discourse was pretty much 'My person is better. Period. End of story.'"

After a lengthy and intense discussion—"We'd really been having this discussion for months," says Ralph—the Monitor ended up endorsing Clinton by a 3-2 vote. The room was narrowly split, but it had become clear that no one was going to change his or her mind. Even Felice, the most uncertain of the editors, was now firmly in the Clinton camp. "There is always going to be disagreement," Mike says. "That's what happens when you get five opinionated people in the same room talking politics. But you also know that before you leave the room, you've got to endorse somebody. You've got to accept the fact that some people are bound to be wrong"—he jokingly looks over at Ralph—"and find a way to make a decision."

For readers of the Monitor, the commentary endorsing Clinton seemed like a well-reasoned brief, an unambiguous summary of the newspaper's position. (Kathleen Strand, the Clinton spokesperson in New Hampshire, credited the endorsement with helping Clinton win the primary.) The carefully chosen words in the editorial showed no trace of the debate that had plagued the closed-door meeting and all those heated conversations by the water cooler. If just one of the editors had changed his or her mind, then the Monitor would have chosen Obama. In other words, the clear-cut endorsement emerged from a very tentative majority.

In this sense, the editorial board is a metaphor for the brain. Its decisions often feel unanimous—you know which candidate you prefer—but the conclusions are actually reached only after a series of sharp internal disagreements. While the cortex struggles to make a decision, rival bits of tissue are contradicting one another. Different brain areas think different things for different reasons. Sometimes this fierce argument is largely emotional, and the distinct parts of the limbic system debate one another. Although people can't always rationally justify their feelings—these editorial board members preferred either Hillary or Obama for reasons they couldn't really articulate—these feelings still manage to powerfully affect behavior. Other arguments unfold largely between the emotional and rational systems of the brain as the prefrontal cortex tries to resist the impulses coming from below. Regardless of which areas are doing the arguing, however, it's clear that all those mental components stuffed inside the head are constantly fighting for influence and attention. Like an editorial board, the mind is an extended argument. And it is arguing with itself.

In recent years, scientists have been able to show that this "argument" isn't confined only to contentious issues such as presidential politics. Rather, it's a defining feature of the decision-making process. Even the most mundane choices emerge from a vigorous cortical debate. Let's say, for instance, that you're contemplating breakfast cereals in the supermarket. Each option will activate a unique subset of competing thoughts. Perhaps the organic granola is delicious but too expensive, or the whole-grain flakes are healthy but too unappetizing, or the Froot Loops are an appealing brand (the advertisements worked) but too sugary. Each of these distinct claims will trigger a particular set of emotions and associations, all of which then compete for your conscious attention. Antoine Bechara, a neuroscientist at USC, compares this frantic neural competition to natural selection, with the stronger emotions ("I really want Honey Nut Cheerios!") and the more compelling thoughts ("I should eat more fiber") gaining a selective advantage over weaker ones ("I like the cartoon character on the box of Froot Loops"). "The point is that most of the computation is done at an emotional, unconscious level, and not at a logical level," he says. The particular ensemble of brain cells that wins the argument determines what you eat for breakfast.

Consider this clever experiment designed by Brian Knutson and George Loewenstein. The scientists wanted to investigate what happens inside the brain when a person makes typical consumer choices, such as buying an item in a retail store or choosing a cereal. A few dozen lucky undergraduates were recruited as experimental subjects and given a generous amount of spending money. Each subject was then offered the chance to buy dozens of different objects, from a digital voice recorder to gourmet chocolates to the latest Harry Potter book. After the student stared at each object for a few seconds, he was shown the price tag. If he chose to buy the item, its cost was deducted from the original pile of cash. The experiment was designed to realistically simulate the experience of a shopper.

While the student was deciding whether or not to buy the product on display, the scientists were imaging the subject's brain activity. They discovered that when a subject was first exposed to an object, his nucleus accumbens (NAcc) was turned on. The NAcc is a crucial part of the dopamine reward pathway, and the intensity of its activation was a reflection of desire for the item. If the person already owned the complete Harry Potter collection, then the NAcc didn't get too excited about the prospect of buying another copy. However, if he had been craving a George Foreman grill, the NAcc flooded the brain with dopamine when that item appeared.

But then came the price tag. When the experimental subject was exposed to the cost of the product, the insula and prefrontal cortex were activated. The insula produces aversive feelings and is triggered by things like nicotine withdrawal and pictures of people in pain. In general, we try to avoid anything that makes our insulas excited. This includes spending money. The prefrontal cortex was activated, scientists speculated, because this rational area was computing the numbers, trying to figure out if the product was actually a good deal. The prefrontal cortex got most excited during the experiment when the cost of the item on display was significantly lower than normal.

By measuring the relative amount of activity in each brain region, the scientists could accurately predict the subjects' shopping decisions. They knew which products people would buy before the people themselves did. If the insula's negativity exceeded the positive feelings generated by the NAcc, then the subject always chose not to buy the item. However, if the NAcc was more active than the insula, or if the prefrontal cortex was convinced that it had found a good deal, the object proved irresistible. The sting of spending money couldn't compete with the thrill of getting something new.
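The decision rule the scientists relied on can be sketched as a toy model. This is purely illustrative: the function name, the numeric values, and the simple additive threshold are assumptions for the sake of the sketch, not the study's actual analysis.

```python
def predicts_purchase(nacc, insula, prefrontal_deal_signal=0.0):
    """Toy comparator: predict a purchase when reward-related activity
    (NAcc), plus any 'good deal' signal from the prefrontal cortex,
    outweighs the insula's aversive response to the price tag."""
    return (nacc + prefrontal_deal_signal) > insula

# Strong craving, mild price pain -> predicted purchase
print(predicts_purchase(nacc=0.8, insula=0.3))  # True
# Price pain dominates -> predicted pass
print(predicts_purchase(nacc=0.2, insula=0.6))  # False
# A bargain can tip a close contest
print(predicts_purchase(nacc=0.4, insula=0.5, prefrontal_deal_signal=0.3))  # True
```

The point of the sketch is only the comparison itself: the prediction comes from which signal is relatively stronger, not from any explicit cost-benefit calculation.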

This data, of course, directly contradicts the rational models of microeconomics; consumers aren't always driven by careful considerations of price and expected utility. You don't look at the electric grill or box of chocolates and perform an explicit cost-benefit analysis. Instead, you outsource much of this calculation to your emotional brain and then rely on relative amounts of pleasure versus pain to tell you what to purchase. (During many of the decisions, the rational prefrontal cortex was largely a spectator, standing silently by while the NAcc and insula argued with each other.) Whichever emotion you feel most intensely tends to dictate your shopping decisions. It's like an emotional tug of war.

This research explains why consciously analyzing purchasing decisions can be so misleading. When Timothy Wilson asked people to analyze their strawberry-jam preferences, they made worse decisions because they had no idea what their NAccs really wanted. Instead of listening to their feelings, they tried to deliberately decipher their pleasure. But we can't ask our NAccs questions; we can only listen to what they have to say. Our desires exist behind locked doors.

Retail stores manipulate this cortical setup. They are designed to get us to open our wallets; the frivolous details of the shopping experience are really subtle acts of psychological manipulation. The store is tweaking our brains, trying to soothe the insulas and stoke the NAccs. Just look at the interior of a Costco warehouse. It's no accident that the most coveted items are put in the most prominent places. A row of high-definition televisions lines the entrance. The fancy jewelry, Rolex watches, iPods, and other luxury items are conspicuously placed along the corridors with the heaviest foot traffic. And then there are the free samples of food, liberally distributed throughout the store. The goal of Costco is to constantly prime the pleasure centers of the brain, to keep us lusting after things we don't need. Even though you probably won't buy the Rolex, just looking at the fancy watch makes you more likely to buy something else, since the desired item activates the NAcc. You have been conditioned to crave a reward.

But exciting the NAcc is not enough; retailers must also inhibit the insula. This brain area is responsible for making sure you don't get ripped off, and when it's repeatedly assured by retail stores that low prices are "guaranteed," or that a certain item is on sale, or that it's getting the "wholesale price," the insula stops worrying so much about the price tag. In fact, researchers have found that when a store puts a promotional sticker next to the price tag—something like "Bargain Buy!" or "Hot Deal!"—but doesn't actually reduce the price, sales of that item still dramatically increase. These retail tactics lull the brain into buying more things, since the insula is pacified. We go broke convinced that we are saving money.

This model of the shopping brain also helps explain why credit cards make us spend so irresponsibly. According to Knutson and Loewenstein, paying with plastic literally inhibits the insula, making a person less sensitive to the cost of an item. As a result, the activity of the NAcc—the pleasure pump of the cortex—becomes disproportionately important: it wins every shopping argument.


There's something unsettling about seeing the brain as one big argument. We like to believe that our decisions reflect a clear cortical consensus, that the entire mind agrees on what we should do. And yet, that serene self-image has little basis in reality. The NAcc might want the George Foreman grill, but the insula knows that you can't afford it, or the prefrontal cortex realizes that it's a bad deal. The amygdala might like Hillary Clinton's tough talk on foreign policy, but the ventral striatum is excited by Obama's uplifting rhetoric. These antagonistic reactions manifest themselves as a twinge of uncertainty. You don't know what you believe. And you certainly don't know what to do.

The dilemma, of course, is how to reconcile the argument. If the brain is always disagreeing with itself, then how can a person ever make a decision? At first glance, the answer seems obvious: force a settlement. The rational parts of the mind should intervene and put an end to all the emotional bickering.

While such a top-down solution might seem like a good idea—using the most evolutionarily advanced parts of the brain to end the cognitive contretemps—this approach must be used with great caution. The problem is that the urge to end the debate often leads to neglect of crucial pieces of information. A person is so eager to silence the amygdala, or quiet the OFC, or suppress some bit of the limbic system that he or she ends up making a bad decision. A brain that's intolerant of uncertainty—that can't stand the argument—often tricks itself into thinking the wrong thing. What Mike Pride says about editorial boards is also true of the cortex: "The most important thing is that everyone has their say, that you listen to the other side and try to understand their point of view. You can't short-circuit the process."

Unfortunately, the mind often surrenders to the temptation of shoddy top-down thinking. Just look at politics. Voters with strong partisan affiliations are a case study in how not to form opinions: their brains are stubborn and impermeable, since they already know what they believe. No amount of persuasion or new information is going to change the outcome of their mental debates. For instance, an analysis of five hundred voters with "strong party allegiances" during the 1976 campaign found that during the heated last two months of the contest, only sixteen people were persuaded to vote for the other party. Another study tracked voters from 1965 to 1982, tracing the flux of party affiliation over time. Although it was an extremely tumultuous era in American politics—there was the Vietnam War, stagflation, the fall of Richard Nixon, oil shortages, and Jimmy Carter—nearly 90 percent of people who identified themselves as Republicans in 1965 ended up voting for Ronald Reagan in 1980. The happenings of history didn't change many minds.

It's now possible to see why partisan identities are so persistent. Drew Westen, a psychologist at Emory University, imaged the brains of ordinary voters with strong party allegiances during the run-up to the 2004 election. He showed the voters multiple, clearly contradictory statements made by each candidate, John Kerry and George Bush. For example, the experimental subject would read a quote from Bush praising the service of soldiers in the Iraq war and pledging "to provide the best care for all veterans." Then the subject would learn that on the same day Bush made this speech, his administration cut medical benefits for 164,000 veterans. Kerry, meanwhile, was quoted making contradictory statements about his vote to authorize war in Iraq.

After being exposed to the political inconsistencies of both candidates, the subject was asked to rate the level of contradiction on a scale of 1 to 4, with 4 signaling a strong level of contradiction. Not surprisingly, the reactions of voters were largely determined by their partisan allegiances. Democrats were troubled by Bush's inconsistent statements (they typically rated them a 4) but found Kerry's contradictions much less worrisome. Republicans responded in a similar manner; they excused Bush's gaffes but almost always found Kerry's statements flagrantly incoherent.

By studying each of these voters in an fMRI machine, Westen was able to look at the partisan reasoning process from the perspective of the brain. He could watch as Democrats and Republicans struggled to maintain their political opinions in the face of conflicting evidence. After being exposed to the inconsistencies of their preferred candidate, the party faithful automatically recruited brain regions that are responsible for controlling emotional reactions, such as the prefrontal cortex. While this data might suggest that voters are rational agents calmly assimilating the uncomfortable information, Westen already knew that wasn't happening, since the ratings of Kerry and Bush were entirely dependent on the subjects' party affiliations. What, then, was the prefrontal cortex doing? Westen realized that voters weren't using their reasoning faculties to analyze the facts; they were using reason to preserve their partisan certainty. And then, once the subjects had arrived at favorable interpretations of the evidence, blithely excusing the contradictions of their chosen candidate, they activated the internal reward circuits in their brains and experienced a rush of pleasurable emotion. Self-delusion, in other words, felt really good. "Essentially, it appears as if partisans twirl the cognitive kaleidoscope until they get the conclusions they want," Westen says, "and then they get massively reinforced for it, with the elimination of negative emotional states and activation of positive ones."

This flawed thought process plays a crucial role in shaping the opinions of the electorate. Partisan voters are convinced that they're rational—it's the other side that's irrational—but actually, all of us are rationalizers. The Princeton political scientist Larry Bartels analyzed survey data from the 1990s to prove this point. During the first term of Bill Clinton's presidency, the budget deficit declined by more than 90 percent. However, when Republican voters were asked in 1996 what happened to the deficit under Clinton, more than 55 percent said that it had increased. What's interesting about this data is that so-called high-information voters—these are the Republicans who read the newspaper, watch cable news, and can identify their representatives in Congress—weren't better informed than low-information voters. (Many low-information voters struggled to name the vice president.) According to Bartels, the reason knowing more about politics doesn't erase partisan bias is that voters tend to assimilate only those facts that confirm what they already believe. If a piece of information doesn't follow Republican talking points—and Clinton's deficit reduction didn't fit the tax-and-spend liberal stereotype—then the information is conveniently ignored. "Voters think that they're thinking," Bartels says, "but what they're really doing is inventing facts or ignoring facts so that they can rationalize decisions they've already made." Once you identify with a political party, the world is edited to fit with your ideology.

At such moments, rationality actually becomes a liability, since it allows us to justify practically any belief. The prefrontal cortex is turned into an information filter, a way to block out disagreeable points of view. Let's look at an experiment done in the late 1960s by the cognitive psychologists Timothy Brock and Joe Balloun. Half of the subjects involved in the experiment were regular churchgoers, and half were committed atheists. Brock and Balloun played a tape-recorded message attacking Christianity, and, to make the experiment more interesting, they added an annoying amount of static—a crackle of white noise—to the recording. However, the listener could reduce the static by pressing a button, at which point the message suddenly became easier to understand.

The results were utterly predictable and rather depressing: the nonbelievers always tried to remove the static, while the religious subjects actually preferred the message that was harder to hear. Later experiments by Brock and Balloun, in which smokers listened to a speech on the link between smoking and cancer, demonstrated a similar effect. We all silence cognitive dissonance through self-imposed ignorance.

This sort of blinkered thinking isn't a problem for only partisan voters and devout believers. In fact, research suggests that the same flaw also afflicts those people who are supposed to be most immune to such cognitive errors: political pundits. Even though pundits are trained professionals, presumably able to evaluate the evidence and base their opinions on the cold, hard facts—that's why we listen to them—they are still vulnerable to cognitive mistakes. Like partisan voters, they selectively interpret the data so that it proves them right. They'll distort their thought process until it leads to the desired conclusion.

In 1984, the University of California at Berkeley psychologist Philip Tetlock began what he thought would be a brief research project. At the time, the Cold War was flaring up again—Reagan was talking tough to the "evil empire"—and political pundits were sharply divided on the wisdom of American foreign policy. The doves thought Reagan was needlessly antagonizing the Soviets, while the hawks were convinced that the USSR needed to be aggressively contained. Tetlock was curious which group of pundits would turn out to be right, and so he began monitoring their predictions.

A few years later, after Reagan left office, Tetlock revisited the opinions of the pundits. His conclusion was sobering: everyone was wrong. The doves had assumed that Reagan's bellicose stance would exacerbate Cold War tensions and had predicted a breakdown in diplomacy as the USSR hardened its geopolitical stance. The reality, of course, was that the exact opposite happened. By 1985, Mikhail Gorbachev was in power. The Soviet Union began implementing a stunning series of internal reforms. The "evil empire" was undergoing glasnost.

But the hawks didn't do much better. Even after Gorbachev began the liberalizing process, hawks tended to disparage the changes to the Soviet system. They said the evil empire was still evil; Gorbachev was just a tool of the politburo. Hawks couldn't imagine that a sincere reformer might actually emerge from a totalitarian state.

The dismal performance of these pundits inspired Tetlock to turn his small case study into an epic experimental project. He picked 284 people who made their living "commenting or offering advice on political and economic trends" and began asking them to make predictions about future events. He had a long list of pertinent questions. Would George Bush be reelected? Would there be a peaceful end to apartheid in South Africa? Would Quebec secede from Canada? Would the dot-com bubble burst? In each case, the pundits were asked to rate the probability of several possible outcomes. Tetlock then interrogated the pundits about their thought processes so he could better understand how they'd made up their minds. By the end of the study, Tetlock had quantified 82,361 different predictions.

After Tetlock tallied the data, the predictive failures of the pundits became obvious. Although they were paid for their keen insights into world affairs, they tended to perform worse than random chance. Most of Tetlock's questions had three possible answers; on average, the pundits had selected the right answer less than 33 percent of the time. In other words, a dart-throwing chimp would have beaten the vast majority of professionals. Tetlock also found that the most famous pundits in his study tended to be the least accurate, consistently churning out overblown and overconfident forecasts. Eminence was a handicap.
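The chance baseline here is simple arithmetic: with three possible answers per question, blind guessing is right about one time in three. A quick simulation makes the "dart-throwing chimp" benchmark concrete (the setup is an assumption for illustration; it is not Tetlock's scoring method):

```python
import random

random.seed(0)

# Treat answer 0 as correct on every question; the 'chimp' guesses
# uniformly among the three options.
trials = 100_000
hits = sum(random.randrange(3) == 0 for _ in range(trials))
print(hits / trials)  # roughly 0.333
```

Pundits who average below that figure are, by definition, doing worse than random guessing.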

Why were these pundits (especially the prominent ones) so bad at forecasting the future? The central error diagnosed by Tetlock was the sin of certainty, which led the "experts" to mistakenly impose a top-down solution on their decision-making processes. In chapter 2, we saw examples of the true expertise that occurs when experience is internalized by the dopamine system. This results in a person who has a set of instincts that respond quickly to the situation at hand, regardless of whether that's playing backgammon or staring at a radar screen. The pundits in Tetlock's study, however, distorted the verdicts of their emotional brains, cherry-picking the feelings they wanted to follow. Instead of trusting their gut feelings, they found ways to disregard the insights that contradicted their ideologies. When pundits were convinced that they were right, they ignored any brain areas that implied they might be wrong. This suggests that one of the best ways to distinguish genuine from phony expertise is to look at how a person responds to dissonant data. Does he or she reject the data out of hand? Perform elaborate mental gymnastics to avoid admitting error? Everyone makes mistakes; the object is to learn from these mistakes.

Tetlock notes that the best pundits are willing to state their opinions in "testable form" so that they can "continually monitor their forecasting performance." He argues that this approach makes pundits not only more responsible—they are forced to account for being wrong—but also less prone to bombastic convictions, a crucial sign that a pundit isn't worth listening to. (In other words, ignore those commentators who seem too confident or self-assured. The people on television who are most certain are almost certainly going to be wrong.) As Tetlock writes, "The dominant danger [for pundits] remains hubris, the vice of closed-mindedness, of dismissing dissonant possibilities too quickly." Even though practically all of the professionals in Tetlock's study claimed that they were dispassionately analyzing the evidence—everybody wanted to be rational—many of them were actually indulging in some conveniently cultivated ignorance. Instead of encouraging the arguments inside their heads, these pundits settled on answers and then came up with reasons to justify those answers. They were, as Tetlock put it, "prisoners of their preconceptions."


It feels good to be certain. Confidence is comforting. This desire to always be right is a dangerous side effect of having so many competing brain regions inside one's head. While neural pluralism is a crucial virtue—the human mind can analyze any problem from a variety of different angles—it also makes us insecure. You never know which brain area you should obey. It's not easy to make up your mind when your mind consists of so many competing parts.

This is why being sure about something can be such a relief. The default state of the brain is indecisive disagreement; various mental parts are constantly insisting that the other parts are wrong. Certainty imposes consensus on this inner cacophony. It lets you pretend that your entire brain agrees with your behavior. You can now ignore those annoying fears and nagging suspicions, those statistical outliers and inconvenient truths. Being certain means that you aren't worried about being wrong.

The lure of certainty is built into the brain at a very basic level. This is most poignantly demonstrated by split-brain patients. (These are patients who have had the corpus callosum—the nerve tissue that connects the two hemispheres of the brain—severed. The procedure is performed only rarely, usually to treat intractable seizures.) A typical experiment goes like this: using a special instrument, different sets of pictures are flashed to each of the split-brain patient's visual fields. (Because of our neural architecture, all information about the left visual field is sent to the right hemisphere, and all information about the right visual field is sent to the left hemisphere.) For example, the right visual field might see a picture of a chicken claw and the left visual field might see a picture of a snowy driveway. The patient is then shown a variety of images and asked to pick out the one that is most closely associated with what he or she has just seen. In a tragicomic display of indecisiveness, the split-brain patient's hands point to two different objects. The right hand points to a chicken (this matches the chicken claw that the left hemisphere witnessed), while the left hand points to a shovel (the right hemisphere wants to shovel the snow). The conflicting reactions of the patient reveal the inner contradictions of each of us. The same brain has come up with two very different answers.

But something interesting happens when scientists ask a split-brain patient to explain the bizarre response: the patient manages to come up with an explanation. "Oh, that's easy," one patient said. "The chicken claw goes with the chicken, and you need a shovel to clean out the chicken shed." Instead of admitting that his brain was hopelessly confused, the patient wove his confusion into a plausible story. In fact, the researchers found that when patients made especially ridiculous claims, they seemed even more confident than usual. It was a classic case of overcompensation.

Of course, the self-assurance of the split-brain patient is clearly mistaken. None of the images contained a chicken shed that needed a shovel. But that deep need to repress inner contradictions is a fundamental property of the human mind. Even though the human brain is defined by its functional partitions, by the friction of all these different perspectives, we always feel compelled to assert its unity. As a result, each of us pretends that the mind is in full agreement with itself, even when it isn't. We trick ourselves into being sure.

DURING THE LAST week of September 1973, the Egyptian and Syrian armies began massing near the Israeli border. The signals picked up by the Mossad, the main Israeli intelligence agency, were ominous. Artillery had been moved into offensive positions. Roads were being paved in the middle of the desert. Thousands of Syrian reservists had been ordered to report for duty. From the hills of Jerusalem, people could see a haze of black diesel smoke on the horizon, the noxious exhaust generated by thousands of Soviet-made tanks. The smoke was getting closer.

The official explanation for the frenzy of military activity was that it was a pan-Arab training exercise. Although Anwar Sadat, the president of Egypt, had boldly declared a few months before that his country was "mobilizing in earnest for the resumption of battle" and declared that the destruction of Israel was worth the "sacrifice of one million Egyptian soldiers," the Israeli intelligence community insisted that the Egyptians weren't actually planning an attack. Major General Eli Zeira, the director of Aman, the Israeli military intelligence agency, publicly dismissed the possibility of an Egyptian invasion. "I discount the likelihood of a conventional Arab attack," Zeira said. "We have to look hard for evidence of their real intentions in the field—otherwise, with the Arabs, all you have is rhetoric. Too many Arab leaders have intentions which far exceed their capabilities." Zeira believed that the Egyptian military buildup was just a bluff, a feint intended to shore up Sadat's domestic support. He persuasively argued that the Syrian deployments were merely a response to a September skirmish between Syrian and Israeli fighter planes.

On October 3, Golda Meir, the prime minister of Israel, held a regular cabinet meeting that included the heads of Israeli intelligence. It was here that she was told about the scale of Arab preparations for war. She learned that the Syrians had concentrated their antiaircraft missiles at the border, the first time this had ever been done. In addition, several Iraqi armored divisions had moved into southern Syria. She was also informed about Egyptian military maneuvers in the Sinai that weren't part of the official "training exercise." Although everyone agreed that the news was troubling, the consensus remained the same. The Arabs were not ready for war. They wouldn't dare invade. The next cabinet meeting was scheduled for October 7, the day after Yom Kippur.

In retrospect, it's clear that Zeira and the Israeli intelligence community were spectacularly wrong. In the early afternoon of October 6, the Egyptian and Syrian armies—a force roughly equivalent to the NATO European command—launched a surprise attack on Israeli positions in the Golan Heights and Sinai Peninsula. Because Meir didn't issue a full mobilization order until the invasion was already under way, the Israeli military was unable to repel the Arab armies. Egyptian tanks streamed across the Sinai and nearly captured the strategically important Mitla Pass. Before nightfall, more than 8,000 Egyptian infantry had moved into Israeli territory. The situation in the Golan Heights was even more dire: 130 Israeli tanks were trying to hold off more than 1,300 Syrian and Iraqi tanks. By that evening, the Syrians were pressing toward the Sea of Galilee, and the Israelis were suffering heavy casualties. Reinforcements were rushed to battle. If the Golan fell, Syria could easily launch artillery at Israeli cities. Moshe Dayan, the Israeli defense minister, concluded after the third day of conflict that the chances of the Israeli nation surviving the war were "very low."

The tide shifted gradually. By October 8, the newly arrived Israeli reinforcements began to reassert control in the Golan Heights. The main Syrian force was split into two smaller contingents that were quickly isolated and destroyed. By October 10, Israeli tanks had crossed the "purple line," or the pre-war Syrian border. They would eventually progress nearly forty kilometers into the country, or close enough to shell the suburbs of Damascus.

The Sinai front was more treacherous. The initial Israeli counterattack, on October 8, was an unmitigated disaster: nearly an entire brigade of Israeli tanks was lost in a few hours. (General Shmuel Gonen, the Israeli commander of the Southern Front, was later disciplined for his "failure to fulfill his duties.") In addition, the Israeli air force had lost control of the skies; its fighter planes were being shot down at an alarming rate, as the Soviet SA-2 antiaircraft batteries proved to be much more effective than expected. ("We are like fat ducks up there," one Israeli pilot said. "And they have the shotguns.") The next several days were a tense stalemate, neither army willing to risk an attack.

The standoff ended on October 14, when Sadat ordered his generals to attack. He wanted to ease the pressure on the Syrians, who were now fighting to protect their capital. But the massive Egyptian force was repulsed—the Egyptians lost nearly 250 tanks—and on October 15, the Israelis launched a successful counterattack. The Israelis struck at the seam between the two main Egyptian armies and managed to secure a bridgehead on the opposite side of the Suez Canal. This breach marked the turning point of the Sinai campaign. By October 22, an Israeli armored division was within a hundred miles of Cairo. A cease-fire went into effect a few days later.

For Israel, the end of the war was bittersweet. Although the surprise invasion had been repelled, and no territory had been lost, the tactical victory had revealed the startling fragility of the nation. It turned out that Israel's military superiority was not a guarantee of security. The small country had almost been destroyed by an intelligence failure.

AFTER THE WAR, the Israeli government appointed a special committee to investigate the mechdal, or "omission," that had preceded the war. Why hadn't the intelligence community anticipated the invasion? The committee uncovered a staggering amount of evidence suggesting an imminent attack. For instance, on October 4, Aman learned that, in addition to the buildup of Egyptian and Syrian forces along the border, the Arabs had evacuated Soviet military advisers from Cairo and Damascus. The day after that, new reconnaissance photographs had revealed the movement of antiaircraft missiles to the front lines and the departure of the Soviet fleet from the port of Alexandria. At this point, it should have been clear that the Egyptian forces weren't training in the desert; they were getting ready for war.

Lieutenant Benjamin Simon-Tov, a young intelligence officer at the Southern Command, was one of the few analysts who connected the dots. On October 1, he wrote a memo urging his commander to consider the possibility of an Arab attack. That memo was ignored. On October 3, he compiled a briefing document summarizing recent aggressive Egyptian actions. He argued that the Sinai invasion would begin within a week. His superior officer refused to pass the "heretical" report up the chain of command.

Why was the intelligence community so resistant to the idea of an October attack? After the Six-Day War of 1967, the Mossad and Aman developed an influential theory of Arab strategy that they called ha-Konseptzia (the Concept). This theory was based largely on the intelligence of a single source in the Egyptian government. It held that Egypt and Syria wouldn't consider attacking Israel until 1975, at which point they would have an adequate number of fighter planes and pilots. (Israeli air superiority had played a key role in the decisive military victory of 1967.) The Concept also placed great faith in the Bar-Lev line, a series of defensive positions along the Suez Canal. The Mossad and Aman believed that these obstacles and reinforcements would restrain Egyptian armored divisions for at least twenty-four hours, thus allowing Israel crucial time to mobilize its reservists.

The Concept turned out to be completely wrong. The Egyptians were relying on their new surface-to-air missiles to counter the Israeli air force; they didn't need more planes. The Bar-Lev line was easy to breach. The defensive positions were mostly made of piled desert sand, which the Egyptian military moved using pressurized water cannons. Unfortunately, the Concept was deeply ingrained in the strategic thinking of the Israeli intelligence community. Until the invasion actually began, the Mossad and Aman had insisted that no invasion would take place. Instead of telling the prime minister that the situation on the ground was uncertain and ambiguous—nobody really knew if the Egyptians were bluffing or planning to attack—the leaders of the Mossad and Aman chose to project an unshakable confidence in the Concept. They were misled by their certainty, which caused them to ignore a massive amount of contradictory evidence. As the psychologist Uri Bar-Joseph noted in his study of the Israeli intelligence failure, "The need for cognitive closure prompted leading analysts, especially Zeira, to 'freeze' on the conventional wisdom that an attack was unlikely and to become impervious to information suggesting that it was imminent."

Even on the morning of October 6, just a few hours before Egyptian tanks crossed the border, Zeira was still refusing to admit that a mobilization might be necessary. A top-secret cable had just arrived from a trusted source inside an Arab government, warning that an invasion was imminent, that Syria and Egypt weren't bluffing. Meir convened a meeting with her top military officials to assess this new intelligence. She asked Zeira if he thought the Arab nations were going to attack. Zeira said no. They would not dare to attack, he told the prime minister. Of that he was certain.

THE LESSON OF the Yom Kippur War is that having access to the necessary information is not enough. Eli Zeira, after all, had more than enough military intelligence at his disposal. He saw the tanks at the border; he read the top-secret memos. His mistake was that he never forced himself to consider these inconvenient facts. Instead of listening to the young lieutenant, he turned up the static dial and clung to the Concept. The result was a bad decision.

The only way to counteract the bias for certainty is to encourage some inner dissonance. We must force ourselves to think about the information we don't want to think about, to pay attention to the data that disturbs our entrenched beliefs. When we start censoring our minds, turning off those brain areas that contradict our assumptions, we end up ignoring relevant evidence. A major general shrugs off the evacuation of Soviet military personnel and those midnight cables from trusted sources. He insists that an invasion isn't happening even when it has already begun.

But the certainty trap is not inevitable. We can take steps to prevent ourselves from shutting down our minds' arguments too soon. We can consciously correct for this innate tendency. And if those steps fail, we can create decision-making environments that help us better entertain competing hypotheses. Look, for example, at the Israeli military. After failing to anticipate the 1973 war, Israel thoroughly revamped its intelligence services. It added an entirely new branch of intelligence analysis, the Research and Political Planning Center, which operated under the auspices of the Foreign Ministry. The mission of this new center wasn't to gather more information; the Israelis realized that data collection wasn't their problem. Instead, the unit was designed to provide an assessment of the available data, one that was completely independent of both Aman and the Mossad. It was a third opinion, in case the first two opinions were wrong.

At first glance, adding another layer of bureaucracy might seem like a bad idea. Interagency rivalries can create their own set of problems. But the Israelis knew that the surprise invasion of 1973 was a direct result of their false sense of certainty. Because Aman and the Mossad were convinced that the Concept was accurate, they had ignored all contradictory evidence. Complacency and stubbornness soon set in. The commission wisely realized that the best way to avoid such certainty in the future was to foster diversity, ensuring that the military would never again be seduced by its own false assumptions.

The historian Doris Kearns Goodwin made a similar point about the benefits of intellectual diversity in Team of Rivals, her history of Abraham Lincoln's cabinet. She argues that it was Lincoln's ability to deal with competing viewpoints that made him such a remarkable president and leader. He intentionally filled his cabinet with rival politicians who had extremely different ideologies: antislavery crusaders, like Secretary of State William Seward, were forced to work with more conservative figures, like Attorney General Edward Bates, a man who had once been a slave owner. When making a decision, Lincoln always encouraged vigorous debate and discussion. Although several members of his cabinet initially assumed that Lincoln was weak willed, indecisive, and unsuited for the presidency, they eventually realized that his ability to tolerate dissent was an enormous asset. As Seward said, "The president is the best of us."

The same lesson can be applied to the brain: when making decisions, actively resist the urge to suppress the argument. Instead, take the time to listen to what all the different brain areas have to say. Good decisions rarely emerge from a false consensus. Alfred P. Sloan, the chairman of General Motors during its heyday, once adjourned a board meeting soon after it began. "Gentlemen," Sloan said, "I take it we are all in complete agreement on the decision here ... Then I propose we postpone further discussion of this matter until our next meeting to give ourselves time to develop disagreement and perhaps gain some understanding of what the decision is all about."