Denying to the Grave: Why We Ignore the Facts That Will Save Us - Sara E Gorman, Jack M Gorman (2016)

Chapter 3. Confirmation Bias

ONE OF AMERICA’S GREATEST SCIENTISTS SUMMARIZED confirmation bias well when he quipped, “I wouldn’t have seen it if I didn’t believe it.”

That scientist was the great Hall of Famer and New York Yankee baseball player Yogi Berra. How do we know Yogi Berra said that? One of us once heard someone say that he did, and it sounds like the kind of thing that has been attributed to him. Of course, there are those who say that Yogi didn’t say many of the things attributed to him and that there are actually perfectly logical explanations for some of the seemingly nonsensical statements he allegedly did utter.1 But we don’t care about any of that. We love the line, it makes our point well, and we are going to stick to the Yogi attribution no matter what kind of disconfirming evidence crops up. As Yogi might have said, “When you come to a piece of evidence that doesn’t agree with what you already believe, tear it to shreds and then ignore it.”

Confirmation bias refers to our tendency to attend only to information that agrees with what we already think is true. Notice that we did not simply say that this bias involves ignoring evidence that is incompatible with our beliefs but rather that it is an active process in which we selectively pay attention to those things that confirm our hypotheses. Confirmation bias is responsible for not only a great deal of denial of scientific evidence but also the actual generation and maintenance of incorrect scientific information. That is, scientists, doctors, and public health experts are as prone as anyone else is to “seeing what we believe,” making it especially difficult to help people sort out what is true science from the mistakes and outright fabrications. As we will see, confirmation bias is strongly rooted in primitive needs and emotion and therefore not amenable to correction merely by reciting facts.

Although confirmation bias is irrational in the sense that it does not take into consideration evidence, it is still frequently adaptive and even necessary. Sometimes rejecting something because of an initial bad impression when there is an alternative that suits our needs well is time-saving, even if the reasons behind our rejection are not evidence-based. If we got gastrointestinal symptoms once after eating a particular brand of canned baked beans, there is no harm in automatically picking a different brand the next time, even if more careful consideration would reveal that the original brand was not responsible for making us sick. Confirmation bias is one of those cognitive techniques that enable us to make rapid decisions. In everyday life that can be an advantage; when making decisions about scientific truth, however, it rarely is. Specific networks in the human brain reinforce positions we take because of confirmation bias, networks that have evolved to keep us safe and happy. Challenging these ideas requires substantial mental effort and energy expenditure that we cannot be expected to sustain on a regular basis.

The everyday, practical use of confirmation bias to make necessary decisions runs counter to something at the very heart of the scientific method: disconfirmation. Science operates mostly by creating hypotheses based on what is already known and then generating data in an attempt to prove the hypothesis wrong. Only after exhaustive attempts to disconfirm the hypothesis have failed do scientists begin to think it might be correct. Although, as we shall see, confirmation bias is used by scientists—sometimes generating incorrect inferences—the very nature of science resists accepting anything just because it is already believed. Proper science is a long, exhausting, and often frustrating process in which experiments fail over and over again. In other words, science is a continuous process of trying one’s best to prove that everything we believe is wrong. Only when repeated rigorous attempts to do so prove fruitless do we begin to believe a scientific theory might just be true.

In this chapter we will demonstrate how confirmation bias, although a highly adaptive human trait, often causes scientific misperceptions because we resist the frequently counterintuitive disconfirmation process of scientific inquiry. The chapter explains confirmation bias and why it is adaptive as a form of learning from experience. We then explore the ways in which confirmation bias affects health beliefs, with a particular focus on scientific and medical professionals, who are by no means immune to its power. We follow this by exploring some reasons, both social and biological, why confirmation bias exists. Finally, we propose some ways of countering confirmation bias.

What Is Confirmation Bias?

Confirmation bias crops up in everyone’s life on a regular basis. Let us say, for example, that we have decided that a person we recently met, we will call him Sherlock, is very smart. We came to this conclusion because at a party he corrected someone who claimed that global warming is just the natural result of the end of the last ice age and therefore nothing to worry about. Sherlock calmly and clearly explained that the last ice age ended about 10,000 years ago and that all mathematical analyses have shown that although the earth warmed up after it was over, this does not account for the global warming we are experiencing today. Hence, Sherlock soberly and dispassionately concluded, we have a lot to worry about.

Sherlock sounded very confident in what he said. He spoke clearly. Furthermore, he has a deep voice and is taller than average. There was something “authoritative” about him. And of course, his name contributed to an overall sense that Sherlock is indeed a very smart person.

We subsequently see Sherlock several times at parties and meetings. Every time he speaks, we pay close attention to what he says, agreeing with every point he makes. During one such gathering, Sherlock explains that pasteurization destroys the vitamin D that milk naturally contains. He says this is so because vitamin D is a fat-soluble vitamin.

“Think of what happens when you broil a steak,” he tells us, as we sit raptly listening to him. “The fat melts and is discarded. That is what happens to the vitamin D when they heat it up to temperatures like you have in your oven.”

This seems correct, especially because we remember learning that vitamin D is indeed a fat- as opposed to a water-soluble vitamin like vitamin C. The next day, however, someone who was also at the party mentions to us that Sherlock was wrong.

“First of all,” Martha tells us, “milk does not have vitamin D in it until the manufacturers add it in. They have been doing that since the 1930s. Second of all, pasteurization doesn’t involve temperatures anywhere near what you get in your oven. Pasteurized milk is not heated beyond the boiling point because it would curdle. The usual method of pasteurization involves heating the milk to 72°C for 15 seconds. Finally, pasteurization doesn’t lower vitamin D levels by any amount that makes a difference.”

Now, Martha does not have a deep and alluring voice like Sherlock’s, and she is not as tall as he is. Furthermore, as we are listening to her we are trying to rush out to an important business meeting. Martha’s speech is full of details, and we simply do not have the attention right now for them. Who cares how long they have been pasteurizing milk, and what is 72°C in degrees Fahrenheit anyway? We notice that she says “any amount that makes a difference,” and we immediately think, “So it does lower the vitamin D content by some amount.” Sherlock was right—Martha even admitted it.

As we drive to our next meeting, our blood begins to boil. Martha is always attacking people, always trying to find the worst she can in them. She is probably envious of Sherlock because he is smarter and more popular. Every time thereafter when someone tries to disagree with something Sherlock says, we feel that same rush of angry emotion as we did in the car contemplating Martha’s attack on him and we barely consider the contradictory information. Our belief that Sherlock is smart is repeatedly confirmed and becomes unshakeable.

This example of confirmation bias illustrates several of its properties. First, it is prone to the irrational primacy effect: we give more credence to what we hear or experience first than everything that follows. As Raymond S. Nickerson puts it, “When a person must draw a conclusion on the basis of information acquired and integrated over time, the information acquired early in the process is likely to carry more weight than that acquired later.”2 Our first experience with Sherlock was a positive one in which he was in fact correct. Global warming today cannot be attributed to any lingering effects of the ending of the last ice age. Second, confirmation bias is subject to emotion. We were incidentally experiencing positive emotions when we first met Sherlock. We were at a party, and we were relaxed and having fun. Sherlock is tall, handsome, and well-spoken. By contrast, we were harried when we spoke to Martha, whom we experience with negative emotion. Third, in order to maintain our conviction that Sherlock is smart, we fail to listen to anything that contradicts this judgment. Instead, we summon up our angry defense of him. As it turns out, Sherlock is completely wrong on the pasteurization issue. Perhaps Martha did not need to add all those details, which only increased our sense that she is a pedantic irritant. But even though we do not yet know the facts—we have done no independent research of our own—we instinctively believe Sherlock and not Martha.

It turns out that Sherlock had just read something about global warming in a magazine in his dentist’s office before coming to the party. Otherwise, he knows very little about science and frequently insists on assumptions that are untrue. But the more we are confronted with that fact, the more embarrassing it becomes to admit that we have been wrong about him. If we acknowledge that we were wrong about Sherlock and that despite his name he is really a blowhard, then perhaps everyone will be mad at us for defending him and we will look stupid. So we cling to our defense of Sherlock, becoming more and more polarized in our view that he is smart and that everyone else is misjudging him. And thus is revealed another, very serious effect of the confirmation bias: it polarizes people. Ideally, whenever someone asserts a “fact” about an unfamiliar topic, we should maintain a healthy skepticism, ask questions, and even look into it further on our own so that we can come to our own conclusions. When scientists do this together to get an answer to a question it is called collaboration. Confirmation bias, on the other hand, is an anti-collaboration phenomenon. Each person or group of people stridently defends its own position, thus increasingly driving everyone further apart.

If you think back, you will almost certainly identify times in your life when you have been prone to confirmation bias. Maybe it happened when you judged a person or decided to buy something. Perhaps you set your sights on renting a particular apartment because you loved the big windows in the living room that let sunlight in on a beautiful summer afternoon. After that, you rationalized the fact that the rent was a bit high by noting that it was only a bit higher than a friend’s new apartment; you dismissed what looked like a water stain on the ceiling in the bedroom by assuming it happened a long time ago during a rare terrible storm; and you overlooked a cockroach scurrying across the living room floor. All of this was set aside because your very first visit to the apartment was emotionally favorable. It did not even occur to you that all that sunlight would stream through only on sunny afternoons when you are usually at work. Fortunately, someone else got the apartment before you could put down a deposit, and that poor soul is now fighting with the landlord to fix the leak in the upstairs apartment that is dripping into the ceiling of the bedroom and to get an exterminator in to fight the vermin infestation. Perhaps that unfortunate renter would have benefited from some wisdom articulated more than 700 years ago by Dante. In the Divine Comedy, St. Thomas Aquinas cautions Dante when they meet in Paradise, “Opinion—hasty—often can incline to the wrong side, and then affection for one’s own opinion binds, confines the mind.”3

Confirmation bias can and does also affect how we think about our health. In a study published in the Proceedings of the National Academy of Sciences, researchers followed patients for more than a year and found there was absolutely no relationship between their arthritis pain and the weather.4 We doubt that the publication of this paper 20 years ago has influenced any patient or her doctor to abandon the notion that inclement weather makes joints ache more. That belief was probably embedded in people’s minds hundreds of years ago—everyone has no doubt heard it articulated by his or her grandparents. Once this idea is implanted in our minds, we ignore every bit of evidence that contradicts it, even though there really is no biological basis for such an assertion. Once we understand what is actually involved in the two main forms of arthritis, osteoarthritis (in which joints degenerate with age) and rheumatoid arthritis (an autoimmune disease, which can begin at any age, caused by abnormal antibodies attacking the joints), it becomes very hard to understand how humidity, cloudiness, or precipitation could possibly be a factor in the progression of the disease. Yet we are fully capable of falsely associating bad weather with pain because that is what we already think is the case. Fortunately, not much harm is done by believing this, although if arthritis sufferers take too much medication or avoid recommended therapy and exercise just because the weather is bad, there could be consequences.

As we noted earlier, confirmation bias is not without advantages, which is perhaps why it remains a part of the human panoply of cognitive devices. We need to use our experience, the things we have learned in the past and know in the present, to guide us in almost everything we do. If we bought a product that turns out to have been defective, we are biased against purchasing that product again. That is, every time we see that product in the store—let’s say it is an expensive umbrella that allegedly withstands 30-mile-an-hour winds but in fact collapsed the first time it was exposed to a slight breeze—we immediately notice its flaws and this ensures that we don’t make the same mistake twice.5 Of course, it is entirely possible that the umbrella we bought was the only defective one the manufacturer made. Hence, our decision not to buy this umbrella based on previous experience is technically biased. Without evaluating a large number of this manufacturer’s umbrellas we cannot really be sure if there are any good ones or not. But no one would fault us for relying on confirmation bias in this instance. Who wants to take a chance on getting deceived again when there are so many alternative products (like a cheap umbrella that will handle at least a 2-mile-per-hour wind)? It is easy to think of many instances, from trivial to those of great importance in our lives, in which confirmation bias enables us to use experience to make quick and adaptive decisions.

So we all are prone to confirmation bias, but in this chapter we want to reinforce our point that intelligence is not a factor in the penchant to deny scientific evidence. Hence, we use two examples that involve people who are generally believed to be smart and who even have special training in the relevant fields: physicians’ overprescribing of antibiotics and health scientists’ tenacious insistence that eating fat is the cause of heart disease.

Confirmation Bias Among Scientists

Science, too, operates in large measure by relying on experience. When a scientist (or more likely today a team of scientists) completes an experiment or analyzes existing data in a new way, something will emerge. Most of the time what emerges is of little interest—more experiments fail than succeed, which is why scientists labor so long on the same thing and why graduate students in science suffer so much, hoping that one of their experiments will rise to sufficient significance to finally merit a dissertation that gets them their PhDs. But if the results are interesting, the scientists will write up their findings in a scientific paper and submit the paper to a scientific journal. The editor of the journal will look it over and decide whether the paper has a chance of being published in the journal and, if it does, will send it to several independent experts in the field for peer review. These experts are usually called referees and journals that use this system are known as refereed journals. Most papers come back from review with recommendations that they be rejected. The most prestigious scientific journals in the health field reject between 80% and 90% of the papers they receive. Hence, it is a tough road from writing a paper to getting it published.

A paper that does get published, however, may be read widely by other scientists. The new findings now have some authority by virtue of being published, and the more prestigious the journal the more they will be noticed and seen as important.6 It will now become imperative that other scientists in the field take these findings into account when they do their own work. If another group performs an experiment whose findings agree with the published ones, this is called a replication. If the new results are not confirmatory, however, it is said that the second group of investigators has failed to replicate the original findings and it is incumbent upon them to explain why they got a different result. Notice that there is already a bit of a bias in the way this works—even if the follow-up experiment was done with nearly perfect scientific rigor, the word failure is used if it yields a different result than a previously published one. There is, then, the opportunity for confirmation bias because no one wants to be a “failure,” even though in fact a well-executed experiment that yields a discrepant finding can hardly be called a “failure.” It is really just different.

The use of confirmation bias has its advantages in science, as it does in daily life. Using previously published findings to guide future experiments narrows the field of exploration and increases the chances of finding something. It is very rare nowadays that anyone stumbles on a finding by accident, the way Edward Jenner did in the late 18th century when he discovered the smallpox vaccine. Rather, today’s science works by building upon previous findings until something of value develops. The scientists who developed the vaccine for the human papillomavirus (HPV), for example, knew from previous work about HPV’s role in the development of cervical cancer, its molecular structure, and the way in which immune cells attack viruses. They were then able to use this information to design a method of extracting just the viral proteins necessary to cause the body to manufacture neutralizing antibodies without being capable of causing actual infection. In other words, these investigators were guided in what to look for each step of the way by what had been published in the past. Had they stumbled upon a discrepant finding, their first instinct would have been to wonder whether they had done something wrong in performing the experiment and to figure out how to fix it, not to trash the whole idea. To do otherwise would have been to throw unnecessary obstacles into the process. Thus, scientists use confirmation bias in order to benefit from previous findings in pushing ahead with a discovery process.

But of course, just as is the case in daily life, confirmation bias can have negative consequences for both science and medicine. Sometimes, a finding or hypothesis gets so enshrined by scientists that they resist accepting evidence that proves it incorrect. Doctors can become so convinced of a belief that actual evidence does not dissuade them and they continue a practice that is not only useless but even harmful. “People sometimes see in data the patterns for which they are looking, regardless of whether the patterns are really there,” wrote Raymond S. Nickerson of Tufts University, an expert on confirmation bias.7 One example is the administration of antibiotics.

Confirmation Bias in the Medical Field: Antibiotic Overuse

The famous discovery by Alexander Fleming in 1928 that the Penicillium mold inhibited the growth of bacteria in a petri dish was not in fact as “out of the blue” as is often believed. The term antibiosis was coined in the 19th century when biologists noticed that bacteria produce substances that inhibit other bacteria. That various fungi inhibited the growth of bacteria had also been reported before Fleming made his breakthrough finding. Penicillin was not even the first antibiotic introduced. Sulfonamides were introduced in 1932, with penicillin not becoming available to civilians until after World War II. There is no question, however, that antibiotics are the closest thing to “miracle” drugs ever invented. They have changed the face of human interaction with bacteria that cause infection, saving countless lives since the middle of the 20th century and causing relatively minor and infrequent complications or adverse side effects.

Since 1950, then, we have lived in an era blessed with a class of medications that have turned previously serious and sometimes fatal diseases like bacterial pneumonia, strep throat,8 and syphilis into curable illnesses. Furthermore, antibiotics rarely seemed to harm anyone. There is the occasional person who is allergic to penicillin or other antibiotics, and some forms of this allergy can even cause almost instant death (by a process called anaphylaxis), but the vast majority of people take antibiotics for a week or 10 days or even longer without suffering any adverse consequences.

With this kind of track record—a superb benefit-to-risk ratio—it is no wonder that both doctors and patients have come to trust antibiotics as a great treatment for symptoms like cough, sore throat, fever, and earache. The first problem with this approach, however, is that antibiotics are useless against most of the pathogens that cause these symptoms. Take the average sore throat that afflicts school-aged children, for example. The poor 10-year-old girl suffers in agony every time she swallows. Her fever soars to 102°F and she develops a blotchy red rash on her body, a white coating on her tongue, and flushed cheeks. She feels miserable and cannot go to school. Her mother takes her to the pediatrician, who takes a tongue depressor, tells her to open her mouth and say “Ahh,” and sees the red throat typical of acute infectious pharyngitis. The pediatrician notices a bit of pus oozing from the girl’s tonsils. He calls the rash “scarlatiniform”9 and declares the combination of the pus on the tonsils and this type of rash to be characteristic of strep infection. The pediatrician prescribes a 1-week course of antibiotics. He may also swab the back of the girl’s throat to obtain a sample for a throat culture and/or perform a rapid strep test to see if in fact the streptococcus bacteria are growing in her throat.

The problem with all of this is that most sore throats in 10-year-old children are caused by viruses,10 not bacteria like strep, and antibiotics have absolutely no effect on viruses. What the pediatrician should do is prescribe antibiotics only if the rapid strep test is positive. If it isn’t, he should check the results of the throat culture a day or two later and, if that is positive, then prescribe antibiotics. What he should never do is prescribe antibiotics without a positive result from either the rapid test or the throat culture, or both. But unfortunately, this is what doctors do all too often. The reason is in part confirmation bias. Doctors believe they can tell the difference between bacterial and viral infections when they see pus on the tonsils and blotchy red rashes on the skin. They have seen children with pus on their tonsils and blotchy red rashes get better after taking antibiotics. Hence, every time they see these things, their belief that they are caused by a bacterial infection is confirmed and they take the “logical” action of prescribing a Z-Pak.11 In fact, one study showed that between 1997 and 2010, 60% of adults received an antibiotic for a sore throat, substantially more than the 10% who should have.12

Of course, the fact is that doctors cannot distinguish bacterial from viral pharyngitis merely by looking in the throat or at the skin, or by any other aspect of the physical exam. When doctors examine patients with a sore throat, make a diagnosis, and then obtain the results of a throat culture, studies have shown that the doctor’s ability to correctly diagnose viral versus bacterial infection is no better than chance.13 Despite the fact that these studies have been published in important journals that doctors read, some still insist that they can tell the difference. Now we do not mean to minimize doctors’ abilities to figure things out on the basis of an examination. It is sometimes dazzling to watch a cardiologist listen to a person’s heart and correctly tell exactly which valve is malfunctioning or a neurologist perform an examination and determine exactly where in the brain a tumor is sitting. More often than not, sophisticated follow-up tests like echocardiograms and MRIs confirm what the cardiologist and neurologist determined from the examination. But in the case of the cause of a sore throat, only the throat culture can reveal whether it is strep or one of a multitude of viruses that cause pharyngitis, rash, fever, and coated tongues.

The problem of antibiotic overprescription is rampant throughout medicine. In a recent study published in the Journal of the American Medical Association (JAMA), investigators from Harvard Medical School reported that about 70% of people who go to the doctor because they have acute bronchitis get a prescription for an antibiotic.14 What should that rate have been? Zero. No one should get antibiotics for acute bronchitis because those medications have been shown over and over again to be ineffective for that condition. Acute bronchitis goes away on its own in about 3 weeks at most whether or not antibiotics are prescribed. For about the last 40 years various federal and professional agencies have been trying to hammer home that point, but doctors have not changed their prescribing habits at all. Patients who demand antibiotics are often blamed for this problem. According to this scenario, doctors who know better are simply afraid to say no to their patients, supposedly fearing they will go elsewhere. While that may be true, it is also the case that doctors simply do not always follow the scientific evidence. Instead, they opt for what custom and their own beliefs teach them.

More ominously, it is clearly not the case that unnecessary prescribing of antibiotics is harmless.15 First, bacteria are capable of mutating very rapidly. Mutations that cause changes in actual physical characteristics in humans and other mammals generally take thousands of years to pass down through the generations, but changes in bacteria and viruses can occur in just one generation. Antibiotics generally work by targeting a process in bacteria that involves their genes and proteins. Rapid mutation means that bacteria can quickly develop variants of these genes and proteins that are no longer sensitive to antibiotics, a phenomenon called resistance. Diseases that were once easily cured with antibiotics, like gonorrhea, are now more difficult to treat because resistant strains of the bacteria have evolved.16 Tuberculosis is another infectious disease for which the responsible bacteria have developed widespread resistance. Every time we give someone antibiotics we create the conditions for antibiotic resistance to develop. Dr. Tom Frieden, director of the U.S. Centers for Disease Control and Prevention (CDC), recently pointed out that antibiotic resistance causes an estimated minimum of 2 million illnesses and 23,000 deaths per year in the United States.17 In 2013, the CDC listed 17 drug-resistant bacteria and one drug-resistant fungus as pathogens of concern.18

The second reason that antibiotic overprescription is not benign is that our bodies are colonized with thousands of different kinds of bacteria that cause us absolutely no harm and in many cases inhibit the growth of other, disease-causing bacteria. Overuse of antibiotics kills the beneficial bacteria, allowing pathogenic strains to flourish. This is one of the reasons that the awful gastrointestinal disease caused by the bacterium commonly called C. difficile is becoming such a problem—too many people who really have viral, not bacterial, infections are taking antibiotics they do not need.

Despite the fact that doctors are taught in medical school that antibiotics don’t kill viruses and that overprescribing them is harmful, inappropriate antibiotic prescription remains a rampant problem. Donnelly and colleagues from the University of Alabama at Birmingham looked at data from 126 million adults who had gone to the emergency room with an upper respiratory infection (URI).19 Despite the fact that the great majority of URIs are viral illnesses—mostly colds—a whopping 61% of these patients were given a prescription for an antibiotic. This led the authors of the study to conclude that “the proportion of adult ARTI [acute respiratory tract infection] patients receiving antibiotics in U.S. EDs [emergency departments] is inappropriately high.”20 In another study, patients with acute bronchitis—a type of lower respiratory illness that is, as we pointed out earlier, almost always caused by viruses—were randomized to receive the non-steroidal anti-inflammatory drug ibuprofen, an antibiotic, or a placebo.21 The authors of the study pointed out that bronchitis is “one of the most common reasons for visits to primary care” and that “most patients with bronchitis receive antibiotics.” Nevertheless, the study found that all three groups did equally well, getting better in about a week and a half regardless of whether they received ibuprofen, an antibiotic, or no active drug. Even among hospitalized patients, about 30% of antibiotic use is unnecessary.22 The problem is becoming so dire that Jeremy Farrar, the director of the United Kingdom’s largest medical research charity, the Wellcome Trust, recently stated, “This [the use of antibiotics] is getting to a tipping point… . What we will see is people increasingly failing treatment, increasingly spending longer in hospital, patients getting sicker and having complications and dying.”23

The Big Fat Diet Debate

Sometimes, confirmation bias can affect an entire scientific field, as seems to be the case with the long-standing warning that eating foods with high fat content increases the risk for cardiovascular disease. A troubling May 2014 article published in The Wall Street Journal suggests that confirmation bias has influenced generations of scientists, doctors, nutritionists, and public health experts to repeat the same bad advice over and over again.24 The article by Nina Teicholz, author of The Big Fat Surprise: Why Butter, Meat and Cheese Belong in a Healthy Diet, traces this long-held insistence—that eating foods rich in saturated fats increases the risk for heart attacks and strokes—to the work of Ancel Benjamin Keys, a scientist who worked at the University of Minnesota in the 1950s. Keys published results from a large, multi-country study that seemed to show that high-fat diets were associated with elevated serum cholesterol levels, which were known to be associated with increased rates of atherosclerotic heart disease and heart attacks. “Dr. Keys was formidably persuasive,” Teicholz writes, “and, through sheer force of will, rose to the top of the nutrition world—even gracing the cover of Time magazine—for relentlessly championing the idea that saturated fats raise cholesterol and, as a result, cause heart attacks.” Teicholz lists a number of significant scientific problems with the way in which Keys and his group conducted their studies and analyzed their data, but Keys’s work captured the attention of scientists and the public alike. In one of his papers he boldly asserted the following:

The high frequency of coronary heart disease among American men, especially in middle age, is not found among many other populations, notably among Japanese in Japan and Bantu in South Africa. Experimental, theoretical, and epidemiologic evidence implicates the diet, and especially the fats in the diet, in these differences. The search for other factors so far has been unsuccessful.

It seems probable that the more common fats of the American diet, when eaten in large amounts as is often the case in the United States, may contribute to the production of relative hypercholesterolemia and so to atherogenesis. Further, there is suggestive evidence that fatty meals may induce hypercoagulability of the blood and inhibition of fibrinolysis.25

The idea of fat clogging up the tiny coronary arteries that feed oxygen to the heart muscle was easy to grasp. Several studies ensued that seemed to further show that low-fat diets protected against heart attacks, and once again even though these studies now seem fatally flawed, the idea that eating fat causes heart disease became so embedded that scientists saw proof of its validity wherever they looked. Soon, we all believed that eggs and steak were veritable poisons. The American Heart Association began recommending that we limit our intake of saturated fats and everyone started substituting trans-fat-laden margarine for butter.

It now appears that all of this might have been one enormous manifestation of confirmation bias. In a recent paper published in the Annals of Internal Medicine, Rajiv Chowdhury of the University of Cambridge in the United Kingdom and colleagues analyzed the data from 32 studies involving 530,525 study participants and concluded, “Current evidence does not clearly support cardiovascular guidelines that encourage high consumption of polyunsaturated fatty acids and low consumption of total saturated fats.”26 Using a statistical method called meta-analysis,27 these authors found that, taking together all the studies that have been done in this area, there is no evidence that eating saturated fats increases the risk for heart disease.28 The truth was there all along, but apparently scientists were so convinced that the original hypothesis was correct that they consistently made mistakes in both study design and the interpretation of results in order to confirm what they believed to be the truth.29 None of this was scientific fraud. This is not a case of scientists making up data or faking results. Rather, if the Chowdhury et al. findings hold up, then we will have witnessed one of the great examples of confirmation bias affecting science and public health policy of all time. Teicholz goes on in her article to assert that the recommendation to eat less fat has led to increased carbohydrate consumption, which in turn has increased the rate of diabetes. Now, according to a randomized clinical trial, it turns out that eating a low-carbohydrate diet may be better for weight loss and cardiovascular disease risk reduction, including lowering cholesterol and triglyceride levels, than a low-fat diet.30

No doubt we will read a great deal more about this issue in coming months and years. The proponents of low-fat diets will probably not agree too easily that they have been wrong for the last 50 plus years. There may be other health risks to eating too much fat, including increasing the rates of some forms of cancer. The Chowdhury meta-analysis will likely be challenged: such analyses always require that the investigators make important decisions, including which studies to include in the overall database and which statistical tests to use. Those decisions are always open to challenge.

Ironically, even scientists who conduct meta-analyses can fall into the confirmation bias trap. Felicity A. Goodyear-Smith of the University of Auckland noticed that two research groups consistently came to opposite conclusions after conducting meta-analyses of the recommendation that doctors screen patients for depression. One group routinely found that such screening is effective in identifying people with depression and reducing the rates of depression among them, and the other group found that screening for depression is ineffective. When Goodyear-Smith and her colleagues looked at the individual studies included in each of the meta-analyses, they found that each group tended to include in its analyses only those studies that supported its point of view. Each group found problems with the methodology of studies that did not support its position and felt justified in excluding those studies from the analysis. Goodyear-Smith et al. hypothesize that “authors may have a belief of what the outcome of their meta-analysis will be before they start, and that this belief may guide choices that are made on the way which may impact their review’s results. This is a form of confirmation bias.”31
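To see concretely how study selection can tilt a meta-analysis, consider the following minimal sketch in Python. It is purely illustrative: the effect sizes and standard errors are hypothetical numbers, not data from the depression-screening or diet studies discussed here, and the pooling method shown (a simple fixed-effect, inverse-variance average) is only one of several approaches real meta-analysts use.

```python
# Purely illustrative sketch with hypothetical numbers, showing how the choice
# of which studies to include drives a meta-analytic summary.
# Each tuple is (effect size, standard error) for one imaginary study;
# larger positive effects favor the intervention.

def pooled_effect(studies):
    """Fixed-effect (inverse-variance) pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for _, se in studies]
    estimate = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = (1.0 / sum(weights)) ** 0.5
    return estimate, se_pooled

all_studies = [(0.30, 0.10), (0.05, 0.08), (-0.02, 0.09), (0.25, 0.12), (0.01, 0.07)]

# One review team admits only the studies that show a clear benefit ...
group_a = [s for s in all_studies if s[0] > 0.10]
# ... while a rival team rejects those same studies as methodologically flawed.
group_b = [s for s in all_studies if s[0] <= 0.10]

for label, subset in [("All studies", all_studies),
                      ("One side's selection", group_a),
                      ("The other side's selection", group_b)]:
    est, se = pooled_effect(subset)
    print(f"{label}: pooled effect = {est:.3f} (SE = {se:.3f})")
```

With these made-up numbers, the full set of studies yields a small pooled effect, one side's selection yields a large one, and the other side's yields essentially none: the same body of evidence appears to support whichever conclusion guided the selection.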

If in the end, however, the reports of Chowdhury et al. that challenge the proscription of high-fat diets hold up, the overall result will undoubtedly be to confuse the public. For some of us, the thought that it is now okay to eat eggs, butter, liver, and whole milk products is not far from being told that it is safe to consume arsenic. The stern advice to eat less sugar, which is rapidly replacing the low-fat recommendations, will be challenged as well. The question then becomes: how are we supposed to figure out who is right and what is safe to eat? We have no easy answer for this conundrum because it rests in the nature of human psychology (i.e., the tendency to confirmation bias) and scientific progress (the constant production of new data and results, some of which inevitably challenge what has previously been thought to be correct). It is unlikely in our view that most of us will ever become scientifically sophisticated enough to spot errors in the medical literature when they first occur. Even some of the world’s top scientists failed to see the problems in Keys’s original work. The solution will have to come from better reporting of scientific results, and this will mean improved behavior by both scientists and the media. Scientists believe in their work, as they should, but must be more cautious about how they explain things to the media. Right now, in order to gain market share, medical schools and teaching hospitals rush to publicize new results as quickly as they can. Forbearance would be a good trait to adopt in these cases. The media often latch onto anything that sounds novel or controversial, thrusting it into public attention prematurely and then failing to acknowledge when studies are not entirely conclusive or when new findings contradict old ones.

That’s My Story and I’m Sticking to It

Now that we have established that everyone is prone to confirmation bias, we want to explore why confirmation bias is so steadfast. Experiments have shown quite clearly how stubbornly resistant our minds are to change, even when the original evidence we used to come to a conclusion was flimsy and new evidence completely invalidates what we believe. In a famous experiment, Craig A. Anderson, Mark R. Lepper, and Lee Ross of Stanford University told subjects that the success or failure of firefighters could be predicted by their scores on what they called the Risky-Conservative Choice Test (RCC Test). Some of the subjects were given examples in which a high score predicted success, and others were given examples in which high scores predicted failure, but either way the test was actually completely fictitious. Yet even after the research subjects were debriefed and told that the RCC Test was totally bogus, in subsequent experiments they still maintained a belief in a relationship between the RCC Test and the likelihood of firefighter success. The investigators themselves seemed somewhat amazed at the outcome of the experiment, even though it confirmed their hypothesis:

In sum, the results strongly support the hypothesis that even after the initial evidential basis for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs. That subjects’ theories survived virtually intact is particularly impressive when one contrasts the minimal nature of the evidential base from which subjects’ initial beliefs were derived (i.e., two “data points”), with the decisiveness of the discrediting to which the evidence was subjected. In everyday experience, our intuitive theories and beliefs are sometimes based on just such inconclusive data, but challenges to such beliefs and the formative data for those beliefs are rarely as decisive as the discrediting procedures employed in this study.32

We do not of course mean to imply that maintaining one’s theories, beliefs, and positions is always an unreasonable thing to do. Obviously, a hasty willingness to change one’s mind can cause havoc in an individual’s life, in an organization, or even in a whole country. Very often, sticking with a course of action, even when there are “bumps in the road,” is exactly the right thing to do.

What we are talking about here is not a recommendation for changing one’s mind at the drop of a hat but rather the need to constantly evaluate evidence. Even when “staying the course” is the chosen path, we should still carefully examine the evidence and not turn a blind eye to it. What Anderson and colleagues and others have discovered, however, is that we are prone to develop beliefs quickly and on the basis of flimsy evidence. Then, once these beliefs are laid down, confirmation bias sets in so that everything we see next is now twisted to conform to the original belief. The doctor who needlessly prescribes antibiotics thinks he is seeing a pattern of sore throat, fever, rash, and general malaise that he believes he has seen respond to antibiotics before. Hence, for him prescribing antibiotics is fully in accord with a theory he thinks is based on experience. Similarly, the scientists who insisted for so many years that high-fat diets raise cholesterol levels and increase the risk for heart attacks had that notion so ingrained in their minds that they continuously designed experiments and interpreted data that “confirmed” the hypothesis without recognizing that what they were doing was perpetuating the same error.

Well, If That Actress Has a Gun …

So far, we have discussed individual susceptibility to confirmation bias, but perhaps even more ominous is the way the bias can commandeer whole groups to fall for scientifically incorrect ideas. Research shows that groups of people are just as prone as individuals to exhibit bias in the way they interpret information that is presented to them on an issue.33 Recently, a friend who knows about Jack’s interest in science denial sent him a Facebook post that included a picture of a famous actress sprawled out in a provocative pose declaring that she and her husband have guns at home. The post (see figure 2) was made by a group called the Heartland Institute, a conservative think tank that defends cigarette smokers, prides itself on promoting “skepticism” about climate change, and opposes gun control laws. The post—whose accuracy we have been unable to confirm—was followed by many comments applauding the actress for her stance.34

FIGURE 2 This image of the prominent actor Angelina Jolie from the British newspaper Daily Mail was also posted on social media by the Heartland Institute. It quotes her supporting the incorrect notion that personal gun ownership is effective protection against home invaders.

Here is Jack’s posted response:

Unfortunately, here I disagree with Heartland. Even if it is true that Angelina Jolie believes she and her family are safer if there is a gun in her home, that belief is incorrect. The risk of a famous actress, or anyone else for that matter, who lives in her home being killed or seriously injured by that gun is many times higher than the chance that she will ever get to shoot an intruder. In fact, if an intruder does enter someone’s home, a gun owner is more likely to be killed than a non-gun owner because most burglars do not wish to shoot anyone and only do so if they are threatened by someone with a gun. People who bring guns into their homes do not, of course, believe that anyone in their house will ever accidentally shoot someone or that they or a family member will ever get angry enough to shoot someone or depressed enough to shoot themselves. This is classic denial. In fact, the rates of family domestic gun violence are far higher, even among well-educated and seemingly happy families, than are the rates of home intrusion. This has nothing to do with one’s right to bear arms, which is protected by the second amendment. Rather, it has to do with educating the American public that owning a gun for protection is not safe and not protection.

Immediately after this came a flood of comments such as “typical liberal drivel!! And long winded too!” and “You are a f—ing idiot.” Jack tried again with some facts:

You might try to have a look at empirical studies done on this subject. You will find that having a gun at home is a health risk. I am not debating the Second Amendment here, which absolutely protects your right to own a gun. Whether you choose to own one or not is a choice that, if you look at the data, is not supported.

The responses to this were things like “Why don’t you just slap them or ask them nicely to leave while I shoot ’em?” and “If they were about to rape you, you could urinate on them. What is wrong with you?? Lolol” and

Jack, you are an idiot. If you don’t believe in guns. Then don’t have guns. As for me and most educated people, we have guns. I feel safer when I carry concealed and I feel safer at home. If someone breaks into your home, dial 911 and hope that the police get there before you get shot. Someone that is willing to break into an occupied home, is willing to shoot those occupants. I would rather be judged by 12 than carried by 6.

Jack even provided references to the published scientific literature to back up his claims. But what is fascinating is that none of his interlocutors from the Heartland site addressed the data. Although the great majority of studies examining the safety of private gun ownership have reached the conclusion that it is unsafe, there are a handful of published studies with conflicting data. That is always true in any scientific arena. These are not easy studies to do. One has to trawl through large data sets to identify (a) who has a gun, (b) who was shot with their own gun or a gun owned by someone with whom they reside, (c) who was the victim of an armed home invasion, and (d) what was the outcome of the invasion. Merging data sets like these can be a daunting task, especially since there are inevitably missing or problematic data: entries that need to be excluded because they make no sense (e.g., an entry for a 400-year-old man) or that are ambiguous (what exactly was the relationship of the person shot to the person who owned the gun?). There are also always issues about what time frame to pick, which region of the country to study, how to ensure that the sample is representative of the general population of gun owners (no skewing in terms of age, sex, socioeconomic status, etc.), and which statistical methods to use to analyze the data.

The leading academic voice supporting the notion that firearms are necessary for self-defense is Gary Kleck, a professor of criminology at Florida State University. Kleck is a respected and often-quoted academic who has done extensive research and writing on gun control. He insists that the evidence shows that guns are often used by private citizens in the prevention of crimes and that laws restricting gun ownership increase the risk for gun violence. Perhaps his most famous assertion is that 2.5 million Americans use guns to defend themselves against attackers every year.35 This assertion has aroused derision on the part of a host of other academics who insist that the research methodology used by Kleck and coauthor Marc Gertz is completely invalid. Harvard’s David Hemenway, for example, asserts that the number is grossly inflated.36 The Kleck and Gertz study was done by a telephone survey of a relatively small number of people whose responses were then projected to a national annual number. There is reason to believe that many people who participate in such telephone surveys misremember or even falsify what really happened and also that the extrapolation of the results from the small sample to a national statistic may have been inexact.

On balance, the weight of data showing that private gun ownership is unsafe is incontrovertible. But the gun enthusiasts who confronted Jack could have brought up Kleck’s work and tried to base their argument on published data. The exchange would then have moved away from epithets and sloganeering. But nothing like that happened. Jack tried several times to push the data aspect, each time flatly stating what the studies show. He deliberately avoided attacking the gun proponents on a personal level, getting angry at being called an “idiot” and “a moron,” or trying to defend his intelligence. Each time he pointed out the data, however, the opposition seemed to get even angrier and more dogmatic. Indeed, they seemed to be having a good time. This may seem odd to some; after all, we were talking about people getting shot and killed. Whether you think you are going to shoot an armed burglar or rapist or your child is going to find your gun and shoot him- or herself, there is absolutely nothing entertaining about the outcome of gun violence. But the impression one gets when reading the Facebook exchange is of a group rallying itself into a fervor of righteous indignation and solidarity.

The impression Jack got from this is that the gun proponents who responded to his data-driven overtures extracted only what they already believed. Jack said, “Studies show that a gun in the home is more likely to be used to shoot its owner or a family member than an intruder,” and they replied, “You idiot.” No one said, “Let me think about those facts you are bringing up.” No one asked for further information or clarification of the data. The assumption was simply made that Jack is a liberal who is trying to take away their Second Amendment rights. This was a clear lesson in the reality of confirmation bias. Gun proponents believe that their opponents are completely naïve about the reality of violence in the world, live in an ivory tower, are privileged and elitist, and would not hold to their beliefs if they themselves were challenged by someone with a gun. They seem to believe that home invasions by armed intruders are rampant (they are actually quite rare) and that because they themselves supposedly know how to store their guns safely and educate their children about how to use them properly none of the scientific data about the dangers of having a gun at home apply to them. No matter what Jack said, these beliefs were continuously reaffirmed. Jack’s foray into the Heartland reified what the gun proponents already believed and gave them a communal activity they seemed to enjoy. In the end, Jack probably succeeded in making them even more convinced that guns are not only safe but also necessary to own. The group dynamic of defeating a common enemy simply made this confirmation firmer.

Michael Shermer, the provocative editor of Skeptic magazine, recently wrote that he once believed that gun control violated his libertarian principles:

Then I read the science on guns and homicides, suicides and accidental deaths… . Although the data to convince me that we need some gun-control measures were there all along, I had ignored them because they didn’t fit my creed… . In several recent debates with economist John R. Lott, Jr., author of More Guns, Less Crime, I saw a reflection of my former self in the cherry picking and data mining of studies to suit ideological convictions.37

This is quite a confession from someone who has based his career on rigorously sticking with the evidence, even when it disconfirms very popular notions. Yet even Michael Shermer acknowledges the power of confirmation bias in making us ignore what we are afraid will contradict what we want to believe. How much more difficult will it be to resist the confirmation bias for those of us who are not as devoted to evidence as Michael Shermer?

Who Says Emotions Are Harmless?

In his book Social Networks and Popular Understanding of Science and Health, Brian G. Southwell explores many of the ways in which group membership strengthens an individual’s tendency to adhere to confirmation bias. He points to extensive research showing that when people are provoked to a heightened emotional state they are more likely to share information. When that state is an unpleasant one, like fear or anger, emotionally aroused people come together to seek comfort from each other. “Much of the discussion we seek with others in moments of elevated emotion,” Southwell writes, “is not necessarily focused on new factual information sharing as much as it is focused on reassurance, coping with stress, and ritualistic bonding.”38 The purpose of groups like Heartland is not to help people understand both sides of an issue or to come to a reasoned position about an important health decision; it is to supply exactly the kind of reassurance and bonding that Southwell describes.

Membership in such groups may begin with someone—let us call her Jane—who is not sure how she feels about having a gun. In fact, she doesn’t own one herself. Jane is, however, a cigarette smoker who despite trying everything from self-help groups to hypnotism to medication has been unable to quit. Every day, it seems, she is confronted with accusations that she is weak-willed for not being able to stop smoking, that she is poisoning herself and others, and that no one really wants her around. She is ordered outside whenever she wants—perhaps needs is a better way to put it—to smoke. That makes her feel as if the whole world is shunning her. Cigarette smokers as a group tend to have higher rates of depression than the general population as it is.39 Jane’s level of self-confidence cannot be helped by constantly being told that there are many public places in which she is not welcome if she is going to smoke. Sometimes Jane feels terrible about herself and other times she gets angry. After all, she is a citizen and has rights just like everyone else. One day she reads an article in her local newspaper about an organization called Heartland that defends the rights of cigarette smokers. In fact, Heartland defends people from that clique of self-righteous, wealthy liberals who are always trying to make everyone’s life miserable by telling them that everything we do is going to kill us. We are not supposed to smoke, drink, eat the foods we like, drive our cars, or have guns. And for the privilege of all these rules we have to pay higher and higher taxes.

Jane starts following Heartland on social media. She finds it comforting to interact with so many people who feel similarly abused and neglected. She finds out that not only are there many other people who smoke, but there are also people who do not feel there is anything wrong with it. These people also warn that the liberals who want to make her feel like an outlaw for smoking don’t care if she gets shot during a burglary or raped. They say that everyone has a right to smoke and to own a gun and that all the handwringing about global warming is just so much hot air (pun intended). Soon, Jane begins to worry for the first time that her house could be broken into any day. What would she do? She would be completely defenseless. There would be no way to protect herself or her children from one of those crazy maniacs you hear about all the time on the news who shoot people just because they feel like it. Jane buys a gun.40

When Jane reads the comments of someone like Jack, all of her anger is aroused. For her, Jack is not really just trying to start a calm discussion about scientific studies about gun ownership. Everything he says is completely reminiscent of every hectoring liberal telling her she is a bad person. His every word confirms her belief that Jack is exactly the kind of person against whom she must defend herself. After years of being told each time she lights up or throws out her garbage that she is contributing to the destruction of the world, Jane now joins her online social group in gleefully attacking Jack.

There are of course a number of mechanisms that go into Jane’s attitude about guns and her decision to join a group like Heartland. She believes that there is a conspiracy against her and the members of her new group, for example. But here we wish to point out that even though Jane has never read any of the scientific papers for or against gun ownership, she has found a home among a group of total believers. Thus, the Internet, which is supposed to provide us with all the information we need to make rational decisions and choices, has actually reified an irrational idea in Jane’s mind and prevented her from even trying to consider disconfirming evidence for her point of view. Furthermore, Jane came to the conclusion that Heartland’s followers must be right about gun ownership when she was in a heightened emotional or “affective” state. Strong evidence shows that people are more prone to believe what they are told whenever they are in such a state, whether it is a positive or negative one. That is, regardless of whether we are deliriously happy or morbidly sad, we are likely to believe what we are being told. “Affect,” writes Paul Slovic, the eminent behavioral psychologist, “is a strong conditioner of preference.”41 This is why politicians know that they need to get their audience emotionally aroused before trying to convince them.

Another aspect of Jane’s adoption of the pro-gun ownership hypothesis is her feeling of dread regarding potential home invaders. Even though the evidence does not support the likelihood that most people will actually ever experience such an event, gun proponents paint vivid pictures of the instances in which a home invasion by armed intruders has occurred. This is, of course, reinforced by what we see on television and in movies. As Slovic puts it, “Perceptions of risk and society’s responses to risk were strongly linked to the degree to which a hazard evoked feelings of dread.”42 Most important for our understanding of confirmation bias is that beliefs laid down during times of heightened emotion, especially those involving terror and dread, are highly durable and resistant to disconfirmation. Every time we re-experience those emotions, the belief is also brought immediately to mind.43 It is much easier for someone to induce us to summon up a feeling of dread than to get us to work deliberatively through the data on whether or not there is a protective advantage of having a gun at home. So what we learn when we are emotionally aroused is thereafter hard to shake.

Southwell stresses that people who lack self-confidence cannot be expected to contradict the ideas of a group that comforts them:

We know that people are not particularly likely to share information they do not think they understand. Absent direct encouragement to challenge or engage with scientific ideas, such as research on medical innovations or biotechnology, many people will silently defer to those they see as relatively expert… . Individuals who feel as though they can grasp key ideas are more likely to subsequently share those ideas with others.44

The question, of course, then becomes, how can we boost the confidence of individuals like our fictitious Jane so that they can grasp enough about a scientific discussion to make a reasoned decision? Without doing so, per Southwell, we are left with individuals who will make up their minds first on the basis of the emotional aspects of an issue, like fear and anger, and then see everything that is subsequently said about the issue as directly confirming what they believe.

It’s All in Your Brain

What is it that we concentrate on when we become swayed by a group like Heartland that promulgates ideas that defy scientific evidence? Does the group actually change our minds about the issues at hand, or is it the social affiliation itself that induces us to join? How is the fixed conviction that is so ferociously defended by confirmation bias actually formed in the first place? In a very elegant experiment, Gregory S. Berns and his colleagues at Emory University sought to understand whether, when individuals follow an incorrect group consensus, it is because they are making an active decision to go along with the group or because their perception of the issue has been altered.45 In this experiment, social conformity was associated with a decrease in activation of the frontal lobe, the part of the brain that controls reason and other executive functions, and an increase in activity in the parietal and occipital lobes, regions of the brain where perceptions are formed. The conclusion from these data is that social conformity is not a conscious decision but rather is based on actual alterations in how we see reality. When subjects did act independently of the group in the Berns et al. experiment, there was more activation in the amygdala. The amygdala, as discussed in the last chapter, is buried deep in one of the more primitive parts of the brain and has been shown to be especially important in the recognition of danger and the psychological and physical manifestations of fear. Our conclusion from this, which differs somewhat from Berns’s interpretation, is that defying your group is a frightening proposition that must hold the promise of some reward in order to be tenable. In fact, other studies have shown that a decrease in biased thinking is associated with a decrease in amygdala activity.46 Somehow, we need to reduce the fear that is understandably associated with changing one’s mind.

The Berns study of brain activation patterns suggests that people do not become affiliated with anti-science groups simply because they are lonely and want to “belong.” Rather, the way they perceive an issue is actually altered. This change in brain activity solidifies the new beliefs as part of a fixed biological structure that then resists further alteration. To now defy the group’s beliefs activates the amygdala and causes fear. Confirmation bias, then, reinforces the new safety of believing in the group’s point of view.

Evaluating a belief that we are sure is true, and challenging that belief, even to the point of disconfirmation, is an exhausting process. To do it well, we need to suppress our emotions and try to think as rationally and clearly as possible. To behavioral economists, this means favoring System 2 over System 1. As we discuss throughout this book, these two systems represent a rough distinction between the more sophisticated and recently evolved System 2 centers of our brains located mainly in the prefrontal cortex (PFC), sometimes referred to as the rule-based brain, and the more emotional and primitive System 1 centers, located in what is referred to as the limbic cortex, also called the associative brain.

Lest we leap to the conclusion that the associative brain is somehow inferior to the rule-based brain, it is important to acknowledge that the former is necessary for intuition, fantasy, creativity, and imagination.47 Without it, we would probably have no art and no empathy. Humans are the only truly altruistic species,48 and without an associative brain we would not witness individuals giving money to homeless people on the street when there is absolutely no possibility of material reward for their generosity. The rule-based brain can be the driver of rather dry behaviors, like deliberation, formal analysis, and strategic planning. It is, however, the part of the brain we need to use if we are going to make decisions that are evidence-based and not tainted by emotion.

It is also not the case that the preferential use of the rule-oriented, or System 2, brain indicates that a person is smart. Cognitive ability, including intelligence, is unrelated to the tendency toward confirmation bias.49 Smart people, including Nobel Prize-winning scientists, are entirely capable of seeing only what they already believe.

Moreover, as Daniel Kahneman and his colleagues have shown through extensive research, the more tired and depleted we are, the more prone we are to relying on the easiest routes to a decision and therefore to falling back on confirmation bias. The human brain is an incredibly energy-hungry organ, burning glucose for fuel at a higher rate than any other part of the body. When we are tired, hungry, depressed, or distracted, it is harder to spend that energy, and our mental processes default to the easiest method of reaching a decision. This usually means employing only the limbic, or System 1, parts of the brain, which leaves us prone to errors made via shortcuts like confirmation bias.

Confirmation Bias: As Good as Eating a Candy Bar

Using confirmation bias to make decisions may actually make us feel good in the way that people experience the positive effects of alcohol or opiates. Although this is again an oversimplification, there is a “reward” pathway in the brain that runs from a structure in the brainstem called the ventral tegmental area (VTA) to one deep in the lower part of the brain called the nucleus accumbens (NA, NAc, or NAcc). The pathway releases the neurotransmitter dopamine whenever we (or almost any animal) anticipate or experience a rewarding or pleasurable stimulus. Studies have confirmed, for example, that this pathway is activated and dopamine released into synapses (the spaces between brain cells or neurons) whenever we sip an alcoholic beverage. Alcoholics, unfortunately, drink to the point that the pathway becomes exhausted and dopamine depleted, so that they need to drink ever-increasing amounts to get the same effect. But for most people, an occasional drink activates the pathway and creates a warm, pleasurable feeling. Research also indicates that the NAc is activated before making a risky financial choice.50 This reinforces the thrill of investing that many in the financial world experience.

According to Noreena Hertz, author of Eyes Wide Open: How to Make Smart Decisions in a Confusing World, “We actually get a dopamine rush when we find confirming data, similar to the one we get if we eat chocolate, have sex, or fall in love. Having jumped to a particular conclusion, we focus on information that supports it, and ignore everything that contradicts or doesn’t conform to our initial analysis.”51

Once again it is important to remember that confirmation bias, like all of the psychological mechanisms we identify that motivate denial of scientific evidence in this book, is by no means “irrational” even though it can lead to incorrect ideas. Rather, confirmation bias is a part of the way the human brain naturally works. Indeed, we are rewarded for using confirmation bias by the powerful actions of dopamine in the pleasure centers of our brains. In other words, it feels good to “stick to our guns” even if we are wrong. When understood in terms of human brain physiology in this way, getting a person to even momentarily shun what she believes in order to attend carefully to potentially disconfirming evidence requires some guarantee that the risk of doing so will produce a reward at least equal to the one she will get by succumbing to confirmation bias. Research needs to be done, then, to determine what rewards will be adequate to motivate adults to look at the data with an open mind. Is that reward based in an altruistic feeling of doing the right thing for other people? Would those who are vehemently opposed to GMOs agree to at least consider the possibility that GMOs are safe if doing so would make them feel that they are potentially helping starving children in Africa survive or preventing Asian children from going blind? Or perhaps the reward would come from securely joining a new group. Can a person rigidly against vaccination see a possible reward—and its concomitant dopamine surge—if he is shown that changing his mind would put him into contact with a large group of parents, scientists, and doctors whose main interest is in protecting children from dying from preventable diseases like measles and diphtheria?

Of course people vary in how much risk they are willing to take—including how much risk they are willing to take in changing their minds from a fixed belief to one that is based on new evidence. Interestingly, some of this interpersonal variability may be caused by differences in two genes, one of which is active in the dopamine system.52 We inherit different forms of the same gene from our parents and each form is called an allele. There are five proteins in the brain that act as receptors for the neurotransmitter dopamine, one of which is called the DRD4 receptor. The gene that encodes this receptor has a variant called the 7-repeat allele, and people who are carriers of it take 25% more risk than individuals without the 7-repeat allele of the DRD4 gene. Hence, how much of a boost a person gets from changing or staying with a belief could be in part a function of what kind of dopamine receptor genes he or she has inherited. Of course, other genes are also involved, including genes involved in the serotonin receptor system,53 as are a host of nongenetic factors. Despite these individual differences, some basics of brain biology and function can demonstrate just how naturally human the tendency toward confirmation bias is.

Just Give Me the Facts

If confirmation bias is so ingrained in our very humanity, how can we attempt to disarm it? We do at least know what does not work. Merely giving people facts that contradict their beliefs is not sufficient to disconfirm those beliefs. In fact, evidence suggests that such a strategy can actually reinforce confirmation bias because people tend to be much gentler when evaluating evidence that confirms their beliefs than when evaluating contradictory evidence. There is almost no such thing as a scientific paper without some flaws. In fact, it is customary in the final section of scientific papers for the authors to list all of the limitations of their study. They may acknowledge that their sample size was rather small, that the subjects in their study were not completely representative of the general American population (e.g., most of them were from a single socioeconomic class), or that blood samples from the research subjects were frozen and stored rather than being tested immediately. The conclusion the authors reach from this scientific mea culpa is that “more research needs to be done.” Now, if the authors can identify flaws in their study even though they clearly believe in the importance of the work, it should not be difficult for independent readers to find infelicities as well. But if the study supports our ideas then we tend to gloss over such shortcomings, whereas if it contradicts them we attack with ruthless vigor, picking apart every detail possible.

Charles G. Lord of Texas Christian University and his coauthors demonstrated this point in an experiment in which they recruited undergraduates who either supported or opposed the death penalty. The students were asked to read two papers supposedly representing actual research, one finding that capital punishment is a deterrent to crime and the other finding that it is not. As the investigators predicted, the students who favored the death penalty rated the paper that allegedly found a deterrent effect as more credible than the one that found no such effect, while the students who were against the death penalty made the opposite ratings. Thus, at the end of the experiment, rather than being brought closer together by having had the opportunity to review data on both sides of the issue, the students were actually more convinced that they were right. Lord et al. concluded, “The net effect of such evaluations and opinion shifts was the postulated increase in attitude polarization.”54

Proving Ourselves Wrong

There is at least one instance in which an understanding of confirmation bias has revolutionized the treatment of a medical condition. Aaron T. Beck recognized that people suffering from depression focus most on things in their environment that give them a reason to be depressed.55 Only negative information is consumed, and anything positive is either not noticed or dismissed as unreliable. Beck developed a groundbreaking psychotherapeutic method for challenging this mode of thinking called cognitive therapy. In some ways, cognitive therapy is devoted to undermining the confirmation bias of patients with depression by restoring their ability to recognize evidence of both positive and negative things in their environments. There are even brain imaging data suggesting that cognitive therapy helps restore the ability of the prefrontal cortex to muster reason to regulate the emotional outpourings of the amygdala and other limbic structures. Today, cognitive therapy stands as one of the most effective interventions for depression and a number of other psychiatric disorders. Jack has long advocated for Beck as a great candidate for a Nobel Prize.

For the most part, blind application of confirmation bias commits us to maintaining mistakes that almost always ultimately result in harm. The people who think that genetically modified foods are dangerous to human health passionately believe that they are protecting everyone from huge but vaguely specified risks that are being deliberately obfuscated by large corporations and their paid scientific lackeys. Ironically, anti-GMO advocates are generally among the people most horrified by poverty and starvation and expend a great deal of effort fighting against income inequality and discrimination. They might be expected to be the very people who would wholeheartedly endorse GMOs as a way of saving the lives of millions of impoverished Africans and Asians. But at this point, explaining to them that GMO foods are in fact not dangerous and that without them millions of people stand to suffer diseases and hunger that could be averted has virtually no impact. They are adamant and organized and admit only those facts that seem to support their point of view.

The task of turning our attention to a reasonable and ongoing evaluation of evidence is daunting in the face of the power of confirmation bias. As we suggest elsewhere in this book, it will require a major shift in the way we educate children about science. This will necessarily involve turning away from memorizing facts that, however important they may be, are seen as boring and oppressive, and toward an understanding and appreciation of the scientific method and the way science operates. There are, however, some shorter-term innovations that may be at least partially helpful.

In the Anderson, Lepper, and Ross experiments mentioned earlier in this chapter, the investigators also established that having people write down their reasons for coming to a conclusion made them even less flexible later on when confronted with disconfirming evidence. They asked some of the research subjects who had read the information that the bogus RCC test could predict the future success or failure of a firefighter to write down how they had come to the conclusion that there is a relationship between RCC test score and success or failure. Compared with participants who were not asked to state their thinking explicitly in written form, these subjects were less likely to change their minds after the debriefings informed them that the whole relationship was fictitious and based on nonsense rather than actual data. Being required to write down how one understands a theoretical relationship, even one that later proves to be false, thus reified the belief. The researchers speculate on this finding:

It suggests … an interesting potential antidote for unwarranted belief perseverance in the face of later challenge to the evidence on which our beliefs were based.

Would such perseverance effects be eliminated or attenuated, for example, if subjects could be led, after debriefing, to consider explicitly the explanations that might be offered to support a contention in opposition to their initial beliefs? Alternatively, could subjects be “inoculated” against perseverance effects if they had been asked, at the outset of the study, to list all of the possible reasons they could imagine that might have produced either a positive or a negative relationship between the two variables being studied … ?56

Is there a way to engage open-minded people—at least those people who have not yet irrevocably laid down their beliefs and joined like-minded groups—in exercises in which, instead of telling them the facts, they are asked to imagine different sides of a controversy, different outcomes, and different interpretations? Neurobiologists and cognitive psychologists know that passive learning is far less efficient than learning that occurs when students are performing a task, which could include writing down their responses to various alternative scenarios. An example would be to ask GMO skeptics to write down how they would deal with evidence that such foods are not in fact harmful to human health. We would not require the participants to agree that this was the case, merely to write down what they think would happen, how they would feel, and what the consequences would be if that turned out to be correct. This is, of course, a scientifically evaluable suggestion—a testable hypothesis is that such exercises would result in people who are more open-minded and willing to at least look at the data. We hope that such experiments are already taking place and that educators will devise schemes to bring these ideas—if and only if they prove successful in the laboratory—into widespread public use. Most important, we need to challenge people to think for themselves by asking them to imagine alternative scenarios, including those with which they tend to disagree. We stress that lecturing, scientists’ usual method of trying to rectify entrenched misconceptions, does not work. The process has to be far more interactive and iterative.

We invite our readers, then, to try this procedure in a personal experiment. Think of something you feel strongly about that you believe is supported by strong scientific evidence. Now, go to a quiet place when you are relatively free from stress or distraction and write down what you know about the arguments on the other side of your belief. Also, write down what it would take for you to change your mind. You may find that after doing this you will feel the urge to check out the data a bit more than you did in the past and to listen to alternative ideas. You may not wind up changing your mind, but either way we predict that you will feel more secure in and satisfied with your point of view.

It is also important to approach people in the right way and at the right time. Because tackling the task of evaluating potentially disconfirming evidence requires a great deal of exhausting effort on the part of our prefrontal cortex, and because we cannot use electrical stimulation to activate this region of the brain on a widespread or regular basis, we need to devise ways of attracting people’s attention when they are well rested and relatively open to novelty, not in states of depression or high anxiety. This is a job for advertising and marketing experts, who have perfected the art of delivering persuasive messages in ways that maximize results. We are not advocating manipulating the public, but rather borrowing what these experts in information transfer have learned to develop more successful ways of encouraging people to listen to the facts.

It will also be critically important to recognize that people hold onto their beliefs for profound emotional reasons that are reinforced by predictable activity in specific brain circuits and networks. Retaining an idea and affiliating with like-minded members of a group activates the reward centers of our brains, making us feel righteous, safe, and loved. If we are going to move people in the direction of dispassionate regard for scientific truth, we must accept the fact that their current beliefs are based on entirely understandable principles. Indeed, the confirmation bias is not a bizarre or even irrational phenomenon but rather the result of the way our brains have evolved to make sure we remain steady, cooperate with friends and associates, and are generally in a positive state so we can succeed. As Lisa Rosenbaum elegantly states in her New England Journal of Medicine article:

Among those of us in the business of evidence-based risk reduction, terms such as “social values” and “group identities” may elicit a collective squirm. But developing an understanding of how such factors inform our perceptions of disease is critical to improving the health of our population. Certainly, understanding of one’s risk for any disease must be anchored in facts. But if we want our facts to translate into better health, we may need to start talking more about our feelings.57

Indeed, if we want people to understand that owning a gun is dangerous, that stopping GMOs contributes to disease and starvation, and that overprescribing antibiotics is counterproductive, we need to empathize with the powerful emotions and brain circuits that underlie denial of these scientifically unassailable facts. We need to encourage people to write down their ideas, try to think of alternatives, and recognize when it is their emotions rather than their frontal lobes that are guiding their decisions.

As we have shown, confirmation bias can strike both groups and individuals in different but important ways. In the second part of this book, we turn to more “individual” forms of incorrect scientific beliefs, by which we mean incorrect beliefs that thrive in the individual mind, not because of group dynamics in particular. We begin with one of the most intriguing examples: the desire to attribute causality in situations of immense uncertainty.