
Denying to the Grave: Why We Ignore the Facts That Will Save Us - Sara E Gorman, Jack M Gorman (2016)

Chapter 5. Avoidance of Complexity

Every day scores of scientific articles are published that report on the results of studies examining the causes and treatments of diseases. Each of these articles usually has five sections:

1. Abstract: an overall summary of what the paper is about

2. Introduction: an explanation of what the problem is and why the present study may offer new insights

3. Methods: a detailed account of exactly what the investigators did, usually including a statistical section in which the authors explain what mathematical tools they used to decide if their results are meaningful

4. Results: the data collected during the study (full of numbers, tables, and figures)

5. Discussion: the conclusion, in which the scientists explain the significance of their findings, what shortcomings their study had, and what should be done next to further understanding of the field in question

It is hoped that other biomedical scientists and clinicians will read as many of these papers as possible so that they can keep up to date on the newest information. We expect our doctors to be on the cutting edge of medical science. But of course no individual can come close to reading all of these papers, even if he or she narrows the field down by reading only the most relevant topics. Furthermore, scientific papers generally do not make for gripping reading. Although editors of scientific journals expect the papers they publish to conform to proper spelling, grammar, and syntax, they place a premium on conveying scientific accuracy, not on producing page-turners. So at the end of a long day in the laboratory or clinic, what parts of a paper do scientists and clinicians actually read?

The first thing to be ignored is the Methods section. A typical Methods section is full of technical details about who or what the subjects of the study were, what measurements were made, how the data were collected and stored, how informed consent was obtained if human subjects were involved, and what kinds of statistical analyses were done. Of course, this is the most important part of the paper. A careful reading of the Methods section in a paper published in even the most reputable and prestigious journals—such as Science, Nature, and the New England Journal of Medicine—almost never fails to reveal at least one thing that could be questioned, or one assumption or compromise that the investigators had to make in order to get the study done. But the Methods section just seems so “complicated,” and it is easiest to assume that the reviewers of the paper and the editors of the journal have made sure everything was done properly. Why not just look at the abstract, see what was found out, and leave it at that?

Scientists often throw up their hands at what they perceive to be the lack of “scientific literacy” on the part of the “general public” (i.e., nonscientists). Articles are published every year detailing some study that seems to show that Americans are scientifically less adept than residents of other developed countries. Most people take the easy way out, the scientists lament, and prefer believing superficial, often incorrect, descriptions of scientific knowledge and findings to delving into the truth. There is no doubt that these assertions are at least to some degree correct, and we will return to them later in this chapter. But as we described at the beginning of this chapter, even scientists are prone to reach for the easiest available method of getting information. It is only human nature to avoid complexity whenever possible when seeking information about a complicated topic.

Science is not necessarily the most complicated field we deal with, even if scientists sometimes want us to believe that to be the case. The most brilliant scientist may have a great deal of trouble understanding the details of the annual changes to federal accounting regulations, the rules of evidence regarding criminal procedure, or just exactly what is going on in James Joyce’s novel Finnegans Wake. Those things require the appropriate experts: an accountant, a lawyer, and an English literature professor. But no one disputes that modern science involves challenging complexities that seem well beyond the grasp of even the most intelligent nonspecialist. And every area of science, including those within the medical and health fields with which we are concerned in this book, has its own set of complexities, including jargon, standards, and abbreviations. This chapter details another human impulse—avoidance of complexity—that comes naturally to us for a variety of beneficial reasons but also causes an enormous amount of false belief when it comes to rational scientific thinking. We begin by looking at how the quest for simplicity affects us all, even the most seasoned of scientists; detail some of the ways in which our brains are primed to misunderstand science and eschew rationality; and finally propose some possible methods to make scientific thinking more intuitive. We also provide some simplified frameworks to help people break down seemingly complex scientific information into more manageable and interpretable issues. Finally, we offer suggestions about how to tell which science websites are giving us accurate information.

Science Illiteracy Has Multiple Culprits

It does not help matters when a tabloid newspaper, the New York Daily News, laments our general lack of scientific sophistication with the banner headline “Idiocracy: Much of U.S. Doesn’t Buy Big Bang or Evolution.”1 The story tells us that according to a “new poll” conducted by the Associated Press, 51% of people don’t think the Big Bang happened and 40% don’t believe in evolution. The story goes on to make fun of the average citizen who questions the Big Bang “theory” because “I wasn’t there.”

The problems with this kind of media sensationalism are manifold. First, not a single detail about the poll from which the story derives is given, so we have no idea whether it is valid. A scientist or an informed consumer of science should ask “Who was surveyed, what was the ‘return rate’ (the proportion of people approached who agreed to be surveyed), and what does the statistical analysis of the data tell us about whether any firm conclusions can actually be reached?” Second, buried at the end of the story is the good news: only 4% of people doubt that smoking cigarettes causes cancer, only 6% question whether mental illness is a medical condition, and only 8% disagree that our cells carry the genetic code.

But finally, and most important, is the attack on the average American, who is being labeled an “idiot.” We do not need sophisticated psychological experiments to tell us that epithets like that are unlikely to motivate anyone to want to learn more about the Big Bang or about evolution. And it also discourages scientists, doctors, and public health officials from trying to teach them. The article could have started with the remarkable information that a concerted public health campaign has convinced almost everyone that smoking is dangerous, despite the influence of the tobacco industry, or that even though the discovery of the structure of the genetic molecule, DNA, was published only in 1953, it is already the case that nearly everyone agrees that our genes determine what our bodies do and look like. Clearly, we need to do more work to help people understand that the Big Bang and evolution are real and indisputable phenomena and that even religious people for the most part agree with that fact. But calling people “idiots” is a setback. In addition, it takes our attention away from the real question: If people can understand science in many instances, why do they deny health and medical facts in some cases and not others? Saying it is because they are “uneducated” is, in this sense, the easy way out.

Indeed, the media can sometimes be a culprit in fostering scientific misunderstandings and myths. In a penetrating op-ed piece, Frank Bruni of The New York Times wonders why a former Playboy model, Jenny McCarthy, was able to gain so much attention with the notion that vaccines cause autism. He asks, “When did it become O.K. to present gut feelings like hers as something in legitimate competition with real science? That’s what the interviewers who gave her airtime did … and I’d love to know how they justify it.”2 The media will always claim that they present only what people want to hear, and it is undoubtedly more pleasurable for many people to watch Jenny McCarthy being interviewed than to listen to a scientist drone on and on about the immunological basis of vaccination. How, then, will people learn about scientific truths? Rather than brand us as idiots, the media might try a little harder to educate us about science.

We Are Not Stupid

It is important to recognize that the issue here is not one of intelligence. It is all too often the case that scientists, physicians, and public health authorities assume that anyone who fails to agree with the conclusions reached by scientific inquiry must be simply stupid. In accord with that viewpoint, educational interventions are developed that seem overly simplistic or even condescending. But the retreat from complexity and the unwillingness to plumb the depths of scientific detail afflict even the smartest among us.

We cannot, however, lay the blame for fear of scientific complexity and pandering to the simplest explanations solely at the feet of the media, for the scientific community bears a great deal of responsibility as well. In 2013 Dan M. Kahan traced the way Americans were informed about the approval and introduction of a new vaccine against the human papillomavirus (HPV), the sexually transmitted virus that is the cause of cervical cancer.3 When the vaccine was introduced by the drug company Merck under the brand name Gardasil in 2006, it was accompanied by a CDC recommendation for universal immunization of adolescent girls and a campaign by the drug company to get state legislatures to mandate Gardasil immunization. The result was enormous controversy, with anti-vaxxers insisting the vaccine had been rushed to approval and other groups raising the concern that the vaccine would somehow encourage adolescents to have sex.4 What struck Kahan about this is that the vaccine against hepatitis B (another sexually transmitted virus that causes cancer), which was introduced in the 1990s, did not evoke anywhere near the public outcry or controversy and is now an accepted part of the routine vaccination schedule for children. The difference, Kahan contends, is the way in which the scientific and regulatory communities handled the introduction of Gardasil to the public. “There was and remains no process in the FDA or the CDC for making evidence-based assessment of the potential impact of their procedures on the myriad everyday channels through which the public becomes apprised of decision-relevant science,” Kahan wrote. First, the FDA fast-tracked approval of Gardasil, allowing Merck to get a leg up on its competitor, GlaxoSmithKline’s Cervarix. Then, Merck went about a very public lobbying campaign. Finally, the CDC recommended vaccinating girls but not boys. These decisions managed to push every button in everyone’s arsenal of fear and mistrust. Rather than hearing about Gardasil from their pediatricians, as they had with the hepatitis B vaccine, parents first learned about it in sensationalized media accounts. Kahan puts the blame squarely on the FDA, CDC, Merck, and the medical community for not even considering how Americans would interpret their decisions:

Empirically, uniformed [sic] and counterproductive risk communication is the inevitable by-product of the absence of a systematic, evidence-based alternative… . The failure of democratic societies to use scientific knowledge to protect the science communication environment from influences that prevent citizens from recognizing that decision-relevant science contributes to their well-being.

Complicated Science Confounds the Brightest Among Us

Let us suppose that a parent with a PhD in history, Dr. Smith, is trying to decide whether to have his children vaccinated for measles, mumps, and rubella. He understands that these potentially very serious and even fatal diseases have nearly disappeared from the population because of vaccines. He is a generally civic-minded person who under ordinary circumstances has no problem doing what is best for other people in his community. But his paramount interest is the welfare and health of his own children, a priority few of us would dispute. He has heard that children are exposed to so many vaccinations at such an early age that the immune system is overtaxed and ultimately weakened. Children who undergo the full set of vaccinations supposedly are at high risk to succumb later in life to a variety of immunological diseases—including food allergies, behavioral problems, and seizures—because of vaccines. He has seen data showing that rates of these illnesses are increasing in the American population and that these increases have occurred in lockstep with the proliferation of vaccinations. We cannot blame our history professor for being concerned that by vaccinating his children he may ultimately be responsible for harming them.

So the dutiful, loving parent decides to learn more about vaccines. Perhaps he goes online first, but he quickly realizes this is a mistake. There are articles on all sides of the issue, many written by people with impressive sounding scientific credentials (lots of MDs and PhDs and award-winning scientists from prestigious colleges and universities). He decides to focus on information provided by medical societies of pediatrics and immunology. What does he find out?

Some viruses and bacteria are immediately cleared from the body by a part of the immune system called the innate immune system, which includes natural killer cells, macrophages, and dendritic cells. These are not stimulated by vaccinations. Instead, it is another part of the immune system, the adaptive immune system, that responds to vaccinations. The adaptive immune system is in turn composed of the humoral and cell-mediated immune systems, the former mediated by B cells, which produce antibodies, and the latter by T cells, which are often called CD cells and have designations like “CD4+” and “CD8+.” Antibodies include a light chain and a heavy chain …

We could go on with this, but hopefully we have made our point. Although our history professor would have no trouble delving into the effects of a new form of taxation on relations between the aristocratic and middle classes in 14th-century Flanders, when researching how a vaccine might affect his child, he is already lost in the whirlwind of letters, abbreviations, and arrows from one type of immune cell to the next. And what we have provided above merely scratches the surface. The human immune system is one of the most beautifully organized systems in the universe, capable of protecting us against the barrage of dangerous microbes we face every second of our lives. But those microbes are also brilliant in their ability to elude and co-opt the immune system in order to cause disease. How vaccines fit into this system, which has developed over millions of years of evolution to make the human species so capable of long-term survival, is ultimately a very complicated affair.

After a few hours of trying to understand the science behind immunization, Dr. Smith decides to settle on something simpler. He can easily find much briefer summaries of how vaccines work on the Internet. Most of these seem to him to be too simple, some even to the point of feeling condescending. They represent many points of view, some insisting that vaccines are safe and others that they are nearly poisonous. Dr. Smith is now getting tired and knows that at some point he will have to make a decision. Instead of trying to weed through all the technical details of the scientific reports, he decides to go with a vividly designed website sponsored by an organization called the National Association for Child and Adolescent Immunization Safety (a fictitious organization), presided over by the fictitious Ronald Benjamin Druschewski, PhD, MD, RN, MSW, MBA, ABCD (again, we are being deliberately hyperbolic here to emphasize our point). The site makes the case simply and forcefully that vaccinations stress the child’s immune system, that “long-term studies have not been done to evaluate the safety of multiple vaccinations” and that “studies show that many children are harmed every day by immunizations.” The site proceeds to give the gripping story of one 5-year-old boy who is now plagued with allergies to multiple food types and can safely eat only a pureed diet of mashed carrots and sweet potatoes without developing wheezing and hives.

Dr. Smith decides not to vaccinate his children and to turn in for the night.

The situation just outlined illustrates many different factors that cause science denial, but we emphasize here just one of them: the avoidance of complexity. Despite the fact that he is a superbly well educated and intelligent person, Dr. Smith is not an immunologist and yet he is faced with tackling either very technical discussions of the biological basis for immunization or overly simplified ones. Despite making an honest effort, he is ultimately defeated by complexity, believes he will never be able to understand the issues with any sort of rigor, and defaults to explanations that are much easier to grasp albeit less accurate.

Most people do not have PhDs and do not have as high an IQ as Dr. Smith, but people of average intelligence and educational background will also face the same dilemma if they attempt to understand the scientific basis behind many facets of medical advice. It is not easy to understand how a gene can be inserted into the genome of a plant in order to render it resistant to droughts or insects. Nor are the data demonstrating that keeping a gun at home is dangerous straightforward for someone who wishes to understand them in depth: such data are derived from epidemiological studies that require complicated statistical analyses. Even many of the people who perform such studies need to collaborate with expert statisticians in order to get the math right. Once an equation shows up, most people—even very intelligent people—get nervous.

Dealing With Complexity Requires Considerable Mental Energy

Neuroscientists, cognitive psychologists, and behavioral economists have developed a theory in recent years that divides mental processes into two varieties, variously referred to as the high road versus the low road, system 1 versus system 2, the reflective system versus the reflexive system, or fast thinking versus slow thinking. We have stressed that this dichotomy is a serious oversimplification of how the mind works. Nevertheless, it is very clear from abundant research that humans have retained more primitive parts of the brain, present in all mammals, which are used to make rapid, emotional decisions. As the great neuroscientist Joseph LeDoux of New York University has shown, one key part of the emotional brain is the amygdala, an almond-shaped structure deep in what is called the limbic cortex of the mammalian brain.5 On the other hand, humans are unique in the size and sophistication of the portion of the brain to which we have referred on a number of occasions so far, the prefrontal cortex, or PFC, which we use to perform logical operations based on reason and reflection. Almost all parts of the human brain, including the amygdala and its related structures in the limbic cortex, are barely distinguishable from those of other mammals, including our nearest genetic neighbor, the chimpanzee. The PFC, shown in figure 4, however, is the part of the brain that makes us uniquely human, for better (writing novels and composing symphonies) or worse (waging war and insider trading). Again, the tendency on the part of some authors to talk about the PFC as the seat of reason is a vast oversimplification of a brain region that contains billions of neurons, multiple layers, and many subsections, not all of which are all that reasonable. It is one of these subdivisions, called the dorsolateral prefrontal cortex (dlPFC), however, that is most closely connected to executive function, reason, and logic. The dlPFC is relatively shielded from the influence of more emotional parts of the brain, but readily affected by them when emotions run high. Injury to the dlPFC can result in an individual who is emotionally driven and impulsive and who has difficulty planning ahead, solving complicated problems, or understanding complex explanations.


FIGURE 4 Illustration of brain regions showing the PFC and amygdala among other important areas.

Source: From https://infocenter.nimh.nih.gov

Other parts of the PFC, however, are also critical in reasoned decision making. Frank Krueger and Jordan Grafman note that the PFC is responsible for three kinds of beliefs, all of which are fairly sophisticated:

Neuroimaging studies of belief processing in healthy individuals and clinical studies suggest that the specific region in the most evolved part of the brain, the prefrontal cortex, may mediate three components of normal belief processing: a deliberative process of “doxastic inhibition” to reason about a belief as if it might not be true; an intuitive “feeling of rightness” about the truth of a belief; and an intuitive “feeling of wrongness” (or warning) about out-of-the-ordinary belief content.6

It is this notion of “doxastic inhibition” that is invoked whenever we take a scientific stance and say “Wait a minute, what if that isn’t true?” We then try to reason about something we have been told by considering the opposite point of view. Hence, when we are told “Evidence says that eating foods containing genetically modified organisms will inhibit human genes,” the PFC might mediate a scientific response such as “Wait a minute, what evidence supports that statement? Let’s suppose that this statement is not true. How can we know if it is or isn’t factual?”

The difficulty with such rational thinking is, as Daniel Kahneman and Amos Tversky, the fathers of behavioral economics, and their followers have noted, that using the PFC and making reasoned choices is energy-consuming and tiring. They propose that it is much easier to make quick decisions than to ponder laboriously over the right path to take. When we are faced with an emergency situation this strategy is in fact the best one: slamming on the brakes when someone darts across the road in front of your car must be done automatically and without interference from the PFC. But more primitive parts of the brain are clearly inadequate to understand complicated biology and statistical inference. Hence, the natural tendency to avoid complexity and allow less sophisticated parts of the brain to make decisions forecloses the opportunity to evaluate the science involved when making health decisions. Moreover, the PFC and the limbic brain are connected by tracts that run in both directions. In general, a strong PFC can inhibit the amygdala, so that reason overcomes emotion. On the other hand, powerful amygdala activation can inhibit activity in the PFC and drive the organism to a rapid and unreasoned response.

One way to examine brain activation under different conditions is by using functional magnetic resonance imaging (fMRI). Figure 5 shows a standard magnetic resonance image of the human brain. With fMRI it is possible to see which exact parts of the brain are activated by different types of stimuli. Amitai Shenhav and Joshua D. Greene studied brain activation during fMRI while the research participants made moral judgments and concluded that “the amygdala provides an affective assessment of the action in question, whereas the [ventromedial] PFC integrates that signal with a utilitarian assessment of expected outcomes to yield ‘all things considered’ moral judgments.”7 In general, then, the more primitive parts of the brain, represented by the amygdala and other parts of the limbic system, make fast and emotional responses that maximize immediate reward; the PFC makes decisions on the basis of reasoning that considers long-term consequences.8


FIGURE 5 A magnetic resonance image (MRI) of the human brain, with two subsections of the prefrontal cortex highlighted.

Source: From J. R. van Noordt & S. J. Segalowitz, “Performance monitoring and the medial prefrontal cortex: A review of individual differences in self-regulation,” Frontiers in Human Neuroscience, 2012, doi: 10.3389/fnhum.2012.00197.

In a fascinating series of experiments that illustrate the ways these different brain regions operate, scientists attempted to understand the neural basis of racial prejudice by showing black and white participants pictures of people of the same or opposite race. Emily Falk and Matthew D. Lieberman summarize these experiments by noting that whenever participants demonstrated a subliminal racially biased response, the amygdala was strongly activated, but when the participants were given a chance to think about their responses, the amount of amygdala activity declined and activity in the PFC increased. “Broadly speaking,” they conclude, “each of these reports fit within the framework of attitude (or bias) regulation … ; initial automatic responses in affective processing regions [e.g., the amygdala] are altered following a deliberate choice [i.e., reflected in activity in the prefrontal cortex].”9

Scientists are only now beginning to understand the circumstances under which one brain region dominates the other. It is clear that humans have a much greater capacity to assert reason over emotion than any other organism but that there are tremendous differences among situations and, perhaps more important, among individuals in the ability to do so. Almost all humans will run away when they smell smoke rather than first trying to figure out what is making the fire produce smoke. On the other hand, some people, when told that nuclear power plants will inevitably leak lethal radiation into our water supply, demand to know on what evidentiary basis that statement is made; others will immediately sign a petition demanding that Congress outlaw nuclear power plants.

That Good Ole Availability Heuristic

Kahneman and Tversky called the intuitive, more rapid, and less reasoned strategies that we use to make decisions heuristics, and showed that all of us fall back on them rather than engage in more effortful reasoning when faced with complex issues. In their words, a heuristic is “a simple procedure that helps find adequate, though often imperfect, answers to difficult questions.”10

As discussed in more detail in chapter 6 on risk perception, psychologists tell us that rather than struggling with complexity we are programmed to fall back on one of these heuristics: the availability heuristic. We are told by various people and organizations to worry that there is serious harm associated with gun ownership, nuclear power, genetically modified foods, antibiotics, and vaccines. But which of these should we really worry about? According to Richard H. Thaler and Cass R. Sunstein, the authors of the widely read book Nudge,

In answering questions of this kind, most people used what is called the availability heuristic. They assess the likelihood of risks by asking how readily examples come to mind. If people can easily think of relevant examples, they are far more likely to be frightened and concerned than if they cannot. A risk that is familiar, like that associated with terrorism in the aftermath of 9/11, will be seen as more serious than a risk that is less familiar, like that associated with sunbathing or hotter summers. Homicides are more available than suicides, and so people tend to believe, wrongly, that more people die from homicides.11

Of course, orders of magnitude more people develop skin cancers, including potentially lethal types like malignant melanoma, because of sunbathing than will ever be harmed by terrorists; and of the approximately 30,000 people who are killed by guns every year in the United States, about two-thirds are suicides and one-third are homicides. Yet, we spend billions of dollars and endless hours on airport security but leave it mostly up to us to figure out which kind of sunscreen will best protect our skin from potentially fatal cancers like melanoma.

As we discuss throughout this book, heuristics such as this, which have the capacity to lead us to incorrect conclusions, most often offer important benefits and should not be viewed simply as mindless “mistakes” employed by people who lack information or are too “stupid” to understand scientific data. The availability heuristic is no different. It is imperative that we make decisions based on experience. A primitive example, of course, is that we don’t touch a hot stove after once experiencing the consequences of doing so (or at least trying to do so and being screamed at by a parent “DON’T TOUCH THAT!”). If we engaged our dorsolateral prefrontal cortex every time we needed to make a decision, we would be locked in place all day trying to figure things out when our own experience could give us a quick path of action. Indeed, falling back on heuristics is a normal process for several reasons. When buying groceries we can fill our basket quickly because we know pretty much what we like to eat and how much things cost. How tedious it would be if we had to engage in a project of complex reasoning every time we entered a supermarket. This kind of simple decision making based on experience and what is immediately obvious is an evolutionarily conserved and practically necessary function of our brains.

Science, however, asks us to work hard against these methods of decision making, which is why it often butts up against our psychological instincts and causes significant resistance. It is much easier for us to imagine a nuclear explosion—we have seen them in the movies countless times—than a polluted sky gradually destroying the airways of a person born with asthma. So we determine that nuclear power is more dangerous than burning coal for energy, when in fact the opposite is the case. We can all recall accidentally getting a shock from an exposed electrical wire, so we are easily convinced that shock treatment must cause brain damage, but we have no clue how electric shock can somehow stimulate a depressed person’s brain to make him or her feel better, or that it is a life-saving medical procedure that should actually be employed more often.12 In each of these cases we naturally are relying on the brain’s rapid decision-making abilities that serve us so well in everyday life and generally protect us from danger. Unfortunately, however, in each case we come to the wrong conclusion.

In some instances it is quite easy to demonstrate that scientific reasoning requires tackling complexity rather than making an intuitive judgment. Steven A. Sloman uses the familiar example of the whale.13 When we think of a whale, we first conjure an image of something that looks like a fish—it swims in the ocean and has fins. But we all know that a whale is a mammal. This is because more considered thought reminds us that whales belong to a biological classification of animals that are warm-blooded, have hair, and nurse their newborn infants. Now, although we may find the pedant at a dinner party who corrects us when we offhandedly refer to a whale as a fish to be annoying, we also agree that the correction is not controversial. A whale is a mammal. Not a big deal. On the other hand, when we are asked to imagine a nuclear power plant, the first thing that comes to mind is likely an image of the mushroom cloud over Hiroshima after the first atomic bomb was dropped in 1945. The distance from that image to an understanding of how nuclear power plants safely create huge amounts of nonpolluting energy that neither damages the earth’s ozone layer nor contributes to global warming is much greater than from fish to mammals. And while almost no one harbors a meaningful fear of whales, it is natural to be frightened by anything with the word nuclear in it. So not only is the trip from bomb to safe power more complicated than from fish to mammal, we will naturally resist that intellectual adventure at every step because our amygdala sends off a danger signal that shuts off the prefrontal cortex and urges us, at least figuratively, to run away.

We can see clearly, then, that our fear of and retreat from complexity is mediated by powerful, evolutionarily determined aspects of human brain function that translate into psychological defaults like the availability heuristic. The question is whether it is possible to provide education that will help us overcome this fear and become confident even when scientific matters are complicated. As we will argue, based on the findings and comments of many others, traditional science education is not doing a very good job of preparing us to face up to scientific complexity. But already there is a substantial body of research pointing the way to improving this situation. We believe the basic principle to follow is to teach people, from the earliest ages possible, how science really works.

Does School Make Us Hate Science?

Americans are interested in science and technology. They visit science museums and line up to buy the latest technological gadgets. But they nevertheless think that science is too complicated for them to really understand. According to a recent Pew Foundation report, “Nearly half of Americans (46%) say that the main reason that many young people do not pursue degrees in math and science is mostly because they think these subjects are too hard.”14 Our schools’ approach to teaching science appears devoted to proving this attitude correct. Americans seem prepared to throw up their hands in despair, even when they are being asked to vote on scientific issues. At the 2007 annual meeting of the American Association for the Advancement of Science (AAAS), Jon Miller presented a paper on science literacy in America. He related that in 2003 he and his colleagues at Michigan State University conducted a poll in which they ascertained the opinions of adults on a range of issues, including stem cell research. At the time, 17% were strongly in favor of such work, citing its potential for advancing cures for serious illnesses, and 17% were opposed on antiabortion grounds. Throughout the following year, stem cell research became a presidential election issue. Miller et al. repeated the poll 1 week before the election and found that now only 4% of the people backed and 4% opposed stem cell research. The rest of the people polled said the issue was just too complex.15 We may be getting some of this inability to take a scientific “stand” from our school teachers, including high school biology teachers. When a subject like evolution is perceived as controversial in a community, Michael Berkman and Eric Plutzer of Pennsylvania State University found that science teachers are “wishy-washy” and prefer to avoid the topic altogether.16 Of course, from the scientific point of view there is nothing remotely controversial about evolution, but we are never going to grasp that if the people charged with our scientific education are unwilling to take a strong stand on the facts.

Science education in the United States seems devoted to making people hate and fear science rather than strengthening their self-confidence in understanding complex issues. In elementary school, children are given projects to collect and label leaves that fall off trees in the fall, as if being able to differentiate an oak leaf from a maple leaf will reveal some important scientific principle. A few children enjoy getting outside and collecting leaves, but for most of them the task is pure tedium. In junior high school, biology revolves around memorizing the names of organs in pictures of insects, frogs, and dogs, as if being able to name the cloaca and the single-chambered heart will provoke an enduring love of science. By high school, the assignment might be to memorize the periodic table of the elements, all 118 of them. This includes remembering each element’s atomic number, symbol, and electron configuration. How comforting for the people who someday must have an opinion on the safety of nuclear energy to know that number 118 is ununoctium. Our guess is that many PhD chemists need to have the periodic table hung on the wall of their labs in order to remember all the details.

It is unfortunately the case that there is a sizeable gap between what scientists and nonscientists believe to be true about many issues, including whether it is safe to eat GMOs, vaccinate our children, or build more nuclear power plants. But it seems, as Lee Rainie recently pointed out in Scientific American, there is one area in which they agree: the poor state of U.S. education in science, technology, engineering, and math (the so-called STEM subjects).17 What is important here is that demoralizing people about science from an early age biases decision making in favor of using the least energy-demanding parts of the brain. People are being conditioned to resist invoking the dlPFC and even trying to think through the scientific and health questions they must confront.

Why the Scientific Method Itself Defies Our Desire for Simplicity

Science works by something called the scientific method. Not infrequently we hear someone claim that they use a different scientific method than the one used by “traditional scientists.” In fact, there is only one scientific method, which is actually no more controversial than saying that two plus two always equals four. The scientific method is not a matter of belief or opinion. Rather, it is a set of procedures that begins with making a hypothesis, designing and running an experiment, collecting data, analyzing the data, and reaching a conclusion about whether the results support or fail to support the original hypothesis. It is an example of deductive reasoning. A so-called creationist who denies the theory of evolution begins with an unfalsifiable proposition: “a higher power created the world.” By contrast, as science teacher Jacob Tanenbaum explained in an article in Scientific American:

Scientists who formed the idea of human evolution did not invent the idea to go looking for fossils. Well before Charles Darwin published his treatise in 1859 and well before workers in a limestone quarry in 1856 found strange bones that would later be called Neandertal, scientists struggled to explain what they saw in the natural world and in the fossil record. The theory of evolution was the product of that analysis. That is how science works.18

What started as observations in nature are now the basis for understanding fundamental aspects of biology. But the process of getting there was far from simple. It is important to understand all the steps it takes to reach a scientific conclusion so that we can see clearly that “belief,” emotion, and opinion should not be among them if things are done correctly.

First, Our Best Guess

There are several important steps that a scientist must take to follow the scientific method. The first one is the clear and explicit formulation of a testable hypothesis before any data gathering even begins. We stress that this hypothesis must be testable; that is, it is imperative that it can be proved wrong. It is immediately clear why the creationist’s ideas cannot be considered by scientists: there is no science possible to test a hypothesis such as “God exists,” because there is no experiment that can either validate or falsify such a claim. Remember, the scientific method operates by attempting to falsify hypotheses, something that seems counterintuitive at first.

For example, let us say that investigators hypothesize that an experimental drug will produce significantly more weight loss than a placebo pill. The main outcome measure is the body mass index (BMI), a measure of weight adjusted for height. Sometimes the results of an experiment reveal something unexpected that was not part of the original hypothesis. For example, the new drug being tested for weight loss might produce more reduction in total cholesterol level than placebo. When this happens, it may prove meaningless or the harbinger of an important scientific breakthrough. But scientists cannot go back and say, “Oh, that is what I was actually looking for in the first place.” Unexpected outcomes mean only one sure thing: a new experiment with a new hypothesis is in order. If the scientists want to prove the drug is also effective for cholesterol level reduction, they will have to design and conduct a new experiment with that as the main outcome measure.

The investigators’ hypothesis, then, isn’t “Drug is superior to placebo” but rather “There is no difference between drug and placebo.” This is called the null hypothesis. They will now design and conduct an experiment to see if they can falsify or reject their null hypothesis. With the testable hypothesis in hand, the investigators must next explain exactly how they will determine whether the results of the experiment are either compatible or incompatible with it. This entails stating the study’s main outcome measure. It would obviously be cheating to change the hypothesis of a study once the data start rolling in. So the investigators must state exactly what measure is the main one that will determine if the null hypothesis can be supported or rejected.
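
To make this concrete, here is a minimal sketch, in Python, of how such a null hypothesis might be tested once the data are collected. Everything in it (the drug, the BMI changes, the conventional 0.05 cutoff applied to a two-sample t-test) is a hypothetical illustration, not a real trial or the authors’ own analysis.

```python
# A minimal sketch of testing the null hypothesis "there is no
# difference in BMI change between drug and placebo."
# All numbers are invented for illustration.
from scipy import stats

# Change in BMI after 12 weeks, one value per subject.
# (BMI is weight in kilograms divided by height in meters squared.)
drug_group = [-2.1, -1.4, -3.0, -0.8, -2.5, -1.9, -2.2, -1.1]
placebo_group = [-0.3, -0.9, 0.2, -1.0, 0.4, -0.6, -0.2, -0.5]

# A two-sample t-test asks: if the null hypothesis were true, how
# likely is a difference between groups at least this large?
t_stat, p_value = stats.ttest_ind(drug_group, placebo_group)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # the conventional cutoff, fixed before the study
    print("Reject the null hypothesis: drug differs from placebo.")
else:
    print("Cannot reject the null hypothesis.")
```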

Fair and Balanced

The experimenters have decided that they will rely on weighing people and measuring how tall they are to generate the main outcome measure, BMI. But there are many more things to decide upon before starting to give people pills and asking them to get on the scale every week. The design of the experiment must be such that the hypothesis is actually being tested in a fair and balanced way. Investigators must take steps to eliminate (or at least minimize) the possibility that biases on the part of the experimenters can affect the results. Remember Yogi Berra’s great line “I wouldn’t have seen it if I didn’t believe it”? There are all kinds of technical steps that scientists take to prevent bias from making things turn out in a preordained way, including keeping everyone involved in the experiment blind to which subjects are placed in which group. For example, if scientists are examining the effect that an experimental medication has on weight gain, the experimental medicine and the placebo to which it is being compared must look and taste identical and neither the scientists nor the patients can know which one they are getting. This is called a double blind because neither investigators nor subjects know which pill is which; this technique can be applied with certain modifications to almost every type of biological experiment.

This example also entails the important inclusion of a control condition. An experiment must always be constructed so that something is compared to something else. There is an experimental condition, in this case the new drug to be tested, and the control condition, here the identically appearing placebo pill. Again, the null hypothesis of this experiment is that the new drug will be no better than the placebo.

It is almost always also a requirement that subjects are randomly assigned, or randomized, to the experimental or control condition. In the weight loss study, some people will start out weighing more (that is, having a higher BMI) than others, some people will be older than others, and some people will be men. If it is the case that people who have higher BMI, are young, and are men respond preferentially to the experimental drug, then deliberately putting all such people into that group stacks the deck in favor of the experimental drug. Instead, by randomly assigning people to the two groups these differences should be evenly distributed.
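
The mechanics of random assignment are simple enough to sketch in a few lines of code. The subject list below is hypothetical, and real trials often use more elaborate schemes (stratification or blocking, for example), but the principle is the same: chance, not the experimenters, decides who gets the drug.

```python
# A minimal sketch of randomizing subjects to two arms of a trial.
# Subject IDs are hypothetical.
import random

subjects = [f"subject_{i}" for i in range(1, 101)]  # 100 enrollees
random.shuffle(subjects)  # chance, not the experimenters, decides

drug_arm = subjects[:50]     # these receive the experimental drug
placebo_arm = subjects[50:]  # these receive the placebo

# Because assignment is left to chance, age, sex, starting BMI, and
# even characteristics no one thought to measure should be spread
# roughly evenly across the two arms.
print(drug_arm[:3], placebo_arm[:3])
```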

A very important point with regard to proper design is adequacy of sample size. Because of the nature of the mathematics scientists use to analyze the data they get from their experiments, the smaller the number of subjects in a study, the more difficult it is to prove that one thing is different (or better or worse) than another even if there really is a difference. Mistakenly accepting the null hypothesis and deciding that there is no difference when in fact a difference exists is called a Type II error. The opposite error, declaring there is a difference when one does not really exist, is called a Type I error.
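
Both kinds of error can be watched happening in a simulation. In the hypothetical sketch below, with invented effect sizes, a thousand imaginary trials are run with only 12 subjects per arm: when a real difference exists, samples this small miss it most of the time (Type II errors), and when no difference exists, roughly 5% of trials “find” one anyway (Type I errors).

```python
# Simulating Type I and Type II errors in small trials.
# Effect size, spread, and sample size are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n = 1000, 12  # 1,000 simulated trials, 12 subjects per arm

# Case 1: a real effect exists (drug lowers BMI by 1.0 on average).
type2 = sum(
    stats.ttest_ind(rng.normal(-1.0, 2.0, n),
                    rng.normal(0.0, 2.0, n)).pvalue >= 0.05
    for _ in range(n_trials)
)

# Case 2: no effect exists (both groups effectively get placebo).
type1 = sum(
    stats.ttest_ind(rng.normal(0.0, 2.0, n),
                    rng.normal(0.0, 2.0, n)).pvalue < 0.05
    for _ in range(n_trials)
)

print(f"Type II rate (missed a real difference): {type2 / n_trials:.0%}")
print(f"Type I rate (found a phantom difference): {type1 / n_trials:.0%}")
```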

On the other hand, with very large sample sizes even trivial differences that will make no difference to anyone’s health or life can appear to be significant from a statistical point of view. When Wakefield published his now-discredited paper in The Lancet allegedly showing that vaccines cause autism, the first and loudest objection should have been that no such thing could possibly be proven with only 12 children as subjects. That sample size is far too small to determine if an observation is true or just chance. On the other hand, how can a large sample size give misleading results? Jack’s friend and mentor, the late, great biostatistician Jacob Cohen, once wrote about an article in which it was concluded that height and IQ are positively correlated, meaning that the taller someone in the study was, the higher his or her IQ.19 This seems a bit ridiculous. And indeed in the study, Professor Cohen noted, the sample size was so enormous that a minuscule association became statistically significant. In fact, to make a child’s IQ go up 1 point, you would have to make him 1 foot taller. These details involve statistics that are not apparent to most readers but unfortunately often slip by professional reviewers and journal editors. All we can say is that 12 subjects is very small, and 10,000 is very large, and at either extreme inferences made from statistical tests of probability are prone to be misleading.
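
The reverse problem is just as easy to demonstrate. In the sketch below (fabricated numbers, not Professor Cohen’s data), a height-IQ association far too small to matter to anyone becomes “statistically significant” simply because the sample is enormous.

```python
# With a huge sample, a trivial association becomes "significant."
# The height-IQ relationship here is fabricated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100_000  # an enormous sample

height = rng.normal(66, 4, n)  # height in inches
# About 1 IQ point per foot of height, swamped by ordinary variation:
iq = 100 + 0.08 * (height - 66) + rng.normal(0, 15, n)

r, p = stats.pearsonr(height, iq)
print(f"r = {r:.3f}, p = {p:.1e}")
# r is tiny (the association explains well under 0.1% of the variance),
# yet with n = 100,000 the p-value falls far below 0.05.
```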

The final critical feature of a reliable experiment we wish to mention here is that the study must be both reproducible and reproduced, the latter hopefully multiple times. This means first that the experimenters have to lay out in complete detail exactly what they did, including how they picked their subjects, what tools they used to evaluate the outcomes, and how they analyzed the data. This is detailed in the Methods section in scientific papers, the part that most people skip over but is really the most important part of any paper. It must be written in such a way that other experimenters in other laboratories can replicate the exact same experiment to see if they get the same result. And the key to scientific truth is that any finding must be replicable: capable of being reproduced over and over again. Flukes happen in science all the time. In some cases the reasons are clear and have to do with some shortcoming in the way the initial experiment was designed or conducted. In other cases it is never clear why one researcher got one result and others another despite doing exactly the same experiment. But unless independent investigators can reproduce an experimental finding it has no value.

This is a summary of what we mean by the scientific method and the critical elements of research study design that follow from it. It is important to remember that using this method does not guarantee accurate findings. If any part of it is not done correctly the study may yield misleading or even false conclusions. But it is almost always guaranteed that if the scientific method is not followed the results of any inquiry will be useless. Scientists who use the scientific method can make mistakes; scientists who don’t will almost always be wrong. And the wonderful thing about science is that mistakes are not usually replicated because no two laboratories will ever make the same exact mistake. Thus, a result that is not reproduced is automatically one to forget about entirely.

Teaching the Scientific Method

We believe that the most valuable thing students can learn in science class is how to evaluate whether or not a claim is based on evidence that has been collected by careful use of the scientific method. We also believe that an introduction to the concept of statistical inference is feasible even at the elementary school level. This does not mean, of course, trying to make fifth graders understand the details of multiple linear regression analysis. But even young children can understand concepts like chance, reproducibility, and validity. Teaching children and young adults these lessons would seem far more interesting and palatable than memorizing the names of leaves, body parts, or elements. It does not require mathematical skill, something many people (often mistakenly) claim they lack, but rather logical reasoning competence. As such, when taught properly this view of science is accessible and indeed fascinating, something that students will enjoy tackling. A recent study showed that exposing young children—in this case disadvantaged children—to high-quality, intensive interventions aimed at countering the effects of early life adverse experiences led to better health when they were adults, including a reduction in health risks like obesity.20 The study, published in Science, was a carefully conducted, prospectively designed, randomized trial that meets all of the criteria we have outlined for following the scientific method rigorously. It showed that an intervention conducted from the time children are born through age 5 had a lifelong effect that included improving health behaviors. This raises the question of whether interventions attempting to improve science literacy, begun at an early age, might not be capable of producing long-lived effects in a more scientifically savvy group of adults.21 But the key element will be to emphasize process over content: how science is conducted, how scientists reach conclusions, and why some things are just chance and others are reproducible facts of nature.

How much more interesting would science class be in the fifth grade if, instead of informing the children that today in class they would be classifying a bunch of rocks into various categories of igneous, sedimentary, and metamorphic, the teacher began by asking, “Does anyone have any idea how we decide if something is true?” Other opening questions could include “What happens in a laboratory?” and “How do we know if a medicine we are given is really going to make us better?” As Sheril Kirshenbaum recently put it on CNN,

It doesn’t matter whether every American can correctly answer a pop quiz about science topics he or she had to memorize in grade school. Isn’t that what turns a lot of us off to science to begin with? What’s important is that we work to foster a more engaged American public that will not only support but also prioritize the research and development necessary to meet the 21st century’s greatest challenges, from drought to disease pandemics.22

The way to defeat the fear of complexity, then, is to give people from the earliest possible ages the tools they will need to understand the scientific method.

Don’t Lecture Me

Using this approach, how might a nonscientist evaluate a politically and emotionally charged issue like whether nuclear power is safe, HIV causes AIDS, ECT causes brain damage, or vaccines cause autism? How do we deal with the fact that each of these issues involves highly complex science that most people are unprepared to tackle?

One thing that should not be done is to rely solely on lectures. This includes simple lectures, complicated lectures, reassuring lectures, and frightening lectures. Instead of forcing information on people, it is better to first find out what their state of understanding is, what their goals are, and what they have been told so far. In other words, ask them questions. Studies have shown that people are far more likely to use the reasoning parts of their brains—the PFC—when they are engaged and involved in a discussion than when they are passively listening to someone else. The task here is to engage the PFC of the person we are trying to convince rather than shooting facts into the black box of an uninvolved brain. It is easy to see how a passive and disengaged person might readily default to the brain’s more primitive systems, the ones that require much less effort and make emotional and impulsive judgments. It is actually possible to increase the activity of the dorsolateral PFC by training an individual in what are known as executive tasks, the ability to plan and carry out complex operations using logic and reason.23 Doing so appears to also reduce the reactivity of the amygdala to emotional stimuli. While we are not advocating a model of human sensibility that is all reason and no emotion, there is little question that more effortful thinking about complex issues and less responding to emotionally laden sloganeering is needed if we are to be equipped with the ability to make evidence-based health decisions. Teaching the scientific method in an ongoing way from a young age will make the whole process of interpreting scientific data much less complex and confusing for the general public and should help adults make health decisions with less mental energy and lower reliance on heuristics.

Motivating the Brain to Accept Change

Neuroscientists also tell us that changing one’s viewpoint on any issue is difficult, in part because staying put with a point of view activates the pleasure centers of the brain whereas making a change excites areas of the brain associated with anxiety and even disgust. Yu and colleagues at Cambridge and University College London had subjects participate in a gambling task while their brain regional activity was measured using fMRI.24 The game involved giving participants the chance to stick with a default position or make a change to a new position. They found that staying put activated an area of the brain called the ventral striatum whereas changing activated the insula. The ventral striatum, which includes the brain structure called the nucleus accumbens (NAc) mentioned earlier, is central to the brain’s pleasure response. It is one of the parts of the brain that is routinely stimulated by drugs like alcohol and heroin and lights up in animals and people whenever they are happy. Direct stimulation of the ventral striatum with electrodes can make animals demonstrate pleasure-related responses. By contrast, the insula is activated when someone is anxious or frightened or shown disgusting pictures. In their experiment, Yu and colleagues also found that when a person loses a bet after switching, he feels worse than when he loses after staying put. So changing an opinion is associated with fear, which is precisely what we need to overcome if we are going to change people’s minds from familiar, easy default positions to more complex scientific ones.

Furthermore, activity in these same brain regions mediates how a person’s mood affects his or her willingness and ability to make a change. In an ultimatum game experiment, a type of economics experiment that is used to reveal how people make choices, K. M. Harlé and colleagues of the Department of Psychology at the University of Arizona observed that sad participants rejected more unfair monetary offers than participants in a neutral mood and that these sad people had increased activity in the insula and decreased activity in the ventral striatum.25 Of course, rejecting an unfair offer is a positive outcome, so one could say that in this case feeling sad was protective. But the point of this important experiment is that unhappy people are resistant to change and this is reflected in activation of specific brain regions. Everything seems distasteful to them (anterior insula) and little evokes pleasure (ventral striatum). Anxious individuals are similarly unable to keep up with new evidence as it comes in and therefore resist updating their ideas about what is true.26 The message here is that our countervailing messages must be upbeat and engaging, designed to stimulate the ventral striatum rather than the insula, and it is probably best if they avoid evoking anxiety and fear. Indeed, experiments judging the effectiveness of various pro-science communications might be conducted during functional imaging. Those that most activated the striatum and least activated the anterior insula should be the ones selected for further testing as being more likely to be effective.

A model for guiding people to sort out scientific fact from fiction is a technique used by mental health professionals called motivational interviewing (MI), mentioned in passing in the previous chapter, which has been found to be particularly helpful in getting people to adopt healthy behaviors like stopping excessive alcohol intake or adhering to medication regimens for diabetes. Instead of ordering the patient to stop drinking or take the medicine, the clinician attempts to motivate the patient to do these things by collaboratively assessing what it is that the patient hopes to achieve. An alcoholic patient may begin by insisting she doesn’t want to stop drinking, but agree that she does want to stop getting arrested for DWIs. The diabetic patient may initially say the medications are a nuisance and he doesn’t feel sick anyway, but he may also express a wish to avoid having a heart attack in the next few years. By first establishing what goals the patient has, the interviewer can then proceed to work with him or her on how to achieve them.

In a similar manner, one approach to dealing with the complexity problem may be to begin by establishing what goals and values individuals have. In some instances, like when a clinician discusses vaccines, ECT, or antibiotic use with a patient, this can be done on an individual, face-to-face basis. In others, such as decisions about nuclear power or GMOs, it will be necessary to design public health approaches that can engage larger populations. Here are some goals with which most people on either side of some of the issues we are dealing with in this book will likely agree:

1.Vaccines: I want to make sure my children are as protected as possible from preventable infectious diseases, and I want other children in my community to have the same protection.

2.Antibiotics: I want only medication that is known to be effective for an illness that I or a member of my family might have, and I don’t want medication that poses a greater risk to health than a benefit.

3.ECT: If someone is severely depressed to the point that her life is in danger because she is not eating or might resort to suicide, I think she should have treatment that is most likely to reduce the depression without causing severe, irreversible adverse effects.

4.GMOs: A technology with which I am not familiar, even if it has been in use for decades, is something about which I will always be cautious and even skeptical. I need to be convinced in terms I understand that it is safe. I do agree that helping to alleviate hunger and starvation around the world is a worthwhile goal, but not at the cost of harming my family.

5.Nuclear power: We need cheaper, cleaner sources of energy that won’t pollute the air and add to the problem of global warming.

6.HIV as the cause of AIDS: AIDS is a terrible disease, and it is important to know what causes it so that effective treatments can be developed. The issue is how scientists establish what causes an illness and whether in the case of AIDS they have actually done so.

7.Gun ownership: I want to protect my home and my family from anyone or anything that might try to harm us, including intruders in my home. But I don’t allow things in my home that could harm me or my family.

8.Pasteurization: It is important to me that my family consumes foods that have high nutritional content and low risk for causing infectious or other medical problems.

Starting with these premises, the process develops along Socratic lines in which the interviewer assesses at each step what the interviewee knows and has heard about the topic and what he or she wants to know. Let us take the first case, vaccines, as an example. First, let us describe a well-meant example of what probably is not going to work. Sam Wang is a neuroscientist at Princeton University who has worked in the area of finding causes for autism. His March 29, 2014, op-ed piece in The New York Times titled “How to Think About the Risk of Autism” is clearly written, informative, and convincing. He urges us to think in terms of a statistical measure called “risk ratio,” which he says is “a concept we are familiar with.” The higher the risk ratio, the higher the risk; a ratio of 1.0 means no added risk at all (we give a worked illustration of the arithmetic after the excerpt below). He points out that the risk ratio for vaccines and autism is less than 1.0, meaning that vaccinated children are no more likely to develop autism than unvaccinated children. On the other hand, elective cesarean section has a risk ratio for autism of 1.9, meaning that people born via cesarean section develop autism at nearly twice the rate of people born by vaginal delivery. He cites risk ratios for several other factors, some larger than others, and also points out the following:

The human genome is dotted with hundreds of autism risk genes. Individually, each gene is usually harmless, and is in fact essential for normal function. This suggests that autism is caused by combinations of genes acting together. Think of genes as being like playing cards, and think of autism outcomes as depending on the entire hand. New mutations can also arise throughout life, which might account for the slightly greater risk associated with older fatherhood.
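To make the arithmetic concrete, consider a worked illustration with hypothetical numbers of our own (they are not Wang’s figures): if 2 of every 1,000 children born by elective cesarean section were later diagnosed with autism, compared with 1 of every 1,000 children born vaginally, the risk ratio would be 2 divided by 1, or 2.0, roughly the doubling Wang cites. If both groups had identical rates, the ratio would be exactly 1.0, which is why a value at or below 1.0, like the one reported for vaccines, signals no increased risk whatsoever.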

The problem here is that Wang’s article is loaded with information about genetic and environmental causes of autism. We are told about hundreds of risk genes, a concept that is not immediately intuitive to the nonspecialist. Most people have heard of single genes that cause diseases, like Huntington’s disease, but the concept of multiple risk genes that are somehow normal until they combine in a certain way, while biologically plausible, is difficult to grasp. On top of that, a bevy of other risk ratios for possible causes of autism is given, including the cesarean section risk mentioned earlier, plus maternal stress both before and after pregnancy, air pollution, lack of social stimulation, and damage to the cerebellum at birth. Wang concludes, “When reading or watching the news, remember that scare stories are not always what they seem. In this respect, looking at the hard data can help parents keep a cool head.”27

But in fact, Wang’s article is scary. Instead of clinging to the false idea that vaccines cause autism, Wang is now telling parents that hundreds of their genes may be causing the problem and that the reason they have a child with autism may be that the mother was anxious during pregnancy and had an elective cesarean section. How stress and cesarean sections might cause autism is probably no more obvious to parents of children with autism than the multiple-genes theory. Wang means to write for a general audience, and readers of The New York Times probably have above-average intelligence. But his explanations sound complicated, and unpacking them would require a great deal of additional detail. The way in which genes interact to cause a disease is, in fact, a very complicated process. So is the mechanism by which stress can alter the development of a fetal brain. Once again, a passive reader may readily default to the emotionally much easier position that a poisonous substance in vaccines, thimerosal, causes autism. It is easier to grasp the notion that something toxic added to our bodies causes a serious problem than that multiple genes we already have combine in mysterious ways to do us harm.

We do not mean here to criticize Wang’s article, which is an honest attempt at explaining why the autism/vaccine notion is unfounded. He is trying very hard to explain to the nonscientist why this is the case, and we applaud him and every scientist who makes an honest effort to do so. Wang never condescends to his reader, nor does he imply we are not smart enough to understand what he is saying. Our point is, however, that the audience’s state of mind must always first be understood. If that state is one of anxious concern and fear, as is the case for many parents trying to decide whether it is okay to vaccinate their children, replacing misinformation with new facts is not guaranteed to change minds. The groundwork first needs to be laid to create a receptive, calm audience that is open to considering new ideas. Adapting the techniques of MI is an approach we believe can accomplish this, and it should be studied in that context.

Starting at the Right Place

The key, then, is to use motivational interviewing techniques to change the passive listener of scientific information into an active participant in a scientific discussion. In the aforementioned studies reviewed by Falk and Lieberman about brain activation during the expression of racial prejudice, it was noted that amygdala activation associated with bias “disappeared when participants were required to verbally label the images as belonging to a given race, and the amount of increased activity in right [ventrolateral prefrontal cortex] correlated with a decrease in amygdala activity.”28 Simply put, talking out loud about our biases engages our executive brain and reduces fear. This is the first step toward a more reasoned and less prejudiced response. The MI approach to scientific literacy begins at whatever level the interviewee (or group of people) happens to occupy at the start of the process and can proceed to greater complexity once he or she is ready. Here is an example of how this might work:

A nurse in a pediatrician’s office is discussing vaccinating a mother’s 2-year-old child for measles, mumps, and rubella (MMR). He has established the first premises: that Ms. Jones obviously does not want her son, Douglas, to catch a dangerous disease that could be prevented, and that she prides herself on being a good citizen who wants the best for all children.

The nurse, Anthony, begins the conversation by asking, “What have you heard about vaccine safety?”

“I have heard many very disturbing things,” Ms. Jones replies. “I have heard many times that vaccines cause autism and other neurological problems. I also read that giving so many vaccines to children like we do now stresses the immune system of a young child and has serious consequences as they grow up.”

“Let’s start with the idea that vaccines cause autism. What is the basis for this claim as far as you know?”

“Ever since they started putting mercury into vaccines to preserve them, the rate of autism has gone up and up.”

“Well, you are certainly right that the prevalence of autism in some parts of the world has increased at an alarming rate in recent years. But as you know, children had autism before they started putting mercury in vaccines, so that couldn’t be the only reason.”

“Right, but it does explain the increase.”

“But would it change your thinking on this to know that most vaccines no longer have mercury in them but the rate of autism has continued to increase?”

“I didn’t know that. If the increase keeps happening even after the mercury was removed, it would be hard to blame the mercury.”

This conversation continues as Ms. Jones becomes more and more engaged. She is an active participant, and with each step Anthony is able to explain things with a bit more complexity. Eventually, he is even able to give Ms. Jones some fairly complicated explanations about the lack of biological plausibility for vaccines as a cause of autism, the danger in thinking that an association, or correlation, between two things means one caused the other,29 and the way in which a vaccine stimulates a very small part of the human immune system in a highly specific way. Notice also that Anthony asks Ms. Jones what information she would need to make a decision when he inquires whether it would change her thinking “to know that most vaccines no longer have mercury in them.” An important feature here is to establish what the interviewee’s standard of evidence is on any topic, and even how she would go about gathering that evidence. Ask, for example, “What would it take to convince you one way or the other about vaccine safety? Let’s say that one person told you about research showing a link between vaccines and autism. Would that be convincing? What credentials would that person have to have? Would you want to hear the same thing from more than one person?” It is also very important to ask the interviewee what she thinks would happen if her decision turned out to be wrong. “Now,” Nurse Anthony might ask, “you say that you think vaccines will harm your child and you are considering taking a philosophical exemption and not having him vaccinated. Suppose, for a moment, you are wrong and vaccines don’t really have anything to do with autism. What would happen then? What would be the outcome of your decision?”

There are of course problems with the scenario we have outlined. First, even in a face-to-face setting, the conversation takes more time to play out than healthcare professionals can usually spend with each patient. Second, because we assume that the pediatrician definitely does not have time for such a lengthy conversation, we assigned the task of interviewer to a nurse; but nurses are also extremely busy healthcare professionals who may not have time for such a lengthy encounter. Third, under our often counterproductive way of funding healthcare, reimbursement is unlikely to be available for the doctor or nurse who spends time convincing someone to vaccinate a child. Finally, this is obviously not a public health solution. It will take creativity and perhaps new insights to translate a motivational interviewing schema into something that can be accomplished on a large population scale. But it is certainly plausible that new technologies, particularly social media, could be engineered to accomplish such a task.

We would also like to add that the lack of funding for vaccine-related counseling, which we lament, could be addressed. Saad Omer, an epidemiologist at Emory University, was recently quoted in a wonderful op-ed piece by Catherine Saint Louis: “Doctors should be paid separately for vaccine counseling in cases where a ‘substantial proportion of time is being spent on vaccines.’ ”30 Perhaps it is time to challenge the mantra “Insurance won’t pay for that” every time someone comes up with a good idea for improving the public’s health. After all, isn’t it worth some health insurance company money to prevent needless deaths from diseases for which we have vaccines?

Most important, our suggested approach replaces passive listening with active engagement and introduces complexity only as the participant becomes increasingly engaged in the topic. It does not guarantee that the outcome will be acceptance of the scientific solution to any issue; indeed, the approach itself is testable. Experiments can be designed to determine whether a motivational approach works better than a lecture approach in convincing people to accept the outcome of the scientific method.

“Pls Visit My Website, Thanks!!!!”

We’ve spent a lot of time discussing academic articles and how to tell whether an article is credible. But these days we more often get our information from the Internet. So how can we know whether a website is credible? There are no absolute rules, since the web is a much freer and less regulated mode of publication than peer-reviewed scientific publishing. Nevertheless, we offer a few guidelines to use the next time you are searching for information on a hotly contested and confusing scientific topic.

1.Spelling and grammar: Obvious errors are relatively easy to spot, and most of you probably do not give much credence to websites with many errors. Yet it is still worth reiterating that if a website is very poorly written and riddled with errors, it is probably not a reliable source of information. The main exception is a website that originates in a non-English-speaking country or has been translated into English.

2.Domain: Websites with the domains “.edu” and “.gov” are very often reliable sources of scientific information; when in doubt, look for these domains first. This is not to say that other domains, such as “.com,” “.org,” and “.net,” are necessarily unreliable; far from it. But if you are really in a bind and having a hard time figuring out what’s what in a scientific debate, “.edu” and “.gov” sites are your safest bet for reliable explanations. (For readers who like to see a rule of thumb made mechanical, an illustrative sketch of this heuristic appears just after this list.)

3.Sources: Pay attention to what kinds of sources the articles or information on the website cite. Does the site make a lot of claims without citing any sources at all? Unless it is a purely and explicitly opinion-based website, this is usually a bad sign. If it does cite sources, do a little investigating. What kinds of sources are they? Do they simply link to very similar websites with no sources of their own? Or do they take you to a wide array of peer-reviewed scientific journal articles? Also pay attention to how dated the cited material is. Science moves very quickly, and a website that consistently cites work from many years ago is unlikely to be a reliable source.

4.Design: This is of course very subjective, but in general, reliable websites have a certain professional look and feel to them. They are usually relatively well designed (although government agencies with tight budgets may not be too fancy). The website should at the very least look neat and nicely organized. Clutter, excessive use of exclamation marks, large and multicolored fonts, and graphic photos are all good indications of an unreliable website.

5.Author information: When materials on the website include an author’s name, take a moment to research him or her. What is that person’s web presence? Does a quick Google search bring up 37 petitions signed by this person against GMOs and a picture of him or her at an anti-GMO rally? Or does it yield a professional website with evidence of expertise in the area in question? This is a very important and quick way to measure whether the website is trying to bias you in a certain direction and, more important, whether its information is accurate and authoritative.
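As promised under guideline 2, here is a minimal sketch in Python of what that domain heuristic might look like if made mechanical. Everything in it, from the function name to the wording of its messages, is our own illustration rather than an established tool, and it encodes only one of the five guidelines; no script can do the judgment work the others require.

    from urllib.parse import urlparse

    # Toy illustration of guideline 2 only; the suffix list and the
    # messages are our own assumptions, not an established credibility tool.
    PREFERRED_SUFFIXES = (".edu", ".gov")

    def domain_hint(url: str) -> str:
        """Offer a rough credibility hint based solely on the domain suffix."""
        host = urlparse(url).netloc.lower()
        if host.endswith(PREFERRED_SUFFIXES):
            return host + ": .edu/.gov domain, generally a safer starting point"
        return host + ": not .edu/.gov; not necessarily unreliable, so check its sources"

    print(domain_hint("http://www.cdc.gov/vaccines/"))
    print(domain_hint("http://www.vaclib.org"))

Run on the two homepages discussed in the exercise that follows, the sketch would flag the CDC address as the safer starting point and mark the other site for closer scrutiny, which is exactly the judgment guideline 2 encodes.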

Now let’s do a little exercise. First look at figure 6, close the book, and write down as many red flags as you can.

How did you do? We counted at least five red flags: graphic photos (especially the one of the baby being stuck with many needles); multiple colors and fonts with random bolding and spacing; clutter; inappropriate exclamatory statements (“Smallpox Alert!”); and, although you cannot tell from a still image, distracting animations, since many elements of the homepage actually move and flash. Now compare this homepage to the one in figure 7.

Figure 7 is the CDC’s homepage on vaccination and immunizations. Now, this might not be the most beautiful website you’ve ever seen, but it is clean, organized, and consistent in its use of fonts, bolding, and color. It has no flashing graphics or distracting exclamation points. This is a website we can trust. While this is no foolproof method, the guidelines provided earlier should enable you to evaluate the claims of any given website. Even if the website cites 100 scientific articles, paying attention to small details like design and imagery can help you distinguish what’s true from what might simply be misinformed, if not bogus.


FIGURE 6 Vaccination Liberation Group homepage (http://www.vaclib.org).


FIGURE 7 CDC Vaccine & Immunizations homepage (http://www.cdc.gov/vaccines/).

It is clearly urgent that we bridge the gap between what the scientific community has learned and what nonscientists know, or think they know, about scientific advances. This gulf will at times seem vast, and no appeal will part the waters so that knowledge diffuses broadly through the population. The process will not succeed if smart people think the solution is to hector, deride, or condescend to the rest of us. We need an experimental approach to the transfer of scientific knowledge, one that follows all of the principles of the scientific method involved in acquiring this knowledge in the first place.