Denying to the Grave: Why We Ignore the Facts That Will Save Us - Sara E Gorman, Jack M Gorman (2016)

Chapter 6. Risk Perception and Probability

Each day, when you take your morning shower, you face a 1 in 1,000 chance of serious injury or even death from a fall. You might at first think that each time you get into the shower your chance of a fall and serious injury is 1 in 1,000 and therefore there is very little to worry about. That is probably because you remember that someone once taught you the famous coin-flip rule of elementary statistics: because each toss is an independent event, you have a 50% chance of heads each time you flip. But in this case you would be wrong. Each shower is indeed an independent event, but the risk of falling accumulates over repeated showers. This is related to what statisticians call the “law of large numbers”: if you do something enough times, even a rare event will eventually occur. Hence, if you take 1,000 showers you should expect about one serious injury—roughly once every 3 years for a person who takes a shower every day. Of course, serious falls are less common than that because of a variety of intervening factors. Nevertheless, according to the CDC, mishaps near the bathtub, shower, toilet, and sink caused an estimated 234,094 nonfatal injuries in the United States in 2008 among people at least 15 years old.1
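
A quick back-of-the-envelope sketch makes the arithmetic concrete. This is our illustration rather than a calculation from the text; it assumes only the 1-in-1,000 per-shower figure cited above and independence between showers:

```python
# Back-of-the-envelope sketch: assumes each shower is an independent event
# with a 1-in-1,000 chance of a serious fall (the figure cited in the text).
p_fall = 1 / 1000
n_showers = 1000  # roughly 3 years of daily showers

# Expected number of serious falls over n showers
expected_falls = n_showers * p_fall

# Probability of at least one serious fall over n showers
p_at_least_one = 1 - (1 - p_fall) ** n_showers

print(f"expected serious falls in {n_showers} showers: {expected_falls:.2f}")
print(f"chance of at least one fall:               {p_at_least_one:.0%}")  # about 63%
```

The chance of escaping unharmed on any single morning is overwhelming, yet over a few years of daily showers the expected number of serious falls approaches one, which is the sense in which rare risks accumulate.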

In 2009, there were 10.8 million traffic accidents and 35,900 road deaths in the United States.2 The CDC estimates a 1-in-100 lifetime chance of dying in a traffic accident and a 1-in-5 lifetime chance of dying from heart disease. But none of these realities affects our behavior very much. We don’t take very many (if any) precautions when we shower. We text, eat, talk on the phone, and zone out while driving, paying little attention to the very real risk we pose to ourselves (and others) each time we get in the car. And we keep eating at McDonald’s and smoking cigarettes, completely disregarding the fact that these behaviors could eventually affect our health in extreme and fatal ways.

On the other hand, there is zero proven risk of death as a result of the diphtheria-tetanus-pertussis (DTP) vaccine. But the death rate from diphtheria is 1 in 20 and the death rate from tetanus is a whopping 2 in 10.3 Yet across the developed world we are seeing an unprecedented decline in the number of parents opting to vaccinate their children. One in four U.S. parents still believes that vaccines cause autism, although this theory has been thoroughly debunked.4 Twenty-one U.S. states now allow “philosophical exemptions” for those who object to vaccination on the basis of personal, moral, or other beliefs. In recent years, rates of philosophical exemptions have increased, rates of vaccination of young children have decreased, and resulting infectious disease outbreaks among children have been observed in places such as California and Washington. In California, the rate of parents seeking philosophical exemptions rose from 0.5% in 1996 to 1.5% in 2007. Between 2008 and 2010 in California, the number of kindergarteners attending schools in which 20 or more children were intentionally unvaccinated nearly doubled: from 1,937 in 2008 to 3,675 in 2010.5 These facts recently prompted California to eliminate the philosophical exemption option, although it is unclear whether this welcome change in the law is accompanied by a change in the views of those California parents who took advantage of the option while it was still available. Vaccination rates have also decreased all over Europe,6 resulting in measles and rubella outbreaks in France, Spain, Italy, Germany, Switzerland, Romania, Belgium, Denmark, and Turkey. In 2013, there was a devastating outbreak of 1,000 cases of measles in Swansea, Wales, that was traced to parental fears about vaccination directly related to Andrew Wakefield’s 1998 paper erroneously claiming that vaccines cause autism.7

Don’t Let the Data Get in Your Way?

A similar dynamic can be seen in public notions of the risks associated with guns. The actual lifetime odds of dying as a result of assault with a firearm are estimated to be about 1 in 325. Fearing an armed invasion is a leading reason that people make the decision to keep a gun in the house. Remember the posting by the pro-gun organization Heartland, discussed in chapter 3, depicting an actress claiming to have a gun for protection? A famous actress confidently tells us that she has a gun and will defend her children from armed invaders. Yet it is an indisputable fact that a gun in the home is more likely to kill someone who lives there than an intruder.8 In 2003 Douglas Wiebe, then at UCLA and now at the University of Pennsylvania, reported the results of two large studies clearly showing that having a gun at home increases the risk of being murdered or killing oneself.9 Several years later, Wiebe’s research group found that having a gun makes a person more than 4 times more likely to be shot during an assault compared to people without guns.10 Thus, even trying to defend oneself with a gun is more likely to get the gun owner killed than it is to stop an assailant. Harvard’s David Hemenway points out that “in three national self-defense gun surveys sponsored by the Harvard Injury Control Research Center, no one reported an incident in which a gun was used to protect a child under fourteen.”11 Kellerman and Mercy analyzed homicides of women between 1976 and 1987 and found that half were killed by people they knew or by their husbands whereas only 13% were killed by strangers.12 Although it may be true that there are instances in which a person with a gun successfully uses it to protect himself from an assailant, this is clearly the exception. It may be that there are some instances in which not wearing a seat belt saved a life that would have been lost had the driver been wearing one, but no one denies the clear evidence that wearing a seat belt makes us safer. According to the American Academy of Pediatrics, “The absence of guns from homes and communities is the most effective measure to prevent suicide, homicide, and unintentional injuries to children and adolescents.”13 Overall, having a gun makes a person less safe. Despite all of the incontrovertible facts just cited, gathered by careful, objective scientific research, about 37% of Americans have guns at home for self-protection.14

So why do we ignore the considerable risks associated with everyday activities like showering and driving, while we fret over the very small to nonexistent likelihood of our children reacting adversely to a vaccination or of an intruder bursting into our homes and shooting us dead? The answer, we believe, lies in the psychology of risk perception. Both vaccines and guns involve two kinds of misguided risk assessment: denial of real risks and simultaneous acceptance of unrealistic, false, or small risks. In the case of vaccines, parents who refuse to vaccinate their children are both denying the real risk of the horrendous infectious diseases that were once responsible for overwhelming numbers of childhood deaths and treating the nearly zero risk of neurological damage or death from vaccines as substantial. In the case of guns, gun owners are denying the accumulated risk of suicide or homicide by a family member that accompanies having a gun in the home and exaggerating a very small risk of being killed in a firearm assault. From these examples we suggest that when it comes to health, people are intolerant of risks of harm that feel uncontrollable while they are perfectly content to accept risks that they perceive are within their control, even if these perceptions are incorrect. For example, everyone who drives believes he or she is a safe driver and totally in control of what happens on the road. In fact, 90% of drivers think they are better than average, which is, of course, statistically nearly impossible.15 In much the same way, many people who own guns seem positive that no one in their family will ever become suicidal or suddenly so enraged that he or she might turn the gun on him- or herself or someone else. Those things, they may think, occur only in other families. These particular dangerous health misperceptions fall under the psychological category we call uncontrollable risk. We don’t make the vaccines ourselves, someone else does, and because that manufacturer is out of our direct control, we think something could be wrong with them. In this chapter, we explore the question of why many statistically improbable risks seem much more relevant to us than statistically probable ones. We begin with an overview of classic risk perception theory, discuss some of the reasons we misperceive probabilities, look at some of the heuristics and biases that affect risk perception, and conclude as always with suggestions for living by the evidence.

We Can’t Always Tell What’s Risky

There are a few central reasons, relevant to the way we think, process information, and often make cognitive mistakes, that explain why we so frequently misjudge risk. The first is that people tend to be poor at interpreting probabilities. Even people who are well trained in statistics often falter when interpreting risk probabilities, especially when emotions are involved, as in the case of deciding whether to vaccinate your children. In addition, the way probabilities are framed largely determines people’s perceptions of the relevance and extent of the risk. Risk perception is particularly prone to change based on the type of risk (short-term versus long-term, individual-level versus population-level, etc.), so our estimates are not always based purely on the quantitative aspects associated with the real probability of incurring the risk. Finally, risk perception is a fundamentally social phenomenon and is extremely sensitive to group dynamics and social cues. Risk perception in groups is therefore often different from risk perception in individuals. All of these features of risk and the way we process it can lead us down the wrong path, misunderstanding the consequences of particular actions and sometimes causing us to make the wrong decisions. As Nobel laureate Daniel Kahneman put it, “When called upon to judge probability, people actually judge something else and believe they have judged probability.”16

Early research on risk perception assumed that people assess risk in a “rational” manner, weighing information before making a decision, the way scientists do (or at least are supposed to do). This approach assumes that providing people with more information will alter their perceptions of risk. Subsequent research has demonstrated that providing more information alone will not assuage people’s irrational fears and sometimes outlandish ideas about what is truly risky. The psychological approach to risk perception theory, championed by psychologist Paul Slovic, examines the particular heuristics (i.e. simple methods that people use to solve difficult problems) and biases people invent to interpret the amount of risk in their environment.

In Science in 1987, Slovic summarized various social and cultural factors that lead to inconsistent evaluations of risk by the general public.17 Slovic emphasizes the essential way in which experts’ and laypeople’s views of risk differ. Experts judge risk in terms of quantitative assessments of morbidity and mortality. Yet most people’s perception of risk is far more complex, involving numerous psychological and cognitive processes. Slovic’s review demonstrates the complexity of our assessment of risk. As we alluded to earlier, no matter how many numbers we throw at people, we still think that only what we aren’t in control of can hurt us. This is a little like a child who thinks once he is in bed and closes his eyes the monsters can’t get him any longer. You cannot convince the child that monsters don’t exist or that closing his eyes is not a defense against an attack by telling him what the prevalence of monsters or monster attacks is in each state in the United States. But that is probably the approach a scientist would take. A better approach, and one we will come back to in a moment, is that of the loving parent who has gained the child’s trust and is able to reassure him that monsters don’t come into the house.

Perhaps more important than quantifying our responses to various risks is to identify the qualitative characteristics that lead us to specific valuations of risk. Slovic masterfully summarizes the key qualitative characteristics that result in judgments that a certain activity is risky or not. Besides being intolerant of risks that we perceive as uncontrollable, we cannot tolerate risks that might have catastrophic potential, have fatal consequences, or involve one group taking a risk and a separate group reaping the benefits. Slovic notes that nuclear weapons and nuclear power score high on all of these characteristics. That is, (1) we don’t control what happens inside of a nuclear power plant; (2) if there actually is a nuclear accident it could be fatal; and (3) we get electricity anyway, so how is it worth the risk to try to get it from nuclear power plants? An epidemiologist might counter that (a) catastrophic nuclear accidents are rare (we hear about freak accidents like Chernobyl and Fukushima only once every few decades), (b) there are safeguards built into modern plants, and (c) even if there is an accident, it is unlikely to result in substantial numbers of human casualties (to be sure, nuclear power still poses a variety of safety risks, which is a very good reason to speed up research into how to make plants safer, not to condemn them outright). The climatologist might further point out that the way we currently make electricity is destroying the environment and will ultimately result in all kinds of catastrophes like melting polar ice caps, polar bears stranded on floating shards of ice, and floods that destroy property and take lives. In fact, global warming is already producing an increase in devastating storms like Superstorm Sandy and Typhoon Haiyan, leading to a much greater loss of life than anything imaginable from nuclear power. And although this may at first seem counterintuitive, from a health perspective nuclear power is clearly a safer source of energy than the coal, oil, and gas we now use to produce most of our electricity.18

But to a nonscientist, statistics on nuclear accidents and warnings about global warming seem remote and impersonal. A nuclear power plant exploding and killing everyone within 1,000 miles is somehow more graspable and therefore more to be feared.

Also unbearable are risks that are unknown, new, and delayed in their manifestation of harm. These factors tend to be characteristic of chemical technologies. The higher a hazard scores on these factors, the higher its perceived risk and the more people want to see the risk reduced.

Slovic’s analysis goes a long way in explaining why we persist in maintaining extreme fears of nuclear energy while being relatively unafraid of driving automobiles, even though the latter has caused many more deaths than the former. The risk of driving seems familiar and knowable. There is also a low level of media coverage of automobile accidents, and this coverage never depicts future or unknown events resulting from an accident. There is no radioactive “fallout” from a car crash. On the other hand, nuclear energy represents an unknown risk, one that cannot be readily analyzed due to a relative lack of information. Nuclear accidents evoke widespread media coverage and warnings about possible future catastrophes. A mysterious thing called “radiation” can cause damage many years after the actual accident. While it is easy to understand what happens when one car crashes into another and heads are pushed through windshields, most people don’t really know what radiation is. What is that business about the decay of atomic nuclei releasing subatomic particles that can cause cancer? In this case, a lower risk phenomenon (nuclear energy) actually induces much more fear than a higher risk activity (driving an automobile). It is a ubiquitous human characteristic to overestimate small probabilities (the reason people continue to buy lottery tickets even though there is really no chance of winning) and underestimate large probabilities (enabling us to blithely continue to consume more calories than we can burn off). Perhaps even more striking are recent findings that macaque monkeys are prone to exactly the same risk perception distortions as we humans are.19 The authors of that study suggest that this finding demonstrates an evolutionarily conserved bias toward distorting probabilities that stretches back millions of years to our nonhuman primate ancestors.


FIGURE 8 Linear probability model.

From a technical point of view, psychologists, behavioral economists, and neuroscientists call the tendency of people to overestimate small probabilities and underestimate large ones nonlinear estimation of probability. If we make a plot diagram (see figure 8) in which estimation of risk is on the vertical (y) axis and actual probability is on the horizontal (x) axis, a completely rational result (called utilitarian by economists) would be a straight diagonal line with a slope of 1. That is, the greater the actual chance of something bad happening, the greater our perception of the risk. Because this ideal results in a straight line on our plot diagram, it is called a linear response. In fact, numerous experiments have shown that the real-life plot diagram is nonlinear: small probabilities on the left side of the horizontal axis have higher than expected risk perception, and high probabilities on the right side of the horizontal axis have unexpectedly lower risk perceptions (see figure 9).
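
To make the shape of that nonlinear curve concrete, here is a minimal sketch using the Prelec probability weighting function, a standard one-parameter form from behavioral economics. We offer it only as an illustration of the general pattern (small probabilities weighted up, large ones weighted down), not as the specific curve plotted in figure 9; the parameter value is an assumption chosen purely for display:

```python
import numpy as np

def prelec_weight(p, alpha=0.65):
    """Prelec (1998) probability weighting function.

    For alpha < 1 it overweights small probabilities and underweights
    large ones, producing the inverse-S curve described in the text.
    The alpha value here is illustrative, not an empirical estimate.
    """
    p = np.asarray(p, dtype=float)
    return np.exp(-((-np.log(p)) ** alpha))

actual = np.array([0.001, 0.01, 0.1, 0.5, 0.9, 0.99])
for p, w in zip(actual, prelec_weight(actual)):
    print(f"actual probability {p:>6}  ->  perceived weight {w:.3f}")
```

Running this prints a perceived weight of roughly 0.03 for an actual probability of 0.001 and roughly 0.79 for an actual probability of 0.9, which is exactly the overweighting of rare events and underweighting of common ones shown schematically in figure 9.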

Our brains are actually hardwired for these kinds of skewed risk perceptions. As we have noted, the part of the brain in animals and humans that is activated in response to an anticipated reward is called the ventral striatum. In a fascinating experiment, Ming Hsu and colleagues performed brain imaging using functional magnetic resonance imaging (fMRI) to examine activation of the striatum while subjects performed a gambling task.20 They found that when participants had different probabilities of getting a reward from gambling, the strength of ventral striatum activation followed the exact same nonlinear pattern as is predicted by the psychological experiments described earlier. In other words, our brains force this nonlinear risk assessment habit on us.


FIGURE 9 Nonlinear estimation of risk.

Many risk theorists, coming from an academic tradition based in psychology, have noted that calculations of risk probabilities rely heavily on emotions and affective judgments. This is one of the ways in which individuals making risk assessments on a daily basis differ so radically from expert risk assessors, who rely exclusively on numbers and probabilities, and why their judgments of the risk involved in the same activity or phenomenon can be so divergent. For example, Slovic and others examined risk attitudes of residents of an area with frequent disastrous floods. Their research uncovered several distinct “systematic mechanisms” for dealing with the uncertainty that accompanies living in an area of pervasive high risk. Some people viewed floods as repetitive or cyclical events and thus afforded them a kind of regularity that they did not actually display. Another common strategy was to invoke what the researchers termed “the law of averages,” in which people tended to believe that the occurrence of a severe flood in one year meant that it was unlikely for a severe flood to occur in the following year. In fact, there is no natural reason that severe flooding needs to follow such a pattern. Other residents simply engaged in a form of flat-out denial. Some believed they were protected by newfangled “protective devices” that actually had no ability to protect them from severe flooding. Others seemed to have formed the belief that past floods were due to a “freak combination of circumstances” that were exceedingly unlikely to recur together in the future.21
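
The “law of averages” belief is easy to put to the test with a toy simulation. The sketch below is ours, not the researchers’, and it assumes a 10% annual chance of a severe flood that is independent from year to year; under that assumption, a flood this year does nothing to lower the chance of a flood next year:

```python
import random

random.seed(0)

p_flood = 0.10  # assumed annual probability of a severe flood (illustrative only)
n_years = 1_000_000
flood = [random.random() < p_flood for _ in range(n_years)]

# Flood rate in the years that immediately follow a flood year
after_flood = [flood[i + 1] for i in range(n_years - 1) if flood[i]]

print(f"overall flood rate:                {sum(flood) / n_years:.3f}")
print(f"flood rate the year after a flood: {sum(after_flood) / len(after_flood):.3f}")
```

Both numbers come out essentially identical, which is the point: when events are independent, last year’s disaster neither raises nor lowers this year’s odds, and the comfort of the “law of averages” is an illusion.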

Slovic and colleagues found that “ease of imagination” played a large role in public perceptions of risk. For example, the researchers found a heightened sense of public concern over the risks of injury and death from attack by a grizzly bear in North American national parks. In fact, the rate of injury from a grizzly bear attack is only 1 per 2 million visitors, with the death rate even smaller and thus, statistically speaking, negligible. Yet the ability to imagine the adverse event, due to the availability of photos from newspaper reporting, seems to have amplified the perception of risk for many people.22

Familiarity Breeds Errors

In terms of what types of risks are tolerated by most people, it seems that familiarity and the perceived tradeoff of risks and benefits are paramount. We tend to tolerate risks that are familiar, frequently encountered, or part of a well-understood system. This, risk theorists contend, goes a long way in explaining why we do not regard trains as high risk, even shortly after news of a highly publicized wreck, but a small accident in a nuclear reactor will cause significant social disturbance and heightened fear and avoidance of nuclear technologies. Risk theorists propose that we make this distinction in our minds based on the familiarity of the system in which the accident occurred. In other words, our brains are taking advantage of a common heuristic, or shortcut: familiar is good and safe; unfamiliar is bad and dangerous. This crude dichotomy, with which we all engage and probably rely on more heavily than we realize, is not entirely illogical and likely allows us to make good decisions when we are faced with potential threats and little time to decide how to deal with them. This heuristic is useful, for example, in helping us to realize that we can most likely safely approach a familiar dog in our neighborhood but that we might want to keep our distance from an unfamiliar stray dog that may be aggressive or diseased.

In a way, this reliance on the familiar to make risk judgments is not irrational at all. As Kasperson notes, direct experience with something, such as driving a car, can provide us with feedback on the “nature, extent, and manageability of the hazard, affording better perspective and enhanced capability for avoiding risks.”23 The problem is that sometimes we rely too heavily on our heuristics and misinterpret information about the true nature of the risk we are confronting. Sometimes a reliance on familiarity is a good way to judge threats when we must make a decision in a split second (e.g., Should I run away from an unfamiliar creature charging toward me?). The trouble is that we sometimes use these heuristics in contexts in which they are not entirely appropriate. A simple heuristic of familiarity or unfamiliarity may work perfectly well when deciding whether to approach or stay away from a strange dog on our front lawn, but it does not when the issue is the safety of nuclear power, an extremely complex issue that requires hours of research and discussions with experts to even begin to understand. Paradoxically, the more information we are bombarded with, the more we rely on these heuristics. It therefore makes sense that a large volume of information flow is often associated with the amplification of risk.24 The next few sections of this chapter will be devoted to examining those situations in which we inappropriately use heuristics and mental biases in risk assessments and end up misjudging the true nature and salience of the threat.

Compared to What?

The issue of the safety of nuclear power or vaccines also raises something we refer to as the “Compared to what?” issue. If all you know is that there were 50 excess cancer deaths from the Chernobyl nuclear power reactor disaster in the 20 years after the explosion, you would be justified in saying that is too many—in modern society we believe that we should try to prevent every premature death possible and that every life has enormous intrinsic value. Yet when we point out that this death rate is a fraction of the deaths caused by other forms of energy in widespread use, like electricity derived from burning fossil fuels, the situation completely changes and nuclear power actually appears to be a legitimate alternative that we must consider.25 Instead of calling for a ban on nuclear power plants, we should follow the lead of the Union of Concerned Scientists, which calls for funding to put in place better safety and security measures at nuclear energy facilities, steps to bring the price down for building new plants, and more research into tricky areas like safe nuclear waste disposal.26 The enormity of the gun problem in the United States is made much more palpable when it is compared to something else: the number of deaths in the United States annually from guns is almost equal to the number from automobile accidents (more than 30,000). Our minds don’t automatically ask the question “Compared to what?” when someone tells us the probability of something bad happening. Rather, we are programmed to respond to the emotional content of the presentation. Someone who somberly declares that “50 people died needlessly because of Chernobyl” gets our attention, especially if that is accompanied by pictures of corpses, atomic mushroom clouds, and the corporate headquarters of a nuclear power company.

We Probably Don’t Understand Probability

Part of the reason it is often so difficult to effectively assess risk is that understanding risk depends heavily on understanding probabilities, and, for a number of psychological reasons, most people, including expert statisticians and scientists, are not very good at understanding probabilities on an intuitive level. In our everyday thinking, our minds are much more comfortable with individual narratives than with broad, population-based figures. We readily consume and make sense of anecdotes and experiences, but we struggle with likelihoods, percentages, and other abstractions. This is in part because it is very difficult for us to imagine events that have never happened, to think concretely about things that are not immediately tangible, and to sit comfortably with uncertainty.

Our brains have an enormous incentive to do away with uncertainty, which is why we all form so many, often sophisticated, heuristics and biases to help us form judgments, develop opinions, and project into the future. Probabilities require us to think about two universes: one in which one event occurs (e.g., my child gets vaccinated and is healthy) and one in which another, counterfactual, event occurs (e.g., my child gets vaccinated and has an adverse reaction). Neither one of these universes exists in the present, the moment at which the child actually gets the shot or swallows the vaccine; furthermore, once the first universe is created in our minds, imagining the counterfactual, a world in which something entirely opposite and mutually exclusive occurs, is even more difficult. Our brains do everything they can to get around this kind of thinking. We naturally seem to prefer to use the parts of our brains that make rapid, intuitive judgments because this takes less effort than engaging the more evolved reasoning parts of the brain. Our limbic lobes fire before information even reaches the prefrontal cortex. If something is frightening, it stimulates another primitive part of the brain, the insula; if it is pleasurable and rewarding, it stimulates the nucleus accumbens, a part of the ventral striatum. The more noise, confusion, and emotion there is, the more likely it is that we will default to the amygdala, insula, and nucleus accumbens to make decisions based on impressions rather than probabilities. In order for us to suppress these brain systems and try to reason out a sensible decision based on data-based probabilities, we have to be calm, the environment has to be quiet, and we have to be approached by someone genuinely interested in our figuring out a solution rather than merely assenting to one.

Unless we devote the time and energy to assessing risk, we will default decision making to the impulsive brain, which makes statistical inference seem wildly counterintuitive, uncomfortable, unfamiliar, and generally strange and foreign. And this goes for experts as well as laypeople. In a famous experiment, 238 outpatients with different chronic medical problems, 491 graduate students at the Stanford Business School who had completed several courses in statistics and decision theory, and 424 physicians were asked to choose between radiation and surgery to treat a patient with lung cancer based on different sets of outcome probabilities.27 Half of the subjects had the outcomes presented to them in terms of the probability of dying from each treatment (mortality risk) and the other half in terms of probability of living (survival chance). Strikingly, the participants who were told that there was a 68% chance of living for more than 1 year after surgery were most likely to choose surgery over radiation, but subjects who were told there was a 32% chance of dying 1 year after surgery were more likely to choose radiation. Perhaps even more surprising is the fact that the results did not differ by group. Physicians with experience in reading the medical literature and making intervention decisions and graduate students with expertise in statistical analysis succumbed to the same illusion as the patient group. That is, when confronted with the word dying everyone—even experts—ignored the data and made an emotional decision against surgery, but upon hearing the word living everyone chose exactly the opposite, even though there is in fact absolutely no difference in outcome between the mortality risk and chance of survival. The difference is only in how the outcome is worded or framed.

Even Experts Mess Up

Even professional statisticians can become lazy about probabilities and make easy judgments rather than engaging a slow, deliberate process of evaluation. In most medical studies the data are analyzed in such a way as to derive a probability value, or p value. You will see in the Results section of most of these papers a statement like “The difference between the two treatments was significant (p < 0.05).” If p is larger than 0.05, the difference is considered “not significant” and the study declared a failure. This convention has been accepted for over 100 years by scientists and statisticians alike, who often look only for the size of the p value to judge whether a study worked out or not. It took the mind of a brilliant and unusually charismatic statistician, Jacob Cohen, to point out the fallacy in such thinking.

A p value of less than 0.05 means that if the two treatments truly did not differ, a difference at least as large as the one observed would turn up by chance less than 5% of the time. How was this level of significance decided upon? Basically, statisticians in the 1920s decided that it seemed about right to them. However, it turns out that the formula for calculating the p value is highly dependent on the number of subjects in the study. With only 10 subjects in a study in which half get Drug A and half Drug B, no matter how much better Drug A might be than Drug B the formula will usually not generate a p value of less than 0.05. On the other hand, if one could enroll 100,000 people in the study, then even if Drug A is just a tiny bit better than Drug B—a difference not even worth bothering with in real-life terms—the formula might generate a p value of less than 0.05. Jack Cohen had to point out over and over again that one cannot simply accept the p value as the entire basis for deciding if the outcome of a study is meaningful, yet to his ongoing chagrin most scientific journals ignored him and until the last few years relied exclusively on p values. So once again, an easy convention, one that requires little thought or effort, appeals to even the most brilliant statistician.
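
A small simulation makes Cohen’s point concrete. This is our sketch rather than anything from the book; it uses a standard two-sample t-test, and the effect sizes, sample sizes, and 0.05 cutoff are assumptions chosen purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def fraction_significant(effect_size, n_per_group, trials):
    """Fraction of simulated two-group trials in which a t-test yields p < 0.05."""
    hits = 0
    for _ in range(trials):
        group_a = rng.normal(0.0, 1.0, n_per_group)
        group_b = rng.normal(effect_size, 1.0, n_per_group)
        if stats.ttest_ind(group_a, group_b).pvalue < 0.05:
            hits += 1
    return hits / trials

# A trivially small difference (0.02 standard deviations) with 100,000 subjects per group
print("tiny effect, huge sample: ", fraction_significant(0.02, 100_000, trials=200))

# A large difference (0.8 standard deviations) with only 5 subjects per group
print("large effect, tiny sample:", fraction_significant(0.80, 5, trials=2000))
```

The first line comes out close to 1.0 and the second closer to 0.2: with enough subjects a meaningless difference is “significant” almost every time, while with very few subjects even a large difference usually is not, which is exactly why the p value alone cannot tell you whether a result matters.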

We Believe It if We Can See It

Numerous research studies have documented a sizable arsenal of heuristics and other strategies that most people use to judge probabilities. A common heuristic is availability. The availability heuristic allows individuals to judge the probability of an event by the ease with which they can imagine that event or retrieve instances of it from memory. That is, if you grew up in Syracuse and you are asked about the likelihood of snow in October in the United States, your availability heuristic will cause you to estimate a higher probability than someone who grew up in Pennsylvania.28 This heuristic is similar to Slovic’s observation that dangerous situations that can be readily visualized and imagined are viewed as more likely to occur than situations that are not accompanied by readily formed mental images.

Similarly, a great deal of research has shown that we rely on stories and anecdotes rather than on statistics to judge representativeness and probability. Many of us can probably relate to this impulse or have witnessed it in our everyday lives. One of us (Sara) has a good recent anecdote illustrating this very point. After researching the concept of retail medical clinics and amassing a large amount of evidence that care under nurse practitioners for a wide array of common acute illnesses results in health outcomes as good as those of care under a primary care physician, Sara found herself discussing the topic with a friend. The friend immediately interjected with a story about a time when he went to see a nurse practitioner for a case of bronchitis and the nurse practitioner did a poor job of diagnosing and treating the condition, resulting in a subsequent trip to the doctor and a longer period of illness. The friend was a highly educated individual with a very good grasp of the scientific process and of statistics who nonetheless insisted, based on this single personal experience, that nurse practitioners were vastly inferior to physicians. For a moment, Sara felt that all of her diligent research had been debunked. But then she realized: her friend’s story is completely irrelevant. The point is that these kinds of events may occur, but the research has shown that so many nurse practitioners perform at such a high level that the probability that any given individual who seeks help from a nurse practitioner will receive poor care is actually extremely low. Yet the power of the individual story and the salient memory of events that really happened to us almost always outweigh the power of projections about future events.

Our inherent desire to link imaginability with probability can lead us into something psychologists and behavioral economists call conjunction bias. Imagine the following scenario: you meet Linda, a 31-year-old single, bright woman who majored in philosophy, has always been concerned with issues of justice and discrimination, and participated in protests against the Iraq War. Which of the following would you think is more likely: (a) Linda is a bank teller, or (b) Linda is a bank teller and is active in the feminist movement? Or, to take another scenario, which do you think is more likely: (a) a nuclear war between the United States and Russia, or (b) a nuclear war between the United States and Russia in which both countries are drawn into conflict by other conflict-laden countries such as Iraq, Libya, Israel, or Pakistan?29 Most people choose option (b) in both cases. However, the correct answer, from a statistical probability point of view, is (a) in both cases. This is because a scenario with more detail can never be more likely to be true than a less detailed scenario that contains it. The likelihood that all of these factors will occur concurrently is lower than the likelihood of one factor on its own. Think of it this way: all members of the group of socially conscious bank tellers must be members of the group of all bank tellers, but the reverse is not the case. Hence, it is impossible to have a greater probability of being a socially conscious bank teller than of being a bank teller. In addition, the population base rate of bank tellers is higher than the population base rate of bank tellers who are also active in the feminist movement. This means that if you meet a random woman in the street, regardless of her personal characteristics, she is more likely to be a bank teller than to be a bank teller who is also a vegetarian, even if she is selling vegetables, talking to you about vegetables, or wearing a sign reading “Vegetables are great!” The conjunction bias can, as Thomas Kida points out, lead to “costly and misguided decisions.” Kida notes that the Pentagon itself has spent a great deal of time and money developing war plans based on “highly detailed, but extremely improbable, scenarios.”30 This bias distorts the laws of probability, and even people as smart as U.S. presidents cannot avoid it.
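
A tiny simulation shows why the conjunction can never win. The numbers below are entirely hypothetical (a made-up population in which 5% of people are bank tellers and 30% are feminist activists, with the two traits drawn independently); the only point is that the joint category can never be larger than either of its parts:

```python
import random

random.seed(1)

# Hypothetical, illustrative base rates: not real survey figures.
p_teller = 0.05
p_activist = 0.30

n_people = 1_000_000
teller = [random.random() < p_teller for _ in range(n_people)]
activist = [random.random() < p_activist for _ in range(n_people)]

share_teller = sum(teller) / n_people
share_both = sum(t and a for t, a in zip(teller, activist)) / n_people

print(f"P(bank teller)                       ~ {share_teller:.3f}")
print(f"P(bank teller AND feminist activist) ~ {share_both:.3f}")
# However vivid the description of Linda, P(A and B) can never exceed P(A).
```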

Empathy Makes Us Human

There is thus a significant tension between the laws of statistical probability and the propensities of our minds. As discussed earlier, we have the tendency to favor stories, anecdotes, and information that we can process using our imaginations. That is, we are most comfortable with information to which we can attach some form of mental image or picture. Abstract concepts, nameless and faceless probabilities, and simple facts are of much less interest to us than stories and images. This kind of thought process allows us to develop an all-important emotional characteristic: empathy. In an entirely utilitarian society in which we made decisions solely on the basis of probabilistic projections of maximal benefits, we would not care about people who were of no material value to us. Biologists and anthropologists battle about whether any species other than humans have the capacity for empathy, and even if they do, it is very difficult to detect. Without reliance on a human connection and without an interest in individual stories, we would all be less likely to care about the old lady who has trouble crossing the street or even to function well and work together in groups. Yet when it comes to thinking about problems that really are dependent on statistical probabilities, such as the likelihood of adverse effects from eating genetically modified crops or the chance that a certain danger will truly materialize for us, we do need to pay more attention to the numbers than to the stories of individual people, and it becomes highly challenging for our brains to make this switch. Most individuals we know who smoke cigarettes actually never get lung cancer. So if we relied on individual stories we might recklessly conclude that smoking is perfectly safe. It is only by analyzing population statistics that we learn that the chances of developing lung cancer are about 25 times greater for a smoker than a nonsmoker. The very facility that we rely on in large part to be human and function appropriately in our social environments turns into a mental trap when it comes to interpreting statistics and probabilities.
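
A toy calculation shows how a 25-fold relative risk coexists with the everyday observation that most smokers we know never get lung cancer. The 1% lifetime risk assumed below for never-smokers is a round number chosen purely for illustration; only the 25-times figure comes from the text:

```python
# Illustrative only: an assumed 1% lifetime lung-cancer risk for never-smokers,
# multiplied by the 25x relative risk for smokers cited in the text.
never_smoker_risk = 0.01
relative_risk = 25
smoker_risk = relative_risk * never_smoker_risk

print(f"lifetime risk, never-smoker: {never_smoker_risk:.0%}")
print(f"lifetime risk, smoker:       {smoker_risk:.0%}")
print(f"smokers who never develop lung cancer: {1 - smoker_risk:.0%}")
```

Under these illustrative numbers three out of four smokers never develop lung cancer, so a collection of personal anecdotes will suggest that smoking is safe; only the population-level comparison reveals the 25-fold increase in risk.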

This situation can have dire consequences in making healthcare decisions. Who would not like a doctor who declares “I regard every one of my patients as a unique individual and make a treatment plan based on her own special needs and circumstances”? We all want that doctor, the very model of empathic caring. But what happens when this physician is confronted with a 98-year-old woman with end-stage Alzheimer’s disease who is refusing to eat? The woman has no memory left, no idea who she or anyone else is, and a total inability to perform even the most basic functions of life. From population statistics it is predicted that with the placement of a feeding tube and forced feeding she will live for 2 more months and without it for 8 more weeks. Very few people would subject this elderly woman to the pain of having a feeding tube put in her if they appreciated the futility of doing so, but perhaps our exceptional doctor will say, “I am not letting her starve to death—that is not how she would want to die.” According to Kahneman, this is not unusual: “This is a common pattern: people who have information about an individual case rarely feel the need to know the statistics of the class to which the case belongs.”31

All of this goes a long way in explaining why our risk perception is so flawed. And indeed, risk perception theory may go a long way in explaining why some parents still insist that vaccines cause disorders such as autism in the face of abundant evidence to the contrary. Research into risk perception indicates that vaccines are an excellent candidate for being perceived as high risk: man-made risks are much more frightening than natural risks; risks seem more threatening if their benefits are not immediately obvious, and the benefits of vaccines against diseases such as measles and mumps are not immediately obvious since the illnesses associated with these viruses—but not the viruses themselves—have largely been eliminated by vaccines; and a risk imposed by another body (the government in this case) will feel riskier than a voluntary risk.32 Research has shown that risk perception forms a central component of health behavior.33 This means that if parents view vaccines as high risk, they will often behave in accordance with these beliefs and choose not to vaccinate their children.


FIGURE 10 Natural history of an immunization program.

Source: M. Olpinski, “Anti-vaccination movement and parental refusals of immunization of children in USA,” Pediatria Polska, 2012, 87(4), 381-385, figure 1.

Imaginability of risk as a determinant of its perceived salience is also important here. It has often been noted that few modern parents have ever actually witnessed a case of measles or pertussis but, sadly, more and more people have contact with some of the conditions for which vaccines are routinely blamed, such as autism. As a result, we might say that vaccines have become a victim of their own success. In 1999, Robert Chen designed a visual tool to help explain this phenomenon: the natural history of an immunization program (see figure 10).34 The beginning of an immunization program is characterized by high morbidity and mortality from the disease in question and extreme widespread fear of said disease. When a vaccine to fight the disease is introduced, euphoria follows. People are more than willing to get themselves and their children immunized against the culprit that has caused so much death and suffering. This phase may last for a long while, as long as the memory of the horrors of the disease is still fresh. But as soon as this memory starts to fade, and a new generation of people who never had or saw the disease comes of age, fear of the disease likewise begins to abate.

What follows is a broad spectrum of responses to vaccination. Most people will still comply with government regulations and physicians’ suggestions to get their children immunized. But a growing proportion of individuals will start to show resistance to a medical intervention they now find unnecessary and unnatural. Examples of disastrous effects of vaccines, however scant, will become exaggerated and widely circulated. The results of a recent nationally representative telephone survey demonstrate this principle well. In a survey of 1,600 U.S. parents of children younger than 6 years old, 25% believed that a child’s immune system was weakened by vaccines, 23% believed that children get more immunizations than is good for their health, and 15% did not want their child to get at least one of the currently recommended vaccines.35 In the absence of direct experience with diseases like measles and pertussis, parents believe that vaccines are unnecessary, that there are too many required vaccinations, and that their child’s immune system would be better off without vaccines. In other words, they don’t fear what they can’t imagine.

The connection between the anti-vaccine movement and the psychological perception of risk described in this chapter is perhaps clearest in the favoring of stories over statistics. In fact, this feature is probably true of all of the health skeptics we describe in this book. The stories are there to remind us that the consequences of assuming a benefits-outweigh-the-risks attitude might be unbearable personal pain and loss. A good example of the reliance on personal stories comes from the ThinkTwice Global Vaccine Institute’s website. The website’s title makes it sound as though it represents an academic, evidence-based organization, but it is only a website and it is populated mostly by personal stories about alleged adverse reactions to vaccines. All of the stories are titled in large blue letters with the names of the children involved. The first story on the website reads:

I recently took our only children, Harley (2 months) and Ashlee (2 years), to the doctor for their well-baby appointments. Harley had the sniffles and Ashlee had a cold, otherwise both were in perfect health. Harley was given his first DPT, polio, and Hib shots. Ashlee received her first Hib and MMR shots, as well as her third DPT and fourth polio shots. After the vaccinations I laid the children down for their naps. Harley woke first; his thighs were red. I heard Ashlee wake and then I heard a “THUMP!” Ashlee fell flat on the floor. She cried out “Mommy, me no walk!” I checked her over and stood her up; there was no strength in either leg. I called the hospital and rushed Ashlee to the emergency ward… . For ten days Harley’s behavior changed. He barely slept, hardly ate, and seemed to be getting worse. On May 17 at 9:00 a.m., my husband got up, checked on Harley, and yelled out, “Bonnie, get up, call the ambulance. Harley is dead!”

The use of these tragic stories illustrates the important connection between “imaginability” and risk perception. Once these stories are circulated, and especially when they proliferate, the newly produced ability to imagine an adverse reaction to a vaccine produces a sense of risk. In addition, we can probably all agree that it is more interesting for most people to read individual stories than to read the statistics about the probability of a child experiencing adverse effects as a result of a vaccine. What’s more, we much more easily relate to and understand the story than the statistics. As we noted earlier, in many ways it is these stories, not the statistics, that make us feel human.

A Human Response

How can we mount a human response to a story like that of Harley and Ashlee that acknowledges the pain and suffering of the family who has lost a child but does not use it to sacrifice the value of vaccination? The problem here is partly due to the public’s lack of scientific skepticism. After reading the prior quotation and recovering from the shock of graphically described tragedy, one might first ask if the story is even true. What turned out to be the actual cause of Harley’s death? In a case like this, a postmortem examination, possibly including an autopsy, was almost certainly conducted. What did it show? Did Harley really die from effects of the vaccine? And if he did, as tragic as that outcome is, how common is such an event? In fact, it is extremely rare, whereas little children like Harley die on a regular basis from automobile accidents and gunshot wounds. All of this appeals to the prefrontal cortex rather than the amygdala and probably appears heartless. But how about telling a story about a 5-year-old with leukemia who is exposed to measles virus because of contact with an unvaccinated playmate? That child has a very high risk of dying because leukemia destroys immunity to viruses like the one that causes measles. Once again, we are left with the problem that public health officials are generally loath to “fight fire with fire” and use emotionally laden, manipulative messages to persuade us to do what the data show to be the right thing. Yet neuroscientists, psychologists, and behavioral economists document over and over again that emotional messages carry more valence than fact-based ones.

Why Buy a Lottery Ticket?

When it comes to vaccines, people also probably fall victim to a common misperception about the relationship between benefits and risks. Risk and benefit are usually positively correlated; that is, in most cases the greater the risk taken, the greater the benefit enjoyed. Skiers like to say things like “No spills, no thrills.”

Research has established, however, that despite the fact that benefit and risk are most often positively correlated, they are negatively correlated in our minds. For most people, the greater the perceived benefit, the lower the perceived risk, and the lower the perceived benefit, the greater the perceived risk.36 In the case of vaccines, the assumption promulgated by anti-vaxxers that vaccines are high risk means to some people that vaccines yield low benefits, because of this innate cognitive distortion.

In the case of the more recently introduced vaccine against the human papillomavirus (HPV), the virus that causes cervical cancer in women, anti-vaxxers had to go to great lengths to concoct a risk. It is claimed that the HPV vaccine will encourage adolescent girls to have sex. This claim is based on the notion that adolescent girls refrain from sexual intercourse because they are afraid of contracting HPV infection and getting cervical cancer later in life. By fabricating a risk that society abhors—young girls having sex—the anti-vaxxers not only scared some people but also took advantage of the risk perception distortion and created the myth that the vaccine must have little relative benefit. We have yet to see any evidence from even the most sloppily done survey that adolescent girls think that way or even how many have ever heard that HPV is the cause of cervical cancer. Now that the vaccine is available, there is no evidence that teenage sex has increased. There are, of course, convincing data that the rate of newly acquired HPV infection is already declining.37

Risk perception theory helps us understand in part why this assumption is so easy to make despite all the evidence from the past 50 years indicating that the benefits of vaccines are tremendous and nearly unparalleled in the history of medicine and public health. If we allow ourselves to believe for a moment that vaccines are very high risk, as anti-vaccination advocates insist, then it follows, based on the psychological processes that help us process risk information, that the benefits are miniscule. Alhakami and Slovic have characterized this process as an affective one. If an activity is “liked,” people tend to judge its risks as low and its benefits as high. If an activity is “disliked,” people tend to judge its risks as high and benefits as low. This affective assessment of risk has become second nature for most of us by the time we are old enough to decide whether to vaccinate our children. Therefore, if we decide that vaccines are high risk, it is difficult for anyone to convince us that their benefits are in fact high. On the other hand, when cell phones were said to increase the risk of brain tumors, there were no well-organized, vocal groups demanding their immediate withdrawal from the market. Some people opted to use earphones in order to keep the phones away from their heads, but there was no widespread anti-cell phone organization attacking the companies that make them or accusing the Federal Communications Commission of complicity. This is because people liked their cell phones and therefore were prepared to ignore any such risk. Some studies have shown there really is no such risk,38 while others suggest that there may be some,39 but in this case the actual level of risk appears to make little difference because we simply all want to use our cell phones.

We Are Endowed

Our perceptions of statistics are so prone to error and psychological adjustment that even the way in which a probability is framed can have a significant effect on how we interpret it. In addition, it turns out that our calculations about which risks to take are based much less on hard numbers and probabilities and much more on the ways in which these risks are framed. Behavioral economists such as Daniel Kahneman have discussed loss aversion extensively. The idea is that our psychologies teach us to be much more afraid of losing something than pleased to gain something of equal value. Kahneman and colleagues also refer to this as the “endowment effect.” The idea is based on some experiments originating in the 1980s and 1990s that showed people’s sometimes irrational desire to hold onto items they already owned, even if they were being offered better deals in exchange for them. In one simple study, participants were given either a lottery ticket or $2.00. Each subject was then later offered an opportunity to trade the lottery ticket for money or money for a lottery ticket. Very few subjects chose to take the experimenters up on this offer.40

Other, more complex experiments proved the theory of the “endowment effect” more solidly. In one experiment, 77 students at Simon Fraser University (SFU) were randomly assigned to three experimental conditions. One group, called the Sellers, were given SFU coffee mugs and were asked about their willingness to sell the mugs at prices ranging from $0.25 to $9.25. A second group, Buyers, were asked about their willingness to buy a mug in the same price range. A third group, Choosers, were not given a mug but were asked to choose between receiving a mug or the appropriate amount of money at each price level. The experimenters noted that the Sellers and Choosers were in identical situations, in that their roles were to decide between the mug and cash at each price level. In the end, the experimenters noted that the Choosers “behaved more like Buyers than like Sellers.” The median prices were: Sellers, $7.12; Choosers, $3.12; Buyers, $2.87. The experiment effectively demonstrated that the resulting low volume of trade was more a result of owners’ discomfort at parting with their endowment (e.g., the mug) than of buyers’ reluctance to part with their cash.41

The endowment effect is reflected in brain function. Knutson and colleagues scanned research participants’ brains with functional magnetic resonance imaging (fMRI) while they performed a buying, choosing, selling experiment similar to the one just described.42 Activation was greatest in the nucleus accumbens—sometimes referred to as the brain’s reward center—when subjects contemplated preferred products across buy and sell conditions. Thus, whenever in a perceived favorable position, the brain registered a reward response. When, however, subjects were confronted with a low price for buying versus selling, a portion of the prefrontal cortex was activated. A low price might seem attractive to a buyer, but it is also suspicious, and it activated a brain region dedicated to evaluating a situation carefully. Finally, during selling, greater activation of an aversion center in the brain, the insula, predicted a stronger endowment effect: the stronger the activation of the insula, the greater the emotional negativity experienced at the prospect of parting with a current possession. Significantly, this last finding reminds us that individual differences are always greatly important in determining the results of both cognitive neuroscience experiments and real-world decisions. Some people are more prone to the endowment effect than others, for reasons that are as yet not entirely clear but undoubtedly have to do with a combination of genetic endowment and life experience. It is not hard to imagine the evolutionary advantage of holding onto things one already possesses.

Clearly, the human brain is wired to promote the endowment effect, albeit with interindividual variation. Once we have decided that nuclear power, ECT, and GMOs are dangerous, we regard those attitudes as possessions and resist relinquishing them even in the face of a better offer—that is, even when we are shown data that contradict our beliefs. Here we have yet another example of the way in which brain networks support a practical functionality that works less well when decisions about complicated scientific issues are required.

Manipulating Our Moods

Kahneman and colleagues have noted that economic models that ignore loss aversion and the endowment effect credit us with much more stability and predictability than our choices actually reveal.43 In the end, what these phenomena teach us is not only that psychology plays an enormous role in any kind of decision making, no matter how “rational,” but also that in order to understand human decision making, especially surrounding risk, we need to take people’s situations at the time of the decision into account. That is, the context of the decision matters and can help us understand some of the individual differences that exist among people’s judgments about what is truly risky and what is not. A person’s emotional state at the time of learning something new has important implications for how the content is processed and recalled. For example, as Norbert Schwarz points out, we are more likely to recall a positive memory when we are happy and a negative memory when we are sad or depressed.44

So, for example, let us imagine that the first time we read an article claiming that drinking unpasteurized milk will make our children strong and healthy there is an accompanying picture of a smiling, robust-appearing baby. Because this picture will make most of us feel happy, we will most easily recall the “fact” about unpasteurized milk when we are again in a happy mood. Similarly, suppose that when we first encounter the claim that withholding antibiotics from a person who has had a cold for more than 10 days is denying them good care there is an accompanying image of someone suffering with a nasty upper respiratory infection, complete with runny nose, red eyes, and distressed facial expression. That image will make us feel sad or unhappy and we will most readily recall the misinformation about antibiotics for colds when we are again in a bad mood. That is, the quality of the mood we are in when we first encounter a claim, regardless of whether it is true or untrue, becomes indelibly associated with the claim. We will associate consuming unpasteurized dairy products with happiness and withholding antibiotics for colds with sadness. Shrewd partisans of any cause understand this and therefore make sure to carefully manipulate our emotions when they present their ideas to us. That way, it is easier for them later on to get us into the desired mood when they present those ideas or conversely to make us recall the ideas with conviction when they once again rev up our emotions in a specific direction.

Experiments show that anxious people, as Schwarz goes on to explain, "pursued a goal of uncertainty reduction and preferred low-risk/low-reward options over any other combination."45 If someone wants to make us afraid of genetically modified foods, the favored strategy is first to make us nervous about getting cancer and then to tell us that the safest thing is simply not to eat them, which on this view is the lowest-risk option available. Scientists, of course, prefer dry, factual expositions that are not intended to make people happy, sad, or anxious. While this is probably the most ethical approach, it also puts them at a distinct disadvantage to proponents of misinformation who may have no qualms about emotional manipulation. Recently, television advertisements aimed at getting people to quit smoking have been particularly graphic, showing people with advanced emphysema gasping for air and expressing heartbreaking remorse for having smoked. The ads may be effective, but some people decry them as inappropriate fear-mongering by the scientific community.

Several researchers have even thought about loss aversion in the context of people’s reactions to policy changes. The most pertinent example is the public’s attitudes toward the Affordable Care Act (ACA, also known as “Obamacare”). In the lead-up to the implementation of the ACA, investigators set up a survey experiment in an attempt to understand the relationship between loss aversion and attitudes toward various health plans. One half of the sample group was given the choice of the following two options: (a) a plan with no lifetime limit on benefits (chosen by 79.5% of respondents) and (b) a plan that limited the total amount of benefits in your lifetime to $1 million but saved you $1,000 per year (chosen by 20.5% of respondents). The second half of the sample group was given the choice between a different set of two options: (a) a plan that limited the total amount of benefits in your lifetime to $1 million (chosen by 44.2% of respondents) and (b) a plan with no lifetime limits on benefits but at a cost of an additional $1,000 per year (chosen by 55.8% of respondents). Both scenarios are effectively the same: the plan with no lifetime limits results in a loss of about $1,000, whether that is through not choosing the savings from the alternative plan or through a $1,000 cost directly tied to the plan with no lifetime limits. Yet nearly 80% of respondents in the first scenario chose the plan with no lifetime limits, while in the second scenario, this percentage dropped to about 56% when the $1,000 was explicitly framed as a cost associated with the plan with no lifetime limits. The idea here is that people are so opposed to the feeling of loss that they reject a scenario that they would have otherwise chosen if the $1,000 element were framed as savings rather than loss. Once again, the two scenarios are identical, but because they stimulate very different moods, they result in very different decisions.46
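To see why the two presentations describe the same economic choice, it may help to write out the net annual cost of each plan under both framings. What follows is a simple illustration of our own, using the survey's $1,000 figure and letting B stand for an arbitrary baseline premium; the symbols are ours, not the investigators':

\[
\begin{aligned}
\text{Framing 1 (savings): } & \text{cost}_{\text{limited}} = B - \$1{,}000, \quad \text{cost}_{\text{unlimited}} = B\\
\text{Framing 2 (surcharge): } & \text{cost}_{\text{limited}} = B, \quad \text{cost}_{\text{unlimited}} = B + \$1{,}000\\
\text{In both framings: } & \text{cost}_{\text{unlimited}} - \text{cost}_{\text{limited}} = \$1{,}000
\end{aligned}
\]

The gap between the two plans is $1,000 in both framings; all that changes is whether the $1,000 is labeled a forgone saving or an added cost, yet the share of respondents choosing the unlimited plan fell from roughly 80% to 56%.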

Not surprisingly, then, one of the public’s greatest fears about Obama’s healthcare reform was the prospect of losing something they already had: a particular doctor, a particular hospital, a particular health insurance plan. Wary of this, President Obama spent a lot of time emphasizing that the overhaul of the nation’s health system would not result in any form of individual loss. In a 2009 speech to a joint session of Congress, the president emphasized this point:

Here are the details that every American needs to know about this plan. First, if you are among the hundreds of millions of Americans who already have health insurance through your job, or Medicare, or Medicaid, or the VA, nothing in this plan will require you or your employer to change the coverage or the doctor you have. Let me repeat this: Nothing in our plan requires you to change what you have.47

Perhaps without even knowing it, Obama was trying to get around a psychological mechanism that plays a significant role in our decisions and in our perceptions of much of the policy that surrounds us.

In an experiment published in 2010, Roger Bartels and colleagues at the University of Minnesota showed that framing is critical in convincing people to get vaccinated.48 They designed their experiment in accordance with previous findings that gain-framed messages are most effective for promoting health behaviors with minimal risk, while loss-framed messages are most effective for health behaviors associated with more risk or with uncertainty. Undergraduates in psychology classes at the university were randomized to receive information that a vaccine for West Nile virus was either 90% effective or 60% effective in preventing disease, and were also randomized to receive either a gain-framed or a loss-framed message (the loss-framed wording is shown in brackets), as follows:

By [not] getting vaccinated, people will be [un]able to take advantage of a way to protect themselves from a potentially deadly infection. If you are [fail to get] vaccinated against the virus, you can be [will not be] confident that you have given yourself the best chance to decrease your risk of developing serious complications from infection… . People who are [not] vaccinated will be free of worry [continue to worry] about mosquitoes and will [not] have the peace of mind to maintain their lifestyle.

This design results, then, in four separate groups of participants. They found that when the vaccine was said to be 90% effective, subjects who got the gain-framed message were more favorable to the idea of getting vaccinated, but when the vaccine was said to be only 60% effective, the loss-framed message was more effective. Clearly, simply manipulating how the very same information is framed has a profound impact on the decisions people make.

What does all this mean for our ability to perceive risks and make decisions about risks that affect our health? For one thing, there is a great deal of research to suggest that people are willing to take more risks in order to avoid loss. In health-related contexts, this means that people will react very differently to loss-framed versus gain-framed messages, and their corresponding willingness to take health-related risks will adjust accordingly. A loss-framed message in a health scenario might go something like this: in the context of preventive screening for cancer, a doctor tells a patient, "If you don't detect cancer early, you narrow your options for treatment." This is a loss-framed message because the doctor emphasizes what will be lost by not going through the screening process. It is analogous to framing the insurance plan with no lifetime limits as costing an extra $1,000 rather than framing the limited plan as saving $1,000. A gain-framed message about preventive behaviors might be: "Eating a lot of vegetables will help you become healthier."49 In this case, the individual has been told that engaging in healthy, preventive behaviors will help him or her gain something: better health. Research has indicated that people view disease-detecting mechanisms, such as cancer screenings and HIV tests, as far riskier than disease-preventing behaviors, such as wearing sunscreen and taking vitamins. This is because disease-detecting behaviors have the capacity to reveal something disturbing, such as the presence of a malignant growth. Because people are willing to take more risks to avoid loss, it makes sense to use loss-framed messaging when trying to get people to agree to disease-detecting behavior. On the other hand, because people already view disease-preventing behaviors as low risk, it is much more prudent to use gain-framed messaging in these instances.50

The psychological mechanisms that interpret loss and gain differently clearly have an effect on risk decisions in health contexts. It would be wise for physicians and public health officials to familiarize themselves with some of these mechanisms to better understand how to influence people to make the healthiest decisions possible. For instance, perhaps because getting vaccinated is a disease-preventing behavior, it would be more suitable to frame the need for vaccination to parents in terms of the health benefits the child will enjoy rather than in terms of the diseases the child may contract if he or she is not vaccinated. An experiment could even be devised to test this idea.

Group Risk Is Not the Same as Individual Risk

Since a lot of people who deny the accuracy of valid health-related data have formed large groups and communities, it is worthwhile to consider how group assessment of risk might differ from individual risk assessment. The person who joins an anti-vaccine group on Facebook and then tries to make a decision about whether or not to vaccinate her child confronts slightly different psychological forces impacting her assessment of the risk involved than someone who makes the decision by reading a book about childhood illnesses or by talking to her pediatrician. The woman who joins the Facebook page is involved in a group risk assessment, and there is reason to believe that group risk assessment differs in vital ways from individual assessment.

One of the earliest studies of group risk assessment dates from 1961. Investigators concluded from an experiment that individuals make riskier decisions when in groups than when alone. They found that while an individual might decide to allow a form of surgery when there was a 1% chance of death, people in groups might allow the surgery with up to a 50% chance of death.51 One scholar coined this the “risky shift phenomenon,” whereby the amount of risk tolerated by individuals increased when they joined groups. Yet not all studies were able to replicate this finding. In fact, a number of subsequent studies found just the opposite: groups made decisions that were less risky than individual decisions. In response, psychologists and risk theorists developed the concept of the group polarization phenomenon, which asserts that groups cause individuals to become more extreme in their decisions. If you are somewhat predisposed to make a risky decision but may be on the fence, joining a group would, in this model, cause you to make an even riskier decision than you were previously contemplating.52

In a complex experiment, Scherer and Cho helped resolve the discrepancy by demonstrating an extremely strong association between the strength and frequency of interaction among people and their agreement about risk assessment.53 If Jane says hello to Harry only once a week in passing at work, she is much less likely to have a high degree of agreement with him on what is risky and what is not than if she speaks to him for 30 minutes every day. Of course, Jane might seek out Harry for more conversation because they are highly similar in many ways, including the ways in which they think about and assess risk. Nevertheless, under conditions of frequent contact the group polarization phenomenon seems to prevail: Jane's interactions with Harry are likely to strengthen her preexisting beliefs about risk now that she is in a group rather than an individual setting. Harry's beliefs about risk will likewise be strengthened. Scherer and Cho's work adds evidence that the extent to which Jane's and Harry's risk perceptions will strengthen and converge is linked to the amount of contact they have.

This is perhaps an even more important concept now than it was when Scherer and Cho published their findings. Social media and the Internet make group formation much easier than ever before. In addition, the intensity and frequency of contact among people has increased exponentially with the development of new technologies and the increasing use of quick communication channels such as text messaging, Facebook, and Twitter. Many social media sites feature “groups” that individuals can join. Once you join a group, you are instantaneously part of a conversation that moves quickly and updates frequently. A sense of frequency and intensity of contact results easily from these forms of communication. According to Scherer and Cho’s findings, the convergence of our beliefs about risk is likely stronger once we join that “Are GMOs safe?” Facebook discussion group.

Research on group risk perception has also emphasized the importance of social and cultural factors in risk assessment. In arguing for what has been termed the "social amplification of risk," sociologists proposed that it is a mistake to view risk assessment as an individual activity.54 Instead, risk assessment is inherently social and responds to existing social and cultural norms. This argument moved the conversation even further away from the technical concept of risk: a simple multiplication of the probability of events and the magnitude of their consequences.55 Because of this recognition of the more complicated nature of risk assessment, especially the ways in which risk perception responds to social and cultural environments, Kasperson developed the following model of the cognitive process by which risk signals are decoded:

1. The individual filters the signals, meaning that only a fraction of all incoming information is actually processed.

2. The individual decodes the signal, deciding what it means to him or her personally.

3. The individual uses cognitive heuristics of the sort we have described in this chapter and elsewhere in this book to process the risk information.

4. The individual interacts with his or her cultural and peer groups to interpret and validate signals.

5. The individual and his or her groups formulate behavioral strategies either to tolerate the risk or take action to avoid it or react against the entities producing the risk.

6. The individual engages in actions formulated to accept, ignore, tolerate, or alter the perceived risk.56

Clearly this process is much more complex than simply multiplying the probability of the risk by the magnitude of its consequences and reacting accordingly. This account of risk assessment is also more complex than any model that asserts that information is received, cognitive heuristics are used to process it, and action (or no action) is taken. It incorporates a highly intricate social process into the perception of risk. Individual risk assessments are checked against group assessments, and action taken in response to these risks is both formulated and carried out in the same complex social environments. Even the idea that an individual assesses risk alone, within the confines of his or her individual psychology, although more sophisticated than the view that risk is simply probability times magnitude, is still too simple.
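For contrast, the purely technical definition of risk that these researchers were reacting against fits on a single line. The numbers below are hypothetical, chosen only to show how the calculation works, and are not drawn from any study cited in this chapter:

\[
\text{Risk} = p \times m, \qquad \text{e.g., } p = \frac{1}{10{,}000} \text{ per year}, \quad m = \$100{,}000 \quad\Rightarrow\quad \text{Risk} = \$10 \text{ per year.}
\]

Kasperson's six steps are a reminder of how much this one-line calculation leaves out: filtering, decoding, heuristics, and, above all, validation by the groups to which we belong.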

A quick look at a few websites can demonstrate how anti-vaccine advocates form strong group memberships that can shape people's perceptions of risk through the group processes described earlier. One website, VacTruth.com, "liked" by 48,514 people on Facebook, uses the term "vaccine pushers" to lump "enemy" entities into one group, in opposition to which the identity of VacTruth.com followers becomes more concrete. Countless headlines in the site's "news" section use this phrase: "GBS [Guillain-Barré syndrome] Is Not Rare After Vaccinations: Here's How Vaccine Pushers Conceal the Truth," or "Bombshell TV Show About HPV Vaccines Reveals Cruel Nature of Vaccine Pushers."57 The authors of the website effectively collect a series of anti-vaxxers' "enemies" into one group: pharmaceutical companies, the CDC, scientists at universities, the FDA, and other government agencies and organizations that promote vaccines. The simple phrase "vaccine pushers" creates a sense of a unified enemy around which the group identity of the anti-vaxxers can grow stronger. If we believe Kasperson's theory that group identities have a strong impact on both the formulation of risk assessments and the actions taken in response to them, then we can see that the anti-vaxxers' strategy of strong group formation is a powerful one indeed, and one that has the capacity to shape people's sense of the risks associated with vaccines, regardless of what kind of statistical information organizations such as the CDC try to throw at them.

Another site, “VaccinationDebate,” is much more explicit about the way in which it categorizes “us” versus “them.” The author of the website explains why there is “misinformation” about the safety of vaccines:

Firstly, they have you believe that there are all these germs out there that can kill you. Or if they don’t kill you, they can cause horrible complications and long term suffering. Secondly, they have you believe that no matter how healthy you are, if you contract one of these germs then you can easily develop the disease and die. In other words, they have you believe that health is no protection. Thirdly, they have you believe that your only means of protection, your only chances of survival, is through their drugs and vaccines that you have to buy. (Your taxes are paying for them.)58

It is not immediately clear who "they" are, but if we think about it for a minute, it seems that the "they" here are the same as the "they" created by the VacTruth website described earlier. In other words, "they" are the CDC, the FDA, the government more generally, pharmaceutical companies, physician organizations, and medical researchers, all rolled up into one common entity. The "they" label again helps create a firmer sense of who "we" are and provides an opportunity for group risk assessment that goes something like this: the diseases vaccines supposedly protect us against are not risky, but the vaccines themselves present an unmitigated and potentially fatal risk. As discussed earlier, the presence of a group dynamic likely helps solidify this kind of assessment.

We have put this risk perception problem into a social framework to emphasize the importance of community and social factors in resistance to scientific information. But all of the weird principles of human risk perception described in this chapter still hold whether we look at individuals or groups. Our brains are not built to perceive risk in a linear, additive way, even though that is how many real-world risks actually accumulate. Recognizing this is a crucial step in correcting our mistakes when we make health decisions and in planning interventions to help people make those decisions on a scientific basis.