Traffic: Why We Drive the Way We Do (and What It Says About Us) - Tom Vanderbilt (2008)
Chapter 9. Why You Shouldn’t Drive with a Beer-Drinking Divorced Doctor Named Fred on Super Bowl Sunday in a Pickup Truck in Rural Montana: What’s Risky on the Road and Why
How We Misunderstand the Risks of the Road
In a basement laboratory in the looming red-brick Henry Ford Hospital in Detroit, Michigan, a team of researchers has, for the past few years, been looking at what happens to our brains as we drive. The device that measures the faint magnetic fields the brain emits is too massive to fit inside of a car, so research subjects are instead studied in the hospital’s Neuromagnetism Laboratory, where they watch film clips of a car navigating through traffic. As I lay back on the cozy bed inside the magnetically shielded lab to get a feel for the procedure, Richard Young, a scientist with General Motors who leads the research team, told me, “Our biggest problem is people falling asleep in the bed.”
To keep people awake as they play passenger to the filmed driving, subjects are given a simple “event-detection task.” When a red light near the screen goes on, the subject, attached to a neuromagnetometer, presses a simulated brake pedal. This simple habit of braking in response to a red light (i.e., brake lights), something drivers do an estimated fifty thousand times a year, triggers a burst of activity in the brain. The visual cortex lights up about 80 to 110 milliseconds after the red signal comes on. This indicates that you have seen the signal. The left prefrontal lobe, an area of the brain linked to decision making, begins to buzz with activity. This is the microinstant during which you’re deciding what to do with the information you have acquired—here, the rather simple response of pressing the brake. It comes about 300 milliseconds before you actually do it. About 180 milliseconds before braking begins, the motor cortex sees action—your foot is about to be told to move. About 80 milliseconds after you have pressed the brake, the visual cortex is again activated. You’re registering that the red signal has been turned off.
The scientists are probing the neural pathways involved in what they call the “mind on the drive,” in part to learn what cell phone conversations and other activities do to our brains as we drive. But sometimes, as they watch these real-time movies of people’s brains in traffic, there are strange and unanticipated plot twists.
Once, while watching the real-time MEG (magnetoencephalography) readings of a subject, Young noticed a burst of brain activity, not during the braking event but during “normal” driving. “There was a spike. There were brain areas lighting up in the emotional cortex, the amygdala, the limbic cortex, the lower brain,” Young recalled. This hinted at more complex responses than what usually showed up in the fairly well-conditioned responses to braking or keeping the vehicle on the road at a certain speed. What was going on? Young compared the activity to the actual video of the drive. At the moment his brain went on the boil, the driver was passing a semitrailer. After the trial, Young asked the subject if he had noticed “anything unusual during the last run.” He had. According to Young, “The person said, ‘Oh yes, I was passing that eighteen-wheeler and every time I pass one of those things I get real nervous.’”
That small peek into the brain of the driver revealed a simple, if underappreciated, truth about driving: When we are in traffic, we all become on-the-fly risk analysts. We are endlessly making snap decisions in fragments of moments, about whether it is safe to turn in front of an oncoming car, about the right speed to travel on a curve, about how soon we should apply the brakes when we see a cluster of brake lights in the distance. We make these decisions not with some kind of mathematical probability in the back of our heads—I have a 97.5 percent chance of passing this car successfully—but with a complicated set of human tools. These could be cobbled together from the most primeval instincts lurking in the ancient brain, the experience of a lifetime of driving, or something we heard yesterday on the television news.
On the one hand, it was perfectly natural, normal, and wise for the driver in Detroit to show fear in the face of an eighteen-wheeler. Large trucks, from the point of view of a car, are dangerous. Because of the staggering differences in mass—trucks can weigh twenty to thirty times as much as a car—the simple physics of a collision is horrifically skewed against the car. When trucks and cars collide, nearly nine of ten times it’s the truck driver who walks away alive.
As the driver’s brain activity would seem to indicate, we know this on some instinctual level, as if our discomfort in driving next to a looming truck on a highway is some modern version of the moment our prehistoric ancestor felt the hairs on the back of his neck rise when confronted with a large predator. Indeed, the amygdala, one of the areas that lit up in the Detroit driver, is thought to be linked with fear. It can be activated even before the cognitive regions kick in—neuroscientists have described the amygdala as a kind of alarm that triggers our attention to things we should probably fear. And we all likely have proof of the dangerous nature of trucks. We have seen cars crumpled on the roadside. We’ve heard news stories of truck drivers, wired on stimulants, forced to drive the deregulated trucking industry’s increasingly long shifts. We can easily recall being tailgated or cut off by some crazy trucker.
Just one thing complicates this image of trucks as the biggest hazard on the road today: In most cases, when cars and trucks collide, the car bears the greater share of what are called “contributory factors.” This was the surprising conclusion that Daniel Blower, a researcher at the University of Michigan Transportation Research Institute, came to after sifting through two years’ worth of federal crash data.
It was a controversial finding. Blower, to begin with, had to determine that it did not simply stem from “survivor bias”: “The truck driver is the only one who survives these crashes eighty-five percent of the time,” he explained. “He’s the one who gets to tell the story. That’s what’s reflected in the police report.” So he dug deeper into the records, analyzing the relative position and motion of the vehicles before a crash. Instead of relying on drivers’ accounts, he looked at “unmistakable” physical evidence. “In certain crash types, like head-ons, the vehicle that crossed the center line is much more likely to have contributed to the crash than the vehicle that didn’t,” he said. “Similarly, in rear-end crashes, the striking vehicle is much more likely to have contributed to the crash in a major way than the vehicle that was struck.” After examining more than five thousand fatal truck-car crashes, Blower found that in 70 percent of cases, the driver of the car bore the sole contributing responsibility for the crash.
This hardly means trucks are not dangerous. But the reason trucks are dangerous seems to have more to do with the actions of car drivers combined with the physical characteristics of trucks (in head-on collisions, for example, they are obviously less able to get out of the way) and less to do with the actions of truck drivers. “The caricature that we have that the highways are thronged with fatigued, drug-addled truck drivers is, I think, just wrong,” Blower said. Certainly there are aggressive truck drivers and truckers jacked up on methamphetamine, but the more pressing problem, the evidence tells us, seems to be that car drivers do not fully understand the risk of heavy trucks as they drive in their presence. This is not something we are necessarily taught when we learn to drive. “In a light vehicle you are correct to be afraid of them, but it’s not because the drivers are disproportionately aggressive or bad drivers,” Blower said. “It’s because of physics, truck design, the different performance characteristics. You can make a mistake around a Geo Metro and live to tell about it. You make that same mistake around a truck and you could easily be dead.”
What all this seems to suggest is that car drivers have less to fear from trucks than from what they themselves do around trucks. I had a glimpse of this a few years back when I rode in an eighteen-wheel tractor-trailer for the first time, watching in horror as cars darted in front of the truck with dangerous proximity, sometimes disappearing from sight beneath the truck’s long, high hood. So why does it seem that virtually everyone, like my Latin-teacher friend in the Prologue, has some horror story about crazy truckers?
One possible answer goes back to the spike in brain activity of the Detroit driver. He was afraid, probably before he even knew why. The size of trucks makes most of us nervous—and rightfully so. When we have a close brush with a truck or we see the horrific results of a crash between a car and a truck, it undoubtedly leaves a greater impression on our consciousness, which can skew our view of the world. “Being tailgated by a big truck is worth getting tailgated by fifty Geo Metros,” as Blower put it. “It stays with you, and you generalize with that.” (Studies have suggested that people think there are more trucks on the road than is actually the case.)
Here’s the conundrum: If, on both an instinctual level and a more intellectual level, the drivers of cars fear trucks, why do car drivers, in so many cases, act so dangerously around them? The answer, as we are about to see, is that on the road we make imperfect guesses as to exactly what is risky and why, and we act on those biases in ways we may not even be aware of.
Should I Stay or Should I Go? Why Risk on the Road Is So Complicated
Psychologists have suggested that we generally think about risk in two different ways. One way, called “risk as analysis,” involves reason, logic, and careful consideration about the consequences of choices. This is what we do when we tell ourselves, on the way to the airport with a nervous stomach, “Statistically, flying is much safer than driving.”
The second way has been called “risk as feelings.” This is why you have the nervous stomach in the first place. Perhaps it’s the act of leaving the ground: Flying just seems more dangerous than driving, even though you keep telling yourself it isn’t. Studies have suggested that we tend to lean more on “risk as feelings” when we have less time to make a decision, which seems like a survival instinct. It was smart of the Detroit driver to feel risk from the truck next to him, but the instinctual fear response doesn’t always help us. In collisions between cars and deer, for example, the greatest risk to the driver comes in trying to avoid hitting the animal. No one with a conscience wants to hit a deer, but we may also be fooled into thinking that the deer itself presents the greatest hazard. Hence the traffic signs that say DON’T VEER WHEN YOU SEE A DEER.
One good reason why we rely on our feelings in thinking about risk is that “risk as analysis” is an incredibly complex and daunting process, more familiar to mathematicians and actuaries than the average driver. Even when we’re given actual probabilities of risk on the road, often the picture just gets muddier. Take the simple question of whether driving is safe or dangerous. Consider two sets of statistics: For every 100 million miles that are driven in vehicles in the United States, there are 1.3 deaths. One hundred million miles is a massive distance, the rough equivalent of crisscrossing the country more than thirty thousand times. Now consider another number: If you drive an average of 15,500 miles per year, as many Americans do, there is a roughly 1 in 100 chance you’ll die in a fatal car crash over a lifetime of 50 years of driving.
To most people, the first statistic sounds a whole lot better than the second. Each trip taken is incredibly safe. On an average drive to work or the mall, you’d have a 1 in 100 million chance of dying in a car crash. Over a lifetime of trips, however, it doesn’t sound as good: 1 in 100. How do you know if this one trip is going to be the trip? Psychologists, as you may suspect, have found that we are more sensitive to the latter sorts of statistics. When subjects in one study were given odds, similar to the aforementioned ones, of dying in a car crash on a “per trip” versus a “per lifetime” basis, more people said they were in favor of seat-belt laws when given the lifetime probability.
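The gulf between the two framings is simply compounding at work, and a few lines of arithmetic make it concrete. The sketch below is illustrative only, using the figures quoted above (1.3 deaths per 100 million vehicle miles, 15,500 miles a year, a 50-year driving lifetime); the inputs are the chapter’s, not an independent estimate.

```python
# Back-of-envelope: how a tiny per-mile risk compounds over a driving lifetime.
# All figures are the ones quoted in the text.

deaths_per_mile = 1.3 / 100_000_000   # 1.3 deaths per 100 million vehicle miles
miles_per_year = 15_500               # average annual mileage cited above
years_driving = 50                    # a 50-year driving lifetime

lifetime_miles = miles_per_year * years_driving   # 775,000 miles

# Probability of surviving every single mile, then take the complement:
p_death = 1 - (1 - deaths_per_mile) ** lifetime_miles

print(f"Lifetime miles driven: {lifetime_miles:,}")
print(f"Lifetime fatality risk: {p_death:.4f}  (about 1 in {round(1 / p_death)})")
```

The per-mile odds are so small that the intuition "this trip is safe" is right every time, and yet 775,000 such miles still accumulate to roughly the 1-in-100 lifetime figure the chapter cites.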
This is why, it has been argued, it has long been difficult to convince people to drive in a safer manner. Each safe trip we take reinforces the image of a safe trip. It sometimes hardly seems worth the bother to wear a seat belt for a short trip to a local store, given that the odds are so low. But events that the odds say will almost certainly never happen have a strange way of happening sometimes (risk scholars call these moments “black swans”). Or, perhaps more accurately, when they do happen we are utterly unprepared for them—suddenly, there’s a train at the always empty railroad crossing.
The risk of driving can be framed in several ways. One way is that most people get through a lifetime without a fatal car crash. Another way, as described by one study, is that “traffic fatalities are by far the most important contributor to the danger of leaving home.” If you considered only the first line of thinking, you might drive without much of a sense of risk. If you listened to only the second, you might never again get in a car. There is a built-in dilemma to how societies think about the risk of driving; driving is relatively safe, considering how much it is done, but it could be much safer. How much safer? If the number of deaths on the road were held to the acceptable-risk standards that the U.S. Occupational Safety and Health Administration maintains for service-industry fatalities, it has been estimated, there would be just under four thousand deaths a year; instead, the number is eleven times that. Does telling people it is dangerous make it safer?
One often hears, on television or the radio, such slogans as “Every fifteen minutes, a driver is killed in an alcohol-related crash” or “Every thirteen minutes, someone dies in a fatal car crash.” This is meant, presumably, to suggest not just the magnitude of the problem but the idea that a fatal crash can happen to anyone, anywhere. And it can. Yet even when these slogans leave out the words “on average,” as they often do, we still do not take it to mean that someone is actually dying, like clockwork, every fifteen minutes.
These kinds of averages obscure the startling extent to which risk on the road is not average. Take the late-night hours on weekends. How dangerous are they? In an average year, more people were killed in the United States on Saturday and Sunday from midnight to three a.m. than all those who were killed from midnight to three a.m. the rest of the week. In other words, just two nights accounted for a majority of the week’s deaths in that time period. On Sunday mornings from twelve a.m. to three a.m., there was not one driver dying every thirteen minutes but one driver dying every seven minutes. By contrast, on Wednesday mornings from three a.m. to six a.m., a driver was killed every thirty-two minutes.
Time of day has a huge influence on what kinds of crashes occur. The average driver faces the highest risk of a crash during the morning and evening rush hours, simply because the volume of traffic is highest. But fatal crashes occur much less often during rush hours; one study found that 8 of every 1,000 crashes that happened outside the peak hours were fatal, while during the rush hour the number dropped to 3 out of every 1,000. During the weekdays, one theory goes, a kind of “commuters’ code” is in effect. The roads are filled with people going to work, driving in heavy congestion (one of the best road-safety measures, with respect to fatalities), by and large sober. The morning rush hour in the United States is twice as safe as the evening rush hour, in terms of fatal and non-fatal crashes. In the afternoon, the roads get more crowded with drivers out shopping, picking up the kids or the dry cleaning. Drivers are also more likely to have had a drink or two. The “afternoon dip,” or the circadian fatigue that typically sets in around two p.m., also raises the crash risk.
What’s so striking about the massive numbers of fatalities on weekend mornings is the fact that so few people are on the roads, and so many—estimates are as high as 25 percent—have been drinking. Or think of the Fourth of July, one of the busiest travel days in the country and also, statistically, the most dangerous day to be on the road. It isn’t simply that more people are out driving, in which case more fatalities would be expected—and thus the day would not necessarily be more dangerous in terms of crash rate. It has more to do with what people are doing on the Fourth: Studies have shown there are more alcohol-related crashes on the Fourth of July than on the same days the week before or after—and, as it happens, many more than during any other holiday.
What’s the actual risk imposed by a drunk driver, and what should the penalty be to offset that risk? The economists Steven D. Levitt and Jack Porter have argued that legally drunk drivers between the hours of eight p.m. and five a.m. are thirteen times more likely than sober drivers to cause a fatal crash, and those with legally acceptable amounts of alcohol are seven times more likely. Of the 11,000 drunk-driving fatalities in the period they studied, the majority—8,000—were the drivers and the passengers, while 3,000 were other drivers (the vast majority of whom were sober). Levitt and Porter argue that the appropriate fine for drunk driving in the United States, tallying up the externalities that it causes, should be about $8,000.
Risk is not distributed randomly on the road. In traffic, the roulette wheel is loaded. Who you are, where you are, how old you are, how you are driving, when you are driving, and what you are driving all exert their forces on the spinning wheel. Some of these are as you might expect; some may surprise you.
Imagine, if you will, Fred, the pickup-driving divorced Montana doctor out for a spin after the Super Bowl who is mentioned in this chapter’s title. Obviously, Fred is a fictional creation, and even if he did exist there’d be no way to judge the actual risk of driving with him. But each of the little things about Fred, and the way those things interact, play their own part in building a profile of Fred’s risk on the road.
The most important risk factor, one that is subtly implicated in all the others, is speed. In a crash, the risk of dying rises with speed. This is common sense, and has been demonstrated in any number of studies. In a crash at 50 miles per hour, you’re fifteen times more likely to die than in a crash at 25 miles per hour—not twice as likely, as you might innocently expect from the doubling of the speed. The relationships are not proportional but exponential: Risk begins to accelerate much faster than speed. A crash when you’re driving 35 miles per hour causes a third more frontal damage than one where you’re doing 30 miles per hour.
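One reason risk outruns speed is basic physics: the energy a crash must dissipate grows with the *square* of speed, so doubling your speed quadruples the energy even before human fragility enters the picture. This hedged sketch compares that squared-energy growth with the fifteenfold fatality multiplier cited above; it is illustrative arithmetic, not a crash model.

```python
# Crash energy grows with the square of speed (mass cancels out of the ratio),
# while the fatality figures quoted in the text grow faster still.

def kinetic_energy_ratio(v1_mph: float, v2_mph: float) -> float:
    """Ratio of kinetic energies at two speeds for the same vehicle."""
    return (v2_mph / v1_mph) ** 2

# Doubling speed from 25 to 50 mph quadruples the energy to be dissipated...
print(kinetic_energy_ratio(25, 50))   # 4.0

# ...yet the chapter's cited fatality multiplier for that same doubling is ~15x:
# the human body's tolerance gives out faster than the energy curve alone suggests.
```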
Somewhat more controversial is the relationship between speed and the potential for a crash. It is known that drivers who have more speeding violations tend to get into more crashes. But studies have also looked at the speeds of vehicles that crashed on a given road, compared them to the speeds of vehicles that did not crash, and tried to figure out how speed affects the likelihood that one will crash. (One problem is that it’s extremely hard to tell how fast cars in crashes were actually going.) Some rough guidelines have been offered. An Australian study found that for a mean speed—not a speed limit—of 60 kilometers per hour (about 37 miles per hour), the risk of a crash doubled for every additional 5 kilometers per hour.
In 1964, one of the first and most famous studies of crash risk based on speed was published, giving rise to the so-called Solomon curve, after its author, David Solomon, a researcher with the U.S. Federal Highway Administration. Crash rates, Solomon found after examining crash records on various sections of rural highway, seemed to follow a U-shaped curve: They were lowest for drivers traveling at the median speed and sloped upward for those going more or less than the median speed. Most strikingly, Solomon reported that “low speed drivers are more likely to be involved in accidents than relatively high speed drivers.”
Solomon’s finding, despite being almost a half century old, has become a sort of mythic (and misunderstood) touchstone in the speed-limit debate, a hoary banner waved by those arguing in favor of higher speed limits. It’s not the actual speed itself that’s the safety problem, they insist, it’s speed variance. If those slower drivers would just get up to speed, the roads would flow in smooth harmony. It’s not speed that kills, it’s variance. (This belief, studies have indicated, is most strongly held by young males—who are, after all, experts, given that they get in the most crashes.) And what causes the most variance? Speed limits that are too low!
Dear reader, much as I—as guilty as anyone of an occasional craving for speed—would like to believe this, the arguments against it are too compelling. For one, it assumes that the drivers who are going slow want to be driving slowly, and are not simply slowing for congested traffic, or entering a road from a turn, when they are suddenly hit by one of those drivers traveling the mean speed or higher. Solomon himself acknowledged (but downplayed) that these kinds of events might account for nearly half of the rear-end crashes at low speeds. Studies have found that a majority of rear-end crashes involved a stopped vehicle, which presumably had stopped for a good reason—and not to get in the way of the would-be speed maven behind him. Further, Gary Davis, an engineering professor at the University of Minnesota, proving yet again that statistics are one of the most dangerous things about traffic, has suggested there is a disconnect—what statisticians call an “ecological fallacy”—at work in speed-variance studies. Individual risk is conflated with the “aggregate” risk, even if in reality, he suggests, what holds for the whole group might not hold for individuals.
In pure traffic-engineering theory, a world that really exists only on computer screens and in the dreams of traffic engineers and bears little resemblance to how drivers actually behave, a highway of cars all flowing at the same speed is a good thing. The fewer cars you overtake, the lower your chance of hitting someone or being hit. But this requires a world without cars slowing to change lanes to enter the highway, because they are momentarily lost, or because they’re hitting the tail end of a traffic jam. In any case, if faster cars being put at risk by slower cars were the mythical problem some have made it out to be, highway carnage would be dominated by cars trying to pass—but in fact, one study found that in 1996, a mere 5 percent of fatal crashes involved two vehicles traveling in the same direction. A much more common fatal crash is a driver moving at high speed leaving the road and hitting an object that isn’t moving at all. That is a case where speed variance really does kill.
Let us move on to perhaps the oddest risk factor: Super Bowl Sunday. In one study, researchers compared crash data with the start and end times of all prior Super Bowl broadcasts. They divided all the Super Bowl Sundays into three intervals (before, during, and after). They then compared Super Bowl Sundays to non-Super Bowl Sundays. They found that in the before-the-game period, there was no discernible change in fatalities. During the game, when presumably more people would be off the roads, the fatal crash rate was 11 percent less than on a normal Sunday. After the game, they reported a relative increase in fatalities of 41 percent. The relative risks were higher in the places whose teams had lost.
The primary reason for the increased postgame risk is one that I have already discussed: drinking. Nearly twenty times more beer is drunk in total on Super Bowl Sunday than on an average day. Fred’s risk would obviously be influenced by how many beers he had downed (beer, at least in the United States, is what most drivers pulled over for DUIs have been drinking) and the other factors that determine blood alcohol concentration (BAC). Increases in crash risk, as a number of studies have shown, begin to kick in with as little as .02 percent BAC level, start to crest significantly at .05 percent, and spike sharply at .08 to .1 percent.
Determining crash risk based on a person’s BAC depends, of course, on the person. A famous study in Grand Rapids, Michigan, in the 1960s (one that would help establish the legal BAC limits in many countries), which pulled over drivers at random, found that drivers who had a .01 to .04 percent BAC level actually had fewer crashes than drivers with a BAC of zero. This so-called Grand Rapids dip led to the controversial speculation that drivers who had had “just a few” were more aware of the risks of driving, or of getting pulled over, and so drove more safely; others argued that regular drinkers were more capable of “handling” a small intake.
The Grand Rapids dip has shown up in other studies, but it has been downplayed as another statistical fallacy—the “zero BAC” group in Michigan, for example, had more younger and older drivers, who are statistically less safe. Even critics of the study, however, noted that people who reported drinking with greater frequency had safer driving records than their teetotaler counterparts at every level of BAC, including zero. This does not mean that drinkers are better drivers per se, or that having a beer makes you a better driver. But the question of what makes a person a safe driver is more complicated than the mere absence of alcohol. As Leonard Evans notes, the effects of alcohol on driver performance are well known, but the effects of alcohol on driver behavior are not empirically predictable. Here is where the tangled paths of the cautious driver who has had a few, carefully obeying the speed limit, and the distracted sober driver, blazing over the limit and talking on the phone, intersect. Neither may be driving as well as they think they are, and one’s poorer reflexes may be mirrored by the other’s slower time to notice a hazard. Only one is demonized, but they’re both dangerous.
The second key risk is Fred himself. Not because he is Fred, for there is no evidence that people named Fred get in more crashes than people named Max or Jerry. It is the fact that Fred is male. Across every age group in the United States, men are more likely than women to be involved in fatal crashes—in fact, in the average year, more than twice as many men as women are likely to be killed in a car, even though there are more women than men in the country. The global ratio is even higher. Men do drive more, but after that difference is taken into account, their fatal crash rates are still higher.
According to estimates by researchers at Carnegie Mellon University, men die at the rate of 1.3 deaths per 100 million miles; for women the rate is .73. Men die at the rate of 14.51 deaths per 100 million trips, while for women it is 6.55. And crucially, men face .70 deaths per 100 million minutes, while for women the rate is .36. It may be true that men drive more, and drive for longer periods when they do drive, but this does not change the fact that for each minute they’re on the road, each mile they drive, and each trip they take, they are more likely to be killed—and to kill others—than women.
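The Carnegie Mellon figures reduce to a consistent male-to-female ratio however exposure is measured, which is the point of the paragraph. This brief sketch just computes those ratios from the rates quoted above.

```python
# Male vs. female fatality rates from the Carnegie Mellon estimates cited
# in the text, all per 100 million units of exposure: (men, women).

rates = {
    "per mile":   (1.30, 0.73),
    "per trip":   (14.51, 6.55),
    "per minute": (0.70, 0.36),
}

for measure, (men, women) in rates.items():
    print(f"{measure}: men are {men / women:.1f}x as likely to die")
```

Whichever denominator you choose—miles, trips, or minutes—the ratio lands between roughly 1.8 and 2.2, so the "men drive more" objection cannot explain the gap away.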
It is tempting to use this information to make some point about whether men or women are “better drivers,” but that’s complicated by the fact that in the United States, women get into nonfatal crashes at a higher rate than men. This might be at least partially the result of men driving more on roads that are more prone to fatal crashes (e.g., rural high-speed two-lane roads). What can be argued is that men drive more aggressively than women. Men may or may not be better drivers than women, but they seem to die more often trying to prove that they are.
As a gender, men seem particularly troubled by two potent compounds: alcohol and testosterone. Men are twice as likely as women to be involved in an alcohol-related fatal crash. They’re more likely to drink, to drink more, and to drive more after they drink. On the testosterone side, men are less likely to wear seat belts; and by just about every measure, they drive more aggressively. Men do things like ride motorcycles more often than women, an activity that is twenty-two times more likely to result in death than driving a car. Male motorcyclists, from Vietnam to Greece to the United States, are less likely than women to wear a helmet. As we all know, alcohol and testosterone mix in unpleasant ways, so motorcyclists who have been drinking are less likely to wear helmets than those who have not, just as male drivers who have been drinking are less likely to wear seat belts than those who are sober.
The fact that Fred is divorced puts him in a riskier pool. A French study that looked into the experiences of some thirteen thousand company employees over eight years found that a recent divorce or separation was linked to a fourfold increase in the risk of a crash that could be at least partially attributed to the driver. One could hypothesize many reasons: There’s the emotional stress (as John Hiatt once sang in a breakup song, “Don’t think about her while you’re trying to drive”), and perhaps more drinking. Or there may be lifestyle changes, like driving more to visit the kids on weekends. Perhaps people who get divorced are simply the type of people who take more risks. Fred might take some comfort, however, from a New Zealand study that found that people who have never been married have an even higher crash risk than those who are divorced. (The study took into account age and gender differences.)
Fred may not have a life partner, but he should be glad if you chose to join him in his truck: Passengers seem to be a life-saving device. Studies from Spain to California have come to the conclusion that a driver has a lower chance of being in a fatal crash if there’s a passenger. This holds particularly true for middle-aged drivers—especially when the passenger is a woman and the driver is a man. (Whether this stems from men looking out for women or women telling men to drive more safely is open to debate.)
The exception here is teenage drivers. Teens are less likely to be wearing seat belts and more likely to have been drinking when there are passengers in the car. Many studies have found that teen drivers are more likely to crash with passengers on board, which is why, in many places, teens are restricted from carrying passengers of their own age during their first few years of driving.
Researchers are beginning to uncover fascinating things about how that risk plays out. A study that looked at the drivers exiting the parking lot at ten different high schools found that teenage drivers seemed to drive faster and follow cars at closer distances than other drivers did. Males drove more riskily than females. This is common knowledge, verified by insurance rates. But their risk-taking varied: Male drivers drove faster and followed closer when they had a male riding shotgun. When they had a female in the front seat, they actually behaved less riskily, and they were safer still when they drove by themselves (a pattern that also held for female drivers).
What seems to be a need to impress in the presence of males turns into a protective impulse when a female passenger (possibly a girlfriend) is in the car—or it could be that the female passenger serves as the voice of reason. This “girlfriend effect” seems to take root early and persist through later life. It need not be a romantic partner: The Israel Defense Forces, in an effort to reduce road deaths for soldiers on leave, trains female soldiers (dubbed “angels”) to act as a “calming” influence on their male comrades.
Now consider where Fred is driving. What’s the matter with Montana? In 2005, 205 people were killed on Montana’s roads, roughly one-third the number that were killed in New Jersey. But Montana has just under one-tenth the population of New Jersey. People clearly drive more in Montana, but even adjusting for what is known as VMT (or “vehicle miles traveled”), Montana drivers are still twice as likely as New Jersey drivers to die on the roads. The big culprit is alcohol: Montana drivers were nearly three times as likely as New Jersey drivers to be involved in an alcohol-related fatal crash. Montana also has higher speed limits than New Jersey, and fewer chances to get caught violating traffic laws. And, most importantly, most Montana roads are rural.
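The adjustment the text describes is a simple normalization: raw death counts mislead unless they are divided by an exposure measure such as population or vehicle miles traveled. A minimal sketch of the calculation follows; the deaths come from the text, but the population and VMT figures below are hypothetical round numbers chosen only to illustrate the method, not the actual state data.

```python
# Sketch of how traffic fatality rates are normalized by exposure.
# Deaths are from the text; population and VMT figures are
# hypothetical placeholders used purely for illustration.

def rate_per_100k_people(deaths, population):
    """Fatalities per 100,000 residents (per-capita rate)."""
    return deaths / population * 100_000

def rate_per_100m_vmt(deaths, vmt):
    """Fatalities per 100 million vehicle miles traveled (VMT)."""
    return deaths / vmt * 100_000_000

mt_deaths, nj_deaths = 205, 3 * 205      # text: Montana ~ one-third of New Jersey
mt_pop, nj_pop = 900_000, 9_000_000      # hypothetical: MT ~ one-tenth NJ
mt_vmt, nj_vmt = 11e9, 75e9              # hypothetical annual vehicle miles

print(rate_per_100k_people(mt_deaths, mt_pop))   # far higher per capita
print(rate_per_100m_vmt(mt_deaths, mt_vmt))      # Montana, per mile driven
print(rate_per_100m_vmt(nj_deaths, nj_vmt))      # New Jersey, per mile driven
```

With these placeholder figures, Montana’s per-mile rate comes out roughly double New Jersey’s, consistent with the “twice as likely” comparison above even after the greater rural mileage is accounted for.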
There is, in theory, nothing nicer than a drive in the country, away from the “crazy traffic” of the city. But there is also nothing more dangerous. We would all do well to heed what the sign says: IT’S GOD’S COUNTRY, DON’T DRIVE LIKE HELL THROUGH IT. Rural, noninterstate roads have a death rate more than two and a half times higher than all other roads—even after adjusting for the fewer vehicles found on rural roads. Taking a curve on a rural, noninterstate road is more than six times as dangerous as doing so on any other road. Most crashes involve single cars leaving the roadway, which suggests poorly marked roads, high speeds, fatigue or falling asleep, or alcohol—or some combination of any or all of these. When crashes do happen, medical help is often far away.
In Fred’s case, he is the medical assistance. But what of the fact that he is a doctor? Why should that be a risk? Doctors are usually well-educated, affluent, upstanding members of the community; they drive expensive cars in good condition. But a study by Quality Planning Corporation, a San Francisco-based insurance research firm, found doctors to have the second-highest crash risk in an eight-month sample of a million drivers, just after students (whose risk is largely influenced by their young age). Why is that? Are doctors overconfident, type A drivers racing from open-heart surgery to the golf course?
One simple contributing factor may be that, in the United States at least, many doctors are male (nearly 75 percent in 2005). But firefighters and pilots are usually male as well, and those two professions were at the bottom of the risk list. Firefighters spend a lot of time in fire stations, not on the road, and pilots spend much of their time in the air. Exposure matters, which is seemingly why real estate agents, always driving from house to house, showed up high on the list. (Architects ranked high as well, prompting QPC’s vice president to speculate that they’re often distracted by looking at buildings!) Doctors drive a lot, often in urban settings, often with a certain urgency, perhaps dispensing advice via cell phone. Most important, they may also be tired. A report in the New England Journal of Medicine suggested that each extended shift interns at Harvard Medical School pulled in a given month raised their monthly crash risk by 9.1 percent. The more shifts they worked, the greater the risk that they would fall asleep while stopped in traffic, or even while driving.
Now let’s talk about Dr. Fred’s vehicle of choice, the pickup truck. It’s an increasingly popular vehicle in the United States. The number of households owning pickups rose by nearly 50 percent from 1977 to 1990, and pickup registrations continue to rise every year. It is also the most dangerous vehicle on the road: More people in the United States die in pickups per 100 million vehicles registered than in any other kind of vehicle.
Pickups also impose the most risk on drivers of other vehicles. One study showed that the Ford F-350 presents nearly seven times the risk to other cars as the Dodge Caravan, a minivan. From a vehicular point of view, pickups are high, heavy, and have very stiff front ends—meaning other vehicles have to absorb more energy in a crash. When drivers of pickups crash into other cars, they die at a lower rate than the drivers of smaller cars. Because of simple physics, larger vehicles, with larger crush zones and, often, higher-quality materials, are better able to sustain a collision.
Though not always. As some crash tests have shown, weight is often no help at all when a vehicle hits a fixed object like a wall or a large tree. Marc Ross, a physicist at the University of Michigan, told me that “mass sort of drops out of the calculation for a fixed barrier.” The car’s design—its ability to absorb its own kinetic energy—is as important as its size. In crash testing done several years ago by the Insurance Institute for Highway Safety, vehicles with crash-test dummies were sent into a barrier at 40 miles per hour. Consider two vehicles: the big and brawny Ford F-150 pickup truck, weighing in at nearly 5,000 pounds, and the tiny Mini Cooper, at just under 2,500 pounds. Which would you have rather been in? The test photos make the answer clear: the Mini Cooper. The Ford, despite having more space between the obstacle and the driver, saw a “major collapse of the occupant compartment” that “left little survival space for the driver.” In the Mini, meanwhile, “the dummy’s position in relation to the steering wheel and instrument panel after the crash test indicates that the driver’s survival space was maintained very well.”
As Malcolm Gladwell argued in the New Yorker, larger, heavier vehicles, which are more difficult to maneuver and slower to stop, may also make it harder for a driver to avoid a crash in the first place. What complicates this is the finding that, in the United States, small cars are involved in more single-car fatal crashes than large cars—and it’s single-car crashes that the greater maneuverability of smaller, lighter cars should help prevent. Smaller cars may be more maneuverable, but they also tend to be driven by riskier younger drivers, while sports cars that handle well may be “self-selected” by more adventurous drivers. Researchers with the National Highway Traffic Safety Administration raised another question: Would the higher maneuverability of smaller cars lead drivers to take more risks? “The quicker response of light vehicles,” they argued, “may give the average driver yet more opportunity to blunder.”
Risk can be deceiving. The answer to “What are the riskiest vehicles on the road?” is more complicated than it seems. Assigning risk based purely on “vehicle factors” is limiting, because it neglects the idea of who is driving the vehicle and how it is being driven. Leonard Evans, the former GM researcher, notes that crash rates are higher for two-door cars than four-door cars (up to a certain weight, where the rates become equal). “The believers in vehicle factors would say, ‘We’ve got it, you’ve just got to weld another couple doors on the vehicle and you’ve got a safe car.’”
Those two doors are often not an engineering distinction, but a lifestyle distinction: the difference, say, between a two-door Acura RSX and a four-door Toyota Corolla. In the United States from 2002 through 2005, the death rate in the “fast and furious” Acura was more than twice as high as that in the sleepy Corolla. In terms of weight, the two vehicles are virtually identical. The different crash rate owes more to the drivers of four-doors and two-doors than to the cars themselves.
The idea that who is driving (and how) affects the risk of what is being driven is well depicted in the case of the Ford Crown Victoria and the Mercury Marquis, as Marc Ross and Tom Wenzel have pointed out. The Crown Vic and its corporate cousin the Marquis, large, staid V-8 sedans both, are basically the same car—one repair manual covers both models. They both pose the same relative risk to their drivers, which is no surprise given their similarities. The Crown Vic, however, statistically poses more risk to others. Why is that? The Crown Victoria is a popular police car, meaning that it’s involved in a lot more dangerous high-speed pursuits than the Marquis. (Crown Vics, it must be said, are also the taxi of choice in New York City.)
There are “safer” cars in the hands of dangerous drivers, and “more dangerous” cars in the hands of safe drivers. Small cars such as subcompacts do pose a greater risk for their occupants if involved in a crash—although more-expensive subcompacts are less risky than cheaper subcompacts—but subcompacts also tend to be driven by people (e.g., younger drivers) with higher risks of getting into a crash, because of “behavioral factors.” Still, age is just one behavioral factor, and it interacts with the type of car being driven. As I will discuss in the next section, the drivers of small cars may actually act in safer ways because of the size of the car. Are large passenger cars statistically the safest because they pose less of a rollover risk than SUVs or because they weigh more than small cars? Or is it because they tend to be driven by the statistically safest demographic?
Returning to Fred and his pickup truck: It’s hard to tell where the risks of one end and those of the other begin. Men tend to drive pickup trucks more than women, men tend to wear seat belts less often, men who live in rural areas are more likely to drive pickup trucks without seat belts, and, after motorcyclists, the drivers of pickup trucks are the most likely to have been drinking when involved in a fatal crash. These would be only a handful of the potential risk factors—an Australian study, for example, found that black cars were more likely to crash than white cars. Is it visibility, or the types of people who drive black cars versus white ones? We all know no one washes a rental car, but are rental cars driven more recklessly? (There is some evidence to suggest so.) A study in Israel found that fewer drivers died on the roads in the first and second days after a suicide-bombing attack but then tracked an increase in danger on the third day. Are people simply staying off the roads in the aftermath, then rejoining them en masse? (Or does the aftereffect of terror cause people to act with less regard for life?)
As the risk expert John Adams likes to say, understanding risk is not rocket science—it’s more complicated. Looking at statistics from the United Kingdom, he notes that a young man is 100 times more likely than a middle-aged woman to be killed in traffic. Someone driving on Sunday morning at three a.m. has a risk 134 times greater than someone driving at ten a.m. on Sunday. Someone with a personality disorder is 10 times more likely to have a serious crash, while someone 2.9 times over the BAC limit would be 20 times more likely than a sober driver to crash.
“So if these were independent variables,” he told me, “you could multiply them and come to the conclusion that a disturbed, drunken young man on the road on a Sunday morning was about 2.5 million times more likely to have a serious accident than a normal, sober middle-aged woman driving to church seven hours later.” They are, however, not independent. “There are proportionally more disturbed, drunken young men on the road at three o’clock on a Sunday morning,” Adams noted. Now add other factors. Were the car’s tires in good shape? Was it foggy? Was the driver tired or awake? “Once you start trying to imagine all the factors,” Adams said, “that might not be an exaggeration of the disparity between one person’s risk and another person’s risk.” He used this example to “have a go” at what he calls the “Richter scales” of risk, which show, for instance, that a person has a 1 in 8,000 chance of dying or being seriously injured in a car crash, and a 1 in 25,000 chance of the same thing happening while playing soccer. “The purveyors of these tables say they produce them to guide the lay public in making decisions about risk. The lay public is hopeless at making use of numbers like this.”
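Adams’s back-of-envelope figure follows from simply multiplying the four ratios he cites, a step worth making explicit precisely because—as he immediately cautions—the factors are not actually independent:

```python
# Naive multiplication of Adams's four risk ratios, treating them
# (incorrectly, as he notes) as independent variables.
young_male = 100       # young man vs. middle-aged woman
three_am_sunday = 134  # Sunday 3 a.m. vs. Sunday 10 a.m.
personality = 10       # personality disorder vs. none
drunk = 20             # 2.9 times the BAC limit vs. sober

combined = young_male * three_am_sunday * personality * drunk
print(combined)  # 2,680,000 -- Adams's "about 2.5 million"
```

The naive product, 2.68 million, is the source of the rounded “about 2.5 million” figure; the overlap between the categories (drunk young men are overrepresented at three a.m.) is exactly why the true disparity cannot be computed this way.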
There is one solid bit of advice that could be dispensed regarding whether you should take a trip with the fictional Fred: Ride in the backseat (if he had one, that is). The fatality risk in the backseat is 26 percent lower than in the front. The backseat is safer than air bags. But you run the risk of offending Fred.
The Risks of Safety
Be wary then; best safety lies in fear.
—William Shakespeare, Hamlet
In the 1950s, when car fatalities in the United States were approaching their zenith, an article in the Journal of the American Medical Association argued that the “elimination of the mechanically hazardous features of the interior construction”—for example, metal dashboards and rigid steering columns—would prevent nearly 75 percent of the annual road fatalities, saving some 28,500 lives.
Car companies were once rightly castigated for trying to shift the blame for traffic fatalities to the “nut behind the wheel.” And in the decades since, in response to public outcry and the ensuing regulations, the insides of cars have been made radically safer. In the United States (and most other places), fewer people in cars die or are injured now than in the 1960s, even though more people drive more miles. But in an oft-repeated pattern with safety devices from seat belts to air bags, the actual drop in fatalities did not live up to the early hopes. Consider the so-called chimsil. The term is slang for “center high-mounted stop lamp” (CHMSL), meaning the third rear brake light that became mandatory on cars in the 1980s, after decades of study.
On paper at least, the chimsil sounded like a great idea. It would give drivers more information that the car ahead was braking. Unlike brake lights, which go from one shade of red to a brighter shade of red (some engineers have argued that an outright change in colors would make more sense), the chimsil would illuminate only during braking. Drivers scanning through the windshield of the car ahead of them to gauge traffic would have more information. Tests had shown that high-mounted lamps improved reaction times. Experts predicted that the lamps would help reduce certain types of crashes, particularly rear-end collisions. Early studies, based on a trial that equipped some cars in taxi fleets with the lights, indicated that these incidents could be cut by 50 percent. Later estimates, however, dropped the benefit to around 15 percent. Studies now estimate that the chimsil has “reached a plateau” of reducing rear-end crashes by 4.3 percent. This arguably justifies the effort and cost of having them installed, but the chimsil clearly has not had the effect for which its inventors had hoped.
Similar hopes greeted the arrival of the antilock braking system, or ABS, which helps avoid “locked brakes” and allows for greater steering control during braking, particularly in wet conditions. But problems arose. A famous, well-controlled study of taxi drivers in Munich, Germany, found that cars equipped with ABS drove faster, and closer to other vehicles, than those without. They also got into more crashes than cars without ABS. Other studies suggested that drivers with ABS were less likely to rear-end someone but more likely to be rear-ended by someone else.
Were drivers trading a feeling of greater safety for more risk? Perhaps they were simply swapping collisions with other vehicles for potentially more dangerous “single-vehicle road-departure” crashes—studies on test tracks have shown that drivers in ABS-equipped cars more often veered off the road when trying to avoid a crash than non-ABS drivers did. Other studies revealed that many drivers didn’t know how to use ABS brakes correctly. Rather than exploiting ABS to drive more aggressively, they may have been braking the wrong way. Finally, drivers with ABS may simply have been racking up more miles. Whatever the case, a 1994 report by the National Highway Traffic Safety Administration concluded that the “overall, net effect of ABS” on crashes—fatal and otherwise—was “close to zero.” (The reason why is still rather a mystery, as the Insurance Institute for Highway Safety concluded in 2000: “The poor early experience of cars with antilocks has never been explained.”)
There always seems to be something else to protect us on the horizon. The latest supposed silver bullet for traffic safety is electronic stability control, the rollover-busting technology that, it is said, can help save nearly ten thousand lives per year. It would be a good thing if it did, but if history is a guide, it will not.
Why do these changes in safety never seem to have the predicted impact? Is it just overambitious forecasting? The most troublesome possible answer, one that has been haunting traffic safety for decades, suggests that, as with the roads in Chapter 7, the safer cars get, the more risks drivers choose to take.
While this idea has been around in one form or another since the early days of the automobile—indeed, it was used to argue against railroad safety improvements—it was most famously, and controversially, raised in a 1976 article by Sam Peltzman, an economist at the University of Chicago. Describing what has since become known as the “Peltzman effect,” he argued that despite the fact that a host of new safety technologies—most notably, the seat belt—had become legally required in new cars, the roads were no safer. “Auto safety regulation,” he concluded, “has not affected the highway death rate.” Drivers, he contended, were trading a decrease in accident risk for an increase in “driving intensity.” Even if the occupants of cars themselves were safer, he maintained, the increase in car safety had been “offset” by an increase in the fatality rate of people who did not benefit from the safety features—pedestrians, bicyclists, and motorcyclists. As drivers felt safer, everyone else had reason to feel less safe.
Because of the twisting, entwined nature of car crashes and their contributing factors, it is exceedingly difficult to come to any certain conclusions about how crashes may have been affected by changes to any one variable of driving. The median age of the driving population, the state of the economy, changes in law enforcement, insurance factors, weather conditions, vehicle and modal mix, alterations in commuting patterns, hazy crash investigations—all of these things, and others, play their subtle part. In many cases, the figures are simply estimates.
This gap between expected and achieved safety results might be explained by another theory, one that turns the risk hypothesis rather on its head. This theory, known as “selective recruitment,” says that when a seat-belt law is passed, the pattern of drivers who switch from not wearing seat belts to wearing seat belts is decidedly not random. The people who will be first in line are likely to be those who are already the safest drivers. The drivers who do not choose to wear seat belts, who have been shown in studies to be riskier drivers, will be “captured” at a smaller rate—and even when they are, they will still be riskier.
Looking at the crash statistics, one finds that in the United States in 2004, more people not wearing their seat belts were killed in passenger-car accidents than those who were wearing belts—even though, if federal figures can be believed, more than 80 percent of drivers wear seat belts. It is not simply that drivers are less likely to survive a severe crash when not wearing their belts; as Leonard Evans has noted, the most severe crashes happen to those not wearing their belts. So while one can make a prediction about the estimated reduction in risk due to wearing a seat belt, this cannot simply be applied to the total number of drivers for an “expected” reduction in fatalities.
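The statistical trap here can be made concrete with a toy model (every number below is invented for illustration; none comes from the actual crash data). If severe crashes were spread evenly across belt wearers and non-wearers, a fixed per-crash benefit would translate directly into expected lives saved. But if, as selective recruitment suggests, the riskiest drivers are the last to buckle up, non-wearers account for a disproportionate share of severe crashes, and the naive projection overstates the benefit:

```python
# Toy model of "selective recruitment" (all figures hypothetical).
# Assume seat belts halve the chance of dying in a severe crash.
BELT_EFFECT = 0.5
SEVERE_CRASHES = 1000
DEATH_RISK = 0.10   # baseline per-crash fatality risk, unbelted

# Naive projection: severe crashes spread evenly, so the 80% of
# drivers who buckle up see 80% of the severe crashes.
naive_deaths = (SEVERE_CRASHES * 0.8 * DEATH_RISK * BELT_EFFECT
                + SEVERE_CRASHES * 0.2 * DEATH_RISK)

# Selective recruitment: the unbelted 20% are the riskiest drivers
# and account for, say, half of all severe crashes.
selective_deaths = (SEVERE_CRASHES * 0.5 * DEATH_RISK * BELT_EFFECT
                    + SEVERE_CRASHES * 0.5 * DEATH_RISK)

print(naive_deaths, selective_deaths)  # 60.0 vs. 75.0
```

In the second scenario the seat belt works exactly as well per crash, yet total deaths exceed the naive projection, and two-thirds of the dead are unbelted even though 80 percent of drivers wear belts—mirroring the pattern in the 2004 figures above.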
Economists have a clichéd joke: The most effective car-safety instrument would be a dagger mounted on the steering wheel and aimed at the driver. The incentive to drive safely would be quite high. Given that you are twice as likely to die in a severe crash if you’re not wearing a seat belt, it seems that not wearing a seat belt is essentially the same as installing a dangerous dagger in your car.
And yet what if, as the economists Russell Sobel and Todd Nesbit ask, you had a car so safe you could usually walk away unharmed after hitting a concrete wall at high speed? Why, you would “race it at 200 miles per hour around tiny oval racetracks only inches away from other automobiles and frequently get into accidents.” This was what they concluded after tracking five NASCAR drivers over more than a decade’s worth of races, as cars gradually became safer. The number of crashes went up, they found, while injuries went down.
Naturally, this does not mean that the average driver, less risk-seeking than a race-car driver, is going to do the same. For one, average drivers do not get prize money; for another, race-car drivers wear flame-retardant suits and helmets. This raises the interesting, if seemingly outlandish, question of why car drivers, virtually alone among users of wheeled transport, do not wear helmets. Yes, cars do provide a nice metal cocoon with inflatable cushions. But in Australia, for example, head injuries among car occupants, according to research by the Federal Office of Road Safety, make up half the country’s traffic-injury costs. Helmets, cheaper and more reliable than side-impact air bags, would reduce injuries and cut fatalities by some 25 percent. A crazy idea, perhaps, but so were air bags once.
Seat belts and their effects are more complicated than allowed for by the economist’s language of incentives, which sees us all as rational actors making predictable decisions. I have always considered the act of wearing my seat belt not so much an incentive to drive more riskily as a grim reminder of my own mortality (some in the car industry fought seat belts early on for this reason). This doesn’t mean I’m immune from behavioral adaptation. Even if I cannot imagine how the seat belt makes me act more riskily, I can easily imagine how my behavior would change if, for some reason, I was driving a car without seat belts. Perhaps my ensuing alertness would cancel out the added risk.
Moving past the question of how many lives have been saved by seat belts and the like, it seems beyond doubt that increased feelings of safety can push us to take more risks, while feeling less safe makes us more cautious. This behavior may not always occur, we may do it for different reasons, we may do it with different intensities, and we may not be aware that we are doing it (or by how much); but the fact that we do it is why these arguments are still taking place. This may also explain why, as Peltzman has pointed out, car fatalities per mile still decline at roughly the same rate every year now as they did in the first half of the twentieth century, well before cars had things like seat belts and air bags.
In the first decade of the twentieth century, forty-seven men tried to climb Alaska’s Mount McKinley, North America’s tallest peak. They had relatively crude equipment and little chance of being rescued if something went wrong. All survived. By the end of the century, when climbers carried high-tech equipment and helicopter-assisted rescues were quite frequent, each decade saw the death of dozens of people on the mountain’s slopes. Some kind of adaptation seemed to be occurring: The knowledge that one could be rescued was either driving climbers to make riskier climbs (something the British climber Joe Simpson has suggested); or it was bringing less-skilled climbers to the mountain. The National Park Service’s policy of increased safety was not only costing more money, it perversely seemed to be costing more lives—which had the ironic effect of producing calls for more “safety.”
In the world of skydiving, the greatest mortality risk was once the so-called low-pull or no-pull fatality. Typically, the main chute would fail to open, but the skydiver would forget to trigger the reserve chute (or would trigger it too late). In the 1990s, U.S. skydivers began using a German-designed device that automatically deploys, if necessary, the reserve chute. The number of low- or no-pull fatalities dropped dramatically, from 14 in 1991 to 0 in 1998. Meanwhile, the number of once-rare open-canopy fatalities, in which the chute deploys but the skydiver is killed upon landing, surged to become the leading cause of death. Skydivers, rather than simply aiming for a safe landing, were attempting hook turns and swoops, daring maneuvers done with the canopy open. As skydiving became safer, many skydivers, particularly younger skydivers, found new ways to make it riskier.
The psychologist Gerald Wilde would call what was happening “risk homeostasis.” This theory implies that people have a “target level” of risk: Like a home thermostat set to a certain temperature, it may fluctuate a bit from time to time but generally keeps the same average setting. “With that reliable rip cord,” Wilde told me at his home in Kingston, Ontario, “people would want to extend their trip in the sky as often as possible. Because a skydiver wants to be up there, not down here.”
In traffic, we routinely adjust the risks we’re willing to take as the expected benefit grows. Studies, as I mentioned earlier in the book, have shown that cars waiting to make left turns against oncoming traffic will accept smaller gaps in which to cross (i.e., more risk) the longer they have been waiting (i.e., as the desire for completing the turn increases). Thirty seconds seems to be the limit of human patience for left turns before we start to ramp up our willingness for risk.
We may also act more safely as things get more dangerous. Consider snowstorms. We’ve all seen footage of vehicles slowly spinning and sliding their way down freeways. The news talks dramatically of the numbers of traffic deaths “blamed on the snowstorm.” But something interesting is revealed in the crash statistics: During snowstorms, the number of collisions, relative to those on clear days, goes up, but the number of fatal crashes goes down. The snow danger seems to cut both ways: It’s dangerous enough that it causes more drivers to get into collisions, and dangerous enough that it forces them to drive at speeds that are less likely to produce a fatal crash. It may also, of course, force them not to drive in the first place, which itself is a form of risk adjustment.
In moments like turning left across traffic, the risk and the payoff seem quite clear and simple. But do we behave consistently, and do we really have a sense of the actual risk or safety we’re looking to achieve? Are we always pushing it “to the max,” and do we even know what that “max” is? Critics of risk homeostasis have said that given how little humans actually know about assessing risk and probability, and given how many misperceptions and biases we’re susceptible to while driving, it’s simply expecting too much of us to think we’re able to hold to some perfect risk “temperature.” A cyclist, for example, may feel safer riding on the sidewalk instead of the street. But several studies have found that cyclists are more likely to be involved in a crash when riding on the sidewalk. Why? Sidewalks, though separated from the road, cross not only driveways but intersections—where most car-bicycle collisions happen. The driver, having already begun her turn, is less likely to expect—and thus to see—a bicyclist emerging from the sidewalk. The cyclist, feeling safer, may also be less on the lookout for cars.
The average person, the criticism goes, is hardly aware of what their chances actually would be of surviving a severe crash while wearing a seat belt or protected by the unseen air bag lurking inside the steering wheel. Then again, as any trip to Las Vegas will demonstrate, we seem quite capable of making confident choices based on imperfect information of risk and odds. The loud, and occasionally vicious, debate over “risk compensation” and its various offshoots seems less about whether it can happen and more about whether it always happens, or exactly why.
Most researchers agree that behavioral adaptation seems more robust in response to direct feedback. When you can actually feel something, it’s easier to change your behavior in response to it. We cannot feel air bags and seat belts at work, and we do not regularly test their capabilities—if they make us feel safer, that sense comes from something besides the devices themselves. Driving in snow, on the other hand, we don’t have to rely on internalized risk calculations: One can feel how dangerous or safe it is through the act of driving. (Some studies have shown that drivers with studded winter tires drive faster than those without them.)
A classic way we sense feedback as drivers is through the size of the vehicle we are driving. The feedback is felt in various ways, from our closeness to the ground to the amount of road noise. Studies have suggested that drivers of small cars take fewer risks (as judged by speed, distance to the vehicle ahead of them, and seat-belt wearing) than drivers of larger cars. Many drivers, particularly in the United States, drive sport-utility vehicles for their perceived safety benefits from increased weight and visibility. There is evidence, however, that SUV drivers trade these advantages for more aggressive driving behavior. The result, studies have argued, is that SUVs are, overall, no safer than medium or large passenger cars, and less safe than minivans.
Studies have also shown that SUV drivers drive faster, which may be a result of feeling safer. They seem to behave differently in other ways as well. A study in New Zealand observed the position of passing drivers’ hands on their steering wheels. This positioning has been suggested as a measure of perceived risk—research has found, for instance, that more people are likely to have their hands on the top half of the steering wheel when they’re driving on roads with higher speeds and more lanes. The study found that SUV drivers, more than car drivers, tended to drive either with only one hand or with both hands on the bottom half of the steering wheel, positions that seemed to indicate lower feelings of risk. Another study looked at several locations in London. After observing more than forty thousand vehicles, researchers found that SUV drivers were more likely to be talking on a cell phone than car drivers, more likely not to be wearing a seat belt, and—no surprise—more likely not to be wearing a seat belt while talking on a cell phone.
It could just be that the types of people who talk on cell phones and disdain seat belts while driving also like to drive SUVs. But do they like to drive an SUV because they think it’s a safer vehicle or because it gives them license to act more adventurously on the road? To return to the mythical Fred, pickup drivers are less likely than other drivers to wear their seat belts. Under risk-compensation theory, he is doing this because he feels safer in the large pickup truck. But could he not drive in an even more risky fashion yet lower the “cost” of that risky driving by buckling up? It all leads to questions of where we get our information about what is risky and safe, and how we act upon it. Since relatively few of us have firsthand experience with severe crashes in which the air bags deployed, can we really have an accurate sense of how safe we are in a car with air bags versus one without—enough to get us to change our behavior?
Risk is never as simple as it seems. One might think the safest course of action on the road would be to drive the newest car possible, one filled with the latest safety improvements and stuffed full of technological wonders. This car must be safer than your previous model. But, as a study in Norway found, new cars crash most. It’s not simply that there are more new cars on the road—the rate is higher. After studying the records of more than two hundred thousand cars, the researchers concluded: “If you drive a newer car, the probability of both damage and injury is higher than if you drive an older car.”
Given that a newer car would seem to offer more protection in a crash, the researchers suggested that the most likely explanation was drivers changing the way they drive in response to the new car. “When using an older car which may not feel very safe,” they argued, “a driver probably drives more slowly and is more concentrated and cautious, possibly keeping a greater distance to the car in front.” The finding that new cars crash most has shown up elsewhere, including in the United States, although another explanation has been offered: When people buy new cars, they drive them more than old cars. This in itself, however, may be a subtle form of risk compensation: I feel safer in my new car, thus I am going to drive it more often.
Studying risk is not rocket science; it’s more complicated. Cars keep getting objectively safer, but the challenge is to design a car that can overcome the inherent risks of human nature.
In most places in the world, there are more suicides than homicides. Globally, more people take their own lives in an average year—roughly a million—than the total murdered and killed in war. We always find these sorts of statistics surprising, even if we are simultaneously aware of one of the major reasons for our misconception: Homicides and war receive much more media coverage than suicides, so they seem more prevalent.
A similar bias helps explain why, in countries like the United States, the annual death toll from car crashes does not elicit more attention. If the media can be taken as some version of the authentic voice of public concern, one might assume that, over the last few years, the biggest threat to life in this country has been terrorism. This is reinforced all the time. We hear constant talk about “suspicious packages” left in public buildings. We’re searched at airports and we watch other people being searched. We live under coded warnings from the Department of Homeland Security. The occasional terrorist cell is broken up, even if it often seems to be a hapless group of wannabes.
Grimly tally the number of people who have been killed by terrorism in the United States since the State Department began keeping records in the 1960s, and you’ll get a total of fewer than 5,000—roughly the same number, it has been pointed out, as those who have been struck by lightning. But each year, with some fluctuation, the number of people killed in car crashes in the United States tops 40,000. More people are killed on the roads each month than were killed in the September 11 attacks. In the wake of those attacks, polls found that many citizens thought it was acceptable to curtail civil liberties to help counter the threat of terrorism, to help preserve our “way of life.” Those same citizens, meanwhile, in polls and in personal behavior, have routinely resisted traffic measures designed to reduce the annual death toll (e.g., lowering speed limits, introducing more red-light cameras, stiffening blood alcohol limits, tightening cell phone laws).
Ironically, the normal business of life that we are so dedicated to preserving is actually more dangerous to the average person than the threats against it. Road deaths in the three months after 9/11, for example, were 9 percent higher than those during the same periods in the two years before. Given that airline passenger numbers dropped over those same months, it can be assumed that some people chose to drive rather than fly. It might be precisely because of all the vigilance that no further deaths due to terrorism have occurred in the United States since 9/11—even as more than two hundred thousand people have died on the roads. This raises the question of why we do not mount a similarly concerted effort to improve the “security” of the nation’s roads; instead, in the wake of 9/11, newspapers have been filled with stories of traffic police being taken off the roads and assigned to counterterrorism.
In the 1990s, the United Kingdom dropped its road fatalities by 34 percent. The United States managed a 6.5 percent reduction. Why the difference? Better air bags, safer cars? It was mostly speed, one study concluded (although U.S. drivers also rack up many more miles each year). While the United Kingdom was introducing speed cameras, the United States was resisting cameras and raising speed limits. Had the United States pulled off what the United Kingdom did, it is suggested, 10,000 fewer people would have been killed.
Why doesn’t the annual road death toll elicit the proportionate amount of concern? One reason may simply be the trouble we have in making sense of large numbers, because of what has been called “psychophysical numbing.” Studies have shown that people think it’s more important to save the same number of lives in a small refugee camp than a large refugee camp: Saving ten lives in a fifty-person camp seems more desirable than saving ten lives in a two-hundred-person camp, even though ten lives is ten lives. We seem less sensitive to changes when the numbers are larger.
By contrast, in what is called the “identifiable victim effect,” we can be quite sensitive to the suffering of one person, like the victim of a terrible disease. We are, in fact, so sensitive to the suffering of one person that, as work by the American psychologist and risk-analysis expert Paul Slovic has shown, people give more money to charity campaigns that feature a single child than to those that show multiple children—even when the second appeal features only one more child.
Numbers, rather than commanding more attention for a problem, just seem to push us toward paralysis. (Perhaps this goes back to that evolutionary small-group hypothesis.) Traffic deaths present a further problem: Whereas a person in jeopardy can possibly be saved, we cannot know with certainty ahead of time who will be a crash victim—even most legally drunk drivers, after all, make it home safely. In fatal crashes, victims usually die instantly, out of sight. Their deaths are dispersed in space and time, with no regular accumulated reporting of all who died. There are no vigils or pledge drives for fatal car-crash victims, just eulogies, condolences, and thoughts about how “it can happen to anyone,” even if fatal car crashes are not as statistically random as we might think.
Psychologists have argued that our fears tend to be amplified by “dread” and “novelty.” A bioterrorism attack is a new threat that we dread because it seems beyond our control. People have been dying in cars, on the other hand, for more than a century, often by factors presumably within their control. We also seem to think things are somehow less risky when we can feel a personal benefit they provide (like cars) than when we cannot (like nuclear power). Still, even within the realm of traffic, risks seem to be misperceived. Take so-called road rage. The number of people shot and killed on the road every year, even in gun-happy America, unofficially numbers around a dozen (far fewer than those killed by lightning). Fatigue, meanwhile, contributes to some 12 percent of crashes. We are better advised to watch out for yawning drivers than pistol-packing drivers.
Our feelings about which risks we should fear, as the English risk expert John Adams argues, are colored by several important factors. Is something voluntary or not? Do we feel that something is in our control or beyond our control? What is the potential reward? Some risks are voluntary, in our control (we think), and there is a reward. “A pure self-imposed, self-controlled voluntary risk might be something like rock climbing,” Adams said. “The risk is the reward.” No one forces a rock climber to take risks, and when rock climbers die, no one else feels threatened. (The same might be said of suicide versus murder.) Other risks are voluntary but we cede control—for example, taking a cross-country bus trip. We have no sway over the situation. Imagine that you are at the bus station and see a driver drinking a beer at the bar. Then imagine you see the same driver at the wheel as you board your bus. How would you feel? Nervous, I would guess.
Now imagine yourself at a bar having a beer. Then imagine yourself getting in your car to drive home. Did you envision the same dread and panic? Probably not, because you were, at least in your own mind, in control. You’re the manager of your own risk. This is why people think they have a better chance of winning the lottery if they pick the numbers (it is also, admittedly, more fun that way). We get nervous about ceding control over risk to other people. Not surprisingly, we tend to inflate risk most dramatically for things that are involuntary, out of our control, and offer no reward. “The July 7 bombings here in London killed fifty-two people, about six days’ worth of death on the road,” Adams said. “After this event, ten thousand people gathered in Trafalgar Square. You don’t get ten thousand people in Trafalgar Square lamenting last week’s road death toll.”
Why is there no outrage? Driving is voluntary, it’s in our control, and there’s a reward. And so we fail to recognize the real danger cars present. Research in the United States has shown, for example, that exurban areas—the sprawling regions beyond the old inner-ring suburbs—pose greater risks to their inhabitants than central cities as a whole. This despite a cultural preconception that the opposite is true. The key culprit? Traffic fatalities. The less dense the environment, the more dangerous it is. If we wanted dramatically safer roads overnight—virtually fatality-free—it wouldn’t actually be difficult. We could simply lower the speed limit to ten miles per hour (as in those Dutch woonerven). Does that seem absurd? In the early 1900s that was the speed limit. In Bermuda, very few people die in cars each year. The island-wide speed limit is 35 kilometers per hour (roughly 22 miles per hour). In the United States, to take one example, Sanibel Island, Florida, which keeps speeds similarly low with a 35 mph maximum, has not seen a traffic fatality this century, despite a heavy volume of cars and cyclists. But lowering mean speeds by as little as one mile per hour, as Australian researchers have found, lowers crash risks.
As societies, we have gradually accepted faster and faster speeds as a necessary part of a life of increasing distances, what Adams calls “hypermobility.” Higher speeds enable life to be lived at a scale in which time is more important than distance. Ask someone what their commute is, and they will inevitably give an answer in minutes, as if they were driving across a clock face. Our cars have been engineered to bring a certain level of safety to these speeds, but even this is rather arbitrary, for what is safe about an activity that kills tens of thousands of people a year and seriously injures many more than that? We drive with a certain air of invincibility, even though air bags and seat belts will not save us in roughly half the crashes we might get into, and despite the fact that, as Australian crash researcher Michael Paine has pointed out, half of all traffic fatalities to seat-belt-wearing drivers in frontal collisions happen at impact speeds at or below the seemingly slow level of 35 miles per hour.
We have deemed the rewards of mobility worth the risk. The fact that we’re at the wheel skews our view. Not only do we think we’re better than the average driver—that “optimistic bias” again—but studies show that we think we’re less likely than the average driver to be involved in a crash. The feeling of control lowers our sense of risk. What’s beyond our control comes to seem riskier, even though it is “human factors,” not malfunctioning vehicles, faulty roads, or the weather, that are responsible for an estimated 90 percent of crashes.
On the road, we make our judgments about what’s risky and what’s safe using our own imperfect human calculus. We think large trucks are dangerous, but then we drive unsafely around them. We think roundabouts are more dangerous than intersections, although they’re safer. We think the sidewalk is a safer place to ride a bike, even though it’s not. We worry about getting into a crash on “dangerous” holiday weekends but stop worrying during the week. We do not let children walk to school even though riding in a car presents a greater hazard. We use hands-free cell phones to avoid risky dialing and then spend more time on risky calls (among other things). We carefully stop at red lights when there are no other cars, but exceed the speed limit during the rest of the trip. We buy SUVs because we think they’re safer and then drive them in more dangerous ways. We drive at a minuscule following distance to the car ahead, exceeding our ability to avoid a crash, with a blind faith that the driver ahead will never have a reason to suddenly stop. We have gotten to the point where cars are safer than ever, yet traffic fatalities cling to stubbornly high levels. We know all this, and act as if we don’t.