Traffic: Why We Drive the Way We Do (and What It Says About Us) - Tom Vanderbilt (2008)
Chapter 2. Why You’re Not as Good a Driver as You Think You Are
If Driving Is So Easy, Why Is It So Hard for a Robot? What Teaching Machines to Drive Teaches Us About Driving
As you wish, Mr. Knight. But, since I sense we are in a slightly irritable mood caused by fatigue…may I suggest you put the car in the auto cruise mode for safety’s sake?
—K.I.T.T., Knight Rider
For those of us who aren’t brain surgeons, driving is probably the most complex everyday thing we do. It is a skill that consists of at least fifteen hundred “subskills.” At any moment, we are navigating through terrain, scanning our environment for hazards and information, maintaining our position on the road, judging speed, making decisions (about twenty per mile, one study found), evaluating risk, adjusting instruments, anticipating the future actions of others—even as we may be sipping a latte, thinking about last night’s episode of American Idol, quieting a toddler, or checking voice mail. A survey of one stretch of road in Maryland found that a piece of information was presented every two feet, which at 30 miles per hour, the study reasoned, meant the driver was exposed to 1,320 “items of information,” or roughly 440 words, per minute. This is akin to reading three paragraphs like this one while also looking at lots of pretty pictures, not to mention doing all the other things mentioned above—and then repeating the cycle, every minute you drive.
Because we seem to do this all so easily, we tend not to dwell on it. Driving becomes like breathing or an involuntary reflex. We just do it. It just happens. But to think anew about this rather astonishing human ability, it’s worth pausing to consider what it actually takes to get a nonhuman to drive. This is a problem that Sebastian Thrun, director of the Artificial Intelligence Laboratory at Stanford University, and his team have dedicated themselves to for the last few years. In 2005, Thrun and his colleagues won the Defense Advanced Research Projects Agency’s Grand Challenge, a 132-mile race through a tortuous course in the Mojave Desert. Their “autonomous vehicle,” a Volkswagen Touareg named Stanley, using only GPS coordinates, cameras, and a variety of sensors, completed the course in just under seven hours, averaging a rather robust 19.1 miles per hour.
Stanley won because Thrun and his team, after a series of failures, changed their method of driving instruction. “We started teaching Stanley much more like an apprentice than a computer,” Thrun told me. “Instead of telling Stanley, ‘If the following condition occurs, invoke the following action,’ we would give an example and train him.” It would not work, for example, to simply tell Stanley to drive at a certain speed limit. “A person would slow down when they hit a rut,” Thrun said. “But a robot is not that smart. It would keep driving at thirty miles per hour until its death.” Instead, Thrun took the wheel and had Stanley record the way he drove, carefully noting his speed and the amount of shock the vehicle was absorbing. Stanley watched how Sebastian responded when the road narrowed, or when the shock level of his chassis went beyond a certain threshold.
Stanley was learning the way most of us learn to drive, not through rote classroom memorization of traffic rules and the viewing of blood-soaked safety films but through real-world observation, sitting in the backseats of our parents’ cars. For Thrun, the process made him begin “questioning what a rule really is.” The basic rules were simple: Drive on this road under this speed limit from this point to this point. But giving Stanley rules that were too rigid would cause him to overreact, like the autistic character played by Dustin Hoffman in the film Rain Man, who stops while crossing an intersection because the sign changes to DO NOT WALK. What about when the conventions are violated, as they so often are in driving? “Nothing says that a tumbleweed has to stay outside the drivable corridor,” Thrun explained. In other words, stuff happens. There are myriad moments of uncertainty, or “noise.” In the same way we do things like judge whether the police car with the flashing lights has already pulled someone else over, Stanley needs to decipher the puzzling world of the road: Is that a rock in the middle of the street or a paper bag? Is that a speed bump in the road or someone who fell off their bike? The restrictions on a New York City “No Parking” sign alone would bring Stanley to his knees.
If all this seems complicated enough, now consider doing all of it in the kind of environment in which most of us typically drive: not lonely desert passes but busy city and suburban streets. When I caught up with Thrun, this is exactly what was on his mind, for he was in the testing phase for DARPA’s next race, the Urban Challenge. This time the course would be in a city environment, with off-roading Stanley retired in favor of sensible Junior, a 2006 VW Passat Wagon. The goal, according to DARPA, would be “safe and correct autonomous driving capability in traffic at 20 mph,” including “merging into moving traffic, navigating traffic circles, negotiating busy intersections, and avoiding obstacles.”
We do not always get these things right ourselves, but most drivers make any number of complex maneuvers each day without any trouble. Teaching a machine to do this presents elemental problems. Simply analyzing any random traffic scene, as we constantly do, is an enormous undertaking. It requires not only recognizing objects, but understanding how they relate to one another, not just at that moment but in the future. Thrun uses the example of a driver coming upon a traffic island versus a stationary car. “If there’s a stationary car you behave fundamentally differently, you queue up behind it,” he says. “If it’s a traffic island you just drive around it. Humans take for granted that we can just look at this and recognize it instantly. To take camera data and be able to understand this is a traffic island, that technology just doesn’t exist.” Outside of forty meters or so, Junior, according to Thrun, does not have a clue about what the approaching obstacle is; he simply sees that it is an obstacle.
In certain ways, Junior has advantages over humans, which is precisely why some robotic devices, like adaptive cruise control—which tracks via lasers the distance to the car in front and reacts accordingly—have already begun to appear in cars. When calculating the distance between himself and the car ahead, as with ACC, Junior is much more accurate than we are—to within one meter, according to Michael Montemerlo, a researcher at Stanford. “People always ask if Junior will sense other people’s brake lights,” Montemerlo said. “Our answer is, you don’t really have to. Junior has the ability to measure the velocity of another car very precisely. That will tell you a car’s braking. You actually get their velocity instead of this one bit of information saying ‘I’m slowing down.’ That’s much more information than a person gets.”
Driving involves not just the fidelity of perception but knowing what to do with the information. For Stanley, the task was relatively simple. “It was just one robot out in the desert all by himself,” Montemerlo said. “Stanley’s understanding of the world is very basic, actually just completely geometric. The goal was just to always take the good terrain and avoid the bad terrain. It’s not possible to drive in an urban setting with that limited understanding of the world. You actually have to take and interpret what you’re seeing and exhibit a higher-level understanding.” When we approach a traffic signal that has just gone yellow, for example, we engage in a complex chain of instantaneous processing and decision making: How much longer will the light be yellow? Will I have time (or space) to brake? If I accelerate will I make it, and how fast do I have to go to do so? Will I be struck by the tailgater behind if I slam on the brakes? Is there a red-light camera? Are the roads wet? Will I be caught in the intersection, “blocking the box”?
Engineers call the moment when we’re too close to the amber light to stop and yet too far to make it through without catching some of the red phase the “dilemma zone.” And a dilemma it is. Judging by crash rates, more drivers are struck from the rear when they try to stop for the light, but more serious crashes occur when drivers proceed and are hit broadside by a car entering the intersection. Do you take the higher chance of a less serious crash or the lower chance of a more serious crash? Engineers can make the yellow light last longer, but this reduces the capacity of the intersection—and once word gets out on the generous signal timing, it may just encourage more drivers to speed up and go for it.
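The geometry of the dilemma zone can be made concrete with a little arithmetic. The sketch below is a simplified textbook-style model, not anything from the engineers quoted here: the deceleration rate, reaction time, yellow duration, and intersection width are all assumed round numbers.

```python
def can_stop(distance_m, speed_ms, decel_ms2=3.0, reaction_s=1.0):
    """Can the driver stop before the stop line at a comfortable deceleration?"""
    stopping_distance = speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)
    return stopping_distance <= distance_m

def can_clear(distance_m, speed_ms, yellow_s=4.0, intersection_m=20.0):
    """Can the driver cross the far side of the intersection before the red?"""
    return speed_ms * yellow_s >= distance_m + intersection_m

def in_dilemma_zone(distance_m, speed_ms):
    # The dilemma: too close to stop comfortably, too far to clear in time.
    return not can_stop(distance_m, speed_ms) and not can_clear(distance_m, speed_ms)
```

With these assumed numbers, a driver at about 14 meters per second (roughly 50 km/h) can stop only if the light changes more than about 46 meters out, but can clear only from within about 36 meters, leaving a band of roughly 10 meters in which neither choice works. Lengthening the yellow shrinks that band, which is exactly the engineers' lever, and exactly the trade-off described above.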
Some people have even proposed signs that warn the driver in advance that the light is about to turn amber, a sort of “caution for the caution” that extends what is called the “indecision zone.” But a study in Austria that looked at intersections where the green signal flashes briefly before turning yellow found mixed results: Fewer drivers went through the red light than at intersections without the flashing green, but more drivers stopped sooner than necessary. The danger of the latter result was shown in a study of intersections in Israel where the “flashing green” system had been introduced. There were more rear-end collisions at those intersections than at those without the flashing green. The longer the indecision zone, the more cars that are in it, the more decisions about whether or not to go or suddenly stop, and thus the more chances to crash.
In traffic, these sorts of dilemma zones occur all the time. There were no pedestrians present in the Grand Challenge (“Thank God,” said Montemerlo); they would represent a massive problem for Junior. “I’ve thought a lot about what would happen if you let Junior loose in the real world,” Montemerlo said. Driving at Stanford is relatively sedate, but what if there is a pedestrian standing on the curb, just off the crosswalk? As the pedestrian isn’t in the road, he’s not classified as an obstacle. But is he waiting to cross or just standing there? To know this, the robot would somehow have to interpret the pedestrian’s body language, or be trained to analyze eye contact and facial gestures. Even if the robot driver stopped, the pedestrian might need further signals. “The pedestrian is sometimes wary of walking in front of someone even if they have stopped,” Montemerlo said. “Often they wait for the driver to wave, ‘You go first.’” Would you feel comfortable crossing in front of a driverless Terminator?
In some ways, however, a city environment is actually easier than a dusty desert track. “Urban driving is really constrained; there aren’t many things you can do,” said Montemerlo (who has clearly never driven on New York’s FDR Drive). “This is actually how we’re able to drive. We use the rules of the road and road markings to make assumptions about what might happen.”
Traffic is filled with these assumptions: We drive at full speed through the green light because we’re predicting that the other drivers will have stopped; we do not brace for a head-on collision every time a car comes our way in the opposite lane; we zoom over the crest of a hill because we do not think there is an oil truck stopped just on the other side. “We’re driving faster than we would if we couldn’t make these assumptions,” Montemerlo said. What the Stanford team does is encode these assumptions into the 100,000 or so lines of code that make up Junior’s brain, but not with such rigidity that Junior freezes up when something weird happens.
And weird things happen a lot in traffic. Let’s say a traffic signal is broken. David Letterman once joked that traffic signals in New York City are “just rough guidelines,” but everyone has driven up to a signal that was stuck on red. After some hesitation, you probably, and very carefully, went through the red. Or perhaps you came up behind a stalled car. To get around it would involve crossing a double yellow line, normally an illegal act. But you did it, and traffic laws usually account for exceptional circumstances. What about the question of who proceeds first at a four-way stop? Sometimes there is confusion about who arrived first, which produces a brief four-way standoff. Now picture four robot drivers who arrived at the exact same moment. If they were programmed to let the person who arrived first go first, two things might happen: They might all go first and collide or they might all sit frozen, the intersection version of a computer crash. So the Stanford team uses complex algorithms to make Junior’s binary logic a bit more human. “Junior tries to estimate what the right time to go is, and tries to wait for its turn,” Montemerlo said. “But if somebody else doesn’t take their turn and enough time passes by, the robot will actually bump itself up the queue.”
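The arbitration Montemerlo describes can be sketched in a few lines. This is a toy illustration, not Stanford's actual software: the class name, the queue representation, and the three-second patience window are all invented for the example.

```python
import time

class StopSignArbiter:
    """Toy model of a robot at a four-way stop: wait for your estimated
    turn, but if traffic stalls, bump yourself up the queue."""

    def __init__(self, arrival_order, my_id, patience_s=3.0, clock=time.monotonic):
        self.queue = list(arrival_order)   # car ids, in estimated order of arrival
        self.my_id = my_id
        self.patience_s = patience_s       # how long to wait for a stalled car
        self.clock = clock
        self.last_progress = clock()

    def observe_departure(self, car_id):
        """Another car took its turn; advance the queue."""
        if car_id in self.queue:
            self.queue.remove(car_id)
            self.last_progress = self.clock()

    def should_go(self):
        if self.queue and self.queue[0] == self.my_id:
            return True                    # plainly our turn
        # Nobody ahead of us has moved for a while: claim the turn anyway,
        # rather than sitting frozen in a four-way standoff.
        stalled = self.clock() - self.last_progress > self.patience_s
        return stalled and self.my_id in self.queue
```

The patience timeout is what breaks the deadlock when four overly polite robots all defer; a real system would also need some way to re-yield if two cars "bump up" at the same moment.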
The Stanford team found that the best way for Stanley and Junior to learn how to drive was to study how humans drive. But might the robots have anything to teach us? In the very first Grand Challenge, Montemerlo said, Thrun was “always complaining that the robot slowed down too much in turns.” Yet when a graduate student analyzed the race results, he came to the conclusion that the robot could have “cornered like a Ferrari” and still only shaved a few minutes off a seven-hour race—while upping the crash risk. The reason was that most of the course consisted of straight roads. Maintaining the highest average speed over these sections was more important than taking the relatively few turns (the most dangerous parts of the road) at the highest speed possible.
“Driving smarter,” Montemerlo calls it. This is something he has thought a lot about for the Urban Challenge. “You might initially think, ‘I’ll take everything Junior does, and make it as fast as possible. I’ll make it accelerate from the stop sign as fast as possible. I’ll make it wait the minimum amount of time when it stops.’ But it turns out it doesn’t help that much. We all know it from traffic. You see the guy who speeds past you on a road, and then you see him again—you’re stopped one car behind him at the next red light. The randomness of traffic overwhelms these tiny instances. At the same time, some of these little optimizations, like being a jerk at a stop sign, cause problems for everyone. They slow everyone down.”
It took a group of some of the world’s leading robotics researchers years of work to come up with an autonomous vehicle that, while clever and adept at certain driving tasks, would quickly go haywire in real traffic. That should be both a testament to the remarkable human ability that driving is and a cautionary reminder not to take this activity for granted. The advantage robots have in the long run is that the hardware and software keep getting better. We humans must use what we’re born with. The human cognitive mechanism is powerful equipment, as the trials of teaching Stanley and Junior to drive show. But as we are about to see, it is not without bugs. And these are not the sort that are going to be fixed in Version 2.0.
How’s My Driving? How the Hell Should I Know? Why Lack of Feedback Fails Us on the Road
There are two things no man will admit he cannot do well: drive and make love.
—Stirling Moss, champion racer
A splashy television advertising campaign for the online auction site eBay came with the simple tagline “People Are Good.” Interestingly, a number of the images it showed involved traffic: In one spot, people joined to help push a car stuck in the snow; in another, a driver slowed to let another driver in, with a wave of the hand. By tapping into these moments of reciprocal altruism, eBay was hoping to underscore the idea that you can buy something from somebody you have never met, halfway around the globe, and feel confident that the product will actually show up. This “everyday trust,” as an eBay spokesperson described it, which “blossoms into millions of strangers transacting with each other and overwhelmingly comes off without a hitch,” roughly describes what happens in traffic.
And yet people are not always good. Each month seems to bring some new form of scam to eBay, which the company duly investigates. Sophisticated software, for one thing, sniffs out suspicious bidding patterns. What keeps the site running, however, is not the prowess of its fraud squad—which would hardly have time to monitor more than a fraction of the many millions of daily auctions—but a more simple mechanism: feedback. The desire to get positive feedback and avoid negative feedback is, as anyone who has bought or sold on the site knows, a crucial part of the experience. This probably has less to do with people wanting to feel good than with the fact that sellers with good reputations can, as one study found, make 8 percent more in revenue. Either way, feedback (provided it’s authentic) is the social glue that holds eBay together.
What if there was an eBay-like system of “reputation management” for traffic? This idea was raised in a provocative paper by Lior J. Strahilevitz, a law professor at the University of Chicago. “A modern, urban freeway is a lot like eBay, without reputation scores,” he wrote. “Most drivers on the freeway are reasonably skilled and willing to cooperate conditionally with fellow drivers, but there is a sizeable minority that imposes substantial costs on other drivers, in the form of accidents, delays, stress, incivility, and rising insurance premiums.”
Inspired by the HOW’S MY DRIVING stickers used by commercial fleets, the idea is that drivers, when witnessing an act of dangerous or illegal driving, could phone a call center and lodge a complaint, using mandatory identification numbers posted on every driver’s bumper or license plate. Calls could also be made to reward good drivers. An account would be kept and, at the end of each month, drivers would receive a “bill” tallying the positive or negative comments called in. Drivers exceeding a certain threshold could be punished in some way, such as by higher insurance premiums or a suspension of their license. Strahilevitz argues that this system would be more effective than sporadic law enforcement, which can monitor only a fraction of the traffic stream. The police are usually limited to issuing tickets based on obvious violations (like speeding) and are essentially powerless to do anything about the more subtle rude and dangerous moments we encounter—how often have you wished in vain for a police car to be there to catch someone doing something dangerous, like tailgating or texting on their BlackBerry? It would also help insurance companies set rates more effectively, not to mention give frustrated drivers a safer and more useful outlet for expressing their disapproval, and gaining a sense of justice, than responding in kind with acts of aggressive driving.
But what about false or biased feedback? What if your next-door neighbor who’s mad at you for your barking dog phones in a report saying you were acting crazy on the turnpike? As Strahilevitz points out, eBay-style software can sniff out suspicious activity—“outliers” like one negative comment among many positives, or repeated negative comments from the same person. What about privacy concerns? Well, that’s exactly the point: People are free to terrorize others on the road because their identity is largely protected. The road is not a private place, and speeding is not a private act. As Strahilevitz argues, “We should protect privacy if, and only if, doing so promotes social welfare.”
Less ambitious and official versions of this have been tried. The Web site Platewire.com, which was begun, in the words of its founder, “to make people more accountable for their actions on the roadways in one forum or another,” gives drivers a place to lodge complaints about bad drivers, along with the offenders’ license plate numbers; posts chastise “Too Busy Brushing Her Hair” in California and “Audi A-hole” in New Jersey. Much less frequently, users give kudos to good drivers.
However noble the effort, the shortcomings of such sites are obvious. For one, Platewire, at the time of this writing, has a bit over sixty thousand members, representing only a minuscule fraction of the driving public. Platewire complaints are falling on few ears. For another, given the sheer randomness of driving, the chances are remote that I would ever come across the owner of New Jersey license plate VR347N—more remote even than the chance that they’re reading this book—and, moreover, I’m unlikely to remember that they were the one a Platewire member had tagged for “reading the newspaper” while driving! Lastly, Platewire lacks real consequences beyond the anonymous shame of a small, disparate number of readers.
The call-center idea is aimed at countering the feeling of pervasive anonymity in traffic, and all the bad behavior it encourages. But it could also help correct another problem in traffic: the lack of feedback. As discussed earlier, the very mechanics of driving enable us to play spectator to countless acts of subpar driving, while being less aware of our own. Not surprisingly, if we were to ask ourselves “How’s my driving?,” research has shown that the answer would probably be a big thumbs-up—regardless of one’s actual driving record.
In study after study, from the United States to France to New Zealand, when groups of drivers were asked to compare themselves to the “average driver,” a majority inevitably responded that they were “better.” This is, of course, statistically quite improbable and seems like a sketch from Monty Python: “We Are All Above Average!” Psychologists have called this phenomenon “optimistic bias” (or the “above-average effect”), and it is still something of a mystery why we do it. It might be that we want to make ourselves out to be better than others in a kind of downward comparison, the way the people in line in the first chapter assessed their own well-being by turning around to look at those lesser beings at the back of the queue. Or it might be the psychic crutch we need to more confidently face driving, the most dangerous thing most of us will ever do.
Whatever the reason, the evidence is strong that we self-enhance in all areas of life, often at our peril. Investors routinely claim they are better than the average investor at picking stocks, but at least one study of brokerage accounts showed that the most active traders (presumably among the most confident) generated the smallest returns. Driving may be particularly susceptible to the above-average effect. For one, psychologists have found that the optimistic bias seems stronger in situations we can control; one study found drivers were more optimistic than passengers when asked to rate their chances of being involved in a car accident.
The above-average effect helps explain resistance (in the early stages, at least) to new traffic safety measures, from seat belts to cell phone restrictions. Polls have shown, for example, that most drivers would like to see text messaging while driving banned; those same polls also show that most people have done it. We overestimate the risks to society and underestimate our own risk. It is the other person’s behavior that needs to be controlled, not mine; this reasoning helps sustain the longstanding gap between social mores and traffic laws as technology evolves. We think stricter laws are a good idea for the people who need them.
Another problem with our view of ourselves is that we tend to rank ourselves higher, studies have shown, when the activity in question is thought to be relatively easy, like driving, and not relatively complex, like juggling many objects at once. Psychologists have suggested that the “Lake Wobegon effect”—“where all the children are above average”—is stronger when the skills in question are ambiguous. An Olympic pole-vaulter has a pretty clear indication of how good she is compared to everyone else by the height of the bar she must clear. As for a driver who simply makes it home unscathed from work, how was their performance? A 9.1 out of 10?
Most important, we may inflate our own driving abilities simply because we are not actually capable of rendering an accurate judgment. We may lack what is called “metacognition,” which means, as Cornell University psychologists Justin Kruger and David Dunning put it, that we are “unskilled and unaware of it.” In the same way a person less versed in the proper rules of English grammar will be less able to judge the correctness of grammar (to use Kruger and Dunning’s example), a driver who is not fully aware of the risks of tailgating or the rules of traffic is hardly in a good position to evaluate their own relative risk or driving performance compared to everyone else’s. One study showed that drivers who did poorly on their driving exam or had been involved in crashes were not as good at estimating their results on a simple reaction test as the statistically “better” (i.e., safer) drivers. And yet, as mentioned earlier, people seem easily able to disregard their own driving record in judging the quality of their own driving.
So whether we’re cocky, compensating for feeling fearful, or just plain clueless, the roads are filled with a majority of above-average drivers (particularly men), each of whom seems intent on maintaining their sense of above-averageness. My own unscientific theory is that this may help explain—in America, at least—why drivers polled in surveys seem to find the roads less civil with each passing year. In a 1982 survey, a majority of drivers found that the majority of other people were “courteous” on the road. When the same survey was repeated in 1998, the rude drivers outnumbered the courteous.
How does this tie into pumped-up egos? Psychologists suggest that narcissism, more than insecurity propelled by low self-esteem, promotes aggressive driving. Rather like the survey data that show a mathematical disconnect between the number of sexual partners men and women claim to have had, polls of aggressive driving behavior find more people seeing it than doing it. Someone is self-enhancing. And so narcissism, like road nastiness, seems to be on the rise. Psychologists who examined a survey called the Narcissistic Personality Inventory, which has for the past few decades gauged narcissistic indicators in society (measuring reactions to statements like “If I ruled the world, it would be a better place”), found that in 2006, two-thirds of survey respondents scored higher than in 1982. More people than ever, it seems, have a “positive and inflated view of the self.” And over the same period that narcissism was growing, the road, if surveys can be believed, was becoming a less pleasant environment. Traffic, a system that requires conformity and cooperation to function best, was filling with people sharing a common thought: “If I ruled the road, it would be a better place.”
When negative feedback does come our way on the road, we tend to find ways to explain it away, or we quickly forget it. A ticket is a rare event that one grumblingly attributes to police officers having to “make a quota”; a honk from another driver is a cause for anger, not shame or remorse; a crash might be seen as pure bad luck. But usually, for most people, there is no negative feedback. There is little feedback at all. We drive largely without incident every day, and every day we become just a little bit more above average. As John Lee, head of the Cognitive Systems Laboratory at the University of Iowa, explained, “As an average driver you can get away with a lot before it catches up to you. That’s one of the problems. The feedback loops are not there. You can be a bad driver for years and never really realize it, because you don’t get that demonstrated to you. You could drive for years with a cell phone and say, ‘How can cell phones be dangerous, because I do it every day for two hours and nothing’s happened?’ Well, that’s because you’ve been lucky.”
Even the moments when we almost crash become testaments to our skill, notches on our seat belts. But as psychologist James Reason wrote in Human Error, “In accident avoidance, experience is a mixed blessing.” The problem is that we learn how to avoid accidents precisely by avoiding accidents, not by being in accidents. But a near miss, as Reason described it, involves an initial error as well as a process of error recovery. This raises several questions: Are our near misses teaching us how to avoid accidents or how to prevent the errors that got us into the tight spot to begin with? Does avoiding a minor accident just set us up for having to get out of much bigger accidents? How, and what, do we learn from our mistakes?
This last question was also raised by the technology of a company called DriveCam, located in an office park in suburban San Diego, where I spent a day watching video footage of crashes, near crashes, and spectacularly careless acts of driving. The premise is simple: A small camera, located around the rearview mirror, is constantly buffering images (the way TiVo does for your television shows) of the exterior view and the driver. Sensors monitor the various forces the vehicle is experiencing. When a driver brakes hard or makes a sudden turn, the camera records ten seconds before and after the event, for context. The clip is then sent to DriveCam analysts, who file a report and, if necessary, apply “coaching.”
DriveCam, whose motto is “Taking the risk out of driving,” has its cameras installed in everything from Time Warner Cable vans to Las Vegas taxicabs to rental-car shuttle buses at airports. Companies that have installed DriveCam have seen their drivers’ crash rates drop by 30 to 50 percent. The company contends that it has several advantages over the traditional methods of trying to improve the safety records of commercial fleets. One earlier approach, as DriveCam CEO Bruce Moeller told me, was giving drivers spot safety drills. “They’d come in for the training. You’re all hopped up, ‘I’m going to do right.’ But then over time, you start pushing the envelope. You didn’t hit anybody and nobody yelled at you. So that’s fine, you get away with it, and pretty soon you start lapsing back to your old ways.” The widespread advent of “How’s My Driving?” phone numbers in the 1980s created the potential for more constant feedback, but it was often late or of debatable quality, says Del Lisk, the company’s vice president. “It’s highly prone to very subjective consumer call-ins,” he said. “Like, ‘I’m mad about my phone bill so I’m going to call in that AT&T guy.’”
Given that the company car is, statistically, the most hazardous environment for workers, it seems appropriate that the thinking behind DriveCam is inspired by the work of H. W. Heinrich, an insurance investigator for the Travelers Insurance Company and the author of a seminal 1931 book, Industrial Accident Prevention: A Scientific Approach. After investigating tens of thousands of industrial injuries, he estimated that for every one fatality or major injury in the workplace, there were 29 minor injuries and 300 “near-miss” incidents that led to no injury. He arranged these in the so-called Heinrich’s triangle and argued that the key to avoiding the one event at the top of the triangle lay in tackling the many small events at the bottom.
When I’d met Moeller, the first thing he’d told me, after introductory pleasantries, was: “If we were to put a DriveCam in your car, not knowing you at all, I guarantee you that you’ve got driving habits you’re not even aware of that are an accident waiting to happen.” He pointed to the Heinrich triangle he had drawn on a whiteboard. “You know about the twenty-nine and the one”—the crashes and the fatality—“because there’s hard evidence that somebody got killed or somebody crashed,” he said. “What we show you with the DriveCam monitoring this thing twenty-four/seven is that all the very same unsafe behaviors that are going on down here”—he pointed to the bottom of the triangle—“can result, or will result, in accidents, except for pure luck.”
The key to reducing what DriveCam calls “preventable accidents,” as Lisk sees it, lies at the bottom of the triangle, in all those hidden and forgotten near misses. “Most people would look at that triangle and use the top two tiers as their way of estimating how good a driver they are. The truth is, it’s really the bottom tier that is the real evaluator.” In other words, a driver thinks of their own performance in terms of crashes and traffic tickets. People riding along with a driver look at it differently. “All of us, as passengers,” Lisk said, “will ride along and evaluate drivers from the bottom of the pyramid, squeezing the armrest and pushing our feet into the floorboards.”
As I played virtual passenger through a number of DriveCam moments, a disturbing realization dawned on me. There is much careless driving, to be sure. In one clip, a man takes his hands off the steering wheel to jab at a boxer’s speed bag suspended from the rearview mirror. In any number of clips, drivers struggle to keep their eyes open and their bobbing heads erect. “We’ve got one where a guy’s driving a tanker truck full of gas for eight full seconds as he’s asleep,” Moeller said. (A dip on a Los Angeles freeway had triggered the camera.)
But what is most unsettling in a number of clips is not the event itself as much as what else the camera captured, at the margins of the frame. In one bit of footage, a man looks down to dial a cell phone as he drives down a residential street. His eyes are off the road for much of the nine seconds of the recorded event, and his van begins to drift off the road. Startled by the vibration of the roadside, he swerves back onto the road. He grimaces in a strange mixture of shock and relief. Examining the image closely, however, one sees a child on a bicycle and the child’s friend, standing just off the road, less than a dozen feet away from the triggered event. “Do you think he ever even saw the bike rider and other person?” Lisk asked. “It’s just luck. It’s that pyramid.”
Not only was the driver unaware of the real hazards he was subjecting himself and others to in the way he was driving, he was not even aware that he was unaware. “This guy’s probably a great guy, good family man, good employee,” Lisk said. “He doesn’t even know this is happening. If we told him it happened, with a black box or something, he wouldn’t even believe it.” Without the video, the driver would not have realized the potential consequences of his error. “I get reinforced more positively every day that I don’t hit a kid because I’m not seeing that stuff,” Moeller said. “I’m thinking I’m good, I can do this. I can look down at my BlackBerry, I can dial a phone, I can drink. We all get reinforced the wrong way.”
Until the moment when we do not, of course, and something goes wrong. We commonly refer to these moments as “accidents,” meaning that they were unintended or unforeseen events. Accident is a good word for describing such events as an otherwise vigilant driver being unable to avoid a tree that suddenly fell across the road. But consider the case of St. Louis Cardinals pitcher Josh Hancock, who was tragically killed in 2007 when his rented SUV slammed into the back of a tow truck that was stopped on the highway, lights flashing, at the scene of a previous crash. Investigators learned that Hancock (who days before had crashed his own SUV) had a blood alcohol concentration nearly twice the legal limit, was speeding, was not wearing a seat belt, and was on a cell phone at the time of the fatal crash.
Despite the fact that all these well-established risky behaviors were present, simultaneously, the event was still routinely referred to in the press as an “accident.” The same thing happened with South Dakota congressman Bill Janklow. A notorious speeder who racked up more than a dozen tickets in the span of four years and had a poster of himself boasting that he liked to live in the “fast lane,” in 2003 Janklow blazed through a stop sign and killed a motorcyclist. The press repeatedly called it an “accident.”
The problem with this word, as the British Medical Journal pointed out in 2001 when it announced that it would no longer use it, is that accidents are “often understood to be unpredictable,” and thus unpreventable. Were the Hancock and Janklow crashes really unpredictable or unpreventable? They were certainly unintentional, but are “some crashes more unintentional than others”? Did they “just happen” or were there things that could have been done to prevent them, or at least greatly reduce the chances of their happening? Humans are humans, things will go wrong, there are instances of truly bad luck. And psychologists have argued that humans tend to exaggerate, in retrospect, just how predictable things were (the “hindsight bias”). The word accident, however, has been sent skittering down a slippery slope, to the point where it seems to provide protective cover for the worst and most negligent driving behaviors. This in turn suggests that so much of the everyday carnage on the road is mysteriously out of our hands and can be stopped or lessened only by adding more air bags (pedestrians, unfortunately, lack this safety feature).
Most crashes involve a violation of traffic laws, whether intentional or not. But even the notion of “unintentional” versus “intentional” has been blurred. In 2006, a Chicago driver reaching for a cell phone while driving lost control of his SUV, killing a passenger in another car. The victim’s family declared, “If he didn’t drink or use drugs, then it’s an accident.” As absurd as that statement may sound, given that the driver intentionally broke the law, the law essentially agreed: The driver was fined $200. Similarly strange distinctions are found with “sober speeders.” There is a huge gulf in legal recrimination between a person who boosts his blood alcohol concentration way over the limit and kills someone and a driver who boosts his speedometer way over the limit and kills someone.
A similar bias creeps into news reports, which are often quick to note, when reporting fatal crashes, that “no drugs or alcohol were involved,” subtly absolving the driver from full responsibility—even if the driver was flagrantly exceeding the speed limit. Car companies would rightly be castigated if they advertised the joys of drinking and driving. But as a survey of North American car commercials by a group of Canadian researchers showed, it is quite acceptable to show cars being driven, soberly, in ways that a panel of viewers labeled “hazardous.” Nearly half of the more than two hundred ads screened (always carrying careful, if duplicitous, disclaimers) were considered by the majority of the panel to contain an “unsafe driving sequence,” usually marked by high speeds. Ads for SUVs were the most frequent offenders, and across all commercials, when drivers were shown, the majority were men.
What the video footage at DriveCam showed, more often than not, is not that unforeseen things happen on the road for no good reason but that people routinely do things to make crashes “unpreventable.” If the van driver had struck the child by the side of the road, it would have been reasonably “accidental” only in the sense that he did not intend to do it. Would this have just been “bad luck”? The psychologist Richard Wiseman has demonstrated in experiments that people are also capable of making their own “luck.” For example, people who know lots of people are more likely to have seemingly lucky “small-world” encounters than those who do not (and those who did not have many such chance meetings more often viewed themselves as “unlucky”).
We cannot entirely prevent “bad luck” from landing on our doorstep, but the van driver dialing his cell phone, the one who narrowly missed the kids in the DriveCam video, was virtually throwing open his door and inviting it inside. DriveCam’s hindsight makes it glaringly easy to see all the things drivers were doing wrong. The question is why the drivers themselves could not see them at the time. Why do people act in ways that put themselves and others at unnecessary risk? Are they being negligent, ignorant, overconfident, just plain dumb—or are they just being human? Can we actually learn from our mistakes before they have real consequences?
Psychologists have demonstrated that our memory, as you might expect, is tilted in favor of more recent things. We also tend to emphasize the ends of things—as, for example, when told a series of facts and later asked to recall the entire series. Studies have confirmed that people are less likely to remember traffic accidents the further back in time they happened. In this same way, a near crash or a crash might loom more vividly than the things that led up to it. “Almost rear-ending someone will stick in your mind, but that freezing it and remembering it comes at the cost of losing the precipitating events,” Rusty Weiss, director of DriveCam’s consumer division, explained. Time also takes its toll. A study led by Peter Chapman and Geoff Underwood at the University of Nottingham in England found that drivers forgot about 80 percent more of their near crashes if they were first asked about them two weeks later than if they were asked at the end of their trip. This is exactly the point with DriveCam: It does not let you forget the precariousness of your existence on the road.
Weiss, who came to DriveCam after setting up a program to put the camera in the cars of teenage drivers in a trial in Minnesota, theorizes that this amnesia for what helped lead up to a crash, something we are all subject to, troubles beginning drivers in particular. They are the ones, ironically, who are constantly finding themselves moving in and out of risky situations. “These kids should be learning rapidly,” he says. “There’s lots of learning opportunities, yet they continue making mistakes. At the moment they say it wasn’t their fault, but then they see the video and go, ‘Oh my God.’ It’s like video feedback for your golf swing. It makes you aware of things you’re not aware of when you’re there in the moment.”
The problem may be that they are simply forgetting the moments they should be learning from. Another study by Chapman and Underwood found that when drivers were shown videos of hazardous driving situations, novice drivers were less likely to remember details from the event than were more experienced drivers.
One reason may have been that they were not looking in the right places. Researchers have long known that inexperienced drivers have much different “visual search” patterns than more experienced drivers. They tend to look overwhelmingly near the front of the car and at the edge markings of the road. They tend not to look at the external mirrors very often, even while doing things like changing lanes. Knowing where to look—and remembering what you have seen—is a hallmark of experience and expertise. In the same way that eye-tracking studies have shown reliable differences in the way artists look at paintings versus the way nonartists do (the latter tend to zero in on things like faces, while artists scan the whole picture), researchers studying driver behavior can usually tell by a driver’s glance activity how experienced they are.
Teenage drivers were, in many ways, the perfect next step for DriveCam. Like the drivers of commercial vehicles, teens are often driving someone else’s car, and they are driving under the supervision of a higher authority—in this case, Mom and Dad. A trial in Iowa put DriveCams in the cars of twenty-five high school students for eighteen weeks. Triggered events were sent to parents, and the scores (using an anonymous ID) were posted so the drivers could judge exactly where they stood in relation to their peers. According to Daniel McGehee, the trial’s head and director of the Human Factors and Vehicle Safety Research Program at the University of Iowa’s Public Policy Center, teenagers in Iowa, because of its agricultural character, can begin driving to school at fourteen. “That crash rate is absolutely out of sight,” he said. Teenagers in Iowa also drive a lot: In thirteen months of driving, the twenty-five drivers put over 360,000 miles on the odometer, many of them on the statistically most dangerous roads: rural two-lane highways.
The early clips he showed were indeed troubling: drivers sailing heedlessly through red lights, or singing and looking around absentmindedly before flying off a curve into a cornfield. Admittedly, I felt a bit uneasy peering into this little cocoon of privacy during these moments of raw, unfiltered emotion. Apparently the teens, in this age of reality television, were not so shy. The DriveCam contains a button that drivers can press to add a comment about a triggered event. Some teens used it to record diary entries, a sort of dashboard confessional about events in their lives outside the car. Driving also provided a revealing window onto the social lives of teens, McGehee told me. “We could tell when someone got a new girlfriend or boyfriend. They would drive more aggressively to show off.”
But it was the safety effects, not the video confessions or dating habits, that interested the researchers. When I spoke to McGehee later, he was in the sixteenth week of the trial. “The riskiest drivers dropped their safety-relevant behaviors by seventy-six percent,” he said. “The farther we get into this, the risky behaviors are just drying up.” Whereas before, the riskiest drivers had been triggering the device up to ten times a day, McGehee said, they were now triggering it only once or twice a week. “Even the magnitude of those triggers is pretty benign relative to their early days,” he noted. “They still might be taking a corner a little too fast but it might be right above the threshold.”
What was really happening to the teens? Were they afraid of getting in trouble with their parents? Were they just seeing their own mistakes for the first time? Or were they simply gaming the system, trying to crack the code like they do with their SATs? “I think what you see is that drivers in this pure behavioral psychology loop are becoming sensors themselves,” McGehee said. “This little accelerometer in there—they start to sense over time what the limit is.” As DriveCam’s Weiss put it, “One kid said, ‘I figured out how to beat the system. I just look way ahead and anticipate traffic and slow down for corners, and I haven’t set it off in a month.’” He was, whether he realized it or not, acting like a good driver.
But what happens when the DriveCam is gone? “I don’t pretend to represent DriveCam as anything but an extrinsic motivation system,” Moeller had said. He admits that in the early days of a DriveCam trial, the mere presence of the camera is enough to get drivers to act more cautiously, in a version of the famous “Hawthorne effect,” which says that people in an experiment change their behavior simply because they know they are in an experiment. But without any follow-up coaching, without “closing the feedback loop,” results begin to erode. “The driver starts to think, ‘The camera’s not intrusive at all. Nothing’s ever going to happen—this is just there so in case I get in a crash this will record who was at fault,’” Moeller said. “When you inject coaching in, then he realizes there is an immediate and certain consequence for his risky driving behavior. That twenty-second loss of privacy is enough for most people.”
The things that DriveCam finds itself coaching drivers on most often do not involve actual driving skills per se—like cornering ability or obstacle avoidance—but mistakes that are born from overconfidence. The most striking example of this came in a trial that Weiss, then with the Mayo Clinic in Minnesota, did with an ambulance company that was trying to improve the “ride experience” for patients. One might think the DriveCam would have been triggered quite regularly in emergency situations, when the drivers, with lights and sirens, were speeding their patients to the hospital, careening around corners, and slaloming through red lights. That was not the case. “It’s actually smoother when you have the red lights and siren on, is how it turned out,” Weiss explained. “We triggered more events—we had harder cornering and more erratic driving—when they were just doing their own thing.” Weiss, himself a former ambulance driver and paramedic, suspected he knew why. “The big difference between running lights and a siren and your normal driving is that you’re focused. They’re seeing the hazards that are out there and they’re slowing sooner when someone can’t see them. Smoother is quicker when you’re running lights and a siren.”
Since most of us don’t have sirens and lights, our driving is of the everyday variety. As the sense of routine begins to take over, we begin to ratchet up our sense of the possible—how close we can follow, how fast we can take curves—and become conditioned to each new plateau. We forget those things that the Stanford researchers were learning as they tried to teach their robot to drive: It is not as easy as it appears. Lisk, who had that morning reviewed a sheaf of collision reports, said that “the large majority were just people who didn’t have enough space, or were not attentive enough. A lack of good old-fashioned basic driving skills was a huge part of it.”
He showed one clip, of a driver moving rather quickly down an open lane toward a tollbooth, flanked on either side by queues of cars. “The driver’s thinking it’s wide open. It’s a football mentality—I’ve got all my blockers and I can go,” Lisk said. It’s as if the driver has already imagined himself to have passed through the lines of cars and past the open tollbooth. There is just one problem: All those other drivers are eagerly salivating over that same space. “Because they’re boxed in they’ve got to come in a pretty abrupt angle and at low speed,” Lisk said. “We see a lot of collisions where the driver hasn’t slowed down enough when they’re approaching that high-risk, open-lane situation.”
This may help explain why E-ZPass-style automated payment lanes at tollbooths, which should theoretically help reduce crashes at these statistically risky areas—drivers no longer have to fumble for change—have been shown to increase crash rates. Drivers approach at a higher speed, with nothing to stop them from zooming through the toll plaza, while other cars, finding themselves in the “wrong” lanes, dart out and jockey among lanes more than they would have under the old system, in which there was less chance of finding a shorter queue.
Each month, DriveCam receives more than fifty thousand of these triggered clips, making it, Moeller said, the world’s largest “repository of risky driving behavior.” The technology of the camera is allowing glimpses into what has been, for most of the automobile’s existence, a kind of closed world: the inner life of the driver.
“Driver behavior” has previously been teased out through things like driving simulators, test tracks, or actually having a researcher sit in the car, clipboard in hand—none of which is quite like real-world driving. Cars could be watched from the outside, via cameras or lab assistants on highway overpasses, but that did not give any glimpse into what the driver was doing. The study of crashes was based largely on police investigations and witness reports, which are both prone to distortion—the latter particularly so.
People are more likely to assign blame to one person or another when a crash is severe, research has shown, than when it is minor. In another study, a group of people were shown films of car crashes. When the subjects were asked, a week later, to gauge the speed of various cars in the films, they estimated higher speeds when the questions used the word “smash,” versus words like “hit” or “contacted.” More subjects remembered seeing broken glass when the word “smash” was used, even though no glass was broken. A driver’s own memory of events is usually clouded by a desire to lessen their own responsibility for an event (perhaps so as not to conflict with their enhanced self-image or to avoid legal liability). “Baker’s law,” named after crash reconstructionist J. Stannard Baker, notes that drivers “tend to explain their traffic accidents by reporting circumstances of lowest culpability compatible with credibility”—that is, the most believable story they can get away with.
Most elusive of all, before DriveCam-style devices, were the crashes that almost happened. There was no way to determine why and how they nearly occurred (or did not), nor how often these near misses took place. If the top of the triangle was murky, the bottom of the triangle was as vast a mystery as the deepest ocean floor.
That has now changed, and large-scale studies, using technology like DriveCam’s, are providing new clues about how drivers behave and, most important, new insight into just why we encounter trouble on the road. The answer is not so much all the things that the road signs warn us about—the high winds on bridges or the deer crossing the highway. Nor is it mostly tire blowouts, faulty brakes, or the mechanical flaws that prompt car makers to issue recalls (“human factors” are said to account for 90 percent of all crashes). Nor does it seem to be “driver proficiency” or our ability to understand traffic signals.
What seems to give us the most trouble, apart from our overconfidence and lack of feedback in driving, are the two areas in which Stanley and Junior, Stanford’s clumsy robot drivers, have a decided edge. The first is the way we sense and perceive things. As amazing as this process is, we do not always interpret things correctly. More important, we aren’t always aware of this fact. The second thing that separates us from Stanley and Junior on the road is that we are not driving machines: We cannot keep up a constant level of vigilance. Once we feel we have things under control, we begin to act differently. We look out the window or talk on a cell phone. Much of our trouble, as I will show in the next chapter, comes because of our perceptual limitations, and because we cannot pay attention.