The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life - Robert Trivers (2011)

Chapter 9. Self-Deception in Aviation and Space Disasters

Disasters are always studied in retrospect. We will not have an experimental science of the subject anytime soon. Disasters range from the personal—your wife tells you she is leaving you for the mailman—to the global—your country invades the wrong nation, with catastrophic effects all around. Disasters, of course, are expected to be closely linked to self-deception. There is nothing like being unconscious of reality to make it intrude upon you in unexpected and painful ways. In this chapter we will concentrate on one kind of disaster—airplane and space crashes—because they typically are subject to intensive investigation immediately afterward to figure out the causes and avoid repetition. For our purposes, these accidents help us study the cost of self-deception in depth under highly controlled circumstances. The disasters produce a very detailed and well-analyzed body of information on their causes, and they form a well-defined category. As we shall see, there are repeated ties to self-deception at various levels: the individual, pairs of individuals (pilot and copilot), institutions (NASA), and even countries (Egypt).

But there is one striking difference between space and aviation disasters. In the United States, aviation disasters are immediately and intensively studied by teams of experts on twenty-four-hour notice in an institution designed to be insulated from outside interference, the National Transportation Safety Board. The NTSB generally does a superb job and publicizes its findings quickly. It almost always discerns key causes and then makes appropriate recommendations, which appear to have helped reduce the accident rate steadily for some thirty years, so that flying is, by far, the safest form of travel. I know of only one case of a delayed report (about three years) and this was because of interference on the international level, when Egypt fought the truth to the bitter end.

By contrast, NASA’s accidents are investigated by a committee appointed to study only a specific disaster, with no particular expertise, and sometimes with a preordained and expressed goal to exonerate NASA. Study of one disaster does not prevent another, even when it has many of the same causes identified in the first case. Of course, safety corners can more easily be cut when only the lives of a few astronauts are at stake, instead of the great flying public, including airline personnel.

Aviation disasters usually result from multiple causes, one of which may be self-deception on the part of one or more key actors. When the actors number more than one, we can also study processes of group self-deception. A relatively simple example of this is the crash of Air Florida Flight 90 in 1982, in which both pilot and copilot appear to have unconsciously “conspired” to produce the disaster.

AIR FLORIDA FLIGHT 90—DOOMED BY SELF-DECEPTION?

On the afternoon of January 13, 1982, Air Florida Flight 90 took off from Washington, D.C.’s National Airport in a blinding snowstorm on its way to Tampa, Florida. It never made it out of D.C., instead slamming into a bridge and landing in the Potomac River—seventy-four people died, and five survivors were fished out of the back of the plane. Perhaps because one of those who died was an old friend of mine from Harvard (Robert Silberglied), I was listening with unusual interest when soon thereafter the evening news played the audiotape of the cockpit conversation during takeoff. The copilot was flying the plane, and you could hear the fear in his voice as he also performed the role the pilot should have been playing, namely reading the instrument panel. Here is how it went:

Ten seconds after starting down the runway, the copilot responds to instrument readings that suggest the plane is traveling faster than it really is: “God, look at that thing!” Four seconds later: “That doesn’t seem right, does it?” Three seconds later: “Ah, that’s not right.” Two seconds later: “Well . . .”

Then the pilot, in a confident voice, offers a rationalization for the false reading: “Yes, it is, there’s 80,” apparently referring to an airspeed of 80 knots. This fails to satisfy the copilot, who says, “Naw, I don’t think that’s right.” Nine seconds later, he wavers: “Ah, maybe it is.” That is the last we hear from the copilot until a second before the crash when he says, “Larry, we’re going down, Larry,” and Larry says, “I know.”

And what was Larry doing all this time? Except for the rationalization mentioned above, he only started talking once the mistake had been made and the plane was past the point of no return—indeed when the device warning of a stall started to sound. He then appeared to be talking to the plane (“Forward, forward.” Three seconds later: “We only want five hundred.” Two seconds later: “Come on, forward.” Three seconds: “Forward.” Two seconds: “Just barely climb.”). Within three more seconds, they were both dead.

What is striking here is that moments before we have a human disaster that will claim seventy-four human lives, including both primary actors, we have an apparent pattern of reality evasion on the part of one key actor (the pilot) and insufficient resistance on the part of the other. On top of this, typical roles were reversed, each playing the other’s: pilot (ostensibly) as copilot and vice versa. Why was the copilot reading the contradictory panel readings while the pilot was only offering a rationalization? Why did the copilot speak while it mattered, but the pilot started talking only when it was too late?

The first thing to find out is whether these differences are specific to the final moments or whether we can find evidence of similar behavior in the past. The answer is clear. In the final forty-five minutes of discussion between the two prior to takeoff, a clear dichotomy emerges. The copilot is reality-oriented; the pilot is not. Consider their discussion of snow on the wings, a critical variable. Pilot: “I got a little on mine.” Copilot: “This one’s got about a quarter to half inch on it all the way.” There were equal amounts of snow on both wings, but the pilot gave an imprecise and diminutive estimate, while the copilot gave an exact description.

And here is perhaps the most important exchange of all, one that occurred seven minutes before takeoff. Copilot: “Boy, this is a losing battle here on trying to de-ice those things. It gives you a false sense of security is all that it does” (!!). Pilot: “This, ah, satisfies the Feds.” Copilot: “Yeah—as good and crisp as the air is and no heavier than we are, I’d . . . ” Here is the critical moment in which the copilot timidly advanced his takeoff strategy, which presumably was to floor it—exactly the right strategy—but the pilot cut him off midsentence and said, “Right there is where the icing truck, they oughta have two of them, pull right.” The pilot and copilot then explored a fantasy together on how the plane should be deiced just before takeoff.

Note that the copilot began with a true statement—they had a false sense of security based on a de-icing that did not work. The pilot noted that this satisfies the higher-ups but then switched the discussion to the way the system should work. Though not without its long-term value, this rather distracts from the problem at hand—and at exactly the moment when the copilot suggests his countermove. But he tried again. Copilot: “Slushy runway, do you want me to do anything special for this or just go for it?” Pilot: “Unless you got something special you would like to do.” No help at all.

The transcript suggests how easily the disaster could have been averted. Imagine the earlier conversation about snow on the wings and slushy conditions underfoot had induced a spirit of caution in both parties. How easy it would have been for the pilot to say that they should go all-out but be prepared to abort if they felt their speed was insufficient.

A famous geologist once surveyed this story and commented: “You correctly blame the pilot for the crash, but maybe you do not bring out clearly enough that it was the complete insensitivity to the copilot’s doubts, and to his veiled and timid pleas for help, that was the root of all this trouble. The pilot, with much more experience, just sat there completely unaware and without any realization that the copilot was desperately asking for friendly advice and professional help. Even if he (the pilot) had gruffly grunted, ‘If you can’t handle it, turn it over to me,’ such a response would have probably shot enough adrenaline into the copilot so that he either would have flown the mission successfully or aborted it without incident.” It is this dreadful, veiled indecision that seems to seal the disaster: the copilot tentative, uncertain, questioning, as indeed he should be, yet trying to hide it, and ending up dead in the Potomac.

The geologist went on to say that in his limited experience in mountain rescue work and in abandoned mines, the people who lead others into trouble are the hale and hearty, insensitive jocks trying to show off. “They cannot perceive that a companion is so terrified he is about to ‘freeze’ to the side of the cliff—and for very good reasons!” They in turn freeze and are often the most difficult to rescue. In the case of Flight 90, it was not just the wings that froze, but the copilot as well, and then so did the pilot, who ended up talking to the airplane.

Earlier decisions infused with similar effects contributed to the disaster. The pilot authorized “reverse thrust” to back the airplane out of its parking position at the gate. It was ineffective in this role but apparently pushed the ice and snow to the forward edge of the wing, where they would do the most damage, and at the same time blocked a key sensor, so that the instruments registered a higher speed than was in fact being achieved. The pilot has been separately described as overconfident and inattentive to safety details. The presumed benefit in daily life of his style is the appearance of greater self-confidence and the success that this sometimes brings, especially in interactions with others.

It is interesting that the pilot/copilot configuration in Flight 90 (copilot at the helm) is actually the safer of the two. Even though on average the pilot is flying about half the time, more than 80 percent of all accidents occur when he is doing so (in the United States, 1978–1990). Likewise, many more accidents occur when the pilot and copilot are flying for the first time together (45 percent of all accidents, while safe flights have this degree of unfamiliarity only 5 percent of the time). The notion is that the copilot is even less likely to challenge mistakes of the pilot than vice versa, and especially if the two are unfamiliar with each other. In our case, the pilot is completely unconscious, so he is not challenging anyone. The copilot is actually challenging himself but, getting no encouragement from the pilot, he lapses back into ineptitude.
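How lopsided are these numbers? A rough, back-of-envelope conversion into relative risks (my own illustration, using only the percentages just quoted, not the NTSB's analysis) looks like this:

```python
# Back-of-envelope relative risks from the percentages quoted above (illustrative only).

# Captain at the controls: ~50 percent of flying time, but >80 percent of accidents.
p_accident_pilot_flying = 0.80   # share of accidents with the captain flying
p_time_pilot_flying = 0.50       # share of time the captain is flying

risk_ratio = (p_accident_pilot_flying / p_time_pilot_flying) / \
             ((1 - p_accident_pilot_flying) / (1 - p_time_pilot_flying))
print(f"Accident risk, captain flying vs. copilot flying: ~{risk_ratio:.0f}x")  # ~4x

# First-time pairings: 45 percent of accident flights vs. 5 percent of safe flights.
unfamiliar_in_accidents = 0.45
unfamiliar_in_safe_flights = 0.05

odds_ratio = (unfamiliar_in_accidents / (1 - unfamiliar_in_accidents)) / \
             (unfamiliar_in_safe_flights / (1 - unfamiliar_in_safe_flights))
print(f"Accident odds, first-time crews vs. familiar crews: ~{odds_ratio:.0f}x")  # ~16x
# Crude ratios, ignoring confounds such as route, weather, and aircraft type.
```

In other words, on these figures a leg flown by the captain carries roughly four times the accident risk of one flown by the copilot, and an unfamiliar crew carries roughly sixteen times the accident odds of a familiar one.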

Consider now an interesting case from a different culture. Fatal accident rates for Korean Air between 1988 and 1998 were about seventeen times higher than for a typical US carrier, so high that Delta and Air France suspended their flying partnership with Korean Air, the US Army forbade its troops from flying with the airline, and Canada considered denying it landing rights. An outside group of consultants was brought in to evaluate the problem and concluded, among other factors, that Korea, a society relatively high in hierarchy and power dominance, was not preparing its copilots to act assertively enough. Several accidents could have been averted if the relatively conscious copilot had felt able to communicate effectively with the pilot to correct his errors. The culture in the cockpit was perhaps symbolized when a pilot backhanded a copilot across the face for a minor error, a climate that does not readily invite copilots to take strong stands against pilot mistakes. The consultants argued for emphasizing copilot independence and assertion. Even the insistence on better mastery of English—itself critical to communicating with ground control—improved equality in the cockpit, since English lacks the built-in hierarchical markers to which Koreans readily respond when speaking Korean. In any case, since the intervention, Korean Air has had a spotless safety record. The key point is that hierarchy may impede information flow—two are in the cockpit, but with sufficient dominance, it is actually only one.

A similar problem was uncovered in hospitals where patients contract new infections during surgery, many of which turn out to be fatal and could be prevented by simply insisting that the surgeon wash his (or occasionally, her) hands. A steep hierarchy—with the surgeon unchallenged at the top and the nurses carrying out orders at the bottom—was found to be the key factor. The surgeon practiced self-deception, denied the danger of not washing his hands, and used his seniority to silence any voices raised in protest. The solution was very simple. Empower nurses to halt an operation if the surgeon had not washed his hands properly (until then, 65 percent had failed to do so). Rates of death from newly contracted infections have plummeted wherever this has been introduced.

DISASTER 37,000 FEET ABOVE THE AMAZON

Another striking case of pilot error occurred high above the Amazon in Brazil at 5:01 p.m. on September 29, 2006. A small private jet flying at the wrong altitude clipped a Boeing 737 (Gol Flight 1907) from underneath, sending it into a horrifying forty-two-second nosedive to the jungle below, killing all 154 people aboard. The small American executive jet, though damaged, landed safely at a nearby airport with its nine people alive. Again, the pilot of the small jet seemed less conscious than his copilot when the disaster was upon them, but neither was paying attention when the fatal error was made, nor for a long time afterward.

The key facts are not in doubt. The large commercial jet was doing everything it was supposed to do. It was flying at the correct altitude and orientation (on autopilot); its Brazilian pilots were awake, alert, and in regular contact with their flight controllers. In addition, they were fully familiar with the plane they were flying and spoke the local language. The only mistake these pilots made was getting out of bed that morning. By contrast, the American crew was flying a plane of this kind for the first time. They were using the flight itself to master flying the craft by trial and error as they went along. Although they had had limited simulation training on this kind of airplane, they did not know how to read the instrument panel and, as they put it while in flight, were “still working out the kinks” on handling the flight management system. When attempting to do so, they could not compute time until arrival or weather ahead, much less notice whether their transponder was turned off, as soon enough it was. They tried to master the airplane display systems, toyed with a new digital camera, and planned the next day’s flight departure. They chatted with passengers wandering in and out of their cockpit. They did everything but pay attention to the task at hand—flying safely through airspace occupied by other airplanes.

They were, in fact, flying at the wrong altitude, contradicting both normal convention (even numbers in their direction) and the flight plan they had submitted (36,000 feet for the Brasilia–Manaus leg of their trip). But their own error was compounded by that of the Brasilia controller who okayed their incorrect altitude. They had managed to turn off their transponder (or it had done so on its own), so they were flying invisible to other planes and were blind themselves—a transponder warns both oncoming craft of your presence and you of theirs—yet they were completely unaware of this. They were barely in contact with the flight controllers, and when they were, the pilots showed little evidence of language comprehension or of interest in verifying what they thought the controllers were saying (“I have no idea what the hell he said”). They had spoken disparagingly of Brazilians and of the tasks asked of them, such as landing at Manaus.

Their flight plan was simplicity itself. They were to take off from near São Paulo on a direct leg to Brasilia at 37,000 feet; then they were to turn northwest toward Manaus at 36,000, since planes flying in the opposite direction would be coming at 37,000 feet. They then were to land at Manaus. Automatic pilots would attend to everything, and there was only one key step in the whole procedure: go down 1,000 feet when they made their turn high over Brasilia. This is precisely what the flight plan they submitted said they would do, it was the universal rule for flights in that direction, and it was assumed to be true by the flight bearing down on them from Manaus.

It was not, however, what they did. Instead, as they made their turn, they were at that moment busying themselves with more distant matters—trying to calculate the landing distance at Manaus and their takeoff duties the next day. This was part of their larger absorption in trying to master a new plane and its technology. For the next twenty minutes, the mistake was not noticed by either the pilots or the Brazilian air controller who had okayed it, but by then the plane’s transponder was turned off and there was no longer clear evidence to ground control of who and where they were. There is no evidence of deception, only of joking around as if jockeying for status while being completely oblivious to the real problem at hand. This is a recurring theme in self-deception and human disasters: overconfidence and its companion, unconsciousness. Incidentally, it was the copilot who seems first to have realized what may have happened, and he took over flight of the plane, later apologizing repeatedly to the pilot for this act of self-assertion. He was also the first to deny the cause of the accident on arrival and provide a cover-up.

In the example of Air Florida Flight 90, the pilot’s self-deception—and copilot’s insufficient strength in the face of it—cost them their lives. In the case of Gol Flight 1907, both pilots who caused the tragedy survived their gross carelessness while 154 innocents perished. This is a distressing feature of self-deception and large-scale disasters more generally: the perpetrators may not experience strong, nor indeed any, adverse selection. As we shall see, it was not mistakes by astronauts or their own self-deception that caused the Challenger and Columbia disasters but rather self-deception and mistakes by men and women with no direct survival consequences from their decisions. The same can be said for wars launched by those who will suffer no ill effects on their own immediate inclusive fitness (long-term may be another matter), whatever the outcome, even though their actions may unleash mortality a thousand times more intense in various unpredictable directions.

ELDAR TAKES COMMAND—AEROFLOT FLIGHT 593

It is hard to know how to classify the 1994 crash of Aeroflot Flight 593 from Moscow to Hong Kong, so absurd that its truth was covered up in Russia for months. The pilot was showing his children the cockpit and, against regulations, allowed each to sit in a seat and pretend to control the plane, which was actually on autopilot. His eleven-year-old daughter enjoyed the fantasy, but when his sixteen-year-old son, Eldar, took the controls, the teen promptly applied enough force to the steering wheel to deactivate most of the autopilot, allowing the plane to swerve at his whim.

Deactivation of the autopilot turned on a cockpit light (which was missed by the pilots), but more important, the pilot was trapped in a fantasy world in which he encouraged his children to turn the wheel this way and that and then to believe that this had an effect, while in fact the plane was (supposed to be) on autopilot. When his son actually controlled movements, the pilot was slow to realize this was no fantasy; indeed, his son was the first to point out that the plane was actually turning on its own (due to forces unleashed by Eldar’s turning motions), but the plane then quickly banked at such an angle as to force everyone against their seats and the wall so that the pilot could not wrest control of the plane from his son. After a harrowing vertical ascent, the copilot and Eldar managed to get the plane in a nosedive, which permitted control to be reestablished, but alas it was too late. The plane hurtled to the ground, losing all seventy-five aboard. Besides disobeying all standard rules for cockpit behavior, the pilot appeared blissfully unaware that he was doing this high in the air and was becoming trapped in the very fantasy he had created for his children. Of course, it is easy for adults to underestimate the special ability of children to seize control of electromechanical devices.

SIMPLE PILOT ERROR—OR PILOT FATIGUE?

We now turn to self-deception at higher levels of organization—within corporations or society at large—that impedes airline safety. That is, pilot error is compounded by higher-level error. For example, the major cause of fatal airline crashes is said to be pilot error—about 80 percent of all accidents in both 2004 and 2005. This is surely an overestimate, as airlines benefit from high estimates of pilot error. Still, evidence of pilot error is hardly lacking and is usually one of several factors in crashes. We do not know how much of this error is entrained by self-deception, but a common factor in pilot error is one we have already identified: overconfidence combined with unconsciousness of the danger at hand. Certainly this combination appears to have doomed John F. Kennedy Jr. (and his two companions) when he set out on a flight his experienced copilot was unwilling to take—into the gray, dangerous northeastern fog in which a pilot can easily become disoriented, mistake up for down, lose control, and enter a death spiral.

Consider a commercial example, documented by the flight recorder. On a cloudy day in October 2004 at 7:37 in the evening, a twin-engine turboprop approaching the airport at Kirksville, Missouri, was descending too low, too fast, though the pilots could not see the runway lights until they were below three hundred feet and soon were on top of trees. Both pilots and eleven of the thirteen passengers died in the crash. Below ten thousand feet, FAA rules require a so-called sterile cockpit, in which only pertinent communication is permitted, yet both pilots were sharing jokes and cursing frequently below this altitude. They discussed coworkers they did not like and how nice it would be to eat a Philly cheesesteak, but they did not attend to the usual rules regarding rate and timing of descent or to the plane’s warning system alerting them to the rapidly approaching ground below.

Of course, the usual human bias toward self-enhancement makes this negligence more likely: “rules that apply to the average pilot do not apply to better ones, such as me.” The pilot, whose job in this situation was to watch the instruments, said it was all right to descend because he could see the ground. The copilot—whose job was to look for the runway—said he could not see a thing, but he did not challenge the pilot, as rules required him to do. The pilot kept descending as if he could see the runway when he probably saw nothing at all until finally he spotted the landing lights and then immediately the tops of trees. Here we see familiar themes from the crash of Air Florida Flight 90: the irrelevant and distracting talk during takeoff in the first case and landing in this one, pilot overconfidence prevailing over the more reality-oriented but deferential copilot, and the pilot’s failure to read instruments, as was his duty.

It should be mentioned that the pilots could be heard yawning during their descent, and they had spent fourteen hours on the job, after modest sleep. This was their sixth landing that day. Had they followed proper procedure, they still should have been able to land safely, but surely fatigue contributed to their failure to follow procedure, as well as to their degree of unconscious neglect of the risks they were taking.

Now here comes the intervention of self-deception at the next level. In response to this crash, the NTSB recommended that the FAA tighten its work rules for pilots by requiring more rest time, the second time it had done so in twelve years, because the FAA did not act on the first recommendation. In response, the airline industry, represented by its lobbying organization, the Air Transport Association, argued that this was an isolated incident that did not require change in FAA rules. (If accidents were not isolated incidents, we would not get on airplanes.) “The current FAA rules . . . ensure a safe environment for our crews and the flying public.” Of course, they do no such thing: they save the airlines money by requiring fewer flight crews. And note the cute form of the wording: “our crews” comes first—we would hardly subject our own people to something dangerous—followed by reducing everyone else to “the flying public.” But neither management nor lobbyists are part of the flight crew, and predictably, the Air Line Pilots Association backed the rule change. True to form, in March 2009, seven airlines sued in federal court to overturn a recent FAA rule that imposed forty-eight-hour rest periods between twenty-hour flights (e.g., Newark to Hong Kong), a decision that followed earlier pioneering work by Delta Airlines to institute the rule and to provide proper sleeping quarters for the pilots during their nearly daylong flight. The fiction is that the FAA represents the so-called flying public; the truth is that it represents the financial interests of the airlines and represents the general public only reluctantly and in response to repeated failures.

ICE OVERPOWERS THE PILOTS; AIRLINES OVERPOWER THE FAA

Ice poses a special problem for airplanes. Ice buildup on the wings increases the plane’s weight while changing the pattern of airflow over both the main wings and the small rear control wings. This reduces lift and in some cases results in rapid loss of control, signaled by a sudden pitch and a sharp roll to one side. The controls move on their own, sometimes overpowering counterefforts by the pilots. Commuter planes are especially vulnerable because they commonly fly at lower altitudes, such as ten thousand feet, at which freezing drizzle is more common. When icing results in loss of control, the plane turns over and heads straight to the ground.

To take an example, on October 31, 1994, American Eagle Flight 4184 from Indianapolis had been holding at ten thousand feet in a cold drizzle for thirty-two minutes with its de-icing boot raised (to break some of the ice above it), when it was cleared by Chicago air traffic controllers to descend to eight thousand feet in preparation for landing. Unknown to the pilots, a dangerous ridge of ice had built up on the wings, probably just behind the de-icing boot, so that as the pilots dipped down, they almost immediately lost control. The plane’s controls moved on their own but on the right wing only, immediately tilting the plane almost perpendicular to the ground. The pilots managed to partly reverse the roll before the (top-heavy) plane flipped upside down and hit the ground at a 45-degree angle in a violent impact that left few recognizable pieces, including any of the sixty-eight people aboard.

This was an accident that did not need to happen. This kind of airplane (ATR 42 or 72 turboprops) had a long history of alarming behavior under icing conditions, including twenty near-fatal losses of control and one crash in the Alps in 1987 that killed thirty-seven people. Yet the problem kept recurring because safety recommendations were met by strong resistance from the airlines—which would have to pay for the necessary design changes—and the FAA ended up acting like a biased referee, approving relatively inexpensive patches that probably reduced (at least slightly) the chance of another crash but did not deal with the problem directly. As one expert put it, “Until the blood gets deep enough, there is a tendency to ignore a problem or live with it.” To wait until after a crash to institute even modest safety improvements is known as tombstone technology. The regulators and airline executives are, in effect, conscious of the personal cost—immediate cost to the airlines in mandated repairs and bureaucratic cost to any regulator seen as unfriendly to the airlines—while being unconscious of the cost to passengers.

In the United States, the NTSB analyzes the causes of an airline disaster, relying on objective data (cockpit and flight recorders, damage to the aircraft, and so on) to determine the cause, and then makes the obvious recommendations. The theory is that this relatively modest investment in safety will pay for itself in future airplane design and pilot training to minimize accidents. In reality, everything works fine until the recommendation stage, when economic interests intervene to thwart the process. This is well demonstrated by the FAA’s inability to respond appropriately to the problem of ice buildup on smaller, commuter airplanes, a problem well known for more than twenty years yet claiming a new set of lives about every eight years, most recently on February 12, 2009, in Buffalo, New York, leaving fifty dead.

A deeper problem within the FAA was its unwillingness to reconsider basic standards for flying under icing conditions, as indeed had been requested by the pilots’ union. The FAA based its position on work done in the 1940s that had concluded that the chief problem was tiny droplets, not freezing rain (larger droplets), but science did not stop in the ’40s, and there was now plenty of evidence that freezing rain could be a serious problem. But this is one of the most difficult changes to make: to change one’s underlying system of analysis and logic. This could lead to wholesale redesign at considerable cost to—whom?—the airlines. So it was patchwork all the way around. There is also an analogy here to the individual. The deeper changes are the more threatening because they are more costly. They require more of our internal anatomy, behavior, and logic to be changed, which surely requires resources, may be experienced as painful, and comes at a cost.

The very symbol of a patch-up approach to safety is the fix the FAA approved for the well-proven habit of these planes of starting to flip over in freezing rain. The fix was a credit-card-size piece of metal to be attached to each wing of a several-ton airplane (not counting passengers—or ice). This tiny piece of metal allegedly would alter airflow over the wings so as to give extra stability. No wonder the pilots’ union (representing those at greatest risk) characterized this as a Band-Aid fix and pointed out (correctly) that the FAA had “not gone far enough in assuring that the aircrafts can be operated safely under all conditions.” The union went on to say that the ATR airplanes had an “unorthodox, ill-conceived and inadequately designed” de-icing system. This was brushed aside by the FAA, a full six years before the Indiana crash, in which the airplane was fully outfitted with the FAA-approved credit-card-size stabilizers.

By the way, to outfit the entire US fleet of commuter turboprops with ice boots twice as large as before the Indiana crash would cost about $2 million. To appreciate how absurdly low this cost is, imagine simply dividing it by the number of paying customers on the ill-fated Indianapolis-to-Chicago trip and asking each customer in midair, “Would you be willing to spend $50,000 to outfit the entire American fleet of similar planes with the larger boot, or would you rather die within the next hour?” But this is not how the public-goods game works. The passengers on the Chicago flight do not know it is their flight out of 100,000 that will go down. Rather, the passengers know they have a 0.99999 chance of being perfectly safe even if they do nothing. Let someone else pay. Even so, I bet everyone would get busy figuring out how to raise the full amount. I certainly would. Of course, if each passenger only had to help install the boots on his or her own plane, about $300 per passenger would suffice. The point is that for trivial sums of money, the airlines routinely put passengers at risk. Of course, they can’t put it this way, so they generate assertions and “evidence” by the bushel to argue that all is well, indeed that every reasonable safety precaution is being taken. Six years before this crash, British scientists measured airflow over icy wings and warned that it tended to put the craft at risk, but these findings were vehemently derided as being wholly unscientific, even though they were confirmed exactly by the NTSB analysis of the Indianapolis–Chicago crash.
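The arithmetic of the thought experiment is worth spelling out. In the sketch below, the fleet cost and the 1-in-100,000 risk are the figures quoted above; the passenger count and fleet size are round numbers I have assumed purely to reproduce the order of magnitude of the “$50,000” and “$300” figures.

```python
# Redoing the back-of-envelope numbers above. The passenger count and fleet size
# are assumptions chosen only to reproduce the order of magnitude in the text.

fleet_cost = 2_000_000        # quoted cost to fit larger boots to the whole US commuter-turboprop fleet
paying_customers = 40         # assumption: roughly what the "$50,000 each" figure implies
seats_per_plane = 64          # assumption: typical ATR 72 seating
fleet_size = 100              # assumption: order-of-magnitude fleet size

print(f"Fleet-wide fix billed to one doomed flight: ${fleet_cost / paying_customers:,.0f} per customer")
print(f"Fix for your own plane only: ${fleet_cost / fleet_size / seats_per_plane:,.0f} per passenger")

# The public-goods catch: before takeoff, each passenger sees only a tiny risk.
p_doomed = 1 / 100_000
print(f"Chance this particular flight goes down: {p_doomed:.5f}")
print(f"Chance of being fine even if nobody pays: {1 - p_doomed:.5f}")   # 0.99999
```

On any version of these numbers, the cost per passenger is trivial next to the risk being run; it is the structure of the game, not the size of the bill, that lets the risk persist.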

Finally, a series of trivial devices was installed in the cockpit and new procedures were mandated for pilot behavior. For example, a device giving earlier warning of icing was installed and pilots were told not to fly with autopilot when this light is on, precisely to avoid being surprised by a sudden roll to one side as the autopilot disengages. But of course this does not address the problem of loss of control under icing. From the very first Italian crash over the Alps, when one of the pilots, finding the control system unresponsive to his efforts, lashed out with an ancient curse on the system’s designers and their ancestors, it has been known that conscious effort to maintain control is not sufficient. And of course, pilots may make matters worse for themselves in a bad situation. In the Buffalo crash, the pilots apparently made a couple of errors, including keeping the plane on autopilot when they lowered their landing gear and deployed the flaps that increase lift. Suddenly there was a severe pitch and roll, suggestive of ice, which in fact had built up on both the wings and the windshield, blocking sight. Although the NTSB attributed the crash to pilot error, the fact that ice had built up, followed by the familiar pitch and roll, suggests a poorly designed airplane as well.

In short, a system has developed in which the pilot may make no errors—and yet the plane can still spin out of control. It is ironic, to say the least, that a basic design problem that deprives a pilot of control of the airplane is being solved by repeatedly refining the pilot’s behavior in response to this fatal design flaw. A pilot’s failure to do any of the required moves, for example, disengage autopilot, will then be cited as the cause. No problem with the airplane; it’s the pilot! But is this not a general point regarding self-deception? In pursuing a path of denial and minimization, the FAA traps itself in a world in which each successive recommendation concerns pilot behavior more and more, rather than actual changes to aircraft design. Thus does self-deception lay the foundations for disaster.

Consider an international example.

THE US APPROACH TO SAFETY HELPS CAUSE 9/11

The tragedy of 9/11 had many fathers. But few have been as consistent in this role as the airlines themselves, at least in failing to prevent the actual aircraft takeovers on which the disaster was based. This is typical of US industrial policy: any proposed safety change comes with an immediate threat of bankruptcy. Thus, the automobile industry claimed that seat belts would bankrupt them, followed by airbags, then child-safety door latches, and whatnot. The airlines’ lobbying organization, the Air Transport Association, has a long and distinguished record of opposing almost all improvements in security, especially if the airlines have to pay for them. From 1996 to 2000 alone, the association spent $70 million opposing a variety of sensible (and inexpensive) measures, such as matching passengers with bags (routine in Europe at the time) or improving security checks of airline workers. They opposed reinforced cabin doors and even the presence of occasional marshals (since the marshals would occupy nonpaying seats). It was common knowledge that the vital role of airport screening was performed poorly by people paid at McDonald’s wages—but without their training—yet airlines spent millions fighting any change in the security status quo. Of course, a calamity such as 9/11 could have severe economic effects as people en masse avoided a manifestly dangerous mode of travel, but the airlines merely turned around and beseeched the government for emergency aid, which they got.

It seems likely that much of this is done “in good conscience,” that is, the lobbyists and airline executives easily convince themselves that safety is not being compromised to any measurable degree, because otherwise they would have to live with the knowledge that they were willing to kill other people in the pursuit of profit. From an outsider’s viewpoint this is, of course, exactly what they are doing. The key fact is that there is an economic incentive to obscure the truth from others—and simultaneously from self.

Only four years after 9/11, the airlines were loudly protesting legislation that would increase a federal security fee from $2.50 to $5.50, despite numerous surveys showing that people would happily pay $3 more per flight to enhance security. Here the airlines did not pay directly but feared only the indirect adverse effects of this trivial price increase. Note that corporate titans appear to slightly increase their own chances of death to hoard money, but with the increasing use of corporate jets, even this is not certain.

We see again patterns of deceit and self-deception at the institutional and group levels that presumably also entrain individual self-deception within the groups. Powerful economic interests—the airlines—prevent safety improvements of vital importance to a larger economic unit, the “flying public,” but this unit is not acting as a unit. The pilots have their own organization and so of course do the (individually) powerful airlines, but the flying public exerts its effects one by one, in choice of airline, class of travel, destination, and so on—not in the relative safety of the flight, about which the public typically knows nothing. The theory is that the government will act on their behalf. Of course, as we have seen, it does not. Individuals within two entities should be tempted to self-deception—within the airlines that argue strenuously for continuation of their defective products and within the FAA, which, lacking a direct economic self-interest, is co-opted by the superior power of the airlines and acts as their rationalizing agent. In the case of NASA, those who sell space capsules to the public and to themselves never actually ride in them.

Regarding the specific event of 9/11 itself, although the United States already had a general history of inattention to safety, the George W. Bush administration even more dramatically dropped the ball in the months leading up to 9/11—first downgrading Richard Clarke, the internal authority on possible terrorist attacks, including specifically those from Osama bin Laden. The administration stated they were interested in a more aggressive approach than merely “swatting at flies” (bin Laden here being, I think, the fly). Bush himself joked about the August 2001 memo saying that bin Laden was planning an attack within the United States. Indeed, he denigrated the CIA officer who had relentlessly pressed (amid code-red terrorist chatter) to give the president the briefing at his Texas home. “All right,” Bush said when the man finished. “You’ve covered your ass now,” as indeed he had, but Bush left his own exposed. So his administration had a particular interest in focusing only on the enemy, not on any kind of missed signals or failure to exercise due caution. Absence of self-criticism converts attention from defense to offense.

THE CHALLENGER DISASTER

On January 28, 1986, the Challenger space vehicle took off from Florida’s Kennedy Space Center and seventy-three seconds later exploded over the Atlantic Ocean, killing all seven astronauts aboard. The disaster was subject to a brilliant analysis by the famous physicist Richard Feynman, who had been placed on the board that investigated and reported on the crash. He was known for his propensity to think everything through for himself and hence was relatively immune to conventional wisdom. It took him little more than a week (with the help of an air force general) to locate the defective part (the O-ring, a simple part of the rocket), and he spent the rest of his time trying to figure out how an organization as large, well funded, and (apparently) sophisticated as NASA could produce such a shoddy product.

Feynman concluded that the key was NASA’s deceptive posture toward the United States as a whole. This had bred self-deception within the organization. When NASA was given the assignment and the funds to travel to the moon in the 1960s, the society, for better or worse, gave full support to the objective: beat the Russians to the moon. As a result, NASA could design the space vehicle in a rational way, from the bottom up—with multiple alternatives tried at each step—giving maximum flexibility, should problems arise, as the spacecraft was developed. Once the United States reached the moon, NASA was a $5 billion bureaucracy in need of employment. Its subsequent history, Feynman argued, was dictated by the need to create employment, and this generated an artificial system for justifying space travel—a system that inevitably compromised safety. Put more generally, when an organization practices deception toward the larger society, this may induce self-deception within the organization, just as deception between individuals induces individual self-deception.

The space program, Feynman argued, was dominated by a need to generate funds, and critical design features, such as manned flight versus unmanned flight, were chosen precisely because they were costly. The very concept of a reusable vehicle—the so-called shuttle—was designed to appear inexpensive but was in fact just the opposite (more expensive, it turned out, than using brand-new capsules each time). In addition, manned flight had glamour appeal, which might generate enthusiasm for the expenses. But since there was very little scientific work to do in space (that wasn’t better done by machines or on Earth), most was make-do work, showing how plants grow absent gravity (gravity-free zones can be produced on Earth at a fraction of the cost) and so on. This was a little self-propelled balloon with unfortunate downstream effects. Since it was necessary to sell this project to Congress and the American people, the requisite dishonesty led inevitably to internal self-deception. Means and concepts were chosen for their ability to generate cash flow and the apparatus was then designed top-down. This had the unfortunate effect that when a problem surfaced, such as the fragile O-rings, there was little parallel exploration and knowledge to solve the problem. Thus NASA chose to minimize the problem and the NASA unit assigned to deal with safety became an agent of rationalization and denial, instead of careful study of safety factors. Presumably it functioned to supply higher-ups with talking points in their sales pitches to others and to themselves.

Some of the most extraordinary mental gyrations in service of institutional self-deception took place within the safety unit. Seven of the twenty-three prior shuttle flights had shown O-ring damage. If you merely plot chance of damage as a function of temperature at time of takeoff, you get a significant negative relationship: lower temperature meant higher chance of O-ring damage. For example, all four flights below 65 degrees F showed some O-ring damage. To prevent themselves—or others—from seeing this, the safety unit performed the following mental operation. They said that sixteen flights showed no damage and were thus irrelevant and could be excluded from further analysis. This is extraordinary in itself—one never wishes to throw away data, especially when it is so hard to come by. Since some of the damage occurred during high-temperature takeoffs, temperature at takeoff could be ruled out as a cause. This example is now taught in elementary statistics texts as an example of how not to do statistics. It is also taught in courses on optimal (or suboptimal) data presentation since, even while arguing against a flight, the engineers at Thiokol, the company that built the O-rings, presented their evidence in such a way as to invite rebuttal. The relevance of the mistake itself could hardly be clearer since the temperature during the Challenger takeoff (below freezing) was more than 20 degrees below the previous lowest takeoff temperature.
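It is easy to see in miniature how the exclusion destroys the signal. The toy calculation below uses invented temperatures arranged to match the pattern described above (seven damaged flights out of twenty-three, every launch below 65 degrees among them); it is an illustration of the statistical blunder, not the actual NASA flight records.

```python
# Toy illustration of NASA's statistical error. The temperatures below are
# invented to match the pattern described in the text (7 damaged flights of 23,
# all four launches below 65 F among them); they are NOT the real flight records.
flights = [
    (53, True), (57, True), (58, True), (63, True),     # the four launches below 65 F: all damaged
    (70, True), (75, True), (78, True),                  # some warm launches were also damaged
    (66, False), (67, False), (67, False), (68, False),
    (68, False), (69, False), (70, False), (72, False),
    (73, False), (75, False), (76, False), (76, False),
    (78, False), (79, False), (80, False), (81, False),  # the sixteen undamaged launches
]

# Correct analysis: use all 23 flights and compare damage rates by temperature.
cold = [damaged for temp, damaged in flights if temp < 65]
warm = [damaged for temp, damaged in flights if temp >= 65]
print(f"Damage rate below 65 F: {sum(cold)}/{len(cold)}")        # 4/4
print(f"Damage rate at or above 65 F: {sum(warm)}/{len(warm)}")  # 3/19

# NASA's move: discard the sixteen undamaged flights, then ask whether
# temperature distinguishes the flights that remain.
damaged_temps = [temp for temp, damaged in flights if damaged]
print(f"Damaged flights span {min(damaged_temps)}-{max(damaged_temps)} F,")
print("so 'temperature is ruled out' -- but only because the undamaged flights,")
print("which carried the signal, were thrown away.")
```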

On the previous coldest flight (at a balmy 54 degrees), an O-ring had been eaten one-third of the way through. Had it been eaten all the way through, the flight would have blown up, as did the Challenger. But NASA cited this case of one-third damage as a virtue, claiming to have built in a “threefold safety factor.” This is a most unusual use of language. By law, you must build an elevator strong enough that the cable can support a full load and run up and down a number of times without any damage. Then you must make it eleven times stronger. This is called an elevenfold safety factor. NASA has the elevator hanging by a thread and calls it a virtue. They even used circular arguments with a remarkably small radius: since manned flight had to be much safer than unmanned flight, it perforce was. In short, in service of the larger institutional deceit and self-deception, the safety unit was thoroughly corrupted to serve propaganda ends, that is, to create the appearance of safety where none existed. This must have aided top management in their self-deception: less conscious of safety problems, less internal conflict while selling the story.

There is thus a close analogy between self-deception within an individual and self-deception within an organization—both serving to deceive others. In neither case is information completely destroyed (all twelve Thiokol engineers had voted against flight that morning, and one was vomiting in his bathroom in fear shortly before takeoff). The truth is merely relegated to portions of the person or the organization that are inaccessible to consciousness (we can think of the people running NASA as the conscious part of the organization). In both cases, the entity’s relationship to others determines its internal information structure. In a non-deceitful relationship, information can be stored logically and coherently. In a deceitful relationship, information will be stored in a biased manner the better to fool others—but with serious potential costs. However, note here that it is the astronauts who suffer the ultimate cost, while the upper echelons of NASA—indeed, the entire organization minus the dead—may enjoy a net benefit (in employment, for example) from this casual and self-deceived approach to safety. Feynman imagined the kinds of within-organization conversations that would bias information flow in the appropriate direction. You, as a working engineer, might take your safety concern to your boss and get one of two responses. He or she might say, “Tell me more” or “Have you tried such-and-such?” But if he or she replied, “Well, see what you can do about it” once or twice, you might very well decide, “To hell with it.” These are the kinds of interactions—individual on individual (or cell on cell)—that can produce within-unit self-deception. And have no fear, the pressures from overhead are backed up with power, deviation is punished, and employment is put at risk. When the head of the engineers told upper management that he and other engineers were voting against the flight, he was told to “take off your engineering hat and put on your management hat.” Without even producing a hat, this did the trick and he switched his vote.

There was one striking success of the safety unit. When asked to guess the chance of a disaster occurring, they estimated one in seventy. They were then asked to provide a new estimate and they answered one in ninety. Upper management then reclassified this arbitrarily as one in two hundred, and after a couple of additional flights, as one in ten thousand, using each new flight to lower the overall chance of disaster into an acceptable range. As Feynman noted, this is like playing Russian roulette and feeling safer after each pull of the trigger fails to kill you. In any case, the number produced by this logic was utterly fanciful: you could fly one of these contraptions every day for thirty years and expect only one failure? The original estimate turned out to be almost exactly on target. By the time of the Columbia disaster, there had been 126 flights with two disasters for a rate of one in sixty-three. Note that if we tolerated this level of error in our commercial flights, three hundred planes would fall out of the sky every day across the United States alone. One wonders whether astronauts would have been so eager for the ride if they had actually understood their real odds. It is interesting that the safety unit’s reasoning should often have been so deficient, yet the overall estimate exactly on the mark. This suggests that much of the ad hoc “reasoning” was produced under pressure from the upper ranks after the unit had surmised correctly. There is an analogy here to individual self-deception, in which the initial, spontaneous evaluation (for example, of fairness) is unbiased, after which higher-level mental processes introduce the bias.
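The claim that the original estimate was "almost exactly on target" is easy to check with a simple binomial calculation. The sketch below uses only the figures in the text and is my illustration of the reasoning, not Feynman's own computation.

```python
from math import comb

def prob_at_least(k, n, p):
    """Probability of at least k disasters in n independent flights,
    each ending in disaster with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_flights = 126   # flights flown by the time of Columbia, as cited above
disasters = 2     # Challenger and Columbia

for label, p in [("engineers' estimate, 1 in 70", 1 / 70),
                 ("management's figure, 1 in 10,000", 1 / 10_000)]:
    print(f"{label}: P(at least 2 disasters in {n_flights} flights) = "
          f"{prob_at_least(disasters, n_flights, p):.4f}")

# Under the engineers' number, two disasters in 126 flights is roughly a coin flip;
# under management's number, it is about a 1-in-13,000 fluke.
print(f"Observed rate: 2/{n_flights}, or about 1 in {n_flights // disasters}")
```

Two disasters in 126 flights is about what the engineers' one-in-seventy figure predicts, and wildly improbable under management's one-in-ten-thousand.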

There is an additional irony to the Challenger disaster. This was an all-American crew: an African American, a Japanese American, and two women—one a schoolteacher who was to teach a class to schoolchildren across the nation from space, a stunt of marginal educational value. Yet the stunt helped entrain the flight, since if the flight was postponed, the next possible date was in the summer, when children would no longer be in school to receive their lesson. Thus was NASA hoisted on its own petard. Or as has been noted, the space program shares with gothic cathedrals the fact that each is designed to defy gravity for no useful purpose except to aggrandize humans. Although many would say that the primary purpose of cathedrals was to glorify God, those who commissioned and built them were often aggrandizing themselves. One wonders how many more people died building cathedrals than flying space machines.

THE COLUMBIA DISASTER

It is extraordinary that seventeen years later, the Challenger disaster would be repeated, with many elements unchanged, in the Columbia disaster. Substitute “foam” for “O-ring” and the story is largely the same. In both cases, NASA denied they had a problem, and in both cases it proved fatal. In both cases, the flight itself had little in the way of useful purpose but was done for publicity purposes: to generate funding and/or meet congressionally mandated flight targets. As before, the crew was a multicultural dream: another African American, two more women (one of whom was Indian), and an Israeli who busied himself on the flight collecting dust over (where else?) the Middle East. Experiments designed by children in six countries on spiders, silkworms, and weightlessness were duly performed. In short, as before, there was no serious purpose to the flight; it was a publicity show.

The Columbia spacecraft took off on January 16, 2003 (another relatively cold date), for a seventeen-day mission in space. Eighty-two seconds after launch, a 1.7-pound chunk of insulating foam broke off from the rocket, striking the leading edge of the left wing of the space capsule, and (as was later determined) apparently punching a hole in it about a foot in diameter. The insulating foam was meant to protect the rocket from cold during takeoff, and there was a long history of foam breaking off during flight and striking the capsule. Indeed, on average thirty small pieces struck on every flight. Only this time the piece of foam was one hundred times larger than any previously seen. On the Atlantis flight in December 1988, 707 small particles of foam hit the capsule, which, in turn, was inspected during orbit with a camera attached to a robotic arm. The capsule looked as though it had been blasted with a shotgun. It had lost a heat-protective tile but was saved by an aluminum plate underneath. As before, rather than seeing this degree of damage as alarming, the fact that the capsule survived reentry was taken as evidence that foam was not a safety problem. But NASA did more. Two flights before the Columbia disaster, a piece of foam had broken off from the bipod ramp and dented one of the rockets, but shuttle managers formally decided not to classify it as an “in-flight anomaly,” though all similar events from the bipod ramp had been so classified. The reason for this change was to avoid a delay in the next flight, and NASA was under special pressure from its new head to make sure flights were frequent. This is similar to the artificial pressure for the Challenger to fly to meet an external schedule.

The day after takeoff, low-level engineers assigned to review film of the launch were alarmed at the size and speed of the foam that had struck the shuttle. They compiled the relevant footage and e-mailed it to various superiors, engineers, and managers in charge of the shuttle program itself. Anticipating that their grainy photos would need to be replaced by much more accurate and up-to-date footage, they presumed on their own to contact the Department of Defense and ask that satellite or high-resolution ground cameras be used to photograph the shuttle in orbit. Within days the Air Force said it would be happy to oblige and made the first moves to satisfy this request. Then an extraordinary thing happened. Word reached a higher-level manager who normally would have cleared such a request with the Air Force. At once, she asked her superiors whether they wanted to know the requested information. They said no. Armed with this, she told the Air Force they no longer needed to provide the requested information and that the only problem was underlings who failed to go through proper channels! On such nonsense, life-and-death decisions may turn.

This is vintage self-deception: having failed to deal with the problem over the long term, having failed to prepare for a contingency in which astronauts are alive in a disabled capsule unable to return to Earth, the NASA higher-ups then decide to do nothing at all except avert their eyes and hope for the best. With fast, well-thought-out action, there was just barely time to launch a flight that might reach the astronauts before their oxygen expired. It would have required a lot of luck, with few or no hitches during countdown, so it was unlikely. An alternative was for the astronauts to attempt crude patches on the damaged wing itself. But why face reality at this point? They had made no preparation for this contingency, and they would be making life-and-death decisions with all the world watching. Why not make it with no one watching, including themselves? Why not cross their fingers and go with the program? Denial got them where they were, so why not ride it all the way home?

The pattern of instrument failure before disintegration and the wreckage itself made it abundantly clear that the foam strike filmed during takeoff must have brought down the Columbia, but people at NASA still resisted, denying that it was even possible for a foam strike to have done such damage and deriding those who thought otherwise as “foam-ologists.” For this reason, the investigating commission decided to put the matter to a direct test. They fired foam pieces of the correct weight at different angles to the left sides of mock-ups of the spacecraft. Even this NASA resisted, insisting that the test use only the small pieces of foam that NASA had modeled! The key shot was the one that mimicked most closely the actual strike, and it blew a hole in the capsule big enough to put your head through. That was the end of that: even NASA folded its tent. But note that denial (of the problem ahead of time) entrained denial (of the ongoing problem), which entrained denial (after the fact). As we have noted in other contexts, this is a characteristic feature of denial: it is self-reinforcing.

The new safety office created in response to the Challenger explosion was also a fraud, as described by the head of the commission that later investigated the Columbia disaster, with no “people, money, engineering experience, [or] analysis.” Two years after the Columbia crash, the so-called broken safety culture (twenty years and counting) at NASA still had not changed, at least according to a safety expert and former astronaut (James Wetherbee). Under pressure to stick to budget and flight schedules, managers continue to suppress safety concerns from engineers and others close to reality. Administrators ask what degree of risk is acceptable, when the questions should be what degree of risk is necessary and how to eliminate the rest. A recent poll showed the usual split: 40 percent of managers in the safety office thought the safety culture was improving, while only 8 percent of workers saw it that way. NASA’s latest contributions to safety are a round table in the conference room instead of a rectangular one, meetings allowed to last more than half an hour, and an anonymous suggestion box. These hardly seem to go to the heart of the problem.

That the safety unit should have been such a weak force within the organization is part of a larger problem of organizational self-criticism. It has been argued that organizations often evaluate their behavior and beliefs poorly because the organizations turn against their evaluation units, attacking, destroying, or co-opting them. Promoting change can threaten jobs and status, and those who are threatened are often more powerful than the evaluators, leading to timid and ineffective self-criticism and inertia within the organization. As we have seen, such pressures have kept the safety evaluation units in NASA crippled for twenty years, despite disaster after disaster. This is also a reason that corporations often hire outsiders, at considerable expense, to come in and make the evaluation for them, analogous perhaps to individuals who at considerable expense consult psychotherapists and the like. Even grosser and more costly failures of self-criticism occur at the national level, and we will refer to some of these when we discuss war (see Chapter 11).

EGYPT AND EGYPTAIR DENY ALL

A most unusual accident occurred on October 31, 1999, when EgyptAir Flight 990 took off from New York’s JFK Airport bound for Cairo. It climbed to 33,000 feet, flew normally for about half an hour on a calm night, and then suddenly (in about two minutes) plummeted to the ocean below, killing all 217 aboard. Later work by the NTSB proved beyond a reasonable doubt that the plane was deliberately brought down by its second copilot. (Long flights carry two crews: one for takeoff and landing and one for the routine flying in between.) The copilot used a little deception to achieve his aim, but there is no evidence of self-deception involved in the disaster (beyond whatever may have been going on in the head of the suicidal copilot). But afterward there was a furious, long-lasting effort by EgyptAir and the Egyptian government to deny the cause of the crash, a denial that entrained self-deception and that continues to this day. Nearly every conceivable counterargument was advanced, including small bombs near the cabin or the rear, active Israeli agents nearby taking out the thirty-four Egyptian army generals aboard, and so on. This is a case of a post hoc attempt to create a false narrative to protect oneself, and we can readily appreciate Egyptian sensitivities. Because the copilot was soon reported to have murmured a standard Muslim prayer (and in Arabic at that!) before he put the plane into its dive, EgyptAir was about to acquire the unenviable reputation of being “unsafe at any speed” due to internal terrorism. If you can’t trust the flight crew to try to stay alive, how on earth can you trust the flight?

Given nearly daily resistance to NTSB findings, for more than a year, from well-trained Egyptian aviation engineers, with the Egyptian government proposing numerous, sometimes very sophisticated alternatives, this crash is unusually well studied. Yet the basic facts were clear very early. There was no evidence of a bomb at all—fore, aft, or anywhere else. Bombs typically leave at least three kinds of evidence: instrument readings on the flight recorder, voices and sounds on the cockpit recorder, and a certain debris pattern at the bottom of the ocean. Instead, twenty minutes into the flight, the second copilot (fifty-nine years old) maneuvered the first one (thirty-six years old) out of the cockpit by bullying him. He first suggested that the other, who was flying the airplane, take a break and get some rest. When the first one angrily replied that this should have been agreed upon at the start of the flight, he said, “You mean you’re not going to get up? You will get up. Go and get some rest and come back.” In a few moments, the man got up and left the cockpit. The second copilot then buckled himself in next to the pilot (age fifty-seven). After eight minutes of pleasant banter between two old friends, the second copilot found the first one’s pen, or more likely pretended to: “Look, here’s the new first officer’s pen. Give it to him, please. God spare him,” he said to the captain, “to make sure it doesn’t get lost.” Pilot: “Excuse me, Jimmy, while I take a quick trip to the toilet.” Copilot: “Go ahead, please.” Pilot, exiting: “Before it gets crowded, while they are eating, and I will be back to you.” As easy as that, the second copilot had the airplane to himself.

About twenty seconds later, the copilot said (in Arabic, his native tongue), “I rely on God,” and the autopilot disengaged. Four seconds later, another “I rely on God,” and two things happened: the throttles moved from fast to minimum idle and the massive rear elevators dropped, raising the tail and pointing the nose down. The copilot had apparently chopped the power and pushed the control yoke forward. The airplane dived steeply, and six times in quick succession, the copilot said calmly, “I rely on God.” As the nose continued to pitch downward, the inside of the plane changed from no gravity to negative gravity, with objects hitting the ceiling.

Somehow, sixteen seconds into the dive, the pilot managed to return and yelled, “What’s happening? What’s happening?” He got no answer other than “I rely on God.” Then the two evidently fought for control of the airplane. The pilot tried to move the nose up and the copilot held it down, so that the elevators split, one down, one up, a most unusual configuration. (That they can split is a design feature allowing either pilot to overcome a mechanical jam and fly the airplane with only one elevator.) The plane descended at a maximum rate of 630 feet per second and at a downward angle of almost 40 degrees. Somewhere along the line, the copilot turned off the engines while the pilot shouted incredulously. The plane hit about 550 miles per hour at 16,000 feet, by which point the pilot’s efforts seemed to have reversed the dive. The plane then soared steeply back up to 24,000 feet, lost its left engine, and dived at high speed into the ocean. This must have been the most horrifying roller-coaster ride of a lifetime, lasting as it did for two minutes.

The NTSB did a voice-stress analysis that showed a sharp contrast between the copilot and the pilot as they fought for control of the airplane. The pilot’s voice rose steadily in pitch and intensity, as one would expect from a person under growing stress and panic. But the copilot’s never changed. Through a total of twelve utterances of “I rely on God,” his voice never betrayed any stress or fear. He intended what he did and he was calm in his intention.

The only part of this story we do not know is why the copilot brought the plane down. Was it the presence of those thirty-four generals aboard? He was not known to be politically active. Was it the fact that he had been warned only a few days before by a very senior pilot (himself riding as a passenger on this flight) to grow up before he caused himself serious problems? He indeed had a reputation for inappropriate behavior at the hotel in New York where the airline personnel stayed: following women (uninvited) toward their rooms and the like; nothing dangerous, perhaps more on the pathetic side. He was carrying items in the plane for use back in Egypt, a part for his car and so on, so perhaps the decision was made only shortly before he enacted it. If he was acting vindictively toward EgyptAir, the presence of all those generals may have made the suicide more dramatic and costly. We will never know, because part of Egyptian denial was either never to investigate the copilot’s possible motives or to hide whatever they found out. If NTSB investigations in the United States were typically run this way, we would have no objective data on the causes of airplane crashes. This is a case of self-deception intruding at yet a higher level—the international one—to impede the truth at an international cost. Surely we all benefit from reducing civilian crashes.

Of course, Egypt is far from alone. For example, in the United States hardly anyone is conscious of—much less concerned by—the fact that the US economic embargo has prevented Iran from directly acquiring replacement parts for its aging airplanes. The country imposing the embargo is the same one that sold the planes, so here is a country acting in gross violation of international public safety for purely petty reasons when it alone has a legal obligation (original contracts signed) to provide replacement parts. This is a form of economic warfare, perhaps with the meta-message, “Go screw yourself and may your airplanes crash, too.”

SAVED BY LACK OF SELF-DECEPTION?

Perhaps we can end on a more positive note with the reverse of what we have described so far: the celebrated safe landing of a plane in the Hudson River shortly after takeoff from La Guardia Airport in New York on January 15, 2009, saving all 155 lives. The plane was headed for Charlotte, North Carolina, and apparently struck a flock of geese at three thousand feet, disabling both engines simultaneously. The captain (fifty-eight years old)—who was not at the time flying the plane—immediately made a series of decisions, none of which was brilliant or exceptional, but all of which showed rational calibration toward a serious danger, one for which he had long prepared. The first thing he did was to take control of the plane. “My airplane,” he announced to his first officer, the standard procedure for a takeover. “Your aircraft,” the officer (age forty-nine) responded.

The pilot first decided against landing at two possible airports and chose the broad Hudson River instead. He cut speed by lowering the wing flaps and made sure that the nose was raised on landing. It was experienced as a “hard landing” by the crew in the rear of the plane, with utensils flying around, but no one in the craft was injured beyond a flight attendant’s broken leg. In the captain’s own words:

Losing thrust on both engines, at a low speed, at a low altitude, over one of the most densely populated areas on the planet—yes, I knew it was a very challenging situation. I needed to touch down with the wings exactly level . . . with the nose slightly up . . . at a rate that was survivable . . . and just above our minimum flying speed, not below it. And I needed to make all these things happen simultaneously.

The pilot had several advantages. He was very experienced and competent, having been the top cadet in his Air Force Academy class in flying ability and having flown military jets before becoming a commercial pilot. He was trained as a glider pilot, precisely what was required in this situation, the key being to keep both wings level and out of the water while landing on it. He had taught courses on risk management and catastrophes. He remembered from training that in a forced landing on water, you should try to come down near a boat. Within moments of landing, there were so many boats nearby, large and small, as to risk swamping the already rapidly sinking aircraft. Children as young as eight and eighteen months old emerged alive. Two women who ended up in the frigid waters were rapidly rescued.

The remarkable scene was a source of excitement for several days. The key is that the captain was highly conscious throughout, very well prepared, and ended up doing everything right. When asked whether he prayed, he said he was concentrating too hard: “I would imagine someone in back was taking care of that for me while I was flying the airplane.”