UNREASONABLE ASSURANCES - Fukushima: The Story of a Nuclear Disaster (2015)


9

UNREASONABLE ASSURANCES

On March 15, 2012, the five NRC commissioners sat in a row in a Senate hearing room, summoned by the Environment and Public Works Committee and its forceful chairman, Barbara Boxer of California. The hearing topic: “Lessons from Fukushima, One Year Later.”

Despite the title, some committee members questioned whether the United States really had anything to learn from Fukushima. When it was his turn to take the microphone, Senator John Barrasso of Wyoming, the ranking member on the Environment and Public Works nuclear safety subcommittee, decided to play devil’s advocate. The NRC’s critics were saying, he noted, that unless the agency took stronger measures to address the vulnerabilities revealed by Fukushima, then “it may be only a matter of time before a similar disaster happens here.” Pointedly skipping over NRC chairman Gregory Jaczko, Barrasso asked the remaining commissioners to respond to the prediction.

First to reply was Commissioner William Magwood, who declared: “I think that our infrastructure, our regulatory approach, our practices at plants, our equipment, our configuration, our design bases would prevent Fukushima from occurring under similar circumstances at a U.S. plant. I just don’t think it would happen.”

Continuing down the line, commissioners Kristine Svinicki, George Apostolakis, and William Ostendorff echoed Magwood’s assessment.

Technically, the four commissioners were correct in asserting that “it can’t happen here”—if “it” means an event exactly like Fukushima, involving a magnitude 9.0 earthquake, a fifty-foot tsunami, four reactors in crisis, and multiple meltdowns and explosions. For one thing, no U.S. nuclear plant site now has four operating reactors. But as for a different series of calamities triggering a core meltdown, containment breach, and widespread land contamination? The chance of that happening is low—but it isn’t zero.


One year after the accident at Fukushima Daiichi, the five nuclear regulatory commissioners appeared before a Senate committee, where they were asked if a similar disaster could happen in the United States. The commissioners, from front to rear, were William Magwood, Kristine Svinicki, NRC chairman Gregory Jaczko, George Apostolakis, and William Ostendorff. U.S. Nuclear Regulatory Commission

For example, many reactors in the United States are downstream from large dams. A dam failure, whether caused by an earthquake, a terrorist attack, or a spontaneous breach, could rapidly flood one of these plants with little warning, compounding any problems caused by the same event that breached the dam. Although the NRC had known for many years that it was underestimating the threat of dam breaches, the agency was taking its time deciding what to do about it. To some NRC staff members, the catastrophic flooding at Fukushima was a painful reminder of this unresolved vulnerability at home.

IT COULD HAPPEN HERE

The terrorist attacks of 9/11 reminded the U.S. government of the threat of catastrophic sabotage against critical infrastructure targets. Like other federal agencies, the NRC began to reassess the vulnerabilities of nuclear plants. Those included more than terrorists piloting jetliners. An attack on a dam located upstream of nuclear facilities posed a hazard greater than previously thought. And the terrorist threat alerted the NRC to the dangers of accidental failures of upstream dams as well.

Citing domestic security concerns, the NRC concealed for many years its growing worry about the threat to reactors posed by a dam collapse. Thirty-four reactors at twenty sites around the country are downstream from large dams.

A dam failure could rapidly inundate a nuclear plant and disable its vital power supplies and cooling systems. The risk of such failures was not taken into account in the design-basis flooding analyses when the plants were licensed. Other causes of flooding, such as rainfall, were considered, but these pose far less risk because the water would rise more gradually, providing greater time to prepare.

The issue became public in 2012 when an NRC whistleblower accused the agency of covering up information about the vulnerability. One plant appeared especially at risk: the three-unit Oconee Nuclear Station in South Carolina.


Thirty-four reactors at twenty sites around the United States are located downstream from large dams. The threat posed by the failure of those dams was not taken into account when the plants were licensed. One plant especially at risk is the three-unit Oconee Nuclear Station in South Carolina, which sits downstream of the nearby Jocassee Dam. More than 1.4 million people live within fifty miles of Oconee. Google/Union of Concerned Scientists

For years, the NRC and Oconee’s owner, Duke Energy, have been at odds about the magnitude of risk to the plant from a failure of the nearby Jocassee Dam. Although the agency and the company disagreed about the risks there, they did agree on one thing: the consequences—a prolonged station blackout leading to core melting in less than ten hours and containment failure in less than three days. A “significant [radioactivity] dose to the public would result,” according to Duke’s own estimates.

Although the NRC had been aware of the problem since at least 1996, the agency had not required safety enhancements. NRC staff members themselves were divided on how to proceed, and the issue was still unresolved when Fukushima Daiichi demonstrated that prolonged station blackouts associated with flooding were more than a theoretical possibility.

Five days after the tsunami struck the Japanese plant, an NRC staff member e-mailed a colleague to inquire about the status of a Justification for Continued Operation (JCO) for the Oconee site, which was under dispute within the NRC. “In light of the recent developments in Japan,” he wrote, “is anyone having second thoughts about the JCO, Oconee’s path forward, the entire issue, etc.? Although the scope of such a disaster might be more limited at Oconee than in Japan—that is, the Japanese have other problems on their hands than a nuclear crisis, which is slowing them down to a degree, the Oconee disaster would be no less severe on the [reactor] units. Everything on site would be destroyed or useless.”

In reply, his colleague wrote that the matter was being handed off to regional NRC officials to track as an “inspection project.” “The tsunami should give management pause as the results of it sure look like what I would expect to happen to Oconee.” Except, he noted, Oconee would receive more water.

Plans by Duke Energy to heighten a protective floodwall at Oconee, originally scheduled for completion in 2013, are now reportedly delayed until 2017.

Indeed, in spite of their proclamations to Senator Barrasso, the four commissioners had acknowledged that problems existed a few days earlier when they all voted to approve three major regulatory changes to address vulnerabilities identified by the NRC’s own Fukushima Near-Term Task Force (NTTF). However, no longtime observer of the NRC expected the commissioners at the Senate hearing to openly profess doubts about the adequacy of U.S. reactor safety, even with memories of Fukushima Daiichi still vivid. The “it can’t happen here” mind-set is deeply rooted at the NRC, just as it was among Japan’s nuclear establishment at the time of Fukushima. In fact, some would argue, the same mind-set has characterized the NRC’s regulatory philosophy throughout its history.

The legacy of this mind-set cannot be undone without structural reforms. But history holds out little hope that more fundamental changes can occur without a paradigm shift at the NRC. The record is replete with examples of the NRC staying the course even in the face of obvious warning signs.

A tortuous logic flourishes within the commission and influences its decisions. The NRC has always been reluctant to take actions that could call into question its previous judgments that nuclear plants were adequately safe. If it were to require new plants to meet higher safety standards than old ones, for example, the public might no longer accept having an “old one” next door. So the NRC is constantly engaged in an elusive quest for a middle ground from which it can direct needed improvements without having to concede that the plants were not already safe enough, thereby alarming the citizenry. It is not clear, even with the ruins of Fukushima in full view, that the NRC is willing to break out of that pattern.

In that regulatory balancing act, what has evolved over the years is a debate over “how safe is safe enough.” In making its determinations on that issue, the NRC has all too often made choices that aligned with what the industry wanted but left gaping holes in the safety net.

There may be no better example than the long-standing controversy over the Mark I boiling water reactor design. Back in 1989, the NRC staff warned the commissioners that “Mark I containment integrity could be challenged by a large scale core melt accident, principally due to its smaller size.” Staff experts recommended that the NRC require Mark I reactor owners to implement measures to reduce the risk of core damage and containment failure.

If the commissioners had taken effective action—action that would have sent a strong message to Mark I operators around the world, including those in Japan—it is quite possible that the worst consequences of Fukushima might have been avoided. Instead, the matter fell into a regulatory morass of competing interests and emerged with a resolution that accomplished little. It wasn’t the first time that had happened.

In the aftermath of the 1979 Three Mile Island accident, the Kemeny Commission made clear that the safety status quo was inadequate. However, the panel explicitly refused to give its own views on “how safe is safe enough.” Without any useful external guidance, the NRC embarked on a multi-decade struggle to provide an acceptable answer to this issue, the bane of regulators everywhere. It is without doubt a difficult public policy question, but the NRC’s methods of addressing it have only created more confusion over the decades. Fukushima makes one thing clear: the process has not yielded the right answer.

In the NRC’s world, the issue of “how safe is safe enough” is addressed through the concept of “adequate protection.” When Congress created the agency out of the ashes of the Atomic Energy Commission, the NRC inherited the mandate bestowed upon its predecessor by the 1954 Atomic Energy Act: to “provide adequate protection of public health and safety.” The NRC watered down this hazy concept even further by adopting a standard of “reasonable assurance of adequate protection” in its own guidance. The standard was so vague that it essentially gave the NRC and its five political-appointee commissioners a blank check for deciding exactly what constituted “adequate protection.” In fact, several months after Three Mile Island, NRC chairman Joseph Hendrie said in a speech that “adequate protection means what the Commission says it means.” L’état, c’est moi.

Even with such wide latitude, the NRC has always avoided specifying precisely what “reasonable assurance of adequate protection” means. This vagueness allows the commission to avoid drawing a line in the sand. In a 2011 speech, Commissioner Ostendorff, trained as a lawyer, emphasized that “reasonable assurance does not require objective criteria, and it is also determined on a case-by-case basis dependent on the specific circumstances.” Although this policy preserved a great deal of flexibility for the NRC commissioners, the lack of a concrete standard injected a measure of subjectivity into NRC decisions that rendered them vulnerable to political shifts over the decades.

After the Three Mile Island fiasco, a key issue the NRC confronted was whether it could plausibly continue to maintain that its regulations provided “reasonable assurance of adequate protection.” Certainly a lot of Americans didn’t think so. But to admit that it had failed to meet that standard would have thrown into question the foundations of NRC regulations.

Up to that point, regulations were focused on a nuclear plant’s ability to cope with the series of highly stylized events known as “design-basis accidents.” On the NRC’s list of design-basis accidents, the worst was one that assumed “a substantial meltdown of the core with subsequent release of appreciable quantities of fission products.” But as catastrophic as that seems, it was far from the worst case. Owners did not have to consider the failure of more than one safety system at a time. They could assume that emergency core cooling systems would work, that pressure and temperature increases would be limited, and that core damage would be halted before the fuel could melt through the reactor vessel. And the containment structures needed only to be strong enough to prevent leakage of radioactive material to the environment under these modest accident conditions. (This requirement was typically met through the use of steel shells or liners. Reinforced concrete was used in containment buildings to keep things like tornado-driven objects from getting into the reactor, not to keep material within the reactor from getting out.)

Hypothetical events in which multiple safety systems failed, resulting in a complete core meltdown, failure of the reactor vessel, and breach or bypass of the containment, were considered by the NRC to be “beyond-design-basis” accidents. Prior to Three Mile Island, the agency deemed such events so improbable that, in contrast to its policy for design-basis accidents, “mitigation of their consequences [was] not necessary for public safety.” At that time, to the NRC, to achieve “adequate protection” meant to protect against design-basis accidents.

To critics of the design-basis approach, Three Mile Island demonstrated its failure; to others, however, Three Mile Island represented a validation. The accident did not follow the design-basis script because multiple system failures occurred (owing to faulty equipment and human error); the core was severely damaged; and hydrogen exploded in the containment. However, those who saw the glass as half full pointed out that the accident was terminated before the core breached the reactor vessel; the containment never ruptured; and the amount of radioactive material that escaped was well below what the NRC considered acceptable for design-basis accidents.

In Three Mile Island’s aftermath, one NRC-launched review of the accident came to a damning conclusion about this regulatory philosophy: “We have come far beyond the point at which the design-basis accident review approach is sufficient.” In response, the NRC in October 1980 dutifully took up the question of whether it needed to amend its regulations “to determine to what extent commercial nuclear power plants should be designed to cope with reactor accidents beyond those considered in the current ‘design basis accident’ approach.” To set that process in motion, it issued an Advance Notice of Proposed Rulemaking.1

In the Advance Notice, the NRC requested comment on numerous proposals for addressing the risk of beyond-design-basis accidents. These included requirements that containment structures be equipped with systems that could prevent them from being breached, such as filtered vents, hydrogen control measures, and core catchers, structures that could safely trap molten cores if they did manage to breach reactor vessels. Another suggestion was the addition of “an alternate … self-contained decay heat removal system to prevent degradation of the core or to cool a degraded core”—in other words, an external emergency backup cooling system with independent power and water supplies. And the NRC raised the possibility that reactor siting and emergency planning requirements might need to be tightened to address the greater radiation releases from beyond-design-basis accidents.

The Advance Notice of Proposed Rulemaking sent shudders through the nuclear industry. Companies feared that the NRC was setting the stage for a sweeping new rule that would require all plants to be able to withstand accidents previously considered beyond design basis, compelling the installation of costly new systems to cope with them. Without a clear boundary for “how safe is safe enough,” such a regulation could open a Pandora’s box of new requirements. There would be no telling how far it could go.

Members of the industry quickly united under the leadership of their U.S. trade association, the Atomic Industrial Forum (a predecessor to today’s Nuclear Energy Institute), to head off the NRC by organizing a counter-campaign: the Industry Degraded Core Rulemaking program, or IDCOR. Funded with $15 million in contributions (more than $40 million in 2013 dollars) from nuclear utilities and vendors in the United States, Japan, Finland, and Sweden, IDCOR had as its goal to “assure that a rule, if developed, would be based on technical merits and would be acceptable to the nuclear industry.”

IDCOR’s extensive technical program included funding the development of a new computer code to simulate core melt accidents and support what were called “realistic, rather than conservative, engineering approaches.” Yet there was little doubt what the program hoped to accomplish: to block any new regulatory requirements. While the NRC vacillated for four years on what new rules, if any, were needed, IDCOR marched toward its foregone conclusion. In late 1984, the group released its findings. The industry had drawn its own line in the sand. Risks to the public from severe accidents had been vastly overestimated; the actual risks were already so low that more regulation was not needed.

From IDCOR’s perspective, even severe nuclear accidents posed little danger. That was because containment failure would take so long to occur that most fission products would have time to “plate out,” or stick to structures within the damaged reactor, and would not be released to the environment. Therefore, the quantity and type of radioactive material that could escape during a severe accident—the source term—would be far below what the NRC had been assuming in its analysis of health impacts. In reality, no one would die from acute radiation exposure after even the most serious accident, and the numbers of cancer deaths would be hundreds of times smaller than previous studies had shown.

The industry’s proposed reduction of the severe accident source term amounted to a bold jujitsu move to turn the NRC’s original effort to strengthen regulations on its head. One requirement the industry was particularly anxious to undermine was the recently imposed ten-mile emergency evacuation zone around every nuclear plant. At the time, the evacuation requirements were causing a firestorm in New York State, where state and local authorities were blocking operation of the newly constructed Shoreham plant on Long Island by refusing to certify the evacuation plan. (Critics claimed the roads of narrow Long Island couldn’t handle a mass exodus.) But if the amount of radiation that could escape the plant was so much smaller than previously believed, then perhaps a ten-mile evacuation zone wasn’t needed.

The NRC made no attempt to hide its skepticism about the industry’s source term recalibrations. At a 1983 conference, Robert Bernero, director of the agency’s Office of Accident Source Term Programs, called those involved “snake oil salesmen.” To help resolve the growing controversy, the NRC commissioned the American Physical Society, a respected professional association of physicists, to conduct a review of source term research. The physicists concluded that, although the evidence appeared to support reducing the assumed releases of certain radionuclides in certain accidents, there was no basis for the “sweeping generalization” made by IDCOR.

Ultimately, however, the industry’s counter-campaign had an effect. Although the NRC refused to accept the industry’s arguments, in 1985 the commission abandoned efforts to require protection against severe accidents and withdrew the Advance Notice of Proposed Rulemaking. In fact, the NRC went a step further, issuing a Severe Accident Policy Statement that declared by fiat that “existing plants pose no undue risk to public health and safety.” In other words, there was no need to raise the safety bar to include beyond-design-basis accidents because the NRC’s rules already provided “reasonable assurance of adequate protection,” the vague but legally sanctioned seal of approval. The NRC had already addressed Three Mile Island issues, and that was enough.

However, in the face of a growing body of research that suggested the safety picture was not quite that rosy, this declaration raised questions more than it provided answers. In the time-honored tradition of government bureaucracies, the NRC resolved to continue studying the issue, kicking the can farther down the road and confusing matters even more. While asserting that there were no generic beyond-design-basis issues at U.S. reactors, the commission held out the possibility that problems might exist at individual plants and that it should take steps to identify them. Even this proved controversial, requiring three years of give-and-take between the NRC and the industry merely to set ground rules for the study.

When the smoke cleared in 1988, the scope of the proposed Individual Plant Examination (IPE) program had been diminished to a mere request that plant owners inspect their own facilities for vulnerabilities to core melting or containment failure in an accident. What happened if the inspections actually turned up something was less clear. Even if the plant owners found problems, the NRC could not automatically require them to be fixed. The agency would have authority to do so only if such fixes represented “substantial safety enhancements” and were “cost-effective”—that is, if they passed the strict tests required by the NRC’s recently revised backfit rule, which governed the changes it could require for existing plants.2

The 1988 backfit rule had its origins in the antiregulatory fervor of the Reagan administration. In 1981, President Ronald Reagan issued an executive order barring federal agencies from taking regulatory action “unless the potential benefits to society … outweigh the potential costs to society.” Although such a cost-benefit analysis approach sounded reasonable to those seeking a way to reduce government interference, it was controversial for its coldly reductionist attempt to convert the value of human lives into dollar figures that could be directly compared to the costs incurred by regulated industries.

Although the NRC, like other independent agencies, was exempt from this executive order, a majority of commissioners wanted to adopt cost-benefit requirements anyway to add what they characterized as “discipline” to the backfitting process (as if the NRC were staffed by some sort of renegade regulatory militia).

In the past, when the NRC had imposed new regulations, the industry complained that the resultant backfits were costly and often of little or no actual safety benefit. Proponents of cost-benefit analysis argued it would “address risks that are real and significant rather than hypothetical or remote.” The key to this would lie in the use of sophisticated mathematical modeling to quantify risk. At the time of Reagan’s executive order, the NRC’s regulations only allowed it to impose backfits if they would “provide substantial, additional protection which is required for the public health and safety or the common defense and security.” However, this standard was so vague that critics from both sides attacked it. Cost-benefit analysis in principle could help to solve that problem by providing a concrete, quantitative method for determining whether the benefits of a backfit—namely, the reduction in potential deaths or injuries following an accident—justified the costs.

Risk analysis had a receptive audience at the NRC. For many years, the NRC and its predecessor, the AEC, had engaged in a similar process. In the early 1970s, the AEC commissioned a pioneering project, the Reactor Safety Study, that attempted to use the tools of probabilistic risk assessment (PRA) to calculate the risk to members of the public of dying from acute radiation exposure or cancer as the result of a nuclear reactor accident. Risk was defined as the product of the likelihood of an occurrence and its consequences. One key conclusion was that even for nuclear accidents with very serious consequences, the “risk” each year to members of the public would be very low, since the probability of such accidents would be very low. That is, multiplying a large number by a very small number would yield a small number. The report, issued in 1975, famously came under blistering attack for its methodological problems and misleading implication that an average American had as much chance of being killed in a nuclear power plant accident as of being struck by a meteor.
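The Reactor Safety Study’s definition of risk as the product of likelihood and consequence can be sketched in a few lines. The numbers below are purely illustrative assumptions, not figures from the study itself:

```python
# Risk defined as probability x consequence, in the spirit of the Reactor
# Safety Study's framework. All figures are illustrative assumptions.

p_severe_accident = 1e-6    # assumed probability of a severe accident per reactor-year
deaths_if_accident = 1000   # assumed fatalities if that accident were to occur

# Annual "risk" in expected fatalities per reactor-year: a large consequence
# multiplied by a very small probability yields a very small number.
annual_risk = p_severe_accident * deaths_if_accident
print(annual_risk)  # 0.001 expected fatalities per reactor-year
```

This is exactly the multiplication that produced the study’s reassuring bottom line, and, as the critics noted, the result is only as good as the tiny probability fed into it.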

One of the main criticisms of the Reactor Safety Study was that its calculations of probabilities “were so uncertain as to be virtually meaningless,” as recounted by Princeton professor Frank von Hippel in his book Citizen Scientist. Each calculation required the input of thousands of variables, many of which had very large margins of error. If these uncertainties were not properly accounted for, the final result would be misleading. Consequently, many critics, including an independent review panel commissioned by the NRC, argued that probabilistic risk assessments were not precise enough to be used for calculating the absolute value of anything, particularly the probability that a given reactor might experience core damage in a given year.

A major source of PRA uncertainty is what types of events should be included in the calculation in the first place. Like good engineers, the early PRA practitioners began by analyzing things that they knew how to do—relatively well-defined events such as a pipe break. These are called internal events because they begin with problems occurring within the plant. But addressing external events like earthquakes, flooding, tornadoes, or even aircraft crashes proved more challenging. First of all, such events are notoriously hard to predict. Second, their consequences could be complex and difficult to model. Trying to come up with numerical values that would accurately describe the risks from these events was an exercise in futility. But instead of acknowledging that the failure to address external events introduced huge uncertainties in the nuclear accident risks they calculated, PRA analysts sometimes pretended that the possibilities didn’t even exist—the scientific equivalent of reaching a verdict with crucial pieces of evidence missing.

Despite these technical challenges, the NRC eventually began to use PRA results more and more in its regulatory decisions—including the absolute values of accident risks that had been called “virtually meaningless.” Over time, the agency began to view PRA risk numbers as more precise than they actually were. They were put to heavy use in the cost-benefit analyses that some commissioners wanted the NRC to rely on. That had a troubling consequence: as the risk of severe accidents appeared to shrink, so did the NRC’s leverage to require plant improvements.

As if calculating the PRA risk values weren’t complicated enough, cost-benefit analyses required another parameter to be specified: the monetary value of a human life. The NRC had carried such a number on its books since the mid-1970s: $1,000 per person-rem, a term used to characterize the total radiation dose to an affected group of people. Based on today’s understanding of cancer risk, that put the value of a human life between $1 and $2 million. (The NRC failed to adjust for inflation for years, finally doubling the figure in the 1990s to about $3 million per life. That was about a half to a third the value placed on a human life by other federal agencies.)
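The implied dollar value of a life follows from simple division. The cancer-risk coefficient below is an assumption chosen for illustration, in the range of modern estimates; it is not an official NRC figure:

```python
# Converting the NRC's $1,000-per-person-rem figure into an implied value of
# a statistical life. The risk coefficient is an illustrative assumption
# (modern estimates run roughly 5e-4 to 1e-3 fatal cancers per person-rem).

dollars_per_person_rem = 1000.0
fatal_cancers_per_person_rem = 5e-4  # assumed risk coefficient

# Dollars per statistical life = dollars per person-rem averted, divided by
# the fatal cancers expected per person-rem.
implied_value_of_life = dollars_per_person_rem / fatal_cancers_per_person_rem
print(implied_value_of_life)  # 2,000,000 -> the "$1 to $2 million" range
```

A higher assumed risk coefficient (1e-3) yields the $1 million end of the range cited above.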

Although a majority of commissioners embraced the cost-benefit approach, a key obstacle remained: could the NRC legally consider costs in making safety decisions? In the mid-1980s, the Union of Concerned Scientists and other public interest groups argued that the Atomic Energy Act did not allow cost to be considered at all; the NRC should base its decisions strictly on protecting public health. If a utility could not afford to build or operate a plant to meet that standard, then it would be out of luck. In response, the industry argued that the NRC had the right to consider the cost of backfits.

At first, the industry prevailed. In 1985, the NRC revised its rules to prohibit the commission from requiring any backfit unless it resulted in “a substantial increase in the overall protection of public health and safety … and that the direct and indirect costs of implementation … are justified in view of this increased protection.” These tests were not required for backfits needed to fix an “undue risk,” but the NRC refused to define what that meant.

Rather than simplify matters, the backfit test made them maddeningly unclear. In the end, it appeared that cost-benefit analyses would be required for essentially all proposed backfits, including any proposals for new regulatory requirements. Commissioner James K. Asselstine, a lawyer who headed the Senate investigation into the Three Mile Island accident before being appointed to the NRC, wrote a withering dissent to the new rule. “In adopting this backfitting rule, the Commission continues its inexorable march down the path toward non-regulation of the nuclear industry… . I can think of no other instance in which a regulatory agency has been so eager to stymie its own ability to carry out its responsibilities.”

Asselstine, voting against the rule, contended that it imposed unreasonably high barriers to increasing safety and required a determination of risk “based on unreliable … analyses.” He wasn’t done: “The Commission also fails to deal with the huge uncertainties associated with the risk of nuclear reactors. The actual risks could be up to 100 times the value frequently picked by the Commission… . There is no reference in this rule … to how uncertainties are to be factored into safety decisions.”

In 1987, the Union of Concerned Scientists, represented by attorneys Ellyn Weiss and Diane Curran, sued the NRC to block the rule, arguing that the commission could not legally consider costs in making backfit decisions. Later that year, an appeals court threw out the backfit rule, calling it “an exemplar of ambiguity and vagueness; indeed, we suspect that the Commission designed the rule to achieve this very result.”

But the court’s ruling created a peculiar two-tier system. In deciding Union of Concerned Scientists v. U.S. Nuclear Regulatory Commission, the court agreed that the Atomic Energy Act prohibited the NRC from considering costs in “setting the level of adequate protection” and required the NRC “to impose backfits, regardless of cost, on any plant that fails to meet this level.” However, the ruling further confused the “how safe is safe enough” issue by concluding that “adequate protection … is not absolute protection.” The NRC could consider the costs of backfits that would go beyond “adequate protection,” the judges ruled.

The NRC revised the backfit rule accordingly in 1988. The court, by tying its decision to the largely arbitrary “adequate protection” standard, had preserved the agency’s free hand to push safety in any direction it wanted. The NRC rebuffed calls to provide a definition of “adequate protection.” The Union of Concerned Scientists failed to get the revised rule thrown out on appeal. Adequate protection would remain “what the Commission says it is.”

The court’s ruling essentially froze nuclear safety requirements at 1988 levels. If new information revealed safety vulnerabilities at operating plants, the NRC would have three options: conclude changes were needed to “ensure” adequate protection; redefine the meaning of “adequate protection” itself; or subject the proposed rules to the backfit test. (The NRC also kept a fourth option, an “administrative exemption,” in its back pocket.) In any of these cases, most new safety proposals would have to leap a high—perhaps impossibly high—hurdle.

The new backfit rule threw a monkey wrench into the NRC’s process for addressing severe accident risks. Because the NRC Severe Accident Policy Statement for the most part equated adequate protection with meeting the design basis, most new safety measures to deal with beyond-design-basis accidents were not needed for adequate protection. This meant that—unless the NRC were to admit that operating plants did not provide adequate protection, or to expand the definition of adequate protection, a step that could have major legal ramifications—it couldn’t issue new requirements without showing that they were “substantial” safety enhancements and that they met the cost-benefit test.

When concern started to grow about the strength of the containment of one type of reactor in particular, the Mark I boiling water reactor, the NRC found itself in a straitjacket. If the agency were to require safety fixes for the Mark I containment on the basis of “ensuring” or “redefining” adequate protection, this would be seen as an admission that the fleet of Mark Is was unsafe. Otherwise, the NRC would have to prove the benefits justified the costs. That option would leave the fate of any safety improvements at the mercy of the risk assessors.

In the 1980s, there were twenty-four GE Mark I boiling water reactors in the United States. Because of their relatively small and weak “pressure suppression” containment structures, these reactors had been controversial almost from the time that the first commercial version, Oyster Creek in New Jersey, went on line in 1969. After the hydrogen explosion at Three Mile Island, in 1981 the NRC required that the relatively vulnerable Mark I and II containments be “inerted” with nitrogen gas to prevent such explosions.

But that was not the Mark I’s only problem. As NRC staff members began to contemplate events they had never thought possible before Three Mile Island, additional frightening scenarios began to emerge. For instance, if a prolonged station blackout were to occur, operators would lose the ability not only to cool the core but also to remove heat from the containment, which could eventually over-pressurize and leak through seals not designed to withstand such high pressures and temperatures. Even worse, if power were not restored, the core would melt through both the reactor vessel and the steel containment liner. Such events would inexorably result in breaches of each of the multiple layers meant to prevent radioactive materials from reaching the environment.

The NRC was already addressing station blackout issues under a 1988 regulation. But that only required plants to develop a strategy to cope with a blackout for no more than sixteen hours. A prolonged station blackout at a Mark I reactor—one longer than the 1988 rule contemplated—would defeat the NRC’s “defense-in-depth” multiple-barrier strategy for protecting the public. An NRC task force convened in 1988 to study the liner melt-through issue concluded that this vulnerability was a “risk outlier” that warranted prompt attention.

That was easier said than done, because the industry’s IDCOR program was already out front with its opposite argument. At the same time the NRC’s analyses were raising alarms about liner melt-through, IDCOR was asserting that the risks of core damage and containment breach were very low. In addition, an industry group, the Nuclear Utility Management & Resources Council (now folded into the Nuclear Energy Institute), immediately submitted a report opposing the NRC task force’s conclusions, asserting that generic hardware modifications to the Mark I were not cost-beneficial because “the total risk from severe core melt accidents is low.” In place of mandatory fixes to the Mark I, the industry wanted the NRC to consider plants on a case-by-case basis as part of the ongoing Individual Plant Examination program, which would take many more years to complete.

For the next several years, the NRC staff and the industry continued to wield dueling technical analyses to get the upper hand with the commissioners. To complicate matters, the NRC staff itself was divided, with some members aligning with the industry. The trade journal Inside N.R.C., covering a three-day meeting in 1988 in which quarreling NRC staff and officials were sequestered in a Baltimore hotel, quoted sources as stating that “there is literally a war going on” and alluding to instances in which “disagreement over the Mark I issue led to threats involving job security and research funding.” One source told Inside N.R.C. that “there are some senior staff members who are doing everything they can to make sure the game is played by industry’s rules … if the industry can win this one, they can win everything.”

At the core of the dispute was the question of how much risk the Mark I fleet posed (an issue that would again loom large in 2011). Although the NRC’s analyses showed a high likelihood that a Mark I reactor’s containment would fail if the core were damaged, even the agency’s staff believed that the chance of core damage was low—perhaps lower than at a pressurized water reactor like Three Mile Island. So the overall risk to the public might not be any greater from the Mark I than from other reactor types.

If so, it would be hard to demonstrate that fixing the Mark I problems would reduce risks by a big enough factor to satisfy the requirements of the backfit rule. And the majority of commissioners would not be likely to perturb the cherished meaning of “adequate protection” to address the problem via that route. A few weeks after the contentious Baltimore meeting, Themis Speis, the director of the NRC’s research office, wrote an internal memorandum in which he contended that improvements to reduce the likelihood of containment failure would probably be blocked by the backfit rule.

Notwithstanding a likely defeat, the NRC staff went before the commissioners in early 1989 with a proposal for five critical improvements for Mark I plants that would reduce their risk of core damage and containment failure. The staff asked the commissioners to:

✵Speed up implementation of the 1988 blackout rule.

✵Require acquisition of backup water supplies and pumps that could operate in a station blackout.

✵Require hardened torus vents that could be used during accidents to expel steam and other gases to reduce containment pressure and temperature. Crucially, operators should be able to open and close the vent valves remotely and in the absence of AC power.

✵Require that the systems needed to automatically depressurize the reactor vessel in an accident—an essential step to be able to pump emergency coolant into an overheating core—be made more reliable, especially in the case of an extended station blackout when the battery power needed to operate the valves would not be available.

✵Require more robust emergency procedures to ensure that operators could effectively utilize all this new hardware.

The staff argued that certain Mark I containment failure modes, such as liner melt-through, could not be stopped should a core meltdown occur. The only strategy was to prevent meltdowns in the first place—and for that the backup coolant supplies and hardened containment vents were crucial. The staff presented calculations supporting its claim that the improvements would pass the backfit test. The staff’s analysis showed that the owners of the Mark I could reduce the likelihood of core damage by a factor of ten by installing hardened vents: a substantial safety increase. The staff’s calculations also showed the cost of the improvements justified the benefits.
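The staff’s case turned on the arithmetic of the backfit test: monetize the accident risk averted by a fix over the plant’s remaining life and compare it with the fix’s cost. The sketch below shows the form of that screening calculation. All of the numbers are hypothetical placeholders chosen for illustration, not the staff’s actual figures; the dollars-per-person-rem factor, plant lifetime, dose, and costs are assumptions.

```python
# Illustrative backfit cost-benefit screening of the kind the staff
# presented: a hardened vent cuts core damage frequency (CDF) tenfold,
# and the averted public dose is monetized and compared with the cost.
# Every number here is a hypothetical placeholder, not a real figure.

DOLLARS_PER_PERSON_REM = 2000      # monetization factor (assumed)
PLANT_LIFETIME_YEARS = 20          # remaining plant life (assumed)

def averted_risk_value(cdf_before, cdf_after, dose_per_accident_rem):
    """Monetary value of the accident risk averted over the plant's life."""
    delta_cdf = cdf_before - cdf_after            # events per reactor-year
    dose_averted = delta_cdf * dose_per_accident_rem * PLANT_LIFETIME_YEARS
    return dose_averted * DOLLARS_PER_PERSON_REM

# Hypothetical Mark I: CDF drops from 1e-4 to 1e-5 per reactor-year, and a
# core-damage accident is assumed to cause a 5-million-person-rem dose.
benefit = averted_risk_value(1e-4, 1e-5, 5e6)
cost = 500_000  # assumed installed cost of a hardened vent
print(f"benefit ~ ${benefit:,.0f}, cost = ${cost:,}")
print("passes cost-benefit test" if benefit > cost else "fails cost-benefit test")
```

With these invented inputs the monetized benefit dwarfs the cost, which is the shape of the result the staff claimed; the industry’s counterargument amounted to plugging in a much smaller CDF reduction, which shrinks the benefit term toward zero.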

At the commission meeting, the NRC staff faced a hostile audience in Commissioner Thomas Roberts and his colleagues. They were not helped by the fact that the Advisory Committee on Reactor Safeguards, the independent panel that reviews NRC activities, also vigorously opposed the staff and supported the industry.

Despite the commissioners’ skepticism at the briefing, they remained deadlocked for months on the Mark I improvement proposal. The tiebreaker was Commissioner James Curtiss. The NRC staff told Curtiss that if the commissioners did not vote for the containment improvement program, and instead folded it into the Individual Plant Examination program, Mark I fixes would be put off for another five years. Curtiss apparently believed resolving the issue could not wait that long.

In July 1989, the commission finally made its decision. Although the final vote was 3-2 in favor of taking action, the outcome was far short of what the staff had requested. Of the five recommendations, the commission accepted only two, and even then it pulled its punches. First, it authorized speeding up the timetable for Mark I plants to comply with the station blackout rule—but this was an easy call, since it did not involve new requirements. Second, the commission decided to take action on hardened containment vents. But, reluctant to directly confront the industry on such a sensitive issue, the NRC gave Mark I owners an offer they couldn’t refuse: install hardened containment vents voluntarily or the NRC staff would conduct plant-specific backfit analyses to determine if the agency could legally require them to comply.

The commission’s offer presented an easy choice for most Mark I owners. If they installed the vents as a voluntary initiative, they would not have to submit a license amendment to the NRC for approval, and the NRC would have almost no regulatory control over the vents.3 Although the NRC set basic standards for the design, construction, maintenance, and testing of the vents, Mark I operators would be under no obligation to meet them. The NRC would also have no authority to issue violation notices if it found problems, unless the vents interfered with other safety systems. In contrast, if the agency could show that the hardened vents passed the backfit test, it could force reactor owners to install and maintain them on the NRC’s terms.

Initially, owners of all but five reactors decided to voluntarily install hardened vents. For the holdouts, the NRC followed through on its threat and did backfit analyses, concluding that it could require all five to install the vents. Four of the plant owners gave up at that point and “voluntarily” complied before they were forced to. The fifth owner—the New York State Power Authority—went on the offensive, challenging the staff’s analyses and cost-benefit calculations regarding its James A. FitzPatrick nuclear plant. This time, it was the NRC’s turn to buckle. FitzPatrick, located on Lake Ontario near the town of Oswego, became the only Mark I BWR in the United States that did not harden its vents.

The NRC staff audited some of the hardened vent designs and inspected the hardware and operating procedures after the vents were installed. But the fact that the licensees had performed the work voluntarily severely restricted the NRC’s ability to ensure that the vents would be usable when needed.

And there was good reason to believe that they wouldn’t be usable. The vents were designed to operate only within the design basis of the plant and only before core damage occurred. That meant that in the event of more severe conditions—high radiation fields, heat, or pressure—the vents might not function. And the NRC staff did not even require the vents to function during a station blackout, relegating that issue to future consideration in the Individual Plant Examination program. Once again, it was one step sideways.

As to the remaining three staff recommendations for Mark I improvements—alternate ways to inject water, improved reliability of reactor vessel depressurization, and emergency procedures and training—the majority of the commissioners supported the industry position that they should be folded into the quasi-voluntary IPE program. Accordingly, later in 1989 the NRC sent a letter to Mark I licensees meekly stating that the NRC “expects” them to “seriously consider these improvements during their Individual Plant Examinations.”

This lackluster request received a lackluster response. When the NRC finally reported on the results of the IPEs in 1996, seven years after the project began, it noted that in several cases the licensees indicated that the containment performance improvements were being “considered, but do not identify the recommendations as commitments.” Most of the Mark I plant owners stated that they already had alternate water sources and merely credited them in the IPE; some did not even bother to credit them. With regard to emergency training, the licensees simply committed to voluntary industry guidelines. And with regard to enhancing the reactor vessel depressurization system, many licensees did not respond at all. The NRC claimed victory in those cases when licensees actually did something, but it was powerless to compel any of them to do more; much less could it conduct thorough reviews and inspections to verify that what they had done would lead to meaningful safety improvements.

One could hardly judge the outcomes of the Mark I containment improvement program and the IPEs to be successes. Yet they set a major precedent for dealing with severe accident issues through “voluntary industry initiatives” (sometimes also confusingly called “regulatory commitments”). Like the backfit requirements of the 1980s, this was in keeping with the regulatory trends of the times, in which industry “self-regulation” tools like voluntary codes of conduct were increasingly used to forestall new government mandates, despite concerns about foxes guarding henhouses. For its part, the nuclear industry now could tout its voluntary actions as examples of its commitment to safety beyond what the NRC required.

One of the key voluntary industry initiatives of the 1990s was the development of Severe Accident Management Guidelines, or SAMGs. These were emergency plans plant operators were to use during an accident in which core damage had already occurred or was imminent. (SAMGs were to be used if a plant’s emergency operating procedures, which in contrast were regulated by the NRC, failed to prevent core damage.) In 1994, the industry, under the auspices of its newly constituted advocacy group, the NEI, developed a guideline document that all licensees promised to adopt. Once again, however, because the SAMGs were voluntary practices, the NRC was virtually powerless to ensure that they would be workable and that plant workers would be appropriately trained to use them.4

Mark I containments were not the only ones that concerned the NRC; the Mark II had similar issues. Also, another type of containment—the Westinghouse ice condenser, a PWR version of a pressure-suppression containment—was vulnerable to failure in severe accidents, especially in the event of a hydrogen explosion. (Although it required the Mark I and II to be inerted with nitrogen gas, the NRC had not done so for ice condensers, or another model of BWR called the Mark III.) Several years after Three Mile Island, the NRC had required owners of ice condensers and Mark III plants to install igniters—similar to spark plugs—that could burn off hydrogen accumulating in a containment before it reached an explosive concentration. However, those igniters required AC power to function, so they wouldn’t be available in a station blackout, a potentially major weakness. Even the gold standard—large, dry PWR containments—might be vulnerable to accidents in which the reactor vessel failed at high pressure. But the NRC’s failure to impose meaningful changes on the Mark I, perhaps the worst of the lot, did not bode well for the future of the containment improvement program.

Over time, the NRC staff appeared to lose its appetite for grappling with the industry over new requirements to reduce severe accident risk. Even worse, in response to growing political pressure, the NRC decided to sweep other stubborn issues under the rug. In fact, as Three Mile Island receded into the past and no other Western-designed reactor experienced an event to jolt the memory (Chernobyl didn’t really count, as it was considered an exotic Soviet beast), the agency in the 1990s embraced a sentiment that its requirements were not too lenient but rather too strict.

According to this line of thinking, severe accident risks were already so low that certain regulations could be weakened without significantly affecting safety. The NRC dubbed this approach “risk-informed regulation,” and counted on probabilistic risk assessment data to justify what it euphemistically referred to as “reducing unnecessary conservatism” but actually amounted to removing safety requirements. Risk-informed regulation was seen by critics (such as David Lochbaum of the Union of Concerned Scientists) as a “single-edged sword”: it was only used to reduce regulatory requirements, never to strengthen them.5

Reservations about the validity of probabilistic risk assessments faded as more and more utilities began to use them in regulatory applications. And why not? They seemed to enable the utilities to get what they wanted: less regulation. But even though PRA methodology had advanced, it still suffered from many of the same problems, including huge uncertainty factors when addressing earthquakes, other external events, and reactor shutdowns (when the risk of an accident can be surprisingly high). Again, the tendency of the NRC and plant owners when confronted with these uncertainties was to downplay or ignore them. As a result, safety decisions based on PRA analysis did not accurately account for the risks of these additional hazards. The misuse of PRA analysis did, however, lend credence to the concerns James Asselstine raised in his 1985 vote on the Severe Accident Policy Statement, when he accused his colleagues of deliberately ignoring uncertainties to minimize risks: “the Commission chooses to rely on a faulty [risk] number which supports the outcome they prefer.”

Take the issue of developing a reliable PRA for an earthquake, which very few plant owners have done. To perform the assessment properly one would need accurate estimates of the likelihoods of earthquakes at each magnitude; detailed models of the effect that a quake of each magnitude would have on plant structures; and a defensible analysis of how the earthquake damage would affect plant operation and the ability of operators to carry out manual actions. Assembling this information would be a daunting task, and the uncertainties at every step would be formidable. Little wonder that the industry has had difficulty tackling seismic PRAs.6
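The core arithmetic of a seismic PRA is a convolution of the two curves described above: the hazard curve (how often quakes of each severity occur) and the fragility curve (how likely each severity is to cause core damage). The sketch below shows that structure with invented numbers; real hazard and fragility estimates carry the enormous uncertainties just described, and none of these values should be read as representative of an actual plant.

```python
# Minimal sketch of the seismic PRA arithmetic: seismic core damage
# frequency = sum over ground-motion bins of (annual frequency of a
# quake in that bin) x (conditional probability of core damage given
# that quake). All hazard and fragility values below are invented.

# (peak ground acceleration bin in g, annual frequency, P(core damage | quake))
hazard_fragility = [
    (0.1, 1e-2, 1e-6),   # frequent weak shaking, plant almost certainly fine
    (0.3, 1e-3, 1e-4),
    (0.5, 1e-4, 1e-2),
    (0.7, 1e-5, 1e-1),
    (1.0, 1e-6, 0.5),    # very rare severe shaking, coin-flip survival
]

seismic_cdf = sum(freq * p_damage for _, freq, p_damage in hazard_fragility)
print(f"seismic core damage frequency ~ {seismic_cdf:.2e} per year")
```

Note how the total is dominated by the middle and upper bins, where both factors are deeply uncertain: a factor-of-ten error in either curve at those severities swings the answer by roughly a factor of ten, which is exactly the uncertainty problem Asselstine flagged.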

Among the first regulations that the NRC set its sights on “risk informing” was a post-Three Mile Island requirement that all reactors install “recombiners” that could prevent the accumulation of hydrogen during a loss-of-coolant accident. In reconsidering the requirement for the Mark I and II, the NRC’s analysis found that the recombiners would not be needed to prevent hydrogen explosions during the first twenty-four hours after an accident because the reactor containments were inerted with nitrogen. However, the recombiners could be useful after twenty-four hours had passed because the inerting would become ineffective.7 Nonetheless, in 2003 the NRC eliminated the recombiner requirement, concluding that removing this equipment would not be “risk significant.” The reason: the SAMGs at those plants called for operators to vent or purge hydrogen in a severe accident, and the NRC believed that twenty-four hours gave them plenty of time to prepare to get that done. Based on this calculation, the agency concluded that the monetary value of the increased threat to public health was less than what the utilities would save by not having to maintain the recombiners—$36,000 per year per reactor.
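The recombiner decision reduced to a one-line comparison: was the monetized annual increase in public risk from removing the equipment smaller than the $36,000 a utility would save each year by not maintaining it? The sketch below shows the form of that test. The $36,000 figure comes from the text; the release frequency and dose consequence are invented placeholders, and the dollars-per-person-rem factor is an assumption.

```python
# The form of the NRC's recombiner calculation: annual monetized risk
# increase from removal versus annual maintenance savings. The risk
# inputs are invented placeholders; only the savings figure is from
# the account above.

MAINTENANCE_SAVINGS = 36_000        # dollars per reactor-year (from the text)
DOLLARS_PER_PERSON_REM = 2000       # monetization factor (assumed)

def risk_cost_of_removal(delta_release_freq, dose_per_release_rem):
    """Annual monetized public-health cost of removing the recombiners."""
    return delta_release_freq * dose_per_release_rem * DOLLARS_PER_PERSON_REM

# Hypothetical: removal raises the hydrogen-burn release frequency by
# 1e-7 per year, with a 1e8 person-rem consequence per release.
risk_cost = risk_cost_of_removal(1e-7, 1e8)
print(f"risk cost ~ ${risk_cost:,.0f}/yr vs savings ${MAINTENANCE_SAVINGS:,}/yr")
print("removal justified" if risk_cost < MAINTENANCE_SAVINGS else "removal rejected")
```

The catch, as the text notes, is in the risk-increase term: the NRC kept it small by crediting the SAMGs—voluntary measures it did not regulate—so the left side of the comparison rested on procedures the agency had no power to verify.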

Thus the NRC removed regulatory requirements to prevent hydrogen explosions in part by taking credit for voluntary initiatives—SAMGs—that it did not regulate. This type of twisted logic was typical of the risk analysis that enabled the NRC to weaken its regulations at the beginning of the twenty-first century.

In the years following Three Mile Island, the Japanese closely studied the NRC’s regulatory reforms, and in many cases emulated them. Japan’s Nuclear Safety Commission identified fifty-two lessons learned from Three Mile Island that it recommended for adoption in Japan’s own safety regulations. Japan also began to develop severe accident countermeasures after Chernobyl. Among those that TEPCO incorporated at Fukushima were hardened vents, modifications to allow use of fire-protection pumps to cool the core if needed, and measures for coping with station blackouts of modest length, including loss of DC power.

The Japanese also developed severe accident guidelines, referred to as accident management (AM) measures, using the results of probabilistic risk assessments conducted by research organizations. In short, there were many similarities between actions taken in the United States and those in Japan.

Japan’s severe accident management measures also shared many of the defects of the U.S. approach. All of the AM measures were rooted in the belief that the possibility of severe accidents was so low as not to be “realistic from an engineering viewpoint”; hence these steps were not considered essential. Consequently, the NSC concluded that “effective accident management should be developed by licensees on a voluntary basis,” and the utilities accordingly developed AM measures on their own.

As a result, no regulator assessed whether the plant owners’ assumptions were realistic regarding the ability of workers to carry out AM measures like hardened vent operation and alternate water injection. In particular, no one asked TEPCO why its AM procedures were designed to cope with a station blackout that would last only thirty minutes and affect only one reactor at a site. If someone had, perhaps TEPCO would not have had to concede after Fukushima that the tsunami and flood resulted in “a situation that was outside of the assumptions that were made to plan accident response.”

Suppose that decades ago the NRC staff had succeeded in pushing through a much more aggressive approach for dealing with Mark I core damage and containment failure risks, including the challenges of a prolonged station blackout. There is no guarantee that the Japanese would have followed suit, but they would have been hard-pressed to ignore the NRC’s example. The NRC staff in the 1980s had all but predicted that something like Fukushima was inevitable without the fixes it prescribed, but the agency’s timidity—or perhaps even negligence—contributed to the global regulatory environment that made Fukushima possible. The NRC’s reliance on the flawed assumption that severe accident risks are acceptably low helped to perpetuate a dangerous fallacy in the United States and abroad. Ultimately, the NRC must bear some responsibility for the tragedy that struck Japan. And the commissioners must acknowledge that unless they fully correct the flawed processes of the past, they cannot truthfully testify before Congress that a Fukushima-like event “can’t happen here.”