“THIS IS A CLOSED MEETING. RIGHT?” - Fukushima: The Story of a Nuclear Disaster (2015)

10

“THIS IS A CLOSED MEETING. RIGHT?”

It was the last session of the NRC’s twenty-third Regulatory Information Conference. The RIC, as it is known, is an annual gathering that attracts regulators, utility executives, industry representatives, the media, and others for discussions of new and ongoing initiatives by the NRC. More than three thousand people from the United States and around the globe, including a team of seismic experts from Japan, had descended on a Marriott conference center across Rockville Pike from the NRC’s White Flint headquarters for the three-day event.

Now, as the conference was winding down, a few dozen people had gathered to hear a panel discuss the latest results of an NRC research project entitled State-of-the-Art Reactor Consequence Analyses, or SOARCA, as it was called in the NRC’s acronym-rich environment. The takeaway message from the panel: even if a severe nuclear power plant accident were to happen—say, an extended station blackout at a Mark I boiling water reactor—it wouldn’t be all that bad.

The date was March 10, 2011.

By all accounts the RIC had been a great success, a reflection of how the NRC’s stature had grown along with the improving fortunes of nuclear power in the United States. After decades without a new reactor order being placed, the United States in recent years had begun to see a resurgence of interest in nuclear energy, spurred on by policy makers, pundits, and industry boosters addressing a public that had largely forgotten the nuclear fears of three decades earlier. They argued that the atom was the only realistic alternative to greenhouse gas-belching fossil fuel plants for delivering large amounts of power to an increasingly energy-hungry world. That message was gaining traction, even among some longtime nuclear skeptics.

Nuclear energy’s prospects were boosted by Congress in 2005. That year’s Energy Policy Act (EPAct) contained energy production tax credits and loan guarantees to help insulate utility investors from the formidable financial risks that had crippled many past nuclear projects.

Thanks to incentives such as these, the NRC was soon besieged by more nuclear plant license applications than it could handle. To cope with the increase in its workload, the agency needed to expand significantly for the first time in decades.

By 2011, some of the momentum had been siphoned off by the persistent recession, which froze credit markets and reduced energy demand, as well as by the ultracheap natural gas made available by hydraulic fracturing. But the so-called nuclear renaissance was very much alive in the nation’s capital. Interest remained high among many in Congress and within the Obama administration.1

Turnout for the 2011 RIC reflected the renewed support. The conference reported the highest attendance in its history, and sessions such as those devoted to the technology du jour—small modular reactors that could be installed in all sorts of unlikely places around the world—generated so much excitement that auditoriums filled to capacity and people were turned away at the doors.

As for the nagging issue of safety? That no longer seemed a showstopper, thanks in large measure to some deft messaging by the nuclear industry, led by the NEI. The long-ago accident at Three Mile Island represented the nuclear industry of old; the accident at Chernobyl was irrelevant to Western-designed and -operated nuclear plants. An entire generation of Americans had reached adulthood without encountering a major nuclear mishap. Perhaps things had changed when it came to nuclear safety.

This was all good news for the NRC.

Despite its official status as a neutral regulator, the NRC had been doing its part to promote the image of nuclear power as safe. SOARCA was a key element in that campaign. The goal was to supplant older NRC studies that estimated the radiological health consequences of severe reactor accidents. Many in the nuclear power community, both inside and outside the NRC, believed that those studies, dating back more than twenty years, grossly exaggerated the potential danger. Antinuclear groups were misusing old information to frighten the public, they argued. It was time for a new counteroffensive.

In the 1980s, the industry had asserted—via the findings of its own Industry Degraded Core Rulemaking (IDCOR) program—that the NRC was wildly overestimating the radiation releases that could result from nuclear accidents. At the time, the NRC staff, bolstered by the independent review of the American Physical Society, did not concur. However, times had changed. Now the NRC itself was leading the charge to reduce source terms. That gave rise to a new state-of-the-art study: SOARCA.

But in 2011, after spending five years and millions of dollars on the project, the NRC had a new problem: the numbers SOARCA was generating weren’t cooperating with the safety message agenda. It was déjà vu for the NRC, which throughout its history had grappled with how to explain away inconvenient facts about the risks of nuclear power.

In the RIC’s final hours, a panel of NRC experts clicked open their PowerPoint presentations and provided a SOARCA update. Only the most attentive would have noted a subtle change in the language describing the study’s findings—an attempt, perhaps, to glide past some of SOARCA’s unwelcome results.

No one in the room could know that these findings, the outcome of computer simulations, were about to be put to the test.

If the conference had taken place a month later, with Fukushima’s devastated reactors still held in check by seawater while thousands of refugees lamented their poisoned homes, SOARCA’s message would have been far different. The panelists would have known by then that the accident scenarios they had analyzed were no longer just theoretical constructs, but instead described real-world events with real-world consequences.

It was now clear that the release of even a small fraction of the radioactive material in a reactor core was enough to wreak havoc around the world and fundamentally disrupt the lives of tens of thousands of people. This was something SOARCA, designed to calculate only numbers of deaths, was not capable of predicting.

Nearly three decades earlier, in November 1982, Representative Edward Markey of Massachusetts held a press conference in Boston with Eric van Loon, executive director of the Union of Concerned Scientists, to disclose troubling information: the NRC was suppressing the results of a study that estimated the consequences for human health and the environment of severe accidents for every nuclear power plant site in the United States.

The NRC staff had drafted a report on the study for public consumption, but the commission had been sitting on it for over six months. At the time, three and a half years after Three Mile Island, antinuclear sentiments in the United States were running high. The possibility that the NRC was engaging in some sort of cover-up about risks confirmed the suspicions of many about the pronuclear bias of the agency.

The study, performed by Sandia National Laboratories and given the bland title “Technical Guidance for Siting Criteria Development,” soon became known as the CRAC2 study, after the computer code it employed (“Calculation of Reactor Accident Consequences”). Among the calculations in the study was a projection of the dispersal of large radioactive plumes from a severe accident with containment failure and an estimation of the resulting casualties.

Like a civilian version of the models used by Cold War-era military strategists that ranked the outcomes of thermonuclear conflicts in impersonal terms like “megadeaths,” CRAC2 was used to quantify the damage from nuclear accidents: the numbers of radiation injuries, “early” fatalities from high levels of radiation exposure, and “latent” cancer fatalities from lower and chronic exposures.

CRAC2 and other radiological assessment codes, like Japan’s SPEEDI and the NRC’s RASCAL, utilize complex models to estimate doses to individuals by simulating the way radioactive plumes released by a nuclear accident are transported through the atmosphere and the biosphere. CRAC2 went beyond those other codes by using more detailed models of the ways people could be exposed: external irradiation by radioisotopes in the air and on the ground, inhalation, and consumption of contaminated food and water.

CRAC2 also had a crude model for estimating the economic consequences associated with land contamination, addressing issues such as lost wages and relocation expenses for evacuees, and costs of cleanup or temporary condemnation of contaminated property. What it couldn’t estimate were nonquantifiable consequences such as the psychological impacts on people forced to leave their contaminated homes and businesses either temporarily or permanently.

Radiological assessment codes like CRAC2 must incorporate many moving parts—plumes are traveling, radioactive particles are being deposited, and the population itself is not sitting still. Each calculation requires the input of hundreds of parameters, from source terms to types of building materials to the movement of evacuees and, eventually, even to the long-term effectiveness of decontamination efforts and the radiation protection standards governing people’s return to their homes. Consequently, the results are very uncertain. Far too often, however, the tendency among regulators has been to endow these rough estimates with more authority than they deserve.

One of the largest sources of uncertainty is weather. Some types of weather could be much more hazardous than others, depending, for example, on whether the wind was blowing toward heavily populated areas and whether there was precipitation. But the NRC’s as yet unreleased CRAC2 draft report contained only averages for weather conditions.

What Ed Markey and Eric van Loon presented to reporters that November day were not just the averages but the “worst case” results for the most unfavorable weather, such as a rainstorm washing out the plume as it passed over a large population center. In these projections, the “peak” early fatalities, as they were called, were far greater than the average values in the NRC report. The numbers were in fact shocking: for the Indian Point plant, thirty-five miles from midtown Manhattan, a worst-case accident could cause more than fifty thousand early fatalities from acute radiation syndrome. In contrast, the average value for early fatalities was 831.

The NRC had held on to the draft CRAC2 report for over half a year, presumably because officials worried that even the average-value casualty figures would be too much for the public to swallow. Markey’s disclosure of the worst-case spreadsheet forced the NRC’s hand, and it finally released the report that same day. The commission was quick to defend its decision not to include the worst-case results, offering a rationale that would become familiar over the years: the chances of an accident severe enough to produce such death and destruction were so slight as to be hardly worth mentioning. Or, as the NRC’s head of risk analysis, Robert Bernero, said at the time, the likelihood of worst-case conditions was “less than the possibility of a jumbo jet crashing into a football stadium during the Super Bowl.”

For the next two decades, this line of reasoning formed the backbone of the NRC’s strategy for addressing the threat of severe accidents—namely, that events threatening major harm to the public were so unlikely that they didn’t need to be strictly regulated, a view shared by Japanese authorities and other members of the nuclear establishment worldwide.

In its risk assessments, the NRC was careful always to multiply high-consequence figures by tiny probabilities, ending up with small risk numbers. That way, instead of having to talk about thousands of cancer deaths from an accident, the NRC could provide reassuring-sounding risk values like one in one thousand per year. The NRC was so fixated on this point that it insisted that information about accident consequences also had to refer to probabilities.2

However, critics argued that the probability estimates were so uncertain—and there was so little real data to validate them—that the NRC could not actually prove that severe accidents were extremely unlikely. Therefore, accident consequences should be considered on their own terms.

In any event, the low-probability argument became less relevant in the aftermath of the September 11 aircraft attacks, when the public began to wonder what might have happened had al Qaeda decided to attack nuclear power plants that day instead of the World Trade Center and the Pentagon. No longer could one say with a straight face that a jumbo jet crashing into the Super Bowl was a one-in-a-billion event—if the pilot were intent on doing it deliberately. There was no credible way to calculate the probability of a terrorist attack and come up with a meaningful number. The NRC had long acknowledged this, and consequently did not incorporate terrorist attacks into its probabilistic risk assessments or cost-benefit analyses.

No longer able to hide behind its low-probability fig leaf, the NRC struggled to reassure Americans that they had nothing to fear from an attack on a nuclear power plant. While maintaining that nuclear reactors had multiple lines of defense, from robust containment buildings to highly trained operators, the NRC also had to concede that the reactors were not specifically designed to withstand direct hits from large commercial aircraft, and that it was not sure what would happen if such an attack occurred. The industry steered the public discussion toward the straw-man issue of whether or not the plane would penetrate the containment—it couldn’t, according to the NEI—even though many experts pointed out that terrorists could cause a meltdown by targeting other sensitive parts of a plant.

To learn more about what could happen in an attack, the NRC commissioned a series of “vulnerability assessments” from the national laboratories, but the results remained largely classified for security reasons. Aside from a series of carefully constructed and vaguely reassuring talking points, the NRC provided few details beyond “Trust us.” Communities near nuclear plants would get few tangible answers about the vulnerability of reactors in their midst.

Meanwhile, the 9/11 disaster had provided an opening for environmental and antinuclear groups to once again raise the safety concerns that had faded from view since Chernobyl. In the vacuum of new public information from the NRC, activists found ample fodder in the old CRAC2 study and its references to “peak fatality” and “peak injury” zones. Among them was the organization Hudson Riverkeeper, campaigning to shut down the Indian Point plant. Interpreting the CRAC2 results liberally, the organization’s leader, Robert F. Kennedy Jr., spoke of dangers to the many millions of people within what he referred to as Indian Point’s “kill zone.”

Such talk was deeply upsetting to one NRC commissioner in particular: Edward McGaffigan. A voluble, intellectual, and pugnacious former diplomat and Senate defense aide, McGaffigan began his tenure on the NRC in 1996 by extending open channels of communication to the public. But after 9/11 he became openly hostile toward anyone he believed was exaggerating the dangers of nuclear power or misinterpreting the results of NRC technical studies.

“The media holds us to a very high standard, that what we say is factually true … but the antinuclear groups … basically get away with saying almost anything, however factually untrue it is,” McGaffigan told a gathering of NRC staff in 2003, adding, “The way we fix it is we work aggressively to get our story out.” The story, in his view, was that nuclear power was safe. Those who argued otherwise were misinformed and misguided.

McGaffigan was not alone in his frustration; other commissioners also accused critics of scare tactics. But McGaffigan went further, mocking members of the public who expressed the views he disdained.

McGaffigan’s views worried nuclear watchdog groups. After all, how could a regulator be trusted to make the decisions necessary to protect public health if he had such absolute faith in the benign nature of the facilities he oversaw and did not worry about the effects of low-level radiation?

But there was more to McGaffigan’s crusade; he accused the NRC staff itself of overstating the hazards of nuclear accidents. In his view, disinformation was coming from inside the agency as well as from hostile critics elsewhere. The staff’s technical analyses, he said, were making unrealistically dire assumptions. One case in point was the risk posed by spent fuel pools, a subject that would surge to relevance little more than a decade later at Fukushima Daiichi.

Edward McGaffigan, who was an NRC commissioner from 1996 to 2007. U.S. Nuclear Regulatory Commission

In January 2001, the NRC staff released a report, “Technical Study of Spent Fuel Pool Accident Risks at Decommissioning Nuclear Power Plants,” or NUREG-1738. The report evaluated the potential consequences of an accident, such as a large earthquake, leading to the rapid draining of a spent fuel pool. NUREG-1738 estimated that within hours such an event could cause a zirconium fire throughout the pool that would result in melting of the fuel and release of a large fraction of its inventory of cesium-137 as well as significant amounts of other radionuclides. The study found that dozens of early fatalities and thousands of latent cancer fatalities could result among the downwind population.

Data from that study was later incorporated by outside experts into a technical paper, which was published in 2003 in the respected journal Science and Global Security.3 The paper concluded that the U.S. practice of tightly packing spent fuel in pools was risky. It called on utilities to move most of the fuel to safer dry storage casks. In an angry response, McGaffigan called the publication, based at Princeton University, a “house journal” of “antinuclear activists.” He fumed that “terrorists can’t violate the laws of physics, but researchers can.” But he also denounced NUREG-1738 itself, calling it “the worst” of excessively pessimistic staff studies on spent fuel vulnerabilities.

McGaffigan was so sure the Princeton study was wrong that, in a March 2003 public meeting, he appeared to direct the NRC staff to rebut the study before the staff had completed its own analysis. Such interference by a commissioner was practically unheard of. The NRC’s inspector general investigated and concluded that McGaffigan had tried to exert inappropriate influence on the research staff.4

It was in this overheated political environment that the SOARCA study was conceived. The early results of the nuclear plant vulnerability assessments that the NRC had been conducting since shortly after the 9/11 attacks indicated, in the agency’s view, that the radiological releases and public health consequences resulting from terrorist-caused meltdowns generally wouldn’t be as catastrophic as previous studies, including CRAC2, had found.

Unfortunately for the NRC, it could not broadcast this good news because the vulnerability studies, being related to terrorist threats, were considered “classified” or “safeguards” information. But some inside the NRC reasoned that if the agency applied the same analysis methods to accidents instead of terrorist attacks, it might be able to dodge some of the security restrictions and get the information out to the public. SOARCA was the result.

There was a downside. Opening up the analytical process would also expose the staff’s methodology and assumptions to unwelcome scrutiny by outsiders. So the NRC planned to keep a veil of secrecy over the SOARCA program itself, stamping the staff’s proposal for how to conduct the study, as well as the commission’s response, as “Official Use Only—Sensitive Internal Information.” The NRC would control all information about the study and report the results only when it was ready, and in a manner that could not be—in its judgment—misinterpreted or misused. From the outset, one commissioner, Gregory Jaczko, objected, arguing that the study guidelines and other related documents should be publicly released. He was outvoted.

The NRC’s concern about managing the information coming from SOARCA was evident from the beginning. The commissioners wanted the staff to develop “communication techniques” for presenting the “complex” results to the public. Although the technical analysis had barely begun, the first draft of the communications plan asserted that nuclear power plants were safe and had been getting safer for more than two decades. Even so, the commissioners rejected the draft and continued to micromanage the message. The communications plan would go through at least six revisions before they were satisfied.

One theme the NRC was determined to emphasize was SOARCA’s scientific rigor. As the name suggested, the project was to be all about using “state-of-the-art information and computer modeling tools to develop best estimates of accident progression and … what radioactive material could potentially be released into the environment.” But the NRC Office of Research, try as it might to be an independent scientific body, could never truly be free from the commission’s policy objectives. The research office had faced accusations in the past of trying to influence the results of studies performed by its contractor personnel.5 Now, the clear direction from McGaffigan and other senior officials would make it difficult to produce a completely objective study.

Although by all appearances the purpose of SOARCA was to reassure the public that nuclear power was safe, the nuclear industry did not enthusiastically jump on board. Perhaps company executives did not relish the prospect of another CRAC2-like spreadsheet making an appearance, listing potential accident casualty figures for every nuclear plant in the country—a recipe for bad publicity no matter how low the numbers. After all, Ed Markey was still in Congress, waging battles over nuclear safety.

The Nuclear Energy Institute interceded, sending the NRC a list of forty-four questions about the project, including a suggestion that a fictional plant be used instead of a real one. The SOARCA researchers soon found that very few utilities were interested in cooperating with the NRC on the study. (For added measure, the NEI hinted that any volunteers would want the right to review how their plants were portrayed.) The initial plan to analyze the entire U.S. nuclear fleet of sixty-seven plant sites was whittled down to eight and then to five; ultimately, only three were willing to participate. In the end, the NRC staff analyzed just two stations: Peach Bottom in Pennsylvania, a two-unit Mark I BWR, and Surry in Virginia, a two-unit PWR.6

With a vast, complex, and uncertainty-ridden study like SOARCA, it wasn’t necessary to commit scientific fraud to guide the process to a desired outcome. There were plenty of dusty corners in the analysis where helpful assumptions could be made without drawing attention. The NRC employed a number of maneuvers to help ensure that the study would produce the results it wanted, selectively choosing criteria—in effect, scripting the accident.

It discarded accident sequences that were considered “too improbable,” screening out events that would produce very large and rapid radiological releases, such as a large coolant pipe break. It only evaluated accidents involving a single reactor, even though some of the events it considered, such as earthquakes, could affect both units at either Peach Bottom or Surry. It considered its “best estimate” to be scenarios in which plant personnel would be able to “mitigate” severe accidents and prevent any radiological releases at all; it analyzed scenarios in which mitigation was unsuccessful but pronounced them unlikely. Perhaps most curious was the NRC’s decision to assume that lower doses of radiation are not harmful—an assertion at odds not only with a broad scientific consensus but with the NRC’s own regulatory guidelines.

The fog grew even thicker when the time came to decide how the study results would be presented. First, the commissioners decreed that figures such as the numbers of latent cancer fatalities caused by an accident should not appear. Instead, the report would provide only a figure diluted by dividing the total number of cancer deaths by the number of all people within a region. For instance, if the study predicted one hundred cancer deaths among a population of one million, the individual risk would be 100 ÷ 1,000,000, or one in ten thousand. So rather than saying hundreds or even thousands of cancer deaths would result from an accident—guaranteed to grab a few headlines—the report would state a less alarming conclusion. And since the NRC’s probabilistic risk assessment studies estimated that the chance of such an accident was only about one in one million per year, the overall risk to an individual—probability times consequences—would be far less. To use the same example, it would be one million times smaller than one in ten thousand, or a mere one in ten billion per year: a number hardly worth contemplating. The communication strategy for SOARCA appeared to be taking its inspiration from the old Reactor Safety Study and its discredited comparisons of the risks of being killed by nuclear plant accidents versus meteor strikes.
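The dilution arithmetic described above is simple enough to sketch. The inputs below are the chapter's own illustrative figures (one hundred cancer deaths in a population of one million, an accident frequency of one in a million per year), not actual SOARCA outputs:

```python
# Sketch of the "risk dilution" arithmetic described above.
# Inputs are the chapter's hypothetical figures, not SOARCA results.

cancer_deaths = 100        # predicted latent cancer fatalities
population = 1_000_000     # people in the affected region
accident_freq = 1e-6       # estimated accident frequency (per year)

# Step 1: divide total deaths by total population -> individual consequence
individual_risk_given_accident = cancer_deaths / population  # 1e-4

# Step 2: multiply by accident probability -> annual individual risk
annual_individual_risk = individual_risk_given_accident * accident_freq  # 1e-10

print(f"Risk given accident: 1 in {int(1 / individual_risk_given_accident):,}")
print(f"Annual risk:         1 in {int(1 / annual_individual_risk):,}")
# -> Risk given accident: 1 in 10,000
# -> Annual risk:         1 in 10,000,000,000
```

The same one hundred projected deaths thus end up expressed as a one-in-ten-billion annual figure, which is the presentational effect the commissioners were after.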

But there was more obfuscation. The NRC would only reveal the values of these results for average weather conditions, and not the more extreme values for worst-case weather; this was the same strategy of evasion that had gotten the agency in hot water with Congressman Markey and the media back in the days of the CRAC2 report.

The commissioners also told the researchers to drop their original plans to include calculations of land contamination and the associated economic consequences. Earlier, the project staff had carried out such calculations for terrorist attacks at two reactor sites with high population density—Indian Point, north of New York City, and Limerick, northwest of Philadelphia—but apparently decided they did not want that kind of information to be made public. According to a staff memo, the models that had been used produced “excessively conservative” results—meaning, in NRC parlance, that the researchers thought the damage estimates were unrealistically high.7 The staff said the models needed to be updated to obtain a “realistic calculation.”

Some issues that emerged as the study progressed did not fit into the predetermined narrative. For instance, it was hard to explain why an earthquake or a major flood striking the Peach Bottom and Surry sites, each featuring side-by-side reactors, could be assumed to damage only one unit and leave the other unscathed. Logically, an accident involving both units would not only increase the source term, or amount of radioactive materials released to the environment, but also force operators to deal simultaneously with two damaged reactors. And, as the analysts noted, “a multiple-unit SBO [station blackout] may require more equipment, such as diesel-driven pumps and portable direct current generators, than what is currently available at most sites.”

The analysts calculated that these scenarios had alarmingly high probabilities. But instead of following the study guidelines and including the scenarios, the staff decided in 2008 to recommend that the case of dual units be considered as a “generic issue”—a program where troublesome safety concerns are sent to languish unresolved for years. (The NRC was still pondering the recommendation three years later when Fukushima demonstrated that multiple-unit accidents were not merely a theoretical concern.)

The NRC’s independent review group, the Advisory Committee on Reactor Safeguards, was not amused by what appeared to be a blatant attempt to bias the SOARCA study. In particular, it objected to the staff’s seemingly arbitrary approach for choosing accident scenarios to analyze. It pointed out that SOARCA’s good news safety message could be less the result of improved plant design or operation and more the result of “changes in the scope of the calculation.” Simply put, SOARCA had analyzed different accident scenarios from those used in earlier studies like CRAC2, and therefore it could not be directly compared with them. Although the final SOARCA results might look better, that was because SOARCA was deliberately excluding the very events that could cause a large, fast-breaking radiation release of the kind CRAC2 had evaluated.

The NRC rebuffed its Advisory Committee’s criticism and continued on the course the commissioners had set. In deference to public complaints that the study was being conducted in secret with no independent quality control, the NRC agreed to form a peer review committee. However, the NRC chose all the members, and the committee’s meetings were also held in secret. The public would just have to trust that the committee was doing a good job.

When the NRC staff presented preliminary results of the study to the Advisory Committee in November 2007, it appeared that the staff had successfully obtained the conclusions its bosses wanted. First, the staff judged that all the identified scenarios could reasonably be mitigated—that is, plant workers, using B.5.b measures and severe accident management guidelines (SAMGs), would be able to stop core damage or block radiation releases from the plant. Even if they failed to prevent the accident from progressing, the news would not be too dreadful: the release of radioactive material would occur later and likely be much smaller than past studies had assumed, resulting in “significantly less severe” off-site health consequences. And finally, the NRC staff was so confident that it stopped the simulations after forty-eight hours, assuming that by then the situation would have been stabilized.

The results that the NRC staff presented to the Advisory Committee were striking. While CRAC2 had found that following a worst-case or “SST1” release,8 acute radiation syndrome would kill ninety-two people at Peach Bottom and forty-five at Surry, SOARCA found the number of deaths to be exactly zero at both sites. There was no magic—or fundamental improvement in reactor safety—behind this stunning difference. The NRC had just fiddled with the clock. In the CRAC2 study, the radiation release began ninety minutes after the start of the accident, before most of the population within ten miles of the plant had time to evacuate, putting many more at risk. But the NRC had chosen accidents for SOARCA that unfolded more slowly. As a result, for most of the SOARCA scenarios, analysts assumed that the population within the ten-mile emergency planning zone would be long gone before any radiation was released. That way, by the time a release did occur, people would be too far away to receive a lethal dose. This was not an apples-to-apples comparison to the earlier study.

Harder to understand were the far lower numbers of cancer deaths projected by the SOARCA analysis, because even people beyond the ten-mile emergency planning zone could receive doses high enough to significantly increase their cancer risk. Whereas CRAC2 estimated 2,700 cancer deaths at Peach Bottom and 1,300 at Surry for this group, SOARCA project staff told the Advisory Committee that they had instead found twenty-five and zero cancer deaths, respectively.

That, too, involved sleight of hand—and some shopping around to find a convenient statistic. Despite a widespread scientific consensus that there is no safe level of radiation, the NRC staff decided to assume that such a level indeed existed: no cancers would develop until exposures reached five rem per year (or ten rem in a lifetime). Any exposure below that would be harmless.

At a 2007 Advisory Committee briefing closed to the public, Randy Sullivan, an emergency preparedness specialist for the NRC, let slip one reason why the SOARCA staff saw the need to use such an unconventional assumption. Apparently, the staff didn’t like the numbers it would get if it used the widely endorsed linear no-threshold hypothesis (LNT), which assumes that any dose of radiation, no matter how low, has the potential to lead to cancer. It was an easy choice: assume a high threshold, predict many fewer cancers. Otherwise, the number of cancer deaths predicted by SOARCA would be so large it could frighten people.

At the briefing, Sullivan acknowledged: “We could easily do LNT, just go ahead, issue the source term, calculate it out to 1,000 miles, run it for four days, assess the consequences for, I don’t know, 300 years and say 2 millirem times [the population within] 1,000 miles of Peach Bottom. What is that? Eighty million people… . We’re going to kill whatever. This is a closed meeting. Right? I hope you don’t mind the drama.

“So then we’ll say that our best estimate is that there will be many, many thousands … you’ll have 2 millirem times 80 million people and you’ll claim that you’re going to kill a bunch of them.”

The five rem per year threshold ultimately proved too misleading even for the SOARCA team itself to tolerate, so it eventually evaluated a range of thresholds, including the LNT assumption of zero. But other optimistic assumptions enabled the team to keep the numbers small. A 2009 update to the commissioners informed them that the study continued to find off-site health consequences “dramatically smaller” than those projected by CRAC2.

SOARCA was supposed to be a three-year project. But by the time of the SOARCA session at the March 2011 Regulatory Information Conference it had dragged on for nearly six years. (Commissioner McGaffigan would not live to see the fruits of the project’s labors—he died in 2007.) The repeated postponements of the completion date had many causes: the methodological problems the Advisory Committee had criticized, new analyses requested by the peer review panel, contractor troubles, and project mismanagement. But perhaps the biggest time sink was the need to address more than one thousand comments from other NRC staff, who also had trouble swallowing some of the SOARCA methodology.

The unanticipated volume of staff comments was a clear indication of internal discomfort with the study. In January 2011, after being informed of yet another delay requested by the staff, Office of Research director Brian Sheron wrote in an e-mail, “[I]f we miss this date, I suggest we all start updating o[u]r resumes.”

One of the major internal disagreements had to do with SOARCA’s assumptions regarding so-called mitigated scenarios—in plain English, how fast and successfully operators could use the emergency tools at hand to wrestle an accident to a safe conclusion and avoid a radiation release. Could workers really start and operate the RCIC system at Peach Bottom without generator or battery power, as the SOARCA project staff had confidently concluded? Could they hook up and run portable pumps and generators to run safety systems for forty-eight hours (the limit of the SOARCA analysis)?

As early as 2007, project contractors at Sandia National Laboratories, which was reviewing the SOARCA analysis, were questioning whether all the emergency equipment and procedures would perform as the NRC team predicted. Sandia wanted what it called a “human reliability analysis.”

That year Shawn Burns, a senior technical staff member at Sandia, wrote what proved to be a rather prescient letter to the NRC:

The principal initiating event for the Long Term Site Blackout [at Peach Bottom] … is a seismic event of sufficiently large magnitude … to cause massive and distributed structural failures … the realism of relocating relatively large and heavy mitigation equipment … from their storage location(s) through rubble and other obstacles to their connection points in the plant is difficult to support. Similar questions would apply to the other plausible initiating events, including massive internal flood or large internal fire.

He went on to raise questions about other potential difficulties, including unavailability of backup cooling water supplies, electrical connection problems, difficulties with instrumentation, and dead batteries.

But the NRC commissioners had already spoken on this issue: emergency strategies and equipment—the so-called B.5.b measures—would work in the SOARCA scenarios. Whether this was a reasonable assumption did not seem to figure into their instructions.

The issue bothered many within the ranks of the NRC, however. Computer models were one thing; actual hands-on experience at the nation’s reactors was an entirely different matter. Senior reactor analysts who work in the NRC’s regional offices for the Office of Nuclear Reactor Regulation, and who have all had previous experience as inspectors in the field, had a less optimistic view of the feasibility of these measures than the researchers running computer models at NRC headquarters. The reactor analysts had seen the B.5.b equipment for themselves.

The Advisory Committee on Reactor Safeguards also expressed doubts about the way the SOARCA report seemed to take the success of the B.5.b mitigation measures for granted. The committee asked whether the project staff had actually “walked down” the emergency measures to determine if they’d work under extreme accident conditions. The answer was no. Instead, the staff had based its conclusions on so-called tabletop demonstrations—that is, moving pieces around a toy model of each plant.

To quiet the skeptics, SOARCA staff visited Peach Bottom in Pennsylvania and the Surry plant in Virginia and examined the actual equipment. Internal dissent continued, but the SOARCA staff questioned the validity of its critics’ concerns and concluded that, based on those plant walkdowns, the likelihood of everything working was even greater than previously thought.

It was easy to understand why the project team wanted to believe that plant workers had the ability to mitigate the severe accidents that were analyzed: everything else led to core meltdowns—albeit more slowly than previous studies had found.

An example was the SOARCA staff’s analysis of a hypothetical “long term” station blackout at Peach Bottom, which is located about forty-five miles from Baltimore and eighty-five miles from Washington, DC. The accident scenario proceeded through a grim sequence of events. First, all electrically powered coolant pumps would stop working. Using batteries, operators could start up the steam-powered RCIC system, but after four hours, the batteries would fail, and after another hour, so would the RCIC. The temperature and pressure within the reactor vessel would quickly rise, and the safety relief valves on the vessel would eventually stick open, steadily releasing steam. With no makeup water available to replace the steam, the fuel would be uncovered in a matter of minutes.

At about nine hours, the fuel would start to melt, eventually collapsing and falling to the bottom of the reactor vessel. After about twenty hours, the molten fuel would breach the vessel bottom and spill onto the containment floor, where it would spread out and rapidly melt its way through the steel containment liner. A few minutes later, hydrogen leaking from the containment into the reactor building would cause an explosion, opening up the refueling bay blowout panels and blowing apart the building’s roof.

If batteries were not available for those first four hours—if they were flooded by the adjacent Susquehanna River, for instance—the resulting “short term” station blackout would be even worse. In that case, the models predicted that core damage would start after one hour and the containment would fail at eight hours.

In either case, once the containment failed a plume of radioactivity would escape the damaged plant and overspread the area. As expected, the calculations predicted no early fatalities from acute radiation syndrome. But the latent cancer fatality numbers told another story. Early SOARCA data supposedly had shown that health consequences were “dramatically smaller” than those predicted by CRAC2. After the SOARCA staff was repeatedly criticized for making assertions like these without ensuring that the two studies were consistently compared, analysts ran new calculations to see what would happen if SOARCA’s methodology were applied to a CRAC2-sized release at Peach Bottom and compared to the SOARCA result for a station blackout. The outcome? The differences in latent cancer risk were not that dramatic.9

Within fifty miles of Peach Bottom, the estimated number of cancer deaths caused by a short-term station blackout, averaged over weather variations, would be 1,000, compared to the 2,500 projected by CRAC2 for a much larger radiation release. From a statistical standpoint, given the uncertainties, the difference was meaningless. And from a human standpoint, 1,000 deaths, while less than 2,500, was still a pretty unacceptable health consequence. Rather than discrediting the old CRAC2 analysis, the SOARCA study had in important respects validated it.10

For the small crowd gathered at the Marriott on March 10, 2011, to hear an update on SOARCA, some key details were missing. The NRC was still unwilling to release the numbers publicly. Anyone who wanted to know what was going on had to resort to Kremlinology to interpret the subtle changes in the statements that the NRC approved for release.

Two years earlier, at the last SOARCA session at the RIC, Jason Schaperow of the study team had reported a preliminary conclusion that “releases are dramatically smaller and delayed” from those projected in CRAC2, news that hardly surprised anyone in the room who had been watching the data contortions over the years. But in 2011, the statement had subtly changed. Patricia Santiago, the latest of several branch chiefs to oversee the SOARCA study, presented a bullet point that “for cases assumed to proceed unmitigated, accidents progress more slowly and usually [emphasis added] result in smaller and more delayed radiological releases than previously assumed/predicted.”

The word usually was key here. It left the door open to the possibility that there were scenarios in which CRAC2’s predictions had not been so far off base after all: perhaps, for instance, a station blackout at Peach Bottom.

Nor did Santiago repeat the now familiar statement about dramatically smaller numbers of cancer deaths. She noted only that “individual latent cancer risk for selected scenarios generally comes from population returning home after [the] event is over.” In other words, most of the dose to the population would be received not during the early stages of the accident by people exposed to radioactive plumes, but long after the accident was over by evacuees who had no choice but to return to live in their now contaminated homes—hardly a consolation.

However, Santiago did highlight one SOARCA conclusion that had not changed over the years: the accident scenarios analyzed as part of the study “could reasonably be mitigated, either preventing core damage or delaying/reducing the radiation release.”

The next morning, all the tabletop models and computer runs and fingers-crossed assumptions that supported that conclusion would face their first real-world test. At 11:40 a.m. on March 11, Jason Schaperow sent an e-mail to Santiago, his supervisor. “Today’s Japanese earthquake seems to have caused one of the SOARCA scenarios (long-term station blackout).”

“On this morning’s news they said no release,” Santiago replied. “Time will tell.”

Computer modelers and analysts love to obtain real-time data that they can use to validate the predictions of their models—except, perhaps, if they are simulating disasters. Soon, the SOARCA team would watch as many of the catastrophic events they had deemed improbable unfolded not on a computer screen, but on a television screen. And in the process the limitations of the SOARCA approach—the project in which the NRC had invested so much time and money to win over a skeptical public—would become evident.

Over the years as the SOARCA study progressed, it had revealed the potential for a natural disaster to cause a truly horrific event: an accident that involved multiple reactors, rendered most emergency equipment useless, and contaminated large areas with radiation plumes far beyond emergency planning zones due in part to the vagaries of weather. Yet instead of taking action to prevent such an accident, the NRC convinced itself that even if the accident did happen, the consequences would be minor. Difficult issues were disregarded or put off for another day.

If the NRC had undertaken this study not as an exercise in reinforcing existing biases, but as a roadmap for identifying and fixing safety weaknesses from America to Japan, SOARCA’s most dire predictions might not have made the transition from a PowerPoint presentation to an event that shocked the world.