
Fukushima: The Story of a Nuclear Disaster (2015)

APPENDIX

THE FUKUSHIMA POSTMORTEM: WHAT HAPPENED?

Accident modelers from TEPCO, the U.S. national laboratories, industry groups, and other organizations gathered in November 2012 at a meeting of the American Nuclear Society (ANS) to present the results of their attempts to simulate the events at Fukushima and reproduce what was known about them.

Like any postmortem, the goal was to glean as many answers as possible about the causes of the events at Fukushima Daiichi. But answers proved troublingly elusive.

One of the first difficulties encountered by the analysts was the lack of good information about the progression of the accidents. The damaged reactors were still black boxes, far too dangerous for humans to enter, much less conduct comprehensive surveys of, and reliable data on their condition was sparse. In some cases, analysts had to fine-tune their models using trial and error, essentially playing guessing games about what exactly had happened within the reactors.

Even so, the computer simulations could not reproduce numerous important aspects of the accidents. And in many cases, different computer codes gave different results. Sometimes the same code gave different results depending on who was using it.

The inability of these state-of-the-art modeling codes to explain even some of the basic elements of the accident revealed their inherent weaknesses—and the hazards of putting too much faith in them.

Sometimes modelers were frustrated by a lack of essential data. For example, when water-level measurements inside the three reactors were available, they were usually wrong. The readings indicated that water levels were stable when they were actually dropping below the bottom of the fuel. This happened because the gauges were not calibrated for the extreme temperatures and pressures that developed during the accident. Although the problem should have been obvious at the time, TEPCO released the erroneous data publicly without questioning it.
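To see how such a miscalibration misleads, consider a simplified differential-pressure level gauge of the kind used in boiling water reactors, which compares the hydrostatic head of a water-filled reference column against the head of the water actually in the vessel. The minimal sketch below uses purely illustrative numbers (the water density, column height, and levels are assumptions, not plant data) to show how the instrument over-reads once its reference column partially boils away:

    # A minimal sketch of a reference-leg level gauge. All numbers are
    # illustrative assumptions, not Fukushima plant data.

    RHO_G = 740.0 * 9.81  # rough weight density of hot water: kg/m^3 times m/s^2

    def indicated_level(actual_level_m, ref_leg_design_m, ref_leg_actual_m):
        """Level inferred from differential pressure, assuming the
        calibration expects a completely full reference column."""
        # Measured differential: head of the (possibly depleted) reference
        # column minus head of the water actually in the vessel.
        dp = RHO_G * (ref_leg_actual_m - actual_level_m)
        # The instrument converts dp back to a level as if the reference
        # column were still full.
        return ref_leg_design_m - dp / RHO_G

    print(indicated_level(2.0, 10.0, 10.0))  # 2.0 m: correct while the leg is full
    print(indicated_level(2.0, 10.0, 5.0))   # 7.0 m: reads high after boil-off

A gauge behaving this way would report a steady, reassuring level even as the true level fell below the fuel, which is consistent with the readings TEPCO released.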

The lack of reliable water levels meant that analysts in Japan and elsewhere did not know then, and still do not know, how much makeup water was entering the reactor vessels at what times during the accident, a critical piece of information for understanding the effectiveness of emergency water-injection strategies. Different assumptions for “correcting” the unreliable data yielded significantly different results.

At the ANS meeting, researchers from Sandia National Laboratories presented results they obtained using the computer code called MELCOR, designed by Sandia for the NRC to track the progression of severe accidents in boiling water and pressurized water reactors. For Unit 1, the event was close to what’s called a “hands-off” station blackout. From the time the tsunami struck, essentially nothing worked. The loss of both AC and battery power disabled the isolation condensers and the high-pressure coolant injection system (HPCI), as well as the instruments for reading pressure and water level. Counting from the time of the earthquake, the core was exposed after three hours, began to undergo damage after four hours, and by five hours was completely uncovered.
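The broad shape of that timeline can be appreciated with a back-of-the-envelope energy balance. The figures below (decay heat, water inventory, latent heat) are illustrative assumptions rather than MELCOR inputs, but they show why a few hours is the right order of magnitude:

    # Hedged back-of-envelope estimate: how fast decay heat boils off the
    # vessel's water once all cooling is lost. All values are illustrative.

    H_VAP = 1.5e6   # J/kg, rough latent heat of vaporization at ~70 bar (assumption)

    def hours_to_boil_off(water_mass_kg, decay_heat_mw):
        """Hours to boil away a saturated water inventory at constant decay heat."""
        return water_mass_kg * H_VAP / (decay_heat_mw * 1e6) / 3600.0

    # Assume roughly 10 MW of decay heat (on the order of 1 percent of the
    # reactor's thermal power an hour or so after scram) and about 100 tonnes
    # of water above the core -- both illustrative guesses.
    print(hours_to_boil_off(100_000, 10.0))  # ~4 hours: the same order as MELCOR's timeline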

At nine hours, according to this analysis, the molten core slumped into the bottom of the reactor vessel, and by fourteen hours—if not sooner—it had melted completely through. By the time workers had managed to inject emergency water into the vessel at fifteen hours, much of the fuel had already landed on the containment floor and was violently reacting with the concrete.

But even a straightforward “hands-off” blackout turned out to be too complex for MELCOR to fully simulate. For instance, although the code did predict that the containment pressure would rise high enough to force radioactive steam and hydrogen through the drywell seals and into the reactor building, its calculation of the amount of hydrogen that collected at the top of the building “just missed” being large enough to cause an explosion, according to Randy Gauntt, one of the study’s authors.
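The margin involved is thin. Hydrogen in air becomes flammable at roughly 4 percent by volume, so a simulation’s verdict can flip on a few tens of kilograms of gas. The sketch below makes the point with an assumed room volume and leak masses; none of the numbers come from the Sandia study:

    # Hedged sketch: hydrogen concentration versus the commonly cited
    # ~4 vol% lower flammability limit (LFL). Geometry and masses are
    # illustrative only, not values from the MELCOR analysis.

    R = 8.314          # J/(mol*K), ideal gas constant
    M_H2 = 2.016e-3    # kg/mol, molar mass of hydrogen

    def h2_volume_fraction(h2_mass_kg, room_volume_m3, temp_k=300.0, pressure_pa=101325.0):
        """Volume fraction of hydrogen mixed into the air initially
        filling the room, treating both gases as ideal."""
        moles_h2 = h2_mass_kg / M_H2
        moles_air = pressure_pa * room_volume_m3 / (R * temp_k)
        return moles_h2 / (moles_h2 + moles_air)

    # Assume a ~7,000 m^3 refueling bay (illustrative) and a range of leaks.
    for mass in (10, 25, 50):
        frac = h2_volume_fraction(mass, 7000.0)
        flag = "flammable" if frac >= 0.04 else "below LFL"
        print(f"{mass:3d} kg H2 -> {frac:.1%} ({flag})")

Under these assumptions, 10 kilograms of hydrogen stays well below the flammability limit while 25 kilograms just crosses it, which illustrates how a code could plausibly “just miss” predicting an explosion that actually occurred.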

A U.S. industry consultant, David Luxat, presented a simulation using the industry’s code, called MAAP5. His simulation also predicted that the conditions would not be right for a hydrogen explosion at the time when one actually occurred. His speculation: extra hydrogen leaked from the vent into the reactor building. Ultimately, the explosion at Unit 1 remained something of a mystery.

Another issue that experts disagreed on was what caused the Unit 1 reactor vessel to depressurize suddenly around six or seven hours into the accident. Sandia argued that it was probably a rupture in one of the steam lines leading from the vessel, or a stuck-open valve; others believed it was a failure of some of the tubes used to insert instruments into the vessel to take readings. But no code was capable of singling out one of these failure modes over the others, and no one knew what had actually taken place anyway. Such confirmation will have to await a time when it is safe to enter the containment and conduct forensic examinations. Even then, it is far from certain that the history of the accident will be fully reconstructed, or all its lessons revealed.

The situation was even murkier when trying to understand the more complex events that led to the meltdowns at Unit 3 and then Unit 2.

For Unit 3, which never fully lost battery power, operators were able to run the reactor core isolation cooling (RCIC) system until it shut down, and then the HPCI system until they deliberately shut it down. Although the analysts generally agreed that core damage occurred sometime before 9:00 a.m. on March 13, there was much disagreement about how extensively the core was damaged and whether it had in fact melted through the reactor vessel. The answers depended on the amount of water that actually got into the vessel from the operations of RCIC, HPCI, fire pumps, and fire engines. Various analysts questioned whether RCIC and HPCI operated well under suboptimal conditions, and whether the pumps ever had sufficient pressure to inject meaningful flows of water into the core. Assuming different amounts of water led to different conclusions. In the final analysis, no one could predict with confidence whether or not there was vessel failure.

The explosion at Unit 3 was another puzzle. It appeared larger than the one at Unit 1, but under the assumptions for water injection rates provided by TEPCO, neither the Sandia simulation nor the industry’s found that enough hydrogen was generated to cause any explosion at all.

The analysis of Unit 2 yielded even more mysteries. During the actual accident, Unit 2 was initially a success story compared to its siblings. Even though it lost battery power, the reactor’s RCIC system operated for nearly three days. Under conventional modeling assumptions, it would have failed around an hour after the eight-hour batteries ran out. The analysts therefore had to force the RCIC system in their models to keep operating under abnormal conditions, even though they didn’t understand why it had actually done so, or how well it had worked. After the RCIC system failed, workers took several hours to start seawater injection; by then, core damage was well under way. As with Unit 3, there was so much uncertainty about the amount of water that reached the core that no simulation could predict with any degree of confidence how much fuel was damaged and whether it melted through the vessel.

Also unexplained was the fact that, although Unit 2’s RCIC system was transferring heat from the core to the torus, the measured pressure within the torus was not increasing as much as it should have been. In fact, some of the models had to assume there was a hole in the torus just to make sense of the data. This led to speculation that there had indeed been a hole in the torus from early on in the accident or that the torus room had flooded with water from the tsunami, which helped to cool it down.

If there was a hole, it would have had to form well before workers heard the mysterious noise at Unit 2 on the morning of March 15 that made them think an explosion had damaged the torus. That was the only way to explain the pressure data. In any event, TEPCO later concluded that the noise didn’t originate at Unit 2 at all, but was an echo of the explosion at Unit 4 that occurred at approximately the same time. The rapid pressure drop that instruments recorded in the Unit 2 torus was attributed by TEPCO to instrument failure. A remote inspection made months after the accident did not reveal any visible damage to the torus.

So Unit 2 was apparently the only unit operating at the time of the earthquake that did not experience a violent hydrogen explosion. One possible reason for this is in the “silver lining” category: the opening created in the Unit 2 reactor building by the Unit 1 explosion prevented the buildup of an explosive concentration of hydrogen gas.

As for the Unit 4 explosion, most evidence indicates that the culprit was not hydrogen released from damaged fuel in the spent fuel pool. In fact, the pool was never as seriously in distress as the NRC, TEPCO, and many people around the world feared.

Inspections of the spent fuel in the pool using remote cameras did not reveal the kind of damage that would have been apparent if any fuel assemblies had overheated and caught fire. And samples of water from the pool did not contain the radioactivity concentrations that would accompany damaged fuel. TEPCO was confident enough in this finding that it pursued another theory for the explosion: hydrogen had leaked into the Unit 4 building during the venting operations in Unit 3 next door.

Like Units 1 and 2, Units 3 and 4 shared a stack for gas exhaust, and some of the piping from the two units was interconnected. Although closed valves ordinarily would have kept one reactor’s exhaust from getting into the other, the valves were not operating properly as a result of the blackout. TEPCO found further evidence for this theory when it measured higher radiation levels on the downstream side of the gas filters at Unit 4 than on the upstream side, indicating that radioactive gas flowed from the outside of the building to the inside.

This didn’t mean, however, that the Unit 4 pool was never in danger. TEPCO believed that the pool had lost up to ten feet of water early in the accident as the result of sloshing or some other unknown mechanism, causing the water temperature to shoot up toward the boiling point. Fortuitously, there was a second large pool of water—the reactor well—sitting on top of the reactor vessel. The reactor well is connected to the spent fuel pool through a gate and is filled during refueling to keep fuel rods submerged at all times. When the water in the spent fuel pool dropped, water flowed from the well into the pool, buying time until an effective external water supply was established.
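A rough energy balance shows how much time that extra inventory could buy. The decay heat, volumes, and starting temperature below are illustrative assumptions, not TEPCO figures; the point is only that added water stretches the time to boiling roughly in proportion:

    # Hedged sketch: how extra water inventory buys time in an uncooled
    # spent fuel pool. All values are illustrative assumptions.

    CP = 4186.0   # J/(kg*K), specific heat of water
    RHO = 1000.0  # kg/m^3, density of water

    def hours_to_boil(volume_m3, decay_heat_mw, start_temp_c=30.0):
        """Hours for an uncooled pool to heat from start_temp_c to 100 C."""
        mass = RHO * volume_m3
        energy = mass * CP * (100.0 - start_temp_c)
        return energy / (decay_heat_mw * 1e6) / 3600.0

    # Unit 4's pool held a full, recently offloaded core, so its decay heat
    # was relatively high; assume ~2 MW and ~1,400 m^3 of pool water.
    print(hours_to_boil(1400.0, 2.0))           # ~57 hours on the pool's own inventory
    print(hours_to_boil(1400.0 + 1000.0, 2.0))  # ~98 hours with reactor-well water added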

The Unit 2 spent fuel pool may not have been as lucky. It was not one of the pools that commanded much attention during the early days of the accident. Yet when sampling of the pool water was conducted in April 2011, higher than expected levels of radioactivity were detected. Even more startling was a relative absence of iodine-131 in the pool water compared to the amount of cesium-137. If the radioactivity had originated in the reactor cores, more iodine-131 would have been detected. This led officials in the White House Office of Science and Technology Policy to conclude that there was either mechanical or thermal damage to some of the spent fuel in the pool.
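The reasoning rests on simple decay arithmetic. Iodine-131 has a half-life of about eight days, while cesium-137’s is about thirty years, so fuel discharged from a reactor months earlier would retain essentially no iodine-131 relative to its cesium. A short sketch, with arbitrary elapsed times chosen only for illustration:

    # Hedged sketch: why the I-131 / Cs-137 ratio indicates fuel age.
    # Half-lives are standard nuclear data; elapsed times are arbitrary.

    import math

    T_HALF_I131 = 8.02              # days
    T_HALF_CS137 = 30.07 * 365.25   # days

    def remaining_fraction(t_half_days, elapsed_days):
        """Fraction of an isotope's initial activity left after decay."""
        return math.exp(-math.log(2) * elapsed_days / t_half_days)

    for elapsed in (5, 30, 90):
        ratio_scale = (remaining_fraction(T_HALF_I131, elapsed)
                       / remaining_fraction(T_HALF_CS137, elapsed))
        print(f"{elapsed:3d} days after discharge: I-131/Cs-137 scaled by {ratio_scale:.1e}")

After ninety days the ratio has shrunk by more than three orders of magnitude, so radioactivity nearly devoid of iodine-131 points to older spent fuel rather than to the recently operating cores.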

Given all the uncertainty, there is little wonder that analysts still do not know exactly how radioactivity was released into the environment, when it was released, and where it came from. There were multiple and sometimes overlapping periods of radioactive releases from the different units.

Most experts agree that large releases on March 14 and 15, coupled with precipitation, were ultimately responsible for the extensive area of contamination stretching northwest from the plant to Iitate village. TEPCO has argued that the venting operations did not contribute significantly to radiation releases and therefore that the reactor toruses were effective in scrubbing fission products from the steam that was vented. The company claims that Unit 2 was the source of the largest release, coming not from the torus through a hole that may or may not exist but from the drywell, which underwent a rapid drop in pressure during the day on March 15. In this view, even the reactor building explosions at Units 1 and 3, as dramatic as they appeared, did not contribute as much to off-site releases as the Unit 2 drywell leak did, mainly because the buildings could do little to contain radiation even when they were intact.

However, other analysts have looked at the same data and concluded that the venting did cause large releases and that scrubbing in the torus was not effective. This is a crucial technical issue for the U.S. debate over whether filters should be installed on the hardened vents at Mark I and Mark II BWRs. Until forensic investigations narrow down the various possibilities, though, all of these claims remain in the realm of speculation.

Ultimately, based on off-site measurements and meteorological data, it appears that Fukushima Daiichi Units 1 through 3, on average, released to the atmosphere less than 10 percent of the radioactive iodine and cesium that the three cores contained. That would be generally consistent with the results from computer modeling of station blackouts in studies like SOARCA. But there are so many unexplained features of the accident right now that the similarity in results may be mere coincidence.

What is clear is that, in terms of the amount of radiation released, the Fukushima Daiichi accident was far from a worst-case event. This meant that the direst scenarios that the National Atmospheric Release Advisory Center (NARAC) estimated for Tokyo and parts of the United States, based on much higher radiation releases, never occurred. Fukushima will not challenge Chernobyl’s ranking as the world’s worst nuclear plant accident in terms of radioactive release, although it will remain classified as a level 7 accident by the IAEA.

The difficulties analysts have explaining what happened in 2011 at Fukushima are only compounded when they use the same computer models to predict the future. Simply put, a model that cannot fully explain yesterday’s accident cannot be trusted to accurately simulate tomorrow’s. Yet the nuclear establishment continues to place ever-greater reliance on these codes to develop safety strategies and cost-benefit analyses.