Sun in a Bottle: The Strange History of Fusion and the Science of Wishful Thinking - Charles Seife (2008)
Chapter 7. SECRETS
Everything secret degenerates, . . . nothing is safe that does not show how it can bear discussion and publicity.
The cold-fusion affair captured the imagination of the public. Two chemists, two outsiders, claimed to have succeeded with a cheap, tabletop experiment where legions of physicists with hundreds of millions of dollars had failed.
Fusion energy is hard. Even if you manage to get a fusion reaction going in a small device—and a number of people have succeeded in doing just that59—“tabletop” devices all consume more energy than they produce. The more researchers experiment with fusion, the more most of them are convinced that the best—if not the only—way to create a fusion reactor is with a hot plasma, confined and compressed by some powerful force. Nowadays, that leaves only two realistic options: big, expensive magnets or big, expensive lasers.
Both approaches require billions of dollars and thousands of scientists. And both have secrets. Laser fusion’s secret is a matter of national security; magnetic fusion’s secret is a matter of some embarrassment. Both secrets threaten the future of fusion energy.
In the late 1970s, before Shiva came on line, laser scientists at Livermore were extremely confident that they were on the fast track to fusion energy. They believed that Shiva, with its twenty beams, would likely achieve breakeven: the machine would produce as much energy in fusion as was poured into the system by its lasers. They were sure that they would produce a fusion reactor by the century’s end, and they were not ashamed to tell the press about it. In 1978, shortly before all the Shiva experiment’s lasers were turned on, Livermore’s physicists were talking to the press about having a fusion power plant working in the late 1980s or early 1990s.
Despite the numerous problems that the physicists were encountering with laser fusion—the loss of energy to electrons, the Rayleigh-Taylor instability, and numerous other effects that made it harder than expected to compress and heat a deuterium pellet—they felt that they had good reason for optimism. It was called LASNEX.
LASNEX is a very intricate computer program meant to simulate what happens in the heart of a laser fusion experiment, and though the computer program is still classified, a few details are available outside the fusion community. Scientists apparently began working on it in the late 1960s or early 1970s. When Livermore’s John Nuckolls wrote about laser fusion in Nature in 1972, he referred to an early version of the code, and even then it was relatively advanced. LASNEX described every possible interaction of light, electrons, and nuclei that the designers could imagine. It told them how a cold pellet of matter begins to compress under laser light or x-ray emissions, how hot electrons bleed off energy, how fusion in the belly of the pellet causes it to expand. Physicists could tinker with various conditions, changing the size or contents of the pellet, the color and intensity of the light that shines upon it, the number and location of laser spots hitting a target or a hohlraum. They could, in short, run lots and lots of laser experiments before ever building a real-life laser device. To laser fusion scientists, LASNEX was a secret weapon. Nuckolls likened LASNEX to “the breaking of an enemy code. It tells you how many divisions to bring to bear on a problem.”
In the late 1970s, the LASNEX simulations told Nuckolls and the Livermore team that not that many divisions—not all that much laser power—needed to be brought to bear on the pellet to reach breakeven. The Shiva laser system, with its twenty beams, could pour about 10,000 joules into a pellet within a billionth of a second.60 According to LASNEX, this energy should be enough to ignite the pellet, and the reaction would produce as much energy as it consumed.
LASNEX was wrong. Very wrong. After a year’s worth of experimentation, the New York Times reported that “energy release at the giant $25 million Shiva fusion machine at the Lawrence Livermore National Laboratory in California fell short of the more optimistic estimates by a factor of 10,000.” LASNEX was quite a few divisions short of a victorious army.
What went wrong? Because LASNEX is classified, it is hard to tell for certain. But what is certain is that a computer simulation is only as good as what its programmers know. When they write the code, they try to include everything they can think of, but there might be, and usually are, unanticipated phenomena lurking around the corner. It’s probable that as researchers experimented with ever more powerful lasers, they discovered difficulties that they didn’t expect and that weren’t programmed into the code. LASNEX wasn’t a perfect reflection of reality, because its programmers didn’t have perfect understanding.
Furthermore, even if every possible phenomenon were somehow programmed into LASNEX, the code wouldn’t necessarily predict the actual progression of a given experiment. There is a built-in limitation in the LASNEX code: it is only two-dimensional.
Running a LASNEX simulation takes an enormous amount of computing power; even the simplest simulated experiments require supercomputers to chug away for a long time, moving imaginary electrons and nuclei and photons around in vast memory banks. The calculations are so complex, in fact, that a full-scale, three-dimensional simulation was way too much for the computers to handle. Instead, the programmers decided to simulate only a flat, two-dimensional slice of an imploding pellet. It was a necessary simplification; it brought the calculations into the realm of possibility. But at the same time, it meant that the LASNEX code couldn’t do a true simulation. If the real three-dimensional implosion behaved even slightly differently from a two-dimensional mockup, then LASNEX could not predict its behavior very well.
Nevertheless, LASNEX’s programmers were justly proud of their accomplishment. Under many circumstances, the code did predict precisely the outcome of a given experiment. However, once Shiva became fully operational, it became dreadfully apparent that LASNEX didn’t give quite the right answer all the time—and its promises of success didn’t come true. Its failed predictions were a blow to the Livermore scientists, but they refused to be derailed. A much bigger project, Nova, was already under way. The LASNEX code—presumably tweaked to take into account the results from the Shiva experiments—apparently predicted a resounding success with Nova, which was ten times more powerful than its predecessor.
Then Nova, too, failed to achieve breakeven. The laser was certainly generating fusion reactions. By the mid-1980s it was achieving about ten trillion fusion neutrons with each shot. But again, the laser consumed one thousand to ten thousand times as much energy as the fusion reactions produced. Once more, LASNEX had failed, and the scientists’ optimistic expectations were crushed. This time, though, their failure had cost almost $200 million.
Even though LASNEX has failed and failed again, the Livermore scientists still insist that they have good reason to trust the computer’s more recent predictions. They claim to have experiments that back up the computer code, but it is impossible to tell, from the outside, whether they are telling the truth. The evidence they cite, like the LASNEX code itself, is classified.
Beginning in the late 1970s and extending until the late 1980s, the national laboratories at Los Alamos and Livermore embarked on a classified program of experiments dubbed Halite/Centurion. These experiments were intended to help the laser fusion community test LASNEX and improve the code, and to determine, once and for all, the conditions required to ignite a fusion reaction in a pellet of deuterium. Instead of using lasers to ignite a pellet, the Halite/Centurion program used nuclear bombs.
Though very little information is available about the Halite/Centurion experiments, some details have dribbled out. It appears that the tests used hohlraums—the little metal tubes that are crucial to indirect-drive laser fusion—containing target pellets. These hohlraums and pellets were placed deep underground, at various distances from nuclear bombs. When the bombs went off, they radiated x-rays in all directions. Some of those x-rays shined into the hohlraums, which reradiated x-rays toward the pellet, just as in a laser fusion experiment. These reradiated x-rays, in turn, crushed the deuterium pellets, and scientists observed the resultant fusion reactions.
Laser fusion scientists state that the Halite/Centurion tests were a ringing confirmation of their beliefs and of LASNEX’s predictions. They claim that the tests put to rest questions about the feasibility of laser fusion reactors—though they don’t give any details. If the pro-laser-fusion scientists are to be believed, then Halite/Centurion showed that the laser fusion program is on the right track.
Not everybody agrees. Apparently, the hohlraums in the Halite/Centurion experiments received varying amounts of energy, from tens to hundreds of millions of joules, about a thousand times greater than the energy even the Nova laser would deliver. But even with that much energy driving them, 80 percent of the capsules failed to ignite, says Leo Mascheroni, a former Los Alamos laser physicist. Worse yet, he says, LASNEX didn’t predict the failures. Mascheroni argues that the pro-laser-fusion lobby is hiding negative results behind a wall of secrecy; if outside scientists could see the data, he says, they would conclude that Halite/Centurion proved that the laser fusion program was failing miserably.
Who is correct? It’s a secret. Those scientists who have access to the data from Halite/Centurion can’t talk; it’s unlawful for them to make any details public. Those who don’t have access obviously can’t assess the arguments. It’s the big secret of laser fusion. Only the scientists working on laser fusion can see the proof that they are on the right track. Those of us on the outside are forced to take their word for it. And for the past few decades, their word hasn’t been very good at all.
Magnetic fusion has the advantage of openness. You can read almost all the literature that has been written about it. You can visit the facilities and walk around without fear of stumbling into a classified area. Indeed, by the 1990s some fusion labs looked as if they were desperate for visitors.
It was a far cry from the golden age of fusion. Twenty years earlier, in the mid-1970s, fusion had plenty of support from Congress and from the public.61 The OPEC crisis had sent fusion budgets soaring, and scientists planned large magnetic fusion machines around the country. Most of them were tokamaks, but a few other designs were also planned, such as a mirror-type machine at Livermore.
The big tokamak in the United States would be at Princeton: the Tokamak Fusion Test Reactor (TFTR), which promised to achieve breakeven. TFTR was supposed to cost a bit more than $300 million, but as is often the case with cutting-edge science projects, the expenditures ballooned well beyond that by the time the project was finished. Achieving breakeven was the minimum requirement for a fusion reactor, and that made the newest generation of tokamaks big and expensive. The United States was not the only country willing to spend hundreds of millions on big tokamaks: European countries were banding together to build one known as the Joint European Torus (JET), and Japan was planning to build a tokamak that would be known as JT-60. The three devices were similarly enormous and expensive. (They had some design differences, too. TFTR would be able to produce higher magnetic fields, and JET would be able to induce larger currents in the plasma; the JT-60 fell between the two extremes.)
In the late 1970s, morale in the magnetic fusion community was extremely high. Though they were still far away from breakeven—fusion energies were still ten thousand times smaller than the energy put in—they had been making steady progress over the years. As the machines got bigger and more expensive, scientists were able to get higher temperatures and densities in their plasmas, and to hold them for longer times. Physicists were confident that the new, large tokamaks being built would achieve breakeven, and perhaps go beyond. So were politicians. In 1980, President Jimmy Carter signed into law an act that promised to double the fusion budget in seven years—from nearly $400 million annually—and established the national goal of “the operation of a magnetic fusion demonstration plant at the turn of the twenty-first century.” The promised land was in sight. It would take only twenty years to get there.
For a tokamak, the promised land is not just breakeven. It is known as “ignition and sustained burn.” Unlike laser fusion devices, which have to create individual bursts of fusion energy, a magnetic fusion device like a tokamak can, in theory, run nonstop, producing continuous energy. Once scientists are able to get their magnetic bottles strong enough, they will be able to exploit this and keep a fusion reaction running indefinitely. The fusion reactions in the belly of the tokamak should suffice to keep the plasma hot, so after they get it started, the reaction will essentially run itself. All the scientists have to do is periodically inject some more deuterium and tritium fuel into the reactor and remove the helium “ash” from the plasma. Once you figure that out, you’ve got an unlimited source of power. Ignition and sustained burn are much better than mere breakeven: once you’ve got it, you’ve built a working reactor. And Carter’s plan called for developing just that.
By the time Ronald Reagan came into office, the climate for fusion was already changing. The OPEC crisis was fading into memory, and energy research was not a high priority for the new president. He scuttled Carter’s plan, and as budget deficits rose, fusion energy money began to disappear, $50 million hunks at a time. The panoply of glorious experiments planned in the 1970s began to crumble under increasing financial pressure. As magnetic fusion budgets dwindled, researchers struggled to save their precious tokamaks from the budget ax. A huge magnetic-mirror project that had already swallowed more than $300 million was scrapped just as it finished its eight-year construction and was about to be dedicated.62 It never got turned on. One after another, new facilities—such as the “Elmo Bumpy Torus” and the “Impurity Studies Experiment”—died on the drawing board. The TFTR program was delayed, but not cancelled. The big tokamak was barely able to keep itself alive; everything else starved. With budgets in free fall, there was no room for anything other than the tokamak program, and even that was in jeopardy.
Despite the budget crunch, TFTR (and JET, JT-60, and a handful of other tokamaks worldwide) was steadily closing in on breakeven. It was holding plasmas for seconds at a time and achieving temperatures close to a hundred million degrees. Even with the improvements, breakeven was still a distance away, and the promised land of ignition and sustained burn was clearly out of reach. There was no way, with budgets as they were, that fusion scientists could ever hope to build a magnetic fusion reactor. A tokamak big enough and powerful enough to keep a plasma burning indefinitely would cost billions, and America’s fusion budget could never withstand that sort of strain. The story was little different overseas. No single nation could afford to build a tokamak that could achieve breakeven and sustained burn. Perhaps, though, by pooling their resources and joining together in one great effort, fusion scientists around the world could finally build a working fusion reactor.
The idea of an international reactor had been around since the budgets started dropping, but it truly came to life in 1985. At a summit in Geneva, Reagan and the Soviet leader Mikhail Gorbachev tried to reduce tensions between the U.S. and the USSR. Gorbachev suggested to Reagan the possibility of a joint effort to build a fusion reactor. Reagan jumped at the chance, as did France and Japan. Together, the four countries would build an enormous tokamak that would finally achieve ignition and sustained burn. For the first time, humans would be able to harness the power of the sun for peaceful purposes. The International Thermonuclear Experimental Reactor (ITER) was born.
ITER was to be a monster. As design work began on it, scientists realized that it would cost $10 billion. The four parties, working together, could cough up the money, but ITER would devour the fusion budgets of all the participating countries.63 Even the big tokamaks—TFTR, JET, JT-60—would not survive. Once the ITER project was under way, there would be no room in the budget for anything else. This was a big problem.
Princeton scientists did not want their facility to disappear. Other fusion researchers, especially those who thought that non-tokamak machines were still worth exploring, were angry that the world was going to gamble all its fusion money on a tokamak while ignoring all other possibilities. Almost everyone agreed that a big international reactor effort would be a wonderful thing, but at the same time everyone wanted to have a thriving domestic fusion program, too. Fusion researchers wouldn’t get both, especially with the budgets dropping precipitously. In the early 1990s, with ITER in ascendance, the Princeton Plasma Physics Laboratory seemed marked for death.
The first thing that would strike a visitor to the Princeton facility in the early 1990s would be the circles. There were circles everywhere. In the lobby, an office assistant swiveled about behind a large ring-shaped desk. A circular sofa surrounded a donut-shaped model of the TFTR. Other models of ringlike tokamaks were displayed in the waiting room. Even the auditorium was semicircular. And of course, the heart of the whole facility was the donut-shaped TFTR tokamak.
The second thing that would strike a visitor was the air of quiet desperation that hung about the lab. The staff was trying to sell fusion to the public, and while the TFTR was setting temperature records almost daily, nobody seemed to be buying. Budgets were still dropping, and the taxpayers didn’t protest. The lab, quietly, tried to change that attitude. Along each wall of the laboratory’s lobby, colorful posters exhorted the taxpayer to back fusion research. “Why Fusion?” read one. “Do We Really Need To Spend This Much On Energy Research?” asked another. Rush Holt, a physicist and the spokesman for the TFTR project, promised great things for TFTR—6 watts out for every 10 put in, within spitting distance of breakeven—but most of all, he conjured a future with fusion energy. Without it, he said, humanity would be in trouble.64
Where can we as a society get our energy? Fossil fuels pollute, cause global warming, and are running out. Renewable sources—solar, geothermal, wind—can’t provide nearly enough energy for an industrial society.65 That leaves nuclear energy: fusion or fission. Holt argued that fission is messy: a fission reactor uses up its fuel rods and leaves behind a radioactive mess that nobody knows how to dispose of. Fusion, on the other hand, leaves no harmful by-products. It runs on deuterium and tritium, he said, and leaves only harmless helium behind. Clean fusion energy would be a much better choice.
This is the sales pitch of faithful magnetic fusion scientists everywhere. Fusion provides unlimited power—clean, safe energy without the harmful by-products of fission. But there is a dirty little secret. Fusion is not clean. Once again, it’s the fault of those darn neutrons.
Magnetic fields can contain charged particles, but they are invisible to neutral ones. Neutrons, remember, carry no charge and do not feel magnetic forces. They zoom right through a magnetic bottle and slam into the walls of the container beyond. Since a deuterium-deuterium fusion reaction produces lots of high-energy neutrons (one for every two fusions), the walls of a tokamak reactor are bombarded with zillions of the particles every moment it runs.66
Neutrons are nasty little critters. They are hard to stop: they whiz through ordinary matter rather easily. When they do stop—when they strike an atom in a hunk of matter—they do damage. They knock atoms about. They introduce impurities. A metal irradiated by neutrons becomes brittle and weak. That means the metal walls of the tokamak become susceptible to fracture before too long. Every few years, the entire reactor vessel, the entire metal donut surrounding the plasma, has to be replaced.
Unfortunately, neutrons also make materials radioactive. The neutrons hit the nuclei in a metal and sometimes stick, making the nucleus unstable. The longer a substance is exposed to neutrons, the “hotter” it gets with radioactivity. By the time a tokamak’s walls need to be replaced, they are quite hot indeed.
Though fusion scientists portray fusion energy as cleaner than fission, a fusion power plant would produce a larger volume of radioactive waste than a standard nuclear power plant. It would also be just as dangerous—at first. Much of the waste from a fusion reactor tends to “cool down” more quickly than the waste from a fission reactor, taking a mere hundred years or so until humans can approach it safely. But it means that humans will have to figure out where to store it in the meantime, as well as the rest of the waste that, like spent fission fuel, will remain untouchable for thousands of years. Fusion is a bit cleaner than fission, but it still presents a major waste problem.
Fusion scientists recognize this, of course. They are working on exotic alloys that are less affected by neutron bombardment, materials made of vanadium and silicon carbide. However, developing those materials is going to cost a lot of money, and they will still present a waste problem, albeit a reduced one.
It’s an open secret. Fusion isn’t clean, and it probably never will be.