Sun in a Bottle: The Strange History of Fusion and the Science of Wishful Thinking - Charles Seife (2008)


What glory beats in this idea:
Artificial suns on the earth,
Under controlled conditions.


Shortly after the ZETA defeat came a measure of victory. Unfortunately, nobody cared.

In 1958, not long after the British scientists at Harwell retracted their claim of generating thermonuclear fusion, American physicists finally put a tiny fusion reaction in a magnetic bottle. It was not a very large reaction, and starting the fusion consumed many times more energy than the reaction produced, but they had managed to initiate fusion in the laboratory. The neutrons they detected were from thermonuclear fusion.

The machine that did it was a pinch machine, not dissimilar to Columbus or the Perhapsatron, but its pinch was arranged slightly differently. Previous devices simply zapped an electric current down the length of the plasma. The new device, Scylla, ran a current around the circumference of the tube of plasma instead. It was just a variant on the existing pinch machines, but the change made a big difference. Scylla was able to heat deuterium to more than ten million degrees. Scientists began to detect all the expected products of deuterium-deuterium fusion: protons, tritium nuclei, and neutrons. Tens of thousands of neutrons. After a few months of tinkering, physicists were getting roughly twenty million neutrons every time they ran the machine. It was a stunning success after so much failure.

It was just in time, too. The Second International Conference on the Peaceful Uses of Atomic Energy convened in September 1958. Though the ZETA fiasco was still on everybody’s minds, the American display of fusion machines impressed visitors. Scylla made an appearance, along with the other magnetic bottles built by Project Sherwood. The Perhapsatron was there, as was Columbus. The early Stellarators also drew a crowd. Scylla should have been the star of the show, but the Scylla scientists weren’t ready to make a formal announcement of their accomplishment. They were uncertain about whether they had truly achieved thermonuclear fusion and were well aware of the damage that a premature announcement could cause.

There were sly hints, of course. Los Alamos’s Tuck gently implied that Scylla had succeeded where ZETA had failed, but he was much more cautious than the ZETA team had been. There was no press conference, just a scientific paper that stated, blandly, that Scylla “looks probable as a thermonuclear source.” There were to be no adulatory headlines. Even when Tuck finally made a formal announcement—a year and a half later, in March 1960—it was to Congress, not to the press. “We are now prepared to stake our reputations that we have a thermonuclear reaction,” he said. Scylla had done, for real, what ZETA had falsely claimed to do, but this time the world scarcely noticed.

The quest for unlimited fusion energy was in a dramatically different state than it had been a mere two years earlier. The public’s attitude had changed: the ZETA affair and the growing concern about nuclear fallout had soured people’s perception of fusion scientists. The scientists themselves were even growing pessimistic. Gone were the heady days of the 1950s when a working fusion reactor seemed to be just a few hundred thousand dollars away. Plagued by problems and instabilities, Project Sherwood seemed to be stalling. Congress, impatient with fusion scientists’ broken promises, began to pull the plug on magnetic fusion research. Physicists raced to make some kind of discovery that would keep their quest for fusion energy alive. In 1958, the road ahead seemed dark.

In fact, there was a new reason for hope. The year brought a new and powerful idea into America’s quest to tame fusion reactions—a novel Russian design that combined the advantages of the pinch and the Stellarator. It also saw the invention of a revolutionary device, the laser, that could bottle a tiny star in an entirely new way. The magnetic bottle was no longer the only game in town. A new set of hopefuls would soon sally forth to tame the power of the sun, only to be battered by the quest.

If there was one thing that scientists at the 1958 UN conference could agree about, it was that plasmas were proving very tough to control. In part, this was because plasmas were like nothing else scientists had encountered in nature. Plasmas behaved something like fluids, but unlike standard fluids, they interacted in extremely complex ways with magnetic and electric fields. Because of that electromagnetic component, understanding plasmas was becoming an entirely new discipline vastly more complicated than the hydrodynamics field that dealt with the behavior of ordinary fluids. Plasma physicists were charting new territory in a brand-new subject: magnetohydrodynamics. Even the simplest-sounding problems with a plasma turned out not to be simple at all.

What happens, for example, when you expose a plasma to an electric current, as in a pinch machine? The laws of electromagnetism say that electrical currents spawn magnetic fields, and magnetic fields spawn electrical currents. This means that an electrical current traveling down the plasma will generate magnetic fields that generate electrical currents that generate magnetic fields, and so forth—and all these effects change the motion of the particles in the plasma, forcing them toward the center of the cloud. This is why a current causes a pinch, confining the plasma and squeezing it into a tight thread. But this pinch has secondary effects, such as causing the thread to partition itself into little segments like a set of sausage links. The people who designed the pinch machines were immediately able to spot the pinch effect, but it took deeper thought to find the secondary effect of the sausage instability. The deeper the physicists looked into plasma dynamics, the more strange effects they saw—secondary and tertiary and beyond—most of which seemed to make the plasma unstable.

This feedback between electric fields and magnetic fields is just one of many effects that make plasmas hard to predict. Another has to do with the density of the plasma. Electric currents behave differently in plasmas of different densities and pressures. A current passing through a cloud of plasma alters the shape and density of the cloud—a pinch compresses the shape and increases the density—but this change alters the nature of the current passing through the cloud. This changes the shape and density of the cloud, which alters the current, which alters the shape and density of the cloud, and so on.

Yet another issue had to do with the very makeup of the plasma. Scientists had been trying to ignore the fact that a plasma is not a nice, homogeneous substance made of a single kind of particle. A plasma is made of very heavy positively charged particles (the nuclei) and very light negatively charged particles (the electrons stripped from the atoms). These two kinds of particles have different properties and behave differently even when they are at the same temperatures, at the same pressures, and subjected to the same electromagnetic fields. Physicists discovered that when they tried to heat a plasma, unless they were very careful they would pour most of the energy into the light (and easy to accelerate) electrons, leaving the heavy nuclei cold, unheated, and slow. This was really bad news. The whole point of heating the hydrogen plasmas was to heat up the nuclei so that they were moving fast enough to fuse; hot electrons and cold nuclei were all but worthless. Unless scientists could compel the hot electrons to share their energy with the nuclei, there would be no hope of fusion. For all these reasons—and more—plasmas were very hard to work with. Even before a plasma gets hot and dense enough to ignite, it is a fiendishly complex brew.

The brew did not behave the way scientists expected it to. It seemed to have a mind of its own, thwarting all attempts to keep it under control. Pinch it or squeeze it or even try to keep it confined in a magnetic trap and it writhed around and ruffled itself in instability after instability. Physicists built bigger and more expensive machines to wrestle the instabilities into submission, but they were failing. As the machines started costing millions and tens of millions of dollars, the scientists were no closer to building a fusion reactor than before; they were just uncovering more and more subtle ways that the plasma fought their will.

Lyman Spitzer’s Stellarator, for one, was mysteriously losing the particles in its plasmas. High-temperature particles move very quickly and are inherently hard to constrain. It was no surprise then that hot plasma particles in a crude magnetic bottle would rapidly spiral out of control and slam into the walls of the vessel, and the higher the temperature, the faster the particles were lost. On the other hand, increasing the magnetic field strength—strengthening the bars of the magnetic cage containing the plasma—should have rapidly brought this problem under control. That was the theory, anyhow. The researchers thought that if they doubled the magnetic field, they should cut the loss rate by a factor of four. If this theory was right, it would be fairly simple to get particle losses under control merely by cranking up the strength of the magnets surrounding the plasma. Relatively weak magnetic fields would suffice to confine even very hot plasmas.

Nature wasn’t quite so kind to the Stellarator. As the scientists turned up the magnetic fields, they were surprised to discover that particles still zoomed out of control very quickly. The particle loss rates weren’t dropping nearly as fast as the physicists’ theories had led them to expect. Even with very powerful magnetic fields, the particles still spiraled out of control in a fraction of a second. Simply turning up the strength of the fields was not enough to bring the losses down to a reasonable level. The plasma was still out of control. The American physicists were beginning to despair.
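For readers who want the arithmetic behind that disappointment, the two scalings can be written side by side. The notation is mine, not the book's; the stubborn behavior the Stellarator exhibited was later known as "Bohm diffusion":

```latex
\[
D_{\text{classical}} \propto \frac{1}{B^{2}}
\qquad\text{versus}\qquad
D_{\text{Bohm}} = \frac{k_{B}\,T_{e}}{16\,e\,B} \propto \frac{T}{B}.
\]
```

Under the classical law, doubling the field strength $B$ cuts the cross-field loss rate to a quarter. Under the Bohm-like scaling actually observed, doubling $B$ only halves the losses, and the losses grow in proportion to the temperature: the worst possible combination for a machine whose whole purpose is to hold a very hot plasma with affordable magnets.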

Even the optimistic Spitzer gave up his dreams of a quick and cheap path to fusion energy with a Stellarator. In the early 1950s, he had thought that his small model-A and model-B Stellarators would lead quickly to a bigger, model-C machine that would serve “partly as a research facility, partly as a prototype or pilot plant for a full-scale power producing reactor.” Spitzer was fairly certain that he would be within sight of a working fusion power plant by the end of the decade. Then, setback after setback sapped his optimism. By the late 1950s, he viewed the $24 million model-C Stellarator then under construction “entirely as a research facility, without any regard for problems of a prototype.” Spitzer no longer saw fusion energy as within his grasp; a pilot plant was many generations of machines away.

It was a difficult time for fusion physics. Even the successes of Sherwood, such as Scylla’s first sighting of thermonuclear neutrons, were not showing a path to a working reactor. Making matters worse, the Atomic Energy Commission’s budget, which had skyrocketed through the 1950s, stopped growing, and the fusion research budget itself began to pinch.

These difficulties bred a measure of hope for magnetic fusion. Perhaps because the goal of a fusion reactor was so far out of reach, all the nations working on fusion energy decided to share their knowledge. The stakes had been lowered; there was no obvious path leading to limitless energy, so there was no harm in international collaboration. At the 1958 UN conference, the shroud of secrecy finally lifted from the fusion reactor programs around the world. Not only did American and British physicists have permission to lecture about the work they had done over the past decade, so, too, did their Russian counterparts. And behind the Iron Curtain, Soviet physicists had been doing some extraordinarily good work. The West soon learned of an idea that came from Russia’s version of Edward Teller: Andrei Sakharov.

Sakharov was a little more than a decade younger than Teller, so he was still a student when World War II erupted. He built a wartime reputation by working on conventional, not nuclear, munitions. He came up with a clever method to use electric and magnetic fields to detect defective armor-piercing shells, a vast improvement over the backbreaking work of snapping random shells in half to see whether they were properly manufactured. As the war was ending, Sakharov returned to school to get a graduate degree in physics, thinking he had escaped his weapon-engineering days. But on August 7, 1945, he was drawn back to military work.

On his way to the local bakery, Sakharov happened to glance at a newspaper. It told of the destruction of Hiroshima. “I was so stunned,” he wrote, “that my legs practically gave way.... Something new and awesome had entered our lives, a product of the greatest of the sciences, of the discipline I revered.” The cloud of the atom bomb began to mushroom over his studies. As Sakharov tried to concentrate on theoretical physics, those mysterious secret cities began to spring up across the nation. His mentor, Igor Tamm, was secretly getting involved in Russia’s nuclear program. By 1948, Sakharov had been drawn into a project to design fusion weapons (the atom bomb problem having already been worked out, in part, thanks to the spying of Klaus Fuchs).

Sakharov immediately came up with his “first idea,” a design for a thermonuclear weapon. This design, the sloika bomb, was almost identical to the layered Alarm Clock design that Teller discarded as impractical in 1946. Though the sloika had the same problems as the Alarm Clock—megaton-size weapons would be too large to be practical—it offered a quick path to building a fusion device. Sakharov’s first idea impressed the Kremlin, which then whisked him away to one of the secret cities. To his dying day, Sakharov only referred to the laboratory as “the Installation,” and for a time, its mere existence was one of the most closely guarded secrets of the Soviet Union. The Installation was an entire town, code-named Arzamas-16, built for the purpose of designing nuclear weapons.

Sakharov’s intellectual trajectory was an eerie mirror image of Teller’s, always delayed by a few years. Teller came up with the impractical Alarm Clock configuration for the hydrogen bomb in 1946; Sakharov hit on his sloika in 1949. In 1946, Teller proposed boosting the yield of a fission bomb by injecting a tiny dollop of fusion fuel (the idea tested in Greenhouse Item). Boosting atom bombs was Sakharov’s “second idea,” which came soon after his first. In 1951, Ulam and Teller solved the problem of igniting a fusion reaction by separating the primary fission device from the secondary fusion one; Sakharov and his colleagues came to the same solution—Sakharov’s “third idea”—in 1954.

Not everything, though, occurred to the Americans first, especially when it came to fusion reactors. In 1950, Sakharov was hard at work trying to figure out how to build a hydrogen bomb—an uncontrolled fusion reaction—when he began to ponder whether the reaction could be controlled. Like his American counterparts, he came up with a scheme using magnetic fields, but his idea was slightly different from the ones that would guide Project Sherwood. Sakharov’s device was neither a Stellarator nor a pinch machine. It was somewhere in between. It was a novel design, one that combined some advantages of a pinch machine with those of a Stellarator. It was just what scientists were looking for.

In a pinch machine, the plasma confines itself. The pinch begins by inducing a current of some sort in the plasma, forcing it to contract and to heat up. The Perhapsatron, ZETA, and Columbus all did this by running a current down the length of the plasma, while Scylla ran an electrical current around the circumference of the tube of plasma. In both cases, though, there is a current inside the plasma; this current induces a magnetic field, which squashes the plasma. Pinch machines were successful at making a plasma very dense and hot, even inducing a bit of fusion, but physicists couldn’t keep those conditions going for very long. The pinch was extremely short-lived. Once the current disappeared, so did the confinement of the plasma. Given the enormous energy it took to set up a pinch, and how little energy was generated by the brief fusion reaction, a pinch machine could not ultimately become a working reactor.

In a Stellarator, on the other hand, the plasma is confined from the outside. The machine uses carefully arranged electromagnets to generate intricate magnetic fields that bottle up the cloud. In theory, these fields would allow a Stellarator to confine the plasma for a relatively long time. Unlike a crush-and-release machine, a Stellarator attempted to maintain its hold on the plasma, keeping a fairly stable cloud. But scientists were having difficulty not only confining the plasma but also heating and compressing the cloud. It was much harder to warm and squash plasmas in a Stellarator than it was in a pinch machine.

The choice was bleak for American scientists in the late 1950s and early 1960s. They could get high temperatures and densities for a short time or lower temperatures and densities for a longer time, but not both. Yet scientists really needed a magnetic bottle that could heat the plasma to tens of millions of degrees, keep it very dense, and hold it for a relatively long time. The heating and density would ensure that the fusion reaction took place, while the confinement would ensure that the plasma reacted long enough to generate a significant amount of energy. Only then could physicists hope to turn such a magnetic bottle into a fusion reactor.

Sakharov’s scheme appeared to provide an answer. His bottle looked little different from the ones the Americans and British were proposing. It was donut shaped—toroidal—and used coils of wire to induce magnetic fields, earning it the cumbersome name toroidalnaya kamera s magnitnymi katushkami (toroidal chamber with magnetic coils). It was called the tokamak for short. But the tokamak was a bottle with a difference. Whereas the Stellarator used external magnetic fields to contain the plasma and the pinch machines used internal electric currents to squash it, the tokamak did both.

The tokamak has multiple sets of coils. One group of coils sets up a magnetic field that constrains and stiffens the plasma; it’s an external magnetic bottle, somewhat similar to the Stellarator’s, although not quite as sturdy. What gives the tokamak an extra bit of oomph is another set of coils that pinches the plasma. When scientists send a current through those coils, it induces a corresponding pinching current in the plasma circulating in the torus. This one-two punch of the external magnetic fields and internal current gave scientists a tool that, they hoped, would keep a hot, dense plasma stable for a long time.

Of course, the tokamak design had drawbacks as well. Unlike the Stellarator, which doesn’t require a plasma current at all, a tokamak absolutely needs one; its external magnetic bottle can’t by itself contain the plasma cloud for very long. But this plasma current adds layers of complexity to the plasma’s behavior, making it more unpredictable.

In some sense, the tokamak is something like a bicycle. Just as a bicycle is not stable until it is going relatively quickly, a tokamak’s plasma is not stable until the plasma current is up and running. The Stellarator is more like a tricycle. Just as a tricycle can be stationary, or can move forward or backward without any threat to its basic stability, a Stellarator can either have no plasma current or have one in either direction and still, theoretically, be stable.

Unfortunately, the current in a tokamak’s plasma is just one more thing that can fail. If an instability causes the current to drop momentarily, things get very bad very quickly. The plasma suddenly loses its pinch and explodes in all directions. This event is called a disruption, and it can be extraordinarily violent. It can even damage the machine. (One disruption at a modern British tokamak made the whole thing, all 120 tons of it, jump a centimeter into the air.) However, the disadvantages of the tokamak soon seemed small compared to the advantages of the design.

Sakharov was too busy working on nuclear weapons to spend a lot of effort on fusion reactors. But other Russian scientists, particularly the physicist Lev Artsimovich, took Sakharov’s design and put it to the test. By the mid-1960s, he was reporting spectacular results. His tokamak was confining a plasma at a given temperature and density ten times longer than could any other machine. Though confinement times were still on the order of milliseconds, Artsimovich’s results, if they were to be believed, indicated that Sakharov’s tokamak was blowing its competition away.

When Spitzer and the Americans first heard the Russian claims, they were skeptical, in part owing to American arrogance. The Stellarator was performing quite poorly, so they concluded that the problems they were encountering were likely due to a universal problem with magnetic confinement. If they weren’t succeeding, nobody was. The Americans were dubious that the Russians could do much better with their tokamak. Furthermore, Artsimovich’s measurements of the temperature of the plasma were rather crude. American scientists were relatively quick to disbelieve them. In the mid-1960s, Spitzer’s skepticism led him to conclude that tokamak performance was roughly the same as the Stellarator’s—underwhelming.

This conclusion was bad news for American fusion research. The enthusiasm of the 1950s had brought a downpour of funding from Congress. Since Project Sherwood’s inception, its budget had skyrocketed from almost nothing to nearly $30 million a year by the time of the 1958 UN conference. As the Stellarator began to choke, losing its plasma rapidly, a skeptical Congress began to wonder whether fusion reactors were possible at all, much less economically feasible. It didn’t help that the scientists, in their optimism, had consistently oversold their machines. They had promised Congress they would be building prototype reactors by the early 1960s, and the machines were nowhere near that stage. And in October 1957, the Russian surprise launch of Sputnik gave the Earth an artificial moon—and it gave Congress another Cold War scientific competition requiring truckloads of taxpayer money. The space race had officially begun. Fusion energy was no longer in the spotlight, and its budget stagnated, then dwindled.

The tokamak had to come to the rescue, but it would be several years before American and British scientists would accept that the Russian achievements were real. It was not for lack of data; Artsimovich continued presenting better and better results—dense plasmas heated to tens of millions of degrees and confined for handfuls of milliseconds. The tokamak results were still far from those needed for a realistic source of fusion energy, but they were certainly an order of magnitude better than anyone else’s. The work was getting harder to dismiss, but detractors still argued that the Russian temperature measurements were inaccurate. To settle the matter, in 1969 a British team visited Artsimovich’s lab in Moscow. They came armed with a sensitive instrument that could measure the temperature of a ten-million-degree plasma. At the heart of the instrument was a device that would change the face of fusion research, and not just because it confirmed the Russian claims. It would provide a new way of bottling a plasma without the use of magnets. The device was the laser.

Depending on whom you ask, the laser was invented at Columbia University in the late 1950s or at Hughes Research Laboratories in 1960. (There were competing claims and a patent battle.) But there’s no doubt that in 1960 a short paper in Nature gave the physics community a powerful new tool.

A laser is a device that produces an unusual beam of light. Even to the uninitiated, it is obvious that laser light is different from, say, the light that comes from a flashlight. If you shine a flashlight at a distant wall, you will see that it makes a large, circular spot. If you shine a laser pointer at the same wall, it makes only a tiny dot, barely larger than the hole from which the laser beam emerged. Laser light stays together in a tight beam rather than spreading out into a diffuse cone. A laser beam also consists of light that is a single, intense color, unlike a flashlight beam, which is made of a whole bunch of colors mixed together and appears white.

There are many methods of generating light. If you heat something to a high enough temperature, it begins to glow. When a substance is energetic enough, it emits visible light. (This is how an incandescent lightbulb works; the filament in the bulb is simply heated to a very high temperature.) It is a law of nature: the hotter an object is, the more light waves it emits. Or, if you prefer, you can think of the emissions as light particles rather than light waves. The laws of quantum theory say that light has both a particle-like and a wave-like nature, so physicists use whichever description is most suitable for the behavior they are attempting to describe.
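The law of nature being invoked here is the blackbody radiation law. The equations are mine, not the book's, but they are standard physics: the power an object radiates climbs as the fourth power of its temperature, and its typical color shifts toward shorter wavelengths as it heats up:

```latex
\[
j = \sigma T^{4},
\qquad
\lambda_{\text{peak}} = \frac{b}{T},
\quad b \approx 2.9 \times 10^{-3}\ \text{m}\cdot\text{K},
\]
```

so a lightbulb filament at roughly 3,000 K glows brightly in the visible, while a room-temperature object at 300 K, with a peak wavelength ten times longer, radiates only invisible infrared.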

A particle of light—a photon—can interact with matter in a number of different ways. It can strike an atom and give it a kick. It can make the atom rotate or move in other manners. If the photon is just the right color, the atom can absorb it. Absorbing a photon “excites” the atom, packing it full of the energy that once resided in the light particle. This excited atom will soon disgorge the photon, emitting a light particle of precisely the same color and relaxing from its excited state.

In 1917, Albert Einstein made a curious prediction about excited atoms. Such an atom is quivering with energy, looking for an excuse to spit out the photon it has absorbed. Einstein’s calculations showed that if a photon of the right color happens by—one precisely the same color as the one absorbed by the atom—then the atom will immediately disgorge a photon. This photon not only will be precisely the same color as the passerby but will also move with it in lockstep. The two photons will behave almost as a single object. This phenomenon is known as stimulated emission, and it is the mechanism the laser uses to produce its beam of light.

Imagine that you have a hunk of material—a whole lot of atoms—that you want to turn into a laser. The first step is to excite all the atoms. You do this by “pumping” the material full of energy. It doesn’t matter how. Some lasers pump a material with electricity. Some lasers do it with light, and some do it with chemical reactions. It might even be possible to use nuclear bombs to pump atoms into an excited state. Once the atoms in the material are excited, they are primed to get rid of their energy—they want to emit light of a particular color.

This is where the clever part happens. Send a photon of that specific color into the material. The photon encounters an excited atom, which then disgorges a second photon of the same color through stimulated emission. These two photons move in lockstep. They encounter another excited atom, which emits another photon of the same color: three photons now in lockstep. Another excited atom, another photon: four photons, all the same color, all moving in precisely the same way. As the photons move through the material, they encounter more and more excited atoms, which emit more and more photons. The beam snowballs, growing bigger as it travels through the material. By the time it finally emerges, the beam consists of an enormous collection of light particles. It is an intense beam, and all the photons have the exact same color and are moving in lockstep, almost like one enormous particle of light. This is the secret to the laser’s power. It is why the photons in a laser beam don’t zoom out in different directions and have all sorts of colors as a flashlight’s do. The photons in an ordinary beam of light are like an unruly mob; the photons in a laser beam are an army marching together with a single mind.
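The snowballing described above is nothing more than exponential growth, which a few lines of code can mimic. This is a toy illustration of my own, not anything from the book, and the 5 percent gain per millimeter is an invented number:

```python
# Toy model of the stimulated-emission cascade: a single seed photon
# enters a rod of pumped (excited) atoms, and every millimetre of
# material multiplies the beam by a fixed gain factor.
gain_per_mm = 0.05      # hypothetical: 5% amplification per millimetre
photons = 1.0           # one seed photon enters the rod
for _ in range(200):    # a 20 cm rod of excited material
    photons *= 1.0 + gain_per_mm
print(f"photons emerging from the rod: {photons:.3g}")
```

One photon goes in; roughly seventeen thousand identical, lockstep photons come out. The unruly mob has become an army.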

The laser’s unusual properties make it an incredible scientific tool. The tight beam allows it to travel great distances—to the moon and back, even—without scattering and dissipating too much. Because the beam is made of photons of the exact same color, it provides a great way to measure very, very hot temperatures.

Shine a laser at a plasma. The photons in the beam all begin with exactly the same color. But as the photons scatter off the fast-moving particles in the plasma, each one gains or loses a bit of energy, depending on whether the particle it strikes is rushing toward it or away from it; the photon’s wavelength becomes slightly shorter or longer, its color slightly bluer or redder. By looking at the spread of colors in a laser beam after it hits a plasma, scientists can calculate the energies of the particles in the plasma, which, in turn, reveals the temperature.
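The size of the color change is easy to estimate. As a rough back-of-the-envelope sketch (my numbers, not the book's; I assume a ruby laser, and the fractional wavelength change is taken as simply the ratio of the particle's thermal speed to the speed of light):

```python
import math

# Order-of-magnitude estimate of the Doppler shift that scattering off
# a hot plasma imprints on laser light: fractional wavelength change
# ~ v/c, where v is the thermal speed of the plasma electrons.
k_B = 1.380649e-23        # Boltzmann constant, J/K
m_e = 9.1093837e-31       # electron mass, kg
c = 2.99792458e8          # speed of light, m/s
wavelength = 694.3e-9     # assumed ruby-laser wavelength, m

def wavelength_spread(T):
    """Approximate spread in scattered wavelength for electron temperature T (kelvin)."""
    v_thermal = math.sqrt(k_B * T / m_e)  # thermal electron speed
    return (v_thermal / c) * wavelength

spread = wavelength_spread(1e7)  # a ten-million-degree plasma
print(f"wavelength spread ~ {spread * 1e9:.0f} nm")
```

A spread of a few tens of nanometers out of 694 is comfortably within reach of a good spectrometer, which is why the technique could settle the argument over the Russian temperatures.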

When the British scientists shined a laser beam at Artsimovich’s tokamak plasma, they saw that the Russians were not exaggerating. Their plasma was tens of millions of degrees, dense, and relatively well confined. The tokamak was performing much better than the other forms of magnetic bottles. This was wonderful news for the fusion community in the West, even though the Russians, rather than the Americans or the British, had done it. (One Atomic Energy Commission worker reportedly danced on a table when he heard the news.) Sakharov’s invention showed a way to bypass the troubles of the pinch machines and the Stellarators. Practically overnight, plasma physicists across the world scrapped their old devices and built tokamaks. Even Spitzer succumbed to tokamania. By January 1970, the model-C Stellarator was scrap metal. In its place, a mere four months later, a tokamak sprang up. The fusion community had been pumped full of energy once more.

The laser measurements of Artsimovich’s plasma sparked a fusion energy renaissance in the United States. But the laser was about to change the landscape even more dramatically by providing an alternative to the magnetic bottle.

Lasers produce particularly intense and yet easily controlled light beams. You can point a laser with great precision and make it dump an enormous amount of energy in a very tiny space. To Andrei Sakharov, this suggested that laser beams could be used to heat and contain a plasma of hydrogen. If it worked, laser fusion would be an even more straightforward method than confinement with magnets. One could simply shine laser light on a pellet of deuterium fuel from all directions: the beams would heat and compress the pellet, creating a tiny fusion reaction—a miniature sun girdled on all sides by light. The plasma would be compressed not by magnetic fields but by particles of light (or by atoms that had been heated by the beams of light). This was the birth of inertial confinement fusion.

The Americans, too, immediately saw the potential of lasers for inducing fusion. At Teller’s Livermore laboratory, physicists like Ray Kidder, John Nuckolls, and Stirling Colgate set to work designing laser fusion schemes soon after the first laser was built. Their calculations seemed to show not only that laser fusion was possible, but also that it might be relatively easy to achieve breakeven. Livermore scientists began building laser bottles intended to ignite and contain fusing plasma.

The first big one, built in 1974, was known as Janus. Two-faced like the god it was named after, Janus had two laser beams that shot at a tiny pellet of deuterium and tritium from opposite directions. It was more a test of the laser system than a concerted attempt to initiate fusion reactions. A true laser-based bottle would require laser beams to hit the target from all sides at once to fully confine it, but Janus’s lasers only struck from two sides, allowing the plasma to squirt out in various directions. Nevertheless, the Livermore scientists were soon detecting tens of thousands of neutrons coming from the pellet. They had achieved thermonuclear fusion, even though it was on a tiny scale. It was a success, but it was not the first.

The Russians and French had already detected neutrons from pellets hit by lasers, but the American press, skeptical of the foreigners’ claims, did not give them much attention. The press did have a field day, though, with the curious tale of a rogue company—KMS Industries, Inc.—that had built its own laser system. By May 1974, KMS, named after its physicist founder and president, Keeve M. Siegel, reported that it was producing neutrons from laser fusion.

Within two weeks, the story was plastered all over the newspapers. The New York Times touted KMS’s achievement as “a significant step toward the long-range goal of nuclear fusion as a source of almost limitless energy.” The Atomic Energy Commission was less thrilled, because a private firm was doing an end run around the government. If the KMS claims were true, an AEC statement read, it would be “a small but significant initial step toward the achievement of fusion power.” Siegel was making the AEC look bad—and fusion energy look good.

Not only was Siegel using lasers to ignite fusion, but he was doing it as the head of a private company, not as a scientist in a government laboratory. The public took this as a sign that private industry was embracing fusion reactors as a viable source of energy. Siegel, the entrepreneur, exuded confidence in public. He was sure, he said, that he could turn lasers into “efficient fusion power” within “the next few years.” After false starts and two decades of struggle with magnetic bottles, the era of fusion finally seemed at hand.

The timing could scarcely have been better. The United States was just getting through its first oil crisis. Because of American support for Israel during the 1973 Yom Kippur War, the Arab members of the Organization of the Petroleum Exporting Countries (OPEC) cut off oil supplies to the U.S. Gas prices skyrocketed. It was becoming painfully clear that the country had to find another source of energy—anything other than petroleum—if it was to avoid being held hostage to OPEC’s interests. It was scarcely two months after the embargo was lifted that a jittery nation learned about Siegel and KMS. It seemed that fusion would be the way to get out from under OPEC’s thumb. The dream of unlimited power once more beckoned. Fusion energy seemed possible again, and it was more important than ever.

Congress immediately seized upon it and started pouring money into fusion research. Laser fusion saw a dramatic increase in funding, growing from almost nothing to $200 million per year by decade’s end.50 Livermore and some other laboratories around the country, particularly those at Los Alamos and at the University of Rochester in New York, began to plan massive laser projects with an eye toward creating a viable fusion reactor. Magnetic fusion, too, benefited from the renewed interest in fusion energy. After stagnating for a decade at around $30 million per year, magnetic fusion budgets doubled and doubled and doubled again. In 1975, more than $100 million went to magnetic fusion; by 1977, more than $300 million; and by 1982, almost $400 million.

Siegel’s 1974 announcement helped ignite public enthusiasm (and governmental largesse) for fusion research, but his story had a tragic ending. In 1975, he keeled over while testifying about his work in front of Congress. Though he was rushed to the nearby George Washington University Hospital, he died shortly thereafter, the victim of a stroke. He was fifty-two years old. Siegel didn’t survive to benefit from the surge of optimism he generated. He also didn’t survive to see the worsening problems laser fusion scientists faced as their lasers grew more powerful.

By 1975, Livermore’s Janus was already suffering from a major snag. Its lasers were extremely powerful for their day, pouring an unprecedented amount of laser light into very tiny spaces. Livermore’s scientists managed to get this level of power by taking enormous slabs of neodymium-doped silicate glass and exciting them with a flash lamp. This glass was the heart of Livermore’s laser. The slabs were what produced an enormous number of infrared photons in lockstep. The resulting beam exited the glass and was bounced around, guided by lenses and mirrors to the target chamber. However, the beam was so intense that it would heat whatever material it touched. This heat changed the properties of lenses, mirrors, and even the air itself. When heat changes the properties of a lens or a mirror, it alters the way the device focuses the beam. These little changes in focus would start creating imperfections in the beam, such as hot and cold spots. These could be disastrous. The hot spots in the beam would pit lenses, destroying them in a tiny fraction of a second. Every time they fired the Janus laser, the machine tore itself to shreds.

Luckily, the Livermore scientists were already working toward a fix. Their next-generation fusion machine, Argus, used a clever technique to eliminate those troublesome hot spots. By shooting the beam down a long tube and carefully removing everything but the light at the very center of the beam, the scientists would be assured of getting light that was uniform and pure—and free of hot spots. This meant that the laser had to be housed in a very large building to accommodate the tubes, which were more than a hundred feet long. In addition, since they were tossing out some of the beam because of its imperfections, they were sacrificing some of the laser’s power. This was a minor inconvenience; the technique worked, and the hot spots disappeared for the time being.

More serious was the problem with electrons. Magnetic fusion researchers had trouble heating the plasma evenly; the lightweight electrons would get hot faster than the heavyweight nuclei, making for a very messy plasma soup. This problem was worse with lasers: light that is shined on a hunk of matter tends to heat the electrons first. This was a huge issue. The electrons in a laser target would get so hot that the target would explode before the nuclei got warmed up. Hot electrons and cold nuclei were no good for fusion—it was the nuclei that scientists really wanted to heat up.

For technical reasons, the bluer the laser beam, the smaller this effect. So the Livermore scientists shined the laser light through crystals that would make the infrared beam green or even ultraviolet.51 The color conversion worked well to reduce the heating of the electrons, but the process was inefficient. The beam lost some of its energy because of the color change. It also made the laser more expensive, as big, high-quality color-change crystals were not cheap. Nevertheless, the results—and the number of neutrons—from Argus led Livermore’s physicists to push for a full-size machine, Shiva, that would use twenty beams to zap a pellet of deuterium from all directions. It would ignite the pellet, creating a fusion reaction that would generate as much energy as the laser poured in. Or so the scientists hoped. They were wrong by a factor of ten thousand. Laser fusion scientists, like the magnetic fusion advocates that preceded them, were about to come face-to-face with a nasty instability—one so fundamental that you often encounter it in your kitchen.

It is hard to imagine an instability in the kitchen, but ask yourself the following question: When you invert a glass of water, why doesn’t the water stay in the glass? This seems like a silly thing to ask: gravity pulls the water down and onto the floor. But if you look a little more deeply, the answer is not quite so obvious. Atmospheric pressure makes the question more complicated than you might expect.

Every surface that is exposed to air is under pressure. The very weight of the atmosphere is squashing us from all directions. Every square inch of our skin is subjected to 14.7 pounds of pressure from the air pushing against us. We don’t notice it because our bodies are used to it, but this is an enormous force, easily enough to crush a steel can under the right conditions. It is also more than enough to support a glassful of water and prevent the liquid from falling to the ground. Try it yourself (over a sink, of course). Fill a glass to the rim with water. Hold a smooth, rigid piece of cardboard over the mouth of the glass and invert the whole thing. Gently let go of the cardboard. If you do it carefully enough, you will see that the water stays in the glass. The cardboard isn’t holding the water in. It’s not stuck tightly to the glass; even a gentle touch will dislodge the cardboard and cause the water to run out. And the water isn’t miraculously defying gravity. It is being supported by air pressure. The atmosphere’s upward push of 14.7 pounds per square inch is much, much stronger than the three or four ounces per square inch downward push of the water in the glass. When the two pressures go head to head, the upward push of the atmosphere wins and the water stays put. Believe it or not, the forces are so mismatched that you would need an enormously tall glass of water—about thirty feet high—if you wanted the downward-pushing weight of the water to equal the upward-pushing atmospheric pressure. With such vastly mismatched forces, the question seems a lot less stupid: Why doesn’t water stay in a glass when you turn it upside down?
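The thirty-foot figure is easy to verify from hydrostatic balance, P = ρgh. A minimal back-of-the-envelope sketch (the constants and variable names here are mine, not the book’s):

```python
# How tall a column of water balances atmospheric pressure?
PSI_TO_PA = 6894.757      # pascals per pound-per-square-inch
ATM_PSI = 14.7            # sea-level atmospheric pressure, psi
RHO_WATER = 1000.0        # density of water, kg/m^3
G = 9.80665               # standard gravity, m/s^2
M_TO_FT = 3.28084         # feet per meter

p_atm = ATM_PSI * PSI_TO_PA            # ~101 kPa
height_m = p_atm / (RHO_WATER * G)     # hydrostatic balance: P = rho * g * h
height_ft = height_m * M_TO_FT

print(f"{height_m:.1f} m (~{height_ft:.0f} ft)")   # ~10.3 m, ~34 ft
```

The result, roughly thirty-four feet, matches the book’s “about thirty feet” estimate.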


RAYLEIGH-TAYLOR INSTABILITY IN A GLASS OF WATER: Invert a glass quickly and little ripples on the surface of the water will grow, becoming large blobs. The blobs break off and the water rains down out of the glass.

The water falls out because of an effect known as the Rayleigh-Taylor instability. Whenever a not-very-dense fluid (like air) pushes on a denser fluid (like water), it is an inherently unstable situation. If the interface between the two fluids has any imperfections—any bumps or divots—then those imperfections immediately get bigger and bigger.

An inverted glass of water, no matter how carefully it is inverted, has a few crests and troughs on the surface of the liquid. In a tiny fraction of a second, the crests grow, becoming enormous tendrils of water drooping down from the surface; the troughs also grow, and large fingers of air prod deep into the glass. The tendrils break, the fingers bubble off, and the entire glass of water rains down onto the floor. This is the Rayleigh-Taylor instability in action. Even though the air exerts an enormous amount of pressure on the water, the less-dense air is unable to keep the denser water contained in the glass because of these growing tendrils and fingers. Get rid of those instabilities and the air can keep the water contained. (The cardboard is not susceptible to Rayleigh-Taylor instabilities because it is a solid, so the air-pushing-on-cardboard-pushing-on-water system is stable.) But if Rayleigh-Taylor instabilities are present, then they will wreak havoc on your attempt to keep the denser fluid contained where you want it.
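The runaway growth described above can be stated more precisely. In the standard linearized treatment (a textbook result, not given in the book), a small ripple of wavenumber $k$ on the interface grows exponentially:

```latex
\eta(t) \approx \eta_0 \, e^{\sigma t},
\qquad
\sigma = \sqrt{A\,k\,g},
\qquad
A = \frac{\rho_{\text{heavy}} - \rho_{\text{light}}}{\rho_{\text{heavy}} + \rho_{\text{light}}}
```

Here $\eta$ is the ripple amplitude, $g$ the acceleration, and $A$ the Atwood number. For air pushing on water, $A$ is nearly 1, and short-wavelength ripples (large $k$) grow fastest, which is why the glass empties in a fraction of a second.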

Laser fusion is the equivalent of keeping water trapped in an upside-down glass. As you compress a pellet of deuterium, it becomes denser and denser. Long before you get it hot and dense enough to fuse, it will be much denser than whatever substance you are using to compress it, whether it is particles of light or a collection of hot atoms. You are using a less-dense substance to squash and contain a much denser one, and that means you will get Rayleigh-Taylor instabilities. Any tiny imperfections on the interface between the plasma and the stuff that is pushing on the plasma will immediately grow. Even an almost perfectly round sphere of deuterium will quickly become distorted, squirting tendrils in all directions. Just as this ruins any attempt to keep water in an inverted glass by means of air pressure, it seriously damages a machine’s ability to compress and contain a plasma by means of light. The only way around this was to make sure there were almost no imperfections. The target had to be perfectly smooth, and the compressing lasers had to illuminate the target completely uniformly, without any hot or cold spots that would lead to ever-growing Rayleigh-Taylor tendrils.


RAYLEIGH-TAYLOR INSTABILITY IN INERTIAL CONFINEMENT FUSION: Use lasers or particles to bombard a pellet of fuel and small imperfections on the surface of the pellet quickly become large fingers that cool the fuel and prevent it from fusing properly.

It was almost as if the laser scientists were trying to invert a glass so carefully that the surface of the water inside wouldn’t ripple at all. This is an extraordinarily difficult task. Even the twenty-armed Shiva machine, heating the plasma from twenty different directions at once, wasn’t uniform enough to keep the Rayleigh-Taylor instabilities in check. The twenty pinpricks of laser light were far enough apart from one another that they would create hot spots in the target rather than heating it uniformly. The pellet would compress, getting hot and dense enough to induce a little bit of fusion, but before the reaction really got going, the Rayleigh-Taylor instability would take over. Tendrils would form. Instead of getting denser and hotter, the deuterium would squirt out.

The Livermore scientists tried everything they could to get the Rayleigh-Taylor problem under control. One method mimicked the Teller-Ulam design for the hydrogen bomb. Instead of using the lasers to push directly onto a dollop of deuterium, the new method did it indirectly. The pellet was ensconced at the center of a hollow cylinder known as a hohlraum. Instead of striking the pellet, the lasers struck the insides of the hohlraum. The hohlraum then radiated x-rays toward the pellet. This setup is known as indirect drive, and it helped ameliorate the problems with the instabilities.


DIRECT DRIVE VERSUS INDIRECT DRIVE: In direct drive (left), laser beams shine directly on a pellet of fuel. Indirect drive (right), on the other hand, has laser light shining on a hohlraum, which evaporates and shines x-rays on the pellet.

But it didn’t do enough. Shiva, which had cost $25 million to build, only performed a fraction as well as its designers had hoped. It didn’t come close to producing as much energy from fusion as it took to run the lasers. Reaching breakeven was a much harder task than expected. The answer seemed within reach, though: just build a bigger Shiva, one with ten times the power, and ten times the price. By the beginning of the 1980s, Livermore was building a $200 million laser named Nova. Researchers there were confident Nova would finally take them to the promised land—igniting fusion fuel, producing more energy than it consumed. Once more, fusion scientists were about to have their faith severely tested.

The science of inertial confinement fusion was following the same trajectory as that of magnetic fusion. Early optimism in the 1950s led scientists to believe that plasmas could be confined and induced to fuse relatively easily. Cheap, million-dollar machines, they thought, would be able to do the job. But the plasma always seemed to wriggle out of control. Instability after instability made the magnetic bottles leak, and million-dollar machines turned into ten-million-dollar and hundred-million-dollar machines. Laser fusion began with similar optimism. Livermore’s scientists thought their first few lasers could get more energy out than they put in. But instabilities like Rayleigh-Taylor allowed the plasma to escape its confinement. Million-dollar lasers grew bigger and more expensive. Soon, laser fusion machines were as expensive as their magnetic counterparts.

Even today, decades later, these two approaches—magnetic fusion and inertial confinement fusion—remain the ways that most scientists are trying to bottle up a tiny sun. But both methods are extremely expensive, and both are plagued with instabilities that threaten to destroy the dream of unlimited fusion energy. Shiva’s failure occurred two decades after Homi J. Bhabha predicted that fusion power plants were twenty years away. Yet in the 1970s, and even into the 1980s, fusion scientists spoke of power plants as being thirty years away. After decades of research, the goal of fusion energy had become ten years more distant.

As fusion scientists built ever-bigger tokamaks and lasers for tens and hundreds of millions of dollars, outsiders began to wonder whether there was another cheaper, easier path to fusion energy. The stage was set for the biggest scientific debacle of modern times: cold fusion.