Coming of Age in the Milky Way - Timothy Ferris (2003)

Part II. TIME

Chapter 14. THE EVOLUTION OF ATOMS AND STARS

At quite uncertain times and places,
   The atoms left their heavenly path,
And by fortuitous embraces,
   Engendered all that being hath.

—James Clerk Maxwell

For I have already at times been a boy and a girl, and a bush and a bird and a mute fish in the salty waves.

—Empedocles

           By the dawn of the twentieth century it was becoming evident that some sort of “atomic” energy must be responsible for powering the sun and the other stars. As early as 1898, only two years after Becquerel’s discovery of radioactivity, the American geologist Thomas Chrowder Chamberlin was speculating that atoms were “complex organizations and seats of enormous energies” and that “the extraordinary conditions which reside in the center of the sun may … set free a portion of this energy.”1 But no one could say what this mechanism might be, or just how it might operate, until a great deal more was understood about both atoms and stars. The effort to garner such an understanding involved a growing level of collaboration between astronomers and nuclear physicists. Their work was to lead, not only to a resolution of the stellar energy question, but to the discovery of a golden braid of cosmic evolution intertwining atomic and stellar history.

The key to understanding stellar energy was, as Chamberlin foresaw, to discern the structure of the atom. That there was an internal structure to the atom could be intimated from several lines of research, among them the study of radioactivity: For atoms to emit particles, as they were found to do in the laboratories of Becquerel and the Curies, and for these emissions to change them from one element to another, as Rutherford and the English chemist Frederick Soddy had established, atoms must be more than the simple, indivisible units that their name (from the Greek for “cannot be cut”) implied. But atomic physics still had a long way to go in comprehending that structure. Of the three principal constituents of the atom—the proton, neutron, and electron—only the electron had as yet been identified (by J. J. Thomson, in the waning years of the nineteenth century). Nobody spoke of “nuclear” energy, for the existence of the atomic nucleus itself had not been established, much less that of its constituent particles the proton and the neutron, which were to be identified, respectively, by Thomson in 1913 and James Chadwick in 1932.

Rutherford, Hans Geiger, and Ernest Marsden ranked among the Strabos and Ptolemies of atomic cartography. In Manchester from 1909 through 1911 they probed the atom by launching streams of subatomic “alpha particles”—helium nuclei—at thin foils made of gold, silver, tin, and other metals. Most of the alpha particles flew through the foil, but, to the experimenters’ astonishment, a few bounced right back. Rutherford thought long and hard about this strange result; it was, he remarked, as startling as if a fifteen-inch shell were to bounce off a sheet of tissue paper. Finally, at a Sunday dinner at his house in 1911, he announced to a few friends that he had hit on an explanation—that most of the mass of each atom resides in a tiny, massive nucleus. By measuring the back-scattering rates obtained from foils composed of various elements, Rutherford could calculate both the charge and the maximum diameter of the atomic nuclei in the target. Here, then, was an atomic explanation of the weights of the elements: Heavy elements are heavier than light elements because the nuclei of their atoms are more massive.

An atom of simple hydrogen consists of a single proton (its nucleus) surrounded by a shell containing one electron. Atoms of heavier elements have more protons, as well as neutrons, in the nucleus, and additional electrons in the shells. (Not to scale: Were the proton the size of a grain of sand, the shell would be larger than a football field.)

The realm of the electrons was then explored by the Danish physicist Niels Bohr, who established that electrons inhabit discrete orbits, or shells, surrounding the nucleus. (For a time Bohr thought of the atom as a miniature solar system, though this analysis soon proved inadequate; the atom is ruled not by Newtonian but by quantum mechanics.) Among its many other felicities, the Bohr model laid bare the physical basis of spectroscopy: The number of electrons in a given atom is determined by the electrical charge of the nucleus, which in turn is due to the number of protons in the nucleus, which is the key to the atom’s chemical identity. When an electron falls from an outer to an inner orbit it emits a photon. The wavelength of that photon is determined by the particular orbits between which the electron has made the transition. And that is why a spectrum, which records the wavelengths of photons, reveals the chemical elements that make up the star or other object the spectroscopist is studying. In the words of Max Planck, the founder of quantum physics, the Bohr model of the atom provided “the long-sought key to the entrance gate into the wonderland of spectroscopy, which since the discovery of spectral analysis had obstinately defied all efforts to breach it.”2
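
The relation can be made concrete with the Rydberg formula for hydrogen, which gives the wavelength emitted when an electron drops from one Bohr orbit to another. The sketch below uses standard textbook constants rather than anything from the passage; it simply illustrates how a spectrum encodes electron transitions.

```python
# A minimal sketch of the Bohr/Rydberg relation for hydrogen:
#   1/wavelength = R * (1/n_lower**2 - 1/n_upper**2)
R = 1.097e7  # Rydberg constant, per meter (standard value)

def wavelength_nm(n_lower, n_upper):
    """Wavelength in nanometers of the photon emitted in an n_upper -> n_lower drop."""
    inverse_wavelength = R * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1e9 / inverse_wavelength

# The Balmer series (drops to the second orbit) produces the visible hydrogen
# lines that show up in stellar spectra.
for n in (3, 4, 5):
    print(f"n = {n} -> 2: {wavelength_nm(2, n):.0f} nm")
# Prints roughly 656, 486, and 434 nm: the familiar H-alpha, H-beta, and H-gamma lines.
```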

But, marvelous though it might be to realize that the spectra evinced the leaps and tumbles of electrons in their Bohr orbits, nobody could yet read the spectra of stars for significant clues to what made them shine. In the absence of a compelling theory, the field was left to the taxonomists—to those who went on doggedly recording and cataloging the spectra of stars, though they knew not where they were going.

At Harvard College Observatory, a leader in the dull but promising business of stellar taxonomy, photographic plates that revealed the color and spectra of tens of thousands of stars were stacked in front of “computers”—spinsters, most of them, employed as staff members at a university where their sex barred them from attending classes or earning a degree. (Henrietta Leavitt, the pioneer researcher of the Cepheid variable stars that were to prove so useful to Shapley and Hubble, was a Harvard computer.) The computers were charged with examining the plates and entering the data in neat, Victorian script for compilation in tomes like the Henry Draper Catalog, named in honor of the astrophotographer and physician who had made the first photograph of the spectrum of a star. Like prisoners marking off the days on their cell walls, they tallied their progress in totals of stars cataloged; Antonia Maury, Draper’s niece, reckoned that she had indexed the spectra of over five hundred thousand stars. Theirs was authentically Baconian work, of the sort Newton and Darwin claimed to practice but seldom did, and the ladies took pride in it; as the Harvard computer Annie Jump Cannon affirmed, “Every fact is a valuable factor in the mighty whole.”3

It was Cannon who, in 1915, first began to discern the shape of that whole, when she found that most stars belonged to one of about a half-dozen distinct spectral classes. Her classification system, now ubiquitous in stellar astronomy, arranges the spectra of the stars by color, from the blue-white O stars through yellow G stars like the sun to the red M stars.* Here was a sign of simplicity beneath the astonishing variety of the stars.

A still deeper order was soon disclosed, when in 1911 the Danish engineer and self-taught astronomer Ejnar Hertzsprung analyzed Cannon’s and Maury’s data for stars in two clusters, the Hyades and the Pleiades. Clusters like these are intuitively recognizable as genuine assemblies of stars and not merely chance alignments; even an inexperienced observer will start with recognition when sweeping a telescope across the Pleiades, its ice-blue stars tangled in gossamer webs of diamond dust, or the Hyades, whose stars range in color from bone-white to Roman gold. Since all the stars in a given cluster may thus be assumed to lie at about the same distance from the earth, any observed differences in their apparent magnitudes can be ascribed, not to their differing distances, but to actual differences in their absolute magnitudes. Hertzsprung took advantage of this situation, treating the clusters as laboratory samples wherein he could look for a relationship between the colors and intrinsic brightnesses of stars. He found just such a relationship: Most of the stars in each cluster fell along two gently curved lines. This, in sapling form, was the first intimation of a tree of stars that has since been designated the Hertzsprung-Russell diagram.

The applicability of Hertzsprung’s method soon broadened to include noncluster stars as well. In 1914, Walter Adams and Arnold Kohlschutter at Mount Wilson found that the relative intensities of lines in stellar spectra suggested their absolute magnitudes. Hereafter, whenever the distance to a single star of a given variety—a class B giant, say, or a class K dwarf—was measured by the parallax method, then the distances of all other stars that displayed comparable spectra could be estimated as well. This meant that Hertzsprung’s approach of graphing the absolute magnitude of stars against their colors could be applied to field stars as well as to the relatively few stars found in clusters.
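
The logic behind these “spectroscopic” distances can be illustrated with the distance-modulus relation, m − M = 5 log₁₀(d / 10 parsecs): once a star’s spectrum suggests its absolute magnitude M, comparing it with the apparent magnitude m yields a distance. A rough sketch, with purely illustrative numbers not drawn from the text:

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Distance from the distance modulus: m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag) / 5.0 + 1.0)

# Hypothetical example: a star whose spectrum marks it as a sunlike dwarf
# (absolute magnitude about +4.8) but which appears at magnitude 9.8
# must lie at roughly 100 parsecs, or about 326 light-years.
print(distance_parsecs(9.8, 4.8))
```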

Henry Norris Russell, a Princeton astrophysicist with an encyclopedic command of his field, promptly set to work doing just that. Without even knowing of Hertzsprung’s work, Russell plotted the absolute magnitudes of a few hundred field stars against their colors, and found that most lie along a narrow, slanting zone—the trunk of the tree of stars.

The tree has been growing ever since, and today it is embedded in the consciousness of every stellar astronomer in the world. Its trunk is the “main sequence,” a gently curving S along which lie 80 to 90 percent of all visible stars. The sun, a typical yellow star, resides on the main sequence a little less than halfway up the trunk. A slender branch departs from the trunk and makes its way upward and to the right, where it blossoms into a bouquet of brighter, redder stars—the red giants. Below and to the left sits a humus pile of dim, blue to white stars—the dwarfs.

The Hertzsprung-Russell diagram provided astronomers with a frozen record of evolution, the astrophysical equivalent of the fossil record geologists study in rock strata. Presumably, stars somehow evolve, spending most of their time on the main sequence (most stars today, in the snapshot of time given us to observe, are found there) but beginning and ending their careers somewhere else, among the branches or in the humus pile. One could not, of course, wait to see this happen; the lifetimes of even short-lived stars are measured in millions of years. To find the answers required understanding the physics of how stars work.

Progress on the physics side, meanwhile, was blocked by a seemingly insurmountable barrier. Literally so: The agency responsible was known as the Coulomb barrier, and for a time it stymied the efforts of theoretical physicists to comprehend how nuclear fusion might produce energy in stars.

The line of reasoning that led to the barrier was impeccable. Stars are made mostly of hydrogen. (This is evident from studying their spectra.) The nucleus of the hydrogen atom consists of but a single proton, and the proton contains nearly all the mass in the atom. (This we know from Rutherford’s experiments.) Therefore the proton must also contain nearly all of a hydrogen atom’s latent energy. (Recall that mass equals energy: E = mc²). In the heat of a star, the protons are flung about at high velocities—heat means that the particles involved are moving fast—and, as there are plenty of protons milling about in close quarters at the dense core of a star, they must collide quite a lot. In short, the energy of the sun and stars could reasonably be assumed to involve the interactions of protons. This was the basis of Eddington’s surmise that the stellar power source could “scarcely be other than the subatomic energy which, it is known, exists abundantly in all matter.”4

The Hertzsprung-Russell diagram plots the spectral classes (or colors) of stars against their brightnesses. This version of the diagram is thought to represent the general stellar population of our galaxy.

What happens when protons collide? Well, we know that they can stick together—“fuse”—because they are found, stuck together, in the nuclei of all the heavier elements. Might the fusion of protons release energy? A strong hint that this is so lay in the fact that the heavier nuclei weigh a little less than the sum of their parts. There was some confusion about this point, but the basic idea was correct—that energy is released in stars when the nuclei of the light atoms fuse to make those of heavy atoms. Rutherford already had been performing experiments in what he called “the newer alchemy,” bombarding nuclei with protons and changing them into the nuclei of different elements, and, as Eddington wryly noted, “what is possible in the Cavendish Laboratory may not be too difficult in the sun.”5

So far, so good; science was close to identifying thermonuclear fusion as the secret of solar power. But it was here that the Coulomb barrier intervened. Protons are positively charged; particles of like charge repel one another; and this obstacle seemed too strong to be overcome, even at the high velocity of protons flying about in the intense heat at the center of a star. Seldom, according to classical physics, could two protons in a star get going fast enough to breach the walls of their electromagnetic force fields and merge into a single nucleus. The calculations said that the proton collision rate could not possibly suffice to sustain fusion reactions. Yet there stood the sun, its beaming face laughing at the equations that said it could not shine.

There was nothing wrong with the argument, so far as it went: Were classical physics declared the sole law of nature, the stars would indeed wink out. Fortunately, nature on the nuclear scale does not function according to the prescriptions of classical physics, which works fine for big objects like pebbles and planets but breaks down in the realm of the very small. On the nuclear scale, the rules of quantum indeterminacy apply.

In classical mechanics, subatomic particles like protons were viewed as analogous to macroscopic objects like grains of sand or cannonballs. Viewed by these lights, a proton hurled against the Coulomb barrier of another proton had no more chance of penetrating it than a cannonball has of penetrating a ten-foot-thick fortress wall. Introduce quantum indeterminacy, however, and the picture changes dramatically. Quantum mechanics demonstrates that the proton’s future can be predicted only in terms of probabilities: Most of the time the proton will, indeed, bounce off the Coulomb barrier, but from time to time it will pass right through it, as if a cannonball were to fly untouched through a fortress wall.*

This is “quantum tunneling,” and it licenses the stars to shine. George Gamow, eager to exploit connections between astronomy and the exotic new physics at which he was adept, applied quantum probabilities to the question of nuclear fusion in stars and found that protons could surmount the Coulomb barrier—almost. Quantum tunneling took the calculations from the dismal, classical prediction, which had protons fusing at only one one-thousandth of the rate required to account for the energy released by the sun, up to fully one tenth of the necessary rate. It then took less than a year for the remaining deficit to be accounted for: The solution was completed in 1929, when Robert Atkinson and Fritz Houtermans combined Gamow’s findings with what is called the Maxwellian velocity distribution theory. In a Maxwellian distribution there are always a few particles moving much faster than average; Atkinson and Houtermans found that these fleet few were sufficient to make up the difference. Now at last it was clear how the Coulomb barrier could be breached often enough for nuclear fusion to function in stars.
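
The compromise Atkinson and Houtermans exploited can be sketched numerically. The Maxwellian distribution supplies fewer and fewer protons at higher energies, while the tunneling probability rises steeply with energy; the product of the two peaks at an intermediate energy, the so-called Gamow peak. The constants below are standard order-of-magnitude values for proton collisions at the solar core, assumed for illustration rather than taken from the text.

```python
import math

kT = 1.3     # mean thermal energy at the solar core, in keV (T ~ 15 million K)
E_G = 490.0  # "Gamow energy" for proton-proton collisions, in keV

def relative_rate(E_keV):
    """Unnormalized product of the Maxwellian tail and the tunneling probability."""
    maxwell = math.exp(-E_keV / kT)              # few protons carry this much energy...
    tunnel = math.exp(-math.sqrt(E_G / E_keV))   # ...but tunneling strongly favors them
    return maxwell * tunnel

# Scan a range of collision energies and find where the product peaks.
energies = [0.1 * i for i in range(1, 301)]      # 0.1 to 30 keV
peak = max(energies, key=relative_rate)
print(f"Gamow peak near {peak:.1f} keV")         # roughly 6 keV, several times the thermal energy
```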

But how, exactly, do the stars do it? Within another decade, two likely fusion processes were identified—the proton-proton chain reaction and the carbon cycle.

The key figure in both developments was Hans Bethe, a refugee from Nazi Germany who had studied with Fermi in Rome and gone on to teach at Cornell. Like his friend Gamow, the young Bethe was an effervescent, nimble thinker, so gifted that he made his work look like play. Though untrained in astronomy, Bethe was a legendarily quick study. In 1938 he helped Gamow’s and Edward Teller’s student C. L. Critchfield calculate that a reaction beginning with the collision of two protons could indeed generate approximately the energy—some 3.86 × 10³³ ergs per second—radiated by the sun.* And so, in the span of less than forty years, humankind had progressed from ignorance of the very existence of atoms to an understanding of the primary thermonuclear fusion process that powers the sun.
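
Bethe’s figure for the sun’s output can be turned directly, via E = mc², into the rate at which the sun converts mass into energy. A back-of-envelope sketch in cgs units:

```python
L_sun = 3.86e33  # solar luminosity in ergs per second (the figure cited above)
c = 3.0e10       # speed of light in centimeters per second

# Since E = m * c**2, the mass converted into energy each second is L / c**2.
mass_to_energy = L_sun / c**2      # grams per second
print(mass_to_energy)              # about 4.3e12 grams per second
print(mass_to_energy / 1.0e6)      # about 4.3 million metric tons of mass per second
```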

The proton-proton reaction was insufficiently energetic, however, to account for the much higher luminosities of stars much larger than the sun—stars like the blue supergiants of the Pleiades, which occupy the higher reaches of the Hertzsprung-Russell diagram. This Bethe was to remedy before the year was out.

In April 1938, Bethe attended a conference organized by Gamow and Teller at the Carnegie Institution in Washington to bring astronomers and physicists together to work on the question of stellar energy generation. “At this conference the astrophysicists told us physicists what they knew about the internal constitution of the stars,” Bethe recalled. “This was quite a lot [although] all their results had been derived without knowledge of the specific source of energy.”6 Back at Cornell, Bethe attacked the problem with such alacrity that Gamow would later joke that he had calculated the answer before the train that carried him home arrived at the Ithaca station. Bethe wasn’t that quick, but within only a matter of weeks he had succeeded in identifying the carbon cycle, the critical fusion reaction that powers stars more than one and a half times as massive as the sun.

Publication of the paper, however, was delayed. Bethe finished it that summer and sent it to the Physical Review, but then was informed by a graduate student, Robert Marshak, that the New York Academy of Sciences offered a five-hundred-dollar prize for the best unpublished paper on energy production in stars. Bethe, who had need of the money, coolly asked that the paper be sent back, entered it in the competition, and won. “I used part of the prize to help my mother emigrate,” he told the American physicist Jeremy Bernstein. “The Nazis were quite willing to let my mother out, but they wanted two hundred and fifty dollars, in dollars, to release her furniture. Part of the prize money went to liberate my mother’s furniture.”7 Only then did Bethe permit publication of the paper that was to win him a Nobel Prize. He had, for a time, been the sole human being who knew why the stars shine.

Curiously stutter-stepped were the fusion reactions Bethe perceived. The proton-proton reaction begins with the collision, deep inside the sun, of two protons that have sufficient velocity and good fortune to penetrate the Coulomb barrier. If the collision succeeds in transforming one of the protons into a neutron—another rather unlikely event, involving a weak-force interaction called beta decay—the result is a nucleus of heavy hydrogen. The interaction releases a neutrino, which flies out of the sun, and a positron, which plows into the surrounding gas and thus helps heat the sun. The average proton at the center of the sun finds it necessary to wait billions of years before chancing to experience this brief fling.

The next step, however, comes quickly. Within a few seconds, the heavy hydrogen nucleus snaps up another proton, transforming itself into helium-3 and releasing a photon that carries off further energy into the surrounding gas. Nuclei of helium-3 are rare, and so most are obliged to wait another few million years before encountering a second helium-3 nucleus. Then the two nuclei can fuse, forming a stable helium nucleus and releasing two protons, which are free to join the dance in their turn. The result has been to release energy: The helium end-product weighs about seven tenths of 1 percent less than did the particles that went into the reaction. This mass has been converted into energy, in the form of quanta that slowly make their way to the surface, blundering into atoms and being absorbed and reemitted as they go, until, many thousands of years later, they at last break into the clear and are released into space as sunlight.
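
That fraction can be checked directly from the particle masses: four hydrogen atoms weigh slightly more than the one helium atom they ultimately become, and the difference emerges as energy (a small share of it carried off by the neutrinos). A quick check using standard atomic masses, which are not given in the text:

```python
# Standard atomic masses, in unified atomic mass units
m_hydrogen = 1.007825   # one hydrogen atom
m_helium4  = 4.002602   # one helium-4 atom

mass_in = 4 * m_hydrogen               # four hydrogens go into the chain
mass_out = m_helium4                   # one helium-4 comes out
fraction_converted = (mass_in - mass_out) / mass_in
print(f"{fraction_converted:.2%}")     # about 0.71% of the original mass becomes energy
```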

The proton-proton reaction has ramifications that are not completely understood—measurements of the neutrino flux on earth have to date yielded only a third as many neutrinos as the theory says should be released—and the carbon cycle is more complicated still. Nonetheless, enough is now known about solar fusion for us humans to begin to appreciate the elegance of the workings of our mother star. We have learned, for one thing, that the sun is not a bomb, although nuclear fusion is the same mechanism that functions in a thermonuclear weapon. When a chain reaction occurs in one tiny area in the center of the sun, it does not normally touch off other reactions in the surrounding gas; instead, the additional heat expands the gas slightly, lowering its density and so decreasing the probability of further proton-proton collisions for the moment. Owing to the operation of this self-regulating process, as averaged out for countless interactions, the entire star equilibrates, expanding to damp the rate of thermonuclear processes when they threaten to attain a runaway rate, then contracting and heating to increase the rate when the center begins to cool. Although only about one two-billionth of the sun’s light strikes the earth, that has been sufficient to endow the earth with warmth, and life, and with bipeds clever enough to decipher the particulars of their debt to Sol.

With the basic physics of solar fusion now in hand, it became possible to rework Kelvin’s estimates of the age of the sun. The sun’s mass can be determined, and very accurately so, from Newton’s laws and the orbital velocity of the planets. The result is 1.989 × 10³³ grams, the equivalent of three hundred thousand Earths. The sun’s composition, at the surface at least, is revealed by the spectrograph to be principally hydrogen and helium. Knowing, then, the mass, volume, and approximate composition of the sun, one can ascertain the conditions that pertain at its center, where the thermonuclear processes take place. One can, for instance, calculate that the temperature at the core is about 15 million degrees, that the density is about twelve times that of lead (though the heat keeps the dense material in a gaseous and not a solid state), and that the fusion reaction rate is such that some 600 million tons of hydrogen are fused into helium inside the sun every second, with about four million tons of mass converted into energy in the process. Since the sun contains a finite amount of hydrogen, it must eventually run low on fuel, at which time its nuclear furnaces will falter. The total hydrogen-burning “lifetime” for the sun can thus be calculated. It turns out to be about ten billion years. Since radiometric dating of the asteroids and the earth yields an age for the solar system of a little less than five billion years, we conclude that the sun now is in its middle age, and has another five billion years of hydrogen-burning ahead of it. And so the investigation of stellar energy sources, which had been driven in part by the demands of the geologists and biologists for a time scale longer than the old ideas permitted, opened up immensities of astronomical history even longer than the Darwinians had required.
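
The ten-billion-year figure follows from the numbers just quoted. Only the hot inner core burns hydrogen, and only about 0.7 percent of the mass fused is converted to energy; dividing the available energy by the rate at which it streams away gives the lifetime. A rough sketch, in which the 10 percent core fraction is a conventional assumption rather than a figure from the text:

```python
M_sun = 1.989e33      # solar mass in grams (from the text)
L_sun = 3.86e33       # solar luminosity in ergs per second (from the text)
c = 3.0e10            # speed of light, cm/s
efficiency = 0.007    # ~0.7% of the fused hydrogen's mass becomes energy
core_fraction = 0.1   # assume only about a tenth of the sun gets hot enough to burn

fuel_energy = core_fraction * M_sun * efficiency * c**2   # total ergs available
lifetime_years = fuel_energy / L_sun / 3.15e7             # seconds -> years
print(f"{lifetime_years:.1e} years")                      # ~1e10: about ten billion years
```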

The lifetimes of other stars can be calculated similarly. The fusion rate increases roughly as the fourth power of the mass; consequently, dwarf stars last much longer than giants. The least massive stars have about 8 percent of the mass of the sun. (Much less and they would fail to generate sufficient interior heat for fusion to take place, and would instead be planets.) These little dwarfs, residents of the lower tiers of the Hertzsprung-Russell diagram, burn their hydrogen fuel so prudently that they can last for a trillion years or more. At the other end of the scale, toward the top of the diagram, stand giant stars with up to sixty times the mass of the sun. (If much larger, they would blow themselves apart as soon as they got fired up.) These huge stars squander their fuel profligately, and run out of hydrogen almost immediately; a star ten times as massive as the sun lasts less than one hundred million years.
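
These lifetimes follow from the scaling just described: if luminosity grows roughly as the fourth power of the mass while the fuel supply grows only in proportion to the mass, then lifetime goes as mass divided by luminosity, roughly the inverse cube of the mass. A sketch under that simplifying assumption:

```python
def lifetime_billion_years(mass_in_suns, sun_lifetime=10.0, exponent=4):
    """Main-sequence lifetime, assuming luminosity scales as mass**exponent.

    Fuel scales with mass, so lifetime ~ mass / luminosity ~ mass**(1 - exponent).
    """
    return sun_lifetime * mass_in_suns ** (1 - exponent)

for m in (0.1, 1.0, 10.0):
    print(f"{m:>4} solar masses -> about {lifetime_billion_years(m):.3g} billion years")
# A ten-solar-mass star comes out at about 0.01 billion (ten million) years, while a
# 0.1-solar-mass dwarf comes out at about 10,000 billion years; the simple power law
# exaggerates both extremes somewhat, but the moral -- dwarfs endure, giants burn out -- holds.
```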

These considerations greatly enriched and enlivened human appreciation of what might be called the ecology of the Milky Way. They revealed that the most spectacular stars in the galaxy, the giant, blue-white O and B stars, are also the stars that have the least time to live: Giants typically burn for only ten million to one hundred million years, and some may last no longer than a million years. This means that the brilliant giants that trace out the spiral arms are, by galactic standards, flowers that bloom for but a day. Indeed, that is why they trace out the arms. Stars of various masses condense along the arms, but while more modest stars last long enough to drift off into the surrounding disk, the brilliant superstars die before they ever get far from their birthplaces, which, consequently, they demark.

How do stars die? This, too, depends principally on their mass. When an ordinary star like the sun runs low on fuel it takes on a split personality: Its core contracts, no longer propped up by the radiation of energy from thermonuclear processes at the center, while its outer portion—its “atmosphere,” so to speak—expands and cools. The star’s color changes from a yellow-white to a deepening red: It has become a “red giant.” Ultimately the stellar atmosphere boils away into space, leaving behind the naked core, a massive, dense sphere only about the size of the earth—a “white dwarf” star.

Such a prognosis, plotted on the Hertzsprung-Russell diagram, serves to animate the tree of stars. When an average star like the sun exhausts its hydrogen fuel, it leaves the main sequence and moves upward—since the growing size of its outer atmosphere briefly makes it brighter—and to the right, since it is getting redder. Many stars during this phase may become unstable, staggering back and forth from right to left on the diagram. When the star sheds its atmosphere, it drops down the diagram and skids to the left, settling finally into the zone of the white dwarfs. Giant stars follow an approximately similar course, but start higher on the main sequence (since they are brighter) and leave it sooner (since they run out of fuel more rapidly).

The main-sequence lifetimes of stars are determined principally by their masses: Massive stars exhaust their fuel much more rapidly than do low-mass stars.

The destinies of stars once they leave the main sequence also differ greatly, according to their masses. When the sun runs low on fuel it will exit the main sequence toward the right, becoming a red giant. After another billion years or so it will eject its outer atmosphere, skidding from right to left across the diagram as it does so, then plunge down into the graveyard of the white dwarfs. A star with five times the mass of the sun remains on the main sequence for less than a tenth as long, then begins oscillating back and forth near the top of the diagram as an unstable giant. For stars of ten solar masses or more, such instabilities may culminate in the explosion of the star as a supernova.

The ages of star clusters may be inferred from their Hertzsprung-Russell diagrams. In a young cluster like the Pleiades, nearly all the visible stars lie on the main sequence: There are few red giants or white dwarfs to be found, because the cluster is not yet old enough for many of its stars to have run out of hydrogen fuel and departed from the main sequence.

The Hertzsprung-Russell diagram for any given population of stars—a star cluster, say—therefore provides evidence of its age. When the cluster is in its infancy, virtually all its stars lie on the main sequence, contentedly burning hydrogen. Soon the giant stars—those at the upper-left extremity of the main sequence—run out of fuel and balloon into red giants; each, as it does so, leaves the main sequence and moves to the right. As more time goes by, the same fate afflicts stars of ever less mass. The result, on the diagram, is a “cutoff point,” a place along the main sequence where the tree branches off to the right. The diagram is only a snapshot of a moment amid billions of years of stellar history, but the location of the cutoff point tells us how long the cluster has been there: The farther down the trunk the cutoff point falls, the older the tree.
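
The same mass-lifetime scaling turns the cutoff point into a clock: the stars just now leaving the main sequence are those whose lifetimes equal the cluster’s age, so an estimate of the turnoff mass gives the age directly. A sketch, reusing the rough relation above with assumed turnoff masses:

```python
def cluster_age_billion_years(turnoff_mass_in_suns, sun_lifetime=10.0, exponent=4):
    """Stars at the main-sequence cutoff have lifetimes equal to the cluster's age."""
    return sun_lifetime * turnoff_mass_in_suns ** (1 - exponent)

# Illustrative turnoff masses (assumed values, not measurements from the text):
print(cluster_age_billion_years(5.0))   # turnoff near 5 solar masses -> ~0.08 billion years: young
print(cluster_age_billion_years(1.0))   # turnoff near 1 solar mass  -> ~10 billion years: an old globular
```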

The Hertzsprung-Russell diagram of the Pleiades cluster, for example, shows almost entirely main sequence stars. This tells us that the Pleiades is a young cluster, in which not enough time has passed for even the giant stars to burn down to the red giant stage. (The stars of the Pleiades are estimated to be less than one hundred million years old.) The diagram of the globular cluster M3, however, looks dramatically different. Here the great majority of stars are either in the red giant phase or are on their way to becoming dwarfs. (We don’t see the dwarfs themselves because they are too dim; M3 is an ample thirty thousand light-years away.) The cutoff branch points like the hand of a clock at the age of the cluster: For M3, the age reads out to some fourteen billion years, making it one of the oldest ever dated.

To envision the pace of stellar evolution more directly, imagine that the sun were a star in a young star cluster and that we were present on the earth right from the outset, when our planet had just cooled sufficiently for its crust to have solidified. Imagine, further, that we could speed up the passage of time, so that ten billion years would pass in a single night. As the sun sets, at time zero, we find the sky studded with main-sequence stars. There are as yet no red giants and no dwarfs. A few bright giants stand out, as well as a number of stars about as luminous as the sun, but the great majority of stars are dimmer and redder than the sun.

Almost immediately, the giant stars exhaust their fuel, become unstable and explode as supernovae, flooding the landscape with scalding white light. On our compressed time scale, where each hour equals a billion years, all these spectacular stars die within the first few minutes. Conceivably their explosions may shock any remaining gas in the cluster into collapsing to form new stars, but any giant stars produced in this fashion will also consume themselves quickly, so that the fireworks are over by the time we’ve settled down to watch the show.

In the hours that follow, successively less massive stars in turn leave the main sequence; we watch them swell into red giants, shed shells of multicolored gas, and reduce themselves to dim dwarf stars. These events are rare enough to hold our attention, however, because relatively few stars in the cluster are more massive than the sun. By dawn some ten billion years have gone by. Now it is the sun’s turn to die. There is a sudden, shuddering contraction of the sun’s core, and the solar atmosphere balloons into an aethereal red cloud that expands and swallows up the planets Mercury and Venus, and then Earth. Backing away to a prudent distance, we watch the cloud disperse and see the naked, helium-rich core of the sun exposed as a dim, dense dwarf.

An old star cluster like the globular cluster M3 displays a strikingly different Hertzsprung-Russell diagram. Here, the more massive stars have had ample time to burn up their fuel and become red giants, moving up and toward the right on the diagram, and then to slide down and to the left as some evolve into dwarfs. The result is a dramatic “cutoff point” at which the main sequence is interrupted. All else being equal, the lower the cutoff point, the older the cluster.

The night is over, but the story has hardly begun. Most of the cluster stars, less massive than the sun, continue to burn steadily, with an unexceptional, candle-yellow glow. These members of the silent majority have long lives ahead of them on the main sequence; they will still be shining aeons after the evacuated atmosphere of the sun has been gathered up to make new stars and planets. The study of stellar evolution teaches us that the meek shall inherit the galaxy.

Once it had been established that stars shine by means of nuclear fusion, it became apparent that they must also be in the business of building light elements into heavy elements. They could hardly do otherwise, inasmuch as nuclear fusion involves the fusing of the nuclei of light atoms to make the nuclei of heavier atoms. Through a variety of fusion processes, stars build hydrogen into helium; helium into carbon; carbon into oxygen and magnesium, and so forth. Indeed, given that the energy released amounts to but a tiny fraction of the mass being shuffled about, we could say that element-making is the primary business of stars, and that their light and heat, though subjectively important to creatures like ourselves who owe their lives to them, are but by-products of that process, as incidental as the heat ventilated out of the smokestack of a tool and die works. If, as the textbooks like to say, atoms are the building blocks of matter, stars are the place where the building blocks are built. As Eddington wrote presciently in 1920, “The stars are the crucibles in which the lighter atoms which abound in the nebulae are compounded into more complex elements.”8

Two essential questions remained.

One was just how stars make the heavy elements. Bethe’s proton-proton reaction yields nothing heavier than helium, which is the second lightest element. If stars build heavier atoms, they must do so by means of other fusion processes. The carbon cycle won’t do the trick; it employs carbon, nitrogen, and oxygen merely as catalysts, leaving no new elements behind. Clearly, it would take some fancy nuclear physics to better reconstruct the full complexity of stellar fusion.

The other question, closely related to the first, was whether stars are the sole, or even the primary, source of the elements. There was a competing hypothesis. It held that most of the elements were fused, not in stars, but in the big bang.

For fusion to have taken place in the big bang, the universe at the very onset of its expansion would have had to be hot. The hypothesis that this was the case came in part from the basic laws of thermodynamics, which show that any given volume of material will become hotter if it is compressed. Suppose, for instance, that the Milky Way galaxy were to be enclosed in a gigantic hydraulic press, like the ones used to crush the hulks of old cars into cubes of scrap metal, and were squeezed down into a volume of, say, only one cubic foot. (This is thought to have been its state when the universe was but a fraction of a second old.) While the compression process was taking place, the stars and planets would be melded together, then the molecules would break down, and finally, when the temperature exceeded that of a stellar interior, even the nuclear structures of the matter in the galaxy would begin to decompose, reducing everything to a hot, dense gas made of subatomic particles—what physicists call a plasma. Release the press, and the plasma would expand and cool, recombining into atoms and molecules in the process. This, then, is a small-scale model of what is thought to have happened in the big bang, with the universe evolving from a high-density plasma into the structures—nuclear, atomic, molecular, stellar, and planetary—that we see around us today.

If astronomers at first regarded the hot big bang idea with reservation, the nuclear physicists were more open to it. They were growing accustomed to envisioning conditions of high temperatures and high densities, if only because of their work on chain reactions in nuclear bombs. Gamow in particular was interested in the question of whether the chemical elements that compose the universe today could have been forged in the fires of the big bang. It was a reasonable supposition—the heavier the element, the more energy was required to build it, and where was there more energy than in the big bang?—and Gamow went to work painting in the details with the broad brush and vivid colors that characterized his approach to physics.

Alas, he was soon in trouble. He and his collaborators were able to determine how hydrogen nuclei could fuse to make nuclei of helium (Von Weizsäcker and others had suggested earlier that helium originated in the big bang) but thereafter their calculations stalled. As the physicists Enrico Fermi and Anthony Turkevich learned, there was no way for nuclei heavier than those of helium to be built in any quantity in the rapidly expanding fireball. The conditions just were not right; by the time helium had been synthesized, the primordial material (“ylem,” Gamow called it, after an old Greek word for the substance of the cosmos prior to the evolution of form) would have thinned out too much for further fusion reactions to take place. The Hungarian-American physicist Eugene Wigner tried to find a way to negotiate what Gamow called the “mass five crevasse” that divides helium from the next stable nucleus, that of lithium. Gamow, who liked to illustrate his own books, published a sketch of Wigner, in mountaineer’s garb, leaping the gap while crying, “Please!”9 Wigner never made it across. Nevertheless, many big bang enthusiasts held out hope that Gamow was right in thinking of the big bang as the birthplace of the elements, and imagined that the difficulties he was encountering would, like the Coulomb barrier problem in astrophysics, eventually be overcome.

This was not the view, however, taken by researchers skeptical about the big bang theory, the most formidable among whom was the British astrophysicist Fred Hoyle. A born outsider who had by sheer intellectual energy made his way from the gray textile valleys of the north of England to the high table at Cambridge, Hoyle was individualistic to the point of iconoclasm, and as combative as if he had earned his knighthood on horseback. He lectured charismatically, in a working-class accent that seemed if anything to deepen as his scholarly credentials accumulated, and he was equally effective with the written word, publishing incisive technical papers, engrossing popularizations of science, and sprightly science fiction yarns with a seemingly effortless facility. Fearful was his scorn, and withering were his critiques of the big bang theory.

Hoyle damned the theory as epistemologically sterile, in that it seemed to place an inviolable, temporal limitation on scientific inquiry: The big bang was a wall of fire, past which science at the time did not know how to probe. Hoyle found it “highly objectionable that the laws of physics should lead us to a situation in which we are forbidden to calculate what happened before a certain moment in time.”10 He poked fun at the theory’s creationist overtones: Had it not been proposed by a priest, Lemaître, and had not Pope Pius XII, at the opening of a meeting of the Pontifical Academy of Sciences on November 22, 1951, declared that it accorded with the Catholic concept of creation (an endorsement that, Gamow joked, demonstrated its “unquestionable truth”)?11 Empirically, Hoyle was unsparing in calling attention to the big bang theory’s most telling liability, the time-scale problem. Owing to a number of errors, chief among them an inadequate understanding of the absolute magnitude of the Cepheid variable stars employed as intergalactic distance indicators, Hubble and Humason had severely underestimated the dimensions of the expanding universe—and, therefore, its age as well. Hubble’s original statement of the expansion law had been that H0, the expansion parameter, equaled 550 kilometers per second per megaparsec—meaning that for every megaparsec (or 3.26 million light-years) that one looks out into space, one finds galaxies moving apart at an additional 550 kps. The trouble was that this value for H0 yielded an elapsed time since the big bang of only about two billion years. This was smaller than the age of the sun and the earth. Since the universe cannot be younger than the stars and planets it contains, obviously something was wrong.
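
The arithmetic behind the time-scale problem is simply the reciprocal of the expansion parameter: 1/H₀ is roughly the time the galaxies would have needed, at their present speeds, to reach their present separations. A back-of-envelope sketch:

```python
H0 = 550.0                    # Hubble's early value: km per second per megaparsec
km_per_megaparsec = 3.086e19  # kilometers in one megaparsec

hubble_time_seconds = km_per_megaparsec / H0   # 1/H0, the naive expansion age
hubble_time_years = hubble_time_seconds / 3.15e7
print(f"{hubble_time_years:.2e} years")        # about 1.8e9: under two billion years,
                                               # younger than the earth itself
```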

In Hoyle’s view, what was wrong was the big bang concept itself. As an alternative, he and two colleagues, Hermann Bondi and Thomas Gold, promulgated in 1948 what they called the steady state model. According to their theory, the universe was infinitely old and generally unchanging: There had been no creation event, no high-density infancy from which the universe had evolved.* The steady state theory was not destined to prosper; it lost its raison d’être once the errors in Hubble’s distance figures were repaired, and it predicted that some galaxies ought to be very much older than others, of which no evidence has ever been found. But it had the salubrious effect of concentrating its advocates’ attention on the question of where the heavier elements had come from. The steady staters could scarcely imagine, as Gamow did, that the elements had been synthesized in the big bang, since they denied that there had ever been a big bang. Consequently they were obliged to find another furnace in which to cook up such wonderfully complex atoms as those of iron, aluminum, and tin. The obvious candidate was the stars.

Hoyle, who possessed a command of nuclear physics unsurpassed among the astronomers of his generation, had begun working on the question of stellar fusion reactions in the mid-1940s. He had published little, however, owing to a running battle with “referees,” anonymous colleagues who read papers and vet them for accuracy, whose aversion to Hoyle’s more innovative notions prompted him to stop submitting his work to the journals. Hoyle paid a price for his rebelliousness, though, when in 1951, while he stood stubbornly in the wings, Ernst Öpik and Edwin Salpeter worked out the synthesis in stars of atoms up through beryllium to carbon. Rankled by the missed opportunity, Hoyle then broke his silence, and in a 1954 paper demonstrated how red giant stars could build carbon into oxygen-16.

Ahead still lay the seemingly insurmountable obstacle of iron. Iron is the most stable of all the elements; to fuse iron nuclei into the nuclei of a heavier element consumes energy rather than releasing it; how, then, could stars fuse iron and still shine? Hoyle thought that supernovae might do the job—that the extraordinary heat of an exploding star might serve to forge the elements heavier than iron, if that of an ordinary star could not. But this he could not yet prove.

Then, in 1952, fresh impetus was lent to the question of stellar element production when the American astronomer Paul Merrill identified the telltale lines of technetium-99 in the spectra of S stars. Technetium-99 is heavier than iron. It is also an unstable element, with a half-life of only two hundred thousand years. Had the technetium atoms that Merrill detected originated billions of years ago in the big bang, they would since have decayed and there would be too few of them left to show up today in S stars or anywhere else. Yet there they were. Clearly the stars knew how to build elements beyond iron, even if the astrophysicists didn’t.
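
Merrill’s argument is a half-life calculation. Atoms made billions of years ago would have passed through thousands of half-lives, leaving essentially none to detect; their presence therefore points to recent manufacture inside the stars themselves. A sketch of the arithmetic:

```python
half_life_years = 2.0e5   # technetium-99 half-life, as cited above
age_years = 5.0e9         # suppose the atoms dated from the universe's early days

halvings = age_years / half_life_years     # number of half-lives elapsed
surviving_fraction = 0.5 ** halvings
print(halvings)             # 25,000 half-lives
print(surviving_fraction)   # underflows to 0.0 -- effectively nothing would survive
```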

Spurred on by Merrill’s discovery, Hoyle renewed his investigations into stellar nucleogenesis. It was a task he took very seriously; as a boy, hiding atop a stone wall while playing hide-and-seek one night, he had looked up at the stars and resolved to find out what they were, and the adult astrophysicist never forgot his childhood pledge. Visiting the California Institute of Technology, Hoyle found himself in the company of Willy Fowler, a resident faculty member with an encyclopedic knowledge of nuclear physics, and Geoffrey and Margaret Burbidge, a talented husband and wife team who, like Hoyle, were English big bang skeptics.

A break came when Geoffrey Burbidge, scrutinizing recently declassified data from a Bikini Atoll bomb test, noticed that the half-life of one of the radioactive elements produced by the explosion, californium-254, was fifty-five days. This rang a bell: Fifty-five days was just the period that a supernova that Walter Baade studied had taken to fade away. Californium is one of the heaviest of all elements; if it were created in the intense heat of exploding stars, then surely the elements between iron and californium—which comprise, after all, most of the periodic table—could have formed there, too. But how?

Happily, nature had provided a Rosetta stone against which Hoyle and his collaborators could test their ideas, in the form of the cosmic abundance curve. This was a plot of the weight of the various atoms—some twelve hundred species of nuclei, when the known isotopes were taken into account—against their relative profusion in the universe, as determined by studying the rocks of the earth, meteorites that have fallen to earth from space, and the spectra of the sun and stars. Physicists working on the Manhattan Project and the hydrogen bomb tests that followed had grown accustomed to deciphering the chain reactions involved by studying the relative abundances of various isotopes found in the debris left behind by the explosion. The cosmic element abundance curve was, in a sense, just another such table writ large; Gamow called it “the oldest document pertaining to the history of our universe.”12 But where for Gamow that history was principally the story of the big bang, for Hoyle and his colleagues the important thing was what had gone on since, inside a billion trillion stars. “The problem of element synthesis,” they would write, “is closely allied to the problem of stellar evolution.”13

The differences in abundances are great—there are, for instance, two million atoms of nickel for every four atoms of silver and fifty of tungsten in the Milky Way galaxy—and the abundance curve consequently traced out a series of jagged peaks more rugged than the ridgeline of the Andes. The highest peaks were claimed by hydrogen and helium, the atoms created in the big bang—more than 96 percent of the visible matter in the universe is composed of hydrogen or helium—and there were smaller but still distinct peaks for carbon, oxygen, iron, and lead. The pronounced definition of the curve put welcome constraints on any theory of element synthesis in stars: All one had to do (though this was quite a lot) was to identify the processes by which stars had come preferentially to make some elements in far greater quantities than others. Here the genealogy of the atoms was inscribed, as in some as yet untranslated hieroglyph: “The history of matter,” wrote Hoyle, Fowler, and the Burbidges, “… is hidden in the abundance distribution of the elements.”14

Their work culminated in 1957 in an epochal paper, 103 pages long, that showed how fusion processes operating in addition to Bethe’s proton-proton reaction and carbon cycle could build the atoms of the heavy elements—the “metals,” which in astrophysical parlance means everything heavier than helium. The tentpole of the paper was the arrow of time: The evolution of atoms, it revealed, is bound up in the evolution of stars, and the mix of elements found in the universe today is largely the result of what stars did in the past. At first, a star is powered by “hydrogen burning,” the fusing of hydrogen nuclei to build helium. This is the proton-proton reaction discerned by Bethe, and it can go on for a long time, from about a million years for a furiously burning giant star to ten billion years or so for a more tepid star like the sun. “But,” as Hoyle, Fowler, and the Burbidges noted, “no nuclear fuel can last indefinitely.”15 Eventually the supply of hydrogen runs low and the star’s core contracts. The contraction heats the core, and in the hotter environment helium burning can begin. The fusion of helium nuclei forms atoms of carbon, oxygen, and neon—but not lithium, beryllium, or boron, which explains why the former elements show up as peaks on the cosmic element abundance curve and the latter as valleys. When this process falters, the core contracts and heats further, fusing helium nuclei with those of neon to build magnesium, silicon, sulphur, and calcium. Now the old picture of a split-personality star could be refined into multiple personalities: A highly evolved star sorts itself into layers, like an onion, its gaseous iron core surrounded by concentric shells where silicon, oxygen, neon, carbon, helium, and, in the outermost shell, hydrogen are being burned. And so it goes, through previously undiscerned displays of the virtuosity of stellar alchemy.

The cosmic element abundance curve depicts the relative numbers of various sorts of atoms found in the universe at large. It serves as a constraint on theories of how the elements formed. (After Taube, 1982.)

Iron spells death, and death deliverance. The iron core grows like a cancer in the heart of the star, damping nuclear reactions in all that it touches, until the star becomes fatally imbalanced and falls victim to a general collapse. If the mass of the core is roughly one and a half to two or three times that of the sun—here we draw on research by Gamow, Baade, Robert Oppenheimer, Fritz Zwicky, and others—the core rapidly crystallizes into a steely sphere, a “neutron star.” Smooth as a ball bearing and smaller than a city but as massive as the sun, a neutron star spins rapidly on its axis and emits pulses of radio energy as it spins, creating a beacon of the sort that betrayed the locations of Tycho’s and Kepler’s supernovae. It resembles nothing so much as a giant atomic nucleus—as if the real business of the star, the conjuring of nuclei, was now at last monumentalized as a colossal nuclear tombstone.

The bright side, literally so, is that the explosion of the star generates sufficient energy to synthesize an enormous variety of atoms heavier than iron. When the iron core collapses it emits a single great clang, and this final ringing of the gong sends a sound wave climbing upward through the inrushing gas from the envelope of starstuff left behind. As the sonic wave rushing outward meets the waves of gas falling in, the result is a shock stronger than any other in the known universe. In a moment, tons of gold and silver, mercury, iron and lead, iodine and tin and copper are forged in the fiery collision zone. The detonation blows the outer layers of the star into interstellar space, and the cloud with its freight of valuable cargo expands, marching out over the course of aeons to become entangled with the surrounding interstellar clouds. When latter-day stars condense from these clouds, their planets inherit the star-forged elements. The earth was one such planet, and such is the ancestry of the bronze shields and steel swords with which men have fought, and the gold and silver they fought over, and the iron nails that Captain Cook’s men traded for the affections of the Tahitians.

Lesser stars contribute less dramatically to the chemical evolution of the universe, but they too play their part, wafting the nuclei of heavy elements into space through stellar winds, shedding their outer atmospheres as planetary nebulae, or blowing them into space in the less disruptive but still imposing explosions called novae. One can see their handiwork in the chemical gradients that show up across the faces of galaxies: Metals are scarce in the spectra of stars near the galactic center, where few stars have formed since the early days, while stars in the spiral arms, where star formation continues apace, are rich in these heavier elements. We see and touch—indeed, are—the products of the evolution of atoms and stars.

Much remains to be learned about dying stars and their chemical legacies. We conclude this unfinished story with a coda by the Berkeley astronomer Frank Shu, drawing on research by the Soviet scientists Yakov Zel’dovich and Igor Novikov. Writes Shu:

Stars begin their lives as a mixture mostly of hydrogen nuclei and their stripped electrons. During a massive star’s luminous phase, the protons are combined by a variety of complicated reactions into heavier and heavier elements. The nuclear binding energy released this way ultimately provides entertainment and employment for astronomers. In the end, however, the supernova process serves to undo most of this nuclear evolution. In the end, the core forms a mass of neutrons. Now, the final state, neutrons, contains less nuclear binding energy than the initial state, protons, and electrons. So where did all the energy come from when the star was shining all those millions of years? Where did the energy come from to produce the sound and the fury which is a supernova explosion? Energy is conserved: who paid the debts at the end? Answer: Gravity! The gravitational potential of the final neutron star is much greater (negatively: that’s the debt) than the gravitational potential energy of the corresponding main-sequence star. So, despite all the intervening interesting nuclear physics, ultimately Kelvin and Helmholtz were right after all! The ultimate energy source in the stars which produce the greatest amount of energy is gravity power.16
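
Shu’s point can be checked with round numbers: the gravitational binding energy of a neutron star, of order GM²/R, comfortably exceeds the energy that fusing a core’s worth of hydrogen would have released. A sketch in cgs units, with illustrative values not taken from the text:

```python
G = 6.67e-8   # gravitational constant in cgs units (cm^3 g^-1 s^-2)
c = 3.0e10    # speed of light, cm/s
M = 2.8e33    # a core of about 1.4 solar masses, in grams (assumed)
R = 1.0e6     # neutron-star radius of about 10 kilometers, in centimeters (assumed)

gravitational_energy = G * M**2 / R   # order-of-magnitude gravitational binding energy
nuclear_energy = 0.007 * M * c**2     # ~0.7% of the core's mass-energy, released by fusion

print(f"gravitational: {gravitational_energy:.1e} erg")  # ~5e53 erg
print(f"nuclear:       {nuclear_energy:.1e} erg")        # ~2e52 erg
# Gravity wins by more than an order of magnitude -- Kelvin and Helmholtz's revenge.
```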

Cosmic evolution of elements involves the building of simple atomic nuclei in the big bang, and the subsequent fusion of these light nuclei into heavier and more complex nuclei inside stars. (After Reeves, 1984.)

Let that be the human image we call to mind as we watch the gold dust and diamonds bid farewell to the exhausted star as they parade away to be woven into future worlds and minds: The face of Kelvin, the old boy who cowed Rutherford one day when the century was young, shaking off his sleep to scowl, then smile.

*After many false starts, Cannon designated the classes by the letters O, B, A, F, G, K, and M. Students ever since, in a largely unconscious tribute to her memory, have learned the sequence via the mnemonic phrase “Oh, Be a Fine Girl, Kiss Me.”

*There is a quantum chance that a real cannonball will do the same thing, but as this requires nearly all its protons to get lucky at once, the odds of its happening are small. One can calculate, via quantum theory, that it has almost certainly never occurred, anywhere in the universe, even if—unhappy thought—cannonballs and fortress walls are a cosmic commonplace.

*Carl Friedrich von Weizsäcker, the physicist and later philosopher of science, had shown how the proton-proton reaction would work, but had not calculated its energy output for a solar-mass star.

*To explain why the galaxies are not infinitely far apart in an infinitely old, expanding universe, the theory proposed that hydrogen atoms materialize spontaneously, out of empty space, and thence condense into new stars and galaxies. This hypothesis, though much ridiculed at the time, is not so implausible as it might at first appear. Owing to quantum indeterminacy, “virtual” particles do materialize out of space all the time, though their lifetimes normally are short. In some versions of the big bang theory, notably the “inflationary universe” model, all matter is said to have appeared out of a vacuum—though all at once, and long ago.