Coming of Age in the Milky Way - Timothy Ferris (2003)

Part II. TIME

Chapter 13. THE AGE OF THE EARTH

The antiquity of time is the youth of the world.

—Francis Bacon

What we take for the history of nature is only the very incomplete history of an instant.

—Denis Diderot

           Lyell’s book turned Darwin’s voyage into a trip through time. Darwin began reading it almost immediately, in his bunk, while suffering through the first of the many attacks of seasickness that were to plague him during the next five years—the Beagle, a stout, beamy brig ninety feet long by twenty-four feet wide, was otherwise comfortable, but her hull was rounded and she rolled. He started applying what he called “the wonderful superiority of Lyell’s manner of treating geology”1 as soon as the expedition made landfall in the Cape Verde Islands.

To construct an empirically based theory like Darwin’s account of evolution requires not only observational data but an organizing hypothesis as well. Darwin drew his hypothesis—that the world is old, and is continuing to change today much as it did in the past—largely from Lyell. “The great merit of the Principles,” he wrote, “was that it altered the whole tone of one’s mind, and, therefore, that when seeing a thing never seen by Lyell, one yet saw it partially through his eyes.” Later Darwin allowed that “I feel as if my books came half out of Sir Charles Lyell’s brain.”2

The observations Darwin himself was well suited to provide. “Nothing escaped him,” wrote Dr. Edward Eickstead Lane, who often walked with Darwin at Moor Park.

No object in nature, whether Flower, or Bird, or Insect of any kind, could avoid his loving recognition. He knew about them all … could give you endless information … in a manner so full of point and pith and living interest, and so full of charm, that you could not but be supremely delighted, nor fail to feel … that you were enjoying a vast intellectual treat to be never forgotten.3

During the Beagle expedition Darwin saw the world as few have seen it, in rich diversity and detail, from horseback and muleback and on foot, in cave explorations and excursions across pack ice and blazing sand from Patagonia to Australia to the Keeling Islands of the Indian Ocean. He noted everything, absorbed everything, and collected so many samples of plants and animals that his shipmates wondered aloud whether he was out to sink the Beagle.

In Chile, Darwin found marine fossils on mountaintops twelve thousand feet high and witnessed an earthquake that raised the ground three feet in a matter of minutes—Lyellian evidence that the more or less uniform operation of geological processes can produce changes as dramatic as those ascribed by the theologians to ancient catastrophes. At first he was cautious about jumping to conclusions: Reporting his findings in a letter to his teacher Henslow he wrote that “I am afraid you will tell me to learn my A.B.C.—to know quartz from Feldspar—before I indulge in such speculations.”4 But by the time the Beagle reached the South Pacific, Darwin had four years of rigorous fieldwork under his belt, and had begun to feel more confident of his ability to interpret observations in terms of hypotheses.

There he ventured an ingenious theory of his own, concerning the origin of coral atolls. On a hot fall day in 1835, while the Beagle was making headway from the Galapagos Islands toward Tahiti, he climbed the mainmast and saw the bone-white atolls of the Dangerous Archipelago scattered across the sea like so many lacy hoops. He was impressed by their appearance of frailty: “These low hollow coral islands bear no proportion to the vast ocean out of which they abruptly rise,” he wrote, “and it seems wonderful, that such weak invaders are not overwhelmed, by the all-powerful and never-tiring waves of that great sea, miscalled the Pacific.”5

Darwin theorized that the atolls marked the sites of vanished volcanos.* A new volcano can burst through the sea floor and, in successive eruptions, build itself up into a mountainous island that towers above the sea. When the lava stops flowing and things quiet down, a live coral reef can form on the flanks of the volcano, just below sea level. Here begins Darwin’s contribution: Eventually, he said, the inactive volcano may begin to sink, owing either to erosion or to the slow collapse of the ocean floor. As the old island gradually subsides, live coral continues to build up atop the dead and dying coral below. Eventually the original island vanishes beneath the waves, leaving a ring of coral behind. “The reef constructing corals,” Darwin wrote, “have indeed reared and preserved wonderful memorials of the subterranean oscillations of level; we see in each barrier-reef a proof that the land has there subsided, and in each atoll a monument over an island now lost.”6

The beauty of this theory, from a uniformitarian standpoint, was that the process had to be gradual. Living coral requires sunlight; as Darwin noted, it “cannot live at a greater depth than from twenty to thirty fathoms,” or some 120 to 180 feet.7 Had the islands sunk rapidly, as catastrophism demanded, the coral would have been plunged into the dark depths of the sea before new coral had time to grow on top of it, and no atoll would have been left behind.

When Darwin returned home after five years aboard the Beagle, his father upon first laying eyes on him “turned round to my sisters and exclaimed, ‘Why the shape of his head is quite altered.’”8 This was something of a family in-joke; phrenology and physiognomy were Victorian passions that Robert Darwin shared with Robert Fitz-Roy, captain of the Beagle, who had at first refused to hire on the young Darwin owing to what Fitz-Roy took to be the inauspicious shape of his nose. But the elder Darwin was a sensitive observer, and his remark reflected his awareness that a lot had changed inside his son’s skull as well. This was cause for celebration, for the young Darwin had been an idle and seemingly vacant lad with a passion for riding, hunting, gambling, drinking, and collecting twigs and stones. “You care for nothing but shooting, dogs, and rat-catching, and you will be a disgrace to yourself and all your family,” his father had complained, to the delight of many a future biographer.9 Darwin had dropped out of medical school, disappointing his father, who was a respected physician, and had failed to distinguish himself even in the undemanding theological studies to which he had been dispatched with the intention of preparing him for a sinecure as a country parson.

Darwin’s account of the origin of the atolls held that as a mountain in the sea subsides, live coral continues to build up along what once were its coasts, until the ring of coral is all that remains.

The changes that led to a berth on the Beagle commenced at Cambridge. There Darwin became acquainted with Adam Sedgwick, one of the world’s most accomplished field geologists, took courses in botany from Henslow, who combined an acutely rational mind with a buoyant outlook worthy of Linnaeus, and began to realize that he might, through science, combine his powers of observation with his love for the outdoors and his propensity for collecting. “I discovered,” he wrote years later, “though unconsciously and insensibly, that the pleasure of observing and reasoning was a much higher one than that of skill and sport. The primeval instincts of the barbarian slowly yielded to the acquired tastes of the civilized man.”10

When Darwin left England he was still a creationist. He did “not then in the least doubt the strict and literal truth of every word in the Bible,” he recalled, and he believed, as did most of the geologists and biologists of his day, that all species of life had been created simultaneously and individually.11 He returned home with doubts on this score. He had seen firsthand evidence that the earth is embroiled in continuing change, and he wondered whether species might change, too, and whether their mutability might cause new species to come into existence.

Evolution in itself was not a new idea. As a boy Darwin had read with interest his grandfather Erasmus Darwin’s book Zoonomia, an evolutionary treatise full of robust exclamations over the notion that all life could have evolved from a single ancestor:

Perhaps millions of ages before the commencement of the history of mankind, would it be too bold to imagine, that all warm-blooded animals have arisen from one living filament, which THE GREAT FIRST CAUSE endued with animality …? What a magnificent idea of the infinite power of THE GREAT ARCHITECT! THE CAUSE OF CAUSES! PARENT OF PARENTS!12

Darwin was familiar, too, with the evolutionary views of the French biologist Jean-Baptiste de Lamarck, who maintained that traits acquired by individuals through experience could be passed on to their offspring. In a Lamarckian world, horses who grew strong through racing bequeathed their fleetness to their young, while giraffes, by stretching their necks to reach the leaves on trees, made the next generation of giraffes more long-necked. Lamarckism was replete with moral overtones gratifying to the Victorians, since it implied that parents who worked hard and avoided vice would have children who were genetically disposed toward hard work and clean living. But it foundered on the question of how new species had arisen. It pointed the way to ever better horses and giraffes, but not to the origin of species, and thus left unanswered the question of why different species are found in the fossil record than are living today.

Darwin’s contribution was not simply to argue that life had evolved—he did not even like to use the word “evolution”—but rather to identify the evolutionary mechanism by which new species come into existence. That was why he titled his book The Origin of Species. His theory can be outlined in terms of three premises and a conclusion.

The first premise has to do with variation. It notes that each individual member of any given species is different—that each, as we would say today, has a distinct genetic makeup. Darwin understood this very well. He grew up at a time when animal breeding and plant hybridization were booming in England—his father-in-law, Josiah Wedgwood, the ceramics manufacturer, was a noted sheep breeder, and his father was a pigeon fancier—and he learned from the husbanders to pay attention to the often subtle individual characteristics that they sought to quash or to perpetuate.* Grounded in the specifics of biological variety, Darwin’s thought was a mosaic of the particular: Scores of his publications consist of little notes in the Gardeners’ Chronicle and Agricultural Gazette and the Journal of Horticulture and Cottage Gardener asking such questions as, “Has any one who has saved seed Peas grown close to other kinds observed that the succeeding crop came up untrue or crossed?”13 and, “Is any record kept of the diameter attained by the largest Pansies?”14

Darwin’s second premise is that all living creatures tend to produce more offspring than the environment can support. It’s a cruel world, in which only a fraction of the wolves and turtles and dragonflies that come into existence manage to find sustenance and avoid predators long enough to reproduce. The English economist Thomas Malthus had quantified these harsh facts of life by pointing out that most species reproduce geometrically, while the environment can support no better than a linear increase in their populations.* Darwin read Malthus’s An Essay on the Principle of Population in London in 1838—“for amusement,” he recalled—and the hypothesis of evolution by natural selection began to take form in his mind. “One may say,” he wrote, that “there is a force like a hundred thousand wedges trying [to] force every kind of adapted structure into the gaps in the [e]conomy of nature, or rather forming gaps by thrusting out weaker ones.”15 It was in the combination of the boundless fecundity of living things with the limited resources available to support them that Darwin found a natural, global mechanism that worked constantly to extinguish most variations, preserving only those carried by individuals who managed to survive and reproduce.
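
Malthus’s arithmetic is easily restated in modern terms. What follows is a minimal sketch, with purely illustrative starting values (none of them Malthus’s own), of how a population that doubles each generation must overtake a food supply that grows by only a fixed increment:

```python
# Illustrative sketch of Malthus's argument: geometric (exponential)
# population growth versus arithmetic (linear) growth in subsistence.
# The starting values are arbitrary; only the shapes of the curves matter.

population = 2       # initial breeding population (hypothetical)
food_supply = 100    # rations available at the start (hypothetical)

for generation in range(1, 20):
    population *= 2      # geometric growth: doubles each generation
    food_supply += 100   # arithmetic growth: fixed increment per generation
    if population > food_supply:
        print(f"By generation {generation}, population ({population}) "
              f"outstrips subsistence ({food_supply}).")
        break
```

However the starting numbers are chosen, the exponential curve eventually crosses the linear one; it is that inevitable crossing that drives the “hundred thousand wedges.”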

Which leads to the third premise—that the differences among individuals, combined with the environmental pressures emphasized by Malthus, affect the probability that a given individual will survive long enough to pass along its genetic characteristics. This is the process that Darwin called “natural selection.” White moths fare better in snow, where their coloration serves as camouflage and protects them from predator birds, while brown moths do better in snowless autumnal forests, where their color blends in against the brown tree trunks.* It is in this sense that the “fittest” (the phrase is Herbert Spencer’s) survive, not because they are in some sense superior to their colleagues, but because they better “fit” their environment. When environmental conditions change, the most exquisitely adapted individuals may suddenly find themselves no longer fit; then it is the freaks and misfits who inherit the future.

Darwin’s conclusion was that natural selection leads to the origin of new species. Because the world is constantly in a state of change, nature favors the varied—a community of predominantly white moths is better off if it contains a few dark moths, against a smoggy day—and the geographically dispersed, those who do not keep all their eggs in one basket. As a result, the degree of individual variations found within a given species tends to increase with the passage of time, until some groups have become so different from others that they can no longer mate and produce fertile offspring. At that point, a new species has emerged. As Darwin wrote:

During the modification of the descendants of any one species, and during the incessant struggle of all species to increase in numbers, the more diversified the descendants become, the better will be their chance of success in the battle for life. Thus the small differences distinguishing varieties of the same species, steadily tend to increase, till they equal the greater differences between species …16

Darwin noted that in some ways his theory recalled the biblical image of the Tree of Life. But now the tree, instead of being static as in the creationist view, had come alive and was still growing:

The green and budding twigs may represent existing species; and those produced during former years may represent the long succession of extinct species…. The Tree of Life … fills with its dead and broken branches the crust of the earth, and covers the surface with its ever-branching and beautiful ramifications.17

Critics with a preference for Bible stories complained that natural selection was cold and mechanical. But in Darwin’s eyes it both animated and illuminated the natural world:

When we no longer look at an organic being as a savage looks at a ship, as something wholly beyond his comprehension; when we regard every production of nature as one which has had a long history; when we contemplate every complex structure and instinct as the summing up of many contrivances, each useful to the possessor, in the same way as any great mechanical invention is the summing up of the labor, the experience, the reason, and even the blunders of numerous workmen; when we thus view each organic being, how far more interesting—I speak from experience—does the study of natural history become!18

And he added, in what was to become an evolutionists’ credo:

There is a grandeur in this view of life, with its several powers, having been originally breathed by the Creator into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved.19

Darwin had formulated the essential elements of his theory by the time of his marriage in 1839, and by 1844 had outlined it, in a 230-page essay. Yet he withheld it from publication for the next fifteen years. While the essay lay in his desk drawer, accompanied by strict instructions to publish it in the event of his death, Darwin settled in the country, fathered ten children, corresponded with Lyell and a hundred other scientists, and wrote books—among them a journal of the voyage of the Beagle, an account of his theory of coral reefs, a treatise on volcanos and another on the geology of South America, and a masterful study of barnacles that consumed seven years of work and left him fuming that “I hate a barnacle as no man ever did before.”20 In all, Darwin kept his theory of evolution a secret for nearly as long as Copernicus had concealed his heliocentric cosmology. Why the delay?

One explanation, still sometimes put forward, is that Darwin was constantly ill. This will not wash. Ill he certainly was: From about the time of his marriage and probably long before, he was subject to intense headaches, vomiting, and heart palpitations. He consulted the best doctors in England in search of a cure, had himself hypnotized, and resorted to hydrotherapy, spending winter days wrapped in a cold, wet sheet. “His life,” wrote his son Francis Darwin, “was one long struggle against the weariness and strain of sickness.”21 The ailment was never conclusively diagnosed and has since been attributed to many agencies, from Chagas’ disease, brought on by what Darwin called the “attack (for it deserves no better name) of the Benchuca, the great black bug of the pampas” on March 26, 1835, to the psychosomatic effects of internal conflict between this former candidate for the priesthood and the anticlerical implications of his own theory. A more likely if less colorful possibility is that he suffered from severe allergies. But in any case, illness alone cannot explain why Darwin suppressed the theory of natural selection, since during those same years he wrote prolifically on other subjects.

It is much more likely that Darwin feared the storm of opposition he knew his ideas would provoke. He was a gentle, straightforward, almost childishly simple man, habitually respectful of the outlook of others and disinclined toward disputation. His theory, he knew, would draw down fire, not only from the clergy but from many of his fellow scientists as well.

The religious opposition promised to be formidable. Darwin did not have to strain his imagination to foresee what the orthodoxy would make of his assertion that animals and men are kin and that chance mutations drive evolution; to advocate such a thing, he told his friend Joseph Hooker, would be like admitting to a murder. (The murder of Adam, it was to be called.) Nor did he need look beyond England to envision what lay in store for him once word of the theory got out. When William Lawrence, later president of the Royal College of Surgeons, suggested that man evolves through the inheritance of innate rather than acquired traits, the Lord Chancellor declared his book contrary to Scriptures and denied it copyright. The legendary erudition of Benjamin Jowett of Oxford is recalled in a famous Balliol College masque’s quatrain:

First come I; my name is Jowett.
There’s no knowledge but I know it.
I am the master of this college;
What I don’t know isn’t knowledge.

But when Jowett in 1855 published a controversial interpretation of the Epistles of Saint Paul, he was accused of heresy and his salary was frozen. Darwin, puttering happily with wormstones and petunias in gardens that his insight rendered luminescent as Eden, was not eager to see the day when a thousand country parsons would turn his name into a synonym for the antichrist.

The scientific opposition arose in large measure from professional disdain for the very concept of evolution, which had long been an enthusiasm of ecstatics and occultists devoted to seances and tales of fairies flitting across the moors at dawn. To advocate so amateurish a theory was to invite learned ridicule. When in 1844 a theory of evolution was championed in the anonymous and enormously popular Vestiges of the Natural History of Creation, the book was pilloried by such authorities as the Cambridge mineralogist William Whewell (of whom it was said that “science was his forte, omniscience his foible”), the astronomer John Herschel, and the geologist Adam Sedgwick, who devoted eighty-five pages of the Edinburgh Review to its demolition (and who, indeed, was to subject Darwin’s book to comparable scorn once it finally appeared).

Against these forces Darwin, like Copernicus, would have to defend a theory that he knew to be incomplete, for neither he nor anyone else understood the micromechanism of heredity. “The laws governing inheritance,” as Darwin admitted, “are quite unknown.”22 Missing was proof of the existence of the fundamental hereditary unit, the biological quantum—in short, the gene. Without the stability imparted by genes, innovative mutations would be diluted away like drops of blood in the ocean, before they had time to spread to any significant numbers of individuals. In such a situation natural selection might occur, but it could scarcely account for the origin of species.

The first evidence of the existence of genes did not appear until 1866, eight years after Darwin was obliged to publish The Origin of Species, when the Moravian monk Gregor Mendel published the results of his extensive experiments with green peas in the garden of an Augustinian monastery—results that demonstrated the requisite persistence of the quanta of heredity—and Mendel’s findings were in any event universally ignored until attention was called to them in 1900, by which time Darwin was dead. Darwin sought to make up the deficiency by proposing a theory of “pangenesis” to account for the transmission of hereditary traits, but he remained sensitive to his vulnerability on this count. As he once remarked, he appreciated the shortcomings of his theory better than did most of its censurers.

It was, then, a reluctant Darwin who at Lyell’s urging finally began writing an exhaustive account of the origin of species through natural selection. He intended it to be a massive tome, the completion of which could safely be expected to take years; perhaps, like Copernicus, he would not have to live to read the reviews. But then, on June 3, 1858, when he had written only the first few chapters, everything changed. A letter bearing the postmark of the Malay Archipelago arrived at Darwin’s home. It came from the naturalist Alfred Russel Wallace. It contained the draft of an essay by Wallace titled, “On the Tendency of Varieties to Depart Indefinitely from the Original Type.” Wallace asked for Darwin’s reactions to the paper.

Darwin had a reaction, all right, and it was one of horrified astonishment: The theory outlined in the essay was identical to Darwin’s own. “I never saw a more striking coincidence,” he wrote to Lyell that afternoon.23

Wallace, like Darwin, was an indefatigable collector of plants and insects.* He, too, had been impressed by reading Lyell’s book, had long pondered “the question of how changes of species could have been brought about,” and had hit upon the answer after reading Malthus. He was, he recounted, recovering from malaria when “it suddenly flashed upon me that … in every generation the inferior would inevitably be killed off and the superior would remain—that is, the fittest would survive” (Wallace’s italics).25 Wallace drafted the theory in three nights and sent it by the next mail to Darwin, who was known in scientific circles to have some sympathy for the hypothesis of evolution.

Darwin’s initial inclination was to take the high road, renouncing his priority and giving all the credit to Wallace. “I should rather burn my whole book, than that he or any other man should think that I had behaved in a paltry spirit,” he told Lyell.26 But Lyell and Hooker prevailed upon Darwin instead to publish a joint announcement of his and Wallace’s conclusions, and then to get to work writing a briefer account of his theory for prompt publication in book form. This he did, rushing to complete what he called an “abstract” of his theory within a year. This was The Origin of Species by Means of Natural Selection.

More than two hundred thousand words in length, the Origin reads less like an abstract than like a steady, not to say relentless, recounting of specifics: The incidence of beetle spoilage in American purple plums; the size of the stem of the Swedish turnip; the exact number of tail feathers sported by the trumpeter pigeon; the tactics employed by male alligators when they fight over female alligators. The book is objective to the point of bloodlessness; here are to be found no ecstatic outbursts comparable to Copernicus’s tributes to the sun, no philosophizing on a level with Newton’s descriptions of the workings of God, none of the fiery contentiousness of Galileo’s dialogues. Instead there is a constant amassing of factual detail, gradual as a silt deposit hardening into sedimentary rock.*

Indeed, the book was so detailed and modest that it struck many readers as self-evident. This was a source of strength, in that nothing so persuades a man to accept a novel idea as the sense that he already knew it to be true. (“How extremely stupid of me not to have thought of that,” said Thomas Huxley, previously an evolutionary skeptic, upon reading the Origin.27) Many scientists and scholars soon came around to Darwin’s point of view—Hooker at once, the botanist Asa Gray soon thereafter, and Lyell, remarkably for a public figure so prominently established as an antievolutionist, only five years later—though more than a few of them would have agreed with Whitehead, who in a conversation in 1944 declared that “Darwin is truly great, but he is the dullest great man I can think of.”29 Darwin replied to contemporary criticism in this vein with his customary restraint:

Some of my critics have said, “Oh, he is a good observer, but he has no power of reasoning.” I do not think that this can be true, for the Origin of Species is one long argument from the beginning to the end, and it has convinced not a few able men. No one could have written it without having some power of reasoning.30

But he conceded that, though the study of living things had never lost its fascination for him, the years of drudgery had taken a toll on his nonscientific interests: Neither music nor literature nor even “fine scenery” held much pleasure for him any longer; he wrote in his Autobiography: “My mind seems to have become a kind of machine for grinding general laws out of large collections of facts.”31

The religious reaction was every bit as vehement as Darwin had feared, but much of it was so florid, compared to Darwin’s quiet reasonableness, that it flowed around the Origin like water around a rock. Bishop Wilberforce of Oxford set the tone for the long burlesque that was to follow. A passionate lecturer, called “Soapy Sam” after his habit of rubbing his hands together as he preached, Wilberforce condemned Darwin’s theory as “a dishonoring view of Nature…. absolutely incompatible with the word of God.” A prisoner of his own passion, he soon overplayed his hand. The scene was a meeting of the British Association for the Advancement of Science, at Oxford on June 30, 1860. Taking part in the discussion was Thomas Huxley, who loved a good argument and styled himself “Darwin’s bulldog” for his tireless sallies against the opponents of evolution. With a sarcastic smile, Wilberforce turned to Huxley and asked “was it through his grandfather or his grandmother that he [Huxley] claimed his descent from a monkey?”32 “The Lord hath delivered him into mine hands,” whispered Huxley to his friend Benjamin Brodie, seated beside him. Then he rose, savoring the moment, and replied:

A man has no reason to be ashamed of having an ape for his grandfather. If there were an ancestor whom I should feel shame in recalling it would rather be a man—a man of restless and versatile intellect—who, not content with success in his own sphere of activity, plunges into scientific questions with which he has no real acquaintance, only to obscure them by an aimless rhetoric, and distract the attention of his hearers from the real point at issue by eloquent digressions and skilled appeals to religious prejudice.33

The audience broke into laughter. In the general excitement that followed, one Lady Brewster fainted and had to be carried from the hall, while Captain Fitz-Roy of the Beagle marched up and down the aisles, holding a Bible aloft and chanting, “The Book, the Book!”34 The drama of Darwinism versus Christian fundamentalism went on to play to packed houses in the Dayton, Tennessee, courthouse where Clarence Darrow defended John Scopes, and road-show productions were still drawing crowds to the so-called “creation science” trials of the 1980s. One such case reached the Supreme Court of the United States, which voted in 1987 that the state of Louisiana did not have the right to require that creationism be taught alongside evolution in the public schools (Chief Justice William Rehnquist dissenting). But science is not rhetoric, and the evolutionary debates, though entertaining, were always more show than substance.

The ascent of Darwin’s theory brought new vitality to the question of the age of the earth. Darwinism was a time bomb: For species to have evolved to their present-day diversity through the slow workings of random mutation and natural selection required that the duration of the past be much longer than the six thousand or so years suggested by the Bible. Darwin grasped this nettle firmly: “He who … does not admit how vast have been the past periods of time may at once close this volume,” he wrote in the Origin.35

But while Darwin’s evolution and Lyell’s geology implied that the earth was old, they did not prove it. That issue was left to the physicists, who approached the question of the age of the earth by way of thermodynamics, the developing science of the transfer of heat. The earth, as coal miners know, is hotter in its depths than at the surface. Therefore it must be radiating heat into space, rather than receiving all its warmth from the sun. (Were it the other way around, the earth’s surface would be hotter than its interior.) If, then, one assumed that the earth began as a molten ball and has been cooling ever since, and if one could determine the rate at which it is cooling, it ought to be possible to calculate its age.

The first significant experiments along these lines had been conducted in the 1770s by Buffon, an early champion of deep time. In a thermally stable basement laboratory, Buffon fashioned little spheres one to five inches in diameter from suitably earthy materials, heated them, determined how long it took them to cool, and extrapolated the results to the much larger sphere of the earth. He made his measurements by sitting in the dark and observing how long it took a white-hot sphere to fade to invisibility, or by touching the spheres with his hand until they seemed to have returned to room temperature. The results, though admittedly crude, yielded a geochronology generous by the standards of the day: Buffon calculated that the earth was some 75,000 to 168,000 years old, and he guessed privately that the true figure was probably closer to half a million years. This, however, was still far too little time for Darwinian evolution to have brought life on Earth from a single-celled organism to the present-day world of orchids and adders and chimpanzees. That feat would have required billions of years.

Thermodynamics had advanced a long way by the time Darwin came on the scene. Thanks in large measure to its important practical applications in the design of steam engines, the study of heat attracted some of the most intrepid intellects of the nineteenth century—men of the stature of Lord Kelvin, Hermann von Helmholtz, Rudolf Clausius, and Ludwig Boltzmann. But when all this brainpower was brought to bear upon the question of geochronology, the verdict was bad news for Darwin and the uniformitarian geologists.

The titans of physics chose to focus less on the earth than on that suitably grander and more luminous body, the sun. Helmholtz was helpful: An able philosopher as well as a scientist, he was amused to read that the late Immanuel Kant (with whom he disagreed over just about everything) had thought that the sun was “a flaming body, and not a mass of molten and glowing matter.”36 This Helmholtz the physicist knew to be wrong; were the sun simply burning like a giant campfire, it would have run out of fuel in but a thousand years. Casting about for an alternative source of solar energy, Helmholtz hit upon gravitational contraction: The material of the sun, he reasoned, settles in toward the center, releasing gravitational potential energy in the form of heat. This, the most efficient solar energy-production mechanism that could be envisioned by nineteenth-century physics, yielded an age for the sun of some twenty to forty million years—a lot longer than the chronology of Buffon or the Bible, though still not enough to satisfy the Darwinians.
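
In modern terms, Helmholtz’s contraction argument amounts to dividing the gravitational energy released by the sun’s collapse, which is of order GM²/R, by the rate at which sunlight carries energy away. The sketch below uses present-day values for the solar mass, radius, and luminosity—figures supplied here for illustration, not drawn from the passage:

```python
# Order-of-magnitude version of the Helmholtz-Kelvin contraction estimate:
# gravitational energy available ~ G*M^2/R, divided by the solar luminosity.
# Exact prefactors depend on the assumed density profile, so treat the
# result as a timescale, not a precise age.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
R_SUN = 6.957e8     # solar radius, m
L_SUN = 3.828e26    # solar luminosity, W
SECONDS_PER_YEAR = 3.156e7

gravitational_energy = G * M_SUN**2 / R_SUN        # roughly 4e41 joules
lifetime_years = gravitational_energy / L_SUN / SECONDS_PER_YEAR

print(f"Contraction could power the sun for ~{lifetime_years/1e6:.0f} million years")
```

The result, roughly thirty million years, falls squarely within the twenty-to-forty-million-year range that Helmholtz announced.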

The question of the age of the sun then was taken up by Lord Kelvin, an imposing figure by any intellectual standard. Born in Belfast in 1824, Kelvin (né William Thomson) was admitted to the University of Glasgow at the age of ten, had published his first paper in mathematics before he was seventeen, and was named professor of natural philosophy at Glasgow at age twenty-two. An adept musician and an expert navigator as well as a distinguished mathematician and physicist and inventor, Kelvin was a hard man with whom to differ. Moreover, his forte was heat: The Kelvin scale of absolute temperature is named after him, and he was instrumental in identifying the first law of thermodynamics (that energy is conserved in all interactions, meaning that no machine can produce more energy than it consumes) and the second law (that some energy must always be lost in the process). When Kelvin declared the verdict of thermodynamics as to the question of the age of the sun, few mortals, and fewer biologists, could expect both to differ with him and to prevail.

Kelvin calculated that the sun, releasing heat by virtue of gravitational contraction, could not have been shining for more than five hundred million years. This was a disaster for Darwin. “I take the sun much to heart,” he wrote to Lyell in 1868. “I have not as yet been able to digest the fundamental notion of the shortened age of the sun and earth,” he wrote to Wallace three years later.37 Huxley the bulldog dutifully debated Kelvin on geochronology, at a meeting of the Geological Society of London, but Kelvin was no Bishop Wilberforce and Huxley got nowhere. Clearly either Darwin’s theory or Kelvin’s calculations were wrong. Darwin died not knowing which.

To their credit, both Darwin and Kelvin allowed that something important might be missing from their considerations. As Darwin put it, pleading his case in a late edition of the Origin, “We are confessedly ignorant; nor do we know how ignorant we are.”38 Kelvin, for his part, admitted that his assessments of the age of the sun depended upon the accuracy of Helmholtz’s hypothesis that solar energy came from the alleged contraction of the sun. He remarked, in one of the most pregnant parenthetical phrases in the history of physics, that “(I do not say there may not be laws which we have not discovered.)”39

It was in conceding that their views might be incomplete that both men proved most prophetic. What they lacked was an understanding of two of the fundamental forces of nature, known collectively as nuclear energy. It is the decay of radioactive material—via the weak nuclear force—that has kept the earth warm for nearly five billion years. It is nuclear fusion—which also involves the strong force—that has powered the sun for as long, and that promises to keep it shining for another five billion years. With the discovery of nuclear energy the time-scale debate was resolved in Darwin’s favor, the doors to nuclear physics swung open, and the world lost its innocence.

The nuclear age may be said to have dawned on November 8, 1895, in a laboratory at the University of Würzburg, at the hands of the physicist Wilhelm Conrad Röntgen. Röntgen was experimenting with electricity in a partially evacuated glass tube. The laboratory was dark. He noticed that a screen across the room, coated with barium platinocyanide, glowed in the dark whenever he turned on the power to the tube, as if light from the tube were reaching the screen. But ordinary light could not be responsible: The tube was enclosed in black cardboard and no light could escape it. Puzzled, Röntgen placed his hand between the tube and the screen and was startled to see the bones in his hand exposed, as if the flesh had become translucent. Röntgen had detected “X rays”—high-energy photons generated by electron transitions at the inner shells of atoms.*

Among the scores of physicists who took notice of Röntgen’s detection of X rays was Henri Becquerel, a third-generation student of phosphorescence who shared with his father and grandfather a fascination with anything that glowed in the dark. Becquerel’s discovery, like Röntgen’s, was accidental, though both illustrated the validity of Louis Pasteur’s dictum that chance favors the prepared mind. Between experiments in his laboratory in Paris, Becquerel stored some photographic plates wrapped in black paper in a drawer. A piece of uranium happened to be sitting on top of them. When Becquerel developed the plates several days later, he found that they had been imprinted, in total darkness, with an image of the lump of uranium. He had detected radioactivity, the emission of subatomic particles by unstable atoms like those of uranium—which, Becquerel noted in announcing his results in 1896, was particularly radioactive. His work helped initiate a path of research that would lead, eventually, to Einstein’s realization that every atom is a bundle of energy.

At McGill University in Montreal, the energetic experimentalist Ernest Rutherford, a great bear of a man whose roaring voice set his assistants and their laboratory glassware trembling, found that radioactive materials can produce surprisingly large amounts of energy. A lump of radium, Rutherford established, generates enough heat to melt its weight in ice every hour, and can continue to do so for a thousand years or more. Other radioactive elements last even longer; some keep ticking away at an almost undiminished rate for billions of years.

This, then, was the answer to Kelvin, and one that spelled deliverance for the late Charles Darwin: The earth stays warm because it is heated by radioactive elements in the rocks and molten core of the globe. As Rutherford wrote:

The discovery of the radioactive elements, which in their disintegration liberate enormous amounts of energy, thus increases the possible limit of the duration of life on this planet, and allows the time claimed by the geologist and biologist for the process of evolution.40

Understandably pleased with this conclusion, the young Rutherford rose to address a meeting of the Royal Institution, only to find himself confronted by the one scientist in the world his paper could most deeply offend:

I came into the room, which was half dark, and presently spotted Lord Kelvin in the audience and realized that I was in for trouble at the last part of my speech dealing with the age of the earth, where my views conflicted with his. To my relief, Kelvin fell fast asleep, but as I came to the important point, I saw the old bird sit up, open an eye and cock a baleful glance at me! Then a sudden inspiration came, and I said Lord Kelvin had limited the age of the earth, provided no new source [of energy] was discovered. That prophetic utterance refers to what we are now considering tonight, radium! Behold! the old boy beamed upon me.41

Radioactive materials not only testified to the antiquity of the earth, but provided a way of measuring it as well. Rutherford’s biographer A. S. Eve recounts an exchange that signaled this new insight:

About this time Rutherford, walking in the Campus with a small black rock in his hand, met the Professor of Geology. “Adams,” he said, “how old is the earth supposed to be?” The answer was that various methods lead to an estimate of one hundred million years. “I know,” said Rutherford quietly, “that this piece of pitchblende is seven hundred million years old.”42

What Rutherford had done was to determine the rate at which the radioactive radium and uranium in the rock gave off what he called alpha particles, which are the nuclei of helium atoms, and then to measure the amount of helium in the rock. The result, seven hundred million years, constituted a reasonably reliable estimate of how long the radioactive materials had been in there, emitting helium.
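
The logic of this helium clock can be put in a few lines. If alpha particles are emitted at a steady, measurable rate, and the resulting helium stays trapped in the rock, the age is just the accumulated helium divided by the production rate. The quantities below are hypothetical stand-ins, chosen only to echo the seven-hundred-million-year scale of the pitchblende estimate, not Rutherford’s actual measurements:

```python
# Sketch of Rutherford's helium-accumulation clock. Both numbers are
# hypothetical placeholders, not Rutherford's data; the method is simply
#     age = helium atoms trapped / alpha-emission rate.

helium_atoms_trapped = 7.0e17       # total helium measured in the rock (assumed)
alpha_emissions_per_year = 1.0e9    # current production rate (assumed)

age_years = helium_atoms_trapped / alpha_emissions_per_year
print(f"Estimated age: {age_years:.1e} years")   # -> 7.0e+08, some 700 million years
```

Because helium slowly leaks out of most minerals, a clock of this kind tends to read young, which is one reason later workers came to prefer comparing parent and daughter isotopes directly.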

Rutherford had taken a first step toward the science of radiometric dating. Every radioactive substance has a characteristic half-life, during which time half of the atoms in any given sample of that element will decay into another, lighter element. By comparing the abundance of the original (or “parent”) isotope with that of the decay product (or “daughter”), it is possible to age-date the stone or arrowhead or bone that contains the parent and daughter isotopes.

Carbon-14 is especially useful in this regard, since every living thing on Earth contains carbon. The half-life of carbon-14 is 5,570 years, meaning that after 5,570 years half of the carbon-14 atoms in any given sample will have decayed into atoms of nitrogen-14. If we examine, say, the remains of a Navaho campfire and find that half the carbon-14 in the charred remains of the burnt logs has decayed into nitrogen-14, we can conclude that the fire was built 5,570 years ago. If three quarters of the carbon has turned to nitrogen, then the logs are twice as old—11,140 years—and so forth. After about five half-lives the amount of remaining parent isotope generally has become too scanty to be measured reliably, but geologists have recourse to other, more long-lived radioactive elements. Uranium-238, for one, has a half-life of over 4 billion years, while the half-life of rubidium-87 is a methuselian 47 billion years.
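
The arithmetic in the preceding paragraphs generalizes neatly: if a fraction f of the parent isotope survives, the elapsed time is the half-life multiplied by log₂(1/f). A minimal sketch, using the chapter’s figure of 5,570 years for carbon-14:

```python
import math

# Radiometric age from the surviving fraction of a parent isotope:
#     t = half_life * log2(1 / fraction_remaining)

def radiometric_age(fraction_remaining: float, half_life_years: float) -> float:
    """Years elapsed, given the fraction of parent isotope still present."""
    return half_life_years * math.log2(1.0 / fraction_remaining)

C14_HALF_LIFE = 5570.0   # years, the value quoted in the text

print(radiometric_age(0.50, C14_HALF_LIFE))   # 5570.0  -> half the carbon-14 decayed
print(radiometric_age(0.25, C14_HALF_LIFE))   # 11140.0 -> three quarters decayed
```

The same function serves for uranium-238 or rubidium-87; only the half-life changes.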

In practice, radiometric dating is a subtle process, fraught with potential error. First one has to ascertain when the clock started. In the case of carbon-14, this is usually when the living tissue that contained it died. Carbon-14 is constantly being produced by the collision of high-energy subatomic particles from space with atoms in the earth’s upper atmosphere. Living plants and animals ingest carbon-14, along with other forms of carbon, only so long as they live. The scientist who comes along years later to age-date their remains is, therefore, reading a clock that started when the host died. The reliability of the process depends upon the assumption that the amount of ambient carbon-14 in the environment at the time was roughly the same as it is today. If not—if, for instance, a storm of subatomic particles from space happened to increase the amount of carbon-14 around thousands of years ago—then the radiometric date will be less accurate. In the case of inorganic materials, one may be dealing with radioactive atoms older than the earth itself; their clocks may have started with the explosion of a star that died when the sun was but a gleam in a nebular eye. But if such intricacies complicate the process of radiometric age-dating they also hint at the extraordinary range of its potential applications, in fields ranging from geology and geophysics to astrophysics and cosmology.

The process of radiometrically age-dating geological strata got under way only ten years after the discovery of radioactivity itself, when the young British geologist Arthur Holmes, in his book The Age of the Earth, correlated the ages of uranium-bearing igneous rocks with those of adjacent fossil-bearing sedimentary strata. By the 1920s it was becoming generally accepted by geologists, physicists, and astronomers that the earth is billions of years old and that radiometric dating presents a reliable way of measuring its age. Since then, ancient rocks in southwestern Greenland have been radiometrically age-dated at 3.7 billion years, meaning that the crust of the earth can be no younger than that. Presumably the planet is older still, having taken time to cool from a molten ball and form a crust. Moon rocks collected by the Apollo astronauts are found to be nearly 4.6 billion years old, about the same age as meteorites—chunks of rock that once were adrift in space and since have been swept up by the earth in its orbit around the sun. It is upon this basis that scientists generally declare the solar system to be some 5 billion years old, a finding that fits well with the conclusions of astrophysicists that the sun is a normal star about halfway through a 10-billion-year lifetime.

When nuclear fission, the production of energy by splitting nuclei, was detailed by the German chemists Otto Hahn and Fritz Strassmann in 1938, and nuclear fusion, which releases energy by combining nuclei, was identified by the American physicist Hans Bethe the following year, humankind could at last behold the mechanism that powers the sun and the other stars. In the general flush of triumph, few paid attention to the dismaying possibility that such overwhelming power might be set loose with violent intent on the little earth. Einstein, for one, assumed that it would be impossible to make a fission bomb; he compared the problem of inducing a chain reaction to trying to shoot birds at night in a place where there are very few birds. He lived to learn that he was wrong. The first fission (or “atomic”) bomb was detonated in New Mexico on July 16, 1945, and two more were dropped on the cities of Hiroshima and Nagasaki a few weeks later. The first fusion (or “hydrogen”) bomb, so powerful that it employed a fission weapon as but its detonator, was exploded in the Marshall Islands on November 1, 1952.

A few pessimists had been able to peer ahead into the gloom of the nuclear future, though their words went largely unheeded at the time. Pierre Curie had warned of the potential hazards of nuclear weapons as early as 1903. “It is conceivable that radium in criminal hands may become very dangerous,” said Curie, accepting the Nobel Prize.* “… Explosives of great power have allowed men to do some admirable works. They are also a terrible means of destruction in the hands of the great criminals who lead nations to war.”43 Arthur Stanley Eddington, guessing that the release of nuclear energy was what powered the stars, wrote in 1919 that “it seems to bring a little nearer to fulfillment our dream of controlling this latent power for the well-being of the human race—or for its suicide.”44 These and many later admonitions notwithstanding, the industrialized nations set about building bombs just as rapidly as they could, and by the late 1980s there were over fifty thousand nuclear weapons in a world that had grown older if little wiser. Studies indicated that the detonation of as few as 1 percent of these warheads would reduce the combatant societies to “medieval” levels, and that climatic effects following a not much larger exchange could lead to global famine and the potential extinction of the human species. The studies were widely publicized, but years passed and the strategic arsenals were not reduced.

It was through the efforts of the bomb builders that Darwin’s century-old theory of the origin of coral atolls was at last confirmed. Soon after World War II, geologists using tough new drilling bits bored nearly a mile down into the coral of Eniwetok Atoll and came up with volcanic rock, just as Darwin had predicted. The geologists’ mission, however, had nothing to do with evolution. Their purpose was to determine the structure and strength of the atoll before destroying it, in a test of the first hydrogen bomb. When the bomb was detonated, its fireball vaporized the island on which it had been placed, tore a crater more than a mile across in the ocean floor, and sent a cloud of freshly minted radioactive atoms wafting across the paradisiacal islands downwind. President Truman in his final State of the Union message declared that “the war of the future would be one in which Man could extinguish millions of lives at one blow, wipe out the cultural achievements of the past, and destroy the very structure of civilization.

“Such a war is not a possible policy for rational men,” Truman added.45 Nonetheless, each of the next five presidents who succeeded him in office found it advisable to threaten the Soviets with the use of nuclear weapons. As the British physicist P. M. S. Blackett observed, “Once a nation pledges its safety to an absolute weapon, it becomes emotionally essential to believe in an absolute enemy.”46

Einstein, sad-eyed student of human tragedy, closed the circle of evolution, thermodynamics, and nuclear fusion in a single sentence. “Man,” he said, “grows cold faster than the planet he inhabits.”47

*Though Darwin, echoing Newton, characterized much of his research as purely inductive—“I worked on true Baconian principles,” he said of his account of evolution, “and without any theory collected facts on a wholesale scale”—this has always been a difficult claim to justify scrupulously, and Darwin formulated his theory of coral atoll formation while still in South America, before he ever laid eyes on a real atoll.

*The rise in animal breeding was spurred on by the growing industrialization of England, which brought working people in from the country, where they could keep a few barnyard animals of their own, to the cities, where they were fed from ever larger herds bred to maximize profits. More generally, the advent of Darwinism itself might be said to have been fostered by a certain distancing of human beings from the creatures they studied; it was only once people stopped cohabiting with animals that they began to entertain the idea that they were the animals’ relations.

*Malthus, incidentally, appears to have been inspired in part by reading Darwin’s grandfather Erasmus. It’s a small world, or was so in Victorian England.

*A striking example of adaptive color change occurred among British peppered moths in the vicinity of Manchester. In the eighteenth century, all such moths collected were pallid in color; in 1849 a single black moth was caught in the vicinity, and by the 1880s the black moths were in the majority. Why? Because industrial pollution had blackened tree trunks in the vicinity, robbing the original moths of their camouflage while bestowing its benefits upon the few black moths there. Once pollution-control ordinances came into effect, the soot slowly washed from the tree trunks and the pale peppered moth population rebounded.

*It had been Wallace’s misfortune, however, to lose his specimens in a fire at sea. Watching from an open lifeboat as the blazing ship sank beneath the waves, Wallace recalled, “I began to feel the greatness of my loss. … I had not one specimen to illustrate the unknown lands I had trod, or to call back the recollection of the wild scenes I had beheld! But such regrets were vain … and I tried to occupy myself with the state of things which actually existed.”24

*Readers who tire of the details they encounter in the Origin may take comfort in considering that until he was interrupted by Wallace’s letter, Darwin had intended to include a great many more of them. “To treat this subject properly, a long catalogue of dry facts ought to be given,” he wrote, in Chapter Two of the Origin, “but these I shall reserve for a future work.”28 He kept this promise in his exhaustive, not to say exhausting, book The Variation of Animals and Plants Under Domestication.

*Not long before Röntgen’s discovery, Frederick Smith at Oxford was informed by an assistant that photographic plates stored near a cathode-ray tube were being fogged; but Smith, rather than pondering the matter, simply ordered that the plates be kept somewhere else.

*Curie’s wife Marie, winner of two Nobel Prizes, died of the effects of radiation exposure sustained during her years of experimental research into radioactive isotopes. Her laboratory apparatus and even her cookbooks at home, inspected fifty years later, were found to be contaminated with dangerous levels of radioactivity.