Pale Blue Dot: A Vision of the Human Future in Space - Carl Sagan, Ann Druyan (1997)

Chapter 21. TO THE SKY!

The stairs of the sky are let down for him that he may ascend thereon to heaven. O gods, put your arms under the king: raise him, lift him to the sky.

To the sky! To the sky!

—HYMN FOR A DEAD PHARAOH (EGYPT, CA. 2600 B.C.)

When my grandparents were children, the electric light, the automobile, the airplane, and the radio were stupefying technological advances, the wonders of the age. You might hear wild stories about them, but you could not find a single exemplar in that little village in Austria-Hungary, near the banks of the river Bug. But in that same time, around the turn of the last century, there were two men who foresaw other, far more ambitious, inventions—Konstantin Tsiolkovsky, the theoretician, a nearly deaf schoolteacher in the obscure Russian town of Kaluga, and Robert Goddard, the engineer, a professor at an equally obscure American college in Massachusetts. They dreamt of using rockets to journey to the planets and the stars. Step by step, they worked out the fundamental physics and many of the details. Gradually, their machines took shape. Ultimately, their dream proved infectious.

In their time, the very idea was considered disreputable, or even a symptom of some obscure derangement. Goddard found that merely mentioning a voyage to other worlds subjected him to ridicule, and he dared not publish or even discuss in public his long-term vision of flights to the stars. As teenagers, both had epiphanal visions of spaceflight that never left them. “I still have dreams in which I fly up to the stars in my machine,” Tsiolkovsky wrote in middle age. “It is difficult to work all on your own for many years, in adverse conditions without a gleam of hope, without any help.” Many of his contemporaries thought he was truly mad. Those who thought they knew physics better than Tsiolkovsky and Goddard—including The New York Times in a dismissive editorial not retracted until the eve of Apollo 11—insisted that rockets could not work in a vacuum, that the Moon and the planets were forever beyond human reach.

A generation later, inspired by Tsiolkovsky and Goddard, Wernher von Braun was constructing the first rocket capable of reaching the edge of space, the V-2. But in one of those ironies with which the twentieth century is replete, von Braun was building it for the Nazis—as an instrument of indiscriminate slaughter of civilians, as a “vengeance weapon” for Hitler, the rocket factories staffed with slave labor, untold human suffering exacted in the construction of every booster, and von Braun himself made an officer in the SS. He was aiming at the Moon, he joked unselfconsciously, but hit London instead.

Another generation later, building on the work of Tsiolkovsky and Goddard, extending von Braun’s technological genius, we were up there in space, silently circumnavigating the Earth, treading the ancient and desolate lunar surface. Our machines—increasingly competent and autonomous—were spreading through the Solar System, discovering new worlds, examining them closely, searching for life, comparing them with Earth.

This is one reason that in the long astronomical perspective there is something truly epochal about “now”—which we can define as the few centuries centered on the year you’re reading this book. And there’s a second reason: This is the first moment in the history of our planet when any species, by its own voluntary actions, has become a danger to itself—as well as to vast numbers of others. Let me recount the ways:

·        We’ve been burning wood for hundreds of thousands of years, and fossil fuels in quantity for only a few centuries. By the 1960s, there were so many of us burning wood, coal, oil, and natural gas on so large a scale that scientists began to worry about the increasing greenhouse effect; the dangers of global warming began slowly slipping into public consciousness.

·        CFCs were invented in the 1920s and 1930s; in 1974 they were discovered to attack the protective ozone layer. Fifteen years later a worldwide ban on their production was going into effect.

·        Nuclear weapons were invented in 1945. It took until 1983 before the global consequences of thermonuclear war were understood. By 1992, large numbers of warheads were being dismantled.

·        The first asteroid was discovered in 1801. More or less serious proposals to move asteroids around were floated beginning in the 1980s. Recognition of the potential dangers of asteroid deflection technology followed shortly after.

·        Biological warfare has been with us for centuries, but its deadly mating with molecular biology has occurred only lately.

·        We humans have already precipitated extinctions of species on a scale unprecedented since the end of the Cretaceous Period. But only in the last decade has the magnitude of these extinctions become clear, and the possibility raised that in our ignorance of the interrelations of life on Earth we may be endangering our own future.

Look at the dates on this list and consider the range of new technologies currently under development. Is it not likely that other dangers of our own making are yet to be discovered, some perhaps even more serious?

In the littered field of discredited self-congratulatory chauvinisms, there is only one that seems to hold up, one sense in which we are special: Due to our own actions or inactions, and the misuse of our technology, we live at an extraordinary moment, for the Earth at least—the first time that a species has become able to wipe itself out. But this is also, we may note, the first time that a species has become able to journey to the planets and the stars. The two times, brought about by the same technology, coincide—a few centuries in the history of a 4.5-billion-year-old planet. If you were somehow dropped down on the Earth randomly at any moment in the past (or future), the chance of arriving at this critical moment would be less than 1 in 10 million. Our leverage on the future is high just now.
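As a rough check on that figure (a back-of-the-envelope sketch, taking “a few centuries” to be about 400 years out of the planet’s 4.5 billion):

\[
\frac{400\ \text{years}}{4.5\times 10^{9}\ \text{years}} \;\approx\; 9\times 10^{-8} \;<\; \frac{1}{10{,}000{,}000},
\]

that is, less than one chance in ten million of landing, at random, inside the critical window.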

It might be a familiar progression, transpiring on many worlds—a planet, newly formed, placidly revolves around its star; life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges which, at least up to a point, confers enormous survival value; and then technology is invented. It dawns on them that there are such things as laws of Nature, that these laws can be revealed by experiment, and that knowledge of these laws can be used both to save and to take lives, on unprecedented scales. Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.

Since, in the long run, every planetary society will be endangered by impacts from space, every surviving civilization is obliged to become spacefaring—not because of exploratory or romantic zeal, but for the most practical reason imaginable: staying alive. And once you’re out there in space for centuries and millennia, moving little worlds around and engineering planets, your species has been pried loose from its cradle. If they exist, many other civilizations will eventually venture far from home.*

A MEANS HAS BEEN OFFERED of estimating how precarious our circumstances are—remarkably, without in any way addressing the nature of the hazards. J. Richard Gott III is an astrophysicist at Princeton University. He asks us to adopt a generalized Copernican principle, something I’ve described elsewhere as the Principle of Mediocrity. Chances are that we do not live in a truly extraordinary time. Hardly anyone ever did. The probability is high that we’re born, live out our days, and die somewhere in the broad middle range of the lifetime of our species (or civilization, or nation). Almost certainly, Gott says, we do not live in first or last times. So if your species is very young, it follows that it’s unlikely to last very much longer—because if it were to enjoy a vastly longer future, you (and the rest of us alive today) would be extraordinary in living, proportionally speaking, so near the beginning.

What then is the projected longevity of our species? Gott concludes, at the 97.5 percent confidence level, that there will be humans for no more than 8 million years. That’s his upper limit, about the same as the average lifetime of many mammalian species. In that case, our technology neither harms nor helps. But Gott’s lower limit, with the same claimed reliability, is only 12 years. He will give you no better than 40-to-1 odds that humans will still be around by the time babies now alive become teenagers. In everyday life we try very hard not to take risks so large, not to board airplanes, say, with 1 chance in 40 of crashing. We will agree to surgery in which 95 percent of patients survive only if our disease has a greater than 5 percent chance of killing us. Mere 40-to-1 odds on our species surviving another 12 years would be, if valid, a cause for supreme concern. If Gott is right, not only may we never be out among the stars; there’s a fair chance we may not be around long enough even to make the first footfall on another planet.
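The machinery behind these numbers can be sketched in the usual “delta t” form of Gott’s argument; the species age of roughly 200,000 years used below is an assumption of mine, chosen because it reproduces the 8-million-year figure, and does not appear in the text. If the fraction f of the species’ total lifetime T that has already elapsed is equally likely to take any value between 0 and 1, then with 97.5 percent confidence f exceeds 0.025, and so

\[
t_{\text{future}} \;=\; T - t_{\text{past}} \;<\; \frac{0.975}{0.025}\, t_{\text{past}} \;=\; 39\, t_{\text{past}} \;\approx\; 39 \times 200{,}000\ \text{years} \;\approx\; 8\ \text{million years}.
\]

The mirror-image bound, f below 0.975 at the same confidence, gives the lower limit, and 39 to 1 is the source of the roughly 40-to-1 odds quoted above.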

To me, this argument has a strange, vaporish quality. Knowing nothing about our species except how old it is, we make numerical estimates, claimed to be highly reliable, about its future prospects. How? We go with the winners. Those who have been around are likely to stay around. Newcomers tend to disappear. The only assumption is the quite plausible one that there is nothing special about the moment at which we inquire into the matter. So why is the argument unsatisfying? Is it just that we are appalled by its implications?

Something like the Principle of Mediocrity must have very broad applicability. But we are not so ignorant as to imagine that everything is mediocre. There is something special about our time—not just the temporal chauvinism that those who reside in any epoch doubtless feel, but something, as outlined above, clearly unique and strictly relevant to our species’ future chances: This is the first time that (a) our exponentiating technology has reached the precipice of self-destruction, but also the first time that (b) we can postpone or avoid destruction by going somewhere else, somewhere off the Earth.

These two clusters of capabilities, (a) and (b), make our time extraordinary in directly contradictory ways, (a) strengthening and (b) weakening Gott’s argument. I don’t know how to predict whether the new destructive technologies will hasten human extinction more than the new spaceflight technologies will delay it. But since never before have we contrived the means of annihilating ourselves, and never before have we developed the technology for settling other worlds, I think a compelling case can be made that our time is extraordinary precisely in the context of Gott’s argument. If this is true, it significantly increases the margin of error in such estimates of future longevity. The worst is worse, and the best better: Our short-term prospects are even bleaker and—if we can survive the short term—our long-term chances even brighter than Gott calculates.

But the former is no more cause for despair than the latter is for complacency. Nothing forces us to be passive observers, clucking in dismay as our destiny inexorably works itself out. If we cannot quite seize fate by the neck, perhaps we can misdirect it, or mollify it, or escape it.

Of course we must keep our planet habitable—not on a leisurely timescale of centuries or millennia, but urgently, on a timescale of decades or even years. This will involve changes in government, in industry, in ethics, in economics, and in religion. We’ve never done such a thing before, certainly not on a global scale. It may be too difficult for us. Dangerous technologies may be too widespread. Corruption may be too pervasive. Too many leaders may be focused on the short term rather than the long. There may be too many quarreling ethnic groups, nation-states, and ideologies for the right kind of global change to be instituted. We may be too foolish to perceive even what the real dangers are, or that much of what we hear about them is determined by those with a vested interest in minimizing fundamental change.

However, we humans also have a history of making long-lasting social change that nearly everyone thought impossible. Since our earliest days, we’ve worked not just for our own advantage but for our children and our grandchildren. My grandparents and parents did so for me. We have often, despite our diversity, despite endemic hatreds, pulled together to face a common enemy. We seem, these days, much more willing to recognize the dangers before us than we were even a decade ago. The newly recognized dangers threaten all of us equally. No one can say how it will turn out down here.

THE MOON WAS WHERE the tree of immortality grew in ancient Chinese myth. The tree of longevity if not of immortality, it seems, indeed grows on other worlds. If we were up there among the planets, if there were self-sufficient human communities on many worlds, our species would be insulated from catastrophe. The depletion of the ultraviolet-absorbing shield on one world would, if anything, be a warning to take special care of the shield on another. A cataclysmic impact on one world would likely leave all the others untouched. The more of us beyond the Earth, the greater the diversity of worlds we inhabit, the more varied the planetary engineering, the greater the range of societal standards and values—then the safer the human species will be.

If you grow up living underground in a world with a hundredth of an Earth gravity and black skies through the portals, you have a very different set of perceptions, interests, prejudices, and predispositions than someone who lives on the surface of the home planet. Likewise if you live on the surface of Mars in the throes of terraforming, or Venus, or Titan. This strategy—breaking up into many smaller self-propagating groups, each with somewhat different strengths and concerns, but all marked by local pride—has been widely employed in the evolution of life on Earth, and by our own ancestors in particular. It may, in fact, be key to understanding why we humans are the way we are.* This is the second of the missing justifications for a permanent human presence in space: to improve our chances of surviving, not just the catastrophes we can foresee, but also the ones we cannot. Gott also argues that establishing human communities on other worlds may offer us our best chance of beating the odds.

To take out this insurance policy is not very expensive, not on the scale on which we do things on Earth. It would not even require doubling the space budgets of the present spacefaring nations (which, in all cases, are only a small fraction of the military budgets and many other voluntary expenditures that might be considered marginal or even frivolous). We could soon be setting humans down on near-Earth asteroids and establishing bases on Mars. We know how to do it, even with present technology, in less than a human lifetime. And the technologies will quickly improve. We will get better at going into space.

A serious effort to send humans to other worlds is, on a per annum basis, inexpensive enough that it need not compete with urgent social agendas on Earth. If we take this path, streams of images from other worlds will be pouring down on Earth at the speed of light. Virtual reality will make the adventure accessible to millions of stay-on-Earths. Vicarious participation will be much more real than in any earlier age of exploration and discovery. And the more cultures and people it inspires and excites, the more likely it is to happen.

But by what right, we might ask ourselves, do we inhabit, alter, and conquer other worlds? If anyone else were living in the Solar System, this would be an important question. If, though, there’s no one else in this system but us, don’t we have a right to settle it?

Of course, our exploration and homesteading should be enlightened by a respect for planetary environments and the scientific knowledge they hold. This is simple prudence. Of course, exploration and settlement ought to be done equitably and transnationally, by representatives of the entire human species. Our past colonial history is not encouraging in these regards; but this time we are not motivated by gold or spices or slaves or a zeal to convert the heathen to the One True Faith, as were the European explorers of the fifteenth and sixteenth centuries. Indeed, this is one of the chief reasons we’re experiencing such intermittent progress, so many fits and starts in the manned space programs of all nations.

Despite all the provincialisms I complained about early in this book, here I find myself an unapologetic human chauvinist. If there were other life in this solar system, it would be in imminent danger because the humans are coming. In such a case, I might even be persuaded that safeguarding our species by settling certain other worlds is offset, in part at least, by the danger we would pose to everybody else. But as nearly as we can tell, so far at least, there is no other life in this system, not one microbe. There’s only Earthlife.

In that case, on behalf of Earthlife, I urge that, with full knowledge of our limitations, we vastly increase our knowledge of the Solar System and then begin to settle other worlds.

These are the missing practical arguments: safeguarding the Earth from otherwise inevitable catastrophic impacts and hedging our bets on the many other threats, known and unknown, to the environment that sustains us. Without these arguments, a compelling case for sending humans to Mars and elsewhere might be lacking. But with them—and the buttressing arguments involving science, education, perspective, and hope—I think a strong case can be made. If our long-term survival is at stake, we have a basic responsibility to our species to venture to other worlds.

Sailors on a becalmed sea, we sense the stirring of a breeze.

*Might a planetary civilization which has survived its adolescence wish to encourage others struggling with their emerging technologies? Perhaps they would make special efforts to broadcast news of their existence, the triumphant announcement that it’s possible to avoid self-annihilation. Or would they at first be very cautious? Having avoided catastrophes of their own making, perhaps they would fear giving away knowledge of their existence, lest some other, unknown, aggrandizing civilization out there in the dark be looking for Lebensraum or slavering to put down the potential competition. That might be a reason for us to explore neighboring star systems, but discreetly.
     Maybe they would be silent for another reason: because broadcasting the existence of an advanced civilization might encourage emerging civilizations to do less than their best efforts to safeguard their future—hoping instead that someone will come out of the dark and save them from themselves.

*Cf. Shadows of Forgotten Ancestors: A Search for Who We Are, by Carl Sagan and Ann Druyan (New York: Random House, 1992).