From Eternity to Here: The Quest for the Ultimate Theory of Time - Sean Carroll (2010)

Part I. TIME, EXPERIENCE, AND THE UNIVERSE

Chapter 2. THE HEAVY HAND OF ENTROPY

Eating is unattractive too . . . Various items get gulped into my mouth, and after skillful massage with tongue and teeth I transfer them to the plate for additional sculpture with knife and fork and spoon. That bit’s quite therapeutic at least, unless you’re having soup or something, which can be a real sentence. Next you face the laborious business of cooling, of reassembly, of storage, before the return of these foodstuffs to the Superette, where, admittedly, I am promptly and generously reimbursed for my pains. Then you tool down the aisles, with trolley or basket, returning each can and packet to its rightful place.

—Martin Amis, Time’s Arrow18

Forget about spaceships, rocket guns, clashes with extraterrestrial civilizations. If you want to tell a story that powerfully evokes the feeling of being in an alien environment, you have to reverse the direction of time.

You could, of course, simply take an ordinary story and tell it backward, from the conclusion to the beginning. This literary device, known as “reverse chronology,” appears at least as early as Virgil’s Aeneid. But to really jar readers out of their temporal complacency, you want to have some of your characters experience time backward. The reason it’s jarring, of course, is that all of us nonfictional characters experience time in the same way; that’s due to the consistent increase of entropy throughout the universe, which defines the arrow of time.

THROUGH THE LOOKING GLASS

F. Scott Fitzgerald’s short story “The Curious Case of Benjamin Button”—more recently made into a film starring Brad Pitt—features a protagonist who is born as an old man and gradually grows younger as time passes. The nurses of the hospital at which Benjamin is born are, understandably, somewhat at a loss.

Wrapped in a voluminous white blanket, and partly crammed into one of the cribs, there sat an old man apparently about seventy years of age. His sparse hair was almost white, and from his chin dripped a long smoke-coloured beard, which waved absurdly back and forth, fanned by the breeze coming in at the window. He looked up at Mr. Button with dim, faded eyes in which lurked a puzzled question.

“Am I mad?” thundered Mr. Button, his terror resolving into rage. “Is this some ghastly hospital joke?”

“It doesn’t seem like a joke to us,” replied the nurse severely. “And I don’t know whether you’re mad or not—but that is most certainly your child.”

The cool perspiration redoubled on Mr. Button’s forehead. He closed his eyes, and then, opening them, looked again. There was no mistake—he was gazing at a man of threescore and ten—a baby of threescore and ten, a baby whose feet hung over the sides of the crib in which it was reposing.19

No mention is made in the story of what poor Mrs. Button must have been feeling around this time. (In the movie version, at least the newborn Benjamin is baby-sized, albeit old and wrinkled.)

Because it is so bizarre, having time run backward for some characters in a story is often played for comic effect. In Lewis Carroll’s Through the Looking-Glass, Alice is astonished upon first meeting the White Queen, who lives in both directions of time. The Queen is shouting and shaking her finger in pain:

“What IS the matter?” [Alice] said, as soon as there was a chance of making herself heard. “Have you pricked your finger?”

“I haven’t pricked it YET,” the Queen said, “but I soon shall—oh, oh, oh!”

“When do you expect to do it?” Alice asked, feeling very much inclined to laugh.

“When I fasten my shawl again,” the poor Queen groaned out: “the brooch will come undone directly. Oh, oh!” As she said the words the brooch flew open, and the Queen clutched wildly at it, and tried to clasp it again.

“Take care!” cried Alice. “You’re holding it all crooked!” And she caught at the brooch; but it was too late: the pin had slipped, and the Queen had pricked her finger.20

Carroll (no relation21) is playing with a deep feature of the nature of time—the fact that causes precede effects. The scene makes us smile, while serving as a reminder of how central the arrow of time is to the way we experience the world.

Time can be reversed in the service of tragedy, as well as comedy. Martin Amis’s novel Time’s Arrow is a classic of the reversing-time genre, even accounting for the fact that it’s a pretty small genre.22 Its narrator is a disembodied consciousness who lives inside another person, Odilo Unverdorben. The host lives life in the ordinary sense, forward in time, but the homunculus narrator experiences everything backward—his first memory is Unverdorben’s death. He has no control over Unverdorben’s actions, nor access to his memories, but passively travels through life in reverse order. At first Unverdorben appears to us as a doctor, which strikes the narrator as quite a morbid occupation—patients shuffle into the emergency room, where staff suck medicines out of their bodies and rip off their bandages, sending them out into the night bleeding and screaming. But near the end of the book, we learn that Unverdorben was an assistant at Auschwitz, where he created life where none had been before—turning chemicals and electricity and corpses into living persons. Only now, thinks the narrator, does the world finally make sense.

THE ARROW OF TIME

There is a good reason why reversing the relative direction of time is an effective tool of the imagination: In the actual, non-imaginary world, it never happens. Time has a direction, and it has the same direction for everybody. None of us has met a character like the White Queen, who remembers what we think of as “the future” rather than (or in addition to) “the past.”

What does it mean to say that time has a direction, an arrow pointing from the past to the future? Think about watching a movie played in reverse. Generally, it’s pretty clear if we are seeing something running the “wrong way” in time. A classic example is a diver and a pool. If the diver dives, and then there is a big splash, followed by waves bouncing around in the water, all is normal. But if we see a pool that starts with waves, which collect into a big splash, in the process lifting a diver up onto the board and becoming perfectly calm, we know something is up: The movie is being played backward.

Certain events in the real world always happen in the same order. It’s dive, splash, waves; never waves, splash, spit out a diver. Take milk and mix it into a cup of black coffee; never take coffee with milk and separate the two liquids. Sequences of this sort are called irreversible processes. We are free to imagine that kind of sequence playing out in reverse, but if we actually see it happen, we suspect cinematic trickery rather than a faithful reproduction of reality.

Irreversible processes are at the heart of the arrow of time. Events happen in some sequences, and not in others. Furthermore, this ordering is perfectly consistent, as far as we know, throughout the observable universe. Someday we might find a planet in a distant solar system that contains intelligent life, but nobody suspects that we will find a planet on which the aliens regularly separate (the indigenous equivalents of) milk and coffee with a few casual swirls of a spoon. Why isn’t that surprising? It’s a big universe out there; things might very well happen in all sorts of sequences. But they don’t. For certain kinds of processes—roughly speaking, complicated actions with lots of individual moving parts—there seems to be an allowed order that is somehow built into the very fabric of the world.

Tom Stoppard’s play Arcadia uses the arrow of time as a central organizing metaphor. Here’s how Thomasina, a young prodigy who was well ahead of her time, explains the concept to her tutor:

THOMASINA: When you stir your rice pudding, Septimus, the spoonful of jam spreads itself round making red trails like the picture of a meteor in my astronomical atlas. But if you stir backward, the jam will not come together again. Indeed, the pudding does not notice and continues to turn pink just as before. Do you think this odd?

SEPTIMUS: No.

THOMASINA: Well, I do. You cannot stir things apart.

SEPTIMUS: No more you can, time must needs run backward, and since it will not, we must stir our way onward mixing as we go, disorder out of disorder into disorder until pink is complete, unchanging and unchangeable, and we are done with it for ever. This is known as free will or self-determination.23

The arrow of time, then, is a brute fact about our universe. Arguably the brute fact about our universe; the fact that things happen in one order and not in the reverse order is deeply ingrained in how we live in the world. Why is it like that? Why do we live in a universe where X is often followed by Y, but Y is never followed by X?

The answer lies in the concept of “entropy” that I mentioned above. Like energy or temperature, entropy tells us something about the particular state of a physical system; specifically, it measures how disorderly the system is. A collection of papers stacked neatly on top of one another has a low entropy; the same collection, scattered haphazardly on a desktop, has a high entropy. The entropy of a cup of coffee along with a separate teaspoon of milk is low, because there is a particular orderly segregation of the molecules into “milk” and “coffee,” while the entropy of the two mixed together is comparatively large. All of the irreversible processes that reflect time’s arrow—we can turn eggs into omelets but not omelets into eggs, perfume disperses through a room but never collects back into the bottle, ice cubes in water melt but glasses of warm water don’t spontaneously form ice cubes—share a common feature: Entropy increases throughout, as the system progresses from order to disorder. Whenever we disturb the universe, we tend to increase its entropy.

A big part of our task in this book will be to explain how the single idea of entropy ties together such a disparate set of phenomena, and then to dig more deeply into what exactly this stuff called “entropy” really is, and why it tends to increase. The final task—still a profound open question in contemporary physics—is to ask why the entropy was so low in the past, so that it could be increasing ever since.

FUTURE AND PAST VS. UP AND DOWN

But first, we need to contemplate a prior question: Should we really be surprised that certain things happen in one direction of time, but not in the other? Who ever said that everything should be reversible, anyway?

Think of time as a label on events as they happen. That’s one of the ways in which time is like space—they both help us locate things in the universe. But from that point of view, there is also a crucial difference between time and space—directions in space are created equal, while directions in time (namely, “the past” and “the future”) are very different. Here on Earth, directions in space are easily distinguished—a compass tells us whether we are moving north, south, east, or west, and nobody is in any danger of confusing up with down. But that’s not a reflection of deep underlying laws of nature—it’s just because we live on a giant planet, with respect to which we can define different directions. If you were floating in a space suit far away from any planets, all directions in space would truly be indistinguishable—there would be no preferred notion of “up” or “down.”

The technical way to say this is that there is a symmetry in the laws of nature—every direction in space is as good as every other. It’s easy enough to “reverse the direction of space”—take a photograph and print it backward, or for that matter just look in a mirror. For the most part, the view in a mirror appears pretty unremarkable. The obvious counterexample is writing, for which it’s easy to tell that we are looking at a reversed image; that’s because writing, like the Earth, does pick out a preferred direction (you’re reading this book from left to right). But the images of most scenes not full of human creations look equally “natural” to us whether we see them directly or we see them through a mirror.

Contrast that with time. The equivalent of “looking at an image through a mirror” (reversing the direction of space) is simply “playing a movie backward” (reversing the direction of time). And in that case, it’s easy to tell when time has been inverted—the irreversible processes that define the arrow of time are suddenly occurring in the wrong order. What is the origin of this profound difference between space and time?

While it’s true that the presence of the Earth beneath our feet picks out an “arrow of space” by distinguishing up from down, it’s pretty clear that this is a local, parochial phenomenon, rather than a reflection of the underlying laws of nature. We can easily imagine ourselves out in space where there is no preferred direction. But the underlying laws of nature do not pick out a preferred direction of time, any more than they pick out a preferred direction in space. If we confine our attention to very simple systems with just a few moving parts, whose motion reflects the basic laws of physics rather than our messy local conditions, there is no arrow of time—we can’t tell when a movie is being run backward. Think about Galileo’s chandelier, rocking peacefully back and forth. If someone showed you a movie of the chandelier, you wouldn’t be able to tell whether it was being shown forward or backward—its motion is sufficiently simple that it works equally well in either direction of time.


Figure 5: The Earth defines a preferred direction in space, while the Big Bang defines a preferred direction in time.

The arrow of time, therefore, is not a feature of the underlying laws of physics, at least as far as we know. Rather, like the up/down orientation in space picked out by the Earth, the preferred direction of time is also a consequence of features of our environment. In the case of time, it’s not that we live in the spatial vicinity of an influential object; it’s that we live in the temporal vicinity of an influential event: the birth of the universe. The beginning of our observable universe, the hot dense state known as the Big Bang, had a very low entropy. The influence of that event orients us in time, just as the presence of the Earth orients us in space.

NATURE’S MOST RELIABLE LAW

The principle underlying irreversible processes is summed up in the Second Law of Thermodynamics:

The entropy of an isolated system either remains constant or increases with time.

(The First Law states that energy is conserved.24) The Second Law is arguably the most dependable law in all of physics. If you were asked to predict what currently accepted principles of physics would still be considered inviolate a thousand years from now, the Second Law would be a good bet. Sir Arthur Eddington, a leading astrophysicist of the early twentieth century, put it emphatically:

If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations [the laws of electricity and magnetism]—then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation—well, these experimentalists do bungle things sometimes. But if your theory is found to be against the Second Law of Thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.25

C. P. Snow—British intellectual, physicist, and novelist—is perhaps best known for his insistence that the “Two Cultures” of the sciences and the humanities had grown apart and should both be a part of our common civilization. When he came to suggest the most basic item of scientific knowledge that every educated person should understand, he chose the Second Law:

A good many times I have been present at gatherings of people who, by the standards of the traditional culture, are thought highly educated and who have with considerable gusto been expressing their incredulity at the illiteracy of scientists. Once or twice I have been provoked and have asked the company how many of them could describe the Second Law of Thermodynamics, the law of entropy. The response was cold: it was also negative. Yet I was asking something which is about the scientific equivalent of: “Have you read a work of Shakespeare’s?”26

I’m sure Baron Snow was quite the hit at Cambridge cocktail parties. (To be fair, he did later admit that even physicists didn’t really understand the Second Law.)

Our modern definition of entropy was proposed by Austrian physicist Ludwig Boltzmann in 1877. But the concept of entropy, and its use in the Second Law of Thermodynamics, dates back to German physicist Rudolf Clausius in 1865. And the Second Law itself goes back even earlier—to French military engineer Nicolas Léonard Sadi Carnot in 1824. How in the world did Clausius use entropy in the Second Law without knowing its definition, and how did Carnot manage to formulate the Second Law without even using the concept of entropy at all?

The nineteenth century was the heroic age of thermodynamics—the study of heat and its properties. The pioneers of thermodynamics studied the interplay between temperature, pressure, volume, and energy. Their interest was by no means abstract—this was the dawn of the industrial age, and much of their work was motivated by the desire to build better steam engines.

Today physicists understand that heat is a form of energy and that the temperature of an object is simply a measure of the average kinetic energy (energy of motion) of the atoms in the object. But in 1800, scientists didn’t believe in atoms, and they didn’t understand energy very well. Carnot, whose pride was wounded by the fact that the English were ahead of the French in steam engine technology, set himself the task of understanding how efficient such an engine could possibly be—how much useful work could you do by burning a certain amount of fuel? He showed that there is a fundamental limit to such extraction. By taking an intellectual leap from real machines to idealized “heat engines,” Carnot demonstrated there was a best possible engine, which got the most work out of a given amount of fuel operating at a given temperature. The trick, unsurprisingly, was to minimize the production of waste heat. We might think of heat as useful in warming our houses during the winter, but it doesn’t help in doing what physicists think of as “work”—getting something like a piston or a flywheel to move from place to place. What Carnot realized was that even the most efficient engine possible is not perfect; some energy is lost along the way. In other words, the operation of a steam engine is an irreversible process.
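For readers who want the quantitative version of Carnot’s limit, the standard textbook statement (added here for concreteness; it is not spelled out in the passage above) is that an ideal engine absorbing heat from a hot reservoir at absolute temperature T_hot and dumping waste heat into a cold reservoir at T_cold can convert at most a fraction

\[
\eta_{\text{max}} = 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}}
\]

of the absorbed heat into useful work. The remainder must be discarded as waste heat no matter how cleverly the engine is built; that unavoidable leftover is the irreversibility Carnot identified.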

So Carnot appreciated that engines did something that could not be undone. It was Clausius, in 1850, who understood that this reflected a law of nature. He formulated his law as “heat does not spontaneously flow from cold bodies to warm ones.” Fill a balloon with hot water and immerse it in cold water. Everyone knows that the temperatures will tend to average out: The water in the balloon will cool down as the surrounding liquid warms up. The opposite never happens. Physical systems evolve toward a state of equilibrium—a quiescent configuration that is as uniform as possible, with equal temperatures in all components. From this insight, Clausius was able to re-derive Carnot’s results concerning steam engines.

So what does Clausius’ law (heat never flows spontaneously from colder bodies to hotter ones) have to do with the Second Law (entropy never spontaneously decreases)? The answer is, they are the same law. In 1865 Clausius managed to reformulate his original maxim in terms of a new quantity, which he called the “entropy.” Take an object that is gradually cooling down—emitting heat into its surroundings. As this process happens, consider at every moment the amount of heat being lost, and divide it by the temperature of the object. The entropy is then the accumulated amount of this quantity (the heat lost divided by the temperature) over the course of the entire process. Clausius showed that the tendency of heat to flow from hot objects to cold ones was precisely equivalent to the claim that the entropy of a closed system would only ever go up, never go down. An equilibrium configuration is simply one in which the entropy has reached its maximum value, and has nowhere else to go; all the objects in contact are at the same temperature.
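To put Clausius’s verbal recipe into symbols (a standard textbook rendering, offered here as a sketch rather than quoted from the text): for a small amount of heat dQ exchanged at absolute temperature T, the entropy changes by dQ/T, accumulated over the whole process,

\[
\Delta S = \int \frac{dQ}{T}.
\]

The equivalence described above follows directly: if a bit of heat Q leaks from a hot body at T_hot to a cold body at T_cold, the total entropy changes by −Q/T_hot + Q/T_cold, which is positive precisely when T_hot is greater than T_cold. “Heat flows from hot to cold” and “entropy never decreases” are two descriptions of the same fact.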

If that seems a bit abstract, there is a simple way of summing up this view of entropy: It measures the uselessness of a certain amount of energy.27 There is energy in a gallon of gasoline, and it’s useful—we can put it to work. The process of burning that gasoline to run an engine doesn’t change the total amount of energy; as long as we keep careful track of what happens, energy is always conserved.28 But along the way, that energy becomes increasingly useless. It turns into heat and noise, as well as the motion of the vehicle powered by that engine, but even that motion eventually slows down due to friction. And as energy transforms from useful to useless, its entropy increases all the while.

The Second Law doesn’t imply that the entropy of a system can never decrease. We could invent a machine that separated out the milk from a cup of coffee, for example. The trick, though, is that we can only decrease the entropy of one thing by creating more entropy elsewhere. We human beings, and the machines that we might use to rearrange the milk and coffee, and the food and fuel each consume—all of these also have entropy, which will inevitably increase along the way. Physicists draw a distinction between open systems—objects that interact significantly with the outside world, exchanging entropy and energy—and closed systems—objects that are essentially isolated from external influences. In an open system, like the coffee and milk we put into our machine, entropy can certainly decrease. But in a closed system—say, the total system of coffee plus milk plus machine plus human operators plus fuel and so on—the entropy will always increase, or at best stay constant.

THE RISE OF ATOMS

The great insights into thermodynamics of Carnot, Clausius, and their colleagues all took place within a “phenomenological” framework. They knew the big picture but not the underlying mechanisms. In particular, they didn’t know about atoms, so they didn’t think of temperature and energy and entropy as properties of some microscopic substrate; they thought of each of them as real things, in and of themselves. It was common in those days to think of heat in particular as a form of fluid, which could flow from one body to another. The heat-fluid even had a name: “caloric.” And this level of understanding was perfectly adequate for formulating the laws of thermodynamics.

But over the course of the nineteenth century, physicists gradually became convinced that the many substances we find in the world can all be understood as different arrangements of a fixed number of elementary constituents, known as “atoms.” (The physicists actually lagged behind the chemists in their acceptance of atomic theory.) It’s an old idea, dating back to Democritus and other ancient Greeks, but it began to catch on in the nineteenth century for a simple reason: The existence of atoms could explain many observed properties of chemical reactions, which otherwise were simply asserted. Scientists like it when a single simple idea can explain a wide variety of observed phenomena.

These days it is elementary particles such as quarks and leptons that play the role of Democritus’s atoms, but the idea is the same. What a modern scientist calls an “atom” is the smallest possible unit of matter that still counts as a distinct chemical element, such as carbon or nitrogen. But we now understand that such atoms are not indivisible; they consist of electrons orbiting the atomic nucleus, and the nucleus is made of protons and neutrons, which in turn are made of different combinations of quarks. The search for rules obeyed by these elementary building blocks of matter is often called “fundamental” physics, although “elementary” physics would be more accurate (and arguably less self-aggrandizing). Henceforth, I’ll use atoms in the established nineteenth-century sense of chemical elements, not the ancient Greek sense of elementary particles.

The fundamental laws of physics have a fascinating feature: Despite the fact that they govern the behavior of all the matter in the universe, you don’t need to know them to get through your everyday life. Indeed, you would be hard-pressed to discover them, merely on the basis of your immediate experiences. That’s because very large collections of particles obey distinct, autonomous rules of behavior, which don’t really depend on the smaller structures underneath. The underlying rules are referred to as “microscopic” or simply “fundamental,” while the separate rules that apply only to large systems are referred to as “macroscopic” or “emergent.” The behavior of temperature and heat and so forth can certainly be understood in terms of atoms: That’s the subject known as “statistical mechanics.” But it can equally well be understood without knowing anything whatsoever about atoms: That’s the phenomenological approach we’ve been discussing, known as “thermodynamics.” It is a common occurrence in physics that in complex, macroscopic systems, regular patterns emerge dynamically from underlying microscopic rules. Despite the way it is sometimes portrayed, there is no competition between fundamental physics and the study of emergent phenomena; both are fascinating and crucially important to our understanding of nature.

One of the first physicists to advocate atomic theory was a Scotsman, James Clerk Maxwell, who was also responsible for the final formulation of the modern theory of electricity and magnetism. Maxwell, along with Boltzmann in Austria (and following in the footsteps of numerous others), used the idea of atoms to explain the behavior of gases, according to what was known as “kinetic theory.” Maxwell and Boltzmann were able to figure out that the atoms in a gas in a container, fixed at some temperature, should have a certain distribution of velocities—this many would be moving fast, that many would be moving slowly, and so on. These atoms would naturally keep banging against the walls of the container, exerting a tiny force each time they did so. And the accumulated impact of those tiny forces has a name: It is simply the pressure of the gas. In this way, kinetic theory explained features of gases in terms of simpler rules.
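To give a flavor of the kind of result kinetic theory produces, here is the standard formula relating those accumulated wall impacts to pressure (a textbook expression, included as an illustration rather than drawn from the passage above): for N molecules of mass m in a container of volume V,

\[
P = \frac{N\, m\, \langle v^2 \rangle}{3V},
\]

where ⟨v²⟩ is the average squared speed of the molecules. Comparing this with the measured relation between pressure, volume, and temperature is what ties temperature to the average kinetic energy of the atoms.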

ENTROPY AND DISORDER

But the great triumph of kinetic theory was its use by Boltzmann in formulating a microscopic understanding of entropy. Boltzmann realized that when we look at some macroscopic system, we certainly don’t keep track of the exact properties of every single atom. If we have a glass of water in front of us, and someone sneaks in and (say) switches some of the water molecules around without changing the overall temperature and density and so on, we would never notice. There are many different arrangements of particular atoms that are indistinguishable from our macroscopic perspective. And then he noticed that low-entropy objects are more delicate with respect to such rearrangements. If you have an egg, and start exchanging bits of the yolk with bits of the egg white, pretty soon you will notice. The situations that we characterize as “low-entropy” seem to be easily disturbed by rearranging the atoms within them, while “high-entropy” ones are more robust.


Figure 6: Ludwig Boltzmann’s grave in the Zentralfriedhof, Vienna. The inscribed equation, S = k log W, is his formula for entropy in terms of the number of ways you can rearrange microscopic components of a system without changing its macroscopic appearance. (See Chapter Eight for details.)

So Boltzmann took the concept of entropy, which had been defined by Clausius and others as a measure of the uselessness of energy, and redefined it in terms of atoms:

Entropy is a measure of the number of particular microscopic arrangements of atoms that appear indistinguishable from a macroscopic perspective.29
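In symbols, this is the formula on Boltzmann’s tombstone (Figure 6): writing W for the number of microscopic arrangements that are macroscopically indistinguishable, the entropy is

\[
S = k \log W,
\]

where k, now called Boltzmann’s constant, merely converts a pure number into conventional thermodynamic units. More ways to rearrange the atoms without anyone noticing means higher entropy.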

It would be difficult to overemphasize the importance of this insight. Before Boltzmann, entropy was a phenomenological thermodynamic concept, which followed its own rules (such as the Second Law). After Boltzmann, the behavior of entropy could be derived from deeper underlying principles. In particular, it suddenly makes perfect sense why entropy tends to increase:

In an isolated system entropy tends to increase, because there are more ways to be high entropy than to be low entropy.

At least, that formulation sounds like it makes perfect sense. In fact, it sneaks in a crucial assumption: that we start with a system that has a low entropy. If we start with a system that has a high entropy, we’ll be in equilibrium—nothing will happen at all. That word start sneaks in an asymmetry in time, by privileging earlier times over later ones. And this line of reasoning takes us all the way back to the low entropy of the Big Bang. For whatever reason, of the many ways we could arrange the constituents of the universe, at early times they were in a very special, low-entropy configuration.

This caveat aside, there is no question that Boltzmann’s formulation of the concept of entropy represented a great leap forward in our understanding of the arrow of time. This increase in understanding, however, came at a cost. Before Boltzmann, the Second Law was absolute—an ironclad law of nature. But the definition of entropy in terms of atoms comes with a stark implication: entropy doesn’t necessarily increase, even in a closed system; it is simply likely to increase. (Overwhelmingly likely, as we shall see, but still.) Given a box of gas evenly distributed in a high-entropy state, if we wait long enough, the random motion of the atoms will eventually lead them all to be on one side of the box, just for a moment—a “statistical fluctuation.” When you run the numbers, it turns out that the time you would have to wait before expecting to see such a fluctuation is much larger than the age of the universe. It’s not something we have to worry about, as a practical matter. But it’s there.
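A toy count makes “overwhelmingly likely” concrete. The sketch below is illustrative only: the particle number is deliberately tiny, and the assumption that each molecule is independently equally likely to sit in either half of the box is my own simplification, not a calculation from the text.

```python
from math import comb

def arrangements(n_total, n_left):
    """Number of ways to have exactly n_left of the n_total molecules
    in the left half of the box (each molecule simply labeled 'left'
    or 'right')."""
    return comb(n_total, n_left)

N = 100  # a toy "gas"; a real box of gas holds something like 10**23 molecules

one_sided = arrangements(N, N)       # every molecule on one side: a single arrangement
balanced = arrangements(N, N // 2)   # a 50/50 split: the typical, high-entropy situation

print(f"Arrangements with all {N} molecules on one side: {one_sided}")
print(f"Arrangements with a 50/50 split: {balanced:.3e}")

# If each molecule independently picks a side at random, the chance of
# catching the all-on-one-side fluctuation at any given moment is 1 in 2**N.
print(f"Chance of the one-sided fluctuation: 1 in {2**N:.3e}")
```

Even for this absurdly small gas, the one-sided fluctuation is outnumbered by roughly 10^29 to 1; for realistic numbers of molecules the imbalance is so extreme that, as noted above, the expected waiting time dwarfs the age of the universe.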

Some people didn’t like that. They wanted the Second Law of Thermodynamics, of all things, to be utterly inviolate, not just something that holds true most of the time. Boltzmann’s suggestion met with a great deal of controversy, but these days it is universally accepted.

ENTROPY AND LIFE

This is all fascinating stuff, at least to physicists. But the ramifications of these ideas go far beyond steam engines and cups of coffee. The arrow of time manifests itself in many different ways—our bodies change as we get older, we remember the past but not the future, effects always follow causes. It turns out that all of these phenomena can be traced back to the Second Law. Entropy, quite literally, makes life possible.

The major source of energy for life on Earth is light from the Sun. As Clausius taught us, heat naturally flows from a hot object (the Sun) to a cooler object (the Earth). But if that were the end of the story, before too long the two objects would come into equilibrium with each other—they would attain the same temperature. In fact, that is just what would happen if the Sun filled our entire sky, rather than describing a disk about half a degree across. The result would be an unhappy world indeed. It would be completely inhospitable to the existence of life—not simply because the temperature was high, but because it would be static. Nothing would ever change in such an equilibrium world.

In the real universe, the reason why our planet doesn’t heat up until it reaches the temperature of the Sun is because the Earth loses heat by radiating it out into space. And the only reason it can do that, Clausius would proudly note, is because space is much colder than Earth.30 It is because the Sun is a hot spot in a mostly cold sky that the Earth doesn’t just heat up, but rather can absorb the Sun’s energy, process it, and radiate it into space. Along the way, of course, entropy increases; a fixed amount of energy in the form of solar radiation has a much lower entropy than the same amount of energy in the form of the Earth’s radiation into space.
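A rough estimate shows how large that entropy increase is (my own back-of-the-envelope numbers, using the common approximation that radiation carrying energy E from a source at temperature T carries entropy of order E/T): sunlight comes from a surface at roughly 6,000 kelvin, while the Earth re-radiates the same energy at roughly 300 kelvin, so

\[
\frac{S_{\text{out}}}{S_{\text{in}}} \sim \frac{E/T_{\text{Earth}}}{E/T_{\text{Sun}}} = \frac{T_{\text{Sun}}}{T_{\text{Earth}}} \approx \frac{6000\ \text{K}}{300\ \text{K}} = 20.
\]

Each parcel of energy leaves the Earth carrying roughly twenty times the entropy it arrived with.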

This process, in turn, explains why the biosphere of the Earth is not a static place.31 We receive energy from the Sun, but it doesn’t just heat us up until we reach equilibrium; it’s very low-entropy radiation, so we can make use of it and then release it as high-entropy radiation. All of which is possible only because the universe as a whole, and the Solar System in particular, have a relatively low entropy at the present time (and an even lower entropy in the past). If the universe were anywhere near thermal equilibrium, nothing would ever happen.

Nothing good lasts forever. Our universe is a lively place because there is plenty of room for entropy to increase before we hit equilibrium and everything grinds to a halt. It’s not a foregone conclusion—entropy might be able to simply grow forever. Alternatively, entropy may reach a maximum value and stop. This scenario is known as the “heat death” of the universe and was contemplated as long ago as the 1850s, amidst all the exciting theoretical developments in thermodynamics. William Thomson, Lord Kelvin, was a British physicist and engineer who played an important role in laying the first transatlantic telegraph cable. But in his more reflective moments, he mused on the future of the universe:

The result would inevitably be a state of universal rest and death, if the universe were finite and left to obey existing laws. But it is impossible to conceive a limit to the extent of matter in the universe; and therefore science points rather to an endless progress, through an endless space, of action involving the transformation of potential energy into palpable motion and hence into heat, than to a single finite mechanism, running down like a clock, and stopping for ever.32

Here, Lord Kelvin has put his finger quite presciently on the major issue in these kinds of discussions, which we will revisit at length in this book: Is the capacity of the universe to increase in entropy finite or infinite? If it is finite, then the universe will eventually wind down to a heat death, once all useful energy has been converted to high-entropy useless forms of energy. But if the entropy can increase without bound, we are at least allowed to contemplate the possibility that the universe continues to grow and evolve forever, in one way or another.

In a famous short story entitled simply “Entropy,” Thomas Pynchon had his characters apply the lessons of thermodynamics to their social milieu.

“Nevertheless,” continued Callisto, “he found in entropy, or the measure of disorganization of a closed system, an adequate metaphor to apply to certain phenomena in his own world. He saw, for example, the younger generation responding to Madison Avenue with the same spleen his own had once reserved for Wall Street: and in American ‘consumerism’ discovered a similar tendency from the least to the most probable, from differentiation to sameness, from ordered individuality to a kind of chaos. He found himself, in short, restating Gibbs’ prediction in social terms, and envisioned a heat-death for his culture in which ideas, like heat-energy, would no longer be transferred, since each point in it would ultimately have the same quantity of energy; and intellectual motion would, accordingly, cease.”33

To this day, scientists haven’t yet determined to anyone’s satisfaction whether the universe will continue to evolve forever, or whether it will eventually settle into a placid state of equilibrium.

WHY CAN’T WE REMEMBER THE FUTURE?

So the arrow of time isn’t just about simple mechanical processes; it’s a necessary property of the existence of life itself. But it’s also responsible for a deep feature of what it means to be a conscious person: the fact that we remember the past but not the future. According to the fundamental laws of physics, the past and future are treated on an equal footing, but when it comes to how we perceive the world, they couldn’t be more different. We carry in our heads representations of the past in the form of memories. Concerning the future, we can make predictions, but those predictions have nowhere near the reliability of our memories of the past.

Ultimately, the reason why we can form a reliable memory of the past is because the entropy was lower then. In a complicated system like the universe, there are many ways for the underlying constituents to arrange themselves into the form of “you, with a certain memory of the past, plus the rest of the universe.” If that’s all you know—that you exist right now, with a memory of going to the beach that summer between sixth and seventh grade—you simply don’t have enough information to reliably conclude that you really did go to the beach that summer. It turns out to be overwhelmingly more likely that your memory is just a random fluctuation, like the air in a room spontaneously congregating over on one side. To make sense of your memories, you need to assume as well that the universe was ordered in a certain way—that the entropy was lower in the past.

Imagine that you are walking down the street, and on the sidewalk you notice a broken egg that appears as though it hasn’t been sitting outside for very long. Our presumption of a low-entropy past allows us to say with an extremely high degree of certainty that not long ago there must have been an unbroken egg, which someone dropped. Since, as far as the future is concerned, we have no reason to suspect that entropy will decrease, there’s not much we can say about the future of the egg—too many possibilities are open. Maybe it will stay there and grow moldy, maybe someone will clean it up, maybe a dog will come by and eat it. (It’s unlikely that it will spontaneously reassemble itself into an unbroken egg, but strictly speaking that’s among the possibilities.) That egg on the sidewalk is like a memory in your brain—it’s a record of a prior event, but only if we assume a low-entropy boundary condition in the past.

We also distinguish past from future through the relationship between cause and effect. Namely, the causes come first (earlier in time), and then come the effects. That’s why the White Queen seems so preposterous to us—how could she be yelping in pain before pricking her finger? Again, entropy is to blame. Think of the diver splashing into the pool—the splash always comes after the dive. According to the microscopic laws of physics, however, it is possible to arrange all of the molecules in the water (and the air around the pool, through which the sound of the splash travels) to precisely “unsplash” and eject the diver from the pool. To do this would require an unimaginably delicate choice of the position and velocity of every single one of those atoms—if you pick a random splashy configuration, there is almost no chance that the microscopic forces at work will correctly conspire to spit out the diver.

In other words, part of the distinction we draw between “effects” and “causes” is that “effects” generally involve an increase in entropy. If two billiard balls collide and go their separate ways, the entropy remains constant, and neither ball deserves to be singled out as the cause of the interaction. But if you hit the cue ball into a stationary collection of racked balls on the break (provoking a noticeable increase in entropy), you and I would say “the cue ball caused the break”—even though the laws of physics treat all of the balls perfectly equally.

THE ART OF THE POSSIBLE

In the last chapter we contrasted the block time view—the entire four-dimensional history of the world, past, present, and future, is equally real—with the presentist view—only the current moment is truly real. There is yet another perspective, sometimes called possibilism: The current moment exists, and the past exists, but the future does not (yet) exist.

The idea that the past exists in a way the future does not accords well with our informal notion of how time works. The past has already happened, while the future is still up for grabs in some sense—we can sketch out alternative possibilities, but we don’t know which one is real. More particularly, when it comes to the past we have recourse to memories and records of what happened. Our records may have varying degrees of reliability, but they fix the actuality of the past in a way that isn’t available when we contemplate the future.

Think of it this way: A loved one says, “I think we should change our vacation plans for next year. Instead of going to Cancún, let’s be adventurous and go to Rio.” You may or may not go along with the plan, but the strategy should you choose to implement it isn’t that hard to work out: You change plane reservations, book a new hotel, and so forth. But if your loved one says, “I think we should change our vacation plans for last year. Instead of having gone to Paris, let’s have been adventurous and have gone to Istanbul,” your strategy would be very different—you’d think about taking your loved one to the doctor, not rearranging your past travel plans. The past is gone, it’s in the books, there’s no way we can set about changing it. So it makes perfect sense to us to treat the past and future on completely different footings. Philosophers speak of the distinction between Being—existence in the world—and Becoming—a dynamical process of change, bringing reality into existence.

That distinction between the fixedness of the past and the malleability of the future is nowhere to be found in the known laws of physics. The deep-down microscopic rules of nature run equally well forward or backward in time from any given situation. If you know the exact state of the universe, and all of the laws of physics, the future as well as the past is rigidly determined beyond John Calvin’s wildest dreams of predestination.

The way to reconcile these beliefs—the past is once-and-for-all fixed, while the future can be changed, but the fundamental laws of physics are reversible—ultimately comes down to entropy. If we knew the precise state of every particle in the universe, we could deduce the future as well as the past. But we don’t; we know something about the universe’s macroscopic characteristics, plus a few details here and there. With that information, we can predict certain broad-scale phenomena (the Sun will rise tomorrow), but our knowledge is compatible with a wide spectrum of specific future occurrences. When it comes to the past, however, we have at our disposal both our knowledge of the current macroscopic state of the universe, plus the fact that the early universe began in a low-entropy state. That one extra bit of information, known simply as the “Past Hypothesis,” gives us enormous leverage when it comes to reconstructing the past from the present.

The punch line is that our notion of free will, the ability to change the future by making choices in a way that is not available to us as far as the past is concerned, is only possible because the past has a low entropy and the future has a high entropy. The future seems open to us, while the past seems closed, even though the laws of physics treat them on an equal footing.

Because we live in a universe with a pronounced arrow of time, we treat the past and future not just as different from a practical perspective, but as deeply and fundamentally different things. The past is in the books, but the future can be influenced by our actions. Of more direct importance for cosmology, we tend to conflate “explaining the history of the universe” with “explaining the state of the early universe”—leaving the state of the late universe to work itself out. Our unequal treatment of past and future is a form of temporal chauvinism, which can be hard to eradicate from our mind-set. But that chauvinism, like so many others, has no ultimate justification in the laws of nature. When thinking about important features of the universe, whether deciding what is “real” or why the early universe had a low entropy, it is a mistake to prejudice our explanations by placing the past and future on unequal footings. The explanations we seek should ultimately be timeless.

The major lesson of this overview of entropy and the arrow of time should be clear: The existence of the arrow of time is both a profound feature of the physical universe and a pervasive ingredient of our everyday lives. It’s a bit embarrassing, frankly, that with all of the progress made by modern physics and cosmology, we still don’t have a final answer for why the universe exhibits such a profound asymmetry in time. I’m embarrassed, at any rate, but every crisis is an opportunity, and by thinking about entropy we might learn something important about the universe.