The Fabric of the Cosmos: Space, Time, and the Texture of Reality - Brian Greene (2004)


Chapter 6. Chance and the Arrow


Even if time doesn’t flow, it still makes sense to ask whether it has an arrow—whether there is a direction to the way things unfold in time that can be discerned in the laws of physics. It is the question of whether there is some intrinsic order in how events are sprinkled along spacetime and whether there is an essential scientific difference between one ordering of events and the reverse ordering. As everyone already knows, there certainly appears to be a huge distinction of this sort; it’s what gives life promise and makes experience poignant. Yet, as we’ll see, explaining the distinction between past and future is harder than you’d think. Rather remarkably, the answer we’ll settle upon is intimately bound up with the precise conditions at the origin of the universe.

The Puzzle

A thousand times a day, our experiences reveal a distinction between things unfolding one way in time and the reverse. A piping hot pizza cools down en route from Domino’s, but we never find a pizza arriving hotter than when it was removed from the oven. Cream stirred into coffee forms a uniformly tan liquid, but we never see a cup of light coffee unstir and separate into white cream and black coffee. Eggs fall, cracking and splattering, but we never see splattered eggs and eggshells gather together and coalesce into uncracked eggs. The compressed carbon dioxide gas in a bottle of Coke rushes outward when we twist off the cap, but we never find spread-out carbon dioxide gas gathering together and swooshing back into the bottle. Ice cubes put into a glass of room-temperature water melt, but we never see globules in a room-temperature glass of water coalesce into solid cubes of ice. These common sequences of events, as well as countless others, happen in only one temporal order. They never happen in reverse, and so they provide a notion of before and after—they give us a consistent and seemingly universal conception of past and future. These observations convince us that were we to examine all of spacetime from the outside (as in Figure 5.1), we would see significant asymmetry along the time axis. Splattered eggs the world over would lie to one side—the side we conventionally call the future—of their whole, unsplattered counterparts.

Perhaps the most pointed example of all is that our minds seem to have access to a collection of events that we call the past—our memories—but none of us seems able to remember the collection of events we call the future. So it seems obvious that there is a big difference between the past and the future. There seems to be a manifest orientation to how an enormous variety of things unfold in time. There seems to be a manifest distinction between the things we can remember (the past) and the things we cannot (the future). This is what we mean by time’s having an orientation, a direction, or an arrow.1

Physics, and science more generally, is founded on regularities. Scientists study nature, find patterns, and codify these patterns in natural laws. You would think, therefore, that the enormous wealth of regularity leading us to perceive an apparent arrow of time would be evidence of a fundamental law of nature. A silly way to formulate such a law would be to introduce the Law of Spilled Milk, stating that glasses of milk spill but don’t unspill, or the Law of Splattered Eggs, stating that eggs break and splatter but never unsplatter and unbreak. But that kind of law buys us nothing: it is merely descriptive, and offers no explanation beyond a simple observation of what happens. Yet we expect that somewhere in the depths of physics there must be a less silly law describing the motion and properties of the particles that make up pizza, milk, eggs, coffee, people, and stars—the fundamental ingredients of everything—that shows why things evolve through one sequence of steps but never the reverse. Such a law would give a fundamental explanation to the observed arrow of time.

The perplexing thing is that no one has discovered any such law. What’s more, the laws of physics that have been articulated from Newton through Maxwell and Einstein, and up until today, show a complete symmetry between past and future. Nowhere in any of these laws do we find a stipulation that they apply one way in time but not in the other. Nowhere is there any distinction between how the laws look or behave when applied in either direction in time. The laws treat what we call past and future on a completely equal footing. Even though experience reveals over and over again that there is an arrow of how events unfold in time, this arrow seems not to be found in the fundamental laws of physics.

Past, Future, and the Fundamental Laws of Physics

How can this be? Do the laws of physics provide no underpinning that distinguishes past from future? How can there be no law of physics explaining that events unfold in this order but never in reverse?

The situation is even more puzzling. The known laws of physics actually declare—contrary to our lifetime of experiences—that light coffee can separate into black coffee and white cream; a splattered yolk and a collection of smashed shell pieces can gather themselves together and form a perfectly smooth unbroken egg; the melted ice in a glass of room-temperature water can fuse back together into cubes of ice; the gas released when you open your soda can rush back into the bottle. All the physical laws that we hold dear fully support what is known as time-reversal symmetry. This is the statement that if some sequence of events can unfold in one temporal order (cream and coffee mix, eggs break, gas rushes outward) then these events can also unfold in reverse (cream and coffee unmix, eggs unbreak, gas rushes inward). I’ll elaborate on this shortly, but the one-sentence summary is that not only do known laws fail to tell us why we see events unfold in only one order, they also tell us that, in theory, events can unfold in reverse order.

The burning question is Why don’t we ever see such things? I think it’s a safe bet that no one has ever actually witnessed a splattered egg unsplattering. But if the laws of physics allow it, and if, moreover, those laws treat splattering and unsplattering equally, why does one never happen while the other does?

Time-Reversal Symmetry

As a first step toward resolving this puzzle, we need to understand in more concrete terms what it means for the known laws of physics to be time-reversal symmetric. To this end, imagine it’s the twenty-fifth century and you’re playing tennis in the new interplanetary league with your partner, Coolstroke Williams. Somewhat unused to the reduced gravity on Venus, Coolstroke hits a gargantuan backhand that launches the ball into the deep, isolated darkness of space. A passing space shuttle films the ball as it goes by and sends the footage to CNN (Celestial News Network) for broadcast. Here’s the question: If the technicians at CNN were to make a mistake and run the film of the tennis ball in reverse, would there be any way to tell? Well, if you knew the heading and orientation of the camera during the filming you might be able to recognize their error. But could you figure it out solely by looking at the footage itself, with no additional information? The answer is no. If in the correct (forward) time direction the footage showed the ball floating by from left to right, then in reverse it would show the ball floating by from right to left. And certainly, the laws of classical physics allow tennis balls to move either left or right. So the motion you see when the film is run in either the forward time direction or the reverse time direction is perfectly consistent with the laws of physics.

We’ve so far imagined that no forces were acting on the tennis ball, so that it moved with constant velocity. Let’s now consider the more general situation by including forces. According to Newton, the effect of a force is to change the velocity of an object: forces impart accelerations. Imagine, then, that after floating awhile through space, the ball is captured by Jupiter’s gravitational pull, causing it to move with increasing speed in a downward, rightward-sweeping arc toward Jupiter’s surface, as in Figures 6.1a and 6.1b. If you play a film of this motion in reverse, the tennis ball will appear to move in an arc that sweeps upward and toward the left, away from Jupiter, as in Figure 6.1c. Here’s the new question: is the motion depicted by the film when played backward—the time-reversed motion of what was actually filmed—allowed by the classical laws of physics? Is it motion that could happen in the real world? At first, the answer seems obviously to be yes: tennis balls can move in downward arcs to the right or upward arcs to the left, or, for that matter, in innumerable other trajectories. So what’s the difficulty? Well, although the answer is indeed yes, this reasoning is too glib and misses the real intent of the question.


Figure 6.1 (a) A tennis ball flying from Venus to Jupiter together with (b) a close-up. (c) Tennis ball’s motion if its velocity is reversed just before it hits Jupiter.

When you run the film in reverse, you see the tennis ball leap from Jupiter’s surface, moving upward and toward the left with exactly the same speed it had when it hit the planet, but in exactly the opposite direction. This initial part of the film is certainly consistent with the laws of physics: we can imagine, for example, someone launching the tennis ball from Jupiter’s surface with precisely this velocity. The essential question is whether the rest of the reverse run is also consistent with the laws of physics. Would a ball launched with this initial velocity—and subject to Jupiter’s downward-pulling gravity—actually move along the trajectory depicted in the rest of the reverse run film? Would it exactly retrace its original downward trajectory, but in reverse?

The answer to this more refined question is yes. To avoid any confusion, let’s spell this out. In Figure 6.1a, before Jupiter’s gravity had any significant effect, the ball was heading purely to the right. Then, in Figure 6.1b, Jupiter’s powerful gravitational force caught hold of the ball and pulled it toward the planet’s center—a pull that’s mostly downward but, as you can see in the figure, is also partially to the right. This means that as the ball closed in on Jupiter’s surface, its rightward speed had increased somewhat, but its downward speed had increased dramatically. In the reverse run film, therefore, the ball’s launch from Jupiter’s surface would be headed somewhat leftward but predominantly upward, as in Figure 6.1c. With this starting velocity, Jupiter’s gravity would have had its greatest impact on the ball’s upward speed, causing it to go slower and slower, while also decreasing the ball’s leftward speed, but less dramatically. And with the ball’s upward speed rapidly diminishing, its motion would become dominated by its speed in the leftward direction, causing it to follow an upward-arcing trajectory toward the left. Near the end of this arc, gravity would have sapped all the upward motion as well as the additional rightward velocity Jupiter’s gravity imparted to the ball on its way down, leaving the ball moving purely to the left with exactly the same speed it had on its initial approach.

All this can be made quantitative, but the point to notice is that this trajectory is exactly the reverse of the ball’s original motion. Simply by reversing the ball’s velocity, as in Figure 6.1c—by setting it off with the same speed but in the opposite direction—one can make it fully retrace its original trajectory, but in reverse. Bringing the film back into the discussion, we see that the upward-arcing trajectory to the left—the trajectory we just figured out with reasoning based on Newton’s laws of motion—is exactly what we would see upon running the film in reverse. So the ball’s time-reversed motion, as depicted in the reverse-run film, conforms to the laws of physics just as surely as its forward-time motion. The motion we’d see upon running the film in reverse is motion that could really happen in the real world.

Although there are a few subtleties I’ve relegated to the endnotes, this conclusion is general.2 All the known and accepted laws relating to motion—from Newton’s mechanics just discussed, to Maxwell’s electromagnetic theory, to Einstein’s special and general theories of relativity (remember, we are putting off quantum mechanics until the next chapter)—embody time-reversal symmetry: motion that can occur in the usual forward-time direction can equally well occur in reverse. As the terminology can be a bit confusing, let me reemphasize that we are not reversing time. Time is doing what it always does. Instead, our conclusion is that we can make an object trace its trajectory in reverse by the simple procedure of reversing its velocity at any point along its path. Equivalently, the same procedure—reversing the object’s velocity at some point along its path—would make the object execute the motion we’d see in a reverse-run film.
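The velocity-reversal recipe is easy to check numerically. Below is a minimal sketch (my own illustration, not from the text): it integrates a ball’s flight under uniform gravity with the velocity-Verlet scheme, which is itself time-symmetric, then flips the velocity and confirms the ball retraces its path back to the launch point with its initial speed reversed. Gravity here is uniform rather than Jupiter’s inverse-square pull, purely to keep the arithmetic exact.

```python
# Illustrative sketch: reversing velocity retraces a Newtonian trajectory.
# Velocity-Verlet is time-symmetric, and with uniform gravity it is exact,
# so the retraced path matches to floating-point precision.

G = -9.8  # uniform downward gravitational acceleration (m/s^2)

def verlet_step(x, y, vx, vy, dt):
    # advance one step under the constant acceleration (0, G)
    x += vx * dt
    y += vy * dt + 0.5 * G * dt * dt
    vy += G * dt
    return x, y, vx, vy

def fly(x, y, vx, vy, dt, steps):
    for _ in range(steps):
        x, y, vx, vy = verlet_step(x, y, vx, vy, dt)
    return x, y, vx, vy

# forward flight: launch rightward from (0, 100)
x, y, vx, vy = fly(0.0, 100.0, 30.0, 0.0, dt=0.001, steps=4000)

# reverse the velocity and integrate the same number of steps
rx, ry, rvx, rvy = fly(x, y, -vx, -vy, dt=0.001, steps=4000)

# the ball returns to the launch point, moving with the initial
# speed in the opposite direction
assert abs(rx - 0.0) < 1e-6 and abs(ry - 100.0) < 1e-6
assert abs(rvx + 30.0) < 1e-6 and abs(rvy - 0.0) < 1e-6
```

The same check works for any force that depends only on position; the direction of integration plays no special role.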

Tennis Balls and Splattering Eggs

Watching a tennis ball shoot between Venus and Jupiter—in either direction—is not particularly interesting. But as the conclusion we’ve reached is widely applicable, let’s now go someplace more exciting: your kitchen. Place an egg on your kitchen counter, roll it toward the edge, and let it fall to the ground and splatter. To be sure, there is a lot of motion in this sequence of events. The egg falls. The shell cracks apart. Yolk splatters this way and that. The floorboards vibrate. Eddies form in the surrounding air. Friction generates heat, causing the atoms and molecules of the egg, floor, and air to jitter a little more quickly. But just as the laws of physics show us how we can make the tennis ball trace its precise path in reverse, the same laws show how we can make every piece of eggshell, every drop of yolk, every section of flooring, and every pocket of air exactly trace its motion in reverse, too. “All” we need do is reverse the velocity of each and every constituent of the splatter. More precisely, the reasoning used with the tennis ball implies that if, hypothetically, we were able to simultaneously reverse the velocity of every atom and molecule involved directly or indirectly with the splattering egg, all the splattering motion would proceed in reverse.

Again, just as with the tennis ball, if we succeeded in reversing all these velocities, what we’d see would look like a reverse-run film. But, unlike the tennis ball’s, the egg-splattering’s reversal of motion would be extremely impressive. A wave of jostling air molecules and tiny floor vibrations would converge on the collision site from all parts of the kitchen, causing every bit of shell and drop of yolk to head back toward the impact location. Each ingredient would move with exactly the same speed it had in the original splattering process, but each would now move in the opposite direction. The drops of yolk would fly back into a globule just as scores of little shell pieces arrived on the outskirts, perfectly aligned to fuse together into a smooth ovoid container. The air and floor vibrations would precisely conspire with the motion of the myriad coalescing yolk drops and shell pieces to give the newly re-formed egg just the right kick to jump off the floor in one piece, rise up to the kitchen counter, and land gently on the edge with just enough rotational motion to roll a few inches and gracefully come to rest. This is what would happen if we could perform the task of total and exact velocity reversal of everything involved.3

Thus, whether an event is simple, like a tennis ball arcing, or something more complex, like an egg splattering, the laws of physics show that what happens in one temporal direction can, at least in principle, also happen in reverse.

Principle and Practice

The stories of the tennis ball and the egg do more than illustrate the time-reversal symmetry of nature’s laws. They also suggest why, in the real world of experience, we see many things happen one way but never in reverse. To get the tennis ball to retrace its path was not that hard. We grabbed it and sent it off with the same speed but in the opposite direction. That’s it. But to get all the chaotic detritus of the egg to retrace its path would be monumentally more difficult. We’d need to grab every bit of splatter, and simultaneously send each off at the same speed but in the opposite direction. Clearly, that’s beyond what we (or even all the King’s horses and all the King’s men) can really do.

Have we found the answer we’ve been looking for? Is the reason why eggs splatter but don’t unsplatter, even though both actions are allowed by the laws of physics, a matter of what is and isn’t practical? Is the answer simply that it’s easy to make an egg splatter—roll it off a counter—but extraordinarily difficult to make it unsplatter?

Well, if it were the answer, trust me, I wouldn’t have made it into such a big deal. The issue of ease versus difficulty is an essential part of the answer, but the full story within which it fits is far more subtle and surprising. We’ll get there in due course, but we must first make the discussion of this section a touch more precise. And that takes us to the concept of entropy.


Etched into a tombstone in the Zentralfriedhof in Vienna, near the graves of Beethoven, Brahms, Schubert, and Strauss, is a single equation, S = k log W, which expresses the mathematical formulation of a powerful concept known as entropy. The tombstone bears the name of Ludwig Boltzmann, one of the most insightful physicists working at the turn of the last century. In 1906, in failing health and suffering from depression, Boltzmann committed suicide while vacationing with his wife and daughter in Italy. Ironically, just a few months later, experiments began to confirm that ideas Boltzmann had spent his life passionately defending were correct.

The notion of entropy was first developed during the industrial revolution by scientists concerned with the operation of furnaces and steam engines, who helped develop the field of thermodynamics. Through many years of research, the underlying ideas were sharply refined, culminating in Boltzmann’s approach. His version of entropy, expressed concisely by the equation on his tombstone, uses statistical reasoning to provide a link between the huge number of individual ingredients that make up a physical system and the overall properties the system has.4

To get a feel for the ideas, imagine unbinding a copy of War and Peace, throwing its 693 double-sided pages high into the air, and then gathering the loose sheets into a neat pile.5 When you examine the resulting stack, it is enormously more likely that the pages will be out of order than in order. The reason is obvious. There are many ways in which the order of the pages can be jumbled, but only one way for the order to be correct. To be in order, of course, the pages must be arranged precisely as 1, 2; 3, 4; 5, 6; and so on, up to 1,385, 1,386. Any other arrangement is out of order. A simple but essential observation is that, all else being equal, the more ways something can happen, the more likely it is that it will happen. And if something can happen in enormously more ways, like the pages landing in the wrong numerical order, it is enormously more likely that it will happen. We all know this intuitively. If you buy one lottery ticket, there is only one way you can win. If you buy a million tickets, each with different numbers, there are a million ways you can win, so your chances of striking it rich are a million times higher.

Entropy is a concept that makes this idea precise by counting the number of ways, consistent with the laws of physics, in which any given physical situation can be realized. High entropy means that there are many ways; low entropy means there are few ways. If the pages of War and Peace are stacked in proper numerical order, that is a low-entropy configuration, because there is one and only one ordering that meets the criterion. If the pages are out of numerical order, that is a high-entropy situation, because a little calculation shows that there are about 10¹⁸⁷⁸ different out-of-order page arrangements.6 If you throw the pages in the air and then gather them in a neat stack, it is almost certain that they will wind up out of numerical order, because such configurations have enormously higher entropy—there are many more ways to achieve an out-of-order outcome—than the sole arrangement in which they are in correct numerical order.
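The lopsidedness of the count is easy to see with a pamphlet-sized example. This sketch (an illustration of the counting argument, not anything from the text) enumerates every arrangement of a five-page pamphlet, ignoring the two-sidedness of real pages:

```python
# Illustrative count: ordered vs. disordered arrangements of a tiny "book".
from itertools import permutations
from math import factorial

def count_arrangements(num_pages):
    # the ordered class has exactly one member; everything else is disordered
    correct = tuple(range(num_pages))
    ordered = sum(1 for p in permutations(range(num_pages)) if p == correct)
    return ordered, factorial(num_pages) - ordered

ordered, disordered = count_arrangements(5)
# ordered == 1, disordered == 119: even a five-page pamphlet gives
# 119-to-1 odds against landing in order, and the imbalance grows
# factorially with the page count.
```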

In principle, we could use the laws of classical physics to figure out exactly where each page will land after the whole stack has been thrown in the air. So, again in principle, we could precisely predict the resulting arrangement of the pages7 and hence (unlike in quantum mechanics, which we ignore until the next chapter) there would seem to be no need to rely on probabilistic notions such as which outcome is more or less likely than another. But statistical reasoning is both powerful and useful. If War and Peace were a pamphlet of only a couple of pages, we just might be able to successfully complete the necessary calculations, but it would be impossible to do this for the real War and Peace.8 Following the precise motion of 693 floppy pieces of paper as they get caught by gentle air currents and rub, slide, and flap against one another would be a monumental task, well beyond the capacity of even the most powerful supercomputer.

Moreover—and this is critical—having the exact answer wouldn’t even be that useful. When you examine the resulting stack of pages, you are far less interested in the exact details of which page happens to be where than you are in the general question of whether the pages are in the correct order. If they are, great. You could sit down and continue reading about Anna Pavlovna and Nikolai Ilych Rostov, as usual. But if you found that the pages were not in their correct order, the precise details of the page arrangement are something you’d probably care little about. If you’ve seen one disordered page arrangement, you’ve pretty much seen them all. Unless for some strange reason you get mired in the minutiae of which pages happen to appear here or there in the stack, you’d hardly notice if someone further jumbled an out-of-order page arrangement you’d initially been given. The initial stack would look disordered and the further jumbled stack would also look disordered. So not only is the statistical reasoning enormously easier to carry out, but the answer it yields—ordered versus disordered—is more relevant to our real concern, to the kind of thing of which we would typically take note.

This sort of big-picture thinking is central to the statistical basis of entropic reasoning. Just as any lottery ticket has the same chance of winning as any other, after many tosses of War and Peace any particular ordering of the pages is just as likely to occur as any other. What makes the statistical reasoning fly is our declaration that there are two interesting classes of page configurations: ordered and disordered. The first class has one member (the correct page ordering 1, 2; 3, 4; and so on) while the second class has a huge number of members (every other possible page ordering). These two classes are a sensible set to use since, as above, they capture the overall, gross assessment you’d make on thumbing through any given page arrangement.

Even so, you might suggest making finer distinctions between these two classes, such as arrangements with just a handful of pages out of order, arrangements with only pages in the first chapter out of order, and so on. In fact, it can sometimes be useful to consider these intermediate classes. However, the number of possible page arrangements in each of these new subclasses is still extremely small compared with the number in the fully disordered class. For example, the total number of out-of-order arrangements that involve only the pages in Part One of War and Peace is 10⁻¹⁷⁸ of 1 percent of the total number of out-of-order arrangements involving all pages. So, although on the initial tosses of the unbound book the resulting page arrangement will likely belong to one of the intermediate, not fully disordered classes, it is almost certain that if you repeat the tossing action many times over, the page order will ultimately exhibit no obvious pattern whatsoever. The page arrangement evolves toward the fully disordered class, since there are so many page arrangements that fit this bill.

The example of War and Peace highlights two essential features of entropy. First, entropy is a measure of the amount of disorder in a physical system. High entropy means that many rearrangements of the ingredients making up the system would go unnoticed, and this in turn means the system is highly disordered (when the pages of War and Peace are all mixed up, any further jumbling will hardly be noticed since it simply leaves the pages in a mixed-up state). Low entropy means that very few rearrangements would go unnoticed, and this in turn means the system is highly ordered (when the pages of War and Peace start in their proper order, you can easily detect almost any rearrangement). Second, in physical systems with many constituents (for instance, books with many pages being tossed in the air) there is a natural evolution toward greater disorder, since disorder can be achieved in so many more ways than order. In the language of entropy, this is the statement that physical systems tend to evolve toward states of higher entropy.

Of course, in making the concept of entropy precise and universal, the physics definition does not involve counting the number of page rearrangements of one book or another that leave it looking the same, either ordered or disordered. Instead, the physics definition counts the number of rearrangements of fundamental constituents—atoms, subatomic particles, and so on—that leave the gross, overall, “big-picture” properties of a given physical system unchanged. As in the example of War and Peace, low entropy means that very few rearrangements would go unnoticed, so the system is highly ordered, while high entropy means that many rearrangements would go unnoticed, and that means the system is very disordered.

For a good physics example, and one that will shortly prove handy, let’s think about the bottle of Coke referred to earlier. When gas, like the carbon dioxide that was initially confined in the bottle, spreads evenly throughout a room, there are many rearrangements of the individual molecules that will have no noticeable effect. For example, if you flail your arms, the carbon dioxide molecules will move to and fro, rapidly changing positions and velocities. But overall, there will be no qualitative effect on their arrangement. The molecules were spread uniformly before you flailed your arms, and they will be spread uniformly after you’re done. The uniformly spread gas configuration is insensitive to an enormous number of rearrangements of its molecular constituents, and so is in a state of high entropy. By contrast, if the gas were spread in a smaller space, as when it was in the bottle, or confined by a barrier to a corner of the room, it would have significantly lower entropy. The reason is simple. Just as thinner books have fewer page reorderings, smaller spaces provide fewer places for molecules to be located, and so allow for fewer rearrangements.

But when you twist off the bottle’s cap or remove the barrier, you open up a whole new universe to the gas molecules, and through their bumping and jostling they quickly disperse to explore it. Why? It’s the same statistical reasoning as with the pages of War and Peace. No doubt, some of the jostling will move a few gas molecules purely within the initial blob of gas or nudge a few that have left the blob back toward the initial dense gas cloud. But since the volume of the room exceeds that of the initial cloud of gas, there are many more rearrangements available to the molecules if they disperse out of the cloud than there are if they remain within it. On average, then, the gas molecules will diffuse from the initial cloud and slowly approach the state of being spread uniformly throughout the room. Thus, the lower-entropy initial configuration, with the gas all bunched in a small region, naturally evolves toward the higher-entropy configuration, with the gas uniformly spread in the larger space. And once it has reached such uniformity, the gas will tend to maintain this state of high entropy: bumping and jostling still causes the molecules to move this way and that, giving rise to one rearrangement after another, but the overwhelming majority of these rearrangements do not affect the gross, overall appearance of the gas. That’s what it means to have high entropy.9
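Boltzmann’s S = k log W turns this picture into numbers. For the positional arrangements of a dilute gas, the count of configurations scales as W ∝ Vᴺ, so the tombstone formula implies an entropy jump of N·k·ln(V_final/V_initial) when the gas escapes the bottle. Here is a back-of-the-envelope sketch, with the bottle and room volumes as illustrative guesses of mine rather than figures from the text:

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, in joules per kelvin

def entropy_increase(num_molecules, v_initial, v_final):
    # W scales as V**N for the positional arrangements of a dilute gas,
    # so S = k log W gives dS = N * k_B * ln(V_final / V_initial)
    return num_molecules * k_B * math.log(v_final / v_initial)

# ~10**24 CO2 molecules escaping a half-liter bottle into a 30 m^3 room
# (illustrative volumes)
dS = entropy_increase(1e24, 0.0005, 30.0)
# dS comes out to roughly 150 joules per kelvin
```

The striking feature is the factor of N out front: with 10²⁴ molecules, even a modest volume ratio produces an astronomically lopsided count of arrangements.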

In principle, as with the pages of War and Peace, we could use the laws of classical physics to determine precisely where each carbon dioxide molecule will be at a given moment of time. But because of the enormous number of CO2 molecules—about 10²⁴ in a bottle of Coke—actually carrying out such calculations is practically impossible. And even if, somehow, we were able to do so, having a list of a million billion billion particle positions and velocities would hardly give us a sense of how the molecules were distributed. Focusing on big-picture statistical features—is the gas spread out or bunched up, that is, does it have high or low entropy?—is far more illuminating.

Entropy, the Second Law, and the Arrow of Time

The tendency of physical systems to evolve toward states of higher entropy is known as the second law of thermodynamics. (The first law is the familiar conservation of energy.) As above, the basis of the law is simple statistical reasoning: there are more ways for a system to have higher entropy, and “more ways” means it is more likely that a system will evolve into one of these high-entropy configurations. Notice, though, that this is not a law in the conventional sense since, although such events are rare and unlikely, something can go from a state of high entropy to one of lower entropy. When you toss a jumbled stack of pages into the air and then gather them into a neat pile, they can turn out to be in perfect numerical order. You wouldn’t want to place a high wager on its happening, but it is possible. It is also possible that the bumping and jostling will be just right to cause all the dispersed carbon dioxide molecules to move in concert and swoosh back into your open bottle of Coke. Don’t hold your breath waiting for this outcome either, but it can happen.10

The large number of pages in War and Peace and the large number of gas molecules in the room are what makes the entropy difference between the disordered and ordered arrangements so huge, and what causes low-entropy outcomes to be so terribly unlikely. If you tossed only two double-sided pages in the air over and over again, you’d find that they landed in the correct order about 12.5 percent of the time. With three pages this would drop to about 2 percent of the tosses, with four pages it’s about .3 percent, with five pages it’s about .03 percent, with six pages it’s about .002 percent, with ten pages it’s .000000027 percent, and with 693 pages the percentage of tosses that would yield the correct order is so small—it involves so many zeros after the decimal point—that I’ve been convinced by the publisher not to use another page to write it out explicitly. Similarly, if you dropped only two gas molecules side by side into an empty Coke bottle, you’d find that at room temperature their random motion would bring them back together (within a millimeter of each other), on average, roughly every few seconds. But for a group of three molecules, you’d have to wait days, for four molecules you’d have to wait years, and for an initial dense blob of a million billion billion molecules it would take a length of time far greater than the current age of the universe for their random, dispersive motion to bring them back together into a small, ordered bunch. With more certainty than death and taxes, we can count on systems with many constituents evolving toward disorder.
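The quoted percentages follow from a simple count. In the toy model the text is using, each of the n tossed double-sided sheets lands in a random position and a random face-up or face-down orientation, giving 2^n × n! equally likely outcomes, only one of which is the order Tolstoy intended. A few lines of Python (my sketch, not the author's) reproduce the figures:

```python
from math import factorial

def ordered_fraction(n):
    """Chance that n tossed double-sided pages land in perfect order:
    one correct outcome out of 2**n * n! equally likely ones."""
    return 1 / (2 ** n * factorial(n))

for n in (2, 3, 4, 5, 6, 10):
    print(f"{n} pages: {100 * ordered_fraction(n):.10f}%")
# 2 pages gives 12.5%, 3 gives about 2%, and 10 is already down to
# roughly 0.000000027%, just as quoted in the text.
```

For 693 pages the count 2^693 × 693! has well over a thousand digits, which is why the corresponding percentage is too small to print.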

Although it may not be immediately apparent, we have now come to an intriguing point. The second law of thermodynamics seems to have given us an arrow of time, one that emerges when physical systems have a large number of constituents. If you were to watch a film of a couple of carbon dioxide molecules that had been placed together in a small box (with a tracer showing the movements of each), you’d be hard pressed to say whether the film was running forward or in reverse. The two molecules would flit this way and that, sometimes coming together, sometimes moving apart, but they would not exhibit any gross, overall behavior distinguishing one direction in time from the reverse. However, if you were to watch a film of 10²⁴ carbon dioxide molecules that had been placed together in the box (as a small, dense cloud of molecules, say), you could easily determine whether the film was being shown forward or in reverse: it is overwhelmingly likely that the forward time direction is the one in which the gas molecules become more and more uniformly spread out, achieving higher and higher entropy. If, instead, the film showed uniformly dispersed gas molecules swooshing together into a tight group, you’d immediately recognize that you were watching it in reverse.
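You can watch this film in miniature with a toy simulation (a one-dimensional "box," invented parameters, and simple random jostling rather than real molecular dynamics). The coarse-grained entropy—computed from how many molecules occupy each coarse cell, ignoring their exact positions—rises as the clumped gas spreads, and that rise is exactly the cue that tells you the film is running forward:

```python
import random
from math import log

def coarse_entropy(xs, box=100.0, cells=10):
    # Shannon entropy (in nats) of the coarse-grained occupancy
    # pattern: we count only how many molecules sit in each cell,
    # the "gross, overall appearance" of the gas.
    counts = [0] * cells
    for x in xs:
        counts[min(int(x * cells / box), cells - 1)] += 1
    n = len(xs)
    return -sum(c / n * log(c / n) for c in counts if c)

def run_film(molecules=500, steps=2000, box=100.0, seed=7):
    rng = random.Random(seed)
    # Frame one: a small, dense cloud near one wall (low entropy).
    xs = [rng.uniform(0.0, 5.0) for _ in range(molecules)]
    start = coarse_entropy(xs, box)
    for _ in range(steps):
        for i in range(molecules):
            x = xs[i] + rng.uniform(-1.0, 1.0)  # random jostling
            if x < 0.0:            # walls reflect molecules back inside
                x = -x
            elif x > box:
                x = 2.0 * box - x
            xs[i] = x
    return start, coarse_entropy(xs, box)

start, end = run_film()
print(f"entropy: {start:.2f} nats at the start, {end:.2f} at the end")
```

With the whole clump inside a single cell, the starting entropy is zero; after a few thousand jostles it climbs toward the maximum of ln 10 ≈ 2.3 nats. A reversed film would show that number falling, which is how you would spot the reversal.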

The same reasoning holds for essentially all the things we encounter in daily life—things, that is, which have a large number of constituents: the forward-in-time arrow points in the direction of increasing entropy. If you watch a film of a glass of ice water placed on a bar, you can determine which direction is forward in time by checking that the ice melts—its H2O molecules disperse throughout the glass, thereby achieving higher entropy. If you watch a film of a splattering egg, you can determine which direction is forward in time by checking that the egg’s constituents become more and more disordered—that the egg splatters rather than unsplatters, thereby also achieving higher entropy.

As you can see, the concept of entropy provides a precise version of the “easy versus difficult” conclusion we found earlier. It’s easy for the pages of War and Peace to fall out of order because there are so many out-of-order arrangements. It’s difficult for the pages to fall in perfect order because hundreds of pages would need to move in just the right way to land in the unique sequence Tolstoy intended. It’s easy for an egg to splatter because there are so many ways to splatter. It’s difficult for an egg to unsplatter, because an enormous number of splattered constituents must move in perfect coordination to produce the single, unique result of a pristine egg resting on the counter. For things with many constituents, going from lower to higher entropy—from order to disorder—is easy, so it happens all the time. Going from higher to lower entropy—from disorder to order—is harder, so it happens rarely, at best.

Notice, too, that this entropic arrow is not completely rigid; there is no claim that this definition of time’s direction is 100 percent foolproof. Instead, the approach has enough flexibility to allow these and other processes to happen in reverse as well. Since the second law proclaims that entropy increase is only a statistical likelihood, not an inviolable fact of nature, it allows for the rare possibility that pages can fall into perfect numerical order, that gas molecules can coalesce and reenter a bottle, and that eggs can unsplatter. By using the mathematics of entropy, the second law expresses precisely how statistically unlikely these events are (remember, the huge numbers quoted earlier reflect how much more likely it is that pages will land out of order), but it recognizes that they can happen.

This seems like a convincing story. Statistical and probabilistic reasoning has given us the second law of thermodynamics. In turn, the second law has provided us with an intuitive distinction between what we call past and what we call future. It has given us a practical explanation for why things in daily life, things that are typically composed of huge numbers of constituents, start like this and end like that, while we never see them start like that and end like this. But over the course of many years—and thanks to important contributions by physicists like Lord Kelvin, Josef Loschmidt, Henri Poincaré, S. H. Burbury, Ernst Zermelo, and Willard Gibbs—Ludwig Boltzmann came to appreciate that the full story of time’s arrow is more surprising. Boltzmann realized that although entropy had illuminated important aspects of the puzzle, it had not answered the question of why the past and the future seem so different. Instead, entropy had redefined the question in an important way, one that leads to an unexpected conclusion.

Entropy: Past and Future

Earlier, we introduced the dilemma of past versus future by comparing our everyday observations with properties of Newton’s laws of classical physics. We emphasized that we continually experience an obvious directionality to the way things unfold in time but the laws themselves treat what we call forward and backward in time on an exactly equal footing. As there is no arrow within the laws of physics that assigns a direction to time, no pointer that declares, “Use these laws in this temporal orientation but not in the reverse,” we were led to ask: If the laws underlying experience treat both temporal orientations symmetrically, why are the experiences themselves so temporally lopsided, always happening in one direction but not the other? Where does the observed and experienced directionality of time come from?

In the last section we seemed to have made progress, through the second law of thermodynamics, which apparently singles out the future as the direction in which entropy increases. But on further thought it’s not that simple. Notice that in our discussion of entropy and the second law, we did not modify the laws of classical physics in any way. Instead, all we did was use the laws in a “big picture” statistical framework: we ignored fine details (the precise order of War and Peace’s unbound pages, the precise locations and velocities of an egg’s constituents, the precise locations and velocities of a bottle of Coke’s CO2 molecules) and instead focused our attention on gross, overall features (pages ordered vs. unordered, egg splattered vs. not splattered, gas molecules spread out vs. not spread out). We found that when physical systems are sufficiently complicated (books with many pages, fragile objects that can splatter into many fragments, gas with many molecules), there is a huge difference in entropy between their ordered and disordered configurations. And this means that there is a huge likelihood that the systems will evolve from lower to higher entropy, which is a rough statement of the second law of thermodynamics. But the key fact to notice is that the second law is derivative: it is merely a consequence of probabilistic reasoning applied to Newton’s laws of motion.

This leads us to a simple but astounding point: Since Newton’s laws of physics have no built-in temporal orientation, all of the reasoning we have used to argue that systems will evolve from lower to higher entropy toward the future works equally well when applied toward the past. Again, since the underlying laws of physics are time-reversal symmetric, there is no way for them even to distinguish between what we call the past and what we call the future. Just as there are no signposts in the deep darkness of empty space that declare this direction up and that direction down, there is nothing in the laws of classical physics that says this direction is time future and that direction is time past. The laws offer no temporal orientation; it’s a distinction to which they are completely insensitive. And since the laws of motion are responsible for how things change—both toward what we call the future and toward what we call the past—the statistical/probabilistic reasoning behind the second law of thermodynamics applies equally well in both temporal directions. Thus, not only is there an overwhelming probability that the entropy of a physical system will be higher in what we call the future, but there is the same overwhelming probability that it was higher in what we call the past. We illustrate this in Figure 6.2.
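A toy model makes the double-headed arrow concrete, provided we use reversible dynamics (here, free particles bouncing between the walls of a one-dimensional box; every parameter is invented for illustration). By time-reversal symmetry, the state some number of ticks in the past is obtained by evolving the present positions with all velocities negated, and the coarse-grained entropy comes out higher in both temporal directions from a clumped, low-entropy moment:

```python
import random
from math import log

def coarse_entropy(xs, box=100.0, cells=10):
    # Coarse-grained occupancy entropy, in nats.
    counts = [0] * cells
    for x in xs:
        counts[min(int(x * cells / box), cells - 1)] += 1
    n = len(xs)
    return -sum(c / n * log(c / n) for c in counts if c)

def evolve(xs, vs, steps, box=100.0):
    # Free, deterministic, reversible motion with reflecting walls.
    # Negating every velocity and evolving retraces each trajectory,
    # so these "laws" have no built-in arrow of time.
    xs, vs = list(xs), list(vs)
    for _ in range(steps):
        for i in range(len(xs)):
            x = xs[i] + vs[i]
            if x < 0.0:
                x, vs[i] = -x, -vs[i]
            elif x > box:
                x, vs[i] = 2.0 * box - x, -vs[i]
            xs[i] = x
    return xs, vs

rng = random.Random(3)
# The moment of interest: a clumped (low-entropy) configuration
# with random velocities.
xs = [rng.uniform(0.0, 5.0) for _ in range(500)]
vs = [rng.uniform(-1.0, 1.0) for _ in range(500)]

now = coarse_entropy(xs)
future, _ = evolve(xs, vs, 300)              # toward "the future"
past, _ = evolve(xs, [-v for v in vs], 300)  # toward "the past"
print(now, coarse_entropy(future), coarse_entropy(past))
```

Evolved either way, the clump disperses: entropy is overwhelmingly likely to be higher on both sides of the low-entropy moment, which is Figure 6.2b in miniature.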

This is the key point for all that follows, but it’s also deceptively subtle. A common misconception is that if, according to the second law of thermodynamics, entropy increases toward the future, then entropy necessarily decreases toward the past. But that’s where the subtlety comes in. The second law actually says that if at any given moment of interest a physical system happens not to possess the maximum possible entropy, it is extraordinarily likely that the physical system will subsequently have, and previously had, more entropy. That’s the content of Figure 6.2b. With laws that are blind to the past-versus-future distinction, such time symmetry is inevitable.


Figure 6.2 (a) As it’s usually described, the second law of thermodynamics implies that entropy increases toward the future of any given moment. (b) Since the known laws of nature treat forward and backward in time identically, the second law actually implies that entropy increases both toward the future and toward the past from any given moment.

That’s the essential lesson. It tells us that the entropic arrow of time is double-headed. From any specified moment, the arrow of entropy increase points toward the future and toward the past. And that makes it decidedly awkward to propose entropy as the explanation of the one-way arrow of experiential time.

Think about what the double-headed entropic arrow implies in concrete terms. If it’s a warm day and you see partially melted ice cubes in a glass of water, you have full confidence that half an hour later the cubes will be more melted, since the more melted they are, the more entropy they have.11 But you should have exactly the same confidence that half an hour earlier they were also more melted, since exactly the same statistical reasoning implies that entropy should increase toward the past. And the same conclusion applies to the countless other examples we encounter every day. Your assuredness that entropy increases toward the future— from partially dispersed gas molecules’ further dispersing to partially jumbled page orders’ getting more jumbled—should be matched by exactly the same assuredness that entropy was also higher in the past.

The troubling thing is that half of these conclusions seem to be flat-out wrong. Entropic reasoning yields accurate and sensible conclusions when applied in one time direction, toward what we call the future, but gives apparently inaccurate and seemingly ridiculous conclusions when applied toward what we call the past. Glasses of water with partially melted ice cubes do not usually start out as glasses of water with no ice cubes in which molecules of water coalesce and cool into chunks of ice, only to start melting once again. Unbound pages of War and Peace do not usually start thoroughly out of numerical order and through subsequent tosses get less jumbled, only to start getting more jumbled again. And going back to the kitchen, eggs do not generally start out splattered, and then coalesce into a pristine whole egg, only to splatter some time later.

Or do they?

Following the Math

Centuries of scientific investigations have shown that mathematics provides a powerful and incisive language for analyzing the universe. Indeed, the history of modern science is replete with examples in which the math made predictions that seemed counter to both intuition and experience (that the universe contains black holes, that the universe has anti-matter, that distant particles can be entangled, and so on) but which experiments and observations were ultimately able to confirm. Such developments have impressed themselves profoundly on the culture of theoretical physics. Physicists have come to realize that mathematics, when used with sufficient care, is a proven pathway to truth.

So, when a mathematical analysis of nature’s laws shows that entropy should be higher toward the future and toward the past of any given moment, physicists don’t dismiss it out of hand. Instead, something akin to a physicists’ Hippocratic oath impels researchers to maintain a deep and healthy skepticism of the apparent truths of human experience and, with the same skeptical attitude, diligently follow the math and see where it leads. Only then can we properly assess and interpret any remaining mismatch between physical law and common sense.

Toward this end, imagine it’s 10:30 p.m. and for the past half hour you’ve been staring at a glass of ice water (it’s a slow night at the bar), watching the cubes slowly melt into small, misshapen forms. You have absolutely no doubt that a half hour earlier the bartender put fully formed ice cubes into the glass; you have no doubt because you trust your memory. And if, by some chance, your confidence regarding what happened during the last half hour should be shaken, you can ask the guy across the way, who was also watching the ice cubes melt (it’s a really slow night at the bar), or perhaps check the video taken by the bar’s surveillance camera, both of which would confirm that your memory is accurate. If you were then to ask yourself what you expect to happen to the ice cubes during the next half hour, you’d probably conclude that they’d continue to melt. And, if you’d gained sufficient familiarity with the concept of entropy, you’d explain your prediction by appealing to the overwhelming likelihood that entropy will increase from what you see, right now at 10:30 p.m., toward the future. All that makes good sense and jibes with our intuition and experience.

But as we’ve seen, such entropic reasoning—reasoning that simply says things are more likely to be disordered since there are more ways to be disordered, reasoning which is demonstrably powerful at explaining how things unfold toward the future—proclaims that entropy is just as likely to also have been higher in the past. This would mean that the partially melted cubes you see at 10:30 p.m. would actually have been more melted at earlier times; it would mean that at 10:00 p.m. they did not begin as solid ice cubes, but, instead, slowly coalesced out of room-temperature water on the way to 10:30 p.m., just as surely as they will slowly melt into room-temperature water on their way to 11:00 p.m.

No doubt, that sounds weird—or perhaps you’d say nutty. To be true, not only would H2O molecules in a glass of room-temperature water have to coalesce spontaneously into partially formed cubes of ice, but the digital bits in the surveillance camera, as well as the neurons in your brain and those in the brain of the guy across the way, would all need to spontaneously arrange themselves by 10:30 p.m. to attest to there having been a collection of fully formed ice cubes that melted, even though there never was. Yet this bizarre-sounding conclusion is where a faithful application of entropic reasoning—the same reasoning that you embrace without hesitation to explain why the partially melted ice you see at 10:30 p.m. continues to melt toward 11:00 p.m.—leads when applied in the time-symmetric manner dictated by the laws of physics. This is the trouble with having fundamental laws of motion with no inbuilt distinction between past and future, laws whose mathematics treats the future and past of any given moment in exactly the same way.12

Rest assured that we will shortly find a way out of the strange place to which an egalitarian use of entropic reasoning has taken us; I’m not going to try to convince you that your memories and records are of a past that never happened (apologies to fans of The Matrix). But we will find it very useful to pinpoint precisely the disjuncture between intuition and the mathematical laws. So let’s keep following the trail.

A Quagmire

Your intuition balks at a past with higher entropy because, when viewed in the usual forward-time unfolding of events, it would require a spontaneous rise in order: water molecules spontaneously cooling to 0 degrees Celsius and turning into ice, brains spontaneously acquiring memories of things that didn’t happen, video cameras spontaneously producing images of things that never were, and so on, all of which seem extraordinarily unlikely—a proposed explanation of the past at which even Oliver Stone would scoff. On this point, the physical laws and the mathematics of entropy agree with your intuition completely. Such a sequence of events, when viewed in the forward time direction from 10 p.m. to 10:30 p.m., goes against the grain of the second law of thermodynamics—it results in a decrease in entropy—and so, although not impossible, it is very unlikely.

By contrast, your intuition and experience tell you that a far more likely sequence of events is that ice cubes that were fully formed at 10 p.m. partially melted into what you see in your glass, right now, at 10:30 p.m. But on this point, the physical laws and mathematics of entropy only partly agree with your expectation. Math and intuition concur that if there really were fully formed ice cubes at 10 p.m., then the most likely sequence of events would be for them to melt into the partial cubes you see at 10:30 p.m.: the resulting increase in entropy is in line both with the second law of thermodynamics and with experience. But where math and intuition deviate is that our intuition, unlike the math, fails to take account of the likelihood, or lack thereof, of actually having fully formed ice cubes at 10 p.m., given the one observation we are taking as unassailable, as fully trustworthy, that right now, at 10:30 p.m., you see partially melted cubes.

This is the pivotal point, so let me explain. The main lesson of the second law of thermodynamics is that physical systems have an overwhelming tendency to be in high-entropy configurations because there are so many ways such states can be realized. And once in such high-entropy states, physical systems have an overwhelming tendency to stay in them. High entropy is the natural state of being. You should never be surprised by or feel the need to explain why any physical system is in a high-entropy state. Such states are the norm. On the contrary, what does need explaining is why any given physical system is in a state of order, a state of low entropy. These states are not the norm. They can certainly happen. But from the viewpoint of entropy, such ordered states are rare aberrations that cry out for explanation. So the one fact in the episode we are taking as unquestionably true—your observation at 10:30 p.m. of low-entropy partially formed ice cubes—is a fact in need of an explanation.

And from the point of view of probability, it is absurd to explain this low-entropy state by invoking the even lower-entropy state, the even less likely state, that at 10 p.m. there were even more ordered, more fully formed ice cubes being observed in a more pristine, more ordered environment. Instead, it is enormously more likely that things began in an unsurprising, totally normal, high-entropy state: a glass of uniform liquid water with absolutely no ice. Then, through an unlikely but every-so-often-expectable statistical fluctuation, the glass of water went against the grain of the second law and evolved to a state of lower entropy in which partially formed ice cubes appeared. This evolution, although requiring rare and unfamiliar processes, completely avoids the even lower-entropy, the even less likely, the even more rare state of having fully formed ice cubes. At every moment between 10 p.m. and 10:30 p.m., this strange-sounding evolution has higher entropy than the normal ice-melting scenario, as you can see in Figure 6.3, and so it realizes the accepted observation at 10:30 p.m. in a way that is more likely—hugely more likely—than the scenario in which fully formed ice cubes melt. That is the crux of the matter.13


Figure 6.3 A comparison of two proposals for how the ice cubes got to their partially melted state, right now, at 10:30 p.m. Proposal 1 aligns with your memories of melting ice, but requires a comparatively low-entropy starting point at 10:00 p.m. Proposal 2 challenges your memories by describing the partially melted ice you see at 10:30 p.m. as having coalesced out of a glass of water, but starts off in a high-entropy, highly probable configuration of disorder at 10:00 p.m. Every step of the way toward 10:30 p.m., Proposal 2 involves states that are more likely than those in Proposal 1—because, as you can see in the graph, they have higher entropy—and so Proposal 2 is statistically favored.

It was a small step for Boltzmann to realize that the whole of the universe is subject to this same analysis. When you look around the universe right now, what you see reflects a great deal of biological organization, chemical structure, and physical order. Although the universe could be a totally disorganized mess, it’s not. Why is this? Where did the order come from? Well, just as with the ice cubes, from the standpoint of probability it is extremely unlikely that the universe we see evolved from an even more ordered—an even less likely—state in the distant past that has slowly unwound to its current form. Rather, because the cosmos has so many constituents, the scales of ordered versus disordered are magnified intensely. And so what’s true at the bar is true with a vengeance for the whole universe: it is far more likely—breathtakingly more likely—that the whole universe we now see arose as a statistically rare fluctuation from a normal, unsurprising, high-entropy, completely disordered configuration.

Think of it this way: if you toss a handful of pennies over and over again, sooner or later they will all land heads. If you have nearly the infinite patience needed to throw the jumbled pages of War and Peace in the air over and over again, sooner or later they will land in correct numerical order. If you wait with your open bottle of flat Coke, sooner or later the random jostling of the carbon dioxide molecules will cause them to reenter the bottle. And, for Boltzmann’s kicker, if the universe waits long enough—for nearly an eternity, perhaps—its usual, high-entropy, highly probable, totally disordered state will, through its own bumping, jostling, and random streaming of particles and radiation, sooner or later just happen to coalesce into the configuration that we all see right now. Our bodies and brains would emerge fully formed from the chaos—stocked with memories, knowledge, and skills—even though the past they seem to reflect would never really have happened. Everything we know about, everything we value, would amount to nothing more than a rare but every-so-often-expectable statistical fluctuation momentarily interrupting a near eternity of disorder. This is schematically illustrated in Figure 6.4.


Figure 6.4 A schematic graph of the universe’s total entropy through time. The graph shows the universe spending most of its time in a state of total disorder—a state of high entropy—and every so often experiencing fluctuations to states of varying degrees of order, varying states of lower entropy. The greater the entropy dip, the less likely the fluctuation. Significant dips in entropy, to the kind of order in the universe today, are extremely unlikely and would happen very rarely.
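The caption's statement that deeper dips are rarer can be made quantitative. Inverting Boltzmann's relation S = k_B ln W between entropy and the number of microscopic rearrangements, the probability of a spontaneous fluctuation whose entropy sits ΔS below the maximum falls off exponentially (this is the standard Boltzmann–Einstein fluctuation estimate, not a formula from this chapter):

```latex
P(\text{dip of depth } \Delta S) \sim e^{-\Delta S / k_B}
```

Because the ΔS required for a universe as ordered as ours is an astronomically large multiple of k_B, such a dip is astronomically suppressed relative to shallow ones, which is why Figure 6.4 shows it happening so rarely.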

Taking a Step Back

When I first encountered this idea many years ago, it was a bit of a shock. Up until that point, I had thought I understood the concept of entropy fairly well, but the fact of the matter was that, following the approach of textbooks I’d studied, I’d only ever considered entropy’s implications for the future. And, as we’ve just seen, while entropy applied toward the future confirms our intuition and experience, entropy applied toward the past just as thoroughly contradicts them. It wasn’t quite as bad as suddenly learning that you’ve been betrayed by a longtime friend, but for me, it was pretty close.

Nevertheless, sometimes it’s good not to pass judgment too quickly, and entropy’s apparent failure to live up to expectations provides a case in point. As you’re probably thinking, the idea that all we’re familiar with just popped into existence is as tantalizing as it is hard to swallow. And it’s not “merely” that this explanation of the universe challenges the veracity of everything we hold to be real and important. It also leaves critical questions unanswered. For instance, the more ordered the universe is today—the greater the dip in Figure 6.4—the more surprising and unlikely is the statistical aberration required to bring it into existence. So if the universe could have cut any corners, making things look more or less like what we see right now while skimping on the actual amount of order, probabilistic reasoning leads us to believe it would have. But when we examine the universe, there seem to be numerous lost opportunities, since there are many things that are more ordered than they have to be. If Michael Jackson never recorded Thriller and the millions of copies of this album now distributed worldwide all got there as part of an aberrant fluctuation toward lower entropy, the aberration would have been far less extreme if only a million or a half-million or just a few albums had formed. If evolution never happened and we humans got here via an aberrant jump toward lower entropy, the aberration would have been far less extreme if there weren’t such a consistent and ordered evolutionary fossil record. If the big bang never happened and the more than 100 billion galaxies we now see arose as an aberrant jump toward lower entropy, the aberration would have been less extreme if there were 50 billion, or 5,000, or just a handful, or just one galaxy. And so if the idea that our universe is a statistical fluctuation—a happy fluke—has any validity, one would need to address how and why the universe went so far overboard and achieved a state of such low entropy.

Even more pressing, if you truly can’t trust memories and records, then you also can’t trust the laws of physics. Their validity rests on numerous experiments whose positive outcomes are attested to only by those very same memories and records. So all the reasoning based on the time-reversal symmetry of the accepted laws of physics would be totally thrown into question, thereby undermining our understanding of entropy and the whole basis for the current discussion. By embracing the conclusion that the universe we know is a rare but every-so-often-expectable statistical fluctuation from a configuration of total disorder, we’re quickly led into a quagmire in which we lose all understanding, including the very chain of reasoning that led us to consider such an odd explanation in the first place.14

Thus, by suspending disbelief and diligently following the laws of physics and the mathematics of entropy—concepts which in combination tell us that it is overwhelmingly likely that disorder will increase both toward the future and toward the past from any given moment—we have gotten ourselves neck deep in quicksand. And while that might not sound pleasant, for two reasons it’s a very good thing. First, it shows with precision why mistrust of memories and records—something at which we intuitively scoff—doesn’t make sense. Second, by reaching a point where our whole analytical scaffolding is on the verge of collapse, we realize, forcefully, that we must have left something crucial out of our reasoning.

Therefore, to avoid the explanatory abyss, we ask ourselves: what new idea or concept, beyond entropy and the time symmetry of nature’s laws, do we need in order to go back to trusting our memories and our records—our experience of room-temperature ice cubes melting and not unmelting, of cream and coffee mixing but not unmixing, of eggs splattering but not unsplattering? In short, where are we led if we try to explain an asymmetric unfolding of events in spacetime, with entropy to our future higher, but entropy to our past lower? Is it possible?

It is. But only if things were very special early on.14

The Egg, the Chicken, and the Big Bang

To see what this means, let’s take the example of a pristine, low-entropy, fully formed egg. How did this low-entropy physical system come into being? Well, putting our trust back in memories and records, we all know the answer. The egg came from a chicken. And that chicken came from an egg, which came from a chicken, which came from an egg, and so on. But, as emphasized most forcefully by the English mathematician Roger Penrose,15 this chicken-and-egg story actually teaches us something deep and leads somewhere definite.

A chicken, or any living being for that matter, is a physical system of astonishingly high order. Where does this organization come from and how is it sustained? A chicken stays alive, and in particular, stays alive long enough to produce eggs, by eating and breathing. Food and oxygen provide the raw materials from which living beings extract the energy they require. But there is a critical feature of this energy that must be emphasized if we are to really understand what’s going on. Over the course of its life, a chicken that stays fit takes in just about as much energy in the form of food as it gives back to the environment, mostly in the form of heat and other waste generated by its metabolic processes and daily activities. If there weren’t such a balance of energy-in and energy-out, the chicken would get increasingly hefty.

The essential point, though, is that all forms of energy are not equal. The energy a chicken gives off to the environment in the form of heat is highly disordered—it often results in some air molecules here or there jostling around a touch more quickly than they otherwise would. Such energy has high entropy—it is diffuse and intermingled with the environment—and so cannot easily be harnessed for any useful purpose. To the contrary, the energy the chicken takes in from its feed has low entropy and is readily harnessed for important life-sustaining activities. So the chicken, and every life form in fact, is a conduit for taking in low-entropy energy and giving off high-entropy energy.

This realization pushes the question of where the low entropy of an egg originates one step further back. How is it that the chicken’s energy source, the food, has such low entropy? How do we explain this aberrant source of order? If the food is of animal origin, we are led back to the initial question of how animals have such low entropy. But if we follow the food chain, we ultimately come upon animals (like me) that eat only plants. How do plants and their products of fruits and vegetables maintain low entropy? Through photosynthesis, plants use sunlight to separate ambient carbon dioxide into oxygen, which is given back to the environment, and carbon, which the plants use to grow and flourish. So we can trace the low-entropy, nonanimal sources of energy to the sun.

This pushes the question of explaining low entropy another step further back: where did our highly ordered sun come from? The sun formed about 5 billion years ago from an initially diffuse cloud of gas that began to swirl and clump under the mutual gravitational attraction of all its constituents. As the gas cloud got denser, the gravitational pull of one part on another got stronger, causing the cloud to collapse further in on itself. And as gravity squeezed the cloud tighter, it got hotter. Ultimately, it got hot enough to ignite nuclear processes that generated enough outward-flowing radiation to stem further gravitational contraction of the gas. A hot, stable, brightly burning star was born.

So where did the diffuse cloud of gas come from? It likely formed from the remains of older stars that reached the end of their lives, went supernova, and spewed their contents out into space. Where did the diffuse gas responsible for these early stars come from? We believe that the gas was formed in the aftermath of the big bang. Our most refined theories of the origin of the universe—our most refined cosmological theories—tell us that by the time the universe was a couple of minutes old, it was filled with a nearly uniform hot gas composed of roughly 75 percent hydrogen, 23 percent helium, and small amounts of deuterium and lithium. The essential point is that this gas filling the universe had extraordinarily low entropy. The big bang started the universe off in a state of low entropy, and that state appears to be the source of the order we currently see. In other words, the current order is a cosmological relic. Let’s discuss this important realization in a little more detail.

Entropy and Gravity

Because theory and observation show that within a few minutes after the big bang, primordial gas was uniformly spread throughout the young universe, you might think, given our earlier discussion of the Coke and its carbon dioxide molecules, that the primordial gas was in a high-entropy, disordered state. But this turns out not to be true. Our earlier discussion of entropy completely ignored gravity, a sensible thing to do because gravity hardly plays a role in the behavior of the small amount of gas emerging from a bottle of Coke. And with that assumption, we found that uniformly dispersed gas has high entropy. But when gravity matters, the story is very different. Gravity is a universally attractive force; hence, if you have a large enough mass of gas, every region of gas will pull on every other and this will cause the gas to fragment into clumps, somewhat as surface tension causes water on a sheet of wax paper to fragment into droplets. When gravity matters, as it did in the high-density early universe, clumpiness—not uniformity—is the norm; it is the state toward which a gas tends to evolve, as illustrated in Figure 6.5.

Even though the clumps appear to be more ordered than the initially diffuse gas—much as a playroom with toys that are neatly grouped in trunks and bins is more ordered than one in which the toys are uniformly strewn around the floor—in calculating entropy you need to tally up the contributions from all sources. For the playroom, the entropy decrease in going from wildly strewn toys to their all being “clumped” in trunks and bins is more than compensated for by the entropy increase from the fat burned and heat generated by the parents who spent hours cleaning and arranging everything. Similarly, for the initially diffuse gas cloud, you find that the entropy decrease through the formation of orderly clumps is more than compensated by the heat generated as the gas compresses, and, ultimately, by the enormous amount of heat and light released when nuclear processes begin to take place.


Figure 6.5 For huge volumes of gas, when gravity matters, atoms and molecules evolve from a smooth, evenly spread configuration into one involving larger and denser clumps.

This is an important point that is sometimes overlooked. The overwhelming drive toward disorder does not mean that orderly structures like stars and planets, or orderly life forms like plants and animals, can’t form. They can. And they obviously do. What the second law of thermodynamics entails is that in the formation of order there is generally a more-than-compensating generation of disorder. The entropy balance sheet is still in the black even though certain constituents have become more ordered. And of the fundamental forces of nature, gravity is the one that exploits this feature of the entropy tally to the hilt. Because gravity operates across vast distances and is universally attractive, it instigates the formation of the ordered clumps—stars—that give off the light we see in a clear night sky, all in keeping with the net balance of entropy increase.

The more squeezed, dense, and massive the clumps of gas are, the larger the overall entropy. Black holes, the most extreme form of gravitational clumping and squeezing in the universe, take this to the limit. The gravitational pull of a black hole is so strong that nothing, not even light, is able to escape, which explains why black holes are black. Thus, unlike ordinary stars, black holes stubbornly hold on to all the entropy they produce: none of it can escape the black hole’s powerful gravitational grip.15 In fact, as we will discuss in Chapter 16, nothing in the universe contains more disorder—more entropy—than a black hole.16 This makes good intuitive sense: high entropy means that many rearrangements of the constituents of an object go unnoticed. Since we can’t see inside a black hole, it is impossible for us to detect any rearrangement of its constituents—whatever those constituents may be—and hence black holes have maximum entropy. When gravity flexes its muscles to the limit, it becomes the most efficient generator of entropy in the known universe.
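The claim that nothing out-entropies a black hole can be checked numerically using the standard Bekenstein–Hawking formula (a result the text defers to Chapter 16, not derived here). The following sketch, using assumed SI values for the constants and a solar-mass black hole as the example, estimates just how large that entropy is:

```python
import math

# Bekenstein-Hawking entropy: S = k_B * c^3 * A / (4 * G * hbar),
# where A is the area of the event horizon. Constants below are
# assumed SI values; M_sun is the mass of the sun in kilograms.
k_B   = 1.380649e-23      # Boltzmann's constant, J/K
hbar  = 1.054571817e-34   # reduced Planck constant, J*s
G     = 6.67430e-11       # Newton's constant, m^3 kg^-1 s^-2
c     = 2.99792458e8      # speed of light, m/s
M_sun = 1.989e30          # solar mass, kg

r_s = 2 * G * M_sun / c**2   # Schwarzschild radius (~3 km)
A   = 4 * math.pi * r_s**2   # horizon area
S   = k_B * c**3 * A / (4 * G * hbar)

print(S / k_B)  # entropy in units of k_B: ~1e77
```

For comparison, the sun itself carries an entropy of very roughly 10^58 in the same units, so collapsing a solar mass into a black hole multiplies the entropy by some nineteen orders of magnitude, which is the sense in which gravity at full strength is the most efficient entropy generator known.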

We have now come to the place where the buck finally stops. The ultimate source of order, of low entropy, must be the big bang itself. In its earliest moments, rather than being filled with gargantuan containers of entropy such as black holes, as we would expect from probabilistic considerations, for some reason the nascent universe was filled with a hot, uniform, gaseous mixture of hydrogen and helium. Although this configuration has high entropy when densities are so low that we can ignore gravity, the situation is otherwise when gravity can’t be ignored; then, such a uniform gas has extremely low entropy. In comparison with black holes, the diffuse, nearly uniform gas was in an extraordinarily low-entropy state. Ever since, in accordance with the second law of thermodynamics, the overall entropy of the universe has been gradually getting higher and higher; the overall, net amount of disorder has been gradually increasing. After a billion years or so, gravity caused the primordial gas to clump, and the clumps ultimately formed stars, galaxies, and some lighter clumps that became planets. At least one such planet had a nearby star that provided a relatively low-entropy source of energy that allowed low-entropy life forms to evolve, and among such life forms there eventually was a chicken that laid an egg that found its way to your kitchen counter, and much to your chagrin that egg continued on the relentless trajectory to a higher-entropy state by rolling off the counter and splattering on the floor. The egg splatters rather than unsplatters because it is carrying forward the drive toward higher entropy that was initiated by the extraordinarily low-entropy state with which the universe began. Incredible order at the beginning is what started it all off, and we have been living through the gradual unfolding toward higher disorder ever since.

This is the stunning connection we’ve been leading up to for the entire chapter. A splattering egg tells us something deep about the big bang. It tells us that the big bang gave rise to an extraordinarily ordered nascent cosmos.

The same idea applies to all other examples. The reason why tossing the newly unbound pages of War and Peace into the air results in a state of higher entropy is that they began in such a highly ordered, low-entropy form. Their initial ordered form made them ripe for entropy increase. By contrast, if the pages initially were totally out of numerical order, tossing them in the air would hardly make a difference, as far as entropy goes. So the question, once again, is: how did they become so ordered? Well, Tolstoy wrote them to be presented in that order and the printer and binder followed his instructions. And the highly ordered bodies and minds of Tolstoy and the book producers, which allowed them, in turn, to create a volume of such high order, can be explained by following the same chain of reasoning we just followed for an egg, once again leading us back to the big bang. How about the partially melted ice cubes you saw at 10:30 p.m.? Now that we are trusting memories and records, you remember that just before 10 p.m. the bartender put fully formed ice cubes in your glass. He got the ice cubes from a freezer, which was designed by a clever engineer and fabricated by talented machinists, all of whom are capable of creating something of such high order because they themselves are highly ordered life forms. And again, we can sequentially trace their order back to the highly ordered origin of the universe.
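The page-counting logic behind this argument can be made concrete with a short sketch. Below, entropy is Boltzmann's S = k ln W with k set to 1, where W counts the arrangements consistent with how the stack of pages looks; the page count of 693 is an assumed figure for illustration only:

```python
import math

N = 693  # assumed page count, purely for illustration

# Exactly one arrangement has every page in correct numerical order,
# so the ordered stack has entropy ln(1) = 0.
ordered_entropy = math.log(1)

# A tossed stack can land in any of N! orderings, and nearly all of
# them look equally jumbled, so a jumbled stack's entropy is roughly
# ln(N!). lgamma(N + 1) computes ln(N!) without integer overflow.
jumbled_entropy = math.lgamma(N + 1)

print(ordered_entropy)  # 0.0
print(jumbled_entropy)  # roughly 3.8e3: an enormous jump
```

This is why tossing an ordered stack is a one-way street, entropically speaking: starting from the single ordered arrangement, almost any toss lands among the astronomically many jumbled ones, whereas tossing an already jumbled stack just trades one jumbled arrangement for another and leaves the entropy essentially unchanged.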

The Critical Input

The revelation we’ve come to is that we can trust our memories of a past with lower, not higher, entropy only if the big bang—the process, event, or happening that brought the universe into existence—started off the universe in an extraordinarily special, highly ordered state of low entropy. Without that critical input, our earlier realization that entropy should increase toward both the future and the past from any given moment would lead us to conclude that all the order we see arose from a chance fluctuation from an ordinary disordered state of high entropy, a conclusion, as we’ve seen, that undermines the very reasoning on which it’s based. But by including the unlikely, low-entropy starting point of the universe in our analysis, we now see that the correct conclusion is that entropy increases toward the future, since probabilistic reasoning operates fully and without constraint in that direction; but entropy does not increase toward the past, since that use of probability would run afoul of our new proviso that the universe began in a state of low, not high, entropy.17 Thus, conditions at the birth of the universe are critical to directing time’s arrow. The future is indeed the direction of increasing entropy. The arrow of time—the fact that things start like this and end like that but never start like that and end like this—began its flight in the highly ordered, low-entropy state of the universe at its inception.18

The Remaining Puzzle

That the early universe set the direction of time’s arrow is a wonderful and satisfying conclusion, but we are not done. A huge puzzle remains. How is it that the universe began in such a highly ordered configuration, setting things up so that for billions of years to follow everything could slowly evolve through steadily less ordered configurations toward higher and higher entropy? Don’t lose sight of how remarkable this is. As we emphasized, from the standpoint of probability it is much more likely that the partially melted ice cubes you saw at 10:30 p.m. got there because a statistical fluke acted itself out in a glass of liquid water, than that they originated in the even less likely state of fully formed ice cubes. And what’s true for ice cubes is true a gazillion times over for the whole universe. Probabilistically speaking, it is mind-bogglingly more likely that everything we now see in the universe arose from a rare but every-so-often-expectable statistical aberration away from total disorder, rather than having slowly evolved from the even more unlikely, the incredibly more ordered, the astoundingly low-entropy starting point required by the big bang.19

Yet, when we went with the odds and imagined that everything popped into existence by a statistical fluke, we found ourselves in a quagmire: that route called into question the laws of physics themselves. And so we are inclined to buck the bookies and go with a low-entropy big bang as the explanation for the arrow of time. The puzzle then is to explain how the universe began in such an unlikely, highly ordered configuration. That is the question to which the arrow of time points. It all comes down to cosmology.20

We will take up a detailed discussion of cosmology in Chapters 8 through 11, but notice first that our discussion of time suffers from a serious shortcoming: everything we’ve said has been based purely on classical physics. Let’s now consider how quantum mechanics affects our understanding of time and our pursuit of its arrow.