## From Eternity to Here: The Quest for the Ultimate Theory of Time - Sean Carroll (2010)

### Part IV. FROM THE KITCHEN TO THE MULTIVERSE

### Chapter 16. EPILOGUE

*Glance into the world just as though time were gone: and everything crooked will become straight to you.*

*—Friedrich Nietzsche*

Unlike many authors, I had no struggle settling on the title for this book.^{296} Once I had come up with *From Eternity to Here*, it seemed irresistible. The connotations were perfect: On the one hand, a classic movie (based on a classic novel), with that iconic scene of untamed waves from the Pacific crashing around lovers Deborah Kerr and Burt Lancaster caught in a passionate embrace. On the other hand, the cosmological grandeur implicit in the word *eternity*.

But the title is even more appropriate than those superficial considerations might suggest. This book has not only been about “eternity”; it’s also been about “here.” The puzzle of the arrow of time doesn’t begin with giant telescopes or powerful particle accelerators; it’s in our kitchens, every time we break an egg. Or stir milk into coffee, or put an ice cube into warm water, or spill wine onto the carpet, or let aromas drift through a room, or shuffle a new deck of cards, or turn a delicious meal into biological energy, or experience an event that leaves a lasting memory, or give birth to a new generation. All of these commonplace occurrences exhibit the fundamental irreversibility that is the hallmark of the arrow of time.

The chain of reasoning that started with an attempt to understand that arrow led us inexorably to cosmology—to eternity. Boltzmann provided us with an elegant and compelling microscopic understanding of entropy in terms of statistical mechanics. But that understanding does not explain the Second Law of Thermodynamics unless we also invoke a boundary condition—why was the entropy ever low to start with? The entropy of an unbroken egg is much lower than it could be, but such eggs are nevertheless common, because the overall entropy of the universe is much lower than it could be. And that’s because it used to be even lower, all the way back to the beginning of what we can observe. What happens here, in our kitchen, is intimately connected with what happens in eternity, at the beginning of the universe.

Figures such as Galileo, Newton, and Einstein are celebrated for proposing laws of physics that hadn’t previously been appreciated. But their accomplishments also share a common theme: They illuminate the *universality* of Nature. What happens here happens everywhere—as Richard Feynman put it, “The entire universe is in a glass of wine, if we look at it closely enough.”^{297} Galileo showed that the heavens were messy and ever changing, just like conditions here on Earth; Newton understood that the same laws of gravity that accounted for falling apples could explain the motions of the planets; and Einstein realized that space and time were different aspects of a single unified spacetime, and that the curvature of spacetime underlies the dynamics of the Solar System and the birth of the universe.

Likewise, the rules governing entropy and time are common to our everyday lives and to the farthest stretches of the multiverse. We don’t yet know all the answers, but we’re on the threshold of making progress on some big questions.

**WHAT’S THE ANSWER?**

Over the course of this book, we’ve lovingly investigated what we know about how time works, both in the smooth deterministic context of relativity and spacetime, and in the messy probabilistic world of statistical mechanics. We finally arrived at cosmology, and explored how our best theories of the universe fall embarrassingly short when confronted with the universe’s most obvious feature: the difference in entropy between early times and late times. Then, after fourteen chapters of building up the problems, we devoted a scant single chapter to the possible solutions, and fell short of a full-throated endorsement of any of them.

That may seem frustrating, but the balance was entirely intentional. Understanding a deeply puzzling feature of the natural world is a process that can go through many stages—we may be utterly clueless, we may understand how to state the problem but not have any good ideas about the answer, we may have several reasonable answers at our disposal but not know which (if any) are right, or we may have it all figured out. The arrow of time falls in between the second and third of these options—we can state the problem very clearly but have only a few vague ideas of what the answer might be.

In such a situation, it’s appropriate to dwell on understanding the problem, and not become too wedded to any of the prospective solutions. A century from now, most everything we covered in the first three parts of this book should remain standing. Relativity is on firm ground, as is quantum mechanics, and the framework of statistical mechanics. We are even confident in our understanding of the basic evolution of the universe, at least from a minute or so after the Big Bang up to today. But our current ideas about quantum gravity, the multiverse, and what happened at the Big Bang are still very speculative. They may grow into a robust understanding, but many of them may be completely abandoned. At this point it’s more important to understand the map of the territory than to squabble over what is the best route to take through it.

Our universe isn’t a fluctuation around an equilibrium background, or it would look very different. And it doesn’t seem likely that the fundamental laws of physics are irreversible at a microscopic level—or, if they are, it’s very hard to see how that could actually account for the evolution of entropy and complexity we observe in our universe. A boundary condition stuck at the beginning of time is impossible to rule out, but also seems to be avoiding the question more than answering it. It may ultimately be the best we can do, but I strongly suspect that the low entropy of our early universe is a clue to something deeper, not just a brute fact we can do no more than accept.

We’re left with the possibility that our observable universe is part of a much larger structure, the multiverse. By situating what we see inside a larger ensemble, we open the possibility of explaining our apparently finely tuned beginning without imposing any fine-tuning on the multiverse as a whole. That move isn’t sufficient, of course; we need to show why there should be a consistent entropy gradient, and why that gradient should be manifested in a universe that looks like our own, rather than in some other way.

We discussed a specific model of which I am personally fond: a universe that is mostly high-entropy de Sitter space, but which gives birth to disconnected baby universes, allowing the entropy to increase without bound and creating patches of spacetime like the one around us along the way. The details of this model are highly speculative, and rely on assumptions that stretch beyond what the state of the art allows us to reliably compute, to put it mildly. More important, I think, is the general paradigm, according to which entropy is seen to be increasing because entropy can always increase; there is no equilibrium state for the universe. That setup naturally leads to an entropy gradient, and is naturally time-symmetric about some moment of minimal (although not necessarily “small”) entropy. It would be interesting to see if there are other ways of possibly carrying out this general program.

There is one other approach lurking in the background, which we occasionally acknowledged but never granted our undivided attention: the idea that “time” itself is simply an approximation that is occasionally useful, including in our local universe, but doesn’t have any fundamental meaning. This is a perfectly legitimate possibility. Lessons from the holographic principle, as well as a general feeling that the underlying ingredients of a quantum mechanical theory may appear very different from what shows up in the classical regime, make it quite reasonable to imagine that time might be an emergent phenomenon rather than a necessary part of our ultimate description of the world.

One reason why the time-is-just-an-approximation alternative wasn’t emphasized in this book is that there doesn’t seem to be too much to say about it, at least within our present state of knowledge. Even by our somewhat forgiving standards, the way in which time might emerge from a more fundamental description is not well understood. But there is a more compelling reason, as well: Even if time is only an approximation, it’s an approximation that seems extremely good in the part of the universe we can observe, and that’s where the arrow-of-time problem is to be found. Sure, we can imagine that the viability of classical spacetime as a useful concept breaks down completely near the Big Bang. But, all by itself, that doesn’t tell us anything at all about why conditions at that end of time (what we call “the past”) should be so different from conditions at the other end of time (“the future”) within our observable patch. Unless you can say, “Time is only an approximate concept, and therefore entropy should behave as follows in the regime where it’s valid to speak about time,” this alternative seems more like an evasive maneuver than a viable strategy. But that is largely a statement about our ignorance; it is certainly possible that the ultimate answer might lie in this direction.

**THE EMPIRICAL CIRCLE**

The pioneers of thermodynamics—Carnot, Clausius, and others—were motivated by practical desires; among other things, they wanted to build better steam engines. We’ve traveled directly from their insights to grand speculations about universes beyond our own. The crucial question is: How do we get back? Even if our universe does have an arrow of time because it belongs to a multiverse with an unbounded entropy, how would we ever know?

Scientists are fiercely proud of the *empirical* nature of what they do. Scientific theories do not become accepted because they are logical or beautiful, or fulfill some philosophical goal cherished by the scientist. Those might be good reasons why a theory is *proposed*—but being accepted is a much higher standard. Scientific theories must, at the end of the day, fit the data. No matter how intrinsically compelling a theory might be, if it fails to fit the data, it’s a curiosity, not an achievement.

But this criterion of “fitting the data” is more slippery than it first appears. For one thing, lots of very different theories might fit the data; for another, a very promising theory might not completely fit the data as it currently stands, even though there is a kernel of truth to it. At a more subtle level, one theory might seem to fit the data perfectly well, but lead to a conceptual dead end, or to an intrinsic inconsistency, while another theory doesn’t fit the data well at all, but holds promise for developing into something more acceptable. After all, no matter how much data we collect, we have only ever performed a tiny fraction of all possible experiments. How are we to choose?

The reality of how science is done can’t be whittled down to a few simple mottos. The issue of distinguishing “science” from “not science” is sufficiently tricky that it goes by its own name: the *demarcation problem*. Philosophers of science have great fun arguing into the night about the proper way to resolve the demarcation problem.

Despite the fact that the goal of a scientific theory is to fit the data, the worst possible scientific theory would be one that fit *all possible* data. That’s because the real goal isn’t just to “fit” what we see in the universe; it’s to *explain* what we see. And you can explain what we see only if you understand why things are the particular way they are, rather than some other way. In other words, your theory has to say that some things do not ever happen—otherwise you haven’t said very much at all.

This idea was put forth most forcefully by Sir Karl Popper, who claimed that the important feature of a scientific theory wasn’t whether it was “verifiable,” but whether it was “falsifiable.”^{298} That’s not to say that there are data that contradict the theory—only that the theory clearly makes predictions that could, in principle, be contradicted by some experiment we could imagine doing. The theory has to stick its neck out; otherwise, it’s not scientific. Popper had in mind Karl Marx’s theory of history, and Sigmund Freud’s theory of psychoanalysis. These influential intellectual constructs, in his mind, fell far short of the scientific status their proponents liked to claim. Popper felt that you could take anything that happened in the world, or any behavior shown by a human being, and come up with an “explanation” of those data on the basis of Marx or Freud—but you wouldn’t ever be able to point to any observed event and say, “Aha, there’s no way to make that consistent with these theories.” He contrasted these with Einstein’s theory of relativity, which sounded equally esoteric and inscrutable to the person on the street, but made very definite predictions that (had the experiments turned out differently) could have falsified the theory.

**THE MULTIVERSE IS NOT A THEORY**

Where does that leave the multiverse? Here we are, claiming to be engaged in the practice of science, attempting to “explain” the observed arrow of time in our universe by invoking an infinite plethora of unobservable other universes. How is the claim that other universes exist falsifiable? It should come as no surprise that this kind of speculative theorizing about unobservable things leaves a bad taste in the mouths of many scientists. If you can’t make a specific prediction that I could imagine doing an experiment to falsify, they say, what you’re doing isn’t science. It’s philosophy at best, and not very good philosophy at that.

But the truth, as is often the case, is a bit more complicated. All this talk of multiverses might very well end up being a dead end. A century from now, our successors might be shaking their heads at all the intellectual effort that was wasted on trying to figure out what came before the Big Bang, as much as we wonder at all that work put into alchemy or the caloric theory of heat. But it won’t be because modern cosmologists had abandoned the true path of science; it will (if that’s how things turn out) simply be because the theory wasn’t correct.

Two points deserve to be emphasized concerning the role of unobservable things in science. First, it’s wrong to think of the goal of science as simply to fit the data. The goal of science goes much deeper than that: It’s to *understand* the behavior of the natural world.^{299} In the early seventeenth century, Johannes Kepler proposed his three laws of planetary motion, which correctly accounted for the voluminous astronomical data that had been collected by his mentor, Tycho Brahe. But we didn’t really understand the dynamics of planets within the Solar System until Isaac Newton showed that they could all be explained in terms of a simple inverse-square law for gravity. Similarly, we don’t need to look beyond the Big Bang to understand the evolution of our observable universe; all we have to do is specify what conditions were like at early times, and leave it at that. But that’s a strategy that denies us any understanding of why things were the way they were.

Similar logic would have argued against the need for the theory of inflation; all inflation did was take things that we already knew were true about the universe (flatness, uniformity, absence of monopoles) and attempt to explain them in terms of simple underlying rules. We didn’t need to do that; we could have accepted things as they are. But as a result of our desire to do better, to actually understand the early universe rather than simply accept it, we discovered that inflation provides more than we had even asked for: a theory of the origin and nature of the primordial perturbations that grow into galaxies and large-scale structure. That’s the benefit to searching for understanding, rather than being content with fitting the data: True understanding leads you places you didn’t know you wanted to go. If we someday understand why the early universe had a low entropy, it is a good bet that the underlying mechanism will teach us more than that single fact.

The second point is even more important, although it sounds somewhat trivial: science is a messy, complicated business. It will never stop being true that the basis of science is empirical knowledge; we are guided by data, not by pure reason. But along the way to being guided by data, we use all sorts of nonempirical clues and preferences in constructing models and comparing them to one another. There’s nothing wrong with that. Just because the end product must be judged on the basis of how well it explains the data, doesn’t mean that every step along the way must have the benefit of an intimate and detailed contact with experiment.

More specifically: The multiverse is not a “theory.” If it were, it would be perfectly fair to criticize it on the basis of our difficulty in coming up with possible experimental tests. The correct way to think about the multiverse is as a *prediction*. The theory—such as it is, in its current underdeveloped state—is the marriage of the principles behind quantum field theory to our basic understanding of how curved spacetime works. Starting from those inputs, we don’t simply theorize that the universe could have undergone an early period of superfast acceleration; we *predict* that inflation should occur, if a quantum inflaton field with the right properties finds itself in the right state. Likewise, we don’t simply say, “Wouldn’t it be cool if there were an infinite number of different universes?” Rather, we predict on the basis of reasonable extrapolations of gravity and quantum field theory that a multiverse really should exist.

The prediction that we live in a multiverse is, as far as we can tell, untestable. (Although, who knows? Scientists have come up with remarkably clever ideas before.) But that misses the point. The multiverse is part of a larger, more comprehensive structure. The question should be not “How can we test whether there is a multiverse?” but “How can we test the theories that predict the multiverse should exist?” Right now we don’t know how to use those theories to make a falsifiable prediction. But there’s no reason to think that we can’t, in principle, do so. It will require a lot more work on the part of theoretical physicists to develop these ideas to the point where we can say what, if any, the testable predictions might be. One might be *impatient* that those predictions aren’t laid out before them straightforwardly right from the start—but that’s a personal preference, not a principled philosophical stance. Sometimes it takes time for a promising scientific idea to be nurtured and developed to the point where we can judge it fairly.

**THE SEARCH FOR MEANING IN A PREPOSTEROUS UNIVERSE**

Throughout history, human beings have (quite naturally) tended to consider the universe in human-being-centric terms. That might mean something as literal as putting ourselves at the geographical center of the universe—an assumption that took some effort to completely overcome. Ever since the heliocentric model of the Solar System gained widespread acceptance, scientists have held up the Copernican Principle—“we do not occupy a favored place in the universe”—as a caution against treating ourselves as something special.

But at a deeper level, our anthropocentrism manifests itself as a conviction that human beings somehow *matter* to the universe. This feeling is at the core of much of the resistance in some quarters to accepting Darwin’s theory of natural selection as the right explanation for the evolution of life on Earth. The urge to think that we matter can take the form of a straightforward belief that we (or some subset of us) are God’s chosen people, or something as vague as an insistence that all this marvelous world around us must be more than just an *accident*.

Different people have different definitions of the word *God*, or different notions of what the nominal purpose of human life might be. God can become such an abstract and transcendental concept that the methods of science have nothing to say about the matter. If God is identified with Nature, or the laws of physics, or our feeling of awe when contemplating the universe, the question of whether or not such a concept provides a useful way of thinking about the world is beyond the scope of empirical inquiry.

There is a very different tradition, however, that seeks evidence for God in the workings of the physical universe. This is the approach of natural theology, which stretches from long before Aristotle, through William Paley’s watchmaker analogy, up to the present day.^{300} It used to be that the best evidence in favor of the argument from design came from living organisms, but Darwin provided an elegant mechanism to explain what had previously seemed inexplicable. In response, some adherents to this philosophy have shifted their focus to a different seemingly inexplicable thing: from the origin of life to the origin of the cosmos.

The Big Bang model, with its singular beginning, seems to offer encouragement to those who would look for the finger of God in the creation of the universe. (Georges Lemaître, the Belgian priest who developed the Big Bang model, refused to enlist it for any theological purposes: “As far as I can see, such a theory remains entirely outside of any metaphysical or religious question.”^{301}) In Newtonian spacetime, there wasn’t even any such thing as the creation of the universe, at least not as an event happening at a particular time; time and space persisted forever. The introduction of a particular beginning to spacetime, especially one that apparently defies easy understanding, creates a temptation to put the responsibility for explaining what went on into the hands of God. Sure, the reasoning goes, you can find dynamical laws that govern the evolution of the universe from moment to moment, but explaining the creation of the universe itself requires an appeal to something outside the universe.

Hopefully, one of the implicit lessons of this book has been that it’s not a good idea to bet against the ability of science to explain anything whatsoever about the operation of the natural world, including its beginning. The Big Bang represented a point past which our understanding didn’t stretch, back when it was first studied in the 1920s—and it continues to do so today. We don’t know exactly what happened 14 billion years ago, but there’s no reason whatsoever to doubt that we will eventually figure it out. Scientists are tackling the problem from a variety of angles. The rate at which scientific understanding advances is notoriously hard to predict, but it’s not hard to predict that it will be advancing.

Where does that leave us? Giordano Bruno argued for a homogeneous universe with an infinite number of stars and planets. Avicenna and Galileo, with the conservation of momentum, undermined the need for a Prime Mover to explain the persistence of motion. Darwin explained the development of species as an undirected process of descent with random modifications, chosen by natural selection. Modern cosmology speculates that our observable universe could be only one of an infinite number of universes within a grand ensemble multiverse. The more we understand about the world, the smaller and more peripheral to its operation we seem to be.^{302}

That’s okay. We find ourselves, not as a central player in the life of the cosmos, but as a tiny epiphenomenon, flourishing for a brief moment as we ride a wave of increasing entropy from the Big Bang to the quiet emptiness of the future universe. Purpose and meaning are not to be found in the laws of nature, or in the plans of any external agent who made things that way; it is our job to create them. One of those purposes—among many—stems from our urge to explain the world around us the best we can. If our lives are brief and undirected, at least we can take pride in our mutual courage as we struggle to understand things much greater than ourselves.

**NEXT STEPS**

It’s surprisingly hard to think clearly about time. We’re all familiar with it, but the problem might be that we’re *too* familiar. We’re so used to the arrow of time that it’s hard to conceptualize time without the arrow. We are led, unprotesting, to temporal chauvinism, prejudicing explanations of our current state in terms of the past over those in terms of the future. Even highly trained professional cosmologists are not immune.

Despite all the ink that has been spilled and all the noise generated by discussions about the nature of time, I would argue that it’s been discussed too little, rather than too much. But people seem to be catching on. The intertwined subjects of time, entropy, information, and complexity bring together an astonishing variety of intellectual disciplines: physics, mathematics, biology, psychology, computer science, the arts. It’s about time that we took time seriously, and faced its challenges head-on.

Within physics, that’s starting to happen. For much of the twentieth century, the field of cosmology was a bit of a backwater; there were many ideas, and little data to distinguish between them. An era of precision cosmology, driven by large-scale surveys enabled by new technologies, has changed all that; unanticipated wonders have been revealed, from the acceleration of the universe to the snapshot of early times provided by the cosmic microwave background.^{303} Now it is the turn for ideas to catch up to the reality. We have interesting suggestions from inflation, from quantum cosmology, and from string theory, as to how the universe might have begun and what might have come before. Our task is to develop these promising ideas into honest theories, which can be compared with experiment and reconciled with the rest of physics.

Predicting the future isn’t easy. (Curse the absence of a low-entropy future boundary condition!) But the pieces are assembled for science to take dramatic steps toward answering the ancient questions we have about the past and the future. It’s time we understood our place within eternity.

**APPENDIX: MATH**

*Lloyd: You mean, not good like one out of a hundred?*

*Mary: I’d say more like one out of a million.*

*[pause]*

*Lloyd: So you’re telling me there’s a chance.*

*—Jim Carrey and Lauren Holly, Dumb and Dumber*

In the main text I bravely included a handful of equations—a couple by Einstein, and a few expressions for entropy in different contexts. An equation is a powerful, talismanic object, conveying a tremendous amount of information in an extraordinarily compact notation. It can be very useful to look at an equation and understand its implications as a rigorous expression of some feature of the natural world.

But, let’s face it—equations can be scary. This appendix is a very quick introduction to exponentials and logarithms, the key mathematical ideas used in describing entropy at a quantitative level. Nothing here is truly necessary to comprehending the rest of the book; just bravely keep going whenever the word *logarithm* appears in the main text.

**EXPONENTIALS**

These two operations—exponentials and logarithms—are exactly as easy or difficult to understand as each other. Indeed, they are opposites; one operation undoes the other one. If we start with a number, take its exponential, and then take the logarithm of the result, we get back the original number we started with. Nevertheless, we tend to come across exponentials more often in our everyday lives, so they seem a bit less intimidating. Let’s start there.

Exponentials just take one number, called the *base*, and raise it to the power of another number. By which we simply mean: Multiply the base by itself, a number of times given by the power. The base is written as an ordinary number, and the power is written as a superscript. Some simple examples:

2^{2} = 2 • 2 = 4,

2^{5} = 2 • 2 • 2 • 2 • 2 = 32,

4^{3} = 4 • 4 • 4 = 64.

(We use a dot to stand for multiplication, rather than the × symbol, because that’s too easy to confuse with the letter *x*.) One of the most convenient cases is where we take the base to be 10; in that case, the power simply becomes the number of zeroes to the right of the one.

10^{1} = 10,

10^{2} = 100,

10^{9} = 1,000,000,000,

10^{21} = 1,000,000,000,000,000,000,000.
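The book itself contains no code, but the powers-of-10 rule above—the power equals the number of zeroes after the leading 1—is easy to verify in a few lines of Python (an illustration of the idea, not something from the text; Python’s `**` operator is exponentiation, and its integers can be arbitrarily large):

```python
# Check that 10**n is a 1 followed by exactly n zeroes.
for power in [1, 2, 9, 21]:
    value = 10 ** power
    zeroes = str(value).count("0")
    print(f"10^{power} = {value} ({zeroes} zeroes)")
    # For powers of 10, every digit after the leading 1 is a zero.
    assert str(value) == "1" + "0" * power
```
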

That’s the idea of exponentiation. When we speak more specifically about the exponential *function*, what we have in mind is fixing a particular base and letting the power to which we raise it be a variable quantity. If we denote the base by *a* and the power by *x*, we have

*a*^{x} = *a* • *a* • *a* • ... • *a*, *x* times.

This definition, unfortunately, can give you the impression that the exponential function makes sense only when the power *x* is a positive integer. How can you multiply a number by itself minus-two times, or 3.7 times? Here you will have to have faith that the magic of mathematics allows us to define the exponential for *any* value of *x.* The result is a smooth function that is very small when *x* is a negative number, and rises very rapidly when *x* becomes positive, as shown in Figure 88.
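You don’t have to take the “magic of mathematics” entirely on faith: any calculator (or Python, used here purely as an illustration) happily evaluates the exponential at negative and fractional powers, and the values trace out exactly the smooth curve described above—tiny for negative *x*, growing rapidly for positive *x*:

```python
# The exponential 10**x is defined for any real x, not just
# positive integers: small for negative x, huge for positive x.
for x in [-2, -1, 0, 0.5, 1, 3.7, 5]:
    print(f"10^{x} = {10 ** x}")
# 10**0.5 is the square root of 10, about 3.162;
# 10**3.7 lands between 10**3 = 1,000 and 10**4 = 10,000.
```
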

**Figure 88:** The exponential function 10^{x}. Note that it goes up very fast, so that it becomes impractical to plot it for large values of *x*.

There are a couple of things to keep in mind about the exponential function. The exponential of 0 is always equal to 1, for any base, and the exponential of 1 is equal to the base itself. When the base is 10, we have:

10^{0} = 1,

10^{1} = 10.

If we take the exponential of a negative number, it’s just the reciprocal of the exponential of the corresponding positive number:

10^{-1} = 1/10^{1} = 0.1,

10 ^{-3} = 1/10^{3} = 0.001.

These facts are specific examples of a more general set of properties obeyed by the exponential function. One of these properties is of paramount importance: If we *multiply* two numbers that are the same base raised to different powers, that’s equal to what we would get by *adding* the two powers and raising the base to that result. That is:

10^{x} • 10^{y} = 10^{(x+y)}.

Said the other way around, the exponential of a sum is the product of the two exponentials.^{304}
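As a quick numerical sanity check (again just an illustration, not from the text), the multiply-exponentials-by-adding-powers rule holds for integer and non-integer powers alike, up to the tiny rounding inherent in floating-point arithmetic:

```python
import math

# The key property: 10^x * 10^y = 10^(x+y).
for x, y in [(2, 3), (0.5, 1.5), (-1, 4), (3.7, -2.2)]:
    lhs = 10 ** x * 10 ** y
    rhs = 10 ** (x + y)
    # math.isclose allows for floating-point rounding error.
    assert math.isclose(lhs, rhs), (x, y)
    print(f"10^{x} * 10^{y} = 10^({x + y}) = {rhs}")
```
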

**BIG NUMBERS**

It’s not hard to see why the exponential function is useful: The numbers we are dealing with are sometimes very large indeed, and the exponential takes a medium-sized number and creates a very big number from it. As we discuss in Chapter Thirteen, the number of distinct states needed to describe possible configurations of our comoving patch of universe is approximately

10^{10^{120}}.

That number is just so enormously, unimaginably huge that it would be hard to know how to even begin describing it if we didn’t have recourse to exponentiation.

Let’s consider some other big numbers to appreciate just how giant this one is. One billion is 10^{9}, while one trillion is 10^{12}; these have become all too familiar terms in discussions of economics and government spending. The number of particles within our observable universe is about 10^{88}, which was also the entropy at early times. Now that we have black holes, the entropy of the observable universe is something like 10^{101}, whereas it conceivably could have been as high as 10^{120}. (That same 10^{120} is also the ratio of the predicted vacuum energy density to the observed density.)

For comparison’s sake, the entropy of a macroscopic object like a cup of coffee is about 10^{25}. That’s related to Avogadro’s Number, 6.02 • 10^{23}, which is approximately the number of atoms in a gram of hydrogen. The number of grains of sand in all the Earth’s beaches is about 10^{20}. The number of stars in a typical galaxy is about 10^{11}, and the number of galaxies in the observable universe is also about 10^{11}, so the number of stars in the observable universe is about 10^{22}—a bit larger than the number of grains of sand on Earth.
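A sketch of how these comparisons work out in practice, using only the numbers quoted above (the code is illustrative, not from the book). Note that while Python can represent 10^{120} exactly, a number like 10^{10^{120}} could never be written out: it has 10^{120} digits, vastly more than there are particles in the observable universe.

```python
import math

# Avogadro's Number: its base-10 logarithm is its "power of ten."
avogadro = 6.02e23
print(math.log10(avogadro))   # about 23.8, i.e. roughly 10**24

# Stars per galaxy times number of galaxies, versus grains of sand:
stars = 10 ** 11 * 10 ** 11   # about 10**22 stars
sand = 10 ** 20               # grains of sand on Earth's beaches
print(stars // sand)          # stars outnumber sand grains ~100x
```
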

The basic units that physicists use are time, length, and mass, or combinations thereof. The shortest interesting time is the Planck time, about 10^{-43} seconds. Inflation is conjectured to have lasted for about 10^{-30} seconds or less, although that number is extremely uncertain. The universe created helium out of protons and neutrons about 100 seconds after the Big Bang, and it became transparent at the time of recombination, 380,000 years (10^{13} seconds) after the Big Bang. (One year is about 3 • 10^{7} seconds.) The observable universe now is 14 billion years old, about 4 • 10^{17} seconds. In another 10^{100} years or so, all the black holes will have evaporated away, leaving a cold and empty universe.

The shortest length is the Planck length, about 10^{-33} centimeters. The size of a proton is about 10^{-13} centimeters, and the size of a human being is about 10^{2} centimeters. (That’s a pretty short human being, but we’re only being very rough here.) The distance from the Earth to the Sun is about 10^{13} centimeters; the distance to the nearest star is about 10^{18} centimeters, and the size of the observable universe is about 10^{28} centimeters.

The Planck mass is about 10^{-5} grams—that would be extraordinarily heavy for a single particle, but isn’t all that much by macroscopic standards. The lightest particles that have more than zero mass are the neutrinos; we don’t know for sure what their masses are, but the lightest seem to be about 10^{-36} grams. A proton is about 10^{-24} grams, and a human being is about 10^{5} grams. The Sun is about 10^{33} grams, a galaxy is about 10^{45} grams, and the mass within the observable universe is about 10^{56} grams.

**LOGARITHMS**

The logarithm function is the easiest thing in the world: It undoes the exponential function. That is, if we have some number that can be expressed in the form 10^{x}—and every positive number can be—then the logarithm of that number is simply

log(10^{x}) = *x*.

What could be simpler than that? Likewise, the exponential undoes the logarithm:

10^{log(x)} = *x*.

Another way of thinking about it is: If a number is a perfect power of 10 (like 10, 100, 1,000, etc.), the logarithm is simply the number of zeroes to the right of the initial 1:

log(10) = 1,

log(100) = 2,

log(1,000) = 3.

But just as for the exponential, the logarithm is actually a smooth function, as shown in Figure 89. The logarithm of 2.5 is about 0.3979, the logarithm of 25 is about 1.3979, the logarithm of 250 is about 2.3979, and so on. The only restriction is that we can’t take the logarithm of a negative number; that makes sense, because the logarithm inverts the exponential function, and we can never *get* a negative number by exponentiating. Roughly speaking, for large numbers the logarithm is simply “the number of digits in the number.”
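The inverse relationship between exponentials and logarithms, and the pattern in the examples above, are easy to check numerically; this short Python sketch uses the standard-library `math.log10` function:

```python
import math

# The logarithm undoes the exponential, and vice versa.
assert math.isclose(math.log10(10**2.7), 2.7)
assert math.isclose(10**math.log10(42.0), 42.0)

# The logarithm is smooth; multiplying the argument by 10 just adds 1.
assert round(math.log10(2.5), 4) == 0.3979
assert round(math.log10(25), 4) == 1.3979
assert round(math.log10(250), 4) == 2.3979
```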

**Figure 89:** The logarithm function log(*x*). It is not defined for negative values of *x*, and as *x* approaches zero from the right the logarithm goes to minus infinity.

Just like the exponential of a sum is the product of exponentials, the logarithm has a corresponding property: The logarithm of a product is the sum of logarithms. That is:

log(*x* • *y*) = log(*x*) + log(*y*).

It’s this lovely property that makes logarithms so useful in the study of entropy. As we discuss in Chapter Eight, a physical property of entropy is that the entropy of two systems combined together is equal to the sum of the entropies of the two individual systems. But you get the number of possible states of the combined systems by multiplying the numbers of states of the two individual systems. So Boltzmann concluded that the entropy should be the logarithm of the number of states, not the number of states itself. In Chapter Nine we tell a similar story for information: Shannon wanted a measure of information for which the total information carried in two independent messages was the sum of the individual informations in each message, so he realized he also had to take the logarithm.
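Boltzmann's reasoning can be illustrated with a toy calculation (the state counts below are hypothetical, chosen just for the arithmetic): combining two independent systems multiplies their numbers of states, but because log(*x* • *y*) = log(*x*) + log(*y*), the entropies simply add.

```python
import math

# Hypothetical state counts for two independent systems.
states_a = 10.0**6
states_b = 10.0**9

# Entropy as the logarithm of the number of states (Boltzmann's idea).
entropy_a = math.log10(states_a)
entropy_b = math.log10(states_b)

# Combining the systems multiplies state counts, but adds entropies.
entropy_combined = math.log10(states_a * states_b)
assert math.isclose(entropy_combined, entropy_a + entropy_b)
```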

More informally, logarithms have the nice property that they take large numbers and whittle them down to manageable sizes. When we take the logarithm of an unwieldy number like a trillion, we get a nice number like 12. The logarithm is a monotonic function—it always increases as we increase the number we’re taking the logarithm of. So the logarithm gives a specific measure of how big a number is, but it collapses huge numbers down to a reasonable size, which is very helpful in fields like cosmology, statistical mechanics, or even economics.
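To make the "number of digits" intuition concrete, here is a small Python check (the example numbers are taken from earlier in this appendix):

```python
import math

# For a positive integer n, floor(log10(n)) + 1 is its number of digits.
n = 10**12  # one trillion
assert math.floor(math.log10(n)) + 1 == len(str(n))
assert math.isclose(math.log10(n), 12.0)

# The logarithm is monotonic: bigger numbers have bigger logarithms.
assert math.log10(10**88) < math.log10(10**101) < math.log10(10**120)
```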

One final crucial detail is that, just like exponentials, logarithms can come in different bases. The “log base *b*” of a number *x* is the number to which we would have to raise *b* in order to get *x*. That is:

log_{2}(2^{x}) = *x*,

log_{12}(12^{x}) = *x*,

and so on. Whenever we don’t write the base explicitly, we take it to be equal to 10, because that’s how many fingers most human beings have. But scientists and mathematicians often like to make a seemingly odd choice: they use the *natural logarithm*, often written ln(*x*), in which the base is taken to be Euler’s number:

ln(*x*) = log_{e}(*x*),

*e* = 2.7182818284 . . .

Euler’s number is irrational, like pi or the square root of two, so its decimal expansion above would go on forever. At first glance it seems like a truly perverse choice to use as a base for one’s logarithms. But in fact *e* has a lot of nice properties, once you get deeper into the math; in calculus, for example, the function *e*^{x} is, up to an overall constant factor, the only function that is equal to its own derivative, as well as its own integral. In this book all of our logarithms have used base 10, but if you launch yourself into physics and math at a higher level, it will be natural logarithms all the way.
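Both of these claims can be spot-checked numerically in Python, where `math.log` accepts the base as an optional second argument; the derivative is approximated here by a centered finite difference (the evaluation point and step size are arbitrary choices):

```python
import math

# Logarithms in any base b satisfy log_b(b^x) = x.
assert math.isclose(math.log(2**10, 2), 10.0)
assert math.isclose(math.log(12**3, 12), 3.0)
assert math.isclose(math.log(math.e**2, math.e), 2.0)

# e^x equals its own derivative: a centered finite difference at x0
# should reproduce e^x0 itself.
x0, h = 1.5, 1e-6
numerical_derivative = (math.exp(x0 + h) - math.exp(x0 - h)) / (2 * h)
assert math.isclose(numerical_derivative, math.exp(x0), rel_tol=1e-8)
```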