The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos - Brian Greene (2011)

Notes

Chapter 1: The Bounds of Reality

1. The possibility that our universe is a slab floating in a higher dimensional realm goes back to a paper by two renowned Russian physicists—“Do We Live Inside a Domain Wall?,” V. A. Rubakov and M. E. Shaposhnikov, Physics Letters B 125 (May 26, 1983): 136—and does not involve string theory. The version I’ll focus on in Chapter 5 emerges from advances in string theory in the mid-1990s.

Chapter 2: Endless Doppelgängers

1. The quote comes from the March 1933 issue of The Literary Digest. It is worth noting that the precision of this quote has recently been questioned by the Danish historian of science Helge Kragh (see his Cosmology and Controversy, Princeton: Princeton University Press, 1999), who suggests it may be a reinterpretation of a Newsweek report from earlier that year in which Einstein was referring to the origin of cosmic rays. What is certain, however, is that by this year Einstein had given up his belief that the universe was static and accepted the dynamic cosmology that emerged from his original equations of general relativity.

2. This law tells us the force of gravitational attraction, F, between two objects, given the masses, \(m_1\) and \(m_2\), of each, and the distance, r, between them. Mathematically, the law reads: \(F = G m_1 m_2 / r^2\), where G stands for Newton’s constant—an experimentally measured number that specifies the intrinsic strength of the gravitational force.
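For readers who like to see numbers, here is a minimal sketch (in Python) of the formula at work, using the Earth-Moon system as an illustrative example; the constants are standard textbook values, not figures quoted in this book.

```python
# Newton's law F = G*m1*m2/r^2, illustrated with the Earth-Moon system.
G = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2
m1 = 5.972e24   # mass of Earth, kg
m2 = 7.348e22   # mass of the Moon, kg
r = 3.844e8     # mean Earth-Moon distance, m

F = G * m1 * m2 / r**2
print(f"F ≈ {F:.2e} newtons")   # ≈ 2.0e20 N
```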

3. For the mathematically inclined reader, Einstein’s equations are \(R_{\mu\nu} - \tfrac{1}{2} g_{\mu\nu} R = 8\pi G T_{\mu\nu}\), where \(g_{\mu\nu}\) is the metric on spacetime, \(R_{\mu\nu}\) is the Ricci curvature tensor, R is the scalar curvature, G is Newton’s constant, and \(T_{\mu\nu}\) is the energy-momentum tensor.

4. In the decades since this famous confirmation of general relativity, questions have been raised regarding the reliability of the results. For distant starlight grazing the sun to be visible, the observations had to be carried out during a solar eclipse; unfortunately, bad weather made it a challenge to take clear photographs of the solar eclipse of 1919. The question is whether Eddington and his collaborators might have been biased by foreknowledge of the result they were seeking, and so when they culled photographs deemed unreliable because of weather interference, they eliminated a disproportionate number containing data that appeared not to fit Einstein’s theory. A recent and thorough study by Daniel Kennefick (see www.arxiv.org, paper arXiv:0709.0685, which, among other considerations, takes account of a modern reevaluation of the photographic plates taken in 1919) convincingly argues that the 1919 confirmation of general relativity is, indeed, reliable.

5. For the mathematically inclined reader, Einstein’s equations of general relativity in this context reduce to \(\left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2}\). The variable a(t) is the scale factor of the universe—a number whose value, as the name indicates, sets the distance scale between objects (if the value of a(t) at two different times differs, say, by a factor of 2, then the distance between any two particular galaxies would differ between those times by a factor of 2 as well), G is Newton’s constant, \(\rho\) is the density of matter/energy, and k is a parameter whose value can be 1, 0, or −1 according to whether the shape of space is spherical, Euclidean (“flat”), or hyperbolic. The form of this equation is usually credited to Alexander Friedmann and, as such, is called the Friedmann equation.
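For readers who want to see this equation in action, the following Python sketch integrates it numerically for a flat (k = 0), matter-dominated toy universe, in arbitrary units with all constants absorbed into a single coefficient; it recovers the familiar \(a(t) \propto t^{2/3}\) growth.

```python
import numpy as np
from scipy.integrate import solve_ivp

# For k = 0 and matter (rho ∝ 1/a^3), the Friedmann equation
# ((da/dt)/a)^2 = (8 pi G / 3) rho reduces to da/dt = C / sqrt(a).
# With C = 2/3 and a(1) = 1 the exact solution is a(t) = t^(2/3).
C = 2.0 / 3.0

def dadt(t, a):
    return C / np.sqrt(a)

sol = solve_ivp(dadt, (1.0, 1000.0), [1.0], rtol=1e-8, dense_output=True)

for t in (1.0, 8.0, 1000.0):
    a = sol.sol(t)[0]
    print(f"a({t:g}) ≈ {a:.3f}   (exact t^(2/3) = {t**(2/3):.3f})")
```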

6. The mathematically inclined reader should note two things. First, in general relativity we typically define coordinates that are themselves dependent on the matter space contains: we use galaxies as the coordinate carriers (acting as if each galaxy has a particular set of coordinates “painted” on it—so-called co-moving coordinates). So, to even identify a specific region of space, we usually make reference to the matter that occupies it. A more precise rephrasing of the text, then, would be: The region of space containing a particular group of N galaxies at time t1 will have a larger volume at a later time t2. Second, the intuitively sensible statement regarding the density of matter and energy changing when space expands or contracts makes an implicit assumption regarding the equation of state for matter and energy. There are situations, and we will encounter one shortly, where space can expand or contract while the density of a particular energy contribution—the energy density of the so-called cosmological constant—remains unchanged. Indeed, there are even more-exotic scenarios in which space can expand while the density of energy increases. This can happen because, in certain circumstances, gravity can provide a source of energy. The important point of the paragraph is that in their original form the equations of general relativity are not compatible with a static universe.

7. Shortly we will see that Einstein abandoned his static universe when confronted by astronomical data showing that the universe is expanding. It is worth noting, though, that his misgivings about the static universe predated the data. The physicist Willem de Sitter pointed out to Einstein that his static universe was unstable: nudge it a bit bigger, and it would grow; nudge it a bit smaller, and it would shrink. Physicists shy away from solutions that require perfect, undisturbed conditions for them to persist.

8. In the big bang model, the outward expansion of space is viewed much like the upward motion of a tossed ball: attractive gravity pulls on the upward-moving ball and so slows its motion; similarly, attractive gravity pulls on the outward-moving galaxies and so slows their motion. In neither case does the ongoing motion require a repulsive force. However, you can still ask: Your arm launched the ball skyward, so what “launched” the spatial universe on its outward expansion? We will return to this question in Chapter 3, where we will see that modern theory posits a short burst of repulsive gravity, operating during the earliest moments of cosmic history. We will also see that more refined data has provided evidence that the expansion of space is not slowing over time, which has resulted in a surprising—and as later chapters will make clear—potentially profound resurrection of the cosmological constant.

The discovery of the spatial expansion was a turning point in modern cosmology. In addition to Hubble’s contributions, the achievement relied on the work and insights of many others, including Vesto Slipher, Harlow Shapley, and Milton Humason.

9. A two-dimensional torus is usually depicted as a hollow doughnut. A two-step process shows that this picture agrees with the description provided in the text. When we declare that crossing the right edge of the screen brings you back to the left edge, that’s tantamount to identifying the entire right edge with the left edge. Were the screen flexible (made of thin plastic, say) this identification could be made explicit by rolling the screen into a cylindrical shape and taping the right and left edges together. When we declare that crossing the upper edge brings you to the lower edge, that too is tantamount to identifying those edges. We can make this explicit by a second manipulation in which we bend the cylinder and tape the upper and lower circular edges together. The resulting shape has the usual doughnutlike appearance. A misleading aspect of these manipulations is that the surface of the doughnut looks curved; were it coated with reflective paint, your reflection would be distorted. This is an artifact of representing the torus as an object sitting within an ambient three-dimensional environment. Intrinsically, as a two-dimensional surface, the torus is not curved. It is flat, as is clear when it’s represented as a flat video-game screen. That’s why, in the text, I focus on the more fundamental description as a shape whose edges are identified in pairs.

10. The mathematically inclined reader will note that by “judicious slicing and paring” I am referring to taking quotients of simply connected covering spaces by various discrete isometry groups.

11. The quoted amount is for the current era. In the early universe, the critical density was higher.

12. If the universe were static, light that had been traveling for the last 13.7 billion years and has only just reached us would indeed have been emitted from a distance of 13.7 billion light-years. In an expanding universe, the object that emitted the light has continued to recede during the billions of years the light was in transit. When we receive the light, the object is thus farther away—much farther—than 13.7 billion light-years. A straightforward calculation using general relativity shows that the object (assuming it still exists and has been continually riding the swell of space) would now be about 41 billion light-years away. This means that when we look out into space we can, in principle, see light from sources that are now as far away as roughly 41 billion light-years. In this sense, the observable universe has a diameter of about 82 billion light-years. The light from objects farther than this would not yet have had enough time to reach us, and so such objects are beyond our cosmic horizon.
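For readers who want to reproduce this kind of estimate, the sketch below numerically integrates \(c\,dt/a(t)\) (in the redshift form standard in cosmology) for a flat universe containing matter and a cosmological constant. The parameter values are round numbers I am assuming purely for illustration; they give roughly 45 billion light-years, in the same ballpark as the figure quoted above, with the exact number depending on the parameters chosen.

```python
import numpy as np
from scipy.integrate import quad

# Assumed round-number parameters (illustrative, not from this book):
H0 = 70.0                      # Hubble constant, km/s/Mpc
omega_m, omega_L = 0.3, 0.7    # matter and cosmological-constant fractions
c = 299792.458                 # speed of light, km/s

def E(z):
    # H(z)/H0 for a flat matter + cosmological-constant universe
    return np.sqrt(omega_m * (1 + z)**3 + omega_L)

# Present-day distance to the earliest visible sources: (c/H0) * ∫ dz/E(z)
integral, _ = quad(lambda z: 1.0 / E(z), 0.0, 3000.0)
d_mpc = (c / H0) * integral
d_gly = d_mpc * 3.262e-3       # 1 Mpc ≈ 3.262 million light-years
print(f"≈ {d_gly:.0f} billion light-years")
```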

13. In loose language, you can envision that because of quantum mechanics, particles always experience what I like to call “quantum jitter”: a kind of inescapable random quantum vibration that renders the very notion of the particle having a definite position and speed (momentum) approximate. In this sense, changes to position/speed that are so small that they’re on par with the quantum jitters are within the “noise” of quantum mechanics and hence are not meaningful.

In more precise language, if you multiply the imprecision in the measurement of position by the imprecision in the measurement of momentum, the result—the uncertainty—is always larger than a number called Planck’s constant, named after Max Planck, one of the pioneers of quantum physics. In particular, this implies that fine resolutions in measuring the position of a particle (small imprecision in position measurement) necessarily entail large uncertainty in the measurement of its momentum and, by association, its energy. Since energy is always limited, the resolution in position measurements is thus limited too.

Also note that we will always apply these concepts in a finite spatial domain—generally in regions the size of today’s cosmic horizon (as in the next section). A finite-sized region, however large, implies a maximum uncertainty in position measurements. If a particle is assumed to be in a given region, the uncertainty of its position is surely no larger than the size of the region. Such a maximum uncertainty in position then entails, from the uncertainty principle, a minimum amount of uncertainty in momentum measurements—that is, limited resolution in momentum measurements. Together with the limited resolution in position measurements, we see the reduction from an infinite to a finite number of possible distinct configurations of a particle’s position and speed.

You might still wonder about the barrier to building a device capable of measuring a particle’s position with ever greater precision. It too is a matter of energy. As in the text, if you want to measure a particle’s position with ever greater precision, you need to use an ever more refined probe. To determine whether a fly is in a room, you can turn on an ordinary, diffuse overhead light. To determine if an electron is in a cavity, you need to illuminate it with the sharp beam of a powerful laser. And to determine the electron’s position with ever greater accuracy you need to make that laser ever more powerful. Now, when an ever more powerful laser zaps an electron, it imparts an ever greater disturbance to its velocity. Thus, the bottom line is that precision in determining particles’ positions comes at the cost of huge changes in the particles’ velocities—and hence huge changes in particle energies. If there’s a limit to how much energy particles can have, as there always will be, there’s a limit to how finely their positions can be resolved.

Limited energy in a limited spatial domain thus gives finite resolution on both position and velocity measurements.

14. The most direct way to make this calculation is by invoking a result I will describe in nontechnical terms in Chapter 9: the entropy of a black hole—the logarithm of the number of distinct quantum states—is proportional to its surface area measured in square Planck units. A black hole that fills our cosmic horizon would have a radius of about \(10^{28}\) centimeters, or roughly \(10^{61}\) Planck lengths. Its entropy would therefore be about \(10^{122}\) in square Planck units. Hence the total number of distinct states is roughly 10 raised to the power \(10^{122}\), or \(10^{10^{122}}\).
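An order-of-magnitude check of these numbers, using the Bekenstein-Hawking relation S = A/4 (area in square Planck units) described in Chapter 9, takes only a few lines; the inputs below are the rounded values from this note.

```python
import math

r_cm = 1e28                   # horizon-sized black hole radius, cm
planck_length_cm = 1.62e-33   # Planck length, cm

r_planck = r_cm / planck_length_cm        # ≈ 10^61 Planck lengths
area = 4 * math.pi * r_planck**2          # horizon area, square Planck units
entropy = area / 4                        # Bekenstein-Hawking entropy

print(f"radius  ≈ 10^{math.log10(r_planck):.0f} Planck lengths")
print(f"entropy ≈ 10^{math.log10(entropy):.0f}")
# Number of distinct states ~ exp(S), i.e. roughly 10^(10^122).
```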

15. You might be wondering why I’m not also incorporating fields. As we will see, particles and fields are complementary languages—a field can be described in terms of the particles of which it’s composed, much like an ocean wave can be described in terms of its constituent water molecules. The choice of using a particle or field language is largely one of convenience.

16. The distance that light can travel in a given time interval depends sensitively on the rate at which space expands. In later chapters we will encounter evidence that the rate of spatial expansion is accelerating. If so, there is a limit to how far light can travel through space, even if we wait an arbitrarily long time. Distant regions of space would be receding from us so quickly that light we emit could not reach them; similarly, light they emit could not reach us. This would mean that cosmic horizons—the portion of space with which we can exchange light signals—would not grow in size indefinitely. (For the mathematically inclined reader, the essential formulae are in Chapter 6, note 7.)

17. G. Ellis and G. Brundrit studied duplicate realms in an infinite classical universe; J. Garriga and A. Vilenkin studied them in the quantum context.

Chapter 3: Eternity and Infinity

1. One point of departure from the earlier work was Dicke’s perspective, which focused on the possibility of an oscillating universe that would repeatedly go through a series of cycles—big bang, expansion, contraction, big crunch, big bang again. In any given cycle there would be remnant radiation suffusing space.

2. It is worth noting that even though they don’t have jet engines, galaxies generally do exhibit some motion above and beyond that arising from the expansion of space—typically the result of large-scale intergalactic gravitational forces as well as the intrinsic motion of the swirling gas cloud from which stars in the galaxies formed. Such motion is called peculiar velocity and is generally small enough that it can be safely ignored for cosmological purposes.

3. The horizon problem is subtle, and my description of inflationary cosmology’s solution is slightly nonstandard, so for the interested reader let me elaborate here in a little more detail. First the problem, again: Consider two regions in the night sky that are so distant from one another that they have never communicated. And to be concrete, let’s say each region has an observer who controls a thermostat that sets his or her region’s temperature. The observers want the two regions to have the same temperature, but because the observers have been unable to communicate, they don’t know how to set their respective thermostats. The natural thought is that since billions of years ago the observers were much closer, it would have been easy for them, way back then, to have communicated and thus to have ensured the two regions had equal temperatures. However, as noted in the main text, in the standard big bang theory this reasoning fails. Here’s more detail on why. In the standard big bang theory, the universe is expanding, but because of gravity’s attractive pull, the rate of expansion slows over time. It’s much like what happens when you toss a ball in the air. During its ascent it first moves away from you quickly, but because of the tug of earth’s gravity, it steadily slows. The slowing down of spatial expansion has a profound effect. I’ll use the tossed ball analogy to explain the essential idea. Imagine a ball that undergoes, say, a six-second ascent. Since it initially travels quickly (as it leaves your hand), it might cover the first half of the journey in only two seconds, but due to its diminishing speed it takes four more seconds to cover the second half of the journey. At the halfway point in time, three seconds, it was thus beyond the halfway mark in distance. Similarly, with spatial expansion that slows over time: at the halfway point in cosmic history, our two observers would be separated by more than half their current distance. Think about what this means. The two observers would be closer together, but they would find it harder—not easier—to have communicated. Signals one observer sends would have half the time to reach the other, but the distance the signals would need to traverse is more than half of what it is today. Being allotted half the time to communicate across more than half their current separation only makes communication more difficult.
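The tossed-ball arithmetic is easy to verify. For uniform deceleration (a simplified stand-in for the real trajectory), the ball covers three-quarters of its ascent in the first half of the flight time, so it is indeed well beyond the halfway mark:

```python
# Ball launched so it stops rising exactly at t = T, with uniform deceleration g.
g, T = 9.8, 6.0          # deceleration (m/s^2) and total ascent time (s)
v0 = g * T               # launch speed implied by a T-second ascent

def height(t):
    return v0 * t - 0.5 * g * t**2

frac = height(T / 2) / height(T)
print(f"fraction of ascent covered at half time: {frac:.2f}")   # 0.75
```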

The distance between objects is thus only one consideration when analyzing their ability to influence each other. The other essential consideration is the amount of time that’s elapsed since the big bang, as this constrains how far any purported influence could have traveled. In the standard big bang, although everything was indeed closer in the past, the universe was also expanding more quickly, resulting in less time, proportionally speaking, for influences to be exerted.

The resolution offered by inflationary cosmology is to insert a phase in the earliest moments of cosmic history in which the expansion rate of space doesn’t decrease like the speed of the ball tossed upwards; instead, the spatial expansion starts out slow and then continually picks up speed: the expansion accelerates. By the same reasoning we just followed, at the halfway point of such an inflationary phase our two observers will be separated by less than half their distance at the end of that phase. And being allotted half the time to communicate across less than half the distance means it is easier at earlier times for them to communicate. More generally, at ever earlier times, accelerated expansion means there is more time, proportionally speaking—not less—for influences to be exerted. This would have allowed today’s distant regions to have easily communicated in the early universe, explaining the common temperature they now have.

Because the accelerated expansion results in a much greater total expansion of space than in the standard big bang theory, the two regions would have been much closer together at the onset of inflation than at a comparable moment in the standard big bang theory. This size disparity in the very early universe is an equivalent way of understanding why communication between the regions, which would have proved impossible in the standard big bang, can be easily accomplished in the inflationary theory. If at a given moment after the beginning, the distance between two regions is less, it is easier for them to exchange signals.

Taking the expansion equations seriously to arbitrarily early times (and for definiteness, imagine that space is spherically shaped), we also see that the two regions would have initially separated more quickly in the standard big bang than in the inflationary model: that’s how they became so much farther apart in the standard big bang compared with their separation in the inflationary theory. In this sense, the inflationary framework involves a period of time during which the rate of separation between these regions is slower than in the usual big bang framework.

Often, in describing inflationary cosmology, the focus is solely on the fantastic increase in expansion speed over the conventional framework, not on a decrease in speed. The difference in description derives from which physical features between the two frameworks one compares. If one compares the trajectories of two regions of a given distance apart in the very early universe, then in the inflationary theory those regions separate much faster than in the standard big bang theory; by today they are also much farther apart in the inflationary theory than in the conventional big bang. But if one considers two regions of a given distance apart today (like the two regions on opposite sides of the night sky upon which we’ve been focused), the description I’ve given is relevant. Namely, at a given moment in time in the very early universe, those regions were much closer together, and had been moving apart much more slowly, in a theory that invokes inflationary expansion as compared with one that doesn’t. The role of inflationary expansion is to make up for the slower start by then propelling those regions apart ever more quickly, ensuring that they arrive at the same location in the sky that they would have in the standard big bang theory.

A fuller treatment of the horizon problem would include a more detailed specification of the conditions from which the inflationary expansion emerges as well as the subsequent processes by which, for example, the cosmic microwave background radiation is produced. But this discussion highlights the essential distinction between accelerated and decelerated expansion.

4. Note that by squeezing the bag, you inject energy into it, and since both mass and energy give rise to the resulting gravitational warpage, the increase in weight will be partially due to the increase in energy. The point, however, is that the increase in pressure itself also contributes to the increase in weight. (Also note that to be precise, we should imagine doing this “experiment” in a vacuum chamber, so we don’t need to consider the buoyant forces due to the air surrounding the bag.) For everyday examples the increase is tiny. However, in astrophysical settings the increase can be significant. In fact, it plays a role in understanding why, in certain situations, stars necessarily collapse to form black holes. Stars generally maintain their equilibrium through a balance between outward-pushing pressure, generated by nuclear processes in the star’s core, and inward-pulling gravity, generated by the star’s mass. As the star exhausts its nuclear fuel, the positive pressure decreases, causing the star to contract. This brings all its constituents closer together and so increases their gravitational attraction. To avoid further contraction, additional outward pressure (what is labeled positive pressure, as in the next paragraph in the text) is needed. But the additional positive pressure itself generates additional attractive gravity and thus makes the need for additional positive pressure all the more urgent. In certain situations, this leads to a spiraling instability and the very thing that the star usually relies upon to counteract the inward pull of gravity—positive pressure—contributes so strongly to that very inward pull that a complete gravitational collapse becomes unavoidable. The star will implode and form a black hole.

5. In the approach to inflation I have just described, there is no fundamental explanation for why the inflaton field’s value would begin high up on the potential energy curve, nor why the potential energy curve would have the particular shape it has. These are assumptions the theory makes. Subsequent versions of inflation, most notably one developed by Andrei Linde called chaotic inflation, find that a more “ordinary” potential energy curve (a parabolic shape with no flat section that emerges from the simplest mathematical equations for the potential energy) can also yield inflationary expansion. To initiate the inflationary expansion, the inflaton field’s value needs to be high up on this potential energy curve too, but the enormously hot conditions expected in the early universe would naturally cause this to happen.

6. For the diligent reader, let me note one additional detail. The rapid expansion of space in inflationary cosmology entails significant cooling (much as a rapid compression of space, or of most anything, causes a surge in temperature). But as inflation comes to a close, the inflaton field oscillates around the minimum of its potential energy curve, transferring its energy to a bath of particles. The process is called “re-heating” because the particles so produced will have kinetic energy and thus can be characterized by a temperature. As space then continues to undergo more ordinary (non-inflationary) big bang expansion, the temperature of the particle bath steadily decreases. The important point, though, is that the uniformity set down by inflation provides uniform conditions for these processes, and so results in uniform outcomes.

7. Alan Guth was aware of the eternal nature of inflation; Paul Steinhardt wrote about its mathematical realization in certain contexts; Alexander Vilenkin brought it to light in the most general terms.

8. The value of the inflaton field determines the amount of energy and negative pressure it suffuses through space. The larger the energy, the greater the expansion rate of space. The rapid expansion of space, in turn, has a back reaction on the inflaton field itself: the faster the expansion of space, the more violently the inflaton field’s value jitters.

9. Let me address a question that may have occurred to you, one we will return to in Chapter 10. As space undergoes inflationary expansion, its overall energy increases: the greater the volume of space filled with an inflaton field, the greater the total energy (if space is infinitely large, energy is infinite too—in this case we should speak of the energy contained in a finite region of space as the region grows larger). Which naturally leads one to ask: What is the source of this energy? For the analogous situation with the champagne bottle, the source of additional energy in the bottle came from the force exerted by your muscles. What plays the role of your muscles in the expanding cosmos? The answer is gravity. Whereas your muscles were the agent that allowed the available space inside the bottle to expand (by pulling out the cork), gravity is the agent that allows the available space in the cosmos to expand. What’s vital to realize is that the gravitational field’s energy can be arbitrarily negative. Consider two particles falling toward each other under their mutual gravitational attraction. Gravity coaxes the particles to approach each other faster and faster, and as they do, their kinetic energy gets ever more positive. The gravitational field can supply the particles with such positive energy because gravity can draw down its own energy reserve, which becomes arbitrarily negative in the process: the closer the particles approach each other, the more negative the gravitational energy becomes (equivalently, the more positive the energy you’d need to inject to overcome the force of gravity and separate the particles once again). Gravity is thus like a bank that has a bottomless credit line and so can lend endless amounts of money; the gravitational field can supply endless amounts of energy because its own energy can become ever more negative. And that’s the energy source that inflationary expansion taps.

10. I will use the term “bubble universe,” although the imagery of a “pocket universe” that opens up within the ambient inflaton-filled environment is a good one too (that term was coined by Alan Guth).

11. For the mathematically inclined reader, a more precise description of the horizontal axis in Figure 3.5 is as follows: consider the two-dimensional sphere comprising the points in space at the time the cosmic microwave background photons began to stream freely. As with any two-sphere, a convenient set of coordinates on this locus are the angular coordinates from a spherical polar coordinate system. The temperature of the cosmic microwave background radiation can then be viewed as a function of these angular coordinates and, as such, can be decomposed in a Fourier series using as a basis the standard spherical harmonics, \(Y_{\ell m}(\theta, \phi)\). The vertical axis in Figure 3.5 is related to the size of the coefficients for each mode in this expansion—farther to the right on the horizontal axis corresponds to smaller angular separation. For technical details, see for example Scott Dodelson’s excellent book Modern Cosmology (San Diego, Calif.: Academic Press, 2003).

12. A little more precisely, it is not the strength of the gravitational field, per se, that determines the slowing of time, but rather the strength of the gravitational potential. For instance, if you were to hang out inside a spherical cavity at the center of a massive star, you wouldn’t feel a gravitational force at all, but because you were deep inside a gravitational-potential well, time for you would run slower than time for someone far outside the star.

13. This result (and closely related ideas) was found by a number of researchers in different contexts, and was most explicitly articulated by Alexander Vilenkin and also by Sidney Coleman and Frank De Luccia.

14. In our discussion of the Quilted Multiverse, you may recall that we assumed particle arrangements would vary randomly from patch to patch. The connection between the Quilted and Inflationary Multiverses also allows us to make good on that assumption. A bubble universe forms in a given region when the inflaton field’s value drops; as it does, the energy the inflaton contained is converted into particles. The precise arrangement of these particles at any moment is determined by the precise value of the inflaton during the conversion process. But because the inflaton field is subject to quantum jitters, as its value drops it will be subject to random variations—the same random variations that give rise to the pattern of slightly hotter and slightly colder spots in Figure 3.4. When considered across the patches in a bubble universe, these jitters thus imply that the inflaton’s value will display random quantum variations. And this randomness ensures randomness of the resulting particle distributions. That’s why we expect any particle arrangement, such as the one responsible for all we see right now, to be replicated as often as any other.

Chapter 4: Unifying Nature’s Laws

1. I thank Walter Isaacson for personal communications on this and a number of other historical issues related to Einstein.

2. In a little more detail, the insights of Glashow, Salam, and Weinberg suggested that the electromagnetic and weak forces were aspects of a combined electroweak force, a theory that was confirmed by accelerator experiments in the late 1970s and early 1980s. Glashow and Georgi went a step further and suggested that the electroweak and the strong forces were aspects of a yet more fundamental force, an approach that’s called grand unification. The simplest version of grand unification, however, was ruled out when scientists failed to observe one of its predictions—that protons should, every so often, decay. Nevertheless, there are many other versions of grand unification that remain experimentally viable since, for example, the rate of proton decay they predict is so slow that existing experiments would not yet have the sensitivity to detect it. However, even if grand unification is not borne out by data, it is already beyond doubt that the three nongravitational forces can be described using the same mathematical language of quantum field theory.

3. The discovery of superstring theory spawned other, closely related, theoretical approaches seeking a unified theory of nature’s forces. In particular, supersymmetric quantum field theory, and its gravitational extension supergravity, have been vigorously pursued since the mid-1970s. Supersymmetric quantum field theory and supergravity are based on the new principle of supersymmetry, which was discovered within superstring theory, but these approaches incorporate supersymmetry in conventional point-particle theories. We will briefly discuss supersymmetry later in the chapter, but for the mathematically inclined reader, I’ll note here that supersymmetry is the last available symmetry (beyond rotational symmetry, translational symmetry, Lorentz symmetry, and, more generally, Poincaré symmetry) of a nontrivial theory of elementary particles. It relates particles of different quantum mechanical spin, establishing a deep mathematical kinship between particles that communicate forces and the particles making up matter. Supergravity is an extension of supersymmetry that includes the gravitational force. In the early days of string theory research, scientists realized that the frameworks of supersymmetry and supergravity emerged from a low-energy analysis of string theory. At low energies, the extended nature of a string generally cannot be discerned, so it appears to be a point particle. Correspondingly, as we will discuss in this chapter, when applied to low energy processes, the mathematics of string theory transforms into that of quantum field theory. Scientists found that because both supersymmetry and gravity survive the transformation, low energy string theory gives rise to supersymmetric quantum field theory and to supergravity. In more recent times, as we will discuss in Chapter 9, the link between supersymmetric field theory and string theory has grown yet more profound.

4. The informed reader may take exception to my statement that every field is associated to a particle. So, more precisely, the small fluctuations of a field about a local minimum of its potential are generally interpretable as particle excitations. That’s all we need for the discussion at hand. Additionally, the informed reader will note that localizing a particle at a point is itself an idealization, because it would take—from the uncertainty principle—infinite momentum and energy to do so. Again, the essence is that in quantum field theory there is, in principle, no limit to how finely localized a particle can be.

5. Historically speaking, a mathematical technique known as renormalization was developed to grapple with the quantitative implications of severe, small-scale (high-energy) quantum field jitters. When applied to the quantum field theories of the three nongravitational forces, renormalization cured the infinite quantities that had emerged in various calculations, allowing physicists to generate fantastically accurate predictions. However, when renormalization was brought to bear on the quantum jitters of the gravitational field, it proved ineffective: the method failed to cure infinities that arose in performing quantum calculations involving gravity.

From a more modern vantage point, these infinities are now viewed rather differently. Physicists have come to realize that en route to an ever-deeper understanding of nature’s laws, a sensible attitude to take is that any given proposal is provisional, and—if relevant at all—is likely capable of describing physics only down to some particular length scale (or only up to some particular energy scale). Beyond that are phenomena that lie outside the reach of the given proposal. Adopting this perspective, it would be foolhardy to extend the theory to distances smaller than those within its arena of applicability (or to energies above its arena of applicability). And with such inbuilt cutoffs (much as described in the main text), no infinities ever arise. Instead, calculations are undertaken within a theory whose range of applicability is circumscribed from the outset. This means that the ability to make predictions is limited to phenomena that lie within the theory’s limits—at very short distances (or at very high energies) the theory offers no insight. The ultimate goal of a complete theory of quantum gravity would be to lift the inbuilt limits, unleashing quantitative, predictive capacities on arbitrary scales.

6. To get a feel for where these particular numbers come from, note that quantum mechanics (discussed in Chapter 8) associates a wave to a particle, with the heavier the particle the shorter its wavelength (the distance between successive wave crests). Einstein’s general relativity also associates a length to any object—the size to which the object would need to be squeezed to become a black hole. The heavier the object, the larger that size. Imagine, then, starting with a particle described by quantum mechanics and then slowly increasing its mass. As you do, the particle’s quantum wave gets shorter, while its “black hole size” gets larger. At some mass, the quantum wavelength and the black hole size will be equal—establishing a baseline mass and size at which quantum mechanical and general relativistic considerations are both important. When one makes this thought experiment quantitative, the mass and size are found to be those quoted in the text—the Planck mass and Planck length, respectively. To foreshadow later developments, in Chapter 9 I will discuss the holographic principle. This principle uses general relativity and black hole physics to argue for a very particular limit on the number of physical degrees of freedom that can reside in any volume of space (a more refined version of the discussion in Chapter 2 regarding the number of distinct particle arrangements within a volume of space; also mentioned in note 14 of Chapter 2). If this principle is correct, then the conflict between general relativity and quantum mechanics can arise before distances are small and curvatures large. A huge volume containing even a low density gas of particles would be predicted by quantum field theory to have many more degrees of freedom than the holographic principle (which relies on general relativity) would allow.
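The crossover described here is easy to estimate numerically. Setting the (reduced) Compton wavelength \(\hbar/mc\) equal to the Schwarzschild radius \(\sim Gm/c^2\), and ignoring factors of order one, gives the Planck mass and the corresponding Planck length; a quick sketch:

```python
import math

hbar = 1.055e-34   # reduced Planck's constant, J s
G = 6.674e-11      # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

# hbar/(m c) = G m / c^2  =>  m = sqrt(hbar c / G), up to O(1) factors
planck_mass = math.sqrt(hbar * c / G)        # ≈ 2.2e-8 kg
planck_length = math.sqrt(hbar * G / c**3)   # ≈ 1.6e-35 m

print(f"Planck mass   ≈ {planck_mass:.2e} kg")
print(f"Planck length ≈ {planck_length:.2e} m")
```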

7. Quantum mechanical spin is a subtle concept. Especially in quantum field theory, where particles are viewed as dots, it is hard to fathom what “spinning” would even mean. What really happens is that experiments show that particles can possess an intrinsic property that behaves much like an immutable quantity of angular momentum. Moreover, quantum theory shows, and experiments confirm, that particles will generally only have angular momentum that is an integer multiple of a fundamental quantity (Planck’s constant \(\hbar\) divided by 2). Since classical spinning objects possess an intrinsic angular momentum (one, however, that is not immutable—it changes as the object’s rotational speed changes), theoreticians have borrowed the name “spin” and applied it to this analogous quantum situation. Hence the name “spin angular momentum.” While “spinning like a top” provides a reasonable mental image, it’s more accurate to imagine that particles are defined not only by their mass, their electric charge, and their nuclear charges, but also by the intrinsic and immutable spin angular momentum they possess. Just as we accept a particle’s electric charge as one of its fundamental defining features, experiments establish that the same is true of its spin angular momentum.

8. Recall that the tension between general relativity and quantum mechanics arises from the powerful quantum jitters of the gravitational field that shake spacetime so violently that the traditional mathematical methods can’t cope. Quantum uncertainty tells us that these jitters become ever stronger when space is examined on ever-smaller distances (which is why we don’t see these jitters in everyday life). Specifically, the calculations show that it is the wildly energetic jitters over distances shorter than the Planck scale that make the math go haywire (the shorter the distance, the greater the jitters’ energy). Since quantum field theory describes particles as points with no spatial extent, the distances these particles probe can be arbitrarily small, and hence the quantum jitters they feel can be arbitrarily energetic. String theory changes this. Strings are not points—they have spatial extent. This implies that there is a limit to how small a distance can be accessed, even in principle, since a string can’t probe a distance smaller than its own size. In turn, a limit to how small a scale can be probed translates into a limit on how energetic the jitters can become. This limit proves sufficient to tame the unruly mathematics, allowing string theory to merge quantum mechanics and general relativity.

9. If an object were truly one-dimensional, we wouldn’t be able to see it directly since it would offer no surface from which photons could reflect and would have no capacity to produce photons of its own through atomic transitions. So, when I say “see” in the text, that’s a stand-in for any means of observation or experimentation you might use to seek evidence of an object’s spatial extent. The point, then, is that any spatial extent smaller than the resolving power of your experimental procedure will escape your experiment’s notice.

10. “What Einstein Never Knew,” NOVA documentary, 1985.

11. More precisely, the component of the universe most relevant to our existence would be completely different. Since the familiar particles and the objects they compose—stars, planets, people, etc.—amount to less than 5 percent of the mass of the universe, such a disruption would not affect the vast majority of the universe, at least as measured by mass. However, as measured by its effect on life as we know it, the change would be profound.

12. There are some mild restrictions that quantum field theories place on their internal parameters. To avoid certain classes of unacceptable physical behavior (violations of critical conservation laws, violations of certain symmetry transformations, and so on), there can be constraints on the charges (electric and also nuclear) of the theory’s particles. Additionally, to ensure that in all physical processes, probabilities add to 1, there can also be constraints on particle masses. But even with these constraints, there is wide latitude in the allowed values of particle properties.

13. Some researchers will note that even though neither quantum field theory nor our current understanding of string theory provides an explanation of the particle properties, the issue is more urgent in string theory. The point is a bit involved, but for the technically minded here’s the summary. In quantum field theory, the properties of particles—say their masses, to be definite—are controlled by numbers that are inserted into the theory’s equations. The fact that quantum field theory’s equations allow such numbers to be varied is the mathematical way of saying that quantum field theory does not determine particle masses but instead takes them as input. In string theory, the flexibility in the masses of particles has a similar mathematical origin—the equations allow particular numbers to vary freely—but the manifestation of this flexibility is more significant. The freely varying numbers—numbers, that is, that can be varied with no cost in energy—correspond to the existence of particles with no mass. (Using the language of potential energy curves introduced in Chapter 3, envision a potential energy curve that’s completely flat, a horizontal line. Just as walking on a perfectly flat terrain would have no impact on your potential energy, changing the value of such a field would have no cost in energy. Since a particle’s mass corresponds to the curvature of its quantum field’s potential energy curve around its minimum, the quanta of such fields are massless.) Excessive numbers of massless particles are a particularly awkward feature of any proposed theory since there are tight limits on such particles coming from both accelerator data and cosmological observations. For string theory to be viable it is imperative that these particles acquire mass. In recent years, various discoveries have revealed ways in which this might happen, having to do with fluxes that can thread through holes in the extra-dimensional Calabi-Yau shapes. I will discuss aspects of these developments in Chapter 5.
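The parenthetical claim that mass tracks the curvature of the potential at its minimum can be checked symbolically. The quartic potential below is a generic stand-in chosen for illustration, not a potential drawn from string theory:

```python
import sympy as sp

phi, lam, v = sp.symbols('phi lam v', positive=True)
V = sp.Rational(1, 4) * lam * (phi**2 - v**2)**2   # toy potential

curvature = sp.diff(V, phi, 2)                 # second derivative of V
print(sp.simplify(curvature.subs(phi, v)))     # 2*lam*v**2: massive quantum
print(curvature.subs(lam, 0))                  # 0: flat potential, massless
```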

14. It is not impossible for experiments to provide evidence that would strongly disfavor string theory. The structure of string theory ensures that certain basic principles should be respected by all physical phenomena. Among these are unitarity (the sum of all probabilities of all possible outcomes in a given experiment must be 1) and local Lorentz invariance (in a small enough domain the laws of special relativity hold), as well as more technical features such as analyticity and crossing symmetry (the result of particle collisions must depend on the particles’ momentum in a manner that respects a particular collection of mathematical criteria). Should evidence be found—perhaps at the Large Hadron Collider—that any of these principles are violated, it would be a challenge to reconcile those data with string theory. (It would also be a challenge to reconcile those data with the standard model of particle physics, which incorporates these principles too, but the underlying assumption is that the standard model must give way to some kind of new physics at a high enough energy scale since the theory does not incorporate gravity. Data conflicting with any of the principles enumerated would argue that the new physics is not string theory.)

15. It is common to speak of the center of a black hole as if it were a position in space. But it’s not. It is a moment in time. When crossing the event horizon of a black hole, time and space (the radial direction) interchange roles. If you fall into a black hole, for example, your radial motion represents progress through time. You are thus pulled toward the black hole’s center in the same way you are pulled to the next moment in time. The center of the black hole is, in this sense, akin to a last moment in time.

16. For many reasons, entropy is a key concept in physics. In the case discussed, entropy is being used as a diagnostic tool to determine if string theory is leaving out any essential physics in its description of black holes. If it were, the black hole disorder that the string mathematics is being used to calculate would be inaccurate. The fact that the answer agrees exactly with what Bekenstein and Hawking found using very different considerations is a sign that string theory has successfully captured the fundamental physical description. This is a very encouraging result. For more details, see The Elegant Universe, Chapter 13.

17. The first hint of this pairing between Calabi-Yau shapes came from the work of Lance Dixon, as well as independently from Wolfgang Lerche, Nicholas Warner, and Cumrun Vafa. My work with Ronen Plesser found a method for producing the first concrete examples of such pairs, which we named mirror pairs, and the relationship between them mirror symmetry. Plesser and I also showed that difficult calculations on one member of a mirror pair, involving seemingly impenetrable details such as the number of spheres that can be packed into the shape, could be translated into far more manageable calculations on the mirror shape. This result was picked up by Philip Candelas, Xenia de la Ossa, Paul Green, and Linda Parkes and put into action—they developed techniques for explicitly evaluating the equality Plesser and I had established between the “difficult” and “easy” formulas. Using the easy formula, they then extracted information about its difficult partner, including the numbers associated with the sphere packing given in the text. In the years since, mirror symmetry has become its own field of research, with a great many important results being established. For a detailed history, see Shing-Tung Yau and Steve Nadis, The Shape of Inner Space (New York: Basic Books, 2010).

18. String theory’s claim to have successfully melded quantum mechanics and general relativity rests on a wealth of supporting calculations, made yet more convincing by results we will cover in Chapter 9.

Chapter 5: Hovering Universes in Nearby Dimensions

1. Classical mechanics: \(F = m\,\frac{d^2x}{dt^2}\). Electromagnetism: \(d{\star}F = {\star}J;\ dF = 0\). Quantum mechanics: \(i\hbar\,\frac{\partial\Psi}{\partial t} = H\Psi\). General relativity: \(R_{\mu\nu} - \tfrac{1}{2}g_{\mu\nu}R = 8\pi G\,T_{\mu\nu}\).

2. I am referring here to the fine structure constant, \(e^2/\hbar c\), whose numerical value (at typical energies for electromagnetic processes) is about 1/137, which is roughly .0073.
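In Gaussian units, where the constant takes the quoted form, the arithmetic is immediate:

```python
# Fine structure constant e^2/(hbar*c) in Gaussian (cgs) units.
e = 4.803e-10      # electron charge, esu
hbar = 1.055e-27   # reduced Planck's constant, erg s
c = 2.998e10       # speed of light, cm/s

alpha = e**2 / (hbar * c)
print(f"alpha ≈ {alpha:.4f} ≈ 1/{1/alpha:.0f}")   # ≈ 0.0073 ≈ 1/137
```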

3. Witten argued that when the Type I string coupling is dialed large, the theory morphs into the Heterotic-O theory with a coupling that’s dialed small, and vice versa; the Type IIB at large coupling morphs into itself, the Type IIB theory but with small coupling. The cases of the Heterotic-E and Type IIA theories are a little more subtle (see The Elegant Universe, Chapter 12, for details), but the overall picture is that all five theories participate in a web of interrelations.

4. For the mathematically inclined reader, the special thing about strings, one-dimensional ingredients, is that the physics describing their motion respects an infinite dimensional symmetry group. That is, as a string moves, it sweeps out a two-dimensional surface, and so the action functional from which its equations of motion are derived is a two-dimensional quantum field theory. Classically, such two-dimensional actions are conformally invariant (invariant under angle-preserving rescalings of the two-dimensional surface), and such symmetry can be preserved quantum mechanically by imposing various restrictions (such as on the number of spacetime dimensions through which the string moves—the dimension, that is, of spacetime). The conformal group of symmetry transformations is infinite-dimensional, and this proves essential to ensuring that the perturbative quantum analysis of a moving string is mathematically consistent. For example, the infinite number of excitations of a moving string that would otherwise have negative norm (arising from the negative signature of the time component of the spacetime metric) can be systematically “rotated” away using the infinite-dimensional symmetry group. For details, the reader can consult M. Green, J. Schwarz, and E. Witten, Superstring Theory, vol. 1 (Cambridge: Cambridge University Press, 1988).

5. As with many major discoveries, credit deserves to be given to those whose insights laid its groundwork as well as to those whose work established its importance. Among those who played such a role for the discovery of branes in string theory are: Michael Duff, Paul Howe, Takeo Inami, Kelly Stelle, Eric Bergshoeff, Ergin Sezgin, Paul Townsend, Chris Hull, Chris Pope, John Schwarz, Ashoke Sen, Andrew Strominger, Curtis Callan, Joe Polchinski, Petr Hořava, J. Dai, Robert Leigh, Hermann Nicolai, and Bernard de Wit.

6. The diligent reader might argue that the Inflationary Multiverse also entwines time in a fundamental way, since, after all, our bubble’s boundary marks the beginning of time in our universe; beyond our bubble is thus beyond our time. While true, my point here is meant more generally—the multiverses discussed so far all emerge from analyses that focus fundamentally on processes occurring throughout space. In the multiverse we will now discuss, time is central from the outset.

7. Alexander Friedmann, The World as Space and Time, 1923, published in Russian, as referenced by H. Kragh, in “Continual Fascination: The Oscillating Universe in Modern Cosmology,” Science in Context 22, no. 4 (2009): 587–612.

8. As an interesting point of detail, the authors of the braneworld cyclic model invoke an especially utilitarian application of dark energy (dark energy will be discussed fully in Chapter 6). In the last phase of each cycle, the presence of dark energy in the braneworlds ensures agreement with today’s observations of accelerated expansion; this accelerated expansion, in turn, dilutes the entropy density, setting the stage for the next cosmological cycle.

9. Large flux values also tend to destabilize a given Calabi-Yau shape for the extra dimensions. That is, the fluxes tend to push the Calabi-Yau shape to grow large, quickly running into conflict with the criterion that extra dimensions not be visible.

Chapter 6: New Thinking About an Old Constant

1. George Gamow, My World Line (New York: Viking Adult, 1970); J. C. Pecker, Letter to the Editor, Physics Today, May 1990, p. 117.

2. Albert Einstein, The Meaning of Relativity (Princeton: Princeton University Press, 2004), p. 127. Note that Einstein used the term “cosmologic member” for what we now call the “cosmological constant”; for clarity, I have made this substitution in the text.

3. The Collected Papers of Albert Einstein, edited by Robert Schulmann et al. (Princeton: Princeton University Press, 1998), p. 316.

4. Of course, some things do change. As pointed out in the notes to Chapter 3, galaxies generally have small velocities beyond the spatial swelling. Over the course of cosmological timescales, such additional motion can alter position relationships; such motion can also result in a variety of interesting astrophysical events such as galaxy collisions and mergers. For the purpose of explaining cosmic distances, however, these complications can be safely ignored.

5. There is one complication that does not affect the essential idea I’ve explained but which does come into play when undertaking the scientific analyses described. As photons travel to us from a given supernova, their number density gets diluted in the manner I’ve described. However, there is another diminishment to which they are subject. In the next section, I’ll describe how the stretching of space causes the wavelength of photons to stretch too, and, correspondingly, their energy to decrease—an effect, as we will see, called redshift. As explained there, astronomers use redshift data to learn about the size of the universe when the photons were emitted—an important step toward determining how the expansion of space has varied through time. But the stretching of photons—the diminishment of their energy—has another effect: It accentuates the dimming of a distant source. And so, to properly determine the distance of a supernova by comparing its apparent and intrinsic brightness, astronomers must take account not just of the dilution of photon number density (as I’ve described in the text), but also the additional diminishment of energy coming from redshift. (More precisely still, this additional dilution factor must be applied twice; the second red shift factor accounts for the rate at which photons arrive being similarly stretched by the cosmic expansion.)
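In the standard notation of cosmology (a schematic summary in conventions I am assuming, not a formula quoted from the text), a source of intrinsic luminosity L at present distance d, whose light reaches us redshifted by a factor (1 + z), is observed with flux

\[
F = \frac{L}{4\pi d^2 (1+z)^2},
\]

with one factor of (1 + z) accounting for the diminished energy of each photon and the second for the stretched-out rate at which the photons arrive.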

6. Properly interpreted, the second proposed answer for the meaning of the distance being measured may also be construed as correct. In the example of earth’s expanding surface, New York, Austin, and Los Angeles all rush away from one another, yet each continues to occupy the same location on earth it always has. The cities separate because the surface swells, not because someone digs them up, puts them on a flatbed, and transports them to a new site. Similarly, because galaxies separate due to the cosmic swelling, they too occupy the same location in space they always have. You can think of them as being stitched to the spatial fabric. When the fabric stretches, the galaxies move apart, yet each remains tethered to the very same point it has always occupied. And so, even though the second and third answers appear different—the former focusing on the distance between us and the location a distant galaxy had eons ago, when the supernova emitted the light we now see; the latter focusing on the distance now between us and that galaxy’s current location—they’re not. The distant galaxy is now, and has been for billions of years, positioned at one and the same spatial location. Only if it moved through space rather than solely ride the wave of swelling space would its location change. In this sense, the second and third answers are actually the same.

7. For the mathematically inclined reader, here is how you do the calculation of the distance—now, at time \(t_{\text{now}}\)—that light has traveled since being emitted at time \(t_{\text{emitted}}\). We will work in the context of an example in which the spatial part of spacetime is flat, and so the metric can be written as \(ds^2 = c^2 dt^2 - a^2(t)\,dx^2\), where a(t) is the scale factor of the universe at time t, and c is the speed of light. The coordinates we are using are called co-moving. In the language developed in this chapter, such coordinates can be thought of as labeling points on the static map; the scale factor supplies the information contained in the map’s legend.

The special characteristic of the trajectory followed by light is that \(ds^2 = 0\) (equivalent to the speed of light always being c) along the path, which implies that \(dx = \frac{c\,dt}{a(t)}\), or, over a finite time interval such as that between \(t_{\text{emitted}}\) and \(t_{\text{now}}\), \(\Delta x = \int_{t_{\text{emitted}}}^{t_{\text{now}}} \frac{c\,dt}{a(t)}\). The left side of this equation gives the distance light travels across the static map between emission and now. To turn this into the distance through real space, we must rescale the formula by today’s scale factor; therefore, the total distance the light traveled equals \(a(t_{\text{now}}) \int_{t_{\text{emitted}}}^{t_{\text{now}}} \frac{c\,dt}{a(t)}\). If space were not stretching, the total travel distance would be \(c\,(t_{\text{now}} - t_{\text{emitted}})\), as expected. When calculating the distance traveled in an expanding universe, we thus see that each segment \(c\,dt\) of the light’s trajectory is multiplied by the factor \(\frac{a(t_{\text{now}})}{a(t)}\), which is the amount by which that segment has stretched, since the moment the light traversed it, until today.

8. More precisely, about 7.12 × 10^–30 grams per cubic centimeter.

9. The conversion is 7.12 × 10^–30 grams/cubic centimeter = (7.12 × 10^–30 grams/cubic centimeter) × (4.6 × 10^4 Planck mass/gram) × (1.62 × 10^–33 centimeter/Planck length)^3 = 1.38 × 10^–123 Planck mass per cubic Planck length.
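
A quick script confirms the arithmetic (the rounding in the quoted conversion factors accounts for the small mismatch in the final digit):

```python
# Sanity check of the conversion quoted above.
rho_cgs = 7.12e-30              # grams per cubic centimeter
planck_masses_per_gram = 4.6e4  # one gram expressed in Planck masses
planck_length_cm = 1.62e-33     # one Planck length expressed in centimeters

rho_planck = rho_cgs * planck_masses_per_gram * planck_length_cm**3
print(rho_planck)               # ~1.39e-123 Planck mass per cubic Planck length
```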

10. For inflation, the repulsive gravity we considered was intense and brief. This is explained by the enormous energy and negative pressure supplied by the inflaton field. However, by modifying a quantum field’s potential energy curve, the amount of energy and negative pressure it supplies can be diminished, thus yielding a mild accelerated expansion. Additionally, a suitable adjustment of the potential energy curve can prolong this period of accelerated expansion. A mild and prolonged period of accelerated expansion is what’s required to explain the supernova data. Nevertheless, the small non-zero value for the cosmological constant remains the most convincing explanation to have emerged in the more than ten years since the accelerated expansion was first observed.

11. The mathematically inclined reader should note that each such jitter contributes an energy that’s inversely proportional to its wavelength, ensuring that the sum over all possible wavelengths yields an infinite energy.

12. For the mathematically inclined reader, the cancellation occurs because supersymmetry pairs bosons (particles with an integral spin value) and fermions (particles with a half-integral spin value). This results in bosons being described by commuting variables, fermions by anticommuting variables, and that is the source of the relative minus sign in their quantum fluctuations.

13. While the assertion that changes to the physical features of our universe would be inhospitable to life as we know it is widely accepted in the scientific community, some have suggested that the range of features compatible with life might be larger than once thought. These issues have been widely written about. See, for example: John Barrow and Frank Tipler, The Anthropic Cosmological Principle (New York: Oxford University Press, 1986); John Barrow, The Constants of Nature (New York: Pantheon Books, 2003); Paul Davies, The Cosmic Jackpot (New York: Houghton Mifflin Harcourt, 2007); Victor Stenger, Has Science Found God? (Amherst, N.Y.: Prometheus Books, 2003); and references therein.

14. Based on the material covered in earlier chapters, you might immediately think the answer is a resounding yes. Consider, you say, the Quilted Multiverse, whose infinite spatial expanse contains infinitely many universes. But you need to be careful. Even with infinitely many universes, the list of different cosmological constants represented might not be long. If, for example, the underlying laws don’t allow for many different cosmological constant values, then regardless of the number of universes, only the small collection of possible cosmological constants would be realized. So, the question we’re asking is whether (a) there are candidate laws of physics that give rise to a multiverse, (b) the multiverse so generated contains far more than 10^124 different universes, and (c) the laws ensure that the cosmological constant’s value varies from universe to universe.

15. These four authors were the first to show fully that by judicious choices of Calabi-Yau shapes, and the fluxes threading their holes, they could realize string models with small, positive cosmological constants, like those found by observations. Together with Juan Maldacena and Liam McAllister, this group subsequently wrote a highly influential paper on how to combine inflationary cosmology with string theory.

16. More precisely, this mountainous terrain would inhabit a roughly 500-dimensional space, whose independent directions—axes—would correspond to different field fluxes. Figure 6.4 is a rough pictorial depiction but gives a feel for the relationships between the various forms for the extra dimensions. Additionally, when speaking of the string landscape, physicists generally envision that the mountainous terrain encompasses, in addition to the possible flux values, all the possible sizes and shapes (the different topologies and geometries) of the extra dimensions. The valleys in the string landscape are locations (specific forms for the extra dimensions and the fluxes they carry) where a bubble universe naturally settles, much as a ball would settle in such a spot in a real mountain terrain. When described mathematically, valleys are (local) minima of the potential energy associated with the extra dimensions. Classically, once a bubble universe acquired an extra dimensional form corresponding to a valley, that feature would never change. Quantum mechanically, however, we will see that tunneling events can result in the form of the extra dimensions changing.

17. Quantum tunneling to a higher peak is possible but substantially less likely according to quantum calculations.

Chapter 7: Science and the Multiverse

1. The duration of the bubble’s expansion prior to collision determines the impact, and attendant disruption, of the ensuing crash. Such collisions also raise an interesting point to do with time, harking back to the example with Trixie and Norton in Chapter 3. When two bubbles collide, their outer edges—where the inflaton field’s energy is high—come into contact. From the perspective of someone within either one of the colliding bubbles, high inflaton energy value corresponds to early moments in time, near that bubble’s big bang. And so, bubble collisions happen at the inception of each universe, which is why the ripples created can affect another early universe process, the formation of the microwave background radiation.

2. We will take up quantum mechanics more systematically in Chapter 8. As we will see there, the statement I’ve made, “slither outside the arena of everyday reality,” can be interpreted on a number of levels. What I have in mind here is the conceptually simplest: the equations of quantum mechanics entail that probability waves generally don’t inhabit the spatial dimensions of common experience. Instead, the waves reside in a different environment that takes account not only of the everyday spatial dimensions but also of the number of particles being described. It is called configuration space and is explained for the mathematically inclined reader in note 4 of Chapter 8.

3. If the accelerated expansion of space that we’ve observed is not permanent, then at some time in the future the expansion of space will slow down. The slowing would allow light from objects that are now beyond our cosmic horizon to reach us; our cosmic horizon would grow. It would then be yet more peculiar to suggest that realms now beyond our horizon are not real since in the future we would have access to those very realms. (You may recall that toward the end of Chapter 2, I noted that the cosmic horizons illustrated in Figure 2.1 will grow larger as time passes. That’s true in a universe in which the pace of spatial expansion is not quickening. However, if the expansion is accelerating, there are regions beyond our horizon that we can never see, regardless of how long we wait. In an accelerating universe, the cosmic horizons can’t grow larger than a size determined mathematically by the rate of acceleration.)
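
For the mathematically inclined reader, a quick numerical check of that last claim (a sketch in toy units, c = H = 1, with an exponential scale factor standing in for accelerated expansion):

```python
# With a(t) = exp(H t), the comoving distance light can ever cover,
# the integral of c dt / a(t), converges to c / H no matter how long
# we wait (toy units c = H = 1; the grid resolution is arbitrary).
import math

def comoving_reach(t_max, steps=200_000):
    dt = t_max / steps
    return sum(dt * math.exp(-(i + 0.5) * dt) for i in range(steps))

for t_max in (1, 5, 20):
    print(t_max, comoving_reach(t_max))   # 0.63..., 0.99..., ~1.0 = c/H
```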

4. Here is a concrete example of a feature that can be common to all universes in a particular multiverse. In Chapter 2, we noted that current data point strongly toward the curvature of space being zero. Yet, for reasons that are mathematically technical, calculations establish that all bubble universes in the Inflationary Multiverse have negative curvature. Roughly speaking, the spatial shapes swept out by equal inflaton values—shapes determined by connecting equal numbers in Figure 3.8b—are more like potato chips than like flat tabletops. Even so, the Inflationary Multiverse remains compatible with observation, because as any shape expands its curvature drops; the curvature of a marble is obvious, while that of the earth’s surface escaped notice for millennia. If our bubble universe has undergone sufficient expansion, its curvature could be negative yet so exceedingly small that today’s measurements can’t distinguish it from zero. That gives rise to a potential test. Should more precise observations in the future determine that the curvature of space is very small but positive, that would provide evidence against our being part of an Inflationary Multiverse. As argued by B. Freivogel, M. Kleban, M. Rodríguez Martínez, and L. Susskind (see “Observational Consequences of a Landscape,” Journal of High Energy Physics 0603, 039 [2006]), measurement of a positive curvature of even 1 part in 10^5 would make a strong case against the kind of quantum tunneling transitions (Chapter 6) envisioned to populate the string landscape.

5. The many cosmologists and string theorists who have advanced this subject include Alan Guth, Andrei Linde, Alexander Vilenkin, Jaume Garriga, Don Page, Sergei Winitzki, Richard Easther, Eugene Lim, Matthew Martin, Michael Douglas, Frederik Denef, Raphael Bousso, Ben Freivogel, I-Sheng Yang, Delia Schwartz-Perlov, among many others.

6. An important caveat is that while the impact of modest changes to a few constants can reliably be deduced, more significant changes to a larger number of constants make the task far more difficult. It is at least possible that such significant changes to a variety of nature’s constants cancel out one another’s effects, or work together in novel ways, and are thus compatible with life as we know it.

7. A little more precisely, if the cosmological constant is negative, but sufficiently tiny, the collapse time would be long enough to allow galaxy formation. For ease, I am glossing over this subtlety.

8. Another point worthy of note is that the calculations I’ve described were undertaken without making a specific choice for the multiverse. Instead, Weinberg and his collaborators proceeded by positing a multiverse in which features could vary and calculated the abundance of galaxies in each of its constituent universes. The more galaxies a universe had, the more weight Weinberg and collaborators gave to its properties in their calculation of the average features a typical observer would encounter. But because they didn’t commit to an underlying multiverse theory, the calculations necessarily failed to account for the probability that a universe with this or that property would actually be found in the multiverse (the probabilities, that is, that we discussed in the previous section). Universes with cosmological constants and primordial fluctuations in certain ranges might be ripe for galaxy formation, but if such universes are rarely created in a given multiverse, it would nevertheless be highly unlikely for us to find ourselves in one of them.

To make the calculations manageable, Weinberg and collaborators argued that since the range of cosmological constant values they were considering was so narrow (between 0 and about 10^–120), the intrinsic probabilities that such universes would exist in a given multiverse were not likely to vary wildly, much as the probabilities that you’ll encounter a 59.99997-pound dog or one weighing 59.99999 pounds also don’t differ substantially. They thus assumed that every value for the cosmological constant in the small range consistent with the formation of galaxies is as intrinsically probable as any other. With our rudimentary understanding of multiverse formation, this might seem like a reasonable first pass. But subsequent work has questioned the validity of this assumption, emphasizing that a full calculation needs to go further: committing to a definite multiverse proposal and determining the actual distribution of universes with various properties. A self-contained anthropic calculation that relies on a bare minimum of assumptions is the only way to judge whether this approach will ultimately bear explanatory fruit.

9. The very meaning of “typical” is also burdened, as it depends on how it’s defined and measured. If we use numbers of kids and cars as our delimiter, we arrive at one kind of “typical” American family. If we use different scales such as interest in physics, love of opera, or immersion in politics, the characterization of a “typical” family will change. And what’s true for the “typical” American family is likely true for “typical” observers in the multiverse: consideration of features beyond just population size would yield a different notion of who is “typical.” In turn, this would affect the predictions for how likely it is that we will see this or that property in our universe. For an anthropic calculation to be truly convincing, it would have to address this issue. Alternatively, as indicated in the text, the distributions would need to be so sharply peaked that there would be minimal variation from one life-supporting universe to another.

10. The mathematical study of sets with an infinite number of members is rich and well developed. The mathematically inclined reader may be familiar with the fact that research going back to the nineteenth century established there are different “sizes” or, more commonly, “levels” of infinity. That is, one infinite quantity can be larger than another. The level of infinity that gives the size of the set containing all the whole numbers is called ℵ0 (“aleph-zero”). This infinity was shown by Georg Cantor to be smaller than that giving the number of members contained in the set of real numbers. Roughly speaking, if you try to match up whole numbers and real numbers, you necessarily exhaust the former before the latter. And if you consider the set of all subsets of real numbers, the level of infinity grows larger still.

Now, in all of the examples we discuss in the main text, the relevant infinity is ℵ0, since we are dealing with infinite collections of discrete, or “countable,” objects—various collections, that is, of whole numbers. In the mathematical sense, then, all of the examples have the same size; their total membership is described by the same level of infinity. However, for physics, as we will shortly see, a conclusion of this sort would not be particularly useful. The goal instead is to find a physically motivated scheme for comparing infinite collections of universes that would yield a more refined hierarchy, one that reflects the relative abundance across the multiverse of one set of physical features compared with another. A typical physics approach to a challenge of this sort is to first make comparisons between finite subsets of the relevant infinite collections (since in the finite case, all of the puzzling issues evaporate), and then allow the subsets to include ever more members, ultimately embracing the full infinite collections. The hurdle is finding a physically justifiable way of picking out the finite subsets for comparison, and then also establishing that comparisons remain sensible as the subsets grow larger.
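
As a toy version of this strategy, consider two infinite subsets of the whole numbers that have the same cardinality, ℵ0, yet acquire different relative abundances under growing finite cutoffs (a sketch; the choice of multiples of 2 and 3 is arbitrary):

```python
# Multiples of 2 and multiples of 3 form sets of the same cardinality,
# yet growing finite cutoffs assign them different relative abundances.
def density(divisor, cutoff):
    return sum(1 for n in range(1, cutoff + 1) if n % divisor == 0) / cutoff

for cutoff in (100, 10_000, 1_000_000):
    print(cutoff, density(2, cutoff), density(3, cutoff))
# the ratios settle toward 1/2 and 1/3 as the cutoff grows
```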

11. Inflation is credited with other successes too, including the solution to the magnetic monopole problem. In attempts to meld the three nongravitational forces into a unified theoretical structure (known as grand unification) researchers found that the resulting mathematics implied that just after the big bang a great many magnetic monopoles would have been formed. These particles would be, in effect, the north pole of a bar magnet without the usual pairing with a south pole (or vice versa). But no such particles have ever been found. Inflationary cosmology explains the absence of monopoles by noting that the brief but stupendous expansion of space just after the big bang would have diluted their presence in our universe to nearly zero.

12. Currently, there are differing views on how great a challenge this presents. Some view the measure problem as a knotty technical issue that once solved will provide inflationary cosmology with an important additional detail. Others (for example, Paul Steinhardt) have expressed the belief that solving the measure problem will require stepping so far outside the mathematical formulation of inflationary cosmology that the resulting framework will need to be interpreted as a completely new cosmological theory. My view, one held by a small but growing number of researchers, is that the measure problem is tapping into a deep problem at the very root of physics, one that may require a substantial overhaul of foundational ideas.

Chapter 8: The Many Worlds of Quantum Measurement

1. Both Everett’s original 1956 thesis and the shortened 1957 version can be found in The Many-Worlds Interpretation of Quantum Mechanics, edited by Bryce S. DeWitt and Neill Graham (Princeton: Princeton University Press, 1973).

2. On January 27, 1998, I had a conversation with John Wheeler to discuss aspects of quantum mechanics and general relativity that I would be writing about in The Elegant Universe. Before getting into the science proper, Wheeler noted how important it was, especially for young theoreticians, to find the right language for expressing their results. At the time, I took this as nothing more than sagely advice, perhaps inspired by his speaking with me, a “young theoretician” who’d expressed interest in using ordinary language to describe mathematical insights. On reading the illuminating history laid out in The Many Worlds of Hugh Everett III by Peter Byrne (New York: Oxford University Press, 2010), I was struck by Wheeler’s emphasis of the same theme some forty years earlier in his dealings with Everett, but in a context whose stakes were far higher. In response to Everett’s first draft of his thesis, Wheeler told Everett that he needed to “get the bugs out of the words, not the formalism” and warned him of “the difficulty of expressing in everyday words the goings-on in a mathematical scheme that is about as far removed as it could be from the everyday description; the contradictions and misunderstandings that will arise; the very very heavy burden and responsibility you have to state everything in such a way that these misunderstandings can’t arise.” Byrne makes a compelling case that Wheeler was walking a delicate line between his admiration for Everett’s work and his respect for the quantum mechanical framework that Bohr and many other renowned physicists had labored to build. On the one hand, he didn’t want Everett’s insights to be summarily dismissed by the old guard because the presentation was deemed overreaching, or because of hot-button words (like universes “splitting”) that could appear fanciful. On the other hand, Wheeler didn’t want the established community of physicists to conclude that he was abandoning the demonstrably successful quantum formalism by spearheading an unjustified assault. The compromise Wheeler was imposing on Everett and his dissertation was to keep the mathematics he’d developed but frame its meaning and utility in a softer, more conciliatory tone. At the same time, Wheeler strongly encouraged Everett to visit Bohr and make his case in person, at a blackboard. In 1959 Everett did just that, but what Everett thought would be a two-week showdown amounted to a few unproductive conversations. No minds changed; no positions altered.

3. Let me clarify one imprecision. Schrödinger’s equation shows that the values attained by a quantum wave (or, in the language of the field, the wavefunction) can be positive or negative; more generally, the values can be complex numbers. Such values cannot be interpreted directly as probabilities—what would a negative or complex probability mean? Instead, probabilities are associated with the squared magnitude of the quantum wave at a given location. Mathematically, this means that to determine the probability that a particle will be found at a given location, we take the product of the wave’s value at that point and its complex conjugate. This clarification also addresses an important related issue. Cancellations between overlapping waves are vital to creating an interference pattern. But if the waves themselves were properly described as probability waves, such cancellation couldn’t happen because probabilities are positive numbers. As we now see, however, quantum waves do not only have positive values; this allows cancellations to take place between positive and negative numbers, as well as, more generally, between complex numbers. Because we will only need qualitative features of such waves, for ease of discussion in the main text I will not distinguish between a quantum wave and the associated probability wave (derived from its squared magnitude).

4. For the mathematically inclined reader, note that the quantum wave (wavefunction) for a single particle with large mass would conform to the description I’ve given in the text. However, very massive objects are generally composed of many particles, not one. In such a situation, the quantum mechanical description is more involved. In particular, you might have thought that all of the particles could be described by a quantum wave defined on the same coordinate grid we employ for a single particle—using the same three spatial axes. But that’s not right. The probability wave takes as input the possible position of each particle and produces the probability that the particles occupy those positions. Consequently, the probability wave lives in a space with three axes for each particle—that is, in total three times as many axes as there are particles (or ten times as many, if you embrace string theory’s extra spatial dimensions). This means that the wavefunction for a composite system made of n fundamental particles is a complex-valued function whose domain is not ordinary three-dimensional space but rather 3n-dimensional space; if the number of spatial dimensions is not 3 but rather m, the number 3 in these expressions would be replaced by m. This space is called configuration space. That is, in the general setting, the wavefunction would be a map ψ: R^mn → C, from mn-dimensional configuration space to the complex numbers. When we speak of such a wavefunction as being sharply peaked, we mean that this map would have support in a small mn-dimensional ball within its domain. Note, in particular, that wavefunctions don’t generally reside in the spatial dimensions of common experience. It is only in the idealized case of the wavefunction for a completely isolated single particle that its configuration space coincides with the familiar spatial environment. Note as well that when I say that the quantum laws show that the sharply peaked wavefunction for a massive object traces the same trajectory that Newton’s equations imply for the object itself, you can think of the wavefunction describing the object’s center of mass motion.

5. From this description, you might conclude that there are infinitely many locations that the electron could be found: to properly fill out the gradually varying quantum wave you would need an infinite number of spiked shapes, each associated with a possible position of the electron. How does this relate to Chapter 2 in which we discussed there being finitely many distinct configurations for particles? To avoid constant qualifications that would be of minimal relevance to the major points I am explaining in this chapter, I have not emphasized the fact, encountered in Chapter 2, that to pinpoint the electron’s location with ever-greater accuracy your device would need to exert ever-greater energy. As physically realistic situations have access to finite energy, resolution is thus imperfect. For the spiked quantum waves, this means that in any finite energy context, the spikes have nonzero width. In turn, this implies that in any bounded domain (such as a cosmic horizon) there are finitely many measurably distinct electron locations. Moreover, the thinner the spikes are (the more refined the resolution of the particle’s position) the wider are the quantum waves describing the particle’s energy, illustrating the trade-off necessitated by the uncertainty principle.

6. For the philosophically inclined reader, I’ll note that the two-tiered story for scientific explanation which I’ve outlined has been the subject of philosophical discussion and debate. For related ideas and discussions see Frederick Suppe, The Semantic Conception of Theories and Scientific Realism (Chicago: University of Illinois Press, 1989); James Ladyman, Don Ross, David Spurrett, and John Collier, Every Thing Must Go (Oxford: Oxford University Press, 2007).

7. Physicists often speak loosely of there being infinitely many universes associated with the Many Worlds approach to quantum mechanics. Certainly, there are infinitely many possible probability wave shapes. Even at a single location in space you can continuously vary the value of a probability wave, and so there are infinitely many different values it can have. However, probability waves are not the physical attribute of a system to which we have direct access. Instead, probability waves contain information about the possible distinct outcomes in a given situation, and these need not have infinite variety. Specifically, the mathematically inclined reader will note that a quantum wave (a wavefunction) lies in a Hilbert space. If that Hilbert space is finite-dimensional, then there are finitely many distinct possible outcomes for measurements on the physical system described by that wavefunction (that is, any Hermitian operator has finitely many distinct eigenvalues). This would entail finitely many worlds for a finite number of observations or measurements. It is believed that the Hilbert space associated with physics taking place within any finite volume of space, and limited to having a finite amount of energy, is necessarily finite dimensional (a point we will take up more generally in Chapter 9), which suggests that the number of worlds would similarly be finite.

8. See Peter Byrne, The Many Worlds of Hugh Everett III (New York: Oxford University Press, 2010), p. 177.

9. Over the years, a number of researchers including Neill Graham; Bryce DeWitt; James Hartle; Edward Farhi, Jeffrey Goldstone, and Sam Gutmann; David Deutsch; Sidney Coleman; David Albert; and others, including me, have independently come upon a striking mathematical fact that seems central to understanding the nature of probability in quantum mechanics. For the mathematically inclined reader, here’s what it says: Let ψ be the wavefunction for a quantum mechanical system, a vector that’s an element of the Hilbert space H. The wavefunction for n identical copies of the system is thus ψ ⊗ ψ ⊗ … ⊗ ψ. Let A be any Hermitian operator with eigenvalues α_k and eigenfunctions |λ_k⟩. Let F_k(A) be the “frequency” operator that counts the fraction of the n copies found in the state |λ_k⟩, acting on states lying in H ⊗ H ⊗ … ⊗ H. The mathematical result is that lim_(n→∞) F_k(A)(ψ ⊗ ψ ⊗ … ⊗ ψ) = |⟨λ_k|ψ⟩|^2 (ψ ⊗ ψ ⊗ … ⊗ ψ). That is, as the number of identical copies of the system grows without bound, the wavefunction of the composite system approaches an eigenfunction of the frequency operator, with eigenvalue |⟨λ_k|ψ⟩|^2. This is a remarkable result. Being an eigenfunction of the frequency operator means that, in the stated limit, the fractional number of times an observer measuring A will find α_k is |⟨λ_k|ψ⟩|^2—which looks like the most straightforward derivation of the famous Born rule for quantum mechanical probability. From the Many Worlds perspective, it suggests that those worlds in which the fractional number of times that α_k is observed fails to agree with the Born rule have zero Hilbert space norm in the limit of arbitrarily large n. In this sense, it seems as though quantum mechanical probability has a direct interpretation in the Many Worlds approach. All observers in the Many Worlds will see results with frequencies that match those of standard quantum mechanics, except for a set of observers whose Hilbert space norm becomes vanishingly small as n goes to infinity. As promising as this seems, on reflection it is less convincing. In what sense can we say that an observer with a small Hilbert space norm, or a norm that goes to zero as n goes to infinity, is unimportant or doesn’t exist? We want to say that such observers are anomalous or “unlikely,” but how do we draw a link between a vector’s Hilbert space norm and these characterizations? An example makes the issue manifest. In a two-dimensional Hilbert space, say with states spin-up |↑⟩ and spin-down |↓⟩, consider a state ψ = √.98 |↑⟩ + √.02 |↓⟩. This state yields the probability for measuring spin-up of about .98 and for measuring spin-down to be about .02. If we consider n copies of this spin system, ψ ⊗ ψ ⊗ … ⊗ ψ, then as n goes to infinity, the vast majority of terms in the expansion of this vector have roughly equal numbers of spin-up and spin-down states. So from the standpoint of observers (copies of the experimenter) the vast majority would see spin-ups and spin-downs in a ratio that does not agree with the quantum mechanical predictions. Only the very few terms in the expansion of ψ ⊗ ψ ⊗ … ⊗ ψ that have 98 percent spin-ups and 2 percent spin-downs are consistent with the quantum mechanical expectation; the result above tells us that these states are the only ones with nonzero Hilbert space norm as n goes to infinity. In some sense, then, the vast majority of terms in the expansion of ψ ⊗ ψ ⊗ … ⊗ ψ (the vast majority of copies of the experimenter) need to be considered as “nonexistent.” The challenge lies in understanding what, if anything, that means.
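
For a numerical glimpse of this concentration, here is a small sketch (the one percent tolerance and the copy counts are arbitrary illustrative choices) that adds up the squared amplitudes of all “deviant” branches, those whose spin-up fraction differs from .98 by more than the tolerance:

```python
# Each branch with k spin-ups has squared amplitude (.98)^k (.02)^(n-k)
# times the number of such orderings, i.e., a binomial weight; sum the
# weights of branches whose spin-up fraction strays from .98.
from math import lgamma, log, exp

def log_binomial_weight(n, k, p):
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def deviant_weight(n, p=0.98, tol=0.01):
    return sum(exp(log_binomial_weight(n, k, p))
               for k in range(n + 1) if abs(k / n - p) > tol)

for n in (100, 1000, 10000):
    print(n, deviant_weight(n))   # ~0.27, ~0.02, ~1e-12: shrinking toward zero
```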

I also independently found the mathematical result described above, while preparing lectures for a course on quantum mechanics I was teaching. It was a notable thrill to have the probabilistic interpretation of quantum mechanics seemingly fall out directly from the mathematical formalism—I would imagine the list of physicists (noted above) who found this result before me had the same experience. I’m surprised at how little known the result is among mainstream physicists. For instance, I don’t know of any standard quantum physics textbook that includes it. My take on the result is that it is best thought of as (1) a strong mathematical motivation for the Born probability interpretation of the wavefunction—had Born not “guessed” this interpretation, the math would have led someone there eventually; and (2) a consistency check on the probability interpretation—had this mathematical result not held, it would have challenged the internal sensibility of the probability interpretation of the wavefunction.

10. I’ve been using the phrase “Zaxtarian-type reasoning” to denote a framework in which probability enters through the ignorance of each inhabitant of the Many Worlds as to which particular world he or she inhabits. Lev Vaidman has suggested taking more of the particulars of the Zaxtarian scenario to heart. He argues that probability enters the Many Worlds approach in the temporal window between an experimenter completing a measurement and reading the result. But, skeptics counter, this is too late in the game: it’s incumbent on quantum mechanics, and science more generally, to make predictions about what will happen in an experiment, not what did happen. What’s more, it seems precarious for the bedrock of quantum probability to rely on what seems to be an avoidable time delay: if a scientist gains immediate access to the result of his or her experiment, quantum probability seems in danger of being squeezed out of the picture. (For a detailed discussion see David Albert, “Probability in the Everett Picture,” in Many Worlds: Everett, Quantum Theory, and Reality, eds. Simon Saunders, Jonathan Barrett, Adrian Kent, and David Wallace (Oxford: Oxford University Press, 2010), and Peter Lewis, “Uncertainty and Probability for Branching Selves,” philsci-archive.pitt.edu/archive/00002636.) A final issue of relevance to Vaidman’s suggestion and also to this type of ignorance probability is this: when I flip a fair coin in the familiar context of a single universe, the reason I say there’s a 50 percent chance the coin will land heads is that while I’ll experience only one outcome, there are two outcomes that I could have experienced. But let me now close my eyes and imagine I’ve just measured the position of the somber electron. I know that my detector display says either Strawberry Fields or Grant’s Tomb, but I don’t know which. You then confront me. “Brian,” you say, “what’s the probability that your screen says Grant’s Tomb?” To answer, I think back on the coin toss, and just as I’m about to follow the same reasoning, I hesitate. “Hmmm,” I think. “Are there really two outcomes that I could have experienced? The only detail that differentiates me from the other Brian is the reading on my screen. To imagine that my screen could have returned a different reading is to imagine that I’m not me. It’s to imagine I’m the other Brian.” So even though I don’t know what my screen says, I—this guy talking in my head right now—couldn’t have experienced any other outcome; that suggests that my ignorance doesn’t lend itself to probabilistic thinking.

11. Scientists are meant to be objective in their judgments. But I feel comfortable admitting that because of its mathematical economy and far-reaching implications for reality, I’d like the Many Worlds approach to be right. At the same time, I maintain a healthy skepticism, fueled by the difficulties of integrating probability into the framework, so I’m fully open to alternative lines of attack. Two of these provide good bookends for the discussion in the text. One tries to develop the incomplete Copenhagen approach into a full theory; the other can be viewed as Many Worlds without the many worlds.

The first direction, spearheaded by Giancarlo Ghirardi, Alberto Rimini, and Tullio Weber, tries to make sense of the Copenhagen scheme by changing Schrödinger’s math so that it does allow probability waves to collapse. This is easier said than done. The modified math should barely affect the probability waves for small things like individual particles or atoms, since we don’t want to change the theory’s successful descriptions in this domain. But the modifications must kick in with a vengeance when a large object like a piece of laboratory equipment comes into play, causing the commingled probability wave to collapse. Ghirardi, Rimini, and Weber developed math that does just that. The upshot is that with their modified equations, measuring does indeed make a probability wave collapse; it sets in motion the evolution pictured in Figure 8.6.

The second approach, initially developed by Prince Louis de Broglie back in the 1920s, and then more fully decades later by David Bohm, starts from a mathematical premise that resonates with Everett. Schrödinger’s equation should always, in every circumstance, govern the evolution of quantum waves. So, in the de Broglie–Bohm theory, probability waves evolve just as they do in the Many Worlds approach. The de Broglie–Bohm theory goes on, however, to propose the very idea I emphasized earlier as being wrongheaded: in the de Broglie–Bohm approach, all but one of the many worlds encapsulated in a probability wave are merely possible worlds; only one world is singled out as real.

To accomplish this, the approach jettisons the traditional quantum haiku of wave or particle (an electron is a wave until it’s measured, whereupon it reverts to being a particle) and instead advocates a picture that embraces waves and particles. Contrary to the standard quantum view, de Broglie and Bohm envision particles as tiny, localized entities that travel along definite trajectories, yielding an ordinary, unambiguous reality, much as in the classical tradition. The only “real” world is the one in which the particles inhabit their unique, definite positions. Quantum waves then play a very different role. Rather than embodying a multitude of realities, a quantum wave acts to guide the motion of particles. The quantum wave pushes particles toward locations where the wave is large, making it likely that particles will be found at such locations, and away from locations where the wave is small, making it unlikely that particles will be found at those locations. To account for the process, de Broglie and Bohm needed an additional equation describing the effect of a quantum wave on a particle, so in their approach, Schrödinger’s equation, while not superseded, shares the stage with another mathematical player. (The mathematically inclined reader can see these equations below.)

For many years, the word on the street was that the de Broglie–Bohm approach was not worth considering, laden as it was with unnecessary baggage—not only a second equation but also, since it involves both particles and waves, a doubly long list of ingredients. More recently, there has been a growing recognition that these criticisms need context. As the Ghirardi-Rimini-Weber work makes explicit, even a sensible version of the standard-bearer Copenhagen approach requires a second equation. Additionally, the inclusion of both waves and particles yields an enormous benefit: it restores the notion of objects moving from here to there along definite trajectories, a return to a basic and familiar feature of reality that the Copenhagenists may have persuaded everyone to relinquish a little too quickly. More technical criticisms are that the approach is nonlocal (the new equation shows that influences exerted at one location appear to instantaneously affect distant locations) and that it is difficult to reconcile the approach with special relativity. The potency of the former criticism is diminished by the recognition that even the Copenhagen approach has non-local features that, moreover, have been confirmed experimentally. The latter point regarding relativity, though, is certainly an important one that has yet to be fully resolved.

Part of the resistance to the de Broglie–Bohm theory arose because the theory’s mathematical formalism has not always been presented in its most straightforward form. Here, for the mathematically inclined reader, is the most direct derivation of the theory.

Begin with Schrödinger’s equation for the wavefunction of a particle, iħ ∂ψ(x, t)/∂t = −(ħ^2/2m) ∂^2ψ(x, t)/∂x^2 + V(x)ψ(x, t), where the probability density for the particle to be at position x, p(x), is given by the standard equation p(x) = ψ*(x)ψ(x). Then, imagine assigning a definite trajectory to the particle, with velocity at x given by a function v(x). What physical condition should this velocity function satisfy? Certainly, it should ensure conservation of probability: if the particle is moving with velocity v(x) from one region into another, the probability density should adjust accordingly: ∂p/∂t = −∂(vp)/∂x. It is now straightforward to solve for v(x) and find v(x) = (ħ/m) Im(ψ*(x) ∂ψ(x)/∂x)/(ψ*(x)ψ(x)), where m is the particle’s mass.
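
A short numerical check of the resulting guidance equation (a sketch in units ħ = m = 1; the Gaussian packet and its parameters are illustrative choices): for a packet carrying momentum p0, the formula yields v = p0/m at every point, as one would hope.

```python
# Guidance velocity v(x) = (hbar/m) Im( psi*(x) dpsi/dx ) / (psi*(x) psi(x)),
# evaluated numerically for a Gaussian packet carrying momentum p0.
import cmath

hbar, m, p0, sigma = 1.0, 1.0, 0.7, 2.0

def psi(x):
    return cmath.exp(-x**2 / (2 * sigma**2) + 1j * p0 * x / hbar)

def velocity(x, h=1e-6):
    dpsi = (psi(x + h) - psi(x - h)) / (2 * h)   # numerical derivative
    return (hbar / m) * (psi(x).conjugate() * dpsi).imag / abs(psi(x))**2

for x in (-1.0, 0.0, 2.5):
    print(x, velocity(x))   # ~0.7 = p0/m at every point, as expected
```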

Together with Schrödinger’s equation, this latter equation defines the de Broglie–Bohm theory. Note that this latter equation is nonlinear, but this has no bearing on Schrödinger’s equation, which retains its full linearity. The proper interpretation, then, is that this approach to filling in the gaps left by the Copenhagen approach adds a new equation, which depends nonlinearly on the wavefunction. All of the power and beauty of the underlying wave equation, that of Schrödinger, is fully preserved.

I might also add that the generalization to many particles is immediate: on the right-hand side of the new equation, we substitute the wavefunction of the multiparticle system: ψ(x1, x2, x3, … xn), and in calculating the velocity of the kth particle, we take the derivative with respect to the k-th coordinate (working, for ease, in a one-dimensional space; for higher dimensions, we suitably increase the number of coordinates). This generalized equation manifests the nonlocality of this approach: the velocity of the kth particle depends, instantaneously, on the positions of all other particles (as the particles’ coordinate locations are the arguments of the wavefunction).

12. Here is a concrete in-principle experiment for distinguishing the Copenhagen and Many Worlds approaches. An electron, like all other elementary particles, has a property known as spin. Somewhat as a top can spin about an axis, an electron can too, with one significant difference being that the rate of this spin—regardless of the direction of the axis—is always the same. It is an intrinsic property of the electron, like its mass or its electrical charge. The only variable is whether the spin is clockwise or counterclockwise about a given axis. If it is counterclockwise, we say the electron’s spin about that axis is up; if it is clockwise, we say the electron’s spin is down. Because of quantum mechanical uncertainty, if the electron’s spin about a given axis is definite—say, with 100 percent certainty its spin is up about the z-axis—then its spin about the x- or y-axis is uncertain: about the x-axis the spin would be 50 percent up and 50 percent down; and similarly for the y-axis.

Imagine, then, starting with an electron whose spin about the z-axis is 100 percent up and then measuring its spin about the x-axis. According to the Copenhagen approach, if you find spin-down, that means the probability wave for the electron’s spin has collapsed: the spin-up possibility has been erased from reality, leaving the sole spike at spin-down. In the Many Worlds approach, by contrast, both the spin-up and spin-down outcomes occur, so, in particular, the spin-up possibility survives fully intact.

To adjudicate between these two pictures, imagine the following. After you measure the electron’s spin about the x-axis, have someone fully reverse the physical evolution. (The fundamental equations of physics, including that of Schrödinger, are time-reversal invariant, which means, in particular, that, at least in principle, any evolution can be undone. See The Fabric of the Cosmos for an in-depth discussion of this point.) Such reversal would be applied to everything: the electron, the equipment, and anything else that’s part of the experiment. Now, if the Many Worlds approach is correct, a subsequent measurement of the electron’s spin about the z-axis should yield, with 100 percent certainty, the value with which we began: spin-up. However, if the Copenhagen approach is correct (by which I mean a mathematically coherent version of it, such as the Ghirardi-Rimini-Weber formulation), we would find a different answer. Copenhagen says that upon measurement of the electron’s spin about the x-axis, in which we found spin-down, the spin-up possibility was annihilated. It was wiped off reality’s ledger. And so, upon reversing the measurement we don’t get back to our starting point because we’ve permanently lost part of the probability wave. Upon subsequent measurement of the electron’s spin about the z-axis, then, there is not 100 percent certainty that we will get the same answer we started with. Instead, it turns out that there’s a 50 percent chance that we will and a 50 percent chance that we won’t. If you were to undertake this experiment repeatedly, and if the Copenhagen approach is correct, on average, half the time you would not recover the same answer you initially did for the electron’s spin about the z-axis. The challenge, of course, is in carrying out the full reversal of a physical evolution. But, in principle, this is an experiment that would provide insight into which of the two theories is correct.
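
To make the contrast concrete, here is a deliberately bare-bones sketch; it tracks only the electron’s two-dimensional spin space and idealizes the reversal as exact, ignoring the measuring apparatus entirely:

```python
# The electron's spin state lives in a 2-dim vector space; spin-z up = (1, 0).
import math

up_z = (1.0, 0.0)
s = 1 / math.sqrt(2)
down_x = (s, -s)   # spin-down about the x-axis

def probability(outcome, state):
    overlap = outcome[0] * state[0] + outcome[1] * state[1]
    return overlap**2

# Many Worlds: the measurement interaction is unitary, so undoing it exactly
# restores the original state; a z-measurement then gives up with certainty.
state_after_reversal_mw = up_z
print(probability(up_z, state_after_reversal_mw))   # 1.0

# Collapse (Copenhagen-type): finding spin-down about x erases the spin-up-x
# piece of the wave; no reversal can recover it, and a z-measurement on the
# collapsed state gives up only half the time.
state_after_collapse = down_x
print(probability(up_z, state_after_collapse))      # 0.5
```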

Chapter 9: Black Holes and Holograms

1. Einstein undertook calculations within general relativity to prove mathematically that Schwarzschild’s extreme configurations—what we would now call a black hole—could not exist. The mathematics underlying his calculations was invariably correct. But he made additional assumptions that, given the intense folding of space and time that would be caused by a black hole, turn out to be too restrictive; in essence, the assumptions left out the possibility of matter imploding. The assumptions meant that Einstein’s mathematical formulation did not have the latitude to reveal black holes as possibly real. But this was an artifact of Einstein’s approach, not an indication of whether black holes might actually form. The modern understanding makes clear that general relativity allows for black hole solutions.

2. Once a system reaches a maximal entropy configuration (such as steam, at a fixed temperature, that is uniformly spread throughout a vat), it will have exhausted its capacity for yet further entropic increase. So, the more precise statement is that entropy tends to increase, until it reaches the largest value the system can support.

3. In 1972, James Bardeen, Brandon Carter, and Stephen Hawking worked out the mathematical laws underlying the evolution of black holes, and found that the equations looked just like those of thermodynamics. To translate between the two sets of laws, all one needed to do was substitute “area of black hole’s horizon” for “entropy” (and vice versa), and “gravity at the surface of the black hole” for “temperature.” So, for Bekenstein’s idea to hold—for this similarity to not just be a coincidence, but to reflect the fact that black holes have entropy—black holes would also need to have a nonzero temperature.

4. The reason for the apparent change in energy is far from obvious; it relies on an intimate connection between energy and time. You can think of a particle’s energy as the vibrational speed of its quantum field. Noting that the very meaning of speed invokes the concept of time, a relationship between energy and time becomes apparent. Now, black holes have a profound effect on time. From a distant vantage point, time appears to slow for an object approaching the horizon of a black hole, and comes to a stop at the horizon itself. Upon crossing the horizon, time and space interchange roles—inside the black hole, the radial direction becomes the time direction. This implies that within the black hole, the notion of positive energy coincides with motion in the radial direction toward the black hole’s singularity. When the negative energy member of a particle pair crosses the horizon, it does indeed fall toward the black hole’s center. Thus the negative energy it had from the perspective of someone watching from afar becomes positive energy from the perspective of someone situated within the black hole itself. This makes the interior of the black hole a place where such particles can exist.

5. When a black hole shrinks, the surface area of its event horizon shrinks too, conflicting with Hawking’s pronouncement that total surface area increases. Remember, however, that Hawking’s area theorem is based on classical general relativity. We are now taking account of quantum processes and coming to a more refined conclusion.

6. To be a little more precise, it’s the minimum number of yes-no questions whose answers uniquely specify the microscopic details of the system.

7. Hawking found that the entropy is the area of the event horizon in Planck units, divided by four.

8. For all the insights that will be described as this chapter unfolds, the issue of a black hole’s microscopic makeup has yet to be fully resolved. As I mentioned in Chapter 4, in 1996, Andrew Strominger and Cumrun Vafa discovered that if one (mathematically) gradually turns down the strength of gravity, then certain black holes morph into particular collections of strings and branes. By counting the possible rearrangements of these ingredients, Strominger and Vafa recovered, in the most explicit manner ever achieved, Hawking’s famous black hole entropy formula. Even so, they were not able to describe these ingredients at stronger gravitational strength, i.e., when the black hole actually forms. Other authors, such as Samir Mathur and various of his collaborators, have put forward other ideas, such as the possibility that black holes are what they call “fuzz balls,” accumulations of vibrating strings strewn throughout the black hole’s interior. These ideas remain tentative. The results we discuss later in this chapter (in the section “String Theory and Holography”) provide some of the sharpest insight into this question.

9. More precisely, gravity can be canceled in a region of space by going into a freely falling state of motion. The size of the region depends on the scales over which the gravitational field varies. If the gravitational field varies only over large scales (that is, if the gravitational field is uniform, or nearly so), your free-fall motion will cancel gravity over a large region of space. But if the gravitational field varies over short-distance scales—the scales of your body, say—then you might cancel gravity at your feet and yet still feel it at your head. This becomes particularly relevant later in your fall because the gravitational field gets ever stronger ever closer to the black hole’s singularity; its strength rises sharply as your distance from the singularity decreases. The rapid variation means there is no way to cancel the effects of the singularity, which will ultimately stretch your body to its breaking point since the gravitational pull on your feet, if you jump in feetfirst, will be ever stronger than the pull on your head.

10. This discussion exemplifies the discovery, made in 1976 by William Unruh, that links one’s motion and the particles one encounters. Unruh found that if you accelerate through otherwise empty space, you will encounter a bath of particles at a temperature determined by your motion. General relativity instructs us to determine one’s rate of acceleration by comparing with the benchmark set by free-fall observers (see Fabric of the Cosmos, Chapter 3). A distant, non-free-fall observer thereby sees radiation emerging from a black hole; a free-fall observer does not.

11. A black hole forms if the mass M within a sphere of radius R exceeds c2R/2G, where c is the speed of light and G is Newton’s constant.
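
In conventional units the threshold is easy to make tangible (the one-meter example below is my own illustration):

```python
# The threshold M = c^2 R / (2 G) in everyday units.
G = 6.674e-11    # Newton's constant, in m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, in m/s

def mass_threshold(radius_m):
    """Mass beyond which a sphere of the given radius becomes a black hole."""
    return c**2 * radius_m / (2 * G)

# A one-meter sphere would need roughly 6.7e26 kg, about a hundred
# Earth masses, packed inside before it forms a black hole.
print(mass_threshold(1.0))
```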

12. In actuality, as the matter collapsed under its own weight and a black hole formed, the event horizon would generally be located within the boundary of the region we’ve been discussing. This means that we would not have so far maxed out the entropy that the region itself could contain. This is easily remedied. Throw more material into the black hole, causing the event horizon to swell out to the region’s original boundary. Since entropy would again increase throughout this somewhat more elaborate process, the entropy of the material we put within the region would be less than that of the black hole that fills the region, i.e., the surface area of the region in Planck units.

13. G. ’t Hooft, “Dimensional Reduction in Quantum Gravity.” In Salam Festschrift, edited by A. Ali, J. Ellis, and S. Randjbar-Daemi (River Edge, N.J.: World Scientific, 1993), pp. 284–96 (QCD161:C512:1993).

14. We’ve discussed that “tired” or “exhausted” light is light whose wavelength is stretched (redshifted) and vibrational frequency reduced by virtue of its having expended energy climbing away from a black hole (or climbing away from any source of gravity). Like more familiar cyclical processes (the earth’s orbit around the sun; the earth’s rotation on its axis, etc.), the vibrations of light can be used to define elapsed time. In fact, the vibrations of light emitted by excited Cesium-133 atoms are now used by scientists to define the second. The tired light’s slower vibrational frequency thus implies that the passage of time near the black hole—as viewed by the faraway observer—is slower too.

15. With most important discoveries in science, the pinnacle result relies on a collection of earlier works. Such is the case here. In addition to ’t Hooft, Susskind, and Maldacena, the researchers who helped blaze the trail to this result and develop its consequences include Steve Gubser, Joe Polchinski, Alexander Polyakov, Ashoke Sen, Andy Strominger, Cumrun Vafa, Edward Witten, and many others.

For the mathematically inclined reader, the more precise statement of Maldacena’s result is the following. Let N be the number of three-branes in the brane stack, and let g be the value of the coupling constant in the Type IIB string theory. When gN is a small number, much less than one, the physics is well described by low-energy strings moving on the brane stack. In turn, such strings are well described by a particular four-dimensional supersymmetric conformally invariant quantum field theory. But when gN is a large number, this field theory is strongly coupled, making its analytical treatment difficult. However, in this regime, Maldacena’s result is that we can use the description of strings moving on the near horizon geometry of the brane stack, which is AdS_5 × S^5 (anti-de Sitter five-space times the five-sphere). The radius of these spaces is controlled by gN (specifically, the radius is proportional to (gN)^1/4), and thus for large gN, the curvature of AdS_5 × S^5 is small, ensuring that string theory calculations are tractable (in particular, they are well approximated by calculations in a particular modification of Einsteinian gravity). Therefore, as the value of gN varies from small to large values, the physics morphs from being described by four-dimensional supersymmetric conformally invariant quantum field theory to being described by ten-dimensional string theory on AdS_5 × S^5. This is the so-called AdS/CFT (anti-de Sitter space/conformal field theory) correspondence.

16. Although a full proof of Maldacena’s argument remains beyond reach, in recent years the link between the bulk and boundary descriptions has become increasingly well understood. For example, a class of calculations has been identified whose results are accurate for any value of the coupling constant. The results can therefore be explicitly tracked from small to large values. This provides a window onto the “morphing” process by which a description of physics from the bulk perspective transforms into a description in the boundary perspective, and vice versa. Such calculations have shown, for instance, how chains of interacting particles from the boundary perspective can transform into strings in the bulk perspective—a particularly convincing interpolation between the two descriptions.

17. More precisely, this is a variation on Maldacena’s result, modified so that the quantum field theory on the boundary is not the one that originally arose in his investigations, but instead closely approximates quantum chromodynamics. This variation also entails parallel modifications to the bulk theory. Specifically, following the work of Witten, the high temperature of the boundary theory translates into a black hole in the interior description. In turn, the dictionary between the two descriptions shows that the difficult viscosity calculations of the quark-gluon plasma translate into the response of the black hole’s event horizon to particular deformations—a technical but tractable calculation.

18. Another approach to providing a full definition of string theory emerged from earlier work in an area called Matrix theory (another possible meaning of the “M” in M-theory), developed by Tom Banks, Willy Fischler, Steve Shenker, and Leonard Susskind.

Chapter 10: Universes, Computers, and Mathematical Reality

1. The number I quoted, 10^55 grams, accounts for the contents of the observable universe today, but at ever-earlier times, the temperature of these constituents would be larger and so they would contain higher energy. The number 10^65 grams is a better estimate of what you’d need to gather into a tiny speck to recapitulate the evolution of our universe from when it was roughly one second old.

2. You might think that because your speed is constrained to be less than the speed of light, your kinetic energy will also be limited. But that’s not the case. As your speed gets ever closer to that of light, your energy grows ever larger; according to special relativity, it has no bounds. Mathematically, the formula for your energy is E = mc^2/√(1 − v^2/c^2), where m is your mass, c is the speed of light, and v is your speed. As you can see, as v approaches c, E grows arbitrarily large. Note too that the discussion is from the perspective of someone watching you fall, say someone stationary on the surface of the earth. From your perspective, while you are in free fall, you are stationary and all the surrounding matter is acquiring increasing speed.
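
A few sample values make the unbounded growth vivid (a sketch in units with m = c = 1, so E is measured in multiples of the rest energy):

```python
# E = m c^2 / sqrt(1 - v^2/c^2) grows without bound as v approaches c.
import math

def energy(v):
    return 1.0 / math.sqrt(1.0 - v**2)

for v in (0.9, 0.99, 0.9999, 0.999999):
    print(v, energy(v))   # ~2.3, ~7.1, ~71, ~707: no upper limit
```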

3. With our current level of understanding, there is significant flexibility in such estimates. The number “10 grams” comes from the following consideration: the energy scale at which inflation takes place is thought to be about 10^–5 or so times the Planck energy scale, where the latter is about 10^19 times the energy equivalent of the mass of a proton. (If inflation happened at a higher energy scale, models suggest that evidence for gravitational waves produced in the early universe should already have been seen.) In more conventional units, the Planck scale is about 10^–5 grams (small by everyday standards, but enormous by the scales of elementary particle physics, where such energies would be carried by individual particles). The energy density of an inflaton field would therefore have been about 10^–5 grams packed in every cubic volume whose linear dimension is set by roughly 10^5 times the Planck length (recall, from quantum uncertainty, that energies and lengths scale inversely proportional to each other), which is about 10^–28 centimeters. The total mass-energy carried by such an inflaton field in a volume that is 10^–26 centimeters on a side is thus: 10^–5 grams/(10^–28 centimeters)^3 × (10^–26 centimeters)^3, which is about 10 grams. Readers of The Fabric of the Cosmos may recall that there I used a slightly different value. The difference came from the assumption that the energy scale of the inflaton was slightly higher.
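
The arithmetic of the estimate is easy to check (a sketch tracking only the powers of ten quoted above):

```python
# Powers-of-ten version of the estimate in this note.
energy_density = 1e-5 / (1e-28)**3   # grams per cubic centimeter
region_volume = (1e-26)**3           # cubic centimeters
print(energy_density * region_volume)  # ~10 grams
```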

4. Hans Moravec, Robot: Mere Machine to Transcendent Mind (New York: Oxford University Press, 2000). See also Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Penguin, 2006).

5. See, for example, Robin Hanson, “How to Live in a Simulation,” Journal of Evolution and Technology 7, no. 1 (2001).

6. The Church-Turing thesis argues that any computer of the so-called universal Turing type can simulate the actions of another, and so it’s perfectly reasonable for a computer that’s within the simulation—and hence is itself simulated by the parent computer running the whole simulated world—to perform particular tasks equivalent to those undertaken by the parent computer.
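
A toy illustration of this nesting, purely schematic, with Python's exec standing in for a universal machine: a simulated program can itself run a further simulated program, so tasks can be delegated downward indefinitely.

```python
# A schematic of universality: an interpreter running a program that
# itself runs a program. Python's exec stands in for a universal
# Turing machine; each nested level is "simulated" by its parent.
inner_program = "print('computed two levels down:', 6 * 7)"

outer_program = f"""
# This code runs inside the first simulation and launches a second one.
exec({inner_program!r})
"""

exec(outer_program)  # the "parent computer" running the whole stack
```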

7. Philosopher David Lewis developed a similar idea through what he called Modal Realism. See his On the Plurality of Worlds (Malden, Mass.: Wiley-Blackwell, 2001). However, Lewis’s motivation in introducing all possible universes differs from Nozick’s. Lewis wanted a context where, for example, counterfactual statements (such as, “If Hitler had won the war, the world today would be very different”) would be instantiated.

8. John Barrow has made a similar point in Pi in the Sky (New York: Little, Brown, 1992).

9. As explained in endnote 10 of Chapter 7, the size of this infinity exceeds that of the infinite collection of whole numbers 1, 2, 3, and so on.
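
In symbols, the claim is Cantor's theorem applied to the real numbers:

```latex
% The reals strictly outnumber the whole numbers (Cantor)
\[
|\mathbb{R}| \;=\; 2^{\aleph_0} \;>\; \aleph_0 \;=\; |\mathbb{N}|.
\]
```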

10. This is a variation on the famous Barber of Seville paradox, in which a barber shaves all those, and only those, who don't shave themselves. The question then is: Who shaves the barber? The barber is usually stipulated to be male, to avoid the easy answer—the barber is a woman and so doesn't need to shave.
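
One way to see the contradiction is as an unsatisfiable self-referential definition; here is a throwaway sketch in code (the setup, in which everyone other than the barber shaves himself, is my own simplification):

```python
# The barber shaves exactly those who do not shave themselves.
# Asking whether the barber shaves himself demands
# shaves(barber, barber) == not shaves(barber, barber): impossible,
# which surfaces here as unbounded recursion.
def shaves(person: str, target: str) -> bool:
    if person == "barber":
        return not shaves(target, target)  # the barber's defining rule
    return person == target  # everyone else simply shaves himself

print(shaves("barber", "villager"))  # False: the villager shaves himself

try:
    shaves("barber", "barber")
except RecursionError:
    print("no consistent answer: the rule is self-contradictory")
```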

11. Schmidhuber notes that an efficient strategy would be to have the computer evolve each simulated universe forward in time in a “dovetailed” manner: the first universe would be updated on every other time-step of the computer, the second universe would be updated on every other of the remaining time-steps, the third universe would be updated on every other time-step not already devoted to the first two universes, and so on. In due course, every computable universe would be evolved forward by an arbitrarily large number of time-steps.
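
A small sketch of this dovetailed schedule (the lowest-set-bit rule below is my own compact encoding of the allocation Schmidhuber describes, and the function name is hypothetical):

```python
# Dovetailing: universe 1 gets every other step (1, 3, 5, ...),
# universe 2 every other remaining step (2, 6, 10, ...),
# universe 3 every other step still left (4, 12, 20, ...), etc.
# Step n is assigned to the universe indexed by the lowest set bit of n,
# so every universe receives infinitely many time-steps.
def universe_for_step(n: int) -> int:
    return (n & -n).bit_length()

schedule = [universe_for_step(n) for n in range(1, 17)]
print(schedule)  # [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, 5]
```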

12. A more refined discussion of computable and noncomputable functions would also include limit computable functions. These are functions for which there is a finite algorithm that evaluates them to ever greater precision: the algorithm outputs a sequence of approximations that converge to the true value, even though at no finite stage can it certify that any given digit is final. Such is the case for the digits of ψ: a computer can generate ever-better approximations to ψ, even though it will never reach the end of the computation. So, while ψ is strictly speaking noncomputable, it is limit computable. Most real numbers, however, are not like ψ. They are not just noncomputable; they are not even limit computable.
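
Schematically, limit computability looks like the following sketch. (The target here is a computable stand-in, since a genuinely noncomputable limit cannot be exhibited directly; only the interface, approximations converging with no certified stopping point, is the point.)

```python
# Limit computability, schematically: a total computable approx(t)
# whose outputs converge to the target as t grows, even though no
# stage certifies that any digit has become final.
# The target here is a computable stand-in (the sum converges to 1);
# for a genuinely noncomputable limit the same interface applies,
# but no computable bound on the convergence rate exists.
from fractions import Fraction

def approx(t: int) -> Fraction:
    """Stage-t approximation: a partial sum that converges as t grows."""
    return sum(Fraction(1, 2**k) for k in range(1, t + 1))

for t in (4, 8, 16):
    print(t, approx(t), float(approx(t)))
```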

When we consider “successful” simulations, we should include those based on limit computable functions. In principle, a convincing reality could be generated by the partial output of a computer evaluating limit computable functions.

For the laws of physics to be computable, or even limit computable, the traditional reliance on real numbers would have to be abandoned. This would apply not just to space and time, usually described using coordinates whose values can range over the real numbers, but also to all the other mathematical ingredients the laws use. The strength of an electromagnetic field, for example, could not vary over real numbers, but only over a discrete set of values. Similarly for the probability that an electron is here or there. Schmidhuber has emphasized that all calculations that physicists have ever carried out have involved the manipulation of discrete symbols (written on paper, on a blackboard, or input to a computer). And so, even though this body of scientific work has always been viewed as invoking the real numbers, in practice it doesn't. Similarly for all quantities ever measured. No device has infinite accuracy, and so our measurements always involve discrete numerical outputs. In that sense, all the successes of physics can be read as successes for a digital paradigm. Perhaps, then, the true laws themselves are, in fact, computable (or limit computable).
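
As a toy illustration of trading a real-valued quantity for a discrete set of allowed values (the step size below is an arbitrary stand-in, not a physically motivated constant):

```python
# A digital-physics ansatz: a nominally real-valued field strength is
# restricted to a discrete lattice of allowed values. The step size
# is an arbitrary stand-in, not a physical constant.
from fractions import Fraction

STEP = Fraction(1, 1000)  # allowed values: integer multiples of STEP

def quantize(value: float) -> Fraction:
    """Snap a measured value to the nearest allowed discrete value."""
    return round(value / float(STEP)) * STEP

print(quantize(0.123456))  # -> 123/1000
```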

There are many different perspectives on the possibility of “digital physics.” See, for example, Stephen Wolfram’s A New Kind of Science (Champaign, Ill.: Wolfram Media, 2002) and Seth Lloyd’s Programming the Universe (New York: Alfred A. Knopf, 2006). The mathematician Roger Penrose believes that the human mind is based on noncomputable processes and hence the universe we inhabit must involve noncomputable mathematical functions. From this perspective, our universe does not fall into the digital paradigm. See, for instance, The Emperor’s New Mind (New York: Oxford University Press, 1989) and Shadows of the Mind (New York: Oxford University Press, 1994).

Chapter 11: The Limits of Inquiry

1. Steven Weinberg, The First Three Minutes (New York: Basic Books, 1977), p. 131.