The Fabric of the Cosmos: Space, Time, and the Texture of Reality - Brian Greene (2004)

Notes

Chapter 1

1. Lord Kelvin was quoted by the physicist Albert Michelson during his 1894 address at the dedication of the University of Chicago’s Ryerson Laboratory (see D. Kleppner, Physics Today, November 1998).

2. Lord Kelvin, “Nineteenth Century Clouds over the Dynamical Theory of Heat and Light,” Phil. Mag. ii—6th series, 1 (1901).

3. A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. 47, 777 (1935).

4. Sir Arthur Eddington, The Nature of the Physical World (Cambridge, Eng.: Cambridge University Press, 1928).

5. As described more fully in note 2 of Chapter 6, this is an overstatement because there are examples, involving relatively esoteric particles (such as K-mesons and B-mesons), which show that the so-called weak nuclear force does not treat past and future fully symmetrically. However, in my view and that of many others who have thought about it, since these particles play essentially no role in determining the properties of everyday material objects, they are unlikely to be important in explaining the puzzle of time’s arrow (although, I hasten to add, no one knows this for sure). Thus, while it is technically an overstatement, I will assume throughout that the error made in asserting that the laws treat past and future on equal footing is minimal—at least as far as explaining the puzzle of time’s arrow is concerned.

6. Timothy Ferris, Coming of Age in the Milky Way (New York: Anchor, 1989).

Chapter 2

1. Isaac Newton, Sir Isaac Newton’s Mathematical Principles of Natural Philosophy and His System of the World, trans. A. Motte and Florian Cajori (Berkeley: University of California Press, 1934), vol. 1, p. 10.

2. Ibid., p. 6.

3. Ibid.

4. Ibid., p. 12.

5. Albert Einstein, in Foreword to Max Jammer, Concepts of Space: The History of Theories of Space in Physics (New York: Dover, 1993).

6. A. Rupert Hall, Isaac Newton, Adventurer in Thought (Cambridge, Eng.: Cambridge University Press, 1992), p. 27.

7. Ibid.

8. H. G. Alexander, ed., The Leibniz-Clarke Correspondence (Manchester: Manchester University Press, 1956).

9. I am focusing on Leibniz as the representative of those who argued against assigning space an existence independent of the objects inhabiting it, but many others also strenuously defended this view, among them Christiaan Huygens and Bishop Berkeley.

10. See, for example, Max Jammer, p. 116.

11. V. I. Lenin, Materialism and Empiriocriticism: Critical Comments on a Reactionary Philosophy (New York: International Publications, 1909). Second English ed. of Materializm’ i Empiriokrititsizm’: Kriticheskia Zametki ob’ Odnoi Reaktsionnoi Filosofii (Moscow: Zveno Press, 1909).

Chapter 3

1. For the mathematically trained reader, these four equations are

∇ · E = ρ/ε₀,   ∇ · B = 0,   ∇ × E = −∂B/∂t,   ∇ × B = μ₀J + ε₀μ₀ ∂E/∂t,

where E, B, ρ, J, ε₀, and μ₀ denote the electric field, the magnetic field, the electric charge density, the electric current density, the permittivity of free space, and the permeability of free space, respectively. As you can see, Maxwell’s equations relate the rate of change of the electromagnetic fields to the presence of electric charges and currents. It is not hard to show that these equations imply a speed for electromagnetic waves given by 1/√(ε₀μ₀), which when evaluated is in fact the speed of light.
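
As a quick numerical check of that last claim (a sketch of my own in Python, not part of the original notes; the constants are the standard SI values), evaluating 1/√(ε₀μ₀) does give the speed of light:

```python
import math

eps0 = 8.8541878128e-12    # permittivity of free space, F/m
mu0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

wave_speed = 1 / math.sqrt(eps0 * mu0)
print(f"{wave_speed:.4e} m/s")   # ~2.998e8 m/s: the speed of light
```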

2. There is some controversy as to the role such experiments played in Einstein’s development of special relativity. In his biography of Einstein, Subtle Is the Lord: The Science and the Life of Albert Einstein (Oxford: Oxford University Press, 1982), pp. 115–19, Abraham Pais has argued, using Einstein’s own statements from his later years, that Einstein was aware of the Michelson-Morley results. Albrecht Fölsing in Albert Einstein: A Biography (New York: Viking, 1997), pp. 217–20, also argues that Einstein was aware of the Michelson-Morley result, as well as earlier experimental null results in searching for evidence of the aether, such as the work of Armand Fizeau. But Fölsing and many other historians of science have also argued that such experiments played, at best, a secondary role in Einstein’s thinking. Einstein was primarily guided by considerations of mathematical symmetry, simplicity, and an uncanny physical intuition.

3. For us to see anything, light has to travel to our eyes; similarly, for us to see light, the light itself would have to make the same journey. So, when I speak of Bart’s seeing light that is speeding away, it is shorthand. I am imagining that Bart has a small army of helpers, all moving at Bart’s speed, but situated at various distances along the path that he and the light beam follow. These helpers give Bart updates on how far ahead the light has sped and the time at which the light reached such distant locations. Then, on the basis of this information, Bart can calculate how fast the light is speeding away from him.

4. There are many elementary mathematical derivations of Einstein’s insights on space and time arising from special relativity. If you are interested, you can, for example, take a look at Chapter 2 of The Elegant Universe (together with mathematical details given in the endnotes to that chapter). A more technical but extremely lucid account is Edwin Taylor and John Archibald Wheeler, Spacetime Physics: Introduction to Special Relativity (New York: W. H. Freeman & Co., 1992).

5. The stopping of time at light speed is an interesting notion, but it is important not to read too much into it. Special relativity shows that no material object can ever attain light speed: the faster a material object travels, the harder we’d have to push it to further increase its speed. Just shy of light speed, we’d have to give the object an essentially infinitely hard push for it to go any faster, and that’s something we can’t ever do. Thus, the “timeless” photon perspective is limited to massless objects (of which the photon is an example), and so “timelessness” is permanently beyond what all but a few types of particle species can ever attain. While it is an interesting and fruitful exercise to imagine how the universe would appear when moving at light speed, ultimately we need to focus on perspectives that material objects, such as ourselves, can reach, if we want to draw inferences about how special relativity affects our experiential conception of time.

6. See Abraham Pais, Subtle Is the Lord, pp. 113–14.

7. To be more precise, we define the water to be spinning if it takes on a concave shape, and not spinning if it doesn’t. From a Machian perspective, in an empty universe there is no conception of spinning, so the water’s surface would always be flat (or, to avoid issues of the lack of gravity pulling on the water, we can say that a rope tied between two rocks would always remain slack). The statement here is that, by contrast, in special relativity there is a notion of spinning, even in an empty universe, so that the water’s surface can be concave (and the rope tied between the rocks can be pulled taut). In this sense, special relativity violates Mach’s ideas.

8. Albrecht Fölsing, Albert Einstein (New York: Viking Press, 1997), pp. 208–10.

9. The mathematically inclined reader will note that if we choose units so that the speed of light takes the form of one space unit per one time unit (like one light-year per year or one light-second per second, where a light-year is about 6 trillion miles and a light-second is about 186,000 miles), then light moves through spacetime on 45-degree rays (because such diagonal lines are the ones which cover one space unit in one time unit, two space units in two time units, etc.). Since nothing can exceed the speed of light, any material object must cover less distance in space in a given interval of time than would a beam of light, and hence the path it follows through spacetime must make an angle with the centerline of the diagram (the line running through the center of the loaf from crust to crust) that is less than 45 degrees. Moreover, Einstein showed that the time slices for an observer moving with velocity v—all of space at one moment of such an observer’s time—have an equation (assuming one space dimension for simplicity) given by t_moving = γ(t_stationary − (v/c²)x_stationary), where γ = 1/√(1 − v²/c²) and c is the velocity of light. In units where c = 1, we note that v < 1 and hence a time slice for the moving observer—the locus where t_moving takes on a fixed value—is of the form (t_stationary − vx_stationary) = constant. Such time slices are angled with respect to the stationary time slices (the loci of the form t_stationary = constant), and because v < 1, the angle between them is less than 45 degrees.
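
To make the geometry concrete, here is a small Python sketch (my own illustration, not from the text) that computes the tilt of a moving observer’s time slices in units where c = 1; the tilt approaches, but never reaches, 45 degrees:

```python
import math

# In units with c = 1, a moving observer's time slice is the line
# t_stationary - v * x_stationary = constant, tilted from the
# horizontal by an angle of arctan(v).
for v in (0.1, 0.5, 0.9, 0.99, 0.999):
    tilt = math.degrees(math.atan(v))
    print(f"v = {v}: time slices tilted by {tilt:.2f} degrees")
```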

10. For the mathematically inclined reader, the statement being made is that the geodesics of Minkowski’s spacetime—the paths of extremal spacetime length between two given points—are geometrical entities that do not depend on any particular choice of coordinates or frame of reference. They are intrinsic, absolute, geometric spacetime features. Explicitly, using the standard Minkowski metric, the (timelike) geodesics are straight lines (whose angle with respect to the time axis is less than 45 degrees, since the speed involved is less than that of light).

11. There is something else of importance that all observers, regardless of their motion, also agree upon. It’s implicit in what we’ve described, but it’s worth stating directly. If one event is the cause of another (I shoot a pebble, causing a window to break), all observers agree that the cause happened before the effect (all observers agree that I shot the pebble before the window broke). For the mathematically inclined reader, it is actually not difficult to see this using our schematic depiction of spacetime. If event A is the cause of event B, then a line drawn from A to B intersects each of the time slices (time slices of an observer at rest with respect to A) at an angle that is greater than 45 degrees (the angle between the space axes—axes that lie on any given time slice—and the line between A and B is greater than 45 degrees). For instance, if A and B take place at the same location in space (the rubber band wrapped around my finger [A] causes my finger to turn white [B]) then the line connecting A and B makes a 90-degree angle relative to the time slices. If A and B take place at different locations in space, whatever traveled from A to B to exert the influence (my pebble traveling from slingshot to window) did so at less than light speed, which means the angle differs from 90 degrees (the angle when no speed is involved) by less than 45 degrees—i.e. the angle with respect to the time slices (the space axes) is greater than 45 degrees. (Remember from endnote 9 of this chapter that light speed sets the limit and such motion traces out 45-degree lines.) Now, as in endnote 9, the different time slicings associated with an observer in motion are angled relative to those of an observer at rest, but the angle is always less than 45 degrees (since the relative motion between two material observers is always less than the speed of light). And since the angle associated with causally related events is always greater than 45 degrees, the time slices of an observer, who necessarily travels at less than light speed, cannot first encounter the effect and then later encounter the cause. To all observers, cause will precede effect.
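
For readers who want to check the claim numerically, here is a tiny Python sketch of my own (units with c = 1, as in note 9): sample events inside the forward light cone and observer speeds below light speed, and confirm that no frame assigns the effect an earlier time than the cause:

```python
import random

random.seed(0)
for _ in range(100_000):
    t = random.uniform(0.1, 10.0)
    x = random.uniform(-t, t)            # timelike separation: |x| < t
    v = random.uniform(-0.999, 0.999)    # material observers: |v| < 1
    gamma = 1.0 / (1.0 - v * v) ** 0.5
    t_moving = gamma * (t - v * x)       # Lorentz-transformed time of the effect
    assert t_moving > 0                  # the effect stays after the cause
print("in every sampled frame, the cause precedes the effect")
```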

12. The notion that causes precede their effects (see the preceding note) would, among other things, be challenged if influences could travel faster than the speed of light.

13. Isaac Newton, Sir Isaac Newton’s Mathematical Principles of Natural Philosophy and His System of the World, trans. A. Motte and Florian Cajori (Berkeley: University of California Press, 1962), vol. 1, p. 634.

14. Because the gravitational pull of the earth differs from one location to another, a spatially extended, freely falling observer can still detect a residual gravitational influence. Namely, if the observer, while falling, releases two baseballs—one from his outstretched right arm and the other from his left—each will fall along a path toward the earth’s center. So, from the observer’s perspective, he will be falling straight down toward the earth’s center, while the ball released from his right hand will travel downward and slightly toward the left, while the ball released from his left hand will travel downward and slightly toward the right. Through careful measurement, the observer will therefore see that the distance between the two baseballs slowly decreases; they move toward one another. Crucial to this effect, though, is that the baseballs were released in slightly different locations in space, so that their freely falling paths toward earth’s center were slightly different as well. Thus, a more precise statement of Einstein’s realization is that the smaller the spatial extent of an object, the more fully it can eliminate gravity by going into free fall. While an important point of principle, this complication can be safely ignored throughout the discussion.
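
To get a rough sense of scale (a back-of-the-envelope sketch of my own, with illustrative numbers), two baseballs released a meter apart above the earth drift together with a tiny closing acceleration of about g·d/R:

```python
g = 9.8       # m/s^2, surface gravity
R = 6.37e6    # m, radius of the earth
d = 1.0       # m, initial separation of the two baseballs

a_closing = g * d / R   # both balls aim at earth's center, so they converge
print(f"{a_closing:.2e} m/s^2")   # ~1.5e-6 m/s^2: tiny, but not zero
```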

15. For a more detailed, yet general-level, explanation of the warping of space and time according to general relativity, see, for example, Chapter 3 of The Elegant Universe.

16. For the mathematically trained reader, Einstein’s equations are G_μν = (8πG/c⁴)T_μν, where the left-hand side describes the curvature of spacetime using the Einstein tensor and the right-hand side describes the distribution of matter and energy in the universe using the energy-momentum tensor.

17. Charles Misner, Kip Thorne, and John Archibald Wheeler, Gravitation (San Francisco: W. H. Freeman and Co., 1973), pp. 544–45.

18. In 1954, Einstein wrote to a colleague: “As a matter of fact, one should no longer speak of Mach’s principle at all” (as quoted in Abraham Pais, Subtle Is the Lord, p. 288).

19. As mentioned earlier, successive generations have attributed the following ideas to Mach even though his own writings do not phrase things explicitly in this manner.

20. One qualification here is that objects which are so distant that there hasn’t been enough time since the beginning of the universe for their light—or gravitational influence—to yet reach us have no impact on the gravity we feel.

21. The expert reader will recognize that this statement is, technically speaking, too strong, as there are nontrivial (that is, non–Minkowski space) empty space solutions to general relativity. Here I am simply using the fact that special relativity can be thought of as a special case of general relativity in which gravity is ignored.

22. For balance, let me note that there are physicists and philosophers who do not agree with this conclusion. Even though Einstein gave up on Mach’s principle, during the last thirty years it has taken on a life of its own. Various versions and interpretations of Mach’s idea have been put forward, and, for example, some physicists have suggested that general relativity does fundamentally embrace Mach’s ideas; it’s just that some particular shapes that spacetime can have—such as the infinite flat spacetime of an empty universe—don’t. Perhaps, they suggest, any spacetime that is remotely realistic—populated by stars and galaxies, and so forth—does satisfy Mach’s principle. Others have offered reformulations of Mach’s principle in which the issue is no longer how objects, such as rocks tied by a string or buckets filled with water, behave in an otherwise empty universe, but rather how the various time slicings—the various three-dimensional spatial geometries—relate to one another through time. An enlightening reference on modern thinking about these ideas is Mach’s Principle: From Newton’s Bucket to Quantum Gravity, Julian Barbour and Herbert Pfister, eds. (Berlin: Birkhäuser, 1995), which is a collection of essays on the subject. As an interesting aside, this reference contains a poll of roughly forty physicists and philosophers regarding their view on Mach’s principle. Most (more than 90 percent) agreed that general relativity does not fully conform to Mach’s ideas. Another excellent and extremely interesting discussion of these ideas, from a distinctly pro-Machian perspective and at a level suited to general readers, is Julian Barbour’s book The End of Time: The Next Revolution in Physics (Oxford: Oxford University Press, 1999).

23. The mathematically inclined reader might find it enlightening to learn that Einstein believed that spacetime had no existence independent of its metric (the mathematical device that gives distance relations in spacetime), so that if one were to remove everything—including the metric—spacetime would not be a something. By “spacetime” I always mean a manifold together with a metric that solves the Einstein equations, and so the conclusion we’ve reached, in mathematical language, is that metrical spacetime is a something.

24. Max Jammer, Concepts of Space, p. xvii.

Chapter 4

1. More accurately, this appears to be a medieval conception with historical roots that go back to Aristotle.

2. As we will discuss later in the book, there are realms (such as the big bang and black holes) that still present many mysteries, at least in part owing to extremes of small size and huge densities that cause even Einstein’s more refined theory to break down. So, the statement here applies to all but the extreme contexts in which the known laws themselves become suspect.

3. An early reader of this text, and one who, surprisingly, has a particular expertise in voodoo, has informed me that something is imagined to go from place to place to carry out the voodoo practitioner’s intentions—namely, a spirit. So my example of a fanciful nonlocal process may, depending on your take on voodoo, be flawed. Nevertheless, the idea is clear.

4. To avoid any confusion, let me reemphasize at the outset that when I say, “The universe is not local,” or “Something we do over here can be entwined with something over there,” I am not referring to the ability to exert an instantaneous intentioned control over something distant. Instead, as will become clear, the effect I am referring to manifests itself as correlations between events taking place—usually, in the form of correlations between results of measurements—at distant locations (locations for which there would not be sufficient time for even light to travel from one to the other). Thus, I am referring to what physicists call nonlocal correlations. At first blush, such correlations may not strike you as particularly surprising. If someone sends you a box containing one member of a pair of gloves, and sends the other member of the pair to your friend thousands of miles away, there will be a correlation between the handedness of the glove each of you sees upon opening your respective box: if you see left, your friend will see right; if you see right, your friend will see left. And, clearly, nothing in these correlations is at all mysterious. But, as we will gradually describe, the correlations apparent in the quantum world seem to be of a very different character. It’s as if you have a pair of “quantum gloves” in which each member can be either left-handed or right-handed, and commits to a definite handedness only when appropriately observed or interacted with. The weirdness arises because, although each glove seems to choose its handedness randomly when observed, the gloves work in tandem, even if widely separated: if one chooses left, the other chooses right, and vice versa.

5. Quantum mechanics makes predictions about the microworld that agree fantastically well with experimental observations. On this, there is universal agreement. Nevertheless, because the detailed features of quantum mechanics, as discussed in this chapter, differ significantly from those of common experience, and, relatedly, as there are different mathematical formulations of the theory (and different formulations of how the theory spans the gap between the microworld of phenomena and the macroworld of measured results), there isn’t consensus on how to interpret various features of the theory (and various puzzling data which the theory, nevertheless, is able to explain mathematically), including issues of nonlocality. In this chapter, I have taken a particular point of view, the one I find most convincing based on current theoretical understanding and experimental results. But, I stress here that not everyone agrees with this view, and in a later endnote, after explaining this perspective more fully, I will briefly note some of the other perspectives and indicate where you can read more about them. Let me also stress, as we will discuss later, that the experiments contradict Einstein’s belief that the data could be explained solely on the basis of particles always possessing definite, albeit hidden, properties without any use or mention of nonlocal entanglement. However, the failure of this perspective only rules out a local universe. It does not rule out the possibility that particles have such definite hidden features.

6. For the mathematically inclined reader, let me note one potentially misleading aspect of this description. For multiparticle systems, the probability wave (the wavefunction, in standard terminology) has essentially the same interpretation as just described, but is defined as a function on the configuration space of the particles (for a single particle, the configuration space is isomorphic to real space, but for an N-particle system it has 3N dimensions). This is important to bear in mind when thinking about the question of whether the wavefunction is a real physical entity or merely a mathematical device, since if one takes the former position, one would need to embrace the reality of configuration space as well—an interesting variation on the themes of Chapters 2 and 3. In relativistic quantum field theory, the fields can be defined in the usual four spacetime dimensions of common experience, but there are also somewhat less widely used formulations that invoke generalized wavefunctions—so-called wavefunctionals defined on an even more abstract space, field space.

7. The experiments I am referring to here are those on the photoelectric effect, in which light shining on various metals causes electrons to be ejected from the metal’s surface. Experimenters found that the greater the intensity of the light, the greater the number of electrons emitted. Moreover, the experiments revealed that the energy of each ejected electron was determined by the color—the frequency—of the light. This, as Einstein argued, is easy to understand if the light beam is composed of particles, since greater light intensity translates into more light particles (more photons) in the beam—and the more photons there are, the more electrons they will hit and hence eject from the metallic surface. Furthermore, the frequency of the light would determine the energy of each photon, and hence the energy of each electron ejected, precisely in keeping with the data. The particlelike properties of photons were finally confirmed by Arthur Compton in 1923 through experiments involving the elastic scattering of electrons and photons.
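
As a quick illustration of the relation Einstein used (a sketch of my own; the constants are standard, and the wavelengths are just sample values), the energy a single photon delivers is fixed entirely by its color, via E = hf = hc/λ:

```python
h = 6.626e-34     # Planck's constant, J s
c = 3.0e8         # speed of light, m/s
eV = 1.602e-19    # joules per electron volt

for color, wavelength in (("red", 700e-9), ("violet", 400e-9)):
    energy = h * c / wavelength     # energy per photon, set by frequency alone
    print(f"{color}: {energy / eV:.2f} eV per photon")
# Brighter light of the same color means more photons, not more
# energetic ones, just as the photoelectric data showed.
```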

8. Institut International de Physique Solvay, Rapport et discussions du 5ème Conseil (Paris, 1928), pp. 253ff.

9. Irene Born, trans., The Born-Einstein Letters (New York: Walker, 1971), p. 223.

10. Henry Stapp, Nuovo Cimento 40B (1977), 191–204.

11. David Bohm is among the creative minds that worked on quantum mechanics during the twentieth century. He was born in Pennsylvania in 1917 and was a student of Robert Oppenheimer at Berkeley. While teaching at Princeton University, he was called to appear in front of the House Un-American Activities Committee, but refused to testify at the hearings. Instead, he departed the United States, becoming a professor at the University of São Paulo in Brazil, then at the Technion in Israel, and finally at Birkbeck College of the University of London. He lived in London until his death in 1992.

12. Certainly, if you wait long enough, what you do to one particle can, in principle, affect the other: one particle could send out a signal alerting the other that it had been subjected to a measurement, and this signal could affect the receiving particle. However, as no signal can travel faster than the speed of light, this kind of influence is not instantaneous. The key point in the present discussion is that at the very moment that we measure the spin of one particle about a chosen axis we learn the spin of the other particle about that axis. And so, any kind of “standard” communication between the particles—luminal or subluminal communication—is not relevant.

13. In this and the next section, the distillation of Bell’s discovery which I am using is a “dramatization” inspired by David Mermin’s wonderful papers: “Quantum Mysteries for Anyone,” Journal of Philosophy 78 (1981), pp. 397–408; “Can You Help Your Team Tonight by Watching on TV?,” in Philosophical Consequences of Quantum Theory: Reflections on Bell’s Theorem, James T. Cushing and Ernan McMullin, eds. (University of Notre Dame Press, 1989); “Spooky Action at a Distance: Mysteries of the Quantum Theory,” in The Great Ideas Today (Encyclopaedia Britannica, Inc., 1988), which are all collected in N. David Mermin, Boojums All the Way Through (Cambridge, Eng.: Cambridge University Press, 1990). For anyone interested in pursuing these ideas in a more technical manner, there is no better place to start than with Bell’s own papers, many of which are collected in J. S. Bell, Speakable and Unspeakable in Quantum Mechanics (Cambridge, Eng.: Cambridge University Press, 1997).

14. While the locality assumption is critical to the argument of Einstein, Podolsky, and Rosen, researchers have tried to find fault with other elements of their reasoning in an attempt to avoid the conclusion that the universe admits nonlocal features. For example, it is sometimes claimed that all the data require is that we give up so-called realism—the idea that objects possess the properties they are measured to have independent of the measurement process. In this context, though, such a claim misses the point. If the EPR reasoning had been confirmed by experiment, there would be nothing mysterious about the long-range correlations of quantum mechanics; they’d be no more surprising than classical long-range correlations, such as the way finding your left-handed glove over here ensures that its partner over there is a right-handed glove. But such reasoning is refuted by the Bell/Aspect results. Now, if in response to this refutation of EPR we give up realism—as we do in standard quantum mechanics—that does nothing to lessen the stunning weirdness of long-range correlations between widely separated random processes; when we relinquish realism, the gloves, as in endnote 4, become “quantum gloves.” Giving up realism does not, by any means, make the observed nonlocal correlations any less bizarre. It is true that if, in light of the results of EPR, Bell, and Aspect, we try to maintain realism—for example, as in Bohm’s theory discussed later in the chapter—the kind of nonlocality we require to be consistent with the data seems to be more severe, involving nonlocal interactions, not just nonlocal correlations. Many physicists have resisted this option and have thus relinquished realism.

15. See, for example, Murray Gell-Mann, The Quark and the Jaguar (New York: Freeman, 1994), and Huw Price, Time’s Arrow and Archimedes’ Point (Oxford: Oxford University Press, 1996).

16. Special relativity forbids anything that has ever traveled slower than light speed from crossing the speed-of-light barrier. But if something has always been traveling faster than the speed of light, it is not strictly ruled out by special relativity. Hypothetical particles of this sort are called tachyons. Most physicists believe tachyons don’t exist, but others enjoy tinkering with the possibility that they do. So far, though, largely because of the strange features that such a faster-than-light particle would have according to the equations of special relativity, no one has found any particular use for them—even hypothetically speaking. In modern studies, a theory that gives rise to tachyons is generally viewed as suffering from an instability.

17. The mathematically inclined reader should note that, at its core, special relativity claims that the laws of physics must be Lorentz invariant, that is, invariant under SO(3,1) coordinate transformations on Minkowski spacetime. The conclusion, then, is that quantum mechanics would be squared with special relativity if it could be formulated in a fully Lorentz-invariant manner. Now, relativistic quantum mechanics and relativistic quantum field theory have gone a long way toward this goal, but as yet there isn’t full agreement regarding whether they have addressed the quantum measurement problem in a Lorentz-invariant framework. In relativistic quantum field theory, for example, it is straightforward to compute, in a completely Lorentz-invariant manner, the probability amplitudes and probabilities for outcomes of various experiments. But the standard treatments stop short of also describing the way in which one particular outcome or another emerges from the range of quantum possibilities—that is, what happens in the measurement process. This is a particularly important issue for entanglement, as the phenomenon hinges on the effect of what an experimenter does—the act of measuring one of the entangled particle’s properties. For a more detailed discussion, see Tim Maudlin, Quantum Non-locality and Relativity (Oxford: Blackwell, 2002).

18. For the mathematically inclined reader, here is the quantum mechanical calculation that makes predictions in agreement with these experiments. Assume that the axes along which the detectors measure spin are vertical and 120 degrees clockwise and counterclockwise from vertical (like noon, four o’clock, and eight o’clock on two clocks, one for each detector, that are facing each other) and consider, for argument’s sake, two electrons emerging back to back and heading toward these detectors in the so-called singlet state. That is the state whose total spin is zero, ensuring that if one electron is found to be in the spin-up state, the other will be in the spin-down state, about a given axis, and vice versa. (Recall that for ease in the text, I’ve described the correlation between the electrons as ensuring that if one is spin-up so is the other, and if one is spin-down, so is the other; in point of fact, the correlation is one in which the spins point in opposite directions. To make contact with the main text, you can always imagine that the two detectors are calibrated oppositely, so that what one calls spin-up the other calls spin-down.) A standard result from elementary quantum mechanics shows that if the angle between the axes along which our two detectors measure the electron’s spins is θ, then the probability that they will measure opposite spin values is cos²(θ/2). Thus, if the detector axes are aligned (θ = 0), they definitely measure opposite spin values (the analog of the detectors in the main text always measuring the same value when set to the same direction), and if they are set at either +120° or −120°, the probability that they measure opposite spins is cos²(±120°/2) = cos²(60°) = 1/4. Now, if the detector axes are set randomly, 1/3 of the time they will point in the same direction, and 2/3 of the time they won’t. Thus, over all runs, we expect to find opposite spins (1/3)(1) + (2/3)(1/4) = 1/2 of the time, as found by the data.

You may find it odd that the assumption of locality yields a higher spin correlation (greater than 50 percent) than what we find with standard quantum mechanics (exactly 50 percent); the long-range entanglement of quantum mechanics, you’d think, should yield a greater correlation. In fact, it does. A way to think about it is this: With only a 50 percent correlation over all measurements, quantum mechanics yields 100 percent correlation for measurements in which the left and right detector axes are chosen to point in the same direction. In the local universe of Einstein, Podolsky, and Rosen, a greater than 55 percent correlation over all measurements is required to ensure 100 percent agreement when the same axes are chosen. Roughly, then, in a local universe, a 50 percent correlation over all measurements would entail less than a 100 percent correlation when the same axes are chosen—i.e., less of a correlation than what we find in our nonlocal quantum universe.
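
The contrast between the two predictions is easy to simulate. Below is a minimal Monte Carlo sketch (in Python, my own illustration rather than anything from the text): the first loop samples the quantum cos²(θ/2) rule; the second samples a local “instruction set” model in which each pair leaves the source carrying definite, opposite answers for all three axes. The quantum rule gives opposite results exactly half the time, while any local instruction-set strategy gives at least 5/9 ≈ 55.6 percent (the uniform strategy below gives 2/3):

```python
import math
import random

random.seed(1)
AXES_DEG = (0.0, 120.0, 240.0)   # the three detector settings
N = 200_000

# Quantum singlet state: P(opposite spins) = cos^2(theta / 2),
# where theta is the angle between the chosen detector axes.
opposite = 0
for _ in range(N):
    theta = math.radians(random.choice(AXES_DEG) - random.choice(AXES_DEG))
    if random.random() < math.cos(theta / 2.0) ** 2:
        opposite += 1
print(f"quantum: {opposite / N:.3f}")    # ~0.500

# Local model: each pair carries predetermined, opposite answers
# for every axis (definite hidden features fixed at the source).
opposite = 0
for _ in range(N):
    s = [random.choice((+1, -1)) for _ in AXES_DEG]  # particle 1's answers
    a, b = random.randrange(3), random.randrange(3)  # random settings
    if s[a] != -s[b]:                                # particle 2 answers -s[b]
        opposite += 1
print(f"local:   {opposite / N:.3f}")    # 2/3 here; >= 5/9 for any strategy
```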

19. You might think that an instantaneous collapse would, from the get-go, fall afoul of the speed limit set by light and therefore ensure a conflict with special relativity. And if probability waves were indeed like water waves, you’d have an irrefutable point. That the value of a probability wave suddenly dropped to zero over a huge expanse would be far more shocking than all of the water in the Pacific Ocean’s instantaneously becoming perfectly flat and ceasing to move. But, quantum mechanics practitioners argue, probability waves are not like water waves. A probability wave, although it describes matter, is not a material thing itself. And, such practitioners continue, the speed-of-light barrier applies only to material objects, things whose motion can be directly seen, felt, detected. If an electron’s probability wave has dropped to zero in the Andromeda galaxy, an Andromedan physicist will merely fail, with 100 percent certainty, to detect the electron. Nothing in the Andromedan’s observations reveals the sudden change in the probability wave associated with the successful detection, say, of the electron in New York City. As long as the electron itself does not travel from one place to another at greater than light speed, there is no conflict with special relativity. And, as you can see, all that has happened is that the electron was found to be in New York City and not anywhere else. Its speed never even entered the discussion. So, while the instantaneous collapse of probability is a framework that comes with puzzles and problems (discussed more fully in Chapter 7), it need not necessarily imply a conflict with special relativity.

20. For a discussion of some of these proposals, see Tim Maudlin, Quantum Non-locality and Relativity.

Chapter 5

1. For the mathematically inclined reader, from the equation t_moving = γ(t_stationary − (v/c²)x_stationary) (discussed in note 9 of Chapter 3) we find that Chewie’s now-list at a given moment will contain events that observers on earth will claim happened (v/c²)x_earth earlier, where x_earth is Chewie’s distance from earth. This assumes Chewie is moving away from earth. For motion toward earth, v has the opposite sign, so the earthbound observers will claim such events happened (v/c²)x_earth later. Setting v = 10 miles per hour and x_earth = 10¹⁰ light-years, we find (v/c²)x_earth is about 150 years.
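
A quick numerical check of that figure (my own sketch): since x_earth is 10¹⁰ light-years, (v/c²)x_earth equals (v/c) × 10¹⁰ years:

```python
mph_to_mps = 0.44704          # meters per second per mile per hour
c = 2.998e8                   # speed of light, m/s

v = 10 * mph_to_mps           # Chewie's speed, m/s
x_earth_in_years = 1e10       # 10^10 light-years, i.e., 10^10 years of light travel

shift = (v / c) * x_earth_in_years    # (v/c^2) * x_earth, expressed in years
print(f"{shift:.0f} years")           # ~149 years, the note's ~150-year shift
```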

2. This number—and a similar number given a few paragraphs further on describing Chewie’s motion toward earth—were valid at the time of the book’s publication. But as time goes by here on earth, they will be rendered slightly inaccurate.

3. The mathematically inclined reader should note that the metaphor of slicing the spacetime loaf at different angles is the usual concept of spacetime diagrams taught in courses on special relativity. In spacetime diagrams, all of three-dimensional space at a given moment of time, according to an observer who is considered stationary, is denoted by a horizontal line (or, in more elaborate diagrams, by a horizontal plane), while time is denoted by the vertical axis. (In our depiction, each “slice of bread”—a plane—represents all of space at one moment of time, while the axis running through the middle of the loaf, from crust to crust, is the time axis.) Spacetime diagrams provide an insightful way of illustrating the point being made about the now-slices of you and Chewie.

[Figure: a spacetime diagram showing now-slices for observers at rest relative to earth (solid lines) and for observers walking away from earth (dotted lines).]

The light solid lines are equal time slices (now-slices) for observers at rest with respect to earth (for simplicity, we imagine that earth is not rotating or undergoing any acceleration, as these are irrelevant complications for the point being made), and the light dotted lines are equal time slices for observers moving away from earth at, say, 9.3 miles per hour. When Chewie is at rest relative to earth, the former represent his now-slices (and since you are at rest on earth throughout the story, these light solid lines always represent your now-slices), and the darkest solid line shows the now-slice containing you (the left dark dot), in earth’s twenty-first century, and Chewie (the right dark dot), both sitting still and reading. When Chewie is walking away from earth, the dotted lines represent his now-slices, and the darkest dotted line shows the now-slice containing Chewie (having just gotten up and started to walk) and John Wilkes Booth (the lower left dark dot). Note, too, that one of the subsequent dotted time slices will contain Chewie walking (if he is still around!) and you, in earth’s twenty-first century, sitting still reading. Hence, a single moment for you will appear on two of Chewie’s now-lists—one list of relevance before and one of relevance after he started to walk. This shows yet another way in which the simple intuitive notion of now—when envisioned as applying throughout space—is transformed by special relativity into a concept with highly unusual features. Furthermore, these now-lists do not encode causality: standard causality (note 11, Chapter 3) remains in full force. Chewie’s now-lists jump because he jumps from one reference frame to another. But every observer—using a single, well-defined choice of spacetime coordinatization—will agree with every other regarding which events can affect which.

4. The expert reader will recognize that I am assuming spacetime is Minkowskian. A similar argument in other geometries will not necessarily yield the entire spacetime.

5. Albert Einstein and Michele Besso: Correspondence 1903–1955, P. Speziali, ed. (Paris: Hermann, 1972).

6. The discussion here is meant to give a qualitative sense of how an experience right now, together with memories that you have right now, forms the basis of your sense of having experienced a life in which you’ve lived out those memories. But, if, for example, your brain and body were somehow put into exactly the same state that they are right now, you would have the same sense of having lived the life that your memories attest to (assuming, as I do, that the basis of all experience can be found in the physical state of brain and body), even if those experiences never really happened, but were artificially imprinted into your brain state. One simplification in the discussion is the assumption that we can feel or experience things that happen at a single instant, when, in reality, processing time is required for the brain to recognize and interpret whatever stimuli it receives. While true, this is not of particular relevance to the point I’m making; it is an interesting but largely irrelevant complication arising from analyzing time in a manner directly tied to human experience. As we discussed earlier, human examples help make our discussion more grounded and visceral, but it does require us to tease out those aspects of the discussion that are more interesting from a biological as opposed to a physical perspective.

7. You might wonder how the discussion in this chapter relates to our description in Chapter 3 of objects “moving” through spacetime at the speed of light. For the mathematically disinclined reader, the rough answer is that the history of an object is represented by a curve in spacetime—a path through the spacetime loaf that highlights every place the object has been at the moment it was there (much as we see in Figure 5.1). The intuitive notion of “moving” through spacetime, then, can be expressed in “flowless” language by simply specifying this path (as opposed to imagining the path being traced out before your eyes). The “speed” associated with this path is then a measure of how long the path is (from one chosen point to another), divided by the time difference recorded on a watch carried by someone or something between the two chosen points on the path. This, again, is a conception that does not involve any time flow: you simply look at what the watch in question says at the two points of interest. It turns out that the speed found in this way, for any motion, is equal to the speed of light. The mathematically inclined reader will realize that the reason for this is immediate. In Minkowski spacetime the metric is ds² = c²dt² − dx² (where dx² is the Euclidean length dx₁² + dx₂² + dx₃²), while the time carried by a clock (“proper” time) is given by dτ² = ds²/c². So, clearly, velocity through spacetime as just defined is given mathematically by ds/dτ, which equals c.
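
The final step can also be checked in a few lines (a Python sketch of my own, in units where c = 1): whatever the ordinary velocity v < c, the ratio ds/dτ comes out to c:

```python
import math

c = 1.0   # units of one light-second per second
for v in (0.0, 0.3, 0.8, 0.999):
    dt = 1.0
    dx = v * dt                              # distance covered in space
    ds = math.sqrt(c**2 * dt**2 - dx**2)     # spacetime path length
    dtau = ds / c                            # proper time on the moving clock
    print(f"v = {v}: speed through spacetime = {ds / dtau}")
# Every line prints 1.0 (that is, c), independent of the ordinary speed.
```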

8. Rudolf Carnap, “Autobiography,” in The Philosophy of Rudolf Carnap, P. A. Schilpp, ed. (Chicago: Library of Living Philosophers, 1963), p. 37.

Chapter 6

1. Notice that the asymmetry being referred to—the arrow of time—arises from the order in which events take place in time. You could also wonder about asymmetries in time itself—for example, as we will see in later chapters, according to some cosmological theories time may have had a beginning but it may not have an end. These are distinct notions of temporal asymmetry, and our discussion here is focusing on the former. Even so, by the end of the chapter we will conclude that the temporal asymmetry of things in time relies on special conditions early on in the universe’s history, and hence links the arrow of time to aspects of cosmology.

2. For the mathematically inclined reader, let me note more precisely what is meant by time-reversal symmetry and point out one intriguing exception whose significance for the issues we’re discussing in this chapter has yet to be fully settled. The simplest notion of time-reversal symmetry is the statement that a set of laws of physics is time-reversal symmetric if, given any solution to the equations, say S(t), then S(−t) is also a solution to the equations. For instance, in Newtonian mechanics, with forces that depend on particle positions, if x(t) = (x₁(t), x₂(t), . . . , x₃ₙ(t)) are the positions of n particles in three space dimensions, then the fact that x(t) solves d²x(t)/dt² = F(x(t)) implies that x(−t) is also a solution to Newton’s equations, i.e., d²x(−t)/dt² = F(x(−t)). Notice that x(−t) represents particle motion that passes through the same positions as x(t), but in reverse order, with reverse velocities.
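
This velocity-reversal property is easy to see numerically. Here is a minimal Python sketch (my own illustration, using the harmonic force F(x) = −x as an assumed example and a time-reversible leapfrog integrator): run the motion forward, flip the velocity, run forward again by the same amount, and the particle retraces its steps:

```python
def evolve(x, v, t, dt=1e-4):
    """Integrate d^2x/dt^2 = F(x) = -x with a time-reversible leapfrog."""
    for _ in range(round(t / dt)):
        v += -x * (dt / 2)   # half kick
        x += v * dt          # drift
        v += -x * (dt / 2)   # half kick
    return x, v

x0, v0 = 1.0, 0.3
x1, v1 = evolve(x0, v0, t=2.0)     # evolve forward
x2, v2 = evolve(x1, -v1, t=2.0)    # reverse the velocity, evolve forward again
print(x2, -v2)                     # recovers (x0, v0): the motion retraces itself
```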

More generally, a set of physical laws provides us with an algorithm for evolving an initial state of a physical system at time t₀ to some other time t + t₀. Concretely, this algorithm can be viewed as a map U(t) which takes as input S(t₀) and produces S(t + t₀), that is: S(t + t₀) = U(t)S(t₀). We say that the laws giving rise to U(t) are time-reversal symmetric if there is a map T satisfying U(−t) = T⁻¹U(t)T. In English, this equation says that by a suitable manipulation of the state of the physical system at one moment (accomplished by T), evolution by an amount t forward in time according to the laws of the theory (accomplished by U(t)) is equivalent to having evolved the system t units of time backward in time (denoted by U(−t)). For instance, if we specify the state of a system of particles at one moment by their positions and velocities, then T would keep all particle positions fixed and reverse all velocities. Evolving such a configuration of particles forward in time by an amount t is equivalent to having evolved the original configuration of particles backward in time by an amount t. (The factor of T⁻¹ undoes the velocity reversal so that, at the end, not only are the particle positions what they would have been t units of time previously, but so are their velocities.)

For certain sets of laws, the T operation is more complicated than it is for Newtonian mechanics. For example, if we study the motion of charged particles in the presence of an electromagnetic field, the reversal of particle velocities would be inadequate for the equations to yield an evolution in which the particles retrace their steps. Instead, the direction of the magnetic field must also be reversed. (This is required so that the v × B term in the Lorentz force law equation remains unchanged.) Thus, in this case, the T operation encompasses both of these transformations. The fact that we have to do more than just reverse all particle velocities has no impact on any of the discussion that follows in the text. All that matters is that particle motion in one direction is just as consistent with the physical laws as particle motion in the reverse direction. That we have to reverse any magnetic fields that happen to be present to accomplish this is of no particular relevance.

Where things get more subtle is with the weak nuclear interactions. The weak interactions are described by a particular quantum field theory (discussed briefly in Chapter 9), and a general theorem shows that quantum field theories (so long as they are local, unitary, and Lorentz invariant—which are the ones of interest) are always symmetric under the combined operations of charge conjugation C (which replaces particles by their antiparticles), parity P (which inverts positions through the origin), and a bare-bones time-reversal operation T (which replaces t by −t). So, we could define a T operation to be the product CPT, but if T invariance absolutely requires the CP operation to be included, T would no longer be simply interpreted as particles retracing their steps (since, for example, particle identities would be changed by such a T—particles would be replaced by their antiparticles—and hence it would not be the original particles retracing their steps). As it turns out, there are some exotic experimental situations in which we are forced into this corner. There are certain particle species (K-mesons, B-mesons) whose repertoire of behaviors is CPT invariant but is not invariant under T alone. This was established indirectly in 1964 by James Cronin, Val Fitch, and their collaborators (for which Cronin and Fitch received the 1980 Nobel Prize) by showing that the K-mesons violated CP symmetry (ensuring that they must violate T symmetry in order not to violate CPT). More recently, T symmetry violation has been directly established by the CPLEAR experiment at CERN and the KTEV experiment at Fermilab. Roughly speaking, these experiments show that if you were presented with a film of the recorded processes involving these meson particles, you’d be able to determine whether the film was being projected in the correct forward time direction, or in reverse. In other words, these particular particles can distinguish between past and future. What remains unclear, though, is whether this has any relevance for the arrow of time we experience in everyday contexts. After all, these are exotic particles that can be produced for fleeting moments in high-energy collisions, but they are not a constituent of familiar material objects. To many physicists, including me, it seems unlikely that the time-reversal noninvariance evidenced by these particles plays a role in answering the puzzle of time’s arrow, so we shall not discuss this exceptional example further. But the truth is that no one knows for sure.

3. I sometimes find that there is reluctance to accept the theoretical assertion that the eggshell pieces would really fuse back together into a pristine, uncracked shell. But the time-reversal symmetry of nature’s laws, as elaborated with greater precision in the previous endnote, ensures that this is what would happen. Microscopically, the cracking of an egg is a physical process involving the various molecules that make up the shell. Cracks appear and the shell breaks apart because groups of molecules are forced to separate by the impact the egg experiences. If those molecular motions were to take place in reverse, the molecules would join back together, re-fusing the shell into its previous form.

4. To keep the focus on modern ways of thinking about these ideas, I am skipping over some very interesting history. Boltzmann’s own thinking on the subject of entropy went through significant refinements during the 1870s and 1880s, during which time interactions and communications with physicists such as James Clerk Maxwell, Lord Kelvin, Josef Loschmidt, Josiah Willard Gibbs, Henri Poincaré, S. H. Burbury, and Ernest Zermelo were instrumental. In fact, Boltzmann initially thought he could prove that entropy would always and absolutely be nondecreasing for an isolated physical system, and not that it was merely highly unlikely for such entropy reduction to take place. But objections raised by these and other physicists subsequently led Boltzmann to emphasize the statistical/probabilistic approach to the subject, the one that is still in use today.

5. I am imagining that we are using the Modern Library Classics edition of War and Peace, translated by Constance Garnett, with 1,386 text pages.

6. The mathematically inclined reader should note that because the numbers can get so large, entropy is actually defined as the logarithm of the number of possible arrangements, a detail that won’t concern us here. However, as a point of principle, this is important because it is very convenient for entropy to be a so-called extensive quantity, which means that if you bring two systems together, the entropy of their union is the sum of their individual entropies. This holds true only for the logarithmic form of entropy, because the number of arrangements in such a situation is given by the product of the individual arrangements, so the logarithm of the number of arrangements is additive.
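
A two-line check of that additivity (my own sketch, with arbitrary toy numbers):

```python
import math

W1, W2 = 10**6, 10**9         # arrangements of two independent systems
S_union = math.log(W1 * W2)   # joint arrangements multiply...
print(math.isclose(S_union, math.log(W1) + math.log(W2)))   # ...so entropies add: True
```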

7. While we can, in principle, predict where each page will land, you might be concerned that there is an additional element that determines the page ordering: how you gather the pages together in a neat stack. This is not relevant to the physics being discussed, but in case it bothers you, imagine that we agree that you’ll pick up the pages, one by one, starting with the one that’s closest to you, and then picking up the page closest to that one, and so on. (And, for example, we can agree to measure distances from the nearest corner of the page in question.)

8. To succeed in calculating the motion of even a few pages with the accuracy required to predict their page ordering (after employing some algorithm for stacking them in a pile, such as in the previous note) is actually extremely optimistic. Depending on the flexibility and weight of the paper, such a comparatively “simple” calculation could still be beyond today’s computational power.

9. You might worry that there is a fundamental difference between defining a notion of entropy for page orderings and defining one for a collection of molecules. After all, page orderings are discrete—you can count them, one by one, and so although the total number of possibilities might be large, it’s finite. To the contrary, the motion and position of even a single molecule are continuous—you can’t count them one by one, and so there is (at least according to classical physics) an infinite number of possibilities. So how can a precise counting of molecular rearrangements be carried out? Well, the short response is that this is a good question, but one that has been answered fully—so if that’s enough to ease your worry, feel free to skip what follows. The longer response requires a bit of mathematics, so without background this may be tough to follow completely. Physicists describe a classical, many-particle system by invoking phase space, a 6N-dimensional space (where N is the number of particles) in which each point denotes all particle positions and velocities (each such position requires three numbers, as does each velocity, accounting for the 6N dimensionality of phase space). The essential point is that phase space can be carved up into regions such that all points in a given region correspond to arrangements of the positions and velocities of the molecules that have the same, overall, gross features and appearance. If the molecules’ configuration were changed from one point in a given region of phase space to another point in the same region, a macroscopic assessment would find the two configurations indistinguishable. Now, rather than counting the number of points in a given region—the most direct analog of counting the number of different page rearrangements, but something that will surely result in an infinite answer—physicists define entropy in terms of the volume of each region in phase space. A larger volume means more points and hence higher entropy. And a region’s volume, even a region in a higher-dimensional space, is something that can be given a rigorous mathematical definition. (Mathematically, it requires choosing something called a measure, and for the mathematically inclined reader, I’ll note that we usually choose the measure which is uniform over all microstates compatible with a given macrostate—that is, each microscopic configuration associated with a given set of macroscopic properties is assumed to be equally probable.)
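
The counting idea behind this can be mimicked in a discrete toy model (my own example, not from the text): take a microstate to be a particular sequence of N coin flips, and let a macrostate record only the total number of heads; the entropy of a macrostate is then the log of the number of microstates that realize it:

```python
import math

N = 100
for heads in (0, 10, 25, 50):
    microstates = math.comb(N, heads)    # sequences with this gross appearance
    print(f"{heads} heads: entropy = {math.log(microstates):.1f}")
# The balanced, "disordered" macrostate (50 heads) is realized by far
# the most microstates, and so carries the highest entropy.
```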

10. Specifically, we know one way in which this could happen: if a few days earlier the CO₂ was initially in the bottle, then we know from our discussion above that if, right now, you were to simultaneously reverse the velocity of each and every CO₂ molecule, and that of every molecule and atom that has in any way interacted with the CO₂ molecules, and wait the same few days, the molecules would all group back together in the bottle. But this velocity reversal isn’t something that can be accomplished in practice, let alone something that is likely to happen of its own accord. I might note, though, that one can prove mathematically that if you wait long enough, the CO₂ molecules will, of their own accord, all find their way back into the bottle. A result proven in the 1800s by the French mathematician Joseph Liouville can be used to establish what is known as the Poincaré recurrence theorem. This theorem shows that, if you wait long enough, a system with a finite energy and confined to a finite spatial volume (like CO₂ molecules in a closed room) will return to a state arbitrarily close to its initial state (in this case, CO₂ molecules all situated in the Coke bottle). The catch is how long you’d have to wait for this to happen. For systems with all but a small number of constituents, the theorem shows you’d typically have to wait far in excess of the age of the universe for the constituents to, of their own accord, regroup in their initial configuration. Nevertheless, as a point of principle, it is provocative to note that with endless patience and longevity, every spatially contained physical system will return to how it was initially configured.
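
Recurrence can be seen in miniature in a toy system (my own example: a point rotating around a circle by an irrational fraction of a turn, a simple measure-preserving motion). The orbit returns arbitrarily close to its starting point, but the closer a return you demand, the longer you must wait:

```python
import math

alpha = math.sqrt(2) - 1    # irrational fraction of a full turn per step
x0, x, best = 0.0, 0.0, 1.0
for step in range(1, 500_000):
    x = (x + alpha) % 1.0
    dist = min(abs(x - x0), 1.0 - abs(x - x0))   # distance around the circle
    if dist < best:
        best = dist
        print(f"step {step}: within {best:.1e} of the start")
        if best < 1e-5:
            break
```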

11. You might wonder, then, why water ever turns into ice, since that results in the H₂O molecules becoming more ordered, that is, attaining lower, not higher, entropy. Well, the rough answer is that when liquid water turns into solid ice, it gives off energy to the environment (the opposite of what happens when ice melts, when it takes in energy from the environment), and that raises the environmental entropy. At low enough ambient temperatures, that is, below 0 degrees Celsius, the increase in environmental entropy exceeds the decrease in the water’s entropy, so freezing becomes entropically favored. That’s why ice forms in the cold of winter. Similarly, when ice cubes form in your refrigerator’s freezer, their entropy goes down but the refrigerator itself pumps heat into the environment, and if that is taken account of, there is a total net increase of entropy. The more precise answer, for the mathematically inclined reader, is that spontaneous phenomena of the sort we’re discussing are governed by what is known as free energy. Intuitively, free energy is that part of a system’s energy that can be harnessed to do work. Mathematically, free energy, F, is defined by F = U − TS, where U stands for total energy, T stands for temperature, and S stands for entropy. A system will undergo a spontaneous change if that results in a decrease of its free energy. At low temperatures, the drop in U associated with liquid water turning into solid ice outweighs the decrease in S (outweighs the increase in −TS), and so freezing will occur. At high temperatures (above 0 degrees Celsius), though, the change of ice to liquid water or gaseous steam is entropically favored (the increase in S outweighs changes to U) and so melting will occur.
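
For the freezing example, the criterion is easy to put into numbers (my own sketch, using standard constants for water in a simplified model with constant latent heat): the free-energy change for freezing a kilogram of water is ΔF = L(T/T_m − 1), negative below the melting point and positive above it:

```python
L_fusion = 334e3    # J/kg, latent heat of fusion of water
T_melt = 273.15     # K, melting point of ice

def delta_F_freezing(T):
    dU = -L_fusion              # energy the water gives off when it freezes
    dS = -L_fusion / T_melt     # the water's entropy drops on freezing
    return dU - T * dS          # F = U - TS; spontaneous if negative

for T_celsius in (-10, -5, 0, 5):
    dF = delta_F_freezing(T_celsius + 273.15)
    print(f"{T_celsius:+d} C: dF = {dF / 1e3:+.1f} kJ/kg")
# Negative below 0 C (freezing favored); positive above (melting favored).
```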

12. For an early discussion of how a straightforward application of entropic reasoning would lead us to conclude that memories and historical records are not trustworthy accounts of the past, see C. F. von Weizsäcker in The Unity of Nature (New York: Farrar, Straus, and Giroux, 1980), 138–46, (originally published in Annalen der Physik 36 (1939). For an excellent recent discussion, see David Albert in Time and Chance (Cambridge, Mass.: Harvard University Press, 2000).

13. In fact, since the laws of physics don’t distinguish between forward and backward in time, the explanation of having fully formed ice cubes a half hour earlier, at 10 p.m., would be precisely as absurd—entropically speaking—as predicting that by a half hour later, by 11:00 p.m., the little chunks of ice would have grown into fully formed ice cubes. To the contrary, the explanation of having liquid water at 10 p.m. that slowly forms small chunks of ice by 10:30 p.m. is precisely as sensible as predicting that by 11:00 p.m. the little chunks of ice will melt into liquid water, something that is familiar and totally expected. This latter explanation, from the perspective of the observation at 10:30 p.m., is perfectly temporally symmetric and, moreover, agrees with our subsequent observations.

14. The particularly careful reader might think that I’ve prejudiced the discussion with the phrase “early on,” since that injects a temporal asymmetry. What I mean, in more precise language, is that we will need special conditions to prevail on (at least) one end of the temporal dimension. As will become clear, the special conditions amount to a low-entropy boundary condition, and I will call the “past” the direction in which this condition is satisfied.

15. The idea that time’s arrow requires a low-entropy past has a long history, going back to Boltzmann and others; it was discussed in some detail in Hans Reichenbach, The Direction of Time (Mineola, N.Y.: Dover Publications, 1984), and was championed in a particularly interesting quantitative way in Roger Penrose, The Emperor’s New Mind (New York: Oxford University Press, 1989), pp. 317ff.

16. Recall that our discussion in this chapter does not take account of quantum mechanics. As Stephen Hawking showed in the 1970s, when quantum effects are considered, black holes do allow a certain amount of radiation to seep out, but this does not affect their being the highest-entropy objects in the cosmos.

17. A natural question is how we know that there isn’t some future constraint that also has an impact on entropy. The bottom line is that we don’t, and some physicists have even suggested experiments to detect the possible influence that such a future constraint might have on things that we can observe today. For an interesting article discussing the possibility of future and past constraints on entropy, see Murray Gell-Mann and James Hartle, “Time Symmetry and Asymmetry in Quantum Mechanics and Quantum Cosmology,” in Physical Origins of Time Asymmetry, J. J. Halliwell, J. Pérez-Mercader, W. H. Zurek, eds. (Cambridge, Eng.: Cambridge University Press, 1996), as well as other papers in Parts 4 and 5 of that collection.

18. Throughout this chapter, we’ve spoken of the arrow of time, referring to the apparent fact that there is an asymmetry along the time axis (any observer’s time axis) of spacetime: a huge variety of sequences of events is arrayed in one order along the time axis, but the reverse ordering of such events seldom, if ever, occurs. Over the years, physicists and philosophers have divided these sequences of events into subcategories whose temporal asymmetries might, in principle, be subject to logically independent explanations. For example, heat flows from hot objects to cooler ones, but not from cool objects to hot ones; electromagnetic waves emanate outward from sources like stars and lightbulbs, but seem never to converge inward on such sources; the universe appears to be uniformly expanding, and not contracting; and we remember the past and not the future (these are called the thermodynamic, electromagnetic, cosmological, and psychological arrows of time, respectively). All of these are time-asymmetric phenomena, but they might, in principle, acquire their time asymmetry from completely different physical principles. My view, one that many share (but others don’t), is that except possibly for the cosmological arrow, these temporally asymmetric phenomena are not fundamentally different, and ultimately are subject to the same explanation—the one we’ve described in this chapter. For example, why does electromagnetic radiation travel in expanding outward waves but not contracting inward waves, even though both are perfectly good solutions to Maxwell’s equations of electromagnetism? Well, because our universe has low-entropy, coherent, ordered sources for such outward waves—stars and lightbulbs, to name two—and the existence of these ordered sources derives from the even more ordered environment at the universe’s inception, as discussed in the main text. The psychological arrow of time is harder to address since there is so much about the microphysical basis of human thought that we’ve yet to understand. But much progress has been made in understanding the arrow of time when it comes to computers—undertaking, completing, and then producing a record of a computation is a basic computational sequence whose entropic properties are well understood (as developed by Charles Bennett, Rolf Landauer, and others) and fit squarely within the second law of thermodynamics. Thus, if human thought can be likened to computational processes, a similar thermodynamic explanation may apply. Notice, too, that the asymmetry associated with the fact that the universe is expanding and not contracting is related to, but logically distinct from, the arrow of time we’ve been exploring. If the universe’s expansion were to slow down, stop, and then turn into a contraction, the arrow of time would still point in the same direction. Physical processes (eggs breaking, people aging, and so on) would still happen in the usual direction, even though the universe’s expansion had reversed.

19. For the mathematically inclined reader, notice that when we make this kind of probabilistic statement we are assuming a particular probability measure: the one that is uniform over all microstates compatible with what we see right now. There are, of course, other measures that we could invoke. For example, David Albert in Time and Chance has advocated using a probability measure that is uniform over all microstates compatible with what we see now and what he calls the past hypothesis—the apparent fact that the universe began in a low-entropy state. Using this measure, we eliminate consideration of all but those histories that are compatible with the low-entropy past attested to by our memories, records, and cosmological theories. In this way of thinking, there is no probabilistic puzzle about a universe with low entropy; it began that way, by assumption, with probability 1. There is still the same huge puzzle of why it began that way, even if it isn’t phrased in a probabilistic context.

20. You might be tempted to argue that the known universe had low entropy early on simply because it was much smaller in size than it is today, and hence—like a book with fewer pages—allowed for far fewer rearrangements of its constituents. But, by itself, this doesn’t do the trick. Even a small universe can have huge entropy. For example, one possible (although unlikely) fate for our universe is that the current expansion will one day halt, reverse, and the universe will implode, ending in the so-called big crunch. Calculations show that even though the size of the universe would decrease during the implosion phase, entropy would continue to rise, which demonstrates that small size does not ensure low entropy. In Chapter 11, though, we will see that the universe’s small initial size does play a role in our current, best explanation of the low entropy beginning.

Chapter 7

1. It is well known that the equations of classical physics cannot, in general, be solved exactly when one is studying the motion of three or more mutually interacting bodies. So, even in classical physics, any actual prediction about the motion of a large set of particles will necessarily be approximate. The point, though, is that there is no fundamental limit to how good this approximation can be. If the world were governed by classical physics, then with ever more powerful computers, and ever more precise initial data about positions and velocities, we would get ever closer to the exact answer.
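
To make the point concrete, here is a minimal sketch in Python (with made-up masses and initial data, and Newton’s constant set to 1): a crude first-order integrator applied to three mutually gravitating bodies conserves the total energy only approximately, but the error shrinks steadily as the time step is reduced, with no fundamental floor.

```python
import math

# Toy three-body integrator (G = 1; masses and initial data are made up).
# The point: the numerical error shrinks steadily as the time step does.

def accelerations(pos, masses):
    """Newtonian gravitational acceleration on each body from all others."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r3 = (dx * dx + dy * dy) ** 1.5
                acc[i][0] += masses[j] * dx / r3
                acc[i][1] += masses[j] * dy / r3
    return acc

def total_energy(pos, vel, masses):
    """Kinetic plus gravitational potential energy (conserved exactly)."""
    kin = sum(0.5 * m * (v[0] ** 2 + v[1] ** 2) for m, v in zip(masses, vel))
    pot = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            pot -= masses[i] * masses[j] / math.dist(pos[i], pos[j])
    return kin + pot

def energy_error(dt):
    """Evolve for one time unit; return how far the total energy drifted."""
    masses = [1.0, 1.0, 1.0]
    s = math.sqrt(3) / 2
    v = math.sqrt(math.sqrt(3) / 3)  # speed for a roughly circular rotation
    pos = [[0.0, 1.0], [-s, -0.5], [s, -0.5]]       # equilateral triangle
    vel = [[-v, 0.0], [0.5 * v, -s * v], [0.5 * v, s * v]]
    e0 = total_energy(pos, vel, masses)
    for _ in range(int(round(1.0 / dt))):
        acc = accelerations(pos, masses)
        for b in range(3):  # semi-implicit Euler: update velocity, then position
            vel[b][0] += acc[b][0] * dt
            vel[b][1] += acc[b][1] * dt
            pos[b][0] += vel[b][0] * dt
            pos[b][1] += vel[b][1] * dt
    return abs(total_energy(pos, vel, masses) - e0)

for dt in (1e-2, 1e-3, 1e-4):
    print(f"dt = {dt:g}  energy drift = {energy_error(dt):.2e}")
```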

2. At the end of Chapter 4, I noted that the results of Bell, Aspect, and others do not rule out the possibility that particles always have definite positions and velocities, even if we can’t ever determine such features simultaneously. Moreover, Bohm’s version of quantum mechanics explicitly realizes this possibility. Thus, although the widely held view that an electron doesn’t have a position until measured is a standard feature of the conventional approach to quantum mechanics, it is, strictly speaking, too strong as a blanket statement. Bear in mind, though, that in Bohm’s approach, as we will discuss later in this chapter, particles are “accompanied” by probability waves; that is, Bohm’s theory always invokes particles and waves, whereas the standard approach envisions a complementarity that can roughly be summarized as particles or waves. Thus, the conclusion we’re after— that the quantum mechanical description of the past would be thoroughly incomplete if we spoke exclusively about a particle’s having passed through a unique point in space at each definite moment in time (what we would do in classical physics)—is true nevertheless. In the conventional approach to quantum mechanics, we must also include the wealth of other locations that a particle could have occupied at any given moment, while in Bohm’s approach we must also include the “pilot” wave, an object that is also spread throughout a wealth of other locations. (The expert reader should note that the pilot wave is just the wavefunction of conventional quantum mechanics, although its incarnation in Bohm’s theory is rather different.) To avoid endless qualifications, the discussion that follows will be from the perspective of conventional quantum mechanics (the approach most widely used), leaving remarks on Bohm’s and other approaches to the last part of the chapter.

3. For a mathematical but highly pedagogical account see R. P. Feynman and A. R. Hibbs, Quantum Mechanics and Path Integrals (Burr Ridge, Ill.: McGraw-Hill Higher Education, 1965).

4. You might be tempted to invoke the discussion of Chapter 3, in which we learned that at light speed time slows to a halt, to argue that from the photon’s perspective all moments are the same moment, so the photon “knows” how the detector switch is set when it passes the beam-splitter. However, these experiments can be carried out with other particle species, such as electrons, that travel slower than light, and the results are unchanged. Thus, this perspective does not illuminate the essential physics.

5. The experimental setup discussed, as well as the actual confirming experimental results, comes from Y. Kim, R. Yu, S. Kulik, Y. Shih, and M. Scully, Phys. Rev. Lett. 84, no. 1, pp. 1–5 (2000).

6. Quantum mechanics can also be based on an equivalent equation presented in a different form (known as matrix mechanics) by Werner Heisenberg in 1925. For the mathematically inclined reader, Schrödinger’s equation is: HΨ(x,t) = iħ (∂Ψ(x,t)/∂t), where H stands for the Hamiltonian, Ψ stands for the wavefunction, and ħ is Planck’s constant divided by 2π.
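
As the simplest worked example: for a free particle of mass m, the Hamiltonian is H = −(ħ²/2m) ∂²/∂x², and the plane wave Ψ(x,t) = e^(i(kx−ωt)) solves Schrödinger’s equation provided ħω = ħ²k²/2m, which is nothing but the classical relation E = p²/2m, with energy E = ħω and momentum p = ħk.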

7. The expert reader will note that I am suppressing one subtle point here. Namely, we would have to take the complex conjugate of the particle’s wavefunction to ensure that it solves the time-reversed version of Schrödinger’s equation. That is, the T operation described in endnote 2 of Chapter 6 takes a wavefunction Ψ(x,t) and maps it to Ψ*(x,−t). This has no significant impact on the discussion in the text.

8. Bohm actually rediscovered and further developed an approach that goes back to Prince Louis de Broglie, so this approach is sometimes called the de Broglie–Bohm approach.

9. For the mathematically inclined reader, Bohm’s approach is local in configuration space but certainly nonlocal in real space. Changes to the wavefunction in one location in real space immediately exert an influence on particles located in other, distant locations.

10. For an exceptionally clear treatment of the Ghirardi-Rimini-Weber approach and its relevance to understanding quantum entanglement, see J. S. Bell, “Are There Quantum Jumps?” in Speakable and Unspeakable in Quantum Mechanics (Cambridge, Eng.: Cambridge University Press, 1993).

11. Some physicists consider the questions on this list to be irrelevant by-products of earlier confusions regarding quantum mechanics. The wavefunction, this view professes, is merely a theoretical tool for making (probabilistic) predictions and should not be accorded any but mathematical reality (a view sometimes called the “Shut up and calculate” approach, since it encourages one to use quantum mechanics and wavefunctions to make predictions, without thinking hard about what the wavefunctions actually mean and do). A variation on this theme argues that wavefunctions never actually collapse, but that interactions with the environment make it seem as if they do. (We will discuss a version of this approach shortly.) I am sympathetic to these ideas and, in fact, strongly believe that the notion of wavefunction collapse will ultimately be dispensed with. But I don’t find the former approach satisfying, as I am not ready to give up on understanding what happens in the world when we are “not looking,” and the latter—while, in my view, the right direction—needs further mathematical development. The bottom line is that measurement causes something that is or is akin to or masquerades as wavefunction collapse. Either through a better understanding of environmental influence or through some other approach yet to be suggested, this apparent effect needs to be addressed, not simply dismissed.

12. There are other controversial issues associated with the Many Worlds interpretation that go beyond its obvious extravagance. For example, there are technical challenges to define a notion of probability in a context that involves an infinite number of copies of each of the observers whose measurements are supposed to be subject to those probabilities. If a given observer is really one of many copies, in what sense can we say that he or she has a particular probability to measure this or that outcome? Who really is “he” or “she”? Each copy of the observer will measure—with probability 1—whatever outcome is slated for the particular copy of the universe in which he or she resides, so the whole probabilistic framework requires (and has been given, and continues to be given) careful scrutiny in the Many Worlds framework. Moreover, on a more technical note, the mathematically inclined reader will realize that, depending on how one precisely defines the Many Worlds, a preferred eigenbasis may need to be selected. But how should that eigenbasis be chosen? There has been a great deal of discussion and much written on all these questions, but to date there are no universally accepted resolutions. The approach based on decoherence, discussed shortly, has shed much light on these issues, and has offered particular insight into the issue of eigenbasis selection.

13. The Bohm or de Broglie–Bohm approach has never received wide attention. Perhaps one reason for this, as pointed out by John Bell in his article “The Impossible Pilot Wave,” collected in Speakable and Unspeakable in Quantum Mechanics, is that neither de Broglie nor Bohm was particularly fond of what he himself had developed. But, again as Bell points out, the de Broglie–Bohm approach does away with much of the vagueness and subjectivity of the more standard approach. If for no other reason, even if the approach is wrong, it is worth knowing that particles can have definite positions and definite velocities at all times (ones beyond our ability, even in principle, to measure), and still conform fully to the predictions of standard quantum mechanics—uncertainty and all. Another argument against Bohm’s approach is that the nonlocality in this framework is more “severe” than that of standard quantum mechanics. By this it is meant that Bohm’s approach has nonlocal interactions (between the wavefunction and particles) as a central element of the theory from the outset, while in quantum mechanics the nonlocality is more deeply buried and arises only through nonlocal correlations between widely separated measurements. But, as supporters of this approach have argued, because something is hidden does not make it any less present, and, moreover, as the standard approach is vague regarding the quantum measurement problem—the very place where nonlocality makes itself apparent—once that issue is fully resolved, the nonlocality may not be so hidden after all. Others have argued that there are obstacles to making a relativistic version of the Bohm approach, although progress has been made on this front as well (see, for example, John Bell, “Beables for Quantum Field Theory,” in the collected volume indicated above). And so, it is definitely worth keeping this alternative approach in mind, even if only as a foil against rash conclusions about what quantum mechanics unavoidably implies. For the mathematically inclined reader, a very nice treatment of Bohm’s theory and issues of quantum entanglement can be found in Tim Maudlin, Quantum Nonlocality and Relativity (Malden, Mass.: Blackwell, 2002).

14. For an in-depth, though technical, discussion of time’s arrow in general, and the role of decoherence in particular, see H. D. Zeh, The Physical Basis of the Direction of Time (Heidelberg: Springer, 2001).

15. Just to give you a sense of how quickly decoherence takes place—how quickly environmental influence suppresses quantum interference and thereby turns quantum probabilities into familiar classical ones—here are a few examples. The numbers are approximate, but the point they convey is clear. The wavefunction of a grain of dust floating in your living room, bombarded by jittering air molecules, will decohere in about a billionth of a billionth of a billionth of a billionth (10⁻³⁶) of a second. If the grain of dust is kept in a perfect vacuum chamber and subject only to interactions with sunlight, its wavefunction will decohere a bit more slowly, taking a thousandth of a billionth of a billionth (10⁻²¹) of a second. And if the grain of dust is floating in the darkest depths of empty space and subject only to interactions with the relic microwave photons from the big bang, its wavefunction will decohere in about a millionth of a second. These numbers are extremely small, which shows that decoherence for something even as tiny as a grain of dust happens very quickly. For larger objects, decoherence happens faster still. It is no wonder that, even though ours is a quantum universe, the world around us looks like it does. (See, for example, E. Joos, “Elements of Environmental Decoherence,” in Decoherence: Theoretical, Experimental, and Conceptual Problems, Ph. Blanchard, D. Giulini, E. Joos, C. Kiefer, I.-O. Stamatescu, eds. [Berlin: Springer, 2000]).

Chapter 8

1. To be more precise, the symmetry between the laws in Connecticut and the laws in New York makes use of both translational symmetry and rotational symmetry. When you perform in New York, not only will you have changed location from Connecticut, but more than likely you will undertake your routines while facing in a somewhat different direction (east versus north, perhaps) than during practice.

2. Newton’s laws of motion are usually described as being relevant for “inertial observers,” but when one looks closely at how such observers are specified, it sounds circular: inertial observers are those observers for whom Newton’s laws hold. A good way to think about what’s really going on is that Newton’s laws draw our attention to a large and particularly useful class of observers: those whose description of motion fits completely and quantitatively within Newton’s framework. By definition, these are inertial observers. Operationally, inertial observers are those on whom no forces of any kind are acting— observers, that is, who experience no accelerations. Einstein’s general relativity, by contrast, applies to all observers, regardless of their state of motion.

3. If we lived in an era during which all change stopped, we’d experience no passage of time (all body and brain functions would be frozen as well). But whether this would mean that the spacetime block in Figure 5.1 came to an end, or, instead, carried on with no change along the time axis—that is, whether time would come to an end or would still exist in some kind of formal, overarching sense—is a hypothetical question that’s both difficult to answer and largely irrelevant for anything we might measure or experience. Note that this hypothetical situation is different from a state of maximal disorder in which entropy can’t further increase, but microscopic change, like gas molecules going this way and that, still takes place.

4. The cosmic microwave radiation was discovered in 1964 by the Bell Laboratory scientists Arno Penzias and Robert Wilson while testing a large antenna intended for use in satellite communications. Penzias and Wilson encountered background noise that proved impossible to remove (even after they scraped bird droppings—“white noise”—from the inside of the antenna) and, with the key insights of Robert Dicke at Princeton and his students Peter Roll and David Wilkinson, together with Jim Peebles, it was ultimately realized that the antenna was picking up microwave radiation that originated with the big bang. (Important work in cosmology that set the stage for this discovery was carried out earlier by George Gamow, Ralph Alpher, and Robert Herman.) As we discuss further in later chapters, the radiation gives us an unadulterated picture of the universe when it was about 300,000 years old. That’s when electrically charged particles like electrons and protons, which disrupt the motion of light beams, combined to form electrically neutral atoms, which, by and large, allow light to travel freely. Ever since, such ancient light—produced in the early stages of the universe—has traveled unimpeded, and today suffuses all of space with microwave photons.

5. The physical phenomenon involved here, as discussed in Chapter 11, is known as redshift. Common atoms such as hydrogen and oxygen emit light at wavelengths that have been well documented through laboratory experiments. When such substances are constituents of galaxies that are rushing away, the light they emit is elongated, much as the sound waves from the siren of a police car that’s racing away are elongated, making the pitch drop. Because red is the longest wavelength of light that can be seen with the unaided eye, this stretching of light is called the redshift effect. The amount of redshift grows with increasing recessional speed, and hence by measuring the received wavelengths of light and comparing with laboratory results, the speed of distant objects can be determined. (This is actually one kind of redshift, akin to the Doppler effect. Redshifting can also be caused by gravity: photons elongate as they climb out of a gravitational field.)
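
In symbols, the redshift z is defined by 1 + z = λ_observed/λ_emitted, and for recessional speeds well below light speed, z ≈ v/c. So, for instance, a galaxy whose spectral lines arrive stretched by 1 percent (z = .01) is receding at roughly 3,000 kilometers per second.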

6. More precisely, the mathematically inclined reader will note that a particle of mass m, sitting on the surface of a ball of radius R and mass density ρ, experiences an acceleration, d²R/dt², given by −G(4π/3)R³ρ/R² = −(4π/3)GρR, and so (1/R) d²R/dt² = −(4π/3)Gρ. If we formally identify R with the radius of the universe, and ρ with the mass density of the universe, this is Einstein’s equation for how the size of the universe evolves (assuming the absence of pressure).

7. See P.J.E. Peebles, Principles of Physical Cosmology (Princeton: Princeton University Press, 1993), p. 81.

[Image: a cartoon reproduced from the book cited above; its caption is translated below.]

The caption reads: “But who is really blowing up this ball? What makes it so that the universe expands or inflates? A Lambda does the job! Another answer cannot be given.” (Translation by Koenraad Schalm.) Lambda refers to something known as the cosmological constant, an idea we will encounter in Chapter 10.

8. To avoid confusion, let me note that one drawback of the penny model is that every penny is essentially identical to every other, while that is certainly not true of galaxies. But the point is that on the largest of scales—scales on the order of 100 million light-years—the individual differences between galaxies are believed to average out so that, when one analyzes huge volumes of space, the overall properties of each such volume are extremely similar to the properties of any other such volume.

9. You could also travel to just outside the edge of a black hole, and remain there, engines firing away to avoid being pulled in. The black hole’s strong gravitational field manifests itself as a severe warping of spacetime, and that results in your clock’s ticking far slower than it would in a more ordinary location in the galaxy (as in a relatively empty spatial expanse). Again, the time duration measured by your clock is perfectly valid. But, as in zipping around at high speed, it is a completely individualistic perspective. When analyzing features of the universe as a whole, it is more useful to have a widely applicable and agreed upon notion of elapsed time, and that’s what is provided by clocks that move along with the cosmic flow of spatial expansion and that are subject to a far more mild, far more average gravitational field.

10. The mathematically inclined reader will note that light travels along null geodesics of the spacetime metric, which, for definiteness, we can take to be ds² = dt² − a²(t)(dx²), where dx² = dx₁² + dx₂² + dx₃² and the xᵢ are comoving coordinates. Setting ds² = 0, as appropriate for a null geodesic, we can write ∫ dt′/a(t′), integrated from time t to time t₀, for the total comoving distance light emitted at time t can travel by time t₀. If we multiply this by the value of the scale factor a(t₀) at time t₀, then we will have calculated the physical distance that the light has traveled in this time interval. This algorithm can be widely used to calculate how far light can travel in any given time interval, revealing whether two points in space, for example, are in causal contact. As you can see, for accelerated expansion, even for arbitrarily large t₀, the integral is bounded, showing that the light will never reach arbitrarily distant comoving locations. Thus, in a universe with accelerated expansion, there are locations with which we can never communicate, and conversely, regions that can never communicate with us. Such regions are said to be beyond our cosmic horizon.
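
To see the boundedness explicitly, take exponential expansion, a(t) = e^(Ht) with constant H. Then the integral of dt′/a(t′) from t to t₀ equals (1/H)(e^(−Ht) − e^(−Ht₀)), which stays below (1/H)e^(−Ht) no matter how large t₀ becomes: light emitted at time t can never cover more than this finite comoving distance.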

11. When analyzing geometrical shapes, mathematicians and physicists use a quantitative approach to curvature developed in the nineteenth century, which today is part of a mathematical body of knowledge known as differential geometry. One nontechnical way of thinking about this measure of curvature is to study triangles drawn on or within the shape of interest. If the triangle’s angles add up to 180 degrees, as they do when it is drawn on a flat tabletop, we say the shape is flat. But if the angles add up to more or less than 180 degrees, as they do when the triangle is drawn on the surface of a sphere (the outward bloating of a sphere causes the sum of the angles to exceed 180 degrees) or the surface of a saddle (the inward shrinking of a saddle’s shape causes the sum of the angles to be less than 180 degrees), we say the shape is curved. This is illustrated in Figure 8.6.

12. If you were to glue the opposite vertical edges of a torus together (which is reasonable to do, since they are identified—when you pass through one edge you immediately reappear on the other) you’d get a cylinder. And then, if you did the same for the upper and lower edges (which would now be in the shape of circles), you’d get a doughnut. Thus, a doughnut is another way of thinking about or representing a torus. One complication of this representation is that the doughnut no longer looks flat! However, it actually is. Using the notion of curvature given in the previous endnote, you’d find that all triangles drawn on the surface of the doughnut have angles that add up to 180 degrees. The fact that the doughnut looks curved is an artifact of how we’ve embedded a two-dimensional shape in our three-dimensional world. For this reason, in the current context it is more useful to use the manifestly uncurved representations of the two- and three-dimensional tori, as discussed in the text.

13. Notice that we’ve been loose in distinguishing the concepts of shape and curvature. There are three types of curvatures for completely symmetric space: positive, zero, and negative. But two shapes can have the same curvature and yet not be identical, with the simplest example being the flat video screen and the flat infinite tabletop. Thus, symmetry allows us to narrow down the curvature of space to three possibilities, but there are somewhat more than three shapes for space (differing in what mathematicians call their global properties) that realize these three curvatures.

14. So far, we’ve focused exclusively on the curvature of three-dimensional space— the curvature of the spatial slices in the spacetime loaf. However, although it’s hard to picture, in all three cases of spatial curvature (positive, zero, negative), the whole four-dimensional spacetime is curved, with the degree of curvature becoming ever larger as we examine the universe ever closer to the big bang. In fact, near the moment of the big bang, the four-dimensional curvature of spacetime grows so large that Einstein’s equations break down. We will discuss this further in later chapters.

Chapter 9

1. If you raised the temperature much higher, you’d find a fourth state of matter known as a plasma, in which atoms disintegrate into their component particles.

2. There are curious substances, such as Rochelle salts, which become less ordered at high temperatures, and more ordered at low temperatures—the reverse of what we normally expect.

3. One difference between force and matter fields is expressed by Wolfgang Pauli’s exclusion principle. This principle shows that whereas a huge number of force particles (like photons) can combine to produce fields accessible to a prequantum physicist such as Maxwell, fields that you see every time you enter a dark room and turn on a light, matter particles are generally excluded by the laws of quantum physics from cooperating in such a coherent, organized manner. (More precisely, two particles of the same species, such as two electrons, are excluded from occupying the same state, whereas there is no such restriction for photons. Thus, matter fields do not generally have a macroscopic, classical-like manifestation.)

4. In the framework of quantum field theory, every known particle is viewed as an excitation of an underlying field associated with the species of which that particle is a member. Photons are excitations of the photon field—that is, the electromagnetic field; an up-quark is an excitation of the up-quark field; an electron is an excitation of the electron field, and so on. In this way, all matter and all forces are described in a uniform quantum mechanical language. A key problem is that it has proved very difficult to describe all the quantum features of gravity in this language, an issue we will discuss in Chapter 12.

5. Although the Higgs field is named after Peter Higgs, a number of other physicists—Thomas Kibble, Philip Anderson, R. Brout, and François Englert, among others— played a vital part in its introduction into physics and its theoretical development.

6. Bear in mind that the field’s value is given by its distance from the bowl’s center, so even though the field has zero energy when its value is in the bowl’s valley (since the height above the valley denotes the field’s energy), its value is not zero.

7. In the text’s description, the value of the Higgs field is given by its distance from the bowl’s center, and so you may be wondering how points on the bowl’s circular valley— which are all the same distance from the bowl’s center—give rise to any but the same Higgs value. The answer, for the mathematically inclined reader, is that different points in the valley represent Higgs field values with the same magnitude but different phases (the Higgs field value is a complex number).

8. In principle, there are two concepts of mass that enter into physics. One is the concept described in the text: mass as that property of an object which resists acceleration. Sometimes, this notion of mass is called inertial mass. The second concept of mass is the one relevant for gravity: mass as that property of an object which determines how strongly it will be pulled by a gravitational field of a specified strength (such as the earth’s). Sometimes this notion of mass is called gravitational mass. At first glance, the Higgs field is relevant only for an understanding of inertial mass. However, the equivalence principle of general relativity asserts that the force felt from accelerated motion and from a gravitational field are indistinguishable—they are equivalent. And that implies an equivalence between the concepts of inertial mass and gravitational mass. Thus, the Higgs field is relevant for both kinds of mass we’ve mentioned since, according to Einstein, they are the same.

9. I thank Raphael Kasper for pointing out that this description is a variation on the prize-winning metaphor of Professor David Miller, submitted in response to British Science Minister William Waldegrave’s challenge in 1993 to the British physics community to explain why taxpayer money should be spent on searching for the Higgs particle.

10. The mathematically inclined reader should note that the photons and W and Z bosons are described in the electroweak theory as lying in the adjoint representation of the group SU(2) × U(1), and hence are interchanged by the action of this group. Moreover, the equations of the electroweak theory possess complete symmetry under this group action, and it is in this sense that we describe the force particles as being interrelated. More precisely, in the electroweak theory, the photon is a particular mixture of the gauge boson of the manifest U(1) symmetry and the U(1) subgroup of SU(2); it is thus tightly related to the weak gauge bosons. However, because of the symmetry group’s product structure, the four bosons (there are actually two W bosons with opposite electric charges) do not fully mix under its action. In a sense, then, the weak and electromagnetic interactions are part of a single mathematical framework, but one that is not as fully unified as it might be. When one includes the strong interactions, the group is augmented by including an SU(3) factor—“color” SU(3)—and this group’s having three independent factors, SU(3) × SU(2) × U(1), only highlights further the lack of complete unity. This is part of the motivation for grand unification, discussed in the next section: grand unification seeks a single, semi-simple (Lie) group—a group with a single factor—that describes the forces at higher energy scales.

11. The mathematically inclined reader should note that Georgi and Glashow’s grand unified theory was based on the group SU(5), which includes SU(3), the group associated with the strong nuclear force, and also SU(2) × U(1), the group associated with the electroweak force. Since then, physicists have studied the implications of other potential grand unified groups, such as SO(10) and E6.

Chapter 10

1. As we’ve seen, the big bang’s bang is not an explosion that took place at one location in a preexisting spatial expanse, and that’s why we’ve not also asked where it banged. The playful description of the big bang’s deficiency we’ve used is due to Alan Guth; see, for example, his The Inflationary Universe (Reading, Mass.: Perseus Books, 1997), p. xiii.

2. The term “big bang” is sometimes used to denote the event that happened at time-zero itself, bringing the universe into existence. But since, as we’ll discuss in the next chapter, the equations of general relativity break down at time-zero, no one has any understanding of what this event actually was. This omission is what we’ve meant by saying that the big bang theory leaves out the bang. In this chapter, we are restricting ourselves to realms in which the equations do not break down. Inflationary cosmology makes use of such well-behaved equations to reveal a brief explosive swelling of space that we naturally take to be the bang left out by the big bang theory. Certainly, though, this approach leaves unanswered the question of what happened at the initial moment of the universe’s creation—if there actually was such a moment.

3. Abraham Pais, Subtle Is the Lord (Oxford: Oxford University Press, 1982), p. 253.

4. For the mathematically inclined reader: Einstein replaced the original equation Gμν = 8πTμν by Gμν + Λgμν = 8πTμν, where Λ is a number denoting the size of the cosmological constant.

5. When I refer to an object’s mass in this context, I am referring to the sum total mass of its particulate constituents. If a cube, say, were composed of 1,000 gold atoms, I’d be referring to 1,000 times the mass of a single such atom. This definition jibes with Newton’s perspective. Newton’s laws say that such a cube would have a mass that is 1,000 times that of a single gold atom, and that it would weigh 1,000 times as much as a single gold atom. According to Einstein, though, the weight of the cube also depends on the kinetic energy of the atoms (as well as all other contributions to the energy of the cube). This follows from E = mc²: more energy (E), regardless of the source, translates into more mass (m). Thus, an equivalent way of expressing the point is that because Newton didn’t know about E = mc², his law of gravity uses a definition of mass that misses various contributions to energy, such as energy associated with motion.

6. The discussion here is suggestive of the underlying physics but does not capture it fully. The pressure exerted by the compressed spring does indeed influence how strongly the box is pulled earthward. But this is because the compressed spring affects the total energy of the box and, as discussed in the previous paragraph, according to general relativity, the total energy is what’s relevant. However, the point I’m explaining here is that pressure itself—not just through the contribution it makes to total energy—generates gravity, much as mass and energy do. According to general relativity, pressure gravitates. Also note that the repulsive gravity we are referring to is the internal gravitational field experienced within a region of space suffused by something that has negative rather than positive pressure. In such a situation, negative pressure will contribute a repulsive gravitational field acting within the region.

7. Mathematically, the cosmological constant is represented by a number, usually denoted by Λ (see note 4). Einstein found that his equations made perfect sense regardless of whether Λ was chosen to be a positive or a negative number. The discussion in the text focuses on the case of particular interest to modern cosmology (and modern observations, as will be discussed) in which Λ is positive, since this gives rise to negative pressure and repulsive gravity. A negative value for Λ yields ordinary attractive gravity. Note, too, that since the pressure exerted by the cosmological constant is uniform, this pressure does not directly exert any force: only pressure differences, like what your ears feel when you’re underwater, result in a pressure force. Instead, the force exerted by the cosmological constant is purely a gravitational force.

8. Familiar magnets always have both a north and a south pole. By contrast, grand unified theories suggest that there may be particles that are like a purely north or purely south magnetic pole. Such particles are called monopoles and they could have a major impact on standard big bang cosmology. They have never been observed.

9. Guth and Tye recognized that a supercooled Higgs field would act like a cosmological constant, a realization that had been made earlier by Martinus Veltman and others. In fact, Tye has told me that were it not for a page limit in Physical Review Letters, the journal to which he and Guth submitted their paper, they would not have struck a final sentence noting that their model would entail a period of exponential expansion. But Tye also notes that it was Guth’s achievement to realize the important cosmological implications of a period of exponential expansion (to be discussed later in this and in the next chapter), and thereby put inflation front and center on cosmologists’ maps.

In the sometimes convoluted history of discovery, the Russian physicist Alexei Starobinsky had, a few years earlier, found a different means of generating what we now call inflationary expansion, work described in a paper that was not widely known among western scientists. However, Starobinsky did not emphasize that a period of such rapid expansion would solve key cosmological problems (such as the horizon and flatness problems, to be discussed shortly), which explains, in part, why his work did not generate the enthusiastic response that Guth’s received. In 1981, the Japanese physicist Katsuhiko Sato also developed a version of inflationary cosmology, and even earlier (in 1978), Russian physicists Gennady Chibisov and Andrei Linde hit upon the idea of inflation, but they realized that—when studied in detail—it suffered from a key problem (discussed in note 11) and hence did not publish their work.

The mathematically inclined reader should note that it is not difficult to see how accelerated expansion arises. One of Einstein’s equations is (d²a/dt²)/a = −(4π/3)(ρ + 3p), where a, ρ, and p are the scale factor of the universe (its “size”), the energy density, and the pressure, respectively. Notice that if the righthand side of this equation is positive, the scale factor will grow at an increasing rate: the universe’s rate of growth will accelerate with time. For a Higgs field perched on a plateau, its pressure turns out to equal the negative of its energy density (the same is true for a cosmological constant), and so the righthand side is indeed positive.
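
Explicitly, setting p = −ρ gives (d²a/dt²)/a = −(4π/3)(ρ − 3ρ) = +(8π/3)ρ, which is positive for any positive energy density. In the idealized case in which ρ stays constant, the solution is exponential growth, a(t) ∝ e^(Ht) with H = √(8πρ/3) (in the units of this note, in which Newton’s constant has been set to 1).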

10. The physics underlying these quantum jumps is the uncertainty principle, covered in Chapter 4. I will explicitly discuss the application of quantum uncertainty to fields in both Chapter 11 and Chapter 12, but to presage that material, briefly note the following. The value of a field at a given point in space, and the rate of change of the field’s value at that point, play the same role for fields as position and velocity (momentum) play for a particle. Thus, just as we can’t ever know both a definite position and a definite velocity for a particle, a field can’t have a definite value and a definite rate of change of that value, at any given point in space. The more definite the field’s value is at one moment, the more uncertain is the rate of change of that value—that is, the more likely it is that the field’s value will change a moment later. And such change, induced by quantum uncertainty, is what I mean when referring to quantum jumps in the field’s value.

11. The contribution of Linde and of Albrecht and Steinhardt was absolutely crucial, because Guth’s original model—now called old inflation—suffered from a pernicious flaw. Remember that the supercooled Higgs field (or, in the terminology we introduce shortly, the inflaton field) has a value that is perched on the bump in its energy bowl uniformly across space. And so, while I’ve described how quickly the supercooled inflaton field could take the jump to the lowest energy value, we need to ask whether this quantum-induced jump would happen everywhere in space at the same time. And the answer is that it wouldn’t. Instead, as Guth argued, the relaxation of the inflaton field to a zero energy value takes place by a process called bubble nucleation: the inflaton drops to its zero energy value at one point in space, and this sparks an outward-spreading bubble, one whose walls move at light speed, in which the inflaton drops to the zero energy value with the passing of the bubble wall. Guth envisioned that many such bubbles, with random centers, would ultimately coalesce to give a universe with zero-energy inflaton field everywhere. The problem, though, as Guth himself realized, was that the space surrounding the bubbles was still infused with a non-zero-energy inflaton field, and so such regions would continue to undergo rapid inflationary expansion, driving the bubbles apart. Hence, there was no guarantee that the growing bubbles would find one another and coalesce into a large, homogeneous spatial expanse. Moreover, Guth argued that the inflaton field energy was not lost as it relaxed to zero energy, but was converted to ordinary particles of matter and radiation inhabiting the universe. To achieve a model compatible with observations, though, this conversion would have to yield a uniform distribution of matter and energy throughout space. In the mechanism Guth proposed, this conversion would happen through the collision of bubble walls, but calculations—carried out by Guth and Erick Weinberg of Columbia University, and also by Stephen Hawking, Ian Moss, and John Stewart of Cambridge University—revealed that the resulting distribution of matter and energy was not uniform. Thus, Guth’s original inflationary model ran into significant problems of detail.

The insights of Linde and of Albrecht and Steinhardt—now called new inflation—fixed these vexing problems. By changing the shape of the potential energy bowl to that in Figure 10.2, these researchers realized, the inflaton could relax to its zero energy value by “rolling” down the energy hill to the valley, a gradual and graceful process that had no need for the quantum jump of the original proposal. And, as their calculations showed, this somewhat more gradual rolling down the hill sufficiently prolonged the inflationary burst of space so that one single bubble easily grew large enough to encompass the entire observable universe. Thus, in this approach, there is no need to worry about coalescing bubbles. Of equal importance, rather than converting the inflaton field’s energy to that of ordinary particles and radiation through bubble collisions, in the new approach the inflaton gradually accomplished this energy conversion uniformly throughout space by a process akin to friction: as the field rolled down the energy hill—uniformly throughout space—it gave up its energy by “rubbing against” (interacting with) more familiar fields for particles and radiation. New inflation thus retained all the successes of Guth’s approach, but patched up the significant problem it had encountered.

About a year after the important progress offered by new inflation, Andrei Linde had another breakthrough. For new inflation to occur successfully, a number of key elements must all fall into place: the potential energy bowl must have the right shape, the inflaton field’s value must begin high up on the bowl (and, somewhat more technically, the inflaton field’s value must itself be uniform over a sufficiently large spatial expanse). While it’s possible for the universe to achieve such conditions, Linde found a way to generate an inflationary burst in a simpler, far less contrived setting. Linde realized that even with a simple potential energy bowl, such as that in Figure 9.1a, and even without finely arranging the inflaton field’s initial value, inflation could still naturally take place. The idea is this. Imagine that in the very early universe, things were “chaotic”—for example, imagine that there was an inflaton field whose value randomly bounced around from one number to another. At some locations in space its value might have been small, at other locations its value might have been medium, and at yet other locations in space its value might have been high. Now, nothing particularly noteworthy would have happened in regions where the field value was small or medium. But Linde realized that something fantastically interesting would have taken place in regions where the inflaton field happened to have attained a high value (even if the region were tiny, a mere 10⁻³³ centimeters across). When the inflaton field’s value is high—when it is high up on the energy bowl in Figure 9.1a—a kind of cosmic friction sets in: the field’s value tries to roll down the hill to lower potential energy, but its high value contributes to a resistive drag force, and so it rolls very slowly. Thus, the inflaton field’s value would have been nearly constant and (much like an inflaton on the top of the potential energy hill in new inflation) would have contributed a nearly constant energy and a nearly constant negative pressure. As is by now very familiar, these are the conditions required to drive a burst of inflationary expansion. Thus, without invoking a particularly special potential energy bowl, and without setting up the inflaton field in a special configuration, the chaotic environment of the early universe could have naturally given rise to inflationary expansion. Not surprisingly, Linde called this approach chaotic inflation. Many physicists consider it the most convincing realization of the inflationary paradigm.

12. Those familiar with the history of this subject will realize that the excitement over Guth’s discovery was generated by its solutions to key cosmological problems, such as the horizon and flatness problems, as we describe shortly.

13. You might wonder whether the electroweak Higgs field, or the grand unified Higgs field, can do double duty—playing the role we described in Chapter 9, while also driving inflationary expansion at earlier times, before forming a Higgs ocean. Models of this sort have been proposed, but they typically suffer from technical problems. The most convincing realizations of inflationary expansion invoke a new Higgs field to play the role of the inflaton.

14. See note 11, this chapter.

15. For example, you can think of our horizon as a giant, imaginary sphere, with us at its center, that separates those things with which we could have communicated (the things within the sphere) from those things with which we couldn’t have communicated (those things beyond the sphere), in the time since the bang. Today, the radius of our “horizon sphere” is roughly 14 billion light-years; early on in the history of the universe, its radius was much less, since there had been less time for light to travel. See also note 10 from Chapter 8.

16. While this is the essence of how inflationary cosmology solves the horizon problem, to avoid confusion let me highlight a key element of the solution. If one night you and a friend are standing on a large field happily exchanging light signals by turning flashlights on and off, notice that no matter how fast you then turn and run from each other, you will always be able subsequently to exchange light signals. Why? Well, to avoid receiving the light your friend shines your way, or for your friend to avoid receiving the light you send her way, you’d need to run from each other at faster than light speed, and that’s impossible. So, how is it possible for regions of space that were able to exchange light signals early on in the universe’s history (and hence come to the same temperature, for example) to now find themselves beyond each other’s communicative range? As the flashlight example makes clear, it must be that they’ve rushed apart at faster than the speed of light. And, indeed, the colossal outward push of repulsive gravity during the inflationary phase did drive every region of space away from every other at much faster than the speed of light. Again, this offers no contradiction with special relativity, since the speed limit set by light refers to motion through space, not motion from the swelling of space itself. So a novel and important feature of inflationary cosmology is that it involves a short period in which there is superluminal expansion of space.

17. Note that the numerical value of the critical density decreases as the universe expands. But the point is that if the actual mass/energy density of the universe is equal to the critical density at one time, it will decrease in exactly the same way and maintain equality with the critical density at all times.

18. The mathematically inclined reader should note that during the inflationary phase, the size of our cosmic horizon stayed fixed while space swelled enormously (as can easily be seen by taking an exponential form for the scale factor in note 10 of Chapter 8). That is the sense in which our observable universe is a tiny speck in a gigantic cosmos, in the inflationary framework.

19. R. Preston, First Light (New York: Random House Trade Paperbacks, 1996), p. 118.

20. For an excellent general-level account of dark matter, see L. Krauss, Quintessence: The Mystery of Missing Mass in the Universe (New York: Basic Books, 2000).

21. The expert reader will recognize that I am not distinguishing between the various dark matter problems that emerge on different scales of observation (galactic, cosmic) as the contribution of dark matter to the cosmic mass density is my only concern here.

22. There is actually some controversy as to whether this is the mechanism behind all type Ia supernovae (I thank D. Spergel for pointing this out to me), but the uniformity of these events—which is what we need for the discussion—is on a strong observational footing.

23. It’s interesting to note that, years before the supernova results, prescient theoretical works by Jim Peebles at Princeton, and also by Lawrence Krauss of Case Western and Michael Turner of the University of Chicago, and Gary Steigman of Ohio State, had suggested that the universe might have a small nonzero cosmological constant. At the time, most physicists did not take this suggestion too seriously, but now, with the supernova data, the attitude has changed significantly. Also note that earlier in the chapter we saw that the outward push of a cosmological constant can be mimicked by a Higgs field that, like the frog on the plateau, is perched above its minimum energy configuration. So, while a cosmological constant fits the data well, a more precise statement is that the supernova researchers concluded that space must be filled with something like a cosmological constant that generates an outward push. (There are ways in which a Higgs field can be made to generate a long-lasting outward push, as opposed to the brief outward burst in the early moments of inflationary cosmology. We will discuss this in Chapter 14, when we consider the question of whether the data do indeed require a cosmological constant, or whether some other entity with similar gravitational consequences can fit the bill.) Researchers often use the term “dark energy” as a catchall phrase for an ingredient in the universe that is invisible to the eye but causes every region of space to push, rather than pull, on every other.

24. Dark energy is the most widely accepted explanation for the observed accelerated expansion, but other theories have been put forward. For instance, some have suggested that the data can be explained if the force of gravity deviates from the usual strength predicted by Newtonian and Einsteinian physics when the distance scales involved are extremely large—of cosmological size. Others are not yet convinced that the data show cosmic acceleration, and are waiting for more precise measurements to be carried out. It is important to bear these alternative ideas in mind, especially should future observations yield results that strain the current explanations. But currently, there is widespread consensus that the theoretical explanations described in the main text are the most convincing.

Chapter 11

1. Among the leaders in the early 1980s in determining how quantum fluctuations would yield inhomogeneities were Stephen Hawking, Alexei Starobinsky, Alan Guth, So-Young Pi, James Bardeen, Paul Steinhardt, Michael Turner, Viatcheslav Mukhanov, and Gennady Chibisov.

2. Even with the discussion in the main text, you may still be puzzled regarding how a tiny amount of mass/energy in an inflaton nugget can yield the huge amount of mass/energy constituting the observable universe. How can you wind up with more mass/energy than you begin with? Well, as explained in the main text, the inflaton field, by virtue of its negative pressure, “mines” energy from gravity. This means that as the energy in the inflaton field increases, the energy in the gravitational field decreases. The special feature of the gravitational field, known since the days of Newton, is that its energy can become arbitrarily negative. Thus, gravity is like a bank that is willing to lend unlimited amounts of money—gravity embodies an essentially limitless supply of energy, which the inflaton field extracts as space expands.

The particular mass and size of the initial nugget of uniform inflaton field depend on the details of the model of inflationary cosmology one studies (most notably, on the precise details of the inflaton field’s potential energy bowl). In the text, I’ve imagined that the initial inflaton field’s energy density was about 10^82 grams per cubic centimeter, so that a volume of (10^-26 centimeters)^3 = 10^-78 cubic centimeters would have total mass of about 10 kilograms, i.e., about 20 pounds. These values are typical of a fairly conventional class of inflationary models, but are only meant to give you a rough sense of the numbers involved. To give a flavor of the range of possibilities, let me note that in Andrei Linde’s chaotic models of inflation (see note 11 of Chapter 10), our observable universe would have emerged from an initial nugget of even smaller size, 10^-33 centimeters across (the so-called Planck length), whose energy density was even higher, about 10^94 grams per cubic centimeter, combining to give a lower total mass of about 10^-5 grams (the so-called Planck mass). In these realizations of inflation, the initial nugget would have weighed about as much as a grain of dust.
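
For readers who want to see the bookkeeping explicitly, here is the same order-of-magnitude arithmetic as a short calculation (a sketch only: the densities and sizes are the representative values quoted above, and the variable names are mine):

```python
# Order-of-magnitude check of the nugget masses quoted in this note.
# (Grams and centimeters throughout; all values are representative only.)

conventional_density = 1e82      # inflaton energy density, g/cm^3
conventional_size = 1e-26        # nugget diameter, cm
volume = conventional_size**3    # = 1e-78 cm^3
print(conventional_density * volume)        # ~1e4 g, i.e., about 10 kilograms

# Linde's chaotic-inflation numbers: a Planck-sized, Planck-density nugget.
planck_density = 1e94            # g/cm^3
planck_length = 1e-33            # cm
print(planck_density * planck_length**3)    # ~1e-5 g, the Planck mass
```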

3. See Paul Davies, “Inflation and Time Asymmetry in the Universe,” in Nature, vol. 301, p. 398; Don Page, “Inflation Does Not Explain Time Asymmetry,” in Nature, vol. 304, p. 39; and Paul Davies, “Inflation in the Universe and Time Asymmetry,” in Nature, vol. 312, p. 524.

4. To explain the essential point, it is convenient to split entropy up into a part due to spacetime and gravity, and a remaining part due to everything else, as this intuitively captures the key ideas. However, I should note that it proves elusive to give a mathematically rigorous treatment in which the gravitational contribution to entropy is cleanly identified, separated off, and accounted for. Nevertheless, this doesn’t compromise the qualitative conclusions we reach. In case you find this troublesome, note that the whole discussion can be rephrased largely without reference to gravitational entropy. As we emphasized in Chapter 6, when ordinary attractive gravity is relevant, matter falls together into clumps. In so doing, the matter converts gravitational potential energy into kinetic energy that, subsequently, is partially converted into radiation that emanates from the clump itself. This is an entropy-increasing sequence of events (larger average particle velocities increase the relevant phase space volume; the production of radiation through interactions increases the total number of particles—both of which increase overall entropy). In this way, what we refer to in the text as gravitational entropy can be rephrased as matter entropy generated by the gravitational force. When we say gravitational entropy is low, we mean that the gravitational force has the potential to generate significant quantities of entropy through matter clumping. In realizing such entropy potential, the clumps of matter create a non-uniform, non-homogeneous gravitational field—warps and ripples in spacetime—which, in the text, I’ve described as having higher entropy. But as this discussion makes clear, it is really the clumpy matter (and the radiation produced in the process) that can be thought of as having higher entropy (than when uniformly dispersed). This is good, since the expert reader will note that if we view a classical gravitational background (a classical spacetime) as a coherent state of gravitons, it is an essentially unique state and hence has low entropy. Only by suitably coarse-graining would an entropy assignment be possible. As this note emphasizes, though, this isn’t particularly necessary. On the other hand, should the matter clump sufficiently to create black holes, then an unassailable entropy assignment becomes available: the area of the black hole’s event horizon (as explained further in Chapter 16) is a measure of the black hole’s entropy. And this entropy can unambiguously be called gravitational entropy.

5. Just as it is possible both for an egg to break and for broken eggshell pieces to reassemble into a pristine egg, it is possible for quantum-induced fluctuations to grow into larger inhomogeneities (as we’ve described) or for sufficiently correlated inhomogeneities to work in tandem to suppress such growth. Thus, the inflationary contribution to resolving time’s arrow also requires sufficiently uncorrelated initial quantum fluctuations. Again, if we think in a Boltzmann-like manner, among all the fluctuations yielding conditions ripe for inflation, sooner or later there will be one that meets this condition as well, allowing the universe as we know it to initiate.

6. There are some physicists who would claim that the situation is better than described. For example, Andrei Linde argues that in chaotic inflation (see note 11, Chapter 10), the observable universe emerged from a Planck-sized nugget containing a uniform inflaton field with Planck scale energy density. Under certain assumptions, Linde further argues that the entropy of a uniform inflaton field in such a tiny nugget is roughly equal to the entropy of any other inflaton field configuration, and hence the conditions necessary for achieving inflation weren’t special. The entropy of the Planck-sized nugget was small but on a par with the possible entropy that the Planck-sized nugget could have had. The ensuing inflationary burst then created, in a flash, a huge universe with an enormously higher entropy—but one that, because of its smooth, uniform distribution of matter, was also enormously far from the entropy that it could have. The arrow of time points in the direction in which this entropy gap is being lessened.

While I am partial to this optimistic vision, until we have a better grasp on the physics out of which inflation is supposed to have emerged, caution is warranted. For example, the expert reader will note that this approach makes favorable but unjustified assumptions about the high-energy (transplanckian) field modes—modes that can affect the onset of inflation and play a crucial role in structure formation.

Chapter 12

1. The circumstantial evidence I have in mind here relies on the fact that the strengths of all three nongravitational forces depend on the energy and temperature of the environment in which the forces act. At low energies and temperatures, such as those of our everyday environment, the strengths of all three forces are different. But there is indirect theoretical and experimental evidence that at very high temperatures, such as occurred in the earliest moments of the universe, the strengths of all three forces converge, indicating, albeit indirectly, that all three forces themselves may fundamentally be unified, and appear distinct only at low energies and temperatures. For a more detailed discussion see, for example, The Elegant Universe, Chapter 7.

2. Once we know that a field, like any of the known force fields, is an ingredient in the makeup of the cosmos, then we know that it exists everywhere—it is stitched into the fabric of the cosmos. It is impossible to excise the field, much as it is impossible to excise space itself. The nearest we can come to eliminating a field’s presence, therefore, is to have it take on a value that minimizes its energy. For force fields, like the electromagnetic force, that value is zero, as discussed in the text. For fields like the inflaton or the standard-model Higgs field (which, for simplicity, we do not consider here), that value can be some nonzero number that depends on the field’s precise potential energy shape, as we discussed in Chapters 9 and 10. As mentioned in the text, to keep the discussion streamlined we are only explicitly discussing quantum fluctuations of fields whose lowest energy state is achieved when their value is zero, although fluctuations associated with Higgs or inflaton fields require no modification of our conclusions.

3. Actually, the mathematically inclined reader should note that the uncertainty principle dictates that energy fluctuations are inversely proportional to the time resolution of our measurements, so the finer the time resolution with which we examine a field’s energy, the more wildly the field will undulate.
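
In symbols, this is the familiar time-energy form of the uncertainty relation: if a field’s energy is probed with time resolution Δt, the fluctuations ΔE satisfy

```latex
\Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2}
\qquad\Longrightarrow\qquad
\Delta E \;\gtrsim\; \frac{\hbar}{2\,\Delta t},
```

so the finer the time resolution, the larger the minimum energy fluctuation.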

4. In this experiment, Lamoreaux verified the Casimir force in a modified setup involving the attraction between a spherical lens and a quartz plate. More recently, Gianni Carugno, Roberto Onofrio, and their collaborators at the University of Padova have undertaken the more difficult experiment involving the original Casimir framework of two parallel plates. (Keeping the plates perfectly parallel is quite an experimental challenge.) So far, they have confirmed Casimir’s predictions to a level of 15 percent.

5. In retrospect, these insights also show that if Einstein had not introduced the cosmological constant in 1917, quantum physicists would have introduced their own version a few decades later. As you will recall, the cosmological constant was an energy Einstein envisioned suffusing all of space, but whose origin he—and modern-day proponents of a cosmological constant—left unspecified. We now realize that quantum physics suffuses empty space with jittering fields, and as we directly see through Casimir’s discovery, the resulting microscopic field frenzy fills space with energy. In fact, a major challenge facing theoretical physics is to show that the combined contribution of all field jitters yields a total energy in empty space—a total cosmological constant—that is within the observational limit currently determined by the supernova observations discussed in Chapter 10. So far, no one has been able to do this; carrying out the analysis exactly has proven to be beyond the capacity of current theoretical methods, and approximate calculations have gotten answers wildly larger than observations allow, strongly suggesting that the approximations are way off. Many view explaining the value of the cosmological constant (whether it is zero, as long thought, or small and nonzero, as suggested by the inflationary and supernova data) as one of the most important open problems in theoretical physics.

6. In this section, I describe one way of seeing the conflict between general relativity and quantum mechanics. But I should note, in keeping with our theme of seeking the true nature of space and time, that other, somewhat less tangible but potentially important puzzles arise in attempting to merge general relativity and quantum mechanics. One that’s particularly tantalizing arises when the straightforward application of the procedure for transforming classical nongravitational theories (like Maxwell’s electrodynamics) into a quantum theory is extended to classical general relativity (as shown by Bryce DeWitt in what is now called the Wheeler-DeWitt equation). In the central equation that emerges, it turns out that the time variable does not appear. So, rather than having an explicit mathematical embodiment of time—as is the case in every other fundamental theory—in this approach to quantizing gravity, temporal evolution must be kept track of by a physical feature of the universe (such as its density) that we expect to change in a regular manner. As yet, no one knows if this procedure for quantizing gravity is appropriate (although much progress in an offshoot of this formalism, called loop quantum gravity, has been recently achieved; see Chapter 16), so it is not clear whether the absence of an explicit time variable is hinting at something deep (time as an emergent concept?) or not. In this chapter we focus on a different approach for merging general relativity and quantum mechanics, superstring theory.

7. It is somewhat of a misnomer to speak of the “center” of a black hole as if it were a place in space. The reason, roughly speaking, is that when one crosses a black hole’s event horizon—its outer edge—the roles of space and time are interchanged. In fact, just as you can’t resist going from one second to the next in time, so you can’t resist being pulled to the black hole’s “center” once you’ve crossed the event horizon. It turns out that this analogy between heading forward in time and heading toward a black hole’s center is strongly motivated by the mathematical description of black holes. Thus, rather than thinking of the black hole’s center as a location in space, it is better to think of it as a location in time. Furthermore, since you can’t go beyond the black hole’s center, you might be tempted to think of it as a location in spacetime where time comes to an end. This may well be true. But since the standard general relativity equations break down under such extremes of huge mass density, our ability to make definite statements of this sort is compromised. Clearly, this suggests that if we had equations that don’t break down deep inside a black hole, we might gain important insights into the nature of time. That is one of the goals of superstring theory.

8. As in earlier chapters, by “observable universe” I mean that part of the universe with which we could have had, at least in principle, communication during the time since the bang. In a universe that is infinite in spatial extent, as discussed in Chapter 8, all of space does not shrink to a point at the moment of the bang. Certainly, everything in the observable part of the universe will be squeezed into an ever smaller space as we head back to the beginning, but, although hard to picture, there are things—infinitely far away—that will forever remain separate from us, even as the density of matter and energy grows ever higher.

9. Leonard Susskind, in “The Elegant Universe,” NOVA, three-hour PBS series first aired October 28 and November 4, 2003.

10. Indeed, the difficulty of designing experimental tests for superstring theory has been a crucial stumbling block, one that has substantially hindered the theory’s acceptance. However, as we will see in later chapters, there has been much progress in this direction; string theorists have high hopes that upcoming accelerator and space-based experiments will provide at least circumstantial evidence in support of the theory, and with luck, maybe even more.

11. Although I haven’t covered it explicitly in the text, note that every known particle has an antiparticle—a particle with the same mass but opposite force charges (like the opposite sign of electric charge). The electron’s antiparticle is the positron; the up-quark’s antiparticle is, not surprisingly, the anti-up-quark; and so on.

12. As we will see in Chapter 13, recent work in string theory has suggested that strings may be much larger than the Planck length, and this has a number of potentially critical implications—including the possibility of making the theory experimentally testable.

13. The existence of atoms was initially argued through indirect means (as an explanation of the particular ratios in which various chemical substances would combine, and later, through Brownian motion); the existence of the first black holes was confirmed (to many physicists’ satisfaction) by seeing their effect on gas that falls toward them from nearby stars, instead of “seeing” them directly.

14. Since even a placidly vibrating string has some amount of energy, you might wonder how it’s possible for a string vibrational pattern to yield a massless particle. The answer, once again, has to do with quantum uncertainty. No matter how placid a string is, quantum uncertainty implies that it has a minimal amount of jitter and jiggle. And, through the weirdness of quantum mechanics, these uncertainty-induced jitters have negative energy. When this is combined with the positive energy from the most gentle of ordinary string vibrations, the total mass/energy is zero.

15. For the mathematically inclined reader, the more precise statement is that the squares of the masses of string vibrational modes are given by integer multiples of the square of the Planck mass. Even more precisely (and of relevance to recent developments covered in Chapter 13), the squares of these masses are integer multiples of the string scale (which is proportional to the inverse square of the string length). In conventional formulations of string theory, the string scale and the Planck mass are close, which is why I’ve simplified the main text and only introduced the Planck mass. However, in Chapter 13 we will consider situations in which the string scale can be different from the Planck mass.
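
Schematically, and suppressing the same numerical factors the note itself suppresses, the spectrum just described reads

```latex
M_n^{2} \;=\; n\,M_s^{2}, \qquad n = 0, 1, 2, \ldots,
\qquad\text{with}\quad M_s^{2} \;\propto\; \frac{1}{\ell_s^{2}},
```

where ℓ_s is the string length and M_s the string scale; the n = 0 entry is the massless case discussed in note 14, and in conventional formulations M_s is close to the Planck mass.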

16. It’s not too hard to understand, in rough terms, how the Planck length crept into Klein’s analysis. General relativity and quantum mechanics invoke three fundamental constants of nature: c (the velocity of light), G (Newton’s constant, which sets the basic strength of the gravitational force), and ħ (Planck’s constant, describing the size of quantum effects). These three constants can be combined to produce a quantity with units of length: (ħG/c^3)^{1/2}, which, by definition, is the Planck length. After substituting the numerical values of the three constants, one finds the Planck length to be about 1.616 × 10^-33 centimeters. Thus, unless a dimensionless number with value differing substantially from 1 should emerge from the theory—something that doesn’t often happen in a simple, well-formulated physical theory—we expect the Planck length to be the characteristic size of lengths, such as the length of the curled-up spatial dimension. Nevertheless, do note that this does not rule out the possibility that dimensions can be larger than the Planck length, and in Chapter 13 we will see interesting recent work that has investigated this possibility vigorously.
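
As a quick numerical check of the value just quoted (a sketch using standard CODATA figures for the constants, in SI units):

```python
# Compute the Planck length, (hbar * G / c^3)^(1/2), in centimeters.
hbar = 1.054571817e-34   # reduced Planck constant, in J*s
G = 6.67430e-11          # Newton's gravitational constant, in m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, in m/s

planck_length = (hbar * G / c**3) ** 0.5   # in meters
print(f"{planck_length * 100:.3e} cm")     # prints ~1.616e-33 cm, as quoted above
```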

17. Incorporating a particle with the electron’s charge, and with its relatively tiny mass, proved a formidable challenge.

18. Note that the uniform symmetry requirement that we used in Chapter 8 to narrow down the shape of the universe was motivated by astronomical observations (such as those of the microwave background radiation) within the three large dimensions. These symmetry constraints have no bearing on the shape of the possible six tiny extra space dimensions. Figure 12.9a is based on an image created by Andrew Hanson.

19. You might wonder about whether there might not only be extra space dimensions, but also extra time dimensions. Researchers (such as Itzhak Bars at the University of Southern California) have investigated this possibility, and shown that it is at least possible to formulate theories with a second time dimension that seem to be physically reasonable. But whether this second time dimension is really on a par with the ordinary time dimension or is just a mathematical device has never been settled fully; the general feeling is more toward the latter than the former. By contrast, the most straightforward reading of string theory says that the extra space dimensions are every bit as real as the three we know about.

20. String theory experts (and those who have read The Elegant Universe, Chapter 12) will recognize that the more precise statement is that certain formulations of string theory (discussed in Chapter 13 of this book) admit limits involving eleven spacetime dimensions. There is still debate as to whether string theory is best thought of as fundamentally being an eleven spacetime dimensional theory, or whether the eleven dimensional formulation should be viewed as a particular limit (e.g., when the string coupling constant is taken large in the Type IIA formulation), on a par with other limits. As this distinction does not have much impact on our general-level discussion, I have chosen the former viewpoint, largely for the linguistic ease of having a fixed and uniform total number of dimensions.

Chapter 13

1. For the mathematically inclined reader: I am here referring to conformal symmetry—symmetry under arbitrary angle-preserving transformations on the volume in spacetime swept out by the proposed fundamental constituent. Strings sweep out two-spacetime-dimensional surfaces, and the equations of string theory are invariant under the two-dimensional conformal group, which is an infinite dimensional symmetry group. By contrast, in other numbers of space dimensions, associated with objects that are not themselves one-dimensional, the conformal group is finite-dimensional.

2. Many physicists contributed significantly to these developments, both by laying the groundwork and through follow-up discoveries: Michael Duff, Paul Howe, Takeo Inami, Kelley Stelle, Eric Bergshoeff, Ergin Sezgin, Paul Townsend, Chris Hull, Chris Pope, John Schwarz, Ashoke Sen, Andrew Strominger, Curtis Callan, Joe Polchinski, Petr Hořava, J. Dai, Robert Leigh, Hermann Nicolai, and Bernard de Wit, among many others.

3. In fact, as explained in Chapter 12 of The Elegant Universe, there is an even tighter connection between the overlooked tenth spatial dimension and p-branes. As you increase the size of the tenth spatial dimension in, say, the type IIA formulation, one-dimensional strings stretch into two-dimensional inner-tube-like membranes. If you assume the tenth dimension is very small, as had always been implicitly done prior to these discoveries, the inner tubes look and behave like strings. As is the case for strings, the question of whether these newly found branes are indivisible or, instead, are made of yet finer constituents, remains unanswered. Researchers are open to the possibility that the ingredients so far identified in string/M-theory will not bring to a close the search for the elementary constituents of the universe. However, it’s also possible that they will. Since much of what follows is insensitive to this issue, we’ll adopt the simplest perspective and imagine that all the ingredients—strings and branes of various dimensions—are fundamental. And what of the earlier reasoning, which suggested that fundamental higher dimensional objects could not be incorporated into a physically sensible framework? Well, that reasoning was itself rooted in another quantum mechanical approximation scheme—one that is standard and fully battle tested but that, like any approximation, has limitations. Although researchers have yet to figure out all the subtleties associated with incorporating higher-dimensional objects into a quantum theory, these ingredients fit so perfectly and consistently within all five string formulations that almost everyone believes that the feared violations of basic and sacred physical principles are absent.

4. In fact, we could be living on an even higher-dimensional brane (a four-brane, a five-brane . . .) three of whose dimensions fill ordinary space, and whose other dimensions fill some of the smaller, extra dimensions the theory requires.

5. The mathematically inclined reader should note that for many years string theorists have known that closed strings respect something called T-duality (as explained further in Chapter 16, and in Chapter 10 of The Elegant Universe). Basically, T-duality is the statement that if an extra dimension should be in the shape of a circle, string theory is completely insensitive to whether the circle’s radius is R or 1/R (in units of the string length). The reason is that strings can move around the circle (“momentum modes”) and/or wrap around the circle (“winding modes”) and, under the replacement of R with 1/R, physicists have realized that the roles of these two modes simply interchange, keeping the overall physical properties of the theory unchanged. Essential to this reasoning is that the strings are closed loops, since if they are open there is no topologically stable notion of their winding around a circular dimension. So, at first blush, it seems that open and closed strings behave completely differently under T-duality. With closer inspection, and by making use of the Dirichlet boundary conditions for open strings (the “D” in D-branes), Polchinski, Dai, Leigh, as well as Hořava, Green, and other researchers resolved this puzzle.
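
To see schematically why the interchange works (a sketch in units with ħ = c = 1, writing α′ = ℓ_s² for the squared string length): a closed string on a circle of radius R carrying n units of momentum and w units of winding has

```latex
M^{2} \;=\; \left(\frac{n}{R}\right)^{2} + \left(\frac{w\,R}{\alpha'}\right)^{2} + (\text{oscillator contributions}),
```

which is manifestly unchanged under R → α′/R provided n and w are simultaneously interchanged.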

6. Proposals that have tried to circumvent the introduction of dark matter or dark energy have suggested that even the accepted behavior of gravity on large scales may differ from what Newton or Einstein would have thought, and in that way attempt to account for gravitational effects incompatible with solely the material we can see. As yet, these proposals are highly speculative and have little support, either experimental or theoretical.

7. The physicists who introduced this idea are S. Giddings and S. Thomas, and S. Dimopoulos and G. Landsberg.

8. Notice that the contraction phase of such a bouncing universe is not the same as the expansion phase run in reverse. Physical processes such as eggs splattering and candles melting would happen in the usual “forward” time direction during the expansion phases and would continue to do so during the subsequent contraction phase. That’s why entropy would increase during both phases.

9. The expert reader will note that the cyclic model can be phrased in the language of four-dimensional effective field theory on one of the three-branes, and in this form it shares many features with more familiar scalar-field-driven inflationary models. When I say “radically new mechanism,” I am referring to the conceptual description in terms of colliding branes, which in and of itself is a striking new way of thinking about cosmology.

10. Don’t get confused on dimension counting. The two three-branes, together with the space interval between them, have four dimensions. Time brings it to five. That leaves six more for the Calabi-Yau space.

11. An important exception, mentioned at the end of this chapter and discussed in further detail in Chapter 14, has to do with inhomogeneities in the gravitational field, so-called primordial gravitational waves. Inflationary cosmology and the cyclic model differ in this regard, one way in which there is a chance that they may be distinguished experimentally.

12. Quantum mechanics ensures that there is always a nonzero probability that a chance fluctuation will disrupt the cyclic process (e.g., one brane twists relative to the other), causing the model to grind to a halt. Even if the probability is minuscule, sooner or later it will surely come to pass, and hence the cycles cannot continue indefinitely.

Chapter 14

1. A. Einstein, “Vierteljahrschrift für gerichtliche Medizin und öffentliches Sanitätswesen” 44 37 (1912). D. Brill and J. Cohen, Phys. Rev. vol. 143, no. 4, 1011 (1966); H. Pfister and K. Braun, Class. Quantum Grav. 2, 909 (1985).

2. In the four decades since the initial proposal of Schiff and Pugh, other tests of frame dragging have been undertaken. These experiments (carried out by, among others, Bruno Bertotti, Ignazio Ciufolini, and Peter Bender; and I. I. Shapiro, R. D. Reasenberg, J. F. Chandler, and R. W. Babcock) have studied the motion of the moon as well as satellites orbiting the earth, and found some evidence for frame dragging effects. One major advantage of Gravity Probe B is that it is the first fully contained experiment, one that is under complete control of the experimenters, and so should give the most precise and most direct evidence for frame dragging.

3. Although they are effective at giving a feel for Einstein’s discovery, another limitation of the standard images of warped space is that they don’t illustrate the warping of time. This is important because general relativity shows that for an ordinary object like the sun, as opposed to something extreme like a black hole, the warping of time (the closer you are to the sun, the slower your clocks will run) is far more pronounced than the warping of space. It’s subtler to depict the warping of time graphically and it’s harder to convey how warped time contributes to curved spatial trajectories such as the earth’s elliptical orbit around the sun, and that’s why Figure 3.10 (and just about every attempt to visualize general relativity I’ve ever seen) focuses solely on warped space. But it’s good to bear in mind that in many common astrophysical environments, it’s the warping of time that is dominant.

4. In 1974, Russell Hulse and Joseph Taylor discovered a binary pulsar system—two pulsars (rapidly spinning neutron stars) orbiting one another. Because the pulsars move very quickly and are very close together, Einstein’s general relativity predicts that they will emit copious amounts of gravitational radiation. Although it is quite a challenge to detect this radiation directly, general relativity shows that the radiation should reveal itself indirectly through other means: the energy emitted via the radiation should cause the orbital period of the two pulsars to gradually decrease. The pulsars have been observed continuously since their discovery, and indeed, their orbital period has decreased—and in a manner that agrees with the prediction of general relativity to about one part in a thousand. Thus, even without direct detection of the emitted gravitational radiation, this provides strong evidence for its existence. For their discovery, Hulse and Taylor were awarded the 1993 Nobel Prize in Physics.

5. However, see note 4, above.

6. From the viewpoint of energetics, therefore, cosmic rays provide a naturally occurring accelerator that is far more powerful than any we have or will construct in the foreseeable future. The drawback is that although the particles in cosmic rays can have extremely high energies, we have no control over what slams into what—when it comes to cosmic ray collisions, we are passive observers. Furthermore, the number of cosmic ray particles with a given energy drops quickly as the energy level increases. While about 10 billion cosmic ray particles with an energy equivalent to the mass of a proton (about one-thousandth of the design capacity of the Large Hadron Collider) strike each square kilometer of earth’s surface every second (and quite a few pass through your body every second as well), only about one of the most energetic particles (about 100 billion times the mass of a proton) would strike a given square kilometer of earth’s surface each century. Finally, accelerators can slam particles together by making them move quickly, in opposite directions, thereby creating a large center of mass energy. Cosmic ray particles, by contrast, slam into the relatively slow moving particles in the atmosphere. Nevertheless, these drawbacks are not insurmountable. Over the course of many decades, experimenters have learned quite a lot from studying the more plentiful, lower-energy cosmic ray data, and, to deal with the paucity of high-energy collisions, experimenters have built huge arrays of detectors to catch as many particles as possible.

7. The expert reader will realize that conservation of energy in a theory with dynamic spacetime is a subtle issue. Certainly, the stress tensor of all sources for the Einstein equations is covariantly conserved. But this does not necessarily translate into a global conservation law for energy. And with good reason. The stress tensor does not take account of gravitational energy—a notoriously difficult notion in general relativity. Over short enough distance and time scales—such as occur in accelerator experiments—local energy conservation is valid, but statements about global conservation have to be treated with greater care.

8. This is true of the simplest inflationary models. Researchers have found that more complicated realizations of inflation can suppress the production of gravitational waves.

9. A viable dark matter candidate must be a stable, or very long-lived, particle—one that does not disintegrate into other particles. This is expected to be true of the lightest of the supersymmetric partner particles, and hence the more precise statement is that the lightest of the zino, higgsino, or photino is a suitable dark matter candidate.

10. Not too long ago, a joint Italian-Chinese research group known as the Dark Matter Experiment (DAMA), working out of the Gran Sasso Laboratory in Italy, made the exciting announcement that they had achieved the first direct detection of dark matter. So far, however, no other group has been able to verify the claim. In fact, another experiment, Cryogenic Dark Matter Search (CDMS), based at Stanford and involving researchers from the United States and Russia, has amassed data that many believe rule out the DAMA results to a high degree of confidence. In addition to these dark matter searches, many others are under way. To read about some of these, take a look at http://hepwww.rl.ac.uk/ukdmc/dark_matter/other_searches.htm.

Chapter 15

1. This statement ignores hidden-variable approaches, such as Bohm’s. But even in such approaches, we’d want to teleport an object’s quantum state (its wavefunction), so a mere measurement of position or velocity would be inadequate.

2. Zeilinger’s research group also included Dik Bouwmeester, Jian-Wei Pan, Klaus Mattle, Manfred Eibl, and Harald Weinfurter, and De Martini’s has included S. Giacomini, G. Milani, F. Sciarrino, and E. Lombardi.

3. For the reader who has some familiarity with the formalism of quantum mechanics, here are the essential steps in quantum teleportation. Imagine that the initial state of a photon I have in New York is given by |Ψ>_1 = α|0>_1 + β|1>_1, where |0> and |1> are the two photon polarization states, and we allow for definite, normalized, but arbitrary values of the coefficients. My goal is to give Nicholas enough information so that he can produce a photon in London in exactly the same quantum state. To do so, Nicholas and I first acquire a pair of entangled photons in the state, say, |Ψ>_23 = (1/√2)|0_2 0_3> − (1/√2)|1_2 1_3>. The initial state of the three-photon system is thus |Ψ>_123 = (α/√2){|0_1 0_2 0_3> − |0_1 1_2 1_3>} + (β/√2){|1_1 0_2 0_3> − |1_1 1_2 1_3>}. When I perform a Bell-state measurement on Photons 1 and 2, I project this part of the system onto one of four states: |Φ>^± = (1/√2){|0_1 0_2> ± |1_1 1_2>} and |Ψ>^± = (1/√2){|0_1 1_2> ± |1_1 0_2>}. Now, if we re-express the initial state using this basis of eigenstates for Particles 1 and 2, we find: |Ψ>_123 = 1/2{|Φ>^+ (α|0_3> − β|1_3>) + |Φ>^− (α|0_3> + β|1_3>) + |Ψ>^+ (−α|1_3> + β|0_3>) + |Ψ>^− (−α|1_3> − β|0_3>)}. Thus, after performing my measurement, I will “collapse” the system onto one of these four summands. Once I communicate to Nicholas (via ordinary means) which summand I find, he knows how to manipulate Photon 3 to reproduce the original state of Photon 1. For instance, if I find that my measurement yields state |Φ>^−, then Nicholas does not need to do anything to Photon 3, since, as above, it is already in the original state of Photon 1. If I find any other result, Nicholas will have to perform a suitable rotation (dictated, as you can see, by which result I find) to put Photon 3 into the desired state.
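
The algebra above is mechanical enough to verify by direct computation. The following short NumPy sketch (illustrative only; the variable names mirror the note, and the correction table is the standard one, written up to unobservable overall phases) projects the three-photon state onto each Bell outcome and checks that the corresponding manipulation of Photon 3 recovers the original state:

```python
# Verify the teleportation bookkeeping in this note with NumPy.
# Basis ordering: Photon 1 (tensor) Photon 2 (tensor) Photon 3.
import numpy as np

alpha, beta = 0.6, 0.8j                      # arbitrary normalized amplitudes
psi1 = np.array([alpha, beta])               # Photon 1: the state to teleport

# Shared entangled pair for Photons 2 and 3: (1/sqrt 2)(|00> - |11>).
pair23 = np.array([1, 0, 0, -1]) / np.sqrt(2)
state = np.kron(psi1, pair23)                # full three-photon state

# Bell basis for Photons 1 and 2.
s = np.sqrt(2)
bell = {"Phi+": np.array([1, 0, 0, 1]) / s,
        "Phi-": np.array([1, 0, 0, -1]) / s,
        "Psi+": np.array([0, 1, 1, 0]) / s,
        "Psi-": np.array([0, 1, -1, 0]) / s}

# Correction applied to Photon 3, keyed by the reported Bell outcome.
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
fix = {"Phi-": I2, "Phi+": Z, "Psi-": X, "Psi+": Z @ X}

for outcome, b in bell.items():
    proj = np.kron(b.conj(), I2)             # project Photons 1, 2 on this Bell state
    psi3 = proj @ state                      # Photon 3's post-measurement state
    psi3 = psi3 / np.linalg.norm(psi3)       # renormalize after the "collapse"
    out = fix[outcome] @ psi3
    overlap = abs(np.vdot(psi1, out))        # 1.0 means exact match, up to phase
    print(outcome, round(overlap, 10))       # prints 1.0 for every outcome
```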

4. In fact, the mathematically inclined reader will note that it is not hard to prove the so-called no-quantum-cloning theorem. Imagine we have a unitary cloning operator U that takes any given state as input and produces two copies of it as output (U maps |α> → |α>|α>, for any input state |α>). Note that U acting on a state like (|α> + |β>) yields (|α>|α> + |β>|β>), which is not a two-fold copy of the original state (|α> + |β>)(|α> + |β>), and hence no such operator U exists to carry out quantum cloning. (This was first shown by Wootters and Zurek in the early 1980s.)

5. Many researchers have been involved in developing both the theory and the experimental realization of quantum teleportation. In addition to those discussed in the text, the work of Sandu Popescu while at Cambridge University played an important part in the Rome experiments, and Jeffrey Kimble’s group at the California Institute of Technology has pioneered the teleportation of continuous features of a quantum state, to name a few.

6. For extremely interesting progress on entangling many-particle systems, see, for example, B. Julsgaard, A. Kozhekin, and E. S. Polzik, “Experimental long-lived entanglement of two macroscopic objects,” Nature 413 (Sept. 2001), 400–403.

7. One of the most exciting and active areas of research making use of quantum entanglement and quantum teleportation is the field of quantum computing. For recent general-level presentations of quantum computing, see Tom Siegfried, The Bit and the Pendulum (New York: John Wiley, 2000), and George Johnson, A Shortcut Through Time (New York: Knopf, 2003).

8. One aspect of the slowing of time at increasing velocity, which we did not discuss in Chapter 3 but which will play a role in this chapter, is the so-called twin paradox. The issue is simple to state: if you and I are moving relative to one another at constant velocity, I will think your clock is running slow relative to mine. But since you are as justified as I in claiming to be at rest, you will think that mine is the moving clock and hence is the one that is running slow. That each of us thinks the other’s clock is running slow may seem paradoxical, but it’s not. At constant velocity, our clocks will continue to get farther apart and hence they don’t allow for a direct, face-to-face comparison to determine which is “really” running slow. And all other indirect comparisons (for instance, we compare the times on our clocks by cell phone communication) occur with some elapsed time over some spatial separation, necessarily bringing into play the complications of different observers’ notions of now, as in Chapters 3 and 5. I won’t go through it here, but when these special relativistic complications are folded into the analysis, there is no contradiction between each of us declaring that the other’s clock is running slow (see, e.g., E. Taylor and J. A. Wheeler, Spacetime Physics, for a complete, technical, but elementary discussion). Where things appear to get more puzzling is if, for example, you slow down, stop, turn around, and head back toward me so that we can compare our clocks face to face, eliminating the complications of different notions of now. Upon our meeting, whose clock will be ahead of whose? This is the so-called twin paradox: if you and I are twins, when we meet again, will we be the same age, or will one of us look older? The answer is that my clock will be ahead of yours—if we are twins, I will look older. There are many ways to explain why, but the simplest to note is that when you change your velocity and experience an acceleration, the symmetry between our perspectives is lost—you can definitively claim that you were moving (since, for example, you felt it—or, using the discussion of Chapter 3, unlike mine, your journey through spacetime has not been along a straight line) and hence that your clock ran slow relative to mine. Less time elapsed for you than for me.

9. John Wheeler, among others, has suggested a possible central role for observers in a quantum universe, summed up in one of his famous aphorisms: “No elementary phenomenon is a phenomenon until it is an observed phenomenon.” You can read more about Wheeler’s fascinating life in physics in John Archibald Wheeler and Kenneth Ford, Geons, Black Holes, and Quantum Foam: A Life in Physics (New York: Norton, 1998). Roger Penrose has also studied the relation between quantum physics and the mind in his The Emperor’s New Mind, and also in Shadows of the Mind: A Search for the Missing Science of Consciousness (Oxford: Oxford University Press, 1994).

10. See, for example, “Reply to Criticisms” in Albert Einstein, vol. 7 of The Library of Living Philosophers, P. A. Schilpp, ed. (New York: MJF Books, 2001).

11. W. J. van Stockum, Proc. R. Soc. Edin. A 57 (1937), 135.

12. The expert reader will recognize that I am simplifying. In 1966, Robert Geroch, who was a student of John Wheeler, showed that it is at least possible, in principle, to construct a wormhole without ripping space. But unlike the more intuitive, space-tearing approach to building wormholes in which the mere existence of the wormhole does not entail time travel, in Geroch’s approach the construction phase itself would necessarily require that time become so distorted that one could freely travel backward and forward in time (but no farther back than the initiation of the construction itself).

13. Roughly speaking, if you passed through a region containing such exotic matter at nearly the speed of light and took the average of all your measurements of the energy density you detected, the answer you’d find would be negative. Physicists say that such exotic matter violates the so-called averaged weak energy condition.
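
In the standard formulation (stated here as a sketch, with u^μ the four-velocity along a timelike geodesic and τ the proper time), the averaged weak energy condition requires

```latex
\int_{-\infty}^{\infty} T_{\mu\nu}\, u^{\mu} u^{\nu}\, d\tau \;\ge\; 0,
```

and exotic matter is, by definition, matter whose stress-energy tensor makes this averaged integral negative.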

14. The simplest realization of exotic matter comes from the vacuum fluctuations of the electromagnetic field between the parallel plates in the Casimir experiment, discussed in Chapter 12. Calculations show that the decrease in quantum fluctuations between the plates, relative to empty space, entails negative averaged energy density (as well as negative pressure).

15. For a pedagogical but technical account of wormholes, see Matt Visser, Lorentzian Wormholes: From Einstein to Hawking (New York: American Institute of Physics Press, 1996).

Chapter 16

1. For the mathematically inclined reader, recall from note 6 of Chapter 6 that entropy is defined as the logarithm of the number of rearrangements (or states), and that’s important to get the right answer in this example. When you join two Tupperware containers together, the various states of the air molecules can be described by giving the state of the air molecules in the first container, and then by giving the state of those in the second. Thus, the number of arrangements for the joined containers is the square of the number of arrangements of either separately. After taking the logarithm, this tells us that the entropy has doubled.
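
In symbols: with entropy S = k log N for N rearrangements, joining the two containers gives

```latex
N_{\text{joined}} \;=\; N^{2}
\qquad\Longrightarrow\qquad
S_{\text{joined}} \;=\; k \log N^{2} \;=\; 2\,k \log N \;=\; 2S.
```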

2. You will note that it doesn’t really make much sense to compare a volume with an area, as they have different units. What I really mean here, as indicated by the text, is that the rate at which volume grows with radius is much faster than the rate at which surface area grows. Thus, since entropy is proportional to surface area and not volume, it grows more slowly with the size of a region than it would were it proportional to volume.

3. While this captures the spirit of the entropy bound, the expert reader will recognize that I am simplifying. The more precise bound, as proposed by Raphael Bousso, states that the entropy flux through a null hypersurface (with everywhere non-positive focusing parameter Θ) is bounded by A/4, where A is the area of a spacelike cross-section of the null hypersurface (the “light-sheet”).

4. More precisely, the entropy of a black hole is the area of its event horizon, expressed in Planck units, divided by 4, and multiplied by Boltzmann’s constant.
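
Written out, this is the Bekenstein-Hawking formula: for a black hole whose event horizon has area A,

```latex
S_{\text{BH}} \;=\; k_B\,\frac{A}{4\,\ell_P^{2}} \;=\; \frac{k_B\, c^{3} A}{4\, G\, \hbar},
\qquad \ell_P^{2} \;=\; \frac{G\hbar}{c^{3}}.
```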

5. The mathematically inclined reader may recall from the endnotes to Chapter 8 that there is another notion of horizon—a cosmic horizon—which is the dividing surface between those things with which an observer can and cannot be in causal contact. Such horizons are also believed to support entropy, again proportional to their surface area.

6. In 1971, the Hungarian-born physicist Dennis Gabor was awarded the Nobel Prize for the discovery of something called holography. Initially motivated by the goal of improving the resolving power of electron microscopes, Gabor worked in the 1940s on finding ways to capture more of the information encoded in the light waves that bounce off an object. A camera, for example, records the intensity of such light waves; places where the intensity is high yield brighter regions of the photograph, and places where it’s low are darker. Gabor and many others realized, though, that intensity is only part of the information that light waves carry. We saw this, for example, in Figure 4.2b: while the interference pattern is affected by the intensity (the amplitude) of the light (higher-amplitude waves yield an overall brighter pattern), the pattern itself arises because the overlapping waves emerging from each of the slits reach their peak, their trough, and various intermediate wave heights at different locations along the detector screen. The latter information is called phase information: two light waves at a given point are said to be in phase if they reinforce each other (they each reach a peak or trough at the same time), out of phase if they cancel each other (one reaches a peak while the other reaches a trough), and, more generally, they have phase relations intermediate between these two extremes at points where they partially reinforce or partially cancel. An interference pattern thus records phase information of the interfering light waves.

Gabor developed a means for recording, on specially designed film, both the intensity and the phase information of light that bounces off an object. Translated into modern language, his approach is closely akin to the experimental setup of Figure 7.1, except that one of the two laser beams is made to bounce off the object of interest on its way to the detector screen. If the screen is outfitted with film containing appropriate photographic emulsion, it will record an interference pattern—in the form of minute, etched lines on the film’s surface—between the unfettered beam and the one that has reflected off the object. The interference pattern will encode both the intensity of the reflected light and phase relations between the two light beams. The ramifications of Gabor’s insight for science have been substantial, allowing for vast improvements in a wide range of measurement techniques. But for the public at large, the most prominent impact has been the artistic and commercial development of holograms.

Ordinary photographs look flat because they record only light intensity. To get depth, you need phase information. The reason is that as a light wave travels, it cycles from peak to trough to peak again, and so phase information—or, more precisely, phase differences between light beams that reflect off nearby parts of an object—encodes differences in how far the light rays have traveled. For example, if you look at a cat straight on, its eyes are a little farther away than its nose and this depth difference is encoded in the phase difference between the light beams’ reflecting off each facial element. By shining a laser through a hologram, we are able to exploit the phase information the hologram records, and thereby add depth to the image. We’ve all seen the results: stunning three-dimensional projections generated from two-dimensional pieces of plastic. Note, though, that your eyes do not use phase information to see depth. Instead, your eyes use parallax: the slight difference in the angles at which light from a given point travels to reach your left eye and your right eye supplies information that your brain decodes into the point’s distance. That’s why, for example, if you lose sight in one eye (or just keep it closed for a while), your depth perception is compromised.

7. For the mathematically inclined reader, the statement here is that a beam of light, or massless particles more generally, can travel from any point in the interior of anti-de Sitter space to spatial infinity and back, in finite time.

8. For the mathematically inclined reader, Maldacena worked in the context of AdS5 × S5, with the boundary theory arising from the boundary of AdS5.

9. This statement is more one of sociology than of physics. String theory grew out of the tradition of quantum particle physics, while loop quantum gravity grew out of the tradition of general relativity. However, it is important to note that, as of today, only string theory can make contact with the successful predictions of general relativity, since only string theory convincingly reduces to general relativity on large distance scales. Loop quantum gravity is understood well in the quantum domain, but bridging the gap to large-scale phenomena has proven difficult.

10. More precisely, as discussed further in Chapter 13 of The Elegant Universe, we have known how much entropy black holes contain since the work of Bekenstein and Hawking in the 1970s. However, the approach those researchers used was rather indirect, and never identified microscopic rearrangements—as in Chapter 6—that would account for the entropy they found. In the mid-1990s, this gap was filled by two string theorists, Andrew Strominger and Cumrun Vafa, who cleverly found a relation between black holes and certain configurations of branes in string/M-theory. Roughly, they were able to establish that certain special black holes would admit exactly the same number of rearrangements of their basic ingredients (whatever those ingredients might be) as do particular, special combinations of branes. When they counted the number of such brane rearrangements (and took the logarithm) the answer they found was the area of the corresponding black hole, in Planck units, divided by 4—exactly the answer for black hole entropy that had been found years before. In loop quantum gravity, researchers have also been able to show that the entropy of a black hole is proportional to its surface area, but getting the exact answer (surface area in Planck units divided by 4) has proven more of a challenge. If a particular parameter, known as the Immirzi parameter, is chosen appropriately, then indeed the exact black hole entropy emerges from the mathematics of loop quantum gravity, but as yet there is no universally accepted fundamental explanation, within the theory itself, of what sets the correct value of this parameter.

11. As I have throughout the chapter, I am suppressing quantitatively important but conceptually irrelevant numerical parameters.