Knocking on Heaven's Door: How Physics and Scientific Thinking Illuminate the Universe and the Modern World - Lisa Randall (2011)

Part II. SCALING MATTER

Chapter 5. THE MAGICAL MYSTERY TOUR

Though the ancient Greek philosopher Democritus might have started off on the right track when he posited the existence of atoms 2,500 years ago, no one could have accurately guessed what the true elementary components of matter would turn out to be. Some of the physical theories that apply at small distances are so counterintuitive that even the most creative and open-minded people would never have imagined them if experiments hadn’t forced scientists to accept their new and confounding premises. Once scientists of the last century had the technology to probe atomic scales, they found that the inner structure of matter repeatedly defied expectations. The pieces fit together in a way that is far more magical than anything we will see on a stage.

Any human being will have difficulty creating an accurate visual image of what’s going on at the minuscule scales that particle physicists study today. The elementary components that combine to form the stuff we recognize as matter are very different from what we access immediately through our senses. Those components operate according to unfamiliar physical laws. As scales decrease, matter seems to be governed by properties so different that they appear to be part of entirely different universes.

Many confusions in trying to comprehend this strange inner structure arise from lack of familiarity with the variety of ingredients that emerge at different scales and the range of sizes at which different theories most readily apply. We need to know what exists and to have a sense of the sizes and scales that different theories describe in order to fully understand the physical world.

Later on we will explore the different sizes relevant to space, the final frontier. This chapter first looks inward, starting with familiar scales and ending deep in the interior of matter—the other final frontier. From commonly encountered length scales to the innards of an atom (where quantum mechanics is essential) to the Planck scale (where gravity would be as powerful as the other known forces), we’ll explore what we know and how it all fits together. Let’s now take a tour of this remarkable inner landscape that enterprising physicists and others have deciphered over time.

SCALING THE UNIVERSE

Our journey begins at human scales—the ones we see and touch in our daily lives. It’s no coincidence that a meter—not one-millionth of a meter and not ten thousand meters—is, roughly speaking, the size of a person. It’s about twice the size of a baby and half the size of a fully grown man. It would be rather strange to find that the basic unit we use for common measurements was one-hundredth the size of the Milky Way or the length of an ant’s leg.

Nonetheless, a standard physical unit defined in terms of any particular human wouldn’t be all that useful since a measuring stick should be a length we all agree on and understand.25 So in 1791, the French Academy of Sciences established a standard. A meter was to be defined either as the length of a pendulum with a half period of one second or one ten-millionth of the length of the Earth’s meridian along a quadrant (that is, the distance from the Equator to the North Pole).

Neither definition has much to do with us humans. The French were simply trying to find an objective measure that we could all agree on and be comfortable with. They converged on the latter choice of definition to avoid the uncertainties introduced by the slightly varying force of gravity over the surface of the Earth.

The definition was arbitrary. It was designed to make the measure of a meter precise and standard so that everyone could agree on what it was. But one ten-millionth was no coincidence. With the official French definition, a meter stick is something you can comfortably hold in your hands.

Most of us are better approximated by two meters, but none of us are 10, or even three meters in height. A meter is a human scale, and when objects are this size, we’re pretty comfortable with them—at least insofar as our ability to observe and interact with them (we’ll stay away from meter-long crocodiles). We know the rules of physics that apply since they are the ones we witness in our daily existence. Our intuition is based on a lifetime of observing objects and people and animals whose size can be reasonably described in terms of meters.

I sometimes find it remarkable how constrained our comfort zone can be. The NBA basketball player Joakim Noah is a friend of my cousin. My family and I never tire of commenting on his height. We can look at photos or marks on a door frame charting his height at various ages and marvel at him blocking a smaller guy’s shot. Joakim is mesmerizingly tall. But the fact is, he is only about 15 percent taller than the average human being, and his body works pretty much like everyone else’s. The exact proportions might be different, sometimes giving a mechanical advantage and sometimes not. But the rules his bones and muscles follow are pretty much the same that yours do.

Newton’s laws of motion, written down in 1687, still tell us what happens when we apply force to a given mass. They apply to the bones in our body and they apply to the ball Joakim throws. With these laws we can calculate the trajectory of a ball he tosses here on Earth and predict the path the planet Mercury takes when orbiting the Sun. In all cases, Newton’s laws tell us that motion will continue at the same speed unless a force acts on the object. That force will accelerate an object in accordance with its mass. An action will induce an equal and opposite reaction.
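Written symbolically (standard textbook shorthand rather than anything specific to this chapter), those three statements read:

$$\mathbf{F}=0 \;\Rightarrow\; \mathbf{v}=\text{constant}, \qquad \mathbf{F}=m\,\mathbf{a}, \qquad \mathbf{F}_{\text{A on B}}=-\,\mathbf{F}_{\text{B on A}},$$

with the first fixing uniform motion in the absence of force, the second relating force, mass, and acceleration, and the third pairing every action with an equal and opposite reaction.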

Newton’s laws work admirably for a well-understood range of lengths, speeds, and densities. Disparities appear only at the very small distances where quantum mechanics changes the rules, at extremely high speeds where relativity applies, or at enormous densities such as those in a black hole where general relativity takes over.

The effects of any of the new theories that supersede Newton’s laws are too small to ever be observed at ordinary distances, speeds, or densities. But with determination and technology we can reach the regimes where we encounter these limitations.

JOURNEY INSIDE

We have to travel a ways down before we encounter new physics components and new physical laws. But a lot goes on in the range of scales between a meter and the size of an atom. Many of the objects we encounter in our daily existence as well as in life itself have important features we can notice only when we explore smaller systems where different behaviors or substructures become prominent. (See Figure 13 for some scales that we refer to in this chapter.)

Of course, a lot of objects we’re familiar with are made by simply putting together a single fundamental unit many times, with few details or any internal structure of interest. These extensive systems grow like walls of bricks. We can make walls bigger or smaller by adding more or fewer bricks, but the basic functional unit is always the same. A large wall is in many respects just like a small wall. This type of scaling is exemplified in many large systems that grow with the number of repeated elementary components. This applies, for example, to many large organizations as well as computer memory chips that are composed of large numbers of identical transistors.

A different type of scaling that applies to other types of large systems is exponential growth, which occurs when the connections, rather than the fundamental elements, determine a system’s behavior. Although such systems too grow by adding many similar units, the behavior depends on the number of connections—not just the number of basic units. These connections don’t extend just to an adjacent part, as with bricks, but can extend to other units across the system. Neural systems composed of many synaptic connections, cells with many interacting proteins, and the Internet with a large number of connected computers are all examples. This is a worthy subject of study in itself, and some forms of physics also deal with related emergent macroscopic behavior.


FIGURE 13 ] A tour of small scales, and the length units that are used to describe them.

But elementary particle physics is not about complex multi-unit systems. It focuses on identifying elementary components and the physical laws they obey. Particle physics zeroes in on basic physical quantities and their interactions. These smaller components are of course relevant to complex physical behaviors that involve many components interacting in interesting ways. But identifying the smallest basic components and the way they behave is our focus here.

With technology and biological systems, the individual components of the larger systems have internal structure too. After all, computers are built from microprocessors built from transistors. And when doctors look inside human beings, they find organs and blood vessels and everything else that one encounters upon dissection that are in turn built from cells and DNA that one can see only with more advanced technology. The operation of those internal elements is nothing like what we see when we observe only the surface. The elements change at smaller scales. The best description for the rules those elements follow changes as well.

Since the history of the study of physiology is in some ways analogous to the study of physical laws, and covers some of the interesting length scales for humans, let’s take a moment to think a bit about ourselves and how some aspects of the more familiar inner workings of the body were understood before turning to physics and the external world.

The collarbone is an interesting example for which the function could only be understood upon internal dissection. It has its name because on the surface it seems like a collar. But when scientists probed inside the human body they found a key-like piece to the bone that gave it another name we often use: the clavicle.

Nor did anyone understand blood circulation or the capillary system connecting arteries and veins until the early seventeenth century when William Harvey did meticulous experiments to explore the details of hearts and blood networks in animals and humans. Harvey, though English, studied medicine at the University of Padua, where he learned quite a lot from his mentor Hieronymus Fabricius, who was interested in blood flow as well but misunderstood the role of veins and their valves.

Not only did Harvey change our picture of the actual objects involved—here we have networks of arteries and veins carrying blood in a branching network to capillaries working on smaller and smaller scales—but Harvey also discovered a process. Blood is transferred back and forth to cells in ways that no one anticipated until they actually looked. Harvey discovered more than a catalog—he discovered a whole new system.

However, Harvey did not yet have the tools to physically discover the capillary system, which Marcello Malpighi succeeded in doing only in 1661. Harvey’s suggestions had included hypotheses based on theoretical arguments that were only later validated by experiments. Although Harvey made detailed illustrations, he couldn’t achieve the same level of resolution that users of the microscope such as Leeuwenhoek would subsequently attain.

Our circulatory system contains red blood cells. Those internal elements are only seven micrometers long—roughly one hundred thousandth the size of a meter stick. That’s 100 times smaller than the thickness of a credit card—about the size of a fog droplet and about 10 times smaller than what we see with the naked eye (which is in turn a bit smaller than a human hair).

Blood flow and circulation is certainly not the only human process doctors have deciphered over time. Nor has the exploration of inner structure in human beings stopped at the micrometer scale. The discovery of entirely new elements and systems has since been repeated at successively smaller scales, in humans as much as in inanimate physical systems.

Coming down in size to about a tenth of a micron—10 million times smaller than a meter—we find DNA, the fundamental building block of living beings that encodes genetic information. That size is still about 1,000 times bigger than an atom, but is nonetheless a scale where molecular physics (that is, chemistry) plays an important role. Although still not fully understood, the molecular processes occurring within DNA underlie the abundantly broad spectrum of life that covers the globe. DNA molecules contain millions of nucleotides, so the significant role of quantum mechanical atomic bonds should not be surprising.

DNA can itself be categorized on different scales. With its twisty convoluted molecular structure, the total length of human DNA can be measured in meters. But DNA strands are only about two thousandths of a micron—two nanometers wide. That’s considerably smaller than the current smallest transistor gate of a microprocessor, which is about 30 nanometers in size. A single nucleotide is only 0.33 nm long, comparable in size to a water molecule. A gene is about 1,000-100,000 nucleotides long. The most useful description of a gene will involve different types of questions than those we would ask about individual nucleotides. DNA therefore operates in different ways on different length scales, and scientists ask different questions and use different descriptions at each of them.

Biology resembles physics in the way that smaller units give rise to the structure that we see at large scales. But biology involves far more than understanding the individual elements of living systems. Biology’s goals are far more ambitious. Although ultimately we believe the laws of physics underlie the processes at work in the human body, functional biological systems are complex and intricate and often have difficult-to-anticipate consequences. Disentangling the basic units and the complicated feedback mechanisms is enormously difficult—complicated further by the combinatorics of the genetic code. Even with knowledge of the basic units, we still have the formidable task of resolving more complicated emergent science, notably that responsible for life.

Physicists too can’t always understand processes at larger scales through understanding the structure of individual subunits, but most physics systems are simpler in this respect than biological ones. Although composite structure is complex and can have very different properties than the smaller units, feedback mechanisms and evolving structure usually play less of a role. For physicists, finding the simplest, most elementary component is an important goal.

ATOMIC SCALES

As we move away from the mechanics of living systems and descend further in scale to understand basic physical elements themselves, the next length at which we will momentarily pause is the atomic scale, 100 picometers, which is about 10,000 million (10¹⁰) times smaller than a meter. The precise scale of an atom is difficult to pin down since it involves electrons that circulate around a nucleus but are never static. However, it is customary to take the average distance of the electron from the nucleus and label that as an atom’s size.

People conjure up pictures to explain physical processes on these small scales, but they are necessarily based on analogies. We have no choice but to apply descriptions we’re familiar with from our experiences at ordinary length scales in order to describe a completely different structure that exhibits strange and unintuitive behavior.

Faithfully drawing the interior of an atom is impossible with the physiology most readily at our disposal—namely, our senses and our human-sized manual dexterity. Our vision, for example, relies on phenomena made visible by light composed of electromagnetic waves. These light waves—the ones in the optical spectrum—have a wavelength that varies between about 380 and 750 nanometers. That is far larger than the size of an atom, which is only about a tenth of a nanometer. (See Figure 14.)


FIGURE 14 ] An individual atom is a mere speck relative to even the smallest wavelength of visible light.

This means that probing within the atom with visible light to try to see directly with our eyes is as impossible as threading a needle with mittens on. The wavelengths involved force us to implicitly smear over the smaller sizes that these overly extended waves could never resolve. So when we want to literally “see” quarks or even a proton, we’re asking for something intrinsically impossible. We simply don’t have the capacity to accurately visualize what is there.

But confusing our ability to picture phenomena with our confidence in their reality is a mistake that scientists cannot afford to make. Not seeing or even having a mental image doesn’t mean that we can’t deduce the physical elements or processes that are happening at these scales.

From our hypothetical vantage point on the scale of an atom, the world would appear incredible because the rules of physics are extremely different from those that apply to the scales we tick off on our measuring sticks at familiar lengths. The world of an atom looks nothing like what we think of when we visualize matter. (See Figure 15.)

Parts of the Atom


FIGURE 15 ] An atom consists of electrons orbiting a central nucleus, which consists of positively charged protons, each of charge one, and neutral neutrons, which have zero charge.

Perhaps the first and most striking observation one might make would be that the atom consists primarily of empty space.26 The nucleus, the center of an atom, is about 10,000 times smaller in radius than the electron orbits. An average nucleus is roughly 10⁻¹⁴ meters, 10 femtometers, in size. A hydrogen nucleus is about 10 times smaller than that. The nucleus is as small compared to the radius of an atom as the radius of the Sun is when compared to the size of the solar system. An atom is mostly empty. The volume of a nucleus is a mere trillionth of the volume of an atom.
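As a quick check of that last figure (my arithmetic, using the atomic and nuclear sizes quoted above), shrinking the radius by a factor of 10,000 shrinks the volume by that factor cubed:

$$\frac{V_{\text{nucleus}}}{V_{\text{atom}}} \;=\; \left(\frac{r_{\text{nucleus}}}{r_{\text{atom}}}\right)^{3} \;\approx\; \left(\frac{10^{-14}\ \text{m}}{10^{-10}\ \text{m}}\right)^{3} \;=\; 10^{-12},$$

the trillionth quoted above.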

That’s not what we observe or touch when we pound our fist on a door or drink cool liquid through a straw. Our senses lead us to think of matter as continuous. Yet on atomic scales we find that matter is mostly devoid of anything substantial. It is only because our senses average over smaller sizes that matter appears to be solid and continuous. On atomic scales, it is not.

Near emptiness is not all that is surprising about matter on the scale of an atom. What took the physics world by storm and still mystifies physicists and nonphysicists alike is that even the most basic premises of Newtonian physics break down at this tiny distance. The wave nature of matter and the uncertainty principle—key elements of quantum mechanics—are critical to understanding atomic electrons. They don’t follow simple curves describing the definite paths that we often see drawn. According to quantum mechanics, no one can measure both the location and the momentum of a particle with infinite precision, a necessary prerequisite for following an object’s path through time. Heisenberg’s uncertainty principle, developed by Werner Heisenberg in 1927, tells us that the accuracy with which position is known limits the maximum precision with which one can measure momentum.27 If electrons were to follow classical trajectories, we would know at any given time exactly where the electron is and how fast and in what direction it is moving so that we could know where it will be at any later time, contradicting Heisenberg’s principle.
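Stated quantitatively (the standard form of the relation, which the text describes only in words), the uncertainty principle says that the product of the two uncertainties can never drop below a fixed quantum of action:

$$\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2},$$

where Δx is the uncertainty in position, Δp the uncertainty in momentum, and ħ is Planck’s constant divided by 2π. Shrinking one factor necessarily inflates the other, which is why a definite classical trajectory cannot even be defined for an atomic electron.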

Quantum mechanics tells us that electrons don’t occupy fixed locations in the atom as the classical picture would assert. Instead, probability distributions tell us how likely electrons are to be found at any particular point in space, and all we know are these probabilities. We can predict the average position of an electron as a function of time, but any particular measurement is subject to the uncertainty principle.

Bear in mind that these distributions are not arbitrary. The electrons can’t have just any old energy or probability distribution. There is no good classical way to describe an electron’s orbit—it can only be described in probabilistic terms. But the probability distributions are in fact precise functions. With quantum mechanics, we can write down an equation describing the wave solution for an electron, and this tells us the probability for it to be at any given point in space.

Another property of an atom that is remarkable from the perspective of a classical Newtonian physicist is that the electrons in an atom can occupy only fixed quantized energy levels. Electron orbits depend on their energies, and those particular energy levels and the associated probabilities must be consistent with quantum mechanical rules.

The electrons’ quantized levels are essential to understanding the atom. In the early twentieth century, an important clue that the classical rules had to radically change was that classically, electrons circling a nucleus are not stable. They would radiate energy and quickly fall into the center. Not only would this be nothing like an atom, it wouldn’t permit the structure of matter that follows from stable atoms as we know them.

Niels Bohr in 1912 was faced with a challenging choice—abandon classical physics or abandon his belief in observed reality. Bohr wisely chose the former and assumed classical laws don’t apply at the small distances occupied by electrons in an atom. This was one of the key insights that led to the development of quantum physics.

Once Bohr ceded Newton’s laws, at least in this limited regime, he could postulate that electrons occupied fixed energy levels—according to a quantization condition that he proposed involving a quantity called orbital angular momentum. According to Bohr, his quantization rule applied on an atomic scale. The rules were different from those we use at macroscopic scales, such as for the Earth circulating around the Sun.

Technically, quantum mechanics still applies to these larger systems as well. But the effects are far too small to ever measure or notice. When you observe the orbit of the Earth or any macroscopic object for that matter, quantum mechanics can be ignored. The effects average out in all such measurements so that any prediction you make agrees with its classical counterpart. As discussed in the first chapter, for measurements on macroscopic scales, classical predictions generally remain extremely good approximations—so good that you can’t distinguish that quantum mechanics is in fact the deeper underlying structure. Classical predictions are analogous to the words and images on an extremely high-resolution computer screen. Underlying them are the many pixels that are like the quantum mechanical atomic substructure. But the images or words are all we generally need (or want) to see.

Quantum mechanics constitutes a change in paradigm that becomes apparent only at the atomic scale. Despite Bohr’s radical assumption, he didn’t have to abandon what was known before. He didn’t assume classical Newtonian physics was wrong. He simply assumed that classical laws cease to apply for electrons in an atom. Macroscopic matter, which consists of so many atoms that quantum effects can’t be isolated, obeys Newton’s laws, at least at the level at which anyone could measure the success of its predictions. Newton’s laws are not wrong. We don’t abandon them in the regime in which they apply. But at the atomic scale, Newton’s laws had to fail. And they failed in an observable and spectacular fashion that led to the development of the new rules of quantum mechanics.

NUCLEAR PHYSICS

As we continue our journey down in scale into the atomic nucleus itself, we will continue to see the emergence of different descriptions, different basic components, and even different physical laws. But the basic quantum mechanical paradigm will remain intact.

Inside the atom, we’ll now explore inner structure with a size of about 10 femtometers, the nuclear scale of a hundred thousandth of a nanometer. So far as we have measured to date, electrons are fundamental—that is, there don’t seem to be any smaller components of electrons. The nucleus, on the other hand, is not a fundamental object. It is composed of smaller elements, known as nucleons. Nucleons are either protons or neutrons. Protons have positive electric charge and neutrons are neutral, with neither a positive nor negative charge.

One way to understand the nature of protons and neutrons is to recognize that they are not fundamental either. George Gamow, the great nuclear physicist and science popularizer, was so excited about the discovery of protons and neutrons that he thought it was the final “other frontier”: he didn’t think any further substructure existed. In his words:

“Instead of a rather large number of ‘indivisible’ atoms of classical physics, we are left with only three essentially different entities; protons, electrons, and neutrons… Thus it seems we have actually hit the bottom in our search for the basic elements of which matter is formed.” 28

That was a little shortsighted. More precisely, it was not shortsighted enough. There does exist further substructure—more elementary components to the proton and neutron—but the more fundamental elements were challenging to find. One had to be able to study length scales smaller than the size of the proton and neutron, which required higher energies or smaller probes than existed when Gamow made his inaccurate prediction.

If we were to now enter inside the nucleus to see protons and neutrons with size about a fermi—about ten times smaller than the nucleus itself—we would encounter objects Murray Gell-Mann and George Zweig suspected existed inside nucleons. Gell-Mann creatively named these units of substructure quarks, in his telling inspired by a line from James Joyce’s Finnegans Wake (“three quarks for Muster Mark”). The up and down quarks inside a nucleon are the more fundamental objects of smaller size (the two up and one down quarks inside are shown in Figure 16) that a force called the strong nuclear force binds together to form protons and neutrons. Despite its generic name, the strong force is a specific force of nature—one that complements the other known forces of electromagnetism, gravity, and the weak nuclear force that we’ll discuss later.

The strong force is called the strong force because it is strong—that’s an actual quote from a fellow physicist. Even though it sounds pretty silly, it’s in fact true. That’s why quarks are always found bound together into objects such as protons and neutrons for which the direct influence of the strong nuclear force cancels. The force is so strong that in the absence of other influences the strongly interacting components won’t ever be found far apart.


FIGURE 16 ] The charge of a proton is carried by three valence quarks—two up quarks and a down quark.

One can never isolate a single quark. It’s as if all quarks carry a sort of glue that becomes sticky at long distances (the particles that communicate the strong force are for this reason known as gluons). You might think of an elastic band whose restoring force comes into play only when you stretch it. Inside a proton or neutron, quarks are free to move around. But trying to remove one of the quarks any significant distance away would require additional energy.

Though this description is entirely correct and fair, one should be careful in its interpretation. One can’t help but think of quarks as all bound together in a sack with some tangible barrier from which they cannot escape. In fact, one model of nuclear systems essentially treats the protons and neutrons in precisely this way. But that model, unlike others we will later encounter, is not a hypothesis for what is really going on. Its purpose was solely to make calculations in a range of distances and energies where forces are so strong that our familiar methods don’t apply.

Protons and neutrons are not sausages. There is no synthetic casing that surrounds the quarks in a proton. Protons are stable collections of three quarks held together through the strong force. Because of the strong interactions, three light quarks concertedly act as one single object, either a neutron or proton.

Another significant consequence of the strong force—and quantum mechanics—is the ready creation of additional virtual particles inside a proton or neutron—particles permitted by quantum mechanics that don’t last forever but at any given time contribute energy. The mass—and hence, à la Einstein’s E = mc², the energy—in a proton or neutron is not carried just by the quarks themselves but also by the bonds that tie them together. The strong force is like the elastic band tying together two balls that itself carries energy. “Plucking” the stored energy allows new particles to be created.

So long as the net charge of the new particles is zero, this particle creation from the energy in the proton doesn’t violate any known physical laws. For example, a positively charged proton cannot suddenly change into a neutral object when virtual particles are created.

This means that every time a quark—which is a particle that carries nonzero charge—is created, an antiquark—which is a particle identical in mass to a quark but with opposite charge—must also be formed. In fact, quark-antiquark pairs can both be created and destroyed. For example, a quark and antiquark can produce a photon (the particle that communicates the electromagnetic force), which in turn produces another particle/antiparticle pair. (See Figure 17.) Their total charge is zero, so even with pair creation and destruction, the charge inside the proton will never change.


FIGURE 17 ] Sufficiently energetic quarks and antiquarks can annihilate into energy that can, in turn, create other charged particles and their antiparticles.

In addition to quarks and antiquarks, the proton sea (that’s the technical term)—consisting of the virtual particles that are created—contains gluons as well. Gluons are the particles that communicate the strong force. They are analogous to the photon that is exchanged between electrically charged particles to create electromagnetic interactions. Gluons (there are eight different ones) act in a similar manner to communicate the strong nuclear force. They are exchanged between particles that carry the charge that the strong force acts on, and their exchange binds or repels the quarks to or from each other.

However, unlike photons, which carry no electric charge and therefore don’t directly experience the electromagnetic force, gluons themselves are subject to the strong force. So whereas photons transmit forces over enormous distances—so we can turn on a TV and get a signal generated miles away—gluons, like quarks, cannot travel far before they interact. Gluons bind objects on small scales comparable in size to a proton.

If we take a coarse-grained view of the proton and focus just on the elements carrying the proton charge, we would say that a proton is primarily composed of three quarks. However, the proton contains a lot more than the three valence quarks—the two up quarks and the lone down quark—that contribute to its charge. In addition to the three quarks responsible for a proton’s charge, inside a proton is a sea of virtual particles—that is, quark/antiquark pairs and gluons. The closer we examine a proton, the more virtual quark-antiquark pairs and gluons we would find. The exact distribution depends on the energy with which we probe it. At energies with which protons are colliding together today, we find a substantial amount of their energy is carried by virtual gluons and quarks and antiquarks of different types. They are not important for determining electric charge—the sum of the charges of all this virtual stuff is zero—but as we will see later on, they are important for predictions about proton collisions when we need to know exactly what is inside a proton and what carries its energy. (See Figure 18 for the more complicated structure inside a proton.)

More complete picture of a proton


FIGURE 18 ] The LHC collides protons together at high energy, each of which contains three valence quarks plus many virtual quarks and gluons that can also participate in the collisions.

Now that we have descended to the scale of quarks, held together by the strong nuclear force, I would like to be able to tell you what happens at yet smaller scales. Is there structure inside a quark? Or inside an electron for that matter? As of now, we have no evidence for such a thing. No experiment to date has given any evidence of further substructure. In terms of our journey inside matter, quarks and electrons are the end of the line—so far.

However, the LHC is now exploring an energy scale more than 1,000 times higher—and hence a distance more than 1,000 times smaller—than the scales associated with the proton mass. The LHC achieves its milestones by colliding together two proton beams that have been accelerated to extremely high energy—higher energy than has ever been achieved before here on Earth. The beams of protons at the LHC consist of a few thousand bunches of 100 billion highly lined-up, or collimated, protons concentrated in tiny packets that circulate in the underground tunnel. There are 1,232 superconducting magnets located around the ring to keep the protons inside the beam pipe while electric fields accelerate them to high energies. Other magnets (392 to be exact) reorient the beams so that the two beams stop streaming by each other and collide.
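A rough way to see why higher energy means smaller distance (a standard quantum mechanical rule of thumb, not a number quoted in the text) is that the distance a collision can resolve scales inversely with the energy involved:

$$d \;\sim\; \frac{\hbar c}{E}, \qquad \hbar c \;\approx\; 197\ \text{MeV}\cdot\text{fm},$$

so the roughly 1 GeV energy scale associated with the proton mass corresponds to a fraction of a fermi, and energies thousands of times higher probe distances thousands of times smaller.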

Then—and here’s where all the action happens—magnets guide the two proton beams around the ring in a precise path so that they collide in a region smaller across than the width of a human hair. When this collision occurs, some of the energy of the accelerated protons will be converted to mass—as Einstein’s famous formula, E = mc², tells us. And with these collisions and the energy they release, new elementary particles, heavier than any seen before, could be created.

When the protons meet, quarks and gluons occasionally collide with a great deal of energy in a very concentrated region—much as if you had pebbles hidden inside balloons that were smashed together. The LHC provides such high energy that in the events of interest, individual components of the colliding protons crash together. These include the two up quarks and the down quark responsible for the proton’s charge. But at LHC energies, virtual particles carry a sizable fraction of the proton’s energy as well. At the LHC, along with the three quarks contributing to the proton’s charge, particles from the virtual “sea” also collide.

And when that happens—and here is the key to all of particle physics—the numbers and types of particles can change. New results from the LHC should teach us more about smaller distances and sizes. In addition to telling us about possible substructure, it should tell us about other aspects of physical processes that could be relevant at smaller distances. LHC energies are the final short-distance experimental frontier, at least for quite some time.

BEYOND TECHNOLOGY

We’ve now finished our introductory journey to the smaller scales accessible with current or even imagined technology. However, current human limitations on our ability to explore do not constrain the nature of reality. Even if it seems that we will have a tough time developing technology to explore much smaller scales, we can still try to deduce structure and interactions at those distances through theoretical and mathematical arguments.

We’ve come a long way since the time of the Greeks. We now recognize that without experimental evidence it is impossible to be certain of what exists at these minuscule scales we would also like to understand. Nonetheless, even in the absence of measurements, theoretical clues can guide our explorations and suggest how matter and forces could behave at tinier length scales. We can investigate possibilities that could help explain and relate the phenomena that occur at measurable scales, even if the fundamental components are not accessible directly.

We don’t yet know which, if any, of our theoretical speculative ideas will turn out to be right. Yet even without direct experimental access to very small distances, the scales we have observed constrain what can consistently exist—since it is the underlying theory that has to ultimately account for what we see. That is, experimental results, even on larger distance scales, limit the possibilities and motivate us to speculate in certain specific directions.

Because we haven’t yet explored these energies, we don’t know much about them. People even speculate about the existence of a desert, a paucity of interesting lengths or energies, between those of the LHC and those applying to much shorter distances or higher energies. Probably this is lack of imagination or data at work. But for many, the next interesting scale has to do with unification.

One of the most intriguing speculations about shorter distances concerns the unification of forces at short distances. It is a concept that sparks both the scientific and the popular imagination. According to such a scenario, the world we see around us fails to reveal the fundamental underlying theory that incorporates all known forces (or, at least, all forces aside from gravity) together with its beauty and simplicity. Many physicists have earnestly searched for such unification from the time the existence of more than one force was first understood.

One of the most interesting such speculations was made by Howard Georgi and Sheldon Glashow in 1974. They suggested that even though we observe three distinct nongravitational forces with different strengths (the electromagnetic and the weak and strong nuclear forces) at low energies, only one force with a single strength will exist at much higher energies. (See Figure 19.)29 This one force was called a unified force because it encompasses the three known forces. The speculation was called a Grand Unified Theory (GUT) because Georgi and Glashow thought that was funny.

Strength of Standard Model Forces as a Function of Energy


FIGURE 19 ] At high energy, the three known nongravitational forces might have the same strength and, therefore, could possibly unify into a single force.

This possibility of the strength of forces converging seems to be more than idle speculation. Calculations using quantum mechanics and special relativity indicate it might well be the case.30 But the energy scale at which it would occur is far above the energies we can study with collider experiments. The distance at which the unified force would operate is about 10⁻³⁰ cm. Even though such a size is far removed from anything we can directly observe, we can look for indirect consequences of unification.
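Translating that distance into an energy with the same rule of thumb used earlier (my own rough conversion, not a figure given in the text) shows just how remote the unification scale is:

$$E \;\sim\; \frac{\hbar c}{d} \;\approx\; \frac{2\times10^{-14}\ \text{GeV}\cdot\text{cm}}{10^{-30}\ \text{cm}} \;\sim\; 10^{16}\ \text{GeV},$$

roughly a trillion times the energies the LHC reaches.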

One such possibility is proton decay. According to Georgi and Glashow’s theory—which introduces new interactions between quarks and leptons—protons should decay. Given the rather specific nature of their proposal, physicists could calculate the rate at which this should occur. So far, no experimental evidence for unification has been found, ruling out their specific suggestion. That doesn’t mean that unification is necessarily incorrect. The theory may be more subtle than the one they proposed.

The study of unification demonstrates how we can extend our knowledge beyond scales we directly observe. Using theory, we can try to extrapolate what we have experimentally verified to as yet inaccessible energies. Sometimes we’re lucky and clever experiments suggest themselves that allow us to test whether the extrapolation agrees with data or was somehow too naive. In the case of Grand Unified Theories, proton-decay experiments permitted scientists to indirectly study interactions at distances far too tiny for direct observation. These experiments allowed them to test the proposal. One lesson from this example is that we occasionally gain interesting insights into matter and forces and even come up with ways to extend the implications of our experiments to much higher energies and more general phenomena by speculating about distance scales that at first seem to be too remote to be relevant.

The next (and last) stop on our theoretical journey is a distance known as the Planck length, namely, 10⁻³³ cm. To give a sense of just how minuscule this length is, its size is about as small relative to a proton as a proton is relative to the width of Rhode Island. At this scale, even something as fundamental as our basic notions of space and time will probably fail. We don’t even know how to imagine a hypothetical experiment to probe distances smaller than the Planck length. It is the smallest possible scale we can imagine.
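For reference, the Planck length is not an arbitrary number; it is the unique length one can build out of the constants governing quantum mechanics, gravity, and relativity (a standard definition, added here for completeness):

$$\ell_{P} \;=\; \sqrt{\frac{\hbar G}{c^{3}}} \;\approx\; 1.6\times10^{-35}\ \text{m} \;\approx\; 10^{-33}\ \text{cm},$$

where G is Newton’s gravitational constant, ħ the reduced Planck constant, and c the speed of light.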

This lack of experimental probes of the Planck length could be more than a symptom of our limited imagination, technology, or even funding. The inaccessibility of shorter distances could be a true restriction imposed by the laws of physics. As we will see in the following chapter, quantum mechanics tells us that small probes require high energies. But once the energy trapped in a small region is too big, matter collapses into a black hole. At this point, gravity takes over. More energy then makes the black holes bigger—not smaller—much as we are accustomed to from more familiar macroscopic situations where quantum mechanics plays only a limited role. We just don’t know how to explore any distance tinier than the Planck length. More energy doesn’t help. Very likely, traditional ideas about space no longer apply at this tiny size.
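A minimal sketch of that argument, using standard formulas and ignoring factors of order one: a probe of mass m has a quantum size of roughly its Compton wavelength, ħ/mc, while a black hole of the same mass has a Schwarzschild radius of roughly Gm/c². The two become comparable at

$$\frac{\hbar}{m c} \;\sim\; \frac{G m}{c^{2}} \;\;\Rightarrow\;\; m \;\sim\; \sqrt{\frac{\hbar c}{G}} \;\approx\; 2\times10^{-8}\ \text{kg}, \qquad mc^{2} \;\approx\; 10^{19}\ \text{GeV},$$

the Planck mass and energy, whose associated length is the Planck length. Beyond that point, adding energy only grows the black hole, so no shorter distance gets resolved.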

I recently gave a lecture where, after explaining the current state of particle physics and our suggestions for the possible nature of extra dimensions, someone quoted back to me a statement I had forgotten I’d made about the possible limitations of our notion of spacetime. I was asked how I could reconcile speculations about extra dimensions with the idea of spacetime breaking down.

The speculations for the breakdown of space and possibly time apply only at the unobservably small Planck length. Since no one has observed scales smaller than 10⁻¹⁷ cm, the requirement of a nice smooth geometry at measurable distances is not violated. Even if the notion of space itself breaks down at the Planck scale, this is still much smaller than the lengths we explore. There is no inconsistency so long as a smooth recognizable structure emerges when we average over larger, observable scales. After all, different scales often exhibit very different behaviors. Einstein can talk about smooth geometries of space on large scales. But his ideas might break down at smaller scales—so long as those scales are so tiny, and their effects on measurable scales so negligible, that the new, more fundamental ingredients have no discernible impact we can observe.

Independently of whether or not spacetime breaks down, a critical feature of the Planck length that our equations certainly tell us would be true is that at this distance, gravity, whose strength is minuscule when acting on fundamental particles at the distances we can measure, would become a strong force—comparable in strength to the other forces we know. At the Planck length, our standard formulation of gravity according to Einstein’s theory of relativity would cease to apply. Unlike larger distances where we know how to make predictions that agree well with measurements, quantum mechanics and relativity are inconsistent when we apply the theories we generally use in this tiny regime. We don’t even know how to try to make predictions. General relativity is based on smooth classical spatial geometry. At the Planck length, quantum fluctuations can make a spacetime foam with too much structure for our conventional formulation of gravity to apply.

To address physical predictions at the Planck scale, we need a new conceptual framework that combines quantum mechanics and gravity into a single more comprehensive theory known as quantum gravity. The physical laws that work most effectively at the Planck scale must be very different from the ones that have proven successful on observable scales. The understanding of this scale could conceivably involve a paradigm shift as fundamental as the transition from classical to quantum mechanics. Even if we can’t make measurements at the tiniest distances, we have a chance of learning about the fundamental theory of gravity, space, and time through increasingly advanced theoretical speculations.

The most popular candidate for such a theory is known as string theory. Originally string theory was formulated as a theory that replaces fundamental particles with fundamental strings. We now know that string theory also involves fundamental objects other than strings (which we’ll learn a little more about in Chapter 17), and the name is sometimes replaced with a broader (but less well-defined) term, M-theory. This theory is currently the most promising suggestion for addressing the problem of quantum gravity.

However, string theory poses enormous conceptual and mathematical challenges. No one yet knows how to formulate string theory to answer all the questions we would want a theory of quantum gravity to address. Furthermore, the string scale of 10⁻³³ cm is likely to be beyond the reach of any experiment we can think about.

So a reasonable question is whether investigating string theory is a reasonable expenditure of time and resources. I am often asked this question. Why would anyone study a theory so unlikely to yield experimental consequences? Some physicists find mathematical and theoretical consistency reason enough. Those people think they can repeat the type of success Einstein had when he developed his general theory of relativity, based in large part on purely theoretical and mathematical investigations.

But another motivation for studying string theory—one that I think is very important—is that it can and has provided new ways of thinking about ideas that apply on measurable scales. Two of those ideas are supersymmetry and theories of extra dimensions, ideas that we will address in Chapter 17. These theories do have experimental consequences if they address problems in particle physics. In fact, if certain extra-dimensional theories prove correct and explain phenomena at LHC energies, even evidence of string theory could possibly appear at much lower energies. A discovery of supersymmetry or extra dimensions won’t be proof of string theory. But it will be a validation of the utility of working on abstract ideas, even those without direct experimental consequences. It will of course also be a testimony to the utility of experiments in probing even initially abstract-seeming ideas.