Knocking on Heaven's Door: How Physics and Scientific Thinking Illuminate the Universe and the Modern World - Lisa Randall (2011)

Part II. SCALING MATTER

Chapter 6. “SEEING” IS BELIEVING

Scientists could decipher what matter is made of only when tools were developed that let them look inside. The word “look” refers not to direct observations but to the indirect techniques that people use to probe the tiny sizes inaccessible to the naked eye.

It’s rarely easy. Yet despite the challenges and the counterintuitive results that experiments sometimes display, reality is real. Physical laws, even at tiny scales, can give rise to measurable consequences that eventually become accessible to cleverer investigations. Our current knowledge about matter and how it interacts is the culmination of many years of insight and innovation and theoretical development that permit us to consistently interpret a variety of experimental results. Through indirect observations, pioneered by Galileo centuries ago, physicists have deduced what is present at matter’s core.

We’ll now explore the current state of particle physics and the theoretical insights and experimental discoveries that have led us to where we are today. Inevitably, the description will have a rather list-like aspect to it as I enumerate the ingredients that compose the matter we know and how they were discovered. The list is a lot more interesting when we remember the very different behaviors of these diverse ingredients on different scales. The chair you are sitting on is ultimately reducible to these elements, but it’s quite a train of discoveries to get from here to there.

As Richard Feynman mischievously explained when talking about one of his theories, “If you don’t like it, go somewhere else—perhaps to another universe where the rules are simpler… I’m going to tell you what it looks like to human beings who have struggled as hard as they can to understand. If you don’t like it, that’s too bad.” 31 You may think that some of what we believe to be true is so crazy or cumbersome that you won’t want to accept it. But that won’t change the fact that it’s the way nature works.

SMALL WAVELENGTHS

Small distances seem strange because they are unfamiliar. We need tiny probes to observe what is happening on the smallest scales. The page (or screen) you are currently reading looks very different from what resides at matter’s core. That’s because the very act of seeing has to do with observing visible light. That light is emitted from electrons in orbits around nuclei at the center of atoms. As Figure 14 illustrated, the wavelength of that light is never small enough to let us probe inside nuclei.

We need to be more clever—or more ruthless, depending on how you look at it—to detect what is happening on the tiny scale of a nucleus. Small wavelengths are required. That shouldn’t be so hard to believe. Imagine a fictional wave with wavelength equal to the size of the universe. No interaction of this wave could possibly have sufficient information to locate anything in space. Unless there are smaller oscillations in this wave that can resolve structure in the universe, we would have no way, with only this enormous-wavelength wave as our guide, to determine that anything is in any particular place. It would be like covering a pile of stuff with a net and asking where your wallet is located in the mess underneath. You can’t find it unless you have enough resolution to look inside on smaller scales.

With waves, you need peaks and troughs with the right spacing—variations on the scale of whatever it is we are trying to resolve—to be able to identify where something is or what its size or shape might be. You can think of a wavelength the size of the net. If all I know is that something is inside it, I can say with certainty only that something is within a region whose size is that of the net with which I caught it. To say anything more requires either a smaller net or some other way of searching for variations on a more sensitive scale.

Quantum mechanics tells us that waves characterize the probability of finding a particle in any given location. Those waves might be waves associated with light. Or they might be the waves that quantum mechanics tells us are secretly carried by any individual particle. The wavelength of those waves tells us the possible resolution one can hope to attain when we use a particle or radiation to probe small distances.

Quantum mechanics also tells us that short wavelengths require high energies. That’s because it relates frequencies to energies, and the waves with the highest frequencies and shortest wavelengths carry the most energy. Quantum mechanics thereby connects high energies and short distances, telling us that only experiments operating at high energies can probe into the inner workings of matter. That is the fundamental reason we need machines that accelerate particles to high energy if we want to probe matter’s fundamental core.

Quantum mechanical wave relations tell us that high energies allow us to probe tiny distances and the interactions that occur there. Only with higher energies, and hence shorter wavelengths, can we study these smaller sizes. The quantum mechanical uncertainty relation, which tells us that small distances connect to large momenta, combined with the relations among energy, mass, and momentum provided by special relativity, makes these connections precise.
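The trade-off described in the last two paragraphs can be made concrete with Planck’s relation (standard formulas, not quoted from the text; the numerical value of hc is approximate):

```latex
% Planck's relation: the energy of a quantum of light of frequency \nu,
% rewritten in terms of its wavelength \lambda (c is the speed of light):
E = h\nu = \frac{hc}{\lambda},
\qquad hc \approx 1.24\ \mathrm{GeV\cdot fm}
% So resolving a proton-sized distance, \lambda \sim 1\ \mathrm{fm} = 10^{-15}\ \mathrm{m},
% requires probe energies of roughly 1 GeV -- the scale of particle accelerators.
```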

On top of that, Einstein taught us that energy and mass are interconvertible. When particles collide, their mass can turn into energy. So at higher energies, heavier matter can be produced, since E = mc². This equation means that larger energy—E—permits the creation of heavier particles with bigger mass—m. And that energy is ecumenical—capable of creating any type of particle that is kinematically accessible (which is to say light enough).

This tells us that the higher energies we currently explore are taking us to smaller sizes, and the particles that get created are our key to understanding the fundamental laws of physics that apply at these scales. Any new high-energy particles and interactions that emerge at short distances hold the clues to decoding the underpinnings of the so-called Standard Model of particle physics, which describes our current understanding of matter’s most basic elements and their interactions. We’ll now consider a few key Standard Model discoveries, and the methods we now use to advance our knowledge some more.

THE DISCOVERIES OF ELECTRONS AND QUARKS

Each of the destinations on our initial tour of the atom—the electrons circulating around a nucleus and the quarks held together by gluons inside the protons and neutrons—was experimentally discovered with successively higher-energy and hence shorter-distance probes. We’ve seen that the electrons in an atom are bound to a nucleus through the mutual attraction due to their opposite charges. The attractive force gives the bound system—the atom—lower energy than the charged ingredients in isolation. Therefore, to isolate and study electrons, someone had to add enough energy to ionize them, which is to say to free the electrons by ripping them off. Once the electron was isolated, physicists could learn more about it by studying its properties, such as its charge and its mass.

The discovery of the nucleus, the other part of the atom, was more surprising still. In an experiment analogous to particle experiments today, Ernest Rutherford and his students discovered the nucleus by shooting helium nuclei (then known only as alpha particles, since nuclei hadn’t yet been discovered) at a thin gold foil. The alpha particles turned out to have enough energy for Rutherford to identify structure inside the atom. He and his colleagues found that the alpha particles they shot at the foil sometimes scattered at much greater angles than they would have anticipated. (See Figure 20.) They expected scatterings like those from tissue paper and instead discovered ones seeming more like ricochets off marbles hidden inside. In Rutherford’s own words:

FIGURE 20. Rutherford’s experiment scattered alpha particles (which we now know to be helium nuclei) off gold foil. The unexpectedly large deflections of some of the alpha particles demonstrated the existence of concentrated masses at the centers of the atoms—atomic nuclei.

“It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you. On consideration, I realized that this scattering backward must be the result of a single collision, and when I made calculations I saw that it was impossible to get anything of that order of magnitude unless you took a system in which the greater part of the mass of the atom was concentrated in a minute nucleus. It was then that I had the idea of an atom with a minute massive centre carrying a charge.” 32

The experimental discovery of quarks inside protons and neutrons used methods in some respects similar to Rutherford’s but required even higher energies than that of the alpha particles he had used. Those higher energies required a particle accelerator that could accelerate electrons and the photons they radiated to sufficiently high energies.

The first circular particle accelerator was named a cyclotron, due to the circular paths along which the particles were accelerated. Ernest Lawrence built the first cyclotron at the University of California in 1932. It was less than a foot in diameter and was very feeble by modern standards. It produced nowhere near the energy needed to discover quarks. That milestone could happen only with a number of improvements in accelerator technology (that nicely gave rise to a couple of important discoveries along the way).

Well before quarks and the inner structure of the nucleus could be explored, Emilio Segrè and Owen Chamberlain received the 1959 Nobel Prize for their discovery of antiprotons at Lawrence Berkeley Laboratory’s Bevatron in 1955. The Bevatron was a more sophisticated accelerator than a cyclotron and could raise protons to energies more than six times their rest energy—more than enough to create proton-antiproton pairs. The proton beam at the Bevatron bombarded targets and (via the magic of E = mc²) produced exotic matter, including antiprotons and antineutrons.
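For the curious, here is a sketch (my own, not the book’s) of why roughly six times the rest energy suffices, using units where c = 1:

```latex
% Antiproton production on a fixed target: p + p \to p + p + p + \bar{p}.
% Invariant mass squared for a beam proton of total energy E hitting a proton at rest:
s = (E + m_p)^2 - |\vec{p}\,|^2 = 2 m_p^2 + 2 E m_p
% The four final-state (anti)protons require \sqrt{s} \ge 4 m_p:
2 m_p^2 + 2 E m_p \ge 16 m_p^2
\;\Longrightarrow\;
E \ge 7 m_p
% That is a beam kinetic energy of at least 6 m_p \approx 5.6\ \mathrm{GeV},
% which the Bevatron was designed to exceed.
```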

Antimatter plays a big role in particle physics, so let’s take a brief detour to explore this remarkable counterpart to the matter we observe. Because the charges of matter and antimatter particles add up to zero, matter can annihilate with its associated antimatter when they meet. For example, antiprotons—one form of antimatter—can combine with protons to produce pure energy according to Einstein’s equation E = mc².

The British physicist Paul Dirac first “discovered” antimatter mathematically in 1927 when he tried to find the equation that describes the electron. The only equation he could write down consistent with known symmetry principles implied the existence of a particle with the same mass and opposite charge—a particle that no one had ever seen before.

Dirac racked his brain before capitulating to the equation and admitting this mysterious particle had to exist. The American physicist Carl Anderson discovered the positron in 1932, verifying Dirac’s assertion that “The equation was smarter than I was.” Antiprotons, which are significantly heavier, were not discovered until more than twenty years later.

The discovery of antiprotons was important not only for establishing their existence, but also for demonstrating a matter-antimatter symmetry in the laws of physics essential to the workings of the universe. The world is, after all, made of matter, not antimatter. Most of the mass of ordinary matter is carried by protons and neutrons, not by their antiparticles. This asymmetry in matter and antimatter is critical to the world as we know it. Yet we don’t yet know how it arose.

DISCOVERY OF QUARKS

Between 1967 and 1973, Jerome Friedman, Henry Kendall, and Richard Taylor led a series of experiments that established the existence of quarks inside protons and neutrons. They did their work at a linear accelerator, which—unlike the circular cyclotrons and Bevatrons before it—accelerated electrons along a straight line. The accelerator center was named SLAC, the Stanford Linear Accelerator Center, located in Palo Alto. The electrons that SLAC accelerated radiated photons. These energetic—and hence short-wavelength—photons interacted with the quarks inside the nuclei. Friedman, Kendall, and Taylor measured the change in interaction rate as the energy of the collision increased. Without structure, the rate would have gone down. With structure, the rate still decreased, but much more slowly. As with Rutherford’s discovery of the nucleus many years before, the projectile (the photon in this case) scattered differently than it would have if the proton were a structureless blob.

Nonetheless, even with experiments performed at the requisite energy, identifying quarks wasn’t entirely straightforward. Technology and theory both had to progress to the point that the experimental signatures could be anticipated and understood. Insightful theoretical analyses by the physicists James Bjorken and Richard Feynman showed that the measured rates agreed with the predictions based on structure inside the nucleus, thereby demonstrating that structure in protons and neutrons—namely, quarks—had been discovered. Friedman, Kendall, and Taylor were awarded the Nobel Prize in 1990 for their discovery.

No one could have hoped to use their eyes to directly observe a quark or its properties. The methods were necessarily indirect. Nonetheless, measurements confirmed quarks’ existence. Agreement between predictions and measured properties, as well as the explanatory nature of the quark hypothesis in the first place, established their existence.

Physicists and engineers have over time developed different and better types of accelerators, operating on increasingly large scales and accelerating particles to ever higher energies. These bigger and better accelerators produced increasingly energetic particles that were used to probe structure at smaller and smaller distances. The discoveries they made established the Standard Model, element by element.

FIXED-TARGET EXPERIMENTS VERSUS PARTICLE COLLIDERS

The type of experiment that discovered quarks, in which a beam of accelerated electrons was aimed at stationary matter, is known as a fixed-target experiment. A single beam of electrons is directed toward matter that just sits there. The matter target is a sitting duck.

The current highest-energy accelerators are different. They involve collisions of two particle beams, both of which have been accelerated to high energy. (See Figure 21 for a comparison.) As one can imagine, those beams have to be highly focused into a small region to guarantee that any collisions can take place. This significantly reduces the number of collisions you can expect, since a beam is much more likely to interact with a chunk of matter than with another beam.

FIGURE 21. Some particle accelerators generate interactions between a beam of particles and a fixed target (top panel: “Fixed-Target Setup”). Others collide together two particle beams (bottom panel: “Particle Collider”).

However, beam-beam collisions have one big advantage. These collisions can achieve far higher energy. Einstein could have told you the reason that colliders are now favored over fixed-target experiments. It has to do with what is known as the invariant mass of the system. Although Einstein is famous for his theory of “relativity,” he thought a better name would have been “Invariantentheorie.” The real point of his quest was to find a way to avoid being misled by a particular frame of reference—to find the invariant quantities that characterize a system.

This idea is probably more familiar to you for spatial quantities such as length. The length of a stationary object doesn’t depend on how it is oriented in space. An object has a fixed size that has nothing to do with you or your observations, unlike its coordinates, which depend on an arbitrary set of axes and directions you impose.

Similarly, Einstein showed how to characterize events in a way that doesn’t depend on an observer’s orientation or motion. Invariant mass is a measure of total energy. It tells you how massive an object can be created with the energy in your system.

To determine the amount of invariant mass, one could ask this instead: if your system were sitting still—that is, if it had no overall velocity or momentum—how much energy would it contain? If a system has no momentum, Einstein’s equation E = mc² applies. Therefore, knowing the energy for a system at rest is equivalent to knowing its invariant mass. When the system is not at rest, we need to use a more complicated version of his formula that depends on the value of momentum as well as energy.
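That “more complicated version” is the relativistic energy-momentum relation (a standard formula, not spelled out in the text):

```latex
% Energy, momentum, and invariant mass for a particle or a whole system:
E^2 = (pc)^2 + (mc^2)^2
\;\Longrightarrow\;
mc^2 = \sqrt{E^2 - (pc)^2}
% E and p are the total energy and total momentum; the combination
% E^2 - (pc)^2 comes out the same in every reference frame, which is
% why m is called the invariant mass. For p = 0 it reduces to E = mc^2.
```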

Suppose we collide together two beams with the same energy and equal and opposite momentum. When they collide, the momenta add up to zero. That means that the total system is already at rest. Therefore, all the energy—the sum of the energy of the particles in the two individual beams—can be converted to mass.

A fixed-target experiment is very different. One beam has large momentum, but the target itself has none. Not all the energy is available to make new particles, because the combined system of the target and the beam particle that hit it keeps moving after the collision; some of the energy must remain as the kinetic energy of that motion. It turns out that the available energy scales only as the square root of the product of the beam energy and the target’s rest energy. That means, for example, that if we were to increase the energy of a proton beam by a factor of 100 and collide it with a proton at rest, the energy available to make new particles would increase by only a factor of 10.
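That square-root scaling is easy to check numerically. The sketch below uses natural units (c = 1, energies in GeV) and standard relativistic kinematics for proton-on-proton collisions; the function names and the 0.938 GeV proton rest energy are my additions, not the book’s:

```python
import math

M_PROTON = 0.938  # proton rest energy in GeV (approximate)

def available_energy_fixed_target(beam_energy):
    """Invariant mass (available energy) when a proton beam hits a proton at rest.

    s = 2*m^2 + 2*E_beam*m in units where c = 1; for E_beam >> m this is
    approximately sqrt(2 * E_beam * m) -- the square-root scaling in the text.
    """
    s = 2.0 * M_PROTON**2 + 2.0 * beam_energy * M_PROTON
    return math.sqrt(s)

def available_energy_collider(beam_energy):
    """Two equal beams colliding head-on: all the energy is available."""
    return 2.0 * beam_energy

# Boosting the beam energy by a factor of 100 gains only about a factor
# of 10 in available energy for a fixed target:
ratio = (available_energy_fixed_target(10000.0)
         / available_energy_fixed_target(100.0))
print(round(ratio, 2))  # close to 10, not 100
```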

This tells us there is a big difference between fixed-target and beam-beam collisions. The available energy of a beam-beam collision is far greater—not merely twice that of a beam-target collision, as you might assume. That guess would be based on Newtonian thinking, which doesn’t apply to relativistic particles traveling at nearly the speed of light; at such speeds, relativity makes the gap far larger than the simple guess. When we want to achieve high energies, we have no choice but to turn to particle colliders, which accelerate two beams of particles to high energy before colliding them together. Accelerating two beams allows for much higher energy, and hence much richer collisions.

The LHC is an example of a collider. It bangs together two beams of particles that magnets deflect so that they will be aimed toward each other. The principal parameters that determine the capabilities of a collider such as the LHC are the type of particles that collide, their energy after acceleration, and the machine’s luminosity (the intensity of the combined beams and hence the number of events that occur).

TYPES OF COLLIDERS

Once we have decided that two beams colliding can provide higher energy (and hence explore shorter distances) than fixed-target experiments, the next question is what to collide. This leads to some interesting choices. In particular, we have to decide which particles to accelerate so that they participate in the collision.

It’s a good idea to use matter that’s readily available here on Earth. In principle, we could try to collide together unstable particles, such as particles called muons that rapidly decay into electrons, or heavy quarks such as top quarks that decay into other lighter matter.

In that case, we would first have to make these particles in a laboratory since they are not readily available. But even if we could make them and accelerate them before they decayed, we’d have to ensure that the radiation from the decay could be safely diverted. None of these problems are necessarily insurmountable, particularly in the case of muons, whose feasibility as particle beams is currently under investigation. But they certainly pose additional challenges that we don’t face with stable particles.

So let’s go with the more straightforward option: stable particles available here on Earth that don’t decay. This means light particles or at least bound stable configurations of light particles such as protons. We also would want the particles to be charged, so that we can readily accelerate them with an electric field. This leaves protons and electrons as options—particles that are conveniently situated in abundance.

Which should we choose? Both have their advantages and their downsides. Electrons have the advantage that they yield nice clean collisions. After all, electrons are fundamental particles. When you collide an electron into something, the electron doesn’t partition its energy into lots of substructure. So far as we know, the electron is all there is. Because the electron doesn’t divide, we can follow very precisely what happens when it collides with anything else.

That’s not true for protons. Recall that protons are composed of three quarks bound together by the strong nuclear force with gluons exchanged among the quarks that “glue” the whole thing together, as was discussed in Chapter 5. When a proton collides at high energy, the interaction you are interested in—that could produce some heavy particle—generally involves only one individual particle inside the proton, such as a single quark.

That quark certainly won’t carry all the energy of the proton. So even though the proton might be very energetic, the quark will generally have much less energy. It can still have quite a bit of energy, just not as much as if the proton could impart all its energy into that single quark.

On top of that, collisions involving protons are very messy. That’s because the other stuff in the proton still hangs around, even if it’s not involved in the super-high-energy collision we care about. All the remaining particles still interact through strong interactions (aptly named), which means there is a flurry of activity surrounding (and obscuring) the interaction you are interested in.

So why would anyone ever want to collide a proton in that case? The reason is that the proton is heavier than an electron. In fact, the proton mass is about 2,000 times greater than that of an electron. It turns out that’s a very good thing when we try to accelerate a proton to high energy. To get to these enormous energies, electric fields accelerate particles around a ring so that they can be accelerated more and more in each successive go-round. But accelerated particles radiate, and the lighter they are, the more they do so.
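The mass dependence of that radiation is dramatic. The standard synchrotron-radiation formula (not given in the text) shows why; the numbers are approximate:

```latex
% Power radiated by a charge e moving in a circle of radius \rho at energy E:
P = \frac{e^2 c}{6\pi\varepsilon_0 \rho^2}\,\beta^4\gamma^4,
\qquad \gamma = \frac{E}{mc^2}
% At fixed E and \rho, P \propto 1/m^4. Since m_p/m_e \approx 1836,
% an electron radiates about (1836)^4 \approx 10^{13} times more power
% than a proton of the same energy -- hence circular machines favor protons.
```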

This means that even though we’d love to collide together super-high-energy electrons, this won’t happen any time soon. We can accelerate electrons to very high energies, but high-energy electrons radiate away a significant fraction of their energy when they are accelerated around a circle. (That’s why the Stanford Linear Accelerator Center [SLAC] in Palo Alto, California, which accelerated electrons, was a linear accelerator.) So in terms of pure energy and discovery potential, protons win out. Protons can be accelerated to sufficiently high energy that even their quark and gluon subcomponents can carry more energy than an accelerated electron.

In truth, physicists have learned a lot about particles from both types of colliders—those colliding protons and those colliding electrons. Colliders with an electron beam don’t operate at the lofty energies that the highest-energy proton accelerators have attained. But the experiments at colliders with electron beams have achieved measurements more precise than proton collider people could even dream about. In particular, in the 1990s, experiments performed at SLAC and also the Large Electron-Positron collider (LEP) (the blandness of the names never ceases to amuse me) at CERN achieved spectacular precision in verifying the predictions of the Standard Model of particle physics.

These precision electroweak measurement experiments exploited the many different processes that can be predicted with knowledge of the electroweak interactions. For example, they measured the weak force carriers’ masses, the rates of decay into different types of particles, and asymmetries in the forward and backward parts of the detectors that tell even more about the nature of the weak interactions.

Precision electroweak measurements explicitly apply the effective theory idea. Once physicists perform enough experiments to pin down the few parameters of the Standard Model such as the interaction strengths of each of the forces, everything else can be predicted. Physicists check for consistency of all the measurements and look for deviations that would tell us whether something is missing. All told so far, measurements indicate that the Standard Model works extraordinarily well—so well that we still don’t have the clues we need to know what lies beyond except that whatever it is, its effects at LEP energies must be small.

That tells us that getting more information about heavier particles and higher-energy interactions requires directly investigating processes at energies that are considerably higher than those that were achieved at LEP and SLAC. Electron collisions simply won’t achieve the energies we think we’ll need to pin down the question of what gives particles mass and why they are the masses they are—at least not in the near future. That will require proton collisions.

That’s why physicists decided to accelerate protons rather than electrons inside the tunnel that had been built in the 1980s to house LEP. CERN ultimately shut down LEP operations to make way for preparations for its new colossal enterprise, the LHC. Because protons don’t radiate nearly as much energy away, the LHC far more efficiently boosts them to higher energies. Its collisions are messier than those involving electrons, and experimental challenges abound. But with protons in the beam, we have a chance to attain energies high enough to directly tell us the answers we’ve been seeking for several decades.

PARTICLES OR ANTIPARTICLES?

But we still have one more question to answer before we can decide what to collide. After all, collisions involve two beams. We’ve decided that high energies mandate that one beam consist of protons. But will the other beam be made of particles—that is, protons—or their antiparticles—namely, antiprotons? Protons and antiprotons have the same mass and therefore radiate at the same rate. Other criteria must be used to decide between them.

Clearly protons are more plentiful. We don’t see too many antiprotons lying around since they would annihilate with the abundant protons in our surroundings, turning into energy or other, more elementary particles. So why would anyone even consider making beams of antiparticles? What is to be gained?

The answer could be quite a bit. First of all, acceleration is simpler since the same magnetic field can be used to direct protons and antiprotons in opposite directions. But the most important reason has to do with the particles that could be produced.

Particles and antiparticles have equal masses but opposite charges. This means that the incoming particle and antiparticle together carry exactly the same charge as pure energy carries—namely, nothing. According to E = mc², this means that a particle and its antiparticle can turn into energy, which can in turn create any other particle and antiparticle together, so long as they are not too heavy and have a strong enough interaction with the initial particle-antiparticle pair.

These particles that are created could in principle be new and exotic particles whose charges are different from those of particles in the Standard Model. A colliding particle and antiparticle have no net charge, and neither does an exotic particle plus its antiparticle. So even though the exotic particle’s charges can be different from those in the Standard Model, a particle and antiparticle together have zero charge and can in principle be produced.

Let’s apply this reasoning to electrons. Were we to collide together two particles with equal charges, such as two electrons, we could make only objects that carry the same total charge as whatever went in. The collision could produce either a single object with net charge minus two or two different objects, like electrons, that each carry a charge of minus one. That’s rather restrictive.

Colliding two particles with the same charge is very limiting. On the other hand, colliding together particles and antiparticles opens many new doors that wouldn’t be possible were we to collide only particles. Because of the greater number of possible new final states, electron-positron collisions have much more potential than electron-electron collisions. For example, collisions involving electrons and their antiparticles—namely, positrons—have produced uncharged particles like the Z gauge boson (that’s how LEP worked) as well as any particle-antiparticle pair light enough to be produced. Although we pay a steep price when we use antiparticles in the collisions—since they are so difficult to store—we win big when the new exotic particles we hope to discover have different charges than the particles we collide.

Most recently, the highest-energy colliders used one beam of protons and one beam of antiprotons. That of course required a way to make and store antiprotons. Efficiently stored antiprotons were one of CERN’s major accomplishments. Earlier on, before CERN constructed the electron-positron collider, LEP, the lab produced high-energy proton and antiproton beams.

The most important discoveries from the collision of protons and antiprotons at CERN were the electroweak gauge bosons that communicate the electroweak force for which Carlo Rubbia and Simon van der Meer received the Nobel Prize in 1984. As with the other forces, the weak force is communicated by particles. In this case they are known as the weak gauge bosons—the positively and negatively charged W and neutral Z vector bosons—and these three particles are responsible for the weak nuclear force. I still think of the Ws and the Z as the “bloody vector bosons” due to a drunken exclamation of a British physicist who lumbered into the dormitories where visiting physicists and summer students—including me—resided at the time. He was concerned about America’s dominance and was looking forward to Europe’s first major discovery. When the Ws and the Z vector bosons were discovered at CERN in the 1980s, the Standard Model of particle physics, for which the weak force was an essential component, was experimentally verified.

Critical to the success of these experiments was the method that van der Meer developed to store antiprotons, which is clearly a difficult task, since antiprotons want nothing better than to find protons with which to annihilate. In van der Meer’s process, known as stochastic cooling, the electric signals of a bunch of particles drove a device that “kicked” any particle with particularly high momentum. That eventually cooled the entire bunch: the particles no longer moved rapidly enough to immediately escape or strike the container, so even antiprotons could be stored.

The idea of a proton-antiproton collider wasn’t restricted to Europe. The highest-energy collider of this type was the Tevatron, built in Batavia, Illinois. The Tevatron reached an energy of 2 TeV (an energy equivalent to about 2,000 times the proton’s rest energy).33 Protons and antiprotons collided together to make other particles that we could study in detail. The most important Tevatron discovery was the top quark, the heaviest and the last Standard Model particle to be found.

However, the LHC is different from either CERN’s first collider or the Tevatron. (See Figure 22 for a summary of the collider types.) Rather than protons and antiprotons, the LHC collides together two proton beams. The reason the LHC chooses two proton beams over a beam of protons and another of antiprotons is subtle but worth understanding. The most opportunistic collisions are those where the net charge of the incoming particles adds up to zero. That’s the type of collision we already discussed. You can produce anything plus its antiparticle (assuming you have enough energy) when your net charge is zero. If two electrons come in, the net charge of whatever is produced would have to be minus two, which rules out a lot of possibilities. You might think colliding together two protons is an equally bad idea. After all, the net charge of two protons is two, which doesn’t seem to be a big improvement.

If protons were fundamental particles, this would be absolutely right. However, as we explored in Chapter 5, protons are made up of subunits.

A COMPARISON OF DIFFERENT COLLIDERS

../images

FIGURE 22 ] A comparison of different colliders showing their energies, what collides, and the accelerator shape.

Protons contain quarks that are bound together through gluons. Even so, if the three valence quarks—two up quarks and a down—that carry its charge were all there were inside a proton, that still wouldn’t be very good: the charges of two valence quarks never add to zero either.

However, most of the mass of the proton isn’t coming from the mass of the quarks it contains. Its mass is primarily due to the energy involved in binding the proton together. A proton traveling at high momentum contains a lot of energy. With all this energy, protons contain a sea of quarks and antiquarks and gluons in addition to the three valence quarks responsible for the protons’ charge. That is, if you were to poke a high-energy proton, you would find not only the three valence quarks, but also a sea of quarks and antiquarks and gluons whose charge adds up to zero.

Therefore, when we consider proton collisions, we have to be a little more careful in our logic than we were with electrons. The interesting events are the result of subunits colliding. The collisions involve the charges of the subunits and not the protons. Even though the sea quarks and gluons don’t contribute to the net proton charge, they do contribute to its composition. When protons collide together, it could be that one of the three valence quarks in the proton hits another valence quark and the net charge in the collision doesn’t add to zero. When the net charge of the event doesn’t vanish, interesting events involving the correct sum of charges might occasionally occur, but the collision won’t have the broad capacities that net-charge-zero collisions do.

But a lot of interesting collisions will happen because of the virtual sea, which allows a quark to meet an antiquark or a gluon to hit a gluon, yielding collisions that carry no net charge. When protons bang together, a quark inside one proton might hit an antiquark inside the other, even if that is not what happens most of the time. All of the possible processes that can happen, including those from the collision of the sea particles, play a role when we ask what happens at the LHC. These sea collisions in fact become more and more likely as the protons are accelerated to higher energy.

The total proton charge doesn’t determine the particles that get made, since the rest of the proton just goes forward, avoiding the collision. The pieces of the protons that don’t collide carry away the rest of the net proton charges, which just disappear down the beam pipe. This was the subtle answer to the question the Paduan mayor asked, which was where the proton charges go during an LHC collision. It has to do with the composite nature of the proton and the high energy that guarantees that only the smallest elements we know of—quarks and gluons—directly collide.

Because only pieces of the proton collide and those pieces can be virtual particles that collide with net zero charge, the choice of proton-proton versus proton-antiproton collider is not so obvious. Whereas in the past, it was worth the sacrifice at lower-energy colliders to make antiprotons in order to guarantee interesting events, at LHC energies that’s not such an obvious choice. At the high energies the LHC will achieve, a significant fraction of the energy of the proton is carried by sea quarks, antiquarks, and gluons.

LHC physicists and engineers made the design choice to collide together two proton beams, rather than a proton and an antiproton beam. 34 This makes generating high luminosity—that is, a higher number of events—a far more accessible goal. It’s considerably easier to make proton beams than antiproton beams.

So—rather than a proton-antiproton collider—the LHC is a proton-proton collider. With its many collisions—more readily achievable with protons colliding with protons—it has enormous potential.