Why Does E=mc²? (And Why Should We Care?) - Brian Cox, Jeffrey R. Forshaw (2009)

Chapter 7. The Origin of Mass

The discovery of E = mc² marked a turning point in the way physicists viewed energy, for it taught us to appreciate that there is a vast latent energy store locked away inside mass itself. It is a store of energy much greater than anyone had previously dared imagine: The energy locked away in the mass of a single proton is approaching 1 billion times what is liberated in a typical chemical reaction. At first sight it seems we have the solution to the world’s energy problems, and to a degree that may well be the case in the long term. But there is a fly in the ointment, and a big one too: It is very hard to destroy mass completely. In the case of a nuclear fission power plant, only a very tiny fraction of the original fuel is actually destroyed; the rest is converted into lighter elements, some of which may be highly toxic waste products. Even within the sun, fusion processes are remarkably ineffective at converting mass into energy, and this is not only because the fraction of mass that is destroyed is very small: For any particular proton, the chances of fusion ever taking place are exceedingly remote because the initial step of converting a proton into a neutron is an incredibly rare occurrence—so rare, in fact, that it takes around 5 billion years on average before a proton in the core of the sun fuses with another proton to make a deuteron, thereby triggering the release of energy. Actually, the process would never even occur if it weren’t for the fact that the quantum theory reigns supreme at such small distances: In the pre-quantum worldview, the sun is simply not hot enough to push the protons close enough together for fusion to take place—it would have to be around 1,000 times hotter than its current core temperature of 10 million degrees. When the British physicist Sir Arthur Eddington first proposed that fusion might be the power source of the sun in 1920, he was quickly made aware of this potential problem with his theory.
Eddington was quite sure that hydrogen fusion into helium was the power source, however, and that an answer to the conundrum of the low temperature would soon be found. “The helium which we handle must have been put together at some time and some place,” he said. “We do not argue with the critic who urges that the stars are not hot enough for this process; we tell him to go and find a hotter place.”
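To put a rough number on the “approaching 1 billion times” claim: applying E = mc² to a single proton, whose mass is about 1.7 × 10⁻²⁷ kg, gives

$$
E = mc^{2} \approx \left(1.7\times10^{-27}\,\mathrm{kg}\right)\times\left(3.0\times10^{8}\,\mathrm{m/s}\right)^{2} \approx 1.5\times10^{-10}\,\mathrm{J},
$$

whereas a typical chemical reaction liberates a few electron volts per molecule, which is around 10⁻¹⁸ joules. The ratio between the two is a few hundred million, which is where the “approaching 1 billion” comes from.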

So ponderous is the conversion of protons into neutrons that, kilogram for kilogram, the sun is several thousand times less efficient than the human body at converting mass to energy. One kilogram of the sun generates only 1/5,000 of a watt of power on average, whereas the human body typically generates somewhat more than 1 watt per kilogram. The sun is of course very big, which more than makes up for its relative inefficiency.
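That figure of 1/5,000 of a watt per kilogram is easy to check from two well-measured numbers, the sun’s total power output and its mass:

$$
\frac{L_{\odot}}{M_{\odot}} \approx \frac{3.8\times10^{26}\,\mathrm{W}}{2.0\times10^{30}\,\mathrm{kg}} \approx 1.9\times10^{-4}\,\mathrm{W/kg} \approx \frac{1}{5{,}000}\,\mathrm{W/kg}.
$$

A resting human, by contrast, gives off heat at a rate of roughly 100 watts from roughly 80 kilograms of body mass, comfortably more than 1 watt per kilogram.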

As we have been so keen to emphasize in this book, nature works according to laws. So it will not do to get too excited about an equation that tells us, as E = mc² does, about what might possibly happen. There is a world of difference between our imagination and what actually happens, and although E = mc² excites us with its possibilities, we must still understand just how it is that the laws of physics allow mass to be destroyed and energy released. Certainly the equation itself does not logically imply that we have a right to convert mass to energy at will.

One of the wonderful developments in physics over the past hundred years or so has been the realization that we appear to need only a handful of laws to explain pretty much all of physics—at least in principle. Newton seemed to have achieved that goal when he wrote down his laws of motion way back in the late seventeenth century, and for the next two hundred years there was little scientific evidence to the contrary. Newton himself was rather more modest. He once said, “I was like a boy playing on the sea-shore, and diverting myself now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me,” which beautifully captures the modest wonder that time spent doing physics can generate. Faced with the beauty of nature, it seems hardly necessary, not to mention foolhardy, to lay claim to having found the ultimate theory. Notwithstanding this appropriate philosophical modesty about the scientific enterprise, the post-Newton worldview held that everything might be made up of little parts that dutifully obeyed the laws of physics as articulated by Newton. There were admittedly some apparently minor unanswered questions: How do things actually stick together? What are the tiny little parts actually made of? But few people doubted that Newton’s theory sat at the heart of everything—the rest was presumed to be a matter of filling in the details. As the nineteenth century progressed, however, new phenomena were observed that defied description in Newtonian terms and eventually opened the doors to Einstein’s relativity and the quantum theory. Newton’s theory was duly overturned or, more accurately, shown to be an approximation to a more accurate view of nature, and one hundred years later we sit here again, perhaps ignoring the lessons of the past and claiming that we (almost) have a theory of all natural phenomena. We may well be wrong again, and that would be no bad thing.
It is worth remembering not only that scientific hubris has often been shown to be folly in the past, but also that the perception that we somehow know enough, or even all there is to know, about the workings of nature has been and will probably always be damaging to the human spirit. In a public lecture in 1810, Humphry Davy put it beautifully: “Nothing is so fatal to the progress of the human mind as to suppose our views of science are ultimate; that there are no new mysteries in nature; that our triumphs are complete; and that there are no new worlds to conquer.”

Perhaps the whole of physics as we know it represents only the tip of the iceberg, or maybe we really are closing in on a “theory of everything.” Whichever is the case, one thing is certain: We currently have a theory that is demonstrably proven, after a vast and painstaking effort by thousands of scientists around the world, to work across a very broad range of phenomena. It is an astonishing theory, for it unifies so much, yet its central equation can be written on the back of an envelope.

$$
\begin{aligned}
\mathcal{L} = {} & -\tfrac{1}{4}\mathbf{W}_{\mu\nu}\cdot\mathbf{W}^{\mu\nu} - \tfrac{1}{4}B_{\mu\nu}B^{\mu\nu} - \tfrac{1}{4}\mathbf{G}_{\mu\nu}\cdot\mathbf{G}^{\mu\nu} \\
& + \bar{\psi}\gamma^{\mu}\left(i\partial_{\mu} - \tfrac{g}{2}\boldsymbol{\tau}\cdot\mathbf{W}_{\mu} - \tfrac{g'}{2}Y B_{\mu} - \tfrac{g_s}{2}\boldsymbol{\lambda}\cdot\mathbf{G}_{\mu}\right)\psi \\
& + \left|\left(i\partial_{\mu} - \tfrac{g}{2}\boldsymbol{\tau}\cdot\mathbf{W}_{\mu} - \tfrac{g'}{2}Y B_{\mu}\right)\phi\right|^{2} - V(\phi) \\
& - \left(\bar{\psi}\,\Gamma\,\psi\,\phi + \text{h.c.}\right)
\end{aligned}
$$
We’ll call this central equation the master equation, and it lies at the heart of what is now known as the Standard Model of Particle Physics. Although it is unlikely to mean much to most readers at first sight, we can’t resist showing it above.

Of course, only professional physicists are going to know what’s going on in detail in the equation, but we did not show it for them. First, we wanted to show one of the most wonderful equations in physics—in a moment we will spend quite some time explaining why it is so wonderful. But also it really is possible to get a flavor of what is going on just by talking about the symbols without knowing any mathematics at all. Let us warm up by first describing the scope of the master equation: What is its job? What does it do? Its job is to specify the rules according to which every particle in the entire universe interacts with every other particle. The sole exception is that it does not account for gravity, and that is much to everyone’s chagrin. Gravity notwithstanding, its scope is still admirably ambitious. Figuring out the master equation is without doubt one of the great achievements in the history of physics.

Let’s be clear what we mean when two particles interact. We mean that something happens to the motion of the particles as a result of their interaction with each other. For example, two particles could scatter off each other, changing direction as they do so. Or perhaps they might spin into orbit around each other, each trapping the other into what physicists call a “bound state.” An atom is an example of such a thing, and in the case of hydrogen, a single electron and a single proton are bound together according to the rules laid down in the master equation. We heard a lot about binding energy in the previous chapter, and the rules for how to calculate the binding energy of an atom, molecule, or atomic nucleus are contained in the master equation. In a sense, knowing the rules of the game means we are describing the way the universe operates at a very fundamental level. So what are the particles out of which everything is made, and just how do they interact with each other?

The Standard Model takes as its starting point the existence of matter. More precisely, it assumes the existence of six types of “quark,” three types of “charged lepton,” of which the electron is one, and three types of “neutrino.” You can see the matter particles as they appear in the master equation: They are denoted by the symbol ψ (pronounced “psi”). For every particle there should also exist a corresponding antiparticle. Antimatter is not the stuff of science fiction; it is a necessary ingredient of the universe. It was British theoretical physicist Paul Dirac who first realized the need for antimatter in the late 1920s when he predicted the existence of a partner to the electron called the positron, which should have exactly the same mass but opposite electrical charge. We have met positrons before as the byproducts of the process whereby two protons fuse to make the deuteron. One of the wonderfully convincing features of a successful scientific theory is its ability to predict something that has never before been seen. The subsequent observation of that “something” in an experiment provides compelling evidence that we have understood something real about the workings of the universe. Taking the point a little further, the more predictions a theory can make, then the more impressed we should be if future experiments vindicate the theory. Conversely, if experiments do not find the thing that is predicted, then the theory cannot be right and it needs to be ditched. There is no room for debate in this kind of intellectual pursuit: Experiment is the final arbiter. Dirac’s moment of glory came just a few years later when Carl Anderson made the first direct observations of positrons using cosmic rays. For their efforts, Dirac shared the 1933 Nobel Prize and Anderson the 1936 prize. Esoteric though the positron might appear to be, its existence is today used routinely in hospitals all over the world. 
PET scanners (short for “positron emission tomography”) exploit positrons to allow doctors to construct three-dimensional maps of the body. It is not likely that Dirac had medical imaging applications in mind when he was wrestling with the idea of antimatter. Once again it seems that understanding the inner workings of the universe turns out to be useful.

There is one other particle that is presumed to exist, but it would be to rush things to mention it just yet. It is represented by the Greek symbol φ (pronounced “phi”) and it is lurking on the third and fourth lines of the master equation. Apart from this “other particle,” all of the quarks, charged leptons, and neutrinos (and their antimatter partners) have been seen in experiments. Not with human eyes, of course, but most recently with particle detectors, akin to high-resolution cameras that can take a snapshot of the elementary particles as they fleetingly come into existence. Very often, spotting one of them has won a Nobel Prize. The last to be discovered was the tau neutrino in the year 2000. This ghostly cousin of the electron neutrinos that stream out of the sun as a result of the fusion process completed the twelve known particles of matter.

The lightest of the quarks are called “up” and “down,” and protons and neutrons are built out of them. Protons are made mainly of two up quarks and one down, while neutrons are made from two downs and one up. Everyday matter is made of atoms, and atoms consist of a nuclear core, made from protons and neutrons, surrounded at a relatively large distance by some electrons. As a result, up and down quarks, along with the electrons, are the predominant particles in everyday matter. By the way, the names of the particles have absolutely no technical significance at all. The word “quark” was borrowed by the American physicist Murray Gell-Mann from Finnegans Wake, the novel by James Joyce. Gell-Mann needed three quarks to explain the then-known particles, and a little passage from Joyce seemed appropriate:

Three quarks for Muster Mark! 
Sure he has not got much of a bark 
And sure any he has it’s all beside the mark.

Gell-Mann has since written that he originally intended the word to be pronounced “qwork,” and in fact had the sound in his mind before he came across the Finnegans Wake quotation. Since “quark” in this rhyme is clearly intended to rhyme with “Mark” and “bark,” this proved somewhat problematic. Gell-Mann therefore decided to argue that the word may mean “quart,” as in a measure of drink, rather than the more usual “cry of a gull,” thereby allowing him to keep his original pronunciation. Perhaps we will never really know how to pronounce it. The discovery of three more quarks, culminating in the top quark in 1995, has served to render the etymology even more inappropriate, and perhaps should serve as a lesson for future physicists who wish to seek obscure literary references to name their discoveries.

Despite his naming tribulations, Gell-Mann was proved correct in his hypothesis that protons and neutrons are built of smaller objects, when the quarks were finally glimpsed at a particle accelerator in Stanford, California, in 1968, four years after the original theoretical prediction. Both Gell-Mann and the experimenters who uncovered the evidence were subsequently awarded the Nobel Prize for their efforts.

Apart from the matter particles that we have just been talking about, and the mysterious φ, there are some other particles we need to mention. They are the W and Z particles, the photon and the gluon. We should say an introductory word or two about their role in affairs. These are the particles that are responsible for the interactions between all the other particles. If they did not exist, then nothing in the universe would ever interact with anything else. Such a universe would therefore be an astonishingly dull place. We say that their job is to carry the force of interaction between the matter particles. The photon is the particle responsible for carrying the force between electrically charged particles like the electrons and quarks. In a very real sense it underpins all of the physics uncovered by Faraday and Maxwell and, as a bonus, it makes up visible light, radio waves, infrared and microwaves, X-rays, and gamma rays. It is perfectly correct to imagine a stream of photons being emitted by a lightbulb, bouncing off the page of this book and streaming into your eyes, which are nothing more than sophisticated photon detectors. A physicist would say that the photon mediates the electromagnetic force. The gluon is not as pervasive in everyday life as the ubiquitous photon, but its role is no less important. At the core of every atom lies the atomic nucleus. The nucleus is a ball of positive electric charge (recall that the protons are all electrically charged, while the neutrons are not) and, in a manner analogous to what happens when you try to push two like poles of a magnet together, the protons all repel each other as a result of the electromagnetic force. They simply do not want to stick together and would much rather fly apart. Fortunately, this does not happen, and atoms exist. The gluon mediates the force that “glues” together the protons inside the nucleus, hence the silly name. 
The gluon is also responsible for holding the quarks together inside the protons and neutrons. This force has to be strong enough to overcome the electromagnetic force of repulsion between the protons, and for that reason it is called the strong force. We are really not covering ourselves in glory in the naming-stakes.

The W and Z particles can be bundled together for our purposes. Without them the stars would not shine. The W particle in particular is responsible for the interaction that turns a proton into a neutron during the formation of the deuteron in the core of our sun. Turning protons into neutrons (and vice versa) is not the only thing the weak force does. It is responsible for hundreds of different interactions among the elementary particles of nature, many of which have been studied in such experiments as those carried out at CERN. Apart from the fact that the sun shines, the W and Z are rather like the gluon in that they are not so apparent in everyday life. The neutrinos only ever interact via the W and Z particles and because of that they are very elusive indeed. As we saw in the last chapter, many billions of them are streaming through your head every second, and you don’t feel a thing because the force carried by the W and Z particles is extremely weak. You’ve probably already guessed that we’ve named it the weak force.

So far we have done little more than trot off a list of which particles “live” in the master equation. The twelve matter particles must be put into the theory by hand, and we don’t really know why there are twelve of them. We do have evidence, from observations made at CERN in the 1990s of the way Z particles decay into neutrinos, that there are no more than twelve, but since it seems necessary to have only four (the up and down quarks, the electron, and the electron neutrino) to build a universe, the existence of the other eight is a bit of a mystery. We suspect that they played an important role in the very early universe, but exactly how they have been or are involved in our existence today is something to be added to the list of big unanswered questions in physics. Humphry Davy can rest easy for the moment.

As far as the Standard Model goes, the twelve are all elementary particles, by which we mean that the particles cannot be split up into smaller parts; they are the ultimate building blocks. That does seem to go against the grain of common sense—it seems perfectly natural to suppose that a little particle could, in principle, be chopped in half. But quantum theory doesn’t work like that—once again our common sense is not a good guide to fundamental physics. As far as the Standard Model goes, the particles have no substructure. They are said to be “pointlike” and that is the end of the matter. In due course, it might well turn out that an experiment reveals that quarks can be split into smaller parts, but the point is that it does not have to be like that; pointlike particles could be the end of the story and questions of substructure might be meaningless. In short, we have a whole bunch of particles that make up our world and the master equation is the key to understanding how they all interact with each other.

One subtlety we haven’t mentioned is that although we keep speaking of particles, it really is something of a misnomer. These are not particles in the usual sense of the word. They don’t go around bouncing off each other like miniature billiard balls. Instead they interact with each other much more like the way surface waves can interact to produce shadows on the bottom of a swimming pool. It is as if the particles have a wavelike character while remaining particles nonetheless. This is again a very counterintuitive picture and it arises out of the quantum theory. It is the precise nature of those wavelike interactions that is rigorously (i.e., mathematically) specified by the master equation. But how did we know what to write down when we wrote the master equation? According to what principles does it arise? Before tackling these obviously very important questions, let’s look a little more deeply at the master equation and try to gain some insight into what it actually means.

The first line represents the kinetic energy carried by the W and Z particles, the photon and the gluon, and it tells us how they interact with each other. We didn’t mention that possibility yet but it is there: Gluons can interact with other gluons and W and Z particles can interact with each other; the W can also interact with the photon. Missing from the list is the possibility that photons can interact with photons, because they do not interact with each other. It is fortunate that they don’t, because if they did it would be very difficult to see things. In a sense it is a remarkable fact that you can read this book. The remarkable thing is that the light coming from the page does not get bounced off-track on the way to your eyes by all the light that cuts across it from all the other things around you, things you could see if you turned your head. The photons literally slip past, oblivious to each other.

The second line of the master equation is where much of the action is. It tells us how every matter particle in the universe interacts with every other one. It contains the interactions that are mediated by the photons, the W and Z particles, and the gluons. The second line also contains the kinetic energies of all the matter particles. We’ll leave the third and fourth lines for the time being.

As we have stressed, buried within the master equation are, bar gravity, all the fundamental laws of physics we know of. The law of electrostatic repulsion, as quantified by Charles-Augustin de Coulomb in the late eighteenth century, is in there (lurking in the first two lines), as is the entirety of electricity and magnetism, for that matter. All of Faraday’s understanding and Maxwell’s beautiful equations just appear when we “ask” the master equation how the particles with electric charge interact with each other. And of course, the whole structure rests firmly on Einstein’s special theory of relativity. In fact, the part of the Standard Model that explains how light and matter interact is called quantum electrodynamics. The “quantum” reminds us that Maxwell’s equations had to be modified by the quantum theory. The modifications are usually very tiny and lead to subtle effects that were first explored in the middle of the twentieth century by Richard Feynman and others. As we have seen, the master equation also contains the physics of the strong and weak forces. The properties of these three forces of nature are specified in all of their details, which means that the rules of the game are laid out with mathematical precision and without ambiguity or redundancy. So, apart from gravity, we seem to have something approaching a grand unified theory. It is certainly the case that no one has ever found any evidence anywhere in any experiment or through any observation of the cosmos that there is a fifth force at work in the universe. Most everyday phenomena can be explained pretty thoroughly using the laws of electromagnetism and gravity. The weak force keeps the sun burning but otherwise is not much experienced on Earth in everyday life, and the strong force keeps atomic nuclei intact but extends barely outside of the nucleus, so its immense strength does not reach out into our macroscopic world.
The solidity of such things as tables and chairs is an illusion provided by the electromagnetic force. In reality, matter is mainly empty space. Imagine zooming in on an atom so that the nucleus is the size of a pea. The electrons might be grains of sand whizzing around at high speeds a kilometer or so away—the rest is emptiness. The “grain of sand” analogy is stretching the point a little, for we should remember that they act rather more like waves than grains of sand, but the point here is to emphasize the size of the atom relative to the size of the nucleus at its core. Solidity arises when we try to push the cloud of electrons whizzing around the nucleus through the cloud of a neighboring atom. Since the electrons are electrically charged, the clouds repel and prevent the atoms from passing through each other, even though they are largely empty space. A big clue to the emptiness of matter comes when we look through a glass window. Although it feels solid, light has no trouble passing through, allowing us to see the outside world. In a sense, the real surprise is why a block of wood is opaque rather than transparent!
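The pea-and-sand picture follows from two rough sizes: an atom is around 10⁻¹⁰ m across, its nucleus around 10⁻¹⁵ m, a ratio of about 100,000. Scaling the nucleus up to a pea of radius 5 mm puts the electrons at roughly

$$
5\times10^{-3}\,\mathrm{m}\times\frac{10^{-10}\,\mathrm{m}}{10^{-15}\,\mathrm{m}} = 5\times10^{-3}\,\mathrm{m}\times10^{5} = 500\,\mathrm{m},
$$

half a kilometer or so of essentially empty space between the nucleus and its electrons.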

It is certainly impressive that we can shoehorn so much physics into one equation. It speaks volumes for Wigner’s “unreasonable effectiveness of mathematics.” Why should the natural world not be far more complex? Why do we have a right to condense so much physics into one equation like that? Why should we not need to catalog everything in huge databases and encyclopedias? Nobody really knows why nature allows itself to be summarized in this way, and it is certainly true that this apparent underlying elegance and simplicity is one of the reasons why many physicists do what they do. While reminding ourselves that nature may not continue to submit itself to this wonderful simplification, we can at least for the moment marvel at the underlying beauty we have discovered.

Having said all that, we are still not done. We haven’t yet mentioned the crowning glory of the Standard Model. Not only does it include within it the electromagnetic, strong, and weak interactions, but it also unifies two of them. Electromagnetic phenomena and weak interaction phenomena at first sight appear to have nothing to do with each other. Electromagnetism is the archetypal real-world phenomenon for which we all have an intuitive feel, and the weak force remains buried in a murky sub-nuclear world. Yet remarkably the Standard Model tells us that they are in fact different manifestations of the same thing. Look again at the second line of the master equation. Without knowing any mathematics, you can “see” the interactions between matter particles. The portions of the second line involving W, B, and G (for gluon) are sandwiched between two matter particles, ψ, and that means that here are the bits of the master equation that tell us how matter particles “couple” with the force mediators, but with a punch line. The photon lives partly in the symbol “W” and partly in “B,” and that is where the Z lives too! The W particle lives entirely in “W.” It is as if the mathematics regards the fundamental objects as W and B, but they mix up to conjure the photon and the Z. The result is that the electromagnetic force (mediated by the photon) and the weak force (mediated by the W and Z particles) are intertwined. In practice, it means that properties measured in experiments on electromagnetic phenomena should be related to properties measured in experiments on weak phenomena. That is a very impressive prediction of the Standard Model. And it was a prediction: The architects of the Standard Model, Sheldon Glashow, Steven Weinberg, and Abdus Salam, shared a Nobel Prize for their efforts, for their theory was able to predict the masses of the W and Z particles well before they were discovered at CERN in the 1980s. The whole thing hangs together beautifully.
But how did Glashow, Weinberg, and Salam know what to write down? How did they come to realize that “W and B mix up to produce the photon and the Z”? To answer that question is to catch a glimpse of the beautiful heart of modern particle physics. They did not simply guess, they had a big clue: Nature is symmetrical.

Symmetry is evident all around us. Catch a snowflake in your hand and look closely at this most beautiful of nature’s sculptures. Its patterns repeat in a mathematically regular way, as if reflected in a mirror. More mundanely, a ball looks unchanged as you turn it around, and a square can be flipped along its diagonal or along an axis that slices through its center without changing its appearance. In physics, symmetry manifests in much the same way. If we do something to an equation but the equation doesn’t change, then the thing we did is said to be a symmetry of the equation. That’s a little abstract, but remember that equations are the way physicists express how real things relate to one another. A simple but important symmetry possessed by all of the important equations in physics expresses the fact that if we pick up an experiment and put it on a moving train, then, provided the train isn’t accelerating, the experiment will return the same results. This idea is familiar to us: It is Galileo’s principle of relativity that lies at the heart of Einstein’s theory. In the language of symmetry, the equations describing our experiment do not depend on whether the experiment is sitting on the station platform or onboard the train, so the act of moving the experiment is a symmetry of the equations. We have seen that this simple fact ultimately led Einstein to discover his theory of relativity. That is often the case: Simple symmetries can lead to profound consequences.

We’re ready to talk about the symmetry that Glashow, Weinberg, and Salam exploited when they discovered the Standard Model of particle physics. The symmetry has a fancy name: gauge symmetry. So what is a gauge? Before we attempt to explain what it is, let’s just say what it does for us. Let’s imagine we are Glashow or Weinberg or Salam, scratching our heads as we look for a theory of how things interact with other things. We’ll start by deciding we are going to build a theory of tiny, indivisible particles. Experiment has told us which particles exist, so we’d better have a theory that includes them all; otherwise, it will be only a half-baked theory. Of course, we could scratch our heads even more and try to figure out why those particular particles should be the ones that make up everything in the universe, or why they should be indivisible, but that would be a distraction. In fact, they are two very good questions to which we still do not have the answers. One of the qualities of a good scientist is to select which questions to ask in order to proceed, and which questions should be put aside for another day. So let’s take the ingredients for granted and see if we can figure out how the particles interact with each other. If they did not interact with each other, then the world would be very boring—everything would pass through everything else, nothing would clump together, and we would never get nuclei, atoms, animals, or stars. But physics is so often about taking small steps, and it is not so hard to write down a theory of particles when they do not interact with each other—we just get the second line of the master equation with the W, B, and G bits scratched out. That’s it—a quantum theory of everything but without any interactions. We have taken our first small step. Now here comes the magic. We shall demand that the world, and therefore our equation, have gauge symmetry.
The consequence is astonishing: The remainder of the second line and the whole of the first line appear “for free.” In other words, we are mandated to modify the “no interactions” version of the theory if we are to satisfy the demands of gauge symmetry. Suddenly we have gone from the most boring theory in the world to one in which the photon, W, Z, and gluon exist and, moreover, they are responsible for mediating all of the interactions between the particles. That is to say, we have arrived at a theory that has the power to describe the structure of atoms, the shining of the stars, and ultimately the assembly of complex objects like human beings, all through the application of the concept of symmetry. We have arrived at the first two lines of our theory of nearly everything. All that remains is to explain what this miraculous symmetry actually is, and then those last two lines.

The symmetry of a snowflake is geometrical and you can see it with your eyes. The symmetry behind Galileo’s principle of relativity isn’t something you can see with your eyes, but it isn’t too hard to comprehend even if it is abstract. Gauge symmetry is rather like Galileo’s principle in that it is abstract, although with a little imagination it is not too hard to grasp. To help tie together the descriptions we offer and the mathematical underpinnings, we have been dipping into the master equation. Let’s do it again. We said that the matter particles are represented by the Greek symbol ψ in the master equation. It’s time now to delve just a little deeper. ψ is called a field. It could be the electron field, or an up-quark field, or indeed any of the matter particle fields in the Standard Model. Wherever it is biggest, that’s where the particle is most likely to be. We’ll focus on electrons for now, but the story runs just the same for all the other particles, from quarks to neutrinos. If the field is zero someplace, then the particle will not be found there. You might even want to imagine a real field, one with grass on it. Or perhaps a rolling landscape would be better, with hills and valleys. Where the hills are, the field is biggest, and in the valleys it is smallest. We are encouraging you to conjure up, in your mind’s eye, an imaginary electron field. It might be surprising that our master equation is so noncommittal. It doesn’t work with certainties and we cannot even track the electron around. All we can do is say that it is more likely to be found over here (where the mountain is) and less likely to be found over there (at base camp in the valley). We can put definite numbers on the chances of finding the electron to be here or there, but that is as good as it gets. 
This vagueness in our description of the world at the very smallest distance scales occurs because quantum theory reigns supreme there, and quantum theory deals only in the odds of things happening. There really does appear to be a fundamental uncertainty built into concepts such as position and momentum at tiny distances. Incidentally, Einstein really did not like the fact that the world should operate according to the laws of probability and it led him to utter his famous remark that “God does not play dice.” Nevertheless, he had to accept that the quantum theory is extremely successful. It explains all the experiments we have conducted in the subatomic world, and without it we would have no idea how the microchips inside a modern computer work. Maybe in the future someone will figure out an even better theory, but for now quantum theory constitutes our best effort. As we have been at pains to point out throughout this book, there is absolutely no reason why nature should work according to our common-sense rules when we venture to explain phenomena outside of our everyday experience. We evolved to be big-world mechanics, not quantum mechanics.

Returning to the task at hand, since quantum theory defines the rules of the game, we are obliged to talk of electron fields. But having specified our field and laid out the landscape, we are not quite done. The mathematics of quantum fields has a surprise lurking. There is some redundancy. For every point on the landscape, be it hill or valley, the mathematics says that we must specify not only the value of the field at a particular point (say, the height above sea level in our real-field analogy), corresponding to the probability that a particle will be found there, but we need also to specify something called the “phase” of the field. The simplest picture of a phase is to imagine a clock face or a dial (or a gauge) with only one clock hand. If the hand points to 12 o’clock, then that is one possible phase, or if it points to half-past, then that would be a different phase. We have to imagine placing a tiny clock face at each and every point on our landscape, with each one telling us the phase of the field at that point. Of course, these are not real clocks (and they certainly do not measure time). The existence of the phase is something that was familiar to quantum physicists well before Glashow, Weinberg, and Salam came along. More than that, everyone knew that although the relative phase between different points of the field matters, the actual value does not. For example, you could wind all of the tiny clocks forward by ten minutes and nothing would change. The key is that you must wind every clock by the same amount. If you forget to wind one of them, then you will be describing a different electron field. So there appears to be some redundancy in the mathematical description of the world.
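The clock-face picture can be made concrete with a toy calculation. The sketch below (our own illustration, not anything taken from the master equation) represents an electron field as a complex number at each point: its size sets the odds of finding the particle there, and its angle is the clock hand. Winding every clock forward by the same amount changes nothing observable; winding only some of them changes the relative phases and so describes a different field:

```python
import numpy as np

# Toy electron "field" on a 1-D line: a complex number at each point.
# |psi|^2 gives the relative chance of finding the particle there,
# and the complex angle is the "clock hand" (phase) at that point.
x = np.linspace(-5, 5, 200)
psi = np.exp(-x**2) * np.exp(1j * 0.3 * x)   # some magnitude and some phase

prob_before = np.abs(psi)**2

# "Wind every clock forward by the same amount": a global phase shift.
theta = 0.7
psi_shifted = np.exp(1j * theta) * psi
prob_after = np.abs(psi_shifted)**2

# The observable probabilities are untouched...
assert np.allclose(prob_before, prob_after)

# ...but winding only SOME of the clocks changes the relative phases,
# which describes a genuinely different electron field.
psi_messed = psi.copy()
psi_messed[:100] *= np.exp(1j * theta)       # wind only the left half
relative_phase = np.angle(psi_messed[99] / psi_messed[100])
print(round(relative_phase - np.angle(psi[99] / psi[100]), 3))   # 0.7
```

The printed 0.7 is exactly the winding we applied to half the clocks: the mismatch across the boundary is physical, while the overall setting of all the clocks together is not.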

Back in 1954, several years before Glashow, Weinberg, and Salam constructed the Standard Model, two physicists sharing an office at the Brookhaven Laboratory, Chen Ning Yang and Robert Mills, pondered the possible significance associated with the redundancy in setting the phase. Physics often proceeds when people play around with ideas without any good reason, and Yang and Mills did just that. They wondered what would happen if nature actually did not care about the phase at all. In other words, they played around with the mathematical equations while messing up all the phases, and tried to work out what the consequences might be. This might sound weird, but if you sit a couple of physicists in an office and allow them some freedom, this is the sort of thing they get up to. Returning to the landscape analogy, you might imagine walking over the field, haphazardly changing the little dials by different amounts. What happens is at first sight simple—you are not allowed to do it. It is not a symmetry of nature.

To be more specific, let’s go back and look at only the second line of the master equation. Now strike out all of the W, B, and G bits. What we have is then the simplest possible theory of particles that we could imagine: The particles just sit around and never interact with each other. That little portion of the master equation very definitely does not stay the same if we suddenly go and redial all the little clocks (that isn’t something that you are supposed to be able to see by just looking at the equation). Yang and Mills knew this, but they were more persistent. They asked a great question: How can we change the equation so that it does stay the same? The answer is fantastic. We need to add back precisely the missing bits of the master equation that we just struck out, and nothing else will do. In so doing we conjure into existence the force mediators and suddenly we go from a world without any interactions to a theory that has the potential to describe our real world. The fact that the master equation does not care about the values on the clock faces (or gauges) is what we mean by gauge symmetry. The remarkable thing is that demanding gauge symmetry leaves us no choice in what to write down: Gauge symmetry leads inexorably to the master equation. To put it another way, the forces that make our world interesting exist as a consequence of the fact that gauge symmetry is a symmetry of nature. As a postscript, we should add that Yang and Mills set the ball rolling, but their work was primarily of mathematical interest and it came well before particle physicists even knew which particles the fundamental theory ought to describe. It was Glashow, Weinberg, and Salam who had the wit to take their ideas and apply them to a description of the real world.
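Yang and Mills’ game can even be played on a computer. In the sketch below (a standard lattice-physics illustration of the simplest, electromagnetic case, not something from the book), a field lives on a line of points and a "kinetic" term couples neighbors. Rewinding every dial by the same amount leaves the kinetic term alone; rewinding them point by point does not, unless we add a new field living on the links between points that transforms so as to soak up the mismatch. That new field is the gauge field:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
psi = rng.normal(size=N) + 1j * rng.normal(size=N)   # a toy field on N sites

def kinetic(psi):
    # The "hopping" term coupling neighboring points; the derivative terms
    # in the master equation look like this when space is chopped into a lattice.
    return np.sum(np.abs(psi[1:] - psi[:-1])**2)

theta = rng.uniform(0, 2 * np.pi, size=N)            # a different dial at each site

E0 = kinetic(psi)
E_global = kinetic(np.exp(1j * theta[0]) * psi)      # same winding everywhere: OK
E_local = kinetic(np.exp(1j * theta) * psi)          # site-by-site winding: not OK

print(np.isclose(E0, E_global), np.isclose(E0, E_local))   # True False

# The fix, in lattice language: introduce a field U living on the LINKS
# between sites (the gauge field) that transforms so as to cancel the
# mismatch. Here U starts out trivial ("no interactions").
def kinetic_gauged(psi, U):
    return np.sum(np.abs(psi[1:] - U * psi[:-1])**2)

U = np.ones(N - 1, dtype=complex)
psi2 = np.exp(1j * theta) * psi                      # redial all the clocks...
U2 = np.exp(1j * (theta[1:] - theta[:-1])) * U       # ...and the gauge field shifts too

print(np.isclose(kinetic_gauged(psi, U), kinetic_gauged(psi2, U2)))  # True
```

Demanding that the answer not change under the arbitrary redialing forces the link field on us, just as demanding gauge symmetry forces the W, B, and G terms back into the master equation.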

So we have seen how the first two lines of the master equation that underpins the Standard Model of particle physics can be written, and we hope to have given some flavor as to its scope and content. Moreover, we have seen that it is not ad hoc; instead we are led inexorably to it by the draw of gauge symmetry. Now that we have a better feel for this most important of equations, we can get back to the task that originally motivated us. We were trying to understand to what extent nature’s rules allow for the possibility that mass can actually be converted into energy, and vice versa. The answer lies, of course, within the master equation, for it spells out the rules of the game. But there is a much more appealing way to see what is going on and to understand how the particles interact with each other. This approach involves pictures, and it was introduced into physics by Richard Feynman.



What happens when two electrons come close to each other? Or two quarks? Or a neutrino gets close to an antimuon? And so on. What happens is that the particles interact with each other, in the precise way specified in the master equation. In the case of two electrons, they will push against each other because they have equal electric charge, whereas an electron and antielectron are attracted to each other because they have opposite electric charge. All of this physics resides in the first two lines of the master equation, and all of it can be summarized in just a handful of rules that we can draw pictorially. It really is a very simple business to get a basic grasp of, although the details take a bit more effort to appreciate. We’ll stick to the basics.



Looking again at the second line, the term that involves two ψ symbols and a G is the only portion of the equation that is relevant when quarks interact with each other via the strong force. Two quark fields and a gluon are interacting at the same point in spacetime—that is what the master equation is telling us. More than that, that is the only way they can interact with each other. That single portion of the master equation tells us how quarks and gluons interact, and it is prescribed precisely for us once we decide to make our theory gauge symmetric. We have absolutely no choice in the matter. Feynman appreciated that all of the basic interactions are this simple in essence, and he took to drawing pictures for each of the possible interactions that the theory allows. Figure 14 illustrates how particle physicists usually draw the quark-gluon interaction. The curly line represents a gluon and the straight line represents a quark or antiquark. Figure 15 illustrates the other allowed interactions in the Standard Model that come about from the first two lines of the master equation. Don’t worry about the finer points of the pictures. The message is that we can write them down and that there aren’t too many of them. Particles of light (photons) are represented by the symbol γ and the W and Z particles are labeled as such. The six quarks are labeled generically as q, the neutrinos appear as ν (pronounced “nu”), and the three electrically charged leptons (electron, muon, and tau) are labeled as l. Antiparticles are indicated by drawing a line over the corresponding symbol. Now here is the neat bit. These pictures represent what particle physicists call interaction vertices. You are allowed to sew together these vertices into bigger diagrams, and any diagram you can draw represents a process that can happen in nature. Conversely, if you cannot draw a diagram, then the process cannot happen.



Feynman did a little more than just introduce the diagrams. He associated a mathematical rule with each vertex, and the rules are derived directly from the master equation. The rules multiply together in composite diagrams and allow physicists to calculate the likelihood that the process corresponding to a particular diagram will actually happen. For example, when two electrons encounter each other, the simplest diagram we can draw is as illustrated in Figure 16(a). We say the electrons scatter via the exchange of a photon. This diagram is built up by sewing together two electron-photon vertices. You should think of the two electrons heading in from the left, scattering off each other as a result of the photon exchange, and then heading out to the right. Actually, we have sneaked in another rule here. Namely, you are allowed to flip a particle to an antiparticle (and vice versa) provided you make it into an incoming particle. Figure 16(b) shows another possible way of sewing together the vertices. It is a little more fancy than the other figure, but again it corresponds to a possible way that the two electrons can interact. A moment’s thought should convince you that there are an infinite number of possible diagrams. They all represent different ways that two electrons can scatter, but fortunately for those of us who have to calculate what is going on, some diagrams are more important than others. In fact, the rule is very easy to state: Generally speaking, the most important diagrams are the ones with the fewest vertices. So in the case of a pair of electrons, the diagram in Figure 16(a) is the most important one, because it has only two vertices. That means we can get a pretty good understanding of what happens by calculating only this diagram using Feynman’s rules. It is delightful that what pops out of the math is the physics of how two electrically charged particles interact with each other, as discovered by Faraday and Maxwell. 
But now we can claim to have a much better understanding of the origin of this physics—we derived it starting from gauge symmetry. Calculations using Feynman’s rules also give us much more than just another way to understand nineteenth-century physics. Even when two electrons interact, we can compute corrections to Maxwell’s predictions—small corrections that improve upon his equations in that they agree better with the experimental data. So the master equation is breaking new ground. We really are just scratching the surface here. As we stressed, the Standard Model describes everything we know about the way particles interact with each other and it is a complete theory of the strong, weak, and electromagnetic forces, even succeeding in unifying two of them. Only gravity is excluded from this ambitious scheme to understand how everything in the universe interacts with everything else.
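The rule that fewer vertices means more important can be made roughly quantitative. Speaking loosely (and ignoring the detailed factors that a full calculation supplies), each electron-photon vertex contributes a factor of about the square root of the fine-structure constant, α ≈ 1/137, to a diagram’s quantum amplitude:

```python
# Rough counting rule: each electron-photon vertex contributes a factor of
# about sqrt(alpha) to a diagram's amplitude, where alpha ~ 1/137 is the
# fine-structure constant. More vertices therefore means less important.
alpha = 1 / 137.036

def relative_amplitude(n_vertices):
    return alpha ** (n_vertices / 2)

simple = relative_amplitude(2)    # Figure 16(a): two vertices
fancy = relative_amplitude(4)     # Figure 16(b): four vertices

print(f"four-vertex diagram is ~{simple / fancy:.0f}x smaller")   # ~137x
```

This is why calculating only the simplest diagram already gives a pretty good answer, and why the infinitely many fancier diagrams show up only as small corrections.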



But we need to stay on message. How do Feynman’s rules, which summarize the essential content of the Standard Model, dictate the ways in which we can destroy mass and convert it into energy? How can we use them to help us best exploit E = mc2? First let us recall an important result from Chapter 5—light is made up of massless particles. In other words, photons do not have any mass. Now there is an interesting diagram we can draw—it is shown in Figure 17. An electron and an antielectron bang into each other and annihilate to produce a single photon (for clarity we have labeled the electron e- and the positron e+). That is allowed by Feynman’s rules. This diagram is noteworthy because it represents a case whereby we started with some mass (an electron and a positron have some mass) and we end up with no mass at all (a photon). It is the ultimate matter-destruction process, and all of the initial energy locked away inside the mass of the electron and antielectron is liberated as the energy of a photon. There is a hitch, though. The annihilation into a single photon is disallowed by the rule that everything that happens must simultaneously satisfy the laws of energy and momentum conservation, and this particular process cannot do that (it is not entirely obvious and we won’t bother to prove it). It is a hitch that is easy to get around, though—make two photons. Figure 18 shows the relevant Feynman diagram—again, the initial mass is utterly destroyed and converted 100 percent into energy, in this case two photons. Processes like this played a very important role in the early history of the universe when matter and antimatter almost completely canceled themselves out by just such interactions. Today we see the remnant of that cancellation. Astronomers have observed that for every matter particle in the universe there are around 100 billion photons. In other words, for every 100 billion matter particles made just after the big bang, only one survived.
The rest took the opportunity available to them, as pictured graphically in Feynman’s diagrams, to divest themselves of their mass and become photons.
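The arithmetic of the two-photon annihilation in Figure 18 is E = mc2 at its purest. Using the standard values for the electron mass and the speed of light (numbers not quoted in the text), the complete destruction of an electron and a positron at rest liberates about a million electron volts:

```python
# Energy liberated when an electron and a positron at rest annihilate into
# two photons (Figure 18), straight from E = mc^2. The mass and speed of
# light are the standard values, not figures from the book.
m_electron = 9.109e-31      # electron mass, kg
c = 2.998e8                 # speed of light, m/s

energy_joules = 2 * m_electron * c**2       # two particles' mass destroyed
energy_MeV = energy_joules / 1.602e-13      # 1 MeV = 1.602e-13 joules

print(round(energy_MeV, 2))   # 1.02 MeV, shared between the two photons
```

A million electron volts is tiny on everyday scales, but per kilogram of fuel it beats any chemical reaction by a factor of around a billion, which is the whole point of the chapter.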

In a very real sense, the stuff of the universe that makes up stars, planets, and people is only a tiny residue, left over after the grand annihilation of mass that took place early on in the universe’s history. It is very fortunate and almost miraculous that anything was left at all! To this day, we are not sure why that happened. The question “why is the universe not just filled with light and nothing else?” is still open-ended, and experiments around the world are geared up to help us figure out the answer. There is no shortage of clever ideas, but so far we have yet to find the decisive piece of experimental evidence, or proof that the theories are all wrong. The famous Russian dissident Andrei Sakharov carried out the pioneering work in this field. He was the first person to lay out the criteria that must be satisfied by any successful theory aiming to answer the question as to why there is any matter at all left over from the big bang.



We have learned that nature does have a mechanism for destroying mass, but unfortunately it is not very practical for use on Earth because we need a way of generating and storing antimatter—there is nowhere we can go to mine it and as far as we can tell, no lumps of it are lying around in outer space. As a fuel source it seems useless because there simply is no fuel. Antimatter can be created in the laboratory, but only by feeding in lots of energy in the first place. So although the process of matter-antimatter annihilation represents the ultimate mechanism for converting mass to energy, it is not going to help us solve the world’s energy crisis.



What about fusion, the process that powers the sun? How does that come about in the language of the Standard Model? The key is to focus our attention on the Feynman vertex involving a W particle. Figure 19 shows what is going on when a deuteron is manufactured from the fusion of two protons. Remember that protons are, to a good approximation, made up of three quarks: two up quarks and one down quark. The deuteron is made up of one proton and one neutron, and the neutron is again mainly made up of three quarks, but this time one up quark and two down quarks. The diagram shows how one of the protons can be converted into a neutron, and as you can see, the W particle is the key. One of the up quarks inside the proton has emitted a W particle and changed into a down quark as a result, thereby converting the proton into a neutron. According to the diagram, the W particle doesn’t hang around. It dies and converts into an antielectron and a neutrino.12 W particles emitted when a deuteron forms always die, and in fact nobody has ever seen W particles except via the stuff they turn into as they exit the world. As a rule of thumb almost all of the elementary particles die, because there is usually a Feynman vertex that allows it. The exception occurs whenever it is impossible to conserve energy or momentum, and that tends to mean that only the lightest particles stick around. That is the reason that protons, electrons, and photons dominate the stuff of the everyday world. They simply have nothing to decay into: The up and down quarks are the lightest quarks, the electron is the lightest charged lepton, and the photon has no mass. For example, the muon is pretty much identical to the electron except that it is heavier. Remember that we encountered it earlier when we were talking about the Brookhaven experiment. Since it starts out with more mass energy than an electron, its decay to an electron will not violate the conservation of energy.
In addition, as illustrated in Figure 20, Feynman’s rules allow it to happen and because a pair of neutrinos is also emitted there is no trouble conserving momentum. The upshot is that muons do decay and on average live for a fleeting 2.2 microseconds. Incidentally, 2.2 microseconds is a very long time on the timescale of most of the interesting particle physics processes. In contrast, the electron is the lightest Standard Model particle and it simply has nothing to decay into. As far as anyone can tell, an electron sitting on its own will never decay, and the only way to vanquish an electron is to make it annihilate with its antimatter partner.
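The 2.2 microseconds quoted for the muon is an average. Individual muons decay at random moments, and the fraction surviving after a time t falls off exponentially, a standard result of quantum theory that takes only a couple of lines to sketch:

```python
import math

# The 2.2-microsecond muon lifetime is a mean: each muon decays at a random
# moment, and the fraction still alive after time t is exp(-t / tau).
tau = 2.2e-6   # mean muon lifetime at rest, seconds

def surviving_fraction(t):
    return math.exp(-t / tau)

# After one mean lifetime about 37% remain; after five, under 1%.
print(round(surviving_fraction(tau), 2))       # 0.37
print(round(surviving_fraction(5 * tau), 3))   # 0.007
```

The exponential has no memory: a muon that has survived for a microsecond is no closer to dying than a freshly made one, which is quantum theory’s dice-playing at work again.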

Returning to the deuteron, Figure 19 explains how a deuteron can form from the collision of two protons, and it says we should expect to find one antielectron (positron) and one neutrino for every fusion event. As we have already mentioned, the neutrinos interact with the other particles in the universe only very weakly. The master equation tells us that is the case, for the neutrinos are the only particles that interact solely through the weak force. As a result, the neutrinos that are manufactured deep in the core of the sun can escape without too much trouble; they stream outward in all directions and some of them head out toward the earth. As with the sun, the earth is pretty much transparent to them and they pass through it without noticing it is even there. That said, each neutrino does have a very small chance of interacting with an atom in the earth, and experiments like Super-Kamiokande have detected them, as we discussed earlier.

How certain can we be that the Standard Model is correct, at least up to the accuracy of our current experimental capabilities? Over many years now the Standard Model has been put through the most rigorous tests at various laboratories around the world. We don’t need to worry that the scientists are biased in favor of the theory; those conducting the tests would dearly love to find that the Standard Model is broken or deficient in some way, and they are trying hard to test it to destruction. Catching a glimpse of new physical processes, which may open up dazzling new vistas with magnificent views of the inner workings of the universe, is their dream. So far the Standard Model has withstood every test.



The most recent of the big machines used to test it is the Large Hadron Collider (LHC) at CERN. This worldwide collaboration of scientists aims to either confirm or break the Standard Model; we shall return to the LHC shortly. The predecessor to the LHC was the Large Electron Positron Collider (LEP), and it succeeded in delivering some of the most exquisite tests to date. LEP was housed inside a 27-kilometer circular tunnel running underneath Geneva and some picturesque French villages, and it explored the world of the Standard Model for eleven years, from 1989 until 2000. Large electric fields were used to accelerate beams of electrons in one direction and of positrons in the other. Crudely speaking, the acceleration of charged particles by electric fields is similar to the mechanism used to shoot electrons at old-fashioned CRT (cathode ray tube) television screens to produce the picture. The electrons are emitted at the back of the set, and that is why older TVs tend to be quite bulky. Then the electrons are accelerated by an electric field to the screen at the front of the TV. A magnet makes the beam bend and scan across the screen to make the picture.

At LEP, magnetic fields were also exploited, this time to bend the particles in a circle so they followed the arc of the tunnel. The whole point of the venture was to bring the two beams of particles together so they would collide head-on. As we have already learned, the collision of an electron and a positron can lead to the annihilation of both, with their mass converting into energy. This energy is what physicists at LEP were most interested in, because it could be converted into heavier particles in accord with Feynman’s rules. During the first phase of the machine’s operation, the electron and positron had energies that were very precisely tuned to the value that greatly enhanced the chances of making a Z particle (you might want to check back to the list of Feynman’s rules in the Standard Model to check that electron-positron annihilation into a Z particle is allowed). The Z particle is actually pretty heavy by the standards of the other particles—it is nearly 100 times more massive than a proton and nearly 200,000 times more massive than the electron and positron. As a result, the electron and positron had to be pushed to within a whisker of the speed of light to have energy sufficient to bring the Z into being. Certainly the energy locked in their mass and liberated upon annihilation is nowhere near sufficient to make the Z.
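It is easy to check just how close “a whisker of the speed of light” is. Taking a Z mass of about 91.2 GeV and an electron mass of about 0.000511 GeV (standard values, not figures quoted in the text), each beam must carry half the Z’s mass-energy:

```python
import math

# How fast were the LEP beams? To make a Z at rest, each beam carries half
# the Z's mass-energy. Masses in GeV: Z ~ 91.2, electron ~ 0.000511
# (standard values, not from the book).
m_Z = 91.2
m_e = 0.000511

beam_energy = m_Z / 2                  # GeV per beam at the Z resonance
gamma = beam_energy / m_e              # Lorentz factor, E / (m c^2)
beta = math.sqrt(1 - 1 / gamma**2)     # speed as a fraction of c

print(round(gamma))                    # a Lorentz factor of ~89,000
print(f"{1 - beta:.1e}")               # shortfall from light speed: ~6e-11
```

A shortfall from light speed of a few parts in a hundred billion is the “whisker” in question, and it also confirms the text’s factor of nearly 200,000 between the Z mass and the combined electron-positron mass.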



The initial goal of LEP was simple: keep on producing Z particles by repeatedly colliding electrons and positrons. Every time the particle beams collided, there was a reasonable chance of an electron in one beam annihilating against a single positron in the other beam, resulting in the production of a single Z particle. By quick-firing beams into each other, LEP managed to make over 20 million Z particles through electron-positron annihilation during its lifetime.

Just like the other heavy Standard Model particles, the Z is not stable and it lasts for a fleeting 10⁻²⁵ seconds before it dies. Figure 21 illustrates the various possible Z particle processes that the 1,500 or so LEP physicists were so interested in, not to mention the many thousands more around the world who were eagerly awaiting their results. Using giant particle detectors that surround the point where the electron and positron annihilate each other, particle physicists could capture the stuff produced by the decay of the Z and identify it. Modern particle physics detectors, like those used at LEP, are a little like huge digital cameras, many meters across and many meters tall, that can track particles as they pass through them. They, like the accelerators themselves, are glorious feats of modern engineering. In caverns as big as cathedrals, they can measure a single subatomic particle’s energy and momentum with exquisite accuracy. They are truly at the edge of our engineering capabilities, which makes them wonderful monuments to our collective desire to explore the workings of the universe.
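That 10⁻²⁵ seconds explains why no detector ever records the Z itself: even moving at essentially light speed, it covers far less than the width of a proton before dying, so it is only ever “seen” through its decay products:

```python
# How far can a Z travel in its lifetime? Even at light speed, the answer
# is far smaller than a proton, so only the decay products reach a detector.
c = 3.0e8            # speed of light, m/s
lifetime = 1e-25     # rough Z lifetime from the text, seconds

distance = c * lifetime
print(f"{distance:.0e} m")   # 3e-17 m, versus roughly 1e-15 m for a proton
```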

Armed with these detectors and vast banks of high-performance computers, one of the primary goals for the scientists involved a pretty simple strategy. They needed to sift through their data to identify those collisions in which a Z particle was produced and then for each collision, figure out how the Z particle decayed. Sometimes it would decay to produce an electron-positron pair; other times a quark and antiquark would be produced or maybe a muon and an antimuon (see Figure 21 again). Their job was to keep a tally of how many times the Z decayed through each of the possible mechanisms predicted by the Standard Model and compare the results with the expected numbers as predicted by the theory. With over 20 million Z particles on hand, they could make a pretty stringent test of the correctness of the Standard Model and, of course, the evidence showed that the theory works beautifully. This exercise is called measuring the partial widths, and it was one of the most important tests of the Standard Model that LEP provided. Over time, many other tests were performed and in all cases the Standard Model theory was seen to work. When LEP was finally shut down in 2000, its ultraprecise data had been able to test the Standard Model to a precision of 0.1 percent.

Before we leave the subject of testing the Standard Model, we cannot resist one other example from a quite different type of experiment. Electrons (and many other elementary particles) behave like tiny magnets, and some very beautiful experiments have been designed to measure these magnetic effects. These aren’t collider experiments. There is no brutal smashing together of matter and antimatter here. Instead, very clever experiments allow the scientists to measure the magnetism to an astonishing one part per trillion. It is a staggering precision, akin to measuring the distance from London to New York to an accuracy much less than the thickness of a human hair. As if that weren’t amazing enough, the theoretical physicists have been hard at work too. They have calculated the same thing. Calculations like this used to be done using nothing more than a pen and some paper, but these days even the theorists need good computers.
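The London-to-New York analogy is easy to check. One part per trillion of that distance (roughly 5,570 kilometers, our approximate figure) comes out at a few micrometers, well below the width of a human hair:

```python
# Sanity-checking the analogy: one part per trillion of the London-to-New
# York distance really is much less than the width of a human hair
# (the 5,570 km figure and ~0.1 mm hair width are our rough numbers).
distance_m = 5.57e6        # London to New York, meters (approximate)
precision = 1e-12          # one part per trillion

error_m = distance_m * precision
print(f"{error_m * 1e6:.1f} micrometers")   # ~5.6, versus ~100 for a hair
```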

Nevertheless, starting with the Standard Model and a cool head, theorists have calculated the predictions of the Standard Model, and their result agrees exactly with the experimental number. To this day the theory and experiment are in agreement to ten parts per billion. It is one of the most precise tests of any theory that has ever been made in all of science. By now, and thanks in no small part to LEP and the electron magnetism experiments, we have a great deal of confidence that the Standard Model of particle physics is on the right lines. Our theory of nearly everything is in fine shape—except for one last detail, which is actually a fairly big detail. What are those last two lines of the master equation?

We are, in fact, guilty of hiding one crucial piece of information that is absolutely central to our quest in this book. Now is the time to let the cat out of the bag. The requirement of gauge symmetry seems to demand that all of the particles in the Standard Model have no mass. That is plain wrong. Things do have mass and you do not need a complicated scientific experiment to prove it. We’ve spent the entire book so far thinking about it, and we derived the most famous equation in physics, E = mc2, and that very definitely has an “m” in it. The final two lines of the master equation are there to fix this problem. In understanding those final two lines we will complete our journey, for we will have an explanation for the very origin of mass.

The problem of mass is very easy to state. If we try to add mass directly into the master equation, then we are doomed to spoil gauge symmetry. But as we have seen, gauge symmetry lies at the very heart of the theory. Using it, we were able to conjure into being all of the forces in nature. Worse still, theorists proved in the 1970s that abandoning gauge symmetry is not an option, because then the theory falls apart and stops making sense. This apparent impasse was solved by three groups of people working independently of each other in 1964. François Englert and Robert Brout working in Belgium, Gerald Guralnik, Carl Hagen, and Tom Kibble in London, and Peter Higgs in Edinburgh all wrote landmark papers that led to what later became known as the Higgs mechanism.

What would constitute an explanation of mass? Well, suppose you started out with a theory of nature in which mass never reared its head. In such a theory, mass simply does not exist and you would never invent a word for it. As we have learned, everything would whiz around at the speed of light. Now, suppose that within that theory something happens such that after the event the various particles start to move around with different, slower speeds and certainly no longer move at light speed. Well, you would be quite entitled to say that the thing that happened is responsible for the origin of mass. That “thing” is the Higgs mechanism, and now is the time to explain what it is.

Imagine you are blindfolded, holding a ping-pong ball by a thread. Jerk the string and you will conclude that something with not much mass is on the end of it. Now suppose that instead of bobbing freely, the ping-pong ball is immersed in thick maple syrup. This time if you jerk the thread you will encounter more resistance, and you might reasonably presume that the thing on the end of the thread is much heavier than a ping-pong ball. It is as if the ball is heavier because it gets dragged back by the syrup. Now imagine a sort of cosmic maple syrup that pervades the whole of space. Every nook and cranny is filled with it, and it is so pervasive that we do not even know it is there. In a sense, it provides the backdrop to everything that happens.

The syrup analogy only goes so far, of course. For one thing, it has to be selective syrup, holding back quarks and leptons but allowing photons to pass through it unimpeded. You might imagine pushing the analogy even further to accommodate that, but we think the point has been made and we ought not forget that it is an analogy, after all. The papers of Higgs et al. certainly never mention syrup.

What they do mention is what we now call the Higgs field. Just like the electron field, it has associated with it a particle: the Higgs particle. Just like the electron field, the Higgs field fluctuates, and where it is biggest the Higgs particle is more likely to be found. There is a big difference, though: The Higgs field is not zero even when no Higgs particles are around, and that is the sense in which it is like all-pervasive syrup. All of the particles in the Standard Model are moving around in the background of the Higgs field, and some of the particles are affected by it more than others. The last two lines of the master equation capture just this physics. The Higgs field is represented by the symbol φ and the portions of the third line that involve two instances of φ along with a B or a W (which in our compressed notation are tucked away inside the D symbol in the third line of the master equation) are the terms that generate masses for the W and Z particles. The theory is cleverly arranged so the photon remains massless (the piece of the photon that sits in B and the piece in W cancel out in the third line; again, that’s all hidden in the D symbol) and since the gluon field (G) never appeared, it too has no mass. That is all in accord with experiment. Adding the Higgs field has generated masses for the particles and it has done so without spoiling the gauge symmetry. The masses are instead generated as a result of an interaction with the background Higgs field. That is the magic of the whole idea—we can get masses for the particles without paying the price of losing gauge symmetry. The fourth line of the master equation is the place where the Higgs field generates the masses for the remaining matter particles of the Standard Model.
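The statement that the Higgs field is not zero even in empty space can be illustrated with the textbook “Mexican hat” potential, V(φ) = -μ²φ² + λφ⁴ (the book does not write it down, and the numbers below are arbitrary). Its lowest-energy point sits away from φ = 0, so the field settles at a nonzero value even with no Higgs particles around:

```python
import numpy as np

# The textbook "Mexican hat" potential often used to model the Higgs field
# (not spelled out in the book; mu2 and lam here are arbitrary numbers):
#     V(phi) = -mu2 * phi^2 + lam * phi^4
# Its minimum is NOT at phi = 0, which is the sense in which the field is
# nonzero even in empty space.
mu2, lam = 1.0, 0.25

def V(phi):
    return -mu2 * phi**2 + lam * phi**4

phi = np.linspace(-3, 3, 6001)
phi_min = phi[np.argmin(V(phi))]

# Analytically the minimum sits at |phi| = sqrt(mu2 / (2 * lam)).
print(round(abs(phi_min), 3), round(np.sqrt(mu2 / (2 * lam)), 3))
```

The numerical search and the pencil-and-paper answer agree: the energetically preferred value of the field is away from zero, and it is against this nonzero backdrop that the other particles acquire their masses.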

There is a snag to this fantastic picture. No experiment has ever seen a Higgs particle. Every other particle in the Standard Model has been produced in experiments, so the Higgs really is the missing piece of the entire jigsaw. If it does exist as predicted, then the Standard Model will have triumphed again, and it can add an explanation for the origin of mass to its impressive list of successes. Just like all the other particle interactions, the Standard Model specifies exactly how the Higgs particle should manifest itself in experiments. The only thing it doesn’t tell us is how heavy it is, although it does predict that the Higgs mass should lie within a particular range now that we know the masses of the W particle and the top quark. LEP could have seen the Higgs if it had been at the lighter end of the predicted range, but since none were seen, we might presume it is too heavy to have been produced at LEP (remember that heavier particles need more energy to produce them, by virtue of E = mc2). At the time of writing, the Tevatron collider at the Fermi National Accelerator Laboratory (Fermilab) near Chicago is hunting for the Higgs, but to date it has not seen a hint of it either. It may well be that the Tevatron too has insufficient energy to deliver a clear Higgs signal, although it is very much in the race. The LHC is the highest-energy machine ever built, and it really should settle the question of the Higgs’s existence once and for all, because it has enough energy to reach well beyond the upper limits set by the Standard Model. In other words, the LHC will either confirm or break the Standard Model. We’ll return shortly to explain why we are so sure that the LHC will do the job the earlier machines could not, but first we would like to explain just how the LHC expects to make Higgs particles.

The LHC was built within the same 27-kilometer-circumference tunnel that LEP used, but apart from the tunnel, everything else has changed. An entirely new accelerator now sits where LEP once stood. It is capable of accelerating protons in opposite directions around the tunnel to an energy of more than 7,000 times their mass energy. Smashing the protons into each other at these energies advances particle physics into a new era, and if the Standard Model is right, the collisions will produce Higgs particles in large numbers. Protons are made up of quarks, so if we want to figure out what should happen at the LHC, all we need to do is identify the relevant Feynman diagrams.
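For readers who like to check the arithmetic, that "more than 7,000 times" figure follows directly from E = mc2. The sketch below assumes the standard values of a proton rest-mass energy of about 0.938 GeV and an LHC design beam energy of 7 TeV (7,000 GeV); neither number appears explicitly in the text above.

```python
# Back-of-the-envelope check: the LHC beam energy expressed as a
# multiple of the proton's rest-mass energy (E = mc^2).
# Assumed values: proton mass energy ~0.938 GeV; LHC design beam energy 7 TeV.
proton_mass_energy_gev = 0.938   # E = mc^2 for one proton, in GeV
beam_energy_gev = 7000.0         # one LHC proton beam at design energy

ratio = beam_energy_gev / proton_mass_energy_gev
print(round(ratio))  # roughly 7,500 -- "more than 7,000 times" the mass energy
```

In other words, each proton at the LHC carries kinetic energy dwarfing the latent energy locked in its own mass, which is exactly the headroom needed to conjure heavy new particles out of the collision.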



The most important vertices corresponding to interactions between the regular Standard Model particles and the Higgs particle are illustrated in Figure 22, which shows the Higgs as a dotted line interacting with the heaviest quark, the top quark (labeled t), and with the also pretty heavy W or Z particles. Perhaps it will come as no surprise that the particle responsible for the origin of mass prefers to interact with the most massive particles around. Knowing that the protons furnish us with a source of quarks, our task is to figure out how to embed the Higgs vertex into a bigger Feynman diagram. Then we’ll have figured out how Higgs particles can be manufactured at the LHC. Since quarks interact with W (or Z) bosons, it is easy to work out how the Higgs could be produced via W (or Z) particles. The result is shown in Figure 23: A quark from each of the colliding protons (labeled “p”) emits a W (or Z) particle, and these fuse together to make the Higgs. The process is called weak boson fusion, and it is expected to be a key process at the LHC.



The case of the top quark production mechanism is a little trickier. Top quarks do not exist inside protons, so we need a way to go from the light (up or down) quarks to top quarks. Well, top quarks interact with the lighter quarks through the strong force—i.e., mediated by emitting and absorbing a gluon. The result is shown in Figure 24. It is rather similar to the weak boson fusion process except that the gluons replace the W or Z. In fact, because this process proceeds through the strong force, it is the most likely way to produce Higgs particles at the LHC. It goes by the name of gluon fusion.



This, then, is the Higgs mechanism, currently the most widely accepted theory for the origin of mass in the universe. If all goes according to plan, the LHC will either confirm the Standard Model description of the origin of mass or show that it is wrong. This is what makes the next few years such an exciting time for physics. We are in the classic scientific position of having a theory that predicts precisely what should happen in an experiment, and that will therefore stand or fall depending on the results of that experiment. But what if the Standard Model is wrong? Couldn’t something totally different and unexpected happen? Maybe the Standard Model is not quite right and there is no Higgs particle. There is no arguing that these are genuine possibilities. Particle physicists are particularly excited because they know that the LHC must reveal something new. Seeing nothing new is simply not an option, because the Standard Model, stripped of the Higgs, just does not make sense at the energies the LHC is capable of generating; its predictions simply fall apart. The LHC is the first machine to enter this uncharted territory. More specifically, when two W particles collide at energies in excess of 1,000 times the proton’s mass energy, as they certainly will at the LHC, we lose the ability to calculate what is happening if we simply throw the Higgs parts of the master equation away. Adding the Higgs back in makes the calculations work out, but the Higgs is not the only option: There are other ways to make the W scattering process work. Whichever way nature chooses, it is unavoidable that the LHC will measure something that contains physics we have never encountered before. It is rare for scientists to perform an experiment with such a guarantee that interesting things will reveal themselves, and this is what makes the LHC the most eagerly anticipated experiment in many years.