The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics - Robert Oerter (2006)

Chapter 11. The Edge of Physics

Birdboot: Where’s Higgs?

—Tom Stoppard, The Real Inspector Hound

The Standard Model is by far the most successful scientific theory ever. Not only have some of its predictions been confirmed to spectacular precision (one part in 10 billion for the electron magnetic moment), but the range of application of the theory is unparalleled. From the behavior of quarks inside the proton to the behavior of galactic magnetic fields, the Standard Model works across the entire range of human experience. That it accomplishes all this with merely 18 adjustable parameters is unprecedented, making the Standard Model truly the capstone of twentieth-century science.

Why are physicists not content to rest on their laurels and accept general relativity (for gravity) and the Standard Model (for everything else) as the ultimate theories explaining all the interactions in the universe? The answer is twofold: On one hand, the theories themselves hint at new physics beyond the Standard Model, and on the other, recent experiments are starting to require it.

Where Have All the Neutrinos Gone?

The first indication of physics beyond the Standard Model came from a surprising direction. It was not the multibillion dollar accelerators that produced the evidence, but an experiment that set out to check an apparently well-understood phenomenon—the physics of the sun.

A complete understanding of the sun involves the difficult interaction of hot, flowing plasma with the sun’s magnetic fields, as well as the delicate balancing of the outward pressure of the hot plasma with the inward pull of gravity. The nuclear reactions that produce energy in the sun’s core, on the other hand, were thought to be well understood. Many of those reactions produce neutrinos. Since neutrinos interact so weakly with other matter, they escape the sun’s core without any alteration, providing a glimpse of the conditions in the sun’s interior. An experiment designed by Raymond Davis and his collaborators detected a tiny fraction of the solar neutrinos that constantly rain onto the Earth. In 1968 they reported their results: only about half of the expected number of neutrinos.

Particle physicists viewed this result with interested skepticism. The question: Was there a problem with the physics of the nuclear reactions (the Standard Model did not exist yet), or with the physics of the sun, the so-called standard solar model? As the Standard Model was developed and verified through the 1970s and 1980s, particle physicists’ confidence grew, and many assumed that the solar physics would be found in error. In experiment after experiment, though, the solar neutrino deficit persisted. For 35 years, increasingly sophisticated experiments looked for neutrinos from a number of different solar reactions and, slowly, a consistent picture emerged. The solar physics was fine, but the neutrinos were behaving strangely. According to the Standard Model, all of the neutrinos produced in the sun should be electron neutrinos. However, the experiments clearly showed that some of these neutrinos were changing their flavor during their trip to Earth—they were oscillating from electron neutrinos into some other type. Presumably, that other type of neutrino is a mu or tau neutrino, but it is possible that the electron neutrinos are transforming into some as yet unknown neutrino type.

The theoretical explanation of these neutrino oscillations requires that at least one of the three neutrinos have a mass. Since all neutrinos are massless in the Standard Model, these experiments, the implications of which are just becoming clear in the first years of the twenty-first century, are truly the first indications of physics beyond the Standard Model.

Can the Standard Model be modified to include neutrino masses? As we’ve seen, a particle with mass cannot be purely left-handed, as neutrinos are in the Standard Model. At the very least, then, we need to add right-handed neutrinos to the theory. With these in hand, we can proceed to make neutrinos massive by the same trick that made the electron and quarks massive, by coupling to the Higgs field. Spontaneous symmetry breaking then leads to neutrino masses. All that’s needed is to adjust the couplings to make the masses come out right.
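
The bookkeeping behind this trick is simple. In the Higgs mechanism, each fermion's mass is the product of its coupling to the Higgs and the Higgs field's vacuum value, v ≈ 246 GeV (a standard formula, quoted here as a supplement to the text):

```latex
m_f = \frac{y_f\, v}{\sqrt{2}}
```

Making a neutrino at least a million times lighter than the electron means dialing its coupling y down to roughly one part in a trillion, with no explanation of why it should be so tiny; that is the difficulty taken up next.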

This simple-minded approach has some difficulties. Experiments have shown that neutrino masses must be at least a million times smaller than the electron mass. Why should the coupling between the Higgs and the neutrino be a million times less than the coupling between the Higgs and the electron? Our model gives no way of answering this question. More seriously, the right-handed neutrinos we introduced count as additional light neutrinos, which are ruled out by accelerator experiments and by astrophysical considerations. As we will see in the next chapter, a class of theories known as grand unified theories (GUTs) allows massive neutrinos to be included in a more natural way.

An experiment now running at Fermilab, called the Mini Booster Neutrino Experiment (Mini-BooNE), is expected to produce results sometime in 2005. Mini-BooNE examines a beam of mu neutrinos to see if they transform into electron neutrinos. If this oscillation is seen, a second phase of the experiment (BooNE) will aim for a precise measurement of the transformation rate. A second Fermilab experiment, called the Main Injector Neutrino Oscillation Search (MINOS), uses a pair of detectors, one at Fermilab itself and one 450 miles away in a mine in Minnesota, to look for transformations of mu neutrinos into tau neutrinos. Taken together, Mini-BooNE and MINOS should give a very clear picture of neutrino oscillations. One goal of these investigations is to clarify the question of neutrino masses. The transformation rate of one neutrino flavor into another depends directly on the difference of the squares of the two masses. As a result, the oscillation experiments won’t be able to determine the actual masses, only the pattern of mass differences. Of course, any nonzero neutrino mass takes us beyond the Standard Model.
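
In the simplest case, mixing between just two neutrino flavors, the transformation probability takes a standard form that makes the mass dependence explicit (a textbook formula, not derived in this chapter); here θ is the mixing angle, L the distance traveled, and E the neutrino energy:

```latex
P(\nu_\mu \to \nu_e)
  = \sin^2(2\theta)\,
    \sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),
\qquad \Delta m^2 = m_2^2 - m_1^2
```

Only the combination Δm² appears, which is why oscillation experiments can pin down mass differences but never the masses themselves.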

An even more intriguing possibility is suggested by the results of an experiment called the Liquid Scintillation Neutrino Detector (LSND) that finished a six-year run in 1998. Like the experiments that looked at neutrinos coming from the sun, LSND also provided evidence of neutrino oscillations. However, the rates of oscillation measured in the various experiments are very hard to reconcile unless one postulates the existence of a fourth type of neutrino. Now, we saw in the previous chapter that the decay rate of the Z° constrains us to exactly three light neutrinos. This conclusion can be avoided if the fourth neutrino doesn’t interact at all with the Z°. The symmetries of the Standard Model (extended to include the extra neutrino) then forbid the new neutrino from interacting with any of the other particles except the Higgs. For this reason, these hypothetical neutrinos are called sterile. If this fourth neutrino exists, it must be of a very different ilk from the three neutrinos we already know of. It can’t be a member of a fourth family like the other families in the Standard Model. It would be a complete loner, a particle with no relatives and almost no interactions. If it could be proven that a sterile neutrino exists, a whole new chapter in elementary particle physics would begin.

The Scum of the Universe

I have been calling the Standard Model the Theory of Everything Except Gravity. However, a collection of surprising observations, which I have been ignoring until now, indicates that the Standard Model is not even close to being the whole story. About 85 percent of the matter in the universe cannot be accounted for by the particles of the Standard Model. It is astronomy once again, rather than accelerator physics, that forces us to this astonishing conclusion. This result comes not just from a single observation but from a variety of different astronomical observations, providing the sort of converging lines of evidence that are crucial to acceptance of an idea as a scientific fact.

The stars in a galaxy orbit the galactic center just as the planets orbit the sun in our solar system. Newton’s inverse-square law of gravity (or general relativity—the theories are nearly identical on galactic scales) predicts that the speed of a star will fall off as you move away from the galactic center. Actual observations tell a different story. For many galaxies, the speed of stars we observe is nearly constant, instead of falling off as expected. The discrepancy disappears if we assume there is more mass in the galaxy than we can see. Astronomers call the mysterious extra mass dark matter; we don’t see it, but we can detect its presence by its gravitational effects. The amount of dark matter, deduced from the observed star speeds, far exceeds the amount of visible matter—stars, planets, gas, and dust—in the galaxy. Other observations support this astonishing conclusion. Measurements of the relative speed of pairs of galaxies, of speeds in galactic clusters, and of gravitational lensing (the bending of light as it passes by a galaxy) combine to fix the amount of excess mass at about six or seven times the amount of normal matter.
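
The reasoning fits in one formula. For a star on a circular orbit of radius r about the galactic center, Newtonian gravity gives (a standard result, stated here as a supplement to the text):

```latex
v(r) = \sqrt{\frac{G\,M(r)}{r}}
```

where M(r) is the mass enclosed within the orbit. If essentially all the mass sat in the bright central region, v would fall off as 1/√r; a flat rotation curve instead forces M(r) to keep growing in proportion to r, far beyond the visible stars. That ever-growing, invisible mass is the dark matter.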

What makes this discovery so shocking is the fact that most of the dark matter must be something other than the protons, neutrons, and electrons that make up ordinary matter. That is, the dark matter cannot be dead, dark stars, planets too small to detect, or loose gas and dust. Dark matter has never been directly detected: How, then, can we know that it’s not normal matter? The answer comes from the detailed picture of the early universe that the big bang model and the Standard Model together provide. The production of the light elements, deuterium, helium, and lithium, in the first few minutes after the big bang is very sensitive to how many protons and neutrons are around. If the dark matter were normal matter, there would have been seven times as many protons and neutrons around during the first moments of the big bang. But then the light elements would have been produced in much greater quantities than we observe. Since protons, neutrons, and electrons are the only stable massive particles in the Standard Model, dark matter takes us beyond. We are forced to conclude that the bulk of the matter in the universe is something mysterious, something not included in the Standard Model. All of the galaxies we see, with all of their stars, planets, and dust clouds, are only a sort of scum on the fringes of enormous, invisible clouds of dark matter. Or, as physicist Pierre Ramond puts it more poetically, “we should think of luminous matter as the foam that rides the crest of waves of the dark matter in the cosmic ocean.”1

What might this mysterious matter be? Observations indicate that our own galaxy is full of dark matter, too. We are presumably surrounded by it at all times, yet we have never detected it except by observations of distant galaxies. It must therefore be something that interacts only very minimally with ordinary matter. It doesn’t radiate light or scatter it, so it must have no electric charge.

One possibility immediately springs to mind: massive neutrinos. They are, of course, neutral, and they interact only weakly with other matter. The universe is presumably full of neutrinos left over from the big bang. That, at least, is what the combination of the Standard Model and the big bang model predicts. Neutrinos have a big disadvantage as dark matter candidates, though. Their mass, if not zero, is very small. This means that in the early universe they were whizzing around at speeds near the speed of light (for which reason they are considered “hot” dark matter), a condition that seems to interfere with galaxy formation, according to computer simulations. We need to look elsewhere for dark matter.

Hypothetical dark matter particles that are much more massive than the proton are known as weakly interacting massive particles, or WIMPs, for short. In some supersymmetric theories, which we will learn about in the next chapter, there is a stable, neutral particle called the neutralino—a perfect WIMP candidate. Supersymmetry is not the only source of new dark matter candidates, however; a simple extension of the Standard Model provides another. Spontaneous symmetry breaking happens when the Higgs field rolls off the hump in the Mexican hat potential, picking out a particular field “direction,” just as a pencil falls in a certain direction when it falls over.

[Figure: the Mexican hat potential, with an arrow marking the direction in which the Higgs field (like the fallen pencil) points.]

Let’s indicate the final position of the Higgs field (or the pencil) by an arrow, as in the preceding figure. Now, remember that there is a potential like this at every point in space. There is the possibility, then, that the Higgs field will choose a different direction at different points in space. Instead of trying to picture how this happens in our universe, let’s simplify things. Think of a circle with a lot of pencils balanced on end on top of it. They might all fall in the same direction, or they might fall in different directions. The Higgs field, though, must change slowly from one point in space to a nearby point, so let’s require that the pencils’ directions change smoothly, too. For instance, they might all fall outward, or they might form a pattern like the one on the right:

[Figure: pencils around a circle, all falling outward (left), and falling in a pattern whose direction turns twice as you go around (right).]

Think of the arrow’s (or the pencil’s) direction as the hand of a clock. How many complete turns does the hand make as we travel once around the big circle? If the pencils all fell in the same direction (say, they all fell to the right), the answer is zero; for the pattern on the left in the illustration (falling outward), the answer is one; and for the pattern on the right, the answer is two. All of the possible smooth patterns can be classified according to how many turns the hand makes. Each pattern represents a different possibility for the lowest energy state, called the vacuum state, of the Higgs field on the circle.
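
For readers who like to see the bookkeeping, here is a minimal sketch (my own illustration, not anything from the book) that counts the turns. It follows the arrow’s direction from point to point around the loop, keeps each step small enough to be unambiguous, and adds up the total rotation:

```python
import math

def winding_number(angles):
    """Count the net turns an arrow makes as we walk once around the circle.

    `angles` lists the arrow's direction (in radians) at successive points
    around the loop; the last point connects back to the first.
    """
    total = 0.0
    n = len(angles)
    for i in range(n):
        step = angles[(i + 1) % n] - angles[i]
        # Wrap each step into (-pi, pi] so we follow the arrow continuously.
        step = (step + math.pi) % (2 * math.pi) - math.pi
        total += step
    return round(total / (2 * math.pi))

n = 100
spots = [2 * math.pi * i / n for i in range(n)]   # positions on the circle
print(winding_number([0.0] * n))                  # all fall the same way: 0
print(winding_number(spots))                      # all fall outward: 1
print(winding_number([2 * s for s in spots]))     # pattern on the right: 2
```

Because the total must come out a whole number of turns, no gradual rearrangement of the pencils can change it, which is why the classification is robust.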

In our four-dimensional universe, the picture is harder to draw, and the Higgs field of the Standard Model is more complicated than the Mexican hat picture. The result turns out exactly the same, though: The possible vacuum states of the Higgs field can be classified according to how many times the field “wraps around the universe.” Is there any way to find out the actual vacuum state of our universe? There is, and we don’t need to circumnavigate the universe to find the answer. It turns out that any vacuum state other than the simplest no-wrap, all-fall-the-same-direction solution violates a symmetry known as CP symmetry. CP symmetry involves the combination of mirror symmetry, or parity (that’s the P), and an exchange of particles and antiparticles (that’s the C). Experimentally, CP is only violated by a small amount. So our vacuum must be very nearly the no-wrap vacuum.

Out of an infinite number of possible vacua, why are we so close to the no-wrap one? That it “just happens” that way is as unlikely as hitting the bull’s eye when throwing a dart blindfolded. However, it is easy to make a small modification to the Standard Model by adding a piece to the Lagrangian that gives all the wraparound vacua higher energy than the no-wrap vacuum.

[Figure: the modified potential, with the no-wrap vacuum sitting at the bottom of a trough and the wraparound vacua at higher energy.]

The diagram shows that the new term creates a “trough.” From field theory we know that whenever we have an energy trough like this, the field can oscillate about the bottom of the trough. Field oscillations are just another way of describing particles, so this modification of the Standard Model implies that a new type of particle can exist. This (so far hypothetical) particle is given the name axion. Axions have the right properties to be dark matter: zero electric charge and minimal interactions with other matter. Unfortunately, the theory gives few hints about the axion’s mass. If these particles are abundant in the galaxy, they should have noticeable effects on star formation and the properties of supernova explosions. So far, all attempts to detect those effects have failed.

It is, of course, immensely embarrassing for physicists to admit that we have no idea what 85 percent of the matter in the universe is made of. At the same time, it is terribly exciting. Whether dark matter turns out to consist of neutralinos, axions, or something that has not yet been thought of, or whether the solution requires even more radical changes to basic physics, dark matter will certainly be a crucial part of physics beyond the Standard Model.

Over to the Dark Side

Just in the past few years, astronomers have made a new discovery that dwarfs even the dark matter surprise. The story begins in 1929, when Edwin Hubble discovered the expansion of the universe: Distant galaxies are all moving away from us, and the farther they are, the faster they are receding. Now, it is wildly unlikely that our galaxy just happens to be in the center of some vast, cosmic pattern. There must be a reason for this galactic flight. Do we have a case of cosmic BO? Or was it something we said? The real answer lies in Einstein’s theory of gravity, general relativity.

General relativity posits a connection between the curvature of spacetime and the energy, including the mass-energy (E = mc2), in the universe. Simply put, the equations say that Curvature = Energy. Einstein invented these equations to describe, for instance, planets orbiting the sun, or the gravitational attraction between galaxies. He soon began to wonder, though, whether the same equations could describe the spacetime structure of the whole universe. At this point, he made what he would later call his greatest blunder. Hubble had not yet discovered the flight of the galaxies, and so Einstein looked for a static solution to his equations, a solution that would describe an eternal, unchanging universe. In order to find such a solution, Einstein modified his original equation by adding a new term, which came to be known as the cosmological constant. This was a sort of fudge factor, unrelated to any of the known properties of gravity, which allowed a static solution to the (modified) equations of general relativity, a solution describing a universe that neither grew nor shrank but was balanced just on the edge. The cosmological constant represented a new kind of energy, inherent in space itself, evenly spread throughout the whole universe.

With Hubble’s discovery, Einstein realized his mistake. The galactic recession Hubble described had a natural interpretation in Einstein’s theory. It wasn’t that all the galaxies were running away; it was that the space between the galaxies was expanding. To picture this expansion, take a partially inflated balloon and draw some dots on it with a marker. If you now continue to blow up the balloon, you will see that the dots move farther apart. Now think of the dots as galaxies. No matter which dot you are living on, you see all the other dots moving away from you. Moreover, the farther away a dot is from your dot, the faster it moves. Space, in Einstein’s theory, is like the rubber of the balloon: It can stretch. If all space is stretching all the time, then more distant galaxies will recede faster than closer objects, just as Hubble found.
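
The balloon picture can be stated in one line. If every cosmic distance is multiplied by the same stretch factor a(t), then a galaxy at distance d recedes at a speed proportional to d (a standard textbook step, not spelled out in the text):

```latex
d(t) = a(t)\,d_0
\quad\Longrightarrow\quad
v = \dot{d} = \frac{\dot{a}}{a}\,d \equiv H\,d
```

The proportionality of speed to distance is exactly the pattern Hubble measured; the factor H is now called the Hubble constant.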

The discovery of this cosmic expansion made Einstein’s cosmological constant unnecessary. Physicists building mathematical models of the universe tended to ignore the cosmological constant. The original general relativity equations were perfectly capable of describing an expanding universe that started with a big bang. As we have seen, these models made testable predictions about the universe (for instance, the relative amounts of hydrogen and helium in the universe) that were in excellent agreement with observations. These physicists found that in all the models without a cosmological constant, an expanding universe would slow down. The universe might go on expanding forever at an ever-slowing rate, or the expansion might slow down enough that the universe began to recollapse. Astronomers began trying to measure the expansion rate more accurately to determine by how much the expansion was slowing. On the answer hinged the fate of the universe. An expansion that went on forever meant a universe that grew colder and colder as galaxies moved farther and farther apart. Stars would eventually run out of nuclear fuel, lose their heat, and stop shining. The lights would go out, and the universe would settle into an eternal, expanding night. On the other hand, a universe that recollapsed would begin to heat up again as galaxies were compressed closer and closer together. The temperature would rise, the density would rise, and the collapse of spacetime would accelerate until the universe disappeared in a big crunch, the opposite of a big bang. What was to be the ultimate fate of our universe, fire or ice?

It was only in 1998 that a new generation of telescopes (among them the aptly named Hubble Space Telescope) became accurate enough to answer the question. By looking at supernovas in distant galaxies, astronomers were at last able to measure how the expansion rate was changing over time. The result was a complete surprise: Far from slowing down, the expansion was accelerating. To explain the accelerating expansion, astrophysicists returned to Einstein’s “blunder.” The cosmological constant supplies a uniform energy density at all points in space. This energy has no interactions with matter (other than gravitational), so it can’t be seen; it is dark energy. Because of the way it enters into Einstein’s equations, this dark energy counteracts the effect of normal mass and energy, causing the universal expansion to accelerate.

Whence does the dark energy arise? In Einstein’s original model, it was simply another constant of nature, which, like the speed of light or Planck’s constant, had to be determined by experiment. Particle physicists had a different suggestion. Recall Schwinger’s description of a quantum field as a collection of harmonic oscillators, one at each point in space. Empty space, the vacuum, as physicists call it, is the state in which every oscillator is in its lowest energy state. We know, however, that a harmonic oscillator has some energy even in its lowest energy state. This vacuum energy exists at every point in space, and it has exactly the right properties for dark energy. That is, any relativistic quantum field theory predicts that empty space will be filled with dark energy.
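
In symbols: a harmonic oscillator of frequency ω has ground-state energy ħω/2, so the energy of the vacuum is this amount summed over every mode of the field (a textbook expression, supplied here as a supplement):

```latex
E_0 = \sum_{\mathbf{k}} \tfrac{1}{2}\,\hbar\,\omega_{\mathbf{k}},
\qquad \omega_{\mathbf{k}} = c\,|\mathbf{k}| \ \ \text{(for a massless field)}
```

Because there are modes of arbitrarily short wavelength, the sum grows without bound, which is precisely the embarrassment taken up next.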

It seems, then, that relativistic quantum field theory solves the mystery of dark energy. However, a problem arises when you try to calculate how much vacuum energy there is. Embarrassingly, relativistic quantum field theory predicts an infinite amount of dark energy. As it has no effect on particle interactions, the dark energy had always been ignored by particle physicists. After all, they were used to subtracting infinities as part of the renormalization technique. In the context of cosmology, though, the dark energy rises to critical importance. Is there a way to avoid the embarrassment of infinite energy and explain the dark energy at the same time?

Unfortunately, the answer seems to be no. We can get a finite value for the dark energy if we assume that new physics takes over at some energy scale, the Planck energy, for instance. This procedure results in a value for dark energy that is far too large, however. According to the supernova observations that implied its existence, the actual amount of dark energy contained in a thimbleful of empty space is equivalent to the mass of a few electrons. According to relativistic quantum field theory, though, the amount of dark energy in that same thimble of empty space is equivalent to a mass greater than that of all the known galaxies. Clearly, something is wrong with this picture. Why is the dark energy so nearly zero, but not exactly zero? The dark energy puzzle is one of the greatest unsolved problems of physics today.
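
The size of the mismatch can be made concrete. Cutting the mode sum off at the Planck energy, dimensional analysis gives a vacuum energy density of roughly (an order-of-magnitude estimate using standard values, not a calculation from the book):

```latex
\rho_{\text{vac}} \sim \frac{E_{\text{Planck}}^{4}}{(\hbar c)^{3}} \approx 10^{113}\ \mathrm{J/m^{3}},
\qquad
\rho_{\text{observed}} \approx 10^{-9}\ \mathrm{J/m^{3}}
```

a discrepancy of some 120 orders of magnitude, sometimes called the worst prediction in the history of physics.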

The Muon: In Need of a Spin Doctor?

One of the great early successes of relativistic quantum field theory was the calculation of the electron’s magnetic strength, what physicists call the magnetic moment. As we saw in Chapter 6, it is a measure of how fast the electron’s spin axis will precess in a magnetic field. According to the Dirac equation, the magnetic moment should be precisely 2, but the cloud of virtual particles surrounding the electron alters the prediction slightly. The experimental measurement agrees with the predicted value to an astonishing one part in a billion.
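
The quantity actually compared is the small excess over Dirac’s value of 2, conventionally written as the anomalous magnetic moment. Its leading piece, due to a single virtual photon, was first calculated by Schwinger (a standard QED result, though the formula does not appear in the text):

```latex
a = \frac{g - 2}{2} = \frac{\alpha}{2\pi} + \cdots \approx 0.00116
```

where α ≈ 1/137 is the fine-structure constant and the dots stand for ever more elaborate Feynman diagrams.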

Turning to the electron’s heavier brother, the muon, things get considerably more interesting. The magnetic moment of the muon is evaluated in the same manner as for the electron: the basic Dirac equation prediction is still 2, and the cloud of virtual particles makes a similar small change that is calculated via Feynman diagrams. Out to seven decimal places, the experimental and theoretical values are in agreement. However, in 2002, experimenters working at Brookhaven National Laboratory reported a new measurement, accurate to 10 decimal places. This result differs from the theoretical prediction only slightly, but the difference is more than twice the combined uncertainty in the experimental and theoretical values. This makes it unlikely, though not impossible, that the difference is due to chance.

In calculating the effects of the cloud of virtual particles surrounding the muon, we need to include not just the effects of virtual photons and virtual electron-positron pairs, but also virtual quarks, virtual Higgs particles, and, in fact, all the particles of the Standard Model. The same is true of the corresponding calculation for the electron. How is it possible, then, that the predicted electron magnetic moment is so accurate, while the muon prediction is slightly off? Here’s one possibility: Suppose that in the real world there are some heavy particles that are not included in the Standard Model. These new particles would show up as virtual particles in the clouds surrounding both the electron and muon. It turns out, though, that because of the larger muon mass, any such heavy particles would affect the muon magnetic moment more than the electron magnetic moment. In other words, the small discrepancy in the muon’s magnetic moment reported by the Brookhaven team could be evidence of new physics beyond the Standard Model. New experiments planned for Brookhaven and the Japan Proton Accelerator Research Complex (J-PARC) may give results 10 times more precise than current values.
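
The muon’s advantage can be put in a rough formula. A heavy virtual particle of mass M typically shifts a lepton’s anomalous moment in proportion to the square of the lepton’s own mass (a standard scaling argument, not spelled out in the book):

```latex
\Delta a_{\ell} \sim \left(\frac{m_{\ell}}{M}\right)^{2}
\quad\Longrightarrow\quad
\frac{\Delta a_{\mu}}{\Delta a_{e}} \sim \left(\frac{m_{\mu}}{m_{e}}\right)^{2} \approx 4\times 10^{4}
```

A new particle far too heavy to disturb the electron measurably could therefore still leave a visible fingerprint on the muon.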

Glueballs, Pentaquarks, and All That

It is much more difficult to test QCD than it is to test other parts of the Standard Model because QCD deals with quarks and gluons, particles that always appear in combinations of two or more. It’s simply impossible, as far as we know, to produce a beam of single quarks the way we produce a beam of electrons or neutrinos. Still, as we’ve seen, it’s possible to test QCD in high-energy scattering experiments where the quarks behave like nearly free particles.

In principle, QCD should also be able to describe the characteristics of bound states: how three quarks interact to form a proton, for example, and why it’s impossible to remove and isolate a single quark. In a general sense, physicists think they know why these things happen, namely, that the long-distance behavior of the color force is just the opposite of its short-distance, high-energy behavior: instead of growing weaker, the force between quarks grows ever stronger as they are pulled apart. Turning that qualitative picture into precise numbers, however, has proven extraordinarily difficult.

In the face of these difficulties, some physicists have turned to computers, rather than pure mathematical analysis, to provide testable predictions. Even with computers, the problem at first seems intractable. Remember that we have to add together all of the possible paths that each particle can take—an infinity of paths, in fact. To make the problem tractable, physicists make a rather drastic simplification: They model spacetime as a finite grid, or lattice, of points rather than a continuum. The quarks and gluons are only allowed to move from one lattice point to another. This approximation, together with a host of calculational techniques for implementing it, goes by the name lattice QCD.
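
To give a flavor of what such a calculation looks like, here is a toy sketch in Python: a two-dimensional lattice with a single angle on each link, the simplest (U(1)) cousin of the gluon field, updated by the standard Metropolis method. Every choice in it (the lattice size, the coupling `beta`, the plaquette observable) is a drastically simplified stand-in for the real four-dimensional calculations, an illustration of the idea rather than of any production code:

```python
import math, random

random.seed(1)
L, beta, sweeps = 8, 2.0, 200   # lattice size, coupling, Metropolis sweeps
# One angle per link: theta[x][y][mu], mu = 0 (x-direction) or 1 (y-direction).
theta = [[[0.0, 0.0] for _ in range(L)] for _ in range(L)]

def plaquette(x, y):
    """Total angle around the unit square at (x, y); periodic boundaries."""
    xp, yp = (x + 1) % L, (y + 1) % L
    return (theta[x][y][0] + theta[xp][y][1]
            - theta[x][yp][0] - theta[x][y][1])

def local_action(x, y, mu):
    """Sum of (1 - cos P) over the two plaquettes containing link (x, y, mu)."""
    if mu == 0:
        plaqs = [plaquette(x, y), plaquette(x, (y - 1) % L)]
    else:
        plaqs = [plaquette(x, y), plaquette((x - 1) % L, y)]
    return sum(1.0 - math.cos(p) for p in plaqs)

for _ in range(sweeps):
    for x in range(L):
        for y in range(L):
            for mu in (0, 1):
                old, s_old = theta[x][y][mu], local_action(x, y, mu)
                theta[x][y][mu] = old + random.uniform(-0.5, 0.5)
                # Metropolis rule: always keep a lower action, sometimes a higher one.
                if random.random() > math.exp(-beta * (local_action(x, y, mu) - s_old)):
                    theta[x][y][mu] = old   # reject the trial change

avg = sum(math.cos(plaquette(x, y)) for x in range(L) for y in range(L)) / L**2
print(f"average plaquette: {avg:.3f}")   # rises toward 1 as beta grows
```

Real lattice QCD replaces each angle with a 3-by-3 color matrix, works in four dimensions, and adds the quark fields, which is why the realistic version devours supercomputer time, as described next.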

Even after such a simplification, the calculations require immense computing power, and lattice QCD physicists have for many years been some of the primary customers for the fastest supercomputers around. In recent years, some lattice QCD groups have even designed and built their own special-purpose computers, like the GF11, which uses 566 processors running simultaneously. Designed by Don Weingarten of IBM, the GF11 ran continuously for a year and a half to calculate the masses of a handful of the lightest mesons and baryons. The results agreed with the measured masses to within 5 percent. Obviously, we are far from the realm of one part in a billion accuracy. Five percent, though, was better accuracy than anyone had achieved for such a calculation before.

Calculations like Weingarten’s are retrodictions: explanations after the fact of properties that are already known from experiments. More impressive would be an actual prediction of a previously unsuspected particle. All of the particles of the famed Eightfold Way turned out to be built out of either two or three quarks. Is it possible to build a particle out of four quarks, or five, or more? According to some theoretical considerations, the answer is yes. A pentaquark, for instance, might be built from two up quarks, two down quarks, and an antistrange quark. In 2003, Japanese experimenters at the SPring-8 accelerator near Osaka reported finding a bump in their data that could be interpreted as evidence of a pentaquark. Since then, nine other experiments have reported similar results. Still, researchers are not sure whether these experiments have actually succeeded in producing pentaquarks. More recent experiments, some of them with much better statistics, have failed to see any evidence for the pentaquark. There is disagreement, too, even among the experiments reporting positive results, about the mass and lifetime of the particle.

On the theoretical side, the situation is similarly fuzzy. Turning to pure QCD is hopeless: No one knows how to derive the properties of a proton, much less of a pentaquark. The original calculations that predicted the pentaquark used an approximate theory that captures some aspects of QCD. Later, lattice QCD calculations lent support to the prediction. Now, however, several different lattice QCD groups have carried out the calculation and predict different properties for the pentaquark, or no pentaquark at all.

An experiment at the Thomas Jefferson National Accelerator Facility (JLab) reported its results in April 2005. Because an earlier JLab experiment with a similar design had reported positive results, the new run was considered a crucial test of the pentaquark idea. After observing many more collisions than the earlier JLab experiment, the new run turned up no evidence for the pentaquark. Yet another JLab experiment is expected to report results sometime in 2005. If this search, too, turns up nothing, it may signal the demise of the pentaquark. Then theorists will have to address a different question: Why don’t quarks combine in numbers larger than three?

Another exotic possibility is a particle called a glueball, built entirely from gluons. A glueball has no real quarks, though of course it has the virtual quark-antiquark pairs that are always present in QCD. The existence of glueballs cannot be derived directly from QCD; the complexity of the color force gets in the way, just as for particles made of quarks. As with pentaquarks, glueballs were first predicted using approximate versions of QCD, and lattice QCD calculations later confirmed the idea. With glueballs, though, the experimental situation is much less controversial. An experiment at Brookhaven National Laboratory way back in 1982 was the first to report evidence of a possible glueball. More recent experiments, like the L3 experiment at CERN and the ZEUS experiment at Germany’s DESY accelerator, have confirmed the existence of the particle found at Brookhaven and have added several other glueball candidates. These particles certainly exist; however, no one can say for sure whether they are glueballs. On the one hand, there are the usual uncertainties about the reliability of approximate methods and lattice QCD for the real world. On the other hand, it is not so easy to distinguish a glueball state from a bound state of a quark and an antiquark. A better theoretical understanding is needed to interpret these intriguing new particles.

An experiment now running at Cornell University called CLEO-c is examining the decay of the J/psi particle for evidence of glueballs. Experimenters hope to produce a billion J/psi particles from which a few thousand glueball candidates will be culled. This should be enough to nail down the rate at which the purported glueballs decay, and into which particles.

The discovery of glueballs would have profound implications for our understanding of QCD. Quarks were originally invented by Gell-Mann and Zweig in order to explain the patterns of spin and charge seen in subatomic particles. Later, the apparatus of gluons and the color force was added to explain why the quarks didn’t just fly apart. As we have seen, there are good reasons to believe that quarks exist—the parton behavior in high-energy scattering experiments, for instance. There is no such direct evidence for gluons, however. There is, for example, no way to scatter off of a virtual gluon inside a proton. Glueballs, if they can be proven to exist, will provide the first direct evidence of the existence of gluons.

In the Beginning, There Was Soup

Imagine running the universe backward in time. With the expansion of the universe in reverse, the temperature and density rise as we get closer and closer to the big bang. By about a tenth of a second after the big bang, the temperature everywhere in the universe is so high that atoms can no longer exist; they are torn apart into individual protons, neutrons, and electrons. Back up a little more to around 10 microseconds after the big bang and the neutrons and protons are shoulder-to-shoulder throughout the universe, together with a host of other particles that have been created by pair production from the extremely high-energy photons that now populate the universe. Each neutron or proton, we know, is like a little bag with three quarks inside. Go back in time a bit further and the boundaries between the neutrons and protons begin to disappear, the “bags” merging like droplets of mercury running together. This point is known as the quark-hadron transition. With the boundaries between the neutrons and protons gone, the universe is now filled with a thick soup of quarks and gluons, which physicists call the quark-gluon plasma. Understanding this state of matter is crucial to understanding the first microseconds of the universe’s existence. During the past decade, physicists have been trying to re-create this state in the laboratory.
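
For the radiation-dominated early universe there is a standard order-of-magnitude rule (quoted here as a supplement, not taken from the book) linking temperature to time:

```latex
T(t) \sim 10^{10}\,\mathrm{K} \times \sqrt{\frac{1\,\mathrm{s}}{t}}
```

which gives roughly 3 × 10¹² K at ten microseconds, just the temperature range at which the quark-hadron transition occurs.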

The basic technique is to strip all of the electrons off a heavy atom, such as gold, accelerate it to high energy, and collide it with another heavy atom. This is like smashing two drops of water together so hard that they vaporize. Thousands of particles condense out of the mist and go flying off in all directions. Beginning with several experiments at CERN in the mid-1990s, and continuing with the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory in the past five years, experimenters have sought indications of the expected quark-gluon plasma. To date, these experiments have had some remarkable successes: they have achieved the extremely high temperature and density at which the quark-gluon plasma is expected, and they have found strong indications that matter behaves very differently under these conditions than in the usual single-particle collisions. All is not as expected, however, so the experimenters have been reluctant to say for certain that they have produced a quark-gluon plasma.

To mention just one predicted effect of the quark-gluon plasma, there should be fewer J/psi particles produced than in a comparable two-particle collision. The J/psi is a bound state of a charm quark and an anticharm quark. This quark-antiquark pair is created by pair production. Normally, the color force takes over at this point, binding the pair together fleetingly until one or both of the quarks decays. If the charm-anticharm pair is produced in the midst of a quark-gluon plasma, however, both particles will immediately attract a swarm of particles from the plasma. Just as a swarm of moths partially blocks the lamplight that attracts them, the swarm of plasma particles partially blocks the color force between the two quarks. As a result, the quarks are less likely to bind together into a J/psi particle.
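
The moth analogy corresponds to what physicists call color screening, and a common way to make it quantitative (this Yukawa-like form is a standard sketch, not a result derived in this chapter) is to cut off the attraction between the charm quarks beyond a screening length λ:

```latex
V(r) \approx -\frac{\alpha_{s}}{r}\, e^{-r/\lambda(T)}
```

where λ shrinks as the plasma temperature T rises. Once λ is smaller than the size of a would-be J/psi, the attraction is blocked before it can bind the pair, and fewer J/psi particles emerge.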

As with pentaquarks and glueballs, the difficulty of calculating in QCD makes the interpretation of these experiments tricky. Suppression of the J/psi, for example, could be caused by something other than the plasma state. Some theorists expect a state different from the plasma state to form. One suggestion is that quarks will remain tightly bound to each other by the color force, even at the high temperatures reached in these collisions, and as a result will behave like “sticky molasses.”2 Research continues at RHIC, and a new experimental stage will begin in 2007 when the new CERN accelerator, the Large Hadron Collider (LHC), starts operation. The LHC will accelerate and collide beams of heavy nuclei, such as lead, reaching an energy density perhaps three times higher than RHIC’s. A new detector named ALICE is being built to get a better look at the indications of quark-gluon plasma that have been seen in previous experiments. ALICE will also be able to compare production of particles like the J/psi with production of the Z°, which is not expected to be suppressed since the Z° doesn’t interact via the color force. These studies will give a much clearer picture of the properties of matter at the kind of temperature and density that existed in the first microseconds of the universe’s existence.

Pentaquarks, glueballs, and the quark-gluon plasma have driven home an important point about the Standard Model: The problem of bound states of quarks and gluons is by far the least understood part of the theory. At the root of the matter is the issue of color confinement. We claimed, back in Chapter 8, that particles in nature only appear in color-neutral combinations. This claim, if it is true, explains why free quarks have never been detected and why gluons don’t create a long-range force, as would be expected of a massless particle. The problem with this explanation is that it can’t be derived from the theory of QCD itself—it must be imposed as an extra condition. The importance of color confinement has been recognized by the Clay Mathematics Institute of Cambridge, Massachusetts, which has offered a reward of $1 million for the solution of a related mathematical problem, the so-called “mass gap problem” of Yang-Mills theories. This was chosen as one of the seven Millennium Problems in mathematics, announced in Paris on May 24, 2000. Often, when testing the Standard Model, it is a case of theoretical predictions, sometimes decades old, waiting for new accelerators capable of performing the necessary tests. QCD is the exception to the rule: There are now exciting new experimental results just waiting for a better theoretical understanding.

Looking for the Higgs

As far as the Standard Model is concerned, the Higgs particle is the holy grail of high-energy experiments. It gives mass to the other particles and is the linchpin of spontaneous symmetry breaking. The Standard Model doesn’t predict the Higgs mass directly, as it does the W and Z° masses. This has left experimenters to conduct a painstaking search at ever-higher energy in the hope of stumbling across it. In recent years, though, the situation has improved.

Earlier in this chapter, we saw that some measurable quantities, like the muon’s magnetic moment, are affected by the presence of virtual particles, including, of course, virtual Higgs particles. By examining the processes that are most influenced by the virtual Higgs particles, we might expect to learn something about the Higgs mass. The procedure is complicated and the results uncertain, since there is still a lot of uncertainty in the values of many of the Standard Model parameters that enter the calculations. In particular, before the discovery of the top quark in 1995, there was too much uncertainty in its mass for any useful prediction of the Higgs mass to be made. In recent years, the top quark mass has been determined much more precisely; precisely enough that the effects of virtual Higgs particles can now be estimated. Careful measurements of the masses and decay rates of the W and Z° have led to a “best fit” Higgs mass of about 130 billion electron-volts. The uncertainties are still large, but there is a 95 percent chance that the true mass is less than 280 billion electron-volts. CERN’s LEP accelerator came tantalizingly close to the best fit value in 2000, before being shut down for the upgrade to LHC. Before the shutdown, the ALEPH experiment detected three events that could be interpreted as Higgs events. The experimenters petitioned for, and received, a one-month delay in the shutdown in order to look for more such events. They saw none, however, and today most physicists believe that the three candidate events were simply flukes. Certainly, they were insufficient to claim discovery of the Higgs.

As of 2005, Fermilab’s Tevatron accelerator is the only machine with sufficient energy to have a chance of finding the Higgs. The Tevatron collides protons and antiprotons, reaching higher energies than LEP was capable of. The proton-antiproton collisions are, however, messier than LEP’s electron-positron collisions, and it will take until about 2009 for experimenters to collect enough data to rule out, or rule in, a Higgs particle with mass up to 180 billion electron-volts. By that time, the LHC should be taking data, if all goes according to plan.

Experimenters can be confident that, if the Higgs behaves as the Standard Model predicts, there will be none of the uncertainty over its identity that plagues the pentaquark and glueball candidates. The Higgs must be a neutral, spin-zero particle with very specific, calculable interactions with the other particles. Experiments at LHC will try to measure as many of these characteristics as they can, but further confirmation may only come from the planned International Linear Collider (ILC).

Of course, the true symmetry-breaking mechanism that occurs in nature might look very different from the Standard Model Higgs particle. After all, the Standard Model Higgs was the bare minimum that needed to be added to the known particles in order to make spontaneous symmetry breaking work. Given all the awkwardness of the Standard Model, most physicists would probably be surprised, perhaps even appalled, if the Standard Model’s Higgs turned out to be correct in every detail. The next chapter will reveal a plethora of alternative theories waiting in the wings, ready to step into the leading role when the expected misstep comes, each with its own predictions about what particle or particles should appear in place of the Higgs. Experimenters are acutely aware of these alternative possibilities, and must design their detectors accordingly. It is an immensely complex undertaking, but the reward is proportionate to the difficulty: Higgs physics may well reveal a deeper level of physical reality than any we have known so far.