The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics - Robert Oerter (2006)

Chapter 12. New Dimensions

What did we hope for, heating and reheating ourselves to absurd temperatures? As matter heats up it is subject to demonstrable change. Boiling in our vessel, our water molecules would begin to break down, stripping us back to elemental hydrogen and oxygen gases. Would this help us to see ourselves as we really are?

Heated further, our atomic structure would be ripped apart. He and she as plasma again, the most common state of matter in the universe. Would this bring us closer together?

At about a billion degrees K, give or take a furnace or two, he and she might begin to counterfeit the interior of a neutron star and could rapidly be heated further into sub-atomic particles. You be a quark and I’ll be a lepton.

If we had the courage to cook ourselves to a quadrillion degrees, the splitting, the dividing, the ripping, the hurting, will be over. At this temperature, the weak force and the electromagnetic force are united. A little hotter, and the electroweak and the strong force move together as GUT symmetries appear.

And at last? When gravity and GUTs unite? Listen: one plays the lute and another the harp. The strings are vibrating and from the music of the spheres a perfect universe is formed. Lover and beloved pass into one another identified by sound.

—Jeanette Winterson, GUT Symmetries

For many years after the development of the Standard Model, all the experimental results pointed in one direction. From the W and Z⁰ masses, to the parton picture of the proton, to the details of particle interactions and decay rates, everything seemed to indicate that the Standard Model gave an accurate picture of particle interactions. Theorists didn’t wait around for experiments like those discussed in the previous chapter to reveal flaws in the Standard Model, though; they had reasons enough to suspect that the Standard Model wasn’t the whole story. Some of these reasons were aesthetic: Why should a theory require 18 parameters, for example? Other reasons had to do with the shape of the periodic table of quarks and leptons. Why are there three families? Why are all the neutrinos left-handed? Is there any rhyme or reason to the masses of the particles? Why, for goodness’ sake, is the top quark 45,000 times heavier than the up quark? The Standard Model explains none of this. Finally, of course, there is the omission of gravity from the theory. Physicists looking for answers to these questions have pursued two distinct paths. One path is to investigate theories with more symmetry. The other path is to look for deeper structure.

The Aesthetics of Particles, or Diamonds Are Not Forever

One of the ugliest things about the “repulsive” Standard Model is the way parity (mirror symmetry) is violated. As already mentioned, there are left-handed neutrinos but no right-handed ones. Electrons, though, have mass, and massive particles can’t be purely left-handed or purely right-handed. How then to pair the electron and the neutrino, as required by the symmetry of the Standard Model? The problem is solved in a particularly crass manner. The left-handed part of the electron is paired with the neutrino, in an SU(2) doublet. The right-handed part of the electron remains all by itself—a singlet. This makes for an awkward arrangement, as if only one of a set of Siamese twins were married.

Much more aesthetically pleasing would be an underlying theory with left-right symmetry. To explain the left-handedness of the real world, this left-right symmetry must be broken. But we already know a way to make this happen—through spontaneous symmetry breaking. The idea is this: Start with a theory with left-right symmetry and introduce an “ultra-Higgs” particle in such a way that spontaneous symmetry breaking happens twice—first, the underlying left-right symmetric theory breaks down to the SU(3) × SU(2) × U(1) of the Standard Model, and then a second symmetry breaking happens, exactly as in the basic Standard Model. As we will see, theories that start out with left-right symmetry yield a bonus: They explain why neutrinos have a small but nonzero mass.

Another distasteful element of the Standard Model is the way color symmetry is tacked onto the unified electroweak theory. It would be much prettier to have a single underlying symmetry group that somehow contains the entire symmetry group of the Standard Model. Spontaneous symmetry breaking explains why the weak force is so different from the electromagnetic force: the W and Z⁰ have mass, making the force they carry weak and short ranged, and the photon doesn’t, making the electromagnetic force long ranged. Why shouldn’t a similar process also explain the difference between the strong force and the electroweak force? All that is needed is to find the right underlying symmetry and the right pattern of symmetry breaking. Models that work this way are known as grand unified theories, or GUTs, for short.

To see how one symmetry group can contain another symmetry group, consider again the symmetry group of a circle. The circle can be rotated by any amount without changing it. Now consider the symmetry group of the sphere. One symmetry is rotation about the vertical axis. If you compare what happens to the “equator” in this rotation with what happens to the circle, you will see that the two transformations are identical. But the sphere has another symmetry, rotation about the horizontal axis. This is a different symmetry from the first; there is no way to obtain this rotation by any combination of rotations of the first type.

[Figure: rotations of a sphere about the vertical and the horizontal axes, compared with rotation of a circle]

The symmetry group of the sphere is thus a larger group than the symmetry group of the circle, since there are two operations—that is, two distinct types of rotation that leave the sphere unchanged. But this larger group contains the symmetry group of the circle, in the sense that any of the possible rotations of the circle can be translated into a rotation of the sphere about one axis. In the same way, it is possible to find symmetry groups that are larger than the SU(3) × SU(2) × U(1) group of the Standard Model, and that contain it as a subgroup.
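
For readers who like to see this concretely, here is a small numerical sketch (mine, not the book’s) using rotation matrices: rotations of the sphere about the vertical axis act on the equator exactly like rotations of the circle and close among themselves, while rotations about a horizontal axis are genuinely new operations.

    import numpy as np

    def rot_z(theta):
        # rotation of the sphere about the vertical axis; on the equator
        # it acts exactly like a rotation of the circle
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def rot_x(theta):
        # rotation about a horizontal axis; it has no counterpart among
        # the circle's rotations
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    # Two vertical rotations compose to another vertical rotation: the
    # circle's symmetries form a self-contained subgroup of the sphere's.
    assert np.allclose(rot_z(0.3) @ rot_z(0.4), rot_z(0.7))

    # Vertical and horizontal rotations do not even commute, so the
    # sphere's symmetry group is genuinely larger.
    print(np.allclose(rot_z(0.3) @ rot_x(0.4), rot_x(0.4) @ rot_z(0.3)))  # False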

The smallest grand unification symmetry group that contains the Standard Model’s symmetry group is called SU(5). The SU(5) GUT has been extensively studied, but is now ruled out by experiments. Let’s consider a slightly larger symmetry group, SO(10) (pronounced “ess-oh-ten”). The periodic table of the fermions reveals three fermion families, each with an identical pattern of eight particles: one electron-like particle, one neutrino, and two flavors of quark, each flavor having three colors. The symmetry of the Standard Model accommodates this pattern, but it doesn’t require it. The pattern suggests a more intimate connection between quarks and leptons. Could the electron-like particle and its neutrino be considered quarks of a different color? In that case, all the particles in each family should fit into a single multiplet. There are eight particles of the lightest family. The rules of SO(10) require that we count the right-handed and left-handed components separately. Since there are eight particles, there is a total of 16 components, if the neutrino comes in both left-handed and right-handed versions. The SO(10) symmetry group has a 16-component multiplet, just right to fit all the known particles of the lightest family, plus a right-handed neutrino. When the SO(10) GUT was invented, the presence of a right-handed neutrino was problematic: Neutrinos were believed to exist only in the left-handed version. As we saw in the previous chapter, though, recent experiments measuring the neutrinos coming from the sun have begun to convince physicists that neutrinos have a small but nonzero mass. Here’s where the SO(10) GUT comes into its glory. When spontaneous symmetry breaking occurs, the right-handed neutrinos become extremely massive, so they no longer count as unwanted “light” neutrinos. (Oddly, the masses of the right-handed and left-handed neutrinos need not be the same.) Additionally, a mathematical result known as the seesaw mechanism guarantees that if the right-handed neutrinos are very heavy, then the left-handed neutrinos will be very light. This makes small (left-handed) neutrino masses a natural result of the theory rather than something imposed for no reason by the theorist. Any particle with mass must have both left-handed and right-handed versions. If the solar neutrino experiments are right, then the right-handed neutrino required by SO(10) symmetry must actually exist in nature. What was a drawback of SO(10) symmetry has become an advantage.

The SO(10) symmetry solves a host of other puzzles. First, it explains the family structure of the fermion table. Each family is a single SO(10) 16-component multiplet. This means that we can’t add quarks to the model unless we add an electron-like particle and a neutrino at the same time. The symmetry requires that there are as many electron-neutrino families as there are quark families, in contrast to the Standard Model, where there was no such requirement from symmetry. Next, recall the odd asymmetry between left-handed and right-handed particles in the Standard Model. In the SO(10) model, left-handed and right-handed particles enter in a perfectly symmetrical way. The difference we observe in the real world between left-handed and right-handed particles arises from spontaneous symmetry breaking. The model is set up so that symmetry breaks in two stages. We introduce an ultra-Higgs particle, with its own Mexican hat potential, as well as the regular Higgs particle from the Standard Model. The parameters of the ultra-Higgs are chosen so that the left-right symmetry breaks first, collapsing the SO(10) symmetry to the SU(3) × SU(2) × U(1) of the Standard Model. The left-handedness of the universe arises as naturally as a tree falling, rather than being imposed by fiat. When the tree is standing, no one can tell which way it will fall. With the ultra-Higgs, similarly, there is no way to tell in the early universe whether the left-handed neutrinos will be light and the right-handed ones heavy or the other way around. It just happens that, in our universe, that was the way things fell out. In the second stage, SU(2) × U(1) symmetry breaks just as in the Standard Model.

Finally, SO(10) symmetry explains something that was completely mysterious in the Standard Model: why the electron and the proton have equal (but opposite) electric charge. The equality is astonishingly precise. Since everyday objects are made of something like 10²⁴ atoms, even an extremely small difference in the two charges would be easy to detect. Experiments show that the difference must amount to less than one part in 10²¹, in fact. This level of precision suggests the equality is the result of a deep underlying symmetry of nature. SO(10) provides just the symmetry needed. The way the Standard Model’s symmetry fits into SO(10) symmetry requires that one quark flavor have exactly 2/3 of the electron’s charge and that the other quark flavor have -1/3 of the electron’s charge. Here is a fact of fundamental importance that was completely unexplained in the Standard Model, but is required by the SO(10) GUT.
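
How stringent is one part in 10²¹? A rough back-of-envelope comparison (my round numbers, not the book’s) pits the electric repulsion of two ordinary objects against their gravitational attraction; since both forces fall off as the square of the distance, the ratio is independent of separation.

    k = 8.99e9      # Coulomb constant, N m^2 / C^2
    G = 6.67e-11    # gravitational constant, N m^2 / kg^2
    e = 1.60e-19    # elementary charge, C

    m = 1.0             # two idealized 1 kg lumps of hydrogen
    n_protons = 6.0e26  # protons (and electrons) per lump

    for delta in (1e-18, 1e-21):          # fractional charge mismatch
        q = delta * n_protons * e         # leftover charge on each lump
        ratio = (k * q**2) / (G * m**2)   # electric force / gravitational force
        print(f"delta = {delta:.0e}: F_elec/F_grav = {ratio:.1e}")

    # A mismatch of one part in 10^18 would already make the electric
    # repulsion as strong as gravity; the measured bound is a thousand
    # times tighter still.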

The situation with particle masses is considerably more complicated. Remember that in the Standard Model it was the Higgs field that gave mass to the quarks and leptons. The Higgs couplings were not determined by symmetry, so these masses could be chosen to have whatever value we liked. In GUTs, the symmetry group relates some of the couplings, and so, in principle, gives numerical relations among the fermion masses. For instance, the electron and the down quark masses should be equal. Comparing with the experimental values—m_e = 0.511 million electron-volts and m_d = 6.95 million electron-volts—we see something has gone terribly wrong. These are not even close. However, two things need to be taken into account. First, the masses, like the coupling strengths in the Standard Model, are different when measured at different energy scales. The predicted relations clearly don’t hold for the low energies accessible by experiment, but they might hold at the very high energies where SO(10) symmetry is unbroken. Second, the masses depend on the type of Higgs fields and the pattern of symmetry breaking. In GUTs, there are many possible choices for both. So there is still hope that we can find a Yang-Mills symmetry group, a set of Higgs fields, and a pattern of symmetry breaking that results in the correct fermion masses.

Here’s how Howard Georgi describes his first encounter with GUTs:

I first tried constructing something called the SO(10) model, because I happened to have experience building that kind of model. It’s a group in ten dimensions. The model worked—everything fit neatly into it. I was very excited, and I sat down and had a glass of Scotch and thought about it for a while.

Then I realized this SO(10) group had an SU(5) subgroup. So I tried to build a model based on SU(5) that had the same sort of properties. That model turned out to be very easy, too. So I got even more excited, and had another Scotch, and thought about it some more.

And then I realized this made the proton, the basic building block of the atom, unstable. At that point I became very depressed and went to bed.1

Georgi’s discouragement was premature. The crucial question: How quickly does the proton decay? If its lifetime is short, then matter as we know it could not exist. Long before the stars and galaxies formed, the hydrogen needed to make them would have disappeared. But if the proton’s lifetime is long compared to the age of the universe, then SU(5) is still possible. Protons would stick around long enough for galaxies to form, for life to evolve, for civilization to arise. With a very long proton lifetime, we would have to be very lucky, or would have to work very hard, to see a proton decay at all.

Proton decay is a prediction of all GUTs, and it’s easy to see why. Remember that electroweak unification gave us interactions involving the W that led to beta decay. GUTs introduce new intermediate particles, labeled X in the diagram below, that couple quarks with positrons, and quarks with antiquarks:

[Figure: interaction vertices in which the X particle couples a quark to a positron, and a quark to an antiquark]

This means the proton can decay by the following process (among others):

[Figure: a proton decaying into a positron and a pion by exchange of an X particle]

The proton decays into a positron plus a pion. The pion can then decay by matter-antimatter annihilation, and the positron can go find an electron and do the same. In the end, a proton has vanished and left only a few photons behind. If protons can vanish like this in a flash of light, then all matter is in danger of disappearing. Diamonds are not forever; neither is any other object, whether solid, liquid, or gas. Matter is unstable and, given enough time, will decay into electromagnetic radiation. How much time is enough? Georgi’s paper on SU(5), titled “Unity of All Elementary-Particle Forces” (written with Sheldon Glashow), didn’t give an answer; it only mentioned the possibility. The universe has been around for about 10 billion (10¹⁰) years. If the SU(5) model predicted a proton lifetime much less than that, the model could be thrown out, as it would be incompatible with the observed fact that protons are still around. On the other hand, a lifetime like 10¹⁰⁰ years would be impossible to measure in any conceivable experiment. A later paper by Georgi, Helen Quinn, and Steve Weinberg came up with a very exciting answer: 10³¹ years. This was safely above both the 10¹⁰ years since the big bang and the known minimum value (10¹⁷ years) of the proton lifetime, but it was low enough that it could be tested experimentally. It would require a strange new type of particle detector, however.

How can anyone hope to detect a decaying proton if the proton lives a sextillion times longer than the current age of the universe? Obviously, it is out of the question to watch a single proton until it decays. Remember, though, that the “lifetime” of a particle is only its average time before decay, and according to the rules of quantum mechanics, the particle is equally likely to decay at any time. That is, protons formed at the big bang do not wait 10³¹ years and then decay all at once. According to the rules of quantum mechanics, decays occur at random, regardless of when the decaying particles were “born.” Some will decay immediately, others only much later, so that the average time is 10³¹ years. This means that if we gather a large enough number of protons together, we have a high probability of seeing a decay (if indeed they decay at all).

Proton decay experiments all work on this principle: Put a lot of stuff in one place and watch carefully to see if anything happens. A thousand tons of matter contain about 10³³ protons, so, according to the SU(5) model, about 100 protons should decay every year. To avoid as much as possible the confusing effects of cosmic rays, experimenters go deep underground: to the Kolar gold mine in India, the Morton salt mine near Cleveland, Ohio, the Kamioka mine in western Japan. The IMB experiment in the Morton mine, for example, is a six-story-high cube filled with purified water whose walls are lined with photon detectors called photomultiplier tubes. A proton at rest should decay into a pion and a positron moving in opposite directions. The pion then decays rapidly into a pair of photons. The positron that is emitted is moving nearly at the empty-space speed of light. The speed of light in water, however, is considerably less than the speed of light in empty space. Although, according to special relativity, nothing can travel faster than the empty-space speed of light, there is no law against going faster than the speed of light in water. Just like a jet airplane that produces a shock wave (a sonic boom) when it travels faster than the speed of sound, the positron traveling faster than the speed of light in water produces a cone of light, a “boom” known as Cerenkov radiation. Eventually, the positron encounters an electron from one of the water molecules and annihilates, producing another pair of photons. Thus, experimenters have a clear signal to look for: a ring of Cerenkov light and two pairs of photons with known energies.
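
The counting behind that estimate is easy to reproduce. Here is a minimal sketch, assuming the round numbers quoted above (a lifetime of 10³¹ years and 10³³ protons in the detector):

    import math

    tau = 1e31   # proton lifetime in years (the SU(5) estimate)
    N = 1e33     # protons in roughly a thousand tons of matter

    # Decay is random: the chance that any given proton decays within one
    # year is 1 - exp(-1/tau); expm1 keeps this tiny number from rounding
    # to zero in floating point.
    p_one_year = -math.expm1(-1.0 / tau)

    print(f"chance a single proton decays this year: {p_one_year:.1e}")      # ~1e-31
    print(f"expected decays per year in the detector: {N * p_one_year:.0f}")  # ~100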

In spite of some initial positive reports, physicists now agree that no genuine proton decays have been recorded in almost 20 years of experiments. The failure to see decays has pushed the minimum proton lifetime to about 7 × 10³³ years. This rules out the SU(5) GUT model, but the SO(10) model remains possible.

Why Is There Something Instead of Nothing?

Pause for a moment and think about how far we have come. Modern physics began in the early twentieth century with the attempt to understand the spectrum of light emitted from hydrogen gas. At that time, the question of whether the universe had an origin, or whether there was fundamental mirror symmetry, would have belonged to the realm of religion or philosophy, not to physics. Thanks to general relativity and the big bang model, we now know the answer to the first question (yes, the universe had a beginning); and thanks to the Standard Model, we know the answer to the second (no, the mirror world is not identical to our world). GUTs allow us to probe other deep questions. One such question: Is matter eternal? We will know the answer if we ever detect proton decay. Here’s another deep question: Why is there any matter in the universe?

To understand this last question, recall that every particle has an antiparticle with precisely opposite properties. When a particle and its antiparticle meet, they can annihilate each other, leaving only photons. The same process in reverse is what we called pair production: a photon is converted into a particle-antiparticle pair. Now, if the laws of physics are completely symmetric with respect to particles and antiparticles, then we would expect that the tremendous energy available in the early stages of the big bang would be transformed into any particle and its antiparticle in equal numbers. Why do we see today an overwhelming predominance of one sort of matter, the kind we call normal matter? Where is all the antimatter? Might some of the galaxies we see be made entirely of antimatter? This would resolve the matter-antimatter asymmetry, but there would necessarily be a region of inter-galactic space where stray bits of matter met stray antimatter. The resulting annihilations would cause a faint but detectable glow. Antimatter galaxies cannot be the solution.

According to astrophysicists, it is just possible that distant clusters of galaxies separated from ours by immense amounts of empty space could be made of antimatter, as there would not be enough exchange of stray matter to cause a detectable glow. There’s no plausible explanation, however, for such a separation of normal matter and antimatter. Therefore, physicists believe that all of the galaxies in the observable universe consist of normal matter.

Put the question another way: If particles and antiparticles were created in equal numbers in the early universe, why did they not all meet and annihilate shortly thereafter, leaving a universe full of photons and nearly devoid of matter? Russian physicist Andrei Sakharov considered the question in a 1967 paper and concluded that three conditions had to be met to end up with more matter than antimatter:

■ The universe had to be out of equilibrium.

■ Proton-decay-type processes had to be possible (what physicists call baryon nonconservation).

■ These processes had to violate CP symmetry.

The rapidly expanding universe of the big bang model provides the nonequilibrium environment required by the first condition. We already know that GUTs satisfy the second condition. Do they satisfy the third?

As we’ve seen, the P of CP refers to the parity operation—the mirror symmetry previously discussed. We know that this symmetry is violated in the Standard Model. C refers to charge conjugation: exchanging all particles with their antiparticles. This symmetry is also violated in the Standard Model. Under C, a left-handed neutrino transforms into a left-handed antineutrino, a particle that doesn’t exist in the Standard Model. The CP operation performs both actions at once: replace every particle with its antiparticle and reflect everything in a mirror. For example, the CP operation transforms a neutrino (all of which are left-handed, remember) into a right-handed antineutrino, a particle that does exist in the Standard Model.

Now, suppose we have a GUT with a hypothetical X particle that causes a proton-decay-type process. Let’s say a down quark decays into an X particle and a positron (top, in the following figure). By rotating the diagram, we see that this same interaction would allow the X to decay into a positron and an antidown quark (middle). And by exchanging particles and antiparticles, we see that the anti-X decays into an electron and a down quark (bottom).

[Figure: three versions of the same vertex: a down quark decaying into an X particle and a positron (top); the X decaying into a positron and an antidown quark (middle); the anti-X decaying into an electron and a down quark (bottom)]

CP symmetry would guarantee that the decays of the X and the anti-X happen at the same rate. If at some time in the very early universe a matter-antimatter symmetry was present, with equal numbers of Xs and anti-Xs, then CP symmetry ensures that the later universe will contain an equal number of down and antidown quarks. Only if CP is violated can an asymmetry arise from an initially symmetric situation. That is, if CP symmetry is violated, the decay of the X happens more often than the decay of the anti-X, or vice versa.

The Standard Model allows for, but does not require, CP violation; the amount of violation must be determined experimentally. This is fortunate; indeed, it is necessary, since a 1964 experiment by J. W. Cronin, V. Fitch, and two colleagues showed that CP is violated in the decay of kaons, particles composed of a down quark and an antistrange quark. (Cronin and Fitch were awarded the 1980 Nobel Prize for this discovery.) The Standard Model does not, however, include any proton-decay-type processes, so even with CP violation it is utterly unable to explain the existing matter-antimatter asymmetry.

Remarkably, GUTs not only provide CP-violating X particles, they also can explain the smallness of the violation. It turns out that the larger the energy scale at which symmetry breaking occurs, the smaller the CP violation. This is similar to the seesaw mechanism that explains small neutrino masses, where a larger energy scale caused a smaller neutrino mass. In fact, the same energy scale controls both CP violation and neutrino masses. It is this economy of explanation that makes grand unification so exciting: One parameter (the energy scale) can explain several quantities (the amount of CP violation and the neutrino masses) that would otherwise be completely arbitrary. We will not know whether this explanation is correct until we measure the neutrino masses, the amount of CP violation, and the rate of proton decay. If some GUT can be found that fits those measurements, though, we will have an answer to one of the greatest mysteries of all time: why there is something instead of nothing.

GUTs provide a framework for understanding a great deal of physics beyond the Standard Model. Neutrino masses, proton decay, CP violation, and the matter-antimatter asymmetry of the universe can potentially be explained. The greater symmetry of GUTs reduces the number of free parameters. However, the possibility of different types of Higgs fields and of multiple symmetry-breaking energy scales adds new free parameters. Often, extensive fine-tuning of the model is necessary to obtain agreement with experiments. This goes against the spirit of the enterprise: We wanted a theory with fewer free parameters, not more. Whether or not GUTs are, strictly speaking, more economical theories (more physics explained using fewer parameters) than the Standard Model, they are, at least, a conceptual advance. They leave unanswered some of the big questions, however. Although GUTs tell us that the number of quark families must be equal to the number of lepton (electron-neutrino) families, they give no hint why there should be precisely three families. And, as usual, gravity is left out. Grand unified theories, in fact, are neither all that grand nor all that unified. Clearly, they are still far from a Theory of Everything. But that indeed may be ultimately a strength rather than a weakness. By not attempting to explain everything, they may actually succeed in explaining something. Experiments already in progress or planned for the next few years may reveal whether the universe has GUTs. Meanwhile, theorists have been looking elsewhere for answers to the big questions.

Inside the Quark

As the Standard Model racked up successful predictions throughout the 1970s and 1980s, it was natural for theorists to look to the higher symmetries of GUTs for further progress. As we have seen, though, these theories are not without problems, and there is arbitrariness in the selection of a Yang-Mills symmetry group, the Higgs fields, and the symmetry breaking patterns. Perhaps symmetry is the wrong way to go. Might structure provide the answer? The pattern of the chemical elements in Mendeleev’s periodic table was finally understood in terms of the structure of the atom. Elements are not truly fundamental; they are made of protons, neutrons, and electrons. The details of the interactions between these constituent particles (specifically, the quantum mechanics of electron orbitals) explain the existence of chemical families, the columns of the periodic table. Similarly, the geometric patterns of Gell-Mann’s Eightfold Way are now understood in terms of the quark structure of the new particles. Is it possible that the leptons and quarks in our new periodic table are not truly fundamental either, but are made up of still smaller entities?

Hypothetical particles that make up the quarks and leptons are known as preons. Let’s try to build such a model by arranging the lightest family of quarks and leptons as follows:

up quark (red)    up quark (blue)    up quark (green)    neutrino
down quark (red)  down quark (blue)  down quark (green)  electron

Suppose we think of the electron and its neutrino as another sort of quark: a fourth color, which we’ll call lilac (or lepton; either way the abbreviation is L). Next, think of the particles in the top row of the table as having up-ness, while those in the bottom row have down-ness. Then, we can explain the form of the table by positing that each particle is composed of two preons. Let’s invent one set of preons to carry the color property. There are four of these preons; let’s call them R, B, G, and L, corresponding to the colors red, blue, green, and lilac. A second set of preons carries the up-ness or down-ness: call them U and D. Finally, we can build all of the particles in the table by supposing that the preons of the first set bind to the preons of the second set. For instance, a red up quark would be formed from the combination RU, whereas the electron would be LD. In addition, we need to invent a force to bind the two types of particles together. The hypothetical force is termed hypercolor, and it is taken to be a Yang-Mills-type force, just like the QCD color force.
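
A toy enumeration (illustrative only; preons are hypothetical) shows how the two-preon scheme reproduces the eight particles of a family:

    from itertools import product

    color_preons = ["R", "B", "G", "L"]  # red, blue, green, lilac (lepton)
    updown_preons = ["U", "D"]           # carry up-ness or down-ness

    # the eight bound states of one family, as named in the text
    names = {
        ("R", "U"): "red up quark",     ("R", "D"): "red down quark",
        ("B", "U"): "blue up quark",    ("B", "D"): "blue down quark",
        ("G", "U"): "green up quark",   ("G", "D"): "green down quark",
        ("L", "U"): "neutrino",         ("L", "D"): "electron",
    }

    for c, ud in product(color_preons, updown_preons):
        print(f"{c}{ud} -> {names[(c, ud)]}")
    # 4 color preons x 2 up/down preons = the 8 particles of one family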

Preon theories have to contend with the outlandish success of the Standard Model. Remember that the Standard Model predicts the electron magnetic moment to an accuracy of one part in a billion. If the composite structure of the electron is not to mess up this prediction, then the binding energy due to the hypercolor force must be around 10 trillion electron-volts. Recalling that energy and mass are interchangeable (E = mc² again), we would expect the electron to weigh much more than the half a million electron-volts it actually does weigh.

Preon models thus explain some of the patterns of our table of quarks and leptons, but they come up short on several counts:

■ Fermion families. Preon models can accommodate any number of families. Why are there only three in nature?

■ Matter versus antimatter. GUTs naturally predict proton decay and CP violation, necessary ingredients for explaining the matter-antimatter imbalance in the universe. No one has yet figured out how to incorporate these ingredients into preon models.

■ Fermion masses. As we have seen, preon models give us no help here.

Although it is tremendously appealing to the imagination to postulate a deeper level of structure to quarks and leptons, theorists have been unable to match preon models to reality. They have been abandoned now in favor of other approaches.

Fermions and Bosons Unite!

Look back at the table of fundamental particles in Chapter 10. Setting aside for the moment the as-yet-undetected Higgs particle, the particles in the Standard Model are either massive fermions (assuming, as seems increasingly likely, that all the neutrinos have at least a small mass) or intermediate force-carrying bosons. In this division, we detect a reflection of the old separation of matter and force. Quarks and leptons are the constituents of matter; the intermediate particles are the forces. This division might cause some uneasiness, however. After all, don’t the W⁺, W⁻, and Z⁰ all have mass? Shouldn’t they count as matter, too? And aren’t bosons and fermions treated alike in relativistic quantum field theory?

In that fertile decade for elementary particle theory, the 1970s, a bold suggestion was put forth: Perhaps there is a fundamental symmetry between bosons and fermions. Initially, investigation of this symmetry was little more than a Mount Everest because-it’s-there phenomenon. Theorists noted that such a symmetry was mathematically possible and set out to investigate it. Soon, though, this supersymmetry (affectionately abbreviated as SUSY) was discovered to solve some problems of string theories, which we will encounter shortly. This synergy created tremendous interest in both types of theory.

It may seem absurd to suggest a symmetry between bosons and fermions. A symmetry is an operation that leaves the world unchanged. Surely, if we replaced every fermion by a boson (and vice versa), the world would not be unchanged. Atoms, for instance, are made out of fermions: quarks and electrons. If these were suddenly changed into bosons, we’d certainly notice. There is no Pauli exclusion principle for bosons, so there’d be no periodic table of the elements, and therefore no chemistry, no biology, no stars, planets, or people.

An analogy might help here. Suppose fermions are women and bosons are men. If we replaced all the men with women (and vice versa) we should be able to tell that something changed, right? (For one thing, there would suddenly be a lot of single fathers.) Well, remember that what’s fundamental in relativistic quantum field theory is the interactions. A man and a woman can interact and produce a daughter. Supersymmetry, as applied to interactions, requires that if we switch all the genders we still get a possible process. Switching genders in the sentence above we get: “A woman and a man can interact and produce a son.” Nothing wrong with that. So maybe at the level of interactions supersymmetry makes sense.

Supersymmetry requires that each particle have a superpartner that has all the same properties except for spin. If the particle is a boson, you get the name of its superpartner by adding the suffix -ino. So the photon has a photino, the Higgs a Higgsino, and the W and Z have a Wino (pronounced “weeno,” not “wine-o”) and a Zino (pronounced “zeeno”), respectively, for superpartners. If the particle is a fermion, you get the name of its superpartner by prefixing s- to the name. So quarks have squarks and leptons have sleptons (no, I am not making this up); for example, we get selectrons and sneutrinos.

Now, for the important question: Does supersymmetry exist in nature? The answer: Nope. Supersymmetry insists, for example, that there exists a boson with the same mass and charge as the electron. Such a particle would be easy to find, but has never been seen. We know, however, that some symmetries are spontaneously broken. Perhaps this happens with supersymmetry, too.

Starting with the Standard Model and adding the smallest possible set of fields, we get the minimal supersymmetric standard model (MSSM, for short). Spontaneous symmetry breaking has a lot to accomplish in this model; somehow, all of the superpartners of the known particles have to become massive enough to have escaped detection. Here, we hit the first snag: There is no completely satisfactory way to accomplish this in the MSSM. Let’s ignore this problem. After all, the MSSM is probably (like the Standard Model) only an approximation to some more complete theory. Assuming that supersymmetry is broken somehow, we come to a very exciting result. As we’ve seen, the three coupling constants of the Standard Model almost meet at a very high energy. It is said that “almost” doesn’t count, except in horseshoes and hand grenades. Whether “almost” counts in physics depends on the experimental uncertainty, indicated below for the α₃ curve (which has the largest uncertainty of the three curves) by the dashed lines. We see that “almost” doesn’t cut it in this case.

[Figure: the three Standard Model coupling constants plotted against energy; dashed lines mark the experimental uncertainty in the α₃ curve]

If all three intersection points fell inside the dashed lines, we could believe that they actually meet at a single point, and it is only the experimental inaccuracies that make it look like three different intersections. However, the third intersection point is well outside of the dashed lines, so we can’t claim the three lines converge.

When we go to the MSSM, the picture changes. Remember the reason the coupling constants change with energy: because of the screening (or antiscreening) effects of the virtual particles. In the MSSM, there are more particles, so the constants change in different ways. Here, the intersection point is within the bounds of experimental uncertainty. If all three curves indeed meet at a single point, perhaps there really is a unification of the fundamental forces at that energy scale. This is the strongest hint to date of supersymmetry. Convergence of the couplings suggests that the MSSM may be an approximation to a more unified theory. Still, the MSSM has many of the same difficulties as the Standard Model. There are even more free parameters—the masses of all the superpartners, for example. There is no explanation of CP violation, baryon nonconservation, or neutrino masses. One possibility for a more unified theory is a supersymmetric grand unified theory. Supersymmetric GUTs have many of the advantages of normal GUTs: mechanisms for CP violation, neutrino masses that arise naturally, and proton decay. One general feature of these theories is that the proton, instead of decaying into pions (as in GUTs), decays into a kaon and a mu antineutrino. If experimenters ever observe proton decay, it will give a clear indication whether supersymmetry is realized in nature.
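
The near-miss and the meeting can both be reproduced with the standard one-loop running formula, in which each inverse coupling is a straight line in the logarithm of the energy: α_i⁻¹(Q) = α_i⁻¹(M_Z) − (b_i/2π) ln(Q/M_Z). Here is a sketch using textbook one-loop coefficients and approximate measured couplings (my round numbers):

    import math
    from itertools import combinations

    M_Z = 91.2                      # Z mass in GeV, the starting scale
    alpha_inv = [59.0, 29.6, 8.5]   # approximate inverse couplings at M_Z

    b_SM = [41/10, -19/6, -7]       # one-loop coefficients, Standard Model
    b_MSSM = [33/5, 1, -3]          # the same with superpartners included

    def crossings(b):
        # energy (in GeV) at which each pair of straight lines crosses
        out = []
        for i, j in combinations(range(3), 2):
            t = 2 * math.pi * (alpha_inv[i] - alpha_inv[j]) / (b[i] - b[j])
            out.append(M_Z * math.exp(t))
        return out

    print("SM:  ", [f"{q:.1e}" for q in crossings(b_SM)])    # scattered: ~1e13 to ~1e17 GeV
    print("MSSM:", [f"{q:.1e}" for q in crossings(b_MSSM)])  # all close to 2e16 GeV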

Unfortunately, supersymmetric GUTs also share the weaknesses of non-supersymmetric GUTs. There is no explanation for the three fermion families. Symmetry breaking patterns are hard to understand. There are too many free parameters. Finally, gravity is still not included. The potential importance of gravity is clear from the energy scales involved. With supersymmetry, the scale for unification of couplings is pushed up a bit, to about 10¹⁶ billion electron-volts. This is within spitting distance of the Planck energy, 10¹⁹ billion electron-volts, which is the fundamental scale for gravity. Well, OK, it’s still a factor of a thousand too small, but, considering that the unification scale is more than a trillion times larger than the mass of the W (which gives the electroweak energy scale), a factor of a thousand is nothing to get excited about. Maybe unification will only be achieved when gravity is taken into account on an equal footing with the other forces.

String Music

The journey into GUTs and supersymmetry has opened up new horizons of scientific investigation. A physicist of 100 years ago would have laughed at the idea of explaining the origin of all matter. We do not yet have the answers to these new questions, but the fact that they can be given serious consideration as scientific, rather than philosophical or religious, questions, reveals how far we have come. Other facts remain, though, that show us how far we still have to go.

The energy scale of grand unification is not very far from the fundamental energy scale for gravity, yet none of our theories has anything to say about gravity. In fact, the accepted theory of gravity, general relativity, is in fundamental conflict with quantum mechanics. In our discussion of quantum mechanics, we learned that a particle can be in a superposition state, where it is equally likely to be found on either side of a barrier. According to general relativity, a particle that has mass causes a dimple in spacetime. Where should the dimple be for the particle in the superposition state? There can’t be a half-dimple on the left and a half-dimple on the right—that would correspond to two particles, each with half the mass. The dimple can’t be entirely on the left—that would mean we could discover the particle’s location by measuring the gravitational effects. If we know the particle is on the left, there is no longer 50 percent probability of it being on the right. We have destroyed the superposition state. The same argument tells us the dimple can’t be on the right, either. The only solution is to allow spacetime itself to be in a superposition state. That means we need a quantum mechanical theory of gravity.

String theory is the first theory to bring quantum mechanics and general relativity together. The fundamental premise is very simple: Instead of describing fundamental point-like (zero-dimensional) particles, the theory postulates a world consisting of one-dimensional strings.

[Figure: a closed string (a loop) and an open string (a segment with two endpoints)]

The natural length of these strings is postulated to be the Planck length, obtained in the same way we found the Planck energy: by combining the fundamental constants c (the speed of light), G (the gravitational constant), and ℏ (Planck’s constant of quantum mechanics):

Planck length = √(ℏG/c³) ≈ 1.6 × 10⁻³⁵ meters

This length is as much smaller than a proton as a badger is smaller than our galaxy. Such a tiny string would, for all practical purposes, behave like a point particle. Well then, why not stick to particle theories? The hope is that string theories give you more bang for the buck: More physics (all of the particles and gravity) is explained using fewer parameters. There are only five basic types of string theory (as compared to an infinity of GUTs), and each one has only one free parameter, the string tension. To encompass all of physics with a single number—that would truly be a Theory of Everything.
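
As a check, the Planck scale follows directly from the three constants named above; this is a standard dimensional-analysis exercise, nothing specific to strings:

    import math

    hbar = 1.055e-34  # Planck's constant (reduced), J s
    G = 6.674e-11     # gravitational constant, N m^2 / kg^2
    c = 2.998e8       # speed of light, m / s

    l_planck = math.sqrt(hbar * G / c**3)   # the Planck length, in meters
    E_planck = math.sqrt(hbar * c**5 / G)   # the Planck energy, in joules

    print(f"Planck length: {l_planck:.2e} m")                # ~1.6e-35 m
    print(f"Planck energy: {E_planck / 1.602e-10:.2e} GeV")  # ~1.2e19 GeV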

Strings give us a natural explanation for the bizarre processes of relativistic quantum field theory. The basic interaction occurs when a string splits into two strings. Slicing up this diagram (known as the pair of pants diagram) shows us the process in detail.

[Figure: the pair of pants diagram: a single closed string splitting into two]

The basic interaction is simply a single string pinching itself and separating into two strings. This interaction does not have to be added into the theory the way we added particle interactions to produce a relativistic quantum field theory. The interaction is already implicit in the description of a single string. In relativistic quantum field theory, we had to add all the ways that a particle could go from one point to another. In string theory, we need to add all the ways a string can go from one place to another. This includes the kind of pinching that happens in the pair of pants diagram. Since this interaction is automatically part of string theory, there are no new parameters involved. Because of the incredibly small scale of strings, we will see the process as a particle decaying into two particles:

[Figure: viewed from afar, the string splitting looks like a particle decaying into two particles]

Strings were not originally intended to explain all of physics. They arose in the early 1970s as an attempt to understand the ever-growing particle zoo. Like a guitar string, different vibrations are possible depending on how the string is plucked. If you pluck a guitar string, you hear a note called the fundamental (lowest) note of the string. If you place a finger lightly on the string just over the twelfth fret and pluck the string again, you hear a note that is an octave higher. Doing the same on the fifth fret gives you a note two octaves above the fundamental. Similarly, string theory strings can vibrate in many ways. Each mode of vibration corresponds to a particle of similar properties but different mass, since more energy is needed to produce the higher modes. This early version of string theory was only partially successful in explaining the pattern of particle masses. After the Standard Model became the accepted theory of elementary particles, strings were abandoned by all but a few tenacious researchers.
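
The fret arithmetic in the analogy checks out, by the way. With the usual equal-tempered spacing, fret n sits at a fraction 1 − 2^(−n/12) of the string’s length, and a light touch there forces the string into the harmonic with a node at that point (a sketch assuming a 110 Hz fundamental):

    def fret(n):
        # distance from the nut to fret n, as a fraction of string length
        return 1 - 2 ** (-n / 12)

    print(f"fret 12 sits at {fret(12):.3f} of the string")  # 0.500 -> node at 1/2
    print(f"fret  5 sits at {fret(5):.3f} of the string")   # 0.251 -> node near 1/4

    # A light touch at 1/2 selects the 2nd harmonic (one octave up);
    # a touch at 1/4 selects the 4th harmonic (two octaves up).
    f1 = 110.0  # assumed fundamental: the A string, in Hz
    for n, interval in [(2, "one octave"), (4, "two octaves")]:
        print(f"harmonic {n}: {n * f1:.0f} Hz, {interval} above the fundamental")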

In 1974, two of these researchers, Joel Scherk and John Schwarz, realized that string theory predicted a particle with spin 2. It had long been known that the graviton, the hypothetical particle that carries the gravitational force just as the photon carries the electromagnetic force, should have spin 2. Scherk and Schwarz realized that string theory, reinterpreted as a theory at the Planck scale, could potentially be the first-ever quantum theory of gravity. There were, however, two serious problems with the quantum version of this theory. First, it predicted the existence of a particle that traveled faster than the speed of light, called a tachyon. Second, the theory only worked if there were 26 space-time dimensions.

In the 1980s, string theory met supersymmetry. It was a match made in heaven. Their offspring, superstring theory, had no embarrassing tachyons. It necessarily included fermions, which were missing from the older string theory. Instead of 26 dimensions, quantum mechanics required 10. Ten dimensions, while an improvement over 26, might still seem excessive. After all, we only see four dimensions: three space dimensions and time. But theorists soon found ways to hide the extra six dimensions, a crucial matter for matching superstrings to the real world. At low energies, where the strings look like particles, these theories became identical to supersymmetric grand unified theories. You got different supersymmetric GUTs depending on how you went about hiding the extra dimensions. As we have seen, supersymmetric GUTs can encompass all the known elementary particles, so superstrings potentially explain all the physics of the Standard Model.

Most exciting of all was the discovery that strings require general relativity. The quantum version of string theory isn’t mathematically consistent unless the equations of general relativity are satisfied. Einstein’s beautiful theory of gravity could actually be derived from string theory. This stunning result gave strings a tremendous boost among the theoretical physics community.

String theories have a very different structure than particle theories, and theorists have had to develop new mathematical tools for dealing with them. The extreme complexity of the mathematics has kept strings from fulfilling many of the early hopes for them. There may be only one free parameter in the theory, but there are billions of different ways of hiding the six extra dimensions. How to choose the correct one? In principle, the theory itself should provide the answer, but in spite of 20 years of work on strings, no one yet knows how to find it.

In spite of these outstanding issues with string theory, it remains an active area of theoretical research. After all, when it comes to a theory that potentially unifies all of known physics, string theory is still the only game in town. And when physicists are investigating the extremes of energy, temperature, and density that existed in the fractions of a second after the big bang, a unified theory is not a luxury; it is indispensable.

The Standard Model lets us take an imaginary trip back in time to a microsecond after the big bang, a time when the universe was a hot soup of quarks, gluons, photons, leptons, Ws, and Zs. According to general relativity, as we run the film backward in time toward the big bang, the temperature and density keep increasing, past the GUT scale at which the symmetry between the strong and electroweak forces is restored, temperature and energy rising up and up until time zero, the instant of the big bang itself, where temperature and density are infinite and the entire universe is collapsed into a single point. This is obviously nonsense; general relativity is signaling its breakdown. Before string theory, physicists dealt with the problem of time zero by bypassing it. Simply assume that at some very small time after the big bang, the universe was in a very dense, very hot state; then all of the predictions of the big bang model follow. The infinite temperature and density at time zero are just ignored.

Strings offer the tantalizing possibility of a glimpse of what happened at, or even before, time zero. Unfortunately, little can be said with any amount of certainty due to the extreme difficulty of performing string calculations. A few promising suggestions have been made, however.

To approach the issue of what happens at or before the big bang, let’s begin with a different question: If string theories are 10-dimensional, why is our space-time apparently four-dimensional? One way of hiding the extra six dimensions is to assume that they are rolled up tightly into loops of the Planck length. A garden hose viewed from a distance appears one-dimensional, but when seen up close appears two-dimensional:

[Figure: a garden hose, one-dimensional from a distance but two-dimensional up close]

In a similar way, we can suppose that the six extra dimensions are very small, like the dimension around the hose’s circumference. This still doesn’t answer the question. Why shouldn’t five, seven, or none of the dimensions be wrapped up small like this? Or all ten? Indeed, since the Planck length is the natural scale of strings, the most natural universe is one in which all of the dimensions are comparable to the Planck length. This leads us to phrase the question another way: Why are four of the dimensions so large?

When a dimension has a small circumference, strings can actually wrap around the circle:

[Figure: a string wrapped around a small, rolled-up dimension]

Like rubber bands around a rolled-up newspaper, these wrapped strings tend to keep the universe from expanding in that dimension. When strings collide, though, they can unwrap:

[Figure: two wrapped strings colliding and unwrapping]

String theorists discovered that these collisions were likely to happen if (at most) three space dimensions were involved. As strings unwrap, these dimensions expand rapidly, rather like the inflationary model, leaving the remaining six space dimensions small. The resulting picture is of a universe that begins as a hot ball of nine space dimensions, the tenth dimension being time. The dimensions of this ball are all around the Planck length, the natural size for objects in string theory. Suddenly, three of the spatial dimensions begin to expand, an event that we interpret as the moment of the big bang.

If string theory is speculative, then scenarios such as the one just described are ultraspeculative. A different string-based scenario supposes that the pre-big bang universe was infinite and cold, rather than small and hot. The variety of the proposals gives an idea of how much confidence should be placed in them. A better understanding of the structure of string theory must be developed before these issues can be resolved.

Although string theory might reveal what the universe was like before the big bang, it can’t tell us where the universe itself came from. Or can it?

Even before strings, physicist Stephen Hawking noticed that any theory that unites general relativity and quantum mechanics must treat spacetime as a field, the way it is treated in general relativity. When this field is zero, there is no spacetime. There is a possibility, then, that the origin of the universe could be explained as a transition from a state with no spacetime to a state with a spacetime like ours: a real creation ex nihilo. Hawking, together with James Hartle, showed how an expanding spacetime like ours could be connected, at time zero, to a timeless space.

[Figure: an expanding spacetime joined at time zero to a timeless space]

Time begins at time zero; there is no time on the other side of this line, so there is no such thing as “a time before the big bang.” This scenario saves us from an unending series of “What happened before that?” at the price of trying to imagine how our universe of space and time could arise from (no, nothing can “arise” if there’s no time), er, be conjoined with, a space with no time. (See what I mean?)

In any of these scenarios, a subtle philosophical problem arises when we take into account the nature of quantum mechanics. Quantum mechanics only gives the probability of a certain outcome. That is, in a collection of a large number of identically prepared systems, quantum mechanics tells us how many will have outcome A, how many will have outcome B, and so on. The difficulty comes in trying to apply quantum mechanical theories to our universe: We only have one universe. We can never assemble a large collection of identically prepared universes, so we can never test the quantum mechanical predictions of these theories. We have an impasse: Any theory that hopes to explain the big bang must unite general relativity with quantum mechanics, but any quantum mechanical theory of the universe is untestable since there is only one universe.

To skirt the impasse, one might postulate that there is in reality a large (probably infinite) collection of universes to which the quantum mechanical predictions apply. One suggestion is that a new baby universe arises whenever a black hole is formed. Moreover, in each of these universes, the laws of physics could be different. The extremes of temperature and density that are produced in the collapse to a black hole restore all of the symmetries, whether they be GUT symmetries, supersymmetry, or those coming from strings. Then, when the newly formed baby universe begins expanding, those symmetries can break in new ways. Although the underlying physical laws remain the same, the low-energy physics that results from spontaneous symmetry breaking might look completely different: a whole new set of elementary particles, with different masses, spins, and interactions. This scenario leads to an ever-branching tree of universes, each with its own laws of physics.

[Figure: an ever-branching tree of baby universes]

Occam’s razor might suggest that postulating the existence of an infinite number of unobservable universes is a poor solution. Nor is that the only problem with this scenario. Testing the quantum mechanical predictions still seems impossible, since the properties of any universe other than our own can never be determined. However, universes with the largest number of black holes will be the most successful at generating new universes. In a sort of Darwinian manner, universes in which the physical laws are tuned to produce the maximum number of black holes should quickly dominate the tree. If we assume that our universe is one of these highly probable ones, then this scenario makes a definite prediction: Any small changes of the laws of physics should result in production of fewer black holes. At our current state of knowledge, this prediction is hard to test. For one thing, we don’t know which fundamental parameters can be independently adjusted. In the Standard Model, for instance, the mass of the electron and the mass of the up quark are independent, but in GUTs, the two masses are related. A more fundamental problem is that our universe might not be one of the most probable ones. The most probable universe might be entirely unsuited for life—for example, it might be a Planck-sized ball that flashes into and out of existence, never growing large enough for stars, let alone life, to form. Perhaps the universe, a unique event, can only be understood in an entirely new, still to be formulated, framework.

A theory can’t be considered scientific unless it makes predictions that can be tested in experiments. Over the 20 or so years of string theory’s existence, several experimental tests of the theory have been proposed.

■ Very massive particles formed by strings that wrap around the extra dimensions of spacetime could have been formed in the big bang. Some of these might be stable and exist even today. Searches for these particles have not been successful.

■ In GUTs, the quarks are required to have 1/3 or 2/3 of the electric charge of the electron by the symmetry of the theory. In strings, the same result is achieved by the way strings wrap around the extra dimensions. Other wrappings would produce particles with 1/5, 1/7, or 1/11 of the electron’s charge. Searches for fractionally charged particles have likewise been unsuccessful.

■ Some versions of string theory predict that gravity will behave differently for very small objects than it does for the large objects for which we can measure it: things weighing a pound or so, up to planets, stars, and galaxies. So far, the most sensitive tests of gravity have not revealed any such deviations.

■ String theories suggest that physics at tiny (Planck-length) distances is very different from what we normally see. Although any conceivable accelerator has far too little energy to probe such small scales, it is possible that the effects of string theory might show up as a cumulative effect when light travels over very long distances. For instance, light from distant galaxies must traverse billions of light-years of space, and so we might expect string physics to blur the images of those galaxies. However, no such blurring has ever been seen.

With so many negative experimental results, why do physicists continue to be excited about string theory? Normally, when a theory fails an experimental test, we expect the theory to be discarded. Note, however, the abundance of weasel-words in the preceding paragraph: “might,” “could,” “some.” The truth is there is no one theory called “string theory.” There are actually many string theories, all of which make different predictions. The experiments that have been performed rule out some versions of string theory, but there are literally billions of other versions that could still be true. The original hope, of a theory with fewer parameters than the Standard Model, has not been realized (so far, at least). String theory has been more successful as a quantum theory of gravity, and has significantly advanced our understanding of black holes. In this, it is almost the only game in town. Even if it is not clear which version, if any, of string theory might describe our universe, the theory provides a framework in which to ask questions that couldn’t otherwise be asked.

At the moment, string theory remains a beautiful but unproven idea. Perhaps future theoretical advances will reveal how the Standard Model arises from the strange string world, or provide clear tests of the theory at accessible energies. Recent results have shown that the five basic versions of superstring theory are all interconnected, raising hopes that they form part of a still deeper theory, which has been labeled M-theory. Until these theories generate and pass experimental tests, though, they will remain speculation.

In Search of the Theory of Everything

New discoveries, such as neutrino oscillations, dark matter, and dark energy, are forcing us to the conclusion that the Theory of Almost Everything has major deficiencies. Theoretical considerations, like the incompatibility of the quantum-mechanical Standard Model and the classical theory of general relativity, point in the same direction. But for 30 years, the Standard Model has been proving its worth. In experiment after experiment, it has provided a unified structure for understanding the behavior of elementary particles and unrivaled accuracy in its predictions. For the first time in the history of science, there is a single theory that provides an accurate mathematical description of matter in all its forms and interactions, gravity always excepted. The Standard Model is truly the crowning scientific accomplishment of the twentieth century.

As science progresses, new ways of describing the world are invented. If the new paradigm provides a more accurate description, it takes the place of the old: a scientific revolution occurs. In many cases, though, the older theory is not discarded completely but retained as a useful approximation. Newton’s theory of gravity, more than 300 years old, is still used for planning rocket launches, satellite orbits, and the paths of interplanetary space probes. Only rarely is it necessary to employ the more accurate description provided by general relativity. Maxwell’s equations of electrodynamics have been superseded by those of QED, but Maxwell’s equations continue to be used except when dealing with very high energies or the world of the very small. Even if a more complete, more unified theory is discovered, the Standard Model will undoubtedly continue to be used as an accurate, if approximate, theory of particle interactions.

This is a thrilling time for fundamental physics. The greatest outstanding theoretical mystery, how to reconcile relativistic quantum field theory with general relativity, is beginning to yield to the assaults of the string theorists. New experimental results are starting to reveal how the Standard Model needs to be modified. There are great expectations for the new accelerator being built at CERN, the LHC. If it succeeds in producing the Higgs particle, much may be learned about whatever deeper theory lies beneath the Standard Model. If it fails, that failure will itself imply that the Standard Model is in need of major modifications.

Isaac Newton wrote in 1676, “If I have seen further it is by standing on the shoulders of giants.” Never have we had so high a perch as we do now, never have we seen so far, and never have we had such hopes of new vistas to be discovered. From this height, we see an amazing panorama. Early in the twentieth century, scientists trying to understand the bright lines of color in the spectrum of their hydrogen vapor lamps invented the puzzling theory of quantum mechanics. Weaving together quantum mechanics and special relativity, relativistic quantum field theory depicted a world in which particles did the impossible—traveled faster than light, went on every path at once—yet this picture resulted in an accurate representation of the world around us. The Standard Model took this bizarre foundation, added a handful of observed particles and a few hypothetical ones, and became the most accurate and wide-ranging theory known to science. With this theory, we can peer inside the proton or journey back in time to the first millisecond of the universe’s existence.

The (so far) speculative theories that go beyond the Standard Model promise to take us to even more bizarre unexplored realms. At the same time, they hint at ultimate limits to our knowledge. The fundamental length scale of string theories is the Planck length, 10⁻³⁵ meters. Are strings made up of still smaller bits, described by an even more fundamental theory? To answer this question, we would need a way to probe distances even smaller than the Planck length. In particle theories, there is a wavelength associated with every particle. A particle will only work as a probe if its wavelength is at least as small as the features you want to see. As the particle’s energy increases, its wavelength decreases, so you can see features as small as you like. All you need to do is give the particle enough energy.
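To put the argument in symbols (a standard back-of-the-envelope relation, not spelled out in the text): a relativistic particle of energy E probes distances of order its reduced wavelength, and feeding in the Planck energy returns exactly the Planck length:

\[
\lambda \sim \frac{\hbar c}{E},
\qquad
\lambda\big|_{E = E_{\mathrm{Pl}}}
= \frac{\hbar c}{\sqrt{\hbar c^{5}/G}}
= \sqrt{\frac{\hbar G}{c^{3}}}
= \ell_{\mathrm{Pl}} \approx 1.6 \times 10^{-35}\ \text{meters}.
\]

For a point particle, nothing stops this process: double the energy and you halve the wavelength, forever.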

Strings are different. At low energies, a string behaves like a particle, but if you give a string an energy larger than the Planck energy, the string grows larger. You can’t see details smaller than the Planck length using a string that is larger than the Planck length. In other words, if string theory is true, it is impossible to investigate distances smaller than the Planck length. This is not a matter of insufficient technology, nor is it an indication of a breakdown of the theory. No nonsensical results arise; indeed, true stringy behavior is just beginning to show up at the Planck energy. Rather than hinting at still deeper structure, string theory firmly declares that there is no deeper level that is experimentally accessible. There is an ultimate and insuperable limit to our knowledge of the very small. So, it is argued, strings could really provide an ultimate theory, beyond which no further structure will ever be discovered.
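This trade-off is often summarized in the string literature as a “generalized uncertainty principle” (a heuristic sketch, not taken from the text; α′ is the string parameter, with string length ℓ_s ∼ √α′ in these units):

\[
\Delta x \;\gtrsim\; \frac{\hbar}{\Delta p} \;+\; \frac{\alpha'}{\hbar}\,\Delta p
\quad\Longrightarrow\quad
\Delta x_{\min} \sim 2\sqrt{\alpha'} \sim \ell_s
\quad \text{at } \Delta p \sim \hbar/\sqrt{\alpha'}.
\]

The first term is the ordinary quantum uncertainty, which shrinks as momentum grows; the second grows with momentum because the string itself stretches. Minimizing the sum shows that no probe, however energetic, resolves distances below the string scale.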

Many theoretical physicists seem to assume that there is a unique Theory of Everything just waiting for us to discover it. Godlike, it summons the universe into being from nothing, creates all matter and energy, and directs every event. The history of physics tells a different story. Every theory has eventually been replaced by a new theory, often one that gives a radically different picture of reality, as quantum mechanics did in replacing Newtonian mechanics. Moreover, there is clearly an advantage in having different mental images for the same physics, as in the Feynman (particle) and Schwinger (field) images of relativistic quantum field theory. There is no doubt that physics is producing better and better approximations to reality. Can we really hope to achieve an understanding that represents reality exactly?

Suppose we had such a theory, or rather, a candidate. How would we know if it is the Theory of Everything? Our knowledge is limited. The universe, presumably, does not stop at the boundaries of our telescopes’ vision. We can never know anything about the regions beyond. We have explored only a tiny portion of the energy range up to the Planck scale. Even if string theory were somehow verified through the entire range, we would still be in ignorance of what happens at energies a million, or a million million, times larger. There will always be realms of distance and energy that are beyond our reach. Finally, what explains the existence of the physical laws themselves? Even a Theory of Everything can’t reveal why that theory, and not some other, is the one that describes our universe.

Ultimately, we may need to accept our own limitations. Perhaps all physical theories are approximations. Perhaps we need different, complementary, approaches for a unique event such as the big bang and other, repeatable, experiments. Limitations need not prevent us from pushing for a deeper understanding; indeed, knowing our limitations may help achieve that understanding.

As Frank Wilczek put it:

So I expect that in ten to fifteen years we will know a lot more. Will we know everything? More likely, I think, is that as we learn many additional facts, we will also come to comprehend more clearly how much we don’t know—and, let us hope, learn an appropriate humility.2