## The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics - Robert Oerter (2006)

### Chapter 5. The Bizarre Reality of QED

The wicked regard time as discontinuous, the wicked dull their sense of causality. The good feel being as a total dense mesh of tiny interconnections.

*—Iris Murdoch, The Black Prince*

**I**magine that you are getting ready to go to work one morning, and you suddenly realize you need to drop off the kids at school, mail a letter, and pick up money from the cash machine. You’re already running late, so you do the only possible thing: split yourself into three people. One of you drops the kids off, another swings by the post office, and the third goes to the bank and gets the cash out. Finally, you all arrive at work, reunite, and go about your business as a whole person.

This may sound like a bizarre science fiction fantasy, or maybe a drug-induced nightmare hallucination, but according to our best understanding of subatomic physics, this is what the world of elementary particles is like. In fact, it’s even worse (or better!) than that: Different versions of you would make stops in Aruba, the Alps, the moon, and Woonsocket, RI. It may seem like you would need to travel faster than the speed of light to get to the moon and still make it to work on time, thus violating special relativity. In fact, it is the confluence of special relativity, quantum mechanics, and field theory that leads us to this bizarre picture of how the world works.

In the 1930s and 1940s, physicists came to accept special relativity as the correct description of space and time. At the same time, quantum mechanics was giving predictions about atomic spectra that were in excellent agreement with experiments. There were two problems, though, one experimental and one theoretical. The experimental problem was that, as better and better techniques were developed for measuring the lines of atomic spectra, details of the spectra were found that weren’t explained by quantum mechanics. The theoretical problem was that quantum mechanics was a nonrelativistic theory—in other words, the equations were in conflict with special relativity. The resolution that was painstakingly worked out over the next 20 years would weave together the three themes of field theory, special relativity, and quantum mechanics into a harmonious whole, a structure that would be the basis of the most successful scientific theory of all time.

**The Mystery of the Electron**

P. A. M. Dirac took the first step toward making quantum mechanics compatible with special relativity in 1928. The Dirac equation replaced the Schrödinger equation of quantum mechanics. It was perfectly consistent with special relativity, and almost as a bonus, it explained the electron’s spin. Spin, you will recall from Chapter 3, was the mysterious property of electrons that let us put two electrons in each energy level, rather than one (as the Pauli exclusion principle would seem to demand).

All electrons are perfectly identical; there is no individuality among them. They all have precisely the same mass, the same electric charge, and the same spin. You never see an electron with half the mass, or a slightly larger or smaller charge. These properties *define* an electron: If you see a particle with the right mass, charge, and spin, it is an electron. If any one of these quantities is different, it’s not. The same statement, modified to include other types of charge, can be made about any other elementary particle.

What is this thing called spin? The electron’s spin, as the name implies, has to do with rotation. Take a beam of electrons that are all spinning in the same direction and fire it at, say, a brick. If you could keep this up for long enough, and if there were no other forces acting on the brick, the electrons would transfer their rotation to the brick, and it would begin to rotate. So a spinning electron behaves in some respects like any other rotating object. In other respects, though, the electron’s spin seems a bit odd. For instance, the spin rate never changes—it never speeds up or slows down, just as a spinning top would neither speed up nor slow down if it were on a perfectly smooth, frictionless surface. Only the direction of rotation, called the spin axis, can change.

It is tempting to think of electrons as tiny spinning balls of charge. There are problems with this picture, however. Think of how an ice skater speeds up as she pulls her arms in. The smaller she makes herself, the faster she spins. Now, no one has measured the size of an electron, but there are experiments that give it an upper limit. According to these experiments, the electron is so small that, to have the known value of spin, the surface of the “ball” would have to be moving faster than the speed of light. This, of course, is impossible. We are forced to conclude that an electron is not a tiny, spinning ball of charge. Nor can the model be fixed by replacing the ball with some other shape. Whether ball shaped, donut shaped, or piano shaped, the picture of an electron built out of a collection of even smaller charges simply doesn’t work.
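The argument can be made quantitative with a rough classical estimate. Treating the electron as a uniform spinning sphere carrying its known spin angular momentum, and using the so-called classical electron radius (about 2.8 × 10^{-15} meters, itself far larger than the experimental upper limit on the electron's size), the surface would have to move far faster than light. This is only an illustrative sketch; the numerical values are standard constants, and the spherical model is exactly the picture being ruled out:

```python
# Rough classical estimate: how fast would the surface of a
# "spinning ball" electron have to move to carry spin hbar/2?
hbar = 1.055e-34   # reduced Planck constant, J*s
m_e = 9.11e-31     # electron mass, kg
r = 2.8e-15        # classical electron radius, m (a generous overestimate of its size)
c = 3.0e8          # speed of light, m/s

# Uniform sphere: angular momentum L = (2/5) m r^2 omega,
# so the surface speed is v = omega * r = L / (0.4 m r).
L = hbar / 2                      # the electron's spin angular momentum
omega = L / (0.4 * m_e * r**2)    # required rotation rate
v = omega * r                     # required surface speed

print(f"required surface speed: {v:.1e} m/s, about {v / c:.0f} times c")
```

Even with this deliberately oversized radius, the surface speed comes out more than a hundred times the speed of light; a smaller radius only makes it worse.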

Well, then, what is an electron? What has charge and spin but isn’t a spinning charge? The only option is to picture the electron as a truly fundamental particle, a pure geometric point having no size and no shape. We simply have to give up the idea that we can model an electron’s structure at all. How can something with no size have mass? How can something with no structure have spin? At the moment, these questions have no answers. There is nothing to do but accept that the electron does have these properties. They are merely part of what an electron is. Until we find some experimental evidence of electron structure, there’s nothing else to say about an electron.

Since all electrons have exactly the same charge, and all other free particles that have been detected have whole number multiples of that fundamental charge, we can simply say the electron has charge -1. Protons, for instance, have charge +1—the same value of charge as the electron, but positive instead of negative. (Physicists believe that protons and neutrons are made up of quarks having 1/3 or 2/3 of the electron’s charge. This of course makes the smaller value the truly fundamental unit, but instead of rewriting all the physics books to say the electron has charge -3, physicists have stuck with the old definition.) The electron’s spin, in fundamental units, is 1/2. There is no known fundamental mass, however; the electron’s mass is an inconvenient 9.11 × 10^{-31} kilograms. This is as small compared to your body mass as an amoeba is compared to the entire earth.
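The charge bookkeeping in the parenthetical remark is easy to verify. In the quark picture, a proton is two "up" quarks (charge +2/3) and one "down" quark (charge -1/3), while a neutron is one up and two downs. A quick check with exact fractions:

```python
from fractions import Fraction

# Quark charges, in units of the proton charge
up = Fraction(2, 3)
down = Fraction(-1, 3)

proton = up + up + down      # two ups and a down
neutron = up + down + down   # one up and two downs

print(proton, neutron)  # 1 0
```

The fractions sum to exactly +1 for the proton and 0 for the neutron, which is why the whole-number charges of everyday particles hide the smaller fundamental unit.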

It is impossible to measure the electron’s spin directly, as suggested earlier, by shooting a beam of electrons into a brick and measuring how much rotation the brick picks up. The electron’s spin is much too small compared to the inertia of any such object for this method to work. The spin was deduced indirectly from careful measurements of the light spectrum produced by atoms. The rules of quantum mechanics, as you will recall, require that the electrons around an atom occupy distinct energy levels. Now, an electron has both charge and spin, and according to Maxwell’s equations it should behave as a tiny magnet. If the atom emitting the light is immersed in a magnetic field, the energy levels are shifted by the interaction between the electron’s magnetic properties and the magnetic field. The energy level shift shows up in the spectral lines; this phenomenon is known as the *Zeeman effect.*

By immersing atoms in magnetic fields and through other experiments, physicists deduced that the electron’s spin can point in one of only two directions: either in the same direction as the magnetic field (called spin-up) or opposite to it (spin-down). It may seem strange that the spin can’t point in any arbitrary direction; after all, there are no restrictions on how we orient the spin axis of a top or a gyroscope. The quantization of spin, like the quantization of the energy levels, is a consequence of quantum mechanics, however. Just as the energy levels of a (quantum) ant rollerblading in a bowl could only take on certain discrete values, the spin direction of the electron has only two possible values. The explanation of the Zeeman effect in terms of quantum principles was one of the early successes that convinced physicists that quantum mechanics was correct.
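The size of the Zeeman shift can be estimated. For a magnetic moment of one Bohr magneton, the energy gap between the spin-up and spin-down states in a magnetic field B is roughly 2μ_B B. The one-tesla field below is an assumed, lab-scale value chosen purely for illustration:

```python
# Rough Zeeman splitting estimate: energy gap between spin-up and
# spin-down for a magnetic moment of one Bohr magneton.
mu_B = 9.274e-24   # Bohr magneton, J/T
eV = 1.602e-19     # joules per electron-volt
B = 1.0            # magnetic field, tesla (illustrative lab-scale value)

split = 2 * mu_B * B   # energy difference between the two spin states, joules
print(f"splitting: {split / eV:.1e} eV")   # about 1.2e-04 eV
```

At about a ten-thousandth of an electron-volt, the splitting is tiny compared to the energies of the spectral lines themselves, which is why increasingly precise spectroscopy was needed to see it.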

The Dirac equation was brilliantly successful: It agreed with special relativity, it got the electron’s spin correct, and it explained results like the Zeeman effect that had seemed inexplicable with the old, nonrelativistic Schrödinger equation. The new equation brought some puzzles of its own, however. The first problem Dirac noticed was that, in addition to the solutions of the equation that described electrons so well, there were other solutions that had negative energy. In fact, there were infinitely many solutions with negative energy. This was unacceptable. To see why, remember what a quantum system is supposed to look like—the previously discussed harmonic oscillator, for instance. An electron in a higher-energy state can drop to a lower-energy state by radiating energy away. But, once in the lowest-energy state, called the ground state, there is nowhere to go. The electron must sit there until it gets zapped with enough energy to move up again. In contrast, Dirac’s equation gave a picture of a bottomless pit, where electrons can keep radiating and dropping, radiating and dropping, without any end. In this way, a single electron could produce an infinite amount of energy output, without any energy input. This was in clear contradiction to experimental fact—something was wrong.

Dirac soon found a way around this problem. In hindsight, we can say that his solution was only a preliminary one, a first step toward relativistic quantum field theory. When we talked about the periodic table, we used Pauli’s exclusion principle, which said that there can be only one electron in any quantum state—if the state is already filled, no more electrons can come in. Dirac made a bold assumption—that the infinitely many negative energy states were all filled, everywhere in the universe! This meant that there was an infinite amount of mass and infinite (negative) electrical charge at each point in space. According to Dirac, though, this infinite sea of mass and charge is normal and so we don’t notice it. This may seem to be a lot to sweep under the rug, but it seemed to work. However, there is a situation where we would notice the sea: when one of the particles in the negative sea gets zapped with enough energy to lift it to a positive energy state. Picture a can packed tightly to the brim with marbles. If you remove one marble, it leaves a hole behind. Similarly, removing an electron from the sea leaves an electron-shaped hole in the sea. Now, in the normal condition, all the negative energy states are filled. So, a hole in the sea, which is a missing negative charge, must look to us like a positive charge. As a result, if you dump enough energy at any point in space, you produce two objects *ex nihilo*: an electron and a positively charged hole. What’s more, nearby electrons in the sea can move over to fill the hole, but that will just move the hole elsewhere. The hole can move around. Dirac found that the hole behaves exactly like a particle—a positively charged particle.

This was tremendously exciting for Dirac. At this time (1931), there were only two known subatomic particles: the electron, with charge -1, and the proton, with charge +1. Dirac’s equation seemed to be describing two such particles. Maybe the holes were actually protons. If so, the Dirac equation would be a complete unified theory of elementary particles. Unfortunately, this solution didn’t work. Dirac soon realized that the holes had to have the same mass as the electrons. Protons, at 2000 times the electron’s mass, were just not going to work. (The discovery of the neutron a year later would have changed the picture anyway.) Dirac had no choice but to predict the existence of a particle that no one had ever seen or detected, a particle with the same mass and spin as an electron, but with positive charge. This radical step was found to be justified in 1932. Dirac’s positive electrons, dubbed *positrons,* were detected in the tracks of cosmic rays in cloud chambers.

Cloud chambers had been used for some years as particle detectors. The cloud was created by spraying a fine mist of water into the chamber. When particles passed through the cloud, small droplets of water condensed around their path. One could then take a photograph and make measurements of the paths. A magnetic field bends the particles’ paths (as we know from Chapter 1), so by surrounding the chamber with a powerful magnet, even more information about the particles could be gleaned.

By using radioactive substances as sources, physicists had learned to identify the cloud chamber tracks of the known particles. Even without a radioactive source around, though, they saw tracks. These tracks apparently rained down from the sky, and so were named cosmic rays. Carl D. Anderson, a young physicist working at Caltech, noted that in some of his cloud chamber photographs there were “electrons” that seemed to curve the wrong way in the magnetic field. Perhaps, though, they were normal electrons traveling the other way—coming up from the ground instead of down from the sky. To resolve the question, Anderson inserted a metal plate into the chamber. Particles passing through the plate would be slowed down, and slower moving particles would be curved more sharply by the magnetic field. With the plate in place, all Anderson had to do was wait until one of the backward-curving particles passed through it. He finally obtained a photograph of such an event that settled the matter. Astonishingly, the particle had the mass of an electron but a positive charge. Anderson had discovered the positron, predicted by Dirac two years earlier.

Anderson’s experiment confirmed the existence of what we now call *antimatter.* The positron is the antimatter partner of the electron. Every particle discovered so far has its antimatter partner, called the *antiparticle.* Anderson’s discovery also confirmed Dirac’s place in the pantheon of physics. For the first time (but, as we will see, by no means the last), a particle predicted from purely mathematical considerations had been found to exist in reality. In a few years’ time, both Dirac and Anderson were awarded Nobel prizes for their achievements.

Suppose we have an electron and a hole. The electron can fall into the hole, but when it does so it must radiate away some energy. We are left with no particles, just the leftover energy. The electron and the positron have annihilated each other; the mass of both particles has been entirely converted into energy. Since each particle has rest energy equal to *mc*^{2}*,* particle-antiparticle annihilation always produces at least twice that much—an energy of 2*mc*^{2}. More energy can be produced if either particle is in motion before the collision, because there is then some kinetic energy in addition to the rest energy of the two particles.
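For an electron and positron at rest, this minimum energy works out to just over a million electron-volts. A quick check using the standard values:

```python
# Minimum energy released in electron-positron annihilation: 2 m c^2.
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s
MeV = 1.602e-13    # joules per mega-electron-volt

E = 2 * m_e * c**2   # rest energy of the pair
print(f"{E:.2e} J = {E / MeV:.3f} MeV")  # about 1.022 MeV
```

This 1.022 MeV shows up experimentally as a pair of gamma-ray photons, and the same figure reappears below as the threshold energy for pair production.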

As the Dirac equation was successfully applied to describe more and more of the particles that were being discovered, it was realized that not just electrons, but all particles must have an unidentical twin: a particle with the opposite charge, but otherwise with the same properties as the original particle. These antiparticles could annihilate with normal particles, leaving only energy behind. Because the antiparticles behave in the same way as regular matter, they can, in principle, combine to form antiatoms and antimolecules. Antichemistry would be identical to regular chemistry, and so the same sorts of objects and creatures that we have in our matter world could, in principle, exist in a world made entirely of antimatter.

Imagine an anti-earth somewhere, with antipeople on it made out of antiprotons, antineutrons, and antielectrons (that is, positrons). On this planet, there may be an antiyou who becomes an astronaut and comes to visit you on earth (perhaps using a matter drive?). On landing, however, the spaceship immediately explodes in a tremendous matter-antimatter annihilation. The energy released would be equal to several thousand nuclear weapons.

Once we realize that there is a symmetry between particles and antiparticles, it makes more sense to treat them symmetrically, rather than thinking in terms of Dirac’s hole picture. That is, instead of thinking of electrons as fundamental and positrons as missing negative-energy electrons, we think of electrons and positrons as equally fundamental. The rest of this chapter will show what picture emerges when we do this.

Our world is full of protons. Where are the antiprotons? Of course, we do not expect to see any antiprotons here on earth—they would quickly run into a proton and annihilate. Is there an antimatter planet somewhere in the universe, perhaps orbiting an antimatter star in an antimatter galaxy? Are there regions of the universe that are mostly antimatter, just as our neighborhood is mostly matter? If there were, then wherever a matter region borders an antimatter region there should be a mixing region where the particles and antiparticles meet and annihilate, releasing energy (in an amount equal to 2*mc*^{2} for each annihilation). Astronomers would see this radiation as a bright spectral line with known energy, so it would be very easy to detect. In spite of many hours spent in telescopic observations, though, no evidence of an antimatter region of the universe has ever been found. We must conclude that all the stars we see, all the distant galaxies, are made from normal matter, not from antimatter.

This is puzzling: If there is a complete symmetry between particles and antiparticles, why aren’t there as many antiparticles in the universe as particles? The lack of antimatter is a deep mystery that cannot be explained using the Standard Model. It implies that the particle-antiparticle symmetry is not quite complete. As we will discover, the very existence of large clumps of matter, such as galaxies and galaxy clusters, provides a hint of physics beyond the Standard Model.

**Dirac Rules!**

One day in the early 1940s, John Archibald Wheeler, then a young assistant professor at Princeton, called up the person he usually bounced his crazy ideas off of: Richard Feynman, at that time still a graduate student at Princeton.

Wheeler said, “Feynman, I know why all the electrons have the same charge and the same mass.”

“Why?” Feynman asked.

“Because they are all the same electron!” replied Wheeler.^{1}

What Wheeler had realized was that, from a mathematical point of view, a positron is the same as an electron traveling backward in time. We can draw it like this:

The arrow shows the direction the electron is traveling, so the line with the arrow pointing backward in time looks to us like a positron traveling forward in time. So, we see an electron and a positron that get closer and closer (in space) until they annihilate, and we are left with only a photon. Alternatively, we can think of the same picture as depicting an electron traveling forward in time, which then emits a spontaneous burst of energy and starts going backward in time.

To see how this explains why all electrons look exactly alike, look at this diagram:

If we take a snapshot at a particular time (a vertical slice through the diagram), we see a collection of electrons and positrons, with the occasional photon. Alternatively, we can think of this picture as the history of a single electron, which is traveling back and forth in time, reversing direction whenever a photon is emitted or absorbed. Between reversals, the electron could be part of an atom, say, in your body. Eventually, that electron meets a positron and annihilates. The positron, though, was really the same electron traveling backward in time, where it eventually gets zapped by a photon to turn it into an electron again. That electron might be in a star, a rock, or a rock star. According to Wheeler’s picture, every electron in your body is really the same electron, returned after many long trips back and forth in time. If it’s always the same electron, the properties must always be the same.

Feynman saw the flaw in this right away, and perhaps you have, too. It’s exactly the problem of the missing antimatter. A vertical slice through the diagram is a snapshot of the universe at a particular time. In any vertical slice, there are as many backward-arrow lines (positrons) as forward-arrow lines (electrons). So, at any time, there should be as many positrons as electrons, if Wheeler’s picture is right. Unfortunately for Wheeler, we know this isn’t true. (Wheeler suggested the positrons might be “hidden in the protons or something,” which turns out not to be the case either, as we will see.) As appealing as Wheeler’s idea is, it is wrong. The lack of any distinguishing features of electrons is a fundamental feature of relativistic quantum field theory; more than that we cannot say. However, the idea that a positron is the same as an electron moving backward in time is correct, and will be crucial to our understanding of relativistic quantum field theory.

Another oddity of the Dirac equation came to light when physicists looked at the solutions to the equation. The first thing a theorist does when confronted with a new equation is to solve it for the simplest cases imaginable. One of the simplest possibilities is to imagine firing a beam of electrons at a barrier. Since electrons are charged particles, we can suppose the barrier is an electric field that is strong enough to repel the electrons. The solutions showed that there were more electrons in the reflected beam than in the incoming beam. This is like saying you were batting tennis balls against a brick wall and for every tennis ball you hit, two came back at you. The resolution of this conundrum comes when we look “inside the wall”—behind the barrier. For each extra electron in the reflected beam, there is a positron that travels off in the other direction, through the wall.

Here’s what’s actually happening: The barrier is creating pairs of electrons and positrons. The electrons come back at us, like the extra tennis balls. Because of their positive charge, the positrons react differently to the electric field that forms the barrier. The same field that repels the electrons attracts the positrons, and they move off through the wall.

It probably seems like we’re getting something for nothing: For each electron going in, two electrons and a positron come out. The source of the extra particles turns out to be the wall itself, or rather, the electric field that creates the wall. Using Dirac’s hole picture, we can say that energy from the electric field pops an electron out of the infinite sea of negative-energy electrons, leaving a hole behind. The remaining electric field propels the electron in one direction (becoming the extra tennis ball) and the positron in the other.

This is an example of what physicists call a *thought experiment.* It is an idealized situation that can’t easily be turned into a real experiment. The difficulty in this case comes in constructing the wall off which the electrons bounce. A wall built of, say, lead bricks will not be very effective. To an electron, the atoms of the brick look like mostly empty space. The electron won’t bounce off the lead brick; it will penetrate it, scattering off other electrons in the brick until its energy is dissipated. Rather, the wall must be created using an electric field, because that is the only thing we know of that repels electrons. But where to get the electric field? Electric fields come from an accumulation of electric charges, that is, electrons! In other words, the only thing that an electron will bounce off is another electron. The real experiment that is closest to our thought experiment, then, is one in which an electron collides with another electron.

Well, what happens when electrons collide? Do we actually see more electrons flying out than we put in? Yes! The phenomenon is known as *pair production:* electron-positron pairs spontaneously leaping into existence. In fact, this is the main method physicists use to produce positrons: fire a beam of electrons of sufficiently high energy at some target and collect the positrons that fly out. It should come as no surprise that “sufficiently high energy” translates to “more than 2*mc*^{2}”—twice the rest energy of an electron.

The Dirac equation didn’t provide a complete theory of pair production; it was invented as an equation for the behavior of a single electron. Because of pair production, a one-particle theory is insufficient. Only a theory capable of dealing with electron-electron interactions, and with interactions of electrons with the photons of the electromagnetic field, would be capable of describing the real world.

Let’s pause and think about what we’ve accomplished. Dirac’s equation has brought two of our themes, quantum mechanics and special relativity, into an uneasy harmony. On the one hand, the explanation of the electron’s spin, the explanation of the Zeeman effect, and the stunning and successful prediction of positrons indicate that the Dirac equation has put us on the right track. On the other hand, we have become convinced that a complete theory of electrons and positrons must take into account the possibility of electron-positron pairs being spontaneously produced. This pair production involves the electric field, and so a complete theory must include a quantum theory of electromagnetic processes as well.

The Dirac equation by itself doesn’t give a coherent description of these processes for a general situation, although it works in a simple case like the electron-barrier situation. The problem is that quantum mechanics was really designed to deal with one particle at a time, so a process like pair production doesn’t fit into the framework. This is bad news, because as we already saw, pair production is going on all the time. We need a new framework. The new framework, which harmonizes all three of our themes—quantum mechanics, special relativity, and field theory—is known as *relativistic quantum field theory* and is the basis for our modern theory of all particles and their interactions—the Standard Model. Except for gravity, of course.

**You Can Always Get There from Here**

It was 1942, and Dick Feynman was worried. He was A.B.D. at Princeton—All But Dissertation—and his advisor, John Archibald Wheeler, had disappeared to the University of Chicago where he worked for the mysterious Metallurgical Laboratory, which hadn’t hired a single metallurgist. In fact, Wheeler was working with Enrico Fermi to produce a nuclear fission reactor. Feynman himself was already a part of the Manhattan Project, and had been working on a method to separate fissionable uranium from the useless, common variety. Feynman’s fiancée had tuberculosis, which at the time was still incurable and usually fatal. And Feynman was worried about quantum mechanics.

When Feynman didn’t understand something completely, he would tackle it from a completely different angle. For his Ph.D. thesis, he had decided to look for a new approach to quantum mechanics. He began by throwing out the Schrödinger equation, the wave function, everything everyone knew about quantum mechanics. He decided instead to think about the particles. He began with the two-slit experiment. It is as if the electron goes through both slits, he thought. Add the contributions for each path and square the result to get the probability that the electron will hit the screen at that point. What if we open a new slit? Then the electron “goes through” all three. If we add a second baffle with slits of its own, we need to add up all possible combinations of how the electron can go (only a few of which are shown):

As we open up more slits in each baffle, there are more possible paths the electron can take, all of which we need to include in the sum. So, if we now remove the baffles, we should do an infinite sum over *all possible paths* the electron can take to get to that point on the screen. Of course, this includes paths that make a stop in Aruba, or on the moon!
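Feynman's rule can be sketched numerically for the two-slit case. Each path contributes a complex number of unit size whose angle depends on the path's length measured in wavelengths; the probability of arriving at a point on the screen is the squared magnitude of the sum over paths. The wavelength and path lengths below are invented purely for illustration, not taken from any real experiment:

```python
import cmath

# Toy sum-over-paths: each path contributes a unit-magnitude complex
# amplitude exp(i * 2*pi * length / wavelength); the probability is the
# squared magnitude of the total. All numbers are illustrative.
wavelength = 1.0

def amplitude(path_length):
    return cmath.exp(2j * cmath.pi * path_length / wavelength)

def probability(path_lengths):
    total = sum(amplitude(length) for length in path_lengths)
    return abs(total) ** 2

# Two paths of equal length interfere constructively...
print(probability([10.0, 10.0]))   # close to 4: a bright spot
# ...while paths differing by half a wavelength cancel.
print(probability([10.0, 10.5]))   # close to 0: a dark spot
```

Adding more paths just means a longer list of lengths in the sum; the full theory replaces the list with an integral over every path, including the ones through Aruba.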

Feynman learned of a suggestion Dirac had made for the contribution to the quantum field from a short segment of path. Amazingly, when Feynman combined this idea with his own sum-over-all-paths approach, he got the exact same answer as quantum mechanics, in every case. Feynman had succeeded in reformulating quantum mechanics in an entirely new framework, one that would prove crucial for relativistic quantum field theory. In 1942, he finished his Ph.D. thesis, in which he introduced this point of view, and then left for Los Alamos to work on the atomic bomb project. All of the best minds in physics were occupied with war work for the next few years and could only return to doing fundamental physics after the war ended. So it was not until 1946 that Feynman took up the idea from his thesis to see if he could make it work for a relativistic theory of electrons, namely, the Dirac equation.

In order to find the probability for some process to take place, Feynman knew he had to add up all the ways that it *could* take place. To keep track of all the possibilities, he started drawing little pictures for each possibility. The most basic action was for an electron to go from one place to another. Feynman represented this with a straight line between the two points, labeled by the symbol for an electron, e^{-} (left in the figure below). As we know, the electron doesn’t necessarily take this straight-line path. We need to take into account the contributions from all possible paths connecting the two points. The straight line is just a shorthand way of writing the sum of the contributions from all the paths. The only other basic possibility was for the electron to emit or absorb a photon, labeled by *γ* in the diagram:

The squiggly line represents the photon. The electron line bends because the electron recoils when it emits the photon, like a fifth-grader throwing a medicine ball while standing on a slippery surface. In this way, Feynman could visualize the force between two electrons as due to an exchange of a photon:

The first (emitting) electron recoils, like the child throwing the medicine ball. The second (absorbing) electron gets the energy and momentum of the photon like a second child who catches the ball. The result is that the two electrons swerve away from each other. So, the picture of electrons exchanging photons explains the fact that the electrons repel each other.

Feynman realized that in going from one point to another, an electron could interact with *itself* by way of the photon exchange. For instance, it could emit and then reabsorb a photon.

The way this diagram is drawn, it looks like the photon curves back to encounter the electron a second time and be reabsorbed. A real photon, of course, always moves in a straight line. But the photon in the diagram is a virtual photon, and the squiggly line doesn’t represent its actual path. The virtual photon travels on all paths, just as the electron does. The only important points in the diagram are the interaction points. The lines can be drawn straight or curved as convenient without changing the meaning of the diagram.

If two photons are involved, there are three possibilities for the electron to interact with itself, as shown in the following figure. What’s more, a photon could produce an electron-positron pair before it is reabsorbed, as shown in the bottom image in the following figure.

There are an infinite number of diagrams that we could draw, with three, four, or more photons, each of which can cause pair production... and we have to add them all up, just to find out how an electron gets from point A to point B. It seems we have an impossible task.

There is good news and bad news, though. The good news is that the interaction between an electron and a photon is fairly weak, so that the more photons and pair productions there are in the diagram, the smaller the contribution of that diagram to the full sum. In fact, each pair of interactions reduces the contribution by a factor of approximately 1/137 (called the fine-structure constant). Therefore, for a calculation to be accurate within 1 percent, we can ignore all diagrams with two or more photons. In that case, we would only need to draw two diagrams: the one in which the electron moves from A to B without any interactions, and the one in which it moves from A to B but emits and absorbs a photon in the process.
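The arithmetic behind this truncation is easy to check. A minimal sketch (not from the book; the loop and labels are purely illustrative) showing how quickly the contributions shrink as photons are added:

```python
# Each additional photon in a diagram multiplies its contribution by
# roughly the fine-structure constant, alpha ~ 1/137.
alpha = 1 / 137.036  # fine-structure constant (dimensionless)

# Relative size of diagrams with n extra photons:
for n in range(4):
    print(f"{n} extra photon(s): contribution ~ alpha^{n} = {alpha**n:.2e}")

# A two-photon diagram comes in at alpha^2, about 5e-5 -- far below
# 1 percent, which is why such diagrams can be dropped for 1% accuracy.
assert alpha**2 < 0.01
```

This is why the perturbation expansion described next converges so quickly for electrons and photons: each rung of the ladder is roughly 137 times smaller than the last.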

By way of analogy, suppose you wanted to find the earth’s orbit about the sun. At first, the problem seems intractable: earth is affected not only by the sun’s gravity, but also by the gravity of all the other planets. The other planets are in turn affected by the earth’s gravity, so it seems as if we can’t find the earth’s orbit unless we already know the earth’s orbit. To make the problem more tractable, let’s simplify it. First, ignore the gravitational pull of all the other planets, since they are much less massive than the sun. Including only the earth-sun interaction, the problem becomes solvable, and one finds that the earth orbits in an elliptical path. Then, if you need more accuracy, you could include the pull of Jupiter, because it is the most massive planet, and you would get a slightly different orbit. For an even more accurate answer, you need to take into account the pull of the other planets, the pull of earth back on Jupiter, and so forth. Physicists call this a *perturbation expansion*—start with the largest influences, then add, one by one, the perturbing effects—in this case, the gravitational pull of the other planets. There are an infinite number of perturbations that need to be added, but fortunately they get smaller and smaller. This is what gives us hope for the theory of electrons and photons we are building. Because the fine-structure constant is so small, we can start with the simplest diagrams and calculate an approximate answer. Then, if more accuracy is needed, we can add diagrams with more photons and electron-positron pairs until we reach the accuracy we need.

Now the bad news. When you actually calculate how much each diagram contributes, adding up all possible paths for the electrons and photons, you find that the diagrams with additional interactions in them, instead of giving a smaller contribution as expected, give an infinite contribution. This was not a surprise to Feynman; other physicists using other approaches had already discovered that you get infinite results for many of these calculations. In some cases, they had learned to circumvent the infinite results by doing the calculation in such a way that the infinite part of the answer was never needed. An example is the Dirac “sea” of negative-energy electrons. If you try to calculate the *total* energy, the answer is infinite. But if you only care about *changes* in the energy, for instance when an electron is ejected from the sea and a hole is formed, then the problem of the total energy can be ignored.

Feynman, however, had an advantage: Using his diagrams, he could classify all of the infinities that came up. It turns out that there are really only three fundamental infinities; all the other diagrams that give infinite answers are really manifestations of the three fundamental infinities. The three quantities that are infinite, according to the calculations, and in contradiction with actual experience, are the electron’s mass, the photon’s mass, and the electron’s charge.

Before we investigate these infinities and learn how to eliminate them, let’s take a closer look at what the theory is telling us about how electrons and photons behave. Do the electrons *really* take every path, including the one with a rest stop on the moon, to get to their destination? Suppose I release an electron and detect it three seconds later a short distance away. Wouldn’t the electron need to travel faster than the speed of light to get to the moon and back in that time?

First of all, if we calculate the probability that we will actually detect our electron on the moon during the three-second interval, we get zero—absolutely no chance of detecting it there. Well, you say, that settles it; it can’t ever be there, and so we can exclude all those faster-than-light trips from the calculation. But that won’t work. If we exclude those paths, then everything goes haywire, and we get the wrong answer for the electron’s actual trip. We need to add up *all* paths, not just the ones that seem reasonable, in order to get the right answer. The electrons engaged in bizarre behavior, such as traveling faster than light, are called *virtual* electrons, to distinguish them from electrons on physically reasonable paths, which we will refer to as *real* electrons.

Virtual particles may seem too weird to be relevant to the real world, but they are actually indispensable. For instance, take the simple problem of finding the force between two stationary electrons. We know they each feel a force from the other electron, because like charges repel. According to our theory, this force is caused by the exchange of photons. But how can a stationary electron emit a photon? If the electron recoils, it is no longer stationary. If there is no recoil, the laws of conservation of momentum and energy imply that the photon carries no momentum and no energy—a dud photon that can’t affect anything.

In fact, the photons that are exchanged in this situation are all virtual photons. They live on borrowed momentum, borrowed energy, and borrowed time. They don’t even travel at the speed of light! If you put a detector in the space between the electrons, you will never detect a photon. Because you can never detect them, they aren’t considered real. If you like, you can consider them to be purely a calculational tool, artifacts of the way we calculate, without any actual existence. But they’re such useful artifacts that physicists often talk about them as if they had an existence of their own.

**QED**

We now have a complete theory of electrons, positrons, and their interactions (except for the infinities mentioned earlier!). Physicists call this theory quantum electrodynamics, or QED for short—it is a quantum theory of dynamic (interacting) electrons. QED is our first example of a full-blown relativistic quantum field theory. Richard Feynman summarized the rules in his wonderful little book *QED: The Strange Theory of Light and Matter.* There are just three possible basic actions in the theory (and a picture corresponding to each action):

Together with these three basic actions, there is a rule for calculating: Using any combination of the three basic actions, draw all possible diagrams representing the different ways a process can happen. Calculate the value associated with each diagram. Add up the values for all the diagrams and square the result to get the probability that the given process will actually happen. Of course, the hard work comes in calculating the numerical value assigned to each diagram—you have to somehow do the sum over all possible electron and photon paths. Physics Ph.D. students spend years learning the mathematical techniques for computing these answers.
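The “add the values, then square” rule is worth seeing with numbers. A toy sketch (the amplitude values below are invented for illustration; real values come from the path sums just described):

```python
# Toy illustration of the QED rule: each diagram contributes a complex
# "amplitude"; the probability is the squared magnitude of their SUM.
# These amplitude values are made up purely for illustration.
simple_diagram = 0.80 + 0.10j        # electron goes from A to B directly
one_photon_diagram = -0.05 + 0.02j   # electron emits and reabsorbs a photon

total_amplitude = simple_diagram + one_photon_diagram
probability = abs(total_amplitude) ** 2
print(f"probability ~ {probability:.3f}")

# Note the interference: summing first, then squaring, is NOT the same
# as adding the two probabilities separately.
assert probability != abs(simple_diagram)**2 + abs(one_photon_diagram)**2
```

The final assertion highlights why the order of operations matters: the diagrams interfere with one another, just as the paths of a single electron do.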

What about pair production? It might seem like we left that out when we listed the three basic actions of QED. Recall that a photon with energy *E* = 2*mc*^{2} can produce two particles (an electron and a positron) with rest energy *mc*^{2}. In Feynman diagrams, pair production looks like this:
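The threshold energy is easy to work out from the standard value of the electron’s rest energy (about 0.511 MeV, a textbook figure, not stated in this chapter):

```python
# Minimum photon energy to create an electron-positron pair: E = 2*m*c^2.
# Uses the standard electron rest energy m*c^2 = 0.511 MeV.
electron_rest_energy_mev = 0.511  # m*c^2 for the electron, in MeV

pair_threshold_mev = 2 * electron_rest_energy_mev
print(f"pair-production threshold: {pair_threshold_mev:.3f} MeV")
# -> 1.022 MeV: only gamma rays at or above this energy can pair-produce
```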

Here, a photon comes down from the top of the picture and creates an electron and a positron, which move off in opposite directions. Now, a Feynman diagram represents an interaction occurring in space and in time. For instance, a diagonal line like this represents an electron moving up the page and forward in time:

The vertical direction in the diagram represents space and the horizontal direction represents time. QED, of course, is a *relativistic* quantum field theory. Special relativity guarantees there will be an intimate connection between space and time in the theory. Because of this spacetime symmetry, we are allowed to take any Feynman diagram and rotate it by 90°. A 90° rotation exchanges the vertical and horizontal directions in the diagram; therefore, it interchanges the roles of space and time.

Take the basic interaction diagram (top, in the following figure), and rotate it 90°. In the rotated diagram (middle), we see a photon moving up the page on the left, and two electrons on the right—but one electron is moving backward in time. As we already know, an electron moving backward in time is equivalent to a positron moving forward in time. Make this switch, and we have exactly the diagram for pair production (bottom, in the figure): a photon enters and splits into an electron-positron pair. Because of the way space and time are interwoven in special relativity, we don’t need to include this as a separate “basic action” in the rules for QED. Pair production is *automatically* included in the theory because of the interaction rule (the third “Action” in the earlier list) and the spacetime symmetry of special relativity.

Electron-positron annihilation is automatically included, too. Just rotate the interaction diagram 90° the other way, and, after exchanging another backward-in-time electron for a positron, we get this process:

Here, we start out with an electron and a positron that meet and annihilate, leaving only a photon. This is the diagram for particle-antiparticle annihilation.

We begin to see the magic of relativistic quantum field theory. Any relativistic quantum field theory that describes an electron interacting with the electromagnetic field (that is, with a photon) will automatically also describe pair production and electron-positron annihilation. Since the same diagram represents all three processes, the probabilities of the three processes will be related in a strict manner governed by the rules of special relativity, which tell you how to rotate the diagrams. This relationship also provides a precise test of the theory: If the three probabilities are not related as required by special relativity, the theory is wrong. Fortunately, all experimental tests so far indicate that the interaction of electrons, positrons, and photons is beautifully described by relativistic quantum field theory.

QED creates a very different mental picture of the subatomic world than the old (classical physics) picture. In classical physics, a solitary electron just sits there with an electric field calmly surrounding it. In QED, on the other hand, a solitary electron is surrounded by a swarm of activity—virtual photons shoot out and back in; virtual electron-positron pairs are created and evaporate instantaneously. The “empty” space around the electron is abuzz with virtual particles. Let’s think about the consequences of this model. When a virtual electron-positron pair is created near the (real) electron, the virtual positron will be attracted toward the real electron, while the virtual electron is repelled. There should be a resulting separation of charge: The real electron should be surrounded by a swarm of virtual positive charge. This charge shields the original negative charge, so that from far away, the electron should look like it has less electric charge than it actually has. Well, we can calculate by how much its true charge is reduced, using the rules of QED, and we find the answer is infinite. This is precisely one of the three fundamental infinities of QED that were mentioned earlier. In order to deal with the infinite amount of shielding charge, we have to assume that the bare (unshielded) electron has *infinite* negative charge, so that we can add an infinite amount of positive shielding charge and still end up with the small, finite, negative charge that we know the electron actually has. This technique is called *renormalization.* You redefine the electron’s charge in order to make the result of the calculation agree with the charge you actually measure.

Renormalization amounts to subtracting infinity from infinity and getting a finite number. According to the mathematicians, “infinity minus infinity” is meaningless. Physicists are not as picky about such niceties as are mathematicians. Still, they too felt that the situation was unsatisfactory—an indication that there was something wrong with the theory. But, being physicists and not mathematicians, they went on doing it as long as it worked, and ignored the contemptuous glares of their mathematical colleagues. It has continued to work, though, so successfully that now renormalizability is considered the hallmark of a viable relativistic quantum field theory. The other infinities in QED, the electron mass and the photon mass, can be dealt with in the same way. Once this is done with the fundamental infinities, all the Feynman diagrams give finite results—not only finite, but astoundingly accurate.

To test the accuracy of the renormalization procedure, imagine trying to penetrate the shielding cloud of virtual particles. By firing higher and higher energy particles at the electron, it should be possible to penetrate the cloud of positive charge, getting closer to the bare, unshielded, charge of the electron. So we expect that, as we increase the energy of the probe particle, the apparent electric charge of the electron will increase. This is exactly what happens in accelerator experiments! At the extremely high energies needed to produce Z^{0} particles (about which we will learn a great deal in Chapter 9), the effective electric charge is 3.5 percent higher than it is in low-energy experiments, in good agreement with the theoretical prediction. (Actually, only half of this change is due to virtual electron-positron pairs. The other half is due to virtual particle-antiparticle pairs of other particle types.) This gives us confidence in the renormalization procedure in spite of its questionable mathematical status.
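The 3.5 percent figure can be checked from commonly quoted values of the fine-structure constant (assumed here, not taken from the text): about 1/137 at low energies and about 1/128 at the Z^{0} energy. Since the fine-structure constant is proportional to the *square* of the charge, the effective charge scales as its square root:

```python
import math

# The fine-structure constant alpha is proportional to the square of the
# electric charge, so the effective charge scales as sqrt(alpha).
# Commonly quoted values, assumed for this sketch:
alpha_low_energy = 1 / 137.036  # alpha measured at low energies
alpha_at_z = 1 / 128.0          # approximate effective alpha at the Z0 energy

charge_ratio = math.sqrt(alpha_at_z / alpha_low_energy)
increase_percent = (charge_ratio - 1) * 100
print(f"effective charge increase: {increase_percent:.1f}%")
# -> about 3.5%, matching the figure quoted in the text
```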

Some physicists, notably Dirac himself, were never happy with the renormalization procedure, and considered it a way of sweeping the problems under the rug. Steven Weinberg, one of the founders of the Standard Model, feels differently. He points out that the electron’s charge would be shielded by the cloud of virtual particles even if there were no infinities in the calculations, and so renormalization is something we would have to do anyway. It is possible, though, that the theory itself is giving us hints about where it breaks down. The infinities come from the virtual particles with very high energy. Maybe QED is simply incorrect at very high energies. Is there a theory that “looks like” QED at low energies, but at high energies is different enough that no infinities arise? In the past 20 years, some physicists have come to suspect that this is possible, and even have an idea what the underlying theory might look like. We’ll take a look at those ideas later on in the book.