The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics - Robert Oerter (2006)
Chapter 1. The First Unifications
If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generation of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis... that all things are made of atoms—little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another.
—Richard Feynman, The Feynman Lectures on Physics
Find a rock. Hit it with a sledgehammer. Take the smallest piece and hit it again. How many times can you repeat this procedure? As the bits of rock get smaller and smaller, you will need new tools and techniques. A razor blade, say, to divide the fragment, and a microscope to see what you’re doing. There are only two possibilities: either you can go on dividing the pieces forever, or else you can’t. If you can’t, there must be some smallest uncuttable piece.
Leucippus and his pupil Democritus, Greek philosophers of the fifth century B.C., proposed that the division process had to end somewhere. The smallest piece was termed the atom, meaning “uncuttable.” The atomic hypothesis flew in the face of common sense and everyday experience. Can you show us one of these atoms? Leucippus’s opponents asked. No, replied the atomists. They are too small to be seen, invisible as well as indivisible.
More than 2000 years later, a new version of the atomic hypothesis was taking hold among scientists. By the beginning of the nineteenth century, it was becoming clear that all objects are composed of many small particles. The new concept of an atom was quite different from the Greek atom. For the Greeks, geometry ruled: atoms were supposed to be distinguished by their shape, even if that shape couldn’t be seen. The new atoms were distinguished instead by their weight and their chemical properties. By the end of the nineteenth century, it was clear that atoms were not the whole story. Rather, there are two kinds of stuff in the world: particles and fields. Everything that we can see and touch is made up of indivisible particles. These particles communicate with each other by way of invisible fields that permeate all of space the way air fills a room. Fields are not made of atoms; they have no smallest unit. The particles determine where the fields will be stronger or weaker, and the fields tell the particles how to move.
The discovery of quantum mechanics in the twentieth century would overturn the straightforward view of a universe full of particles and fields. Another half century would pass before quantum mechanics and special relativity were assimilated into elementary particle physics to produce the most robust and successful scientific theory ever, the Standard Model of Elementary Particles. This chapter will reveal how the concepts of particles and fields were developed in the nineteenth century into powerful tools that brought unity to the diversity of physical theory.
Physics is the study of fundamental processes, of how the universe works at its most basic level. What is everything made of, and how do those components interact? There has been much talk in recent years of a “Theory of Everything,” with string theory the leading candidate. For the physicist, that would be the ultimate achievement: a coherent set of concepts and equations that describe all of the fundamental processes of nature. The search for unifying descriptions of natural phenomena has a long history. Physicists have always tried to do more with less, to find the most economical description of the phenomena. The current drive for unification is but the latest in a long series of simplifications.
In the nineteenth century, physics was divided into many subdisciplines.
✵ Dynamics—The laws of motion. A sliding hockey puck, a ball rolling down a hill, a collision between two billiard balls could all be analyzed using these laws. Together with Newton’s law of universal gravitation, dynamics describes the motions of planets, moons, and comets.
✵ Thermodynamics—The laws of temperature and heat energy, as well as the behavior of solids, liquids, and gases in bulk: expansion and contraction, freezing, melting, and boiling.
✵ Waves—The study of oscillations of continuous media; vibrations of solids, water waves, sound waves in air.
✵ Optics—The study of light. How a rainbow forms, and why a ruler looks bent when you dip it into a fish tank.
✵ Electricity—Why do my socks stick together when I take them out of the dryer? Where does lightning come from? How does a battery work?
✵ Magnetism—Why does a compass always point north? Why does a magnet stick to the refrigerator door?
By the beginning of the twentieth century, these branches had been reduced to two. Because of the atomic hypothesis, thermodynamics and wave mechanics were swallowed up by dynamics. The theory of electromagnetic fields subsumed optics, electricity, and magnetism. All of physics, it seemed, could be explained in terms of particles (the atoms) and fields.
The strongest evidence for the atomic hypothesis came from chemistry rather than physics. The law of definite proportions, proposed in 1799 by the French chemist Joseph-Louis Proust, declared that chemicals combine in fixed ratios when forming compounds. A volume of oxygen, for instance, always combines with twice that volume of hydrogen to produce water. The explanation comes from the atomic hypothesis: if water is compounded of one oxygen atom and two hydrogen atoms (H2O), then two parts hydrogen and one part oxygen will combine completely to form water, with nothing left over.
By the end of the nineteenth century, it was already becoming clear that these chemical atoms were not, in fact, indivisible. In 1899, J. J. Thomson announced that the process of ionization involves removing an electron from an atom, and therefore “essentially involves the splitting of the atom.”1 In the early twentieth century, atoms would be further subdivided. “Splitting the atom” acquired a different meaning, namely, breaking apart the atomic nucleus by removing some of the protons and neutrons of which it is composed. An atom composed of neutrons, protons, and electrons was of course no longer uncuttable, but by that time the term atom was firmly established—it was too late to change. The most basic constituents of matter, those bits that could not be decomposed in any way, came to be called elementary (or fundamental) particles.
How does the atomic hypothesis allow thermodynamics to be reduced to dynamics? Take as an example the ideal gas law. Physicists experimenting with gases in the eighteenth and nineteenth centuries discovered that as a gas was heated, the pressure it exerted on its container increased in direct proportion to the temperature. No explanation was known for this behavior; it was an experimental thermodynamic law.
Let’s apply the atomic hypothesis: Consider the gas in the container to consist of many small “atoms” that are continually in motion, colliding with each other and with the walls of the container like children in a daycare center. Now heat the gas to give the gas molecules more energy, raising their average speed. The pressure on the container walls is the cumulative result of many molecules colliding with the walls. As the temperature goes up, the faster-moving molecules hit the walls more frequently and with more force, so the pressure goes up.
A mathematical analysis proves that when you average over the effects of a large number of molecular collisions, the resulting pressure on the wall is indeed proportional to the temperature of the gas. What was formerly an experimental observation has become a theorem of dynamics. The properties of the gas are seen to be a direct result of its underlying structure and composition.
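The reduction can be made concrete in a few lines. The sketch below (with made-up values for the number of molecules and the volume) uses the kinetic-theory result that averaging over wall collisions gives P = N m⟨v²⟩/(3V); combined with the relation (1/2)m⟨v²⟩ = (3/2)k_BT for an ideal monatomic gas, this yields P = Nk_BT/V, so doubling the temperature doubles the pressure:

```python
# Illustrative sketch: kinetic theory recovers the ideal gas law.
# The molecule count and volume below are made-up round values.

k_B = 1.380649e-23  # Boltzmann constant, J/K

def pressure(n_molecules, volume_m3, temperature_K):
    """Kinetic-theory pressure: P = N * k_B * T / V.

    Obtained by averaging wall collisions, P = N m <v^2> / (3V),
    with (1/2) m <v^2> = (3/2) k_B T for an ideal monatomic gas.
    """
    return n_molecules * k_B * temperature_K / volume_m3

N = 2.7e22   # roughly a liter's worth of gas at room conditions
V = 1.0e-3   # one liter, in cubic meters

p1 = pressure(N, V, 300.0)  # room temperature, about one atmosphere
p2 = pressure(N, V, 600.0)  # temperature doubled
print(p2 / p1)              # pressure doubles
```

What was an empirical rule about heated gases drops out as a theorem about colliding particles.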
Dreams of Fields
To get an idea of the nineteenth century understanding of a field, start with a simple question: How does a compass needle know which direction is north? The compass needle, isolated inside its case, is not touching or being touched by anything other than the case itself, yet no matter how you twist and turn the case, the needle always returns to north. Like a magician levitating a body, some power reaches in with ghostly fingers and turns the needle to the correct position. Giving it the label magnetism doesn’t answer the fundamental question: how can one object influence another without physical contact?
Isaac Newton struggled with the same question when he put forth his law of universal gravitation in 1687. He realized that the fall of an apple was caused by the same force that holds the moon in orbit around the earth, namely the earth’s gravity. But how could the earth reach across 400,000 kilometers of empty space to clutch at the moon?
That gravity should be innate, inherent and essential to matter, so that one body may act upon another at a distance thro’ a vacuum, without the mediation of anything else, by and through which their action and force may be conveyed from one to another, is to me so great an absurdity, that I believe no man who has in philosophical matters a competent faculty of thinking, can ever fall into it. Gravity must be caused by an agent acting constantly according to certain laws; but whether this agent be material or immaterial, I have left to the consideration of my readers.2
The solution of this problem of “action at a distance,” as it was called, came 200 years later in the field concept.
Imagine starting a barbecue in your backyard. Soon neighbors start dropping by: “How’s it going? Oh, having a barbecue? Got an extra burger?” There’s no need to contact them directly to tell them about the cookout; the aroma of the food carries the message. A (magnetic or electric) field works in a similar manner. Objects that display electric or magnetic properties are said to have an electric charge. This charge produces a field, rather like the barbecue produces an aroma. The larger the charge, the larger the field. A distant object doesn’t need to be told of the presence of the charge; it only needs to sniff out the field in its immediate neighborhood, just as your neighbors sniffed out your barbecue. Thus, we say that the Earth behaves like a magnetic “charge” and creates a magnetic field filling the space around it. A compass needle, which is also a magnet, sniffs out the magnetic field and points along it. The compass, whether near the Earth or thousands of kilometers out in space, doesn’t need to know where the Earth is or what it is doing. The compass responds to whatever magnetic field it detects, whether that field is generated by a distant Earth or a nearby refrigerator magnet.
Physicists represent a field by arrows. A bar magnet, for instance, is surrounded by a magnetic field whose arrows arc out from one pole, curve through the surrounding space, and return to the other pole.
The stronger the field, the longer the arrow. Think of the magnetic field like a field of wheat: Each wheat stalk is an arrow, and the “field” is the entire collection of arrows. Unlike the wheat field, which only has a stalk every few feet or so, the magnetic field has an arrow at every spatial point. That is, to specify the magnetic field completely, one must give the strength of the field (length of the arrow) and direction of the field (direction of the arrow) at every point in the entire universe. Obviously, it would be impossible to experimentally determine the magnetic field at every point, even for a limited region, as it would require an infinite number of measurements. In real life, physicists must be content with having a pretty good idea of the field values in some limited region of space. To a physicist, the field is everywhere: in the air around you, penetrating the walls of your house, inside the wood of your chair, even inside your own body.
Around 600 B.C., the philosopher Thales of Miletos noticed that an amber rod rubbed with a silk cloth gained the power to attract small objects. We know this phenomenon as static electricity. (The word electricity comes from the Greek word for amber, electron.) You can perform Thales’s experiment yourself. Tear some small bits off a piece of paper. Now rub a plastic comb on your shirt and hold the comb near the paper bits. If you are quick (and the humidity is low), you will see the paper jump and cling to the comb. This is a different force from magnetism: even a powerful magnet will not pick up the paper shreds, nor will the comb and the magnet exert a force on each other the way two magnets do. We call this new force the electric force. When you comb your hair and it stands out from your head, or when you take your clothes out of the dryer and they cling to each other, you are experiencing the electric force.
In all of these cases, there is a transfer of electric charge from one object to another. Benjamin Franklin discovered in 1747 that there are two types of electric charge, which he termed positive and negative. Ordinarily, objects like your socks have equal amounts of positive and negative charge, and so are electrically neutral (or uncharged). Tumbling in the dryer, the socks pass negatively charged electrons back and forth like schoolchildren trading Pokémon cards. As a result, one sock ends up with excess negative charge and the other ends up with excess positive charge. Opposites attract, according to the electric force law, so your socks cling together. When combing your hair, the comb strips electrons from your hair. Like charges repel, so your hairs try to get as far from one another as possible.
Electric interactions can be described in either a force picture or a field picture. In the force picture, we postulate a law of universal electricity (analogous to Newton’s law of universal gravitation) that states, “every charged object in the Universe is attracted to (or repelled from, according to whether the charges are alike or opposite) every other charged object with a force proportional to the electric charge of both objects.”
The field picture instead postulates a two-step process. In the first step, each charged object creates an electric field. (This is a different field from the magnetic field, but it can also be represented by drawing arrows at every point in space.) In the second step, each object feels a force proportional to the electric field at its location generated by all the other charged objects.
Mathematically speaking, there is one law to tell us what sort of field is produced by a given set of charges and another law to describe the force on a charge due to the electric and magnetic fields at the location of that charge. The sock doesn’t need to “know” the location of every other charged object in the universe; it only needs to “know” the electric field at the sock’s current location. In the field picture, objects respond to the conditions in their immediate surroundings rather than to the positions and movements of distant objects.
This may seem like a cheat: If both the force and field concepts give the same result, aren’t they really saying the same thing in different words? Haven’t we just hidden the “magic” action-at-a-distance behind an equally magical electric field? Indeed, it seems that the question of “How does the object know what distant objects are doing?” has merely been replaced with that of “How does the electric field know what distant objects are doing?”
To see the full power of the field concept, change the question from “How?” to “When?” Suppose one of your two electrically charged socks is suddenly moved to a new position: When does the other sock learn of the new circumstances? In the force picture, each sock responds to the current location of the other, so the force on one must change to a new direction as soon as the other is moved. In the field picture, however, we can imagine the possibility of a time lag between the movement of the sock and the change of the distant field. For locations near the new position of the sock that was moved, the field is centered at that new position, but for locations far away, the field is still centered at the original position of the sock.
If there is a time lag, there must be kinks in the field between the two regions. Perhaps as time goes on, the kinks move outward and the inner region that “knows about” the new location of the sock grows larger and larger. Can the theory of the electric field be modified to turn this “perhaps” into a definite prediction? To find the answer, we first need to find the connection between the two types of field: electric and magnetic.
The Marriage of Electricity and Magnetism
The first proof of a connection between electricity and magnetism was discovered by a Danish physicist, Hans Christian Oersted, in 1820. He set up a simple circuit with a battery and a wire. With the switch open, no current flowed in the wire and a compass held over the wire pointed north, as usual. When he closed the switch, allowing electric charge to flow along the wire from one terminal of the battery to the other, the compass needle was deflected away from north and instead pointed in a direction perpendicular to the wire. This proved that an electric current (that is, a flow of electric charge) produces a magnetic field.
After Oersted’s breakthrough demonstration, many more connections between electricity and magnetism were discovered. Michael Faraday, an English physicist, reasoned that if an electric current could cause a magnetic field, then a magnetic field should be able to cause an electric current. He was able to generate a current in a loop of wire by changing the magnetic field through the loop. A stationary magnet creates no current in a loop of wire. But if the magnet is moved toward the loop, the magnetic field passing through the loop grows stronger. As this happens, current flows in the wire.
So, a changing magnetic field initiates a flow of charge in the wire. How could Faraday explain this phenomenon using the field picture? Think back to the two-step process: One law tells how fields are generated by charges; the second law tells how charges are affected by fields. The second step is described by what is now called the Lorentz force law. According to this law, only an electric field can speed up or slow down a charge. A magnetic field can only change the direction of a charge that is already moving. Before the magnet starts moving, the electrons in the wire are stationary; the current meter indicates zero. Why, then, do the electrons in the wire begin to move when the magnet moves? Maybe the Lorentz force law is wrong, or maybe a moving magnet produces a brand new kind of force. Faraday, however, had a simpler explanation. If a moving charge could produce a magnetic field in Oersted’s experiment, it seemed reasonable that a moving magnet could produce an electric field. It is this electric field that makes the current flow in the wire. Faraday took his experiment as proof that a changing magnetic field creates an electric field.
It was a Scotsman named James Clerk Maxwell who, in 1865, took the field concept (invented by Faraday) and gave it a clear mathematical formulation, incorporating the electric and magnetic force laws into a set of four equations, now known as Maxwell’s equations. In developing these equations, Maxwell realized there would be an inconsistency unless it was possible for a changing electric field to generate a magnetic field. When Maxwell included this crucial modification in his equations for the electric and magnetic fields, he suddenly realized that not only all electric and magnetic phenomena, but also all the discoveries in optics, could be explained by his four equations, together with the Lorentz force law.
To understand the connection with optics, recall the kinks in the electric field that form when a charge is suddenly moved. As the charge moves, the electric field in its immediate area changes: We know from Maxwell’s discovery that a changing electric field gives rise to a magnetic field, so the charge is now surrounded by both an electric and a magnetic field. Before the charge was moved, there was no magnetic field; in other words, there has been a change of magnetic field as well. According to Faraday, a changing magnetic field produces an electric field. A self-sustaining process arises, in which a changing electric field gives rise to a changing magnetic field, which in turn generates an additional electric field, and so on. The two effects reinforce each other, carrying the kinks in the field ever outward. The region near the charge that “knows about” the new location of the charge grows as the kinks move away from the charge.
This self-sustaining combination of changing electric and magnetic fields is called an electromagnetic wave. Maxwell found that the speed of these waves was related in a simple way to the two constants appearing in his equations. The numerical values of these constants were known from experiments that measured the strength of the electric and magnetic fields. Maxwell used the known values to find the speed of his electromagnetic waves, and discovered that they move at the speed of light. This could not be a coincidence: ordinary visible light must be an electromagnetic wave. The connection between light and electromagnetism has since been confirmed in many experiments.
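Maxwell’s calculation can be repeated in a couple of lines. Using modern measured values of the two constants (the electric constant ε₀ and the magnetic constant μ₀), the wave speed 1/√(ε₀μ₀) comes out at the measured speed of light:

```python
# Sketch of Maxwell's calculation: the speed of electromagnetic
# waves from the two measured constants of his equations.
import math

epsilon_0 = 8.8541878128e-12  # electric constant, F/m
mu_0 = 1.25663706212e-6       # magnetic constant, H/m

c = 1.0 / math.sqrt(epsilon_0 * mu_0)
print(c)  # about 2.998e8 m/s: the speed of light
```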
A successful theory should not only explain phenomena that have already been observed and provide a framework for understanding them; it should also predict new phenomena. Experiments can then be designed to look for the new phenomena and test the theory. If Maxwell was right about light being a type of electromagnetic wave, there should be other forms of “light”—electromagnetic waves with a wavelength greater or less than that of visible light. There was nothing in his equations to prevent such waves; all that was necessary to produce them was to find some method of wiggling electric charges at the correct rate. The German physicist Heinrich Hertz set out to look for such things. He charged two metal balls that were separated by a small space. When the charge got high enough, a spark would jump across the space, carrying the negative charge over to the positively charged ball. The sudden movement of charge from one ball to the other created a kink in the electric field: an electromagnetic wave, according to Maxwell. On the other side of the laboratory, he set a loop of wire that had a tiny air gap in it. He knew that the wave should travel across the room at the speed of light. When the wave hit the loop of wire, it should cause an electric current to flow in the wire. Because of the air gap, this could only happen if a spark jumped across the gap. With the laboratory darkened, Hertz peered at the air gap and waited as the charge on the two balls built up. Whenever the spark jumped between the two balls, Hertz, on the other side of the room, saw a second tiny spark jump across the air gap in the wire loop.
Hertz found that his waves had a wavelength of about two feet, which is a million times longer than the wavelength of light. Electromagnetic waves with wavelengths of this size are now known as radio waves. Hertz’s “broadcast,” although not as gripping or informative as the Home Shopping Network, was nevertheless a tremendous accomplishment: the first radio transmission. The experiment provided direct proof that an electromagnetic wave was able to cross a room without the aid of wires.
It was later discovered how to produce electromagnetic waves with wavelengths between those of radio and light, and these were named microwaves and infrared radiation. Shorter wavelengths could be produced too, giving us ultraviolet radiation, X rays, and gamma rays. Today’s society would not function without our knowledge of Maxwell’s equations: We use radio waves for radio and TV reception, microwaves for microwave ovens and cellular phone links, infrared for heat lamps, ultraviolet for tanning booths and black light discos, X rays for medicine, and gamma rays for food decontamination. Visible light, running from red (the longest visible wavelength) to violet (the shortest), is only a small fraction of the electromagnetic spectrum.
Most of the electromagnetic “rainbow” is invisible to us. We can “see” ultraviolet waves in a vague, indistinct way, but not with our eyes: Our skin detects them and reacts with sunburn. The higher-energy X and gamma rays penetrate further into the body and can cause cell damage to internal organs. Mostly, though, we need to use specialized instruments as artificial eyes to expose the wavelengths we can’t see directly. A radio or cell phone receiver uses an antenna and an electronic circuit; a dentist’s X-ray machine uses photographic film. Each transforms these signals into a form our senses can handle. Although generated and detected in a variety of ways, these waves are all fundamentally the same—traveling, self-sustaining electric and magnetic fields.
Thanks to Maxwell, electric and magnetic fields are much more than the cheat they might have at first seemed. The fields are not merely another way to talk about forces between particles. Electric and magnetic fields can combine to form electromagnetic waves that carry energy and information across great distances. Radio waves carry a signal from the station to your receiver, where they are decoded into news, music, and advertisements, without which your life would be incomplete. Light from the Sun traverses millions of miles of empty space; without this light, there wouldn’t be life at all. Fields really exist and are a vital part of the world around us.
By the end of the nineteenth century, physicists had a clear picture of the basic physical interactions. According to this picture, everything in the universe is made of particles that interact by way of fields. Particles produce and respond to fields according to definite mathematical laws. The crowning achievements of physics were the two great unifications: the kinetic theory of thermodynamics, based on the atomic model, and Maxwell’s electromagnetic field theory. These theories not only brought together many diverse phenomena, they made predictions about new phenomena and led to new experiments, new techniques, and new technology. The combined picture was so successful and so compelling that some physicists thought there was little left to do. Albert Michelson, a leading American physicist, said this in 1894:
It seems probable that most of the grand underlying principles have been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all phenomena which come under our notice... The future truths of Physics are to be looked for in the sixth place of decimals.3
The timing of such a pronouncement could hardly have been worse. By the end of the century, new phenomena were being discovered that were, to say the least, puzzling when considered according to the known laws of physics. In fact, two revolutions were about to occur in physics, and when the dust settled, the field and particle concepts would both be altered beyond recognition.