
Collider: The Search for the World's Smallest Particles - Paul Halpern (2009)

Chapter 5. A Compelling Quartet

The Four Fundamental Forces

The grand aim of all science … is to cover the greatest number of empirical facts by logical deduction from the smallest possible number of hypotheses or axioms.

—ALBERT EINSTEIN (THE PROBLEM OF SPACE, ETHER, AND THE FIELD IN PHYSICS, 1954, TRANSLATED BY SONJA BARGMANN)

In 1939, Niels Bohr arrived at Princeton with a grave secret. He had just learned that Nazi Germany was pioneering the methods of nuclear fission: the splitting of uranium and other large nuclei. The unspoken question was whether the powerful energies of atomic cores could be used by Hitler to manufacture deadly weapons. To understand fission better, Bohr was working with John Wheeler to develop a model of how nuclei deform and fragment.

Out of respect, Bohr attended one of Einstein’s lectures on unification. Einstein presented an abstract mathematical model uniting gravitation with electromagnetism. It did not mention nuclear forces, nor did it even address quantum mechanics. Bohr reportedly left the talk in silence. His indifference characterized the spirit of the times; the nucleus was the new frontier.

Nuclear physics had by then become intensely political. The previous year, Otto Hahn, a German chemist who had assisted Rutherford during his days at McGill, together with Fritz Strassmann, had discovered how to induce the fission of a particular isotope of uranium through bombardment with neutrons. Their longtime collaborator Lise Meitner, who had fled the Nazis after the Anschluss (annexation of Austria), learned of the result and interpreted it with her nephew Otto Frisch, who was working with Bohr and brought him word of the discovery. Bohr became alarmed by the prospect that the Nazis could use this finding to develop a bomb—an anxiety that others in the know soon shared. These fears intensified when Hungarian physicist Leo Szilard and Italian physicist Enrico Fermi demonstrated that the neutrons produced when uranium nuclei split apart could trigger other nuclei to split—the ensuing chain reaction releasing enormous quantities of energy. Szilard wrote a letter to President Roosevelt warning of the danger and persuaded Einstein to sign it. Soon the Manhattan Project was born, leading to the American development of the atomic bomb.

The nucleus was a supreme puzzle. What holds it together? Why does it decay in certain ways? What causes some isotopes to disintegrate more readily than others? Why does the number of neutrons in heavier atoms exceed the number of protons? Why does there seem to be an upper limit on the size of nuclei found in nature? Could artificial nuclei of any size be produced?

Throughout the turbulent years culminating in the Second World War, one of the foremost pioneers in helping to resolve those mysteries was Fermi. Born in Rome on September 29, 1901, young Enrico was a child prodigy with an amazing aptitude for math and physics. By age ten he was studying the nuances of geometric equations such as the formula for a circle. After the tragic death of his teenage brother, he immersed himself in books as a way of trying to cope, leading to even further acceleration in his studies. Following a meteoric path through school and university, he received a doctorate from the University of Pisa when he was only twenty-one. During the mid-1920s, he spent time in Göttingen, Leiden, and Florence, before becoming professor of physics at the University of Rome.

Among other critical contributions Fermi made to nuclear and particle physics, in 1933 he developed the first mathematical model of beta decay. The impetus arose when, at the Seventh Solvay Conference earlier that year, Pauli spoke formally for the first time about his theory of the neutrino. Pauli explained that when beta rays are emitted in the radioactive decay of a nucleus, an unseen, electrically neutral, lightweight particle must also be produced to account for energy that otherwise seems to go missing. He had originally called it the neutron, but when the heavier particles of that name were discovered, he took up a suggestion by Fermi and adopted the Italian diminutive: neutrino, the “little neutral one.” Fermi proceeded to calculate how the decay process would work. Though, as it would turn out, his model was missing several key ingredients, it offered the monumental unveiling of a wholly new force in nature—the weak interaction. It is the force behind certain types of particle transformations, such as the beta decay of unstable nuclei.

As physicist Emilio Segrè, who worked under Fermi, recalled:

Fermi gave the first account of this theory to several of his Roman friends while we were spending the Christmas vacation of 1933 in the Alps. It was in the evening after a full day of skiing; we were all sitting on one bed in a hotel room, and I could hardly keep still in that position, bruised as I was after several falls on icy snow. Fermi was fully aware of the importance of his accomplishment and said that he thought he would be remembered for this paper, his best so far.1

Fermi’s model of beta decay imagines it as an exchange process involving particles coming together at a point. For example, if a proton meets up with an electron, the proton can transfer its positive charge to the electron, transforming itself into a neutron and the electron into a neutrino. Alternatively, a proton can shed its charge and become a neutron, emitting a positron and a neutrino. As a third possibility, a neutron can transmute into a proton while giving off an electron and an antineutrino (the neutrino’s antimatter counterpart). Each of these involves a huddling together and a transfer—like a football player approaching a member of the opposing team, grabbing the ball, and heading off in another direction.
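
In modern notation (a sketch using today’s labels rather than Fermi’s own; the electron-type subscripts were added long after his paper), the three processes read

\[
p + e^- \rightarrow n + \nu_e, \qquad
p \rightarrow n + e^+ + \nu_e, \qquad
n \rightarrow p + e^- + \bar{\nu}_e .
\]

In each case the electric charge balances on both sides; what changes hands is the identity of the participants.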

In electromagnetism, two electric currents—streams of moving electric charge—can interact with each other by means of the exchange of a photon. Because the photon is an electrically neutral particle, no charge is transferred in the process. Rather, the photon exchange can either bring the currents together or separate them depending on the nature and direction of the moving charges.

In modern terminology, we say the photon is the exchange particle conveying the electromagnetic force. Exchange particles, including photons, belong to a class of particles called bosons. The smallest ingredients of matter—now known to be quarks and leptons—are all fermions. If fermions are like the bones and muscles of the body, bosons supply the nerve impulses that provide their dynamics.

For the weak force, as Fermi noted, two “currents,” one the proton/neutron and the other the electron/neutrino, can exchange charge and identity during their process of interaction. Here Fermi generalized the concept of current to mean not just moving charges but also any stream of particles that may keep or alter certain properties during an interaction.

Just as mass measures the impact of gravity, and charge the strength of electromagnetism, Fermi identified a factor—now known as the Fermi weak coupling constant—that sets the strength of the weak interaction. He used this information to construct a method, known as Fermi’s “golden rule,” for calculating the odds of a particular decay process taking place. Suddenly, the long-established gravitational and electromagnetic interactions had a brand-new neighbor. But no one knew back then how to relate the new kid on the block to the old-timers.
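
In the form found in modern quantum mechanics textbooks (not Fermi’s original notation), the golden rule gives the rate \(\Gamma\) of a transition from an initial state \(i\) to a final state \(f\) as

\[
\Gamma_{i \to f} = \frac{2\pi}{\hbar}\,\bigl|\langle f \,|\, H' \,|\, i \rangle\bigr|^2\,\rho(E_f),
\]

where \(H'\) is the interaction driving the transition (its matrix element is proportional, for beta decay, to the Fermi coupling constant) and \(\rho(E_f)\) is the density of available final states. A stronger coupling, or more states to decay into, means a faster process.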

Types of elementary particles.


To make matters even more complicated, in 1934, Japanese physicist Hideki Yukawa postulated a fourth fundamental interaction, similarly on the nuclear scale. Yukawa noted that while beta decay is a rare event, another linkage between protons and neutrons is much more common and significantly more powerful. Rather than causing decay, it enables coherence. To distinguish Yukawa’s nuclear interaction from Fermi’s, the former became known as the strong force.

The need for a strong force binding nucleons (nuclear particles) together has to do with their proximity and, in the proton’s case, their identical charge. Judging each other on the basis of charge alone, protons wouldn’t want to stick together. Their mutually repulsive electrostatic forces would drive them as far away from each other as possible, like the north poles of two magnets pushing each other apart. And the closer together they got, the greater their shared urge to flee would grow. How, then, do they fit into a cramped nucleus on the order of a quadrillionth of an inch across?

Born in Tokyo in 1907, Yukawa grew up at a time when the Japanese physics community was very isolated and there was very little interaction with European researchers. His father, who became a geology professor, strongly encouraged him to pursue his scientific interests. Attending the university where his father taught, Kyoto University, he demonstrated keen creativity in dealing with mathematical challenges—which would propel him to a pioneering role in establishing theoretical physics in his native land. At the age of twenty-seven, while still a Ph.D. student, he developed a brilliant way of treating nuclear interactions that became a model for describing the various natural forces.

Yukawa noted that while electromagnetic interactions can bridge vast distances, nuclear forces drop off very quickly. The magnetic effects of Earth’s iron core can, for example, align a compass thousands of miles away, but nuclear stickiness scarcely reaches beyond a range about one-trillionth the size of a flea. He attributed the difference in scale to a distinction in the type of boson conveying the interaction. (Remember that bosons are like the universe’s nervous system, conveying all interactions.) The photon, a massless boson, serves to link electrical currents spanning an enormous range of distances. If it were massive, however, its range would shrink considerably, because the inverse-square decline in strength with distance embodied in Maxwell’s equations would be replaced by an exponentially steeper drop. The situation would be a bit like throwing a Frisbee back and forth across a lawn and then replacing it with a lead dumbbell. With the far heavier weight, you’d have to stand much closer to keep up the exchange.
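
The contrast can be captured in a standard textbook comparison (a sketch, not the notation of Yukawa’s own paper). The potential of a massless photon falls off as \(1/r\), while the exchange of a boson of mass \(m\) yields

\[
V(r) \propto \frac{e^{-r/\lambda}}{r}, \qquad \lambda = \frac{\hbar}{mc},
\]

so the force is effectively cut off beyond the distance \(\lambda\). For a boson a couple of hundred times the electron’s mass, \(\lambda\) works out to roughly \(10^{-15}\) meters, about the size of an atomic nucleus.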

By substituting nuclear charge for electric charge, and massive bosons, called mesons, for photons, Yukawa found that he could describe the sharp, pinpoint dynamics of the force between nucleons—demonstrating why the interaction is powerful enough to bind nuclei tightly together while being insignificant at scales larger than atomic cores. All that would be needed was a hitherto unseen particle. If Dirac’s hypothesized positrons could be found, why not mesons?

Nature sometimes plays wicked tricks. In 1936, Carl Anderson observed a strange new particle in a stream of cosmic rays. Because a magnetic field diverted it less than protons and more than electrons or positrons, he estimated its mass to be somewhere in between—a little more than two hundred times the mass of the electron. On the face of things, it seemed the answer to nuclear physicists’ dreams. It fit in well with Yukawa’s predictions for the mass of the exchange boson for the strong force, and physicists wondered if it was the real deal.

Strangely enough, any resemblance between the cosmic intruder and Yukawa’s hypothesized particle was pure coincidence. Further tests revealed the new particle to be identical to the electron in all properties except mass. Indeed it turned out to be a lepton, a category that doesn’t experience the strong force at all, rather than a hadron, the term denoting strongly interacting particles. (Lepton and hadron derive from the Greek for “thin” and “thick,” respectively—a reference to their relative weights that is not always accurate; some leptons are heavier than some hadrons.) Anderson’s particle was eventually renamed the muon, to distinguish it from Yukawa’s exchange particle. Pointing out the muon’s seeming redundancy and lack of relevance to the theories of his time, physicist Isidor I. Rabi famously remarked, “Who ordered that?”

True mesons would not be found for more than a decade. Not many nuclear physicists were contemplating pure science during that interval; much energy was subsumed by the war effort. Only after the war ended could the quest for understanding the world of particles resume in earnest.

In 1947, a team of physicists led by Cecil Powell of the University of Bristol, England, discovered tracks of the first known meson in a photographic image of cosmic ray events. Born in Tonbridge in Kent, England, in 1903, Powell had an unlucky early family life. His grandfather was a gun maker who had the misfortune of accidentally blinding someone while out shooting—an action that led to a lawsuit and financial ruin. Powell’s father tried to continue in the family trade, but the advent of assembly-line production bankrupted him.

Fortunately, Powell himself decided to pursue a different career path. Receiving a scholarship to Cambridge in 1921, he consulted with Rutherford about joining the Cavendish group as a research student. Rutherford agreed and arranged for Charles Wilson to be his supervisor. Powell soon became an expert on building cloud chambers and using them for detection.

In the mid-1930s, after Cockcroft and Walton built their accelerator, Powell constructed his own and actively studied collisions between high-energy protons and neutrons. By then he had relocated to Bristol. While at first he used cloud chambers to record the paths of the by-products, he later found that a certain type of photographic emulsion (a silver bromide and iodide coating) produced superior images. Placing chemically treated plates along the paths of particle beams, he could observe disintegrations as black “stars” against a transparent background—indicating all of the offshoots of an interaction. Moreover the length of particle tracks on the plates offered a clear picture of the decay products’ energies—with any missing energy indicating possible unseen marauders, such as neutrinos, that have discreetly stolen it away.

In 1945, Italian physicist Giuseppe Occhialini joined the Bristol group, inviting one of his most promising students, César Lattes, along one year later. Together with Powell they embarked upon an extraordinary study of the tracks produced by cosmic rays. To obtain their data they brought covered photographic plates up to lofty altitudes, including an observatory high up in the French Pyrenees and onboard RAF (Royal Air Force) aircraft. After exposing the plates to the steady stream of incoming celestial particles, the researchers were awestruck by the complex webs of patterns they etched—intricate family trees of subatomic births, life journeys, and deaths.

As Powell recalled:

When [the plates] were recovered and developed at Bristol it was immediately apparent that a whole new world had been revealed. The track of a slow proton was so packed with developed grains that it appeared almost like a solid rod of silver, and the tiny volume of emulsion appeared under the microscope to be crowded with disintegrations produced by fast cosmic ray particles with much greater energies than any which could have been produced artificially at the time. It was as if, suddenly, we had broken into a walled orchard, where protected trees had flourished and all kinds of exotic fruits had ripened in great profusion.2

Among the patterns they saw was a curious case of one midsize particle stopping and decaying into another, appearing as if a slightly more massive type of muon gave birth to the conventional variety. Yet a long line of prior experiments indicated that if muons decay they always produce electrons, not more muons. Consequently, the researchers concluded that the parent particle must have been something else. They named it the “pi meson,” which became “pion” for short. It soon became clear that the pion matched the exchange particle predicted by Yukawa.

Around the same time, George Rochester of the University of Manchester detected in cloud chamber images a heavier type of meson, called the neutral kaon, that decays along a V-shaped track into two pions—one positive and the other negative. In short order, researchers realized that pions and kaons each have positive, negative, and neutral varieties—with neutral kaons themselves coming in two distinct types, one shorter lived than the other.

The importance of the discovery of mesons was so widely recognized that Powell received the Nobel Prize with lightning speed—in 1950, only three years later. Occhialini would share the 1979 Wolf Prize, another prestigious award, with George Uhlenbeck.

The Bristol team’s discovery represented the culmination of the Cavendish era of experimental particle physics. From the 1950s until the 1970s, the vast majority of new findings would take place by means of American accelerators, particularly successors to Lawrence’s cyclotron. An exciting period of experimentation would demonstrate that Powell’s “orchard of particles” is full of strange fruit indeed.

While high-energy physicists, as researchers exploring experimental particle physics came to be known, tracked an ever-increasing variety of subatomic events, a number of nuclear physicists joined with astronomers in attempts to unravel how the natural elements formed. An influential paper by physicist Hans Bethe, “Energy Production in Stars,” published in 1939, showed how the process of nuclear fusion, the uniting of smaller nuclei into larger ones, enables stars to shine. Through a cycle in which ordinary hydrogen combines into deuterium, deuterium unites with more hydrogen to produce helium-3, and finally helium-3 combines with itself to make helium-4 and two extra protons, stars generate enormous amounts of energy and radiate it into space. Bethe proposed other cycles involving higher elements such as carbon.
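
In modern notation, that cycle, known today as the proton-proton chain, reads

\[
p + p \rightarrow {}^2\mathrm{H} + e^+ + \nu_e, \qquad
{}^2\mathrm{H} + p \rightarrow {}^3\mathrm{He} + \gamma, \qquad
{}^3\mathrm{He} + {}^3\mathrm{He} \rightarrow {}^4\mathrm{He} + 2p .
\]

The first two reactions must each occur twice to supply the pair of helium-3 nuclei consumed in the final step; the net effect is to turn four protons into one helium-4 nucleus, with the small mass difference radiated away as energy.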

George Gamow, by then at George Washington University, humorously borrowed Bethe’s name while applying his idea to the early universe in a famous 1948 paper with Ralph Alpher, “The Origin of Chemical Elements.” Although Alpher and Gamow were the paper’s true authors, they inserted Bethe’s appellation to complete the trio of the first Greek letters; hence it is sometimes known as the “alphabetical paper.”

Alpher and Gamow’s theory of element production relies on the universe having originated in an extremely dense, ultrahot state, dubbed by Fred Hoyle the “Big Bang.” (Hoyle, a critic of the theory, meant his appellation to be derogatory, but the name stuck.) The idea that the universe was once extremely small was first proposed by Belgian mathematician and priest Georges Lemaître, and gained considerable clout when American astronomer Edwin Hubble discovered that distant galaxies are moving away from ours, implying that space is expanding. Alpher and Gamow hypothesized that helium, lithium, and all higher elements were forged in the furnace of the fiery nascent universe.

Curiously enough, although they were right about helium, they were wrong about the other elements. While the primordial universe was indeed hot enough to fuse helium from hydrogen, as it expanded, it markedly cooled down and could not have produced higher elements in sufficient quantities to explain their current amounts. Thus the carbon and oxygen in plants and animals were not produced in the Big Bang. Rather, as Hoyle and three of his colleagues demonstrated, elements higher than helium were wrought in a different type of cauldron—the intense infernos of stellar cores—and released into space through the stellar explosions called supernovas.

Gamow was flummoxed by the idea that there could be two distinct mechanisms for element production. In typical humorous fashion, he channeled his bafflement and disappointment into mock biblical verse: a poem titled “New Genesis.”

“In the beginning,” the verse begins, “God created radiation and ylem (primordial matter).” It continues by imagining God fashioning element after element simply by calling out their mass numbers in order. Unfortunately, God forgot mass number five, almost dooming the whole enterprise. Rather than starting again, He crafted an alternative solution: “And God said: ‘Let there be Hoyle’ … and told him to make heavy elements in any way he pleased.”3

Despite its failure to explain synthesis of higher elements, the Big Bang theory has proven a monumentally successful description of the genesis of the universe. A critical confirmation of the theory came in 1965 when Arno Penzias and Robert W. Wilson pointed a horn antenna into space and discovered a constant radio hiss in all directions with a temperature of around three degrees above absolute zero (the lower limit of temperature). After learning of these results, Princeton physicist Robert Dicke demonstrated that its distribution and temperature were consistent with expectations for a hot early universe expanding and cooling down over time.

In the 1990s and 2000s, dedicated satellites, COBE (the Cosmic Background Explorer) and WMAP (the Wilkinson Microwave Anisotropy Probe), mapped out the fine details of the cosmic background radiation and demonstrated that its temperature profile, though largely uniform, was pocked with slightly hotter and colder spots—signs that the early universe harbored embryonic structures that would grow up into stars, galaxies, and other astronomical formations. This colorfully illustrated profile was nicknamed the “Baby Picture of the Universe.”

The Baby Picture harkens back to a very special era, about three hundred thousand years after the Big Bang, in which electrons joined together with nuclei to form atoms. Before this “era of recombination,” electromagnetic radiation largely bounced between charged particles in a situation akin to a pinball machine. However, once the negative electrons and positive cores settled down into neutral atoms, it was like turning off the “machine” and letting the radiation move freely. Released into space, the hot radiation filled the universe—bearing subtle temperature differences reflecting slightly denser and slightly more spread-out pockets of atoms. As the cosmos evolved, the radiation cooled down and the denser regions drew more and more matter. When regions accumulated the critical amount of hydrogen to fuse together, sustain steady reactions, and release energy in the form of light and heat, they began to shine and stars were born.

The creation of stars, planets, galaxies, and so forth is the celestial drama that engages astrophysicists and astronomers. Particle physicists are largely interested in the backstory: what happened before recombination. The details of how photons, electrons, protons, neutrons, and other constituents interacted with one another in the eons before atoms, and particularly in the first moments after the Big Bang, reflect the properties of the fundamental natural interactions. Therefore, like colliders, the early universe represents a kind of particle physics laboratory; any discoveries from one venue can be compared to the other.

The same year that Alpher and Gamow published their alphabetical paper, three physicists, Julian Schwinger and Richard Feynman of the United States and Sin-Itiro Tomonaga of Japan, independently produced a remarkable set of works describing the quantum theory of the electromagnetic interaction. (Tomonaga had developed his ideas during the Second World War, when it was impossible for him to promote them.) Distilled into a comprehensive theory through the vision of Princeton physicist Freeman Dyson, quantum electrodynamics (QED), as it was called, became seen as the prototype for explaining how natural forces operate.

Of all the authors who developed QED, the one who offered the most visual representation was Feynman. He composed a remarkably clever shorthand for describing how particles communicate with one another—with rays (arrowed line segments) representing electrons and other charged particles, and squiggles denoting photons. Two electrons exchanging a photon, for example, can be depicted as rays coming closer over time, connecting up with a squiggle, and then diverging. Assigning each possible picture a certain value, and developing a means for these to be added up, Feynman showed how the probability of all manner of electromagnetic interactions could be determined. The widely used notation became known as Feynman diagrams.
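
To give a rough sense of the bookkeeping (a sketch in modern units, not Feynman’s own presentation): the simplest diagram, two electrons trading a single photon, contributes an amplitude proportional to

\[
\mathcal{M} \propto \frac{e^2}{q^2},
\]

where \(e\) is the electron’s charge and \(q\) is the momentum carried by the exchanged photon. Each additional squiggle tacks on roughly another factor of the fine-structure constant \(\alpha = e^2/(4\pi\epsilon_0\hbar c) \approx 1/137\), which is why summing just the first few diagrams already yields remarkably accurate predictions.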

Through QED came the alleviation of certain mathematical maladies afflicting the quantum theory of electrons and other charged particles. In trying to apply earlier versions of quantum field theory to electrons, theorists obtained the nonsensical answer “infinity” when performing certain calculations. In a process called renormalization, Feynman showed that the values of particular diagrams nicely canceled out, yielding finite solutions instead.

Inspired by the power of QED, in the 1950s, various theorists attempted to apply similar techniques to the weak, strong, and gravitational interactions. None of the efforts in this theoretical triathlon would come easy—with each leg of the race offering unique challenges.

By that point, Fermi’s theory of beta decay had been extended to muons and become known as the Universal Fermi Interaction. Confirmation of one critical prediction of the theory came during the middle of the decade, when Frederick Reines and Clyde Cowan, scientists working at Los Alamos National Laboratory, placed a large vat of fluid near a nuclear reactor and observed the first direct indications of neutrinos. The experiment was set up to measure rare cases in which neutrinos from the reactor would interact with protons in the liquid, changing them into neutrons and positrons (antimatter electrons) in a process called inverse beta decay. When particles meet their antimatter counterparts, they annihilate each other in a burst of energy, producing photons. Neutrons, when absorbed by the liquid, also produce photons. Therefore Reines and Cowan realized that twin flashes (in another light-sensitive fluid) triggered by dual streams of photons would signal the existence of neutrinos. Amazingly, they found such a rare signal. Subsequent experiments they and others performed using considerably larger tanks of fluid confirmed their groundbreaking results.
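
In modern notation, the reaction they fished for was

\[
\bar{\nu}_e + p \rightarrow n + e^+ ,
\]

with the positron promptly annihilating against an electron and the neutron being captured a few microseconds later, hence the telltale pair of flashes, one following the other.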

By the time the final component of Fermi’s theory—the prototype of the weak interaction—was confirmed, physicists had begun to realize its significant gaps. These manifested themselves by way of comparison with the triumphs of QED, a theory replete with natural symmetries. Many of these symmetries are apparent in the Feynman diagrams representing its processes. For example, flip the time axis, reversing the direction of time, and you can’t tell the difference from the original. Thus, processes run the same backward and forward in time. That is a symmetry called time-reversal invariance.

Another symmetry, known as parity, involves looking at the mirror image of a process. If the mirror image is the same, as in the case of QED, that is called conservation of parity. For example, the letter “O,” looking the same in the mirror, has conserved parity, while the letter “Q” clearly doesn’t because of its tail.

In QED, mass is also perfectly conserved—representing yet another symmetry. When electrons (or other charged particles) volley photons back and forth, the photons carry no mass whatsoever, and the electrons keep their identities throughout. Compare that to beta decay, in which electrons sacrifice charge and mass and end up as neutrinos, and the difference is eminently clear.

The question of symmetries in the weak interaction came to the forefront in 1956 when Chinese American physicists Tsung Dao Lee and Chen Ning (Frank) Yang proposed a brilliant solution to a mystery involving meson decay. Curiously, positively charged kaons have two different modes of decay: into either two or three pions. Because each of these final states has a different parity, physicists thought at first that the initial particles constituted two separate types. Lee and Yang demonstrated that if the weak interaction violated parity, then one type of particle could be involved with both kinds of processes. The “mirror-image” of certain weak decays could in some cases be something different. Parity violation seemed to breach common sense, but it turned out to be essential to understanding nuances of the weak interaction.

Unlike the weak interaction, the strong force does not have the issue of parity violation. Thanks to Yukawa, researchers in the 1950s had a head start in developing a quantum theory of that powerful but short-ranged force. However, because at that point experimentalists had yet to probe the structure of nucleons themselves, the Yukawa theory was incomplete.

The final ingredient in assembling a unified model of interactions would be a quantum theory of gravity. After QED was developed, physicists trying to develop an analogous theory of gravitation encountered one brick wall after another. The most pressing dilemma was that while QED describes encounters that take place over time, such as one electron being scattered by another due to a photon exchange, gravitation, according to general relativity, is a feature stemming from the curvature of a timeless four-dimensional geometry. In other words, it has the agility of a statue. Even to start thinking about quantum gravity required performing the magic trick of turning a timeless theory into an evolving theory. A major breakthrough came in 1957 when Richard Arnowitt, Stanley Deser, and Charles Misner developed a way of cutting space-time’s loaf into three-dimensional slices changing over time. Their method, called ADM formalism, enabled researchers to craft a dynamic theory of gravity ripe for quantization.

Another major problem with linking gravity to the other forces involves their vast discrepancy in strength—a dilemma that has come to be known as the hierarchy problem. At the subatomic level, gravitation is 10^40 (1 followed by 40 zeroes) times punier than electromagnetism, which itself is much less formidable than the strong force. Bringing all of these together in a single theory is a challenge that has yet to be satisfactorily resolved.

Finally, yet another wrench thrown into the works involves renormalizing any gravitational counterparts to QED. To theorists’ chagrin, the methods used by Schwinger, Feynman, and Tomonaga were ineffective in removing infinite terms that popped up in attempts to quantize gravity. Gravity has proven a stubborn ox indeed.

Unification is one of the loftiest goals of the human enterprise. We long for completeness, yet each discovery of commonalities seems accompanied by novel examples of diversity. Electricity and magnetism get along together just perfectly, as Maxwell showed, but the other forces each have glaring differences. For a while the periodic table seemed adequate to explain the elements, until scientists encountered isotopes. Rutherford, Bohr, Heisenberg, and their colleagues seemed to wrap up the world of the atom in a neat parcel, until neutrinos, antimatter, muons, and mesons arrived on the scene.

From the mid-1950s until the mid-1990s, powerful new accelerators would reveal a vastly more complex realm of particles than anyone could have imagined. Suddenly, ordinary protons, neutrons, and electrons would be vastly outnumbered by a zoo of particles with bizarre properties and a wide range of lifetimes. Only a subset of the elementary particles could even be found in atoms; most had nothing to do with them save their reactions to the fundamental forces. It would be like walking into a barn and finding the placid cows and sheep being serenaded by wild rhinoceri, hyenas, platypi, mammoths, and a host of unidentified alien creatures. Given the ridiculously diverse menagerie nature had revealed, finding any semblance of unity would require extraordinary pattern-recognition skills, a keen imagination, and a hearty sense of humor.