Collider: The Search for the World's Smallest Particles - Paul Halpern (2009)

Chapter 1. The Secrets of Creation

When in the height heaven was not named,
And the earth beneath did not yet bear a name,
And the primeval Apsu, who begat them,
And chaos, Tiamut, the mother of them both
Their waters were mingled together,
And no field was formed, no marsh was to be seen;
When of the gods none had been called into being,
And none bore a name, and no destinies were ordained …

—ENUMA ELISH, THE BABYLONIAN EPIC OF CREATION, TRANSLATED BY L. W. KING

Hidden among the haze of cosmic dust and radiation, buried in the very soil we walk upon, locked away in the deep structure of everything we see, feel, or touch, lie the secrets of our universal origins. Like the gleaming faces of a beautiful but impenetrable diamond, each facet of creation offers a glimpse of a wonderful, yet inscrutable, unity. With probing intellect, humankind longs to cut through the layers and reach the core of truth that underlies all things. What is the universe made of? What are the forces that affect our universe? How was the universe created?

Ancient Greek philosophers offered competing explanations of what constitutes the tiniest things. In the fifth century BCE, Leucippus and Democritus, the founders of atomism, argued that materials could be broken down only so far before their basic constituents would be reached. They imagined these smallest, unbreakable pieces, or “atoms,” as possessing a variety of shapes and sizes, like an exotic assortment of pebbles and shells.

Another view, proposed by Empedocles, is that everything is a mixture of four elements: fire, water, air, and earth. Aristotle supplemented these with a fifth, celestial essence, the aether. For two millennia these classical elements were the assumed building blocks of creation, until scientific experimentation prodded Europe toward an empirical view of nature.

In his influential book The Sceptical Chymist, Robert Boyle (1627-1691) demonstrated that fire, air, earth, and water couldn’t realistically be combined to create the extraordinary range of materials on Earth. He argued for a new definition of the term “element” based on the simplest ingredients of which any substance is composed. Chemists could identify these, he argued, by breaking things down into their most basic parts, rather than by relying on philosophical speculation. Boyle’s clever insight challenged experimenters to discover, through a variety of methods, the true chemical elements—familiar to us (in no particular order) as hydrogen, oxygen, carbon, nitrogen, sulphur, and so forth. Whenever children today combine assorted liquids and powders in their chemistry sets, set off bubbling reactions, and concoct colorful, smelly, gooey by-products, they owe a debt to Boyle.

Boyle was an ardent atomist and a meticulous experimenter. Refusing to accept the hypothesis on faith alone, he developed a clever experiment designed to test the concept that materials are made of small particles—which he called corpuscles—with empty space between them. He started with a curved glass tube, exposed to the air on one end and closed on the other. Filling the open end with mercury, he trapped some of the air in the tube and pressed it into a smaller and smaller volume. Then, by slowly removing the mercury, he noted that the trapped air expanded in inverse proportion to its pressure (a relationship now called Boyle’s law). He reasoned that this could happen only if the air was made of tiny components separated by gaps.
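The inverse relationship Boyle observed can be put in a few lines of Python. This is a minimal sketch with illustrative figures of my own, not Boyle's actual measurements:

```python
# Boyle's law: for a fixed amount of trapped gas at constant temperature,
# pressure times volume stays constant (P1 * V1 = P2 * V2).
# The starting figures below are hypothetical, chosen for illustration.

def volume_at(pressure, p0=1.0, v0=60.0):
    """Volume of the trapped air at a given pressure, starting from an
    initial state of p0 (atmospheres) and v0 (cubic inches)."""
    return p0 * v0 / pressure

# Doubling the pressure halves the volume; quadrupling it quarters the volume.
print(volume_at(2.0))  # 30.0
print(volume_at(4.0))  # 15.0
```

Removing mercury lowers the pressure, and the same relation run in reverse predicts the expansion Boyle recorded.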

Manchester chemist John Dalton was an earnest young Quaker whose research about how different substances react with one another and combine led him to the spectacular insight that each chemical element is composed of atoms with distinct characteristics. Dalton was the first, in fact, to use the word “atom” in the modern sense: the smallest component of a chemical element that conveys its properties.

Dalton developed a clever visual shorthand for showing how different atoms combine. He depicted each type of element as a circle with a distinctive mark in the center—for example, hydrogen with a dot, sodium (which he called “soda”) with two vertical lines, and silver with the letter “s.” Dalton counted twenty elements; today we know of ninety-two natural elements and at least twenty-five more that can be produced artificially. By arranging his circular symbols into various patterns, he showed how compounds such as water and carbon dioxide could be assembled from the “Lego blocks” of elements such as hydrogen, oxygen, and carbon. In what he called the law of multiple proportions, he demonstrated that when two elements form more than one compound, the quantities of one element that combine with a fixed quantity of the other stand in simple whole-number ratios.

Dalton also attempted to characterize atoms by their relative weights. Although many of his estimates were off, his efforts led to simple arithmetical ways of understanding chemistry. In 1808, Scottish chemist Thomas Thomson combined oxalic acid (a compound of hydrogen, carbon, and oxygen) with several different elements, including strontium and potassium, and produced a variety of salts. Weighing these salts, he found proportionalities corresponding to differences in the elements he used. Thomson’s results, published in his book A System of Chemistry, helped Dalton’s theories gain wide acceptance in the scientific community.

One thing that Dalton’s theories couldn’t do was predict new elements. Arranging atoms in order of their relative weights didn’t offer enough information or impetus for scientists to infer that others existed. It’s as if a mother brought three of her sons to a new school to register them and reported only their names and ages. Without saying more about her family, the teachers there would have no reason to believe she had other kids that were older, younger, or in between.

Indeed the family of elements was much larger than Dalton surmised. By the mid-nineteenth century the number of known elements had tripled to about sixty. Curiously, some of these had shared properties—even ones with markedly different atomic weights. For example, sodium and potassium, though far apart in weight, seemed to react with other substances in similar ways.

In the late 1860s, Russian chemist Dmitry Mendeleyev decided to write a state-of-the-art chemistry textbook. To illustrate the great progress in atomic theory, he included a chart depicting all of the then-known elements in order of weight. In a bold innovation, he listed the elements in table form with each row representing elements with similar properties. By doing so, he illustrated that elements fall into patterns. Some of the spaces in what became known as the periodic table he left blank, pointing to elements he predicted would later be discovered. He was absolutely right; like a solved Sudoku puzzle, all of the gaps in his table were eventually filled.

Science didn’t realize the full significance of Mendeleyev’s discovery until the birth of quantum mechanics decades later. The periodic table’s patterns reveal that the Democritean term “atom” is really a misnomer; atoms are indeed “breakable.” Each atom is a world unto itself governed by laws that supersede Newtonian mechanics. These laws mandate a hierarchy of different kinds of atomic states, akin to the rules of succession for a monarchy. Just as firstborn sons in many kingdoms assume the throne before second-born sons, because of quantum rules, certain types of elements appear in the periodic table before other kinds.

The atom has sometimes been compared to the solar system. While this comparison is simplistic—planetary orbits don’t obey quantum rules, for one thing—there are two key commonalities. Both have central objects—the Sun and what is called the atomic nucleus—and both are steered by forces that depend inversely on the squares of distances between objects. An “inverse-square law” means that if the distance between two objects is doubled, their mutual force diminishes by a factor of four; if their distance is tripled, their force weakens ninefold, and so forth. Physicists have found that inverse-square laws are well suited to creating stable systems. Like a well-designed electronic dog collar, such a force allows some wandering away from the house but discourages fleeing the whole property.
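The doubling-and-tripling arithmetic above can be checked directly. A one-line sketch of the scaling (the function name is mine, chosen for illustration):

```python
def inverse_square_factor(distance_ratio):
    """Factor by which a mutual force weakens when the separation
    between two objects is multiplied by distance_ratio
    (inverse-square law: force ~ 1 / distance**2)."""
    return distance_ratio ** 2

# Doubling the distance weakens the force fourfold; tripling it, ninefold.
print(inverse_square_factor(2))  # 4
print(inverse_square_factor(3))  # 9
```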

While scientists like Boyle, Dalton, and Mendeleyev focused on discovering the ingredients that make up our world, others tried to map out and understand the invisible forces that govern how things interact and transform. Born on Christmas Day in 1642, Sir Isaac Newton possessed an extraordinary gift for finding patterns in nature and discerning the basic rules underlying its dynamics. Newton’s laws of mechanics transformed physical science from a cluttered notebook of sundry observations to a methodical masterwork of unprecedented predictive power. They describe how forces—pushes and pulls—affect the journeys through space of all things in creation.

If you describe the positions and velocities of a set of objects and delineate all of the forces acting on them, Newton’s laws state unequivocally what would happen to them next. In the absence of force or with forces completely balanced, nonmoving objects would remain at rest and moving objects would continue to move along straight lines at constant speeds—called the state of inertia. If the forces on an object are unbalanced, on the other hand, it would accelerate at a rate proportional to the net force. The extent to which an object accelerates under the influence of a net force defines a physical property called mass. The more massive a body, the harder it is for a given force to change its motion. For example, all other factors being equal, a tow truck’s tug would have much less effect on a monstrous eighteen-wheeler than it would on a sleek subcompact car.
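The proportionality between net force and acceleration fits in one line. Here is a sketch of the tow-truck comparison, with hypothetical masses and force chosen only for illustration:

```python
def acceleration(net_force_newtons, mass_kg):
    """Newton's second law: acceleration = net force / mass."""
    return net_force_newtons / mass_kg

# The same hypothetical 20,000 N tug barely budges a loaded
# 18,000 kg eighteen-wheeler but briskly moves a 1,000 kg subcompact.
print(acceleration(20000, 18000))  # about 1.1 m/s^2
print(acceleration(20000, 1000))   # 20.0 m/s^2
```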

Newton famously showed that gravity is a universal force, attracting anything with mass to anything else with mass. The moon, the International Space Station, and a bread crumb pushed off a picnic table by an ornery ant are all attracted to Earth. The more massive the objects, the greater their gravitational attraction. Thus, mass serves two purposes in physics—to characterize the strength of gravity and to determine the accelerating effect of a force. Because mass takes on both roles, it literally cancels out of the equation that determines the effect of gravitational force on acceleration. Therefore bodies accelerate under gravity’s influence independent of their masses. If it weren’t for the air whooshing by, an aquatic elephant and a mouse up for a challenge would plunge from the high diving board into a swimming pool straight below them at the same rate. The fact that gravitational acceleration doesn’t depend on mass places gravity on different footing from any other force in nature.

The concept of attractive forces offers a means by which large objects can build up from smaller ones—at least on astronomical scales. Take scattered bits of slow-moving material, wait long enough for attractive forces to kick in and they’ll tend to clump together—assuming they aren’t driven apart by even stronger repulsive forces. Attraction offers a natural way for matter to build up from tiny constituents. Therefore it’s not surprising that Newton subscribed to the atomist view, believing that all matter, and even light, is made up of minute corpuscles.

In his treatise on optics Newton wrote, “It seems probable to me that God in the beginning formed matter in solid, massy, hard, impenetrable, movable particles of such sizes and figures and with such other properties and in such proportion to space as most conduced to the end for which he formed them. And that these primitive particles being solids, are incomparably harder than any porous bodies compounded of them, even so hard as never to wear or break in pieces; no ordinary power being able to divide what God himself made one in the first creation.”1

Newton’s belief that God fashioned atoms reflected his deeply held religious views about the role of divinity in creation. He believed that an immortal being needed to design, set into motion, and tweak from time to time an otherwise mechanistic universe. His example, in line with the views of the similarly devout Boyle, showed that atomism and religion were compatible.

As Newton demonstrated, the solar system is guided by gravity. Gravity is important on astronomical scales, but it is far too weak a force on small scales to hold atoms together. The force that stabilizes atoms by holding them together is called the electrostatic force, part of what is known as the electromagnetic interaction. While gravity depends on mass, the electrostatic force affects things that have a property called electric charge.

The renowned eighteenth-century American statesman Benjamin Franklin was the first to characterize electric charge as either positive or negative. Influenced by Franklin and Newton, British natural philosopher Joseph Priestley proposed that the electrostatic force, like gravity, obeys an inverse-square law, only depending on charge instead of mass. While gravity always brings objects together, the electrostatic force can be either attractive or repulsive; opposite charges attract and like charges repel. These conjectures were splendidly proven in the 1780s by French physicist Charles-Augustin de Coulomb, for whom the law describing the electrostatic force is named.

Like the electrostatic force, magnetism is another force that can be either attractive or repulsive. The analogue of positive and negative electric charges is north and south magnetic poles. The ancients were familiar with magnetized iron, or lodestone, and knew that by suspending such a material in the air it would naturally align with the north-south direction of Earth. The term “magnetism” derives from the Greek for lodestone, just as “electricity” stems from the Greek for amber, a material that can be easily electrically charged.

Newton’s model of forces envisioned them as linking objects by a kind of invisible rope that spans the distance between them. It’s like a boy in a first-floor alcove of a church pulling a thin cord that manages to ring a bell in its tower. We call this concept action at a distance. In a way, it is an extension of the Democritean concept of atoms moving in an absolute void. Somehow, two things manage to influence each other without having anything in between to mediate their interaction.

British physicist Michael Faraday found the notion of action at a distance not very intuitive. He proposed the concept of electric and magnetic fields as intermediaries that enable electric and magnetic forces to be conveyed through space. We can think of a field as a kind of ocean that fills all of space. Placing a charge in an electric field or pole in a magnetic field is like an ocean liner disturbing the water around it and disrupting the paths of other boats in its wake. If you were kayaking off the coast of California and suddenly began rocking back and forth, you wouldn’t be surprised to see an approaching vessel generating major waves. Similarly, when a charge or pole feels a force it is due to the combined effect on the electric or magnetic field of other charges or poles.

A child playing with a bar magnet in a room illuminated by an electric lightbulb would probably have little inkling that the two phenomena have anything to do with each other. Yet as Danish physicist Hans Christian Ørsted, Faraday, and other nineteenth-century researchers experimentally explored, electrical and magnetic effects can be generated by each other. For example, as Ørsted showed, flipping an electrical switch on and off while placing a compass nearby can deflect its magnetic needle. Conversely, as Faraday demonstrated, jiggling a bar magnet back and forth near a wire can create an electrical current (moving charge) within it—a phenomenon called induction. So a clever enough child could actually light her own play space with her own bar magnet, bulb, and wire.

It took a brilliant physicist, James Clerk Maxwell, to develop the mathematical machinery to unite all electrical and magnetic phenomena in a single theory of electromagnetism. Born in Edinburgh, Scotland, in 1831, Maxwell was raised on a country estate and grew up with a fondness for nature. He loved walking along on the muddy banks of streams and tracing their meandering courses. In his adult life, as a professor at King’s College, University of London, he became interested in a different kind of flow, the paths of electric and magnetic field lines fanning out from their sources.

In 1861, convinced that both electricity and magnetism could be explained through the same set of equations, Maxwell synthesized everything that was known at the time about their interconnections. Coulomb’s law showed how charge produced an electrostatic force, by way of an electric field. Another law, developed by French physicist André-Marie Ampère based on Ørsted’s work, indicated how electric current generated a magnetic field. Faraday’s law demonstrated that changing magnetic fields induce electric fields, and another result indicated that changing electric fields create magnetic fields. Maxwell combined these, added a corrective term to Ampère’s law, and solved the complete set of equations, resulting in his influential paper “On Physical Lines of Force.”

Maxwell’s solution demonstrated that whenever charges oscillate, for example, electricity running up and down an antenna, they produce changing electric and magnetic fields propagating through space at right angles to each other. That is, if the electric field strength is changing in the vertical direction, the magnetic field is changing in the horizontal direction, and vice versa. The result is what is called an electromagnetic wave radiating outward from the source like ripples from a stone tossed into a pond.

We can think of electromagnetic radiation as akin to a line dance alternating between men and women, with successive dancers engaged in different hand motions at right angles to each other. Suppose the first dancer is a man who raises his hands up and down. As soon as the woman behind him notices his arms dropping, she shifts her own hands left and right. Then, triggered by her motion, the man behind her lifts his arms up and down, and so forth. In this manner, a wave of alternating hand motions rolls from the front of the line to the back. Similarly, through successive electric and magnetic “gestures,” an electromagnetic wave flows from its source throughout space.

One of the most surprising aspects of Maxwell’s discovery was his calculation of the speed of electromagnetic waves. He determined that the theoretical wave velocity matched the speed of light—leading him to the bold conclusion that electromagnetism is light. A mystery dating from ancient times was finally resolved—light is not a separate element (the “fire” of classical belief) but rather a radiative effect generated by moving electric charges.

Until the turn of the nineteenth century, science was aware of only optical light: the rainbow of colors that make up the visible spectrum. Each pure color corresponds to a characteristic wavelength and frequency of electromagnetic waves. A wavelength is the distance between two succeeding peaks of the rolling sierra of electromagnetic oscillations. Frequency is the rate per second that peaks of a given wave pass a particular point in space—like someone standing on an express train platform and counting how many carriages zoom by in one second. Because light in the absence of matter always travels at the same speed, as defined by the results of Maxwell’s equations, its wavelength and frequency are inversely dependent on each other. The color with the largest wavelength, red, has the lowest frequency—like enormous freight cars taking considerable time to pass a station. Conversely, violet, the color with the shortest wavelength, possesses the highest frequency—akin to a tiny caboose whizzing by.
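The inverse dependence of wavelength and frequency follows from the fixed speed of light: frequency = speed / wavelength. A brief sketch, using round wavelengths of roughly 700 nm for red and 400 nm for violet (illustrative values near the ends of the visible range):

```python
C = 299_792_458  # speed of light in vacuum, meters per second

def frequency(wavelength_m):
    """Frequency of a light wave from its wavelength: f = c / wavelength."""
    return C / wavelength_m

red = frequency(700e-9)     # red light, ~700 nm
violet = frequency(400e-9)  # violet light, ~400 nm

# The longer red wavelength yields the lower frequency, the short
# violet wavelength the higher one, just as the train analogy suggests.
print(f"red: {red:.2e} Hz, violet: {violet:.2e} Hz")
```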

The visible rainbow comprises but a small segment of the entire electromagnetic spectrum. In 1800, British astronomer William Herschel, best known for discovering the planet Uranus, was measuring the temperatures of various colors and was amazed to find an invisible region beyond the red end of the spectrum that still produced a notable thermometer reading. The low-frequency light he measured just beyond the range of visibility is now called infrared radiation.

The following year, after learning about Herschel’s experiment, German physicist Johann Ritter decided to explore the region of the spectrum just beyond violet. He found that invisible rays in that zone, later called ultraviolet radiation, produced a noticeable reaction with the chemical silver chloride, known to react with light.

Radio waves were the next type of electromagnetic radiation to be found. In the late 1880s, inspired by Maxwell’s theories, German physicist Heinrich Hertz constructed a dumbbell-shaped transmitter that produced electromagnetic waves of frequencies lower than infrared. A receiver nearby picked up the waves and produced a spark. Measuring the velocity and other properties of the waves, Hertz demonstrated that they were unseen forms of light—thereby confirming Maxwell’s hypothesis.

The known spectrum was to expand even further in 1895 with German physicist Wilhelm Roentgen’s identification of high-frequency radiation produced by the electrical discharge from a coil enclosed in a glass tube encased in black cardboard. The invisible radiation escaped the tube and case, traveled more than a yard, and induced a chemically coated paper plate to glow. Because of their penetrating ability, X rays, as they came to be called, have proven extremely useful for imaging. They’re not the highest frequency light, however. That distinction belongs to gamma rays, identified by French physicist Paul Villard about five years after X rays were discovered and capping off the known electromagnetic spectrum.

The picture of light described by Maxwell’s equations bears little resemblance to the Newtonian idea of corpuscles. Rather, it links electromagnetic radiation with other wave phenomena such as seismic vibrations, ocean waves, and sound—each involving oscillations in a material medium. This raises the natural question, What is the medium for light? Could light travel through absolute vacuum?

Many nineteenth-century physicists believed in a dilute substance, called ether, filling all of space and serving as the conduit for luminous vibrations. One prediction of that hypothesis is that light’s measured speed should vary with the direction of the ether wind. A famous 1887 experiment by American researchers Albert Michelson and Edward Morley disproved the ether hypothesis by showing that the speed of light is the same in all directions. Still, given the compelling analogy to material waves, it was hard for the scientific community to accept that light is able to move through sheer emptiness.

The constancy of the speed of light in a vacuum raised another critical question. In a scenario pondered by the young Albert Einstein, what would happen if someone managed to chase and catch up with a light wave? Would it appear static, like a deer frozen in a car’s headlights? In other words, in that case would the measured speed of light be zero? That’s what Newtonian mechanics predicts, because if two things are at the same speed, they should seem to each other not to be moving. However, Maxwell’s equations make no provision for the velocity of the observer. The speed of light always flashes at the same value, lit up by the indelible connections between electric and magnetic fluctuations. Einstein would devote much of his youthful creativity to resolving this seeming contradiction.

Einstein’s special theory of relativity, published in 1905, cleared up this mystery. He modified Newtonian mechanics through extra factors that stretch out time intervals and shrink spatial distances for travelers moving close to light speed. These two factors—known respectively as time dilation and length contraction—balance in a way that renders the measured speed of light the same for all observers. Strangely, they make the passage of time and the measurement of length dependent on how fast an observer happens to be moving, but that’s the price Einstein realized he had to pay to reconcile Maxwell’s equations with the physics of motion.

Einstein found that in redefining distance, time, and velocity, he also had to rework other properties from Newtonian physics. For example, he broadened the concept of mass to encompass relativistic mass as well as rest mass. While rest mass is the inherent amount of matter an object possesses, changing only if material is added or subtracted, relativistic mass depends on the object’s velocity. An initially nonmoving chunk of matter starts out with its rest mass and acquires a greater and greater relativistic mass if it speeds up faster and faster. Einstein determined that he could equate the total energy of an object with its relativistic mass times the speed of light squared. This famous formula, E = mc², implied that under the right circumstances mass and energy could transform into each other, like ice into water.
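The growth of relativistic mass with velocity is governed by the same factor that stretches time and shrinks lengths, the Lorentz factor. A minimal numerical sketch (the 1 kg test mass is arbitrary):

```python
C = 299_792_458  # speed of light in vacuum, meters per second

def gamma(v):
    """Lorentz factor: 1 at rest, growing without bound as v approaches c.
    It governs time dilation, length contraction, and relativistic mass."""
    return 1.0 / (1.0 - (v / C) ** 2) ** 0.5

def total_energy(rest_mass_kg, v):
    """Einstein's relation E = mc^2, with m the relativistic mass
    (gamma times the rest mass)."""
    return gamma(v) * rest_mass_kg * C ** 2

# At rest, a 1 kg chunk already holds about 9 x 10^16 joules;
# at 90 percent of light speed, gamma is roughly 2.3, so its
# total energy is more than double its rest energy.
print(total_energy(1.0, 0.0))
print(total_energy(1.0, 0.9 * C))
```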

Yet another question to which Einstein would apply his legendary intellect concerned whether light’s energy depends solely on its brightness or has something to do with its frequency. The traditional theory of waves associates their energy with their amount of vibration; waves rising higher carry more energy than flatter waves. For example, pounding on a drum harder produces stronger vibrations that result in a louder, more energetic sound. Just as loudness represents the intensity of sound, a function of the amplitude or height of its waves, brightness characterizes the intensity of light, similarly related to its wave amplitude.

An object that absorbs light perfectly is called a blackbody. Heat up a blackbody box (a carton covered with dark paper, say) and like any hot object it starts to radiate. If you assume that this radiation is in the form of electromagnetic waves distributed over every possible frequency and attempt to figure out how much of each frequency is actually produced, a problem arises. Just as more folded napkins can fit into a carton than unfolded napkins, more types of short-wavelength vibrations can fit into a box than long-wavelength vibrations. Hence, calculations based on classical wave models predict that armies of short-wavelength modes would seize the bulk of the available energy compared to the paltry set of long-wavelength vibrations. Thus, the radiation from the box would be skewed toward short-wavelength, high-frequency waves such as ultraviolet and beyond. This prediction, called the ultraviolet catastrophe, is not what really happens, of course; otherwise if you heat up a food container that happens to have a dark coating and set it on a kitchen table, it would start emitting UV radiation like a tanning bed, harmful X rays, and even lethal gamma rays. Clearly the presumption that light is precisely like a classical wave is a recipe for disaster!

In 1900, German physicist Max Planck developed a mathematical solution to the blackbody mystery. In contrast to the classical wave picture of light, which imagines it delivering energy proportional to its brightness, he proposed that light energy comes in discrete packages, called “quanta” (plural of “quantum,” from the Latin for “how much”), with the amount of energy proportional to the light’s frequency. The constant of proportionality is now called Planck’s constant. Because a single high-frequency quantum carries a comparatively large lump of energy, such vibrations are rarely excited; Planck’s proposal thus eliminated the ultraviolet catastrophe by channeling energy into lower frequencies.
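Planck's relation is a single multiplication, E = h × f. A sketch comparing a red quantum with an ultraviolet one (the two frequencies are illustrative round numbers):

```python
H = 6.62607015e-34  # Planck's constant, in joule-seconds

def quantum_energy(frequency_hz):
    """Planck's relation: the energy of one quantum is E = h * f."""
    return H * frequency_hz

red = quantum_energy(4.3e14)  # red light, roughly 4.3 x 10^14 Hz
uv = quantum_energy(1.0e15)   # ultraviolet, roughly 10^15 Hz

# The ultraviolet quantum costs more than twice the energy of the red one,
# which is why high-frequency modes are so hard to excite in a hot box.
print(red, uv)
```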

Five years later, Einstein incorporated the quantum idea into a remarkable solution of a phenomenon called the photoelectric effect. The photoelectric effect involves what happens when light shines on a metal, releasing electrons (negatively charged particles) in the process. Einstein showed that the light delivers energy to the electrons in discrete quanta. In other words, light has particlelike as well as wavelike qualities. His solution offered the fledgling steps toward a full quantum theory of matter and energy. With his special relativity, energy-matter equivalence, and photoelectric papers all published in 1905, no wonder it is known as Einstein’s “miracle year.”

Soon thereafter, the Russian-born German mathematician Hermann Minkowski recast special relativity in an extraordinary fashion. By labeling time the fourth dimension—supplementing the spatial dimensions of length, width, and height—he noticed that Einstein’s theory took on a much simpler form. Abolishing space and time as separate entities, Minkowski declared the birth of four-dimensional “space-time.”

Einstein soon realized that space-time would be a fine canvas for sketching a new theory of gravity. Though he recognized the success of Newton’s theory, Einstein wished to construct a purely local explanation based on the geometry of space-time itself. He made use of the fact that gravitational acceleration is independent of mass to formulate what he called the equivalence principle: a statement that there is no physical distinction between free-falling objects and those at rest. From this insight he found a way to match up the local effect of gravity in every region of space-time with the geometry of that region. Matter, he proposed, bends space-time geometry. This warping forces objects in the vicinity to follow curved paths. For example, due to the Sun’s distortion of space-time, the Earth must travel in an elliptical orbit around it. Thus, space-time curvature, rather than invisible, distant pulls, is the origin of gravity. Einstein published his masterful gravitational description—called the general theory of relativity—in 1915.

A basic analogy illustrates the general relativistic connection between material and form. Consider space-time to be like a mattress. If nothing is resting on it, it is perfectly flat. Now suppose a sleepy elephant decides to take a nap. When it lies down, the mattress would sag. Any peanuts the elephant might have dropped on its surface while snacking would travel in curved paths due to its distortion. Similarly, because the Sun presses down on the solar system’s space-time “mattress,” all of the planets in the Sun’s vicinity must journey along curved orbits around it.

One of the outstanding features of general relativity is that it offers clues as to the origin of the universe. Coupled with astronomical evidence, it shows that there was a beginning of time when the cosmos was extremely hot and dense. Over billions of years, space expanded from minute proportions to scales large enough to accommodate more than one hundred billion galaxies, each containing billions to hundreds of billions of stars.

The idea of spatial expansion surprised Einstein, who expected that his theory of gravity would be consistent with a static universe. Inserting a sample distribution of matter into the equations of general relativity, he was astonished to find the resulting geometry to be unstable—expanding or contracting with just the tiniest nudge. It was like a rickety building that would topple over with the mere hint of a breeze. Given his expectations for large-scale constancy, that wouldn’t do. To stabilize his theory he added an extra term, called the cosmological constant, whose purpose was essentially to serve as a kind of “antigravity”—preventing things on the largest scale from clumping together too much.

Then in 1929, American astronomer Edwin Hubble made an astonishing discovery. Data taken at the Mount Wilson Observatory in Southern California demonstrated that all of the other galaxies in the universe, except for the relatively close ones, are moving away from our own Milky Way galaxy. This showed that space is expanding. Extrapolating backward in time led many researchers to conclude that the universe was once far, far smaller than it is today—a proposal later dubbed the Big Bang theory.

Once he realized the implications of Hubble’s findings, Einstein discarded the cosmological constant term, calling it his “greatest blunder.” The result was a theory that modeled a steadily growing universe. As Russian theorist Alexander Friedmann had demonstrated in previous work, depending on the density of the universe compared to a critical value, this growth would either continue forever or reverse course someday. Recent astronomical results have indicated, however, that not only is the universe’s expansion continuing, it is actually speeding up. Consequently, some theorists have suggested a revival of the cosmological constant as a possible explanation of universal acceleration.

Today, thanks to detailed measurement of the background radiation left over from the Big Bang, the scientific community understands many aspects of how the early universe developed and acquired structure. This radiation was released when atoms first formed and subsequently cooled as space expanded. Hence, it offers a snapshot of the infant universe, showing which regions were denser and which were sparser. Einstein’s theoretical achievements combined with modern astronomical observations have opened a window into the past—enabling scientists to speak with authority about what happened just seconds after the dawn of time.

Science has made incredible strides in answering many of the fundamental questions about the cosmos. Our sophisticated understanding of the building blocks of matter, the fundamental forces, and the origins of the universe reflects astonishing progress in chemistry, physics, astronomy, and related fields. Yet our curiosity compels us to press even further—to attempt to roll back the hands of time to the nascent instants of creation, a mere trillionth of a second after the Big Bang, and understand the fundamental principles underlying all things.

Since we cannot revisit the Big Bang, the Large Hadron Collider (LHC) will serve as a way of reproducing some of its fiery conditions through high-energy particle collisions. Through the relativistic transformation of energy into mass, it will offer the possibility of spawning particles that existed during the embryonic moments of physical reality. It will also offer the prospect of exploring common origins of the natural forces. Thus, from the chaotic aftermath of particles smashing together at near light speeds, we could possibly unlock the secrets of a lost unity.

Distilling novel ideas from turbulence is nothing new to the people of Geneva. Only six miles southeast of the LHC is Geneva’s stunningly beautiful old town. The historic streets and squares, where Jean Calvin once preached religious independence and Jean-Jacques Rousseau once taught about social contracts, are used to all manner of revolutionary currents. Soon Geneva could witness yet another revolution, this time in humanity’s comprehension of the fundamental nature of the cosmos.