The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics - Robert Oerter (2006)

Chapter 3. The End of the World As We Know It

Can Nature possibly be as absurd as it seems to us in these atomic experiments?

—Werner Heisenberg

The crack of the bat. The center fielder, peering intently at the batter, hesitates only a fraction of a second, then turns and runs at full speed away from home plate. As he nears the home run wall he turns, raises his glove, and the baseball falls neatly into the pocket.

The center fielder has solved the quintessential problem of classical physics, which is to say nineteenth-century physics. During the brief interval after the ball was hit, he estimated its speed and direction. Years of experience allowed him to deduce where the ball would land and how fast he needed to run to be there in time to catch it. This is exactly what a physicist seeks to do: predict the future based on knowledge of the present. That is, knowing the position of the ball and the velocity (that is, the speed and direction) of the ball at the time it was hit, the physicist attempts to predict the position and velocity at any later time.

The physicist summarizes her years of experience in a set of mathematical laws. In the case of the baseball, those laws must encapsulate the effects of gravity and air resistance on the ball’s motion. More generally, as we’ve seen, we need to know what fields there are and how those fields affect the object’s motion. Given a snapshot of the universe, or some part of it, at any time, the laws of physics let us picture the world at any other time. This classical worldview consists of:

■ A list of every object (particle or field) in the world and its location. For fields, which extend throughout space, we need the value of the field at each point in space, for example, the lengths and directions of the electric and magnetic field arrows.

■ A description of how every object is changing. For particles, this means the velocity, which tells how the particle’s position is changing. For fields, this means the rate of change of the length and direction of the field arrows at each point in space.

■ A theory of interactions (forces) between particles and fields. We need to know how particles create fields and how particles respond to fields.

The first two items completely specify the state of the universe at a particular time. The third item allows us (in principle) to extend the description to another time, future or past. In other words, if we know the state of the universe at any one time, we can completely predict the future, and completely reconstruct the past.

Special relativity, counterintuitive as it is, still fits comfortably into the classical worldview. Predicting the state of the universe at any time, we know from special relativity, only makes sense with respect to a chosen frame of reference. Special relativity merely changes some of the equations we use to extrapolate our snapshot of the world from one time to another.

Quantum mechanics completely overturned the classical worldview. Quantum mechanics denies that such a classical description can ever, even in principle, be given. Not only are the future and past unknowable, in the sense of knowing the first two items in the preceding list, but even knowing the present (classical) state of the universe is impossible. Nor is it a problem with finding the information in an infinite number of places at once: Even for one particle and one instant in time, it is impossible in principle to know both the location and the velocity precisely.

Why did physicists abandon the predictive power of classical physics and embrace the probabilities and uncertainties of quantum mechanics? Some observers have suggested that they were infatuated with eastern mysticism and mathematical group theory and were looking for a way to work them into physics. On the contrary, they were forced by Nature to accept this strange description of her. Two phenomena in particular triggered the quantum revolution: the photoelectric effect and the structure of the atom.

If you have never read about quantum mechanics before (perhaps even if you have), you will no doubt find it confusing, maybe incomprehensible. If you do, take heart! As the brilliant (and Nobel prize-winning) physicist Richard Feynman put it:

It is my task to convince you not to turn away because you don’t understand it. You see, my physics students don’t understand it either. That’s because I don’t understand it. Nobody does.1

The Unbearableness of Being Light

In the seventeenth century, opinion was divided: Did light consist of particles emitted by the light source and absorbed by the eye, or was it a wave, a vibration of some medium, the way sound is a vibration of the air? Isaac Newton began investigating light in 1665 using a prism he bought at a traveling fair. (His groundbreaking demonstration that the colors of the rainbow could be recombined into white light had to wait until the fair came around again and he could buy another prism.) Newton eventually became a strong proponent of the particle model. Using the particle model, he was able to explain several important optical phenomena.

First, light travels in a straight line. Because particles travel in a straight line unless acted on by an external force (a fact now known as Newton’s first law of motion), Newton could explain the straight-line motion of light by making one additional assumption: that the light particles are weightless, that is, they are not affected by gravity.

Second, if you shine a flashlight at a mirror, the beam bounces off at the same angle as the incident angle. This behavior is also explained by the particle model; a ball bouncing off the floor or wall does the same thing.

Third, when light travels from one medium to another, a light beam changes direction. For instance, when light goes from the water in a fish tank into air, the light path is bent. This effect, known as refraction, makes a ruler partially immersed in water appear crooked. Newton was able to explain refraction by assuming that a force acts on the light particles when they are near the interface between the water and the air, so that their speed changes as they travel into a different medium.

At the same time, Christiaan Huygens, a Dutch physicist, was developing a theory in which light is a wave. He was able to explain reflection and refraction using this model, but the straight-line propagation of light was more difficult. The problem is that waves tend to bend around corners. This is why it is possible to hear someone talking who is in the next room, even if you can’t see them. The sound bends around the corner, but the light doesn’t.

The spreading of a wave as it passes a corner or an obstacle is called diffraction. In a report published posthumously in 1665, the Italian physicist Francesco Grimaldi demonstrated that light does, in fact, spread out as it passes through an opening in an opaque screen, forming a series of dark and light bands a small distance into the shadow region, as shown in the following image.

Grimaldi proved that light does bend around corners—it just doesn’t bend nearly as much as sound does. The tiny amount of bending makes sense if the wavelength of light, the peak-to-peak distance between successive waves, is very small. In fact, light must be graced with a wavelength a hundred thousand times shorter than that of sound to explain Grimaldi’s observations.


You can check Grimaldi’s results for yourself. Find a straight-edged object such as a ruler, a pen, or a Republican. Hold the ruler between your eye and a light source with a sharp outline, a fluorescent light fixture, for instance. (It helps if the room is dark except for the light source.) The light will seem to come through the ruler’s edge. What you are actually seeing is light that has bent around the edge of the ruler.

Even when Newton came across Grimaldi’s report of the diffraction of light by an aperture, he stubbornly stuck to his particle model. He explained the spreading of the light by invoking a force on those light particles that just skim the edges of the opening. The bright and dark bands he explained by supposing that the force was sometimes attractive and sometimes repulsive, so that “the Rays of light in passing by the edges and sides of bodies (are) bent several times backwards and forwards, with a motion like that of an Eel.” Not a very good explanation, but with Newton’s magisterial reputation behind it, the particle model remained popular.

The controversy raged on for the next 200 years, but not much progress was made until the early 1800s when Thomas Young set up a simple experiment using a light source, an opaque screen with two narrow slits, and a second screen to view the light that passes through. When only one slit was open, Young saw a central band of light, with Grimaldi’s diffraction pattern at the edges. When the second slit was opened, though, a series of bright and dark bands formed within the central illuminated region.


Now, imagine standing at the location of the viewing screen with your eyes at the position of one of the dark bands on the screen. Before the second slit is opened, you see bright light coming from the open slit. When the other slit is opened, you suddenly see no light at all! In other words, portions of the illuminated region become darker upon the addition of the light from the second slit. This cannot happen in Newton’s theory, Eels or no Eels. Opening the second slit can only add light particles: You should get more light, not less. Young, using the wave theory of light, was able to explain how light from one slit could “interfere” with the light from the other slit.

To understand Young’s explanation, let’s look at water waves generated by wiggling two sticks at opposite ends of a tank. The wave on the left will travel to the right and the one on the right will travel left until the waves meet in the middle. The two sticks are being wiggled in opposite directions, so the front of the left-hand wave is above the normal surface of the water while the front of the right-hand wave is below the normal surface. Therefore, when the two waves meet at the midpoint of the tank, the two disturbances will cancel each other. The water level at the exact midpoint of the tank will always remain at the original, undisturbed level. The cancellation of two waves is called destructive interference.


Destructive interference is the source of the bright and dark bands that appear when the second slit is opened in Young’s two-slit experiment. Any point (other than the exact center point) on the viewing screen is slightly farther from one of the slits than from the other slit. There are certain points on the screen where the path difference is just enough so that the light wave from one slit is doing the exact opposite of the wave from the other slit. The extra distance is just enough for the wave from the lower slit to go through half of its cycle.


As a result, it is down whenever the other wave is up, and vice versa. When the two waves meet at the viewing screen, they will always cancel. The dark bands, called nodes, are the points on the screen where this cancellation happens. Between the nodes are the points where the two waves add: the bright bands.
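The geometry of the nodes can be sketched with a few lines of arithmetic. In the small-angle approximation, a dark band appears wherever the path difference equals a half-integer number of wavelengths; the slit separation, wavelength, and screen distance below are illustrative values, not numbers from the text:

```python
# Positions of the dark bands (nodes) in Young's two-slit experiment,
# in the small-angle approximation. All numbers are illustrative.
wavelength = 500e-9   # green light, in meters
d = 0.1e-3            # slit separation, in meters
L = 1.0               # distance from slits to viewing screen, in meters

# A node occurs where the path difference is a half-integer number of
# wavelengths: d * sin(theta) = (m + 1/2) * wavelength. For small
# angles the node's height on the screen is y ~ L * sin(theta).
for m in range(3):
    y = (m + 0.5) * wavelength * L / d
    print(f"dark band {m}: {y * 1000:.2f} mm from the center")
```

Notice that the millimeter-scale spacing comes from the tiny ratio of wavelength to slit separation, which is why the wave nature of light is so easy to miss.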

Destructive interference is the hallmark of wave phenomena. As we have seen, no particle model can explain how the brightness can decrease at the nodes when a second slit is opened. Young’s experiment convinced physicists that light is in fact a wave. By the end of the nineteenth century, Maxwell’s electromagnetic theory gave a firm theoretical foundation for this belief. Victory for the wave model seemed assured.

Then, there was the photoelectric effect. Shine a light on certain metals and an electric current will start to flow. Physicists studying this effect assumed that the energy in the light was being transferred to the electrons in the metal, but there was a puzzle. Blue light causes current to flow, but red light doesn’t. The puzzling aspect is that, in Maxwell’s theory of light, the energy provided by the light depends on the brightness of the light. So brighter light should cause more of a current regardless of what color, or wavelength, of light you are using. This is not what happens, however. Light of long wavelength (towards the red end of the spectrum) doesn’t cause any current to flow, no matter how bright it is. Light with shorter wavelength (blue light) will start a current flowing, but there is a sharp cutoff in wavelength. Wavelengths longer than this cutoff value generate no current.

In 1905, the same year he published his theory of special relativity, Einstein suggested that the photoelectric effect could be explained if light was composed of particle-like packets, called photons. The energy of each packet is inversely proportional to its wavelength: Blue light, having a shorter wavelength, consists of packets with a higher energy than red light, which has a longer wavelength. In other words, the kick given to the electron by the photon depends on the color, not the brightness, of the light.

Picture the electrons in the metal as soccer balls sitting on a low-lying playing field surrounded by higher ground. It is Saturday morning and the kids on the soccer team, wearing bright red uniforms, are trying to kick the balls up the hill onto the higher ground. None of them are strong enough to get a ball up the hill, though; each ball goes up a bit and then rolls back down. No matter how many kids are on the field, none of the balls ever makes it up the hill. Then, Soccer Mom comes onto the field, wearing her light blue coach’s outfit. She easily boots the ball up the hill and it rolls off into the distance.

Einstein’s proposal worked beautifully for the photoelectric effect. Long-wavelength (red) photons, like the kids on the soccer field, don’t give the electron enough energy to make it up the “hill,” which is formed for the electron by the force of the atoms in the metal. Brighter light has more photons and carries more energy, just as in Maxwell’s theory. But the electron only absorbs one photon at a time, so no matter how many photons there are, none make it up the hill to become conduction electrons. As with the young soccer players, having more on the playing field doesn’t help. Blue light has short-wavelength photons that carry more of a punch, like Soccer Mom. An electron that absorbs a photon of blue light gains enough energy to make it up the hill and become part of the flowing current.
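Einstein’s relation can be checked with simple arithmetic: a photon’s energy is Planck’s constant times the speed of light, divided by the wavelength. The 2.3 eV height of the “hill” below is roughly the value for potassium, used here purely as an illustration:

```python
# Photon energy from Einstein's relation E = h*c / wavelength.
# The 2.3 eV "hill" is an illustrative work function (roughly potassium).
h = 6.626e-34       # Planck's constant, J*s
c = 2.998e8         # speed of light, m/s
eV = 1.602e-19      # joules per electron-volt

def photon_energy_eV(wavelength_m):
    return h * c / wavelength_m / eV

hill = 2.3  # energy needed to kick an electron "up the hill", in eV
for color, wl in [("red", 700e-9), ("blue", 450e-9)]:
    E = photon_energy_eV(wl)
    verdict = "ejects an electron" if E > hill else "no current, however bright"
    print(f"{color} ({wl*1e9:.0f} nm): {E:.2f} eV -> {verdict}")
```

A red photon carries under 2 eV, short of the hill no matter how many photons arrive; a single blue photon carries enough on its own, just as in the soccer analogy.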

For 40 years, in test after test, Maxwell’s electromagnetic theory had triumphed, proving conclusively (so physicists thought) that light was a wave. An entire industry of the generation, distribution, and use of electrical power was built on this understanding of electricity and magnetism. Now Einstein was proposing that light had a particle nature, as Newton had suggested more than two centuries earlier! The old question, “Is light a particle or a wave?” was suddenly resurrected from the grave that Maxwell had made for it. How can it be that Maxwell’s equations are so brilliantly successful in explaining wave phenomena like interference and diffraction if light is “really” a particle and not a wave at all? The answer only came with the development of relativistic quantum field theory, the subject of Chapter 5, “The Bizarre Reality of QED.” As a provisional solution, though, we can think of light as a field that indeed obeys Maxwell’s equations, but at the same time it is a field that comes in chunks—it can only be emitted or absorbed by matter as a whole packet of the correct energy for its wavelength.

Einstein’s proposal was not yet a replacement for Maxwell’s theory; it was still incomplete. The concept of photons was difficult to square with Maxwell’s electromagnetic wave theory of light. Almost a half-century would pass before a complete quantum theory of light was found, one that made sense of both Maxwell’s equations and the photon nature of light.

The Death of Certainty

The photoelectric effect was a problem for classical physics, but it was easy to imagine that photoelectric metals had some peculiarity that caused the strange behavior. The structure of the atom raised difficulties that could not be shrugged off so easily. In 1911, Ernest Rutherford deduced from his experiments that atoms were composed of a positively charged core, called the nucleus, surrounded by much lighter negatively charged electrons. Thus, the atom was thought to look like a tiny solar system, with the nucleus as the sun, electrons as the planets, and the electrical attraction of the positively charged nucleus and the negatively charged electron replacing gravity as the force that holds it together.

According to Maxwell’s theory, however, an electron in such an orbit would have to emit electromagnetic radiation, thereby losing energy, which would send it into a “death spiral,” which could not end until the electron reached the nucleus. With all the negatively charged electrons in the nucleus canceling out the positive nuclear charge, there would be no electric repulsion keeping the nuclei at atomic distances from each other. In a fraction of a second a house would collapse to the size of a grain of sand. No objects would retain their shapes or size. All matter would be unstable. Here was a serious difficulty. It couldn’t be brushed off as a peculiarity of a few special materials. The laws of physics were saying that matter as we know it simply can’t exist. It was time for some new laws of physics.

Physicists didn’t see the radiation from the death spiral, but they did see a different pattern of electromagnetic radiation coming from atoms. Since the mid-nineteenth century, it was known that each chemical element has a characteristic spectrum of light. Send the light from an incandescent bulb through a prism and it spreads into a complete rainbow. If you use a fluorescent bulb, however, you see instead a series of bright lines of color. A fluorescent bulb is filled with a particular gas, and each different gas has its own line spectrum, a chemical fingerprint. The origin of these bright lines was a complete mystery. Quantum mechanics, the theory that was developed to explain them, did more than change a few of the equations of classical physics. It required a completely new view of reality and of the goals of physics.

It took many years and the combined effort of many brilliant minds to create the theory of quantum mechanics. Rather than lead you through the historical development, I am going to jump to the end and describe the picture of the microworld that physicists ended up with. Discussion of quantum mechanics by itself could occupy an entire book; indeed some excellent books have been written on the subject. See the “Further Reading” section for some suggestions.

In classical physics, the universe is composed of particles and fields. A complete description of the world at any instant must specify the locations of the particles, the values of the fields, and how both are changing. From this information and the laws of interaction between particles and fields, the complete future of the universe can be predicted.

In quantum mechanics, the basic picture is radically different:

A. The motion of any particle is described by a wave, known as the wavefunction or quantum field.

B. The probability for the particle to be detected at a given point is the square of the quantum field at that point.

C. The quantum field changes according to a mathematical law known as the Schrödinger equation.

In (A), we have the first hint that particles and fields are not such wildly different entities as they appear in classical physics. The full realization of this wave-particle duality comes in relativistic quantum field theory, where particles and fields are treated identically. Relativistic quantum field theory is, then, a unification of particles and the forces acting on them. The terrible price we have to pay for the unification of particles and fields is revealed in (B). The laws of quantum mechanics are random; only probabilities can be determined. We give up the ability to predict the future that classical mechanics promised us—we can only have limited, probabilistic knowledge of the outcome of any experiment. Not only that, but even the present is not completely knowable. In quantum mechanics, a complete description of the current state of the universe consists of specifying the value of each quantum field at every point in space. From (B), we learn that such a description doesn’t tell us where the particle is, it only gives the probability of finding a particle at any given location. Moreover, the wave description of particles involves a tradeoff: the more precisely the location of a particle is known, the more uncertain is its velocity. This result is called the Heisenberg uncertainty principle.

The idea that matter had wave-like properties, a key aspect of quantum mechanics, was first proposed in 1923 in the doctoral thesis of a young French nobleman, Louis-Victor Pierre-Raymond de Broglie. De Broglie was inspired by Einstein’s suggestion that light might have particle-like properties, even though the interference effects “proved” it was a wave. If a wave could act at times like a particle, de Broglie reasoned, why couldn’t a particle, the electron for instance, act like a wave? According to de Broglie, every particle has a wavelength associated with it that depends on the mass and the velocity of the particle. The faster the particle moves, the shorter its wavelength. Just as for photons, a smaller wavelength means more energy.
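De Broglie’s relation, wavelength equals Planck’s constant divided by mass times velocity, explains why wave behavior shows up for electrons but never for baseballs. The masses and speeds below are illustrative choices, not figures from the text:

```python
# De Broglie wavelength: lambda = h / (m * v).
# Compares an electron with a baseball (illustrative numbers).
h = 6.626e-34  # Planck's constant, J*s

def de_broglie(mass_kg, speed_m_s):
    return h / (mass_kg * speed_m_s)

electron = de_broglie(9.109e-31, 1.0e6)   # electron at a million m/s
baseball = de_broglie(0.145, 40.0)        # baseball off the bat
print(f"electron: {electron:.2e} m (about the size of an atom)")
print(f"baseball: {baseball:.2e} m (far too small to ever observe)")
```

The electron’s wavelength is comparable to atomic dimensions, so its wave nature dominates inside an atom; the baseball’s is dozens of orders of magnitude smaller than anything measurable, which is why the center fielder can trust classical physics.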

De Broglie’s suggestion, like Einstein’s theory of the photoelectric effect, was not a complete solution to the problem of electron motion. De Broglie gave no equation for his matter waves analogous to Maxwell’s electromagnetic equations.

Three years later, Erwin Schrödinger came up with an equation to describe the motion of the matter waves. As reconstructed by physicist Leon Lederman, it happened this way:

Leaving his wife at home, Schrödinger booked a villa in the Swiss Alps for two and a half weeks, taking with him his notebooks, two pearls, and an old Viennese girlfriend. Schrödinger’s self-appointed mission was to save the patched-up, creaky quantum theory of the time. The Viennese-born physicist placed a pearl in each ear to screen out any distracting noises. Then he placed the girlfriend in bed for inspiration. Schrödinger had his work cut out for him. He had to create a new theory and keep the lady happy. Fortunately he was up to the task.2

Before we look at how quantum mechanics explained the line spectra of atoms, let’s apply the matter wave idea and the Schrödinger equation to a simpler example, the harmonic oscillator. This special case appears often in physics, and it will be crucial to the discussion of relativistic quantum field theory.

Forget about quantum mechanics for a moment and picture an object sliding back and forth in a smooth bowl. It doesn’t matter what the object is; let’s say it’s an ant on rollerblades. If you place the ant somewhere on the side of the bowl and release it, the ant will roll down to the bottom, then up the other side of the bowl. When it reaches the exact height at which you let go of it, it comes momentarily to rest, then rolls back down the other way until it returns to the point where you let it go. Assuming there is no friction, the ant will then repeat the same motion precisely, forever. You can start the ant from any height you like. If the shape of the bowl is just right, then the time it takes to go back and forth once will be the same no matter how high up the bowl you release the ant. You can picture how this works: from higher up, the ant has farther to go, but it will be going faster when it gets to the bottom. The effect of the longer distance is cancelled by the greater speed, resulting in the same travel time for any starting position. In this case, physicists call the system a harmonic oscillator.

As simple as it seems, this problem is one of the most important in all of physics. The reason is not that you commonly find ants rollerblading in specially shaped bowls, of course. First of all, it doesn’t need to be an ant—any object will do. Then, almost any shape bowl will give you something very close to simple harmonic motion, if the oscillations are small—that is, if the object starts out near the bottom of the bowl. In fact, it doesn’t have to be a bowl. Almost any situation where something oscillates will be described, at least approximately, by simple harmonic motion. A shaking leaf, a vibrating violin string, even the small variations of the moon’s orbit due to the effects of distant planets, all can be described using simple harmonic motion. In the quantum world, too, we find it everywhere: the vibrations of atoms in a crystal, the states of the hydrogen atom itself, and the relativistic quantum field can all be expressed in terms of harmonic oscillators. Most importantly, the harmonic oscillator is a problem that can be solved! It may seem surprising, but most of the problems you run into in physics are unsolvable. The mathematics is simply too hard. Progress is made by finding approximate problems that keep the important characteristics of the original problem, but which are solvable. Much of the time, the solvable problem that you end up with is the harmonic oscillator in some guise.

The ant-on-rollerblades picture of the harmonic oscillator is a classical picture. That means we need to know the following:

■ The initial location of the object—how far up the bowl we started the ant.

■ The initial velocity of the object—in our case, the ant was at rest when we released it.

■ The forces on the object—the force of gravity and the contact force between the bowl and the ant’s rollerblades.

From this information, as we just discussed, we know what the ant is doing—forever. In the quantum world, the procedure is totally different. First of all, we need to represent the particle (ant) by a quantum field, or wave function. This field must be a solution of Schrödinger’s equation. Just to show you what it looks like, here is the Schrödinger equation for the quantum field, denoted by ψ:


iℏ ∂ψ/∂t = −(ℏ²/2m) ∂²ψ/∂x² + V(x)ψ

You don’t need to understand the mathematics of this equation, but there is one very important point about it to note: It introduces a new fundamental constant of nature, ℏ (pronounced “aitch-bar”), known to physicists as Planck’s constant. Like the speed of light, ℏ can be measured by anyone, anywhere, at any time, and the result will always be the same. Whenever Planck’s constant shows up, we know quantum mechanics is somehow involved.

The Schrödinger equation tells us how to take the particle wave and fit it correctly into the bowl. We find that the wave only fits at certain places in the bowl, giving a set of discrete energy levels.


On the top, I plot the quantum field; on the bottom is the field value squared. According to the principles of quantum mechanics (specifically, principle (B) in the earlier list), the bottom graph gives the probability of finding the ant at a given location in the bowl. Look closely at the probability graph for the second energy level. Strangely, the probability of finding the ant exactly in the center of the bowl is zero! We thought we were describing something oscillating back and forth, but Schrödinger’s equation tells us it can never be found in between! This is an unexpected consequence of the wave nature of particles.

Now consider the whole sequence of energy levels. The wave for the lowest level has just one bump, the next level has two, and so forth. As you try to cram more waves into the bowl, you have to go higher up the sides, that is, to higher energy. The Schrödinger equation tells us that only these energy levels are allowed. You can’t have, for instance, one and a half waves across the bowl. There are no in-between energies allowed: The levels are discrete. Now, if we think about our rollerblading ant, discrete energy levels are quite strange. In classical physics, we can give the ant any energy we like; we simply start the ant at the appropriate height up the side of the bowl. Quantum mechanics seems to be saying we can’t start the ant wherever we want on the bowl, only at certain discrete places. As odd as this may seem, it is exactly what we need to explain the sharp lines in atomic spectra.
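The even spacing of the levels can be made concrete with the standard harmonic-oscillator result, which the chapter does not write down: the nth level has energy (n + ½)ℏω, where ω is the oscillation frequency. The frequency below is an illustrative value:

```python
# Quantum harmonic oscillator energy levels: E_n = (n + 1/2) * hbar * omega.
# The gap between adjacent levels is always exactly hbar * omega.
hbar = 1.055e-34   # reduced Planck's constant, J*s
omega = 1.0e15     # oscillation frequency in rad/s (illustrative)

energies = [(n + 0.5) * hbar * omega for n in range(5)]
gaps = [e2 - e1 for e1, e2 in zip(energies, energies[1:])]
print(gaps)  # every gap is the same: the levels form an evenly spaced ladder
```

This evenly spaced ladder is exactly what the figure of stacked waves in the bowl depicts: one more bump costs one more quantum of energy, ℏω.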

The energy levels of the harmonic oscillator shown previously are perfectly evenly spaced. Suppose we use the harmonic oscillator to model the hydrogen atom: The “ant” is now an electron, and the “bowl” that keeps pushing the “ant” back towards the middle is the force of electrical attraction between the proton in the nucleus and the electron. If the electron is allowed only certain energies, it can’t execute the death spiral we expected from classical electrodynamics. Instead, it must jump from one energy level to another. In each of these quantum jumps, the electron gives off energy in the form of light. When the electron reaches the lowest energy level (called the ground state), however, it can go no further. In classical mechanics, the electron can radiate away all its energy; in quantum mechanics, it ends up stuck in the ground state with nowhere to go.

What spectrum would we see from this “atom”? Because all the levels are the same distance apart (in energy), when the electron jumps from any level to the next lower level, it emits a photon of the same wavelength. This doesn’t mean that there is only one line in the spectrum, however. An electron can also jump down two levels, or three levels, or more.


These jumps would generate photons with two (or three...) times as much energy as the photon generated by a jump of a single energy level. Separate the light from this “atom” by sending the light through a prism and you will see a series of evenly spaced lines, ranging from red to violet.

There is, however, no real atom that has this spectrum. For real atoms, the electric force between the nucleus and the electrons forms a “bowl” with a different shape, and therefore a different set of energy levels, than the harmonic oscillator.

For hydrogen, for instance, the visible lines are formed when an electron jumps down to energy level 2 from a higher energy level. Unlike the harmonic oscillator, the energy levels of hydrogen aren’t evenly spaced. As a result, neither are the spectral lines.


The energy levels are irregularly spaced, that is, there is more space between the first and second energy levels than there is between the second and third. As a result, electrons ending up in level 1 give off more energy than those ending up in level 2, and therefore emit photons with a shorter wavelength. These short wavelengths are called ultraviolet light. They are invisible to the eye, but can be recorded on photographic film. Electrons ending up in level 3 or any higher level, on the other hand, give off less energy, and thus emit longer wavelength photons. This part of the spectrum is called infrared. As with ultraviolet light, infrared light is invisible to the eye but can be recorded on film or with other specialized devices. The complete set of possible wavelengths forms the line spectrum of hydrogen.
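The visible lines of hydrogen can be computed from the Rydberg formula, a standard result the chapter doesn’t spell out: the reciprocal of the wavelength for a jump down to level 2 is the Rydberg constant times (1/2² − 1/n²):

```python
# Wavelengths of hydrogen's visible (Balmer) lines from the Rydberg
# formula: 1/lambda = R * (1/2**2 - 1/n**2) for jumps down to level 2.
R = 1.0974e7  # Rydberg constant, in 1/m

def balmer_nm(n):
    inverse_wavelength = R * (1/4 - 1/n**2)
    return 1e9 / inverse_wavelength  # convert meters to nanometers

for n in [3, 4, 5, 6]:
    print(f"level {n} -> level 2: {balmer_nm(n):.0f} nm")
```

The lines come out at roughly 656, 486, 434, and 410 nanometers, red through violet, and, just as the text says, they are not evenly spaced, because the energy levels themselves are not.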

Other chemical elements have different numbers of protons in the nucleus, causing different amounts of electric force, and so creating the spectral fingerprint characteristic of each element. The Schrödinger equation gives precise predictions for these energy levels, and therefore predicts the line spectrum that each element should produce. Atomic line spectra became the crucible in which quantum mechanics was tested and found brilliantly successful. For the most part, that is. Some small discrepancies remained, the so-called fine structure of the spectrum. These discrepancies would force physicists to look deeper, and would lead eventually to the even more accurate theory of quantum electrodynamics.

Building the Periodic Table

Electrons in an atom can occupy only certain specific energy levels: This is what quantum mechanics tells us. This fact explains the mysterious line spectra of the chemical elements. It also explains the regularities in the periodic table of the elements.


Atoms, we know, are made of protons, neutrons, and electrons. The protons and neutrons in the atomic nucleus make up nearly all the mass of the atom. Electrons weigh only about one two-thousandth as much as a proton. The protons have a positive electric charge that is exactly equal in magnitude to the negative charge of the electron. Neutrons, as their name implies, are electrically neutral: They have no charge. The electrons are attracted to the positively charged nucleus, but they cannot fall all the way in because they are constrained to certain energy levels, as we have just seen. You need to know two other things about electrons. First, no two electrons can occupy the same quantum state. This property is called the Pauli exclusion principle, after Wolfgang Pauli. Particles that have this property are called fermions—we’ll learn more about them in Chapter 8. The exclusion principle means, for example, that if you put two electrons in a box, they will never be found at the same place in the box. (This may seem obvious, but there are other types of particles that can be found at the same place.) Second, electrons have a property called spin. As its name implies, spin is connected with rotation. For an electron, however, spin can only take on two values, called spin-up and spin-down.

Now we’re ready to start. We’ll construct the periodic table from left to right across a row, and then proceed to the next row. This means we are going in order of increasing atomic number, which is simply the number of protons in the nucleus. Start with hydrogen (H): just one proton in the nucleus, and one electron (we are considering neutral atoms), which can go in any energy level it likes. Most atoms at room temperature are in their ground state, that is, their lowest possible energy level. Let’s assume that’s where hydrogen’s electron is, in energy level 1. Next comes helium (He): two protons in the nucleus. We can throw in a handful of neutrons, too. In natural helium, two neutrons are the most popular configuration, but one neutron is also possible. These correspond to the isotopes helium-4 and helium-3; the number given counts the total number of protons plus neutrons. (Isotopes are chemically the same element; they only differ in the number of neutrons in the nucleus. The chemical properties of elements derive from the behavior of the electrons, which is unchanged by the addition of electrically neutral neutrons.) We add two electrons to make a neutral atom, and put both electrons into energy level 1, but one electron must be spin-up and one must be spin-down. This is because of the Pauli exclusion principle: Since both electrons are in the same energy level, they must have different spin directions in order to be in different quantum states.

After helium comes lithium (Li): three protons, typically four neutrons, and three electrons to make it neutral. The first two electrons can go into the lowest energy level (level 1), one up and one down, but the third electron can’t go into level 1. Whether we assign spin-up or spin-down to the third electron, it will be in the same energy level and have the same spin state as one of the two electrons that are already in level 1. The Pauli exclusion principle requires that the third electron go into the next energy level, level 2. So, lithium has a full level 1 (two electrons) and a lone electron in the outer level 2. It is this lone electron that makes lithium chemically similar to hydrogen, which also has a lone electron. Lithium, therefore, goes in the next row of the table, underneath hydrogen. Beryllium (Be) comes next, with four electrons: two in level 1 and two in level 2. With boron (B), things get interesting: Schrödinger’s equation allows for three states in energy level 2 in which the electron is rotating about the nucleus. Each of these states can hold two electrons (one spin-up and one spin-down, of course), so we get six elements to fill up the rest of the second row: boron (B), carbon (C), nitrogen (N), oxygen (O), fluorine (F), and neon (Ne). The next element, sodium (Na), has a lone electron in level 3, and so goes in the next row, under lithium. The third row fills in the same way as the second row, but in the fourth row we find an additional five rotational states, which gives room for the elements 21 (scandium [Sc]) through 30 (zinc [Zn]).
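The filling rule just described can be sketched as a short program. The row capacities follow the chapter's own counting: 2 electrons in level 1; then 8 each for the rows built on levels 2 and 3 (one ordinary state plus three rotational states, each holding a spin-up and a spin-down electron); then 18 when the five extra rotational states appear. This is a deliberate simplification of the real filling order, offered only to illustrate the counting:

```python
# Simplified shell filling following the chapter's counting.
# Each state holds two electrons (spin-up and spin-down),
# as required by the Pauli exclusion principle.
ROW_CAPACITIES = [2, 8, 8, 18]

def fill_shells(atomic_number):
    """Distribute an atom's electrons over rows, lowest energy first."""
    shells = []
    remaining = atomic_number  # a neutral atom has one electron per proton
    for capacity in ROW_CAPACITIES:
        if remaining <= 0:
            break
        placed = min(capacity, remaining)
        shells.append(placed)
        remaining -= placed
    return shells

# Hydrogen, lithium, and sodium all end with a lone outer electron,
# which is why they sit in the same column of the periodic table.
print(fill_shells(1))   # hydrogen
print(fill_shells(3))   # lithium
print(fill_shells(11))  # sodium
print(fill_shells(10))  # neon: a completely full row
```

Running this shows hydrogen as [1], lithium as [2, 1], and sodium as [2, 8, 1]: chemically similar elements share the same outer-shell count, which is the pattern the periodic table encodes.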

In 1871, the Russian chemist Dmitri Mendeleev organized the known elements into a periodic table. In order to place the elements with similar chemical properties in a single column of the table, Mendeleev had to leave gaps in the table. These gaps, he claimed, corresponded to elements that had not yet been discovered. For instance, no known element had the correct properties to fill the spaces in the table below aluminum (Al) and silicon (Si). He predicted the existence of two new elements, which he called eka-aluminum and eka-silicon. These new elements should resemble aluminum and silicon in their chemical behavior, and both should have atomic weights between those of zinc (Zn) and arsenic (As). Four years later, the element gallium (Ga) was discovered and found to have the properties Mendeleev had predicted for eka-aluminum. In 1886, germanium (Ge) was discovered, with the right properties to fill the space in the table below silicon. In hindsight, we can see these successful predictions as confirmation of the quantum-mechanical understanding of the atom. A similar process was to occur in the 1960s, when physicists were classifying the plethora of particles being produced in particle accelerators.

In principle, Schrödinger’s equation should be able to predict not just the structure and spectrum of atoms, but also the interactions between atoms. Chemistry then becomes part of physics. Instead of merely measuring reaction rates and energies between compounds, you could predict them by solving a quantum mechanics problem. In practice, this is extremely difficult to do, and many scientists are still attempting it today, in the field known as quantum chemistry.