BREATHING IN EINSTEIN - SMALL THINGS

Quantum Theory Cannot Hurt You - Marcus Chown (2007)




A hydrogen atom in a cell at the end of my nose was once part of an elephant’s trunk.

Jostein Gaarder

We never had any intention of using the weapon. But they were such a terribly troublesome race. They insisted on seeing us as the “enemy” despite all our efforts at reassurance. When they fired their entire nuclear stockpile at our ship, orbiting high above their blue planet, our patience simply ran out.

The weapon was simple but effective. It squeezed out all the empty space from matter.

As the commander of our Sirian expedition examined the shimmering metallic cube, barely 1 centimetre across, he shook his primary head despairingly. Hard to believe that this was all that was left of the “human race”!

If the idea of the entire human race fitting into the volume of a sugar cube sounds like science fiction, think again. It is a remarkable fact that 99.9999999999999 per cent of the volume of ordinary matter is empty space. If there were some way to squeeze all the empty space out of the atoms in our bodies, humanity would indeed fit into the space occupied by a sugar cube.
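The sugar-cube claim is easy to check with a few lines of arithmetic. The numbers below are assumed round figures, not the author's: a world population of about 7 billion, an average body mass of 70 kilograms, and a body density close to that of water.

```python
# Back-of-envelope check of the sugar-cube claim, using assumed
# round numbers for population, body mass, and body density.
population = 7e9          # people (assumed round figure)
body_mass = 70.0          # kg, average per person (assumed)
density = 1000.0          # kg/m^3, roughly that of water
matter_fraction = 1e-15   # the part of matter that is NOT empty space

total_volume = population * body_mass / density   # m^3 occupied by humanity
compressed = total_volume * matter_fraction       # m^3 with the space squeezed out
print(compressed * 1e6, "cubic centimetres")      # about half a sugar cube
```

The answer comes out at roughly half a cubic centimetre, comfortably inside a sugar cube.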

The appalling emptiness of atoms is only one of the extraordinary characteristics of the building blocks of matter. Another, of course, is their size. It would take 10 million atoms laid end to end to span the width of a single full stop on this page, which raises the question, how did we ever discover that everything is made of atoms in the first place?

The idea that everything is made of atoms was actually first suggested by the Greek philosopher Democritus in about 440 BC.1 Picking up a rock—or it may have been a branch or a clay pot—he asked himself the question: “If I cut this in half, then in half again, can I go on cutting it in half forever?” His answer was an emphatic no. It was inconceivable to him that matter could be subdivided forever. Sooner or later, he reasoned, a tiny grain of matter would be reached that could be cut no smaller. Since the Greek for “uncuttable” was “a-tomos,” Democritus called the hypothetical building blocks of all matter “atoms.”

Since atoms were too small to be seen with the senses, finding evidence for them was always going to be difficult. Nevertheless, a way was found by the 18th-century Swiss mathematician Daniel Bernoulli. Bernoulli realised that, although atoms were impossible to observe directly, it might still be possible to observe them indirectly. In particular, he reasoned that if a large enough number of atoms acted together, they might have a big enough effect to be obvious in the everyday world. All he needed was to find a place in nature where this happened. He found one—in a “gas.”

Bernoulli imagined a gas like air or steam as a collection of billions upon billions of atoms in perpetual frenzied motion like a swarm of angry bees. This vivid picture immediately suggested an explanation for the “pressure” of a gas, which kept a balloon inflated or pushed against the piston of a steam engine. When confined in any container, the atoms of a gas would drum relentlessly on the walls like hailstones on a tin roof. Their combined effect would be to create a jittery force that, to our coarse senses, would seem like a constant force pushing back the walls.

But Bernoulli’s microscopic explanation of pressure provided more than a convenient mental picture of what was going on in a gas. Crucially, it led to a specific prediction. If a gas were squeezed into half its original volume, the gas atoms would need to fly only half as far between collisions with the container walls. They would therefore collide twice as frequently with those walls, doubling the pressure. And if the gas were squeezed into a third of its volume, the atoms would collide three times as frequently, trebling the pressure. And so on.
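Bernoulli's argument can be sketched in a few lines. A single atom bouncing back and forth between two walls a distance L apart at speed v strikes a given wall v/(2L) times a second, so halving the box doubles the drumming; the speed below is merely an illustrative value.

```python
# Bernoulli's prediction in miniature: one gas atom bouncing between
# two walls a distance L apart at speed v hits a given wall v/(2L)
# times per second, so the pressure it exerts scales as 1/L.
def wall_hit_rate(speed, box_length):
    """Collisions per second with one wall."""
    return speed / (2 * box_length)

v = 500.0                       # m/s, an illustrative molecular speed
full = wall_hit_rate(v, 0.1)    # a 10 cm box
half = wall_hit_rate(v, 0.05)   # the same box squeezed to half its length
print(half / full)              # 2.0: halve the volume, double the pressure
```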

Exactly this behaviour had already been observed by the English scientist Robert Boyle in 1662, decades before Bernoulli was even born. Bernoulli’s picture explained Boyle’s observation, and since that picture was of tiny grainlike atoms flying hither and thither through empty space, it bolstered the case for the existence of atoms. Despite this success, however, definitive evidence for the existence of atoms did not come until the beginning of the 20th century. It was buried in an obscure phenomenon called Brownian motion.

Brownian motion is named after Robert Brown, a botanist who sailed to Australia on the Flinders expedition of 1801. During his time down under, he classified 4,000 species of antipodean plants; in the process, he discovered the nucleus of living cells. But he is best remembered for his observation in 1827 of pollen grains suspended in water. To Brown, squinting through a magnifying lens, it seemed as if the grains were undergoing a curious jittery motion, zigzagging their way through the liquid like drunkards lurching home from a pub.

Brown never solved the mystery of the wayward pollen grains. That breakthrough had to wait for Albert Einstein, aged 26 and in the midst of the greatest explosion of creativity in the history of science. In his “miraculous year” of 1905, not only did Einstein overthrow Newton, supplanting Newtonian ideas about motion with his special theory of relativity, but he finally penetrated the 80-year-old mystery of Brownian motion.

The reason for the crazy dance of pollen grains, according to Einstein, was that they were under continual machine-gun bombardment by tiny water molecules. Imagine a giant inflatable rubber ball, taller than a person, being pushed about a field by a large number of people. If each person pushes in their own particular direction, without any regard for the others, at any instant there will be slightly more people on one side than another. This imbalance is enough to cause the ball to move erratically about the field. Similarly, the erratic motion of a pollen grain can be caused by slightly more water molecules bombarding it from one side than from another.

Einstein devised a mathematical theory to describe Brownian motion. It predicted how far and how fast the average pollen grain should travel in response to the relentless battering it was receiving from the water molecules all around. Everything hinged on the size of the water molecules, since the bigger they were the bigger would be the imbalance of forces on the pollen grain and the more exaggerated its consequent Brownian motion.
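The heart of Einstein's prediction can be illustrated with a toy random walk, a sketch rather than his actual calculation: a grain given random kicks wanders, on average, a distance proportional to the square root of the elapsed time.

```python
import random

# A toy version of Einstein's result: a grain kicked randomly left or
# right each step ends up, on average, a distance from its start that
# grows as the square root of the number of kicks (i.e. of the time).
random.seed(1)

def rms_displacement(steps, trials=2000):
    """Root-mean-square distance travelled after the given number of kicks."""
    total = 0.0
    for _ in range(trials):
        x = 0
        for _ in range(steps):
            x += random.choice((-1, 1))
        total += x * x
    return (total / trials) ** 0.5

print(rms_displacement(100))   # close to 10 steps from the start
print(rms_displacement(400))   # close to 20: four times the time, twice the distance
```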

The French physicist Jean Baptiste Perrin compared his observations of water-suspended particles of “gamboge”, a yellow gum resin from a Cambodian tree, with the predictions of Einstein’s theory. He was able to deduce the size of water molecules and hence of the atoms out of which they were built. He concluded that atoms were only about one 10-billionth of a metre across—so small that it would take 10 million, laid end to end, to span the width of a full stop.

Atoms were so small, in fact, that if the billions upon billions of them in a single breath were spread evenly throughout Earth’s atmosphere, every breath-sized volume of the atmosphere would end up containing several of those atoms. Put another way, every breath you take contains at least one atom breathed out by Albert Einstein—or Julius Caesar or Marilyn Monroe or even the last Tyrannosaurus Rex to walk on Earth!
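The estimate behind this famous claim can be sketched with assumed round numbers: roughly 2.5 × 10²² molecules in a one-litre breath, and roughly 10⁴⁴ molecules in the whole atmosphere.

```python
# A rough version of the famous estimate, with assumed round numbers.
molecules_per_breath = 2.5e22    # a breath of about one litre at sea level
molecules_in_atmosphere = 1e44   # the whole atmosphere (rough figure)

# If one exhaled breath has mixed evenly through the atmosphere, the
# expected number of its molecules turning up in any later breath is:
expected = molecules_per_breath ** 2 / molecules_in_atmosphere
print(expected)   # a handful of molecules per breath
```

With these figures, every breath you take should contain half a dozen or so molecules from any particular historical breath, which is why the conclusion survives even if the round numbers are off by quite a bit.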

What is more, the atoms of Earth’s “biosphere” are constantly recycled. When an organism dies, it decays and its constituent atoms are returned to the soil and the atmosphere to be incorporated into plants that are later eaten by animals and humans. “A carbon atom in my cardiac muscle was once in the tail of a dinosaur,” writes Norwegian novelist Jostein Gaarder in Sophie’s World.

Brownian motion was the most powerful evidence for the existence of atoms. Nobody who peered down a microscope and saw the crazy dance of pollen grains under relentless bombardment could doubt that the world was ultimately made from tiny, bulletlike particles. But watching jittery pollen grains—the effect of atoms—was not the same as actually seeing atoms. This had to wait until 1980 and the invention of a remarkable device called the scanning tunnelling microscope (STM).

The idea behind the STM was very simple. A blind person can “see” someone’s face simply by running a finger over it and building up a picture in their mind. The STM works in a similar way. The difference is that the “finger” is a finger of metal, a tiny stylus reminiscent of an old-fashioned gramophone needle. By dragging the needle across the surface of a material and feeding its up-and-down motion into a computer, it is possible to build up a detailed picture of the undulations of the atomic terrain.2 Of course, there is a bit more to it than that. Although the principle of the invention was simple, there were formidable practical difficulties in its realisation. For instance, a needle had to be found that was fine enough to “feel” atoms. The Nobel Prize committee certainly recognised the difficulties. It awarded Gerd Binnig and Heinrich Rohrer, the IBM researchers behind the STM, the 1986 Nobel Prize for Physics.

Binnig and Rohrer were the first people in history to actually “see” an atom. Their STM images were some of the most remarkable in the history of science, ranking alongside the image of Earth rising above the gray desolation of the Moon or the sweeping spiral staircase of DNA. Atoms looked like tiny footballs. They looked like oranges, stacked in boxes, row on row. But most of all they looked like the tiny hard grains of matter that Democritus had seen so clearly in his mind’s eye, 2,400 years before. No one else has ever made a prediction that far in advance of experimental confirmation.

But only one side of the atom was revealed by the STM. As Democritus himself had realised, atoms were a lot more than simply tiny grains in ceaseless motion.


Atoms are nature’s Lego bricks. They come in a variety of different shapes and sizes, and by joining them together in any number of different ways, it is possible to make a rose, a bar of gold, or a human being. Everything is in the combinations.

The American Nobel Prize winner Richard Feynman said: “If in some cataclysm all of scientific knowledge were destroyed and only one sentence passed on to succeeding generations, what statement would convey the most information in the fewest words?” He was in no doubt: “Everything is made of atoms.”

The key step in proving that atoms are nature’s Lego bricks was identifying the different kinds of atoms. However, the fact that atoms were far too small to be perceived directly by the senses made the task every bit as formidable as proving that atoms were tiny grains of matter in ceaseless motion. The only way to identify different types of atoms was to find substances that were made exclusively out of atoms of a single kind.

In 1789 the French aristocrat Antoine Lavoisier compiled a list of substances that he believed could not, by any means, be broken down into simpler substances. There were 23 “elements” in Lavoisier’s list. Though some later turned out not to be elements, many—including gold, silver, iron, and mercury—were indeed elemental. Within 40 years of Lavoisier’s death at the guillotine in 1794, the list of elements had grown to include close to 50. Nowadays, we know of 92 naturally occurring elements, from hydrogen, the lightest, to uranium, the heaviest.

But what makes one atom different from another? For instance, how does a hydrogen atom differ from a uranium atom? The answer would come only by probing their internal structures. But atoms are so fantastically small. It seemed impossible that anyone would ever find a way to look inside one. But one man did—a New Zealander named Ernest Rutherford. His ingenious idea was to use atoms to look inside other atoms.


The phenomenon that laid bare the structure of atoms was radioactivity, discovered by the French physicist Henri Becquerel in 1896. Between 1901 and 1903, Rutherford and the English chemist Frederick Soddy found strong evidence that a radioactive atom is simply a heavy atom that is seething with excess energy. Inevitably, after a second or a year or a million years, it sheds this surplus energy by expelling some kind of particle at high speed. Physicists say it disintegrates, or “decays,” into an atom of a slightly lighter element.

One such decay particle was the alpha particle, which Rutherford and the young German physicist Hans Geiger demonstrated was simply an atom of helium, the second lightest element after hydrogen.

In 1903, Rutherford had measured the speed of alpha particles expelled from atoms of radioactive radium. It was an astonishing 25,000 kilometres per second—100,000 times faster than a present-day passenger jet. Here, Rutherford realised, was a perfect bullet to smash into atoms and reveal what was deep inside.

The idea was simple. Fire alpha particles into an atom. If they encountered anything hard blocking their way, they would be deflected from their path. By firing thousands upon thousands of alpha particles and observing how they were deflected, it would be possible to build up a detailed picture of the interior of an atom.

Rutherford’s experiment was carried out in 1909 by Geiger and a young English physicist called Ernest Marsden. Their “alpha-scattering” experiment used a small sample of radium, which fired off alpha particles like microscopic fireworks. The sample was placed behind a lead screen containing a narrow slit, so a thread-thin stream of alpha particles emerged on the far side. It was the world’s smallest machine gun, rattling out microscopic bullets.

In the firing line Geiger and Marsden placed a sheet of gold foil only a few thousand atoms thick. It was insubstantial enough that all the alpha particles from the miniature machine gun would pass through. But it was substantial enough that, during their transit, some would pass close enough to gold atoms to be deflected slightly from their path.

At the time of Geiger and Marsden’s experiment, one particle from inside the atom had already been identified. The electron had been discovered by the British physicist “J. J.” Thomson in 1897. These ridiculously tiny particles, each about 2,000 times lighter than even a hydrogen atom, had turned out to be the elusive particles of electricity. Ripped free from atoms, they surged along a copper wire amid billions of others, creating an electric current.

The electron was the first subatomic particle. It carried a negative electric charge. Nobody knows exactly what electric charge is, only that it comes in two forms: negative and positive. Ordinary matter, which consists of atoms, has no net electrical charge. In ordinary atoms, then, the negative charge of the electrons is always perfectly balanced by the positive charge of something else. It is a characteristic of electrical charge that unlike charges attract each other whereas like charges repel each other. Consequently, there is a force of attraction between an atom’s negatively charged electrons and its positively charged something else. It is this attraction that glues the whole thing together.

Not long after the discovery of the electron, Thomson used these insights to concoct the first-ever scientific picture of the atom. He visualised it as a multitude of tiny electrons embedded “like raisins in a plum pudding” in a diffuse ball of positive charge. It was Thomson’s plum pudding model of the atom that Geiger and Marsden expected to confirm with their alpha-scattering experiment.

They were to be disappointed.

The thing that blew the plum pudding model out of the water was a rare but remarkable event. One out of every 8,000 alpha particles fired by the miniature machine gun actually bounced back from the gold foil!

According to Thomson’s plum pudding model, an atom consisted of a multitude of pin-prick electrons embedded in a diffuse globe of positive charge. The alpha particles that Geiger and Marsden were firing into this flimsy arrangement, on the other hand, were unstoppable subatomic express trains, each as heavy as around 8,000 electrons. The chance of such a massive particle being wildly deflected from its path was about as great as that of a real express train being derailed by a runaway doll’s pram. As Rutherford put it: “It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you!”

Geiger and Marsden’s extraordinary result could only mean that an atom was not a flimsy thing at all. Something buried deep inside it could stop a subatomic express train dead in its tracks and turn it around. That something could only be a tiny nugget of positive charge sitting at the dead centre of an atom and repelling the positive charge of an incoming alpha particle. Since the nugget was capable of standing up to a massive alpha particle without being knocked to kingdom come, it too must be massive. In fact, it must contain almost all of the mass of an atom.
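The reasoning can be made concrete with a head-on elastic collision, a simplified stand-in for the real scattering: a projectile rebounds only from a target more massive than itself. The masses below are in atomic mass units.

```python
# Why the scatterer had to be massive: in a head-on elastic collision,
# a projectile of mass m1 striking a stationary target of mass m2 comes
# away with velocity (m1 - m2) / (m1 + m2) times its original velocity.
# It reverses direction (negative result) only if the target is heavier.
def recoil_velocity(m1, m2, v=1.0):
    """Projectile's velocity after a head-on elastic collision."""
    return (m1 - m2) / (m1 + m2) * v

alpha = 4.0                  # alpha particle, atomic mass units
electron = 1.0 / 1836.0      # electron, in the same units
gold_nucleus = 197.0         # a gold nucleus

print(recoil_velocity(alpha, electron))      # ~ +1.0: sails straight on
print(recoil_velocity(alpha, gold_nucleus))  # negative: bounces back
```

An alpha particle ploughs through a cloud of electrons almost undisturbed, but rebounds from anything as heavy as a gold nucleus, which is exactly the behaviour Geiger and Marsden saw.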

Rutherford had discovered the atomic nucleus.

The picture of the interior of the atom that emerged was as unlike Thomson’s plum pudding picture as was possible to imagine. It was a miniature solar system in which negatively charged electrons were attracted to the positive charge of the nucleus and orbited it like planets around the Sun. The nucleus had to be at least as massive as an alpha particle, and probably a lot more so; otherwise a collision would simply have kicked the nucleus out of the atom rather than turning the alpha particle around. It therefore had to contain more than 99.9 per cent of the atom’s mass.3

The nucleus was very, very tiny. Only if nature crammed a large positive charge into a very small volume could a nucleus exert a repulsive force so overwhelming that it could make an alpha particle execute a U-turn. What was most striking about Rutherford’s vision of an atom was, therefore, its appalling emptiness. The playwright Tom Stoppard put it beautifully in his play Hapgood: “Now make a fist, and if your fist is as big as the nucleus of an atom then the atom is as big as St Paul’s, and if it happens to be a hydrogen atom then it has a single electron flitting about like a moth in an empty cathedral, now by the dome, now by the altar.”

Despite its appearance of solidity, the familiar world was actually no more substantial than a ghost. Matter, whether in the form of a chair, a human being, or a star, was almost exclusively empty space. What substance an atom possessed resided in its impossibly small nucleus—100,000 times smaller than a complete atom.
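The arithmetic linking these two numbers is simple: a nucleus 100,000 times smaller across than its atom occupies that ratio cubed of the atom's volume, which is where the figure at the start of this chapter comes from.

```python
# The nucleus is ~100,000 times smaller across than the atom, so the
# fraction of the atom's volume it occupies is that ratio cubed.
size_ratio = 1e-5                      # nucleus diameter / atom diameter
volume_fraction = size_ratio ** 3      # one part in a million billion
empty = (1 - volume_fraction) * 100    # percentage of the atom that is empty
print(f"{empty:.13f} per cent empty")  # the figure from the start of the chapter
```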

Put another way, matter is spread extremely thinly. If it were possible to squeeze out all the surplus empty space, matter would take up hardly any room at all. In fact, this is perfectly possible. Although an easy way to squeeze the human race down to the size of a sugar cube probably does not exist, a way does exist to squeeze the matter of a massive star into the smallest volume possible. The squeezing is done by tremendously strong gravity, and the result is a neutron star. Such an object packs the enormous mass of a body the size of the Sun into a volume no bigger than Mount Everest.4


Rutherford’s picture of the atom as a miniature solar system with tiny electrons flitting about a dense atomic nucleus like planets around the Sun was a triumph of experimental science. Unfortunately, it had a slight problem. It was totally incompatible with all known physics!

According to Maxwell’s theory of electromagnetism—which described all electrical and magnetic phenomena—whenever a charged particle accelerates, changing its speed or direction of motion, it gives out electromagnetic waves—light. An electron is a charged particle. As it circles a nucleus, it perpetually changes its direction; so it should act like a miniature lighthouse, constantly broadcasting light waves into space. The problem is that this would be a catastrophe for any atom. After all, the energy radiated as light has to come from somewhere, and it can only come from the electron itself. Sapped continually of energy, it should spiral ever closer to the centre of the atom. Calculations showed that it would collide with the nucleus within a mere hundred-millionth of a second. By rights, atoms should not exist.

But atoms do exist. We and the world around us are proof enough of that. Far from expiring in a hundred-millionth of a second, atoms have survived intact since the earliest times of the Universe almost 14 billion years ago. Some crucial ingredient must be missing from Rutherford’s picture of the atom. That ingredient is a revolutionary new kind of physics—quantum theory.

1 Some of these ideas were covered in my earlier book, The Magic Furnace (Vintage, London, 2000). Apologies to those who have read it. In my defense, it is necessary to know some basic things about the atom in order to appreciate the chapters that follow on quantum theory, which is essentially a theory of the atomic world.

2 Of course, there is no way a needle can actually feel a surface like a human finger can. However, if the needle is charged with electricity and placed extremely close to a conducting surface, a minuscule but measurable electric current leaps the gap between the tip of the needle and the surface. It is known as a “tunnelling current”, and it has a crucial property that can be exploited: the size of the current is extraordinarily sensitive to the width of the gap. If the needle is moved even a shade closer to the surface, the current grows very rapidly; if it is pulled away a fraction, the current plummets. The size of the tunnelling current therefore reveals the distance between the needle tip and the surface. It gives the needle an artificial sense of touch.
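This sensitivity can be illustrated with the standard exponential form of the tunnelling current, I proportional to exp(-2κd), where d is the gap width; the value of κ below is an assumed figure typical for metal work functions of a few electron volts.

```python
import math

# Sketch of why the STM needle can "feel" atoms: the tunnelling
# current falls off exponentially with the gap, I ~ exp(-2*kappa*d).
# kappa is an assumed typical value, not a measured one.
kappa = 1.0e10                 # per metre, for a work function of a few eV

def current(gap, i0=1.0):
    """Tunnelling current (arbitrary units) across a gap in metres."""
    return i0 * math.exp(-2 * kappa * gap)

d = 0.5e-9                     # a half-nanometre gap
closer = current(d - 1e-10)    # needle moved in by one ten-billionth of a metre
print(closer / current(d))     # roughly sevenfold jump in the current
```

Moving the tip by a tenth of a nanometre, about the width of a single atom, changes the current by a factor of around seven, which is what gives the needle its exquisite artificial sense of touch.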

3 Eventually, physicists would discover that the nucleus contains two particles: positively charged protons and uncharged, or neutral, neutrons. The number of protons in a nucleus is always exactly balanced by an equal number of electrons in orbit about it. The difference between atoms is in the number of protons in their nuclei (and consequently the number of electrons in orbit). For instance, hydrogen has one proton in its nucleus and uranium a whopping 92.

4 See Chapter 4, “Uncertainty and the Limits of Knowledge.”