The Clockwork Universe: Isaac Newton, the Royal Society, and the Birth of the Modern World - Edward Dolnick (2011)

Part III. Into the Light

Chapter 37. All Men Are Created Equal

The taming of infinity represents another of those breakthroughs where a once-baffling abstraction, like “zero” or “negative five,” comes to seem simple in hindsight. The key was to stay relentlessly down-to-earth and never to venture into such murky territory as “the nature of infinity.”

The abstraction that would save the day was the notion of a “limit.” The mathematical sense is close to the everyday one. In one of the Lincoln-Douglas debates, Abraham Lincoln asked his listeners why the Declaration of Independence asserted that “all men are created equal.” Not because the founders believed that all men had already attained equality, Lincoln said. That was an “obvious untruth.” The founders’ point, Lincoln declared, was that equality for all was a goal that should be “constantly looked to, constantly labored for, and even though never perfectly attained, constantly approximated.”

In the same sense, a mathematical limit is a goal, a target that a sequence of numbers comes ever closer to. The sequence doesn’t have to reach the limit, but it does have to get nearer and nearer. The limit of the sequence 1, .1, .01, .001, .0001, . . . is the number 0, even though the sequence never gets there. Similarly, the limit of ½, ¾, ⅚, 7/8, 9/10, 11/12, . . . is the number 1, also never attained. The sequence 1, 2, 1, 2, 1, 2, . . . does not have a limit, because it hops back and forth forever and never homes in on a target.
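The three sequences above can be generated and inspected directly. This is a sketch of my own (not from the book), but the behavior it shows is exactly what the text describes: two sequences close in on their limits, the third never settles.

```python
def terms(f, n=20):
    """Return the first n terms of the sequence f(1), f(2), ..., f(n)."""
    return [f(k) for k in range(1, n + 1)]

# 1, .1, .01, .001, ... -> limit 0, never reached
tenths = terms(lambda k: 10 ** -(k - 1))

# 1/2, 3/4, 5/6, 7/8, ... -> limit 1, never attained
odd_over_even = terms(lambda k: (2 * k - 1) / (2 * k))

# 1, 2, 1, 2, ... -> no limit; it never homes in on a target
hopping = terms(lambda k: 1 if k % 2 else 2)

print(tenths[-1])         # vanishingly close to 0
print(odd_over_even[-1])  # 39/40 = 0.975, ever nearer 1
print(hopping[-5:])       # still hopping between 1 and 2
```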

Zeno cast his paradox in the form of a story about a journey across a room. In the 1500s and 1600s a few intrepid mathematicians reframed his tale as a statement about numbers. From that perspective, the question was whether or not 1 + ½ + ¼ + ⅛ + 1/16 + . . . added up to infinity. Zeno’s answer was “yes,” because the numbers go on forever and each contributes something to the sum. But when mathematicians turned from Zeno’s words to their numbers and began adding, they found something odd. They began with 1 + ½. That made 1 ½. Nothing dire there. How about 1 + ½ + ¼? That came to 1 ¾. Still okay. 1 + ½ + ¼ + ⅛? That was 1 7/8. They added more and more terms and never ran into trouble. The running total continued to grow, but it became ever clearer that the number 2 represented a kind of boundary. You could draw arbitrarily close to that boundary—within one-thousandth or one-billionth or even closer—but certainly you could never break through in the way that a runner breaks the tape at the finish line.
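The running totals in the paragraph above are easy to reproduce. In this sketch (my own, not the book's), each partial sum of 1 + ½ + ¼ + ⅛ + . . . falls short of 2, and the shortfall halves with every term added; you can draw within one-thousandth or one-billionth of the boundary, but never break through it.

```python
def partial_sum(n):
    """Sum of the first n terms of 1 + 1/2 + 1/4 + ... + 1/2**(n-1)."""
    return sum(1 / 2 ** k for k in range(n))

for n in (1, 2, 3, 4, 10, 30):
    # running total, and the distance still left to the boundary at 2
    print(n, partial_sum(n), 2 - partial_sum(n))
```

The first few totals are 1, 1 ½, 1 ¾, 1 ⅞, just as in the text, and the gap below 2 after n terms is exactly 1/2 to the power n − 1.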

For the practical-minded scientists of the seventeenth century, this meant the end of Zeno. In the battle with infinity, they declared victory. Zeno had maintained that if it took one second to reach the middle of a room, it would take forever to cross to the other side. Not so, said the new mathematicians. It would take two seconds.

Why did that strike them as so momentous? Because when they had taken up the question they truly wanted to answer—what does instantaneous speed mean?—they had run head-on into Zeno’s paradox. They had wanted to know a hackney coach’s speed at the instant of noon and had found themselves ensnarled in an infinite regress of questions of the form, what was the coach’s speed between 12:00 and one minute after? between 12:00 and 30 seconds after? between 12:00 and 15 seconds after? between 12:00 and . . . ?

This was the seventeenth century’s counterpart of phone-menu hell (“if your call is about billing, press 1”), and early scientists groaned in despair because the questions continued endlessly, and escape seemed impossible. But now their victory over Zeno gave them hope. Yes, the questions about the coach did go on forever. But suppose you looked at the coach’s speed in briefer and briefer intervals and found that that sequence of speeds homed in on a limit?

Then your troubles would be over. That limit would be a number—a definite, perfectly ordinary number. That was what “instantaneous speed” meant. Nothing to it. But the greatest mathematicians of antiquity, and all their descendants for another fifteen centuries, had failed to see it.
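The idea of the two paragraphs above can be made concrete. In this hypothetical illustration (the position function is my own choice, not anything from the book), the coach's average speed over ever-briefer intervals after noon homes in on a single, perfectly ordinary number: its instantaneous speed at noon.

```python
def position(t):
    """Hypothetical distance traveled (miles) t hours after noon."""
    return 10 * t + 3 * t ** 2   # a coach gently speeding up

def average_speed(t, h):
    """Distance covered between t and t + h, divided by the elapsed time h."""
    return (position(t + h) - position(t)) / h

# Shrink the interval: one hour, one minute, one second, and briefer still
for h in (1, 1 / 60, 1 / 3600, 1 / 1000000):
    print(h, average_speed(0, h))
```

The averages here work out to 10 + 3h, so as the interval h shrinks toward nothing they close in on 10 miles per hour: the limit is the speed at the instant of noon, and the endless chain of questions collapses into one definite number.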

This was not quite calculus, but it was a giant step toward it. In essence, calculus would be a mathematical microscope, a tool that let you pin motion down and scrutinize it tip to toe. Some moments were more important than others—the arrow’s height at the instant it reached its peak, the cannonball’s speed at the instant it smashed into a city’s wall, a comet’s speed when it rounded the sun—and, with calculus’s help, you could fix those particular moments to a slide and study them close-up.

Or so the newly optimistic mathematicians presumed. But when they grabbed the microscope, they found that no matter how they twisted and tweaked its knobs they simply could not bring the image into focus. The problem, they soon saw, was that everything hinged on the notion of limits, and limits weren’t as straightforward as they had thought.

As with all other abstractions, the problem was trying to wrestle with a phantom. What did it mean, precisely and quantitatively, for a sequence of numbers to come very close to a limit? “The planet Mars comes close to the Earth when it is 50 million miles away,” one modern mathematician observes. “On the other hand, a bullet comes close to a person if it gets within a few inches of him.” How close is close?

Even Isaac Newton and Gottfried Leibniz, the boldest thinkers of their age and the leaders of the assault on infinity, found themselves tangled up in confusion and contradiction. For one thing, infinity seemed to come in a disarming variety of forms. In ordinary usage, infinity conjured up thoughts of boundless immensity. Now, though, in all this talk of speed at a given instant, it seemed vital to sort out the meaning of “infinitely small” lengths and “infinitely brief” stretches of time, as well.

Worse still, the tiny distances and the tiny intervals of time were all mingled together. Speed means distance divided by time. That was not a problem when you were dealing with large, familiar units like miles and hours. But how could you keep your eyes from blurring when it came to dividing ever-shorter distances by ever-briefer time spans?

No one could think how to classify these vanishingly small times and lengths. Leibniz talked of “infinitesimals,” which were by definition “the smallest possible numbers,” but that definition raised as many questions as it answered. How could a number be smaller than every fraction? Perhaps infinitesimals were real but too small to see, like the microscopic creatures Leeuwenhoek had recently discovered? As tiny as they were, infinitesimals were bigger than 0. Except sometimes, when they weren’t.

Leibniz tried to explain, but he only made matters worse. “By . . . infinitely small, we understand something . . . indefinitely small, so that each conducts itself as a sort of class, and not merely as the last thing of a class. If anyone wishes to understand these [the infinitely small] as the ultimate things . . . it can be done.” This was, two of Leibniz’s disciples acknowledged, “an enigma rather than an explication.” Newton spoke instead of “the ultimate ratio of evanescent quantities,” which was perhaps clear to him but baffling to almost everyone else. “In mathematics the minutest errors are not to be neglected,” he insisted in one breath, and in the next he pointed out that these tiny crumbs of numbers were so close to 0 that they could safely be ignored.

Amazingly, things mostly worked out, much as earlier generations had found that things mostly worked out when they manipulated what were then newfangled and still mysterious negative numbers. In the case of calculus, a seemingly mystical abracadabra yielded utterly down-to-earth, hardheaded results about such questions as how far cannonballs would travel and how much damage they would do when they landed. The very name calculus served as a testimonial to the practical value of this new art; calculus is the Latin word for “pebble,” a reference to the heaps of stones once used as a calculating aid in addition and multiplication.

Skeptics contended that any correct results must have been due to happy accidents in which multiple errors canceled themselves out. (“For science it cannot be called,” one critic later charged, “when you proceed blindfold and arrive at the Truth not knowing how or by what means.”) But so long as the slapdash new techniques kept churning out answers to questions that had always lain out of reach, no one spent much time worrying about rigor. Leibniz, boundlessly optimistic in personality as well as in his philosophical views, argued explicitly that this gift horse should be saddled and ridden, not inspected. It would all work out.

The muddle would last until the 1800s. Only then would a new generation of mathematicians find a way to replace vague intuitions with clear definitions. (The breakthrough was finding a way to define “limits” while banishing all talk of infinitely small numbers.) In all the intervening years mathematicians and scientists had rejoiced in a bounty they did not understand. Instead they followed the advice of Jean d’Alembert, a French mathematician who lived a century after Newton and Leibniz but during the era when the underpinnings of calculus were still cloaked in mystery.

“Persist,” d’Alembert advised, “and faith will come to you.”