Chaos: Making a New Science - James Gleick (1988)
Of course, the entire effort is to put oneself
Outside the ordinary range
Of what are called statistics.
THE HISTORIAN OF SCIENCE Thomas S. Kuhn describes a disturbing experiment conducted by a pair of psychologists in the 1940s. Subjects were given glimpses of playing cards, one at a time, and asked to name them. There was a trick, of course. A few of the cards were freakish: for example, a red six of spades or a black queen of diamonds.
At high speed the subjects sailed smoothly along. Nothing could have been simpler. They didn’t see the anomalies at all. Shown a red six of spades, they would sing out either “six of hearts” or “six of spades.” But when the cards were displayed for longer intervals, the subjects started to hesitate. They became aware of a problem but were not sure quite what it was. A subject might say that he had seen something odd, like a red border around a black heart.
Eventually, as the pace was slowed even more, most subjects would catch on. They would see the wrong cards and make the mental shift necessary to play the game without error. Not everyone, though. A few suffered a sense of disorientation that brought real pain. “I can’t make that suit out, whatever it is,” said one. “It didn’t even look like a card that time. I don’t know what color it is now or whether it’s a spade or a heart. I’m not even sure what a spade looks like. My God!”
Professional scientists, given brief, uncertain glimpses of nature’s workings, are no less vulnerable to anguish and confusion when they come face to face with incongruity. And incongruity, when it changes the way a scientist sees, makes possible the most important advances. So Kuhn argues, and so the story of chaos suggests.
Kuhn’s notions of how scientists work and how revolutions occur drew as much hostility as admiration when he first published them, in 1962, and the controversy has never ended. He pushed a sharp needle into the traditional view that science progresses by the accretion of knowledge, each discovery adding to the last, and that new theories emerge when new experimental facts require them. He deflated the view of science as an orderly process of asking questions and finding their answers. He emphasized a contrast between the bulk of what scientists do, working on legitimate, well-understood problems within their disciplines, and the exceptional, unorthodox work that creates revolutions. Not by accident, he made scientists seem less than perfect rationalists.
In Kuhn’s scheme, normal science consists largely of mopping up operations. Experimentalists carry out modified versions of experiments that have been carried out many times before. Theorists add a brick here, reshape a cornice there, in a wall of theory. It could hardly be otherwise. If all scientists had to begin from the beginning, questioning fundamental assumptions, they would be hard pressed to reach the level of technical sophistication necessary to do useful work. In Benjamin Franklin’s time, the handful of scientists trying to understand electricity could choose their own first principles—indeed, had to. One researcher might consider attraction to be the most important electrical effect, thinking of electricity as a sort of “effluvium” emanating from substances. Another might think of electricity as a fluid, conveyed by conducting material. These scientists could speak almost as easily to laymen as to each other, because they had not yet reached a stage where they could take for granted a common, specialized language for the phenomena they were studying. By contrast, a twentieth-century fluid dynamicist could hardly expect to advance knowledge in his field without first adopting a body of terminology and mathematical technique. In return, unconsciously, he would give up much freedom to question the foundations of his science.
Central to Kuhn’s ideas is the vision of normal science as solving problems, the kinds of problems that students learn the first time they open their textbooks. Such problems define an accepted style of achievement that carries most scientists through graduate school, through their thesis work, and through the writing of journal articles that makes up the body of academic careers. “Under normal conditions the research scientist is not an innovator but a solver of puzzles, and the puzzles upon which he concentrates are just those which he believes can be both stated and solved within the existing scientific tradition,” Kuhn wrote.
Then there are revolutions. A new science arises out of one that has reached a dead end. Often a revolution has an interdisciplinary character—its central discoveries often come from people straying outside the normal bounds of their specialties. The problems that obsess these theorists are not recognized as legitimate lines of inquiry. Thesis proposals are turned down or articles are refused publication. The theorists themselves are not sure whether they would recognize an answer if they saw one. They accept risk to their careers. A few freethinkers working alone, unable to explain where they are heading, afraid even to tell their colleagues what they are doing—that romantic image lies at the heart of Kuhn’s scheme, and it has occurred in real life, time and time again, in the exploration of chaos.
Every scientist who turned to chaos early had a story to tell of discouragement or open hostility. Graduate students were warned that their careers could be jeopardized if they wrote theses in an untested discipline, in which their advisors had no expertise. A particle physicist, hearing about this new mathematics, might begin playing with it on his own, thinking it was a beautiful thing, both beautiful and hard—but would feel that he could never tell his colleagues about it. Older professors felt they were suffering a kind of midlife crisis, gambling on a line of research that many colleagues were likely to misunderstand or resent. But they also felt an intellectual excitement that comes with the truly new. Even outsiders felt it, those who were attuned to it. To Freeman Dyson at the Institute for Advanced Study, the news of chaos came “like an electric shock” in the 1970s. Others felt that for the first time in their professional lives they were witnessing a true paradigm shift, a transformation in a way of thinking.
Those who recognized chaos in the early days agonized over how to shape their thoughts and findings into publishable form. Work fell between disciplines—for example, too abstract for physicists yet too experimental for mathematicians. To some the difficulty of communicating the new ideas and the ferocious resistance from traditional quarters showed how revolutionary the new science was. Shallow ideas can be assimilated; ideas that require people to reorganize their picture of the world provoke hostility. A physicist at the Georgia Institute of Technology, Joseph Ford, started quoting Tolstoy: “I know that most men, including those at ease with problems of the greatest complexity, can seldom accept even the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they have delighted in explaining to colleagues, which they have proudly taught to others, and which they have woven, thread by thread, into the fabric of their lives.”
Many mainstream scientists remained only dimly aware of the emerging science. Some, particularly traditional fluid dynamicists, actively resented it. At first, the claims made on behalf of chaos sounded wild and unscientific. And chaos relied on mathematics that seemed unconventional and difficult.
As the chaos specialists spread, some departments frowned on these somewhat deviant scholars; others advertised for more. Some journals established unwritten rules against submissions on chaos; other journals came forth to handle chaos exclusively. The chaoticists or chaologists (such coinages could be heard) turned up with disproportionate frequency on the yearly lists of important fellowships and prizes. By the middle of the eighties a process of academic diffusion had brought chaos specialists into influential positions within university bureaucracies. Centers and institutes were founded to specialize in “nonlinear dynamics” and “complex systems.”
Chaos has become not just theory but also method, not just a canon of beliefs but also a way of doing science. Chaos has created its own technique of using computers, a technique that does not require the vast speed of Crays and Cybers but instead favors modest terminals that allow flexible interaction. To chaos researchers, mathematics has become an experimental science, with the computer replacing laboratories full of test tubes and microscopes. Graphic images are the key. “It’s masochism for a mathematician to do without pictures,” one chaos specialist would say. “How can they see the relationship between that motion and this? How can they develop intuition?” Some carry out their work explicitly denying that it is a revolution; others deliberately use Kuhn’s language of paradigm shifts to describe the changes they witness.
Stylistically, early chaos papers recalled the Benjamin Franklin era in the way they went back to first principles. As Kuhn notes, established sciences take for granted a body of knowledge that serves as a communal starting point for investigation. To avoid boring their colleagues, scientists routinely begin and end their papers with esoterica. By contrast, articles on chaos from the late 1970s onward sounded evangelical, from their preambles to their perorations. They declared new credos, and they often ended with pleas for action: “These results appear to us to be both exciting and highly provocative.” “A theoretical picture of the transition to turbulence is just beginning to emerge.” “The heart of chaos is mathematically accessible.” “Chaos now presages the future as none will gainsay. But to accept the future, one must renounce much of the past.”
New hopes, new styles, and, most important, a new way of seeing. Revolutions do not come piecemeal. One account of nature replaces another. Old problems are seen in a new light and other problems are recognized for the first time. Something takes place that resembles a whole industry retooling for new production. In Kuhn’s words, “It is rather as if the professional community had been suddenly transported to another planet where familiar objects are seen in a different light and are joined by unfamiliar ones as well.”
THE LABORATORY MOUSE of the new science was the pendulum: emblem of classical mechanics, exemplar of constrained action, epitome of clockwork regularity. A bob swings free at the end of a rod. What could be further removed from the wildness of turbulence?
Where Archimedes had his bathtub and Newton his apple, so, according to the usual suspect legend, Galileo had a church lamp, swaying back and forth, time and again, on and on, sending its message monotonously into his consciousness. Christiaan Huygens turned the predictability of the pendulum into a means of timekeeping, sending Western civilization down a road from which there was no return. Foucault, in the Panthéon of Paris, used a twenty-story-high pendulum to demonstrate the earth’s rotation. Every clock and every wristwatch (until the era of vibrating quartz) relied on a pendulum of some size or shape. (For that matter, the oscillation of quartz is not so different.) In space, free of friction, periodic motion comes from the orbits of heavenly bodies, but on earth virtually any regular oscillation comes from some cousin of the pendulum. Basic electronic circuits are described by equations exactly the same as those describing a swinging bob. The electronic oscillations are millions of times faster, but the physics is the same. By the twentieth century, though, classical mechanics was strictly a business for classrooms and routine engineering projects. Pendulums decorated science museums and enlivened airport gift shops in the form of rotating plastic “space balls.” No research physicist bothered with pendulums.
Yet the pendulum still had surprises in store. It became a touchstone, as it had for Galileo’s revolution. When Aristotle looked at a pendulum, he saw a weight trying to head earthward but swinging violently back and forth because it was constrained by its rope. To the modern ear this sounds foolish. For someone bound by classical concepts of motion, inertia, and gravity, it is hard to appreciate the self-consistent world view that went with Aristotle’s understanding of a pendulum. Physical motion, for Aristotle, was not a quantity or a force but rather a kind of change, just as a person’s growth is a kind of change. A falling weight is simply seeking its most natural state, the state it will reach if left to itself. In context, Aristotle’s view made sense. When Galileo looked at a pendulum, on the other hand, he saw a regularity that could be measured. To explain it required a revolutionary way of understanding objects in motion. Galileo’s advantage over the ancient Greeks was not that he had better data. On the contrary, his idea of timing a pendulum precisely was to get some friends together to count the oscillations over a twenty-four-hour period—a labor-intensive experiment. Galileo saw the regularity because he already had a theory that predicted it. He understood what Aristotle could not: that a moving object tends to keep moving, that a change in speed or direction could only be explained by some external force, like friction.
In fact, so powerful was his theory that he saw a regularity that did not exist. He contended that a pendulum of a given length not only keeps precise time but keeps the same time no matter how wide or narrow the angle of its swing. A wide-swinging pendulum has farther to travel, but it happens to travel just that much faster. In other words, its period remains independent of its amplitude. “If two friends shall set themselves to count the oscillations, one counting the wide ones and the other the narrow, they will see that they may count not just tens, but even hundreds, without disagreeing by even one, or part of one.” Galileo phrased his claim in terms of experimentation, but the theory made it convincing—so much so that it is still taught as gospel in most high school physics courses. But it is wrong. The regularity Galileo saw is only an approximation. The changing angle of the bob’s motion creates a slight nonlinearity in the equations. At low amplitudes, the error is almost nonexistent. But it is there, and it is measurable even in an experiment as crude as the one Galileo describes.
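The size of that nonlinearity is easy to check numerically. The following is a minimal sketch, not from the book: the integrator, time step, and rod length are all illustrative. It integrates the full pendulum equation, theta'' = -(g/L)·sin(theta), for a bob released from rest, and times the swing.

```python
import math

def pendulum_period(theta0, g=9.81, length=1.0, dt=1e-5):
    """Measure the period of an ideal pendulum released from rest at
    angle theta0 (radians), using the full nonlinear equation
    theta'' = -(g/L) * sin(theta)."""
    theta, omega, t = theta0, 0.0, 0.0
    # A half period is the time to swing from +theta0 across to the
    # opposite extreme, detected when the angular velocity, having
    # gone negative, turns positive again.
    while True:
        omega -= (g / length) * math.sin(theta) * dt  # semi-implicit Euler
        theta += omega * dt
        t += dt
        if omega > 0.0:
            return 2.0 * t

narrow = pendulum_period(0.05)                    # a narrow swing
wide = pendulum_period(1.0)                       # about 57 degrees
textbook = 2 * math.pi * math.sqrt(1.0 / 9.81)    # small-angle formula
```

The narrow swing matches the textbook small-angle period almost exactly, while the wide swing runs several percent slow: a discrepancy Galileo's two counting friends could, in principle, have detected.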
Small nonlinearities were easy to disregard. People who conduct experiments learn quickly that they live in an imperfect world. In the centuries since Galileo and Newton, the search for regularity in experiment has been fundamental. Any experimentalist looks for quantities that remain the same, or quantities that are zero. But that means disregarding bits of messiness that interfere with a neat picture. If a chemist finds two substances in a constant proportion of 2.001 one day, and 2.003 the next day, and 1.998 the day after, he would be a fool not to look for a theory that would explain a perfect two-to-one ratio.
To get his neat results, Galileo also had to disregard nonlinearities that he knew of: friction and air resistance. Air resistance is a notorious experimental nuisance, a complication that had to be stripped away to reach the essence of the new science of mechanics. Does a feather fall as rapidly as a stone? All experience with falling objects says no. The story of Galileo dropping balls off the tower of Pisa, as a piece of myth, is a story about changing intuitions by inventing an ideal scientific world where regularities can be separated from the disorder of experience.
To separate the effects of gravity on a given mass from the effects of air resistance was a brilliant intellectual achievement. It allowed Galileo to close in on the essence of inertia and momentum. Still, in the real world, pendulums eventually do exactly what Aristotle’s quaint paradigm predicted. They stop.
In laying the groundwork for the next paradigm shift, physicists began to face up to what many believed was a deficiency in their education about simple systems like the pendulum. By our century, dissipative processes like friction were recognized, and students learned to include them in equations. Students also learned that nonlinear systems were usually unsolvable, which was true, and that they tended to be exceptions—which was not true. Classical mechanics described the behavior of whole classes of moving objects, pendulums and double pendulums, coiled springs and bent rods, plucked strings and bowed strings. The mathematics applied to fluid systems and to electrical systems. But almost no one in the classical era suspected the chaos that could lurk in dynamical systems if nonlinearity was given its due.
A physicist could not truly understand turbulence or complexity unless he understood pendulums—and understood them in a way that was impossible in the first half of the twentieth century. As chaos began to unite the study of different systems, pendulum dynamics broadened to cover high technologies from lasers to superconducting Josephson junctions. Some chemical reactions displayed pendulum-like behavior, as did the beating heart. The unexpected possibilities extended, one physicist wrote, to “physiological and psychiatric medicine, economic forecasting, and perhaps the evolution of society.”
Consider a playground swing. The swing accelerates on its way down, decelerates on its way up, all the while losing a bit of speed to friction. It gets a regular push—say, from some clockwork machine. All our intuition tells us that, no matter where the swing might start, the motion will eventually settle down to a regular back and forth pattern, with the swing coming to the same height each time. That can happen. Yet, odd as it seems, the motion can also turn erratic, first high, then low, never settling down to a steady state and never exactly repeating a pattern of swings that came before.
The surprising, erratic behavior comes from a nonlinear twist in the flow of energy in and out of this simple oscillator. The swing is damped and it is driven: damped because friction is trying to bring it to a halt, driven because it is getting a periodic push. Even when a damped, driven system is at equilibrium, it is not at equilibrium, and the world is full of such systems, beginning with the weather, damped by the friction of moving air and water and by the dissipation of heat to outer space, and driven by the constant push of the sun’s energy.
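The two fates of the swing show up in a short simulation. In this sketch the constants (damping 0.5, drive frequency 2/3) are the illustrative values common in textbook treatments of the driven pendulum, not values from the book. The angle is sampled once per drive cycle: a gentle push settles onto a repeating pattern, while a strong push never does.

```python
import math

def stroboscope(force, damping=0.5, drive_freq=2.0 / 3.0,
                periods=120, steps_per_period=6000):
    """Integrate theta'' = -sin(theta) - damping*theta' + force*cos(w*t)
    and record the angle once per drive cycle (a stroboscopic view)."""
    period = 2 * math.pi / drive_freq
    dt = period / steps_per_period
    theta, omega, t = 0.2, 0.0, 0.0
    samples = []
    for _ in range(periods):
        for _ in range(steps_per_period):
            omega += (-math.sin(theta) - damping * omega
                      + force * math.cos(drive_freq * t)) * dt
            theta += omega * dt
            t += dt
        # wrap the angle into (-pi, pi] so samples are comparable
        samples.append(math.atan2(math.sin(theta), math.cos(theta)))
    return samples

gentle = stroboscope(force=0.5)   # settles to a steady back-and-forth
wild = stroboscope(force=1.2)     # stays erratic, never repeating
```

After the transients die away, the gentle samples agree to many decimal places; the wild ones keep wandering over the whole range of angles.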
But unpredictability was not the reason physicists and mathematicians began taking pendulums seriously again in the sixties and seventies. Unpredictability was only the attention-grabber. Those studying chaotic dynamics discovered that the disorderly behavior of simple systems acted as a creative process. It generated complexity: richly organized patterns, sometimes stable and sometimes unstable, sometimes finite and sometimes infinite, but always with the fascination of living things. That was why scientists played with toys.
One toy, sold under the name “Space Balls” or “Space Trapeze,” is a pair of balls at opposite ends of a rod, sitting like the crossbar of a T atop a pendulum with a third, heavier ball at its foot. The lower ball swings back and forth while the upper rod rotates freely. All three balls have little magnets inside, and once set in motion the device keeps going because it has a battery-powered electromagnet embedded in the base. The device senses the approach of the lowest ball and gives it a small magnetic kick each time it passes. Sometimes the apparatus settles into a steady, rhythmic swinging. But other times, its motion seems to remain chaotic, always changing and endlessly surprising.
Another common pendulum toy is no more than a so-called spherical pendulum—a pendulum free to swing not just back and forth but in any direction. A few small magnets are placed around its base. The magnets attract the metal bob, and when the pendulum stops, it will have been captured by one of them. The idea is to set the pendulum swinging and guess which magnet will win. Even with just three magnets placed in a triangle, the pendulum’s motion cannot be predicted. It will swing back and forth between A and B for a while, then switch to B and C, and then, just as it seems to be settling on C, jump back to A. Suppose a scientist systematically explores the behavior of this toy by making a map, as follows: Pick a starting point; hold the bob there and let go; color the point red, blue, or green, depending on which magnet ends up with the bob. What will the map look like? It will have regions of solid red, blue, or green, as one might expect—regions where the bob will swing reliably to a particular magnet. But it can also have regions where the colors are woven together with infinite complexity. Adjacent to a red point, no matter how close one chooses to look, no matter how much one magnifies the map, there will be green points and blue points. For all practical purposes, then, the bob’s destiny will be impossible to guess.
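Such a map is easy to sketch numerically. In the toy model below everything is invented for illustration: a spring-like pull stands in for the pendulum, each magnet attracts the bob from slightly below the plane of its swing, and each starting point is classified by the magnet nearest the bob once the motion damps out.

```python
# Magnets at the corners of a triangle (made-up geometry and constants).
MAGNETS = [(1.0, 0.0), (-0.5, 0.866), (-0.5, -0.866)]

def captured_by(x, y, dt=0.01, steps=3000):
    """Release the bob at rest over (x, y); return the index of the
    magnet nearest the bob after the motion has damped out."""
    vx = vy = 0.0
    h2 = 0.2 ** 2          # squared height of the bob above the magnets
    for _ in range(steps):
        ax = -0.5 * x - 0.3 * vx      # restoring pull plus damping
        ay = -0.5 * y - 0.3 * vy
        for mx, my in MAGNETS:        # inverse-square-like attraction
            dx, dy = mx - x, my - y
            d3 = (dx * dx + dy * dy + h2) ** 1.5
            ax += dx / d3
            ay += dy / d3
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return min(range(3), key=lambda i: (MAGNETS[i][0] - x) ** 2
                                       + (MAGNETS[i][1] - y) ** 2)

# Color a coarse grid of starting points by outcome.
grid = [[captured_by(-2 + 4 * i / 12, -2 + 4 * j / 12)
         for j in range(13)] for i in range(13)]
```

Even this coarse thirteen-by-thirteen map contains cells of all three colors side by side; in finer versions of such maps, the boundaries between the colors interleave without end.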
Traditionally, a dynamicist would believe that to write down a system’s equations is to understand the system. How better to capture the essential features? For a playground swing or a toy, the equations tie together the pendulum’s angle, its velocity, its friction, and the force driving it. But because of the little bits of nonlinearity in these equations, a dynamicist would find himself helpless to answer the easiest practical questions about the future of the system. A computer can address the problem by simulating it, rapidly calculating each cycle. But simulation brings its own problem: the tiny imprecision built into each calculation rapidly takes over, because this is a system with sensitive dependence on initial conditions. Before long, the signal disappears and all that remains is noise.
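Sensitive dependence can be demonstrated with the driven pendulum itself. In this sketch, whose constants are the standard illustrative ones for the chaotic regime rather than values from the book, two simulations begin one part in a million apart:

```python
import math

def final_angle(theta0, t_end=200.0, dt=0.001):
    """Damped, driven pendulum in a chaotic regime (damping 0.5,
    drive strength 1.2, drive frequency 2/3 -- illustrative values)."""
    theta, omega, t = theta0, 0.0, 0.0
    while t < t_end:
        omega += (-math.sin(theta) - 0.5 * omega
                  + 1.2 * math.cos((2.0 / 3.0) * t)) * dt
        theta += omega * dt
        t += dt
    return theta

a = final_angle(0.2)
b = final_angle(0.2 + 1e-6)   # one part in a million difference
# by the end of the run the two angles bear no useful resemblance
```

The microscopic discrepancy doubles and redoubles until it is as large as the motion itself, which is why longer simulation, by itself, buys no longer prediction.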
Or is it? Lorenz had found unpredictability, but he had also found pattern. Others, too, discovered suggestions of structure amid seemingly random behavior. The example of the pendulum was simple enough to disregard, but those who chose not to disregard it found a provocative message. In some sense, they realized, physics understood perfectly the fundamental mechanisms of pendulum motion but could not extend that understanding to the long term. The microscopic pieces were perfectly clear; the macroscopic behavior remained a mystery. The tradition of looking at systems locally—isolating the mechanisms and then adding them together—was beginning to break down. For pendulums, for fluids, for electronic circuits, for lasers, knowledge of the fundamental equations no longer seemed to be the right kind of knowledge at all.
As the 1960s went on, individual scientists made discoveries that paralleled Lorenz’s: a French astronomer studying galactic orbits, for example, and a Japanese electrical engineer modeling electronic circuits. But the first deliberate, coordinated attempt to understand how global behavior might differ from local behavior came from mathematicians. Among them was Stephen Smale of the University of California at Berkeley, already famous for unraveling the most esoteric problems of many-dimensional topology. A young physicist, making small talk, asked what Smale was working on. The answer stunned him: “Oscillators.” It was absurd. Oscillators—pendulums, springs, or electrical circuits—were the sort of problem that a physicist finished off early in his training. They were easy. Why would a great mathematician be studying elementary physics? Not until years later did the young man realize that Smale was looking at nonlinear oscillators, chaotic oscillators, and seeing things that physicists had learned not to see.
SMALE MADE A BAD CONJECTURE. In the most rigorous mathematical terms, he proposed that practically all dynamical systems tended to settle, most of the time, into behavior that was not too strange. As he soon learned, things were not so simple.
Smale was a mathematician who did not just solve problems but also built programs of problems for others to solve. He parlayed his understanding of history and his intuition about nature into an ability to announce, quietly, that a whole untried area of research was now worth a mathematician’s time. Like a successful businessman, he evaluated risks and coolly planned his strategy, and he had a Pied Piper quality. Where Smale led, many followed. His reputation was not confined to mathematics, though. Early in the Vietnam war, he and Jerry Rubin organized “International Days of Protest” and sponsored efforts to stop the trains carrying troops through California. In 1966, while the House Un-American Activities Committee was trying to subpoena him, he was heading for Moscow to attend the International Congress of Mathematicians. There he received the Fields Medal, the highest honor of his profession.
The scene in Moscow that summer became an indelible part of the Smale legend. Five thousand agitated and agitating mathematicians had gathered. Political tensions were high. Petitions were circulating. As the conference drew toward its close, Smale responded to a request from a North Vietnamese reporter by giving a press conference on the broad steps of Moscow University. He began by condemning the American intervention in Vietnam, and then, just as his hosts began to smile, added a condemnation of the Soviet invasion of Hungary and the absence of political freedom in the Soviet Union. When he was done, he was quickly hustled away in a car for questioning by Soviet officials. When he returned to California, the National Science Foundation canceled his grant.
Smale’s Fields Medal honored a famous piece of work in topology, a branch of mathematics that flourished in the twentieth century and had a particular heyday in the fifties. Topology studies the properties that remain unchanged when shapes are deformed by twisting or stretching or squeezing. Whether a shape is square or round, large or small, is irrelevant in topology, because stretching can change those properties. Topologists ask whether a shape is connected, whether it has holes, whether it is knotted. They imagine surfaces not just in the one-, two-, and three-dimensional universes of Euclid, but in spaces of many dimensions, impossible to visualize. Topology is geometry on rubber sheets. It concerns the qualitative rather than the quantitative. It asks: if you don’t know the measurements, what can you say about overall structure? Smale had solved one of the historic, outstanding problems of topology, the Poincaré conjecture, for spaces of five dimensions and higher, and in so doing established a secure standing as one of the great men of the field. In the 1960s, though, he left topology for untried territory. He began studying dynamical systems.
Both subjects, topology and dynamical systems, went back to Henri Poincaré, who saw them as two sides of one coin. Poincaré, at the turn of the century, had been the last great mathematician to bring a geometric imagination to bear on the laws of motion in the physical world. He was the first to understand the possibility of chaos; his writings hinted at a sort of unpredictability almost as severe as the sort Lorenz discovered. But after Poincaré’s death, while topology flourished, dynamical systems atrophied. Even the name fell into disuse; the subject to which Smale nominally turned was differential equations. Differential equations describe the way systems change continuously over time. The tradition was to look at such things locally, meaning that engineers or physicists would consider one set of possibilities at a time. Like Poincaré, Smale wanted to understand them globally, meaning that he wanted to understand the entire realm of possibilities at once.
Any set of equations describing a dynamical system—Lorenz’s, for example—allows certain parameters to be set at the start. In the case of thermal convection, one parameter concerns the viscosity of the fluid. Large changes in parameters can make large differences in a system—for example, the difference between arriving at a steady state and oscillating periodically. But physicists assumed that very small changes would cause only very small differences in the numbers, not qualitative changes in behavior.
Linking topology and dynamical systems is the possibility of using a shape to help visualize the whole range of behaviors of a system. For a simple system, the shape might be some kind of curved surface; for a complicated system, a manifold of many dimensions. A single point on such a surface represents the state of a system at an instant frozen in time. As a system progresses through time, the point moves, tracing an orbit across this surface. Bending the shape a little corresponds to changing the system’s parameters, making a fluid more viscous or driving a pendulum a little harder. Shapes that look roughly the same give roughly the same kinds of behavior. If you can visualize the shape, you can understand the system.
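For the simplest case, a frictionless oscillator, the picture is easy to compute: the state is one (position, velocity) point, and the orbit it traces closes on itself. A sketch (the oscillator and the sampling scheme are illustrative):

```python
def orbit(x0, v0, dt=1e-4, steps=100000, every=10000):
    """Trace the phase-space orbit of the simplest oscillator,
    x'' = -x. Each sampled point is a (position, velocity) pair."""
    x, v = x0, v0
    points = []
    for k in range(steps):
        v -= x * dt               # semi-implicit Euler step
        x += v * dt
        if (k + 1) % every == 0:
            points.append((x, v))
    return points

pts = orbit(1.0, 0.0)
# every sampled state lies on (very nearly) the same circle,
# since the energy-like quantity x^2 + v^2 is conserved
```

Add friction and the circle becomes a spiral sinking to a point; add a periodic drive and the orbit can become something far stranger, which is exactly the territory Smale was entering.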
When Smale turned to dynamical systems, topology, like most pure mathematics, was carried out with an explicit disdain for real-world applications. Topology’s origins had been close to physics, but for mathematicians the physical origins were forgotten and shapes were studied for their own sake. Smale fully believed in that ethos—he was the purest of the pure—yet he had an idea that the abstract, esoteric development of topology might now have something to contribute to physics, just as Poincaré had intended at the turn of the century.
One of Smale’s first contributions, as it happened, was his faulty conjecture. In physical terms, he was proposing a law of nature something like this: A system can behave erratically, but the erratic behavior cannot be stable. Stability—“stability in the sense of Smale,” as mathematicians would sometimes say—was a crucial property. Stable behavior in a system was behavior that would not disappear just because some number was changed a tiny bit. Any system could have both stable and unstable behaviors within it. The equations governing a pencil standing on its point have a good mathematical solution with the center of gravity directly above the point—but you cannot stand a pencil on its point because the solution is unstable. The slightest perturbation draws the system away from that solution. On the other hand, a marble lying at the bottom of a bowl stays there, because if the marble is perturbed slightly it rolls back. Physicists assumed that any behavior they could actually observe regularly would have to be stable, since in real systems tiny disturbances and uncertainties are unavoidable. You never know the parameters exactly. A model that was to be physically realistic and robust in the face of small perturbations, physicists reasoned, would surely have to be a stable one.
The bad news arrived in the mail soon after Christmas 1959, when Smale was living temporarily in an apartment in Rio de Janeiro with his wife, two infant children, and a mass of diapers. His conjecture had defined a class of differential equations, all structurally stable. Any chaotic system, he claimed, could be approximated as closely as you liked by a system in his class. It was not so. A letter from a colleague informed him that many systems were not so well-behaved as he had imagined, and it described a counterexample, a system with chaos and stability, together. This system was robust. If you perturbed it slightly, as any natural system is constantly perturbed by noise, the strangeness would not go away. Robust and strange—Smale studied the letter with a disbelief that melted away slowly.
Chaos and instability, concepts only beginning to acquire formal definitions, were not the same at all. A chaotic system could be stable if its particular brand of irregularity persisted in the face of small disturbances. Lorenz’s system was an example, although years would pass before Smale heard about Lorenz. The chaos Lorenz discovered, with all its unpredictability, was as stable as a marble in a bowl. You could add noise to this system, jiggle it, stir it up, interfere with its motion, and then when everything settled down, the transients dying away like echoes in a canyon, the system would return to the same peculiar pattern of irregularity as before. It was locally unpredictable, globally stable. Real dynamical systems played by a more complicated set of rules than anyone had imagined. The example described in the letter from Smale’s colleague was another simple system, discovered more than a generation earlier and all but forgotten. As it happened, it was a pendulum in disguise: an oscillating electronic circuit. It was nonlinear and it was periodically forced, just like a child on a swing.
It was just a vacuum tube, really, investigated in the twenties by a Dutch electrical engineer named Balthasar van der Pol. A modern physics student would explore the behavior of such an oscillator by looking at the line traced on the screen of an oscilloscope. Van der Pol did not have an oscilloscope, so he had to monitor his circuit by listening to changing tones in a telephone handset. He was pleased to discover regularities in the behavior as he changed the current that fed it. The tone would leap from frequency to frequency as if climbing a staircase, leaving one frequency and then locking solidly onto the next. Yet once in a while van der Pol noted something strange. The behavior sounded irregular, in a way that he could not explain. Under the circumstances he was not worried. “Often an irregular noise is heard in the telephone receivers before the frequency jumps to the next lower value,” he wrote in a letter to Nature. “However, this is a subsidiary phenomenon.” He was one of many scientists who got a glimpse of chaos but had no language to understand it. For people trying to build vacuum tubes, the frequency-locking was important. But for people trying to understand the nature of complexity, the truly interesting behavior would turn out to be the “irregular noise” created by the conflicting pulls of a higher and lower frequency.
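Van der Pol's circuit obeys a simple nonlinear equation, and its behavior can be sketched numerically. Below is a minimal Python sketch of the periodically forced van der Pol oscillator, x'' - mu(1 - x^2)x' + x = A cos(wt), integrated with crude Euler steps; the parameter values and step sizes here are illustrative choices for the sketch, not van der Pol's.

```python
import math

def forced_van_der_pol(mu=3.0, amp=1.2, freq=0.63, steps=100_000, dt=0.0005):
    """Euler-integrate x'' - mu*(1 - x**2)*x' + x = amp*cos(freq*t),
    the periodically forced van der Pol oscillator, returning x over time."""
    x, v = 0.1, 0.0
    xs = []
    for n in range(steps):
        accel = mu * (1 - x * x) * v - x + amp * math.cos(freq * n * dt)
        x += v * dt
        v += accel * dt
        xs.append(x)
    return xs

trajectory = forced_van_der_pol()
# however irregular the interplay of natural rhythm and forcing becomes,
# the nonlinear damping term keeps the oscillation bounded
```

Whether the oscillator locks onto the driving frequency or wanders between rhythms depends on the parameters; the wandering regime is where van der Pol's unexplained "irregular noise" lives.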
Wrong though it was, Smale’s conjecture put him directly on the track of a new way of conceiving the full complexity of dynamical systems. Several mathematicians had taken another look at the possibilities of the van der Pol oscillator, and Smale now took their work into a new realm. His only oscilloscope screen was his mind, but it was a mind shaped by his years of exploring the topological universe. Smale conceived of the entire range of possibilities in the oscillator, the entire phase space, as physicists called it. Any state of the system at a moment frozen in time was represented as a point in phase space; all the information about its position or velocity was contained in the coordinates of that point. As the system changed in some way, the point would move to a new position in phase space. As the system changed continuously, the point would trace a trajectory.
For a simple system like a pendulum, the phase space might just be a rectangle: the pendulum’s angle at a given instant would determine the east-west position of a point and the pendulum’s speed would determine the north-south position. For a pendulum swinging regularly back and forth, the trajectory through phase space would be a loop, around and around as the system lived through the same sequence of positions over and over again.
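The loop can be traced explicitly. A minimal Python sketch, assuming a frictionless pendulum with unit length and unit gravity; each step records one (angle, angular velocity) point in phase space.

```python
import math

def pendulum_orbit(theta0=0.3, steps=10_000, dt=0.001):
    """Trace a frictionless pendulum through phase space.
    Each recorded point is (angle, angular velocity); as the swing
    repeats, the points retrace the same closed loop."""
    theta, omega = theta0, 0.0
    points = []
    for _ in range(steps):
        omega -= math.sin(theta) * dt  # gravity's pull on the angular velocity
        theta += omega * dt            # semi-implicit step keeps the loop closed
        points.append((theta, omega))
    return points

orbit = pendulum_orbit()
# the east-west coordinate swings between roughly +0.3 and -0.3,
# over and over, as the trajectory goes around the loop
```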
Smale, instead of looking at any one trajectory, concentrated on the behavior of the entire space as the system changed—as more driving energy was added, for example. His intuition leapt from the physical essence of the system to a new kind of geometrical essence. His tools were topological transformations of shapes in phase space—transformations like stretching and squeezing. Sometimes these transformations had clear physical meaning. Dissipation in a system, the loss of energy to friction, meant that the system’s shape in phase space would contract like a balloon losing air—finally shrinking to a point at the moment the system comes to a complete halt. To represent the full complexity of the van der Pol oscillator, he realized that the phase space would have to suffer a complex new kind of combination of transformations. He quickly turned his idea about visualizing global behavior into a new kind of model. His innovation—an enduring image of chaos in the years that followed—was a structure that became known as the horseshoe.
MAKING PORTRAITS IN PHASE SPACE. Traditional time series (above) and trajectories in phase space (below) are two ways of displaying the same data and gaining a picture of a system’s long-term behavior. The first system (left) converges on a steady state—a point in phase space. The second repeats itself periodically, forming a cyclical orbit. The third repeats itself in a more complex waltz rhythm, a cycle with “period three.” The fourth is chaotic.
To make a simple version of Smale’s horseshoe, you take a rectangle and squeeze it top and bottom into a horizontal bar. Take one end of the bar and fold it and stretch it around the other, making a C-shape, like a horseshoe. Then imagine the horseshoe embedded in a new rectangle and repeat the same transformation, shrinking and folding and stretching.
The process mimics the work of a mechanical taffy-maker, with rotating arms that stretch the taffy, double it up, stretch it again, and so on until the taffy’s surface has become very long, very thin, and intricately self-embedded. Smale put his horseshoe through an assortment of topological paces, and, the mathematics aside, the horseshoe provided a neat visual analogue of the sensitive dependence on initial conditions that Lorenz would discover in the atmosphere a few years later. Pick two nearby points in the original space, and you cannot guess where they will end up. They will be driven arbitrarily far apart by all the folding and stretching. Afterward, two points that happen to lie nearby will have begun arbitrarily far apart.
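The stretch-and-fold can be imitated with a few lines of arithmetic. The Python sketch below uses the baker's map, a close mathematical cousin of Smale's horseshoe rather than the horseshoe itself; the starting points and iteration count are illustrative choices.

```python
def stretch_and_fold(x, y):
    """One pass of the baker's map on the unit square: stretch to double
    width, squeeze to half height, then stack the right half on top of
    the left, much as a taffy-maker doubles the candy back on itself."""
    if x < 0.5:
        return 2 * x, y / 2
    return 2 * x - 1, y / 2 + 0.5

def iterate(point, n):
    x, y = point
    for _ in range(n):
        x, y = stretch_and_fold(x, y)
    return x, y

# two starting points differing by one part in ten million
a = iterate((0.3000000, 0.5), 30)
b = iterate((0.3000001, 0.5), 30)
# after thirty folds the pair has been driven far apart
```

Each fold doubles the horizontal separation, so a gap of one part in ten million becomes a gap of order one within a couple of dozen iterations; this is the sensitive dependence on initial conditions that the horseshoe makes visible.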
SMALE’S HORSESHOE. This topological transformation provided a basis for understanding the chaotic properties of dynamical systems. The basics are simple: A space is stretched in one direction, squeezed in another, and then folded. When the process is repeated, it produces a kind of structured mixing familiar to anyone who has rolled many-layered pastry dough. A pair of points that end up close together may have begun far apart.
Originally, Smale had hoped to explain all dynamical systems in terms of stretching and squeezing—with no folding, at least no folding that would drastically undermine a system’s stability. But folding turned out to be necessary, and folding allowed sharp changes in dynamical behavior. Smale’s horseshoe stood as the first of many new geometrical shapes that gave mathematicians and physicists a new intuition about the possibilities of motion. In some ways it was too artificial to be useful, still too much a creature of mathematical topology to appeal to physicists. But it served as a starting point. As the sixties went on, Smale assembled around him at Berkeley a group of young mathematicians who shared his excitement about this new work in dynamical systems. Another decade would pass before their work fully engaged the attention of less pure sciences, but when it did, physicists would realize that Smale had turned a whole branch of mathematics back toward the real world. It was a golden age, they said.
“It’s the paradigm shift of paradigm shifts,” said Ralph Abraham, a Smale colleague who became a professor of mathematics at the University of California at Santa Cruz.
“When I started my professional work in mathematics in 1960, which is not so long ago, modern mathematics in its entirety—in its entirety—was rejected by physicists, including the most avant-garde mathematical physicists. So differentiable dynamics, global analysis, manifolds of mappings, differential geometry—everything just a year or two beyond what Einstein had used—was all rejected. The romance between mathematicians and physicists had ended in divorce in the 1930s. These people were no longer speaking. They simply despised each other. Mathematical physicists refused their graduate students permission to take math courses from mathematicians: Take mathematics from us. We will teach you what you need to know. The mathematicians are on some kind of terrible ego trip and they will destroy your mind. That was 1960. By 1968 this had completely turned around.” Eventually physicists, astronomers, and biologists all knew they had to have the news.
A MODEST COSMIC MYSTERY: the Great Red Spot of Jupiter, a vast, swirling oval, like a giant storm that never moves and never runs down. Anyone who saw the pictures beamed across space from Voyager 2 in 1979 recognized the familiar look of turbulence on a hugely unfamiliar scale. It was one of the solar system’s most venerable landmarks—“the red spot roaring like an anguished eye/ amid a turbulence of boiling eyebrows,” as John Updike described it. But what was it? Twenty years after Lorenz, Smale, and other scientists set in motion a new way of understanding nature’s flows, the other-worldly weather of Jupiter proved to be one of the many problems awaiting the altered sense of nature’s possibilities that came with the science of chaos.
For three centuries it had been a case of the more you know, the less you know. Astronomers noticed a blemish on the great planet not long after Galileo first pointed his telescopes at Jupiter. Robert Hooke saw it in the 1600s. Donati Creti painted it in the Vatican’s picture gallery. As a piece of coloration, the spot called for little explaining. But telescopes got better, and knowledge bred ignorance. The last century produced a steady march of theories, one on the heels of another. For example:
The Lava Flow Theory. Scientists in the late nineteenth century imagined a huge oval lake of molten lava flowing out of a volcano. Or perhaps the lava had flowed out of a hole created by a planetoid striking a thin solid crust.
The New Moon Theory. A German scientist suggested, by contrast, that the spot was a new moon on the point of emerging from the planet’s surface.
The Egg Theory. An awkward new fact: the spot was seen to be drifting slightly against the planet’s background. So a notion put forward in 1939 viewed the spot as a more or less solid body floating in the atmosphere the way an egg floats in water. Variations of this theory—including the notion of a drifting bubble of hydrogen or helium—remained current for decades.
The Column-of-Gas Theory. Another new fact: even though the spot drifted, somehow it never drifted far. So scientists proposed in the sixties that the spot was the top of a rising column of gas, possibly coming through a crater.
Then came Voyager. Most astronomers thought the mystery would give way as soon as they could look closely enough, and indeed, the Voyager fly-by provided a splendid album of new data, but the data, in the end, was not enough. The spacecraft pictures in 1979 revealed powerful winds and colorful eddies. In spectacular detail, astronomers saw the spot itself as a hurricane-like system of swirling flow, shoving aside the clouds, embedded in zones of east-west wind that made horizontal stripes around the planet. Hurricane was the best description anyone could think of, but for several reasons it was inadequate. Earthly hurricanes are powered by the heat released when moisture condenses to rain; no moist processes drive the Red Spot. Hurricanes rotate in a cyclonic direction, counterclockwise above the Equator and clockwise below, like all earthly storms; the Red Spot’s rotation is anticyclonic. And most important, hurricanes die out within days.
Also, as astronomers studied the Voyager pictures, they realized that the planet was virtually all fluid in motion. They had been conditioned to look for a solid planet surrounded by a paper-thin atmosphere like earth’s, but if Jupiter had a solid core anywhere, it was far from the surface. The planet suddenly looked like one big fluid dynamics experiment, and there sat the Red Spot, turning steadily around and around, thoroughly unperturbed by the chaos around it.
The spot became a gestalt test. Scientists saw what their intuitions allowed them to see. A fluid dynamicist who thought of turbulence as random and noisy had no context for understanding an island of stability in its midst. Voyager had made the mystery doubly maddening by showing small-scale features of the flow, too small to be seen by the most powerful earthbound telescopes. The small scales displayed rapid disorganization, eddies appearing and disappearing within a day or less. Yet the spot was immune. What kept it going? What kept it in place?
The National Aeronautics and Space Administration keeps its pictures in archives, a half-dozen or so around the country. One archive is at Cornell University. Nearby, in the early 1980s, Philip Marcus, a young astronomer and applied mathematician, had an office. After Voyager, Marcus was one of a half-dozen scientists in the United States and Britain who looked for ways to model the Red Spot. Freed from the ersatz hurricane theory, they found more appropriate analogues elsewhere. The Gulf Stream, for example, winding through the western Atlantic Ocean, twists and branches in subtly reminiscent ways. It develops little waves, which turn into kinks, which turn into rings and spin off from the main current—forming slow, long-lasting, anticyclonic vortices. Another parallel came from a peculiar phenomenon in meteorology known as blocking. Sometimes a system of high pressure sits offshore, slowly turning, for weeks or months, in defiance of the usual east-west flow. Blocking disrupted the global forecasting models, but it also gave the forecasters some hope, since it produced orderly features with unusual longevity.
Marcus studied those NASA pictures for hours, the gorgeous Hasselblad pictures of men on the moon and the pictures of Jupiter’s turbulence. Since Newton’s laws apply everywhere, Marcus programmed a computer with a system of fluid equations. To capture Jovian weather meant writing rules for a mass of dense hydrogen and helium, resembling an unlit star. The planet spins fast, each day flashing by in ten earth hours. The spin produces a strong Coriolis force, the sidelong force that shoves against a person walking across a merry-go-round, and the Coriolis force drives the spot.
Where Lorenz used his tiny model of the earth’s weather to print crude lines on rolled paper, Marcus used far greater computer power to assemble striking color images. First he made contour plots. He could barely see what was going on. Then he made slides, and then he assembled the images into an animated movie. It was a revelation. In brilliant blues, reds, and yellows, a checkerboard pattern of rotating vortices coalesces into an oval with an uncanny resemblance to the Great Red Spot in NASA’s animated film of the real thing. “You see this large-scale spot, happy as a clam amid the small-scale chaotic flow, and the chaotic flow is soaking up energy like a sponge,” he said. “You see these little tiny filamentary structures in a background sea of chaos.”
The spot is a self-organizing system, created and regulated by the same nonlinear twists that create the unpredictable turmoil around it. It is stable chaos.
As a graduate student, Marcus had learned standard physics, solving linear equations, performing experiments designed to match linear analysis. It was a sheltered existence, but after all, nonlinear equations defy solution, so why waste a graduate student’s time? Gratification was programmed into his training. As long as he kept the experiments within certain bounds, the linear approximations would suffice and he would be rewarded with the expected answer. Once in a while, inevitably, the real world would intrude, and Marcus would see what he realized years later had been the signs of chaos. He would stop and say, “Gee, what about this little fluff here.” And he would be told, “Oh, it’s experimental error, don’t worry about it.”
But unlike most physicists, Marcus eventually learned Lorenz’s lesson, that a deterministic system can produce much more than just periodic behavior. He knew to look for wild disorder, and he knew that islands of structure could appear within the disorder. So he brought to the problem of the Great Red Spot an understanding that a complex system can give rise to turbulence and coherence at the same time. He could work within an emerging discipline that was creating its own tradition of using the computer as an experimental tool. And he was willing to think of himself as a new kind of scientist: not primarily an astronomer, not a fluid dynamicist, not an applied mathematician, but a specialist in chaos.