Seeing Further: The Story of Science, Discovery, and the Genius of the Royal Society - Bill Bryson (2010)

16. JOHN D. BARROW

SIMPLE REALLY: FROM SIMPLICITY TO COMPLEXITY - AND BACK AGAIN

John D. Barrow FRS is a cosmologist, Professor of Mathematical Sciences and Director of the Millennium Mathematics Project at the University of Cambridge, and Gresham Professor of Geometry at Gresham College, London. His many books include The Anthropic Cosmological Principle, The World Within the World, Pi in the Sky, Theories of Everything, The Origin of the Universe, The Left Hand of Creation, The Artful Universe, Impossibility: The Limits of Science and the Science of Limits, Between Inner Space and Outer Space, The Constants of Nature: From Alpha to Omega and Cosmic Imagery: Key Images in the History of Science. His latest is 100 Essential Things You Didn’t Know You Didn’t Know.

MAKING SENSE OF THE WORLD SCIENTIFICALLY HAS OFTEN MEANT SEARCHING FOR SIMPLICITY UNDERLYING THE APPARENTLY COMPLEX. FINE, SAYS JOHN BARROW, EXCEPT WHEN THE COMPLEXITY TURNS OUT TO BE IRREDUCIBLE. OR DOES IT?

Symmetry calms me down, lack of symmetry makes me crazy.

- Yves Saint Laurent

WHAT IS THE WORLD LIKE?

Is the world simple or complicated? As with many things, it depends on who you ask, when you ask, and how seriously they take you. If you should ask a particle physicist you would soon be hearing how wonderfully simple the universe appears to be. But, on returning to contemplate the everyday world, you just know ‘it ain’t necessarily so’: it’s far from simple. For the psychologist, the economist, or the botanist, the world is a higgledy-piggledy mess of complex events that just seemed to win out over other alternatives in the long run. It has no mysterious penchant for symmetry or simplicity.

So who is right? Is the world really simple, as the particle physicists claim, or is it as complex as almost everyone else seems to think? Understanding the question, why it yields two different answers, and what the difference tells us about the world is a key part of the story of science over the past 350 years, from the inception of the Royal Society to the present day.

THE QUEST FOR SIMPLICITY

Our belief in the simplicity of Nature springs from the observation that there are regularities which we call ‘laws’ of Nature. The idea of laws of Nature has a long history rooted in monotheistic religious thinking, and in ancient practices of statute law and social government.1 The most significant advance in our understanding of their nature and consequences followed Isaac Newton’s identification of a law of gravitation in the late seventeenth century, and his creation of a battery of mathematical tools with which to unpick its consequences. Newton made his own tools: with them we have made our tools ever since. His work inspired the early Fellows of the Royal Society, and scientists all over Europe who closely followed the advances reported at its meetings and in its published Transactions during his long Presidency, from 1703 until his death in 1727, to bring about a Newtonian revolution in the mathematical description of motion, gravity and light. It gave rise to a style of mathematics applied to science that remains distinctively Newtonian.

Laws reflect the existence of patterns in Nature. We might even define science as the search for those patterns. We observe and document the world in all possible ways; but while this data-gathering is necessary for science, it is not sufficient. We are not content simply to acquire a record of everything that is, or has ever happened, like cosmic stamp collectors. Instead, we look for patterns in the facts, and some of these patterns we have come to call the laws of Nature, while others have achieved only the status of by-laws. Having found, or guessed (for there are no rules at all about how you might find them), possible patterns, we use them to predict what should happen if the pattern is also followed at all times and in places where we have yet to look. Then we check if we are right (there are strict rules about how you do this!). In this way, we can update our candidate pattern and improve the likelihood that it explains what we see. Sometimes a likelihood gets so low that we say the proposal is ‘falsified’, or so high that it is ‘confirmed’ or ‘verified’, although strictly speaking any such judgement is provisional: neither is ever possible with complete certainty. This is called the ‘scientific method’.2

For Newton and his contemporaries, the laws of motion were codifications into simple mathematical form of the habits and recurrences of Nature. They were idealistic: ‘bodies acted upon by no forces will …’ because there are no such bodies. They were laws of cause and effect: they told you what happened if a force was applied. The future is uniquely and completely determined by the present.

Later, these laws of change were found to be equivalent to statements that quantities did not change. The requirement that the laws were the same everywhere in the universe was equivalent to the conservation of momentum; the requirement that they be found to be the same at all times was equivalent to the conservation of energy; and the requirement that they be found the same in every direction in the universe was equivalent to the conservation of angular momentum. This way of looking at the world in terms of conserved quantities, or invariances and unchanging patterns, would prove to be extremely fruitful.
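
To see the equivalence in miniature, consider the simplest textbook case (a one-dimensional illustration chosen for clarity, not drawn from Newton’s own presentation). For a particle of mass m moving in a potential V(x), the law of change is

$$m\ddot{x} = -\frac{dV}{dx}.$$

If that law is the same at all times - that is, if V carries no explicit dependence on the time t - then the energy

$$E = \tfrac{1}{2}m\dot{x}^{2} + V(x)$$

never changes, because

$$\frac{dE}{dt} = \dot{x}\left(m\ddot{x} + \frac{dV}{dx}\right) = 0.$$

The law of change and the conservation statement are two faces of the same time-translation symmetry.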

During the twentieth century, physicists became so enamoured of the seamless correspondence between laws dictating changes and invariances preserving abstract patterns when particular forces of Nature acted, that their methodology changed. Instead of identifying habitual patterns of cause and effect, codifying them into mathematical laws, and then showing them to be equivalent to the preservation of a particular symmetry in Nature, physicists did a U-turn. The presence of symmetry became such a persuasive and powerful facet of laws of physics that physicists began with the mathematical catalogue of possible symmetries. They could pick out symmetries with the right scope to describe the behaviour of a particular force of Nature. Then, having identified the preserved pattern, they could deduce the laws of change that are permitted and test them by experiment.

Since 1973, this focus upon symmetry has taken centre stage in the study of elementary-particle physics and the laws governing the fundamental interactions of Nature. Symmetry is the primary guide into the legislative structure of the elementary-particle world, and its laws are derived from the requirement that particular symmetries, often of a highly abstract character, are preserved when things change. Such theories are called ‘gauge theories’. All the currently successful theories of the four known forces of Nature - the electromagnetic, weak, strong and gravitational forces - are gauge theories. These theories prescribe as well as describe: preserving the invariances upon which they are based requires the existence of the forces they govern. They are also able to dictate the character of the elementary particles of matter that they govern. In these respects, gauge theories differ from the classical laws of Newton, which, since they governed the motions of all bodies, could say nothing about the properties of those bodies. The reason for this added power of explanation is that the elementary-particle world, in contrast to the macroscopic world, is populated by collections of identical particles (‘once you’ve seen one electron, you’ve seen ’em all,’ as Richard Feynman remarked). Particular gauge theories govern the behaviour of particular subsets of all the elementary particles, according to their shared attributes. Each theory is based upon the preservation of a pattern.
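
The simplest example shows how preserving an invariance can require a force to exist (the notation below is the standard one for electromagnetism, used here purely as an illustration). The physics of a free electron field ψ is unchanged by a global shift of phase, ψ → e^{iα}ψ. Demand that the symmetry hold locally, with α = α(x) varying from point to point, and derivatives of ψ spoil the invariance. It can be restored only by introducing a compensating field A_μ, transforming as A_μ → A_μ − (1/e)∂_μα, and by replacing the ordinary derivative with the covariant derivative

$$D_\mu = \partial_\mu + ieA_\mu.$$

That compensating field is the photon: the electromagnetic force is the price exacted for preserving the local phase symmetry, and its couplings are completely dictated by it.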

This generation of preserved patterns for each of the separate interactions of Nature has motivated the search for a unification of those theories into more comprehensive editions based upon larger symmetries. Within those larger patterns, smaller patterns respected by the individual forces of Nature might be accommodated, like jigsaw pieces, in an interlocking fashion that places some new constraint upon their allowed forms. So far, this strategy has resulted in a successful, experimentally tested, unification of the electromagnetic and weak interactions, and a number of purely theoretical proposals for a further unification with the strong interaction (‘grand unification’), and candidates for a four-fold unification with the gravitational force to produce3 a so-called ‘theory of everything’, or ‘TOE’. It is this general pattern of explanation by which forces and their underlying patterns are linked and reduced in number by unifications, culminating in a single unified law, that lies at the heart of the physicist’s perception of the world as ‘simple’. The success of this promising path of progress is the reason that led our hypothetical particle physicist to tell us that the world is simple. The laws of Nature are few in number and getting fewer.

The first candidate for a TOE was a ‘superstring’ theory, first developed by Michael Green and John Schwarz in 1984. After the initial excitement that followed their proof that string theories are finite and well-defined theories of fundamental physics, hundreds of young mathematicians and physicists flocked to join this research area at the world’s leading physics departments. It soon became clear that there were five varieties of string theory available to consider as a TOE: all finite and logically self-consistent, but all different. This was a little disconcerting. You wait nearly a century for a theory of everything then, suddenly, five come along all at once. They had exotic-sounding names that described aspects of the mathematical patterns they contained - the type I, type IIA and type IIB superstring theories and the SO(32) and E8 × E8 heterotic string theories - together with eleven-dimensional supergravity. These theories are all unusual in that they have ten dimensions of space and time, with the exception of the last one, which has eleven. Although it is not demanded by the finiteness of the theory, it is generally assumed that only one of these ten or eleven dimensions is a ‘time’ and the others are spatial. Of course, we do not live in a nine- or ten-dimensional space, so in order to reconcile such a world with what we see it must be assumed that only three of the dimensions of space in these theories became large, while the others remain ‘trapped’ with (so far) unobservably small sizes. It is remarkable that in order to achieve a finite theory we seem to need more dimensions of space than those that we experience. This might be regarded as a prediction of the theory. It is a consequence of the amount of ‘room’ that is needed to accommodate the patterns governing the four known forces of Nature inside a single bigger pattern without their hiving themselves off into sub-patterns that each ‘talk’ only to themselves rather than to everything else. Nobody knows why three dimensions (rather than one or four or eight, say) became large, or what force was responsible. Nor do we know if the number of large dimensions is something that arises at random and so could be different - and may be different - elsewhere in the universe, or is an inevitable consequence of the laws of physics that could not be otherwise without destroying the logical self-consistency of physical reality.

One thing that we do know is that only in spaces with three large dimensions can things bind together to form structures like atoms, molecules, planets and stars. No complexity and no life is possible except in spaces with three large dimensions. So, even if the number of large dimensions is different in different parts of the universe, or separate universes are possible with different numbers of large dimensions, we would have to find ourselves living where there are three large dimensions, no matter how improbable that might be, because life could exist in no other type of space.
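
The essence of the argument goes back to Paul Ehrenfest, and can be sketched in a few lines (assuming, for illustration, a point mass and Newtonian gravity). In N large spatial dimensions the gravitational force falls off as 1/r^{N−1}, so a planet orbiting with angular momentum L feels an effective potential of the form

$$V_{\mathrm{eff}}(r) = \frac{L^{2}}{2mr^{2}} - \frac{k}{r^{N-2}}, \qquad k > 0.$$

For N = 3 the centrifugal barrier wins at small r and the attraction at large r, so V_eff has a minimum and stable bound orbits exist. For N ≥ 4 the attraction falls off at least as steeply as the barrier, no minimum survives, and orbits either spiral inwards or escape. A similar fate befalls atoms, which is why binding and complexity single out three large dimensions.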

At first, it was hoped that one of these theories would turn out to be special and attention would then narrow down to reveal it to be the true theory of everything. Unfortunately, things were not so simple. Progress was slow and unremarkable until Edward Witten, at Princeton, discovered that these different string theories are not really different. They are linked to one another by mathematical transformations that amount to exchanging large distances for small ones, and vice versa in a particular way. Nor were these string theories fundamental. Instead, they were each limiting situations of another deeper, as yet unfound, TOE which lives in eleven dimensions of space and time. That theory became known as ‘M-theory’, where M has been said to be an abbreviation for Mystery, Matrix, or Millennium, just as you like.4

Do these ‘extra’ dimensions of space really exist? This is a key question for all these new theories of everything. In most versions, the other dimensions are so small (10⁻³³ cm) that no direct experiment will ever see them. But, in some variants, they can be much bigger. The interesting feature is that only the force of gravity will ‘feel’ these extra dimensions and be modified by their presence. In these cases the extra dimensions could be up to one hundredth of a millimetre in extent and they would alter the form of the law of gravity over these and smaller distances. This gives experimental physicists a wonderful challenge: test the law of gravity on submillimetre scales. More sobering still is the fact that all the observed constants of Nature, in our three dimensions, are not truly fundamental, and need not be constant in time or space:5 they are just shadows of the true constants that live in the full complement of dimensions. Sometimes simplicity can be complex too.
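
Experimenters usually frame such tests as a search for a Yukawa-type deviation from Newton’s potential (this parametrization is the standard one in the experimental literature, quoted here as an illustration):

$$V(r) = -\frac{Gm_{1}m_{2}}{r}\left(1 + \alpha\, e^{-r/\lambda}\right),$$

where α sets the strength of any new short-range contribution and λ its range. Extra dimensions of size R would announce themselves as a deviation with λ of order R, so tightening the experimental bounds on α at ever smaller λ directly limits how large the hidden dimensions can be.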

ELEMENTARY PARTICLES?

The fact that Nature displays populations of identical elementary particles is its most remarkable property. It is the ‘fine tuning’ that surpasses all others. In the nineteenth century another of the Royal Society’s greatest Fellows, James Clerk Maxwell, first stressed that the physical world was composed of identical atoms which were not subject to evolution. Today, we look for some deeper explanation of the sub-atomic particles of Nature from our TOE. One of the most perplexing discoveries by experimentalists has been that such ‘elementary’ particles appear to be extremely numerous. They were supposed to be an exclusive club, but they have ended up with an embarrassingly large clientele.

String theories offered another route to solving this problem. Instead of a TOE containing a population of elementary point-like particles, string theories introduce basic entities that are loops (or lines) of energy which have a tension. As the temperature rises the tension falls and the loops vibrate in an increasingly stringy fashion, but as the temperature falls the tension increases and the loops contract to become more and more point-like. So, at low energies the strings behave like points and allow the theory to make the same successful predictions about what we should see there as the intrinsically point-like theories do. However, at high energies, things are different. The hope is that it will be possible to determine the principal energies of vibration of the superstrings. All strings, even guitar strings, have a collection of special vibrational energies that they naturally take up when disturbed. If we could calculate these special energies for superstrings, then they would (by virtue of Einstein’s famous mass-energy equivalence - E = mc²) correspond to the masses of the ‘particles’ that we call elementary. So far, these energies have proved too hard to calculate. However, one of them has been found: it corresponds to a particle with zero mass and two units of a quantum attribute called ‘spin’. This spin value ensures that it mediates attractions between all masses. It is the particle we call the ‘graviton’ and it is responsible for mediating the force of gravity. Its appearance shows that string theory necessarily includes gravity and, moreover, its behaviour is described by the equations of general relativity at low energies - a remarkable and compelling feature since earlier candidates for a TOE all failed miserably to include gravity in the unification story at all.

WHY IS THE WORLD MATHEMATICAL?

This reflection on the symmetries behind the laws of Nature also tells us why mathematics is so useful in practice. Mathematics is simply the catalogue of all possible patterns. Some of those patterns are especially attractive and are studied or used for decoration; others are patterns in time or in chains of logic. Some are described solely in abstract terms, while others can be drawn on paper or carved in stone. Viewed in this way, it is inevitable that the world is described by mathematics. We could not exist in a universe in which there was neither pattern nor order. The description of that order, and all the other sorts that we can imagine, is what we call mathematics. Yet, although the fact that mathematics describes the world is not a mystery, the exceptional utility of mathematics is. It could have been that the patterns behind the world were of such complexity that no simple algorithms could approximate them. Such a universe would ‘be’ mathematical, but we would not find mathematics terribly useful. We could prove ‘existence’ theorems about what structures exist but we would be unable to predict the future using mathematics in the way that NASA’s mission control does.

Seen in this light, we recognise that the great mystery about mathematics and the world is that such simple mathematics is so far reaching. Very simple patterns, described by mathematics that is easily within our grasp, allow us to explain and understand a huge part of the universe and the happenings within it.

THE COPERNICAN PRINCIPLE APPLIED TO LAWS

It is often said with hindsight that Nicholas Copernicus taught us not to assume that our position in the universe is special in every way. Of course, this does not mean that it cannot be special in any way, simply because life is only possible in certain places.6 Once we start distinguishing between the laws of Nature and their outcomes we should also bring this Copernican view to bear upon the laws of Nature as well as their outcomes.

Universal laws of Nature should be just that - universal - they should not just exist in special forms for some privileged observers at special locations, or who are moving in particular ways, in the universe. Alas, Newton’s laws do not have this democratic property. They only have simple forms for privileged observers who are moving in a special way, neither rotating nor accelerating with respect to the distant ‘fixed’ stars. So there were privileged observers in Newton’s universe for whom all the laws of motion look simple.

Newton’s first law of motion demands that bodies acted upon by no forces do not accelerate: they remain at rest or move with constant speed. However, this law of motion will only be observed by a special class of observers who are neither accelerating nor rotating relative to the fixed stars. The appearance of these special observers for whom all the laws of motion look simpler violates the Copernican principle.

Imagine that you are located inside a spaceship through whose windows you can see the far distant stars. Put the spaceship in a spin. Through the windows you will see the distant stars accelerating past in the opposite sense to the spin, even though they are not acted upon by any forces. Newton’s first law is not true for a spinning observer - a much more complicated law holds. This undemocratic situation signalled that there was something incomplete and unsatisfactory about Newton’s formulation of the laws of motion. One of Einstein’s great achievements was to create a new theory of gravity in which all observers, no matter how they move, do find the laws of gravity and motion to take the same form.7 By incorporating this principle of ‘general covariance’, Einstein’s theory of general relativity completed the extension of the Copernican principle from outcomes to laws.
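
What the spinning observer sees can be written down explicitly (a standard result of classical mechanics, included for illustration). In a frame rotating with constant angular velocity Ω, Newton’s second law for a body of mass m acquires extra, observer-dependent terms:

$$m\mathbf{a}' = \mathbf{F} - 2m\,\boldsymbol{\Omega}\times\mathbf{v}' - m\,\boldsymbol{\Omega}\times(\boldsymbol{\Omega}\times\mathbf{r}'),$$

the Coriolis and centrifugal forces. Even with F = 0, a ‘force-free’ body has a' ≠ 0: the simple first law has been replaced by a more complicated rule that betrays the observer’s own rotation.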

OUTCOMES ARE DIFFERENT

The simplicity and economy of the laws and symmetries that govern Nature’s fundamental forces are not the end of the story. When we look around us we do not observe the laws of Nature; rather, we see the outcomes of those laws. The distinction is crucial. Outcomes are much more complicated than the laws that govern them because they do not have to respect the symmetries displayed by the laws. By this subtle interplay, it is possible to have a world which displays an unlimited number of complicated asymmetrical structures yet is governed by a few, very simple, symmetrical laws. This is one of the secrets of the universe.

Suppose we balance a ball at the apex of a cone. If we release the ball, then the law of gravitation will determine its subsequent motion. Gravity has no preference for any particular direction in the universe; it is entirely democratic in that respect. Yet, when we release the ball, it will always fall in some particular direction, either because it was given a little push in one direction, or as a result of quantum fluctuations which do not permit an unstable equilibrium state to persist. So here, in the outcome of the falling ball, the directional symmetry of the law of gravity is broken. This teaches us why science is often so difficult. As observers, we see only the broken symmetries manifested as the outcomes of the laws of Nature; from them, we must work backwards to unmask the hidden symmetries behind the appearances.
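
The physicist’s textbook caricature of this situation is the ‘Mexican hat’ potential (a standard example, not tied to the cone itself): for a complex field φ, the potential

$$V(\phi) = \mu^{2}|\phi|^{2} + \lambda|\phi|^{4}, \qquad \mu^{2} < 0,\ \lambda > 0,$$

is perfectly symmetric under changes of the phase of φ, yet its states of lowest energy lie on the circle |φ|² = −μ²/2λ. Any actual ground state must settle at one particular point on that circle, breaking a symmetry that the law itself respects - exactly as the released ball does.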

We can now understand the answers that we obtained from the different scientists we originally polled about the simplicity of the world. The particle physicist works closest to the laws of Nature themselves, and so is especially impressed by their unity, simplicity and symmetry. But the biologist, the economist, or the meteorologist is occupied with the study of the complex outcomes of the laws, rather than with the laws themselves. As a result, it is the complexities of Nature, rather than her laws, that impress them most.

AMBIGUITIES BETWEEN LAWS AND OUTCOMES

One of the most important developments in fundamental physics and cosmology over the past twenty years has been the steady dissolution of the divide between laws and outcomes. When the early quest for a theory of everything began many thought that such a theory would uniquely and completely specify all the constants of physics and the structural features of the universe. There would be no room left for wondering about ‘other’ universes, or hypothetical changes to the structure of our observed universe. Remarkably, things did not turn out like that. Candidate theories of everything revealed that many of the features of physics and the universe which we had become accustomed to think of as programmed into the universe from the start in some unalterable way, were nothing of the sort. The number of forces of Nature, their laws of interaction, the populations of elementary particles, the values of the so-called constants of Nature, the number of dimensions of space, and even whole universes, can all arise in quasi-random fashion in these theories. They are elaborate outcomes of processes that can have many different physically self-consistent results. There are fewer unalterable laws than we might think.

This means that we have to take seriously the possibility that some features of the universe which we call fundamental may not have explanations in the sense that had always been expected. A good example is the value of the infamous cosmological constant which appears to drive the acceleration of the universe today. Its numerical value is very strange. It cannot so far be explained by known theories of physics. Some physicists hope that there will ultimately be a single theory of everything which will predict the exact numerical value of the cosmological constant that the astronomers need to explain their observations. Others recognise that there may not be any explanation of that sort to be found. If the value of the cosmological constant is a random outcome of some exotic symmetry-breaking process near the beginning of the universe’s expansion, then all we can say is that it falls within the range of values that permit life to evolve and persist. This is a depressing situation for those who hoped to explain its value. However, it would be a strange (non-Copernican) universe that allowed us to determine everything that we want about it. We may just have to get used to the fact that there are some things we can predict and others that we can only measure. Here is a little piece of science faction to illustrate the point.

Imagine someone in 1600 trying to convince Johannes Kepler that a theory of the solar system would not be able to predict the number of planets in the solar system. Kepler would have had none of it. He would have been outraged. This would have constituted an admission of complete failure. He believed that the beautiful Platonic symmetries of mathematics required the solar system to have a particular number of planets. For Kepler this would have been the key feature of such a theory. He would have rejected the idea that the number of planets had no part to play in the ultimate theory.

Today, no planetary astronomer would expect any theory of the origin of the solar system to predict the number of planets. It would make no sense. This number is something that falls out at random as a result of a chaotic sequence of formation events and subsequent mergers between embryonic planetesimals. It is simply not a predictable outcome. We concentrate instead on predicting other features of the solar system so as to test the theory of its origin. Perhaps those who are resolutely opposed to the idea that quantities like the cosmological constant might be randomly determined, and hence unpredictable by the theory of everything, might consider how strange Kepler’s views about the importance of the number of planets now seem.

DISORGANISED COMPLEXITIES

Complexity, like crime, comes in organised and disorganised forms. The disorganised form goes by the name of chaos and has proven to be ubiquitous in Nature. The standard folklore about chaotic systems is that they are unpredictable. They lead to out-of-control dinosaur parks and frustrated meteorologists. However, it is important to appreciate the nature of chaotic systems more fully than the Hollywood headlines allow.

Classical (that is, non-quantum mechanical) chaotic systems are not in any sense intrinsically random or unpredictable. They merely possess extreme sensitivity to ignorance. As Maxwell was again the first to recognise in 1873, any initial uncertainty in our knowledge of a chaotic system’s state is rapidly amplified in time. This feature might make you think it hopeless even to try to use mathematics to describe a chaotic situation. We are never going to get the mathematical equations for weather prediction 100 per cent correct - there is too much going on - so we will always end up being inaccurate to some extent in our predictions. But although that type of inaccuracy can contribute to unpredictability, it is not in itself a fatal blow to predicting the future adequately. After all, small errors in the weather equations could turn out to have an increasingly insignificant effect on the forecast as time goes on. In practice, it is our inability to determine the weather everywhere at any given time with perfect accuracy that is the major problem. Our inevitable uncertainties about what is going on in between weather stations leave scope for slightly different interpolations of the temperature and the wind motions between their locations. Chaos means that those slight differences can produce very different forecasts about tomorrow’s weather.

An important feature of chaotic systems is that, although they become unpredictable when you try to determine the future from a particular uncertain starting value, there may be a particular stable statistical spread of outcomes after a long time, regardless of how you started out. The most important thing to appreciate about these stable statistical distributions of events is that they often have very stable and predictable average behaviours. As a simple example, take a gas of moving molecules (their average speed of motion determines what we call the gas ‘temperature’) and think of the individual molecules as little balls. The motion of any single molecule is chaotic because each time it bounces off another molecule any uncertainty in its direction is amplified exponentially. This is something you can check for yourself by observing the collisions of marbles or snooker balls. In fact, the amplification in the angle of recoil, θ, between the successive (the nth and the (n+1)st) collisions of two identical balls is well described by the rule

$$\theta_{n+1} \approx \frac{d}{r}\,\theta_n,$$

where d is the average distance between collisions and r is the radius of the balls. Even the minimal initial uncertainty in θ₀ allowed by Heisenberg’s uncertainty principle is increased to exceed θ = 360 degrees after only about 14 collisions. So you can then predict nothing about its trajectory.
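
A few lines of code make the arithmetic vivid (a minimal sketch: the ratio d/r = 200 is taken from the gas example below, and the starting uncertainty of 10⁻³⁰ radians is an assumed, Heisenberg-scale figure):

```python
import math

D_OVER_R = 200.0   # amplification factor per collision (ratio of mean free path to ball radius)
THETA_0 = 1e-30    # assumed Heisenberg-scale initial angular uncertainty, in radians
FULL_CIRCLE = 2 * math.pi

# Apply theta_{n+1} = (d/r) * theta_n until the uncertainty swamps a full circle.
theta, collisions = THETA_0, 0
while theta < FULL_CIRCLE:
    theta *= D_OVER_R
    collisions += 1

print(f"All directional information is lost after {collisions} collisions.")
# With these assumed numbers the answer is 14, matching the estimate in the text.
```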

The motions of gas molecules behave like a huge number of snooker balls bouncing off each other and the denser walls of their container. One knows from bitter experience that snooker exhibits sensitive dependence on initial conditions: a slight miscue of the cue-ball produces a big miss! Unlike the snooker balls, the molecules won’t slow down and stop. Their typical distance between collisions is about 200 times their radius. With this value of d/r the unpredictability grows 200-fold at each close molecular encounter. All the molecular motions are individually chaotic, just like the snooker balls, but we still have simple rules like Boyle’s Law governing the pressure P, volume V, and temperature T - the averaged properties8 - of a confined gas of molecules:

$$\frac{PV}{T} = \text{constant}.$$

The lesson of this simple example is that chaotic systems can have stable, predictable, long-term, average behaviours. However, it can be difficult to predict when they will. The mathematical conditions that are sufficient to ensure it are often very difficult to prove. You usually just have to explore numerically to discover whether the computation of time averages converges in a nice way or not.9

Considerable impetus was imparted to the study and understanding of this type of chaotic unpredictability, and its influence on natural phenomena, by theoretical biologists like Robert May (later to become the fifty-eighth President of the Royal Society in 2000) and George Oster, together with the mathematician James Yorke. They identified simple features displayed by wide classes of difference equations relating the (n+1)st to the nth state of a system as it made the transition from order to chaos.10
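
May’s emblematic example was the logistic difference equation, x_{n+1} = rx_n(1 − x_n). A short sketch shows its transition from order to chaos (the parameter values below are the conventional illustrative ones):

```python
def logistic_orbit(r, x0=0.2, transient=500, keep=8):
    """Iterate x -> r*x*(1-x), discard a transient, and return the values then visited."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

# r = 2.8: the orbit settles on a single fixed point (order).
# r = 3.2: a period-2 cycle; r = 3.5: period 4 - the period-doubling cascade.
# r = 3.9: no repetition at all - chaos, with sensitive dependence on x0.
for r in (2.8, 3.2, 3.5, 3.9):
    print(r, logistic_orbit(r))
```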

ORGANISED COMPLEXITIES

Among complex outcomes of the laws of Nature, the most interesting are those that display forms of organised complexity. A selection of these is displayed in the diagram on the next page in terms of their size, gauged by their information storage capacity (how many binary digits are needed to specify them), versus their ability to process information (how quickly they can change one list of numbers into another).

As we proceed up the diagonal, increasing information storage capability grows hand in hand with the ability to transform that information into new forms. Organised complexity grows. Structures are typified by the presence of feedback, self-organisation and non-equilibrium behaviour. Mathematical scientists in many fields are searching for new types of ‘by-law’ or ‘principle’ which govern the existence and evolution of different varieties of complexity. These rules will be quite different from the ‘laws’ of the particle physicist. They will not be based upon symmetry and invariance, but upon principles of probability and information processing. Perhaps the second law of thermodynamics is as close as we have got to discovering one of this collection of general rules that govern the development of order and disorder.

The defining characteristic of the structures in the diagram below is that they are more than the sum of their parts. They are what they are, and display the behaviour that they do, not because they are made of atoms or molecules (which they all are), but because of the way in which their constituents are organised. It is the circuit diagram of the neural network that is the root of its complex behaviour. The laws of electromagnetism alone are insufficient to explain the working of a brain. We need to know how it is wired up and its circuits inter-connected. No theory of everything that the particle physicists supply us with is likely to shed any light upon the complex workings of the human brain or a turbulent waterfall.

ON THE EDGE OF CHAOS

The advent of small, inexpensive, powerful computers with good interactive graphics has enabled large, complex, and disordered situations to be studied observationally - by looking at a computer monitor. Experimental mathematics is a new tool. A computer can be programmed to simulate the evolution of complicated systems, and their long-term behaviour observed, studied, modified and replayed. By these means, the study of chaos and complexity has become a multidisciplinary subculture within science. The study of the traditional, exactly soluble problems of science has been augmented by a growing appreciation of the vast complexity expected in situations where many competing influences are at work. Prime candidates are provided by systems that evolve in their environment by natural selection, and, in so doing, modify those environments in complicated ways.

As our intuition about the nuances of chaotic behaviour has matured by exposure to natural examples, novelties have emerged that give important hints about how disorder often develops from regularity. Chaos and order have been found to coexist in a curious symbiosis. Imagine a very large egg-timer in which sand is falling, grain by grain, to create a growing sand pile. The pile evolves under the force of gravity in an erratic manner. Sandfalls of all sizes occur, and their effect is to maintain the overall gradient of the sand pile in a temporary equilibrium, always just on the verge of collapse. The pile steadily steepens until it reaches a particular slope and then gets no steeper. This self-sustaining process was dubbed ‘self-organising criticality’ by its discoverers, Per Bak, Chao Tang and Kurt Wiesenfeld, in 1987. The adjective ‘self-organising’ captures the way in which the chaotically falling grains seem to arrange themselves into an orderly pile. The title ‘criticality’ reflects the precarious state of the pile at any time. It is always about to experience an avalanche of some size or another. The sequence of events that maintains its state of large-scale order is a slow local build-up of sand somewhere on the slope, then a sudden avalanche, followed by another slow build-up, a sudden avalanche, and so on. At first, the infalling grains affect a small area of the pile, but gradually their avalanching effects increase to span the dimension of the entire pile, as they must if they are to organise it.

At a microscopic level, the fall of sand is chaotic, yet the result in the presence of a force like gravity is large-scale organisation. If there is nothing peculiar about the sand11 that renders avalanches of one size more probable than all others, then the frequency with which avalanches occur is proportional to some mathematical power of their size (the avalanches are said to be ‘scale-free’ processes). There are many natural systems - like earthquakes - and man-made ones - like some stock market crashes - where a concatenation of local processes combines to maintain a semblance of equilibrium in this way. Order develops on a large scale through the combination of many independent chaotic small-scale events that hover on the brink of instability. Complex adaptive systems thrive in the hinterland between the inflexibilities of determinism and the vagaries of chaos. There, they get the best of both worlds: out of chaos springs a wealth of alternatives for natural selection to sift; while the rudder of determinism sets a clear average course towards islands of stability.
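
The original Bak-Tang-Wiesenfeld model is simple enough to simulate in a few lines (a minimal sketch: the grid size, the toppling threshold of four grains, and the grain count are customary illustrative choices, not unique to the model):

```python
import random

SIZE = 20         # grid dimension (an arbitrary choice)
THRESHOLD = 4     # a site topples when it holds this many grains
grid = [[0] * SIZE for _ in range(SIZE)]

def drop_grain():
    """Add one grain at a random site, topple until stable, return the avalanche size."""
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    grid[i][j] += 1
    avalanche = 0
    unstable = [(i, j)]
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < THRESHOLD:
            continue
        grid[x][y] -= THRESHOLD          # the site topples...
        avalanche += 1
        if grid[x][y] >= THRESHOLD:      # ...and may need to topple again
            unstable.append((x, y))
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < SIZE and 0 <= ny < SIZE:   # grains pushed off the edge are lost
                grid[nx][ny] += 1
                if grid[nx][ny] >= THRESHOLD:
                    unstable.append((nx, ny))
    return avalanche

sizes = [drop_grain() for _ in range(50000)]
# Once the pile has organised itself to the critical slope, a histogram of the
# non-zero avalanche sizes falls on a rough straight line on log-log axes:
# the 'scale-free' power law described above.
```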

Originally, its discoverers hoped that the way in which the sandpile organised itself might be a paradigm for the development of all types of organised complexity. This was too optimistic. But it does provide clues as to how many types of complexity organise themselves. The avalanches of sand can represent extinctions of species in an ecological balance, traffic flow on a motorway, the bankruptcies of businesses in an economic system, earthquakes or volcanic eruptions in a model of the pressure equilibrium of the Earth’s crust, and even the formation of ox-bow lakes by a meandering river. Bends in the river make the flow faster there, which erodes the bank, leading to an ox-bow lake forming. After the lake forms, the river is left a little straighter. This process of gradual build-up of curvature followed by sudden ox-bow formation and straightening is how a river on a flat plain ‘organises’ its meandering shape.

It seems rather remarkable that all these completely different problems should behave like a tumbling pile of sand. A picture of Ricard Solé’s, showing a dog being taken for a bumpy walk, reveals the connection.12 If we have a situation where a force is acting - for the sand pile it is gravity, for the dog it is the elasticity of its leash - and there are many possible equilibrium states (valleys for the dog, stable local hills for the sand), then we can see what happens as the leash is pulled. The dog moves slowly uphill and then is pulled swiftly across the peak to the next valley, begins slowly climbing again, and then jumps across. This staccato movement of slow build-up and sudden jump, time and again, is what characterises the sandpile with its gradual build-up of sand followed by an avalanche. We can see from the picture that it will be the general pattern of behaviour in any system with these very simple ingredients.

At first, it was suggested that this route to self-organisation might be followed by all complex self-adaptive systems. That was far too optimistic: it is just one of many types of self-organisation. Yet, the nice feature of these insights is that they show that it is still possible to make important discoveries by observing the everyday things of life and asking the right questions, just like the founding Fellows of the Royal Society 350 years ago. You don’t always have to have satellites, accelerators and overwhelming computer power. Sometimes complexity can be simple too.

1 This civil and theological background, and the development of the concept of laws of Nature in ancient societies, is traced in J.D. Barrow, The World Within the World (Oxford, OUP, 1988).

2 In practice, improving the central theories of physics usually involves replacing a theory by a deeper and broader version that contains the original as a special, or limiting, case. Thus, Newton’s theory of gravity has been superseded by Einstein’s theory of general relativity but not replaced by it in some type of scientific ‘revolution’. Einstein’s theory becomes the same as Newton’s when we confine attention to weak gravitational forces and to motions at speeds much less than that of light. Similarly, another limiting process recovers Newtonian mechanics from quantum mechanics. This is why, regardless of the results of our search for the ‘ultimate’ theory of gravity, structural engineers and sports scientists will still be using Newton’s laws in a thousand years’ time.

3 Four fundamental forces are known, of which the weakest is gravitation. There might exist other, far weaker, forces of Nature. Although too weak for us to measure (perhaps ever), their existence may be required for the logical consistency of that single theory of everything. Without any means to check on their existence, we would always be missing a crucial piece of the cosmic jigsaw puzzle; see J.D. Barrow, New Theories of Everything: The quest for ultimate explanation (Oxford, OUP, 2007) and B. Greene, The Elegant Universe (London, Jonathan Cape, 1999).

4 These mathematical discoveries launched an intensive search for the underlying M-theory, but so far it has not been found. Other possibilities have emerged along the way, notably the arguments of Lisa Randall and Raman Sundrum that the three-dimensional space we inhabit may be thought of as a surface within a higher-dimensional space: the strong, weak and electromagnetic forces act only in that three-dimensional surface, while the force of gravity reaches out into all the other dimensions as well. This is why gravity is so much weaker than the other three forces of Nature in this picture; see L. Randall, Warped Passages: Unravelling the Universe’s Hidden Dimensions (London, Penguin, 2006).

5 For a discussion of the status of the constants of Nature and evidence for their possible time variation, see J.D. Barrow, The Constants of Nature (London, Cape, 2002).

6 This is one of the lessons learned from the anthropic principles.

7 Einstein used the elegant fact that tensor equations maintain the same form under any transformation of the coordinates used to express them. This is called the principle of general covariance.

8 The velocities of the molecules will also tend, after many collisions and regardless of their initial values, to attain a particular probability distribution depending only on the temperature, called the Maxwell-Boltzmann distribution.

9 This is clearly very important for computing the behaviour of chaotic systems. Many systems possess a shadowing property that ensures that computer calculations of long-term averages can be very accurate, even in the presence of rounding errors and other small inaccuracies introduced by the computer’s ability to store only a finite number of decimal places. These ‘round-off’ errors move the solution being calculated on to another nearby solution trajectory. Many chaotic systems have the property that these nearby behaviours end up visiting all the same places as the original solution and it doesn’t make any difference in the long run that you have been shifted from one to the other. For example, when considering molecules moving inside a container, you would set about calculating the pressure exerted on the walls by considering a molecule travelling from one side to the other and rebounding off a wall. In practice, a particular molecule might never make it across the container to hit the wall because it runs into other molecules. However, it gets replaced by another molecule that is behaving in the same way as it would have done had it continued on its way unperturbed.

10 R.M. May, ‘Simple Mathematical Models with Very Complicated Dynamics’, Nature, 261 (1976), 459. Later, this work would be rigorously formalised and generalised by Mitchell Feigenbaum in his classic paper ‘The Universal Metric Properties of Nonlinear Transformations’, published in J. Stat. Phys., 21 (1979), 669, and then explained in simpler terms for a wider audience in the magazine Los Alamos Science 1, 4 (1980).

11 Closer examination of the details of the fall of sand has revealed that avalanches of asymmetrically shaped grains, like rice, produce the critical scale-independent behaviour even more accurately, because the rice grains always tumble rather than slide.

12 P. Bak, How Nature Works (New York, Copernicus, 1996).