Unweaving the Rainbow: Science, Delusion and the Appetite for Wonder - Richard Dawkins (2000)


The brain is a three-pound mass you can hold in your hand that can conceive of a universe a hundred billion light-years across.


It is a commonplace among historians of science that the biologists of any age, struggling to understand the workings of living bodies, make comparison with the advanced technology of their time. From clocks in the seventeenth century to dancing statues in the eighteenth, from Victorian heat engines to today's heat-seeking, electronically guided missiles, the engineering novelties of every age have refreshed the biological imagination. If, of all these innovations, the digital computer promises to overshadow its predecessors, the reason is simple. The computer is not just one machine. It can be swiftly reprogrammed to become any machine you like: calculator, word processor, card index, chess master, musical instrument, guess-your-weight machine, even, I regret to say, astrological soothsayer. It can simulate the weather, lemming population cycles, an ants' nest, satellite docking, or the city of Vancouver.

The brain of any animal has been described as its on-board computer. It does not work in the same way as an electronic computer. It is made from very different components. These are individually much slower, but they work in huge parallel networks so that, by some means still only partly understood, their numbers compensate for their slower speed, and brains can, in certain respects, outperform digital computers. In any case, the differences of detailed working do not disempower the metaphor. The brain is the body's on-board computer, not because of how it works but because of what it does in the life of the animal. The resemblance of role extends to many parts of the animal's economy but, perhaps most spectacularly of all, the brain simulates the world with the equivalent of virtual reality software.

It might seem a good idea, in a general way, for any animal to grow a large brain. Isn't greater computing power always likely to be an advantage? Maybe, but it has costs, too. Weight for weight, brain tissue consumes more energy than other tissues. And our big brains as babies make it quite difficult for us to be born. Our presumption that braininess must be a good thing partly grows out of vanity in our species' own hypertrophy of the brain. But it remains an interesting question why human brains have grown so especially big.

One authority has said that the evolution of the human brain over the last million years or so is 'perhaps the fastest advance recorded for any complex organ in the entire history of life'. This may be an exaggeration, but the evolution of the human brain is undeniably fast. Compared with the skulls of other apes, the modern human skull, at least the bulbous part that houses the brain, has blown up like a balloon. When we ask why this happened, it is not satisfactory to produce general reasons why having a large brain might be useful. Presumably such general benefits would apply to many kinds of animal, especially those that navigate rapidly through the complicated three-dimensional world of the forest canopy, as most primates do. A satisfying explanation will be one that tells us why one particular lineage of apes—actually, one that had left the trees—suddenly took off, leaving the rest of the primates standing.

It was once fashionable to lament—or, according to taste, gloat over—the paucity of fossils linking Homo sapiens to our ape ancestors. This has changed. We now have a rather good fossil series and, as we go backwards in time, we can trace a gradual shrinkage in braincase through various species of Homo to our predecessor genus Australopithecus, whose braincase was about the same size as a modern chimpanzee's. The main difference between Lucy or Mrs Ples (famous Australopithecines) and a chimpanzee lay not in the brain at all, but in the Australopithecine habit of walking upright on two legs; chimpanzees do so only occasionally. The blowing up of the brain balloon spanned three million years, from Australopithecus through Homo habilis, then Homo erectus, through archaic Homo sapiens to modern Homo sapiens.

Something a bit similar seems to have happened in the growth of the computer. But, if the human brain has blown up like a balloon, the computer's progress has been more like an atom bomb. Moore's law states that the capacity of computers of a given physical size doubles every 1.5 years. (This is a modern version of the law. When Moore originally stated it more than three decades ago he was referring to transistor counts which, on his measurements, doubled every two years. Computer performance has improved even faster because transistors became faster as well as smaller and cheaper.) The late Christopher Evans, a computer-literate psychologist, put the point dramatically:

Today's car differs from those of the immediate post-war years on a number of counts. It is cheaper, allowing for the ravages of inflation, and it is more economical and efficient ... But suppose for a moment that the automobile industry had developed at the same rate as computers and over the same period: how much cheaper and more efficient would the current models be? If you have not already heard the analogy the answer is shattering. Today you would be able to buy a Rolls-Royce for £1.35, it would do three million miles to the gallon, and it would deliver enough power to drive the Queen Elizabeth II. And if you were interested in miniaturization, you could place half a dozen of them on a pinhead.

The Mighty Micro (1979)

Of course, things on the timescale of biological evolution inevitably happen far more slowly. One reason is that every improvement has to come about through individuals dying and rival individuals reproducing. So comparisons of absolute speed cannot be made. If we compare the brains of Australopithecus, Homo habilis, Homo erectus and Homo sapiens, we get a rough equivalent of Moore's law, slowed down by six orders of magnitude. From Lucy to Homo sapiens, brain size has approximately doubled every 1.5 million years. Unlike Moore's law for computers, there is no particular reason to think that the human brain will go on swelling. In order for this to happen, large-brained individuals have to have more children than small-brained individuals. It isn't obvious that this is now happening. It must have happened during our ancestral past, otherwise our brains would not have grown as they did. It also must have been true, incidentally, that braininess in our ancestors was under genetic control. If it had not been, natural selection would have had nothing to work on, and the evolutionary growth of the brain would not have occurred. For some reason, many people take grave political offence at the suggestion that some individuals are genetically cleverer than others. But this must have been the case when our brains were evolving, and there is no reason to expect that facts will suddenly change to accommodate political sensitivities.
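The "slowed down by six orders of magnitude" comparison is simple arithmetic, sketched here in Python using the round figures from the text (doubling times of 1.5 years for computers and 1.5 million years for hominid brain size; the factor-of-a-million ratio is the point, not the precise values):

```python
# Doubling times, taken as round figures from the text
computer_doubling_years = 1.5      # Moore's law, modern form
brain_doubling_years = 1.5e6       # Lucy to Homo sapiens

# How much slower brain evolution runs than computer evolution
slowdown = brain_doubling_years / computer_doubling_years
print(f"slowdown factor: {slowdown:.0e}")   # six orders of magnitude

# Growth over the ~3 million years of hominid brain expansion
doublings = 3e6 / brain_doubling_years
print(f"doublings in 3 Myr: {doublings:.0f}")
print(f"size multiple: {2 ** doublings:.0f}")  # two doublings = fourfold
```

Two doublings over three million years gives roughly a fourfold size increase, which is the right order for the expansion from an Australopithecine braincase to a modern one.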

Lots of influences have contributed to computer development which are not going to help us to understand brains. A major step was the change from the valve (vacuum tube) to the much smaller transistor, and then the spectacular and continuing miniaturization of the transistor in integrated circuits. These advances are all irrelevant to brains, because—the point deserves repetition—brains don't work electronically anyway. But there is another source of computer advancement, and this might be relevant to brains. I'll call it self-feeding co-evolution.

We have already met co-evolution. It means the evolving together of different organisms (as in the arms races between predators and prey), or between different parts of the same organism (the special case called co-adaptation). As another example, there are some small flies whose appearance mimics that of a jumping spider, including large dummy eyes looking straight forward like paired headlights—very unlike the compound eyes with which the flies themselves see. Real spiders are potential predators of flies of this size, but they are put off by the flies' similarity to another spider. The flies enhance the mimicry by waving their arms in ways that resemble the histrionic semaphore signals that jumping spiders use when courting their own opposite sex. In the fly, genes controlling the anatomical resemblance to spiders must have evolved together with separate genes controlling the semaphoring behaviour. This evolving together is co-adaptation.

Self-feeding is the name I am giving to any process in which 'the more you have, the more you get'. A bomb is a good example. The atomic bomb is said to depend upon a chain reaction, but the metaphor of a chain is too stately to convey what happens. When the unstable nucleus of uranium 235 breaks up, energy is released. Neutrons shooting out from the break-up of one nucleus may hit another and induce it to break up as well, but that is usually the end of the story. Most of the neutrons miss other nuclei and shoot off harmlessly into empty space, for uranium, though one of the densest of metals, is 'really', like all matter, mostly empty space. (The virtual model of metal in our brains is constructed with the persuasive illusion of dense solidity because that is the most useful internal representation of a solid for our survival purposes.) On their own scale, the atomic nuclei in a metal are far more spaced out than gnats in a swarm, and a particle expelled by one decaying atom is quite likely to have a clear run out of the swarm. If, however, you pack in a quantity (the famous 'critical mass') of uranium 235 which is just sufficient to see to it that a typical neutron expelled from any one nucleus is on average likely to hit one other nucleus before leaving the mass of metal altogether, a so-called chain reaction gets going. On average, each nucleus that splits causes another to split, there is an epidemic of atom-splitting, with an exceedingly rapid release of heat and other destructive energy, and the results are only too well known. All explosions have this same epidemic quality and, on a slower time-scale, epidemics of disease sometimes resemble explosions. They require a critical mass of susceptible victims in order to get started and, once they do get started, the more you have the more you get. This is why it is so important to vaccinate a critical proportion of the population. If fewer than the 'critical mass' remain unvaccinated, epidemics cannot take off. 
(This is also why it is possible for selfish free-riders to avoid being vaccinated and still benefit from the fact that most other people have been.)
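The "more you have, the more you get" threshold can be seen in a toy branching-process simulation (the numbers here are my own illustrative choices, not from the text): each case causes, on average, r new cases; below r = 1 outbreaks fizzle out, above it most of them explode.

```python
import random

def outbreak_size(r, max_size=10_000, seed=None):
    """Simulate a simple branching process: each active case exposes
    4 contacts, infecting each with probability r/4, so the mean
    number of new cases per case is r."""
    rng = random.Random(seed)
    active, total = 1, 1
    while active and total < max_size:
        new = sum(1 for _ in range(active * 4) if rng.random() < r / 4)
        total += new
        active = new
    return total

# Subcritical (r = 0.8): every outbreak dies out quickly
small = [outbreak_size(0.8, seed=i) for i in range(200)]
# Supercritical (r = 2.0): most outbreaks hit the size cap
big = [outbreak_size(2.0, seed=i) for i in range(200)]

print("mean subcritical outbreak size:", sum(small) / len(small))
print("fraction of supercritical runs that explode:",
      sum(s >= 10_000 for s in big) / len(big))
```

Vaccination works on exactly this dial: immunizing a fraction of contacts lowers the effective r, and once it drops below 1 the epidemic cannot take off, which is why the free-riders mentioned above still benefit.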

In The Blind Watchmaker I noted a 'critical mass for explosion' principle at work in human popular culture. Many people choose to buy records, books or clothes for no better reason than that lots of other people are buying them. When a bestseller list is published, this could be seen as an objective report of purchasing behaviour. But it is more than that because the published list feeds back on people's buying behaviour and influences future sales figures. Bestseller lists are therefore, at least potentially, victims of self-feeding spirals. That's why publishers spend lots of money early in a book's career, in a strenuous attempt to nudge it over the critical threshold of the bestseller list. The hope is that then it will 'take off'. The more you have, the more you get, with the additional feature of sudden take-off, which we need for the purpose of our analogy. A dramatic example of a self-feeding spiral going in the opposite direction is the Wall Street Crash and other cases where panic selling on the stock market feeds on itself in a downward tailspin.

Evolutionary co-adaptation does not necessarily have the additional explosive property of being self-feeding. There is no reason to suppose that, in the evolution of our spider-mimicking fly, the co-adaptation of spider shape and spider behaviour was explosive. In order to be so, it is necessary that the initial resemblance, say a slight anatomical similarity to a spider, set up an increased pressure to mimic the spider's behaviour. This in turn fed an even stronger pressure to mimic the spider's shape, and so on. But, as I say, there is no reason to think it happened like this: no reason to suppose that the pressure was self-feeding and therefore increasing as it shuttled back and forth. As I explained in The Blind Watchmaker, it is possible that the evolution of bird of paradise tails, peacock fans and other extravagant ornaments by sexual selection is genuinely self-feeding and explosive. Here, the principle of 'the more you have, the more you get' may really apply.

In the case of the evolution of the human brain, I suspect that we are looking for something explosive, self-feeding, like the chain reaction of the atomic bomb or the evolution of a bird of paradise tail, rather than like the spider-mimicking fly. The appeal of this idea is its power to explain why, among a set of African ape species with chimpanzee-sized brains, one suddenly raced ahead of the others for no very obvious reason. It is as though a random event nudged the hominid brain over a threshold, something equivalent to a 'critical mass', and then the process took off explosively, because it was self-feeding.

What might this self-feeding process have consisted of? The conjecture I offered in my Royal Institution Christmas Lectures was 'software/hardware co-evolution'. As its name suggests, it can be explained by a computer analogy. Unfortunately for the analogy, Moore's law doesn't seem to be explained by any single self-feeding process. Integrated circuit improvement over the years seems to have been brought about by a messy collection of changes, which makes it puzzling why there is apparently steady exponential improvement. Nevertheless, there surely is some software/hardware co-evolution driving the history of computer advances. In particular, there is something corresponding to bursting through a threshold after a pent-up 'need' has been felt.

In the early days of personal computers they offered only primitive word processing software; mine didn't even 'wrap around' at the end of lines. I was then addicted to machine code programming and (I'm slightly ashamed to admit) went to the lengths of writing my own word processing software, called 'Scrivener', which I used to write The Blind Watchmaker—which would otherwise have been finished sooner! During the development of Scrivener, I became increasingly frustrated by the idea of using the keyboard to move the cursor around the screen. I just wanted to point. I toyed with using a joystick, as supplied for computer games, but couldn't work out how to do it. I overwhelmingly felt that the software I wanted to write was held up for want of a critical hardware breakthrough. Later I discovered that the device I desperately needed, but wasn't clever enough to imagine, had in fact been invented much earlier. That device was, of course, the mouse.

The mouse was a hardware advance, conceived in the 1960s by Douglas Engelbart who foresaw that it would make possible a new kind of software. This software innovation we now know, in its developed form, as the Graphical User Interface, or GUI, developed in the 1970s by the brilliantly creative team at Xerox PARC, that Athens of the modern world. It was cultivated into commercial success by Apple in 1983, then copied by other companies under names like VisiOn, GEM and—the most commercially successful today—Windows. The point of the story is that an explosion of ingenious software was, in a sense, pent up, waiting to burst on the world, but it had to wait for a crucial piece of hardware, the mouse. Subsequently, the spread of GUI software placed new demands on hardware, which had to become faster and more capacious to handle the needs of graphics. This in turn allowed a rush of more sophisticated new software, especially software capable of exploiting high-speed graphics. The software/hardware spiral continued and its latest production is the worldwide web. Who knows what may be spawned by future turns of the spiral?

Then if you look forward, it turns out the [computer] power is going to be used for a variety of things. Incremental enhancements and ease of use things, and then occasionally you go over some threshold and something new is possible. That was true with the graphical user interface. Every program got graphical and every output got graphical, that cost us vast amounts of CPU power and it was worth it ... In fact, I have my own law of software, Nathan's Law, which is that software grows faster than Moore's Law. And that is why there is a Moore's Law.

NATHAN MYHRVOLD, Chief Technology Officer,
Microsoft Corporation (1998)

Returning to the evolution of the human brain, what are we looking for to complete the analogy? A minor improvement in hardware, perhaps a slight increase in brain size, which would have gone unnoticed had it not enabled a new software technique which, in turn, unleashed a blossoming spiral of co-evolution? The new software changed the environment in which brain hardware was subject to natural selection. This gave rise to strong Darwinian pressure to improve and enlarge the hardware, to take advantage of the new software, and a self-feeding spiral was under way, with explosive results.

In the case of the human brain, what might the blossoming advance in software have been? What was the equivalent of the GUI? I'll give the clearest example I can come up with of the kind of thing it might have been, without for a moment committing myself to the view that this was the actual one that inaugurated the spiral. My clear example is language. Nobody knows how it began. There doesn't seem to be anything like syntax in non-human animals and it is hard to imagine evolutionary forerunners of it. Equally obscure is the origin of semantics; of words and their meanings. Sounds that mean things like 'feed me' or 'go away' are commonplace in the animal kingdom, but we humans do something quite different. Like other species, we have a limited repertoire of basic sounds, the phonemes, but we are unique in recombining those sounds, stringing them together in an indefinitely large number of combinations to mean things that are fixed only by arbitrary convention. Human language is open-ended in its semantics: phonemes can be recombined to concoct an indefinitely expanding dictionary of words. And it is open-ended in its syntax, too: words can be recombined in an indefinitely large number of sentences by recursive embedment: 'The man is coming. The man who caught the leopard is coming. The man who caught the leopard which killed the goats is coming. The man who caught the leopard which killed the goats who give us our milk is coming.' Notice how the sentence grows in the middle while the ends—its fundamentals—stay the same. Each of the embedded subordinate clauses is capable of growing in the same way, and there is no limit to the permissible growth. This kind of potentially infinite enlargement, which is suddenly made possible by a single syntactic innovation, seems to be unique to human language.
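The open-endedness of recursive embedding is easy to demonstrate mechanically. A minimal sketch, reusing the clauses of the example above (the tiny grammar is mine, not part of the text): one rewrite rule, applied to its own output, grows the sentence in the middle while the ends stay fixed.

```python
# Each relative clause attaches to the object of the previous one,
# so the rule can in principle be applied without limit.
clauses = [
    ("who caught", "the leopard"),
    ("which killed", "the goats"),
    ("who give us", "our milk"),
]

def embed(depth):
    """Build 'The man ... is coming.' with `depth` nested clauses."""
    parts = ["The man"]
    for verb, obj in clauses[:depth]:
        parts.append(f"{verb} {obj}")
    return " ".join(parts) + " is coming."

for d in range(4):
    print(embed(d))
```

However deep the nesting, the sentence still begins "The man" and ends "is coming": the fundamentals are untouched while the interior expands indefinitely.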

Nobody knows whether our ancestors' language went through a prototype stage with a small vocabulary and a simple grammar before gradually evolving to the present point where all the thousands of languages in the world are very complex (some say they are all exactly equally complex, but that sounds too ideologically perfect to be wholly plausible). I am biased towards thinking that it was gradual, but it is not quite obvious that it had to be. Some people think it began suddenly, more or less literally invented by a single genius in a particular place at a particular time. Whether it was gradual or sudden, a similar story of software/hardware co-evolution could be told. A social world in which there is language is a completely different kind of social world from one in which there is not. The selection pressures on genes will never be the same again. The genes find themselves in a world that is more dramatically different than if an ice age had suddenly struck or some terrible new predator had suddenly arrived in the land. In the new social world where language first burst on the scene, there must have been dramatic natural selection in favour of individuals genetically equipped to exploit the new ways. It is reminiscent of the conclusion of the previous chapter, in which I spoke of genes being selected to survive in the virtual worlds constructed socially by brains. It is almost impossible to overestimate the advantages that could have been enjoyed by individuals able to excel in taking advantage of the new world of language. It is not just that brains became bigger to cope with managing language itself. It is also that the whole world in which our ancestors lived was transformed as a consequence of the invention of speaking.

But I used the example of language just to make the idea of software/hardware co-evolution plausible. It may not have been language that pushed the human brain over its critical threshold for inflation, although I have a hunch that it played an important role. It is controversial whether the sound-modulating hardware in the throat was capable of language at the time when the brain began to swell up. There is some fossil evidence to suggest that our likely ancestors Homo habilis and Homo erectus, because of their relatively undescended larynx, probably were not capable of articulating the full range of vowel sounds that modern throats put at our disposal. Some people take this as indicating that language itself arrived late in our evolution. I think this a rather unimaginative conclusion. If there was software/hardware co-evolution, the brain is not the only hardware that we should expect to have improved in the spiral. The vocal apparatus, too, would have evolved in parallel, and the evolutionary descent of the larynx is one of the hardware changes that language itself would drive. Poor vowels are not the same thing as no vowels at all. Even if Homo erectus speech sounded monotonous by our exacting standards, it could still have served as the arena for the evolution of syntax, semantics and the self-feeding descent of the larynx itself. Homo erectus, incidentally, conceivably made boats as well as fire; we should not underestimate them.

Setting language on one side for a moment, what other software innovations might have nudged our ancestors over the critical threshold and initiated the co-evolutionary escalation? Let me suggest two that could have arisen naturally from our ancestors' evolving fondness for meat and hunting. Agriculture is a recent invention. Most of our hominid ancestors have been hunter-gatherers. Those who still subsist from this ancient way of life are often formidable trackers. They can read patterns of footprints, disturbed vegetation, dung deposits and traces of hair to build up a detailed picture of events over a wide area. A pattern of footprints is a graph, a map, a symbolic representation of a series of incidents in animal behaviour. Remember our hypothetical zoologist, whose ability to reconstruct past environments by reading an animal's body and its DNA justified the statement that an animal is a model of its environment? Mightn't we say something similar of an expert !Kung San tracker, who has only to read footprints in the Kalahari dirt to reconstruct a detailed pattern, description, or model of animal behaviour in the recent past? Properly read, such spoors amount to maps and pictures, and it seems to me plausible that the ability to read such maps and pictures might have arisen in our ancestors before the origin of speech in words.

Suppose that a band of Homo habilis hunters needed to plan a cooperative hunt. In a remarkable and chilling 1992 television film, Too Close for Comfort, David Attenborough shows modern chimpanzees executing what seems to be a carefully planned and successful drive and ambush of a colobus monkey, which they then tear to pieces and eat. There is no reason to think that the chimpanzees communicated any detailed plan to each other before beginning the hunt, but every reason to think that habilis might have benefited from some such communication if it could have been achieved. How might such communication have developed?

Suppose that one of the hunters, whom we can think of as a leader, has a plan to ambush an eland and he wishes to convey the plan to his colleagues. No doubt he could mime the behaviour of the eland, perhaps donning an eland skin for the purpose, as hunting peoples do today for ritual or entertainment purposes. And he could mime the actions he wants his hunters to perform: studied exaggeration of stealth in the stalk; noisy conspicuousness in the drive; sudden startle in the final ambush. But there is more that he could do, and in this he would resemble any modern army officer. He could point out objectives and planned manoeuvres on a map of the area.

Our hunters, we may suppose, are all expert trackers, with a feel for the layout, in two-dimensional space, of footsteps and other traces: a spatial expertise which may have been beyond anything we (unless we happen to be !Kung San hunters ourselves) can easily imagine. They are all fully accustomed to the idea of following a trail, and imagining it laid out on the ground as a life-size map and a temporal graph of the movements of an animal. What could be more natural than for the leader to seize a stick and draw in the dust a scale model of just such a temporal picture: a map of movement over a surface? The leader and his hunters are fully used to the idea that a series of hoofprints indicates the flow of wildebeests along the muddy bank of a river. Why should he not draw a line indicating the flow of the river itself on a scale map in the dust? Accustomed as they all are to following human footprints from their own home cave to the river, why would the leader not point on his map to the position of the cave in relation to the river? Moving around the map with his stick, the hunter could indicate the direction of approach by the eland, the angle of his proposed drive, the location of the ambush: indicate them literally by drawing in the sand.

Could something like this have been how the notion of a scaled-down representation in two dimensions was born—as a natural generalization of the important skill of reading animal footprints? Maybe the idea of drawing the likeness of animals themselves arose from the same source. The imprint in mud of a wildebeest hoof is obviously a negative image of the real thing. The fresh paw mark of a lion must have aroused fear. Did it also engender in a blinding flash the realization that one could draw a representation of a part of an animal—and hence, by extrapolation, of the whole animal? Perhaps the blinding flash that led to the first drawing of a whole animal came from the imprint of a whole corpse, dragged out of mud which had baked hard around it. Or a less distinct image in the grass could easily have been fleshed out by the mind's own virtual reality software.

Because the mountain grass
Cannot but keep the form
Where the mountain hare has lain.

W. B. YEATS, 'Memory' (1919)

Representational art of all kinds (and probably nonrepresentational art, too) depends upon noticing that something can be made to stand for something else and that this may assist thought or communication. The analogies and metaphors that underlie what I have been calling poetic science—good and bad—are other manifestations of the same human faculty of symbol-making. Let's recognize a continuum, which could represent an evolutionary series. At one end of the continuum we allow things to stand for other things that they resemble—as in cave paintings of buffaloes. At the other end are symbols which do not obviously resemble the things that they stand for—as in the word 'buffalo', which means what it does only because of an arbitrary convention which all English speakers respect. The intermediate stages along the continuum may, as I said, represent an evolutionary progression. We may never know how it began. But perhaps my story of the footprints represents the kind of insight that might have been involved when people first began to think by analogy, and hence realize the possibility of semantic representation. Whether or not it gave birth to semantics, my tracker map joins language as my second suggestion for a software innovation that may have triggered the co-evolutionary spiral that drove the expansion of our brain. Could it have been the drawing of maps that boosted our ancestors beyond the critical threshold which the other apes just failed to cross?

My third possible software innovation is inspired by a suggestion made by William Calvin. He proposed that ballistic movements, such as throwing projectiles at a distant target, make special computational demands on nervous tissue. His idea was that the conquering of this particular problem, perhaps originally for purposes of hunting, equipped the brain to do lots of other important things as a by-product.

On a shingle beach, Calvin was amusing himself by tossing stones at a log and the action inadvertently launched (the metaphor is no accident) a productive train of thought. What kind of computation must the brain be doing when we throw something at a target, as our ancestors must increasingly have done while they evolved the hunting habit? One crucial component of an accurate throw is timing. Whichever arm action you favour, whether underarm lobbing, overarm bowling or throwing, or wristy flicking, the exact moment at which you release your projectile makes all the difference. Think about the overarm action of a bowler in cricket (bowling differs from baseball pitching in that the arm must remain straight, and this makes it easier to think about). If you release the ball too soon, it flies over the batsman's head. If you let go too late, it digs into the ground. How does the nervous system achieve the feat of releasing the projectile at exactly the right moment, tailored to the speed of arm movement? Unlike a lunge with a sword, in which you might steer your aim all the way to the target, bowling or throwing is ballistic. The projectile leaves your hand and is then beyond your control. There are other skilled movements, like hammering a nail, which are effectively ballistic, even if the tool or weapon doesn't leave your hand. All the computation has to be done in advance: 'dead reckoning'.

One way to solve the release timing problem when throwing a stone or a spear would be to compute the necessary contractions of individual muscles on the fly, while the arm was in motion. Modern digital computers would be capable of this feat, but brains are too slow. Calvin reasoned instead that nervous systems, being slow, would be better off with a buffer store of rote commands to the muscles. The whole sequence of bowling a cricket ball, or throwing a spear, is programmed in the brain as a pre-recorded list of individual muscle twitch commands, packed away in the order they are to be released.

Obviously, more distant targets are harder to hit. Calvin dusted off his physics textbooks and worked out how to calculate the decreasing 'launch window' as you try to maintain accuracy for longer and longer throws. Launch window is space jargon. Rocket scientists (that proverbially gifted profession) calculate the window of opportunity during which they must launch a spacecraft if they are to hit, say, the moon. Fire too soon, or too late, and you miss. Calvin worked out that for a rabbit-sized target four metres away, his launch window was about 11 milliseconds wide. If he released his stone too soon, it overshot the rabbit. If he held on too long, his stone fell short. The difference between too short and too long was a mere 11 milliseconds, about a hundredth of a second. Being an expert in the timings of nerve cells, this bothered Calvin, because he knew that the normal margin of error of a nerve cell is greater than the launch window. Yet he also knew that good human throwers are capable of hitting such a target at this distance, even while running. I myself have never forgotten the spectacle of my Oxford contemporary the Nawab of Pataudi (one of India's greatest cricketers, even after losing one eye) fielding for the university and throwing the ball with devastating speed and accuracy at the wicket, again and again, even while running at a speed that visibly intimidated the batsmen while raising the game of his team.

Calvin had a mystery to solve. How do we throw so well? The answer, he decided, must lie in the law of large numbers. No one timing circuit can achieve the accuracy of a !Kung hunter throwing a spear, or a cricketer throwing a ball. There must be lots of timing circuits working in parallel, their effects being averaged to reach the final decision of when to release the projectile. And now comes the point. Having developed a population of timing and sequencing circuits for one purpose, why not turn them to other ends? Language itself relies upon precise sequencing. So does music, dancing, even thinking out plans for the future. Could throwing have been the forerunner of foresight itself? When we throw our mind forward in imagination, are we doing something almost literal as well as metaphorical? When the first word was uttered, somewhere in Africa, did the speaker imagine himself throwing a missile from his mouth to his intended hearer?
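Calvin's law-of-large-numbers remedy is easy to check numerically: averaging N independent noisy timing circuits cuts the jitter of the pooled release signal by a factor of roughly the square root of N. A minimal simulation follows; the 5 millisecond jitter figure is an illustrative assumption, not a measured neural value.

```python
import random
import statistics

random.seed(1)

def release_jitter(n_circuits, jitter_ms=5.0, trials=10_000):
    """Standard deviation of the averaged release signal from n noisy timers."""
    samples = [
        statistics.fmean(random.gauss(0.0, jitter_ms) for _ in range(n_circuits))
        for _ in range(trials)
    ]
    return statistics.stdev(samples)

single = release_jitter(1)
hundred = release_jitter(100)
print(f"1 circuit:    {single:.2f} ms jitter")
print(f"100 circuits: {hundred:.2f} ms jitter")  # about single/10, by sqrt(N)
```

A hundred sloppy circuits voting together behave like one circuit ten times more precise, which is how a nervous system whose individual cells are too erratic for an 11 millisecond window could nevertheless meet it.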

My fourth candidate for software that partakes in software/hardware co-evolution is the 'meme', the unit of cultural inheritance. We've already hinted at it when discussing the epidemic-style 'take-off' of bestsellers. I here draw upon books of my colleagues Daniel Dennett and Susan Blackmore, who have been among several constructive memetic theorists since the word was first coined in 1976. Genes are replicated, copied from parent to offspring down the generations. A meme is, by analogy, anything that replicates itself from brain to brain, via any available means of copying. It is a matter of dispute whether the resemblance between gene and meme is good scientific poetry or bad. On balance, I still think it is good, although if you look the word up on the worldwide web you'll find plenty of examples of enthusiasts getting carried away and going too far. There even seems to be some kind of religion of the meme starting up—I find it hard to decide whether it is a joke or not.

My wife and I both occasionally suffer from sleeplessness when our minds are taken over by a tune which repeats itself over and over in the head, relentlessly and without mercy, all through the night. Certain tunes are especially bad culprits, for example Tom Lehrer's 'Masochism Tango'. This is not a melody of any great merit (unlike the words, which are brilliantly rhymed), but it is almost impossible to shake off once it gains a hold. We now have a pact that, if we have one of the danger tunes on the brain during the day (Lennon and McCartney are other prime culprits), we shall under no circumstances sing or whistle it near bedtime, for fear of infecting the other. This notion that a tune in one brain can 'infect' another brain is pure meme talk.

The same thing can happen when one is awake. Dennett tells the following anecdote in Darwin's Dangerous Idea (1995):

The other day, I was embarrassed—dismayed—to catch myself walking along humming a melody to myself. It was not a theme of Haydn or Brahms or Charlie Parker or even Bob Dylan: I was energetically humming 'It takes two to tango'—a perfectly dismal and entirely unredeemed bit of chewing gum for the ears that was unaccountably popular sometime in the 1950s. I am sure I have never in my life chosen this melody, esteemed this melody, or in any way judged it to be better than silence, but there it was, a horrible musical virus, at least as robust in the meme pool as any melody I actually esteem. And now, to make matters worse, I have resurrected the virus in many of you, who will no doubt curse me in days to come when you find yourself humming, for the first time in over thirty years, that boring tune.

For me, the maddening jingle is just as often not a tune but an endlessly repeated phrase, not a phrase with any obvious significance, just a fragment of language that I, or somebody else, has perhaps said at some point during the day. It isn't clear why a particular phrase or tune is chosen but, once there, it is extremely hard to shift. It goes on endlessly rehearsing itself. In 1876 Mark Twain wrote a short story, 'A Literary Nightmare', about his mind being taken over by a ridiculous fragment of versified instruction to a bus conductor with a ticket machine, of which the refrain was 'Punch in the presence of the passenjare'.

Punch in the presence of the passenjare
Punch in the presence of the passenjare

It has a mantra-like rhythm and I almost dared not quote it for fear of infecting you. I had it going round in my own head for a whole day after reading Mark Twain's story. Twain's narrator finally liberated himself by passing it on to the vicar, who in turn was driven demented. This 'Gadarene swine' aspect of the story—the idea that when you pass a meme to somebody else you thereby lose it—is the only part that does not ring true. Just because you infect somebody else with a meme, does not mean you cleanse your brain of it.

Memes can be good ideas, good tunes, good poems, as well as drivelling mantras. Anything that spreads by imitation, as genes spread by bodily reproduction or by viral infection, is a meme. The chief interest of them is that there is at least the theoretical possibility of a true Darwinian selection of memes, to parallel the familiar selection of genes. Those memes that spread do so because they are good at spreading. Dennett's relentless jingle, like mine and my wife's, was a tango. Is there something insidious about the tango rhythm? Well, we need further evidence. But the general idea that some memes may be more infective than others because of their inherent properties is reasonable enough.

As with genes, we can expect the world to become filled with memes that are good at the art of getting themselves copied from brain to brain. We can notice that some memes, like Mark Twain's jingle, have this property as a matter of fact, though without being able to analyse what gives it to them. It is enough that memes vary in their infectivity for Darwinian selection to get going. Sometimes we can work out what it is that a meme has that helps it to spread. Dennett notes that the conspiracy theory meme has a built-in response to the objection that there is no good evidence for the conspiracy: 'Of course not—that's how powerful the conspiracy is!'
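That differential infectivity alone is enough to set Darwinian selection going can be shown in a toy simulation. Everything here is assumed for illustration: three made-up memes, arbitrary infectivity values, and a simple copy-on-hearing rule.

```python
import random
from collections import Counter

random.seed(42)

# Three hypothetical memes, distinguished only by how readily a hearer
# copies them (their 'infectivity'). The values are arbitrary.
infectivity = {"jingle": 0.9, "theorem": 0.5, "shopping list": 0.1}

# A population of minds, each starting out with one meme at random.
minds = [random.choice(list(infectivity)) for _ in range(300)]

# Each step: one mind 'hears' another and copies its meme with a
# probability equal to that meme's infectivity. No other force acts.
for _ in range(20_000):
    listener = random.randrange(len(minds))
    speaker = random.randrange(len(minds))
    meme = minds[speaker]
    if random.random() < infectivity[meme]:
        minds[listener] = meme

counts = Counter(minds)
print(counts.most_common())
```

Run it with different seeds and the jingle comes to dominate almost every time, purely because it is copied more readily when heard. That, and nothing more, is what 'good at spreading' means.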

Genes will spread by reason of pure parasitic effectiveness, as in a virus. We may think this spreading for the sake of spreading rather futile, but nature is not interested in our judgements, of futility or of anything else. If a piece of code has what it takes, it spreads and that's that. Genes can also spread for what we think of as a more 'legitimate' reason, say, because they improve the acuity of a hawk's eyesight. They are the ones that first occur to us when we think of Darwinism. In Climbing Mount Improbable I explained that an elephant's DNA and a virus's are both 'Copy Me' programmes. The difference is that one of them has an almost fantastically large digression: 'Copy me by building an elephant first.' But both kinds of programme spread because, in their different ways, they are good at spreading. The same is true for memes. Jingling tangos survive in brains, and infect other brains, for reasons of pure parasitic effectiveness. They are near the virus end of the spectrum. Great ideas in philosophy, brilliant insights in mathematics, clever techniques for tying knots or fashioning pots, survive in the meme pool for reasons that are closer to the 'legitimate' or 'elephant' end of our Darwinian spectrum.

Memes could not spread but for the biologically valuable tendency of individuals to imitate. There are plenty of good reasons why imitation should have been favoured by conventional natural selection working on genes. Individuals that are genetically predisposed to imitate enjoy a fast track to skills that may have taken others a long time to build up. One of the finest examples is the spread of the habit of opening milk bottles among tits (European equivalent of American chickadees). Milk is delivered in bottles very early to British doorsteps and it usually sits there for a while before being taken in. A small bird is capable of pecking through the lid, but it is not an obvious thing for a bird to do. What happened was that a series of epidemics of bottletop raiding among blue tits spread outwards from discrete geographical foci in Britain. Epidemic is exactly the right word. The zoologists James Fisher and Robert Hinde were able to document the spread of the habit in the 1940s as it radiated outwards by imitation from the focal points where it started, presumably discovered by a few isolated birds: islands of inventiveness and founders of meme epidemics.

Similar stories can be told of chimpanzees. Fishing for termites by poking twigs into a mound is learned by imitation. So is the skill of cracking nuts with stones on a log or stone anvil, which occurs in certain local areas of west Africa but not others. Our hominid ancestors surely learned vital skills by imitating each other. Among surviving tribal groups, stone toolmaking, weaving, techniques for fishing, thatching, pottery, firemaking, cooking, smithwork, all these skills are learned by imitation. Lineages of masters and apprentices are the memetic equivalent of genetic ancestor/descendant lines. The zoologist Jonathan Kingdon has suggested that some of our ancestors' skills began when humans imitated other species. For example, spider webs may have inspired the invention of fishing nets and of string or twine, weaver bird nests the invention of knots or thatching.

Memes, unlike genes, don't seem to have clubbed together to build large 'vehicles'—bodies—for their joint housing and survival. Memes rely on the vehicles built by genes (unless, as has been suggested, you count the Internet as a meme vehicle). But memes manipulate the behaviour of living bodies no less effectively for that. The analogy between genetic and memetic evolution starts to get interesting when we apply our lesson of 'the selfish cooperator'. Memes, like genes, survive in the presence of certain other memes. A mind can become prepared, by the presence of certain memes, to be receptive to particular other memes. Just as a species gene pool becomes a cooperative cartel of genes, so a group of minds—a 'culture', a 'tradition'—becomes a cooperative cartel of memes, a memeplex, as it has been called. As in the case of genes, it is a mistake to see the whole cartel as a unit being selected as a single entity. The right way to see it is in terms of mutually assisting memes, each providing an environment which favours the others. Whatever may be the limitations of the meme theory, I think this one point, that a culture or a tradition, a religion or a political complexion grows up according to the model of 'the selfish cooperator' is probably at least an important part of the truth.

Dennett vividly evokes the image of the mind as a seething hotbed of memes. He even goes so far as to defend the hypothesis that 'Human consciousness is itself a huge complex of memes...' He does this, along with much else, persuasively and at length, in his book Consciousness Explained (1991). I cannot possibly summarize the intricate series of arguments in that book, and will content myself with one more characteristic quotation:

The haven all memes depend on reaching is the human mind, but a human mind itself is an artifact created when memes restructure a human brain in order to make it a better habitat for memes. The avenues for entry and departure are modified to suit local conditions, and strengthened by various artificial devices that enhance fidelity and prolixity of replication: native Chinese minds differ dramatically from native French minds, and literate minds differ from illiterate minds. What memes provide in return to the organisms in which they reside is an incalculable store of advantages—with some Trojan horses thrown in for good measure ... But if it is true that human minds are themselves to a very great degree the creations of memes, then we cannot sustain the polarity of vision we considered earlier; it cannot be 'memes versus us,' because earlier infestations of memes have already played a major role in determining who or what we are.

There is an ecology of memes, a tropical rainforest of memes, a termite mound of memes. Memes don't only leap from mind to mind by imitation, in culture. That is just the easily visible tip of the iceberg. They also thrive, multiply and compete within our minds. When we announce to the world a good idea, who knows what subconscious quasi-Darwinian selection has gone on behind the scenes inside our heads? Our minds are invaded by memes as ancient bacteria invaded our ancestors' cells and became mitochondria. Cheshire Cat-like, memes merge into our minds, even become our minds, just as eucaryotic cells are colonies of mitochondria, chloroplasts and other bacteria. This sounds like a perfect recipe for co-evolutionary spirals and the enlargement of the human brain, but specifically what drives the spiral? Where lies the self-feeding, the element of 'the more you have, the more you get'?

Susan Blackmore tackles this question, by asking another: 'Whom should you imitate?' The individuals who are best at the skill in question, certainly, but there is a more general answer to the question. Blackmore suggests that you should choose to imitate the best imitators—they are likely to have picked up the best skills. And her next question, 'With whom do you mate?' is answered in a similar way. You mate with the best imitators of the trendiest memes. So, not only are memes selected for the ability to spread themselves, genes are selected in ordinary Darwinian selection for their ability to make individuals that are good at spreading memes. I do not wish to steal Doctor Blackmore's thunder, for I have been privileged to see an advance draft of her book, The Meme Machine (1999). I will simply note that here we have software/hardware co-evolution. The genes build the hardware. The memes are the software. The co-evolution is what may have driven the inflation of the human brain.

I said that I'd return to the illusion of the 'little man in the brain'. Not to solve the problem of consciousness, which is way beyond my capacity, but to make another comparison between memes and genes. In The Extended Phenotype, I argued against taking the individual organism for granted. I didn't mean individual in the conscious sense but in the sense of a single, coherent body surrounded by a skin and dedicated to a more or less unitary purpose of surviving and reproducing. The individual organism, I argued, is not fundamental to life, but something that emerges when genes, which at the beginning of evolution were separate, warring entities, gang together in cooperative groups, as 'selfish cooperators'. The individual organism is not exactly an illusion. It is too concrete for that. But it is a secondary, derived phenomenon, cobbled together as a consequence of the actions of fundamentally separate, even warring, agents. I shan't develop the idea but just float, following Dennett and Blackmore, the idea of a comparison with memes. Perhaps the subjective 'I', the person that I feel myself to be, is the same kind of semi-illusion. The mind is a collection of fundamentally independent, even warring, agents. Marvin Minsky, the father of artificial intelligence, called his 1985 book The Society of Mind. Whether or not these agents are to be identified with memes, the point I am now making is that the subjective feeling of 'somebody in there' may be a cobbled, emergent, semi-illusion analogous to the individual body emerging in evolution from the uneasy cooperation of genes.

But that was an aside. I have been looking for software innovations that might have launched a self-feeding spiral of hardware/software co-evolution to account for the inflation of the human brain. I have so far mentioned language, map reading, throwing and memes. Another possibility is sexual selection, which I introduced as an analogy to explain the principle of explosive co-evolution, but could it actually have driven the inflation of the human brain? Did our ancestors impress their mates by a sort of mental peacock's tail? Was larger brain hardware favoured because of its ostentatious software manifestations, perhaps as the ability to remember the steps of a formidably complicated ritual dance? Perhaps.

Many people will find language itself the most persuasive, as well as the clearest candidate for a software trigger of brain expansion, and I'd like to come back to it from another point of view. Terrence Deacon, in The Symbolic Species (1997), has a meme-like approach to language:

It is not too far-fetched to think of languages a bit as we think of viruses, neglecting the difference in constructive versus destructive effects. Languages are inanimate artifacts, patterns of sounds and scribblings on clay or paper, that happen to get insinuated into the activities of human brains which replicate their parts, assemble them into systems, and pass them on. The fact that the replicated information that constitutes a language is not organized into an animate being in no way excludes it from being an integrated adaptive entity evolving with respect to human hosts.

Deacon goes on to prefer a 'symbiotic' rather than a virulently parasitic model, drawing the comparison again with mitochondria and other symbiotic bacteria in cells. Languages evolve to become good at infecting child brains. But the brains of children, those mental caterpillars, also evolve to become good at being infected by language: co-evolution yet again.

C. S. Lewis, in 'Bluspels and Flalansferes' (1939), reminds us of the philologist's aphorism that our language is full of dead metaphors. In his 1844 essay 'The Poet', the philosopher and poet Ralph Waldo Emerson said, 'Language is fossil poetry.' If not all of our words, certainly a great number of them, began as metaphors. Lewis mentions 'attend' as having once meant 'stretch'. If I attend to you, I stretch my ears towards you. I 'grasp' your meaning as you 'cover' your topic and 'drive home' your 'point'. We 'go into' a subject, 'open up' a 'line' of thought. I have deliberately chosen cases whose metaphoric ancestry is recent and therefore accessible. Philological scholars will delve deeper (see what I mean?) and show that even words whose origins are less obvious were once metaphors, perhaps in a dead (get it?) language. The word language itself comes from the Latin for tongue.

I have just bought a dictionary of contemporary slang because I was disconcerted to be told by American readers of the typescript of this book that some of my favourite English words would not be understood across the Atlantic. 'Mug', for instance, meaning fool, dupe or patsy, is not understood there. In general I have been reassured to find from the dictionary how many slang words are actually universal in the English-speaking world. But I have been more intrigued at the astonishing creativeness of our species in inventing an endless supply of new words and usages. 'Parallel parking' or 'getting your plumbing snaked' for copulation; 'idiot box' for television; 'park a custard' for vomit; 'Christmas on a stick' for a conceited person; 'nixon' for a fraudulent deal; 'jam sandwich' for a police car; these slang expressions represent the cutting edge of an astonishing richness of semantic innovation. And they perfectly illustrate C. S. Lewis's point. Is this how all our words got their start?

As with the 'footprint maps', I wonder whether the ability to see analogies, the ability to express meanings in terms of symbolic resemblances to other things, may have been the crucial software advance that propelled human brain evolution over the threshold into a co-evolutionary spiral. In English we use the word 'mammoth' as an adjective, synonymous with very large. Could our ancestors' breakthrough into semantics have come when some pre-sapient poetic genius, struggling to convey the idea of 'large' in some quite different context, hit upon the idea of imitating, or drawing, a mammoth? Could that have been the kind of software advance that nudged humanity into an explosion of software/hardware co-evolution? Perhaps not this particular example, because large size is too easily conveyed by the universal hand gesture beloved of boastful anglers. But even that is a software advance over chimpanzee communication in the wild. Or how about imitating a gazelle to mean the delicate, shy grace of a girl, in a Pliocene anticipation of Yeats's 'Two girls, both beautiful, one a gazelle'? How about sprinkling water from a gourd to mean not just rain, which is almost too obvious, but tears when trying to convey sadness? Could our remote habilis or erectus ancestors have imagined—and momentously discovered the means to express—an image like the 'sobbing rain' of John Keats? (Though, to be sure, tears themselves are an unsolved evolutionary mystery.)

However it began, and whatever its role in the evolution of language, we humans, uniquely among animalkind, have the poet's gift of metaphor: of noticing when things are like other things and using the relation as a fulcrum for our thoughts and feelings. This is an aspect of the gift of imagining. Perhaps this was the key software innovation that triggered our co-evolutionary spiral. We could think of it as a key advance in the world-simulating software that was the subject of the previous chapter. Perhaps it was the step from constrained virtual reality, where the brain simulates a model of what the sense organs are telling it, to unconstrained virtual reality, in which the brain simulates things that are not actually there at the time—imagination, daydreaming, 'what if?' calculations about hypothetical futures. And this, finally, brings us back to poetic science and the dominant theme of the whole book.

We can take the virtual reality software in our heads and emancipate it from the tyranny of simulating only utilitarian reality. We can imagine worlds that might be, as well as those that are. We can simulate possible futures as well as ancestral pasts. With the aid of external memories and symbol-manipulating artifacts—paper and pens, abacuses and computers—we are in a position to construct a working model of the universe and run it in our heads before we die.

We can get outside the universe. I mean in the sense of putting a model of the universe inside our skulls. Not a superstitious, small-minded, parochial model filled with spirits and hobgoblins, astrology and magic, glittering with fake crocks of gold where the rainbow ends. A big model, worthy of the reality that regulates, updates and tempers it; a model of stars and great distances, where Einstein's noble spacetime curve upstages the curve of Yahweh's covenantal bow and cuts it down to size; a powerful model, incorporating the past, steering us through the present, capable of running far ahead to offer detailed constructions of alternative futures and allow us to choose.

Only human beings guide their behaviour by a knowledge of what happened before they were born and a preconception of what may happen after they are dead; thus only humans find their way by a light that illuminates more than the patch of ground they stand on.

P. B. and J. S. MEDAWAR, The Life Science (1977)

The spotlight passes but, exhilaratingly, before doing so it gives us time to comprehend something of this place in which we fleetingly find ourselves and the reason that we do so. We are alone among animals in foreseeing our end. We are also alone among animals in being able to say before we die: Yes, this is why it was worth coming to life in the first place.

Now more than ever seems it rich to die,
To cease upon the midnight with no pain,
While thou art pouring forth thy soul abroad
In such an ecstasy!

JOHN KEATS, 'Ode to a Nightingale' (1820)

A Keats and a Newton, listening to each other, might hear the galaxies sing.