EVOLUTION IN PERSPECTIVE - REMEMBERING WHERE WE HAVE COME FROM - Ecological Intelligence: Rediscovering Ourselves in Nature - Ian McCallum

Ecological Intelligence: Rediscovering Ourselves in Nature - Ian McCallum (2008)




“WHERE WERE YOU WHEN I LAID THE FOUNDATIONS OF THE EARTH?” is the famous question asked by the Old Testament God of Job after he had complained to his maker about his miserable fate. Not surprisingly, the response was one of silence. How would you have answered that one? I think your silence would have been as loud as mine.

“Where were you?” I believe this to be a personal question and a profoundly evolutionary one. It is a question that demands an ecological answer. Perhaps, by reviewing our remarkable history, we might discover that we are a lot closer to those foundations than we previously imagined.


The known universe, according to recent estimates, is somewhere between 13 and 15 billion years old—15,000 million years! How did it all begin? Well, we don’t really know. General scientific consensus acknowledges a big bang as the starting point, not only of the explosive outward trek of radiation, particles, molecules, gas, and dust—all of these constellating over millions of years into the supernovas, galaxies, stars, and planets that we call the cosmos—but of the beginning of time. It is indeed a conundrum, a situation begging the question:

“What happened before the ‘big’ event?” Once again, we don’t really know. Instead, our imaginations are now being stirred by a host of new hotly debated theories about alternative or parallel universes to ours, including notions of multiple conditions of existence outside our usual, three-dimensional one, some of them having little to do with the timing of the big bang. As they say, watch this space.

While no one knows what happened before the big bang, we think we know what happened directly afterward. In that first trillionth of a second, gravity and the four dimensions of length, breadth, height, and time were born. For the time being, let’s stay with the universe we know, or at least the one that we pretend to know. What does it consist of?

The visible matter—planets, stars, galaxies, nebulae, and so forth, everything that the eye can see, telescopes and all—is believed to be a tiny 1 percent of what we know (it could be even less). Ninety-nine percent of the universe, then, is invisible! About 3 percent of what is invisible is made up of baryonic matter (protons, neutrons, and electrons), intergalactic gas, brown dwarfs, and black holes (gravitational forces so powerful that neither light, protons, neutrons, nor atoms can escape). A further 23 percent is made up of another kind of dark matter in the form of exotic, unknown particles. We don’t know what they are, but we know that they are there. If this sounds absurd, then what about the remaining 70 percent of our outwardly accelerating universe? Simply referred to as dark energy, it is believed to be the cosmic force responsible for the acceleration of the galaxies, some of them at speeds faster than the speed of light. Akin to Einstein’s notion of antigravity—what he once called his “biggest blunder”—this force is yet to be positively identified, but we know it is there.

In an interesting parallel, it is estimated that 70 percent of the world’s living species, from bacteria to worms, ants, flowering plants, mammals, and even primates, have yet to be identified. Forget about space, we hardly know what’s on our own doorstep. And if you don’t mind a poetic parallel, we might as well be saying the same thing for how little we know about the human mind, itself a phenomenon in process—exotic, precious, and with its own blind spots and black holes, its own dark energy, and its own peculiar resistance to gravity.

Looking around us, we appear to be alone. We are uncertain. We think we know where we are but the answer as to the why is not readily forthcoming. What we are, as we shall see, is easy. We are human animals—curious, witty, aggressive, reflective, wonderful, and pathetic and, as Anthony Fairall of the Department of Astronomy at the University of Cape Town once quipped, “this is the right time for us to be here.”


So, this is our time and this is where we are: Earth. We are biologically in it and of it, children of a 4.5-billion-year-old planet and a 5.5-billion-year-old star called the sun. Rotating around our parental star in a 365-day solar year, we are part of a tiny solar system in an equally tiny corner of a trillion-star cluster known as the Milky Way Galaxy. At the center of our galaxy is a black hole around which our solar system and the rest of the Milky Way spins. This dark and massive force, when viewed from Earth, is somewhere beyond the constellation of Sagittarius, about 40,000 light-years away. That’s how long, in years, it would take us to get there if we were traveling at 186,411 miles (300,000 kilometers) per second—the speed of light. It is indeed, in human dimensions, a long, long way from home.

While these figures might be comprehensible to some, they are meaningless, really, unless we can bring them down to Earth, so to speak. By referring to cosmic years, eminent British astronomer Sir Patrick Moore has given us a way of condensing our notion of time to a more user-friendly scale.

A cosmic year is the equivalent of 225 million solar years—the time it takes for our solar system to rotate once around the center of our galaxy. This tells us that if the Earth is 4.5 billion solar years old, then in cosmic years, dividing 4.5 billion by 225 million, the Earth is twenty cosmic years old. The Earth, then, has circled the black hole center of our galaxy roughly twenty times in its history. To put a human life span onto this time scale, seventy years translates into roughly nine cosmic seconds. And so, using the model of cosmic time, let’s review our evolutionary milestones. See how this compares with conventional time in the diagram on the next page.
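The conversions above can be traced in a minimal sketch, using the chapter's own figures (225 million solar years per cosmic year); the function and variable names are merely illustrative:

```python
# A minimal sketch of the cosmic-year scale described above.
SOLAR_YEARS_PER_COSMIC_YEAR = 225_000_000      # one orbit of the galactic center
SECONDS_PER_SOLAR_YEAR = 365.25 * 24 * 3600    # seconds in one solar year

def to_cosmic_years(solar_years):
    """Convert a span of solar years into cosmic years."""
    return solar_years / SOLAR_YEARS_PER_COSMIC_YEAR

# The Earth's 4.5-billion-year age comes out at twenty cosmic years...
earth_age = to_cosmic_years(4_500_000_000)

# ...and a seventy-year human life span at roughly nine to ten cosmic seconds.
human_life_seconds = to_cosmic_years(70) * SECONDS_PER_SOLAR_YEAR

print(earth_age)           # 20.0
print(human_life_seconds)  # roughly 9.8
```

The same division scales any of the chapter's later milestones: one cosmic "month" is 225 million divided by twelve, about 19 million solar years, and one cosmic "day" about 600,000 solar years.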

The first two “years” of the Earth’s existence were ones of molten fury—a fiery hangover from its split from the sun. Unable to generate its own heat, it began to cool, and about eighteen cosmic years ago our hot Precambrian planet—so named after the rocks of Cambria, the former name of present-day Wales—gave rise to the world’s oldest known igneous rocks. These molten elements solidified into the well-known crystal shapes of ancient granite and basalt. With the cooling of the Earth came the ocean-forming rains and the beginning of a geological process called the cycle of stones. The alternating heat and cold of day and night caused the rocks to swell and to retract until, exhausted by the process, the outer geological skin of the basalts and granites began to erode and flake off. Carried away by wind and water, it took another two cosmic years for the first great rock formations to erode their way to the seas. The first stage in the cycle was over.

Under the massive weight of oxygen-free water, the second stage of the cycle began. In a process of geological transformation, layer upon layer of the exfoliated and eroded igneous tissue compressed to become the oldest known sedimentary rocks on Earth. The crystals in these strata, under intense heat and pressure, were transformed in the third stage into the tough, elegantly grained metamorphic form that we find in the present-day mountain ranges such as the Alps and the Himalayas.

As a metaphor for the shaping of human life and character, it would appear that our personal fine- and coarse-grained life experiences, our patterns of weathering, trauma, and transformations are not unlike those patterns in the cycle of stones. Meanwhile, it is curious to think, as British geologist and archaeologist Jacquetta Hawkes puts it, that

granite and basalt, with water, nitrogen and carbon dioxide in combination with the early atmosphere of Earth, have made all the material paraphernalia with which man now surrounds himself, the sky-scraper, the wine glass, the vacuum cleaner, jewels, the mirror into which I look. And the woman who looks? Where did it come from, this being behind the eyes, this thing that asks? How has this been gleaned from a landscape of harsh rock and empty seas?


It would seem that we cannot escape our molecular and geological foundations. They are in our blood.


With the unraveling of DNA sequences in living forms, most biologists now acknowledge three domains of life. These are the Bacteria, the conventional microbes of the world, and the Archaea, ancient single-cell organisms that inhabit environments of extreme temperature and acidity (thermoacidophiles), salty environments (halobacteria), and anoxic bogs (methanogenic bacteria). The third domain comprises the Eukarya—organisms that are made up of cells with organelles and a separate, membrane-bound nucleus. The Eukarya comprise the fungi, the plants, and all animals, including us.

The Archaea were the first organic inhabitants of the Earth. Without them, there would be no trees, flowers, or fish…and we wouldn’t be here either. But when and how did they come about? As for the when, we believe it to be about thirteen or fourteen cosmic years ago (3 billion years). The how is speculative but highly likely. With 60 percent of the granites already established, the electrochemical mixture of land, water, and lightning combined to produce molecular compounds of nitrogen, carbon, and other elements that had not existed on Earth before. There was no turning back. A process had been initiated in which the electrically charged molecules combined to form water-borne organisms capable of living in an oxygen-free world. The next step in the process was crucial: the development of a membrane—the first organic boundary, the first fence, the first hint of specialization.

However, if there was ever a defining moment in the evolution of life as we know it, it occurred about ten cosmic years (about 2 billion years) ago. It marks the earliest evidence of one of the great strategies of species survival: symbiosis—so named by German botanist Anton de Bary in 1873 to describe the living together of different organisms for mutual benefit. With it came the emergence of the first differentiated cells. These were the first cells to have organelles and a nucleus with its own membrane. The reason for the nuclear membrane will become clear. But what triggered this first symbiotic relationship? It was the changing conditions of the surroundings.

In an environment that was becoming increasingly oxygenated, new aerobic (oxygen-coping) bacteria began to emerge, putting them at a clear advantage over the anaerobes. With competition for nutrients becoming increasingly serious, including a phase when, in all likelihood, the two strains of bacteria were feeding off each other, the first great alliance took place. Instead of being devoured by the predatory anaerobes, the more recent, threadlike aerobic organisms became part of the intracellular structure of their evolutionary older anaerobic cousins. They literally came on board, where they function to this day in all living cells, as the indispensable organelles that use oxygen to convert nutrients into energy. Essential for cellular metabolism and homeostasis, these little subcompartments of our cells are known as mitochondria, from the Greek mitos, meaning “thread,” and chondrion, meaning “granule.” Because of the energy they generate, they are also called the powerhouses of the cells. Without them we would not be able to move, think, or dream. Without them, the animal and insect kingdoms as we know them today would not exist.

The symbiotic relationship, however, was a conditional one. The host cells, compelled to protect their own DNA, ensured their long-term survival by developing a membrane around their nuclei. The mitochondria, for the same reason, developed a double membrane. This genetic independence of the cell nuclei and mitochondria brings a fascinating twist to the symbiotic tale. It is well known that the genetic information in the nucleus of mammalian cells comes from both parents. What we didn’t know until very recently is that the genetic information in the mitochondria is passed on, generation after generation, by the female of the species only. In other words, the mitochondria, the powerhouses of our cells, come from our biological mothers. Why there is no contribution from the biological father is unknown, but it would seem that the genetic information, if any, which the sperm may carry regarding the mitochondria is either absent or, if not, lost or destroyed at the moment of conception. Be that as it may, the maternal link to our mitochondria has opened up a fascinating avenue into our understanding of human ancestry. With the discovery of this lineage, we are able to show that modern humans, Homo sapiens sapiens, as little as 200,000 years ago shared not only a common bloodline, but as recently as 60,000 years ago, a lineage through six or seven possible biological mothers. As humans, it would seem that we are more closely related to each other than we sometimes like to think. As for our link with animals, the evidence suggests that the mammalian bloodline goes back 100 million years. It would appear that the poetry of the brotherhood and sisterhood of all living things has become science.

A similar symbiotic process occurred in plant cells as well, but here the new bacterial tenants (cyanobacteria) became what are known as chloroplasts—the “green stuff” of plants. Instead of consuming oxygen, they combine carbon dioxide with water and light to produce sugars, releasing oxygen. As with mitochondria, chloroplasts, too, have their own DNA.

It should therefore not be surprising to learn that other biological partnerships followed. One of the most important of these partnerships is described by the science writers John Briggs and F. David Peat in their book Turbulent Mirror as “the taking into the cell in another intrusion-turned-marriage the highly mobile, corkscrew-shaped bacteria”—the spirochetes. Once again, in return for nourishment and protection, the spirochetes, or “wrigglers,” as biologist and author Lynn Margulis calls them, made their sluggish hosts an offer they couldn’t refuse. They brought with them their stout cilia, or hairlike propelling strands, to act as miniature outboard motors for their new hosts. Could this have been a hint of the future legs and wings to come? Perhaps so, but not all wrigglers became propelling mechanisms. Some of them developed into microtubules within the host cell, eventually joining and elongating to become what are believed to be primitive axons and dendrites—the “business ends” of neurons, as Margulis describes them. As she suggests, it is not improbable that the growing network of connecting tubules developed into neurological tissue and later, much later, the first brains.

Moving on to four cosmic years ago (900 million years), we would have found ourselves in the company of the planet’s first multicellular plants. Known as stromatolites, from the Greek stroma, meaning “matrix” or “tissue,” they established themselves in networks of algae, or algal beds. One galactic turn later we would have seen the first jellyfish, coelenterata, and only two cosmic years ago, the trilobites—among the world’s first arthropods. Marine and land invertebrates were developing their first shells, or exoskeletons, and then came the glaciation of an African landmass very different to its modern shape. With the receding of the ice roughly one and a half cosmic years ago, the sea became home to horn corals and boneless fish—the predecessors of modern sharks.

With a steady increase in temperatures, the Earth produced its first tree ferns, sharks, and early amphibians. The stage was set for what seemed to be an inevitable explosion of life, but it was not to be. Instead, as a result of large-scale volcanic activity and global warming, carbon dioxide levels rose to toxic proportions, wiping out 95 percent of the Earth’s species! This catastrophic occurrence, a fraction more than one cosmic year ago and now referred to as the Permian Extinction, heralded a new geological period on our planet—the Triassic. The survivors regrouped themselves. New forms began to take shape, among them the ancestors of modern turtles, sharks, and the much-maligned crocodile, surely the greatest survivor of all modern animals. Gymnosperms (our nonflowering trees and plants) began to carpet many parts of the world, contributing not only to an increase in the Earth’s atmospheric oxygen, but to a change in the weather too. Increasing forestation meant increasing rainfall. The rivers began to flow freely, providing a niche for countless riverine plants, fish, and insects. Nine “months” (180 million years) ago, in a new period known as the Jurassic, the dinosaurs (from the Greek words deinos, meaning “terrible,” and sauros, meaning “lizard”) became the food-chain champions of the world.

A “month” later, accompanied by a splash of colors, plants with sexual organs made their first appearance. The flowers of the fields opened their petals and sepals to expose stamens and pistils—the respective male (pollen producing) and female (seed producing) components of flowers. Drawn to the plethora of colors and perfumes came an equal plethora of unwitting pollinators in the forms of wasps, flies, butterflies, and bees.

Spiders and crustaceans introduced themselves to the Earth’s ecosystems at about the same time as the flowering plants, while behind the scenes, a group of dinosaurs (they weren’t all as big as Tyrannosaurus rex) evolved a new way of escaping their larger, hungry relatives: their scales softened into feathers. Examine a reptilian scale through a powerful microscope and you will discover that its molecular architecture is practically identical to that of a feather. And so it was, only seven “months” (about 130 million years) ago that Archaeopteryx, the first known feathered creature (with teeth!)—a true ancestor of the birds—took to the sky. Escaping predators was a huge benefit to the winged creatures, but there were other advantages as well: flight provided new and wonderful opportunities for insulation, feeding, nesting, and travel.

At the same time the birds (now warm-blooded) began taking flight, the Earth’s surface began to split up again. It was the start of a significant land migration, otherwise known as continental drift. This major breakup and spread of the southerly landmass took about four cosmic months (70 million years) to give us the recognizable continents of South America, Africa, Antarctica, and Australia as well as the subcontinent of India. The Earth’s anatomy, like a huge geological embryo, had, in a sense, differentiated itself.

Need we be reminded that the same pattern of anatomical differentiation occurs in every living embryo, from stem cells to livers, kidneys, hearts, spleens, and brains? Is global anatomy a metaphor worth taking seriously? Can we learn from our own bodies? To me, the human anatomy is one of the finest examples I know of biological differentiation and diversity. It is a living definition of ecology, an embodiment of the interactions and interdependence between molecules, cells, tissues, organs, and systems, sensitive to both inner and outer environments. Sociologically it would appear to be the same—we are a body of humans, drifting and differentiated at the same time, interacting and relating to each other and we do it because we have to. As we shall see, it is part of our survival as bio-psycho-social beings.

A little over three cosmic months ago (65 million years), not too long before the establishment of the continents as we know them today, the dinosaurs’ reign ended. It is chillingly speculated that the cause of this abrupt end to the dinosaurs’ 120-million-year existence was a massive asteroid impact on the Yucatan peninsula of present-day southeastern Mexico. It is thought that the event caused so much dust to be thrown into the atmosphere that the sun all but disappeared from the sky. The resulting drop in temperature was so severe that the sun-dependent creatures stood no chance of survival.

How do we know that this theory is the correct one? Well, we don’t know for sure, but it seems to be the most likely one. What we do know is that there was indeed an asteroid impact as described. The element iridium is the signature of asteroid impacts and there is plenty of it in a huge but well-defined area on the Yucatan peninsula. It is dated to 65 million years ago. We also know that the dinosaurs made their surprisingly rapid exit at about that time. As plausible as they might seem, two contending theories—a decimating epidemic or an intolerable atmospheric/climatic change of another kind—have not been substantiated. Of the three possibilities for extinction, which one could the human animal be facing?

And so, in what could be described as a huge coincidence, the demise of the dinosaurs gave the burrowing, warm-blooded placentals, class Mammalia, the opportunity to establish themselves. While this is our class, there were no mammalian forms at that time even vaguely ready to put up their hands or wiggle their thumbs. The geological period known as the Cretaceous, from the Latin word for “chalk,” had ended and a warm-blooded class of creatures tentatively tiptoed into the Tertiary. The burrowing lemurs, shrews, rats, and mice showed their daytime faces. Ancestral ungulates and other ancient carnivores announced themselves, along with a fresh spurt of newly evolving birds, insects, frogs, worms, mosses, and flowering plants.


About two cosmic months ago, the Great Rift Valley began to open up and, peering into it and out of it, were the tiny evolutionary cousins of the elephant, the family Procaviidae—the hyraxes of bush, trees, and rocks. The aardvark and the early rhino made their acquaintance with Africa about one “month” ago. Then, with the worldwide expansion of grasslands only twelve cosmic days later, the hollow-horned antelopes showed up alongside their slightly older ruminant companions, the giraffes, with their horns of solid bone. Bulk-feeders such as the buffalo, Syncerus caffer, began herding themselves out of Europe and into the African grasslands while the zebra (family Equidae), whose ancestors hail from North America, declared their savannah stripes. As if to balance the wilderness equation, the modern carnivores, such as the lion and the hyena, left their European origins to become part of the African food chain. This all took place about six “days” (3 to 4 million years) ago.

Twenty-four cosmic hours later, not far from the foothills of the newly formed volcanic slopes of Kilimanjaro, an astonishingly odd-looking primate stood up. It was an apelike being of the genus Australopithecus (from the Latin australis, meaning “southern,” and the Greek pithekos, meaning “ape”). Genetically different to the hominids that are linked to modern orangutans, these bipedal creatures of the subfamily Homininae, now extinct, are our earliest hominid ancestors.

There appears to be little doubt about who our early ancestors are, but what is unclear is our ancestry—the line of descent from Australopithecus to modern man. What we do know, however, is that the progressive increase in brain size of our intermediate ancestors and, with it, a consciousness that would eventually define the human animal, has the quality of a quantum leap. The diminishing gaps in time between the increments have forced us to revise our notions of evolution as something slow and purposive. Let’s have a look at these leaps.

With a brain size of 750 cc, Homo habilis, our original hominid grandparents, appeared on Earth about four cosmic days ago (2.5 million years). It would appear that they lived in an overlap phase with their smaller-brained but similar-looking cousins, Australopithecus africanus and A. boisei. One animal among many others alongside our Australopithecan cousins must have been watching the early development of the hominid family. It was the African elephant, Loxodonta africana, who emerged from its own ancestral line at more or less the same time as H. habilis, the world’s first toolmakers. Habilis, from the Latin habilis, meaning “dexterous,” is linked with the first discovery of concentrations of animal remains, as well as stone collections, many of which had been brought from long distances. These pebble tools, choppers, and waterworn cobbles crudely flaked on one side to form a jagged cutting edge, were mankind’s first embellished stone tools.

Habilis, along with having a wider range of equipment, also had a different arrangement of teeth to those of their Australopithecan relatives. They were, indeed, a different species. The back teeth of these toolmaking hominids were narrower, suggesting the development of an important change in their diets—they were eating more animal food than their mostly vegetarian ancestors. As for the size of the habilis brain, not only was it larger than that of Australopithecus, but, for the first time, the bulge of Broca’s area, the convolution of the brain corresponding to the center for executive speech, became evident on a primate skull.

In their book The Wisdom of Bones, Alan Walker and Pat Shipman remind us that the anatomical capacity for speech is also a reflection of other particular mental abilities, including the ability to categorize and analyze the world in a complex fashion. It includes the capacity to name and to talk about things, as well as to describe actions without performing them. The Earth had a new tongue. Our early hominid grandparents were not only the carriers of stones and bones, they were also the carriers and shapers of words.

About one and a half cosmic days ago (a million years), Africa was witness to another sudden leap in the size of the hominid skull. Homo erectus emerged with a 1,200-1,300 cc brain. Also known as Homo ergaster, or “The Work Man,” these ancestors brought with them an up-to-date tool kit containing a variety of large, symmetrically flaked stone bifaces, or hand axes, for chopping, cutting, piercing, and pounding. They, too, were anatomically different to their immediate ancestors. Compared with habilis, the faces of erectus had become smaller as well as more expressive, while their evenly spaced and smaller back teeth confirmed the early shift from a primarily vegetable diet to one that included significantly more animal protein. This increase in brain size was believed to be a reflection of the cognitive requirements for cooperative hunting and living as well as for the evolutionarily significant gift of storytelling and symbol formation. It was also associated with the capacity to harness that great element of the gods—fire.

Fire meant an extension of the light into the night. It became a gravitational force, gathering people around it not only for warmth and safety, but for storytelling. The dark became less frightening. Essential for the developing brains of the hominids to come, cellulose-rich plants could be cooked and transformed into energy-providing carbohydrates. With fire, we were able to keep pantries and to establish ourselves in previously formidable geographical areas. Fired by the exploratory flames of human consciousness, we zigzagged our way out of Africa into southeastern and eastern Asia, a poetic, yet cognitive, equivalent of continental drift.

About eight cosmic hours ago (250,000 years), a hominid with a 1,450 cc brain showed up. It was the grand entrance of Homo sapiens, from the Latin sapiens, meaning “wise.” These large-brained ancestors did not include our heavily browed, hairy, and more muscled cousin, Homo neanderthalensis. Matthias Krings of the University of Munich has shown that there is a significant difference between the DNA of Neanderthal Man and that of modern human beings, which means, although related to us, they were altogether a different species. It is not known exactly when our Neanderthal relatives became extinct (estimates are between 50,000 and 200,000 years ago), but, in spite of 10,000 years of living side by side with H. sapiens in Europe and the Middle East, we think we know why. It is believed they were vanquished by none other than their highly inventive and aggressive hominid cousins—us.

The next step in our evolution has to be regarded as one of the great cognitive milestones in our history—the beginnings of sophisticated art. Prior to as little as 40,000 years ago, no rock art or engravings of any aesthetic significance, whether on bone or stone, are known to exist. It is as if from one level of capability to another, human creativity took a quantum leap. The signature and skill of an artist hitherto unknown suddenly emerged. The great sand faces of the Earth became the diaries of human experience as well as the mirrors of the human soul. Modern man had arrived.

So this is who we are—Homo sapiens sapiens—the sole survivors of at least eighteen species of bipedal ancestors. We are privileged. Creative and clever? Yes. Doubly wise? I doubt it.


The human animal traveled the world. Equipped with a brain that was primed to seek and to explore, we had no choice. The search for food and new hunting grounds made sure of that until, close on the heels of the last ice age ten thousand years ago and with the Earth’s temperatures warming again, one of Nature’s most fortuitous genetic accidents occurred. It stopped our nomadic ancestors in their tracks. By some great fluke, or perhaps the result of a hitherto unknown temperature-dependent bacterial alliance with wild grasses, a wind-scattered wild wheat with fourteen chromosomes crossed with a natural goat grass of the same chromosome number. The result was a fertile twenty-eight-chromosome hybrid called emmer. The seeds of this edible hybrid were still light enough to be wind-borne but then a second accident occurred when emmer crossed with another goat grass, producing a still larger hybrid with forty-two chromosomes. This hybrid is the cereal called bread wheat, Triticum vulgare, the staple diet of millions of people today.
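The chromosome arithmetic of the two crossings can be traced in a short sketch; the counts are those given in the text, and the variable names are merely illustrative labels:

```python
# Chromosome counts from the wheat story above.
wild_wheat = 14   # wind-scattered wild wheat
goat_grass = 14   # natural goat grass of the same chromosome number

# First accident: wild wheat crosses with goat grass,
# giving the fertile 28-chromosome hybrid, emmer.
emmer = wild_wheat + goat_grass

# Second accident: emmer crosses with another goat grass,
# giving the 42-chromosome bread wheat, Triticum vulgare.
bread_wheat = emmer + goat_grass

print(emmer, bread_wheat)  # 28 42
```

Each crossing adds a full chromosome set rather than halving and recombining, which is why the hybrids grow by fourteen chromosomes at each step.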

Prior to this, the order of the day was to collect grass seeds and to bring them home, but suddenly, in an exotic, symbiotic relationship beautifully described by scientist and philosopher J. Bronowski, “man and a plant came together.” A grain had developed that was too heavy for wind dispersal and that had to be cultivated by a species that understood the behavior of flowering plants and grasses. By accident or coincidence, the coalition of natural grasses to form cereals accelerated. Barley, Hordeum vulgare, sprang up in the Middle East, followed by maize, Zea mays, in the American tropics 7,000 years ago. Nearly two thousand years later, rice, Oryza sativa, cropped up in Thailand and China, while in Africa sorghum, Sorghum bicolor, and the millets, Pennisetum glaucum and Eleusine corocana, began seeding themselves. At last, the hominids were able to take off their nomadic shoes and stay put for a while. Planting, cultivating, harvesting, and the domesticating and interbreeding of animals signaled another quantum jump in the evolution of human culture. It added a dimension to the definition of home. It gave us the time and the luxury to reflect upon matters beyond our immediate survival. It was the beginning of surplus and of specialization, a time not only to tell tales, but to embellish them. Personal lives became stories, stories became legends, legends became myths, and our myths became our dreams.

If the traditional agricultural practices of Africa, India, and the Far East are anything to go by, it should not surprise us to learn that women were the first agriculturists. Who else would have intuited better the significance of fertility, pregnancy, and cultivation? Who other than the traditional gatherers of the plains would have recognized the potential of a new food source when it presented itself?

Agriculture has been important in our history, but it came with a price. Cultivation is synonymous with growth, and therein lies the shadow, or dark side, of this evolutionary event. It is called expansionism. Staying in one place led to an unprecedented growth in local populations. This meant a need for more food. More food meant competition for more land, and it is not difficult to see the link between land, territory, colonization, and the means of getting it—politics and war. Cultivation took on a new dimension—the cultivation of words, wealth, and weapons.

There was no turning back, but it had its positive side. Human language took on another form. Through exquisite, painstaking art, including our earliest scribbled signs and symbols, our agricultural ancestors wrote themselves into the record book. No longer restricted to body signals and to speech, language in its written form allowed the human animal to record, to think in words, and to read between the lines. From rock faces to papyrus and paper, the files of human history became indelible and, as every poet will tell you, ink and blood are the same thing.

With the onset of agriculture and the interweaving seasons of bread and wine, cultivation became a multifaceted metaphor for the human narrative—the seasons of birth, death, and rebirth. It reinforced in us the Neanderthal notion of continuity and an afterlife, for these relatives were the first hominids to add to the graves of their dead provisions for that afterlife—flowers, food, and sea urchins.

Continuity and the notions of deities, gods, and God represent profound leaps in the evolution of human culture. Let the histories of the world’s great religious philosophies speak for themselves. Accompanied by laws that would later be engraved on stones, scrolls, and in leatherbound creeds, it is a history of the human quest for a greater understanding of the creation and of its creator. Visible gods became an invisible God. Animism was replaced by theism, which in turn has been challenged by humanism and the supremacy of human rights. God moved from being outside us to being inside and then to being everywhere. Some say that He left and others that He will come back again. All things considered, the idea—or for some the conviction—of life everlasting appears to be deeply embedded in the human psyche, for, as the poet Czeslaw Milosz reminds us, “it has accompanied man in his wanderings through time. It has always been larger and deeper than religious or philosophical creeds which expressed only one of its forms.”

Because of the meaning that is derived from them, the significance of the world’s religions should be neither negated nor underestimated. They are far more than mere codes of conduct or moral philosophies. Ligare, the Latin word which means “to connect” or “to bind” and from which the word religion is derived, plays no small role in the survival of a species that knows its ultimate fate. Continuity, connection, transformation, and transmutation are the hallmarks of evolution, are they not? Everything in life changes its skin…even the gods. Does it really matter that someone else’s cosmology or notion of God might look a little different from yours or mine? How different that could be is reflected in these lines of a poem by Howard Nelson. The poem is called “Elephant Thoughts.”

Afterwards one of us asked

“What is the difference between us and the elephants?”

Many differences, as big as elephants, no doubt—

But we sat dumb a while, not sure what to answer.

Then one, the one who had lived with the elephants said

“The difference is this—human beings are the only species that claim to be made in God’s image.”

So, maybe he is an elephant. A large female

Somewhere out on the plains

Tossing dust onto her shoulders, surrounded by her disciples.

Perhaps God has huge grey ears.

Perhaps God is so massive that it seems to flow.

Perhaps God’s tusks are long, powerful, tapered arcs…

I’ve heard stranger claims.

“There is at least one philosophical problem in which all thinking people are interested,” wrote historian and philosopher Bryan Magee. “It is the problem of cosmology; the problem of understanding the world—including ourselves, and our knowledge as part of the world. All science is cosmology,” he said.


The dinosaurs might be gone but they are not forgotten, for the Earth, it would seem, does not forget her children. Their signatures, along with those of our mammalian predecessors, are not only written in our genes but they can also be found in the anatomy and chemistry of the human brain. Their imprint, as we shall see, is still wet and very much with us.

In the 1960s, in a fascinating yet sobering analysis of the evolution of the brain, Paul MacLean introduced the notion of the human brain as an organ that has retained its reptilian and paleomammalian origins. The human brain, he said, is a triune brain. In other words, the human animal, to this day, operates with three “brains”—a reptilian brain, an early mammalian one, and a neomammalian, or human, one. According to MacLean, each of these brains has its own memory, motor functions, intelligence, and its own sense of time and space. The boundaries between the three levels of brain functioning are obviously not as rigid as the diagram portrays, but in the light of an ecological intelligence the concept is both useful and important.

The reptilian brain of crocodiles, lizards, and snakes, including the extinct dinosaurs, has changed little in its 180- to 220-million-year history. Its anatomy consists chiefly of a brain stem and other nuclei responsible for the rhythm of the heart, breathing, coordinating fight and flight responses, and for the interpretation of perceptual stimuli such as sounds, movement, and particularly olfaction—the ancient sense of smell. Although our sense of smell, compared with our other senses, appears to have lost the survival significance that it still holds for our reptilian and mammalian cousins (elephants can smell water more than eighteen miles away), these other reptilian nuclei remain intact and functional in the brains of the human animal. And yet our sense of smell, in spite of its lack of potency, is nevertheless an important one. Odors and fragrances of all sorts—from wax crayons, pencil shavings, peanut butter sandwiches, eggs and bacon, body scents, and perfumes to the smell of the first rains—are powerful reminders of one’s culture, one’s community, and even one’s identity.

As we compare the evolution and behavior of the living creatures on our planet, it is important that we remember that the game we are playing is a shared one. It is called survival. In this light, when we snootily describe the behavior of reptiles and other creatures as being instinctive with a tendency to be automatic, we would do well to acknowledge our own brain stem behavior. Yes, it is likely that crocodiles are unemotional, but we too are capable of cold-blooded indifference. Yes, reptiles do tend to be opportunistic with little or no cognitive appreciation of the present, the future, or of past events, but we, too, have an eye for the gap. “I want it all, and I want it now” is the brain stem speaking. We, too, are territorially and materially acquisitive, often getting what we want by acts of intimidation and threat displays, otherwise known as bullying and blackmail.

Yes, reptiles are naturally prejudiced in favor of brain stem drives, but they are anything but unintelligent. Take the modern Nile crocodile of tropical Africa, Crocodylus niloticus, for example. It has been on Earth at least fifty times longer than we have, outliving countless species that have come and gone during its remarkable tenure. These creatures can remain underwater for up to forty-five minutes, and with their short, mobile earflaps acting like volume controls, their hearing is better than that of any other reptile. They continue to grow throughout their lives, and an adult crocodile, by utilizing the accumulated fat in its long tail, is capable of going without food for up to two years. What is more, the sex of its oviparous (egg-born) offspring is determined by the depth at which the female lays her eggs in the sand. Males are born from the shallower and therefore warmer levels of the conical hole in which she lays her eggs, females from the deeper levels. What kind of intelligence accounts for these extraordinary capacities? There is a crocodile in me, and it shows itself in my drives, my impulsiveness, my compulsions, my deceptiveness, and my guile. Consciousness and intelligence, as we shall see, are not to be confused.

Although the Earth’s earliest known mammals, such as the mouse-like climber Eomaia scansoria, made their appearance about seven cosmic months ago (125 million years), it was the onset of the Tertiary epoch, 65 million years ago, that coincided with the rapid emergence of what MacLean calls the second brain—the convoluting brain of the warm-blooded class Mammalia. Called the paleomammalian brain, this new and larger structure gave its owners a more sophisticated range of motor functions, emotions, memory, and a sense of place, but everything “id”—everything impulsive—about its reptilian origins came along with it. The main characteristic of this new brain was a consolidation of the widespread connections between the autonomic centers for body homeostasis, such as hunger/satiety and sleep/wakefulness, and those of smell, sight, and taste. But there was more. The other anatomical changes included a fairly well-defined positioning of the hypothalamus—the hormone-primed seat of the emotions associated with aggression, flight, anticipation, passivity, and caregiving. They also included a significant consolidation of the links between those delicate neurological structures and the chemicals associated with learning and the retention of memory. This new brain became associated with important changes in animal socialization. It became part and parcel not only of the socially significant differentiation of audiovocal calls into those of alarm, contact, comfort, separation, and sexual communication, but of an increase in the sophistication of cooperative care for the young as well. On top of that, this new brain heralded what is arguably the most outstanding behavioral difference between reptiles and mammals—the capacity for play.

Along a spectrum of rough-and-tumble games, ambushing, chasing, and hide-and-seek, every mammal in its own way knows how to play. Play has its neurological substrate in the thalamic region of the limbic system and its contribution toward the survival of each mammalian species is a profound one. Looked at a little more critically, play is about affiliation and bonding, about prowess, future ranking, and the honing of skills. It is also a mode of self-discovery, of finding one’s physical boundaries and limitations, of games that end in tears, and of establishing rules—ask any child who grew up with brothers and sisters. Play and learning go hand in hand. Through play we stretch not only our muscles but, through wordplay, our vocabulary and our imagination as well. And lest we forget, wordplay is central to political and economic one-upmanship. Let no one say there is no point in play…

The wilderness says

“Don’t fool yourself!”

To play
is an ancient dress rehearsal
for the kill.

Like the reptilian brain, the paleomammalian brain is not concerned with the poetry of moonlight. It is not concerned with meaning or the philosophical significance of events, but it nevertheless carries the early chemistry of fair play. One only has to spend time with wolves, elephants, baboons, and chimpanzees to recognize in their social systems that these animals, especially the females, are aware of the difference between acceptable and unacceptable behavior within their groups. In other words, it would appear that somewhere in the transition between the second and the third brain, justice and morality—a sense of right and wrong—begins to define itself.

To illustrate this, Sarah Brosnan, a doctoral research worker at the Yerkes National Primate Research Center in Atlanta, Georgia, has come up with some fascinating evidence to support the evolutionary significance of fair play. Working with South American capuchin monkeys, Cebus apella, she devised an experiment where pairs of female capuchins were trained to exchange stones for pieces of cucumber. This in itself was significant for, as Brosnan reminds us, not many species are willing to relinquish their possessions intentionally. She then changed the experiment. Dividing the monkeys into two groups, she placed them in separate but adjacent cages. The capuchins in one group would be able to observe the exchanges between the handler and their colleagues in the other group. Brosnan then deliberately changed the stones-for-food formula in one of the groups. In exchange for their stones, she began rewarding group A with grapes while continuing to reward group B with cucumber. Her bias went even further when, in some instances, she purposely rewarded the “favored” group for not having performed at all. The cucumber group meanwhile, hoping to earn a higher salary (grapes) in exchange for their products (stones), continued to get paid in cucumbers. Unfair? The monkeys certainly thought so. What follows is amazing but not surprising. The cucumber group stopped their exchanges with the handler, preferring to withhold their stones rather than be given an inferior reward. As the experiment progressed, not only did group B refuse the cucumber, in some cases they hurled the unwanted food at the handler. In this patently biased experiment, it is not difficult to imagine human beings responding in the same way. It tells us a lot about the evolutionary origins of trade unions and revolution.

We come now to the emergence of the neomammalian brain—that incredible matrix beneath our skulls without which there would be no sense of music, mercy, morality, or meaning. What is it that makes us different to our animal brothers and sisters, and where should we look to find the answer? I suggest we look once more at the human genome and to our nearest primate cousin, the chimpanzee, Pan troglodytes. If the genetic difference between a human and a chimpanzee is as little as 2 percent, then that tiny fraction has to be seen as colossal.

We might as well be comparing different galaxies, for within that fractional difference lies a consciousness that is uniquely human. We are indeed creatures of the wild, but unlike our animal kin—and thanks to those additional convolutions of gray matter, especially the frontal lobe (the chimpanzee has vastly less of it)—we have become creatures of culture and conscience also. Remove the convoluted frontal cortex from a human brain and you will be faced with an individual who is both disturbed and disturbing, grossly lacking in insight and without any sense of consequence. Without the frontal lobe, we lose what is arguably the most important ability of human socialization—the capacity to deliberately inhibit or to delay our actions. Take away the frontal lobe and we lose our ability to say, “Wait a minute…let’s think about it.” We lose our ability to regulate our behavior.

But what is consciousness? This is an ancient question and because it is a subject that is both philosophical and physiological, any definition is going to be contentious. For a start, most of our perceptions, interpretations, and responses to the world around us are in fact unconscious—we are not aware that we are doing them. Does this mean that these activities are not a part of consciousness? The answer, of course, is no. Consciousness, if understood as evolutionary and survival oriented, must obviously include these hugely important “unconscious” attributes. It should also be obvious that certain aspects of consciousness are shared by all mammals.

We will deal with the subject of the unconscious in the following chapter, but because it could help to tease out the difference between human consciousness and that of other animals, I invite the reader to see consciousness in its awakened state. In other words, without demeaning the role of the unconscious, I wish to equate our varying levels of consciousness with varying levels of awareness. In a hierarchy, to be conscious includes being awake (level one), being alert (level two), being aware (level three), being self-aware (level four), and finally, being aware that we are aware (level five). It is obvious that to be alive and effective every living creature needs to be functional in at least the first three levels, and for “higher” animals, including humans, the first four. Mediated through the senses of sight, smell, touch, taste, and hearing, the first three levels are essential for a consciousness of external stimuli and events—the movement of an impala, the alarm call of a francolin, the smell of meat, of estrus, and the taste of blood. The fourth level, to be self-aware, implies an awareness, however crude or rudimentary, of one’s internal state. It is to be aware of the emotion or feeling-charged chemistry of hunger, thirst, sleep, sexual desire, protection, and escape and of being able to link this awareness to the external environment. It is important to remember that the awareness of one’s external environment is associated with emotion-charged nuclei in the oldest evolutionary part of our brains, the brain stem—a reminder that any creature with a brain stem has feelings.

The next level of awareness—to be aware that you are aware—is a massive jump from the fourth level. Dependent on the other levels of awareness, it describes a consciousness that can reflect upon itself, upon its history, its nature, and its coexistence with other creatures. Think about yourself for a moment. How do you see yourself, or expect to see yourself, when you look into a mirror? Is that you looking back, and what is that smudge of paint doing on your cheek? If you lean forward toward the mirror and watch yourself removing the smudge, then you are not only self-aware, but aware that you are aware. You understand the concept of “me.” That is me in the mirror. Surely such a consciousness sets us apart from our primate cousins? Well, the answer is no. A chimpanzee recognizes its own reflection. It either knows, or soon learns upon looking into a mirror, that a blob of paint, deliberately daubed onto its forehead, belongs to it. It will also, in the same way that we groom ourselves, observe itself removing the blob until the image in the mirror is to its satisfaction. All other animals, on the other hand, with the possible exception of the African elephant and other primates like gorillas, seem to be utterly indifferent to their reflections. Instead, their consciousness is geared to the level of being awake, alert, and aware of what is going on outside the notion of a personal identity. This should not be construed as believing that animals are not aware of an internal world of feelings or that they are unintelligent. They do have feelings and they are intelligent. Aware of the frustration that comes with the failure to get what they want, most animals are quite capable of engaging in problem solving as well as attending to certain stimuli rather than others.
For example, wolves, dogs, elephants, and primates are known to initiate and terminate behavioral and cognitive activities such as play and herding, as well as assisting an injured or handicapped companion, including human companions. In his book Good Natured, Frans de Waal describes a chimpanzee offering guidance to a blindfolded handler by leading the handler by the hand to a source of food. Few will doubt that this kind of action is an example of fairly sophisticated thought processing or, if you like, a higher consciousness.

So, what really separates us from other animals? Let’s go back to that mirror. The difference between humans and our animal kin is probably related to the way that we look at ourselves in a mirror. It relates to the questions we ask of ourselves and to the stirring of the imagination when we peer into that looking glass. For instance, How did that blob of paint get there? Who put it there, and why? And what about the face that looks back? When studying your reflection, do you recognize someone who had a little bit too much to drink at last night’s party or wonder what happened to the youthful features that used to look back at you? Do you promise yourself that you are going to spend more time with the family or that you need a holiday?

With that objective image of “me” looking back, an entirely subjective world comes into play and the result is a kind of dialogue or interaction with oneself. And it is ongoing. The world, in effect, becomes a mirror. With the realization that we are constantly interacting with the world, we are able to put ourselves into it, to see our reflections in it, and to reflect upon them. But we are also interested in what is going on behind the mirror. From astrology to the reading of tea leaves, we are constantly trying to decode and recode what we perceive to be the intentions of Nature. I don’t know that there is any other animal that is quite so analytical and speculative. Yes, other animals, too, have memories and some of them have dreams, but can they reflect upon their mortality? Can they speculate about their future? Can they say, “Hey, I wonder where I’ll be this time tomorrow?” To be aware that you are aware, or, to be more precise, to be aware that you are aware that I am aware of what you are aware of, and so forth, is the neurological legacy of an ancestor that began to understand the deeper significance of relationships and of time—yesterday, today, and tomorrow—and with that, the need to consciously plan for tomorrow. It was the gift of sequential thinking and of the molding of words into past, present, and future tenses. It marked the beginning of experimental science, of music and stories that begin “Once upon a time…” It was the beginning of an understanding of the impermanence of life, of cosmologies, of philosophies, of the human need for continuity, and of what would become organized religions. It was the redefining of the human identity. Without sequential thought and language, our ability to create ideas, symbols, and concepts about our world would not only be severely impaired, but, in all likelihood, impossible. Without language, it is unlikely that we could maintain an identity that is personal. 
To me, that fifth level of consciousness and language go hand in hand.

What else can we find in that genetic fraction between us and our troglodyte cousins that might qualify us for a consciousness that is different to the rest of the animal kingdom? Perhaps the following attributes are the ones that make it so: aware that we are never far from the edge of the unknown, that we are mortal, and that we are not the masters of our fate, we are the only creatures that create humor out of our fate. As far as we know, we are the only species that contemplates an afterlife. We also appear to be the only animals capable of imagining what we might become, of seeing beyond ourselves and, as if pulled by that vision, of daring to go for it. We are the only animals I know for whom food, water, and air will never be enough for an existence that is meaningful and who have therefore learned to feed off their imagination and their dreams.

Looking back upon our molecular origins, to our geology, to those first cellular membranes, and to the eventual expression of a species capable of reflecting upon itself, it would appear that we are indeed the “salt of the Earth,” as Saint Matthew put it, not just in soul, but in science also. The relationship of the principal cations (the electropositive elements) in the blood serum of all animals, as well as of man, is constant: calcium : potassium : sodium = 5 : 10 : 160. This is a close representation of their respective proportions in seawater, differing only by a greater content of magnesium in the oceans as we know them today. According to a theory proposed in 1901 by the physiologist Macallum (no known relation to the author), this difference can be explained by the low precipitation of ocean magnesium in the Cambrian epoch, just prior to the emergence of organisms from the surrounding water onto the land 550 to 570 million years ago.

The animals, then, are in us and with us; we share their genes and their juices. Made up of countless molecules, cells, and complex organs, each one of us is the carrier not only of the pattern of embryonic gill slits and tails, but of the entire history of life as well. It would appear that the aboriginal “water of life” still circulates in the blood of every animal, including us. To me it is both exciting and humbling to acknowledge that the sophisticated cells, tissues, organs, and systems of the living creatures of our time have their origins in the single-cell organisms that adapted to life on Earth nearly 3 billion years ago. It should not be that difficult to imagine, either. After all, suggests Lynn Margulis, “the fertilized human cell begins as a single water-borne cell which then begins to divide, taking only forty weeks to differentiate into a creature that is capable of living in air.” It would appear that we are, indeed, cosmic mongrels, a little bit of this, a little bit of that. I agree with writer and philosopher Jorge Luis Borges, who wrote: “We would do well to practice a sublime astronomy…for if we see the Milky Way it is because it actually exists in our souls.”

Four mammalian embryos at various stages of development: A, hog; B, calf; C, rabbit; D, human. (Villee: from Romanes’ “Darwin,” after Haeckel, with the permission of the Open Court Publishing Company)

And so, where were you when the foundations of the Earth were being laid? Linked to the molecular and chemical origins of this planet, one way of answering this question is to reply that, in essence, we were all there and we are still there. Every hydrogen atom in our bodies originates from the time of the big bang; every atom of iron in our red blood cells is a leftover of supernova explosions; every atom of oxygen and carbon is a gift from our sun. Psychologically, those foundations are being laid right now. They are the foundations of a new way of thinking about who and what we are in relationship to the Earth and to Nature. And we are the masons of the way we think. We can say yes and no.


You ask what time it is—it is time to pray.