Proust Was a Neuroscientist - Jonah Lehrer (2007)
Chapter 2. George Eliot
The Biology of Freedom
Seldom, very seldom, does complete truth belong to any human disclosure; seldom can it happen that something is not a little disguised or a little mistaken.
—Jane Austen, Emma
GEORGE ELIOT WAS A WOMAN of many names. Born Mary Anne Evans in 1819, the same year as Queen Victoria, she was at different times in her life Mary Ann Evans, Marian Evans, Marian Evans Lewes, Mary Ann Cross, and, always in her art, George Eliot. Each of her names represented a distinct period of her life, reflecting her slightly altered identity. Though she lived in a time when women enjoyed few freedoms, Eliot refused to limit her transformations. She had no inheritance, but she was determined to write. After moving to London in 1850 to become an essayist and translator, Eliot decided, at the age of thirty-seven, to become a novelist. In 1856, she finished her first novella, The Sad Fortunes of the Reverend Amos Barton. She signed the story with her new name; she was now George Eliot.
Why did she write? After finishing her masterpiece Middlemarch (1872), Eliot wrote in a letter that her novels were "simply a set of experiments in life—an endeavor to see what our thought and emotion may be capable of." Eliot's reference to "experiments" isn't accidental; nothing she wrote was. The scientific process, with its careful blend of empiricism and imagination, fact and theory, was the model for her writing process. Henry James once complained that Eliot's books contained too much science and not enough art. But James misunderstood Eliot's method. Her novels are fiction in the service of truth, "examination[s] of the history of man" under the "varying experiments of time." Eliot always demanded answers from her carefully constructed plots.
And while her realist form touched upon an encyclopedia of subjects, her novels are ultimately concerned with the nature of the individual. She wanted "to pierce the obscurity of the minute processes" at the center of human life. A critic of naive romanticism, Eliot always took the bleak facts of science seriously. If reality is governed by mechanical causes, then is life just a fancy machine? Are we nothing but chemicals and instincts, adrift in an indifferent universe? Is free will just an elaborate illusion?
These are epic questions, and Eliot wrote epic novels. Her Victorian fiction interweaves physics and Darwin with provincial politics and melodramatic love stories. She forced the new empirical knowledge of the nineteenth century to confront the old reality of human experience. For Eliot, this was the novel's purpose: to give us a vision of ourselves "more sure than shifting theory." While scientists were searching for our biological constraints—they assumed we were prisoners of our hereditary inheritance—Eliot's art argued that the mind was "not cut in marble." She believed that the most essential element of human nature was its malleability, the way each of us can "will ourselves to change." No matter how many mechanisms science uncovered, our freedom would remain.
In Eliot's time, that age of flowering rationality, the question of human freedom became the center of scientific debate. Positivism—a new brand of scientific philosophy founded by Auguste Comte—promised a utopia of reason, a world in which scientific principles perfected human existence. Just as the theological world of myths and rituals had given way to the philosophical world, so would philosophy be rendered obsolete by the experiment and the bell curve. At long last, nature would be deciphered.
The lure of positivism's promises was hard to resist. The intelligentsia embraced its theories; statisticians became celebrities; everybody looked for something to measure. For the young Eliot, her mind always open to new ideas, positivism seemed like a creed whose time had come. One Sunday, she abruptly decided to stop attending church. God, she decided, was nothing more than fiction. Her new religion would be rational.
Like all religions, positivism purported to explain everything. From the history of the universe to the future of history, there was no question too immense to be solved. But the first question for the positivists, and in many ways the question that would be their undoing, was the paradox of free will. Inspired by Isaac Newton's theory of gravity, which divined the cause of the elliptical motions found in the heavens, the positivists struggled to uncover a parallel order behind the motions of humans.* According to their depressing philosophy, we were nothing but life-size puppets pulled by invisible strings.
The founder of this "science of humanity" was Pierre-Simon Laplace. The most famous mathematician of his time, Laplace also served as Napoleon's minister of the interior.† When Napoleon asked Laplace why there was not a single mention of God in his five-volume treatise on cosmic laws, Laplace replied that he "had no need of that particular hypothesis." Laplace didn't need God because he believed that probability theory, which he had done more than anyone to develop, would solve every question worth asking, including the ancient mystery of human freedom.
Laplace got the idea for probability theory from his work on the orbits of planets. But he wasn't nearly as interested in celestial mechanics as he was in human observation of those mechanics. Laplace knew that astronomical measurements rarely measured up to Newton's laws. Instead of being clocklike, the sky described by astronomers was consistently inconsistent. Laplace, trusting the order of the heavens over the eye of man, believed this irregularity resulted from human error. He knew that two astronomers plotting the orbit of the same planet at the same time would differ reliably in their data. The fault was not in the stars, but in ourselves.
Laplace's revelation was that these discrepancies could be defeated. The secret was to quantify the errors. All one had to do was plot the differences in observation and, using the recently invented bell curve, find the most probable observation. The planetary orbit could now be tracked. Statistics had conquered subjectivity.
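Laplace's recipe can be stated plainly: treat each astronomer's reading as the true value plus a random error, fit a bell curve to the scatter, and take its peak, which is simply the arithmetic mean, as the most probable observation. A minimal modern sketch of that idea in Python, with invented numbers and no claim to reproduce Laplace's own calculations:

```python
import random
import statistics

random.seed(42)  # make the simulated "observations" reproducible

# The planet's true position (arbitrary degrees) -- unknown to the observers.
TRUE_POSITION = 123.456

# Each astronomer reports the truth plus Gaussian error, the pattern
# Laplace assumed for honest observational mistakes.
observations = [TRUE_POSITION + random.gauss(0, 0.5) for _ in range(100)]

# The peak of the fitted bell curve is the arithmetic mean:
# the "most probable" observation.
best_estimate = statistics.mean(observations)

# Any single reading may be off by half a degree or more, but the
# average of a hundred fallible observers lands far closer to the truth.
print(best_estimate)
```

The design point is the one Laplace grasped: the errors of individual observers scatter symmetrically around the truth, so averaging cancels them out, and subjectivity is conquered by arithmetic.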
But Laplace didn't limit himself to the trajectory of Jupiter or the rotation of Venus. In his Essai philosophique sur les probabilités, Laplace attempted to apply the probability theory he had developed for astronomy to a wide range of other uncertainties. He wanted to show that the humanities could be "rationalized," their ignorance resolved by the dispassionate logic of math. After all, the principles underlying celestial mechanics were no different from those underlying social mechanics. Just as an astronomer is able to predict the future movement of a planet, Laplace believed that before long humanity would be able to reliably predict its own behavior. It was all just a matter of computing the data. This brave new science came to be called "social physics."
Laplace wasn't only a brilliant mathematician; he was also an astute salesman. To demonstrate how his new brand of numerology would one day solve everything—including the future—Laplace invented a simple thought experiment. What if there were an imaginary being—he called it a "demon"—that "could know all the forces by which nature is animated"? According to Laplace, such a being would be omniscient. Since everything was merely matter, and matter obeyed a short list of cosmic laws (like gravity and inertia), knowing the laws meant knowing everything about everything. All you had to do was crank the equations and decipher the results. Man would finally see himself for "the automaton that he is." Free will, like God, would become an illusion, and we would see that our lives are really as predictable as the planetary orbits. As Laplace wrote, "We must ... imagine the present state of the universe as the effect of its prior state and as the cause of the state that will follow it. Freedom has no place here."
But just as Laplace and his cohorts were grasping on to physics as the paragon of truth (since physics deciphered our ultimate laws), the physicists were discovering that reality was much more complicated than they had ever imagined. In 1852, the British physicist William Thomson elucidated the second law of thermodynamics. The universe, he declared, was destined for chaos. All matter was slowly becoming heat, decaying into a fevered entropy. According to the second law, the very error Laplace had tried to erase—the flaw of disorder—was actually our future.
James Clerk Maxwell, a Scottish physicist who formulated the theory of electromagnetism, demonstrated the principles of color photography, and developed the kinetic theory of gases, elaborated on Thomson's cosmic pessimism. Maxwell realized that Laplace's omniscient demon actually violated the laws of physics. Since disorder was real (it was even increasing), science had fundamental limits. After all, pure entropy couldn't be solved. No demon could know everything.
But Maxwell didn't stop there. While Laplace believed that you could easily apply statistical laws to specific problems, Maxwell's work with gases had taught him otherwise. While the temperature of a gas was wholly determined by the velocity of its atoms—the faster they fly, the hotter the gas—Maxwell realized that velocity was nothing but a statistical average. At any given instant, the individual atoms were actually moving at different speeds. In other words, all physical laws are only approximations. They cannot be applied with any real precision to particulars. This, of course, directly contradicted Laplace's social physics, which assumed that the laws of science were universal and absolute. Just as a planet's position could be deduced from the formula of its orbit, Laplace believed, our behaviors could be plotted in terms of our own ironclad forces. But Maxwell knew that every law had its flaw. Scientific theories were functional things, not perfect mirrors to reality. Social physics was founded on a fallacy.
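Maxwell's insight can be made concrete with a toy gas. In the sketch below (a hedged illustration in arbitrary units, not Maxwell's own mathematics), each molecule's velocity components are drawn from a Gaussian, so the speeds follow a Maxwell-Boltzmann-like distribution. The "temperature," proportional to the average kinetic energy, is perfectly well defined, yet it describes no particular molecule:

```python
import random
import statistics

random.seed(0)  # reproducible toy gas

def molecule_speed(sigma=1.0):
    # Each velocity component is Gaussian, so the resulting speed follows
    # a Maxwell-Boltzmann-like distribution (arbitrary units).
    vx, vy, vz = (random.gauss(0.0, sigma) for _ in range(3))
    return (vx * vx + vy * vy + vz * vz) ** 0.5

speeds = [molecule_speed() for _ in range(10_000)]

# Temperature tracks the *average* kinetic energy -- a statistical
# summary of the ensemble, not a property of any one molecule.
mean_speed = statistics.mean(speeds)
spread = statistics.stdev(speeds)

# The average is stable, yet the spread is a large fraction of it:
# the law holds for the gas as a whole, not for its particulars.
print(mean_speed, spread)
```

Run it and the standard deviation of the speeds comes out at a substantial fraction of the mean, which is exactly Maxwell's point against Laplace: a statistical law of the whole says little about any individual part.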
An etching of George Eliot in 1865 by Paul Adolphe Rajon, after the drawing by Sir Frederick William Burton
Love and Mystery
George Eliot's belief in positivism began to fade when she suffered a broken heart. Here was a terrible feeling no logic could solve. The cause of her sadness was Herbert Spencer, the Victorian biologist who coined the phrase "survival of the fittest." After Eliot moved to London, where she lived in a flat on the Strand, she grew intimate with Spencer. They shared long walks in the park and a subscription to the opera. She fell in love. He did not. When he began to ignore her—their relationship was provoking the usual Victorian rumors—Eliot wrote Spencer a series of melodramatic yet startlingly honest love letters. She pleaded for his "mercy and love": "I want to know if you can assure me that you will not forsake me, and that you will always be with me as much as you can and share your thoughts and feelings with me. If you become attached to someone else, then I must die, but until then I could gather courage to work and make life valuable, if only I had you near me." Despite Eliot's confessions of vulnerability, the letter proudly concludes with an acknowledgment of her worth: "I suppose no woman before ever wrote such a letter as this—but I am not ashamed of it, for I am conscious that in the light of reason and true refinement I am worthy of your respect and tenderness."
Spencer ignored Eliot's letters. He was steadfast in his rejection. "The lack of physical attraction was fatal," he would later write, blaming Eliot's famous ugliness for his absence of feeling. He could not look past her "heavy jaw, large mouth, and big nose."* Spencer believed his reaction was purely biological, and was thus immutable: "Strongly as my judgment prompted, my instincts would not respond." He would never love Eliot.
Her dream of marriage destroyed, Eliot was forced to confront a future as a single, anonymous woman. If she was to support herself, she had to write. But her heartbreak was more than a painful emancipation; it also caused her to think about the world in new ways. In Middlemarch, Eliot describes an emotional state similar to what she must have been feeling at the time: "She might have compared her experience at that moment to the vague, alarmed consciousness that her life was taking on a new form, that she was undergoing a metamorphosis ... Her whole world was in a state of convulsive change; the only thing she could say distinctly to herself was, that she must wait and think anew ... This was the effect of her loss." In the months following Spencer's rejection, Eliot decided that she would "nourish [a] sleek optimism." She refused to stay sad. Before long, Eliot was in love again, this time with George Henry Lewes.
In many important ways, Lewes was Spencer's opposite. Spencer began his career as an ardent positivist, futilely searching for a theory of everything. After positivism faded away, Spencer became a committed social Darwinist, and he enjoyed explaining all of existence—from worms to civilization—in terms of natural selection. Lewes, on the other hand, was an intellectual renowned for his versatility; he wrote essays on poetry and physics, psychology and philosophy. In an age of increasing academic specialization, Lewes remained a Renaissance man. But his luminous mind concealed a desperate unhappiness. Like Eliot, Lewes was also suffering from a broken heart. His wife, Agnes, was pregnant with the child of his best friend.
In each other, Lewes and Eliot found the solution for their melancholy. Lewes would later describe their relationship as deeply, romantically mysterious. "Love," Lewes wrote, "defies all calculation." "We are not 'judicious' in love; we do not select those whom we 'ought to love,' but those whom we cannot help loving." By the end of the year, Lewes and Eliot were traveling together in Germany. He wanted to be a "poet in science." She wanted to be "a scientific poet."
It is too easy to credit love for the metamorphosis of Eliot's world-view. Life's narratives are never so neat. But Lewes did have an unmistakable effect on Eliot. He was the one who encouraged her to write novels, silencing her insecurities and submitting her first manuscript to a publisher.
Unlike Spencer, Lewes never trusted the enthusiastic science of the nineteenth century. A stubborn skeptic, Lewes first became famous in 1855 with his Life of Goethe, a sympathetic biography that interwove Goethe's criticisms of the scientific method with his romantic poetry. In Goethe, Lewes found a figure who resisted the mechanistic theories of positivism, trusting instead in the "concrete phenomena of experience." And while Lewes eagerly admitted that a properly experimental psychology could offer an "objective insight into our thinking organ," he believed that "Art and Literature" were no less truthful, for they described the "psychological world." In an age of ambitious experiments, Lewes remained a pluralist.
Lewes's final view of psychology, depicted most lucidly in The Problems of Life and Mind (a text that Eliot finished after Lewes's death), insisted that the brain would always be a mystery, "for too complex is its unity." Positivists may proselytize their bleak vision, Lewes wrote, but "no thinking man will imagine anything is explained by this. Life and Being remain as inaccessible as ever." If nothing else, freedom is a necessary result of our ignorance.
By the time Eliot wrote her last novel, Daniel Deronda (1876), she had come to see that Laplace and Spencer and the rest of the positivists were wrong. The universe could not be distilled into a neat set of causes. Freedom, however fragile, exists. "Necessitarianism,"* Eliot wrote, "I hate the ugly word." Eliot had read Maxwell on molecules, even copying his lectures into her journals, and she knew that nothing in life could be perfectly predicted. To make her point, Eliot began Daniel Deronda with a depiction of human beings as imagined by Laplace. The setting is a hazy and dark casino, full of sullen people who act, Eliot writes, "as if they had all eaten of some root that for the time compelled the brains of each to the same narrow monotony of action." These gamblers are totally powerless, dependent on the dealer to mete out their random hands. They passively accept whichever cards they are dealt. Their fortune is determined by the callous laws of statistics.
In Eliot's elaborately plotted work, the casino is no casual prop—it is a criticism of determinism. As soon as Eliot introduces this mechanical view of life, she begins deconstructing its silly simplicities. After Daniel enters the casino, he spies a lone woman, Gwendolen Harleth. "Like dice in mid-air," Gwendolen is an unknown. Her mysteriousness immediately steals Daniel's attention; she transcends the depressing atmosphere of the casino. Unlike the gamblers, who do nothing but wait for chance to shape their fate, Gwendolen seems free. Daniel stares at her and wonders: "Was she beautiful or not beautiful? And what was the secret of form or expression which gave the dynamic quality to her glance?"
Eliot uses the casino to remind us that we are also mysterious, a "secret of form." And because Gwendolen is a dynamic person, her own "determinate," she will decide how her own life unfolds. Even when she is later entrapped in a marriage to the evil Grandcourt—"his voice the power of thumbscrews and the cold touch of the rack"—she manages to free herself. Eliot creates Gwendolen to remind us that human freedom is innate, for we are the equation without a set answer. We solve ourselves.*
While George Eliot spurned the social physics of her day, she greeted Darwin's theory of natural selection as the start of a new "epoch." She read On the Origin of Species when it was first published in 1859 and immediately realized that the history of life now had a coherent structure. Here was an authentic version of our beginning. And while positivists believed that the chaos of life was only a facade, that beneath everything lay the foundation of physical order, Darwinism said that randomness was a fact of nature. In many ways, randomness was the fact of nature.* According to Darwin, in a given population sheer chance dictated variety. The variations between individuals followed no discernible natural law; Darwin, writing decades before genes and mutations were known, could not say where they came from. This diversity created differing rates of reproduction among organisms, which led to the survival of the fittest. Life progressed because of disorder, not despite it. The theologian's problem—the question of why nature contained so much suffering and contingency—became Darwin's solution.
The bracing embrace of chance was what attracted Eliot to Darwin. Here was a narrative that was itself unknowable, since it was guided by random variation. The evolution of life depended on events that had no discernible cause. Unlike Herbert Spencer, who believed that Darwin's theory of evolution could solve every biological mystery (natural selection was the new social physics), Eliot believed that Darwin had only deepened the mystery. As she confided to her diary: "So the world gets on step by step towards brave clearness and honesty! But to me the Development theory [Darwin's theory of evolution] and all other explanations of processes by which things came to be produce a feeble impression compared with the mystery that lies under the process." Because evolution has no purpose or plan—it is merely the sum of its accumulated mistakes—our biology remains impenetrable. "Even Science, the strict measurer," Eliot confessed, "is obliged to start with a make-believe unit."
The intrinsic mystery of life is one of Eliot's most eloquent themes. Her art protested against the braggadocio of positivism, which assumed that everything would one day be defined by a few omnipotent equations. Eliot, however, was always most interested in what we couldn't know, in those aspects of reality that are ultimately irreducible: "If we had a keen vision and feeling of all ordinary human life," she warns us in Middlemarch, "it would be like hearing the grass grow and the squirrel's heart beat, and we should die of that roar which lies on the other side of silence. As it is, the quickest of us walk about well wadded with stupidity." Those characters in her novels who deny our mystery, who insist that freedom is an illusion and that reality is dictated by abstract laws (which they happen to have discovered), work against the progress of society. They are the villains, trusting in "inadequate ideas." Eliot was fond of quoting Tennyson's In Memoriam: "There lives more faith in honest doubt, / Believe me, than in half the creeds."
Middlemarch, Eliot's masterpiece, contains two reductionists searching for what Laplace called "the final laws of the world." Edward Casaubon, the pretentious husband of Dorothea Brooke, spends his days writing a "Key to All Mythologies," which promises to find the hidden connection between the varieties of religious experience. His work is bound to fail, Eliot writes, for he is "lost among small closets and winding stairs." Casaubon ends up dying of a "fatty degeneration of the heart," a symbolic death if ever there was one.
Dr. Tertius Lydgate, the ambitious country doctor, is engaged in an equally futile search, looking for the "primitive tissue of life." His foolish quest is an allusion to Herbert Spencer's biological theories, which Eliot enjoyed mocking.* Like Casaubon, Lydgate continually overestimates the explanatory power of his science. But reality eventually intrudes and Lydgate's scientific career collapses. After enduring a few financial mishaps, Lydgate ends up becoming a doctor of gout, and "considers himself a failure: he had not done what he once meant to do." His own life becomes a testament to the limits of science.
After Casaubon dies, Dorothea, the heroine of Middlemarch, who bears an uncanny resemblance to Eliot, falls in love with Will Ladislaw, a poetic type and not-so-subtle symbol of free will. (Will is in "passionate rebellion against his inherited blot.") Tragically, because of Casaubon's final will (notice the emerging theme), Dorothea is unable to act on her love. If she marries Will, who is of low social rank, she loses her estate. And so she resigns herself to a widowed unhappiness. Many depressing pages ensue. But then Will returns to Middlemarch, and Dorothea, awakened by his presence, realizes that she wants to be with him. Without freedom, money is merely paper. She renounces Casaubon's estate and runs away with her true love. Embracing Will is her first act of free will. They live happily ever after, in "the realm of light and speech."
But Middlemarch, a novel that denies all easy answers, is more complicated than its happy ending suggests. (Virginia Woolf called Middlemarch "one of the few English novels written for grown-up people.") Eliot had read too much Darwin to trust in the lasting presence of joy. She admits that each of us is born into a "hard, unaccommodating Actual." This is why Dorothea, much to Eliot's dismay, could not end the novel as a single woman. She was still trapped by the social conventions of the nineteenth century. As Eliot admonishes in the novel's final paragraphs, "There is no creature whose inward being is so strong that it is not greatly determined by what lies outside it."
In her intricate plots, Eliot wanted to demonstrate how the outside and the inside, our will and our fate, are in fact inextricably entangled. "Every limit is a beginning as well as an ending," Eliot confesses in Middlemarch. Our situation provides the raw material out of which we make our way, and while it is important "never to beat and bruise one's wings against the inevitable," it is always possible "to throw the whole force of one's soul towards the achievement of some possible better." You can always change your life.
The Brand-New Mind
If science could see freedom, what would it look like? If it wanted to find the will, where would it search? Eliot believed that the mind's ability to alter itself was the source of our freedom. In Middlemarch, Dorothea—a character who, like Eliot herself, never stopped changing—is reassured that the mind "is not cut in marble—it is not something solid and unalterable. It is something living and changing." Dorothea finds hope in this idea, since it means that the soul "may be rescued and healed." Like Jane Austen, a literary forebear, Eliot reserved her highest praise for characters brave enough to embrace the possibilities of change. Just as Elizabeth Bennet escapes her own prejudices, so does Dorothea recover from her early mistakes. As Eliot wrote, "we are a process and an unfolding."
Biology, at least until very recently, did not share Eliot's faith in the brain's plasticity. While Laplace and the positivists saw our environment as a prison—from its confines, there was no escape—in the time after Darwin, determinism discovered a new stalking-horse. According to biology, the brain was little more than a genetically governed robot, our neural connections dictated by forces beyond our control. As Thomas Huxley disdainfully declared, "We are conscious automata."
The most glaring expression of that theme was the scientific belief that a human was born with a complete set of neurons. This theory held that brain cells—unlike every other cell in our body—didn't divide. Once infancy was over, the brain was complete; the fate of the mind was sealed. Over the course of the twentieth century, this idea became one of neuroscience's fundamental principles.
The most convincing defender of this theory was Pasko Rakic, of Yale University. In the early 1980s, Rakic realized that the idea that neurons never divide had never been properly tested in primates. The dogma was entirely theoretical. Rakic set out to investigate. He studied twelve rhesus monkeys, injecting them with radioactive thymidine (a nucleoside that is incorporated into the DNA of dividing cells), which allowed him to trace the development of neurons in the brain. Rakic then killed the monkeys at various stages after the injection of the thymidine and searched for signs of new neurons. There were none. "All neurons of the rhesus monkey brain are generated during prenatal and early post-natal life," Rakic wrote in his influential paper "Limits of Neurogenesis in Primates," which he published in 1985. While Rakic admitted that his proof wasn't perfect, he persuasively defended the dogma. He even went so far as to construct a plausible evolutionary theory as to why neurons couldn't divide. Rakic imagined that at some point in our distant past, primates had traded the ability to give birth to new neurons for the ability to modify the connections between our old neurons. According to Rakic, the "social and cognitive" behavior of primates required the absence of neurogenesis. His paper, with its thorough demonstration of what everyone already believed, seemed like the final word on the matter. His experiments were never independently verified.
The genius of the scientific method, however, is that it accepts no permanent solution. Skepticism is its solvent, for every theory is imperfect. Scientific facts are meaningful precisely because they are ephemeral, because a new observation, a more honest observation, can always alter them. This is what happened to Rakic's theory of the fixed brain. It was, to use Karl Popper's verb, falsified.
In 1989, Elizabeth Gould, a young postdoc working in the lab of Bruce McEwen at Rockefeller University, in New York City, was investigating the effect of stress hormones on rat brains. Chronic stress is devastating to neurons, and Gould's research focused on the death of cells in the hippocampus. But while Gould was documenting the brain's degeneration, she happened upon something completely miraculous: the brain also healed itself.
Confused by this anomaly, Gould went to the library. She assumed she was making some simple experimental mistake, because neurons don't divide. Everybody knew that. But then, looking through a dusty twenty-seven-year-old science journal, Gould found a tantalizing clue. Beginning in 1962, a researcher at MIT, Joseph Altman, published several papers claiming that adult rats, cats, and guinea pigs all formed new neurons. Although Altman used the same technique that Rakic later used in monkey brains—the injection of radioactive thymidine—his results were ridiculed, and then ignored.
As a result, the brand-new field of neurogenesis vanished before it began. It would take another decade before Michael Kaplan, at the University of New Mexico, would use an electron microscope to image neurons giving birth to new neurons. Kaplan discovered these fresh cells everywhere in the mammalian brain, including the cortex. Yet even with this visual evidence, science remained stubbornly devoted to its doctrine. After enduring years of scorn and skepticism, Kaplan, like Altman before him, abandoned the field of neurogenesis.
Reading Altman's and Kaplan's papers, Gould realized that her mistake wasn't a mistake: it was an ignored fact. The anomaly had been suppressed. But the final piece of the puzzle came when Gould discovered the work of Fernando Nottebohm, who was, coincidentally, also at Rockefeller. Nottebohm, in a series of remarkably beautiful studies on bird brains, showed that neurogenesis was required for bird song. To sing their complex melodies, male birds needed new brain cells. In fact, up to 1 percent of the neurons in the bird's song center were made fresh every day. "At the time, this was a radical idea," Nottebohm says. "The brain was thought to be a very fixed organ. Once development was over, scientists assumed that the mind was cast in a crystalline structure. That was it; you were done."
Nottebohm disproved this dogma by studying birds in their actual habitat. If he had kept his birds in metal cages, depriving them of their natural social context, he would never have observed the abundance of new cells that he did. The birds would have been too stressed to sing, and fewer new neurons would have been created. As Nottebohm has said, "Take nature away and all your insight is in a biological vacuum." It was only because he looked at birds outside of the laboratory's vacuum that he was able to show that neurogenesis, at least in finches and canaries, had a real evolutionary purpose.
Despite the elegance of Nottebohm's data, his science was marginalized. Bird brains were seen as irrelevant to the mammalian brain. Avian neurogenesis was explained away as an exotic adaptation, a reflection of the fact that flight required a light cerebrum. In his Structure of Scientific Revolutions, the philosopher of science Thomas Kuhn wrote about how science tends to exclude its contradictions: "Until the scientist has learned to see nature in a different way, the new fact is not quite a scientific fact at all." Evidence of neurogenesis was systematically excluded from the world of "normal science."
But Gould, motivated by the strangeness of her own experimental observations, connected the dots. She realized that Altman, Kaplan, and Nottebohm all had strong evidence for mammalian neurogenesis. Faced with this mass of ignored data, Gould abandoned her earlier project and began investigating the birth of neurons.
She spent the next eight years quantifying endless numbers of radioactive rat brains. But the tedious manual labor paid off. Gould's data shifted the paradigm. More than thirty years had passed since Altman first glimpsed new neurons, but neurogenesis had finally become a scientific fact.
After her frustrating postdoc, during which time her science was continually attacked, Gould was offered a job at Princeton. The very next year, in a series of landmark papers, she began documenting neurogenesis in primates, in direct contradiction of Rakic's data. She demonstrated that marmosets and macaques created new neurons throughout life. The brain, far from being fixed, is actually in a constant state of cellular upheaval. By 1998, even Rakic admitted that neurogenesis was real, and he reported seeing new neurons in rhesus monkeys.* The textbooks were rewritten: the brain is constantly giving birth to itself.
Gould has gone on to show that the amount of neurogenesis is itself modulated by the environment, and not just by our genes. High levels of stress can decrease the number of new cells; so can being low in a dominance hierarchy (the primate equivalent of being low class). In fact, monkey mothers who live in stressful conditions give birth to babies with drastically reduced neurogenesis, even if those babies never experienced stress themselves. But there is hope: the scars of stress can be healed. When primates were transferred to enriched enclosures—complete with branches, hidden food, and a rotation of toys—their adult brains began to recover rapidly. In less than four weeks, their deprived cells underwent radical renovations and formed a wealth of new connections. Their rates of neurogenesis returned to normal levels. What does this data mean? The mind is never beyond redemption, for no environment can extinguish neurogenesis. As long as we are alive, important parts of the brain are dividing. The brain is not marble, it is clay, and our clay never hardens.
Neuroscience is just beginning to explore the profound ramifications of this discovery. The hippocampus, the part of the brain that modulates learning and memory, is continually supplied with new neurons, which help us to learn and remember new ideas and behaviors. Other scientists have discovered that antidepressants work by stimulating neurogenesis (at least in rodents), implying that depression is ultimately caused by a decrease in the amount of new neurons, and not by a lack of serotonin. A new class of antidepressants is being developed that targets the neurogenesis pathway. For some reason, newborn brain cells make us happy.
And while freedom remains an abstract idea, neurogenesis is cellular evidence that we evolved to never stop evolving. Eliot was right: to be alive is to be ceaselessly beginning. As she wrote in Middlemarch, the "mind [is] as active as phosphorus." Since we each start every day with a slightly new brain, neurogenesis ensures that we are never done with our changes. In the constant turmoil of our cells—in the irrepressible plasticity of our brains—we find our freedom.
The Literary Genome
Even as neuroscience began to reveal the brain's surprisingly supple structure, other scientists were becoming entranced with an even more powerful deterministic principle: genetics. When James Watson and Francis Crick discovered the chemical structure of DNA, in 1953, they gave biology a molecule that seemed to explain life itself. Here was our source stripped bare, the incarnate reduced to a few nucleic acids and weak hydrogen bonds. Watson and Crick recognized the handsome molecule the moment they assembled it out of their plastic atoms. What they had constructed was a double helix, a spiraling structure composed of two interwoven threads. The form of the double helix suggested how it might convey its genetic information. The same base pairs that held the helix together also represented its code, a hieroglyph consisting of four letters: A, T, C, and G.
Following Watson and Crick, scientists discovered how the primitive language of DNA spelled out the instructions for complex organisms. They summarized the idea in a simple epithet, the Central Dogma: DNA made RNA that made protein. Since we were merely elaborate sculptures of protein, biologists assumed that we were the sum of our DNA. Crick formulated the idea this way: "Once 'information' has passed into the protein [from the DNA,] it can not get out again." From the perspective of genetics, life became a neat causal chain, our organism ultimately reducible to its text, these wispy double helices afloat in the cellular nuclei. As Richard Dawkins declared in The Selfish Gene, "We are survival machines—robot vehicles blindly programmed to preserve the selfish molecules known as genes."
The logical extension of this biological ideology was the Human Genome Project. Begun in 1990, the project was an attempt to decode the genetic narrative of our species. Every chromosome, gene, and base pair would be sequenced and understood. Our textual underpinnings would be stripped of their mystery, and our lack of freedom would finally be exposed. For the paltry sum of $2.7 billion, everything from cancer to schizophrenia would be eradicated.
That was the optimistic hypothesis. Nature, however, writes astonishingly complicated prose. If our DNA has a literary equivalent, it's Finnegans Wake. As soon as the Human Genome Project began decoding our substrate, it was forced to question cherished assumptions of molecular biology. The first startling fact the project uncovered was the dizzying size of our genome. While we technically need only 90 million base pairs of DNA to encode the 100,000 different proteins in the human body, we actually have more than 3 billion base pairs. Most of this excess text is junk. In fact, more than 95 percent of human DNA is made up of what scientists call introns: vast tracts of repetitive, noncoding nonsense.
But by the time the Human Genome Project completed its epic decoding, the dividing line between genes and genetic filler had begun to blur. Biology could no longer even define what a gene was. The lovely simplicity of the Central Dogma collapsed under the complications of our genetic reality, in which genes are spliced, edited, methylated, and sometimes jump chromosomes (these are called epigenetic effects). Science had discovered that, like any work of literature, the human genome is a text in need of commentary, for what Eliot said of poetry is also true of DNA: "all meanings depend on the key of interpretation."
What makes us human, and what makes each of us his or her own human, is not simply the genes that we have buried in our base pairs, but how our cells, in dialogue with our environment, feed back to our DNA, changing the way we read ourselves. Life is a dialectic. For example, the code sequence GTAAGT can be translated as instructions to insert the amino acids valine and serine; read as a spacer, a genetic pause that keeps other protein parts an appropriate distance from one another; or interpreted as a signal to cut the transcript at that point. Our human DNA is defined by its multiplicity of possible meanings; it is a code that requires context. This is why we can share 42 percent of our genome with an insect and 98.7 percent with a chimpanzee and yet still be so completely different from both.
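The double reading of GTAAGT can be made concrete in a few lines of Python. The codon assignments (GTA for valine, AGT for serine) and the GT splice-donor signal are standard molecular biology; the little helper functions themselves are, of course, only a toy sketch, not a real annotation tool:

```python
# One DNA string, two biological "readings."
CODON_TABLE = {"GTA": "Val", "AGT": "Ser"}  # tiny subset of the standard genetic code

def as_protein(seq):
    """Read the sequence as successive three-letter codons."""
    return [CODON_TABLE.get(seq[i:i + 3], "?") for i in range(0, len(seq), 3)]

def is_splice_donor(seq):
    """Introns almost always begin with GT -- the canonical 5' splice signal,
    a cue to cut the transcript rather than translate it."""
    return seq.startswith("GT")

seq = "GTAAGT"
print(as_protein(seq))       # ['Val', 'Ser'] -- the coding interpretation
print(is_splice_donor(seq))  # True -- the same letters read as a cut signal
```

The same six letters yield entirely different meanings depending on which machinery happens to read them; the code, as the text says, requires context.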
By demonstrating the limits of genetic determinism, the Human Genome Project ended up becoming an ironic affirmation of our individuality. By failing to explain us, the project showed that humanity is not simply a text. It forced molecular biology to focus on how our genes interact with the real world. Our nature, it turns out, is endlessly modified by our nurture. This uncharted area is where the questions get interesting (and inextricably difficult).
Take the human mind. If its fissured cortex—an object that is generally regarded as the most complicated creation in the known universe—were genetically programmed, then it should have many more genes than, say, the mouse brain. But this isn't the case. In fact, the mouse brain contains roughly the same number of genes as the human brain. After decoding the genomes of numerous species, scientists have found that there is little correlation between genome size and brain complexity. (Several species of amoeba have much larger genomes than humans.) This strongly suggests that the human brain does not develop in accordance with a strict genetic program that specifies its design.
But if DNA doesn't determine the human brain, then what does? The easy answer is: nothing. Although genes are responsible for the gross anatomy of the brain, our plastic neurons are designed to adapt to our experiences. Like the immune system, which alters itself in response to the pathogens it actually encounters (we do not have the B cells of our parents), the brain is constantly adapting to the particular conditions of life. This is why blind people can use their visual cortex to read Braille, and why the deaf can process sign language in their auditory cortex. Lose a finger and, thanks to neural plasticity, your other fingers will take over its brain space. In one particularly audacious experiment, the neuroscientist Mriganka Sur literally rewired the brains of ferrets, so that the information from their retinas was plugged into their auditory cortex. To Sur's astonishment, the ferrets could still see. Furthermore, their auditory cortex now resembled the typical ferret visual cortex, complete with spatial maps and neurons tuned to detect slants of light. Michael Merzenich, one of the founders of the plasticity field, called this experiment "the most compelling demonstration you could have that experience shapes the brain." As Eliot always maintained, the mind is defined by its malleability.*
This is the triumph of our DNA: it makes us without determining us. The invention of neural plasticity, which is encoded by the genome, lets each of us transcend our genome. We emerge, characterlike, from the vague alphabet of our text. Of course, to accept the freedom inherent in the human brain—to know that the individual is not genetically predestined—is also to accept the fact that we have no single solutions. Every day each one of us is given the gift of new neurons and plastic cortical cells; only we can decide what our brains will become.
The best metaphor for our DNA is literature. Like all classic literary texts, our genome is defined not by the certainty of its meaning, but by its linguistic instability, its ability to encourage a multiplicity of interpretations. What makes a novel or poem immortal is its innate complexity, the way every reader discovers in the same words a different story. For example, many readers find the ending of Middlemarch, in which Dorothea elopes with Will, to be a traditional happy ending, in which marriage triumphs over evil. However, some readers—such as Virginia Woolf—see Dorothea's inability to live alone as a turn of plot "more melancholy than tragedy." The same book manages to inspire two completely different conclusions. But there is no right interpretation. Everyone is free to find his or her own meaning in the novel. Our genome works the same way. Life imitates art.
The Blessing of Chaos
How does our DNA inspire such indeterminacy? After all, Middlemarch had an author; she deliberately crafted an ambiguous ending. But real life doesn't have an intelligent designer. In order to create the wiggle room necessary for individual freedom, natural selection came up with an ingenious, if unnerving, solution. Although we like to imagine life as a perfectly engineered creation (our cells like little Swiss clocks), the truth is that our parts aren't predictable. Bob Dylan once said, "I accept chaos. I'm not sure whether it accepts me." Molecular biology, confronted with the unruliness of life, is also forced to accept chaos. Just as physics discovered the indeterminate quantum world—a discovery that erased classical notions about the fixed reality of time and space—so biology is uncovering the unknowable mess at its core. Life is built on an edifice of randomness.
One of the first insights into the natural disorder of life arrived in 1968, when Motoo Kimura, the great Japanese geneticist, introduced evolutionary biology to his "neutral theory of molecular evolution." This is a staid name for what many scientists consider the most interesting revision of evolutionary theory since Darwin. Kimura's discovery began with a paradox. Starting in the early 1960s, biologists could finally measure the rate of genetic change in species undergoing natural selection. As expected, the engine of evolution was random mutation: double helices suffered from a constant barrage of editing errors. Buried in this data, however, was an uncomfortable new fact: DNA changes way too much. According to Kimura's calculations, the average genome was changing at a hundred times the rate predicted by the equations of evolution. In fact, DNA was changing so much that there was no possible way natural selection could account for all of these so-called adaptations.
But if natural selection wasn't driving the evolution of our genes, then what was? Kimura's answer was simple: chaos. Pure chance. The dice of mutation and the poker of genetic drift. At the level of our DNA, evolution works mostly by accident.* Your genome is a record of random mistakes.
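Kimura's argument can be compressed into a single back-of-the-envelope equation (this is the standard textbook form of the neutral-theory result, not a reconstruction of Kimura's full 1968 analysis). In a diploid population of size N, roughly 2Nμ new neutral mutations arise each generation, where μ is the mutation rate per gene; each one, being invisible to selection, drifts to fixation with probability 1/(2N). The population size cancels:

```latex
k \;=\; \underbrace{2N\mu}_{\text{new neutral mutations per generation}} \times \underbrace{\frac{1}{2N}}_{\text{fixation probability by drift}} \;=\; \mu
```

The substitution rate k equals the mutation rate alone—no term for selection, no term for population size—which is why a genome can change far faster than the equations of adaptation predict, and why the molecular clock ticks at the tempo of chance.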
But perhaps that randomness is confined to our DNA. The clocklike cell must restore some sense of order, right? Certainly the translation of our genome—the expression of our actual genes—is a perfectly regulated process, with no hint of disarray. How else could we function? Although molecular biology used to assume that was the case, it isn't. Life is slipshod. Inside our cells, shards and scraps of nucleic acid and protein float around aimlessly, waiting to interact. There is no guiding hand, no guarantee of exactness.
In a 2002 Science paper entitled "Stochastic Gene Expression in a Single Cell," Michael Elowitz of Caltech demonstrated that biological "noise" (a scientific synonym for chaos) is inherent in gene expression. Elowitz began by inserting two separate sequences of DNA stolen from jellyfish into the genome of E. coli. One gene encoded a protein that made the creatures glow neon green. The other gene made the bacteria radiate red. Elowitz knew that if the two genes were expressed equally in the E. coli (as classical biological theory predicted), the color yellow would dominate (for light waves, red plus green equals yellow). That is, if life were devoid of intrinsic noise, all the bacteria would be colored by the same neon hue.
But Elowitz discovered that when the red- and green-light genes were expressed at ordinary levels, and not overexpressed, the noise in the system suddenly became visible. Some bacteria were yellow (the orderly ones), but other cells, influenced by their intrinsic disorder, glowed a deep turquoise or orange. All the variance in color was caused by an inexplicable variance in fluorescent-protein level: the two genes were not expressed equally. The simple premise underlying every molecular biology experiment—that life follows regular rules, that it transcribes its DNA faithfully and accurately—vanished in the colorful collage of prokaryotes. Although the cells were technically the same, the randomness built into their system produced a significant amount of fluorescent variation. This disparity in bacterial hue was not reducible. The noise had no single source. It was simply there, an essential part of what makes life living.
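The logic of the experiment is easy to simulate. In the hypothetical sketch below, each cell draws its red and green protein counts from independent Poisson distributions—a standard minimal model of intrinsic expression noise, not Elowitz's actual analysis. When mean expression is high, the red-to-green ratio clusters tightly around 1 (yellow cells); when expression is low, the very same noise scatters the ratio widely (turquoise and orange cells):

```python
import math
import random

random.seed(0)

def poisson(lam):
    """Sample a Poisson-distributed count (Knuth's algorithm)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def red_green_ratios(mean_expression, n_cells=1000):
    """Red:green protein ratio in each simulated cell.
    Intrinsic noise: two identical genes, expressed independently."""
    ratios = []
    for _ in range(n_cells):
        red = poisson(mean_expression)
        green = poisson(mean_expression)
        if green > 0:  # skip the rare cell with no green protein at all
            ratios.append(red / green)
    return ratios

def spread(ratios):
    """Coefficient of variation of the ratios -- a measure of color scatter."""
    mean = sum(ratios) / len(ratios)
    var = sum((r - mean) ** 2 for r in ratios) / len(ratios)
    return math.sqrt(var) / mean

# Strongly expressed genes: ratios hug 1.0, the colony looks uniformly yellow.
# Weakly expressed genes: the same intrinsic noise spreads the hues apart.
print(spread(red_green_ratios(200)))  # small scatter
print(spread(red_green_ratios(4)))    # much larger scatter
```

The point of the toy model is that nothing distinguishes the cells except chance: identical genes, identical environment, divergent colors.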
Furthermore, this messiness inherent in gene translation percolates upward, infecting and influencing all aspects of life. Fruit flies, for example, have long hairs on their bodies that serve as sensory organs. The location and density of those hairs differ between the two sides of the fly, but not in any systematic way. After all, the two sides of the fly are encoded by the same genes and have developed in the same environment. The variation in the fly is a consequence of random atomic jostling inside its cells, what biologists call "developmental noise." (This is also why your left hand and right hand have different fingerprints.)
This same principle is even at work in our brain. Neuroscientist Fred Gage has found that retrotransposons—junk genes that randomly jump around the human genome—are present at unusually high numbers in neurons. In fact, these troublemaking scraps of DNA insert themselves into almost 80 percent of our brain cells, arbitrarily altering their genetic program. At first, Gage was befuddled by this data. The brain seemed intentionally destructive, bent on dismantling its own precise instructions. But then Gage had an epiphany. He realized that all these genetic interruptions created a population of perfectly unique minds, since each brain reacted to retrotransposons in its own way. In other words, chaos creates individuality. Gage's new hypothesis is that all this mental anarchy is adaptive, as it allows our genes to generate minds of almost infinite diversity.
And diversity is a good thing, at least from the perspective of natural selection. As Darwin observed in On the Origin of Species, "The more diversified the descendants from any one species become in structure, constitution and habits, by so much will they be better enabled to seize on many and widely diversified places in the polity of nature." Our psychology bears out this evolutionary logic. From the moment of conception onward, our nervous system is designed to be an unprecedented invention. Even identical twins with identical DNA have strikingly dissimilar brains. When sets of twins perform the same task in a functional MRI machine, different parts of each cortex become activated. If adult twin brains are dissected, the details of their cerebral cells are entirely unique. As Eliot wrote in the prelude to Middlemarch, "the indefiniteness remains, and the limits of variation are really much wider than anyone would imagine."
Like the discovery of neurogenesis and neural plasticity, the discovery that biology thrives on disorder is paradigm-shifting. The more science knows about life's intricacies, about how DNA actually builds proteins and about how proteins actually build us, the less life resembles a Rolex. Chaos is everywhere. As Karl Popper once said, life is not a clock, it is a cloud. Like a cloud, life is "highly irregular, disorderly, and more or less unpredictable." Clouds, crafted and carried by an infinity of currents, have inscrutable wills; they seethe and tumble in the air and are a little different with every moment in time. We are the same way. As has happened so many times before in the history of science, the idée fixe of deterministic order proved to be a mirage. We remain as mysteriously free as ever.
The lovely failure of every reductionist attempt at "solving life" has proved that George Eliot was right. As she famously wrote in 1856, "Art is the nearest thing to life; it is a mode of amplifying experience." The sprawling realism of Eliot's novels ended up discovering our reality. We are imprisoned by no genetic or social physics, for life is not at all like a machine. Each of us is free, for the most part, to live as we choose to, blessed and burdened by our own elastic nature. Although this means that human nature has no immutable laws, it also means that we can always improve ourselves, for we are works in progress. What we need now is a new view of life, one that reflects our indeterminacy. We are neither fully free nor fully determined. The world is full of constraints, but we are able to make our own way.
This is the complicated existence that Eliot believed in. Although her novels detail the impersonal forces that influence life, they are ultimately celebrations of self-determination. Eliot criticized all scientific theories that disrespected our freedom, theories which presumed "that the relations of men to their neighbours may be settled by algebraic equations." "But," she wrote, "none of these diverging mistakes can co-exist with a real knowledge of the people." What makes humans unique is that each of us is unique. This is why Eliot always argued that trying to define human nature was a useless endeavor, dangerously doomed to self-justification. "I refuse," she wrote, "to adopt any formula which does not get itself clothed for me in some human figure and individual experience." She knew that we inherit minds that let us escape our inheritance; we can always impose our will onto our biology. "I shall not be satisfied with your philosophy," she wrote to a friend in 1875, "till you have conciliated Necessitarianism ... with the practice of willing, of willing to will strongly, and so on."
As Eliot anticipated, our freedom is built into us. At its most fundamental level, life is full of leeway, defined by a plasticity that defies every determinism. We are only chains of carbon, but we transcend our source. Evolution has given us the gift of infinite individuality. There is grandeur in this view of life.