Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn - Cathy N. Davidson (2011)
Part I. Distraction and Difference
Chapter 2. Learning Ourselves
The first thing we need to know if we’re going to make sense of the patterns of attention that are normally invisible to us is that the changes we talked about in the last chapter transform not merely our behavior but the underlying neural networks that make attention possible. Every manifestation of attention in the real world begins in the brain, which means that we need to look at a few basic principles of brain biology and at the way neural networks develop.
Neurons are the most basic cells in the nervous system, which includes the brain and spinal cord. Neurons are excitable, meaning that they process the body’s electrical and chemical signals. There are various kinds of neurons, each with a specialized function, all interconnected in astonishingly intricate ways. The adult brain has been estimated to contain over a hundred billion neurons, each of which fires several times a second and has several thousand connections to other neurons. There are over a million billion neural connections in your brain. That’s a lot of zeros.1
Like so much else we believe we know, the basics of brain biology are often not what we think they are. In fact, for much of the twentieth century, it was believed that the number of neurons in the brain increased as we aged. It was thought that connections must expand in number in much the same way that we grow taller or gain more knowledge over time. That’s a logical assumption, but a false one.2 The way the brain actually works, then, is counterintuitive: An infant has more neurons, not fewer, than anyone old enough to be reading this book. Our little Baby Andrew has an excess of neurons. If his development unfolds as it should, he will lose 40 percent of his extra neurons before he grows up. If he does not, he will not be able to function independently in society and will be considered mentally handicapped or disabled.
On a structural level, the process by which neurons are shed is remarkably similar to, and in fact is prompted by, the processes of selecting what to pay attention to that we discussed in the previous chapter; an infant’s brain matures by selection. As the infant selects from all the world’s stimuli those that matter—that deserve attention—he is also “editing” neurons. As he begins to select, concentrate, and focus on some things and not others, his brain is shearing unused neural pathways. When the brain is not making connections, the unused linkages wither away. The excess is eliminated so only the relevant data and relevant neural pathways remain.
Canadian Donald O. Hebb is often called the father of neuropsychology because he was the first person to observe that learning occurs when neurons streamline into pathways, and those pathways then combine into efficient clusters that act in concert with one another. This is now called the Hebbian principle: Neurons that fire together, wire together. This means that the more we repeat certain patterns of behavior (that’s the firing together), the more those behaviors become rapid, then reflexive, then automatic (that’s the wiring). They become patterns, habits, groupings, categories, or concepts, all efficiencies that “wire together” sets of individual reflexes or responses. Reflexive behaviors combine into patterns so we don’t have to think about the components of each series of reactions each time we call upon them. We don’t have to think, every time it happens, “Now I see the red light, now I am lifting my right foot off the gas pedal and moving it to the brake, now I am pushing down on the brake so that I can stop the car before this yellow crosswalk.” I see the red light, I stop.
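The Hebbian principle can be caricatured in a few lines of code. This is an illustrative sketch only, not a model of real neurons: the function name, the learning rate, and the repetition count are inventions for the example.

```python
# "Neurons that fire together, wire together": a toy Hebbian update rule.
# Sketch under invented assumptions; rate=0.1 and 100 repetitions are arbitrary.

def hebbian_update(w, pre, post, rate=0.1):
    """Strengthen a connection weight in proportion to coincident activity."""
    return w + rate * pre * post

# Each repetition of "see red light" (pre) paired with "brake" (post)
# strengthens the connection, so the response grows more automatic.
w = 0.0
for _ in range(100):  # 100 repetitions of the paired behavior
    w = hebbian_update(w, pre=1.0, post=1.0)
# w is now a hundred times larger than after a single pairing
```

The point of the toy rule is only that the weight grows with coincident activity; nothing strengthens a connection whose two ends never fire at the same time.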
If it weren’t for the Hebbian “wiring together” of actions we perform in tandem, we would feel like everything we do is taxing, everything requires extra energy, everything is inefficient—in a word, that everything requires multitasking. Seeing the light, releasing the gas pedal, moving to the brake, all the while keeping my eyes on the road, keeping control of the car, while the kids are chattering in the backseat and the radio is on; all of this is multitasking. Fortunately, the Hebbian principle of our wired-together, firing-together neurons makes most of life’s multiple tasks easier to process and respond to at once.
We don’t know exactly how many repetitions it takes to create the pathways that produce automatic responses in infants—clearly there is tremendous variation, depending on whether the behavior is learning to smile or beginning the long conceptual and biological process toward toilet training. But we know that babies who do not receive a lot of care in infancy have a difficult time catching up, and some never do. This is why early infant care and preschooling are so crucial to a productive adulthood. All the pathways that will shape attention are being laid and reinforced, over and over, long before the child goes to school—where patterns will be named, mapped, and systematized—and long before that grown-up child enters the workforce, where the efficiencies that are developed and built upon since infancy will come to productive use.
Neural pathways connect the different parts of the brain and nervous system, translating ideas into actions in patterns learned over and over, reinforced by repetition, until they seem to us to be automatic. Repetitions literally shape specific patterns in very particular and extremely complex ways, coordinating impulses and activities across parts of the brain and nervous system that might be quite distant from one another. For example, the desire to walk and the ability to walk may seem automatic or natural to an able-bodied adult, but they are a complex operation involving many different parts of the brain that, with repetition, become more and more connected via neural pathways. To an infant, connecting the parts is a mysterious process. To a toddler, the process is clearer although not necessarily smooth. In those early stages, there are still many extraneous movements bundled into “toddling.” We are constantly correcting this process (as we correct everything little Andy does) by our reward system, applauding some behaviors, moderating others.
Once human babies have learned to walk without thinking about it, they’ve reached a new level, both in learning and in the efficiency of the neural pathways. That means that the basic task of walking seems automatic: There are few impediments in the way between thinking you want to walk and walking. When transmittal is this reflexive, you can then build in other activities. You can think and walk. You can carry things. You can perform certain movements. Of course, all these are forms of multitasking, but because the basic task—walking—has happened so often and the complex network of neurons is so efficiently created and reinforced, we don’t perceive it as multitasking. That which seems automatic doesn’t feel to us like a “task.” Because the most fundamental of the tasks—walking—seems natural and automatic and requires no premeditation, we can do other things easily as we walk. When you get really good at it, you might even be able to walk and chew gum.
Perhaps the single most important and certainly the most striking example of how an infant’s neurons are firing together and wiring together—selecting relevant features from the irrelevant ones to be ignored—is language learning. At four months, an infant can still hear all the different sounds in all the world’s languages.3 But barely. At four months, he’s already beginning to shape pathways that allow him to hear in English. That means he is excluding—shearing away—potential language sounds that do not occur in English. On a neural level, that means he is losing an ability he was born with: to be able to recognize all linguistic sounds.4 Most infants in the United States lose this ability in order to focus on the sounds required for learning English and not French or Swahili or Icelandic or Arabic or Sanskrit or Chinese, all of which possess certain sounds that the other languages do not.
In the little scene with Andy, among all the babbling sounds he might make, Mama thinks she hears him say “Dada.” She praises him, remarks on it, offers him affection as a reward, reinforcing the significance of those two syllables. “Dada” means something. When two syllables mean something, they are reinforced. “Mada” is ignored.5 “Dada” is a word; in English, “Mada” is noise, so it receives no reinforcement. He stops trying to say “Mada.”
Andy’s babbling includes many sounds that aren’t used in English at all. Those become meaningless; surprisingly quickly, they become unhearable. If, later, Andy decides to take up another language where those sounds are crucial, he will have to studiously retrain his ear to hear the sounds he’s taught himself to ignore.
So a Japanese infant can distinguish r from l. A Japanese toddler cannot. There is no distinction between r and l in Japanese. Westerners hear these as two sounds because they both are sounds in our language. They aren’t in Japanese: No one carefully distinguishes them for the Japanese Andy, no one constantly speaks to him using the two sounds and correcting his use of them, and so the distinction simply goes away for him. A Japanese infant can’t hear a difference between r and l once he starts being able to speak Japanese.6
The principle we can learn from Andy and apply to our own lives is that this process doesn’t stop with infancy. What we learn is also something we unlearn. Learn Japanese, unlearn the difference between r and l. It’s not nearly as easy to relearn that difference, but it is possible so long as we remember that it is. If we believe capacities are “natural,” we’re lost. We need to remember how Andy’s process of learning categories and concepts makes him believe that he is seeing the whole world, even though he isn’t. He isn’t even seeing or hearing all of the world that was once available to him, before he got rid of that overabundance of neurons.
Even if Andy were raised from infancy to speak more than one language, there would still be innumerable sounds that would be lost to him. He would still be paring down countless potential neural connections to relatively few. By definition, making new neural connections means severing others—the yin and yang of attention is mapped in the yin and yang of neural development. That’s another key principle of learning. It’s so basic that it has been given an extremely dramatic and powerful name that sounds like science fiction: programmed cell death.7 Programmed cell death means that unused cells must die. They are use-less and soon don’t exist. Learning requires this selection and discarding. Learning makes speedy, efficient, seemingly automatic neural pathways out of a tangle of haphazard connections.
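The use-it-or-lose-it logic of programmed cell death can be sketched as a toy simulation. This is a caricature, not biology: the counts, the decay rate, and the pruning threshold are all arbitrary assumptions chosen for the example.

```python
# "Use it or lose it": a toy pruning simulation, not a model of real neurons.
# Start with an overabundance of connections; repetition reinforces the few
# that are exercised, while unused ones decay until they are pruned away.
import random

random.seed(0)
connections = {i: 1.0 for i in range(1000)}   # 1,000 candidate pathways
used = set(random.sample(range(1000), 50))    # only 50 are ever exercised

for _ in range(200):                          # 200 rounds of experience
    for i in list(connections):
        if i in used:
            connections[i] += 0.1             # fire together, wire together
        else:
            connections[i] *= 0.95            # unused pathways wither
    # programmed cell death: discard connections that have withered away
    connections = {i: w for i, w in connections.items() if w > 0.01}

print(len(connections))  # prints 50: only the exercised pathways survive
```

The surviving network is tiny compared to the starting tangle, which is exactly the point: what remains is faster to traverse precisely because everything unused is gone.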
The world would be chaos if unused neurons didn’t atrophy and die. Without strong and efficient neural pathways, we’d be overwhelmed by the constant overstimulation of everything. Perceptually, it would be like being in the woods, lost and without a path, versus being in the woods on a well-marked path. Emotionally, it would feel terrifying to be constantly startled by events that always felt new and random.
An unsorted world would be almost impossible to navigate with any efficiency or success at all. One recent theory of severe autism is that something like this absence of categories happens early in neurological development, around the same time that language learning is coalescing. Instead of being able to understand and assimilate and use categories, the autistic child can’t make the groupings. Bundling never happens. The world may well make sense to the autistic individual, but that “sense” is incomprehensible to those around him. Communication falters, and so does the autistic child, whose development, in many cases, takes a very different course.
THOSE OF US WHO DON’T suffer from an unusual neural condition rarely pay attention to the efficiency of our neural pathways until something gets in their way, as might happen if one were to experience a crippling disease like Parkinson’s or an injury that affects the limbs or spinal cord. In the aftermath of some catastrophic disruption of the neural pathways, the relay between the desire to walk and the act of walking can once again become a conscious process, often an arduous, sometimes impossible one.
These instances, when we need to consciously “rehab” and relearn what once seemed automatic, reveal to us the complexity of the task made efficient by neural shearing. Because learning how to walk again as an adult is very different from learning it as an infant, there is a significant component of unlearning, on a physical and neurological level, for one first has to break one’s old habits and expectations in order to learn how to effectively walk again. The end result may seem the same, but because the process is so different, you actually need to retrain your mind to a new concept of “walking.” Neural shaping and shearing that occurred during childhood made walking easy. After the injury, one has to disrupt the old patterns in order to find a new way to learn to walk, thus forming new neural pathways that eventually will make the relearned skill of walking more automatic. Because the injury unbundles well-trodden neural pathways, each part of learning to walk again requires creating new pathways, new patterns that, with extensive rehabilitation, may become automatic once more.
On a biological level, attention blindness is located deep within the brain and nervous system. If things are habitual, we do not pay attention to them—until they become a problem. Attention is about difference. We pay attention to things that are not part of our automatic repertoire of responses, reflexes, concepts, preconceptions, behaviors, knowledge, categories, and other patterns, both mental and physical (if we can make such a crude distinction), for which we have, over time, developed more and more efficient neural pathways. We are constantly developing efficient ways of processing information so that certain sequences become automatic, freeing up valuable strategic thinking for novel tasks that have not yet been incorporated into our brain’s repertoire of automatic actions.
It’s only when something major and disruptive happens—say a kitten steps into the road right before I get to a stoplight—that I might break my pattern. Instead of bringing the car to a calm halt, I might jam on the brakes. I might even swerve sharply to miss the kitty. That change in my behavior, stimulated by the kitten, shakes me up in ways that bringing my car to a halt at a red light does not. I’ve been more or less paying attention as I drive, but braking for the cat makes me aware of paying attention.
Being aware of where and when I’m paying attention marks a difference from the usual forms of attention in everyday life. Suddenly being aware of having to pay attention is stressful, in good ways (exhilaration, inspiration) and in bad ways (anxiety, anger). On a biological level and on a pedagogical level, we become conscious of things that we normally don’t think about. As cognitive linguist George Lakoff says, we can be “reflective about our reflexes.”8 Self-reflexiveness or self-awareness is not necessary in all situations, but it is a key aspect of all successful informal and formal learning.
In times of major, global changes such as our own, a lot of life’s incidents leave an indelible mark in the same way as slamming on the brakes to avoid the kitty, and for the same reason: They disrupt patterns that were laid down long ago. They unbundle neurons that have been firing together for a while. They start a new process of bundling, but until that process is successful—until enough firing and rewiring occur to become habitual—we will feel the stresses of the new. We will be aware that we need to pay attention in order to learn.
With new experiences, neurons that were not wired together by previous experience now have to fire urgently and independently. In the example of the straying kitty, there is also an emotional jolt to the system that I won’t forget for a long time. Those powerful moments of learning—ones that come accompanied by some trauma or thrill—are the ones that make a difference. This is one reason why, as we shall see, a number of educators are advocating game principles as a learning system. If learning is exciting and as instantaneously self-reinforcing as winning a new game challenge, which comes with its own emotional bells and whistles to signal our learning victory, we are much more likely to remember and to incorporate the experience of our own success into other aspects of our life. We not only learn the content but we learn the form of winning, making us more adaptable and receptive to change in the future.
THE UPSHOT OF ALL THIS is that disruption in all its forms has the same effect: It makes us realize that what we thought was natural is actually a learned behavior that has come to feel that way thanks to the biological consequences of repetition. And natural defines not merely behavior, but culture and environment as well. In the Cymbalta ad, many of the most skillful manipulations were playing to our cultural efficiencies, to the thoughts, feelings, and desires we take for granted. Thanks to how often these are reinforced, they of course have neurological underpinnings just as ingrained as those that allow us to walk without thinking about it. Any new experience disrupts them in small or large ways, from a first taste of Ethiopian food to learning to drive stick shift after a decade on automatic to learning how to navigate the Internet. You thought you had those neural pathways nicely sheared and shaped only to find them disrupted.
Our institutions—family, friends, churches, social organizations, schools, workplaces—reinforce biological patterns all the time, thereby shaping those biological processes on the most fundamental levels. Sometimes, in periods of great change, there is a mismatch between the patterns our institutions reinforce and the patterns we need to operate efficiently in the new situation we are facing. If we had been in a terrible car accident or were suffering from a degenerative disease, we’d be using physical rehabilitation and the expert advice of others to make our motor neurons (which connect the spinal cord to muscles) work as smoothly as possible, given impediments that no longer allow us to work as efficiently as we once did.
The same is true in times of tremendous change. That is when we need to unlearn the previous patterns because they are not serving us. That is when we need to unlearn old habits so we can begin to relearn how to learn again.
At the nexus of culture and biology we find the catastrophic neurological condition called Williams syndrome, which has much to tell us about both the necessity of neural shearing and the relativity of cultural norms that we often take for granted.9 It is now thought that this rare genetic disorder (technically, the absence of twenty-six genes from the seventh chromosome) results in a systemic aberration of the architecture of the cortex in which too few neural networks are cut away. A child with Williams syndrome is bombarded with too much information and has no efficient way to sort it all out.10
Children with Williams syndrome typically test very low on IQ tests, in the 40–50 range, and have very little working memory, lacking the ability to remember a sequence of simple operations required to perform such simple tasks as shoe tying or counting.11 Yet despite these inabilities, the disorder’s unique genetic makeup also bestows personality traits and aptitudes that we might find quite valuable. Preschoolers with Williams syndrome exhibit exceptional ability at facial recognition, a difficult cognitive task not mastered in most children with “normal” intelligence until the age of five or six. Williams syndrome children often also have a love of music, typically have perfect pitch, and tend to be oblivious to negative cultural cues, such as those for racial bias that adults passively (and often unknowingly) transmit to children. On tests for racism, Williams syndrome children often test virtually prejudice-free.
The single most immediately obvious characteristic of children with Williams syndrome is a tendency to have an abundant and precise vocabulary and exceptional storytelling abilities.12 Asked to name ten animals in a timed test, an average child might name such animals as cat or dog. The child with Williams syndrome might say ibex or newt or alpaca but might not grasp the simple processing constraint of stopping at ten. The child will therefore keep going, naming more and more animals until the tester stops him.
Things get even more interesting when we look at perceptions of Williams syndrome across cultures. In the United States, the diagnostic literature on Williams syndrome invariably defines a variety of personality traits thought to be characteristic of the disease. The words for these are almost entirely positive. Children with Williams syndrome are considered remarkably affable, inquisitive, charming, smiling, laughing, cheerful, affectionate, loving, and gregarious. In the West, that string of attributes is valued. The pleasing personality of the child with Williams syndrome is often considered a saving grace or blessing, some compensation for all the disabilities, varying in degree, of those born with this neurodevelopmental disorder. Children with Williams syndrome are sometimes described as “elfin,” both because of their characteristic appearance and their spritely personalities. American researchers are studying oxytocin levels in Williams children with the idea that perhaps there is something about those twenty-six deleted genes on chromosome 7 that contributes to elevated levels of this powerful neurotransmitter, which helps us regulate pleasure, maternal feelings, empathy, and other positive responses.
In Japan, however, gregariousness, intrusiveness into other people’s business, effusiveness, and affection in public or to casual acquaintances are fundamental offenses against the social fabric. Rather than being positive, these emotional and social traits rank as disabilities, as significant as neurological disorders in the catalogue of problems inherited by children with Williams syndrome. In Japan, they are considered continuous with the other mental and physical disabilities, and Williams syndrome children are not held up as a model for positive personality traits. They are pitied because of those same characteristics of personality. They are not studied for potential positive characteristics that might be used someday to help genetically engineer better human beings. They are far more likely than Western children with Williams syndrome to be institutionalized because of (what the Japanese perceive as) their antisocial nature.13
There is one last feature of brain biology we need to understand before we move on to schools, that formal place where our categories, concepts, patterns, and all our other habits of learning become reinforced in the most rigid way possible: through grades, rankings, evaluations, and tests. Everything we’ve seen about attention will be enacted—for good or ill—in the schoolroom.
The final principle of learning—and unlearning and relearning—we need to understand is mirror neurons. They were discovered in the 1980s and 1990s, and some people consider their discovery to be as important for neuroscience as sequencing the genome has been for genetics.
It happened in the lab of Giacomo Rizzolatti and his colleagues at the University of Parma in Italy. His lab didn’t set out to find mirror neurons, as no one really knew they existed. At the time, the Parma scientists were studying how neurons synchronize hand-eye coordination. They placed electrodes in the ventral premotor cortex of macaque monkeys to see how their neurons were firing when they were picking up food and then eating it.14 By doing so, the neurophysiologists were able to record the activity of single neurons when the monkeys were feeding themselves. That, alone, was news.
Then, one day, something really interesting happened. The Parma scientists began to notice that some neurons were firing in exactly the same pattern whether the monkey was picking up a piece of food and eating it or was watching a human or another monkey pick up a piece of food to eat. It didn’t matter whether the monkey was performing the activity or watching it: The neural response was the same.
Many years later, we are finding that humans, too, have mirror neurons. Mirror neurons respond in the exact same way when a person performs a certain action and when a person observes someone else performing that action. That is so startling and so counterintuitive that it bears restating: These specialized neurons mirror the person (or monkey) observed as if the observer himself were performing the action.
Not all neurons act this way, only a particular subset. But this discovery was far more revolutionary than anything the scientists had set out to find about the neural mechanism of hand-eye coordination. The Parma neuroscientists switched the hypothesis and the protocols of their experiments. Soon they were isolating and identifying a subset of neurons that they named mirror neurons. They argued that approximately 10 percent of the neurons in the monkey’s frontal cortex had these mirroring properties.
Since those early experiments, fMRIs have been used to study the human brain, and mirror neurons are being found in more and more areas. There are mirror neurons that register the sounds we hear others make, as well as visual mirror neurons. Recently, mirror neurons have also been located in the somatosensory areas of the brain associated with empathy.15 Throughout our life, our mirror neurons respond to and help us build upon what we see, hear, and experience from others around us. In turn, their mirror neurons respond in the same way to us.
Primatologist Frans de Waal likes to say that imitation doesn’t begin to comprehend the complex, mutually reinforcing interactivity of human teaching and learning.16 He notes that teaching requires that mirroring work both ways. The child watches the parent do something and tries it, and then the parent watches the child trying and reinforces what she’s doing right and corrects what the child is doing wrong: an intricate, empathic dance. Mirror neurons help to make the correction.
De Waal is highly skeptical of the claim, through the ages, that one or another distinctive feature “makes us humans” or “separates us from the animals.” We are animals after all. De Waal, in fact, believes that animals can do just about everything we think defines us as humans. However, he still thinks that there is something special, exceptional even, about the calibrated, interactive nature of teaching. The animals he studies feel, think, solve problems, and have all kinds of astonishing capabilities that humans don’t even approach—animals fly on their own power, navigate on and beneath the oceans without vessels, and migrate thousands of miles without instruments, and so on and so forth.
But teaching is a very complicated, interactive process. And it is possible, he suggests, that humans are among the only animals that actually teach not just by modeling behavior, but by watching and correcting in a complex, interactive, and empathetic way. De Waal claims that there is only one other very clear example in the nonhuman animal kingdom of actual teaching in this sense: a pod of killer whales off Argentina. He says this particular pod of killer whales actually trains its young in the dangerous practice of pursuing seals onto shore to eat them, managing this feat with barely enough time to ride the surf back out to safety again without being beached there to die. The older whales evaluate which of the young ones are good enough to do this, and they encourage this behavior relative to the young whales’ skills. They don’t just model but actually calibrate how they teach to those gifted students capable of learning this death-defying feat. These whales school their young the way humans do.
This particular ability to teach individually, to the best skills and abilities of the students, in a way that’s interactive and geared to the particular relationship of student and teacher, seems if not exclusive to humans then certainly one of our very special talents. De Waal doesn’t know of other animals able to calibrate empathy with instruction in this way. It is how we humans learn our world.17 In the classroom and at work, it is the optimal way to teach any new skill and to learn it. It is the way most suited to all aspects of our brain biology, the way most suited to what we know about the brain’s ways of paying attention.
Mirror neurons allow us to see what others see. They also allow us to see what we’re missing simply by mirroring people who see different things than we do. As we will find in the subsequent chapters on attention in the classroom and at work, that feature alone can be world-changing for any of us.
To live is to be in a constant state of adjustment. We can change by accident—because we have to, because life throws us a curveball. But we can also train ourselves to be aware of our own neural processing—repetition, selection, mirroring—and arrange our lives so we have the tools and the partners we need to help us to see what we might miss on our own. Especially in historical moments such as our own rapidly changing digital age, working with others who experience the world differently than we do and developing techniques for maintaining that kind of teamwork can help to take the natural processes of repetition, selection, and mirroring, and turn them to our advantage.
One guide to keep in mind—almost a mnemonic or memory device—is that when we feel distracted, something’s up. Distraction is really another word for saying that something is new, strange, or different. We should pay attention to that feeling. Distraction can help us pinpoint areas where we need to pay more attention, where there is a mismatch between our knee-jerk reactions and what is called for in the situation at hand. If we can think of distraction as an early warning signal, we can become aware of processes that are normally invisible to us. Becoming aware allows us to either attempt to change the situation or to change our behavior. In the end, distraction is one of the best tools for innovation we have at our disposal—for changing out of one pattern of attention and beginning the process of learning new patterns.
Without distraction, without being forced into an awareness of disruption and difference, we might not ever realize that we are paying attention in a certain way. We might think we’re simply experiencing all the world there is. We learn our patterns of attention so efficiently that we don’t even know they are patterns. We believe they are the world, not a limited pattern representing the part of the world that has been made meaningful to us at a given time. Only when we are disrupted by something different from our expectations do we become aware of the blind spots that we cannot see on our own.
MANY OF OUR ANXIETIES ABOUT how the new digital technologies of today are “damaging” our children are based on the old idea of neural development as fixed, or “hardwired,” and on notions of distraction and disruption as hindrances instead of opportunities for learning. Our fears about multitasking and media stacking are grounded in the idea that the brain progresses in a linear fashion, so we are accumulating more and more knowledge as we go along. Most of us, as parents or teachers or educational policy makers, have not yet absorbed the lessons of contemporary neuroscience: that the most important feature of the brain is Hebbian, in the sense that the laying down of patterns causes efficiencies that serve us only while they really are useful and efficient. When something comes along to interrupt our efficiency, we can make new patterns. We don’t simply grow by accumulating new patterns on top of the old; in many cases, new ones replace them. Slowly or rapidly, we make a new pattern when a situation requires it, and eventually it becomes automatic because the old pattern is superseded.
Pundits may be asking if the Internet is bad for our children’s mental development, but the better question is whether the form of learning and knowledge making we are instilling in our children is useful to their future. The Internet is here to stay. Are we teaching them in a way that will prepare them for a world of learning and for human relationships in which they interweave their interests into the vast, decentralized, yet entirely interconnected content online?
As we will see, the answer more often than not is no. We currently have a national education policy based on a style of learning—the standardized, machine-readable multiple-choice test—that reinforces a type of thinking and form of attention well suited to the industrial worker—a role that fewer and fewer of our kids will ever fill. It’s hard to imagine a pattern of learning further removed from what is required to search and browse credibly and creatively on the free-flowing Internet than this limited, constricted, standardized brand of knowledge.
If some pundits are convinced that kids today know nothing, it may well be because they know nothing about what kids today know. A young person born after 1985 came into a world organized by different principles of information gathering and knowledge searching than the one into which you were born if your birthday preceded that of the Internet. Their world is different from the one into which we were born, therefore they start shearing and shaping different neural pathways from the outset. We may not even be able to see their unique gifts and efficiencies because of our own.
When we say that we resent change, what we really mean is that we resent the changes that are difficult, that require hundreds or even thousands of repetitions before they feel automatic. Adults often feel nostalgic for the good ol’ days when we knew what we knew, when learning came easily; we often forget how frustrated we felt in calculus class or Advanced French, or when we played a new sport for the first time, or had to walk into a social situation where we didn’t know a soul, or interviewed for a job that, in our hearts, we knew was wrong for us. We also tend to forget that, if we did poorly in French, we stopped taking it, typically narrowing our world to those things where we had the greatest chance of success.
We humans tend to worry about the passing of what and who we once were, even though our memories, with distance, grow cloudy. When calculators were invented, people were concerned about the great mental losses that would occur because we no longer used slide rules. With programmable phones, people wonder if anyone will memorize phone numbers anymore. Both predictions have probably come true, but once we no longer think about the loss, the consequences stop seeming dire. And yes, that’s how it does work, on a practical level and also on a neural level. Unlearning and relearning, shearing and shaping.
All of us like to believe we are part of the 50 percent in any situation who see it all clearly, act rationally, and make choices by surveying all the options and deciding on the best course of action. But for most of us, it takes something startling to convince us that we aren’t seeing the whole picture. That is how attention works. Until we are distracted into seeing what we are missing, we literally cannot see it. We are, as my colleague the behavioral economist Dan Ariely has shown us, “predictably irrational.” Those who design direct-to-consumer ads take courses in psychology and business precisely in order to understand the ways most of us, most of the time, think. We believe we are rational, but in quite predictable patterns, we are not.18
We do not have to be stuck in our patterns. Learning happens in everything we do. Very little knowledge comes by “instinct.” By definition, instinct is that which is innate, invariable, unlearned, and fixed. Instinct applies to those things that cannot be changed even if we want to change them. So far as anyone can test, measure, or prove, instinct doesn’t account for much in humans. Biologists unanimously define as “instinctive” only a few very basic reflexive responses to stimuli. One of these, known as the Babinski reflex, is an involuntary fanning of the newborn’s toes when her foot is stroked, a primitive reflex that disappears between twelve and eighteen months of age.19
Except for this very specific reflex, babies come into the world having learned nothing by instinct and eager to pay attention to everything. We focus their attention and, in that process, also focus their values and deepest ways of knowing the world. The world we want them to know and explore should be as expansive and creative as possible.
As we move from the world of the infant to the world of formal education, the big questions we will be asking are: What values does formal education regulate? What forms of attention does formal education systematize? How valuable are both in the contemporary world? And, most important, are the educational areas on which we’re placing our attention a good match with the world for which we should be preparing our children?