Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn - Cathy N. Davidson (2011)

Conclusion

Now You See It

Social networking pioneer Howard Rheingold begins his digital journalism course each year at Stanford with a participatory experiment. Shut off your cell phone, he tells his students. Shut your laptop. Now, shut your eyes. Then, the visionary author of Smart Mobs, who penned his first book, Virtual Reality, back in the medieval era (i.e., 1992), spends the next five full minutes in silence with his students, in the darkness behind their own eyelids, completely unplugged.1

Why? Rheingold wants his students to experience the cacophony of their own silence. We are so distracted by our feelings of distraction in the digital age that it’s easy to forget that, with nothing commanding our attention at all, with no e-mail bombarding us and no Tweets pouring in and not a single Facebook status to update, our brain is still a very busy and chaotic place. The brain never was an empty vessel, but it’s hard for us to appreciate all that’s going on in there when we’re not paying attention to anything else but our own attention.

Rheingold prepares his students for their five-minute self-guided tour through their own minds with a few words of advice. He’s discovered that many of his students have never been in this space before, alone with their own mental processes, charged with tracking their own attention. To help them focus their introspection, he advises them to pay attention not only to what they are thinking but also to how. He asks them to chart their mind’s way, to notice how one thought connects to another, to observe how their own mind operates when it is not being interrupted by e-mail, text messages, cell phones, or anything else. He asks them to note their patterns of attention and inattention. He has them keep track of how long they can stay on one, linear, single path. He challenges them to stay focused.

After they open their eyes again, he asks his students to report on what they observed. Typically, they express shock at how incapable they were of staying on a single track for even a few seconds, let alone for five interminable minutes. There’s a constant bombardment, one idea or sensation or memory or impulse or desire tumbling into another, with each distraction short-circuiting the next and each interruption interrupted. They find it almost impossible to narrate what they observed in their own unquiet minds. They stumble as they try to reconstruct the sequence of their own mental processing. Five minutes with no interruptions, yet what captured their attention in that time was as fleeting and as difficult to recall as the figments of dreams.

WE WORRY ABOUT WHAT IT means to be surrounded by a ceaseless assault of information, the constant distractions of our digital age. “Drop that BlackBerry! Multitasking may be harmful” shouts a recent headline on CNN Health.com, reporting on a study of attention problems among multitaskers.2 What the Rheingold experiment demonstrates with its simple eloquence is that to be human is to be distractible. We are always interrupting ourselves. That’s what those 100 billion neurons do. Until we die, they are operating. They are never still. They are never off. Our brain is always wired—even when we think we’re unplugged.

The lesson of attention is that, even when we are sitting in the dark behind our eyelids, unfocused, our mind wanders anywhere and everywhere it wants to go. It might notice the itchy nose, the annoying buzzing sound, a truck rumbling by outside, or the cricket noises generated inside our head by our head. This is the world of distraction that our powers of attention shut out from view most of the time; that diverting other world is nonetheless always there. We’re simply blind to it. More than that, our brain expends an inordinate amount of energy at any given time working so that we do not notice our own inner distractions. It’s selecting what to attend to, and that selection requires work. We’ve been doing it so long, as part of the process of learning to pay attention, that we aren’t even aware of it until someone or something makes us aware. When we’re not distracted by the normal world that occupies us, there is suddenly plenty to pay attention to that we never noticed before. Simply staying on track with any thought, unwired and eyes closed, requires selection and rejection.

It also requires narrative. When Howard Rheingold’s students try to describe what they were thinking, they inevitably turn it into some kind of story, with a recognizable beginning and ending—that is, with clear categories. Like the Cymbalta ad, we use our mind’s main story line to screen out the upsetting side effects. Like the infant learning what counts in his world, we have to master our culture’s main plot, and we unconsciously repeat and reinforce it, over and over, to make sense of our lives.

For well over a hundred years, our culture’s story line has been about focused, measurable productivity. Productivity has been defined by artificial and manufactured separations dividing the world of work from the rest of life. Within the workforce, twentieth-century productivity has been based on a philosophy of the division of labor—into hierarchies, job descriptions, responsibilities, and tasks. Badging in, punching a time clock, workdays and weekends, lunch hours and coffee breaks, have all been prescribed, sorting our day into the “on” times for our attentive labor and the “off” times for our leisure. School has been organized to prepare us for this world of work, dividing one age group from another, one subject from another, with grades demarcating who is or is not academically gifted, and with hierarchies of what knowledge is or is not important (science on top, arts on the bottom, physical education and shop gone). Intellectual work, in general, is divided up too, with facts separated from interpretation, logic from imagination, rationality from creativity, and knowledge from entertainment. In the end, there are no clear boundaries separating any of these things from one another, but we’ve arranged our institutions to make it all seem as discrete, fixed, and hierarchical as possible. And though categories are necessary and useful to sort through what would otherwise be chaos, we run into trouble when we start to forget that categories are arbitrary. They define what we want them to, not the other way around.

We’ve sorted our life cycles in a similar way, with developmental theories of how children learn and geriatric theories of how we begin to forget. If there is any word that defines the twentieth century, it might be normative: a defining and enforcing of standards of what counts as correct. We’ve divided the “norm” from the non-normal, we’ve created tests to measure where we are on that scale, and we have elaborated forms of statistical analysis rooted in a theory that the mean is good and that it’s important to measure how far we do or do not deviate from the mean.

The old idea of the brain mirrored, supported, and justified a certain ideal of human labor, too. Attention is, definitionally, the concentration of the mind on a single thought, object, or task. In the Taylorized model of attention, it is assumed that if we can control the amount of external stimuli we experience, we can become more attentive and productive. If we can minimize interruption, control the task, narrow the focus, eliminate input from the other senses, and blot out disruption, then our attention will be undivided and our thoughts will flow, one useful idea leading to the next, as neatly and efficiently as on any assembly line Taylor ever conceived.

Except that that’s not how the mind works. As Howard Rheingold’s experiment shows us, the mind is messy if left to its own devices. Try it yourself and you’ll see: Shut off your cell phone, shut your laptop, shut your eyes. Even the most frustratingly disruptive workplace multitasking is sane compared to all the activity among all those neurons at any one given time, each making jumbled and associational connections that you would not, in your logical narrative, ever have devised.

If the brain were focused by nature, it would be easy not to be distracted when we had no external distractions. But it turns out that staying on track when there is nothing capturing our attention is intrinsically difficult to do. Neurologist Marcus Raichle’s research at Washington University has recently determined that an astonishing 80 percent of our neural energy is taken up not by external distractions at all but by the mind talking to itself.3 Raichle hooked up neuroimaging machines to dozens of volunteers and found that the brain lights up the screen in all kinds of patterns whether a person is actually doing a task or seemingly at rest. Quiet, uninterrupted thinking turns out to be quite active neurologically, even when, later, the volunteer insists that he wasn’t really thinking about anything. Brain chatter is so much a feature of our brains that we are rarely aware of it at the time or subsequently—unless we have psychologists to tell us that our brain was doing something when we were sure it wasn’t.

Raichle has found that more of the areas of the brain light up when the volunteer is daydreaming than when the same person is engaged in a concentrated task. Remote parts of the brain are “talking” to one another in those down times, and the entire brain is about twenty times more active than when it’s being stimulated from outside. Switching from one specific task to another also turns out to be energy efficient. It uses less than 5 percent of the brain’s normal “at rest” energy. We use less brain energy multitasking than we use “sitting there at rest, thinking random thoughts.”4

The brain is not built to take one task from start to completion without interruption, then the next task, and then the next one. Our schools and our offices may be built that way, but the brain loves its back channels and is constantly seeking ways to rebel against the order we try to impose upon it. Malia Mason, working first at the Neurocognition Lab at Harvard Medical School and then at Columbia University’s School of Business, has begun to pinpoint how various cortical areas generate daydreams. She has suggested that mind-wandering, not focus, may turn out to be good for the brain and beneficial for the ideas it generates.5 Connecting one part of the brain to another, all the time, is as crucial to brain health as aerobic activity is to the body. Mason’s team has been finding that even when we’re engaged in task-specific mental processing, parts of the brain we once thought were irrelevant to the specific task light up on fMRIs. Areas of the brain that are distant from one another are working that back channel, so even when we think we are focused, there’s another conversation going on among the medial prefrontal cortex (primarily responsible for so-called executive functions), premotor cortex (coordination of body movement), and cingulate gyrus (limbic system and, by extension, memory and learning). Mason’s preliminary theory is that the brain performs cognitive cross-training. Those interconnected neural training activities help us minimize our attention blindness, assist in multitasking, and give us a boost as we practice collaboration by difference. Taken together, they are what prepare us for unlearning one set of habits so we can learn something new.6


Our era may be obsessed with attention, distraction, and multitasking, but I’m convinced that these age-old concerns always emerge with new force whenever a major new technology makes us aware of habits, patterns, and processes that had been invisible before. I’m not convinced that our own era is intrinsically any more distracting than any other time. Because we’re in a transitional moment, in the midst of tremendous social and technological change, we’re experiencing the tension between the old and the new, a tension that exposes us to our own patterns of attention, patterns that are usually hidden behind closed eyelids.

The present conversation about multitasking raises implicit questions: Was there ever such a thing as monotasking? Were we ever good at it? Or did we just come to think we were decent monotaskers after a hundred years of school and workplaces reinforcing the ideal of focused attention to a specific task? Are we experiencing something different than we did before? And a further question: Are we all experiencing the same thing? To return to an issue we’ve addressed in many ways in this book, how great are the differences among us at experiencing a phenomenon like multitasking? How great are our own differences from one task to another? Do we experience attention, distraction, and multitasking in different ways in different circumstances?

The answer is “all of the above”—more or less. Yes, information overload and the extension of the workday are real. As a society, we have not yet grappled with or conquered (and eliminated) the eighty-hour workweek, and clearly we must. That kind of work stress is bad for our health and is unproductive in the long run. Similarly, some of the extra time we’re spending at work and some of the tension we’re feeling happen because we have not yet routinized certain aspects of the new technologies in our lives, such as developing accepted e-mail protocols that can make lower-order decisions for us. It’s like etiquette. There is no intrinsic reason why you use one fork for one course and a different one for another, but having a set of rules means you don’t have to rethink the issue every time you’re presented with a salad. Some of our moment’s agony over multitasking is that we haven’t yet figured out which digital fork to use at which times. The Internet is still in its adolescence. We will work out those rules. But until we do, we can expect to feel taxed.

The upshot, though, is that there is nothing intrinsically, biologically harmful about multitasking. The brain science is there, but we really haven’t gotten our minds around that point yet. Because of the new information about the neurology of attention, a few psychologists have taken out the Taylorist stopwatch again, this time to measure not task completion but multitasking and other new attention issues. Psychologist Jonathan Schooler of the University of California–Santa Barbara asks volunteers to push a button anytime their minds wander while they are reading from lengthy books, such as War and Peace.7 Even when they know their attention is being tested, volunteers report their focus deviating from the fates of Prince Andrei and other members of the Bolkonsky family about 25 percent of the time.8 Schooler is currently researching what he calls meta-awareness, the gap between how much our mind wanders and how much we think it does. Our widespread worries about multitasking create the problem; they don’t just name it. It’s the gorilla experiment again: Focus only on the horrors of multitasking and that’s all you see.

It’s interesting that even people who worry about the harmfulness of multitasking often rate themselves as being quite productive, despite being required to multitask. On the very same tests, though, they worry about the distractions everyone else has to encounter in the workplace. This is hardly surprising, given that we always have difficulty assessing ourselves. On the other hand, when these tests are set up to reveal to the subjects that they themselves are distracted, they then tend to overestimate the negative effect of the disruption, exaggerating both its impact and duration.9

Another recent study, this one led by sociologist Clifford Nass, whose office is down the hall from Howard Rheingold’s at Stanford, made headlines by pronouncing that even die-hard multitaskers are worse at multitasking than those who concentrate on only one thing at a time. That was the gleeful sound bite (“See!”) that hit the media. But the study itself has a very interesting conclusion that leaves the door open on that exact question. The authors conclude that “heavy media multitaskers . . . performed worse on a test of task-switching ability, likely due to reduced ability to filter out interference from the irrelevant task set.” Translated from psychologese to English, that means that the multitaskers transferred some knowledge from one task (the task the testers deemed irrelevant) to the other task they were performing. Does that make them bad multitaskers—or good ones? Don’t we want to be applying information from one situation to another? Perhaps the “reduced ability to filter out interference” is actually a new mashup style of mental blending that helps one be a good multitasker. Outside a testing situation, in real life, it is precisely your susceptibility to distraction that makes you inclined to switch to another task and then, after a time, to switch back or to another one. But what if in all that switching we’re not just forgetting but we are creating a new, blended cognitive map of what we’re learning, task to task, from the process? In the end, isn’t remixing what all learning should be?

Especially with a computer screen, that’s certainly what surfing is. You do the switching. In a psychology test, you don’t have volition. Something new flashes to distract your attention, then something else does. But what relationship does that kind of structured, experimental distraction have to the way we operate, now, on the Internet? The choice of when to stay on one page or when to click on an enticing URL and go elsewhere is exactly what defines the adept surfer. The experimenters acknowledge (not in the headline but in the fine print of their scientific study) that they were testing young multitaskers using an old-school test for what is known in the field of psychology as “stimulus-dependent attention,” or interruption from outside. But multitasking, as we know it in our offices or even in an evening’s leisurely Web surfing, is more various than that. Sometimes attention is derailed from the project at hand by an incoming text message, but other times we’re just following our curiosity where it takes us, and at still other times, we are talking and texting together as a practice and a skill, as we saw with the adept back-channeling IBMers on their multinational conference calls. The IBMers’ “reduced ability to filter out interference from the irrelevant task” is what allows them to text-chat while they are listening and speaking. Successful multitaskers know how to carry over information across tasks when they need to do so. More to the point, multitasking takes many forms, in many situations, and operates differently for each of us, depending on a multitude of factors. The headline-grabbing “multitaskers aren’t even good at multitasking” is a nonsensical statement when you break it down and try to think about it seriously.

The authors of the Stanford study suspect as much and worry that they might be using old tools and concepts to test something new, a new way of interacting in the world. They continue: “It remains possible that future tests of higher-order cognition will uncover benefits, other than cognitive control, of heavy media multitasking, or will uncover skills specifically exhibited by [multitaskers] not involving cognitive control.”10 The media didn’t pick up that there were possible alternative ways of viewing the test results, because the media were blinded by their old view of attention.

I’m suggesting that the most important finding from this test is not that multitaskers are paying attention worse, but that they are paying attention differently. If frequent multitaskers have difficulty ignoring information from other tasks, is that information really irrelevant, or might they be creating a different, interconnected cognitive map, a mashup or synthesis from some trace of all of the tasks at hand? And could this mashup be a new way of paying attention, one very different from the cognitive sequencing of discrete tasks we tend, still, to label “productivity”?

If what we are seeing isn’t discrete switching from one task to another but a form of attention that merges and remixes different strands of information almost seamlessly, then one ally we have in this new form of attention is the brain’s own associational, interconnecting energies. We are only just now beginning to see experiments designed to test attention in this creative, self-generating, multifaceted way. It is as if we thought we could evaluate running ability by seeing how well we do on a treadmill, using uniform metrics and standardized scales. We haven’t yet come up with ways to measure running and winning when the test is more like the parkour freerunning path over the tops of buildings and up walls, where the key to winning is also knowing how to tuck and roll so you can pick yourself up when you fall.

The new brain science helps us to re-ask the old questions about attention in new ways. What are we learning by being open to multitasking? What new muscles are we exercising? What new neural pathways are we shaping, what old ones are we shearing, what new unexpected patterns are we creating? And how can we help one another out by collaborative multitasking?

We know that in dreams, as in virtual worlds and digital spaces, physical and even traditional linear narrative rules do not apply. It is possible that, during boundless, wandering thinking, we open ourselves to possibilities for innovative solutions that, in more focused thinking, we might prematurely preclude as unrealistic. The Latin root of “inspiration” is inspirare, to breathe or blow into. What if we thought of new digital ways of thinking not as multitasking but as multi-inspiring, a potentially creative disruption of usual thought patterns? Look at the account of just about any enormous intellectual breakthrough and you’ll find that some seemingly random connection, some associational side thought, some distraction preceded the revelation. Distraction, we may discover, is as central to innovation as, say, an apple falling on Newton’s head.

Two neuroscientists at Cambridge University, Alexa M. Morcom and Paul C. Fletcher, have already argued along these lines to suggest that it is time to get rid of the metaphor of a baseline of focused attention from which the mind is diverted. By examining neural processing, they show that the brain does not perform one simple maneuver over and over and over again. Even if repetition is what your job or your teacher requires, your brain will often find playful ways of diverting itself. The brain is almost like a little kid figuring out a new way to walk the boring old route to school: Let’s walk the whole way on the curb this time! Let’s not step on a single crack in the sidewalk! Let’s try it backward! Why not walk the same old route in the same old way every day? It’s more efficient that way, but sometimes you want to mix it up a little. The brain is inquisitive by design. It is constantly and productively self-exploratory, especially at times when one’s experiences are rich and new.11

Morcom and Fletcher conclude that, on the level of neural networks, there is no such thing as monotasking. They argue that there is no unique function that happens in any one separate part of a brain, no process that is singular in its focus, and no form of attention that is seamless or constant. Apparently, neurons just can’t abide being bored. The mind always wanders off task because the mind’s task is to wander.

Mind-wandering might turn out to be exactly what we need to encourage more of in order to accomplish the best work in a global, multimedia digital age. In the twenty-first-century workplace dominated increasingly by the so-called creative class, it is not self-evident that focused, undeviating, undistracted attention really produces better, more innovative solutions. More and more, researchers are taking what’s been called a nonsense approach or even a Dadaist approach to attention—you know, the playful artistic toying with reality evidenced in the fur-lined teacup, a urinal displayed as art and labeled “Fountain,” or an image of a pipe with a label “This is not a pipe.” What confuses the brain delights the brain. What confounds the brain enlivens the brain. What mixes up categories energizes the brain. Or to sum it all up, as we have seen, what surprises the brain is what allows for learning. Incongruity, disruption, and disorientation may well turn out to be the most inspiring, creative, and productive forces one can add to the workplace.

NEUROSCIENTIST DANIEL LEVITIN HAS SAID recently that “surprise is an adaptive strategy for the brain, indicating a gap in our knowledge of the world. Things are surprising only if we failed to predict them. Surprise gives us an opportunity to improve our brain’s predictive system.”12 He’s right on all counts. What I am suggesting is that our Taylorist history and traditions make us nervous about what we are missing in an era when work is now structurally changing, when surprise is inevitable.

As with all neural shaping, there’s shearing too, so it is quite possible that because we are paying attention in a new way, some of our old ways of paying attention will be lost. The issue is, which of those matter? If they matter enough, what institutional structures and personal practices can we cultivate to make sure they survive? Some people worry that this “younger generation” will never read long novels again, but then I think about all those kids playing video games in line at midnight, waiting for the local bookstore to open so they can buy the latest installment of Harry Potter.

We need to sort out value from nostalgia, separating the time-honored tradition of one generation castigating the next from real concerns about what might be lost or gained for humanity as technologies change. If some younger people aren’t bothered by multitasking, it may well be simply a matter of practice. The more you do something, the more automatic it becomes. Once automatic, it no longer seems like a task, but a platform you can build upon. But there is tremendous individual variation in our ability to do this. Most people would not consider driving a car while listening to the radio to be multitasking, but it certainly was when Motorola put the first radio in a dashboard in 1930. Media expert Linda Stone coined the phrase “continuous partial attention” to describe the way we surf, looking in multiple directions at once, rather than being fully absorbed in one task only.13 Rather than think of continuous partial attention as a problem or a lack, we may need to reconsider it as a digital survival skill.

In most of life, our attention is continuous and partial until we’re so forcefully grabbed by something that we shut out everything else. Those blissful episodes of concentrated, undistracted, continuous absorption are delicious—and dangerous. That’s when we miss the gorilla—and everything else. The lesson of attention blindness is that sole, concentrated, direct, centralized attention to one task—the ideal of twentieth-century productivity—is efficient for the task on which you are concentrating but it shuts out other important things we also need to be seeing.

In our global, diverse, interactive world, where everything seems to have another side, continuous partial attention may not only be a condition of life but a useful tool for navigating a complex world. Especially if we can compensate for our own partial attention by teaming with others who see what we miss, we have a chance to succeed and the possibility of seeing the other side—and then the other side of that other side. There’s a Japanese proverb that I love. It translates, literally, as “The reverse side itself also has a reverse side” (“Monogoto niwa taitei ura no ura ga aru mono da”). In a diverse, interconnected, and changing world, being distracted from our business as usual may not be a downside but the key to success.


This past November I visited South Korea for the first time. I was a guest of KAIST,14 Korea’s equivalent of MIT, keynoting a conference on Digital Youth organized by the Brain Science Research Center. The other organizers of the conference were mostly from the air forces and aerospace industries of the United States and countries around the Pacific Rim.

It was a fascinating event, and I will never forget one exchange at the opening dinner when a senior flight surgeon in the U.S. Air Force sparred with his equivalent in the Korean Air Force. “So Korean medicine still believes in that hocus pocus about meridians?” the U.S. medical officer chided about the traditional Chinese medicine theory that the energy of the body flows through channels or meridians. “And you still believe in ‘right brain’ and ‘left brain’ divisions?” shot back the Korean flight surgeon, who has degrees in both Chinese and Western medicine. After some seriously self-deprecating chuckling between these two brilliant men, the Korean doctor added, “Sometimes excellent research can happen, even when the principles themselves may be simplistic.” The rest of the conference was like that, with many conversations inspired by orchestrated collisions of different principles.

After a few days in Daejeon, where KAIST is located, we were then bused the three hours up to Seoul, entering like Dorothy into the Emerald City that is contemporary Seoul, dwarfed by the gleaming towering glass-and-steel skyscrapers. We spent an afternoon at the futuristic Samsung D’Lite interactive showcase and then held a “Bar Camp” (an unconference spontaneously organized by the participants) at a private university owned by Samsung, whose motto is “Beyond Learning.” Seoul was more than living up to its reputation as the “bandwidth capital of the world.” I felt that I was in the dynamic heart of the digital future. Orderly. Scientific. Innovative. New.

By accident, I discovered there was another Seoul coexisting with the gleaming digital city. I would never have seen it, were it not for the helpful concierge at the Seoul Hilton. I’d spent four days with the aerospace scientists and then three touring Seoul with multimedia artists, and as I was packing for home, it became clear that all the beautiful books and catalogs, plus the goodies and the souvenirs I’d bought, were not going to fit into my suitcase. It was around eleven P.M. when I made my way to the concierge desk and asked if there would be any place where I could purchase a suitcase before my bus left for the airport first thing in the morning.

“Sure! Go to the night market,” the young woman offered brightly.

She told me to walk out of the Hilton, past the gigantic glitzy “foreigners’ casino” (Koreans are not allowed entry), and toward Seoul Central Station, the main subway and train station in the city. I had been in Seoul Station several times and had walked this ultramodern urban street, with its multiple lanes, its overpasses and underpasses, a dozen times. She said about a block before the station I’d find a Mini Stop, the Korean version of 7-Eleven. That’s where I would find a narrow alley leading down the hill behind the store. If I looked closely, I’d see light coming from the alley. That was Dongdaemun, the night market.

I knew that no concierge at the Hilton would be sending an American woman alone into a dark alley if it weren’t safe. And from my googling before the trip, I knew Korea had virtually no unemployment and one of the lowest crime rates on earth. So I didn’t hesitate to go out on an adventure. I had money in my jeans pocket, my room key, and nothing else—no identification, as I didn’t have my wallet or my passport. I was on my own without even a phrase book on my final night in the bandwidth capital of the world.

I walked down the familiar thoroughfare outside the Hilton, paused at the Mini Stop, saw the dark alley winding down the hill, and yes, I could see some light coming from there, somewhere. No one else was on the street. I looked left, right, then took my first step into the alley.

In fairy tales, this is where you see the ogre. In horror movies, here’s where girl meets knife blade.

I started down the hill. I saw the glowing light ahead. I turned. Suddenly, everything changed. It was unreal. I felt as though I’d stumbled onto the movie set of a bustling nineteenth-century Korean bazaar. Removed from the glistening glass skyscrapers on the highway, I was now walking amid a warren of stalls and stores and food wagons and fortune tellers and palm readers and Go players and barkers hawking fish and fish products, vegetables fresh and cured, tofu, pickles, sweets, and kimchi in all colors and textures. At some stalls, elderly men squatted on little stools, their heads tipped over their spicy noodle bowls, the steam swirling into the night air. Other stalls sold silks or leather goods, Samsung smartphones or knockoff Dior bags, stickers and pens and purses featuring Choo-Choo, Korea’s slightly demented feline answer to Japan’s Hello Kitty. There were electronics, plumbing fixtures, electrical parts, and yes, luggage, too, and all manner of men and women, young and old, all beneath the red awnings, carved signs, and dangling lights of the Dongdaemun night market, a different swath of life through the back alleys of the most modern, wired urban landscape imaginable.

I found my suitcase within two minutes and strolled for another hour, wheeling it behind me, filling it with new treats as I stopped at stall after stall. It seemed impossible that I had walked within twenty yards of this scene every night for three nights and hadn’t even suspected it was here. Dongdaemun existed alongside the high-tech digital Seoul. They were parallel worlds, although, no doubt, neither would have existed in quite the same way without the other.

After a while, I reluctantly reversed my steps, locating the exact alley through which I’d entered, across from the luggage store. Those fairy tales were in my head again as I crossed the threshold back into the ultramodern Seoul I had come to see, and now saw differently. I would finish packing. In the morning I would take the airport bus, and then, on the long flight home, I’d sit among people from all over the world, people like and not like me at all.

My world had changed. I’d glimpsed a new city, front side and back. In a guidebook on the plane home, I read that Dongdaemun was founded in 1905. Its name originally meant “Market for Learning.”

Whatever you see means there is something you do not see. And then you are startled or distracted, lost or simply out for an adventure, and you see something else. If you are lucky, everything changes, in a good way.

But the key factor here is that “everything changes” has more to do with the way you see than with what exists. The gorilla and the basketball players coexist in the same space. So do the physical world and the World Wide Web. And so does the twentieth century’s legacy in the twenty-first.

To believe that the new totally and positively puts an end to the old is a mistaken idea that gets us nowhere, neither out of old habits nor into new ones better suited to the demands of what has changed. John Seely Brown calls the apocalyptic view of change endism. Endism overstates what is gone. It blinds us to the fact that worlds not only exist in parallel all the time but that, if we learn how, we can dip in and out of one stream and enter another and enjoy the benefits and reminders of each as we are in the other. Like those Stanford multitaskers, we carry the residue of one task into another, but instead of that being irrelevant or a problem, it is exactly what makes navigating complex new worlds possible. We can see the periphery as well as the center, especially if we work to remain connected to those who see the world in a different way than we do. Without that collaboration of different minds, we can insist on gorillas or basketballs and never understand that they, like everything else, are all part of the same experiment. So are we. We just need to be linked to one another to be able to see it all. When I talk to my students about the way we select the worlds we see in our everyday life, they often ask how they can possibly change the way they see. It’s easy, I always answer. I’ll assign you the task of seeing differently. And you will. That’s what learning is.

IF I WERE TO DISTILL one simple lesson from all the science and all the stories in this book, it would be that with the right practice and the right tools, we can begin to see what we’ve been missing. With the right tools and the right people to share them with, we have new options.

From infancy on, we are learning what to pay attention to, what to value, what is important, what counts. Whether on the largest level of our institutions or the most immediate level of concentrating on the task before us, whether in the classroom or at work or in our sense of ourselves as human beings, what we value and what we pay attention to can blind us to everything else we could be seeing. The fact that we don’t see it doesn’t mean it’s not there.

Why is this important? Because sometimes we can make ourselves miserable seeing only what we think we are supposed to see. When there is a difference between expectation and reality, we feel like failures. In one generation, our world has changed radically. Our habits and practices have been transformed seemingly overnight. But our key institutions of school and work have not kept up. We’re often in a position of judging our new lives by old standards. We can feel loss and feel as if we are lost, failed, living in a condition of deficit.

The changes of the digital age are not going to go away, and they are not going to slow down in the future. They’ll accelerate. It’s time to reconsider the traditional standards and expectations handed down to us from the linear, assembly-line arrangements of the industrial age and to think about better ways to structure and to measure our interactive digital lives.

By retraining our focus, we can learn to take advantage of a contemporary world that may, at first, seem distracting or unproductive. By working with others who do not share our values, skills, and point of view, change becomes not only possible but productive—and sometimes, if we’re lucky, exhilarating.

The brain is constantly learning, unlearning, and relearning, and its possibilities are as open as our attitudes toward them. Right now, our classrooms and workplaces are structured for success in the last century, not this one. We can change that. By maximizing opportunities for collaboration, by rethinking everything from our approach to work to how we measure progress, we can begin to see the things we’ve been missing and catch hold of what’s passing us by.

If you change the context, if you change the questions you ask, if you change the structure, the test, and the task, then you stop gazing one way and begin to look in a different way and in a different direction. You know what happens next:

Now you see it.