Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn - Cathy N. Davidson (2011)
Part IV. The Brain You Change Yourself
Chapter 8. You, Too, Can Program Your VCR (and Probably Should)
Instead of answering my question, the physical therapist at the Duke Sports Medicine Center wrote a series of numbers on the whiteboard and then drew a line down the middle, forming two columns. It looked something like this:
I was only halfway through physical rehab at that time. I’d spent three months after surgery trying to get my right arm and hand functioning again, following one of those freak accidents that changes your life in a matter of minutes. My husband, Ken, and I were visiting our friend Diego at a summer place he was renting on the incomparably beautiful island of Capri, off Naples, Italy. We decided to go swimming in the famous opalescent waters near the Blue Grotto one day. Beaches on Capri aren’t long stretches of sand. They are flat places on stern cliffs that jut from the Mediterranean. Diego and Ken had already cannonballed into the water. They bobbed there, smiling, waving me in. It wasn’t a long jump but I got cold feet and decided to enter via the ladder embedded in the cliff face.
Apparently my foot slipped. I don’t remember the interval between touching one toe to the cool metal rung and, sometime later, being untangled and lifted to safety by a stranger. I remember Diego and Ken swimming toward me, a look of horror having replaced the earlier smiles. It was my good fortune that the Italian gentleman waiting to take the ladder into the water after me happened to be a doctor. Apparently, when my foot slid, instead of falling into the water, I got my arm tangled in two directions, between the top rungs and also between the cliff face and the ladder. When the doctor described what he’d witnessed, he made an abrupt gesture, like someone killing a chicken by wringing its neck.
On the advice of the doctor, Ken and Diego bundled me into an island taxi and rushed me to the tiny emergency room on Capri. I was obviously not the first person to arrive after a collision with the rock face. The colossal nurse who presided there looked like someone out of a Fellini film, but I was soon happy for her size and her dominatrix disposition. Even before the X-rays came back, my arm was starting to swell, and she said it was best to fix a dislocation immediately or it would be weeks before I’d have another possibility to set it in its socket. Nerve damage could result. With one expert chop to my shoulder, she re-placed my arm. With similarly precise power moves, she restored first my elbow and then my thumb. I don’t actually remember any of these, for I seem to have blacked out on impact each time.
The X-rays showed that nothing was broken, but we all could see I was in bad shape. The nurse put my arm in a sling, supplied me with meds, and sent me on my way. She didn’t even take my name, because in Italy all health care is free, even for tourists. We took the ferry back to Naples, and I pretty much stayed drugged during the flights to Rome, then to Dallas, then to Raleigh-Durham, where I went to the emergency room at Duke and eventually they figured out what to do with me.
Claude T. Moorman III, an athletic man who lettered in football for Duke, was the brilliant surgeon into whose hands I was delivered. The complexity, extent, and unsystematic irrationality of my damage resembled that of an athlete leaping high to catch a football only to have his legs tackled out from under him, his arm entangled in tons of human flesh. I didn’t look mangled the way a crash victim with similar damage would, but basically nothing was working; a lot of tendons and muscles and nerves had been severed. A good deal of the surgery had to be improvised. X-rays of my postsurgical arm look alarming or comical, depending on your sense of humor, with titanium gizmos that look as though they’d come from Home Depot. I’ve had follow-up tests in the seven years since the accident, and apparently I’m the only person left, anywhere, with a cup hook holding up her biceps.
Besides actual screws, what was drilled into me in the weeks after the surgery by Dr. Moorman and everyone else was the importance of physical therapy. I was fortunate again to be at a place like Duke, with its splendid medical facility and athletics, and was soon passed on to the wise and patient Robert Bruzga and his staff at Duke Sports Medicine, for what would turn out to be six months of mind-numbingly boring exercises and surprising glimpses into the human psyche in multiple forms, including my own. Bob and his staff were great about giving me small challenges to meet—almost like the boss-level challenges in game mechanics. As a teacher and a researcher on the science of attention, I was fascinated by how creative they were at coming up with new goals, new modes of testing. Although progress was glacial, the methods kept me mentally alert, focused and refocused, and physically reaching—another lesson useful for both the classroom and the workplace.
The day I could move my right arm one inch forward from plumb position was a triumph. That took about a month.
Many factors conspired toward my recovery. First, I was given a tip by one of my most inspiring rehab buddies at the center, a man I knew only as Bill, a seventy-three-year-old runner who was getting his knee back in shape, after surgery, for an upcoming triathlon. He told me that Duke athletes who were rehabbing injuries or working on skills tended to arrive between four and five o’clock. I started coming to rehab at three, would work for an hour, and then get a second boost of energy from watching these young men and women work at a level of intensity that both put me to shame and inspired me.
On some afternoons, a young star on our basketball team, a first-year student, would also be there working with Bob Bruzga and others to improve his jump by trying to snatch a small flag dangling above his head from a very long stick. As the student doggedly repeated the exercise over and over, a team of professionals analyzed his jump to make recommendations about his performance, but also recalibrated the height of the flag to his most recent successes or failures, putting it just out of reach one time, within the grasp of his fingertips the next, and then, as his back was turned, just a little too high to grasp. It is a method I’d seen used to train competition-level working dogs, a marvelous psychological dance of reward and challenge, adjusted by the trainer to the trainee’s desires, energy, success, or frustration, and all designed to enhance the trainee’s ability to conceptualize achievement just beyond his grasp. It is also a time-honored method used by many great teachers. Set the bar too high and it is frustratingly counterproductive; set it too low and dulled expectations lead to underachievement. Dogged is the right word. That kid just would not give up. I would watch him and try again to inch my arm forward.
Another inspiration came from reading the work of Vilayanur S. Ramachandran, one of the most inventive neuroscientists of our age, who also supervises actual physical rehab work with patients, putting his research to immediate and transformative use. One of his most famous “cures” is for phantom limbs. A strange phenomenon no one has really figured out, phantom limbs are experienced as real by the amputee, as if they’d never been severed at all. Sometimes they cause pain, sometimes they itch, sometimes they are just plain spooky in their simulation of the missing limb. Ramachandran was one of the first people to actually ask patients to describe these limbs. In his marvelous book Phantoms in the Brain, Ramachandran offers a vivid description of one such encounter:
I placed a coffee cup in front of John and asked him to grab it [with his phantom limb]. Just as he said he was reaching out, I yanked the cup away.
“Ow!” he yelled. “Don’t do that!”
“What’s the matter?”
“Don’t do that,” he repeated. “I had just got my fingers around the cup handle when you pulled it. That really hurts!”
. . . The fingers were illusory, but the pain was real—indeed, so intense that I dared not repeat the experiment.1
I especially appreciate the way Ramachandran, in this account, respects the intensity of the real pain experienced in a phantom limb and strives not to inflict it again.
Although my own arm was real, the nerve damage was extensive, and I often experienced odd sensations and even abrupt movements, though it still dangled limp at my side. It used to be thought that a phantom limb resulted because the nerve endings had been irritated during amputation and were still sending signals back to the brain. Ramachandran theorizes that certain areas of the cortex near the ones that would normally control the missing arm continue to respond as if the arm were still there. He came up with a novel treatment using a mirror box originally devised by thieving street magicians in India. The thief would trick gullible believers into thinking they were seeing one hand in the box when they were actually seeing a mirror image of the other hand. The free hand was actually under the table picking their pockets. Ramachandran used one of these mirrored boxes so that patients could think they were looking at their severed hand and give it instructions. Open, close: What looked like the phantom limb would open and close. The brain, in this way, could be convinced that its signals were being dutifully responded to so it could calm down and stop giving them. The mirror trick didn’t work in all cases but was successful in several. The disturbing phantom limb went away.
Since the nerves in my arm had all been traumatized in one way or another and often signaled to me in unpredictable and inexplicable ways, I began to try variations on Ramachandran’s mirror experiments during my physical therapy. I would often lift my good (left) arm in front of a mirror, but I would stand so that I could not see my injured arm. I would lift my left arm in the correct movement, and visualize in exact detail my right arm doing the same. It probably was only going up an inch or two, but I would stare at the image of the working arm in the mirror, imagine it as the injured one, and lift, partly with my arm but partly with my imagination.
It took about a year for me to be able to use a computer mouse successfully again, my thumb being difficult to rehabilitate. I would mix it up, mousing left-handed with a big mirror leaning against my computer screen so I could visualize that it was my right hand mousing. I tried lots of other tricks with mirrors too.
Did all these exercises inspired by Ramachandran’s neuroscience do the trick? I have no idea. Probably not, from a purely medical point of view (if one believes in such a distinction). But psychologically, it seemed to be working, and if you were to look at me now, you would not know this right arm of mine, blazing away at the keyboard, once hung like a twelve-pound anchor at my side, nor would you suspect that my busy mouse thumb had ever been compromised.
The third huge help came from Sadiik (as I’ll call him), the Somali taxi driver who would pick me up at my home, drive me to physical therapy three miles away at the Sports Medicine Center, and then drive me home again. The first time he picked me up, he came around to open the taxi door for me, and I could see that he limped and carried one shoulder higher than the other. He told me a little about himself over the course of the next several months, and although he looked to be in his late thirties or early forties, I learned he was actually a decade younger than that. He’d been living in the United States for five years and had brought his entire family here. Many of the taxis in my area of North Carolina are owned by Somalis. Sadiik was cautious about describing how he was injured, but war and camp were two words I heard in the course of his very circumspect admissions. He was extremely wary of saying anything that might hurt his chances of earning his green card, and I, too, learned to divert the conversation if it seemed to be heading in a direction that would cause him anxiety later on.
Every day as he dropped me off at rehab he would say, “You are so lucky,” and I would say, “Yes, I am very lucky.” Every day when he picked me up after two hours of rehab, he would ask, “So what did you do better today?” Because I knew he would ask that question, I felt inspired to do something, each day, that would give me something to answer. If I failed to lift my arm an inch, I would try to ride the exercise bike an extra twenty minutes. If I was too tired for aerobics, I would work on stretches. I reported something better every day, until that day, in March, with the leaves finally coming onto the trees again and the first cherry blossoms beginning to bloom, when Sadiik watched me approach his taxi, and said quietly, “This is the last time I’ll drive you, isn’t it?”
It was both exciting and sad to tell him that I’d been given permission to drive myself to physical therapy. It was on that last ride home that I thanked him, again, for helping to inspire my recovery, and he told me, for the first time, that my persistence had also inspired him to do some stretching exercises he was supposed to be doing but had given up years ago because of the pain they caused. Sometimes, when there wasn’t another fare, he would park his taxi, wait for me, and say his Salaah, his evening prayers. If he had time, he said, he would then do the arm exercises while he waited for me. The stretching was starting to ease the pain in his shoulder.
I hadn’t realized before that my daily response to his inspiring question had also been motivating him. This, I began to see, was collaboration by difference of the tenderest kind.
So what about those numbers the physical therapist wrote on the board in answer to my question? On the days when I went to the Sports Medicine clinic, I would end my therapy session doing water exercises in the luxurious therapy pool. At first I’d been angry at how awful the changing room was—freezing showers that were hard to turn off and on, uncomfortable seating, no one to help you get a swimsuit on or off (no easy task wet and one-armed). But I soon realized that the unpleasant locker room was intended to remind people that the pool was for therapy. It wasn’t intended to be a spa. This was important because I discovered there was a substantial group of people who would have preferred to use physical therapy as a time to loll in the pool and chitchat, mostly to complain. I’m convinced that if the shower room had been any more pleasant, they would have been there taking up limited space in the pool every single day.
As you can see, I was irritated by these pool slackers, as I called them. I resented their bad attitude. Unlike the septuagenarian triathlete, unlike the basketball player perfecting his jump, unlike Sadiik, and unlike the dedicated therapists at the center, those pool folks drained my energy. If I focused on them (including my irritation at them), I felt defeated that day. So mostly I pretended they weren’t there. I loved the pool when I was there by myself or with someone really dedicated to physical therapy. Parts of the pool were eight feet deep, allowing me to do all manner of exercises with the various flotation devices. The water was a luscious 85 degrees and a limpid turquoise as soothing as the Caribbean. The two-story windows looked out on a beautiful patch of forest land. The warm water felt fabulous on my dangling, near-dead arm, and I’m sure it made those other folks in the pool feel just as good. I was unkind not to appreciate that—but I didn’t. They’d be there almost every day, the same cluster of “old people,” many of them younger than me. The most common topic of conversation was their surgery. Typically, they were on a second or third or even fourth surgery for the same injury, often to the rotator cuff. The water felt so good, and the buoyancy let me perform exercises there that I could not do on land, so I loved that part. But I dreaded how often people would come into the pool, meet one another for the first time, and immediately join a chorus of complaint about “getting old,” about how the “body doesn’t heal fast anymore,” often pointing at some young athlete walking laps with water-weights over and over and over. “Look how the young ones heal!” someone would say, floating there doing nothing.
So that was my question, the one that elicited the chart on the whiteboard with the two columns of numbers. Was there any hope for me, realistically speaking, of regaining the full use of my arm at my age?
The therapist said there was absolute, irrefutable evidence that, within the same family of injuries, the more time you spent in serious and constructive rehab, the greater the extent of your recovery. There actually was no real evidence that, given the same amount of careful rehab effort, a young person healed more quickly or better than an older person. It was impossible to measure individual by individual, of course, because no two injuries are alike and no two bodies are, either. But in the rehab studies, controlling for as many body types as possible, the results were rather like Timothy Salthouse’s studies of IQ, in which he gave a large group of people different types of IQ tests under different conditions and found greater variation within one individual than between individuals. Physically, as well as mentally, we are each our own bell curve.
That’s when the physical therapist wrote the numbers on the whiteboard. It turned out those were ages. On the left were the ages before the turn of a new decade, and on the right the age one year after turning thirty, forty, fifty, or sixty. She called it a “decade event.” She said she had no rigorous test for it, but she and the other therapists had noticed that if people were a year away from a decade event, they often felt as if they had lucked out, that time was still on their side. These patients often worked hard, did the exercises, got better, and felt exhilarated at the “near miss” of recovering, while still in their twenties, thirties, forties, or fifties.
However, she said that, anecdotally, she had found the opposite also to be true. If people had just gone through the ritual of passing into a new decade, it was hard to convince them that the birthday was not the kiss of death for future recovery. They would often begin physical therapy with a lament, half joking but also real: “I knew my thirties would be downhill.” Her point was that if you begin with a negative rhetoric and mind-set, it is hard to inspire yourself to do what are, in the end, excruciatingly tedious repetitions of the same exercise toward recovery so incremental it hardly counts as positive feedback. She and her colleagues had seen enough evidence in the training areas to convince them that those who’d just had a major birthday tended to exercise less, had a rotten attitude, and never made up the lost ground. Their poor recovery confirmed what a bummer entering this new decade was, with all its harbingers of approaching decrepitude. By thirty-three, a patient seemed to be on track again; by thirty-five, the decade event was irrelevant. But then the pattern would start again, the ticking clock of aging making the patient determined as he or she approached the new decade marker and resigned to failure on the other side of it. The therapist was convinced that a twenty-nine-year-old had more in common with a thirty-nine-year-old, in terms of attitude toward and success at recovery, than with a thirty-one-year-old. Attitude was all. Old age didn’t seem to have nearly as detrimental an effect on healing as one’s prejudices about age and the body’s ability to heal.
That rhetoric about “I’m too old to be cured” is something I heard over and over during my six months of intense rehab. I suspect that physical therapist’s intuition was correct: that the self-fulfilling prophecies of middle age hurt us incomparably more than the creaking of joints or the forgetting of proper names.
I see the same thing now, in my weekly or twice-weekly Pilates classes with Radonna Patterson, who stands five feet tall but is as strong and as tough as any other Texan you’ll find. A professional dancer, she suffered a catastrophic back injury that effectively ended one career and brought her to another as a physical therapist. Convinced from her dance background that every part of the body is linked to every other, Radonna now combines her training in traditional physical therapy with Pilates, Gyrokinesis, yoga, breathing techniques from Eastern meditation, and deep-tissue massage. She even partners with an elderly doctor of Chinese medicine, who comes once a week to work with various machines and techniques. Radonna also uses a lot of hard-eyed determination. She doesn’t tolerate whiners. After getting my arm functioning about halfway, Bob Bruzga delivered me to Radonna. She sized me up, put me through the paces on her machines, and assessed what I could do. Then she began challenging me to do more.
Given the mechanics of my arm now, there are some things I’ll never do again, but most of the time, I am unaware of those limits. As with attention blindness, we can rarely see our own body patterns, and I can’t tell that my arm “limps” except in a few very specific situations. Radonna, though, delights in finding weird movements that I’m not able to perform with my “bionic” arm, and when I admit I can’t do something, she’ll come up with some pretzely exercise to work on that. She’s the kind of teacher who stays up at night thinking about such things and then tests them the next day. It’s not surprising that most of her clients are recommended by doctors or that many are medical professionals themselves. She’s not big on self-pity, so almost everyone in Radonna’s studio looks and acts considerably younger than their years. It’s not unusual to see someone seventy-five suspended upside down from the towering piece of Pilates equipment known as the Cadillac.
No one is required to go to Radonna’s. People are there to work, at any age. What the physical therapists confirmed as the significant correlation between healing and the determination to do what it takes to heal—even mindless, repetitive, constant, dogged exercise—is alive and well at Radonna’s studio.
THE SAME PATTERNS OF EFFORT my physical therapist found also hold true for the brain. Some biological differences among elderly, middle-aged, and young brains exist, of course. I’m not saying they don’t, any more than I’m saying that physical challenges don’t exist. But attitude plays a tremendous role, at any age, in one’s cognitive and physical health and in one’s ability to make a change. Other factors that I experienced during rehab and that we’ve seen throughout this book also play their roles: inspiration from the young, indefatigable athletes working on improving their game, as well as from the seventy-three-year-old triathlete who told me about them; the daily challenges of situational, strategic learning geared to my successes (rather than a standardized lesson plan for progress imagined for a “mean” that does not exist); the kind reticence of Sadiik, whose life and example disallowed my wallowing; and, of course, reading neuroscience and being convinced (whether it was “true” or not) that my mental attitude played as big a role in my rehab as my physical exercises—and that the two were connected.
Avoiding the downers, the people who were convinced the exercises did not work, was also key to the cure. What the chart epitomized was unforgettable: At any age, if we believe our age controls us, it does. If we believe we control it, we have a fighting chance.
Back in the distant twentieth century, the ultimate symbol of our lack of control over our body and over technology in a changing world came together in the device known as the VCR. Younger readers might not remember what those are, so for their benefit, I will explain: VCR stands for video cassette recorder. You hooked up this device the size of a large briefcase to your television set, you programmed it to record a TV show you wanted saved, and then, later, you could play back a wobbly, grainy, seasick-colored tape of the show. A modern miracle! That’s how TV was “revolutionized” back in the day, kids. This was before you could watch streaming video via Hulu on your laptop or download from TV.com on your iPhone. In fact, it was before Hulu, TV.com, iPhones, or the Internet even existed.
The VCR was clunky, and it quickly became the late-twentieth-century symbol of the technological generation gap. You could divide the world into “old people,” who couldn’t figure out how to program a VCR, and “young people,” who would program it for you if you asked nicely. Gender exacerbated the situation. “I’m too old to learn how to program the VCR,” someone, invariably female, would say, and a teenager would step up to do it for her.
“Age,” Radonna would say, “gets blamed a lot of times when the operative word should be lazy.”
Lazy would be an odd word to use for the brawny industrial age, but it certainly is the case that much of our conditioning and many of the institutions designed to prepare us for work in the twentieth century were also about delegating expertise and oversight to some people and therefore delegating the status of the amateur to others. But go back before the twentieth century, and you find a different system. In preindustrial society, you figured out solutions, worked in communities, turned to the person you knew who was “good at” fixing oxcart wheels, and paid him with the Speckled Sussex hen you bred. It’s not that he was the expert and you weren’t, but that, as a community, you exchanged expertise all the time—collaboration by difference! And the next week, when the guy who repaired the wheel on your oxcart needed help mending fences, he might drop by to ask you to pitch in on that job. By contrast, if the blinking light on my VCR is driving me nuts and no teenager is handy to fix it, I do exactly what my century trained me to do: I go to the Yellow Pages, look up “TV repair,” and call an expert to come to my house and reprogram my VCR. I don’t pay him in chickens. I hand over forty bucks.
Expertise existed long before the industrial age, as well as after, but the hallmark of the twentieth century is that it formalizes who is and who is not an expert, who will and who won’t be charged with paying attention to certain problems but not to others. Expertise in the twentieth century gets fully credentialed, institutionalized, formalized, and, perhaps most important, put into a strict and even rigid hierarchy of importance: What is most valued, respected, and/or remunerative is, implicitly, the standard against which others who lack these skills are judged. The parsing of humanity according to who is delegated to be the official judge of quality in each realm is exactly what the industrial age is about. The division of labor into white-, blue-, and pink-collar workers; the division of home and work; the division of fields; the division of the “sciences and technology” from the “human and social sciences”; the division of talents (“good at art,” “good at math,” “good at school”); the division of abilities (and therefore disabilities); the division of the young, the adult, and the aged; and even the division of the brain (executive functions in the prefrontal cortex, emotional reactions to music in the cerebellum): All of the different ways the world and human nature are parsed and segregated and valued that may seem natural to us have evolved into carefully stratified levels, separate spheres of influence, and rigidly credentialized and institutionalized roles and functions in the last one hundred years of human history.
And then comes the World Wide Web—all the world’s documents in all the world’s media—riding atop the Internet, which links each to all and allows anyone with access the ability to connect to anything and anyone at any level at any time without intervention by an expert. In the twenty-first century, if your oxcart wheel needs fixing, you don’t go to the Yellow Pages to find someone you can pay to fix it. You go to Oxcart.com and post a photograph of your broken wheel and ask for advice on the best way to fix it. Within minutes, someone in Thailand will be giving you point-by-point instructions, and then someone in Bali might be offering a different opinion. If the twentieth century was all about training experts and then not paying attention to certain things because the experts would take care of the matter for you, the twenty-first is about crowdsourcing that expertise, contributing to one another’s fund of knowledge, and learning how to work together toward solutions to problems. We’re just at the beginning of that process, but a first step is unlearning our assumptions, not only about who is or isn’t the best expert, but also about our own ability to become expert in a given situation. That means getting rid of a lot of tired ideas about ability and disability, prowess and infirmity, and even about the inevitabilities of getting old. Life is a constantly evolving, changing, customizing process of remixing and mashup, learning and unlearning and relearning, over and over. There is no pinnacle to which we ascend and from which we fall. Those “decade events” don’t spell our doom—as much as we might like the excuse that, now that we’ve turned thirty or forty or fifty, we are “getting old” and therefore can delegate a lot—like programming the VCR—to those still young and vigorous.
The more immediate point is that we are not getting away with anything, in the long run, by blaming age and avoiding change. All we do when we use aging as our excuse is reinforce our sense of ourselves as losing control. And feeling in control, it turns out, also is a factor in our success. A study led by Margie Lachman, director of the Lifespan Lab at Brandeis University, underscores this point. In 2006, Lachman’s team studied 335 adults whose ages ranged from twenty-one to eighty-three. The study was funded by the National Institute on Aging (NIA) and specifically looked at the sense of control that middle-aged and older adults perceived themselves to have over their own cognitive functioning. All of the participants were set the task of recalling a list of thirty “categorizable” words—fruits, trees, flowers, etc.
What the study revealed was not that younger participants recalled more words than older participants, but that those who were confident in their cognitive abilities did better on the test than those who were not.
That is very good news. In purely biological terms, there are differences between twenty-one-year-old brains and sixty-year-old brains. Myelin is a chief difference, and those who want to assert the biological determinism of aging often emphasize it. A twenty-one-year-old is at the very height of myelin production, neurally fired up and ready to go. Myelin is the fatty substance that sheathes the axons—the long fibers that carry signals away from each neuron in the brain. Myelinated axons transmit information a hundred times faster than unmyelinated axons, and myelination keeps impulses flowing smoothly along them.
We’re back to the Hebbian principle we saw in chapter 2: Neurons that fire together wire together. Well-myelinated neurons fire best. From twenty-five years on, myelination begins decreasing slowly and then declines more rapidly after sixty or so.2 Demyelinated axons in the middle-aged and elderly are thought to contribute to diminished short-term memory and working memory (the kind of memory needed to remember a sequence of operations, such as programming a VCR).
Demyelination is a fact of brain biology, but a bit of a squirmy one. No one knows how much demyelination occurs within any one individual and at what rate, nor how much the actual diminishing of myelin changes that person’s life. There is no way to measure individual change, biological or behavioral, in anything like a sustained way. It is a leap of inference to assume that the biological change necessitates a corresponding change in performance or behavior, because, as we saw from the chart on the whiteboard, will and attitude can outweigh diminished capacities to such a great extent that chronological age becomes a relatively minor part of the equation.
The president of the United States, the so-called leader of the Free World, was forty-seven when he took the oath of office, and he was considered a young man. Similarly, the average starting age of a new CEO of a Fortune 500 company in America is 48.8. Even in today’s supposedly youthful digital information industries, the average starting age for a CEO is 45.2 years.3 Their middle-aged neurons are fraying as we speak! But what does that mean? Our definition of young is variable, depending on the circumstances. Your late forties are “young” for leading a nation or a corporation but “old” when it comes to physical rehabilitation or the VCR or the occasional memory lapse.
Lachman’s study tells us a lot about the importance of attitude relative to biology. Confidence in one’s sense of cognitive control might even outweigh the supposed debilitating effects of demyelination. She found that, statistically, there is a greater correlation between confidence and memory than between age and memory. As Lachman summarizes, “One’s sense of control is both a precursor and a consequence of age-related losses in memory.”
Anyone over the age of twenty-five needs to go back and parse that sentence. Lachman is saying that feeling confident and in control helps your memory; having a sense that your memory is good helps you have a good memory.
And the reverse is also true. If you are facing middle age and finding yourself at a loss for the occasional proper name, you can exaggerate the loss and soon be undermining your own confidence. If you are positive that anytime you forget a word you are seeing a harbinger of worse things to come—dementia or Alzheimer’s, just on the horizon there, right around the next cognitive corner—you shake your confidence. Lachman’s study suggests that loss of a sense of control makes things worse, and pretty soon the process becomes self-fulfilling, as any new lapse looms large while successes are invisible. It’s the self-reinforcing selectivity of attention, yet again.
My eighteen-year-old students don’t care a bit if they or their classmates forget a noun. The last time I taught This Is Your Brain on the Internet, I conducted a little experiment. There was nothing rigorous or official about it; I simply made a mark in my notebook whenever I caught one of them forgetting a proper name or a date. The discussion in class was always energetic and roamed widely, covering anything from parallel GPU (graphics processing unit) 3-D rendering to the brain waves of Buddhist monks during meditation. The students ranged in age from eighteen to twenty-two, with one “elderly” graduate student (he was twenty-eight) the exception to this undergrad demographic. If any student forgot a word or name, I made a little tick and watched to see how they recovered from the lapse. This happened a lot more often than you would think: on average about five to seven times per class.
How did they react? They themselves barely noticed and no one else cared at all. Their memory lapses didn’t disrupt the flow of the conversation and seldom diverted them off course from the points they were making. If it didn’t matter, they’d just skip over it with a phrase like “I’m blanking on his name.” End of matter, moving on. If it was something important, another student often pitched in to help, suggesting the forgotten name. On other occasions, the student who couldn’t remember in class would post the information later to our class blog, often with relevant links to Web sites or to an interesting video of the same phenomenon on YouTube. No embarrassment, no apology.
What I never once witnessed in my fourteen weeks of keeping track was the kind of self-conscious stumbling around that characterizes middle-aged forgetfulness. I have a coworker who stammers and blushes and stops any conversation dead in its tracks whenever she forgets a name. We all feel embarrassed (and annoyed) as she hems and haws and apologizes instead of finishing what she had to say in the first place. Of course, that failure and humiliation make her feel less confident, which exacerbates the potential for forgetfulness the next time.
You know what I’m talking about—the way middle-aged people break into self-consciously apologetic jokes about “senior moments” or lament that “old age sucks.” My friend, novelist Michael Malone, calls this tiresome midlife game Name That Noun! It’s a self-preoccupied game. It brings everything else to a halt and calls humiliating attention to the forgetful moment and the forgetter. Given what we know about attention, we know that every time we remind ourselves of a painful failure, we are setting up the anxieties that contribute to the next one.
I admire people who find ways to cope easily and graciously with inevitable stumbling blocks. No one does it better than Nannerl Keohane, who was president of Duke University from 1993 to 2004. My friend and also a mentor, Nan engages affably and generously with everyone she meets. She’s an extraordinary human being by any standard, and she was always gracious as president. No matter how many dozens of faculty, students, alums, and interested others would come to greet her at our university functions, she found ways to make a personal connection with almost everyone.
“You have an incredible memory for names,” I once said after observing her at a very long public event. She demurred, noting that her ability to summon up the right name on a given occasion was good enough for her purposes, but she didn’t consider it exceptional. What she did always try to do, she said, was make some meaningful connection with every person she met, so that there was something significant in any exchange. “That’s always more important than remembering names,” she said.
I observed her more carefully after that. “So good to see you,” she might say upon warmly greeting someone at a party. And then, shortly after, “How’s your son liking college?” She’d refer to a past encounter or simply connect with a comment offered by the other person. Sometimes she said the person’s name, sometimes not. Often I couldn’t tell whether she had skipped a name because she didn’t remember it or because the conversation was so engrossing that I simply hadn’t noticed. The point is that the name was not the point. A conversation began, a connection was made, a human contact reaffirmed. If she had played Name That Noun! and made a big deal about either remembering or forgetting the person’s name, the flow would have stopped and the focus would have turned from her warm, human interactions to her own accomplishment of (or failure at) being able to remember a proper name.
The ability to remember names is an achievement, but being able to connect to the person you are addressing—whether or not you can recall that person’s name—is a far greater one. It’s also a better way, neurologically speaking, of remembering names in the future. The more you stand and stammer, the more you retread the tracks of forgetfulness. If you move on to another association with the person, you have a better chance of triggering the memory of the name you misplaced.
We’re back to the old VCR routine of the 1990s. If you program it, you become confident about programming it the next time. If you palm it off on your teenage son, you create a bad habit, a pattern of avoiding what you don’t want to do and blaming it on your age. If Radonna were around, she’d be chiding you and urging you to climb on the Cadillac and give yourself a good stretch, upside down.
THE CONFIDENCE OF YOUTH, EVEN in the face of forgetfulness, is a good enough reason to hang out with young people. James R. Gaines, formerly a top editor at Time, Life, and People magazines, recently started at FLYP, a multimedia online publication that combines text with motion graphics, cool Flash animation, and streaming audio and video. At sixty-one, Gaines was suddenly the oldest person in every room, often twice as old as anyone else. He quickly noticed that he was the only one who cared about his age. His young colleagues wanted his new ideas, not his past experiences. He learned to think in the present and the future, not so much in the past. He’s the boss of the operation and yet daily he finds himself relying on much younger people to help him with things like video codecs and MySQL databases. He’s learned that his reliance on his young coworkers changes not only the hierarchy of the office but changes him. He has the wisdom of the boss but also the energy and enthusiasm of the learner. Collaboration by difference has the collateral effect of turning one into a student again, even reversing the typical position of student and teacher.
This isn’t like palming off the programming of the VCR onto the youngsters. This is a cross-generational collaboration of uniquely qualified minds, with the same qualities of trust and asymmetrical exchange that we saw in the last chapter. There is even a Shane Battier–like facilitated leadership at work here, for when it comes to the latest of the latest technologies, Gaines finds himself deferring leadership to someone else.
He enjoys being on the flip side of their youthful wisdom. He was especially pleased when a young coworker, who had been working for FLYP longer than he had, commented, “Seeing someone like Jim find what we’re doing exciting has made me see it in a new way, sort of like when I started out. His enthusiasm for it reminds me why I went this way.” Gaines finds it charming (and only ever so slightly galling) when someone half his age compliments him in an avuncular fashion, “Fine young man, that Jim! Going places, that boy!”
What Gaines adds to FLYP is a certain calm and an ability to put small upsets and victories into a larger perspective. That skill is appreciated by his young coworkers as much as his expertise and insights. The asymmetry of the contribution does not diminish respect in either direction but adds to it. Gaines finds his new role inspiring. “The young people I work with now will be the settlers of that frontier, and I can’t think of anything I would rather do than help them get there,” he says.4
Complementary skills help everyone see differently and better. That is emphatically true cross-generationally. We are susceptible to peer pressure. Hang out with middle-aged people all the time, and you can end up like the people in the therapy pool, reinforcing one another’s worst habits.
If schools and the workplace operate on a twentieth-century industrial model, think of the institution of the nursing home. An outgrowth of what used to be called the sanatorium, the segregated facility for the elderly is yet another part of the division of labor and productivity we’ve seen was characteristic of the twentieth century. We now know that the elderly do better when their peers are not limited to other old people, and studies have shown vivid improvements when older people are paired with the young, infants, the disabled, orphans, prisoners, dogs, hamsters, or even goldfish. In fact, giving a very infirm person a pet to take care of, a pet for which she is responsible, is probably the single best thing you can do for both of them. As one friend likes to joke, it is equally good for both “the petter and the pettee.”
The key is restoring a sense of vitality, purpose, difference, and control. Of course, control is mostly an illusion—so why not cultivate the illusion to one’s benefit? It’s a bit of a trick with mirrors and phantom limbs sometimes, but if the trick works, go for it. Here’s Margie Lachman again: “The more you believe there are things you can do to remember information, the more likely you will be to use effort and adaptive strategies and to allocate resources effectively, and the less you will worry about forgetting.”5
As my students in This Is Your Brain on the Internet know, if you forget something, why worry? What’s the big deal? You can always google it later on.
I am optimistic about the most recent discoveries in the neuroscience of aging. I believe in the power of a can-do spirit and the rejuvenating energy that comes from working with others who don’t use their age as an explanation for their shortcomings. That said, I want to say outright and explicitly that I am skeptical about many grand claims of rejuvenation. Some are overstated and overly optimistic. Others smack of hucksterism. Amazing feats of neurogenesis—neural plasticity and recovery—are reported all the time, but aging still exists, and so do brain damage and disease. In the late 1990s, when stem cells were discovered in the hippocampus—a key area for memory—some predicted an imminent cure for Alzheimer’s, brain-stem strokes, and other forms of damage. That miracle hasn’t happened yet.6 Every year or two some new health evangelist promotes (and sometimes gets rich by advocating) a new substance or practice: green tea, blueberries, cocoa, ibuprofen, ballroom dancing, aerobic exercise. Others tout the rejuvenating properties of extension courses, learning a new language, plenty of sleep, or positive leisure activities. Daniel G. Amen, a popular motivational speaker on aging, advocates antioxidants such as ginkgo biloba and vitamin E, fish oil, natural vitamin supplements, aerobics, psychological exercises ranging from meditation to goal-oriented positive thinking, music therapy, good nutrition, and lots of sleep. Plus ample sex. There are worse ways to rev up those neurons.
But lots of people are getting old right now, worldwide. Aging is big business. There’s a geriatric industrial complex. Claims of a fountain of youth flowing from this or that substance or practice need to be viewed with critical distance.
I have another, more personal reason for being aware that not all damage to the brain can be cured, all aging reversed, all IQs improved. Tragically, around the same time that I fell off the ladder on Capri, my younger brother suffered what his doctors called a series of “neurological events,” several small strokes. He’d had brain damage for many years following the removal of a benign tumor in his early thirties. The inexact nature of surgery at the time and what was called “overzealous use of radiation” left him with substantial hearing impairment, some memory problems, a diminished sense of time, and an increased tendency to lose his temper unexpectedly and for no reason. Still, he was able to hold down a job, had a beautiful home and a wonderful wife and son, and remained very intelligent, an avid reader, and conversant on many topics, so much so that people overlooked his impediments. With the more recent strokes, however, he slipped beneath some cognitive threshold. His high intelligence could no longer mask the impairments. He was put on disability at work and is no longer allowed to drive a car. He finds this frustrating when he is awake, but increasingly, much of his day is spent asleep.
Through my research and contacts in the neurobiology community, I was able to help my brother and his wife find and get appointments with some of the best doctors in his city. My brother would say they caused him only misery. They were the ones who recommended he no longer be allowed to work or drive. His quality of life since then has diminished. No one is optimistic about seeing a cure-all in his future.
Whenever I hear of miraculous possibilities ahead or read of astonishing incidents of neural plasticity, I am thrilled for the hope they offer. But I also remain aware, on an intensely personal level, that not all brains change for the better. Not all damaged brains heal themselves. My decade of research on the science of the brain cannot alleviate my beloved brother’s condition. I never forget that.
Rather than seek some specious fountain of youth, I prefer to think about what the aging brain does well and to imagine ways those skills can be maximized. Roberto Cabeza, professor of psychology and neuroscience at Duke University, has found that the older brain excels at, in technical terms, “cross-functional complementarity.” In layperson’s language, that means being skilled at sharing knowledge with all kinds of different people, drawn from many different sources with distinctively different levels of credibility, and being able, from the brew of thought, to come up with creative solutions. Collaboration by difference! Who knew? Midlifers turn out to be even better at it than young people.
Cabeza’s lab has also found that midlifers do well in a wide array of problem-solving situations and excel at episodic memory—that is, relating a sequence of autobiographical events in order and with proper causality. Midlifers are also excellent at perceiving accurately and paying prolonged and sustained attention—especially if they are given an opportunity to control and shut out sources of distraction. Midlifers are good at figuring out (all that problem solving, synthesis, generalization, and episodic memory) what will cause them the most distraction and compensating accordingly.
One of the most remarkable findings of all from Cabeza’s intriguing tests is an increase in what’s called midlife brain bilaterality. The neurons flying along paths from one to the other hemisphere of the brain seem to help one another. If one part of the brain isn’t working up to speed, the senior brain finds another part of the brain that can pitch in.7 This cross-brain assistance also suggests a complexity of thinking, almost an in-brain collaboration by difference!
No single test measures this complex synthesis of abilities, but it is probably more important in everyday life than knowing the missing number or letter in a sequence, anticipating what the three-dimensional shape will look like if turned upside down, figuring out which word goes with which other word, and other features of conventional IQ and cognitive testing.
Indeed, Harvard educator Howard Gardner has spent the last three decades arguing that there are multiple intelligences and that all of them are important. Here’s his list of the major different forms of intelligence that any one individual can possess in different combinations and with different strengths and weaknesses:
1. Linguistic intelligence (as in a poet)
2. Logical-mathematical intelligence (as in a scientist)
3. Musical intelligence (as in a composer)
4. Spatial intelligence (as in a sculptor or airplane pilot)
5. Bodily kinesthetic intelligence (as in an athlete or dancer)
6. Interpersonal intelligence (as in a salesman or teacher)
7. Intrapersonal intelligence (exhibited by individuals with accurate views of themselves)8
When we speak of declining, age-related mental capacities, are we referring to these capacities? We know, for example, that, on average, mathematicians tend to be young and historians do their best work later in life. Is one smarter than the other? Does one have more cognitive resources than the other? Is the pure abstract thinking of the mathematician really superior, cognitively, to the associational searching and reading and analyzing and interpreting and synthesizing and then focusing and narrating that are the historian’s gift and trade?9 One could also note that it is mathematicians and statisticians, not historians, who tend to make up the statistical measures by which we judge cognitive excellence, achievement, and decline. On the other hand, it was two historians who created H-Bot, that robot who can whup most of us on standardized tests.
It may seem hokey to equate just trying harder or coming up with alternative methods of thinking with a rejuvenated brain, but these methods work in surprising ways, psychologically and—if we can even make such a distinction—biologically. Dr. Yaakov Stern, professor of clinical neuropsychology at the College of Physicians and Surgeons of Columbia University, has spent much of his career studying the most frightening of age-related mental deteriorations, Alzheimer’s disease. From years of study of degenerative age-related disorders, Stern has concluded that the single most important thing anyone can do to prepare for the possibility of brain disability is to have a backup plan. Seriously.
I love this idea and readily admit it appeals to me largely because I’m the kind of person who goes through life with a backup plan and often a backup plan to the backup plan. You probably figured that out from my account of the various physical therapies I undertook to rehab my arm.
For someone like me, Stern’s theory is reassuring. It is based on the remarkable results of autopsies performed on 137 people who had been diagnosed with Alzheimer’s disease in the 1990s. In clinical terms, Stern notes, Alzheimer’s disease is more than its well-known symptoms of confusion, memory loss, and cognitive decline. It is also a brain pathology characterized by plaques and tangles that cause deterioration, starting in the hippocampus (affecting memory) and moving to the cerebral cortex, where it influences reasoning and language abilities. Because official diagnosis of Alzheimer’s cannot be confirmed before autopsy, the diagnosis of a living patient is usually “probable Alzheimer’s disease” and is based on the symptoms the patient manifests while alive.
Stern’s research team at Columbia University assumed that there would be a strong correlation between the severity of the symptoms manifested in the living individuals and the degree of deterioration found in the autopsied brain. That’s a perfectly logical theory if you believe that aging and cognition are purely biological. But while the team found that there were correspondences, there were also surprises in both directions. They found some severely disabled AD patients with brains that turned out to be less diseased than expected and patients with few AD symptoms whose brains were badly ravaged. What, Stern and his research team wondered, could account for this difference?
Stern’s idea is that, in the same way that the visual cortex of certain people afflicted with blindness can be adapted by the brain for sound and music, something similar happened in some AD patients. Complex operations of working, procedural, long-term, and short-term memory were taken over by the undiseased parts of the patient’s brain. In other patients, other parts of the brain filled in and averted the usual variety of emotional and behavioral conditions associated with Alzheimer’s.
Why this cognitive boost in some patients but not others? Stern attributes the discrepancy to what he calls cognitive reserves—a backup plan. Some people have a reserve of intellectual, cognitive, physical, and emotional experiences that allow them to create complex and interconnected neural pathways that can be called into service throughout their lives. These new routes can prove useful in the event of brain damage or disease. If a neural pathway is blocked, the brain finds another that isn’t. If you have plenty of cognitive reserves, you can get around even a major blockage, taking a back road or a circuitous route toward your goal. According to Stern’s theory, those patients who had cognitive reserves had seemingly been able to tolerate progressive brain deterioration without manifesting catastrophic behavioral or cognitive symptoms.10
So how do we get our cognitive reserves? Based on this research, Stern has created a list of things you and I can do to pump up our own cognitive reserves well in advance of any neurological decline or disaster: continuing education, meaningful and enjoyable work, pleasurable leisure activities, physical exercise, social interaction, learning new skills, learning a new language, and—of course!—taking up new computer skills, playing video games, and interacting on social networks. All those mentally stimulating activities build reserves of neurons, synapses, and neuromotor skills that we might be able to call upon if we need them. If we are lucky enough that we don’t need them, they are there to be enjoyed anyway.
I was able to learn firsthand of the amazing ability of these cognitive reserves to take over after a friend of mine, Ichiro, collapsed with an aneurysm several years ago and was rushed to the hospital in Osaka, Japan. On the way to the hospital, the ambulance driver was sure he was hearing Ichiro speaking gibberish. “No, it’s French!” his son said. Ichiro is Japanese. He learned English as a second language, then fell in love with a French woman and learned French. Because new languages are acquired and remembered in different areas of the brain than first languages, the parts of Ichiro’s brain that survived the stroke were still able to function even at the height of his distress. When I saw him a few years later, he had relearned Japanese, with notable gaps and lapses, but his English was better than it had been when I first met him over a decade before.
What Stern calls cognitive reserves are what, throughout this book, I’ve been calling learning. In fact, the concept of cognitive reserves corresponds nicely to the digital model of learning. The more actively you pursue Toffler’s cycle of learning, unlearning, and relearning, the more likely you are to be ensuring that your brain is not only active but stimulated in ways that are always new. And if we buy Stern’s ideas, the payoff is that the more your brain has been rewiring itself, the milder the symptoms of any degenerative brain condition are likely to be while you are alive, whatever an autopsy might eventually reveal. That’s not a fountain of youth, precisely, but it’s not a bad bargain, either.
LET’S RETURN TO THAT OLD VCR again. The kind of mental workout one gets from mastering any new electronic device and incorporating it into one’s daily life is beneficial to rewiring the brain. Think of it as building up those cognitive reserves.
The VCR is also a great symbol in other ways. First, it really was a crappy technology. VCRs broke a lot. They were always on the blink. The interface was makeshift and conceptually user-unfriendly, seemingly designed to separate young from old, female from male. It was almost the opposite of the user-powered Internet and the World Wide Web. If I seem a slap-happy technology utopian, it may well be because I’ve seen before and after—and the VCR was the epitome of before. The technological device that was designed to bring you pleasure brought more than its share of pain. It was a little like my physical rehab: To get it going, you had to go through a lot of procedures, and the final product was never as good as the original.
The Internet and all the mobile devices that came after may seem like an extension of the VCR, but I think of them as its antithesis. The digital age, and the World Wide Web itself, is based on the idea that it works only if everyone participates. Rather than reinforcing distinctions, all the new kinds of devices depend on inclusion. And, amazingly, they are inclusive. Over 80 percent of Americans between fifty and fifty-four are now online. We can look at the numbers another way and note that older Boomers (those born between 1946 and 1954) make up 13 percent of the total adult population of the United States and exactly the same percentage of the Internet-using population.11
Contrary to the myths about our digital era, the average blogger in the United States is not fourteen-year-old Amber but a thirty-seven-year-old white, middle-class male. Social networking media are similarly popular across demographics, with approximately 40 percent of Facebook and MySpace users over thirty-five.12 “Digital immigrants,” in other words, are streaming through the technological equivalent of Ellis Island. Including old digital immigrants. In the seventy-six-plus age range, a surprising 27 percent of Americans are using the Internet, and where the Internet is easily and inexpensively available to them, the elderly are, statistically, the fastest-growing and most active demographic of new adopters.13
Seniors are taking advantage of the brain tonic that is the Internet, putting into practice all the research going back to the 1980s and 1990s demonstrating the cognitive benefits of interactive digital media. The latest research suggests that engaged use of the Internet provides better mental calisthenics than the more conventional senior pastimes of reading, television or movie watching, or playing cards, board games, or word games like Scrabble. Coupled with bilateral exercise (walking, running, or dancing, for example) for those who are able-bodied, “silver surfing” is the single most beneficial senior activity we know.
I met him on the Internet. He called himself ToughLoveforX. He had a way with winning phrases. Soon, I was one of his followers.
This story isn’t going where you think it is.
ToughLoveforX is Michael Josefowicz, a retired printer who has an active life on Twitter. I began to follow him, which means, for those who don’t use Twitter, that I chose to receive Josefowicz’s tweets. They roll past me every time I check my Twitter page, several times a day. Twitter is called an asymmetrical technology in that you can follow anyone you want and read what they tweet, whether or not they follow you back. You follow people because you are interested in what they have to say. Because he seemed always to be on top of the most exciting news and research on education, social media, publishing, civil society, the brain, and other areas that interest me, I began following ToughLoveforX—or TLX, as I now call him for short—and found that every other day or so he would send out a link to something invaluable for my own thinking and research.
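For readers who think in code, the asymmetry is easy to model: a follow is a one-way edge in a directed graph. Here is a minimal sketch of that idea (my own illustration, not Twitter’s actual implementation):

```python
# A simplified model of an asymmetrical follow relationship (illustrative
# only; not Twitter's actual implementation). Following someone adds a
# one-way edge from follower to followee; no reciprocal edge is created.

class FollowGraph:
    def __init__(self):
        self.following = {}  # user -> set of accounts that user follows

    def follow(self, follower, followee):
        self.following.setdefault(follower, set()).add(followee)

    def follows(self, a, b):
        """Does a follow b?"""
        return b in self.following.get(a, set())

graph = FollowGraph()
graph.follow("cathy", "ToughLoveforX")
print(graph.follows("cathy", "ToughLoveforX"))   # True: I receive his tweets
print(graph.follows("ToughLoveforX", "cathy"))   # False: he need not follow back
```

The one-way edge is the whole point: readers choose whose stream to receive, and popularity (many incoming edges) requires no reciprocation.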
I’m not the only one who thinks TLX has a bead on something worth seeing. He currently has 3,202 followers. That’s a pretty substantial audience, given that, of the 75 million people with accounts on Twitter at the end of 2009, only the top 1 percent had more than 500 followers. The average Twitter user has exactly 27 followers.14 TLX describes himself as “Retired printer. Lots of time for blabla. If I can help fix high school. So much the better.” That candid, offhanded, unpretentious style is why a lot of us follow him. Another reason is that, in addition to having so many followers, he himself is avid at following others, 3,410 of them to be exact. He has made himself one of the most informed people out there, anywhere, on all of the research in neuroscience and learning.
There’s a silly status game that celebrity wannabes play wherein they measure the ratio of who follows them to whom they follow. The actor Ashton Kutcher, for example, follows about 400 people while over 4 million follow him. But the beauty of TLX is that he is a true information node. He reads voraciously, he sifts and sorts through all those Twitter feeds, and he distills it all, publishing what he thinks is the very best of his own personalized, constant stream of information. For the 3,202 of us who follow him, he is a filter on educational reforms happening anywhere in the world. Like Dennis Quaintance’s “sustainability filter,” TLX is an “education filter” who helps many of us—thousands, actually—to see the complex and changing landscape of education with more clarity and depth.
That is why I sent him a DM (direct message) and said I’d like to interview him to find out how he got into this tweet-filled life. We DM’d a few times, then we tried something we called Twitter Tennis—an open-to-anyone exchange of ideas on Twitter. From there, we went to e-mail and had a long phone conversation. He told me his story. He’d owned his own printing business, retired, and then could have dwindled into a solitary old age. Instead, Michael Josefowicz of Brooklyn, New York, became ToughLoveforX, a vocal champion of education reform for kids, someone who turned his passion for kids and education into an information resource that reaches far more people, several times a day, than most people ever reach in a lifetime. It costs him nothing. It turns leisure time into productive time, and now that he is followed by some pretty prominent educators, he has an impact on the world. Or as TLX says in his inimitable style, in 140 characters or less per tweet, it’s the butterfly effect: “initial condition of a dynamical system may produce large variations in the long term behavior of the system.” Not bad for a retired guy.
The son of immigrants, he’d gone to Columbia University and was a student there during the famous protest strikes against the Vietnam War in the 1960s. Josefowicz maintains an independent Boomer spirit, but he confines his activism to his online research and communications. He is amazed that so many people look to him for insight, that so many people follow him on Twitter—and pleased. He says it took him about six months to get into the swing of it, and once he did, he was astonished that he, one guy in Brooklyn, could have an impact. In fact, his slightly sinister Twitter name came from a simple technical accident. Before Twitter, he had a blog called Tough Love for Xerox, begun as a protest against a company policy that denied some retired employees benefits they were owed. A lifelong admirer of Xerox because it made the digital photocopier that transformed his printing business, he was disappointed by the company’s shabby treatment of these retirees and wrote about it in his blog. When he transferred the same name to Twitter, the site cut it off after the X, and he liked the mysteriousness and kept it.
He’s frank about the pleasure it gives him to be such an information conduit on a topic close to his heart. He doesn’t feel like the isolated, classically useless retiree but, thanks to his online life, knows that he has a purpose. By twentieth-century rules, when one is not gainfully employed, one is not important. Thanks to the low participation barrier and the excellent possibilities for communication online, there are now many new ways to be gainful, to be important. I think back once again to that kindergarten class at Forest View Elementary. “We’re kindergartners.” In the end, that’s really what it is all about, taking pride in who we are and in how we inhabit our world.
When I called Josefowicz, he was as delighted as I was about the way a random connection on Twitter had turned out to be anything but random, actually a linkage of strangers who were paying attention to the same issues, the same world. When I asked him what one thing he would do to improve the world, he said without hesitation, “Scale John Seely Brown.” I had to laugh. JSB is one of the most profound thinkers of the information age, formerly the legendary “out of the box” director of Xerox’s PARC, the think tank of all think tanks. He also happens to be my friend. Scaling JSB is a great idea.
“He’ll like that,” I told Josefowicz. “I’ll tell him when I talk to him tomorrow.” Josefowicz was momentarily speechless. He had never dreamed his tweeting would bring him within one degree of separation from someone who had been an intellectual hero responsible for shaping his vision. This was a twist on collaboration by difference—almost communication by difference, with a certain loop of karmic “what goes around comes around” thrown into the mix.
Within a week, I’d written a “Scaling John Seely Brown” blog. JSB tweeted it. TLX retweeted. So it goes. That’s not exactly a circle, but it’s a connection forged by a shared vision, not by sameness but by contribution. For TLX and many others, it is what growing older happily in the twenty-first century is about.
Having vigorous links to the world and to others goes a long way toward keeping TLX young, and the statistics are on his side. Health insurers have found Internet use rivaling or, in some studies, exceeding the success rates of antidepressants in managing geriatric mood disorders. A research team at the Centre for Mental Health Research at Australian National University has even demonstrated the surprising result that Internet human contact can be as effective in mitigating depression as face-to-face human contact, especially in the long run, replacing the familiar despair of losing friends and relatives one by one with a sense of being part of a vibrant and growing community of caring and attentive acquaintances.15 Better geriatric health has also been attributed to Internet-assisted self-help, because the elderly become their own health advocates, finding reliable sources of medical information online and taking greater control of their own health care, resulting in fewer doctor bills, less medication, and a greater sense of well-being and security.16 The punch line of all of this is: Don’t retire. Rewire! Don’t retreat. Retweet!
In Holland some years ago, an inventive group of medical professionals at an institute in Amsterdam called the Waag Society came up with an idea to use the Internet to pair elderly shut-ins in their nursing home with young partners, some who volunteered and some who were paid a small stipend for participating. In most cases, the senior citizens had to learn how to use a computer to talk to their young partners. Some of the kids were jobless, with too much time on their hands, hanging out in Internet cafés. Some of the elderly were virtually senile, also with too much time on their hands. It seemed like a crazy idea, but no one had much to lose. Why not try? There were low expectations on all sides.
The first surprise came when the old people jumped at the chance to do this. Their excitement wasn’t about learning computers; that was just a means to an end. The excitement was in having something “youthful” to do, something that connected them to actual young people. The second surprise was that, as soon as the connection was made, everyone started reporting that they felt happier. This wasn’t an official experiment. There were no scientific tests. There were simple self-reports of feeling better. The next surprise was that these reports came not just from the oldsters but from the kids, too. As the old and the young began writing to one another, sharing their stories, they began feeling a sense of optimism.
That would have been a successful enough experiment. But soon the nurses were noticing that the seniors weren’t requesting as much medication for pain, anxiety, depression, memory loss, or even their physical ailments. Cognitive signs seemed to be reversing too, and people were doing things that, a few months before, seemed beyond their mental grasp. There was only one problem with the experiment. New beds had to be added. Why? Because the oldsters were sticking around the nursing home longer than expected. Put plainly, they weren’t dying as fast as they used to.
The results of this early use of the Internet to make human connections have been so positive that organizations dedicated to digital innovation, like the remarkable Waag Society, are now busy creating multimedia interfaces for computers specifically to facilitate online storytelling by the elderly and the physically impaired.17 This being Holland, there is also now what amounts to a technological bill of rights guaranteeing elderly Dutch citizens their inalienable right to an Internet connection.18
Until her untimely death in 2008, Australian blogger Olive Riley was banned from two of the world’s most popular social networking sites. A blogger who also regularly posted vlogs on YouTube, Riley loved the Internet and kept up an e-mail correspondence with friends the world over, despite living in a medical facility. Yet she was unable to join MySpace and Facebook. Like all social networking sites, these two set their own community rules for who can or cannot belong; ninety thousand registered sex offenders in the United States, for example, were barred from MySpace in 2009.19 However, Olive Riley was denied membership not for criminal or even antisocial activity, but because of her age. You must be 13 years old to join Facebook and 14 to have an account on MySpace, but few people know that there is an upper limit on these sites as well.20 On its application page, the Facebook birth-date box does not accept anyone whose birthday is before 1900. MySpace’s registration system excludes those over the age of 100. When she passed away, Olive Riley was an intrepid, game, feisty, tech-savvy 108 years young.21
Dubbed the World’s Oldest Blogger by the media who came to feel affection for her, Olive Riley embraced the digital age, but her passing didn’t substantially decrease the average age of those participating in a Web 2.0 world.22 Although her age made the charming centenarian blogger an international sensation, her story is representative of the many older Americans now piling onto the Internet. This is a remarkable change from the VCR era. And the difference is interaction. It is not the technology itself that is good for us; it is this volition, the sense of control, coupled with connection to any of the billions of others online at any time. The best affordance the Internet offers is a chance to move beyond ourselves, even if we have limited capacity to move in a physical sense. And the best news is that this isn’t the cod-liver-oil theory of learning, that you have to do it because it is good for you. It is the enjoyment that makes it so, and no one has to force you. Despite the stereotypes, older people are online as frequently as the young.
We need to apply the lesson of attention blindness to our midlife anxieties about the Internet and rethink where those anxieties come from. If we each were assigned our own private sociologist to follow us around, marking off instances of adaptation and successful change, we would be able to chart how much we learn and adjust every day. Because we are not “blessed” with this external analysis of our actions, we see only what we focus on, and we tend to focus on what frustrates, eludes, annoys, or shames us by making us feel inflexible, incompetent, and superannuated—a term that means, literally, being disqualified or incapacitated for active duty because of advanced age. We are hypersensitive when we hesitate, cringe, or stumble; we tend not to notice all the ways we have successfully integrated the new technologies into our lives.
Despite the fact that we adapt to enormous changes all the time, what we attend to most are those instances when lack of success brings a sense of inadequacy and shame. That sense especially accompanies middle age because, in modern society, especially in the United States, middle and old age are treated as shameful conditions, almost diseases or disabilities, in need of help or repair. Multi-billion-dollar youth industries—from plastic surgery to pharmaceuticals like Viagra—prey on our vulnerability. Rejuvenation is big business, especially as the world’s population ages.
If the patterns of learning predict what we see, then it is past time to unlearn our preconceptions of aging. Like Olive Riley, that indefatigable 108-year-old blogger, we can begin to find more exciting, open, inclusive ways of relearning suitable to this astonishingly rich and bewildering information age. In Olive Riley’s simple lesson plan for the future, “We’re never too old to learn.”