Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn - Cathy N. Davidson (2011)

Part III. Work in the Future

Chapter 6. The Changing Workplace

If there’s one place where the question of attention raises even more of a furor than when it comes to our kids, it’s at work. From the spate of books and articles on distraction in the contemporary workplace, you would think that no one had accomplished an honest lick of work since that fateful day in April 1993 when Mosaic 1.0, the first popular Web browser, was unleashed upon an unwitting public. The Internet, the pundits warn, makes us stupid, distracted, and unfit to be good workers. Yet according to the international Organisation for Economic Co-operation and Development (OECD), which has been measuring these things since World War II, the opposite is the case. Worker productivity has increased, not decreased, since the advent of the Internet and continues to increase despite the global economic downturn.1 In addition, every study of American labor conducted in the last five years shows that we work more hours now than our hardworking parents did—and they worked more than their parents did too. Most of us know this has to be true, and yet few of us really feel like we’re doing better than grandfather, that icon of the tireless American laborer. “Work flow” just doesn’t feel productive the way “work” was purported to feel. Why? Why do most of us feel inefficient when we are working as hard as we can?

As we have seen, most of the institutions of formal education that we think of as “natural” or “right” (because they are seemingly everywhere) were actually crafted to support an idea of productive labor and effective management for the industrial age. We have schooled a particular form of attention based on a particular set of values for the last one hundred years, with an emphasis on specialization, hierarchy, individual achievement, the “two cultures,” linear thinking, focused attention to task, and management from top down. Those values may well have matched the needs of the industrial workplace. Now that the arrangements of our working lives are changing in a contemporary, globalized workplace, those modes of training aren’t doing us much good. Why are we still hanging on to institutions to support a workplace that no longer exists the way it did a hundred years ago? If the current world of work can be frustrating and perplexing, it is for the same reason that parents, teachers, and even policy makers feel frustrated with schools: There is a mismatch between the actual world we live, learn, and work in and the institutions that have been in place for the last hundred years that are supposed to help us do a better job in that world.

The best way to answer the question of why we feel inefficient despite working to our limits is to take our anxieties seriously. We need to analyze their source and propose some solutions. The era we are living in is way too complicated to reduce to the current “Internet is making us stupid” argument or to its utopian counterpart (“Internet is making us brilliant”). We all know that neither generalization tells the whole story of our working lives. Both hyperbolic arguments reenact the attention blindness experiments from one or another point of view.2 If you insist the Internet is the source of all trouble in the workplace, you miss one of the most remarkable achievements in all of human history, not only the invention of the Internet and the World Wide Web, but also the astonishing transformations we have all undergone, as individuals and as societies, in less than two decades. Who even thought change of this magnitude was possible so extensively or so soon? On the other hand, if you think only in terms of the “wisdom of crowds” and the bountiful possibilities of “here comes everybody,” you trivialize the very real challenges that nearly all of us face as we try to adapt life-learned habits of attention to the new modes of communication and information exchange that have turned our lives around and, in some cases, turned them upside down.

Instead of going to either extreme, in thinking about how we might succeed in the contemporary workplace and what might lie ahead for our work lives in the future, let’s take seriously the causes of distress in the digital workplace and take the time to examine how the new workplace requires different forms of attention than the workplace we were trained for—or even than the workplace for which, ostensibly, we are training our children. We will also look at people who have developed methods, small and large, for improving their workplaces and whose insights might help us transform our own workplaces to take maximum advantage of new interactive and collaborative possibilities.

It’s about time. We need a Project Workplace Makeover as desperately as we need to make over our classrooms. If your office is anything like mine, it is long overdue for renovation. I am guessing that yours, like mine, retains the basic architecture, organizational structure, personnel rules, management chart, and conventional work hours of the twentieth-century workplace. Except for the computer in the office, you would think the Internet had never been invented.

No wonder we so often feel distracted and unproductive! How could it be otherwise when there is such a radical mismatch between workplace and work? We’ve created structures to help us focus on a workplace that no longer exists. We’re preserving old standards for measuring our achievements and haven’t yet made the changes we need to facilitate our success in a global economy that, everyone knows, has rearranged some of the fundamental principles of how the world does business. Is the real issue of the twenty-first century, as some would have it, that the Internet makes us so shallow we can no longer read long novels?3 Is that the real crisis of the contemporary workplace? I don’t think anyone who works for a living really believes that. But as a culture, we certainly are paying a lot of attention to this very literal-minded way of measuring what our brain does or does not do as well as it purportedly used to. Once again, our metric is too belated and our optic for looking backward is short-sighted. We are so fixated on counting basketballs that we’re not seeing the gorilla in our midst.

The gorilla isn’t the Internet, but rather a fundamental contradiction between how we actually are working today and the expectations we have for our workplace and our workday. About 80 percent of us who are employed still spend all or part of our workday away from home in an office. But although the walls of that office may not have moved, what an office is, in functional terms, has altered dramatically. More and more of us spend our days at our “desktops” rather than our desks. We might close the office door, but that merely shuts out the distractions in the corridor. It’s our desktop itself that connects us to everything on the World Wide Web. The center of our work life is a machine whose chief characteristics are interactivity and ease of access. The Internet connects us to everything that is necessary to our work at the same time that it connects us to everything that isn’t about the task we are doing in a given moment. A workplace is supposed to support us in our work, to help us focus in order to sustain our productivity. Yet very little about the workplace of the twenty-first century has been redesigned to support the new ways that we actually work together. That is some gorilla!

LET’S STEP BACK A MOMENT and think about what a computer actually is before we even begin to tackle the question of attention in the contemporary workplace. Because the information revolution has happened so quickly and so thoroughly, it is easy to take it for granted. Let’s slow the hyperbolic dumber/smarter Internet conversation way down and really think about the nature of the machine that has come to dominate our lives at work and at home.

To start with, a computer is not like any other piece of office equipment. Most machines in the workplace were designed with a single primary function. A typewriter types, a copy machine makes copies. A computer computes, of course, but it is also the repository of all of our work all the time, plus a lot of our play. The same computer gives us access to spreadsheets, the five-year audit, the scary corporate planning document, our client files, invoices, work orders, patient histories, and reports, as well as forms for ordering new pencils or even the mechanism for summoning the tech guy to come fix the computer. Whether we work in an automobile repair shop, a newspaper office, a restaurant, a law firm, a hospital, or even a lonely fire lookout tower in the Grand Tetons, more and more of our work depends on bandwidth. I used to joke that the only businesses today that weren’t part of the digital world were in Amish country. Then, a few weeks ago, I happened upon an Amish grocery store, part of a small Amish community in rural North Carolina. The whole-grain flour I bought was hand-milled using techniques unchanged in the last two hundred years, but the cash register instantly tracked my purchases, updated the store’s inventory, and sent the data to suppliers shipping Amish goods worldwide. So much for that joke.

The computer connects us to every single work task we do, past and present, but it also links us to our colleagues and their questions and quibbles. At the same time, it keeps us attached to every joy and problem in our personal life, to family, to community, to entertainment, shopping, and hobbies, to all of our habits in the workplace but also outside it. It’s a lot to ask that a person stay on task when the very same keyboard holds all the distractions of everything else we know we should be doing, plus all the temptations of everything we know we shouldn’t.

We’ve inherited a sense of efficiency modeled on attention that is never pulled off track. All those little circles to fill in neatly on those multiple-choice exams are the perfect emblem of our desire to instill a sense of “here’s the problem” and “here’s the solution,” as if the world of choices is strictly limited and there is always one and only one right answer to be selected among them. If ever that model of workplace attention and problem solving sufficed, it is hard to imagine anything less like the limitless digital possibilities we juggle now. Single-minded focus on a problem doesn’t remotely correspond to the way we must work if we are going to succeed in the twenty-first-century workplace.

It is not that the Internet is distracting. It is a fact of twenty-first-century life that we now work in a jumpier, edgier, more shape-shifting way than we did a decade ago. That way is required by, facilitated by, and impeded by the Web. Broadcast media are being displaced by participatory media, with the number of daily newspapers declining and the number of blogs escalating. Encyclopedias are replaced by Wikipedia. We’re now up to 2 billion Google searches a day—and 4.5 billion text messages.4 To say we live in an interactive era is a gross understatement. It’s just missing the point to say all that connection makes us unproductive when connection is, for most of our endeavors, what also makes us productive. The principal mechanism of our productive labor is also the engine of our distraction.

That would seem to be an impossible pairing. Yet we’re doing it, and according to the OECD, we’re doing it well enough to rack up higher scores for our productivity than ever before. How are we managing this? Sometimes it’s by burning the midnight oil, returning to our work after dinner to wrap up the report that got waylaid somewhere between the worrisome financial projections from head office, the plans for Mandy’s baby shower, and the crazy YouTube videos everyone was sending around the office listserv today.

So those are the two interconnected traps of our workplace, the Scylla and Charybdis of contemporary attention. On the one hand, having the mechanism of distraction so close at hand hurts our efficiency at work. On the other hand, having those very same properties bundled in one computing device ensures that work invades our homes, our weekends, our vacations. All the old markers of what was supposed to be leisure time now bleed into the workday, while work now bleeds into our leisure time. If the Internet distracts us at the office all day, it also sends work home with us each evening to encroach on our family and leisure time.

The complaint about too much distraction on the job is the flip side of worrying that there’s no escape from work. The executive on the beach with his PDA or the lawyer recharging her laptop for the next leg of a transcontinental flight are as much symptoms of our transitional age as is the fear of too much distraction. If there is occasion for leisure as well as work on the laptop at the office and occasion for work as well as play on the laptop at home, where are the boundaries anymore? Where does work stop and leisure begin?

As we’ve seen with schools, the last hundred years were spent developing institutions that were suited to productivity in the twentieth-century workplace. If you have a job just about anywhere but Google, you are most likely working in a space designed for a mode of work that is disappearing. We’re all Amish in the sense that we juggle contradictory ideas of the workplace these days. We’ve just begun to think about the best ways to restructure the industrial labor values we’ve inherited in order to help us maximize our productivity in the information age.

The science of attention is key to the workplace of the digital future. If we understand more about how we have been encouraged to pay attention for the last hundred years and how we need to pay attention now, we can begin to rethink work in the future. By looking at new ways that individuals and even corporations have begun to refocus their efforts for the digital age, we can start to rearrange the furnishings of the contemporary workplace in order to better manage the work-leisure challenges on our desktop—and in our lives.


Gloria Mark knows more about how we pay attention in the information-age workplace than just about anyone. Professor of informatics at the Bren School of Information and Computer Sciences at the University of California–Irvine, she studies the many different ways we work effectively and the ways we don’t. With an army of students, she’s been clocking the work habits of employees, especially those who work in the information industry, measuring their distractions in hours, minutes, and even seconds. She’s discovered that the contemporary worker switches tasks an average of once every three minutes.5 Often that change isn’t just a matter of moving from one job to another but requires a shift in context or collaborators, sometimes communicating with workers around the globe with different cultural values and who do not speak English as a first language. Perhaps not surprisingly, once interrupted, the workers she’s studying take nearly twenty-five minutes to return to their original task.6

The high-speed information industry may be an extreme case, but that’s her point. If we’re going to understand attention in the contemporary workplace, why not dive into the belly of the beast and see how those who are designing the information age for us are managing? If they can’t stay on task for more than three minutes at a time, what about the rest of us? Are we doomed? That is certainly a central concern of our era.

Mark’s research doesn’t stop with measuring multitasking but looks more deeply into how our attention is being diverted from one thing to another. She finds that, even in the most high-speed, high-tech, multitasking, app-laden, information-intensive workplaces on the planet, 44 percent of workplace interruptions come not from external sources but from internal ones—meaning that almost half the time what’s distracting us is our own minds.

That is really interesting. Think about it. If I leave the office and don’t feel that I’ve had a productive day, I might blame too much e-mail, annoying phone calls, or coworkers bursting into my office unannounced, but almost half the time I’m actually distracting myself, without any real external interruptions to blame. I need no goad for my mind to wander from one job to another or from work to something that isn’t work—surfing the Web, gaming, checking out my Facebook page, or simply walking down the hall to the water cooler or staring blankly out the window.7

Mark’s emphasis on different forms of mind wandering, including those I instigate all on my own, is far more consistent with the modern science of attention than the more instrumental arguments about what or how the Internet “makes” us. Biologically speaking, there is no off switch for attention in the brain. Rather, our attention is trained in one direction or another. If we’re paying attention to one thing, we’re not paying attention to something else—but we are always paying attention to something. That’s the way the brain is structured. Even when we’re sleeping, our brain pays attention to its own meanderings. While awake, in a given moment, we may be focusing on what we should be focusing on, or we might be engaged in a reverie about tonight’s dinner or the memory of last night’s winning basket in the Bulls game. Fifteen minutes of daydreaming can go by in a flash because we’re absorbed. We’re paying attention. Time flies. Then, when we are interrupted out of our daydream, we see we’ve lost fifteen minutes toward the looming deadline, and we curse our distractions.

Suddenly it’s a good time to blame the Internet. But blaming technology is actually a shorthand for saying that our customary practices—what we have come to think of as normal—aren’t serving us in the comforting, automatic way we’ve come to rely on. We blame technology when we notice our habits aren’t quite as habitual as they once were because we haven’t yet developed new habits that serve us without our having to pay attention to them. Change is constant and inevitable. We blame technology when we aren’t coping effortlessly with change. Like attention, change is what we notice when it’s a problem.

It’s precisely because we’re always paying attention to something that we need a workplace optimally designed to help us focus in a way that facilitates the kind of productivity the workplace values. There is no attention or productivity in the abstract. What do we want out of the workplace? How can we structure the workplace to facilitate that goal?

We know it can be easier or more difficult to pay attention in certain situations than in others, and there’s a cottage industry of industrial psychologists studying what does or does not contribute to our distraction in the workplace.8 Given that the mind wanders all on its own a lot of the time, it makes sense to try to minimize external causes, then maybe rethink some of the possible internal motivators, too. For that, it’s useful to analyze how our old industrial work habits might have been upended by our Internet connection.

Most of the physical spaces in which we work were designed in response to ideas of work developed in the late nineteenth and early twentieth centuries. Just as our methods of measuring success at school are intended to emphasize the separation of subjects, projects, age-based grade levels, achievement, ability, skills, and tasks, so, too, is the workplace constructed for separation of tasks and workers, not for connection or collaboration. Most workspaces take large, open spaces and divide them up with walls or prefab partitions. Even in offices with open, free-flowing spaces, desks (including piles of papers on desks) are often lined up to create symbolic separate workspaces.

Time, too, was divided up artificially in the old workplace. You didn’t take a break when you were tired or go to lunch when you were hungry. The day was scheduled. You “got ready for work” and went to work. The prescribed workday was designed to separate time spent in the workplace from home, nature, and fun (leisure activities, entertainment, amusements, and so forth).9 Both at school and in the workplace, break times—recess, lunch, coffee breaks—were officially separated from the serious parts of our day, often with regulations (whether enforced by the employer or championed by unions) about how much nonwork time one was to be allocated during a given day. That binary structure of on and off, work and break times, continues, for most of us, in the twenty-first-century workplace. The wired computer, however, defies segregation. Who needs a break away from the office to chat with friends, shop, or play games when all those opportunities exist a mouse click away?

The bricks-and-mortar industrial workplace was designed to minimize distractions from the outside world. Very few contemporary workplaces have rethought their mechanisms, rules, or even informal protocols in light of the myriad possibilities offered by the digital world that structures much of our day.

Even when we are doing great, either collectively (those higher and higher OECD scores for worker productivity) or individually, we live in a world where productivity is almost impossible for employers to quantify, especially if we measure by old markers. So much of what constitutes work in our times doesn’t feel like measurable, productive labor.

Take e-mail. I don’t know about you, but my laboring heart never swells with the pride of accomplishment after eight hours dealing with nonstop e-mail. The constant change in focus required to answer thirty or a hundred e-mails in the course of an average workday creates what management experts call a psychological toll, because it requires constant refocusing and context switching. Each e-mail, with its own style and implied but unstated norms, is a miniature cross-cultural interaction that requires, again in the management jargon, code-switching.10 No one who wants to keep a job sends the same style e-mail to her boss’s boss and to that wacky marketing guy who forwards all the knock-knock jokes, but we haven’t formulated even the most mundane conventions or rules for e-mail correspondence to help us out. Remember how prescriptive business letters were? Date on line fifteen, and so forth. Back in those good old days, when a business letter thunked into your in-box once or twice an afternoon, you had the leisurely pace of opening the letter, reading it, thinking of how to craft a response, researching the information needed, writing the response, typing it out, putting the letter into the envelope, and mailing it, with a waiting period before you had to deal with its consequences. That process might be the job of three different people, or a half dozen if we count the mail system and mail carriers, too. Now most of us do all these jobs many times a day. Even my doctor now examines me, decides if I might need medicine, and then, instead of scrawling a prescription on his pad, e-mails my personal information and the prescription to my pharmacy. I am certain that data entry was not part of his medical training.

We have not yet come up with etiquette, protocols, or even shared expectations for e-mail. Not only is there a stunningly greater amount of communication with e-mail than with what is so appropriately called snail mail, but no one agrees about timing. “Did you get my e-mail?” translates into “Why haven’t you answered yet?” If I see that you’ve just updated your Facebook page from your iPhone on Saturday morning and send you a business query, should I expect an immediate answer, or are you justified in waiting until your weekend is over to send a reply?

To respond to the avalanche of information in our lives, some entrepreneurs are trying to create digital tools that help us pay attention by managing the external disturbances of the perpetual digital flow. A software utility called Freedom, designed by developer and educator Fred Stutzman, lets you block the Internet for a specified period. If you need to work on your computer but want to be left alone for a few hours to focus only on the work on your screen, you can set Freedom to control the information flow. Or there’s Rocket’s Concentrate, which, for $29, you can download to your Mac and then load with only those programs you rely on for a given job. So, for example, if I have a work deadline looming, I can set Concentrate to open my preferred Firefox browser, but it might block my e-mail and close my access to distracting social networking sites such as Twitter and Facebook.
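Under the hood, a blocker like this can be surprisingly simple. The sketch below shows, in Python, one common approach (a minimal illustration, not Freedom’s or Concentrate’s actual design): it redirects a list of distracting domains to nowhere for a fixed interval, then restores them. The domains, the duration, and the file path are all assumptions for the example.

```python
# A minimal sketch of a Freedom-style blocker (one common approach,
# not Freedom's or Concentrate's actual design). It redirects a list
# of distracting domains to localhost for a fixed period, then restores
# the hosts file. Run with administrator privileges.

import time

HOSTS_PATH = "/etc/hosts"  # on Windows: C:\Windows\System32\drivers\etc\hosts
MARKER = "# --- focus block ---"
BLOCKED = ["facebook.com", "www.facebook.com", "twitter.com", "www.twitter.com"]
BLOCK_MINUTES = 120  # "a few hours" of enforced focus

def block() -> None:
    """Append hosts-file entries that send each blocked domain nowhere."""
    with open(HOSTS_PATH, "a") as hosts:
        hosts.write(f"\n{MARKER}\n")
        for domain in BLOCKED:
            hosts.write(f"127.0.0.1 {domain}\n")

def unblock() -> None:
    """Remove exactly the entries that block() added."""
    added = {f"127.0.0.1 {d}" for d in BLOCKED}
    with open(HOSTS_PATH) as hosts:
        lines = hosts.readlines()
    kept = [ln for ln in lines if ln.strip() != MARKER and ln.strip() not in added]
    with open(HOSTS_PATH, "w") as hosts:
        hosts.writelines(kept)

if __name__ == "__main__":
    block()
    try:
        time.sleep(BLOCK_MINUTES * 60)  # stay focused until the period ends
    finally:
        unblock()  # restore access even if the script is interrupted
```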

These devices may work well for some people, but I can’t help thinking about Gloria Mark’s research. If I use Freedom and Concentrate, how much will I be worrying about what may be coming in that I’m not seeing? Software programs might be useful to control the information flow to my computer, but they aren’t doing anything to control the flows of curiosity and anxiety that addle my brain.

We need not only new software programs but also revisions of workplace structures and rules. We also need more good, solid, empirical research on how we actually pay attention in the contemporary workplace and on what works and doesn’t work to help us control our attention flow.

This is why I turned to Aza Raskin. Formerly head of user experience for Mozilla Labs, he’s now Mozilla’s creative lead for Firefox, the world’s second most popular browser. The Firefox story is a bit like David and Goliath. No one expected an open-source, crowdsourced, collaboratively developed browser to do so well against the giant, Microsoft’s Internet Explorer.11

I first met Aza Raskin in 2009, when he served as one of our eminent finalist judges for the international HASTAC/MacArthur Foundation Digital Media and Learning Competition. He was twenty-four or maybe twenty-five at the time, a wunderkind. He’s a thoughtful, energetic person who has his mind on how to make the digital workplace a better and happier place for all of us. Because his expertise is in human-computer interaction, he is little concerned with abstract rhetoric about how we should or shouldn’t pay attention or about which way to pay attention is best. His job is to design better, more efficient interfaces between our human abilities and current computer design. Rather than hectoring us on how we should or shouldn’t be paying attention to our computer screens, he is trying hard to help us in that endeavor.

Right now, Raskin is developing new ways to improve the cumbersome system of tabs on our computers. His data, gleaned from how millions of us keep track of our information during the course of a day, show that we rely on tabs more than bookmarks, but the tab system is a mess. It works OK for eight or ten tabs, but beyond that, our tabs run off the page, and text-based organization doesn’t help us remember what we have open and how to get to that information easily. Raskin thinks it’s evident that the product design of a standard computer has not begun to keep pace with the ways we use it. We’re having to fight the machine’s limitations just to do our business.

Raskin’s work is all about removing those limitations. His charge is to figure out ways that the Internet can help us to do our work productively and even, he might say, joyfully. In fact, I’m not sure he would think that is a useful distinction, because cognitive science, as we’ve seen, shows conclusively that we do better at the things that make us happier, at the tasks that reward us emotionally and give us pleasure. So his challenge is to make the Internet more fun so we can be more effective.

Raskin scoffs at the idea that multitasking is something new under the sun. Computers require no more multitasking than, say, lassoing an injured bull in a field, fixing a power line downed during a fierce electrical storm, or keeping track of an infant and a toddler while making dinner. It’s hard to imagine how any of those or other complex human activities count as “monotasking.” Attention is divided when various important things are happening around us to divide it. That’s how attention works. So he sees it as his job to find the best ways to make the most of our cognitive capacities and everyday habits, given both the existing tools we have at our disposal and the better ones he and his team can imagine for us.

He’s currently collecting data and experimenting on the ways we use visual and other sensory cues to stay focused. What if tabs weren’t just strings of words but came with sounds or colors or icons or badges to help us recall what we opened earlier? He’s studying how we actually use our computers, because humans, as we know from the attention-blindness experiments, are notoriously poor at knowing how we actually do what we do. Unless accompanied by a psychologist or ethnographer, most of us can’t stand outside ourselves to see what makes us most efficient. Left to our own devices, we tend to go to the lowest barrier to entry, which can mean we gravitate to whatever least disrupts our current habits, even if those habits don’t work very well in the long run. He’s convinced interface designers need to be a lot smarter about how the Internet can help us be smarter.

This, of course, prompts the one-terabyte question: How does the designer charged with remaking the Internet to make our lives easier organize his own workflow? Never mind how he organizes the innumerable data points from the billions of bits of information gleaned from 360 million Firefox users. How does he organize his own, personal workspace right now?

Raskin’s main organizational computer trick is one I’ve heard others in the industry espouse too. He divides his tasks between two entirely separate computers, each with its own work functions, kept a small but meaningful distance apart. He insists that having a single machine for all functions doesn’t suit the human brain very well. With the old physical typewriter, there was only one piece of paper in the machine at a time. The typewriter controlled our workflow whether we wanted it to or not. We haven’t really figured out how much of a hodgepodge of old and new forms of work the computer really is. Maybe computers of the future will all come in multiples. Until then, Raskin improvises by using two computers, each loaded with different kinds of information, presorted for his work needs.

Instead of sorting by function, he sorts by hierarchies of what he wants to pay attention to. He uses his main computer to hold the major project on which he wants to concentrate on a given day. That’s a first principle. He decides what he wants on the prize computer and leaves it on that machine until he is finished with it. That might be computer code he’s writing, data he’s crunching or analyzing, the results of an experiment he’s running, or a scientific paper he’s authoring. Nothing else goes on that computer. Adjacent to his main computer, there’s an entirely separate one that connects him to the outside world—his work e-mail, his Internet connection. If he needs to move away from his programming to do some research on the Web, he physically turns to this second computer.

One thing Raskin emphasizes is that, although this seems simple, simplicity is good. It’s an ideal for him. Behind this simple division is a lot of research on the brain and also the body. He notes that the very act of turning forces him to move his chair a little, to shift posture, and that switch is good both physically and mentally. He might take twenty-five minutes on the secondary computer, but throughout that diversion, he constantly has the primary computer and the primary task there in his peripheral vision. When it’s time to turn back to the main task, it is right there, uninterrupted, exactly where he left it.

“I’m still multitasking,” he says, “but there’s a hierarchy in my brain of what is unchangeable and important on the main computer, and then everything on the small screen is done in the service of that unchanged work. It helps a lot.”

But that’s not all. He has yet another computer set up at a distance, across the room or, even better, down the hall. This one requires him to leave his desk chair. He reserves it for all the fun things that might tempt him in a given day. His blog might be there and his active Twitter stream, perhaps personal e-mail, multiplayer or single-player games he likes, social networking sites, all the things he enjoys online. They are the enticement to fun so, to enjoy them, he has to get up physically from his desk and walk. “Ergonomically, that’s important. I’ve made my temptation be a healthy thing for my body. What refreshes the body helps me be mentally alert when I return to my main screen.”

He also has written a software program for his fun screen to aid and abet him in returning to the main, unchanging work screen. A programmed to-do list updates automatically and pops up only when he is using one of the procrastinating sites, not when he is in the middle of productive work. Ten minutes into joyful Tweeting, a message reminds him he has to send off some new programming to his associate in Toronto or Doha by 6:00 P.M. Time does fly when you’re having fun, so the to-do list is programmed to remind him of passing time and of deadlines awaiting him.
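Raskin doesn’t spell out how that program is built, so the following Python sketch is only a guess at the mechanism, with hypothetical site names, tasks, and a stubbed-out check for the active site: the reminder logic runs on a timer but speaks up only when a procrastination site has your attention.

```python
# A guess at the mechanism of Raskin's pop-up to-do list (he doesn't
# describe the implementation, so every name here is hypothetical):
# deadline reminders appear only while a procrastination site is in
# the foreground, never during productive work.

import time
from datetime import datetime, timedelta

FUN_SITES = {"twitter.com", "youtube.com", "reddit.com"}
TODOS = [("Send the new build to Toronto", datetime.now() + timedelta(hours=4))]

def active_domain() -> str:
    """Stand-in for asking the browser which site is in the foreground;
    a real version would be platform- and browser-specific."""
    return "twitter.com"  # hard-coded so the sketch runs end to end

def remind(task: str, due: datetime) -> None:
    minutes_left = max(0, int((due - datetime.now()).total_seconds() // 60))
    print(f"[to-do] {task}: due in {minutes_left} minutes")

while True:
    if active_domain() in FUN_SITES:  # interrupt the fun, never the work
        for task, due in TODOS:
            remind(task, due)
    time.sleep(600)  # check again in ten minutes
```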

He’s recently come up with an even more ingenious way of controlling his procrastination. He has programmed the fun sites so they get slower and more spotty the longer he’s on them. He’s found that when he slows the sites (mimicking the annoyance of being on a bad connection), he gets frustrated at the site instead of at the blocker. In doing so, he takes the joy out of procrastinating! It’s a brilliantly simple solution to one of the workplace dilemmas of our era. Like everything Raskin does, whether to train his own work habits or all of ours, it follows his motto: “I simplify.”
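The trick itself takes only a few lines: latency that grows with the time already spent on a site. Here is a minimal sketch, assuming a hypothetical local proxy that calls injected_delay() before forwarding each request; the sites and the delay schedule are invented for the example, since Raskin’s own code isn’t shown here.

```python
# A minimal sketch of progressive slowdown, assuming a local proxy
# that calls injected_delay() before forwarding each request. The
# sites and the delay schedule are invented for illustration.

import time

FUN_SITES = {"twitter.com", "facebook.com", "youtube.com"}
_session_start = {}  # domain -> timestamp when this visit began

def injected_delay(domain: str, now: float) -> float:
    """Seconds of artificial latency: zero for work sites, growing
    quadratically with minutes spent on a fun site, capped at 30s."""
    if domain not in FUN_SITES:
        return 0.0
    start = _session_start.setdefault(domain, now)
    minutes = (now - start) / 60
    return min(0.05 * minutes ** 2, 30.0)

def handle_request(domain: str) -> None:
    """Proxy hook: stall for the computed delay, then pass the
    request through to the real site as usual."""
    time.sleep(injected_delay(domain, time.time()))
    # ... forward the request here ...
```

Five minutes in, the delay is barely perceptible; twenty minutes in, every click drags, and the frustration lands on the site rather than on the blocker.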

Raskin, in other words, is not just a software developer. He designs better ways that we can interact with our machines. As we’ve seen, the revolutionary difference between the Internet and old mainframe computing devices is interactivity, and he believes that fundamentally the Internet is what we all make it. We do not have to be automata, machines controlled by our machines. Raskin urges us to take charge, for that, of course, is the Open Web credo. The Web R Us.

Not all of us have as many computers at our disposal as Aza Raskin does, but we can each learn something important from his emphasis on compartmentalizing his work and his play according to his own needs and in his goal of programming interruption and physical movement into his screen-dominated workday. I suspect that if consumers begin to clamor for devices that facilitate this kind of separation, the computer industry in a few years will be offering us relatively inexpensive options that make it possible—apps for different attentional styles, for different kinds of workflow, and maybe even for programmed work stoppage and messages that urge you toward that water cooler for a fifteen-minute break.

In our transitional time, when we haven’t yet developed new habits, and in new and more flexible work situations—more and more of us working at home and without a foreman or middle manager standing over us telling us what has to be done now—learning our own ways of paying attention and customizing workflow to those ways is key. For one of my friends, the answer is humor. About six months ago, he brought an embarrassingly analog Felix the Cat kitchen timer (it has an actual bell) into his minimalist office at a high-tech firm and set it defiantly beside his computer mouse. When he has a project due, his first task of the morning is to set the timer for an hour or ninety minutes, pull up the project on his desktop screen, and go for it until the bell starts ringing so crazily that Felix is grinning and waving his hands and the whole thing rocks and rolls. That’s his signal to give himself time to think about the work he’s accomplished so far. He does this by going for his morning coffee and sipping it while taking a stroll around his entire building. When he returns to his desk, he decides whether to wind up Felix the Cat again or, if the project is done, to open his e-mail for the first time, send the document off to his collaborators, and, with the timer ticking (really ticking) again, set himself a specific amount of time on e-mail until lunch. And so on.

Raskin is working on somewhat more sophisticated interfaces than the hilarious one my friend has come up with, but the point is the same: to recognize that we are people with bodies, not just brains and eyes and typing fingers. Because of the computer’s evolution from the typewriter, information is currently dominated by text, but Raskin never forgets the whole person, which is why he’s designed his office solutions to force himself to move and turn and twist and otherwise engage his body in the act of changing attention. Someone else might find auditory cues more helpful, anything from theme music assigned to different tasks to a cash-register ca-chink, ca-chink satisfyingly recording each task completed. Others might find badges, such as the ones gamers earn for winning challenges, a useful way to measure progress. What about color? Would we even be able to keep seventeen colors straight, or would that rainbow text confuse us even more?

The answers to these questions are still being worked out, but the point is that Raskin is asking them at this level. He is not using some twentieth-century measure to dictate how we use the twenty-first-century workplace. It’s not our brain that’s the issue. It’s the antiquated design of the workplace, and even of computers themselves. Like Christian cathedrals built upon Roman ruins, the desktop is a remnant of an older culture that did not champion the values on which the World Wide Web is based: interaction, collaboration, customizing, remixing, iteration, and reiteration.

“We often think we’ve solved a problem when we’ve merely come up with a good answer to the wrong question,” Raskin insists. He’s big on unlearning. He calls it inspiration. Remember, Mozilla was built using an open, crowdsourcing model. That means, with any problem, if you get it wrong the first time, instead of being dismayed, you are challenged. When you get it right, you go to the next level. Open-source building is a lot like game mechanics that entice everyone playing to compete together so they can move to the next challenge. If the end product is enticing and the path exciting, everyone will pitch in, not only to solve the problem but to expand it and then solve the more complex problem too. That’s how Firefox was created.

“You don’t get people interested in riding the railroad by building the railroad,” Raskin says. “And you don’t do it by telling them they have to ride the railroad. You do it by building the great resort town at the end of the railroad that they want to get to. Then you build the first hundred miles of track and invite them to participate in building it the rest of the way.”

It’s a little like luring yourself out of your desk chair to go to the fun screen, thereby forcing yourself to do an invigorating stretch along the way. Simple. And also attractive. These principles are Raskin’s alpha and omega. He even calls himself a maker of “shiny things.” He believes simple, shiny things help us turn the digital office into a place of possibility and creativity, not anxiety and limitation. It’s an opposite way of thinking, really, from much of the anxious discourse about the overloaded, distracting, and confusing contemporary workplace.

AZA RASKIN WAS BORN IN 1984. His father, the late Jef Raskin, started the Macintosh project at Apple in 1979, and his essay “Computers by the Millions” is one of the visionary think pieces of personal computing.12 Unquestionably, Aza grew up imbibing the idea that the workplace of the future could be a shiny, simple space. I’m glad the way we use our computers is in his hands.

But what about the rest of us? What about those of us who were born before 1984? We weren’t brought up on the seamless, continuous interactivity of the World Wide Web. We weren’t brought up to see personal computing as either shiny or simple. Many people who do not count as “millennials” see the digital workplace as a morass and view the desktop as some siren tempting us away from the real business of work. And frankly, few of us have the power to requisition two extra computers to help realize ideal working conditions.

I don’t want to dismiss our anxieties. As any good student of Freud will tell us, unless we understand root causes, we won’t be able to come up with a cure. And most of us need a cure. The double-bind of the labor speed-up coupled with obsessive public and corporate attention to the “problem” of multitasking and distraction is an entirely unsatisfactory vision of the future of work.

From a historical standpoint, it’s a problem whose roots are easy to see. After all, we inherited our twenty-first-century workplace from the industrial age. It makes sense that Gloria Mark and others have been called upon to monitor, clock, time, measure, and assess our efficiency. We have inherited an ethos of work designed for bricklayers, not information workers, and a model of attention designed with the same tasks in mind. We are inheritors of a workplace carefully designed to make us efficient for a world that no longer exists.


After the panic of 1873, at the height of one of the worst depressions in American history, Frederick Winslow Taylor, scion of one of Philadelphia’s wealthiest and most prominent Quaker families, walked into a pig-iron plant seeking work as an apprentice. Factory work was an odd career choice for a severely myopic graduate of the elite Exeter Academy who had seemed bound for Harvard Law.13 Soon, though, Taylor wasn’t just pursuing factory work as a career path; he was reforming it by dedicating himself to industrial efficiency. The day Taylor pulled out his stopwatch to clock his peers at their tasks was a watershed moment in the history of work. In two famous books compiling his “time and motion studies,” Shop Management (1905) and The Principles of Scientific Management (1911), Taylor alerted readers to “the great loss which the whole country is suffering through inefficiency in almost all of our daily acts.”14 He documented the problem meticulously: “One bricklayer and helper will average about 480 bricks in 10 H[ours],” Taylor recorded on January 29, 1880. On the next day, he wrote that a laborer with a wheelbarrow full of loose dirt could “wheel it approximately one hundred feet exactly 240 times in a ten-hour day.”15 To solve the inefficiency problem, he set arbitrary (he would say “scientific”) production quotas, penalized “malingerers” and “loafers,” and rewarded the “soldiers” who exceeded the production quotas he set. Along with Henry Ford, who later turned out profitable Model T’s on his assembly lines, Taylor created the modern industrial workplace, transforming how America and the rest of the world made things, whether in pig-iron factories, auto factories, or fast-food operations.

Management guru Peter Drucker calls Taylor’s work “the most powerful as well as the most lasting contribution America has made to Western thought since the Federalist Papers.”16 If our schools are designed to train kids to do single, specialized tasks on time and to a schedule, it is to prepare them for the Taylorist workplace. If the modern manager and executive also feel the most important part of their jobs is to complete discrete tasks, that, too, is Taylor’s legacy.17 Taylor recognized that humans are more easily diverted from their tasks than machines. His antidote to this problem was to make labor as machinelike as possible. He argued that uniform, compartmentalized, and undeviating human operation equals productivity.18 He also emphasized the importance of segregating one kind of worker from another, and especially of separating management from labor. He assumed, sometimes explicitly and sometimes implicitly, that laborers were inferior, possessing an “animal-like” intelligence that suited them to manual tasks. They needed strict supervision, structures, schedules, rewards and punishments, and a predetermined place in the movements of the assembly line in order to perform in a way efficient to industrial production.

It took a disaster to help extend Taylor’s ideas beyond the factory to the modern office building, though the disaster itself took place two years before Taylor set foot in a plant. Whether or not the great Chicago fire of 1871 was actually started by Mrs. O’Leary’s cow kicking over a lantern, America’s hub city was left in ruins. Because the city connected the offices back east to the loam of the Midwest—cattle and corn—through both the financial operations of the commodities exchange and the physical nexus of the railroads, Chicago had to be rebuilt quickly, yet large questions loomed about how to rebuild. The fire caught the city at a turning point in architectural and urban design as well as in industrial philosophy. Retain Victorian edifices or zoom on into the modernist future? The construction would decide the fate of not just the city’s architecture, but its character, even its future. Chicago’s city officials and planners voted for modernism. Resources were invested in rebuilding the city as the urban workplace of the future. Led by architects Henry Hobson Richardson, Frederick Baumann, Dankmar Adler, and Louis Sullivan, Chicago became the first city of skyscrapers. While the factory and assembly line sprawled horizontally, the “Chicago skeleton,” a steel frame structure that supported the full load of the walls, allowed buildings to be taller than ever before. In 1884–85, the ten-story Home Insurance Building opened in Chicago to enormous fanfare. Thereafter the urban landscape was rapidly transformed, first in Chicago and then everywhere.

As the skyscraper rose, Taylor’s principles of management rose with it, as exemplified by the growing importance of the MBA. At the brand-new Wharton School of Business at the University of Pennsylvania, faculty and students began to theorize the best way to subdivide white-collar office work in a way analogous to what Taylor did for the assembly line. The first MBAs worked to define a corporate structure that could be realized and reinforced by interior and exterior building design: hierarchy, stratification, specialization of management roles and tasks, the creation of middle-manager roles, differentiation and coordination of managers and clerical workers, and measurement of individual and unit productivity and quality were all objectives. Because there was no foreman for white-collar workers, industrial theorists came up with literal ways to make the office hierarchy obvious. Top floors were reserved for top executives. Senior managers were awarded corner offices from which they could gaze out over the city, as if keeping watch. And to minimize distraction, the modern office became as devoid of ornamentation, decoration, luxury, leisure, or pleasurable amenities as possible. Offices became subdivided into areas, including financial departments, technology areas, and almost from the beginning, “human resources” (HR) departments to manage this new mass of people all in one physical, contained space.

Everything about the modern office building was designed to signal that “this is not your home; this is not fun; this is not personal; this is not about you.” If you work in a city, the odds are that you spend the better part of each workday in a building that still adheres pretty closely to these one-hundred-year-old principles.

Since there was no pig iron to weigh in the office setting, new ways of measuring how effectively office workers performed had to be developed to help sort workers, to measure what they produced at their desks, and to help determine the practices for advancement and promotion. The new business schools championed the latest statistical methods, including deviation from the mean and multiple-choice achievement, aptitude, and personality testing, as well as IQ tests. Assessment skills became key to the MBA, which led to standardizing how we measure success and regulate work, from productivity to human-resource practices to profit margins.

There was a gender element too. Far and away, the majority of business school professors, in the 1920s as in the present, were men. And more often than not, these men were training other men to be managers of women. During World War I, many office jobs that formerly had been held by men—secretaries, typists, receptionists, and phone operators—became “pink-collar” jobs assumed by the women who were not out fighting a war. When the soldiers came home, it was common for young, unmarried women to stay in those pink-collar office jobs, paid at considerably lower salaries than their male counterparts had earned. Men who would have occupied those positions previously spread out into the increasing ranks of the “middle manager.”19

Business schools were almost exclusively the province of men. Being able to wield statistics—with numbers obtained from standardized testing—was part of the artillery of the MBA. The science of evaluation and assessment became key to the curriculum of the new business colleges and MBA programs.20

The spreadsheet (literally a “spread” across two facing pages as in a magazine or newspaper) became the white-collar equivalent of the stopwatch. Managers devised new ways to measure office productivity, to expose any deviation from attention to the task at hand. Buildings became as task-specific as workers, with spaces divided into business areas, recreational areas (restrooms, break rooms, cafeterias), areas of heavy traffic flow (central hallways), and then the quiet work areas that would be undisturbed, solitary, and sterile.

Whether applied to life at the assembly line or inside the new skyscrapers, efficiency was a harsh taskmaster. It required that humans be as uniform as possible, despite their individual circumstances, talents, or predispositions. Working regular hours, each person was assigned a place and a function; doing what one was told and not questioning the efficacy of the process were both part of twentieth-century work. But a problem increasingly reported in the modern office was self-motivation. With so much control exerted by others, there wasn’t much reason for the office worker to think for himself, to exceed expectation, or to innovate. Regularity and regulation do not inspire self-motivated workers. Assigning tasks to workers and ensuring that they carried them out on time and with care fell more and more to the middle manager.

By the late twentieth century, HR departments, perfecting rules to regulate who must do what, when, where, and how, had grown larger than ever, to become one of the defining components of the modern workplace. According to the U.S. Bureau of Labor Statistics, over nine hundred thousand Americans are currently working as HR professionals.21 As with testing, for HR, uniformity and standardization are the ideal, with workers’ compensation laws and litigation practices providing the teeth to keep bureaucracies and bureaucrats productive. These rules come with time clocks and time cards, all prescribing the twentieth-century way of work.

DOES ANY OFFICE TODAY ACTUALLY work on the same principles as the HR department’s rules? At HASTAC, our programmers and our server administrators work together constantly. Neither group reports to the other, but the relationship is mutually interdependent while evolving quite independently, except at the point of connection. You can’t have a Web site without a server; servers exist because they are needed to host Web sites.

One New Year’s Eve, Barbara, the newest programmer on our team, needed code transferred to the development server before she could complete a task. Brin, the server administrator, pitched in. I was the supervisor of the project, but I only found out after the fact that they had both been working on New Year’s Eve. (I admit I was not happy about this.) They communicated with one another almost entirely on Jabber, an instant messaging system. One chatted from Southern California and one from North Carolina. The two of them have never met. And to this day, I have not met either one of them. That is a different kind of workplace and a different kind of worker than we have seen before.

What are the best labor practices in this twenty-first century, when the very idea of a workplace is being challenged and the workday is basically 24/7? Social scientists study our contemporary “interrupt-driven” work life, but when work is this decentralized and distributed, the very concept of an interruption is irrelevant.22 In fact, what was “interrupted” that New Year’s Eve was the participants’ leisure time, no doubt when they noticed one another on Twitter or some social networking site, responded, and got the job done. Almost nothing about the New Year’s Eve interaction at HASTAC “fits” in twentieth-century management theory, architecture, or interior design, and I cannot even imagine what statistical measures one would use to quantify or assess its efficiency. But it certainly is effective and productive in every way.

Is indefiniteness a bad thing? Are our twenty-first-century work lives really all that bad? Or to turn the question around, were work lives ever as uniform, task segregated, and uninterrupted as retrospect may make them seem? If I try hard enough, I can remember back to the distant twentieth century when work was not always humming along, all the attention in the right place all the time.

When we talk about workplace disruptions, there is an implicit idea that the work style we used to have was continuous, uninterrupted, natural. It wasn’t. The twentieth-century workplace was completely and quite carefully constructed, with bricks and mortar, MBA programs, management philosophies, labor and productivity statistics, and other forms of regulation, all supported by childrearing practices and an educational system designed to make the workplace—factory or office—seem more natural, right, and expected than it was.

Taylorism, with its emphasis on individual attention to a task, could be mind-numbingly dull and uncreative in its blue-, white-, or pink-collar manifestations. Given a choice—in our hobbies, for example—few of us take pleasure in doing one thing over and over again on someone else’s timetable and with each action managed by someone else. There is little about human nature that is as time- and task-bound as Taylorism wanted us to be.

Certainly in agrarian life and in preindustrial times more generally, there was far less sense of either task or specialization and far less division between work and home, occupation and leisure time. Leisure time, as a concept, had to be invented, not least by the advertising industry, which turned leisure and recreation into big business. Even the creation of the National Park Service was justified in part so there would be natural areas set aside, in perpetuity, for “enjoyment” (as President Woodrow Wilson said) away from the workplace.23 It’s interesting that it was created in 1916, right when statistical methods were being developed, multiple-choice and IQ tests were being invented, public high school was becoming required, Model T’s were being turned out on assembly lines on Taylorist principles, business and professional schools were being created, the labor movement was flourishing, and office buildings were reaching up to the sky.

As the educator and philosopher John Dewey insisted over and over in his championing of integrated, practical teaching methods, none of these modern divisions are natural for humans. We had to learn these separations as part of the industrial economy. We used our schools to train children at a very early age to the mechanisms and efficiencies of twentieth-century work. We spent over a hundred years developing a twentieth-century way of thinking about ourselves as productive workers, which is to say as productive contributors to our industrial world.

Once the U.S. economy shifted so that not just the working poor but middle-class families, too, required two incomes to maintain a decent standard of living, with more and more women entering the workforce, the strict division of work and home became even more complex to maintain. With the Internet merging home and work, leisure and work, play and work into the computing devices that run both our workplace and our home, we have entered an era as complex as the one that compelled Taylor to pull out a stopwatch and measure wheelbarrow loads, and that convinced an Ivy League school like Penn to create a separate, professional school of business.

The industrial age wasn’t built in a day, nor will ours be. Reforming how we learn and work and play together in the digital age isn’t going to happen overnight either.

The contemporary work environment is far from standardized from place to place. In the same building with HASTAC’s headquarters, where our team thrives in the new digital world of merged work and leisure, there are numerous workers who keep nine-to-five hours pretty much as they would have done in the Taylorist past. Those workers are statistically much less likely than I am to take work home at night, but they are far more likely to be holding down a second job at another workplace away from home.24 That is a condition of modern work and modern society that underscores the complexities of unequal work, unequal labor, all distributed in multiple ways even among people who greet each other at the office each day. We may inhabit the same “workplace,” but often the physical space itself is the only thing our work lives share.

What are the possibilities for satisfying work in the distributed, dispersed, and ever-diverting workplace of the twenty-first century? And what happens if we think of the blurred distinction between work and play, home and office, not as a problem at all but as an advantage? What if the ideal twenty-first-century workplace is the home? And what if, like the best forms of learning, the most productive new forms of labor are also challenging, inspiring, exciting, and therefore pleasurable?


Chuck Hamilton rarely drives to work. He turns on his laptop, checks for messages on his smartphone, and he’s there. He works mostly from home, exerting control over when he starts and stops his workday or adjusting it to the waking hours of his colleagues around the globe. Like about 40 percent of his firm’s employees, Hamilton works remotely—which is to say he does not have an official office at company headquarters. The other stats from his firm also reflect changing trends in the contemporary workforce, such as the fact that over half of Hamilton’s colleagues have been with the company fewer than five years. We can think of this as the corporation of the ever-changing future. His work style is so different from the old divisions of labor, the old separation of home and work or work and leisure, that he’s had to coin a new term to describe his hybrid, blended, merged form of work. He calls it: @work@home@play.

What is the name of this frisky, visionary, youthful firm of the future? What company supports all these young new hires working away on their laptops on the patio, in their shorts and flip-flops?

Three letters: IBM. Yes, Big Blue. The symbol of the traditional, hierarchical way of business throughout the twentieth century is now pioneering virtual new models of the workplace that intentionally confound the old divisions among work, home, and play. Once synonymous with conservative corporate values, with the spanking white shirt, the navy pinstripe suit, and buttoned-down management and personnel practices to match, IBM is one of the few business-equipment manufacturers to have made the transition from the machine age to the information age. It was founded in 1896 as the Tabulating Machine Company upon the success of its electrical counting machine, which had been used to process data from the U.S. Census of 1890. In the early twentieth century, IBM became the world's largest maker of business machinery: typewriters, punch-card machines, time clocks, copiers, and data-processing and sorting equipment. Later it became the major producer of room-size mainframe computers. As the industry phrase “IBM and the BUNCH” signified, IBM dominated late twentieth-century office-equipment manufacturing.25

IBM went through a slump as the technology industry shifted to Seattle and Silicon Valley, and it had to regroup and reassess its primary product line—business machines—not only as things but also as information. The mechanics of the Internet reside in hardware and software, but also in less tangible commodities like information networks, which you might not be able to “make,” in the old sense, but which are essential to success in the new global business economy.

IBM's remaking of itself is one of the legends in modern business history, figured as the “elephant that learned to dance again.”26 This is not to say IBM has stopped making machines. Hardly. IBM produces the microprocessors used in just about every gaming console (the Xbox, the PlayStation, the Wii, and so forth). But in a conceptual leap that would have befuddled its nineteenth-century founders, IBM now simultaneously maintains an open-source developer's Web site, developerWorks, with discussion boards, podcasts, wikis, and blogs where anyone in the industry can learn or contribute—for free.

What kind of business model is that for a proud old equipment manufacturer like IBM? The short answer is that it's a highly successful one. One symbol of its transformation is the discarding of the three-piece business suit as its dress code. “Business casual” is the company uniform of the rebuilt IBM—but who's checking, when 40 percent of employees work from home? Those homes can also be just about anywhere on the planet. A majority of the firm's four hundred thousand employees reside outside the Americas, and that majority grew substantially after 2004 with the sale of the personal computer division to Lenovo, a Chinese multinational.

As Chuck Hamilton notes, quoting a recent IBM corporate presentation, “We are ‘virtual’ by nature and action, something that was not easy to achieve and is even harder to manage. We can see that social learning and informal connection of all kinds is becoming a sort of virtual glue for globally integrated companies.”27

The corporation once synonymous with tradition and convention thrives in the digital world because it transformed itself. The question raised by the example of IBM is: How do you manage efficiencies without the familiar institutional structures that once governed the hours you worked, how you arranged your office, and what you wore? Add to that context: How do you manage across diverse systems of government, economics, ideology, and culture, such as those one must negotiate in a partnership with the Chinese government and thousands of Chinese colleagues and employees? Over the course of a century, IBM developed corporate practices and policies to focus energy and attention and to maximize productivity. Yet evolving for continued success in this century meant jettisoning or refreshing many of those practices. How do you do it? How do you break old workplace habits so that you can learn how to pay attention in a work “place” that is diverse, scattered, decentralized, multicultural, and mostly virtual?

That's where Chuck Hamilton comes in. His title is virtual learning strategy leader, and he reports to the IBM Center for Advanced Learning, a global organization, although he lives in Vancouver, Canada. His job is to help IBM's worldwide employees work as effectively and productively as possible in their changing work environments. In the old IBM, his title might have been something like human resources manager or director of labor relations and employee training. Corporations have always had managers dedicated to keeping the workforce up-to-date, but Hamilton's job description doesn't line up precisely with the old management categories of accounting, marketing, operations management, or even human resources. He does a little of all of that, nestled under the surprising rubric of “learning strategies.”

A partial list of topics on which he spends his time looks like this:

The remote worker 
Endeavor-based work 
Global teaming 
Crowdsourcing 
Mass collaboration—we versus me 
Connectedness—the new classroom 
Producers/consumers 
@work@home@play28

This is the syllabus at IBM? Frederick Winslow Taylor must be turning in his grave.

Hamilton loves his job. His hours are long, and he works hard, but he says people are always telling him he has the “best job in the world.” He is infectiously enthusiastic about the opportunities presented by the workplace of the twenty-first century. When I mention the drawbacks to a globalized and distributed workforce—the sense of dislocation, the work overload, the constant adaptation to change, the harm to communities, the exploitation of those less affluent, the lack of regulation of working conditions—he acknowledges those dangers, then adds, quickly, that the predigital workforce was also “distributed.”

He is right. I live in North Carolina. Most of the industrial-age industries here—especially textiles and furniture making—were started during the 1890s because labor unions in New England and the Midwest were championing safety, wage minimums, shorter workdays, child protection, and vacation time. North Carolina was impoverished and advertised itself as a “right to work” state—meaning not unionized or regulated, the equivalent of a Third World country offering cheap labor to companies headquartered in wealthier places. In the late twentieth century, the textile and furniture industries left North Carolina for Asia out of a similar motivation.29 Hamilton is skeptical of those who immediately call “globalization” a recent phenomenon or who think of it only as evil, without considering all the alternatives, the histories, the precedents, and the future: “There’s a dark side to every light side,” he says. “People often accentuate the dark side out of fear.”

Although IBM may be ahead of other large corporations in its new ways of work, it’s a harbinger of the future for many of us. The digital, distributed, fully globalized office isn’t going away. Work will be even more dispersed in the future. So as virtual learning strategy leader, Hamilton’s job is to keep asking, “How can I take all this and make something good?”

Hamilton believes we learn best when we learn together, and learning in the workplace of the future is continual. Instead of the boss with the stopwatch or the physical workplace separated by levels of power and authority telling you what to do at every moment, instead of an office building with its executives in corner offices with penthouse views, IBM’s workplace arrangements are now remote and constantly shifting. Even the most basic feature of the workplace—the time clock—has changed dramatically in the last decade. Ninety-eight percent of employees at the particular IBM center where he works in Vancouver used to “badge in,” meaning they held up their company name badge in front of a sensor when they walked into the office every day at 8:30 A.M. and then held it up again to badge out at night. Now about 11 percent badge in. They don’t work any less, but they have more control over their workday, more ability to adjust it to the project they are working on and the people they are working with. That also means a new kind of self-control and self-regulation.

Where does one learn how to do that? Certainly not by twelve years of learning how to ace the end-of-grade tests or even by four years in a conventional college classroom. If you aren’t required to badge in, you do your job when you are the most productive (morning person? night owl?) and when it best suits the workings of your team, wherever they may be. Very little in traditional formal education—from preschool to the conventional MBA—prepares us for this new decentralized workplace, although these are exactly the skills required for mastery in multiplayer online games.

This lack of preparation is at the heart of many of our difficulties in adjusting to the new workflow. If the workplace has changed and our training for that workplace has not, we not only have an unproductive mismatch but also a profound sense of dislocation and distraction. Except at the most innovative margins, we have not even begun to think about how we can train or retrain our focus and attention for the distributed digital workplace, for the method we’ve been calling collaboration by difference.

As we've seen, collaboration by difference is the open-source and open-access principle upon which the Internet and the World Wide Web were originally created and by which they continue to be governed. It is based on the idea that productive collaboration requires not just a lot of participation by many different kinds of people but a form of collaboration that is as open, unstructured, and flexible as possible, in its design, principles, and motivation. Its models are the Internet's many projects in massive collaboration, from Linux computer code to Mozilla Web browsers to Wikipedia to Craigslist. Rather than aiming at uniformity and standardization as regulated or enforced by institutional hierarchy, this form of crowdsourced collaboration is based on the idea that if you allow people to contribute in as many different ways as they want, offering points of view as distinctive as possible, the whole outcome is more innovative, stronger, better, and more ambitious than if you start with a goal or a mission and then structure each contribution as a deliberate step toward fulfillment of that goal. From the point of view of twentieth-century management theory, collaboration by difference is chaotic. It shouldn't work. Except that, well, the proof is in the digital pudding. What has been accomplished by mass collaboration—the Internet, the World Wide Web, Wikipedia, and so forth—is so astonishing that it's clear we are only beginning to understand its full potential.

The challenge, then, is to figure out how to change our institutions’ structures to support these forms of collaboration based on difference. We need the best ways to train ourselves for participation, for productive interactivity, and even for the self-regulation necessary for collaborating with others for the success of the whole. To truly succeed, we need to start preparing, right now, for the kind of work required by IBM or for the kind of work that will exist even more in the Microsoft/Apple/Lenovo/Mozilla/Facebook/Twitter/Google/What Comes Next hybrid workplace of the future.

Chuck Hamilton is part of the small but growing contingent of leaders who are working to develop that preparedness. In an ideal world where both worker and workplace have struck the right balance between decentralization and efficiency, the benefits will be mutually reinforcing. Hamilton notes that flexibility serves IBM’s global business operation. If some people at IBM prefer to work at night, some by day, that eases the differences in time zones across IBM’s workforce. Flexibility is a key asset in achieving a more productive twenty-four-hour global work cycle. Without the old workplace rules, everyone at IBM is learning and adapting to new behaviors and patterns all the time.

Take attention, for instance. How one pays attention in a distributed environment has to change. None of the material conditions (silence, sterility, absence of distraction) or the personal ones (eye contact) help you focus when you are on a conference call, as is typical at IBM, with fifteen people in fifteen different locations. How do you stay on track, both personally and in a group?

Hamilton says this was a problem at first, but over time, IBM was able to create an entirely new kind of multiperson conversational culture, with its own methods and tools. The key, as is often the case in this kind of innovation, involves deemphasizing the typical hierarchy of a meeting through a clever use of technology. Instead of one executive leading the conversation while everyone else just listens, an IBM conference call now flows not only verbally but by text, too. Let's say fifteen people are on a conference call across Vancouver, Toronto, New York, Rio, and Beijing. Everyone chats using Sametime, IBM's internal synchronous chat tool, and has a text window open during the conversation. Anyone can be typing in a comment or a question (backchatting) while any other two people are speaking. Participants listen to the main conversation between whichever two people happen to be talking while also reading the comments, questions, and answers that any of the other participants might be texting. The conversation continues in response to both the talk and the text.

Hamilton admits that, when they began this practice, it seemed distracting. Now when he and one of his colleagues in Ontario, Leslie, join me in a conference call without any chat feature, he jokes that he finds his attention wandering occasionally as he wonders why Leslie isn't saying more. A few times he stops our train of thought to ask for her input, then laughs and says that, if we had a back channel, we wouldn't have to interrupt ourselves. Over time everyone at IBM has become so proficient at conference calling with back channels that its absence seems like an impoverished, sluggish, frustrating, and unfair way of conversing. It almost feels rude. When fifteen people are on a conference call without a back channel, someone has ideas that aren't getting expressed. With same-time backchatting, while two people are talking, everyone else can be participating, responding to the topic, offering ideas and responses, coming up with new twists that can easily turn the conversation in a new direction without any rudeness or even interruption of the flow. Plus, you can save these useful text messages and refer to them later.

I joke that I can imagine some social scientist with a stopwatch clocking one of these fifteen-person backchatting sessions and proving that ideas aren't actually carried to completion, that people interrupt one another x number of times, that the multitasking is an illusion, that people are really paying attention only part of the time, and on and on. We've all read these studies as reported in the popular press. Hamilton protests that such a study would entirely miss how this process works once participants have become accustomed to it. There's no such thing as “an idea being interrupted,” because each person's ideas are constantly being reshaped by the flow of feedback; it's a new way of thoughts melding together “synchronously,” at the same time, in the manner of synchronized swimmers or dancers who could never do what they do alone, because they depend on one another for the final result. One cognitive consequence of this method is learning to pay attention to multiple inputs from others, including by reviewing the back channel later. These calls provide information, but they are also mental workouts, training IBM employees for a new form of global collaboration wherein expertise flows in streams from those familiar with the needs and assets in a different part of the world.

Once again, we have a new form of communication being measured by old metrics that calculate only its least interesting (and most irrelevant) features. We don’t have mechanisms yet for measuring the success of this back-and-forth form of collaboration. Because our statistical methods were designed for one form of productivity, we have very few principles or analytical tools for understanding collective, cumulative, synchronous thinking and problem solving. Who gets the credit for the idea? Who gets the merit raise or the bonus? In such multilayered conversation, who’s the boss?

At IBM, they've all become so well schooled in backchatting that Hamilton says he now talks, texts, and reads text so seamlessly that, at the end of a conversation, he cannot recall who proposed what and doesn't really care. Everyone contributes to the process, and credit goes not to the person with the best idea but to the team that functions best. That's the “we versus me” item on his list. Conversations progress almost like “twin talk,” with people finishing one another's sentences, filling in blank spaces, nudging the discussion one way or another, problem solving together. Rather than seeming like multitasking, the activity itself flows. Compressed, efficient, energizing, and deeply interactive, this synchronous flow, Hamilton insists, makes IBM employees know they can rely on one another, “one green dot away from each other, connected all the time.”

Hamilton relates a story someone told him about driving kids to school. As they approached the school building, the kids madly IM'd away to their friends, all the while chatting energetically with one another. Then they arrived at school, and they lined up, went silent, and switched off—not just their electronic devices but their own personalities: They became less animated versions of themselves. They went into school lockdown mode—until recess. Out on the playground, the chatting and IM-ing began again. They switched off again for the rest of the school day and lit up once more when the school day ended. Hamilton jokes that when he hears about companies that still operate in the traditional twentieth-century way, he thinks of that school yard story. Something is off-kilter, we both agree, when school or the workplace, where we should be productively in the moment, prohibits behaviors that have already transformed our everyday lives.

If schools and workplaces create rules against practices that already shape everyday lives and everyday habits, they not only miss a tremendous opportunity but also cause disruption. Aren't they contributing to anxieties and fears about productivity and attention if they continue to rely on practices that are no longer part of our new habits? The “alienated worker” was a figure haunting the twentieth-century landscape. Are we trying to preserve that condition of alienation? If institutions of school and work fight changes that people have happily adopted in their lives, then perhaps the source of distraction in the workplace isn't technology—perhaps it is the outmoded practices required by our schools and workplaces.

Chuck Hamilton could not be more certain that these kinds of anachronisms and discrepancies are the real source of much twenty-first-century workplace stress. Holdovers from antiquated ways of working in a world that no longer exists leave an unproductive residue of alienation over everything else. He’s not someone beleaguered and overwhelmed by the everyday world of contemporary work. On the contrary, he uses the word play a lot and, like Aza Raskin, believes that the best work—like the best education—has to be inspiring, challenging, exciting, playful.

Hard work is not necessarily the opposite of play. We know from studying video gamers and runners that the harder they work, the more adrenaline they produce and the more endorphins course happily through their bodies. Their brains light up like Christmas trees too, with neurotransmitters activating everywhere at once, including in the limbic and prefrontal areas associated with emotion and judgment. With every part of the brain engaged, concentration is also intense.30 Hamilton notes that, as a musician, he is happiest playing with musicians who are better than he is. His own music improves and he’s inspired by the challenge. He champions that model of work as well, thriving on the energy of others in the interactive workplace.

He’s excited by IBM’s practice of “endeavor-based work.” This is yet another new concept that we haven’t learned how to train people for. It means that you do not perform a single, specialized task and are not called upon over and over to perform one specialized kind of function. More and more people at IBM don’t even have job descriptions in the conventional sense anymore. Instead, they contribute certain kinds of talents or even dispositions as needed to a project and stay on that team as long as they contribute to its success. He’s confident that people know when the time comes to move on to the next project. They rarely need to be told. Why would they want to waste their time or anyone else’s when their contribution is no longer needed? That’s unfulfilling for everyone.

No supervisor tells you to move along? How is that possible? In this system, what happens to performance evaluations? Job reviews? Because endeavor-based organization doesn’t depend on the amount of time you spend in the office or on the project, but on how you contribute to the success of the team, how do the usual statistical ways of measuring individual performance pertain? It’s actually quite simple. Hamilton notes that “we are measured against business results, against each other’s contribution, one’s team contribution, one’s credibility, and the deliverables (performed work).” That seems a more logical and thorough way of measuring an employee’s complex contribution to the success of a company than the standard method at many businesses, where a supervisor is required to grade each employee and sometimes even to rank who in a unit is best, second best, and so forth. That system of rating employees against one another instead of according to the work they actually perform (either singly or as a group) is inimical to endeavor-based work.

From this perspective, one sees why John Seely Brown is convinced that the gamer mentality is well suited to the new world of business in the twenty-first century. Endeavor-based work is equivalent to your guild self-organizing for maximum effect and then pulling off an epic win together.

“Endeavor-based organization is a little like the movie industry,” Hamilton says. You work hard together to make the movie, each contributing some aspect—whether it is acting or doing stunts or creating special effects or getting the financing together. When the film’s in the can, you then go on to the next job. But you don’t all go on to the same next job. The team disassembles, but everyone has learned who performed best at what, so for a future task, you know whom to call. Endeavor-based organization is structurally different from forms of work ultimately grounded in an assembly-line organizational model, with each person always contributing in the same specialized way to the same team to make the same product.

In endeavor-based organization, hierarchy must be lax and shifting. In one endeavor, someone who has official company experience or status might be the most expert and therefore be appointed the team leader. An hour later, as the workflow changes, that same person may sink back into the position of being a learner. Hierarchy isn’t the guiding principle so much as trust is: depending on one another’s capacities to work together. I think about Duncan Germain at Voyager Academy asking his sixth graders to write down who they will depend upon for a given task—themselves, their partner, or their toon. The answer varies per endeavor, per person.

Not much at IBM happens in the old model of one person, one job, one focus, one specialty, one expert, one manager. Because everyone is connected and because the teams shift constantly, everyone is a potential resource for everyone else. Proximity doesn’t have to be a consideration. “I may need someone from India, China, Brazil, Canada, or the U.S.,” says Hamilton. This is the “global teaming” on his list. That concept, too, evolved over the last decade. Global teaming requires an inherent humility, an intuitive and inquisitive gift for unlearning and learning, because one’s patterns and expectations constantly come into productive collaboration with those of people schooled in other traditions, other cultures.

Hamilton underscores the fact that he's always learning too, and he tells me about the time colleagues in Brazil turned to him for some diversity training. No problem. IBM takes pride in its diversity and embraces the Internet crowdsourcing adage that you solve problems best by having as many eyeballs—as many different points of view—on any problem as you possibly can. You succeed by seeing differently. So the North American contingent of IBM had lots of company material they'd developed about managing diversity. But instead of simply trying to foist a prepackaged diversity program on the Brazilians, Hamilton asked his colleagues to say a bit about what diversity signaled in Brazil. Something that emerged early was confusion over different norms of social office etiquette. In Brazil, coworkers often greeted one another with a hug or a kiss on the cheek. This was outside the comfort zone of many North American or Asian colleagues. Diversity as an ideal is different from the kinds of issues and tensions that might arise from a diverse work environment in actual practice, and he has learned that you need to ask what different means rather than assume you know. You can't have one uniform set of materials on diversity that works everywhere.

It’s that way for everything with global teaming, Hamilton says. “You cannot work in China without a deep understanding of how Chinese business works—and that means listening to the Chinese,” he says. “It’s not about telling them what should work there just because that’s how it works here.”

Learning only works in a corporation that embraces learning all the way down, Hamilton insists. Take corporate jamming. The term is borrowed from the improvisational sessions of jazz musicians and is IBM’s way of drawing upon its worldwide network of employees. IBM turns to its global workforce not just to solve problems but to define what those problems are, using its trademarked protocol of Collaborative Innovation.31 The company is convinced it learns more from these jam sessions than from hiring expensive “experts” to chart the future. The jam sessions also reinforce IBM’s core value of cross-cultural collaboration.

The 2005 Habitat Jam invited IBM employees to join a virtual meeting. Tens of thousands of people from 158 countries participated and focused for three days on ways to improve the environment, health, safety, and the quality of life in the world’s urban centers. Chongqing, China, is now thought to be the world’s largest city, but it’s growing by over half a million inhabitants a year, faster than anyone can count, so no one really knows for sure. It has nothing like the infrastructure and social supports an enormous megacity requires. Similar cities are springing up seemingly overnight in areas where IBM has a workforce, and no one yet has a handle on matters of infrastructure, ecology, or health and safety conditions in them. Habitat Jam addressed these new modes of urbanization, creating the agenda for the UN’s World Urban Forum in 2006.

The seventy-two-hour Innovation Jam of 2008 brought together over 84,000 participants who contributed some 30,000 individual posts, divided into over 2,700 themes and 2,300 conversational threads on a diverse range of topics. The topics included geeky new statistical methods for crunching massive data samples gleaned from seemingly incompatible sources (“intelligent decisioning”), construction of Web sites where citizens can map local services of benefit to new arrivals, a blueprint of best practices and protocols to institute IBM-wide carbon-free days, and creation of an electronic marketplace that allows retired IBM employees to offer their services for special projects. In many cases, the resulting ideas were almost instantly put into practice, instead of languishing in endless committees and subcommittees.32

IBM derives millions of dollars in entrepreneurial ideas from these corporate jams, as well as the equivalent in job satisfaction from those who offer ways to make the company a better working environment. Because anyone can contribute to an idea as it is developing, there is also a larger, companywide buy-in for new ideas that IBM might want to roll out as a result of the jams, an inestimable contribution to the bottom line because rapid employee adaptation to new workplace mechanisms and strategies is key to corporate innovation. Because an atmosphere is created in which anyone can contribute ideas or build upon the ideas of others, there is also a better chance that the contributions from the highly diverse, multinational workforce might help the corporation to see its own blind spots. It counts on its own workforce, in other words, to help chart the leading edge of its own innovation, looking to the corporate jams to propose ideas outside the tried-and-true twentieth-century business school methods of strategic planning, flowcharts, goal setting, organizational mission, revenue optimization, targets, projections, benchmarks, market analysis, and milestones.

IBM obviously finds the jams to be productive, for the company actually implements many ideas that arise from them. The method is considered to be part of the corporation’s new digital-age trademark. The jams are efficient on both the practical level (innovative ideas worth implementing) and the philosophical (setting a tone of global inclusion and collaboration). There is no way to measure innovation, creativity, and cultural change by old metrics designed to tally efficiency, yet all of these are necessary for success. If I happen to be the person who tosses out the idea that thousands of others work to develop and that turns into a company winner, it may not feel as if it counts as my individual contribution according to a Taylorist scheme of things, but because the jams are documented and public, my contribution is recognized in the new metric of interconnection. You can be sure that the person who is throwing out great ideas for the company is also being pinged with offers to join this or that project team. That openness of contribution leading to subsequent participation, with evolving collaborative roles and responsibilities based on one’s participation and actual performance (rather than on one’s job description or title), is exactly how work flows in the new open, digital, interactive workplace. The corporate jam offers a new model of productivity and efficiency because each person and each office in the corporation is linked to every other one and any great idea can be taken up by anyone and implemented in a new environment. Wheelbarrows can’t hold what I bring away from participating in such an event.

“Boardrooms are not big enough to handle the kind of learning we need now,” Hamilton emphasizes. He notes that the “we versus me” model of mass collaboration works well with a corporation as big as IBM but that even to call it a model is misleading, because collaboration works differently, on a different scale and in different formats, in different contexts. Not all virtual meetings are alike. Even structure is something that has to evolve in context. For example, when IBM calls meetings in virtual environments, in North America thirty people might show up. In China, it's typical for four hundred people to come to a virtual event. The flow of people is also different, with the Chinese group moving into a large virtual lecture hall but also, in the manner of face-to-face conferences, circulating a lot, breaking into small groups, and backchatting in order to pull away and interact even as the main presentation is going on. It's a very efficient process. Hamilton evokes business leader Don Tapscott's concept of “wikinomics,” the new economics of collaborative enterprise, noting that “fifteen minutes of four hundred people's time is a lot of hours.”

HAMILTON ALSO BELIEVES THAT PLAYFULNESS is part of creative, innovative, collaborative, productive work. One function of play in our lives is to learn how to enjoy working with others, and he is passionate about creating immersive virtual environments that renew our spirit of enjoyable, engaged learning. He believes virtual environments encourage relaxation, informal chatter, and friendly, happy relationships. This is where “@play” comes in. He insists that, when so much work is accomplished virtually, play is more crucial than ever. Through play, people learn to trust one another, to engage one another; they learn one another’s skills and sensitivities, blind spots and potentials in an arena where the stakes aren’t high. They learn respect. You need that in order to develop trust, and you need employees to be confident that, if they contribute an idea about carbon-free days to an innovation jam, they will be heard and valued, not ignored or punished. Believing in the importance of one’s role is crucial in endeavor-based organization.

One of the drawbacks of the new workplace is that when people work remotely, there’s no place to take a break and chat about sports or politics or celebrities and have a water-cooler conversation, which is that part of the work life that binds people together and shapes trust and commitment. “People often forget the social nature of why we work,” Hamilton says. One way he helps fix this problem is by creating digital water coolers—places where people scattered halfway around the globe can come together and chat each other up.

Hamilton uses the virtual environment of Second Life a lot in his work. Second Life was developed by Linden Lab in 2003. Accessible on the Internet, Second Life provides a virtual space where its “residents” interact, socialize, collaborate as individuals or in groups, all by means of avatars designed by the individual resident. IBM has its own “islands,” or virtual meeting spaces in Second Life, for its employees. And it’s not the only company there. About thirteen hundred organizations and businesses now hold meetings in Second Life. For a modest monthly fee, employees can choose and customize an avatar, fly to an “in-world” meeting place, and hold either a private meeting or a public one, if they wish.33

If you are using Second Life for the first time, it all seems a bit bizarre—and Hamilton would say that's its charm as well as its effectiveness as a space for loosening people up, especially people from different cultures, so that they can interact and collaborate. You don't “look in” on Second Life. You can't just be an observer. You become a Resident, as users are called, which means you represent yourself in the virtual world with an avatar, an onscreen representation of yourself, typically with its own name and look, both designed by you. Even before you begin interacting with anyone else, you have to choose what kind of avatar you want to be. You can use default avatars, but even they require choices far beyond the normal ones before a business meeting: wearing or not wearing a tie today, choosing the flats or the spike heels. Do I want my avatar to be male or female, Chinese or Caucasian American, young or old, thin or fat, full head of hair or bald? Even if I don't want to spend my Linden dollars “skinning” my avatar—customizing it with inventive features—I cannot be part of Second Life without thinking about how I represent myself in the world and entertaining the possibilities for imagination, whimsy, and playacting. You can buy yourself shirts, pants, hats, or dresses—or you can decide to be Tyrannosaurus rex, Sherlock Holmes, a Transformer, or a Disney character. The choices are endless. When you are conducting an IBM meeting in Second Life for thirty people or four hundred people who have all chosen and/or skinned their avatars, you are in a different kind of workplace. Hamilton thinks that is a good thing, a necessary thing, to relieve some of the tensions of a busy work life.

Chuck's own avatar wears a kilt, and its shaved head is dotted with impressive, punky spikes. He helps new SL'ers choose their avatars and then wanders around the SL spaces, including the IBM islands. Chuck works with “wranglers” (experienced Second Life Residents) who partner with first-time users until they get the hang of interacting in SL. The learning curve is steep at first, because you use your keyboard, keypad, mouse, and other features of your computer in specialized ways. For example, Ctrl-2 activates the “move” tool and pairs with other keystrokes to allow you to gesture, touch, pick things up, interact with others, and walk, run, jump, or fly.

Communicating in Second Life is a learned skill. Once in Second Life, you open a “chat bar,” and when you meet another Resident, you type in what you want to say. Your conversation can be public to anyone else in your vicinity in SL. There's also a “shout” feature, which allows you to broadcast across distances when addressing a large crowd. You type in ALL CAPS, and your avatar is automatically animated to look as if she's shouting. You can right-click and send a private IM (instant message) that can be seen only by the Resident you are addressing. You can address Friends in your contact list, and if they are online at the time, they can join you in SL. You can install Voice Chat when you set up your Second Life account so that you can talk to anyone else who has Voice Chat, again at close range or to a crowd. There are many other features that mimic those in RL (real life), but with a difference: All these communications might be happening not only among punky guys in kilts but also among robots, superheroes, animals, cartoon characters, or anything else. Contemporary digital campfires replace informal gatherings in boardrooms and conference rooms, as well as providing opportunities for digital events like U2 concerts or music in the park for symphony lovers.

IBM has figured out that moving people physically is costlier, more cumbersome, and frequently more stressful than bringing them together in ways that never require anyone to leave the office or even the den. In our post-9/11 world, travel is less convenient than it once was. In the recession, corporate travel bans meant to contain costs have made face-to-face communication harder, since fewer employees can see one another in person. And there are other issues too. When the H1N1 flu epidemic looked as though it might take over the world and possibly even result in nation-by-nation quarantines that would drastically limit air travel, many organizations began creating secure virtual meeting spaces where international delegates could meet without having to travel physically in order to be together.

“Geography is now history,” Hamilton says, quoting one of his colleagues in India. Trained originally as a designer and then in systems management, Hamilton now blends the two in his work: he is the systematic designer of synchronous virtual spaces.

Sometimes IBM employees use Second Life simply for efficiency. In Beijing, traffic snarls can eat an entire day, but conference calls can sometimes be too impersonal for important meetings, so IBM China often chooses to meet in Second Life. It’s possible to upload into SL virtually any documents you’d use at a standard business meeting, including PowerPoint slides and live or prerecorded video. There are also tools for virtual brainstorming, such as whiteboarding and 3-D mind mapping—interactive tools for a shared virtual space, even if one person is based in Tokyo and another in Moscow.

Of course, there are also the notorious “adult” spaces in Second Life, as well as a considerable amount of in-world gambling. Even antiglobalization rallies have been staged in Second Life, including one in 2007 when an Italian union staged a twelve-hour protest against IBM’s virtual campus there.

Hamilton notes that, if you want, you can replicate everyday business practices in SL, then asks, “But why?” If suddenly you have this new workplace tool that erases distance and builds in the capacity for imagination and fun, why would you want to replicate the material limits of the twentieth-century workplace that the virtual world transcends? For a company that wants to imagine the future, what better start than business meetings where the boundaries are neither architectural nor institutional but mental? “If you let employees imagine a future without the usual limits,” Hamilton asks, “why would you then limit what they come up with together?” That would be the equivalent of forcing an avatar with the ability to fly anywhere at will to stand in long lines at a virtual airport and waste hours on a cramped virtual commercial jet in order to journey to a virtual meeting she could have arrived at instantly by pushing the Fly button on the SL screen. When you have all the possibilities of your imagination at your disposal for interacting with colleagues around the globe, why would you choose to replicate practices fixed by the walls and rules of the industrial-age workplace?

That’s a central question of this entire book. Given the new options in our digital world, why, exactly, would we want to do things the way we did them before? Why would we choose to measure the new possibilities of the digital age against a standard invented to count productivity in the old industrial regime? Given the newly interconnected world we all now live, learn, and work in, given the new ways of connecting that our world affords, why would we not want to use our options? The question isn’t which is better, the past or the present. The question is, given the present possibilities, how can we imagine and work toward a better future?

The term affordance is used by technology experts to signify those things we can do now, because of the way we are all connected to one another, that we could not do in the twentieth century. If the desktop in my office can now transport me instantly to Beijing, where my Wonder Woman avatar can high-five your Monkey King avatar before a meet-up where a guy in a kilt leads us in brainstorming ideas for the next worldwide IBM corporate jam session, then where, exactly, is my workplace? Given these affordances and possibilities, does attention to task mean what it once did? Is fretting over multitasking even relevant anymore?

Hamilton questions the presumed equation, so fundamental to twentieth-century ideas of productivity, between a standardized workplace and efficiency. If the world is no longer uniform, is it really effective anymore for the workplace to be? And questioning another binary of the industrial workplace, he wonders whether fun really is the enemy of work. It is faster and less stressful to bring a half dozen people together in a virtual space than to have them find a place to meet in central Beijing. But Hamilton argues that, unless that virtual space is inviting, it will fail. “Play can be an incentive to work,” he notes, especially in virtual spaces. His comment reminds me of Aza Raskin's emphasis on the importance of “shiny things.” Hamilton would agree that shiny things can motivate innovation and collaboration. The same passions and curiosities that motivate us to learn in our daily lives can be reignited, given the right, inspiring workplace.

In the right setting, we want to learn. That’s true of preschoolers who want to know the “why” of everything until spontaneous curiosity is schooled out of them. It’s also true of adults taking up the guitar or perfecting their golf swing or enrolling in Ancient Greek at the local community college in their leisure time. We shouldn’t squelch that impulse to learn just because we happen to be at work. And we shouldn’t have to if we can do a better job matching what a worker loves to do with what the workplace needs.

Hamilton believes curiosity and passion can be built into the contemporary workplace. Philosophers from Confucius to Freud have insisted that meaningful contribution (not laziness) is the human ideal. Hamilton thinks that because it is now possible to work at just about anything anywhere in the world—to find that niche where your talents and interests are needed—the twenty-first-century workplace holds greater possibility than ever before for more people to be able to do what they love and love what they do.

Our Taylorist legacy—whether in the factory or the modern office or in its educational system—separated work from fun, home from work. Play was not a good word in the twentieth-century workplace—or in the twentieth-century school, for that matter. Hamilton insists that, in ruling out play, we squander one of our greatest resources. From infancy on, enjoyment is, after all, one of our most powerful motivators. Hamilton believes pleasure is an underutilized affordance of the contemporary workplace.

If Web 1.0 was about democratizing access and Web 2.0 about democratizing participation, Hamilton predicts that Web 3.0 will be the democratization of immersion. He thinks more of us will prefer meeting together and doing business together in virtual environments a decade from now. We will enjoy the playful creativity of having our avatars, in their kilts and their spiky hair, wander around in virtual realms.

Chuck Hamilton’s optimism is not unrealistic. His job is to manage learning strategies for one of the world’s leading corporations. Nor is his life entirely virtual. He spends a lot of time on planes, flying to give presentations fifteen or more times a year. Obviously, from his own experience, he knows that sometimes face-to-face is the more effective model. Yet he’s convinced that we’re still at the very beginning of exploring what we can do, what we can make happen, in virtual worlds. None of us knows where the virtual will lead us.


Thinking about the way Chuck Hamilton works and the ways he helps others to learn to work at IBM reminds us that it’s time to reverse some of the assumptions motivating our debates about work and learning in the twenty-first century. Are the virtual worlds he works in “artificial”? Or instead, are we using the artificial props of an outmoded tradition to keep us from realizing the potential of the twenty-first century, a world that already is profoundly digitized and distributed?

For over a century we’ve been schooling ourselves and our children to take advantage of the affordances of the industrial-age workplace. It’s time to rethink the affordances of the digital workplace.

There is nothing natural about either an educational system or a workplace. Each is about the best way of succeeding within a given context. As we practice certain forms over and over, they become habitual. We stop thinking about our assumptions and values, what we consider to be valuable, what counts and how we count. We see what we expect. When suddenly, abruptly, our context changes, we are then forced to pay attention to all the things we didn't see before. And the context of our work has changed in dramatic ways in the last decade. If we feel distracted, that is not a problem. It is an awakening. Distraction signals a serious realignment of our attention, a necessary realignment if we are going to flourish in the future. As with all changes in how we pay attention, we have to unlearn our past—our past practices, our past habits—so that we can learn better ones and take advantage of future possibilities. The outcome isn't sustainability but thrivability, the potential to thrive in the conditions that comprise our collective future.

If Chuck Hamilton is right, then a lot of us have it backward. If there is stress in the workplace right now, maybe the problem is not that the workplace is changing too fast but that it isn’t changing fast enough. The world has changed. We have changed.

It is stressful when we are forced to pretend that everything continues as it did before. We’re like those kids waiting for recess so their normal lives can begin again.

Holding on to old distinctions—at school, at work—may well be the most anxious and unproductive component of our twenty-first-century lives. The vestiges of the twentieth century may be the make-believe that is harder and harder to sustain because it does not accord with the reality of our lives. A lot about our schools and our workplaces feels like the Amish grocery store I mentioned earlier, with villagers in homespun clothes transacting business using 3G wireless telecommunications systems. There is nothing wrong with anachronism, but one has to juggle one’s principles ever faster to keep up with the inherent contradictions that this digital age pushes to the fore.

As Chuck Hamilton in his kilt and spiky hair would say, gathering other SL Residents around him, it’s exciting to contemplate what lies ahead, but it’s even better to enjoy what’s already here. “Pull your blanket a little closer to the fire,” he urges. “The best is yet to come.”34