Everyware: The Dawning Age of Ubiquitous Computing - Adam Greenfield (2006)

Section 1. What is Everyware?

Ever more pervasive, ever harder to perceive, computing has leapt off the desktop and insinuated itself into everyday life. Such ubiquitous information technology—"everyware"—will appear in many different contexts and take a wide variety of forms, but it will affect almost every one of us, whether we're aware of it or not.

What is everyware? How can we recognize it when we encounter it? And how can we expect it to show up in our lives?

Thesis 01

There are many ubiquitous computings.

Almost twenty years ago, a researcher at the legendary Xerox Palo Alto Research Center wrote an article—a sketch, really—setting forth the outlines of what computing would look like in a post-PC world.

The researcher's name was Mark Weiser, and his thoughts were summarized in a brief burst simply entitled "Ubiquitous Computing #1." In it, as in the series of seminal papers and articles that followed, Weiser developed the idea of an "invisible" computing, a computing that "does not live on a personal device of any sort, but is in the woodwork everywhere."

What Weiser was describing would be nothing less than computing without computers. In his telling, desktop machines per se would largely disappear, as the tiny, cheap microprocessors that powered them faded into the built environment. But computation would flourish, becoming intimately intertwined with the stuff of everyday life.

In this context, "ubiquitous" meant not merely "in every place," but also "in every thing." Ordinary objects, from coffee cups to raincoats to the paint on the walls, would be reconsidered as sites for the sensing and processing of information, and would wind up endowed with surprising new properties. Best of all, people would interact with these systems fluently and naturally, barely noticing the powerful informatics they were engaging. The innumerable hassles presented by personal computing would fade into history.

Even for an institution already famed for paradigm-shattering innovations—the creation of the graphical user interface and the Ethernet networking protocol notable among them—Weiser's "ubicomp" stood out as an unusually bold vision. But while the line of thought he developed at PARC may have offered the first explicit, technically articulated formulation of a ubiquitous computing in the post-PC regime, it wasn't the only one. The general idea of an invisible-but-everywhere computing was clearly loose in the world.

At the MIT Media Lab, the "Things That Think" consortium and Professor Hiroshi Ishii's Tangible Media Group developed interfaces bridging the realms of bits and atoms, a "tangible media" extending computation out into the walls and doorways of everyday experience. At IBM, a whole research group grew up around a "pervasive computing" of smart objects, embedded sensors, and the always-on networks that connected them.

And as mobile phones began to percolate into the world, each of them nothing but a connected computing device, it was inevitable that someone would think to use them as a platform for the delivery of services beyond conversation. Philips and Samsung, Nokia and NTT DoCoMo—all offered visions of a mobile, interconnected computing in which, naturally, their products took center stage.

By the first years of the twenty-first century, with daily reality sometimes threatening to leapfrog even the more imaginative theorists of ubicomp, it was clear that all of these endeavors were pointing at something becoming real in the world.

Intriguingly, though, and maybe a little infuriatingly, none of these institutions understood the problem domain in quite the same way. In their attempts to grapple with the implications of computing in the post-PC era, some concerned themselves with ubiquitous networking: the effort to extend network access to just about anyplace people could think of to go. With available Internet addresses dwindling by the day, this required the development of a new-generation Internet protocol, IPv6; it also justified the efforts of companies ranging from Intel to GM to LG to imagine an array of "smart" consumer products designed with that network in mind.

Others concentrated on the engineering details of instrumenting physical space. In the late 1990s, researchers at UC Berkeley developed a range of wireless-enabled embedded sensors and microcontrollers known generically as motes, as well as an operating system for them to run on. All were specifically designed for use in ubicomp.

Thirty miles to the south, a team at Stanford addressed the absence in orthodox computer science of an infrastructural model appropriate for the ubiquitous case. In 2002, they published a paper describing the event heap, a way of allocating computational resources that better accounted for the arbitrary comings and goings of multiple simultaneous users than did the traditional "event queue."
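The contrast is easy to see in miniature. What follows is a minimal sketch of the idea only, in the spirit of a tuplespace, and not the Stanford team's implementation: where an event queue delivers each event to a single consumer in arrival order, an event heap lets any number of transient participants post events and retrieve whichever ones match a pattern, with unclaimed events simply expiring.

```python
import time

class EventHeap:
    """A toy, tuplespace-flavored event heap; a sketch, not the iRoom code.

    Any device may post an event; any device may retrieve one matching
    a pattern. Events carry a time-to-live, so a user who wanders off
    mid-session leaves no undeliverable messages wedging the system.
    """

    def __init__(self):
        self._events = []

    def post(self, ttl_seconds=30, **fields):
        """Any participant may post an event at any time."""
        fields["_expires"] = time.time() + ttl_seconds
        self._events.append(fields)

    def retrieve(self, **pattern):
        """Return and remove the first live event matching the pattern."""
        now = time.time()
        self._events = [e for e in self._events if e["_expires"] > now]
        for event in self._events:
            if all(event.get(k) == v for k, v in pattern.items()):
                self._events.remove(event)
                return event
        return None

# A wall display and a visitor's tablet (hypothetical devices) need
# know nothing about one another; they rendezvous through the heap.
heap = EventHeap()
heap.post(kind="show-slide", room="meeting-room", slide=12)
print(heap.retrieve(kind="show-slide", room="meeting-room"))
```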

Developments elsewhere in the broader information technology field had clear implications for the ubiquitous model. Radio-frequency identification (RFID) tags and two-dimensional barcodes were just two of many technologies adapted from their original applications, pressed into service in ubicomp scenarios as bridges between the physical and virtual worlds. Meanwhile, at the human-machine interface, the plummeting cost of processing resources meant that long-dreamed-of but computationally intensive modes of interaction, such as gesture recognition and voice recognition, were becoming practical; they would prove irresistible as elements of a technology that was, after all, supposed to be invisible-but-everywhere.

And beyond that, there was clearly a ferment at work in many of the fields touching on ubicomp, even through the downturn that followed the crash of the "new economy" in early 2001. It had reached something like a critical mass of thought and innovation by 2005: an upwelling of novelty both intellectual and material, accompanied by a persistent sense, in many quarters, that ubicomp's hour had come 'round at last. Pieces of the puzzle kept coming. By the time I began doing the research for this book, the literature on ubicomp was a daily tide of press releases and new papers that was difficult to stay on top of: papers on wearable computing, augmented reality, locative media, near-field communication, body-area networking. In many cases, the fields were so new that the jargon hadn't even solidified yet.

Would all of these threads converge on something comprehensible, useful, or usable? Would any of these ubiquitous computings fulfill PARC's promise of a "calm technology"? And if so, how?

Questions like these were taken up with varying degrees of enthusiasm, skepticism, and critical distance in the overlapping human-computer interaction (HCI) and user experience (UX) communities. The former, with an academic engineering pedigree, had evolved over some thirty years to consider the problems inherent in any encounter between complex technical systems and the people using them; the latter, a more or less ad hoc network of practitioners, addressed similar concerns in their daily work, as the Internet and the World Wide Web built on it became facts of life for millions of nonspecialist users. As the new millennium dawned, both communities found ubicomp on their agendas, in advance of any hard data gleaned from actual use.

With the exception of discussions going on in the HCI community, none of these groups were necessarily pursuing anything that Mark Weiser would have recognized as fully cognate with his ubiquitous computing. But they were all sensing the rapidly approaching obsolescence of the desktop model, the coming hegemony of networked devices, and the reconfiguration of everyday life around them. What they were all grasping after, each in their own way, was a language of interaction suited to a world where information processing would be everywhere in the human environment.

Thesis 02

The many forms of ubiquitous computing are indistinguishable from the user's perspective and will appear as aspects of a single paradigm: everyware.

In considering Mark Weiser's "ubiquitous" computing alongside all those efforts that define the next computing as one that is "mobile" or "wearable" or "connected" or "situated," one is reminded time and again of the parable of the six blind men describing an elephant.

We've all heard this one, haven't we? Six wise elders of the village were asked to describe the true nature of the animal that had been brought before them; sadly, age and infirmity had reduced them all to a reliance on the faculty of touch. One sage, trying and failing to wrap his arms around the wrinkled circumference of the beast's massive leg, replied that it must surely be among the mightiest of trees. Another discerned a great turtle in the curving smoothness of a tusk, while yet another, encountering the elephant's sinuous, muscular trunk, thought he could hardly have been handling anything other than the king of snakes. None of the six, in fact, could come anywhere close to agreement regarding what it was that they were experiencing, and their disagreement might have become quite acrimonious had the village idiot not stepped in to point out that they were all in the presence of the same creature.

And so it is with post-PC computing. Regardless of the valid distinctions between these modes, technologies, and strategies, I argue that such distinctions are close to meaningless from the perspective of people exposed to the computing these theories all seem to describe.

Historically, there have been some exceptions to the general narrowness of vision in the field. Hiroshi Ishii's Tangible Media Group at the MIT Media Lab saw their work as cleaving into three broad categories: "interactive surfaces," in which desks, walls, doors, and even ceilings were reimagined as input/output devices; "ambients," which used phenomena such as sound, light, and air currents as peripheral channels to the user; and "tangibles," which leveraged the "graspable and manipulable" qualities of physical objects as provisions of the human interface.

A separate MIT effort, Project Oxygen, proceeded under the assumption that a coherently pervasive presentation would require coordinated effort at all levels; they set out to design a coordinated suite of devices and user interfaces, sensor grids, software architecture, and ad hoc and mesh-network strategies. (Nobody could accuse them of lacking ambition.)

These inclusive visions aside, however, very few of the people working in ubicomp or its tributaries seem to have quite gotten how all these pieces would fit together. From the user's point of view, I'd argue, these are all facets of a single larger experience.

What is that experience? It involves a diverse ecology of devices and platforms, most of which have nothing to do with "computers" as we've understood them. It's a distributed phenomenon: The power and meaning we ascribe to it are more a property of the network than of any single node, and that network is effectively invisible. It permeates places and pursuits that we've never before thought of in technical terms. And it is something that happens out here in the world, amid the bustle, the traffic, the lattes, and gossip: a social activity shaped by, and in its turn shaping, our relationships with the people around us.

And although too many changes in the world get called "paradigm shifts"—the phrase has been much abused in our time—when we consider the difference between our experience of PCs and the thing that is coming, it is clear that in this case no other description will do. Its sense of a technological transition entraining a fundamental alteration in worldview, and maybe even a new state of being, is fully justified.

We need a new word to begin discussing the systems that make up this state of being—a word that is deliberately vague enough that it collapses all of the inessential distinctions in favor of capturing the qualities they all have in common.

What can we call this paradigm? I think of it as everyware.

Thesis 03

Everyware is information processing embedded in the objects and surfaces of everyday life.

Part of what the everyware paradigm implies is that most of the functionality we now associate with these boxes on our desks, these slabs that warm our laps, will be dispersed into both the built environment and the wide variety of everyday objects we typically use there.

Many such objects are already invested with processing power—most contemporary cameras, watches, and phones, to name the most obvious examples, contain microcontrollers. But we understand these things to be technical, and if they have so far rarely participated in the larger conversation of the "Internet of things," we wouldn't necessarily be surprised to see them do so.

Nor are we concerned, for the moment, with the many embedded microprocessors we encounter elsewhere in our lives, generally without being aware of them. They pump the brakes in our cars, cycle the compressors in our refrigerators, or adjust the water temperature in our washing machines, yet never interact with the wider universe. They can't be queried or addressed by remote systems, let alone interact directly with a human user.

It's not until they do share the stream of information passing through them with other applications and services that they'll become of interest to us. It is my sense that the majority of embedded systems will eventually link up with broader networks, but for now they play a relatively small role in our story.

By contrast, what we're contemplating here is the extension of information-sensing, -processing, and -networking capabilities to entire classes of things we've never before thought of as "technology." At least, we haven't thought of them that way in a long, long time: I'm talking about artifacts such as clothing, furniture, walls and doorways.

Their transparency is precisely why this class of objects is so appealing to engineers and designers as platforms for computation. These things are already everywhere, hiding in plain sight; nobody bats an eyelash at them. If nothing else, they offer a convenient place to stash the componentry of a computing power that might otherwise read as oppressive. More ambitiously, as we'll see, some designers are exploring how the possibilities inherent in an everyday object can be thoroughly transformed by the application of information technologies like RFID, Global Positioning System (GPS), and mesh networking.

The idea of incorporating digital "intelligence" into objects with an everyday form factor—industrial-designer jargon for an object's physical shape and size—appeared early in the developmental history of ubicomp. As far back as 1989, Olivetti Research deployed an early version of Roy Want's Active Badge, in which the familiar workplace identity tag became a platform for functionality.

Workers wearing Active Badges in an instrumented building could automatically unlock areas to which they had been granted access, have phone calls routed to them wherever they were, and create running diaries of the meetings they attended. They could also be tracked as they moved around the building; at one point, Olivetti's public Web site even allowed visitors to query the location of an employee wearing an Active Badge. And while the intent wasn't to spy on such workers, it was readily apparent how the system could be abused, especially when the device responsible was so humble and so easy to forget about. Original sin came early to ubicomp.

Want went on to join Mark Weiser's team at PARC, where he contributed to foundational work on a range of networked devices called "tabs," "pads," and "boards." As with Active Badge, these were digital tools for freely roaming knowledge workers, built on a vocabulary of form universally familiar to anyone who's ever worked in an office: name tags, pads of paper, and erasable whiteboards, respectively.*

* These form factors had been looming in the mass unconscious for a long time. PARC's "pad," in particular, seemed to owe a lot to the slablike media/communication devices used by astronauts Frank Poole and Dave Bowman in Stanley Kubrick's 1968 classic 2001: A Space Odyssey.

Each had a recognizable domain of function. Tabs, being the smallest, were also the most personal; they stayed close to the body, where they might mediate individual information such as identity, location and availability. Pads were supposed to be an individual's primary work surface, pen-based devices for documents and other personal media. And boards were wall-sized displays through which personal work could be shared, in a flow of discovery, annotation and commentary.

Networking infrastructure throughout the office itself enabled communication among the constellation of tabs, pads and boards in active use, allocating shared resources like printers, routing incoming e-mails and phone calls, and providing background maintenance and security functions. Documents in progress would follow a worker into and out of meetings, up onto public boards for discussion, and back down to one's own pad for further revision.

Part of the reasoning behind this was to replace the insular, socially alienating PC with something that afforded the same productivity. In this, PARC anticipated by half a decade the casual, and casually technical, workspace that did in fact emerge during the late-1990s ascendancy of the dot-coms. At least in theory, by getting people out from behind their screens, tabs and pads and boards lent themselves to an open, fluid, and collaborative work style.

Although none of these devices was ever commercialized, at least by Xerox, the die had been cast. Many of the ubicomp projects that followed took PARC's assumptions more or less as givens, as researchers turned their efforts toward enabling the vision of collaborative, distributed work embedded in it.

But what about that percentage of our lives we spend outside the confines of work? While it was more or less inevitable that efforts would be made to provision objects outside the workplace with a similar capacity for digital mediation—if for no other reason than the attractively high margins and immense volume of the consumer-electronics sector—it took longer for them to appear.

To understand why such efforts took so long to get off the ground, it's necessary to reconstruct for a moment what the world looked like at the very dawn of ubicomp. As strange as it now seems, the early conceptual work in the field happened in a world without a Web or, for that matter, widespread adoption of mobile phones in North America.

The 802.11b standard we know as Wi-Fi, of course, didn't yet exist; you couldn't simply cobble together a project around off-the-shelf wireless routers. The in-building wireless networks that prototypes like Active Badge depended on were bespoke, one-off affairs; in more than one project, students simply sketched in connectivity as a black box, an assertion that if an actual network were somehow to come into existence, then the proposed system would function like so.

In such an environment, it may have been reasonable to posit a pervasive wireless network in the workplace. However, a deployment in public space or the domestic sphere was clearly out of the question.

The mass uptake of the Internet changed everything. What would have seemed fanciful from the perspective of 1992 became far more credible in its wake. As a lingua franca, as an enabling technology, and especially as an available kit of parts, the dramatic, global spread of Internet Protocol-based networking immediately made schemes of ubiquity feasible.

Over the next several years, a profusion of projects explored various strategies for living with, and not merely using, information technology. Some of the proposals and products we'll be encountering in this book include keys and wallets that locate themselves when misplaced; a beer mat that summons the bartender when an empty mug is placed upon it; and a bathtub that sounds a tone in another room when the desired water temperature has been reached.*

* Could the mental models attached to such familiar forms unduly limit what people think of to do with them? The answer is almost certainly yes; we'll take up that question a bit later on.

Some of the most beautiful everyware I've seen was designed by former PARC researcher Ranjit Makkuni, whose New Delhi-based Sacred World Foundation works to bridge the gap between technological and traditional cultures. This is information processing interwoven with the familiar daily forms not of the developed world, but of the global South: cycle rickshaws, clay pots, and amulets among them. It's a lovely reminder that the world contains a great many different "everydays," beyond the ones we happen to be used to.

Whether clay pot or beer mat, though, these projects all capitalize on the idea that the distinctly local application of intelligence, and not the generic, one-size-fits-all vision embodied in computers, will turn out to be among the most important and useful legacies of our technological moment. In this, they appear to be following the advice of human interface pioneer Don Norman.

Norman argues, in The Invisible Computer and elsewhere, that the difficulty and frustration we experience in using the computer are primarily artifacts of its general-purpose nature. He proposes that a truly human-centered design would explode the computer's many functions into a "quiet, invisible, unobtrusive" array of networked objects scattered throughout the home: simple, single-purpose "information appliances" in the form of shoes, bookshelves, even teddy bears.

Or we could go still deeper "into the woodwork." Stefano Marzano points out, in his introduction to Philips Electronics' 2000 exploration of wearable electronics, New Nomads, that when we remove the most transient layer of things from the environments we spend our lives in, we're left with nothing but the spaces themselves, abstracted down to their essentials. These are universals humans have lived with for millennia: elements like walls and roofs, tables and seating, clothing. And, of course, the body itself—our original and our final home. In everyware, all of these present appealing platforms for networked computation.

Fifteen years downstream from its tentative beginnings at Olivetti, the idea of the ordinary as a new frontier for computing is finally starting to bear fruit. We're beginning to see the walls and books, sweaters, and tabletops around us reconsidered as sensors, interface objects, or active sites that respond in some way to data they receive from outside. Eventually, we may even come to see them as the articulated parts of a massively distributed computational engine.

When everyday things are endowed with the ability to sense their environment, store metadata reflecting their own provenance, location, status, and use history, and share that information with other such objects, this cannot help but redefine our relationship with such things. We'll find our daily experience of the world altered in innumerable ways, some obvious and some harder to discern. And among the more significant consequences of this "computing everywhere" is that it strongly implies "information everywhere."
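To make that shift concrete, consider the minimal record such an object might carry. The sketch below is illustrative only; the field names are my assumptions, not any published schema.

```python
from dataclasses import dataclass, field

@dataclass
class SmartObjectRecord:
    """Metadata an everyday networked object might hold about itself.

    All field names here are assumptions for the sake of example:
    enough to answer where a thing came from, where it is, what
    state it's in, and how it has been used.
    """
    object_id: str                   # e.g., an RFID tag's serial number
    provenance: str                  # who made it, when, and where
    location: tuple                  # last known (latitude, longitude)
    status: str                      # "in use", "idle", "misplaced"...
    use_history: list = field(default_factory=list)

    def record_use(self, event: str):
        """Append one episode to the object's use history."""
        self.use_history.append(event)

# A sweater that can report its own whereabouts and wear:
sweater = SmartObjectRecord("tag-0042", "knitted 2005, Reykjavik",
                            (40.73, -73.99), "idle")
sweater.record_use("worn 2006-01-12")
print(sweater.status, sweater.use_history)
```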

Thesis 04

Everyware gives rise to a regime of ambient informatics.

With its provisions for sensing capacity built into such a wide variety of everyday objects, we've seen that everyware multiplies by many, many times the number of places in the world in which information can be gathered. The global network will no longer be fed simply by keyboards, microphones, and cameras, in other words, but also by all of the inputs implied by the pervasive deployment of computational awareness.

Even if all of those new inputs were fed into conventional outputs—Web sites, say, or infographics on the evening news—that would certainly be a significant evolution in the way we experience the world. But everyware also provides for a far greater diversity of channels through which information can be expressed, either locally or remotely. In addition to relatively ordinary displays, it offers spoken notifications, "earcons" and other audio cues; changes in light level or color; even alterations in the physical qualities of objects and architectural surfaces, from temperature to reflectivity.

And it's this expansion in the available modes of output that is likely to exert a much stronger shaping influence on our lives. When so many more kinds of information can be expressed just about anywhere, the practical effect will be to bring about a relationship with that information that I think of as ambient informatics.

Ambient informatics is a state in which information is freely available at the point in space and time someone requires it, generally to support a specific decision. Maybe it's easiest simply to describe it as information detached from the Web's creaky armature of pages, sites, feeds and browsers, and set free instead in the wider world to be accessed when, how and where you want it.

One of the notions that arrives alongside ambient informatics is the idea of context- or location-aware services. This could be something as simple as a taxi's rooftop advertisement cross-referencing current GPS coordinates with a database of bank branches, in order to display the location of the nearest ATM. It could be the hackneyed m-commerce use case, all but invariably trotted out in these discussions, of a discount "coupon" sent to your phone whenever you pass through the catchment area of a Starbucks or a McDonald's. Or it might simply mean that the information pushed to you varies with where you are, who you're with, and what you're doing.
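The first of these cases is, mechanically, nothing more than a nearest-neighbor lookup. Here's a minimal sketch under stated assumptions: the branch coordinates are invented, and a textbook great-circle formula stands in for whatever geometry a real advertising system would actually use.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical branch database; a live system would query a real service.
atms = {
    "57th & Lexington": (40.7606, -73.9690),
    "Union Square":     (40.7359, -73.9911),
    "Wall Street":      (40.7069, -74.0113),
}

def nearest_atm(taxi_lat, taxi_lon):
    """Cross-reference the taxi's GPS fix against the branch database."""
    return min(atms, key=lambda name: haversine_km(taxi_lat, taxi_lon,
                                                   *atms[name]))

print(nearest_atm(40.7410, -73.9897))  # -> Union Square
```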

Ideally, this means effortless utility—the day's weather displayed on your bathroom mirror, the traffic report on your windshield, the cue embedded in your wallet or handbag that lets you know when one of your friends is within a hundred meters of your present position—but, as we shall see, there are darker implications as well. Perhaps we'll find that a world with too much information presents as many problems as one with too little.

Either way, there will still be an Internet, and we'll likely make more use of it than ever before. But with contextual information diffused as appropriate in the environment, we won't need a computer to get to it, and the entire Web as we've come to know it may become something of a backwater.

Thesis 05

At its most refined, everyware can be understood as information processing dissolving in behavior.

The great product designer Naoto Fukasawa speaks of "design dissolving in behavior." By this, he means interactions with designed systems so well thought out by their authors, and so effortless on the part of their users, that they effectively abscond from awareness.

The objects he is best known for—mobile phones and CD players, humidifiers and television sets—uniformly display this quality. His work draws much of its power from its attention to the subtle, humble, profoundly comfortable ways in which people use the world—the unconsciousness with which people hang umbrellas from a lunch counter by their handles, use notepads as impromptu drink coasters, or gaze at their reflections in a mug of coffee. There's a lot in common here with Mark Weiser's dictum that "the most profound technologies are those that disappear."

Correspondingly, we can think of everyware as information processing dissolving in behavior. This is the ambition that I discern behind so many of the scenarios of ubiquitous and pervasive computing, from Roy Want to Don Norman: that we could claim the best of both worlds, harnessing all of the power of a densely networked environment, but refining its perceptible signs until they disappear into the things we do every day.

In this telling, ordinary interactions with information become transparent, eliminating the needless deformations introduced by our forty-year fixation on "the computer." You close the door to your office because you want privacy, and your phone and IM channel are automatically set to "unavailable." You point to an unfamiliar word in a text, and a definition appears. You sit down to lunch with three friends, and the restaurant plays only music that you've all rated highly. In each scenario, powerful informatics intervene to produce the experience, but you'd have to look pretty hard to turn up their traces. Such interactions are supposed to feel natural, human, right.
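Seen from the plumbing side, each of these scenarios is a binding from a sensed event to an action. The sketch below is an assumption about what such glue might look like, with every sensor and status name invented for illustration; it describes no shipping system.

```python
# Toy event-to-action bindings: the invisible glue that might let
# closing an office door set one's phone and IM presence. All sensor
# and status names here are hypothetical.

rules = {
    ("office-door", "closed"): {"phone": "unavailable", "im": "unavailable"},
    ("office-door", "opened"): {"phone": "available", "im": "available"},
}

def on_sensor_event(sensor, state, presence):
    """Apply whatever status changes are bound to a sensed event."""
    changes = rules.get((sensor, state))
    if changes:
        presence.update(changes)

presence = {"phone": "available", "im": "available"}
on_sensor_event("office-door", "closed", presence)
print(presence)  # {'phone': 'unavailable', 'im': 'unavailable'}
```

The point of the example is how little of it a user would ever see: the rule fires on the same gesture, closing a door, that already expressed the intent.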

Well and good, in principle. How does it work in practice? Let's take a canonical example: the exchange of business cards.

This tiny ritual happens by the million every day, throughout the commercial world. The practice differs from place to place, but it is always important, always symbolically freighted with performances of status and power, or accessibility and openness. It's no stretch to assert that billion-dollar deals have hinged on this exchange of tokens. How could it be reimagined as everyware?

One relatively crude and timid expression might propose that, instead of the inert slips of paper we now proffer, we hand out RFID-equipped "smart" cards encoding our contact information and preferences. (Maybe you'd tap such a card against a reader to place a call, without having to be bothered with the details of remembering a number, or even a full name; fans of Aliens may recall that Lt. Ripley availed herself of just such an interface, in her wee-hours call to corporate weasel Burke.)

In a more aggressive version of this story, the physical token disappears from the transaction; instead, a data file containing the same information is transmitted from one party to the other over a low-voltage network, using the skin's own inherent conductivity. Maybe, in a further refinement, the only data actually sent over the network is a pointer, a key to unlock a record maintained locally elsewhere.
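In protocol terms, that refinement is the difference between sending the record and sending a reference to it. A minimal sketch, with the names and the directory lookup invented for illustration:

```python
import uuid

# Stand-in for a record store the sender maintains (or delegates);
# purely hypothetical, as is everything named below.
directory = {}

def publish_card(contact: dict) -> str:
    """Keep the full record at home; mint an opaque pointer to it."""
    key = uuid.uuid4().hex
    directory[key] = contact
    return key

def dereference(key: str) -> dict:
    """All that crosses the body-area network during the handshake is
    the key; the recipient resolves it against the sender's store."""
    return directory[key]

key = publish_card({"name": "A. Example", "affiliation": "Acme Corp"})
print(dereference(key))
```

One virtue of the pointer approach, incidentally, is that the record can be corrected or revoked after the handshake itself, something no paper card allows.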

And there it is, everyware's true and perfect sigil. Information has passed between two parties, adding a node to one's personal or professional network. This transaction takes several steps to accomplish on a contemporary social-networking site, and here it's been achieved with a simple handshake—an act externally indistinguishable from its non-enhanced equivalent. Here we can truly begin to understand what Weiser may have been thinking when he talked about "disappearance."

If that's too abstract for you, let's take a look at MasterCard's RFID-equipped PayPass contactless payment system, which will have been introduced commercially (alongside Chase's competing Blink) by the time this book is published. MasterCard's tagline for PayPass is "tap & go," but that belies the elaborate digital choreography concealed behind the simple, appealing premise.

Schematically, it looks like this: you bring your card, key fob, or other PayPass-equipped object into range, by tapping it on the reader's "landing zone." The reader feeds power inductively to the device's embedded antenna, which powers the chip. The chip responds by transmitting an encrypted stream of data corresponding to your account number, a stream produced by modulating the strength of the electromagnetic field between it and the reader.

From this point forward, the transaction proceeds in the conventional manner: the reader queries the network for authorization, compares the amount of the purchase in question with the availability of funds on hand, and confirms or denies the purchase. And all of that happens in the space of 0.2 seconds: far less than a single heartbeat, and, as MasterCard clearly counts on, not nearly enough time to consider the ramifications of what we've just done.
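Rendered as a toy program, with every class and function name an assumption of mine and the real protocol's cryptography waved away as a string, the sequence looks something like this:

```python
class Chip:
    """Toy stand-in for the card's secure element (illustrative only)."""
    def __init__(self, account):
        self.account = account
        self.powered = False

    def transmit_cryptogram(self):
        # Stands in for the encrypted account stream the chip modulates
        # onto the reader's electromagnetic field.
        assert self.powered, "no power until the chip enters the field"
        return f"enc({self.account})"

class Network:
    """Toy authorization host: compares the purchase to available funds."""
    def __init__(self, funds):
        self.funds = funds

    def authorize(self, cryptogram, amount):
        account = cryptogram[4:-1]   # "decrypt" the toy stream
        return self.funds.get(account, 0) >= amount

def tap_and_go(chip, network, amount):
    chip.powered = True                         # 1. inductive power-up
    cryptogram = chip.transmit_cryptogram()     # 2. encrypted account data
    ok = network.authorize(cryptogram, amount)  # 3. conventional back end
    return "approved" if ok else "declined"

network = Network({"4111-0000": 50.00})
print(tap_and_go(Chip("4111-0000"), network, 3.75))   # approved
print(tap_and_go(Chip("4111-0000"), network, 80.00))  # declined
```

The entire exchange, again, fits inside two-tenths of a second.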

Intel Research's Elizabeth Goodman argues that "[t]he promise of computing technology dissolving into behavior, invisibly permeating the natural world around us cannot be reached," because "technology is...that which by definition is separate from the natural." In the face of technologies like PayPass, though, I wonder whether she's right. I don't think it's at all unlikely that such transactions will effectively become invisible—at least, for most of us, most of the time.

I do, however, think it's of concern. If this dissolving into behavior is the Holy Grail of a calm and unobtrusive computing, it's also the crux of so many of the other issues which ought to unsettle us—simultaneously everyware's biggest promise, and its greatest challenge.

Thesis 06

There are many rationales for the move away from the PC, any one of which would have been sufficient on its own.

At this point, you may well be wondering about the "why" of all this. Why embed computing in everyday objects? Why reinvent thoroughly assimilated habits and behaviors around digital mediation? Above all, why give up the settled and familiar context of the PC for a wild and unruly user environment, rivaling in complexity the knottiest and most difficult problems human beings have ever set for themselves?

As you might suspect, there's no one answer. Part of the reason that the emergence of everyware seems so inevitable to me is that there are a great many technical, social, and economic forces driving it, any one of which would probably have been sufficient on its own.

Certainly, Mark Weiser's contingent at PARC wanted to push computation into the environment because they hoped that doing so judiciously might ameliorate some less pleasant aspects of a user experience that constantly threatened to spin out of control. As Weiser and co-author John Seely Brown laid out in a seminal paper, "The Coming Age of Calm Technology," they wanted to design tools to "encalm as well as inform." Similar lines of argument can be adduced in the work of human-centered design proponents from Don Norman onward.

Much of the Japanese work along ubiquitous lines, and in parallel endeavors such as robotics, is driven by the recognition that an aging population will require not merely less complicated interfaces, but outboard memory augmentation—and Japan is far from the only place with graying demographics. Gregory Abowd's Aware Home initiative at Georgia Tech is probably the best-known effort to imagine a ubicomp that lets the elderly safely and comfortably "age in place."

Ranjit Makkuni might argue that well-crafted tangible interfaces are not merely less intimidating to the non-technically inclined but are, in fact, essential if we want to provide for the needs of the world's billion or more non-literate citizens.

The prospect of so many new (and new kinds of) sensors cannot help but beguile those groups and individuals, ever with us, whose notions of safety—or business models—hinge on near-universal surveillance. Law-enforcement and public-safety organizations planetwide can be numbered among them, as well as the ecosystem of vendors, consultants, and other private concerns that depend on them for survival.

Beyond these, it would already be hard to number the businesses fairly salivating over all of the niches, opportunities, and potential revenue streams opened up by everyware.

Finally, looming behind all of these points of view is an evolution in the material and economic facts of computing. When computational resources become so cheap that there's no longer any need to be parsimonious with them, people feel freer to experiment with them. They'll be more likely to indulge "what if" scenarios: what if we network this room? this parka? this surfboard? (and inevitably: this dildo?)

With so many pressures operating in everyware's favor, it shouldn't surprise us if some kind of everyware appeared in our lives at the very first moment in which the necessary technical wherewithal existed. And that is exactly what is now happening all around us, if we only have the eyes to see it.

Whether you see this as a paradigm shift in the history of information technology, as I do, or as something more gently evolutionary, there can be little doubt that something worthy of note is happening.

Thesis 07

Everyware isn't so much a particular kind of hardware or software as it is a situation.

The difficult thing to come to terms with, when we're so used to thinking of "computing" as something to do with discrete devices, is that everyware finally isn't so much a particular kind of hardware, philosophy of software design, or set of interface conventions as it is a situation—a set of circumstances.

Half the battle of making sense of this situation is learning to recognize when we've entered it. This is especially true because so much of what makes up everyware is "invisible" by design; we have to learn to recognize the role of everyware in a "smart" hotel room, a contactless payment system, and a Bluetooth-equipped snowboarding parka.

The one consistent thread that connects all of these applications is that the introduction of information processing has wrought some gross change in their behavior or potential. And yet it appears in a different guise in each of them. Sometimes everyware is just there: an ambient, environmental, enveloping field of information. At other times, it's far more instrumental, something that a user might consciously take up and turn to an end. And it's this slippery, protean quality that can make everyware so difficult to pin down and discuss.

Nevertheless, I hope I've persuaded you by now that there is in fact a coherent "it" to be considered, something that appears whenever there are multiple computing devices devoted to each human user; when this processing power is deployed throughout local physical reality instead of being locked up in a single general-purpose box; and when interacting with it is largely a matter of voice, touch, and gesture, interwoven with the existing rituals of everyday life.

We might go a step further: The many ways in which everyware will appear in our lives—as new qualities in the things that surround us, as a regime of ambient informatics, and as information processing dissolving in behavior—are linked not merely by a technical armature, but by a set of assumptions about the proper role of technology.

They're certainly different assumptions from the ones most of us have operated under for the last twenty or so years. The conceptual models we've been given to work with, both as designers of information technology and as users of it, break down completely in the face of the next computing.

As designers, we will have to develop an exquisite and entirely unprecedented sensitivity to contexts we've hitherto safely been able to ignore. As users, we will no longer be able to hold computing at arm's length, as something we're "not really interested in," whatever our predilections should happen to be. For better or worse, the everyware situation is one we all share.

Thesis 08

The project of everyware is nothing less than the colonization of everyday life by information technology.

Objects, surfaces, gestures, behaviors: as we've seen, all have become fair game for technological intervention. Considered one by one, each intervention may proceed by granular and minuscule steps, but in the aggregate, whether intentionally or not, the effect is to begin overwriting the features of our day-to-day existence with something that never used to be there.

We all have an idea what "everyday life" means, though of course the details will be different for each of us. Your conception might be organized around work: rushing to catch the train, delivering a convincing presentation to your colleagues, and remembering to pick up the dry cleaning on the way home. Someone else's might reside primarily in frustrations like paying the phone bill, or waiting in stalled traffic on the freeway.

My own sense of everyday life resides in less stressful moments, like strolling through the neighborhood after dinner with my wife, meeting friends for a drink, or simply gazing out the window onto the life of the city. However the overall tenor diverges, we have this much in common: We all know what it's like to bathe and dress, eat and socialize, make homes and travel between them.

Our lives are built from basic, daily operations like these. We tend to think of them as being somehow interstitial to the real business of a life, but we wind up spending much of our time on earth engaged in them. And it's precisely these operations that everyware proposes to usefully augment, out here in the rough-and-tumble of ordinary existence, in what I've elsewhere called "the brutal regime of the everyday." It aims to address questions as humble, and as important, as "Where did I leave my keys?" "Will I be warm enough in this jacket?" "What's the best way to drive to work this morning?"

As unheroic as they are, the transactions of everyday life would hardly seem to require technical intervention. It seems strange to think of managing them through an interface of digital mediation, or indeed applying any technology beyond the most basic to their enrichment. And yet that is where computing currently appears to be headed.

The stakes are high. But troublingly—especially given the narrow window of time we have in which to make meaningful choices about the shape of everyware—a comprehensive understanding of what the options before us really mean has been hard to come by.

In the rest of this book, we'll attempt to offer just that. We'll start by examining how everyware differs from the information technology we've become accustomed to. We'll prise out, in some detail, what is driving its emergence and try to get a handle on some of the most pressing issues it confronts us with. We'll sum it all up by asking who gets to determine the shape of everyware's unfolding, how soon we need to be thinking about it, and finally, and perhaps most importantly, what we can do to improve the chance that as it appears, it does so in ways congenial to us.