Everyware: The Dawning Age of Ubiquitous Computing - Adam Greenfield (2006)

Section 2. How is Everyware Different from What We're Used to?

In Section 1, we discussed the idea that a paradigm shift in the way we use computing is upon us. In its fullest sense, this shift is both an exodus from the PC as a standalone platform and a remaking of everyday life around the possibilities of information processing.

But how, specifically, is this different from the computing we've gotten used to? What does everyware look like from the other side of the transition?

Thesis 09

Everyware has profoundly different implications for the user experience than previous paradigms.

Contemporary designers of digital products and services speak of the "user experience": in other words, how does it feel to use this?

Where complex technological artifacts are concerned, such experiences arise from the interplay of a great many factors, subjective ones and those which are more objectively quantifiable. Consistently eliciting good user experiences means accounting for the physical design of the human interface, the flow of interaction between user and device, and the larger context in which that interaction is embedded.

In not a single one of these dimensions is the experience of everyware anything like that of personal computing. Simply put, people engaging everyware will find it hard to relate it to anything they may have experienced in previous encounters with computing technology.

Let's compare a familiar scenario from personal computing with a correspondingly typical use case for everyware to see why this is so.

The conventional case is something that happens millions of times a day: Driven by some specific need, a user sits down at her desk, opens up her computer, and launches an application—a spreadsheet, a word processor, an e-mail client. Via the computer's operating system, she issues explicit commands to that application, and with any luck she achieves what she set out to do in some reasonable amount of time.

By contrast, in everyware, a routine scenario might specify that, by entering a room, you trigger a cascade of responses on the part of embedded systems around you. Sensors in the flooring register your presence, your needs are inferred (from the time of day, the presence of others, or even the state of your own body), and conditions in the room are altered accordingly. But few if any of these responses are directly related to the reason you had in mind for moving from one place to another.
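
To make the contrast concrete, here is a minimal sketch, in Python, of the kind of inference such a room might perform. Every sensor reading, threshold, and function name below is invented for the purpose of illustration; no actual product works exactly this way.

    from datetime import datetime

    # Hypothetical sketch: infer a room "scene" from presence, time of day,
    # and company, then adjust conditions. All names are illustrative.

    def infer_scene(occupants, now=None):
        now = now or datetime.now()
        if not occupants:
            return "vacant"
        if len(occupants) > 1:
            return "social"
        return "focus" if 9 <= now.hour < 18 else "wind-down"

    SCENES = {
        "vacant":    {"lights": 0.0, "temp_c": 18.0},
        "focus":     {"lights": 0.9, "temp_c": 21.0},
        "social":    {"lights": 0.7, "temp_c": 22.0},
        "wind-down": {"lights": 0.3, "temp_c": 23.0},
    }

    def on_floor_sensor(occupants, set_lights, set_thermostat):
        # Called whenever the pressure-sensitive floor registers a change.
        scene = infer_scene(occupants)
        settings = SCENES[scene]
        set_lights(settings["lights"])
        set_thermostat(settings["temp_c"])
        return scene

    # Example: the floor registers a single occupant.
    print(on_floor_sensor(["resident"], print, print))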

The everyware scenario is still based on the same primitives the desktop interaction relies upon; at the very lowest level, both are built on a similar substrate of microprocessors, memory, networking, and code. But should you happen to distribute those processors in the physical environment, link them to a grid of embedded sensors and effectors, and connect them all as nodes of a wireless network, you have something markedly different on your hands.

The PC user actively chose the time, manner, and duration of her involvement with her machine, and also (assuming that the machine was portable) had some say regarding where it took place. That one machine was the only technical system she needed to accomplish her goal, and conversely she was the only person whose commands it needed to attend to. Thus bounded on both sides, the interaction fell into a call-and-response rhythm: user actions followed by system events. In a way, we might even say that an application is called into being by the user's task-driven or information-seeking behavior and shut down when her success conditions have been achieved.

Compare these facets of the experience to the situation of everyware, in which the system precedes the user. You walk into a room, and something happens in response: The lights come on, your e-mails are routed to a local wall screen, a menu of options corresponding to your new location appears on the display sewn into your left sleeve. Maybe the response is something you weren't even aware of. Whether or not you walk into the room in pursuance of a particular aim or goal, the system's reaction to your arrival is probably tangential to that goal. Such an interaction can't meaningfully be construed as "task-driven." Nor does anything in this interplay between user and system even correspond with the other main mode we see in human interaction with conventional computing systems: information seeking.

Could you use everyware instrumentally, to accomplish a discrete task or garner facts about the world? Of course. But that's not what it's "for," not in the same way that a specific function is what an application or a search engine is for. It simply "is," in a way that personal computing is not, and that quality necessarily evokes an entirely different kind of experience on the part of those encountering it.

Even the ways that you address such a system, or are kept apprised of its continuing operation, are different from the ones we're used to. As PARC's Victoria Bellotti and her co-authors pointed out in a 2002 paper, designers of user experiences for standard systems "rarely have to worry about questions of the following sort:

·        When I address a system, how does it know I am addressing it?

·        When I ask a system to do something, how do I know it is attending?

·        When I issue a command (such as save, execute or delete), how does the system know what it relates to?

·        How do I know the system understands my command and is correctly executing my intended action?

·        How do I recover from mistakes?"

All of these questions, of course, come into play in the context of everyware. They go directly to the heart of the difference between ubiquitous systems and the digital artifacts we're used to. What they tell us is that everyware is not something you sit down in front of, intent on engaging. It's neither something that is easily contained in a session of use, nor an environment in which blunders and missteps can simply be Ctrl-Z'ed away.

What it is is a radically new situation that will require the development over time of a doctrine and a body of standards and conventions—starting with the interfaces through which we address it.

Thesis 10

Everyware necessitates a new set of human interface modes.

One of the most obvious ways in which everyware diverges from the PC case is its requirement for input modalities beyond the standard keyboard and screen, trackball, touchpad, and mouse.

With functionality distributed throughout the environment, embedded in objects and contexts far removed from anything you could (or would want to) graft a keyboard onto, the familiar ways of interacting with computers don't make much sense. So right away we have to devise new human interfaces, new ways for people to communicate their needs and desires to the computational systems around them.

Some progress has already been made in this direction, ingenious measures that have sprouted up in response both to the diminutive form factor of current-generation devices and the go-everywhere style of use they enable. Contemporary phones, PDAs, and music players offer a profusion of new interface elements adapted to their context: scroll wheels, voice dialing, stylus-based input, and predictive text-entry systems that, at least in theory, allow users of phone keypads to approximate the speed of typing on a full keyboard.

But as anyone who has spent even a little time with them knows, none of them is entirely satisfactory. At most, they are suggestive of the full range of interventions everyware will require.

One set of possibilities is suggested by the field known as "tangible media" at the MIT Media Lab, and "physical computing" to those researching it at NYU's Interactive Telecommunications Program. The field contemplates bridging the worlds of things and information, atoms and bits: Physical interface elements are manipulated to perform operations on associated data. Such haptic interfaces invoke the senses of both touch and proprioception—what you feel through the skin, that is, and the sensorimotor awareness you maintain of the position and movement of your body.

In a small way, using a mouse is physical computing, in that moving an object out in the world affects things that happen on screen. The ease and simplicity most users experience in mousing, after an exceedingly brief period of adaptation upon first use, relies on the subtle consciousness of cursor location that the user retains, perceived solely through the positioning of the wrist joint and fingers. It isn't too far a leap from noticing this to wondering whether this faculty might not be brought directly to bear on the world.

An example of a tangible interface in practice is the "media table" in the lobby of New York's Asia Society, a collaboration between Small Design Firm and media designer Andrew Davies. At first glance, the table appears to be little more than a comfortable place to sit and rest, a sumptuously smooth ovoid onto which two maps of the Asian landmass happen to be projected, each facing a seated user. But spend a few minutes playing with it—as its design clearly invites you to—and you realize that the table is actually a sophisticated interface to the Asia Society's online informational resources.

Off to the table's side, six pucks, smoothly rounded like river stones, nestle snugly in declivities designed specifically for them. Pick one up, feel its comfortable heft, and you see that it bears lettering around its rim: "food," or "news headlines," or "country profiles." And although it's sufficiently obvious from the context that the stones "want" you to place them over the map display, a friendly introduction gives you permission to do just that.

When you do, the map zooms in on the country you've chosen and offers a response to your selection. Holding the "news headlines" stone over Singapore, for example, calls up a live Web feed from the Straits Times, while holding "food" over Thailand takes you to recipes for Tom Yum Kung and Masaman Curry. You can easily spend fifteen minutes happily swapping stones, watching the display smoothly glide in and out, and learning a little bit about Asia as you're doing so.

As the media table suggests, such tangible interfaces are ideal for places where conventional methods would be practically or aesthetically inappropriate, or where the audience might be intimidated by them or uncomfortable in using them. The production values of the lobby are decidedly high, with sumptuous fixtures that include a million-dollar staircase; the Asia Society had already decided that a drab, standard-issue Web kiosk simply wouldn't do. So part of Small Design's reasoning was aesthetic: it wanted to suggest some connection, however fleeting, to the famous garden of the Ryoan-ji temple in Kyoto. If the media table is not precisely Zenlike, it is at the very least a genuine pleasure to touch and use.

But Small also knew that the Asia Society's audience skewed elderly and wanted to provide visitors who might have been unfamiliar with pull-down menus a more intuitive way to access all of the information contained in a complex decision tree of options. Finally, with no moving parts, the media table stands up to the demands of a high-traffic setting far better than a standard keyboard and trackball would have.

Though the Asia Society media table is particularly well-executed in every aspect of its physical and interaction design, the presentation at its heart is still a fairly conventional Web site. More radical tangible interfaces present the possibility of entirely new relationships between atoms and bits.

Jun Rekimoto's innovative DataTiles project, developed at Sony Computer Science Laboratories (CSL), provides the user with a vocabulary of interactions that can be combined in a wide variety of engaging ways—a hybrid language that blends physical cues with visual behaviors. Each DataTile, a transparent pane of acrylic about 10 centimeters on a side, is actually a modular interface element with an embedded RFID tag. Place it on the display and its behaviors change depending on what other tiles it has been associated with. Some are relatively straightforward applications: Weather, Video, and Paint tiles are exactly what they sound like. Playing a Portal tile opens up a hole in space, a linkage to some person, place, or object in the real world—a webcam image of a conference room, or the status of a remote printer. Some Portals come with an appealing twist: the Whiteboard module not only allows the user to inscribe their thoughts on a remote display board, but also captures what is written there.

Parameter tiles, meanwhile, constrain the behavior of others. The Time-Machine, for example, bears a representation of a scroll wheel; when it is placed next to a Video tile, the video can be scrubbed backward and forward. Finally, inter-tile gestures, made with a stylus, allow media objects to be "sent" from one place or application to another.
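
For readers who think in code, a toy sketch of the composition at work here: a "parameter" object modifying the behavior of the media object it is placed beside. The class and method names are invented and bear no relation to Rekimoto's actual implementation.

    # Illustrative sketch of DataTiles-style composition: a parameter tile
    # placed next to a media tile constrains how that tile behaves.

    class VideoTile:
        def __init__(self, clip):
            self.clip, self.position = clip, 0.0   # playback position, 0..1

    class TimeMachineTile:
        """Parameter tile: its scroll wheel scrubs an adjacent video tile."""
        def apply(self, neighbor, wheel_delta):
            if isinstance(neighbor, VideoTile):
                neighbor.position = min(1.0, max(0.0, neighbor.position + wheel_delta))

    video = VideoTile("conference_room.webcam")
    scrubber = TimeMachineTile()
    scrubber.apply(video, +0.25)   # user turns the wheel a quarter-revolution
    print(video.clip, video.position)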

From the kind of physical interaction behaviors we see in the media table and the DataTiles, it's only a short step to purely gestural ones, like the one so resonantly depicted in the opening moments of Steven Spielberg's 2002 Minority Report. Such gestural interfaces have been a continual area of interest in everyware, extending as they do the promise of interactions that are less self-conscious and more truly intuitive. Like physical interfaces, they allow the user to associate muscle memory with the execution of a given task; theoretically, anyway, actions become nearly automatic, and response times tumble toward zero.

With this payoff as incentive, a spectrum of methods has been devised to capture the repertoire of expressive things we do with our hands, from the reasonably straightforward to the arcane and highly computationally intensive attempts to infer the meaning of user gestures from video. Some of the more practical rely on RFID-instrumented gloves or jewelry to capture gesture; others depend on the body's own inherent capacitance. Startup Tactiva even offers PC users something called TactaPad, a two-handed touchpad that projects representations of the user's hands on the screen, affording a curious kind of immersion in the virtual space of the desktop.

With the pace of real-world development being what it is, this category of interface seems to be on the verge of widespread adoption. Nevertheless, many complications and pitfalls remain for the unwary.

For example, at a recent convention of cartographers, the technology research group Applied Minds demonstrated a gestural interface to geographic information systems. To zoom in on the map, you place your hands on the map surface at the desired spot and simply spread them apart. It's an appealing representation: immediately recognizable and memorable, intuitive, transparent. It makes sense. Once you've done it, you'll never forget it.

Or so Applied Minds would have you believe. If only gesture were this simple! The association that Applied Minds suggests between a gesture of spreading one's hands and zooming in to a map surface is culturally specific, as arbitrary as any other. Why should spreading not zoom out instead? It's just as defensibly natural, just as "intuitive" a signifier of moving outward as inward. For that matter, many a joke has turned on the fact that certain everyday gestures, utterly unremarkable in one culture—the thumbs-up, the peace sign, the "OK" sign—are vile obscenities in another.

What matters, of course, is not that one particular system may do something idiosyncratically: Anything simple can probably be memorized and associated with a given task with a minimum of effort. The problem emerges when the different systems one is exposed to do things different ways: when the map at home zooms in if you spread your hands, but the map in your car zooms out.
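
The arbitrariness is easy to express in code: the binding between gesture and effect is nothing more than an entry in a table, and two systems can choose opposite entries with equal justification. The sketch below is purely illustrative.

    # Hypothetical sketch: the binding between a gesture and its effect is
    # pure convention, stored in a table. Two systems can legitimately
    # choose opposite bindings for the same "spread hands" gesture.

    home_map = {"spread_hands": "zoom_in",  "pinch_hands": "zoom_out"}
    car_map  = {"spread_hands": "zoom_out", "pinch_hands": "zoom_in"}

    def interpret(gesture, binding):
        return binding.get(gesture, "ignore")

    for system, binding in [("home wall map", home_map), ("car nav map", car_map)]:
        print(system, "->", interpret("spread_hands", binding))
    # Same motion, contradictory results: exactly the inconsistency the
    # user has to memorize her way around.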

The final category of new interfaces in everyware concerns something still less tangible than gesture: interactions that use the audio channel. This includes voice-recognition input, machine-synthesized speech output, and the use of "earcons," or auditory icons.

The latter, recognizable tones associated with system events, assume new importance in everyware, although they're also employed in the desktop setting. (Both Mac OS and Windows machines can play earcons—on emptying the Trash, for example.) They potentially serve to address one of the concerns raised by the Bellotti paper previously referenced: Used judiciously, they can function as subtle indicators that a system has received, and is properly acting on, some input.

Spoken notifications, too, are useful in situations where users' attention is diverted by events in their visual field or by the other tasks they are engaged in: Callers and visitors can be announced by name, emergent conditions can be specified, and highly complex information can be conveyed at arbitrary length and precision.

But of all audio-channel measures, it is voice-recognition that is most obviously called upon in constructing a computing that is supposed to be invisible but everywhere. Voices can, of course, be associated with specific people, and this can be highly useful in providing for differential permissioning—liquor cabinets that unlock only in response to spoken commands issued by adults in the household, journals that refuse access to any but their owners. Speech, too, carries clear cues as to the speaker's emotional state; a household system might react to these alongside whatever content is actually expressed—yes, the volume can be turned down in response to your command, but should the timbre of your voice indicate that stress and not loudness is the real issue, maybe the ambient lighting is softened as well.
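
A rough sketch of what such differential permissioning might look like, assuming some underlying recognition layer that hands us a speaker identity and an estimate of vocal stress; every name and threshold here is invented.

    # Sketch of differential permissioning keyed to speaker identity, with a
    # secondary response to apparent stress in the voice. The speaker-ID and
    # stress estimates are assumed to come from some recognition layer.

    ADULTS = {"alice", "bob"}

    def handle_command(speaker, command, stress=0.0,
                       unlock_cabinet=None, set_volume=None, soften_lights=None):
        if command == "unlock liquor cabinet":
            if speaker in ADULTS:
                unlock_cabinet()
            else:
                return "denied"
        elif command == "turn it down":
            set_volume(-10)                 # drop volume ten units
            if stress > 0.7:                # voice sounds strained, not just loud
                soften_lights()
        return "ok"

    print(handle_command("alice", "unlock liquor cabinet",
                         unlock_cabinet=lambda: print("cabinet unlocked")))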

We shouldn't lose sight of just how profound a proposition voice-recognition represents when it is coupled to effectors deployed in the wider environment. For the first time, the greater mass of humanity can be provided with a practical mechanism by which their "perlocutionary" utterances—speech acts intended to bring about a given state—can change the shape and texture of reality.

Whatever else comes of this, though, computing equipped with tangible, gestural, and audio-channel interfaces is set free to inhabit a far larger number and variety of places in the world than can be provided for by conventional methods.

Thesis 11

Everyware appears not merely in more places than personal computing does, but in more different kinds of places, at a greater variety of scales.

In principle, at least as far as some of the more enthusiastic proponents of ubicomp are concerned, few human places exist that could not be usefully augmented by networked information processing.

Whether or not we happen to agree with this proposition ourselves, we should consider it likely that over the next few years we'll see computing appear in a very great number of places (and kinds of places) previously inaccessible to it. What would this mean in practice?

Some classic sites for the more traditional sort of personal computing are offices, libraries, dorm rooms, dens, and classrooms. (If we want to be generous, we might include static informational kiosks.)

When people started using wireless-equipped laptops, this domain expanded to include coffee houses, transit lounges, airliner seats, hotel rooms, airport concourses—basically anywhere it would be socially acceptable to sit and balance a five-pound machine on your knees, should it come to that.

The advent of a mobile computing based on smartphones and wireless PDAs opened things up still further, both technically and interpersonally. On top of the kinds of places where laptops are typically used, we can spot people happily tapping away at their mobile devices on, in, and around sidewalks, cars, waiting rooms, supermarkets, bus stops, civic plazas, commuter trains.

But extending this consideration to include ubiquitous systems is almost like dividing by zero. How do you begin to discuss the "place" of computing that subsumes all of the above situations, but also invests processing power in refrigerators, elevators, closets, toilets, pens, tollbooths, eyeglasses, utility conduits, architectural surfaces, pets, sneakers, subway turnstiles, handbags, HVAC equipment, coffee mugs, credit cards, and many other things?

The expansion not merely in the number of different places where computing can be engaged, but in the range of scales involved, is staggering. Let's look at some of them in terms of specific projects and see how everyware manifests in the world in ways and in places previous apparitions of computing could not.

Thesis 12

Everyware acts at the scale of the body.

Of all the new frontiers opening up for computation, perhaps the most startling is that of the human body. As both a rich source of information in itself and the vehicle by which we experience the world, it was probably inevitable that sooner or later somebody would think to reconsider it as just another kind of networked resource.

The motivations for wanting to do so are many: to leverage the body as a platform for mobile services; to register its position in space and time; to garner information that can be used to tailor the provision of other local services, like environmental controls; and to gain accurate and timely knowledge of the living body, in all the occult complexity of its inner workings.

It's strange, after all, to live in our bodies for as long as we do, to know them about as intimately as anything ever can be known, and to still have so little idea about how they work. The opacity of our relationship with our physical selves is particularly frustrating given that our bodies are constantly signaling their status beneath the threshold of awareness, beyond our ability to control them. In every moment of our lives, the rhythm of the heartbeat, the chemistry of the blood, even the electrical conductivity of the skin are changing in response to our evolving physical, situational, and emotional environment.

If you were somehow able to capture and interpret these signals, though, all manner of good could come from it. Bacterial and viral infections could be detected and treated, as might nutritional shortfalls or imbalances. Doctors could easily verify their patients' compliance with a prescribed regimen of pharmaceutical treatment or prophylaxis; a wide variety of otherwise dangerous conditions, caught early enough, might yield to timely intervention.

The information is there; all that remains is to collect it. Ideally, this means getting a data-gathering device that does not call undue attention to itself into intimate proximity with the body, over reasonably long stretches of time. A Pittsburgh-based startup called BodyMedia has done just that, designing a suite of soft sensors that operate at the body's surface.

Their SenseWear Patch prototype resembles a sexy, high-tech Band-Aid. Peel the paper off its adhesive backing and seat it on your arm, and its sensors detect the radiant heat of a living organism, switching it on. Once activated, the unit undertakes the production of what BodyMedia calls a "physiological documentary of your body," a real-time collection of data about heart rate, skin temperature, galvanic skin response, and so on, encrypted and streamed to a base station.
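
The general shape of that loop (sample, package, encrypt, hand off to the base station) can be sketched in a few lines. This is emphatically not BodyMedia's protocol, just an illustration of the pattern; it assumes the third-party cryptography package for the Fernet cipher, and every sensor value is faked.

    import json, time, random
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # shared with the base station out of band
    cipher = Fernet(key)

    def sample_sensors():
        # Stand-ins for real transducers: heart rate, skin temperature, GSR.
        return {"t": time.time(),
                "hr": random.randint(55, 90),
                "skin_temp_c": round(random.uniform(32.0, 34.5), 1),
                "gsr_uS": round(random.uniform(1.0, 6.0), 2)}

    def send_to_base_station(payload):
        print("would transmit", len(payload), "encrypted bytes")

    for _ in range(3):                   # three ticks of the "documentary"
        reading = sample_sensors()
        send_to_base_station(cipher.encrypt(json.dumps(reading).encode()))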

Other networked biosensors operate further away from the body. The current state of the art in such technology has to be regarded as Matsushita Electric's prototype Kenko Toware, an instrumented toilet capable of testing the urine for sugar concentration, as well as registering a user's pulse, blood pressure, and body fat. In what is almost certainly a new frontier for biotelemetry, a user can opt to have this data automatically sent to a doctor via the toilet's built-in Internet connection.*

* One can only hope that such communications would be heavily encrypted.

Is such functionality of any real value? While nominally useful in the diagnosis of diabetes, urine testing is regarded as a poor second to blood testing. Most other types of urine-based diagnostics are complicated by the necessity of acquiring an uncontaminated "clean catch." Nevertheless, the significance of Kenko Toware is clear: From now on, even your bodily waste will be parsed, its hidden truths deciphered, and its import considered in the context of other available information.

What of the developments unfolding just the other side of the skin? Without leaving the scale of the body, we encounter a whole range of technical interventions less concerned with the body as process or oracle than with its possibilities as a convenient platform—one that follows us everywhere we go. These have generally been subsumed under the rubric of "wearable computing."

Early experiments in wearability focused on the needs of highly mobile workers—primarily couriers, logistics personnel, law enforcement officers, and other first responders—whose jobs relied on timely access to situational information yet required that they keep their hands free for other tasks. A series of successful academic studies in the 1980s and 1990s, including those at the MIT Media Lab, ETH Zürich, and the Universities of Bristol and Oregon, demonstrated that deploying informatic systems on the body was at least technically feasible.

They were less convincing in establishing that anything of the sort would ever be acceptable in daily life. Researchers sprouting head-up "augmented reality" reticules, the lumpy protuberances of prototype "personal servers," and the broadband cabling to tie it all together may have proven that the concept of wearable computing was valid, but they invariably looked like extras from low-budget cyberpunk films—or refugees from Fetish Night at the anime festival.

University of Toronto professor Steve Mann has easily trumped anyone else's efforts in this regard, willingly exploring full-time life as a cyborg over the course of several years (and still doing so, as of this writing). Mann attempted to negotiate modern life gamely festooned with all manner of devices, including an "eyetap" that provided for the "continuous passive capture, recording, retrieval, and sharing" of anything that happened to pass through his field of vision.

It was difficult to imagine more than a very few people ever submitting to the awkwardness of all this, let alone the bother that went along with being a constant focus of attention; Mann himself was the subject of a notorious incident at the U.S.-Canada border, soon after the September 11th attacks, in which his mediating devices were forcibly removed by immigration authorities.*

* This sudden deprivation of the massive input he had become accustomed to was apparently a harrowing experience for Mann. He described it as one of deep disorientation and nausea.

But happily for all concerned, the hardware involved in wearable computing has become markedly smaller, lighter, and cheaper. As ordinary people grew more comfortable with digital technology, and researchers developed a little finesse in applying it to the body, it became clear that "wearable computing" need not conjure visions of cyberdork accessories like head-up displays. One obvious solution, once it became practical, was to diffuse networked functionality into something people are already in the habit of carrying on their person at most times: clothing.

In 1999 Philips Electronics published a glossy volume called New Nomads, featuring a whole collection of sleek and highly stylish fashions whose utility was amplified by onboard intelligence. While the work was speculative—all the pieces on display were, sadly, nonfunctional mockups and design studies—there was nothing in them that would have looked out of place on the sidewalks or ski slopes of the real world. Philips managed to demonstrate that wearable computing was not merely feasible, but potentially sexy.

Nor did the real world waste much time in catching up. Burton released an iPod-compatible snowboarding jacket called Amp in 2003, with narrow-gauge wiring threaded down the sleeves to a wrist-mounted control panel; by winter 2004-2005, Burton and Motorola were offering a Bluetooth-equipped suite of jacket and beanie that kept snowboarders wirelessly coupled to their phones and music players. The adidas_1 sneaker did still more with embedded processors, using sensors and actuators to adjust the shoe's profile in real time, in response to a runner's biomechanics.

Beyond the things you can already buy, hundreds of student projects have explored the possibilities of sensors light and flexible enough to be woven into clothing, typified by Richard Etter and Diana Grathwohl's AwareCuffs—sleeves that sense the digital environment and respond to the presence of an open Wi-Fi network. Given how often the ideas animating such projects have turned up in commercial products just a few years (or even months) later, we can expect the imminent appearance of a constellation of wearables.

And while none of these products and projects is as total as the digital exoskeleton Steve Mann envisioned, maybe they don't have to be. For all his personal bravery in pioneering wearable computing, Mann's vision was the product of a pre-Web, pre-cellular era in which computational resources were a lot scarcer than they are now. The more such resources are deployed out in the world, the fewer of them we have to carry around with us.

So what's the next step? After decades of habituation due to the wristwatch, the outer surface of the forearm is by now a "natural" and intuitive place to put readouts and controls. Meanwhile, with the generous expanse it offers, the torso offers wireless communication devices enough space for a particularly powerful and receptive antenna; a prototype from the U.S. Army's Natick Soldier Center integrates such an antenna into a vest that also provides a (near-literal) backbone for warfighter electronics, optics, and sensor suites. (The Army, at least, is prepared to attest to the safety of such antennae for its personnel, but I'm less certain that anyone not subject to military law would be so sanguine about wearing one all day.)

We'll also see garments with embedded circuitry allowing them to change their physical characteristics in response to external signals. The North Face's MET5 jacket takes perhaps the simplest approach, offering the wearer a controller for the grid of microscopic conductive fibers that carry heat through the garment. But more than one high-profile consumer fashion brand is currently developing clothing whose fibers actually alter their loft, and therefore their insulation profile, when signaled. When coupled to a household management system, this gives us shirts and pants that get more or less insulating, warmer or cooler, depending on the momentary temperature in the room.

Finally, there are a number of products in development that treat the clothed body as a display surface, the garment itself as a site of mediation. The U.S. Army, again, is experimenting with electro-optical camouflage for its next-generation battle dress, which suggests some interesting possibilities for clothing, from animated logos to "prints" that can be updated with the passing seasons. (Real-world approximations of the identity-dissimulating "scramble suits," so memorably imagined by Philip K. Dick in his 1977 A Scanner Darkly, are another potential byproduct.)

Considered in isolation, these projects—from toilet to eyetap, from "body area network" to running shoe—are clearly of varying degrees of interest, practicality and utility. But in the end, everything connects. Taken together, they present a clear picture of where we're headed: a world in which the body has been decisively reimagined as a site of networked computation.

Thesis 13

Everyware acts at the scale of the room.

If even the body is subject to colonization by ubiquitous computing, the same is certainly true of the places we spend most of our time in and relate to most readily: architectural spaces of room scale.

As we've seen, most of the early experiments in ubicomp focused on the office environment, and supported the activities that people typically do there. But many aspects of these investigations were applicable to other kinds of spaces and pursuits as well, with processing deployed in features that most rooms have in common: walls, doorways, furniture, and floors.

If you want to provide services to people as they roam freely through a space, it's quite important to know exactly where they are and get some idea of what they might be doing. If their identities have not already been mediated by some other mechanism, it's also useful to be able to differentiate between them. So one strong current of development has concerned the floor beneath our feet, quite literally the perfect platform for sensors able to relay such information.

As far back as 1997, the Olivetti and Oracle Research Lab at the University of Cambridge had developed a prototype Active Floor, which monitored both weight distribution and the time variation of loads. Georgia Tech's Smart Floor followed, improving on Active Floor not least by its attempt to identify users by their "footfall signature," while the University of Florida's Gator Tech Smart House uses flooring throughout with impact sensors capable of detecting falls and reporting them to emergency services.
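
By way of illustration, here is a toy version of the sort of heuristic such flooring might apply: a sharp impact followed by a sustained, person-sized load on the same tile is flagged as a possible fall. The thresholds and data format are invented, not drawn from the Gator Tech system itself.

    def looks_like_fall(samples, impact_g=3.0, rest_kg=30.0, rest_seconds=10):
        """samples: list of (seconds, impact_in_g, load_in_kg) for one floor tile."""
        for i, (t, impact, _) in enumerate(samples):
            if impact < impact_g:
                continue
            # After the impact, does a person-sized load stay put?
            later = [s for s in samples[i + 1:] if s[0] - t <= rest_seconds]
            if later and all(load >= rest_kg for _, _, load in later):
                return True
        return False

    trace = [(0, 0.1, 0), (1, 4.2, 62), (5, 0.0, 61), (9, 0.0, 60)]
    if looks_like_fall(trace):
        print("notify monitoring service")   # stand-in for the emergency call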

Two current strains of thinking about smart flooring are represented by very different projects announced within the last year. On one hand, we have NTT DoCoMo's CarpetLAN prototype, which uses weak electrical fields to afford both wireless networking and positioning accurate down to about one meter of resolution. CarpetLAN bears all the marks of a highly sophisticated effort to understand what kinds of functionality can be practically subsumed in a floor.

And then there is inventor Leo Fernekes' Sensacell capacitive sensor grid system, developed in collaboration with architect Joakim Hannerz. It has to be said that Sensacell is not the world's most versatile system. It relies on changes in capacitance to detect presence and location and can therefore pick up conductive objects like the human body, but that's about it. Sensacell returns no load information, offers no way to differentiate between individuals, and certainly isn't designed to establish connections with mobile devices. It's not even necessarily fine-grained enough to distinguish between your transit of a space and the midnight errands of the family cat. And given that its presentation to date has focused on output in the (admittedly quite pretty) form of banks of embedded LEDs, Sensacell seems destined for applications in a relatively narrow swath of high-concept bars, lounges, and retail spaces.

But Sensacell has three big advantages over its predecessors: It's modular, it comes in sizes that conform to the square-foot grid actually used by contractors, and it is commercially available right now. What's more, the luminous cells can be integrated into vertical surfaces, even furniture, and their stream of output data can be jacked into just about any garden-variety PC. The pricing is on the high side, but not absurdly so, and will surely fall in the event of any large-scale production. For all of these reasons, Sensacell is accessible, within reach of the kind of tinkering ubihackers who may be key to the wider spread of everyware.

In addition to flooring, instrumented doorways have also begun to appear. Knowledge of door status can be very useful in context-aware applications—whether an office door is open or closed can imply something about the relative intensity with which the occupant is engaged in a task, while a change in state is generally a marker that a user is transitioning between one activity and another. But door sensors can also be used simply to count how many people enter or leave a given room. (Fire inspectors might want to take note.) California startup InCom's recent pilot program InClass aimed to cut down on teacher administrative time by doing just this, producing a tally of classroom attendance as students wearing RFID-equipped nametags passed beneath a transom-mounted reader.*

* Parents objected to the program on privacy grounds, and the system was withdrawn from operation after less than a month.
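
The logic of such a tally is almost trivially simple, which is part of what made the pilot attractive; a hypothetical sketch, with an invented roster and tag IDs:

    ROSTER = {"tag-017": "A. Chu", "tag-043": "B. Ortiz", "tag-101": "C. Okafor"}

    def tally_attendance(reads, roster=ROSTER):
        """reads: sequence of tag IDs seen at the doorway during the period."""
        present = {roster[tag] for tag in reads if tag in roster}
        return sorted(present), len(present)

    names, count = tally_attendance(["tag-043", "tag-017", "tag-043"])
    print(count, "present:", ", ".join(names))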

If the doorway produces both headcounts and inferences about behavior, and the floor is occasionally called upon to be everything from impact detector to transmission medium, walls have it relatively easy. Most ubiquitous-computing projects to date have treated the wall first and foremost as a large-scale display surface, with its use as a communication hub following from this.

This, of course, conforms to a venerable tradition in science fiction, but such ultraflat, ultrawide screens are now on the verge of practical reality. Motorola's Physical Science Research Laboratory recently presented sections of a prototype carbon nanotube screen measuring 160 cm on the diagonal and a single centimeter thick. If either Motorola or its competitors manage to produce nanotube displays at commercial scale, truly wall-spanning displays cannot be far off, although they're still probably years and not months away.

For some interested parties, this may seem like a long time to wait, given the wall-screen's centrality to their visions of the "digital home." In the more elaborate of such schemes, the wall becomes some combination of home theater, videophone, whiteboard, and family scratchpad—a site where downloaded media objects are delivered for consumption, a communication medium in its own right, and the place where other networked devices in the home are managed.

You can already buy appliances ostensibly designed with such distributed control in mind: the first generation of Internet-capable domestic appliances, typified by LG's suite of refrigerator, air conditioner, microwave, and washing machine.

Whatever their merits as appliances, however, they completely fail to capitalize on their nature as networked devices capable of communicating with other networked devices, which tends to rule out the more interesting sorts of interaction that might otherwise be envisioned. Users address LG's appliances one by one, via a superficially modified but otherwise entirely conventional Windows interface; the advertised functionality is limited to use cases that must have struck even the marketing department as forced. The webcam-equipped refrigerator, for example, lets family members send each other video memos, while the air conditioner offers new patterns of airflow for download, presumably as one would download polyphonic ring tones. The "Internet" microwave is even worse, forcing a user to connect an external PC to the Web to download recipes.

True utility in the digital room awaits a recognition that the networked whole is distinctly more than the sum of its parts. In contrast with such piecemeal conceptions, there have been others that approached the ubiquitous systems operating in a space as a unified whole.

The MIT Tangible Media Group's 1998 prototype ambientROOM was one such pioneering effort. Built into a free-standing Steelcase office cubicle of around fifty square feet, ambientROOM was nothing if not an exercise in holism: The entire space was considered as an interface, using lighting and shadow, sound cues, and even the rippled reflection of light on water to convey activity meaningful to the occupant. The sound of birdsong and rainfall varied in volume with some arbitrary quantity set by a user—both the "value of a stock portfolio" and the "number of unread e-mail messages" were proposed at the time*—while "active wallpaper" took on new qualities in reaction to the absence or presence of people in a nearby conference room.

* Conveying the quantity of unread e-mail is apparently an eternal goal of such systems, while explicitly calling out one's stock portfolio as something to be tracked by the minute seems to have been an artifact of the go-go, day-trading era in which ambientROOM was designed.
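
The underlying mapping is simple enough to sketch: some quantity of interest is scaled into the intensity of an ambient cue. The ranges below are invented; ambientROOM's own mappings were considerably more nuanced.

    def rainfall_volume(unread, quiet_at=0, loud_at=50):
        """Scale unread-message count into a 0.0..1.0 playback volume."""
        clamped = max(quiet_at, min(unread, loud_at))
        return (clamped - quiet_at) / (loud_at - quiet_at)

    for unread in (0, 12, 80):
        print(unread, "unread ->", round(rainfall_volume(unread), 2))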

Projects like ambientROOM begin to suggest how systems made up of media hubs, wall-screens, networked refrigerators, and all the other appurtenances of room-scale everyware might work when designed in recognition of the person at their heart.

Some of the first to get a taste of this in real life have been high-margin frequent travelers. Since mid-2005, rooms at the Mandarin Oriental in New York have loaded preference files maintained on a central server when the hotel's best customers check in, customizing settings from the shades to the thermostat, lining up entertainment options, and loading frequently dialed numbers into the phone.* "Digital home" solutions that propose to do many of the same things in a domestic setting can be expected to reach the market in the near term, though whether they'll afford experiences of reasonably seamless ubiquity is debatable.

And so we see it cropping up again, here at the scale of the room, this pattern that may by now seem familiar to you: Our discussions of everyware have much less to do with some notional future than they do with a blunt inventory of products already finding their way to market.

* As you may have suspected, yes, the hotel does keep track of what you're watching. The potential for embarrassment is real, and is something we'll deal with extensively in Sections 6 and 7.

Thesis 14

Everyware acts at the scale of the building.

In this thesis and the next, which concerns the extension of everyware into public space, we reach scales where the ubiquitous deployment of processing starts to have consequences beyond ones we can easily envision. When we find networked intelligence operating at the scale of whole buildings, it doesn't even necessarily make sense to speak of how the everyware experience diverges from that of personal computing—these are places that people using PCs have rarely if ever been able to reach.

The idea of a building whose program, circulation, and even structure are deeply molded by flows of digital information is nothing new. As a profession, architecture has been assuming that this is going to happen for quite a while now. The design press of the 1990s was saturated with such visions. Anyone who regularly read Metropolis or Wallpaper or ID in those years will likely remember a stream of blobjectified buildings, all nurbly and spliny, with tightly-kerned Helvetica Neue wrapped around the corners to represent "interactive surfaces," and images of Asian women sleekly clad in Jil Sander Photoshopped into the foreground to connote generic urban futurity.* But the role played by networked information in such projects mostly seemed to mean some variation on Web-on-the-wall.

* Architects: I kid. I kid, because I love.

For all the lovely renderings, we have yet to see the appearance of buildings structurally modified in any significant way by the provision of real-time, networked information.

Yet since the 1970s, it has been a commonplace of commercial architecture and engineering, at least, that information technology allows impressive efficiencies to be realized when incorporated in the design of buildings. It is now rare for a new, premium commercial building to break ground without offering some such provision.

Circulation and delivery of services in so-called "smart buildings" can be tuned in real time, in pursuit of some nominal efficiency profile. Instead of stupidly offering an unvarying program of light, heat, and air conditioning, energy management control systems (EMCS) infer appropriate environmental strategies from the time of day and of year, solar gain, and the presence or absence of occupants. And security and custodial staffs are assisted in their duties by the extension of computational awareness throughout the structure. It would be a stretch to call such systems "routine," but only just barely.
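
A caricature of the kind of rule an EMCS applies, with invented numbers standing in for what is, in a real building, a much richer model:

    def hvac_setpoint(hour, solar_gain_kw, occupied):
        if not occupied:
            return {"mode": "setback", "temp_c": 16.0, "lights": 0.0}
        temp = 21.0
        if solar_gain_kw > 2.0:      # bright facade already warming the space
            temp -= 0.5
        lights = 0.4 if 10 <= hour <= 15 and solar_gain_kw > 1.0 else 0.9
        return {"mode": "comfort", "temp_c": temp, "lights": lights}

    print(hvac_setpoint(hour=13, solar_gain_kw=2.5, occupied=True))
    print(hvac_setpoint(hour=2, solar_gain_kw=0.0, occupied=False))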

Other computationally-enhanced building systems are becoming increasingly common, like Schindler Elevator's Miconic 10, which optimizes load by aggregating passenger groups based on where they're going. Instead of the time-honored principle of pressing an "up" button, and then waiting in a gaggle with all the other upbound passengers, the Miconic 10's clever load-optimization algorithm matches people bound for the same floor with the elevator cab currently offering the shortest wait time. (It's simpler to do than it is to explain.) Schindler claims the elevators make each individual trip 30 percent faster, and also allow a building to handle a proportionally increased flow of visitors.*

* What gets lost, though, in all of this—as with so many digitally "rationalized" processes—is the opportunity for serendipitous interaction that happens when people from different floors share this particular forty-five-second interval of the day. Isn't the whole cherished trope of the "elevator pitch" based around the scenario of a kid from the mailroom finding him- or herself willy-nilly sharing a cab with the CXO types headed for the executive floors?
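
For the curious, the grouping idea is easy to sketch: riders declare a destination before boarding, and each is assigned to whichever cab can absorb that stop most cheaply, which naturally pools passengers bound for the same floor. The cost model below is deliberately naive and is not Schindler's algorithm.

    def assign_cab(requested_floor, cabs):
        """cabs: dict of cab name -> floors that cab is already committed to."""
        def est_wait(name):
            stops = set(cabs[name]) | {requested_floor}
            return len(stops) * 15          # naive: a fixed cost per stop
        best = min(cabs, key=est_wait)
        if requested_floor not in cabs[best]:
            cabs[best].append(requested_floor)
        return best

    cabs = {"A": [12, 14], "B": [3], "C": []}
    for floor in (14, 7, 14):
        print("floor", floor, "-> cab", assign_cab(floor, cabs))
    # Note how the second passenger bound for 14 rides along with the first.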

When such systems are coupled to the relational, adaptive possibilities offered up by everyware in its other aspects, we start to get into some really interesting territory. The Arch-OS "operating system for architecture," for example, a project of the School of Computing, Communications and Electronics at the University of Plymouth, suggests some of the possible directions. As its Web site explains, the project aims to capture the state of a building in real time from inputs including "building energy management systems...the flow of people and social interactions, ambient noise levels and environmental conditions," and return that state to public awareness through a variety of visualizations.

While there's ample reason to believe that such ambient displays of information relating to building systems will become both prevalent and useful, most of the Arch-OS projects to date lean toward the artistic. While it sounds fascinating, for example, it's unclear from the project's documentation whether the "psychometric architecture" project—the recording of activity in a building throughout the day, for playback on its outer envelope at night—was ever attempted. The generative soundscapes and abstract visualizations on hand do seem to mesh well, though, with other recent efforts to equip the outer surfaces of a building with interactive media.

Consider dECOi's 2003 Aegis Hyposurface, a continuously-transformable membrane that allows digital input—whether from microphone, keyboard, or motion sensor—to be physically rendered on the surface itself, showing up as symbols, shapes, and other deformations. Its creators call Aegis "a giant sketchpad for a new age," and while its complexity has kept it from being produced as anything beyond a prototype, it at least was explicitly designed to respond to the kind of inputs Arch-OS produces.

Meanwhile, similar systems, which have actually been deployed commercially, fail to quite close the loop. UNStudio's recent digital facade for Seoul's high-end Galleria department store, developed in association with Arup Engineering and lighting designer Rogier van der Heide, is one such project. The architects wrapped a matrix of LED-illuminated disks around what used to be a drab concrete box, turning the whole surface into a field of ever-renewing data and color. It's a success—it currently bathes the Apgujeong district with gorgeous washes of light nightly—and yet the images flowing across the surface seem to cry out for some generative connection to the inner life of the building.

But already a vanguard few are wrestling with challenges beyond the mere display of information, exploring the new architectural morphologies that become possible when computation is everywhere in the structure itself. Los Angeles–based architect Peter Testa has designed a prototype building called the Carbon Tower: an all-composite, forty-story high-rise knit, braided and woven from carbon fiber.

Unlike conventional architecture, the Carbon Tower dispenses with all internal bracing; it is able to do so not merely because of the mechanical properties of its textile exoskeleton, but because of the way that exoskeleton is managed digitally. As Testa envisions it, the Carbon Tower exhibits "active lateral bracing": sensors and actuators embedded in its structural fiber cinch the building's outer skin in response to wind load and other dynamic forces.
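
In control terms, this is a feedback loop: read the strain and wind sensors, compute a correction, drive the actuators that tension the skin. A toy proportional controller, with invented units and gains, gives the flavor:

    def bracing_step(wind_load_kN, sway_m, tension_now_kN,
                     target_sway_m=0.05, gain=200.0):
        """One tick of a proportional controller over skin tension."""
        error = sway_m - target_sway_m
        correction = gain * error            # more sway -> cinch harder
        return max(0.0, tension_now_kN + correction + 0.1 * wind_load_kN)

    tension = 50.0
    for wind, sway in [(120, 0.12), (180, 0.20), (60, 0.06)]:
        tension = bracing_step(wind, sway, tension)
        print("wind", wind, "kN, sway", sway, "m -> skin tension", round(tension, 1), "kN")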

And if building morphology can be tuned in response to environmental inputs, who's to say that those inputs should be limited to the weather? Arch-OS-style polling of foot traffic and social interactions, coupled to output in the form of structural changes, can take us in some genuinely novel directions. Something resembling the fondly remembered and much-beloved Archigram projects of the 1960s such as Instant City, Tuned Suburb, and the Control and Choice Dwelling may finally be realized—or so fans of the visionary collective can hope. When compared to the inert structures we now inhabit, everyware-age architecture—for better or worse—will almost certainly be weirder.

Thesis 15

Everyware acts at the scale of the street and of public space in general.

At present, the most often-pursued applications of everyware at scales beyond the individual building concern wayfinding: knowing where in the world you are and how to get where you're going.

We've become familiar with the idea that dashboard navigation displays using the Global Positioning System (GPS) will help us figure these things out. But GPS is a line-of-sight system—you need an unobstructed view of at least four satellites currently above the horizon in order for it to fix your position—so it doesn't work indoors, in tunnels, or in places where there's lots of built-up density. This makes GPS a fairly poor way of finding your way around places like Manhattan, although it seems to work satisfactorily in lower-density conurbations like Tokyo.

Other systems that might help us find our way around the city have their own problems. Schemes that depend on tracking your various personal devices by using the local cellular network can't offer sufficient precision to really be useful as a stand-alone guide. And while the kinds of sensor grids we've discussed in the context of indoor spaces can be accurate to sub-meter tolerances, it would clearly be wildly impractical to deploy them at the scale of a whole city.

But what if the city itself could help you find your way? In 1960, in a landmark study entitled The Image of the City, MIT professor and urbanist Kevin Lynch explored a quality of the city he called "legibility." How do people read a city, in other words? What sorts of features support their attempts to figure out where they are, which paths connect them to a given destination, and how best to actually go about getting there?

Lynch identified a few systems that have historically helped us find our way in the city: signage, of course, but also explicit maps, even street numbering conventions. Such systems function best in a city that itself offers distinctly characterized districts, clearly identifiable paths between them, and above all, the kind of highly visible landmarks that allow people to orient themselves from multiple vantage points, such as Manhattan's Empire State Building, Seoul's Namsan Tower, and Berlin's Fernsehturm. Other kinds of landmarks play a role, too: prominent, easy-to-specify places—the clock in Grand Central Station—where arrivals are readily visible to one another.

All of these features are now subject to computational enhancement. Street furniture such as lamp posts, signage, even manhole covers can provide the urban sojourner with smart waypoints; Tokyo's Shinjuku ward is currently tagging some 10,000 lamp posts with RFID panels that give visitors information on nearby public toilets, subway entrances, and other accommodations.

Meanwhile, maps themselves can offer dynamic, real-time information on position and direction, just as their automotive equivalents do. At Lancaster University, in the UK, just such a prototype public navigation system—dubbed GAUDI, for "grid of autonomous displays"—helps visitors find their way around campus, using adaptive displays as directional signs.

The system's initial release is intended for use as temporary signage for events—lectures, academic conferences, concerts, and the like. Upon being switched on, each portable GAUDI panel queries the navigational server for its current location. It then displays the name and direction of, and approximate distance to, the selected destination. Moving a GAUDI display from one place to another automatically updates it; an arrow that points left to the destination will reverse when placed on the opposite wall.
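
The arithmetic each panel performs is modest; a sketch, assuming a flat campus grid in meters and invented coordinates, shows how the same destination yields a different arrow when the panel is moved and reoriented:

    import math

    def signage(panel_xy, panel_facing_deg, dest_name, dest_xy):
        dx, dy = dest_xy[0] - panel_xy[0], dest_xy[1] - panel_xy[1]
        distance = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dx, dy)) % 360
        relative = (bearing - panel_facing_deg) % 360
        if relative < 45 or relative > 315:
            arrow = "ahead"
        elif relative < 180:
            arrow = "right"
        else:
            arrow = "left"
        return f"{dest_name}: {arrow}, about {round(distance)} m"

    # Same destination, same panel, but reoriented to face the opposite way:
    print(signage((0, 0),   0, "Lecture Theatre 1", (30, 40)))
    print(signage((0, 0), 180, "Lecture Theatre 1", (30, 40)))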

Nor is GAUDI limited to a selection of fixed campus landmarks. It can direct visitors to that wide variety of campus events that are regular but subject to frequent changes in location—a professor's office hours, or a meeting that has outgrown the auditorium in which it was originally scheduled.

It's easy to see how something like GAUDI, suitably ruggedized and secured, could transform the experience of citying, especially when combined with other locational and directional indicators carried on the body or integrated into clothing. Taken together, they would render the urban domain legible in a way Kevin Lynch could not have imagined in 1960. In such a place, locations self-identify, notices of congestion immediately generate alternative paths to the destination, and services announce themselves. (Anyone who's ever spent the day on foot in one of Earth's great cities will appreciate the prospect of knowing where the nearest public restroom is, even at what time it was last cleaned.)

Information architect Peter Morville calls such interventions in the city "wayfinding 2.0"—an aspect of the emerging informational milieu he thinks of as "ambient findability," in which a combination of pervasive devices, the social application of semantic metadata, and self-identifying objects renders the built environment (and many other things besides) effectively transparent to inquiry.

But as we shall see in some detail, everyware also functions as an extension of power into public space, whether that space be streetscape, commons, station, or stadium—conditioning it, determining who can be there and what services are available to each of them. More deeply still, there are ways in which the deployment of a robust everyware will connect these places with others previously regarded as private. Our very notions of what counts as "public" cannot help but be changed in the aftermath.

Thesis 16

Everyware can be engaged inadvertently, unknowingly, or even unwillingly.

I hope it's obvious by now that one of the most significant ways in which everyware diverges from our experience of personal computing is that in all of the scenarios we've been exploring, it can be engaged even in the absence of an active, conscious decision to do so.

There are at least three modes in which this lack of agency becomes relevant. The first is the situation of inadvertency: I didn't mean to engage this system. I didn't mean to broadcast my current location to anybody who asks for it. I meant to do something else—perhaps set it such that only my friends and family could find me—and I've forgotten the command that would limit this function or disable it entirely.

There's also the case where everyware is invoked unknowingly: I wasn't aware of this system's extent, domain of operation, capabilities, or ownership. I had no idea that this store tracked my movements through it and would mail me coupons for products I stood next to for more than ten seconds but didn't purchase. I didn't know that this toilet would test my urine for the breakdown products of opiates and communicate its findings to my doctor, my insurers, or law-enforcement personnel. (The consequences of unknowing engagement also affect children, the developmentally disabled, the mentally ill, and others who would be unable to grasp a ubiquitous system's region of influence and their exposure to same.)

Finally, there is the case where the user is unwilling: I don't want to be exposed to this system, but I have been compelled by simple expedience, by social convention, by exhaustion, by force of regulation or law to accept such an exposure. I don't want to wear an RFID nametag, but my job requires me to. I'd rather not have my precise weight be a matter of public record, but the only way from my desk to the bathroom has load sensors built into the flooring. I know that the bus will query my ID and maybe I don't want that, but the hour is late; despite my disinclination, I don't have the energy to find an alternative.

How different this is from most of the informational systems we're accustomed to, which, for the most part, require conscious action even if we are to betray ourselves. The passive nature of our exposure to the networked sensor grids and other methods of data collection implied by everyware implicates us whether we know it or not, want it or not. In such a regime, information is constantly being gathered and acted upon, sensed and responded to, archived and retrieved, in ways more subtle than those that function at present.

But inadvertent, unknowing, or unwilling engagement of everyware means more than a simple disinclination to be sensed or assayed. There is also our exposure to the output of ubiquitous systems, to the regularized, normalized, optimized courses of action that so often result from the algorithmic pursuit of some nominal profile. Maybe we'd prefer the room to remain refreshingly crisp overnight, not "comfortably" warm; maybe we actually get a charge from the sensation of fighting upstream against the great surge of commuters emerging from the station at 8:45 each morning. Or maybe we simply want to revel in the freedom to choose such things, however uncomfortable, rather than have the choices made for us.

Right now, these aspects of our environment are freely variable, not connected to anything else except in the most tenuous and existential way. Where networked information-processing systems are installed, this is no longer necessarily the case.

images Thesis 17

The overwhelming majority of people experiencing everyware will not be knowledgeable about information technology.

When computing was something that took place behind the walls of corporate data centers and campus Comp Sci labs, its user base was demographically homogeneous. There were simply not that many ways in which a computer user of the mid-1960s could vary: In North America, anyway, we know that they were overwhelmingly male, overwhelmingly white, overwhelmingly between the ages of 18 and, say, 40 at the outside; they were, by definition, highly educated.

So it was entirely reasonable for the designers of computing systems of the time to suppose that their users shared a certain skill set, even a certain outlook. In fact, the homogeneity went still deeper. The intricacies of arcane languages or the command-line interface simply didn't present a problem in the early days of computing, because the discourse was founded on a tacit assumption that every user was also a developer, a programmer comfortable with the intellectual underpinnings of computer science.

Every stage in the material evolution of computing, though, has undermined this tidy equation, starting with the introduction of time-sharing systems in the mid-1960s. The practical effect of time sharing was to decouple use of the machine from physical and, eventually, cultural proximity. Relatively cheap and robust remote terminals found their way into niches in which mainframes would never have made sense either economically or practically, from community centers to elementary schools, multiplying those mainframes' reach a hundredfold. It was no longer safe to assume that a user was a fungibly clean-cut, by-the-numbers CS major.

The safety of any such assumption was further damaged by the advent of the personal computer, a device which by dint of its relative portability, affordability, and autonomy from regulation gathered grade-schoolers and grandmothers as users. The audience for computing has only gotten larger and more diverse with every reduction in the PC's size, complexity, and price. The apotheosis of this tendency to date is the hand-cranked "$100 laptop" currently being developed by the MIT Media Lab, intended for the hundreds of millions of semiliterate or even nonliterate potential users in the developing world.

But such assumptions will be shattered completely by everyware. How could it be otherwise? Any technology that has been so extensively insinuated into everyday life, at so many scales, in space both public and private, cannot help but implicate the greatest possible number and demographic diversity of users. Only a small minority of them will have any significant degree of competence with information technology (although it's also true that tropes from the technological realm are increasingly finding their way into mass culture). Still more so than the designers of personal computers, smartphones, or PDAs, those devising ubiquitous systems will have to accommodate the relative technical non-sophistication of their enormous user base.

images Thesis 18

In many circumstances, we can't really conceive of the human being engaging everyware as a "user."

Traditionally, the word we use to describe the human person engaged in interaction with a technical system is "user": "user-friendly," "user-centered." (More pointedly, and exposing some of the frustration the highly technically competent may experience when dealing with nonspecialists: "luser.")

Despite this precedent, however, the word stumbles and fails in the context of everyware. As a description of someone encountering ubiquitous systems, it's simply not accurate.

At the most basic level, one no more "uses" everyware than one would a book to read or the floor to stand on. For many of the field's originators, the whole point of designing ubiquitous systems was that they would be ambient, peripheral, and not focally attended to in the way that something actively "used" must be.

Perhaps more importantly, "user" also fails to reflect the sharply reduced volitionality that is so often bound up with such encounters. We've already seen that everyware is something that may be engaged by the act of stepping into a room, so the word carries along with it the implication of an agency that simply may not exist.

Finally, exactly because of its historical pedigree in the field, the term comes with some baggage we might well prefer to dispense with. As HCI researcher Jonathan Grudin has argued, because "the computer is assumed [and] the user must be specified" even in phrases like "user-centered design," such terminology "retains and reinforces an engineering perspective" inimical to our present concerns.

I think we might be better served by a word that evoked the full, nuanced dimensions of what is experienced by someone encountering everyware. The trouble is that most other candidate words succumb to the same trap that ensnares "user." They also elide one or more important aspects of this person's experience.

Of the various alternative terms that might be proposed, there is one that captures two aspects of the everyware case that happen to be in real tension with one another, both of which are necessary to account for: "subject."

On the one hand, a subject is someone with interiority, with his or her own irreducible experience of the world; one has subjectivity. But interestingly enough, we also speak of a person without a significant degree of choice in a given matter as being subject to something: law, regulation, change. As it turns out, both senses are appropriate in describing the relationship between a human being and the types of systems we're interested in.

But a moment's consideration tells us that "subject" is no good either. To my ears, anyway, it sounds tinny and clinical, a word that cannot help but conjure up visions of lab experiments, comments inscribed on clipboards by white-coated grad students. I frankly cannot imagine it being adopted for this purpose in routine speech, either by professionals in the field or by anyone else.

So we may be stuck with "user" after all, at least for the foreseeable future, no matter how inaccurate it is. Perhaps the best we can hope for is to remain acutely mindful of its limitations.

images Thesis 19

Everyware is always situated in a particular context.

Nothing takes place in a vacuum. As former PARC researcher Paul Dourish observes, in his 2001 study Where the Action Is, "interaction is intimately connected with the settings in which it occurs." His theory of "embodied interaction" insists that interactions derive their meaning by occurring in real time and real space and, above all, among and between real people.

In Dourish's view, the character and quality of interactions between people and the technical systems they use depend vitally on the fact that both are embedded in the world in specific ways. A video chat is shaped by the fact that I'm sitting in this office, in other words, with its particular arrangement of chair, camera, and monitor, and not that one; whether a given gesture will seem to be an appropriate mapping to a system command will seem different depending on whether the user is Sicilian or Laotian or Senegalese.

This seems pretty commonsensical, but it's something that by and large we've been able to overlook throughout the PC era. This is because personal computing is something that we've historically conceived of as being largely independent of context.

In turning on your machine, you enter the nonspace of its interface—and that nonspace is identical whether your laptop is sitting primly atop your desk at work or teetering atop your knees on the library steps. Accessing the Web through such interfaces only means that the rabbit hole goes deeper; as William Gibson foresaw in the first few pages of Neuromancer, it really is as if each of our boxes is a portal onto a "consensual hallucination" that's always there waiting for us. No wonder technophiles of the early 1990s were so enthusiastic about virtual reality: it seemed like the next logical step in immersion.

By instrumenting the actual world, though, as opposed to immersing a user in an information-space that never was, everyware is something akin to virtual reality turned inside out. So it matters quite a lot when we propose to embed functionality in all the surfaces the world affords us: we find ourselves deposited back in actuality with an almost-audible thump, and things work differently here. If you want to design a system that lets drive-through customers "tap and go" from the comfort of their cars, you had better ensure that the reader is within easy reach of a seated driver; if your building's smart elevator system is supposed to speed visitor throughput, it probably helps to ensure that the panel where people enter their floors isn't situated in a way that produces bottlenecks in the lobby.

Interpersonal interactions are also conditioned by the apparently trivial fact that they take place in real space. Think of all of the subtle, nonverbal cues we rely upon in the course of a multi-party conversation and how awkward it can be when those cues are stripped away, as they are in a conference call.

Some ubiquitous systems have made attempts at restoring these cues to mediated interactions—one of Hiroshi Ishii's earlier projects, for example, called ClearBoard. ClearBoard attempted to "integrate interpersonal space and shared workspace seamlessly"; it was essentially a shared digital whiteboard, with the important wrinkle that the image of a remote collaborator was projected onto it, "behind" what was being drawn on the board itself.

Not only did this allow partners working at a distance from one another to share a real-time workspace, it preserved crucial indicators like "gestures, head movements, eye contact, and gaze direction"—all precisely the sort of little luxuries that do so much to facilitate communication in immediate real space and that are so often lacking in the virtual.

A sensitively designed everyware will take careful note of the qualities our experiences derive from being situated in real space and time. The more we learn, the more we recognize that such cues are more than mere niceties—that they are, in fact, critical to the way we make sense of our interactions with one another.

images Thesis 20

Everyware unavoidably invokes the specter of multiplicity.

One word that should bedevil would-be developers of everyware is "multiple," as in multiple systems, overlapping in their zones of influence; multiple inputs to the same system, some of which may conflict with each other; above all, multiple human users, each equipped with multiple devices, acting simultaneously in a given space.

As we've seen, the natural constraints on communication between a device and its human user imposed by a one-to-one interaction model mean that a PC never has to wonder whether I am addressing it, or someone or -thing else in our shared environment. With one application open to input at any given time, it never has to parse a command in an attempt to divine which of a few various possibilities I might be referring to. And conversely, unless some tremendously processor-intensive task has monopolized it for the moment, I never have to wonder whether the system is paying attention to me.

But the same thing can't really be said of everyware. The multiplicity goes both ways, and runs deep.

Perhaps my living room has two entirely separate and distinct voice-activated systems—say, the wall screen and the actual window—to which a command to "close the window" would be meaningful. How are they to know which window I mean?

Or maybe our building has an environmental control system that accepts input from personal body monitors. It works just fine as long as there's only one person in the room, but what happens when my wife's monitor reports that she's chilly at the same moment that mine thinks the heat should be turned down?

It's not that such situations cannot be resolved. Of course they can be. It's just that designers will have to explicitly anticipate such situations and devise rules to address them—something that gets exponentially harder when wall screen and window, shirt telemetry and environmental control system, are all made by different parties.
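To make that design burden concrete, here is a minimal sketch in Python of the kind of arbitration rule someone would have to write for the dueling-body-monitors case above. Everything in it—the policy, the data structure, the names—is my own invention for illustration, not a description of any actual product; the point is simply that somebody must decide, explicitly and in advance, whose comfort wins.

```python
from dataclasses import dataclass

@dataclass
class ComfortReport:
    """One occupant's body-monitor reading (hypothetical schema)."""
    occupant: str
    feels_cold: bool   # the monitor infers its wearer is chilly
    feels_warm: bool   # the monitor infers its wearer is overheated

def arbitrate_setpoint(current_setpoint, reports):
    """One possible, deliberately debatable policy: never actively cool
    while anyone reports being cold; otherwise follow the majority."""
    cold = sum(r.feels_cold for r in reports)
    warm = sum(r.feels_warm for r in reports)
    if cold and warm:
        return current_setpoint + 0.5   # conflict: favor the cold occupant, modestly
    if cold:
        return current_setpoint + 1.0
    if warm:
        return current_setpoint - 1.0
    return current_setpoint

# My wife's monitor says she's chilly; mine thinks the heat should come down.
reports = [ComfortReport("her", feels_cold=True, feels_warm=False),
           ComfortReport("me", feels_cold=False, feels_warm=True)]
print(arbitrate_setpoint(21.0, reports))   # -> 21.5
```

The particular rule doesn't matter; what matters is that some such rule has to be written down and agreed upon before the conflict ever arises.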

Multiplicity in everyware isn't just a user-experience issue, either. It's a question that goes directly to the management and allocation of computational resources, involving notions of precedence. Given the resources available locally, which of the many running processes present gets served first? What kind of coordinating mechanisms become necessary?

The situation is complicated still further by the fact that system designers cannot reasonably foresee how these multiple elements will behave in practice. Ideally, a pervasive household network should be able to mediate not merely among its own local, organic resources, but whatever transient ones are brought into range as well.

People come and go, after all, with their personal devices right alongside them. They upgrade the firmware those devices run on, or buy new ones, and those all have to work too. Sometimes people lose connectivity in the middle of a transaction; sometimes their devices crash. As Tim Kindberg and Armando Fox put it, in their 2002 paper "System Software for Ubiquitous Computing," "[a]n environment can contain infrastructure components, which are more or less fixed, and spontaneous components based on devices that arrive and leave routinely." Whatever infrastructure is proposed to coordinate these activities had better be able to account for all of that.

Kindberg and Fox offer designers a guideline they call the Volatility Principle: "you should design ubicomp systems on the assumption that the set of participating users, hardware and software is highly dynamic and unpredictable. Clear invariants that govern the entire system's execution should exist."

In other words, no matter what kind of hell might be breaking loose otherwise, it helps to have some kind of stable arbitration mechanism. But with so many things happening at once in everyware, the traditional event queue—the method by which a CPU allocates cycles to running processes—just won't do. A group of researchers at Stanford (including Fox) has proposed a replacement better suited to the demands of volatility: the event heap.

Without getting into too much detail, the event heap model proposes that coordination between heterogeneous computational processes be handled by a shared abstraction called a "tuple space." An event might be a notification of a change in state, like a wireless tablet coming into the system's range, or a command to perform some operation; any participating device can write an event to the tuple space, read one out, or copy one from it so that the event in question remains available to other processes.

In this model, events expire after a specified elapse of time, so they're responded to either immediately, or (in the event of a default) not at all. This keeps the heap itself from getting clogged up with unattended-to events, and it also prevents a wayward command from being executed so long after its issuance that the user no longer remembers giving it. Providing for such expiry is a canny move; imagine the volume suddenly jumping on your bedside entertainment system in the middle of the night, five hours after you had told it to.
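The Stanford papers remain the authority on how the event heap actually behaves; what follows is only a toy sketch of my own, in Python, of the two properties just described—any participant can write, read, or remove events from a shared space, and events left unclaimed past their expiry simply vanish.

```python
import time

class EventHeap:
    """Toy tuple-space-style coordinator (illustrative only, not the Stanford code)."""
    def __init__(self):
        self._events = []   # each event: a dict of fields plus an expiry timestamp

    def post(self, ttl_seconds, **fields):
        """Any participating device writes an event into the shared space."""
        self._events.append({"expires": time.time() + ttl_seconds, **fields})

    def _purge(self):
        now = time.time()
        self._events = [e for e in self._events if e["expires"] > now]

    def read(self, **pattern):
        """Copy a matching event, leaving it available to other processes."""
        self._purge()
        for e in self._events:
            if all(e.get(k) == v for k, v in pattern.items()):
                return e
        return None

    def take(self, **pattern):
        """Remove and return a matching event, so nothing else acts on it."""
        event = self.read(**pattern)
        if event is not None:
            self._events.remove(event)
        return event

heap = EventHeap()
heap.post(ttl_seconds=5.0, kind="command", target="volume", value="up")
print(heap.take(kind="command", target="volume"))   # claimed in time: the command runs
# Five hours later, the same call would return None—the event has long since expired,
# and the bedside system stays mercifully silent.
```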

The original Stanford event heap implementation, called iRoom, successfully coordinated activities among several desktop, laptop, and tablet Windows PCs, a Linux server, Palm OS, and Windows CE handheld devices, multiple projectors, and a room lighting controller. In this environment, moving a pointer from an individual handheld to a shared display was easily achieved, while altering a value in a spreadsheet on a PDA updated a 3D model on a machine across the room. Real-world collaborative work was done in iRoom. It was "robust to failure of individual interactors," and it had been running without problems for a year and a half at the time the paper describing it was published.

It wasn't a perfect solution—the designers foresaw potential problems emerging around scalability and latency—but iRoom and the event heap model driving it were an important first response to the challenges of multiplicity that the majority of ubiquitous systems will eventually be forced to confront.

images Thesis 21

Everyware recombines practices and technologies in ways that are greater than the sum of their parts.

The hundred-billion-dollar question: do the products and services we've been discussing truly constitute a system, a continuous fabric of computational awareness and response?

Some—including, it must be said, some of the most knowledgeable, prominent, and respected voices in academic ubicomp—would say that they clearly do not. Their viewpoint is that originators such as Mark Weiser never intended "ubiquitous" to mean anything but locally ubiquitous: present everywhere "in the woodwork" of a given, bounded place, not literally circumambient in the world. They might argue that it's obtuse, disingenuous, or technically naive to treat artifacts as diverse as a PayPass card, a SenseWear patch, a Sensacell module, a Miconic 10 elevator system, and a GAUDI display as either epiphenomena of a deeper cause or constituents of a coherent larger-scale system.

If I agreed with them, however, I wouldn't have bothered writing this book. All of these artifacts treat of nothing but the same ones and zeroes, and in principle there is no reason why they could not share information with each other. Indeed, in many cases there will be—or will appear to be—very good reasons why the streams of data they produce should be merged with the greater flow. I would go so far as to say that if the capacity exists, it will be leveraged.

To object that a given artifact was not designed with such applications in mind is to miss the point entirely. By reconsidering them all as network resources, everyware brings these systems into a new relationship with each other that is decidedly more than the sum of their parts. In the sections that follow, I will argue that, however discrete such network-capable systems may be at their design and inception, their interface with each other implies a significantly broader domain of action—a skein of numeric mediation that stretches from the contours of each individual human body outward to satellites in orbit.

I will argue, further, that since the technical capacity to fuse them already exists, we have to treat these various objects and services as instantiations of something larger—something that was already by 1990 slouching toward Palo Alto to be born; that it simply makes no sense to consider a biometric patch or a directional display in isolation—not when output from the one can furnish the other with input; and that if we're to make sense of the conjoined impact of these technologies, we have to attend to the effects they produce as a coordinated system of articulated parts.

images Thesis 22

Everyware is relational.

One of the more significant effects we should prepare for is how fiercely relational our lives will become. In a world saturated with everyware, responses to the actions we take here and now will depend not only on our own past actions, but also on an arbitrarily large number of other inputs gathered from far afield.

At its most basic, all that "relational" means is that values stored in one database can be matched against those from another, to produce a more richly textured high-level picture than either could have done alone. But when the number of available databases on a network becomes very large, the number of kinds of facts they store is diverse, and there are applications able to call on many of them at once, some surprising things start to happen.
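As a purely hypothetical illustration—the stores, fields, and thresholds below are all invented—consider how little code it takes to match records from two independently maintained data sets and arrive at a picture neither could support on its own:

```python
# Two data sets owned and maintained separately (entirely fictional).
purchases = [
    {"customer_id": 117, "item": "espresso machine", "price": 349.00},
]
dwell_times = [
    {"customer_id": 117, "aisle": "premium coffee beans", "seconds": 95},
    {"customer_id": 117, "aisle": "greeting cards", "seconds": 12},
]

def composite_picture(purchases, dwell_times, lingered_threshold=30):
    """Join the two sources on a shared key to build a richer profile."""
    picture = {}
    for p in purchases:
        entry = picture.setdefault(p["customer_id"], {"bought": [], "lingered": []})
        entry["bought"].append(p["item"])
    for d in dwell_times:
        if d["seconds"] > lingered_threshold:   # an arbitrary "meaningful pause"
            entry = picture.setdefault(d["customer_id"], {"bought": [], "lingered": []})
            entry["lingered"].append(d["aisle"])
    return picture

# Customer 117 bought a machine and lingered by the beans: a coupon practically writes itself.
print(composite_picture(purchases, dwell_times))
```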

Consider the price of your morning cup of coffee. At present, as any businessperson will tell you, retail pricing is one of the black arts of capitalism. As with any other business, a coffee retailer bases its pricing structure on a calculus designed to produce a profit after accounting for all of the various costs involved in production, logistics, marketing, and staffing—and in many cases this calculus is best described as an educated guess.

The calculus is supposed to find a "sweet spot" that balances two concerns that must be brought together to conclude a sale: the minimum the retailer can afford to charge for each cup and still make a profit, and the maximum you're willing to pay for that cup.

Famously, though, there's many a slip 'twixt cup and lip. For one thing, both values are constantly changing—maybe as often as several times a day. The first fluctuates with the commodities market, transportation costs, and changes in wage laws; the second responds to moves made by competitors as well as factors that are far harder to quantify, like your mood or the degree of your craving for caffeine. Nor does any present pricing model account for things like the variation in rent between different retail locations.

There's simply no practical way to capture all of this variability, and so all these factors get averaged out in the formulation of pricing structures. However expertly devised, they're always something akin to a stab in the dark.

But remember the event heap? Remember how it allowed a value entered here to affect a process unfolding over there? A world with ubiquitous inputs and event-heap-style coordinating mechanisms writ large turns this assumption upside down. Imagine how much more fluid and volatile the price of a tall decaf latte would be if it resulted from an algorithm actually pegged to something like a real-time synopsis of all of the factors impingent upon it—not only those involved in its production, but also whatever other quantifiable considerations influenced your decision to buy it.

Objections that consumers wouldn't stand still for such price volatility are easily countered by arguing that such things matter much less when the customer does not attend to the precise amount of a transaction. This is already widely the case when the point-of-purchase scenario involves credit and debit cards, and it will surely happen more often as the mechanism of payment increasingly dissolves in behavior.

Say the price was a function of the actual cost of the Jet-A fuel that flew this lot of beans in from Kona, the momentary salary of the driver who delivered it...and a thousand other elements. Maybe it reflects a loyalty bonus for having bought your morning jolt from the same store on 708 of the last 731 days, or the weather, or even the mass psychology of your particular market at this particular moment. (Derived—who knows?—from the titration of breakdown products of Prozac and Xanax in the municipal sewage stream.)

This is economics under the condition of ambient informatics. As it happens, many of these quantities are already easily recoverable, even without positing sensors in the sewers and RFID tags on every bag and pallet.

They exist, right now, as numeric values in a spreadsheet or a database somewhere. All that is necessary to begin deriving higher-order information from them is for some real-time coordinating mechanism to allow heterogeneous databases, owned by different entities and maintained in different places, to talk to each other over a network. This way of determining price gets asymptotically close to one of the golden assumptions of classical economics, the frictionlessness of information about a commodity. Consumers could be sure of getting something very close to the best price consistent with the seller's reasonable expectation of profit. In this sense, everyware would appear to be late capitalism's Holy Grail.
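What might such a pricing function look like? Here is a minimal sketch, with every feed, weight, and number invented for the occasion; it assumes only that each input is a value some coordinating mechanism can fetch from a remote database in something close to real time.

```python
def latte_price(feeds, customer):
    """Compose a price from live cost feeds and customer-specific factors.
    'feeds' and 'customer' stand in for queries against remote, networked databases."""
    cost_basis = (feeds["beans_per_cup"]      # commodity price plus the Jet-A that flew it in
                  + feeds["labor_per_cup"]    # the barista's momentary wage, allocated per cup
                  + feeds["rent_per_cup"])    # this location's rent, amortized per cup
    price = cost_basis * 1.35                 # target margin (arbitrary)

    # Demand-side adjustments, each drawn from some other networked source.
    price *= 1.0 + feeds["local_demand_index"] * 0.10   # queue length, weather, the mood of the market
    if customer["visits_last_731_days"] >= 700:
        price *= 0.95                                    # loyalty bonus
    return round(price, 2)

feeds = {"beans_per_cup": 0.62, "labor_per_cup": 0.80,
         "rent_per_cup": 0.45, "local_demand_index": 0.3}
customer = {"visits_last_731_days": 708}
print(latte_price(feeds, customer))   # a figure that would already be different an hour from now
```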

And where sharing such information was once anathema to business, this is no longer necessarily true. Google and Yahoo! already offer open application programming interfaces (APIs) to valuable properties like Google Maps and the Flickr photo-sharing service, and the practice is spreading; business has grown comfortable sharing even information traditionally held much closer to the vest, like current inventory levels, with partners up and down the supply chain. When mutual benefit has once been scented, connection often follows.

Beyond this point, the abyss of ridiculously granular relational micropayment schemes opens up. Accenture Technology Labs has demo'd a pay-per-use chair, for example, which monitors use, charges according to a schedule of prices that fluctuates with demand, and generates a monthly statement. While one hopes this is at least a little tongue-in-cheek, there is nothing that separates it in principle from other everyware-enabled dynamic pricing models that have been proposed—close-to-real-time insurance premiums, for example, based on the actual risk incurred by a client at any given moment.

Relationality has its limits. We know that a change anywhere in a tightly-coupled system ripples and cascades through everything connected to it, under the right (or wrong) circumstances rendering the whole mesh exquisitely vulnerable to disruption. Nevertheless, it's hard to imagine that a world so richly provisioned with sources of information, so interwoven with the means to connect them, would not eventually provoke someone to take maximum advantage of their joining.

images Thesis 23

Everyware has profoundly different social implications than previous information-technology paradigms.

By its very nature, a computing so pervasive and so deeply intertwined with everyday life will exert a transformative influence on our relationships with ourselves and with each other.

In fact, wherever it appears in the world, everyware is always already of social consequence. It can hardly be engaged without raising issues of trust, reputation, credibility, status, respect, and the presentation of self.

Take JAPELAS, a recent Tokushima University project that aims to establish the utility of ubiquitous technology in the classroom—in this case, a Japanese-language classroom. One of the complications of learning to speak Japanese involves knowing which of the many levels of politeness is appropriate in a given context, and this is just what JAPELAS sets out to teach.

The system determines the "appropriate" expression by trying to assess the social distance between interlocutors, their relative status, and the overall context of their interaction; it then supplies the student with the chosen expression, in real time.

Context is handled straightforwardly: Is the setting a bar after class, a job interview, or a graduation ceremony? Social distance is also relatively simple to determine—are these students in my class, in another class at the same school, or do they attend a different school altogether? But to gauge social status, JAPELAS assigns a rank to every person in the room, and this ordering is a function of a student's age, position, and affiliations.

The previous paragraph probably won't raise any red flags for Japanese readers. Why should it? All that JAPELAS does is encode into a technical system rules for linguistic expression that are ultimately derived from conventions about social rank that already existed in the culture. Any native speaker of Japanese makes determinations like these a hundred times a day, without ever once thinking about them: a senior outranks a freshman, a TA outranks a student, a tenured professor outranks an adjunct, and a professor at one of the great national universities outranks somebody who teaches at a smaller regional school. It's "natural" and "obvious."
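To see just how literal that encoding has to be, consider a minimal sketch of the ranking logic—my own reconstruction for the sake of argument, not the actual JAPELAS code. The cultural convention stops being tacit and becomes an explicit, executable ordering, consulted on every utterance.

```python
# Hypothetical reconstruction of the kind of rule a system like JAPELAS must encode somewhere.
POSITION_RANK = {"professor": 3, "teaching assistant": 2, "student": 1}
AFFILIATION_RANK = {"national university": 2, "regional school": 1}

def social_rank(person):
    """Higher tuple sorts higher: position first, then affiliation, then age."""
    return (POSITION_RANK[person["position"]],
            AFFILIATION_RANK[person["affiliation"]],
            person["age"])

def politeness_level(speaker, listener, setting):
    """Pick a register from context and relative rank (grossly simplified)."""
    if setting in ("job interview", "graduation ceremony"):
        return "formal honorific"
    if social_rank(listener) > social_rank(speaker):
        return "polite"
    return "plain"

student   = {"position": "student", "affiliation": "regional school", "age": 20}
professor = {"position": "professor", "affiliation": "national university", "age": 55}
print(politeness_level(student, professor, setting="bar after class"))   # -> 'polite'
```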

But to me, it makes a difference when distinctions like these are inscribed in the unremitting logic of an information-processing system.* Admittedly, JAPELAS is "just" a teaching tool, and a prototype at that, so maybe it can be forgiven a certain lack of nuance; you'd be drilled with the same rules by just about any human teacher, after all. (I sure was.) It is nevertheless disconcerting to think how easily such discriminations can be hard-coded into something seemingly neutral and unimpeachable and to consider the force they have when uttered by such a source. And where PC-based learning systems also observe such distinctions, they generally do so in their own bounded non-space, not out here in the world.

* As someone nurtured on notions of egalitarianism, however hazy, the idea that affiliations have rank especially raises my hackles. I don't like the idea that the city I was born in, the school I went to, or the military unit I belonged to peg me as belonging higher (or lower) on the totem pole than anyone else. Certain Britons, Brahmins, and graduates of the École Normale Supérieure may have a slightly easier time accepting the idea.

Everyware may not always reify social relations with quite the same clunky intensity that JAPELAS does, but it will invariably reflect the assumptions its designers bring to it. Just as with JAPELAS, those assumptions will result in orderings—and those orderings will be manifested pervasively, in everything from whose preferences take precedence while using a home-entertainment system to which of the injured supplicants clamoring for the attention of the ER staff gets cared for first.

As if that weren't enough to chew on, there will also be other significant social consequences of everyware. Among other things, the presence of an ambient informatics will severely constrain the presentation of self, even to ourselves.

This is because information that can be called upon at any time and in any place necessarily becomes part of social transactions in a way that it could not when bound to fixed and discrete devices. We already speak of Googling new acquaintances—whether prospective hires or potential lovers—to learn what we can of them. But this is rarely something we do in their presence; it's something we do, rather, when we remember it, back in front of our machine, hours or days after we've actually made the contact.

What happens when the same information is pushed to us in real time, at the very moment we stand face to face with someone else? What happens when we're offered a new richness of facts about a human being—their credit rating, their claimed affinities, the acidity of their sweat—from sources previously inaccessible, especially when those facts are abstracted into high-level visualizations as simple (and decisive) as a check or a cross-mark appearing next to them in the augmented view provided by our glasses?

And above all, what happens when the composite view we are offered of our own selves conflicts with the way we would want those selves to be perceived?

Erving Goffman taught us, way back in 1958, that we are all actors. We all have a collection of masks, in other words, to be swapped out as the exigencies of our transit through life require: one hour stern boss, the next anxious lover. Who can maintain a custody of the self conscious and consistent enough to read as coherent throughout all the input modes everyware offers?

What we're headed for, I'm afraid, is a milieu in which sustaining different masks for all the different roles in our lives will prove to be untenable, if simply because too much information about our previous decisions will follow us around. And while certain futurists have been warning us about this for years, for the most part even they hadn't counted on the emergence of a technology capable of closing the loop between the existence of such information and its actionability in everyday life. For better or worse, everyware is that technology.

We've taken a look, now, at the ways in which everyware will differ from personal computing and seen that many of its implications are quite profound. Given the magnitude of the changes involved, and their disruptive nature, why does this paradigm shift seem so inevitable? Why have I felt so comfortable asserting that this will happen, or is happening, or even, occasionally, has happened? Especially about something that at the moment mostly seems to be manifested in prototypes and proofs of concept? You may recall that I believe the emergence of everyware is over-determined—and in the next section, we'll get into a good deal of depth as to why I think this is so.