Everyware: The Dawning Age of Ubiquitous Computing - Adam Greenfield (2006)
Section 4. What Are the Issues We Need to Be Aware Of?
If we have by now concluded that some kind of everyware does seem inevitable, the precise form that it will take in our lives is still contingent, open to change.
What are some of the issues we need to be aware of, in order to make the sort of wise decisions that will shape its emergence in congenial ways?
Everyware insinuates itself into transactions never before subject to technical intervention.
Even if you yourself are not a connoisseur of gourmet bathing experiences, you may be interested to learn that the Brazilian company IHOUSE last year offered for sale something it called the Smart Hydro "intelligent bathtub."
The Smart Hydro is a preview of the experience that awaits us in the fully networked home, at least at the high end of the market. It really puts the bather in the driver's seat, as it were, giving its user access to a range of preference settings, from the essentials of water temperature and level to treats like "bath essence or foam, a variety of hydromassage programs and even light intensity." It can even be programmed to fill itself and then call you on your mobile phone mid-commute, just to let you know that your bath will be ready for you the moment you step through the door. (Of course it will be "[kept] temperature controlled until you arrive home.")
But you already knew how to draw a bath, didn't you? And you've somehow survived this far in life without the help of automated calls from the bathroom infrastructure. In fact, learning how to manage your bathtub's preference settings is probably not on the list of things you most want to do with your time—not when you've pretty much had a handle on the situation since the age of five or six.
Especially as a consequence of its insinuation into everyday life, everyware appears in all kinds of transactions that have never before been subject to highly technical intervention. Ubicomp advocate Mike Kuniavsky acknowledges this in his "Smart Furniture Manifesto": in his own words, endowing furniture and other everyday things with digital intelligence "can introduce all kinds of complexity and failure modes that don't currently exist." (I'd argue that you can replace the "can" in that sentence with "demonstrably will.")
The consequences of such complexification extend beyond bathing, or the similarly simple but profound pleasures of hearth and table, to implicate a further set of experiences that tend to be the most meaningful and special to us.
Take friendship. Current social-networking applications, like Orkut or Friendster, already offer us digital profiles of the people we know. An ambient version—and such systems have been proposed—could interpose these profiles in real time, augmenting the first glimpse of an acquaintance with an overlay of their name, a list of the friends we have in common, and an indication of how warmly we regard them. The benefit qua memory augmentation is obvious, perfect for those of us who always feel more than a little guilty about forgetting someone's name. But doesn't this begin to redefine what it means to "recognize" or to "know" someone? (We'll see in the next thesis that such a degree of explicitness poses significant challenges socially as well as semantically.)
Take exercise, or play, or sexuality, all of which will surely become sites of intense mediation in a fully developed everyware milieu. Something as simple as hiking in the wilderness becomes almost unrecognizable when overlaid with GPS location, sophisticated visual pattern-recognition algorithms, and the content of networked geological, botanical, and zoological databases—you won't get lost, surely, or mistake poisonous mushrooms for the edible varieties, but it could hardly be said that you're "getting away from it all."
Even meditation is transformed into something new and different: since we know empirically that the brains of Tibetan monks in deep contemplation show regular alpha-wave patterns, it's easy to imagine environmental interventions, from light to sound to airflow to scent, designed to evoke the state of mindfulness, coupled to a body-monitor setting that helps you recognize when you've entered it.
If these scenarios present us with reason to be concerned about ubiquitous interventions, this doesn't necessarily mean we should forgo all such attempts to invest the world with computational power. It simply means that we have to be unusually careful about what we're doing, more careful certainly than we've been in the past. Because by and large, whatever frustrations our sojourns in the world present us with, we've had a long time to get used to them; to paraphrase Paul Robeson, we suit ourselves. Whatever marginal "improvement" is enacted by overlaying daily life with digital mediation has to be balanced against the risk of screwing up something that already works, however gracelessly or inelegantly.
Eliel Saarinen—Eero's father, and a professor of architecture in his own right—invariably reminded his students that they must "[a]lways design a thing by considering it in its next larger context." The implications of this line of thought for everyware are obvious: In some particularly delicate circumstances, it would probably be wisest to leave well enough alone.
Everyware surfaces and makes explicit information that has always been latent in our lives, and this will frequently be incommensurate with social or psychological comfort.
Remember BodyMedia, the company responsible for the conformal, Band-Aid–sized SenseWear sensor? BodyMedia's vice president for product design, Chris Kasabach, says the company thinks of the living body as a "continuous beacon": "signals can either fall on the floor, or you can collect them and they can tell you something higher-level" about the organism in question.
Stripped of its specific referent, this is as good a one-sentence description of the data-discovery aspect of everyware as you are ever likely to come across. Everyware's mesh of enhanced objects dispersed throughout everyday life also happens to offer a way of collecting the signals already out there and making of them a gnosis of the world.
In the case of the body especially, these signals have always been there. All that's really new about SenseWear is the conjoined ambition and practical wherewithal to capture and interpret such signals—and to make use of them. This is true of many things. The world is increasingly becoming a place where any given fact is subject to both quantification and publication—and not merely those captured by the various kinds of sensors we encounter, but also ones that you or I have volunteered.
The truth of this was driven home by the first online social-networking sites. Those of us who used early versions of Friendster, Orkut, or LinkedIn will understand what I mean when I say they occasionally made uncomfortably explicit certain aspects of our social relationships that we generally prefer to keep shrouded in ambiguity: I like her better than him; she thinks I'm highly reliable and even very cool, but not at all sexy; I want to be seen and understood as an associate of yours, but not of his.
Even services with other primary objectives observe such social differentiation these days. The Flickr photo-sharing service, for example, recognizes a gradient of affinity, inscribing distinctions between a user's "family," "friends," "contacts," and everyone else—with the result that there's plenty of room for people who know me on Flickr to wonder why (and potentially be hurt by the fact that) I consider them a "contact" and not a "friend."
What if every fact about which we generally try to dissemble, in our crafting of a mask to show the world, was instead made readily and transparently available? I'm not just talking about obvious privacy issues—histories of various sorts of irresponsibility, or of unpopular political, religious, or sexual predilections—but about subtler and seemingly harmless things as well: who you've chosen to befriend in your life, say, or what kinds of intimacy you choose to share with them, but not others.
This is exactly what is implied by a global information processing system with inputs and outputs scattered all over the place. With everyware, all that information about you or me going into the network implies that it comes out again somewhere else—a "somewhere" that is difficult or impossible to specify ahead of time—and this has real consequences for how we go about constructing a social self. When these private and unspoken arrangements are drawn out into the open, are made public and explicit, embarrassment, discomfort, even resentment can follow for all parties involved.
These are events that Gary T. Marx, the MIT professor emeritus of sociology whose theories of technology and social control we discussed in Thesis 30, refers to as border crossings: irruptions of information in an unexpected (and generally problematic) context. Marx identifies several distinct types of crossing—natural, social, spatial/temporal, and ephemeral—but they all share a common nature: in each case, something happens to violate "the expectation by people that parts of their lives can exist in isolation from other parts." You see something compromising through a hole in your neighbor's fence, for example, or a mother sneaks into her daughter's room and reads her "secret" diary.
The Web is a generator par excellence of such crossings, from the ludicrous to the terrifying. We've all seen a momentary slip of the tongue recorded on high-fidelity video and uploaded for all the world to see (and mock). There's an entire genre of humor revolving around the sundry Jedi Knight fantasies and wardrobe malfunctions that shall now live for all time, mirrored on dozens or hundreds of servers around the globe. And much of the annoyance of spam, for many of us, is the appearance of sexually explicit language and/or imagery in times and places we've devoted to other activities.
But this is all a foretaste of what we can see coming. Where everyware is concerned, we can no longer expect anything to exist in isolation from anything else. It comprises a "global mnemotechnical system," in the words of French philosopher Bernard Stiegler—a mesh of computational awareness, operating in a great many places and on a great many channels, fused to techniques that permit the relational or semantic cross-referencing of the facts thus garnered, and an almost limitless variety of modes and opportunities for output. It brings along with it the certainty that if a fact once enters the grid—any fact, of any sort, from your Aunt Helga's blood pressure at noon last Sunday to the way you currently feel about your most recent ex-boyfriend—it will acquire a strange kind of immortality.
Unable, apparently, to bear the idea that our signals might "fall on the floor," we've arranged to capture them for all time, uploading them to a net where they will bounce from one node, to another, to another—for as long as there remains a network to hold them.
One trouble with this is that we've historically built our notions of reputation such that they rely on exformation—on certain kinds of information leaving the world, disappearing from accessibility. But with such mnemotechnical systems in place, information never does leave the world. It just keeps accumulating, simultaneously more explicit, more available, and more persistent than anything we or our societies have yet reckoned with.
Latent information is also made explicit as a result of the conventions suggested by everyware.
Latent data pops to the surface in everyware as a result of new conventions, every bit as much as of new systems. These are ways of thinking about the world that follow in the wake of the mass adoption of information technology, as natural afterward as they would have seemed alien beforehand.
For example, designers Ulla-Maaria Mutanen and Jyri Engeström, working with economics student Adam Wern, have proposed something they call a "free product identifier." Their ThingLinks offer an equivalent of the familiar UPC or ISBN codes, specifically designed for the "invisible tail" of short-run, amateur, or folk productions previously denied a place in the grand electronic souk of late capitalism. Anyone, at no charge, can generate a code, associate it with an object, and fill out some basic information relating to it, and forever after that object can be looked up via the net—either the one we enjoy now, or whatever ubiquitous variant comes afterward.*
* Each ThingLink is technically a valid Uniform Resource Identifier, albeit one refined down to the "scheme" and "path" semantic elements.
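The footnote's point—that such an identifier reduces to nothing but a "scheme" and a "path"—can be made concrete with a few lines of Python. The identifier format below is invented for illustration; only the scheme-and-path structure is taken from the text.

```python
from urllib.parse import urlsplit

def parse_identifier(uri: str) -> tuple[str, str]:
    """Split a minimal URI into its 'scheme' and 'path' semantic elements.

    The "thinglink:" format used below is hypothetical, chosen only to
    illustrate how little machinery a free product identifier needs.
    """
    parts = urlsplit(uri)
    if not parts.scheme or not parts.path:
        raise ValueError(f"not a scheme-and-path URI: {uri!r}")
    return parts.scheme, parts.path

# A made-up identifier for, say, a hand-thrown ceramic bowl:
scheme, path = parse_identifier("thinglink:6a7f-03c2")
```

The entire convention fits in a string: once such a code exists, anything it is attached to becomes addressable, which is exactly the double-edged quality the next paragraphs explore.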
On the one hand, this is a truly utopian gesture. As a manifestation of the emerging culture of mass amateurization, such open codes would allow small-scale producers—from West Berkeley sculptors to Bangladeshi weaving collectives—to compete on something approaching the same ground as professional manufacturers and distributors.
At the same time, though, and despite its designers' clear intentions, a free product identifier could be regarded as a harbinger of the insidious transformation of just about everything into machine-readable information.
Such an identifier is not a technical system in the usual sense. It is intangible, nonmaterial. It's nothing more than a convention: a format and perhaps some protocols for handling information expressed in that format. But its shape and conception are strongly conditioned by the existence of parallel conventions—conventions that are embodied in specific technologies. The whole notion of a Uniform Resource Identifier, for example, which was called into being by the Internet, or a Universal Product Code, which cannot be separated from the technics of bar-coding and its descendent, RFID.
And though such conventions may be intangible, they nevertheless have power, in our minds and in the world. The existence of a machine-readable format for object identification, particularly, is a container waiting to be filled, and our awareness that such a thing exists will transform the way we understand the situations around us. Because once we've internalized the notion, any object that might once have had an independent existence—unobserved by anyone outside its immediate physical vicinity, unrecorded and certainly uncorrelated—can be captured and recast as a node.
Once again, we see latent facts about the world brought to the surface and made available to our networked mnemotechnical systems—this time, through the existence of a convention, rather than a type or particular deployment of technology. Ubiquitous, then, does turn out to mean exactly that.
Everyday life presents designers of everyware with a particularly difficult case because so very much about it is tacit, unspoken, or defined with insufficient precision.
One of the reasons why networked bathtubs, explicitly ordered friendship arrays, and the other artifacts of everyware we've encountered can seem so dissonant to us is that they collapse two realms that have generally been separate and distinct in the past: the technical, with all the ordered precision it implies, and the messy inexactitude of the everyday.
Everyday situations pose a particular challenge to the designer because so many of their important constituents are tacit or unspoken. They lack hard edges, clear definitions, known sets of actors, and defined time limits, and this makes them exquisitely difficult to represent in terms useful to system engineers. Significant aspects of a given transaction may not even be consciously perceived by participants, in the way that things seen too often disappear from sight.
Information technology, of course, requires just the opposite: that as much as possible be made as explicit as possible. In programming, for example, software engineers use tools like Object Management Group's Unified Modeling Language (UML) to specify application structure, behavior and architecture with the necessary degree of rigor. (UML and its analogues can also be used to map less technical procedures, e.g., business processes.)
Make no mistake about it, "specify" is the operative word here. UML allows programmers to decompose a situation into a use case: a highly granular, stepped, sequential representation of the interaction, with all events and participants precisely defined. Such use cases are a necessary intermediate step between the high-level, natural language description of a scenario and its ultimate expression in code.
In a valid use case, nothing is left unspecified or undefined. Every party to an interaction must be named, as well as all of the attributes belonging to (and all the operations that can be performed on) each of them. To a non-engineer, the level of attention to detail involved can seem almost pathologically obsessive. In order to write sound code, though, all of these values must be specified minutely.
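A toy decomposition in ordinary code (rather than UML diagrams) makes the point. Here is the "draw a bath" scenario from earlier in the section, forced into the explicitness a use case demands; every class, field, and step name is invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Even this trivial scenario obliges us to name every actor, every
# attribute, and every step in sequence—nothing may stay tacit.

class Step(Enum):
    REQUEST_FILL = auto()
    HEAT_WATER = auto()
    NOTIFY_USER = auto()

@dataclass
class Bather:
    name: str
    preferred_temp_c: float

@dataclass
class Bathtub:
    capacity_l: float
    water_level_l: float = 0.0
    temp_c: float = 20.0

def run_use_case(bather: Bather, tub: Bathtub) -> list[Step]:
    """Execute the stepped, sequential interaction the use case defines."""
    performed = []
    tub.water_level_l = tub.capacity_l * 0.8   # fill to 80 percent
    performed.append(Step.REQUEST_FILL)
    tub.temp_c = bather.preferred_temp_c       # heat to the stated preference
    performed.append(Step.HEAT_WATER)
    performed.append(Step.NOTIFY_USER)         # e.g., the mid-commute phone call
    return performed
```

Notice that the bather's preference had to become a single number, and "a bath being ready" a discrete event; everything fuzzy about the real experience has already been legislated away.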
Now consider such requirements in the light of everyday life—specifically, in the light of those everyday communications that appear to lack content, but which nonetheless convey important social and emotional information: so-called phatic utterances. These present us with a perfect example of something critical to our understanding of everyday life, which is yet highly resistant to technical specification.
We're familiar with quite a few kinds of phatic utterance, from morning greetings in the elevator and the flurry of endearments with which happy couples shower each other, to "minimum verbal encouragers"—the odd grunts and "uh-huh"s that let you know when someone is listening to you. Responding appropriately to such communications means accurately picking up subtle cues as to the communicator's intent, and those cues are often nonverbal in nature. They may reside in the speaker's body language, or in the context in which the utterance is offered. Either way, though, we understand that when a neighbor says "Lovely weather we're having, isn't it?," this is a performance of openness, availability, and friendliness, not an invitation to discuss the climate.
Most nonautistic adults have internalized the complexities of parsing situations like these, but designers of technical systems will find them very, very difficult to represent adequately. And if phatic speech poses real problems, at least it is speech. What about the pauses, ellipses, pregnant silences, and other signifying absences of everyday communication? How do you model profoundly meaningful but essentially negative events like these in UML?
This isn't a rhetorical question. The fuzziness, indirection and imprecision we see in everyday speech are far from atypical; they can stand in for many human behaviors that are not exactly what they seem, many situations whose meaning is the product of an extremely subtle interplay among events both tacit and fully articulated.
If we are ever to regard the appearance of computing in everyday life as anything more than an annoyance, though, someone will have to do just this sort of thing. Someone will have to model fuzzy, indirect, imprecise behaviors. Someone will have to teach systems to regard some utterances as signal and some as noise, some facts as significant and some as misdirection, some gestures as compulsive tics and yet others as meaningful commands.
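To feel how thin any explicit specification of this social skill is, consider a deliberately crude rule-based sketch. The phrase list and the whole approach are illustrative assumptions, not a proposed solution; the point is precisely how badly such rules fail.

```python
# A deliberately naive classifier for phatic utterances. It matches
# surface phrases only—exactly the kind of explicitness UML-style
# specification can express—and so it has no access to body language,
# tone, or situation.

PHATIC_MARKERS = {"lovely weather", "how are you", "uh-huh", "good morning"}

def looks_phatic(utterance: str) -> bool:
    text = utterance.lower()
    return any(marker in text for marker in PHATIC_MARKERS)

# The rule fires on the neighborly greeting...
looks_phatic("Lovely weather we're having, isn't it?")
# ...but a genuine question about the climate triggers it too, because
# the cue that distinguishes them was never verbal in the first place:
looks_phatic("Has the lovely weather affected the harvest?")
```

Both calls return the same answer, though a five-year-old would not confuse the two situations: the discriminating information simply is not in the string.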
These are clearly not trivial things to expect. In fact, challenges of this order are often called "AI-hard"—that is, a system capable of mastering them could be construed as having successfully met the definition of artificial human intelligence. Simply describing everyday situations in useful detail would utterly tax contemporary digital design practice and most of the methodological tools it's built on.
Again, I am not expressing the sentiment that we should not attempt to design systems graceful enough for everyday life. I am simply trying to evoke the magnitude of the challenges faced by the designers of such systems. If nothing else, it would be wise for us all to remember that, while our information technology may be digital in nature, the human beings interacting with it will always be infuriatingly and delightfully analog.
Everyware is problematic because it is hard to see literally.
Like bio- and nanotechnology, everyware is a contemporary technics whose physical traces can be difficult or even impossible to see with the unaided eye.
It's a minuscule technology. Its material constituents are for the most part sensors, processors, and memory chips of centimeter scale or smaller, connected (where they require physical connections at all) via printed, woven, or otherwise conformal circuitry.
It's a dissembling technology: those constituents are embedded in objects whose outer form may offer no clue as to their functionality.
It's also, of course, a wireless technology, its calls and responses riding through the environment on modulated radio waves.
All of these qualities make everyware quite hard to discern in its particulars—especially as compared to earlier computing paradigms with their obvious outcroppings of High Technology.
We should get used to the idea that there will henceforth be little correlation between the appearance of an artifact and its capabilities—no obvious indications as to how to invoke basic functionality nor that the artifact is capable of doing anything at all. When even a space that appears entirely empty may in fact contain—to all intents, may be—a powerful information processing system, we can no longer rely on appearances to guide us.
Everyware is problematic because it is hard to see figuratively.
If its physical constituents are literally too small, too deeply buried, or too intangible to be made out with the eye, there are also other (and potentially still more decisive) ways in which everyware is hard to see clearly.
This quality of imperceptibility is not simply a general property of ubiquitous systems; for the most part, rather, it's something that has deliberately been sought and worked toward. As we've seen, the sleight of hand by which information processing appears to dissolve into everyday behavior is by no means easy to achieve.
There are two sides to this, of course. On the one hand, this is what Mark Weiser and John Seely Brown set out as the goal of their "calm technology": interfaces that do not call undue attention to themselves, interactions that are allowed to remain peripheral. If a Weiserian calm technology appears as the result of a consciously pursued strategy of disappearance, it does so because its designers believed that this was the best way to relieve the stress engendered by more overt and attention-compelling interfaces.
But if they contain enormous potential for good, such disappearances can also conceal what precisely is at issue in a given transaction, who stands to benefit from it and whose interests are at risk. MasterCard, for example, clearly hopes that people will lose track of what is signified by the tap of a PayPass card—that the action will become automatic and thus fade from perception. In one field test, users of PayPass-enabled devices—in this case, key fobs and cell phones—spent 25 percent more than those using cash. ("Just tap & go," indeed.)
As computing technology becomes less overt and less conspicuous, it gets harder to see that devices are designed, manufactured, and marketed by some specific institution, that network and interface standards are specified by some body, and so on. A laptop is clearly made by Toshiba or Dell or Apple, but what about a situation?
This is the flipside of the seeming inevitability we've considered, the argument against technodeterminism. Despite the attributes that appear to inhere in technologies even at the very moment that they come into being, there is always human agency involved—always. So if RFID "wants" to be everywhere and part of everything, if IPv6 "wants" to transform everything in the world into a node, we should remember to ask: Who designed them to be that way? Who specified a networking protocol or an address space with these features, and why did they make these decisions and not others?
Historically, its opacity to the nonspecialist has lent technological development an entirely undeserved aura of inevitability, which in turn has tended to obscure questions of agency and accountability. This is only exacerbated in the case of a technology that is also literally bordering on the imperceptible.
Most difficult of all is the case when we cease to think of some tool as being "technology" at all—as studies in Japan and Norway indicate is currently true of mobile phones, at least in those places. Under such circumstances, the technology's governing metaphors and assumptions have an easier time infiltrating the other decisions we make about the world. Their effects come to seem more normal, more natural, simply the way things are done, while gestures of refusal become that much harder to make or to justify. And that is something that should give us pause, at the cusp of our embrace of something as insinuative and as hard to see as everyware.
The discourse of seamlessness effaces or elides meaningful distinctions between systems.
Closely related to the question of everyware's imperceptibility is its seamlessness. This is the idea that both the inner workings of a given system and its junctures with others should be imperceptible to the user, and it's been extraordinarily influential in ubicomp circles over the last eight or ten years. In fact, seamless has become one of those words that one rarely hears except in the context of phrases like "seamless interaction," "seamless integration," "seamless interconnection," or "seamless interfaces."
The notion inarguably has a pedigree in the field; the term itself goes straight back to the ubiquitous Mark Weiser. Ironically enough, though, given its later widespread currency, Weiser regarded seamlessness as an undesirable and fundamentally homogenizing attribute in a ubiquitous system.
Without seams, after all, it's hard to tell where one thing ends and something else begins—points of difference and distinction tend to be smoothed over or flattened out. Very much alive to this danger, Weiser advocated the alternative concept of "seamfulness, with beautiful seams," in which users are helped to understand the systems they encounter, how they work, and what happens at their junctures with one another by the design of the systems themselves.
However rewarding, properly providing the user with seamful experiences is obviously a rather time-consuming and difficult way to go about doing things. Maybe this is why Matthew Chalmers and Ian MacColl, then of the University of Glasgow, found in 2003 that Weiser's musings had been oddly inverted in the process of their reification. Phrases invoking seamlessness positively peppered the ubicomp literature they surveyed, from IBM's pervasive computing Web site to the EU's prospectus for its Disappearing Computer initiative—and where the idea appeared in such material, it was generally presented as an unquestioned good and a goal to which the whole ubicomp community should aspire.
Chalmers and MacColl decided to reintroduce the notion of beautiful seams, challenging the whole discourse of smooth continuity they found to be endemic in contemporary models of ubicomp. But while they were among the earliest critics of seamlessness, they were far from alone in their discomfort with the notion, at least if the frequency with which their work on seamful design is cited is any indication.
Critics were motivated by several apparent flaws in the staging of seamless presentations. The most obvious was dishonesty: The infrastructure supporting the user's experience is deeply heterogeneous, and, at least in contemporary, real-world systems, frequently enough held together by the digital equivalent of duct tape and chewing gum. In Chalmers and MacColl's words, ubiquitous devices and infrastructural components "have limited sampling rates and resolution, are prone to communication delays and disconnections, have finite data storage limits and have representational schemes of finite scope and accuracy"; any attempt to provide the user with a continuous experience must somehow paper over these circumstances.
More worrisome than simple dishonesty, though, is the paternalism involved: seamlessness deprives the user of meaningful participation in the decisions that affect his or her experience. The example often given by Chalmers, in his discussion of the distinctions between seamlessness and its inverse, is that of a mobile phone user: In most such cases, information such as differentials in signal strength between adjacent cells, or the location of boundaries at which a phone is "handed over" from one cell to another, is inaccessible, handled automatically at a level beneath presentation in the interface.
While such information is probably useless, or even annoying, to most users at most times, it surely might prove desirable for some users at some times. By extension, most ubiquitous systems will involve the sort of complexity that designers are ordinarily tempted to sweep under the rug, secure in the wisdom that "users don't care about that." No matter how correct this determination may be in most cases, or how well-intentioned the effort to shield the user, there should always be some accommodation for those wishing to bring the full scope of a system's complexity to light.
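Chalmers and MacColl's "accommodation" can be sketched in a few lines: the same status object renders either a seamless summary or, on request, the seams underneath. All field names and thresholds here are invented for illustration.

```python
from dataclasses import dataclass

# A minimal "seamful on demand" interface for the mobile-phone example:
# by default the user sees only a smooth abstraction, but the raw cell
# and signal data remain one request away rather than being discarded.

@dataclass
class LinkStatus:
    cell_id: str
    signal_dbm: int

    def render(self, show_seams: bool = False) -> str:
        if show_seams:
            # Expose the seam: which cell we're on, and how strong it is.
            return f"cell {self.cell_id}, {self.signal_dbm} dBm"
        # The seamless default: a binary summary of a continuous reality.
        return "connected" if self.signal_dbm > -100 else "no service"
```

The design choice is simply that the seamless view is a presentation layer over the data, not a destruction of it, so the curious user can always reach through.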
Another critical flaw in seamlessness was also first raised by Chalmers and MacColl, and it related to the question of appropriation. Drawing on the earlier work of Paul Dourish and Steve Harrison, they questioned whether a system that was presented to users as seamless could ever afford those users the sense of ownership so critical to rewarding experiences of technology.
Dourish and Harrison offered as an object lesson the distinction between two early videoconferencing systems, from Bellcore Labs and Xerox PARC. The Bellcore system, VideoWindow, was an extremely sophisticated effort in supporting "copresence"; it was complex and expensive, and it presented itself to users monolithically. In Dourish and Harrison's words, "it wasn't theirs, and they could not make it theirs." By contrast, users could and did play with the Xerox system, based as it was on cheap, portable cameras. Predictably enough, those who used the Xerox effort found that it "offered something wonderful," while the designers of VideoWindow could only lamely conclude from their disappointing trials that their system "lack[ed] something due to factors we do not understand."
Chalmers and MacColl drew from this the inference that systems presented as seamless would be difficult to appropriate, claim, and customize in the ways that seem to make people happiest. Visible seams, by contrast, expose the places where users can "reach into" a system and tune it to their preference.
The resonance of such critiques will only grow as ubiquitous systems get closer to everyday reality, because the discourse of seamlessness continues to hold sway in the field. Consider how they might play out in the relationship between two notional data-collection systems, biometric in nature, one of which operates at the household level and the other at some ambiguously larger scale.
You own the local system, whose tendrils reach from your refrigerator to the bathroom scale to the exercise monitor you wear to the gym. While it is constantly deriving precise numeric values for attributes like caloric intake and body-mass index, its findings are delivered to you not as raw data, but as gentle, high-level comments that appear on the bathroom mirror and the refrigerator door: "Run an extra ten minutes this morning," or "Try adding leafy greens to today's meals."
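The scenario's "seamless" health layer reduces to a one-way abstraction: raw readings go in, a gentle comment comes out, and the numbers themselves never reach you. A sketch, with thresholds and field names invented purely for illustration:

```python
# The household system as the scenario describes it: precise numeric
# values are derived continuously, but the user only ever sees the
# high-level comment on the bathroom mirror. The raw data is hidden—
# and, in the scenario, quietly available to other systems.

def daily_advice(calories_in: int, calories_burned: int) -> str:
    surplus = calories_in - calories_burned
    if surplus > 300:
        return "Run an extra ten minutes this morning."
    if surplus < -300:
        return "Try adding leafy greens to today's meals."
    return "Keep doing what you're doing."
```

Note what the function signature makes visible that the mirror does not: the exact inputs being collected. That asymmetry is the seam the next paragraphs worry about.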
And while most of us, for obvious reasons, would not want something like this directly connected with the outside world, in this case your health-management system is interfaced with your household-management system. And it is the latter that is coupled, at levels beneath your awareness, to larger, external information-gathering efforts—those belonging to insurance companies, or marketers, or the Department of Health and Human Services.
These are two different manifestations of seamlessness, and both interfere with your ability to regulate the flow of information around you. Maybe you're actually curious to know exactly how many calories you burned today. More serious, of course, is the problem posed by the obscure interconnection of apparently discrete systems. There we run into the same issue we saw with PayPass: that the decision to shield the user from the system's workings also conceals who is at risk and who stands to benefit in a given transaction.
Given these potentials, there's something refreshing about the notion of making the seams and junctions that hold our technologies together at least optionally visible. In some sense, doing so would demolish the magical sense of effortlessness so many theories of ubiquitous design aim for, but that could always be switched back on, couldn't it?
I like the honesty of seamfulness, the way it invites users to participate in a narrative from which they had been foreclosed. In everyware as in life, good fences make good neighbors.
Before they are knit together, the systems that comprise everyware may appear to be relatively conventional, with well-understood interfaces and affordances. When interconnected, they will assuredly interact in emergent and unpredictable ways.
Our information technology is difficult to consider holistically, because it is modular: Though its constituents work together, they are designed at different times, in different places, by different parties, to different ends. Furthermore, these constituents are agnostic as to the circumstances of their use: A database doesn't "care" whether it's used to enhance a rental car company's relationships with its best customers, manage a public housing authority's inventory of cleaning supplies, or keep tabs on the members of a particular ethnic group scattered through a larger population.
This modularity has historically been the very strength of information technology, at least in the years since the Internet raised notions of standardization and interoperability to wide currency in the industry. It lends our informatic systems an enormous degree of flexibility, adaptability, even vigor, and it will and should continue to do so.
But in our experiences of everyware, we'll also find that modularity will occasionally prove to be a cause for concern. Even where we recognize that certain high-level effects of ubiquitous systems are less comfortable for users than we might like, it will generally not be possible to address these effects at the level of component artifacts. Apart from those of literal imperceptibility, almost all of the issues we're interested in—the surfacing of previously latent information, the persistence lent ephemera by their encoding in mnemotechnical systems, certainly seamlessness—do not arise as a result of properties residing in components themselves. They are, rather, an emergent property of the components' interaction, of their deployment in the world in specific patterns and combinations.
This is a thoroughly conventional microcontroller. This is an off-the-shelf microphone. This is a stock door-actuator mechanism. We think we know them thoroughly, understand their properties and characteristics in detail, and under most circumstances, that is a reasonably safe assumption. But put them together properly, embed them in the world in a particular relationship with each other, and we have a system that closes a door between two spaces any time the sound level in one of them breaches a certain threshold. And though the exact details will vary depending on where and how it is deployed, this system will have behaviors and consequences that absolutely could not have been predicted by someone considering the components beforehand—no matter how expert or insightful that person might be.
Maybe this strikes you as trivial. Consider, then, a somewhat more baroque but no less credible scenario: Contributions to political campaigns, at least in the United States, are already a matter of public record, stored in databases that are easily accessible on the Internet.* And we've known from as far back as Smart Floor that unconscious aspects of a person's gait—things like weight distribution and tread period—can not only be sensed, but can also serve as a surprisingly accurate identifier, at least when a great many data points are gathered. (As the Georgia Tech team responsible for Smart Floor rather disingenuously asked, "Why not make the floor 'smart,' and use it to identify and track people?")
* The source I happen to be thinking of is opensecrets.org, but there are many others. See spywareguide.com for some interesting mash-ups and combinations.
On the surface, these seem like two thoroughly unrelated factoids about the world we live in, so much so that the third sentence of the paragraph above almost reads like a non sequitur. But connect the two discrete databases, design software that draws inferences from the appearance of certain patterns of fact—as our relational technology certainly allows us to do—and we have a situation where you can be identified by name and likely political sympathy as you walk through a space provisioned with the necessary sensors.
Did anyone intend this? Of course not—at least, we can assume that the original designers of each separate system did not. But when things like sensors and databases are networked and interoperable, agnostic and freely available, it is a straightforward matter to combine them to produce effects unforeseen by their creators.
The Smart Floor example is, of course, deliberately provocative; there's nothing in the literature to suggest that Georgia Tech's gait-recognition system was ever scaled up to work in high-traffic public spaces, let alone that any spaces beyond their own lab were ever instrumented in this way. Nevertheless, there is nothing in the scenario that could not in principle be done tomorrow.
We should never make the mistake of believing, as designers, users or policymakers, that we understand exactly what we're dealing with in an abstract discussion of everyware. How can we fully understand, let alone propose to regulate, a technology whose important consequences may only arise combinatorially as a result of its specific placement in the world?
I believe that in the fullness of time, the emergent behavior of ubiquitous systems will present us and our societies with the deepest of challenges. Such behavior will raise the specter of autonomous artifacts, even call into question the proper degree of loyalty an object should have toward its owners or users. As real as these issues are, though, we're not quite there yet; perhaps it would be wiser to deal with the foreseeable implications of near-term systems before addressing the problems that await us a few steps further out.
In everyware, many issues are decided at the level of architecture, and therefore do not admit of any substantive recourse in real time.
Stanford law professor Lawrence Lessig argues, in his book Code and Other Laws of Cyberspace, that the deep structural design of informatic systems—their architecture—has important implications for the degree of freedom people are allowed in using those systems, forever after. Whether consciously or not, values are encoded into a technology, in preference to others that might have been, and then enacted whenever the technology is employed.
For example, the Internet was originally designed so that the network itself knows nothing about the systems connected to it, other than the fact that each has a valid address and handles the appropriate protocols. It could have been designed differently, but it wasn't. Somebody made the decision that the cause of optimal network efficiency was best served by such an "end-to-end" architecture.*
* In this case, the identity of the "somebody" in question is widely known: The relevant design decisions were set forth by Robert E. Kahn and Vint Cerf, in a 1974 paper called A Protocol for Packet Network Intercommunication. The identity of responsible parties will not always be so transparent.
Lessig believes that this engineering decision has had the profoundest consequences for the way we present ourselves on the net and for the regulability of our behavior there. Among other things, "in real space... anonymity has to be created, but in cyberspace anonymity is the given." And so some rather high-level behaviors—from leaving unsigned comments on a Web site, to being able to download a movie to a local machine, traceable to nothing more substantial than an IP address—are underwritten by a decision made years before, concerning the interaction of host machines at the network layer.
We needn't go quite that deep to get to a level where the design of a particular technical system winds up inscribing some set of values in the world.
Imagine that a large American company—say, an automobile manufacturer—adopts a requirement that its employees carry RFID-tagged personal identification. After a lengthy acquisition process, the company selects a vendor to provide the ID cards and their associated paraphernalia—card encoders and readers, management software, and the like.
As it happens, this particular identification system has been designed to be as flexible and generic as possible, so as to appeal to the largest pool of potential adopters. Its designers have therefore provided it with the ability to encode a wide range of attributes about a card holder—ethnicity, sex, age, and dozens of others. Although the automotive company itself never uses these fields, every card carried nevertheless has the technical ability to record such facts about its bearer.
And then suppose that—largely as a consequence of the automobile manufacturer's successful and public large-scale roll-out of the system—this identification system is adopted by a wide variety of other institutions, private and public. In fact, with minor modifications, it's embraced as the standard driver's license schema by a number of states. And because the various state DMVs collect such data, and the ID-generation system affords them the technical ability to do so, the new licenses wind up inscribed with machine-readable data about the bearer's sex, height, weight and other physical characteristics, ethnicity....
If you're having a hard time swallowing this set-up, consider that history is chock-full of situations where some convention originally developed for one application was adopted as a de facto standard elsewhere. For our purposes, the prime example is the Social Security number, which was never supposed to be a national identity number—in fact, it was precisely this fear that nearly torpedoed its adoption, in 1936.
By 1961, however, when the Internal Revenue Service adopted the Social Security number as a unique identifier, such fears had apparently faded. At present, institutions both public (the armed forces) and private (banks, hospitals, universities) routinely use the SSN in place of their own numeric identification standards. So there's ample justification not to be terribly impressed by protestations that things like this "could never happen."
We see that a structural decision made for business purposes—i.e., the ability given each card to record a lengthy list of attributes about the person carrying it—eventually winds up providing the state with an identity card schema that reflects those attributes, which it can then compel citizens to carry. What's more, private parties equipped with a standard, off-the-shelf reader now have the ability to detect such attributes, and program other, interlinked systems to respond to them.
Closing the loop, what happens when a building's owners decide that they'd rather not have people of a given age group or ethnicity on the premises? What happens if such a lock-out setting is enabled, even temporarily and accidentally?*
* It's worth noting, in this context, that the fundamentalist-Christian putsch in Margaret Atwood's 1986 The Handmaid's Tale is at least in part accomplished by first installing a ubiquitous, nationwide banking network and then locking out all users whose profiles identify them as female.
No single choice in this chain, until the very last, was made with anything but the proverbial good intentions. The logic of each seemed reasonable, even unassailable, at the time it was made. But the clear result is that now the world has been provisioned with a system capable of the worst sort of discriminatory exclusion, and doing it all cold-bloodedly, at the level of its architecture.
Such decisions are essentially uncontestable. In this situation, the person denied access has no effective recourse in real time—such options as do exist take time and effort to enact. Even if we are eventually able to challenge the terms of the situation—whether by appealing to a human attendant who happens to be standing by, hacking into the system ourselves, complaining to the ACLU, or mounting a class-action lawsuit—the burden of time and energy invested in such activism falls squarely on our own shoulders.
This stands in for the many situations in which the deep design of ubiquitous systems will shape the choices available to us in day-to-day life, in ways both subtle and less so. It's easy to imagine being denied access to some accommodation, for example, because of some machine-rendered judgment as to our suitability—and given a robustly interconnected everyware, that judgment may well hinge on something we did far away in both space and time from the scene of the exclusion.
Of course, we may never know just what triggered such events. In the case of our inherent attributes, maybe it's nothing we "did" at all. All we'll be able to guess is that we conformed to some profile, or violated the nominal contours of some other.
One immediate objection is that no sane society would knowingly deploy something like this—and we'll accept this point of view for the sake of argument, although again history gives us plenty of room for doubt. But what if segregation and similar unpleasant outcomes are "merely" an unintended consequence of unrelated, technical decisions? Once a technical system is in place, it has its own logic and momentum; as we've seen, the things that can be done with such systems, especially when interconnected, often have little to do with anything the makers imagined.*
* As security expert Bruce Schneier says, "I think [a vendor of RFID security systems] understands this, and is encouraging use of its card everywhere: at sports arenas, power plants, even office buildings. This is just the sort of mission creep that moves us ever closer to a 'show me your papers' society."
We can only hope that those engineering ubiquitous systems weigh their decisions with the same consciousness of repercussion reflected in the design of the original Internet protocols. The downstream consequences of even the least significant-seeming architectural decision could turn out to be considerable—and unpleasant.
Everyware produces a wide belt of circumstances where human agency, judgment, and will are progressively supplanted by compliance with external, frequently algorithmically-applied, standards and norms.
One of the most attractive prospects of an ambient informatics is that information itself becomes freely available, at any place and any time. We can almost literally pull facts right out of the air, as and when needed, performing feats of knowledge and recall that people of any other era would rightly have regarded as prodigious.
But we're also likely to trade away some things we already know how to do. As Marshall McLuhan taught us, in his 1964 Understanding Media, "every extension is [also] an amputation." By this he meant that when we rely on technical systems to ameliorate the burdens of everyday life, we invariably allow our organic faculties to atrophy to a corresponding degree.
The faculty in question begins to erode, in a kind of willed surrender. Elevators allow us to live and work hundreds of feet into the air, but we can no longer climb even a few flights without becoming winded. Cars extend the radius of our travels by many times, but it becomes automatic to hop into one if we're planning to travel any further than the corner store—so much so that entire subdivisions are built around such assumptions, and once again we find behavior constrained at the level of architecture.
An example that may be more relevant to our present inquiry concerns phone numbers. Before speed dial, before mobile phones, you committed to memory the numbers of those closest to you. Now such mnemotechnical systems permit us to store these numbers in memory—an extension that, it is undeniable, allows us to retain many more numbers than would otherwise have been the case. But if I ask you your best friend's phone number? Or that of the local pizza place?
This is one danger of coming to rely too heavily, or too intimately, on ubiquitous technology. But unlike literal amputations, which tend to be pretty noticeable, these things only become visible in the default of the technical system in question. The consequences of an overreliance on extensions can clearly be seen in the aftermath of Hurricane Katrina, in which we saw that New Orleans' evacuation plan was predicated on the automobility of the city's entire population. When the storm revealed that assumption to have been unjustified, to say the least, we saw the stunning force with which a previously obscured amputation can suddenly breach the surface of awareness. McLuhan saw an uneasy, subliminal consciousness of what has been traded away at the root of the "never-explained numbness that each extension brings about in the individual and society."
"Amputation," though, implies that a faculty had at least once existed. But it's also the case that the presence of an ambient informatics might interfere in learning certain skills to begin with. Before I learned to drive, for example, I couldn't have given you any but the vaguest sort of directions. It wasn't until I acquired the fused haptic and cognitive experience of driving from origin to destination—the memory of making the decision to turn here, in other words, fused to the feeling of turning the wheel to make it so, and the perception of the consequences—that I laid down a mental map of the world in sufficient detail to permit me to convey that information to anyone else.
Children who grow up using everyware, told always where they are and how to get where they are going, may never acquire the same fluency. Able to rely on paraphernalia like personal location icons, route designators, and turn indicators, whether they will ever learn the rudiments of navigation—either by algorithm or by landmark or by dead reckoning—is open to question. Even memorizing street names might prove to be an amusingly antiquated demonstration of pointless skill, like knowing the number of pecks in a bushel.
If a reliance on ubiquitous systems robs us of some of our faculties, it may also cause us to lose faith in the ones that remain. We will find that everyware is subtly normative, even prescriptive—and, again, this will be something that is engineered into it at a deep level.
Take voice-recognition interfaces, for example. Any such system, no matter how sophisticated, will inscribe notions of a nominal voice profile that a speaker must match in order for his or her utterances to be recognized. Spoken commands made around a mouthful of coffee—or with a strong accent—may not be understood. It may turn out that ubiquitous voice recognition has more power to enforce crisp enunciation than any elocution teacher ever dreamed of wielding.
This is problematic in two ways. First, of course, is the pragmatic concern that it forces users to focus on tool and not task, and thus violates every principle of an encalming pervasive technology. But more seriously, we probably weren't looking to our household management system for speech lessons. Why should we mold something as intimate, and as constitutive of personality, as the way we speak around some normative profile encoded into the systems around us?
There are still more insidious ways in which we can feel pressured to conform to technically-derived models of behavior. Some of the most unsettling are presented by biometric monitors such as BodyMedia's SenseWear patch.
BodyMedia's aim, as a corporate tagline suggests, is to "collect, process, and present" biometric information, with the strong implication that the information can and will be acted upon. This is, no doubt, a potential boon to millions of the sick, the infirm and the "worried well." But it's also a notion with other reverberations in a society that, at least for the moment, seems hell-bent on holding its members to ever-stricter ideals of form and fitness. For many of us, a product that retrieves biometric data painlessly, coupled to sophisticated visualization software that makes such data not merely visible but readily actionable, is going to be irresistible.
Notice how readily the conversation tends to drift onto technical grounds, though. Simply as a consequence of having the necessary tools available, we've begun to recast the body as a source of data rather than the seat of identity (let alone the soul). The problems therefore become ones of ensuring capture fidelity or interpreting the result, and not, say, how it feels to know that your blood pressure spikes whenever your spouse gets home from work. We forget to ask ourselves whether we feel OK about the way we look; we learn to override the wisdom and perspective that might counsel us that the danger posed by an occasional bacchanal is insignificant. We only notice how far our blood glucose levels have departed from the normative curve over the last 48 hours.
This is not to park such issues at BodyMedia's door alone. The same concerns could of course be raised about all of the systems increasingly deployed throughout our lives. The more deeply these systems infiltrate the decisions we make every day, the more they appear to call on all the powers of insight and inference implied by a relational technology, the less we may come to trust the evidence of our own senses.
In the event of a default, fixing a locus of control may be effectively impossible.
Largely as a consequence of their complex and densely interwoven nature, in the event of a breakdown in ubiquitous systems, it may not be possible to figure out where something's gone wrong. Even expert technicians may find themselves unable to determine which component or subsystem is responsible for the default.
Let's consider the example of a "smart" household-management system, to which all of the local heating, lighting, ventilation, and plumbing infrastructure has been coupled. In the hope of striking a balance between comfort and economy, you've set its winter mode to lower any room's temperature to 60 degrees Fahrenheit when that room has been empty for ten minutes or more, but to maintain it at 68 otherwise.
When the heat fails to come on in one room or another, which of the interlinked systems involved has broken down? Is it a purely mechanical problem with the heater itself, the kind of thing you'd call a plumber for? Is it a hardware issue—say, a failure of the room's motion detector to properly register your presence? Maybe the management interface has locked up or crashed entirely. It's always possible that your settings file has become corrupt. Or perhaps these systems have between them gotten into some kind of strange feedback loop.
In the latter case particularly—where the problem may indeed not reside in any one place at all, but rather arises out of the complex interaction of independent parts—resolving the issue is going to present unusual difficulties. Diagnosis of simple defaults in ubiquitous systems will likely prove to be inordinately time-consuming by current standards, but systems that display emergent behavior may confound diagnosis entirely. Literally the only solution may be to power everything down and restart components one by one, in various combinations, until a workable and stable configuration is once again reached.
This will mean rebooting the car, or the kitchen, or your favorite sweater, maybe once and maybe several times, until every system that needs to do so has recognized the others and basic functionality has been restored to them all. And even then, of course, the interaction of their normal functioning may entrain the same breakdown. Especially when you consider how dependent on everyware we are likely to become, the prospect of having to cut through such a Gordian tangle of interconnected parts just to figure out which one has broken down is somewhat less than charming.
Users will understand their transactions with everyware to be essentially social in nature.
There's good reason to believe that users will understand their transactions with ubiquitous systems to be essentially social in nature, whether consciously or otherwise—and this will be true even if there is only one human party to a given interaction.
Norbert Wiener, the "father of cybernetics," had already intuited something of this in his 1950 book, The Human Use of Human Beings: according to Wiener, when confronted with cybernetic machines, human beings found themselves behaving as if the systems possessed agency.
This early insight was confirmed and extended in the pioneering work of Byron Reeves and Clifford Nass, published in 1996 as The Media Equation. In an extensive series of studies, Reeves and Nass found that people treat computers more like other people than like anything else—that, in their words, computers "are close enough to human that they encourage social responses." (The emphasis is present in the original.) We'll flatter a computer, or try wheedling it into doing something we want, or insult it when it doesn't—even if, intellectually, we're perfectly aware how absurd this all is.
We also seem to have an easier time dealing with computers when they, in turn, treat us politely—when they apologize for interrupting our workflow or otherwise acknowledge the back-and-forth nature of communication in ways similar to those our human interlocutors might use. Reeves and Nass urge the designers of technical systems, therefore, to attend closely to the lessons we all learned in kindergarten and engineer their creations to observe at least the rudiments of interpersonal etiquette.
Past attempts to incorporate these findings into the design of technical systems, while invariably well-intentioned, have been disappointing. From Clippy, Microsoft's widely-loathed "Office Assistant" ("It looks like you're writing a letter"), to the screens of Japan Railways' ticket machines, which display an animated hostess bowing to the purchaser at the completion of each transaction, none of the various social interfaces have succeeded in doing anything more than reminding users of just how stilted and artificial the interaction is. Even Citibank's ATMs merely sound disconcerting, like some miserly cousin of HAL 9000, when they use the first person in apologizing for downtime or other violations of user expectations ("I'm sorry—I can only dispense cash in multiples of $20 right now.")
But genuinely internalizing the Media Equation insights will be critical for the designers of ubiquitous systems. Some are directly relevant to the attempted evocation of seamlessness ("Rule: Users will respond to the same voice on different computers as if it were the same social actor"), while others speak to the role of affect in the ubiquitous experience—notably, the authors' finding that the timing of interactions plays a critical role in shaping their interpretations, just as much as their content does.* Coming to grips with what Reeves and Nass are trying to tell us will help designers accept the notion that people will more often understand their interactions with everyware to be interpersonal in nature than technical.
* The example Reeves and Nass offer is how we react when praise is delayed by a critical few beats in response to a query—i.e., not well.
These findings take on new importance when people encounter a technology that, by design, borders on the imperceptible. When there are fewer visible cues as to a system's exact nature, we're even more likely to mistake it for something capable of reciprocating our feelings—and we will be that much more hurt if it does not.
Users will tend to blame themselves for defaults in everyware.
When ubiquitous systems break down, as they surely must from time to time, how will we react?
We've seen that it may be difficult to determine the origin of a problem, given a densely interwoven mesh of systems both local and remote—that emergent behavior arising in such a mesh means that there mightn't even be a single point of failure, in the classical sense.
We've also seen that users are likely to understand their interactions with everyware as primarily social in nature. Reeves and Nass tell us, further, that we generally treat informatic systems as though they had personalities, complete with agency—in other words, that we'll routinely fail to see through a system to the choices of its designers. As a consequence, we show a marked reluctance to ascribe blame to the systems themselves when things go wrong: we don't want to hurt their feelings (!). Even in the depths of this narcissistic age, we're still, apparently, gracious and forgiving in our dealings with information technology.
Given these two boundary constraints, the most obvious option remaining open is for users to blame themselves. We can expect that this will in fact be the most frequent user response to defaults in the ubiquitous and pervasive systems around them.
I can only cite my own experiences in support of this idea. As an information architect and user-experience consultant, I've helped to develop more than fifty enterprise-scale Web sites over the last seven years, as well as a smaller number of kiosk-based and mobile-phone interfaces. My work frequently involves the assessment of a client's existing site—observing real users in their interactions with it and attending closely to their subjective responses. And one thing that I've seen with a fair, if disheartening, degree of regularity in this work is that users blame themselves when they can't get a site to work properly—and this is more true the less technically sophisticated the user is.
This very much despite the fact that the site in question may simply be wretchedly designed. People will say "I can't figure this out," "I'm too stupid," or "I get confused so easily," far more often than they'll venture an opinion that the site's designers or developers have done an incompetent job. Especially as everyware subtends an ever-larger population of nonspecialists—everyday people without any particular interest in the workings of the information technology they rely on—we can expect to see similar responses grow more and more common in reaction to breakdowns and defaults.
And this is the ultimate "next larger context" for our considerations of everyware. If we wish to design ubiquitous systems to support people in all the richness and idiosyncrasy of their lives, that address the complications of those lives without introducing new ones, we should bear in mind how crushingly often our mistakes will come to haunt not us but the people on whose behalf we're supposed to be acting.
But who, really, is this mysterious "we" I keep talking about? Up until this point, I've spoken as though responsibility for determining the shape of the ubiquitous future is general in the world and in the audience for these words as well—but surely not everyone reading this book will be able to exert an identical amount of influence on that shape. As we head into the next section, then, we'll consider these questions with a greater degree of rigor: Who gets to speak for users? And just who will decide what kind of everyware we're to be furnished with?