Everyware: The Dawning Age of Ubiquitous Computing - Adam Greenfield (2006)

Section 7. How Might We Safeguard Our Prerogatives in an Everyware World?

By now, our picture is essentially complete. We have a reasonably comprehensive understanding of the nature of ubiquitous computing and the forces involved in determining that nature.

How can we, as designers, users, and consumers, ensure that everyware contains provisions preserving our quality of life and safeguarding our fundamental prerogatives?

Thesis 70

It will not be sufficient simply to say, "First, do no harm."

We've agreed that, in order to protect the interests of everyone involved, it would be wise for us to establish some general principles guiding the ethical design and deployment of ubiquitous technology.

The most essential principle is, of course, first, do no harm. If everyone contemplating the development of everyware could be relied upon to take this simple idea to heart, thoughtfully and with compassion, there would be very little need to enunciate any of the following.

There are difficulties with such a laissez-faire approach, though. For one thing, it leaves entirely too much unspoken as to what constitutes harm, as to who is at risk, as to what the likely consequences of failure would be. It assumes that everyone developing everyware will do so in complete good faith and will always esteem the abstract-seeming needs of users more highly than market share, the profit motive, or the prerogatives of total information awareness. And, even where developers can be relied upon to act in good faith, it's simply not specific enough to constitute practically useful guidance.

The next best thing, then, is to develop a strategy for ethical development that does take these factors into account—something that spells out the issues in sufficient detail to be of use to developers, that strikes a balance between their needs and those of users, and that incentivizes compliance rather than punishing noncompliance.

How might we go about designing such a strategy? Let's consider the fundamental nature of the challenge before us one last time, and with that fresh in mind, articulate a framework that should help us develop wiser, more useful, and more humane instantiations of everyware.

Thesis 71

We're not very good at doing "smart" yet, and we may never be.

After 230 pages in which we've explored the vast and sprawling terrain of everyware in a fair degree of detail, perhaps we would be safe in venturing some guesses about the deeper nature of its challenge.

At root, I see it this way: as a civilization, our production of high-technological artifacts does not yet display anything like the degree of insight, refinement and robustness that toolmakers, furniture artisans, and craftspeople have developed over the thousands of years of their collective endeavor. Our business practices and development methodologies, the complexity of our technology and even the intellectual frameworks we bring to the task, militate against our being able to do so.

Nor have we so far been able to design systems capable of producing inferences about behavior nearly as accurate as those formed in a split-second glance by just about any adult human being.

In other words, we simply don't do "smart" very well yet, and there are good reasons to believe that we may never.

With regard to the tools we build, compare one of the most accomplished fruits of our high technology, Apple's iPod, with just about any piece of furniture—say, an Eames Aluminum Group lounge chair.

The iPod can fail in many more ways than the chair can, yielding to anything from a cracked case to the exhaustion of its battery to a corruption in the software that drives it. By comparison, just about the only way the chair can truly fail is to suffer some catastrophic structural degradation that leaves it unable to support the weight of an occupant.

Nobody needs to be told how to use the lounge chair. "Users" of any age, background, or degree of sophistication can immediately comprehend it, take it in, in almost all of its details, at a single glance. It is self-revealing to the point of transparency. The same can be said of most domestic furniture: you walk on a floor, lie on a bed, put books and DVDs and tchotchkes on shelves, laptops and flowers and dinner on tables. Did anyone ever have to tell you this?

The same cannot be said of the iPod—widely regarded as one of the best thought-out and most elegant digital artifacts ever, demonstrating market-leading insight into users and what they want to do with the things they buy. To someone encountering an iPod for the very first time, it's not obvious what it does or how to get it to do that. It may not even be obvious how to turn the thing on.

You needn't configure the chair, or set its preferences, or worry about compatible file formats. You can take it out of one room or house and drop it into another, and it still works exactly the same way as it did before, with no adjustment. It never reminds you that a new version of its firmware is available and that certain of its features will not be available until you choose to upgrade. As much as I love my iPod, and I do, none of these statements is true of it.

Many particulars of the chair's form and structure result from a long history of incremental improvements, though of course it doesn't hurt that it was designed by a pair of geniuses. It is very well adapted to everyday life, and unless this particular chair affronts your aesthetic sense, it is likely to provide you with all three of the classical Vitruvian virtues of firmitas, utilitas, and venustas: durability, utility, and delight. The iPod, also designed by a genius, is undeniably delightful, but it falls short on the other two scales. Its utility has been compromised to some degree by "feature creep": As a combination music player, address book, calendar, image viewer, and video device, it now does more things than its elegantly simple interface can handle gracefully.

But most digital tools people use regularly are not nearly as refined as the iPod. As technology companies go, Apple devotes an exemplary and entirely atypical amount of time, money, and attention to the user experience, and even so, it still gets something wrong from time to time.

Nor, of course, is the issue limited to MP3 players. Digital products and services of all sorts suffer from the same inattention to detail, inability to model user assumptions correctly, and disinclination to perceive interactions from that user's point of view. Even today, you'll occasionally stumble across a high-profile Web site whose navigation seems intentionally designed to perplex and confound. How much worse will it be when the interface we have to puzzle out isn't that of a Web site or an MP3 player, but that of the toilet, the environment-control system, the entire house?

We've come a long, long way from the simple and profound pleasures of relaxing into a chair, wrapping our palms around the warm curve of a mug, or flicking on a lamp when the dusk begins to claim the fading day. Even if we've by now mostly overcome the legendary blinking-12:00 problem that used to afflict so many of us in our dealings with VCRs, it remains emblematic of the kind of thing that happens—and will continue to happen—routinely when complex technology pervades everyday life.

And this only gets more problematic because, as we've seen, so many applications of everyware rely on machine inference, on estimates about higher-level user behavior derived from patterns observed in the flow of data. A perfect example is the "smart coffee cup" Tim Kindberg and Armando Fox refer to in their 2002 article "System Software for Ubiquitous Computing," which "serves as a coffee cup in the usual way, but also contains sensing, processing and networking elements that let it communicate its state (full or empty, held or put down). So, the cup can give colleagues a hint about the state of the cup's owner."

But the word "hint" is well-chosen here, because that's really all the cup will be able to communicate. It may well be that a full mug on my desk implies that I am also in the room, but this is not always going to be the case, and any system that correlates the two facts had better do so pretty loosely. Products and services based on such pattern-recognition already exist in the world—I think of Amazon's "collaborative filtering"–driven recommendation engine—but for the most part, their designers are only now beginning to recognize that they have significantly underestimated the difficulty of deriving meaning from those patterns. The better part of my Amazon recommendations turn out to be utterly worthless—and of all commercial pattern-recognition systems, that's among those with the largest pools of data to draw on.
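To make that looseness concrete, here is a minimal sketch, in Python, of the kind of graded hint such a cup might offer. Every field name, threshold, and weight here is an invention for the purpose of illustration, not anything drawn from Kindberg and Fox's actual system; the point is only that the output is a shaded confidence, never a flat assertion of presence.

```python
# A minimal, hypothetical sketch of a "hint" derived from cup-sensor state.
# All fields and weights are illustrative assumptions, not a real system.

from dataclasses import dataclass
from typing import Optional


@dataclass
class CupReading:
    """State reported by an imagined networked coffee cup."""
    is_full: bool
    is_held: bool
    minutes_since_last_touch: float


def owner_presence_hint(reading: Optional[CupReading]) -> float:
    """Return a loose confidence (0.0-1.0) that the cup's owner is at their desk.

    Deliberately a graded hint rather than a yes/no answer: a full, recently
    touched cup suggests presence, but never proves it.
    """
    if reading is None:
        return 0.5  # no data: say nothing either way
    confidence = 0.5
    if reading.is_held:
        confidence += 0.4
    elif reading.is_full and reading.minutes_since_last_touch < 10:
        confidence += 0.2
    elif reading.minutes_since_last_touch > 120:
        confidence -= 0.3
    return max(0.0, min(1.0, confidence))


if __name__ == "__main__":
    # A full cup, set down five minutes ago: suggestive, far from conclusive.
    print(owner_presence_hint(CupReading(is_full=True, is_held=False,
                                         minutes_since_last_touch=5.0)))
```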

Lest we forget: "simple" is hard. In fact, Kindberg and Fox remind us that "[s]ome problems routinely put forward [in ubicomp] are actually AI-hard"—that is, as challenging as the creation of an artificial human-level intelligence. The example they offer—whether a technical system can accurately determine whether a meeting is in session in a given conference room, based on the available indicators—could be supplemented with many another example. Knowing when a loved one's feelings have been hurt, when a baby is hungry, when confrontation may prove a better strategy than conciliation: These are things that we know in an instant, but that not even the most sensitive pattern-detection engine can determine with any consistency at all.

So there's a certain hubris in daring to intervene, clumsily, in situations that already work reasonably well, and still more in labeling that intervention "smart." If we want to consistently and reliably build ubiquitous systems that do share something of the nature of our finest tools, that do support the finest that is in us, we really will need some help.

Thesis 72

Even acknowledging their contingency, some explicit set of principles would be highly useful to developers and users both.

Almost all of the available literature on ubiquitous computing is academic. That is, it emerges from the methods and viewpoints of applied science as it is practiced in the collective institution of higher education.

As part of their immersion in the scientific method, academics are trained to be descriptive. A proper academic paper in the sciences is neither proscriptive nor prescriptive; it expresses no opinion about what should or should not happen. Much of the discourse around ubiquitous computing has to date been of the descriptive variety: This is a system we contemplate engineering; this is how far we were able to get with it; this is where our assumptions broke down.

But however useful such descriptive methodologies are, they're not particularly well suited to discussions of what ought to be (or ought not to be) built.

This is not to say that such discussions do not take place—of course they do, whether in person over a cold beer, on electronic mailing lists, or in any of the fora where people working in the field gather. The debates I've been lucky enough to witness are learned, wise, contentious, impassioned, occasionally hysterically funny...but they rarely seem to show up in the literature, except as traces. The realism and the critical perspective so often vividly present in these discussions are lost to the record, all but invisible to anyone who only knows ubiquitous computing through conference proceedings and published work.

There have been attempts to return this perspective to written discussions of ubiquitous systems, some more successful than others. Thinkers as varied as the sociologist and anthropologist Anne Galloway, the industrial design provocateurs Dunne & Raby, and symposiarch John Thackara of the Doors of Perception conferences have all considered the question of pervasive computing from a critical perspective. I read Paul Dourish's Where the Action Is, particularly, as a passionate call to strip away the layers and layers of abstraction that so often prevent computing from benefiting the people it is intended to serve, people whose choices are both limited and given meaning by their being-in-the-world. But even this most literate of ubicomp writings is not enough—or is, at least, insufficiently explicit to help the working designer.

And that really is the issue: The working designer may not have the inclination, and definitely does not have the time, to trawl Heidegger for insight into the system they are bringing into being. Anybody working under the pressures and constraints of contemporary technology-development practice will need relatively clear-cut principles to abide by and to wield in discussions with the other members of their team.

Moreover, such guidelines would be of clear utility to those procuring and using everyware. If there is a compact, straightforward, and widely agreed-upon set of guidelines, then a given system's compliance with them could be verified and certified for all to see by something analogous to an interoperability mark. We could trust, in encountering such a system, that every practical measure had been taken to secure the maintenance or extension of our prerogatives.

This is just what all of our explorations have been building toward. After considering its definition, its origins, its likely implications, and the timing of its arrival, we are now ready to articulate five principles for the ethical development of everyware, even as we acknowledge that any such set of principles is bound to be contingent, provisional, and incomplete at best.

One final note: While these principles do aim to provide both developers and users with a useful degree of clarity, they do not spell solutions out in detail. Given the early stage of everyware's evolution, and especially everything we've learned about the futility of evaluating a system when it's been decontextualized and stripped of its specific referents to the real world, the principles focus not on how to achieve a given set of ends, but on what ends we should be pursuing in the first place.

Thesis 73

Everyware must default to harmlessness.

The first of our principles concerns what happens when ubiquitous systems fail. What happens when a critical percentage of sensors short out, when the building's active lateral bracing breaks down, when weather conditions disrupt the tenuous wireless connection? Or what if there's a blackout?

"Graceful degradation" is a term used in engineering to express the ideal that if a system fails, if at all possible it should fail gently in preference to catastrophically; functionality should be lost progressively, not all at once. A Web browser might be unable to apply the proper style sheet to a site's text, but it will still serve you with the unstyled text, instead of leaving you gazing at a blank screen; if your car's ABS module goes out, you lose its assistance in autopumping the brakes ten times a second, but you can still press down on the brake pedal in order to slow the car.

Graceful degradation is nice, but it doesn't go nearly far enough for our purposes. Given the assumption of responsibility inherent in everyware, we must go a good deal further. Ubiquitous systems must default to a mode that ensures users' physical, psychic, and financial safety.
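What "defaulting to harmlessness" might amount to in the most concrete possible terms is sketched below, for a purely hypothetical networked door controller: any fault in sensing or connectivity collapses to the safest available state rather than to whatever state the failure happens to leave behind. The states, checks, and names are illustrative assumptions, not a reference design.

```python
# A minimal, hypothetical sketch of "default to harmlessness" for a
# networked door controller. States and checks are illustrative only.

from enum import Enum, auto


class DoorState(Enum):
    LOCKED = auto()
    UNLOCKED = auto()
    FREE_EGRESS = auto()   # mechanically openable from inside, always


def resolve_door_state(network_ok: bool, sensor_ok: bool,
                       requested: DoorState) -> DoorState:
    """Honor the requested state only when the system can vouch for itself."""
    if not (network_ok and sensor_ok):
        # Degraded operation: ignore the request, fall back to the harmless default.
        return DoorState.FREE_EGRESS
    return requested


if __name__ == "__main__":
    # During a blackout or a dropped connection, occupants can still get out.
    print(resolve_door_state(network_ok=False, sensor_ok=True,
                             requested=DoorState.LOCKED))
```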

Note that this is not an injunction to keep subjects safe at all times: That is as ridiculous as it would be undesirable. It's simply, rather, a strong suggestion that when everyware breaks down—as it surely will from time to time, just like every other technical system that humanity has ever imagined—it should do so in a way that safeguards the people relying on it.

What precisely "safety" means will obviously vary with place and time. Even as regards physical safety alone, in the United States, we find ourselves in a highly risk-averse era, in which public fear and litigiousness place real limits on what can be proposed. (A playground surface that no German would think twice about letting their children frolic on simply wouldn't fly in the States, and I sometimes wonder what our media would do to fill airtime were it not for flesh-eating bacteria, bloodthirsty sharks, missing blonde women, and al-Qaida sleeper cells.)

Coming to agreement as to what constitutes psychic and financial safety is probably more culture-dependent still. So it's entirely possible that working out a definition of safety broad enough to be shared will leave few parties wholly content.

But the ubiquitous systems we're talking about engage the most sensitive things in our lives—our bodies, our bank accounts, our very identities—and we should demand that a commensurately high level of protection be afforded these things.

Thesis 74

Everyware must be self-disclosing.

The second principle of ethical development concerns provisions to notify us when we are in the presence of some informatic system, however intangible or imperceptible it otherwise may be.

We've seen that everyware is hard to see for a variety of reasons, some circumstantial and some intentional. Information processing can be embedded in mundane objects, secreted away in architectural surfaces, even diffused into behavior. And as much as this may serve to encalm, it also lends itself to too many scenarios in which personal information, including that of the most intimate sort, can be collected without your awareness, let alone your consent.

Given the degree to which ubiquitous systems will be interconnected, information once collected can easily, even inadvertently, be conveyed to parties unknown, operating outside the immediate context.

This is an unacceptable infringement on your right of self-determination. Simply put, you should know what kinds of information-gathering activities are transpiring in a given place, what specific types of information are being collected, and by whom and for what purpose. Finally, you should be told how and in what ways the information-gathering system at hand is connected to others, even if just as a general notification that the system is part of the global net.

We might express such an imperative like this: Ubiquitous systems must contain provisions for immediate and transparent querying of their ownership, use, and capabilities.

Everyware must, in other words, be self-disclosing. Such disclosures ensure that you are empowered to make informed decisions as to the level of exposure you wish to entertain.

So, for example, if the flooring in eldercare housing is designed to register impacts, it should say so, as well as specifying the threshold of force necessary to trigger an alert. If the flooring does register a fall, what is supposed to happen? If the flooring is connected in some way to a local hospital or ambulance dispatcher, which hospital is it? Even in such an apparently benign implementation of everyware—and maybe even especially in such cases—the choices made by designers should always be available for inspection, if not modification.
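One way to imagine honoring this imperative is a machine-readable disclosure record that a system carries and that any nearby device or person can query. The sketch below is purely illustrative: the schema, the field names, and the institutions named in the example are invented for the purpose, not an existing standard or anyone's shipping product.

```python
# A minimal, hypothetical sketch of machine-readable self-disclosure.
# Schema and example values are invented illustrations, not a standard.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Disclosure:
    operator: str                   # who owns and runs the system
    purpose: str                    # why it collects anything at all
    data_collected: List[str]       # specific types of information gathered
    shared_with: List[str] = field(default_factory=list)  # downstream parties
    networked: bool = False         # reachable from the wider net?


# Invented example: the impact-sensing eldercare flooring discussed above.
ELDERCARE_FLOOR = Disclosure(
    operator="Example Assisted Living LLC",
    purpose="Detect falls and summon help",
    data_collected=["impact events above a stated force threshold", "room ID"],
    shared_with=["on-call nursing station", "St. Example Hospital dispatch"],
    networked=True,
)


def describe(d: Disclosure) -> str:
    """Render the disclosure as the human-readable notice a resident could read."""
    return "\n".join([
        f"Operated by: {d.operator}",
        f"Purpose: {d.purpose}",
        f"Collects: {', '.join(d.data_collected)}",
        f"Shared with: {', '.join(d.shared_with) or 'no one'}",
        f"Connected to wider networks: {'yes' if d.networked else 'no'}",
    ])


if __name__ == "__main__":
    print(describe(ELDERCARE_FLOOR))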

None of this is to say that users should be confronted with a mire of useless detail. But seamlessness must be an optional mode of presentation, not a mandatory or inescapable one.

Less ominously, though, such disclosures also help us know when otherwise intangible services are available to us. When an otherwise unremarkable object affords some surprising functionality, or when a digital overlay of information about some place exists, we need to have some way of knowing these things that does not itself rely on digital mediation.

Design researcher Timo Arnall has developed a vocabulary of graphic icons that communicate ideas like these: a friendly, human-readable equivalent of the "service discovery layer" in Bluetooth that specifies what devices and services are locally available. Perhaps Arnall's icons could serve as the basis of a more general graphic language for ubiquitous systems—a set of signs that would eventually become as familiar as "information" or "bathroom," conveying vital ideas of the everyware age: "This object has invisible qualities," or "network dead zone."

Whether we use them to protect ourselves from intrusive information collection or to discover all the ways our new technology can be used, provisions for transparent self-disclosure on the part of ubiquitous systems will be of critical importance in helping us find ways to live around and with them. Such knowledge is the basis of any meaningful ability on our part to decide when and to what degree we wish to engage with everyware and when we would prefer not to.

Thesis 75

Everyware must be conservative of face.

Something too rarely considered by the designers of ubiquitous systems is how easily their ordinary operation can place a user's reputation and sense of dignity and worth at risk.

Thomas Disch illustrates this beautifully in his classic 1973 novel 334. The grimly futuristic Manhattan of 334 is a place whose entropic spiral is punctuated only by the transient joys of pills, commercial jingles, and empty sex. The world-weary residents of 334 East 13th Street survive under the aegis of a government welfare agency called MODICUM, a kind of Great Society program gone terminally sour.

In particular, 334's casual sketch of what would later be known as an Active Badge system hews close to this less-than-heroic theme. Disch shows us not the convenience of such a system, but how it might humiliate its human clients—in this case the aging, preoccupied hospital attendant Arnold Chapel. Embroiled in an illicit plot, Chapel has allowed himself to wander from his course, and is audibly corrected by the hospital's ubiquitous traffic control system:

"Arnold Chapel," a voice over the PA said. "Please return along 'K' corridor to 'K' elevator bank. Arnold Chapel, please return along 'K' corridor to 'K' elevator bank."
Obediently he reversed the cart and returned to 'K' elevator bank. His identification badge had cued the traffic control system. It had been years since the computer had had to correct him out loud.

All that was, in fact, necessary or desirable in this scenario was that the system return Chapel to his proper route. Is there any justification, therefore, for the broadcast of information embarrassing to him? Why humiliate, when adjustment is all that is mandated?

Of course, no system in the world can keep people from making fools of themselves. About all that we can properly ask for is that our technology be designed in such a way that it is conservative of face: that ubiquitous systems must not act in such a manner as would unduly embarrass or humiliate users, or expose them to ridicule or social opprobrium, in the course of normal operations.

The ramifications of such an imperative in a fully-developed everyware are surprisingly broad. With so many systems potentially able to provide the location of users in space and time, we've seen that finding people will become trivially easy. We also know that when facts about your location are gathered alongside other facts—who you are with, what time it is, what sorts of services happen to be available nearby—and subjected to data-mining operations, a relational system can begin to paint a picture of your behavior.

Whether or not this picture is accurate—and remember everything we said about the accuracy of machine inference—the revelation of such information can lead to awkward questions about our activities and intentions, the kind we'd rather not have to answer. Even if we don't happen to be doing anything "wrong," we will still naturally resent the idea that we should answer to anyone else for our choices.

Our concern here goes beyond information privacy per se, to the instinctual recognition that no human community can survive the total evaporation of its membrane of protective hypocrisy. We lie to each other all the time, we dissemble and hedge, and these face-saving mechanisms are critical to the coherence of our society.

So some degree of plausible deniability, including, above all, imprecision of location, is probably necessary to the psychic health of a given community, such that even (natural or machine-assisted) inferences about intention and conduct may be forestalled at will.

How might we be afforded such plausible deniability? In a paper on seamfulness, Ian MacColl, Matthew Chalmers, and their co-authors give us a hint. They describe an ultrasonic location system as "subject to error, leading to uncertainty about...position," and, as they recognized, this imprecision can within reasonable limits be a good thing. It can serve our ends, by giving anyone looking for you most of the information they need about where you are, but not a pinpoint granular location that might lend itself to unwelcome inference.

The degree to which location becomes problematic depends to some extent on which of two alternative strategies is adopted in presenting it. In a "pessimistic" presentation, only verifiably and redundantly known information is displayed, while an "optimistic" display includes possibles, values with a weaker claim on truth. The less parsimonious optimistic strategy obviously presents the specter of false positives, but if this is less than desirable in ordinary circumstances, in this context, a cloud of possible locations bracketing the true one might be just the thing we want. Still worse than the prospect of being nakedly accountable to an unseen, omnipresent network is being nakedly accountable to each other, at all times and places.
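In code, the pessimistic half of that strategy could be as simple as snapping a coordinate to a coarse grid before it is ever shared, and the optimistic half as offering a cloud of plausible candidates around the truth. The sketch below is an illustration only, with an arbitrarily chosen grid size; it is not MacColl and Chalmers's implementation.

```python
# A minimal, hypothetical sketch of deliberate imprecision in reported location.
# Grid size and coordinate handling are illustrative choices only.

import math
import random


def coarsen(lat: float, lon: float, cell_degrees: float = 0.01) -> tuple:
    """Pessimistic: report only the center of the coarse grid cell containing the point."""
    center = lambda v: math.floor(v / cell_degrees) * cell_degrees + cell_degrees / 2
    return (center(lat), center(lon))


def optimistic_candidates(lat: float, lon: float, n: int = 3,
                          spread: float = 0.01) -> list:
    """Optimistic: return a cloud of possible positions bracketing the true one."""
    return [(lat + random.uniform(-spread, spread),
             lon + random.uniform(-spread, spread)) for _ in range(n)]


if __name__ == "__main__":
    print(coarsen(40.7410, -73.9896))               # the neighborhood, not the doorstep
    print(optimistic_candidates(40.7410, -73.9896))  # several places you might be
```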

Some critics have insisted that there are, at least occasionally, legitimate social purposes invoked in using technology to shame. They point to the example of Korea's notorious "Dogshit Girl," a self-absorbed young lady whose fashion-accessory pet soiled a subway car; having made not the slightest effort to clean it up, she was immediately moblogged by angry onlookers. The pictures appeared online within minutes and throughout the national press after a few hours; according to the Korean press, her humiliation was so total that the young lady eventually withdrew from university.

The argument is that, had the technology not been in place to record her face and present it for all the world to see (and judge), she would have escaped accountability for her actions. There would have been no national furor to serve—ostensibly, anyway—as deterrent against future transgressions along the same lines.

As to whether hounding someone until she feels compelled to quit school and become a recluse can really be considered "accountability" for such a relatively minor infraction, well, thereof we must be silent. Whatever the merits of this particular case, though, there is no doubt that shame is occasionally as important to the coherence of a community as hypocrisy is in another context.

But we are not talking about doing away with shame. The issue at hand is preventing ubiquitous systems from presenting our actions to one another in too perfect a fidelity—in too high a resolution, as it were—and thereby keeping us from maintaining the beneficial illusions that allow us to live as a community. Where everyware contains the inherent potential to multiply the various border crossings that do so much to damage our trust and regard for one another, we must design it instead so that it affords us moments of amnesty. We must build ourselves safe harbors in which to hide from the organs of an accountability that otherwise tends toward the total.

Finally, as we've seen, there is the humiliation and damage to self-worth we experience when we simply can't figure out how to use a poorly designed technical system of any sort. Sadly, no principle or guideline—however strongly stated, however widely observed—can ever endow all the world's designers with equal measures of skill, diligence, and compassion. Nor could any guideline ensure that designers are afforded the time and space they require to work out the details of humane systems. What we can insist on, however, is that those tasked with the development of everyware be reminded of the degree to which our sense of ourselves rides on the choices they make.

Thesis 76

Everyware must be conservative of time.

One of the reasons that the Fukasawan vision of information processing dissolving in behavior is so alluring is that it promises to restore a little simplicity to our world. As a recent ethnographic study by Scott Mainwaring, Ken Anderson, and Michele Chang of Intel Research underscores, daily life in the developed world now exposes us to a multitude of physical and informational infrastructures, each of which requires some kind of token to mediate. Simply to get through the day, we carry keys, cash, credit cards, debit cards, transit passes, parking receipts, library cards, loyalty-program cards—and this list is anything but comprehensive.

Moreover, in the course of a single day we may use any or all of an extensive inventory of digital tools and devices, each of which has a different user interface, each of which behaves differently: music and video players, telephones, personal computers, cameras, cable and satellite television controllers, ATMs, household appliances, even vehicles.

Everyware, of course, promises to replace this unseemly shambles with a compact and intuitive complement of interface provisions, ones that require far less of our time, energy and attention to deal with. The appeal of this paradoxical vision—it might be called high complexity in the service of simplicity—should not be underestimated. But the inevitable flip-side of it, at least if our experience with other information technologies is an accurate guide, is that almost all users will face the prospect of wasted time and effort at one time or another.

Philip K. Dick, never one to overlook the all-too-human complications likely in any encounter with high technology, depicted more than one hapless protagonist wrestling with ornery or outright recalcitrant pervasive devices.

In (appropriately enough) Ubik, Joe Chip is threatened with a lawsuit by his front door:

The door refused to open. It said, "Five cents, please."

He searched his pockets. No more coins; nothing. "I'll pay you tomorrow," he told the door. Again he tried the knob. Again it remained locked tight. "What I pay you," he informed it, "is in the nature of a gratuity; I don't have to pay you."

"I think otherwise," the door said. "Look in the purchase contract you signed when you bought this [apartment]."

In his desk drawer he found the contract...Sure enough; payment to his door for opening and shutting constituted a mandatory fee. Not a tip.

"You discover I'm right," the door said. It sounded smug.

From the drawer beside the sink Joe Chip got a stainless steel knife; with it he began systematically to unscrew the bolt assembly of his apt's money-gulping door.

"I'll sue you," the door said as the first screw fell out.

Joe Chip said, "I've never been sued by a door before. But I guess I can live through it."

And this is just to get out of the house and on with his day. Self-important doors are probably not even the worst of it, either; this is the kind of moment we can see strewn through our days, like landmines in the meadows, upon the introduction of an incompetent ubiquitous technology. Accordingly, we should assert as a principle the idea that ubiquitous systems must not introduce undue complications into ordinary operations.

You should be able to open a window, place a book upon a shelf, or boil a kettle of water without being asked if you "really" want to do so, or having fine-grained control of the situation wrested away from you. You should not have to configure, manage, or monitor the behavior of a ubiquitous system intervening in these or similar situations—not, at least, after the first time you use it or bring it into some new context. Furthermore, in the absence of other information, the system's default assumption must be that you, as a competent adult, know and understand what you want to achieve and have accurately expressed that desire in your commands.

By the same token, wherever possible, a universal undo convention similar to the keyboard sequence "Ctrl-Z" should be afforded; "save states" or the equivalent must be rolling, continuous, and persistently accessible in a graceful and reasonably intuitive manner. If you want to undo a mistake, or return to an earlier stage in an articulated process, you should be able to specify how many steps or minutes' progress you'd like to efface.
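What a rolling, continuously accessible save state might amount to, in the barest terms, is sketched below: every change is recorded with a timestamp, and the user can roll back by a count of steps or by minutes. The class, its capacity, and the bathtub example are invented illustrations, not a proposal for any particular system.

```python
# A minimal, hypothetical sketch of a rolling undo history for a ubiquitous
# system. Capacity, state shape, and example values are illustrative only.

import time
from collections import deque
from typing import Any, Deque, Tuple


class RollingUndo:
    def __init__(self, capacity: int = 1000):
        # (timestamp, state) pairs; the oldest entries fall off automatically.
        self._history: Deque[Tuple[float, Any]] = deque(maxlen=capacity)

    def record(self, state: Any) -> None:
        self._history.append((time.time(), state))

    def undo_steps(self, steps: int) -> Any:
        """Return the state as it was `steps` changes ago."""
        for _ in range(min(steps, len(self._history) - 1)):
            self._history.pop()
        return self._history[-1][1]

    def undo_minutes(self, minutes: float) -> Any:
        """Return the most recent state older than `minutes` minutes."""
        cutoff = time.time() - minutes * 60
        while len(self._history) > 1 and self._history[-1][0] > cutoff:
            self._history.pop()
        return self._history[-1][1]


if __name__ == "__main__":
    bath = RollingUndo()
    for temperature in (30, 35, 40, 45):   # a networked tub warming up, say
        bath.record({"target_temp": temperature})
    print(bath.undo_steps(2))              # back to {"target_temp": 35}
```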

You shouldn't have to work three or four times as hard to achieve some utterly mundane effect (like drawing a bath, starting a car or sharing contact information with a new acquaintance) with everyware as you would have without its putative assistance. Nor should you be forced to spend more time fixing the mess resulting from some momentary slip in a sequence of interactions than the entire process should have taken in the first place.

Will this occasionally approach "AI-hard"? Probably. Nevertheless, we should insist on excluding ubiquitous systems from our everyday lives unless they are demonstrably more respectful of our time than information technologies have tended to be in the past.

Thesis 77

Everyware must be deniable.

Our last principle is perhaps the hardest to observe: Ubiquitous systems must offer users the ability to opt out, always and at any point.

You should have the ability to simply say "no," in other words. You should be able to shut down the ubiquitous systems you own and face no penalty other than being unable to take advantage of whatever benefits they offered in the first place. This means, of course, that realistic alternatives must exist.

If you still want to use an "old-fashioned" key to get into your house, and not have to have an RFID tag subcutaneously implanted in the fleshy part of your hand, well, you should be able to do that. If you want to pay cash for your purchases rather than tapping and going, you should be able to do that too. And if you want to stop your networked bathtub or running shoe or car in the middle of executing some sequence, so that you can take over control, there should be nothing to stand in your way.

In fact—and here is the deepest of all of the challenges these principles impose on developers and on societies—where the private sphere is concerned, you should be able to go about all the business of an adult life without ever once being compelled to engage the tendrils of some ubiquitous informatic system.

In public, where matters are obviously more complicated, you must at least be afforded the opportunity to avoid such tendrils. The mode of circumvention you're offered doesn't necessarily have to be pretty, but you should always be able to opt out, do so without incurring undue inconvenience, and above all without bringing suspicion onto yourself. At the absolute minimum, ubiquitous systems with surveillant capacity must announce themselves as such, from safely beyond their fields of operation, in such a way that you can effectively evade them.

The measure used to alert you needn't be anything more elaborate than the signs we already see in ATM lobbies, or anywhere else surveillance cameras are deployed, warning us that our image is about to be captured—but such measures must exist.

Better still is when the measures allowing us to choose alternative courses of action are themselves networked, persistently and remotely available. Media Lab researcher Tad Hirsch's Critical Cartography project is an excellent prototype of the kind of thing that will be required: it's a Web-based map of surveillance cameras in Manhattan, allowing those of us who would rather not be caught on video to plan journeys through the city that avoid the cameras' field of vision. (Hirsch's project also observes a few important provisions of our principle of self-disclosure: His application includes information about where cameras are pointed and who owns them.)

All of the wonderful things our ubiquitous technology will do for us—and here I'm not being sarcastic; I believe that some significant benefits await our adoption of this technology—will mean little if we don't, as individuals, have genuine power to evaluate its merits on our own terms and make decisions accordingly. We must see that everyware serves us, and when it does not, we must be afforded the ability to shut it down. Even in the unlikely event that every detail of its implementation is handled perfectly and in a manner consistent with our highest ambitions, a paradise without choice is no paradise at all.

Thesis 78

Measures aimed at securing our prerogatives via technical means will also appear.

It's not as if the people now developing ubiquitous systems are blind to the more problematic implications of their work—not all of them, anyway, and not by a long stretch. But perhaps unsurprisingly, when they think of means to address these implications, they tend to consider technical solutions first.

Consider the ethic that your image belongs to you—that in private space, anyway, you have the right to determine who is allowed to record that image and what is done with it. At the seventh annual Ubicomp conference, held in Tokyo in September 2005, a team from the Georgia Institute of Technology demonstrated an ingenious system that would uphold this ethic by defeating unwanted digital photography, whether overt or surreptitious.

By relying on the distinctive optical signature of the charge-coupled devices (CCDs) digital cameras are built around, the Georgia Tech system acquires any camera aimed its way in fractions of a second, and dazzles it with a precisely-calibrated flare of light. Such images as the camera manages to capture are blown out, utterly illegible. As demonstrated in Tokyo, it was both effective and inspiring.

Georgia Tech's demo seemed at first blush to be oriented less toward the individual's right to privacy than toward the needs of institutions attempting to secure themselves against digital observation—whether it might be Honda wanting to make sure that snaps of next year's Civic don't prematurely leak to the enthusiast press, or the Transportation Security Administration trying to thwart the casing of their arrangements at LAX. But it was nevertheless fairly evident that, should the system prove effective under real-world conditions, there was nothing in principle that would keep some equivalent from being deployed on a personal level.

This functions as a timely reminder that there are other ways to protect ourselves and our prerogatives from the less salutary impacts of ubiquitous technology than the guidelines contemplated here. There will always be technical means: various tools, hacks and fixes intended to secure our rights for us, from Dunne & Raby's protective art objects to the (notional) RFIDwasher, a keyfob-sized device that enables its users "to locate RFID tags and destroy them forever!" Some will argue that such material strategies are more efficient, more practical, or more likely to succeed than any assertion of professional ethics.

Thesis 79

Technical measures intended to secure our prerogatives may ignite an arms race or otherwise muddy the issue.

However clever the Georgia Tech system was as a proof of concept—and it made for an impressive demo—there were factors it was not able to account for. For example, it could not prevent photographers using digital SLR cameras (or, indeed, conventional, film-based cameras of any kind) from acquiring images. This was immediately pointed out by optics-savvy members of the audience and openly acknowledged by the designers.

If you were among those in the audience that day in Tokyo, you might have noticed that the discussion took a 90-degree turn at that point. It became one of measures and countermeasures, gambits and responses, ways to game the system and ways to bolster its effectiveness. Thirty seconds after the last echo of applause had faded from the room, we were already into the opening moments of a classic arms race.

This may well be how evolution works, but it has the unfortunate effect of accommodating instead of challenging the idea that, for example, someone has the right to take your image, on your property, without your knowledge or consent. It's a reframing of the discussion on ground that is potentially inimical to our concerns.

Admittedly, this was a presentation of a prototype system at an academic technology conference, not an Oxford Union debate on the ethics of image and representation in late capitalism. But isn't that just the point? Once we've made the decision to rely on an ecology of tools for our protection—tools made on our behalf, by those with the necessary technical expertise—we've let the chance to assert our own prerogatives slip away. An ethics will inevitably be inscribed in the design of such tools, but it needn't be ours or anything we'd even remotely consider endorsing. And once the initiative slips from our grasp, it's not likely to be returned to us for a very long time.

We know, too, that such coevolutionary spirals tend to stretch on without end. There's rarely, if ever, a permanent technical solution in cases like this: There are always bigger guns and thicker grades of armor, more insidious viruses and more effective security patches.

From my point of view, then, technical solutions to ethical challenges are themselves problematic. I'm not suggesting that we do without them entirely. I'm saying, rather, that technical measures and ethical guidelines ought to be seen as complementary strategies, most effective when brought to bear on the problem of everyware together. And that where we do adopt technical means to address the social, political, and psychological challenges of ubiquitous technology, that adoption must be understood by all to be without prejudice to the exercise of our ethical prerogatives.

Thesis 80

The principles we've enunciated can be meaningfully asserted through voluntary compliance.

One of the obvious difficulties with any set of principles such as the ones we've been discussing concerns the matter of compliance—or, looked at another way, enforcement.

Can developers working on everyware reasonably be expected to police themselves, to spend the extra time and effort necessary to ensure that the systems they produce do not harm or unduly embarrass us, waste our time, or otherwise infringe on our prerogatives?

Will such guidelines simply be bypassed, going unobserved for the usual gamut of reasons, from ignorance to incompetence to unscrupulousness? Or will any such self-policing approach be rendered irrelevant by governmental attempts to regulate everyware?

No response to the drawbacks of everyware will be anything close to perfect. Even if we could assume that all of the practical challenges posed by our embrace of ubiquitous systems were tractable, there will always be bad actors of one sort or another.

Given the almost unlimited potential of everyware to facilitate the collection of all sorts of information, the extreme subtlety with which ubiquitous systems can be deployed, and the notable propensity of certain parties—corporate, governmental—to indulge in overreaching information-gathering activities if once granted the technical wherewithal, I would be very surprised if we didn't see some highly abusive uses of this technology over the next few years. I don't think they can be stopped, any more than spammers, script kiddies, and Nigerian scam artists can be.

Without dismissing these perils in any way, I am actually less worried about them than about the degraded quality of life we are sure to experience if poorly designed everyware is foisted upon us. I believe that this latter set of challenges can be meaningfully addressed by collective, voluntary means, like the five principles offered in this book. If standards for the ethical and responsible development of everyware can be agreed upon by a visible cohort of developers, the onus will be on others to comply with them. Given an articulate and persuasive enough presentation of the reasoning behind the principles, those of us committed to upholding them might even find the momentum on our side.

If you think this scenario sounds unduly optimistic, recent technological history offers some support for it. Starting in 1998, a grassroots movement of independent developers demanding so-called "Web standards" forced the hand of industry giants like Microsoft and Netscape, and in not such a terribly long period of time, either.

Within a few years, any major browser you cared to download was compliant with the set of standards the activists had pushed for. The combination of structural and presentational techniques the so-called "standardistas" insisted on is now considered a benchmark of responsible Web development. By any measure, this is a very successful example of bottom-up pressure resulting in wholesale improvements to the shared technological environment.

The standardistas, it must be said, were on the right side of an emerging business calculus to begin with: by the time the movement came to prominence, it was already punitively expensive for developers to code six or seven different versions of a site simply so that it would render properly in all the incompatible browsers then popular. They also enjoyed the advantage of urging their changes on a relatively concentrated decision nexus, at least where the browser-makers were concerned.

And before we get too enthusiastic about this precedent, and what it may or may not imply for us, we should remind ourselves that ensuring that a given ubiquitous system respects our prerogatives will be many orders of magnitude more difficult than ascertaining a Web site's compliance with the relevant standards. The latter, after all, can be verified by running a site's source code through an automated validator. By contrast, we've seen how much room for interpretation there is in defining "undue complications," let alone in determining what might constitute "harm" or "embarrassment." The grey areas are legion compared to the simple, binary truths of Web standards: Either a site is coded in well-formed XHTML, or it is not.

Nevertheless, at its core, the story of Web standards is both inspiring and relevant to our concerns: the coordinated action of independent professionals and highly motivated, self-educated amateurs did change the course of an industry not particularly known for its flexibility. As a direct result, the browsers that the overwhelming majority of us use today are more powerful, the experience of using compliant Web sites is vastly improved, and untold economies have been realized by the developers of both. Rarely has any circumstance in information technology been quite so "win/win/win."

We've seen the various complications that attend technical solutions to the problems of everyware. And we also have abundant reason to believe that governmental regulation of development, by itself, is unlikely to produce the most desirable outcomes. But in the saga of Web standards, we have an object lesson in the power of bottom-up self-regulation to achieve ends in technological development that are both complex and broadly beneficial.

So I see a real hope in the idea that a constituency of enlightened developers and empowered users will attend the rise of everyware, demanding responsible and compassionate design of ubiquitous technology. I further hope that the principles I've offered here are a meaningful contribution to the discussion, that they shed some light on what responsible and compassionate everyware might look like.

I have one final thought on the question of principles and self-guided development. It's clear that an approach such as the one I've outlined here will require articulate, knowledgeable, energetic, and above all visible advocacy if it has any chance of success. But it requires something else, as well: a simple, clear way for users and consumers to know when a system whose adoption they are contemplating complies with the standards they wish to support.

What I would like to see is something along the lines of the Snell certification for auto-racing and motorcycle helmets—or better yet, the projected ISO standards for environmental safety in nanotechnological engineering. This would be a finding of fitness verified by an independent, transparent, and international licensing body: a guarantee to all concerned that to the degree possible, the ubiquitous system in question had been found to observe all necessary protections of the human user. (Such certifications, of course, would do little to protect us from harmful emergent behavior of interacting systems, but neither would they be without value.)

A mechanism such as this means that we can feel safer in harnessing the power of the market to regulate the development of everyware, because the market will have been provided with accurate and appropriate information. A simple, high-visibility marker lets people make informed decisions: Either this system meets the guidelines as they existed at such-and-such a date, or it does not. The guidelines are of course there to peruse in detail should anyone wish to do so, but it's not necessary to have a comprehensive understanding of what they mean at the time of purchase, download, or installation. Everything a user needs to know is right there in the certification.

If sentiment in support of these ideas attains critical mass, we reach a point past which buy-in becomes lock-in. From that point forward, most of the everyware we encounter will have been designed and engineered with a deep consideration for our needs and prerogatives.

The aim, of course, is to build a world in which we get to enjoy as many of the benefits of everyware as possible while incurring the smallest achievable cost. I think this is doable, but to a greater extent than has usually been the case, it's not going to come easily. If we want all of these things, we'll have to:

·        educate ourselves as to the nature of the various technologies I have here grouped under the rubric of everyware;

·        decide which of them we will invite into our lives, and under what circumstances;

·        demand that the technologies we are offered respect our claims to privacy, self-determination, and the quality of life;

·        and (hardest of all) consistently act in accordance with our beliefs—at work, at the cash register, and at the polls.

Everyware promises so much more than simply smoothing the hassles we experience in our interactions with computers. It aims to rebuild the relationship between computer and user from the ground up, extend the power of computational awareness to every corner of our lives, and offer us vastly more timely, accurate, and useful knowledge of our surroundings, our communities, and ourselves in so doing. It is, in fact, the best candidate yet to become that "sufficiently advanced" technology Arthur C. Clarke so famously described as being "indistinguishable from magic."

We can have it, if we want it badly enough. But the hour is later than we know, the challenges are many and daunting, and most of us barely have an inkling that there's anything to be concerned about in the advent of the next computing. We have our work cut out for us.

Thesis 81

These principles are necessary but not sufficient: they constitute not an end, but a beginning.