Everyware: The Dawning Age of Ubiquitous Computing - Adam Greenfield (2006)

Section 6. When Do We Need to Begin Preparing for Everyware?

We've gotten a sense of the various factors shaping the development of ubiquitous computing—and of the different forms that computing will take in different places.

Which of the many challenges involved in bringing it into being have been resolved? And which remain to be addressed? Most important, how much time do we have to prepare for the actuality of everyware?

Thesis 52

At most, everyware will subsume traditional computing paradigms. It will not supplant them—certainly not in the near term.

In determining when everyware might realistically arrive, the first notion that we need to dispense with is that it is an all-or-nothing proposition. Just as there are still mainframes and minicomputers chugging away in the world, doing useful work unthreatened by the emergence of the PC, the advent of ubiquitous computing will not mean the disappearance of earlier forms.

Wearables, embedded sensors, RFID-based infrastructures of one sort or another, and the many other systems that we've here defined as ubiquitous in nature can—in fact already do—happily coexist with thoroughly ordinary desktops and laptops. Even after information processing begins to pervade the environment in more decisive ways, there will continue to be a healthy measure of backward compatibility; for some time yet to come, anyone writing a dissertation, keeping a budget, or designing a logo will be likely to interact with conventional applications running on relatively conventional machines.

Personal computers of relatively familiar aspect will continue to be made and sold for the foreseeable future, though they will increasingly tend to be conceived of as portals onto the far greater functionality offered by the local constellation of ubiquitous resources. Such PCs may well serve as the hub by which we access and control the mélange of technical systems imperceptibly deployed everywhere around us, without ever quite disappearing themselves. We could say the same of the various "Ubiquitous Communicator"-style phones that have been proposed, in that they'll persist as discrete objects very much at the focus of attention.

It's true that this kind of setup doesn't go terribly far toward fulfilling Weiser and Brown's hopes for a calm technology, but neither is it quite what we've thought of as personal computing historically. Such scenarios illustrate the difficulties of inscribing a hard-and-fast line between the two paradigms, let alone specifying a date by which personal computing will indisputably have disappeared from the world. Moreover, there will always be those, whatever their motivation, who prefer to maintain independent, stand-alone devices—and if for no other reason than this, the personal computer is likely to retain a constituency for many years past its "sell-by" date.

The safest conclusion to draw is that, while there will continue to be room for PCs in the world, this should not be construed as an argument against the emergence of a more robust everyware. If the border between personal and ubiquitous computing is not always as clear as we might like, that should not be taken as an admission that the latter will not turn out to have enormous consequences for all of us.

Thesis 53

Depending on how it is defined, everyware is both an immediate issue and a "hundred-year problem."

The question of how soon we need to begin preparing for everyware really turns on how strictly it is defined. If we're simply using the word to denote artifacts like PayPass cards and Smart Hydro bathtubs, then it's clear that "preparing" is out of the question: these things already exist.

But everyware is also, and simultaneously, what HP Laboratories' Gene Becker calls a "hundred-year problem": a technical, social, ethical and political challenge of extraordinary subtlety and difficulty, resistant to comprehensive solution in anything like the near term. In fact, if we use the word "everyware" maximally, to mean a seamless and intangible application of information processing that causes change to occur, whether locally or remotely, in perfect conformity with the user's will, we may never quite get there however hard we try.

As is so often the case, the useful definition will be found somewhere in between these two extremes. The trouble is that we're not particularly likely to agree on just where in between: we've already seen that there are many ubiquitous computings, and as if that weren't complication enough, we've also seen that there are places where the line between personal and ubiquitous computing is fairly blurry to begin with.

So how are we to arrive at an answer to our question? Let's see whether we can't narrow the window of possible responses somewhat, by considering schematically which of the components required by a truly ubiquitous computing are already in place and which remain to be developed.

Many such components already exist in forms capable of underwriting a robust everyware, even in the scenarios imagined by its more exuberant proponents. And while a very high degree of finesse in implementation is an absolute precondition for any sort of acceptable user experience, there's nothing in principle that keeps these components from being used to build ubiquitous applications today:

·        Processor speeds are sufficient to all but the most computationally intensive tasks.

·        Storage devices offer the necessary capacity.

·        Displays have the necessary flexibility, luminance and resolution.

·        The necessary bridges between the physical reality of atoms and the information space of bits exist.

·        The necessary standards for the representation and communication of structured data exist.

·        A sufficiently capacious addressing scheme exists.

What makes a system composed of these elements "ubiquitous" in the first place is the fact that its various organelles need not be physically coextensive; given the right kind of networking protocol, they can be distributed as necessary throughout local reality. As it happens, an appropriate protocol exists, and so we can add this too to the list of things that need not hold us back.

But there are also a few limiting factors we may wish to consider. These are the circumstances that have thus far tended to inhibit the appearance of everyware, and which will continue to do so until addressed decisively:

·        Broad standards for the interoperability of heterogeneous devices and interfaces do not exist.

·        In most places, the deployed networking infrastructure is insufficient to support ubiquitous applications.

·        Appropriate design documents and conventions simply do not exist, nor is there a community consciously devoted to the design of ubiquitous systems at anything like industrial scale.

·        There is barely any awareness on the part of users as to the existence of ubiquitous systems, let alone agreement as to their value or utility.

Overall, these issues are much less tractable than the purely technological challenges posed by processor speed or storage capacity, and it's these which account for much of the complexity implied by Becker's "hundred-year problem." We'll consider each point individually before venturing an answer as to when everyware will become an urgent reality.

My own contention is that, while the existence of this latter set of factors constitutes a critical brake on the longer-term development of everyware, the social and ethical questions I am most interested in are activated even by systems that are less total in ambition and extent—some of which are already deployed and fully operational. So we'll consider a few such operational systems as well. By the time the section concludes, I hope you will agree with me that however long it may take a full-fledged everyware to appear, the moment to begin developing a praxis appropriate to it is now.

Thesis 54

Many problems posed by everyware are highly resistant to comprehensive solution in the near term.

By now, the outlines of this thing we've been sketching are clear.

We've taken concepts originating in logistics, chip design, network theory, cultural anthropology, computer-supported collaborative work, and dozens of other disciplines, and fused them into a computing that has become genuinely ubiquitous in our lives, as present in our thoughts as it will be in our tools and jobs and cities.

Most of these "small pieces" are matters of the real world and the present day—a few, admittedly, only as incremental steps or standards adopted but not yet implemented. Even in many of the latter cases, though, we can reasonably expect that the necessary pieces of the puzzle will appear within the next year or two.

But even though we already have most of the componentry we'll ever require, there are excellent reasons to suppose that everyware will take decades to mature fully. Sprawling, amorphous, it touches on so many areas of our lives, and complicates social and political debates that are already among the thorniest our societies have ever faced. In some cases, indeed, we may never fully master the challenges involved. The following are some of the factors that are actively inhibiting either the development or the chances for adoption of ubiquitous computing.

Thesis 55

The necessary standards for interoperability do not exist or are not yet widely observed.

A lack of widely observed standards in the dimensions of screw threading inhibited Charles Babbage in his quest to build the Difference Engine, the world's first computer, in the 1840s. A lack of standards in languages and operating systems kept electronic computers from communicating with each other for decades. Well into the era of the personal computer, a lack of standards kept software development balkanized. A lack of standards led to the so-called Browser Wars, which suppressed adoption of the World Wide Web straight into the early years of this decade, as institutions that wanted Web sites were forced to build different versions compatible with each browser then widely used.

This afflicts almost all technologies at some point during their evolution, not just computing. Every American owner of a VW Beetle remembers the hassle of driving a car engineered to metric tolerances in an English-measurement culture; to this day, travelers arriving by rail at the French-Spanish border are forced to change trains because the countries' standard track gauges differ. The matter of standards seems to be a place where we are always having to learn the same lesson.

In some cases, there's a reasonable excuse for one or another system's failure to observe the relevant convention; the Octopus smartcard scheme we'll be discussing, for example, uses an idiosyncratic RFID architecture that does not conform to the ISO 14443 standard, simply because it was first deployed before the standard itself was established.

In other cases, the thicket of incompatible would-be standards is a matter of jockeying for advantageous position in a market that has not yet fully matured. We see this in the wireless networking arena, for example, where it can be hard even for a fairly knowledgeable observer to disentangle the competing specifications, to distinguish Wireless USB from WiMedia, next-generation Bluetooth, and IEEE 802.15.3a—or even to determine whether they compete on the same ground.

There are good commercial reasons for this, of course. Every manufacturer would ideally like to be able to benefit from the "lock-in" effect, in which its entry is recognized as the universal standard, as happened when JVC's VHS beat Sony's ostensibly superior Betamax format early in the adoption of home video. (It is a persistent urban legend that this was in large part due to Sony's refusal to license pornographic content for Betamax.) VHS, of course, went on to become a multibillion-dollar industry, while the Beta format was more or less forgotten, by all except a diehard few. Sony certainly remembers: It has absolutely no intention of letting its high-capacity Blu-ray DVD format lose out to the competing HD DVD standard.

But what gets lost in the shuffle in such cases is that the jockeying can permanently retard adoption of a technology, especially when it goes on for long enough that the technology itself is leapfrogged. This was the case with early HDTV efforts: Competing producers advanced their incompatible analog standards for so long that a far superior digital HDTV technology emerged in the interim. None of the parties originally marketing analog standards are competitive in HDTV today.

Sometimes lock-in and other legacy issues inhibit the adoption of a standard that might otherwise seem ideal for a given application. We'll be seeing how powerful and general XML is where there is a requirement to communicate structured data between applications, but even given its clear suitability there are some prominent contexts in which it's not yet used. To take two familiar examples, neither the EXIF data that encodes properties such as date, time, and camera type in digital images, nor the ID3 tags that allow MP3 players to display metadata such as track, artist, and album name, are expressed in valid XML. And yet, as we'll be seeing, this is exactly the kind of application XML is well suited for. Whatever the reasons for maintaining separate formats, surely their advantages would be outweighed by those attending compliance with a more universal scheme?
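To make the contrast concrete, here is a minimal sketch in Python of what such camera metadata might look like if expressed as XML instead of a proprietary binary format. The element names are illustrative inventions of mine, not actual EXIF fields or any published schema.

import xml.etree.ElementTree as ET

# A hypothetical photo record expressed as XML; the element names below are
# invented for illustration and do not correspond to the EXIF specification.
photo = ET.Element("photo")
ET.SubElement(photo, "captured").text = "2005-11-02T14:31:07"
ET.SubElement(photo, "camera").text = "Example Cameraphone 3000"
ET.SubElement(photo, "exposure").text = "1/250"

print(ET.tostring(photo, encoding="unicode"))
# <photo><captured>2005-11-02T14:31:07</captured><camera>...</camera>...</photo>

Any XML-aware system, whatever its other capabilities, could parse such a record identically—which is precisely the property that EXIF and ID3 forgo by maintaining formats of their own.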

Finally, even where broadly applicable technical standards exist, compliance with them is still subject to the usual vagaries—a process that can be seen, in microcosm, in the market for pet-identification RFID transponders.

The United Kingdom mandates that all pet transponders and veterinary readers sold conform to the ISO FDX-B standard. A single countrywide registry called PetLog, maintained by the national Kennel Club and recognized by the government's Department for Environment, Food and Rural Affairs, contains some two million records, and lost pets are routinely reunited with their owners as a result of the system's deployment.

By contrast, in the United States, there is no national standard for such tags; your vet has whatever scanner system he or she happens to have bought, which can read the proprietary tags sold by the scanner manufacturer, but not others. Should your pet wander into the next town over and get taken to a vet or a pound using an RFID system from a different vendor, the odds of its being properly identified are slim indeed.

In this case, as in so many others, it's not that a relevant standard does not exist; it does, and it's evidently being used successfully elsewhere. It's merely a question of when, or whether, some combination of pressures from the bottom up (market incentives, consumer action) and the top down (regulation, legislation) will result in a convergence on one universal standard. And we understand by now, certainly, that such processes can drag on for an indefinite amount of time.

Thesis 56

The necessary network infrastructure does not exist.

Whether they themselves are infrastructural or mobile in nature, all of the visions of everyware we've considered in this book depend vitally on near-universal broadband network access.

And although it frequently seems that each day's newspaper brings word of another large-scale Internet access initiative—from Philadelphia's effort to provide a blanket of free municipal Wi-Fi to Google's similar endeavor on behalf of San Francisco—the network infrastructure so necessary to these visions simply does not exist yet in most places.

Even in the United States, broadband penetration is significantly less than total, and as of the end of 2005, many Internet users still eke by with dial-up connections. The problem is particularly exacerbated in areas far from the dense urban cores, where the possibility of ever being fully wired—let alone richly provided with overlapping areas of wireless service—is simply out of the question. Given the economics involved, even in an age of satellite broadband, it's been speculated that some analogue of the Tennessee Valley Authority's Rural Electrification Program of the 1930s might be necessary if universal high-speed connectivity is ever to be a reality.

Newer technologies like WiMAX, especially as used to support mesh networks, show every sign of addressing these issues, but we'll have to wait for their scheduled deployment during 2006-2007 to see whether they make good on the claims of their proponents. Unless these challenges can be resolved, all we'll ever be able to build is a computing that is indeed ubiquitous, but only in some places.

If this sounds like an absurdity, it isn't. Many of these places will be domains large enough for the bulk of our social and experiential concerns to come into play: corporate and university campuses, even entire cities. It may simply be some time before these concerns are fully relevant to the majority of people, even in the developed nations.

Finally, though raising this point may sound an odd note here, we should never forget that many human places abide without electricity, running water, or sewerage, let alone Internet access. As much as I believe that information is power, there's no question that shelter, safe drinking water and sanitation come first—on Maslow's pyramid and in any development scheme I'd want to endorse. Whatever promise everyware may extend to us, it will be quite some time indeed until we all get to share its benisons on anything like an equal footing.

Thesis 57

Appropriate design documents and conventions do not yet exist.

One unexpected factor that may inhibit the development of everyware for some time to come is that, while the necessary technical underpinnings may exist, a robust design practice devoted to the field does not. As designers, we haven't even begun to agree on the conventions we'll use to describe the systems we intend to build.

Consider what is involved in an analogous process of development, the design of a large-scale Web site. The success of the whole effort hinges on the accurate communication of ideas among members of the development team. The person who actually has to code the site is joined by the visual designer, who is responsible for the graphic look and feel; the information architect, responsible for the structure and navigation; and perhaps a content strategist, who ensures that written copy and "navitorial" convey a consistent "tone and voice." When sites are developed by agencies operating on behalf of institutional clients, invariably there will also be input from a client-facing account manager as well as representatives of the client's own marketing or corporate communications department.

The documents that are used to coordinate the process among all the parties involved are referred to as "deliverables." A reasonably comprehensive set of deliverables for a Web site might include visual comps, which depict the graphic design direction; a site map, which establishes the overall structure of the site as well as specifying the navigational relationship of a given page to the others; schematics, which specify the navigational options and content available on a given page; and task flows and use cases, which depict trajectories through the site in highly granular detail.

All of these things are signed off on by the client, after which they are released to the software development engineer, who is then responsible for the actual coding of the site.

When done conscientiously, this is an involved, painstaking process, one that can go on for many months and cost hundreds of thousands of dollars. Success in the endeavor depends vitally on accurate deliverables that clearly convey what is required.

No such deliverables currently exist for everyware. If everyware presents situations in which multiple actors interact simultaneously with multiple systems in a given environment, in three dimensions of space and one of time, we lack the conventions that would allow us to represent such interactions to each other. If everyware implies that the state of remote systems may impinge quite profoundly on events unfolding here and now, we scarcely have a way to model these influences. If everyware involves mapping gesture to system behavior, we lack whatever equivalent of choreographic notation would be necessary to consistently describe gesture numerically. And where the Web, until very recently, was governed by a page metaphor that associated a consistent address with a known behavior, interaction in everyware lacks for any such capacity. As designers, we simply don't yet know how to discuss these issues—not with each other, not with our clients, and especially not with the people using the things we build.

At present, these challenges are resolved on a bespoke, case-by-case basis, and development teams have tended to be small and homogeneous enough that the necessary ideas can easily be conveyed, one way or another. This is strikingly reminiscent of design practice in the early days of the Web—a glorious moment in which a hundred flowers certainly bloomed, and yet so terribly disappointing in that ninety-six of them turned out to be weeds.

Just as was the case with the Web, as everyware matures—and especially as it becomes commercialized and diffuses further into the world—there will be a greater demand for consistency, reliability and accountability, and this will mandate the creation of deliverable formats to account for all of the relevant variables. It is true that such design documents did not exist for hypertext systems prior to the advent of the World Wide Web, and that a practice developed and to some degree became formalized within just a few years. Nevertheless, with regard to everyware, this conversation hasn't even properly started yet.

Thesis 58

As yet, everyware offers the user no compelling and clearly stated value proposition.

The last of the inhibiting factors we'll be discussing is the deep and as yet unaddressed disconnect that exists between the current discourse around ubiquitous systems, and any discernible desire on the part of meaningfully large populations for such systems.

Inside the field, however elaborated they've become with an embroidery of satisfying and clever details, we've told each other these tales of ubiquity so many times that they've become rote, even clichéd—but we've forgotten to ascertain whether or not they make any sense to anyone outside the contours of our consensual hallucination.

HP's Gene Becker describes the issue this way:

The potential uses and benefits of ubicomp often seem 'obvious'; most of us in the field have spun variations of the same futuristic scenarios, to the point where it seems like a familiar and tired genre of joke. 'You walk into the [conference room, living room, museum gallery, hospital ward], the contextual intention system recognizes you by your [beacon, tag, badge, face, gait], and the [lights, music, temperature, privacy settings, security permissions] adjust smoothly to your preferences. Your new location is announced to the [room, building, global buddy list service, Homeland Security Department], and your [videoconference, favorite TV show, appointment calendar, breakfast order] is automatically started.' And so on. Of course, what real people need or want in any given situation is far from obvious.

It's ironic, then, that one of the things that real people demonstrably do not want in their present situation is everyware. There is no constituency for it, no pent-up demand; you'll never hear someone spontaneously express a wish for a ubiquitous house or city. There are days, in fact, when it can seem to me that the entire endeavor has arisen out of some combination of the technically feasible and that which is of interest to people working in human-computer interaction. Or worse, much worse: out of marketing, and the desire to sell people yet more things for which they have neither a legitimate need nor even much in the way of honest desire.

What people do want, and will ask for, is more granular. They want, as Mark Weiser knew so long ago, to be granted a god's-eye view of the available parking spaces nearby, to spend less time fumbling with change at the register, to have fewer different remote controls to figure out and keep track of.

And, of course, everyware is the (or at least an) answer to all of these questions. But until those of us in the field are better able to convey this premise to the wider world in convincing and compelling detail, we can expect that adoption will be significantly slower than might otherwise be the case.

Thesis 59

The necessary processor speed already exists.

Of the major limiting factors on ubiquitous computing, one of the most vexing—and certainly the most fundamental—has always been processor speed. The challenges posed by the deployment of computing out in the everyday environment, whether parsing the meaning of a gesture in real time or tracking 500 individual trajectories through an intersection, have always been particularly processor-intensive.

But if processor speed has historically constituted a brake on development, it needn't any longer. The extravagance of computational resources such applications require is now both technically feasible and, at long last, economic.

The machine I am writing these words on operates at a clock speed of 1.5 GHz—that is, the internal clock by which it meters its processes cycles 1.5 billion times every second. While this sounds impressive enough in the abstract, it's not particularly fast, even by contemporary standards. Central processors that operate more than twice as fast are widely commercially available; a 2004 version of Intel's Pentium 4 chip runs at 3.4 GHz, and by the time this book reaches your hands, the CPU inside the most generic of PCs will likely be faster yet.

We know, too, that relying on CPU clock speeds for estimates of maximum speed can be deceptive: such general-purpose chips are held to speeds well below the theoretical maximum, while specialized chips can be optimized to the requirements of a particular application—video or sound processing, encryption, and so on. In synchrony, CPUs and specialized chips already handle with aplomb the elaborate variety of processor-intensive applications familiar from the desktop, from richly immersive games to real-time multiway videoconferencing.

In principle, then, a locally ubiquitous system—say, one dedicated to household management—built right now from commonly available CPUs and supported by a battery of specialized helpers, should be perfectly adequate to the range of routine tasks foreseeable in such a setting. Excepting those problems we've already identified as "AI-hard," which aren't as a rule well-suited to brute-force approaches anyway, there shouldn't be anything in the home beyond the compass of such a system.

Especially if a grid architecture is employed—if, that is, the computational burden imposed by more convoluted processes is distributed through the constellation of locally-embedded processors, working in parallel—today's clock speeds are entirely adequate to deliver services to the user smoothly and reliably. Whatever challenges exist, it's hard to imagine that they would be order-of-magnitude harder than supporting an iRoom-style collaborative workspace, and that was achieved with 2001-vintage processor speeds.

The other side of the speed equation is, of course, expense; one-off showpieces for research labs and corporate "visioning" centers are well and good, but their effects are generally achieved at prohibitive cost. In order to support meaningfully ubiquitous systems, componentry must be cheap. Current projections—and not necessarily the most optimistic—indicate that processors with speeds on the order of 2 GHz will cost about what ordinary household electrical components (e.g., dimmer switches) do now, at the end of the decade or very soon thereafter. This would allow an ordinary-sized room to be provisioned with such an abundance of computational power that it is difficult to imagine it all being used, except as part of some gridlike approach to a particularly intractable problem. Less extravagant implementations could be accomplished at negligible cost.

When there are that many spare processing cycles available, some kind of market mechanism might evolve to allocate them: an invisible agora going on behind the walls, trading in numeric operations. But we can leave such speculations for other times. For the moment, let's simply note that—even should Moore's Law begin to crumble and benchmark speeds stagnate rather than continuing their steep upward climb—processing capacity presents no obstacle to the emergence of full-fledged ubiquitous services.

Thesis 60

The necessary storage capacity already exists.

It's easy to infer that a panoply of ubiquitous systems running at all times—systems whose operation by definition precedes users, as we've noted—is going to churn up enormous quantities of data. How and where is all this information going to be stored? Will the issue of storage itself present any obstacle to the real-world deployment of everyware?

We can derive a useful answer by, again, extrapolating not from the best currently available systems, but from those at the middle of the pack. The iPod shuffle I wear when I go running, for example, is a circa-2004 solid-state storage device, with only incidental moving parts, that boasts a capacity of 1 GB. This is about a day and a half's worth of music encoded with middling fidelity, a few hours' worth at the highest available resolution. It achieves this (as Apple's advertising was pleased to remind us) inside a form factor of around the same volume as a pack of chewing gum, and it's already been rendered obsolete by newer and more capacious models.

A day and a half sure sounds like a decent amount of music to pack into a few cubic centimeters; certainly it's suggestive of what might be achieved if significant parts of a structure were given over to solid-state storage. But hard-ubicomp enthusiasts already dream of far greater things. On a chilly night in Göteborg in late 2002, Lancaster University HCI pioneer Alan Dix described an audacious plan to record in high fidelity every sense impression a human being ever has—favoring me with a very entertaining estimate of the bandwidth of the human sensorium, the total capacity necessary to store all of the experiences of an average lifetime, and a guess as to what volume would suffice to do so: "If we start recording a baby's experiences now, by the time she's 70 all of it will fit into something the size of a grain of sand."

If I recall correctly, Dix's order-of-magnitude guess was that no more than 20 TB (each terabyte is 1,000 GB) would be required to record every sensory impression of any sort that you have in the entire course of your life. And when you run the numbers—making the critical assumption that increases in storage capacity will continue to slightly outpace the 24-month doubling period specified by Moore's law for transistor density—mirabile dictu, it does turn out to be the case that by mid-2033, it will at least theoretically be possible to store that amount of information in a nonvolatile format the size and weight of a current-generation iPod nano. (The grain of sand appears not long thereafter.)
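The arithmetic behind such a projection is easy to check. What follows is a minimal sketch, assuming a 2 GB pocket player in 2005 and a straight 24-month doubling of capacity—round-number assumptions of mine, not figures taken from Dix—to estimate when 20 TB would fit into the same form factor.

import math

start_year = 2005
start_capacity_gb = 2.0      # assumption: a circa-2005 flash player
target_capacity_gb = 20_000  # Dix's ~20 TB lifetime estimate
doubling_period_years = 2.0  # assumption: capacity doubles every 24 months

doublings_needed = math.log2(target_capacity_gb / start_capacity_gb)
crossover_year = start_year + doublings_needed * doubling_period_years
print(round(crossover_year))  # ~2032 under these assumptions

Under these assumptions the crossover lands in the early 2030s, comfortably in the neighborhood of the mid-2033 figure; nudging the starting capacity or the doubling period shifts the answer by only a few years in either direction.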

As of the end of 2005, the numbers undergirding this rather science-fictiony-sounding estimate still hold and are maybe even a little conservative. The real point of all this extrapolation, though, is to buy some room for those challenges inherent in everyware that, however daunting they may seem at the moment, are nonetheless of smaller magnitude.

If we can take as a limit case the recording of every single impression experienced in the course of a life, then it seems fair to say that all the other issues we're interested in addressing will be found somewhere inside this envelope. And if this is so—and there's currently little reason to believe otherwise—we can safely assume that even devices with small form factors will be able to contain usefully large storage arrays.

Going a step further still, such high local information densities begin to suggest the Aleph of Borges (and William Gibson): a single, solid-state unit that contains high-fidelity representations of literally everything, "the only place on earth where all places are." As strange as this poetic notion may sound in the context of an engineering discussion, the numbers back it up; it's hard to avoid the conclusion that we are entering a regime in which arbitrarily large bodies of information can be efficiently cached locally, ready to hand for whatever application requires them.

If this is too rich for your blood, Roy Want, Gaetano Borriello, and their co-authors point out, in their 2002 paper "Disappearing Hardware," that we can at least "begin to use storage in extravagant ways, by prefetching, caching and archiving data that might be useful later, lessening the need for continuous network connectivity."

While this is more conservative, and certainly less romantic, than Borges' Aleph, it has the distinct advantage (for our immediate purposes, anyway) of referring to something real. Intel has demonstrated several iterations of a high-density mobile/wearable storage system based on these ideas, called a "personal server," the earliest versions of which were little more than a hard drive with a built-in wireless connection. Where Want's version of Alan Dix's calculation puts that total lifetime throughput figure at a starkly higher 97 TB ("80 years, 16 hours a day, at 512 Kbps"), he reckons that a personal server should store that amount of data by the more optimistic date of 2017; some of the apparent optimism no doubt reflects the difference in scale between a grain of sand and the mobile-phone-sized personal server.
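Want's figure can be reconstructed directly from the parameters he quotes. A quick sketch, taking "Kbps" as decimal kilobits and the terabyte as a binary unit—interpretive assumptions on my part:

seconds_per_day = 16 * 3600        # sixteen waking hours
days = 80 * 365                    # eighty years
bits_per_second = 512 * 1000       # 512 Kbps, read as decimal kilobits

total_bytes = days * seconds_per_day * bits_per_second / 8
print(total_bytes / 2**40)         # ~98 binary terabytes, close to the 97 TB cited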

But, again, the purpose of providing such calculations is merely to backstop ourselves. Any ubiquitous application that requires less in the way of local storage than that required by recording every sensation of an entire life in high fidelity would seem to present little problem from here on out.

Thesis 61

The necessary addressing scheme already exists.

As we considered earlier, a technology with the ambition to colonize much of the observable world has to offer some provision for addressing the very large number of nodes implied by such an ambition. We've seen that a provision along these lines appears to exist, in the form of something called IPv6, but what exactly does this cryptic little string mean?

In order to fully understand the implications of IPv6, we have to briefly consider what the Internet was supposed to be "for" in the minds of its original designers, engineers named Robert E. Kahn and Vint Cerf. As it turns out, Kahn and Cerf were unusually prescient, and they did not want to limit their creation to one particular use or set of uses. As a result, from the outset it was designed to be as agnostic as possible regarding the purposes and specifications of the devices connected to it, which has made it a particularly brilliant enabling technology.

The standard undergirding communication over the Internet—a network layer protocol known, rather sensibly, as Internet Protocol, or IP—doesn't stipulate anything but the rules by which packets of ones and zeroes get switched from one location to another. The model assumes that all the intelligence resides in the devices connected to the network, rather than in the network itself. (The term of art engineers use to describe this philosophy of design is "end to end.") As a result, as these things go, the Internet is simple, robust, all but endlessly extensible, and very, very flexible.

For our purposes, the main point of interest of the current-generation IP—version 4—is that it is running out of room. Addresses in IPv4 are 32 bits long, and the largest number of discrete addresses that it will ever be possible to express in 32 bits turns out to be a little over four billion. This sounds like a comfortably large address space, until you consider that each discrete node of digital functionality (a "host") that you want to be able to send and receive traffic over the network requires its own address.

The exponential growth of the Internet in all the years since scientists first started sending each other e-mail, and particularly the spike in global traffic following the introduction of the World Wide Web, have swallowed up all of the numeric addresses provided for in the original protocol, many years before its designers thought such a thing possible. It's as if, while building a new settlement in a vast desert, you had somehow begun to run out of street numbers—you can see limitless room for expansion all around you, but it's become practically impossible to build even a single new house because you would no longer be able to distinguish it from all of its neighbors.

This scarcity is one of the stated justifications behind promulgating a new version of IP, version 6. By virtue of extending the length of individual addresses in IPv6 to a generous 128 bits, the address space thus evoked becomes a staggering 2¹²⁸ discrete hosts—roughly equivalent to a number that starts with the numeral 3 and continues for 38 zeroes. That works out to 6.5 × 10²³ for every square meter on the surface of the planet. (One commentary on the specification dryly suggests that this "should suffice for the foreseeable future.")
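The raw arithmetic is simple enough to verify. In the sketch below, the Earth's surface area is taken as roughly 5.1 × 10¹⁴ square meters—an approximation of mine, which is why the per-square-meter result lands near, rather than exactly on, the figure quoted above.

ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128
earth_surface_m2 = 5.1e14    # assumption: approximate surface area of the planet

print(f"{ipv4_addresses:,}")                       # 4,294,967,296 -- "a little over four billion"
print(f"{ipv6_addresses:.2e}")                     # ~3.40e+38
print(f"{ipv6_addresses / earth_surface_m2:.2e}")  # ~6.7e+23 addresses per square meter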

What this means above all is that we no longer need to be parsimonious with IP addresses. They can be broadcast promiscuously, tossed into the world by the bucketload, without diminishing or restricting other possibilities in the slightest. There are quite enough IPv6 addresses that every shoe and stop sign and door and bookshelf and pill in the world can have one of its own, if not several.

The significance of IPv6 to our story is simply that it's a necessary piece of the puzzle—if the role of sufficiently capacious addressing scheme wasn't filled by this particular specification, it would have to be by something else. But everyware needs a framework that provides arbitrarily for the communication of anything with anything else, and IPv6 fills that requirement admirably.

Thesis 62

The necessary display technologies already exist.

Although many—perhaps even the majority of—deployments of everyware will by their nature not require display screens of the conventional sort, there will still be a general requirement for the graphic presentation of information.

With displays of various sorts appearing in an ever greater number of places throughout the environment, though, we can assume that the ethic of calmness we've discussed in other contexts will also inform their design. And this turns out to have a lot to do with screen luminance and resolution, threshold values of which must be reached before the display itself fades from awareness—before, that is, you feel like you're simply working on a document and not on a representation of a document.

With the commercial introduction of Sony's LIBRIé e-book reader in early 2004, display screens would appear to have effectively surmounted the perceptual hurdles associated with such transparency of experience: In reading text on them, you're no more conscious of the fact that you're using a screen than you would ordinarily be aware that you're reading words from a printed page.

The LIBRIé, a collaboration of Sony, Philips Electronics' Emerging Display Technology unit, and startup E Ink, is a relatively low-cost, mass-market product. At 170 pixels per inch, the screen's resolution is not heroically high by contemporary standards—Mac OS defaults to 72 ppi for displaying graphics on monitors, Windows to 96—but text rendered on it "looks like a newspaper" and has left a strongly favorable impression on most of us lucky enough to have seen it.*

* If Sony had chosen not to cripple the LIBRIé with unreasonably restrictive content and rights-management policies, it's very likely that you, too, would have seen the device. As it is, Sony's regrettable distrust of its own customers has ensured that an otherwise-appealing product ends up atop the dustbin of history.

The LIBRIé owes much of its oooooh factor to E Ink's proprietary microencapsulation technology—a technology which, it must be said, is impressive in many regards. The quality that leaps out at someone encountering it for the first time is its dimensionality. The technique allows conformal screens of so-called "electronic paper" to be printed to the required specifications, and this can result in some striking applications, like the rather futuristic watch prototype the company has produced in collaboration with Seiko, a gently curving arc a few millimeters thick. But it's also versatile—large-scale prototype displays have been produced—and astonishingly vibrant, and it's easy to imagine such units replacing conventional displays in the widest possible variety of applications.**

** None of this is to neglect that other common trope of ubicomp imaginings, the wall- or even building-scale display. A friend once proposed, in this regard, that the Empire State Building be lit each night with a display of color tuned to function as a thermometer—a kind of giant ambient weather beacon.

Nor is E Ink the only party pursuing next-generation displays. Siemens offers a vanishingly thin "electrochromic" display potentially suitable for being printed on cardboard, foil, plastic and paper. These are being envisioned, initially at least, for limited-lifetime applications such as packaging, labels, and tickets; when combined with printable batteries such as those produced by Israeli startup Power Paper, the Minority Report scenario of yammering, full-motion cereal boxes is that much closer to reality.

Commercial products using the E Ink technology, including the Seiko watch, are slated for introduction during 2006; Siemens, meanwhile, plans to introduce 80-dpi electrochromic packaging labels (at a unit cost of around 30 cents) during 2007. The inference we can draw from such developments is that the challenges posed by a general requirement for highly legible ambient display are well on their way to being resolved, at a variety of scales. As a consequence, we can regard the issue of display as posing no further obstacle to the development of ubiquitous systems requiring them.

Thesis 63

The necessary wireless networking protocols already exist.

If IPv6 gives us a way to identify each of the almost unimaginably rich profusion of nodes everyware will bring into being, we still need to provide some channel by which those nodes can communicate with each other. We already have some fairly specific ideas of what such a channel should look like: Most of our visions of ubiquity presuppose a network that:

·        is wireless;

·        provides broadband;

·        affords autodiscovery;

·        is available wherever you might go.

As it happens, each of them is neatly answered by a brace of emerging (and in some cases conflicting) networking standards.

At ultra-short range, a new standard called Wireless USB is intended by its developers to succeed Bluetooth in the personal area networking (PAN) role during 2006-2007, connecting printers, cameras, game controllers, and other peripherals. Supported by the WiMedia Alliance, an industry coalition that counts HP, Intel, Microsoft, Nokia, Samsung, and Sony among its mainstays, Wireless USB is—like similar ultrawideband (UWB) protocols—a low-power specification affording connection speeds of up to 480 Mbps. Undeterred, the industry alliance responsible for Bluetooth—the Bluetooth Special Interest Group—has announced its own plans for a new, UWB-compatible generation of its standard. (Confusingly enough, the group counts many of the same companies supporting Wireless USB among its adherents.)

As we've seen, this is sufficient to stream high-definition video between devices in real time. Given that such streams represent something like peak demand on PAN, at least for the foreseeable future, we're probably safe in regarding the challenges of wireless networking at short range as having been overcome upon the introduction of Wireless USB or similar.
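As a rough sanity check—assuming on the order of 25 Mbps for a single compressed high-definition stream, a figure of my own rather than one from the standards documents—the nominal Wireless USB rate leaves ample headroom:

wireless_usb_mbps = 480     # nominal Wireless USB / UWB connection speed
hd_stream_mbps = 25         # assumption: one compressed HD video stream

print(wireless_usb_mbps // hd_stream_mbps)  # ~19 such streams at the nominal rate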

Wireless USB and its competing standards, although intended mainly to link peripherals at ranges of a few meters, begin to blur into what has historically been considered the domain of local area networking, or LAN. They certainly offer higher speeds than the current widely-deployed wireless LAN implementation, Wi-Fi: While the 802.11g variant of Wi-Fi provides for a nominal maximum speed of 54 Mbps, in practice, throughput is often limited to a mere fraction of that number and in some cases is barely any faster than the 11 Mbps maximum of the earlier 802.11b standard.

Nor can current-generation Wi-Fi base stations cope with the longer ranges implied by so-called metropolitan area networking, in which regions anywhere up to several kilometers across are suffused with a continuous wash of connectivity. Vulnerable on both counts of speed and range, then, Wi-Fi will almost certainly be superseded over the next year or two by the new WiMAX standard.

WiMAX isn't some new and improved form of Wi-Fi; it is a relatively radical departure from the Ethernet model from which the 802.11 standard is originally derived. (Some of the blame for this common misperception must clearly be laid at the feet of those who chose to brand the standard thusly.) Fortunately, this does not prevent the standard from offering a certain degree of backward-compatibility with earlier devices, although they will surely not be able to take advantage of all that it has to offer.

And what WiMAX has to offer is impressive: bandwidth sufficient for simultaneous voice over IP, video, and Internet streams, with data rates of 70 Mbps provided over ranges up to a nominal 50 kilometers. The speed is only a little bit faster than the swiftest current flavor of Wi-Fi, 802.11g, but the range is vastly improved. When both WiMAX and Wireless USB have supplanted the current generation of networking standards—as they are supposed to, starting in 2006—we will have three of the four elements we were looking for in our robust ubiquitous network: wireless broadband connectivity, at a range of scales, just about anywhere we might think to go.

This leaves us only the issue of autodiscovery.

One of the less charming provisions of Bluetooth, at least in its earlier incarnations, was that devices equipped with it did not automatically "discover" and recognize one another. They had to be manually paired, which, on the typical mobile phone, meant a tiresome descent through the phone's hierarchy of menus, in search of the one screen where such connectivity options could be toggled.

While establishing a Wi-Fi connection is not typically as onerous as this, it too presents the occasional complication—even Apple's otherwise refined AirPort Extreme implementation of 802.11 confronts the user with a variety of notifications and dialog boxes relating to the current state of connection. When I'm out in the field, for example, my computer still asks me if I'd like to join one or another of the networks it detects, rather than making an educated guess as to the best option and acting on it.

And this is precisely the kind of overinvolvement that a classically Weiserian everyware would do away with; presumably, the task you are actually interested in accomplishing is several levels of abstraction removed from pairing two devices. That is, not only do you not want to be bothered with the granular details of helping devices discover one another, you're not particularly interested in connectivity per se, or even in sending files. These are simply things that must be accomplished before you can engage the task that you originally set out to do.

Autodiscovery, then, however arcane it may sound, is a sine qua non of truly ubiquitous connectivity. You shouldn't have to think about it—not if our notions of the encalming periphery are to make any sense at all.*

* We should note, however, that this is precisely the kind of situation the doctrine of "beautiful seams" was invented for. If the default setting is for users to be presented with fully automatic network discovery, they should still be offered the choice of a more granular level of control.

But while a smooth treatment of service discovery is indisputably critical to good user experience in ubiquitous computing, it's a question more of individual implementations of a wireless networking technology than of any given protocol itself.

With the near-term appearance of standards such as Wireless USB and WiMAX, the necessary provisions for ubiquitous networking are at last in hand. The question of how to connect devices can itself begin to disappear from consciousness, unless we explicitly desire otherwise.

Thesis 64

The necessary bridges between atoms and bits already exist.

Like a leitmotif, one idea has been woven through this book from its very beginning, popping to the surface in many places and in many ways: The logic of everyware is total. Whether anyone consciously intended it to be this way or not, this is a technology with the potential to sweep every person, object and place in the world into its ambit.

Obviously, though, and for a variety of good reasons, not everything in the world can or should have the necessary instrumentation built into it at design time. Sometimes we'd like to account for something built before everyware was ever contemplated, whether it be a medieval manuscript or a 1970 Citroën DS; sometimes we might want to keep track of something whose nature precludes ab initio integration, like a cat, or a can of cranberry sauce, or a stand of bamboo.

So in order for the more total visions of information processing in everyday life to be fully workable, there exists a generic requirement for something that will allow all this otherwise unaugmented stuff of the physical world to exist also in the hyperspace of relational data—a bridge between the realm of atoms and that of bits.

Ideally, such bridges would be reasonably robust, would not require an onboard power supply, and could be applied to the widest possible range of things without harming them. Given the above use scenarios, a very small form factor, a low-visibility profile, or even total imperceptibility would be an advantage. Above all, the proposed bridge should be vanishingly cheap—the better to economically supply all the hundreds of billions of objects in the world with their own identifiers.

Such bridges already exist—and are in fact already widely deployed. We'll limit our discussion here to the two most prominent such technologies: RFID tags and two-dimensional bar-codes.

The acronym RFID simply means "radio-frequency identification," although in use it has come to connote a whole approach to low-cost, low-impact data-collection. There are two fundamental types of RFID tags, "active" and "passive"; just as you'd assume, active tags broadcast while passive tags require scanning before offering up their payload of information.

While both types of tags incorporate a chip and an antenna, passive tags do not require an onboard power supply. This allows them to be extremely cheap, small, and flexible; they can be woven into fabrics, printed onto surfaces, even slapped on in the form of stickers. Of course, this limits their range of action to short distances, no more than a few meters at the very outside, while active RFID units, supplied with their own onboard transmitter and power supply, trade greater range for a correspondingly bulkier profile.

The onboard memory chip generally encodes a unique numeric identifier and includes as well whatever other information is desired about the item of interest: part number, account number, SKU, color.... Really, the possibilities are endless. And it's this flexibility that accounts for the incredibly wide range of RFID applications we see: In everyday life, you're almost certainly already engaging RFID infrastructures, whether you're aware of it or (more likely) not.
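A minimal sketch of how such a payload is typically put to use: the tag itself yields little more than an identifier, and everything else of interest lives in a database keyed by that identifier. The identifiers, field names, and values below are invented for illustration.

# Everything here is hypothetical: a toy lookup from a scanned tag identifier
# to the record an inventory system might keep about the tagged object.
TAG_DATABASE = {
    "urn:tag:0456:1002:77341": {
        "kind": "pallet",
        "sku": "CRANBERRY-SAUCE-12CT",
        "packed": "2005-11-02",
        "origin": "Wareham, MA",
    },
}

def resolve(tag_id):
    # Return whatever the system knows about a scanned tag, if anything.
    return TAG_DATABASE.get(tag_id, {"kind": "unknown"})

print(resolve("urn:tag:0456:1002:77341")["sku"])   # CRANBERRY-SAUCE-12CT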

Two-dimensional bar codes address some of the same purposes as passive RFID tags, though they require visual scanning (by a laser reader or compatible camera) to return data. While unidimensional bar-codes have seen ubiquitous public use since 1974 as the familiar Universal Product Code, they're sharply limited in terms of information density; newer 2D formats such as Semacode and QR, while perhaps lacking the aesthetic crispness of the classic varieties, allow a literally geometric expansion of the amount of data that can be encoded in a given space.

At present, one of the most interesting uses of 2D codes is when they're used as hyperlinks for the real world. Semacode stickers have been cleverly employed in this role in the Big Games designed by the New York City creative partnership area/code, where they function as markers of buried treasure, in a real-time playfield that encompasses an entire urban area—but what 2D coding looks like in daily practice can perhaps best be seen in Japan, where the QR code has been adopted as a de facto national standard.

QR codes can be found anywhere and everywhere in contemporary Japan: in a product catalogue, in the corner of a magazine ad, on the back of a business card. Snap a picture of one with the camera built into your phone—and almost all Japanese keitai are cameraphones—and the phone's browser will take you to the URL it encodes and whatever information waits there. It's simultaneously clumsy and rather clever.

Ultra-low-cost 2D-coded stickers allow what might be called the depositional annotation of the physical world, as demonstrated by the recent Semapedia project. Semapedia connects any place that has a Wikipedia page to that page, by encoding a link to the Wikipedia entry in a sticker. For example, there's a Semapedia sticker slapped up just outside Katz's Delicatessen on the Lower East Side of Manhattan; shoot a picture of the sticker with a compatible cameraphone, and you're taken to the Katz's page on Wikipedia, where you can learn, among other things, precisely how much corned beef the deli serves each week.*

* Five thousand pounds.

The significance of technologies like RFID and 2D bar-coding is that they offer a low-impact way to "import" physical objects into the data-sphere, to endow them with an informational shadow. An avocado, on its own, is just a piece of fleshy green fruit—but an avocado whose skin has been laser-etched with a machine-readable 2D code can tell you how and under what circumstances it was grown, when it was picked, how it was shipped, who sold it to you, and when it'll need to be used by (or thrown out).

This avocado, that RFID-tagged pallet—each is now relational, searchable, available to any suitable purpose or application a robust everyware can devise for it. And of course, if you're interested in literal ubiquity or anything close to it, it surely doesn't hurt that RFID tags and 2D codes are so very cheap.

Richly provisioned with such bridges between the respective worlds of things and of data, there is no reason why everyware cannot already gather the stuff of our lives into the field of its awareness.

Thesis 65

The necessary standards for the representation and communication of structured data already exist.

From the perspective of someone unfamiliar with the details of contemporary information technology—which is to say most of us—one factor that might seem to stand in the way of everyware's broader diffusion is the wild heterogeneity of the systems involved. We've grown accustomed to the idea that an ATM card issued in Bangor might not always work in Bangkok, that a Bollywood film probably won't play on a DVD player bought in Burbank—so how credible is any conception of the ubiquitous present that relies on the silky-smooth interplay of tag reader and wireless network, database and embedded microcontroller?

However intractable such issues may seem, their solution is in hand—if currently wedged somewhere in the gap between theory and robust praxis. Exactly how is a piece of information represented so that it may be reported by an RFID tag, put into proper perspective by visualization software, correlated with others in a database, and acted on by some remote process? How do such heterogeneous systems ever manage to pass data back and forth as anything more than a stripped-down, decontextualized series of values?

One of the first successful attempts to address such questions was the Standard Generalized Markup Language (SGML), adopted as the international standard ISO 8879 in 1986. SGML was intended to permit the sharing of machine-readable documents between different systems; its fundamental innovation, still observed in all the markup languages descended from it, was to propose that a document be provisioned with interpolated, semantic "tags" describing its various parts.* For example, a document describing this book might be marked up (at least in part) like this:

* Such tags are not to be confused with those of the RFID variety.

<title>Everyware</title>

<subtitle>The dawning age of ubiquitous computing</subtitle>

<author>Adam Greenfield</author>

<pubyear>2006</pubyear>

Once a document has been marked up this way, SGML-compliant but otherwise incompatible systems will parse it identically. Moreover, SGML is a metalanguage, a tool kit for the construction of interoperable special-purpose languages; as long as all observe the rules of valid SGML, any number of different applications can be built with it.
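
To make "parse it identically" concrete: the fragment above, wrapped in a single root element, is also valid in XML, the streamlined descendant of SGML introduced below, and a few lines of code suffice to recover its structure. The sketch uses Python's standard-library ElementTree purely by way of illustration; any compliant parser would yield the same result.

# A minimal sketch: the fragment above, wrapped in a root element so it is
# well-formed, parses the same way on any compliant system. Here Python's
# standard-library ElementTree stands in for such a system.
import xml.etree.ElementTree as ET

document = """
<book>
  <title>Everyware</title>
  <subtitle>The dawning age of ubiquitous computing</subtitle>
  <author>Adam Greenfield</author>
  <pubyear>2006</pubyear>
</book>
"""

book = ET.fromstring(document)
print(book.find("title").text)    # Everyware
print(book.find("author").text)   # Adam Greenfield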

This would seem to make SGML the perfect lingua franca for technical systems, in theory anyway. In practice, the language has some qualities that make it hard to use, most notably its complexity; nor was it ideally suited to the multilingual Internet, where documents might well be rendered in tightly woven Farsi or the stolid ideograms of Traditional Chinese. In the late 1990s, therefore, a working group of the World Wide Web Consortium developed a streamlined subset of SGML known as XML (for eXtensible Markup Language), designed specifically for use in the Internet context.**

** Regrettably, the most recent version of XML still excludes support for several of the world's writing systems, notably the plump curls of Burmese and the hauntingly smokelike vertical drafts of Mongolian Uighur script.

While XML has the very useful quality of being both machine-readable and (reasonably) legible to people, the source of its present interest to us is the success it has enjoyed in fostering machine-to-machine communication. Since its release in 1998, XML has become the lingua franca SGML never was, allowing the widest possible array of devices to share data in a manner comprehensible to all.
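
What such machine-to-machine sharing looks like in practice can be suggested with a small hypothetical sketch: one process serializes an RFID read event as XML, and another, perhaps running on entirely different hardware, recovers the same structured values. The element names and values below are invented for the occasion, and Python's standard-library ElementTree stands in for whatever parser a given device might actually use.

# A hypothetical sketch of XML as a bridge between heterogeneous systems.
# All element names and values here are invented for illustration.
import xml.etree.ElementTree as ET

# Producer side: an RFID reader reports what it saw and when.
event = ET.Element("tagRead")
ET.SubElement(event, "tagId").text = "avocado-0042"
ET.SubElement(event, "reader").text = "checkout-3"
ET.SubElement(event, "timestamp").text = "2005-11-09T17:42:00Z"
payload = ET.tostring(event, encoding="unicode")

# Consumer side: any XML-capable system recovers the same structured values.
parsed = ET.fromstring(payload)
print(parsed.find("tagId").text, parsed.find("timestamp").text)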

XML compatibility, inevitably, is not universal, nor has it been perfectly implemented everywhere it has been deployed. But it is a proven, powerful, general solution to the problem of moving structured data across systems of heterogeneous type and capability. Once again, we'll have to look elsewhere if we're interested in understanding why everyware is anything but a matter of the very near term.

images Thesis 66

For many of us, everyware is already a reality.

Maybe it's time for a reality check. We should be very clear about the fact that when we raise the question of ubiquitous computing, we're not simply talking about the future—even the near future—but also about things that actually exist now.

Far from presenting itself to us as seamless, though, everyware as it now exists is a messy, hybrid, piecemeal experience, and maybe that's why we don't always recognize it for what it is. It certainly doesn't have the science-fictional sheen of some of the more enthusiastic scenarios.

There are systems in the world that do begin to approach such scenarios in terms of their elegance and imperceptibility. The qualities defined way back in Section 1 as being diagnostic of everyware—information processing embedded in everyday objects, dissolving in behavior—can already be found in systems used by millions of people each day.

We've already discussed PayPass and Blink, the RFID-based payment systems that will receive their large-scale commercial rollouts by the end of 2005. What if they succeed beyond their sponsors' expectations and become a matter-of-fact element of daily life? What if you could use the same system to pay for everything from your mid-morning latte to a few quick copies at the local 7-Eleven to the train home in the evening—all with a jaunty wave of your wrist?

If you've ever visited Hong Kong, or are lucky enough to live there, you know exactly what this would look like: Octopus. Octopus is a contactless, stored-value "smartcard" used for electronic payment throughout Hong Kong, in heavy and increasing daily use since 1997, and it gives us a pretty good idea of what everyware looks like when it's done right.

Appropriately enough, given its origins as a humble transit pass, Octopus can be used on most of the city's wild and heterogeneous tangle of public transportation options, even the famous Star Ferries that ply the harbor.

Even if getting around town were the only thing Octopus could be used for, that would be useful enough. But of course that's not all you can do with it, not nearly. The cards are anonymous and as good as cash at an ever-growing number of businesses, from Starbucks to local fashion retailer Bossini. You can use Octopus at vending machines, libraries, parking lots, and public swimming pools. It's quickly replacing keys, card and otherwise, as the primary means of access to a wide variety of private spaces, from apartment and office buildings to university dorms. Cards can be refilled at just about any convenience store or ATM. And, of course, you can get a mobile with Octopus functionality built right into it, ideal for a place as phone-happy as Hong Kong.*

* Despite the popular "Octo-phone" moniker, Nokia made the canny decision to embed the Octopus RFID unit not in any one model of phone, but in an interchangeable faceplate.

If this description sounds a little breathless, it's because I have used Octopus myself, in many of the contexts above, and experienced a little millennial flush of delight every time I did so. The system's slogan is "making everyday life easier," and rarely has a commercial product made good on its tagline so completely. And if you want to know what "information processing dissolving in behavior" really looks like, catch the way women swing their handbags across the Octopus readers at the turnstiles of the Mong Kok subway station; there's nothing in the slightest to suggest that this casual, 0.3-second gesture is the site of intense technical intervention.
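
What happens inside that moment is, in outline, nothing more mysterious than a stored-value debit. The toy sketch below suggests only its general shape; the real Octopus protocol is proprietary, and the card identifier, fare, and balance here are invented for illustration.

# A toy sketch of the stored-value debit hidden inside a 0.3-second gesture.
# The real Octopus protocol is proprietary; everything below is invented
# purely to suggest the general shape of such a transaction.
class StoredValueCard:
    def __init__(self, card_id: str, balance_hkd: float) -> None:
        self.card_id = card_id            # anonymous identifier, not a person
        self.balance_hkd = balance_hkd

class TurnstileReader:
    def debit(self, card: StoredValueCard, fare_hkd: float) -> bool:
        # Deduct the fare if the card carries sufficient value; refuse otherwise.
        if card.balance_hkd < fare_hkd:
            return False                  # gate stays shut; the rider tops up
        card.balance_hkd -= fare_hkd
        return True

card = StoredValueCard("octopus-anon-7F3A", balance_hkd=50.0)
gate = TurnstileReader()
print(gate.debit(card, 9.5), card.balance_hkd)   # True 40.5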

According to the Octopus consortium, 95 percent of Hong Kong citizens between the ages of 16 and 65 use their product; you don't get much more ubiquitous than that. As of late 2004, the last period for which full figures are available, Octopus recorded some eight million transactions a day—more, in other words, than there are people in the city. Is this starting to sound like something real?

Nor should we make the mistake of thinking that the daily experience of everyware is limited to the other side of the Pacific. Something closer to home for American readers is the E-ZPass electronic toll-collection system, now used on highways, bridges, and tunnels throughout the Northeast Corridor.

E-ZPass, like California's FasTrak, is an RFID-based system that lets subscribers sail through toll plazas without stopping: A reader built into the express-lane infrastructure queries dashboard- or windshield-mounted tags and automatically debits the subscriber's account. While the system is limited to highways and parking lots at present, some Long Island McDonald's outlets are experimenting with a pilot program allowing customers to pay for their fries and burgers with debits from their E-ZPass accounts. It's not quite Octopus yet—not by a long shot—but a more useful system is there in embryo, waiting for the confluence of corporate and governmental adoption that would render it truly ubiquitous.*

* In retrospect, the stroke of genius that secured early success for Octopus was enlisting all of Hong Kong's six major transit systems—and thus, indirectly, the majority-partner government itself—in the joint venture. Ordinarily competitors, each had strong incentive to promote the system's wider adoption.

What fully operational systems such as Octopus and E-ZPass tell us is that privacy concerns, social implications, ethical questions, and practical details of the user experience are no longer matters for conjecture or supposition. With ubiquitous systems available for empirical inquiry, these are things we need to focus on today.

images Thesis 67

Everyware is an immediate issue because it will appear to be a commercially reasonable thing to attempt in the near term.

If Octopus isn't a sufficiently robust instance of real-world everyware for you, perhaps you'll be more impressed by what's happening along the Incheon waterfront, some 40 miles southwest of Seoul. Rising from the sea here are 1,500 acres of newly reclaimed land destined to become South Korea's "ubiquitous city," New Songdo.

New Songdo is being designed, literally from the ground up, as a test bed for the fullest possible evocation of ubiquitous technology in everyday life. Like a living catalogue of all the schemes we've spent the last hundred-and-some pages discussing, it will be a place where tossing a used soda can into a recycling bin will result in a credit showing up in your bank account—where a single smartcard will take you from bus to library to bar after work, and straight through your front door. In fact, almost every scenario we've covered is reflected somewhere or another in New Songdo's marketing materials; the developers have even included the pressure-sensitive flooring for the homes of older residents, where it's once again touted as being able to detect falls and summon assistance. It's quite a comprehensive—and audacious—vision.

And while it certainly sounds like something out of AT&T's infamously techno-utopian "You Will" commercials of the early 1990s, New Songdo is entirely real. It's being built right now, at a cost estimated to be somewhere north of $15 billion.

That financing for the project is being provided by international institutions like ABN Amro, as well as Korean heavyweights Kookmin Bank and Woori Bank, should tell us something. It doesn't even particularly matter if few of the "enhancements" planned for this or other East Asian "u-cities" pan out entirely as envisioned; that hardheaded, profit-driven businesspeople see a reasonable chance of a return on their investment in everyware is enough to lend the notion commercial credibility.

Of course, New Songdo's planners present it to the world as more than just smart floors and RFID-scanning trash cans. It's being promoted as a 21st century trade portal, an English-speaking "Free Economic Zone" richly supplied with multimodal transportation links to the Incheon seaport and the international airport some six miles away. And it's true that banks, even large and savvy ones, have made costly blunders before.

But the fact that such institutions are willing to underwrite a project that places such weight on its ubiquitous underpinnings advances any discussion of the technology to a new and decisive phase. New Songdo isn't about one or two of the prototype systems we've discussed graduating into everyday use; it's something on the order of all of them, all at once, with their performance to spec crucial to the success of a going business concern. This is the most powerful argument yet in favor of rapidly formulating the conventions and standards that might positively affect the way "the u-life" is experienced by the residents of New Songdo and all those who follow.

images Thesis 68

Given that, in principle, all of the underpinnings necessary to construct a robust everyware already exist, the time for intervention is now.

The lack of design documentation, the absence of widely agreed-upon standards, the yawning gaps in deployed network infrastructure, and above all the inordinate complexity of many of the challenges involved in everyware certainly suggest that its deployment is in some sense a problem for the longer term. Perhaps we're reading too much into the appearance of a few disarticulated systems; it's possible that the touchless payment systems and tagged cats and self-describing lampposts are not, after all, part of some overarching paradigm.

If, on the other hand, you do see these technologies as implying something altogether larger, then maybe we ought to begin developing a coherent response. It is my sense that if its pieces are all in place—even if only in principle—then the time is apt for us to begin articulating some baseline standards for the ethical and responsible development of user-facing provisions in everyware.

We should do so, in other words, before our lives are blanketed with the poorly imagined interfaces, infuriating loops of illogic, and insults to our autonomy that have characterized entirely too much human-machine interaction to date. Especially with genuinely ubiquitous systems like PayPass and Octopus starting to appear, there's a certain urgency to all this.

As it turns out, after some years of seeing his conception of ubicomp garbled—first by "naively optimistic" engineers, and then by "overblown and distorted" depictions of its dangers in the general-interest media—Mark Weiser had given some thought to this. In a 1995 article called "The Technologist's Responsibilities and Social Change," he enumerated two principles for inventors of "socially dangerous technology":

1.    Build it as safe as you can, and build into it all the safeguards to personal values that you can imagine.

2.    Tell the world at large that you are doing something dangerous.

In a sense, that's the project of this book, distilled into 32 words.

What Weiser did not speak to on this occasion—and he was heading into the final years of his life, so we will never know just how he would have answered the question—was the issue of timing. When is it appropriate to "tell the world at large"? How long should interested parties wait before pointing out that not all of the "appropriate safeguards" have been built into the ubiquitous systems we're already being offered?

My guess, in both cases, is that Weiser's response would be the earliest possible moment, when there's still at least the possibility of making a difference. Even if everyware does take the next hundred years to emerge in all its fullness, the time to assert our prerogatives regarding its evolution is now.

images Thesis 69

It is ethically incumbent on the designers of ubiquitous systems and environments to afford the human user some protection.

We owe to the poet Delmore Schwartz the observation that "in dreams begin responsibilities." These words were never truer than they are in the context of everyware.

Those of us who have participated in this conversation for the last several years have dreamed a world of limitless interconnection, where any given fact or circumstance can be associated with an immensely large number of others. And despite what we can see of the drawbacks and even dangers implied, we have chosen to build the dream.

If the only people affected by this decision were those making it, that would be one thing. Then it wouldn't really matter what kind of everyware we chose to build for ourselves, any more than I'm affected right now by Steve Mann's cyborg life, or by the existence of someone who's wired every light and speaker in their home to a wood-grained controller they leave on the nightstand. However strange or tacky or pointless such gestures might seem, they harm no one. They're ultimately a matter of individual taste on the part of the person making them and therefore off-limits to regulation in a free society.

But that's not, at all, what is at stake here, is it? By involving other people by the hundreds of millions in our schemes of ubiquity, those of us designing everyware take onto our own shoulders the heaviest possible burden of responsibility for their well-being and safety. We owe it to them to anticipate, wherever possible, the specific circumstances in which our inventions might threaten the free exercise of their interests, and, again wherever possible, to design into the things we build provisions that would protect those interests.

This is not paternalism; in fact, it's just the opposite. Where paternalism is the limitation of choice, all I am arguing for is that people be informed of just what it is they are being offered in everyware, at every step of the way, so that they can make meaningful decisions about the place they wish it to have in their lives.

The remainder of this book will articulate some general principles we should observe in the development of ubiquitous computing to secure the interests of those people most affected by it.