Everyware: The Dawning Age of Ubiquitous Computing - Adam Greenfield (2006)

Section 3. What's Driving the Emergence of Everyware?

Section 2 explored why the transition from personal computing to a technology of ubiquitous networked devices is truly a "paradigm shift." Why does the emergence of such a radical and potentially disruptive technology seem so ineluctable? What are some of the converging trends that support its emergence?

Thesis 24

Everyware, or something very much like it, is effectively inevitable.

We've considered some of the ways the emergence of everyware seems to be overdetermined. There are forces aplenty driving its appearance, from the needs of the elderly infirm in the world's richest societies to those of nonliterate users in the developing world.

There is an argument to be made that the apparent significance of these drivers is illusory—that Weiser and the other prophets of ubiquitous technology were simply wrong about what people would want from computing, and particularly that they underestimated the persistent appeal of the general-purpose desktop machine despite its flaws.

In this view, most of the products or services we've discussed here will come to fruition, but they'll never amount to much more than bits and pieces, an incoherent scatter of incompatible technologies. Meanwhile, for quite some time to come, we'll continue to interact with information technology much as we have for the last decade, using ever more-sophisticated and possibly more-"converged," but essentially conventional, PCs.

In fairness, there's plenty of empirical support for this position. The streamlined "information appliances" Don Norman imagined got their trial in the market and failed; despite flattering notices in magazine articles and the like, I've never actually met someone who owns one of the "ambient devices" supposed to represent the first wave of calm technology for the home. There seems to be little interest in the various "digital home" scenarios, even among the cohort of consumers who could afford such things and have been comparatively enthusiastic about high-end home theater.*

But I don't think this is anything like the whole story. In fact, barring the wholesale collapse of highly technological civilization on Earth, I believe the advent of some fairly robust form of everyware is effectively inevitable, at least in the so-called "First World." So many of the necessary material and intellectual underpinnings are already fully developed, if not actually deployed, that it is very hard to credit scenarios beyond the near term in which ubiquitous computing does not play some role in everyday life. All the necessary pieces of the puzzle are sitting there on the tabletop, waiting for us to pick them up and put them together.

But let's first do away with the idea that I am depending on a lawyerly, not to say Clintonian, parsing of definitions. Proclaiming the inevitability of everyware would be a fatuously empty proposition if all I meant by it was that touchless e-cash transactions would begin to replace credit cards, or that you'll soon be able to answer your phone via your television. I mean to assert, rather, that everyware, the regime of ambient informatics it gives rise to, and the condition of ambient findability they together entrain, will have significant and meaningful impact on the way you live your life and will do so before the first decade of the twenty-first century is out.

This is such a strong claim that I'll devote the remainder of this section to supporting it in sufficient detail that I believe you will be convinced, whatever your feelings at the moment.

* A Motorola executive, interviewed in a recent issue of The Economist, asserted the rather patronizing viewpoint that if customers didn't want these conveniences, they'd simply have to be "educated" about their desirability until they did manage to work up the appropriate level of enthusiasm. In other words, "the floggings will continue until morale improves."

Thesis 25

Everyware has already staked a claim on our visual imaginary, which in turn exerts a surprising influence on the development of technology.

Before we turn to more material drivers, we might first want to attend to a surprisingly influential force that does so much to bolster everyware's aura of inevitability, and that is how often we've already seen it.

More so than in many other fields of contemporary technologic endeavor, in everyware pop culture and actual development have found themselves locked in a co-evolutionary spiral. Time and again, the stories we've told in the movies and the pages of novels have gone on to shape the course of real-world invention. These, in their turn, serve as seed-stock for ever more elaborate imaginings, and the cycle continues.

Beyond genre SF, where the eventual hegemony of some kind of ubiquitous computing has long been an article of faith, traces of everyware's arrival have already turned up in literary fiction. David Foster Wallace lightly drops one or two such intimations into his recent short story "Mister Squishy," while Don DeLillo captures the zeitgeist particularly well in his 2003 Cosmopolis; the latter's protagonist, a maximally connected trader in currencies, muses that the discrete devices he relies on are "already vestigial...degenerate structures."*

*"Computers will die. They're dying in their present form. They're just about dead as distinct units. A box, a screen, a keyboard. They're melting into the texture of everyday life...even the word 'computer' sounds backward and dumb." But for the punchy cadence, the words could well be Mark Weiser's.

Despite these surfacings, though, as well as the undeniable cultural impact of some other visions which have similarly never left the printed page—William Gibson's original depiction of cyberspace comes to mind—it's the things we see up on the screen that generally leave the strongest emotional impression on us.

Movies have certainly shaped the conception of ubiquitous artifacts before, from Jun Rekimoto's DataTiles, the design of which was explicitly inspired by HAL 9000's transparent memory modules in 2001: A Space Odyssey, to a long series of products and services that seem to owe their visual forms entirely to the influence of 1971's THX 1138. But for most nonaficionados, everyware's most explicit and memorable claim on the visual imaginary has been 2002's Minority Report.

For Minority Report, director Steven Spielberg asked interaction and interface designers from the MIT Media Lab, Microsoft Research, Austin-based Milkshake Media, and elsewhere to imagine for him what digital media might look like in 2054. They responded with a coherent vision binding together embedded sensor grids, gestural manipulation of data, newspaperlike information appliances, dynamic and richly personalized advertising, and ubiquitous biometric identification, all undergirded by a seamless real-time network. There is no doubt that their vision, interpreted for the screen, helped mold our shared perception of what would be technically possible, likely, or desirable in next-generation computing.

But before that could happen, a little alchemy would be required. With one or two exceptions, the actual prototypes submitted to the Minority Report production are awkward and unconvincing. They look, in fact, like what they are: things designed by engineers, for engineers. It took futurists immersed in the art of visual storytelling to take these notions and turn them into something compelling—and it was the synthesis of all these ideas in the vivid, if scenery-chewing, vignette that opens Minority Report that sold it.

True, the same ideas could have been (and of course had been) presented in academic papers and research conferences and gone little remarked upon outside the community of people working in human-computer interaction. But when situated in a conventionally engaging narrative, animated by recognizable stars, and projected onto megaplexed screens with all of the awesome impact of a Hollywood blockbuster, this set of notions about interface immediately leapt from the arcane precincts of academe into the communal imaginary.

This is partly a matter of the tools any film has at its disposal (emotional, evocative, environment-shaping) and partly a simple matter of scale. Unlike, say, the audience for this book, Minority Report's audience was probably not one composed of people inclined to think about such things outside the context of imaginings on screen, at least not in any detail. But over the course of their two hours in the dark, millions of moviegoers absorbed a vivid idea of what might be working its way toward them, a hook on which to hang their own imaginings and expectations. And where a scholarly paper on gestural device interfaces might be read by tens of thousands, at most, the total lifetime audience for such a thing is easily trumped by a blockbuster's on its opening weekend alone.

Closing the circuit, some members of that audience then go on to furnish the world with the things they've seen. The imaginary informs the world—not of 2054, as it turns out, but of 2005: Media Lab alumnus John Underkoffler, designer of the gesture-driven interface in Report, was sought out by a group at defense contractor Raytheon, whose members had seen and been impressed by the film. He was eventually hired by Raytheon to develop similar systems for the U.S. military's "net-centric warfare" efforts, including a shared interface called the Common Tactical Blackboard.

Thus is the fantastic reified, made real.

Thesis 26

Something with the properties we see in everyware was foreordained the moment tools and services began to be expressed digitally.

Long before there was an everyware, we simply had tools. Some of them were mechanical in nature, like timepieces and cameras. Others were electric, or electronic, in whole or in part: radios, telephones, televisions. Others still were larger, less mobile, designed to perform a single function: appliances. And together, they comprised a technics of everyday life.

It was, of course, an analog universe. Where these tools gathered information about the world, it was encoded as the state of a continuously variable physical system: so many turns of a toothed wheel, an etched groove of such-and-such depth. And this had its advantages: to this day, there are those who swear by the tone and richness of analog recordings or the images produced by the fall of light on film grain.

In time, though, many of the tools that had been electric, mechanical, or some combination of the two were recast as digital, which is to say that when they encoded information about the world, it was rendered as a discrete, stepped progression of ones and zeroes. This afforded perfect fidelity in reproduction, more efficient transmission, all but cost-free replication. And so the analog Walkman gave way to the digital iPod, the Motorola MicroTAC begat the RAZR, the original Canon Elph became—what else?—the Digital Elph.

None of the analog devices could have communicated with each other—they barely even related to one another. What, after all, does a song on cassette have to do with an image burned into film? You could rub them against each other all day and not get anything for your trouble but scratched celluloid, tangled up in ribbons of magnetic tape. What could you have done with either over a telephone, except tell the person on the other end of the line all about the neat songs and pretty pictures?

All of the digital devices can and do communicate with each other, routinely and trivially. You can take a picture with your camera, send it from your phone, store it on your iPod. In just a few brief years, we've come to regard transactions like this as thoroughly unremarkable, but they're marvelous, really—almost miraculous. And they owe everything to the fact that all of the devices involved share the common language of on and off, yes or no, one and zero.

We too often forget this. And although I would prefer to resist determinism in any of its forms, above all the technological, it's hard to argue with in this instance. It's not simply that, as my former employers at Razorfish used to say, "Everything that can be digital, will be"; it's that everything digital can by its very nature be yoked together, and will be.

This is the logic of "convergence." Everything connects.

Thesis 27

Everyware is structurally latent in several emerging technologies.

The seemingly ineluctable logic of connection is not the only one driving the emergence of everyware. There is another type of determinism at work here, as well, harder to substantiate but no less real.

There must still be those, somewhere, who would insist that all technologies come into being neutral and uninflected, freely available for any use whatsoever. But ever since McLuhan, it's been a little difficult to take such a view seriously. A more nuanced stance would be that technologies do contain inherent potentials, gradients of connection. Each seems to fit into the puzzle that is the world in certain ways and not others.

This is not to say that social, juridical, and political forces do not exert shaping influences that are at least as significant—otherwise we really would have architected our cities around the Segway, and RU-486 would be dispensed over every drugstore counter in the land. But it wouldn't have taken a surplus of imagination, even ahead of the fact, to discern the original Napster in Paul Baran's first paper on packet-switched networks, the Manhattan skyline in the Otis safety elevator patent, or the suburb and the strip mall latent in the heart of the internal combustion engine.

Let's draw three emerging technologies from the alphabet soup of new standards and specifications we face at the moment and take a look at what they seem to "want."

First, RFID, the tiny radio-frequency transponders that are already doing so much to revolutionize logistics. The fundamental characteristic of an RFID tag is cheapness—as of mid-2004, the unit production cost of a standard-issue passive tag stood at about fifty cents, but industry sources are unanimous in predicting a drop below five cents in the next few years.

Somewhere around the latter price point, it becomes economic to slap tags onto just about everything: every toothbrush, every replacement windshield wiper and orange-juice carton in existence. And given how incredibly useful the things are—they readily allow the tracking, sorting, and self-identification of items they're appended to, and much more besides—there are likely to be few persuasive arguments against doing so. RFID "wants" to be everywhere and part of everything.

In networking, the next step beyond the Wi-Fi and Bluetooth standards we're familiar with is a technology called ultra-wideband (UWB), a low-power scheme that relays data at rates upwards of 500 megabits per second—around ten times faster than current wireless. UWB is rich enough to support the transmission of multiple simultaneous streams of high-definition video, agile and responsive enough to facilitate ad-hoc mesh networking.* UWB wants to be the channel via which all the world's newly self-identifying artifacts transact and form spontaneous new connections.

Of course, if you want to send a message, it helps to have an address to send it to. At the moment, the prospects for anything like ubiquitous computing at the global level are starkly limited by a shortage of available addresses. But as we'll see in Section 6, the new Internet Protocol, IPv6, provides for an enormous expansion in the available address space—enough for every grain of sand on the planet to have its own IP address many times over, should such an improbable scenario ever prove desirable. Why specify such abyssal reaches of addressability, if not to allow every conceivable person, place, and artifact to have a comfortable spread of designators to call their own? IPv6 wants to transform everything in the world, even every part of every thing, into a node.
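To get a sense of the scale behind that claim, a back-of-the-envelope sketch is enough. The figure assumed below for the world's sand grains (roughly 7.5 × 10^18) is a commonly cited rough estimate, used purely for illustration.

```python
# Rough scale of the IPv6 address space against an assumed estimate of
# the number of grains of sand on Earth. The sand-grain figure is a
# commonly cited ballpark, not a measured quantity.

ipv6_addresses = 2 ** 128      # IPv6 addresses are 128 bits long
sand_grains = 7.5e18           # assumed rough estimate, for illustration only

print(f"Total IPv6 addresses:        {ipv6_addresses:.2e}")                # ~3.40e+38
print(f"Addresses per grain of sand: {ipv6_addresses / sand_grains:.2e}")  # ~4.54e+19
```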

* An ad-hoc network is one that forms spontaneously, from whatever nodes are available at the moment. Mesh networking supports decentralized connectivity, with each node dynamically routing data to whichever neighbor affords the fastest connection at the moment. A scheme with both properties—self-configuring, self-healing, and highly resistant to disruption—is ideal for everyware.
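For those who want the routing idea in the note above made concrete, here is a minimal sketch of the greedy next-hop choice it describes; the node names and link-quality figures are invented for illustration.

```python
# Minimal sketch of greedy next-hop selection in a mesh: each node hands
# data to whichever currently reachable neighbor reports the best link.
# Neighbor names and link-quality numbers are invented for illustration.

def choose_next_hop(neighbors):
    """neighbors maps neighbor id -> measured link quality; absent = unreachable."""
    if not neighbors:
        return None  # isolated node, nothing to forward to
    return max(neighbors, key=neighbors.get)

# The table would be refreshed continuously as nodes join, leave, or links
# degrade, which is what makes the mesh self-configuring and self-healing.
print(choose_next_hop({"lamp": 2.1, "thermostat": 5.4, "doorbell": 0.3}))  # -> thermostat
```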

These are minuscule technologies, all of them: technologies of low power, low range, fine-grained resolution, and low cost. There is something in the nature of all of them that seemingly bespeaks a desire to become part of literally everything. Advertently or otherwise, we've created artifacts and standards that don't merely provide for such a thing—they almost seem to be telling us that this is what they want us to do with them.

Thesis 28

Everyware is strongly implied by the need of business for continued growth and new markets beyond the PC.

That Motorola executive recently interviewed by The Economist spoke the truth after all: there really is a need to educate consumers about "the value of a connected home and lifestyle"...to Motorola. (Not to single Motorola out, of course.)

Whether or not any one of us has asked to live in such a home, or would ever dream of pursuing such a "lifestyle," there are hard-nosed business reasons why everyware looks like a safe bet. Entire sectors of the economy are already looking to the informatic colonization of everyday things, and not merely as part of an enhanced value proposition offered the purchaser of such things. For manufacturers and vendors, the necessary gear represents quite a substantial revenue stream in its own right.

The logic of success in late capitalism is, of course, continuous growth. The trouble is that the major entertainment conglomerates and consumer-electronics manufacturers have hit something of a wall these last few years; with a few exceptions (the iPod comes to mind), we're not buying as much of their product as we used to, let alone ever more of it. Whether gaming systems, personal video recorders (PVRs), or video-enabled mobile phones, nothing has yet matched the must-have appeal of the PC, let alone reached anything like television's level of market penetration.

Putting with maximum bluntness an aspect of the ubiquitous computing scenario that is rarely attended to as closely as it ought to be: somebody has to make and sell all of the sensors and tags and chipsets and routers that together make up the everyware milieu, as well as the clothing, devices, and other artifacts incorporating them. One rather optimistic analyst sees the market for "digital home" componentry alone growing to $1 trillion worldwide by the end of the decade (yes, trillion, with a tr), and that doesn't include any of the other categories of ubiquitous information-processing gear we've discussed.

So if businesses from Samsung to Intel to Philips to Sony have any say in the matter, they'll do whatever they can to facilitate the advent of truly ubiquitous computing, including funding think tanks, skunk works, academic journals, and conferences devoted to it, and otherwise heavily subsidizing basic research in the field. If anything, as far as the technology and consumer-electronics industries are concerned, always-on, real-time any- and everyware can't get here fast enough.

Thesis 29

Everyware is strongly implied by the needs of an aging population in the developed world.

At the moment, those of us who live in societies of the global North are facing one of the more unusual demographic transitions ever recorded. As early childhood immunization has become near-universal over the last half-century, access to the basics of nutrition and healthcare has also become more widespread. Meanwhile, survival rates for both trauma and chronic conditions like heart disease and cancer have improved markedly, yielding to the application of medical techniques transformed, over the same stretch of time, by everything from the lessons of combat surgery, to genomics, to materials spun off from the space program, to the Internet itself.

It really is an age of everyday wonders. One reasonably foreseeable consequence of their application is a population with a notably high percentage of members over the age of sixty-five. With continued good fortune, many of them will find themselves probing the limit of human longevity, which currently seems to stand pretty much where it has for decades: somewhere around the age of 115.*

* Curiously enough, after a demographic bottleneck, it is the percentage of the "oldest old" that is rising most markedly. Apparently, if you can somehow manage to survive to eighty-five, your odds of enjoying an additional ten or even twenty years are sharply improved.

At the same time, though, with fertility rates plummeting (the populations of North America and Western Europe would already have fallen below replacement level if not for immigration, while Russia and Japan shrink a little with every passing year), there are fewer and fewer young people available to take on the traditional role of looking after their elders. At least in this wide swath of the world, society as a whole is aging. For the first time, we'll get to explore the unfolding consequences of living in a gerontocracy.

This inevitably raises the question of how best to accommodate the special needs of a rapidly graying population. Unfortunately, our present arrangements—assisted-living communities, round-the-clock nursing for those who can afford it—don't scale very well, complicated by prideful reluctance or simple financial inability to accept such measures on the part of a great many. Even if everyone turning eighty wanted to and could afford to do so, neither appropriate facilities nor the qualified people to staff them exist in anything like the necessary numbers. So the remaining alternative is to try to find some way to allow people to "age in place," safely and with dignity and autonomy intact.*

* Obviously, there are many alternative responses to this challenge, some of which are social or political in nature. In ubicomp circles, though, they are almost never countenanced—it rarely seems to occur to some of the parties involved that these ends might better be served by encouraging people to become caretakers through wage or benefit incentives or liberalizing immigration laws. The solution is always technical. Apparently, some of us would rather attempt to develop suitably empathetic caretaker robots than contemplate raising the minimum wage.

A number of initiatives, from the Aware Home consortium based at the Georgia Institute of Technology to Nomura Research Institute's various "ubiquitous network" efforts, have proposed a role for ubiquitous computing in addressing the myriad challenges confronting the elderly. (If a high percentage of such proposals seem to be Japanese in origin, there's a reason: the demographic crisis is especially pressing in Japan, which is also almost certainly the society most inclined to pursue technical solutions.)

Some systems, though originally developed for the elderly, have broad application for use with children, the disabled, or other groups for whom simply navigating the world is a considerable challenge—for example, a wearable, RFID-based system recently described in the Japanese Mainichi Shimbun that automatically turns crossing signals green for elderly citizens, holding oncoming traffic until they have crossed safely.

Others are more focused on addressing the specific issues of aging. Context-aware memory augmentation—in the senses of finding missing objects, recalling long-gone circumstances to mind, and reminding someone boiling water for tea that they've left the kettle on—would help aged users manage a daily life suddenly become confusing, or even hostile. Equally importantly, such augmentation would go a long way toward helping people save face, by forestalling circumstances in which they would seem (or feel themselves to be) decrepit and forgetful.

Users with reduced vision or advanced arthritis will find voice-recognition and gesture-based interfaces far easier to use than those involving tiny buttons or narrow click targets—this will become especially critical in managing viewscreens and displays, since they may be the main source of socialization, entertainment and mental stimulation in a household. Such "universal" interfaces may be the difference that allows those with limited mobility to keep in touch with distant family members or friends in similar circumstances.

Meanwhile, the wearable biometric devices we've discussed have particular utility in geriatric telemedicine, where they can enable care centers to keep tabs on hundreds of clients at a time, monitoring them for sudden changes in critical indicators such as blood pressure and glucose level. The house itself will assume responsibility for monitoring other health-related conditions, detecting falls and similar injuries, and ensuring that users are both eating properly and taking their prescribed medication on schedule.
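A toy sketch suggests what such monitoring might amount to in practice; the vital signs, thresholds, and readings below are invented for illustration, not drawn from any actual telemedicine system.

```python
# Toy illustration of threshold-based remote monitoring: a care center
# receives periodic readings from a client's wearables and flags anything
# outside a configured safe range. All names and numbers are invented.

SAFE_RANGES = {
    "systolic_bp": (90, 160),   # mmHg
    "glucose": (70, 180),       # mg/dL
}

def check_vitals(readings):
    """Return the (vital, value) pairs that fall outside their safe range."""
    alerts = []
    for vital, value in readings.items():
        low, high = SAFE_RANGES.get(vital, (float("-inf"), float("inf")))
        if not low <= value <= high:
            alerts.append((vital, value))
    return alerts

print(check_vitals({"systolic_bp": 178, "glucose": 110}))
# -> [('systolic_bp', 178)]  -- a reading the care center would follow up on
```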

To so many of us, the idea of living autonomously long into old age, reasonably safe and comfortable in our own familiar surroundings, is going to be tremendously appealing, even irresistible—even if any such autonomy is underwritten by an unprecedented deployment of informatics in the home. And while nothing of the sort will happen without enormous and ongoing investment, societies may find these investments more palatable than other ways of addressing the issues they face. At least if things continue to move in the direction they're going now, societies facing the demographic transition will be hard-pressed to respond to the needs of their elders without some kind of intensive information-technological intervention.

Thesis 30

Everyware is strongly implied by the ostensible need for security in the post-9/11 era.

We live, it is often said, in a surveillance society, a regime of observation and control with tendrils that run much deeper than the camera on the subway platform, or even the unique identifier that lets authorities trace the movements of each transit-pass user.

If some of the specific exercises of this watchfulness originated recently—to speak with those who came to maturity anytime before the mid-1980s is to realize that people once showed up for flights with nothing more than cash in hand, opened savings accounts with a single check, or were hired without having to verify their citizenship—we know that the urge to observe and to constrain has deep, deep roots. It waxes and wanes in human history, sometimes hemmed in by other influences, other times given relatively free rein.

We just happen to be living through one of the latter periods, in which the impulse for surveillance reaches its maximum expression—its sprawling ambit in this case accommodated by the same technologies of interconnection that do so much to smooth the other aspects of our lives. If there was any hope of this burden significantly lightening in our lifetimes, though, it almost certainly disappeared alongside so many others, on the morning of September 11, 2001.

The ostensible prerogatives of public safety in the post–September 11 era have been neatly summarized by curators Terence Riley and Guy Nordenson, in their notes to the 2004 Museum of Modern Art show "Tall Buildings," as "reduce the public sphere, restrict access, and limit unmonitored activity." In practice, this has meant that previous ways of doing things in the city and the world will no longer do; our fear of terror, reinscribed by the bombings in Bali, Madrid and London, has on some level forced us to reassess the commitment to mobility our open societies are based on.

This is where everyware enters the picture. At the most basic level, it would be difficult to imagine a technology more suited to monitoring a population than one sutured together from RFID, GPS, networked biometric and other sensors, and relational databases; I'd even argue that everyware redefines not merely computing but surveillance as well.*

* A recent Washington Post article described a current U.S. government information-gathering operation in which a citizen's "[a]ny link to the known terrorist universe—a shared address or utility account, a check deposited, [or even] a telephone call" could trigger their being investigated. The discovery of such tenuous connections is precisely what relational databases are good for, and it's why privacy experts have been sounding warnings about data mining for years. And this is before the melding of such databases with the blanket of ubiquitous awareness implied by everyware.

But beyond simple observation there is control, and here too the class of information-processing systems we're discussing has a role to play. At the heart of all ambitions aimed at the curtailment of mobility is the demand that people be identifiable at all times—all else follows from that. In an everyware world, this process of identification is a much subtler and more powerful thing than we often consider it to be; when the rhythm of your footsteps or the characteristic pattern of your transactions can give you away, it's clear that we're talking about something deeper than "your papers, please."

Once this piece of information is in hand, it's possible to ask questions like Who is allowed to be here? and What is he or she allowed to do here?, questions that enable just about any defensible space to enforce its own access-control policy—not just on the level of gross admission, either, but of finely grained differential permissioning. What is currently done with guards, signage, and physical barriers ranging from velvet rope to razor wire can still more effectively be accomplished when those measures are supplemented by gradients of access and permission—a "defense in depth" that has the additional appeal of being more or less subtle.

If you're having trouble getting a grip on how this would work in practice, consider the ease with which an individual's networked currency cards, transit passes and keys can be traced or disabled, remotely—in fact, this already happens.* But there's a panoply of ubiquitous security measures both actual and potential that are subtler still: navigation systems that omit all paths through an area where a National Special Security Event is transpiring, for example, or subways and buses that are automatically routed past. Elevators that won't accept requests for floors you're not accredited for; retail items, from liquor to ammunition to Sudafed, that won't let you purchase them, that simply cannot be rung up.

* If you purchase a New York City MetroCard with a credit or debit card, your identity is associated with it, and it can be used to track your movements. The NYPD tracked alleged rapist Peter Braunstein this way.

Context-aware differential permissioning used as a security tool will mean that certain options simply do not appear as available to you, like grayed-out items on a desktop menu. In fact, you won't get even that backhanded notification; you won't even know the options ever existed.
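A crude sketch can make the mechanism concrete: the set of actions presented to a person is filtered by identity and context before it is ever displayed, so a denied option never appears at all. The places, clearance levels, and policy table below are hypothetical, invented purely for illustration.

```python
# Crude sketch of context-aware differential permissioning: options are
# checked against a policy before they are ever shown, so a denied option
# simply never appears. The policy, places, and clearances are hypothetical.

POLICY = {
    # (place, action): clearances permitted to see and perform it
    ("lobby", "enter"): {"visitor", "staff", "admin"},
    ("elevator", "floor_12"): {"staff", "admin"},
    ("server_room", "enter"): {"admin"},
}

def available_actions(place, clearance):
    """Return only the actions this clearance is allowed at this place."""
    return [action
            for (pol_place, action), allowed in POLICY.items()
            if pol_place == place and clearance in allowed]

print(available_actions("elevator", "visitor"))  # -> []  (floor 12 is never offered)
print(available_actions("elevator", "staff"))    # -> ['floor_12']
```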

Such interventions are only a small sampling of the spectrum of control techniques that become available in a ubiquitously networked world. MIT sociologist Gary T. Marx sees the widest possible scope for security applications in an "engineered society" like ours, where "the goal is to eliminate or limit violations by control of the physical and social environment."

Marx identifies six broad social-engineering strategies as key to this control, and it should surprise no one that everyware facilitates them all.

·        We all understand the strategy of target removal: "something that is not there cannot be taken," and so cash and even human-readable credit and debit cards are replaced with invisible, heavily encrypted services like PayPass.

·        Target devaluation seeks to make vulnerable items less desirable to those who would steal them, and this is certainly the case where self-identifying, self-describing devices or vehicles can be tracked via their network connection.

·        For that matter, why even try to steal something that becomes useless in the absence of a unique biometric identifier, key or access code? This is the goal of offender incapacitation, a strategy also involved in attempts to lock out the purchase of denied items.

·        Target insulation and exclusion are addressed via the defense in depth we've already discussed—the gauntlet of networked sensors, alarms, and cameras around any target of interest, as well as all the subtler measures that make such places harder to get to.

·        And finally there is the identification of offenders or potential offenders, achieved via remote iris scanning or facial recognition systems like the one currently deployed in the Newham borough of London.

Who's driving the demand for ubiquitous technologies of surveillance and control? Obviously, the law-enforcement and other agencies charged with maintaining the peace, as well as various more shadowy sorts of government security apparatus. But also politicians eager to seem tough on terror, ever aware that being seen to vote in favor of enhanced security will be remembered at election time. Private security firms and rent-a-cops of all sorts. Building and facility managers with a healthy line item in their budget to provide for the acquisition of gear but neither the ongoing funds nor the authority to hire security staff. Again, the manufacturers and vendors of that gear, scenting another yawning opportunity. And never least, us, you and I, unable to forget the rubble at Ground Zero, spun senseless by routine Amber Alerts and rumors of Superdome riots, and happy for some reassurance of safety no matter how illusory.

These are obviously thorny, multisided issues, in which the legitimate prerogatives of public safety get tangled up with the sort of measures we rightfully associate with tyranny. There should be no doubt, though, that everyware's ability to facilitate the collection and leveraging of large bodies of data about a population in the context of security will be a major factor driving its appearance.

Thesis 31

Everyware is a strategy for the reduction of cognitive overload.

Happily, there are also less distressing arguments in support of everyware. One of the original motivations for conducting research into post-PC interfaces, in fact, was that they might ameliorate the sense of overload that so often attends the use of information technology.

An early culmination of this thinking was Mark Weiser and John Seely Brown's seminal "The Coming Age of Calm Technology," which argued that the ubiquity of next-generation computing would compel its designers to ensure that it "encalmed" its users. In their words, "if computers are everywhere, they better stay out of the way."

While part of Brown and Weiser's apparent stance—that designers and manufacturers would find themselves obliged to craft gentle interfaces just because it would clearly be the sensible and humane thing to do—may now strike us as naive, they were onto something.

They had elsewhere diagnosed computer-mediated information overload and its attendant stress, as some of the least salutary aspects of contemporary life. Even residing, as they then did, in an age before the widespread adoption of mobile phones in North America, they could foresee that the total cognitive burden imposed by a poorly designed ubicomp on the average, civilian user would be intolerable. (One wonders to what degree daily life at PARC in the early nineties prefigured the inbox/voicemail clamor we've all since grown so used to.) And so they set for themselves the project of how to counter such tendencies.

The strategy they devised to promote calm had to do with letting the user shift back and forth between the focus of attention and what they called the "periphery"—that which "we are attuned to without attending to explicitly." Just as, in your peripheral vision, you may see objects but not need to attend to them (or even necessarily be consciously aware of their presence), here the periphery was a place where information could reside until actively required.

To design systems that "inform without overburdening," though, you'd need to call upon a different set of interface modes than the conventional PC keyboard and mouse. Brown and Weiser thought input modes like these were a big part of the problem; Roy Want and his co-authors, in a 2002 paper, flatly state that "[n]ondesktop interface modalities, such as pen, speech, vision, and touch, are attractive" to the enlightened interface designer "because they require less of a user's attention than a traditional desktop interface."*

* The presence of "speech" on this list, and in so many depictions that come after, is interesting. Mark Weiser explicitly excluded voice-recognition interfaces from his vision of ubiquitous computing, pointing out that it would be "prominent and attention-grabbing" in precisely the way that "a good tool is not."

The ideal system would be one which was imperceptible until required, in which the user's focus fell not on the tool itself but on what they were actually attempting to do with it. Were there any real-world examples of such imperceptible tools that might be offered, so that people could begin to wrap their heads around what Brown and Weiser were proposing?

One of the first things they cited happened to be a feature of the hallway right outside their offices: artist Natalie Jeremijenko's installation Live Wire (also known as Dangling String). This was an "eight-foot piece of plastic spaghetti" attached to an electric motor mounted in the ceiling that was itself wired into the building's ethernet. Fluctuations in network traffic ran the motor, causing the string to oscillate visibly and audibly.

When traffic was low, Live Wire remained largely inert, but when activity surged, it would spring to life in such a way that it could both be seen by hallway passers-by and heard throughout the suite of nearby offices. You might not even be consciously aware of it—you would just, somewhere in the back of your mind, register the fact that traffic was spiking. Jeremijenko's approach and the results it garnered were true to everything Brown and Weiser had speculated about the periphery.

Despite its success, this was the last anyone heard of calm technology for quite a few years; the cause wasn't taken up again until the early 2000s, when a company called Ambient Devices offered for sale something it called the Ambient Orb. The Orb was a milky globe maybe ten centimeters in diameter that communicated with a proprietary wireless network, independent of the Internet. It was supposed to sit atop a desk or a night table and use gentle modulations of color to indicate changes in some user-specified quantity, from the weather (color mapped to temperature, with the frequency of pulses indicating likelihood of precipitation) to commute traffic (green for smooth sailing, all the way through to red for "incident").
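The logic behind such a device is simple enough to sketch. The mapping below, temperature to a blue-to-red hue and likelihood of precipitation to pulse rate, is invented for illustration and is not Ambient Devices' actual scheme.

```python
# Sketch of an Orb-style peripheral display: one quantity drives the hue,
# another drives how often the globe pulses. The color ramp and numbers
# here are invented; they are not Ambient Devices' actual scheme.

def orb_state(temperature_c, precip_probability):
    """Map temperature to a blue-to-red color and rain chance to pulse rate."""
    t = max(-10, min(40, temperature_c))        # clamp to a -10..40 C range
    warm = (t + 10) / 50.0                      # 0.0 = coldest, 1.0 = hottest
    color = (int(255 * warm), 0, int(255 * (1 - warm)))   # blue -> red

    # Higher chance of rain -> faster pulsing (shorter interval between pulses)
    pulse_seconds = max(1.0, 10.0 * (1.0 - precip_probability))
    return color, pulse_seconds

print(orb_state(temperature_c=28, precip_probability=0.8))
# -> ((193, 0, 61), 2.0): warm and probably wet, so reddish and pulsing briskly
```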

These examples are certainly more relevant to the way life is actually lived—more actionable—than a simple index of bits flowing through a network. But what if the information you're interested in is still more complex and multidimensional than that, such as the source, amount, and importance of messages piling up in your email inbox?

London-based designer/makers Jack Schulze and Matt Webb, working for Nokia, have devised a presentation called Attention Fader that addresses just this situation. It's a framed picture, the kind of thing you might find hanging on the side wall of an office cubicle, that appears at first glance to be a rather banal and uninflected portrait of a building along the south bank of the Thames.

But the building has a lawn before it, and a swath of sky above it, and there's a section of pathway running past, along the river embankment, and Schulze and Webb have used each of these as subtle channels for the display of useful information. Leave town for a few days, let your in-box fill up, and the number of people gaggling on the river path will slowly mount. Ignore a few high-priority messages, and first cars, then trucks, and finally tanks pull up onto the lawn; let the whole thing go, and after a while some rather malevolent-looking birds begin to circle in the sky.

But subtly, subtly. None of the crowds or trucks or birds is animated; they fade into the scene with such tact that it's difficult to say just when they arrive. It's precisely the image's apparent banality that is key to its success as a peripheral interface; it's neither loud, nor colorful, nor attention-grabbing in any obvious way. It is, rather, the kind of thing you glance up at from time to time, half-consciously, to let its message seep into your awareness. Those who see the picture at infrequent intervals mightn't notice anything but a London street scene.

Schulze and Webb's project is a paragon of encalming technology. It points clearly to a world in which the widespread deployment of information-processing resources in the environment paradoxically helps to reduce the user's sense of being overwhelmed by data. To invert Mies, here more is less.

As the global audience for computing surges past a billion, with each of those users exposed to tens or even hundreds of different technical systems in the course of a day, such encalming is going to be an appealing business case every bit as much as an ethical question for system designers. While Brown and Weiser were probably wrong as to just how strong an incentive it would provide, they were correct that the specter of global information overload would prompt at least some developers to pursue less intrusive interfaces—and these, in turn, will underwrite the further spread of everyware.

Thesis 32

Everyware is strongly implied by the continuing validity of Moore's law.

No matter what we choose to do with it, the shape that information technology takes in our lives will always be constrained by the economic and material properties of the processors undergirding it. Speed, power consumption profile, and unit production cost are going to exert enormous influence on the kinds of artifacts we build with processors and on how we use them.

Pretty much right up to the present moment, these qualities have been limiting factors on all visions involving the widespread deployment of computing devices in the environment. Processors have historically been too expensive, too delicate, and too underpowered to use in any such way, leaving computing cycles too scarce a commodity to spend on extravagances like understanding spoken commands.

As the price of processors falls dramatically, and computing power begins to permeate the world, the logic behind such parsimoniousness disappears—we can afford to spend that power freely, even lavishly, with the result that computing resources can be brought to bear on comparatively trivial tasks. We arrive at the stage where processor power can be economically devoted to addressing everyday life: As Mark Weiser put it, "where are the car keys, can I get a parking place, and is that shirt I saw last week at Macy's still on the rack?"

In fact, we know that scattering processors throughout the environment will only continue to get cheaper. The reasoning behind this assertion was first laid out in 1965 by engineer (and later Intel co-founder) Gordon Moore, in a now-legendary article in the industry journal Electronics. It would turn out to be one of the most profoundly influential observations in the history of computing, and as nakedly self-fulfilling a prophecy as there ever has been. (It's so well known in the industry, in fact, that if you feel like you've got a handle on what it implies for everyware, there's no reason for you not to skip ahead to Thesis 33.)

Moore's essay simply pointed out that the prevailing industry trend was for ever greater numbers of transistors to be packed into an ever smaller space, with the number of transistors per unit area approximately doubling every 24 months. He concluded almost parenthetically that the trend would continue for at least ten years into the future.

Transistor density being a fairly reliable stand-in for certain other qualities of a computer—notably, speed—this implied that future devices would offer sharply higher performance, in a smaller envelope, at a fixed cost. This "prediction" was actually a rather weak one, couched in a number of qualifiers, but nonetheless it has acquired the imposing name of "Moore's law."

Although the article never says so in so many words, Moore's law has almost universally been interpreted as a bald statement that the amount of processing power available at a given cost will double every eighteen months, indefinitely. Applied to the slightly different context of memory, the Moore curve predicts that a given amount of storage will cost roughly half as much a year and a half from now and take up half as much volume.*

* Nowhere in the annals of computing is it convincingly explained how the 24-month doubling period of Moore's original article became the 18-month period of geek legend. Moore himself insists to this day that he never used the latter number, either in his published comments or elsewhere.

That Moore's law was more or less consciously adopted as a performance goal by the chip-design industry goes a long way toward explaining the otherwise improbable fact that it still has some predictive utility after some forty years. Compare, for example, the original microprocessor, Intel's 1971 4004, to a 2004 version of the same company's Pentium 4 chip: the 4004 packed 2,300 transistors and ran at a clock speed of 740 KHz, while the Pentium 4 boasts a transistor count of 178 million and runs at 3.4 GHz. That's not so far off the numbers called for by a 24-month doubling curve.
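That back-of-the-envelope check is easy to run, using nothing beyond the figures quoted above and a doubling every 24 months:

```python
# Worked check of the 24-month doubling claim, starting from the 4004's
# 2,300 transistors in 1971 and doubling every two years through 2004.

transistors_1971 = 2_300
doublings = (2004 - 1971) / 2          # 33 years -> 16.5 doublings

predicted = transistors_1971 * 2 ** doublings
print(f"Predicted 2004 transistor count: {predicted:.2e}")  # ~2.13e+08
print(f"Reported Pentium 4 count:        {178e6:.2e}")      # 1.78e+08
# The curve slightly overshoots, but lands in the same order of magnitude --
# "not so far off," as noted above.
```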

In a purely technodeterminist reading, anyway, Moore's law tells us exactly where we're headed next. It's true that Gordon Moore made his observation in the long-ago of 1965, and so one might be forgiven for thinking that his "law" had little left to tell us. But as far as anyone knowledgeable can tell, its limits are a long way off. A vocal minority continues to assert the belief that even after the photolithography used in chip fabrication hits the limits inherent in matter, more exotic methods will allow the extension of Moore's unprecedented run. Whether or not Moore's law can be extended indefinitely, there is sufficient reason to believe that information-processing componentry will keep getting smaller, cheaper, and more powerful for some time yet to come.

Because processors will be so ridiculously cheap, the world can be seeded with them economically. Because their cheapness will mean their disposability, they'll be installed in places it wouldn't have made sense to put them before—light switches, sneakers, milk cartons. There will be so very, very many of them, thousands of them devoted to every person and place, and it won't really matter whether some percentage of them fail. They will be powerful individually, able to share computation among themselves besides, and able to parse the complexities presented by problems of everyday life. Whatever name it is called by, however little it may resemble the calm technology envisioned by Mark Weiser, a computing with these properties will effectively be ubiquitous, in any meaningful sense of the word.

Thesis 33

The appeal of everyware is at some level universal.

Be honest now: Who among us has not wished, from time to time, for some powerful sympathetic agency to intervene in our lives, to fix our mistakes and rescue us from the consequences of our lapses in judgment?

This is one desire I sense, beneath all the various projects devoted to ubiquitous surveillance or memory augmentation or encalming. What are they if not dreams of welcome and safety, of some cushion against the buffeting of our times? What are they if not a promise of some awareness in the world other than our own, infused into everything around us, capable of autonomous action and dedicated to our well-being?

In a sense this is only a return to a much older tradition. For most of our sojourn on this planet, human beings have understood the physical world as a place intensely invested with consciousness and agency; the idea that the world is alive, that the objects therein are sentient and can be transacted with, is old and deep and so common to all the cultures of humanity that it may as well be called universal.

As Freud described it, "the world was full of spirits...and all the objects in the external world were their dwelling-place, or perhaps identical with them." It is only comparatively recently that most people have believed otherwise—indeed, most of the humans who ever walked the planet would have found it utter folly to conceive of the natural world as mainstream Western culture did until very recently: a passive, inert, purely material stage, on which the only meaningful actors are human ones.

If we have always acted as though the things around us are alive, then the will to make it so in fact (or at least make it seem so) at the moment the technical wherewithal became available is understandable. That things like gestural and voice-recognition interfaces are so fervently pursued despite the many difficulties involved in perfecting them might tell us something about the deep roots of their appeal, if we're willing to listen.

Their long pedigree in science fiction merely extends the earlier tradition; folklore is replete with caves that open at a spoken command, swords that can be claimed only by a single individual, mirrors that answer with killing honesty when asked to name the fairest maiden in the land, and so on. Why, then, should anyone be surprised when we try to restage these tales, this time with our technology in the central role? Everyware is simply speaking to something that has lain dormant within us for much of modernity and played an overt, daily role in our lives for a very long time before that.

This is perhaps the most poignant factor driving the development of everyware, but as we've seen, it is far from the only one. From the crassest of motives to the noblest, there are so many powerful forces converging on the same set of technical solutions that their eventual realization truly does seem inevitable, no matter how we may quail at the determinism implied.

We will get to make meaningful choices about the precise shape of their appearance in the world, however—but only if we are smarter and more prudent than we have been about previous technologies. The next section will cover some of the issues we will need to keep foremost in mind if we want to make these crucial decisions wisely.