Everyware: The Dawning Age of Ubiquitous Computing - Adam Greenfield (2006)
Section 5. Who Gets to Determine the Shape of Everyware?
Beyond the purely technical, there are powerful social, cultural, and economic forces that will act as boundary constraints on the kind of everyware that comes into being. What are these forces? What sorts of influences will they bring to bear on the ubiquitous technology we encounter in daily life? And especially, who has the power to decide these issues?
The practice of technological development is tending to become more decentralized.
Sometime in early 2002—in an agnès b. store in Shibuya, of all places—I heard the full-throated sound of the new century for the first time. The track thumping from the store's sound system bent the familiar surge of Iggy and the Stooges' "No Fun" to the insistent beat of "Push It" by Salt'n'Pepa, and it was just as the old commercials promised: These truly were two great tastes that tasted great together. Rarely, in fact, had any music sent such an instant thrill of glee and profound rightness through me. The next track smashed the Velvet Underground classic "I'm Waiting for the Man" into some gormless eighties hit, and that was almost as tasty; I found myself literally pogo-dancing around the store.
I had caught the mashup virus. A mashup is just about what it sounds like: the result of a DJ taking two unrelated songs and—by speeding, slowing, or otherwise manipulating one or both of them—hybridizing them into something entirely new. Anyone can do it, really, but the genius of a truly clever mashup is finding some note of deep complementarity in two source texts that seemingly could not possibly have less to do with one another. After all, until this particular gang of provocateurs—a Belgian duo calling themselves 2 Many DJs—came along, who ever would have thought that a prime slab of Motor City protopunk circa 1969 would have worked so well against a sassily righteous hip-hop single of the mid-1980s?
I was hooked, all right. What I didn't understand at the time, though, was that I had also been given a first glimpse of one of the most important ideas to hit software development since object-oriented programming achieved widespread acceptance in the mid-1980s. The cultural logic of the mashup, in which amateurs pick up pieces already at hand and plug them into each other in unexpected and interesting ways, turns out to be perfectly suited to an age of open and distributed computational resources—the "small pieces loosely joined" that make up the contemporary World Wide Web, in essayist David Weinberger's evocative phrasing.
In the case of the Web, the ingredients of a mashup are not songs, but services provided by sites such as Google and Yahoo! and the community site Craigslist, each of which generates enormous quantities of data on a daily basis. Services like these have at their core an extensive database that is more or less richly tagged with metadata—information about information, such as where and when a picture was taken, or the ZIP code of an apartment listing. When millions of pieces of such self-describing data are made available—tossed on the table like so many Lego bricks, as it were—it's easy for third-party applications to pick them up and plug them into one another.
And so we see mashups from HousingMaps, which combines apartment listings from Craigslist with Google Maps to produce a searchable map of available rentals, to Stamen Design's Mappr, which uses metadata associated with images drawn from the Flickr photo-sharing service to plot them on a map of the United States. There's even a free application called Ning that lets anyone build a custom mashup, employing what might be called the Chinese menu approach: choose an application from Column A, another from Column B, and so on. It is simultaneously a profound demystification of the development process and a kit of parts that lets people without any programming experience to speak of build local, small-scale applications perfectly tuned to their specific needs.
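The Lego-brick logic of self-describing data can be sketched in a few lines of code. The snippet below, in the spirit of HousingMaps, joins two unrelated data sources on a shared piece of metadata—a ZIP code—to produce map-ready points. It is a hypothetical illustration only: the data, field names, and `mash` function are invented, and real services such as Craigslist or Google Maps expose far richer interfaces.

```python
# Hypothetical sketch of the mashup pattern: two unrelated, self-describing
# data sources joined on shared metadata (here, a ZIP code). All data and
# field names are invented for illustration.

# Source A: apartment listings, each tagged with a ZIP code.
listings = [
    {"title": "Sunny 1BR", "rent": 1450, "zip": "94110"},
    {"title": "Loft w/ view", "rent": 2100, "zip": "94107"},
]

# Source B: a geocoding table mapping ZIP codes to coordinates.
geocodes = {
    "94110": (37.7485, -122.4184),
    "94107": (37.7697, -122.3933),
}

def mash(listings, geocodes):
    """Join listings to coordinates via their shared metadata key."""
    points = []
    for item in listings:
        coords = geocodes.get(item["zip"])
        if coords is not None:  # skip listings we cannot place on the map
            points.append({**item, "lat": coords[0], "lon": coords[1]})
    return points

map_ready = mash(listings, geocodes)
```

The point of the sketch is that neither data source was designed with the other in mind; the join works only because each piece of data carries its own description along with it.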
What does any of this have to do with everyware? It suggests that we're about to experience a significant and unprecedented growth in the number of nonspecialists empowered to develop homebrew applications. It's crucial that the tools exist that allow us to do so, but still more important is a cultural context that not merely permits but encourages us to experiment—and that is just what the sense of ferment around mashups provides us.
Especially when combined with the revolution in conceptions of intellectual property that was originally sparked by the open-source and free software movements, we have everything necessary to democratize the development of information technology. What was not so very long ago a matter of a few thousand corporate and academic research centers will explode to encompass tens or even hundreds of millions of independent, unaffiliated developers scattered across the globe.
As a result, everyware is not going to be something simply vended to a passive audience by the likes of Intel and Samsung: What tools such as Ning tell us is that there will be millions of homebrew designer/makers developing their own modules of functionality, each of which will bear the hooks that allow it to be plugged into others.
Of course, not everyone will be interested in becoming a developer: The far greater proportion of people will continue to be involved with information technology primarily as users and consumers. And for the time being, anyway, the sheer complexity of ubiquitous systems will mitigate the otherwise strong turn toward amateurism that has characterized recent software development. But over the longer term, the centrifugal trend will be irresistible. The practice of technological development itself will become decisively decentralized, in a way that hasn't been true for at least a century.
Those developing everyware may have little idea that this is in fact what they are doing.
Given how conventional a component system may appear before it is incorporated in some context we'd be able to recognize as everyware, we're led to a rather startling conclusion: Relatively few of the people engaged in developing the building blocks of ubiquitous systems will consciously think of what they're doing as such.
In fact, they may never have heard the phrase "ubiquitous computing" or any of its various cognates. They will be working, rather, on finer-grained problems: calibrating the sensitivity of a household sensor grid so that it recognizes human occupants but not the cat, or designing an RFID-equipped key fob so that it reads properly no matter which of its surfaces is brought into range of the reader. With such a tight focus, they will likely have little sense for the larger schemes into which their creations will fit.
This is not an indictment of engineers. They are given a narrow technical brief, and they return solutions within the envelope available to them—an envelope that is already bounded by material, economic, and time constraints. Generally speaking, it is not in their mandate to consider the "next larger context" of their work.
And if this is true of professional engineers, how much more so will it apply to all the amateurs newly empowered to develop alongside them? Amateurs have needs and desires, not mandates. They'll build tools to address the problem at hand, and inevitably some of these tools will fall under the rubric of everyware—but the amateur developers will be highly unlikely to think of what they are doing in these terms.
Present IT development practices as applied to everyware will result in unacceptably bad user experience.
In Weiser and Brown's seminal "The Coming Age of Calm Technology," it appears to have been the authors' contention that responses to a suddenly hegemonic computing would arise as a consequence of its very ubiquity: "If computers are everywhere, they better stay out of the way."
Given the topic, this is a strikingly passive way for them to frame the question. It's as if Weiser and Brown trusted all of the people developing ubiquitous technology to recognize the less salutary implications of their efforts ahead of time and plan accordingly.
Even in the pre-Web 1990s, this was an unreasonably optimistic stance—and taking into account all that we've concluded about how little developers may understand the larger context in which their work is embedded, and the difficulty of planning for emergent properties of interacting systems, it would be indefensible today.
In fact, we should probably regard IT development itself as something unsuited to the production of an acceptably humane everyware. The reason has to do with how such development is conducted in organizations both large and small, from lean and hungry startups to gigantic international consultancies.
Every developer is familiar with the so-called "iron triangle." The version I learned was taught to me by a stereotypically crusty engineer, way back at my first dot-com job in 1999. In response to my request that he build a conduit between our Web site's shopping cart and the warehouse's inventory control system, he grunted, scrawled a quick triangle up on a handy whiteboard, hastily labeled the vertices FAST, GOOD, and CHEAP, and said, "Pick any two."
For all that this is obviously a cartoon of the technology development process, it's also an accurate one. For a variety of reasons, from the advantages that ostensibly accrue to first movers to the constraints imposed by venture capitalists, shareholders, and other bottom-liners, GOOD is rarely among the options pursued. Given the inherent pressures of the situation, it often takes an unusually dedicated, persistent, and powerful advocate—Steve Jobs comes to mind, as does vacuum-cleaner entrepreneur James Dyson—to see a high-quality design project through to completion with everything that makes it excellent intact.
Moreover, the more complex the product or service at hand, the more likely it will be to have a misguided process of "value engineering" applied at some point between inception and delivery. Although the practice has its roots in an entirely legitimate desire to prune away redundancy and overdesign, it is disastrous when applied to IT development. However vital, the painstakingly detailed work of ensuring a good user experience is frequently hard to justify on a short-term ROI basis, and this is why it is often one of the first things to get value-engineered out of an extended development process. Even if it's clearly a false efficiency from a more strategic perspective, reducing or even eliminating the user-experience phase of development can seem like getting rid of an unnecessary complication.
But we've seen that getting everyware right will be orders of magnitude more complicated than achieving acceptable quality in a Web site, let alone a desktop application. We have an idea how very difficult it will be to consistently produce ubiquitous experiences that support us, encalm us, strengthen and encourage us. Where everyware is concerned, even GOOD won't be GOOD enough. This is not the place for value engineers, not unless they have first earned a sensitive understanding of how difficult the problem domain is and what kinds of complexity it genuinely requires—both in process and product.
Everyware will appear differently in different places: that is, there is and will be no one continuum of adoption.
Remember our first thesis, that there are many ubiquitous computings? This is never truer than in the sense that everyware will prove to be different, in fundamental and important ways, in every separate cultural context in which it appears. In fact, the most basic assumptions as to what constitutes ubiquitous computing can differ from place to place.
An old Taoist proverb asks whether it is wiser to pave the world in soft leather or simply find yourself a nice comfortable pair of shoes. Along similar lines, some question the wisdom of any attempt to instrument the wider world. Such unwieldy "infrastructural" approaches, they argue, amount to overkill, when all that is really desired is that people have access to services wherever they happen to go.
One such perspective is offered by Teruyasu Murakami, head of research for Nomura Research Institute and author of a doctrine Nomura calls the "ubiquitous network paradigm." In Murakami's view, the mobile phone or its immediate descendant, the Ubiquitous Communicator, will do splendidly as a mediating artifact for the delivery of services.* His point: Is it really necessary to make the heavy investment required for an infrastructural approach to the delivery of services if people can take the network with them?
* Contemporary Japanese ubicomp schemes often specify the use of such "Ubiquitous Communicators" or "UCs." While the form factors and capabilities of UCs are rarely specified in detail, it can be assumed that they will follow closely on the model offered by current-generation keitai, or mobile phones.
Taking the present ubiquity of PDAs, smartphones, and mobiles as a point of departure, scenarios like Murakami's—similar schemes have in the past been promoted by the likes of Nokia and the old AT&T—imagine that the widest possible range of daily tasks will be mediated by a single device, the long-awaited "remote control for your life." If you live outside one of the places on Earth where mobile phone usage is all but universal, this may sound a little strange to you, but it happens to be a perfectly reasonable point of view (with the usual reservations) if you live in Japan.*
* The reservations are both practical and philosophical: What happens if you lose your Ubiquitous Communicator, or leave it at home? But also: Why should people have to subscribe to phone services if all they want is to avail themselves of pervasive functionality?
In the West, the development of everyware has largely proceeded along classically Weiserian lines, with the project understood very much as an infrastructural undertaking. In Japan, as has been the case so often in the past, evolution took a different fork, resulting in what the cultural anthropologist Mizuko Ito has referred to as an "alternatively technologized modernity."
With adoption rates for domestic broadband service lagging behind other advanced consumer cultures—North America, Western Europe, Korea—and a proportionally more elaborate culture emerging around keitai, it didn't make much sense for Japan to tread quite the same path to everyware as other places. The Web per se has never met with quite the same acceptance here as elsewhere; by contrast, mobile phones are inescapable, and the range of what people use them for is considerably broader. Many things North Americans or Europeans might choose to do via the Web—buy movie tickets, download music, IM a friend—are accomplished locally via the mobile Internet.
Ito argues that "the Japan mobile Internet case represents a counterweight to the notion that PC-based broadband is the current apex of Internet access models; characteristics such as ubiquity, portability, and lightweight engagement form an alternative constellation of 'advanced' Internet access characteristics that contrast to complex functionality and stationary immersive engagement."
Indeed, in the words of a 2005 design competition sponsored by Japanese mobile market leader NTT DoCoMo, the mobile phone "has become an indispensable tool for constructing the infrastructure of everyday life." Despite the rather self-serving nature of this proposition, and its prima facie falsehood in the context of Western culture, it's probably something close to the truth in Japanese terms. This is a country where, more so than just about anywhere else, people plan gatherings, devise optimal commutes, and are advised of the closest retailers via the intercession of their phones.
Given the facts on the ground, Japanese developers wisely decided to concentrate on the ubiquitous delivery of services via keitai—for example, the RFID-tagged streetlamps of Shinjuku already discussed, or the QR codes we'll be getting to shortly. And as both phones themselves and the array of services available for them become more useful and easier to use, we approach something recognizable as the threshold of everyware. This is a culture that has already made the transition to a regime of ambient informatics—as long, that is, as you have a phone. As a result, it's a safe bet to predict that the greater part of Japanese efforts at designing everyware will follow the mobile model for the foreseeable future.
Rather than casting this as an example of how Japanese phone culture is "more advanced" than North America's, or, conversely, evidence that Japan "doesn't get the Web" (the latter a position I myself have been guilty of taking in the past), it is simply the case that different pressures are operating in these two advanced technological cultures—different tariffs on voice as opposed to data traffic, different loci of control over pricing structures, different physical circumstances resulting in different kinds of legacy networks, different notions about monopoly and price-fixing—and they've predictably produced characteristically different effects. This will be true of every local context in which ideas about ubiquitous computing appear.
Many of the boundary conditions around the development of everyware will be sociocultural in nature. For example, one point of view I've heard expressed in the discussion around contemporary Korean ubicomp projects is that East Asians, as a consequence of the Confucian values their societies are at least nominally founded on, are more fatalistic about issues of privacy than Westerners would be in similar circumstances. I'm not at all sure I buy this myself, but the underlying point is sound: Different initial conditions of culture will reliably produce divergent everywares.
Is there more than one pathway to everyware? Absolutely. Individuals make choices about technology all the time, and societies do as well. I won't have a video game in the house—the last thing I need is another excuse to burn life time; I've never particularly warmed to fax machines; and I do not and will not do SMS. On a very different level, the governments of Saudi Arabia and the People's Republic of China have clearly decided that the full-on clamor of the Internet is not for them—or, more properly, not for their citizens. So the nature and potential of technology only go so far in determining what is made of it. The truly vexing challenge will reside in deciding what kind of everyware is right for this place, at this time, under these circumstances.
The precise shape of everyware is contingent on the decisions made by designers, regulators, and markets of empowered buyers. The greater mass of people exposed to such systems are likely to have relatively little say in their composition.
If societies are afforded some leeway in choosing just how a particular technology appears, what does history tell us about how this process has played out in the recent past?
Societies, as it happens, turn their backs on technologies all the time, even some that seem to be at the very cusp of their accession to prominence. Citizen initiatives have significantly shaped the emergence—and the commercial viability—of biotechnology and genetically modified foods planetwide; concerns both ethical and environmental continue to be raised about cloning and nanotechnology.* Nor are Americans any exception to the general rule, however happy we are to be seen (and to portray ourselves) as a nation of can-do techno-optimists: In the past twenty years, we've rejected fission power and supersonic commercial aviation, to name just two technologies that once seemed inevitable. And these outcomes, too, had a lot to do with local struggles and grassroots action.
*For that matter, similar concerns have also been raised about producing computing on a scale sufficient to supply the rural developing world with "$100 laptops." See, e.g., worldchanging.com.
Some would say that bottom-up resistance to such technologies arises out of an almost innumerate inability to calculate risk—out of simple fear of the unknown, that is, rather than any reasoned cost-benefit analysis. There are also, without doubt, those who feel that such resistance "impedes progress." But outcomes such as these stand as testament to a certain vigor remaining in democracy: In considering the debates over fission and the SST, the clear lesson—as corny as it may seem—is that the individual voice has made a difference. And this has been the case even when groups of disconnected individuals have faced coherent, swaggeringly self-confident, and infinitely better-funded pro-technology lobbies.
So on the one hand, we have reason to trust that "the system works." At least in the United States, we have some reason to believe that the ordinary messy process of democracy functions effectively to discover those technologies whose adoption appears particularly unwise, even if it's not necessarily able to propose meaningful alternatives to them. And this may well turn out to be the case where the more deleterious aspects of ubiquitous technology are concerned.
But something tells me everyware will be different. It's a minuscule technology, one that proceeds by moments and levers its way in via whatever crevices it is afforded. It will call itself by different names, it will appear differently from one context to another, and it will almost always wear the appealing masks of safety or convenience. And as we've seen, the relevant choices will be made by a relatively large number of people each responding to their own local need—"large," anyway, as compared to the compact decision nexus involved in the production of a fission plant or a supersonic airliner.
Who, then, will get to determine the shape of the ubiquitous computing we experience?
Designers, obviously—by which I mean the entire apparatus of information-technology production, from initial conceptual framing straight through to marketing.
Regulators, too, will play a part; given everyware's clear potential to erode privacy, condition public space, and otherwise impinge on the exercise of civil liberties, there is a legitimate role for state actors here.
And markets surely will. In fact, of all of these influences, the market is likely to have the most significant impact on what kinds of everyware find their way into daily use, with self-evidently dangerous, wasteful, or pointless implementations facing the usual penalties. But let's not get carried away with enthusiasm about the power of markets to converge on wise choices—as anyone who's been involved with technology can tell you, buyers are frequently not at all the same people as end users, and there are many instances in which their interests are diametrically opposed to one another. A corporate IT department, for example, generally purchases PCs based on low bid, occasionally ease of maintenance; the user experience is rarely factored, as it properly should be, into estimates of the total cost of ownership (TCO).
Left out of these considerations, though, is the greater mass of people who will be affected by the appearance of everyware, who will find their realities shaped in countless ways by the advent of a pervasive, ubiquitous, and ambient informatics. And while there is a broad community of professionals—usability specialists, interaction designers, information architects, and others working under the umbrella of user experience—that has been able to advocate for the end user in the past, with varying degrees of effectiveness, that community's practice is still oriented primarily to the challenges of personal computing. The skill sets and especially the mindsets appropriate to user-experience work in everyware have barely begun to be developed.
This raises the crucial question of timing. Are discussions of everyware abstract debates best suited to the coffeehouse and the dorm room, or are they items for near-term agendas, things we should be discussing in our school-board and city-council and shareholder meetings right now?
I strongly believe that the latter is true—that the interlocking influences of designer, regulator, and market will be most likely to result in beneficial outcomes if these parties all treat everyware as a present reality, and if the decision makers concerned act accordingly. This is especially true of members of the user experience community, who will best be able to intervene effectively if they develop appropriate insights, tools, and methodologies ahead of the actual deployment of ubiquitous systems.
In Section 6 we will consider why—while everyware is indeed both an immediate issue and a "hundred-year problem"—it makes the most sense to treat everyware as an emergent reality in the near term.