The Master Switch: The Rise and Fall of Information Empires - Tim Wu (2010)
Part V. The Internet Against Everyone
Chapter 20. Father and Son
Steve Jobs stood before an audience of thousands, many of whom had camped out overnight to share this moment. In his signature black turtleneck and blue 501s he was completely in his element: in perfect control of his script, the emotions of the crowd, and the image he projected to the world. Behind him was an enormous screen—another of his trademarks—flashing words, animations, and surprising pictures. In the computer world, and particularly among members of the cult of the Mac, the annual Jobs keynote at Apple’s Macworld is a virtual sacrament. During this one, on January 9, 2007, Jobs was to announce his most important invention since the Apple Macintosh.1
“Today,” said Jobs, “we’re introducing three revolutionary new products. Three things: a widescreen iPod with touch controls; a revolutionary mobile phone; a breakthrough Internet communications device.”
Loud cheers.
“An iPod, a phone … are you getting it? These are not three separate devices!”
Louder cheers.
“We are calling it iPhone!”
The audience rose to its feet. On the screen: “iPhone: Apple reinvents the phone.”
The iPhone was beautiful; it was powerful; it was perfect. After demonstrating its many features, Jobs showed how the iPhone could access the Internet as no phone ever had before, through a full-featured real browser.
“Now, you can’t—you can’t really think about the Internet, of course, without thinking about Google.… And it’s my pleasure now to introduce Dr. Eric Schmidt, Google’s CEO!”
To more cheers, Schmidt came jogging in from stage left, wearing an incongruously long orange tie. The two men shook hands warmly at center stage, like two world leaders. A member of the Apple board, Schmidt thanked Jobs and began his comments with a perhaps ill-advised joke about just how close Apple and Google had become. “There are a lot of relationships between the boards, and I thought if we just sort of merged the companies we could call them AppleGoo,” he said. “But I’m not a marketing guy.”
Indeed, in 2007 Google and Apple were about as close as two firms could be. Schmidt was not the only member of both corporate boards. The two firms were given to frequent and effusive public acclamations of each other. Their respective foundings a generation apart, Google and Apple were, to some, like father and son—both starting life as radical, idealistic firms, dreamed up by young men determined to do things differently. Apple was the original revolutionary, the protocountercultural firm that pioneered personal computing, and, in the 1970s, became the first company to bring open computing, then merely an ideological commitment, to mass production and popular use. Google, meanwhile, having overcome skepticism about its business model at every turn, had by the new millennium become the incarnation of the Internet gospel of openness. It had even hired Vint Cerf, one of the network’s greatest visionaries, giving him the title “Chief Internet Evangelist.”2
Their corporate mottoes, “Think Different” and “Don’t Be Evil,” while often mocked by critics and cynics, were an entirely purposeful way of propounding deeply counterintuitive ideas about corporate culture. Both firms, launched out of suburban garages a few miles apart, took pride in succeeding against the grain. Google entered the search business in 2000, when searching was considered a “commodity,” or low-profit operation, and launched a dot-com after the tech boom went bust. Apple’s revolution had been even more fundamental: in the 1970s, still the era of central mainframe machines, it built a tiny personal computer and later gave it a “desktop” (the graphic user interface of windows and icons and toolbars that is now ubiquitous), as well as a mouse. The two firms also shared many real and imagined foes: Microsoft, mainstream corporations, and uptight people in general.
Back in San Francisco, Schmidt, done with his jokes, continued his presentation.
“What I like about this new device [the iPhone] and the new architecture of the Internet is that you can actually merge without merging.… Internet architectures allow you now to take the enormous brain trust that is represented by the Apple development team and combine that with the open protocols and data service that companies like Google [provide].”
Unnoticed by most, here was enunciated a crucial idea, a philosophy of business organization radical in its implications. Schmidt was suggesting that, on a layered network, in an age of open protocols, all the advantages of integration—the “synergies” and efficiencies of joint operation—could be realized without actual corporate mergers. Call it Google’s theory of the firm. With the Internet, there was no need for mergers and exclusive partnerships. Each company could focus just on what it did best. Thanks to the Internet, the age of Vail, Rockefeller, and Carnegie, not to mention the media conglomerates created by Steven Ross and Michael Eisner—the entire age of giant corporate empires—was, according to this revelation, over.
But was it really? The warmth of Jobs’s greeting concealed the fact that Apple’s most important partner for the iPhone launch was not Google—not by a long shot—but rather one of Google’s greatest foes. At the end of his speech, in an understated way, Jobs dropped a bomb. The iPhone would work exclusively on the network of one company: AT&T.*
“They are the best and most popular network in the country,” said Jobs. “Fifty-eight million subscribers. They are number one. And they’re going to be our exclusive partner in the U.S.”
In entering this partnership, Apple was aligning itself with the nemesis of everything Google, the Internet, and once even Apple itself stood for.
✵ ✵ ✵
We don’t know whether Ed Whitacre, Jr., AT&T’s CEO, was listening to Eric Schmidt’s speech at the iPhone launch. But we can be sure that he would have disagreed with Schmidt that the age of grand mergers was over. Just one week earlier, Whitacre had quietly gained final federal approval for the acquisitions that would bring most of the old Bell system back under AT&T’s control. Unfazed by the arrival of the Internet, Whitacre and his telephone Goliath were practicing the old-school corporate strategies of leveraging size to achieve domination, just as AT&T had done for more than a hundred years. The spirit of Theodore Vail was alive and well in the resurrected dominion of the firm with which Apple was now allied.
Within two years of the iPhone launch, relations between Apple and Google would sour as the two pursued equally grand, though inimical, visions of the future. In 2009 hearings before the FCC, they now sat on opposite sides. Steve Jobs accused Google of wasting its time in the mobile phone market; a new Google employee named Tim Bray in 2010 described Apple’s iPhone as “a sterile Disney-fied walled garden surrounded by sharp-toothed lawyers.… I hate it.”3
As this makes clear, where once there had been only subtle differences there now lay a chasm. Apple, while it had always wavered on “openness,” had committed to a program that suited not just the AT&T mind-set, but also the ideals of Hollywood and the entertainment conglomerates. Despite the many missteps, including the AOL-Time Warner merger, the conglomerates were still at bottom looking for their entry point into the Internet game. By 2010, Apple would clearly seem the way—whether through its iTunes music store, its online videos, or the magic of the iPad. In fact, the combination of Apple, AT&T, and Hollywood now held out an extremely appealing prospect: Hollywood’s content, AT&T’s lines, and Apple’s gorgeous machines—an information paradise of sorts, succeeding where AOL-Time Warner had failed.
For its part, Google would remain fundamentally more radical with utopian, even vaguely messianic, ideals. As Apple befriended the old media, Google’s founders continued to style themselves the challengers to the existing order, to the most basic assumptions about the proper organization of information, the nature of property, the duties of the American corporation, and even the purpose of life. They envisioned taking the Internet revolution into every sector of the information realm—to video and film, television, book, newspaper, and magazine publishing, telephony—every way that humans send or receive information.
You might think that such splits are simply the way the capitalist cookie crumbles and one shouldn’t dwell overmuch on the rupture between two firms. But these are not just any two firms. These are, in communications, the industrial and ideological leaders of our times. These are the companies that are determining how Americans and the rest of the world will share information. If Huxley could say in 1927 that “the future of America is the future of the world,” we can equally say that the future of Apple and Google will form the future of America and the world.4
What should be apparent to any reader having reached this point is that here in the twenty-first century, these firms and their allies are fighting anew the age-old battle we’ve recounted time and time again. It is the perennial Manichaean contest informing every episode in this book: the struggle between the partisans of the open and of the closed, between the decentralized and the consolidated visions of a proper order. But this time around, as compared with any other, the sides are far more evenly matched.
APPLE’S RADICAL ORIGINS
Apple is a schizophrenic company: a self-professed revolutionary closely allied with both of the greatest forces in information, the entertainment conglomerates and the telecommunications industry. To work out this contradiction we need to return to Apple’s origins and see how far it has come. Let’s return to 1971, when a bearded young college student in thick eyeglasses named Steve Wozniak was hanging out at the home of Steve Jobs, then in high school. The two young men, electronics buffs, were fiddling with a crude device they’d been working on for more than a year. To them it must have seemed just another attempt in their continuing struggle to make a working model from a clever idea, just as Alexander Bell and Watson had done one hundred years earlier.5
That day in 1971, however, was different. Together, they attached Wozniak’s latest design to Jobs’s phone, and as Wozniak recalls, “it actually worked.”6 It would be their first taste of the eureka moment that would-be inventors have always lived for. The two used the device to place a long distance phone call to Orange County. Apple’s founders had managed to hack AT&T’s long distance network: their creation was a machine, a “blue box,” that made long distance phone calls for free.
Such an antiestablishment spirit of enterprise would underlie all of Jobs and Wozniak’s early collaborations and form the lore that still gives substance to the image long cultivated: the iconoclast partnership born in a Los Altos garage, which, but a few years later, in March of 1976, would create a personal computer called “the Apple,” one hundred years to the month after Bell invented the telephone in his own lonely workshop.
In the 1970s this imagery would be reinforced by the pair’s self-styling as bona fide counterculturals, with all the accoutrements—long hair, opposition to the war, an inclination to experiment with chemical substances as readily as with electronics. Wozniak, an inveterate prankster, ran an illegal “dial-a-joke” operation; Jobs would travel to India in search of a guru.
But, as is often the case, the granular truth of Apple’s origins was a bit more complicated than the mythology. For even in the beginning, there was a significant divide between the two men. There was no real parity in technical prowess: it was Wozniak, not Jobs, who had built the blue box. And it was Wozniak who would conceive of and build the Apple and the Apple II, the most important Apple products ever, and arguably among the most important inventions of the later twentieth century.* For his part, Jobs was the businessman and the dealmaker of the operation, essential as such, but hardly the founding genius of Apple computers, the man whose ideas were turned into silicon to change the world; that was Wozniak. The history of the firm must be understood in this light. For while founders do set the culture of a firm, they cannot dictate it in perpetuity; as Wozniak withdrew from the operation, Apple became concerned more with, as it were, the aesthetics of radicalism than with its substance.
Steve Wozniak is not the household name that Steve Jobs is, but his importance to communications and culture in the postwar period merits a closer look. While Apple’s wasn’t the only personal computer invented in the 1970s, it was the most influential. For the Apple II took personal computing, an obscure pursuit of the hobbyist, and made it into a nationwide phenomenon, one that would ultimately transform not just computing, but communications, culture, entertainment, business—in short, the whole productive part of American life.
We’ve seen these moments before, when a hobbyist or limited-interest medium becomes a mainstream craze; it happened with the telephone in 1894, with the birth of radio broadcasting in 1920, and with cable television in the 1970s. But the computer revolution was arguably more radical than any of these advances on account of having posed such a clear ideological challenge to the information economy’s status quo. As we’ve seen, for most of the twentieth century, innovators would lodge the control and power of new technologies within giant institutions. Innovation begat industry, and industry begat consolidation. Wozniak’s computer had the opposite effect: he took the power of computing, formerly the instrument of large companies with mainframe resources, and put it in the hands of individuals. That feat, and every manifestation of communications freedom that has flowed from it, is doubtless his greatest contribution to society. It was almost unimaginable at the time: a device that made ordinary individuals sovereign over information by means of computational powers they could tailor to their individual needs. Even if that sovereignty was limited by the primitive capacities of the Apple II—48 KB of RAM, puny compared not only with our present-day telephones but even with the industrial computers of its time—the machine nevertheless planted the seed that would change everything.
With slots to accommodate all sorts of peripheral devices and an operating system that ran a variety of software, the Wozniak design was open in ways that might be said still to define the concept in the computing industries. Wozniak’s ethic of openness extended even to disclosing design specifications. He once gave a talk and put the point this way: “Everything we knew, you knew.”7 In the secretive high-tech world, such transparency was unheard of, as it is today. Google, for example, despite its commitment to network openness, keeps most of its code and operations secret, and today’s Apple, unlike the Apple of 1976, guards technical and managerial information the way Willy Wonka guarded candy recipes.
Put another way, Wozniak welcomed the amateur enthusiast, bringing the cult of the inspired tinkerer to the mass-produced computer. That ideology wasn’t Wozniak’s invention, but rather in the 1970s it was an orthodoxy among computing hobbyists like the Bay Area’s Homebrew Computer Club, where Wozniak offered the first public demonstration of the Apple I in 1976. As Wozniak described the club, “Everyone in the Homebrew Computer Club envisioned computers as a benefit to humanity—a tool that would lead to social justice.” These men were the exact counterparts of the radio pioneers of the 1910s—hobbyist-idealists who loved to play with technology and dreamed it could make the world a better place. And while a computer you can tinker with and modify may not sound so profound, Wozniak contemplated a spiritual relationship between man and his machine, the philosophy one finds in Matthew Crawford’s Shop Class as Soulcraft or the older Zen and the Art of Motorcycle Maintenance. “It’s pretty rare to make your engineering an art,” said Wozniak, “but that’s how it should be.”8
The original Apple had a hood; and as with a car, the owner could open it up and get at the guts of the machine. Indeed, although it was a fully assembled device, not a kit like earlier PC products, one was encouraged to tinker with the innards, to soup it up, make it faster, add features, whatever. The Apple’s operating system, using a form of BASIC as its programming language and operating environment, was, moreover, one that anyone could program. It made it possible to write and sell one’s programs directly, creating what we now call the “software” industry.
In 2006, I briefly met with Steve Wozniak on the campus of Columbia University.
“There’s a question I’ve always wanted to ask you,” I said. “What happened with the Mac? You could open up the Apple II, and there were slots and so on, and anyone could write for it. The Mac was way more closed. What happened?”
“Oh,” said Wozniak. “That was Steve. He wanted it that way. The Apple II was my machine, and the Mac was his.”
Apple’s origins were pure Steve Wozniak, but as everyone knows, it was the other founder, Steve Jobs, whose ideas made Apple what it is today. Jobs maintained the early image that he and Wozniak created, but beginning with the Macintosh in the 1980s, and accelerating through the age of the iPod, iPhone, and iPad, he led Apple computers on a fundamentally different track.
Jobs is a man who would seem as much at home in Victorian England as behind the counter of a sushi bar: he is an apostle of perfectibility and believes in a single best way of performing any task and presenting the results. As one might expect, his ideas embody an aesthetic philosophy as much as a sense of functionality, which is why Apple’s products look so good while working so well. But those ideas have also long been at odds with the principles of the early computing industry, of the Apple II and of the Internet, sometimes to the detriment of Apple itself.
As Wozniak told me, the Macintosh, launched in 1984, marked a departure from many of his ideas as realized in the Apple II. To be sure, the Macintosh was radically innovative in its own right, being the first important mass-produced computer to feature a “mouse” and a “desktop”—ideas born in the mind of Douglas Engelbart in the 1950s, ideas that had persisted without fructifying in computer science labs ever since.* Nevertheless the Mac represented an unconditional surrender of Wozniak’s openness, as was obvious from the first glance: gone was the concept of the hood. You could no longer easily open the computer and get at its innards. Generally, only Apple stuff, or stuff that Apple approved, could run on it (as software) or plug into it (as peripherals). Apple now refused to license its operating system, meaning that a company like Dell couldn’t make a Mac-compatible computer. If you wanted a laser printer, software, or virtually any accessory, it was to Apple you had to turn. Apple thus became the final arbiter over what the Macintosh was and was not, rather in the way that AT&T at one time had sole discretion over what could and what could not connect to the telephone network.
Thus via the Mac, Apple was at once an innovative and a completely retrograde company. Jobs had elected the design principles that had governed the Hollywood studios, Theodore Vail’s AT&T, indeed anyone who ever dreamed of a perfect system. He created an integrated product, installing himself as its prime mover. If the good of getting everything to work together smoothly—perfectly—meant a little less freedom of use, so be it. Likewise, if it required a certain restraint to create and market it, that was fine. Leander Kahney, author of Inside Steve’s Brain, describes Jobs’s modus operandi as one of “unrelenting control over his employees, his image, and even his customers” with the goal of achieving “unrelenting control over his products and how they’re used.”9
By the time the Macintosh became Apple’s lead product, Wozniak had lost whatever power he had once held over Apple’s institutional ideology and product design. One salient reason had nothing to do with business or philosophy. In 1981 he crashed his Beechcraft Bonanza on takeoff from Scotts Valley, just outside the San Francisco Bay Area. Brain damage resulted in pronounced though temporary cognitive impairment, including anterograde amnesia. He would take a leave of absence, but his return would not alter the outcome of a quiet power struggle that had been building since before the accident. Its resolution would permanently sideline “the other Steve,” leaving the far more ambitious Jobs and his ideas ascendant.
Like all centralized systems, Jobs’s has its merits: one can easily criticize its principles yet love its products. Computers, it turns out, can indeed benefit in some ways from a centralizing will to perfection, no less than French cuisine, a German automobile, or any number of other elevated aesthetic experiences that depend on strict control of process and the consumer. Respecting functionality, too, Jobs has reason to crow. Since the original Macintosh, his company’s designs have more often than not worked better, as well as more agreeably, than anything offered by the competition.
But the drawbacks have been obvious, too, not just for the consumer but for Apple. For even if Jobs made beautiful machines, his decision to close the Macintosh contributed significantly to making Bill Gates the richest man on earth. No one would say it was the only reason, but Apple’s long-standing adherence to closed design left the door wide open for the Microsoft Corporation and the many clones of the IBM PC to conquer computing with hardware and software whose chief virtue was combining the best features of the Mac and the Apple II. Even if Windows was never as advanced or well designed as Apple’s operating system, it enjoyed one insuperable advantage: it worked on any computer, supported just about every type of software, and could interface with any printer, modem, or whatever other hardware one could design. Launched in the late eighties, Windows had by the early nineties run off with the market Apple had pioneered, based mostly on ideas that had been Apple’s to begin with.
The victory of PCs and Windows over Apple was viewed by many as the defining parable of the decade; its moral was “open beats closed.” It suggested that Wozniak had been right from the beginning. But by then Steve Jobs had been gone for years, having been forced out of Apple in 1985 in a boardroom coup. Yet even in his absence Jobs would never agree about the superiority of openness, maintaining all the while that closed had simply not yet been perfected. A decade after his expulsion, back at the helm of the company he founded, Steve Jobs would try yet again to prove he had been the true prophet.
JUST WHAT IS GOOGLE?
In 1902, the New York Telephone Company opened the world’s first school for “telephone girls.” It was an exclusive institution of sorts. As the historian H. N. Casson described the qualifications for admission in 1910: “Every girl shall be in good health, quick-handed, clear-voiced, and with a certain poise and alertness of manner.” There were almost seventeen thousand applicants every year for the school’s two thousand places.10
Acquiring this credential was scarcely the hardest part of being a telephone girl. According to a 1912 New York Times story, 75 percent were fired after six months for “mental inefficiency.” The job also required great manual dexterity to connect dozens of callers per minute to their desired parties. During the 1907 financial panic in New York, one exchange put through fifteen thousand phone calls in the space of an hour. “A few girls lost their heads. One fainted and was carried to the rest-room.”11
People often wonder, “What exactly is Google?” Here is a simple answer: Like its harbinger the telephone girl, Google offers a fast, accurate, and polite way to reach your party. In other words, Google is the Internet’s switch. In fact, it’s the world’s most popular Internet switch, and as such, it might even be described as the current custodian of the Master Switch.12
Every network needs a way to connect the parties who use it. In the early days of the telephone, before direct dial, you’d ask the telephone girl for your party by name (“Connect me with Ford Motors, please”). Later on, you’d directly dial the phone number, from either memory or the telephone directory, which seems rather a decline in service. Today, Google upholds the earlier standard, but on the Internet. Needing no address, you ask for your party by name (typing in “Ford Motor Company,” for instance), and Google shows you the way to connect with them over the World Wide Web.
The comparison with Bell’s telephone switchboard girls might sound a little anticlimactic to describe a firm with ambitions as grand as Google’s, but this reaction betrays a lack of awareness of the lofty import the switch has in the information world. For it is the switch that transforms mere communications into networking—that ultimately decides who reaches what or whom. And at the superintending level, which most networks eventually develop, it is the Master Switch, as Fred Friendly reminds us, that will decide who is to be heard. However many good things the Internet has to offer—services, information resources, retail outlets—it hardly matters if you can’t get to them.
There are, of course, some differences between Google and the switch monopolists of yesteryear, including the firm that is arguably its truest forerunner, Vail’s AT&T. For one thing, Google is not a switch of necessity, such as the telephone company was, but rather a switch of choice. This is a somewhat technical point, but suffice it to say that Google’s search engine is not the only Internet switch. There are other means by which to reach people or places on the Internet, as well as other points that might be described as “switches,” like the physical routers that direct the flow of Internet traffic on the data packet level. There are plenty of ways around Google: you can use domain names to navigate the Internet, or use one of Google’s competitors (Yahoo!, Bing, and the like), or for the truly hard-core, simply remember the IP addresses (e.g., 98.130.232.209), the way people once used to remember phone numbers. In fact, unlike AT&T, Google could be replaced at any time. And yet if by 2010 Google wasn’t the only game in town, it was by far the most popular Internet switch; by its market share of the search business (over 65 percent) it clearly qualifies as a monopoly.
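To make the point about ways around Google concrete, here is a minimal sketch in Python of navigating by name without any search engine at all: the domain name system resolves a name directly to a numeric address, much as a directory once mapped names to phone numbers. The hostname is only an example.

```python
import socket

# Resolve a name to its numeric Internet address directly, bypassing any
# search engine. The hostname below is purely illustrative.
address = socket.gethostbyname("www.example.com")
print(address)  # prints whatever IPv4 address the name currently resolves to
```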
In some ways, Google nevertheless enjoys a much broader control over switching than the old AT&T ever did. For it is not just the way that people reach one another to talk, but the way most people find all forms of information, across all media platforms, at a time when information is a far more prominent commodity in our national life and economy than it has ever been. Siva Vaidhyanathan’s aptly titled book The Googlization of Everything points out how much power this gives Google over global culture. As he writes, “for a growing (but not universal) portion of the Web and the world, we allow Google to determine what is important, relevant, and true.”13 As he suggests—and how many would disagree?—whatever shows up on the first page of a Google search is what matters in forming our sense of any reality; the rest doesn’t.
To understand this unusual level of consumer preference and trust in a market with other real choices is to understand the source of Google’s singular power. But is this a stroke of cold luck, or is Google something special? Quite enough has been written about Google’s corporate culture, whether one looks to the cafeterias that serve free food, the beach volleyball, or the fact that its engineers like to attend the Burning Man festival in the Nevada desert.14 Not that such things aren’t useful inducements to productivity or the exception in corporate America, but they are more nearly adaptations of a general Silicon Valley corporate ethos than one particular to Google, a point the company readily admits.
Boiled down, the Google difference amounts to two qualities, rather than any metaphysical uniqueness. The first, as we’ve already remarked, is its highly specialized control of the Internet switch. We shall describe the nature of that specialization in more detail presently, but for now let us say it accords Google a dominance in search befitting an engine whose name has become a verb, synonymous with the function itself. (No one Apples or AT&Ts a potential new boyfriend.) While the firm does have dozens of other projects, it is obvious (certainly from their individual direct contributions to cash flow, which are minuscule) that most, including the maps, the lavishly capacious Gmail accounts, even the hugely popular YouTube, are ultimately trial balloons, experiments of a kind, or a way of enhancing the primacy of the core business of search, whether by creating complementary information resources or simply engendering the goodwill that comes of offering cool stuff for free. The second Google difference is in its corporate structure. The firm, while having as many ventures as it has engineers, eschews vertical integration of these efforts to a degree virtually unprecedented for a communications giant. This structural distinction may be hard to grasp, so let’s explain it more carefully.
✵ ✵ ✵
A medieval architect looking at the skyline of New York City or Hong Kong would be astonished that the buildings manage to stand without flying buttresses, thick walls, or other visible supports. A nineteenth- or twentieth-century industrialist would feel much the same bewildered awe regarding a major Internet firm like Google. You might call Google a media company, but it doesn’t own content. It is a communications company, but it doesn’t own the wires or airwaves over which packets reach people. You might accept my characterization that Google is simply the switch, but a switch alone has never before constituted a freestanding company—what basis is that for value? Many credit Google with ambitions to take over the world, but an industrial theorist might well ask how such a radically disintegrated firm could long endure, let alone achieve global domination.
Compared with other giants, like Time Warner circa 2000, or Paramount Pictures circa 1927, or AT&T’s original and resurrected incarnations, Google is underintegrated and undefended. The business rests on a set of ideas—or more precisely, a set of open protocols designed by government researchers. But that is the point: it is the structure of the Internet, much more than anything particular to the firm itself, that keeps Google standing. It traffics in content originated by billions of people, none of them on salary, who build the websites or make the home videos. It reaches its customers on wires and over airwaves owned by other firms. This may seem an improbably shaky foundation to build a firm on, but perhaps that is the genius of it.
If that seems a bit abstract, it is well to remember that Google is an unusually academic company in origins and sensibility. Larry Page, one of the two founders, described his personal ambitions this way: “I decided I was either going to be a professor or start a company.” Just as Columbia University effectively financed FM radio in the 1930s, Stanford got Google started. With its original Web address http://google.stanford.edu/, the operation relied on university hardware and software and the efforts of graduate students. “At one point,” as John Battelle writes in The Search, the early Google “consumed nearly half of Stanford’s entire network bandwidth.”15
Google’s corporate design remains both its greatest strength and its most serious vulnerability. It is what makes the firm so remarkably well adapted to the Internet environment, as a native species, so to speak. Unlike AOL, Google never tried to resist or subdue the Internet’s essential structure. It is a creature perfectly suited to the network as its framers intended it. In this sense, it is the antithesis of AOL.
Google’s chief advantage, as we have suggested, can be summarized in a single word: specialization. Companies like AT&T or the big entertainment conglomerates succeed by being big and integrated—doing everything, and owning everything. A company like Google, in contrast, succeeds by doing one (well-chosen) thing, but doing it better than anyone else. It’s the trait that makes Google the hedgehog to so many others’ fox. The firm harvests the best of the Internet, organizing the worldwide chaos in a useful way, and asks its users to navigate this order via their own connections; by relying on the sweat of others for content and carriage, Google can focus on its central mission: search. From its founding, the firm was dedicated to performing that function with clear superiority; it famously pioneered an algorithm called PageRank, which arranged search hits by importance rather than sheer numerical incidence, thereby making search more intelligent. The company resolved to stand or fall on the strength of that competitive edge. As Google’s CEO, Eric Schmidt, explained to me once, firms like the old AT&T or Western Union “had to build the entire supply chain. We are specialized. We understand that infrastructure is not the same thing as content. And we do infrastructure better than anyone else.”
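The PageRank idea can be made concrete with a small illustrative sketch in Python. This is only a toy rendering of the published algorithm, run over an invented four-page web; Google’s production system is, of course, vastly more elaborate and closely guarded.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}                  # start every page equal
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share           # pass importance along each link
            else:
                for p in pages:                         # a page with no links shares evenly
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# A hypothetical four-page web: pages with more, and more important,
# inbound links float to the top, regardless of how often a term appears.
toy_web = {
    "wikipedia.org": ["mcdonalds.com"],
    "mcdonalds.com": [],
    "mcspotlight.org": ["mcdonalds.com", "wikipedia.org"],
    "someblog.example": ["wikipedia.org", "mcspotlight.org"],
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {page}")
```

Ranking by the link structure of the Web rather than by the sheer numerical incidence of a search term is the essence of the competitive edge described above.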
Google, between content and transport
Unlike AOL Time Warner, Google doesn’t need to try to steer users anywhere in particular. It need only focus its resources on helping you get wherever you want to go, whether you know where that is or not. Needless to say, it is a great plus not to be involved in trying to persuade anyone to consume, say, Warner Bros. content. Such was the inherently corrupting project of AOL when Steve Case joined his company to Time Warner. Case had assumed that any Internet company would need control of both wires and content to succeed in the 2000s. He was wrong.
That’s the advantage. On the other hand, Google’s lack of vertical integration leaves it vulnerable, rather like a medieval city without a wall.* He who controls the wires or airwaves can potentially destroy Google, for it is only via these means that Google reaches its customers. To use the search engine and other utilities, you need Internet access, not a service Google now provides (with trivial exceptions). To have such access, you need to pay an Internet Service Provider—typically your telephone or cable company. Meanwhile, Google itself must also pay for Internet service, a fact that, conceptually at least, puts the firm and its customers on an equal footing: both are subscription users of the Internet. And so whoever controls those connection services can potentially block Google—or any other site or content, as well as the individual user, for that matter.
Nor is this matter of infrastructure the firm’s only weakness. A concerted boycott among content owners—website operators or other sources—could achieve the same choking effect. Under long-established protocols, any website can tell Google that it doesn’t want to be indexed.† In theory, Wikipedia, The New York Times, CNN, and dozens of other websites could begin telling Google, “Thanks, but no thanks,” or conceivably strike an exclusive deal with one of Google’s rivals.
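The long-established protocol in question is presumably the robots exclusion standard: the robots.txt file a site can publish to tell crawlers what not to index. A minimal sketch in Python, with invented rules and an invented site, shows how a publisher could shut Google’s crawler out while admitting everyone else.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules a site might publish at https://example.com/robots.txt
robots_txt = """\
User-agent: Googlebot
Disallow: /

User-agent: *
Allow: /
"""

rules = RobotFileParser()
rules.parse(robots_txt.splitlines())
rules.modified()  # mark the rules as loaded so can_fetch() will answer

print(rules.can_fetch("Googlebot", "https://example.com/story.html"))       # False
print(rules.can_fetch("AnyOtherCrawler", "https://example.com/story.html")) # True
```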
How Google reaches customers
Both of these vulnerabilities are a direct consequence of Google’s corporate design, of the fact it owns no connections and no content. As we shall see, the firm’s most determined enemies have begun to understand and exploit these frailties.
In Chicago in 2005, AT&T’s Ed Whitacre took a break during a typical day of empire building to grant BusinessWeek’s Roger Crockett an interview.16 In the midst of his campaign to reunify the Bell company, the CEO was refreshingly clear about his strategy. “It’s about scale and scope,” he told Crockett a few times, “scale and scope.”
Crockett asked, “How concerned are you about Internet upstarts like Google, MSN, Vonage, and others?”
Whitacre immediately homed in on their weakness. “How do you think they’re going to get to customers? Through a broadband pipe.
“Cable companies have them. We have them,” he continued. “Now what they would like to do is use my pipes free, but I ain’t going to let them do that.”
From this it was clear that AT&T had identified precisely the soft underbelly of Google and the rest of the Internet industry: “How do you think they’re going to get to customers?” Whitacre understood that he, allied with the cable industry and the other parts of Bell, was strategically positioned to choke the Internet industry into submission.
Such comments make vivid just why the ideal of net neutrality and the government’s enforcement of it by statute or regulatory rules have become such urgent concerns for Google and the rest of the Internet industry, as well as increasingly for a great many individual users. If one allows that the Internet is our key means of conveyance, the “common medium” of our national life and economy, net neutrality is the twenty-first century’s version of common carriage. Just as with the operator of the sole ferry between an island and the mainland, proprietorship of any part of the Internet’s vital infrastructure rightly obliges one to carry the whole Internet, without discrimination or favoritism, in accordance with one of the oldest assumptions of our legal tradition. To be entrusted with a utility of such unique public importance comes with responsibilities such as AT&T assumed in 1910. In the case of the Internet, common carriage under the name of net neutrality amounts to an FCC rule that bans the blocking or degrading of individual sites or transmissions of data (whether according to size, sender, time of day, or any other factor). Put most simply, net neutrality is what prevents the telephone and cable industry from killing Google, Amazon, Wikipedia, blogs, or anything else that might incur their displeasure.
In 2005, when Whitacre made his remarks, it seemed a plausible inference that AT&T and its allies might undertake—if not imminently, at least gradually—to subjugate the Internet and thereby the firms that depend on it, aiming to accomplish with long-honed lethal efficacy what AOL Time Warner had bungled. The initial step would be subtle: AT&T would begin offering, for a fee, a “fast lane” by which to reach consumers, inspiring the cable firms and Verizon to do the same. The precedent was Vail’s policies of the 1910s, a system of preferential treatment with an eye toward creating vassals of the dependent industries. Of course AT&T would offer its ever-ready excuse: management of the network in the name of better service. The effects at first would be small. But it doesn’t take a genius to realize that if AT&T and the cable companies exercised broad discretion to speed up the business of some firms and slow down that of others, they would gain the power of life and death over the Internet.
Google’s advantage in being obliged to promote no one’s product is double-edged: the ingenious idea of depending entirely on others for content leaves one entirely at the mercy of others for access to it. Google itself owns almost nothing: no movies, no websites, videos, or texts of any significant interest. In most instances, content owners have been only too happy to allow Google to lead its customers to them. On the other hand, the company’s commitment to liberating content, making it accessible to as many as possible, has also left it the object of copyright-holder grievance and exposed it to potential lawsuits as well as threats of organized boycott.
Sometime in 1996, when Google began operations,* it made a copy of the entire World Wide Web in order to prepare a search index. In retrospect, no one really knows whether that copying was legal—whether a massive copyright violation occurred at the birth of the firm, confirming Balzac’s observation that behind every great fortune is a great crime. As a matter of law, copying generally requires permission, something that Google never asked for then and has never requested since, for to do so comprehensively would be impracticable. Today, most copyright scholars would agree that Google has implied permission to copy the Web—no one brought suit against them for having done so, and so a new norm has been suggested. It is also likely an instance of “fair use,” though, given the uniqueness of the act, there is little case law quite supporting such an assertion. Certainly at the time, the legality of what was done wasn’t entirely clear; and truth to tell, if a copyright lawyer had been among Google’s founders, it’s doubtful the thing would have gotten off the ground.
Since its audacious birth, Google has never been completely at peace with the owners of content upon which it depends. The disposition of those owners has varied according to what was in it for them in each instance. Some, of course, love, or at least respect, Google as the primary means by which their content gets found. For the small-scale business and those struggling to be heard without a major platform, Google’s engine is a godsend, for it tends to equalize giant and one-man retailers, new bloggers and those who write for highly capitalized publications. Thanks to Google’s proprietary algorithm, an entry on the nonprofit Wikipedia consistently outranks any official site related to a search term. A search for McDonald’s also turns up McSpotlight, a page dedicated to exposing the misdeeds of the restaurant chain.
In contrast, owners of “valuable” content have a far more ambivalent relationship with the great Internet switch. In the United States, Google receives a daily stream of notices demanding that it remove links to copyright-infringing materials (YouTube accounts for the lion’s share). Many, especially in New York’s old-media conglomerates and publishing industries, hold Google in deep suspicion, a feeling that persists no matter how many earnest professions of benign intent are offered by Google’s employees. Those professions, in fact, tend to make matters worse, as they leave the old content generators feeling Google doesn’t appreciate how a dollar should be made in the information game.
When such anxiety boils over, it is expressed through lawsuits. When in 2004 Google proposed a system for searching books modeled on its search engine for the Web, it was promptly sued by a consortium of publishers and authors. YouTube, similarly, was subject to a deck-clearing lawsuit brought in 2007 by Viacom, the entertainment conglomerate. By the end of its first decade, Google’s legal department had accumulated a large collection of copyright experts, and they needed every one of them.
Google has so far managed to settle many of the most serious claims, thanks in part to its lawyers, but a different sort of danger looms in the form of threatened content boycotts. Rupert Murdoch, owner of the News Corp. conglomerate and a master of exploiting the structural weaknesses of other firms, started complaining in 2009 about sites like Google that “steal” newspaper content.17 Here is a portion of a television interview he gave on the subject (reproduced as a matter of fair use):
Murdoch: [The problem is] the people who just simply pick up everything, and run with it, who steal our stories … Google, Microsoft, Ask.com.…
Interviewer: Their argument is that they are directing traffic your way.… Aren’t they helping you?
Murdoch: What’s the point of having someone come occasionally, who likes a headline they see in Google? … We’d rather have fewer people coming, and paying.
Interviewer: The other argument from Google is that you could choose not to be on their search engine, you could simply refuse … so that when someone does a search, your websites don’t come up—why haven’t you done that?
Murdoch: Well, I think we will, but that’s when we start charging.
While Murdoch doesn’t go so far as to announce or promise a boycott, his implication is perfectly clear—as is the risk to Google owing to extreme specialization. To persist in doing what it does, Google, though a powerful monopoly, needs information industries disposed to play nice, cooperate, and share—to let the world’s greatest organizer of information index their content and make it accessible over their wires. Unfortunately, playing nice has never been common practice in the information industries, as this book should already have made clear. Something about the intangible nature of information products seems to make everyone only more cutthroat than the average widget manufacturer.
THE BATTLE FOR TERRITORY IN THE 2010S
In Hindu mythology, deities and demons assume different incarnations to fight the same battles repeatedly. At the beginning of the 2010s, as a chasm opened between Google and its allies like Amazon, eBay, and nonprofits like Wikipedia on the one side and Apple, AT&T, and the entertainment conglomerates on the other, it was obvious that what loomed was just the latest iteration of the perennial ideological struggle into which every information industry is eventually swept. It is the old conflict between the concepts of the open system and the closed, between the forces of centralized order and those of dispersed variety. The antagonists assume new forms, the generals change, but essentially the same battles are fought over and over again. It is the very essence of the Cycle, which even a technology as radical and powerful as the Internet seems able at most to moderate but not to abolish.
For the information industries that now account for an ever increasing share of American and world GDP, the coming decade will be given over to a mighty effort to seize territory, to bolt the competition from its habitat. But this is not a case of one pack of wolves chasing another out of a prime valley. While it may sound fanciful, the contest in question is more like one of polar bears battling lions for domination of the world. Each animal, insuperably dominant in its natural element—the polar bear on ice and snow, the lion on the open plains—will undertake a land grab where it has no natural business being. The only practicable strategy will be a campaign of climate change, the polar bears seeking to cover as much of the world with snow as they can, while the lion tries to coax a savannah from the edges of a tundra. Sounds absurd, but for these mighty predators, it’s simply the law of nature.
For the past few years, Google, together with Amazon, eBay, Facebook, and nonprofits like Wikipedia, has generally been trying to convert as much of the world as possible into something that looks like the Internet: a clear, free path between any two points, with no hierarchy or preferential treatment according to market capitalization, number of paid lobbyists, or any other prerogative of size and concentration. Meanwhile, AT&T, the entertainment conglomerates, and the rest are trying to succeed where AOL Time Warner failed, and bring the Internet to heel. They envision a rational regime of access and flow of information, acknowledging that the network is not some renewable natural resource but a man-made structure, one that exists only owing to decades of infrastructure building at great cost to great companies, entities that believe they ultimately are entitled to a say. For the telephone and cable companies it is a matter of respecting the ownership of the Internet’s sine qua non: the wires, bandwidth, and cable. Naturally allied to such respect for ownership are copyright holders, whose just due they fear is being lost in the giddy idealistic effort to make everything available to everyone without limit, and as often without compensation. There is, the partisans of this side argue, a cost to building a bridge, a cost to writing a novel. An information economy, so called, cannot ultimately be sustained without acknowledging such hard facts. Information may “want” to be free, but we cannot expect it to be moved or created if we drive down to nothing the incentives for performing either function. If this side has its way, the twenty-first-century world of information will look, as much as possible, like that of the twentieth century, except that the screens that consumers are glued to will be easier to carry.
This, in essence, is our present war for information, one being waged on multiple fronts in ways subtle and not so subtle. Let us consider now the face of battle.
APPLE’S CHALLENGE TO THE COMPUTER
In 2006, Professor Jonathan Zittrain of Harvard made the startling prediction that over the next decade, the information industry would undertake a determined effort to replace the personal computer with a new generation of “information appliances.”18 He was, it turned out, exactly right. But the one thing he couldn’t forecast exactly was the general who would lead the charge. How indeed could anyone have guessed that Apple Inc., the creator of the personal computer, would be spearheading the effort to replace it? Unlikely though it was, beginning in 2010, Apple, allied with the entertainment conglomerates, became the key firm in a broad challenge to the whole concept of the personal computer.
When, in 1997, following another boardroom coup, Steve Jobs took back control of Apple, it was clear he had not changed or abandoned his basic ideas; to the contrary, he had intensified them, taking his whole ideology to, as it were, the next level. In doing so he repudiated, now decisively and forever, Steve Wozniak’s vision of the firm. The transformation would be symbolized by the moment in 2007 when Jobs renamed Apple Computer “Apple Inc.”—and at roughly the same time, as a personal flourish, refused to write a foreword for his old friend’s autobiography, iWoz.19
By the dawn of the decade, the cornerstone of Jobs’s strategy seeking perfect control over product and consumer had been laid; it took form as a triad of beautiful, perfect machines that have since won the allegiance of millions of users. Usurping the throne of the personal computer, in their order of succession, came the iPod, the iPhone, and the iPad. These would be, if all went according to plan, the information appliances of the 2010s.
On the inside, the iPod, iPhone, and iPad are actually computers. But they are computers that have been reduced to a strictly limited set of functions that they are designed to perform extremely well. It’s easy to see this with the iPod, which, rather obviously, is designed solely and optimally for playing music and watching videos. The limitation is much harder to see on the iPhone and the iPad, both of which can do things like make phone calls, send email, surf the Web, and allow one to read books, in addition to the seemingly unlimited variety of functions that can be acquired through the “app store.” But even if invisible to many consumers, the inescapable reality is that these machines are closed in a way the personal computer never was. True, Apple does allow outsiders to develop applications on its platform—the defeat of the Macintosh by Windows taught Jobs that a platform completely closed to outside developers is suicide. Nevertheless, all innovation and functionality are ultimately subject to Apple’s veto, making these devices antithetical to the Apple II and all the hardware development it inspired.
Apple’s new generation of devices are user-friendly, but also what you might call “Hollywood-friendly.” They are engineered with an eye to complicated deals the firm has made with the existing entertainment conglomerates, deals securing access to content that Apple’s rivals have had trouble matching. In exchange for this access, Apple generally, if not quite perfectly, guards the intellectual property of its partners. The devices, in a similar way, are also “telecom-friendly”: designed to operate with one carrier only, they reinforce the favored telephone company’s power—for a price.
The veto that Apple maintains over functionality and specific applications is not notional, but one wielded in service of its partnerships’ interests. The first major exercise was in blocking Skype, the voice-over-IP firm whose software lets users call each other over the Internet for free, eating into AT&T’s long distance margins. Later, during the summer of 2009, Apple bodychecked an application written for the iPhone by Google. The product, named “Google Voice,” was designed to make a single phone number, when dialed, ring on all one’s phones at once. The rejection of this service six months after its submission for consideration was not another effort at protecting the telephone partner (all GV users would place and receive calls over their carriers’ networks and not, as some had feared, over the Internet). Rather, this rejection of a widely anticipated function appears to have been motivated by perceived competition with existing Apple applications such as the dialer, voice mailbox, and others. This, in a way, makes the move seem pettier. And the pattern would continue. As Tom Conlon of Popular Science would write when the iPad was unveiled, “How long before it [Apple] blocks movies, TV shows, songs, books and even web sites? Scoff now, but don’t be so naïve as to believe that this isn’t possible.”
Lest these examples be taken amiss, let me speak plainly: These are amazing machines. They make available an incredible variety of content—video, music, technology—with an intuitive interface that is a pleasure to use. But they are also machines whose soul is profoundly different from that of any other personal computer, let alone Wozniak’s Apple II. For all their glamour, these appliances are a betrayal of the inspiration behind that pathbreaking device, which was fundamentally meant to empower its users, not control them. That proposition may appeal to geeks more than to the average person, but anyone can appreciate the sentiment behind putting enormous power at the discretion of any individual. The owner of an iPod or iPad is in a fundamentally different position: his machine may have far more computational power than a PC of a decade ago, but it is designed for consumption, not creation. Or, as Conlon declared vehemently, “Once we replace the personal computer with a closed-platform device such as the iPad, we replace freedom, choice and the free market with oppression, censorship and monopoly.”
GOOGLE’S COUNTERMOVE
Throughout the summer of 2007, rumors flew that some kind of Google phone was in the works. At the Googleplex, the firm’s storied campus, a suspicious statue, a human-robot, or android, with red eyes showed up in a nondescript building across from the main campus. Finally, on November 5, 2007, Google effectively announced the Gphone—by letting it be known that there was no such thing.
In contrast with the unveiling of the iPhone, there was no stadium event, no screaming crowd, and most important, no product. Instead, there was just a blog post entitled “Where’s My Gphone?”20 An employee named Andy Rubin wrote the following: “Despite all of the very interesting speculation over the last few months, we’re not announcing a Gphone. However, we think what we are announcing—the Open Handset Alliance and Android—is more significant and ambitious than a single phone.”*
Here it was: Google’s first real foray into the world of the telephone, as distinct from the computer and the Internet. The significance cannot be overstated. Until 2007, the Internet industries had, in the main, been playing defense—attempting to preserve the status quo of net neutrality and limit the power of their rivals among other information enterprises. Now, coming out of this defensive crouch, Google took the fight to its adversaries, attempting to plant the flag of openness deep in the heart of telephone territory, Bell’s holy land since the 1880s.
Project Android has puzzled many industry observers, for it has no obvious revenue model. Google distributes Android for free, as it does most of its other products. Mind you, what Google is giving away is not a telephone or even a telephone service—users must still buy those—but rather an operating system for telephones, based on the Linux kernel, the Ur-free and open software beloved of tech geeks. By giving away a version adapted for telephony, Google was distributing a free set of tools for programmers of any affiliation to write applications.
Given what we understand about Google, it should be obvious that this move was, like so many other initiatives, a means to an end rather than an end in itself. Project Android is a hearts-and-minds effort, a use of soft power to “convert” the mobile world into territory that is friendly to Google rather than to its enemies. It is, to return to our wild kingdom analogy, an effort to extend the world of ice and snow, where the polar bear cannot be defeated. And of course it is a long shot. As I wrote at the time, in Slate magazine, “Google is making its deepest foray yet into a foreign territory where its allies are few. It faces the challenge of not just entering the wireless world but also converting its inhabitants. Provided that Google has the nerve and resources to try to remake wireless in its image, it’ll either prove its greatest triumph or its Waterloo.” A high-stakes long shot, but one that Google’s adversaries have given it little choice but to take.
Not surprisingly, Android has made the already strained relationship with former pal Apple downright hostile. Perhaps stung by the memory of Windows and what had followed, Jobs was quick to trash Android in The New York Times. “Android hurts them [Google] more than it helps them,” said Jobs. “It’s just going to divide them and people who want to be their partners.”
This quote illustrates a crucial difference of mind-set. Since 2000, Jobs’s innovations have depended on making the right deals. The success of his iTunes store has had less to do with the technology than with his being the first to get the music industry to consent to online downloads. Having been CEO of Pixar Animation Studios during his years in the Apple wilderness, Jobs is one of the few players who can move with ease between Hollywood and Silicon Valley. And he was able to extend his dealmaking reach beyond that corridor to work with the world’s largest telephone company, making the iPhone the ultimate expression of his partnership mentality.
Schmidt and Google, meanwhile, have taken a different view. Their partnerships are few, and rarely, if ever, exclusive. For at bottom the firm believes, almost as an article of faith, that open protocols obviate the need for big combinations. As Schmidt puts it, the “interconnection makes you appear as one company while operating as two.” In other words, why incur the burdens of marriage when you can have friends with benefits? Implicit in this view is the basic conception of the Internet and Wozniak’s idea of the computer as worlds that minimize the need for permission.* The very same idea animates the Android.
Android may be the most significant of Google's territorial maneuvers, but it is not the only one. In the winter of 2010 the firm announced plans to build its own fiber-optic connections, another bold incursion into the lands of telephone and cable and its first real flirtation with vertical integration. The full scope of its motivations isn't clear—Google insists it means only to create a "showcase" designed to spur the telephone and cable companies to expand broadband penetration, in which America lags the developed world despite having invented the technology. More startling were reports in The New York Times in the summer of 2010 that Google was on the verge of a deal with Verizon to align their policy positions and launch special "managed services." Google's close relationship with Verizon—its first friendship with a Bell—is hard to interpret. Eric Schmidt and Google believe they are converting Verizon to the side of openness and sundering the Bell Empire for good. But we've heard that before, and it is not completely clear who is converting whom. Verizon/Google makes for a powerful vertical combination. Verizon, formerly known as Bell Atlantic, is a seasoned monopolist, having held parts of its domain since the capitulation of Western Union in the 1870s. And so it may be Google that is learning from the master.
For now, we may still view Google and its Internet industry allies as locked in a complex, slow-moving struggle with AT&T and cable, the entertainment conglomerates and Apple. But while there are two sides in the broadest terms, the underlying reality is not so simple, as the firms have a web of allegiances as complex as those of nineteenth-century Europe. Verizon, a former Bell, for instance, has been in Google’s camp for some time, having become an Android convert in 2009. It likes to declare itself an apostle of an open wireless future,21 which presents the odd prospect of the reborn Bell system split into an open and a closed half. Meanwhile, some Internet firms, including Yahoo!, have long allied themselves with the centralizers, if only as a hedge against Google. Nevertheless, no one denies that the future is to be decided by one of two visions.
If the centralizers—AT&T, Hollywood, and Apple—prevail, the future will be informed by a marriage of twenty-first-century technology and twentieth-century integrated corporate structure. The best content from Hollywood and New York and the telephone and networking power of AT&T will converge on Apple’s appliances, which respond instantly to ever more various human desires. It is a combination of undeniable power and attraction. And not least among its virtues, the worst of the Internet—the spam, the faulty apps, the junky amateur content—is eliminated. Instead, the centralizers pledge to deliver what Lord Reith promised from the BBC: “the Best of Everything.”
For its part, the openness movement, of which Google has been the leader, despite whatever sort of pact may loom with Verizon, is based on a contrary notion of virtue, one that can be traced back to the idealism of 1920s radio and of course the foundation of the Internet itself. At some level, the apostles of openness aspire to nothing less than social transformation. They idealize a postscarcity society, one in which the assumptions of limited resources that dictate traditional economic theory are overturned, a world in which most goods and services are free or practically free, thereby liberating the individual to pursue self-expression and self-actualization as an activity of primary importance.22 It may sound fantastical, but our lives are already full of manifestations of this idea. Digitization, for example, by eliminating most of the expenses associated with activities like making a film or distributing a recording, has enabled virtually anyone to prove his worth as a filmmaker or a singer. But the feasibility of such a quasi-utopian information economy depends on an open communications infrastructure that facilitates individual expression, not mass conformity.
There is, as with all competing visions of the good, a downside to each. More specifically, as ever in the history of information networks, something is lost in seeking the benefits of an open system, just as there is in adopting a closed one. Each side of course imagines its preference as offering us more than it denies us. Apple and the conglomerates think it perfectly sensible to identify popular desires and then to fulfill them. As Jobs put it, “We figure out what we want. And I think we’re pretty good at having the right discipline to think through whether a lot of other people are going to want it, too. That’s what we get paid to do.” Such cultural surrogacy does deliver an extremely polished product, both as content and as delivery system, indeed one very widely desired. But inevitably it is not to every taste. The champions of openness propose an untidier world of less polish, less perfection, but with more choice. It is, in that side’s view, choice, the freedom to figure out what one wants, that people prize most. In Eric Schmidt’s words: “The vote is clear that the end user prefers choice, freedom, and openness.”
And so we have the essential alternatives: a world of information that looks much like the twentieth century’s, only better—more beautiful and more convenient. Or a revolution in the very means by which information is produced and consumed.
The conflict is familiar in its contours; we have now seen it several times before, as the Cycle has worked its way through the film, radio, and telephone industries. The difference now, however, is this: In the 1920s and 1930s, there was a sense that the progress toward centralized, integrated models was somehow inevitable, simply the norm of industrial evolution. In the time of Henry Ford, Theodore Vail, and the rest, it had seemed quite natural, in a Darwinian way, that the big fish ate the little ones until there were only big ones trying to eat one another. All the power would thus come to reside in one or two highly centralized giants, until some sort of sufficiently disruptive innovation came along and proved itself a giant killer. Small fry would then enter the new decentralized environment, and the natural progression would start all over again.
The twenty-first century begins with no such predilection for central order. In our times, Jane Jacobs is the starting point for urban design, Hayek’s critique of central planning is broadly accepted, and even governments with a notable affinity for socialist values tout the benefits of competition, rejecting those of monopoly. Nor does the new century partake of the previous one’s sense of what is inevitable. Technology has reached a point where the inventive spirit has a capacity for translating inspiration into commerce virtually overnight, creating major players with astonishing speed, where once it took years of patient chess moves to become one, assuming one wasn’t devoured. The democratization of technological power has made the shape of the future hard to know, even for the best informed. The individual holds more power than at any time in the past century, and literally in the palm of his hand. Whether or not he can hold on to it is another matter.
* At the time of announcement, AT&T Wireless was operating under its old name, Cingular.
* Some may argue that the Macintosh was more significant than the Apple II; without discounting the importance of the former, the significance of personal computing seems categorically larger than the importance of adding the desktop interface to the personal computer.
* Most notably at the Palo Alto Research Center, then owned by Xerox, whose labs, by 1975, had produced a computer closely resembling the Apple Macintosh. The Apple Lisa also, technically, came between the Apple II and the Macintosh but did not thrive.
* As we’ve seen, vertical integration serves as often as a means of corporate defense as of efficiency. By combining related functions, the integrated entity can prevent rivals from depriving it of some essential component, as for instance when the Hollywood studios acquired movie theaters to prevent theater owners from shutting out studio products. Interesting, but beyond the scope of this book, is whether this defense function suggests an alternative explanation to the prevailing theory of the firm as shaped by the relative efficiency of internal and external contracting, which the economist Ronald Coase articulated in 1937.
† Technically, this is achieved by placing a “robots.txt” file in the root directory of the Web server in question. Google, for its part, could ignore the robots.txt files; in the United States that would foreground an unsettled copyright question, namely, whether expressly involuntary indexing is copyright infringement.
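As a minimal illustration: a site that wished to keep Google’s crawler, which identifies itself as “Googlebot,” away from all of its pages could place at the server’s root a robots.txt file of roughly this form:

    User-agent: Googlebot
    Disallow: /

A crawler that honors the convention reads this file before indexing and skips everything it disallows; honoring it, as noted, is a matter of custom rather than of law.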
* At the time, it went by the name “BackRub.”
* Notable members of the alliance at its launch included China Mobile, Intel, NTT DOCOMO, Sprint/Nextel, T-Mobile, HTC, LG, Samsung, and Motorola.
* Permission is a fundamental feature of what we call “property,” and in this sense you can understand that at some level the entire struggle is between a world with more property rights and one with fewer.