The Master Switch: The Rise and Fall of Information Empires - Tim Wu (2010)

Part III. The Rebels, the Challengers, and the Fall

Chapter 15. Esperanto for Machines

As a high school student in Bialystok, Russia, Ludwik Łazarz Zamenhof spent his spare time devising a language. He would work on it diligently for years, and in 1887, at the age of twenty-six, he published a booklet entitled Lingvo internacia. Antaŭparolo kaj plena lernolibro (International Language: Foreword and Complete Textbook). He signed it “Doctor Hopeful,” or, in the language he had invented, “Doktoro Esperanto.”1

Zamenhof’s idea for a standardized international language was ingenious but poorly implemented. Consequently, his noble ambition is often forgotten: to dissolve what he considered the curse of nationalism. If everyone in the world shared a second language, “the impassable wall that separates literatures and peoples,” he wrote, “would at once collapse into dust, and … the whole world would be as one family.”2 While there have been moments when it seemed that Esperanto might really take off—in 1911, for instance, the Republic of China considered adopting it as the country’s official language—Zamenhof’s invention has not remotely become a universal language.3 Nonetheless, we live in a world in which his dream of a common tongue has been achieved—though not among humans, as he would have hoped, but among machines. It goes by a less hopeful-sounding name, “the Internet protocol suite,” or TCP/IP, but for computers it has succeeded where Esperanto failed.

Between Licklider’s first enunciation of an intergalactic network and the mid-1970s, the idea of computers as communications devices had actually given birth to a primitive network, known as the ARPANET. The ARPANET was an experimental network that connected university and government computers over lines leased from AT&T. But it wasn’t quite the universal network Licklider envisioned, one that could connect any network to any other. To achieve that goal of a true, universal computer network, one would need a universal language. One would need an Esperanto for computers. In 1973, this was the problem facing two young computer scientists named Vint Cerf and Robert Kahn.

One memorable afternoon in 2008, in a small Google conference room equipped with a whiteboard, I asked Vint Cerf what exactly was the problem he had been trying to solve when he designed the Internet protocol.4 The answer surprised me. As Cerf explained it, he and Kahn were focused on developing not some grand design but rather an ad hoc accommodation. Running on a collection of government lines and lines leased from AT&T, the ARPANET was at the time just one of three packet networks in development. The others were a packet radio network and a packet satellite network, both privately run. Cerf and Kahn were trying to think of some way to make these networks talk to one another. That was the immediate necessity for an “internetwork,” or a network of networks.

The Internet’s design, then, wasn’t the result of some grand theory or vision that emerged fully formed like Athena from the head of Zeus. Rather, these engineers were looking for a specific technical fix. Their solution was indeed ingenious, but only later would it become clear just how important it was. Cerf describes the open design of the Internet as necessitated by the particularities of the specific engineering challenge he faced. “A lot of the design,” Cerf said, “was forced on us.”

The Internet’s creators, mainly academics operating within and outside the government, lacked the power or ambition to create an information empire. They faced a world in which the wires were owned by AT&T and computing was a patchwork of fiefdoms centered on the gigantic mainframe computers, each with idiosyncratic protocols and systems. Now as then, the salient reality—and one that too many observers fail to grasp or simply overlook—is that the Internet works over an infrastructure that doesn’t belong to those using it. The owner is always someone else, and in the 1970s, that someone was generally AT&T.5

The Internet founders built their unifying network around this fundamental constraint. There was no other choice: even with government funding they did not have the resources to create an alternative infrastructure, to wire the world as Bell had spent generations and untold billions doing. Consequently, their network was from its beginning beholden to the power and autonomy of its owners. It was designed to link human brains, but it could exert no more control over their activities than that: an egalitarianism born of necessity, and one that would persist as the network grew over decades to include everyone.

The stroke of genius underlying a network that could interconnect other networks was the concept of “encapsulation.” As Cerf said, “we thought of it as envelopes.” Encapsulation means wrapping information from local networks in an envelope that the internetwork could recognize and direct. It is akin to the world’s post offices agreeing to use names of countries in English, even if the local addresses are in Japanese or Hindi. In what would come to be known as TCP (or Transmission Control Protocol), Cerf and Kahn created a standard for the size and flow rate of data packets, thereby furnishing computer users with a lingua franca that could work among all networks.6
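The envelope idea can be sketched in a few lines of Python. This is a didactic toy, not the actual TCP/IP packet format; the field layout, delimiters, and address names here are invented purely for illustration.

```python
# A toy illustration of encapsulation: the internetwork wraps each local
# packet in an "envelope" whose header is all the routers ever read.
# (Hypothetical format -- not the real TCP/IP wire format.)

def encapsulate(src: str, dst: str, local_payload: bytes) -> bytes:
    """Wrap a local-network payload in an internetwork envelope."""
    header = f"{src}>{dst}|".encode()  # the shared "addressing language"
    return header + local_payload      # the payload stays opaque

def route(packet: bytes) -> str:
    """A router reads only the envelope, never the payload inside it."""
    header, _, _ = packet.partition(b"|")
    _, _, dst = header.partition(b">")
    return dst.decode()

# The local frame can be in any format whatsoever; the internetwork
# neither knows nor cares, just as a post office ignores a letter's contents.
envelope = encapsulate("net-A:host-1", "net-B:host-9", b"\x01\x02local-frame")
assert route(envelope) == "net-B:host-9"
```

The point of the sketch is that `route` never touches the bytes after the delimiter: any local network's traffic can ride inside, which is precisely what let the internetwork span networks it did not control.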

As a practical matter, this innovation would allow the Internet to run on any infrastructure, and carry any application, its packets traveling any type of wire or radio broadcast band, even those owned by an entity as given to strict controls as AT&T. It was truly a first in human history: an electronic information network independent of the physical infrastructure over which it ran. The invention of encapsulation also permitted the famous “layered” structure of the Internet, whereby communications functions are segregated, allowing the network to negotiate the differing technical standards of various devices, media, and applications. But, again, this was an idea born not of design but of the practical necessity to link different types of networks.

To ponder the design of the Internet is to be struck by its resemblance to other decentralized systems, such as the federal system of the United States. The Founding Fathers had no choice but to deal with the fact of individual states already too powerful and mature to give up most of their authority to a central government. The designs of the first two constitutions were therefore constrained—indeed, overwhelmingly informed—by the imperative of preserving states’ rights, in order to have any hope of ratification. Similarly, the Internet’s founders were forced, however fortunate the effect may now seem, to invent a protocol that took account of the existence of many networks, over which they had limited power.

Cerf and Kahn pursued a principle for Internet protocols that was the exact opposite of Vail’s mantra of “One System, One Policy, Universal Service.” Where AT&T had unified American communications in the 1910s by forcing the same telephone on every single user, Cerf and Kahn and the other Internet founders embraced a system of tolerated difference—a system that recognized and accepted the autonomy of the network’s members. Indeed, to do what Bell had done fifty years earlier might in fact have been impossible, even for an entity as powerful as Bell itself. For by the sixties, the charms of centrally planned systems generally were beginning to wear thin, soon to go the way of short-sleeved dress shirts.

DECENTRALIZATION

The economist John Maynard Keynes once said, “When the facts change, I change my mind. What do you do, sir?”7 No apostle of central planning could live through Europe’s Fascist and Soviet experiments without admitting that directed economies had their limitations and liabilities. The same ideas that had inspired Henry Ford and Theodore Vail had, in the realm of politics, led to Hitler and Stalin. And so a general repudiation of the whole logic of centralization was a natural fact of the Cold War era.

It was an Austrian economist who would provide the most powerful critique not just of central planning but of the Taylorist fallacies underlying it. Friedrich Hayek, author of The Road to Serfdom, is a patron saint of libertarians for having assailed not only big government, in the form of socialism, but also central planning in general.8 For what he found dangerous about the centralizing tendencies of socialism applies equally well to the overbearing powers of the corporate monopolist.

Hayek would have agreed with Vail’s claim, as with the Soviets’, up to a point: ideally, planning should eliminate the senseless duplication that flows from decentralized decision making. There is a certain waste implied in having, say, two gas stations on a single street corner, and in this sense, as Vail insisted, monopolies are more efficient than competition.*

But what prevented monopoly and all centralized systems from realizing these efficiencies, in Hayek’s view, was a fundamental failure to appreciate human limitations. With perfect information, a central planner could effect the best of all possible arrangements, but no such planner could ever hope to have all the relevant facts of local, regional, and national conditions to arrive at an adequately informed, or right, decision. As he wrote:

If we possess all the relevant information, if we can start out from a given system of preferences and if we command complete knowledge of available means, the problem which remains is purely one of logic.… This, however, is emphatically not the economic problem which society faces.… [T]he “data” from which the economic calculus starts are never for the whole society “given” to a single mind which could work out the implications, and can never be so given.9

Such a rejection of central planning beginning in the sixties was hardly limited to those with conservative sensibilities. Indeed, the era’s emblematic liberal thinkers, too, were rediscovering a love for organic, disorganized systems. Another Austrian, the political scientist Leopold Kohr, began in the 1950s a lifetime campaign against empires, large nations, and bigness in general. As he wrote, “there seems to be only one cause behind all forms of social misery: bigness. Oversimplified as this may seem, we shall find the idea more easily acceptable if we consider that bigness, or oversize, is really much more than just a social problem.… Whenever something is wrong, something is too big.”10

Kohr’s student, the economist E. F. Schumacher, in 1973 wrote Small Is Beautiful: Economics As If People Mattered, developing the concept of “enoughness” and sustainable development.11 Jane Jacobs, the great theorist of urban planning, expressed a no less incendiary disdain for centralization, and as with Hayek, her indictment was based on an inherent neglect of humanity. In her classic The Death and Life of Great American Cities, she relied on careful firsthand observations made while walking around cities and new developments to determine how Olympian planners like Robert Moses were going wrong.12 There was no understanding, let alone regard, for the organic logic of the city’s neighborhoods, a logic discernible only on foot.

All of these thinkers opposed bigness and prescribed a greater humility about one’s unavoidable ignorance. No one could fully understand all the facts of the dynamic market any more than one could weigh the true costs of introducing a vast new flow of traffic through neighborhoods like New York’s SoHo and West Village, which had developed organically for centuries. These thinkers were speaking up against a moribund belief in human perfectibility, or what the scientific management theorist Frederick Taylor called the “one right way.”13 Cities, like markets, had an inscrutable, idiosyncratic logic not easily grasped by the human mind but deserving of respect.

It was beginning to seem that the same might be true of information systems.

While its design had been born of necessity, through the 1970s and early 1980s the Internet’s developers began to see a virtue in it. And their awareness grew with their understanding of what a universal network needed if it was to operate, evolve, and advance in organic fashion. In the final draft of the TCP protocol, Jon Postel,* another Internet founder, inserted the following dictum:

Be conservative in what you do. Be liberal in what you accept from others.14
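In code, Postel’s dictum amounts to tolerant parsing paired with strict emission. A minimal Python sketch, using a hypothetical header-field format invented for illustration:

```python
# Postel's robustness principle in miniature: accept messy variants of a
# header field on input, but emit only one canonical form on output.
# (The field format here is hypothetical, for illustration only.)

def parse_field(line: str) -> tuple[str, str]:
    """Liberal in what you accept: tolerate stray whitespace and mixed case."""
    name, _, value = line.partition(":")
    return name.strip().lower(), value.strip()

def emit_field(name: str, value: str) -> str:
    """Conservative in what you do: produce exactly one canonical form."""
    return f"{name.lower()}: {value}"

# Three sloppy inputs, one strict output:
for messy in ["Host: example.org", "  HOST :example.org", "host:   example.org"]:
    assert emit_field(*parse_field(messy)) == "host: example.org"
```

A network of implementations that all behave this way degrades gracefully: each node absorbs the quirks of its peers rather than amplifying them.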

It may seem strange that such a philosophical, perhaps even spiritual, principle should be embedded in the articulation of the Internet, but then network design, like all design, can be understood as ideology embodied, and the Internet clearly bore the stamp of the opposition to bigness characteristic of the era. Not long thereafter, three professors of computer science, David Reed, David Clark, and Jerome Saltzer, would try to explain what made the Internet so distinctive and powerful. In a seminal paper of 1984, “End-to-End Arguments in System Design,” they argued for the enormous potential inherent in decentralizing decisional authority—giving it to the network users (the “ends”).15 The network itself (the “middle”) should, they insisted, be as nonspecialized as possible, so as to serve the “ends” in any ways they could imagine.*

What were such notions if not the computer science version of what Hayek and Jacobs, Kohr and Schumacher, had been arguing? While we cannot say exactly that the network pioneers of the 1970s were disciples of these or any particular thinker, there is no denying the general climate of thought in which computer scientists were living, along with everybody else. Coming of age concurrently with an ideological backlash against centralized planning and authority, the Internet became a creature of its times.

In 1982 Vint Cerf and his colleagues issued a rare command, drawing on the limited power they did have over their creation. “If you don’t implement TCP/IP, you’re off the Net.”16 It was with that ultimatum that the Internet truly got started, as computer systems around the world came online. As with many new things, what was there at first was more impressive in a conceptual sense than in terms of bells and whistles, but as usual, it was the human factor that made the difference, as those who joined could suddenly email or discuss matters with fellow computer scientists—the first “netizens.” The Internet of the 1980s was a mysterious, magical thing, like a secret club for those who could understand it.

What was the Internet in 1982? Certainly, it was nothing like what we think of as the Internet today. There was no World Wide Web, no Yahoo!, no Facebook. It was a text-only network, good for transmitting verbal messages alone. More important, it was not the mass medium of our experience. It still reached only large computers at universities and government agencies, and mostly ran over lines leased from AT&T (as a complement, the federal government would begin to build its own physical network, the NSFNET, in 1986). As far as the internetwork had come conceptually, a different kind of revolution would be needed to bring it to the people. And that transformation, less technological than industrial, would take another decade; first, the computer would have to become personal.

* In the parlance of economists, many market failures—externalities, collective action problems, and so on—can be eliminated by a central planner.

* Postel is profiled in my first book (coauthored with Jack Goldsmith), Who Controls the Internet?

* Much later, in the early years of the twenty-first century, the phrase “net neutrality” would become a kind of shorthand for these founding principles of the Internet. The ideal of neutrality bespeaks a network that treats all it carries equally, indifferent to the nature of the content or the identity of the user. In the same spirit as the end-to-end principle, the neutrality principle holds that the big decisions concerning how to use the medium are best left to the “ends” of the network, not the carriers of information.