The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future (2016)
It’s taken me 60 years, but I had an epiphany recently: Everything, without exception, requires additional energy and order to maintain itself. I knew this in the abstract as the famous second law of thermodynamics, which states that everything is falling apart slowly. This realization is not just the lament of a person getting older. Long ago I learned that even the most inanimate things we know of—stone, iron columns, copper pipes, gravel roads, a piece of paper—won’t last very long without attention and fixing and the loan of additional order. Existence, it seems, is chiefly maintenance.
What has surprised me recently is how unstable even the intangible is. Keeping a website or a software program afloat is like keeping a yacht afloat. It is a black hole for attention. I can understand why a mechanical device like a pump would break down after a while—moisture rusts metal, or the air oxidizes membranes, or lubricants evaporate, all of which require repair. But I wasn’t thinking that the nonmaterial world of bits would also degrade. What’s to break? Apparently everything.
Brand-new computers will ossify. Apps weaken with use. Code corrodes. Fresh software just released will immediately begin to fray. On their own—nothing you did. The more complex the gear, the more (not less) attention it will require. The natural inclination toward change is inescapable, even for the most abstract entities we know of: bits.
And then there is the assault of the changing digital landscape. When everything around you is upgrading, this puts pressure on your digital system and necessitates maintenance. You may not want to upgrade, but you must because everyone else is. It’s an upgrade arms race.
I used to upgrade my gear begrudgingly (why upgrade if it still works?) and at the last possible moment. You know how it goes: Upgrade this and suddenly you need to upgrade that, which triggers upgrades everywhere. I would put it off for years because I had the experience of one “tiny” upgrade of a minor part disrupting my entire working life. But as our personal technology is becoming more complex, more codependent upon peripherals, more like a living ecosystem, delaying upgrading is even more disruptive. If you neglect ongoing minor upgrades, the change backs up so much that the eventual big upgrade reaches traumatic proportions. So I now see upgrading as a type of hygiene: You do it regularly to keep your tech healthy. Continual upgrades are so critical for technological systems that they are now automatic for the major personal computer operating systems and some software apps. Behind the scenes, the machines will upgrade themselves, slowly changing their features over time. This happens gradually, so we don’t notice they are “becoming.”
We take this evolution as normal.
Technological life in the future will be a series of endless upgrades. And the rate of upgrades is accelerating. Features shift, defaults disappear, menus morph. I’ll open up a software package I don’t use every day expecting certain choices, and whole menus will have disappeared.
No matter how long you have been using a tool, endless upgrades make you into a newbie—the new user often seen as clueless. In this era of “becoming,” everyone becomes a newbie. Worse, we will be newbies forever. That should keep us humble.
That bears repeating. All of us—every one of us—will be endless newbies in the future simply trying to keep up. Here’s why: First, most of the important technologies that will dominate life 30 years from now have not yet been invented, so naturally you’ll be a newbie to them. Second, because the new technology requires endless upgrades, you will remain in the newbie state. Third, because the cycle of obsolescence is accelerating (the average lifespan of a phone app is a mere 30 days!), you won’t have time to master anything before it is displaced, so you will remain in the newbie mode forever. Endless Newbie is the new default for everyone, no matter your age or experience.
• • •
If we are honest, we must admit that one aspect of the ceaseless upgrades and eternal becoming of the technium is to make holes in our heart. One day not too long ago we (all of us) decided that we could not live another day unless we had a smartphone; a dozen years earlier this need would have dumbfounded us. Now we get angry if the network is slow, but before, when we were innocent, we had no thoughts of the network at all. We keep inventing new things that make new longings, new holes that must be filled.
Some people are furious that our hearts are pierced this way by the things we make. They see this ever-neediness as a debasement, a lowering of human nobility, the source of our continual discontentment. I agree that technology is the source. The momentum of technologies pushes us to chase the newest, which are always disappearing beneath the advent of the next newer thing, so satisfaction continues to recede from our grasp.
But I celebrate the never-ending discontentment that technology brings. We are different from our animal ancestors in that we are not content to merely survive, but have been incredibly busy making up new itches that we have to scratch, creating new desires we’ve never had before. This discontent is the trigger for our ingenuity and growth.
We cannot expand our self, and our collective self, without making holes in our heart. We are stretching our boundaries and widening the small container that holds our identity. It can be painful. Of course, there will be rips and tears. Late-night infomercials and endless web pages of about-to-be-obsolete gizmos are hardly uplifting techniques, but the path to our enlargement is very prosaic, humdrum, and everyday. When we imagine a better future, we should factor in this constant discomfort.
• • •
A world without discomfort is utopia. But it is also stagnant. A world perfectly fair in some dimensions would be horribly unfair in others. A utopia has no problems to solve, but therefore no opportunities either.
None of us have to worry about these utopia paradoxes, because utopias never work. Every utopian scenario contains self-corrupting flaws. My aversion to utopias goes even deeper. I have not met a speculative utopia I would want to live in. I’d be bored in utopia. Dystopias, their dark opposites, are a lot more entertaining. They are also much easier to envision. Who can’t imagine an apocalyptic last-person-on-earth finale, or a world run by robot overlords, or a megacity planet slowly disintegrating into slums, or, easiest of all, a simple nuclear Armageddon? There are endless possibilities of how modern civilization collapses. But just because dystopias are cinematic and dramatic, and much easier to imagine, that does not make them likely.
The flaw in most dystopian narratives is that they are not sustainable. Shutting down civilization is actually hard. The fiercer the disaster, the faster the chaos burns out. The outlaws and underworlds that seem so exciting at “first demise” are soon taken over by organized crime and militants, so that lawlessness quickly becomes racketeering and, even quicker, racketeering becomes a type of corrupted government—all to maximize the income of the bandits. In a sense, greed cures anarchy. Real dystopias are more like the old Soviet Union rather than Mad Max: They are stiflingly bureaucratic rather than lawless. Ruled by fear, their society is hobbled except for the benefit of a few, but, like the sea pirates two centuries ago, there is far more law and order than appears. In fact, in real broken societies, the outrageous outlawry we associate with dystopias is not permitted. The big bandits keep the small bandits and dystopian chaos to a minimum.
However, neither dystopia nor utopia is our destination. Rather, technology is taking us to protopia. More accurately, we have already arrived in protopia.
Protopia is a state of becoming, rather than a destination. It is a process. In the protopian mode, things are better today than they were yesterday, although only a little better. It is incremental improvement or mild progress. The “pro” in protopian stems from the notions of process and progress. This subtle progress is not dramatic, not exciting. It is easy to miss because a protopia generates almost as many new problems as new benefits. The problems of today were caused by yesterday’s technological successes, and the technological solutions to today’s problems will cause the problems of tomorrow. This circular expansion of both problems and solutions hides a steady accumulation of small net benefits over time. Ever since the Enlightenment and the invention of science, we’ve managed to create a tiny bit more than we’ve destroyed each year. But that few percent positive difference is compounded over decades into what we might call civilization. Its benefits never star in movies.
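The arithmetic behind that compounding claim is easy to check. A minimal sketch, assuming a purely illustrative 2 percent annual net gain (the text says only “a few percent,” so the exact rate is my assumption):

```python
# Compound a small annual net gain over a long stretch of history.
# The 2% rate and the 250-year span are illustrative assumptions,
# not figures from the text.
rate = 0.02
years = 250  # roughly the span since the Enlightenment

growth = (1 + rate) ** years
print(f"A {rate:.0%} annual gain over {years} years "
      f"multiplies the total about {growth:.0f}-fold")
```

Even a barely perceptible yearly surplus, sustained for centuries, multiplies into something on the order of a hundredfold, which is the point of the passage: civilization is a compounding of tiny net gains.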
Protopia is hard to see because it is a becoming. It is a process that is constantly changing how other things change, and, changing itself, is mutating and growing. It’s difficult to cheer for a soft process that is shape-shifting. But it is important to see it.
Today we’ve become so aware of the downsides of innovations, and so disappointed with the promises of past utopias, that we find it hard to believe even in a mild protopian future—one in which tomorrow will be a little better than today. We find it very difficult to imagine any kind of future at all that we desire. Can you name a single science fiction future on this planet that is both plausible and desirable? (Star Trek doesn’t count; it’s in space.)
There is no happy flying-car future beckoning us any longer. Unlike the last century, nobody wants to move to the distant future. Many dread it. That makes it hard to take the future seriously. So we’re stuck in the short now, a present without a generational perspective. Some have adopted the perspective of believers in a Singularity who claim that imagining the future in 100 years is technically impossible. That makes us future-blind. This future-blindness may simply be the inescapable affliction of our modern world. Perhaps at this stage in civilization and technological advance, we enter into a permanent and ceaseless present, without past or future. Utopia, dystopia, and protopia all disappear. There is only the Blind Now.
The other alternative is to embrace the future and its becoming. The future we are aimed at is the product of a process—a becoming—that we can see right now. We can embrace the current emerging shifts that will become the future.
The problem with constant becoming (especially in a protopian crawl) is that unceasing change can blind us to its incremental changes. In constant motion we no longer notice the motion. Becoming is thus a self-cloaking action often seen only in retrospect. More important, we tend to see new things from the frame of the old. We extend our current perspective to the future, which in fact distorts the new to fit into what we already know. That is why the first movies were filmed like theatrical plays and the first VRs shot like movies. This shoehorning is not always bad. Storytellers exploit this human reflex in order to relate the new to the old, but when we are trying to discern what will happen in front of us, this habit can fool us. We have great difficulty perceiving change that is happening right now. Sometimes its apparent trajectory seems impossible, implausible, or ridiculous, so we dismiss it. We are constantly surprised by things that have been happening for 20 years or longer.
I am not immune from this distraction. I was deeply involved in the birth of the online world 30 years ago, and a decade later the arrival of the web. Yet at every stage, what was becoming was hard to see in the moment. Often it was hard to believe. Sometimes we didn’t see what was becoming because we didn’t want it to happen that way.
We don’t need to be blind to this continuous process. The rate of change in recent times has been unprecedented, which caught us off guard. But now we know: We are, and will remain, perpetual newbies. We need to believe in improbable things more often. Everything is in flux, and the new forms will be an uncomfortable remix of the old. With effort and imagination we can learn to discern what’s ahead more clearly, without blinders.
Let me give you an example of what we can learn about our future from the very recent history of the web. Before the graphic Netscape browser illuminated the web in 1994, the text-only internet did not exist for most people. It was hard to use. You needed to type code. There were no pictures. Who wanted to waste time on something so boring? If it was acknowledged at all in the 1980s, the internet was dismissed as either corporate email (as exciting as a necktie) or a clubhouse for teenage boys. Although it did exist, the internet was totally ignored.
Any promising new invention will have its naysayers, and the bigger the promises, the louder the nays. It’s not hard to find smart people saying stupid things about the web/internet on the morning of its birth. In late 1994, Time magazine explained why the internet would never go mainstream: “It was not designed for doing commerce, and it does not gracefully accommodate new arrivals.” Wow! Newsweek put the doubts more bluntly in a February 1995 headline: “The Internet? Bah!” The article was written by an astrophysicist and network expert, Cliff Stoll, who argued that online shopping and online communities were an unrealistic fantasy that betrayed common sense. “The truth is no online database will replace your newspaper,” he claimed. “Yet Nicholas Negroponte, director of the MIT Media Lab, predicts that we’ll soon buy books and newspapers straight over the Internet. Uh, sure.” Stoll captured the prevailing skepticism of a digital world full of “interacting libraries, virtual communities, and electronic commerce” with one word: “baloney.”
This dismissive attitude pervaded a meeting I had with the top leaders of ABC in 1989. I was there to make a presentation to the corner-office crowd about this “Internet Stuff.” To their credit, the executives of ABC realized something was happening. ABC was one of the three mightiest television networks in the world; the internet at that time was a mere mosquito in comparison. But people living on the internet (like me) were saying it could disrupt their business. Still, nothing I could tell them would convince them that the internet was not marginal, not just typing, and, most emphatically, not just teenage boys. But all the sharing, all the free stuff seemed too impossible to business executives. Stephen Weiswasser, a senior VP at ABC, delivered the ultimate put-down: “The Internet will be the CB radio of the ’90s,” he told me, a charge he later repeated to the press. Weiswasser summed up ABC’s argument for ignoring the new medium: “You aren’t going to turn passive consumers into active trollers on the internet.”
I was shown the door. But I offered one tip before I left. “Look,” I said. “I happen to know that the address abc.com has not been registered. Go down to your basement, find your most technical computer geek, and have him register abc.com immediately. Don’t even think about it. It will be a good thing to do.” They thanked me vacantly. I checked a week later. The domain was still unregistered.
While it is easy to smile at the sleepwalkers in TV land, they were not the only ones who had trouble imagining an alternative to couch potatoes. Wired magazine did too. I was a co-founding editor of Wired, and when I recently reexamined issues of Wired from the early 1990s (issues that I’d proudly edited), I was surprised to see them touting a future of high production-value content—5,000 always-on channels and virtual reality, with a sprinkling of bits of the Library of Congress. In fact, Wired offered a vision nearly identical to that of internet wannabes in the broadcast, publishing, software, and movie industries, like ABC. In this official future, the web was basically TV that worked. With a few clicks you could choose any of 5,000 channels of relevant material to browse, study, or watch, instead of the TV era’s five channels. You could jack into any channel you wanted from “all sports all the time” to the saltwater aquarium channel. The only uncertainty was, who would program it all? Wired looked forward to a constellation of new media upstarts like Nintendo and Yahoo! creating the content, not old-media dinosaurs like ABC.
Problem was, content was expensive to produce, and 5,000 channels of it would be 5,000 times as costly. No company was rich enough, no industry large enough to carry off such an enterprise. The great telecom companies, which were supposed to wire up the digital revolution, were paralyzed by the uncertainties of funding the net. In June 1994, David Quinn of British Telecom admitted to a conference of software publishers, “I’m not sure how you’d make money out of the internet.” The immense sums of money supposedly required to fill the net with content sent many technocritics into a tizzy. They were deeply concerned that cyberspace would become cyburbia—privately owned and operated.
The fear of commercialization was strongest among hard-core programmers who were actually building the web: the coders, Unix weenies, and selfless volunteer IT folk who kept the ad hoc network running. The techy administrators thought of their work as noble, a gift to humanity. They saw the internet as an open commons, not to be undone by greed or commercialization. It’s hard to believe now, but until 1991 commercial enterprise on the internet was strictly prohibited as an unacceptable use. There was no selling, no ads. In the eyes of the National Science Foundation (which ran the internet backbone), the internet was funded for research, not commerce. In what seems remarkable naiveté now, the rules favored public institutions and forbade “extensive use for private or personal business.” In the mid-1980s I was involved in shaping the WELL, an early text-only online system. We struggled to connect our private WELL network to the emerging internet because we were thwarted, in part, by the NSF’s “acceptable use” policy. The WELL couldn’t prove its users would not conduct commercial business on the internet, so we were not allowed to connect. We were all really blind to what was becoming.
This anticommercial attitude prevailed even in the offices of Wired. In 1994, during the first design meetings for Wired’s embryonic website, HotWired, our programmers were upset that the innovation we were cooking up—the first ever click-through ad banner—subverted the great social potential of this new territory. They felt the web was hardly out of diapers, and already they were being asked to blight it with billboards and commercials. But prohibiting the flow of money within this emerging parallel civilization was crazy. Money in cyberspace was inevitable.
That was a small misperception compared with the bigger story we all missed.
Computing pioneer Vannevar Bush outlined the web’s core idea—hyperlinked pages—way back in 1945, but the first person to try to build out the concept was a freethinker named Ted Nelson, who envisioned his own scheme in 1965. However, Nelson had little success connecting digital bits on a useful scale, and his efforts were known only to an isolated group of disciples.
At the suggestion of a computer-savvy friend, I got in touch with Nelson in 1984, a decade before the first websites. We met in a dark dockside bar in Sausalito, California. He was renting a houseboat nearby and had the air of someone with time on his hands. Folded notes erupted from his pockets and long strips of paper slipped from overstuffed notebooks. Wearing a ballpoint pen on a string around his neck, he told me—way too earnestly for a bar at four o’clock in the afternoon—about his scheme for organizing all the knowledge of humanity. Salvation lay in cutting up three-by-five cards, of which he had plenty.
Although Nelson was polite, charming, and smooth, I was too slow for his fast talk. But I got an aha! from his marvelous notion of hypertext. He was certain that every document in the world should be a footnote to some other document, and computers could make the links between them visible and permanent. This was a new idea at the time. But that was just the beginning. Scribbling on index cards, he sketched out complicated notions of transferring authorship back to creators and tracking payments as readers hopped along networks of documents, in what he called the “docuverse.” He spoke of “transclusion” and “intertwingularity” as he described the grand utopian benefits of his embedded structure. It was going to save the world from stupidity!
I believed him. Despite his quirks, it was clear to me that a hyperlinked world was inevitable—someday. But as I look back now, after 30 years of living online, what surprises me about the genesis of the web is how much was missing from Vannevar Bush’s vision, and even Nelson’s docuverse, and especially my own expectations. We all missed the big story. Neither old ABC nor startup Yahoo! created the content for 5,000 web channels. Instead billions of users created the content for all the other users. There weren’t 5,000 channels but 500 million channels, all customer generated. The disruption ABC could not imagine was that this “internet stuff” enabled the formerly dismissed passive consumers to become active creators. The revolution launched by the web was only marginally about hypertext and human knowledge. At its heart was a new kind of participation that has since developed into an emerging culture based on sharing. And the ways of “sharing” enabled by hyperlinks are now creating a new type of thinking—part human and part machine—found nowhere else on the planet or in history. The web has unleashed a new becoming.
Not only did we fail to imagine what the web would become, we still don’t see it today. We are oblivious to the miracle it has blossomed into. Twenty years after its birth the immense scope of the web is hard to fathom. The total number of web pages, including those that are dynamically created upon request, exceeds 60 trillion. That’s almost 10,000 pages per person alive. And this entire cornucopia has been created in less than 8,000 days.
The accretion of tiny marvels can numb us to the arrival of the stupendous. Today, from any window on the internet, you can get: an amazing variety of music and video, an evolving encyclopedia, weather forecasts, help-wanted ads, satellite images of any place on earth, up-to-the-minute news from around the globe, tax forms, TV guides, road maps with driving directions, real-time stock quotes, real estate listings with virtual walk-throughs and real-time prices, pictures of just about anything, latest sports scores, places to buy everything, records of political contributions, library catalogs, appliance manuals, live traffic reports, archives to major newspapers—all accessed instantly.
This view is spookily godlike. You can switch your gaze on a spot in the world from map to satellite to 3-D just by clicking. Recall the past? It’s there. Or listen to the daily complaints and pleas of almost anyone who tweets or posts. (And doesn’t everyone?) I doubt angels have a better view of humanity.
Why aren’t we more amazed by this fullness? Kings of old would have gone to war to win such abilities. Only small children back then would have dreamed such a magic window could be real. I have reviewed the expectations of the wise experts from the 1980s, and I can affirm that this comprehensive wealth of material, available on demand and free of charge, was not in anyone’s 20-year plan. At that time, anyone silly enough to trumpet the above list as a vision of the near future would have been confronted by the evidence: There wasn’t enough money in all the investment firms in the entire world to fund such bounty. The success of the web at this scale was impossible.
But if we have learned anything in the past three decades, it is that the impossible is more plausible than it appears.
Nowhere in Ted Nelson’s convoluted sketches of hypertext transclusion did the fantasy of a virtual flea market appear. Nelson hoped to franchise his Xanadu hypertext systems in the physical world at the scale of mom-and-pop cafés—you would go to a Xanadu store to do your hypertexting. Instead, the web erupted into open global flea markets like eBay, Craigslist, or Alibaba that handle several billion transactions every year and operate right into your bedroom. And here’s the surprise: Users do most of the work—they photograph, they catalog, they post, and they market their own sales. And they police themselves; while the sites do call in the authorities to arrest serial abusers, the chief method of ensuring fairness is a system of user-generated ratings. Three billion feedback comments can work wonders.
What we all failed to see was how much of this brave new online world would be manufactured by users, not big institutions. The entirety of the content offered by Facebook, YouTube, Instagram, and Twitter is not created by their staff, but by their audience. Amazon’s rise was a surprise not because it became an “everything store” (not hard to imagine), but because Amazon’s customers (me and you) rushed to write the reviews that made the site’s long-tail selection usable. Today, most major software producers have minimal help desks; their most enthusiastic customers advise and assist other customers on the company’s support forum web pages, serving as high-quality customer support for new buyers. And in the greatest leverage of the common user, Google turns traffic and link patterns generated by 90 billion searches a month into the organizing intelligence for a new economy. This bottom-up overturning was also not in anyone’s 20-year vision.
No web phenomenon has been more confounding than the infinite rabbit hole of YouTube and Facebook videos. Everything media experts knew about audiences—and they knew a lot—promoted the belief that audiences would never get off their butts and start making their own entertainment. The audience was a confirmed collective couch potato, as the ABC honchos assumed. Everyone knew writing and reading were dead; music was too much trouble to make when you could sit back and listen; video production was simply out of reach of amateurs in terms of cost and expertise. User-generated creations would never happen at a large scale, or if they happened they would not draw an audience, or if they drew an audience they would not matter. What a shock, then, to witness the near instantaneous rise of 50 million blogs in the early 2000s, with two new blogs appearing every second. And then, a few years later, the explosion of user-created videos—65,000 posted to YouTube every day, or 300 hours of video every minute, by 2015. And in recent years a ceaseless eruption of alerts, tips, and news headlines. Each user doing what ABC, AOL, USA Today—and almost everyone else—expected only ABC, AOL, USA Today would be doing. These user-created channels make no sense economically. Where are the time, energy, and resources coming from?
The nutrition of participation nudges ordinary folks to invest huge hunks of energy and time into making free encyclopedias, creating free public tutorials for changing a flat tire, or cataloging the votes in the Senate. More and more of the web runs in this mode. One study a few years ago found that only 40 percent of the web is commercially manufactured. The rest is fueled by duty or passion.
Coming out of the industrial age, when mass-produced goods outperformed anything you could make yourself, this sudden tilt toward consumer involvement is a surprise. We thought, “That amateur do-it-yourself thing died long ago, back in the horse-and-buggy era.” The enthusiasm for making things, for interacting more deeply than just choosing options, is the great force not reckoned—not seen—decades ago, even though it was already going on. This apparently primeval impulse for participation has upended the economy and is steadily turning the sphere of social networking—smart mobs, hive minds, and collaborative action—into the main event.
When a company opens part of its databases and functionality to users and other startups via a public API, or application programming interface, as Amazon, Google, eBay, Facebook, and most large platforms have, it is encouraging the participation of its users at new levels. People who take advantage of these capabilities are no longer a company’s customers; they’re the company’s developers, vendors, laboratories, and marketers.
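The mechanics of that arrangement can be sketched in a few lines. This is a conceptual toy, not any real platform’s API: a company exposes a small, stable set of functions, and outside developers compose them into new services without ever touching the platform’s internals. Every name here is invented for illustration:

```python
# A toy sketch of a public API: the platform exposes a few stable calls
# (create_listing, search) and keeps its internal state hidden.
# All class and function names are hypothetical, for illustration only.

class PlatformAPI:
    """The platform's public surface over its private listing store."""
    def __init__(self):
        self._listings = []  # internal state, invisible to callers

    def create_listing(self, title, price):
        item = {"id": len(self._listings) + 1, "title": title, "price": price}
        self._listings.append(item)
        return item["id"]

    def search(self, keyword):
        return [i for i in self._listings if keyword.lower() in i["title"].lower()]

# A third-party "developer" builds a new service out of the platform's calls:
api = PlatformAPI()
api.create_listing("Vintage radio", 40)
api.create_listing("Radio tubes, boxed", 15)
cheap_radios = [i for i in api.search("radio") if i["price"] < 20]
print(cheap_radios)  # [{'id': 2, 'title': 'Radio tubes, boxed', 'price': 15}]
```

The bargain-hunting filter at the end is something the platform never built; it exists only because an outsider could recombine the exposed calls, which is exactly the leverage the paragraph describes.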
With the steady advance of new ways for customers and audiences to participate, the web has embedded itself into every activity and every region of the planet. Indeed, people’s anxiety about the internet being out of the mainstream seems quaint now. The genuine 1990 worry about the internet being predominantly male was entirely misplaced. Everyone missed the party celebrating the 2002 flip point when women online first outnumbered men. Today, 51 percent of netizens are female. And, of course, the internet is not and has never been a teenage realm. In 2014 the average age of a user was roughly a bone-creaking 44 years old.
And what could be a better mark of universal acceptance than adoption by the Amish? I was visiting some Amish farmers recently. They fit the archetype perfectly: straw hats, scraggly beards, wives with bonnets, no electricity, no phones or TVs, horse and buggy outside. They have an undeserved reputation for resisting all technology, when actually they are just very late adopters. Still, I was amazed to hear them mention their websites.
“Amish websites?” I asked.
“For advertising our family business. We weld barbecue grills in our shop.”
“Yes, but . . .”
“Oh, we use the internet terminal at the public library. And Yahoo!”
I knew then the takeover was complete. We are all becoming something new.
• • •
As we try to imagine this exuberant web three decades from now, our first impulse is to imagine it as Web 2.0—a better web. But the web in 2050 won’t be a better web, just as the first version of the web was not better TV with more channels. It will have become something new, as different from the web today as the first web was from TV.
In a strict technical sense, the web today can be defined as the sum of all the things that you can google—that is, all files reachable with a hyperlink. Presently major portions of the digital world can’t be googled. A lot of what happens in Facebook, or on a phone app, or inside a game world, or even inside a video can’t be searched right now. In 30 years it will be. The tendrils of hyperlinks will keep expanding to connect all the bits. The events that take place in a console game will be as searchable as the news. You’ll be able to look for things that occur inside a YouTube video. Say you want to find the exact moment on your phone when your sister received her acceptance to college. The web will reach this. It will also extend to physical objects, both manufactured and natural. A tiny, almost free chip embedded into products will connect them to the web and integrate their data. Most objects in your room will be connected, enabling you to google your room. Or google your house. We already have a hint of that. I can operate my thermostat and my music system from my phone. In three more decades, the rest of the world will overlap my devices. Unsurprisingly, the web will expand to the dimensions of the physical planet.
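The definition above, the web as everything reachable by hyperlink, is literally a graph-reachability property, and it is why crawlers find some pages and miss others. A minimal sketch over a toy in-memory link graph (the page names are invented):

```python
# The "web" as the set of pages reachable by following hyperlinks,
# sketched as a breadth-first traversal over a toy link graph.
from collections import deque

links = {                          # page -> pages it hyperlinks to (toy data)
    "home": ["news", "videos"],
    "news": ["home", "archive"],
    "videos": ["news"],
    "archive": [],
    "walled-garden": ["secret"],   # nothing links here: a crawler never finds it
    "secret": [],
}

def reachable(start):
    """Breadth-first search: every page a crawler starting at `start` can reach."""
    seen, queue = {start}, deque([start])
    while queue:
        page = queue.popleft()
        for nxt in links.get(page, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable("home")))  # ['archive', 'home', 'news', 'videos']
```

The “walled-garden” and “secret” pages exist but are invisible to the traversal, which is precisely the situation of the unlinked content inside apps, games, and videos that the paragraph says the future web will absorb.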
It will also expand in time. Today’s web is remarkably ignorant of the past. It may supply you with a live webcam stream of Tahrir Square in Egypt, but accessing that square a year ago is nearly impossible. Viewing an earlier version of a typical website is not easy, but in 30 years we’ll have time sliders enabling us to see any past version. Just as your phone’s navigation directions through a city are improved by including previous days, weeks, and months of traffic patterns, so the web of 2050 will be informed by the context of the past. And the web will slide into the future as well.
From the moment you wake up, the web is trying to anticipate your intentions. Since your routines are noted, the web is attempting to get ahead of your actions, to deliver an answer almost before you ask a question. It is built to provide the files you need before the meeting, to suggest the perfect place to eat lunch with your friend, based on the weather, your location, what you ate this week, what you had the last time you met with your friend, and as many other factors as you might consider. You’ll converse with the web. Rather than flick through stacks of friends’ snapshots on your phone, you ask it about a friend. The web anticipates which photos you’d like to see and, depending on your reaction to those, may show you more or something from a different friend—or, if your next meeting is starting, the two emails you need to see. The web will more and more resemble a presence that you relate to rather than a place—the famous cyberspace of the 1980s—that you journey to. It will be a low-level constant presence like electricity: always around us, always on, and subterranean. By 2050 we’ll come to think of the web as an ever-present type of conversation.
This enhanced conversation will unleash many new possibilities. Yet the digital world already feels bloated with too many choices and possibilities. There seem to be no slots for anything genuinely new in the next few years.
Can you imagine how awesome it would have been to be an ambitious entrepreneur back in 1985 at the dawn of the internet? At that time almost any dot-com name you desired was available. All you had to do was ask for the one you wanted. One-word domains, common names—they were all available. It didn’t even cost anything to claim a name. This grand opportunity was true for years. In 1994 a Wired writer noticed that mcdonalds.com was still unclaimed, so with my encouragement he registered it. He then tried unsuccessfully to give it to McDonald’s, but the company’s cluelessness about the internet was so hilarious (“dot what?”) that this tale became a famous story we published in Wired.
The internet was a wide-open frontier then. It was easy to be the first in any category you chose. Consumers had few expectations and the barriers were extremely low. Start a search engine! Be the first to open an online store! Serve up amateur videos! Of course, that was then. Looking back now, it seems as if waves of settlers have since bulldozed and developed every possible venue, leaving only the most difficult and gnarly specks for today’s newcomers. Thirty years later the internet feels saturated with apps, platforms, devices, and more than enough content to demand our attention for the next million years. Even if you could manage to squeeze in another tiny innovation, who would notice it among our miraculous abundance?
But, but . . . here is the thing. In terms of the internet, nothing has happened yet! The internet is still at the beginning of its beginning. It is only becoming. If we could climb into a time machine, journey 30 years into the future, and from that vantage look back to today, we’d realize that most of the greatest products running the lives of citizens in 2050 were not invented until after 2016. People in the future will look at their holodecks and wearable virtual reality contact lenses and downloadable avatars and AI interfaces and say, “Oh, you didn’t really have the internet”—or whatever they’ll call it—“back then.”
And they’d be right. Because from our perspective now, the greatest online things of the first half of this century are all before us. All these miraculous inventions are waiting for that crazy, no-one-told-me-it-was-impossible visionary to start grabbing the low-hanging fruit—the equivalent of the dot-com names of 1985.
Because here is the other thing the graybeards in 2050 will tell you: Can you imagine how awesome it would have been to be an innovator in 2016? It was a wide-open frontier! You could pick almost any category, add some AI to it, and put it on the cloud. Few devices had more than one or two sensors in them, unlike the hundreds now. Expectations and barriers were low. It was easy to be the first. And then they would sigh. “Oh, if only we had realized how possible everything was back then!”
So, the truth: Right now, today, in 2016, is the best time to start up. There has never been a better day in the whole history of the world to invent something. There has never been a better time with more opportunities, more openings, lower barriers, higher benefit/risk ratios, better returns, greater upside than now. Right now, this minute. This is the moment that folks in the future will look back at and say, “Oh, to have been alive and well back then!”
The last 30 years have created a marvelous starting point, a solid platform to build truly great things. But what’s coming will be different, beyond, and other. The things we will make will be constantly, relentlessly becoming something else. And the coolest stuff of all has not been invented yet.
Today truly is a wide-open frontier. We are all becoming. It is the best time ever in human history to begin.
You are not late.