The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future (2016)
Much of what I believed about human nature, and the nature of knowledge, was upended by Wikipedia. Wikipedia is now famous, but when it began I and many others considered it impossible. It’s an online reference organized like an encyclopedia that unexpectedly allows anyone in the world to add to it, or change it, at any time, no permission needed. A 12-year-old in Jakarta could edit the entry for George Washington if she wanted to. I knew that the human propensity for mischief among the young and bored—many of whom lived online—would make an encyclopedia editable by anyone an impossibility. I also knew that even among the responsible contributors, the temptation to exaggerate and misremember was inescapable, adding to the impossibility of a reliable text. I knew from my own 20-year experience online that you could not rely on what you read by a random stranger, and I believed that an aggregation of random contributions would be a total mess. Even unedited web pages created by experts failed to impress me, so an entire encyclopedia written by unedited amateurs, not to mention ignoramuses, seemed destined to be junk.
Everything I knew about the structure of information convinced me that knowledge would not spontaneously emerge from data without a lot of energy and intelligence deliberately directed to transforming it. All the attempts at headless collective writing I had previously been involved with generated only forgettable trash. Why would anything online be any different?
So when the first incarnation of the online encyclopedia launched in 2000 (then called Nupedia), I gave it a look, and was not surprised that it never took off. While anyone could edit it, Nupedia required a laborious process of collaborative rewriting by other contributors that discouraged novice contributors. However, the founders of Nupedia created an easy-to-use wiki off to the side to facilitate working on the text, and much to everyone’s surprise that wiki became the main event. Anyone could edit as well as post without waiting on others. I expected even less from that effort, now renamed Wikipedia.
How wrong I was. The success of Wikipedia keeps surpassing my expectations. At last count in 2015 it sported more than 35 million articles in 288 languages. It is quoted by the U.S. Supreme Court, relied on by schoolkids worldwide, and used by every journalist and lifelong learner for a quick education on something new. Despite the flaws of human nature, it keeps getting better. Both the weaknesses and virtues of individuals are transformed into common wealth, with a minimum of rules. Wikipedia works because it turns out that, with the right tools, it is easier to restore damaged text (the revert function on Wikipedia) than to create damaged text (vandalism), and so the good enough article prospers and continues to slowly improve. With the right tools, it turns out the collaborative community can outpace the same number of ambitious individuals competing.
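The revert asymmetry described above rests on a simple data structure: because every edit appends to a version history, undoing damage is a single step, while doing damage takes fresh effort. Here is a toy model of that idea — a sketch only, not MediaWiki's actual implementation, with all names illustrative:

```python
# Toy model of wiki revision history: restoring damaged text is
# one operation, while creating damage requires new effort.
class Article:
    def __init__(self, text=""):
        self.history = [text]  # every version ever saved

    def edit(self, new_text):
        self.history.append(new_text)

    @property
    def current(self):
        return self.history[-1]

    def revert(self, steps=1):
        # A revert just re-appends an earlier version, so one
        # click can undo arbitrarily much vandalism.
        self.history.append(self.history[-1 - steps])

page = Article("George Washington was the first U.S. president.")
page.edit("VANDALIZED!!!")
page.revert()
print(page.current)  # the original sentence is back
```

The design point is that history is append-only: even the revert itself is recorded, so nothing is ever lost and good-faith repair always costs less than sabotage.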
It has always been clear that collectives amplify power—that is what cities and civilizations are—but what’s been the big surprise for me is how minimal the tools and oversight that are needed. The bureaucracy of Wikipedia is relatively so small as to be invisible, although it has grown over its first decade. Yet the greatest surprise brought by Wikipedia is that we still don’t know how far this power can go. We haven’t seen the limits of wiki-ized intelligence. Can it make textbooks, music, and movies? What about law and political governance?
Before we say, “Impossible!” I say: Let’s see. I know all the reasons why law can never be written by know-nothing amateurs. But having already changed my mind once on this, I am slow to jump to conclusions again. A Wikipedia is impossible, but here it is. It is one of those things that is impossible in theory but possible in practice. Once you confront the fact that it works, you have to shift your expectation of what else there may be that is impossible in theory but might work in practice. To be honest, so far this open wiki model has been tried in a number of other publishing fields but has not been widely successful. Yet. Just as the first version of Wikipedia failed because the tools and processes were not right, collaborative textbooks, or law, or movies may take the invention of further new tools and methods.
I am not the only one who has had his mind changed about this. When you grow up having “always known” that such a thing as Wikipedia works, when it is obvious to you that open source software is better than polished proprietary goods, when you are certain that sharing your photos and other data yields more than safeguarding them—then these assumptions will become a platform for a yet more radical embrace of the common wealth. What once seemed impossible is now taken for granted.
Wikipedia has changed my mind in other ways. I was a fairly steady individualist, an American with libertarian leanings, and the success of Wikipedia led me toward a new appreciation of social power. I am now much more interested in both the power of the collective and the new obligations individuals owe to the collective. In addition to expanding civil rights, I want to expand civil duties. I am convinced that the full impact of Wikipedia is still subterranean and that its mind-changing force is working subconsciously on the global millennial generation, providing them with an existence proof of a beneficial hive mind, and an appreciation for believing in the impossible.
More important, Wikipedia has taught me to believe in the impossible more often. In the past several decades I’ve had to accept other ideas that I formerly thought were impossibilities but that later turned out to be good practical ideas. For instance, I had my doubts about the online flea market called eBay when I first encountered it in 1997. You want me to transfer thousands of dollars to a distant stranger trying to sell me a used car I’ve never seen? Everything I had been taught about human nature suggested this could not work. Yet today, strangers selling automobiles is the major profit center for the very successful eBay corporation.
Twenty years ago I might have been able to believe that in 2016 we’d have maps for the entire world on our personal handheld devices. But I could not have been convinced we’d have them with street views of the buildings for many cities, or apps that showed the locations of public toilets, or that they would give us spoken directions for walking or public transit, and that we’d have all this mapping and more “for free.” It seemed starkly impossible back then. And this free abundance still seems hard to believe in theory. Yet here it is on hundreds of millions of phones.
These supposed impossibilities keep happening with increased frequency. Everyone “knew” that people don’t work for free, and if they did, they could not make something useful without a boss. But today entire sections of our economy run on software instruments created by volunteers working without pay or bosses. Everyone knew humans were innately private beings, yet total, open, round-the-clock sharing, supposedly impossible, happened anyway. Everyone knew that humans are basically lazy, and they would rather watch than create, and they would never get off their sofas to create their own TV. It would be impossible that millions of amateurs would produce billions of hours of video, or that anyone would watch any of it. Like Wikipedia, YouTube is theoretically impossible. But here again this impossibility is real in practice.
This list goes on, old impossibilities appearing as new possibilities daily. But why now? What is happening to disrupt the ancient impossible/possible boundary?
As far as I can tell, the impossible things happening now are in every case due to the emergence of a new level of organization that did not exist before. These incredible eruptions are the result of large-scale collaboration, and massive real-time social interacting, which in turn are enabled by omnipresent instant connection between billions of people at a planetary scale. Just as fleshy tissue yields a new, higher level of organization for a bunch of individual cells, these new social structures yield new tissue for individual humans. Tissue can do things that cells can’t. The collectivist organizations of Wikipedia, Linux, Facebook, Uber, the web—even AI—can do things that industrialized humans could not. This is the first time on this planet that we’ve tied a billion people together in immediate syncopation, just as Facebook has done. From this new societal organization, new behaviors emerge that were impossible at the lower level.
Humans have long invented new social organizations, from law, courts, irrigation systems, schools, governments, libraries up to the largest scale, civilization itself. These social instruments are what makes us human—and what makes our behavior “impossible” from the vantage point of animals. For instance, when we invented written records and laws, these enabled a type of egalitarianism not possible in our cousins the primates, and not present in oral cultures. The cooperation and coordination bred by irrigation and agriculture produced yet more impossible behaviors of anticipation and preparation, and sensitivity to the future. Human society unleashed all kinds of previously impossible human behaviors into the biosphere.
The technium—the modern system of culture and technology—is accelerating the creation of new impossibilities by continuing to invent new social organizations. The genius of eBay was its invention of cheap, easy, and quick reputation status. Strangers could sell to strangers at a great distance because we now had a technology to quickly assign persistent reputations to those beyond our circle. That lowly innovation opened up a new kind of higher-level coordination that permitted a new kind of exchange (remote purchasing among strangers) that was impossible before. The same kind of technologically enabled trust, plus real-time coordination, makes the decentralized taxi service Uber possible. The “revert log” button on Wikipedia, which made it easier to restore a vandalized passage than to vandalize it, unleashed a new higher organization of trust, emphasizing one facet of human behavior not enabled at a large scale before.
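The eBay-style reputation mechanism described above can be sketched as nothing more than an append-only log of thumbs-up/thumbs-down feedback aggregated into a persistent public score that follows a seller into every future transaction. The names and scoring below are illustrative assumptions, not eBay's actual system:

```python
# Minimal sketch of persistent stranger-to-stranger reputation:
# buyers leave +1 or -1 feedback, and the aggregate score travels
# with the seller, letting distant strangers gauge trust.
from collections import defaultdict

feedback = defaultdict(list)  # seller id -> list of ratings

def rate(seller, score):
    assert score in (+1, -1), "feedback is a simple thumbs up/down"
    feedback[seller].append(score)

def reputation(seller):
    ratings = feedback[seller]
    positives = ratings.count(+1)
    return {
        "score": sum(ratings),
        "percent_positive": round(100 * positives / len(ratings), 1)
                            if ratings else None,
    }

rate("used_car_seller", +1)
rate("used_car_seller", +1)
rate("used_car_seller", -1)
print(reputation("used_car_seller"))  # {'score': 1, 'percent_positive': 66.7}
```

The key property is persistence: because the score is cheap to record and hard to escape, it substitutes for the face-to-face familiarity that remote purchasing among strangers otherwise lacks.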
We have just begun to fiddle with social communications. Hyperlinks, wifi, and GPS location services are really types of relationships enabled by technology, and this class of innovations is just beginning. The majority of the most amazing communication inventions that are possible have not been invented yet. We are also just in the infancy of being able to invent institutions at a truly global scale. When we weave ourselves together into a global real-time society, former impossibilities will really start to erupt into reality. It is not necessary that we invent some kind of autonomous global consciousness. It is only necessary that we connect everyone to everyone else—and to everything else—all the time and create new things together. Hundreds of miracles that seem impossible today will be possible with this shared human connectivity.
I am looking forward to having my mind changed a lot in the coming years. I think we’ll be surprised by how many of the things we assumed were “natural” for humans are not really natural at all. It might be fairer to say that what is natural for a tribe of mildly connected humans will not be natural for a planet of intensely connected humans. “Everyone knows” that humans are warlike, but I would guess organized war will become less attractive, or useful, over time as new means of social conflict resolution arise at a global level. Of course, many of the impossible things we can expect will be impossibly bad. The new technologies will unleash whole new ways to lie, cheat, steal, spy, and terrorize. We have no consensual international rules for cyberconflict, which means we can expect some very nasty unexpected “impossible” cyber events in the coming decade. Because of our global connectivity, a relatively simple hack could trigger a cascading failure that reaches impossible scale very quickly. Worldwide disruptions of our social fabric are in fact inevitable. One day in the next three decades the entire internet/phone system will blink off for 24 hours, and we’ll be in shock for years afterward.
I don’t focus on these expected downsides in this book for several reasons. First, there is no invention that cannot be subverted in some way to cause harm. Even the most angelic technology can be weaponized, and will be. Criminals are some of the most creative innovators in the world. And crap constitutes 80 percent of everything. But importantly, these negative forms follow exactly the same general trends I’ve been outlining for the positive. The negative, too, will become increasingly cognified, remixed, and filtered. Crime, scams, warring, deceit, torture, corruption, spam, pollution, greed, and other hurt will all become more decentralized and data centered. Both virtue and vice are subject to the same great becoming and flowing forces. All the ways that startups and corporations need to adjust to ubiquitous sharing and constant screening apply to crime syndicates and hacker squads as well. Even the bad can’t escape these trends.
Additionally, it may seem counterintuitive, but every harmful invention also provides a niche to create a brand-new never-seen-before good. Of course, that newly minted good can then be (and probably will be) abused by a corresponding bad idea. It may seem that this circle of new good provoking new bad which provokes new good which spawns new bad is just spinning us in place, only faster and faster. That would be true except for one vital difference: On each round we gain additional opportunities and choices that did not exist before. This expansion of choices (including the choice to do harm) is an increase in freedom—and this increase in freedoms and choices and opportunities is the foundation of our progress, of our humanity, and of our individual happiness.
Our technological spinning has thrown us up to a new level, opening up an entirely new continent of unknown opportunities and scary choices. The consequences of global-scale interactions are beyond us. The amount of data and power needed is inhuman; the vast realms of peta-, exa-, zetta-, zillion don’t really mean anything to us today because this is the vocabulary of megamachines, and of planets. We will certainly behave differently collectively than as individuals, but we don’t know how. Much more important, as individuals we behave differently in collectives.
This has been true for humans for a long while, ever since we moved to cities and began building civilizations. What’s new now and in the coming decades is the velocity of this higher territory of connectivity (speed of light), and its immensely vaster scale (the entire planet). We are headed for a trillion times increase. As noted earlier, a shift by a trillion is not merely a change in quantity, but a change in essence. Most of what “everybody knows” about human beings has so far been based on the human individual. But there may be a million different ways to connect several billion people, and each way will reveal something new about us. Or each way may create in us something new. Either way, our humanity will shift.
Connected, in real time, in multiple ways, at an increasingly global scale, in matters large and small, with our permission, we will operate at a new level, and we won’t cease surprising ourselves with impossible achievements. The impossibility of Wikipedia will quietly recede into outright obviousness.
In addition to hard-to-believe emergent phenomena, we are headed to a world where the improbable is the new normal. Cops, emergency room doctors, and insurance agents see a bit of this already. They realize how many crazy impossible things actually happen all the time. For instance, a burglar gets stuck in a chimney; a truck driver in a head-on collision is thrown out his front window and lands on his feet, walking away; a wild antelope galloping across a bike trail knocks a man off his bicycle; a candle at a wedding sets the bride’s hair on fire; a girl casually fishing off a backyard dock catches a huge man-size shark. In former times these unlikely events would be private, known only as rumors, stories a friend of a friend told, easily doubted and not really believed.
But today they are on YouTube, and they fill our vision. You can see them yourself. Each of these weird freakish events has been seen by millions.
The improbable consists of more than just accidents. The internets are also brimming with improbable feats of performance—someone who can run up a side of a building, or slide down suburban rooftops on a snowboard, or stack up cups faster than you can blink. And not just humans—pets open doors, ride scooters, and paint pictures. The improbable also includes extraordinary levels of superhuman achievements: people doing astonishing memory tasks, or imitating all the accents of the world. In these extreme feats we see the super in humans.
Every minute a new impossible thing is uploaded to the internet and that improbable event becomes just one of hundreds of extraordinary events that we’ll see or hear about today. The internet is like a lens that focuses the extraordinary into a beam, and that beam has become our illumination. It compresses the unlikely into a small viewable band of everydayness. As long as we are online—which is almost all day many days—we are illuminated by this compressed extraordinariness. It is the new normal.
That light of superness changes us. We no longer want mere presentations; we want the best, greatest, most extraordinary presenters alive, like in the TED videos. We don’t want to watch people playing games; we want to watch the highlights of the highlights, the most amazing moves, catches, runs, shots, and kicks, each one more remarkable and improbable than the last.
We are also exposed to the greatest range of human experience: the heaviest person, shortest midgets, longest mustache—the entire universe of superlatives. Superlatives were once rare—by definition—but now we see multiple videos of superlatives all day long, and they seem normal. Humans have always treasured drawings and photos of the weird extremes of humanity (witness early issues of National Geographic and Ripley’s Believe It or Not), but there is an intimacy about watching these extremities on our phones while we wait at the dentist. They are now much realer, and they fill our heads. I think there is already evidence that this ocean of extraordinariness is inspiring and daring ordinary folks to try something extraordinary.
At the same time, superlative epic failures are thrust before us as well. We are confronted by the stupidest people in the world doing the dumbest things imaginable. In some respects this may place us in a universe of nothing more than tiny, petty, obscure Guinness World Record holders. In every life there is probably at least one moment that is freakish, so everyone alive is a world record holder for 15 minutes. The good news may be that it cultivates in us an expanded sense of what is possible for humans, and for human life, and so extremism expands us. The bad news may be that this insatiable appetite for super-superlatives leads to dissatisfaction with anything ordinary.
There’s no end to this dynamic. Cameras are ubiquitous, so as our collective tracked life expands, we’ll accumulate thousands of videos showing people being struck by lightning—because improbable events are more normal than we think. When we all wear tiny cameras all the time, then the most improbable event, the most superlative achievement, the most extreme actions of anyone alive will be recorded and shared around the world in real time. Soon only the most extraordinary moments of 6 billion citizens will fill our streams. So henceforth rather than be surrounded by ordinariness we’ll float in extraordinariness—as it becomes mundane. When the improbable dominates our field of vision to the point that it seems as if the world contains only the impossible, then these improbabilities don’t feel as improbable. The impossible will feel inevitable.
There is a dreamlike quality to this state of improbability. Certainty itself is no longer as certain as it once was. When I am connected to the Screen of All Knowledge, to that billion-eyed hive of humanity woven together and mirrored on a billion pieces of glass, truth is harder to find. For every accepted piece of knowledge I come across, there is, within easy reach, a challenge to the fact. Every fact has its antifact. The internet’s extreme hyperlinking will highlight those antifacts as brightly as the facts. Some antifacts are silly, some borderline, and some valid. This is the curse of the screen: You can’t rely on experts to sort them out because for every expert there is an equal and opposite anti-expert. Thus anything I learn is subject to erosion by these ubiquitous antifacts.
Ironically, in an age of instant global connection, my certainty about anything has decreased. Rather than receiving truth from an authority, I am reduced to assembling my own certainty from the liquid stream of facts flowing through the web. Truth, with a capital T, becomes truths, plural. I have to sort the truths not just about things I care about, but about anything I touch, including areas about which I can’t possibly have any direct knowledge. That means that in general I have to constantly question what I think I know. We might consider this state perfect for the advancement of science, but it also means that I am more likely to have my mind changed for incorrect reasons.
While hooked into the network of networks I feel like I am a network myself, trying to achieve reliability from unreliable parts. And in my quest to assemble truths from half-truths, nontruths, and some noble truths scattered in the flux, I find my mind attracted to fluid ways of thinking (scenarios, provisional belief, subjective hunches) and toward fluid media like mashups, twitterese, and search. But as I flow through this slippery web of ideas, it often feels like a waking dream.
We don’t really know what dreams are for, only that they satisfy some fundamental need of consciousness. Someone watching me surf the web, as I jump from one suggested link to another, would see a daydream. On the web recently I found myself in a crowd of people watching a barefoot man eat dirt, then I saw a boy singing whose face began to melt, then Santa burned a Christmas tree, then I was floating inside a mud house on the very tippy top of the world, then Celtic knots untied themselves, then a guy told me the formula for making clear glass, then I was watching myself, back in high school, riding a bicycle. And that was just the first few minutes of my time surfing the web one morning. The trancelike state we fall into while following the undirected path of links could be seen as a terrible waste of time—or, like dreams, it might be a productive waste of time. Perhaps we are tapping into our collective unconscious as we roam the web. Maybe click-dreaming is a way for all of us to have the same dream, independent of what we click on.
This waking dream we call the internet also blurs the difference between my serious thoughts and my playful thoughts, or to put it more simply: I no longer can tell when I am working and when I am playing online. For some people the dissolving boundary between these two realms marks all that is wrong with the internet: It is the high-priced waster of time. It breeds trifles and turns superficialities into careers. Jeff Hammerbacher, a former Facebook engineer, famously complained that the “best minds of my generation are thinking about how to make people click ads.” This waking dream is viewed by some as an addictive squandering. On the contrary, I cherish a good wasting of time as a necessary precondition for creativity. More important, I believe the conflation of play and work, of thinking hard and thinking playfully, is one of the greatest things this new invention has done. Isn’t the whole idea that in a highly evolved advanced society work is over?
I’ve noticed a different approach to my thinking now that the hive mind has spread it extremely wide and loose. My thinking is more active, less contemplative. Rather than begin a question or hunch by ruminating aimlessly in my mind, nourished only by my ignorance, I start doing things. I immediately go. I go looking, searching, asking, questioning, reacting, leaping in, constructing notes, bookmarks, a trail—I start off making something mine. I don’t wait. Don’t have to wait. I act on ideas first now instead of thinking on them. For some folks, this is the worst of the net—the loss of contemplation. Others feel that all this frothy activity is simply stupid busywork, or spinning of wheels, or illusionary action. But compared with what? Compared with the passive consumption of TV? Or time spent lounging at a bar chatting? Or the slow trudge to a library only to find no answers to the hundreds of questions I have? Picture the billions of people online at this very minute. To my eye they are not wasting time with silly associative links, but are engaged in a more productive way of thinking—getting instant answers, researching, responding, daydreaming, browsing, being confronted with something very different, writing down their own thoughts, posting their opinions, even if small. Compare that to the equivalent of hundreds of millions of people 50 years ago watching TV or reading a newspaper in a big chair.
This new mode of being—surfing the waves, diving down, rushing up, flitting from bit to bit, tweeting and twittering, ceaselessly dipping into newness with ease, daydreaming, questioning each and every fact—is not a bug. It is a feature. It is a proper response to the ocean of data, news, and facts flooding us. We need to be fluid and agile, flowing from idea to idea, because that fluidity reflects the turbulent informational environment surrounding us. This mode is neither a lazy failure nor an indulgent luxury. It is a necessity in order to thrive. To steer a kayak on white-water rapids you need to be paddling at least as fast as the water runs, and to hope to navigate the exabytes of information, change, disruption coming at us, you need to be flowing as fast as the frontier is flowing.
But don’t confuse this flux for the shallows. Fluidity and interactivity also allow us to instantly divert more attention to works that are far more complex, bigger, and more complicated than ever before. Technologies that provided audiences with the ability to interact with stories and news—to time shift, play later, rewind, probe, link, save, clip, cut and paste—enabled long forms as well as short forms. Film directors started creating motion pictures that were not series of sitcoms but massive sustained narratives that took years to tell. These vast epics, like Lost, Battlestar Galactica, The Sopranos, Downton Abbey, and The Wire, had multiple interweaving plotlines, multiple protagonists, and an incredible depth of characters, and these sophisticated works demanded sustained attention that was not only beyond previous TV and 90-minute movies, but would have shocked Dickens and other novelists of yore. Dickens would have marveled back then: “You mean the audience could follow all that, and then want more? Over how many years?” I would never have believed myself capable of enjoying such complicated stories, or caring about them enough to put in the time. My attention has grown. In a similar way the depth, complexity, and demands of video games can equal the demands of marathon movies or any great book. Just to become proficient in some games takes 50 hours.
But the most important way these new technologies are changing how we think is that they have become one thing. It may appear as if you are spending endless nanoseconds on a series of tweets, and infinite microseconds surfing between web pages, or hours wandering between YouTube channels, and then hovering only mere minutes on one book snippet after another, when you finally turn back to your spreadsheet at work or flick through the screen of your phone. But in reality you are spending 10 hours a day paying attention to one intangible thing. This one machine, this one huge platform, this gigantic masterpiece is disguised as a trillion loosely connected pieces. The unity is easy to miss. The well-paid directors of websites, the hordes of commenters online, and the movie moguls reluctantly letting us stream their movies—these folks don’t believe they are mere data points in a big global show, but they are. When we enter any of the 4 billion screens lit today, we are participating in one open-ended question. We are all trying to answer: What is it?
The networking company Cisco estimates that there will be 50 billion devices on the internet by 2020, in addition to tens of billions of screens. The electronics industry expects a billion wearable devices in five years, tracking our activities, feeding data into the stream. We can expect another 13 billion appliances, like the Nest thermostat, animating our smarthomes. There will be 3 billion devices built into connected cars. And 100 billion dumb RFID chips embedded into goods on the shelves of Walmart. This is the internet of things, the emerging dreamland of everything we manufacture that is the new platform for the improbable. It is built with data.
Knowledge, which is related, but not identical, to information, is exploding at the same rate as information, doubling every two years. The number of scientific articles published each year has been accelerating even faster than this for decades. Over the last century the annual number of patent applications worldwide has risen in an exponential curve.
We know vastly more about the universe than we did a century ago. This new knowledge about the physical laws of the universe has been put to practical use in such consumer goods as GPS and iPods, with a steady increase in our own lifespans. Telescopes, microscopes, fluoroscopes, oscilloscopes allowed us to see in new ways, and when we looked with new tools, we suddenly gained many new answers.
Yet the paradox of science is that every answer breeds at least two new questions. More tools, more answers, ever more questions. Telescopes, radioscopes, cyclotrons, atom smashers expanded not only what we knew, but birthed new riddles and expanded what we didn’t know. Recent discoveries have led us to realize that 96 percent of all matter and energy in our universe is outside of our vision. The universe is not made of the atoms and heat we discovered last century; instead it is primarily composed of two unknown entities we label “dark”: dark energy and dark matter. “Dark” is a euphemism for ignorance. We really have no idea what the bulk of the universe is made of. We find a similar proportion of ignorance if we probe deeply into the cell, or the brain. We don’t know nothin’ relative to what could be known. Our inventions allow us to spy into our ignorance. If knowledge is growing exponentially because of scientific tools, then we should be quickly running out of puzzles. But instead we keep discovering greater unknowns.
Thus, even though our knowledge is expanding exponentially, our questions are expanding exponentially faster. And as mathematicians will tell you, the widening gap between two exponential curves is itself an exponential curve. That gap between questions and answers is our ignorance, and it is growing exponentially. In other words, science is a method that chiefly expands our ignorance rather than our knowledge.
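The mathematical claim here holds whenever the two curves grow at different rates: the gap is eventually dominated by the faster exponential, so it grows exponentially itself. A quick numeric check, using the stated doubling time of two years for knowledge and an assumed faster rate of 1.5 years for questions (the second rate is illustrative, not from the text):

```python
# Numeric check: the gap between two exponential curves is itself
# (asymptotically) exponential, growing at the faster curve's rate.
def knowledge(t):   # doubles every 2 years (rate from the text)
    return 2 ** (t / 2.0)

def questions(t):   # doubles every 1.5 years (assumed, for illustration)
    return 2 ** (t / 1.5)

def gap(t):
    return questions(t) - knowledge(t)

# Over one questions-doubling period (1.5 years), the gap's growth
# factor approaches 2 -- the signature of exponential growth.
for t in (15, 45, 90):
    print(t, "years in, gap growth factor:", round(gap(t + 1.5) / gap(t), 4))
```

Whatever specific rates one assumes, as long as questions outpace answers the same qualitative conclusion follows: the territory of ignorance expands faster than the territory of knowledge.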
We have no reason to expect this to reverse in the future. The more disruptive a technology or tool is, the more disruptive the questions it will breed. We can expect future technologies such as artificial intelligence, genetic manipulation, and quantum computing (to name a few on the near horizon) to unleash a barrage of new huge questions—questions we could have never thought to ask before. In fact, it’s a safe bet that we have not asked our biggest questions yet.
• • •
Every year humans ask the internet 2 trillion questions, and every year the search engines give back 2 trillion answers. Most of those answers are pretty good. Many times the answers are amazing. And they are free! In the time before instant free internet search, the majority of the 2 trillion questions could not have been answered for any reasonable cost. Of course, while the answers may be free to users, they do cost the search companies like Google, Yahoo!, Bing, and Baidu something to create. In 2007, I calculated the cost to Google to answer one query to be approximately 0.3 cents, which has probably decreased a bit since then. By my calculations Google earns about 27 cents per search/answer from the ads placed around its answers, so it can easily afford to give its answers away for free.
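The per-query economics can be laid out in a few lines, using only the two figures just quoted (the 2007 cost estimate and the ad revenue per search); this restates the arithmetic rather than reporting any data from Google.

```python
# Per-query economics of "free" search, using the figures cited in the text.
cost_per_query = 0.003    # dollars: roughly 0.3 cents to answer one query (2007)
revenue_per_query = 0.27  # dollars: roughly 27 cents of ad revenue per search

margin_per_query = revenue_per_query - cost_per_query
coverage = revenue_per_query / cost_per_query  # revenue covers cost ~90x over
```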
We’ve always had questions. Thirty years ago the largest answering business was phone directory assistance. Before Google, there was 411. The universal “information” number 411 was dialed from phones about 6 billion times per year. The other search mechanism in the past was the yellow pages—the paper version. According to the Yellow Pages Association, 50 percent of American adults used the print yellow pages at least once a week, performing two lookups per week in the 1990s. Since the adult population in the 1990s was around 200 million, that’s 200 million searches per week, or about 10.4 billion questions asked per year. Nothing to sneeze at. The other classic answer strategy was the library. U.S. libraries in the 1990s counted about 1 billion library visits per year. Out of those 1 billion, about 300 million were “reference transactions,” or questions.
Despite those 16 billion–plus searches for answers per year (in the U.S. alone), no one would have believed 30 years ago that answering people’s questions for cheap or for free could become an $82 billion business. There weren’t many MBAs dreaming of schemes to fill this need. The demand for questions/answers was latent. People didn’t know how valuable instant answers were until they had access to them. One study conducted in 2000 determined that the average American adult sought to answer four questions per day online. If my own life is any indication, I am asking more questions every day. Google told me that in 2007 I asked it 349 questions in one month, or 10 per day (and my peak hour of inquiry was 11 a.m. on Wednesdays). I asked Google how many seconds there are in a year, and it instantly told me: 31.5 million. I asked it how many searches all search engines perform per second. It said 600,000 searches per second, or 600 kilohertz. The internet is answering questions at the buzzing frequency of radio waves.
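As a sanity check, the directory-era volumes above can be tallied in a few lines. All inputs are the 1990s figures cited in the text.

```python
# Back-of-envelope tally of pre-internet "search" volume in the U.S.,
# using the 1990s figures cited in the text.

adults = 200_000_000                       # U.S. adult population, 1990s
weekly_users = adults // 2                 # half of adults used the yellow pages
yellow_pages_per_week = weekly_users * 2   # two lookups per user per week
yellow_pages_per_year = yellow_pages_per_week * 52

directory_assistance = 6_000_000_000   # annual 411 calls
library_reference = 300_000_000        # annual library reference transactions

total_per_year = yellow_pages_per_year + directory_assistance + library_reference
```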
But while answers are provided for free, the value of those answers is huge. Three researchers at the University of Michigan performed a small experiment in 2010 to see if they could ascertain how much ordinary people might pay for search. Their method was to ask students inside a well-stocked university library to answer questions that had been asked on Google, but to find the answers using only the materials in the library. They measured how long it took the students to answer a question in the stacks. On average it took 22 minutes. That’s 15 minutes longer than the 7 minutes it took, on average, to answer the same question using Google. Figuring a national average wage of $22 per hour, those 15 minutes work out to a savings of $5.50 per search.
In 2011, Hal Varian, the chief economist at Google, calculated the average value of answering a question in a different way. He revealed the surprising fact that the average Google user (judged by returning cookies, etc.) makes only one search per day. This is certainly not me. But my near constant googling is offset by, say, my mother, who may search only once every several weeks. Varian did some more math to compensate for the fact that because questions are now cheap, we ask more of them. When this effect is factored in, Varian calculated that search saves the average person 3.75 minutes per day. At the same $22 hourly wage, that comes to about $1.37 per day. Round it off to a dollar per day. Would most people pay a dollar per day, or about $350 per year, for search if they had to? Maybe. (I absolutely would.) They might pay a dollar per search, which is another way of paying the same amount. Economist Michael Cox asked his students how much money they would accept to give up the internet entirely; they reported that they would not give it up even for a million dollars. And this was before smartphones became the norm.
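The two valuations follow directly from the quoted inputs. A quick restatement of the arithmetic (the $22 wage is the average cited in the text):

```python
# Pricing a "free" answer two ways, from the figures cited in the text.

wage_per_hour = 22.0  # national average wage cited in the text, dollars

# Michigan library experiment: 22 minutes in the stacks vs. 7 on Google.
minutes_saved_per_search = 22 - 7   # 15 minutes saved per question
value_per_search = wage_per_hour * minutes_saved_per_search / 60

# Varian's estimate: search saves the average person 3.75 minutes per day.
minutes_saved_per_day = 3.75
value_per_day = wage_per_hour * minutes_saved_per_day / 60
value_per_year = value_per_day * 365
```

The per-day figure comes to about $1.37, or roughly $500 per year at that wage.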
We are just starting to get good at giving great answers. Siri, the audio phone assistant for the iPhone, delivers spoken answers when you ask her a question in natural English. I use Siri routinely. When I want to know the weather, I just ask, “Siri, what’s the weather for tomorrow?” Android folks can audibly ask Google Now for information about their calendars. IBM’s Watson proved that for most kinds of factual reference questions, an AI can find answers fast and accurately. Part of the increasing ease in providing answers lies in the fact that past questions answered correctly increase the likelihood of another question. At the same time, past correct answers increase the ease of creating the next answer, and increase the value of the corpus of answers as a whole. Each question we ask a search engine and each answer we accept as correct refines the intelligence of the process, increasing the engine’s value for future questions. As we cognify more books and movies and the internet of things, answers become ubiquitous. We are headed to a future where we will ask several hundred questions per day. Most of these questions will concern us and our friends. “Where is Jenny? What time is the next bus? Is this kind of snack good?” The “manufacturing costs” of each answer will be nanocents. Search, as in “give me an answer,” will no longer be considered a first-world luxury. It will become an essential universal commodity.
Very soon now we’ll live in a world where we can ask the cloud, in conversational tones, any question at all. And if that question has a known answer, the machine will explain it to us. Who won the Rookie of the Year Award in 1974? Why is the sky blue? Will the universe keep expanding forever? Over time the cloud, or Cloud, the machine, or AI, will learn to articulate what is known and not known. At first it may need to engage us in a dialog to clarify ambiguities (as we humans do when answering questions), but, unlike us, the answer machine will not hesitate to provide deep, obscure, complex factual knowledge on any subject—if it exists.
But the chief consequence of reliable instant answers is not a harmony of satisfaction. Abundant answers simply generate more questions! In my experience, the easier it is to ask a question and the more useful the reply, the more questions I have. While the answer machine can expand answers infinitely, our time to form the next question is very limited. There is an asymmetry in the work needed to generate a good question versus the work needed to absorb an answer. Answers become cheap and questions become valuable—the inverse of the situation now. Pablo Picasso brilliantly anticipated this inversion in 1964 when he told the writer William Fifield, “Computers are useless. They only give you answers.”
So at the end of the day, a world of supersmart ubiquitous answers encourages a quest for the perfect question. What makes a perfect question? Ironically, the best questions are not questions that lead to answers, because answers are on their way to becoming cheap and plentiful. A good question is worth a million good answers.
A good question is like the one Albert Einstein asked himself as a small boy—“What would you see if you were traveling on a beam of light?” That question launched the theory of relativity, E=mc², and the atomic age.
A good question is not concerned with a correct answer.
A good question cannot be answered immediately.
A good question challenges existing answers.
A good question is one you badly want answered once you hear it, but had no inkling you cared before it was asked.
A good question creates new territory of thinking.
A good question reframes its own answers.
A good question is the seed of innovation in science, technology, art, politics, and business.
A good question is a probe, a what-if scenario.
A good question skirts on the edge of what is known and not known, neither silly nor obvious.
A good question cannot be predicted.
A good question will be the sign of an educated mind.
A good question is one that generates many other good questions.
A good question may be the last job a machine will learn to do.
A good question is what humans are for.
• • •
What is it that we are making with our question-and-answer machine?
Our society is moving away from the rigid order of hierarchy toward the fluidity of decentralization. It is moving from nouns to verbs, from tangible products to intangible becomings. From fixed media to messy remixed media. From stores to flows. And the value engine is moving from the certainties of answers to the uncertainties of questions. Facts, order, and answers will always be needed and useful. They are not going away, and in fact, like microbial life and concrete materials, facts will continue to underpin the bulk of our civilization. But the most precious, most dynamic, most valuable, and most productive facets of our lives and new technology will lie in the frontiers, in the edges where uncertainty, chaos, fluidity, and questions dwell. The technologies of generating answers will continue to be essential, so much so that answers will become omnipresent, instant, reliable, and just about free. But the technologies that help generate questions will be valued more. Question makers will be seen, properly, as the engines that generate the new fields, new industries, new brands, new possibilities, and new continents that our restless species can explore. Questioning is simply more powerful than answering.