The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future (2016)

2

COGNIFYING

It is hard to imagine anything that would “change everything” as much as cheap, powerful, ubiquitous artificial intelligence. To begin with, there’s nothing as consequential as a dumb thing made smarter. Even a very tiny amount of useful intelligence embedded into an existing process boosts its effectiveness to a whole other level. The advantages gained from cognifying inert things would be hundreds of times more disruptive to our lives than the transformations gained by industrialization.

Ideally, this additional intelligence should be not just cheap, but free. A free AI, like the free commons of the web, would feed commerce and science like no other force we can imagine and would pay for itself in no time. Until recently, conventional wisdom held that supercomputers would be the first to host this artificial mind, and then perhaps we’d get mini minds at home, and then soon enough we’d add consumer models to the heads of our personal robots. Each AI would be a bounded entity. We would know where our thoughts ended and theirs began.

However, the first genuine AI will not be birthed in a stand-alone supercomputer, but in the superorganism of a billion computer chips known as the net. It will be planetary in dimensions, but thin, embedded, and loosely connected. It will be hard to tell where its thoughts begin and ours end. Any device that touches this networked AI will share—and contribute to—its intelligence. A lonely off-the-grid AI cannot learn as fast, or as smartly, as one that is plugged into 7 billion human minds, plus quintillions of online transistors, plus hundreds of exabytes of real-life data, plus the self-correcting feedback loops of the entire civilization. So the network itself will cognify into something that uncannily keeps getting better. Stand-alone synthetic minds are likely to be viewed as handicapped, a penalty one might pay in order to have AI mobility in distant places.

When this emerging AI arrives, its very ubiquity will hide it. We’ll use its growing smartness for all kinds of humdrum chores, but it will be faceless, unseen. We will be able to reach this distributed intelligence in a million ways, through any digital screen anywhere on earth, so it will be hard to say where it is. And because this synthetic intelligence is a combination of human intelligence (all past human learning, all current humans online), it will be difficult to pinpoint exactly what it is as well. Is it our memory, or a consensual agreement? Are we searching it, or is it searching us?

The arrival of artificial thinking accelerates all the other disruptions I describe in this book; it is the ur-force in our future. We can say with certainty that cognification is inevitable, because it is already here.

✵ ✵ ✵

Two years ago I made the trek to the sylvan campus of the IBM research labs in Yorktown Heights, New York, to catch an early glimpse of this rapidly appearing, long overdue arrival of artificial intelligence. This was the home of Watson, the electronic genius that conquered Jeopardy! in 2011. The original Watson is still here—it’s about the size of a bedroom, with 10 upright refrigerator-shaped machines forming the four walls. The tiny interior cavity gives technicians access to the jumble of wires and cables on the machines’ backs. It is surprisingly warm inside, as if the cluster were alive.

Today’s Watson is very different. It no longer exists solely within a wall of cabinets but is spread across a cloud of open-standard servers that run several hundred “instances” of the AI at once. Like all things cloudy, Watson is served to simultaneous customers anywhere in the world, who can access it using their phones, their desktops, or their own data servers. This kind of AI can be scaled up or down on demand. Because AI improves as people use it, Watson is always getting smarter; anything it learns in one instance can be quickly transferred to the others. And instead of one single program, it’s an aggregation of diverse software engines—its logic-deduction engine and its language-parsing engine might operate on different code, on different chips, in different locations—all cleverly integrated into a unified stream of intelligence.

Consumers can tap into that always-on intelligence directly, but also through third-party apps that harness the power of this AI cloud. Like many parents of a bright mind, IBM would like Watson to pursue a medical career, so it should come as no surprise that the primary application under development is a medical diagnosis tool. Most of the previous attempts to make a diagnostic AI have been pathetic failures, but Watson really works. When, in plain English, I give it the symptoms of a disease I once contracted in India, it gives me a list of hunches, ranked from most to least probable. The most likely cause, it declares, is giardia—the correct answer. This expertise isn’t yet available to patients directly; IBM provides Watson’s medical intelligence to partners like CVS, the retail pharmacy chain, helping it develop personalized health advice for customers with chronic diseases based on the data CVS collects. “I believe something like Watson will soon be the world’s best diagnostician—whether machine or human,” says Alan Greene, chief medical officer of Scanadu, a startup that is building a diagnostic device inspired by the Star Trek medical tricorder and powered by a medical AI. “At the rate AI technology is improving, a kid born today will rarely need to see a doctor to get a diagnosis by the time they are an adult.”

Medicine is only the beginning. All the major cloud companies, plus dozens of startups, are in a mad rush to launch a Watson-like cognitive service. According to the analysis firm Quid, AI has attracted more than $18 billion in investments since 2009. In 2014 alone more than $2 billion was invested in 322 companies with AI-like technology. Facebook, Google, and their Chinese equivalents, Tencent and Baidu, have recruited researchers to join their in-house AI research teams. Yahoo!, Intel, Dropbox, LinkedIn, Pinterest, and Twitter have all purchased AI companies since 2014. Private investment in the AI sector has been expanding 70 percent a year on average for the past four years, a rate that is expected to continue.

One of the early stage AI companies Google purchased is DeepMind, based in London. In 2015 researchers at DeepMind published a paper in Nature describing how they taught an AI to learn to play 1980s-era arcade video games, like Video Pinball. They did not teach it how to play the games, but how to learn to play the games—a profound difference. They simply turned their cloud-based AI loose on an Atari game such as Breakout, a variant of Pong, and it learned on its own how to keep increasing its score. A video of the AI’s progress is stunning. At first, the AI plays nearly randomly, but it gradually improves. After a half hour it misses only once every four times. By its 300th game, an hour into it, it never misses. It keeps learning so fast that in the second hour it figures out a loophole in the Breakout game that none of the millions of previous human players had discovered. This hack allowed it to win by tunneling around a wall in a way that even the game’s creators had never imagined. At the end of several hours of first playing a game, with no coaching from the DeepMind creators, the algorithms, called deep reinforcement machine learning, could beat humans in half of the 49 Atari video games they mastered. AIs like this one are getting smarter every month, unlike human players.
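
The mechanism can be sketched in a few lines. Below is a toy version of the reinforcement learning update at the heart of the approach, assuming only Python’s standard library; DeepMind’s actual system paired this rule with a deep neural network reading raw screen pixels, which is what the “deep” refers to:

```python
# A toy sketch, not DeepMind's actual system: this tabular version only
# shows the Q-learning update at the core of learning from trial, error,
# and score. States, actions, and rewards are hypothetical placeholders.
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration
Q = defaultdict(float)                   # Q[(state, action)] -> expected future score

def choose_action(state, actions):
    # Mostly exploit the best-known move, but explore occasionally.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def learn(state, action, reward, next_state, actions):
    # Nudge the estimate toward the observed reward plus the discounted
    # value of the best follow-up move. Repeated over millions of frames,
    # this update alone is what drives the score upward.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```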

Amid all this activity, a picture of our AI future is coming into view, and it is not the HAL 9000—a discrete machine animated by a charismatic (yet potentially homicidal) humanlike consciousness—or a Singularitan rapture of superintelligence. The AI on the horizon looks more like Amazon Web Services—cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need. You’ll simply plug into the grid and get AI as if it were electricity. It will enliven inert objects, much as electricity did more than a century past. Three generations ago, many a tinkerer struck it rich by taking a tool and making an electric version. Take a manual pump; electrify it. Find a hand-wringer washer; electrify it. The entrepreneurs didn’t need to generate the electricity; they bought it from the grid and used it to automate the previously manual. Now everything that we formerly electrified we will cognify. There is almost nothing we can think of that cannot be made new, different, or more valuable by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. Find something that can be made better by adding online smartness to it.

An excellent example of the magic of adding AI to X can be seen in photography. In the 1970s I was a travel photographer hauling around a heavy bag of gear. In addition to a backpack with 500 rolls of film, I carried two brass Nikon bodies, a flash, and five extremely heavy glass lenses that weighed over a pound each. Photography needed “big glass” to capture photons in low light; it needed light-sealed cameras with intricate marvels of mechanical engineering to focus, measure, and bend light in thousandths of a second. What has happened since then? Today my point-and-shoot Nikon weighs almost nothing, shoots in almost no light, and can zoom from my nose to infinity. Of course, the camera in my phone is even tinier, always present, and capable of pictures as good as my old heavy clunkers. The new cameras are smaller, quicker, quieter, and cheaper not just because of advances in miniaturization, but because much of the traditional camera has been replaced by smartness. The X of photography has been cognified. Contemporary phone cameras eliminated the layers of heavy glass by adding algorithms, computation, and intelligence to do the work that physical lenses once did. They use the intangible smartness to substitute for a physical shutter. And the darkroom and film itself have been replaced by more computation and optical intelligence. There are even designs for a completely flat camera with no lens at all. Instead of any glass, a perfectly flat light sensor uses insane amounts of computational cognition to compute a picture from the different light rays falling on the unfocused sensor. Cognifying photography has revolutionized it because intelligence enables cameras to slip into anything (in a sunglass frame, in a color on clothes, in a pen) and do more, including calculate 3-D, HD, and many other options that earlier would have taken $100,000 and a van full of equipment to do. Now cognified photography is something almost any device can do as a side job.
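
As a rough illustration of smartness substituting for glass, the sketch below sharpens an image in software, recovering edge contrast that a cheap lens loses. Real phone pipelines are far more elaborate, stacking multiple frames and learned denoising; this unsharp mask is only the simplest instance:

```python
# A toy illustration of optics replaced by computation: an unsharp mask
# pushes an image away from a blurred copy of itself, restoring edge
# contrast in software rather than with heavy glass.
import numpy as np

def unsharp_mask(image, amount=1.0):
    # Blur with a 3x3 box filter, then amplify the difference from the blur.
    kernel = np.ones((3, 3)) / 9.0
    padded = np.pad(image, 1, mode="edge")
    blurred = sum(
        padded[i:i + image.shape[0], j:j + image.shape[1]] * kernel[i, j]
        for i in range(3) for j in range(3)
    )
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)

# Demo on a tiny synthetic image with values in [0, 1].
print(unsharp_mask(np.eye(5)).round(2))
```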

A similar transformation is about to happen for every other X. Take chemistry, another physical endeavor requiring laboratories of glassware and bottles brimming with solutions. Moving atoms—what could be more physical? By adding AI to chemistry, scientists can perform virtual chemical experiments. They can smartly search through astronomical numbers of chemical combinations to reduce them to a few promising compounds worth examining in a lab. The X might be something low-tech, like interior design. Add utility AI to a system that matches levels of interest of clients as they walk through simulations of interiors. The design details are altered and tweaked by the pattern-finding AI based on customer response, then inserted back into new interiors for further testing. Through constant iterations, optimal personal designs emerge from the AI. You could also apply AI to law, using it to uncover evidence from mountains of paper to discern inconsistencies between cases, and then have it suggest lines of legal arguments.
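
A minimal sketch of that winnowing step appears below; the fragments and the scoring function are hypothetical stand-ins for a trained chemistry model, but the shape of the computation (score a vast combinatorial space, keep the top few) is the point:

```python
# Rank a combinatorial space of candidate compounds with a cheap predicted
# score, then keep only the few worth real lab time. The fragments and the
# scoring function are placeholders for a trained model.
from itertools import product

FRAGMENTS = ["OH", "NH2", "CH3", "COOH", "C6H5"]   # illustrative pieces

def predicted_activity(compound):
    # Placeholder score; a real model would use learned chemical features,
    # not string length.
    return -abs(len("".join(compound)) - 10)

candidates = product(FRAGMENTS, repeat=3)          # 125 combos here; billions in practice
shortlist = sorted(candidates, key=predicted_activity, reverse=True)[:5]
print(shortlist)                                   # the handful to examine in a lab
```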

The list of Xs is endless. The more unlikely the field, the more powerful adding AI will be. Cognified investments? Already happening with companies such as Betterment or Wealthfront. They add artificial intelligence to managed stock indexes in order to optimize tax strategies or balance holdings between portfolios. These are the kinds of things a professional money manager might do once a year, but the AI will do every day, or every hour.
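
That daily discipline is, at bottom, arithmetic a machine can repeat without fatigue. Here is a minimal sketch of one rebalancing pass, with hypothetical tickers, prices, and target weights; real robo-advisors layer tax logic and trading constraints on top:

```python
# A minimal rebalancing pass: compare current weights to targets and
# compute the trades that restore them. All inputs are hypothetical.
def rebalance(holdings, prices, targets):
    total = sum(holdings[t] * prices[t] for t in holdings)
    trades = {}
    for t in holdings:
        target_value = targets[t] * total
        current_value = holdings[t] * prices[t]
        # Shares to buy (positive) or sell (negative) to hit the target.
        trades[t] = (target_value - current_value) / prices[t]
    return trades

print(rebalance({"STOCKS": 80, "BONDS": 40},
                {"STOCKS": 100.0, "BONDS": 50.0},
                {"STOCKS": 0.6, "BONDS": 0.4}))
# -> sell 20 shares of STOCKS, buy 40 shares of BONDS
```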

Here are other unlikely realms waiting to be cognitively enhanced:

Cognified music—Music can be created in real time from algorithms, employed as the soundtrack for a video game or a virtual world. Depending on your actions, the music changes. Hundreds of hours of new personal music can be written by the AI for every player.

Cognified laundry—Clothes that tell the washing machines how they want to be washed. The wash cycle would adjust itself to the contents of each load as directed by the smart clothes.

Cognified marketing—The amount of attention an individual reader or watcher spends on an advertisement can be multiplied by their social influence (how many people follow them and how much weight their attention carries) in order to optimize attention and influence per dollar. Done at the scale of millions, this is a job for AI; a minimal sketch of the arithmetic follows this list.

Cognified real estate—Matching buyers and sellers via an AI that can prompt “renters who liked this apartment also liked these …” It could then generate a financing package that worked for your particular circumstances.

Cognified nursing—Patients outfitted with sensors that track their biomarkers 24 hours a day can generate highly personalized treatments that are adjusted and refined daily.

Cognified construction—Imagine project management software that is smart enough to take into account weather forecasts, port traffic delays, currency exchange rates, accidents, in addition to design changes.

Cognified ethics—Robo cars need to be taught priorities and behavior guidelines. The safety of pedestrians may precede the safety of drivers. Anything with some real autonomy that depends on code will require smart ethical code as well.

Cognified toys—Toys more like pets. Furbies were primitive compared with the intense attraction that a smart petlike toy will evoke in children. Toys that can converse are lovable. Dolls may be the first really popular robots.

Cognified sports—Smart sensors and AI can create new ways to score and referee sporting games by tracking and interpreting subtle movements and collisions. Also, highly refined statistics can be extracted from every second of each athlete’s activity to create elite fantasy sports leagues.

Cognified knitting—Who knows? But it will come!
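
To make the marketing entry above concrete, here is a minimal sketch of the attention-times-influence arithmetic; every viewer, follower count, and cost below is hypothetical, and the real job is running this across millions of people at once:

```python
# Rank ad placements by expected influence per dollar: weight each viewer's
# attention by their social reach, normalized by the impression's cost.
viewers = [
    {"attention_sec": 12, "followers": 500,   "cost": 0.02},
    {"attention_sec": 3,  "followers": 90000, "cost": 0.15},
    {"attention_sec": 30, "followers": 40,    "cost": 0.01},
]

def influence_per_dollar(v):
    # attention x reach, per dollar spent on the impression
    return (v["attention_sec"] * v["followers"]) / v["cost"]

for v in sorted(viewers, key=influence_per_dollar, reverse=True):
    print(v, round(influence_per_dollar(v)))
```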

Cognifying our world is a very big deal, and it’s happening now.

✵ ✵ ✵

Around 2002 I attended a private party for Google—before its IPO, when it was a small company focused only on search. I struck up a conversation with Larry Page, Google’s brilliant cofounder. “Larry, I still don’t get it. There are so many search companies. Web search, for free? Where does that get you?” My unimaginative blindness is solid evidence that predicting is hard, especially about the future, but in my defense this was before Google had ramped up its ad auction scheme to generate real income, long before YouTube or any other major acquisitions. I was not the only avid user of its search site who thought it would not last long. But Page’s reply has always stuck with me: “Oh, we’re really making an AI.”

I’ve thought a lot about that conversation over the past few years as Google has bought 13 other AI and robotics companies in addition to DeepMind. At first glance, you might think that Google is beefing up its AI portfolio to improve its search capabilities, since search constitutes 80 percent of its revenue. But I think that’s backward. Rather than use AI to make its search better, Google is using search to make its AI better. Every time you type a query, click on a search-generated link, or create a link on the web, you are training the Google AI. When you type “Easter Bunny” into the image search bar and then click on the most Easter Bunny-looking image, you are teaching the AI what an Easter Bunny looks like. Each of the 3 billion queries that Google conducts each day tutors the deep-learning AI over and over again. With another 10 years of steady improvements to its AI algorithms, plus a thousandfold more data and a hundred times more computing resources, Google will have an unrivaled AI. In a quarterly earnings conference call in the fall of 2015, Google CEO Sundar Pichai stated that AI was going to be “a core transformative way by which we are rethinking everything we are doing. … We are applying it across all our products, be it search, be it YouTube and Play, etc.” My prediction: By 2026, Google’s main product will not be search but AI.
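
In miniature, that training-by-clicking loop looks like the sketch below; the queries and image IDs are hypothetical, and Google’s real ranking models are vastly more sophisticated, but the principle is that every click acts as one more labeled example:

```python
# Each click on an image result is an implicit label: one more vote that
# this image depicts the query. Rankings improve as votes accumulate.
from collections import defaultdict

click_counts = defaultdict(lambda: defaultdict(int))

def record_click(query, image_id):
    click_counts[query][image_id] += 1

def best_images(query, k=3):
    # Return the images most often clicked for this query so far.
    ranked = sorted(click_counts[query].items(), key=lambda kv: -kv[1])
    return [image for image, _ in ranked[:k]]

record_click("easter bunny", "img_042")
record_click("easter bunny", "img_042")
record_click("easter bunny", "img_007")
print(best_images("easter bunny"))   # ['img_042', 'img_007']
```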

This is the point where it is entirely appropriate to be skeptical. For almost 60 years, AI researchers have predicted that AI is right around the corner, yet until a few years ago it seemed as stuck in the future as ever. There was even a term coined to describe this era of meager results and even more meager research funding: the AI winter. Has anything really changed?

Yes. Three recent breakthroughs have unleashed the long-awaited arrival of artificial intelligence:

1. Cheap Parallel Computation

Thinking is an inherently parallel process. Billions of neurons in our brain fire simultaneously to create synchronous waves of computation. To build a neural network—the primary architecture of AI software—also requires many different processes to take place simultaneously. Each node of a neural network loosely imitates a neuron in the brain—mutually interacting with its neighbors to make sense of the signals it receives. To recognize a spoken word, a program must be able to hear all the phonemes in relation to one another; to identify an image, it needs to see every pixel in the context of the pixels around it—both deeply parallel tasks. But until recently, the typical computer processor could ping only one thing at a time.

That began to change more than a decade ago, when a new kind of chip, called a graphics processing unit, or GPU, was devised for the intensely visual—and parallel—demands of video games, in which millions of pixels in an image had to be recalculated many times a second. That required a specialized parallel computing chip, which was added as a supplement to the PC motherboard. The parallel graphics chips worked fantastically, and gaming soared in popularity. By 2005, GPUs were being produced in such quantities that they became cheap enough to be, in effect, a commodity. In 2009, Andrew Ng and a team at Stanford realized that GPU chips could run neural networks in parallel.

That discovery unlocked new possibilities for neural networks, which can include hundreds of millions of connections between their nodes. Traditional processors required several weeks to calculate all the cascading possibilities in a neural net with 100 million parameters. Ng found that a cluster of GPUs could accomplish the same thing in a day. Today neural nets running on GPUs are routinely used by cloud-enabled companies such as Facebook to identify your friends in photos or for Netflix to make reliable recommendations for its more than 50 million subscribers.
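
You can see why GPUs fit neural nets in miniature: a layer’s work is one large matrix multiply, and every entry of the result can be computed at the same moment. The sketch below uses NumPy on a CPU with illustrative sizes; deep-learning frameworks dispatch this identical math to thousands of GPU cores at once:

```python
# One fully connected layer of a neural net as a single matrix multiply.
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.standard_normal((256, 1024))    # a batch of 256 examples
weights = rng.standard_normal((1024, 512))   # 1024-input, 512-output layer
biases = np.zeros(512)

# Every one of the 256 x 512 output values is independent of the others,
# which is exactly the kind of work a parallel chip devours.
activations = np.maximum(0, inputs @ weights + biases)   # ReLU layer
print(activations.shape)                                  # (256, 512)
```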

2. Big Data

Every intelligence has to be taught. A human brain, which is genetically primed to categorize things, still needs to see a dozen examples as a child before it can distinguish between cats and dogs. That’s even more true for artificial minds. Even the best-programmed computer has to play at least a thousand games of chess before it gets good. Part of the AI breakthrough lies in the incredible avalanche of collected data about our world, which provides the schooling that AIs need. Massive databases, self-tracking, web cookies, online footprints, terabytes of storage, decades of search results, Wikipedia, and the entire digital universe became the teachers making AI smart. Andrew Ng explains it this way: “AI is akin to building a rocket ship. You need a huge engine and a lot of fuel. The rocket engine is the learning algorithms but the fuel is the huge amounts of data we can feed to these algorithms.”

3. Better Algorithms

Digital neural nets were invented in the 1950s, but it took decades for computer scientists to learn how to tame the astronomically huge combinatorial relationships between a million—or a hundred million—neurons. The key was to organize neural nets into stacked layers. Take the relatively simple task of recognizing that a face is a face. When a group of bits in a neural net is found to trigger a pattern—the image of an eye, for instance—that result (“It’s an eye!”) is moved up to another level in the neural net for further parsing. The next level might group two eyes together and pass that meaningful chunk on to another level of hierarchical structure that associates it with the pattern of a nose. It can take many millions of these nodes (each one producing a calculation feeding others around it), stacked up to 15 levels high, to recognize a human face. In 2006, Geoff Hinton, then at the University of Toronto, made a key tweak to this method, which he dubbed “deep learning.” He was able to mathematically optimize results from each layer so that the learning accumulated faster as it proceeded up the stack of layers. Deep-learning algorithms accelerated enormously a few years later when they were ported to GPUs. The code of deep learning alone is insufficient to generate complex logical thinking, but it is an essential component of all current AIs, including IBM’s Watson, Google’s search engine and DeepMind, and Facebook’s algorithms.
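
The stacking itself is simple to sketch. In the toy network below (random weights, illustrative layer sizes), each layer’s output feeds the next; deep learning’s contribution was making the weights of such a stack trainable so that accuracy accumulates up the layers:

```python
# A stack of layers: simple patterns (edges) compose into complex ones
# (eyes, faces) as the signal moves up. Weights are random placeholders;
# real deep learning tunes them from data.
import numpy as np

rng = np.random.default_rng(1)
layer_sizes = [784, 256, 64, 2]   # e.g. pixels -> features -> parts -> face / not-face
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Pass the signal up the stack, one layer feeding the next.
    for w in weights:
        x = np.maximum(0, x @ w)   # ReLU nonlinearity between layers
    return x

print(forward(rng.standard_normal(784)))   # two scores: face vs. not-face
```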

This perfect storm of cheap parallel computation, bigger data, and deeper algorithms generated the 60-years-in-the-making overnight success of AI. And this convergence suggests that as long as these technological trends continue—and there’s no reason to think they won’t—AI will keep improving.

As it does, this cloud-based AI will become an increasingly ingrained part of our everyday life. But it will come at a price. Cloud computing empowers the law of increasing returns, sometimes called the network effect, which holds that the value of a network increases much faster as it grows bigger. The bigger the network, the more attractive it is to new users, which makes it even bigger and thus more attractive, and so on. A cloud that serves AI will obey the same law. The more people who use an AI, the smarter it gets. The smarter it gets, the more people who use it. The more people who use it, the smarter it gets. And so on. Once a company enters this virtuous cycle, it tends to grow so big so fast that it overwhelms any upstart competitors. As a result, our AI future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.
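
One common formalization of this law is Metcalfe’s law, which values a network by its possible pairwise connections (my gloss, not a formula from the text):

```latex
% With n users there are n(n-1)/2 possible pairwise links, so a network's
% value grows roughly as the square of its size:
V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^{2}}{2}
```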

In 1997, Watson’s precursor, IBM’s Deep Blue, beat the reigning chess grand master Garry Kasparov in a famous man-versus-machine match. After machines repeated their victories in a few more matches, humans largely lost interest in such contests. You might think that was the end of the story (if not the end of human history), but Kasparov realized that he could have performed better against Deep Blue if he’d had the same instant access to a massive database of all previous chess moves that Deep Blue had. If this database tool was fair for an AI, why not for a human? Let the human mastermind be augmented by a database just as Deep Blue’s was. To pursue this idea, Kasparov pioneered the concept of man-plus-machine matches, in which AI augments human chess players rather than competes against them.

Now called freestyle chess matches, these are like mixed martial arts fights, where players use whatever combat techniques they want. You can play as your unassisted human self, or you can act as the hand for your supersmart chess computer, merely moving its board pieces, or you can play as a “centaur,” which is the human/AI cyborg that Kasparov advocated. A centaur player will listen to the moves suggested by the AI but will occasionally override them—much the way we use the GPS navigation intelligence in our cars. In the championship Freestyle Battle 2014, open to all modes of players, pure chess AI engines won 42 games, but centaurs won 53 games. Today the best chess player alive is a centaur. It goes by the name of Intagrand, a team of several humans and several different chess programs.
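
In code, a centaur is little more than a loop with a human veto. Here is a minimal sketch, assuming the python-chess library and a locally installed Stockfish engine binary; both are assumptions, and any UCI engine would serve:

```python
# A centaur loop: the engine proposes each move, and the human accepts it
# or overrides it with a move of their own (entered in standard notation).
import chess
import chess.engine

board = chess.Board()
engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # path is an assumption
try:
    while not board.is_game_over():
        suggestion = engine.play(board, chess.engine.Limit(time=0.1)).move
        typed = input(f"Engine suggests {suggestion}. Type a move to override, or press Enter: ")
        board.push(board.parse_san(typed) if typed else suggestion)
finally:
    engine.quit()
```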

But here’s the even more surprising part: The advent of AI didn’t diminish the performance of purely human chess players. Quite the opposite. Cheap, supersmart chess programs inspired more people than ever to play chess, at more tournaments than ever, and the players got better than ever. There are more than twice as many grand masters now as there were when Deep Blue first beat Kasparov. The top-ranked human chess player today, Magnus Carlsen, trained with AIs and has been deemed the most computerlike of all human chess players. He also has the highest human grand master rating of all time.

If AI can help humans become better chess players, it stands to reason that it can help us become better pilots, better doctors, better judges, better teachers.

Yet most of the commercial work completed by AI will be done by nonhuman-like programs. The bulk of AI will be special purpose software brains that can, for example, translate any language into any other language, but do little else. Drive a car, but not converse. Or recall every pixel of every video on YouTube, but not anticipate your work routines. In the next 10 years, 99 percent of the artificial intelligence that you will interact with, directly or indirectly, will be nerdly narrow, supersmart specialists.

In fact, robust intelligence may be a liability—especially if by “intelligence” we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness. We want our self-driving car to be inhumanly focused on the road, not obsessing over an argument it had with the garage. The synthetic Dr. Watson at our hospital should be maniacal in its work, never wondering whether it should have majored in finance instead. What we want instead of conscious intelligence is artificial smartness. As AIs develop, we might have to engineer ways to prevent consciousness in them. Our most premium AI services will likely be advertised as consciousness-free.

Nonhuman intelligence is not a bug; it’s a feature. The most important thing to know about thinking machines is that they will think different.

Because of a quirk in our evolutionary history, we are cruising as the only self-conscious species on our planet, leaving us with the incorrect idea that human intelligence is singular. It is not. Our intelligence is a society of intelligences, and this suite occupies only a small corner of the many types of intelligences and consciousnesses that are possible in the universe. We like to call our human intelligence “general purpose,” because compared with other kinds of minds we have met, it can solve more types of problems, but as we build more and more synthetic minds we’ll come to realize that human thinking is not general at all. It is only one species of thinking.

The kind of thinking done by the emerging AIs today is not like human thinking. While they can accomplish tasks—such as playing chess, driving a car, describing the contents of a photograph—that we once believed only humans could do, they don’t do it in a humanlike fashion. I recently uploaded 130,000 of my personal snapshots—my entire archive—to Google Photos, and the new Google AI remembers all the objects in all the images from my life. When I ask it to show me any image with a bicycle in it, or a bridge, or my mother, it will instantly display them. Facebook has the ability to ramp up an AI that can view a photo portrait of any person on earth and correctly identify them out of some 3 billion people online. Human brains cannot scale to this degree, which makes this artificial ability very unhuman. We are notoriously bad at statistical thinking, so we are making intelligences with very good statistical skills, in order that they don’t think like us. One of the advantages of having AIs drive our cars is that they won’t drive like humans, with our easily distracted minds.

In a superconnected world, thinking different is the source of innovation and wealth. Just being smart is not enough. Commercial incentives will make industrial-strength AI ubiquitous, embedding cheap smartness into all that we make. But a bigger payoff will come when we start inventing new kinds of intelligences and entirely new ways of thinking—in the way a calculator is a genius in arithmetic. Calculation is only one type of smartness. We don’t know what the full taxonomy of intelligence is right now. Some traits of human thinking will be common (as common as bilateral symmetry, segmentation, and tubular guts are in biology), but the possibility space of viable minds will likely contain traits far outside what we have evolved. It is not necessary that this type of thinking be faster than humans’, greater, or deeper. In some cases it will be simpler.

The variety of potential minds in the universe is vast. Recently we’ve begun to explore the species of animal minds on earth, and as we do we have discovered, with increasing respect, that we have met many other kinds of intelligences already. Whales and dolphins keep surprising us with their intricate and weirdly different intelligence. Precisely how a mind can be different or superior to our minds is very difficult to imagine. One way to begin imagining what greater yet different intelligences might be like is to create a taxonomy of the variety of minds. This matrix of minds would include animal minds, and machine minds, and possible minds, particularly transhuman minds, like the ones that science fiction writers have come up with.

This fanciful exercise is worth doing because, while it is inevitable that we will manufacture intelligences in all that we make, it is not inevitable or obvious what their character will be. Their character will dictate their economic value and their roles in our culture. Outlining the possible ways that a machine might be smarter than us (even in theory) will assist us in both directing this advance and managing it. A few really smart people, like physicist Stephen Hawking and genius inventor Elon Musk, worry that making supersmart AIs could be our last invention before they replace us (though I don’t believe this), so exploring possible types is prudent.

Imagine we land on an alien planet. How would we measure the level of the intelligences we encounter there? This is an extremely difficult question because we have no real definition of our own intelligence, in part because until now we didn’t need one.

In the real world—even in the space of powerful minds—trade-offs rule. One mind cannot do all mindful things perfectly well. A particular species of mind will be better in certain dimensions, but at a cost of lesser abilities in other dimensions. The smartness that guides a self-driving truck will be a different species than the one that evaluates mortgages. The AI that will diagnose your illness will be significantly different from the artificial smartness that oversees your house. The superbrain that predicts the weather accurately will be in a completely different kingdom of mind from the intelligence woven into your clothes. The taxonomy of minds must reflect the different ways in which minds are engineered with these trade-offs. In the short list below I include only those kinds of minds that we might consider superior to us; I’ve omitted the thousands of species of mild machine smartness—like the brains in a calculator—that will cognify the bulk of the internet of things.

Some possible new minds:

· A mind like a human mind, just faster in answering (the easiest AI mind to imagine).

· A very slow mind, composed primarily of vast storage and memory.

· A global supermind composed of millions of individual dumb minds in concert.

· A hive mind made of many very smart minds, but unaware it/they are a hive.

· A borg supermind composed of many smart minds that are very aware they form a unity.

· A mind trained and dedicated to enhancing your personal mind, but useless to anyone else.

· A mind capable of imagining a greater mind, but incapable of making it.

· A mind capable of creating a greater mind, but not self-aware enough to imagine it.

· A mind capable of successfully making a greater mind, once.

· A mind capable of creating a greater mind that can create a yet greater mind, etc.

· A mind with operational access to its source code, so it can routinely mess with its own processes.

· A superlogic mind without emotion.

· A general problem-solving mind, but without any self-awareness.

· A self-aware mind, but without general problem solving.

· A mind that takes a long time to develop and requires a protector mind until it matures.

· An ultraslow mind spread over large physical distance that appears “invisible” to fast minds.

· A mind capable of cloning itself exactly many times quickly.

· A mind capable of cloning itself and remaining in unity with its clones.

· A mind capable of immortality by migrating from platform to platform.

· A rapid, dynamic mind capable of changing the process and character of its cognition.

· A nanomind that is the smallest possible (size and energy profile) self-aware mind.

· A mind specializing in scenario and prediction making.

· A mind that never erases or forgets anything, including incorrect or false information.

· A half-machine, half-animal symbiont mind.

· A half-machine, half-human cyborg mind.

· A mind using quantum computing whose logic is not understandable to us.

✵ ✵ ✵

If any of these imaginary minds are possible, it will be in the future beyond the next two decades. The point of this speculative list is to emphasize that all cognition is specialized. The types of artificial minds we are making now and will make in the coming century will be designed to perform specialized tasks, and usually tasks that are beyond what we can do. Our most important mechanical inventions are not machines that do what humans do better, but machines that can do things we can’t do at all. Our most important thinking machines will not be machines that can think what we think faster, better, but those that think what we can’t think.

To really solve the current grand mysteries of quantum gravity, dark energy, and dark matter, we’ll probably need other intelligences besides our own. And the even harder questions that come after those may require more distant and complex intelligences still. Indeed, we may need to invent intermediate intelligences that can help us design yet more rarefied intelligences that we could not design alone. We need ways to think different.

Today, many scientific discoveries require hundreds of human minds to solve, but in the near future there may be classes of problems so deep that they require hundreds of different species of minds to solve. This will take us to a cultural edge because it won’t be easy to accept the answers from an alien intelligence. We already see that reluctance in our difficulty in approving mathematical proofs done by computer. Some mathematical proofs have become so complex only computers are able to rigorously check every step, but these proofs are not accepted as “proof” by all mathematicians. The proofs are not understandable by humans alone so it is necessary to trust a cascade of algorithms, and this demands new skills in knowing when to trust these creations. Dealing with alien intelligences will require similar skills, and a further broadening of ourselves. An embedded AI will change how we do science. Really intelligent instruments will speed and alter our measurements; really huge sets of constant real-time data will speed and alter our model making; really smart documents will speed and alter our acceptance of when we “know” something. The scientific method is a way of knowing, but it has been based on how humans know. Once we add a new kind of intelligence into this method, science will have to know, and progress, according to the criteria of new minds. At that point everything changes.

AI could just as well stand for “alien intelligence.” We have no certainty we’ll contact extraterrestrial beings from one of the billion earthlike planets in the sky in the next 200 years, but we have almost 100 percent certainty that we’ll manufacture an alien intelligence by then. When we face these synthetic aliens, we’ll encounter the same benefits and challenges that we expect from contact with ET. They will force us to reevaluate our roles, our beliefs, our goals, our identity. What are humans for? I believe our first answer will be: Humans are for inventing new kinds of intelligences that biology could not evolve. Our job is to make machines that think different—to create alien intelligences. We should really call AIs “AAs,” for “artificial aliens.”

An AI will think about science like an alien, vastly different from any human scientist, thereby provoking us humans to think about science differently. Or to think about manufacturing materials differently. Or clothes. Or financial derivatives. Or any branch of science or art. The alienness of artificial intelligence will become more valuable to us than its speed or power.

Artificial intelligence will help us better understand what we mean by intelligence in the first place. In the past, we would have said only a superintelligent AI could drive a car or beat a human at Jeopardy! or recognize a billion faces. But once our computers did each of those things in the last few years, we considered that achievement obviously mechanical and hardly worth the label of true intelligence. We label it “machine learning.” Every achievement in AI redefines that success as “not AI.”

But we haven’t just been redefining what we mean by AI—we’ve been redefining what it means to be human. Over the past 60 years, as mechanical processes have replicated behaviors and talents we thought were unique to humans, we’ve had to change our minds about what sets us apart. As we invent more species of AI, we will be forced to surrender more of what is supposedly unique about humans. Each step of surrender—we are not the only mind that can play chess, fly a plane, make music, or invent a mathematical law—will be painful and sad. We’ll spend the next three decades—indeed, perhaps the next century—in a permanent identity crisis, continually asking ourselves what humans are good for. If we aren’t unique toolmakers, or artists, or moral ethicists, then what, if anything, makes us special? In the grandest irony of all, the greatest benefit of an everyday, utilitarian AI will not be increased productivity or an economics of abundance or a new way of doing science—although all those will happen. The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.

✵ ✵ ✵

The alien minds that we’ll pay the most attention to in the next few years are the ones we give bodies to. We call them robots. They too will come in all shapes, sizes, and configurations—manifesting in diverse species, so to speak. Some will roam like animals, but many will be immobile like plants or diffuse like a coral reef. Robots are already here, quietly. Very soon louder, smarter ones are inevitable. The disruption they cause will touch our core.

Imagine that seven out of ten working Americans got fired tomorrow. What would they all do?

It’s hard to believe you’d have an economy at all if you gave pink slips to more than half the labor force. But that—in slow motion—is what the industrial revolution did to the workforce of the early 19th century. Two hundred years ago, 70 percent of American workers lived on the farm. Today automation has eliminated all but 1 percent of their jobs, replacing them (and their work animals) with machines. But the displaced workers did not sit idle. Instead, automation created hundreds of millions of jobs in entirely new fields. Those who once farmed were now manning the legions of factories that churned out farm equipment, cars, and other industrial products. Since then, wave upon wave of new occupations have arrived—appliance repair person, offset printer, food chemist, photographer, web designer—each building on previous automation. Today, the vast majority of us are doing jobs that no farmer from the 1800s could have imagined.

It may be hard to believe, but before the end of this century, 70 percent of today’s occupations will likewise be replaced by automation—including the job you hold. In other words, robots are inevitable and job replacement is just a matter of time. This upheaval is being led by a second wave of automation, one that is centered on artificial cognition, cheap sensors, machine learning, and distributed smarts. This broad automation will touch all jobs, from manual labor to knowledge work.

First, machines will consolidate their gains in already automated industries. After robots finish replacing assembly line workers, they will replace the workers in warehouses. Speedy bots able to lift 150 pounds all day long will retrieve boxes, sort them, and load them onto trucks. Robots like this already work in Amazon’s warehouses. Fruit and vegetable picking will continue to be robotized until no humans pick outside of specialty farms. Pharmacies will feature a single pill-dispensing robot in the back while the pharmacists focus on patient consulting. In fact, prototype pill-dispensing robots are already up and running in hospitals in California. To date, they have not messed up a single prescription, something that cannot be said of any human pharmacist. Next, the more dexterous chores of cleaning in offices and schools will be taken over by late-night robots, starting with easy-to-do floors and windows and eventually advancing to toilets. The highway parts of long-haul trucking routes will be driven by robots embedded in truck cabs. By 2050 most truck drivers won’t be human. Since truck driving is currently the most common occupation in the U.S., this is a big deal.

All the while, robots will continue their migration into white-collar work. We already have artificial intelligence in many of our machines; we just don’t call it that. Witness one of Google’s newest computers that can write an accurate caption for any photo it is given. Pick a random photo from the web, and the computer will “look” at it, then caption it perfectly. It can keep correctly describing what’s going on in a series of photos as well as a human, but never tire. Google’s translation AI turns a phone into a personal translator. Speak English into the microphone and it immediately repeats what you said in understandable Chinese, or Russian, or Arabic, or dozens of other languages. Point the phone to the recipient and the app will instantly translate their reply. The machine translator does Turkish to Hindi, or French to Korean, etc. It can of course translate any text. High-level diplomatic translators won’t lose their jobs for a while, but day-to-day translating chores in business will all be better done by machines. In fact, any job dealing with reams of paperwork will be taken over by bots, including much of medicine. The rote tasks of any information-intensive job can be automated. It doesn’t matter if you are a doctor, translator, editor, lawyer, architect, reporter, or even programmer: The robot takeover will be epic.

We are already at the inflection point.

We have preconceptions about how an intelligent robot should look and act, and these can blind us to what is already happening around us. To demand that artificial intelligence be humanlike is the same flawed logic as demanding that artificial flying be birdlike, with flapping wings. Robots, too, will think different.

Consider Baxter, a revolutionary new workbot from Rethink Robotics. Designed by Rodney Brooks, the former MIT professor who invented the bestselling Roomba vacuum cleaner and its descendants, Baxter is an early example of a new class of industrial robots created to work alongside humans. Baxter does not look impressive. Sure, it’s got big strong arms and a flat-screen display like many industrial bots. And Baxter’s hands perform repetitive manual tasks, just as factory robots do. But it’s different in three significant ways.

First, it can look around and indicate where it is looking by shifting the cartoon eyes on its head. It can perceive humans working near it and avoid injuring them. And workers can see whether it sees them. Previous industrial robots couldn’t do this, which meant that working robots had to be physically segregated from humans. The typical factory robot today is imprisoned within a chain-link fence or caged in a glass case. They are simply too dangerous to be around, because they are oblivious to others. This isolation prevents such robots from working in a small shop, where isolation is not practical. Optimally, workers should be able to get materials to and from the robot or to tweak its controls by hand throughout the workday; isolation makes that difficult. Baxter, however, is aware. Using force-feedback technology to feel if it is colliding with a person or another bot, it is courteous. You can plug it into a wall socket in your garage and easily work right next to it.

Second, anyone can train Baxter. It is not as fast, strong, or precise as other industrial robots, but it is smarter. To train the bot, you simply grab its arms and guide them in the correct motions and sequence. It’s a kind of “watch me do this” routine. Baxter learns the procedure and then repeats it. Any worker is capable of this show and tell; you don’t even have to be literate. Previous workbots required highly educated engineers and crack programmers to write thousands of lines of code (and then debug them) in order to instruct the robot in the simplest change of task. The code has to be loaded in batch mode—i.e., in large, infrequent batches—because the robot cannot be reprogrammed while it is being used. Turns out the real cost of the typical industrial robot is not its hardware but its operation. Industrial robots cost $100,000-plus to purchase but can require four times that amount over a lifespan to program, train, and maintain. The costs pile up until the average lifetime bill for an industrial robot is half a million dollars or more.
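
The “watch me do this” routine reduces to record and replay. The sketch below is a deliberately simplified illustration with an entirely hypothetical robot interface; Rethink’s actual SDK differs:

```python
# Teach-by-demonstration in miniature: record the arm poses a worker guides
# the robot through, then replay them. The robot API here is hypothetical.
recorded_path = []

def record(pose):
    recorded_path.append(pose)        # called while a worker moves the arm

def replay(move_arm_to):
    for pose in recorded_path:        # repeat the demonstrated sequence
        move_arm_to(pose)

# Example: a worker guides the arm through three joint configurations.
for pose in [(0.0, 0.5, 1.2), (0.3, 0.4, 1.0), (0.6, 0.2, 0.8)]:
    record(pose)
replay(print)                         # stand-in actuator just prints poses
```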

The third difference, then, is that Baxter is cheap. Priced at $25,000, it’s in a different league compared with the $500,000 total bill of its predecessors. It is as if those established robots, with their batch-mode programming, are the mainframe computers of the robot world and Baxter is the first PC robot. It is likely to be dismissed as a hobbyist toy, missing key features like sub-millimeter precision. But as with the PC and unlike the ancient mainframe, the user can interact with it directly, immediately, without waiting for experts to mediate—and use it for nonserious, even frivolous things. It’s cheap enough that small-time manufacturers can afford one to package up their wares or custom paint their product or run their 3-D printing machine. Or you could staff up a factory that makes iPhones.

Baxter was invented in a century-old brick building near the Charles River in Boston. In 1895 the building was a manufacturing marvel in the very center of the new manufacturing world. It even generated its own electricity. For a hundred years the factories inside its walls changed the world around us. Now the capabilities of Baxter and the approaching cascade of superior robot workers spur inventor Brooks to speculate on how these robots will shift manufacturing in a disruption greater than the last revolution. Looking out his office window at the former industrial neighborhood, he says, “Right now we think of manufacturing as happening in China. But as manufacturing costs sink because of robots, the costs of transportation become a far greater factor than the cost of production. Nearby will be cheap. So we’ll get this network of locally franchised factories, where most things will be made within five miles of where they are needed.”

That may be true for making stuff, but a lot of remaining jobs for humans are service jobs. I ask Brooks to walk with me through a local McDonald’s and point out the jobs that his kind of robots can replace. He demurs and suggests it might be 30 years before robots will cook for us. “In a fast-food place you’re not doing the same task very long. You’re always changing things on the fly, so you need special solutions. We are not trying to sell a specific solution. We are building a general-purpose machine that other workers can set up themselves and work alongside.” And once we can cowork with robots right next to us, it’s inevitable that our tasks will bleed together, and soon our old work will become theirs—and our new work will become something we can hardly imagine.

To understand how robot replacement will happen, it’s useful to break down our relationship with robots into four categories.

1. Jobs Humans Can Do but Robots Can Do Even Better

Humans can weave cotton cloth with great effort, but automated looms make perfect cloth by the mile for a few cents. The only reason to buy handmade cloth today is because you want the imperfections humans introduce. There’s very little reason to want an imperfect car. We no longer value irregularities while traveling 70 miles per hour on a highway—so we figure that the fewer humans touching our car as it is being made, the better.

And yet for more complicated chores, we still tend to mistakenly believe computers and robots can’t be trusted. That’s why we’ve been slow to acknowledge how they’ve mastered some conceptual routines, in certain cases even surpassing their mastery of physical routines. A computerized brain known as autopilot can fly a 787 jet unaided for all but seven minutes of a typical flight. We place human pilots in the cockpit to fly those seven minutes and for “just in case” insurance, but the needed human pilot time is decreasing rapidly. In the 1990s, computerized mortgage appraisals replaced human appraisers wholesale. Much tax preparation has gone to computers, as well as routine X-ray analysis and pretrial evidence gathering—all once done by highly paid smart people. We’ve accepted utter reliability in robot manufacturing; soon we’ll accept the fact that robots can do it better in services and knowledge work too.

2. Jobs Humans Can’t Do but Robots Can

A trivial example: Humans have trouble making a single brass screw unassisted, but automation can produce a thousand exact ones per hour. Without automation, we could not make a single computer chip—a job that requires degrees of precision, control, and unwavering attention that our animal bodies don’t possess. Likewise no human—indeed no group of humans, no matter their education—can quickly search through all the web pages in the world to uncover the one page revealing the price of eggs in Kathmandu yesterday. Every time you click on the search button you are employing a robot to do something we as a species are unable to do alone.

While the displacement of formerly human jobs gets all the headlines, the greatest benefits bestowed by robots and automation come from their occupation of jobs we are unable to do. We don’t have the attention span to inspect every square millimeter of every CAT scan looking for cancer cells. We don’t have the millisecond reflexes needed to inflate molten glass into the shape of a bottle. We don’t have an infallible memory to keep track of every pitch in Major League Baseball and calculate the probability of the next pitch in real time.

We aren’t giving “good jobs” to robots. Most of the time we are giving them jobs we could never do. Without them, these jobs would remain undone.

3. Jobs We Didn’t Know We Wanted Done

This is the greatest genius of the robot takeover: With the assistance of robots and computerized intelligence, we already can do things we never imagined doing 150 years ago. We can today remove a tumor in our gut through our navel, make a talking-picture video of our wedding, drive a cart on Mars, print a pattern on fabric that a friend mailed to us as a message through the air. We are doing, and are sometimes paid for doing, a million new activities that would have dazzled and shocked the farmers of 1800. These new accomplishments are not merely chores that were difficult before. Rather they are dreams created chiefly by the capabilities of the machines that can do them. They are jobs the machines make up.

Before we invented automobiles, air-conditioning, flat-screen video displays, and animated cartoons, no one living in ancient Rome wished they could watch pictures move while riding to Athens in climate-controlled comfort. I did that recently. One hundred years ago not a single citizen of China would have told you that they would rather buy a tiny glassy slab that allowed them to talk to faraway friends before they would buy indoor plumbing. But every day peasant farmers in China without plumbing purchase smartphones. Crafty AIs embedded in first-person shooter games have given millions of teenage boys the urge, the need, to become professional game designers—a dream that no boy in Victorian times ever had. In a very real way our inventions assign us our jobs. Each successful bit of automation generates new occupations—occupations we would not have fantasized about without the prompting of the automation.

To reiterate, the bulk of new tasks created by automation are tasks only other automation can handle. Now that we have search engines like Google, we set the servant upon a thousand new errands. Google, can you tell me where my phone is? Google, can you match the people suffering depression with the doctors selling pills? Google, can you predict when the next viral epidemic will erupt? Technology is indiscriminate this way, piling up possibilities and options for both humans and machines.

It is a safe bet that the highest-earning professions in the year 2050 will depend on automations and machines that have not been invented yet. That is, we can’t see these jobs from here, because we can’t yet see the machines and technologies that will make them possible. Robots create jobs that we did not even know we wanted done.

4. Jobs Only Humans Can Do—at First

The one thing humans can do that robots can’t (at least for a long while) is to decide what it is that humans want to do. This is not a trivial semantic trick; our desires are inspired by our previous inventions, making this a circular question.

When robots and automation do our most basic work, making it relatively easy for us to be fed, clothed, and sheltered, then we are free to ask, “What are humans for?” Industrialization did more than just extend the average human lifespan. It led a greater percentage of the population to decide that humans were meant to be ballerinas, full-time musicians, mathematicians, athletes, fashion designers, yoga masters, fan-fiction authors, and folks with one-of-a-kind titles on their business cards. With the help of our machines, we could take up these roles—but, of course, over time the machines will do these as well. We’ll then be empowered to dream up yet more answers to the question “What should we do?” It will be many generations before a robot can answer that.

This postindustrial economy will keep expanding because each person’s task (in part) will be to invent new things to do that will later become repetitive jobs for the robots. In the coming years robot-driven cars and trucks will become ubiquitous; this automation will spawn the new human occupation for former truck drivers of trip optimizer, a person who tweaks the traffic algorithms for optimal energy and time usage. Routine robosurgery will necessitate the new medical skills of keeping complex machines sterile. When automatic self-tracking of all your activities becomes the normal thing to do, a new breed of professional analysts will arise to help you make sense of the data. And of course we will need a whole army of robot nannies, dedicated to keeping your personal robots up and running. Each of these new vocations will in turn be taken over by automation later.

The real revolution erupts when everyone has personal workbots, the descendants of Baxter, at their beck and call. Imagine you are one of the 0.1 percent of people who still farm. You run a small organic farm with direct sales to your customers. You still have a job as a farmer, but robots do most of the actual farmwork. Your fleets of worker bots do all the outside work under the hot sun—weeding, pest control, and harvesting of produce—as directed by a very smart mesh of probes in the soil. Your new job as farmer is overseeing the farming system. One day your task might be to research which variety of heirloom tomato to plant; the next day to find out what your customers crave; the following day might be the time to update the information on your custom labels. The bots perform everything else that can be measured.

Right now it seems unthinkable: We can’t imagine a bot that can assemble a stack of ingredients into a gift or manufacture spare parts for our lawn mower or fabricate materials for our new kitchen. We can’t imagine our nephews and nieces running a dozen workbots in their garage, churning out inverters for their friend’s electric vehicle startup. We can’t imagine our children becoming appliance designers, making custom batches of liquid nitrogen dessert machines to sell to the millionaires in China. But that’s what personal robot automation will enable.

Everyone will have access to a personal robot, but simply owning one will not guarantee success. Rather, success will go to those who best optimize the process of working with bots and machines. Geographical clusters of production will matter, not for any differential in labor costs but because of the differential in human expertise. It’s human-robot symbiosis. Our human assignment will be to keep making jobs for robots—and that is a task that will never be finished. So we will always have at least that one “job.”

✵ ✵ ✵

In the coming years, our relationships with robots will become ever more complex. But already a recurring pattern is emerging. No matter what your current job or your salary, you will progress through a predictable cycle of denial again and again. Here are the Seven Stages of Robot Replacement:

1. A robot/computer cannot possibly do the tasks I do.

2. [Later.]

OK, it can do a lot of those tasks, but it can’t do everything I do.

3. [Later.]

OK, it can do everything I do, except it needs me when it breaks down, which is often.

4. [Later.]

OK, it operates flawlessly on routine stuff, but I need to train it for new tasks.

5. [Later.]

OK, OK, it can have my old boring job, because it’s obvious that was not a job that humans were meant to do.

6. [Later.]

Wow, now that robots are doing my old job, my new job is much more interesting and pays more!

7. [Later.]

I am so glad a robot/computer cannot possibly do what I do now.

[Repeat.]

This is not a race against the machines. If we race against them, we lose. This is a race with the machines. You’ll be paid in the future based on how well you work with robots. Ninety percent of your coworkers will be unseen machines. Most of what you do will not be possible without them. And there will be a blurry line between what you do and what they do. You might no longer think of it as a job, at least at first, because anything that resembles drudgery will be handed over to robots by the accountants.

We need to let robots take over. Many of the jobs that politicians are fighting to keep away from robots are jobs that no one wakes up in the morning really wanting to do. Robots will do jobs we have been doing, and do them much better than we can. They will do jobs we can’t do at all. They will do jobs we never imagined even needed to be done. And they will help us discover new jobs for ourselves, new tasks that expand who we are. They will let us focus on becoming more human than we were.

It is inevitable. Let the robots take our jobs, and let them help us dream up new work that matters.