Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots - John Markoff (2015)

Chapter 1. BETWEEN HUMAN AND MACHINE

Bill Duvall was already a computer hacker when he dropped out of college. Not long afterward he found himself face-to-face with Shakey, a six-foot-tall wheeled robot. Shakey would have its moment in the sun in 1970, when Life magazine dubbed it the first “electronic person.” As a robot, Shakey fell more into the R2-D2 category of mobile robots than the more humanoid C-3PO of Star Wars lore. It was basically a stack of electronic gear equipped with sensors and motorized wheels, first tethered and later wirelessly connected to a nearby mainframe computer.

Shakey wasn’t the world’s first mobile robot, but it was the first one that was designed to be truly autonomous. An early experiment in artificial intelligence (AI), Shakey was intended to reason about the world around it, plan its own actions, and perform tasks. It could find and push objects and move around in a planned way in its highly structured world. Moreover, as a harbinger of things to come, it was a prototype for much more ambitious machines that were intended to live, in military parlance, in “a hostile environment.”

Although the project has now largely been forgotten, the Shakey designers pioneered computing technologies today used by more than one billion people. The mapping software in everything from cars to smartphones is based on techniques that were first developed by the Shakey team. Their A* search algorithm remains the best-known method for finding the shortest path between two locations. Toward the end of the project, speech control was added as a research task, and today Apple’s Siri speech service is a distant descendant of the machine that began life as a stack of rolling actuators and sensors.
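
The idea behind A* can be conveyed in a short sketch. The Python below is only an illustration of the technique, not the Shakey team's code: it searches a small grid world using a Manhattan-distance heuristic, and the grid representation, function names, and example "room" are all invented for the example.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* search on a 2-D grid of 0 (free) and 1 (blocked) cells.

    Returns a shortest path from start to goal as a list of (row, col)
    tuples, or None if no path exists. The heuristic is Manhattan distance,
    which never overestimates the true cost on a 4-connected grid.
    """
    def heuristic(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(heuristic(start), 0, start)]  # entries are (f = g + h, g, cell)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        _, g, current = heapq.heappop(open_heap)
        if current == goal:
            # Reconstruct the path by walking back through recorded predecessors.
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            neighbor = (current[0] + dr, current[1] + dc)
            r, c = neighbor
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                new_g = g + 1
                if new_g < best_g.get(neighbor, float("inf")):
                    best_g[neighbor] = new_g
                    came_from[neighbor] = current
                    heapq.heappush(open_heap, (new_g + heuristic(neighbor), new_g, neighbor))
    return None

# Example: a small room with one obstacle, loosely in the spirit of Shakey's world.
room = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(a_star(room, (0, 0), (2, 3)))
```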

Duvall had grown up on the Peninsula south of San Francisco, the son of a physicist who was involved in classified research at Stanford Research Institute, the military-oriented think tank where Shakey resided. At UC Berkeley he took all the computer programming courses the university offered in the mid-1960s. After two years he dropped out to join the think tank where his father worked, just miles from the Stanford campus, entering a cloistered priesthood where the mainframe computer was the equivalent of a primitive god.

For the young computer hacker, Stanford Research Institute, soon after renamed SRI International, was an entry point into a world that allowed skilled programmers to create elegant and elaborate software machines. During the 1950s SRI pioneered the first check-processing computers. Duvall arrived to work on an SRI contract to automate an English bank’s operations, but the bank had been merged into a larger bank, and the project was put on indefinite hold. He used the time for his first European vacation and then headed back to Menlo Park to renew his romance with computing, joining the team of artificial intelligence researchers building Shakey.

Like many hackers, Duvall was something of a loner. In high school, a decade before the movie Breaking Away, he joined a local cycling club and rode his bike in the hills behind Stanford. In the 1970s the movie would transform the American perception of bike racing, but in the 1960s cycling was still a bohemian sport, attracting a ragtag assortment of individualists, loners, and outsiders. That image fit Duvall’s worldview well. Before high school he attended the Peninsula School, an alternative elementary and middle school that adhered to the philosophy that children should learn by doing and at their own pace. One of his teachers had been Ira Sandperl, a Gandhi scholar who was a permanent fixture behind the cash register at Kepler’s, a bookstore near the Stanford campus. Sandperl had also been Joan Baez’s mentor and had imbued Duvall with an independent take on knowledge, learning, and the world.

Duvall was one of the first generation of computer hackers, a small subculture that had originally emerged at MIT, where computing was an end in itself and where the knowledge and code needed to animate the machines were both freely shared. The culture had quickly spread to the West Coast, where it had taken root at computing design centers like Stanford and the University of California at Berkeley.

It was an era in which computers were impossibly rare—a few giant machines were hidden away in banks, universities, and government-funded research centers. At SRI, Duvall had unfettered access to a room-sized machine first acquired for an elite military-funded project and then used to run the software controlling Shakey. At both SRI and at the nearby Stanford Artificial Intelligence Laboratory (SAIL), tucked away in the hills behind Stanford University, there was a tightly knit group of researchers who already believed in the possibility of building a machine that mimicked human capabilities. To this group, Shakey was a striking portent of the future, and they believed that the scientific breakthrough to enable machines to act like humans would come in just a few short years.

Indeed, during the mid-sixties there was virtually boundless optimism among the small community of artificial intelligence researchers on both coasts. In 1966, when SRI and SAIL were beginning to build robots and AI programs in California, another artificial intelligence pioneer, Marvin Minsky, assigned an undergraduate to work on the problem of computer vision on the other side of the country, at MIT. He envisioned it as a summer project. The reality was disappointing. Although AI might be destined to transform the world, Duvall, who worked on several SRI projects before transferring to the Shakey project to work in the trenches as a young programmer, immediately saw that the robot was barely taking baby steps.

Shakey lived in a large open room with linoleum floors and a couple of racks of electronics. Boxlike objects were scattered around for the robot to “play” with. The mainframe computer providing the intelligence was nearby. Shakey’s sensors would capture the world around it and then “think”—standing motionless for minutes on end—before resuming its journey, even in its closed and controlled world. It was like watching grass grow. Moreover, it frequently broke down or would drain its batteries after just minutes of operation.

For a few months Duvall made the most of his situation. He could see that the project was light-years away from the stated goal of an automated military sentry or reconnaissance agent. He tried to amuse himself by programming the rangefinder, a clunky device based on a rotating mirror. Unfortunately it was prone to mechanical failure, making software development a highly unsatisfying exercise in error prediction and recovery. One of the managers told him that the project needed a “probabilistic decision tree” to refine the robot’s vision system. So rather than building that special-purpose mechanism by hand, he spent his time writing a programming tool that could generate such trees programmatically. Shakey’s vision system worked better than the rangefinder. Even with the simplest machine vision processing, it could identify both edges and basic shapes, essential primitives for understanding and navigating its surroundings.

Duvall’s manager believed in structuring his team so that “science” would only be done by “scientists.” Programmers were low-status grunt workers who implemented the design ideas of their superiors. While some of the leaders of the group appeared to have a high-level vision to pursue, the project was organized in a military fashion, making work uninspiring for a low-level programmer like Duvall, stuck writing device drivers and other software interfaces. That didn’t sit well with the young computer hacker.

Robots seemed like a cool idea to him, but before Star Wars there weren’t a lot of inspiring models. There was Robby the Robot from Forbidden Planet in the 1950s, but it was hard to find inspiration in a broader vision. Shakey simply didn’t work very well. Fortunately Stanford Research Institute was a big place and Duvall was soon attracted by a more intriguing project.

Just down the hall from the Shakey laboratory he would frequently encounter another research group that was building a computer to run a program called NLS, the oN-Line System. While Shakey was managed hierarchically, the group run by computer scientist Doug Engelbart was anything but. Engelbart’s researchers, an eclectic collection of buttoned-down white-shirted engineers and long-haired computer hackers, were taking computing in a direction so different it was not even in the same coordinate system. The Shakey project was struggling to mimic the human mind and body. Engelbart had a very different goal. During World War II he had stumbled across an article by Vannevar Bush, who had proposed a microfiche-based information retrieval system called Memex to manage all of the world’s knowledge. Engelbart later decided that such a system could be assembled based on the then newly available computers. He thought the time was right to build an interactive system to capture knowledge and organize information in such a way that it would now be possible for a small group of people—scientists, engineers, educators—to create and collaborate more effectively. By this time Engelbart had already invented the computer mouse as a control device and had also conceived of the idea of hypertext links that would decades later become the foundation for the modern World Wide Web. Moreover, like Duvall, he was an outsider within the insular computer science world that worshipped theory and abstraction as fundamental to science.


Artificial intelligence pioneer Charles Rosen with Shakey, the first autonomous robot. The Pentagon funded the project to research the idea of a future robotic sentry. (Image courtesy of SRI International)

The cultural gulf between the worlds defined by artificial intelligence and Engelbart’s contrarian idea, deemed “intelligence augmentation”—he referred to it as “IA”—was already palpable. Indeed, when Engelbart paid a visit to MIT during the 1960s to demonstrate his project, Marvin Minsky complained that it was a waste of research dollars on something that would create nothing more than a glorified word processor.

Despite earning no respect from establishment computer scientists, Engelbart was comfortable with being viewed as outside the mainstream academic world. When attending the Pentagon DARPA review meetings that were held regularly to bring funded researchers together to share their work, he would always begin his presentations by saying, “This is not computer science.” And then he would go on to sketch a vision of using computers to permit people to “bootstrap” their projects by making learning and innovation more powerful.

Even if it wasn’t in the mainstream of computer science, the ideas captivated Bill Duvall. Before long he switched his allegiance and moved down the hall to work in Engelbart’s lab. In the space of less than a year he went from struggling to program the first useful robot to writing the software code for one of the two computers that first connected over a network to demonstrate what would evolve to become the Internet. Late in the evening on October 29, 1969, Duvall connected Engelbart’s NLS software in Menlo Park to a computer in Los Angeles controlled by another young hacker via a data line leased from the phone company. Bill Duvall would become the first to make the leap from research aimed at replacing humans with computers to research aimed at augmenting the human intellect, and one of the first to stand on both sides of an invisible line that even today divides two rival, insular engineering communities.

Significantly, what started in the 1960s was then accelerated in the 1970s at a third laboratory also located near Stanford. Xerox’s Palo Alto Research Center extended ideas originally incubated in John McCarthy’s and Engelbart’s labs into the personal computer and computer networking, which were in turn successfully commercialized by Apple and Microsoft. Among other things, the personal computing industry touched off what venture capitalist John Doerr identified during the 1990s as the “largest legal accumulation of wealth in history.”1

Most people know Doug Engelbart as the inventor of the mouse, but his more encompassing idea was to use a set of computer technologies to make it possible for small groups to “bootstrap” their projects by employing an array of ever more powerful software tools to organize their activities, creating what he described as the “collective IQ” that outstripped the capabilities of any single individual. The mouse was simply a gadget to improve our ability to interact with computers.

In creating SAIL, McCarthy had an impact upon the world that in many ways equaled Engelbart’s. People like Alan Kay and Larry Tesler, who were both instrumental in the design of the modern personal computer, passed through his lab on their way to Xerox and subsequently to Apple Computer. Whitfield Diffie took away ideas that would lead to the cryptographic technology that secures modern electronic commerce.

There were, however, two other technologies being developed simultaneously at SRI and SAIL that are only now beginning to have a substantial impact: robotics and artificial intelligence software. Both are not only transforming economies; they are also fostering a new era of intelligent machines that is fundamentally changing the way we live.

The impact of both computing and robotics had been forecast before these laboratories were established. Norbert Wiener invented the concept of cybernetics at the very dawn of the computing era in 1948. In his book Cybernetics, he outlined a new engineering science of control and communication that foreshadowed both technologies. He also foresaw the implications of these new engineering disciplines, and two years after he wrote Cybernetics, his companion book, The Human Use of Human Beings, explored both the value and the danger of automation.

He was one of the first to foresee the twin possibilities that information technology might both escape human control and come to control human beings. More significantly he posed an early critique of the arrival of machine intelligence: the danger of passing decisions on to systems that, incapable of thinking abstractly, would make decisions in purely utilitarian terms rather than in consideration of richer human values.

Engelbart worked as an electronics technician at NASA’s Ames Research Center during the 1950s, and he had watched as aeronautical engineers first built small models to test in a wind tunnel and then scaled them up into full-sized airplanes. He quickly realized that the new silicon computer circuits could be scaled in the opposite direction—down into what would become known as the “microcosm.” By shrinking the circuitry it would be possible to place more circuits in the same space for the same cost. And dramatically, each time the circuit density increased, performance improvement would not be additive, but rather multiplicative. For Engelbart, this was a crucial insight. Within a year after the invention of the modern computer chip in the late 1950s he understood that there would ultimately be enough cheap and plentiful computing power to change the face of humanity.

This notion of exponential change—Moore’s law, for example—is one of the fundamental contributions of Silicon Valley. Computers, Engelbart and Moore saw, would become more powerful ever more quickly. Equally dramatically, their cost would continue falling, not incrementally but at an accelerating rate, to the point where soon remarkably powerful computers would be affordable to even the world’s poorest people. During the past half decade that acceleration has led to rapid improvement in technologies that are necessary components for artificial intelligence: computer vision, speech recognition, and robotic touch and manipulation. Machines now also taste and smell, but recently more significant innovations have come from modeling human neurons in electronic circuits, which has begun to yield advances in pattern recognition—mimicking human cognition.
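
The difference between additive and multiplicative improvement is easy to understate in prose. The short Python calculation below is purely illustrative, assuming a doubling roughly every two years (the figures are not drawn from the text), and shows how compounding pulls away from steady, linear gains:

```python
# Illustrative only: compare additive and multiplicative ("Moore's law"-style)
# improvement over 20 years, assuming a doubling roughly every two years.
baseline = 1.0
years = range(0, 21, 2)

additive = [baseline + 1.0 * (y // 2) for y in years]       # +1 unit per two-year step
multiplicative = [baseline * 2 ** (y // 2) for y in years]  # x2 per two-year step

for y, a, m in zip(years, additive, multiplicative):
    print(f"year {y:2d}: additive {a:6.0f}x   multiplicative {m:6.0f}x")
# After 20 years the additive path yields 11x; the compounding path yields 1,024x.
```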

The quickening pace of AI innovation has led some, such as Rice University computer scientist Moshe Vardi, to predict that machines will be able to perform a very significant fraction of all tasks now done by humans, perhaps as soon as 2045.2 Even more radical voices argue that computers are evolving at such a rapid pace that they will outstrip the intellectual capabilities of humans in one, or at most two, more generations. The science-fiction author and computer scientist Vernor Vinge posed the notion of a computing “singularity,” in which machine intelligence will make such rapid progress that it will cross a threshold and then, in some as yet unspecified leap, become superhuman.

It is a provocative claim, but it is far too early to judge it definitively. Indeed, it is worthwhile recalling the point made by longtime Silicon Valley observer Paul Saffo when thinking about the compounding impact of computing. “Never mistake a clear view for a short distance,” he has frequently reminded the Valley’s digerati. For those who believe that human labor will be obsolete in the space of a few decades, it’s worth remembering that even against the background of globalization and automation, between 1980 and 2010 the U.S. labor force actually continued to expand. Economists Frank Levy and Richard J. Murnane recently pointed out that since 1964 the economy has actually added seventy-four million jobs.3

MIT economist David Autor has offered a detailed explanation of the consequences of the current wave of automation. Job destruction is not across the board, he argues, but has instead been concentrated in the routinized tasks performed by those in the middle of the job structure—the post-World War II white-collar expansion. The economy has continued to add jobs at both the bottom and the top of the pyramid, leaving the middle class vulnerable while markets for both menial and expert work expand.

Rather than extending that debate here, however, I am interested in exploring a different question first posed by Norbert Wiener in his early alarms about the introduction of automation. What will the outcome of McCarthy’s and Engelbart’s differing approaches be? What are the consequences of the design decisions made by today’s artificial intelligence researchers and roboticists, who, with ever greater ease, can choose between extending and replacing the “human in the loop” in the systems and products they create? By the same token, what are the social consequences of building intelligent systems that substitute for or interact with humans in business, entertainment, and day-to-day activities?

Two distinct technical communities with separate traditions, values, and priorities have emerged in the computing world. One, artificial intelligence, has relentlessly pressed ahead toward the goal of automating the human experience. The other, the field of human-computer interaction, or HCI, has been more concerned with the evolution of the idea of “man-machine symbiosis” that was foreseen by pioneering psychologist J. C. R. Licklider at the dawn of the modern computing era as an interim step on the way to brilliant machines. Significantly, Licklider, as director of DARPA’s Information Processing Techniques Office in the mid-1960s, would be an early funder of both McCarthy and Engelbart. It was the Licklider era that would come to define the period when the Pentagon agency operated as a truly “blue-sky” funding organization, a period when, many argue, the agency had its most dramatic impact.

Wiener had raised an early alert about the relationship between man and computing machines. A decade later Licklider pointed to the significance of the impending widespread use of computing and how the arrival of computing machines was different from the previous era of industrialization. In a darker sense Licklider also forecast the arrival of the Borg of Star Trek notoriety. The Borg, which entered popular culture in 1989, is a fictional cybernetic alien species that assembles into a “hive mind” in which the collective subsumes the individual, intoning the phrase, “You will be assimilated.”

Licklider wrote in 1960 about the distance between “mechanically extended man” and “artificial intelligence,” and warned about the early direction of automation technology: “If we focus upon the human operator within the system, however, we see that, in some areas of technology, a fantastic change has taken place during the last few years. ‘Mechanical extension’ has given way to replacement of men, to automation, and the men who remain are there more to help than to be helped. In some instances, particularly in large computer-centered information and control systems, the human operators are responsible mainly for functions that it proved infeasible to automate.”4 That observation seems fatalistic in accepting the shift toward automation rather than augmentation.

Licklider, like McCarthy a half decade later, was confident that the advent of “Strong” artificial intelligence—a machine capable of at least matching wits and self-awareness with a human—was likely to arrive relatively soon. The period of man-machine “symbiosis” might last less than two decades, he wrote, although he allowed that the arrival of truly smart machines capable of rivaling thinking humans might not happen for a decade, or perhaps fifty years.

Ultimately, although he posed the question of whether humans will be freed or enslaved by the Information Age, he chose not to directly address it. Instead he drew a picture of what has become known as a “cyborg”—part human, part machine. In Licklider’s view human operators and computing equipment would blend together seamlessly to become a single entity. That vision has since been both celebrated and reviled. But it still leaves the question unanswered—will we be masters, slaves, or partners of the intelligent machines that are appearing today?

Consider the complete spectrum of human-machine interactions from simple “FAQbots” to Google Now and Apple’s Siri. Moving into the unspecified future in the movie Her, we see an artificial intelligence, voiced by Scarlett Johansson, capable of carrying on hundreds of simultaneous, intimate, human-level conversations. Google Now and Siri currently represent two dramatically different computer-human interaction styles. While Siri intentionally and successfully mimics a human, complete with a wry sense of humor, Google Now opts instead to function as a pure information oracle, devoid of personality or humanity.

It is tempting to see the personalities of the two competing corporate chieftains in these contrasting approaches. At Apple, Steve Jobs saw the potential in Siri before it was even capable of recognizing human speech and focused his designers on natural language as a better way to control a computer. At Google, Larry Page, by way of contrast, has resisted portraying a computer in human form.

How far will this trend go? Today it is anything but certain. Although we are already able to chatter with our cars and other appliances using limited vocabularies, computer speech and voice understanding remain a niche in the world of “interfaces” that control the computers that surround us. Speech recognition clearly offers a dramatic improvement in busy-hand, busy-eye scenarios for interacting with the multiplicity of Web services and smartphone applications that have emerged. Perhaps advances in brain-computer interfaces will prove to be useful for those unable to speak or when silence or stealth is needed, such as card counting in blackjack. The murkier question is whether these cybernetic assistants will eventually pass the Turing test, the metric first proposed by mathematician and computer scientist Alan Turing to determine if a computer is “intelligent.” Turing’s original 1950 paper has spawned a long-running philosophical discussion and even an annual contest, but today what is more interesting than the question of machine intelligence is what the test implies about the relationship between humans and machines.

Turing’s test consisted of placing a human before a computer terminal to interact with an unknown entity through typewritten questions and answers. If, after a reasonable period, the questioner was unable to determine whether he or she was communicating with a human or a machine, then the machine could be said to be “intelligent.” Although it has several variants and has been widely criticized, from a sociological point of view the test poses the right question. In other words, it is relevant with respect to the human, not the machine.

In the fall of 1991 I covered the first of a series of Turing test contests sponsored by a New York City philanthropist, Hugh Loebner. The event was first held at the Boston Computer Museum and attracted a crowd of computer scientists and a smattering of philosophers. At that point the “bots,” software robots designed to participate in the contest, weren’t very far advanced beyond the legendary Eliza program written by MIT computer scientist Joseph Weizenbaum during the 1960s. Weizenbaum’s program mimicked a Rogerian psychotherapist (a human-centered style of therapy that encourages patients to talk their way toward understanding their own feelings), and he was horrified to discover that his students had become deeply immersed in intimate conversations with his first, simple bot.
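
Eliza's trick was little more than pattern matching plus pronoun “reflection.” The toy responder below is a sketch in that spirit, not Weizenbaum's original program; the patterns and canned replies are invented for illustration:

```python
import re

# A toy, Eliza-style responder: keyword patterns plus pronoun "reflection".
# Illustrative only; the real program used a richer script of rules.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),
]

def reflect(fragment):
    # Swap first- and second-person words so the reply points back at the speaker.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            return template.format(*(reflect(group) for group in match.groups()))

print(respond("I feel nobody listens to me"))
# -> "Why do you feel nobody listens to you?"
```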

But the judges for the original Loebner contest in 1991 fell into two broad categories: computer literate and computer illiterate. For human judges without computer expertise, it turned out that for all practical purposes the Turing test was conquered in that first year. In reporting on the contest I quoted one of the nontechnical judges, a part-time auto mechanic, saying why she was fooled: “It typed something that I thought was trite, and when I responded it interacted with me in a very convincing fashion,”5 she said. It was a harbinger of things to come. We now routinely interact with machines simulating humans and they will continue to improve in convincing us of their faux humanity.

Today, programs like Siri not only seem almost human; they are beginning to make human-machine interactions in natural language seem routine. The evolution of these software robots is aided by the fact that humans appear to want to believe they are interacting with humans even when they are conversing with machines. We are hardwired for social interaction. Whether or not robots move around to assist us in the physical world, they are already moving among us in cyberspace. It’s now inevitable that these software bots—AIs, if only of limited capability—will increasingly become a routine part of daily life.

Intelligent software agents such as Apple’s Siri, Microsoft’s Cortana, and Google Now are interacting with hundreds of millions of people, by default defining this robot/human relationship. Even at this relatively early stage Siri has a distinctly human style, a first step toward the creation of a generation of likable and trusted advisors. Will it matter whether we interact with these systems as partners or keep them as slaves? While there is an increasingly lively discussion about whether intelligent agents and robots will be autonomous—and if they are autonomous, whether they will be self-aware enough that we need to consider questions of “robot rights”—in the short term the more significant question is how we treat these systems and what the design of those interactions says about what it means to be human. To the extent that we treat these systems as partners it will humanize us. Yet the question of what the relationship between humans and machines will be has largely been ignored by much of the modern computing world.

Jonathan Grudin, a computer scientist at Microsoft Research, has noted that the separate disciplines of artificial intelligence and human-computer interaction rarely speak to one another.6 He points to John McCarthy’s early explanation of the direction of artificial intelligence research: “[The goal] was to get away from studying human behavior and consider the computer as a tool for solving certain classes of problems. Thus AI was created as a branch of computer science and not as a branch of psychology.”7 McCarthy’s pragmatic approach can certainly be justified by the success the field has had in the past half decade. Artificial intelligence researchers like to point out that aircraft can fly just fine without resorting to flapping their wings—an argument that asserts that to duplicate human cognition or behavior, it is not necessary to comprehend it. However, the chasm between AI and IA has only deepened as AI systems have become increasingly facile at human tasks, whether it is seeing, speaking, moving boxes, or playing chess, Jeopardy!, or Atari video games.

Terry Winograd was one of the first to see the two extremes clearly and to consider the consequences. His career traces an arc from artificial intelligence to intelligence augmentation. As a graduate student at MIT in the 1960s, he focused on understanding human language in order to build a software equivalent of Shakey—a software robot capable of interacting with humans in conversation. Then, during the 1980s, in part because of his changing views on the limits of artificial intelligence, he left the field—a shift in perspective from AI to IA. Winograd walked away from AI in part because of a series of challenging conversations with philosophers at the University of California. As a member of a small group of AI researchers, he took part in weekly seminars with the Berkeley philosophers Hubert Dreyfus and John Searle. The philosophers convinced him that there were real limits to the capabilities of intelligent machines. Winograd’s conversion coincided with the collapse of a nascent artificial intelligence industry, a downturn known as the “AI Winter.” Several decades later Winograd, who was faculty advisor to Google cofounder Larry Page at Stanford, famously counseled the young graduate student to focus on the problem of Web search rather than self-driving cars.

In the intervening decades Winograd had become acutely aware of the importance of the designer’s point of view. The separation of the fields of AI and human-computer interaction, or HCI, is partly a question of approach, but it’s also an ethical stance about designing humans either into or out of the systems we create. More recently at Stanford Winograd helped create an academic program focusing on “Liberation Technologies,” which studies the construction of computerized systems based on human-centered values.

Throughout human history, technology has displaced human labor. Locomotives and tractors, however, didn’t make human-level decisions. Increasingly, “thinking machines” will. It is also clear that technology and humanity coevolve, which again will pose the question of who will be in control. In Silicon Valley it has become fashionable to celebrate the rise of the machines, most clearly in the emergence of organizations like the Singularity Institute and in books like Kevin Kelly’s 2010 What Technology Wants. In an earlier book in 1994, Out of Control, Kelly came down firmly on the side of the machines. He described a meeting between AI pioneer Marvin Minsky and Doug Engelbart:

When the two gurus met at MIT in the 1950s, they are reputed to have had the following conversation:

Minsky: We’re going to make machines intelligent. We are going to make them conscious!

Engelbart: You’re going to do all that for the machines? What are you going to do for the people?

This story is usually told by engineers working to make computers more friendly, more humane, more people centered. But I’m squarely on Minsky’s side—on the side of the made. People will survive. We’ll train our machines to serve us. But what are we going to do for the machines?8

Kelly is correct to point out that there are Minsky and Engelbart “sides.” But to say that people will “survive” belittles the consequences. He is basically echoing Minsky, who is famously said to have responded to a question about the significance of the arrival of artificial intelligence by saying, “If we’re lucky, maybe they’ll keep us as pets.”

Minsky’s position is symptomatic of the chasm between the AI and IA camps. The artificial intelligence community has until now largely chosen to ignore the consequences of the systems it considers merely powerful tools, dispensing with discussions of morality. As one of the engineers who is building next-generation robots told me when I asked about the impact of automation on people: “You can’t think about that; you just have to decide that you are going to do the best you can to improve the world for humanity as a whole.”

During the past half century, McCarthy’s and Engelbart’s philosophies have remained separate and their central conflict stands unresolved. One approach supplants humans with an increasingly powerful blend of computer hardware and software. The other extends our reach intellectually, economically, and socially using the same ingredients. While the chasm between these approaches has been little remarked, the explosion of this new wave of technology, which now influences every aspect of modern life, will encapsulate the repercussions of this divide.

Will machines supplant human workers or augment them? On one level, they will do both. But once again, that is the wrong question to ask, and it provides only a partial answer. Both software and hardware robots are flexible enough that they can ultimately become whatever we program them to be. In our current economy, how robots—both machines and intelligent systems—are designed and how they are used is overwhelmingly defined by cost and benefit, and costs are falling at an increasingly rapid rate. In our society, economics dictate that if a task can be done more cheaply by machine—software or hardware—in most cases it will be. It’s just a matter of when.

The decision to come down on either side of the debates is doubly difficult because there are no obvious right answers. Although driverless cars will displace millions of jobs, they will also save many lives. Today, decisions about implementing technologies are made largely on the basis of profitability and efficiency, but there is an obvious need for a new moral calculus. The devil, however, is in more than the details. As with nuclear weapons and nuclear power, artificial intelligence, genetic engineering, and robotics will have society-wide consequences, both intended and unintended, in the next decade.