
Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots - John Markoff (2015)

Chapter 9. MASTERS, SLAVES, OR PARTNERS?

It was almost midnight in Grand Central Station on a spring night in 1992. An elderly man wearing a blue New York Times windbreaker was leaning on a cane on the platform, waiting for a train to Westchester County. I had been at the Times for several years and I was puzzled by the ghostly figure. “Do you work for the paper?” I asked.

Many years earlier, he said, he had been a typesetter at the Times. In 1973 his union negotiated a deal to phase out workers’ jobs while the company implemented computerized printing systems, in exchange for guaranteed employment until retirement. Although he had not worked for more than a decade, he still enjoyed coming to the pressroom in Times Square and spending his evenings with the remaining pressmen as they produced the next day’s paper.

Today, the typesetter’s fate remains a poignant story about labor in the face of a new wave of AI-based automation technologies. His union first battled with newspaper publishers in the 1960s and struck a historic accommodation in the 1970s. Since then, however, the power of unions has declined dramatically: in the past three decades, the unionized share of the U.S. workforce has fallen from 20.1 to 11.3 percent. Collective bargaining will not play a significant role in defending workers’ jobs against the next wave of computerization. Printers and typographers in particular were highly skilled workers who fell prey to the technical advances of a generation of minicomputers during the 1970s, as the cost of computing plummeted when the industry shifted from transistor-based machines to lower-cost integrated circuits. Today, the lone typesetter’s soft landing is an extraordinary rarity.

There is evidence that the 2008 recession significantly accelerated the displacement of workers by computerized systems. Why rehire workers when your company can buy technology that replaces them at lower cost? A 2014 working paper released by the National Bureau of Economic Research confirmed the trend, and yet Henry Siu, an associate professor at the University of British Columbia and one of the paper’s authors, clung to the conventional Keynesian view of technological unemployment. He explained: “Over the very long run, technological progress is good for everybody, but over shorter time horizons, it’s not that everybody’s a winner.”1 It is probably worth noting that Keynes also pointed out that in the long run, we are all dead.

Indeed, Keynes’s actuarial logic is impeccable, but his economic logic is now under assault. There is an emerging perspective among technologists and some economists that Keynesian assumptions about technological unemployment—that individual jobs are lost but the overall amount of work stays constant—no longer hold true. AI systems that can move, see, touch, and reason are fundamentally altering the equation for human job creation. The debate today is not whether AI systems will arrive, but when.

It is still possible that history will vindicate the Keynesians. Modern society may be on the cusp of another economic transformation akin to the industrial revolution. It is conceivable that social forces like crowdsourcing and the Internet-enabled reorganization of the workforce will remake the U.S. economy in ways that are now impossible to foresee. The Internet has already created new job categories like “search engine optimization,” and it will certainly spawn other unexpected ones in the future.

However, if there is a new employment boom coming, it is still over the horizon. Bureau of Labor Statistics projections now indicate that U.S. job growth will be shaped primarily by the aging of American society, not by technological advances that displace and create jobs. Of the 15.6 million jobs the BLS predicts will be created by 2022, 2.4 million will be in the health-care and elder-care sectors. It is striking that, according to the BLS, new types of jobs based on technological advances and innovation will account for a relatively small portion of overall job growth; among those, software developers ranked highest, at twenty-sixth, with just 139,000 new jobs projected by 2022.2 The BLS projections suggest that technology will not be a fount of economic growth, but will instead pose a risk both to routinized jobs and to skilled jobs that require diverse kinds of “cognitive” labor, from physicians to reporters to stockbrokers.

Still, despite fears of a “jobs apocalypse,” there is another way to consider the impact of automation, robotics, and AI on society. Certainly AI and robotics technologies will destroy a vast number of jobs, but they can also be used to extend humanity. Which path is taken will be determined entirely by individual human designers. Tandy Trower is a software engineer who once oversaw armies of programmers at Microsoft Corporation but now works from a cramped office in South Seattle. The four-room shop might be any Silicon Valley garage start-up. There are circuit boards and computers strewn in every direction, and there are robots. Many of them are toys, but several look suspiciously like extras from the movie Robot & Frank. The idea of developing a robot to act as a human caregiver speaks directly to the tensions between the AI and IA approaches to robotics.

How will we care for our elderly? For some, integrating robots into elder care taps into a largely unmined market and offers roboticists the chance to orient their research toward a social good. Many argue that there is a shortage of skilled caregivers and believe that the development of robots that will act as companions and caregivers is a way of using artificial intelligence to ward off one of the greatest hazards of old age—loneliness and isolation.

The counterpoint to this argument is that there is not really a shortage of caregivers but rather a shortage in society’s willingness to allocate resources for tasks such as caregiving and education. “Of course we have enough human caregivers for the elderly. The country—and the world—is awash in underemployment and unemployment, and many people find caregiving to be a fulfilling and desirable profession. The only problem is that we—as a society—don’t want to pay caregivers well and don’t value their labor,” writes Zeynep Tufekci, a social scientist at the University of North Carolina at Chapel Hill.3 Tufekci was responding to an essay by Louise Aronson, a gerontologist at the University of California, San Francisco, who argued that there is an urgent need for robot caregivers to perform tasks ranging from monitoring the health of elderly patients to organizing their lives to serving as companions. Aronson describes making house calls and staying much longer than she should with each patient, forced to play the dual role of caregiver and companion.4 Tufekci envisions a society in which a vast army of skilled human doctors would be trained to spend time with the elderly. Sadly, as she notes, we live in a world that places more value on the work of stockbrokers and lawyers than on that of nursing aides and teachers. In the end, however, this argument is not about technology. Once, in agrarian communal societies, families cared for their elders. In Western society that is frequently no longer the case, and it is inconceivable that we will return to any kind of geographically centralized extended family structure soon.

Still, Tufekci’s challenge poses several questions.

First, will robots ever approximate the care of a human stranger? There are many horror stories about elder-care treatment in modern nursing homes and care facilities. Tufekci argues that every elder deserves the attention of an educated, skilled, and compassionate Dr. Aronson. However, if that doesn’t happen, will increasingly low-cost robots make life for elders better or worse? The vision of an aging population locked away and “watched over by machines of loving grace” is potentially disturbing. Machines may eventually look, act, and feel as if they are human, but they are decidedly not.

However, robots do not need to entirely replace human caregivers in order to help the elderly. For example, there could be a web of interconnected robots that make it possible for elders who are isolated to build a virtual human community on the Internet. Perhaps shut-in elders will be the most loyal users of augmented reality technologies being designed by Magic Leap, Microsoft, and others. The possibility of virtual caregivers is a compelling idea for those who are physically infirm.

Today Tandy Trower places himself squarely in the augmentation camp. He came to robotics as a member of Bill Gates’s technical staff at Microsoft. Gates was touring college campuses during 2006 and realized that there was an intense interest in robotics at computer science departments around the country. Everywhere he went, he watched demonstrations of robotics research. After one of his trips, he came back and asked Trower to put together a proposal for a way that Microsoft might become more active in the emerging robotics industry. Trower wrote a sixty-page report calling on Microsoft to create a group that built software tools to develop robots. Microsoft gave Trower a small group of researchers and he went off to build a simulator and a graphical programming language. They named it the Microsoft Robotics Developer Studio.

Then, however, Gates retired to start his foundation, and everything changed at Microsoft. The new chief executive, Steve Ballmer, had a very different focus. He was more concerned about making money and less willing to take risks. Through Microsoft veteran and chief strategy officer Craig Mundie, he sent Trower a clear message: tell me how Microsoft is going to make money on this.

Ballmer was very clear: he wanted a business that generated one billion dollars in revenue annually within seven years. Microsoft had industrial robotics partners, but these partners had no interest in buying software from Microsoft—they already had their own. Trower started looking for other industries that might be interested in purchasing his software. He looked at the automotive industry, but Microsoft already had an automotive division. He looked at the science education market, but it didn’t have much revenue potential. It seemed too early to pitch a telepresence robot. The more he looked, the more he considered the problem of aging and elder care. “Wow!” he thought to himself. “Here is a market that is going to explode in the next twenty or thirty years.” Today in the United States more than 8.5 million seniors require some kind of assistance, and that number will increase to more than 21 million in the next two decades.

There was an obvious need for robotic assistance in elder care, and no major players were angling for that market. Despite his enthusiasm, however, Trower wasn’t able to persuade Mundie or Ballmer that Microsoft should invest in the idea. Ballmer was interested in shrinking the range of Microsoft investments and focusing on a few core items.

“I have to do this,” Trower thought to himself. And so in late 2009, he left Microsoft after twenty-eight years and founded Hoaloha Robotics—the word hoaloha translates from Hawaiian as “friend”—with the intent of creating a mobile elder-care robot at a reasonable cost. Half a decade later, Trower has developed a four-foot-tall robotic prototype, affectionately known as Robby. It isn’t a replacement for a human caregiver, but it will be able to listen and speak, help with medicine, relay messages, and act as a telepresence when needed. It doesn’t walk—it rolls on a simple wheel assembly that allows it to move fluidly in any direction. Instead of arms, it has a tray whose height it can adjust. This allows Robby to perform certain tasks, like picking up dropped items.

Trower does not think that Robby will displace human workers. Rising costs and a shrinking supply of workers will instead create a situation in which a helper robot can extend the capabilities of both human patients and helpers. Human caregivers already cost $70,000 or more a year, Trower argues, and a low-cost robot will actually extend assistance to those who cannot afford it.

Ignoring Tufekci’s fears, Trower has focused his engineering skills on extending and assisting humans. But when will these machines meet our expectations for them? And how will those who are cared for greet them? These remain open questions, although a wealth of anecdotal evidence suggests that, as speech recognition and speech synthesis continue to improve, as sensors fall in cost, and as roboticists develop more agile machines, we will gratefully accept them. Moreover, for an Internet-savvy generation that has grown up with tablets, iPhones, and Siri, caregiving machines will seem like second nature. Robots—elder-care workers, service workers, drivers, and soldiers—are an inevitability. It is more difficult, however, to predict our relationship with these robots. Tales such as that of the golem have woven the idea of a happy slave serving our every desire deep into our mythology and our psyches. In the end, the emergence of intelligent machines that largely displace human labor will undoubtedly instigate a crisis of human identity.

For now, Trower has focused on a clear and powerful role for robots as assistants for the infirm and the elderly. This is an excellent example of AI used directly in the service of humans, but what happens if AI-based machines spread quickly through the economy? We can only hope that the Keynesians are vindicated—in the long run.

The twin paths of AI and IA place a tremendous amount of power and responsibility in the hands of the two communities of designers described in this book. For example, when Steve Jobs set out to assemble a team of engineers to reinvent personal computing with the Lisa and the Macintosh, he had a clear goal in mind. Jobs thought of computing as a “bicycle for our minds.” Personal computing, first proposed by a small group of engineers and visionaries in the 1970s, has since had a tremendous impact on the economy and the modern workforce. It has both empowered individuals and unlocked human creativity on a global scale.

Three decades later, Andy Rubin’s robotics project at Google is representative of a similar small group of engineers advancing the state of the art in robotics. Rubin set out with an equally clear—if dramatically different—vision in mind. When he started acquiring technology and talent for Google’s foray into robotics, he described a ten- to fifteen-year effort to radically advance an array of developments in robotics, from walking machines to robot arms and sensor technology. He sketched a vision of bipedal Google delivery robots riding on the backs of Google cars and hopping off to deliver packages at people’s doors.

Designing humans either into or out of computer systems is increasingly possible today. Further advances in both artificial intelligence and augmentation tools will confront roboticists and computer scientists with clear choices about the design of the systems in the workplace and, increasingly, in the surrounding world. We will soon be living—either comfortably or uncomfortably—with autonomous machines.

Brad Templeton, a software designer and consultant to the Google car project, has asserted, “A robot will be truly autonomous when you instruct it to go to work and it decides to go to the beach instead.”5 It is a wonderful turn of phrase, but he has conflated self-awareness with autonomy. Today, machines are beginning to act without meaningful human intervention, or at a level of independence that we can consider autonomous. This level of autonomy poses difficult questions for designers of intelligent machines. For the most part, however, engineers ignore the ethical issues posed by the use of computer technologies. Only occasionally does the community of artificial intelligence researchers sense a quiver of foreboding.

At the Humanoids 2013 conference in Atlanta, which focused on the design and application of robots that appear humanlike, Ronald Arkin, a Georgia Tech roboticist, made a passionate plea to the audience in his speech “How to NOT Build a Terminator.” He reminded the group that, in addition to the famous three laws, Isaac Asimov later added the fundamental “zeroth” law of robotics, which states, “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”6 Speaking to more than two hundred roboticists and AI experts from universities and corporations, Arkin challenged them to think more deeply about the consequences of automation. “We all know that [the DARPA Robotics Challenge] is motivated by urban seek-and-destroy,” he said sardonically, adding, “Oh no, I meant urban search-and-rescue.”

The line between robots as rescuers and robots as enforcers is already gray, if it exists at all. Arkin showed clips from sci-fi movies, including James Cameron’s 1984 film The Terminator. Each of the clips depicted evil robots performing tasks that DARPA has specified as part of its Robotics Challenge: clearing debris, opening doors, breaking through walls, climbing ladders and stairs, and riding in utility vehicles. Designers can exploit these capabilities either constructively or destructively, depending on their intent. The audience laughed nervously—but Arkin refused to let them off the hook. “I’m being facetious,” he said, “but I’m just trying to tell you that these kinds of technologies you are developing may have uses in places you may not have fully envisioned.” In the world of weapons design, the potential for unexpected consequences has long been a concern for what are described as “dual-use” technologies, like nuclear power, which can be used to produce both electricity and weapons. Now it is also increasingly true of robotics and artificial intelligence technologies, which are dual-use not just as weapons but also in their potential to either augment or replace humans. Today, we are still “in the loop”: machines that either replace or augment humans are the product of human designers, so the designers cannot easily absolve themselves of responsibility for the consequences of their inventions. “If you would like to create a Terminator, then I would contend: Keep doing what you are doing, because you are creating component technologies for such a device,” Arkin said. “There is a big world out there, and this world is listening to the consequences of what we are creating.”

The issues and complications of automation have extended beyond the technical community. In a little-noted, unclassified Pentagon report entitled “The Role of Autonomy in DoD Systems,”7 the authors pointed out the ethical quandaries involved in automating battle systems. The military itself is already struggling to negotiate the tension between autonomous systems, like drones, that promise both accuracy and cost efficiency, and the consequences of stepping ever closer to the line where humans are no longer in control of life-and-death decisions. Arkin has argued elsewhere that, unlike human soldiers, autonomous war-fighting robots would not feel threats to their personal safety, which could potentially reduce collateral damage and avoid war crimes. This question is part of a debate that dates back at least to the 1970s, when the air force generals who controlled the nation’s fleets of strategic bombers used the human-in-the-loop argument—that it was possible to recall a bomber and use human pilots to assess damage—in an attempt to justify the value of bomber aircraft in the face of more modern ballistic missiles.

But Arkin also posed a new set of ethical questions in his talk. What if we have moral robots but the enemy doesn’t? There is no easy answer to that question. Indeed, increasingly intelligent and automated weapons technologies have inspired the latest arms race. Adding inexpensive intelligence to weapons systems threatens to change the international balance of power between nations.

When Arkin concluded his talk at the stately Historic Academy of Medicine in Atlanta, Gill Pratt, the DARPA program manager who ran the agency’s Robotics Challenge, was one of the first to respond. He didn’t refute Arkin’s point. Instead, he acknowledged that robots are a “dual-use” technology. “It’s very easy to pick on robots that are funded by the Defense Department,” he said. “It’s very easy to pick on a robot that looks like the Terminator, but in fact with dual-use being everywhere, it really doesn’t matter. If you’re designing a robot for health care, for instance, the autonomy it needs is actually in excess of what you would need for a disaster response robot.”8 Advanced technologies have long posed questions about dual use. Now, artificial intelligence and machine autonomy have reframed the problem. Until now, dual-use technologies have explicitly required that humans make ethical decisions about their use. The specter of machine autonomy either places human ethical decision-making at a distance or removes it entirely.

In other fields, certain issues have forced scientists and technologists to consider the potential consequences of their work, and many of those scientists acted to protect humanity. In February of 1975, for example, Nobel laureate Paul Berg encouraged the elite of the then new field of biotechnology to meet at the Asilomar Conference Grounds in Pacific Grove, California. At the time, recombinant DNA—inserting new genes into the DNA of living organisms—was a fledgling development. It offered both the promise of dramatic advances in medicine, agriculture, and new materials and the horrifying possibility that scientists could unintentionally bring about the end of humanity by engineering a synthetic plague. For the scientists, the meeting led to an extraordinary resolution. The group recommended that molecular biologists refrain from certain kinds of experiments and embark on a period of self-regulation, pausing that research while they considered how to make it safe. To monitor the field, biotechnologists set up an independent committee at the National Institutes of Health to review research. After a little more than a decade, the NIH had gathered sufficient evidence from a wide array of experiments to conclude that the restrictions on research could be lifted. It was a singular example of how society might thoughtfully engage with the consequences of scientific advance.

Following in the footsteps of the biologists, a group of artificial intelligence researchers and roboticists also met at Asilomar, in February of 2009, to discuss the progress of AI after decades of failure. Eric Horvitz, the Microsoft AI researcher who was serving as president of the Association for the Advancement of Artificial Intelligence, called the meeting. During the previous five years, researchers in the field had begun discussing twin alarms. One came from Ray Kurzweil, who heralded the relatively near-term arrival of computer superintelligences. The other came from Bill Joy, a cofounder of Sun Microsystems, who offered a darker view in a Wired magazine article detailing a trio of threats from robotics, genetic engineering, and nanotechnology.9 Joy believed these technologies endangered human survival, and he saw no obvious solution.

The artificial intelligence researchers who met at Asilomar chose to act less cautiously than their predecessors in biotechnology. The group of computer science and robotics luminaries, including Sebastian Thrun, Andrew Ng, Manuela Veloso, and Oren Etzioni, who is now the director of Paul Allen’s Allen Institute for Artificial Intelligence, generally discounted both the possibility of superintelligences that would surpass humans and the possibility that artificial intelligence might spring spontaneously from the Internet. They agreed that robots capable of killing autonomously had already been developed. Yet when it emerged toward the end of 2009, the group’s report proved to be an anticlimax. The field of AI had not yet arrived at a moment of imminent threat. “The 1975 meeting took place amidst a recent moratorium on recombinant DNA research. In stark contrast to that situation, the context for the AAAI panel is a field that has shown relatively graceful, ongoing progress. Indeed, AI scientists openly refer to progress as being somewhat disappointing in its pace, given hopes and expectations over the years,”10 the authors wrote in a report summarizing the meeting.

Five years later, however, the question of machine autonomy emerged again. In 2013, when Google acquired DeepMind, a British artificial intelligence firm that specialized in machine learning, popular belief held that roboticists were very close to building completely autonomous robots. The tiny start-up had produced a demonstration that showed its software playing video games, in some cases better than human players. Reports of the acquisition were also accompanied by the claim that Google would set up an “ethics panel” because of concerns about potential uses and abuses of the technology. Shane Legg, one of the cofounders of DeepMind, acknowledged that the technology would ultimately have dark consequences for the human race. “Eventually, I think human extinction will probably occur, and technology will likely play a part in this.”11 For an artificial intelligence researcher who had just reaped hundreds of millions of dollars, it was an odd position to take. If someone believes that technology will likely evolve to destroy humankind, what could motivate them to continue developing that same technology?

At the end of 2014, the 2009 Asilomar AI meeting was reprised when a new group of AI researchers, funded by one of the founders of Skype, met in Puerto Rico to again consider how to make their field safe. Despite a new round of alarming statements about the dangers of AI from luminaries such as Elon Musk and Stephen Hawking, the attendees produced an open letter that notably fell short of the call to action that had emerged from the original 1975 Asilomar biotechnology meeting.

Given that DeepMind had been acquired by Google, Legg’s public philosophizing is particularly significant. Today, Google is the clearest example of the potential consequences of AI and IA. Founded on an algorithm that efficiently collected human knowledge and then returned it to humans as a powerful tool for finding information, Google is now engaged in building a robot empire. The company may well create machines that replace human workers such as drivers, delivery personnel, and electronics assemblers. Whether it will remain an “augmentation” company or become a predominantly AI-oriented organization is unclear.

The new concerns about the potential threat from AI and robotics evoke the issues that confronted the fictional Tyrell Corporation in the science-fiction movie Blade Runner, which raised the ethical questions posed by the design of intelligent machines. Early in the movie, Deckard, a police detective, confronts Rachael, an employee of the firm that makes robots, or replicants, and asks her whether an artificial owl is expensive. She suggests that he doesn’t believe the company’s work is of value. “Replicants are like any other machine,” he responds. “They’re either a benefit or a hazard. If they’re a benefit, it’s not my problem.”12

How long will it be before Google’s intelligent machines, based on technologies from DeepMind and Google’s robotics division, raise the same questions? Few movies have had the cultural impact of Blade Runner. It has been released in seven different versions, including a director’s cut, and a sequel is on the docket. It tells the story of a retired Los Angeles police detective in 2019 who is recalled to hunt down and kill a group of genetically engineered artificial beings known as replicants. Originally created to work off-planet, the replicants have returned to Earth illegally in an effort to force their designer to extend their artificially limited life spans. A modern-day Wizard of Oz, the film captured a technologically literate generation’s hopes and fears. From the Tin Man, who gains a heart and thus a measure of humanity, to the replicants who are so superior to their makers that Deckard is ordered to terminate them, humanity’s relationship with robots has become the defining question of the era.

These “intelligent” machines may never be intelligent in a human sense, or self-aware. That’s beside the point. Machine intelligence is improving quickly and approaching a level at which it will increasingly offer a compelling appearance of intelligence. When it opened in December 2013, the movie Her struck a chord, most likely because millions of people already interact with personal assistants such as Apple’s Siri. Her-like interactions have become commonplace. Increasingly, as computing moves beyond desktops and laptops and becomes embedded in everyday objects, we will expect those objects to communicate intelligently. In the years when he was designing Siri and the project was still hidden from the public eye, Tom Gruber referred to this trend as “intelligence at the interface.” He felt he had found a way to blend the competing worlds of AI and IA.

And indeed, the emergence of software-based intelligent assistants hints at a convergence between the disparate communities of AI researchers and human-computer interaction designers. Alan Kay, who conceived of the first modern personal computer, has said that in his early explorations of computer interfaces he was working roughly ten to fifteen years in the future, while Nicholas Negroponte, one of the first people to explore immersive media, virtual reality, and conversational interfaces, was working twenty-five to thirty years out. Like Negroponte, Kay asserts that the best computerized interfaces are the ones closest to theater, and that the best theater draws the audience into its world so completely that they feel part of it. That focus on interactive performance points directly toward systems that will function more as AI-based “colleagues” than as computerized tools.

How will these computer avatars transform society? Humans already spend a significant fraction of their waking hours either interacting with other humans through computers or interacting directly with humanlike machines, whether in fantasy and video games or in a plethora of computerized assistance systems that range from so-called FAQbots to Siri. We even consult search engines in the middle of everyday conversations.

Will these AI avatars be our slaves, our assistants, our colleagues, or some mixture of all three? Or, more ominously, will they become our masters? Considering robots and artificial intelligences in terms of social relationships may initially seem implausible. However, given that we tend to anthropomorphize our machines, we will undoubtedly develop social relationships with them as they become increasingly autonomous. Indeed, reflecting on human-robot relations is not so different from considering humans’ traditional relations with slaves, who have been dehumanized by their masters throughout history. Hegel explored the relationship between master and slave in The Phenomenology of Spirit, and his ideas about the “master-slave dialectic” have influenced thinkers ranging from Karl Marx to Martin Buber. At the heart of Hegel’s dialectic is the insight that both the master and the slave are dehumanized by their relationship.

Kay has effectively translated Hegel for the modern age. Today, a wide variety of companies are developing conversational computers like Siri, and Kay argues that, as a consequence, designers should aim to create programs that function as colleagues rather than servants. If we fail, history hints at a disturbing outcome. Kay worried that building intelligent “assistants” might only recapitulate the problem the Romans faced when they let their Greek slaves do their thinking for them: before long, those in power were unable to think independently.

Perhaps we have already begun to slip down a similar path. For example, there is growing evidence that reliance on GPS for directions and for correcting navigational errors hinders our ability to remember and reason spatially, skills that are more generally useful for survival.13 “When people ask me, ‘Are computers going to take over the world?’” Kay said, “for most people they already have, because they have ceded authority to them in so many different ways.”

That hints at a second great problem: the risk of ceding individual control over everyday decisions to a cluster of ever more sophisticated algorithms. Not long ago, Randy Komisar, a veteran Silicon Valley venture capitalist, sat in a meeting listening to someone describe a Google service called Google Now, the company’s Siri competitor. “What I realized was that people are dying to have an intelligence tell them what they should be doing,” he said. “What food they should be eating, what people they should be meeting, what parties they should be going to.” For today’s younger generation, the world has been turned upside down, he concluded. Rather than using computers to free themselves to think big thoughts, develop close relationships, and exercise their individuality, creativity, and freedom, young people were suddenly so starved for direction that they were willing to give up that responsibility to an artificial intelligence in the cloud. What started out as Internet technologies that made it possible for individuals to share preferences efficiently has rapidly transformed into a growing array of algorithms that increasingly dictate those preferences for them. Now the Internet seamlessly serves up life directions. They might be little things, like finding the best place nearby for Korean barbecue based on the Internet’s increasingly complete understanding of your individual wants and needs, or big things, like an Internet service arranging your marriage—not just the food, gifts, and flowers, but your partner, too.

The tension inherent in the AI and IA perspectives was a puzzle to me when I first realized that Engelbart and McCarthy had set out to invent computer technologies with radically different goals in mind. The two approaches represent both a dichotomy and a paradox, for if you augment a human with computing technology, you inevitably displace humans as well. At the same time, choosing one side or the other in the debate is an ethical choice, even if the choice isn’t black or white. Terry Winograd and Jonathan Grudin have separately described the rival communities of scientists and engineers that emerged from that early work, and both have explored the challenge of fusing the two contradictory approaches. In particular, in 2009 Winograd helped establish the Program on Liberation Technology at Stanford to find ways that computing technologies could improve governance, enfranchise the poor, support human rights, and foster economic development, along with a host of other aims.

Of course, there are limits to this technology. Winograd makes the case that whether computing technologies are deployed to extend human capabilities or to replace them is more a consequence of the particular economic system in which they are created and used than of anything inherent in the technologies themselves. In a capitalist economy, if artificial intelligence technologies improve to the point that they can replace new kinds of white-collar and professional workers, they will inevitably be used in that way. That lesson carries forward in the differing approaches of the software engineers, AI researchers, roboticists, and hackers who are designing these future systems. It should be obvious that Bill Joy’s warning that “the future doesn’t need us” is just one possible outcome. It is equally apparent that the transformation these technologies bring about doesn’t have to play out catastrophically.

A little over a century ago, Thorstein Veblen wrote an influential critique of the turn-of-the-century industrial world, The Engineers and the Price System. He argued that, because of the power and influence of industrial technology, political power would flow to engineers, who could parlay their deep knowledge of technology into control of the emerging industrial economy. It certainly didn’t work out that way. Veblen was speaking to the Progressive Era, looking for a middle ground between Marxism and capitalism. Perhaps his timing was off, but his basic point, as echoed a half century later at the dawn of the computer era by Norbert Wiener, may yet prove correct. Today, the engineers who are designing the artificial intelligence-based programs and robots will have tremendous influence over how we will use them. As computer systems are woven more deeply into the fabric of everyday life, the tension between augmentation and artificial intelligence has become increasingly salient.

What began as a paradox for me has a simple answer. The solution to the contradiction inherent in AI versus IA lies in the very human decisions of engineers and scientists like Bill Duvall, Tom Gruber, Adam Cheyer, Terry Winograd, and Gary Bradski, who all have intentionally chosen human-centered design.

At the dawn of the computing age, Wiener had a clear sense of the significance of the relationship between humans and their creations—smart machines. He recognized the benefits of automation in eliminating human drudgery, but he also worried that the same technology might subjugate humanity. The intervening decades have only sharpened the dichotomy he first identified.

This is about us, about humans and the kind of world we will create.

It’s not about the machines.