Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots - John Markoff (2015)
Chapter 3. A TOUGH YEAR FOR THE HUMAN RACE
“With these machines, we can make any consumer device in the world,” enthused Binne Visser, a Philips factory engineer who helped create a robot assembly line that disgorges an unending stream of electric shavers. His point was that they could just as well be smartphones, computers, or virtually anything that is made today by hand or machine.1
The Philips electric razor factory in Drachten, a three-hour train ride north from Amsterdam through pancake-flat Dutch farmland, offers a clear view of the endgame of factory robots: that “lights-out,” completely automated factories are already a reality, but so far only in limited circumstances. The Drachten plant feels from the outside like a slightly faded relic of an earlier era when Philips, which started out making lightbulbs and vacuum tubes, grew to be one of the world’s dominant consumer electronics brands. Having lost its edge to Asian upstarts in consumer products such as television sets, Philips remains one of the world’s leading makers of electric shavers and a range of other consumer products. Like many European and U.S. companies, it has based much of its manufacturing in Asia where labor is less expensive. A turning point came in 2012 when Philips scrapped a plan to move a high-end shaver assembly operation to China. Because of the falling prices of sensors, robots, and cameras and the increasing transportation costs to ship finished goods to markets outside Asia, Philips built an almost entirely automated assembly line of robot arms at the Drachten factory. Defeated in many consumer electronics categories, Philips decided to invest to maintain its edge in an eclectic array of precomputer home appliances.
The brightly lit single-story automated shaver factory is a modular mega-machine composed of 128 linked stations—each one a shining transparent cage connected to its siblings by a conveyor, resembling the glass-enclosed popcorn makers found in movie theaters. The manufacturing line itself is a vast Rube Goldberg-esque orchestra. Each of the 128 arms has a unique “end effector,” a specialized hand for performing the same operation over and over and over again at two-second intervals. One assembly every two seconds translates into 30 shavers a minute, 1,800 an hour, roughly 1.3 million a month, and an astounding 15,768,000 a year.
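The line's arithmetic is easy to check. A few lines of Python, assuming round-the-clock operation and a 30-day month, reproduce the figures:

```python
# Sanity-check the shaver line's throughput, assuming one finished
# shaver every two seconds, running around the clock.
CYCLE_SECONDS = 2

per_minute = 60 // CYCLE_SECONDS        # 30 shavers a minute
per_hour = per_minute * 60              # 1,800 an hour
per_month = per_hour * 24 * 30          # assuming a 30-day month
per_year = per_hour * 24 * 365          # 15,768,000 a year

print(per_minute, per_hour, per_month, per_year)
```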
A Philips shaver assembly plant in Drachten, Netherlands, that operates without assembly workers. (Photo courtesy of Philips)
The robots are remarkably dexterous, each specialized to repeat its single task endlessly. One robot arm simultaneously picks up two toothpick-thin two-inch pieces of wire, precisely bends the wires, and then delicately places their stripped ends into tiny holes in a circuit board. The wires themselves are picked from a parts feeder called a shake table. A human technician loads them into a bin that then spills them onto a brightly lit surface observed by a camera placed overhead. As if playing Pick Up Sticks, the robot arm grabs two wires simultaneously. Every so often, when the wires are jumbled, it shakes the table to separate them so it can see them better and then quickly grabs two more. Meanwhile, a handful of humans flutter around the edges of the shaver manufacturing line. A team of engineers dressed in blue lab coats keeps the system running by feeding it raw materials. A special “tiger team” is on call around the clock so no robot arm is ever down for more than two hours. Unlike human factory workers, the line never sleeps.
The factory is composed of American robot arms programmed by a team of European automation experts. Is it a harbinger of an era of manufacturing in which human factory line workers will vanish? Despite the fact that in China millions of workers labor to hand-assemble similar consumer gadgets, the Drachten plant is assembling devices more mechanically complex than a smartphone—entirely without human labor. In the automated factory mistakes are rare—the system is meant to be tolerant of small errors. At one station, toward the end of the line, small plastic pieces of the shaver case are snapped in place just beneath the rotary cutting head. One of the pieces, resembling a guitar pick, pops off onto the floor, like a Tiddlywink. The line doesn’t stutter. A down-the-line sensor recognizes that the part is missing and the shaver is shunted aside into a special rework area. The only humans directly working on the shaver factory line are eight women performing the last step in the process: quality inspection, not yet automated because the human ear is still the best instrument for determining that each shaver is functioning correctly.
Lights-out factories, defined as robotic manufacturing lines without humans, create a “good news, bad news” scenario. To minimize the total cost of goods, it makes sense to place factories either near sources of raw materials, labor, and energy or near the customers for the finished goods. If robots can build virtually any product more cheaply than human workers, then it is more economical for factories to be close to the markets they serve, rather than near sources of low-cost labor. Indeed, factories are already returning to the United States. A solar panel factory run by Flextronics has located in Milpitas, south of San Francisco, where a large banner proudly proclaims, BRINGING JOBS & MANUFACTURING BACK TO CALIFORNIA! Walking the Milpitas factory line, however, it quickly becomes clear that the facility is a testament to highly automated manufacturing rather than to job creation; fewer than ten workers actually handle products on an assembly line producing almost as many panels as hundreds of employees do in the company’s conventional factory in Asia. “At what point does the chainsaw replace Paul Bunyan?” a Flextronics executive asks. “There’s always a price point, and we’re very close to that point.”2
At the dawn of the Information Age, the pace and consequences of automation were very much on Norbert Wiener’s mind. During the summer of 1949, Wiener wrote a single-spaced three-page letter to Walter Reuther, the head of the United Auto Workers, to tell Reuther that he had turned down a consulting opportunity with General Electric to offer technical advice on designing automated machinery. GE had approached the MIT scientist twice in 1949, asking him both to lecture and to consult on the design of servomechanisms for industrial control applications. Servos use feedback to precisely control a component’s position, which was essential for the automated machinery poised to enter the factory after World War II. Wiener had refused both offers for what he called ethical reasons, even though he realized that others with similar knowledge but no sense of obligation to factory workers would likely accept.
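The feedback principle behind a servomechanism, the very idea Wiener formalized in cybernetics, can be sketched in a few lines: measure the position, compare it to the setpoint, and apply a correction proportional to the error. The gain and the simple plant model below are hypothetical, chosen purely for illustration, not drawn from any real GE design:

```python
# Minimal proportional feedback loop of the kind a servo closes.
# Gain and timestep are illustrative assumptions.
def servo_step(position, setpoint, gain=0.5):
    """Return the corrective motion for one control cycle."""
    error = setpoint - position
    return gain * error

position = 0.0
for _ in range(50):                      # fifty control cycles
    position += servo_step(position, setpoint=10.0)

# The error shrinks geometrically toward zero with each cycle,
# so position ends essentially at the 10.0 setpoint.
print(position)
```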
Wiener, deeply attuned to the potential dire “social consequences,” had already unsuccessfully attempted to contact other unions, and his frustration came through clearly in the Reuther letter. By late 1942 it was clear to Wiener that a computer could be programmed to run a factory, and he worried about the ensuing consequences of an “assembly line without human agents.”3 Software had not yet become a force that, in the words of browser pioneer Marc Andreessen, would “eat the world,” but Wiener portrayed the trajectory clearly to Reuther. “The detailed development of the machine for particular industrial purpose is a very skilled task, but not a mechanical task,” he wrote. “It is done by what is called ‘taping’ the machine in the proper way, much as present computing machines are taped.”4 Today we call it “programming,” and software animates the economy and virtually every aspect of modern society.
Writing to Reuther, Wiener foresaw an apocalypse: “This apparatus is extremely flexible, and susceptible to mass production, and will undoubtedly lead to the factory without employees; as for example, the automatic automobile assembly line,” he wrote. “In the hands of the present industrial set-up, the unemployment produced by such plants can only be disastrous.” Reuther responded by telegram: DEEPLY INTERESTED IN YOUR LETTER. WOULD LIKE TO DISCUSS IT WITH YOU AT EARLIEST OPPORTUNITY.
Reuther’s response was sent in August 1949 but it was not until March 1951 that the two men met in a Boston hotel.5 They sat together in the hotel restaurant and agreed to form a joint “Labor-Science-Education Association”6 to attempt to deflect the worst consequences of the impending automation era for the nation’s industrial workers. By the time Wiener met with Reuther he had already published The Human Use of Human Beings, a book that argued both for the potential benefits of automation and warned about the possibility of human subjugation by machines. He would become a sought-after national speaker during the first half of the 1950s, spreading his message of concern both about the possibility of runaway automation and the concept of robot weapons. After the meeting Wiener enthused that he had “found in Mr. Reuther and the men about him exactly that more universal union statesmanship which I had missed in my first sporadic attempts to make union contacts.”7
Wiener was not the only one to attempt to draw Reuther’s attention to the threat of automation. Several years after meeting with Wiener, Alfred Granakis, president of UAW Local 1250, also wrote to Reuther, warning him about the loss of jobs after he was confronted with new workplace automation technologies at a Ford Motor engine plant and foundry in Cleveland, Ohio. He described the plant as “today’s nearest approach to a fully automated factory in the automobile industry,” adding: “What is the economic solution to all this, Walter? I am greatly afraid of embracing an economic ‘Frankenstein’ that I helped create in its infancy. It is my opinion that troubled days lie ahead for Labor.”8
Wiener had broken with the scientific and technical establishment some years earlier. He expressed strong beliefs about ethics in science in a letter to the Atlantic Monthly titled “A Scientist Rebels,” published in December 1946, a year after he had suffered a crisis of conscience resulting from the bombing of Hiroshima and Nagasaki. The essay contained this response to a Boeing research scientist’s request for a technical analysis of a guided missile system during the Second World War: “The practical use of guided missiles can only be to kill foreign civilians indiscriminately, and it furnishes no protection whatever to civilians in this country.”9 The same letter raises the moral question of the dropping of the atomic bomb: “The interchange of ideas which is one of the great traditions of science must of course receive certain limitations when the scientist becomes an arbiter of life and death.”10
In January of 1947 he withdrew from participating in a symposium on calculating machinery at Harvard University in protest that the systems were to be used for “war purposes.” In the 1940s robots were still the stuff of science fiction and computers were in their infancy, so it is striking how clearly Wiener understood a technological impact that is only today playing out. In 1949, the New York Times invited Wiener to summarize his views about “what the ultimate machine age is likely to be,” in the words of its longtime Sunday editor, Lester Markel. Wiener accepted the invitation and wrote a draft of the article; the legendarily autocratic Markel was dissatisfied and asked him to rewrite it. He did. But through a distinctly pre-Internet series of fumbles and missed opportunities, neither version ever appeared at the time.
In August of 1949, according to Wiener’s papers at MIT, the Times asked him to resend the first draft of the article to be combined with the second draft. (It is unclear why the editors had misplaced the first draft.) “Could you send the first draft to me, and we’ll see whether we can combine the two into one story?” wrote an editor in the paper’s Sunday department, then separate from the daily paper. “I may be mistaken, but I think you lost some of your best material.” But by then Wiener was traveling in Mexico, and he responded: “I had assumed that the first version of my article was finished business. To get hold of the paper in my office at the Massachusetts Institute of Technology would involve considerable cross-correspondence and annoyance to several people. I therefore do not consider it a practical thing to do. Under the circumstances I think that it is best for me to abandon this undertaking.”
The following week the Times editor returned the second draft to Wiener, and it eventually ended up with his papers in MIT Libraries’ Archives and Special Collections, languishing there until December 2012, when it was discovered by Anders Fernstedt, an independent scholar researching the work of Karl Popper, Friedrich Hayek, and Ernst Gombrich, three Viennese-born thinkers active in London for most of the twentieth century.11 In the unpublished essay Wiener’s reservations were clear: “The tendency of these new machines is to replace human judgment on all levels but a fairly high one, rather than to replace human energy and power by machine energy and power. It is already clear that this new replacement will have a profound influence upon our lives,” he wrote.
Wiener went on to mention the emergence of factories that were “substantially without employees” and the rise of the importance of “taping.” He also presented more than a glimmer of the theoretical possibility and practical impact of machine learning: “The limitations of such a machine are simply those of an understanding of the objects to be attained, and of the potentialities of each stage of the processes by which they are to be attained, and of our power to make logically determinate combinations of those processes to achieve our ends. Roughly speaking, if we can do anything in a clear and intelligible way, we can do it by machine.”12
At the dawn of the computer age, Wiener could see and clearly articulate that automation had the potential of reducing the value of a “routine” factory employee to where “he is not worth hiring at any price,” and that as a result “we are in for an industrial revolution of unmitigated cruelty.”
Not only did he have early dark forebodings of the computer revolution, but he foresaw something else that was even more chilling: “If we move in the direction of making machines which learn and whose behavior is modified by experience, we must face the fact that every degree of independence we give the machine is a degree of possible defiance of our wishes. The genie in the bottle will not willingly go back in the bottle, nor have we any reason to expect them to be well disposed to us.”13
In the early 1950s Reuther and Wiener agreed on the idea of a “Labor-Science-Education Association,” but the partnership did not have an immediate impact, in part because of Wiener’s health issues and in part because Reuther represented a faction of the U.S. labor movement that viewed automation as unavoidable progress—the labor leader was intent on forging an economic bargain with management around the forces of technology: “In the final analysis, modern work processes had to be endured, offset by the reward of increased leisure and creative relaxation. In his embrace of automation and new technology, he often seemed to be wholly taken by the notion of efficiency as a desirable and essentially neutral condition.”14
Wiener’s warning would eventually light a spark—but not during the 1950s, a Republican decade when the labor movement did not have many friends in government. Only after Kennedy’s election in 1960 and his succession by Lyndon Johnson would the early partnership between Wiener and Reuther lead to one of the few serious efforts on the part of the U.S. government to grapple with automation, when in August of 1964 Johnson established a blue-ribbon panel to explore the impact of technology on the economy.
Pressure came in part from the Left in the form of an open letter to the president from a group that called itself the Ad Hoc Committee on the Triple Revolution, including socialist author Michael Harrington, Students for a Democratic Society cofounder Tom Hayden, chemist Linus Pauling, Swedish economist Gunnar Myrdal, pacifist A. J. Muste, economic historian Robert Heilbroner, social critic Irving Howe, civil rights activist Bayard Rustin, and Socialist Party presidential candidate Norman Thomas, among many others.
The first revolution they noted was the emergence of the “Cybernation”: “A new era of production has begun. Its principles of organization are as different from those of the industrial era as those of the industrial era were different from the agricultural. The cybernation revolution has been brought about by the combination of the computer and the automated self-regulating machine. This results in a system of almost unlimited productive capacity which requires progressively less human labor.”15 The resulting National Commission on Technology, Automation, and Economic Progress would include a remarkable group ranging from Reuther, Thomas J. Watson Jr. of IBM, and Edwin Land of Polaroid, to Robert Solow, the MIT economist, and Daniel Bell, the Columbia sociologist.
When the 115-page report appeared at the end of 1966 it was accompanied by 1,787 pages of appendices including special reports by outside experts. The 232-page analysis of the impact of computing by Paul Armer of the RAND Corporation did a remarkable job of predicting the impact of information technology. Indeed, the headings in the report have proven true over the years: “Computers Are Becoming Faster, Smaller, and Less Expensive”; “Computing Power Will Become Available Much the Same as Electricity and Telephone Service Are Today”; “Information Itself Will Become Inexpensive and Readily Available”; “Computers Will Become Easier to Use”; “Computers Will Be Used to Process Pictorial Images and Graphic Information”; and “Computers Will Be Used to Process Language,” among others. Yet the consensus that emerged from the report would be the traditional Keynesian view that “technology eliminated jobs, not work.” The report concluded that technological displacement would be a temporary but necessary stepping-stone for economic growth.
The debate over the future of technological unemployment dissipated as the economy heated up, in part as a consequence of the Vietnam War, and the civil strife of the late 1960s further sidelined the question. A decade and a half after he had issued his first warnings about the consequences of automated machines, Wiener turned his thoughts to religion and technology while remaining a committed humanist. In his final book, God & Golem, Inc., he explored the future human relationship with machines through the prism of religion. Invoking the parable of the golem, he pointed out that despite best intentions, humans are incapable of understanding the ultimate consequences of their inventions.16
In his 1980 dual biography of John von Neumann and Wiener, Steven Heims notes that in the late 1960s he had asked a range of mathematicians and scientists about Wiener’s philosophy of technology. The general reaction of the scientists was as follows: “Wiener was a great mathematician, but he was also eccentric. When he began talking about society and the responsibility of scientists, a topic outside of his area of expertise, well, I just couldn’t take him seriously.”17
Heims concludes that Wiener’s social philosophy hit a nerve with the scientific community. If scientists acknowledged the significance of Wiener’s ideas, they would have to reexamine their deeply held preconceived notions about personal responsibility, something they were not eager to do. “Man makes man in his own image,” Wiener notes in God & Golem, Inc. “This seems to be the echo or the prototype of the act of creation, by which God is supposed to have made man in His image. Can something similar occur in the less complicated (and perhaps more understandable) case of the nonliving systems that we call machines?”18
Shortly before his death in 1964, Wiener was asked by U.S. News & World Report: “Dr. Wiener, is there any danger that machines—that is, computers—will someday get the upper hand over men?” His answer was: “There is, definitely, that danger if we don’t take a realistic attitude. The danger is essentially intellectual laziness. Some people have been so bamboozled by the word ‘machine’ that they don’t realize what can be done and what cannot be done with machines—and what can be left, and what cannot be left to the human beings.”19
Only now, six and a half decades after Wiener wrote Cybernetics in 1948, is the machine autonomy question becoming more than hypothetical. The Pentagon has begun to struggle with the consequences of a new generation of “brilliant” weapons,20 while philosophers grapple with the “trolley problem” in trying to assign moral responsibility for self-driving cars. Over the next decade the consequences of creating autonomous machines will appear more frequently as manufacturing, logistics, transportation, education, health care, and communications are increasingly directed and controlled by learning algorithms rather than humans.
Despite Wiener’s early efforts to play a technological Paul Revere, after the automation debates of the 1950s and 1960s tailed off, fears of unemployment caused by technology would vanish from the public consciousness until sometime around 2011. Mainstream economists generally agreed on what they described as the “Luddite fallacy.” As early as 1930, John Maynard Keynes had articulated the general view on the broad impact of new technology: “We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come—namely, technological unemployment. This means unemployment due to our discovery of means of economizing the use of labor outrunning the pace at which we can find new uses for labor. But this is only a temporary phase of maladjustment.”21
Keynes was early to point out that technology was a powerful generator of new categories of employment. Yet what he referred to as a “temporary phase” is certainly relative. After all, he also famously noted that in “the long run” we are all dead.
In 1995, social critic Jeremy Rifkin wrote The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era. The decline of the agricultural economy and the rapid growth of new industrial employment had been a stunning substantiation of Keynes’s substitution argument, but Rifkin argued that the impact of new information technologies would be qualitatively different from that of previous waves of industrial automation. He began by noting that in 1995 unemployment globally had risen to its highest level since the depression of the 1930s and that globally eight hundred million people were unemployed or underemployed. “The restructuring of production practices and the permanent replacement of machines for human laborers has begun to take a tragic toll on the lives of millions of workers,” he wrote.22
The challenge to his thesis was that employment in the United States actually grew from 115 million to 137 million during the decade following the publication of his book. That meant the workforce grew by roughly 19 percent while the nation’s population grew by only 11 percent. Moreover, key economic indicators such as the labor force participation rate, the employment-to-population ratio, and the unemployment rate showed no evidence of technological unemployment. The situation, then, was more nuanced than the impending black-and-white labor calamity Rifkin had forecast. For example, from the 1970s onward, the outsourcing of jobs internationally, as multinational corporations fled to low-cost manufacturing regions and used telecommunications networks to relocate white-collar jobs, had a far more significant impact on domestic employment than the deployment of automation technologies. And so Rifkin’s thesis, substantially discredited, soon faded from the debate.
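The growth figures cited above are simple to verify:

```python
# Verify the workforce growth rate cited in the text: employment
# rose from 115 million to 137 million over the decade.
def pct_growth(start, end):
    """Percentage growth from start to end."""
    return (end - start) / start * 100

workforce = pct_growth(115, 137)    # millions of workers
print(round(workforce, 1))          # roughly 19 percent
```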
In the wake of the 2008 recession, there were indications of a new and broader technology transformation. White-collar employment had been the engine of growth for the U.S. economy since the end of World War II, but now cracks began to appear. What were once solid white-collar jobs began disappearing. Routinized white-collar work was now clearly at risk as the economy began to recover in 2009 in the form of what was described as a “jobless recovery.” Indications were that knowledge workers’ jobs higher up in the economic pyramid were for the first time vulnerable. Economists such as MIT’s David Autor began to pick apart the specifics of the changing labor force and put forward the idea that the U.S. economy was being “hollowed out.” It might continue to grow at the bottom and the top, but middle-class jobs, essential to a modern democracy, were evaporating, he argued.
There was mounting evidence that the impact of technology was not just a hollowing out but a “dumbing down” of the workforce. In some cases specific high-prestige professions began to show the impact of automation based on the falling costs of information and communications technologies, such as new global computer networks. Moreover, for the first time artificial intelligence software was beginning to have a meaningful impact on certain highly skilled jobs, like those of $400-per-hour lawyers and $175-per-hour paralegals. As the field of AI once again gathered momentum beginning in 2000, new applications of artificial intelligence techniques based on natural language understanding emerged, such as “e-discovery,” the automated assessment of the relevance of the legal documents that must be disclosed in litigation. The software would soon go beyond just finding specific keywords in email. E-discovery software evolved quickly, so that it became possible to scan millions of documents electronically, recognize underlying concepts, and even find so-called smoking guns—that is, evidence of illegal or improper behavior.
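The keyword-matching baseline that e-discovery tools soon outgrew can be sketched in a few lines of Python; the documents and search terms here are invented for illustration. Concept-level systems went further, grouping documents by shared vocabulary and meaning rather than literal word matches:

```python
# Toy keyword filter: the crude baseline that commercial e-discovery
# systems quickly surpassed. All documents and terms are invented.
documents = {
    "msg1": "please shred the audit files before friday",
    "msg2": "lunch meeting moved to noon",
    "msg3": "the audit schedule is attached",
}

def keyword_hits(docs, terms):
    """Return ids of documents containing any of the given terms."""
    return sorted(
        doc_id for doc_id, text in docs.items()
        if any(term in text.split() for term in terms)
    )

print(keyword_hits(documents, ["audit"]))   # → ['msg1', 'msg3']
```

Note what the literal match misses: a reviewer searching for “audit” would never surface a smoking gun phrased as “the review of our books,” which is precisely the gap concept-recognition software was built to close.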
In part, the software had become essential as litigation against corporations routinely involved the review of millions of documents for relevance. Comparative studies showed that the machines could do as well or better than humans in analyzing and classifying documents. “From a legal staffing viewpoint, it means that a lot of people who used to be allocated to conduct document review are no longer able to be billed out,” said Bill Herr, who as a lawyer at a major chemical company used to muster auditoriums of lawyers to read documents and correspondence for weeks on end. “People get bored, people get headaches. Computers don’t.”23
Observing the impact of technologies such as e-discovery software, which was dramatically reducing the demand for document-review lawyers, led Martin Ford, an independent Silicon Valley engineer who owned a small software firm, to self-publish The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future at the end of 2009. Ford had come to believe that the impact of information technology on the job market was moving much more quickly than was generally understood. With a professional understanding of software technologies, he was also deeply pessimistic. For a while he stood alone, much in the tradition of Rifkin’s 1995 The End of Work, but as the recession dragged on and mainstream economists continued to have trouble explaining the absence of job growth, he was soon joined by an insurgency of technologists and economists warning that technological disruption was arriving in full force.
In 2011, two MIT Sloan School scholars, Erik Brynjolfsson and Andrew McAfee, self-published an extended essay titled “Race Against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy.” Their basic theme was as follows: “Digital technologies change rapidly, but organizations and skills aren’t keeping pace. As a result, millions of people are being left behind. Their incomes and jobs are being destroyed, leaving them worse off … than before the digital revolution.”24 The “Race Against the Machine” essay was passed around samizdat-style over the Internet and was instrumental in reigniting the debate over automation. The crux of the renewed debate was the claim that this time—because of the acceleration of computing technologies in the workplace—there would be no Keynesian solution in which the economy created new job categories.
Like Martin Ford, Brynjolfsson and McAfee chronicled a growing array of technological applications that were redefining the workplace, or seemed poised on the brink of doing so. Of the wave of new critiques, David Autor’s thesis was perhaps the most compelling. However, even he began to hedge in 2014, based on a report that indicated a growing “deskilling” of the U.S. workforce and a declining demand for jobs that required cognitive skills. He worried that the effect was creating a downward ramp. The consequence, argued Paul Beaudry, David A. Green, and Ben Sand in a National Bureau of Economic Research (NBER) working paper, was that higher-skilled workers tended to push lower-skilled workers out of the workforce.25 Although they had no clear evidence directly related to the deployment of particular types of technologies, their analysis of the consequences for the top of the workforce was chilling. They reported: “Many researchers have documented a strong, ongoing increase in the demand for skills in the decades leading up to 2000. In this paper, we document a decline in that demand in the years since 2000, even as the supply of high education workers continues to grow. We go on to show that, in response to this demand reversal, high-skilled workers have moved down the occupational ladder and have begun to perform jobs traditionally performed by lower-skilled workers.”26 Yet despite fears of a “job apocalypse” based on machines that can see, hear, speak, and touch, once again the workforce has not behaved as if there will be a complete collapse precipitated by technological advance in the immediate future. Indeed, in the decade from 2003 to 2013, the size of the U.S. workforce increased by more than 5 percent, from 131.4 million to 138.3 million—although, to be sure, this was a period during which the population grew by more than 9 percent.
If not complete collapse, the slowing growth rate suggested a more turbulent and complex reality. One possibility is that rather than a pure deskilling, the changes observed may represent a broader “skill mismatch,” an interpretation that is more consistent with Keynesian expectations. For example, a recent McKinsey report on the future of work showed that between 2001 and 2009, jobs related to transactions and production both declined, but more than 4.8 million white-collar jobs were created relating to interactions and problem-solving.27 What is clear is that both blue-collar and white-collar jobs involving routinized tasks are at risk. The Financial Times reported in 2013 that between 2007 and 2012 the U.S. workforce gained 387,000 managers while losing almost two million clerical jobs.28 This is an artifact of what is popularly described as the Web 2.0 era of the Internet. The second generation of commercial Internet applications brought the emergence of a series of software protocols and product suites that simplified the integration of business functions. Companies such as IBM, HP, SAP, PeopleSoft, and Oracle helped corporations relatively quickly automate repetitive business functions. The consequence has been a dramatic loss of clerical jobs.
However, even within the world of clerical labor there are subtleties that suggest that predictions of automation and job destruction across the board are unlikely to prove valid. The case of bank tellers and the advent of automated teller machines is a particularly good example of the complex relationship between automation technologies, computer networks, and workforce dynamics. In 2011, while discussing the economy, Barack Obama used this same example: “There are some structural issues with our economy where a lot of businesses have learned to become much more efficient with a lot fewer workers. You see it when you go to a bank and you use an ATM; you don’t go to a bank teller. Or you go to the airport, and you’re using a kiosk instead of checking in at the gate.”29
This touched off a political kerfuffle about the impact of automation. The reality is that despite the rise of ATMs, bank tellers have not gone away. In 2004 Charles Fishman reported in Fast Company that in 1985, relatively early in the deployment of ATMs, there were about 60,000 ATMs and 485,000 bank tellers; by 2002 those numbers had increased to 352,000 ATMs and 527,000 bank tellers. In 2011 the Economist cited 600,500 bank tellers in 2008, while the Bureau of Labor Statistics was projecting that number would grow to 638,000 by 2018. Furthermore, the Economist pointed out that there were an additional 152,900 “computer, automated teller, and office machine repairers” in 2008.30 Focusing on ATMs in isolation doesn’t begin to touch the complexity of the way in which automated systems are weaving their way into the economy.
Bureau of Labor Statistics data reveal that the real transformation has been in the “back office,” which in 1972 made up 70 percent of the banking workforce: “First, the automation of a major customer service task reduced the number of employees per location to 75% of what it was. Second, the [ATM] machines did not replace the highly visible customer-facing bank tellers, but instead eliminated thousands of less-visible clerical jobs.”31 The impact of back-office automation in banking is difficult to estimate precisely, because the BLS changed the way it recorded clerk jobs in banking in 1982. However, it is indisputable that banking clerks’ jobs have continued to vanish.
Looking forward, the consequences of new computing technology for bank tellers might anticipate the impact of driverless delivery vehicles. Even if the technology can be perfected—and that is still to be determined, because delivery involves complex and diverse contact with human business and residential customers—the “last mile” delivery personnel will be hard to replace.
Despite the challenges of separating the impact of the recession from the implementation of new technologies, increasingly the connection between new automation technologies and rapid economic change has been used to imply that a collapse of the U.S. workforce—or at least a prolonged period of dislocation—might be in the offing. Brynjolfsson and McAfee argue for the possibility in a much expanded book-length version of “Race Against the Machine,” entitled The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. Similar sentiments are offered by Jaron Lanier, a well-known computer scientist now at Microsoft Research, in the book Who Owns the Future? Both books draw a direct link between the rise of Instagram, the Internet photo-sharing service acquired by Facebook for $1 billion in 2012, and the decline of Kodak, the iconic photographic firm that declared bankruptcy that year. “A team of just fifteen people at Instagram created a simple app that over 130 million customers use to share some sixteen billion photos (and counting),” wrote Brynjolfsson and McAfee. “But companies like Instagram and Facebook employ a tiny fraction of the people that were needed at Kodak. Nonetheless, Facebook has a market value several times greater than Kodak ever did and has created at least seven billionaires so far, each of whom has a net worth ten times greater than [Kodak founder] George Eastman did.”32
Lanier makes the same point about Kodak’s woes even more directly: “They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography has become Instagram. When Instagram was sold to Facebook for a billion dollars in 2012, it employed only thirteen people. Where did all those jobs disappear to? And what happened to the wealth that those middle-class jobs created?”33
The flaw in their arguments is that they mask the actual jobs equation and ignore the reality of Kodak’s financial turmoil. First, even if Instagram did actually kill Kodak—it didn’t—the jobs equation is much more complex than the cited 13 versus 145,000 disparity. Services like Instagram didn’t spring up in isolation, but were made possible after the Internet had reached a level of maturity that had by then created millions of mostly high-quality new jobs. That point was made clearly by Tim O’Reilly, the book publisher and conference organizer: “Think about it for a minute. Was it really Instagram that replaced Kodak? Wasn’t it actually Apple, Samsung, and the other smartphone makers who have replaced the camera? And aren’t there network providers, data centers, and equipment suppliers who provide the replacement for the film that Kodak once sold? Apple has 72,000 employees (up from 10,000 in 2002). Samsung has 270,000 employees. Comcast has 126,000. And so on.”34 And even O’Reilly’s point doesn’t begin to capture the positive economic impact of the Internet. A 2011 McKinsey study reported that globally the Internet created 2.6 new jobs for every job lost, and that it had been responsible for 21 percent of GDP growth in the five previous years in developed countries.35 The other challenge for the Kodak versus Instagram argument is that while Kodak suffered during the shift to digital technologies, its archrival FujiFilm somehow managed to prosper through the transition to digital.36
The reason for Kodak’s decline was more complex than “they missed digital” or “they failed to buy (or invent) Instagram.” The problems included scale, age, and abruptness. The company had a massive burden of retirees and an internal culture that lost talent and could not attract more. It proved to be a perfect storm. Kodak tried to enter pharmaceuticals in a big way but failed, as did its effort to enter the medical imaging business.
The new anxiety about AI-based automation and the resulting job loss may eventually prove well founded, but it is just as likely that those who are alarmed have in fact just latched onto the right backward-facing snapshots. If the equation is framed in terms of artificial intelligence-oriented technologies versus those oriented toward augmenting humans, there is hope that humans still retain an unbounded ability to both entertain and employ themselves doing something marketable and useful.
If the humans are wrong, however, 2045 could be a tough year for the human race.
Or it could mark the arrival of a technological paradise.
The year 2045 is when Ray Kurzweil predicts humans will transcend biology, and implicitly, one would presume, destiny.37
Kurzweil, the serial artificial intelligence entrepreneur and author who joined Google as a director of engineering in 2012 to develop some of his ideas for building an artificial “mind,” represents a community of many of Silicon Valley’s best and brightest technologists. They have been inspired by the ideas of computer scientist and science-fiction author Vernor Vinge about the inevitability of a “technological singularity” that would mark the point in time at which machine intelligence will surpass human intelligence. When he first wrote about the idea of the singularity in 1993, Vinge framed a relatively wide span of years—between 2005 and 2030—during which computers might become “awake” and superhuman.38
The singularity movement depends on the inevitability of mutually reinforcing exponential improvements in a variety of information-based technologies ranging from processing power to storage. In one sense it is the ultimate religious belief in the power of technology-driven exponential curves, an idea that has been explored by Robert Geraci in Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality. There he finds fascinating sociological parallels between singularity thinking and a variety of messianic religious traditions.39
The singularity hypothesis also builds on the emergent AI research pioneered by Rodney Brooks, who first developed a robotics approach based on building complex systems out of collections of simpler parts. Both Kurzweil in How to Create a Mind: The Secret of Human Thought Revealed and Jeff Hawkins in his earlier On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines attempt to make the case that because the simple biological “algorithms” that are the basis for human intelligence have been discovered, it is largely a matter of “scaling up” to engineer intelligent machines. These ideas have been tremendously controversial and have been criticized by neuroscientists, but are worth mentioning here because they are an underlying argument in the new automation debate. What is most striking today is the extreme range of opinions about the future of the workforce emerging from different interpretations of the same data.
Moshe Vardi is a Rice University computer scientist who serves as editor-in-chief of the Communications of the Association for Computing Machinery. In 2012 he began to argue publicly that the rate of acceleration in AI was now so rapid that all human labor would become obsolete within just over three decades. In an October 2012 Atlantic essay, “The Consequences of Machine Intelligence,”40 Vardi took a position that is becoming increasingly representative of the AI research community: “The AI Revolution, however, is different, I believe, than the Industrial Revolution. In the 19th century machines competed with human brawn. Now machines are competing with human brain. Robots combine brain and brawn. We are facing the prospect of being completely out-competed by our own creations.”41
Vardi believes that the areas where new job growth is robust—for example, the Web search economy, which has created new categories of workers performing tasks like search engine optimization, or SEO—are inherently vulnerable in the very near term. “If I look at search engine optimization, yes, right now they are creating jobs in doing this,” he said. “But what is it? It is learning how search engines actually work and then applying this to the design of Web pages. You could say that is a machine-learning problem. Maybe right now we need humans, but these guys [software automation designers] are making progress.”42
The assumption of many like Vardi is that a market economy will not protect a human labor force from the effects of automation technologies. Like many of the “Singularitarians,” he points to a portfolio of social engineering options for softening the impact. Brynjolfsson and McAfee in The Second Machine Age sketch out a broad set of policy options that have the flavor of a new New Deal, with examples like “teach the children well,” “support our scientists,” “upgrade infrastructure.” Others like Harvard Business School professor Clayton Christensen have argued for focusing on technologies that create rather than destroy jobs (a very clear IA versus AI position).
At the same time, while many who believe in accelerating change agonize about its potential impact, others have a more optimistic perspective. In a series of reports issued beginning in 2013, the International Federation of Robotics (IFR), established in 1987 with headquarters in Frankfurt, Germany, self-servingly argued that manufacturing robots actually increased economic activity and therefore, instead of causing unemployment, both directly and indirectly increased the total number of human jobs. One February 2013 study claimed the robotics industry would directly and indirectly create 1.9 million to 3.5 million jobs globally by 2020.43 A revised report the following year argued that for every robot deployed, 3.6 jobs were created.
But what if the Singularitarians are wrong? In the spring of 2012 Robert J. Gordon, a self-described “grumpy” Northwestern University economist, rained on the Silicon Valley “innovation creates jobs and progress” parade by noting that the claims for gains did not show up in conventional productivity figures. In a widely cited National Bureau of Economic Research white paper in 2012, he made a series of points contending that the productivity bubble in the twentieth century was a one-time event. He also noted that the automation technologies cited by those he would later describe as “techno-optimists” had not had the same kind of productivity impact as earlier nineteenth-century industrial innovations. “The computer and Internet revolution (IR3) began around 1960 and reached its climax in the dot-com era of the late 1990s, but its main impact on productivity has withered away in the past eight years,” he wrote. “Many of the inventions that replaced tedious and repetitive clerical labour with computers happened a long time ago, in the 1970s and 1980s. Invention since 2000 has centered on entertainment and communication devices that are smaller, smarter, and more capable, but do not fundamentally change labour productivity or the standard of living in the way that electric light, motor cars, or indoor plumbing changed it.”44
In one sense it was a devastating critique of the Silicon Valley faith in “trickle down” from exponential advances in integrated circuits, for if the techno-optimists were correct, the impact of new information technology should have resulted in a dramatic explosion of new productivity, particularly after the deployment of the Internet. Gordon pointed out that unlike the earlier industrial revolutions, there has not been a comparable productivity advance tied to the computing revolution. “They remind us Moore’s Law predicts endless exponential growth of the performance capability of computer chips, without recognizing that the translation from Moore’s Law to the performance-price behavior of ICT equipment peaked in 1998 and has declined ever since,” he noted in a 2014 rejoinder to his initial paper.45
Gordon squared off with his critics, most notably with MIT economist Erik Brynjolfsson, at the TED Conference in the spring of 2013. In a debate moderated by TED host Chris Anderson, the two jousted over the future impact of robotics and whether the supposed exponentials would continue or were rather the peak of an “S curve” with a decline on the way.46 The techno-optimists believe that a lag between invention and adoption of technology simply delays the impact of productivity gains and even though exponentials inevitably taper off, they spawn successor inventions—for example the vacuum tube was followed by the transistor, which in turn was followed by the integrated circuit.
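The crux of the Gordon-Brynjolfsson dispute can be illustrated numerically: an exponential curve and a logistic “S curve” are nearly indistinguishable early on, and only diverge as the S curve approaches saturation. The sketch below is purely illustrative, with made-up rate and capacity parameters rather than a model of any particular technology:

```python
import math

def exponential(t, rate=0.5):
    # Pure exponential growth: the techno-optimist extrapolation.
    return math.exp(rate * t)

def s_curve(t, rate=0.5, capacity=100.0):
    # Logistic growth starting at 1.0: tracks the exponential early,
    # then flattens as it approaches its capacity ceiling.
    return capacity / (1.0 + (capacity - 1.0) * math.exp(-rate * t))

# Early on the two curves nearly coincide; much later they diverge wildly.
for t in (0, 1, 2, 10, 20):
    print(t, round(exponential(t), 2), round(s_curve(t), 2))
```

Because the curves agree so closely at small values of t, the same early data can support both readings of the future, which is why the debate turns on interpretation rather than measurement.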
Gordon, however, has remained a consistent thorn in the side of the Singularitarians. In a Wall Street Journal column, he asserted that there are actually relatively few productivity opportunities in driverless cars. Moreover, he argued, they will not have a dramatic impact on safety either—auto fatalities per miles traveled have already declined by a factor of ten since 1950, making future improvements less significant.47 He also cast a skeptical eye on the notion that a new generation of mobile robots would make inroads into both the manufacturing and service sectors of the economy: “This lack of multitasking ability is dismissed by the robot enthusiasts—just wait, it is coming. Soon our robots will not only be able to win at Jeopardy! but also will be able to check in your bags at the skycap station at the airport, thus displacing the skycaps. But the physical tasks that humans can do are unlikely to be replaced in the next several decades by robots. Surely multiple-function robots will be developed, but it will be a long and gradual process before robots outside of the manufacturing and wholesaling sectors become a significant factor in replacing human jobs in the service or construction sectors.”48
His skepticism unleashed a torrent of criticism, but he has refused to back down. His response to his critics is, in effect, “Be careful what you wish for!” Gordon has also pointed out that Norbert Wiener may have had the most prescient insight into the potential impact of the “Third Industrial Revolution” (IR3), of computing and the Internet beginning in about 1960, when he argued that automation for automation’s sake would have unpredictable and quite possibly negative consequences.
The productivity debate has continued unabated. It has recently become fashionable for technologists and economists to argue that the traditional productivity benchmarks are no longer appropriate for measuring an increasingly digitized economy in which information is freely shared. How, they ask, do you measure the economic value of a resource like Wikipedia? If the Singularitarians are right, however, the transformation in the form of an unparalleled economic crisis as human labor becomes surplus should be obvious soon. Indeed, the outcome might be quite gloomy; there will be fewer and fewer places for humans in the resulting economy.
That has certainly not happened yet in the industrialized world. However, one intriguing shift that suggests there are limits to automation was the recent decision by Toyota to systematically put working humans back into the manufacturing process. Toyota has been a global leader in mass-scale quality manufacturing and automation technologies, guided by the corporate philosophy of kaizen (Japanese for “good change”), or continuous improvement. After pushing its automation processes toward lights-out manufacturing, the company realized that automated factories do not improve themselves. Toyota once employed extraordinary craftsmen known as kami-sama, or “gods,” who had the ability to make anything, according to Toyota president Akio Toyoda.49 Those craftsmen also had the human capacity to act creatively and thus improve the manufacturing process. Now, to add flexibility and creativity back into its factories, Toyota has chosen to restore a hundred “manual-intensive” workspaces.
The restoration of the Toyota gods is evocative of Stewart Brand’s opening line to the 1968 Whole Earth Catalog: “We are as gods and might as well get good at it.” Brand later acknowledged that he had borrowed the concept from British anthropologist Edmund Leach, who wrote, also in 1968: “Men have become like gods. Isn’t it about time that we understood our divinity? Science offers us total mastery over our environment and over our destiny, yet instead of rejoicing we feel deeply afraid. Why should this be? How might these fears be resolved?”50
Underlying both the acrimonious productivity debate and Toyota’s rebalancing of craft and automation is the deeper question about the nature of the relationship between humans and smart machines. The Toyota shift toward a more cooperative relationship between human and robot might alternatively suggest a new focus on technology for augmenting humans rather than displacing them. Singularitarians, however, argue that such human-machine partnerships are simply an interim stage during which human knowledge is transferred and at some point creativity will be transferred to or will even arise on its own in some future generation of brilliant machines. They point to small developments in the field of machine learning that suggest that computers will exhibit humanlike learning skills at some point in the not-too-distant future. In 2014, for example, Google paid $650 million to acquire DeepMind Technologies, a small start-up with no commercial products that had demonstrated machine-learning algorithms able to play video games, in some cases better than humans. When the acquisition was first reported, it was rumored that, because of the power and implications of the technology, Google would set up an “ethics board” to evaluate any unspecified “advances.”51 It has remained unclear whether such oversight will be substantial or whether it was just a publicity stunt to hype the acquisition and justify its price.
It is undeniable that AI and machine-learning algorithms have already had world-transforming application in areas as diverse as science, manufacturing, and entertainment. Examples range from machine vision and pattern recognition essential in improving quality in semiconductor design and so-called rational drug discovery algorithms, which systematize the creation of new pharmaceuticals, to government surveillance and social media companies whose business model is invading privacy for profit. The optimists hope that potential abuses will be minimized if the applications remain human-focused rather than algorithm-centric. The reality is that, until now, Silicon Valley has not had a track record that is morally superior to any earlier industries. It will be truly remarkable if any Silicon Valley company actually rejects a profitable technology for ethical reasons.
Setting aside the philosophical discussion about self-aware machines, and in spite of Gordon’s pessimism about productivity increases, it is clearly becoming increasingly possible and “rational” to design humans out of systems for both performance and cost reasons. Google, which can alternatively be seen as either an IA or AI company, seems to be engaged in an internal tug-of-war over this dichotomy. The original PageRank algorithm that the company is based on can perhaps be construed as the most powerful example in the history of human augmentation. The algorithm systematically mined human decisions about the value of information and pooled and ranked those decisions to prioritize Web search results. While some have chosen to criticize this as a systematic way to siphon intellectual value from vast numbers of unwitting humans, there is clearly an unstated social contract between user and company. Google mines the wealth of human knowledge and returns it to society, albeit with a monetization “catch.” The Google search dialog box has become the world’s most powerful information monopoly.
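The pooling-and-ranking idea behind PageRank can be made concrete with a toy power-iteration sketch. This is a minimal illustration of the published recurrence, not Google’s production system; the damping factor is the conventional 0.85, and the three-page link graph in the comments is invented for the example:

```python
import numpy as np

def pagerank(adjacency, damping=0.85, iterations=100):
    """Toy power-iteration PageRank.

    adjacency[i][j] = 1 if page i links to page j. Each link is treated
    as a human "vote" for the target page; iterating pools those votes
    into a stationary ranking over all pages.
    """
    n = len(adjacency)
    a = np.array(adjacency, dtype=float)
    # Normalize rows so each page splits its vote among its outlinks.
    out_degree = a.sum(axis=1, keepdims=True)
    out_degree[out_degree == 0] = 1.0  # avoid dividing by zero for dangling pages
    transition = a / out_degree
    rank = np.full(n, 1.0 / n)  # start from a uniform distribution
    for _ in range(iterations):
        # With probability (1 - damping) a surfer jumps to a random page;
        # otherwise she follows an outgoing link from her current page.
        rank = (1 - damping) / n + damping * (rank @ transition)
    return rank / rank.sum()  # renormalize (dangling pages leak a little rank)

# Page 0 and page 1 both link to page 2; page 2 links back to page 0.
ranks = pagerank([[0, 0, 1],
                  [0, 0, 1],
                  [1, 0, 0]])
print(ranks)  # page 2, with two inbound "votes," ranks highest
```

The point of the sketch is the social contract the chapter describes: the algorithm itself contains no judgment about page quality, only an aggregation of the link-creating decisions that millions of humans have already made.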
Since then, however, Google has yo-yoed between IA and AI in designing applications and services, choosing whichever works best to solve the problem at hand. For example, for all of the controversy surrounding it, the Google Glass reality augmentation system clearly has the potential to be what the name promises—a human augmentation tool—while the Google car project represents the pros and cons of a pure AI system replacing human agency and intelligence with a machine. Indeed, Google as a company has become a de facto experiment about the societal consequences of AI-based technologies deployed on a massive scale. In a 2014 speech to a group of NASA scientists, Peter Norvig, Google’s director of research, was clear that the only reasonable solution to AI advances would lie in designing systems in which humans partner with intelligent machines. His solution was a powerful declaration of intent about the need to converge the separate AI and IA communities.
Given the current rush to build automated factories, such a convergence seems unlikely on a broad societal basis. However, the dark fears that have surfaced recently about job-killing manufacturing robots are perhaps likely to soon be supplanted by a more balanced view of our relationship with machines beyond the workplace. Consider Terry Gou, the chief executive of Foxconn, one of the largest Chinese manufacturers and a maker of the Apple iPhone. The company had already endured global controversy for labor conditions in its factories when, at the beginning of 2012, Gou declared that Foxconn was now planning a significant commitment to robots to replace his workers. “As human beings are also animals, to manage one million animals gives me a headache,” he said during a business meeting.52
Although the statement drew global attention, his vision of a factory without workers is only one of the ways in which robotics will transform society in the next decade. Although job displacement is currently seen as a bleak outcome for humanity, other forces now at play will reshape our relations with robots in more positive ways. The specter of disruption driven by technological unemployment in China, for example, could conceivably be even more dramatic than that in the United States. As China has industrialized in the past two decades, significant parts of its rural population urbanized. How will China adapt to lights-out consumer electronics manufacturing?
Probably with ease, as it turns out. The Chinese population is aging dramatically, fast enough that the country will soon be under significant pressure to automate its manufacturing industries. As a consequence of China’s one-child policy, governmental decisions made in the late 1970s and early 1980s have now resulted in a rapidly growing elderly population. In 2050, China will have the largest number of people over 80 years old in the world: 90 million elderly Chinese, compared to 32 million in the United States.53
Europe is also aging quickly. According to European Commission data, in 2050 there will be only 2 (reduced from 4 today) people of working age in Europe for each person over 65, and an estimated 84 million people with age-related health problems.54 The European Union views the demographic shift as a significant one and projects the emergence of a $17.6 billion market for elder-care robots in Europe by as early as 2016. The United States faces an aging scenario that is in many ways similar to, although not as extreme as, those of Asian and European societies. Despite the fact that the United States is aging more slowly than some other countries—in part because of continuing significant immigration inflow—the “dependency ratio” will continue to rise. That means that the combined number of children and elderly will shift from 59 per 100 working-age adults in 2005 to 72 per 100 in 2050.55 Baby boomers in the United States are now turning 65 at a rate of roughly 10,000 each day, and that rate will continue for the next 19 years.56
How will the world’s industrial societies care for their aging populations? An aging world will dramatically transform the conversation about robotics during the next decade from fears about automation to new hope for augmentation. Robot & Frank, an amusing, thoughtful, and possibly prophetic 2012 film set in the near future, depicts the relationship between a retired ex-convict in the first stages of dementia and his robot caregiver. How ironic if caregiving robots like Frank’s were to arrive just in time to provide a technological safety net for the world’s previously displaced, now elderly population.