Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots - John Markoff (2015)

Chapter 7. TO THE RESCUE

The robot laboratory was ghostly quiet on a weekend afternoon in the fall of 2013. The design studio itself could pass for any small New England machine shop, crammed with metalworking and industrial machines. Marc Raibert, a bearded roboticist and one of the world’s leading designers of walking robots, stood in front of a smallish interior room, affectionately called the “meat locker,” and paused for effect. The room was a jumble of equipment, but at the far end seven imposing humanoid robots were suspended from the ceiling, as if on meat hooks. Headless and motionless, the robots were undeniably spooky. Without skin, they were cybernetic skeleton-men assembled from an admixture of steel, titanium, and aluminum. Each was illuminated by an eerie blue LED glow that revealed a computer embedded in the chest that monitored its motor control. Each of the presently removed “heads” housed another computer that monitored the body’s sensor control and data acquisition. When they were fully equipped, the robots stood six feet high and weighed 330 pounds. When moving, they were not as lithe in real life as they were in videos, but they had an undeniable presence.

It was the week before DARPA would announce that it had contracted Boston Dynamics, the company that Raibert had founded two decades earlier, to build “Atlas” robots as the common platform for a new category of Grand Challenge competitions. This Challenge aimed to create a generation of mobile robots capable of operating in environments that were too risky or unsafe for humans. The company, which would be acquired by Google later that year, had already developed a global reputation for walking and running robots that were built mostly for the Pentagon.

Despite taking research dollars from the military, Raibert did not believe that his firm was doing anything like “weapons work.” For much of his career, he had maintained an intense focus on one of the hardest problems in the world of artificial intelligence and robotics: building machines that moved with the ease of animals through an unstructured landscape. While artificial intelligence researchers have tried for decades to simulate human intelligence, Raibert is a master at replicating the agility and grace of human movement. He had long believed that creating dexterous machines was more difficult than many other artificial intelligence challenges. “It is as difficult to reproduce the agility of a squirrel jumping from branch to branch or a bird taking off and landing,” Raibert argued, “as it is to program intelligence.”

The Boston Dynamics research robots, with names like LittleDog, BigDog, and Cheetah, had sparked lively and occasionally hysterical Internet discussion about the Terminator-like quality of modern robots. In 2003 the company had received its first DARPA research contract for a biologically inspired quadruped robot. Five years later, a remarkable video on YouTube showed BigDog walking over uneven terrain, skittering on ice, and withstanding a determined kick from a human without falling. With the engine giving off a banshee-like wail, it did not take much to imagine being chased through the woods by such a contraption. More than sixteen million people viewed the video, and the reactions were visceral. For many, BigDog exemplified generations of sinister sci-fi and Hollywood robots.

Raibert, who usually wears jeans and Hawaiian shirts, was unfazed by, and even enjoyed, his Dr. Evil image. As a rule, he would shy away from engaging directly with the media, and communicated instead through a frequent stream of ever more impressive “killer” videos. Yet he monitored the comments and felt that many of them ignored the bigger picture: mobile robots were on the cusp of becoming a routine part of the way humans interact with the world. When speaking on the record, he simply said that he believed his critics were missing the point. “Obviously, people do find it creepy,” he told a British technical journal. “About a third of the 10,000 or so responses we have to the BigDog videos on YouTube are from people who are scared, who think that the robots are coming for them. But the ingredient that affects us most strongly is a sense of pride that we’ve been able to come so close to what makes people and animals animate, to make something so lifelike.”1 Another category of comments, he pointed out, was from viewers who feigned shock while enjoying a sci-fi-style thrill.

The DARPA Robotics Challenge (DRC) underscored the desired spectrum of possibilities for the relationship between humans and robots even more clearly than the previous Grand Challenge for driverless cars. It foreshadowed a world in which robots would partner with humans, dance with them, be their slaves, or potentially replace them entirely. In the initial DRC competition in 2013, the robots were almost completely teleoperated by a human reliant on the robot’s sensor data, which was sent over a wired network connection. Boston Dynamics built Atlas robots with rudimentary motor control capabilities like walking and arm movements and made them available to competing teams, but the higher-level functions that the robots would need to complete specific tasks were to be programmed independently by the original sixteen teams. Later that fall, when Boston Dynamics delivered the robots to the DRC, and also when they actually competed in a preliminary competition held in Florida at the end of the year, the robots proved to be relatively slow and clumsy.

Hanging in the meat locker waiting to be deployed to the respective teams, however, they looked poised to spring into action with human nimbleness. On a quiet afternoon it evoked a scene from the 2004 movie I, Robot, where a police detective played by actor Will Smith walks, gun drawn, through a vast robot warehouse containing endless columns of frozen humanoid robots awaiting deployment. In a close-up shot, the eyes of one sinister automaton focus on the moving detective before it springs into action.

Decades earlier, when Raibert began his graduate studies at MIT, he had set out to study neurophysiology. One day he followed a professor back to the MIT AI Lab. He walked into a room where one of the researchers had a robot arm lying in pieces on the table. Raibert was captivated. From then on he wanted to be a roboticist. Several years later, as a newly minted engineer, Raibert got a job at NASA’s Jet Propulsion Laboratory in Pasadena. When he arrived, he felt like a stranger in a strange land. Robots, and by extension their keepers, were definitely second-class citizens compared to the agency’s stars, the astronauts. JPL had hired the brand-new MIT Ph.D. as a junior engineer into a project that proved to be stultifyingly boring.

Out of self-preservation, Raibert started following the work of Ivan Sutherland, who by 1977 was already a legend in computing. Sutherland’s 1962 MIT Ph.D. thesis project “Sketchpad” had been a major step forward in graphical and interactive computing, and he and Bob Sproull codeveloped the first virtual reality head-mounted display in 1968. Sutherland went to Caltech in 1974 as founding chair of the university’s new computer science department, where he was instrumental in working with physicist Carver Mead and electrical engineer Lynn Conway on a new model for designing and fabricating integrated circuits with hundreds of thousands of logic elements and memory—a 1980s advance that made possible the modern semiconductor industry.

Alongside his older brother Bert, Sutherland had actually come to robotics in high school, during the 1950s. The two boys had the good fortune to be tutored by Edmund C. Berkeley, an actuary and computing pioneer who had written Giant Brains, or Machines That Think in 1949. In 1950, Berkeley had designed Simon, which, although it was constructed with relays and a total memory of four two-bit numbers, could arguably be considered the first personal computer.2 The boys modified it to do division. Under Berkeley’s guidance, the Sutherland brothers worked on building a maze-solving mouselike robot and Ivan built a magnetic drum memory that was capable of storing 128 two-bit numbers for a high school science project, which got Ivan a scholarship to Carnegie Institute of Technology.

Once in college, the brothers continued to work on a “mechanical animal.” They went through a number of iterations of a machine called a “beastie,” which was based on dry cell batteries and transistors and was loosely patterned after Berkeley’s mechanical squirrel named Squee.3 They spent endless hours trying to program the beastie to play tag.

Decades later, as the chair of Caltech’s computer science department in the 1970s, Sutherland, long diverted into computer graphics, had seemingly left robot design interests behind him. When Raibert heard Sutherland lecture, he was riveted by the professor’s musings on what might soon be possible in the field. Raibert left the auditorium feeling entirely fired up. He set about breaking down the bureaucratic wall that protected the department chair by sending Sutherland several polite emails, and also leaving a message with his secretary.

His initial inquiries ignored, Raibert became irritated. He devised a plan. For the next two and a half weeks, he called Sutherland’s office every day at two P.M. Each day the secretary answered and took a message. Finally a gruff Sutherland returned his call. “What do you want?” he shouted. Raibert explained that he was anxious to collaborate with Sutherland and wanted to propose some possible projects. When they finally met in 1977, Raibert had prepared three ideas and Sutherland, after listening to the concept of a one-legged walking—hopping, actually—robot, brusquely declared: “Do that one!”

Sutherland would become Raibert’s first rainmaker. He took him along on a visit to DARPA (where Sutherland had worked for two years just after Licklider) and to the National Science Foundation, and they came away with a quarter million dollars in research funding to get the project started. The two worked together on early walking robots at Caltech, and several years later Sutherland persuaded Raibert to move with him to Carnegie Mellon, where they continued with research on walking machines.

Ultimately Raibert pioneered a remarkable menagerie of robots that hopped, walked, twirled, and even somersaulted. The two had adjoining offices at CMU and coauthored an article on walking machines for Scientific American in January 1983. Raibert would go on to set up the Leg Laboratory at CMU in 1981 and then move the laboratory to MIT while he held a faculty position there from 1986 to 1992. He left MIT to found Boston Dynamics. Another young MIT professor, Gill Pratt, would continue to work in the Leg Lab, designing walking machines and related technologies enabling robots to work safely in partnership with humans.

Raibert pioneered walking machines, but it was his CMU colleague Red Whittaker who almost single-handedly created “field robotics,” machines that moved freely in the physical world. DARPA’s autonomous vehicle contest had its roots in Red Whittaker’s quixotic scheme to build a machine that could make its way across an entire state. The new generation of mobile walking rescue robots had their roots in the work that he did in building some of the first rescue robots three and a half decades ago.

Whittaker’s career took off with the catastrophe at Three Mile Island Nuclear Generating Station on March 28, 1979. He had just received his Ph.D. when there was a partial meltdown in one of the two nuclear reactors at the site. The crisis exposed how unprepared the industry was to cope with the loss of control of a reactor’s radioactive fuel. It would be a half decade before robots built by Whittaker and his students would enter the most severely damaged areas of the reactor and help with the cleanup.

Whittaker’s opportunity came when two giant construction firms, having spent $1 billion, failed to get into the basement of the crippled reactor to inspect it and begin the cleanup. Whittaker sent the first CMU robot, which his team assembled in six months and dubbed “Rover,” into Three Mile Island in April of 1984. It was a six-wheeled contraption, outfitted with lights and a camera, that was tethered to its controller. It was lowered into the basement, where it traversed water, mud, and debris, successfully gathering the first images of the consequences of the disaster. The robot was later modified to perform inspections and conduct sampling.4

The success of this robot set the tone for Whittaker’s can-do style of tackling imposing problems. After years of bureaucratic delays, his first company, Redzone Robotics, supplied a robot to help with the cleanup at Chernobyl, the site of the 1986 nuclear power plant disaster in Ukraine. By the early 1990s Whittaker was working on a Mars robot for NASA. The Mars robot was large and heavy, so it was unlikely to make the first mission. Instead, Whittaker plotted to find an equally dramatic project back on Earth. Early driverless vehicle research was beginning to show promise, so the CMU researchers started experimenting with letting vehicles loose on Pittsburgh’s streets. What about driving across an entire state? Whittaker thought the idea, which he called “the Grand Traverse,” would prove that robots were ready to perform in the real world and not just in the laboratory. “Give me two years and a half-dozen graduate students and we could make it happen,” he boasted to the New York Times in 1991.5 A decade and a half later at DARPA, Tony Tether lent credence to this idea by underwriting the first autonomous vehicle Grand Challenge.

Although the roboticists finally made rapid progress in building useful robots in the early 1990s, it was only after decades of disappointment. The technology failure at Three Mile Island initially cast a pall over the robotics industry. In the June 1980 issue of Omni magazine, Marvin Minsky wrote a long manifesto calling for the development of telepresence technologies—mobile robots outfitted with video cameras, displays, microphones, and speakers that allow their operator to be “present” from a remote location anywhere in the connected world. Minsky used his manifesto to rail against the shortcomings of the world of robotics:

Three Mile Island really needed telepresence. I am appalled by the nuclear industry’s inability to deal with the unexpected. We all saw the absurd inflexibility of present day technology in handling the damage and making repairs to that reactor. Technicians are still waiting to conduct a thorough inspection of the damaged plant—and to absorb a year’s allowable dose of radiation in just a few minutes. The cost of repair and the energy losses will be $1 billion; telepresence might have cut this expense to a few million dollars.

The big problem today is that nuclear plants are not designed for telepresence. Why? The technology is still too primitive. Furthermore, the plants aren’t even designed to accommodate the installation of advanced telepresence when it becomes available. A vicious circle!6

The absence of wireless networking connectivity was the central barrier to the development of remote-controlled robots at the time. But Minsky also focused on the failure of the robotics community to build robots with the basic human capabilities to grasp, manipulate, and maneuver. He belittled the state of the art of robotic manipulators used by nuclear facility operators, calling them “little better than pliers” and noting that they were no match for human hands. “If people had a bit more engineering courage and tried to make these hands more like human hands, modeled on the physiology of the palm and fingers, we could make nuclear reactor plants and other hazardous facilities much safer.”7

It was an easy criticism to make, yet when the article was reprinted three decades later in IEEE Spectrum in 2010, the field had made surprisingly little progress. Robotic hands like those Minsky had called for still did not exist. In 2013 Minsky bemoaned the fact that even at the 2011 Fukushima meltdowns, there wasn’t yet a robot that could easily open a door in an emergency. It was also clear that he remained bitter over the fact that the research community had largely chosen the vision charted by Rod Brooks, which involved hunting for emergent complex behaviors by joining simple components.

One person who agreed with Minsky was Gill Pratt, who had taken over as director of the MIT Leg Lab after Marc Raibert. Later a professor and subsequently dean at Olin College in Needham, Massachusetts, Pratt arrived at DARPA in early 2010 as a program manager in charge of two major programs. One, the ARM program, for Autonomous Robotic Manipulation, involved building the robotic hands whose absence Minsky had noted. ARM hands were specified to possess a humanlike functionality for a variety of tasks: picking up objects, grasping and controlling tools designed for humans, and operating a flashlight. A second part of ARM funded efforts to connect the human brain to robotic limbs, which would give wounded soldiers and the disabled—amputees, paraplegics, and quadriplegics—new freedoms. A parallel project to ARM, called SyNAPSE, focused on developing biologically inspired computers that could better translate a machine’s perception into robotic actions.

Pratt represented a new wave at DARPA, arriving shortly after the Obama administration had replaced Tony Tether with Regina Dugan as the agency’s director. Tether had moved DARPA away from its historically close relationship with academia by shifting funding to classified military contractors. Dugan and Pratt tried to repair the damage by quickly reestablishing closer relations with university campuses. Pratt’s research before arriving at DARPA had focused on building robots that could navigate the world outside of the lab. The challenge was giving the robots practical control over the relatively gentle forces that they would encounter in the physical world. The best way to do this, he found, was to insert some elastic material between the robot’s components and the gear train that drives them. The approach mimicked the function of biological tendons, which sit between a muscle and a joint. The springy tendon material can stretch, and measuring that stretch reveals how much force is being transmitted through it. Until then, the rigid mechanical connection between a robot’s limbs and the motors that drove them had given machines power and precision, but it was too unyielding—and potentially dangerous—for navigating the unpredictable physical world populated by vulnerable and litigious humans.
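The physics behind that springy element is, at bottom, Hooke’s law: if the stiffness of the elastic material is known, measuring how far it has deflected tells the controller how much force is flowing through the joint, and the motor can then be commanded to regulate force rather than raw position. The Python sketch below, with made-up parameter values and function names, is only an illustration of that principle, not code from Pratt’s lab or Boston Dynamics.

```python
# Illustrative sketch of force control through a series elastic element.
# All names and numbers are hypothetical, chosen only to show the idea.

SPRING_STIFFNESS = 300.0  # newtons per meter of deflection (assumed value)

def sensed_force(motor_position: float, joint_position: float) -> float:
    """Hooke's law: the spring's deflection, times its stiffness,
    tells us how much force the actuator is actually transmitting."""
    deflection = motor_position - joint_position
    return SPRING_STIFFNESS * deflection

def force_control_step(desired_force: float, motor_position: float,
                       joint_position: float, gain: float = 0.001) -> float:
    """One step of a simple proportional force controller: nudge the motor
    so that the measured spring force approaches the desired force."""
    error = desired_force - sensed_force(motor_position, joint_position)
    return motor_position + gain * error  # new motor position command

if __name__ == "__main__":
    motor, joint = 0.0, 0.0   # start with the spring relaxed
    target = 5.0              # newtons of contact force we want to apply
    for _ in range(2000):
        motor = force_control_step(target, motor, joint)
    print(f"measured force: {sensed_force(motor, joint):.2f} N")
```

Because the elastic element is deliberately soft, an unexpected collision shows up as a measurable force long before it becomes a dangerous one—the property that makes such machines safer to work beside.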

Pratt had not initially considered human-robot collaboration. Instead, he was interested in how the elderly safely move about in the world. Typically, the infirm used walkers and wheelchairs. As he explored the contact between humans and the tools they use, he realized that elasticity offered the humans a measure of protection against unyielding obstacles. More elastic robots, Pratt concluded, could make it possible for humans to work close to the machines without fear of being injured.

The idea got an early test with Cog, a humanoid robot designed in Rodney Brooks’s robot laboratory during the 1990s. A graduate student, Matt Williamson, was testing the robot’s arm when a bug in the code caused the arm to repeatedly slap the test fixture. When Brooks inserted himself between the robot and the test bench, he became the first human ever to be spanked by a robot. It was a gentle whipping and—fortunately for his graduate students—Brooks survived. Pratt’s research was an advance in both biomimicry and human-robot collaboration, and Brooks adopted “elastic actuation” as a central means of making robots safe for people to work with.

When Pratt arrived at DARPA he was keenly aware that despite decades of research, most robots were still kept inside labs, not just for human safety but also to protect the robot’s software from an uncontrolled environment. He had been at DARPA for a little more than a year when, on March 11, 2011, the tsunami struck the Fukushima Daiichi Nuclear Power Plant. The workers inside the plant had been able to control the emergency for a short period, but then high radiation leakage forced them to flee before they could oversee a safe shutdown of the reactors. DARPA became peripherally involved in the crisis because humanitarian assistance and disaster relief is a Pentagon responsibility. (The agency had tried to help out in the wake of the 9/11 attacks by sending robots to search for survivors at the World Trade Center.) DARPA officials coordinated a response at Fukushima by contacting U.S. companies that had provided assistance at Three Mile Island and Chernobyl. A small armada of U.S. robots was sent to Japan in an effort to get into the plant and make repairs, but by the time power plant personnel were trained to use them it was too late to avoid the worst damage. This was particularly frustrating because Pratt could see that a swift deployment of robots would almost certainly have been helpful and limited the damage. “The best the robots could do was help survey the extensive damage that had already occurred and take radiation readings; the golden hours for early intervention to mitigate the extent of the disaster had long since passed,” he wrote.8

The failure led to the idea of the DARPA Robotics Challenge, which was announced in April 2012. By sponsoring a grand challenge on the scale of Tether’s autonomous vehicle contest, Pratt sought to spark innovations in the robotics community that would facilitate the development of autonomous machines that could operate in environments that were hostile for humans. Teams would build and program a robot to perform a range of eight tasks9 that might be expected in a power plant emergency, but most of them would not build the robots from scratch: Pratt had contracted with Boston Dynamics to supply Atlas humanoid robots as a joint platform to jump-start the competition.

In the dark it is possible to make out the blue glow of an unblinking eye staring into the evening gloom. This light is a retina scanner that uses the eye as a digital fingerprint. These pricey electronic sentinels are not yet commonplace, but they do show up in certain ultra-high-security locations. Passing beneath their gaze is a bit like passing before the unblinking eye of some cybernetic Cerberus. The scanner isn’t the only bit of info-security decor. The home itself is a garden of robotic delights. Inside, in the foyer, a robotic arm gripping a mallet strikes a large gong to signal a new arrival. There are wheeled, flying, crawling, and walking machines everywhere. To a visitor, it feels like the scene in the movie Blade Runner in which detective Rick Deckard arrives at the home of the gene-hacker J. F. Sebastian and finds himself in a menagerie of grotesque, quirky synthetic creatures.

The real-life J.F. lording over this lair is Andy Rubin, a former Apple engineer who in 2005 joined Google to jump-start the company’s smartphone business. At the time the world thought of Google as unstoppable; it had rapidly become one of the globe’s dominant computing technology companies. Inside Google, however, the company’s founders were deeply concerned that their advantage in Web search, and thus their newly gained monopoly, might be threatened by the rapid shift from desktop computers to handheld mobile devices. The era of desktop computing was giving way to a generation of more intimate machines in what would soon come to be known as the post-PC era. The Google founders were fearful that if Microsoft was able to replicate its desktop monopoly in the emerging world of phones, they would be locked out and would lose their search monopoly. Apple had not yet introduced the iPhone, so they had no way of knowing how fundamentally threatened Microsoft’s desktop stranglehold would soon be.

In an effort to get ahead, Google acquired Rubin’s small start-up firm to build its own handheld software operating system as a defense against Microsoft. Google unveiled Android in November 2007, ten months after the iPhone first appeared. During the next half decade, Rubin enjoyed incredible success, displacing not just Microsoft but Apple, BlackBerry, and Palm Computing as well. His strategy was to build an open-source operating system and offer it freely to the companies who had once paid hefty licenses to Microsoft for Windows. Microsoft found it impossible to compete with free. By 2013 Google’s software would dominate the world of mobile phones in terms of market share.

Early in his career, Rubin had worked at Apple Computer as a manufacturing engineer after a stint at Zeiss in Europe programming robots. He left Apple several years later with an elite group of engineers and programmers to build one of the early handheld computers at General Magic. General Magic’s efforts to seed the convergence of personal information, computing, and telephony became an influential and high-profile failure in the new mobile computing world.


Andy Rubin went on a buying spree for Google when the company decided to develop next-generation robotics technologies. Despite planning a decade-long effort, he walked away after just a year. (Photo courtesy of Jim Wilson/New York Times/Redux)

In 1999, Rubin started Palo Alto-based Danger, Inc., a smartphone handset maker, with two close friends who had also been Apple engineers. The company name reflected Rubin’s early obsession with robots. (In the 1960s science-fiction television series Lost in Space, a robot guardian for a young boy would say “Danger, Will Robinson!” whenever trouble loomed.) Danger created an early smartphone called the Sidekick, which was released in 2002. It attracted a diverse cult following with its switchblade-style slide-out keyboard, downloadable software, email, and backups of personal information in “the cloud.” While most businesspeople were still chained to their BlackBerrys, the Sidekick found popularity among young people and hipsters, many of whom switched from PalmPilots.

Rubin was a member of a unique “Band of Brothers” who passed through Apple Computer in the 1980s, a generation of young computer engineers who came of age in Silicon Valley as disciples of Steve Jobs. Captivated by Jobs’s charisma and his dedication to using good design and computing technology as levers to “change the world,” they set out independently on their own technology quests. The Band of Brothers reflected the tremendous influence Jobs’s Macintosh project had on an entire Silicon Valley generation, and many stayed friends for years afterward. Silicon Valley’s best and brightest believed deeply in bringing the Next Big Thing to millions of people.

Rubin’s robot obsession, however, was extraordinary, even by the standards of his technology-obsessed engineering friends. While working on phones at Google, he bought an $80,000 robot arm and brought it to work, determined to program it to make espresso—a project that stalled for more than a year because one step in the process required more strength than the arm could exert.

Early on, Rubin had acquired the Internet domain name android.com, and friends would teasingly even refer to him as “the android.” In his home in the hills near Palo Alto, evidence of the coming world of robots was everywhere, because, once again, Andy Rubin had seen something that hadn’t yet dawned on most others in Silicon Valley. Rubin would soon get the opportunity to make the case for the coming age of mobile robots on a much larger stage.

In the spring of 2013, Google CEO Larry Page received a curious email. Seated in his office at the company’s Mountain View headquarters, he read a message that warned him an alien attack was under way. Immediately after he read the message, two large men burst into his office and instructed him that it was essential he immediately accompany them to an undisclosed location in Woodside, the elite community populated by Silicon Valley’s technology executives and venture capitalists.

This was Page’s surprise fortieth birthday party, orchestrated by his wife, Lucy Southworth, a Stanford bioinformatics Ph.D. A crowd of 150 people in appropriate alien-themed costumes had gathered, including Google cofounder Sergey Brin, who wore a dress. In the basement of the sprawling mansion where the party was held, a robot arm grabbed small boxes one at a time and gaily tossed the souvenirs to an appreciative crowd. The robot itself consisted of a standard Japanese-made industrial robot arm outfitted with a suction gripper hand driven by a noisy air compressor. It helped that the robot could “see” the party favors it was picking up. For eyes—actually a single “eye”—the robot used the same sensor Microsoft originally added to the Xbox to capture the gestures of video game players in the living room.

The box-throwing robot was a prototype designed by Industrial Perception, Inc., a small team then located in a garage just across the freeway from the Googleplex in Palo Alto. When the robot, which had already briefly become an Internet sensation after a video showing its box-tossing antics had appeared on YouTube,10 wasn’t slinging boxes, it was being prototyped as a new class of intelligent industrial labor that might take over tasks as diverse as loading and unloading trucks, packing in warehouses, working on assembly lines, and restocking grocery shelves.

Equipping the robots to understand what they are seeing was only part of the challenge. Recognizing six-sided boxes had proven not to be an insurmountable problem, although AI researchers had solved it only recently. Identifying wanted items on grocery shelves, for example, is an immensely more complicated challenge, and today it still exceeds the capability of the best robot programmers. However, at the Page party, the Yaskawa robot had no apparent difficulty finding the party favor boxes, each of which contained a commemorative T-shirt. Ironically, humans had packed each of those boxes, because the robot was not yet able to handle loose shirts.
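Part of why stacked boxes are tractable for a depth camera is geometry: a box face is a large flat plane, and planes can be pulled out of a three-dimensional point cloud with simple fitting procedures such as RANSAC, whereas a grocery shelf presents a clutter of curved, overlapping, look-alike items. The following Python sketch runs on synthetic points rather than real sensor data and illustrates only the generic plane-finding step; it is a simplified stand-in, not Industrial Perception’s actual algorithm.

```python
# Illustrative RANSAC plane fit on a synthetic point cloud, the kind of
# primitive that makes flat box faces easy prey for a depth camera.
# A teaching sketch only, not Industrial Perception's pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "depth camera" data: 500 points on the plane z = 1.0 (a box face
# with a little sensor noise) plus 200 points of random clutter.
face = np.column_stack([rng.uniform(0, 0.5, 500),
                        rng.uniform(0, 0.3, 500),
                        1.0 + rng.normal(0, 0.002, 500)])
clutter = rng.uniform(0, 1.5, (200, 3))
cloud = np.vstack([face, clutter])

def ransac_plane(points, iters=200, tol=0.01):
    """Return (normal, d, inlier_mask) for the best plane n.x = d found."""
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (collinear) sample, try again
            continue
        normal /= norm
        d = normal @ sample[0]
        inliers = np.abs(points @ normal - d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers

normal, d, inliers = ransac_plane(cloud)
print(f"plane normal ~ {np.round(np.abs(normal), 2)}, {inliers.sum()} inlier points")
```

The fit recovers the box face as the dominant plane; the hard part of the grocery-shelf problem is precisely that no such simple geometric model applies.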

The Industrial Perception arm wasn’t the only intelligent machine at the party. A telepresence robot was out on the dance floor, swaying to the music. It was midnight in Woodside, but Dean Kamen, the inventor of the Segway, was controlling the robot from New Hampshire—where it was now three A.M.

This robot, dubbed a “Beam,” was from Suitable Technologies, another small start-up just a couple of blocks away from Industrial Perception. Both companies were spin-offs from Willow Garage, a robotics laboratory funded by Scott Hassan, a Stanford graduate school classmate and friend of Page’s. Hassan had been the original programmer of the Google search engine while it was still a Stanford research project. Willow Garage was his effort to build a humanoid robot as a research platform. The company had developed a freely available operating system for robots, known as ROS, as well as the PR2, a mobile humanoid research robot that was being used in a number of universities.

That evening, both AI and IA technologies were thus in attendance at Page’s party—one of the robots attempted to replace humans while another attempted to augment them. Later that year Google acquired Industrial Perception, the box-handling company, for Rubin’s new robot empire.

Scott Hassan’s Willow Garage spin-offs once again pose the “end of work” question. Are Page and Hassan architects of a generation of technology that will deeply disrupt the economy by displacing both white-collar and blue-collar workers? Viewed as a one-to-one replacement for humans, the Industrial Perception box handler, which will load or unload a truck, is a significant step into what has been one of the last bastions of unskilled human labor. Warehouse workers, longshoremen, and lumpers all have rough jobs that are low paying and unrewarding. Human workers moving boxes—which can weigh fifty pounds or more—roughly every six seconds get tired and often hurt their backs and wind up disabled.

The Industrial Perception engineers determined that to win contracts in warehouse and logistics operations, they needed to demonstrate that their robots could reliably move boxes at four-second intervals. Even before their acquisition by Google, they were very close to that goal. However, from the point of view of American workers, a very different picture emerges. In fact, the FedExes, UPSes, Walmarts, and U.S. Post Offices that now employ many of the nation’s unskilled laborers are no longer primarily worried about labor costs and are not anxious to displace workers with lower-cost machines. Many of the workers, it turns out, have already been displaced. The companies are instead faced with an aging workforce and the reality of a labor scarcity. In the very narrow case of loading and unloading trucks, at least, it’s possible the robots have arrived just in time. The deeper and as yet unanswered question remains whether our society will commit to helping its human workers across the new automation divide.

At the end of 2013 in a nondescript warehouse set behind a furniture store in North Miami, a group of young Japanese engineers began running practice sessions fully a month before the DARPA Robotics Challenge. They had studied under Masayuki Inaba, the well-known roboticist who himself was the prize student of the dean of Japanese robotics, Hirochika Inoue. Inoue had started his work in robotics during graduate school in 1965 when his graduate thesis advisor proposed that he design a mechanical hand to turn a crank.

Robots have resonated culturally in Japan more positively than they have in the United States. America has long been torn between the robot as a heroic “man of steel” and the image of a Terminator. (Of course, one might reasonably wonder what Americans really felt about the Terminator after Californians twice elected as their governor the Hollywood actor who portrayed it!) In Japan, however, during the 1950s and 1960s the cartoon robot character Mighty Atom, called Astro Boy in other countries, had framed a more universally positive view of robotics. To some extent, this makes sense: Japan is an aging society, and the Japanese believe they will need autonomous machines to care for their elderly.

Early in 2013, the Japanese team, which named itself Schaft, came out of JSK, the laboratory Dr. Inoue had established at Tokyo University, with the aim of entering the DARPA Robotics Challenge. The team had been forced to spin off from Tokyo University because the school, influenced by the antimilitarist period after the end of World War II, prevented the university laboratory from participating in any event that was sponsored by the U.S. military.11 The team took its name from a 1990s Japanese musical group of the electro-industrial rock genre. Rubin had found the researchers through Marc Raibert.

When news broke that Google had acquired Schaft, it touched off a good deal of hand-wringing in Japan. There was great pride in the country’s robotics technology. Not only had the Japanese excelled at building walking machines, but for years they had commercialized some of the most sophisticated robots, even as consumer products. Sony had introduced Aibo, a robotic pet dog, in 1999, and continued to offer improved versions until 2005. Following Aibo, a two-foot-tall robot, Qrio, was developed and marketed but never sold commercially. Now it appeared that Google was waltzing in and skimming the cream from decades of Japanese research.

The reality, however, is that while the Japanese dominated the first-generation robot arms, other nations are now rapidly catching up. Most of the software-centric next-generation robot development work and related artificial intelligence research was happening in the United States. Both Silicon Valley and Route 128 around Boston had once again become hotbeds of robotics start-up activity in 2012 and 2013.

When they agreed to join Rubin’s expanding robot empire, the Schaft researchers felt conflicted. They expected that now that they were marching to Google’s drumbeat, they would have to give up their dream of competing in the Pentagon contest. “No way!” Rubin told them. “Of course you’re going to stay in the competition.” The ink was barely dry on the Google contract when the Japanese engineers threw themselves into the contest. They immediately started building three prototype machines and constructed mockups of each of the eight contest tasks—rough terrain, a door to open, a valve to close, a ladder, and so on—so they could begin testing their robots right away. In June, when DARPA officials checked on the progress of each group, Team Schaft’s thorough preparation stunned Gill Pratt—at the time, none of the other teams had even started!

In September, when two members of the Schaft team traveled to a DARPA evaluation meeting held in Atlanta alongside the Humanoids 2013 technical conference, they brought a video to demonstrate their progress. Though the two spoke almost no English, their video hit like a thunderbolt. It showed that the young Japanese engineers had solved all the programming problems while the other competitors were still learning how to program their robots. The other teams at the conference were visibly in shock when the two young engineers left the stage. Two months later in their Miami warehouse, the team had settled in and recreated a test course made from plywood. Even though it was almost December, muggy and miserable Miami weather and mosquitoes plagued the researchers. A local security guard who watched over the team was so badly bitten that he ended up in the hospital after a severe allergic reaction.

Schaft established a control station on a long table in the cavernous building. Controlling the robot was dead simple—users operated the machine with a Sony PlayStation 3 controller, just like a video game. The robot pilot borrowed sound bites from Nintendo games and added his own special audio feedback to the robot. The researchers practiced each of the tasks over and over until the robot could maneuver the course perfectly.
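Part of what makes a game controller attractive for teleoperation is that it collapses the operator’s job into a small, bounded vocabulary: stick deflections become capped velocities, and buttons trigger canned behaviors. Schaft’s control software has never been published, so the Python sketch below uses entirely hypothetical command names and limits simply to illustrate that kind of mapping.

```python
# Hypothetical gamepad-to-robot command mapping for teleoperation.
# Names like "gripper" and the speed limits are illustrative only,
# not Schaft's actual interface.
from dataclasses import dataclass

@dataclass
class GamepadState:
    left_x: float   # -1.0 .. 1.0, lateral stick deflection
    left_y: float   # -1.0 .. 1.0, forward/back stick deflection
    button_x: bool  # e.g. close the gripper
    button_o: bool  # e.g. open the gripper

MAX_WALK_SPEED = 0.2   # meters/second, a deliberately conservative cap
MAX_TURN_RATE = 0.3    # radians/second

def gamepad_to_command(pad: GamepadState) -> dict:
    """Translate raw stick positions into a small, bounded command set."""
    command = {
        "forward_velocity": -pad.left_y * MAX_WALK_SPEED,  # stick up = forward
        "turn_rate": pad.left_x * MAX_TURN_RATE,
    }
    if pad.button_x:
        command["gripper"] = "close"
    elif pad.button_o:
        command["gripper"] = "open"
    return command

if __name__ == "__main__":
    # Operator pushes the stick halfway forward and closes the gripper.
    print(gamepad_to_command(GamepadState(0.0, -0.5, True, False)))
```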

Homestead-Miami Speedway was no stranger to growling machines. When it hosts NASCAR races, the stands are usually filled with good ol’ Southern boys. In December of 2013, however, the Robot Challenge had a decidedly different flavor. Raibert called it “Woodstock for robots.” He was there to oversee both the supporting role that Boston Dynamics was playing in technical care and feeding for the Atlas humanoid robots and the splashy demonstrations of several Pentagon-funded four-legged running and walking robots. These machines would periodically trot or gallop along the racecourse to the amazement of the audience of several thousand. DARPA also hosted a robot fair with several dozen exhibitors during the two days of robot competition, which generated a modest crowd as well as a fairly hefty media contingent.

Google underscored the growing impact of robotics on all aspects of society when it publicly announced Rubin’s robotics division just weeks before the Robotics Challenge. At the beginning of that month, 60 Minutes had aired a segment about Jeff Bezos and Amazon that included a scene in which Bezos led Charlie Rose into a laboratory and showed off an octocopter drone designed to deliver Amazon products autonomously “in 30 minutes.”12 The report sparked another flurry of discussions about the growing role of robots in society. The storage and distribution of commercial goods is already a vast business in the United States, and Amazon has quickly become a dominant low-cost competitor. Google is intent on competing against Amazon in the distribution of all kinds of goods, which will create pressure to automate warehouse processes and move distribution points closer to consumers. If the warehouse was close enough to a consumer—within just blocks, for example, in a large city—why not use a drone for the “last mile”? The idea felt like science fiction come to life, and Rose, who appeared stunned, did not ask hard questions.

Google, however, had far bigger plans than drone delivery. Just days after the Amazon 60 Minutes extravaganza, the New York Times reported on Google’s robotic ambitions, which dwarfed what Bezos had sketched on the TV news show. Rubin had stepped down as head of Google’s Android phone division in the spring of 2013. Despite reports that he had lost a power struggle and was held in disfavor, exactly the opposite was true. Larry Page, Google’s chief executive, had opened the corporate checkbook and sent Rubin on a remarkable shopping spree. Rubin had spent hundreds of millions of dollars recruiting the best robotics talent and buying the best robotic technology in the world. In addition to Schaft, Google had also acquired Industrial Perception; Meka Robotics and Redwood Robotics, a group of developers of humanoid robots and robot arms in San Francisco led by one of Rodney Brooks’s star students; and Bot & Dolly, a developer of robotic camera systems that had been used to create special effects in the movie Gravity. Boston Dynamics was the exclamation mark in the buying spree.

Google’s acquisition of an R & D company closely linked to the military instigated a round of speculation. Many suggested that Google, having bought a military robotics firm, might become a weapons maker. Nothing could have been further from the truth. In his discussions with the technologists at the companies he was acquiring, Rubin sketched out a vision of robots that would safely complete tasks performed by delivery workers at UPS and FedEx. If Bezos could dream of delivering consumer goods from the air, then how outlandish would it be for a Google Shopping Express truck to pull up to a home and dispatch a Google robot to your front door? Rubin had also long had a close relationship with Terry Gou, the CEO of Foxconn, the giant Taiwanese contract manufacturer. It would not be out of the realm of possibility to supply robots to replace some of Gou’s one million “animals.”

Google’s timing in unveiling its new robotics effort was perfect. The December 2013 Robotics Challenge was a preliminary trial to be followed by a final event in June of 2015. DARPA organized the first contest into “tracks” broken broadly into teams that supplied their own robots and teams that used the DARPA-supplied Atlas robots from Boston Dynamics. The preliminary trial turned out to be a showcase for Google’s new robot campaign. Rubin and a small entourage flew into an airport north of Miami on one of Google’s G5 corporate jets and were met by two air-conditioned buses rented for the joint operation.

The contest consisted of eight separate tasks performed over two days. The Atlas teams had a comparatively short amount of time before the event to program their robots and practice, and it showed. Compared with the nimble four-legged Boston Dynamics demonstration robots, the contestants themselves were painstakingly slow. A further reminder of how little progress had been made was that the robots were tethered from above in order to protect them from damaging falls without hampering their movements.


The Boston Dynamics Atlas robot, designed for the DARPA Robotics Challenge. Boston Dynamics was later acquired by Google and has designed a second-generation Atlas intended to operate without a tether or power cable. (Photo courtesy of DARPA)

If that wasn’t enough, DARPA gave the teams a little break in the driving task: they allowed human assistants to place the robots in the cars and connect them to the steering wheel and brakes before they drove through a short obstacle course. Even the best teams, including Schaft, drove the course in a stop-and-go fashion, pulling forward a distance and pausing to recalibrate. The slow pace was strikingly reminiscent of the SRI Shakey robot many decades earlier. The robots were not yet autonomous. Their human controllers were hidden away in the garages while the robots performed their tasks on the speedway’s infield, directed via a fiber-optic network that fed video and sensor data back to the operator console workstations. To bedevil the teams and create a real-world sense of crisis, DARPA throttled the data connection at regular intervals. This gave even the best robots a stuttering quality, and the assembled press hunted for metaphors less trite than “watching grass grow” or “watching paint dry” to describe the scene.

Nevertheless, the DARPA Robotics Challenge did what it was designed to do: expose the limits of today’s robotic systems. Truly autonomous robots are not yet a reality. Even the prancing and trotting Boston Dynamics machines that performed on the racetrack tarmac were wirelessly tethered to human controllers. It is equally clear, however, that truly autonomous robots will arrive soon. Just as the autonomous vehicle challenges of 2004 through 2007 significantly accelerated the development of self-driving cars, the Robotics Challenge will bring us close to Gill Pratt’s dream of a robot that can work in hazardous environments and Andy Rubin’s vision of the automated Google delivery robot. What Homestead-Miami also made clear was that there are two separate paths forward in defining the approaching world of humans and robots, one moving toward the man-machine symbiosis that J. C. R. Licklider had espoused and another in which machines will increasingly supplant humans. Just as Norbert Wiener realized at the onset of the computer and robotics age, one of the future possibilities will be bleak for humans. The way out of that cul-de-sac will be to follow in Terry Winograd’s footsteps by placing the human in the center of the design.

Darkness had just fallen on the pit lane at Homestead-Miami Speedway, giving the robotic bull trotting on the roadway a ghostlike form. The bull’s machinery growled softly as its mechanical legs swung back and forth, the crate latched to its side snapping against its trunk in a staccato rhythm. A human operator trailed the robot at a comfortable pace. Wearing a radio headset and a backpack full of communications gear, he used an oversized video game-style controller to guide the beast’s pace and direction. The contraption trotted past the garages where clusters of engineers and software hackers were busy packing up robots from the day’s competition.

The DRC evoked the bar scene in the Star Wars movie Episode IV: A New Hope. Boston Dynamics designed most of its robots in humanoid form. This was a conscious decision: a biped interacts better with man-made environments than other forms do. There were also weirder designs at the contest, like a “transformer” from Carnegie Mellon that was reminiscent of robots in Japanese sci-fi films, and a couple of spiderlike walking machines as well. The most attractive robot was Valkyrie, a NASA robot that resembled a female Star Wars Imperial Stormtrooper. Sadly, Valkyrie was one of the three underperformers in the competition; it completed none of the tasks successfully. NASA engineers had little time to refine its machinery because the shutdown of the federal government cut funds for development.

The star of the two-day event was clearly the Team Schaft robot. The designers, a crew of about a dozen Japanese engineers, had been the only team to almost perfectly complete all the tasks, and so they easily won the first Robotics Challenge. Indeed, the Schaft robot had made only a single significant error: while it was trying to walk through a doorway, gusts of wind repeatedly blew the door out of its grasp before it could extend its second arm to secure the door’s spring-loaded closing mechanism.

While the competition took place, Rubin was busy moving his Japanese roboticists into a sprawling thirty-thousand-square-foot office perched high atop a Tokyo skyscraper. To ensure that the designers did not disturb the building’s other tenants—lawyers, in this case—Google had purchased two floors in the building and decided to leave one floor as a buffer for sound isolation.

In the run-up to the Robotics Challenge, both Boston Dynamics and several of the competing teams had released videos showcasing Atlas’s abilities. Most of the videos featured garden-variety demonstrations of the robot walking, balancing, or twisting in interesting ways. However, one video of a predecessor to Atlas showed the robot climbing stairs and crossing an obstacle field, spreading its legs across a wide gap while balancing its arms against the walls of the enclosure. It moved at human speed and with human dexterity. The video had been carefully staged and the robot was being teleoperated—it was not acting autonomously.13 But the implication of the video was clear: the hardware was already capable of real-world mobility; it was waiting for the software and sensors to catch up.

While public reaction to the video was mixed, the Schaft team loved it. In the wake of their victory, they watched in amazement as the Boston Dynamics robotic bull trotted toward their garage. It squatted on the ground and shut down. The team members swarmed around the robot and opened the crate that was strapped to its back. It contained a case of champagne, brought as a congratulatory offering from the Boston Dynamics engineers in an attempt to bond the two groups of roboticists who would soon be working together on some future Google mobile robot.

Several of the company’s engineers had considered doing something splashier. While planning for the Boston Dynamics demonstrations at the speedway, executives at another of Rubin’s AI companies came up with a PR stunt to unveil during both afternoons of the Robotics Challenge. The highlight of the two-day contest had not been watching the robots as they tried to complete a set of tasks. The real crowd-pleasers were the LS3 and WildCat four-legged robots, both of which had come out on the raceway tarmac to trot back and forth. LS3, a robotic bull-like machine without a head, growled as it moved at a determined pace. Every once in a while, a Boston Dynamics employee pushed the machine to set it off balance. The robot nimbly moved to one side and recovered quickly—as if nothing had happened. Google initially wanted to stage something more impressive. What if they could show off a robot dog chasing a robot car? That would be a real tour de force. DARPA quickly nixed the idea, however. It would have smacked of a Google promotion and the “optics” might not play well either. After all, if robots could chase each other, what else might they chase?

Team Schaft finished the champagne as quickly as they had cracked it open. It was a heady night for the young Japanese engineers. One researcher, who staggered around with a whole bottle of champagne in his hand, ended up in the hospital and woke with a fierce headache the next day. As the evening wound down, the implications of Schaft’s win were very clear to the crowd of about three dozen robot builders gathered in front of the Schaft garage. Rubin’s new team shared a common purpose. Machines would soon routinely move among people and would inevitably assume even more of the drudgery of human work. Designing robots that could do anything from making coffee to loading trucks was well within the engineers’ reach.

The Google roboticists believed passionately that, in the long run, machines would inevitably substitute for humans. Given enough computing power and software ingenuity, it now seemed possible that engineers could model all human qualities and capabilities, including vision, speech, perception, manipulation, and perhaps even self-awareness. To be sure, it was important to these designers that they operated in the best interests of society. However, they believed that while the short-term displacement of humans would stoke conflict, in the long run, automation would improve the overall well-being of humanity. This is what Rubin had set out to accomplish. That evening he hung back from the crowd and spoke quietly with several of the engineers who were about to embark on a new journey to introduce robots into the world. He was at the outset of his quest but had already won a significant wager, which was a sign of his confidence in his team. He had bet Google CEO Larry Page his entire salary for a year that the Schaft team would win the DARPA trials. Luckily for Page, Rubin’s annual salary was just one dollar. Like many Google executives’, his actual compensation was much, much higher. However, a year after launching Google’s robotics division, Rubin would depart the company. He had acquired a reputation as one of the Valley’s most elite technologists. But, by his own admission, he was more interested in creating new projects than running them. The robot kingdom he set out to build would remain very much a work in progress after his abrupt departure at the end of 2014.

In the weeks after Homestead, Andy Rubin made it clear that his ultimate goal was to build a robot that could complete each of the competitive tasks in the challenge at the push of a button. Ultimately, it was not to be. Months later, Google would withdraw Schaft from the finals to focus on supplying state-of-the-art second-generation Atlas robots for other teams to use.

Today Google’s robot laboratory can be found in the very heart of Silicon Valley, on South California Ave., which divides College Terrace, a traditional student neighborhood that was once full of bungalows and now has grown increasingly tony, from the Stanford Industrial Park, which might properly be called the birthplace of the Valley. Occupying seven hundred acres of the original Leland Stanford Jr. family farm, the industrial park was the brainchild of Frederick Terman, the Stanford dean who convinced his students William Hewlett and David Packard to stay on the West Coast and start their own business, instead of following a more traditional career path and heading east to work for the electronics giants of the first half of the last century.

The Stanford Industrial Park has long since grown from a manufacturing center into a sprawling cluster of corporate campuses. Headquarters, research and development centers, law offices, and finance firms have gathered in the shadow of Stanford University. In 1970, Xerox Corp. temporarily located its Palo Alto Research Center at South California Avenue and Hanover Street, where, shortly thereafter, a small group of researchers designed the Alto computer. Smalltalk, the Alto’s software, was created by another PARC group led by computer scientist Alan Kay, a student of Ivan Sutherland at Utah. Looking for a way to compete with IBM in the emerging market for office computing, Xerox had set out to build a world-class computer science lab from scratch in the Industrial Park.

More than a decade ahead of its time, the Alto was the first modern personal computer with a windows-based graphical display that included fonts and graphics, making possible on-screen pages that corresponded precisely to final printed documents (ergo WYSIWYG, pronounced “whizziwig,” which stands for “what you see is what you get”). The machine was controlled by a mouse—an oddly shaped rolling appendage with three buttons, wired to the computer. For those who saw the Alto while it was still a research secret, it drove home the meaning of Engelbart’s augmentation ideas. Indeed, one of those observers was Stewart Brand, a counterculture impresario—photographer, writer, and editor—who had masterminded the Whole Earth Catalog. In an article for Rolling Stone, Brand referred to PARC as “Shy Research Center,” and he coined the term “personal computing.” Now, more than four decades later, the desktop personal computers of PARC are handheld and they are in the hands of much of the world’s population.

Today Google’s robot laboratory sits just several hundred feet from the building where the Xerox pioneers conceived of personal computing. The proximity underscores Andy Rubin’s observation that “Computers are starting to sprout legs and move around in the environment.” From William Shockley’s initial plan to build an “automatic trainable robot” at the very inception of Silicon Valley, to Xerox PARC and the rise of the PC, and now back to Google’s mobile robotics start-up, the neighborhood’s history traces how the region has moved back and forth in its efforts to alternately extend and replace humans, from AI to IA and back again.

There is no sign that identifies Google’s robot laboratory. Inside the entryway, however, stands an imposing ten-foot-high steel statue—of what? It doesn’t quite look like a robot. Maybe it is meant to signify some kind of alien creature. Maybe it is a replicant? The code name of Rubin’s project was “Replicant,” inspired, of course, by the movie Blade Runner. Rubin’s goal was to build and commercially introduce a humanoid robot that could move around in the world: a robot that could deliver packages, work in factories, provide elder care, and generally collaborate with and potentially replace human workers. He had set out to finish what had in effect begun nearby almost a half century earlier at the Stanford Artificial Intelligence Laboratory.

The earlier research spawned by SAIL had produced a generation of students like Ken Salisbury. As a young engineer, Salisbury viewed himself less as an “AI guy” and more as a “control person.” He was trained in the Norbert Wiener tradition and so didn’t believe that intelligent machines needed autonomy. He had been involved in automation long enough to see the shifting balance between human and machine, and he preferred to keep humans in the loop. He wanted to build a robot that, for example, could shake hands with you without crushing your hand. Luckily for Salisbury, autonomy was slow to arrive. Manipulation at the level humans take for granted—“pick up that red rag over there”—has remained a hard problem for autonomous robots.

Salisbury lived at the heart of the paradox described by Hans Moravec—things that are hardest for humans are easiest for machines, and vice versa. This paradox was first clarified by AI researchers in the 1980s, and Moravec had written about it in his book Mind Children: “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”14 John McCarthy would frame the problem by challenging his students to reach into their pocket, feel a coin, and identify that coin as a nickel. Build a robot that could do that! Decades later, Rodney Brooks was still beginning his lectures and talks with the same scenario. It was something that a human could do effortlessly. Despite machines that could play chess and Jeopardy! and drive cars, little progress had been made in the realms of touch and perception.

Salisbury was a product of the generation of students that emerged from SAIL during its heyday in the 1970s. While he was a graduate student at Stanford, he designed the Stanford/JPL hand, an early example of the evolution of robotic manipulators from jawed mechanical grippers into more articulated devices that mimicked human hands. His thesis had been about the geometric design of a robotic hand, but Salisbury was committed to building something that actually worked. He stayed up all night before commencement day to get a final finger moving.

He received his Ph.D. in 1982, just a year after Brooks. Both would ultimately migrate to MIT as young professors. There, Salisbury explored the science of touch because he thought it was key to a range of unsolved problems in robotics. At MIT he became friendly with Marvin Minsky and the two spent hours discussing and debating robot hands. Minsky wanted to build hands covered with sensors, but Salisbury felt durability was more important than perception, and many designs forced a trade-off between those two qualities.

While a professor at the MIT Artificial Intelligence Laboratory, he worked with a student, Thomas Massie, on a handheld controller that served as a computer interface, making three-dimensional images on a computer display something that people could touch and feel. The technology effectively blurred the line between the virtual computer world and the real world. Massie—who would later become a Tea Party congressman representing Kentucky—and his wife, both mechanical engineers, turned the idea into Sensable Devices, a company that created an inexpensive haptic—or touch—control device. After taking a sabbatical year to help found both Sensable and Intuitive Surgical, a robot surgery start-up based in Silicon Valley, Salisbury returned to Stanford, where he established a robotics laboratory in 1999.

In 2007, he created the PR1, or Personal Robot One, with his students Eric Berger and Keenan Wyrobek. The machine was a largely unnoticed tour de force: it could leave a building, buy coffee for Salisbury, and return. The robot asked Salisbury for some money, then made its way through a series of three heavy doors. It opened each of them by pulling the handle halfway, then turning sideways so it could fit through the opening. Then it found its way to an elevator, called it, checked to make sure that no humans were inside, entered the elevator, pressed the button for the third floor, and used visual cues to confirm that the elevator had indeed reached the correct floor. The robot then left the elevator, made its way to the coffee vendor, purchased coffee, and brought it back to the lab—without spilling it and before it got cold.

The PR1 looked a little like a giant coffee can with arms, motorized wheels for traction, and stereo cameras for vision. Building it cost about $300,000 and took roughly eighteen months. It was generally run by teleoperation except for specific preprogrammed tasks, such as fetching coffee or a beer. Capable of holding about eleven pounds in each arm, it could perform a variety of household chores. An impressive YouTube video shows the PR1 cleaning a living room. Like the Boston Dynamics Atlas, however, it was teleoperated, and that particular video was sped up eight times so that the robot appeared to move at human speed.15

The PR1 project emerged from Salisbury’s lab at the same time that Andrew Ng, a young Stanford professor and an expert in machine vision and statistical techniques, was working on a similar but more software-focused project, the Stanford Artificial Intelligence Robot, or STAIR. At one point Ng gave a talk describing STAIR to the Stanford Industrial Affiliates program. In the audience was Scott Hassan, the former Stanford graduate student who had done the original heavy lifting for Google as the first programmer of the PageRank algorithm, the basis for the company’s core search engine.

It’s time to build an AI robot, Ng told the group. He said his dream was to put a robot in every home. The idea resonated with Hassan. He had studied computer science first at the State University of New York at Buffalo, then entered graduate programs in computer science at both Washington University in St. Louis and Stanford, dropping out of both before receiving an advanced degree. Once he was on the West Coast, he had gotten involved with Brewster Kahle’s Internet Archive Project, which sought to save a copy of every Web page on the Internet.

Larry Page and Sergey Brin had given Hassan stock for programming PageRank, and Hassan also sold E-Groups, another of his information retrieval projects, to Yahoo! for almost a half-billion dollars. By then, he was a very wealthy Silicon Valley technologist looking for interesting projects.

In 2006 he backed both Ng and Salisbury and hired Salisbury’s students to join Willow Garage, a laboratory he’d already created to facilitate the next generation of robotics technology—like designing driverless cars. Hassan believed that building a home robot was a more marketable and achievable goal, so he set Willow Garage to work designing a PR2 robot to develop technology that he could ultimately introduce into more commercial projects.

Sebastian Thrun had begun building a network of connections in Silicon Valley after he arrived on a sabbatical from CMU several years earlier. One of those was Gary Bradski, an expert in machine vision at Intel Labs in Santa Clara. The company was the world’s largest chipmaker and had developed a manufacturing strategy called “copy exact,” a way of developing next-generation manufacturing techniques to make ever-smaller chips. Intel would develop a new technology at a prototype facility and then export that process to wherever it planned to produce the denser chips in volume. It was a system that required discipline, and Bradski was a bit of a “Wild Duck”—a term that IBM originally used to describe employees who refused to fly in formation—compared to typical engineers in Intel’s regimented semiconductor manufacturing culture.

A refugee from the high-flying finance world of “quants” on the East Coast, Bradski arrived at Intel in 1996 and was forced to spend a year doing boring grunt work, like developing an image-processing software library for factory automation applications. After paying his dues, he was moved to the chipmaker’s research laboratory and started researching interesting projects. Bradski had grown up in Palo Alto before leaving to study physics and artificial intelligence at Berkeley and Boston University. He returned because he had been bitten by the Silicon Valley entrepreneurial bug.

For a while he wrote academic research papers about machine vision, but he soon learned that there was no direct payoff. The papers garnered respect at places like Berkeley, Stanford, and MIT, but they didn’t resonate with the rest of Silicon Valley. Besides, he realized that what was special about Intel was its deep pockets. He decided he should be exploiting them. “I should do something that has higher leverage,” he thought.

In his first year at Intel he met some superstar Russian software designers who worked under contract for the chipmaker, and he realized that they could be an important resource for him. At the time, the open-source software movement was incredibly popular. His background was in computer vision, and so he put two and two together and decided to create a project to build a library of open-source machine vision software tools. The Linux operating system had already shown that when programmers worldwide have access to a common set of tools, everybody’s research becomes a lot easier. “I should give everyone that tool in vision research,” he decided.

While his boss was on sabbatical he launched OpenCV, or Open Source Computer Vision, a software library that made it easier for researchers to develop vision applications using Intel hardware. Bradski was a believer in an iconoclastic operating style that was sometimes attributed to Admiral Grace Hopper and was shared by many who liked getting things done inside large organizations. “Better to seek forgiveness than to ask permission” was his motto. Eventually OpenCV contained a library of more than 2,500 algorithms including both computer vision and machine-learning software. OpenCV also hosted programs that could recognize faces, identify objects, classify human motion, and so on. From his initial team of just a handful of Intel researchers, a user community grew to more than 47,000 people, and more than ten million copies of the toolset have been downloaded to date.
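To give a flavor of the kind of task OpenCV made routine, the following is a minimal sketch, in Python, of face detection using one of the Haar-cascade classifiers bundled with the library’s modern packages. The file names and parameter values are illustrative placeholders, not drawn from Bradski’s original project.

    # Illustrative sketch only: detect faces in a photograph with OpenCV's
    # Python bindings. "photo.jpg" and "faces_detected.jpg" are placeholders.
    import cv2

    # Recent opencv-python builds ship Haar-cascade files under cv2.data.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_cascade = cv2.CascadeClassifier(cascade_path)

    image = cv2.imread("photo.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # detectMultiScale scans the image at several scales and returns
    # bounding boxes around candidate faces.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("faces_detected.jpg", image)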

Gary Bradski created a popular computer vision software library and helped design robots. He would later leave robotics to work with a company seeking to build augmented reality glasses. (Photo © 2015 by Gary Bradski)

Realizing that he would one day leave Intel and would need a powerful toolset for his next project, Bradski developed a second agenda. OpenCV would be his calling card when he left his job at the chipmaker. Open-source software efforts were in favor inside Intel because the company wanted leverage in its difficult relationship with Microsoft. The two companies dominated the personal computing industry, but often clashed over issues of control, strategic direction, and ultimately revenue. For a while Bradski had tremendous support inside the laboratory: at one point, he had fifteen researchers on the OpenCV project. That moment was one of the high points of his career at Intel.

Then, Intel gave him a division award and told him, “All right, now you have to move on.” “What do you mean?” he responded to his managers. “This is a decadelong project.” Grudgingly, he did some other things, but he covertly kept the OpenCV project going on the side. That did not sit well inside the giant semiconductor firm. One of his Russian programmers was given a performance review demerit—“improvement required”—by management because he was associated with the program.

Intel’s refusal to see the value of the project left Bradski feeling disaffected. In 2001, Intel dropped its camera division, which pushed him to the edge. “More shortsighted bean counter thinking,” he decided. “Of course this is low-margin silicon, but this is a loss leader, so you can eventually profit from the whole thing!” He had no idea that the mobile computing and smartphone wave was just a half decade away, but he was right. Intel, in retrospect, had a history of trying new ideas and then canceling them before they could bear fruit. His frustration made him an easy recruit for Sebastian Thrun, who was then building his team at Stanford to create the Stanley autonomous vehicle for the 2005 DARPA competition.

They had struck up a relationship when Thrun was at Stanford on sabbatical in 2001. When Thrun returned in 2003 as a faculty member, Bradski was preparing to take his own sabbatical at EPFL, a Swiss research university in Lausanne. Thrun said, “Why don’t you come to Stanford instead?” Bradski faced a difficult decision. Switzerland would have offered him an academic feast, a chance to work on neural nets and evolutionary learning algorithms, and a great party. At the end of the day, he realized that a sabbatical at EPFL was a diversion for someone with entrepreneurial aspirations, and the nightmarish Swiss bureaucracy overwhelmed him: he should have started getting his kids into private school a year earlier, and renting a house in Lausanne was a challenge—one potential landlord told him there would be no showering after ten P.M. and he wouldn’t permit noisy children!

So Bradski switched gears and took his sabbatical at relatively laid-back Stanford. He taught courses and flirted with ideas for a new start-up. His first project involved building an advanced security camera. However, he ended up with a partner who was a poor match for the project, and it quickly turned into a bad marriage. Bradski backed out. By that time, his sabbatical was over, so he went back to work at Intel and managed a large research group. He quickly realized that management involved a lot of headaches and little interesting work, so he tried to pare down his group to a core team.

Bradski had previously been oblivious to the frustrations of other researchers, but now he noticed that engineers everywhere inside the company felt the same way. He joined an underground laboratory for the disaffected. Then, on a visit to Stanford, Thrun said, “Come out back to the parking lot.” Thrun showed Bradski Stanley, the secret project preparing to enter the second DARPA Grand Challenge. It was obviously the coolest thing around, and Bradski immediately fell in love with the idea. Back at Intel, he quickly pulled together a secret skunkworks group to help with the computer vision system for the car. He didn’t bother to ask permission. He hosted his design meetings during lunchtime and met with the Stanford team on Tuesdays.

There were immediately two problems. After Intel promised that it would not involve itself directly in the DARPA Grand Challenge, the company started sponsoring Red Whittaker’s CMU team. Bradski’s boss started getting complaints that Bradski was distracting people from their assigned work. “This could build up to be a firing offense,” his boss told him. “We’re not sponsoring the Stanford team and we’re not getting into robotics.” As a concession, Bradski’s boss told him he could continue to work on the project personally, but could not involve other Intel Labs researchers. By then, however, Bradski no longer cared about being fired. That made everything a lot easier, and the lunchtime meetings intensified.

The tensions at Intel came to a head two days before the race. The cars and teams had arrived in Primm, Nevada, a three-casino watering hole on the California-Nevada border. Bradski called a contact in Intel’s marketing department and said he needed an immediate decision about whether Intel was going to officially sponsor the Stanford car. A decal on the car would usually cost $100,000, but Thrun told him that Bradski’s team had donated so much volunteer labor that they could have the sponsorship for just $20,000. The Intel marketing guy loved the idea: sponsoring two cars would double Intel’s chance of backing a winner, but he balked at making an instant decision. “The money’s there, but I can’t just give it to you unilaterally,” the executive told him.

“Look, the cars are about to be sequestered, we have half an hour left,” Bradski responded.

It worked. “Okay, do it,” the executive said.

Because it was so late, there was no room left on the car except a passenger window—a brilliantly visible location. Stanley won the race and Intel had backed a winner, so that was a coup. Bradski had pulled himself back from the edge of unemployment.

The vision system contributed to Stanley’s success. The car relied on lasers that built a dynamic point cloud of the terrain around it and on digital cameras that fed machine vision algorithms. In the end, the cameras saw far enough ahead that Stanley could maintain speed without slowing down. And going fast, needless to say, was necessary to win.

The glory didn’t last long, however. Bradski had secured a small DARPA contract to research “cognitive architectures” with Thrun and Daphne Koller, another Stanford machine-learning expert. However, the DARPA program manager had announced his departure, which meant the grant was unlikely to be renewed, which in turn meant Bradski would have to look for funding elsewhere. Sure enough, Phase II was canceled as “too ambitious.”

Bradski was very intrigued by robotics, so he used some of his grant money to purchase a robot arm. The $20,000 purchase set off a small explosion inside Intel’s legal department. The grant money, they insisted, was restricted for hiring interns, not buying hardware, and he had to transfer the ownership of the robot arm away from Intel. Bradski gave the arm to the Stanford STAIR project, which was run by Andrew Ng. Ng was starting to explore the world of robotics with machine-learning ideas. Could they design a robot to load and unload a dishwasher? It became part of the mix leading to the PR1 robot that was brewing between Salisbury’s laboratory and Ng’s project.

Meanwhile, Bradski found Intel’s bureaucracy more and more overbearing. He knew it was time to leave and quickly negotiated a deal to join an Israeli machine vision start-up based in San Mateo. He took the OpenCV project with him. The machine vision start-up, however, turned out to be a less than perfect match. The Israelis loved conflict and Bradski was constantly butting heads with the CTO, a former sergeant in the Israeli Army. He would usually win the arguments, but at a cost. He began job hunting again after being at the new company for just a year.

It was hard to search for a job clandestinely. He toyed with working at Facebook, which had offered him a job, but the company wasn’t doing anything interesting in computer vision. “Come anyway,” they told him. “We’ll find something for you to do.” To Bradski, their recruiting seemed highly disorganized. He showed up for his interview and they told him he was late. He showed them the email that indicated that he was, in fact, on time.

“Well,” they said, “you were supposed to be down the street an hour ago.”

Down the street he found the building locked, closed, and dark. It occurred to him that perhaps this was some kind of weird job test, and that a camera might be following him to see what he would do. He kicked the door and finally someone came out. The man didn’t say anything, but it seemed obvious to Bradski that he had woken him up. The guy held the door open so Bradski could go inside, then walked off silently. Bradski sat down in the dark building, and before long an admin arrived and apologized for being late. There was no record of a scheduled interview, so he called the recruiter who had supposedly set everything up. After a lot of apologizing and some more runaround, Bradski had his interview with Facebook’s CTO. A few days later, he had a second interview with a higher-ranking executive. The Facebook offer would have given him a lot of stock, but going to work there didn’t make much sense. Miserable with the Israelis, Bradski realized he would also be miserable at Facebook, where he would most likely be forced to work on uninteresting projects. So he kept hedging. The longer he held out, the more stock Facebook offered. At that point, the job was probably worth millions of dollars, but it would have left him unhappy in what seemed like a pressure cooker.

One day, Andrew Ng called Bradski and told him he needed to meet an interesting new group of roboticists at Willow Garage. Founded by Hassan, it was more of a research lab than a start-up. Hassan was preparing to hire seventy to eighty roboticists to throw things against the wall and see what stuck. It fit within a certain Silicon Valley tradition; labs like Xerox PARC and Willow Garage were not intended to directly create products. Rather, they experimented with technologies that frequently led in unexpected directions. Xerox had created PARC in 1970, and Paul Allen had financed David Liddle to “do PARC right” when he established Interval Research in 1992. In each case the idea was to “live in the future” by building technologies that were not quite mature but soon would be. Now it looked like robotics was ripe for commercialization.

Initially Bradski was hesitant about going by for a quick lunchtime handshake. He would have to race down and back or the Israelis would notice his absence. Ng insisted. Bradski realized that Ng was usually right about these things and decided to give it a shot. Everything clicked. By the end of the afternoon, Bradski was still there and he no longer cared about his start-up. This was where he should be. At the end of the day, while he was still sitting in the Willow Garage parking lot, he called Facebook to say he wasn’t interested. Shortly afterward, he quit his start-up.

In December of 2007 Bradski was hired to run the vision group for the successor to Salisbury and Ng’s earlier robot experiments, morphing the PR1 into the PR2. They built the robot and then ran it through a series of tests. They wanted the robot to do more than retrieve a beer from the fridge. They “ran” a marathon, maneuvering the robot for twenty-six miles inside the company office while Google cofounder Sergey Brin was in attendance. Afterward, they instructed the robot to find and plug itself into ten wall sockets within an hour. “Now they can escape and fend for themselves,” Bradski told friends via email.

PR2 wasn’t the first mobile robot to plug itself in, however. That honor went to a mobile automaton called “The Beast,” designed at the Johns Hopkins Applied Physics Lab in 1960—but it could do little else.16 PR2 was Shakey reborn half a century later. This time, however, the robot was far more dexterous. Pieter Abbeel, a University of California at Berkeley roboticist, was given one of eight PR2s that were distributed to universities. With his students, he taught the machine to fold laundry—albeit very slowly.

Though the Willow Garage team had made a great deal of progress, their research revealed just how far they were from developing a sophisticated machine that could function autonomously in an ordinary home. Kurt Konolige, a veteran SRI roboticist whom Bradski had recruited to Willow Garage, had told him that these were decadelong technology development projects. They would need to refine each step dozens of times before they got everything right.

In the end, however, like Paul Allen, who had decided to pull the plug on Interval Research after just eight years of its planned ten-year life span, Scott Hassan proved not to have infinite patience. Bradski and Konolige looked on in dismay as the Willow Garage team held endless brainstorming sessions to try to come up with home robot ideas that they could commercialize relatively quickly. They both realized the lab was going to be closed. Bradski believed he knew what people really wanted in their homes—a French maid—and that wasn’t going to be possible anytime soon. In his meetings with Hassan, Bradski pleaded for his team to be permitted to focus instead on manufacturing robotics, but he was shot down every time. Hassan was dead-set on the home. Eventually, Konolige didn’t even bother to show up at one of the meetings—he went kayaking instead.

For a while Bradski tried to be a team player, but then he realized he was in danger of reentering the world of compromises that he had left at Intel.

“What the hell,” he thought. “This isn’t me. I need to do what I want.”

He started thinking about potential applications for industrial robotics integration, from moving boxes to picking up products with robot arms. After discussing robotics extensively with people in industry, he confirmed that companies were hungry for robots. He told Willow’s CEO that it was essential to have a plan B in case the home robot developments didn’t pan out. The executive grudgingly allowed Bradski to form a small group to work on industrial applications.

Combining robot arms with new machine vision technology, Bradski’s group made rapid progress, but he tried to keep word of the advances from Hassan. He knew that if word got out, the project would quickly be commercialized. He did not want to be kicked out of the Willow Garage “nest” before he was ready to launch the new venture. Finally, early in 2012, one of the programmers blabbed to the Willow Garage founder about their success and the industrial interest in robotics. Hassan sent the group an email: “I will fund this tomorrow, let’s meet on Friday morning.”

With Konolige and several others, and with start-up funding from Hassan, Bradski created Industrial Perception, Inc., a robotic arm company with a specific goal—loading and unloading boxes from trucks such as package delivery vehicles. After Bradski left to cofound Industrial Perception, Willow Garage gradually disintegrated, divvied up into five companies, several robot standards efforts, and a consulting group. The lab had failed, and home robots—robotic vacuum cleaners aside—remained a distant goal.

Bradski’s new company set up operations in an industrial neighborhood in South Palo Alto. The office was in a big garage, which featured one room of office cubicles and a large unfinished space where they set up stacks of boxes for the robots to endlessly load and unload. By this point, Industrial Perception had garnered interest from giant companies like Procter & Gamble, which was anxious to integrate automation technologies into its manufacturing and distribution operations. More importantly, Industrial Perception had a potential first customer: UPS, the giant package delivery firm, had a very specific application in mind—replacing human workers who loaded and unloaded their trucks.

Industrial Perception made an appearance at just one trade show, Automatica, in Chicago in January 2012. As it turned out, they didn’t even need that much publicity. A year later, Andy Rubin visited their offices. He was traveling the country, scouting and acquiring robotics firms. He told those he visited that in ten to fifteen years, Google would become the world’s delivery service for information and material goods. He needed machine vision and navigation technologies, and Industrial Perception had seamlessly integrated both into robotic arms that could move boxes. Rubin secretly acquired Industrial Perception along with Boston Dynamics and six other companies. The deals, treated as “nonmaterial” by Google, would not become public for more than six months. Even when the public found out about Google’s new ambitions, the company was circumspect about its plans. Just as with the Google car, the company would keep any broader visions to itself until it made sense to do otherwise.

For Rubin, however, the vision was short-lived. He tried to persuade Google to let him run his new start-up independently from what he now saw as a claustrophobic corporate culture. He lost that battle, so at the end of 2014 he left the company and moved on to create an incubator for new consumer electronics start-up ideas.

The majority of the Industrial Perception team was integrated into Google’s new robotics division. Bradski, however, turned out to be too much of a Wild Duck for Google as well—which was fortunate, because Hassan still had plans for him. He introduced Bradski to Rony Abovitz, a successful young roboticist who had recently sold Mako Surgical, a robotic surgery company that developed robots to support less-experienced surgeons. Abovitz had another, potentially even bigger idea, and he needed a machine vision expert.

Abovitz believed he could reinvent personal computing so it could serve as the ultimate tool for augmenting the human mind. If he was right, it would offer a clear path to merging the divergent worlds of artificial intelligence and augmentation. At Mako, Abovitz had used a range of technologies to digitally capture the skills of the world’s best surgeons and integrate them into a robotic assistant. This made it possible for a less-skilled surgeon to use a robotic template to get consistently good results using a difficult technique. The other major robot surgery company, Intuitive Surgical, was an SRI spin-off that sold teleoperated robotic instruments that allowed surgeons to operate remotely with great precision. Abovitz instead focused on the use of haptics—giving the robot’s operators a sense of touch—to attempt to construct a synthesis of human and robot, a surgeon more skilled than a human surgeon alone. It helped that Mako focused on operations that dealt with bone instead of soft tissue surgery (which, incidentally, was the focus of Intuitive’s research). Bone, a harder material, was much easier to “feel” with touch feedback. In this system, the machine and the human would each do what they were good at to create a powerful symbiosis.

It’s important to note that the resulting surgeon isn’t a “cyborg”—a half-man, half-machine. A bright line between the surgeon and the robot is maintained. In this case the human surgeon works with the separate aid of a robotic surgery tool. In contrast, a cyborg is a creature in which the line between human and machine becomes blurred. Abovitz believed that “Strong” artificial intelligence—a machine with human-level intelligence—was an extremely difficult problem and would take decades to develop, if it was ever possible. From his Mako experience designing a robot to aid a surgeon, he believed the most effective way to design systems was instead to use artificial intelligence technology to enhance human powers.

After selling Mako Surgical for $1.65 billion in late 2013, Abovitz set out to pursue his broader and more powerful augmentation idea—Magic Leap, a start-up with the modest goal of replacing both televisions and personal computers with a technology known as augmented reality. In 2013, the Magic Leap system worked only in a bulky helmet. However, the company’s goal was to shrink the system into a pair of glasses less obtrusive and many times more powerful than Google Glass. Instead of joining Google, Bradski went to work for Abovitz’s Magic Leap.

In 2014, there was already early evidence that Abovitz had made significant headway in uniting AI and IA. It could be seen in Gerald, a half-foot-high animated creature floating in an anonymous office complex in a Miami suburb. His four arms waved gently while he hung in space and walked in circles in front of a viewer. Gerald wasn’t really there. He was an animated projection that resembled a three-dimensional hologram. Users could watch him through transparent lenses that projected what computer scientists and optical engineers describe as a “digital light field” into the eyes of a human observer. Although Gerald doesn’t exist in the real world, Abovitz is trying to create an unobtrusive pair of computer-augmented glasses with which to view animations like him. And it doesn’t stop with imaginary creatures. In principle it is possible to project any visual object created with the technology at a fidelity that matches the visual acuity of the human eye. For example, as Abovitz describes the Magic Leap system, it will make it possible for someone wearing the glasses to simply gesture with their hands to create a high-resolution screen as crisp as a flat-panel television. If they are perfected, the glasses will replace not only our TVs and computers, but many of the other consumer electronics gadgets that surround us.

The glasses are based on a transparent array of tiny electronic light emitters installed in each lens to project the light field—and so the image—onto each retina. A computer-generated light field attempts to mimic the analog light field the human eye sees in the physical world: the sum of all of the light rays that form a visual scene. When photons bounce off objects in the world, they act like rivers of light, and the human neuro-optic system has evolved so that the lenses in our eyes adjust to that natural light field and focus on objects. Watching Gerald wander in space through a prototype of the Magic Leap glasses gives a hint that in the future it will be possible to visually merge computer-generated objects with the real world. Significantly, Abovitz claims that digital light field technology holds out the promise of circumventing the limitations that have plagued stereoscopic displays for decades: they cause motion sickness in some users and do not offer true depth-of-field perception.

By January of 2015 it had become clear that augmented reality was no longer a fringe idea. With great fanfare, Microsoft demonstrated a similar system called HoloLens based on a competing technology. Is it possible to imagine a world where the ubiquitous LCDs of today—televisions, computer monitors, smartphone screens—simply disappear? In Hollywood, Florida, Magic Leap’s demonstration suggests that workable augmented reality is much closer than we might assume. If the company is right, such an advance would also change the way we think about and experience augmentation and automation. In October 2014, Magic Leap’s technology received a significant boost when Google led a $542 million investment round in the tiny start-up.

The Magic Leap prototype glasses look like ordinary glasses, save for the thin cable that runs down a user’s back and connects to a small, smartphone-sized computer. These glasses don’t simply represent a break with existing display technologies. The technology behind them makes extensive use of artificial intelligence and machine vision to remake reality. The glasses are compelling for two reasons. First, their resolution will approach the resolving power of the human eye, a level that the best computer displays are only now reaching. As a result, the animations and imagery will surpass those of today’s best consumer video game systems. Second, they are the first indication that it is possible to seamlessly blend computer-generated imagery with physical reality. Until now, the limits of consumer computing technology have been defined by what is known as the “WIMP” graphical interface—the windows, icons, menus, and pointer of the Macintosh and Windows. The Magic Leap glasses, however, will introduce augmented reality as a way of revitalizing personal computing and, by extension, presenting new ways to augment the human mind.

In an augmented reality world, the “Web” will become the space that surrounds you. Cameras embedded in the glasses will recognize the objects in people’s environments, making it possible to annotate and possibly transform them. For example, reading a book might become a three-dimensional experience: images could float over the text, hyperlinks might be animated, readers could turn pages with the movement of their eyes, and there would be no need for limits to the size of a page.

Augmented reality is also a profoundly human-centered version of computing, in line with Xerox PARC computer scientist Mark Weiser’s original vision of “calm” ubiquitous computing. It will be a world in which computers “disappear” and everyday objects acquire “magical” powers. This presents a host of new and interesting ways for humans to interact with robots. The iPod and the iPhone, reimaginings of the phonograph and the telephone, were the first examples of this transition. Augmented reality would also make the idea of telepresence far more compelling. Two people separated by great distance could gain the illusion of sharing the same space. This would be a radical improvement on today’s videoconferencing and on awkward telepresence robots like Scott Hassan’s Beam, which place a human face on a mobile robot.

Gary Bradski left the world of robots to join Abovitz’s effort to build what will potentially become the most intimate and powerful augmentation technology. Now he spends his days refining computer vision technologies to fundamentally remake computing in a human-centered way. Like Bill Duvall and Terry Winograd, he has made the leap from AI to IA.