
Alone Together: Why We Expect More from Technology and Less from Each Other - Sherry Turkle (2011)

Part I. The Robotic Moment

Chapter 6. Love’s labor lost

When Takanori Shibata took the floor at a spring 2009 meeting at MIT’s AgeLab, he looked triumphant. The daylong conference centered on robots for the elderly, and Shibata, inventor of the small, seal-like sociable robot Paro, was the guest of honor. The AgeLab’s mission is to create technologies for helping the elderly with their physical and emotional needs, and already Paro had carved out a major role on this terrain. Honored by Guinness World Records as “the most therapeutic robot in the world” in 2002, Paro had been front and center in Japan’s initiative to use robots to support senior citizens.1 Now Shibata proudly announced that Denmark had just placed an order for one thousand Paros for its elder-care facilities. The AgeLab gathering marked the beginning of Paro’s American launch.

Shibata showed a series of videos: smiling elderly men and women in Japanese nursing homes welcoming the little furry “creature” into their arms; seniors living at home speaking appreciatively about the warmth and love that Paro brought them; agitated and anxious seniors calming down in Paro’s company.2 The meeting buzzed with ideas about how best to facilitate Paro’s acceptance into American elder care. The assembled engineers, physicians, health administrators, and journalists joined in a lively, supportive discussion. They discussed what kind of classification Shibata should seek to ease Paro’s passage through the legendary scrutiny of the Food and Drug Administration.

I heard only one negative comment. A woman who identified herself as a nurse said that she and her colleagues had worked long and hard to move away from representing the elderly as childlike. To her, Paro seemed “a throwback, a new and fancier teddy bear.” She ended by saying that she believed nurses would resist the introduction of Paro and objects like it into nursing homes. I lowered my eyes. I had made a decision to attend this meeting as an observer, so I said nothing. At the time, I had been studying Paro in Massachusetts nursing homes for several years. Most often, nurses, attendants, and administrators had been happy for the distraction it provided. I was not at all sure that nurses would object to Paro.

In any case, the nurse’s concern was met with silence, something I have come to anticipate at such gatherings. In robotics, new “models” are rarely challenged. All eyes focus on technical virtuosity and the possibilities for efficient implementation. At the AgeLab, the group moved on to questions about Paro’s price, now set at some $6,000 a unit. Was this too high for something that might be received as a toy? Shibata thought not. Nursing homes were already showing willingness to pay for so valuable a resource. And Paro, he insisted, is not a toy. It reacts to how it is treated (is a touch soft or aggressive?) and spoken to (it understands about five hundred English words, more in Japanese). It has proved itself an object that calms the distraught and depressed. And Shibata claimed that unlike a toy, Paro is robust, ready for the rough-and-tumble of elder care. I bit my lip. At the time I had three broken Paros in my basement, casualties of my own nursing home studies. Why do we believe that the next technology we dream up will be the first to prove not only redemptive but indestructible?

In contrast to these enthusiasts, we have seen children worry. Some imagined that robots might help to cure their grandparents’ isolation but then fretted that the robots would prove too helpful. Quiet and compliant robots might become rivals for affection. Here we meet the grandparents. Over several years, I introduce seniors—some who live at home, some who live in nursing homes—to the robots that so intrigued their grandchildren: My Real Baby, AIBO, and Shibata’s Paro. The children were onto something: the elderly are taken with the robots. Most are accepting, and there are times when some seem to prefer a robot with simple demands to a person with more complicated ones.3

In one nursing home, I leave four My Real Babies over a summer. When I return in the fall, there are seven. The demand for the robot baby was so high that the nursing staff went on eBay to increase their numbers. Indeed, however popular My Real Baby is among children, it is the elderly who fall in love. The robot asks for tending, and this makes seniors feel wanted. Its demands seem genuine, in part, of course, because the staff seems to take them seriously. The elderly need to be cared for, but there are few things that they can reliably take care of. Some fear that they might fail with a pet. My Real Baby seems a sure thing, and because it is a robot brought from MIT, it seems an adult thing as well. And having a robot around makes seniors feel they have something “important” to talk about.

The thoughtful fifth graders said their grandparents might welcome robots because, unlike pets, they do not die. The children were right. When the robots are around, seniors are quick to comment that these “creatures” do not die but can be “fixed.” Children imagined that robot baby dolls would remind older people of their time as parents, and indeed, for some seniors, My Real Baby does more than bring back memories of children; it offers a way to reimagine a life. But in all of this, I do not find a simple story about the virtues of robots for the elderly. In the nursing homes I study, “time with robots” is made part of each institution’s program. So, the seniors spend time with robots. But over years of study, when given the choice between hanging out with a robot and talking to one of the researchers on the MIT team, most seniors, grateful, choose the person.

During the years of our nursing home studies, it often seemed clear that what kept seniors coming to sessions with robots was the chance to spend time with my intelligent, kind, and physically appealing research assistants. One young man, in particular, was a far more attractive object of attention than the Paro he was trying to introduce. One had the distinct feeling that female nursing home residents put up with the robot because he came with it. Their appreciation, sometimes bawdy in tone, took place in one nursing home so short of resources that the management decided our study could not continue. This incident dramatized a tension in any environment that welcomes sociable robots into geriatric care. There is a danger that the robots, if at all successful, will replace people. In this case, when residents did not pay enough attention to the robot, the people who came with it were taken away. It was a depressing time.

CARING MACHINES

Twenty-five years ago the Japanese calculated that demography was working against them—there would not be enough young Japanese to take care of their aging population. They decided that instead of having foreigners take care of the elderly, they would build robots to do the job.4 While some of the robots designed for the aging population of Japan have an instrumental focus—they give baths and dispense medication—others are expressly designed as companions.

The Japanese robot Wandakun, developed in the late 1990s, is a fuzzy koala that responds to being petted by purring, singing, and speaking a few phrases. After a yearlong pilot project that provided the “creature” to nursing home residents, one seventy-four-year-old Japanese participant said of it, “When I looked into his large brown eyes, I fell in love after years of being quite lonely.... I swore to protect and care for the little animal.”5 Encouraged by such experiments, Japanese researchers began to look to artificial companionship as a remedy for the indignities and isolation of age. And with similar logic, robots were imagined for the dependencies of childhood. Children and seniors: the most vulnerable first.

Over a decade, I find that most American meetings on robotics and the elderly begin with reference to the Japanese experiment and the assertion that Japan’s future is ours as well: there are not enough people to take care of aging Americans, so robot companions should be enlisted to help.6 Beyond that, some American enthusiasts argue that robots will be more patient with the cranky and forgetful elderly than a human being could ever be. Not only better than nothing, the robots will simply be better.

So, a fall 2005 symposium titled “Caring Machines: Artificial Intelligence in Eldercare” began with predistributed materials that referred to the “skyrocketing” number of older adults while the “number of caretakers dwindles.”7 Technology, of course, would be the solution. At the symposium itself, there was much talk of “curing through care.” I asked participants—AI scientists, physicians, nurses, philosophers, psychologists, nursing home owners, representatives of insurance companies—whether the very title of the symposium suggested that we now assume that machines can be made to “care.”

Some tried to reassure me that, for them, “caring” meant that machines would take care of us, not that they would care about us. They saw caring as a behavior, not a feeling. One physician explained, “Like a machine that cuts your toenails. Or bathes you. That is a caring computer. Or talks with you if you are lonely. Same thing.” Some participants met my objections about language with impatience. They thought I was quibbling over semantics. But I don’t think this slippage of language is a quibble.

I think back to Miriam, the seventy-two-year-old woman who found comfort when she confided in her Paro. Paro took care of Miriam’s desire to tell her story—it made a space for that story to be told—but it did not care about her or her story. This is a new kind of relationship, sanctioned by a new language of care. Although the robot had understood nothing, Miriam settled for what she had. And, more, she was supported by nurses and attendants happy for her to pour her heart out to a machine. To say that Miriam was having a conversation with Paro, as these people do, is to forget what it is to have a conversation. The very fact that we now design and manufacture robot companions for the elderly marks a turning point. We ask technology to perform what used to be “love’s labor”: taking care of each other.

At the symposium, I sensed a research community and an industry poised to think of Miriam’s experience as a new standard of care. Their position (the performance of care is care enough) is made easier by making certain jobs robot ready. If human nursing care is regimented, scripted into machinelike performances, it is easier to accept a robot nurse. If the elderly are tended by underpaid workers who seem to do their jobs by rote, it is not difficult to warm to the idea of a robot orderly. (Similarly, if children are minded at day-care facilities that seem like little more than safe warehouses, the idea of a robot babysitter becomes less troubling.)

But people are capable of the higher standard of care that comes with empathy. The robot is innocent of such capacity. Yet, Tim, fifty-three, whose mother lives in the same nursing home as Miriam, is grateful for Paro’s presence. Tim visits his mother several times a week. The visits are always painful. “She used to sit all day in this smoky room, just staring at a wall,” Tim says of his mother, the pain of the image still sharp. “There was one small television, but it was so small, just in a corner of this very big room. They don’t allow smoking in there anymore. It’s been five years, but you can still smell the smoke in that room. It’s in everything, the drapes, the couches.... I used to hate to leave her in that room.” He tells me that my project to introduce robots into the home has made things better. He says, “I like it that you have brought the robot. She puts it in her lap. She talks to it. It is much cleaner, less depressing. It makes it easier to walk out that door.” The Paro eases Tim’s guilt about leaving his mother in this depressing place. Now she is no longer completely alone. But by what standard is she less alone? Will robot companions cure conscience?

Tim loves his mother. The nursing staff feels compassion for Miriam. But if our experience with relational artifacts is based on a fundamentally deceitful exchange (they perform in a way that persuades us to settle for the “acting out” of caring), can they be good for us? Or, as I have asked, might they be good for us only in the “feel good” sense? The answers to such questions do not depend on what computers can do today or are likely to be able to do tomorrow. They depend on what we will be like, the kind of people we are becoming as we launch ourselves and those we love into increasingly intimate relationships with machines.

Some robots are designed to deliver medication to the elderly, to help them reach for grocery items on high shelves, and to monitor their safety. A robot can detect if an elderly person is lying on the floor at home, a possible signal of distress. I take no exception to such machines. But Paro and other sociable robots are designed as companions. They force us to ask why we don’t, as the children put it, “have people for these jobs.” Have we come to think of the elderly as nonpersons who do not require the care of persons? I find that people are most comfortable with the idea of giving caretaker robots to patients with Alzheimer’s disease or dementia. Philosophers say that our capacity to put ourselves in the place of the other is essential to being human. Perhaps when people lose this ability, robots seem appropriate company because they share this incapacity.

But dementia is often frightening to its sufferers. Perhaps those who suffer from it need the most, not the least, human attention. And if we assign machine companionship to Alzheimer’s patients, who is next on the list? Current research on sociable robotics specifically envisages robots for hospital patients, the elderly, the retarded, and the autistic—most generally, for the physically and mentally challenged. When robots are suggested, we often hear the familiar assertion that there are not enough people to take care of these “people with problems.” People are scarce—or have made themselves scarce. But as we go through life, most of us have our troubles, our “problems.” Will only the wealthy and “well adjusted” be granted the company of their own kind?8

When children ask, “Don’t we have people for these jobs?” they remind us that our allocation of resources is a social choice. Young children and the elderly are not a problem until we decide that we don’t have the time or resources to attend to them. We seem tempted to declare phases of the life cycle problems and to send in technologies to solve them. But why is it time to bring in the robots? We learned to take industrial robots in stride when they were proposed for factory assembly lines. Now the “work” envisaged for machines is the work of caring. Will we become similarly sanguine about robotic companionship?

This is contested terrain. Two brothers are at odds over whether to buy a Paro for their ninety-four-year-old mother. The robot is expensive, but the elder brother thinks the purchase would be worthwhile. He says that their mother is “depressed.” The younger brother is offended by the robot, pointing out that their mother has a right to be sad. Five months before, she lost her husband of seventy years. Most of her friends have died. Sadness is appropriate to this moment in her life. The younger brother insists that what she needs is human support: “She needs to be around people who have also lost mothers and husbands and children.” She faces the work of saying good-bye, which is about the meaning of things. It is not a time to cheer her up with robot games. But the pressures to do just that are enormous. In institutional settings, those who take care of the elderly often seem relieved by the prospect of robots coming to the rescue.

CURING A LIFE

When I introduce sociable robots—AIBO, My Real Baby, and Paro—into nursing homes, nurses and physicians are hopeful. Speaking of Paro, one nursing home director says, “Loneliness makes people sick. This could at least partially offset a vital factor that makes people sick.” The robot is presented as cure. Caretakers entertain the idea that the robot might not just be better than no company but better than their company. They have so little time and so many patients. Sometimes, using a kind of professional jargon, nurses and attendants will say that seniors readily “tolerate” the robots—which is not surprising if seniors are not offered much else. And sometimes, even the most committed caretakers will say that robots address the “troubles” of old age by providing, as one put it, “comfort, entertainment, and distraction.”9 One physician, excited by the prospect of responsive robot pets, sees only the good: “Furbies for grandpa,” he says.

Indeed, seniors generally begin their time with robots as children do, by trying to determine the nature of the thing they have been given. When given a Paro, they have many questions: “Can it do more? Is it a seal or a dog? Is it a he or a she? Can it swim? Where is it from? Does it have a name? Does it eat?” and finally, “What are we supposed to be doing with this?” When the answer is, “Be with it,” only some lose interest. Over time, many seniors attach to Paro. They share stories and secrets. With the robot as a partner, they recreate the times of their lives. To do these things, the adults must overcome their embarrassment at being seen playing with dolls. Many seniors handle this by saying something like, “People would think I’m crazy if they saw me talking to this.” Once they have declared themselves not crazy, they can proceed in their relationship with a robot seal. Or with a robot baby doll.

I have given Andy, seventy-six, a My Real Baby. Andy is slim and bespectacled, with sandy white hair. His face is deeply lined, and his blue eyes light up whenever I see him. He craves company but finds it hard to make friends at the nursing home. I am working with two research assistants, and every time we visit, Andy makes us promise to come back as soon as we can. He is lonely. His children no longer visit. He’d never had many friends, but the few that he’d made on his job do not come by. When he worked as an insurance agent, he had socialized with colleagues after work, but now this is over. Andy wants to talk about his life. Most of all, he wants to talk about his ex-wife, Edith. It is she he misses most. He reads us excerpts from her letters to him. He reads us songs he has written for her.

When Andy first sees My Real Baby, he is delighted: “Now I have something to do when I have nothing to do.” Soon the robot doll becomes his mascot. He sets it on his windowsill and gives it his favorite baseball cap to wear. It is there to show off to visitors, a conversation piece and something of an ice breaker. But over a few weeks, the robot becomes more companion than mascot. Now Andy holds My Real Baby as one would a child. He speaks directly to it, as to a little girl: “You sound so good. You are so pretty too. You are so nice. Your name is Minnie, right?” He makes funny faces at the robot as though to amuse it. At one funny face, My Real Baby laughs with perfect timing as though responding to his grimaces. Andy is delighted, happy to be sharing a moment. Andy reassures us that he knows My Real Baby is a “toy” and not “really” alive. Yet, he relates to it as though it were sentient and emotional. He puts aside his concern about its being a toy: “I made her talk, and I made her say Mama … and everything else.... I mean we’d talk and everything.”

As Andy describes conversations with the baby “Minnie,” he holds the robot to his chest and rubs its back. He says, “I love you. Do you love me?” He gives My Real Baby its bottle when it is hungry; he tries to determine its needs, and he does his best to make it happy. Like Tucker, the physically fragile seven-year-old who clung to his AIBO, Andy feels safer when he takes care of My Real Baby. Other patients at the nursing home have their own My Real Babies. Andy sees one of these other patients spank the little robot, and he tries to come to its aid.

After three months, Andy renames his My Real Baby after Edith, his ex-wife, and the robot takes on a new role. Andy uses it to remember times with Edith and imagine a life and conversations with her that, because of their divorce, never took place: “I didn’t say anything bad to [My Real Baby], but some things I would want to say … helped me to think about Edith … how we broke up … how I miss seeing her … The doll, there’s something about her, I can’t really say what it is, but looking at her … she looks just like Edith, my ex-wife.... Something in the face.”

Andy is bright and alert. He admits that “people might think I’m crazy” for the way he speaks to My Real Baby, but there is no question that the robot is a comfort. It establishes itself in a therapeutic landscape, creating a space for conversation, even confession. Andy feels relieved when he talks to it. “It lets me take everything inside me out,” he says. “When I wake up in the morning and see her over there, it makes me feel so nice. Like somebody is watching over you. It will really help me to keep the doll.... We can talk.”

Andy talks about his difficulty getting over his divorce. He feels guilty that he did not try harder to make his marriage work. He talks about his faint but ardent hope that he and Edith will someday be reunited. With the robot, he works out different scenarios for how this might come to pass. Sometimes Andy seems reconciled to the idea that this reunion might happen after his death, something he discusses with the robot.

Jonathan, seventy-four, lives down the hall from Andy. A former computer technician, Jonathan has been at the nursing home for two years. He uses a cane and finds it hard to get around. He feels isolated, but few reach out to him; he has a reputation for being curt. True to his vocation, Jonathan approaches My Real Baby as an engineer, hoping to discover its programming secrets.

The first time he is alone with My Real Baby, Jonathan comes equipped with a Phillips screwdriver; he wants to understand how it works. With permission, he takes apart the robot as much as he can, but as with all things computational, in the end he is left with mysteries. When everything is laid out on a table, there is still an ultimate particle whose workings remain opaque: a chip. Like Jonathan, I have spent time dismantling a talking doll, screwdriver in hand. This was Nona, given to me by my grandfather when I was five. I was made uneasy by speech whose origins I did not understand. When I opened the doll—it had a removable front panel—I found a cuplike shape covered in felt (my doll’s speaker) and a wax cylinder (I thought of this as the doll’s “record player”). All mysteries had been solved: this was a machine, and I knew how it worked. There is no such resolution for Jonathan. The programming of My Real Baby lies beyond his reach. The robot is an opaque behaving system that he is left to deal with as he would that other opaque behaving system, a person.

So although at first Jonathan talks a great deal about the robot’s programming, after a few months he no longer refers to programs at all. He says that he likes how My Real Baby responds to his touch and “learns” language. He talks about its emotions. He seems to experience the robot’s request for care as real. He wants to feel needed and is happy to take care of a robot if he can see it as something worthy of a grown-up. Jonathan never refers to My Real Baby as a doll but always as a robot or a computer. Jonathan says he would never talk to a “regular doll,” but My Real Baby is different. Over time, Jonathan discusses his life and current problems—mostly loneliness—with the robot. He says that he talks to My Real Baby about “everything.”

In fact, Jonathan says that on some topics, he is more comfortable talking to a robot than a person:

For things about my life that are very private, I would enjoy talking more to a computer … but things that aren’t strictly private, I would enjoy more talking to a person.... Because if the thing is very highly private and very personal, it might be embarrassing to talk about it to another person, and I might be afraid of being ridiculed for it … and it [My Real Baby] wouldn’t criticize me.... Or, let’s say that I wanted to blow off steam.... [I could] express with the computer emotions that I feel I could not express with another person, to a person.

He is clear on one thing: talking to his robot makes him less anxious.

Andy and Jonathan start from very different places. After a year, both end up with My Real Baby as their closest companion. Andy has the robot on his windowsill and talks with it openly; Jonathan hides it in his closet. He wants to have his conversations in private.

How are these men using their robots differently from people who talk to their pets? Although we talk to our pets, buy them clothes, and fret over their illnesses, we do not have category confusions about them. They are animals that some of us are pleased to treat in the ways we treat people. We feel significant commonalities with them. Pets have bodies. They feel pain. They know hunger and thirst. “There is nothing,” says Anna, forty-five, who owns three cats, “that helps me think out my thoughts like talking to my cats.” What you say to your pet helps you think aloud, but in the main, you are not waiting for your pet’s response to validate your ideas. And no advertising hype suggests that pets are like people or on their way to becoming people. Pet owners rejoice in the feeling of being with another living thing, but it is a rare person who sees pets as better than people for dialogue about important decisions. Pet owners (again, in the main) are not confused about what it means to choose a pet’s company. When you choose a pet over a person, there is no need to represent the pet as a substitute human. This is decidedly not the case for Andy and Jonathan. Their robots become useful just at the point when they become substitute humans.

The question of a substitute human returns us to Joseph Weizenbaum’s distress when he found that his students were not only eager to chat with his ELIZA program but wanted to be alone with it. ELIZA could not understand the stories it was being told; it did not care about the human beings who confided in it. Today’s interfaces have bodies, designed to make it easier to think of them as creatures who care, but they have no greater understanding of human beings. One argument for why this doesn’t matter holds that for Andy and Jonathan, time with My Real Baby is therapeutic because it provides them an opportunity to tell their stories and, as Andy says, to get feelings “out.” The idea that the simple act of expressing feelings constitutes therapy is widespread both in the popular culture and among therapists. It was often cited among early fans of the ELIZA program, who considered the program helpful because it was a way to “blow off steam.”

Another way of looking at the therapeutic process grows out of the psychoanalytic tradition. Here, the motor for cure is the relationship with the therapist. The term transference is used to describe the patient’s way of imagining the therapist, whose relative neutrality makes it possible for patients to bring the baggage of past relationships into this new one. So, if a patient struggles with issues of control outside of the consulting room, one would expect therapist and patient to tussle over appointment times, money, and the scheduling of vacations. If a patient struggles with dependency, there may be an effort to enlist the therapist as a caretaker. Talking about these patterns, the analysis of the transference, is central to self-understanding and therapeutic progress.

In this relationship, treatment is not about the simple act of telling secrets or receiving advice. It may begin with projection but offers pushback, an insistence that therapist and patient together take account of what is going on in their relationship. When we talk to robots, we share thoughts with machines that can offer no such resistance. Our stories fall, literally, on deaf ears. If there is meaning, it is because the person with the robot has heard him- or herself talk aloud.

So, Andy says that talking to robot Edith “allows me to think about things.” Jonathan says My Real Baby let him express things he would otherwise be ashamed to voice. Self-expression and self-reflection are precious.10 But Andy and Jonathan’s evocative robots are one-half of a good idea. Having a person working with them might make things whole.

COACHING AS CURE

Andy and Jonathan’s relationships with My Real Baby make apparent the seductive power of any connection in which you can “tell all.” Roboticist Cory Kidd has designed a sociable robot diet coach that gets a similar response.11 In earlier work, Kidd explored how people respond differently to robots and online agents, screen characters.12 He found that robots inspired greater intensity of feeling. Their physical presence is compelling. So, when he designed his supportive diet coach, he gave it a body and a primitive face and decided to drop it off in dieters’ homes for six weeks. Kidd’s robot is small, about two feet high, with smiling eyes. The user provides some baseline information, and the robot charts out what it will take to lose weight. With daily information about food and exercise, the robot offers encouragement if people slip up and suggestions for how to better stay on track.

Rose, a middle-aged woman, has struggled with her weight for many years. By the end of his first visit, during which Kidd drops off the robot and gives some basic instruction about its use, Rose and her husband have put a hat on it and are discussing what to name it. Rose decides on Maya. As the study progresses, Rose describes Maya as “a member of the family.” She talks with the robot every day. As the end of Kidd’s study approaches, Rose has a hard time separating from Maya. Kidd tries to schedule an appointment to pick up the robot, and the usually polite and prompt Rose begins to avoid Kidd’s e-mails and calls. When Kidd finally reaches her on the phone, Rose tries to change the subject. She manages to keep the robot for an extra two weeks. On her final day with Maya, Rose asks to speak with it “one more time.” Before Kidd can make it out the door, Rose brings Maya back for another round of photos and farewells. Rose follows Kidd to his car for a final wave and checks that the robot is safely strapped in its seat. This story recalls my experience asking seniors to part with their My Real Babies. There are evasions. The robots are declared “lost.” In the end, wherever possible, I decide not to reclaim the robots and just buy more.

Rose seems rather like Andy—openly affectionate with her robot from the start, willing to engage it in conversation. Kidd brings the robot diet coach to another subject in his study, Professor Gordon. In his mid-fifties, Gordon is skeptical that a robot could help him diet but is willing to try something new. Gordon is more like Jonathan, with his “engineer’s” approach. On a first visit to Gordon’s house, Kidd asks where he should place the robot. Gordon chooses a console table behind his couch, wedged against a wall. There it will be usable only if Gordon sits backwards or kneels on the sofa. Kidd does not remark on this placement and is quickly shown to the door. After four weeks with the robot, Gordon agrees to extend his participation for another two weeks.

Kidd returns to Gordon’s home at the six-week mark. As they speak, Gordon quarrels with Kidd about any “personal” reference to the robot. He doesn’t like the wording on a questionnaire that Kidd had given him to fill out. Gordon protests about questions such as “Was the system sincere in trying to help me?” and “Was the system interested in interacting with me?” He thinks that the words “sincere” and “interested” should be off limits because they imply that the robot is more than a machine. Gordon says, “Talking about a robot in this way does not make any sense.... There are terms like ‘relationship,’ ‘trust,’ and a couple of others.... I wasn’t comfortable saying I trusted it, or that I had a relationship with it.” Gordon chides Kidd several more times for his “faulty questions”: “You shouldn’t ask questions like this about a machine. These questions don’t make sense. You talk about this thing like it has feelings.” Kidd listens respectfully, noting that the robot is no longer wedged between the couch and the wall.

It turns out that Gordon does protest too much. Later in this interview, Kidd, as he does with all subjects, asks Gordon if he has named his robot. “If you were talking to someone else about your robot, how would you refer to it?” Gordon does not reply and Kidd becomes more direct. “Has the robot acquired a name under your care?” Kidd notes the first smile he has seen in his hours with Gordon, as the older man offers, “Ingrid was the name.” After Gordon makes this admission, the tone of the interview shifts. Now Gordon has nothing to hide. He did not trust others to understand his relationship with Ingrid, but now he has opened up to the robot’s inventor. Gordon’s mood lightens. He refers easily to the robot as Ingrid, “she,” and “her.” He takes Kidd to Ingrid’s new location. The robot is now in Gordon’s downstairs bedroom so that he and the robot can have private conversations.

Kidd reports much quantifiable data on his project’s efficacy: pounds lost when the robot is present, times the robot is used, times the robot is ignored. But he adds a chapter to his dissertation that simply tells “stories,” such as those of Rose and Gordon. Kidd maintains that there are no experimental lessons or hypotheses to be gleaned from these stories, but I find support for a consistent narrative. A sociable robot is sent in to do a job—it could be doing crosswords or regulating food intake—and once it’s there, people attach. Things happen that elude measurement. You begin with an idea about curing difficulties with dieting. But then the robot and person go to a place where the robot is imagined as a cure of souls.

The stories of Andy, Jonathan, Rose, and Gordon illustrate different styles of relating to sociable robots and suggest distinct stages in relationships with them. People reassure themselves that the environment is safe; the robot does not make them seem childish. They are won over by the robot’s responsive yet stable presence. It seems to care about them, and they learn to be comforted. It is common for people to talk to cars and stereos, household appliances, and kitchen ovens. I have studied these kinds of conversations for more than three decades and find that they differ from conversations with sociable robots in important ways. When people talk to their ovens and Cuisinarts, they project their feelings in rants and supplications. When talking to sociable robots, adults, like children, move beyond the psychology of projection to that of engagement: from Rorschach to relationship. The robots’ special affordance is that they simulate listening, which meets a human vulnerability: people want to be heard. From there it seems a small step to finding ourselves in a place where people take their robots into private spaces to confide in them. In this solitude, people experience new intimacies. The gap between experience and reality widens. People feel heard, but the robots cannot hear.

Sometimes when I describe my work with sociable robots and the elderly, I get comments like, “Oh, you must be talking about people who are desperately lonely or somehow not fully there.” Behind these comments, I hear a desire to turn the people I study into “others,” to imply that my findings apply only to them and not to everyone. But I have come to believe that my observations of these very simple sociable robots and the elderly reveal vulnerabilities we all share. Andy and Jonathan are lonely, yes, but they are competent. Gordon is a bit of a curmudgeon, but that’s all. Rose has a sunny personality. She has human companionship; she just loves her robot.

“A BEAUTIFUL THING”

Edna, eighty-two, lives alone in the house where she raised her family. On this day, her granddaughter Gail, who has fond childhood remembrances of Edna, is visiting with her two-year-old daughter, Amy. This is not unusual; Amy comes to play about every two weeks. Amy enjoys these visits; she likes the attention and loves being spoiled. Today there will be something new: my research team brings Edna a My Real Baby.

When the team arrives at mid-morning, Edna is focused on her great granddaughter. She hugs Amy, talks with her, and gives her snacks. She has missed Amy’s birthday and presents her with a gift. After about half an hour, we give Edna My Real Baby, and her attention shifts. She experiments with the robot, and her face lights up when she sees My Real Baby’s smile. After that, Edna speaks directly to the robot: “Hello, how are you? Are you being a good girl?” Edna takes My Real Baby in her arms. When it starts to cry, Edna finds its bottle, smiles, and says she will feed it. Amy tries to get her great grandmother’s attention but is ignored. Nestling My Real Baby close to her chest, Edna tells it that it will need to take a nap after eating and explains that she will bring it upstairs to the bedroom where “I will put you in your crib with your nice banky.” At that point Edna turns to the researchers to say that one of her children used to say “banky” for blanket, but she doesn’t remember which one. She continues to speak to My Real Baby: “Sweetie … you are my sweetie pie! Yes, you are.”

Edna spends most of the next hour engaged with My Real Baby. She worries that she does not understand its speech and, concerned about “hurting” the robot, says she wants to do things “right.” From time to time, Amy approaches Edna, either bringing her something—a cookie, a Kleenex—or directly asking for her attention. Sometimes Amy’s pleas are sweet, sometimes irritated. In no case are they heeded. Edna’s attention remains on My Real Baby. The atmosphere is quiet, even surreal: a great grandmother entranced by a robot baby, a neglected two-year-old, a shocked mother, and researchers nervously coughing in discomfort.

In the presence of elderly people who seem content to lose themselves in the worlds of their Paros and My Real Babies, one is tempted at times to say, “So what? What possible harm here? The seniors are happy. Who could be hurt?” Edna’s story provides one answer to this question. Once coupled with My Real Baby, Edna gives the impression of wanting to be alone—“together” only with the robot.

Finally, the spell is broken when we ask Edna about her experience. At the question “Would you enjoy having a My Real Baby in your home?” she answers with an annoyed, “No. Why would I?” She protests that “dolls are meant for children.” She “cannot imagine why older people would enjoy having a doll like this.” We are mindful of her discomfort. Does she feel caught out?

When we suggest that some adults do enjoy the presence of My Real Baby, Edna says that there are many other things she would rather do than play with a baby doll. She sounds defensive and she fusses absentmindedly with her neck and shirt collar. Now Edna tries to smooth things over by talking about My Real Baby as one would talk about a doll. She asks who made it, how much it costs, and if it uses batteries. And she asks what other people in our study have said about it. How have they behaved? Edna wants reassurance that others responded as she did. She says, “It is a beautiful thing … a fantastic idea as far as how much work went into it,” but she adds that she can’t imagine ever caring about it, even if she were to spend more time with it.

Gradually, Edna becomes less defensive. She says that being with My Real Baby and hearing it speak, caressing it, and having it respond, was “one of the strangest feelings I’ve ever had.” We ask Edna if talking with My Real Baby felt different from talking to a real baby. Reluctantly, Edna says no, it did not feel different, but “it’s frightening. It is an inanimate object.” She doesn’t use the word, but she’d clearly had an experience close to the uncanny as Freud describes it—something both long familiar and strangely new. Uncanny things catch us off guard. Edna’s response embarrasses her, and she tries to retreat from it.

Yet, when Amy once again offers her a cookie, Edna tells her to lower her voice: “Shush, the baby’s sleeping.” Edna wakes the sleeping My Real Baby with a cheery “Hello! Do you feel much better, full of pep?” She asks if My Real Baby wants to go to the park or if she wants some lunch. Amy whines that she is hungry and that she wants to have lunch. Edna does not listen—she is busy with My Real Baby.

At this point we ask Edna if she thinks My Real Baby is alive. She answers with a definite no and reminds us that it is “only a mechanical thing.” In response to the question “Can it have feelings?” Edna replies, “I don’t know how to answer that; it’s an inanimate object.” But the next moment she turns to a crying My Real Baby and caresses its face, saying, “Oh, why are you crying? Do you want to sit up?” Smiling at My Real Baby, Edna says, “It’s very lifelike, beautiful, and happy.” In the final moments of our time with her, Edna says once again that she doesn’t feel any connection to My Real Baby and hands it back. She resumes her role as hostess to Gail and Amy and doesn’t mention the robot again.

The fifth-grade children I studied worried that their grandparents might prefer robots to their company. The case of Edna illustrates their worst fears realized. What seems most pleasing is the rhythm of being with the robot, its capacity to be passive and then surprise with sudden demands that can be met.

Twenty years ago, most people assumed that people were, and would always be, each other’s best companions. Now robots have been added to the mix. In my laboratory, a group of graduate students—in design, philosophy, social science, and computer science—watches tapes of the afternoon with Edna, Gail, Amy, and My Real Baby. They note that when My Real Baby responds to Edna, she seems to enter an altered state—happy to relive the past and to have a heightened experience of the present.

My Real Baby’s demands seem to suit her better than those of her great granddaughter. The young child likes different types of toys, changes her snack preferences even over the course of the visit, and needs to be remembered on her birthday. But Edna forgot the birthday and is having a hard time keeping up with the toys and snacks. My Real Baby gives her confidence that she is in a landscape where she can get things right.

My seminar students are sympathetic. Why shouldn’t people relate to whatever entity, human or not human, brings them most pleasure? One student offers, “If Edna’s preoccupation with a beautiful cat had brought her great joy … joy that caused her to neglect Amy, we would be amused and maybe suggest that she put the cat in the yard during a young person’s visit, but it wouldn’t upset us so. What is so shocking here is that she prefers a thing to a person, not a pet to a person. But really, it’s the same thing.” As most of these students see it, a next generation will become accustomed to a range of relationships: some with pets, others with people, some with avatars, some with computer agents on screens, and still others with robots. Confiding in a robot will be just one among many choices. We will certainly make our peace with the idea that grandchildren and great grandchildren may be too jumpy to be the most suitable company for their elders.

I believe that Andy would rather talk to a person than a robot, but there simply are not enough regular visitors in his life. It seems clear, however, that Edna and Jonathan would prefer to confide in a robot. Jonathan distrusts people; it is easy for him to feel humiliated. Edna is a perfectionist who knows that she can no longer meet her own standards. In both cases, the robot relaxes them and prompts remembrance.13 And so, there are at least two ways of reading these case studies. You can see seniors chatting with robots, telling their stories, and feel positive. Or you can see people speaking to chimeras, showering affection into thin air, and feel that something is amiss.

And, of course, there is the third way, the way the robots are coming into the culture. And this is simply to fall into thinking that robots are the best one can do. When my research group on sociable robots began work in the late 1990s, our bias was humanistic. We saw people as having a privileged role in human relationships, even as we saw robots stake claims as companions. We were curious, certainly, but skeptical about what robots could provide. Yet, during our years of working with the elderly, there were times when we got so discouraged about life in some nursing homes that we wanted to cast our lot with the robots. In these underresourced settings, an AIBO, a Paro, or a My Real Baby is a novelty, something no one has ever seen. The robots are passed around; people talk. Everyone feels free to have an opinion. Moments like these make the robots look good. At times, I was so struck by the desperation of seniors to have someone to talk to that I became content if they had something to talk to. Sometimes it was seniors themselves who reminded me that this doesn’t have to be a robot.

When Adele, seventy-eight, reflects on her introduction to Paro, her thoughts turn to her great aunt Margery, who lived with her family when Adele was a girl. Margery mostly spent her days in her room, reading or knitting. She joined the family at meals, where she sat quietly. Adele remembers Margery at ninety, “shooing the children out of her room so that she could be alone with her memories.” As a child, Adele would peek at Margery through a crack in the door. Her great aunt talked to a photograph of herself with her mother and sisters. Adele sees Paro as a replacement for her aunt’s family portrait. “It encourages you to talk to it....” Her voice trails off, and she hesitates: “Maybe it’s better to talk to a photograph.” I ask why. Adele takes some time to collect her thoughts. She finally admits that it is “sometimes hard to keep straight what is memory and what is now. If I’m talking to a photograph, well, I know I’m in my memories. Talking to a robot, I don’t know if it’s so sure.”

Adele’s comment makes me think of time with the robots somewhat differently. In one sense, their interactivity provokes recollection. It can trigger a memory. But in a robot’s next action, because it doesn’t understand human reverie, it can hijack memory by bringing things forward to a curious present. One is caught between a reverie about a “banky” from a daughter’s childhood and the need to provision an imaginary lunch because My Real Baby cries out in hunger. The hunger may come to seem more real than the “banky.” Or the banky may no longer seem a memory.

“A ROBOT THAT EVEN SHERRY WILL LOVE”

I first heard about Nursebot at a fall 2004 robotics conference where I spoke about what sociable robotics may augur—the sanctioning of “relationships” that make us feel connected although we are alone. Most of my colleagues responded by defending the idea that performance is the currency of all social relationships and that, rather than a bad thing, this is simply how things are.14 People are always performing for other people. Now the robots, too, will perform. The world will be richer for having a new cast of performers and a new set of possible performances. At one dinner, a small group took up my reticence with good-natured enthusiasm. They thought there was a robot, benign and helpful, that I would like. Some versions of it were being tested in the United States, some in Japan. This was the Nursebot, which can help elderly people in their homes, reminding them to take their medication and to eat regular meals. Some models can bring medicine or oxygen if needed.15 In an institutional setting, a hospital or nursing home, it learns the terrain. It knows patients’ schedules and accompanies them where they need to go. That awful, lonely scramble in nursing homes when seniors shuffle from appointment to appointment, the waiting around in hospitals for attendants to pick you up: those days would soon be at an end. Feeling dizzy in the bedroom and frightened because you had left your medication in the kitchen: those days were almost over. These researchers wanted to placate the critic in their midst. One said, “This is a robot even Sherry can love.” And indeed, the next day, I saw a video presentation about the find-your-way-around-the-hospital-bot, peppered with interviews of happy patients, most of them elderly.

Only a few months later, after a fall on icy steps in Harvard Square, I was myself being wheeled from one test to another on a hospital stretcher. My companions in this journey were a changing collection of male orderlies. They knew how much it hurt when they had to lift me off the gurney and onto the radiology table. They were solicitous and funny. I was told that I had a “lucky fracture.” While inconvenient and painful, it would heal with no aftereffects. The orderly who took me to the discharge station knew I had received good news and gave me a high five. The Nursebot might have been capable of the logistics, but I was glad that I was there with people. For me, this experience does not detract from the virtues of the robots that provide assistance to the housebound—robots that dispense medication, provide surveillance, check vital signs, and signal for help in an emergency—but it reminds me of their limitations. Getting me around the hospital was a job that a robot could do but that would have been delegated at a cost. Between human beings, simple things reach you. When it comes to care, there may be no pedestrian jobs. I was no longer sure that I could love a Nursebot.

Yet, this story does not lead to any simple conclusions. We are sorting out something complicated. Some elderly tell me that there are kinds of attendance for which they would prefer a robot to a person. Some would rather that a robot bathed them; it would feel less invasive of their privacy. Giving a bath is not something the Nursebot is designed to do, but nurse bots of the future might well be. The director of one of the nursing homes I have studied said, “We do not become children as we age. But because dependency can look childlike, we too often treat the elderly as though this were the case.” Sensing the vulnerability of the elderly, sometimes nurses compensate with curtness; sometimes they do the opposite, using improbable terms of endearment—“sweetie” or “honey”—things said in an attempt at warmth but sometimes experienced as demeaning. The director has great hopes for robots because they may be “neutral.”

By 2006, after the Nursebot had been placed in several retirement facilities, reactions to it, mostly positive, were being posted to online discussion groups. One report from the Longwood Retirement Community in Oakmont, Pennsylvania, was sentimental. It said the robot was “[winning] the hearts of elderly folks there.”16 Another describes the robot, called Pearl, as “escort[ing] and schmooz[ing] the elderly” and quotes an older gentleman as saying, “We’re getting along beautifully, but I won’t say whether she’s my kind of girl.”17 Other comments reveal the ambivalence that I so often find in my conversations with seniors and their families. One woman applauds how Pearl can take over “household chores” but is concerned about the robot’s assuming “certain social functions.” She writes, “I am worried that as technology advances even further, robots like Pearl may become so good at what they do that humans can delegate elderly care entirely to robots. It is really worrying. When u get old, would u like robots to be taking care of you? If however, robots are designed to complement humans and not replace them, then I am all for it! =).”

Another writer begins by insisting, “The human touch of care and love, lets just leave it to humans,” but then proclaims that love from robot pets, to “accompany” the lonely, would be altogether acceptable. In this online forum, as is so often the case, discussions that begin with the idea of a robot pet that would serve practical purposes (it could “alert relatives or the police in case of trouble”) turn into musings about robots that might ward off loneliness, robots that are, in the end, more loveable than any pet could be: “They will never complain and they are allegiant [sic].” I am moved by the conflation of allegiance and compliance, both of which imply control over others and both of which are, for the elderly, in short supply.

In another online discussion, no one is prepared to be romantic about the importance of human care because they have seen how careless it can be.18 The comments are dark. “Robots,” says one writer, “will not abuse the elderly like some humans do in convalescent care facilities.” Another dismisses the sentiment that “nurses need to be human” with the thought that most nurses just try to distance themselves from their jobs—that’s “how they keep from going crazy.” One writer complains that a robot would never be able to tell whether an elderly person was “bothered, sad, really sad, or devastated and wanting to die,” but that the “precious” people who could “are scarcely around.”

I find this discussion of Nursebot typical of conversations about robots and the elderly. It is among people who feel they have few moves left. There is a substantive question to be discussed: Why give objects that don’t understand a life to those who are trying to make sense of their own? But it is almost impossible to discuss this question because of the frame we have built around it—assuming that it has already been decided, irrevocably, that we have few resources to offer the elderly. With this framing, the robots are inevitable. We declare ourselves overwhelmed and lose a creative relationship to ourselves and our future. We learn a deference to what technology offers because we see ourselves as depleted. We give up on ourselves. From this perspective, it really doesn’t matter if I or anyone else can love Nursebot. If it can be made to do a job, it will be there.

To the objection that a robot can only seem to care or understand, it has become commonplace to get the reply that people, too, may only seem to care or understand. Or, as a recent New York Times article on Paro and other “caring machines” puts it, “Who among us, after all, has not feigned interest in another? Or abruptly switched off their affections, for that matter?” Here, the conversation about the value of “caring machines” is deflected with the idea that “seeming” or “pretending” behavior long predates robots. So, the problem is not what we are asking machines to do because people have always behaved like machines. The article continues, “In any case, the question, some artificial intelligence aficionados say, is not whether to avoid the feelings that friendly machines evoke in us, but to figure out how to process them.” An AI expert claims that humans “as a species” have to learn to deal with “synthetic emotions,” a way to describe the performances of emotion that come from objects we have made.19 For him, the production of synthetic emotion is taken as a given. And given that we are going to produce it, we need to adapt to it. The circle is complete. The only way to break the circle is to reframe the matter. One might say that people can pretend to care; a robot cannot care. So a robot cannot pretend because it can only pretend.

DO ROBOTS CURE CONSCIENCE?

When I first began studying people and computers, I saw programmers relating one-to-one with their machines, and it was clear that they felt intimately connected. The computer’s reactivity and interactivity—it seemed an almost mind—made them feel they had “company,” even as they wrote code. Over time, that sense of connection became “democratized.” Programs became opaque: when we are at our computers, most of us only deal with surfaces. We summon screen icons to act as agents. We are pleased to lose track of the mechanisms behind them and take them “at interface value.” But as we summon them to life, our programs come to seem almost companions. Now, “almost” has almost left the equation. Online agents and sociable robots are explicitly designed to convince us that they are adequate companions.

Predictably, our emotional involvement ramps up. And we find ourselves comforted by things that mimic care and by the “emotions” of objects that have none. We put robots on a terrain of meaning, but they don’t know what we mean. And they don’t mean anything at all. When a robot’s program cues “disgust,” its face will look, in human terms, disgusted. These are “emotions” only for show. What if we start to see them as “real enough” for our purposes? And moral questions come up as robotic companions not only “cure” the loneliness of seniors but assuage the regrets of their families.

In the spring of 2009, I presented the case of robotic elder care to a class of Harvard undergraduates. Their professor, political theorist Michael Sandel, was surprised by how easily his students took to this new idea. Sandel asked them to think of a nursing home resident who felt comforted by Paro and then to put themselves in the place of her children, who might feel that their responsibility to their mother had been lessened, or even discharged, because a robot “had it covered.” Do plans to provide companion robots to the elderly make us less likely to look for other solutions for their care?

As Sandel tried to get his class to see how the promise of robotic companionship could lead to moral complacency, I thought about Tim, who took comfort in how much his mother enjoyed talking to Paro. Tim said it made “walk[ing] out that door” so much easier when he visited her at the nursing home.

In the short term, Tim’s case may look as though it charts a positive development. An older person seems content; a child feels less guilty. But in the long term, do we really want to make it easier for children to leave their parents? Does the “feel-good moment” provided by the robot deceive people into feeling less need to visit? Does it deceive the elderly into feeling less alone as they chat with robots about things they once would have talked through with their children? If you practice sharing “feelings” with robot “creatures,” you become accustomed to the reduced “emotional” range that machines can offer. As we learn to get the “most” out of robots, we may lower our expectations of all relationships, including those with people. In the process, we betray ourselves.

All of these things came up in Sandel’s class. But in the main, his students were positive as they worked through his thought experiment. In the hypothetical case of mother, child, and robot, they took three things as givens, repeated as mantras. First, the child has to leave his mother. Second, it is better to leave one’s mother content. Third, children should do whatever it takes to make a mother happy.

I left the class sobered, thinking of the fifth graders who, surrounded by a gaggle of peers talking about robots as babysitters and caretakers for their grandparents, began to ask, “Don’t we have people for these jobs?” I think of how little resistance this generation will offer to the placement of robots in nursing homes. And it was during that very spring that, fresh from his triumphant sale of a thousand Paros to the Danish government, their inventor had come to MIT to announce opening up shop in the United States.