
Alone Together: Why We Expect More from Technology and Less from Each Other - Sherry Turkle (2011)

Part I. The Robotic Moment

Chapter 3. True companions

In April 1999, a month before AIBO’s commercial release, Sony demonstrated the little robot dog at a conference on new media in San Jose, California. I watched it walk jerkily onto an empty stage, followed by its inventor, Toshitada Doi. At his bidding, AIBO fetched a ball and begged for a treat. Then, with seeming autonomy, AIBO raised its back leg to some suggestion of a hydrant. Then, it hesitated, a stroke of invention in itself, and lowered its head as though in shame. The audience gasped. The gesture, designed to play to the crowd, was wildly successful. I imagined how audiences responded to Jacques de Vaucanson’s eighteenth-century digesting (and defecating) mechanical duck and to the chess-playing automata that mesmerized Edgar Allan Poe. AIBO, like these, was applauded as a marvel, a wonder.1

Depending on how it is treated, an individual AIBO develops a distinct personality as it matures from a fall-down puppy to a grown-up dog. Along the way, AIBO learns new tricks and expresses feelings: flashing red and green eyes direct our emotional traffic; each of its moods comes with its own soundtrack. A later version of AIBO recognizes its primary caregiver and can return to its charging station, smart enough to know when it needs a break. Unlike a Furby, whose English is “destined” to improve as long as you keep it turned on, AIBO stakes a claim to intelligence and impresses with its ability to show what’s on its mind.

If AIBO is in some sense a toy, it is a toy that changes minds. It does this in several ways. It heightens our sense of being close to developing a postbiological life and not just in theory or in the laboratory. And it suggests how this passage will take place. It will begin with our seeing the new life as “as if” life and then deciding that “as if” may be life enough. Even now, as we contemplate “creatures” with artificial feelings and intelligence, we come to reflect differently on our own. The question here is not whether machines can be made to think like people but whether people have always thought like machines.

The reconsiderations begin with children. Zane, six, knows that AIBO doesn’t have a “real brain and heart,” but they are “real enough.” AIBO is “kind of alive” because it can function “as if it had a brain and heart.” Paree, eight, says that AIBO’s brain is made of “machine parts,” but that doesn’t keep it from being “like a dog’s brain.... Sometimes, the way [AIBO] acted, like he will get really frustrated if he can’t kick the ball. That seemed like a real emotion … so that made me treat him like he was alive, I guess.” She says that when AIBO needs its batteries charged, “it is like a dog’s nap.” And unlike a teddy bear, “an AIBO needs its naps.”

As Paree compares her AIBO’s brain to that of a dog, she clears the way for other possibilities. She considers whether AIBO might have feelings like a person, wondering if AIBO “knows its own feelings”—or “if the controls inside know them.” Paree says that people use both methods. Sometimes people have spontaneous feelings and “just become aware” of them (this is “knowing your own feelings”). But other times, people have to program themselves to have the feelings they want. “If I was sad and wanted to be happy”—here Paree brings her fists up close to her ears to demonstrate concentration and intent—“I would have to make my brain say that I am set on being happy.” The robot, she thinks, probably has the second kind of feelings, but she points out that both ways of getting to a feeling get you to the same place: a smile or a frown if you are a person, a happy or sad sound if you are an AIBO. Different inner states lead to the same outward states, and so inner states cease to matter. AIBO carries a behaviorist sensibility.

SPARE PARTS

Keith, seventeen, is going off to college next year and taking his AIBO with him. He treats the robot as a pet, all the while knowing that it is not a pet at all. He says, “Well, it’s not a pet like others, but it is a damn good pet.... I’ve taught it everything. I’ve programmed it to have a personality that matches mine. I’ve never let it reset to its original personality. I keep it on a program that lets it develop to show the care I’ve put into it. But of course, it’s a robot, so you have to keep it dry, you have to take special care with it.” His classmate Logan also has an AIBO. The two have raised the robots together. If anything, Logan’s feelings are even stronger than Keith’s. Logan says that talking to AIBO “makes you better, like, if you’re bored or tired or down … because you’re actually, like, interacting with something. It’s nice to get thoughts out.”

The founders of artificial intelligence were much taken with the ethical and theological implications of their enterprise. They discussed the mythic resonance of their new science: Were they people putting themselves in the place of gods?2 The impulse to create an object in one’s own image is not new—think Galatea, Pygmalion, Frankenstein. These days, what is new is that an off-the-shelf technology as simple as an AIBO provides an experience of shaping one’s own companion. But the robots are shaping us as well, teaching us how to behave so that they can flourish.3 Again, there is psychological risk in the robotic moment. Logan’s comment about talking with the AIBO to “get thoughts out” suggests using technology to know oneself better. But it also suggests a fantasy in which we cheapen the notion of companionship to a baseline of “interacting with something.” We reduce relationship and come to see this reduction as the norm.

As infants, we see the world in parts. There is the good—the things that feed and nourish us. There is the bad—the things that frustrate or deny us. As children mature, they come to see the world in more complex ways, realizing, for example, that beyond black and white, there are shades of gray. The same mother who feeds us may sometimes have no milk. Over time, we transform a collection of parts into a comprehension of wholes.4 With this integration, we learn to tolerate disappointment and ambiguity. And we learn that to sustain realistic relationships, one must accept others in their complexity. When we imagine a robot as a true companion, there is no need to do any of this work.

The first thing missing if you take a robot as a companion is alterity, the ability to see the world through the eyes of another.5 Without alterity, there can be no empathy. Writing before robot companions were on the cultural radar, the psychoanalyst Heinz Kohut described barriers to alterity in fragile people—he calls them narcissistic personalities—who are characterized not by love of self but by a damaged sense of self. They try to shore themselves up by turning other people into what Kohut calls selfobjects. In the role of selfobject, another person is experienced as part of one’s self, thus in perfect tune with a fragile inner state. The selfobject is cast in the role of what one needs, but in these relationships, disappointments inevitably follow. Relational artifacts (not only as they exist now but as their designers promise they will soon be) clearly present themselves as candidates for the role of selfobject.

If they can give the appearance of aliveness and yet not disappoint, relational artifacts such as sociable robots open new possibilities for narcissistic experience. One might even say that when people turn other people into selfobjects, they are trying to turn a person into a kind of spare part. A robot is already a spare part. From this point of view, relational artifacts make a certain amount of “sense” as successors to the always-resistant human material. I insist on underscoring the “scare quotes” around the word “sense.” For, from a point of view that values the richness of human relationships, they don’t make any sense at all. Selfobjects are “part” objects. When we fall back on them, we are not taking in a whole person. Those who can only deal with others as part objects are highly vulnerable to the seductions of a robot companion. Those who succumb will be stranded in relationships that are only about one person.

This discussion of robots and psychological risks brings us to an important distinction. Growing up with robots in roles traditionally reserved for people is different from coming to robots as an already socialized adult. Children need to be with other people to develop mutuality and empathy; interacting with a robot cannot teach these. Adults who have already learned to deal fluidly and easily with others and who choose to “relax” with less demanding forms of social “life” are at less risk. But whether child or adult, we are vulnerable to simplicities that may diminish us.

GROWING UP AIBO

With a price tag of $1,300 to $2,000, AIBO is meant for grown-ups. But the robot dog is a harbinger of the digital pets of the future, and so I present it to children from age four to thirteen as well as to adults. I bring it to schools, to after-school play centers, and, as we shall see in later chapters, to senior centers and nursing homes. I offer AIBOs for home studies, where families get to keep them for two or three weeks. Sometimes, I study families who have bought an AIBO of their own. In these home studies, just as in the home studies of Furbies, families are asked to keep a “robot diary.” What is it like living with an AIBO?

The youngest children I work with—the four- to six-year-olds—are initially preoccupied with trying to figure out what the AIBO is, for it is not a dog and not a doll. The desire to get such things squared away is characteristic of their age. In the early days of digital culture, when they met their first electronic toys and games, children of this age would remain preoccupied with such questions of categories. But now, faced with this sociable machine, children address these questions and let them drop, taken up with the business of a new relationship.

Maya, four, has an AIBO at home. She first asks questions about its origins (“How do they make it?”) and comes up with her own answer: “I think they start with foil, then soil, and then you get some red flashlights and then put them in the eyes.” Then she pivots to sharing the details of her daily life with AIBO: “I love to play with AIBO every day, until the robot gets tired and needs to take a nap.” Henry, four, follows the same pattern. He begins with an effort to categorize AIBO: AIBO is closest to a person, but different from a person because it is missing a special “inner power,” an image borrowed from his world of Pokémon.6 But when I see Henry a week later, he has bonded with AIBO and is stressing the positive, all the things they share. The most important of these are “remembering and talking powers, the strongest powers of all.” Henry is now focused on the question of AIBO’s affection: How much does this robot like him? Things seem to be going well: he says that AIBO favors him “over all his friends.”

By eight, children move even more quickly from any concern over AIBO’s “nature” to the pleasures of everyday routines. In a knowing tone, Brenda claims that “people make robots and … people come from God or from eggs, but this doesn’t matter when you are playing with the robot.” In this dismissal of origins we see the new pragmatism. Brenda embraces AIBO as a pet. In her robot diary, she reminds herself of the many ways that this pet should not be treated as a dog. One early entry reminds her not to feed it, and another says, “Do not take AIBO on walks so it can poop.” Brenda feels guilty if she doesn’t keep AIBO entertained. She thinks that “if you don’t play with it,” its lights get red to show its discontent at “playing by itself and getting all bored.” Brenda thinks that when bored, AIBO tries to “entertain itself.” If this doesn’t work, she says, “it tries to get my attention.” Children believe that AIBO asks for attention when it needs it. So, for example, a sick AIBO will want to get better and know it needs human help. An eight-year-old says, “It would want more attention than anything in the whole world.”

AIBO also “wants” attention in order to learn. And here children become invested. Children don’t just grow up with AIBO around; they grow AIBO up. Oliver is a lively, engaged nine-year-old who lives in a suburban house with many pets. His mother smilingly describes their home life as “controlled chaos,” and for two weeks an AIBO has been part of this scene. Oliver has been very active in raising his AIBO. First came simple things: “I trained it to run to certain things and wave its tail.” And then came more complicated things, like teaching AIBO soccer. Oliver also spends time just “keeping AIBO company” because, he says, “AIBO prefers to be with people.” Oliver says, “I went home with a puppy, but now it knows me.... It recognizes so many things.... It can feel when you pet him.... The electricity in AIBO is like blood in people.... People and robots both have feelings, but people have more feelings. Animals and robots both have feelings, but robots have more feelings that they can say.”

But when Oliver has a problem, he doesn’t talk to AIBO but to his hamster. He says that although AIBO can “say more of his feelings, my hamster has more feelings.” Oliver does not see AIBO’s current lack of emotionality as a fixed thing. On the contrary. “Give him six months,” Oliver says. “That’s how long it took Peanut [the hamster] to really love.... If it advanced more, if it had more technology, it could certainly love you in the future.” In the meantime, taking care of AIBO involves more than simply keeping it busy. “You also have to watch out for his feelings. AIBO is very moody.” This does not bother Oliver because it makes AIBO more like the pets he already knows. The bottom line for Oliver: “AIBO loves me. I love AIBO.” As far as Oliver is concerned, AIBO is alive enough for them to be true companions.

The fact that AIBO can develop new skills is very important to children; it means that their time and teaching make a difference. Zara, eight, says of her time with AIBO, “The more you play with it, the more actful [Zara’s word!] it gets, the more playful. And I think the less you play with it, the lazier it gets.” Zara and her eleven-year-old cousin Yolanda compare their AIBO puppies to their teddy bears. Both girls make it clear that AIBO is no doll. Yolanda says that turning a teddy bear into a companion requires “work” because her teddy’s feelings “come from my brain.” The AIBO, on the other hand, “has feelings all by itself.”7 Zara agrees. You can tell a teddy bear what it should feel, but AIBO “can’t feel something else than what it is expressing.” AIBO has its “own feelings.” She says, “If AIBO’s eyes are flashing red, you can’t say that the puppy is happy just because you want it to be.”

A teddy bear may be irreplaceable because it has gone through life with a child. It calls up memories of one’s younger self. And, of course, only that special teddy calls up the experiences a child had in its company. But when children don’t want to replace an AIBO, something else is in play. A particular AIBO is irreplaceable because it calls back memories not only of one’s younger self but of the robot’s younger self as well, something we already saw as children connected to their Tamagotchis and Furbies. In comparing her AIBO to her teddy bear, Yolanda stresses that AIBO is “more real” because as it grows up, “it goes through all the stages.”

FROM BETTER THAN NOTHING TO BETTER THAN ANYTHING

Yolanda’s feelings about AIBO also go through all the stages. She first sees AIBO as a substitute: “AIBO might be good practice for all children whose parents aren’t ready to take care of a real dog.” But then she takes another step: in some ways AIBO might be better than a real dog. “The AIBO,” says Yolanda, “doesn’t shed, doesn’t bite, doesn’t die.” More than this, a robotic companion can be made as you like it. Yolanda muses about how nice it would be to “keep AIBO at a puppy stage for people who like to have puppies.” Children imagine that they can create a customized AIBO close to their heart’s desire.8 Sometimes their heart’s desire is to have affection when that pleases them and license to walk away, something not possible with a biological pet.

Two nine-year-olds—Lydia and Paige—talk through the steps that take a robot from better than nothing to better than anything. Lydia begins by thinking of AIBO as a substitute for a real pet if you can’t have one: “An AIBO, since you can’t be allergic to a robot, that would be very nice to have.” But as she gets to know AIBO better, she sees a more enticing possibility. “Sometimes,” she says, “I might like [AIBO] more than a real living animal, like a real cat or a real dog, because, like if you had a bad day … then you could just turn this thing off and it wouldn’t bug you.” Paige has five pets—three dogs, two cats—and when she is sad, she says, “I cuddle with them.” This is a good thing, but she complains that pets can be trouble: “All of them want your attention. If you give one attention you have to give them all attention, so it’s kinda hard.... When I go somewhere, my kitten misses me. He’ll go into my room and start looking for me.” AIBO makes things easy: “AIBO won’t look at you like ‘play with me’; it will just go to sleep if there is nothing else to do. It won’t mind.”

Paige explains that the worst thing that ever happened to her was when her family “had to put their dog to sleep.” She hasn’t wanted a new one since. “But the thing about AIBO,” she says, “is that you don’t have to put him to sleep.... I think you could fix [AIBO] with batteries … but when your dog actually dies, you can’t fix it.” For now, the idea that AIBO, as she puts it, “will last forever” makes it better than a dog or cat. Here, AIBO is not practice for the real. It offers an alternative, one that sidesteps the necessity of death.9 For Paige, simulation is not necessarily second best.

Pets have long been thought good for children because they teach responsibility and commitment. AIBO permits something different: attachment without responsibility. Children love their pets, but at times, like their overextended parents, they feel burdened by their pets’ demands. This has always been true. But now children see a future where something different may be available. With robot pets, children can give enough to feel attached, but then they can turn away. They are learning a way of feeling connected in which they have permission to think only of themselves. And yet, since these new pets seem betwixt and between what is alive and what is not, this turning away is not always easy. It is not that some children feel responsible for AIBO and others do not. The same children often have strong feelings on both sides of the matter.

So, for example, Zara likes the idea that AIBO won’t get sick if she forgets to walk or feed it. She likes the idea that she can “get credit” for training AIBO even without the burden of being consistent. Yet, Zara also says that “AIBO makes you feel responsible for it.” Her cousin Yolanda also likes it that AIBO does not make her feel guilty if she doesn’t give it attention, but she feels an even greater moral commitment: “I would feel just as bad if my puppy’s or my AIBO’s arms broke. I love my AIBO.”

Zara and Yolanda are tender with their AIBO. But other children, equally attached to the robot, are very rough. AIBO is alive enough to provoke children to act out their hostility, something we have seen with Furbies and My Real Babies and something we will see again with more advanced robots. Of course, this hostility causes us to look at what else is going on in a child’s life, but in the case of AIBO, we see how it can be provoked by anxiety about the robot itself. Uncanny objects are disquieting as well as compelling.

Recall four-year-old Henry, who categorized robots by their degree of Pokémon powers. He believes that his AIBO recognizes him and that they have a special relationship. Nevertheless, Henry takes to increasingly aggressive play with AIBO. Over and over, he knocks it down, slapping its side, as he makes two contradictory claims about the robot. First he says that “AIBO doesn’t really have feelings,” which would make his aggression permissible. But he also says that AIBO prefers him to his friends, something that indicates feelings: “AIBO doesn’t really like my friend Ramon,” he says with a smile. The more Henry talks about how AIBO dislikes other children, the more he worries that his aggression toward AIBO might have consequences. AIBO, after all, could come to dislike him. To get out of his discomfort, Henry demotes AIBO to “just pretend.” But demoting AIBO makes him unhappy too, because his belief in AIBO’s affection boosts his self-esteem; if AIBO is just pretend, so is its affection. Henry is caught in a complicated, circular love test. In our passage to postbiological relationships, we give ourselves new troubles.

As soon as children met computers and computer toys in the late 1970s and early 1980s, they used aggression as a way to animate them and to play with ideas about life and death. Children crashed and revived computer programs; they “killed” Merlin, Simon, and Speak & Spell by pulling out their batteries and then made them come back to life. Aggression toward sociable robots is more complex because children are trying to manage more significant attachments. To take only one example, robots disappoint when they do not display the affection children lead themselves to expect. To avoid hurt, children want to dial things down. Turning robots into objects that can be hurt with impunity is a way to put them in their place. Whether we have permission to hurt or kill an object influences how we think about its life.10 To children, being able to kill spiders without punishment makes spiders seem less alive, and hurting a robot can make it seem less alive as well. But as in the discussion about whether My Real Baby should cry in “pain,” things are complicated. For the idea that you can hurt a robot can also make it seem more alive.

Like Henry, twelve-year-old Tamara is aggressive toward AIBO and troubled by what this implies. She wants to play with AIBO in the same way that she plays with her much-loved cat. But she worries that AIBO’s responses to her are generic. She says, “AIBO acts the same to everyone. It doesn’t attach herself to one person like most animals do.” Tamara says that sometimes she stops herself from petting AIBO: “I start to pet it, and then, like, I would start to be, like, ‘Oh wait. You’re not a cat. You’re not alive.’” And sometimes she gives in to an urge to “knock it over because it was just so cute when it was getting up and then it would, like, shake its head, because then it seemed really alive because that’s what dogs do.” She tries to reassure me: “I’m not like this with my animals.”

From their earliest experiences with the electronic toys and games of the late 1970s, children split the notion of consciousness and life. You didn’t have to be biologically alive to have awareness. And so Tamara, who knows AIBO is not alive, imagines that it still might feel pain. In the end, her aggression puts her in a tough spot; AIBO is too much like a companion to be a punching bag. For Tamara, the idea that AIBO might “see” well enough to recognize her is frightening because it might know she is hitting it. But the idea of AIBO as aware and thus more lifelike is exciting as well.

Tamara projects her fear that AIBO knows she is hurting it and gives herself something to be afraid of.11 She says of her AIBO, “I was afraid it would turn evil or something.” She worries that another AIBO, a frightening AIBO with bad intentions and a will of its own, lives within the one she complains of as being too generic in its responses. This is a complicated relationship, far away from dreaming of adventures with your teddy bear.

The strong feelings that robots elicit may help children to a better understanding of what is on their minds, but a robot cannot help children find the meaning behind the anger it provokes. In the best case, behavior with an AIBO could be discussed in a relationship with a therapist. One wonders, for example, if in her actions with AIBO, Tamara shows her fears of something within herself that is only partially mastered. Henry and Tamara are in conflicted play with a robot that provokes them to anger that they show no signs of working through.

AIBO excites children to reach out to it as a companion, but it cannot be a friend. Yet, both children and adults talk as though it can. Such yearnings can be poignant. As Yolanda’s time with AIBO is ending, she becomes more open about how it provides companionship when she is “down” and suggests that AIBO might help if someone close to you died. “For the person to be happy, they would have to focus on someone that is special to them, someone that is alive.... That could be an AIBO.”

SIMULTANEOUS VISIONS AND COLD COMFORTS

Ashley, seventeen, is a bright and active young woman who describes herself as a cat lover. I have given her an AIBO to take home for two weeks, and now she is at my office at MIT to talk about the experience. During the conversation, Ashley’s AIBO plays on the floor. We do not attend to it; it does tricks on its own—and very noisily. After a while, it seems as though the most natural thing would be to turn AIBO off, in the same spirit that one might turn off a radio whose volume interferes with a conversation. Ashley moves toward the AIBO, hesitates, reaches for its off switch, and hesitates again. Finally, with a small grimace, she hits the switch. AIBO sinks to the ground, inert. Ashley comments, “I know it’s not alive, but I would be, like, talking to it and stuff, and then it’s just a weird experience to press a[n off] button. It made me nervous.... [I talk to it] how I would talk to my cat, like he could actually hear me and understand praise and stuff like that.” I am reminded of Leah, nine, who said of her Furby, “It’s hard to turn it off when it is talking to me.”

Ashley knows AIBO is a robot, but she experiences it as a biological pet. It becomes alive for her not only because of its intelligence but because it seems to her to have real emotions. For example, she says that when AIBO’s red lights shone in apparent frustration, “it seemed like a real emotion.... So that made me treat him like he was alive.... And that’s another strange thing: he’s not really physically acting those emotions out, but then you see the colors and you think, ‘Oh, he’s upset.’”

Artificial intelligence is often described as the art and science of “getting machines to do things that would be considered intelligent if done by people.” We are coming to a parallel definition of artificial emotion as the art of “getting machines to express things that would be considered feelings if expressed by people.” Ashley describes the moment of being caught between categories: she realizes that what the robot is “acting out” is not emotion, yet she feels the pull of seeing “the colors” and experiencing AIBO as “upset.” Ashley ends up seeing AIBO as both machine and creature.

So does John Lester, a computer scientist coming from a far more sophisticated starting point. From the early 1990s, Lester pioneered the use of online communities for teaching, learning, and collaboration, including recent work developing educational spaces in the virtual world of Second Life. Lester bought one of the first AIBOs on the market. He called it Alpha in deference to its being “one of the first batch.”12 When Lester took Alpha out of its box, he shut the door to his office and spent the entire day “hanging out with [my] new puppy.” He describes the experience as “intense,” comparing it to the first time he saw a computer or typed into a Web browser. He quickly mastered the technical aspects of AIBO, but this kind of understanding did not interfere with his pleasure in simply being with the puppy. When Sony modified the robot’s software, Lester bought a second AIBO and named it Beta. Alpha and Beta are machines, but Lester does not like anyone to treat them as inanimate metal and plastic. “I think about my AIBOs in different ways at the same time,” Lester says.

In the early days of cubism, the simultaneous presentation of many perspectives of the human face was subversive. But at a certain point, one becomes accustomed to looking at a face in this new way. A face, after all, does have multiple aspects; only representational conventions keep us from appreciating them together. But once convention is challenged, the new view of the face suggests depth and new complexities. Lester has a cubist view of AIBO; he is aware of it as machine, bodily creature, and mind. An AIBO’s sentience, he says, is “awesome.” The creature is endearing. He appreciates the programming behind the exact swing of the “floppy puppy ears.” To Lester, that programming gives AIBO a mind.

Lester understands the mechanisms that AIBO’s designers have used to draw him in: AIBO’s gaze, its expressions of emotion, and the fact that it “grows up” under his care. But this understanding does not interfere with his attachment, just as knowing that infants draw him in with their big, wide eyes does not threaten his connection with babies. Lester says that when he is with AIBO, he does not feel alone. He says that “from time to time” he “catches himself” in engineer mode, remarking on a technical detail of AIBO that he admires, but these moments do not pull him away from enjoying the companionship of his AIBO puppies. This is not a connection he plays at.

It is a big step from accepting AIBO as a companion, and even a solace, to the proposals of David Levy, the computer scientist who imagines robots as intimate partners. But today’s fantasies and Levy’s dreams share something important: the idea that after a robot serves as a better-than-nothing substitute, it might become equal, or even preferable, to a pet or person. In Yolanda’s terms, if your pet is a robot, it might always stay a cute puppy. By extension, if your lover were a robot, you would always be the center of its universe. A robot would not just be better than nothing or better than something, but better than anything. From watching children play with objects designed as “amusements,” we come to a new place, a place of cold comforts. Child and adult, we imagine made-to-measure companions. Or, at least, we imagine companions who are always interested in us.

Harry, a forty-two-year-old architect, enjoys AIBO’s company and teaching it new tricks. He knows that AIBO is not aware of him as a person but says, “I don’t feel bad about this. A pet isn’t as aware of me as a person might be.... Dogs don’t measure up to people.... Each level of creature simply does their best. I like it that he [AIBO] recognizes me as his master.” Jane, thirty-six, a grade school teacher, is similarly invested in her AIBO. She says she has “adopted my husband’s AIBO … because it is so cute. I named it and love to spend time with it.” Early in our conversation, Jane claims that she turns to AIBO for “amusement,” but she ends up saying that she also turns to it when she is lonely. Jane looks forward to its company after a long workday. Jane talks to her AIBO. “Spend[ing] time” with AIBO means sharing the events of her day, “like who I’m having lunch with at school, which students give me trouble.” Her husband, says Jane, is not interested in these topics. It is more comfortable to talk to AIBO than to force him to listen to stories that bore him. In the company of their robots, Jane and Harry are alone in a way that encourages them to give voice to their feelings. Is there harm here?

In the case of children, I am concerned about their getting comfortable with the idea that a robot’s companionship is even close to a replacement for a person. Later, we will hear teenagers talk about their dread of conversation as they explain why “texting is always better than talking.” Some comment that “sometime, but not now,” it would be good to learn how to have a conversation. The fantasy of robotic companionship suggests that sometime might not have to come. But what of an adult who says he prefers a robot for a reason?

Wesley, sixty-four, provides us with such a case. He has come to see his own self-centeredness as an intractable problem. He imagines a robot helpmate as a way to satisfy himself without hurting others. Divorced three times, Wesley hopes a robot would “learn my psychology. How I get depressed, how I get over it. A robot that could anticipate my cycles, never criticize me over them, learn how to just let me get over them.” Wesley says, “I’d want from the robot a lot of what I want from a woman, but I think the robot would give me more in some ways. With a woman, there are her needs to consider.... That’s the trouble I get into. If someone loves me, they care about my ups and downs. And that’s so much pressure.”

Wesley knows he is difficult to live with. He once saw a psychiatrist who told him that his “cycles” were out of the normal range. Ex-wives, certainly, have told him he is “too moody.” He sees himself as “pressure” on a woman, and he feels pressure as well because he has not been able to protect women he cared for from his “ups and downs.” He likes the idea of a robot because he could act naturally—it could not be hurt by his dark moods. Wesley considers the possibility of two “women,” one real and the other artificial: “Maybe I would want a robot that would be the perfect mate—less needs—and a real woman. The robot could take some of the pressure off the real woman. She wouldn’t have to perform emotionally at such a high level, really an unrealistic level.... I could stay in my comfort zone.”

Rudimentary versions of Wesley’s fantasy are in development. I have spoken briefly of the Internet buzz over Roxxxy, put on the market in January 2010, advertised as “the world’s first sex robot.” Roxxxy cannot move, although it has electronically warmed skin and internal organs that pulse. It does, however, make conversation. The robot’s creator, Douglas Hines, helpfully offers, “Sex only goes so far—then you want to be able to talk to the person.”13 So, for example, when Roxxxy senses that its hand is being held, the robot says, “I love holding hands with you,” and moves into more erotic conversation when the physical caresses become more intimate. One can choose different personalities for Roxxxy, ranging from wild to frigid. The robot will be updated over the Internet to expand its capabilities and vocabulary. It can already discuss soccer.

Hines, an engineer, says that he got into the robot business after a friend died in the September 11 attacks on the Twin Towers. Hines wanted to preserve his friend’s personality so that his children could interact with him as they grew up. Like AI scientist and inventor Raymond Kurzweil, who dreams of a robotic incarnation of his father, who died tragically young, Hines committed himself to the project of building an artificial personality. At first, he considered building a home health aide for the elderly but decided to begin with sex robots, a decision that he calls “only marketing.” His long-term goal is to take artificial personalities into the mainstream. He still wants to recreate his lost friend.

The well-publicized launch of Roxxxy elicits a great deal of online discussion. Some postings talk about how “sad” it is that a man would want such a doll. Others argue that having a robot companion is better than being lonely. For example, “There are men for who attaining a real woman is impossible.... This isn’t simply a matter of preference.... In the real world, sometimes second best is all they can get.”

I return to the question of harm. Dependence on a robot presents itself as risk free. But when one becomes accustomed to “companionship” without demands, life with people may seem overwhelming. Dependence on a person is risky—it makes us subject to rejection—but it also opens us to deeply knowing another. Robotic companionship may seem a sweet deal, but it consigns us to a closed world—the loveable as safe and made to measure.14

Roboticists insist that the artificial can be made unpredictable so that relating to robots will never feel rote or mechanical. Robots, they say, will be surprising, helpful, and meaningful in their own right. Yet, in my interviews, fantasies about robot companions do not dwell on robots full of delightful surprises. Rather, they return, again and again, to how robots might, as Yolanda suggested, be made to order, a safe haven in an unsafe world.