
Alone Together: Why We Expect More from Technology and Less from Each Other - Sherry Turkle (2011)

Part I. The Robotic Moment

Chapter 2. Alive enough

In the 1990s, children spoke about making their virtual creatures more alive by having them escape the computer. Furbies, the sensation of the 1998 holiday season, embody this documented dream. If a child wished a Tamagotchi to leap off its screen, it might look a lot like the furry and owl-like Furby. The two digital pets have other things in common. As with a Tamagotchi, how a Furby is treated shapes its personality. And both present themselves as visitors from other worlds. But Furbies are more explicit about their purpose in coming to Earth. They are here to learn about humans. So, each Furby is an anthropologist of sorts and wants to relate to people. They ask children to take care of them and to teach them English. Furbies are not ungrateful: they make demands, but they say, “I love you.”

Furbies, like Tamagotchis, are “always on,” but unlike Tamagotchis, Furbies manifest this with an often annoying, constant chatter.1 To reliably quiet a Furby, you need a Phillips screwdriver to remove its batteries, an operation that causes it to lose all memory of its life and experiences—what it has learned and how it has been treated. For children who have spent many hours “bringing up” their Furbies, this is not a viable option. On a sunny spring afternoon in 1999, I bring eight Furbies to an afternoon playgroup at an elementary school in western Massachusetts. There are fifteen children in the room, from five to eight years old, in kindergarten through third grade. I turn on a tape recorder as I hand the Furbies around. The children start to talk excitedly, greeting the Furbies by imitating their voices. In the cacophony of the classroom, this is what the robotic moment sounds like:

He’s a baby! He said, “Yum.” Mine’s a baby? Is this a baby? Is he sleeping now? He burped! What is “be-pah?” He said, “Be-pah.” Let them play together. What does “a lee koo wah” mean? Furby, you’re talking to me. Talk! C’mon boy. Good boy! Furby, talk! Be quiet everybody! Oh, look it, he’s in love with another one! Let them play together! It’s tired. It’s asleep. I’m going to try to feed him. How come they don’t have arms? Look, he’s in love! He called you “Mama.” He said, “Me love you.” I have to feed him. I have to feed mine too. We love you, Furby. How do you make him fall asleep? His eyes are closed. He’s talking with his eyes closed. He’s sleeptalking. He’s dreaming. He’s snoring. I’m giving him shade.

C’mon, Furby, c’mon—let’s go to sleep, Furby. Furby, shh, shh. Don’t touch him. I can make him be quiet. This is a robot. Is this a robot? What has this kind of fur? He’s allergic to me. It’s kind of like it’s alive. And it has a body. It has a motor. It’s a monster. And it’s kind of like it’s real because it has a body. It was alive. It is alive. It’s not alive. It’s a robot.

From the very first, the children make it clear that the Furby is a machine but alive enough to need care. They try to connect with it using everything they have: the bad dreams and scary movies that make one child see the Furby “as a monster” and their understanding of loneliness, which encourages another to exhort, “Let them play together!” They use logic and skepticism: Do biological animals have “this kind of fur?” Do real animals have motors? Perhaps, although this requires a new and more expansive notion of what a motor can be. They use the ambiguity of this new object to challenge their understanding of what they think they already know. They become more open to the idea of the biological as mechanical and the mechanical as biological. Eight-year-old Pearl thinks that removing the batteries from a Furby causes it to die and that people’s death is akin to “taking the batteries out of a Furby.”

Furbies reinforce the idea that they have a biology: each is physically distinct, with particular markings on its fur, and each has some of the needs of living things. For example, a Furby requires regular feeding, accomplished by depressing its tongue with one’s finger. If a Furby is not fed, it becomes ill. Nursing a Furby back to health always requires more food. Children give disease names to Furby malfunctions. So, there is Furby cancer, Furby flu, and Furby headache.

Jessica, eight, plays with the idea that she and her Furby have “body things” in common, for example, that headache. She has a Furby at home; when her sisters pull its hair, Jessica worries about its pain: “When I pull my hair it really hurts, like when my mother brushes the tangles. So, I think [the Furby’s hair pulls] hurt too.” Then, she ponders her stomach. “There’s a screw in my belly button,” she says. “[The screw] comes out, and then blood comes out.” Jessica thinks that people, like Furbies, have batteries. “There are hearts, lungs, and a big battery inside.” People differ from robots in that our batteries “work forever like the sun.” When children talk about the Furby as kin, they experiment with the idea that they themselves might be almost machine. Ideas about the human as machine or as joined to a machine are played out in classroom games.2 In their own way, toy robots prepare a bionic sensibility. There are people who do, after all, have screws and pins and chips and plates in their flesh. A recent recipient of a cochlear implant describes his experience of his body as “rebuilt.”3

We have met Wilson, seven, comfortable with his Furby as both machine and creature. Just as he always “hears the machine” in the Furby, he finds the machine in himself. As the boy sings improvised love songs about the robot as a best friend, he pretends to use a screwdriver on his own body, saying, “I’m a Furby.” Involved in a second-grade class project of repairing a broken Furby by dismantling it, screw by screw, Wilson plays with the idea of the Furby’s biological nature: “I’m going to get [its] baby out.” And then he plays with the idea of his own machine nature: he applies the screwdriver to his own ankle, saying, “I’m unscrewing my ankle.”

Wilson enjoys cataloguing what he and the Furby have in common. Most important for Wilson is that they “both like to burp.” In this, he says, the Furby “is just like me—I love burping.” Wilson holds his Furby out in front of him, his hands lightly touching the Furby’s stomach, staring intently into its eyes. He burps just after or just before his Furby burps, much as in the classic bonding scene in E.T.: The Extra-Terrestrial between the boy Elliott and the visitor from afar. When Wilson describes his burping game, he begins by saying that he makes his Furby burp, but he ends up saying that his Furby makes him burp. Wilson likes the sense that he and his Furby are in sync, that he can happily lose track of where he leaves off and the Furby begins.4

WHAT DOES A FURBY WANT?

When Wilson catalogues what he shares with his Furby, there are things of the body (the burping) and there are things of the mind. Like many children, he thinks that because Furbies have language, they are more “peoplelike” than a “regular” pet. They arrive speaking Furbish, a language with its own dictionary, which many children try to commit to memory because they would like to meet their Furbies more than halfway. The Furby manual instructs children, “I can learn to speak English by listening to you talk. The more you play with me, the more I will use your language.” Actually, Furby English emerges over time, whether or not a child talks to the robot. (Furbies have no hearing or language-learning ability.5) But until age eight, children are convinced by the illusion and believe they are teaching their Furbies to speak. The Furbies are alive enough to need them.

Children enjoy the teaching task. From the first encounter, it gives them something in common with their Furbies and it implies that the Furbies can grow to better understand them. “I once didn’t know English,” says one six-year-old. “And now I do. So I know what my Furby is going through.” In the classroom with Furbies, children shout to each other in competitive delight: “My Furby speaks more English than yours! My Furby speaks English.”

I have done several studies in which I send Furbies home with schoolchildren, often with the request that they (and their parents) keep a “Furby diary.” In my first study of kindergarten to third graders, I loan the Furbies out for two weeks at a time. It is not a good decision. I do not anticipate how great the children’s sense of loss will be when I ask them to return the Furbies. I extend the length of the loans, often encouraged by parental requests. Their children have grown too attached to give up the robots. Nor are the children mollified by their parents’ offers to buy them new Furbies. Even more so than with Tamagotchis, children attach to a particular Furby, the one they have taught English, the one they have raised.

For three decades, in describing people’s relationships with computers, I have often used the metaphor of the Rorschach, the inkblot test that psychologists use as a screen onto which people can project their feelings and styles of thought. But as children interact with sociable robots like Furbies, they move beyond a psychology of projection to a new psychology of engagement. They try to deal with the robot as they would deal with a pet or a person. Nine-year-old Leah, in an after-school playgroup, admits, “It’s hard to turn it [the Furby] off when it is talking to me.” Children quickly understand that to get the most out of your Furby, you have to pay attention to what it is telling you. When you are with a Furby, you can’t play a simple game of projective make-believe. You have to continually assess your Furby’s “emotional” and “physical” state. And children fervently believe that the child who loves his or her Furby best will be most loved in return.

This mutuality is at the heart of what makes the Furby, a primitive exemplar of sociable robotics, different from traditional dolls. As we’ve seen, such relational artifacts do not wait for children to “animate” them in the spirit of a Raggedy Ann doll or a teddy bear. They present themselves as already animated and ready for relationship. They promise reciprocity because, unlike traditional dolls, they are not passive. They make demands. They present as having their own needs and inner lives. They teach us the rituals of love that will make them thrive. For decades computers have asked us to think with them; these days, computers and robots, deemed sociable, affective, and relational, ask us to feel for and with them.

Children see traditional dolls as they want them or need them to be. For example, an eight-year-old girl who feels guilty about breaking her mother’s best crystal pitcher might punish a row of Barbie dolls. She might take them away from their tea party and put them in detention, doing unto the dolls what she imagines should be done unto her. In contrast, since relational artifacts present themselves as having minds and intentions of their own, they cannot be so easily punished for one’s own misdeeds. Two eight-year-old girls comment on how their “regular dolls” differ from the robotic Furbies. The first says, “A regular doll, like my Madeleine doll … you can make it go to sleep, but its eyes are painted open, so, um, you cannot get them to close their eyes.... Like a Madeleine doll cannot go, ‘Hello, good morning.’” But this is precisely the sort of thing a Furby can do. The second offers, “The Furby tells you what it wants.”

Indeed, Furbies come with manuals that provide detailed marching orders. They want language practice, food, rest, and protestations of love. So, for example, the manual instructs, “Make sure you say ‘HEY FURBY! I love you!’ frequently so that I feel happy and know I’m loved.” There is general agreement among children that a penchant for giving instructions distinguishes Furbies from traditional dolls. A seven-year-old girl puts it this way: “Dolls let you tell them what they want. The Furbies have their own ideas.” A nine-year-old boy sums up the difference between Furbies and his action figures: “You don’t play with the Furby, you sort of hang out with it. You do try to get power over it, but it has power over you too.”

Children say that traditional dolls can be “hard work” because you have to do all the work of giving them ideas; Furbies are hard work for the opposite reason. They have plenty of ideas, but you have to give them what they want and when they want it. When children attach to a doll through the psychology of projection, they attribute to the doll what is most on their mind. But they need to accommodate a Furby. This give-and-take prepares children for the expectation of relationship with machines that is at the heart of the robotic moment.

Daisy, six, with a Furby at home, believes that each Furby’s owner must help his or her Furby fulfill its mission to learn about people. “You have to teach it; when you buy it, that is your job.” Daisy tells me that she taught her Furby about Brownie Girl Scouts, kindergarten, and whales. “It’s alive; I teach it about whales; it loves me.” Padma, eight, says that she likes meeting what she calls “Furby requests” and thinks that her Furby is “kind of like a person” because “it talks.” She goes on: “It’s kind of like me because I’m a chatterbox.” After two weeks, it is time for Padma to return her Furby, and afterward she feels regret: “I miss how it talked, and now it’s so quiet at my house.... I didn’t get a chance to make him a bed.”

After a month with her Furby, Bianca, seven, speaks with growing confidence about their mutual affection: “I love my Furby because it loves me… . It was like he really knew me.”6 She knows her Furby well enough to believe that “it doesn’t want to miss fun … at a party.” In order to make sure that her social butterfly Furby gets some rest when her parents entertain late into the evening, Bianca clips its ears back with clothespins to fool the robot into thinking that “nothing is going on … so he can fall asleep.” This move is ineffective, and all of this activity is exhausting, but Bianca calmly sums up her commitment: “It takes lots of work to take care of these.”

When Wilson, who so enjoys burping in synchrony with his Furby, faces up to the hard work of getting his Furby to sleep, he knows that if he forces sleep by removing his Furby’s batteries, the robot will “forget” whatever has passed between them—this is unacceptable. So Furby sleep has to come naturally. Wilson tries to exhaust his Furby by keeping it up late at night watching television. He experiments with Furby “sleep houses” made of blankets piled high over towers of blocks. When Wilson considers Furby sleep, his thoughts turn to Furby dreams. He is sure his Furby dreams “when his eyes are closed.” What do Furbies dream of? Second and third graders think they dream “of life on their flying saucers.”7 And they dream about learning languages and playing with the children they love.

David and Zach, both eight, are studying Hebrew. “My Furby dreams about Hebrew,” says David. “It knows how to say Eloheinu… . I didn’t even try to teach it; it was just from listening to me doing Hebrew homework.” Zach agrees: “Mine said Dayeinu in its sleep.” Zach, like Wilson, is proud of how well he can make his Furby sleep by creating silence and covering it with blankets. He is devoted to teaching his Furby English and has been studying Furbish as well; he has mastered the English/Furbish dictionary that comes with the robot. A week after Zach receives his Furby, however, his mother calls my office in agitation. Zach’s Furby is broken. It has been making a “terrible” noise. It sounds as though it might be suffering, and Zach is distraught. Things reached their worst during a car trip from Philadelphia to Boston, with the broken Furby wailing as though in pain. On the long trip home, there was no Phillips screwdriver for the ultimate silencing, so Zach and his parents tried to put the Furby to sleep by nestling it under a blanket. But every time the car hit a bump, the Furby woke up and made the “terrible” noise. I take away the broken Furby, and give Zach a new one, but he wants little to do with it. He doesn’t talk to it or try to teach it. His interest is in “his” Furby, the Furby he nurtured, the Furby he taught. He says, “The Furby that I had before could say ‘again’; it could say ‘hungry.’” Zach believes he was making progress teaching the first Furby a bit of Spanish and French. The first Furby was never “annoying,” but the second Furby is. His Furby is irreplaceable.

After a few weeks, Zach’s mother calls to ask if their family has my permission to give the replacement Furby to one of Zach’s friends. When I say yes, Zach calmly contemplates the loss of Furby #2. He has loved; he has lost; he is not willing to reinvest. Neither is eight-year-old Holly, who becomes upset and withdrawn when her mother takes the batteries out of her Furby. The family was about to leave on an extended vacation, and the Furby manual suggests taking out a Furby’s batteries if it will go unused for a long time. Holly’s mother did not understand the implications of what she saw as commonsense advice from the manual. She insists, with increasing defensiveness, that she was only “following the instructions.” Wide-eyed, Holly tries to make her mother understand what she has done: when the batteries are removed, Holly says, “the Furby forgets its life.”

Designed to give users a sense of progress in teaching it, the Furby evolves over time and becomes the irreplaceable repository and proof of its owner’s care. The robot and child have traveled a bit of road together. When a Furby forgets, it is as if a friend has become amnesic. A new Furby is a stranger. Zach and Holly cannot bear beginning again with a new Furby that could never be the Furby into which each has poured time and attention.

OPERATING PROCEDURES

In the 1980s, the computer toy Merlin made happy and sad noises depending on whether it was winning or losing the sound-and-light game it played with children. Children saw Merlin as “sort of alive” because of how well it played memory games, but they did not fully believe in Merlin’s shows of emotion. When a Merlin broke down, children were sorry to lose a playmate. When a Furby doesn’t work, however, children see a creature that might be in pain.

Lily, ten, worries that her broken Furby is hurting. But she doesn’t want to turn it off, because “that means you aren’t taking care of it.” She fears that if she shuts off a Furby in pain, she might make things worse. Two eight-year-olds fret about how much their Furbies sneeze. The first worries that his sneezing Furby is allergic to him. The other fears his Furby got its cold because “I didn’t do a good enough job taking care of him.” Several children become tense when Furbies make unfamiliar sounds that might be signals of distress. I observe children with their other toys: dolls, toy soldiers, action figures. If these toys make strange sounds, they are usually put aside; broken toys lead easily to boredom. But when a Furby is in trouble, children ask, “Is it tired?” “Is it sad?” “Have I hurt it?” “Is it sick?” “What shall I do?”

Taking care of a robot is a high-stakes game. Things can—and do—go wrong. In one kindergarten, when a Furby breaks down, the children decide they want to heal it. Ten children volunteer, seeing themselves as doctors in an emergency room. They decide they’ll begin by taking it apart.

The proceedings begin in a state of relative calm. When talking about their sick Furby, the children insist that this breakdown does not mean the end: people get sick and get better. But as soon as scissors and pliers appear, they become anxious. At this point, Alicia screams, “The Furby is going to die!” Sven, to his classmates’ horror, pinpoints the moment when Furbies die: it happens when a Furby’s skin is ripped off. Sven considers the Furby as an animal. You can shave an animal’s fur, and it will live. But you cannot take its skin off. As the operation continues, Sven reconsiders. Perhaps the Furby can live without its skin, “but it will be cold.” He doesn’t back completely away from the biological (the Furby is sensitive to the cold) but reconstructs it. For Sven, the biological now includes creatures such as Furbies, whose “insides” stay “all in the same place” when their skin is removed. This accommodation calms him down. If a Furby is simultaneously biological and mechanical, the operation in process, which is certainly removing the Furby’s skin, is not necessarily destructive. Children make theories when they are confused or anxious. A good theory can reduce anxiety.

But some children become more anxious as the operation continues. One suggests that if the Furby dies, it might haunt them. It is alive enough to turn into a ghost. Indeed, a group of children start to call the empty Furby skin “the ghost of Furby” and the Furby’s naked body “the goblin.” They are not happy that this operation might leave a Furby goblin and ghost at large. One girl comes up with the idea that the ghost of the Furby will be less fearful if distributed. She asks if it would be okay “if every child took home a piece of Furby skin.” She is told this would be fine, but, unappeased, she asks the same question two more times. In the end, most children leave with a bit of Furby fur.8 Some talk about burying it when they get home. They leave room for a private ritual to placate the goblin and say good-bye.

Inside the classroom, most of the children feel they are doing the best they can with a sick pet. But from outside the classroom, the Furby surgery looks alarming. Children passing by call out, “You killed him.” “How dare you kill Furby?” “You’ll go to Furby jail.” Denise, eight, watches some of the goings-on from the safety of the hall. She has a Furby at home and says that she does not like to talk about its problems as diseases because “Furbies are not animals.” She uses the word “fake” to mean nonbiological and says, “Furbies are fake, and they don’t get diseases.” But later, she reconsiders her position when her own Furby’s batteries run out and the robot, so chatty only moments before, becomes inert. Denise panics: “It’s dead. It’s dead right now.... Its eyes are closed.” She then declares her Furby “both fake and dead.” Denise concludes that worn-out batteries and water can kill a Furby. It is a mechanism, but alive enough to die.

Linda, six, is one of the children whose family has volunteered to keep a Furby for a two-week home study. She looked forward to speaking to her Furby, sure that unlike her other dolls, this robot would be worth talking to. But on its very first night at her home, her Furby stops working: “Yeah, I got used to it, and then it broke that night—the night that I got it. I felt like I was broken or something.... I cried a lot… . I was really sad that it broke, ’cause Furbies talk, they’re like real, they’re like real people.” Linda is so upset about not protecting her Furby that when it breaks she feels herself broken.

Things get more complicated when I give Linda a new Furby. Unlike children like Zach who have invested time and love in a “first Furby” and want no replacements, Linda had her original Furby in working condition for only a few hours. She likes having Furby #2: “It plays hide-and-seek with me. I play red light, green light, just like in the manual.” Linda feeds it and makes sure it gets enough rest, and she reports that her new Furby is grateful and affectionate. She makes this compatible with her assessment of a Furby as “just a toy” because she has come to see gratitude, conversation, and affection as something that toys can manage. But now she will not name her Furby or say it is alive. There would be risk in that: Linda might feel guilty if the new Furby were alive enough to die and she had a replay of her painful first experience.

Like the child surgeons, Linda ends up making a compromise: the Furby is both biological and mechanical. She tells her friends, “The Furby is kind of real but just a toy.” She elaborates that “[the Furby] is real because it is talking and moving and going to sleep. It’s kind of like a human and a pet.” It is a toy because “you had to put in batteries and stuff, and it could stop talking.”

So hybridity can offer comfort. If you focus on the Furby’s mechanical side, you can enjoy some of the pleasures of companionship without the risks of attachment to a pet or a person. With practice, says nine-year-old Lara, reflecting on her Furby, “you can get it to like you. But it won’t die or run away. That is good.” But hybridity also brings new anxieties. If you grant the Furby a bit of life, how do you treat it so that it doesn’t get hurt or killed? An object on the boundaries of life, as we’ve seen, suggests the possibility of real pain.

AN ETHICAL LANDSCAPE

When a mechanism breaks, we may feel regretful, inconvenienced, or angry. We debate whether it is worth getting it fixed. When a doll cries, children know that they are themselves creating the tears. But a robot with a body can get “hurt,” as we saw in the improvised Furby surgical theater. Sociable robotics exploits the idea of a robotic body to move people to relate to machines as subjects, as creatures in pain rather than broken objects. That even the most primitive Tamagotchi can inspire these feelings demonstrates that objects cross that line not because of their sophistication but because of the feelings of attachment they evoke. The Furby, even more than the Tamagotchi, is alive enough to suggest a body in pain as well as a troubled mind. Furbies whine and moan, leaving it to their users to discover what might help. And what to make of the moment when an upside-down Furby says, “Me scared!”?

Freedom Baird takes this question very seriously.9 A recent graduate of the MIT Media Lab, she finds herself engaged with her Furby as a creature and a machine. But how seriously does she take the idea of the Furby as a creature? To determine this, she proposes an exercise in the spirit of the Turing test.

In the original Turing test, published in 1950, the mathematician Alan Turing, whose universal machine provided the theoretical blueprint for the general-purpose computer, asked under what conditions people would consider a computer intelligent. In the end, he settled on a test in which the computer would be declared intelligent if it could convince people it was not a machine. Turing was working with computers made up of vacuum tubes and Teletype terminals. He suggested that if participants couldn’t tell, as they worked at their Teletypes, whether they were talking to a person or a computer, that computer would be deemed “intelligent.”10

A half century later, Baird asks under what conditions a creature is deemed alive enough for people to experience an ethical dilemma if it is distressed. She designs a Turing test not for the head but for the heart and calls it the “upside-down test.” A person is asked to invert three creatures: a Barbie doll, a Furby, and a biological gerbil. Baird’s question is simple: “How long can you hold the object upside down before your emotions make you turn it back?” Baird’s experiment assumes that a sociable robot makes new ethical demands. Why? The robot performs a psychology; many experience this as evidence of an inner life, no matter how primitive. Even those who do not think a Furby has a mind—and this, on a conscious level, includes most people—find themselves in a new place with an upside-down Furby that is whining and telling them it is scared. They feel themselves, often despite themselves, in a situation that calls for an ethical response. This usually happens at the moment when they identify with the “creature” before them, all the while knowing that it is “only a machine.”

This simultaneity of vision gives Baird the predictable results of the upside-down test. As Baird puts it, “People are willing to be carrying the Barbie around by the feet, slinging it by the hair … no problem.... People are not going to mess around with their gerbil.” But in the case of the Furby, people will “hold the Furby upside down for thirty seconds or so, but when it starts crying and saying it’s scared, most people feel guilty and turn it over.”

The work of neuroscientist Antonio Damasio offers insight into the origins of this guilt. Damasio describes two levels of experiencing pain. The first is a physical response to a painful stimulus. The second, a far more complex reaction, is an emotion associated with pain. This is an internal representation of the physical.11 When the Furby says, “Me scared,” it signals that it has crossed the line between a physical response and an emotion, the internal representation. When people hold a Furby upside down, they do something that would be painful if done to an animal. The Furby cries out—as if it were an animal. But then it says, “Me scared”—as if it were a person.

People are surprised by how upset they get in this theater of distress. And then they get upset that they are upset. They often try to reassure themselves, saying things like, “Chill, chill, it’s only a toy!” They are experiencing something new: you can feel bad about yourself for how you behave with a computer program. Adults come to the upside-down test knowing two things: the Furby is a machine and they are not torturers. By the end, with a whimpering Furby in tow, they are on new ethical terrain.12

We are at the point of seeing digital objects as both creatures and machines. A series of fractured surfaces—pet, voice, machine, friend—come together to create an experience in which knowing that a Furby is a machine does not alter the feeling that you can cause it pain. Kara, a woman in her fifties, reflects on holding a moaning Furby that says it is scared. She finds it distasteful, “not because I believe that the Furby is really scared, but because I’m not willing to hear anything talk like that and respond by continuing my behavior. It feels to me that I could be hurt if I keep doing this.” For Kara, “That is not what I do.... In that moment, the Furby comes to represent how I treat creatures.”

When the toy manufacturer Hasbro introduced its My Real Baby robot doll in 2000, it tried to step away from these complex matters. My Real Baby shut down in situations where a real baby might feel pain. This was in contrast to its prototype, a robot called “IT,” developed by a team led by MIT roboticist Rodney Brooks. “IT” evolved into “BIT” (for Baby IT), a doll with “states of mind” and facial musculature under its synthetic skin to give it expression.13 When touched in a way that would induce pain in a child, BIT cried out. Brooks describes BIT in terms of its inner states:

If the baby were upset, it would stay upset until someone soothed it or it finally fell asleep after minutes of heartrending crying and fussing. If BIT … was abused in any way—for instance, by being swung upside down—it got very upset. If it was upset and someone bounced it on their knee, it got more upset, but if the same thing happened when it was happy, it got more and more excited, giggling and laughing, until eventually it got overtired and started to get upset. If it were hungry, it would stay hungry until it was fed. It acted a lot like a real baby.14

BIT, with its reactions to abuse, became the center of an ethical world that people constructed around its responses to pleasure and pain. But when Hasbro put BIT into mass production as My Real Baby, the company decided not to present children with a toy that responded to pain. The theory was that a robot’s response to pain could “enable” sadistic behavior. If My Real Baby were touched, held, or bounced in a way that would hurt a real baby, the robot shut down.

In its promotional literature, Hasbro marketed My Real Baby as “the most real, dynamic baby doll available for young girls to take care of and nurture.” They presented it as a companion that would teach and encourage reciprocal social behavior as children were trained to respond to its needs for amusement as well as bottles, sleep, and diaper changes. Indeed, it was marketed as realistic in all things—except that if you “hurt” it, it shut down. When children play with My Real Baby, they do explore aggressive possibilities. They spank it. It shuts down. They shake it, turn it upside down, and box its ears. It shuts down.

Hasbro’s choice—maximum realism, but with no feedback for abuse—inspires strong feelings, especially among parents. For one group of parents, what is most important is to avoid a child’s aggressive response. Some believe that if you market realism but show no response to “pain,” children are encouraged to inflict it because doing so seems to have no cost. Others think that if a robot simulates pain, it enables mistreatment.

Another group of parents wish that My Real Baby would respond to pain for the same reason that they justify letting their children play violent video games: they see such experiences as “cathartic.” They say that children (and adults too) should express aggression (or sadism or curiosity) in situations that seem “realistic” but where nothing “alive” is being hurt. But even these parents are sometimes grateful for My Real Baby’s unrealistic show of “denial.” They do not want to see their children tormenting a screaming baby.

No matter what position one takes, sociable robots have taught us that we do not shrink from harming realistic simulations of life. This is, of course, how we now train people for war. First, we learn to kill the virtual. Then, desensitized, we are sent to kill the real. The prospect of studying these matters raises awful questions. Freedom Baird had people hold a whining, complaining Furby upside down, much to their discomfort. Do we want to encourage the abuse of increasingly realistic robot dolls?

When I observe children with My Real Baby in an after-school playgroup for eight-year-olds, I see a range of responses. Alana, to the delight of a small band of her friends, flings My Real Baby into the air and then shakes it violently while holding it by one leg. Alana says the robot has “no feelings.” Watching her, one wonders why it is necessary then to “torment” something without feelings. She does not behave this way with the many other dolls in the playroom. Scott, upset, steals the robot and brings it to a private space. He says, “My Real Baby is like a baby and like a doll.... I don’t think she wants to get hurt.”

As Scott tries to put the robot’s diaper back on, some of the other children stand beside him and put their fingers in its eyes and mouth. One asks, “Do you think that hurts?” Scott warns, “The baby’s going to cry!” At this point, one girl tries to pull My Real Baby away from Scott because she sees him as an inadequate protector: “Let go of her!” Scott resists. “I was in the middle of changing her!” It seems a good time to end the play session. As the research team, exhausted, packs up to go, Scott sneaks behind a table with the robot, gives it a kiss, and says good-bye, out of the sight of the other children.

In the pandemonium of Scott and Alana’s playgroup, My Real Baby is alive enough to torment and alive enough to protect. The adults watching this—a group of teachers and my research team—feel themselves in an unaccustomed quandary. If the children had been tossing around a rag doll, neither we, nor presumably Scott, would have been as upset. But it is hard to see My Real Baby treated this way. All of this—the Furbies that complain of pain, the My Real Babies that do not—creates a new ethical landscape. The computer toys of the 1980s only suggested ethical issues, as when children played with the idea of life and death when they “killed” their Speak & Spells by taking out the toys’ batteries. Now, relational artifacts pose these questions directly.

One can see the new ethics at work in my students’ reactions to Nexi, a humanoid robot at MIT. Nexi has a female torso, an emotionally expressive face, and the ability to speak. In 2009, one of my students, researching a paper, made an appointment to talk with the robot’s development team. Due to a misunderstanding about scheduling, my student waited alone, near the robot. She was upset by her time there: when not interacting with people, Nexi was put behind a curtain and blindfolded.

At the next meeting of my graduate seminar, my student shared her experience of sitting alongside the robot. “It was very upsetting,” she said. “The curtain—and why was she blindfolded? I was upset because she was blindfolded.” The story of the shrouded and blindfolded Nexi ignited the seminar. In the conversation, all the students talked about the robot as a “she.” The designers had done everything they could to give the robot gender. And now, the act of blindfolding signaled sight and consciousness. In class, questions tumbled forth: Was the blindfold there because it would be too upsetting to see Nexi’s eyes? Perhaps when Nexi was turned off, “her” eyes remained open, like the eyes of a dead person? Perhaps the robot makers didn’t want Nexi to see “out”? Perhaps they didn’t want Nexi to know that when not in use, “she” is left in a corner behind a curtain? This line of reasoning led the seminar to an even more unsettling question: If Nexi is smart enough to need a blindfold to protect “her” from fully grasping “her” situation, does that mean that “she” is enough of a subject to make “her” situation abusive? The students agreed on one thing: blindfolding the robot sends a signal that “this robot can see.” And seeing implies understanding and an inner life, enough of one to make abuse possible.

I have said that Sigmund Freud saw the uncanny as something long familiar that feels strangely unfamiliar. The uncanny stands between standard categories and challenges the categories themselves. It is familiar to see a doll at rest. But we don’t need to cover its eyes, for it is we who animate it. It is familiar to have a person’s expressive face beckon to us, but if we blindfold that person and put them behind a curtain, we are inflicting punishment. The Furby with its expressions of fear and the gendered Nexi with her blindfold are the new uncanny in the culture of computing.

I feel even more uncomfortable when I learn about a beautiful “female” robot, Aiko, now on sale, that says, “Please let go … you are hurting me,” when its artificial skin is pressed too hard. The robot also protests when its breast is touched: “I do not like it when you touch my breasts.” I find these programmed assertions of boundaries and modesty disturbing because it is almost impossible to hear them without imagining an erotic body braced for assault.

FROM THE ROMANTIC REACTION TO THE ROBOTIC MOMENT

Soon, it may seem natural to watch a robot “suffer” if you hurt it. It may seem natural to chat with a robot and have it behave as though pleased you stopped by. As the intensity of experiences with robots increases, as we learn to live in new landscapes, both children and adults may stop asking the questions “Why am I talking to a robot?” and “Why do I want this robot to like me?” We may simply be charmed by the pleasure of its company.

The romantic reaction of the 1980s and 1990s put a premium on what only people can contribute to each other: the understanding that grows out of shared human experience. It insisted that there is something essential about the human spirit. In the early 1980s, David, twelve, who had learned computer programming at school, contrasted people and programs this way: “When there are computers who are just as smart as the people, the computers will do a lot of the jobs, but there will still be things for the people to do. They will run the restaurants, taste the food, and they will be the ones who will love each other, have families and love each other. I guess they’ll still be the only ones who go to church.”15 Adults, too, spoke of life in families. To me, the romantic reaction was captured by how one man rebuffed the idea that he might confide in a computer psychotherapist: “How can I talk about sibling rivalry to something that never had a mother?”

Of course, elements of this romantic reaction are still around us. But a new sensibility emphasizes what we share with our technologies. With psychopharmacology, we approach the mind as a bioengineerable machine.16 Brain imaging trains us to believe that things—even things like feelings—are reducible to what they look like. Our current therapeutic culture turns from the inner life to focus on the mechanics of behavior, something that people and robots might share.

A quarter of a century stands between two conversations I had about the possibilities of a robot confidant, the first in 1983, the second in 2008. For me, the differences between them mark the movement from the romantic reaction to the pragmatism of the robotic moment. Both conversations were with teenage boys from the same Boston neighborhood; they are both Red Sox fans and have close relationships with their fathers. In 1983, thirteen-year-old Bruce talked about robots and argued for the unique “emotionality” of people. Bruce rested his case on the idea that computers and robots are “perfect,” while people are “imperfect,” flawed and frail. Robots, he said, “do everything right”; people “do the best they know how.” But for Bruce it was human imperfection that makes for the ties that bind. Specifically, his own limitations made him feel close to his father (“I have a lot in common with my father.... We both have chaos”). Perfect robots could never understand this very important relationship. If you ever have a problem, you go to a person.

Twenty-five years later, a conversation on the same theme goes in a very different direction. Howard, fifteen, compares his father to the idea of a robot confidant, and his father does not fare well in the comparison. Howard thinks the robot would be better able to grasp the intricacies of high school life: “Its database would be larger than Dad’s. Dad has knowledge of basic things, but not enough of high school.” In contrast to Bruce’s sense that robots are not qualified to have an opinion about the goings-on in families, Howard hopes that robots might be specially trained to take care of “the elderly and children”—something he doesn’t see the people around him as much interested in.

Howard has no illusions about the uniqueness of people. In his view, “they don’t have a monopoly” on the ability to understand or care for each other. Each human being is limited by his or her own life experience, says Howard, but “computers and robots can be programmed with an infinite amount of information.” Howard tells a story to illustrate how a robot could provide him with better advice than his father. Earlier that year, Howard had a crush on a girl at school who already had a boyfriend. He talked to his father about asking her out. His father, operating on an experience he had in high school and what Howard considers an outdated ideal of “macho,” suggested that he ask the girl out even though she was dating someone else. Howard ignored his father’s advice, fearing it would lead to disaster. He was certain that in this case, a robot would have been more astute. The robot “could be uploaded with many experiences” that would have led to the right answer, while his father was working with a limited data set. “Robots can be made to understand things like jealousy from observing how people behave.... A robot can be fully understanding and open-minded.” Howard thinks that as a confidant, the robot comes out way ahead. “People,” he says, are “risky.” Robots are “safe.”

There are things, which you cannot tell your friends or your parents, which … you could tell an AI. Then it would give you advice you could be more sure of.... I’m assuming it would be programmed with prior knowledge of situations and how they worked out. Knowledge of you, probably knowledge of your friends, so it could make a reasonable decision for your course of action. I know a lot of teenagers, in particular, tend to be caught up in emotional things and make some really bad mistakes because of that.

I ask Howard to imagine what his first few conversations with a robot might be like. He says that the first would be “about happiness and exactly what that is, how do you gain it.” The second conversation would be “about human fallibility,” understood as something that causes “mistakes.” From Bruce to Howard, human fallibility has gone from being an endearment to a liability.

No generation of parents has ever seemed like experts to their children. But those in Howard’s generation are primed to see the possibilities for relationships their elders never envisaged. They assume that an artificial intelligence could monitor all of their e-mails, calls, Web searches, and messages. This machine could supplement its knowledge with its own searches and retain a nearly infinite amount of data. So, many of them imagine that via such search and storage an artificial intelligence or robot might tune itself to their exact needs. As they see it, nothing technical stands in the way of this robot’s understanding, as Howard puts it, “how different social choices [have] worked out.” Having knowledge and your best interests at heart, “it would be good to talk to … about life. About romantic matters. And problems of friendship.”

Life? Romantic matters? Problems of friendship? These were the sacred spaces of the romantic reaction. Only people were allowed there. Howard thinks that all of these can be boiled down to information so that a robot can be both expert resource and companion. We are at the robotic moment.

As I have said, my story of this moment is not so much about advances in technology, impressive though these have been. Rather, I call attention to our strong response to the relatively little that sociable robots offer—fueled, it would seem, by our fond hope that they will offer more. With each new robot, there is a ramp-up in our expectations. I find us vulnerable—a vulnerability, I believe, not without risk.