Alone Together: Why We Expect More from Technology and Less from Each Other - Sherry Turkle (2011)

Part I. The Robotic Moment

Chapter 5. Complicities

I first met Cog in July 1994, in Rodney Brooks’s Artificial Intelligence Laboratory at MIT. The institute was hosting an artificial-life workshop, a conference that buzzed with optimism about science on its way to synthesizing what contributors called “the living state.” Breathtaking though they were in capturing many of the features of living systems, most of the “life forms” this field had developed had no physical presence more substantial than images on a computer screen; these creatures lived in simulation. Not so Cog, a life-size human torso, with mobile arms, neck, and head.

Cog grew out of a long research tradition in Brooks’s lab. He and his colleagues work with the assumption that much of what we see as complex behavior is made up of simple responses to a complex environment. Consider how artificial intelligence pioneer Herbert Simon describes an ant walking across a sand dune: the ant is not thinking about getting from point A to point B. Instead, the ant, in its environment, follows a simple set of rules: keep moving and avoid obstacles. After more than fifteen years of using this kind of strategy to build robots that aspired to insect-level intelligence, Brooks said he was ready “to go for the whole iguana.”1 In the early 1990s, Brooks and his team began to build Cog, a robotic two-year-old. The aspiration was to have Cog “learn” from its environment, which included the many researchers who dedicated themselves to its education. For some, Cog was a noble experiment on the possibilities of embodied, “emergent” intelligence. For others, it was a grandiose fantasy. I decided to see for myself.
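
Simon's point, that complexity can sit in the environment rather than in the agent, can be made concrete with a minimal, hypothetical sketch. The grid world, the obstacle density, and the two rules below are my own illustrative assumptions, not code from Simon or from Brooks's lab; the sketch only shows how an agent that "knows" almost nothing can trace a winding, complicated-looking path.

```python
import random

# A hypothetical grid world: True marks an obstacle (a grain of sand in the ant's path).
# Purely illustrative; not Brooks's or Simon's actual code.
SIZE = 40
WORLD = [[random.random() < 0.3 for _ in range(SIZE)] for _ in range(SIZE)]

def step(pos, heading):
    """Rule 1: keep moving. Rule 2: if the square ahead is blocked, turn until it isn't."""
    directions = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # east, south, west, north
    for turn in range(4):
        dy, dx = directions[(heading + turn) % 4]
        y, x = pos[0] + dy, pos[1] + dx
        if 0 <= y < SIZE and 0 <= x < SIZE and not WORLD[y][x]:
            return (y, x), (heading + turn) % 4
    return pos, heading  # boxed in on all sides: stay put

pos, heading = (0, 0), 0
path = [pos]
for _ in range(200):
    pos, heading = step(pos, heading)
    path.append(pos)

print(path[:10])  # the path meanders, though the agent follows only two rules
```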

I went to Brooks’s lab with Christopher Langton, one of the founders of the field of artificial life—indeed, the man who had coined the term. In town from New Mexico for the A-Life conference, Langton was as eager as I to see the robot. At the AI lab, robot parts were stacked in compartments and containers; others were strewn about in riots of color. In the midst of it all was Cog, on a pedestal, immobile, almost imperial—a humanoid robot, one of the first, its face rudimentary, but with piercing eyes.

Trained to track the movement of human beings (typically those objects whose movements are not constant), Cog “noticed” me soon after I entered the room. Its head turned to follow me, and I was embarrassed to note that this made me happy—unreasonably happy. In fact, I found myself competing with Langton for the robot’s attention. At one point, I felt sure that Cog’s eyes had “caught” my own, and I experienced a sense of triumph. It was noticing me, not its other guest. My visit left me surprised—not so much by what Cog was able to accomplish but by my own reaction to it. For years, whenever I had heard Brooks speak about his robotic “creatures,” I had always been careful to mentally put quotation marks around the word. But now, with Cog, I had an experience in which the quotation marks disappeared. There I stood in the presence of a robot and I wanted it to favor me. My response was involuntary, I am tempted to say visceral. Cog had a face, it made eye contact, and it followed my movements. With these three simple elements in play, although I knew Cog to be a machine, I had to fight my instinct to react to “him” as a person.

MECHANICAL TODDLERS

Cog’s builders imagined a physically agile toddler that responds to what it sees, touches, and hears. An adjacent laboratory houses another robot designed to simulate that toddler’s emotions. This is the facially and vocally expressive Kismet, with large doll eyes and eyelashes and red rubber tubing lips. It speaks in a soft babble that mimics the inflections of human speech. Kismet has a range of “affective” states and knows how to take its turn in conversation. It can repeat a requested word, most often to say its own name or to learn the name of the person talking to it.2
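
Kismet's real architecture is far richer than anything that fits on a page, but a toy sketch can suggest what "affective states" and conversational turn-taking might look like in code. The state variables, thresholds, and canned responses below are my own illustrative assumptions, not Breazeal's implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Affect:
    # A toy two-dimensional affect space (my simplification, not Kismet's actual model).
    arousal: float = 0.0   # bored (-1) ... excited (+1)
    valence: float = 0.0   # distressed (-1) ... happy (+1)

    def update(self, person_is_speaking: bool, face_in_view: bool) -> None:
        # Attention from a person nudges the robot toward excitement and pleasure;
        # being ignored lets both drift back down.
        delta = 0.2 if (person_is_speaking or face_in_view) else -0.1
        self.arousal = max(-1.0, min(1.0, self.arousal + delta))
        self.valence = max(-1.0, min(1.0, self.valence + delta / 2))

    def expression(self) -> str:
        if self.arousal < -0.5:
            return "droop ears, look away"     # bored
        if self.valence > 0.5:
            return "perk up, smile, babble"    # engaged and happy
        return "neutral gaze"

def take_turn(affect: Affect, heard_word: Optional[str]) -> str:
    """Toy turn-taking: wait out the person's turn, then respond; fill silences expressively."""
    if heard_word:
        return f"repeat '{heard_word}' in a tone matching valence={affect.valence:.1f}"
    return affect.expression()
```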

Like Cog, Kismet learns through interaction with people. Brooks and his colleagues hoped that by building learning systems, we would learn about learning.3 And robots that learn through social interaction are the precursors to machines that can actively collaborate with people. A sociable robot would, for example, know how to interpret human signaling. So, to warn an astronaut of danger, a robot working alongside could lift the palm of its hand in that universal cue that says “stop.” And the person working with the robot could also communicate with simple gestures.4 But more than marking progress toward such practical applications, Cog and Kismet generate feelings of kinship. We’ve already seen that when this happens, two ideas become more comfortable. The first is that people are not so different from robots; that is, people are built from information. The second is that robots are not so different from people; that is, robots are more than the sum of their machine parts.

From its very beginnings, artificial intelligence has worked in this space between a mechanical view of people and a psychological, even spiritual, view of machines. Norbert Wiener, the founder of cybernetics, dreamed in the 1960s that it was “conceptually possible for a human being to be sent over a telegraph line,” while in the mid-1980s, one MIT student mused that his teacher, AI pioneer Marvin Minsky, really wanted to “create a computer beautiful enough that a soul would want to live in it.”5 Whether or not a soul is ready to inhabit any of our current machines, reactions to Cog and Kismet bring this fantasy to mind. A graduate student, often alone at night in the lab with Kismet, confides, “I say to myself it’s just a machine, but then after I leave, I want to check on it at night, just to make sure it’s okay.” Not surprisingly, for we have seen this as early as the ELIZA program, both adults and children are drawn to do whatever it takes to sustain a view of these robots as sentient and even caring.6 This complicity enlivens the robots, even as the people in their presence are enlivened, sensing themselves in a relationship.

Over the years, some of my students have even spoken of time with Cog and Kismet by referring to a robotic “I and thou.”7 Theologian Martin Buber coined this phrase to refer to a profound meeting of human minds and hearts. It implies a symmetrical encounter. There is no such symmetry between human beings and even the most advanced robots. But even simple actions by Cog and Kismet inspire this extravagance of description, touching, I think, on our desire to believe that such symmetry is possible. In the case of Cog, we build a “thou” through the body. In the case of Kismet, an expressive face and voice do the work. And both robots engage with the power of the gaze. A robotic face is an enabler; it encourages us to imagine that robots can put themselves in our place and that we can put ourselves in theirs.8

When a robot holds our gaze, the hardwiring of evolution makes us think that the robot is interested in us. When that happens, we feel a possibility for deeper connection. We want it to happen. We come to sociable robots with the problems of our lives, with our needs for care and attention. They promise satisfactions, even if only in fantasy. Getting satisfaction means helping the robots, filling in where they are not yet ready, making up for their lapses. We are drawn into necessary complicities.

I join with Brian Scassellati and Cynthia Breazeal, the principal designers for Cog and Kismet respectively, on a study of children’s encounters with these robots.9 We introduce them to sixty children, from ages five to fourteen, from a culturally and economically diverse cross section of local communities. We call it our “first-encounters” study because in most cases, the children meet Cog or Kismet just once and have never previously seen anything like them.

When children meet these robots, they quickly understand that these machines are not toys—indeed, as I have said, these robots have their own toys, an array of stuffed animals, a slinky, dolls, and blocks. The laboratory setting in which adults engage with the robots says, “These robots don’t belong to you, they belong with you.” It says, “They are not for you; in some important way, they are like you.” Some children wonder, if these robots belong with people, then what failings in people require robots? For one thirteen-year-old boy, Cog suggests that “humans aren’t good enough so they need something else.”

In our first-encounters study children’s time with the robots is unstructured. We ask questions, but not many. The children are encouraged to say whatever comes to mind. Our goal is to explore some rather open questions: How do children respond to an encounter with a novel form of social intelligence? What are they looking for?

To this last, the answer is, most simply, that children want to connect with these machines, to teach them and befriend them. And they want the robots to like, even love, them. Children speak of this directly (“Cog loves me”; “Kismet is like my sister; she loves me”; “He [Cog] is my pal; he wants to do things with me, everything with me. Like a best friend.”). Even the oldest children are visibly moved when Kismet “learns” their names, something that this robot can do but only rarely accomplishes. Children get unhappy if Kismet says the name of another child, which they often take as evidence of Kismet’s disinterest.

Children are willing to work hard, really hard, to win the robots’ affection. They dance for the robots and sing favorite childhood songs: “The Farmer in the Dell,” “Happy Birthday,” “Three Blind Mice.” They try to make the robots happy with stuffed animals and improvised games. One ten-year-old boy makes clay treats for Kismet to eat and tells us that he is going “to take care of it and protect it against all evil.” But because Cog and Kismet cannot like or dislike, children’s complicity is required to give the impression that there is an emerging fondness. Things can get tense. These more sophisticated robots seem to promise more intimacy than their simpler “cousins.” So when they do not gratify, they seem more “withholding.”

During our study Cog has a broken arm, and Kismet is being modified for research purposes. On many days both robots are “buggy.” Children work gamely around these limitations. So, on a day when there are problems with Kismet’s microphone, some children try out the idea that Kismet is having trouble talking because it speaks a foreign language. A five-year-old decides that this language is Korean, his own language. A twelve-year-old argues for French, then changes her mind and decides on Spanish. When Kismet finally does speak to her, she is pleased. She says that she was right about the Spanish. “He trusts me,” she says happily, and bids the robot good-bye with a wave and an adios. Of course, children are sometimes exhausted by a robot’s quirky malfunctions or made anxious when attempts to charm a broken machine fail. There are disappointments and even tears. And yet, the children persevere. The robots are alive enough to keep them wanting more.

As we saw with simpler robots, the children’s attachments speak not simply to what the robots offer but to what children are missing. Many children in this study seem to lack what they need most: parents who attend to them and a sense of being important. Children imagine sociable machines as substitutes for the people missing in their lives. When the machines fail, it is sometimes a moment to revisit past losses. What we ask of robots shows us what we need.

BUILDING A “THOU” THROUGH THE BODY

When children realize that Cog will not speak, they do not easily give up on a feeling that it should. Some theorize that it is deaf. Several of the children have learned a bit of American Sign Language at school and seize on it as a way to communicate. They do not question the idea that Cog has things it wants to say and that they would be interested to hear.

When Allegra, nine, meets Cog, she reaches out to shake its hand. Cog returns her gesture, and they have a moment when their eyes and hands lock. Allegra then wants to know if it is possible to make a mouth for Cog. The robot has a mouth, but Allegra means a mouth that can speak. Like the five-year-old who thought that a Furby should have arms “because it might want to hug me,” Allegra explains that Cog “probably wants to talk to other people … and it might want to smile.” Allegra also thinks that an “improved” Cog should know how to dance. Scassellati asks, “Should it just dance for you or should it be able to dance with you?” Allegra’s answer is immediate: “Dance with me!” Inspired, she begins to dance, first hip-hop, then the slow and graceful turns of ballet. In response, Cog moves its head and its one functional arm. Robot and child are bound together. After a few minutes, Allegra says, “If his [Cog’s] other arm could move, I think that I would teach him to hug me.” Cog has become alive enough to love her. Later, Allegra makes her dance steps more complex and rapid. Now she dances not with but for Cog. She wants to please it, and she says, “a little bit I want to show off for him.”

Brooke, seven, comes to her session with Cog hoping that it has “a heart … and tonsils” so that it will be able to talk and sing with her. When this doesn’t work out, she moves on to teaching Cog to balance its toys—stuffed animals, a slinky, blocks—on its arms, shoulders, and neck. When things go awry, as they often do (Cog can rarely balance the toys), she gently chides the robot: “Are you paying attention to me, mister?” She says that Cog’s failures are perhaps due to her not having identified its favorite toy, and she remains Cog’s dedicated tutor. Cog finally succeeds in balancing its slinky and this reanimates the robot in her eyes. When Cog fails in successive attempts, Brooke assumes it has lost interest in her game. She asks it, “What’s the matter?” She never questions her pupil’s competency, only its desire.

But Brooke yearns to talk to the robot. She tells Cog that at home she feels ignored, in the shadow of her eleven-year-old sister Andrea, who is scheduled to meet Cog later that day: “Nobody talks to me… . Nobody listens to me.” When Cog responds with silence, she is distressed. “Is he trying to tell me to go away?” she asks. “Cog, Cog, Cog … why aren’t you listening to me?” Suddenly, she has an idea and declares, “I didn’t think of this before.... This is what you have to do.” She begins to use sign language. “I know how to say ‘house’… . I can teach him to say ‘house’ [she taps her head with her right palm, making the sign for house].” Then she signs “eat” and “I love you” as Cog focuses on her hands. She is happy that Cog pays attention: “He loves me, definitely.”

Now, feeling both successful and competitive, Brooke boasts that she has a better relationship with Cog than her sister will have: “She’s probably just going to talk to Cog. I’m not just talking. I’m teaching.” As Brooke leaves, she announces to the research team, “I wanted him to speak to me. I know the robot down the hall [Kismet] is the talking one. But I really wanted him to talk.”

Scassellati is used to hearing such sentiments. He has worked on Cog for seven years and seen a lot of people behave as though smitten with his robot and frustrated that it will not talk with them. He uses the first-encounters study for an experiment in what he considers “responsible pedagogy.” Thirty of the children in our study participate in a special session during which Scassellati demystifies Cog. One by one, Scassellati disables each element of Cog’s intelligence and autonomy. A robot that began the session able to make eye contact and imitate human motion ends up a simple puppet—the boy Pinocchio reduced to wood, pins, and string.

So later that day, Scassellati “debriefs” Brooke and Andrea. He shows the sisters what Cog sees on its vision monitors and then covers its “eyes”—two cameras for close vision, two for distance vision—and the girls watch the four monitors go blank, one after another. They are given a computer mouse that controls Cog’s movement and they get to “drive” it.

Together, the sisters direct Cog’s eyes toward them. When Cog “sees” them, as evidenced by their appearance on its vision monitors, the quiet, didactic tone of the debriefing breaks down. Brooke screams out, “He’s looking at us” and the carefully built-up sense of Cog as mechanism is gone in a flash. Even as the girls control the robot as though it were a puppet, they think back to the more independent Cog and are certain that it “likes” looking at them.

As Scassellati proceeds with this debriefing, he tries to demonstrate that Cog’s “likes and dislikes” are determined by its programming. He shows the girls that what has Cog’s attention appears in a red square on a computer screen. They can control what gets into the square by changing what its program interprets as being of the highest value. So, for example, Cog can be told to look for red things and skin-colored things, a combination that would have Cog looking for a person with a red shirt.
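
The mechanism Scassellati demonstrates (weighted “preferences” that decide what lands in the red square) can be sketched as a simple salience score over regions of the camera image. The feature names and weights below are my own assumptions, not the lab’s code; they only show why a red shirt would win Cog’s attention, and why a toy that is half black would score lower than one that is mostly red, as in the episode that follows.

```python
# A hypothetical salience calculation in the spirit of what the children are shown:
# each candidate region in the camera image gets a weighted score, and the region
# with the highest score is drawn in the "red square." Weights are mine, not Cog's.

WEIGHTS = {"red": 0.5, "skin_tone": 0.4, "motion": 0.1}

def salience(region_features: dict) -> float:
    """region_features holds fractions in [0, 1], e.g. how much of the region is red."""
    return sum(WEIGHTS[name] * region_features.get(name, 0.0) for name in WEIGHTS)

regions = {
    "red_shirt":       {"red": 0.9, "skin_tone": 0.1, "motion": 0.3},
    "caterpillar_toy": {"red": 0.4, "skin_tone": 0.0, "motion": 0.6},
    "mickey_mouse":    {"red": 0.5, "skin_tone": 0.0, "motion": 0.0},  # half black, half red
}

focus = max(regions, key=lambda name: salience(regions[name]))
print(focus, {name: round(salience(feats), 2) for name, feats in regions.items()})
# With these made-up weights, the red shirt "wins" Cog's attention; raising the weight
# on "red" or lowering it on "skin_tone" changes what ends up in the square.
```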

Despite this lesson, the sisters refer to the red square as “the square that says what Cog likes,” and Brooke is joyful when Cog turns toward her hand: “Yep, he likes it.” They try to get Cog’s interest with a multicolored stuffed caterpillar, which, to their delight, makes it into Cog’s red square as well. Cog also likes Brooke’s leg. But she is troubled that Cog does not like a Mickey Mouse toy. On one hand, she understands that Cog’s lack of interest is due to Mickey’s coloration, half black and half red. The black is keeping Mickey from being registered as a favorite. “I see,” says Brooke, “Mickey is only half red.” But she continues to talk as though it is within Cog’s power to make Mickey a favorite. “I really want Cog to like Mickey. I like Mickey. Maybe he’s trying to like Mickey.”

The children imbue Cog with life even when being shown, as in the famous scene from the Wizard of Oz, the man (or, in this case, the machines) behind the magic. Despite Scassellati’s elegant explanations, the children want Cog to be alive enough to have autonomy and personality. They are not going to let anyone take this away. Scassellati’s efforts to make the robot “transparent” seem akin to telling someone that his or her best friend’s mind is made up of electrical impulses and chemical reactions. Such an explanation is treated as perhaps accurate but certainly irrelevant to an ongoing relationship.

Scassellati is concerned that Cog’s lifelike interface is deceptive; most of his colleagues take a different view. They want to build machines that people will relate to as peers. They don’t see lifelike behaviors as deceptions but as enablers of relationship. In The Republic, Plato says, “Everything that deceives may be said to enchant.”10 The sentiment also works when put the other way around. Once Cog enchants, it is taken as kin. That which enchants, deceives.

Children have met this idea before; it is a fairy tale staple. More recently, in the second volume of the Harry Potter series, a tale of young wizards in training, Harry’s friend Ginny Weasley falls under the spell of an interactive diary. She writes in it; it writes back. It is the wizarding version of the ELIZA program. Even in a world animated by living objects (here, people in photographs get to move around and chat), a caution is served. Ginny’s father, himself a wizard, asks, “Haven’t I taught you anything? What have I always told you? Never trust anything that can think for itself if you can’t see where it keeps its brain.”11 But, of course, it is too late. When something seems to think for itself, we put it in the category of “things we form relationships with.” And then we resist having information about mechanisms—or a detail such as where it keeps its brain—derail our connection. Children put Cog in that charmed circle.

When Scassellati turns Cog into a limp puppet, showing where Cog “keeps its brain,” children keep the autonomous and responsive Cog in mind. They see Cog’s malfunctions as infirmities, reasons to offer support. Part of complicity is “covering” for a robot when it is broken. When Cog breaks its arm, children talk about its “wounds.” They are solicitous: “Do you think it needs some sort of, well, bandage?”

BUILDING A THOU THROUGH A FACE AND A VOICE

As with Cog, children will describe a “buggy” Kismet as sick or needing rest. So, on days when Kismet does not speak, children talk to the “deaf” Kismet and discuss how they will chat with it when it “gets better.” Robyn, nine, is chatting with an expressive and talkative Kismet that suddenly goes mute and immobile. Robyn’s reaction: “He is sleeping.”

Sometimes children weave complex narratives around Kismet’s limitations. Lauren, ten, gets into a happy rhythm of having Kismet repeat her words. When Kismet begins to fail, Lauren likens the robot’s situation to her own. It is not always possible to know what Kismet is learning just from watching “what is happening on the outside,” just as we cannot observe what is happening inside of her as she grows up. Despite its silence, Lauren believes that Kismet is growing up “inside.” Lauren says that Kismet is “alive enough” to have parents and brothers and sisters, “and I don’t see them around here.” Lauren wonders if their absence has caused Kismet to fall silent.

Fred, eight, greets Kismet with a smile and says, “You’re cool!” He tells us that he is terrorized by two older brothers whose “favorite pastime is to beat me up.” A robot might help. He says, “I wish I could build a robot to save me from my brothers.... I want a robot to be my friend.... I want to tell my secrets.” Fred stares intently into Kismet’s large blue eyes and seems to have found his someone. In response to Fred’s warm greeting, Kismet vocalizes random sounds, but Fred hears something personal. He interprets Kismet as saying, “What are you doing, Rudy [one of Fred’s brothers]?” Fred is not happy that Kismet has confused him with one of his roughhousing brothers and corrects Kismet’s error. “I’m Fred, not Rudy. I’m here to play with you.” Fred is now satisfied that Kismet has his identity squared away as the robot continues its soft babble. Fred is enchanted by their interchange. When Fred presents a dinosaur toy to Kismet, it says something that sounds like “derksherk,” which Fred interprets as Kismet’s pronunciation of dinosaur. During one back-and-forth with Kismet about his favorite foods, Fred declares victory: “See! It said cheese! It said potato!”

When Kismet sits in long silence, Fred offers, “Maybe after a while he gets bored.” When Kismet shows no interest in its toys, Fred suggests, “These toys probably distract Kismet.” At this point, the research team explains Kismet’s workings to Fred—the Kismet version of Scassellati’s “Cog demystification” protocol. We show Fred the computer monitor that displays what Kismet is “hearing.” Fred, fascinated, repeats what he sees on the monitor, hoping this will make it easier for Kismet to understand him. When this strategy doesn’t prompt a response, Fred blames Kismet’s bad hearing. But in the end, Fred concludes that Kismet has stopped talking to him because it likes his brothers better. Fred would rather feel rejected than see Kismet as a less than adequate relational partner.

Amber, six, also fights to keep Kismet alive enough to be a friend. On the day Amber visits MIT, Kismet’s face is expressive but its voice is having technical difficulty. The young girl, unfazed, attends to this problem by taking Kismet’s part in conversation. So, Amber engages Kismet with a toy and asks Kismet if she is happy. When Kismet doesn’t answer, Amber answers for it with a hearty “Yep!”

When after many minutes, Kismet haltingly begins to speak, Amber’s response is immediate: “He likes me!” Now, Kismet babbles, and Amber interprets. The young girl says aloud what Kismet meant to say and then engages in a conversation with Kismet based on her interpretation. Before leaving Kismet, Amber tries hard to have the robot say, “I love you.” After a half dozen prompts, Kismet says something close enough. Amber thanks Kismet, says, “I love you too,” and kisses the robot good-bye.

In some ways, Amber’s time with Kismet resembles play with a traditional doll, during which a child must “fill in” both sides of the interaction. But even at its worst, Kismet gives the appearance of trying to relate. At its best, Kismet appears to be in continuous, expressive conversation. As with Cog, Kismet’s failures can be interpreted as disappointments or rejections—very human behaviors. Your Raggedy Ann doll cannot actively reject you. When children see a sociable robot that does not pay attention to them, they see something alive enough to mean it.

BUILDING A THOU BY CARING

Children try to get close to Cog and Kismet by tending them. Children ask the robots how they are feeling, if they are happy, if they like their toys. Robyn, the nine-year-old who imagines Kismet asleep when it mysteriously stops speaking, thinks that the robot is alive because “it talks and moves like a person.” When Kismet develops problems, Robyn wants to take it home to “feed it and give it water to drink so that it wouldn’t die; I would give it a Tylenol if it felt sick and I would make Kismet his own room.” The room, Robyn explains, would have a television on which Kismet could “see other robots so it wouldn’t miss its family and friends.”

As children see it, they teach the robots, and the robots appreciate it, even if they are imperfect pupils. Over half the children in the first-encounters study say, unprompted, that they love the robots and the robots love them back. From those who don’t speak of love, there is still talk about Cog and Kismet having made a “good effort” during their lessons. When children congratulate the robots one hears something akin to parental pride. When the robots succeed, in even the smallest thing, children take credit and present each success as evidence that their own patience has borne fruit. During our study the robots’ performance is subpar. But the children’s investment—their desire, connection, and pride—makes the sessions sparkle.

This is clear in the relationship that Neela, eleven, forms with Cog. When Neela first sees Cog, she exclaims, “Oh, it’s so cute!” and then explains, “He has such innocent eyes, and a soft-looking face.” After teaching the robot to balance a stuffed caterpillar on its arm, she says, “I could never get tired of Cog… . It’s not like a toy because you can’t teach a toy; it’s like something that’s part of you, you know, something you love, kind of like another person, like a baby.” When Cog raises its arm, Neela says, “I wonder what he’s thinking?” She asks, “What do you want?” “What do you like?” When Cog hesitates in his play—for example, when he is slow to raise his arm in response to her actions—Neela never uses a mechanical explanation for Cog’s trouble. Her reasoning is always psychological. She says that Cog reminds her of the “slow kids” in her class and she is sympathetic. “He’s slow—it takes him a while to run through his brain.” And she wants to help. “I want to be its friend, and the best part of being his friend would be to help it learn.... In some ways Cog would be better than a person-friend because a robot would never try to hurt your feelings.” (This is an eleven-year-old’s version of the comment made by the graduate student who wanted a robot boyfriend.) For Neela, a silent Cog is simply disabled: “Being with Cog was like being with a deaf or blind person because it was confused, it didn’t understand what you were saying.” In fact, Neela says that Cog does “see”—just not very well during her visit. To compensate, Neela treats the robot as a person having a bout of temporary blindness. “I was just like, ‘Hello!’ because a blind person would have to listen.” Neela hopes that Cog will get over its problems or that “he might grow out of it.... He’s very young you know.”

Neela has recently arrived from India and is having trouble fitting in at school. She explains that a group of girls seemed to accept her but then made fun of her accent: “Girls are two-faced. They say they like you and then they don’t. They can’t make up their mind.” Cog poses fewer risks. At school the girls who taunted her finally begged her forgiveness, but Neela hasn’t been able to accept their apology. In this last regard, “Cog could be a better friend than a person because it is easier to forgive.... It’s easier to forgive because it doesn’t really understand.” Recall that Neela speaks of Cog as “part of you … something you love.” This is love safe from rejection. Like any object of love, the robot becomes “part of you.” But for Neela, Cog, unlike a person, does not have enough independence to hurt you. In Neela’s feelings for Cog we see how easily a robot can become a part object: it will meet our emotional needs because we can make it give us what we want. Is this an object for our times? If so, it is not an object that teaches us how to be with people.

Some children, particularly with Kismet, explicitly put themselves in the role of sibling or parent. In either of these roles, the relationship with Kismet may become a place to reenact the tensions in a family, something we have already seen with AIBO and My Real Baby. In the pursuit of Kismet, brothers come to blows and sisters bitterly compete. And efforts to parent Kismet can be a critique of what goes on at home. Rain, ten, lives with her mother and is preoccupied by her father’s absence. She explains that she would never abandon Kismet: “My father doesn’t live at home; he moved away. If Kismet came to live with me, I would never move away, ever. I would leave him juice every morning. I would make him a comfortable bed. And I would teach it to really talk, not just the little bit it knows now.” There is much similarity between this kind of talk and what happens in a therapist’s office when children act out their conflicts with their dolls. A doll can let you vent feelings, enjoy imaginary companionship, and teach you what is on your mind. But unlike dolls, these robots “push back.” Children move beyond using the robot to relive past relationships. They hope for a relationship with the robot in the real.

Madison, nine, works with Kismet on a day when the robot is at its best. Its emotive face is responsive and appropriate. It remembers words and repeats them back in humanlike cadences. The result looks like Madison is speaking earnestly to someone whose inflection and tone make her feel perfectly understood.

Madison asks Kismet questions in a gentle and soft-spoken manner, “What is your name? Do you have parents?” Kismet responds warmly. Encouraged, Madison continues. “Do you have brothers and sisters?” Kismet moves its head in a way that suggests to Madison that the answer is yes. Madison tells us that Kismet is a little girl (she was “born from a stomach”), but a new kind of little girl. And like any baby, “she” doesn’t know when “her” birthday is. Madison wants to be “her” good parent. “Do you like ice cream?” Madison asks, and when Kismet quietly responds to this question, the two go on to discuss ice cream flavors, favorite colors, and best toys.

Madison begins to dangle one toy after another in front of Kismet’s face, laughing at its changing expressions. Madison tells Kismet that some of the girls in her school are mean; she says that Kismet is nicer than they are. Kismet looks at Madison with interest and sounds encouraging. In this warm atmosphere, Madison tells Kismet that she looks forward to introducing the robot to her baby sister. Playing with her sister, Madison says, is her favorite thing to do, and she expects Kismet will feel the same way. Kismet nods and purrs happily. Again, projection onto an object becomes engagement with a subject; Rorschach gives way to relationship.

Madison believes that Kismet learns from every child who comes to play. But you can’t be impatient. “Babies learn slowly,” she offers. Like a baby, Kismet, too, will learn over time. “I taught Kismet to smile,” Madison says. “[Kismet] is still little, but it grows up.” To justify this claim, Madison, like Lauren, distinguishes between what you can see of a child’s learning and what is hidden from view: “You can’t always tell what babies are learning by looking at them on any day.” The same is true for Kismet. Kismet is learning “inside” even if we can’t see it. A mother knows her child has secrets.

In the hour she plays with Kismet, Madison becomes increasingly happy and relaxed. Watching girl and robot together, it is easy to see Kismet as increasingly happy and relaxed as well. Child and robot are a happy couple. It is almost impossible not to see Madison as a gratified mother and Kismet as a content child. Certainly, Kismet seems to prefer Madison to the children who have visited with it earlier that day. For me, their conversation is one of the most uncanny moments in the first-encounters study, stunning in its credibility because Kismet does not know about ice cream flavors, baby sisters, or mean girls. Kismet does not like Madison; it is not capable of liking anything or anybody.

BUILDING A THOU IN DISAPPOINTMENT AND ANGER

The children in the study care about having the robots’ attention and affection far more than I anticipated. So their interpretation of robot malfunctions as illness is ingenious; they can walk away without feeling dismissed. But the most vulnerable children take disappointments with a robot very personally. The children most upset by a robot’s indifference are those who feel least tended to. They seem almost desperate for Kismet and Cog to recognize and respond to them. Since the children in our study come from a wide range of backgrounds, some tell us that the snack they get during their session at MIT is the best meal of their day. Some find ways to make it clear that their time at MIT is the most attention they have received that week. Children from affluent as well as economically disadvantaged homes talk about parents they rarely see. When these children interpret robotic technical limitations as rejection, they become withdrawn, depressed, or angry. Some take foolish chances.

My field notes taken after one session with Kismet describe a conversation with the junior members of my research team, two college seniors and two graduate students: “Emergency meeting with team after session with Estelle. Disappointment with Kismet provokes her binge eating, withdrawal. Team feels responsible. How to handle such children? What did child want? A friend? A future?” My team meets at a local coffee shop to discuss the ethics of exposing a child to a sociable robot whose technical limitations make it seem uninterested in the child.

We have spent the afternoon with twelve-year-old Estelle, who had seen the flyer describing our work on the bulletin board of her after-school center: “Children wanted for study. Meet MIT Robots!” She brought it to her counselor and asked to participate. Estelle tells us that she “stood over my counselor while she called MIT.” Estelle has taken special care with her appearance in preparation for her day with us. She is dressed in her best clothes, her hair brushed to a fine polish. As soon as we pick her up, Estelle talks nonstop about “this wonderful day.” She has never been to MIT, but she knows it is a “very important place.” No one in her family has been to college. “I am the first one to go into a college … today.”

On the day of Estelle’s visit, Kismet engages people with its changing facial expressions but is not at its vocal best. We explain Kismet’s technical problems to Estelle, but nonetheless, she makes every effort to get Kismet to speak. When her efforts bear no fruit, Estelle withdraws, sullen. She goes to the room where we interview children before and after they meet the robots. There we have set out some simple snacks. Estelle begins to eat, not stopping until we finally ask her to leave some of the crackers, cookies, and juice boxes for other children. She briefly stops eating but begins again as we wait for the car service that will bring her back to the after-school program. She tells us that the robot does not like her. We explain this is not the case. She is unappeased. From her point of view, she has failed on her most important day. As Estelle leaves, she takes four boxes of cookies from our supply box and puts them into her backpack. We do not stop her. Exhausted, we reconvene to ask ourselves a hard question: Can a broken robot break a child? We would not consider the ethics of having children play with a damaged copy of Microsoft Word or a torn Raggedy Ann doll. But sociable robots provoke enough emotion to make this ethical question feel very real.

The question comes up again with Leon, twelve. Timid and small for his age, Leon usually feels like the odd man out. In Cog, Leon sees another figure who “probably doesn’t have a lot of friends,” and Leon says they have a good chance to connect. But, like Estelle, Leon has not come to the laboratory on a good day. Cog is buggy and behaves as though bored. The insecure child is quick to believe that the robot is not interested in him. Leon had been shown Cog’s inner workings, and Scassellati gently reminds Leon that Cog’s “interests” are set by people adjusting its program. Leon sees the monitor that reflects these preset values, but he insists that “Cog doesn’t really care about me.” He explodes in jealousy when he sees Cog looking at a tall, blond researcher, even as Scassellati points to the researcher’s red T-shirt, the true lure that mobilizes Cog’s attention. Leon cannot focus. He insists that Cog “likes” the researcher and does not like him. His anxieties drive his animation of the robot.

Now Leon embarks on an experiment to determine whether Cog cares about him. Leon lifts and then lowers his arm and waits for Cog to repeat what he has done. Cog lifts its arm and then, as the robot’s arm moves down, Leon puts his head directly in its path. This is a love test: if Cog stops before hitting him, Leon will grant that Cog cares about him. If the falling arm hits Leon, Cog doesn’t like him. Leon moves swiftly into position for the test. We reach out to stop him, appalled as the child puts his head in harm’s way. Cog’s arm stops before touching Leon’s head. The researchers exhale. Leon is jubilant. Now he knows that Cog is not indifferent. With great pleasure, he calls out “Cog!” and the robot turns toward him. “He heard me! He heard me!”

After Leon has been with Cog for about an hour, the boy becomes preoccupied with whether he has spent enough time with Cog to make a lasting impression. His thoughts return to the tall blond researcher who “gets to be with Cog all the time.” Leon is sure that Cog is in love with her. Leon chides her: “He keeps looking at you. He is in love with you.” Leon then settles on a new idea: “Cog is a boy and so obviously likes girls more than boys.” This at least is a reason why he doesn’t stand a chance here. Leon wonders whether he might have more success with Kismet, which the children usually see as a female because of its doll eyes, red lips, and long eyelashes.

Most children find a way to engage with a faltering robot, imagining themselves as parents or teachers or healers. But both Estelle and Leon became depressed when they were not “recognized.” Other frustrated children persevere in anger. Edward, six, is small for his age. What he lacks in size he makes up for in energy. From the start, he announces that he wants to be the “best at everything about the robots.” His father tells us that at home and at school, Edward likes to be “in charge.” He plays rough and gets into fights. With no prologue, Edward walks up to Kismet and asks, “Can you talk?” When Kismet doesn’t answer, Edward repeats his question at greater volume. Kismet stares into space. Again, Edward asks, “Can you talk?” Now, Kismet speaks in the emotionally layered babble that has delighted other children or puzzled them into inventive games. This is not Edward’s reaction to this winsome speaker of nonsense. He tries to understand Kismet: “What?” “Say that again?” “What exactly?” “Huh? What are you saying?” After a few minutes, Edward decides that Kismet is making no sense. He tells the robot, “Shut up!” And then, Edward picks up objects in the laboratory and forces them into Kismet’s mouth—first a metal pin, then a pencil, then a toy caterpillar. Edward yells, “Chew this! Chew this!” Absorbed by hostility, he remains engaged with the robot.

Shawn, six years older than Edward, has a similar reaction. He visits the lab with his two younger brothers on whom he rains insults as they all wait to visit the robots. When Shawn meets Kismet, he calms down, and his tone is friendly: “What’s your name?” But when Kismet is silent, Shawn becomes enraged. He covers the cameras that serve as Kismet’s eyes and orders, “Say something!” Kismet remains silent. Shawn sits silently too, staring at Kismet as though sizing up an opponent. Suddenly, he shouts, “Say, ‘Shut up!’ Say, ‘Shut up!’” “Say, ‘Hi!’ … Say, ‘Blah!’” The adults in the room are silent; we gave the children no rules about what they could and could not say. Suddenly, Kismet says, “Hi.” Shawn smiles and tries to get Kismet to speak again. When Kismet does not respond, Shawn forces his pen into Kismet’s mouth. “Here! Eat this pen!” Shawn, like Edward, does not tire of this exercise.

One way to look at Estelle and Leon, Edward and Shawn is to say that these children are particularly desperate for attention, control, and a sense of connection. And so, when the robots disappoint, they are more affected than other children. Of course, this is true. But this explanation puts the full burden on the children. Another way to look at their situation puts more of the burden on us. What would we have given to these children if the robots had been in top form? In the cases of Edward and Shawn, we have two “class bullies,” the kids everyone is afraid of. But these boys are lonely. As bullies, they are isolated, often alone or surrounded by children who are not friends but whom they simply boss around. They see robots as powerful, technological, and probably expensive. It is exciting to think about controlling something like that. For them, a sociable robot is a possible friend—one that would not ask for too much in return and would never reject them, but in whom they might confide. But like the insecure Estelle and Leon, these are the children who most need relationships that will model mutuality, where control is not the main thing on the table. Why do we propose machine companionship to them in the first place? From this perspective, problems aren’t limited to when the robots break down. Vulnerable children are not helped even when the robots are doing just fine.

AGAIN, ON AN ETHICAL TERRAIN

In the robot laboratory, children are surrounded by adults talking to and teaching robots. The children quickly understand that Cog needs Brian Scassellati and Kismet needs Cynthia Breazeal. The children imagine Scassellati and Breazeal to be the robots’ parents. Both are about to leave the Artificial Intelligence laboratory, where they have been graduate students, and move on to faculty positions.

Breazeal will be staying at MIT but leaving the AI Lab for the Media Lab. The two are down the street from each other, but the tradition of academic property rights demands that Kismet, like Cog, be left behind in the laboratory that paid for its development. The summer of the first-encounters study is the last time Breazeal will have access to Kismet. Breazeal describes a sharp sense of loss. Building a new Kismet will not be the same. This is the Kismet she has “raised” from a “child.” She says she would not be able to part with Kismet if she weren’t sure it would remain with people who would treat it well.

It comes as no surprise that separation is not easy for Breazeal; more striking is how hard it is for those around Kismet to imagine the robot without her. A ten-year-old who overhears a conversation among graduate students about how Kismet will remain in the lab quietly objects, “But Cynthia is Kismet’s mother.”12 Watching Breazeal interact with Kismet, one does sense a maternal connection, one that Breazeal describes as “going beyond its being a mere machine.” She knows Kismet’s every move, and yet, she doesn’t. There are still surprises that delight. Her experience calls to mind a classic science fiction story by Brian Aldiss, “Supertoys Last All Summer Long,” best known through its movie adaptation, the Steven Spielberg film A.I.: Artificial Intelligence.13 In A.I., scientists build a humanoid robot, David, who is programmed to love. David expresses his love to a woman, Monica, who has adopted him as her child.

The pressing issue raised by this film is not the potential reality of a robot that “loves”—we are far from building anything like the robot David—but how Monica’s feelings come about. Monica is a human being who responds to a machine that asks for nurturance by caring for it. Her response to a robot that reaches out to her is confusion mixed with love and attachment.

It would be facile to make a simple analogy between Breazeal’s situation and that of Monica in A.I., but Breazeal is, in fact, one of the first people to have one of the signal experiences in that story—sadness caused by separation from a robot to which one has formed an attachment based on nurturance. At issue here is not Kismet’s achieved level of intelligence but Breazeal’s journey: in a very limited sense, Breazeal “brought up” Kismet. But even that very limited experience provokes strong emotion. Being asked to nurture a machine constructs us as its parents. This new relationship creates its own loop, drawing us into the complicities that make it possible. We are asked to nurture. We want to help. We become open to playing along, willing to defer to what the robot is able to do.

In fiction and myth, human beings imagine themselves “playing God” and creating new forms of life. Now, in the real, sociable robots suggest a new dynamic. We have created something that we relate to as an “other,” an equal, not something over which we wield godlike power. As these robots get more sophisticated—more refined in their ability to target us—these feelings grow stronger. We are drawn by our humanity to give to these machines something of the consideration we give to each other. Because we reach for mutuality, we want them to care about us as we care for them. They can hurt us.

I noted earlier the chilling credibility of the interaction between Madison and Kismet and the desperation of children who seem to need these robots too much. Cog and Kismet are successful in getting children to relate to them “for real.” It is the robots’ success that gives me pause, as does the prospect of “conversations” between the most needy among us—the disadvantaged young, the deprived elderly, the emotionally and physically disabled—and ever more lifelike sociable robots. Roboticists want us to consider a “best-case” scenario in which robotic companions serve as mentors, first steps toward more complex encounters. Even My Real Baby was marketed as a robot that could teach your child “socialization.” I am skeptical. I believe that sociable technology will always disappoint because it promises what it cannot deliver. It promises friendship but can only deliver performances. Do we really want to be in the business of manufacturing friends that will never be friends?

Roboticists will argue that there is no harm in people engaging in conversations with robots; the conversations may be interesting, fun, educational, or comforting. But I find no comfort here. A machine taken as a friend demeans what we mean by friendship. Whom we like, who likes us—these things make us who we are. When Madison felt joyful in Kismet’s “affection,” I could not be glad. I felt in the shadow of an experiment, just beginning, in which humans are the subjects.

Even now, our excitement about the possibilities for robot/human interaction moves us to play fast and loose with our emotions. In one published experiment, two young children are asked to spend time with a man and a robot designed to be his clone.14 The experiment has a significant backstory. Japanese roboticist Hiroshi Ishiguro built androids that duplicate himself, his wife, and his five-year-old daughter. The daughter’s first reaction when she saw her android clone was to flee. She refused to go near it and would no longer visit her father’s laboratory. Years later, when the daughter was ten, a group of psychologists designed a study in which this girl and a four-year-old boy (a child of one of the researchers) were asked to interact with both Ishiguro and his android double. Both children begin the study reluctant to interact with the android. Then, both (by measures such as “makes eye contact” and “speaks”) become willing to engage almost equally with the man and with the robot. Ishiguro’s daughter is finally able to sit in a room alone with her father’s android clone. It is hard to know how to comment on this narrative of a frightened child who makes ever-fainter objections to her part in this experiment. It seems to have little in it that is positive. Yet, the authors use this narrative as evidence of success: children will be open to humanlike robots as teachers, babysitters, and companions. But what could it mean to this child to sit with her father’s machine double? What could she want from it? Why does it matter that she is finally willing to make eye contact and speak with it? Why would we want her to? It is easy to become so immersed in technology that we ignore what we know about life.