Alone Together: Why We Expect More from Technology and Less from Each Other - Sherry Turkle (2011)
Part I. The Robotic Moment
Chapter 7. Communion
A handsome twenty-six-year-old, Rich, in dress shirt and tie, comes to call on Kismet. Rich is being taped with Kismet as part of a study to determine how well the robot manages adult “conversation.” Rich sits close to Kismet, his face directly across from the robot. He is not necessarily expecting much and engages in a spirit of good humor and curiosity.
Rich: I like you, Kismet. You’re a pretty funny person.
Kismet: [nods and smiles in assent and recognition]
Rich: Do you laugh at all? I laugh a lot.
At first, the conversation between Rich and Kismet shows a bit of the ELIZA effect: Rich clearly wants to put the robot in its best light. Like the children who devote themselves to getting Kismet to say their names, Rich shows Kismet the courtesy of bending to what it does best. Rich seems to play at “gaming” the program, ramping up the illusion to the point that he can imagine believing it.
But with the emotionally expressive Kismet, it is easy for Rich to find moments when he senses the possibility of “more.” They can pass quickly, and this “more” is ill defined. But one moment, Rich plays at a conversation with Kismet, and the next, he is swept up in something that starts to feel real. He begins to talk to Kismet about his girlfriend Carol, and quickly things get personal. Rich tells Kismet that his girlfriend enjoys his laughter and that Rich tries not to laugh at her. When Kismet laughs and seems interested, Rich laughs as well and warms up: “Okay. You’re adorable. Who are you? What are you?”
Rich is wearing a watch that Carol recently bought for him, and he shows it off to Kismet and asks for an opinion. Rich admits that the week before, he almost lost the watch.
Rich: I want to show you something. This is a watch that my … this is a watch that my girlfriend gave me.
Kismet: [babbles with interest and encouragement; looks down to the watch]
Rich: Yeah, look, it’s got a little blue light in it too.... You like it? I almost lost it this week.
When Kismet’s reaction to all of this girlfriend talk is to sound shy, deferent, and sympathetic, Rich seems to play with the notion that this robot could be an interested party. He’s enjoying himself. And when the robot responds a bit out of turn and in a low come-hither tone, Rich loses his footing and abandons himself to their exchange. His interaction with Kismet becomes decidedly flirtatious.1 Kismet can mimic human prosody, so when Rich becomes intimate in his tone, so does the robot. The two could easily be at a cocktail party or at a bar.2
Rich: Do you know what it’s like to lose something?
Kismet: [nods with assent; sounds warm in its interest]
Rich: You are amazing.
At this point, Kismet appreciatively repeats something close to the word “amazing.” Rich, smitten, now seems to operate within an inchoate fantasy that he might want something from this robot; there is something here for him. During their exchanges, when Kismet glances away from him, Rich moves to the side and gestures to the robot to follow him. At one point the robot talks over him and Rich says, “No, stop. No, no, no stop. Listen to me. Listen to me. I think we have something going. I think there’s something here between us.”
Indeed, something is going on between them. As Rich tries to leave, Kismet will not be put off and holds Rich back with a persuasive purr. Rich flirts back and tries to catch Kismet’s gaze. Successful, Kismet’s eyes now follow Rich. When Kismet lowers its eyes, suddenly “shy,” Rich does not want to let go. We are at a moment of more. Who is leading and who is following in this dance? As in a moment of romantic encounter, one loses track and discovers a new rhythm where it doesn’t matter; each animates and reanimates the other. Rich senses that he has lost control in a way that pleases him. He steps in with a raised finger to mark the moment:
Rich: Stop, you’ve got to let me talk. Shh, shh, shh …
Kismet: [sounds happy, one might say giggly, flattered]
Rich: Kismet, I think we’ve got something going on here. You and me … you’re amazing.
Rich, dazzled, asks again, “What are you?” Parting comes next—but not easily. There is an atmosphere of sweet sorrow, equally distributed.
Rich: Bye [regretful].
Kismet: [purrs in a warm tone]
Rich: Bye [in a softer, lower tone].
Kismet: [makes low “intimate” sounds]
Rich: Okay … all right.
Finally, Rich gives up. He is not leaving. He says to Kismet, “You know what? Hang on a second. I still want to talk to you; I’ve got a couple of things I want to say to you.” The video ends with Rich staring at Kismet, lost in his moment of more.
In this encounter we see how complicity gratifies by offering a fantasy of near communion. As our relationships with robots intensify, we move from wonder at what we have made to the idea that we have made something that will care for us and, beyond that, be fond of us. And then, there is something else: a wish to come ever closer to our creations—to be somehow enlivened by them. A robotic body meets our physicality with its own. A robot’s gaze, face, and voice allow us to imagine a meeting of the minds.
A MOMENT OF MORE: THE DANCER AND THE DANCE
In our studies, children imagined that Cog and Kismet were alive enough to evolve. In one common fantasy, they would have offspring with Cog’s body and Kismet’s face. Only a few years later, Cog and Kismet have direct heirs, new robots built by graduate students who were junior members of the Cog and Kismet teams. One of them is Domo, designed by Aaron Edsinger. It has a vastly improved version of Kismet’s face, speech, and vision—this robot really can have a conversation—and a vastly improved version of Cog’s body. Domo makes eye contact, shows expression, and follows human motion. Its grasp has a humanlike resistance. Cog mirrored human motion, but Domo knows how to collaborate.
Domo is designed to provide simple household help for the elderly or disabled.3 I visit the robot on a day when Edsinger is “teaching” it to perform simple actions: to recognize objects, throw a ball, shelve groceries. But as is the case with all the MIT sociable robots, when one spends time with Domo, its effects transcend such down-to-earth intentions. Even technically sophisticated visitors describe a moment when Domo seems hesitant to release their hand. This moment could be experienced as unpleasant or even frightening—as contact with a robot out of control. Instead, people are more likely to describe it as thrilling. One feels the robot’s attention; more than this, one senses the robot’s desire. And then, of course, one lectures oneself that the robot has none.
For Edsinger, this sequence—experiencing Domo as having desires and then talking himself out of the idea—becomes familiar. For even though he is Domo’s programmer, the robot’s behavior has not become dull or predictable. Working together, Edsinger and Domo appear to be learning from each other. When Edsinger teaches Domo to hand him a ball or put an object into a cup, their simple actions read as an intimate ballet. They seem to be getting closer.
Edsinger extends his hand and asks for a ball. “Domo, give it,” he says softly. Domo picks up a ball and makes eye contact. “Give it,” the robot says and gently puts the ball in Edsinger’s hand. Edsinger asks Domo to place a carton of milk on a shelf: “Domo, shelf.” Domo repeats the instructions and complies. Edsinger asks, “How are things going, Domo?” Domo says, “Okay,” as he follows new instructions to shelve a bag of ground coffee and moves on to pouring salad dressing into a cup. “Domo, give it,” says Edsinger, and Domo hands Edsinger the salad dressing.
Just as the children crowded around Cog to attach toys to its arms, shoulders, and back, seeking physical involvement, Edsinger works close to his robot and admits he enjoys it:
Having physical contact—being in the robot space—it’s a very rich interaction when you are really, really engaged with it like that. Once Domo is trying to reach for a ball that I’m holding and something is wrong with his control. The arms are kind of pushing out and I’m grabbing the arms and pushing them down and it’s like a kid trying to get out of something; I feel physically coupled with Domo—in a way very different from what you could ever have with a face on a screen.... You definitely have the sense that it wants this thing and you’re trying to keep it from doing what it wants. It’s like a stubborn child. The frustration—you push the arm down and it stops and it tries again.... It takes on a very stubborn child quality. I’ve worked on Kismet. I’ve worked on Cog. All these other robots … none of them really have that sort of physical relationship.
Edsinger notes that people quickly learn how to work with Domo in a way that makes it easier for the robot to perform as desired. He reminds me that when we share tasks with other people, we don’t try to trip each other up—say, by handing each other cereal boxes at funny angles. We try to be easy on each other. We do the same with Domo. “People,” says Edsinger, “are very perceptive about the limitations of the person they’re working with or the robot they’re working with … and so if they understand that Domo can’t quite do something, they will adapt very readily to that and try and assist it. So robots can be fairly dumb and still do a lot if they’re working with a person because the person can help them out.”
As Domo’s programmer, Edsinger explicitly exploits the familiar ELIZA effect, that desire to cover for a robot in order to make it seem more competent than it actually is. In thinking about Kismet and Cog, I spoke of this desire as complicity. Edsinger thinks of it as getting Domo to do more “by leveraging the people.” Domo needs the help. It understands very little about any task as a whole. Edsinger says, “To understand something subtle about a person’s intent, it’s really going to be hard to put that in the robot.” What Domo can do, says Edsinger, is “keep track of where a person is and ask, ‘Am I looking at a person reaching in the direction of my gaze?’—stuff like that. There’s no model of the person.” And yet, Edsinger himself says he experiences Domo as almost alive—almost uncomfortably so. For him, much of this effect comes from being with Domo as it runs autonomously for long periods—say, a half hour at a time—rather than being constrained, as he was on earlier projects, to try out elements of a robot’s program in thirty-second intervals. “I can work with Domo for a half hour and never do the exact same thing twice,” he says.4 If this were said about a person, that would be a dull individual indeed. But by robotic standards, a seemingly unprogrammed half hour enchants.
Over a half hour, says Edsinger, Domo “moves from being this thing that you flip on and off and test a little bit of to something that’s running all the time.... You transition out of the machine thing to thinking of it as not so much a creature but as much more fluid in terms of being … [long hesitation] Well, you start to think of it as a creature, but this is part of what makes the research inherently uncomfortable. I enjoy that. That’s part of the reason I like building robots.”
Thrilled by moments when the “creature” seems to escape, unbidden, from the machine, Edsinger begins to think of Domo’s preferences not as things he has programmed but as the robot’s own likes and dislikes.5 He says,
For me, when it starts to get complicated … sometimes I know that the robot is not doing things of its own “volition” because these are behaviors, well, I literally put them in there. But every now and then … the coordination of its behaviors is rich enough … well, it is of its own volition … and it catches you off guard. And to me this is what makes it fun … and it happens to me more and more now that I have more stuff running on it… .
If it doesn’t know what to do, it will look around and find a person. And if it can’t find a person, it looks to the last place [it] saw a person. So, I’ll be watching it do something, and it will finish, and it will look up at me as if to say, “I’m done; [I want your] approval.”
In these moments, there is no deception. Edsinger knows how Domo “works.” Edsinger experiences a connection where knowledge does not interfere with wonder. This is the intimacy presaged by the children for whom Cog was demystified but who wanted it to love them all the same.
Edsinger feels close to Domo as creature and machine. He believes that such feelings will sustain people as they learn to collaborate with robots. Astronauts and robots will go on space flights together. Soldiers and robots will go on missions together. Engineers and robots will maintain nuclear plants together. To be sold on partnership with robots, people need to feel more than comfortable with them. People should want to be around them. For Edsinger, this will follow naturally from the pleasure of physical contact with robotic partners. He says it is thrilling “just to experience something acting with some volition. There is an object, it is aware of my presence, it recognizes me, it wants to interact with me.”
Edsinger does not fall back on the argument that we need helper robots because there will not be enough people to care for each other in the future. For him, creating sociable robots is its own adventure. The robots of the future will be cute, want to hug, and want to help. They will work alongside people, aware of their presence and wishes. Edsinger admits that it will be “deceiving, if people feel the robots know more than they do or care more than they do.” But he does not see a moral issue. First, information about the robot’s limitations is public, out there for all the world to see. Second, we have already decided that it is acceptable to be comforted by creatures that may not really care for us: “We gain comfort from animals and pets, many of which have very limited understanding of us.” Why should we not embrace new relationships (with robots) with new limitations?
And besides, argues Edsinger, and this is an argument that has come up before, we take comfort in the presence of people whose true motivations we don’t know. We assign caring roles to people who may not care at all. This might happen when, during a hospitalization, a nurse takes our hand. How important is it that this nurse wants to hold our hand? What if this is a rote gesture, something close to being programmed? Is it important that this programmed nurse be a person? For Edsinger, it is not. “When Domo holds my hand,” he says, “it always feels good.... There is always that feeling of an entity making contact that it wants, that it needs. I like that, and I am willing to let myself feel that way … just the physical warm and fuzzy sense of being wanted, knowing full well that it is not caring.” I ask Edsinger to clarify. Is it pleasurable to be touched even if he knows that the robot doesn’t “want” to touch him? Edsinger is sure of his answer: “Yes.” But a heartbeat later he retracts it: “Well, there is a part of me that is trying to say, well, Domo cares.”
And this is where we are in the robotic moment. One of the world’s most sophisticated robot “users” cannot resist the idea that pressure from a robot’s hand implies caring. If we are honest with ourselves about what machines care about, we must accept their ultimate indifference. And yet, a hand that reaches for ours says, “I need you. Take care of me. Attend to me. And then, perhaps, I will—and will want to—attend to you.” Again, what robots offer meets our human vulnerabilities. We can interact with robots in full knowledge of their limitations, comforted nonetheless by what must be an unrequited love.
A MOMENT OF MORE: MERGING MIND AND BODY
In the fall of 2005, performance artist Pia Lindman came to MIT with communion on her mind. Lindman had an artistic vision: she would find ways to merge her face and body with MIT’s sociable robots. She hoped that by trying, she would come to know their minds. For Lindman, the robots were what Emerson would have called “test objects.” She imagined that immersion in a robot’s nature might give her a new understanding of her own.
The MIT sociable robots are inspired by a philosophical tradition that sees mind and body as inseparable. Following Immanuel Kant, Martin Heidegger, Maurice Merleau-Ponty, and, more recently, Hubert Dreyfus and Antonio Damasio, this tradition argues that our bodies are quite literally instruments of thought; therefore, any computer that wants to be intelligent had better start out with one.6 Not all schools of artificial intelligence have been sympathetic to this way of seeing things. One branch of the field, often referred to as “symbolic AI,” associates itself with a Cartesian mind/body dualism and argues that machine intelligence can be programmed through rules and the representation of facts.7
In the 1960s, philosopher Hubert Dreyfus took on the symbolic AI community when he argued that “computers need bodies in order to be intelligent.”8 This position has a corollary; whatever intelligence machines may achieve, it will never be the kind that people have because no body given to a machine will be a human body. Therefore, the machine’s intelligence, no matter how interesting, will be alien.9 Neuroscientist Antonio Damasio takes up this argument from a different research tradition. For Damasio, all thinking and all emotion is embodied. The absence of emotion reduces the scope of rationality because we literally think with our feelings, thus the rebuking title of his 1994 book Descartes’ Error.10 Damasio insists that there is no mind/body dualism, no split between thought and feeling. When we have to make a decision, brain processes that are shaped by our body guide our reasoning by remembering our pleasures and pains. This can be taken as an argument for why robots will never have a humanlike intelligence: they have neither bodily feelings nor feelings of emotion. These days, roboticists such as Brooks take up that challenge. They grant that intelligence may indeed require bodies and even emotions, but insist that they don’t have to be human ones. And in 2005, it was Brooks to whom Lindman applied when she wanted to join her mind and body to a machine.
A precursor to Lindman’s work with robots was her 2004 project on grief. She chose photographs of people grieving from the New York Times—a mother bending over a dead child, a husband learning he has lost his wife to a terrorist attack. Then, she sketched several hundred of the photographs and began to act them out, putting her face and body into the positions of the people in the photographs. Lindman says she felt grief as she enacted it. Biology makes this so. The shape of a smile or frown releases chemicals that affect mental state.11 And in humans, “mirror neurons” fire both when we observe others acting and when we act ourselves. Our bodies find a way to implicate us emotionally in what we see.12 Lindman came out of the grief project wanting to further explore the connection between embodiment and emotion. So, closely tracking that project’s methodology, she began to work with machines that had bodies. Teaming up with Edsinger, she videotaped his interactions with Domo, sketched the interactions of man and robot, and then learned to put herself in the place of both.13
Her enactments included Edsinger’s surprise at being surprised when Domo does something unexpected; his pleasure when he holds down the robot’s hand in order to get things done, and Domo, responding, seems to want freedom; his thrill in the moment when Domo finishes its work and looks around for the last place it saw a human, the place that Edsinger occupies. Through communion with man and robot, Lindman hoped to experience the gap between the human and the machine. In the end, Lindman created a work of art that both addresses and skirts the question of desire.
At an MIT gallery in the spring of 2006, Lindman performed the results of her work with Edsinger and Domo. On the walls she mounted thirty-four drawings of herself and the robot. In some drawings, Lindman assumes Domo’s expression when disengaged, and she looks like a machine; in others, Domo is caught in moments of intense “engagement,” and it looks like a person. In the drawings, Domo and Lindman seem equally comfortable in the role of person or machine, comfortable being each other.
The performance itself began with a video of Edsinger and Domo working together. They interact with an elegant economy of gesture. These two know each other very well. They seem to anticipate each other, look after each other. The video was followed by Lindman “enacting” Domo on a raised stage. She was dressed in gray overalls, her hair pulled into a tight bun. Within a few minutes, I forgot the woman and saw the machine. And then Lindman played both parts: human and machine. This time, within minutes, I saw two humans. And then, figure turned to ground, and I saw two machines, two very fond machines. Or was it two machines that were perhaps too fond? I was with a colleague who saw it the other way, first two machines and then two humans. Either way, Lindman had made her point: the boundaries between people and things are shifting. What of these boundaries is worth maintaining?
Later, I meet privately with Lindman, and she talks about her performance and her experience making the film. “I turn myself into the human version of Domo … and I feel the connection between [Edsinger] and Domo… . You feel the tenderness, the affection in their gestures. Their pleasure in being together.” She dwells on a sequence in which Edsinger tries to get Domo to pick up a ball. At one moment, the ball is not in Domo’s field of vision. The robot looks toward Edsinger, as though orienting to a person who can help, a person whom it trusts. It reaches for Edsinger’s hands. For the robot, says Lindman, “there is information to be gathered through touch.” Domo and Edsinger stare at each other, with Domo’s hands on Edsinger’s as though in supplication. Lindman says that in enacting Domo for this sequence, she “couldn’t think about seeking the ball.... I’ve always thought about it as a romantic scene.”
For Lindman this scene is crucial. In trying to play a robot, she found that the only way to get it right was to use a script that involved love. “The only way I was able to start memorizing the movements was to create a narrative. To put emotions into the movements made me remember the movements.” She is aware that Edsinger had a different experience. He had moments when he saw the robot as both program and creature: “A lot of times he’d be looking at the screen with the code scrolling by… . He is looking at the robot’s behavior, at its internal processes, but also is drawn into what is compelling in the physical interaction.” Edsinger wrote Domo’s code, but also learns from touching Domo’s body. Watching these moments on film, I see the solicitous touch of a mother who puts her hand on her child’s forehead to check for fever.
Of a scene in which Edsinger holds down Domo’s hand to prevent a collision, Lindman says,
[Edsinger] is holding Domo’s hand like this [Lindman demonstrates by putting one hand over another] and looks into Domo’s eyes to understand what it’s doing: Where are its eyes going? Is it confused? Is it trying to understand what it’s seeing or is it understanding what it’s seeing? To get eye contact with Domo is, like, a key thing. And he gets it. He’s actually looking at Domo trying to understand what it’s looking at, and then Domo slowly turns his head and looks him in the eye. And it’s this totally romantic moment.
Edsinger, too, has described this moment as one in which he feels the pleasure of being sought after. So, it is not surprising that to enact it, Lindman imagined robot and man in a moment of desire. She says, “It is as though I needed the robot to seem to have emotions in order to understand it.” She is able to play Domo only if she plays a woman desiring a man. “It is,” she admits, “the scene I do best.”
In the grief project, the position of her body brought Lindman to experiences of abjection, something that she now attributes to mirror neurons. She had expected that doubling for a robot would be very different because “it has no emotion.” But in the end, she had to create emotions to become an object without emotion. “To remember the robot’s motions, I had to say: ‘It does this because it feels this way.’ … It wasn’t like I was feeling it, but I had to have that logic.” Except that (think of the mirror neurons) Lindman was feeling it. And despite herself, she couldn’t help but imagine them in the machine. Lindman’s account becomes increasingly complex as she grapples with her experience. If the subject is communion with the inanimate, these are the telling contradictions of an expert witness.14
The philosopher Emmanuel Lévinas writes that the presence of a face initiates the human ethical compact.15 The face communicates, “Thou shalt not kill me.” We are bound by the face even before we know what stands behind it, even before we might learn that it is the face of a machine. The robotic face signals the presence of a self that can recognize another. It puts us in a landscape where we seek recognition. This is not about a robot’s being able to recognize us. It is about our desire to have it do so.
Lindman could not play Edsinger without imagining him wanting the robot’s recognition; she could not play Domo without imagining it wanting Edsinger’s recognition. So, Lindman’s enactment of Domo looking for a green ball interprets the robot as confused, seeking the person closest to it, locking eyes, and taking the person’s hand to feel comforted. It is a moment, classically, during which a person might experience a feeling of communion. Edsinger—not just in Lindman’s re-creation—feels this closeness, unswayed by his knowledge of the mechanisms behind the robot’s actions. For Lindman, such interactions spark “a crisis about what is authentic and real emotion.”
Lindman worries that the romantic scripts she uses “might not seem to us authentic” because robots “are of mechanism not spirit.” In her grief project, however, she found that grief is always expressed in a set of structured patterns, programmed, she thinks, by biology and culture. So we, like the robots, have programs beneath our expression of feelings. We are constrained by mechanisms, even in our most emotional moments. And if our emotions are mediated by such programming, asks Lindman, how different are our emotions from those of a machine? For Lindman, the boundary is disappearing. We are authentic in the way a machine can be, and a machine can be authentic in the way a person can be.
And this is where I began. The questions for the future are not whether children will love their robot companions more than their pets or even their parents. The questions are rather, What will love be? And what will it mean to achieve ever-greater intimacy with our machines? Are we ready to see ourselves in the mirror of the machine and to see love as our performances of love?
In her enactments of grief, Lindman felt her body produce a state of mind. And in much the same spirit, when she enacts Domo, she says she “feels” the robot’s mind. But Lindman is open to a more transgressive experience of the robot mind. After completing the Domo project, she begins to explore how she might physically connect her face to the computer that controls the robot Mertz.
Lijin Aryananda’s Mertz, a metal head on a flexible neck, improves on Kismet’s face, speech, and vision. Like Kismet, Mertz has expressive brows above its black ping-pong ball eyes—features designed to make a human feel kindly toward the robot. But this robot can actually speak simple English. Like Domo, Mertz has been designed as a step toward a household companion and helper. Over time, and on its own, it is able to recognize a set of familiar individuals and chat with them using speech with appropriate emotional cadence. Lindman hopes that if she can somehow “plug herself” into Mertz, she will have a direct experience of its inner state. “I will experience its feelings,” she says excitedly. And Lindman wants to have her brain scanned while she is hooked up to Mertz in order to compare images of her brain activity to what we know is going on in the machine. “We can actually look at both,” she says. “I will be the embodiment of the AI and we will see if [when the robot smiles], my brain is smiling.”
Lindman soon discovers that a person cannot make her brain into the output device for a robot intelligence. So, she modifies her plan. Her new goal is to “wear” Mertz’s facial expressions by hooking up her face rather than her brain to the Mertz computer, to “become the tool for the expression of the artificial intelligence.” After working with Domo, Lindman anticipates that she will experience a gap between who she is and what she will feel as she tries to be the robot. She hopes the experiment will help her understand what is specific to her as a human. In that sense, the project is about yearning for communion with the machine as well as inquiring into whether communion is possible. Lindman imagines the gap: “You will say, ‘Okay, so there’s the human.’”16
As a first step, and it would be her only step, Lindman constructs a device capable of manipulating her face by a set of mechanical pliers, levers, and wires, “just to begin with the experience of having my face put into different positions.” It is painful and prompts Lindman to reconsider the direct plug-in she hopes some day to achieve. “I’m not afraid of too much pain,” she says. “I’m more afraid of damage, like real damage, biological damage, brain damage. I don’t think it’s going to happen, but it’s scary.” And Lindman imagines another kind of damage. If some day she does hook herself up to a robot’s program, she believes she will have knowledge of herself that no human has ever had. She will have the experience of what it feels like to be “taken over” by an alien intelligence. Perhaps she will feel its pull and her lack of resistance to it. The “damage” she fears relates to this. She may learn something she doesn’t want to know. Does the knowledge of the extent to which we are machines mark the limit of our communion with machines? Is this knowledge taboo? Is it harmful?
Lindman’s approach is novel, but the questions she raises are not new. Can machines develop emotions? Do they need emotions to develop full intelligence? Can people only relate to machines by projecting their own emotions onto them, emotions that machines cannot achieve? The fields of philosophy and artificial intelligence have a long history of addressing such matters. In my own work, I argue the limits of artificial comprehension because neither computer agents nor robots have a human life cycle.17 For me, this objection is captured by the man who challenged the notion of having a computer psychotherapist with the comment, “How can I talk about sibling rivalry to something that never had a mother?” These days, AI scientists respond to the concern about the lack of machine emotion by proposing to build some. In AI, the position that begins with “computers need bodies in order to be intelligent” becomes “computers need affect in order to be intelligent.”
Computer scientists who work in the field known as “affective computing” feel supported by the work of social scientists who underscore that people always project affect onto computers, a projection that helps people work more constructively with the machines.18 For example, psychologist Clifford Nass and his colleagues review a set of laboratory experiments in which “individuals engage in social behavior towards technologies even when such behavior is entirely inconsistent with their beliefs about machines.”19 People attribute personality traits and gender to computers and even adjust their responses to avoid hurting the machines’ “feelings.” In one dramatic experiment, a first group of people is asked to perform a task on computer A and to evaluate the task on the same computer. A second group is asked to perform the task on computer A but to evaluate it on computer B. The first group gives computer A far higher grades. Basically, participants do not want to insult a computer “to its face.”
Nass and his colleagues suggest that “when we are confronted with an entity that [behaves in humanlike ways, such as using language and responding based on prior inputs,] our brains’ default response is to unconsciously treat the entity as human.”20 Given this, they propose that technologies be made more “likeable” for practical reasons. People will buy them and they will be easier to use. But making a machine “likeable” has moral implications. “It leads to various secondary consequences in interpersonal relationships (for example, trust, sustained friendship, and so forth).”21 For me, these secondary consequences are the heart of the matter. Making a machine easy to use is one thing. Giving it a winning personality is another. Yet, this is one of the directions taken by affective computing (and sociable robotics).
Computer scientists who work in this tradition want to build computers able to assess their users’ affective states and respond with “affective” states of their own. At MIT, Rosalind Picard, widely credited with coining the phrase “affective computing,” writes, “I have come to the conclusion that if we want computers to be genuinely intelligent, to adapt to us, and to interact naturally with us, then they will need the ability to recognize and express emotions, and to have what has come to be called ‘emotional intelligence.’”22 Here the line is blurred between computers having emotions and behaving as if they did. Indeed, for Marvin Minsky, “Emotion is not especially different from the processes that we call ‘thinking.’”23 He joins Antonio Damasio on this but holds the opposite view of where the idea takes us. For Minsky, it means that robots are going to be emotional thinking machines. For Damasio, it means they can never be unless robots acquire bodies with the same characteristics and problems of living bodies.
In practice, researchers in affective computing try to avoid the word “emotion.” Talking about emotional computers is always on track to raise strong objections. How would computers get these emotions? Affects sound more cognitive. Giving machines a bit of “affect” to make them easier to use sounds like common sense, more a user interface strategy than a philosophical position. But synonyms for “affective” include “emotional,” “feeling,” “intuitive,” and “noncognitive,” just to name a few.24 “Affect” loses these meanings when it becomes something computers have. The word “intelligence” underwent a similar reduction in meaning when we began to apply it to machines. Intelligence once denoted a dense, layered, complex attribute. It implied intuition and common sense. But when computers were declared to have it, intelligence started to denote something more one-dimensional, strictly cognitive.
Lindman talks about her work with Domo and Mertz as a contribution to affective computing. She is convinced that Domo needs an additional layer of emotional intelligence. Since it wasn’t programmed in, she says she had to “add it herself” when she enacted the robot’s movements. But listening to Lindman describe how she had to “add in” yearning and tenderness to the relationship between Domo and Edsinger, I have a different reaction. Perhaps it is better that Lindman had to “add in” emotion. It put into sharp relief what is unique about people. The idea of affective computing intentionally blurs the line.
THROUGH THE EYES OF THE ROBOT
Domo and Mertz are advanced robots. But we know that feelings of communion are evoked by far simpler ones. Recall John Lester, the computer scientist who thought of his AIBO as both machine and creature. Reflecting on AIBO, Lester imagines that robots will change the course of human evolution.25 In the future, he says, we won’t simply enjoy using our tools, “we will come to care for them. They will teach us how to treat them, how to live with them. We will evolve to love our tools; our tools will evolve to be loveable.”
Like Lindman and Edsinger, Lester sees a world of creature-objects burnished by our emotional attachments. With a shy shrug that signals he knows he is going out on a limb, he says, “I mean, that’s the kind of bond I can feel for AIBO now, a tool that has allowed me to do things I’ve never done before.... Ultimately [tools like this] will allow society to do things that it has never done.” Lester sees a future in which something like an AIBO will develop into a prosthetic device, extending human reach and vision.26 It will allow people to interact with real, physical space in new ways. We will see “through its eyes,” says Lester, and interact “through its body.... There could be some parts of it that are part of you, the blending of the tools and the body in a permanent physical way.” This is how Brooks talks about the merging of flesh and machine. There will be no robotic “them” and human “us.” We will either merge with robotic creatures, or in a long first step, we will become so close to them that we will integrate their powers into our sense of self. In this first step, a robot will still be an other, but one that completes you.
These are close to the dreams of Thad Starner, one of the founders of MIT’s Wearable Computing Group, earlier known as the “cyborgs.” He imagines bringing up a robot as a child in the spirit of how Brooks set out to raise Cog. But Starner insists that Cog—and successor robots such as Domo and Mertz—are “not extreme enough.”27 They live in laboratories, so no matter what the designers’ good intentions, the robots will never be treated like human babies. Starner wants to teach a robot by having it learn from his life—by transmitting his life through sensors in his clothes. The sensors will allow “the computer to see as I see, hear as I hear, and experience the world around me as I experience it,” Starner says. “If I meet somebody at a conference it might hear me say, ‘Hi, David,’ and shake a hand. Well, if it then sees me typing in somebody’s name or pulling up that person’s file, it might actually start understanding what introductions are.” Starner’s vision is “to create something that’s not just an artificial intelligence. It’s me.”
In a more modest proposal, the marriage of connectivity and robotics is also the dream of Greg, twenty-seven, a young Israeli entrepreneur who has just graduated from business school. It is how he intends to make his fortune—and in the near future. In Greg’s design, data from his cell phone will animate a robot. He says,
I will walk around with my phone, but when I come home at night, I will plug it into a robotic body, also intelligent but in different ways. The robot knows about my home and how to take care of it and to take care of me if I get sick. The robot would sit next to me and prepare the documents I need to make business calls. And when I travel, I would just have to take the phone, because another robot will be in Tel Aviv, the same model. And it will come alive when I plug in my phone. And the robot bodies will offer more, say, creature comforts: a back rub for sure and emergency help if you get into medical trouble. It will be reassuring for a young person, but so much more for an old person.
We will animate our robots with what we have poured into our phones: the story of our lives. When the brain in your phone marries the body of your robot, document preparation meets therapeutic massage. Here is a happy fantasy of security, intellectual companionship, and nurturing connection. How can one not feel tempted?
Lester dreams of seeing the world through AIBO’s eyes: it would be a point of access to an enhanced environment. Others turn this around, saying that the robot will become the environment; the physical world will be laced with the intelligence we are now trying to put into machines. In 2008, I addressed a largely technical audience at a software company, and a group of designers suggested that in the future people will not interact with stand-alone robots at all—that will become an old fantasy. What we now want from robots, they say, we will begin to embed in our rooms. These intellectually and emotionally “alive” rooms will collaborate with us. They will understand speech and gesture. They will have a sense of humor. They will sense our needs and offer comfort. Our rooms will be our friends and companions.
CONSIDERING THE ROBOT FOR REAL
The story of robots, communion, and moments of more opens up many conversations, both philosophical and psychological. But these days, as people imagine robots in their daily lives, their conversations become quite concrete as they grapple with specific situations and try to figure out if a robot could help.
Tony, a high school teacher, has just turned fifty. Within just the past few years, his life has entered a new phase. All three of his children are in college. His parents are dead. He and his wife, Betty, find themselves in constant struggle with her mother, Natasha, eighty-four, who is recuperating from a stroke and also showing early signs of Alzheimer’s. When a younger woman and at her best, Natasha had been difficult. Now, she is anxious and demanding, often capricious. She criticizes her daughter and son-in-law when they try to help; nothing seems enough. Tony, exhausted, considers their options. With some dread, he and Betty have been talking about moving Natasha into their home. But they both work, so Natasha will require a caretaker to tend to her as she declines. He hears of work in progress on robots designed for child and elder care. This is something new to consider, and his first thoughts are positive.
Well, if I compare having a robot with an immigrant in my house, the kind of person who is available to take care of an elderly person, the robot would be much better. Sort of like flying Virgin Atlantic and having your own movie. You could have the robot be however you wanted it. It wouldn’t be rude or illiterate or steal from you. It would be very safe and specialized. And personalized. I like that. Natasha’s world is shrinking because of the Alzheimer’s. A robot geared to the Alzheimer’s—that could sort of measure where she was on any given day and give her some stimulation based on that—that would be great.
And then there is a moment of reconsideration:
But maybe I’m getting it backwards. I’m not sure I would want a robot taking care of me when I’m old. Actually, I’m not sure I would rather not be alive than be maintained by a robot. The human touch is so important. Even when people have Alzheimer’s, even when they are unconscious, in a coma, I’ve read that people still have the grasping reflex. I suppose I want Natasha to have the human touch. I would want to have the human touch at the end. Other than that, it is like that study where they substituted the terry cloth monkeys for the real monkeys and the baby monkeys clung to wire monkeys with terry cloth wrapped around them. I remember studying that in college and finding it painfully sad. No, you need the real monkey to preserve your dignity. Your dignity as a person. Without that, we’re like cows hooked up to a milking machine. Or like our life is like an assembly line where at the end you end up with the robot.
Tony is aware that he has talked himself into a contradiction: the robot is a specialized helper that can expertly diagnose a level of impairment and the robot is like a wire-and-terry-cloth monkey. He tries to reconcile his ideas:
I suppose the robot assistant is okay when a person still has some lucidity. You can still interact with it and know that it is a robot. But you don’t want a robot at the end. Then, you deserve a person. Everybody deserves a person. But I do have mixed feelings. Looking at robots and children, really, there is a part of raising children … I think Marilyn French called it the “shit-and-string-beans” part of raising children. This would be good for a robot to do. You are a robot when you do it.
So, I’m happy to give over the shit-and-string-beans part of child raising and for that aspect of taking care of Natasha. Of course, the children are the harder call. But I would do it if it were the conventional thing that everyone did. Most people would do it if it were the conventional thing that everyone did. We didn’t deny our children television, and we didn’t think it was a good thing for them.
Tony is not happy to be caught in a contradiction. But many people share his dilemma. It is hard to hold on to a stable point of view. Plagued with problems, we are told that machines might address them. How are we to resist? Tony says, “I’m okay with the lack of authenticity [if you replace people with robots]. Lack of authenticity is an acceptable trade-off for services needed. I would say that my need right now trumps the luxury of authenticity. I would see a robot cleaning up after Natasha as labor saving, just like a vacuum cleaner. So the eldercare robot, I’m okay with it.”
Betty has been quietly listening to this conversation about her mother. She would like her mother to live in her own home for as long as possible. Maybe a robot companion could help with that. She says,
The robot would make her life more interesting. Maybe it would mean that she could stay in her own home longer. But provide some reassurance and peace of mind for me. More than a helper who might abuse her or ignore her or steal from her. I imagine she would prefer the robot. The robot wouldn’t be critical. It would always be positive toward her. She would be familiar with it. At ease with it. Like Tony says, there is a down side to TV for children and there is a down side to this. There is always a down side. But it would be so worth it.
Then Betty speaks about other “robotic things” in her life. She thinks of automatic tellers as robotic. And she is happy that in her suburban neighborhood, she has a local bank where there are still human tellers, coffee, and a plate of donuts on Saturday. “I love our little bank. It would bother me if I went in there one day and the teller was a well-trained robot. At self-service gas stations, at ATM machines, you lose the intimacy.”
For her husband, however, that neighborhood bank is only an exercise in nostalgia.
The teller is not from the neighborhood. He doesn’t know you or care. There’s no point in talking to him because he has become a robot. If you do talk to the teller, you have become like the “old guy,” the retired guy who wants to talk to everyone on line and then talk to the teller. Because that is the old guy’s social life—the bank, the grocery store, the barber. When you’re young, you’re okay with the ATM, but then, if that’s all we have, when we’re ready to talk to people, when we’re old, there won’t be anyone there. There will just be things.
Tony’s review of the banal and the profound—of being young and wanting an ATM, of being old and bereft in a world of things—captures the essence of the robotic moment. We feel, as we stand before our ATM machines (or interact with bank tellers who behave like ATM machines), that they and we stand robotic among robots, “trained to talk to things.” So, it seems less shocking to put robots in where people used to be. Tony expands on a familiar progression: when we make a job rote, we are more open to having machines do it. But even when people do it, they and the people they serve feel like machines.
Gradually, more of life, even parts of life that involve our children and parents, seem machine ready. Tony tries to focus on the bright side. Alzheimer’s patients can be served by a finely tuned robot.28 Children will have the attention of machines that will not resent the “shit and string beans” of their daily care. And yet, he feels the tug of something else: the robotic makes sense until it makes him think of monkeys deprived of a mother, clinging to wire and terry cloth.
This last reaction may end up seeming peculiarly American. In Japan, enthusiasm for robots is uninhibited.29 Philosophically, the ground has been prepared. Japanese roboticists are fond of pointing out that in their country, even worn-out sewing needles are buried with ceremony. At some shrines in Japan, dolls, including sex dolls, are given proper burials. It is commonplace to think of the inanimate as having a life force. If a needle has a soul, why shouldn’t a robot? At their robotic moment, a Japanese national publicity campaign portrays a future in which robots will babysit and do housework and women will be freed up to have more babies—preserving the traditional values of the Japanese home, but also restoring sociability to a population increasingly isolated through the networked life.
The Japanese take as a given that cell phones, texting, instant messaging, e-mail, and online gaming have created social isolation. They see people turning away from family to focus attention on their screens. People do not meet face to face; they do not join organizations. In Japan, robots are presented as facilitators of the human contact that the network has taken away. Technology has corrupted us; robots will heal our wounds.
We come full circle. Robots, which enchant us into increasingly intense relationships with the inanimate, are here proposed as a cure for our too intense immersion in digital connectivity. Robots, the Japanese hope, will pull us back toward the physical real and thus each other.