Alone Together: Why We Expect More from Technology and Less from Each Other - Sherry Turkle (2011)

Part I. The Robotic Moment

In Solitude, New Intimacies

Chapter 1. Nearest neighbors

My first brush with a computer program that offered companionship was in the mid-1970s. I was among MIT students using Joseph Weizenbaum’s ELIZA, a program that engaged in dialogue in the style of a psychotherapist. So, a user typed in a thought, and ELIZA reflected it back in language that offered support or asked for clarification.1 To “My mother is making me angry,” the program might respond, “Tell me more about your mother,” or perhaps, “Why do you feel so negatively about your mother?” ELIZA had no model of what a mother might be or any way to represent the feeling of anger. What it could do was take strings of words and turn them into questions or restate them as interpretations.
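
To see how little machinery this required, consider a toy sketch of an ELIZA-style exchange. The Python below is an illustration only; the rules and phrasings are invented for this example and are not Weizenbaum's original DOCTOR script, but they show how pattern matching and pronoun swapping alone can produce responses like those above.

    import re

    # Swap first- and second-person words so "my mother" reflects back as "your mother".
    REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

    # (pattern, response template) pairs; a handful of invented rules in ELIZA's spirit.
    RULES = [
        (re.compile(r"my (.+) is making me (\w+)", re.I),
         "Why do you feel so {1} about your {0}?"),
        (re.compile(r"my (.+)", re.I),
         "Tell me more about your {0}."),
        (re.compile(r"i am (.+)", re.I),
         "How long have you been {0}?"),
    ]

    def reflect(fragment):
        """Reword a captured fragment from the speaker's point of view to the listener's."""
        return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

    def respond(statement):
        """Apply the first matching rule; otherwise fall back to a neutral prompt."""
        for pattern, template in RULES:
            match = pattern.search(statement)
            if match:
                return template.format(*(reflect(g) for g in match.groups()))
        return "Please go on."

    print(respond("My mother is making me angry"))
    # -> Why do you feel so angry about your mother?

Nothing in these rules represents a mother or the feeling of anger; the program only rearranges the user's own words. The empathy, such as it is, is supplied entirely by the person typing.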

Weizenbaum’s students knew that the program did not know or understand; nevertheless they wanted to chat with it. More than this, they wanted to be alone with it. They wanted to tell it their secrets.2 Faced with a program that makes the smallest gesture suggesting it can empathize, people want to say something true. I have watched hundreds of people type a first sentence into the primitive ELIZA program. Most commonly they begin with “How are you today?” or “Hello.” But four or five interchanges later, many are on to “My girlfriend left me,” “I am worried that I might fail organic chemistry,” or “My sister died.”

Soon after, Weizenbaum and I were coteaching a course on computers and society at MIT. Our class sessions were lively. During class meetings he would rail against his program’s capacity to deceive; I did not share his concern. I saw ELIZA as a kind of Rorschach, the psychologist’s inkblot test. People used the program as a projective screen on which to express themselves. Yes, I thought, they engaged in personal conversations with ELIZA, but in a spirit of “as if.” They spoke as if someone were listening but knew they were their own audience. They became caught up in the exercise. They thought, I will talk to this program as if it were a person. I will vent; I will rage; I will get things off my chest. More than this, while some learned enough about the program to trip it up, many more used this same inside knowledge to feed ELIZA responses that would make it seem more lifelike. They were active in keeping the program in play.

Weizenbaum was disturbed that his students were in some way duped by the program into believing—against everything they knew to be true—that they were dealing with an intelligent machine. He felt almost guilty about the deception machine he had created. But his worldly students were not deceived. They knew all about ELIZA’s limitations, but they were eager to “fill in the blanks.” I came to think of this human complicity in a digital fantasy as the “ELIZA effect.” Through the 1970s, I saw this complicity with the machine as no more threatening than wanting to improve the working of an interactive diary. As it turned out, I underestimated what these connections augured. At the robotic moment, more than ever, our willingness to engage with the inanimate does not depend on being deceived but on wanting to fill in the blanks.

Now, over four decades after Weizenbaum wrote the first version of ELIZA, artificial intelligences known as “bots” present themselves as companions to the millions who play computer games on the Internet. Within these game worlds, it has come to seem natural to “converse” with bots about a variety of matters, from routine to romantic. And, as it turns out, it’s a small step from having your “life” saved by a bot you meet in a virtual world to feeling a certain affection toward it—and not the kind of affection you might feel toward a stereo or car, no matter how beloved. Meantime, in the physical real, things proceed apace. The popular Zhu Zhu robot pet hamsters come out of the box in “nurturing mode.” The official biography of the Zhu Zhu named Chuck says, “He lives to feel the love.” For the elderly, the huggable baby seal robot Paro is now on sale. A hit in Japan, it now targets the American nursing home market. Roboticists make the case that the elderly need a companion robot because of a lack of human resources. Almost by definition, they say, robots will make things better.

While some roboticists dream of reverse engineering love, others are content to reverse engineer sex.3 In February 2010, I googled the exact phrase “sex robots” and came up with 313,000 hits, the first of which was linked to an article titled “Inventor Unveils $7,000 Talking Sex Robot.” Roxxxy, I learned, “may be the world’s most sophisticated, talking sex robot.”4 The shock troops of the robotic moment, dressed in lingerie, may be closer than most of us have ever imagined. And true to the ELIZA effect, this is not so much because the robots are ready but because we are.

In a television news story about a Japanese robot designed in the form of a sexy woman, a reporter explains that although this robot currently performs only as a receptionist, its designers hope it will someday serve as a teacher and companion. Far from skeptical, the reporter bridges the gap between the awkward robot before him and the idea of something akin to a robot wife by referring to the “singularity.” He asks the robot’s inventor, “When the singularity comes, no one can imagine where she [the robot] could go. Isn’t that right? … What about these robots after the singularity? Isn’t it the singularity that will bring us the robots that will surpass us?”

The singularity? This notion has migrated from science fiction to engineering. The singularity is the moment—it is mythic; you have to believe in it—when machine intelligence crosses a tipping point.5 Past this point, say those who believe, artificial intelligence will go beyond anything we can currently conceive. No matter if today’s robots are not ready for prime time as receptionists. At the singularity, everything will become technically possible, including robots that love. Indeed, at the singularity, we may merge with the robotic and achieve immortality. The singularity is technological rapture.

As for Weizenbaum’s concerns that people were open to computer psychotherapy, he correctly sensed that something was going on. In the late 1970s, there was considerable reticence about computer psychotherapy, but soon after, opinions shifted.6 The arc of this story does not reflect new abilities of machines to understand people, but people’s changing ideas about psychotherapy and the workings of their own minds, both seen in more mechanistic terms.7 Thirty years ago, with psychoanalysis more central to the cultural conversation, most people saw the experience of therapy as a context for coming to see the story of your life in new terms. This happened through gaining insight and developing a relationship with a therapist who provided a safe place to address knotty problems. Today, many see psychotherapy less as an investigation of the meaning of our lives and more as an exercise to achieve behavioral change or work on brain chemistry. In this model, the computer becomes relevant in several ways. Computers can help with diagnosis, be set up with programs for cognitive behavioral therapy, and provide information on alternative medications.

Previous hostility to the idea of the computer as psychotherapist was part of a “romantic reaction” to the computer presence, a sense that there were some places a computer could not and should not go. In shorthand, the romantic reaction said, “Simulated thinking might be thinking, but simulated feeling is not feeling; simulated love is never love.” Today, that romantic reaction has largely given way to a new pragmatism. Computers “understand” as little as ever about human experience—for example, what it means to envy a sibling or miss a deceased parent. They do, however, perform understanding better than ever, and we are content to play our part. After all, our online lives are all about performance. We perform on social networks and direct the performances of our avatars in virtual worlds. A premium on performance is the cornerstone of the robotic moment. We live the robotic moment not because we have companionate robots in our lives but because the way we contemplate them on the horizon says much about who we are and who we are willing to become.

How did we get to this place? The answer to that question is hidden in plain sight, in the rough-and-tumble of the playroom, in children’s reactions to robot toys. As adults, we can develop and change our opinions. In childhood, we establish the truth of our hearts.

I have watched three decades of children with increasingly sophisticated computer toys. I have seen these toys move from being described as “sort of alive” to “alive enough,” the language of the generation whose childhood play was with sociable robots (in the form of digital pets and dolls). Getting to “alive enough” marks a watershed. In the late 1970s and early 1980s, children tried to make philosophical distinctions about aliveness in order to categorize computers. These days, when children talk about robots as alive enough for specific purposes, they are not trying to settle abstract questions. They are being pragmatic: different robots can be considered on a case-by-case and context-by-context basis. (Is it alive enough to be a friend, a babysitter, or a companion for your grandparents?) Sometimes the question becomes more delicate: If a robot makes you love it, is it alive?

LIFE RECONSIDERED

In the late 1970s and early 1980s, children met their first computational objects: games like Merlin, Simon, and Speak & Spell. This first generation of computers in the playroom challenged children in memory and spelling games, routinely beating them at tic-tac-toe and hangman.8 The toys, reactive and interactive, turned children into philosophers. Above all else, children asked themselves whether something programmed could be alive.

Children’s starting point here is their animation of the world. Children begin by understanding the world in terms of what they know best: themselves. Why does the stone roll down the slope? “To get to the bottom,” says the young child, as though the stone had its own desires. But in time, animism gives way to physics. The child learns that a stone falls because of gravity; intentions have nothing to do with it. And so a dichotomy is constructed: physical and psychological properties stand opposed to one another in two great systems. But the computer is a new kind of object: it is psychological and yet a thing. Marginal objects such as the computer, on the lines between categories, draw attention to how we have drawn the lines.9

Swiss psychologist Jean Piaget, interviewing children in the 1920s, found that they took up the question of an object’s life status by considering its physical movement.10 For the youngest children, everything that could move was alive; later, only things that could move without an outside push or pull. People and animals were easily classified. But clouds that seemed to move of their own accord were classified as alive until children realized that wind, an external but invisible force, was pushing them along. Cars were reclassified as not alive when children understood that motors counted as an “outside” push. Finally, the idea of autonomous movement became focused on breathing and metabolism, the motions most particular to life.

In the 1980s, faced with computational objects, children began to think through the question of aliveness in a new way, shifting from physics to psychology.11 When they considered a toy that could beat them at spelling games, they were interested not in whether such an object could move on its own but in whether it could think on its own. Children asked if this game could “know.” Did it cheat? Was knowing part of cheating? They were fascinated by how electronic games and toys showed a certain autonomy. When an early version of Speak & Spell—a toy that played language and spelling games—had a programming bug and could not be turned off during its “say it” routine, children shrieked with excitement, finally taking out the game’s batteries to “kill it” and then (with the reinsertion of the batteries) bring it back to life.

In their animated conversations about computer life and death, children of the 1980s imposed a new conceptual order on a new world of objects.12 In the 1990s, that order was strained to the breaking point. Simulation worlds—for example, the Sim games—pulsed with evolving life forms. And child culture was awash in images of computational objects (from Terminators to digital viruses) all shape-shifting and morphing in films, cartoons, and action figures. Children were encouraged to see the stuff of computers as the same stuff of which life is made. One eight-year-old girl referred to mechanical life and human life as “all the same stuff, just yucky computer ‘cy-dough-plasm.’” All of this led to a new kind of conversation about aliveness. Now, when considering computation, children talked about evolution as well as cognition. And they talked about a special kind of mobility. In 1993, a ten-year-old considered whether the creatures on the game SimLife were alive. She decided they were “if they could get out of your computer and go to America Online.”13

Here, Piaget’s narrative about motion resurfaced in a new guise. Children often imbued the creatures in simulation games with a desire to escape their confines and enter a wider digital world. And then, starting in the late 1990s, digital “creatures” came along that tried to dazzle children not with their smarts but with their sociability. I began a long study of children’s interactions with these new machines. Of course, children said that a sociable robot’s movement and intelligence were signs of its life. But even in conversations specifically about aliveness, children were more concerned about what these new robots might feel. As criteria for life, everything pales in comparison to a robot’s capacity to care.

Consider how often thoughts turn to feelings as three elementary school children discuss the aliveness of a Furby, an owl-like creature that plays games and seems to learn English under a child’s tutelage. The first, a five-year-old girl, can only compare it to a Tamagotchi, a tiny digital creature on an LCD screen that also asks to be loved, cared for, and amused. She asks herself, “Is it [the Furby] alive?” and answers, “Well, I love it. It’s more alive than a Tamagotchi because it sleeps with me. It likes to sleep with me.” A six-year-old boy believes that something “as alive as a Furby” needs arms: “It might want to pick up something or to hug me.” A nine-year-old girl thinks through the question of a Furby’s aliveness by commenting, “I really like to take care of it.... It’s as alive as you can be if you don’t eat.... It’s not like an animal kind of alive.”

From the beginning of my studies of children and computers in the late 1970s, children spoke about an “animal kind of alive” and a “computer kind of alive.” Now I hear them talk about a “people kind of love” and a “robot kind of love.” Sociable robots bring children to the locution that the machines are alive enough to care and be cared for. In speaking about sociable robots, children use the phrase “alive enough” as a measure not of biological readiness but of relational readiness. Children describe robots as alive enough to love and mourn. And robots, as we saw at the American Museum of Natural History, may be alive enough to substitute for the biological, depending on the context. One reason the children at the museum were so relaxed about a robot substituting for a living tortoise is that they were comfortable with the idea of a robot as both machine and creature. I see this flexibility in seven-year-old Wilson, a bright, engaged student at a Boston public elementary school where I bring robot toys for after-school play. Wilson reflects on a Furby I gave him to take home for several weeks: “The Furby can talk, and it looks like an owl,” yet “I always hear the machine in it.” He knows, too, that the Furby, “alive enough to be a friend,” would be rejected in the company of animals: “A real owl would snap its head off.” Wilson does not have to deny the Furby’s machine nature to feel it would be a good friend or to look to it for advice. His Furby has become his confidant. Wilson’s way of keeping in mind the dual aspects of the Furby’s nature seems to me a philosophical version of multitasking, so central to our twenty-first-century attentional ecology. His attitude is pragmatic. If something that seems to have a self is before him, he deals with the aspect of self he finds most relevant to the context.

This kind of pragmatism has become a hallmark of our psychological culture. In the mid-1990s, I described how it was commonplace for people to “cycle through” different ideas of the human mind as (to name only a few images) mechanism, spirit, chemistry, and vessel for the soul.14 These days, the cycling through intensifies. We are in much more direct contact with the machine side of mind. People are fitted with a computer chip to help with Parkinson’s. They learn to see their minds as program and hardware. They take antidepressants prescribed by their psychotherapists, confident that the biochemical and oedipal self can be treated in one room. They look for signs of emotion in a brain scan. Old jokes about couples needing “chemistry” turn out not to be jokes at all. The compounds that trigger romantic love are forthcoming from the laboratory. And yet, even with biochemical explanations for attraction, nothing seems different about the thrill of falling in love. And seeing that an abused child has a normal brain scan does not mean one feels any less rage about the abuse. Pluralistic in our attitudes toward the self, we turn this pragmatic sensibility toward other things in our path—for example, sociable robots. We approach them like Wilson: they can be machines, and they can be more.

Writing in his diary in 1832, Ralph Waldo Emerson described “dreams and beasts” as “two keys by which we are to find out the secrets of our nature.... They are our test objects.”15 If Emerson had lived today, he would have seen the sociable robot as our new test object. Poised in our perception between inanimate program and living creature, this new breed of robot provokes us to reflect on the difference between connection and relationship, involvement with an object and engagement with a subject. These robots are evocative: understanding how people think about them provides a view onto how we think about ourselves. When children talk about these robots, they move away from an earlier cohort’s perception of computers as provocative curiosities to the idea that robots might be something to grow old with. It all began when children met the seductive Tamagotchis and Furbies, the first computers that asked for love.16

THE TAMAGOTCHI PRIMER

When active and interactive computer toys were first introduced in the late 1970s, children recognized that they were neither dolls nor people nor animals. Nor did they seem like machines. Computers, first in the guise of electronic toys and games, turned children into philosophers, caught up in spontaneous debates about what these objects might be. In some cases, their discussions brought them to the idea that the talking, clever computational objects were close to kin. Children consider the question of what is special about being a person by contrasting themselves with their “nearest neighbors.” Traditionally, children took their nearest neighbors to be their dogs, cats, and horses. Animals had feelings; people were special because of their ability to think. So, the Aristotelian definition of man as a rational animal had meaning for even the youngest children. But by the mid-1980s, as thinking computers became nearest neighbors, children considered people special because only they could “feel.” Computers were intelligent machines; in contrast, people were emotional machines.17

But in the late 1990s, as if on cue, children met objects that presented themselves as having feelings and needs. As emotional machines, people were no longer alone. Tamagotchis and Furbies (both of which sold in the tens of millions) did not want to play tic-tac-toe, but they would tell you if they were hungry or unhappy. A Furby held upside down says, “Me scared,” and whimpers as though it means it. And these new objects found ways to express their love.

Furbies, put on the market in 1998, had proper robotic “bodies”; they were small, fur-covered “creatures” with big eyes and ears. Yet the Tamagotchi, released in 1997, a virtual creature housed in a plastic egg, serves as a reliable primer in the psychology of sociable robotics, useful precisely because its crucial elements are simplified and therefore stark. The child imagines Tamagotchis as embodied because, like living creatures and unlike machines, they need constant care and are always on. A Tamagotchi has “body enough” for a child to imagine its death.18 To live, a Tamagotchi must be fed, amused, and cleaned up after. If cared for, it will grow from baby to healthy adult. Tamagotchis, in their limited ways, develop different personalities depending on how they are treated. As Tamagotchis turn children into caretakers, they teach that digital life can be emotionally roiling, a place of obligations and regrets.19 The earliest electronic toys and games of thirty years ago—such as Merlin, Simon, and Speak & Spell—encouraged children to consider the proposition that something smart might be “sort of alive.” With Tamagotchis, needy objects asked for care, and children took further steps.
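
Read as a mechanism, the care regime just described is a small, always-on simulation: needs accumulate on their own clock, care keeps the creature alive, and growth depends on treatment. The sketch below is a loose illustration in Python; the names, thresholds, and growth rules are invented for the example, not taken from any actual Tamagotchi design.

    import random

    class VirtualPet:
        """A toy model of a Tamagotchi-style creature; all numbers are invented."""

        def __init__(self):
            self.hunger = 0      # rises on its own; feeding lowers it
            self.happiness = 5   # drifts downward; play raises it
            self.mess = 0        # accumulates until the child cleans up
            self.age = 0
            self.neglect = 0     # running tally of time spent with unmet needs
            self.alive = True

        def tick(self):
            """One unit of time. It runs whether or not anyone is watching:
            the creature is always on and can be tended but not paused."""
            if not self.alive:
                return
            self.age += 1
            self.hunger += 1
            self.happiness -= 1
            if random.random() < 0.3:
                self.mess += 1
            if self.hunger > 8 or self.happiness < 0 or self.mess > 5:
                self.neglect += 1
            if self.neglect > 10:
                self.alive = False  # it dies of neglect, not of a power switch

        def feed(self):
            self.hunger = max(0, self.hunger - 4)

        def play(self):
            self.happiness += 3

        def clean(self):
            self.mess = 0

        def stage(self):
            """Growth and 'personality' depend on how the pet has been treated."""
            if self.age < 20:
                return "baby"
            return "healthy adult" if self.neglect < 5 else "sickly adult"

Because time keeps passing inside the object, the child must intervene on the creature's schedule rather than her own. It is this structure, not any sophistication in the program, that makes neglect consequential and gives the demand for care its emotional force.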

As they did with earlier generations of hard-to-classify computational objects, curious children go through a period of trying to sort out the new sociable objects. But soon children take them at interface value, not as puzzles but as playmates. The philosophical churning associated with early computer toys (are they alive? do they know?) quickly gives way to new practices. Children don’t want to comprehend these objects as much as take care of them. Their basic stance: “I’m living with this new creature. It and many more like it are here to stay.” When a virtual “creature” or robot asks for help, children provide it. When its behavior dazzles, children are pleased just to hang out with it.

In the classic children’s story The Velveteen Rabbit, a stuffed animal becomes “real” because of a child’s love. Tamagotchis do not wait passively but demand attention and claim that without it they will not survive. With this aggressive demand for care, the question of biological aliveness almost falls away. We love what we nurture; if a Tamagotchi makes you love it, and you feel it loves you in return, it is alive enough to be a creature. It is alive enough to share a bit of your life. Children approach sociable machines in a spirit similar to the way they approach sociable pets or people—with the hope of befriending them. Meeting a person (or a pet) is not about meeting his or her biochemistry; becoming acquainted with a sociable machine is not about deciphering its programming. While in an earlier day, children might have asked, “What is a Tamagotchi?” they now ask, “What does a Tamagotchi want?”

When a digital “creature” asks children for nurturing or teaching, it seems alive enough to care for, just as caring for it makes it seem more alive. Neil, seven, says that his Tamagotchi is “like a baby. You can’t just change the baby’s diaper. You have to, like, rub cream on the baby. That is how the baby knows you love it.” His eight-year-old sister adds, “I hate it when my Tamagotchi has the poop all around. I am like its mother. That is my job. I don’t like it really, but it gets sick if you just leave it messy.” Three nine-year-olds consider their Tamagotchis. One is excited that his pet requires him to build a castle as its home. “I can do it. I don’t want him to get cold and sick and to die.” Another looks forward to her digital pet’s demands: “I like it when it says, ‘I’m hungry’ or ‘Play with me.’” The third boils down her relationship to a “deceased” Tamagotchi to its most essential elements: “She was loved; she loved back.”20

Where is digital fancy bred? Most of all, in the demand for care. Nurturance is the “killer app.” In the presence of a needy Tamagotchi, children become responsible parents: demands translate into care and care into the feeling of caring. Parents are enlisted to watch over Tamagotchis during school hours. In the late 1990s, an army of compliant mothers cleaned, fed, and amused their children’s Tamagotchis; the beeping of digital pets became a familiar background noise during business meetings.

This parental involvement is imperative because a Tamagotchi is always on. Mechanical objects are supposed to turn off. Children understand that bodies need to be always on, that they become “off” when people or animals die. So, the inability to turn off a Tamagotchi becomes evidence of its life. Seven-year-old Catherine explains, “When a body is ‘off,’ it is dead.” Some Tamagotchis can be asked to “sleep,” but nine-year-old Parvati makes it clear that asking her Tamagotchi to sleep is not the same as hitting the pause button in a game. Life goes on: “When they sleep, it is not that they are turned off. They can still get sick and unhappy, even while they are sleeping. They could have a nightmare.”

In the late 1970s, computers, objects on the boundary between animate and inanimate, began to lead children to gleeful experiments in which they crashed machines as they talked about “killing” them. And then, there would be elaborate rituals of resuscitation as children talked about bringing machines back to life. After these dramatic rebirths, the machines were, in the eyes of children, what they had been before. Twenty years later, when Tamagotchis die and are reset for a new life, children do not feel that they come back as they were before. Children looked forward to the rebirth of the computers they had crashed; children today dread the demise and rebirth of Tamagotchis. These provoke genuine remorse because, as one nine-year-old puts it, “It didn’t have to happen. I could have taken better care.”21

UNFORGETTABLE

I took care of my first Tamagotchi at the same time that my seven-year-old daughter was nurturing her own. Since I sometimes took a shift attending to her Tamagotchi, I could compare their respective behaviors, and I convinced myself that mine had idiosyncrasies that made it different from hers. My Tamagotchi liked to eat at particular intervals. I thought it prospered best with only small doses of amusement. I worked hard at keeping it happy. I did not anticipate how bad I would feel when it died. I immediately hit the reset button. Somewhat to my surprise, I had no desire to take care of the new infant Tamagotchi that appeared on my screen.

Many children are not so eager to hit reset. They don’t like having a new creature in the same egg where their virtual pet has died. For them, the death of a virtual pet is not so unlike the death of what they call a “regular pet.” Eight-year-olds talk about what happens when you hit a Tamagotchi’s reset button. For one, “It comes back, but it doesn’t come back as exactly your same Tamagotchi.... You haven’t had the same experiences with it. It has a different personality.” For another, “It’s cheating. Your Tamagotchi is really dead. Your one is really dead. They say you get it back, but it’s not the same one. It hasn’t had the same things happen to it. It’s like they give you a new one. It doesn’t remember the life it had.” For another, “When my Tamagotchi dies, I don’t want to play with the new one who can pop up. It makes me remember the real one [the first one]. I like to get another [a new egg].... If you made it die, you should start fresh.” Parents try to convince their children to hit reset. Their arguments are logical: the Tamagotchi is not “used up”; a reset Tamagotchi means one less visit to the toy store. Children are unmoved.

Sally, eight, has had three Tamagotchis. Each died and was “buried” with ceremony in her top dresser drawer. Three times Sally has refused to hit the reset button and convinced her mother to buy replacements. Sally sets the scene: “My mom says mine still works, but I tell her that a Tamagotchi is cheap, and she won’t have to buy me anything else, so she gets one for me. I am not going to start up my old one. It died. It needs its rest.”

In Sally’s “It died. It needs its rest,” we see the expansiveness of the robotic moment. Things that never could go together—a program and pity for a weary body—now do go together. The reset button produces objects that are between categories: a creature that seems new but is not really new, a stand-in for something now gone. The new creature, a kind of imposter, is a classic case of Sigmund Freud’s uncanny—it’s familiar, yet somehow not.22 The uncanny is always compelling. Children ask, “What does it mean for a virtual creature to die?” Yet, while earlier generations debated questions about a computer’s life in philosophical terms, when faced with Tamagotchis, children quickly move on to day-to-day practicalities. They temper philosophy with tearful experience. They know that Tamagotchis are alive enough to mourn.

Freud teaches us that the experience of loss is part of how we build a self.23 Metaphorically, at least, mourning keeps a lost person present. Child culture is rich in narratives that take young people through the steps of this fitful process. So, in Peter Pan, Wendy loses Peter in order to move past adolescence and become a grown woman, able to love and parent. But Peter remains present in her playful and tolerant way of mothering. Louisa May Alcott’s Jo loses her gentle sister Beth. In mourning Beth, Jo develops as a serious writer and finds a new capacity to love. More recently, the young wizard Harry Potter loses his mentor Dumbledore, whose continuing presence within Harry enables him to find his identity and achieve his life’s purpose. With the Tamagotchi, we see the beginning of mourning for artificial life. It is not mourned as one would mourn a doll. The Tamagotchi has crossed a threshold. Children breathe life into their dolls. With the Tamagotchi, we are in a realm of objects that children see as having their own agendas, needs, and desires. Children mourn the life the Tamagotchi has led.

A child’s mourning for a Tamagotchi is not always a solitary matter. When a Tamagotchi dies, it can be buried in an online Tamagotchi graveyard. The tombstones are intricate. On them, children try to capture what made each Tamagotchi special.24 A Tamagotchi named Saturn lived to twelve “Tamagotchi years.” Its owner writes a poem in its memory: “My baby died in his sleep. I will forever weep. Then his batteries went dead. Now he lives in my head.” Another child mourns Pumpkin, dead at sixteen: “Pumpkin, Everyone said you were fat, so I made you lose weight. From losing weight you died. Sorry.” Children take responsibility for virtual deaths.25 These online places of mourning do more than give children a way to express their feelings. They sanction the idea that it is appropriate to mourn the digital—indeed, that there is something “there” to mourn.