Alone Together: Why We Expect More from Technology and Less from Each Other - Sherry Turkle (2011)

CONCLUSION

Necessary conversations

During my earliest days at MIT, I met the idea (at that time altogether novel to me) that part of my job would be to think of ways to keep computers busy. In the fall of 1978, Michael Dertouzos, director of the Laboratory for Computer Science, held a two-day retreat at MIT’s Endicott House on the future of personal computers, at the time widely called “home computers.” It was clear that “everyday people,” as Dertouzos put it, would soon be able to have their own computers. The first of these—the first that could be bought and didn’t have to be built—were just coming on the market. But what could people do with them? There was technological potential, but it needed to be put to work. Some of the most brilliant computer scientists in the world—such pioneers of information processing and artificial intelligence as Robert Fano, J. C. R. Licklider, Marvin Minsky, and Seymour Papert—were asked to brainstorm on the question. My notes from this meeting show suggestions on tax preparation and teaching children to program. No one thought that anyone except academics would really want to write on computers. Several people suggested a calendar; others thought that was a dumb idea. There would be games.

Now we know that once computers connected us to each other, once we became tethered to the network, we really didn’t need to keep computers busy. They keep us busy. It is as though we have become their killer app. As a friend of mine put it in a moment of pique, “We don’t do our e-mail; our e-mail does us.” We talk about “spending” hours on e-mail, but we, too, are being spent. Niels Bohr suggests that the opposite of a “deep truth” is a truth no less profound.1 As we contemplate online life, it helps to keep this in mind.

Online, we easily find “company” but are exhausted by the pressures of performance. We enjoy continual connection but rarely have each other’s full attention. We can have instant audiences but flatten out what we say to each other in new reductive genres of abbreviation. We like it that the Web “knows” us, but this is only possible because we compromise our privacy, leaving electronic bread crumbs that can be easily exploited, both politically and commercially. We have many new encounters but may come to experience them as tentative, to be put “on hold” if better ones come along. Indeed, new encounters need not be better to get our attention. We are wired to respond positively to their simply being new. We can work from home, but our work bleeds into our private lives until we can barely discern the boundaries between them. We like being able to reach each other almost instantaneously but have to hide our phones to force ourselves to take a quiet moment.

Overwhelmed by the pace that technology makes possible, we think about how new, more efficient technologies might help dig us out. But new devices encourage ever-greater volume and velocity. In this escalation of demands, one of the things that comes to feel safe is using technology to connect to people at a distance, or more precisely, to a lot of people from a distance. But even a lot of people from a distance can turn out to be not enough people at all. We brag about how many we have “friended” on Facebook, yet Americans say they have fewer friends than before.2 When asked in whom they can confide and to whom they turn in an emergency, more and more say that their only resource is their family.

The ties we form through the Internet are not, in the end, the ties that bind. But they are the ties that preoccupy. We text each other at family dinners, while we jog, while we drive, as we push our children on swings in the park. We don’t want to intrude on each other, so instead we constantly intrude on each other, but not in “real time.” When we misplace our mobile devices, we become anxious—it feels all but impossible, really, to be without them. We have heard teenagers insist that even when their cell phones are not on their person, they can feel them vibrate. “I know when I’m being called,” says a sixteen-year-old. “I just do.” Sentiments of dependency echo across generations. “I never am without my cell phone,” says a fifty-two-year-old father. “It is my protection.”

In the evening, when sensibilities such as these come together, they are likely to form what have been called “postfamilial families.”3 Their members are alone together, each in their own rooms, each on a networked computer or mobile device. We go online because we are busy but end up spending more time with technology and less with each other. We defend connectivity as a way to be close, even as we effectively hide from each other. At the limit, we will settle for the inanimate, if that’s what it takes.

Bohr’s dictum is equally true in the area of sociable robotics, where things are no less tangled. Roboticists insist that robotic emotions are made up of the same ultimate particles as human ones (because mind is ultimately made of matter), but it is also true that robots’ claims to emotion derive from programs designed to get an emotional rise out of us.4

Roboticists present, as though it were a first principle, the idea that as our population ages, we simply won’t have enough people to take care of our human needs, and so, as a companion, a sociable robot is “better than nothing.” But what are our first principles? We know that we warm to machines when they seem to show interest in us, when their affordances speak to our vulnerabilities. But we don’t have to say yes to everything that speaks to us in this way. Even if, as adults, we are intrigued by the idea that a sociable robot will distract our aging parents, our children ask, “Don’t we have people for these jobs?” We should attend to their hesitations. Sorting all this out will not be easy. But we are at a crossroads—at a time and place to initiate new conversations.

As I was working on this book, I discussed its themes with a former colleague, Richard, who has been left severely disabled by an automobile accident. He is now confined to a wheelchair in his home and needs nearly full-time nursing care. Richard is interested in robots being developed to provide practical help and companionship to people in his situation, but his reaction to the idea is complex. He begins by saying, “Show me a person in my shoes who is looking for a robot, and I’ll show you someone who is looking for a person and can’t find one,” but then he makes the best possible case for robotic helpers when he turns the conversation to human cruelty. “Some of the aides and nurses at the rehab center hurt you because they are unskilled, and some hurt you because they mean to. I had both. One of them, she pulled me by the hair. One dragged me by my tubes. A robot would never do that,” he says. And then he adds, “But you know, in the end, that person who dragged me by my tubes had a story. I could find out about it. She had a story.”

For Richard, being with a person, even an unpleasant, sadistic person, makes him feel that he is still alive. It signifies that his way of being in the world has a certain dignity, even if his activities are radically curtailed. For him, dignity requires a feeling of authenticity, a sense of being connected to the human narrative. It helps sustain him. Although he would not want his life endangered, he prefers the sadist to the robot.

Richard’s story is a cautionary tale for those who would speak in too-simple terms of purely technical benchmarks for human and machine interactions. We animate robotic creatures by projecting meaning onto them and are thus tempted to speak of their emotions and even their “authenticity.” We can do this if we focus on the feelings that robots evoke in us. But too often the unasked question is, What does the robot feel? We know what the robot cannot feel: it cannot feel human empathy or the flow of human connection. Indeed, the robot can feel nothing at all. Do we care? Or does the performance of feeling now suffice? Why would we want to be in conversation with machines that cannot understand or care for us? The question was first raised for me by the ELIZA computer program.5 What made ELIZA a valued interlocutor? What matters were so private that they could only be discussed with a machine?

Over years and with some reluctance, I came to understand that ELIZA’s popularity revealed more than people’s willingness to talk to machines; it revealed their reluctance to talk to other people.6 The idea of an attentive machine provides the fantasy that we may escape from each other. When we say we look forward to computer judges, counselors, teachers, and pastors, we comment on our disappointments with people who have not cared or who have treated us with bias or even abuse. These disappointments begin to make a machine’s performance of caring seem like caring enough. We are willing to put aside a program’s lack of understanding and, indeed, to work to make it seem to understand more than it does—all to create the fantasy that there is an alternative to people. This is the deeper “ELIZA effect.” Trust in ELIZA does not speak to what we think ELIZA will understand but to our lack of trust in the people who might understand.
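It helps to recall how little machinery lay beneath those conversations. ELIZA worked by keyword matching and pronoun reflection; nothing in the program modeled meaning. What follows is a toy sketch in that style (the handful of rules is my own invention for illustration, not Weizenbaum’s original DOCTOR script), but it shows how thin the “understanding” was that users worked so hard to shore up.

```python
import re

# A toy ELIZA-style responder. These few rules are illustrative only;
# Weizenbaum's original DOCTOR script was larger, but the principle is
# the same: keyword patterns and pronoun swaps, with no model of meaning.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Please go on."),  # fallback when no keyword matches
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones, word by word.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement: str) -> str:
    text = statement.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*(reflect(group) for group in match.groups()))
    return "Please go on."

print(respond("I feel nobody listens to me."))  # Why do you feel nobody listens to you?
```

The program never knows what a feeling is; it mirrors our words back to us. That we confide in such a mirror says everything about us and nothing about the machine.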

Kevin Kelly asks, “What does technology want?” and insists that, whatever it is, technology is going to get it. Accepting his premise, what if one of the things technology wants is to exploit our disappointments and emotional vulnerabilities? When this is what technology wants, it wants to be a symptom.

SYMPTOMS AND DREAMS

Wary of each other, we find that the idea of a robot companion brings a sense of control, of welcome substitution. We allow ourselves to be comforted by unrequited love, for there is no robot that can ever love us back. That same wariness marks our networked lives. There, too, we are vulnerable to a desire to control our connections, to titrate our level of availability. Things progress quickly. A lawyer says sensibly, “I can’t make it to a client meeting; I’ll send notes by e-mail instead.” Five steps later, colleagues who work on the same corridor no longer want to see or even telephone each other and explain that “texts are more efficient” or “I’ll post something on Facebook.”

As we live the flowering of connectivity culture, we dream of sociable robots.7 Lonely despite our connections, we send ourselves a technological Valentine. If online life is harsh and judgmental, the robot will always be on our side. The idea of a robot companion serves as both symptom and dream. Like all psychological symptoms, it obscures a problem by “solving” it without addressing it. The robot will provide companionship and mask our fears of too-risky intimacies. As dream, robots reveal our wish for relationships we can control.

A symptom carries knowledge that a person fears would be too much to bear. To do its job, a symptom disguises this knowledge so it doesn’t have to be faced day to day.8 So, it is “easier” to feel constantly hungry than to acknowledge that your mother did not nurture you. It is “easier” to be enraged by a long supermarket line than to deal with the feeling that your spouse is not giving you the attention you crave. When technology is a symptom, it disconnects us from our real struggles.

In treatment, symptoms disappear because they become irrelevant. Patients become more interested in looking at what symptoms hide—the ordinary thoughts and experiences of which they are the strangulated expression. So when we look at technology as symptom and dream, we shift our attention away from technology and onto ourselves. As Henry David Thoreau might ask, “Where do we live, and what do we live for?” Kelly writes of technophilia as our natural state: we love our objects and follow where they lead.9 I would reframe his insight: we love our objects, but enchantment comes with a price.

The psychoanalytic tradition teaches that all creativity has a cost, a caution that applies to psychoanalysis itself.10 For psychoanalyst Robert Caper, “The transgression in the analytic enterprise is not that we try to make things better; the transgression is that we don’t allow ourselves to see its costs and limitations.”11 To make his point Caper revisits the story of Oedipus. As his story is traditionally understood, Oedipus is punished for seeking knowledge—in particular, the knowledge of his parentage. Caper suggests he is punished for something else: his refusal to recognize the limitations of knowledge. A parallel with technology is clear: we transgress not because we try to build the new but because we don’t allow ourselves to consider what it disrupts or diminishes. We are not in trouble because of invention but because we think it will solve everything.

A successful analysis disturbs the field in the interest of long-term gain; it learns to repair along the way.12 One moves forward in a chastened, self-reflective spirit. Acknowledging limits, stopping to make the corrections, doubling back—these are at the heart of the ethic of psychoanalysis. A similar approach to technology frees us from unbending narratives of technological optimism or despair. Consider how it would modulate Kelly’s argument about technophilia. Kelly refers to Henry Adams, who in 1900 had a moment of rapture when he first set eyes on forty-foot dynamos. Adams saw them as “symbols of infinity, objects that projected a moral force, much as the early Christians felt the cross.”13 Kelly believes that Adams’s desire to be at one with the dynamo foreshadows how Kelly now feels about the Web. As we have seen, Kelly wants to merge with the Web, to find its “lovely surrender.” Kelly continues,

I find myself indebted to the net for its provisions. It is a steadfast benefactor, always there. I caress it with my fidgety fingers; it yields up my desires, like a lover.... I want to remain submerged in its bottomless abundance. To stay. To be wrapped in its dreamy embrace. Surrendering to the web is like going on aboriginal walkabout. The comforting illogic of dreams reigns. In dreamtime you jump from one page, one thought, to another.... The net’s daydreams have touched my own, and stirred my heart. If you can honestly love a cat, which can’t give you directions to a stranger’s house, why can’t you love the web?14

Kelly has a view of connectivity as something that may assuage our deepest fears—of loneliness, loss, and death. This is the rapture. But connectivity also disrupts our attachments to things that have always sustained us—for example, the value we put on face-to-face human connection. Psychoanalysis, with its emphasis on the comedy and tragedy in the arc of human life, can help keep us focused on the specificity of human conversation. Kelly is enthralled by the Web’s promise of limitless knowledge, its “bottomless abundance.” But the Oedipal story reminds us that rapture is costly; it usually means you are overlooking consequences.

Oedipus is also a story about the difference between getting what you want and getting what you think you want. Technology gives us more and more of what we think we want. These days, looking at sociable robots and digitized friends, one might assume that what we want is to be always in touch and never alone, no matter who or what we are in touch with. One might assume that what we want is a preponderance of weak ties, the informal networks that underpin online acquaintanceship. But if we pay attention to the real consequences of what we think we want, we may discover what we really want. We may want some stillness and solitude. As Thoreau put it, we may want to live less “thickly” and wait for more infrequent but meaningful face-to-face encounters. As we put in our many hours of typing—with all fingers or just thumbs—we may discover that we miss the human voice. We may decide that it is fine to play chess with a robot, but that robots are unfit for any conversation about family or friends. A robot might have needs, but to understand desire, one needs language and flesh. We may decide that for these conversations, we must have a person who knows, firsthand, what it means to be born, to have parents and a family, to wish for adult love and perhaps children, and to anticipate death. And, of course, no matter how much “wilderness” Kelly finds on the Web, we are not in a position to let the virtual take us away from our stewardship of nature, the nature that doesn’t go away with a power outage.

We let things get away from us. Even now, we are emotionally dependent on online friends and intrigued by robots that, their designers claim, are almost ready to love us.15 And brave Kevin Kelly says what others are too timid to admit: he is in love with the Web itself. It has become something both erotic and idealized. What are we missing in our lives together that leads us to prefer lives alone together? As I have said, every new technology challenges us, generation after generation, to ask whether it serves our human purposes, and so causes us to reconsider what those purposes are.

In a design seminar, master architect Louis Kahn once asked, “What does a brick want?”16 In that spirit, if we ask, “What does simulation want?” the answer is clear. It wants—it demands—immersion. But once immersed in simulation, we can find it hard to remember all that lies beyond it, or even to acknowledge that everything is not captured by it. For simulation not only demands immersion but creates a self that prefers simulation. Simulation offers relationships simpler than real life can provide. We become accustomed to the reductions and betrayals that prepare us for life with the robotic.

But being prepared does not mean that we need to take the next step. Sociable robotics puts science into the game of intimacy and the most sensitive moments of children’s development. There is no one to tell science what it cannot do, but here one wishes for a referee. Things start innocently: neuroscientists want to study attachment. But things end reductively, with claims that a robot “knows” how to form attachments because it has the algorithms. The dream of today’s roboticists is no less than to reverse engineer love. Are we indifferent to whether we are loved by robots or by our own kind?

In Philip K. Dick’s classic science fiction novel Do Androids Dream of Electric Sheep? (which most people know through its film adaptation, Blade Runner), loving and being loved by a robot seems a good thing. The film’s hero, Deckard, is a professional robot hunter in a world where humans and robots look and sound alike. He falls in love with Rachael, an android programmed with human memories and the knowledge that she will “die.” I have argued that knowledge of mortality and an experience of the life cycle are what make us uniquely human. This brilliant story asks whether the simulation of these things will suffice.

By the end of the film, we are left to wonder whether Deckard himself may be an android, unaware of his identity. Unable to resolve this question, we cheer for Deckard and Rachael as they escape to whatever time they have remaining—in other words, to the human condition. Decades after the film’s release, we are still nowhere near developing its androids. But to me, the message of Blade Runner speaks to our current circumstance: long before we have devices that can pass any version of the Turing test, the test will seem beside the point. We will care not whether our machines are clever but whether they love us.

Indeed, roboticists want us to know that the point of affective machines is that they will take care of us. This narrative—that we are on our way to being tended by “caring” machines—is now cited as conventional wisdom. We have entered a realm in which conventional wisdom, always inadequate, is dangerously inadequate. That it has become so commonplace reveals our willingness to take the performance of emotion as emotion enough.

EMOTION ENOUGH

When roboticists argue that robots can develop emotions, they begin by asserting the material basis of all thought and take things from there. For example, Rodney Brooks says that a robot could be given a feeling like “sadness” by setting “a number in its computer code.” This sadness, for Brooks, would be akin to that felt by humans, for “isn’t humans’ level of sadness basically a number, too, just a number of the amounts of various neurochemicals circulating in the brain? Why should a robot’s numbers be any less authentic than a human’s?”17
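It is worth pausing on what “setting a number” amounts to in practice. Here is a minimal, hypothetical sketch of such a design (the class, the attribute, and the thresholds are invented for illustration and describe no actual robot): the “sadness” is a stored value, and the “emotion” an observer sees is behavior keyed to that value.

```python
# A hypothetical sketch of "emotion as a number," in the spirit of
# Brooks's remark. Nothing here comes from a real robot's code; names
# and thresholds are invented for illustration.

class SociableRobot:
    def __init__(self) -> None:
        self.sadness = 0.0  # the "emotion": a scalar from 0.0 to 1.0

    def on_neglect(self, hours_ignored: float) -> None:
        # Neglect raises the number. Nothing is felt; a value is updated.
        self.sadness = min(1.0, self.sadness + 0.1 * hours_ignored)

    def express(self) -> str:
        # The performance of emotion: behavior keyed to the number.
        if self.sadness > 0.7:
            return "head droops; speech slows"
        if self.sadness > 0.3:
            return "sighs; seeks eye contact"
        return "neutral idle behavior"

robot = SociableRobot()
robot.on_neglect(hours_ignored=8)
print(robot.sadness, robot.express())  # 0.8 head droops; speech slows
```

On Brooks’s account, the gap between this scalar and human sadness is one of degree and complexity. The question that interests me is different: what happens to us when we accept the behavior keyed to the number as feeling.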

Given my training as a clinician, I tend to object to the relevance of a robot’s “numbers” for thinking about emotion because of something humans have that robots don’t: a human body and a human life. Living in our bodies sets our human “numbers.” Our emotions are tied to a developmental path—from childhood dependence to greater independence—and we experience the traces of our earlier dependencies in later fantasies, wishes, and fears. Brooks speaks of giving the robot the emotion of “sadness.” In a few months, I will send my daughter off to college. I’m both sad and thrilled. How would a robot “feel” such things? Why would its “numbers” even “want” to?

Cynthia Breazeal, one of Brooks’s former students, takes another tack, arguing that robotic emotions are valid if you take care to consider them as a new category. Cats have cat emotions, and dogs have dog emotions. These differ from each other and from human emotions. We have no problem, says Breazeal, seeing all of these as “genuine” and “authentic.” And now, robots will have robot emotions, also in their own category and likewise “genuine” and “authentic.” For Breazeal, once you give robotic emotions their own category, there is no need to compare. We should respect emotional robots as “different,” just as we respect all diversity.18 But this argument confuses the authentic with the sui generis. That the robotic performance of emotion might exist in its own category implies nothing about the authenticity of the emotions being performed. And robots do not “have” emotions that we must respect. We build robots to do things that make us feel as though they have emotions. Our responses are their design template.

Whether we debate the question of robotic emotions in terms of materialism or category, we end up in a quandary. Instead of asking whether a robot has emotions, which in the end boils down to how different constituencies define emotion, we should be asking what kind of relationships we want to have with machines. Why do we want robots to perform emotion? I began my career at MIT arguing with Joseph Weizenbaum about whether a computer program might be a valuable dialogue partner. Thirty years later, I find myself debating those who argue, with David Levy, that my daughter might want to marry one.19

Simulation is often justified as practice for real-life skills—to become a better pilot, sailor, or race-car driver. But when it comes to human relations, simulation gets us into trouble. Online, in virtual places, simulation turns us into its creatures. But when we step out of our online lives, we may feel suddenly as though in too-bright light. Hank, a law professor in his late thirties, is on the Net for at least twelve hours a day. Stepping out of a computer game is disorienting, but so is stepping out of his e-mail. Leaving the bubble, Hank says, “makes the flat time with my family harder. Like it’s taking place in slow motion. I’m short with them.” After dinner with his family, Hank is grateful to return to the cool shade of his online life.

Nothing in real life with real people even vaguely resembles the environment (controlled yet with always-something-new connections) that Hank finds on the Net. Think of what is implied by his phrase “flat time.” Real people have consistency, so if things are going well in our relationships, change is gradual, worked through slowly. In online life, the pace of relationships speeds up. One quickly moves from infatuation to disillusionment and back. And the moment one grows even slightly bored, there is easy access to someone new. One races through e-mail and learns to attend to the “highlights.” Subject lines are exaggerated to get attention. In online games, the action often reduces to a pattern of moving from scary to safe and back again. A frightening encounter presents itself. It is dealt with. You regroup, and then there is another. The adrenaline rush is continual; there is no “flat time.”

Sometimes people try to make life with others resemble simulation. They try to heighten real-life drama or control those around them. It would be fair to say that such efforts do not often end well. Then, in failure, many are tempted to return to what they do well: living their lives on the screen. If there is an addiction here, it is not to a technology. It is to the habits of mind that technology allows us to practice.

Online, we can lose confidence that we are communicating or cared for. Confused, we may seek solace in even more connection. We may become intolerant of our own company: “I never travel without my BlackBerry,” says a fifty-year-old management consultant. She cannot quiet her mind without having things on her mind.

My own study of the networked life has left me thinking about intimacy—about being with people in person, hearing their voices and seeing their faces, trying to know their hearts. And it has left me thinking about solitude—the kind that refreshes and restores. Loneliness is failed solitude.20 To experience solitude you must be able to summon yourself by yourself; otherwise, you will only know how to be lonely. In raising a daughter in the digital age, I have thought of this very often.

In his history of solitude, Anthony Storr writes about the importance of being able to feel at peace in one’s own company.21 But many find that, trained by the Net, they cannot find solitude even at a lake or beach or on a hike. Stillness makes them anxious. I see the beginnings of a backlash as some young people become disillusioned with social media. There is, too, the renewed interest in yoga, Eastern religions, meditating, and “slowness.”

These new practices bear a family resemblance to what I have described as the romantic reaction of the 1980s. Then, people declared that something about their human nature made them unlike any machine (“simulated thinking may be thinking, but simulated feeling is never feeling; simulated love is never love”). These days, under the tutelage of imaging technology and neurochemistry, people seem willing to grant their own machine natures. What they rebel against is how we have responded to the affordances of the networked life. Offered continual connectivity, we have said yes. Offered an opportunity to abandon our privacy, so far we have not resisted. And now comes the challenge of a new “species”—sociable robots—whose “emotions” are designed to make us comfortable with them. What are we going to say?

The romantic reaction of the 1980s made a statement about computation as a model of mind; today we struggle with who we have become in the presence of computers. In the 1980s, it was enough to change the way you saw yourself. These days, it is a question of how you live your life. The first manifestations of today’s “push back” are tentative experiments to do without the Net. But the Net has become intrinsic to getting an education, getting the news, and getting a job. So, today’s second thoughts will require that we actively reshape our lives on the screen. Finding a new balance will be more than a matter of “slowing down.” How can we make room for reflection?

QUANDARIES

In arguing for “caring machines,” roboticists often make their case by putting things in terms of quandaries. So, they ask, “Do you want your parents and grandparents cared for by robots, or would you rather they not be cared for at all?” And alternatively, “Do you want seniors lonely and bored, or do you want them engaged with a robotic companion?”22 The forced choice of a quandary, posed over time, threatens to become no quandary at all because we come to accept its framing—in this case, the idea that there is only one choice, between robotic caregivers and loneliness. The widespread use of this particular quandary makes those uncomfortable with robotic companions out to be people who would consign an elderly population to boredom, isolation, and neglect.

There is a rich literature on how to break out of quandary thinking. It suggests that it sometimes helps to turn from the abstract to the concrete.23 This is what the children in Miss Grant’s fifth-grade class did. Caught up in a “for or against” discussion about robot caregivers, they turned away from the dilemma to ask a question (“Don’t we have people for these jobs?”) that could open up a different conversation. While the children only began that conversation, we, as adults, know where it might go. What about bringing in some new people? What must be done to get them where they are needed? How can we revisit social priorities so that funds are made available? We have the unemployed, the retired, and those returning from war—some of these might be available if there were money to pay them. One place to start would be to elevate elder care above the minimum-wage job it usually is, one often without benefits. The “robots-or-no-one” quandary takes social and political choice out of the picture when it belongs at the center of the picture.

I experienced a moment of reframing during a seminar at MIT that took the role of robots in medicine as its focus. My class considered a robot that could help turn weak or paralyzed patients in their beds for bathing. A robot now on the market is designed as a kind of double spatula: one plate slides under the patient; another is placed on top. The head is supported, and the patient is flipped. The class responded to this technology as though it suggested a dilemma: machines for the elderly or not. So some students insisted that robots will inevitably take over nursing roles (they cited cost, efficiency, and the insufficient numbers of people who want to take the job). Others countered that the elderly deserve the human touch and that anything else is demeaning. The conversation was argued in absolutes: the inevitable versus the unsupportable.

Into this stalled debate came the voice of a woman in her late twenties whose mother had recently died. She did not buy into the terms of the discussion. Why limit our conversation to no robot or a robotic flipper? Why not imagine a machine that is an extension of the body of one human trying to care lovingly for another? Why not build robotic arms, supported by hydraulic power, into which people could slip their own arms, enhancing their strength? The problem as offered presented her with two unacceptable images: an autonomous machine or a neglected patient. She wanted to have a conversation about how she might have used technology as prosthesis. Had her arms been made stronger, she might have been able to lift her mother when she was ill. She would have welcomed such help. It might have made it possible for her to keep her mother at home during her last weeks. A change of frame embraces technology even as it provides a mother with a daughter’s touch.

In the spirit of “break the frame and see something new,” philosopher Kwame Anthony Appiah challenges quandary thinking:

The options are given in the description of the situation. We can call this the package problem. In the real world, situations are not bundled together with options. In the real world, the act of framing—the act of describing a situation, and thus of determining that there’s a decision to be made—is itself a moral task. It’s often the moral task. Learning how to recognize what is and isn’t an option is part of our ethical development.... In life, the challenge is not so much to figure out how best to play the game; the challenge is to figure out what game you’re playing.24

For Appiah, moral reasoning is best accomplished not by responding to quandaries but by questioning how they are posed, continually reminding ourselves that we are the ones choosing how to frame things.

FORBIDDEN EXPERIMENTS

When the fifth graders considered robot companions for their grandparents and wondered, “Don’t we have people for these jobs?” they knew they were asking, “Isn’t ‘taking care’ our parents’ job?” And by extension, “Are there people to take care of us if we become ‘inconvenient’?” When we consider the robots in our futures, we think through our responsibilities to each other.

Why do we want robots to care for us? I understand the virtues of partnership with a robot in war, space, and medicine. I understand that robots are useful in dangerous working conditions. But why are we so keen on “caring”?25 To me, it seems transgressive, a “forbidden experiment.”26

Not everyone sees it this way. Some people consider the development of caring machines as simple common sense. Porter, sixty, recently lost his wife after a long illness. He thinks that if robotic helpers “had been able to do the grunt work, there might have been more time for human nurses to take care of the more personal and emotional things.” But often, relationships hinge on these investments of time. We know that the time we spend caring for children, doing the most basic things for them, lays down a crucial substrate.27 On this ground, children become confident that they are loved no matter what. And we who care for them become confirmed in our capacity to love and care. The ill and the elderly also deserve to be confirmed in this same sense of basic trust. As we provide it, we become more fully human.

The most common justification for the delegation of care to robots focuses on things being “equal” for the person receiving care. This argument is most often used by those who feel that robots are appropriate for people with dementia, who will not “know the difference” between a person and a robot. But we do not really know how impaired people receive the human voice, face, and touch. Providing substitutes for human care may not be “equal” in the least. And again, delegating what was once love’s labor changes the person who delegates. When we lose the “burden” of care, we begin to give up on our compact that human beings will care for other human beings. The daughter who wishes for hydraulic arms to lift her bedridden mother wants to keep her close. For the daughter, this last time of caring is among the most important she and her mother will share. If we divest ourselves of such things, we risk being coarsened, reduced. And once you have elder bots and nurse bots, why not nanny bots?

Why would we want a robot as a companion for a child? The relationship of a child to a sociable robot is, as I’ve said, very different from that of a child to a doll. Children do not try to model themselves on their dolls’ expressions. A child projects human expression onto a doll. But a robot babysitter, already envisaged, might seem close enough to human that a child might use it as a model. This raises grave questions. Human beings are capable of infinite combinations of vocal inflection and facial expression. It is from other people that we learn how to listen and bend to each other in conversation. Our eyes “light up” with interest and “darken” with passion or anxiety. We recognize, and are most comfortable with, other people who exhibit this fluidity. We recognize, and are less comfortable with, people—with autism or Asperger’s syndrome—who do not exhibit it. The developmental implications of children taking robots as models are unknown, potentially disastrous. Humans need to be surrounded by human touch, faces, and voices. Humans need to be brought up by humans.

Sometimes when I make this point, others counter that even so, robots might do the “simpler” jobs for children, such as feeding them and changing their diapers. But children fed their string beans by a robot will not associate food with human companionship, talk, and relaxation. Eating will become dissociated from emotional nurturance. Children whose diapers are changed by robots will not feel that their bodies are dear to other human beings. Why are we willing to consider such risks?28

Some would say that we have already completed a forbidden experiment, using ourselves as subjects with no controls, and the unhappy findings are in: we are connected as we’ve never been connected before, and we seem to have damaged ourselves in the process. A 2010 analysis of data from over fourteen thousand college students over the past thirty years shows that since the year 2000, young people have reported a dramatic decline in interest in other people. Today’s college students are, for example, far less likely to say that it is valuable to try to put oneself in the place of others or to try to understand their feelings.29 The authors of this study associate students’ lack of empathy with the availability of online games and social networking. An online connection can be deeply felt, but you only need to deal with the part of the person you see in your game world or social network. Young people don’t seem to feel they need to deal with more, and over time they lose the inclination. One might say that absorbed in those they have “friended,” children lose interest in friendship.

These findings confirm the impressions of those psychotherapists—psychiatrists, psychologists, and social workers—who talk to me about the increasing numbers of patients who present in the consulting room as detached from their bodies and seem close to unaware of the most basic courtesies. Purpose-driven, plugged into their media, these patients pay little attention to those around them. In others, they seek what is of use, an echo of that primitive world of “parts.” Their detachment is not aggressive. It is as though they just don’t see the point.30

EARLY DAYS

It is, of course, tempting to talk about all of this in terms of addiction. Adam, who started out playing computer games with people and ended up feeling compelled by a world of bots, certainly uses this language. The addiction metaphor fits a common experience: the more time spent online, the more one wants to spend time online. But however apt the metaphor, we can ill afford the luxury of using it. Talking about addiction subverts our best thinking because it suggests that if there are problems, there is only one solution. To combat addiction, you have to discard the addicting substance. But we are not going to “get rid” of the Internet. We will not go “cold turkey” or forbid cell phones to our children. We are not going to stop the music or go back to television as the family hearth.

I believe we will find new paths toward each other, but considering ourselves victims of a bad substance is not a good first step. The idea of addiction, with its one solution that we know we won’t take, makes us feel hopeless. We have to find a way to live with seductive technology and make it work to our purposes. This is hard and will take work. Simple love of technology is not going to help. Nor is a Luddite impulse.

What I call realtechnik suggests that we step back and reassess when we hear triumphalist or apocalyptic narratives about how to live with technology. Realtechnik is skeptical about linear progress. It encourages humility, a state of mind in which we are most open to facing problems and reconsidering decisions. It helps us acknowledge costs and recognize the things we hold inviolate. I have said that this way of envisaging our lives with technology is close to the ethic of psychoanalysis. Old-fashioned perhaps, but our times have brought us back to such homilies.

Because we grew up with the Net, we assume that the Net is grown-up. We tend to see it as a technology in its maturity. But in fact, we are in early days. There is time to make the corrections. It is, above all, the young who need to be convinced that when it comes to our networked life, we are still at the beginning of things. I am cautiously optimistic. We have seen young people try to reclaim personal privacy and each other’s attention. They crave things as simple as telephone calls made, as one eighteen-year-old puts it, “sitting down and giving each other full attention.” Today’s young people have a special vulnerability: although always connected, they feel deprived of attention. Some, as children, were pushed on swings while their parents spoke on cell phones.31 Now, these same parents do their e-mail at the dinner table. Some teenagers coolly compare a dedicated robot with a parent talking to them while doing e-mail, and parents do not always come out ahead. One seventeen-year-old boy says, “A robot would remember everything I said. It might not understand everything, but remembering is a first step. My father, talking to me while on his BlackBerry, he doesn’t know what I said, so it is not much use that if he did know, he might understand.”

The networked culture is very young. Attendants at its birth, we threw ourselves into its adventure. This is human. But these days, our problems with the Net are becoming too distracting to ignore. At the extreme, we are so enmeshed in our connections that we neglect each other. We don’t need to reject or disparage technology. We need to put it in its place. The generation that has grown up with the Net is in a good position to do this, but these young people need help. So as they begin to fight for their right to privacy, we must be their partners. We know how easily information can be politically abused; we have the perspective of history. We have, perhaps, not shared enough about that history with our children. And as we, ourselves enchanted, turned away from them to lose ourselves in our e-mail, we did not sufficiently teach the importance of empathy and attention to what is real.

The narrative of Alone Together describes an arc: we expect more from technology and less from each other. This puts us at the still center of a perfect storm. Overwhelmed, we have been drawn to connections that seem low risk and always at hand: Facebook friends, avatars, IRC chat partners. If convenience and control continue to be our priorities, we shall be tempted by sociable robots, where, like gamblers at their slot machines, we are promised excitement programmed in, just enough to keep us in the game. At the robotic moment, we have to be concerned that the simplification and reduction of relationship is no longer something we complain about. It may become what we expect, even desire.

In this book I have referred to our vulnerabilities rather than our needs. Needs imply that we must have something. The idea of being vulnerable leaves a lot of room for choice. There is always room to be less vulnerable, more evolved. We are not stuck. To move forward together—as generations together—we are called upon to embrace the complexity of our situation. We have invented inspiring and enhancing technologies, and yet we have allowed them to diminish us. The prospect of loving, or being loved by, a machine changes what love can be. We know that the young are tempted. They have been brought up to be. Those who have known lifetimes of love can surely offer them more.

When we are at our best, thinking about technology brings us back to questions about what really matters. When I recently traveled to a memorial service for a close friend, the program, on heavy cream-colored card stock, listed the afternoon’s speakers, told who would play what music, and displayed photographs of my friend as a young woman and in her prime. Several around me used the program’s stiff, protective wings to hide their cell phones as they sent text messages during the service. One of the texting mourners, a woman in her late sixties, came over to chat with me after the service. Matter-of-factly, she offered, “I couldn’t stand to sit that long without getting on my phone.” The point of the service was to take a moment. This woman had been schooled by a technology she’d had for less than a decade to find this close to impossible.32 Later, I discussed the texting with some close friends. Several shrugged. One said, “What are you going to do?”

A shrug is appropriate for a stalemate. That’s not where we are. It is too early to have reached such an impasse. Rather, I believe we have reached a point of inflection, where we can see the costs and start to take action. We will begin with very simple things. Some will seem like just reclaiming good manners. Talk to colleagues down the hall, no cell phones at dinner, on the playground, in the car, or in company. There will be more complicated things: to name only one, nascent efforts to reclaim privacy would be supported across the generations. And compassion is due to those of us—and there are many of us—who are so dependent on our devices that we cannot sit still for a funeral service or a lecture or a play. We now know that our brains are rewired every time we use a phone to search or surf or multitask.33 As we try to reclaim our concentration, we are literally at war with ourselves. Yet, no matter how difficult, it is time to look again toward the virtues of solitude, deliberateness, and living fully in the moment. We have agreed to an experiment in which we are the human subjects. Actually, we have agreed to a series of experiments: robots for children and the elderly, technologies that denigrate and deny privacy, seductive simulations that propose themselves as places to live.34

We deserve better. When we remind ourselves that it is we who decide how to keep technology busy, we shall have better.