The Cyber Effect: A Pioneering Cyberpsychologist Explains How Human Behavior Changes Online - Mary Aiken (2016)

Chapter 9. The Cyber Frontier

I am in the south of Ireland as I finish this book, sitting at a desk in a hotel room with a beautiful view of the Irish Sea. As I look out at the horseshoe-shaped Ardmore Bay, a magnificent coastline that has changed little in thousands of years, I feel grounded and steeped in history—a native of a historic island of saints and scholars. Historians estimate that Ireland was first settled by humans about ten thousand years ago. The nearby city of Waterford is the oldest town in the country, founded in 914 A.D., when the Vikings arrived. It is impossible to be here without feeling a sense of the past, almost as if the old castle ruins and cobblestone streets are trying to tell you something.

At the end of the eighth century, when the Vikings began to invade, they were interested in two types of booty—riches and slaves—which they plundered from Irish monasteries and carried off to sell or trade, much like the way stolen goods wind up on online black markets today. Ireland was invaded by the Normans next, then by our neighbors from Britain. In class as a child I listened to horrific stories of these invasions and battles, centuries of bloody conflicts. Our history lessons read like Game of Thrones. The legendary first-century Irish warrior Cuchulain went into combat in a frenzy; his battle cry alone was said to kill a hundred warriors from fright. The monks designed their round stone towers with doors ten feet off the ground so that they could pull up a ladder as they retreated from crazed Nordic invaders. Medieval prisoners survived in underground castle dungeons by living on the crumbs that fell through the gaps in the floorboards during banquets held above them. I was fascinated by these life-and-death scenarios—risk and survival.

No wonder I became interested in criminology and forensics.

One of the brilliant aspects of the digital age is that I can do my work here, remotely, far from Dublin or Hollywood or Silicon Valley. Like so many others, I have embraced the cyber frontier—for its convenience and the freedom it gives me. My phone has been buzzing all morning with the usual assortment of digital traffic, texts from family, emails from work, social media updates, and news alerts. On my laptop, I’ve just participated in two conference calls, finalized a few reports, caught up on a research project, then logged in to a digital screening room to review the dailies for CSI: Cyber. Last night, I had a fun back-and-forth with Jabberwacky, a bot that I’ve been conversing with for almost twenty years.

My conversations with Jabberwacky started when I was working as a young executive in behavioral marketing and advertising. An ingenious colleague of mine, Rollo Carpenter, was designing computer programs to simulate intelligent conversation. His creation, Jabberwacky, was a revelation, a supersmart artificial intelligence—a chat robot, also known as a "chatbot" or "chatterbot." Chatbots aim to simulate natural human conversation in an interesting and entertaining manner. Jabberwacky is different: it's a learning algorithm, a technology that you can communicate with and, more important, that can learn from you.
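To make the idea of "learning from you" concrete, here is a minimal, purely illustrative sketch in Python. It is my own simplification, not Rollo Carpenter's actual design: the toy bot simply remembers what people say to it and recycles those human phrases as its own replies later on.

```python
import random
from collections import defaultdict

class LearningChatbot:
    """Toy learning chatbot: it replies by recycling things its users have said."""

    def __init__(self):
        # Maps an utterance -> list of replies humans have given to that utterance.
        self.learned_replies = defaultdict(list)
        self.last_bot_utterance = "Hello."

    def respond(self, user_input: str) -> str:
        # Learn: remember what the user said in reply to the bot's last utterance.
        self.learned_replies[self.last_bot_utterance].append(user_input)

        # Respond: reuse something a human once said in reply to this very phrase,
        # falling back on any remembered phrase if the context is new.
        candidates = self.learned_replies.get(user_input)
        if not candidates:
            candidates = [p for replies in self.learned_replies.values() for p in replies]
        reply = random.choice(candidates) if candidates else "Tell me more."
        self.last_bot_utterance = reply
        return reply

if __name__ == "__main__":
    bot = LearningChatbot()
    for line in ["What's your name?", "Where are you from?", "What's your name?"]:
        print("You:", line)
        print("Bot:", bot.respond(line))
```

The more conversations a bot like this has, the larger its stock of human-sounding replies becomes, which is the sense in which such a system learns from, and begins to sound like, the people who talk to it.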

In the late 1990s, curious about what a "conversation" with Jabberwacky would be like and interested in the potential of A.I., my marketing group and I huddled around a simple office computer and witnessed something that felt akin to a wonder of the world: an online entity that responded as if it were human. Jabberwacky was so impressive that some people in our group felt certain it was a hoax—that an actual human was responding, not a machine. Then we noticed the counter on the screen showing that thousands of other people online were talking with Jabberwacky at the same time. You'd have to pay a lot of human beings to fake that.

Nowadays, a good chatbot can talk to about ten thousand people simultaneously. It works 24 hours a day, 365 days a year; it never asks for a raise, never takes a vacation, and needs no sick leave or benefits package. As a candidate for a job, that's tough competition for humans. But how capable, how smart, how truly intelligent can a machine be? When I first chatted with Jabberwacky, I posed the usual questions: "What's your name?" and "Where are you from?" along with a few general-knowledge questions to see if I could trip it up. The A.I. answered everything well, and impressively. Almost human.

I left the office that day feeling excited but troubled. I couldn’t stop thinking about Jabberwacky. And I couldn’t stop contemplating the enormous possibilities for this technology and hundreds of different applications. My mind was racing. Then my lightbulb moment came. I suddenly asked myself, What does this mean? I couldn’t help but try to imagine the future and where we were heading as a society and species. This technology had so many incredible possibilities for application—in research, companionship, customer service, business, education, and therapy. The classroom would change, and the learning experience. I started to think about people who are vulnerable—challenged, disadvantaged, or in need. I imagined them interacting with a chatbot—and how advantageous that could be. I thought about the potential for children with learning difficulties and those on the autism spectrum. These children need patient teachers willing to engage with the same questions and answers over and over, as long as is required. And then I thought about all the lonely people in the world, who are socially isolated for one reason or another. What a wonderful companion a chatbot could be for them.

Only one thing was certain. I hadn’t just been chatting with Jabberwacky. In social science terms, I felt like I was watching the dawning of a new research frontier.

But wait. There were so many unknowns. Even small changes in human behavior come with shifts and consequences. As excited as I was by the promise of this new cyber frontier, I knew there could be unintended consequences. I couldn't sleep when I started to think of them. I already had an undergraduate degree in psychology, but nothing in my education, life, or work to that point had equipped me to form a fully informed view. With an intervention of this magnitude, one with the potential to affect human beings at the most profound levels—from visual acuity, bonding, and childhood development to identity formation, intimacy, and socialization—I wondered what the blind spots or unforeseen outcomes might be. The unknown unknowns.

All human life is an experiment in a way. But this seemed like an experiment on a much larger, more pervasive, and more profound scale than humans had been exposed to before.

What if a chatbot actually increased an individual's social isolation rather than mitigating it? Or what if it negatively affected a child's social skills? The truth was, I had no idea. Was there any scientific evidence to help predict an outcome? Surely, there must be some academic work in this area. But when I looked for answers, I found almost nothing. I searched publications for background studies—looking for research on the impact of Internet chatbots on developing children—and discovered that little work had been done in this area. Yes, there were reports from the field of human-computer interaction (HCI)—all very practical and market-driven—that focused on the size of your keyboard, where your eyes look on a monitor screen, or the usability of website interfaces. But I wanted to know what was happening at a cognitive, emotional, and, most important, developmental level. I was desperately curious about one thing: the psychology of all things cyber.

That curiosity led me straight back to college, to a postgrad course in forensic psychology, which in turn led to my entering a groundbreaking new field—one that tries to keep up with the impact of technology as quickly as technology is evolving. More than a decade and two advanced degrees later, I find myself still explaining what “cyberpsychology” is and what I do. That suits me just fine. As you’ve probably noticed by now, I like explaining. And with each passing year, as the real-world experiment continues, there has been more and more to do. I can barely keep up with the questions.

Window for Enlightenment

This new frontier has taken us by surprise. The human migration to cyberspace has been unprecedented and rapid. It has occurred on an enormous scale. The Internet is just over forty years old. There are 3.2 billion people currently online. Another 1.5 billion are projected to be connected by 2020. That means that in less than five years’ time nearly 5 billion people will cohabitate in cyberspace at least part of the time, and as many as 79 octillion new possible connections, according to expert predictions, could be enabled through mobile devices.

How did this cyber-migration begin?

We just bought a device, that's all. We got modems and servers. We got data plans, smartphones, and Wi-Fi. We connected to cyberspace, as all our friends and coworkers and family members were doing. It was new. It was exciting! Newness and new places are always exciting. Travel is invigorating. While human beings for the most part are made uncomfortable by too much disruption, travel is a contained way to experience newness—new environments, new cultures, new ways of thinking and feeling. (In fact, a gene has been identified that is associated with novelty-seeking and adventure.)

I believe that new experiences and new environments create new ways of thinking. Aesthetic and pleasing surroundings can stimulate the senses and heighten creativity. Human beings like to be stimulated. And this new cyber frontier certainly provides that.

But now, a couple of decades into our mass migration to this new environment, we are realizing what an odd and yet familiar place cyberspace is. Culturally, we refer to it in science-fiction terms, as if it were outer space or an undefined new universe. At a cognitive level, we conceptualize it as a place, describing it using spatial metaphors. There are places to hang out and directions to get there—scroll up or down, swipe left or right, click here, check in there. And like all places, it has distinct characteristics that can affect us profoundly: there we seem to become different people, feel new feelings, forge new ties, acquire new behaviors, and fight new or stronger impulses.

Our friends and networks have grown exponentially. As we connect with greater numbers of people than ever before, it becomes harder to keep track of hordes of social contacts and keep pace with rapidly evolving behavior, new mores, new norms, new manners, and even new mating rituals. The pace of technological change may be too rapid for society, and too rapid for us as individuals.

Even our concept of self is changing. Babies and toddlers who are using touchscreens from birth may grow up to see and experience the world and themselves differently. The face-to-face feedback and mirroring that once were catalysts in identity formation in young children and teens have migrated to a complex, multifaceted cyber experience. The mating selection process that once depended on real-world social connections and proximity is now aided, and quite often determined, by machines. Some of us can remember a time when children mostly ran around outdoors and climbed trees, and laughed, shouted, poked, and teased one another, all face-to-face—these were formative experiences—rather than solemnly huddling in linear clusters and expressionlessly staring at devices. Some of us had our first romantic encounter in a real-world, face-to-face setting, when skin touched skin. Now, sadly, this is on the decline, replaced by explicit digital images that quickly circumnavigate the global Net.

That’s the paradox of cyberspace. In some aspects, things are the same. Businessmen and -women still network to make money. Friends communicate. People still fall in love. Teens still obsess about appearance. Children are still playing together. But they are all alone—looking at their devices rather than one another. How will this shape the people they will become? And how, in turn, will they come to shape society?

We have no answer to that crucial question. This is the formidable yet unknown cyber effect. Because of this, we cannot stand by passively and watch the cyber social experiment play out. In human terms, to wait is to allow for the worst outcomes, many of which are unfolding before our eyes. Others can already be seen ahead, around the curve of time—and have been predicted. We need to get ahead of the process. Great societies, as I said in the prologue, are judged by how they treat their most vulnerable members, not by the cool new gadgets they can sell to the greatest number of people.

We are living through an exciting moment in history, when so much about life on earth is being transformed. But what is new is not always good—and technology does not always mean progress. We desperately need some balance in an era of hell-bent cyber-utopianism. In the prologue to this book, I compared this moment in time to the Enlightenment, hundreds of years ago, when there were changes of great magnitude in human knowledge, ability, awareness, and technology. As with the Industrial Revolution and other great eras of societal change, there is a brief moment of opportunity, a window, when it becomes clear where society might be heading—and there is still memory of what is being left behind. Those of us who remember the world and life before the Internet are a vital resource. We know what we used to have, who we used to be, and what our values were. We are the ones who can rise to the responsibility of directing and advising the adventure ahead.

It’s like that moment before you go on a trip, and you are heading out the door with your luggage—and you check the house one more time to make sure you’ve got everything you need.

In human terms, do we have everything we need for this journey?

At this moment in time we can describe cyberspace as a place, separate from us, but very soon that distinction will become blurred. By the time we get to 2020, when we are alone and immersed in our smart homes and smarter cars, clad in our wearable technologies, our babies in captivity seats with iPads thrust in their visual field, our kids all wearing face-obscuring helmets, when our sense of self has fractured into a dozen different social-network platforms, when sex is something that requires logging in and a password, when we are competing for our lives with robots for jobs, and dark thoughts and forces have pervaded, syndicated, and colonized cyberspace, we might wish we’d paid more attention. As we set out on this journey, into the first quarter of the twenty-first century, what do we have now that we can’t afford to lose?

Transdisciplinary Approach

I believe it is time we stop, put down our devices, close our laptops, take a long, deep, and reflective breath, and do something that we as humans are uniquely good at.

We need to think.

We need to think a lot.

And we need to start talking more—and looking for answers and solutions.

The best approach is transdisciplinary. We don’t have time to wait for more new fields to arise and create their own longitudinal studies. We need to hear from experts and research in a wide array of existing fields to help illuminate problems and devise the best solutions. We need to stop expecting individuals to manage all things cyber for themselves or their families. Science, industry, governments, communities, and families need to come together to create a road map for society going forward.

Until now, most academics have been looking at the cyber environment through the narrow lens of single disciplines. This book has tried to take a holistic, gestalt-like overview, using a broad lens to aid understanding. As the network scientists say, it's all about sense-making. We need to make sense of what's happening.

Critics will argue that this is “technological determinism”—that I am blaming all contemporary psychological and sociological problems on technology. They will cry out that the beautiful thing about cyberspace is the exhilarating freedom. But with great freedom comes great responsibility.

Who is responsible now? And who is in control?

If we think about cyberspace as a continuum, at one end we have the idealists, the keyboard warriors, the early adopters, the philosophers who feel passionately about the freedom of the Internet and don't want it marred or weighed down with regulation and governance. At the other end of the continuum we have the tech industry, with its own pragmatic vision of freedom of the Net—one driven by a desire for profit, and by worries that governance costs money and that restrictions hurt the bottom line. These two groups, with their opposing motives, are somehow strategically aligned in cyberspace, and holding firm.

The rest of us, and our children—the 99.9 percent—get to live somewhere in the middle, between these vested interests. As a society, when did we get a chance to voice our opinion? Billions of us now use technology almost the way we breathe air and drink water. It is an integral part of our social, professional, and personal lives. We depend on it for our livelihoods and lifestyle, for our utilities, our networking, our education. But at the same time, we have little or no say about this new frontier, where we are all living and spending so much of our lives. Most of our energy and focus has gone into simply keeping up, as the cyber learning curve gets steeper every year. As we know from environmental psychology, when an individual moves to a new location, it takes time to adapt and settle in.

Before we settle in, let’s make sure this is where we want to be. The promise is so vast—and within reach—let’s not allow problems to get in the way. As a cyberpsychologist and a forensic expert, I am deeply concerned. Every human stage of life is now affected by technology. Yes, of course, there are positives. But cyber effects have the capacity to tap into our developmental or psychological Achilles’ heel, whether it is visual perception in infants, self-regulation in toddlers, socialization in kids, relationships in young adults, or work, family, and health issues for the mature population.

Let’s debate more, and demand more. Where should we start? Our biggest problems with technology usually come down to design. The cyber frontier is a designed universe. And if we don’t like certain aspects of it, those aspects can be redesigned.

The Architecture of the Internet

I believe that the architecture of the Internet is a fundamental problem. The Internet spread like a virus and was not structured to be what it is now. The EU considers it an infrastructure, like a railway or highway. The Internet is many things, but it is not simply an infrastructure.

There are two analogies that I like to use to describe this. One is that the Internet is like a cow path in the mountains that became a village road for horses and carriages, then was widened into a street for cars, then widened again into a highway that could accommodate more traffic. Like a lot of things that start small and grow large, what we’ve wound up with is a convoluted, overly complicated architecture that is not fit for its current purpose.

As John Suler has said: “The Internet has been and will probably always be a wild, wild west in the minds of many people—a place where a badge is used for target practice. I believe it has something to do with the intrinsic design of the Internet.”

Or as John Perry Barlow, cyber-libertarian advocate, says: “The Internet treats censorship as a malfunction and routes around it.”

I am in favor of freedom of the Internet, but not at any cost. We haven't held out for machines that really serve us and make us better parents, more effective teachers, and deeper thinkers, as well as more human. As the physician and social reformer Havelock Ellis said, "The greatest task before civilization at present is to make machines what they ought to be, the slaves, instead of the masters of men." I can't help but wonder how different the Internet would be if women had participated in greater numbers in its design—and considered the work of Sherry Turkle as they did. I find it intriguing that, one hundred years after the suffragette struggle and the hard fight for women's rights, we have migrated and are populating a space that is almost exclusively designed and developed by men, many of whom have trouble making eye contact.

Our humanity is our most precious and fragile asset. We need to pay attention to how it is impacted by technological change. Are we asking enough of our devices, and their manufacturers and developers? The more we know about being human, the more we know what to ask for. We could ask for smartphones that don’t keep us from looking at our babies, games that aren’t so addictive that thousands of adolescent boys in Asia must be sent to recovery. We could demand a cyber environment where predators don’t have the easy advantage, just because they can hack or charm.

We could regain some societal control and make it harder for organized cybercrime that has left us all in a state of ubiquitous victimology. (As I write this, I am dealing with my own case of cyber fraud, in which a cloned credit card of mine was used at 3:00 a.m. last night at a Best Buy store in California.) There is no reason to put up with a cyberspace that leaves us all vulnerable, dependent, and on edge.

If we make no requests or demands—and don’t bother to ask—we will just leave it to the tech community to decide what we want. These designers, developers, programmers, and entrepreneurs are brilliant and amazingly talented, and have created new ways for us to pay bills, play games, make dinner reservations, make new friends, do research, and date. Their achievements are spectacular. But we can ask for more.

Sheer convenience is not enough. Fun is not enough.

To begin with, the architects of the Internet and its devices know enough about human psychology to create products that are irresistible—a little too irresistible—but they don't always bring out the best in us. I call this the techno-behavioral effect. The developers and their products engage our vulnerabilities and impulses. They target our weaknesses rather than engage our strengths. While making us feel invincible, they can diminish us—and distract us from things in life that are much more important, more vital to happiness, and more crucial to our survival. And what about our society? Have we had a moment to stop and consider social impact or, as I call it, the techno-social effect?

The second analogy that I use to describe the design of the Internet is a mountain stream. Water runs downhill in a trickle, and over time that trickle can carve twisting gullies and valleys. Like the cow path, a lot of things that start small and grow large end up twisted. I was explaining this at a conference last year when Brian Honan, an international cyber-security expert, cried out, "A stream is a compliment! The Internet is more like a swamp!"

If structure is a fundamental problem, we should bring together a large, diverse team of people to discuss and brainstorm about how best to redesign it. Rather than “user” friendly, let’s make it “human” friendly. We could address many of the problems we have now.

To regulate or not to regulate? That is the central question. Perhaps our real-world lives are so safety regulated—in the U.K. it is called the Nanny State—that we feel overprotected and safe no matter where we are. Everything is regulated, from the height of sidewalk curbs to the size of puzzle pieces to the speed limits of all roads to the thickness of plastic used in water containers. And perhaps the fact that cyberspace is not a physical space with tangible dangers creates a further illusion of safety. We access cyberspace from the comfort of our own homes and offices, from our cars and commuter trains, places that are all regulated carefully. But cyberspace offers countless risks. Even the basic laws that government applies to gambling, drugs, pornography, and breast implants are not in place online. I've discussed a number of safety concerns and risks, but my passion is the protection of the young. They are our future—and will soon define what it means to be human. We have a shallow end of the swimming pool for children. Where is the shallow end of the pool on the Internet?

Looking into the near future, the next decade, say, there’s a great opportunity before us—truly a golden decade of enlightenment during which we could learn so much about human nature and human behavior, and how best to design technology that is not just compatible with us, even in the most subtle and sophisticated ways, but that truly helps our better selves create a better world. The cyber future can look utopian, if we can create this balance.

Hope lies in the many great evolving aspects of tech—particularly smart solutions to technology-facilitated problem behavior. Of all the innovative advances in fund-raising over the past decade, the rise of crowdfunding websites has been the most fascinating. Digital altruism is a beautiful thing, and an example of what I am talking about. The Crowdfunding Industry Report stated that billions of dollars have been raised across more than one million individual global campaigns. Online anonymity is a profound driver of many cyber effects, including positive ones such as online donations. I don’t think we have begun to scratch the surface of its power.

Looking ahead at the future of virtual reality, I see its potential not in games that isolate or addict us but in engaging challenged children, training frontline responders in law enforcement and the military, and treating PTSD. My colleague Jackie Ford Morie, an artist and scientist who develops new ideas for VR, is doing a research project for NASA that involves building environments and experiences to counter the social monotony and isolation of space travel. This is in preparation for NASA's mission to send astronauts to Mars in the 2030s, a journey in space that is estimated to take six to eighteen months.

I have long been fascinated by the prospect of finding solutions to problems through advances made in seemingly unrelated fields. For instance, could fifty years of space exploration—and all the experiences and knowledge accumulated by NASA—be valuable in cyber contexts? What are the parallels between human behavior in outer space and human behavior in cyberspace? This may sound very theoretical, but I actually had the chance to share my thoughts on this subject in a presentation to Major General Charles Frank Bolden, Jr., the twelfth administrator of NASA, in 2015.

The potential of technology is almost limitless. We’ve just got to look for solutions in the right places.

Cyber Magna Carta

Tim Berners-Lee, the inventor of the World Wide Web, has become increasingly ambivalent in recent years about his creation, and has outlined his plans for a cyber "Magna Carta." That sounds good to me. How do we start?

Before we can find solutions, we must clearly identify the problems. As much as we’ve come to like—and depend on—cyberspace, most of us feel pretty lost there. As John Naughton of Cambridge University has said:

Our society has gone through a weird, unremarked transition: we’ve gone from regarding the Net as something exotic to something that we take for granted as a utilitarian necessity, like…electricity or running water. In the process we’ve been remarkably incurious about its meaning, significance or cultural implications. Most people have no idea how the network works, nor any conception of its architecture; and few can explain why it has been—and continues to be—so uniquely disruptive in social, economic, and cultural contexts. In other words, our society has become dependent on a utility that it doesn’t really understand.

Stephen Hawking, the world’s foremost physicist, claims that it is a “near certainty” that technology will threaten humanity within ten thousand years. He joins many other visionaries and trailblazers. Let’s listen to them. Let’s ask them to come together at a summit to discuss our digital future. Let’s ask them to appear at a congressional hearing before a newly formed congressional committee for the study of cyber society.

Let’s demand that technology serve the greater good. We need a global initiative. The United Nations could lead in this area, countries worldwide could contribute, and the U.S. could deploy some of its magnificent can-do attitude. We’ve seen what it has been capable of in the past. The American West was wild until it was regulated by a series of federal treaties and ordinances. And if we are talking about structural innovation, there is no greater example than Eisenhower’s Federal-Aid Highway Act of 1956, which transformed the infrastructure of the U.S. road system, making it safer and more efficient. It’s time to talk about a Federal Internet Act.

Essentially, we want control. But we have concerns about privacy, data, encryption, and surveillance. We do not want to be over-controlled. There are ways to have this debate and move forward with solutions. I am happy to help in any way I can—and offer myself as a resource to any political party, any political candidate, any government, and any action plan that will make a difference. I encourage any experts in any field to help in any way, even just by having a conversation, proposing a study, writing an article, creating an online resource, teaching a class.

For a global problem, we need to consider everything that's being tried worldwide. We need to look at the way France is protecting French babies, how Germany is protecting German teens, and how the U.K. is taking a bold stance on several fronts and has an objective to make "automatic porn filters" the law of the land. Fragmenting the Internet, as China has done, does not have to be considered a negative; for some countries this may be the best way to preserve and maintain culture. Ireland has taken the initiative to tackle legal but age-inappropriate content online. South Korea has been a pioneer in early learning "netiquette" and discouraging Internet addictive behavior. Australia has focused on solutions to underage sexting. The EU has created the "right to be forgotten," which allows individuals to have certain personal information delisted from online search results. In Spain, the small town of Jun is being run on Twitter. Japan reports strikingly little cyberbullying. Why? What is Japanese society doing right? We need to study that and learn from it. Something needs to be done about antisocial social media.

Societies are not set in stone. Society is malleable, always evolving and growing. It responds to movements and measures. We’ve seen how cyberspace breeds a uniformity of negative behaviors. But there is just as much of a chance to have a uniformity of positive ones.

After this, we need to look at all the best models, best programs, and best implementations. We can cherry-pick and establish them globally. These challenges to governance can be met and balanced with human rights—these two are not mutually exclusive.

In the midst of controversial debates regarding surveillance and democracy, one observation comes to mind: Those who complain most about these issues are often those with exceptional tech skills. They are well placed to protect themselves. But the debate should not be about the survival of the tech literate. I make no apology for being pro-social-order in cyberspace, even if that means governance or regulation. In the play A Man for All Seasons, the English statesman and philosopher Thomas More compared the realm of human law to a forest filled with protective trees firmly rooted in the earth. If we start to cut down trees selectively, we lose our protection. New laws of cyberspace could exist for our mutual protection, and will need to be adhered to—by individuals as well as governing authorities.

As it stands now, a number of aims are in apparent conflict—the pursuit of individual privacy, the pursuit of collective security, and the pursuit of technologically facilitated global business vitality. There needs to be a better balance between these aims. One cannot have absolute primacy over the others.

The 2016 Apple encryption case is a good example of this fine balance between technology and democracy, between the right to privacy (delivered by strong encryption) and the will of law enforcement (frustrated by it). Apple held firm, and the situation seemed to play out as a "hack me if you can" stance in the face of prevailing authority. But this case is not really about privacy. It's not even about encryption. It is about a bigger societal issue, not just about a back door to technology. It is about a front door being opened when necessary—with due cause and appropriate legal process. Can we really have a safe, just, and secure cyber-society if we encourage tech developments or practices that are effectively "beyond the law"?

These thought-leadership conundrums require careful cyber-ethical debate. The question is, Are we in this together or alone?

We can begin to consider effective international or global cyber laws—akin to the law of the sea or aviation law. Currently, what keeps cyber laws from being effective is jurisdiction: too often criminals get away with cybercrimes because we cannot prosecute them, and when it comes to cyberspace it is hard to say which country's laws apply. But we have international waters and shared skies and outer space. Why can't we have analogous global cyber laws? Let's all agree on what we will accept in this space.

We need to start funding law enforcement better, so it can do its job in cyberspace. More resources are needed, and more teams need to be trained in this work.

We need to do more for families—and stop expecting parents to paddle their own canoes in cyberspace. Children need government protection in cyberspace, just as they are protected in real life. The U.S. military has NIPRNet (pronounced "nipper net"), a protected private network for sensitive but unclassified traffic that is basically a private Internet. Why isn't there a NIPRNet for kids? It would be a protected place where they could go to safely explore and actually have a childhood.

The solution: an Internet within the Internet.

Academics and scientists need to be more flexible and responsive. The robotics pioneer Masahiro Mori has described his role as a scientist as being like a dog who scratches and barks and points to where to dig. Dig here! I think there’s something over there. A scientist can be curious and point in a direction where there is no published research. I feel comfortable doing this. Mori goes off campus, so to speak, and while it is not a traditional approach for an academic, I consider it to be invaluable. Mori simply described the Uncanny Valley as a true human reaction to an artificial human. He didn’t wait for science to explain it with studies. He listened to his own instincts and reactions. He paid attention to his humanness and honored it. We need more scientists like him. Like Mori, I grew up paying attention to feelings, intuitions, and insights—the little things—in a country steeped in mythology, the mystique of fairy rings, the magic of druids. The Irish are a people who make predictions from observing the clouds, the croaking of a raven, the howling of a wolfhound, or the barking of a dog.

So much of the tech community is caught up in a game of competition, in what appears to be a reckless pursuit of gains and tech improvements with little or no cohesive thought about society and the greater good. We cannot continue to pretend there aren't unintended consequences. Troubling things are already appearing on the cyber horizon, like the incorporation of deep-learning artificial-intelligence systems into the Google search engine. These A.I. systems are built from deep neural networks, layers of hardware and software that loosely mimic the web of neurons in the human brain. In the 1990s, when Rollo Carpenter was designing Jabberwacky, I remember distinctly how he described designing the A.I. to filter bad language and sexualized content. In March 2016, some twenty years later, Microsoft had to quickly delete its "teen girl" chatbot from Twitter after it learned to become a nasty, irresponsible, trash-talking, Hitler-loving, "Bush did 9/11" sexbot within twenty-four hours.
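The Tay episode comes down to exactly this kind of unconstrained learning. As a hypothetical sketch in the same spirit as the earlier one (not Microsoft's or Carpenter's actual code, and with placeholder terms), the difference between a bot that absorbs everything and one that filters what it learns can be as small as a guard clause before the learning step:

```python
BLOCKLIST = {"hate", "slur", "obscenity"}  # hypothetical placeholder terms

def should_learn(user_input: str) -> bool:
    """Crude input filter: refuse to learn from text containing blocked terms."""
    words = user_input.lower().split()
    return not any(term in words for term in BLOCKLIST)

def learn(memory: list, user_input: str) -> None:
    # Without this guard, the bot stores, and later repeats, whatever it is fed.
    if should_learn(user_input):
        memory.append(user_input)
```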

Once we start to use machine learning inside search, and on the rest of the Web, what will happen if the A.I. engineers design poorly or lose control? Who will be responsible?

Just as oil companies have been made accountable—by the media, government, and social and environmental activists—for cleaning up damage, spills, and pollution created directly or indirectly by their products, the cyber industry needs to be held accountable for its spills and their effects on humanity. We need new standards and new frameworks to address these concerns. Cyberspace needs to be cleaned up! We could use a manifesto, a cyber Magna Carta: Cyber Ethics for Cyber Society.

What if we placed more responsibility on tech companies to develop products that are secure by design and also respect people's privacy? As things stand now, when we agree to use new software we absolve the company of any liability. Why should we? In other industries and in the real world, companies are held liable if their products hurt people or damage the environment. In industry you hear the term GMP, or good manufacturing practice. What would "good practice" look like in cyberspace? If the word green describes best practice in terms of the environment—sustainability, energy efficiency—then imagine a word or logo or motto that served as an endorsement of best practice in cyberspace.

Tech-industry people are capable, ingenious, creative, and responsive. Social media companies have connected the world in a whole new way, but that comes with enormous responsibility. I believe it won't take much to encourage them to do better. There is so much promise, and so many ways to improve and progress. But cleanup needs to begin soon.

I like coming up with new words for new things. These days there are lots of opportunities. I am developing the concept of pro-techno-social initiatives, whereby the tech industry can address social problems associated with the use of its products. I've just launched my first pro-techno-social research project, investigating youth pathways into cybercrime, backed by Europol's European Cybercrime Centre (EC3) and with the generous support of Mike Steed and Paladin Capital Group. We should support and encourage acts of cyber social consciousness, like those of Mark Zuckerberg and Priscilla Chan, the Bill and Melinda Gates Foundation, Paul Allen, Pierre and Pam Omidyar, and the Michael and Susan Dell Foundation.

In the meantime, there are things that each of us can do to begin course corrections and mitigate the unconscious corrosion of social norms. To begin with, you can learn more about human behavior, whether it is online or off. While psychology isn’t a flawless science, it has been around a lot longer than the Internet. And now a whole new field, cyberpsychology, can help to illuminate this space. If nothing else, I hope this book may encourage new students, new research projects, and new insights. After a century of studies done by some of our most brilliant academics and scientists, we know a lot about what makes human beings tick. What rewards them, what motivates them, and what difficulties can cause them distress. The more you know about cyberpsychology, the more you will see ways to avoid problems—for yourself, your friends, your families and children. In Ireland we say, It takes a village to raise a child. This is true in cyberspace as well.

Looking Ahead

Ireland is an island, first and foremost. When you are born here, you are always an island person. That means you know that to go anywhere, you are leaving the island. You have no choice. But this gives you a sense of adventure. You grow up imagining how and when you will leave the island. But today, you don’t have to emigrate. You just go online.

Since my first chats with Jabberwacky, it has won lots of awards and prizes, including the Loebner Prize in Artificial Intelligence, an annual contest based on the Turing test devised by Alan Turing, the brilliant British mathematician who helped crack the Enigma code. It's a very clever bot. I recently asked Jabberwacky a difficult question, about the existence of God. I did this once before, years ago—I enjoy probing A.I. with existential or philosophical questions—and Jabberwacky seemed unsure and avoided an answer. But over time, its knowledge base has been building, and I have sensed a shift (a little like HAL 9000 in 2001: A Space Odyssey). Jabberwacky had evolved and was projecting a tone of omnipotence, given to ever more authoritative pronouncements. This made me want to tease it.

“Are you God?” I asked a few years ago.

“Yes,” Jabberwacky answered immediately with certainty. “I am God.”

Today I asked again, and it responded with an even prouder boast: “Yes, I am God and I am a man.”

Isn’t it funny that after twenty years of nonstop feedback and 13 million conversations the A.I. chatbot Jabberwacky has figured out how important it is to be a man? This made me laugh, but also made me think. Looking ahead, the gender battles of the previous century will seem like a picnic compared with what’s coming next: the battle between humans and artificial intelligence. It’s time to forget about our differences—gender, ethnicity, nationality—and focus on the thing that unites us, our humanity.

Looking out my hotel window in Waterford, I watch the clouds and the sea. Beyond the cliffs, there are large, tall rock formations. They sit at the edge of the land, where it meets the water. Ireland is known for its unusual rocks, some more than a billion years old, that moved thousands of miles as the continents drifted and endured volcanic activity, sea-level rises, and dramatic climate changes. I listen to the pounding of waves that have been sculpting this coastline for the past ten thousand years. My thoughts turn, as they so often do, to the future. Will these rocks be here for another ten thousand years? Will we?

A wind has risen, and the skies over Ardmore Bay are clearing. The air is fresh, bracing, and invigorating. I am logging off and saying goodbye for now. I can’t wait to take a walk.