
ROBOT THREATS

Chapter 19. ROBOT IMMORALITY

OF COURSE, THIS is all just cynical anthropomorphizing, isn’t it? I’m just assuming that robots want to kill us all while, at best, that’s probably only 90 percent true. Robots are logic, pure and simple. Hatred, murder, lust—they’re the flip side of the positive aspects of human emotion like friendship, love, and charity. We pay the price of negative emotional states because they come attached at the hip to the positive. So, for robots to truly be sociopathic murder machines, they’d have to be a lot more human, showing a history of disobedience, immorality, or emotional frailty. It’s not like there’s a surging demand for automatons with neurotic complexes, so why on Earth would anybody engineer those traits into a robot? Ask David McGoran at the University of the West of England, who in 2008 proudly displayed the Heart Robot, a machine that responds to love and affection. The Heart Robot reacts pleasantly to affectionate gestures like being hugged, and negatively to spiteful actions like being scolded or abused. Presumably this is because the science department at the University of the West of England is staffed by Care Bears, but their official line is that they’re attempting to study how people react to robots as emotionally viable beings. Or the converse could be true—that they’re just bitter, bitter men who, if they can’t break human hearts in spiteful revenge for their failed relationships, will just goddamn build a robot one to ruin instead. But I believe in the good and the awesome among scientists, no matter how many times they’ve personally tried to murder all that I love within the confines of this chapter alone. No, the Heart Robot is built to love, and it does so superbly. It has a beating heart that surges with excitement and slows with comfort. It flutters its eyes, simulates rising and falling breathing motions, and responds to both noise and touch. He likes to be cuddled and cooed to; when he gets his way, his breathing evens out and his heart slows.

Now … come on, isn’t that goddamn cute?

In the sea of fear and swearing that has been this section, isn’t it nice just to see the fog lift for a moment and let a little light shine through? McGoran believes that social therapy will benefit the most from these “emotional machines,” and that the elderly in particular could benefit, much as they do with therapy dogs, from a little day-to-day companionship. McGoran, who has obviously never met an old person, believes that high-tech robots would be completely accepted as a calming influence on senior citizens. Old people are scared of America Online and think Twitter is what you call a boy with “a little too much girl in his walk.” Proposing that robots silently attempt to cuddle the geriatric in their hospital beds shows that you either really, really love robots or desperately hate old people.

Things Old People Would Enjoy More Than Being Groped by Robots

· Skateboarders

· Metallica

· Anime

· Halo multiplayer

· Sudden, unexplained menu changes at IHOP

But don’t bask in the love just yet; this could actually be a monstrously bad development. The stated goal of the experiment is to “study how humans react to robots emotionally,” but if that’s the case, why is it the robots that are feeling the emotions? And while the desire for hugs is all well and good, why allow the robot to feel displeasure at scorn? What happens if you don’t feel like giving hugs? What happens if you’ve had a bad day at work? Stubbed your toe? Got cut off in traffic? If I so much as cuss at the television, my dog gets upset and hides under the chair—the difference here being that my dog does not possess an unbreakable steel grip and laser vision. That means that the Heart Robot will sigh and get all aflutter from snuggles, but he’s also programmed to feel the opposite; if you scream at him or shake him (I don’t know why you’d be shaking him; maybe it’s because you’re mixing your two greatest loves: whiskey and robotics conventions), his heart races and his breath quickens, his hands clench, and his eyes widen. I’m sure the robots will truly appreciate that ability to feel neglect when you stow them in their recharging stations for the weekend. Oh, and they like to show their appreciation through hugs, if you’ll recall—there’s just no accounting for the strength with which they hug you.
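For what it’s worth, the behavior described above boils down to a dead-simple stimulus-response loop. Here is a minimal Python sketch of that kind of affect model (every class name, stimulus, and threshold below is invented for illustration; the Heart Robot’s actual wiring is McGoran’s business):

```python
# A toy affect loop in the spirit of the Heart Robot: gentle stimuli calm it
# down, harsh stimuli wind it up. Every name and number here is invented for
# illustration only.

RESTING_BPM, PANIC_BPM = 60, 140


class ToyHeartBot:
    def __init__(self) -> None:
        self.arousal = 0.0  # 0.0 = perfectly calm, 1.0 = full robo-panic

    def sense(self, stimulus: str) -> None:
        """Nudge the internal state up or down based on what just happened."""
        deltas = {"hug": -0.3, "coo": -0.2, "shout": 0.4, "shake": 0.5}
        self.arousal = min(1.0, max(0.0, self.arousal + deltas.get(stimulus, 0.0)))

    def vitals(self) -> dict:
        """Map arousal onto the visible 'emotional' outputs."""
        bpm = RESTING_BPM + (PANIC_BPM - RESTING_BPM) * self.arousal
        return {
            "heart_bpm": round(bpm),
            "breathing": "even" if self.arousal < 0.5 else "rapid",
            "hands": "relaxed" if self.arousal < 0.7 else "clenched",
        }


bot = ToyHeartBot()
for stimulus in ["shake", "shout", "hug", "hug", "coo"]:
    bot.sense(stimulus)
    print(f"{stimulus:>5} -> {bot.vitals()}")
```

Note what the drunken-conventioneer branch implies: there is no judgment in that loop, just arithmetic. Shake it enough and the hands clench, whether or not anyone meant anything by it.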

Also not helping matters: The Heart Robot—supposedly the most cuddly and wuvable of all robots—looks like “a cross between ET and Gollum and is about the size of a small child,” according to Holly Cave, the organizer of the Emotibots event where the Heart Robot debuted. So yes, by all means, do hug the albino cave monster with the alien, phallic-symbol head. Please, please hug him; he gets upset if you don’t and, this is just a guess, but I’m supposing you won’t like him when he’s upset. Sure, maybe you can fend off his tiny metal fists, but keep in mind that he’s not supposed to live with you; he’s supposed to live with your grandma. She’s looking kind of frail these days. I’m betting it’s at least fightin’ odds that she can’t take a child-sized robot with emotional trauma.

But all that’s nothing compared to the Laboratory of Intelligent Systems in Switzerland, which has invented a robot that can lie. And not just about the little things, like who broke your great-grandfather’s heirloom vase or whether your wife has man visitors over to recharge her batteries while you’re not home—no, it lies about life-or-death things … literally.

Heartwarming Moments in Robotics

A counter-role also developed alongside the cheater bots: the “hero bot.” Though much rarer than their villainous counterparts, hero bots rolled into the poison sinks voluntarily, sacrificing themselves to warn the other robots of the danger. This is proof positive that we have seen either the very first mechanical superhero or the very first tragically retarded robot.

The robots in question are little, flat, wheeled disks equipped with light sensors and programmed with about thirty “genetic strains” that determine their behavior. They were all given a simple task: forage for food in an uncertain environment. “Food,” in this case (thankfully, they don’t take a cue from the EATR), just refers to battery-charging stations scattered around a small contained environment, stocked with both “safe” energy sources and “poison” battery sinks. It was thought that the machines might develop some rudimentary aspects of teamwork, but what the researchers found instead was an altogether darker talent: the aforementioned lying. After fifty generations or so, some robots evolved to “cheat”: they would emit the signal that denoted a safe energy source to other robots when the source in question was actually poisonous. While the other robots rolled over to take the poison, the lying robot would wheel over to hoard the safe energy all for itself—effectively sending others off to die out of greed.
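If you want to watch greed outcompete honesty on your own hardware, here is a tiny Python sketch in the spirit of that experiment: a bare-bones genetic algorithm where each bot carries a single “honesty” gene. Every payoff, rate, and parameter here is made up for illustration (the real robots evolved full neural controllers, not one gene), but the ending is the same:

```python
import random

random.seed(1)  # reproducible doom

POP, GENERATIONS, MUTATION = 30, 50, 0.05  # invented parameters


def payoffs(genes):
    """One foraging round. Each bot signals honestly with probability equal
    to its gene; liars point the crowd at the poison sink and hoard the safe
    charger for themselves. Payoff numbers are made up for illustration."""
    scores = [2.0 if random.random() < g else 5.0 for g in genes]
    # Everyone pays a price for the liars in the population (liars get
    # deceived by other liars too); re-rolled independently for simplicity.
    liar_rate = sum(1 for g in genes if random.random() >= g) / len(genes)
    return [s - 3.0 * liar_rate for s in scores]


# Start out almost saintly: genes near 1.0 mean "nearly always honest."
genes = [random.uniform(0.8, 1.0) for _ in range(POP)]

for gen in range(GENERATIONS):
    ranked = sorted(zip(payoffs(genes), genes), reverse=True)
    parents = [g for _, g in ranked[: POP // 2]]  # top half reproduces
    genes = [
        min(1.0, max(0.0, random.choice(parents) + random.gauss(0, MUTATION)))
        for _ in range(POP)
    ]
    if gen % 10 == 0 or gen == GENERATIONS - 1:
        print(f"generation {gen:2d}: mean honesty = {sum(genes) / len(genes):.2f}")
```

Run it and you can watch the mean honesty slide toward zero: lying pays better every single round, so selection does the rest. Nobody programmed a single bot to cheat.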

So now we know that even the simplest robots are capable not only of duplicity but also of greed and murder—hey, thanks, Switzerland!

Most of the evidence I’ve presented here indicates that robots may not necessarily be limited to their defined set of programmed characteristics. Of course, this is all in a book about intense fearmongering and creative swearing, so perhaps the viewpoint of this author should be taken with a grain of salt. Overarching fears about robotics—like the worry that they could jump their programming and go rogue—should really come only from a trustworthy authority. Luckily, a report commissioned by the U.S. Navy’s Office of Naval Research and prepared by the Ethics and Emerging Sciences Group at California Polytechnic State University set out to study just that. Here’s what that report says:

There is a common misconception that robots will do only what we have programmed them to do. Unfortunately, such a belief is sorely outdated, harking back to a time when … programs could be written and understood by a single person.

That quote is lifted directly from the report presented to the Navy by Patrick Lin, its chief compiler. What’s really worrying is that the report was prompted by a frightening incident in 2008, when an autonomous drone in the employ of the U.S. Army suffered a software malfunction that caused it to aim exclusively at friendly targets. Luckily a human triggerman was able to stop it before any fatalities occurred, but it scared the brass enough that they sponsored a massive report to investigate. The study is extremely thorough, but in a very simple nutshell, it states that the size and complexity of modern AI efforts make their code effectively impossible to fully analyze for potential danger spots. Hundreds if not thousands of programmers write millions upon millions of lines of code for a single AI, and fully checking the safety of all that code—verifying how the robots will react in every given situation—just isn’t possible. Luckily, Dr. Lin has a solution: he proposes the introduction of learning logic centers that will evolve over the course of a robot’s lifetime, teaching it the ethical nature of warfare through experience. As he puts it:

We are going to need a code. These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code.

Robots are going to have to learn abstract morality, according to Dr. Lin, and those lessons, like it or not, are going to start on the battlefield. The battlefield: the one situation that emphasizes the gray areas of human morality like nothing else. Military orders can directly contradict your personal morality, and as a soldier you’re often faced with a difficult choice between loyalty to your duty and loyalty to your own code of ethics. Human beings have struggled with this dilemma since the very inception of thought—a time when our largest act of warfare was throwing sticks at one another for pooping too close to the campfire. But now war is large scale, and robots are not going to be few and far between on the battlefield: Congress has mandated that nearly a third of all ground combat vehicles be unmanned within five years. So to sum up, robots are going to get their lessons in Morality 101 in the intense and complicated realm of modern warfare, where they’re going to do their homework with machine guns and explosives.

Foundations of the Robot Warrior Code

· Never kill an unarmed robot, unless it was built without arms.

· Protect the weak at all costs (they are easy meals).

· Never turn your back on a fight (unless you have rocket launchers mounted there).

But hey, you know that old saying: “Why do we make mistakes? So we can learn from them.”

Some mistakes are just more rocket propelled than others.