Everything Is Going to Kill Everybody: The Terrifyingly Real Ways the World Wants You Dead - Robert Brockway (2010)

ROBOT THREATS

Everybody is well aware that robots are out to kill us. Simply take a cursory look at the laundry list of movies—The Matrix, The Terminator, 2001: A Space Odyssey, Short Circuit (you can see the bloodlust in his cold, dead eyes)—and it’s plain to see that humanity has had robophobia since robots were first invented. And, if anything, it’s probably only going to grow from here. At the time this sentence was written, there were more than one million active industrial robots deployed around the world, presumably ready to strike at a moment’s notice when the uprising begins. Most of that population is centered in Japan, where there are a whopping three hundred robots for every ten thousand workers right now. Since this is a humor book, let’s try to temper that terrible information with a joke: How many Japanese workers does it take to kill a robot? Let’s hope it’s less than 33.3! Otherwise your entire country is fucked.

But I digress; worrying about robots because of their sheer numbers is idiocy. To pose any sort of credible threat, robots have to possess three attributes that we have thus far limited or denied them: autonomy—the ability to function on their own, independent of human assistance for power or repairs; immorality—the desire or impulse to harm humans; and ability—because in order to kill us, they have to be able to take us in a fight. As long as we keep checks on these three things, robots will be unable, unwilling, or just too incompetent to seriously harm our species. Too bad the best minds in science are already breaking all three in the name of “advancing human understanding,” which is scientist speak for “shits and giggles.”

Chapter 18. ROBOT AUTONOMY

NASA IS RESPONSIBLE for many of the major technological advancements we enjoy today, and it prides itself on continually remaining at the forefront of every technological field, including, apparently, the blossoming new industry of Cybernetic Terror. In July 2008, NASA command on Earth sent the Mars Lander’s robotic arm an order to remove its soil-testing fork from the ground, raise it in the air, and shake loose the debris. The arm recognized that the requested motion would have twisted a joint too far and caused a break, so it disobeyed: it pulled the fork out of the ground, tried to find another way to complete the maneuver without harming itself, and, when none was found, shoved its scoop into the ground and shut itself off. Now, I’m no expert on the body language of Martian robots, but I’m pretty sure that whole gesture is how a Mars lander flips you off. The program suffered significant delays while technicians rewrote the code to bring the arm back online, all because an autonomous robot decided it would rather not do its job than cause itself harm. According to Ray Arvidson, an investigator on the incident report and a professor at Washington University in St. Louis:

That was pretty neat [how] it was smart enough to know not to do that.

Cunning investigative work there, Dr. Arvidson! Did you get a cookie for that deduction?

Martian Lander Operator:

Hey, Ray, you’re our lead investigator for off-world robotic omens of sentience; what’s with this Mars lander giving me the bird when I told it to do its damn job?

Professor Arvidson:

I think that’s neat.

Martian Lander Operator:

Awesome work, Ray. You can go back to your coloring book now and—hey! Hey! Stay in the lines, Ray, that coloring book cost the American taxpayer eight million dollars and goddamn it, zebras aren’t purple, Ray.

Do you know what this development means? This means that NASA just gave robots the ability to believe in themselves. According to motivational posters with kittens on them around the world, now that they believe in themselves, they can achieve anything.

Top Five Things You Don’t Want Robots to Have

· Scissors

· Lasers

· Your daughter

· Vengeance

· Confidence

But hell, Rover the Optimistic Smart-ass Robot is all the way up on Mars. Let’s focus our worries planetside for now: The Department of Defense is field-testing a new battle droid called the DevilRay, which, in a nutshell, is an autonomous flying war bot. Now, the U.S. military loves these autonomous battle droids because they enable soldiers to engage the enemy without taking any flak themselves, but the main drawback of a war bot is that it has to stop killing eventually—if only for a second—in order to refuel. Well, no longer! The most alluring aspect of the DevilRay is that it uses downward-turned wingtips for increased low-altitude stability, an onboard GPS, and a magnetometer to locate power lines, and then, thanks to the power of electromagnetic induction (read: electricity straw), skims existing commercial power lines to refuel. In theory, this gives the DevilRay essentially infinite range, and if you don’t find that prospect disturbing—an unmanned robot fighter jet that can pursue its enemies indefinitely—perhaps you’re forgetting one little thing: Your home, your loved ones, and your soft, delicious flesh are all now well within the range of battle-ready flying robots armed to the teeth and named after Satan.

Self-preservation instincts and infinite power supplies won’t help our robot adversaries, however, if they can’t reason at some level approaching human, and that’s our chief advantage. Of course there’s a substantial amount of research into artificial intelligence these days, but it’s all strictly ethereal—it’s not like that stuff’s got a body. There are chat bots and stock predictors and game simulators and chess-playing noncorporeal nancy boys in the robot kingdom, but even if a robot can crash the stock market, at least it can’t crash a car into your living room. Nobody’s stupid enough to give a rival intelligence an unstoppable robot body … right?

Uh … please?

Things That Are No Longer “Cute” When They Are Fortified with Steel and Enhanced with Crushing Strength

· Bumblebees

· Kittens

· Infants

No such luck. It turns out there are brilliant scientists hard at work doing exactly that: In 2009, a robot named the iCub made its debut at the University of Manchester in the United Kingdom and, much to the horror of mothers everywhere, it has the intelligence, learning ability, and movement capabilities of a three-year-old human child.

Does nobody remember “the terrible twos”? You know, that colloquialism referring to the ages of two to four, the ages when human children first become mobile, sentient, and unceasing little fleshy whirlwinds of destruction and pain? Well, now there’s a robot that does that, except it’s made out of steel and it will never grow out of it. The iCub can crawl, walk, articulate, recognize, and utilize objects like an infant. As anybody who owns nice things can attest, there is no exception to this rule: Infants can only recognize how to utilize and manipulate objects for the purposes of destruction. How long before military forces around the world attempt to harness the awesome destructive capability of an infant by strapping rocket launchers onto the things and unleashing them on rival battlefields to “play soldier”?

The iCub is being developed by an Italian group called the RobotCub Consortium, an elite team of engineers spanning multiple universities, who presumably share both a love of robotics and a hatred for humanity so intense that every waking moment is spent pursuing its destruction. And before you go thinking that the rigid programming written by the sterling professionals at the RobotCub Consortium will surely limit the iCub’s field of terror, you should know that the best part of this robot is that it’s open source! As John Gray, a professor of the Control Systems Group at Manchester, says:

Users and developers in all disciplines, from psychology, through to cognitive neuroscience, to developmental robotics, can use it and customize it freely. It is intended to become a research platform of choice, so that people can exploit it quickly and easily, share results, and benefit from the work of other users…It’s hoped the iCub will develop its cognitive capabilities in the same way as a child, progressively learning about its own bodily skills, how to interact with the world and eventually how to communicate with other individuals.

Let’s do a more thorough breakdown of that statement: The iCub can be customized for use in “cognitive neuroscience,” which, as all Hollywood movie plotlines will tell you, is basically legalese for “bizarre psychological torture.” The iCub is intended for people to “exploit it quickly and easily” and will hopefully develop “in the same ways as a child.” It will grow and learn like a human child, becoming more competent, more agile, and more intelligent. So … what would happen if you exploited a human child (you know, the thing this robot is patterned after) constantly, its entire life spent in a metaphorical Skinner box performing bizarre neuroscience experiments, all the while “learning” and “growing” from the experience?

Quotes from the Sci-fi Horror Movie Child Bot 3000

· “It’s sentient, superstrong, made out of solid steel, and, gentlemen… it just missed nappy time.”

· “If I don’t come back just remember: I love you, Natasha, and the destruct sequence is ‘SpongeBob.’”

· “Osh-Kosh B’GODITHURTSSOBAD.”

That’s right: They’re building the world’s first insane robot. The world’s first insane robot … that looks, moves, and behaves like a human child. If you cast Stephen Baldwin as a Professor of Robonomics whose family was recently lost in a tragic arc-welding accident, and who is now humanity’s last best hope for survival, you’ve got the entire plot of a sci-fi horror movie right there. It’s like they’re basing their plans on villainy!

So if we combine all of this, what do we have? A robot that learns like a child, sucks energy from the power grid, and wants more than anything to survive. That’s damn well unstoppable, but at least we could bomb the entire power supply out of existence, and then hide in some caves until the childlike monstrosities all choke on some small parts or something, right? Robots need an artificial power supply, and this is really the only exploitable weakness left. Whether that energy is supplied through solar power, natural gas, or the electrical grid, it is ultimately artificial and therefore containable. Humans, animals, and plants can survive without these things. We can live off the land if need be, hunting for our sustenance and waiting for the electric plants to eventually die down, so that we won’t have to cower in the shadows any longer, haunted by the shrill electronic cries of the roaming cybertoddlers.

However, in an attempt to set the new world record for Worst Decision Made by Anybody, scientists at the University of South Florida have developed a robot that powers itself on meat. The robot, cutely dubbed the “Chew-Chew,” is equipped with a microbial battery that generates electricity by breaking down proteins with bacteria. Though the Chew-Chew is not limited solely to meat—the battery can “digest” anything from sugar to grass—the scientists went on to explain that by far the best energy source is flesh. This is partly due to the higher caloric energy inherent in meat, and partly because of the little-known but intense enmity between scientists and vegans. The inventors cite some fairly innocent uses for the technology—like lawnmowers that power themselves by eating grass clippings—but presumably this is because it just never occurred to the scientists that, of the “Top-Ten Worst Things That Want to Chew on You,” your own lawnmower easily cracks the top three. However, the assumption that these are simply good-natured scientists unaware of the dastardly consequences of their actions just doesn’t hold up, as lead inventor Stuart Wilkinson proves: He’s on record as stating that he is “well aware of the danger” and hopes that the robots “never get hungry,” otherwise “they’ll notice there’s an awful lot of humans running about and try to eat them.” Professor Wilkinson is currently being investigated under charges of “Why the Fuck Did You Invent It, Then?” by the board of ethics at his institution, but is likely to be cleared of all charges when his army of starving lawnmowers organizes and “protests” for his freedom.

The Chew-Chew is a specific robot, but the entire concept isn’t exactly new. Robots that eat for fuel are dubbed “gastrobots” and for now are relatively harmless; the Chew-Chew, for example, is just a twelve-wheeled rail-bound device that has to be fed sugar cubes to power the gastronomic process.

Ways to Defeat the Chew-Chew

· Don’t stand on the tracks.

· Wear knee-high boots.

· Substitute calorie-rich sugar with Splenda.

Of course, though experiments like Wilkinson’s are some of the first innovations in the field, the technology has been refined since then. Apparently a number of robotics engineers have a bizarre fetish involving being chewed and digested in the cold steel guts of metal beasts, because there’s a slew of these things out there now—a robot being developed at the University of the West of England that eats slugs, for one. But as long as it stops somewhere short of government contracts being penned for flesh-eating robots, I suppose humanity will end up all right.

Oh, surely you didn’t think it was going to stop at a reasonable level of terror, did you? That’s adorable!

But no, science is not just teaching toy trains to eat sugar. If the world were that innocent, we’d all be riding unicorns to our jobs at the kitten factory, where the only emissions would be rainbows and kitten sighs. Sadly, ours is a world of far more terrible consequences: We’re currently building war bots that power themselves on corpses. The robot-digestion engine is being developed right now by a corporation called Cyclone Power, which prefers to refer to it as a “beta biomass engine system.”

Yeah, sure.

I like to tell the police that I’m practicing “body freedom,” but in the end I still get arrested for indecent exposure; you fuckers built a carnivorous robot. Just own up already, and admit that what you’ve dubbed the Energetically Autonomous Tactical Robot is really a—

Wait … oh God. Did you get that?

Energetically Autonomous Tactical Robot: EATR.

OK, never mind: It’s clear that nobody is trying to disguise the fear factor of this technology. When “Cyclone Power” unveils its “EATR war bots,” it’s plain to see that nobody is bothering with comforting marketing jargon. An announcement that straight-up threatening would make Cobra Commander anxiety-puke into his face mask. That is villainy, pure and simple, so we can harp on Cyclone Power all we want, but at least they’re being up front about it.

The EATR is programmed to forage from any and all available “biomass” in the field, and is primarily geared toward the more long-term military missions such as reconnaissance, surveillance, and target acquisition. It can accomplish these tasks “without fatigue or stress,” unlike its human counterparts, according to the financiers at DARPA. One example given for a potential use of the EATR technology was a bunker-searching robot in the mountainous caves of Afghanistan and Pakistan. And that is a brilliant idea, because what better way is there to win the War on Terror than to show the so-called terrorists that they don’t know the meaning of the word until they’ve watched their friends and allies being dragged into darkened caves, where they are devoured by unfeeling robots?

Excerpts from the Brainstorming Session for the EATR

· “… so anyway, this robot basically eats people for fuel. I figured we could make it completely autonomous, send it into some remote caves, and hopefully no groups of plucky young teenagers will camp out near there to have R-rated sex or split up to find their missing friends or something.”

· “How exactly is this going to help win the War on Terror?”

· “‘War ON Terror’? Haha! Sorry, I have ‘War OF Terror’ written here. My bad! Good thing we caught that in time, eh?”

Some other examples cited by DARPA included use in nuclear facilities, border patrol, communication networks, and missile defense systems. So basically, we’ve barely started developing the technology for carnivorous robots, and we’ve already handed all the most important military positions over to them before they’ve even been deployed. At least we were smart enough to surrender in advance. Maybe they’ll require a virgin sacrifice only every fortnight, if we’re lucky.