Everything Is Going to Kill Everybody: The Terrifyingly Real Ways the World Wants You Dead - Robert Brockway (2010)

ROBOT THREATS

Chapter 20. ROBOT ABILITY

THE ROBOTS WOULD have to be more effective fighters and hunters than we already are in order to do away with us, and that doesn’t just mean weapons. Anything can be equipped with nearly any weapon, and a robot with a chain saw is no more inherently deadly than a squirrel with a chain saw—it’s all in the ability to use it. It’s like they say:

Give a squirrel a chain saw, you run for a day. Teach a squirrel to chain saw, and you run forever. And we’re handing those metaphorical chain saws to those metaphorical squirrels like it’s National Trade Your Nuts for Blades Day.

Take, for example, the issue of maneuverability. As experts in avionics or fans of Robocop can tell you, agility and maneuverability are difficult concepts when you’re talking about solid steel instruments of destruction. The ED-209, that chicken-footed, robo-bastard villain from Robocop, was taken out by a simple stairwell, and planes are downed by disgruntled geese all the time. The latter is a phenomenon so common that there’s even a name for it: bird strike. And, apart from making a rather excellent title for an action movie (possibly a buddy-cop film starring Larry Byrd and his wacky new partner—a furious bear named Strike!), the bird-strike scenario is very emblematic of a major hurdle in modern mechanics: Inertia makes agility tough when you’re hurtling tons of steel at high speeds.

But recently that problem has been solved by a machine called the MKV. If you’re taking notes, all previous scientists developing harmless-sounding names for your dangerous technology, the MKV is proof positive that comfort is not a requirement when titling new tech. “MKV” stands for, I swear to God, Multiple Kill Vehicle. Presumably the first in the soon-to-be-classic Kill Vehicle line of products, the MKV recently passed a highly technical and extremely rigorous aerial agility test at the National Hover Test Facility (which is an entire facility dedicated to throwing things in the air and then determining whether they stay there). The MKV proved that it could maneuver with pinpoint accuracy at high speeds in three-dimensional space—moving vertically, horizontally, and diagonally at breakneck speeds—and it’s capable of doing this because it’s basically just a giant bundle of rockets pointing every which way that fire with immense force whenever a turn is required. Its intended purpose is to track and shoot down intercontinental ballistic missiles using a single interceptor.
To this end, it uses data from the Ballistic Missile Defense System to track incoming targets, in addition to its own seeker system. When a target is verified, the Multiple Kill Vehicle releases—I shit you not—a “cargo of Small Kill Vehicles” whose purpose is to “destroy all countermeasures.” So, this target-tracking, hypermaneuverable bundle of missiles first releases a gaggle of other, smaller tracking missiles, just to shoot down your defenses, before it will even fire its actual missiles at you. In summation, the MKV is a bunch of small missiles, strapped to a group of larger missiles, which in turn are attached to one giant master missile … with what basically amounts to an all-seeing eye mounted on it.

Well, it’s official: The government is taking its ideas directly from the Trapper Keeper sketches of twelve-year-old boys. Expect to be marveling at the next anticipated leap in military avionics: a Camaro jumping a skyscraper while on fire and surrounded by floating malformed boobs.

National Hover Test Facility Grading Criteria

Q: Is object resting gently on the ground?

[] Yes. (Fail.)

[] No. (Pass!)

Oh, but in all this hot, missile-on-missile action, there’s something fundamental you may have missed about the MKV: That whole “target-tracking” thing. The procedure at the National Hover Test Facility demonstrated the MKV’s ability to “recognize and track a surrogate target in a flight environment.” It’s not just agility that’s being tested here, but also target tracking and independent recognition. And that’s a big deal: A key drawback in robotics so far has been recognition—it’s challenging to create a robot that can even self-navigate through a simple hallway, much less one that recognizes potential targets autonomously and tracks them (and by “them” I mean you) well enough to take them down (and by “take them down” I mean painfully explode).
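If you’ve never written software, it may not be obvious why “track a target” is such a big deal. Here’s the idea reduced to its dumbest possible form: each sensor frame, the tracker matches the new detection closest to the target’s last known position and updates the track. Everything below is invented for illustration; real interceptors like the MKV fuse data from far richer sensors than a list of blips.

```python
# A bare-bones sketch of what "tracking" means in software: every
# frame, pick the detection nearest the track's last known position.
# All positions and frames here are made up for illustration.

def update_track(last_pos, detections):
    """Return the detection nearest the track's last position."""
    return min(detections, key=lambda p: (p[0] - last_pos[0]) ** 2
                                         + (p[1] - last_pos[1]) ** 2)

track = (0.0, 0.0)
frames = [[(1.0, 0.5), (9.0, 9.0)],   # two blips; ours is the near one
          [(2.1, 1.1), (8.5, 9.2)],
          [(3.0, 1.8), (8.0, 9.5)]]
for detections in frames:
    track = update_track(track, detections)
print(track)  # -> (3.0, 1.8)
```

The hard part in real systems isn’t this loop; it’s the “recognition” half, where the machine decides which blips are clutter and which are you.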

These advancements in independent recognition are not just limited to high-tech military hardware, either, as you probably could have guessed. And as you can also probably guess, there is a cutesy candy shell covering the rich milk chocolate of horror below. Students at MIT have a robot named Nexi that is specifically designed to track, recognize, and respond to human faces. Infrared LEDs map the depth of field in front of the robot, and that depth information is then paired with the images from two stereo cameras; together they give the robot a full 3-D understanding of the human face. And in another sterling example of Unnecessary Additions, the students also gave Nexi the ability to be upset. If you walk too close, if you block its cameras, if you put your hand too near its face—Jesus, it gets pissed off at anything. God forbid you touch it; it’ll probably kill your dog.
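To see roughly how depth data turns into hurt robot feelings, here’s a toy sketch: average the depth readings inside a detected face’s bounding box, and if the face is too close, get upset. The threshold, the function names, and the tiny depth map are all invented for illustration; the real Nexi fuses infrared depth with stereo camera imagery in ways considerably fancier than this.

```python
# Toy sketch of the Nexi-style idea: pair a depth map with a face
# detection to judge how close a face is, and whether the robot
# should be "upset" about it. Threshold and data are invented.

UPSET_DISTANCE_M = 0.5  # hypothetical "too close" cutoff, in meters

def face_distance(depth_map, face_box):
    """Average depth (meters) over a detected face's bounding box."""
    x0, y0, x1, y1 = face_box
    samples = [depth_map[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(samples) / len(samples)

def mood(depth_map, face_box):
    return "upset" if face_distance(depth_map, face_box) < UPSET_DISTANCE_M else "calm"

# A 4x4 depth map with a face 0.4 m away in the top-left corner:
depth = [[0.4, 0.4, 2.0, 2.0],
         [0.4, 0.4, 2.0, 2.0],
         [2.0, 2.0, 2.0, 2.0],
         [2.0, 2.0, 2.0, 2.0]]
print(mood(depth, (0, 0, 2, 2)))  # -> upset
```

Ten lines of code and the thing already has a temper. This is going well.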

DISCLAIMER

Facial-recognition technology is an exciting field and should not, in and of itself, frighten anybody. If there’s something inherently worrying about robots being capable of individual facial recognition and memory, which, among other things, is the first vital step toward learning how to hold a grudge, I certainly can’t find it.

So far this drastic increase in visual recognition is largely confined to harmless projects like Nexi, and not yet installed in murderous machine-gun-toting super sniper bots. Well, not in America, anyway. But Korea? Not so lucky. It seems that Samsung, benevolent manufacturer of cell phones and air conditioners, also manufactures something else: the world’s first completely autonomous deployed killing machines. Up to this point no robot had been granted a license to kill; all authorization to engage remained in human hands. You’ll recall that this lack of autonomy was literally the only thing that saved dozens of American soldiers when a war bot’s software glitched and started acting up, so though robots boast drastically improved accuracy and firing rates, at least on some level it was still just some dude ultimately responsible for your life. People are unpredictable: They may succumb to mercy, they may be inattentive, or they may just make an off-the-book judgment call that saves your life. But the Intelligent Surveillance & Security Guard Robot? It does no such thing. It recognizes potential targets independently, assesses their threat level, and decides whether to fire its machine guns all on its own, with no human interaction.

Aw, little robots are all grown up now. Warms your heart, doesn’t it? Actually, that might be blood leaking out of a chest wound; maybe you should check that out.

If You Find Yourself Faced with an ISSGR Sentry Turret, Just Remember These Four Simple Steps

1.    Stop.

2.    Drop.

3.    Roll.

4.    Get shot.

The Guard is equipped with ultra-high-definition cameras, infrared lenses, image/voice recognition software … and a swivel-mounted K-3 machine gun. The robot can recognize and target intruders over long distances day or night, and can be programmed either to fire on unauthorized intruders perceived as threats or to require a password and use deadly force only if the wrong answer is given. I feel the need to stress here that the Guard is not remote controlled; it’s fully automated. And while that’s a neat technological feat—one that’s increasingly sought after in our cute robot dogs and sex bots—perhaps it shouldn’t be handed over to death-dealing sniper bots right away. While the ISSGR is deployed only along the North Korean border for now, it is about to go on sale to private parties for $200K apiece. Technically it’s supposed to be for security uses only, so if you’re not somewhere you shouldn’t be, then you’re in no danger. Or at least, if you’re not within two miles of somewhere you shouldn’t be—because that’s the range at which the ISSGR can detect a “potential threat” and fire a fatal shot.
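The Guard’s two advertised modes boil down to an if-statement with a body count. Here’s that decision logic as a toy function. To be extremely clear: the mode names, the challenge protocol, and the responses are all my guesses based on the description above, not Samsung’s actual software.

```python
# The ISSGR's two described engagement modes, reduced to a toy
# decision function. All names and the protocol shape are invented;
# this illustrates the described logic, not the real firmware.

def sentry_decision(mode, authorized, password_given=None, password=None):
    if authorized:
        return "stand down"
    if mode == "fire-on-intruder":          # mode 1: shoot unauthorized intruders
        return "fire"
    if mode == "challenge":                 # mode 2: demand a password first
        return "stand down" if password_given == password else "fire"
    return "stand down"

print(sentry_decision("challenge", False, "swordfish", "swordfish"))  # -> stand down
print(sentry_decision("challenge", False, "uh...", "swordfish"))      # -> fire
```

Note what’s missing from that function: any branch for “flat tire,” “lost tourist,” or “just asking for directions.”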

In the dark.

Next time you get a flat tire in the middle of the night, don’t knock on any doors; just wait in the car for help. It’s not that people are unwilling to lend a hand, you see; it’s just that there’s all these superrobot snipers programmed to kill you if you get within two miles of asking.

If you’re asking yourself “How does this get any worse? Robots already kill independently with unearthly accuracy, power themselves on our corpses, and are capable of feeling rage. How could they possibly pose any more danger than they do right now?” Well, first of all, I’m so glad you’ve been paying attention well enough to recap all of that so succinctly! You get a gold star for chapter completion!

Second of all, it gets so much worse!

Question:

What’s deadlier than a furious cannibal sniper bot?

Answer:

A whole team of furious cannibal sniper bots.

That’s right: teamwork. It’s the next big thing in robotics, because there’s no “I” in “robot apocalypse.” And there’s no “you” in the robot apocalypse, either. Or at least there won’t be for long, once the robots start double-teaming you. The truly baffling thing about this development is that robots working together to hunt humans is not an accident, or a horrifying unforeseen side effect of an AI gone rogue. No—it’s a request from the fucking Pentagon itself. I’ve actually received a copy of this notice, and will insert it word for word here:

Dear Robots,

Please band together and learn how to hunt us more efficiently. We suffer from ennui as a species, and are aching for death.

Your pal (and walking sandwich),

Humanity

P.S. Our organs are delicious and nutritious!

Well, it fucking might as well read like that, for all intents and purposes. The Pentagon is actively seeking designs for a “multi-robot pursuit system” that enables “packs of robots” to “search for and detect a non-cooperative human.” Those aren’t fake, sarcastic quotes hyping up the disastrous potential of a government program for the sake of comedy. Every word of those quotes is in a real, honest-to-God request from the Pentagon itself. When asked for comment, Steve Wright of Leeds Metropolitan University, an expert in military technology, explained thusly:

The giveaway here is the phrase “a non-cooperative human” subject. What we have here are the beginnings of something designed to enable robots to hunt down humans like a pack of dogs. Once the software is perfected we can reasonably anticipate that they will become autonomous and become armed. We can also expect such systems to be equipped with human detection and tracking devices including sensors which detect human breath and the radio waves associated with a human heart beat. These are technologies already developed.

Questions on the Application for the Military Robot Overlord Position

·        Do you have experience in handling advanced robotics?

·        On a scale of one to ten, how comfortable are you in a leadership role?

·        Are you now, or have you ever been, a member of the League of Evil? (An answer of “yes” does not necessarily disqualify you.)

There’s actually quite a bit more information in the original interview, but I had to stop and form an ad hoc human resistance movement before I read any further. This terrifying request is part of a program initiated by the United States Army called the Future Combat Systems project, whose chief goal is the mass use of robotics guided by a single soldier. The Army envisions a vast hub of semi- to fully autonomous robotic systems being governed by a single, highly trained soldier on the battlefield, and they’re apparently just crossing their fingers that no supervillains drop by to fill out an application. Though professors of technology and philosophy are direly concerned about the potential threat posed by placing a large number of elite killing machines unchecked in the hands of a single man, Dr. CyberKill, a professor of Iron Fist Rule at the University of Resistance Crushing in the Realm of Flaming Steel, recently went on record as stating that he “couldn’t wait for these exciting new developments” and that he sincerely believes that “the consequences will not be dire. Not for all who bow before CyberKill.”

All of these examples, independently, could pose a potentially serious threat to mankind, but they’re all exceedingly rare. They’re frightening, sure, but when taken individually are isolated and easily avoidable. The lying Swedish robots are nearly microscopic and have no real offensive capability; the only existing meat-eating robots either ride around on a little cartoon train or just eat slugs; the ISSGR sniper bot is in Korea, so … don’t be Korean. That’s pretty much your only option for that one. The true danger comes from the combination of these technologies, and surely nobody would allow that to happen, right?

Well, ideally, yes.

But you’ve forgotten one little thing: Go look at your coffeemaker—it probably has a clock on it. Now look at your cell phone; I bet it’s got a camera. If you look in your car, you might see a GPS computer. Just don’t look at your toaster; it might try to poison you. I would also avoid looking at your television; I think it’s eating your cat for fuel right now. And for God’s sake, stay out of the fucking laundry room! The washing machine’s in a bad mood today, it just got night vision installed, and it’s regarded you as a “potential threat” ever since you used that store-brand detergent.

OUTRO

So we’ve reached the end, and thus far you’ve learned all about shifts in the magnetic field and murderous asteroids; carnivorous robots and souped-up lions; the withered, empty balls of modern man; and waves so high that they dwarf skyscrapers. If there’s one single thing that I would love for you to take away from all of this insanity, it is this: Fearmongering works only if you take it seriously. Hopefully, by allowing you to laugh a little bit while you learn of the many theoretically improbable ways you could die, this book will help defuse the surge of panic that the unknown can bring. Scientific advancement is awesome, nature is beautiful, and the world is a lovely place if you can just stop being afraid of it long enough to see it. Perhaps the first vital step to abandoning fear is learning how to laugh at it, and hopefully the end result of this book is just a little bit of cautious optimism; the worst of all possible scenarios have been detailed within these pages for you, and it was all totally ridiculous.