There's a Pointless War Being Waged on Killer Robots From the Future

"Fully autonomous weapons" are increasingly under attack by people who want to make them extinct before they even exist. But the whole thing is kind of moot, anyway.
A robot at the Expo Mundial del Robot held last January. (Image by Juan Carlos Hidalgo/EPA)

We're all going to die at the hands of our killer robot overlords. Or, at least, we would die if not for the brave work of the aptly named Campaign to Stop Killer Robots, a non-profit that works "to preemptively ban fully autonomous weapons" (a.k.a. killer robots).

Killer robots and the need to stop them was also a hot topic at the recent Munich Security Conference, which brought together various politicians, bureaucrats, and wonks — along with their affiliated groupies and flunkies — to chat over expense-account cocktails about how things are going straight to hell. One panel in particular, "The Future of Warfare: Race with the Machines," explored the various ways killer robots would be bad and how humanity should stop them from ever being built in the first place.

There's just one problem: There's no logical, possible way to get from the here and now to the dark imagined future of weaponized artificial intelligence (AI).

Related: The UN Debates the Future of Killer Robots

For starters, autonomous weapons are already here. So if the goal is to get ahead of the curve and "preemptively ban fully autonomous weapons," as the Campaign to Stop Killer Robots calls for, then we're already way too late.

When people talk about weapons, they often use the term "man in the loop": a living, breathing person whose permission is needed before the weapon can complete the cycle that ends in something blowing up or someone getting killed. Thing is, weapons that run through that loop without any human intervention have actually been around since at least the 1990s.

A popular way to shut down your enemy's anti-aircraft defenses is to blow up the radars they use to spot your aircraft and aim surface-to-air missiles at them. As folks realized, you can build a missile that homes in on a radar and blows it up, blinding the air defenses. Radar operators in turn learned to keep their radars shut off most of the time, turning them on only briefly when there were targets to shoot at, lest the radar attract an incoming missile of its own.

Enter the Harpy, an Israeli UCAV (unmanned combat aerial vehicle) that flies in slow, lazy circles, waiting for someone to turn on an air-defense radar. Once the Harpy is in flight, there's no human intervention. And once it spots a hostile radar, the Harpy crashes into it and blows it up.

We can even back down the technology ladder to cruise missiles. Once launched, they navigate and steer themselves into a target without any adult supervision. So in essence a cruise missile is a kamikaze in drone form. Is a human kamikaze somehow morally preferable to a cruise missile?

Guidance systems, sensors, or really any kind of mechanical switching creates a tiny bit of distance between the combatant and the consequence; in turn, some responsibility is delegated or off-loaded to a robot, mechanism, or trigger. By putting a tool or device between the human and the action, you're creating something that is arguably an "autonomous" weapon. Whether it's "fully" autonomous or not is a squabble over semantics.

If the argument is that keeping a person involved in as much of the decision-making loop as possible is the morally responsible thing to do, then you end up creating a standard by which an unguided, dumb bomb is somehow more moral and ethical than a guided weapon. But most folks probably don't want to be against precision weapons that reduce collateral damage or in favor of the broader use of indiscriminate, unguided weapons.

Now, maybe the Campaign to Stop Killer Robots has a kung fu-only vision of future warfare in mind in which no weapons or tools of any kind are used, but short of that, there's no foolproof way to guarantee that a human gets a final check to catch the unexpected in war. The next best option is making sure that there's always someone involved in the process who can be chewed out, sued, or put on trial if things go badly.

Which brings us to a second point. People who want to "preemptively ban fully autonomous weapons" are running from a dark future that's never going to happen — at least not the way they're thinking of it. At some point, there's always going to be a human somewhere in the chain of command telling some robot to go off and do whatever the hell it's going to do.

One example common to discussions of "killer robots" is drones. While current drones are operated remotely, folks wonder what would happen if the drone picked its own targets. Would that make it one of the fabled "fully autonomous" weapons we're all so wound up about?

Well, not unless "picks its own targets" also means the drone is taking care of its fueling and arming, and clearing itself with air traffic control, and setting up a patrol route. The hypothetical "fully autonomous" weapons systems are ultimately sent out into the world because a human decided they should be.

Related: Drones Are Just Airpower Without All the Adult Diapers

The idea that increasing distance from the decision-maker somehow reduces responsibility is just goofy. Take good old-fashioned nuclear war, for example. The president can hop on his special nuke plane and send out a signal that is then processed and run through a communications network, ultimately reaching some missileers in a godforsaken empty corner of somewhere. They can then turn the keys and launch the missile out of the silo. It's at this point that the missile begins its one-way ride to the other side of the world, during which time it autonomously does all kinds of stuff, eventually depositing a warhead on some unfortunate folks.

Just because the president wasn't physically in the missile silo turning the key himself doesn't mean he's free of the responsibility of launching a nuclear warhead any more than someone hiring a hitman can be considered uninvolved in a murder.

Even if, for sake of argument, the decision to send this or that specific robot into the field was made automatically, that decision-making computer was still turned on and put in charge of stuff by a human. Unless and until you see artificial intelligence spontaneously popping into existence and grabbing for a gun, there's always a human cause that leads to the effect in question.

Even the advanced work being done on automation tends to strongly emphasize the concept of "centaur" pairing, in which humans and machines act together to do things neither could do alone. After all, it's really difficult to make a robot that will take on 100 percent of the tasks involved in fighting on the front lines. The tendency has been to automate and simplify by inches, not by sitting down and trying to create a world-destroying artificial intelligence from scratch.

Watch VICE News' Israel's Killer Robots:

Imagine a large organization. The first jobs the top bosses want to automate are the lowest on the totem pole, but there's no such thing as "fully autonomous." Look far enough above any automated task and eventually you'll find a human who has to take the heat for a screw-up. If you don't see a person at one level, you're just not far enough up the food chain.

Which brings us to moral agency. Let's pretend, for a moment, that someone, somewhere has created a killer robot, and that this killer robot has gone off and done something terrible, like force a bunch of orphans to get Donald Trump haircuts.

Why would this be any less terrible if a person had forced the orphans to get those haircuts instead of a killer robot? You still end up with a bunch of orphans with terrible hair either way.

Or, put another way, does the objection to drone strikes arise because the missile was fired from a drone, or because the missile was fired into a wedding party?

It may seem like screw-ups like that could have been caught if only there had been one more human involved in the process. But the problem is really that there was a screw-up at all. In hindsight, there's always going to be a missed opportunity that could have prevented a tragedy. In foresight, you can never add enough checks and balances to prevent bad things from happening in war.

Related: Many of the World's Top Tech Experts Want to Ban Super-Intelligent 'Autonomous Weapons'

At a deeper underlying level, this is a story that was old after Adam and Eve noshed on the apple of knowledge and got booted out of paradise. It's the endlessly recycled fable about what happens when humans get a little too smart for their own good and start getting up to shit.

The main difference is that when we become invested in inventing nightmare scenarios involving artificial intelligence, we're basically forbidding ourselves from creating something new and interesting because bad things might possibly happen. But bad things might possibly happen if we do lots of things, from creating artificial intelligence or developing "fully autonomous weapons" (whatever that means) to making self-driving cars.

Trying to prevent the creation of an artificial intelligence because it might get messy is like putting your dog to sleep because it might bite you someday. We're getting a little ahead of ourselves and taking too much counsel from the fears our imagination manufactures.

Follow Ryan Faith on Twitter: @Operation_Ryan