The Military Wants to Build 'Moral' Robots — And Yes, That's Scary

Navy-funded research into robots that can make moral decisions is troubling because of the way humans define military morality.
Image by Saad Faruque via Flickr

When mathematician and philosopher Alan Turing designed his much-debated test to address the question "Can machines think?", he actually set out to answer a slightly different one: Could a machine adequately imitate a thinking being? Turing introduced his test in 1950, and the questions and answers it offers have been a subject of fierce contention ever since.

Turing's test has, over the years, encouraged much discussion of crucial questions about what "thinking" even means, who or what can be a thinking thing, and what sort of thinking is available to different subjects. Given news this week that the US Navy is investing $7.5 million to research the possibility of "moral robots," Turing's ideas feel as fresh and relevant as they did more than half a century ago.

As Defense One reported, grant money has been given to researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale, and Georgetown to "explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems."

It is a basic fact of current computing that a system can perform only the operations it is programmed to carry out; computers cannot, at base, get beyond themselves. So people clinging to their tinfoil hats and preparing for the rise of robots should calm down. Whether a robot displays apparent intelligence in such a way that it might as well be intelligent (Is intelligence, after all, more than a set of behaviors?) is a question for philosophers.

Similarly, whether a robot can be moral depends on how we're defining morality. A program could be created with a set of inputs defining certain actions as "right" and certain actions as "wrong" based on a specific set of assumptions. Those inputs would be entered by human programmers, such that a robot could be built that follows a moral code. Morality, after all, has much to do with following predefined codes about "right" and "wrong." Unlike varied and contingent ethics, moral codes are arguably by their very nature robotic. As such, a moral robot seems a very real possibility.

So what sort of moral robots are we talking about? A system that, without human direction, can make "decisions," as Defense One reported. "For instance, in a disaster scenario, a robot may be forced to make a choice about whom to evacuate or treat first, a situation where a bot might use some sense of ethical or moral reasoning." This counts, arguably, as moral reasoning insofar as it echoes the sort of reasoning a human might carry out based on input presumptions — like "save women and children first." Give a robot the same set of inputs, and it can perform similarly to a human. Crucially in this case, what is moral is not decided by the robot. Instead, the robot is taught — programmed — to echo behaviors beholden to established human moral codes. A robot can be no more or less moral than a human — it can simply be programmed to follow a moral code. (As ever, the virtues of specific moral codes will remain points of contention, whether followed by humans or robots.)
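The distinction is easier to see in code. The sketch below is purely illustrative and assumes nothing about the Navy-funded research; it hard-codes a toy "moral code" as a priority rule (a stand-in for presumptions like "save women and children first") and shows that the resulting "decision" is entirely determined by rules a human author wrote. All names and rules here are hypothetical.

```python
# Illustrative sketch only: a toy rule-based "moral code" for a disaster-triage
# robot. The priority ordering is an assumption supplied by a human programmer,
# not something the machine decides for itself.

from dataclasses import dataclass


@dataclass
class Person:
    name: str
    age: int
    injury_severity: int  # 0 (unhurt) to 10 (critical)


def moral_priority(person: Person) -> tuple:
    """Encode the human-authored code: children first, then the badly injured."""
    is_child = person.age < 18
    # Sort key: children before adults, then higher injury severity first.
    return (0 if is_child else 1, -person.injury_severity)


def choose_evacuation_order(people: list[Person]) -> list[Person]:
    """The robot's 'decision' is just the deterministic output of its rules."""
    return sorted(people, key=moral_priority)


if __name__ == "__main__":
    scene = [
        Person("adult, critical", age=40, injury_severity=9),
        Person("child, minor injuries", age=9, injury_severity=2),
        Person("adult, minor injuries", age=35, injury_severity=1),
    ]
    for p in choose_evacuation_order(scene):
        print(p.name)
    # The child is evacuated first because the programmer ranked children
    # first, not because the system reasoned about what is right.
```

Swap the priority rule and the "moral" behavior changes with it; the morality lives with the code's authors, not in the machine.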

As such, the potential applications of "moral" robots in zones of conflict are about as troubling as the application of human moral reasoning when it comes to war. Take, for example, the disturbing fact that guidelines for targeted drone killings broadly define potential enemy combatants as all military-age males in a strike zone. A "moral" robot, without a human drone operator, could conceivably be developed to profile and target in the same way. The vile moral calculus that undergirds conflict theater is the problem here, not whether a human flying a machine or a machine programmed by a human carries it out.

Militaries made up of people already carry out an operational morality as opposed to the performance of individuated moral agency. The rules of war and the edicts of law are set and followed. (Unless, of course, an individual soldier acts outside the command structure; Chelsea Manning's persecution is chilling evidence of both the rarity of doing so and the stakes involved.) Operational morality certainly seems available to an artificially intelligent system. What is more disturbing is that human institutions of war-making already rely too often on unchallenged operational morality. The possibility of moral robots, capable of acing a Turing test, is only troubling (and indeed only possible) because military morality is already robotic.

Follow Natasha Lennard on Twitter: @natashalennard
