


Activists Calling for a Ban on 'Killer Robots' Raise Alarm Over an Uncertain Threat

AI researchers and human rights groups are trying to draw attention to the prospect of advanced weaponry that could one day blanket battlefields and make life-and-death decisions independent of human direction.
Photo from the VICE News documentary "Israel's Killer Robots"

The year is 2050. A fighter jet roars across the sky, zeroing in on targets in enemy territory somewhere in the Middle East. But the targets are really farmers holding hoes and rakes, which the jet identifies as guns. Within minutes, it fires a series of rockets, killing them all. The jet is pilotless and is not being directed from any base. It registers the operation as a success.


The prospect of such scenarios has led some activists and human rights groups to call for a complete ban on so-called "killer robots" — advanced artificial intelligence weaponry that they believe could one day blanket battlefields and make life-and-death decisions independent of human direction.

Representatives of the Campaign to Stop Killer Robots gathered at two events held at the United Nations on Tuesday to advance this campaign. The events, a press conference and a side meeting of the General Assembly, came three months after more than 1,000 prominent scientists, robotics experts, and researchers — including Stephen Hawking and Elon Musk — signed a letter opposing the development of lethal autonomous weapons systems, as this technology is officially known.

Related: Many of the World's Top Tech Experts Want to Ban Super-Intelligent Autonomous Weapons

While the killer robots idea leads many to imagine something out of the Terminator film franchise, both sides of the debate say that this is largely inaccurate — for now. AI experts point out that the technology for basic autonomous weaponry already exists, and would, for starters, simply involve retrofitting existing weapons like drones with the ability to fire at perceived targets without human approval.

Armaments that make decisions on their own within specific limitations have existed for decades, among them the American-made Patriot air and missile defense system, which can independently fire at perceived aerial targets. The system once destroyed two allied jets during the United States-led invasion of Iraq in 2003 — one an American F/A-18 Hornet, the other a British Tornado fighter-bomber — killing three airmen.


Both the US and UK militaries have since developed planes capable of flying on their own, which some say is a short step away from weaponized autonomous aircraft.

Toby Walsh, an AI researcher and professor of computer science at the University of New South Wales who spoke at the UN, predicts that more disastrous incidents like the one in Iraq will occur as automation becomes increasingly prevalent. Though he believes automation should be welcomed in a variety of fields, such as self-driving cars, Walsh points specifically to the threat of drones, which he says already significantly distance military officials from the decision to kill, with deadly results for civilians.

"It turns out now that nine out of 10 of those killed from drone strikes are not the intended victims," he said, referring to a recent exposé on Washington's covert drone program published byThe Intercept. "Given the current state of the art with AI, we're going to be making far more mistakes with computers."

Ian Kerr, a professor of philosophy at the University of Ottawa and member of the International Committee for Robot Arms Control, said that automated weaponry would cross what he considered a vital ethical threshold by delegating kill decisions to pre-programmed software.

"To offload those decisions to a machine is morally problematic," he remarked.

Kerr added that even if such systems were able to distinguish combatants from non-combatants, they still wouldn't view the field of battle through the moral prism that humans are capable of.


"That fundamentally is a game changer," he said.

Opponents of automated weaponry, including groups like Human Rights Watch, successfully pushed to have the topic debated at the UN's Human Rights Council in 2013. That April, Christof Heyns, the UN's special rapporteur on extrajudicial, summary, or arbitrary executions, recommended that countries "establish national moratoria on aspects of LARs [lethal autonomous robotics]" and called for the establishment of a high-level panel to craft a policy for the international community on the technology.

Related: UN Debates the Future of Killer Robots

In his report, Heyns cited Israel, the UK, the US, and South Korea as having already deployed robotic systems with "various degrees of autonomy." Israel was cited for the use of a "fire and forget" system that targets objects emitting radar. The US Phalanx system found on certain naval cruisers "automatically detects, tracks and engages anti-air warfare threats," while two aircraft — the Northrop Grumman X-47B and the UK's Taranis jet-propelled drone prototype — can fly on their own and, in the case of the Taranis, seek out enemy targets.

Perhaps the most Terminator-like machine mentioned in the report is the Samsung Techwin surveillance and security guard robot. Deployed in the demilitarized zone between North and South Korea, it is capable of auto-detecting targets. Like the Taranis, the Techwin cannot engage without human approval. The next stage for offensive automation would be for such machines to kill of their own pre-programmed volition.


Both the US and the UK have developed guidelines for autonomous weapons systems. In 2012, the Pentagon issued a directive stating that "autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force." The directive, however, is set to expire in 2022.

Discussions on automated weaponry have shifted to meetings among parties to the Convention on Certain Conventional Weapons (CCW), a treaty governing a range of arms, including incendiary devices, landmines, and laser weapons. The CCW has been signed by more than 100 countries, including the five permanent members of the Security Council. The Campaign to Stop Killer Robots aims to persuade the convention's signatories to implement a pre-emptive ban on lethal autonomous weapons systems.

But determining exactly what would cross the line between what is commonly termed "meaningful human control" and an unsound or unethical degree of autonomy is difficult.

An American diplomat present at the General Assembly side event said that the US supports negotiations over autonomous weapons in the context of the CCW, but cautioned that it was too early to consider an outright ban.

"It's a very tough issue to deal with," said Steven Costner, deputy policy director at the State Department's Office of Weapons Removal and Abatement. "We have weapons and technology out there that exists, and more going to develop down the spectrum. Where do you draw the line? What is the right level of meaningful human control?"


Michael Schmitt, a fellow at Harvard Law School's Program on International Law and Armed Conflict, agreed that a ban was "unrealistic." Regulation, he said, would be more likely to succeed.

"Since autonomous weapons have the potential to be a game changer in modern warfare, some states will wish to develop them either to extend their technological edge on the battlefield or to offset their weakness," he suggested, adding that from a humanitarian perspective, solutions "must be practical and realistic about what states are likely to accept and move in that direction."

CCW member nations will next meet in November, when activists hope they will set a larger agenda on the topic. But talks to do so are still in their nascent stages.

Watch the VICE News documentary Israel's Killer Robots:

A lack of vocal autonomous weapons advocates might in fact make it more difficult for activists to lobby against their creation. Without a clear foil, opponents are largely tasked with defining the theoretical dimensions of an uncertain threat.

Paul Scharre, a senior fellow at the Center for a New American Security, noted that previous bans on weapons such as cluster munitions or land mines focused on pre-existing technology. While autonomous technologies are to an extent already developed, he said, "the problem with autonomous weapons is that critics are talking about a future weapon that doesn't even exist. The conversation is really muddy."


Scharre noted that the automation of offensive systems could potentially make warfare safer by decreasing the element of human error and emotion in conflict.

"There's no reason why you couldn't incorporate all of automation to help that person decide, to make them more accurate," he said. "There's no reason why that wouldn't be a good idea."

Some roboticists agree. In a 2007 report, Georgia Tech professor Ronald Arkin cited the "tendency to seek revenge" among soldiers and argued that autonomous weapons "can perform more ethically than human soldiers are capable of."

Opponents counter that by the time experts and authorities figure out the precise forms that lethal automation will take, it will be too late to craft meaningful regulations. They also worry that removing humans from the battlefield could make wars easier to sell to voters, increasing the likelihood of conflict worldwide. Scharre, who served multiple deployments in Iraq and Afghanistan, agreed that such a development would be distressing.

"If militaries are deploying weapons and don't feel responsible for the killing, that would be ethically problematic," he said.

But until the prospect of such a reality becomes clearer, raising alarm over a scenario that seems plucked from dystopian science fiction will remain difficult.

Follow Samuel Oakford on Twitter: @samueloakford