Small Wars Journal

A Kellogg-Briand Pact for the 21st Century

Mon, 06/09/2014 - 8:55pm

A Kellogg-Briand Pact for the 21st Century: Why the International Community Rejected the Call for a Ban on Lethal Autonomous Weapon Systems

Chris Jenks

Prior to the mid-May start of the first-ever experts meeting of the states parties to the Convention on Certain Conventional Weapons (CCW) on lethal autonomous weapons systems (LAWS), several groups, including Human Rights Watch, renewed their call for a ban on so-called “killer robots.” The international community rejected this argument, with only 3 of the 117 states parties (Ecuador, Egypt, and Pakistan) calling for a ban. Why? Because in framing the issue in hyperbolic terms like “killer robots,” the argument ignores the wide range and longstanding use of LAWS, presupposes how they will develop, and fails to acknowledge even the possibility that LAWS may facilitate greater protection of both military personnel and civilians.

“DRONES” ARE NOT AUTONOMOUS ROBOTS

Although the tendency is to conflate “drones” with autonomous robots, the two are very different. This is one of the reasons why the term “drone” is unhelpful and even misleading. “Drone,” as most use the term, refers to a range of remotely piloted systems, often aerial, though ground and sea variants exist. These systems are neither autonomous nor robotic; a human being operates them, just from a distance.

WEAPON SYSTEMS ARE NOT BINARY

From there, the call for a ban treats autonomy in a binary manner: a system is either fully autonomous or not at all. This too is both unhelpful and misleading. There are many degrees of autonomy; a system can autonomously perform whatever share of its functions (sensing, analyzing, deciding, executing) its control algorithms provide. These degrees of autonomy have led to descriptors like “human delegated,” “human supervised,” “man in the loop,” and “man on the loop.” The Department of Defense (DoD) suggests autonomy is better thought of as “a capability of the larger system enabled by the integration of human and machine abilities.”
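To make that spectrum concrete, here is a minimal sketch, in Python, of one way to model which of those four functions a given system performs on its own. It is purely illustrative; the class, field names, and mode labels are my own assumptions, not DoD terminology.

    from dataclasses import dataclass

    @dataclass
    class EngagementPipeline:
        """Which of the four broad functions the machine performs on its own."""
        machine_senses: bool = True      # e.g., radar or acoustic detection
        machine_analyzes: bool = True    # e.g., track classification
        machine_decides: bool = False    # target selection
        machine_executes: bool = False   # weapon release

        def descriptor(self) -> str:
            # Rough mapping onto the "in the loop" / "on the loop" vocabulary.
            if self.machine_decides and self.machine_executes:
                return "human on the loop (supervisory) or out of the loop"
            if self.machine_senses or self.machine_analyzes:
                return "human in the loop (machine-assisted)"
            return "fully manual"

    print(EngagementPipeline().descriptor())  # human in the loop (machine-assisted)
    print(EngagementPipeline(machine_decides=True, machine_executes=True).descriptor())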

Over 18 months ago, DoD, in a publicly released directive, defined an autonomous weapon system as one that, “once activated, can select and engage targets without further intervention by a human operator.” In calling for a ban on the future development of such systems, proponents ignore the longstanding legal and effective employment of autonomous weapons.

HISTORY OF AUTONOMOUS WEAPONS

The US, and many other countries around the world, have employed autonomous weapon systems for decades. These systems include the Phalanx Close-In Weapon System, currently in use on every US Navy surface combatant and on those of over 20 US allies, and the Patriot air and missile defense system, used by the US military and a dozen allies. These systems are defensive, but they employ lethal force more rapidly and effectively than if directed by a human. That isn’t to say there are never errors or malfunctions; there are. In separate incidents during the 2003 invasion of Iraq, US Patriot systems misidentified friendly aircraft as threats, leading to the downing of a US F-18 and a British Tornado and killing the crews. Similarly, in 2007 a South African autonomous air defense system malfunctioned, killing nine service members.

Whenever lethal force is employed, by human or autonomous system, there are risks of unintended consequences. The question is whether LAWS may provide better force protection to the military and limit unintended injury to civilians more effectively than humans. Those calling for a ban are in essence saying there is zero chance that LAWS could be developed that employ lethal force more effectively and discriminately than a human.

I disagree, on two grounds.

HUMANS ARE POOR PREDICTORS OF THE FUTURE

First, we know that human beings have a poor track record in predicting the future in general, let alone how future technologies, military weapons, adversaries, and threats will evolve. In 1899, fifteen years prior to the start of WWI, the international community banded together to preemptively ban what was envisioned to be the next scourge of mankind. Given that World War I ushered in the wide-scale employment of several technological advances that would contribute to killing millions, which would you guess was the subject of the 1899 ban? The tank? The machine gun? The plane? Chemical weapons? None of those; rather, human predictive abilities resulted in the Declaration on the Launching of Projectiles and Explosives from Balloons.

HUMANS REACT LESS EFFECTIVELY UNDER STRESS THAN WE WANT TO THINK

Second, while I don’t claim to know that a system could be developed that employs lethal force more effectively and discriminately than a human, I think it’s at least possible.

Human decision-making during armed conflict is at its nadir.  When receiving fire, service members are scared, highly stressed, and lacking time and information. Training and discipline can lessen, but never eliminate, the impact of these effects.

What we know of how effectively human beings use lethal force should be, but oddly isn’t, a reminder of our considerable human limitations, which are only exacerbated under stress. The New York City Police Department utilizes possibly the most comprehensive firearms training program of any major US city, including live-fire range training and simulations. Yet a 2008 RAND Corporation study determined that on average, when an NYPD officer exchanges gunfire with a suspect, the officer hits the suspect only 18% of the time. So for every five rounds an NYPD officer fires, roughly one hits the target and four miss, ricocheting and landing, well, wherever they land. When suspects do not return fire, the NYPD officer hits the target 30% of the time.

If we confront the reality of the human condition in armed conflict, we should be open to at least the possibility that LAWS might lead to better outcomes. If that’s so, far from a ban, aren’t we obligated to continue to develop, discuss, and debate this technology?

LAWS MAY FACILITATE GREATER PROTECTION OF THE MILITARY AND CIVILIANS

As an Army Judge Advocate, I deployed to Mosul, Iraq, where I advised a unit in combat. While deployed, I learned the unsettling lesson that when shot at in an urban environment, short of seeing a muzzle flash you generally have no clue from which direction someone is trying to kill you. In these circumstances, Iraqi Security Forces (ISF) would often fire automatic weapons in literally 360 degrees, with about as much chance of hitting the shooter as of picking the winning Powerball numbers.

The US military has an acoustic detection system, Boomerang, which, within one to two seconds of a shot being fired, identifies the point of origin, reports the data, and, in conjunction with another system, traverses a weapon to aim at that origin. While the technology most certainly exists for the system to then fire the weapon, it doesn’t; a service member does.
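As a rough illustration of the kind of computation involved, here is a generic sketch of acoustic bearing estimation from the time difference of arrival between two microphones, not Boomerang’s actual, proprietary algorithm; the microphone spacing and timing values below are invented.

    import math

    SPEED_OF_SOUND = 343.0  # m/s in dry air at roughly 20 C

    def bearing_from_tdoa(delta_t: float, mic_spacing: float) -> float:
        """Estimate the bearing (degrees off the array's broadside axis) of a
        muzzle blast from the time difference of arrival at two microphones.
        Assumes a far-field source, so the wavefront is roughly planar:
            sin(theta) = (c * delta_t) / d
        """
        ratio = (SPEED_OF_SOUND * delta_t) / mic_spacing
        ratio = max(-1.0, min(1.0, ratio))  # clamp numerical noise
        return math.degrees(math.asin(ratio))

    def traverse_command(current_azimuth: float, target_azimuth: float) -> float:
        """Signed slew (degrees) needed to point the mount at the estimated
        origin; in the fielded setup a service member, not the system, fires."""
        return (target_azimuth - current_azimuth + 180.0) % 360.0 - 180.0

    # Example: a 0.5 ms arrival difference across a 0.35 m microphone baseline.
    theta = bearing_from_tdoa(delta_t=0.0005, mic_spacing=0.35)
    print(f"bearing: {theta:.1f} deg, slew: {traverse_command(0.0, theta):.1f} deg")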

That’s because, contrary to the claim that DoD wants humans out of the loop, the US is focused on exploring the legal, ethical, and moral issues LAWS raise. For years, DoD has been identifying and discussing the role of autonomy and planning how best to integrate the capabilities of man and machine moving forward.

In the urban shooting scenario described above, employing an autonomous system to identify the source of the firing and orient, but not fire, a weapon in response seems appropriate. But ask yourself: if you and your family lived in Mosul, Iraq, and I told you that a joint US/ISF patrol would be in your neighborhood, that it would shoot back if it received fire, and that you had a choice between the return fire coming from the ISF firing wildly in 360 degrees or from a fully autonomous Boomerang system (the system, not a soldier, taking the step of returning fire), which would you prefer, or be less fearful of?

Let’s dispense with references to the Terminator and Skynet and instead consider scenarios currently envisioned by military war game planners. These include “swarm attacks,” where an enemy force employs hundreds or even thousands of small attack boats against the US Navy, or swarms of micro unmanned aerial or ground systems. Bearing in mind the RAND study, can we really say with certainty that human decision-making, target selection, and engagement accuracy would trump that of LAWS?

CONTINUING AND EXPANDING THE LAWS DISCUSSION & DEBATE

At its core, the “ban killer robots” campaign reflects considerable intellectual hubris: that we know how the technology will develop and that its use will violate international humanitarian law/the law of armed conflict. Several leading academics at the CCW meeting disagreed, including Nils Melzer and Marco Sassoli, a former head and deputy head, respectively, of the legal department of the International Committee of the Red Cross. Neither of these experts supports “killer robots”; rather, they are agnostic, recognizing that it’s impossible to credibly assess the legality of a weapon system that doesn’t yet exist.

Having preceded WWI with the inconsequential balloon bomb declaration, the international community followed the war with a well-intentioned ban on war, the Kellogg-Briand Pact. As we know, the ban proved futile. More importantly, the pact consumed time and resources that could have been better spent understanding the causes of WWI and mitigating its impact.

So too would a ban on LAWS be both futile and a misdirection of time and resources. The states parties to the CCW were right to reject the call for a ban and instead agree to continue and expand the moral, ethical, and legal discussion on LAWS.

Comments

Sparapet

Tue, 06/10/2014 - 8:30pm

I agree with the thesis of this article. A LAWS that carries out tasks assigned to it by a human can have more and more complex tasks assigned to it as technology evolves to permit it. Anticipating what technology will permit is haphazard even for the foremost expert. I think we will be hard-pressed to argue that a system capable of enhancing the accuracy and speed of direct fire would go unused on moral grounds. This is why I would like to see a follow-up article exploring what happens when LAWS are used offensively.

Every system we have right now can only respond to a threat, a threat that can be tied directly to a mathematically describable, physical phenomenon (e.g., radar signature, sound, heat). So a Phalanx, a Boomerang+CROWS, and a Patriot missile are all, as the article explicitly admits, defensive. Would we really care if we built a turret designed to trigger countermeasures when it receives fire from an easily describable threat? I think not. Computer games have modeled such systems in their scenarios since the early 1990s.

So imagine then a LAWS taking the moral responsibility for conducting a recon by fire (diversionary fires to root out an enemy that is not yet identified), area suppressive fires not aimed at a specific target, or any other tactic that is permissible under the law of land warfare but requires a judgment of military necessity not informed by imminent, verifiable danger (the ultimate example would be a deliberate ambush of an irregular foe). It seems to me that these are the scenarios that require an accountable moral agent, one that is not several degrees removed from the consequences of his decisions. (And to be frank, I don't see Commander Data challenging our cultural definition of sentience in the immediate future.)

It seems we are painting ourselves into a rather perverse corner. On the one hand, we cannot imagine the LAWS that absolutely will be built, making any categorical ban logically absurd on its face and serving only as an emotional panacea. On the other hand, we are unwilling to contemplate the possible consequences of LAWS doing human things. Sending rounds downrange under straightforward logical rules that rely on solutions to X is not a human thing; doing so when the solution to X is a range of probabilities is.
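A toy sketch of that distinction (the rule, the threshold, and the probability below are invented purely for illustration, not drawn from any fielded system):

    def deterministic_trigger(signature_matches: bool, incoming_fire_detected: bool) -> bool:
        # "Solutions to X": a hard rule tied to a verifiable physical phenomenon.
        return signature_matches and incoming_fire_detected

    def probabilistic_engagement(p_hostile: float, threshold: float = 0.95) -> bool:
        # "A range of probabilities": the system must weigh uncertainty, which is the
        # judgment that should arguably stay with an accountable moral agent.
        return p_hostile >= threshold

    print(deterministic_trigger(True, True))        # True: rule satisfied, countermeasures trigger
    print(probabilistic_engagement(p_hostile=0.72)) # False: below threshold, no engagement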

Sci-fi gave us defensive LAWS long before a real system existed. Perhaps the reason sci-fi so often anticipates technology that experts utterly fail to is that it isn't terribly concerned with a present, verifiable reality. And in the fantasy realm there is an awfully large number of logically consistent and universally disturbing scenarios for systems that can do ambiguity. So though I agree that the rush to legislate is foolish, I disagree that the impulse is invalid. It is very valid, but it can benefit from informed discussion by people willing to burden their minds with contemplations of futures that they would like to avoid as well as promote.