Small Wars Journal

With Great Power Comes Great Responsibility: Keeping a Human Finger on the Killer Robot’s Trigger

Sat, 07/27/2013 - 12:45pm

Daniel Golebiewski

Abstract

During the 23rd Human Rights Council session in June 2013, nations, non-governmental organizations, and campaigners debated whether the international community should ban “killer robots,” autonomous machines in the mold of Star Wars’ Battle Droids, from future war zones. For the first time, everyone present in Geneva was treating science fiction as a live policy question. In this essay, I examine the arguments on both sides. On one side, NGOs and campaigners urged the international community to ban killer robots for the sake of mankind and put them back into the toy box, where they forever belong. On the other side, countries already producing the killer robots’ predecessors argued that these machines would become a soldier’s new best friend, saving the lives of soldiers and civilians alike. In the end, both sides failed to mention a simple phrase: with great power comes great responsibility.

A Galaxy We Call Home

In the near future, in a galaxy not so far away from Planet Earth, millions will march across the battlefields, mercilessly mowing down everything in their path. Unlike today’s computers, they will be self-aware and conscious. They will know whom to attack and when to attack. Best of all, if one falls, it can easily be repaired or simply replaced. They are the “killer robots” born where science fiction meets reality.

Arguably since World War I, technology has dramatically transformed war. One need look no further than any modern war game, from Call of Duty: Modern Warfare 3 to Battlefield 3. From heavy-duty vehicles to assault rifles, it seems that every year soldiers carry some new weapon into the combat zone. Yet even this is slowly becoming old fashioned. Increasingly, robots, whether drones or other unmanned weapons, are appearing on the battlefield: a Star Wars reality in which Battle Droids, mass-produced by the millions, take the place of human soldiers. To be sure, as the United Nations Special Rapporteur Christof Heyns states, robots are “the next major revolution in military affairs, on a par with the introduction of gunpowder and nuclear bombs.”[i] War once paired a warrior with a weapon; now the weapon is becoming the warrior, making the decision to kill on its own.[ii] The question then arises: with science fiction soon to meet reality, should deadly armed robots replace human soldiers and roam the battlegrounds autonomously?

As strange, silly, or frightening as this may sound, that was the question the United Nations debated at the 23rd Human Rights Council session, held from May 27 to June 14, 2013 in Geneva. Beyond the states taking an interest, Human Rights Watch, which helped form the Campaign to Stop Killer Robots, has produced pages upon pages of reports explaining why the international community must ban these weapons without wasting any precious time.

In this essay, I examine both sides of the debate on armed robots. On one side, NGOs and campaigners urged the international community to ban killer robots for the sake of mankind and put them back into the toy box, where they belong. On the other side, countries producing the killer robots’ predecessors argued that these machines would become a soldier’s new best friend, saving the lives of soldiers and civilians alike. In the end, both sides failed to mention a simple phrase: with great power comes great responsibility.

Banning for the Sake of Mankind

Treating killer robots as a disease that will spread quickly if left unchecked, Human Rights Watch, the Campaign to Stop Killer Robots, and the United Nations have argued for a preemptive ban on deadly, armed, autonomous robots, on the grounds that they have the potential to destroy mankind in a flash. These three groups call for an international treaty, as well as national laws, to ensure that the decision to use lethal force against a human being always rests with another human, never with a robot.

In its 50-page report “Losing Humanity: The Case Against Killer Robots,” the first publication about armed robots by any nongovernmental organization, Human Rights Watch outlines its concerns about these future weapons.[iii] It focuses heavily on the argument that armed robots would not meet the requirements of international humanitarian law, pointing specifically to Article 36 of Additional Protocol I to the Geneva Conventions, which states:

In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.[iv]

First, robots would be unable to distinguish adequately between soldiers and civilians on the battlefield. In the counterinsurgency and unconventional wars that have been on the rise in recent years, combatants wear no uniforms or insignia, no flag and no nametag, so robots would have a difficult, if not impossible, time answering their own question: “Is this person a combatant?”[v] As a result, robots would fall easily for the tricks of terrorists or insurgents who conceal their weapons or exploit the robots’ limits.[vi]

Second, robots would be unable to understand human emotions. If a worried mother ran toward her children, who were playing with toy guns near a soldier, and yelled at them to stop, a human soldier would recognize the mother’s fear and the children’s game, shrug off the incident, and perhaps even join in.[vii] A robot, unfortunately, would see only a person running toward it and armed figures, and it would launch an attack.[viii] If robots cannot understand humans, and vice versa, how can the two coexist on the same battlefield?

Third, as soldiers are removed from the horrors of war and the enemy is reduced to red dots on a screen rather than priceless human bodies, robots will increase the incentive for state and military leaders to go to war, anytime and anywhere, and will devalue the lives of enemy civilians.[ix] Drone warfare, a precursor to killer robots, has already turned strikes into something like a video game: operators feel emotionally detached from killing because, after all, they push buttons from bases thousands of miles away.[x]

Finally, robots would create an accountability gap. Even though a commander would deploy the robot and a programmer or manufacturer would write its software, the robot itself would take the blame for the act: the shooting, the sniping, the killing. Robots would thus make perfect scapegoats for authoritarian leaders seeking to strengthen or retain power; the common excuse being, the armed robot did it![xi] More importantly, robots would undercut the ability to provide victims with meaningful justice.[xii]

In a similar vein, though thinking more in terms of the international community, the UN has called for a moratorium, a temporary halt on the development and deployment of killer robots, while the international community debates the ethical questions. In his “Report of the Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions,” Christof Heyns lists three arguments against killer robots.

First, robots would not possess the qualities needed to comply with international humanitarian law, including “human judgment, common sense . . . understanding of the intentions behind people’s actions, and understanding of values and anticipation of the direction in which events are unfolding.”[xiii] Second, as robots multiplied, states would begin transferring and selling them, setting off a new arms race in the twenty-first century.[xiv] And last, robots would at some point fall into the wrong hands at the wrong time: domestic law enforcement suppressing protests and riots, states terrorizing their own populations, or non-state actors such as criminal cartels or terrorists intercepting them for their own gain.[xv]

To sum up these flaws, here is a possible scenario of my own. Today, state authority (leaders, police, and soldiers), handcuffed by the rule of law and democratic governance, holds the power to use violence in the name of the state. If robots replaced these authorities and went out hunting the enemy, they would void the social contract, under which citizens, through consent, give up some of their rights and obey the state in exchange for the state’s protection of their remaining rights. Moreover, if a robot’s software led it to kill a human by accident, would the victim’s family haul Microsoft before The Hague for war crimes? Since robots and computers are incapable of understanding human behavior (if they could, they would surely get a headache and explode!), the answer is almost certainly no. And since these killer robots would be unable to tell civilians from combatants, they would produce ever larger numbers of civilian deaths, just as American drones do now.

Robot as a Soldier’s New Best Friend

Although killer robots do not yet exist, countries including the United States, China, Germany, Russia, and the United Kingdom already possess their precursors: machines in the air, on land, or at sea that detect, track, and engage threats such as incoming missiles or aircraft.[xvi] The US Phalanx, the US Counter Rocket, Artillery, and Mortar (C-RAM) system, Israel’s Harpy, and the UK’s Taranis are just some of the machines programmed to “fire and forget.”[xvii] As the robotic warfare expert Peter W. Singer suggests, these machines “are merely the first generation—the equivalent of the Model T Ford or the Wright Brothers’ Flyer.”[xviii] So why are these countries so eager to use more and more robots in their wars?

Convinced that the benefits of killer robots outweigh the costs, these countries offer two straightforward arguments in hopes of winning their case. First, none of them wishes to see another plane of caskets arrive home from the war zone, or more of their soldiers struggling with post-traumatic stress. As American drone operations in Afghanistan, Pakistan, and Libya have shown, they argue, robots, unlike human soldiers, are immune to the fear, rage, and sadness that can cloud judgment, distract soldiers from the mission at hand, or, worse, lead to physical or sexual attacks on civilians.[xix] Second, these countries claim that where humans are poor at recording data and memorizing details accurately, robots can calculate precise time intervals and store millions of details in seconds.[xx] They believe, in short, that robots, with their precision, can save civilian lives while shortening wars and cutting their costs.

When nations debated what to do about killer robots at the UN Human Rights Council in Geneva on May 30, 2013, all expressed interest in and concern about these machines and looked forward to further discussions.[xxi] Not all agreed on how to proceed, however. The UK was the only state to oppose the call for a ban on killer robots.[xxii] Brazil and France suggested the Convention on Conventional Weapons, not the Human Rights Council, as the venue for further discussion.[xxiii] And Sweden, which traditionally sponsors the work of the UN Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions, promised to put forward a resolution, complete with operative paragraphs, at the next Human Rights Council session in 2014.[xxiv]

To give a sense of what such a resolution might look like, the US Defense Department, in Directive 3000.09, has provided a roadmap for developing these killer robots.[xxv] The Department promises to test everything thoroughly, from development to employment.[xxvi] It also promises, in keeping with humanitarian law, to train the robots’ operators and to maintain guidelines that lower the chances of a robot going out of control because of a bug or a glitch.[xxvii] More importantly, the Department acknowledges that because these robots will not be perfect, they will remain prone, although the chances should decrease over time, to malfunctions, software coding errors, enemy cyber attacks, and failures of human-machine interaction.[xxviii]

With Great Power Comes Great Responsibility

Ben Parker’s priceless words to Peter Parker in Spider-Man (2002) hold true for killer robots today. The UN lists specific recommendations: asking states to implement national moratoria, inviting the High Commissioner for Human Rights to convene a high-level panel of experts to propose a framework, and calling on NGOs to raise awareness.[xxix] But the international community should take the movie’s line as the very foundation of its argument against killer robots. It must bear in mind that handing a metallic thing full control over war means handing it great power, and with that power comes great responsibility. A robot may have the brainpower, but for a piece of metal to carry that responsibility will be quite a hard task.

The upshot, then, is that we, who for now still hold the steering wheel of the future, need to ask ourselves where we are heading. Our future is not destiny: it is written not in stone but on a computer hard drive, where it can still be edited or deleted, so the choice remains in human hands. But before we make that choice, let us refresh our memories of the Three Laws of Robotics, the set of rules drawn up by the science fiction author Isaac Asimov in his 1942 short story “Runaround” (a brief sketch of their logic, in code, follows the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.[xxx]
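
Asimov’s laws form a strict precedence hierarchy: each law yields to the one above it. Purely as an illustration, here is a minimal sketch in Python of how that precedence might be encoded; the Action type and its fields are hypothetical assumptions for this essay, not part of any real weapons-control system:

    # A minimal, hypothetical sketch of Asimov's precedence logic.
    # Nothing here reflects a real autonomous-weapon API; the fields of
    # Action are illustrative assumptions only.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool           # would carrying this out injure a human?
        inaction_harms_human: bool  # would *not* acting let a human come to harm?
        ordered_by_human: bool      # was this action commanded by a human?
        risks_self: bool            # does the action endanger the robot itself?

    def permitted(action: Action) -> bool:
        # First Law: never harm a human, and never stand idle while one is harmed.
        if action.harms_human:
            return False
        if action.inaction_harms_human:
            return True  # must act, whatever the orders or the risk to itself
        # Second Law: obey human orders, except where the First Law forbids them.
        if action.ordered_by_human:
            return True
        # Third Law: preserve itself, unless the laws above say otherwise.
        return not action.risks_self

    # A human orders the robot to fire on a person; the First Law vetoes it.
    print(permitted(Action(harms_human=True, inaction_harms_human=False,
                           ordered_by_human=True, risks_self=False)))  # False

Of course, the hard part is not this precedence logic but the predicates themselves: deciding whether harms_human is true of a mother sprinting toward children armed with toy guns is precisely the human judgment that the reports cited above argue machines do not possess.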

So which will it be, humans, which will it be? Do the robots go back into the toy box, or do we make these metallic figures our new friends?

Bibliography

BBC News. “UN Mulls Ethics of ‘Killer Robots.’” May 29, 2013. Accessed June 10, 2013. http://www.bbc.co.uk/news/world-europe-22712752.

Campaign to Stop Killer Robots. “Consensus: Killer Robots Must Be Addressed.” May 28, 2013. Accessed June 12, 2013. http://www.stopkillerrobots.org/2013/05/nations-to-debate-killer-robots-at-un/.

Heyns, Christof. “A/HRC/23/47: Report of the Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions.” April 9, 2013. Accessed June 12, 2013. http://daccess-dds-ny.un.org/doc/UNDOC/GEN/G13/127/76/PDF/G1312776.pdf?OpenElement.

Human Rights Watch. “Losing Humanity: The Case Against Killer Robots.” November 2012. Accessed June 10, 2013. http://www.hrw.org/sites/default/files/reports/arms1112_ForUpload.pdf.

Kurfess, Thomas R. Robotics and Automation Handbook. Boca Raton, Florida: CRC Press, 2005.

Sharkey, Noel. “America's Mindless Killer Robots Must Be Stopped.” The Guardian, December 3, 2012. Accessed June 9, 2013. http://www.guardian.co.uk/commentisfree/2012/dec/03/mindless-killer-robots.

Singer, Peter W. “Unmanned Systems and Robotic Warfare.” Hearing before the Subcommittee on National Security and Foreign Affairs of the House Committee on Oversight and Government Reform, March 23, 2010. Accessed June 11, 2013. http://www.brookings.edu/testimony/2010/0323_unmanned_systems_singer.aspx.

United States Department of Defense. “Directive 3000.09: Autonomy in Weapon Systems.” November 21, 2012. Accessed June 12, 2013. http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf.

End Notes

[i] Christof Heyns, “A/HRC/23/47: Report of the Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions,” April 9, 2013, accessed June 12, 2013, http://daccess-dds-ny.un.org/doc/UNDOC/GEN/G13/127/76/PDF/G1312776.pdf?OpenElement, 5.

[ii] BBC News, “UN Mulls Ethics of ‘Killer Robots,’” May 29, 2013, accessed June 10, 2013, http://www.bbc.co.uk/news/world-europe-22712752.

[iii] Human Rights Watch, “Losing Humanity: The Case Against Killer Robots,” November 2012, accessed June 10, 2013, http://www.hrw.org/sites/default/files/reports/arms1112_ForUpload.pdf.

[iv] Ibid., 21.

[v] Ibid., 30-31.

[vi] Ibid.

[vii] Ibid., 38.

[viii] Ibid.

[ix] Ibid., 39.

[x] Ibid., 40.

[xi] Ibid., 42.

[xii] Ibid., 45.

[xiii] Heyns, “Report of the Special Rapporteur,” 14.

[xiv] Ibid., 7.

[xv] Ibid., 16.

[xvi] Ibid., 8.

[xvii] Ibid., 8-9.

[xviii] Peter W. Singer, “Unmanned Systems and Robotic Warfare” (hearing before the Subcommittee on National Security and Foreign Affairs of the House Committee on Oversight and Government Reform, March 23, 2010), accessed June 11, 2013, http://www.brookings.edu/testimony/2010/0323_unmanned_systems_singer.aspx.

[xix] Human Rights Watch, “Losing Humanity,” 37.

[xx] Noel Sharkey, “America's Mindless Killer Robots Must Be Stopped,” The Guardian, December 3, 2012, accessed June 9, 2013, http://www.guardian.co.uk/commentisfree/2012/dec/03/mindless-killer-robots.

[xxi] Campaign to Stop Killer Robots, “Consensus: Killer Robots Must Be Addressed,” May 28, 2013, accessed June 12, 2013, http://www.stopkillerrobots.org/2013/05/nations-to-debate-killer-robots-at-un/.

[xxii] Ibid.

[xxiii] Ibid.

[xxiv] Ibid.

[xxv] United States Department of Defense, “Directive 3000.09: Autonomy in Weapon Systems,” November 21, 2012, accessed June 12, 2013, http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf.

[xxvi] Ibid., 2.

[xxvii] Ibid.

[xxviii] Ibid., 13-14.

[xxix] Heyns, “Report of the Special Rapporteur,” 21-22.

[xxx] Thomas R. Kurfess, Robotics and Automation Handbook (Boca Raton, Florida: CRC Press, 2005), 1-4.


About the Author(s)

Daniel Golebiewski holds a BA in Political Science from John Jay College of Criminal Justice, City University of New York. He is an MA candidate in Human Rights Studies at Columbia University in the City of New York and a research assistant at John Jay’s Center for International Human Rights. For additional information, please visit his website: www.danielgolebiewski.com.

Comments

The United States military already operates a series of autonomous combat systems, generically known as MWDs. Most weigh between 60 and 70 pounds, carry specialized sensor packages, are capable of following general commands, and can use limited force against individuals without direct handler intervention.

For those who don't speak Pentagon, I am referring to the Military Working Dog, which the US military has used for decades. Legally, a dog is a furry weapon with a personality, but a weapon or sensor system nonetheless, employed by its handler.

An armed robot would be about the same as a dog, just with much larger teeth, and I don't imagine commanders would trust one to do any more than they trust the canine.