Small Wars Journal

RPA Ethics: A Focused Assessment


Robert Hunter Ward


At the turn of the century, the advent of remotely piloted weapons systems unveiled a new technology that would soon allow the United States to accomplish objectives more safely and effectively.[i] Since then, armed remotely piloted aircraft (RPAs) have given the United States a significant strategic advantage, and they will likely continue to do so in the years to come.[ii] That advantage stems from their ability to wage remote warfare, keeping pilots safe while capitalizing on airpower’s inherent lethality. However, a series of valid ethical concerns has accompanied the emergence of RPAs and, as with any weapons system, we have a duty to analyze these ethical dimensions and allow morality to inform policy. Unfortunately, many of the alarming concerns that gain public attention stem from criticism of warfare (often air warfare) in general and are usually circumstantial to a particular location and conflict. Instead, I will examine three issues exclusively centered on RPA ethics: asymmetric risk, moral hazard, and destructive behavior. This focused assessment is as important for what it includes as for what it deliberately excludes. Ultimately, by narrowing in on the ethics of RPAs themselves, I find that they are ethical. However, U.S. policymakers must consider the ethical dimension while constructing RPA policy so that this technological innovation can continue its effective operations above reproach.

To preface this analysis with a semantic note: the terms “Unmanned Aerial Vehicle” (UAV) and “Unmanned Aerial System” (UAS) are valid when discussing these platforms, though they can imply a semi-autonomous nature and are therefore not the best choices. “Drone” is even more strongly associated with autonomous systems, and we should avoid it when discussing remotely piloted platforms. Some scholars deepen the confusion between RPAs and autonomous weapons further by offhandedly discussing “pilotless aircraft” that will soon bring “robotic warfare.”[iii] Not only does the U.S. military refrain from using armed autonomous weapons in combat, but doing so is strictly prohibited by Department of Defense policy.[iv] Since the U.S. Department of Defense officially uses the term “RPA,” I will use only that term hereafter.[v]



Figure 1: MQ-9 Reaper [vi]

Focused Approach

In analyzing RPAs, I will not consider the ethics of unarmed RPAs used for surveillance, nor will I examine autonomous weapons systems; I will focus solely on armed RPAs. Much of the existing literature on RPA ethics revolves around the employment of RPAs in countries outside “hot” combat zones, as in the Global War on Terror under President Obama. It includes, for example, the ethics and legality of RPA strikes against local insurgents and transnational terrorists dressed as civilians in Pakistan, Yemen, and Somalia.[vii] Though issues of collateral damage, sovereignty infringement, and fighting another country’s fights are valid concerns, they do not actually address the morality of RPAs. While retired Army General Abizaid and foreign policy expert Dr. Brooks, among other scholars, have written intriguing reports on RPA policy, most of the matters they discuss are irrelevant to a pure examination of RPA ethics. The criticism of a “whack-a-mole” approach, for example, is truly a critique of air warfare in counterterrorism strategy, not of remote warfare specifically.[viii] The idea that RPA strikes take “away from non-kinetic means” is a criticism of war itself, favoring diplomacy.[ix] Many scholars argue that civilian casualties and sovereignty violations attributed to RPAs lead to anti-United States sentiment, poor precedent, international blowback, and widened instability.[x] These concerns, again, are inherent to combat, mostly air warfare, in specific locations, rather than to the employment of RPAs universally. For these reasons, I will proceed on the assumption that air warfare, though not permissible in every case, is a generally legitimate, violent extension of policy.

To be clear, while many people argue that RPAs produce more collateral damage than other weapons, the claim is false.[xi] Even though President Obama withheld the numbers of civilian deaths and we lack truly reliable statistics, that political issue is beside the point; the technology is what matters here. With long loiter times and high-definition cameras, RPA pilots actively watch and wait until civilians are outside the impact zone before striking. If a civilian enters the blast radius while the missile is airborne, the weapon can be diverted.[xii] Because they carry smaller munitions, RPAs create smaller blast radii than other aircraft. For these reasons, an RPA is a precise weapon and a much less deadly option than an F-16 or a cruise missile.[xiii] Those who oppose RPAs because they strike populated areas and kill civilians should realize that it is the target selection they are against, not RPAs. It seems critics would prefer a non-interventionist or even pacifist approach, believing the alternative to RPA strikes is no strikes at all. It is possible that riskless combat lures leaders to strike too often, and I will address this issue below. In many cases, however, the realistic alternatives to RPA strikes are traditional airstrikes and cruise missiles, which would only lead to more collateral damage.

Asymmetric Risk

Critics argue that it is unfair to use technology “absent[ing] one party… from the scene of combat.”[xiv] Within this criticism there are two issues: one of public relations and one of ethics. The former is the fear that the United States will be characterized as cowardly, angering local populations. Unique to anti-RPA (as opposed to anti-airstrike) sentiment, adversaries convince local populations that the U.S. Air Force is cowardly, arguing it is immoral to kill someone, inevitably sometimes civilians, from 7,000 miles away while sitting in Nevada.[xv] Indeed, it makes sense that airmen sending in a machine to wage war and take lives, rather than entering the combat zone themselves, might infuriate people who expect a fairer fight. On the other hand, it is a natural technological progression for militaries to protect themselves by creating distance from their targets. We see this in the evolution from swords to bows and arrows, to guns, and to bombs. There is merit to the opinion that guns and bombs are acceptable but that taking the soldier completely out of the warzone is a step too far. Realistically, however, technology will inescapably progress from here, and autonomous weapons are on the horizon. Although the enemy might use propaganda to portray the United States as a coward, possibly radicalizing otherwise neutral populations, the U.S. military simply cannot control how these groups choose to respond. Indeed, it would be foolish to have the technological capability to keep U.S. pilots safe and choose not to use it in order to appease the enemy.



Figure 2: RPA Targeting [xvi]

In addition to issues of global image, asymmetric risk also poses an ethical concern. Anthropologist Hugh Gusterson illustrates this asymmetry with a true story: “Mehsud, unaware of his exposure, is watched by faraway [RPA] operators who can see him as if close up, reclining on the roof of his house…. They get to frame the picture while he does not even realize he is in it. Without warning, he is killed as if by a god’s thunderbolt from the sky.”[xvii] To clarify, the RPA operator confirmed Mehsud was a terrorist with “as if close up” imagery. Even so, this illustration poses an ethical question: is it unfair to assume a lesser risk than your adversary in combat? But what some scholars describe as “radical technological asymmetry” is nothing new to combat, especially to “neocolonial counterinsurgency contexts” where “outnumbered occupying forces use superior technology to subdue indigenous populations.”[xviii]

Furthermore, it is unfair to suggest that operating an RPA imposes no personal cost on the airmen who fly them. In fact, RPA pilots often develop familiarity with targets after watching them for long periods. Because of this emotional connection, killing the targets, even by pressing a button from thousands of miles away, is difficult and can contribute to Post Traumatic Stress Disorder (PTSD). Though a “moral distance” exists with other weapons systems such as cruise missiles and manned aircraft, those operators do not watch their targets or develop a connection with them.[xix] Additionally, the schedule RPA pilots face, going to war during the day and then rushing to their kids’ soccer games in the afternoon, has created significant emotional, mental, and spiritual strain. For these warriors, the temporal and proximal gaps between home and war create mental confusion, cause severe friction within families, and take a significant toll on the RPA pilot force.[xx] So, to those who dislike RPAs because the operators do not put themselves in harm’s way: can we really say that of RPA pilots?

Moral Hazard

Building on the critique that RPAs wage war in a cowardly manner, without risking human life, critics also charge that RPAs, with such negligible risk, create a moral hazard for U.S. leadership.[xxi] This ethical issue interacts directly with politics and often depends on who is President and how morally disciplined he or she is in employing RPAs. Though the same argument can be made about air warfare generally, albeit to a lesser extent, many argue that because RPAs are riskless, accessible, quick, and inexpensive, targeted killings will become too easy and too frequent.[xxii] The dangerous question to ask is: do RPAs enable leaders to strike at times and places where they would not otherwise bother, only because the riskless nature of RPAs gives no incentive not to strike?

Critics would say that leaders, feeling disconnected from RPA pilots, might use the ease of RPAs to justify strikes without truly considering the harm they can inflict.[xxiii] Though this is plausible, it reveals a lack of trust in the disciplined judgment of American leaders more than anything else. Further, if we consider RPAs a natural progression of innovation, this so-called “moral hazard” looks in part like the instinct to fear advanced technology simply because it is not well understood. Even if leaders order RPA strikes where they would not without the technology, this is not unlike firearms giving soldiers the ability to distance themselves from a threat and assume less risk. Still, a 7,000-mile separation is far greater than any a sniper enjoys, so this ethical concern is valid, and leaders must be careful not to wage riskless war unrestrainedly.[xxiv]

Destructive Behavior

Thus far, I have mostly discussed the ethics of RPAs in combat zones abroad, as well as the important judgment of the leaders who make targeting decisions. Now, perhaps most importantly, I will examine the ethics of remote warfare as it pertains to the actions of the operators, the RPA pilots. It is somewhat disturbing, indeed, to think that an American within the safety of the United States participates in a daily war, paid to take human life with a joystick. This picture seems to lack the human element, an important component of warfare. Without a doubt, the human element is crucial to combat because combatants must make decisions that combine empathy with an aggressive warrior ethos. Pundits have suggested that remote warfare causes destructive behavior because operators inevitably come to see it as a video game rather than real war. As a result, some assume RPA pilots are reckless, acting with destructive judgment and no discrimination. Though the logic is understandable, and it makes sense that remote warfare perplexes skeptical Americans, the charge of destructive behavior is far from the truth.

“The allegation that there is a ‘PlayStation mentality’ to targeted killings by [RPAs] remains to be proven,” says political scientist Michael Boyle.[xxv] As with any new long-distance weapons innovation, people are uncomfortable with RPAs. “[RPAs] permit killing from a safe distance – but so do cruise missiles and snipers’ guns.”[xxvi] While some argue that this dissociative style of killing desensitizes combatants to the idea of ending someone’s life, RPA pilots feel a surprising intimacy when they witness their target’s every move for hours, days, or even months on end.[xxvii] At the end of this patient observation, RPA pilots watch their screens closely as their target is obliterated, a far “closer,” though virtual, connection than other pilots or missile operators experience. It is not surprising, therefore, that RPA pilots suffer from PTSD as a consequence of their stressful job, not as a coincidence independent of some “video gaming” experience.[xxviii] Unfortunately, there have been RPA pilots who take their job lightly, or who perhaps joke to hide the psychological damage, equating their work to a video game.[xxix] However, critics should resist drawing undue attention to a few “bad apples” and instead focus on PTSD rates and the alarming fact that RPA pilots suffer from this disorder just as much as pilots who physically fly in combat zones.[xxx]



Figure 3: RPA Ground Control Station [xxxi]

Morality and Legality

The jus in bello component of Just War Theory provides a moral framework for ethically employing any weapon. In war, the military is ethically obliged to act with discrimination and proportionality. To honor discrimination, which distinguishes a highly selective killing from murder, strikes should target only known, guilty terrorists and should avoid collateral damage. To be proportional, the United States should use no more force than is needed to destroy the threat, matched to its actual size and importance.[xxxii] If leaders follow these moral guidelines in their RPA targeting decisions, every strike, even if frequent and low-risk, is ethically justified.

The Law of Armed Conflict and International Humanitarian Law (IHL) are outdated and do not reflect recent technological advancements, such as RPAs, that complicate killing.[xxxiii] At its very foundation, IHL assumes combatants “mutually occupy a distinct physical space in which war is conducted.”[xxxiv] There will therefore be no “simple fix” that adjusts the law to accommodate a new weapon. Rather, an entire revision may be in order, one that accounts for remote warfare and is undergirded by a reformed, technologically informed mindset.


The White House must increase its transparency about the RPA program and establish stricter congressional oversight, as this will minimize many ethical concerns.[xxxv] The government must publicize the number of civilian casualties so that Congress and the American people can hold leadership accountable, thus lessening the risk of moral hazard. Further, the Defense Department should consider strengthening, though without adding needless redundancy, the RPA targeting process. It should then publicize an unclassified version to assure Americans that, though an RPA pilot pushes the button, many intelligent people are involved in ensuring each strike meets ethical criteria. With strike statistics, a basic knowledge of how the Air Force conducts RPA warfare, and data about the psychological costs RPA pilots bear, many Americans will come to understand that distance created by technology does not lead to destructive behavior. Without immediate policy changes, the ethical issues we see now may grow in severity.


To a large extent, the question of whether remote warfare waged by RPAs is ethical depends on whether one believes riskless war is dangerous, destructive, and overly easy. The ethical issues of asymmetric risk, moral hazard, and destructive behavior are difficult considerations, but necessary ones to reconcile. Ultimately, despite these concerns, the RPA is an ethical instrument of air warfare that leaders should take advantage of, provided they do so judiciously, as they should with any weapon. Philosopher Bradley Strawser writes, “If an agent is pursuing a morally justified yet inherently risky action, then there is a moral imperative to protect this agent if it is possible to do so … As a technology that better protects (presumably) justified warriors, [RPA] use is ethically obligatory, not suspicious.”[xxxvi] So, though there is an argument that RPAs are immoral, the better argument might be that using RPAs is morally necessary.

End Notes

[i] Robert Farley, “Drone Warfare,” in Grounded: The Case for Abolishing the United States Air Force (Lexington, KY: University Press of Kentucky, 2014), 147 - referencing the use of armed RPAs as early as 2001.

[ii] Gen John P Abizaid and Rosa Brooks, “Recommendations and Report of The Task Force on U.S. Drone Policy,” Recommendation (Washington, D.C.: Stimson, April 2015), 26; Michael J. Boyle, “The Legal and Ethical Implications of Drone Warfare,” The International Journal of Human Rights 19, no. 2 (February 17, 2015): 115; Hugh Gusterson, “Drones 101,” in Drone: Remote Control Warfare (Boston: MIT Press, 2016), 22.

[iii] James Igoe Walsh, “Is Technology the Answer? The Limits of Combat Drones in Countering Insurgents,” in Coercion: The Power to Hurt in International Politics, ed. Kelly M. Greenhill and Peter Krause (New York, NY: Oxford University Press, 2018), 161; Elinor C. Sloan, Modern Military Strategy: An Introduction, Second edition (London; New York: Routledge, Taylor & Francis Group, 2017), 51.

[iv] “Department of Defense Directive 3000.09: Autonomy in Weapon Systems” (2012), 3.

[v] Kevin McCaney, “A Drone by Any Other Name Is … an RPA?,” Defense Systems, May 23, 2014; “U.S. Air Force - Career Detail - Remotely Piloted Aircraft Pilot,” U.S. Air Force, accessed April 15, 2019.

[vi] Colin Clark, “Reaper Drones: The New Close Air Support Weapon,” Breaking Defense, May 10, 2017, sec. Acquisition, Air, Strategy & Policy.

[vii] Abizaid and Brooks, “Recommendations and Report of The Task Force on U.S. Drone Policy,” 38; Boyle, “The Legal and Ethical Implications of Drone Warfare,” 105, 108, 112–14.

[viii] Abizaid and Brooks, “Recommendations and Report of The Task Force on U.S. Drone Policy,” 19.

[ix] Abizaid and Brooks, 19.

[x] Abizaid and Brooks, 9–10, 19, 36–37; Daniel Byman, “Why Drones Work: The Case for Washington’s Weapon of Choice,” Foreign Affairs 92, no. 4 (2013): 34; Micah Zenko, “Reforming U.S. Drone Strike Policies,” Council Special Report (Council on Foreign Relations, Center for Preventive Action, January 2013), 17, 24; Boyle, “The Legal and Ethical Implications of Drone Warfare,” 116; Audrey Kurth Cronin, “Why Drones Fail: When Tactics Drive Strategy,” Foreign Affairs 92, no. 4 (2013): 51; Matthew Crosston, “Future Challenges in Drone Geopolitics,” Journal of Strategic Security 7, no. 4 (2014): iii.

[xi] Abizaid and Brooks, “Recommendations and Report of The Task Force on U.S. Drone Policy,” 24–25; Lauren Wilcox, “Bodies,” in Making Things International 1: Circuits in Motion, ed. Mark B. Salter (Minneapolis, MN: University of Minnesota Press, 2015), 201; Byman, “Why Drones Work,” 32.

[xii] Zenko, “Reforming U.S. Drone Strike Policies,” 6, 10; Gusterson, “Drones 101,” 8; Abizaid and Brooks, “Recommendations and Report of The Task Force on U.S. Drone Policy,” 9.

[xiii] Byman, “Why Drones Work,” 34, 37.

[xiv] Hugh Gusterson, “Toward an Anthropology of Drones: Remaking Space, Time, and Valor in Combat,” in The American Way of Bombing: Changing Ethical and Legal Norms, from Flying Fortresses to Drones, ed. Matthew Evangelista and Henry Shue (New York: Cornell University Press, 2014), 202.

[xv] Gusterson, 201.

[xvi] Kevin Maurer, “She Kills People From 7,850 Miles Away,” The Daily Beast, October 18, 2015, sec. Down Range.

[xvii] Gusterson, “Drones 101,” 3–4.

[xviii] Jai C. Galliott and Bradley J. Strawser, review of “Targeted Killings: Law and Morality in an Asymmetric World,” by Claire Finkelstein, Jens David Ohlin, and Andrew Altman, Ethics 124, no. 1 (October 2013): 184; Gusterson, “Toward an Anthropology of Drones,” 202.

[xix] Boyle, “The Legal and Ethical Implications of Drone Warfare,” 106–7; Wilcox, “Bodies,” 208.

[xx] Gusterson, “Toward an Anthropology of Drones,” 198; Joseph Pugliese, “Drones,” in Making Things International 1: Circuits and Motion (Minneapolis, MN: University of Minnesota Press, 2015), 233.

[xxi] Zenko, “Reforming U.S. Drone Strike Policies,” 8; Boyle, “The Legal and Ethical Implications of Drone Warfare,” 121.

[xxii] Boyle, “The Legal and Ethical Implications of Drone Warfare,” 121.

[xxiii] Boyle, 121; Zenko, “Reforming U.S. Drone Strike Policies,” 8.

[xxiv] Gusterson, “Toward an Anthropology of Drones,” 201.

[xxv] Boyle, “The Legal and Ethical Implications of Drone Warfare,” 106.

[xxvi] Abizaid and Brooks, “Recommendations and Report of The Task Force on U.S. Drone Policy,” 25.

[xxvii] Boyle, “The Legal and Ethical Implications of Drone Warfare,” 106; Gusterson, “Toward an Anthropology of Drones,” 198; Klem Ryan, “What’s Wrong with Drones? The Battlefield in International Humanitarian Law,” in The American Way of Bombing: Changing Ethical and Legal Norms, from Flying Fortresses to Drones, ed. Matthew Evangelista and Henry Shue (New York: Cornell University Press, 2014), 220; Abizaid and Brooks, “Recommendations and Report of The Task Force on U.S. Drone Policy,” 25.

[xxviii] Abizaid and Brooks, “Recommendations and Report of The Task Force on U.S. Drone Policy,” 25; Boyle, “The Legal and Ethical Implications of Drone Warfare,” 106; Gusterson, “Toward an Anthropology of Drones,” 198; Farley, “Drone Warfare,” 154.

[xxix] Pugliese, “Drones,” 233.

[xxx] Farley, “Drone Warfare,” 154.

[xxxi] Vegas Tenold, “The Untold Casualties of the Drone War,” Rolling Stone, February 18, 2016, sec. Culture News.

[xxxii] Alexander Moseley, “Just War Theory,” in Internet Encyclopedia of Philosophy (U.K.: Internet Encyclopedia of Philosophy, n.d.), 8, 10–12.

[xxxiii] Ruth A. Miller, “Drones: A Case Study,” in Snarl: In Defense of Stalled Traffic and Faulty Networks (Ann Arbor, MI: University of Michigan Press, 2013), 124; Gusterson, “Toward an Anthropology of Drones,” 198.

[xxxiv] Gusterson, “Toward an Anthropology of Drones,” 198.

[xxxv] Abizaid and Brooks, “Recommendations and Report of The Task Force on U.S. Drone Policy,” 37–39; Rachel Stohl, “An Action Plan on U.S. Drone Policy: Recommendations for the Trump Administration” (Washington, D.C.: Stimson, June 2018), 20–21.

[xxxvi] Ryan, “What’s Wrong with Drones? The Battlefield in International Humanitarian Law,” 221–22.

About the Author(s)

Robert Hunter Ward is a U.S. Air Force officer, currently earning a Master of Arts in International Security within the Schar School of Policy and Government at George Mason University. The views expressed in his articles are those of the author and do not necessarily reflect the official policy or position of the U.S. Air Force, the Department of Defense, or the U.S. government.