Small Wars Journal

Does K-2SO Deserve a Medal?

Fri, 05/12/2017 - 5:38am

Dan Maurer

If a robot “covered your six” in a firefight, would you show it gratitude?  If the robot were the medic that patched you up after you took shrapnel to the leg, or the RTO that called in an airstrike that saved your platoon, would you honor its performance?  If it took shrapnel itself and went down, would you put yourself in greater danger to render aid to it? 

This essay introduces these questions and some of their moral implications.[i]  Though the subject of robot warriors is, obviously, far-out and the legality of “lethal autonomous weapons” is currently under fire, its object—morals and ethics under fire—is much closer to home.  As we will see, asking about the former (even if only as a thought experiment) helps us reflect more carefully on the latter.  This brief effort will pose questions without solutions—probably an uncomfortable number of them—and flit between references to popular science fiction and Eighteenth Century British philosophy.  Whatever it may lack in conventional structure, I hope to make up for with dialectical argument.

Mimicking or Reality…and Does it Matter?

“Do androids dream of electric sheep?” science fiction author Philip K. Dick once asked.  As the inspiration for the later film Blade Runner, Dick’s novel imagined a dark, dystopian future in which manufactured people were indistinguishable from the rest of us, and feared their own forced “retirement” (being unplugged for good) so much that they would fight back aggressively for their own survival.

John Markoff, a senior science writer for the New York Times, recently observed that our tendency to anthropomorphize our technology is a clue that we will not be able to help ourselves from thinking of advanced Artificial Intelligence (AI)—so advanced that it would pass a Turing Test and be a generic part of everyday life—as “one of us,” named and all.  Given what appears like a deep-seated fear of handing over the decision to kill to automation, it is not inconceivable that the much-harder-to-stop advance of technology will need a human face, if you will, to prevent the claim that we have dehumanized the decision.  (On a smaller scale, some of us do this with our cell phones, our tablets and laptops, and our cars already.)  Markoff quipped that he hoped for the day when such intelligent machines were nothing more and nothing less than “colleagues.”  Not our masters, but also not our slaves.[ii]

In Dick’s novel, such thinking machines were certainly a caste below.  Aside from the moral questions of using (or abusing) such imaginary android humans, and their right to avoid their own cancellation, the awkward-sounding question in the book’s title asked us to consider the extent to which AI shares in what biological intelligence takes for granted as routine, uneventful, and commonplace—dreaming.  Is AI ultimately capped by its silicon parts, able only to be programmed to mimic what “real people” do, say, think, imagine, and dream?  In a way, it is like asking whether Disney World’s Animal Kingdom is a true capture, a microcosm, of the real world—after all, it has real animals living their real lives amid real flora and fauna that certainly appear to us, and to them, like “the wild”—or whether it is as artificial and manufactured as any other penned-in zoo, or more so.  I think the answer does not much matter for practical purposes, and can never be determined definitively one way or the other, because Nature has no need for a definitive definition; we do not need one in order to interact with it safely, productively, or for our own enjoyment.  But safe, productive, and enjoyable do not necessarily mean “moral.” 

But simply posing that question about which is more real demonstrates that there is more than one way to look at something we usually take for granted.  In doing so, if we are carefully reflective, we can uncover our assumptions, reveal our biases, and project our preferences.  Do we prefer the contrived zoo over the combative wild for the practical reasons of safety and convenience?  Do we view one as fundamentally more educational and enjoyable than the other?  Do our answers mean that it does not matter whether we think of one as more real than the other? 

The same question arises when we talk of genetically-modified, “test-tube” babies, or the cloning of animals for food production or organ harvesting.  Is there a difference—in terms of ethical interaction, our scruples, our standards for how to behave and what to believe is worthy—between the au naturel organism and the organism grown in a lab?  Defending and attacking those assumptions, biases, and preferences inevitably speaks to our morals.  And so the question—one premised on a technological innovation that seems ever-advancing—glows with heated, passionate ethical undertones.  None of this, I should admit, is anything new. 

This essay, on the other hand, poses a related but new question for the warfighter, and for others who think about how soldiers will treat one another, honor each other, or sacrifice themselves for each other during the hot flush of battle.  In other words, it asks how the interpersonal ethics of loyalty and courage under fire are grounded.  Instead of parsing nuanced definitions of what is real, this issue pokes the bear of how “consciousness” is defined and the moral choices that follow from accepting that definition.[iii]  The reader here should strap on suspension-of-disbelief goggles and imagine a future in which a modern military has built, trained, and employed sophisticated technology with sufficiently human personality and body characteristics to enable individual “droids” to accompany soldiers within small tactical units that close with and destroy designated enemies. 

Now, it is at least possible that by the time our culture’s technological progress has advanced enough to employ such droids, other innovations may have rendered warfare obsolete altogether, or at least changed it in such manifold ways that human beings no longer fight each other in war at all, having outsourced it to remote-controlled, or entirely autonomous, combat drones.  Or perhaps, instead of fighting side-by-side with robots, we will have created opportunities for hybrid humans, or soldiers so biologically enhanced that they perform as if they were made of steel and wires and microchips.  Perhaps we will designate our most martially gifted warriors to sit inside the cockpits of massive robot exoskeletons, duking it out one-on-one to determine the outcomes of very specific international disputes, as in Robot Jox.  Or maybe we end up outlawing all forms of AI that have any role in armed combat, and “thinking machines” that compute their way through the fog and friction of war will never exist.

But this is my flight of science fiction fancy, and my vision of a future combat zone tends to look more like scenes out of Rogue One than Dune.  Assuming a (far away) future of war includes robots with programmed or evolved personalities like C-3PO or K-2SO, identities that learn from mistakes and work, sometimes, to protect a companion human from danger (think Schwarzenegger in Terminator 2), or machines that incite emotional connections like Samantha in Her, there remain questions not often asked.  As with the earlier Animal Kingdom analogy, the material difference between the real and the artificial is, ultimately, immaterial.  Both exist, and both do things that seem indistinguishable, interacting with the outside environment based on internal features—some of those features are man-made, designed, or programmed, while some are learned, and some are genetically demanded.  So a real moral question is not whether you would prefer to go on safari in Zimbabwe or Orlando, but how—if the circumstances demanded it—humanely you would treat a wounded lion in either setting.[iv]

Applied to droids in combat, how humanely—or, to be more specific, how soldierly—will we feel toward them?  What scruples would divide us?  What social or cultural norms would bind us?  If they cannot shed their own blood and feel pain from the sting of combat, will they still be inside the band of brothers?  Asking ourselves, as a thought experiment, whether we would honor a droid’s performance in battle, or whether we would mourn a droid’s loss, or whether we would—try to picture this—sacrifice ourselves (or at least risk our lives) so that a droid might “survive,” also demands that we consider what each of these ancient martial virtues and ethics really means to us here and now. 

Meet J.O.E.

To make things simple, I pose just one highly speculative scenario that forces us to confront these questions.  Imagine a droid that, while in no way looking human, is assigned to an infantry squad.  The droid comes with a manufacturer’s alpha-numeric label, but each droid tends to get named by its initial “owning” squad.  Let’s say this one is named “J.O.E.”  Joe has two arms and two legs, can keep up with Specialist Torres in a foot race, and can—if need be—qualify on a rifle as an expert.  Joe can see fairly well through fog, mist, dust, sand, and heavy rain.  Joe can see at night.  Joe has no need for sleep.  Joe, aside from being made of ink-black metallic and polymer materials and standing a foot taller than everyone else, is a trusted member of the squad.  Joe, though he cannot “eat,” stays with his squad during chow.  Though he cannot sleep, he stands guard on their perimeter during long patrols.  Joe can haul extremely bulky loads, and is an ambulatory arms room and aid station stocked with weapons, ammunition, and medical and survival gear.  Joe can translate languages, is well-versed in local history and cultural studies, and is able to connect to this far future’s version of the Internet.  Joe is a walking, talking library of military tactics and doctrine, and is programmed to engage with each soldier on a personal level, able to easily reference the pop culture of the time and to carry on casual conversation, like IBM’s Jeopardy-winning Watson or Amazon’s Alexa.  Joe records and transmits everything he sees and hears, as well as every action the squad takes.  Joe can interpret his own operating code and knows his own design history—so, in a sense, Joe understands his purpose.  Joe, in other words, perceives, has a memory, and is self-aware.  And Joe can tell a dirty joke.

What Joe cannot do, in this hypothetical, is disobey its programming (which presumably means following the commands of certain humans who possess certain rank and responsibilities).  It cannot (not just “will not”) wander off from its mission, desert, and turn itself over to the enemy, even if it believes that its short-term safety or maintenance would be better off in their hands.  Joe cannot malinger, or feign shortcomings in order to avoid work.  Joe cannot quit.  In other words, Joe lacks what we would interpret as free will.  “He” cannot feel pain, pride, greed, anxiety, stress, fear, exhaustion, jealousy, or peer pressure.  Joe, ultimately, can be turned off.  What Joe cannot do, in other words, is suffer (psychically or physically).  If Jeremy Bentham could weigh in on this development, he would tell us that the capacity to suffer is the critical, dispositive feature that humans must weigh whenever we consider how to treat a thing.[v]  To suffer is to be worthy of moral consideration.

But even if we cannot personally relate to Joe, in part because Joe is incapable of sharing in mutual suffering the way humans do, we have a knack for humanizing non-humans all the time.  Cats are “selfish,” “sly,” “prideful” and “mysterious”; dogs are “friendly,” “clumsy,” or “mischievous.”  As Stanford mathematics professor Keith Devlin said, “we humans are suckers for being seduced by the ‘if it waddles and quacks, it’s a duck’ syndrome.  Not because we’re stupid; rather, because we’re human.”[vi]  It is entirely plausible that human-shaped droids with human-like personalities and voices (even if only programmed that way) will engender attitudes and responses from us, at least in some regards, that look and feel a lot like those we have with each other.  If you can get angry at your computer and call it “stupid!,” or name your smart phone or SUV and refer to it as “she,” then it is obvious that—at some level of our consciousness—we need to humanize our tools and technologies when we are so unalterably dependent upon them.  Now imagine if your literal survival was also dependent on their “choices.” 

Rules and Values

The starting point for discussing how humanely or soldierly we will treat our non-human comrades should be a basic understanding of how we generate, subconsciously or through deliberate pondering, our own ethical choices.  Ethics can be defined as the set of moral obligations one person manifests toward another, or toward oneself.  But what is an “obligation” under this definition?  It is not a contract entered into with a meeting of the minds, where each party to the contract ascertains a cost and benefit to be lost or gained by participating in the arrangement.[vii]  Rather, an obligation is a duty, responsibility, or commitment that has three features a contract does not have.  First, it may be one-sided (non-reciprocated).  Second, it may be terribly costly (or risky) to one’s finances, reputation, personal freedom, safety, or happiness.  Third, it lacks a concomitant net benefit that is known and expected beforehand.  One’s choice to obey, rather than breach, the terms of a contract may be an ethical choice, but the contract’s terms imposing obligations are not per se “ethical” demands.

Where do these moral obligations come from—are they prima facie encoded into our genes?  Are they taught by institutions?  Do they arise because one is a consenting member of an organization—part of a stamp of “collective identity”?[viii]  Are they modeled by parents and friends?  Are they prescribed by religions or proscribed by laws and regulations?  The answer, of course, depends on which obligation, and in most cases the outcome—the practical application—of one’s moral obligations shields or hides from the observer which source is to be lauded or blamed.  In many cases, an obligation such as “do not kill another person out of anger or revenge” shares parentage among many or all of these sources. 

All of the potential sources of these moral duties, commitments, or responsibilities have common traits: they express either some “value” or some “rule” (or both).  A “value” is simply a principle or standard that justifies and explains a person’s choice in how to behave toward another (e.g., “all people should be treated with equal dignity and respect, regardless of race, gender, religion, sexual orientation, nationality, etc.”).  According to Jeremy Bentham’s utilitarian view of ethics, to act ethically is to act with the goal of producing the “greatest possible quantity of happiness.”[ix]  But a long line of Bentham’s critics since the Eighteenth Century has argued that utility is not the only possible motivating value or spring for ethics.[x]  One could believe that to act in rational self-interest, always and in all ways, is the sine qua non of morality, for we are nothing if not individual agents with free will.[xi]  Or, alternatively, one could believe that morality consists only in comporting oneself with a view to the effect or outcome one’s actions create—the consequences.

But that isn’t the sum of the story.  Aristotle taught that to be truly just, one must act justly—that is, one’s actions express and build one’s virtue.  Virtue, he said, is the “observance of the mean” between the two poles of deficiency and excess.  In the famous case of action in combat, Aristotle found the mean of “courage” to be a temperate middle ground somewhere between the excess of rashness and the deficiency of cowardice.  Being courageous, he wrote, creates a virtuous cycle.[xii]

We become temperate by abstaining from pleasures, and at the same time we are best able to abstain from pleasures when we have become temperate.  And so with courage: we become brave by training ourselves to despise and endure terrors, and we shall be best able to endure terrors when we have become brave.[xiii]

Immanuel Kant, contra Bentham and Aristotle, looks not at the effect or the end-state purpose animating the action, but only at its motive: whether the action was done for the sake of duty alone (what he calls “reverence for the law”), without the inclination to acquire or seek out the possible psychological fulfillment, financial incentive, or public esteem one might earn from the action.[xiv]  Kant identified what he believed to be the foundational bedrock of morality—the single axiomatic principle or duty that generates ethical behavior regardless of the circumstances, conditions, or context.  This “categorical imperative” can be summarized as: your action toward another is moral if it is based on a principle (a “maxim,” in his words) which you would, if you could, enlarge to become the basis for a universal law or command, applicable to everyone everywhere, and which does not contradict itself.[xv]  In other words, I should sacrifice my immediate safety to secure the safety of others if I would want that same imperative to apply unconditionally to everyone else (and therefore benefit me in time of such need).  Likewise, I should not lie to someone else about my need for money, enticing them to act philanthropically, and unnecessarily, on my behalf, because if everyone lied in this way, no request for charity could be believed and my lie would be futile (just as theirs would be). 

A “rule” is an imperative (to do or not do some act) imposed by an authority figure considered legitimate in the eyes of those to whom the rule applies, and it carries negative consequences (also deemed legitimate, at least in the hypothetical sense, by all those subject to the rule) if one breaks the rule by ignoring it.  Values and rules are not truly independent ideas.  A value may or may not be codified as a rule.  All rules that are not arbitrary (random, uninformed, indiscriminate) or capricious (impulsive, unpredictable) are ipso facto explained by some value.  Rules followed long enough by enough people can become, in effect, a new value in and of themselves.  Finally, values can help answer the question: “how important is this rule to follow, notwithstanding the consequences for breaking it?”  

Questions with no Easy (“Binary”) Answers

So, let’s return to our initial question: Given all that J.O.E. the Droid can do, and all that “he” cannot, would it deserve your moral consideration?  Would you sacrifice yourself for Joe?  Would you honor its sacrifice?  Would you reward it for its performance?  Following Kant’s categorical imperative, it would seem there is a need—under the circumstances—to expand the definition of “humanity” (for Joe would in all likelihood be considered a “rational actor”) and act selflessly for a fellow member of your squad, as you would expect it of them (and it) too.  Consider how Sebastian Junger described the tight bonds of loyalty and affection among members of an infantry platoon fighting in Afghanistan.

As a soldier, the thing you were most scared of was failing your brothers when they needed you, and compared to that, dying was easy.  Dying was over with.  Cowardice lingered forever . . . heroism [on the other hand] is a negation of the self—you’re prepared to lose your own life for the sake of others.[xvi]

But this raises a series of complicated follow-ups.  First, does it matter that the droid cannot, at anything deeper than a superficial level, relate to the soldier?  It cannot fear solitude or pain like you do.  It cannot feel pride or comradeship like you do.  It has no memory of home, or emotional appreciation of music, like you do.  It has no personal relationships with family or friends, like you do.  This raises the specter of whether we owe our moral choices or duties toward one another because of who that other is, or because of what he/she/it does.  Is our soldierly duty deontological—the duty exists unconditionally, and one does one’s duty regardless of the benefit, the cost, or who the beneficiary of exercising the duty might be—or is it consequentialist (we act in a certain way because the effect of acting that way is desirable)?[xvii]  Or does the nature of our moral obligation change depending on context?  If so, what contexts actually matter, and why?

Second, under the Kantian ethics of the “categorical imperative,” our modern military ethos treats our teammates, our comrades, and those under our command as ends in themselves, not as means to an end.[xviii]  Would the same care apply to a machine?  Under what conditions would or should we act with concern, compassion, and respect toward something that has no natural moral compass itself?  If we chose not to extend such moral consideration to Joe the Droid, would it be because it is incapable of provoking our empathy?  Or because it cannot suffer like we do?  Or because it could not choose—in a biological sense of individual agency—to be there with you?  All of these reasons seem sensible.  But what does this say about the current state of our soldiers’ ethical indoctrination, which focuses not on the human qualities and capabilities of the person next to you, but instead on loyalty and selfless effort for the group’s welfare (“I will never leave a fallen comrade” and “I will always place the mission first”) and a Kantian sense of doing one’s duty for the sake of doing one’s duty? 

Third, how do we program a droid to be capable of “applied ethics”?  Do we first program initial prima facie or pro tanto moral stances, like “do no harm to others; treat others equally; improve the conditions of others; keep one’s promises; communicate truthfully”?  How do we teach it to choose one over another when they appear to conflict?  Are these best thought of as “rules” or as “values”?  If we cannot agree on which moral stances are primary (something akin to Isaac Asimov’s three laws), and will not even try, what does that say about our ability to train or inculcate that moral capacity into our soldiers and leaders?
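To make the rules-versus-values distinction concrete, consider a deliberately toy sketch in Python, with every class name, action, and weight purely hypothetical and invented for this essay, and with no pretense that real battlefield autonomy would be built this way.  It encodes “rules” as hard constraints that strike impermissible actions from the list outright, and “values” as weights that rank whatever survives the filter.

```python
# A toy, purely illustrative sketch: every action, flag, and weight below is
# hypothetical and invented for this essay, not a real or proposed system.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    harms_others: bool = False    # would the act injure a person?
    breaks_promise: bool = False  # would it violate a prior commitment?
    mission_value: float = 0.0    # contribution to the assigned mission
    welfare_value: float = 0.0    # contribution to others' well-being

# "Rules" as hard constraints: any candidate action that violates one is off
# the table entirely, in the spirit of Asimov-style laws or Kantian duties.
RULES = [
    lambda a: not a.harms_others,
    lambda a: not a.breaks_promise,
]

# "Values" as weights: among permissible actions, score what remains, in the
# spirit of a consequentialist or utilitarian calculus.
VALUE_WEIGHTS = {"mission_value": 1.0, "welfare_value": 1.5}

def choose(actions: List[Action]) -> Optional[Action]:
    permissible = [a for a in actions if all(rule(a) for rule in RULES)]
    if not permissible:
        return None  # the hard case: every available option breaks a "rule"
    return max(
        permissible,
        key=lambda a: sum(getattr(a, attr) * w for attr, w in VALUE_WEIGHTS.items()),
    )

if __name__ == "__main__":
    options = [
        Action("suppressive fire near civilians", harms_others=True, mission_value=0.9),
        Action("carry the wounded soldier to cover", welfare_value=0.9, mission_value=0.2),
        Action("continue the assault alone", mission_value=0.6),
    ]
    best = choose(options)
    print(best.name if best else "no permissible action")
```

The sketch, of course, dodges the hardest question by returning nothing at all when every option breaks a rule; a human squad leader in a firefight does not have that luxury, which is precisely why the choice between encoding moral stances as filters or as weights is itself a moral design decision.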

Finally, as we mechanize and digitize more and more of our warfighting capability, infusing it with “human” attributes (either emotional or physical), will we (the programmers and the end-users) nevertheless become less and less human, and more “siliconized” in our interactions with each other?  Early last century, British Major-General J.F.C. Fuller wrote: “[t]he more mechanical become the weapons with which we fight, the less mechanical must be the spirit which controls them.”[xix]  Philosopher Daniel Dennett cautions that, as our computers become ever more powerful tools, we must not let our cognitive abilities atrophy.[xx]  What about our morals?  Will our warrior ethos—how we humans care for one another in the context of our mission on the battlefield—atrophy too?  Will our ethics become more and more consequentialist or utilitarian in nature? 

To the extent such an evolution (or, some would say, regression) toward warfighting AI autonomy mutes our tendency for gut-reactive, anger-driven, aggressively competitive, or plainly counter-intuitive responses and decisions under stress, isn’t it a good thing to evolve toward?  But to the extent it dims our tendencies for irrational sympathy, humor, compassion, empathy, or mercy, isn’t it a bad thing?  Take a moment to think about this, as applied to the infantry squad in the firefight, for your answers to these last two problems signal how you would choose, if you could, to design Joe the Droid and “his” manufactured features.  And that design decision, in the end, tells whether (and how) you would choose to behave ethically toward that collection of wires, plastic, silicon, and glass.   

As C-3PO once said, “Now don’t you forget this!  Why I should stick my neck out for you is far beyond my capacity!”  Forgiving each other’s mistakes, honoring others’ successes, and risking ourselves for another’s safety are manifestations of our moral code.  They find application in our warrior ethos.  If such obligations are also made part of the droid’s code, will we not find ourselves, one day, bleeding for them and crying over them too?  Will we not seek out their opinion as much as we would seek out their translation, or their calculation of the odds of survival?

End Notes

[i] I gratuitously concede that this essay barely breaches the surface of an area deeply analyzed in the scholarly philosophical literature and well-researched in studies of current and theoretical Artificial Intelligence.  My only qualifications for such a shallow dive are that I have engaged in the applied ethics of decision-making under fire in combat, and as a practicing lawyer have some daily descent into the morass of moral consequences.  My apologies to any reader possessing a stronger familiarity with the relevant academic literature that I have glibly referenced in support of what I hope is simply a thought-provoking exercise.

[ii] John Markoff, “Our Masters, Slaves, or Partners?,” in What to Think About Machines that Think (John Brockman, ed.) (New York: Harper Perennial, 2015), pp. 25-28.

[iii] Thanks to the anonymous reviewer at the Royal United Services Institute (RUSI) for orienting me to this distinction.

[iv] Obviously, I do not intend to assert that this is the only moral problem protruding from these scenarios.  For example, do we even have the right to create such artificial enclosures (and if so, based on what presumptions, and for what purpose); do we have the right to create artificial humans and suppose, impart, design, or deny them their own rights?

[v] Jeremy Bentham, An Introduction to the Principles of Morals and Legislation (1780) (Mineola, New York: Dover Publications, 2007), p. 311, n.1.

[vi] Keith Devlin, “Leveraging Human Intelligence,” in What to Think About Machines that Think (John Brockman, ed.) (New York: Harper Perennial, 2015), pp. 74-76.

[vii] The basic structure of a “contract” is some form of a mutual agreement in which “consideration” (something of value, material or otherwise) is offered by one and accepted by another.

[viii] Michael J. Sandel, Justice: What is the Right Thing to Do? (New York: Farrar, Straus and Giroux, 2009), p. 213.

[ix] Bentham, An Introduction to the Principles of Morals and Legislation, p. 310.

[x] See, e.g., John Rawls, A Theory of Justice (Cambridge, Ma.: Harvard University Press, 1999), pp. 23-28.

[xi] As one reviewer helpfully pointed out to me, “free will” is by no means securely understood or even accepted.  Philosophy, physics, medical science, psychology, computer science, and religion all offer substantial arguments for or against a deterministic nature of reality (in the physical universe as well as in what we perceive to be our own consciousness).  See, e.g., John Searle, Freedom and Neurobiology: Reflections on Free Will, Language, and Political Power (New York: Columbia University Press, 2007), pp. 5-6, 39-41.  This essay will not contribute to its resolution (it is an open question whether or not this is my volitional choice, or was predestined by a long causal chain of events involving my neurons and their reactions to my environment over my lifetime).

[xii] Aristotle, The Nicomachean Ethics, Book II (trans. Harris Rackham) (1996), pp. 32-36.

[xiii] Ibid, p. 35.

[xiv] Immanuel Kant, Groundwork of the Metaphysic of Morals (trans. H.J. Paton) (1785) (New York: Harper Perennial, 2009), pp. 65-69, 88.  Kant explains that “reverence for the law” means consciously acknowledging that your will to act is subordinate to the dictates of the law or source of the duty; in other words, it is a “value which demolishes my self-love.”  Ibid, p. 69, n. 2.

[xv] Ibid, p. 70 (“I ought never to act except in such a way that I can also will that my maxim should become a universal law”).

[xvi] Sebastian Junger, War (New York: Hachette Book Group, 2011), pp. 210-11.

[xvii] Nassim Nicholas Taleb, The Bed of Procrustes (New York: Random House, 2016), p. 6.

[xviii] Kant, Groundwork, pp. 95-96.  This is the second formulation Kant used to explain the categorical imperative: “every rational being exists as an end in himself, not merely as a means for arbitrary use by this or that will . . . [therefore] act in such a way that you always treat humanity, whether in your own person or in the person of another, never simply as a means, but always at the same time as an end.” 

[xix] J.F.C. Fuller, Generalship, Its Diseases and Their Cure: A Study of the Personal Factor in Command (1936), p. 13.

[xx] Daniel C. Dennett, “The Singularity—An Urban Legend?,” in What to Think About Machines that Think (John Brockman, ed.) (New York: Harper Perennial, 2015), pp. 85-88.

 


About the Author(s)

Major Dan Maurer, U.S. Army, is a contributing writer at USMA’s Modern War Institute, a judge advocate and former platoon leader in combat, and the first lawyer to serve as a Fellow on the Chief of Staff of the Army’s Strategic Studies Group.  He has a JD and LLM, and has published widely in various law reviews, as well as at Modern War Institute and in Military Review, Small Wars Journal, and the Harvard National Security Journal.  His book, Crisis, Agency, and the Law in U.S. Civil-Military Relations, will be published this spring by Palgrave Macmillan Press; his monograph, The Clash of the Trinities: A Theoretical Analysis of the General Nature of War, will be published this summer by the Strategic Studies Institute Press. His chapter on how the Star Wars films illustrate themes of modern strategic civil-military relationships will appear in a book forthcoming from Potomac Press in early 2018.

The views expressed are those of the author and do not reflect the official position of the US Army, the Judge Advocate General’s Corps, or the Department of Defense.