Small Wars Journal

Considering Military Culture & Values When Adopting AI

Mon, 06/15/2020 - 11:57am

By Marta Kepe

The advent of autonomous technologies and artificial intelligence (AI) may lead to an unprecedented shift in the military profession and the values normally associated with it. Technology could distance humans even further from the battlefield and demote their primacy in making decisions about war. It could also upend the culture of the U.S. military. The changed relationship between the military and technology may trigger fissures within the Armed Forces, as well as between the military and the rest of society. These tensions could leave U.S. and allied forces vulnerable to exploitation and may lead to a reduction in military effectiveness and the ability to project power.

As the U.S. military contemplates increasing the number and diversity of AI and other autonomous systems, it should consider timely means of mitigating any potential negative side effects.

Historically, combat has expressed human values and reflected cultural attitudes. In ancient Greece, combat was considered a test of character. The Greeks infused battle with characteristics drawn from Hellenic culture itself: personal freedom, discipline, technological development, camaraderie, initiative, and adaptation. Homer described the military leader as a hero who rides alone into battle, symbolizing the separation between the leader and his soldiers. The blind bard's definition of heroism reflected the societal separation of the aristoi (the best men, the wealthiest members of society), who were expected to be superior warriors not only because of the training available to them but also because of the equipment they could afford.

In the centuries that followed, the ideal warrior continued to be defined by bravery, loyalty, valor, and discipline, as well as the ability to acquire superior technology and weapons. Yet the introduction of new weaponry has often threatened cultural norms about valor and military morality.

When English peasants used longbows to kill French horse-mounted aristocrats, the French church turned against the use of "missile weapons," considering them a threat to the societal structure. In the 15th century, the first users of gunpowder were called "cowards and shirkers who would not dare to look in the face the men they bring down from a distance with their wretched bullets." The machine gun and unmanned weapons were likewise denounced as cowardly in their time, because they distanced the soldier from the threat. We should expect disruptive technological innovation and revolutions in military affairs to require not only doctrinal changes but also a perhaps lengthy period of cultural adaptation.

The U.S. military defends national interests and provides security, but the way it organizes and equips itself, fights, and honors its fighters also reflects and shapes the values of American society. Those values, in turn, mold the individual soldier, forge a shared identity for the military services, permeate military codes and regulations, and influence how the military is viewed by society. Reed Robert Bonadonna argues that the societal contribution of soldiers has "gone beyond warfighting. It has also been cognitive, cultural, ethical and even ontological."

Much of the discussion about military AI has revolved around what will happen as humans are removed from the battlefield. But humans have been distancing themselves from direct combat for more than two centuries. Commanders no longer lead from the front. Emerging technologies are already blurring the boundaries between the physical and digital worlds, with strategic, operational and tactical impacts. The warrior ideal has changed. AI will accelerate this process.

Numerous countries are developing AI and remote-technology applications for intelligence collection and analysis, logistics, cyber operations, information operations, and command and control. The U.S. and several allied nations have deployed remotely piloted, automated, and partially autonomous systems in combat and non-combat roles. For example, Raytheon's Phalanx Close-In Weapon System is a computer-controlled weapon that counters close-in threats at sea and on land, carrying out threat detection, evaluation, tracking, and engagement. In October 2018, the Marine Corps' "Sea Mob" program tested AI-directed boats. AI has also been used in operations in Iraq and Syria.

Provided there is a substantial uptake of AI in the military, humans could shift from fighting to supervising machine battle. This may lead to new shifts in the skills and functions of military personnel. Prowess in combat may be less prized than proficiency in developing state-of-the-art AI and maintaining fleets of AI-enabled weapons. There may be an increased need for programmers and engineers, which could affect the traditional make-up of enlisted and officer staffs. If these valued personnel do not deploy in combat, however, how will military promotion and retention systems reward them? What new caste of military service personnel could develop to operate remote battlefield technologies, and how would that affect hiring procedures, physical standards, career paths, benefits, and awards?

This issue burst into the public sphere in 2013, after the Department of Defense (DoD) announced the introduction of a Distinguished Warfare Medal for drone pilots. Combat veterans were outraged. The DoD quickly backtracked, rejecting the idea that drone pilots should receive their own combat medal. Instead, in 2016 it added an "R" (for remote) device to noncombat medals awarded for exceptionally meritorious service.

Some research suggests that humans who use AI or autonomous technologies may experience a decline in their ability to make decisions that involve moral considerations, self-control, or empathy. The risk is that as people become accustomed to working with machines, they will become more comfortable delegating decision-making to the machine. Could increased reliance on AI for decision-making lead to moral lapses? With a reduced decision-making role in the military, and with people no longer putting their lives at risk in war and crises, humans may no longer be perceived as individuals who exemplify the values of service, duty, courage and honor.

These changes may be slow and materialize only over decades. But as we give AI a greater role in analysis, decision-making support, and operating equipment, we may need to prepare for a decline in the perceived value of human decision-makers. What if they are just not as good as the machines? What will be the value of a human soldier in a military human-machine relationship? Officer status, for example, may in the future require a different mix of trust, courage and confidence than what is so valued today.

Eventually, these shifts may affect not only military culture but also the support systems for military families and veterans. If service members no longer put their lives at risk during war and crises and play a reduced role in decision-making, they may cease to be seen as exemplars of service, duty, courage and honor. That, in turn, may trigger a reevaluation of the compensation veterans deserve, the value of maintaining a large standing military force, or even the reverence accorded to military personnel and veterans in American society.

For all this cultural discomfort, there are good reasons for the military to adopt more remote technologies and AI. The dispersed battlefield will become ever harder to manage with humans who simply cannot make decisions fast enough. Military and political decision-makers will not find it any easier to order humans into harm’s way, especially as the range and lethality of weaponry increases. The United States will need to develop and deploy superior AI technology to keep up with great-power competitors and other potential adversaries who are investing heavily in the technology. Finally, the U.S. military exists to deter or win wars, and it is valued by American society to the extent it succeeds. If AI applications help the United States prevail, both the military and civilian culture will be forced to adapt.

We are not there yet. Today, AI remains limited in its tasks and adaptability. But with time and the necessary technological developments, AI could perform increasingly complex tasks, such as integrating activities across the levels and phases of a mission, from planning through launch, ingress, operations, egress, and recovery. As mission complexity grows, so could the level of autonomy, as suggested by the recent use of remotely operated and increasingly autonomous vehicles and aircraft such as Dragon Runner, Predator, Fire Scout, Global Hawk, and LMRS.

It is unlikely that humans will be excluded from the observe-orient-decide-act (OODA) loop anytime soon, as AI decision-making remains a black box. In fact, the DoD has announced that AI will not be allowed to make any "kill" decisions. Still, we can expect that the "slow and steady shift of authority from humans to algorithms" already under way in civilian enterprises may migrate into the military.

Decision-makers will need to educate society about the use of AI and the human soldier's relationship with it to improve public understanding of a still largely misunderstood technology. As our image of the "ideal warrior" changes, leaders may need to reassure the public that the human soldier is still relevant and that the application of AI is not a cowardly way of waging war. This discussion needs to include technology experts, researchers, policymakers, defense planners, and other military professionals. Officers and enlisted personnel need to contribute to the conversation about the future role of the military profession as it deploys AI. How well the U.S. military anticipates the cultural influence of AI proliferation may well determine its ability to absorb it and avoid vulnerabilities arising from cultural fissures between the services, and between the military and the rest of society.

About the Author(s)

Marta Kepe is a senior defense analyst at the nonprofit, nonpartisan RAND Corporation and a Senior Non-Resident Fellow at the Atlantic Council's Scowcroft Center for Strategy and Security. Her research topics have included the relationship between new technology trends and military capability requirements, and the impact of new technologies on skills and training needs in the government and defense-industrial domains. She specializes in European and transatlantic defense, comprehensive defense systems, military logistics, and the defense industry. Previously she worked for RAND Europe, where her work supported European defense industrial development and multinational defense procurement decisions.

She received her M.A. in Security Studies from the Georgetown University School of Foreign Service.

Comments

Tom Triumph

Tue, 07/28/2020 - 9:54am

Behavioral economics not only backs up the dangers involved in adopting AI but shows they are a near certainty.

At best, anchoring behavior will bias any human oversight of decisions. Once the AI makes an assessment, based on the most "objective" data available, it would be on the operator not only to have an alternative but to be able to justify it beyond "a hunch." It is unlikely that anyone able to overcome this bias would make it through a training system--nor would you want them to, as AI is designed to be correct in MOST scenarios. For those other scenarios, human oversight would be weak at best.

At worst, risk aversion will stop any disagreement with the AI. How many soldiers would overrule an AI assessment when that decision is not solely on them?  If a soldier follows the AI, and it goes wrong, they can blame the system.  The upside to individual thought or action is thin while the downside is huge.

Of course, all of this would be decided in an extremely short period of time. And, as the battlefield speeds up, AI assessment will become even more essential. Leadership needs to think about how to counter the biases behavioral economists tell us are there before deploying these systems further.