Small Wars Journal

An Ethical Dilemma: Weaponization of Artificial Intelligence

Sun, 01/29/2023 - 5:51pm

 


By Justin K. Steinhoff

 

Over the past decade, technological transformations have changed the world we live in, with advances ranging from voice assistants, facial recognition software, and cryptocurrency markets to fully autonomous self-driving vehicles and neural network sensing technologies. Today, voice assistant devices built on artificial intelligence (AI) and machine learning (ML), such as Amazon Echo and Google Home, are part of the daily lives of more than half of all United States (US) adults (Juniper Research, 2017). The advancement of AI is leading a global revolution that is changing how we interact (Department of State, 2022). Equally, the US Department of Defense (DoD) has already invested in transformative technology and AI/ML adoption (Chief Digital and Artificial Intelligence Office [CDAO], 2022a).

Numerous companies collect and use customer information to forecast demand and develop sales strategies and targeted advertising campaigns (Goddard, 2019). More troublingly, organizations such as Oracle and Cambridge Analytica have relied on vast amounts of information, dubbed 'big data,' to create psychometric profiles designed to influence the population (Goddard, 2019; Porotsky, 2019). Likewise, the US Government operationalizes vast amounts of big data, AI, and ML to inform military operations (Department of State, 2022). For example, in June 2022, the DoD CDAO and the US Air Force conducted a joint exercise to evaluate a project called Smart Sensor Brain, an AI-enabled autonomous unmanned aerial system that can conduct "automated surveillance and reconnaissance functions in contested environments" (CDAO Public Affairs, 2022, para. 3). Using transformative technologies creates moral challenges and numerous ethical dilemmas as the US Government weaponizes AI, ML, and autonomous systems.

The Problem

Weaponizing advanced technologies that rely on AI, ML, and autonomy leads to significant ethical and moral challenges that the US Government needs to address appropriately. In 2017, at the National Governors Association meeting, the chief executive officer of Tesla, Inc. and SpaceX, Inc., Elon Musk, described AI safety as what he sees as the greatest threat facing the US (Molina, 2017). Musk argued that the government needs to consider AI regulation due to the "fundamental existential risk for human civilization" (Molina, 2017, para. 1). The concept of AI has been present for decades, and as advancements in ML, specifically neural network systems, continue, new capabilities will keep expanding the realm of the possible.

Albeit a simplistic illustration of the power of AI and ML, nearly 40 years ago, writer and director James Cameron released the film The Terminator. Cameron's film depicted a fictional world in which an indestructible humanoid robot is controlled by a superintelligent system named Skynet (IMDb, n.d.). In Cameron's storyline, Skynet is a neural network, the fictional counterpart of what we call AI and ML today. This example demonstrates a particularly dystopian vision of what may be possible with advanced AI. Yet the concern is not purely cinematic: Wissner-Gross and Freer (2013) argue that intelligent behavior can emerge spontaneously from systems acting to maximize their future freedom of action, suggesting that the drive to acquire control may be intrinsic to intelligence itself.

As with most innovative technologies, the US Government is constantly modernizing to remain competitive on any future battlefield. However, multiple ethical dilemmas arise when developing, advancing, and fielding AI-enabled systems that are weaponized, autonomously capable, and highly adaptive. According to Sullivan et al. (2017), the US will not maintain technological overmatch against adversaries on future battlefields due to the convergence of advanced technologies such as AI. With fully autonomous weapon systems in mind, one of the most prominent ethical dilemmas is determining the inflection point at which the US Government should consider AI-enabled decision-making or targeting unethical. Moreover, current US policy requires that weapon systems allow for the "appropriate level of human judgment over the use of force" (DoD, 2017, p. 2). Overall, AI has tremendous positive and negative impacts on society and the future of all technological advancements.
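To make that policy requirement concrete, the minimal sketch below shows one way a human-in-the-loop gate could sit between an AI-generated recommendation and any use of force. The class, function, and confidence threshold are illustrative assumptions for this article, not an actual DoD implementation.

```python
from dataclasses import dataclass

@dataclass
class EngagementRecommendation:
    """A hypothetical AI-generated targeting recommendation."""
    target_id: str
    confidence: float  # model's confidence in its target classification, 0.0-1.0

def authorize_engagement(rec: EngagementRecommendation,
                         human_approval: bool) -> bool:
    """Gate lethal action behind explicit human judgment.

    The AI may only recommend; a human operator must affirmatively
    approve before any engagement is authorized, no matter how
    confident the model is.
    """
    if not human_approval:
        return False           # no human consent, no engagement
    if rec.confidence < 0.95:  # illustrative threshold, not doctrine
        return False           # low-confidence recommendations rejected outright
    return True

# Usage: the system recommends, but the human decides.
rec = EngagementRecommendation(target_id="track-042", confidence=0.97)
print(authorize_engagement(rec, human_approval=False))  # False
```

The design point of such a gate is that autonomy and human judgment are conjunctive conditions: removing the operator's approval short-circuits the decision regardless of model confidence.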

The Impact

In the US Government's AI development and innovation efforts, programs such as human enhancement and lethal autonomous weapon systems were designed to reduce the overall risk to Soldiers. According to the US Congress, "the term 'artificial intelligence' means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments" (William M. (Mac) Thornberry National Defense Authorization Act, 2020, p. 1164). Given the potential of AI and ML to make predictions and decisions, coupled with robotics and autonomy, how Soldiers fight on the battlefield will undoubtedly change. In the example of the Smart Sensor Brain, AI and ML support the autonomous control of multiple sensors, "perceiving, making inferences, and reporting observations without the requirement for ground Processing, Exploitation and Dissemination" (CDAO Public Affairs, 2022, para. 4).

According to Sullivan et al. (2017), highly advanced peer competitors and adversaries of the US will continue to invest in highly disruptive technologies regardless of the position and investments of the US Government. Consequently, the US Government is unlikely to enjoy undisputed superiority over its adversaries as these technologies converge, and it is this convergence, embodied in lethal autonomous weapon systems, that lies at the root of the ethical dilemma of weaponizing AI.

The Root Cause

AI and enhanced ML processes have enabled the increased use of advanced technologies in numerous fields, such as healthcare, the sciences, education, and the military. In the private sector, AI has outperformed Stanford radiologists at screening chest x-rays and accurately diagnosing disease (Husain, 2021). According to Husain (2021), AI's most critical discoveries for humankind are still to come. The ethical dilemma of operationalizing highly advanced AI-enabled technologies stems largely from a grave misunderstanding of AI's true capabilities. Many people characterize AI as a tangible yet dangerous creation, like the Skynet example. However, AI is a science; unlike a nuclear weapon, it is not a detectable, controllable, and monitorable physical object (Husain, 2021). Husain (2021) observes that AI operates at a scale and speed generally unfathomable to humans. Although the solution to this root cause may take decades to develop, the US Government has increased efforts to address the ethical challenges of AI-enabled autonomous weaponry.
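For readers unfamiliar with how such screening systems work, the sketch below shows the general shape of a multi-label chest x-ray classifier. The backbone, label count, and sigmoid output are assumptions drawn from commonly published approaches (DenseNet-based models trained on public x-ray datasets), not the actual Stanford system.

```python
# A minimal sketch of a chest x-ray screening pipeline of the kind
# referenced above. The architecture, weights, and label count are
# illustrative assumptions, not the actual clinical system.
import torch
import torchvision.models as models

NUM_FINDINGS = 14  # e.g., the 14 thoracic disease labels in ChestX-ray14

# DenseNet-121 is a common backbone in published chest x-ray work.
model = models.densenet121(weights=None)
model.classifier = torch.nn.Linear(model.classifier.in_features, NUM_FINDINGS)
model.eval()

def screen(xray: torch.Tensor) -> torch.Tensor:
    """Return per-finding probabilities for a batch of x-ray images.

    Expects a (batch, 3, 224, 224) tensor of preprocessed images.
    Multi-label screening uses an independent sigmoid per finding,
    since a single image can show several conditions at once.
    """
    with torch.no_grad():
        return torch.sigmoid(model(xray))

probs = screen(torch.randn(1, 3, 224, 224))  # dummy image for illustration
print(probs.shape)  # torch.Size([1, 14])
```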

Solution

Acknowledging the need to address the ethical dilemmas of weaponizing AI and ML software, the US Government has enacted DoD-level policies to maintain a system of checks and balances (Sayler, 2022). Within the DoD, the US military must clearly define which efforts AI/ML is capable of enhancing. Additionally, military leaders and operators must train effectively with AI/ML-enabled technologies to leverage them across the range of military operations while executing future human-machine hybrid operations (Mooers, 2022; Sullivan et al., 2017).

The US Army has invested more than $72 million to discover capabilities and drive innovation using AI/ML (ARL Public Affairs, 2019). The Army Research Laboratory, part of the US Army Combat Capabilities Development Command under Army Futures Command, leads the Army's AI modernization. Additionally, the DoD created the Joint Artificial Intelligence Center in 2018 as the military component and the CDAO in 2022 as the civilian oversight component, in line with the US Government's National Artificial Intelligence Initiative Act of 2020 (CDAO, 2022a; CDAO, 2022b). Bolstering the DoD's research and development capabilities, the CDAO established a partnership with Johns Hopkins University's Applied Physics Laboratory (JHU-APL) (CDAO Public Affairs, 2022). Conducting joint research and training exercises with the Army, Air Force, CDAO, and JHU-APL adds a layer of expertise while continuing to address the ethical challenges of military AI on future battlefields. Examining these dilemmas with the ethical processing model will further support the ethical use of AI and ML technologies as innovation and modernization continue.

Ethical Lenses

Making decisions is essential to life. Leaders often encounter situations that require decisions on ethical dilemmas or issues with which they may not always agree. However, leaders can rely on their character, values, and principles to guide their ethical decision-making. Additionally, using the ethical processing model will further support a leader's ethical reasoning and provide a foundation for ethical decisions.

According to the Department of the Army (2019), one of the five characteristics of the Army profession is military expertise. Under this characteristic, Army leaders must demonstrate expertise across four fields of knowledge: leader and human development; moral-ethical; geo-cultural and political; and military-technical. Within the moral-ethical field, Army professionals must be capable of creating "moral solutions to diverse problems" (Department of the Army, 2019, p. 2-7). Developing moral solutions to ethical problems begins with ethical reasoning examined through the ethical processing model.

The ethical processing model consists of four steps: recognizing the ethical conflict or issue, evaluating the options through the three ethical lenses, committing to the ethical decision, and acting upon it (Kem, n.d.). In step two, evaluating the options means examining the issue through the three alternative approaches of the ethical triangle: rules or principles, outcomes or consequences, and virtues or beliefs (Kem, n.d.). Applying the model to the dilemma of lethal autonomous weapon systems on the battlefield ensures that each perspective of the ethical triangle (rules, outcomes, and virtues) informs a thorough understanding of the most favorable ethical decision, as the sketch following this paragraph illustrates.
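One illustrative way to encode Kem's four-step model is as a structured checklist that refuses to advance to action until every lens has been considered. The class and field names below are this sketch's own, not Kem's.

```python
from dataclasses import dataclass, field

# The three lenses of the ethical triangle (Kem, n.d.).
LENSES = ("rules", "outcomes", "virtues")

@dataclass
class EthicalAnalysis:
    issue: str                                       # step 1: recognize the conflict
    evaluations: dict = field(default_factory=dict)  # step 2: evaluate via the lenses
    decision: str = ""                               # step 3: commit to a decision

    def evaluate(self, lens: str, assessment: str) -> None:
        if lens not in LENSES:
            raise ValueError(f"unknown lens: {lens}")
        self.evaluations[lens] = assessment

    def ready_to_act(self) -> bool:
        # Step 4 (acting) should follow only once every lens of the
        # ethical triangle is considered and a decision is committed.
        return all(l in self.evaluations for l in LENSES) and bool(self.decision)

analysis = EthicalAnalysis(issue="Fielding lethal autonomous weapon systems")
analysis.evaluate("rules", "No prohibition today; policy requires human judgment")
analysis.evaluate("outcomes", "Lower risk to Soldiers vs. accountability gaps")
analysis.evaluate("virtues", "What would a leader of good character decide?")
analysis.decision = "Retain human-in-the-loop control"
print(analysis.ready_to_act())  # True
```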

The Rules Lens: Principles-based Ethics

At the top of the ethical triangle, the rules lens examines the dilemma in light of the rules that currently exist, or should exist, regarding what is ethically and morally acceptable (Kem, n.d.). In the case of weaponizing AI, military leaders have identified the ethical dilemma of AI-enabled autonomous weapon systems. However, current "US policy does not prohibit the development or employment of lethal autonomous weapon systems" (Sayler, 2022, para. 2). Furthermore, discussions among senior officials suggest that military leaders may need to invest further in lethal autonomous weapons as global competition advances (Sayler, 2022).

Contrarily, current DoD policy requires that all fully autonomous and semi-autonomous weapons allow appropriate levels of human judgment (DoD, 2017). However, this policy does not apply to certain cyberspace operations and unmanned systems (DoD, 2017). Moreover, there is currently no international definition of, or agreement on, what constitutes an AI-enabled lethal autonomous weapon system (Sayler, 2022). Viewed through the outcomes lens, weaponized AI has the capacity to significantly decrease risk to the force, but it also increases the speed at which future battles will occur.

The Outcomes Lens: Consequences-based Ethics

According to Kem (n.d.), the second ethical approach examines the consequences (or outcomes) of the dilemma, under which ethical decisions are "judged by their consequences depending on the results to be maximized" (p. 5). The outcomes lens weighs the efficacy of action or inaction: how the action maximizes results across interests such as pleasure, refuge, and dignity (Kem, n.d.).

Examining the weaponization of AI through the outcomes lens highlights the question of who is responsible for the actions of a lethal autonomous weapon system when the AI executes lethal action inconsistent with expectations or with the laws of armed conflict (Sullivan et al., 2017). The lens also weighs the consequences of inaction: given the speed at which AI and ML advances occur, the US Government will face a comparative disadvantage if its approach to weaponizing AI remains reactive.

From the global perspective, discussions of AI-enabled lethal autonomous weapon systems have produced calls for preemptive global bans on these weapons due to the ethical concerns they raise. Opponents of the systems highlight numerous challenges, such as operational risks, fragmented accountability, and the proportionality of their use during armed conflict. Correspondingly, more than "30 countries and 165 nongovernmental organizations" have endorsed a global ban (Sayler, 2022, para. 17). Although the US Government does not currently support such a ban, senior government leaders may see the criticality and utility of maintaining lethal autonomous weapon systems when viewed through the virtues lens.

The Virtues Lens: Virtues-based Ethics

The final lens of the ethical triangle is the virtues lens. It differs from the rules and outcomes lenses in that it draws on learning from others rather than on established rules or laws, and it is not guided by the greatest benefit for all (Kem, n.d.). The virtues lens considers the dilemma through a collective understanding of how a person should be; essentially, it asks what ethical decision someone of good character would make (Kem, n.d.).

Weaponizing AI/ML to manufacture lethal autonomous weapons may seem the antithesis of virtuous ethical reasoning. However, senior leaders of the US Government may apply the virtues lens to the weaponization of AI with the understanding that peer and near-peer threat actors have historically disregarded ethics altogether (Sullivan et al., 2017). Although producing and possessing lethal autonomous weapon systems may appear immoral, the US Government can frame the decision as a virtuous one given its understanding of adversarial capabilities. Moreover, the US Government may see the virtue of weaponizing AI through the rules lens as well, understanding that the decision reflects the protection of US national interests.

Conclusion

Transformative technologies such as AI and ML have definitively changed how humans interact and will shape the future of battle. Currently, the US military is developing AI, ML, and other transformative technologies, such as autonomy, to enhance survivability and increase overall success on the battlefield. The ethical problem lies in weaponizing AI/ML-enabled transformative technology to maintain military overmatch and advantage. The dilemma is sharpest with fully autonomous lethal weapon systems operating independently of human control: developing such systems decreases the overall risk to Soldiers in ground combat, yet it raises the prospect of a weapon executing a lethal decision based on AI-derived data without a human in the loop. Decisions on ethically challenging problems require examination through the four steps of the ethical processing model, and solutions to this moral-ethical dilemma require examination through the ethical triangle's three approaches of rules, outcomes, and virtues.

 

References

Army Research Laboratory [ARL] Public Affairs. (2019, March 12). Battlefield artificial intelligence gets $72M Army investment. US Army. https://www.army.mil/article/218354/battlefield_artificial_intelligence_gets_72m_army_investment

Chief Digital and Artificial Intelligence Office [CDAO]. (2022a). Chief digital and artificial intelligence office. https://www.ai.mil/

Chief Digital and Artificial Intelligence Office [CDAO]. (2022b). About the JAIC story. https://www.ai.mil/about.html

CDAO Public Affairs. (2022, June 22). DoD CDAO partners with USAF to conduct developmental test flight of AI and autonomy-enabled unmanned aerial vehicle. https://www.ai.mil/docs/press_release_062222_DoD_CDAO_USAF_Conduct_Developmental_Test_Flight.pdf

Department of Defense [DoD]. (2017). Autonomy in weapon systems. Washington Headquarters Services, Executive Services Directorate. https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf

Department of State. (2022). Artificial intelligence. https://www.state.gov/artificial-intelligence/

Department of the Army. (2019). Army leadership and the profession (ADP 6-22). https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN20039-ADP_6-22-001-WEB-0.pdf

Goddard, W. (2019, January 14). How do big companies collect customer data? https://itchronicles.com/big-data/how-do-big-companies-collect-customer-data/

Husain, A. (2021, November 18). AI is shaping the future of war. PRISM 9(3), 50-61. https://ndupress.ndu.edu/Portals/68/Documents/prism/prism_9-3/prism_9-3.pdf

Internet Movie Database [IMDb]. (n.d.). The Terminator. IMDb. https://www.imdb.com/title/tt0088247/

Juniper Research. (2017, November 8). Amazon Echo & Google Home to reside in over 50% of US households by 2022, as multi-assistant devices. https://www.juniperresearch.com/press/amazon-echo-google-home-reside-over-50pc-us-house

Kem, J. D. (n.d.). Ethical decision making: Using the “ethical triangle.” http://www.cgscfoundation.org/wp-content/uploads/2016/04/Kem-UseoftheEthicalTriangle.pdf

Molina, B. (2017, July 17). Musk: Government needs to regulate artificial intelligence. https://www.usatoday.com/story/tech/talkingtech/2017/07/17/musk-government-needs-regulate-artificial-intelligence/484318001/

Mooers, N. (2022). Shaping the role of AI tools in decision making and the responsibilities of the leaders using them [Manuscript submitted for publication]. US Army Concepts Development Division, Maneuver Capabilities Development and Integration Directorate.

Porotsky, S. (2019, June 10). Cambridge Analytica: The darker side of big data. Global Security Review. https://globalsecurityreview.com/cambridge-analytica-darker-side-big-data/

Sayler, K. M. (2022, November 14). Defense primer: US policy on lethal autonomous weapon systems. Library of Congress, Congressional Research Service. https://crsreports.congress.gov/product/pdf/IF/IF11150

Sullivan, I., Santaspirt, M., & Shabro, L. (2017). Mad scientist: Visualizing multi-domain battle 2030-2050. https://community.apan.org/wg/tradoc-g2/mad-scientist/m/visualizing-multi-domain-battle-2030-2050/210183#

William M. (Mac) Thornberry National Defense Authorization Act, H.R. 6395, 116th Cong., 2d Sess. (2020) (enacted). https://www.congress.gov/116/crpt/hrpt617/CRPT-116hrpt617.pdf#page=1210

Wissner-Gross, A. D., & Freer, C. E. (2013). Causal entropic forces. Physical Review Letters, 110(16), 168702. https://doi.org/10.1103/PhysRevLett.110.168702

About the Author(s)

Master Sergeant (MSG) Justin K. Steinhoff is an active-duty Soldier who has served in the United States Army since February 2005. He is originally from Honolulu, HI, and hails from Tacoma, WA. Since enlisting, he has had the honor of serving all over the world, including eight combat deployments to Iraq, Syria, Afghanistan, and the Republic of the Philippines. His awards and decorations include the Bronze Star Medal, the Meritorious Service Medal (2 Oak Leaf Clusters [OLC]), the Army Commendation Medal-Combat (2 OLC), and the Army Achievement Medal-Combat, to name a few. MSG Steinhoff is currently a student attending the Sergeants Major Academy (Class 73) at Fort Bliss, Texas. He has two children, Jaeden and Kiana.