
How the Robot Sophia Raises Ethical Questions for AI-Enabled Warfare

06.09.2022 at 12:07am

By Christina Huynh

In 2017, Saudi Arabia granted citizenship to a robot known as Sophia. Although Sophia is built on a basic machine-learning model with software designed for general reasoning and responses, she holds more rights than Saudi Arabian women. For instance, Sophia may marry, obtain a passport, travel abroad, and wear any clothing she desires without permission from a male guardian. If Sophia has rights equivalent to those of a male Saudi citizen, then she must also have the right to defend herself or to join the military. The question is: should robots equipped with artificial intelligence determine who lives and dies in war?

The dilemma at hand is the development and use of lethal autonomous weapons systems in warfare. According to Department of Defense Directive 3000.09, “lethal autonomous weapons systems (LAWS) are a special class of weapon systems that use sensor suites and computer algorithms to independently identify a target and employ an onboard weapon system to engage and destroy the target without manual human control of the system.” Robot autonomy falls on a spectrum based on how involved, or “on-the-loop,” humans remain in control; LAWS have full autonomy and operate completely independent of the human (out-of-the-loop). Thus, the introduction of LAWS will change both the ethics and the operational structure of warfare.

Recommended Ban on LAWS

An article by Peter Asaro proposes a complete ban on the creation and use of LAWS in warfare. Asaro argues that unsupervised non-human systems pose a threat to international humanitarian law (IHL) and its principles of distinction, unnecessary suffering, proportionality, and military necessity, since fully autonomous, artificially intelligent robots are unpredictable and free from human control. Asaro also claims that for armed conflict to be legal, only humans should take human lives, in accordance with the requirements of IHL. Unlike humans, autonomous robots have no conscience, and it is difficult to train artificial intelligence to understand the moral and ethical conflicts of war. Asaro further argues that because the justice system is not an automated process, a LAWS violation of an IHL principle raises the question of who is responsible.

Similarly, the Campaign to Stop Killer Robots, a nongovernmental coalition, aims to ban the development, production, and use of LAWS altogether. Echoing Asaro, the coalition argues that LAWS are dangerous because they can select and engage targets without limit. Instead, the coalition suggests that countries invest more in international research on ethics with a humanitarian focus.

Regulation of LAWS

These two sources hold that humans, not robots, must uphold IHL. The framework that IHL encompasses is a subjective matter requiring human conscience, morals, and understanding during times of chaos, characteristics that autonomous robots do not naturally possess. Humans thus have the capacity to adapt and to make morally reasonable, ethical decisions in these gray areas, while robots do not.

Despite efforts to ban the creation and use of LAWS completely, the integration of autonomy into military weapons is inevitable as technology and the use of artificial intelligence continue to evolve. Whether or not LAWS are banned, a prohibition will not stop other countries from developing them illegally or in secret. Thus, several international bodies, including the United Nations (UN), the Convention on Certain Conventional Weapons (CCW), and the Group of Governmental Experts (GGE), regularly host meetings to discuss restructuring the legal framework governing the use of lethal weapons systems and the different levels of autonomy associated with each. The United Nations Institute for Disarmament Research conducts research to develop governmental frameworks and policy related to LAWS and the security issues they pose. The CCW reviews the design of autonomous weapons and decides whether to ban or restrict a weapon’s use in warfare based on its threat to non-combatants and the principles of IHL. The GGE examines the impact of such developments on national and international security, hosting regional meetings among its members to discuss changes or modifications needed to existing artificial intelligence structures and policies.

The Next GGE Meeting in 2022

Therefore, the most efficient way to monitor LAWS and ensure that their creation fits within ethical boundaries is to establish a legalization process that strictly reviews each weapon and how countries plan to use it in war. At its most recent meeting, in February 2022, the GGE discussed the guiding principles already in place, the inclusion of military and technological experts in shaping policy, and the continued participation of High Contracting Parties in reviewing existing autonomous weapons. The next GGE meeting, in June 2022, will further discuss the direction of frameworks and policies relating to LAWS. The GGE’s stance remains unchanged: humans must stay accountable for, and in responsible control of, the weapon systems they operate.

The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of Defense, U.S. Army, West Point, or the U.S. government.

About The Author

Jocelyn Garcia serves as Assistant Editor and Director of Communications at Small Wars Journal while completing her Master’s in Global Security with a concentration in Irregular Warfare at Arizona State University. She is also a fellow at Inter Populum: The Journal of Irregular Warfare and Special Operations by Arizona State University. She holds a degree in Medical Humanities (Pre-Medicine) from Baylor University and is passionate about studying the intersection of health, the human condition, and security. Rooted in a holistic and spiritual upbringing, paired with a rigorous academic foundation, she brings a unique perspective to understanding humanity, science, and global security. Her work focuses on addressing humanity’s most pressing challenges, including SARS-CoV-2, emerging diseases and technologies, advancements in synthetic biology, cybernetics, global health security, and broader global security issues.

The educator who most inspired her during childhood was Lt. Col. Si McCurdy, Ret., her middle school dean and 6th-8th grade history instructor, who recognized her curiosity and analytical skills early on and nurtured them in an academic setting. In the Master of Global Security Program, her favorite course is Irregular Warfare and Competitive Statecraft, taught by our Editor-in-Chief, Col. Jan Gleiman, Ret. While she aspires to attend medical school in the future, she is eager to serve others, and her country, through a security role upon graduation.
