Autonomous Systems and Artificial Intelligence: The Human Dimension
Frank J. Abbott
Autonomous systems and artificial intelligence (hereafter referred to as “A/AI”) are both aspects of future warfare and potential solutions to its challenges. The U.S. Army has begun to accelerate its efforts to explore and exploit this emerging upheaval in military affairs. Guided by the Joint Concept for Robotic and Autonomous Systems and the U.S. Army Robotic and Autonomous Systems Strategy, the Army’s development of A/AI technologies has great potential to increase lethality, enhance survivability, and improve efficiencies.
While exploring these technologies, however, it is critical that we not lose sight of the needs of the Soldier and small team leader. As we begin to design, build, and provide A/AI to tactical organizations, it is easy to become too focused on technical aspects such as software code, open architectures, communications links, and security against hacking and spoofing. To ensure that Soldiers can optimize their performance with A/AI, significant changes must occur across the doctrine, organization, training, materiel, leadership and education, personnel, and facilities domains. In addition, many policies will need to be developed or revised.
This article reviews some of these required changes in terms of the cognitive, physical, and social components of Soldiers and small team leaders. What follows is not an exhaustive list, since further study and experimentation will likely reveal additional topics to be explored. For example, while we examine what tasks A/AI can do, we should also ponder what tasks humans must do, in accordance with the Army Ethic and other factors. In addition, the topics below should lead to a more detailed discussion of the knowledge, skills, abilities, and other attributes that future Soldiers, non-commissioned officers, and officers must possess to fight and win in future conflicts.
The cognitive component refers to the mental activity involved in perception, memory, judgment, and reasoning. It includes problem solving, intelligence, and emotional self-regulation.
As the Army provides Soldiers and leaders with A/AI-enhanced capabilities, one chief concern is cognitive overload. Cognitive overload occurs when a human is provided too much information to process at once or given too many tasks to perform simultaneously. It is difficult enough for Soldiers to carry a traditional combat load while maintaining situational awareness of friendly and enemy forces, non-combatants, and the surrounding terrain. Adding a requirement to monitor a tablet or operate a joystick may decrease rather than increase a squad’s effectiveness. The Army has been examining various interfaces (e.g., voice commands, control devices embedded in gloves) that would ensure that Soldiers in combat do not have to focus on a computer screen and their hands are free to carry and use weapons.
Closely related to cognitive overload is the concept of span of control. Military organizations have long recognized that a leader can effectively manage or supervise only a finite number of subordinates. Before leaders begin to conduct operations with both humans (Soldiers) and machines (robots and autonomous systems), the Army must better understand how span of control could change. It may become feasible for robots to supervise other robots, allowing a single human to control an expanded number of machines. Additional study and experimentation are necessary to find optimum solutions.
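The leverage of layered supervision can be illustrated with a simple, purely notional calculation; the spans and depths below are assumptions chosen for illustration, not doctrinal figures:

```python
# Notional span-of-control arithmetic: if one human supervises a few
# "foreman" robots, and each foreman supervises its own worker robots,
# the human's effective span grows multiplicatively with each layer.
def effective_span(human_span: int, robot_span: int, layers: int) -> int:
    """Total machines ultimately under one human.

    human_span -- robots the human directly supervises
    robot_span -- robots each supervising robot controls
    layers     -- levels of robot-on-robot supervision below the human
    """
    total = 0
    tier = human_span            # machines at the current layer
    for _ in range(layers + 1):
        total += tier
        tier *= robot_span
    return total

# One human, 3 foreman robots, each controlling 4 workers:
# 3 foremen + 12 workers = 15 machines under a single human.
print(effective_span(human_span=3, robot_span=4, layers=1))  # 15
```

The sketch shows why even one layer of machine-on-machine supervision multiplies a human's reach, and why experimentation is needed to find the point where that reach exceeds a leader's cognitive capacity.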
In addition, machines can learn based on new information and experience, just as humans do. Machine learning allows for robots and similar systems to improve performance based on additional data and experiences. Machines can then rapidly transmit their lessons learned to other machines, thus allowing robots and autonomous systems to quickly adapt to a changing operational situation. The risk, however, is that if a machine learns the wrong lesson, that wrong lesson may be broadcast across the force at nearly the speed of light. For example, if a robot witnesses a child throwing an explosive device, that robot may “learn” that all children are threats. Soldiers must therefore have the ability to not only immediately recognize a bad lesson learned, but to instantly correct (reprogram) the machines without having to rewrite complex software code. In fact, it is highly likely that the Army will need to develop comprehensive education and training programs for leaders to train their A/AI systems just as they train their Soldiers. The Army’s future doctrinal publications on unit training management may very well include the training of robots, analytical programs, and decision-making aids.
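A notional sketch of such fleet-wide lesson sharing, in which a leader's correction propagates as quickly as the bad lesson itself, might look like the following. The class and method names are illustrative only, not any fielded system's interface:

```python
# Notional sketch of fleet-wide lesson sharing with a human veto.
# All names here are invented for illustration.
class LessonBus:
    """Broadcasts learned rules to every subscribed machine."""
    def __init__(self):
        self.machines = []

    def subscribe(self, machine):
        self.machines.append(machine)

    def broadcast(self, lesson: str):
        for m in self.machines:
            m.lessons.add(lesson)

    def retract(self, lesson: str):
        # A leader's correction must propagate just as fast
        # as the bad lesson did.
        for m in self.machines:
            m.lessons.discard(lesson)

class Robot:
    def __init__(self, name: str):
        self.name = name
        self.lessons = set()

bus = LessonBus()
squad = [Robot(f"r{i}") for i in range(3)]
for r in squad:
    bus.subscribe(r)

bus.broadcast("children are threats")    # one robot's bad lesson spreads
bus.retract("children are threats")      # the leader's instant correction
```

The key design point is the `retract` path: whatever architecture distributes machine learning across a force must give leaders an equally fast channel to undo it, without rewriting software.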
It is not reasonable for all future Soldiers and leaders to be software coders. It is also not reasonable to expect leaders to pause a tactical operation so that A/AI can be programmed for a new or unexpected mission or task. As much as possible, then, A/AI must use natural language so that leaders can perform the same troop leading procedures for both humans and machines. Through natural language (just as people today speak to Amazon’s Alexa or Apple’s Siri), A/AI must be able to comprehend specified tasks, derive implied tasks, and understand commander’s intent. For example, when given a command to “go to the bank” in an urban setting, the robot must understand whether it should proceed to a building or to the nearby river. Robots must also be able to recognize and respond to hand and arm signals, which requires that they frequently observe the designated leader for such non-verbal instructions while they perform other tasks.
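The “bank” example can be reduced to a toy illustration of context-driven disambiguation. A real system would rely on far richer language models; the keyword sets below are invented purely for illustration:

```python
# Toy disambiguation of "bank" using mission-context keywords.
# A real system would use far richer language understanding; this
# only illustrates why context must inform command interpretation.
SENSES = {
    "bank": {
        "building": {"vault", "teller", "street", "urban", "atm"},
        "riverbank": {"river", "ford", "crossing", "shore", "bridge"},
    },
}

def disambiguate(word: str, context: set) -> str:
    """Pick the sense whose keyword set best overlaps the context."""
    senses = SENSES[word]
    return max(senses, key=lambda s: len(senses[s] & context))

print(disambiguate("bank", {"urban", "street", "patrol"}))   # building
print(disambiguate("bank", {"river", "crossing", "night"}))  # riverbank
```

Even this trivial sketch makes the requirement concrete: the machine must carry a model of the current mission context, not just a dictionary, before a spoken order can be executed safely.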
Decision-making, especially when exercising lethal force, may pose the greatest challenge in developing and fielding A/AI systems. At the tactical level, decisions must often be made quickly and with incomplete information. A/AI systems can process available data, decide, and react faster than humans. Based on the combat situation, however, Soldiers and leaders may not wish to allow these systems to be completely autonomous. They will therefore require a simple means to dial the level of autonomy up or down based on a variety of factors. This spectrum could include 1) the Soldier must explicitly allow the A/AI system to take lethal action; 2) the Soldier intervenes based on value judgments (ethics) and military necessity; and 3) the Soldier intervenes only if the system begins to fail (spoofing, jamming, etc.) or upon mission completion.
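That three-setting spectrum might be sketched, purely notionally, as a software “dial” that gates lethal action. The mode names and decision logic below are assumptions for illustration, not an existing control scheme:

```python
# Notional three-setting autonomy dial gating lethal action.
from enum import Enum

class Autonomy(Enum):
    HUMAN_IN_THE_LOOP = 1   # Soldier must explicitly approve lethal action
    HUMAN_ON_THE_LOOP = 2   # system acts unless the Soldier intervenes
    SUPERVISED = 3          # Soldier intervenes only on failure / mission end

def may_engage(mode: Autonomy, approved: bool,
               vetoed: bool, degraded: bool) -> bool:
    """Gate a lethal action according to the selected autonomy level."""
    if degraded:                 # spoofing or jamming detected: always stop
        return False
    if mode is Autonomy.HUMAN_IN_THE_LOOP:
        return approved          # nothing happens without explicit consent
    return not vetoed            # modes 2 and 3: act unless told otherwise
```

The design choice worth noting is that the degraded-system check sits above every mode: whatever the dial setting, a system suspected of being spoofed or jammed defaults to withholding force.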
The physical component encompasses the traditional aspects of physical fitness within a holistic approach to health and fitness, one that considers the mental and medical contributions to physical performance. The complexity of the future operational environment will demand that Soldiers become more physically adaptable and resilient. A/AI must help to alleviate the physical demands placed on future Soldiers and leaders.
The Army RAS Strategy calls for lightening the Soldier’s physical workload. Just as horses and other animals once took on physical burdens for military forces, robots can transport Soldiers as well as carry or tow supplies, equipment, and weapons. However, just as horses required food and medical care, robots will require fuel (or some other power source) and repairs. The burden of keeping these robots in operational condition must be light enough that Soldiers can still accomplish their own tasks in a high-intensity conflict. Again, these robots must be designed so that resupplying, maintaining, controlling, and evacuating them will not create a physical burden (or a cognitive overload) for the Soldiers they are supporting.
Assuming that these resupply, maintenance, control, and evacuation issues can be addressed, robots and autonomous systems have great potential to increase the Army’s operational tempo. Currently, Soldiers and small teams can operate only for a limited number of hours before requiring rest and sleep. Robots, however, have the potential to operate indefinitely when refueled and rearmed. Tactical units could therefore adopt a “tag team” approach—as Soldiers culminate due to their physical limitations, they can pass control of their robots to well-rested troops who can move forward to continue the offensive. This “tag team” tactic would require that robots damaged or destroyed in combat could be repaired or replaced quickly.
Many have speculated that robots will increasingly replace human Soldiers as technology advances. Given that robots could be produced and operated at reasonable cost, the Army could have robots perform dull, dirty, and dangerous tasks, thus allowing fewer Soldiers to be put into harm’s way. Before completely replacing a Soldier, however, we must remember the Soldier’s extensive physical capabilities. Soldiers can not only look 360 degrees, but also up and down. Soldiers can not only see, but can also hear, smell, and feel to gain a more complete understanding of the surroundings and the combat situation. Soldiers can climb rubble, crawl under barbed wire, ascend stairs, and enter subterranean tunnels. They can search the pockets of enemy dead and enemy prisoners of war. Perhaps robots will someday be capable of executing all such tasks as well as a human, but until that time the Army should carefully consider the pros and cons of substituting a Soldier with a machine.
The social component includes how Soldiers interact with and are influenced by others’ beliefs, behaviors, feelings, and interpersonal interactions. Soldiers who are strong in the social component have a strong commitment to the Army Ethic, which includes the Army Values. They are self-disciplined, foster good communication with others, and are able to develop and maintain relationships based on trust.
Future A/AI systems, therefore, must be trustworthy. Soldiers must be assured that these systems are not being hacked or spoofed and that they will operate as advertised. When these systems assist in decision-making, Soldiers must be confident that the systems have accurately interpreted and analyzed the available data and that the recommended decisions rest on an ethical framework. Some A/AI systems may even interact with the local population; therefore, noncombatants must also trust that these systems will act responsibly. All autonomous systems should incorporate coding that factors in ethical considerations to prevent unethical behaviors.
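Such ethical coding could, at its simplest, take the form of a hard-constraint layer that screens every proposed action before any other behavior runs. The minimal sketch below uses invented categories purely for illustration:

```python
# Minimal sketch of a hard ethical-constraint layer that screens
# proposed actions before any other behavior runs.
# The categories and action names are invented for illustration.
PROTECTED = {"noncombatant", "medical", "surrendering"}

def screen(action: str, target_class: str) -> bool:
    """Refuse any lethal action against a protected category,
    regardless of what the targeting model recommends."""
    if action == "engage" and target_class in PROTECTED:
        return False
    return True
```

Placing the constraint in a separate layer, above whatever model proposes the action, is what gives noncombatants (and Soldiers) a basis for trust: the refusal does not depend on the model having learned the right lesson.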
The Army must also be mindful of the possibility of Soldiers developing an emotional attachment to their machines, just as today’s military dog handlers typically develop a bond with their canines. This possibility increases if those machines take on human characteristics (e.g., natural language to communicate). If such an emotional attachment develops, Soldiers may be less likely to put their systems at risk. Future training and education programs must reinforce the idea that A/AI systems are government assets that serve, in part, to reduce the danger that Soldiers may face.
Trust and emotional bonding may lead to moral/ethical challenges. Soldiers and leaders may trust and emotionally bond to their A/AI systems to the point that they forego any moral or ethical review of the systems’ outputs. Furthermore, Soldiers and leaders may be more willing to take unwarranted risks based on an overconfidence that results from an illusion of knowledge and an illusion of control. The Army must ensure that all Soldiers are aware of such pitfalls and ensure that all Soldiers continue to have a solid moral framework when making potentially lethal decisions.
In addition, as robots become more common on the future battlefield, the Army may find that the roles of its leaders will evolve. Officers and NCOs are identified leaders who motivate and care for Soldiers (humans) while being tactically proficient. As these leaders direct the combat activities of both humans and robots, their core competencies may shift more towards technical expertise. With more robots and fewer humans, the way that the Army grooms a Soldier (through experience, education, etc.) to become a first sergeant or company commander may change significantly. Developing future leaders to direct human-machine teams may include 1) ensuring leaders can set behavioral (ethical) constraints for machines; 2) immediately correcting objectionable behavior in both humans and machines; and 3) recognizing that the A/AI system may legitimately identify objectionable Soldier—and even leader—behavior.
A/AI technologies, both present and future, offer great potential to increase the efficiency and effectiveness of U.S. Army teams. These systems can take on the dull, dirty, and dangerous tasks so that Soldiers can be safer and focus on those tasks that humans must do. As the Army continues to develop such technologies, it must remember that the A/AI is not an end in itself, but rather an aid to allow the Soldier to be more combat effective while remaining ethical in the use of force. A/AI systems development must therefore account for the cognitive, physical, and social components of the Soldier and leader, and ensure that those systems complement and extend human capabilities. Combining human and machine capabilities, with a deep understanding of the strengths and weaknesses of each, will allow the Army to realize its vision to have Soldiers of unmatched lethality capable of fighting and winning against any adversary in a joint, multi-domain, high intensity conflict while committed to the Army Values.
Barnes, Michael J., Jessie Y. Chen, and Susan Hill. Humans and Autonomy: Implications of Shared Decision Making for Military Operations. ARL-TR-7919. U.S. Army Research Laboratory, Aberdeen Proving Ground, MD, 2017, http://www.dtic.mil/dtic/tr/fulltext/u2/1024840.pdf.
Bogges, James. “More Than a Game: Third Offset and Implications for Moral Injury.” Closer than You Think: the Implications of the Third Offset Strategy for the U.S. Army, edited by Samuel R. White. Strategic Studies Institute and U.S. Army War College Press, 2017, pp. 129-139.
Funches, William R. Jr. “Leader Development and the Third Offset.” Closer than You Think: the Implications of the Third Offset Strategy for the U.S. Army, edited by Samuel R. White. Strategic Studies Institute and U.S. Army War College Press, 2017, pp. 121-125.
Joint Concept for Robotic and Autonomous Systems (JCRAS), 19 October 2016, https://jdeis.js.mil/jdeis/jel/concepts/robotic_autonomous_systems.pdf.
Opfer, Chris. “Are robots replacing human soldiers?” HowStuffWorks, 13 February 2014, https://science.howstuffworks.com/robots-replacing-soldiers.htm.
Scharre, Paul. “Centaur Warfighting: The False Choice of Humans vs. Automation.” Temple International and Comparative Law Journal, vol. 30, no. 1, Spring 2016, https://sites.temple.edu/ticlj/files/2017/02/30.1.Scharre-TICLJ.pdf.
TRADOC Pamphlet 525-3-7, The U.S. Army Human Dimension Concept, 21 May 2014, http://adminpubs.tradoc.army.mil/pamphlets/TP525-3-7.pdf.
The U.S. Army Robotic and Autonomous Systems Strategy, March 2017, http://www.arcic.army.mil/App_Documents/RAS_Strategy.pdf.
Van Den Bosch, Eric. “Human-Machine Decision-Making and Trust.” Closer than You Think: the Implications of the Third Offset Strategy for the U.S. Army, edited by Samuel R. White. Strategic Studies Institute and U.S. Army War College Press, 2017, pp. 109-119.
White, Samuel R., editor. Closer than You Think: the Implications of the Third Offset Strategy for the U.S. Army. Strategic Studies Institute and U.S. Army War College Press, 2017. https://ssi.armywarcollege.edu/pubs/display.cfm?pubID=1371
 In this paper, “autonomous systems” includes robots and any other device that can operate with a level of independence that humans grant to execute a given task.
 Artificial intelligence is the capability of computer systems to perform tasks that normally require human intelligence such as perception, conversation, and decision-making.
 Joint Concept for Robotic and Autonomous Systems (JCRAS), 19 October 2016, https://jdeis.js.mil/jdeis/jel/concepts/robotic_autonomous_systems.pdf.
 The U.S. Army Robotic and Autonomous Systems Strategy, March 2017, http://www.arcic.army.mil/App_Documents/RAS_Strategy.pdf.
 In addition, the Department of Defense has announced an intention to issue an AI strategy in late summer or early fall of 2018.
 For a more detailed discussion of the cognitive, physical, and social components, see TRADOC Pam 525-3-7, The U.S. Army Human Dimension Concept, 21 May 2014, http://adminpubs.tradoc.army.mil/pamphlets/TP525-3-7.pdf.
 The Army Ethic describes the moral principles that guide all Soldiers and Army Civilians in mission accomplishment, duty performance, and all aspects of life. See Army Doctrine Reference Publication (ADRP) 1, The Army Profession, 14 June 2015, Chapter 2, and The Center for the Army Profession and Ethic web page “Living the Army Ethic,” http://cape.army.mil/aaop/living-the-army-ethic/.
 Barnes, Michael J., et al., “Humans and Autonomy: Implications of Shared Decision Making for Military Operations,” pp. 17-18.
 See Barnes, pp. 6-7 for further discussion of the topic of span of control.
 Funches, William R. Jr., “Leader Development and the Third Offset,” p. 123.
 Barnes, p. 13.
 Paul Scharre, Temple International and Comparative Law Journal, “Centaur Warfighting: The False Choice of Humans vs. Automation,” Volume 30, Number 1 (Spring 2016), p. 159, https://sites.temple.edu/ticlj/files/2017/02/30.1.Scharre-TICLJ.pdf.
 “Lighten the Soldiers’ physical and cognitive workloads” is one of the five capability objectives of the Army RAS Strategy.
 For example, General Robert Cone, former commanding general of U.S. Army Training and Doctrine Command, predicted in 2013 that drones and robots could replace up to 25% of combat troops by the year 2030. See Chris Opfer, “Are robots replacing human soldiers?” https://science.howstuffworks.com/robots-replacing-soldiers.htm.
 Barnes, p. 20.
 Bogges, James. “More Than a Game: Third Offset and Implications for Moral Injury,” pp. 133-134.
 Van Den Bosch, Eric. “Human-Machine Decision-Making and Trust,” pp. 115-116.
 The Army Vision, June 6, 2018, https://www.army.mil/e2/downloads/rv7/vision/the_army_vision.pdf.