
Command and Control and AI, Oh My! – The Case for Petrov's Law

01.13.2025 at 06:00am

A War Story

September 26th, 1983. The height of the Cold War between the Soviet Union and the United States of America. Lieutenant Colonel Stanislav Petrov, assigned to the Soviet Air Defense Forces, stands watch at a nuclear attack early warning bunker outside Moscow. His shift began like any other but ended with Petrov literally saving the world from destruction. On that fateful day, Soviet early warning systems mistook reflected sunlight for five inbound American nuclear missiles. Soviet military doctrine required the watch officer to relay launch reports to his superiors, who in turn communicated with the Soviet General Staff. On his own initiative, Petrov decided not to inform his superiors, reasoning that the United States would not start a world war with such a strategically insignificant first strike. He kept the information at his level, the moment passed, and the world was saved. 

Let us consider for a moment an alternative scenario. The Soviet Union employs an artificial intelligence (AI) platform with faster-than-human reaction and decision speeds as part of its early warning system. The AI platform receives the same false report but concludes the information is factual and relays it to the Soviet General Staff. The Soviet General Staff is too far removed from the early warning system's daily activities to recognize the error. The USSR responds with the full fury of its nuclear arsenal. American and NATO early warning systems detect a massive Soviet missile launch. Following established war plans, the West responds with a nuclear counterstrike. Human life as we know it ceases to exist.  

One might propose a counterargument that existing nuclear weapon protocols prevent an erroneous weapons launch scenario with or without the benefit of AI. The historical record, however, tells a much different and more frightening story. On at least nine known occasions, the world came to the brink of nuclear Armageddon. It stands to reason we will face similar occasions in the future requiring rapid analysis and critical decision making.   

Welcome to the Revolution

Throughout history, military professionals have studied previous conflicts to prepare to recruit, train and resource their soldiers for the next war. While the resulting changes were frequently incremental, on occasion they led to evolutions in warfare. Past evolutions include the introduction of gunpowder, which greatly reduced the defensive value of armor and fixed fortifications, and the somewhat belated shift from battleships to aircraft carriers that modernized naval warfare during World War II. Very rarely, a development is so radical that it moves beyond evolution to create a revolution in military affairs (RMA) which irrevocably changes warfare. Arguably the most impactful past RMA came from the development and later proliferation of nuclear weapons, granting humankind the instant ability to destroy itself on a global level. Restated, warfare and its implications have never been the same since Oppenheimer released the atomic genie from the bottle.  

Artificial intelligence is the next revolution in military affairs. AI will positively or negatively affect every facet of military operations at the tactical, operational and strategic levels of war. AI will transform warfare more than the changes wrought by gunpowder, aviation, mass production and nuclear weapons combined. While AI is too nascent a concept to fully catalogue its potential use by either state or non-state actors, we may confidently predict some applications. AI will play a major role in data analysis, software and application development, logistics and targeting at all levels of warfare. More specifically, at the strategic and operational levels, AI will serve as a primary tool in psychological operations, disinformation strategies and public affairs messaging. At the "muddy boots" tactical level of battle, AI will assist with crew-served weapons and obstacle emplacement, fires planning, route selection and other fundamental soldier tasks.  

As AI approaches human critical thinking ability, there will be an inevitable temptation to include it in decision making to augment or outright replace the “man in the loop” control mechanisms used since Cain slew Abel. True, AI offers tempting advantages in decision speed, data aggregation and even reducing the moral burdens of leadership. Military leaders must be cognizant of the many dangers inherent in introducing AI into decision making before surrendering command or control functions. Ultimately, decision authority to employ lethal force must remain in human minds and human hands.   

Integrating AI Within Military Operations

The United States military spent the early 21st century conducting counterinsurgency (COIN) operations in Iraq, Afghanistan, and elsewhere under the umbrella of the Global War on Terrorism (GWOT). With the general cessation of those campaigns, the American military once again returned to its core mission of fighting large-scale ground combat operations (LSGCO). It stands to reason the United States, its allies, and competitors will perform the same time-tested analysis on the effectiveness of American military capabilities in COIN and LSGCO. These studies will share a common goal of retaining the decisive edge against peer or near-peer competitors. Within the US construct, analysis is performed via examination of doctrine, organization, training, materiel, leadership and education, personnel and facilities (DOTMLPF). These studies will include examination of AI to best assess its applicability across the entire spectrum of conflict – to assume otherwise is foolish. If history teaches anything, it routinely proves the failure to adapt is the fast road to societal extinction. The question remaining to be answered is twofold: Where can AI contribute to mission success, and to what degree may AI assist military leaders? 

To best answer that question, it is worth reviewing the dual concepts of command and control. The art of command is the process by which commanders lead their teams. The science of control is best defined as the processes and equipment used to exercise that art. Discussions on command and control constitute a formative part of the military science curriculum the author taught as the Washington State University Army Reserve Officer Training Corps (ROTC) Department Chair. During these lessons, cadets were assigned two tasks: 

  1. Complete a series of simple math problems. 
  2. Draw the Mona Lisa.  

The first task, mathematics, is readily understood as the discipline is governed by proven logic, principles, and rules. The science of control functions via similar rule-driven processes, checklists, and standard operating procedures (SOP). Control means, such as doctrinal operations orders or tactical radio employment, are easily taught and just as easily retained. AI understands control via its operating software and will outperform humans given its superior ability to process information faster and from a greater number of data sources. For example, an AI platform assisting with the intelligence, surveillance, target acquisition and reconnaissance (ISTAR) cycle will more efficiently search databases across the larger intelligence enterprise to cross-reference reports, update collection plans, provide early warning indications or alert units when specific condition criteria are met on the battlefield. In a game like chess, where the rules are clearly defined and enforced, AI has the definite advantage.  

Warfare, however, is not a logical undertaking. War is inherently chaotic, dirty, and frequently illogical. Combat is further complicated by the uncertainty and dynamic operational environment found in every military operation, something Prussian military theorist Carl von Clausewitz termed the "fog and friction of war." It takes years of training, education, and experience to master the leadership skills necessary for proper military decision making. No current AI system, regardless of its nuanced complexity, matches the uniquely human ability to command – particularly when there are significant information gaps. Small wonder that drawing the Mona Lisa posed the greater challenge to the cadets.  

Employed correctly, AI will complete or augment control functions while simultaneously affording military leaders greater time to think or focus on critical command tasks. 

A Proposal

What guidelines may we look towards to best incorporate artificial intelligence into military operations to bring its advantages to bear against our competitors? Science fiction may point the way. In 1942, author Isaac Asimov established the now famous “Three Laws of Robotics” in a short story titled Runaround. In their original version, the Three Laws dictate: 

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. 

Doctrine requires a new law, superseding Asimov's previous work and based upon Stanislav Petrov's moment in history, to account for the future use of AI in military decision making. This proposal incorporates the possible benefits of AI in warfare while avoiding the potential negative consequences associated with its use.  

Petrov’s Law – The final decision to preserve or take life must be safeguarded by the uniquely human ability to rationalize, empathize, and understand the consequences of action or inaction.  

To be clear, this paper does not argue military leaders should eschew AI altogether. AI will play an important, if not potentially decisive, role in future operations across the land, sea, air, space, information, and cyber domains. AI offers its greatest value as a control tool, providing leaders with a range of analytically informed options to augment human-derived conclusions for a given problem set. The inverse, placing command functions directly into "AI hands," absolves commanders of the consequences of deciding, thereby creating a temptation for leaders to avoid that critical responsibility inherent in their role. Bluntly, soldiers should struggle with the decision to take life.  

Even as a control tool, AI integration raises a final concern: what happens when leaders make decisions against the advice of their AI systems? For example, an AI tool assigns probability values to several likely outcomes in a tactical scenario. The supported leader considers all options and, after applying their experience, training and education, selects a course of action the AI system deems less likely to succeed. Will their political or military superiors back that choice? One hopes so, as failure to support those leaders will inevitably result in a situation where AI-derived percentages have the final word and humans are merely rubberstamping a decision.  

Conclusion

Make no mistake, artificial intelligence will revolutionize the entire spectrum of conflict from peace to total war. The degree to which it does so will depend entirely upon how it is integrated into military decision making. Leaders should embrace AI as a means to inform, but not replace, their own command responsibilities. Petrov's Law will ensure decision making remains the sole province of human leadership.  

About The Author

  • Christopher J. Heatherly

    Lieutenant Colonel (Retired) Christopher J. Heatherly enlisted in the U.S. Army in 1994 and earned his commission via Officer Candidate School in 1997. He held a variety of assignments in special operations, Special Forces, armored, and cavalry units. His operational experience includes deployments to Afghanistan, Iraq, South Korea, Kuwait, Mali, and Nigeria. He holds master’s degrees from the University of Oklahoma and the School of Advanced Military Studies. He retired from the military in 2023.
