USSOCOM: Information-Enabled Command versus AI-Enabled Command
By Dr. Mark Grzegorzewski
As the excitement around artificial intelligence applications grows, United States Special Operations Command (USSOCOM) remains at the forefront of adopting this emergent technology. Special Operations Forces (SOF) have always been the tip of the spear in fighting our nation’s wars and serve as the preeminent asymmetric force. Thus, it comes as no surprise that USSOCOM would want to incorporate the potentially game-changing technology of AI into every new program. However, SOF should be careful not to become too enamored with AI tools. Rather, it should stay focused on executing its core missions and determine where AI applications may fit, instead of being captivated by a still-brittle technology that may or may not deliver the impacts needed within SOF’s core missions. For the missions of the future, especially downrange in the future operating environment, highly advanced technology may not always be the weapon of choice. Therefore, we must both prepare the force for the potentialities of AI and stay focused on operating within the human domain without support from AI technologies.
Before delving further, it is critical to understand what is meant by both the Information Environment (IE) and Artificial Intelligence (AI). The Information Environment as defined by Joint Publication (JP) 3-13 is “the aggregate of individuals, organizations, and systems that collect, process, disseminate, or act on information.” JP 3-13 further states the IE “consists of three interrelated dimensions, which continuously interact with individuals, organizations, and systems. These dimensions are known as physical, informational, and cognitive. The physical dimension is composed of command and control systems, key decision makers, and supporting infrastructure that enable individuals and organizations to create effects. The informational dimension specifies where and how information is collected, processed, stored, disseminated, and protected. The cognitive dimension encompasses the minds of those who transmit, receive, and respond to or act on information.” The information environment is not a distinct domain but rather cuts across each of the five warfighting domains. Used as a tool, AI could hypothetically analyze and/or impact each of the dimensions of the information environment: physical, informational, and cognitive.
Artificial Intelligence (AI) is a specific academic field within computer science that explores how automated computing functions can resemble the capabilities of humans. Subfields and applications of AI include machine learning (ML), machine vision (MV), and natural language processing (NLP). The field of AI is currently producing Artificial Narrow Intelligence (also known as Weak AI), or ANI, which is AI that can execute one particular decision type well in a closed system. Artificial General Intelligence (also known as Strong AI), or AGI, is still in development and, if realized, would be able to execute multiple decision types, though even it would likely have limited application in an open system. AI can be a paradigm-shifting technology, but the field is still comparatively in its early days. If AGI can be achieved, it surely will be revolutionary. However, in the near to medium term, this is almost certainly not a realistic outcome. Leading AI thinkers and practitioners believe we are at least 25 years away from AGI, while others claim AGI will never be realized. That said, USSOCOM, and the DoD, should continue to track these developments due to the outsized influence they may one day have on the world.
Currently, most AI companies do not disclose how their technologies work. This “black box” approach is standard in the industry, with the inner workings regarded as proprietary information. This means that although an AI technology has come to a conclusion on a given problem, the user does not know how the technology reached its conclusion. The user simply has to trust the model. These black box solutions could lead USSOCOM astray by perpetuating and amplifying biases. For example, black box solutions come as pre-trained software applications that cannot be interrogated to reveal how they arrived at a given output, nor can some be updated with new data. As such, adopters and users need to ask hard questions on the front end before employing this technology: Where did the training data come from? Is the training data accurate? Is there missing data? How was the training data labeled? Depending on how and where the AI technology is employed, the answers to these questions could lead to both tactical and operational disaster. The biases incorporated into AI models come in the form of the data used to train them and the assumptions built into them. Using a more open-ended and less specific axiom like information-enabled command would allow USSOCOM to avoid some of the criticisms currently being laid at the feet of AI technologies. Further, by recognizing that USSOCOM is an information-enabled command, it acknowledges that the information environment is incredibly dynamic and that USSOCOM has resolved to use all capabilities to get the best possible information to our commanders. In short, not leading with the tool, and focusing instead on the mission, will allow USSOCOM to choose the right tool for the job.
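The kind of front-end question raised above can sometimes be answered empirically when adopters insist on access to the training data. As a minimal sketch, assuming a hypothetical labeled corpus of field reports, a few lines of code can surface one common bias: a badly skewed label distribution.

```python
from collections import Counter

# Hypothetical labeled training examples: (text, label) pairs.
# In practice these would come from the vendor's training corpus;
# the point is that adopters should inspect it before trusting a model.
training_data = [
    ("routine convoy movement observed", "benign"),
    ("routine patrol, nothing to report", "benign"),
    ("routine resupply completed", "benign"),
    ("unusual signals activity detected", "threat"),
]

def audit_labels(examples):
    """Summarize the label distribution and warn on under-represented labels."""
    labels = Counter(label for _, label in examples)
    total = sum(labels.values())
    report = {}
    for label, count in labels.items():
        share = count / total
        report[label] = share
        if share < 0.3:  # illustrative threshold, not a doctrinal one
            print(f"WARNING: label '{label}' is only {share:.0%} of the data")
    return report

shares = audit_labels(training_data)
```

Here only one example in four is labeled “threat,” so the audit flags it; a model trained on such data would see too few threat examples to generalize. This is a toy check, but it illustrates that “where did the training data come from?” is a question with testable answers, not merely a rhetorical one.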
The AI field, while rapidly progressing in recent years due to increases in computing power, data availability, and comparatively cheap power supplies, has entered a relative slowdown, as noted in McKinsey’s “State of AI 2020 Report,” wherein they state that businesses were less bullish about the applications of AI in 2020. This slowdown can be modeled in the Hype Cycle of Gartner (a technology research and advisory firm), wherein specific technologies mature, are adopted, and then realize a social application. Where current AI technologies sit in the Gartner Hype Cycle can be debated, but it can be stated with confidence that, broadly speaking, AI’s initial expectations have not been met.
The excitement behind developments in the field of AI causes many organizations to adopt AI technologies without a plan in place for what they want to solve or how to implement the technology. Accordingly, there has been an increase in companies purchasing AI technologies and then being disappointed in the results of that investment. This is not the fault of the AI technologies. Rather, fault lies with the AI adopter for not scoping the problem (what do you want to achieve?), determining whether AI technology is the correct solution, and then understanding what the AI can deliver (will I be able to do some things but not others?). AI is the right tool for some problems, but currently those problems are very specific.
Further, even for those problems in which AI is the correct tool for the job, many AI solutions fail because of (1) a lack of skilled specialists, (2) not enough data, and (3) an inability to measure results.
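The third failure cause, an inability to measure results, has a well-established remedy: score the model against data it never saw during development. The sketch below is a hypothetical illustration, using a stand-in rule-based classifier and invented report snippets, of what such a holdout evaluation looks like in practice.

```python
# A minimal sketch of "measuring results": hold out labeled data the model
# never saw during development and score its predictions against the truth.
# Both the classifier and the data below are hypothetical illustrations.

def classify(report_text):
    # Stand-in for whatever model is under evaluation.
    return "threat" if "unusual" in report_text else "benign"

holdout = [  # (input, ground-truth label) pairs kept aside for testing
    ("unusual radio chatter overnight", "threat"),
    ("routine border crossing logged", "benign"),
    ("suspicious loitering near checkpoint", "threat"),
    ("routine harbor traffic", "benign"),
]

correct = sum(1 for text, truth in holdout if classify(text) == truth)
accuracy = correct / len(holdout)
print(f"holdout accuracy: {accuracy:.0%}")
```

The stand-in model misses the “suspicious loitering” report because it only keys on one word, scoring 75% rather than 100%. Without a measurement step like this, that weakness would surface downrange instead of on the test bench, which is precisely the adopter’s failure the paragraph above describes.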
In conventional forces, the human mans the equipment; in SOF, the equipment supports the human. This is stated another way in the first SOF truth: “People are more important than hardware [and software].” All this is to say that we should not define ourselves by our tools, which will surely lead to misapplication and disappointment. While AI may be a great technology for a particular mission, especially when it involves a closed system, it may not be the best fit for many of USSOCOM’s problems. In fact, framing USSOCOM as an AI-enabled command limits the imagination when considering tools that can be applied to a particular problem. Instead, USSOCOM should focus on its mission problem set and find areas in which AI technology is the best tool to accomplish the mission, rather than having a tool in search of a mission. In some cases, AI might not be the best tool for the job. The best tool may be traditional business process automation, such as a script written in a general-purpose language like Python, or information technology tools like cyberspace applications. In other cases of completely unstructured, qualitative data, a computer science approach might not be applicable at all; the problem instead calls for a social scientist to creatively analyze it and articulate viable solutions.
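The point that simple automation often beats AI can be made concrete. As a minimal sketch, assuming a hypothetical feed of report records with routing rules known in advance, a short deterministic script solves the problem completely, with no model, no training data, and no black box to audit.

```python
# A minimal sketch of deterministic automation: many data problems are
# filtering and routing tasks that a short script handles without any AI.
# The report records and the routing rule below are hypothetical.

reports = [
    {"id": 1, "region": "EUCOM", "priority": "high"},
    {"id": 2, "region": "AFRICOM", "priority": "low"},
    {"id": 3, "region": "EUCOM", "priority": "low"},
    {"id": 4, "region": "AFRICOM", "priority": "high"},
]

def route(records):
    """Send high-priority report ids to the watch floor, archive the rest."""
    watch_floor, archive = [], []
    for r in records:
        (watch_floor if r["priority"] == "high" else archive).append(r["id"])
    return watch_floor, archive

watch_floor, archive = route(reports)
print(watch_floor)  # ids of high-priority reports
```

Because the rule is explicit, the script’s behavior is fully inspectable and testable, exactly what the black box solutions discussed earlier cannot offer. Reaching for a script first, and AI only when the problem demands it, is the tool-follows-mission posture this article argues for.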
While AI technology can mimic meaning in data, only a human can derive the original intent from the information. As such, it is a better fit for USSOCOM’s human domain mission space to think of itself as an information-enabled command. To borrow from retired General Stanley McChrystal, to be information-enabled is to use information to understand what is happening, why it is happening, and what is going to happen, and then to act on that information. This information-enabled label opens up the toolkit and allows for a broader range of options for our analysts, operators, and planners. These options allow SOF to gather historical reporting (big data and multi-domain intelligence), sense and respond (AI could be applied here), and then anticipate and shape the future (plan). Of course, in this last step, AI is no substitute for wisdom.
Finally, there is currently no deliberate attempt to apply data and social science to information problem sets, nor to insert AI into existing knowledge management and problem-solving workflows. This is a function of the paradigms behind both AI and Information Operations/Information Warfare adoption within wider DoD circles. As an example, little attention is given to repurposing existing unstructured data and merging it with structured data as part of the solution. This is one more reason why efforts fail: the work is viewed as a bridge too far. However, this need not be true; it could be overcome by hiring the right data science and methodological professionals and by removing contractual constraints on merging research and data programs.
Ultimately, USSOCOM, which operates in an open system known as the human domain, will have trouble making sense of information if it brings the wrong tool to the fight. Moreover, applying an AI tool based on a “black box” algorithm could embed a false premise at the core of initial planning. Accordingly, USSOCOM should consider thinking of itself as an information-enabled command as opposed to an AI-enabled command. This second framing opens up the toolbox and allows other tools, AI still included, to be applied more appropriately to its mission set.
Disclaimer: The views expressed in this article are those of the author and do not reflect the official policies or positions of USSOCOM, the Department of Defense, or the U.S. Government.