You Can’t Spell PAI Without AI: The Issues of Cognitive Load and Tradecraft with OSINT

AI is everywhere, and its presence interferes with modern intelligence operations. AI is on your phone, in your browser, embedded in applications, even on your smart watch. For most of us, AI is additive, streamlining tasks and speeding research. For open-source intelligence professionals, however, AI and AI-generated material affect the work in two ways: by increasing cognitive load and by complicating tradecraft.
Numerous articles cover the benefits and dangers of AI, the challenges of integration across professions, and the growing need for AI literacy within the national security space. While the possibilities of AI seem endless, one area requiring more attention is its impact on publicly available information (PAI). Adversaries intentionally post a flood of AI-generated images, videos, and text that misrepresent, falsify, or wholly invent people, places, and events. Not all AI content is intended to deceive; some is entertainment, running the gamut from funny to annoying. Within the intelligence space, however, the spread of AI content, whether from adversary or ally, is a risk factor. Discussion of AI in information operations focuses largely on harnessing the technology for massive data analytics and decision advantage, but the threat AI poses to the public information our open-source analysts rely on remains underexplored.
Open-source intelligence (OSINT) “is intelligence derived exclusively from publicly or commercially available information that addresses specific intelligence priorities, requirements, or gaps.” Examples of OSINT sources include social media, public records and websites, news media, and internet images and videos. With the rise of social media, PAI is particularly valuable for gauging sentiment, monitoring events, tracking trends, establishing patterns of behavior, and gaining insight in ways that complement and enhance traditional intelligence methods. OSINT gives leaders a clearer understanding of an operational environment and thereby enhances decision-making. Although AI content is problematic for creative professions like photography, it also challenges our intelligence analysts and current tradecraft methods. The rise of AI-generated content, particularly photorealistic images, social media posts, and videos, clouds the public information space, adding layers of complexity and noise to PAI and increasing the cognitive load on OSINT analysts. AI content also complicates the cataloguing and dissemination practices needed to ensure intelligence customers trust OSINT reports.
The Polluted Digital Environment
Prominent examples of a polluted public digital sphere come from the Israel-Gaza conflict and from multiple domestic operations in the homeland. AI Slop, crudely generated or low-value content, floods the algorithms of platforms like YouTube, Instagram, and Facebook, crowding out legitimate artists and creators and often appropriating or outright stealing their content. More recently, the Iran-Israel escalation highlighted the weaponization of more sophisticated AI content to spread disinformation in the public space. Both sides released AI-generated videos and imagery showing exaggerated military attacks (Iran passed off AI-altered video game footage as a downed aircraft), false bombing aftermath (Iran posted fake clips of missile strikes in Tel Aviv), and fake public protests (Israel posted AI videos of Iranian protests in Tehran). Even some US news outlets referenced AI-generated footage of B-2 bomber strikes. This fake content amassed millions of views across numerous platforms, some of it shared by official sources in Iran and Israel. The informational noise was further amplified when some AI platforms insisted the fake content was real, despite obvious manipulation in some cases.
Impact on Tradecraft
By some estimates, hundreds of millions of terabytes of data are created every day, a Sisyphean volume for any OSINT analyst to wade through. Adding AI-generated content to the mix of data sources degrades the value of PAI, damaging the efficacy of OSINT processes and the trustworthiness of sources. AI content impairs analysis of PAI in two main ways: overload/deny and conceal/distract.
- Overload and Deny: Polluting the Information Environment: AI content is fast and cheap to produce, allowing it to overload the OSINT process through the sheer volume of material to digest, interpret, and classify. Managing false information consumes as many resources as managing credible, useful information, and often more.
- Conceal and Distract: Eroding Authenticity: AI-generated content hampers intelligence collection by burying useful and relevant information within a mass of distracting or misleading content. Existing AI detection tools can falsely flag real photographs as fakes, feeding skepticism about the authenticity of all information, a dynamic known as the “liar’s dividend” (the sketch after this list shows why flags mislead at low base rates). Additionally, AI-generated accounts posing as OSINT analysts, journalists, or influencers often carry the paid “X” verification badge, whose algorithmic boost amplifies mis/disinformation content for profit.
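To see why unreliable detectors feed the liar’s dividend, consider the base rates. The short Python sketch below applies Bayes’ theorem to a hypothetical feed; the detector accuracy and the share of AI-generated content are assumed numbers chosen for illustration, not measurements of any real tool.

```python
# Illustrative arithmetic only: the detector rates and the share of
# AI-generated content in the feed are assumed, not measured, values.

def flag_precision(tpr: float, fpr: float, base_rate: float) -> float:
    """Probability that a flagged item really is AI-generated (Bayes' theorem)."""
    true_flags = tpr * base_rate            # AI content correctly flagged
    false_flags = fpr * (1.0 - base_rate)   # authentic content wrongly flagged
    return true_flags / (true_flags + false_flags)

# Assume a detector that catches 90% of AI content but also mislabels
# 10% of authentic content, scanning a feed that is 5% AI-generated.
precision = flag_precision(tpr=0.90, fpr=0.10, base_rate=0.05)
print(f"Share of flags that are truly AI content: {precision:.0%}")  # ~32%
```

Under these assumptions, roughly two of every three flags point at authentic material. Analysts who trust the tool discard real evidence, while audiences who notice the errors learn to dismiss genuine imagery as fake, which is the liar’s dividend in action.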
While AI Slop and high-quality AI disinformation can interfere with our adversaries’ decision cycles to our advantage, they also degrade our own processes. In the ways noted above, AI pollutes the information space for everyone attempting to glean intelligence from public spaces, making changes to tradecraft and processes necessary. Potentially more concerning, the increasing quality of AI-generated content erodes an analyst’s ability to distinguish what is real from what is not.
Impact on the Analyst
More important, but less discussed, is the impact of AI on the cognitive load of our OSINT analysts. Much has been written about the erosion of critical thinking and weakened mental activity across fields as AI use increases, and there is value in exploring how we teach and use AI to counter those risks. An equal body of research suggests that AI improves learning by offloading menial tasks and thereby managing cognitive load. Both effects can coexist as AI evolves and research on its impact on human cognition expands. One area with little discussion is how AI may hurt us: the OSINT field currently lacks peer-reviewed research on where and when AI increases cognitive load or otherwise makes the work harder. Comparative research on attention during repetitive tasks has demonstrated increased mind wandering and degraded performance over time.
Cognitive load is the amount of mental effort required to complete a task. Common digital activities that increase cognitive load include frequent notifications, task switching, and device checking. Digital impacts on the human brain are myriad and threaten decision-making. A polluted information environment thickens the fog of war by adding excessive false or irrelevant content, delaying the “observe” and “orient” phases of the OODA (Observe, Orient, Decide, Act) Loop. Adding AI to this digital fray increases the complexity of task switching, platform monitoring, information overload, failure and error identification, content validation, and the learning curve of tool implementation. AI content diverts analysts’ time from legitimate information, and the AI detection tools designed to assist are not always accurate. These conditions make finding and discrediting AI-generated images and videos resource-intensive. Increased AI exposure also leads to skepticism overload, a fatigue state in which analysts stop questioning whether content is AI-generated.
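How quickly those conditions consume analyst time can be made concrete with a back-of-envelope model. The Python sketch below uses assumed rates for feed volume, triage time, and the extra effort to discredit a suspected fake; none of these figures come from measured OSINT workflows, and the point is only how fast modest pollution compounds.

```python
# Back-of-envelope workload model; every rate below is an assumption.

ITEMS_PER_HOUR = 600      # items surfacing in a monitored feed each hour
TRIAGE_MIN = 0.5          # minutes to triage an ordinary item
DISCREDIT_MIN = 4.0       # extra minutes to verify or discredit a suspected fake

def analyst_minutes(ai_fraction: float) -> float:
    """Analyst-minutes needed to work one hour of feed at a given AI-content share."""
    baseline = ITEMS_PER_HOUR * TRIAGE_MIN
    extra = ITEMS_PER_HOUR * ai_fraction * DISCREDIT_MIN
    return baseline + extra

clean = analyst_minutes(0.0)
polluted = analyst_minutes(0.2)   # assume 20% of the feed is AI-generated
print(f"Clean feed:    {clean:.0f} analyst-minutes per feed-hour")
print(f"Polluted feed: {polluted:.0f} analyst-minutes per feed-hour ({polluted / clean:.1f}x)")
```

In this toy model, a feed that is one-fifth AI-generated demands 2.6 times the analyst time of a clean feed, before accounting for the cognitive cost of the added task switching and validation.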
Yes, some AI enhances task management, but it carries an up-front cognitive burden. The sheer number of AI models and tools risks overwhelming users and increases the chance that a user applies less rigorous testing and analysis before adopting a tool. Likewise, the volume of PAI risks decreasing the rigor with which the veracity of information is evaluated. Cognitively burdened analysts have diminished capacity in the executive functions needed to recognize and analyze a PAI environment crowded with AI content.
Recommendations
Combating AI Slop is beyond the scope of the OSINT analyst and of this article; however, actions are available to shield and arm warriors in the PAI space, decreasing cognitive load and improving tradecraft. As noted in many other articles, training and education on AI models and AI capabilities are necessary for modern warfighting. A focus on foundational education benefits both analysts and tradecraft. This training must include general media and AI competencies; simulations that develop the analytic skills needed to verify image or video authenticity from visual artifacts (a minimal example follows); hands-on experience with AI models useful for PAI distillation; and policies that streamline cataloguing and dissemination standards for intelligence products containing, or cleared of, AI content.
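As one concrete illustration of artifact-based verification training, the sketch below performs a simple error level analysis (ELA) pass using the Pillow library. ELA is a long-standing forensic teaching technique, not a reliable AI detector: its output requires careful human interpretation, and modern generators can evade it. The file names are placeholders.

```python
# Minimal error level analysis (ELA) sketch using Pillow (pip install Pillow).
# ELA re-saves a JPEG at a known quality and amplifies the difference;
# regions that recompress very differently from their surroundings merit
# a closer look. A training aid for artifact awareness, not a detector.
import io

from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90, scale: float = 15.0) -> Image.Image:
    """Return an amplified difference image between the original and a re-saved copy."""
    original = Image.open(path).convert("RGB")

    # Re-save at a known JPEG quality, then reload the copy from memory.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Amplify per-pixel differences so compression inconsistencies become visible.
    difference = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(difference).enhance(scale)

if __name__ == "__main__":
    # Placeholder file names for a classroom exercise.
    error_level_analysis("suspect_image.jpg").save("suspect_image_ela.png")
```

Exercises like this do not replace detection tooling; they build an analyst’s intuition for what compression and generation artifacts look like, which is the skill the training requirement above targets.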
It is imperative for organizations to prioritize the health and fitness of their analysts through professional development, proper shift management to facilitate sleep hygiene, and the smallest number of digital tools necessary for a given task. Further, attentional fitness practices embedded in training programs and facilitated in the field will protect analysts’ cognitive function in high-stress or repetitive environments. A holistic health framework ensures all areas of fitness contribute to analyst performance.
Conclusion
We can no longer evaluate PAI without considering the impact of AI-generated material. While AI is a force multiplier in many spaces, within OSINT it presents a real threat to processes and people. Evaluating the impact of AI on cognitive load remains an open research question, but our adversaries are not waiting for those studies to target our populace and analysts with overload and deception. Collectively, we need to improve our AI literacy and shield ourselves from AI Slop. AI-generated mis/disinformation damages us all, and combating it requires proactive measures to arm OSINT analysts with the tools and tradecraft they need to enhance national security through our public digital spaces.
The views expressed are those of the author(s) and do not reflect the official position of the US Army War College. Gemini was used to brainstorm and research the initial support sources for this article. NotebookLM was used to collate and organize research materials; AI was not used in the generation of any content or written material.