
Framing AI Risks Through Strategic Thinking Frameworks

09.10.2025 at 06:00am

Abstract

This article examines how lessons from the COVID-19 pandemic can guide the responsible integration of Artificial Intelligence (AI) into the United States’ national security strategy. By applying critical thinking, design thinking, and systems thinking frameworks, strategic leaders can better anticipate risks, adapt to rapidly evolving challenges, and safeguard decision-making processes. The analysis underscores that AI’s potential benefits can only be realized if managed with disciplined frameworks, cross-domain collaboration, and informed human oversight. 


Introduction: Lessons from a Pandemic for the AI Age

Did the COVID-19 pandemic prepare the United States for the challenges of emerging Artificial Intelligence (AI)? The pandemic of 2020 reshaped the daily lives of millions of Americans and exposed the strengths and weaknesses of our crisis response systems. The virus spread rapidly, overwhelming hospitals and forcing emergency management teams to deploy data aggregation and predictive modeling to anticipate needs and allocate resources.

While these data aggregation tools provided valuable insights, they were only as effective as the quality of the data and coordination behind them. The algorithms and large language models within AI function on similar principles but operate at far greater speed, scale, and complexity, and carry significantly higher potential risks. This article argues that, to protect United States national security interests, strategic leaders must apply three proven frameworks (critical thinking, design thinking, and systems thinking) to guide AI integration, mitigate emerging risks, and ensure responsible deployment.

Critical Thinking: Guiding Strategic Solutions

Critical thinking improves judgment by providing a structured, goal-oriented approach to complex problems. The Paul and Elder model, as described by Stephen Gerras in Appendix C of the Planner’s Handbook for Operational Design, emphasizes recognizing biases, applying feedback loops, and encouraging diverse perspectives to reach sound conclusions. This disciplined, reflective process is especially valuable when navigating uncertainty and competing priorities in high-stakes environments.

During the COVID-19 crisis in New Jersey, the problem was initially ill-defined. Agencies held divergent views based on different assumptions, organizational cultures, and media interpretations. Without structured critical thinking, this could have led to misallocated resources and failed coordination. Instead, disciplined questioning and dialogue aligned stakeholders around a shared understanding, enabling creative solutions such as targeted hospital bed expansions that addressed the areas of highest need.

In AI risk management, the same disciplined approach can align policymakers, technologists, and security experts. By applying critical thinking to issues such as adversarial attacks, data poisoning, and model inversion, leaders can scope problems accurately, minimize bias, and ensure that resources are deployed against the most pressing vulnerabilities. Such rigor ensures that decision-makers respond effectively to present challenges and anticipate emerging risks in rapidly evolving technological environments.

One practical example of applying critical thinking to AI in defense operations could be in the use of machine learning models for intelligence, surveillance, and reconnaissance (ISR). If an AI-enabled system produces a targeting recommendation based on imagery analysis, unexamined biases in the training data could lead to false positives. In a high-stakes environment, this could result in unnecessary escalation or collateral damage. Through structured bias-checking processes, red-team reviews, and continuous feedback loops, as outlined in doctrine such as JP 3-0 and JP 5-0, commanders can more accurately scope the operational problem, validate AI outputs, and ensure recommendations align with strategic objectives and rules of engagement.
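The validation step described above can be illustrated in miniature. The sketch below, which is a hypothetical example rather than any fielded system or doctrinal procedure, gates model recommendations behind a false-positive-rate check against red-team ground truth; all component names, data, and thresholds are illustrative assumptions.

```python
# Hypothetical sketch: release AI targeting recommendations only after a
# per-region false-positive-rate check against red-team ground truth.
# All names, data, and thresholds are illustrative assumptions.

def false_positive_rate(detections, ground_truth):
    """Fraction of detections not confirmed by red-team ground truth."""
    if not detections:
        return 0.0
    false_hits = [d for d in detections if d not in ground_truth]
    return len(false_hits) / len(detections)

def gate_recommendations(detections_by_region, truth_by_region, max_fpr=0.1):
    """Release recommendations for regions passing the check; flag the rest."""
    released, flagged = {}, []
    for region, detections in detections_by_region.items():
        fpr = false_positive_rate(detections, truth_by_region.get(region, set()))
        if fpr <= max_fpr:
            released[region] = detections
        else:
            flagged.append((region, fpr))  # route back for human review
    return released, flagged

# Example: region "B" has three unconfirmed detections and is flagged.
dets = {"A": ["t1", "t2"], "B": ["t3", "t4", "t5", "t6"]}
truth = {"A": {"t1", "t2"}, "B": {"t3"}}
ok, review = gate_recommendations(dets, truth)
```

The point of the sketch is the feedback loop itself: flagged outputs return to a human reviewer rather than flowing directly into a decision, mirroring the disciplined questioning the Paul and Elder model prescribes.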

Design Thinking: Building Flexible Operational Approaches

Design thinking fosters innovative solutions by blending creative exploration with iterative problem-solving. The Army Design Methodology (ADM) exemplifies this, using feedback-driven, non-linear processes to address ‘wicked’ problems. This method prioritizes adaptability, allowing leaders to adjust their understanding and operational approach as new information emerges.

In New Jersey’s COVID-19 response, design thinking helped agencies adapt as new information emerged. Daily briefings, iterative planning, and predictive modeling identified where hospital bed capacity needed urgent expansion. This operational approach was both flexible and practical, driven by data analysis and refined through continuous collaboration among public health officials, emergency managers, and engineers.

Applied to AI, design thinking enables strategic planners to map risk environments, evaluate potential interventions, and adapt quickly as technology and threats evolve. For example, identifying vulnerabilities in data supply chains and refining protective measures through iterative testing can strengthen resilience against malicious actors. By fostering creative yet structured problem-solving, design thinking ensures that strategic responses remain relevant in dynamic, high-risk environments.

Design thinking can also be applied to defense AI programs, such as predictive maintenance for forward-deployed aircraft. Iterative prototyping of maintenance scheduling algorithms, informed by constant user feedback from maintainers in the field, can improve model accuracy and operational efficiency. This approach mirrors the Army Design Methodology by accepting that early outputs may be flawed and refining them through repeated testing and adjustment. A notable example can be drawn from Project Maven’s phased integration process, where operational users worked directly with developers to adapt AI tools for real-world mission requirements.
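The iterative refinement described above can be sketched as a simple feedback loop. The code below is an illustrative assumption, not a real maintenance algorithm: each cycle blends the current scheduled interval with maintainer-reported failure times, so an admittedly flawed early estimate converges toward field reality.

```python
# Hypothetical sketch of design-thinking iteration: a maintenance-interval
# estimate is refined each cycle using field feedback from maintainers.
# The update rule, weight, and numbers are illustrative assumptions.

def refine_interval(interval_hours, observed_failure_hours, weight=0.3):
    """Nudge the scheduled interval toward observed failure times."""
    if not observed_failure_hours:
        return interval_hours
    observed_mean = sum(observed_failure_hours) / len(observed_failure_hours)
    # Blend the current estimate with field observations (feedback loop).
    return (1 - weight) * interval_hours + weight * observed_mean

# Three iterations with fresh maintainer feedback each cycle: the initial
# planning assumption (500 flight hours) is treated as provisional and
# adjusted as evidence accumulates.
interval = 500.0
for feedback in ([420.0, 410.0], [430.0, 415.0], [425.0]):
    interval = refine_interval(interval, feedback)
```

The deliberate choice to weight new evidence partially, rather than replace the estimate outright, reflects the Army Design Methodology's premise that understanding is revised incrementally as the problem frame improves.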

In July 2025, the Department of Defense’s Chief Digital and Artificial Intelligence Office (CDAO) awarded contracts worth up to $200 million each to four leading AI firms (OpenAI, Anthropic, Google, and xAI) to develop “agentic AI workflows” for national security missions. These projects exemplify design thinking in action, as commercial frontier AI models are being adapted into defense applications through iterative development cycles, user feedback loops, and phased integration. By involving operational end users and technology developers from the start, the initiative seeks to ensure that AI tools address real-world mission requirements while maintaining flexibility to adapt as threats and technologies evolve.

Systems Thinking: Understanding Interconnected Threats

Systems thinking provides a holistic lens for understanding how interconnected parts influence the whole. Models like Relationships, Actors, Functions, Tensions (RAFT) help map complex adaptive systems that evolve in unpredictable ways. This perspective enables planners to anticipate how changes in one part of the system can create ripple effects elsewhere.

In the COVID-19 response, the Army Corps of Engineers tailored community-specific solutions by recognizing the links between healthcare capacity, local governance, and federal support. This systems-level awareness enabled targeted, effective interventions. Such tailored approaches would have been far less effective without recognizing the interdependencies within the broader system.

AI systems are similarly interconnected, linking data sources, algorithms, infrastructure, and human decision-making. When leaders apply systems thinking, they can identify where small vulnerabilities might cascade into larger security threats and design safeguards that reinforce the entire system’s stability. This holistic view not only strengthens defensive measures but also supports resilient, adaptive strategies in the face of evolving technological challenges.
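The cascade dynamic described above can be made concrete with a small dependency-graph sketch. The component names and edges below are illustrative assumptions, not a real architecture: a single poisoned data source is traced through everything downstream of it, while independent components remain unaffected.

```python
# Hypothetical sketch: model an AI pipeline as a dependency graph and
# trace how one compromised component cascades downstream. Component
# names and edges are illustrative assumptions, not a real architecture.

from collections import deque

# Edges point from a component to the components that depend on it.
DEPENDENTS = {
    "data_source": ["training_pipeline"],
    "training_pipeline": ["model"],
    "model": ["decision_support"],
    "infrastructure": ["training_pipeline", "decision_support"],
    "decision_support": [],
}

def cascade(compromised):
    """Return every component affected by a single compromise (BFS)."""
    affected = {compromised}
    queue = deque([compromised])
    while queue:
        node = queue.popleft()
        for dependent in DEPENDENTS.get(node, []):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected

# A single poisoned data source reaches the decision-support layer.
impact = cascade("data_source")
```

Even this toy graph shows why a systems-level map matters: the blast radius of a compromise is determined by the dependency structure, not by the apparent size of the initial vulnerability.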

In a coalition or interagency context, systems thinking is equally vital for AI integration. For instance, NATO’s Federated Mission Networking (FMN) approach demonstrates how interconnected systems across allied nations must account for varying security standards, data-sharing agreements, and technical infrastructures. An AI-enabled decision-support tool that functions optimally in a United States network might fail or introduce vulnerabilities when integrated into a partner nation’s system with different cybersecurity postures. Planners who map these interdependencies before deployment are better positioned to anticipate interoperability challenges, design robust safeguards, and ensure AI capabilities enhance rather than undermine collective mission success.

Addressing Counterarguments: The Risks of Overreliance on Frameworks

While structured approaches such as critical thinking, design thinking, and systems thinking offer valuable tools for guiding AI integration, some analysts argue that rigid adherence to frameworks can hinder innovation and slow decision-making in fast-moving technological environments. In crisis contexts, commanders and policymakers may lack the time to engage in iterative problem-framing or systems mapping, leading to delays that adversaries can exploit. Moreover, scholars such as Gary Klein have emphasized that expert intuition, built through experience and pattern recognition, can be as vital as structured reasoning in complex and time-sensitive decisions.

Another critique centers on the unpredictable nature of AI development itself. Technology historian Melvin Kranzberg observed that “technology is neither good nor bad; nor is it neutral,” underscoring that the societal impacts of AI will often fall outside any pre-established analytical framework. Overemphasis on formal models risks creating a false sense of control and may lead leaders to overlook emergent, unconventional threats that fall between doctrinal or conceptual boundaries.

Acknowledging these perspectives is essential. Frameworks should serve as aids, not constraints, for strategic thinking, providing structure and clarity without limiting innovation or responsiveness. Leaders must retain the flexibility to adapt or bypass structured methods entirely when operational conditions demand rapid, unconventional responses. This balanced approach ensures that the benefits of critical, design, and systems thinking can be harnessed without sacrificing the agility needed to respond to unforeseen challenges.

Concluding Insights

AI offers extraordinary potential to enhance national security decision-making, but its complexity demands disciplined frameworks. Critical thinking clarifies the problem space, design thinking drives adaptive and creative solutions, and systems thinking ensures that strategies account for the interconnected nature of risks. Together, these approaches can guide the responsible integration of AI into defense planning, ensuring that technological capabilities are matched with strategic foresight.

As the United States confronts compounding challenges such as future pandemics, impacts from climate change, demographic shifts, and resource scarcity, strategic leaders must prepare for an environment where threats emerge rapidly and unpredictably. AI can be part of the solution, but only if managed responsibly, integrated with other national power instruments, and governed by informed human judgment. By applying these frameworks, leaders can shape AI’s trajectory to strengthen national security while safeguarding against its inherent risks.

(This article reflects the views of the author and not those of the U.S. Naval War College, the U.S. Navy, or the U.S. Army.)

About The Author

  • Lieutenant Colonel Brian Corbin is a U.S. Army Engineer and Military Professor in the National Security Affairs Department at the U.S. Naval War College. A Distinguished Graduate of the U.S. Army War College, he also graduated from the U.S. Naval War College, where he completed the Maritime Advanced Warfighting School. He has commanded at the battalion level and served with the National Geospatial-Intelligence Agency, the U.S. Army Corps of Engineers, and the 82nd Airborne Division, with multiple deployments to Iraq and Afghanistan. His experience includes humanitarian operations in disaster relief and COVID-19 response.

