Foresight in Decision Making: Improving Intelligence Analysis with Probabilistic Forecasting

Matthew Enderlein


In the complexity of today’s operational environment, military intelligence requirements go far beyond the simplistic, enemy-centric parameters on which conventional doctrine is based.  Today’s battlefield is a complex, dynamic system that is influenced by technology, non-traditional adversaries, and the intersection of military, governmental, and civilian concerns.  Now more than ever, commanders require a dynamic flow of information and analysis to support decision making.  Intelligence facilitates operations, and military intelligence professionals have developed systems to deliver intelligence support down to the lowest tactical level.  Presently, these systems are designed largely to collect and analyze intelligence to produce an understanding of the situation as it is, and to convey this information to enable decision making.


Because of the inherent uncertainty of predicting future events, military planners tend to value adaptability and agility over foresight.[1]  However, in the last eight years, academic research into quantitative methods of forecasting geopolitical events, as well as methods of training and assessing the skills of intelligence analysts, has established the need for intelligence agencies to employ probabilistic reasoning, cognitive debiasing training, and accuracy scoring to improve precision in intelligence forecasts.  These methods are slowly being adopted by the civilian intelligence community and should be adapted and integrated into military intelligence processes as well.


Military intelligence professionals should explore leveraging existing research on quantitative forecasting methods and develop systems that place a premium on forecasting changes in the operational environment over simply understanding the current situation.  Even in dynamic and uncertain environments, effective intelligence forecasting systems will enable commanders to better target future operations through enhanced situational understanding based on quantitative, forward-looking methods.


Background


It is critical for commanders to understand not only the situation as it is, but also to predict how it will change based on military operations and a myriad of other factors.  The ability to forecast these outcomes will enable commanders to evaluate and select courses of action as well as develop and refine contingency plans.  Accuracy in these predictions is paramount, yet the military, like the intelligence community at large, relies largely on qualitative forecasting and reporting methods and rarely evaluates the accuracy of forecasts after the fact.[2]


This problem is not new, and it is not unique to the military.  In recent decades, the intelligence community has suffered historic predictive failures, such as the failure to forecast the September 11th attacks and the erroneous assessment of weapons of mass destruction proliferation in Iraq during the run-up to the 2003 invasion.[3]  In 2010, with shortcomings like these in mind, the Intelligence Advanced Research Projects Activity sponsored a geopolitical forecasting tournament that pitted seasoned government intelligence analysts against various civilian research consortiums sponsored by universities and private companies.  One such project, founded as a joint venture between the University of Pennsylvania and the University of California, Berkeley, developed and honed teams of part-time civilian volunteers that consistently outperformed government analysts’ predictions by 30% over a period of five years.[4]  The forecasters of the Good Judgment Project (GJP), as it became known, achieved this feat with only a modest amount of training and by relying solely on open source intelligence (government analysts had full access to classified sources).


The success of the Good Judgment Project rested on three simple analytical concepts: probabilistic reasoning, crowd wisdom, and averaging.[5]  Participants, who had no background in intelligence work or significant field specialization, first completed a few hours of online training modules over the course of a week.  Modules included cognitive debiasing training (based on psychological studies of human rationality conducted by Daniel Kahneman and Amos Tversky) and simple, scenario-based training on the principles of probabilistic reasoning.  Participants then formed small teams and began making predictions on various geopolitical events.  Questions were posed in simple, binary formats, such as: “Will North Korea attempt to launch a multistage rocket between 7 January 2013 and 1 September 2013?” or “Will the Euro be valued at $1.10 USD or less by 1 October 2015?”[6]  Participants conducted research, employed the tools provided in their training, and assigned a numerical probability to each event.  These predictions were aggregated and averaged at the team level.  As each day passed and new information became available, participants updated their estimates, and each participant received a numerical score based on the accuracy of their previous predictions.
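
A minimal sketch in Python illustrates the core mechanics (the GJP did not publish software, so this is an illustration of the arithmetic, not its actual tooling; the question text, analyst names, and numbers are all invented): each forecaster maintains a revisable numerical estimate, and the team forecast is the simple arithmetic mean of the current estimates.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ForecastQuestion:
    """A binary forecasting question in the GJP style."""
    text: str                                       # "Will X occur before DATE?"
    estimates: dict = field(default_factory=dict)   # forecaster -> P(event), 0.0-1.0

    def update(self, forecaster: str, probability: float) -> None:
        """Record, or revise, one forecaster's current estimate."""
        if not 0.0 <= probability <= 1.0:
            raise ValueError("probability must be between 0.0 and 1.0")
        self.estimates[forecaster] = probability

    def team_forecast(self) -> float:
        """Aggregate the team's view as a simple arithmetic mean."""
        return mean(self.estimates.values())

# Invented example: three analysts weigh in; one revises after new reporting.
q = ForecastQuestion("Will country X test a multistage rocket before 1 October?")
q.update("analyst_a", 0.65)
q.update("analyst_b", 0.40)
q.update("analyst_c", 0.55)
q.update("analyst_b", 0.50)  # revised upward as new information arrives
print(f"Team forecast: {q.team_forecast():.2f}")  # 0.57
```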


By aggregating and averaging the predictions of large numbers of forecasters, this system allowed the GJP to control for individual errors and to use the principle of crowd wisdom to produce more accurate forecasts.[7]  Scoring the accuracy of individual forecasters allowed the GJP to assemble highly accurate teams of “superforecasters” and also enabled each participant to receive objective feedback on their methods.  By 2015, after five years of data collection and analysis, the GJP had outperformed the intelligence community on thousands of predictions.  Throughout the course of the project, methods and training modules were refined, codified, and validated not only by their success but by numerous academic studies.  Today, GJP methods are successfully employed by government agencies, think tanks, private risk consulting firms, and news organizations such as The Economist.
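
The accuracy scoring itself is conventionally done with the Brier score, the proper scoring rule Tetlock and Gardner describe: the squared gap between each stated probability and the eventual outcome, averaged over a forecaster’s record (lower is better).  A minimal sketch, with invented forecast histories:

```python
def brier_score(record: list[tuple[float, int]]) -> float:
    """Mean squared error between stated probabilities and outcomes.

    Each entry pairs the probability assigned to an event with its outcome
    (1 if it occurred, 0 if it did not). A perfect forecaster scores 0.0;
    always answering 0.5 scores 0.25.
    """
    return sum((p - outcome) ** 2 for p, outcome in record) / len(record)

# Invented records: a decisive, well-calibrated forecaster versus a hedger.
sharp  = [(0.9, 1), (0.2, 0), (0.8, 1), (0.3, 0)]
hedger = [(0.5, 1), (0.5, 0), (0.5, 1), (0.5, 0)]
print(brier_score(sharp))   # 0.045
print(brier_score(hedger))  # 0.25
```

Tetlock and Gardner present the original Brier formulation, which sums squared error over every possible outcome and therefore runs from 0 to 2; the binary shorthand above produces the same ranking of forecasters.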


Employing the methodology designed by the Good Judgment Project requires only simple software, a modest amount of training, and the time for analysts to refine their skills.  The superforecasters of the GJP were ordinary people.  Most of them scored higher than normal for open-mindedness on standard psychological batteries, but otherwise little separated them from the average person.[8]  Researchers found that a high level of open-mindedness corresponds to an increased ability to avoid the cognitive bias traps that make predictions less accurate, and to synthesize numerous information sources into patterns that better inform probabilistic forecasts.[9]  But beyond these benefits, it was the simple, probability-based methodology of the GJP that enabled it to achieve a level of predictive accuracy unprecedented within the intelligence community.


Current Army Forecasting Methods and Doctrine


Current doctrine espouses the need for intelligence systems that can forecast and predict future events.  Joint Publication 2-0 (Joint Intelligence) states that, “when justified by the available evidence, intelligence should forecast future adversary actions and intentions.”[10]  It then outlines the vulnerabilities and risks inherent in predicting enemy behavior.  Though the doctrine holds that the potential benefits of forecasting outweigh the potential risks, in reality, doctrine and practice do not always align.  Because current doctrine lacks an effective, quantitative method for probabilistic forecasting, military intelligence analysts often shy away from predictive systems.  This reinforces a prevalent notion that forecasting future events is inherently and unavoidably ambiguous; as a result, military decision makers tend to use qualitative intelligence estimates to react rapidly to changes in the operational environment rather than to predict those changes in the first place.[11]


In practice, forecasting enemy intent rarely goes beyond templating the enemy’s most probable and most dangerous courses of action.  For a deployed unit conducting real-time intelligence collection and analysis, forecasts of future events, when they are included at all, are described using qualitative terms such as “probable” or “unlikely.”  While these terms may correspond to numerical probabilities per intelligence doctrine, in reality they carry inconsistent, ambiguous meanings for maneuver commanders.  Predictions are generally vague, a product of the qualitative nature of current forecasting methods and the prevalent notion that predictions are inherently inaccurate.  Often, predictions are transposed directly from intercepted radio or cell phone traffic and analyzed based on the assumed validity of the source and on how the future event fits within the understood situation.


In practice, these forecasts rarely drive decision making in a significant way; analysis of the current, known situation is far more useful than uncertain predictions.  But if existing intelligence systems could be augmented by well-trained and effective forecasting teams, commanders and staffs would be armed with updated, actionable forecasts of future events, which would greatly enhance their ability to target operations in a complex environment.


Employing Probabilistic Forecasting


Using quantitative tools to aid decision making is a foreign concept to many military leaders.  Though employing forecasting teams will never remove the inherent uncertainty from decision making, probabilistic forecasts can provide significant benefit within the larger context of the military decision making process.  Accurate forecasts offer simple, useable quantitative tools to aid commanders in navigating dynamic operational environments.  As probabilistic forecasting becomes fully integrated within the mission command system, operations will drive the creation of new forecasting requirements, which will in turn facilitate the decision making and operations processes, creating a continuous feedback loop that refines and improves all processes involved.


The techniques and methods pioneered by the Good Judgment Project can be adapted within existing military intelligence systems to increase the capability of military units to anticipate, plan for, and act on future events.  With modest effort, intelligence analysts can be trained as forecasters and forecasting teams can be created within staffs at all levels.  These forecasting teams would provide real-time forecasting and analysis to commanders at the battalion level and higher.  Additionally, the efforts of forecasting teams could be coordinated and aggregated at higher levels to enable more accurate forecasts at the operational and strategic levels.


Effectively implementing forecasting teams would require some adjustments in defining intelligence requirements.  To be actionable by a forecasting team, intelligence requirements need to be specific and quantifiable.  Planning staffs would need to work with commanders to translate intelligence requirements into questions with distinct binary or ternary answers and expiration dates.  Once intelligence requirements are defined, forecasting teams could employ any and all available sources to inform predictions, which can then be aggregated and updated continuously as new information becomes available.  Dissemination would also be essential to the efficacy of forecasting teams.  Daily updated forecasts must be available to commanders and planning staffs to enable agility and rapid decision making.  Over time, forecasting teams will increase accuracy with experience.  Additionally, commanders and planning staffs will learn to plan based on predictive, rather than reactive, information.  As forecasting teams become better integrated within staffs, forecasts can be used to enable targeting, future operations, and even the refinement of lines of effort.
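
As a concrete illustration of what “specific and quantifiable” might look like, the sketch below recasts a vague requirement as binary questions with explicit resolution criteria and expiration dates; every question, threshold, and date here is invented for the example.

```python
from datetime import date

# Vague requirement (hard to score): "Will enemy activity in the district increase?"
# Recast as specific, binary questions with resolution criteria and expiration dates.
requirements = [
    {
        "question": "Will more than 10 IED incidents be reported in District X "
                    "between 1 June and 30 June?",
        "resolution": "Count of incidents in the unit's significant-activity reports",
        "expires": date(2019, 6, 30),
    },
    {
        "question": "Will the District X governor hold a public shura with village "
                    "elders before 15 July?",
        "resolution": "Confirmed by unit reporting or local media",
        "expires": date(2019, 7, 15),
    },
]

for req in requirements:
    print(f"Resolves {req['expires']:%d %b %Y}: {req['question']}")
```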


Incorporating forecasting into the processes of the military intelligence system can be supported by existing doctrine.  Army Techniques Publication 2-33.4 (Intelligence Analysis) outlines a method for subjective probability analysis.[12]  This method, which includes a probability scoring system almost identical to that of the Good Judgment Project, can be used as a framework for developing an Army forecasting method and for retraining intelligence analysts to become forecasters.  To be effective, forecasts must be probabilistic, updated continuously, measurable (described numerically instead of with vague language), and must enable decision making in a quantifiable way.[13]

The current doctrine could be improved by better aligning system outputs with system inputs.  Currently, analysts are trained to score probability on a 0.0 to 1.0 scale (corresponding to zero percent and 100 percent likelihoods).  Analysts then transpose probability scores into intelligence estimates that convey probability on a nine-point qualitative scale running from “highly improbable” to “highly probable” (corresponding to probability increments of ten percent).[14]  Conveying probabilities on this scale is intended to turn raw analysis into a useable product but, in reality, it serves only to dilute the accuracy of predictions.  Research conducted by proponents of the GJP indicates that adding granularity to predictions (estimating probability in increments of one percent rather than ten percent) corresponds with increased accuracy.[15]  Incorporating clear, numerical language into intelligence estimates, in addition to analysis, would not only remove ambiguity and unnecessary dilution of accuracy, it would also increase the utility of these estimates to commanders and staffs.
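
The dilution is easy to demonstrate.  In the sketch below, the nine bin labels are illustrative stand-ins (only the scale’s endpoints are cited above), and two forecasts seven points apart collapse into the same phrase:

```python
LABELS = ["highly improbable", "very improbable", "improbable",
          "somewhat improbable", "about even", "somewhat probable",
          "probable", "very probable", "highly probable"]

def qualitative(p: float) -> str:
    """Collapse a 0.0-1.0 forecast into one of nine qualitative terms.

    The intermediate labels are illustrative; the doctrinal scale runs
    from "highly improbable" to "highly probable" in roughly ten-percent steps.
    """
    index = min(int(p * len(LABELS)), len(LABELS) - 1)
    return LABELS[index]

# Granularity loss: forecasts seven points apart become indistinguishable.
print(qualitative(0.58))  # somewhat probable
print(qualitative(0.65))  # somewhat probable
```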


Forecasting could also be used to predict and measure mission accomplishment.  Currently, Measures of Effectiveness (MOEs) are used to measure “the attainment of an end state, achievement of an objective, or creation of an effect.”[16]  Like most elements of military operations, MOEs are typically measured in a qualitative manner.  Doctrinally, however, MOEs already contain many of the elements of typical forecasting questions.  They are designed to include “measurable, collectable, and relevant indicators.”[17]  Refining MOEs into actionable forecasting questions would enable forecasting teams to track, measure, and predict changes in system behavior in relation to objectives and desired end states.  Forecasting MOEs can add value as quantitative tools to support the operations process and could further reinforce the feedback loop between intelligence and operations.
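
As a hypothetical example, an MOE concerning public confidence in local security forces might be refined into a binary question with a measurable indicator, with the team’s updated forecasts tracing progress toward the end state; all figures and dates below are invented.

```python
# Hypothetical MOE recast as a scoreable forecasting question. A rising
# aggregate forecast signals progress toward the desired end state.
moe_question = ("Will at least 60% of respondents in the quarterly District X "
                "survey report confidence in local security forces by 30 September?")

weekly_team_forecasts = [0.35, 0.40, 0.38, 0.47, 0.55]  # invented weekly aggregates

trend = weekly_team_forecasts[-1] - weekly_team_forecasts[0]
print(moe_question)
print(f"Current estimate: {weekly_team_forecasts[-1]:.0%} "
      f"(trend {trend:+.0%} over five weeks)")
```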


The Way Forward


The benefits of employing probabilistic forecasting are apparent, but the Army must carefully plan for the implementation of forecasting to ensure the project is efficiently executed and to prevent marginalization by those who would prefer the status quo.  Early implementation of forecasting will likely be better received in organizations that are already receptive to change and innovation. Training and Doctrine Command’s Army Capabilities Integration Center, US Special Operations Command’s Capabilities Analysis Division, the Intelligence Center of Excellence, or the Special Warfare Education Group within the John F. Kennedy Special Warfare Center and School would be excellent candidates to develop, lead, and implement forecasting pilot programs.  These organizations can establish working groups to determine the feasibility and best ways forward for research, testing, and creating forecasting teams.


Working groups consisting of military intelligence officers and NCOs, maneuver leaders, officers and NCOs with significant staff experience, and Operations Research/Systems Analysis (Functional Area 49) officers can explore methods for translating the GJP methodology to military intelligence operations.  It will be critical to explore ways of adapting forecasting methods to work at the tactical level, as all previous forecasting research has focused on geopolitical events.  Additionally, working groups should explore the role analytical software can play in forecasting, how forecasting can be used to measure the effects of friendly actions, and how to best aggregate and disseminate forecasts across all levels of war.


These working groups can determine and test the best methods of recruiting, training, and implementing forecast teams.  Gaining support from leaders within pilot organizations will be critical, as anything less than full integration of forecasting into the decision making process will be detrimental to the project’s effectiveness.  As pilot programs are tested and refined, methods and lessons learned must be analyzed and codified to facilitate the implementation of probabilistic forecasting across the Army.  Widespread implementation of forecasting methods would likely precipitate a cultural shift as leaders begin to value planning and operations informed by forward-looking predictive systems over reactive ones.


Conclusion


Empowering leaders with timely, predictive analysis will enable rapid and informed decision making within the complex and ambiguous operational environment typical of today’s battlefield.  Adapting the methodology developed by the Good Judgment Project to existing intelligence systems will give the Army a significant and unique advantage over adversaries and enable leaders to navigate dynamic situations armed with the most modern and efficient analysis tools available.  While adopting new, unfamiliar methods of analysis may meet resistance within an Army bureaucracy notorious for its aversion to change, leveraging organizations already open to innovation to develop and lead pilot programs can demonstrate the benefits of probabilistic forecasting to the larger system.  Army military intelligence professionals have already developed dynamic, robust intelligence capabilities aided by years of combat experience and the incorporation of new technology.  Employing probabilistic forecasting will enable intelligence professionals to stay on the cutting edge and to continue to outmatch adversaries in the twenty-first century.


The opinions expressed in this article are the author's and not necessarily those of the U.S. Department of Defense or U.S. Army.


End Notes


[1] McNabb, Kathryn, and Gregory Tozzi. 2017. Getting It Righter, Faster: The Role of Prediction in Agile Government Decisionmaking. Washington, DC: Center for a New American Security, August 14. https://www.cnas.org/publications/reports/getting-it-righter-faster.

[2] Dhami, Mandeep, David Mandel, Barbara Mellers, and Philip Tetlock. 2015. "Improving Intelligence Analysis With Decision Science." Perspectives on Psychological Science (Association for Psychological Science) 10 (6): 753-754. https://journals.sagepub.com/doi/10.1177/1745691615598511.

[3] Tetlock, Philip, and Dan Gardner. 2015. Superforecasting: The Art and Science of Prediction. New York: Penguin Random House.

[4] Mellers, Barbara, Joshua Baker, Eva Chen, David Mandel, and Philip Tetlock. 2017. "How generalizable is good judgment? A multi-task, multi-benchmark study." Judgment and Decision Making 12 (4): 370. http://www.sjdm.org/journal/17/17408/jdm17408.pdf.

[5] Probabilistic reasoning involves breaking down complex, uncertain questions into a series of smaller questions which can be answered by using simple math.  Within academia, this is known as “Bayesian reasoning,” but the probabilistic methods employed by GJP forecasters require no mathematical knowledge beyond simple arithmetic: Tetlock and Gardner. Superforecasting, 169-171.

[6] Tetlock and Gardner. Superforecasting, 261-263.

[7] Crowd wisdom is the concept that a group opinion derived from a number of individual opinions is generally more accurate than any single opinion, even that of an expert.  When matched with the tool of averaging, leaders can embrace a wide range of disagreeing individual opinions from group members and derive a useful, more accurate prediction by simply finding the arithmetic mean: Tetlock and Gardner. Superforecasting, 131-132.

[8] Open-mindedness is one of the five traits in the commonly accepted “Five-Factor Model of Personality” and is tested by multiple psychological batteries such as the “Revised NEO Personality Inventory.”  Individuals who test high in open-mindedness are more receptive to new ideas, experiences, and ways of thinking than the general population.

[9] Mellers, Barbara, Eric Stone, Pavel Atanasov, Nick Rohrbaugh, S. Emlen Metz, Lyle Ungar, Michael Bishop, Michael Horowitz, Ed Merkle, and Philip Tetlock. 2014. "The Psychology of Intelligence Analysis: Drivers of Prediction Accuracy in World Politics." Journal of Experimental Psychology: Applied (American Psychological Association) 21 (1): 2. https://www.apa.org/pubs/journals/releases/xap-0000040.pdf.

[10] Department of Defense. October 2013. JP 2-0, Joint Intelligence. Washington, DC, II-9.

[11] McNabb and Tozzi. Getting it Righter, Faster.

[12] Department of the Army. August 2014. ATP 2-33.4, Intelligence Analysis. Washington, DC: Army Publishing Directorate, 3-10.

[13] McNabb and Tozzi. Getting it Righter, Faster.

[14] Department of the Army. ATP 2-33.4, 3-10.

[15] Tetlock and Gardner. Superforecasting, 145.

[16] Department of the Army. May 2012. ADRP 5-0, The Operations Process. Washington, DC: Army Publishing Directorate, 5-3.

[17] Westphal, Tom, and Jason Guffey. October-December 2014. "Measures of Effectiveness in Army Doctrine." eARMOR Mounted Maneuver Journal. http://www.benning.army.mil/armor/eARMOR/content/issues/2014/OCT_DEC/Westphal.html.


About the Author(s)

Captain Matthew Enderlein is an active duty Army Infantry Officer stationed at Fort Bragg, NC.  He has held positions ranging from Rifle Platoon Leader to Company Commander and deployed to Afghanistan in 2014 in support of Operation Enduring Freedom.  Matthew is a 2011 graduate of the University of North Carolina at Chapel Hill.