Best Practices Guide for Conducting Assessments in Counterinsurgencies
This guide provides practical advice to assessment strategy planners and practitioners. It aims to fill the gap between the instructions provided in handbooks and field manuals and the practical challenges of adapting those instructions to specific operations. Its purpose is to complement, not replace, more detailed planning and instructional documents. Wherever possible, the articles in this guide provide references to more detailed assessment planning documents. It also addresses some of the specific needs of implementing the Transition (Inteqal) process in Afghanistan.
Part One: Assessment Philosophy
Article One: Remain True to the Assessment’s Objective. The objective of an assessment is to produce insights about the current situation and to provide feedback that helps the decision maker make better decisions. This article discusses how key elements of this objective should guide the assessment development process.
Article Two: Take a Multi-dimensional Perspective. This article describes why it is essential to build the assessment by looking at the environment from multiple perspectives that cross lines of operation and time periods. It also highlights some errors that may arise if the assessment lacks a broad perspective.
Article Three: Serve as the Bodyguards of Truth. Assessment teams develop what may, by default, become the only publicly available, official picture of the campaign. Therefore, assessment teams must serve as the bodyguards of truth and never compromise the integrity of their reports. This article outlines nine key practices that help preserve the integrity of assessments.
Article Four: Ensure Independence and Access. Strategic assessment teams need to be free to express their findings about the current conditions and the influential factors they discover. They also need access to a wide array of information and people in order to perform their job properly. This article describes how to secure independence and access through a partnership between the senior sponsor of the assessment team, individual line of operation owners, and the assessment team.
Article Five: Nurture the Intelligence – Assessment Partnership. The activities related to intelligence and assessments often seem remarkably similar, thus generating the potential for confusion or duplication of effort. This article briefly discusses the mutually supporting relationship between the two activities. It uses references from formal documents and recommends that the leaders of the two communities deliberately develop a shared understanding of this symbiotic relationship in order to avoid problems.
Part Two: Method
Article Six: Establish a Terms of Reference Document. Unclear terms generate confusion in the design of the assessment framework, the analysis of data, and the reporting of insights. Thus, it is in the team’s best interests to develop a Terms of Reference document as soon as possible.
Article Seven: Build the Assessment Framework Iteratively, Incrementally, and Interactively. The assessment framework should be built in stages through a collaborative process. This approach minimizes complexity, allows for effective learning, and retains clearly established priorities. It also allows the assessment team to refine the focus and scope of the assessment framework based on lessons learned during the development and use of earlier versions.
Article Eight: Discriminate between Indicators and Metrics. Most people use the terms indicator and metric interchangeably, with little consequence or confusion. However, there are times when it is useful to discriminate between the two. This article offers a practical approach for deciding when and how to do so.
Article Nine: Use Each Class of Indicator Properly. Some indicators can be grouped into classes because they share a common set of characteristics that may be beneficial or detrimental to the assessment process. Several of these broad classes are described in this article, including indicators that measure inputs versus outcomes, indicators that signal failure to achieve a condition (spoilers), indicators that can reflect positive or negative effects depending on context (bipolar), and indicators that serve as substitutes for other hard-to-measure indicators (proxies).
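To make these distinctions concrete, the following sketch encodes the classes in a small Python taxonomy. The class names follow the article; the example indicators and cautions are hypothetical illustrations, not items drawn from the guide.

```python
from dataclasses import dataclass
from enum import Enum, auto

class IndicatorClass(Enum):
    INPUT = auto()    # measures effort expended, not effect achieved
    OUTCOME = auto()  # measures change in the conditions we care about
    SPOILER = auto()  # signals failure to achieve a desired condition
    BIPOLAR = auto()  # positive or negative depending on context
    PROXY = auto()    # stands in for a hard-to-measure indicator

@dataclass
class Indicator:
    name: str
    klass: IndicatorClass
    caution: str  # how this class can mislead if used improperly

# Hypothetical examples; each carries the caution its class requires.
EXAMPLES = [
    Indicator("training funds disbursed", IndicatorClass.INPUT,
              "shows effort, not effect; never brief as progress by itself"),
    Indicator("districts meeting security criteria", IndicatorClass.OUTCOME,
              "closest to the objective, but slow and costly to measure"),
    Indicator("assassinations of local officials", IndicatorClass.SPOILER,
              "its absence is necessary but not sufficient for success"),
    Indicator("violent incidents per month", IndicatorClass.BIPOLAR,
              "may rise simply because friendly forces enter new areas"),
    Indicator("night-time commercial traffic", IndicatorClass.PROXY,
              "valid only while its link to perceived security holds"),
]

for ind in EXAMPLES:
    print(f"{ind.klass.name:>7} | {ind.name}: {ind.caution}")
```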
Article Ten: Beware of Manipulated Metrics. Some metrics can be manipulated by the subjects under observation to send misleading signals to observers, rather than reflecting the reality of current conditions. The risk is particularly high for metrics that are used to promote or demote, or to redistribute resources and money directly. This article discusses several examples and suggests ways to detect and minimize such distortions of the data.
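One simple detection tactic, consistent with the article’s advice, is to cross-check a self-reported series against an independently collected one and flag sustained divergence. The sketch below is a hypothetical illustration; the data, the tolerance, and the series names are invented.

```python
# Hypothetical monthly values for the same metric from two channels:
self_reported = [52, 55, 54, 58, 71, 74, 76]   # by the unit being assessed
independent   = [50, 53, 55, 56, 57, 55, 58]   # by a third-party collector

TOLERANCE = 0.15  # flag relative divergence above 15% (illustrative choice)

for month, (rep, ind) in enumerate(zip(self_reported, independent), start=1):
    divergence = abs(rep - ind) / ind
    if divergence > TOLERANCE:
        print(f"month {month}: reported {rep} vs. independent {ind} "
              f"({divergence:.0%} apart) -- investigate for gaming")
```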
Article Eleven: Develop a Manageable Set of Metrics. There are hundreds of candidate metrics available at any point in time. Thus, it is necessary to establish rules that help select the metrics that contribute the most to the assessment effort. This article discusses several screening filters that help practitioners develop a manageable and effective set of metrics.
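In code, such screening behaves like a chain of predicates: a candidate survives only if it passes every filter. The filters and candidates below are hypothetical stand-ins, not the specific filters the article enumerates.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    relevant: bool      # informs a stated objective
    collectible: bool   # data can be gathered at acceptable cost
    resistant: bool     # hard for the observed subjects to game

# Each screening filter is a named predicate; a metric must pass all.
FILTERS = [
    ("relevance",      lambda m: m.relevant),
    ("collectibility", lambda m: m.collectible),
    ("integrity",      lambda m: m.resistant),
]

candidates = [
    Candidate("attacks per 1,000 residents", True, True, True),
    Candidate("leaflets distributed",        False, True, True),
    Candidate("checkpoints reporting quiet", True, True, False),
]

selected = [m.name for m in candidates
            if all(pred(m) for _, pred in FILTERS)]
print("manageable set:", selected)  # -> ['attacks per 1,000 residents']
```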
Article Twelve: Retain Balance in Both Metrics and Method. Several interrelated debates persist in the assessment world: the merits of narrative versus summary graphics, the organizational level at which assessments should be performed, and how to preserve the front-line commander’s views within higher-level summary assessment products. This article suggests using a format that balances different metrics and methods to capture the best features of each alternative.
Article Thirteen: Deploy Field Assessment Teams. In order to provide actionable information to the decision maker, assessment insights must be relevant and credible. For critical issues, the only way to achieve this standard is to get out into the field and engage directly with front-line units. This article suggests that we rethink how we perform assessments and offers an approach that augments the traditional process with field assessment teams.
Article Fourteen: Bound Estimates with Eclectic Marginal Analysis. When a desired metric is difficult to measure directly, we may be able to measure other factors that drive its value. Under such conditions, we can use marginal analysis with an eclectic set of related metrics to generate a reasonable estimate of the target metric. This article explains the technique and provides some examples of marginal analysis.
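A minimal numerical sketch of the bounding idea: related metrics known to sit above or below the target can bracket its value even when the target itself cannot be observed. Every figure and relationship below is invented purely for illustration.

```python
# Target: active insurgents in a district (not directly observable).
# Hypothetical related metrics, each of which implies a bound:
observed_cells = 25          # cells confirmed by reporting (a subset of all)
avg_cell_size = 8            # estimated members per cell
monthly_attacks = 60
attacks_per_fighter = 0.5    # maximum sustainable tempo per fighter per month
military_age_males = 14_000
max_support_rate = 0.05      # survey-based ceiling on active participation

lower_bounds = [
    observed_cells * avg_cell_size,         # known cells alone account for this many
    monthly_attacks / attacks_per_fighter,  # observed tempo requires at least this many
]
upper_bounds = [
    military_age_males * max_support_rate,  # participation cannot exceed this
]

low, high = max(lower_bounds), min(upper_bounds)
print(f"plausible range for active insurgents: {low:.0f} to {high:.0f}")
# -> plausible range for active insurgents: 200 to 700
```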
Article Fifteen: Anchor Subjectivity. A degree of subjectivity in assessments is unavoidable. This article discusses methods to minimize subjectivity, make the remaining subjectivity transparent, and capture subjective assessments consistently.
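One widely used anchoring device is a rating scale whose levels are tied to explicit, observable descriptors, so different assessors grade against the same standard. The scale and helper below are a hypothetical sketch, not one prescribed by the guide.

```python
# Hypothetical anchored scale for "local security perception".
# Each level is tied to an observable descriptor, so assessors grade
# against explicit standards rather than unaided judgment.
ANCHORS = {
    1: "markets closed; population avoids security forces",
    2: "markets open irregularly; movement only in daylight",
    3: "markets open daily; some travel after dark",
    4: "markets open daily; routine travel after dark; tips volunteered",
}

def record(level: int, justification: str) -> dict:
    """Record a subjective score with its anchor and rationale, keeping
    the basis of the judgment transparent and auditable over time."""
    if level not in ANCHORS:
        raise ValueError(f"level must be one of {sorted(ANCHORS)}")
    return {"level": level, "anchor": ANCHORS[level],
            "justification": justification}

print(record(3, "bazaar open six days this week; two unescorted night trips"))
```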
Article Sixteen: Share Data. Every coalition effort faces information sharing challenges. This article discusses important reasons for sharing information and offers some guidelines that promote effective sharing.
Article Seventeen: Include Host Nation Data. Two features of the COIN assessment environment that should be considered when developing the assessment process are the existence of host nation data collection efforts and the ability of assessment teams to interact with that system. This article addresses the challenges of using host nation data and ways to work around those challenges.
Article Eighteen: Develop Metric Thresholds Properly. This article discusses key guidelines for developing metric thresholds, including aligning threshold levels with key phases of the objective conditions, developing and sharing clear definitions of the thresholds, and ensuring that a metric crossing these levels represents a significant change in underlying conditions.
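In practice a threshold needs both a clear definition and a guard against noise, so that a crossing reflects real change rather than normal fluctuation. The sketch below illustrates one way to encode that guard; the threshold, noise band, and metric are invented.

```python
SECURITY_THRESHOLD = 30  # incidents/month below which a district is "ready"
NOISE_BAND = 5           # typical month-to-month fluctuation (illustrative)

def threshold_crossed(prev: float, curr: float) -> bool:
    """Report a crossing only when the metric passes the threshold AND
    the move exceeds normal noise, signalling a real change in conditions."""
    crossed = prev >= SECURITY_THRESHOLD > curr
    significant = (prev - curr) >= NOISE_BAND
    return crossed and significant

print(threshold_crossed(33, 29))  # False: crossed, but within normal noise
print(threshold_crossed(38, 28))  # True: crossed and clearly significant
```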
Article Nineteen: Avoid Substituting Anecdotes for Analysis. Anecdotes are a useful component of assessments when used properly. Unfortunately, they are often used as substitutes for a solid assessment. The best rule to keep in mind when using anecdotes is that they are generally the starting point for analysis, not the closing argument of an assessment.
Article Twenty: Use Survey Data Effectively. Questions of motivation, satisfaction, degrees of trust or fear, and intentions regarding future actions are difficult to measure by observing behavior alone. Often, we must capture this information through interviews or broader surveys. This article addresses how to manage some of the major concerns associated with using survey data in assessments.
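A recurring concern is overreading small movements in survey results. The sketch below computes the textbook margin of error for a reported proportion under simple random sampling; field surveys typically use cluster designs that widen this further, so treat the figure as a floor. All numbers are illustrative.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for proportion p from a simple random sample
    of size n; cluster sampling, common in the field, widens this."""
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.62, 800                 # e.g., 62% "feel safe" from 800 respondents
moe = margin_of_error(p, n)
print(f"{p:.0%} +/- {moe:.1%}")  # -> 62% +/- 3.4%
# A quarter-on-quarter shift smaller than this margin should not be
# briefed as a trend.
```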