Small Wars Journal

How to Measure Insurgencies

Wed, 09/12/2007 - 7:47pm
By J. Eli Margolis

Earlier this week, America's top two officials in Iraq testified before Congress about the war. Ambassador Crocker described slow but sure progress; General Petraeus spoke more strongly, citing goals met and "substantial" progress.

I was surprised. After a steady public debate over stalemate and withdrawal, the pair put forward recommendations to remain. The disconnect between how America sees Iraq and how our two most knowledgeable professionals see it is striking.

Why?

I believe that the answer lies in measures. Media reports and independent assessments like the Brookings Institution's "Iraq Index" have opened the floodgates on statistics. Analyses abound. But, as a recent Salon piece demonstrates, not all have been disciplined. Indeed, the public discourse has abandoned methodology entirely.

In an unusual move, however, Gen. Petraeus took time during his testimony to assure Congress that the military has not. It uses, he said, "a methodology that has been in place for well over a year" to ensure "rigor and consistency" in its analyses. Then he called in a second opinion: "Two U.S. intelligence agencies recently reviewed our methodology and they concluded that the data we produce is the most accurate and authoritative in Iraq."

What is this methodology? Or, more broadly, how do we measure insurgencies?

To answer that question, I began to rummage around, uncovering a number of studies outlining insightful conceptual approaches. They hardly agree. But, taken together, they highlight five important principles.

First is the firm assertion that there are no magic numbers—not troops deployed, not dollars spent, not total number of insurgent attacks. As one of West Point's "Irregular Warfare Messages of the Month" notes bluntly, "trying to reduce success or failure to one or two criteria is risky if not irresponsible." Instead, suggests Craig Cohen of the U.S. Institute of Peace, it is better "to devise an aggregate index of indicators." With measures, more may not always be better, but a handful will always be too few.
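To make the idea of an aggregate index concrete, here is a minimal sketch of how such an index might be computed. It is purely illustrative; the indicator names, weights, and normalization are hypothetical and are not drawn from any methodology cited in this article.

```python
# Illustrative sketch only: a hypothetical aggregate index of indicators.
# Indicator names, weights, and scales are invented for demonstration.

# Each indicator is normalized to a 0-1 scale (1 = better security picture).
indicators = {
    "civilian_casualty_trend": 0.40,        # lower casualties -> higher score
    "public_confidence_poll": 0.55,         # share expressing confidence in local government
    "insurgent_finance_disruption": 0.30,   # estimated share of financing interdicted
    "essential_services_restored": 0.60,    # electricity, water, medical access
}

# Hypothetical weights reflecting relative importance, summing to 1.0.
weights = {
    "civilian_casualty_trend": 0.35,
    "public_confidence_poll": 0.30,
    "insurgent_finance_disruption": 0.20,
    "essential_services_restored": 0.15,
}

# Combine many measures into a single composite score.
aggregate_index = sum(indicators[name] * weights[name] for name in indicators)
print(f"Aggregate index: {aggregate_index:.2f}")
```

The point is not the arithmetic but the structure: no single number drives the result, and the weights force analysts to state explicitly how much each indicator matters.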

Second, analysts need a framework that attaches meaning to each metric. As James Clancy and Chuck Crossett explain in one of the Army's leading journals, different officials too often find different meaning in the same numbers because they have no common reference. To one, falling casualties may be good news. But to another, they may signal decreasing patrols, a possible indicator of heightened instability. The Army's Douglas Jones phrases it simply: "it is only through agreement of definitions and a common framework of insurgency that applying measures of effectiveness to counterinsurgency operations becomes useful." Without a framework, a pile of statistics can be made to fit almost any position.

Third, measures must be important, not just convenient. Counting heads at a graduation parade is far easier than measuring public opinion in a war zone or tracking insurgent financing. But it is a poorer measure of effectiveness. As Frederick W. Kagan notes in the Armed Forces Journal, such tallies of casualties, attacks, and trained locals "are measures of convenience, reflecting the ease with which data can be collected and presented rather than its inherent importance." Honest assessment begins with honest data, even if it is difficult or dangerous to collect.

Fourth, outputs are more important than inputs. Measuring inputs like total dollars spent or the number of bases constructed gauges effort, not effectiveness. As Craig Cohen notes, progress should not be "judged in large part on the basis of international resources expended or programs implemented rather than on the basis of actual results produced." In some ways, this is related to the problem of convenience; analysts can track coalition actions much more readily than their effects. But it is the effects—not efforts—that ultimately matter most.

Fifth—and perhaps most important—is the recognition that the strategy must determine the metrics. The two must be tied. If one campaign goal is to disrupt insurgent operations, for instance, a count of local cell phones would be little more than a statistical distraction. In their approaches, researchers from USIP, the Rand Corporation, the Johns Hopkins University, the Brookings Institution, and the Army's Command and General Staff College all follow this principle. They start high and move down the ladder—from strategy to goals, from goals to measures, and from measures to specific metrics. As in a chain of command, each metric reports to a goal, and that goal back up to the strategy. This approach both highlights needed metrics and removes unneeded metrics—the cell phone counts of some government fact sheets.
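A minimal sketch of that ladder, using invented goals and metrics, might look like the following. The structure is the point: every metric must report to a goal, and every goal to the strategy, so orphaned measures, such as a count of cell phones, are dropped.

```python
# Illustrative sketch: tie metrics to goals and goals to strategy.
# All names below are hypothetical examples, not an actual campaign plan.

strategy = {
    "name": "Protect the population and extend government legitimacy",
    "goals": {
        "disrupt insurgent operations": [
            "insurgent attacks per week in contested districts",
            "average time between tip and interdiction",
        ],
        "restore public confidence": [
            "share of residents reporting they feel safe at night",
            "civilian intelligence tips received per month",
        ],
    },
}

candidate_metrics = [
    "insurgent attacks per week in contested districts",
    "local cell phones in service",          # convenient, but tied to no goal
    "share of residents reporting they feel safe at night",
]

# Keep only metrics that report to some goal, which in turn reports to the strategy.
linked = {metric for metrics in strategy["goals"].values() for metric in metrics}
kept = [m for m in candidate_metrics if m in linked]
dropped = [m for m in candidate_metrics if m not in linked]

print("Kept:", kept)
print("Dropped as statistical distractions:", dropped)
```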

So, before the testimony, how did the measures in America's public discourse hold up next to these principles? In a word, poorly.

Media reports were misleading. Major newspapers continue to announce casualties and troop levels daily, encouraging a "magic number" mindset. The Washington Post's series "Weighing the Surge" cites inputs like dollars of oil revenue spent or the number of Baghdad security outposts, and convenient counts like the number of market stalls opened, Iraqis trained, or barrels of oil produced. A graphic published the day before the report is a perfect example. And a recent overview in the New York Times presents an array of measures without any mention of the author's insightful framework.

But the media follows the lead of others. As an intern with a government agency working to rebuild Iraq, I saw firsthand the random, and always positive, fact sheets that once circulated through Washington. Today's Congressional benchmarks are only slightly better. The legislative requirements, for instance, are measures of convenience; in such a corrupt place, laws are cheap markers of government effectiveness and social change. Further, at least five of the eighteen benchmarks measure inputs, not outputs. Nor are they tied to strategy. As last Wednesday's GAO report notes, they were derived not from methodical assessment, but from public statements made by severely pressured politicians.

Applying these principles to Iraq, however, calls for some reflection. The principles apply to insurgencies; the war in Iraq is more than an insurgency. Among others, tribal warlords, political opportunists, criminal networks, foreign intelligence services, and terrorist organizations complicate the picture considerably. Further, these troubles with measures do not necessarily discredit the broad conclusions of public discourse. Plainly, Iraq remains stubbornly unstable and violent.

But the guidelines outlined above can help, and not just with Iraq. The United States faces an international environment simmering with active and possible insurgencies. It is a challenge that will not go away. Bringing America's public measures closer to its professional ones may help us develop a clearer consensus, ensuring that our conflicts abroad do not also become conflicts at home.

J. Eli Margolis is an MA candidate in the Security Studies Program at Georgetown University's School of Foreign Service.
