Opinion polls in Afghanistan cover issues as diverse as the legitimacy of the Karzai regime, the use of suicide attacks, the appropriateness of female education, and the adequacy of the health system. Above all else, by providing insights into local perceptions, this research method helps the international community understand why the Taliban has been in the ascendancy in recent years. The importance of opinion polling is widely recognized, and prominent funders include security institutions (NATO), government departments (USAID, DfID), and development organizations (numerous NGOs). Given their varied informational requirements and budgets, this research may involve anywhere from a few dozen respondents in a specific community to several thousand individuals from across the country.
A number of widely read polls conducted by media organizations also bring the findings to broader audiences. For instance, a survey conducted by ABC, BBC and Germany’s ARD in 2009 indicated that only six percent of Afghans ‘favored a Taliban administration.’[i] Important insights are also gained through comparative findings over time or between subpopulations divided by age, gender, income, occupation, ethnicity, tribe, and geography. For example, drawing from the same poll the BBC asserted that ‘68% now backed the presence of US troops in Afghanistan, compared to 63% a year ago.’ Similarly, in 2011 the Asia Foundation maintained that ‘more than four fifths of respondents say the availability of education for children is good or very good in their local area in Central / Kabul (82%) and Central / Hazarajat (82%) regions’, whereas ‘this percentage decreases to 76% of respondents in the North East, 74% in the North West, 73% in the East, 70% in the South East, 69% in the West, and just over half of respondents in the South West.’[ii]
However, the results of polling tend to polarize opinion, with either too much or too little confidence commonly being placed in the findings. The critics have recently been in the ascendancy following the publication of an unfavorable article by Graeme Smith in the Globe and Mail.[iii] Smith’s emphasis was primarily on the extent to which interviewees sacrifice truth for responses that may be viewed favorably by others, a phenomenon known to social scientists as ‘social desirability bias.’[iv] For non-specialists to judge the validity of findings from this controversial research method, they first need a genuine understanding of the impact of this troubling phenomenon.
In a country where individuals are particularly suspicious of the motives of strangers asking questions, it is not difficult to comprehend the causes of social desirability bias. Observers commonly assert that fear of repercussions for a ‘wrong answer’ may cause interviewees to adapt their replies, either to favor the Taliban and other insurgent groups or the Afghan Government and the U.S.-led forces. More mundanely, some claim an inherent bias towards pro-government responses: believing that these investigations are undertaken only by institutions loyal to the state, individuals simply tell researchers what they want to hear.
The most common retort to these concerns is that the findings often place the Afghan Government and the coalition in a negative light, thus supposedly providing evidence of reliability. For instance, responding to accusations of a pro-coalition bias in the ABC data, Gary Langer from Langer Research Associates maintains that:
These assertions are hard to square with our actual results. Fifty-nine percent of Afghans rate the performance of the United States in Afghanistan negatively, nearly twice as many as did so in 2005. Sixty-two percent rate the performance of NATO forces negatively. Thirty-six percent directly blame Western forces for civilian casualties; 61 percent blame either Western forces mainly or Western and anti-government forces equally. … This is pro-coalition bias?[v]
However, to imply that evidence of a pro-coalition bias can be identified simply by considering the findings is misleading. Even if ninety-nine percent of respondents claimed to be hostile towards the coalition, there would still be a bias of this nature if the remaining one percent falsely maintained that they were supportive. Langer’s argument also overlooks the converse pressure (an anti-coalition bias), undoubtedly driven in some locations by the fear of insurgent retaliation.
Naysayers also commonly assert that the interviewees manipulate their responses with the objective of obtaining benefits for their community, on the basis that donors are more likely to supply water, electricity, and education facilities where shortages are deemed to be the most severe. The most common response to this charge is equally disingenuous. Specifically, it is observed that in certain locations the findings indicate a perception that economic conditions or the availability of goods and services have improved over time, and this is cited as evidence of data reliability. Drawing again from Langer:
If respondents colored their responses to benefit their village, presumably they’d have tried to draw greater assistance by saying things are in terrible shape and getting worse. In fact we see better ratings of local conditions. Large numbers of Afghans report roads and clinics built or repaired and electricity supplied. Forty percent [in 2009] give a positive rating to the availability of jobs and economic opportunities – still far from ideal, but up from 26 percent in 2007.[vi]
Again, it is not possible to determine the proportion of respondents providing dishonest replies through focusing only upon the results, irrespective of whether the findings improve, worsen or remain the same. Put another way, knowing that forty percent from the 2009 sample offered positive answers about economic matters tells us nothing about how many of the remainder lied.
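The identifiability problem described above can be made concrete with a small numerical sketch. All figures here are hypothetical (they are not drawn from any of the polls cited): the point is simply that two very different combinations of true opinion and misreporting can produce exactly the same published percentage, so the headline number alone cannot reveal how many respondents lied.

```python
# Hypothetical illustration: the observed share of 'positive' answers
# cannot distinguish honest reporting from socially desirable lying.

def observed_positive(true_positive, false_positive_rate):
    """Observed share answering 'positive'.

    true_positive: true share holding the positive view (0-1)
    false_positive_rate: share of true negatives who falsely
        give the 'safe' positive answer (0-1)
    """
    return true_positive + (1 - true_positive) * false_positive_rate

# Scenario A: 40% are truly positive and everyone answers honestly.
a = observed_positive(0.40, 0.0)

# Scenario B: only 25% are truly positive, but a fifth of the
# negatives misreport out of fear or strategic motives.
b = observed_positive(0.25, 0.20)

# Both scenarios yield the same published figure of 40%.
print(round(a, 2), round(b, 2))  # → 0.4 0.4
```

The two scenarios are observationally equivalent: a pollster who sees only the 40% figure has no way to tell them apart, which is precisely why citing the results themselves cannot settle the question of bias.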
Rather than denying the impact of social desirability bias and possible attempts to manipulate the research, objective pollsters and policy-makers reliant upon this data must conclude that these phenomena create an unquantifiable distortion, and that it is not possible to have absolute faith in the results. As suggested by a British Government source quoted in the Smith article, however, a degree of confidence may be retained in comparative conclusions. In other words, findings such as “53 percent of residents from District X support Karzai” should be given less credence than relative conclusions like “support for Karzai has increased in District X over the previous year” or “the residents from District X are more supportive of Karzai than those from District Y.” Perhaps more importantly, surveys remain a blunt instrument through which to develop an understanding of opinions, and it remains necessary for those operating in this environment to place greater value upon the more nuanced insights delivered by qualitative research.
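The greater reliability of comparative conclusions rests on a simple statistical observation: if the distortion affects all waves of a survey in roughly the same direction and magnitude, it inflates or deflates the reported levels but cancels out when one wave is subtracted from another. The sketch below uses invented numbers and assumes a constant bias, which is the key (and contestable) assumption — where fear of retaliation waxes and wanes between survey waves, the bias is not constant and even the trend is distorted.

```python
# Hypothetical illustration: a constant reporting bias distorts the
# absolute level of support but leaves the year-on-year change intact.

def observed(true_share, bias):
    # bias: net share of answers pushed towards 'support' in every
    # wave (an assumed, constant distortion for this sketch)
    return true_share + bias

true_2008, true_2009 = 0.55, 0.60   # hypothetical true support levels
bias = 0.08                          # hypothetical constant distortion

obs_2008 = observed(true_2008, bias)
obs_2009 = observed(true_2009, bias)

# Both published levels are wrong by eight points...
print(round(obs_2008, 2), round(obs_2009, 2))  # → 0.63 0.68

# ...but the trend (a five-point rise) is recovered exactly.
trend = obs_2009 - obs_2008
assert abs(trend - (true_2009 - true_2008)) < 1e-9
```

The same logic applies to comparisons between districts surveyed with the same instrument: a distortion shared by both cancels in the difference, which is why "District X is more supportive than District Y" deserves more credence than either absolute figure on its own.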
[iii] Graeme Smith, “Many in Afghanistan Fear Looming Disaster as Canada Withdraws”, The Globe and Mail, available at http://www.theglobeandmail.com/news/world/many-in-kandahar-fear-looming-disaster-as-canada-withdraws/article2092248/page1/
[iv] Whilst beyond the scope of this paper, other common concerns with perception survey data in Afghanistan include the extent to which the researchers are able to draw information from genuinely representative samples of the populace, and evidence that in certain cases the data has been faked.
[v] Gary Langer, “Polling in Afghanistan: An Antidote to Anecdote”, ABC News, 14 January 2010, available at http://blogs.abcnews.com/thenumbers/2010/01/polling-in-afghanistan-an-antidote-to-anecdote.html