
HASC Assessment Of The Human Terrain System

09.30.2009 at 01:42pm

In House Report 111-166 – NATIONAL DEFENSE AUTHORIZATION ACT FOR FISCAL YEAR 2010, the House Armed Services Committee (HASC) has directed “the Secretary of Defense to conduct an independent assessment of the Human Terrain System, and submit to the congressional defense committees a report detailing that assessment by March 1, 2010.”

The reasons why such an assessment is being required are spelled out quite clearly:

The committee is aware of anecdotal evidence indicating the benefits of the program supporting operations in the Republic of Iraq and the Islamic Republic of Afghanistan. The committee also notes that a number of press accounts provide anecdotal evidence indicating problems with management and resourcing. The committee finds it difficult to evaluate either set of information in the absence of reliable, empirical data.

As someone who has followed the Human Terrain System since it first started being discussed, I am glad to see that such an assessment will be taking place. Indeed, I called for this type of assessment last year (here and here).

Elements of the Review

The committee goes on to outline seven key elements they want examined:

  1. An overview of all of the components of HTS, including related technology development efforts;
  2. The adequacy of the management structure for HTS;
  3. The metrics used to evaluate each of the components of HTS;
  4. The adequacy of human resourcing and recruiting efforts, including the implications of converting some contractor positions to government positions;
  5. An identification of skills that are not resident in government or military positions, and how the Army can leverage academic networks or contracting opportunities to fill those gaps;
  6. An identification of policy or regulatory issues hindering program execution; and
  7. The potential to integrate HTS capabilities into existing exercises.

On its face, this list appears to be quite encompassing, and it is certainly a good starting point. Let us consider each one of these in turn.

(1) An overview of all of the components of HTS, including related technology development efforts;

This is the initial “mapping” of the system defining what is and is not to be included as a property of the system. Well worth doing, although I would hope that the “components” considered include those that are relevant to but not under the direct control of the HTS.

(2) The adequacy of the management structure for HTS;

I find the use of the term “adequacy” interesting, as well as the focus on “management structure” rather than personnel. One of the oft-repeated negative claims relates to management personnel and the development of a “toxic culture” inside the program, so I would hope that whoever conducts the assessment will consider more than the formal structure of management.

As I mentioned, the use of the term “adequacy” is quite interesting. For one thing, it implies that the management structure only needs to be minimally functional as measured against a particular set of criteria. But what are those criteria? They are not stated, although they might be inferred from the earlier reference to anecdotal support for the program. Another possibility is that they will be measured against some supposedly “objective” standard of structure within TRADOC – a set of “best practices” that concerns formal organizational design while excluding informal organizational practice. I would hope that the latter would not be the case.

(3) The metrics used to evaluate each of the components of HTS;

This is one of the requirements I have some serious concerns about. “Metrics”, in the sense of quantitative measurements, are useful if and only if they are significant indicators of the concepts and factors they attempt to measure. This is relatively simple and straightforward in areas dealing with the physical world, such as measuring marksmanship or building a bridge. It is, however, much more difficult when it comes to looking at things in the social world.

For example,

The HTS Mission is to provide commanders in the field with relevant socio-cultural understanding necessary to meet their operational requirements.

Key terms here are “relevant”, “understanding” and “operational”. First off, how is “relevance” defined? This is simple when it comes to the type of cultural “knowledge” at the “Don’t wear shoes in the mosque” level, but it becomes increasingly difficult to define once one gets to more abstract levels.

What is meant by “understanding”? Normally, at least in many of the social sciences, this term is used in the sense of “empathic understanding” (verstehen), after Dilthey and Weber’s usage. This is an extremely subjective concept that is very difficult to quantify.

What is meant by “operational”? Normally, at least in the context of US military discourse, this isn’t a problematic term. However, it can easily be noted that the tactical is the strategic in counter-insurgency operations; in effect, corporals can influence the strategic direction of any COIN operation. Given this, and it is certainly a point or set of assumptions that appears to have been embraced by GEN McChrystal in his recent assessment of the Afghan situation, it might be argued that one of a commander’s operational requirements is cultural awareness at the squad level, something that the HTTs cannot currently provide. If that were set as one of the criteria for a “metric” evaluating the HTS, the program would necessarily receive a down check.

At a slightly more abstract level, how would causation be “measured”? Is it enough to use a subjective evaluation from a BCT commander that their “cultural understanding” has increased during the tenure of an HTT? Even if this happened, was it because of the HTT? How could the effects of the HTT on the commander’s cultural awareness be separated out from those of general learning by experience or self-directed reading? As one can see, attempting to apply a metric in this situation is fraught with difficulties.

(4) The adequacy of human resourcing and recruiting efforts, including the implications of converting some contractor positions to government positions;

Once again we see the word “adequacy”, and once again it is interesting. I certainly agree that the move from contractor status to government status is an issue that needs to be examined, but, in my opinion, it is one that should be examined separately from recruitment, except where there are specific structural overlaps (such as security clearance issues).

(5) An identification of skills that are not resident in government or military positions, and how the Army can leverage academic networks or contracting opportunities to fill those gaps;

This is the element I have the greatest difficulty with. First of all, the core analytic capabilities required for the HTTs are not really amenable to being treated as “skills”; they should be seen as “competencies” instead. As a note, a “skill” in this sense is a set of actions that has a fairly precise outcome, may be objectively measured, and has a directly measurable relationship to a desired outcome (e.g. marksmanship), while a “competency” is a cluster of skills combined with an attitude, a set of perceptions and, most importantly, culturally specific rules for the deployment of the skill set (see Locating Competence, Jacobi and McNamara, 1999). At the minimum, these competencies will need to be defined, something that, in my opinion, is not currently done in a manner that has significant predictive value.

The second concern I have with this element is that it looks at a recruiting issue – “leverage academic networks or contracting opportunities to fill those gaps” – without requiring an assessment of the issues that hinder such recruitment. Or, to put it a touch more strongly, the element ignores the extremely strong anti-HTS backlash inside academia. Now, some might assume that this is just an academic complaining about being ignored. Well, I may be an academic, but I also know a lot about recruiting and the types of candidates who are most likely to be recruited in such conditions. At the minimum, whoever conducts this assessment will need to examine the structural and cultural interactions between the Academy and the HTS including, but not limited to, an analysis of ethical and professional concerns.

(6) An identification of policy or regulatory issues hindering program execution; and

On the surface, I have no difficulties with this. I will note, however, that the assessment will, if conducted properly, involve the identification of a significant number of issues.

(7) The potential to integrate HTS capabilities into existing exercises.

Again, on the surface, I have no difficulties with this. I do, however, foresee many problems in getting the capabilities integrated into exercises if the term “exercises” is left wide open.

Concluding remarks

I would like to make several concluding remarks about this assessment. First of all, I would hope that at least one Anthropologist is included in the team that conducts it. I consider this to be a minimum requirement since, quite frankly, most people with a background in management are completely unable to understand what we do and how we do it. That, by the way, is not a “sour grapes” comment – it is the result of having worked with management people – in academia, government and private industry – for almost 20 years. There is also a very pragmatic reason: without an Anthropologist involved in the assessment, that assessment will have limited or no credibility within the academic community, and that will affect future perceptions of the HTS as well as future recruiting.

A second point I want to make is that the timeline for this assessment, March 1st, 2010, is very tight. I believe that, in order for this assessment to be of real value to all stakeholders, it will require a lot of interview work, which is quite time consuming. I am concerned that the tight timeline may prove to be a stumbling block.

Finally, I have been told that TRADOC itself “believes this audit to be a positive development” and that it will “give a good hard look on what we need to improve and will offer insights into how we might be more effective at rapidly standing up new programs in the future”. Given TRADOC’s recent moves to become more transparent and, especially, those conducted here at the SWJ (e.g. here and here), I have no doubt that these comments are sincere.
