Small Wars Journal

FM 3-24, Social Science, and Security

Tue, 05/20/2014 - 9:33pm

Adam Elkus

In his review of the updated FM 3-24, Bing West has some harsh words about the manual’s academic tint: “[t]he COIN FM is harmful because it teaches war as sociology.” Charles J. Dunlap is also unimpressed, characterizing it as a mishmash of warfighting content and material targeted to “northeastern graduate students.” West and Dunlap’s remarks suggest FM 3-24 belongs in a social science faculty lounge instead of a war room. Recently, former Secretary of Defense Robert Gates voiced similar sentiments about those whom he believes treat war like a “science.”

West, Dunlap, and Gates’ frustrations are not without merit. The history of the social sciences’ entanglement with the military and intelligence community is undeniably troubled. But reading West and Dunlap’s critiques of FM 3-24 raises the question of whether a better and more productive relationship is possible.

As a social scientist-in-training, I write this essay to offer some constructive general suggestions about how social scientists and national security practitioners can best collaborate.

The Correlates of War

The relationship between American social science and national security dates back to World War II and hit a high point during the Cold War.  The social sciences cut their teeth on the toughest political-military problems, and government patronage was also a key ingredient in the growth of the modern social sciences. This relationship abruptly changed during the Vietnam era. Military practitioners came away feeling (with some justification) that academics had overpromised and underdelivered. Moreover, the culture of the American university became more hostile to national security concerns as campus protests pushed the military-industrial complex away.

The end of the Cold War dramatically reduced government patronage of the social sciences. Certainly the spigot wasn’t completely turned off, as evidenced by government-funded ventures like the Political Instability Task Force. But compared to the feast of Cold War social science, the fall of the Soviet Union led to famine.

Without a Cold War-style mobilization of intellectual resources, academics are ultimately bound by disciplinary incentives to produce work for other academics -- hence the “gap” between policy and academia. The Global War on Terror partially reversed these trends. The government called on academics who utilized social scientific methodologies ranging from cultural analysis to computational number-crunching. Did all of this reduce war to academic hair-splitting?

In a rare feat of social scientific deference to a historian, I will leave that question to historians of military and intellectual matters like my friend Nick Prime. Instead, I will suggest some pointers for productive collaboration between social scientists and their government counterparts in the security sector. This is by no means an exhaustive list of suggestions, only those that do not seem to have been discussed very much in dialogues about social science, academia, and security.

Eating Soup With a PhD?

1. The employment of social scientists instead of national security practitioners must be justified in each and every use case. Unfortunately, the question of whether social scientists can add value to national security is complex enough to be a social scientific field of study in and of itself.

Government, like any other area of life, is a question of how to allocate limited resources. There are many reasons why a national security practitioner might be a better choice than a social scientist. In the world of security practice, a PhD is not guaranteed to grant intellectual insight superior to that of a battle-tested Marine. Tacit knowledge, military training, experience, and socialization into a particular security context are valuable beyond measure, and these are all things academics generally lack.

This doesn’t mean that experience is everything. The famous game theorist Bruce Bueno de Mesquita is an effective consultant because he extensively elicits practitioner knowledge and plugs it into his models. Bueno de Mesquita’s combination of academic chops and solicitation of subject matter expert input makes him a favorite among government and industry clients. That said, without expert inputs for his models Bueno de Mesquita would likely be much less successful.
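
To make this concrete, here is a toy sketch of the general approach -- not Bueno de Mesquita’s actual proprietary model -- in which experts supply each actor’s policy position, capability, and salience, and a naive baseline forecast is the weighted mean of positions. The actors and numbers below are invented for illustration.

```python
# A toy sketch of expert-elicited policy forecasting, loosely in the spirit of
# Bueno de Mesquita's approach (NOT his actual proprietary model). Subject
# matter experts supply, for each actor, a policy position on a 0-100 scale,
# a capability score, and a salience (how much the actor cares). A naive
# baseline forecast is the capability-and-salience-weighted mean position.

from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    position: float    # policy position on an agreed 0-100 issue scale
    capability: float  # relative resources/influence, elicited from experts
    salience: float    # 0-1, how much capability the actor will spend on this issue

def weighted_mean_forecast(actors: list[Actor]) -> float:
    """Capability-and-salience-weighted mean of actor positions."""
    weights = [a.capability * a.salience for a in actors]
    return sum(w * a.position for w, a in zip(weights, actors)) / sum(weights)

# Hypothetical elicited inputs, for illustration only.
actors = [
    Actor("Hardliners", position=90, capability=0.4, salience=0.9),
    Actor("Moderates", position=40, capability=0.5, salience=0.6),
    Actor("Outside mediator", position=20, capability=0.3, salience=0.3),
]

print(f"Baseline forecast position: {weighted_mean_forecast(actors):.1f}")
```

The sketch also shows why elicitation matters so much: the output is only as good as the positions, capabilities, and saliences the subject matter experts provide.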

The answer to “should I get a social science academic?” should always be “it depends.” A variety of contextual considerations will decide whether or not academics will provide valuable applications. Powerful defense innovations like amphibious warfare did not require anyone with a doctorate in political science or sociology. In contrast, economists made a significant contribution to the World War II war effort, and an anthropologist produced a pioneering postwar study of Japanese culture.

Ultimately there are no easy answers to this question. The anthropologist-staffed Human Terrain System’s troubles show that even application areas (cultural knowledge) that intuitively seem geared towards social scientists can prove problematic in practice. What worked for Ruth Benedict didn’t work out very well in Afghanistan. Conversely, quantitative modelers utilizing an operations research methodology called search theory helped solve key problems in the highly technical and domain-specific area of anti-submarine warfare. Today, researchers in economics and statistics use search theory to help combat improvised explosive devices.
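
A flavor of what those modelers contributed: B.O. Koopman’s random-search formula, a staple of the WWII anti-submarine work, gives the probability of detecting a target as a simple function of sensor sweep width, search speed, time, and area. A minimal sketch, with invented numbers:

```python
# B.O. Koopman's random-search formula from WWII anti-submarine operations
# research. The probability of detecting a target uniformly distributed in
# area A, after searching for time t at speed v with sensor sweep width W, is
#     P(detect) = 1 - exp(-W * v * t / A)
# The numbers below are made up for illustration.

import math

def random_search_detection_prob(sweep_width_km: float,
                                 speed_kmh: float,
                                 hours: float,
                                 area_km2: float) -> float:
    coverage = sweep_width_km * speed_kmh * hours / area_km2
    return 1.0 - math.exp(-coverage)

# e.g. a 10 km sweep width, 300 km/h patrol aircraft, 8-hour sortie,
# 50,000 km^2 search box
p = random_search_detection_prob(10, 300, 8, 50_000)
print(f"Detection probability: {p:.2f}")  # ~0.38
```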

Context, the nature of the task, and professional judgment will ultimately decide whether or not consulting a social scientist is necessary to help solve a national security problem.

2. Basic academic research is useful even if it does not yield an immediate applied payoff.

Policymakers and journalists often lament that political scientists and their compatriots prefer jargon, mathematical abstractions, and parsimonious theories to ideas that are of clear use to policy concerns. It’s true that most of what is printed in The American Political Science Review has little to no practical utility or relevance. But that’s also the point.

The core mission of the spy is to gather intelligence, the core mission of the soldier is to fight the nation’s enemies, and the core mission of the social science academic is to contribute to knowledge. Many academics would not have suffered through the toil of a doctoral program if they did not value knowledge for knowledge’s sake. Hence, to call their research abstract and impractical is to miss the point entirely.

Also, without basic research there can be no practical applications. This is clear in the “hard” sciences, where absent-minded physicists and mathematicians made discoveries that led to powerful military and civilian applications. Cumulative knowledge about the world is what ultimately paves the way for useful tools. By making practical application the sole criterion of social scientific research value, government consumers paradoxically ensure that they will never get the best research applications.

Basic research that challenges the assumptions of policy also can be highly valuable.  The idea that monopoly of force is what makes a state, first professed a century ago by the social scientist Max Weber, is now extremely controversial. One of the more depressing tragedies of American nation-building efforts is how much blood and treasure was lost pursuing an objective (host nation monopoly of force) that may not have been necessary or possible to begin with. Had policymakers paid attention to the research, perhaps they might have arrived at a different course of action.

Sure, most basic research is probably not relevant to day-to-day practitioner needs. But practitioners should not completely discount the possibility that dry, jargon-filled journal articles might be of value.

3. Academics studying war and conflict must engage with domain-specific research and knowledge that practitioners utilize.

One of the great barriers to greater cooperation between social scientists and the military is the existence of two parallel sources of knowledge regarding war and conflict. 

In the purely academic sphere, linkages between the volumes of literature produced by strategy scholars, military historians, and security practitioners and those of mainstream social scientists are few and far between. In particular, Infinity Journal’s A.E. Stahl argues that subfields of mainstream political science that study international conflict often neglect the conduct of war. Social science academics also frequently ignore or misunderstand technical details of war, weapons, and how the US military-industrial complex functions.

There are prominent exceptions to this generalization. Bear Braumoeller, author of one of the most sophisticated and widely acclaimed new works on international security, engages and responds to Clausewitz. One can agree or disagree with Stephen Biddle’s methodologically dense work Military Power while nonetheless acknowledging that Biddle read the military history and considered technical problems often ignored by international security research.

At the end of the day, academics are unlikely to have the vast command of military-technical detail that practitioners do. Nor will disciplinary divides between American social science and military history and/or war studies be bridged tomorrow. But social science academics would undoubtedly benefit from familiarizing themselves with both the tools of the trade relevant to practitioners and the literature that matters most in war college panel discussions and national security forums.

4. The concept of “policy relevance” is not the only viable model of academia-policy collaboration.

Academics and practitioners both share a flawed image of how they should relate to each other: the notion of “policy relevance.” In theory, the academic publishes some peer-reviewed social science research that is relevant to a government concern. In return for making jargon-free yet rigorous research that is of value to the government consumer, the academic is rewarded with attention and support from Uncle Sam. Everyone is happy, right? There is one big flaw with this model. What an academic considers “relevant” may not be so relevant to the practitioner, and what a practitioner considers relevant may be unpalatable to the academic.

Peer-reviewed academic research also competes with thousands of think-tank reports, op-eds, and white papers tailored specifically to the desires of policy. Unless academics try to mimic think-tanks and advocacy groups, what the Ivory Tower considers relevant often won’t be relevant enough to Langley or the Pentagon. A timely think-tank report condensed to a memo or a PowerPoint will always beat a thick but “relevant” journal article. Both sides of the divide also have different expectations about what “policy relevant” research constitutes – and these expectations aren’t easy to resolve.

This doesn’t mean that we ought to throw the policy relevance concept out the window. But policy relevance should be qualified and contrasted with an alternative model: applied projects and partnerships that match social scientific methodology and subject matter expertise to government need. The government benefits from research and analysis that is tailored to its unique problems. And in turn, academics get opportunities to demonstrate the utility of their ideas and methods while also field-testing them on difficult challenges.

My frequent co-author John P. Sullivan has recently begun a productive partnership with the CREATE Institute at the University of Southern California (USC). Sullivan and CREATE utilize game theory and multi-agent simulation to develop more robust police deployments. Sullivan gains by working with academics on solutions to help make Los Angeles safer. USC academics, in turn, have an applied environment in which to further develop their theory of security games.
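
For readers unfamiliar with security games, here is a stripped-down sketch of the core idea (the deployed CREATE systems solve far richer Stackelberg games, typically with linear programming): the defender commits to randomized coverage of targets, and the optimal mix minimizes the expected payoff of a best-responding attacker. All payoffs below are invented.

```python
# A stripped-down sketch of the "security games" idea. The defender
# randomizes one patrol unit over two targets; a best-responding attacker
# hits whichever target offers the higher expected payoff. A crude grid
# search stands in for the linear program used in practice.

def attacker_eu(cov, payoff_if_covered, payoff_if_uncovered):
    """Attacker's expected utility against coverage probability `cov`."""
    return cov * payoff_if_covered + (1 - cov) * payoff_if_uncovered

# Attacker payoffs (if covered, if uncovered) for two hypothetical targets.
targets = {
    "airport": (-5.0, 10.0),
    "harbor":  (-2.0, 6.0),
}

# One resource split c / (1 - c): find the coverage that minimizes the
# attacker's best-response payoff.
best = None
for i in range(101):
    c = i / 100
    worst_case = max(
        attacker_eu(c, *targets["airport"]),
        attacker_eu(1 - c, *targets["harbor"]),
    )
    if best is None or worst_case < best[1]:
        best = (c, worst_case)

print(f"Cover airport with prob {best[0]:.2f}, harbor with {1 - best[0]:.2f}; "
      f"attacker's best payoff {best[1]:.2f}")
```

The practical payoff of such models is randomized, unpredictable deployment schedules rather than fixed patrols -- which is exactly the kind of result that translates into police practice.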

There are a variety of possible models of policy-academia cooperation, and “policy relevance” is only one of them. We do ourselves a disservice by making it the exemplar.

5. Government-funded social science research efforts require careful consideration and evaluation.

There are countless laments about how government and social scientists have drifted apart. Most of these ignore the plethora of existing government programs, like the Minerva Initiative, that fund social science research on national security matters. Researchers like Marc Sageman who have consulted on national security matters have a different complaint: the applied research is useless:

Well-meaning research based on social-network analysis, data-farming, agent-based modeling, Bayesian networks, and other kinds of simulations flourished for a few years. The hope was that those cutting-edge tools would anticipate the tactics of the enemy, but they failed to deliver on their promise. What the government did not support was the methodical accumulation of detailed and comprehensive data. As one official once said to me, “Why should I fund the collection of publicly available and free information?”

Is this a fair characterization and assessment of government-funded research? At a very minimum, Sageman’s complaints deserve attention. Before more taxpayer money is funneled to applied research, the government and academics ought to first ponder successes, failures, and areas for improvement in current research initiatives. This can only help make future research more useful to national security policy and practice.

Some research applications are also vastly easier to evaluate than others. Determining the causal relationship between a social scientist’s new data mining algorithm and an increase in enemy fatalities is not easy, but it is far simpler than evaluating long-run research efforts that may not yield tangible payoffs for some time or that have indirect (at best) impacts on national security.

However, evaluations and critiques should also avoid the Washington habit of taking research out of context and misleadingly holding it up as an example of frivolous spending. Too often, politicians misunderstand the nature and dynamics of academic research and insult academics who, like anthropologist Scott Atran, dedicate enormous amounts of time to national security applications.

Taxpayers obviously deserve the best value for their money. But academics who seek to help the nation also deserve respectful consideration. The fact that research is sometimes difficult to evaluate does not justify holding it up for mockery.

6. Don’t dismiss the complex ethical dilemmas that social scientists face when doing national security work.

In his new history, Mark Lilla enumerates the misadventures of intellectuals in politics. Academic involvement in “politics by other means” poses just as many – if not more – ethical problems.

Certainly some academics will never agree with American foreign policy and will routinely turn down any opportunity to work with the military or intelligence community. But for others the question is more complex. One does not need to be Noam Chomsky to see the problematic elements of political scientist Samuel Huntington’s involvement in the Vietnam-era Strategic Hamlet program. Unfortunately, academic involvement can lead to harm to Americans, allies, and foreign civilians.

Social science also thrives on openness, communication, and debate, but national security work by definition is highly secretive. For many academics, the secrecy and compartmentalization of the national security world runs contrary to basic norms of scholarship and accountability. In particular, social science today emphasizes the importance of replication and reproducibility – which classification makes difficult, if not impossible.

Lastly, academics may face harsh professional consequences for national security work, but most would agree that they aren’t likely to be shot at or blown up. This powerful asymmetry in consequences is a frequent source of practitioner resentment when it goes unacknowledged.

Ethical dilemmas are not impossible to cope with. But they are also omnipresent in any academic collaboration with the security state.

War and Social Science: A Better Future?

Bing West is right. War isn’t sociology. It’s violent, crude, and frighteningly unpredictable. And this sad reality, along with the many differences between academia and the world of practice, will inevitably produce tension among social scientists, soldiers, and other national security practitioners.

The nation has benefited in the past from fruitful collaboration between social scientists and national security practitioners. Potential for equally productive future collaboration exists.

About the Author(s)

Adam Elkus is a PhD student in Computational Social Science at George Mason University. He has published articles on defense, international security, and technology at Small Wars Journal, CTOVision, The Atlantic, the West Point Combating Terrorism Center’s Sentinel, and Foreign Policy.

Comments

Move Forward

Thu, 05/22/2014 - 4:16pm

In reply to by Outlaw 09

Outlaw,

Dean Cheng, in his “Winning Without Fighting: Chinese Legal Warfare” article, primarily discusses legal warfare. The question is how any of the three “warfares” defined below would enable China to seize Taiwan or the Senkakus.

It might help China drill for oil and continue fishing, but neither of those is a military problem. China, if anything, has only further alienated itself in the area of “public opinion” and media coverage in its actions to date. “Psychological warfare” may influence weak neighbors but is unlikely to influence U.S., Japanese, or Korean military or civil leaders, or drive neighbors into getting closer to China. Finally, any “legal warfare” basis for claiming areas far from China’s shores is questionable at best and highly unlikely to meet with success or acquiescence from those affected.

http://www.heritage.org/research/reports/2012/05/winning-without-fighti…

<blockquote>1. Public opinion/media warfare is the struggle to gain dominance over the venue for implementing psychological and legal warfare. It is seen as a stand-alone form of warfare or conflict, as it may occur independent of whether there is an actual outbreak of hostilities. Indeed, it is perhaps best seen as a constant, ongoing activity, aimed at long-term influence of perceptions and attitudes. One of the main tools of public opinion/media warfare is the news media, including both domestic and foreign entities. The focus of public opinion/media warfare is not limited to the press, however; it involves all of the instruments that inform and influence public opinion (e.g., movies, television programs, and books).

2. Psychological warfare provides the underpinning for both public opinion/media warfare and legal warfare. With regard to the PLA, psychological warfare involves disrupting the enemy’s decision-making capacity by sapping their will, arousing anti-war sentiments (and therefore eroding the perception of popular support), and causing an opponent to second-guess himself—all while defending against an opponent’s attempts to conduct similar operations.

3. Legal warfare is one of the key instruments of psychological and public opinion/media warfare. It raises doubts among adversary and neutral military and civilian authorities, as well as the broader population, about the legality of adversary actions, thereby diminishing political will and support—and potentially retarding military activity. It also provides material for public opinion/media warfare. Legal warfare does not occur on its own; rather, it is part of the larger military or public opinion/media warfare campaign.</blockquote>

If that’s the best China’s got, they aren’t a threat. Their primary assets are not their aircraft or naval ships but their long-range missiles, which pale in comparison to the tonnage the U.S. dropped from aircraft in the Afghanistan and Iraq wars. Historians or a PhD candidate would do well to contrast the limited capabilities of 1,600 missiles with the countless tons of ordnance dropped and fired as indirect fire in past wars.

Once its missiles are gone, having done only moderate damage, China becomes relatively powerless, which may explain its desire for a short, sharp war, if any. China’s navy could not hold off our subs or stealth aircraft firing standoff missiles at ships. The PLA would have great difficulty getting to, holding, and sustaining any terrain seized in aggressive actions while simultaneously enduring insurgent and U.S. aircraft attacks on Taiwan. Russia, likewise, would need conventional armor for any major seizure of Ukraine or NATO nations. The NTC is designed for training and is a worst-case training event where the OPFOR has every advantage. The real-world threats are far less capable than advertised.

Cyber threats are another matter, but cyber war carried too far by any nation would be grounds for heavy conventional and cyber retaliatory strikes and sanctions. A major EMP blast at altitude would risk violation of MAD deterrence (as would U.S. conventional attempts to <strong>deeply</strong> penetrate Chinese airspace), not to mention risking destruction of China’s major trading partner’s capacity and desire to buy their goods.

Outlaw 09

Thu, 05/22/2014 - 2:10am

In reply to by Jagatai

Jagatai---
Every time one participates in a DATE exercise, one is in fact exercising COIN, to include against a "near peer".

That was the theory behind the new DATE scenarios, and they have been run for a number of years at both the JRTC and the NTC.

The problem is that the JRTC and NTC DATE scenarios in no way address the new Russian doctrine of New-Generation Warfare, which is deeply steeped in a UW strategy tied to political warfare and is at the core of their actions against Ukraine.

Nor do they address the new Chinese Three Warfares strategy.

Move Forward

Wed, 05/21/2014 - 9:34pm

In reply to by Jagatai

Jagatai and Aelkus,

Have you seen this RAND study, and do you believe it is an acceptable level of science melded with history? It has the expanded "subset in time and breadth" that you mentioned, but I have no idea if it was used or considered authoritative.

http://www.rand.org/pubs/research_reports/RR291z1.html

That said, I believe it is extremely dangerous to use history to try to predict how future wars will be fought. I'm somewhat of an expert on several OEF battles yet would not begin to assert that they offer across-the-board COIN lessons for future battlefields in another time and place.

Even in the same place at different times, you could contrast the Soviet venture down the Ghaki valley with heavy armor in the '80s to the U.S. attempt to do the same with less armor in Operation Strong Eagle (google "Vanguards of Valor"), and you would find many differences. Now go back centuries to the British or Genghis Khan eras without airpower (lethal/lift/ISR), vehicles, body armor, communications, night vision, and modern rifles, machine guns, and RPGs, and the outcomes and tools employed are completely different. Now introduce different geography, combatants, and cultures and tell me you can historically prove that one size fits all.

I could be wrong in saying that the changes between the old and new manuals are not substantial. Also, you can go online to the Army Publishing Directorate and get a slightly different take from FM 3-24.2, Tactics in Counterinsurgency, from 2009. It is more detailed, at 300 pages. I could not see that it has been rescinded based on this new manual, nor would one believe it should be. It and the original FM 3-24 use Clear-Hold-Build, while the new mantra is Shape-Clear-Hold-Build-Transition. FM 3-24.2 also has a larger, better-explained section on Culture, Intelligence, and Planning that does not use the confusing Operational Design.

The Shape aspect of the new doctrine partially answers those like Charles Dunlap on the airpower and technology front, insofar as U.S. UAS and other intelligence, SF, and USAID capabilities could shape and preclude the other stages without extensive U.S. boots on the ground. The new Transition "stage" is the process of transferring military and other responsibilities completely to the host nation, if my read is correct. Just sounds like common sense to me.

The extent of the "build" stage is one controversial aspect, somewhat tied into the lines of effort. If you believe in those, guess you gotta accept the "build" while simultaneously wondering if the civil agencies would be there next time. The "clear" part is not automatically assumed, at least according to retired Maj Gen Dunlap. The extent of "clear" required is probably another aspect worth modeling (I'm not a modeler). The same modeling might apply to the techniques used to "hold," be it a few FOBs and commuting to war (OK, I'm biased), lots of population-centric COPs and patrolling, or Dunlap's idea of the U.S. fighting from an adjacent sanctuary and Green Zones with airfields.

Add variations of population-centric vs. threat-centric approaches, the SF Afghan Local Police, and the raiding/counterterror-only approaches, and you get an idea of how hard it would be to prove anything with real units, since the real thing took years while training center rotations last weeks. Nor do I think the NTC or JRTC would be an acceptable one-site-fits-all test. One big debate few agree on is the level of lethality used to Clear and Hold vs. the claimed veracity of the COIN contradictions. Soldiers/Marines at Ganjgal surely would argue that Gen McChrystal's ROE was too restrictive, while others would say the less restrictive ROE of the parent unit at Wanat is what led to that attack.

Who knows if they used modeling and simulation. I wager that all kinds of collaborative SME input was involved, along with committee wordsmithing that does not always make for an easy read. Add the complexity of two services and service terms like the Army's Mission Command and Information Collection and the Joint terms Command and Control and ISR. It isn't easy, and the results are extensively staffed, with changes on this manual probably recommended at very high levels.

aelkus

Wed, 05/21/2014 - 7:23pm

In reply to by Jagatai

"From the data scientist perspective I would suggest that there are several thousand years of human history in regards to insurgencies that we can draw from. 3-24 identifies Laos, Anbar, Sri Lanka, Vietnam, Afghanistan, Peru, Philippines, and El Salvador as the primary source materials. A fairly small subset in time and breadth, which is worrisome.

We can make some pretty good assumptions about a wide range of variables in these conflicts: numbers of soldiers and the population as well as rough distributions, structure of the systems and society, lethality, disease, forced migration, rebuilding costs and other expenditures, etc. The list goes on. How well does this methodology fit these examples, and have they used any statistical modeling or simulations to throw out bad ideas and keep the good?"

While I can't speak to what has gone on within TRADOC (I'm a pointy-headed outsider), some thoughts.

First, in my (minor) experience doing statistical analysis of war data, some major problems exist. A lot of open-source data is extremely poor. Take a look, for example, at the Correlates of War codebooks and you'll find a lot of problems in how the data was coded and collected that create a lot of uncertainty. These problems are multiplied when you merge datasets like COW with other datasets like POLITY or Angus Maddison's economic history statistics. Additionally, there are also some gigantic research design issues in drawing conclusions from long-term historical data. Taleb's response to Pinker's recent book goes into some of them: http://www.fooledbyrandomness.com/longpeace.pdf
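
To make the merge problem concrete, here is a minimal sketch with invented mini-datasets standing in for COW's material capabilities data and the POLITY scores (the real coding and coverage issues are far messier): mismatched state identifiers and year coverage silently drop country-years unless you audit the merge.

```python
# A minimal illustration of the country-year merge problem, with invented
# mini-datasets standing in for COW and POLITY (the real datasets are far
# larger and messier; column names follow the real codebooks but the values
# here are made up). Mismatched identifiers and coverage silently drop
# country-years unless the merge is audited.

import pandas as pd

cow = pd.DataFrame({
    "ccode": [2, 365, 710], "year": [1950, 1950, 1950],
    "milex": [14_500, 15_510, 2_270],   # military expenditure (invented)
})
polity = pd.DataFrame({
    "ccode": [2, 365], "year": [1950, 1950],
    "polity2": [10, -7],                # regime score (invented)
})

merged = cow.merge(polity, on=["ccode", "year"], how="left", indicator=True)
print(merged)
print("Unmatched country-years:", (merged["_merge"] == "left_only").sum())
```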

That being said, these problems are not insurmountable. There are advanced statistical techniques that can help with problems like this. Unfortunately, as Schrodt noted recently in his critique of quantitative political science, we often prefer junk statistics to more powerful methodologies that could deal with very tough problems. The quantitative insurgency, political violence, and civil wars literature within comparative politics has also done a lot of historical work that goes beyond much of the discussion we commonly have about COIN. And I fully agree with Jagatai that we have a far, far too limited set of cases to review. For the sake of future soldiers and academics, preserving GWOT-era combat data is EXTREMELY important. This is a task that demands the utmost attention; otherwise it will take a very long time (akin to the time it took Daddis to write his recent history of MACV's performance metrics) for us to think about this.

Second, while I know that DoD has a lot of modeling and simulation, being on the outside makes me wonder if much of it involves newer techniques, like agent-based modeling, that could get at more fine-grained interaction. My PhD program's founder, Joshua Epstein, for example, developed an agent-based model of insurgency and political violence using the MASON simulation environment. If you model the system more traditionally (a Monte Carlo model or a series of differential equations) you lose some of the more fine-grained elements that come from a population of interacting AI agents. ABMs are also more generalized and abstract than even more fine-grained simulations built to scale.
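
For readers who haven't seen it, the decision rule at the heart of Epstein's civil violence model is strikingly simple. Below is a bare-bones Python rendering of that rule -- not the MASON implementation -- where the constant k = 2.3 follows the published paper (Epstein 2002, PNAS) and the other parameter values are illustrative.

```python
# The core agent decision rule from Epstein's civil-violence model
# (Epstein 2002, PNAS). Each agent's grievance is hardship * (1 - legitimacy);
# it rebels when grievance minus perceived net risk exceeds a threshold. The
# constant K = 2.3 follows the paper; other values are illustrative.

import math
import random

LEGITIMACY = 0.8   # regime legitimacy, global parameter in [0, 1]
THRESHOLD = 0.1    # activation threshold T
K = 2.3            # arrest-probability constant from the original paper

def decides_to_rebel(hardship, risk_aversion, cops_in_vision, actives_in_vision):
    grievance = hardship * (1 - LEGITIMACY)
    # Estimated arrest probability rises with the local cop/active ratio.
    arrest_prob = 1 - math.exp(-K * cops_in_vision / max(1, actives_in_vision))
    net_risk = risk_aversion * arrest_prob
    return grievance - net_risk > THRESHOLD

# One illustrative draw: heterogeneous agents facing the same local situation.
random.seed(1)
agents = [(random.random(), random.random()) for _ in range(1000)]
rebels = sum(decides_to_rebel(h, r, cops_in_vision=2, actives_in_vision=10)
             for h, r in agents)
print(f"{rebels} of 1000 agents turn active")
```

Even this toy version shows the fine-grained, locally contingent behavior that a differential-equation model averages away: identical agents in different neighborhoods make different choices.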

But SWJ commenters actually doing this stuff for the government would be in a better place to speak to what the state of the art currently is.

Jagatai

I will caveat this response by saying I do not have a deep understanding of how TRADOC reviews and produces these FMs. My only experience with them was in training and from my conversations with other officers.

I would naively suggest that any research and published doctrine should be the result of collaboration between subject matter experts, i.e. the military, and the research capabilities of academia -- very much along the lines of the author’s discussion of Bruce Bueno de Mesquita. Without that minimum level of rigor, can you even consider the document anything more than an internal SOP?

To reach the level of doctrine, I would expect it to be not only a collaborative effort but a rigorously tested one at that. Did they subject this updated theory for countering insurgencies to significant academic AND field testing to prove that it is better than what we had before?

From the data scientist perspective I would suggest that there are several thousand years of human history in regards to insurgencies that we can draw from. 3-24 identifies Laos, Anbar, Sri Lanka, Vietnam, Afghanistan, Peru, Philippines, and El Salvador as the primary source materials. A fairly small subset in time and breadth, which is worrisome.

We can make some pretty good assumptions about a wide range of variables in these conflicts: numbers of soldiers and the population as well as rough distributions, structure of the systems and society, lethality, disease, forced migration, rebuilding costs and other expenditures, etc. The list goes on. How well does this methodology fit these examples, and have they used any statistical modeling or simulations to throw out bad ideas and keep the good?

More importantly, did they war game the FM’s recommendations against these historical scenarios and against future scenarios? I’m talking basic MDMP map games as well as large-scale simulations using active military personnel trying to implement these methods against a reacting enemy. Did they have a random sample of different unit staffs reorganize along the lines suggested to test the approach?

Finally, have they sent two brigades (or even two battalions) to NTC to test the doctrine against a simulated insurgency? One would use methods and thinking from the old FM, the other tactics derived from several months with the new doctrine. The scenario designers should be segregated from the doctrine writers to ensure a fair control.
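
Even the statistics for such a test need not be fancy. A permutation test on per-engagement scores -- sketched below with invented numbers -- makes no distributional assumptions and suits the small samples a rotation would yield.

```python
# A sketch of how engagement-level outcomes from a two-unit NTC test could be
# compared: a simple permutation test, which makes no distributional
# assumptions. The scores below are invented placeholders for whatever metric
# the scenario designers actually score (objectives held, casualties, etc.).

import random

old_fm_scores = [62, 55, 71, 58, 60, 66, 52, 64]  # per-engagement, old doctrine
new_fm_scores = [70, 68, 59, 74, 73, 65, 69, 77]  # per-engagement, new doctrine

observed = (sum(new_fm_scores) / len(new_fm_scores)
            - sum(old_fm_scores) / len(old_fm_scores))

pooled = old_fm_scores + new_fm_scores
n_new = len(new_fm_scores)
random.seed(42)

# Shuffle the pooled scores repeatedly; count how often a random split
# produces a difference at least as large as the one observed.
more_extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = (sum(pooled[:n_new]) / n_new
            - sum(pooled[n_new:]) / (len(pooled) - n_new))
    if diff >= observed:
        more_extreme += 1

print(f"Observed difference: {observed:.1f}; one-sided p ~ {more_extreme / trials:.3f}")
```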

My guess, based on my own experience in the military, is no. There was likely only cursory war gaming and field exercises designed to “validate” the new FM.

Wow, a little long winded! Suffice it to say, I don’t think the problem is centered on an imbalance between the authors, military subject matter experts versus academics, or even the target audience. It’s the lack of rigor that goes into testing the theories proposed in our doctrine. Dreaming up and discussing a cohesive strategy for such and such is a nice first step. The next step is to rigorously evaluate the different options against controls. Doing so would go a long way toward convincing practitioners that this is more than a collection of good ideas.

We have an incredible capability for modeling, war gaming, and field testing that I believe is going to waste. Just ask J.C. Penney what happens when you make major alterations without rigorously testing your ideas.

ganulv

Thu, 05/22/2014 - 10:25am

In reply to by aelkus

<strong>That being said, academics, like everyone else, follow the money.</strong>

I would be interested in any studies looking at what it takes to get tenure-track scholars to change career course. In my experience, very few people outside of academia understand how different a professional environment it is from most. Just getting on the job market requires spending some of the otherwise prime earning years of one’s life doing graduate study and earning very little or negative money. Few get the opportunity to start working towards tenure, and the six-year courtship that is the life of an assistant professor is essentially graduate school with decent pay and more pressure. In the end, someone who is successful in earning tenure has spent 4–10 years in graduate school, often 1–2 years as a postdoc, and 6 years as an assistant professor. The ROI for those years is job security and greater professional freedom and intellectual stimulation than in most careers. Government service offers a perhaps comparable degree of job security and (probably, depending on the discipline) somewhat better pay, but without the intellectual stimulation or professional freedom. The private sector pays a whole lot better, but the job security isn’t there. And once someone has left a tenured position s/he can’t just step back into a new one. A new round of academic courtship at the assistant professor level would be involved.

Anyway, what I am trying to say is that if we expect tenure track academics to follow the money we need to be realistic. My suspicion is that it takes <em>a lot</em> more money than it does to lure someone between or within other types of private and public sector positions.

aelkus

Wed, 05/21/2014 - 7:28pm

In reply to by ganulv

Yup, that's true -- as I said in the piece, if the government does not make it worth the academic's while, it is hard to see them going against disciplinary incentives.

That being said, academics, like everyone else, follow the money. During the Cold War, when massive grants were on the line working for the government was not as much of an opportunity cost as it is today. I doubt that anything close to the Cold War research model will re-occur without Cold War threats. But we can certainly do better than we do today -- particularly in area studies.

<blockquote>Peer-reviewed academic research also competes with thousands of think-tank reports, op-eds, and white papers tailored <em>specifically</em> to the desires of policy. Unless academics try to mimic think-tanks and advocacy groups, what the Ivory Tower considers relevant often won’t be relevant enough to Langley or the Pentagon. A timely think-tank report condensed to a memo or a PowerPoint will always beat a thick but “relevant” journal article.</blockquote>

In terms of tenure and promotion points, think-tank reports and white papers carry much less weight than peer-reviewed publications. And op-eds count as service, at best. A <i>Policy Papers</i> and a <i>Media</i> section look nice in a tenure file. But a couple of peer-reviewed articles published over the first six years of a scholar’s career are almost certainly going to count for more than a half dozen white papers and multiple op-eds. Since professors at all ranks are almost certainly working c. 60-hour weeks,* they need to be realistic about where they put their time if they want to stay in the business. This is an unfortunate and unavoidable fact of life in the ever tighter job market that is academia.

* http://www.insidehighered.com/news/2014/04/09/research-shows-professors…