by Dan Ward
Military acquisition programs take too long to deliver.
Good luck finding anyone willing to publicly disagree with that statement. But despite the near uniformity of opinion on this topic, the defense acquisition complex consistently fails to answer two related questions:
- How long should acquisition programs take?
- How long do acquisition programs take?
The first question is admittedly difficult to answer in the abstract. An IT program and a jet fighter probably don’t belong on the same timeline.
Or maybe they do.
No doubt we could see some pretty interesting things in both categories in 9 months. But I doubt we’ll get a lot of people to sign up to that sort of timeline, particularly among those who build big projects like aircraft, tanks or ships. Given the wide range of technical genres encompassed by the term “defense acquisitions,” it probably makes sense to spend more time building some things than others.
Even though there is not necessarily a single answer to question #1, we do have at least the beginnings of an answer. For starters, according to a 2006 Government Accountability Office (GAO) report, the DoD itself says we shouldn’t spend more than 5 years building anything. So there’s that, but it’s not much to go on. Five years is w-a-a-a-y shorter than we’ve spent on many high-profile systems, but it’s also a pretty long time for other types of technology.
My personal answer to question #1 is “Half the time.” That’s admittedly still on the abstract side, but hang with me for a second.
First, let me explain that by the phrase “my personal answer” I mean an answer based on the considered, studied opinion provided in 1986 by the Packard Commission, which said it “is possible to cut this cycle in half.” (And that’s back when the cycle time was a lot shorter than it is now.)
That finding was corroborated 12 years later by a survey Ross McNutt performed as part of his 1998 PhD research at MIT, which also concluded a 50% reduction was both possible and desirable. The consensus seems to be that long timelines are not inevitable attributes of defense acquisition programs. Doing it faster - a lot faster - is entirely possible.
It sounds crazy and counter-intuitive, I know, but the answer is pretty consistent: acquisition programs should take half the time. If I thought it would help, I’d include a couple more references of subsequent studies that made similar observations and recommendations (there are several), but those two are probably sufficient to make the point. Anything more would be overkill.
I have to wonder what would happen if we tried to cut timelines in half on any meaningful scale (we haven’t). It’d be interesting to see how that would work out, don’t you think?
On that note, you can draw your own conclusions from the fact that we haven’t ever (e-v-e-r) tried to cut acquisition cycle times in half across the board, even though several sources have said such a move is feasible.
Interestingly, we’ve drastically cut development time on many individual projects, which supports the aforementioned assertion that long timelines aren’t inevitable. What we haven’t done is made the effort on a strategic, acquisition-wide level.
Maybe there’s a better answer out there, a more nuanced and comprehensive answer than the admittedly blunt instrument of “half the time,” but the combination of clarity and credentials inherent in this answer is pretty hard to beat.
OK, time to move on to question #2, the “does” question. How long does a typical program actually take?
Nobody knows. Seriously, we have no idea how much time we spend, because no one is collecting the data.
We used to keep track of this sort of thing, as shown in the following graph from a 12 June 98 report by the Defense Science Board, attributed to Daniel Czelusniak:
Feel free to ask a search engine of your choice for an updated version of that chart - but I’m pretty sure you’re gonna find nothing.
Now, I know better than to try to prove a negative, so let me acknowledge an updated chart may be out there somewhere. If so, it’s exceedingly well hidden. I read the internet until my face fell off and couldn’t find it anywhere.
Turns out I’m not the only one without access to this information. In a speech on 6 Feb 2012, the top DoD acquisition guy, Undersecretary of Defense for Acquisition, Technology and Logistics Frank Kendall, acknowledged he doesn’t have an answer to the question “Are we doing better or worse than we were 10 years ago?” If anyone should be able to immediately put eyeballs on that data, it’s Mr. Kendall.
The absence of readily available up-to-date information is an even bigger problem than the troublesome upward trend depicted in the chart, although that trend is indeed a major bummer.
It’s bad enough the average cycle time was so darn long and kept getting longer, despite multiple assertions that it should be cut in half... but now we don’t even know if we’re doing any better or worse. That makes it sort of difficult to substantiate this article’s opening claim, however obviously valid it may be.
Look, this data should be hard to miss, not hard to find. It should be plastered everywhere, not hypothetically hidden in some esoteric metrics management office. A key attribute of this kind of data is its findability, which directly determines its utility and value. If it isn’t front and center, immediately locatable by anyone who wants to see it, it might as well not exist in the first place for all the good it’s doing us.
But I’m pretty sure it’s not hidden. I think it flat out hasn’t been collected. This data should exist. It doesn’t. I’d love for someone to prove me wrong on that. Honestly, I would. On the other hand, I’d hate to think someone put all this data together and didn’t share it with Mr. Kendall. That would be embarrassing.
OK, in the rare instances where we can get someone to even consider measuring today’s acquisition timelines, we end up getting wrapped around the axle debating when to start the clock and when to stop, arguing about what constitutes “a program” and a jillion other bits of important irrelevance (ask me how I know this). And that’s a primary reason we don’t know how long today’s acquisition programs take: there is no broad consensus on how to define such a thing, let alone a desire to measure it.
You’d think we would have figured that stuff out by now. We apparently used to know, then we decided to un-figure it out, preferring instead to spend time arguing endlessly about stuff that doesn’t matter and doesn’t help.
When should we start the clock - at milestone A, B, I, II or some other point? When should we stop - at the Initial Operational Capability, Full Operational Capability, or some other point? How should we define “a program?”
Doesn’t matter. Not even a little.
Yes, these questions need answers, but almost any answer will do. Just pick one, be consistent with the measurements, then watch the trend. It really is that simple. Are some answers better and more useful than others? Of course. But any answer is better than none, and none is what we have today.
This agnostic approach makes sense to me, but based on a horde of actual debates I’ve witnessed, heard about and tried not to get engaged in, a lot of supposed experts fiercely disagree with me (and each other, of course). They’d rather fight to the death against anyone who thinks milestone B is (or isn’t) the right place to calculate the start of a program than see consistent data collected using a definition which varies slightly from their preferred perspective.
Sigh. That’s not helpful. Not helpful at all.
More than a decade ago, someone tried to tackle the acquisition cycle time problem. You can find some of the results in an Audit Report by the DoD Inspector General, dated 28 Dec 2001.
The report explains “the typical acquisition effort of the 1960’s required 7 years for completion. A review of MDAPs [Major Defense Acquisition Programs], using the 1996 SARs [Selected Acquisition Reports], found that major system required 11 years (132 months) to progress from program start to initial operational capability. In 1998... DoD established the goal of delivering new MDAPs to the field in 25 percent less time...”
So far, so good. Someone made measurements and set an improvement goal. Why they didn’t aim for a full 50% reduction is unclear (chickens!), but even a 25% reduction is a step in the right direction. I’ll take it.
How’d things go? Well, the report explains the “USD (AT&L) computed an average cycle time of 96.9 months for 48 MDAPS...” which means the DoD beat the goal by more than two months.
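For what it’s worth, the arithmetic here checks out: a 25 percent reduction from the 1996 baseline of 132 months yields a goal of 99 months, which the reported 96.9-month average beats by roughly two months. A quick sketch of that calculation (figures taken from the IG report quoted above):

```python
# Cycle-time figures from the DoD IG report cited above
baseline_months = 132    # 1996 SARs: average from program start to IOC
goal_reduction = 0.25    # 1998 DoD goal: deliver in 25 percent less time
reported_average = 96.9  # USD(AT&L) computed average for 48 MDAPs

goal_months = baseline_months * (1 - goal_reduction)  # 99.0 months
margin = goal_months - reported_average               # months under the goal

print(f"Goal: {goal_months:.1f} months")
print(f"Reported average: {reported_average} months")
print(f"Beat the goal by {margin:.1f} months")
```

Of course, as the next paragraph makes clear, the underlying data feeding that 96.9 figure is the real question, not the subtraction.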
Hooray! We set a goal and beat the goal! Good news, right? Well, don’t pop the champagne just yet.
The report goes on to say “We could not verify whether DoD met the... goal because the database... omitted programs and contained discrepancies. As a result, the average cycle time goal stated in the FY 2000 Annual Report of the Secretary of Defense may not be accurate.” (emphasis added)
Let’s summarize the situation: they collected some data, but it was not reliable or accurate data. It was a try, but if we judge the attempt on its results, it’s fair to say it was not a very good try. At this point, the obvious question is: Where’s the follow-up?
Surely, someone said: “Alright people, time to have another go. Please clean up the data, deal with the discrepancies and omissions, then re-crunch the numbers so we can find out whether we met the goal.” I mean, they made the initial effort, why stop there? Why not take another pass and this time do it right?
If there was a second attempt, it didn’t leave much of an evidence trail. Unfortunately, that’s typical of these things. There’s a big push to do something, an ineffective outcome, a final report that says “Um, we didn’t quite get there...” And then, silence.
Even if they pulled off a Top Secret do-over, there’s no evidence of actual improvements, because more than ten years down the road, we are still reading reports from the Defense Business Board with lines like this: “Major new programs take too long to bring to the field and are too expensive.”
Or this line from the Harvard Business School’s 2011 assessment of reform efforts from 1960 to 2009: “The problems of schedule slippages, cost growth, and technical performance shortfalls on defense acquisition programs have remained much the same throughout this period.”
It’s almost as if we’re not really trying, isn’t it?
OK, let’s change directions a bit. The whole question of average acquisition cycle times is interesting, but well outside most people’s circle of influence. For military technologists and acquisition practitioners, the average cycle time across the DoD matters less - far less - than whether or not our specific project is on schedule, right?
Here’s the thing - although we need to avoid optimizing a part at the expense of the whole, if we do a lot of individual projects faster, then the aggregate average just might take care of itself. So let’s talk about time as it applies to a particular, individual project. Are there things we could do to help make sure we deliver on time?
What if we decided it was important to deliver a project on time, then aligned our metrics and actions in such a way that they supported that goal? That’s one of those things that sounds obvious but is seldom done in actual practice. Instead, at the first sight of a problem, we tend to tack on a schedule extension (and ask for more money).
What if we fought tooth and nail to prevent delays, instead of treating schedule extensions like a best practice for problem solving?
And by “fought tooth and nail” I mean insisting on focused simplicity in our requirements, avoiding over-reaching and over-engineering, and steering clear of the dreaded “rebaseline” approach (which tends to be a euphemism for “add time and money”).
What if we incentivized and rewarded early delivery... without simultaneously insisting that unrealistic and unnecessary breakthroughs occur on a predictable schedule? Again, providing such incentives isn’t terribly hard to do. Finding programs that do it, on the other hand, is much more difficult.
There are a million ways to encourage pursuit of a desirable outcome. It’s just a matter of deciding which outcomes we really desire. We’re actually quite good at meeting goals, particularly when they are appropriate goals unencumbered with other, mutually-contradictory goals. The trick is to set the right goals in the first place.
Look, if our goal is to build a four-sided triangle, I promise we’re going to end up with a rectangle every dang time, no matter how often we rebaseline and process-improve. That extra side doesn’t make the triangle better. It makes it something other than a triangle.
And lest there be any ambiguity, let me also say the goal isn’t to do a better job of tracking how much time we spend on acquisition programs, those earlier comments about measurements and metrics notwithstanding. The goal is to actually spend less time - maybe even 50% less time - on our acquisition programs. Collecting and examining real data should help us achieve that goal, but don’t confuse the measurement with the achievement.
In practical terms, one way to bring acquisition times down is to use the schedule to constrain the design - which is what the GAO frequently recommends, for what it’s worth. Those GAO ninjas just might be onto something.
Here’s how that would work: Instead of the government telling the contractor “Here’s a huge list of everything I want the system to do. How long will it take and how much will it cost me?” (accompanied by not-so-subtle elbow nudges and winks and whispers of “take as much time as you need”), we should instead reverse that and say “Here’s how much money and time I’ve got, how close can we get to the capability objectives without exceeding those amounts?” (accompanied by a steely-eyed, square-jawed countenance that says “Not one day or one dime more”).
As threats change, technologies mature, and additional capabilities become both necessary and available, we would then integrate them into future blocks and upgrades... or into some future system... or maybe not build them at all. How can I justify that final suggestion? Well, it turns out a lot of our supposed requirements aren’t truly required. The more restraint we can exercise over extraneous desirements, the better, both in terms of timeliness and operational performance.
The good news is most defense acquisition programs can be done in half the time. That’s also the bad news, because it means we’re falling short of the ideal. At least, we seem to be falling short. Until we start collecting the data and making it available to people like Mr. Kendall, we don’t know how far off target we really are. If we decide to collect this data, we’ve got to actually collect it and not be satisfied with a big stack of discrepancies and omissions.
If we’re serious about wasting less time on acquisition systems, we really should put a little effort into asking two key questions: how long should acquisition programs take and how long do they take? The answers just might help us push forward with principles and practices designed to reduce how long warfighters have to wait for new gear.