Small Wars Journal

Designing Unmanned Systems for Military Use: Harnessing Artificial Intelligence to Provide Augmented Intelligence

George Galdorisi

Submitted in collaboration with TRADOC's Mad Scientist Continuum: Visualizing Multi-Domain Battle - 2030-2050.

Executive Summary

One of the most rapidly growing areas of innovative technology adoption involves unmanned systems. The U.S. military’s use of these systems—especially armed unmanned systems—is not only changing the face of modern warfare, but is also altering the process of decision-making in combat operations. These systems are evolving rapidly to deliver enhanced capability to the warfighter in the multi-domain battle space. However, there are increasing concerns regarding the degree of autonomy these systems—especially armed unmanned systems—should have. Until these issues are addressed, military unmanned systems may not reach their full potential.

One of the operational and technical challenges of fielding even more capable unmanned systems is the rising cost of military manpower, one of the fastest growing military accounts and the biggest cost driver in the total ownership cost (TOC) of all military systems. Because of this, the U.S. military has sought to increase the autonomy of unmanned military systems in order to drive down TOC.

As military unmanned systems have become more autonomous, concerns have surfaced regarding a potential “dark side” of having armed unmanned systems make life-or-death decisions. Some of these concerns emerge from popular culture, such as movies like 2001: A Space Odyssey, Her, and Ex Machina. Whether these movies are far-fetched is not the point; what matters is that the ethical concerns regarding employing armed unmanned systems are being raised in national and international media.

While the DoD has issued guidance regarding operator control of unmanned vehicles, rapid advances in artificial intelligence (AI) have exacerbated concerns that the military might lose control of armed unmanned systems. The challenge for unmanned systems designers is to provide the military not with completely autonomous systems, but with systems whose augmented intelligence provides the operator with enhanced warfighting effectiveness.

The DoD can use the experience of the automotive industry and driverless cars to help accelerate the development of augmented intelligence for military unmanned systems. As testing of driverless vehicles has progressed, and as safety and ethical considerations have emerged, carmakers have tempered their zeal to produce completely autonomous vehicles and have looked to produce cars with augmented intelligence to assist the driver. Harnessing AI to provide warfighters employing unmanned systems with augmented intelligence—vice fully autonomous vehicles—may hold the key to overcoming the ethical concerns that currently limit the potential of military unmanned systems.

Perspective

One of the most rapidly growing areas of innovative technology adoption involves unmanned systems. In the past several decades, the military’s use of unmanned systems in all domains—ground, air, surface, and subsurface—has increased dramatically.  The expanding use of armed unmanned systems is not only changing the face of modern warfare, but is also altering the process of decision-making in combat operations in the multi-domain battle space.  These systems have been used extensively in the conflicts in Iraq and Afghanistan, and will continue to be equally relevant—if not more so—in the wars in the 2030-2050 timeframe.

As military unmanned systems have become more autonomous, ethical concerns have surfaced regarding a potential “dark side” of having armed unmanned systems make life-or-death decisions. While the Department of Defense has doctrine regarding the need to have humans in-the-loop when lethal unmanned systems are employed, the concomitant desire to prevail over an enemy operating at “machine speed” makes it imperative that the U.S. military use unmanned systems to their maximum advantage. Designing and building unmanned systems warfighters can employ in the battlespace is a task that will require intense collaboration among policymakers, the military, industry, and other stakeholders.

The Plan for Military Autonomous Systems

The U.S. Department of Defense’s vision for unmanned systems is to integrate these systems into the Joint Force and to employ them extensively in the multi-domain battle space for a number of reasons. These include: reducing the risk to human life in high threat areas, delivering persistent surveillance over areas of interest, and providing options to warfighters that derive from the inherent advantages of unmanned technologies—especially their ability to operate autonomously.

Because unmanned systems are used by all the military Services, the Department of Defense publishes a comprehensive roadmap to provide an overarching vision for the military’s use of unmanned systems. The most recent roadmap, the FY 2013-2038 Unmanned Systems Integrated Roadmap, singled out the need for enhanced autonomy in unmanned systems (UxS), noting, “DoD envisions unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision-making required for the unmanned portion of the force structure.”[1]

As the Unmanned Systems Integrated Roadmap notes, unmanned systems are especially important assets in those areas where the U.S. military faces a peer competitor with robust defenses. The Joint Operational Access Concept identifies “unmanned systems, which could loiter to provide intelligence collection or fires in the objective area,” as a key capability for countering area-denial threats.[2]

The Third Offset Strategy

The Department of Defense has initiated a “Third Offset Strategy” to ensure that the United States can prevail in the future multi-domain battlespace. Rather than competing head-to-head in an area where a potential adversary may also possess significant strength, an offset strategy seeks to shift the axis of competition, through the introduction of new operational concepts and technologies, toward one in which the United States has a significant and sustainable advantage.

The United States was successful in pursuing two distinct offset strategies during the Cold War. These strategies enabled the U.S. to “offset” the Soviet Union’s numerical advantage in conventional forces without pursuing the enormous investments in forward-deployed forces that would have been required to provide overmatch soldier-for-soldier and tank-for-tank. These offset strategies relied on fundamental innovation in technology, operational approaches, and organizational structure to compensate for Soviet advantage in time, space, and force size.

Today, competitors such as Russia and China (and countries to which these nations proliferate advanced capabilities) are pursuing and deploying advanced weapons and capabilities that demonstrate many of the same technological strengths that have traditionally provided the high-tech basis for U.S. advantage, such as precision-guided munitions. This growing symmetry between U.S. technical capabilities and near-peer potential competitors has been seen in the capabilities demonstrated during Russian power-projection operations in Syria.

The emergence of increasing symmetry in the international security environment suggests that it is again time to begin considering the mix of technologies, system concepts, military organizations, and operational concepts that might shift the nature of the competition back to U.S. advantage. This set of capabilities provides the basis for a Third Offset Strategy.

In explaining the technological elements of the Third Offset Strategy, then-Deputy Secretary of Defense Robert Work emphasized the importance of emerging capabilities in autonomy and artificial intelligence.[3] The Strategy’s primary technical line of effort is focused on the concept of Human-Machine Collaboration. The five basic building blocks of this concept are:[4]

  • Autonomous deep learning systems
  • Human-machine collaboration
  • Assisted human operations
  • Advanced human-machine combat teaming
  • Network-enabled, cyber-hardened autonomous weapons

From the U.S. Navy’s perspective, the Chief of Naval Operations Strategic Studies Group (SSG) spent one year examining this issue, and its report spurred increased interest in—and emphasis on—unmanned systems Navy-wide. Leveraging the SSG’s work, recent Navy focus has emphasized the need to enhance UxS command and control (C2) capabilities to allow one sailor to control multiple systems, in an attempt to lower the TOC of unmanned systems. This link between increased autonomy and decreased TOC has become an important theme in Navy UxS development, as well as in UxS research and development in the other U.S. military Services.

Importantly, unmanned systems figure prominently in the U.S. Navy’s Future Fleet Architecture Studies. Three independent studies all call for an increase—often a dramatic increase—in the number of unmanned systems for the Navy and the Joint Force. And while the Navy’s use of unmanned systems is no more important than that of the other military Services, the fact that the Navy operates unmanned systems in all four domains (air, surface, subsurface, and ground) means that the Navy has deep equities in UxS development and leads this development in many areas.

The Challenges for Autonomous Systems – Total Ownership Costs

One of the most pressing challenges for the DoD is to reduce the prohibitively burdensome manpower footprint currently necessary to operate unmanned systems. Military manpower makes up the largest part of the total ownership cost of systems across all the Services.[5] But how expensive is military manpower? To understand the compelling need to reduce these manpower requirements, it is important to understand the costs of manpower in the U.S. military writ large.

Military manpower costs are also the fastest growing accounts, even as the total number of military men and women decreases. According to an Office of Management and Budget report, military personnel expenditures rose from $74 billion in 2001 to $159 billion in 2012, an increase of almost 115 percent.[6]

More recently, a 2016 IHS Jane’s analysis of the U.S. defense budget highlighted the high, and growing, cost of military manpower, as well as the factors driving that cost. The analysis found that despite projected decreases in the number of military personnel across the Future Years Defense Plan (FYDP), military manpower costs will rise at least through FY-21. The report noted:

Since the nadir of US defence spending in 1999, personnel expenditure has increased faster than other categories of expenditure, save RDT&E, rising by 39% in real terms by FY06. Military Personnel have enjoyed: five years of pay rises at or around 1.5% - higher than those in the general economy and the pay hike requested for FY16 was 1.3% and for FY17 it is 1.6%; increases in pay for middle grades to improve retention of skilled personnel; improved housing benefits; and substantial increases in retirement benefits…These personnel figures do include mandatory military pensions, but they do not include DoD civilian pay, which is accounted for in the O&M accounts in the US. FY16 Military Personnel funds were USD148.5 billion in constant dollars or 23.9% of the budget and MilPers percentage is expected to rise to 25.5% by FY21.[7]

Lessons learned throughout the development process of most unmanned systems—especially unmanned aerial systems—demonstrate that unmanned systems can actually increase manning requirements.  An article in the Armed Forces Journal summed up the dilemma in a sentence, noting, “The military’s growing body of experience shows that autonomous systems don’t actually solve any given problem, but merely change its nature. It’s called the autonomy paradox: The very systems designed to reduce the need for human operators require more manpower to support them.”[8]

An article in the U.S. Army’s Acquisition, Logistics and Technology Journal, written by a U.S. Marine Corps officer who worked with unmanned military robots in Afghanistan, highlighted the challenges involved in trying to reduce the personnel footprint required to operate military unmanned vehicles:

In recent years, unmanned systems (UMS) have proliferated by the thousands in our Armed Forces. With increasing pressure to cut costs while maintaining our warfighting edge, it seems logical that UMS could reduce manpower and its associated costs while ensuring our national security. Unfortunately, while the recent UMS proliferation has improved our warfighting edge, it has not led to manpower reductions. Instead, UMS have increased our manpower needs—the opposite of what one might expect.

Two primary reasons that the proliferation of UMS has increased manpower needs are, first, that the priority for UMS is risk reduction, not manpower reduction; and, second, that current UMS are complementary to manned systems. Instead of replacing manned systems, UMS have their own manpower requirements, which are additive overall.[9]

Looking to the future of unmanned systems development, and while acknowledging that future technology breakthroughs may reduce the manning footprint of some military unmanned systems, some see a continuation of, or even an increase in, the manning required for unmanned systems. Here is how a Professor of Military and Strategic Studies at the United States Air Force Academy put it:

The corresponding overhead costs in training for pilots, sensor operators and maintainers, fuel and spare parts, maintenance, and communications are not cheaper than manned alternatives. Advances in ISR will increase manpower costs as each additional sensor will require additional processing and exploitation capacity…The manpower and infrastructure costs associated with UAVs will prevent it from becoming the universal replacement to all manned military aircraft missions.[10]

With the prospect of rapidly rising costs of military manpower, and the increased DoD emphasis on total ownership costs, the mandate to move beyond the “many operators, one-joystick, one-vehicle” paradigm that has existed during the past decades for most unmanned systems is compelling. The DoD and the Services are united in their efforts to increase the autonomy of unmanned systems as a primary means of reducing manning and achieving acceptable TOC. But is there an unacceptable “dark side” to too much autonomy?

The Dark Side of Unmanned Systems Autonomy

Few who saw Stanley Kubrick’s 2001: A Space Odyssey can forget the scene where astronauts David Bowman and Frank Poole consider disconnecting HAL's (Heuristically programmed ALgorithmic computer) cognitive circuits when he appears to be mistaken in reporting the presence of a fault in the spacecraft's communications antenna. They attempt to conceal what they are saying, but are unaware that HAL can read their lips. Faced with the prospect of disconnection, HAL decides to kill the astronauts in order to protect and continue its programmed directives.

While few today worry that a 21st century HAL will turn on its masters, the issues involved with fielding increasingly autonomous unmanned systems are complex, challenging, and contentious. Kubrick’s 1968 movie was prescient. Almost half a century later, we are still coming to grips with how much autonomy is enough and how much may be too much. This is arguably the most important issue we need to address with military unmanned systems.

We want our unmanned systems to achieve enhanced speed in decision making to act within an adversary’s OODA (Observe, Orient, Decide, and Act) loop.[11] We also want our unmanned systems to find the optimal solution for achieving their mission, without the need to rely on constant human operator oversight, input and decision-making. But while we need unmanned systems to operate inside the enemy’s OODA loop, are we ready for them to operate without our decision-making, to operate inside our OODA loops?

Bill Keller put the issue of autonomy for unmanned systems this way in his op-ed, “Smart Drones,” in the New York Times:

If you find the use of remotely piloted warrior drones troubling, imagine that the decision to kill a suspected enemy is not made by an operator in a distant control room, but by the machine itself. Imagine that an aerial robot studies the landscape below, recognizes hostile activity, calculates that there is minimal risk of collateral damage, and then, with no human in the loop, pulls the trigger.

Welcome to the future of warfare. While Americans are debating the president’s power to order assassination by drone, powerful momentum—scientific, military and commercial—is propelling us toward the day when we cede the same lethal authority to software.[12]

More recently, concerns about autonomous machines and artificial intelligence have also come from the very industry that is most prominent in developing these technological capabilities. In a New York Times article, Alex Garland, writer and director of the movie “Ex Machina,” discusses artificial intelligence and quotes several tech industry leaders:

The theoretical physicist Stephen Hawking told us that “the development of full artificial intelligence could spell the end of the human race.” Elon Musk, the chief executive of Tesla, told us that A.I. was “potentially more dangerous than nukes.” Steve Wozniak, a co-founder of Apple, told us that “computers are going to take over from humans” and that “the future is scary and very bad for people.”[13]

The Department of Defense is addressing the issue of human control of unmanned systems as a first-order priority and is beginning to issue policy guidance to ensure that humans do remain in the OODA loop.[14] That said, the U.S. military must be able to prevail in combat in the complex and challenging multi-domain battlespace. As then-Deputy Secretary of Defense Robert Work noted, “We believe, strongly, that humans should be the only ones to decide when to use lethal force. But when you're under attack, especially at machine speeds, we want to have a machine that can protect us.”[15]

It is one thing to issue policy statements, and quite another to design unmanned systems that meet warfighters’ needs. This is a critical point from a policy perspective, because although one can choose to delegate various levels of decision-making to an unmanned machine, one cannot escape responsibility for the resulting actions. In a highly autonomous system, the machine can become opaque to the operator, who is left asking: What is it doing? Why is it doing that? What is it going to do next? It is difficult to see how an operator can be effective while those questions go unanswered.

Designing in the Right Degree of Autonomy

Most of us are familiar with the children’s fable, Goldilocks and the Three Bears. As Goldilocks tastes three bowls of porridge she finds one too hot, one too cold, and one just right. As the DoD and the Services look to achieve the optimal balance of autonomy and human interaction—to balance these two often-opposing forces and get them “just right”—designing this capability into tomorrow’s unmanned systems at the outset, rather than trying to bolt it on after the fact, may be the only sustainable road ahead. If we fail to do this, it is almost inevitable that concerns that our armed unmanned systems will take on “HAL-like” powers and be beyond our control will derail the promise of these important warfighting partners. 

The capabilities required to find this “just right” balance of autonomy in military systems must leverage many technologies that are still emerging. The military knows what it wants to achieve, but often not what technologies or even capabilities it needs in order to field UxS with the right balance of autonomy and human interaction. A key element of this quest is to worry less about what attributes—speed, service ceiling, endurance, and others—the machine itself possesses and instead focus on what is inside the machine.

For the relatively small number of UxS that will engage an enemy with a weapon, this balance is crucial. Prior to firing a weapon, the unmanned platform needs to provide the operator—and there must be an operator in the loop—with a “pros and cons” decision matrix regarding what that firing decision might entail. When we build that capability into unmanned systems we will, indeed, have gotten it just right, and the future of military autonomous systems will be bright.

Into the Future with Autonomous Systems and Artificial Intelligence

Unmanned systems and artificial intelligence have the potential to be critical partners to the Joint warfighter. But this can only happen if the rapid technological advances in unmanned systems and artificial intelligence take into account valid moral and ethical considerations regarding their use. In all cases, unmanned systems must always allow for human control and verification, especially when it comes to the use of lethal force.

Those responsible for the concepts, research, development, building, fielding, and use of unmanned systems with artificial intelligence might be well-served to look to the commercial trade space, and to the automobile industry in particular, for best-practices examples. It is here that we may well find the vital customer feedback that indicates what drivers really want. And while not a perfect one-to-one match, that experience can suggest what kinds of unmanned systems with artificial intelligence industry should offer to the military.

Automobiles are being conceived, designed, built and delivered with increasing degrees of artificial intelligence. It is worth examining where these trend lines are going. As one way to “bin” automobiles, we might think of autos (and other wheeled vehicles) as residing in one of three basic categories:

  • A completely manual car—something your parents drove
  • A driverless car that takes you where you want to go via artificial intelligence
  • A car with augmented intelligence

The initial enthusiasm for driverless cars has given way to second thoughts regarding how willing a driver may be to be taken completely out of the loop. One article in particular, “Whose Life Should Your Car Save?” in the New York Times, captures the concerns of many. An excerpt conveys the essence of the public’s concern with driverless cars and, by extension, with other fully autonomous systems:

We presented people with hypothetical situations that forced them to choose between “self-protective” autonomous cars that protected their passengers at all costs, and “utilitarian” autonomous cars that impartially minimized overall casualties, even if it meant harming their passengers. (Our vignettes featured stark, either-or choices between saving one group of people and killing another, but the same basic trade-offs hold in more realistic situations involving gradations of risk.)

A large majority of our respondents agreed that cars that impartially minimized overall casualties were more ethical, and were the type they would like to see on the road. But most people also indicated that they would refuse to purchase such a car, expressing a strong preference for buying the self-protective one. In other words, people refused to buy the car they found to be more ethical.[16]

As the study referenced in this article, along with a growing number of other studies and reports, indicates, there is a growing consensus that drivers want to be “in the loop” and that they want semi-autonomous, not fully autonomous, cars. That may change in the future…but perhaps not. And it should inform how we think about military unmanned systems.

Extrapolating this everyday example to military autonomous systems, we believe that warfighters want augmented intelligence in their unmanned machines, and the available evidence, including some of the most cutting-edge work going on today, strongly suggests the same. Augmented intelligence will make these machines more useful and allow warfighters to control them in a way that will go a long way toward resolving many of the moral and ethical concerns related to their use.

This augmented intelligence would provide the “elegant solution” of enabling warfighters to use unmanned systems as partners, not separate entities. Fielding autonomous deep learning systems that enable operators to teach these systems how to perform desired tasks is the first important step in this effort. This will lead directly to the kind of human-machine collaboration that transitions the “artificial” nature of what the autonomous system does into an “augmented” capability for the military operator.

Ultimately, this will lead to the kind of advanced human-machine combat teaming that will enable warfighters—now armed with augmented intelligence provided by their unmanned partners—to make better decisions faster with fewer people and fewer mistakes. This will also keep operators in-the-loop when they need to be, for example, when lethal action is being considered or about to be taken, and on-the-loop in more benign situations, for example, when an unmanned aerial system is patrolling vast areas of the ocean during a surveillance mission.
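Purely as an illustration of that in-the-loop/on-the-loop distinction, the sketch below shows one way a control gate might encode it in software. The action types, the consent callback, and the gating rule are all invented for this example; they are not drawn from any fielded system or DoD specification.

```python
# Sketch of the in-the-loop / on-the-loop distinction described above.
# The Action types, consent callback, and gating rule are invented for
# illustration; they describe no fielded system.

from enum import Enum, auto
from typing import Callable


class Action(Enum):
    SURVEIL = auto()  # benign: the operator stays "on the loop"
    ENGAGE = auto()   # lethal: the operator must be "in the loop"


def authorize(action: Action, human_approves: Callable[[], bool]) -> bool:
    """Lethal actions block on explicit human consent; benign actions
    proceed while the operator monitors and can intervene at will."""
    if action is Action.ENGAGE:
        return human_approves()  # no engagement without a human decision
    return True


# Surveillance proceeds on its own; engagement waits on the operator.
assert authorize(Action.SURVEIL, lambda: False) is True
assert authorize(Action.ENGAGE, lambda: False) is False
```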

But this generalized explanation raises the question: what would augmented intelligence look like to the military operator? What tasks does the warfighter want the unmanned system to perform as it moves beyond artificial intelligence to provide augmented intelligence, enabling the Soldier, Sailor, Airman, or Marine in the fight to make the right decision quickly in stressful situations where mission accomplishment must be balanced against unintended consequences?

Consider the case of a ground, surface, subsurface, or aerial unmanned system conducting a surveillance mission. Today an operator receives streaming video of what the unmanned system sees, in the case of aerial unmanned systems often in real time. But this requires the operator to stare at the video for hours on end (the endurance of the U.S. Navy’s MQ-4C Triton, for example, is thirty hours). This concept of operations is an enormous drain on human resources, often with little to show for the effort.

Using basic augmented intelligence techniques, the MQ-4C can be trained to deliver only that which is interesting and useful to its human partner. For example, a Triton operating at cruise speed flying between San Francisco and Tokyo would cover the five-thousand-plus miles in approximately fifteen hours. Rather than send fifteen hours of generally uninteresting video as it flies over mostly empty ocean, the MQ-4C could be trained to send only the video of each ship it encounters, greatly reducing the human workload.
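As a minimal sketch of how such filtering might be expressed, the code below forwards only frames in which a ship is detected. The detector callback and the frame representation are stand-ins for a hypothetical onboard perception model, not a description of the Triton's actual software.

```python
# A minimal sketch of "send only what is interesting" video filtering.
# The detector callback and frame representation are stand-ins for a
# hypothetical onboard perception model; nothing here describes the
# MQ-4C's actual software.

from typing import Callable, Iterable, Iterator

Frame = bytes  # one video frame; the encoding is irrelevant to the sketch


def frames_worth_sending(
    frames: Iterable[Frame],
    contains_ship: Callable[[Frame], bool],
) -> Iterator[Frame]:
    """Yield only the frames in which the detector reports a ship,
    compressing hours of empty-ocean video into the short segments
    the human operator actually needs to review."""
    for frame in frames:
        if contains_ship(frame):
            yield frame


if __name__ == "__main__":
    # Toy stand-in detector: treat any non-empty frame as a contact.
    flagged = list(frames_worth_sending([b"", b"ship", b""], bool))
    print(f"{len(flagged)} of 3 frames forwarded to the operator")
```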

Taken to the next level, the Triton could do its own analysis of each contact to flag it for possible interest. For example, if a ship is operating in a known shipping lane, has filed a journey plan with the proper maritime authorities, and is providing an AIS (Automatic Identification System) signal, it is likely worthy of only passing attention by the operator, and the Triton will flag it accordingly. If, however, the ship does not meet these criteria (say, for example, the vessel makes an abrupt course change that takes it well outside normal shipping channels), the operator would be alerted immediately. As this technology continues to evolve, an MQ-4C Triton, or other UxS, could ultimately be equipped with detection and classification algorithms that enable automatic target recognition, even in unfavorable weather conditions and sea states.
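That triage rule can be written down almost verbatim, as the sketch below does. The field names and the rule set are invented for illustration; an operational system would obviously fuse far more inputs.

```python
# Illustrative contact-triage rules for the surveillance example above.
# Field names and the rule set are invented for this sketch; an
# operational system would fuse many more inputs.

from dataclasses import dataclass


@dataclass
class Contact:
    in_known_shipping_lane: bool
    filed_journey_plan: bool
    broadcasting_ais: bool  # Automatic Identification System signal
    abrupt_course_change: bool


def triage(contact: Contact) -> str:
    """Return 'alert' for anomalous contacts, 'routine' otherwise."""
    if contact.abrupt_course_change:
        return "alert"  # e.g., a turn taking it outside normal lanes
    if (contact.in_known_shipping_lane
            and contact.filed_journey_plan
            and contact.broadcasting_ais):
        return "routine"  # worthy of only passing operator attention
    return "alert"  # anything unexplained goes to the operator now


print(triage(Contact(True, True, True, False)))  # routine
print(triage(Contact(True, True, True, True)))   # alert
```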

For lethal military unmanned systems, the bar is higher for what the operator must know before authorizing the unmanned warfighting partner to fire a weapon or, as is often the case, recommending that higher authority authorize lethal action. Take the case of military operators managing an ongoing series of unmanned aerial system flights that have been watching a terrorist, waiting for higher authority to give the authorization to take out the threat with an air-to-surface missile fired from that UAS.

Using augmented intelligence, the operator can train the unmanned aerial system to anticipate the questions higher authority will ask prior to giving the authorization to fire, and to provide, if not a point solution, at least a percentage probability or confidence level for questions such as the following (a minimal sketch of such a summary appears after the list):

  • What is the level of confidence that this person is the intended target?
  • What is this confidence based on?
    • Facial recognition
    • Voice recognition
    • Pattern of behavior
    • Association with certain individuals
    • Proximity of known family members
    • Proximity of known cohorts
  • What is the potential for collateral damage to:
    • Family members
    • Known cohorts
    • Unknown persons
  • What are the potential impacts of waiting versus striking now?
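Purely as a sketch of what such a decision-support summary might look like in structure, the example below rolls the identification factors into a single weighted confidence figure. The factor names echo the list above, but the confidence values, the weights, and the averaging rule are all invented for illustration and carry no doctrinal authority.

```python
# Illustrative structure for the pre-authorization summary described
# above. The factors mirror the article's list; the confidence values,
# weights, and averaging rule are invented for this sketch.

TARGET_ID_FACTORS = {
    # factor: (confidence 0.0-1.0, weight)
    "facial_recognition":           (0.92, 0.30),
    "voice_recognition":            (0.85, 0.20),
    "pattern_of_behavior":          (0.75, 0.20),
    "association_with_individuals": (0.70, 0.15),
    "proximity_of_known_persons":   (0.60, 0.15),
}


def target_confidence(factors: dict) -> float:
    """Weighted average of the identification factors, reported to
    higher authority alongside the basis for each factor."""
    total_weight = sum(weight for _, weight in factors.values())
    weighted_sum = sum(conf * weight for conf, weight in factors.values())
    return weighted_sum / total_weight


if __name__ == "__main__":
    print(f"Overall identification confidence: "
          f"{target_confidence(TARGET_ID_FACTORS):.0%}")
    for name, (conf, _) in TARGET_ID_FACTORS.items():
        print(f"  {name}: {conf:.0%}")
```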

These considerations represent only a subset of the kinds of issues operators must train their lethally armed unmanned systems to deal with. Far from ceding lethal authority to unmanned systems, providing these systems with augmented intelligence, and leveraging their ability to operate inside the enemy’s OODA loop as well as ours, frees the human operator from having to make real-time, often on-the-fly, decisions in the stress of combat. Designing this kind of augmented intelligence into unmanned systems from the outset will ultimately enable them to be effective partners for their military operators.

This focus on augmented intelligence brings us full-circle back to some of the concerns raised by Deputy Secretary of Defense Robert Work. He noted that when the enemy is attacking us at “machine speeds,” we need to exploit machines to help protect us. The importance of augmented intelligence in the man-machine symbiosis was highlighted in a recent U.S. Air Force Technology Horizons report which noted:

“Although humans today remain more capable than machines for many tasks, natural human capacities are becoming increasingly mismatched to the enormous data volumes, processing capabilities, and decision speeds that technologies offer or demand; closer human-machine coupling and augmentation of human performance will become possible and essential.”[17]

Building unmanned systems with augmented intelligence that can partner with operators in this effort is what will ultimately ensure the systems we build reach their full potential to help our warfighters win in combat in the challenging multi-domain battlespace of the future.

The Challenge and the Way Ahead

Industry wants to design and build unmanned systems the military customer will buy and use. The sooner military officials signal the desire to field unmanned systems with augmented intelligence, the more likely it is that warfighters in the field will have optimally designed unmanned systems that function in concert with manned systems with just the right degree of human oversight and control.

From the United States’ perspective, the ultimate beneficiaries will be the U.S. Joint warfighters taking on the high-end threat in the multi-domain battle space, as well as their allied and coalition partners, who will then be able to employ these systems within the constructs of the law of war and the rules of engagement. Our warfighters will likely be outmanned and outgunned in high-end warfighting scenarios in the 2030-2050 timeframe. Our task must be to ensure they have the tools to prevail in that fight.

End Notes

[1] FY 2013-2038 Unmanned Systems Integrated Roadmap (Washington, D.C.: Department of Defense, 2013).

[2] Department of Defense, Joint Operational Access Concept, (Washington, D.C.: January 17, 2012), 10. 

[3] Deputy Secretary Work’s interview with David Ignatius at “Securing Tomorrow” forum at the Washington Post Conference Center in Washington, DC, March 30, 2016. 

[4] Remarks by Deputy Secretary of Defense Robert Work at the Center for New American Security Defense Forum, December 14, 2015.

[5] Over a decade ago, in its report, Navy Actions Needed to Optimize Ship Crew Size and Reduce Total Ownership Costs (GAO-03-520, Jun 9, 2003) the Government Accountability Office noted, “The cost of a ship's crew is the single largest cost incurred over the ship's life cycle.”  This was before a decade of military pay increases.  See also Connie Bowling and Robert McPherson, Shaping The Navy’s Future, (Washington, D.C.: Accenture White Paper, February, 2009) accessed at: http://www.accenture.com/us-en/Pages/service-public-service-shaping-navys-future.aspx, for one of a growing number of reports detailing the reasons manpower is the driving factor increasing total operating costs of U.S. Navy ships. This report notes, “The active duty force, for instance, has stabilized and is projected to stay at 317,000 personnel for the next six years. Yet over this same period, the inflation adjusted cost of the force is projected to grow by 16.5 percent due to the rising costs of benefits, including service member and family health care.”

[6] The Congressional Budget Office report, Costs of Military Pay and Benefits in the Defense Budget, November 14, 2012, is just one of many reports that note that costs of military pay have been increasing faster than the general rate of inflation and wages and salaries in the private sector. The report identifies this as just one factor making manpower accounts an increasingly large part of the military budget. See also, Mackenzie Eaglen and Michael O’Hanlon, “Military Entitlements are Killing Readiness,” Wall Street Journal, July 25, 2013. A number of think tank studies have presented even more dire, even apocalyptic, scenarios as military manpower costs rise. See, for example, Todd Harrison, Rebalancing Military Compensation: An Evidence-Based Approach, (Washington, D.C.: Center for Strategic and Budgetary Assessments, July 12, 2012), which notes in its first finding: “The all-volunteer force, in its current form, is unsustainable. Over the past decade, the cost per person in the active duty force increased by 46 percent, excluding war funding and adjusting for inflation. If personnel costs continue growing at that rate and the overall defense budget remains flat with inflation, military personnel costs will consume the entire defense budget by 2039.”

[7] Jane’s U.S. Defense Budget, November 16, 2016, accessed at ihs.com.

[8] “Why ‘Unmanned Systems’ Don’t Shrink Manpower Needs,” Armed Forces Journal, October 1, 2011, accessed at: http://armedforcesjournal.com/the-autonomy-paradox/

[9] Major Valerie Hodgson, “Reducing Risk, Not Manpower: Unmanned Systems Bring Lifesaving Capabilities, but Saving Money in Personnel Has Yet to be Achieved,” U.S. Army’s Acquisition, Logistics and Technology Journal, Jan-Mar 2012.

[10] Michael Fowler, “The Future of Unmanned Aerial Systems,” in Global Security and Intelligence Studies, Vol. 1: No. 1, Article 3. Available at: http://digitalcommons.apus.edu/gsis/vol1/iss1/3

[11] The OODA loop was the brainchild of Air Force Colonel John Boyd and its first application was to fighter tactics.

[12] Bill Keller, “Smart Drones,” The New York Times, March 10, 2013.

[13] Alex Garland, “Alex Garland of ‘Ex Machina’ Talks About Artificial Intelligence,” The New York Times, April 22, 2015.

[14] Deputy Secretary of Defense Ashton Carter Memorandum, “Autonomy in Weapon Systems,” dated November 21, 2012, accessed at: http://www.defense.gov/.  See also, “Carter: Human Input Required for Autonomous Weapon Systems,” Inside the Pentagon, November 29, 2012 for a detailed analysis of the import of this memo.

[15] Remarks by Deputy Secretary of Defense Robert Work at the Center for New American Security Defense Forum, December 14, 2015.

[16] Azim Shariff, Iyad Rahwan and Jean-Francois Bonnefon, “Whose Life Should Your Car Save?” The New York Times, November 6, 2016.  See also Aaron Kessler, “Riding Down the Highway, with Tesla’s Code at the Wheel,” The New York Times, October 15, 2015.

[17] Technology Horizons: A Vision for Air Force Science and Technology 2010-2030, accessed at: http://www.defenseinnovationmarketplace.mil/resources/AF_TechnologyHorizons2010-2030.pdf

 

About the Author(s)

George Galdorisi is Director for Strategic Assessments and Technical Futures at SPAWAR Systems Center Pacific. Prior to joining SSC Pacific, he completed a 30-year career as a naval aviator, culminating in 14 years of consecutive experience as executive officer, commanding officer, commodore, and chief of staff, including command of HSL-43, the Navy’s first operational LAMPS Mk III squadron, HSL-41, the LAMPS Mk III Fleet Replacement Squadron, USS Cleveland (LPD-7), and Amphibious Squadron Seven. His last operational assignment spanned five years as Chief of Staff for Cruiser-Destroyer Group Three, during which he made deployments to the Western Pacific and Arabian Gulf embarked in the USS Carl Vinson and USS Abraham Lincoln. He is a 1970 graduate of the U.S. Naval Academy and holds a master’s degree in Oceanography from the Naval Postgraduate School and a master’s degree in International Relations from the University of San Diego.