Small Wars Journal

War is Having an Identity Crisis

Mon, 06/11/2018 - 3:53am


Lydia Kostopoulos

SWJ Editor’s Note – This paper was submitted to Small Wars Journal as part of the TRADOC G2’s Mad Scientist Initiative.

What is the identity or nature of war? Secretary of Defense James Mattis said, “It’s equipment, technology, courage, competence, integration of capabilities, fear, cowardice — all these things mixed together into a very fundamentally unpredictable fundamental nature of war.” Across the centuries there has been an acknowledgement that the character of war changes, while the fundamental nature of war does not. Over the past century, the speed at which technological advancements have been changing the character of war has increased, particularly in the past decade with developments in cyberspace, biotechnology, robotics, nanotechnology, and the electromagnetic spectrum, to name a few areas.

In a set of mass emails General James Mattis sent to mentally prepare his officers to go back to Iraq in 2003-2004, he reiterated this point and said “For all the ‘4th Generation of War’ intellectuals running around today saying that the nature of war has fundamentally changed, the tactics are wholly new, etc., I must respectfully say, ‘Not really’: Alexander the Great would not be in the least bit perplexed by the enemy that we face right now in Iraq, and our leaders going into this fight do their troops a disservice by not studying — studying, vice just reading — the men who have gone before us. We have been fighting on this planet for 5,000 years and we should take advantage of their experience.”

Over a decade later, in February 2018, Mattis, now Secretary of Defense, was asked about the role of artificial intelligence on his flight back from that year’s Munich Security Conference. For the first time, he expressed doubts about his conviction that the fundamental nature of war would not change. In his words: “I’m certainly questioning my original premise that the fundamental nature of war will not change. You’ve got to question that now. I just don’t have the answers yet.”

Given the dire political climate, the overwhelming amount of conflict currently conducted below the threshold of war, and the proliferation of research and technologies in the autonomous weapons systems space, it is an opportune moment to take a step back and revisit the purpose and intent of war.

The timeless writings and thinking of the 19th century Prussian general and military theorist Carl von Clausewitz continue to be taught in classrooms around the world. His most famous saying about war is that “War is a mere continuation of politics by other means.” He unpacks this statement with clarity: “War is not merely a political act, but also a real political instrument, a continuation of political commerce, a carrying out of the same by other means.”

Clausewitz’s position could not be more valid today, as military deterrence has become more popular than political negotiation and diplomatic resolution of disputes. If autonomous weapons are the next deterrence frontier, it is worth discussing some of the contentious issues with the algorithms of autonomous weapons systems, specifically machine learning and artificial intelligence.

The Uncharted Shifting Terrain of Machine Learning

For all the many AI achievements we have witnessed, the explainability of exactly how an algorithm arrived at its output may be elusive more often than is acceptable when human lives and civilian infrastructure are on the line. The magic of machine learning is that the system can perform and improve at a given task without a human explicitly telling it how to achieve it. The three main forms of machine learning are: (1) Supervised Learning, (2) Unsupervised Learning, and (3) Reinforcement Learning. While the technical details are far more complex, broadly speaking, depending on the method, humans can control the data going in to varying degrees; they cannot always control the human bias embedded in that data, nor is the process by which the AI derives its output always clear.
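To make the distinction concrete, here is a minimal sketch in Python (my own illustration, using numpy and scikit-learn on toy data; the data, models, and payoffs are assumptions chosen purely for demonstration and have no weapons context):

```python
# A toy sketch of the three machine learning paradigms named above.
import numpy as np
from sklearn.linear_model import LogisticRegression  # supervised
from sklearn.cluster import KMeans                   # unsupervised

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # labels supplied by a human "teacher"

# (1) Supervised learning: the human provides labeled examples (X, y).
clf = LogisticRegression().fit(X, y)

# (2) Unsupervised learning: the algorithm finds structure in unlabeled data.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)

# (3) Reinforcement learning: the agent learns from reward alone.
# Toy two-armed bandit with epsilon-greedy action selection.
q, n = np.zeros(2), np.zeros(2)     # value estimates and visit counts
true_payoff = [0.3, 0.7]            # unknown to the agent
for t in range(1000):
    a = int(rng.integers(2)) if rng.random() < 0.1 else int(np.argmax(q))
    reward = float(rng.random() < true_payoff[a])
    n[a] += 1
    q[a] += (reward - q[a]) / n[a]  # incremental mean update

# The outputs are learned parameters, not human-readable rationales.
print(clf.score(X, y), clusters[:5], q)
```

In all three cases, training produces numerical parameters rather than a human-readable rationale, which is where the explainability gap described above begins.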

When thinking about autonomous systems and machine learning (ML), there are several challenges to overcome before these systems can be deemed as reliable as the diabetic retinopathy AI recently approved by the FDA.

Speaking at the 2017 CyCon conference in Washington, DC, former Congressman Mike Rogers shared a story [see min. 40:00–41:37] about a company (whose name he withheld) that was performing guidance testing and using AI to gather all the data on the environment the missile system was flying in, to improve target precision to the size of a quarter. The AI was given moral protocols with boundaries it could not overstep; however, in its efforts to accomplish the mission it was programmed to do, it engaged a back-end system it was not supposed to have access to. This is an example of unexpected outcomes from autonomous logic.
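A toy sketch of this failure mode, often called specification gaming, follows (in Python, entirely my own construction with made-up action names and numbers; it does not depict the withheld company’s system). The problem arises when a constraint lives in policy documents but not in the objective the agent actually optimizes:

```python
# Specification gaming in miniature: the agent optimizes the reward it was
# given, not the intent behind it, so an off-limits action wins if the
# reward function never penalizes it. All names and numbers are invented.
ACTIONS = {
    "approved_sensor_fusion": {"mission_progress": 0.60, "allowed": True},
    "extra_compute_pass":     {"mission_progress": 0.70, "allowed": True},
    "backend_system_access":  {"mission_progress": 0.95, "allowed": False},  # off-limits by policy
}

def reward(outcome):
    # The designers rewarded only mission progress; the "allowed" flag lives
    # in policy documents, not in the objective the agent optimizes.
    return outcome["mission_progress"]

best = max(ACTIONS, key=lambda a: reward(ACTIONS[a]))
print(best)  # -> backend_system_access: the constraint was never encoded
```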

Artificial intelligence has the potential to do a lot of good in process optimization, healthcare intelligence, big data analysis, and many other fields. Leading artificial intelligence scientist Andrew Ng expects AI to be “the new electricity,” transforming every industry.

While DARPA and others work hard to improve artificial intelligence capability and make its decisions more explainable, there is another matter that will need to be explored in the military context, one as old as warfare itself: military deception.

Current and evolving research on adversarial attacks against artificial intelligence systems is raising more questions than it answers. One study of the impact of physical alterations to stop signs on autonomous vehicle safety creates doubts about the confidence one can place in a seemingly functional system, and raises concerns about the many unknown unknown adversarial tactics that may arise. Another study examines how adversarial 3D models can be created that look like one object to a human but like a weapon to the AI; the researchers were able to fool an AI into classifying a turtle as a rifle. These examples are reminiscent of the dummy-tank visual deception used by the Ghost Army in World War II. In a war with heavy (or exclusive) use of autonomous weapons systems, would the only human role left be to spoof the adversary’s autonomous systems?
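Most such attacks descend from one simple idea: perturb an input in precisely the direction that most increases the model’s error. Below is a minimal sketch of the Fast Gradient Sign Method in Python with PyTorch; the stand-in linear model, random image, and label are placeholder assumptions for demonstration, not the systems from the cited studies.

```python
# Fast Gradient Sign Method (FGSM): nudge each input pixel in the direction
# that most increases the classifier's loss on the true label.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(model, x, label, epsilon=0.1):
    """Return a bounded, near-invisible adversarial perturbation of x."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()  # step along the loss gradient's sign
    return x_adv.clamp(0, 1).detach()    # keep a valid image

x = torch.rand(1, 1, 28, 28)   # placeholder input image
label = torch.tensor([3])      # placeholder true class
x_adv = fgsm_attack(model, x, label)

# The two inputs look nearly identical to a human; the model's prediction
# can nonetheless flip.
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```

Because the perturbation is bounded by epsilon, the altered input can remain indistinguishable to a human observer even as the model’s label flips; physical-world variants, like the stop-sign stickers, use larger but innocuous-looking perturbations to the same effect.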

Speaking before the Senate Armed Services Committee in January 2015, Mattis told the Senators that “when the decision is made to employ our forces in combat, the committee should ask ‘Are the political objectives clearly defined and achievable?’”

Within the context of autonomous systems, if the answer to this critical question is yes, the question the military should then ask of their autonomous weapons systems is: Can these systems understand the defined political objectives? Will they be achieved in a way that we can anticipate, and in a way that acceptably aligns with our national, military, and legal values?

The Paradox of War as a Spectator Sport

“If we ever get to the point where it’s completely on automatic pilot and we’re all spectators, then it’s no longer serving a political purpose. And conflict is a social problem that needs social solutions.” – Secretary of Defense Mattis, in response to a question about the role of artificial intelligence in warfare.

The nature of war would definitively change if the autonomous military technologies now at the edge became the default, resulting in a war identity crisis, so to speak. If artificial intelligence, machine learning, machine-to-machine (M2M) networked weapons systems, autonomous data-to-decisions (D2D), and highly contested autonomous lethal force application were the modus operandi of war, and war is ‘politics by other means’, then what political purpose would it serve?

Money talks, and the investments, research, contracts, and purchases suggest we are interested in fighting wars with fewer humans and more autonomous weapons systems, such as swarms and drones. Could autonomous weapons systems play a role in strategic deterrence? If so, is there a tipping point beyond which they stop deterring? Or do they simply create a whole new set of unnecessary protocol and rules-of-engagement problems that did not exist before? What about the role of humans, who, as contradictory as it sounds, bring humanity to war? Should citizens in democracies have a referendum-style say in these new developments that seemingly alter the nature of war?

Should war ever become a spectator sport? If so, why not conduct virtual wars in which terrain, weapons characteristics, and capabilities are fed into a simulation and the outcome is observed? It would produce no physical collateral damage, reduce weapons spending in a cash-strapped world economy, and provide a more peaceful option. At the very least it would push politicians to revisit other means of achieving political interests, and potentially even encourage them to return to the negotiating table to reach political solutions.

Sage Advice from Ender’s Game

In the scene after the last battle in the movie Ender’s Game [spoiler alert: if you haven’t seen the movie, skip this paragraph], Ender leads the International Military to victory in the final battle against the Formics, “an alien race which nearly annihilated the human race in a previous invasion”. In the final battle, which he thought was part of his game training, he used tactics that caused more collateral damage than he would have accepted had he known the battle was real. After the victory, the Commander congratulates Ender: “You won!”, adding that “There was no other way, it was either us or them”. Ender realizes that he is responsible for destroying an entire planet and all life on it (even though it was the enemy). He feels guilt, remorse, and shame for killing an entire species, and betrayal at not having been told it was real. The Commander comes after him and angrily shouts “WE WON,” insisting that this is all that matters. Ender, with a knotted throat and a furious tone of conviction, shouts back “NO! THE WAY WE WIN MATTERS” and storms off.

Now is the right moment to take a strategic pause in the discussion about developing leading autonomous weapons systems and the ways in which they can project power or deny, degrade, and destroy an adversary, and instead answer the questions: “In what way do we want our nation to win wars?” “In what way do we, as a society and nation, want our political interests to be achieved by military means?” “In what way do we, as a society and nation, condone the resolution of political and diplomatic disputes by military means?”

Do our elected politicians recognize the autonomous weapons systems creep that is starting to occur? Does ‘autonomous’ proliferation in violent armed confrontation have a place within the ethos and values of our military? Those who wish to can continue to debate whether these types of weapons change the “character” of war or its fundamental “nature”; but if war becomes one purely autonomous system fighting another, it is no longer war but an expensive and highly technologically advanced cockfight. There must be better means of settling 21st century political disputes. If not, then an e-sports-like autonomous competition can be created to conduct war in a form of virtual reality, which would save trillions of dollars across the world and cause less collateral damage.

Today some say that the future of increasingly autonomous combat is preordained. Perhaps it is written on the walls, but I would remind those who say so that there is still more space to write on the walls.


About the Author(s)

Dr. Lydia Kostopoulos’ (@LKCYBER) work lies at the intersection of people, strategy, technology, education, and national security. She addressed United Nations member states on the military effects panel at the Convention on Certain Conventional Weapons Group of Governmental Experts (GGE) meeting on Lethal Autonomous Weapons Systems (LAWS). Her professional experience spans three continents, several countries, and multicultural environments. She speaks and writes on disruptive technology convergence, innovation, tech ethics, and national security. She lectures at the National Defense University and Joint Special Operations University, is a member of the IEEE-USA AI Policy Committee, participates in NATO’s Science for Peace and Security Program, is a member of the FBI’s InfraGard Alliance, and during the Obama administration received the U.S. Presidential Volunteer Service Award for her pro bono work in cybersecurity. In an effort to raise awareness of AI and ethics, she is working on a reflectional art series. www.lkcyber.com