Dissuasion in Cyberspace:
The Limitations of Classical Deterrence Theory
Sarah M. Koch
In 2011, a top federal laboratory in the United States was forced to disconnect from the Internet after administrators discovered that data was being siphoned from a server. In 2014, two Chinese hacks into U.S. Office of Personnel Management databases compromised sensitive information on more than 22.1 million people; U.S. officials said it was “highly likely” that every security clearance application since 2000 had been exposed. In the spring of 2017, a mysterious hacking group calling itself the Shadow Brokers released a trove of alleged NSA tools, including EternalBlue, an exploit for a previously unknown Windows vulnerability. Only months later, hackers used EternalBlue in two high-profile ransomware attacks.
Western society’s connectivity is accompanied by a new national security risk: cyberattacks. To a degree almost unimaginable a decade earlier, disruptive and destructive cyberattacks have become central to multi-domain warfare in interstate conflict. Critical infrastructure, banking, and military systems all rely on connectivity in cyberspace. Paradoxically, the nations at the forefront of these emerging technologies are also the most susceptible to attack. For this reason, the United States faces many peer or near-peer competitors in the domain of cyber warfare. As cyberattacks by state and non-state actors increase in frequency and severity, their prevention has become ever more central to national security policy. However, cyberattacks can rarely be deterred. Threat of punishment is the universal deterrence mechanism, but punishment will play a lesser role in the cyber domain. As Richard Clarke and Robert Knake argue, “Of all the nuclear strategy concepts, deterrence theory is perhaps the least transferable to cyber war.”
Ultimately, cyber must be distinguished from both nuclear and conventional kinetic conflict. The constant evolution, paradoxes, and indisputable uniqueness of cyber warfare leave strategists with an unclear picture as they pursue appropriate deterrence policies for cyberattacks. Policy experts have identified four potential mechanisms of deterrence and dissuasion in cyberspace: threat of punishment, denial by defense, entanglement, and normative taboos. For concept “purists,” however, only the first mechanism constitutes deterrence. Within that narrower frame, Martin Libicki constructed a ladder of appropriate retaliatory responses: diplomatic, economic, cyber, physical force, and nuclear force. Under a strategy grounded in such a multi-domain ladder of retaliation, nuclear weapons could serve as a retaliatory strike after devastating, non-nuclear attacks on American infrastructure. In theory, such a strategy would create a deterrent dynamic against potentially crippling cyberattacks. In reality, its efficacy remains far from clear.
In order to retaliate, the victim must, at a minimum, establish the identity of its attacker. However, attribution after a cyberattack is complex and rarely immediate. The speed and sophisticated concealment of cyberattacks make real-time identification of an attacker rare; instead, the victim relies on digital forensics to reconstruct the identity of its attacker. Even when an attack is traced to a single computer, questions surrounding the attacker’s identity can remain. Eric Talbot Jensen illustrates this conundrum with an anecdote: a cyberattack is traced to a computer in the basement of a Chinese government building. This detective work leaves three possible culprits: a Chinese operative acting on behalf of the Chinese government, a rogue Chinese actor, or a third country attempting to implicate the Chinese. According to a report by the Cybereason Intelligence Group, both Russia and China have capitalized on this strategic ambiguity: Russia outsources its malicious cyber activities, and China allows entrepreneurial “hackers for hire” to operate as long as they do not create significant issues for the government. These strategies allow both states to operate under a veil of plausible deniability, hampering forceful retaliation. However, this veil of anonymity is not the sole obstacle to effective cyber deterrence.
In addition to attribution, damage assessment is a critical component in weighing retaliatory measures. International law does not require “a response in kind” to an attack, but a response claimed in self-defense is limited not only by the principle of necessity but also by the principle of proportionality. As with attribution, damage assessment and the determination of proportionality present many difficulties when applied to cyber warfare. Michael Schmitt argues: “although they [cyberattacks] are non-forceful [non-kinetic], their consequences can range from mere annoyance to death.” Moreover, an act that appears on the surface to be a “mere annoyance” may in fact have a greater impact. For example, Russian meddling in the 2016 U.S. elections could have a lasting impact on the stability of American society and the American political system. Similarly, when Bashar al-Assad’s regime hacked the Associated Press Twitter account and posted a false message that read, “Breaking: Two Explosions in the White House and Barack Obama is injured,” U.S. stock markets almost immediately shed roughly 200 billion dollars in value. Though the markets recovered quickly, cyber actions of this kind could erode public trust in security systems and the media. As Joseph Nye explains, “In the classic duality between war and peace,” many cyberattacks fall into a “gray zone.”
Once both the actor and the impact have been identified, another question arises in the deterrence dilemma: what should be targeted in response, and by what means? Though the punishment mechanism of deterrence need not be limited to the cyber domain, an attack in kind would be the clearest response to a cyberattack and perhaps the easiest to justify on the international stage. However, the debate over what constitutes an appropriate attack in kind highlights two additional characteristics that distinguish cyber from other domains of warfare: (1) the relationship between a society’s technological advancement and its corresponding vulnerability to attack and (2) the single-use nature of offensive cyber capabilities. Richard Clarke and Robert Knake argue that deterrence, if the classical concept were applied, would be most effective against the U.S. due to its particular vulnerability to asymmetric attacks. Because the U.S. is more dependent on connectivity than its adversaries, it may be deterred from initiating cyber warfare for fear of retaliation against its own networks. The U.S. takes a great risk when developing offensive technologies, for the possibility will always exist that new American technologies will one day be used against the United States.
Inversely, a less connected adversary may have far less to lose from an outbreak of cyber warfare. For example, President Obama promised a “proportional response” to North Korea’s hack of Sony Pictures in 2014. Only days later, North Korea experienced one of its worst network failures in years, a blackout of nearly ten hours, and blamed the United States for the outage. Yet even if the United States were responsible, it is unclear whether the act was truly proportional to the original North Korean attack. According to the New York Times in 2014, the country had only 1,024 official Internet protocol (IP) addresses and a single upstream network connecting it to the rest of the Internet; the attack therefore likely caused limited disruption in North Korea. Similarly, the Trump administration accused Russia of targeted cyberattacks on the U.S. power grid in March 2018. If Russia had indeed interrupted power in the U.S., and the U.S. were to react in kind, the physical effects in Russia would not be comparable. Purportedly, thirty percent of applications to connect to the electrical grid in Russian cities are denied because the infrastructure has not been updated in decades, and, in some localities, orders for electricity are still placed by phone and tracked on paper maps. As these examples show, attacks against a technologically inferior adversary may be largely symbolic. Cyber has become a weapon of choice for the outgunned, and it will remain a relatively low-cost and low-risk activity for weaker state and non-state actors.
Given this debate over the effectiveness of a retaliatory response against a less-connected opponent, strategists must also weigh the cost of “expending” an offensive cyber capability that will likely be limited to a single use. After a cyber weapon has been exposed to the public eye, it can be reverse engineered, a process that generally renders the weapon useless: not only will the exploited vulnerability be patched, but copycat technologies will also arise. This single-use quality clearly distinguishes cyber weapons from conventional weaponry; designs for munitions, planes, and bombs are rarely scrapped after their first use. The Stuxnet virus, released in 2010, is a clear example of the limited lifespan of offensive cyber capabilities. The worm affected over 60,000 computers, more than half of them in Iran, and sabotaged the centrifuges used in the state’s nuclear program. At the time of its release, it was labeled “one of the most sophisticated and unusual pieces of software ever created.” Yet despite its sophistication, the virus was quickly disarmed: within a few months, its technical components had been identified, and by 2011, a plethora of effective antidotes were available and many variants of the malware had appeared. In general, a cyber capability that works one day may not work the next, even without its expenditure. If a target becomes aware of a vulnerability, that asset will receive additional protection. The secrecy of a state’s cyber weapons is therefore paramount, further undermining conventional deterrence strategies.
Nonetheless, threat of punishment remains a bedrock of U.S. cyber policy, albeit cloaked in contemporary buzzwords and catchphrases. While the 2018 Cyber Strategy appeared to focus on active defense, or “defending forward” to “intercept and halt cyber threats” to American networks, subsequent U.S. actions in cyberspace indicated otherwise. In June 2019, the United States escalated its incursions into the Russian electrical grid, introducing potentially crippling malware. According to a New York Times source, offensive action was “long overdue”; for years, Russia had been inserting malware into American infrastructure. After this tit-for-tat, President Trump’s national security advisor warned: “[if you are] engaged in cyberoperations against us, you will pay a price.” With this pointed announcement, any ambiguity evaporated. The United States is operating in cyberspace, the newest theater of warfare, with a less than novel strategy: classical deterrence.
However, as this article has demonstrated, threat of punishment alone is not a viable strategy. Its utility is limited by ambiguous attribution, unclear measures of proportionality, and single-use weaponry. Ultimately, classical deterrence, as previously defined in the context of nuclear and conventional weaponry, will play a diminished role in cyberspace. Though research and technologies in this domain are constantly evolving, strategists, policy-makers, and military leaders must understand classical deterrence’s shortcomings in order to minimize their effects or to propose alternative solutions.
The views and opinions expressed in this paper are those of the author alone and do not necessarily reflect the official policy or position of the U.S. Department of Defense, 1st Information Operations Command, or any agency of the U.S. government.
 Kim Zetter, “Top Federal Lab Hacked in Spear-Phishing Attack,” WIRED, April 20, 2011.
 Ellen Nakashima, “Hacks of OPM databases compromised 22.1 million people, federal authorities say,” The Washington Post, July 9, 2015.
 Lily Hay Newman, “The Biggest Cyber Security Disasters of 2017 So Far,” WIRED, July 1, 2017.
 Joseph S. Nye Jr., “Deterrence and Dissuasion in Cyberspace,” International Security, 41, no. 3 (Winter 2016/2017): 44.
 Lieutenant Colonel George B. Lavezzi, “Possible Futures: Space Capability Risks and the Joint Force,” Strategy research project for the U.S. Army War College, Carlisle, PA, 2010: 7.
 Richard A. Clarke and Robert K. Knake, Cyber War: The Next Threat to National Security and What to Do About It (New York: HarperCollins, 2010), 189.
 Nye, “Deterrence and Dissuasion,” 54-55.
 Martin C. Libicki, Cyberdeterrence and Cyberwar (Santa Monica, CA: RAND Corporation, 2009), 26.
 Gary Palmer, “A Road Map for Digital Forensic Research,” Report from the First Digital Forensic Research Workshop (DFRWS), Utica, New York, August 7-8, 2001: 16.
 Eric Talbot Jensen, “Cyber Deterrence,” Emory International Law Review, 26 (2012): 785-786.
 Cybereason Intel Team, “Russia and Nation-State Hacking Tactics: A Report from Cybereason Intelligence Group,” Cybereason, June 5, 2017.
 Libicki, Cyberdeterrence and Cyberwar, 75-90.
 Jensen, “Cyber Deterrence,” 799.
 Michael N. Schmitt, “Cyber Operations and the Jus Ad Bellum Revisited,” Villanova Law Review, 56, No. 3 (2011): 573.
 Molly McKew, “Did Russia Affect the 2016 Election? It’s Now Undeniable,” WIRED, February 16, 2018.
 Mark Mardell, “A Market-Moving Fake Tweet and Twitter’s Trust Issue,” BBC World: U.S. and Canada, April 24, 2013.
 Nye, “Deterrence and Dissuasion,” 48.
 Argument constructed by Priyanka R. Dev, “’Use of Force’ and ‘Armed Attack’ Thresholds in Cyber Conflict: The Looming Definitional Gaps and the Growing Need for Formal U.N. Response,” Texas International Law Journal, 50, no. 2 (2015): 381-401.
 Clarke and Knake, Cyber War: The Next Threat, 189.
 Clorinda Trujillo, “The Limits of Cyberspace Deterrence,” Joint Force Quarterly, 75 (4th Quarter 2014): 47-48.
 Nicole Perlroth and David E. Sanger, “North Korea Loses Its Link to the Internet,” The New York Times, December 22, 2014.
 Perlroth and Sanger, “North Korea.”
 Dustin Volz and Timothy Gardner, “In a first, U.S. blames Russia for cyberattacks on energy grid,” Reuters, March 15, 2018.
 David Ferris, “Russia’s Power Grid, Held Together with Spit and Grit,” E&E News.
 Trujillo, “Cyberspace Deterrence,” 48.
 James P. Farwell & Rafal Rohozinski, “Stuxnet and the Future of Cyber War,” Survival: Global Politics and Strategy, 53, no. 1 (2011): 24.
 Farwell and Rohozinski, “Stuxnet,” 27.
 Farwell and Rohozinski, “Stuxnet,” 24.
 Trujillo, “Cyberspace Deterrence,” 47-49.
 Department of Defense, “Summary: Department of Defense Cyber Strategy,” 2018, accessed on 18 April 2020, https://media.defense.gov/2018/Sep/18/2002041658/-1/-1/1/CYBER_STRATEGY_SUMMARY_FINAL.PDF.
 David E. Sanger and Nicole Perlroth, “U.S. Escalates Online Attacks on Russia’s Power Grid,” New York Times, June 15, 2019, accessed on 25 April 2020, https://www.nytimes.com/2019/06/15/us/politics/trump-cyber-russia-grid.html.
 Sanger and Perlroth, “U.S. Escalates Online Attacks.”