Small Wars Journal

The Art of Digital Deception: Getting Left of Bang on Deepfakes

Tue, 04/23/2019 - 9:13pm

Scott Padgett

This article explores how state actors are using advanced software development tools and artificial intelligence (AI) to invent and perfect new deception capabilities to fool both people and machines on the virtual battlefield.  It examines intelligent computer vision systems and their capabilities to support state-sponsored hybrid warfare.  We explore these capabilities in the context of two Russian disinformation campaigns, specifically Ukraine in 2014, and Venezuela in 2019.  We then offer innovative concepts to mitigate these emerging enemy capabilities and threats. 

The first adversarial threat is "deepfakes," a technique for human image and audio synthesis based on AI. It combines and superimposes existing images and video onto source images or video using a machine learning technique called a "generative adversarial network" (GAN). Deepfakes are being weaponized to support hybrid warfare by deceiving and fooling people. The second adversarial threat uses AI to manipulate and distort audiovisual content before it is analyzed, essentially using machines to fool machines.
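
For readers unfamiliar with how a GAN works, the toy sketch below shows the core idea in PyTorch: a generator network learns to produce synthetic samples while a discriminator network learns to tell them apart from real data, and the two improve by competing. The network sizes and the random "real" data are illustrative assumptions only; this is not the architecture of any actual deepfake tool.

```python
# Toy GAN sketch (PyTorch): a generator learns to produce synthetic samples,
# a discriminator learns to tell them apart from real data, and both improve
# by competing.  Network sizes and the random "real" data are illustrative only.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32   # arbitrary sizes for illustration

generator = nn.Sequential(                 # noise -> synthetic sample
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh())

discriminator = nn.Sequential(             # sample -> probability it is real
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(batch, data_dim)    # stand-in for real training images
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: produce samples the discriminator labels as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```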

Examining Disinformation Campaigns & the Rise of “Deepfakes”

Our first adversarial example explores how our enemies are using AI to accelerate the creation of synthetically generated media known as "deepfakes." Deepfakes are synthetically altered videos that appear convincingly real in order to deceive people. The technique originated in the adult film industry, where video and audio production software tools synthetically altered adult films with the faces and voices of famous actors and actresses so they appeared convincingly real to humans. These same technologies are now being adapted to conduct hybrid warfare.

In 2014, the Russian Internet Research Agency (IRA), also known as the “St. Petersburg Troll Factory,” began a series of disinformation campaigns targeting the US population. Hundreds of Russian IRA employees manually manufactured thousands of fake and stolen social media accounts. Using these fake accounts, they created millions of false dialogues, generating fake echo chambers. The Russians were perfecting their astroturfing hybrid warfare tradecraft.[i] The intent was to attract and deceive popular news media and actual US citizens into participating in these fake online narratives. The Russians were refining their disinformation capabilities to drive a wedge between US citizens and their government, using these echo chambers to create mistrust and a type of herd mentality.

From 2014 and throughout the US Presidential elections in 2016, the IRA continued to mature, perfect and scale its tradecraft. The IRA typically ran a new disinformation campaign about every two months, targeting major socially and politically charged events that could spur a grassroots movement across the United States. The IRA recycled most of its fabricated social media content, accounts and echo chambers to conduct digital swarming techniques. With each new disinformation campaign, the IRA grew more advanced.

In response to these threats, a myriad of advanced software analytics and AI forensic tools were employed to investigate who specifically was running and sponsoring these Russian disinformation campaigns. The investigations resulted in grand jury indictments of thirteen Russians and three Russian companies in February 2018. However, these indictments have not deterred Russia from conducting its hybrid warfare operations.

Fast forward to 2019, and we see an increase in the number of new Russian-sponsored organizations promoting disinformation as standard warfare practice. They are now employing AI to automate, accelerate and scale their synthetically generated media content to support disinformation campaigns as part of their hybrid warfare doctrine. As recently as February 2019, we see evidence of Russia conducting in Venezuela the same hybrid warfare techniques it used in Ukraine a few years ago. With global events like Venezuela and the upcoming European and US elections, we can expect a massive spike in the number of deepfakes and synthetic “grassroots campaigns” (astroturfing) designed to sway human opinion and election results. These capabilities pose a serious threat to US and allied interests as state actors continue to perfect their hybrid warfare strategies supported by AI. Once synthetically generated media content goes mainstream and human population centers accept false narratives as fact, the damage is already done. How do the US and its allies get left of bang on deepfakes? We will explore this by examining two Russian disinformation use cases: Ukraine 2014 and Venezuela 2019.

2014 Ukraine Revolution

The Russian ideological takeover of Crimea took only a few days. The Russians sent in a "destabilization cell” responsible for synchronizing disinformation activities and fake protests in the cities of Simferopol and Sevastopol. Destabilization actors were recruited and trained in Russia and then brought over. These usually consisted of paid actors known as "grey men and women” who rehearsed their pro-Russian narratives. These actors would then appear under different personas in Russian news interviews and editorials to drive disinformation momentum.

If the Russians had intelligent computer vision systems to manipulate and synthesize audiovisual content in 2014, how would they most likely employ them and to what benefit? 

The Russians would have avoided using real human actors and instead digitally fabricated synthetic characters and mock events. The intelligent computer vision approach is cheaper, more scalable and carries less operational risk on foreign soil. News video of protests could be digitally altered to show thousands of protesters instead of hundreds. Prominent adversarial politicians could be superimposed into the crowds as easily recognized protesters. Opposition leader comments in fake news interviews could be fabricated to alter opinions and facts. Instead of using mockups, they could digitally create synthetic military engagements that never happened on the kinetic battlefield. The volume and scale at which fake media content can be created using AI are endless.

2019 Venezuela

In the past five years, the Russian disinformation playbook has not changed, but the technology has. In 2019, we see Russia leveraging the same disinformation “concept of operations” (CONOPS) in Venezuela. Twitter is the most popular social network in Venezuela, and it remains Russia’s favorite social media network for conducting hybrid warfare. In January 2019, Twitter closed thousands of accounts associated with suspicious activity resembling that of the St. Petersburg Troll Factory. A case study by EUvsDisinfo described the campaign’s most effective narrative:

“The most striking success of the Russian disinformation campaign is the impact of the Trojan Horse Narrative. The first case of describing the US and EU humanitarian aid to Venezuela as a Trojan Horse, in Spanish "Caballo de Troya," appeared in Spanish language networks.  The metaphor of the Trojan Horse is neither new nor original, but it is a hard-working image that resonates well with a narrative of a besieged Venezuela. The term “Caballo de Troya" was eventually used in a mere 3.4 percent of all tweets, related to US humanitarian aid in Spanish, but the metaphor proved to be a sturdy vessel for an anti-American narrative created by the Russians.  A week later, this term appears in other languages, and later – in the headlines of media outlets. On 22 February, Evo Morales, the President of Bolivia, uses the term in a statement quoted by RT, denouncing US humanitarian aid as a Trojan Horse. The same day, the BBC published an article with the headline “Venezuela aid: Genuine help or Trojan Horse?”

This way, an unfounded claim that US humanitarian aid could be a cover for supplying weapons to the opposition has migrated from the social networks to reputable media outlets. Nothing in the BBC article supports the claims of the US employing humanitarian missions for malign purposes, but the concept has reached the headlines of major international media, known for trustworthy quality reporting.  The narrative of the Trojan Horse in references to US and International humanitarian aid for Venezuela is currently used in the title of 3,600 articles in Spanish, from both reputable and disinforming media outlets.  In this context it is worth recalling that Russia used “humanitarian convoys” to deliver military support to armed separatists in Eastern Ukraine in the summer of 2014, effectively halting Ukrainian Government restoration of power on separatist-held areas.

It can be noted that pro-Kremlin narratives changed dramatically in the latter part of February 2019. On 19 February, RT’s Spanish service reported that Russia offered humanitarian aid to Venezuela. On 22 February, RT’s Russian service reported that 7.5 tonnes of aid had arrived. But that same day, RT English service suddenly claimed that there is no food shortage in the country – its report came from a supermarket full of food. In this way, the Trojan Horse narrative becomes even more compelling: why would the International community send humanitarian aid to Venezuela, if the shops are full of food?

The analysis demonstrates the power of metaphors when a narrative is created. International aid, delivered from not only the US, but from EU and EU Member States are in this case efficiently branded as an aggressive act, a Trojan Horse, against Venezuela. Discrediting International aid through unfounded claims, hurts the people of Venezuela. Russian disinformation outlets support this narrative, thus worsening the humanitarian situation in the country.”[ii]

What are some of the ways that US forces and our allies can mitigate the impacts of disinformation campaigns and prepare for the emergence of deepfakes as part of hybrid warfare in the future?

We are seeing the emergence of commercial companies developing deep learning technologies to automatically detect deepfakes. One such company is “Deeptrace.” “Deeptrace is developing deep learning technologies to detect deepfakes hidden in plain sight and authenticate audiovisual media that has been manipulated.”[iii] This includes identifying the type of AI software used and providing graphical indicators (colored boxes) to show humans precisely what has been altered or changed in the content. We envision integrating deepfake detection software into large open-source intelligence (OSINT) data collection pipelines (real-time streaming data) to alert warfighters when synthetically created media is present, as sketched below. This capability would serve as a virtual tripwire in cyberspace, detecting emerging disinformation campaigns at the tactical, operational and strategic levels affecting US and allied commanders. Digital forensic evidence of whether media content is real or fake is a potent weapon to counter disinformation threats on the digital battlefields of the future.
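
A minimal sketch of what such a virtual tripwire could look like follows. The function and type names (MediaItem, detect_deepfake, send_alert) are hypothetical placeholders for whatever detection model or vendor service and alerting channel a command actually uses; this is not Deeptrace's API.

```python
# Hypothetical "virtual tripwire": score media from an OSINT collection stream
# and alert an analyst when synthetic content is suspected.  The detection and
# alerting functions are placeholders, not any vendor's actual API.
from dataclasses import dataclass
from typing import Iterable

@dataclass
class MediaItem:
    source_url: str
    payload: bytes                     # raw image or video bytes

def detect_deepfake(item: MediaItem) -> float:
    """Placeholder: return the probability [0, 1] that the media is synthetic."""
    raise NotImplementedError("plug in a trained detection model or vendor service")

def send_alert(message: str) -> None:
    print(f"[TRIPWIRE] {message}")     # stand-in for a real alerting channel

def tripwire(stream: Iterable[MediaItem], threshold: float = 0.8) -> None:
    """Flag every item whose synthetic-media score crosses the threshold."""
    for item in stream:
        score = detect_deepfake(item)
        if score >= threshold:
            send_alert(f"Possible synthetic media ({score:.0%}) from {item.source_url}")
```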

Using AI to Trick AI – Digital Manipulation & Cloaking Techniques

Our second adversarial example uses AI to manipulate digital content to deceive other machines.  This example uses AI to manipulate images before they are received and processed by visual classifier services, enabling our enemies to hide in plain digital sight.   
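
The article does not specify which manipulation technique adversaries use, but one well-documented published method, the Fast Gradient Sign Method (FGSM), illustrates the idea: a nearly invisible perturbation computed from a model's own gradients can change a visual classifier's output. The PyTorch sketch below is a generic illustration under that assumption, not a description of any specific adversary capability.

```python
# Fast Gradient Sign Method (FGSM), one published evasion technique: nudge each
# pixel slightly in the direction that increases the classifier's loss, producing
# an image that looks unchanged to a human but can be misclassified by the model.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, true_labels, epsilon=0.01):
    """images: batched tensor scaled to [0, 1]; true_labels: integer class labels."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), true_labels)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient.
    perturbed = images + epsilon * images.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

To a human observer the perturbed image is indistinguishable from the original, but the classifier may now report a different label, which is the "machines fooling machines" effect described above.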

Colonel (Retired) Stefan Banach, a leader in information-warfare (iWar) and advanced design thinking defines “Synthetic Soldier Immunity” (SSI) as a concept, “derived from the biological immune system of humans and is extrapolated into the Virtual Warfare construct in an attempt to protect Soldiers.  SSI conceptually consists of three layers of protection that will help ensure combatant survivability on modern and future battlefields.  SSI begins with the creation of Innate Immunity, a type of general base-line synthetic protection.  Innate immunity requires the development of enduring robust synthetic external barriers which are the first line of defense against multispectral threats that are used to find and target a Soldier.  Adaptive (or active) Immunity is the second layer of protection.  It is developed through AI/ML and provides cloaking, spoofing, cloning, Bot warfare, electronic and signal intelligence diversionary signatures, and virtual avatars - with physical footprints, to prevent Soldier detection.  Adaptive Immunity constantly exposes threats and administers patches that protect Soldiers; and delivers viruses that disable threat detection and blind targeting systems.  Passive Immunity is a capability that is "borrowed" from an external source and intentionally lasts for a short period of time.  Passive Immunity results from the introduction of synthetic code and malware from external entities around the world, to combat a specific new short duration threat to a Soldier.  The development of the SSI concept, equipment fielding, and training is an acute US Army capability requirement given what threat actors are doing on regional battlefields today."[iv]  The bottom line is that our warfighters must secure and confirm the integrity of their digital media data before our friendly AI classifiers run their analysis.

Another variation of this adversarial threat uses advanced visual effects software supported by AI to synthetically alter images and video in real time. “While the result of visual effects work is glamorous (think Game of Thrones), much of the back-end work is mundane (e.g., removing people from shots). The research team is making this work easier for everyone by allowing the user to select and delete unwanted elements in a moving video and filling the background intelligently. The final product is a video that removes unwanted elements.”[v] Imagine if unsecured full-motion video sensors on the battlefield or monitoring military installations were compromised and the streaming audiovisual data were replaced with manipulated video to cloak the presence of enemy intruders.
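
As a crude illustration of the underlying idea, the sketch below erases a fixed region from each frame of a video and fills it from surrounding pixels using classical OpenCV inpainting. Real visual-effects and video-inpainting systems use learned models with object tracking and temporal consistency; the file names and region coordinates here are hypothetical.

```python
# Crude per-frame analogue of object removal: erase a fixed region from each frame
# and fill it from surrounding pixels with classical OpenCV inpainting.  File names
# and the erased region are hypothetical; real systems track the object and use
# learned video-inpainting models for temporal consistency.
import cv2
import numpy as np

cap = cv2.VideoCapture("input_feed.mp4")        # hypothetical input file
writer = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    mask[100:220, 300:420] = 255                # hypothetical region to remove
    cleaned = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
    if writer is None:
        h, w = cleaned.shape[:2]
        writer = cv2.VideoWriter("output_feed.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"), 30.0, (w, h))
    writer.write(cleaned)

cap.release()
if writer is not None:
    writer.release()
```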

What are some of the ways we can prevent the interception and manipulation of AI visual classifiers and counter digital stealth capabilities? The first line of defense is securing the sensors and networks that transmit collected and processed data. The second line of defense is putting an authentication mechanism at the audiovisual sensor source. One solution lies in "cryptographic authentication" in the form of “blockchain” technology embedded in the content at the source, at the time the image or video footage is captured. Blockchain software can be used to determine whether media content has been manipulated or compromised. At user-determined intervals, the blockchain platform generates embedded hashes that are indelibly recorded into the media content at the point of collection. This is the same methodology used to authenticate digital currencies like Bitcoin.
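
Before looking at a commercial implementation, the following simplified sketch shows the hash-at-collection idea: each interval of captured footage is hashed together with the previous hash, forming a chain that later tampering would break. A local list stands in for the external blockchain ledger (e.g., Ethereum) a real deployment would use; the interval handling and field names are illustrative assumptions.

```python
# Simplified hash-at-collection sketch: each interval of captured footage is hashed
# together with the previous hash, forming a chain that later tampering would break.
# A local list stands in for the external blockchain ledger a real deployment would use.
import hashlib
import time

ledger = []                        # stand-in for a tamper-evident external ledger
prev_hash = b"\x00" * 32           # genesis value for the chain

def record_segment(segment_bytes: bytes) -> None:
    """Hash one interval of footage, chained to the previous record, and log it."""
    global prev_hash
    digest = hashlib.sha256(prev_hash + segment_bytes).digest()
    ledger.append({"timestamp": time.time(), "hash": digest.hex()})
    prev_hash = digest
```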

One such company is “Amber Authenticate.” “It is built on the popular open-source blockchain platform Ethereum, and includes a web platform that makes it easy to visually understand which parts of a video clip have hashes that match the originals stored on the blockchain and which, if any, don't. A green frame around the footage as it plays indicates a match, while a red frame takes its place for any portion with a mismatched hash. Below the video player, Amber also shows a detailed "audit trail" that lists when a file was originally created, uploaded, hashed, and submitted to the blockchain.”[vi]

If a warfighter runs collected video footage through the blockchain verification algorithm and the hashes do not match, the warfighter is tipped off that something has changed and the content may be compromised. We are starting to see the emergence of commercial companies developing video cameras with embedded blockchain software to secure in-transit and streaming data for AI analysis.
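
A matching verification sketch is shown below: the receiver recomputes the hash chain over the footage it actually holds and compares it to the hashes recorded at collection, flagging the first segment where they diverge. It assumes the same chaining scheme as the earlier sketch and is illustrative only.

```python
# Verification sketch: recompute the hash chain over the received footage and compare
# it to the hashes recorded at collection; any divergence flags possible tampering.
import hashlib

def verify_segments(segments, recorded_hashes):
    """segments: list of bytes per interval; recorded_hashes: hex digests from the ledger."""
    prev_hash = b"\x00" * 32
    for i, (segment, expected) in enumerate(zip(segments, recorded_hashes)):
        digest = hashlib.sha256(prev_hash + segment).digest()
        if digest.hex() != expected:
            return f"Segment {i}: hash mismatch - content may be compromised"
        prev_hash = digest
    return "All segments match the recorded hashes"
```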

Conclusion

Advances in artificial intelligence vision technologies and software development tools are being adopted by our enemies to support their hybrid warfare capabilities. We examined two areas in which intelligent computer vision services can be used to fool humans and machines and deceive us on the battlefields of the future. The US and our allies must remain vigilant and focus on emerging technologies that sit at the convergence of disinformation campaigns, AI and big data in order to anticipate emerging threats and invest in capabilities that mitigate their risks.

The viewpoints, comments, and opinions provided in this article are strictly those of the author.  None of the content contained in this document is in any way an endorsement or representation of the US Federal Government, any Government agency, former employer, or any commercial company, product or service(s).

End Notes

[i] "Astroturfing" was first coined by U.S. Senator Lloyd Bentsen of Texas in 1985.  It is when companies or individuals mask their motives by putting it under the guise of a grassroots movement. 

[ii] “Twitter as an Information Battlefield – Venezuela; A Case Study,” https://euvsdisinfo.eu/twitter-as-an-information-battlefield-venezuela-a-case-study/

About the Author(s)

Lieutenant Colonel (Retired) Scott Padgett has been an information technology executive supporting Federal Government customers for over twenty years. He retired from the IBM Corporation in 2018 as the Director of Artificial Intelligence and Cloud Computing supporting the US Intelligence Community and the DoD. He retired from the US Army in 2007 with twenty-four years of service, having served in distinguished assignments including the 101st Air Assault Division, 75th Ranger Regiment, the Joint Staff and Operation Enduring Freedom (OEF). He is a Soldier for life! Sua Sponte.