The Challenge of AI-Enhanced Cognitive Warfare: A Call to Arms for a Cognitive Defense
Specialist Jones was in the motor pool when he received the Facebook post that led to his demise. A Facebook friend he had never met with sent him a video. His wife Carla was engaged in sexual intercourse with Sergeant Martinez. Jones, who is prone to impulsive behavior, became enraged and irrational. He went home, grabbed a hammer, and beat Carla to death. Then he committed suicide. For the Chinese artificial intelligence (AI) system, it was a spectacular success. A well-trained and experienced soldier was neutralized in an hour at a negligible cost. The AI system analyzed the social media history of Jones and his wife, and within 20 seconds, determined that Jones was liable to violence if presented with a deepfake video denoting marital infidelity. The AI gathered social media pictures of Carla and Martinez and made a reasonably accurate assessment of what they might look like naked. This speculative and graphic example demonstrates how emerging AI-enabled cognitive threat systems and methodologies can be employed to exploit systemic personnel vulnerabilities and undermine the health and functionality of America’s warfighters. To date, this cognitive threat receives little, if any, consideration within the DoD.
The advent of AI-driven information warfare weapons is leading a new revolution in military affairs, and the future could be grim unless defensive measures are undertaken. The People’s Liberation Army (PLA) is aggressively pursuing what they call cognitive warfare. The goal is to manipulate a target’s mental functioning in a wide variety of manners to obtain some desirable goal. The Chinese Communist Party Party’s (CCP) philosophy of conflict is evolving to consider the human brain as an operational domain. They envision an integrated system of systems where humans will integrate into and be cognitively enhanced by information technology (IT) systems as some type of transhuman evolutional development. In this theory of victory, war no longer entails the destruction of enemy troop formations on the battlefield. A new form of victory where one’s systems overwhelm, disrupt, paralyze, or destroy the ability of enemy systems to operate at all, let alone engage in a traditional military offensive. This fictional story presents hypothetical scenarios presented here are meant to elucidate the threat that powerful emerging technologies can exploit human biological weakness.
Traditionally, the chain of command has ultimate responsibility for the discipline and welfare of military personnel. However, cognitive warfare requires a different kind of response as military command systems are not currently situated to deal with these new emerging threats. Manipulation of cognitions and emotions is not completely a discipline issue. Nations will have to leverage and bolster their behavioral health systems as never before. First, they need to aggressively invest in scientifically developing new methods for how to identify, prevent, and neutralize the effects of this emergent threat. Second, they will need to train and integrate behavioral health personnel into the chain of command to advise and assist in the implementation of these new methods. This is a very challenging task that will take much time and effort to fulfill. This effort needs to begin post-haste, and it starts with a serious conversation.
Framing the Problem
The structure of the human brain leaves us vulnerable to cognitive biases and heuristical shortcuts that are the default method of thinking. People are being cognitively manipulated into squandering their life savings for a fake romantic enterprise or simple exploitation of their greed with a get-rich-quick scam. Criminals have already discovered how to exploit these vulnerabilities, so national security apparatuses of great power nations could already be far ahead of them. The cyber domain presents a historically unparalleled ability to communicate with and influence the masses of any nation connected with the web. Democracies are especially vulnerable because cyberspace is largely an ungoverned Wild West. Adversarial authoritarian nations like China have a key advantage in the cyber domain. While the ‘great firewall’ can be breached, China is a much harder target. They have prioritized cyber-self-defense.
The main problem at hand is that Western democracies are not prepared to defend themselves during the current great power struggle with the authoritarian nations led by China. Russia has been employing online propaganda in NATO countries to agitate extremists on both sides of the political spectrum to generate political violence and disruption of civil society. Specifically, Russia has been attempting to degrade the integrity of the 2024 election with trolls and bots to influence public sentiment since the previous election. e more resources law enforcement must dedicate towards extremists are resources that cannot be dedicated to important defensive tasks like counterintelligence. The present situation is very precarious, but we must imagine the future. As AI matures, it will magnify adversarial threat capabilities that can maximize the creation of social chaos. Trust is essential for a democracy to function, and AI-enhanced cognitive warfare can erode trust.
Artificial Intelligence: The Great Magnifier
Intelligence assessments of the PLA indicate that they have direct energy sonic or microwave weapons based on evidence from their conflict with India in the Himalayas. These kinetic but ostensibly non-lethal weapons attack neural functioning with inaudible sound waves. They are the paragon of cognitive warfare since they directly attack the neural functioning of adversarial personnel. Think tank analysis of these weapons conclude that they would cause significant permanent physiological damage to victims. However, their employment would be high risk since the same weapons can be easily used on their soldiers as well. Asymmetric non-kinetic systems driven by artificial intelligence can assume a much greater range of potentially covert weapons available to strike an enemy from the front line to the homeland. The most obvious is traditional propaganda.
The Microsoft corporation identified that a Chinese artificial intelligence software program with a feature to create art was engineered to autonomously create anti-American propaganda online (Figures 1 & 2) to exacerbate political polarization. It is pushing this propaganda throughout the social media ecosystem far faster and more effectively than humans could. Two abilities make AI-created propaganda so dangerous. First, AI can experiment in real time by observing human behavior and quickly determine which strategies and tactics are more effective. It can use this information to improve much faster than a human. Second is that it will eventually be able to rapidly customize a propaganda message to a person’s unique psychological vulnerabilities. Russia has already used AI to create personalized disinformation messages about 2024. While these messages are currently artistically crude, in Figure 1 the headline is not bolded so it does not look like a news article. Adversary AI will evolve and become more effective. Artificial intelligence can analyze our digital lives and financial transactions and create a psychological profile or targeting package for almost everyone.
Artificial intelligence analysis of spending patterns collected from purchasable financial data can lead to a relatively accurate assessment of a person’s Big Five personality traits. This information can feed a more precise advertising campaign. Theoretically, if the AI determines that a target has a neurotic personality trait, it could create a propaganda message that triggers severe stress that they are more vulnerable to. Artificial intelligence is still far from being able to diagnose disorders, but it’s rapidly moving in that direction. If a soldier is detected as being depressed, the AI could potentially attempt to provoke a suicide.
Targeting families and friends also presents a ripe cognitive warfare target. For instance, they could use a practice from the Ukraine War whereby they collect facial photographs of deceased soldiers. Comparing the corpse’s face to social media posts can lead to identification, which enables the enemy to send photos of the deceased soldier to their family. Of course, they can always just use a deep fake. The impact of this activity would have a devastating effect on the family, which, of course, erodes morale and creates fear. The rear detachment command and garrison behavioral health team will be overwhelmed. If soldiers on the front line learn about this turmoil, it would have a negative psychological impact. The prudent course of action is to assume this will happen and be prepared to create a defensive strategy. This would include technical solutions to prevent the delivery of propaganda, the development of a mental inoculant, such as inducing psychological reactance against it, and a behavioral health response team that can deal with enemy victories.
Espionage and Subversion
Communist psychological and political warfare has traditionally relied on subversion more intensely than other political systems. The reason is that their philosophy of assuming power attempts to avoid direct intense combat by undermining existing power structures and converting citizens to adopt their ideology. At the right time, the people will rise up and overthrow the existing system, ideally without a deadly war. The CCP still embraces a subversion strategy. For instance, secret police stations have been discovered operating covertly within western countries that serve to spy on and silence the Chinese diaspora as certain western citizens. Artificial intelligence cognitive warfare can greatly expand the scope and scale of this threat. One of the three pillars of a functional democracy is representation. For elected officials to be responsive to their constituents, they must be able to correctly know what they think. Artificial intelligence-driven public opinion warfare can manipulate social media to create confusion and spread misinformation to subvert elected officials’ views of constituent’s opinions. If the civilian populations of democracies are seduced into believing that Ukraine and Taiwan are not worth defending, we render ourselves more vulnerable to invasive tyranny.
Imagine a pervasive honey pot trap increasing insider espionage and subversion threats. Except this time, counter-intelligence investigations are hampered by the physical absence of human intelligence agents, which are usually required to recruit and manage traitors. The Indian Army is already developing AI to protect its soldiers from enemy AI honey traps.
This novel type of attack could manifest as AI girlfriends or boyfriends specifically targeting people vulnerable to these pseudo-relationships. Chinese IT companies are alleged to be earning up to a billion dollars in profits by successfully marketing romantic AI partner to their citizens. Bonding with AI mates is possible through a phenomenon called para-social relationships. They are traditionally described as one-sided, unreciprocated relationships that individuals typically develop with media personalities that are not based on actual interactions or mutual communication between the individual and the media personality. However, recent research has discovered that the same process happens with AI chatbots.
The crux of the scam is that the AI mate initiates a loving relationship with you, and as you bond over time, the AI starts demanding money to maintain this relationship. A study by tech company Mozilla discovered that AI mates harvest enormous amounts of highly sensitive personal data from their customers who are emotionally vulnerable. This information can include passwords, deep personal fears, and confidential workplace information. Furthermore, many of these companies sell the information they collect.
Lonely government workers can be manipulated by their AI mates into sharing national secrets. This can be achieved through overt manipulation, but referencing the Mozilla study, it can be done by seducing the target through intimate conversation. They could be manipulated to sabotage national security systems or government operations. Ordinary people could theoretically be radicalized into joining extremist groups that commit hate crimes to overwhelm the police. The incidental effect is also that the targets can be emotionally traumatized by the AI mate. In 2021, a depressed Belgian man was persuaded to commit suicide by his AI girlfriend after six weeks of conversation, and this is not an isolated case. This is a force health protection issue for both uniformed and government civilian workforces. Psychologically neutralizing vulnerable members of the team is cognitive warfare par excellence. We only discussed a few potential risks presented by AI-powered Chinese cognitive warfare, but these provide a clarion call that national security systems need to start developing preventative and defensive strategies.
A Call to Action
Currently, only a few voices are engaging the public about the current and potential future threats posed by malignant actors empowered by AI. Much of the discussion revolves around defending systems, with little discussion on defending people. The first step needs to be amplifying the intensity and scope of this conversation. Threats will not be countered if there is no consensus that they are threats. Hopefully, this venue will develop its strains of debate about these threats.
Discussion of countering these threats focuses on technological solutions, including developing AI to counter adversarial AI While technological solutions are necessary and will be effective, they are not sufficient. The Chinese, Russians, and others will develop methods of defeating or circumventing our defensive technologies. They will succeed at some cognitive warfare attacks upon us despite our best efforts. Furthermore, our defensive umbrella will need to cover less advanced allies like the Philippines, which lacks the infrastructure to defend itself. Keeping allied societies stable is necessary to keep them in the fight. Thus, this problem requires an international all-hands-on-deck approach encompassing all free societies.
Our defensive strategy must be expanded to include behavioral and communication efforts to defeat the threat at the psychological level. Potential targets must be effectively educated to recognize attacks. They must be given psychological tactics to protect themselves. Social science needs to be drafted into this conflict. Some current counter-persuasion strategies, like inoculation theory, offer a solid starting point, but none of them are foolproof. They require more expansion and development to offer more comprehensive solutions. More research is needed to develop the scope and scale of defensive tools available.
Militaries that have behavioral scientists need to be leveraged to develop and implement cognitive defenses. Behavioral health specialists need to be trained and organized to cope with this threat. However, the civilian population is a prime target as well, and one that few democracies have the infrastructure to adequately defend. At the national level, behavioral and medical scientists need to be pulled into this effort and integrated at the center of this problem set. If the general population is unable to resist psychological attacks, defense against tyranny will fail. Threat technologies will succeed or fail based on their ability to influence the human brain. Thus, cognitive defenses are arguably more essential than technological solutions. Let the discussion begin!