Scammers, Saps, and Saboteurs: The Scam Attack and the New Fifth Column

On Christmas Eve 2025, Russian police officers on the outskirts of Moscow noticed a young man sitting in a parked car near a police station. Several officers approached the car, which abruptly exploded. Three people were killed, including the bomber.
This young man was neither a grimly determined anti-war activist nor an Islamic terrorist. Rather, he appears to have been acting on the orders of a phone scammer, carrying out the most extreme example of a novel tactic of hybrid warfare, which I refer to as the scam attack. The goal of this piece is to shed light on a woefully underdiscussed phenomenon and to explore its implications.
The scam attack is an inexpensive hybrid warfare tactic that exploits modern society’s ready availability of personal information and cultural narratives to manipulate unwitting individuals into harming their country. Its only confirmed usage has been against Russia, very likely carried out or assisted by agents of the Ukrainian government.
The tactic combines financial leverage, patriotic narrative manipulation, and the threat of criminal charges to recruit targets. Moving forward, artificial intelligence will likely enable low-cost scaling of the scam attack and expand both the number of potential users and potential targets.
Scam Attacks on the Rise
Based on open-source data gathered by the independent Russian media outlet Mediazona, the first known scam attack was carried out in August 2022, six months after the start of Russia’s full-scale invasion of Ukraine. The number of such attacks remained low until the second half of 2023, when twenty attacks were recorded in both July and August. December 2024 saw an even sharper peak: 65 scam attack arsons were documented between December 13 and 27. As of January 2025, 187 scam attacks had been reported, mostly arson.
2025 saw a massive escalation in the severity of acts committed by victims of scam attacks. In August 2025, five elderly women were convinced or coerced into bombing military facilities in a plot disrupted by Russian security services. The Christmas Eve car bombing probably marked the first time such a plan succeeded; it was followed by another bombing in February 2026 that killed a policeman and the bomber. Scam attackers have also probably played a role in organizing attacks against telecommunications infrastructure since early 2026.
The Tradecraft of the Scam Attack
Scam attackers utilize three methods: financial leverage, patriotic manipulation, and the threat of criminal prosecution. In the first, perpetrators trick targets into sending them large amounts of money, usually by claiming to offer a business opportunity. Once the money is sent, the scammer offers to return it if the target carries out a task.
In patriotic narrative manipulation, the scammer contacts the target, claiming to be an officer of Russian security services. Swearing the target to secrecy, the “officer” directs them to carry out a mission. Up through the most recent escalation to bombings, these missions consisted of burning down a military recruitment center. The apparent contradiction of a state officer ordering an attack on a state facility was explained by the claim that the building in question was being used by Ukrainian security services.
This escalation reflects an arms race conducted in the context of Russia’s war in Ukraine. Russia has run its own phone campaign against Ukraine, which relies primarily on monetary incentives. The first public reports of Russian attempts to have Ukrainian civilians conduct bombings predate those in Russia, suggesting that the shift to lethal scam attacks in Russia may be the latest step in this chain of escalation.
With the lethal nature of more recent attacks, targets are being deceived about the true nature of their task. One target, who was instructed to smuggle a package into a building occupied by Russian security services in Crimea, claimed that he believed it was surveillance equipment which would be used to discover traitors within the organization.
At first, it seems unlikely that anyone could be convinced to carry out such measures based on phone calls and text messages from someone claiming to be a government official. But the scam attack utilizes and subverts narratives that resonate strongly in the Russian context.
Patriotic narrative manipulation preys on Russia’s strong cultural tendency to trust and respect the state, especially security organs. In September 2024, 63% of Russian respondents agreed that national security organs should be trusted. 69% said the same about the army, and 80% believed the president should be trusted. This trust has long been a powerful asset for the Kremlin, both under the Soviet Union and the Russian Federation. Citizens with a greater level of trust in the security apparatus are more deferential, allowing the state greater freedom of action at the expense of personal liberties.
This narrative aids the state, and particularly the security services, in numerous ways. Large budgets for defense and internal security, which come at the expense of healthcare and education, can be easily justified in vague terms. Concerns about the morality of a war or the way the state is pursuing it can be brushed aside by the state. In many cases, the internalization of this narrative means that individuals suppress their own concerns, saving the Kremlin the trouble.
Scam attackers have discovered that they can hijack the power of this narrative by convincing targets that they are agents of the state. Doing so is made easier by the vast expansion of the security state since 2022, which has brought officers of the security state into closer and more frequent contact with the population. This environment makes the act of having a security officer contact a random citizen by phone and assign them a mission seem less absurd.
Scam attacks also play on the narrative that the state is under perpetual assault from within and without. This narrative has gained credence as Ukrainian intelligence services strike deep into Russian territory. If the shadowy hand does lurk in every alley and corner, as the Kremlin says, then the idea that the local recruitment center is an outpost of Ukrainian fascists becomes plausible.
This is the key to the scam attack. As rudimentary as it appears, it weaponizes pro-state narratives against the Kremlin. Effective countermeasures will, therefore, be challenging to develop. Russia has attempted to spread knowledge of the scam attack and to nullify individual tactics. But the wider narratives that make scam attacks feasible are amplified by the Kremlin itself and cannot be fully neutralized. In fact, doing so would harm the state. Were the Kremlin to successfully counter these narratives, it would likely find it exponentially harder to sustain support for the current war in Ukraine, the near-omnipresent security state, or the privations that have so far been endured in the name of unspecified threats.
The threat of criminal prosecution is the newest approach by scam attackers. The scammer claims that the target’s identity has been stolen and that a loan in their name was taken out, with the funds given to the Ukrainian military. Targets are told that, if they do not cooperate, “evidence” of their donation will be provided to the authorities. Donating money to Ukrainian groups is a serious crime in Russia, which the Kremlin eagerly prosecutes. In 2024, a Russian-American dual citizen was sentenced to 12 years in prison for a $51.80 donation to a Ukrainian humanitarian charity.
Targeting is the process by which victims are identified. Variables that inform selection include age, gender, financial situation, political beliefs, and life experiences. Age is especially prominent, as targets have disproportionately been elderly. A wide range of targeting variables contributes to the creation of tailor-made messages. After targets are selected, contact is made and a relationship is built to convince the target to carry out a criminal act. This can be labor intensive: one 82-year-old victim spoke with multiple scammers 155 times over nine days. The amount of labor required suggests that such attacks are generally part of a coordinated effort rather than an elaborate prank conducted by groups of teenagers. At the same time, the ease of conducting these operations means that any given attack cannot be confidently attributed to Ukrainian authorities, which complicates assessments of how the scam attack is evolving.
Once trust is established, the target is deployed to carry out an attack. The way relationship building is conducted will vary depending on the approach used. If the scammer claims to be a Russian security officer, the target must be convinced of this. With coercive tactics like financial scams or threats of criminal prosecution, relationship building often consists of repeated threats or providing “evidence” to emphasize and concretize those threats.
Goals, Strengths, and Weaknesses
The initial goals of the scam attack appear to have been psychological and social, aiming to create instability, degrade societal trust, and demonstrate the failures of the government. While attackers did induce targets to commit property damage, especially against government facilities, there is no substantial evidence of strategic damage. The facilities targeted were mostly relatively unimportant local recruitment centers. Attacks have been disproportionately clustered around Moscow and St. Petersburg, which further suggests that the goal is to impact politically sensitive areas, rather than to tangibly decrease recruitment.
The Kremlin has attempted to insulate its population, particularly the affluent residents of major cities, from the impact of the war. Scam attacks function as one way to bring the war to Russian soil and disrupt the state of normalcy. Their potential to turn normal citizens into domestic terrorists could increase suspicion and degrade trust throughout Russian society. Finally, scam attacks serve as a reminder that the state is incapable of keeping its most important cities safe from enemies.
This targeting strategy highlights some of the strengths and weaknesses of the scam attack. In its most basic form, it is inexpensive, requires little to no in-country infrastructure, and reduces the risk to organizers. However, virtual operations suffer from capability constraints, such as a reliance on gullible or manipulable targets, which probably decreases the complexity and impact of attacks.
Such targets are not entirely ineffective, however, as the recent escalation to bombings and organized murder shows. Whereas arsonists have been instructed to make Molotov cocktails themselves, recent scam attack bombings have involved the unwitting attacker retrieving explosives from hidden caches. A murder committed on March 13 displayed significant coordination: scammers coerced a 20-year-old soccer player into entering a woman’s house, robbing it, and killing the occupant, while separately convincing the woman’s 16-year-old daughter that the man was a law enforcement officer who should be allowed entry.
Such developments suggest increasing sophistication, allowing scammers to convince targets of ever more extreme things, as well as the integration of scam attacks with in-country networks of assets. This has expanded the potential for such attacks to damage valuable physical assets.
Technology as an Enabler
Another limitation of the scam attack’s operational construct is the need for a large number of personnel who are fluent speakers of the target language. Some level of proficiency is of course necessary to communicate at all, but fluency is required to convince someone that the scammer is a government official.
The increasing capacity and availability of AI models will probably mitigate some of these problems. Traditional scammers have increasingly made use of AI voice emulators, which have been used for scams as far back as 2019. These tools create speech in a wide variety of languages and can proficiently mimic the idiosyncrasies and mannerisms of a specific voice. The continued advancement of such technologies will allow a far smaller number of personnel to conduct scam attacks in a wide variety of languages, without being bottlenecked by a shortage of foreign language speakers. As AI models become more proficient at translation, it may well be possible for an individual with no proficiency in the target language to conduct a scam attack, particularly via text.
AI could vastly enhance other parts of the process and lower the barriers to entry. Targeting could be made both cheaper and more precise through AI data scraping and machine learning. This would enable the rapid collection of tremendous amounts of personal data, the analysis of that data to identify the individuals most vulnerable to a scam attack, and the identification of which narratives are most effective for particular groups.
The collective impact of such technologies would likely make it possible for a wider variety of organizations to conduct scam attacks on a greater scale, more effectively, and without the need for personnel with language expertise.
Implications for the United States
The US is vulnerable to scam attacks for many of the same reasons it is vulnerable to conventional scams. Americans tend to overestimate their ability to detect scams and underestimate their vulnerability to common tactics. Recent industry surveys suggest that 34% of Americans experienced financial fraud or a scam in 2024. Over a third of that group admitted to losing money in a scam, equating to more than 30 million people.
In the US, the viability of threats of criminal prosecution or claims that the scammer is an agent of the state is unclear. American laws and courts are less draconian than their Russian counterparts, and the United States lacks an oppressive security state, is not engaged in a full-scale war, and has no comparable history of imprisoning people for speech. This might reduce the chances that a random caller claiming to have evidence of a crime could get a US citizen to commit property damage or violence. But the chance is not zero. Comparatively low trust in the government among Americans would also likely reduce the willingness to take an “officer’s” word for it and carry out an operation.
However, scammers have exhibited tenacious adaptability with their narratives. It is easy to imagine the creation of narratives that play on themes which particularly resonate in the US. Tailored targeting, powered by easy access to vast quantities of personal information, is feasible in the US and enables carefully crafted narratives that appeal to the unique blend of demographic and political characteristics of a target.
As seen in Russia, the impact of scam attacks would be principally psychological, particularly without the involvement of in-country assets. However, the damage that mass scam attacks could do to American social cohesion and stability, especially at a time of massive polarization, should not be underestimated.
Countermeasures
It is likely impossible to fully neutralize concerted scam attack campaigns. Despite decades of efforts by governments, advocacy groups, and corporations, traditional financial scams have persisted and adapted.
This does not mean that nothing can be done. Public information campaigns targeting the narratives likely to be used by scam attackers should be organized to minimize the vulnerable population. Such campaigns should address the known weaknesses of public messaging, such as individuals’ tendency to underestimate their own vulnerability and the rapid drop-off in awareness over time. The substantial overlap in infrastructure and expertise between traditional anti-scam campaigns and the tools likely to help combat scam attacks should reduce the cost of countermeasures.
Another potential countermeasure would be increased data privacy. This would reduce the ease with which scam attackers can obtain personal data, which would reduce the precision of targeting, increase the cost, and reduce the efficacy of scam attacks.
Unfortunately, the steps taken by the Trump Administration to this point reflect priorities at odds with such measures. The administration has significantly cut the funding and staffing of the Consumer Financial Protection Bureau, the leading government body against scams. Similarly, the focus on pro-business policies and the strong influence of large technology companies within the administration make it unlikely that new data privacy regulations will be forthcoming.
Conclusion
Regardless of whether the current administration reverses course, there is increasing evidence that unregulated advanced technologies create opportunities for adversaries to harm the US through its vulnerable citizens. The scam attack may be the most eye-catching iteration, but it is far from the only one.
Whether our adversaries adopt such methods against us now or in the near future, it will fall to the US government to combat them. It would be a wasted opportunity to wait until buildings catch fire and cars blow up to prepare.
The author would like to acknowledge and thank Tye Walden and Thomas Lattanzio for their time and contributions during the writing process.