Social Media Manipulation for Sale: 2025 NATO Experiment on Platform Capabilities to Detect and Counter Inauthentic Social Media Engagement

Social Media Manipulation for Sale: 2025 Experiment on Platform Capabilities to Detect and Counter Inauthentic Social Media Engagement is a report published by the NATO Strategic Communications Centre of Excellence on January 30, 2026.
“This sixth annual evaluation of social media, conducted since 2019, tests the resilience of major social media platforms against manipulation by commercial service providers. The experiment measures platforms’ ability to detect and remove inauthentic engagement that is commercially purchased for deliberately created inauthentic posts in non-political scenarios.”
Download the report here.
The NATO Strategic Communications Centre of Excellence has this to say about it on LinkedIn:
The global market for digital influence has quietly matured, and its entry barriers are alarmingly low.
We have just released our latest experiment on social media manipulation, offering the most comprehensive evidence to date of how commercially available inauthentic engagement continues to exploit systemic platform vulnerabilities. Despite regulatory advances and improved enforcement, the study shows that manipulation remains cheap, accessible, and increasingly sophisticated, with AI-enabled bots now blending seamlessly into real conversations rather than acting as obvious spam.
🔎 The Experiment: In 2025 we tested seven major platforms by purchasing fake engagement and advertising, revealing that over 30,000 inauthentic accounts generated more than 100,000 interactions with limited detection. While some platforms improved account and engagement removal, commercial manipulation is still widely available at low cost, including within paid advertising systems. Most concerningly, AI-driven bots now embed themselves into authentic discussions, making manipulation harder to detect and more persuasive.
Key takeaways:
🔹 Manipulation is scalable and affordable: For a few hundred euros, actors can generate tens of thousands of fake interactions, including through ads.
🔹 AI changes the game: Bots are shifting from high-volume spam to low-volume, high-credibility engagement inside real conversations.
🔹 Platform defenses are uneven: Enforcement has improved, but transparency and routine detection remain inconsistent across platforms.
🔹 Follow the money: Cryptocurrency-based payment infrastructures keep the manipulation economy resilient and largely opaque.
🔹 Detection must evolve: Behavioural, cross-platform, and conversation-level analysis is now essential to protect information integrity.

Bottom line: The infrastructure for influence-as-a-service is mature, and defending the digital public sphere now requires systemic, financial, and behavioural countermeasures, not just content moderation.
Read the full experiment report below or follow the link in the comment section!
Authors: Gundars Bergmanis-Korāts, Tetiana Haiduchyk, Bohdan Smolts
Trementum Research
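The "detection must evolve" takeaway points toward behavioural, conversation-level signals rather than content moderation alone. As a purely illustrative sketch, not code or methodology from the report, the following Python snippet shows what one such behavioural signal might look like: flagging accounts whose posting intervals are suspiciously regular. The Account structure, thresholds, and field names are all hypothetical.

```python
"""
Illustrative sketch only: NOT from the NATO StratCom COE report.
It demonstrates one behavioural signal for inauthentic engagement:
machine-like regularity in posting times. All data structures,
thresholds, and names are hypothetical.
"""

from dataclasses import dataclass
from statistics import mean, pstdev
from typing import List


@dataclass
class Account:
    """Hypothetical record: an account ID and its post timestamps (seconds)."""
    account_id: str
    post_times: List[float]


def timing_regularity(post_times: List[float]) -> float:
    """
    Coefficient of variation (stdev / mean) of the gaps between posts.
    Human posting tends to be irregular (high value); scheduled bots
    often post at near-constant intervals (value close to 0).
    """
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return float("inf")  # not enough data to judge
    return pstdev(gaps) / mean(gaps)


def looks_automated(account: Account, cv_threshold: float = 0.1,
                    min_posts: int = 10) -> bool:
    """Flag accounts that post frequently with suspiciously regular timing."""
    if len(account.post_times) < min_posts:
        return False
    return timing_regularity(sorted(account.post_times)) < cv_threshold


if __name__ == "__main__":
    # A bot-like account posting exactly every 60 seconds vs. an irregular human one.
    bot = Account("bot_1", [i * 60.0 for i in range(20)])
    human = Account("human_1", [0, 70, 400, 950, 1000, 2600,
                                2700, 4000, 4100, 5900, 7000])
    for acc in (bot, human):
        print(acc.account_id, "flagged:", looks_automated(acc))
```

In practice, the report's recommendation implies combining many such signals across platforms and conversations; a single timing heuristic like this one is easy for the AI-driven bots described above to evade, which is precisely why cross-platform and conversation-level analysis is presented as essential.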