
SWJ Interview with Dr. Steve Tatham: Will an AI-centric Social Media Take Us Further and Quicker Towards the 1984 Society?

05.02.2025 at 06:00am

This is a Small Wars Journal question-and-answer session with Dr. Steve Tatham. Dr. Tatham is one of the most recognized and authoritative experts on Information and Psychological Operations (PSYOPs). His most recent book is Information Operations: Facts Fakes Conspiracists, published by Howgate Publishing last year.


Manea: What are the convergences and divergences between the Russian and Chinese modus operandi in IO (Influence Operations/Information Operations)?

Tatham: The West has long studied Russia. The Cold War sustained vibrant linguistic and analysis education and training programs across most NATO nations. Whilst the general public may not have had the understanding or interest in Russian disinformation that they do today, our various Armed Forces and intelligence agencies certainly did. For three years I worked at the Advanced Research Group of the UK Defense Academy, a successor to the Soviet Studies Research Center, which for years had tracked and written about Russian military affairs and information warfare. In the US the Foreign Military Studies Office did much the same. Collectively we have a good understanding of how Russia deploys its IO – particularly post the dissolution of the Warsaw Pact, as Eastern European nations such as Poland and the Baltic States brought their experience and knowledge to the West.

This is not, however, the case for China. China only opened up to the West in the late 1970s, and the number of Western analysts who can speak Mandarin has remained low for years. In this sense, our collective ability to access Chinese documents, doctrine and academic journals is limited. So, I would argue that we are on the back foot: we don’t have the necessary depth and history of understanding of Chinese IO that we have of Russian IO.

Another interesting difference is that much of China’s IO is not happening in traditional defense environments – much of it supports its global development programs. They probably won’t be familiar with the term IO, but ask a Western mining executive about Chinese influence and information activities in Africa and what they describe will be a deep, concerted and well-thought-through IO campaign. And I know from my own personal experience that they are likely to give you a far more detailed assessment than any UK MoD or Pentagon analyst, because military analysts are not attuned to looking in non-military environments. In my research some of the best examples of contemporary Chinese IO are not in the military domain, but in their economic and industrial strategies across Africa. Now something that has put them more on the radar is their National Intelligence Law, which, under Article 7, compels all Chinese organizations to cooperate with intelligence agencies, and of course in that regard there has been a great deal of debate about TikTok.

As an aside, I wonder how many readers have been tracking the many Chinese AI-created videos that appeared after President Trump’s tariffs, showing Americans wearing MAGA hats on trainer (sneaker) manufacturing and production lines. They were pretty rapidly produced, they played to some significant US stereotypes – their creators understood their target audiences – and (dare I say this?) they were quite entertaining. But they also showed how far China has come. Corporately, I would suggest our knowledge of Beijing’s efforts remains sketchy, and in comparing and contrasting the two we can ‘paint’ only in broad strokes.

From my perspective, I would say that it is clear that both Russia and China regard IO as vital; it is clear that both devote a great deal of time, money and effort to it. On the back of the Ukraine war there may have been some convergence. But, in my experience, Russian efforts continue to be more ‘clunky’ and more identifiable than Beijing’s. Russia is very focused on interrupting Western social media; China less so (or perhaps less obviously so). Russia is clearly focused on disrupting and undermining Western institutions. In my book I highlight that the Russian intervention in the BREXIT referendum was probably on both sides of the debate – hedging their bets to maximize chaos. Russian IO is often easier to spot (either because it is crude and/or because of the number of experienced analysts), Chinese IO less so. Russian IO targets familiar and obvious audiences; China, again, less so (ergo my comment about their efforts in, for example, African commercial programs). Russian IO has many targets – in fact it seems there is nothing that is not a target; China’s IO has an obvious military sweet spot – Taiwan – but many commercial ones that seemingly have evoked little to no interest from the UK Ministry of Defense or the US Pentagon.

In the book, I also point out that other nations are developing their IO capabilities – Iran, North and South Korea, Israel, India, Pakistan, Colombia. This is clearly a growing area, and in response we collectively need more than just a defense-department approach, but a whole-of-government one.

We need the psychologist, anthropologist and sociologist at the heart of our IO campaigns

Manea: The post-9/11 campaigns had an important component of PSYOPs. Their common thread was ‘winning hearts and minds’. What lessons can be learned from those experiences in order to fight external and internal malign actors targeting societal cohesion?

Tatham: Post-9/11, Richard Holbrooke, a senior official under President Clinton, asked “How can a man in a cave out-communicate the world’s leading communications society?” In response, and prompted by organizations such as the RAND Corporation, the US turned to Madison Avenue, and the ‘ad man’ took an active role in US, and latterly US-led coalition, IO. As Tom Vanden Brook (of the USA Today newspaper) reported, the US spent hundreds of millions of dollars on attitudinal, marketing-inspired IO in Iraq and Afghanistan. I chart it in detail in the book and frankly – and I say this as the former UK PSYOPs commander in Helmand – it was a disaster.

Dismayed at what we collectively saw, in 2007 General Andrew Mackay, General Stanley McChrystal and I wrote Behavioral Conflict, where we recommended replacing this attitudinal, perception-based strategy with one that focused instead on applying psychology to understand the drivers of behaviors and, frankly, ignoring attitudes and perceptions – at least on the battlefield.

Sadly, that never really happened. In my new book, which General H.R. McMaster so very kindly wrote the foreword to, I recall how we tried to convince a senior (British) International Security Assistance Force (ISAF) officer to undertake a behavioral study of the Afghan National Security Forces (ANSF), because we simply didn’t believe that it was as resilient and cohesive as was popularly thought. That request was refused – in Brussels, Washington and London; after years of conflict, billions of dollars and countless lives lost, there was no appetite for anything that diverted from the narrative that GIRoA (Government of the Islamic Republic of Afghanistan) and the ANSF were capable and resilient.

But it wasn’t. And we all saw that almost live on our TV screens as the whole edifice crumbled in three weeks and Afghanistan was lost. In my view, the Hearts and Minds concept was a failure in its application. But there were two huge impediments to changing it: the lobbying power of the commercial organizations that were delivering the products (and making millions of dollars), and (and here I conjecture) an unwillingness by senior officials to challenge received wisdom and an unremitting clinging to convention.

Today the advertising executive is still there, but now they are supplemented by the information technology executive. The currency of IO today seems to be mass messaging, persona management and social media interactions. The only difference is that today behaviors are supposedly factored in. Supposedly? At its core it is still hearts-and-minds messaging. I still don’t see the psychologist, anthropologist and sociologist at the heart of our IO campaigns. If we wish to see real change, I think we need to divest ourselves of some of the traditional companies we have used for years, who have cost so much and, in my view, delivered such poor results, and start looking more intelligently at other approaches and providers.

Effective propaganda has to have something to work upon

Manea: How can we think about achieving societal resilience in an era in which the ability of malign actors to attack the cohesion of open societies is practically endless?

Tatham: People discuss this a lot, and you will hear answers ranging from better education and developing critical-thinking skills through to fact checking by (social) media companies and a hundred and one other solutions. Some of that would help – certainly the fact checking by the technology companies – but there are other things we can do that need a bit more thought. I wrote a paper a few years ago for the National Defense Academy of Latvia. Its less-than-snappy title was probably why it’s not widely known: The solution to Russian propaganda is not EU or NATO propaganda but advanced social science to understand and mitigate its effect in targeted populations. I stated there that we were becoming fixated on the volume of IO – particularly Russian. The paper argues that maybe we should be less focused on the inputs and instead follow Winston Churchill’s maxim, “However beautiful the strategy, you should occasionally look at the results”. What is actually happening as a direct result of the IO? To do that we need proper tools, and if we have tools to look at the results, we will see how resilient our societies are (or are not). To quote directly from that paper:

“Russian IO may in many instances be blatant lies and fabrications, but it is generally well crafted and targets specific known vulnerabilities in societies. But targeting is not enough – we need to know how that translates into possible behaviors. In eastern Ukraine it is clear that there were significant rifts in society already, but how much can one attribute the civil war to the Russian propaganda machine and how much to dissatisfaction on the ground with central government, with poor life chances, a stagnant economy and rampant corruption? Effective propaganda has to have something to work upon. It also has to resonate with its audience, and here my Russian colleagues do seem to know their audiences. The problem is we do not.”

The UK and NATO actually had the capability to understand it – tested and trialed. It was called the Behavioral Dynamics methodology, and we’d spent nine weeks training students at the NATO Center of Excellence (CoE) in Latvia how to use it and teach it. But in 2018 it got caught up in the Cambridge Analytica storm and was utterly destroyed. I show in the book how, in the face of a global media firestorm, it was quickly publicly disowned, even by institutions such as the NATO CoE in Latvia that had been enthusiastic supporters of it, had written reports on its utility and were actively using it to combat Russian IO. Such is the pressure of hostile media.

But we used the Behavioral Dynamics methodology in a number of places and it was like a ‘light bulb’ moment – providing us with a level of understanding and knowledge about specific audience groups, their motivations, susceptibilities and influences in a way that we had never seen before – and, to the best of my knowledge, it has not been replicated since. So, to answer the question directly, whilst there are lots of things we can (and should) do, I believe we need a little less IT and fewer admen, and a whole lot more science – psychologists, anthropologists and sociologists – thrown at the problem. It is in the social sciences that the knowledge of how to build strong societal resilience probably exists.

We are close to the horrors of an Orwellian society: ‘newspeak’, ‘the ministry of truth’ and a life where you are monitored continuously

Manea: Last year Romania was, in part, TikTok-ed. Computational propaganda as well as the co-optation of influencers in a pre-determined campaign were extensively used. What are the broader dangers and vulnerabilities that people expose themselves to by using TikTok?

Tatham: The willingness of people to place their innermost thoughts and personal details on social media is a never-ending source of amazement to me. But along with the obvious personal detail that can be harvested, there is of course the metadata – the background information in each video. As data mining becomes more sophisticated through the use of AI, so the ability to understand the individual will deepen. The very context of their lives becomes all the more attainable.

In college I took an English literature course and one of the books we had to study was George Orwell’s 1984. Coincidentally, it was in 1984 that I read it as a student, and it all seemed pretty far-fetched. I enjoyed the story, but didn’t think much more of it. Having re-read it just last year, I realized what a masterpiece it is, that Orwell was arguably a genius in forecasting the future, and how closely we are converging on the horrors of Orwellian ‘newspeak’, ‘the ministry of truth’ and a life where you are monitored continuously.

You asked about TikTok. But it is not just TikTok (although people are understandably nervous of it for its Chinese connections); we should be wary of all social media. I use Facebook to keep in touch with owners of the same type of dog breed as I own. I also use it to keep connected to the French community where we have a second home. Social media is so convenient and easy for these types of tasks. But to get to the posts about dogs, to get to the posts about France, I have to navigate an FB feed that is beyond my control, and despite being a political centrist, a keen supporter of the European Union (EU) and a hater of extreme politics, I increasingly find that I have to navigate stuff that is absolutely not what I want to see. I ignore it, but it has its effect. I saw this first hand. My dad, a passionate European, inexplicably voted for BREXIT because he was angry at EU fishery policy – a subject in which he had never previously expressed any interest in his entire life. I was puzzled, but looking at his FB feed one day I found it was full of anti-EU messaging, much of it focused on immigrants and fishing. How did that get there?

I don’t use TikTok, but I occasionally see the odd video (like the Chinese MAGA videos post the tariffs that I mentioned in an answer to an earlier question). They can seem disconcertingly innocuous. I am not religious, but I am mindful of a biblical reference: beware of false prophets, which come to you in sheep’s clothing, but inwardly they are ravening wolves. I regard TikTok as a ravenous wolf, just as I do X, Facebook and social media platforms yet to come. They have to be – users generate their content! You think you can control it, but ultimately it begins to control you, and of course someone (and in the future, something) does control it, its algorithms and content. I still believe that it has to have something to leverage off – some pre-existing belief or anger – but as the wolf sucks more and more from users, so the opportunity for that leverage broadens.

Like heroin, social media is intoxicating

Manea: Post-9/11 the danger was the radicalization of individuals. Today the danger seems to be the radicalization of entire segments of the population by encouraging a groupthink effect and echo chambers. How did we end up here?

Tatham: I don’t quite agree with the premise of your question – there is still a danger in the radicalization of individuals, as we have seen very recently in New Orleans. And I am not sure we have yet seen, in terrorism terms, the radicalization of entire segments of society per se. But we have seen the politicization of groups, and in some instances that does appear to have translated into violent behaviors – I think in particular of the rioting in the UK last year, but possibly also of the January 6th riots in the US.

Manea: I was alluding to the political radicalization of key segments – not in terrorism terms. How did we end up here?

Tatham: I think it has been a slow journey, and in terms of adversaries such as Russia it probably began a long time ago, when we chose to draw a peace dividend at the end of the Cold War and went about our lives. We happily watched as clever people such as Jack Dorsey and Mark Zuckerberg developed new and exciting technology, and didn’t give much thought to its potential to cause us harm. Nor, probably, did they! That is not to say that there were no warning notes sounded – slowly more and more studies began to conclude that we could be in trouble – but even then those warnings were largely focused on the (mental) health effects of prolonged exposure. But social media was intoxicating – like heroin it became impossible to stop, and each successive ‘fix’ required more and more attention. To an extent we normalized it; even our respective governments and institutions, celebrities and brands all began communicating via it, and quite simply it took over our lives. It’s only in the last five to ten years – in the military context – that we have begun to seriously understand how it can be weaponized, and some key international events have accelerated that learning – the Ukraine war being an obvious example.

When that weaponization is turned on a known issue or fracture in society – BREXIT in the UK was one example; immigration, globally, is another – then it has the potential to be very destructive and to suck in large groups of people – just as we saw in the UK last year. To sound like a stuck record, that was why the BDi methodology was so good: it allowed you to measure those fractures and determine susceptibility to influence, online or otherwise.

Looking forward, for me one of the most frightening issues is that although there is a lot of automated trolling, until now most social media has still emanated from human hands – that will not be the case going forward. We really should be more suspicious and questioning of AI; for all its considerable potential in benign areas such as cancer diagnosis, physics or conservation, the human condition tells us someone, somewhere, will absolutely use it for bad things. When it is truly let loose in social media it will be hugely destructive. Will it take us further and quicker towards the 1984 society that George Orwell wrote about with such prescient clarity? I am naturally pessimistic – but I will be very happy to be proven wrong.

Manea: Any final point about the book?

Tatham: I don’t get a penny (or a dime) from the book’s sales. The royalties all go to a charity for veterans, providing them with assistance dogs. So, please support it if you can, and if you buy from the publisher’s website (instead of Amazon) even more goes to the charity. Thanks. The link is here.

About The Author

  • Octavian Manea is a PhD Researcher at the Centre for Security, Diplomacy and Strategy (CSDS), which he joined in October 2021. He is interested in the changing character of conflict and the implications of such alterations for the US-led alliance system. Octavian is also broadly interested in strategic studies, transatlantic relations and security issues. He worked for many years as a journalist, and is currently a contributor at the Romanian weekly 22 and the Small Wars Journal. In addition, Octavian was the managing editor of the Eastern Focus Quarterly in Bucharest and was affiliated with the Romania Energy Center (ROEC). Octavian was a Fulbright Scholar at the Maxwell School of Citizenship and Public Affairs at Syracuse University, where he received an MA in International Relations and a Certificate of Advanced Studies in Security Studies. He also holds a BA and an MA in political science and international relations from the University of Bucharest.

