According to Wikipedia1, psychological warfare (‘psyops’) began in the First World War.  However, as NPR’s article so eloquently put it, “Governments always have sought to shape political and other conditions around the world to their own benefit.1”  As far back as ~700 BC the Bible3 records the field commander of the king of Assyria calling over the walls of Jerusalem, speaking in the inhabitants’ own language (Hebrew) to intimidate them despite the pleas of the city elders asking to converse in Aramaic.  In 71 BC the Romans crucified 6,000 slaves along 120 miles of the Appian Way as a warning after Spartacus led slaves in a two-year-long rebellion.  No matter when you date it, governments have long tried to panic the peoples of their enemies, using tactics ranging from air-dropping pamphlets, to blasting the enemy with music and noise 24/7, to Hanoi Hannah’s famous anti-American broadcasts during the Vietnam War.

However, not all governmental psyops or influence is external.  For example, internal elections in the United States have always had a group of people working to understand the pulse of the nation by tracking popular opinion on hot political issues.  These “pollsters” conduct research using surveys, analyze the results, and work with political campaign managers to develop and implement strategies.  These strategies are designed both to guide the candidate’s responses and to influence the opinions of the electorate through a flood of advertisements and other tailored messaging.

Enter the world of big data and machine learning… suddenly the game changes, and governments do not just persuade, they interfere.

The ‘poster child’ of this nation-state interference was the 2016 US presidential election.  If you are not already familiar with the issues of Russian interference and manipulation, I recommend you watch Netflix’s 2019 documentary The Great Hack, which details how the Trump campaign used services from Cambridge Analytica4 (building on the data-driven campaigning the Obama team had pioneered) to manipulate social media and eventually sway the election in its favor.  As I stated in my earlier article on Social Media Bots, just be sure you realize that the producers of that documentary are also very clearly trying to make a point of their own.

To be honest, I think that Cambridge Analytica was used as a scapegoat because they were *too good* at the sort of analysis and manipulation that political pollsters have been refining for generations.  Aided by big data analytics, by our willingness to share information via social media applications (e.g., Facebook), and by the fledgling power of machine learning, they simply did the job better, faster, and far more accurately than human beings could previously do.  In fact, they did their job so well that it scared the crap out of the whole world… and rightfully so.

Whatever you believe about Cambridge Analytica’s role, there is little doubt that those same powers – data analysis and manipulation through automated posts and strategically timed advertisements – were also used by the Russian government5 for its own political machinations.  I think it can be argued that the Russians’ support of the Trump campaign was not so much because they ‘like’ Trump, but because they felt he would be more polarizing and divisive.  We know2 that the Russians have been using these analytics to manipulate the US via social media, sowing discord and disunity in matters of politics, racial tension, and religion.  And it’s not just the Russians; we are also aware of targeted campaigns by other inimical countries, including Iran and China2, that seek to weaken the United States by having us tear each other apart with internal conflict.

So what are you supposed to do about this?  We are not powerless; here are some things that we can do6 that make a difference:

  • Is it a bot account or a person?

As I stated in my previous article, in many instances you can look at the activity of the ‘inflammatory’ account.  If the account’s activity is limited to one topic and a few sources of content, you should be highly suspicious that it is automated and therefore worth ignoring.
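As a rough illustration only (not any platform’s actual API, and far simpler than real bot-detection systems), the “one topic, few sources” rule of thumb above can be sketched as a small heuristic.  The function name, the score scale, and the (topic, source) tuple format are all assumptions made up for this sketch:

```python
from collections import Counter

def bot_suspicion_score(posts):
    """Hypothetical heuristic: score an account's posts for bot-like focus.

    `posts` is a list of (topic, source_domain) tuples summarizing the
    account's activity.  Accounts whose posts concentrate on a single
    topic and very few content sources score closer to 1.0.
    """
    if not posts:
        return 0.0
    topics = Counter(topic for topic, _ in posts)
    sources = Counter(source for _, source in posts)
    # Fraction of all activity devoted to the single most common topic/source.
    topic_focus = topics.most_common(1)[0][1] / len(posts)
    source_focus = sources.most_common(1)[0][1] / len(posts)
    return (topic_focus + source_focus) / 2

# A single-issue account reposting one outlet scores near 1.0;
# an account with varied interests and sources scores much lower.
suspicious = [("election", "fakenews.example")] * 9 + [("election", "blog.example")]
varied = [("cooking", "a.example"), ("sports", "b.example"),
          ("election", "c.example"), ("travel", "a.example")]
```

Real detection weighs many more signals (posting cadence, account age, network structure), but the intuition is the same: narrow, repetitive activity is a red flag.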


  • Think about the purpose of the hot-button social media post you are reading

If the ‘author’ is trying to inflame, enrage, or divide rather than fix things (commonly called ‘trolling’), consider ignoring it.  It might just be a misinformed 14-year-old instead of a Russian bot, but neither is helpful.  After all, even the 14-year-old might have been similarly influenced; do not let them ‘tilt’ you.


  • Don’t throw gas on the fire

The bad guys here take things to extremes to polarize us.  If you see such an extreme version of an argument posted, ignore it as an outlier.  Even more importantly, do not ‘fan the flames’ by re-posting it or upping the emotional ante.


  • Talk to *people*, not posts

Older generations often chide millennials and Gen Z for staying on their phones, texting and posting instead of talking to each other.  Talk to people instead of avatars, and let a little common sense and community building take priority over the ease of social media platforms.  This can literally take the virtual ‘gun’ out of a nation-state threat actor’s hands.

At the end of the day, it reminds me of the old Smokey Bear7 ads where he would say, “Only YOU can prevent forest fires!”  Everything in cybersecurity requires YOU (the user) to be secure on purpose; security is a way of life rather than a few one-off tasks.  Let’s all take a little time to show more online sense and responsibility, be a little more ‘social’ and a little less ‘media’, and have healthy, constructive discourse that moves us forward – instead of further apart.  When it comes to our nation’s ability to have both free AND productive speech and discussion, make sure that you PROTECT IT.





3 2 Kings 18:17–37