
Pwning Western Democracy - Stopping State Sponsored Trolls

How social media companies can thwart state sponsored trolls, and why they won't do it.

Disclaimer: I am neither employed nor paid by any of the social media platforms mentioned in this blog post.

Preface

This will be an unusual blog post for me. I normally don't delve into this subject; this one will be a rare exception.

Like many of you, I have gone online and witnessed what could be described as trolling. A distinction should be made, though, between your regular troll, who is just some guy at his computer, and a group of state-sponsored actors. Before we go any further, let me quickly give you a short definition of the term "troll" that I'll be using here.

"Online trolling" is a form of behavior on the internet that is intended to upset the order of a website that such behavior is exhibited on. This is my layman definition that I just pulled out of my .... finger. If we refer to Wikipedia though, their definition is somewhat more inclusive:

In Internet slang, a troll is a person who starts flame wars or intentionally upsets people on the Internet by posting inflammatory and digressive, extraneous, or off-topic messages in an online community (such as a newsgroup, forum, chat room, or blog) with the intent of provoking readers into displaying emotional responses and normalizing tangential discussion, either for the troll's amusement or a specific gain.

Let me tell you that even that definition is not complete. As it turns out, trolling is no longer something that can be attributed only to online behavior. Since about late 2015, it has also spilled out into the mass media and eventually into our daily lives.

Lastly, I want to point out that I am in no way an expert on human behavior, nor do I claim to know why people do certain things. I am a software developer and a security researcher. Human psychology is not my subject of research!

Thus, in this blog post I will only present my technical expertise, with suggestions on how we can try to stop the second form of trolling, the most dangerous one: the state-sponsored kind. This is what the rest of this post will be about.

Why Is It Bad?

Before continuing, I also want to emphasize that I'm making a clear distinction here. I am not trying to stop any individual from expressing his or her views in a public forum, even if they do so in a derisive manner that may be categorized as trolling. In my view, such behavior is an expression of their Freedom of Speech, and it should not be lumped into the category that I will be referring to in this blog post. I hope that makes it clear.

What I'm referring to as bad trolling further down this article is the second kind: the one that is conducted and sponsored by the government of a state for the purpose of destabilizing or paralyzing a political or civil process in another country. Something that is done as a "job" by multiple recruits of said state, either for monetary compensation or for ideological reasons. You can fill in the blanks yourselves as to who the villains here are.

The Instruments

The tools of the trade of such a trolling enterprise are many. Regrettably, what usually ends up being employed are the tools of the very state that is being subverted. I don't want to make a comparison to 9/11, but there are definitely some parallels there.

In this case, social media seems to be the prime candidate for such an attack. Why? Because of the inner workings of the social media sites. Let's review them first.

The Good

Every social media platform, be it the worst offenders, Facebook and Twitter, but also YouTube, Quora, Pinterest, TikTok, and so on, operates with these goals in mind:

  • Everything is done by the "algorithm", which is a fancy way of saying that a computer program (a script) makes most of the daily decisions.
  • The goal of each social media platform is to keep its users logged in for as long as possible. (Ever noticed how hard it is to find a log-out button?)
  • To keep users on the platform, the "algorithm" needs to keep them engaged. But how? Simply by observing what each individual user enjoys watching (or doing), the "algorithm" will feed him or her the same content over and over again. It's that simple. If aunt Betty likes cooking, the "algorithm" will show her more recipes. If uncle Jimmy likes guns, the "algorithm" will feed him more weapon-related content. This generally works really well, and I personally love it when the "algorithm" on YouTube feeds me videos that I like to watch. I'm obviously not alone in that.
  • Most social media platforms make money by "selling" their users' information to "advertisers". What I mean by that is this: in a benign scenario, an advertiser will "purchase" information on the specific users that they want to show their ads to. For instance, an advertiser selling some hip new shaving cream distributed around the city of Seattle may buy information on users of the platform who are male, ages 21 to 40, and live in a ZIP code in the greater Seattle area. As you can imagine, for such an advertiser, not paying to show their ads to anyone outside of that group is a big boon. So the advertiser is happy, and the social media platform is happy to charge them for the service. And that is how the social media giants, like Facebook, make their money. (See the sketch right after this list.)
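
For the programmers among you, here's a minimal sketch of what such audience filtering boils down to. The user records and the campaign criteria are entirely made up for illustration; a real ad platform obviously does this at a vastly different scale:

```python
from dataclasses import dataclass

@dataclass
class User:
    user_id: int
    gender: str
    age: int
    zip_code: str

def matches_campaign(user: User) -> bool:
    """Return True if the user fits the hypothetical Seattle shaving-cream
    audience: male, ages 21 to 40, in a greater-Seattle ZIP code."""
    return (user.gender == "male"
            and 21 <= user.age <= 40
            and user.zip_code.startswith(("980", "981")))

users = [
    User(1, "male", 30, "98101"),    # downtown Seattle: matches
    User(2, "male", 55, "98101"),    # too old: no match
    User(3, "female", 25, "10001"),  # New York: no match
]
print([u.user_id for u in users if matches_campaign(u)])  # [1]
```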

Originally, the concept that I outlined above worked really well. Have you seen the Facebook stock lately?

The Bad

I don't know who, or how soon, but someone quickly realized that the concepts above can be easily abused. And I'm not just talking about spammers and false advertisers. I'm talking about the use of the social media "algorithm" to influence the citizens of a country on a massive scale. And not just people in your own country (you can always practice on them first with the use of mass media; that has been done before), I'm talking about the population of a foreign state. Think about it. For a foreign government to subvert the system of checks at CNN, so that it could broadcast the content of their choosing, would require a great deal of manipulation to pull off. In other words, it was hard to do.

But let's review what can be done now with the help of the social media platforms and their "algorithm":

  • The "algorithm" is an inanimate object. It doesn't understand the right from wrong. It just follows the script. If people are engaged in something, it just feeds them more of the same thing. And the only thing the troll needs to do is to provide "the fodder".
  • So how do you engage someone? Well, at first you show them some innocuous content. It may be some funny video that pretty much everyone enjoys. You know, that funny Thanksgiving video where someone's clumsy uncle spills the turkey sauce all over his shirt. Or a skydiver who happens to hit and entangle an eagle in his parachute, and both survive intact. Stuff that pretty much everyone would enjoy watching. Then rinse and repeat, and hope for the person to click Follow, Like, or Subscribe to your content.
  • Then continue feeding them this stuff, but every so often throw in a video that is beneficial to your actual cause. Then repeat. Maybe add some unprovable pseudo-science, some feel-good inspirational bullsh*t, anything that keeps them engaged. Rinse and repeat, while slowly intensifying the torrent of your propaganda stream. At the same time, make sure to muddy the waters too: create confusion and mistrust in the current system. Intensify as you go. Then, when only the "loyal" followers remain, you can switch them to a more extreme channel/account of yours that actually gives them calls to action.

    (I honestly thought that this kind of stuff could only happen in the plot of a Chuck Palahniuk novel. But I was wrong.)

  • One more integral part of such an echo-chamber feedback loop is quite important to note. Most social media platforms thrive on user comments, as well as on upvotes and downvotes. For instance, this is an integral part of Reddit, denoting which posts are important and which ones are not. The "algorithm" on all social media platforms tries to prioritize highly upvoted content and to hide the most downvoted content. And that is how you can also abuse the platform in your favor.

    By creating an army of fake accounts (either by recruiting unsuspecting people to do your bidding, or through automated bots and hacked computers), one can abuse the voting system by upvoting the "right" comments and downvoting the unfavorable ones. And the "algorithm" will do the rest for you. The effect it has on users is pretty similar to what happens when you go to an amusement park and see a big line at a food stall. You instinctively know that there's something yummy being sold there, so you want to join the line and get some too. While a lone food cart that no one buys from will make you pass it by. This is just social psychology, but with digital up- and downvotes on the screen. The principles are the same.

The Countermeasures

OK, enough theory. How can you stop this? There are indeed a lot of hurdles here. The most daunting one, which the social media platforms seem to be tackling now, is how to verify the veracity of this or that claim. That is a much harder task than I can take on, so I'm not even going to try it here.

What I will suggest instead is how social media companies can root out and thwart state-sponsored trolls that dilute (or muddy) the comment section and the user voting system. (Those up- and down-voting buttons.) I tried searching for the concepts that I am about to propose here and, to my surprise, there wasn't much information available.

Quick Terminology

Very quickly, let me give you a basic explanation of how the internet connection in people's web browsers, or in their social media apps, works.

When a user's smartphone, tablet, laptop, etc. (I'll refer to them all as computers) connects to a website, there are certain properties of such a connection that can be "spoofed" and some that cannot be. This is important to understand for our discussion here.

Let me give you an example. If you go to a block party in your friend's neighborhood and someone asks you, "What is your name?", you can give them any name you want. There's no way people at that party would know if that's your actual name. So in terms of our internet connection analogy, we can say that such an introduction can be "spoofed". But on the other hand, if you go to your local DMV office to renew your driver's license, you can't just give them any name you like. You must prove your identity to get any further with your request. In that case, in our internet connection analogy, we can say that your introduction to the DMV officer cannot be "spoofed".

So how can we use this? In our case, any genuine information received by the website from a connecting user's computer that cannot be spoofed is something that we can base our anti-trolling protections on. Why? Because the trolls will not be able to change or manipulate it.

Now, going back to the actual internet connection, the website has only two properties of such a connection that cannot be spoofed. Those are:

  1. Precise time of the connection. The signal from the connecting computer travels at close to the speed of light, and the arrival time is recorded by the website's own clock, so the website can log the exact moment a user's computer communicated with it with millisecond accuracy. And 05:30:29 UTC is 05:30:29 UTC in New York, Tokyo, or Paris.
  2. IP address of the connecting computer. Such an address is required to send the page back to the user's computer, and thus it is not possible to spoof it. If it were, the connecting computer would not receive anything back from the website, and the page (or the app) would appear blank on the user's screen.

Everything else, including the user agent, operating system, type of smartphone, GPS coordinates, size of the web browser window, referrer page, time zone in the settings, etc., can be spoofed and should not be relied on for our analysis!
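
To make this more concrete, here's a tiny sketch of how a web server observes these two properties, using nothing but Python's standard http.server module. A real platform sits behind load balancers and proxies, so the client address would have to be captured at the outermost edge of the infrastructure, but the principle is the same:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import time

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Both values are observed on the server side, so the client cannot
        # spoof them: the arrival time comes from the server's own clock, and
        # the address comes from the established TCP connection itself.
        arrival_ms = int(time.time() * 1000)  # milliseconds since the Unix epoch
        client_ip = self.client_address[0]    # IP address of the connecting peer
        print(arrival_ms, client_ip)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```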

The Method

My proposed method of thwarting the dissemination of mass trolling posts, comments, and fake up- and downvotes is to determine, and then label, each interaction with the website with the country it originated from, plus some additional metrics. That way, other users of the website can decide for themselves whether or not to trust certain content, based on very simple visual clues that most of us will instinctively pick up.

Here's a visual example.

Main Up/Down Vote Counter

Say the CNN YouTube page seems to have been subjected to a slew of abuse (deservedly or undeservedly so). I'm choosing it only because it is an easy target to make an example of.

So the goal here is this: instead of just providing a simple count of the current upvotes and downvotes for the video, YouTube can add a little flyout under it that also shows which country most of the upvotes and downvotes originated from. Here's my Photoshopped sketch to show what I mean:

CNN's YouTube page
A sketch of CNN's YouTube page with a made-up count of vote stats by country (for illustration purposes only.)

So if we enlarge the vote counter itself, you can see that it shows which country most of the upvotes (on the left) and most of the downvotes (on the right) came from:

CNN's YouTube page
A sketch of the made-up vote count stats by country (for illustration purposes only.)

Additionally, if someone clicks on the percentage value in such a flyout, it can expand to display a breakdown of the next 5 countries that votes were registered from. Here's another Photoshopped sketch to show what I mean:

Downvote stats flyout by country
A sketch with a made-up count of downvote stats by country (for illustration purposes only.)

As you can see, it shows that users from those hypothetical 5 countries downvoted that specific video.
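
Computing such a breakdown is trivial once every vote is labeled with its country of origin. Here's a minimal sketch that mirrors the flyout above; the vote data is, of course, made up:

```python
from collections import Counter

def vote_breakdown(votes, top_n=6):
    """Given one ISO country code per registered vote, return the top
    country plus the next few, each with its percentage of the total."""
    counts = Counter(votes)
    total = sum(counts.values())
    return [(country, round(100 * n / total, 1))
            for country, n in counts.most_common(top_n)]

downvotes = (["RU"] * 720 + ["US"] * 95 + ["MX"] * 60 +
             ["DE"] * 45 + ["BR"] * 40 + ["CA"] * 40)
for country, pct in vote_breakdown(downvotes):
    print(f"{country}: {pct}%")  # RU: 72.0%, US: 9.5%, ...
```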

And as a helper option, for people who are not that keen on recognizing national flags, moving the mouse over the flag, or touching it with a finger on a smartphone, should display a popup with the actual name of the country:

Downvote stats flyout by country with Mexico highlighted
A sketch with a made-up count of downvote stats by country, with Mexico displayed in a popup prompt.

Comment Section Metrics

Once implemented, the technique described above should also be applied to the comment section, where each user comment or reply should carry country stats and some additional helpful metrics. All that information should be displayed to the public in a similar manner.

Here's one more of my mockup Photoshop sketches for a user comment:

Comment country stats
A sketch with made-up comment metrics showing the country of origin, engagement and countries of upvotes and downvotes.
The sketch above is based on an actual comment made by a troll (minus my stats addition). Please note the tactic that was used, though: the word "God" in the name, with a picture of a cute baby. Who wouldn't listen to what he has to say? 😉

What I'm proposing here are several quite important metrics that should be made available to the public:

  1. As I showed earlier, the flag by the name of the poster shows the country of origin, or where this post was made from.
  2. The percentage of engagement, shown to the right, is an internal metric that is already used by the social media platforms. It basically shows how much this user interacted with the platform before they posted this comment. Low engagement means that this is a brand-new account, which should raise an immediate red flag (no pun intended.)

    The user interface on the website can accentuate this by displaying a low engagement percentage in red.

  3. The two flyouts for the upvotes and downvotes, with country stats, should work exactly as I described for the main voting controls above.

Visual Aid

Now admit that, as a user, an American for instance, if you saw the kind of comment that I showed above (with my little extra additions of flags and percentages), you would be less likely to fall for the message in that post, wouldn't you?

I know that for myself. Every time I see that sketch, it amazes me how such a small alteration makes such a huge difference in the outcome.

On a side note, there's also an existing way to verify someone's engagement on a social media platform: simply click on the user name and check their profile. In the case of our embattled "God Wins" user, here's how their page looks:
Profile page of a troll
Empty profile page of a "bot" account. (That still existed at the time of this writing.)

I don't have to tell you that a blank profile paired with a comment that has multiple upvotes almost certainly means that this fella wasn't acting alone. And that is a tell-tale sign of a state-sponsored troll.

The Implementation

Let's review the technical aspects of how this could be achieved.

Data Collection

This part is easy. Most social media platforms are good at it already. Collecting the stats that I showed above may require some additional storage resources, but not many. Each user interaction with the website could be saved in two compact database fields:

  • Precise time - can be stored in 8 bytes of data as an integer, if we need millisecond precision. And in 4 bytes, if we don't.
  • IP Address - if we assume just the IPv4 format, this can be stored in 4 bytes of data. Or in a slightly larger binary form (16 bytes) for IPv6.

Then comes the database table storage and retrieval, plus optimization, load balancers, etc. I could probably write this setup in less than a week.
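
To illustrate, here's a minimal sketch of such a table using SQLite. The table name and schema are just placeholders; any production database would work the same way:

```python
import socket
import sqlite3
import time

db = sqlite3.connect("interactions.db")
db.execute("""CREATE TABLE IF NOT EXISTS interactions (
    ts_ms INTEGER NOT NULL,  -- 8 bytes: milliseconds since the Unix epoch
    ip    BLOB    NOT NULL   -- 4 bytes for IPv4 (16 bytes for IPv6)
)""")

def record_interaction(ip_str: str) -> None:
    """Save the two unspoofable properties of one user interaction."""
    ts_ms = int(time.time() * 1000)       # fits comfortably in 8 bytes
    packed_ip = socket.inet_aton(ip_str)  # 4-byte binary form of an IPv4 address
    db.execute("INSERT INTO interactions VALUES (?, ?)", (ts_ms, packed_ip))
    db.commit()

record_interaction("198.51.100.23")  # an RFC 5737 documentation address
```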

Determining The Country

This task is the most challenging one. Why? In this day and age of cheap (or even free) proxy services and VPNs, there is a good chance that a state-sponsored army of trolls will be using one to mask their IPs.

The following simplified diagrams show what those services do. First, let's look at a connection without a VPN:

No VPN

Such a direct connection unequivocally conveys the IP address of the connecting user, which is good for our technique.

But here's the connection through a VPN service:

With VPN

As you can see, the VPN masks the real IP address of the user by routing traffic through its own server. This is a problem for us, since our website will get the IP address of the VPN provider and not of the user connecting to us.

So how do we overcome it?

Unfortunately, there's no easy solution here. The way to do it would be to run and constantly maintain a fresh database of IP addresses that belong to VPN services. Luckily for us, due to the scarcity of IPv4 addresses, it is not trivial for VPN services to change or acquire new IP addresses these days. But there still has to be a robust service that checks such a database and keeps it up to date.

Can a small company do this? Possibly. There are some that claim to be doing it now. But realistically speaking, this is a fundamental task that will require a company with a large staff to constantly scan the web and verify the IP addresses of existing VPN services. Obviously, a company that maintains its own mapping service, or has its web page loaded by a large number of people around the world (wink, any of the largest social media platforms), could possibly tackle this task.
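
For illustration, here's a minimal sketch of such a lookup. It uses MaxMind's geoip2 Python package with their GeoLite2 country database (the file path here is an assumption), plus a hypothetical, constantly refreshed set of known VPN exit addresses:

```python
import geoip2.database                          # pip install geoip2
from geoip2.errors import AddressNotFoundError

# Hypothetical set of known VPN/proxy exit addresses, kept fresh by the
# dedicated service described above. (These are documentation addresses.)
KNOWN_VPN_IPS = {"203.0.113.77", "203.0.113.78"}

reader = geoip2.database.Reader("GeoLite2-Country.mmdb")  # path is an assumption

def origin_label(ip: str) -> str:
    """Return an ISO country code for the connecting address, 'VPN' for a
    known VPN/proxy exit, or 'UNKNOWN' if the address cannot be resolved."""
    if ip in KNOWN_VPN_IPS:
        return "VPN"
    try:
        return reader.country(ip).country.iso_code or "UNKNOWN"
    except AddressNotFoundError:
        return "UNKNOWN"
```

The "VPN" label then feeds directly into the user interface and the deprioritization logic described in the next section.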

Dealing with VPNs

So what should happen, from a technical standpoint, if the IP address that a user is connecting from belongs to a VPN?

Well, several things:

  • Such a connection should be clearly shown in the "flag" area of the user interface on the website:
  • VPN flag graphic
    A sketch with made-up comment stats showing the VPN origin, and a highlighted low engagement.

    Additionally, legitimate users of the platform should be trained to treat unidentified countries of origin with mistrust.

  • Such a post or comment should be automatically deprioritized, although legitimate users of the platform should be made aware of it. There should also be a way for verified users to overcome this. See below for details.

    In other words, if you're using a VPN, be prepared to be put at the bottom of the stack. I'm sure most platform "creators" would understand the reasons for that and adhere to the rules to overcome it.

Determining User's Engagement

Calculating users' engagement is something that the social media platforms have been doing since day one, so this part should not be a problem at all. It is usually a hidden metric now, indirectly available to advertisers. But I don't see why a platform would not be able to release it to the public as well.

Engagement basically shows how involved a particular user was with the platform. Stuff like this:

  • How long ago the account was registered.
  • How much time the user spent interacting with the platform by watching videos, posting comments, upvoting, etc.
  • How often the user logs in, or spends time on the platform.
  • The views and interests of the user: was it just the same video watched over and over again, or is there more of a human pattern of activity?
  • How many posts the user upvoted and downvoted.
  • Whether there is any predictable pattern to the user's behavior, etc.

All of these engagement metrics are very difficult to impersonate with a bot, or with an automated account created for the sole purpose of posting fake comments.
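
Here's a back-of-the-napkin sketch of how such signals could be combined into a single score. All weights and thresholds are made up for illustration; each platform surely has its own, far more sophisticated formula:

```python
from datetime import datetime, timezone

def engagement_score(account: dict, now: datetime) -> float:
    """Combine a few of the signals listed above into a 0-100 score.
    Each signal saturates at a cap and contributes a fixed weight."""
    age_days = (now - account["created"]).days
    score = min(age_days / 365, 1.0) * 30                     # account age
    score += min(account["hours_active"] / 100, 1.0) * 25     # time on platform
    score += min(account["days_logged_in"] / 60, 1.0) * 20    # login regularity
    score += min(account["distinct_topics"] / 20, 1.0) * 15   # breadth of interests
    score += min(account["votes_cast"] / 200, 1.0) * 10       # voting activity
    return round(score, 1)

fresh_troll = {"created": datetime(2024, 5, 1, tzinfo=timezone.utc),
               "hours_active": 2, "days_logged_in": 1,
               "distinct_topics": 1, "votes_cast": 150}
print(engagement_score(fresh_troll, datetime(2024, 5, 3, tzinfo=timezone.utc)))  # 9.2
```

An account registered two days ago that does little except vote scores in the single digits, which is exactly the red flag we want to surface in the UI.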

Cultural Aspects

We can use machine learning, together with the precise time of the user's connections to our website, to identify some useful patterns. Say, this may help us confirm their country of origin. For instance, if most of the posts by a certain user were made between 9 AM and 5 PM in a certain time zone, Monday through Friday, we may indirectly conclude that employees in a country with such a time zone were involved in making those posts.
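
Here's a minimal sketch of that idea: given a user's post timestamps in UTC, try every whole-hour UTC offset and see which one best fits a Monday-through-Friday, 9-to-5 "office" pattern. The sample timestamps are made up:

```python
from datetime import datetime, timezone, timedelta

def office_hours_ratio(timestamps_utc, utc_offset_hours):
    """Fraction of posts made Mon-Fri, 9 AM to 5 PM, in the candidate
    time zone. A ratio near 1.0 over many posts hints at paid work."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    hits = sum(1 for ts in timestamps_utc
               if ts.astimezone(tz).weekday() < 5        # Monday..Friday
               and 9 <= ts.astimezone(tz).hour < 17)     # 9 AM..5 PM
    return hits / len(timestamps_utc)

posts = [datetime(2024, 5, 6, 6, 30, tzinfo=timezone.utc),   # a Monday
         datetime(2024, 5, 6, 9, 15, tzinfo=timezone.utc),
         datetime(2024, 5, 7, 7, 45, tzinfo=timezone.utc)]   # a Tuesday
best = max(range(-12, 15), key=lambda off: office_hours_ratio(posts, off))
print(best, office_hours_ratio(posts, best))  # 3 1.0 - i.e. office hours in UTC+3
```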

Then there's this thing that was bugging me while I was working on the sketches above. Certain people who pretend to post comments in one language (or pretend to be from another country) may lack the cultural awareness, or the grammar, of the language that they are trying to impersonate. For instance, the "God Baby" comment that I've been torturing above has a peculiar trait: the author put a space between the last word, CNN, and the question mark. If you noticed, English punctuation does not require a space there, while some other languages do.

This is obviously not definitive proof. But such small nuances, especially when they repeat themselves, can be used by machine learning algorithms to detect the poster's country, or to confirm or disprove the one deduced from their IP address.
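
As a toy example of what such a stylometric feature could look like, here's a sketch that counts spaces inserted before terminal punctuation, which is standard in French typography, for instance, but not in English:

```python
import re

# A word character, then whitespace, then a question or exclamation mark,
# e.g. "CNN ?" - the exact quirk from the comment discussed above.
SPACE_BEFORE_PUNCT = re.compile(r"\w\s+[?!]")

def style_features(text: str) -> dict:
    """Extract a couple of crude stylometric signals from a comment."""
    return {
        "space_before_punct": len(SPACE_BEFORE_PUNCT.findall(text)),
        "exclamations": text.count("!"),
    }

print(style_features("Who still watches CNN ?"))
# {'space_before_punct': 1, 'exclamations': 0}
```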

Beyond these small sketches, I'm obviously not providing a concrete implementation of all this logic here. It's just a thought on how this metadata can be used to our advantage.

Privacy

To be honest, I don't see how revealing a user's country could infringe on their privacy. So maybe someone will correct me in the comments.

One privacy aspect that does come to mind, though, is storing users' IP addresses during their interactions with the social media platform. Well, I'm afraid to break it to you, but pretty much every large website is doing that already. It is a part of the access logs that sites maintain for security, and at times for marketing purposes.

Also, as I proposed above, the website will only reveal the user's country, and not their public IP address. The latter should remain private and should not be exposed in a public setting.

Deprioritizing Posts

As I've already implied, the following should be automatically deprioritized by the platform.

By deprioritization, I mean ranking a post or a comment lower, so that it is not shown to a large audience. (A combined sketch follows this list.)
  • Any comment submitted from a country different from the post's own country should be deprioritized. (This is especially true for posts marked as political.)
  • Posts and comments from users with low engagement levels should be deprioritized. (Comments should be displayed lower in the list, even if they have a large number of upvotes. Posts should appear lower in search results and not be included in automatic suggestions.)
  • Posts and comments whose country of origin was masked by a VPN or a proxy service should also be deprioritized.

    An exception should be made for verified accounts, where the user has proven to the service that they reside in a specific country. Verification could be a paid service initiated by the user of the platform. They may be required to submit their passport, or another state-issued document, to complete the verification process.
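
Putting the pieces together, here's a minimal sketch of how these penalties could be applied to a comment's ranking score. The multipliers are made-up illustration values:

```python
def rank_score(comment: dict) -> float:
    """Start from the raw vote score and apply the deprioritization
    penalties proposed above."""
    score = comment["upvotes"] - comment["downvotes"]
    if comment["origin"] == "VPN" and not comment["verified_resident"]:
        score *= 0.2   # masked origin: push toward the bottom of the stack
    if comment["engagement"] < 10:
        score *= 0.3   # brand-new or low-engagement account
    if comment["is_political"] and comment["origin"] != comment["post_country"]:
        score *= 0.5   # foreign comment on a political post
    return score

troll = {"upvotes": 500, "downvotes": 3, "origin": "VPN",
         "verified_resident": False, "engagement": 4,
         "is_political": True, "post_country": "US"}
print(round(rank_score(troll), 2))  # 14.91 - sinks despite hundreds of upvotes
```

Note how a comment with hundreds of (purchased) upvotes still sinks to the bottom once the penalties compound.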

So, the final question that we may ask ourselves: will this cut down on all trolling activity? No, of course it won't. But it will thwart the most egregious kind.

Why This Won't Be Done

So, having outlined my strategies for addressing these state-sponsored subversion tactics, let me throw a bucket of cold water on my own optimism and explain why this will most likely not be done.

As I noted above, the social media platforms are in the business of making money. They make money by attracting more customers to their platforms and by selling their data to advertisers. And that simple strategy goes radically against what I proposed above. Why would they be interested in investing money and effort into something that may alienate some of the existing users on their platform? And yes, obviously those automated troll accounts are also counted as "users". They can safely tally them up and present the final number to investors and stockholders during quarterly results. No one checks where those "users" come from.

So why would they be interested in cutting out a portion of those automated accounts and lowering their quarterly numbers?

And why would they turn down ad money from an "advertiser", even if that advertiser is running a subversion campaign against another government? It just doesn't make sense for their financial bottom line.

Think about it. Asking the social media tech giants to implement any of the technical suggestions that I posted above is like asking you to do some work as a result of which you will be paid less in the future. Do you see the paradox there?

Afterthought

Well, obviously, the advice that I gave above applies not only to YouTube, whose graphics I used in my mockups. Any social media platform can use similar design elements, as long as they serve the same purpose.

In my view, it pays to see this proposal in visual form. A tiny addition of a flag to a user's post or comment makes a huge difference in the perception of the (political) message conveyed in such a post. Moreover, by doing it this way, the onus of spotting fakes shifts from the social media platform to its users, which greatly alleviates the burden of performing "fact checking", which is so rife with problems.

And lastly, I'm hoping that I was wrong in my assessment in the section above, and that there are some people at YouTube, Facebook, Twitter, you name it, who may read this and take my suggestions to heart. This is free advice that I'm sharing with anyone who is willing to listen. And maybe, just maybe, we can make the social media platforms less toxic than they are now.
