Russia’s Troll Army Is Making Life Harder for US Spies

In this picture taken on Sunday, April 19, 2015, a woman enters the four-story building known as the "troll factory" in St. Petersburg, Russia.

AP Photo/Dmitry Lovetsky



How Moscow’s robotic feeds and paid social-media commentators complicate open-source intelligence gathering.

When Facebook posts and tweets blamed pro-Russian rebels for downing a Malaysian jet over Ukraine last year, U.S. spies studied social media trend lines to gauge public opinion of the Kiev-Moscow conflict.

The number of Facebook "likes," statistics on retweets and "favorited" tweets, and other social media analytics told one story.

But intelligence officials know that, increasingly, autocracies are deploying “trolls” – robotic feeds or paid commentators – to sway social media trends.  So officials say they were cautious when compiling situation assessments.

Such messaging can become dangerous when it casts doubt on ground truth. 

Director of National Intelligence James Clapper depends on open-source information, in addition to classified material, to provide American decision-makers with objective information. Some U.S. officials worry that social media campaigns orchestrated by overseas powers could distort open-source intelligence gathering.

(See also: The Russian Hackers Taking on the Kremlin)

"As various situations unfold in other countries — and Clapper has got to be able to advise the president and other senior leaders in the government on what are the likely outcomes, what are the range of possibilities — having the best information possible is crucial," ODNI Science and Technology Director David Honey told Nextgov.

"There are rigorous, rigorous processes to try and always make sure that the information is correct," he added. "That’s where I would worry: If one of our tools gave an incorrect forecast, it could lead to giving bad advice to the senior leadership."

Already, adversaries have tried to distort online perceptions, he acknowledged, providing the example of the social media swirl around the July 2014 crash of Malaysia Airlines Flight 17.

"There may be a disagreement on the part of Russia on what happened on the aircraft that was downed in that part of the world. And so people will take different data sources and try to use them to their own purposes," Honey said.

Facebook and Twitter analytics help with “sentiment analysis,” or understanding how certain groups observe and feel about current events.  
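At its simplest, sentiment analysis scores the emotional tone of individual posts and aggregates the results across a population. The sketch below is a minimal lexicon-based version; the tiny word lists are illustrative placeholders, not a real sentiment lexicon, and production systems use far richer models.

```python
# Minimal lexicon-based sentiment scoring: count positive and
# negative words per post, then average across the sample.
POSITIVE = {"good", "support", "peace", "truth", "safe"}
NEGATIVE = {"bad", "blame", "war", "lie", "attack"}

def score_post(text):
    """Net sentiment of one post: +1 per positive word, -1 per negative."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def average_sentiment(posts):
    """Mean sentiment score across a collection of posts."""
    if not posts:
        return 0.0
    return sum(score_post(p) for p in posts) / len(posts)

posts = [
    "They blame the rebels for the attack",
    "We support peace and truth",
]
print(average_sentiment(posts))  # → 0.5
```

The catch, as the officials quoted here note, is that the average is only as honest as the accounts producing the posts: a coordinated troll campaign shifts the mean without any real change in public opinion.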

The actual cause of the disaster, which killed all 298 people on board, has become a flashpoint. U.S. ally Ukraine alleges pro-Russian separatists shot down the plane over territory they controlled.

"Russia may be motivated to try and create one impression," Honey said. "Coupled to that is the issue of Ukraine, and again sentiment analysis — what are the opinions of the people on the ground? What are the opinions of people in Russia? It’s all important to us to be able to give an understanding of what does it mean."

As the Sydney Morning Herald notes, the counter-narrative promoted by Moscow’s “troll army” of patriotic bloggers has been that a Ukrainian plane — not Russian-backed rebels — shot down MH17.

Right now, foreign investigators are probing parts from what might be a Russian-made Buk antiaircraft missile system that were recently discovered near the crash site, according to CNN.

Reality Checks

The process of differentiating between a real, grassroots social media storm and astroturf campaigns in the blogosphere is not an exact science yet, Honey said. 

The trickiness of discerning fact from fiction also crops up in online punditry during campaign season, for instance, he said.

"Is somebody going and blogging 100 times under different names? How do you figure that out? That’s a challenge. And so with any of these technologies, you really have to think through how they can be used, how somebody could game them and make sure that you are getting accurate answers," Honey added.

(Read more: Inside Russia’s Disinformation Campaign)

This is especially true now, because it’s fairly easy for computer programmers to create bots — programs that chatter online like real users.

To do a reality check, “there are statistical approaches to be able to try to figure out if there are correlations between posts that are just a little too close to be different people,” Honey said. While Twitter and Facebook try to police fraud, “the ability to spoof the algorithms that check if it’s a human” is a different matter. 

The DNI earlier this month published an unclassified five-year science and technology strategy for startups to read, partly so that intelligence analysts can gain insight into tech inventions before nefarious hands do.  

"The people who are developing them at the time aren’t the kind of people to abuse technology, so they don’t necessarily think through how a bad actor might try to manipulate any of the technology," Honey said. "You’ve got to be able to think through at some point, hopefully in advance, how they might be misused or how somebody might try to trick you into thinking you’ve got one thing when in fact you’ve got something else?"

More than half a decade ago, China pioneered the practice of falsifying social media communications to influence perceptions of Beijing’s ruling party.

Chinese Facebook and Twitter conversations surrounding Tibetan civil liberties were a common target.

"This has involved creating fake accounts that publish and/or retweet stories on economic development and ethnic harmony in the region and the use of bots to drown out other voices," said Adam Segal, who researches China and cybersecurity at the Council on Foreign Relations.

In 2012, several hundred bots flooded Twitter discussions using the hashtags #Tibet and #Freetibet with meaningless tweets and spam, Segal noted on the think tank’s blog at the time.

"If you were someone trying to learn more about Tibet, you kept bumping up against these threads, and eventually you may have given up and moved on to some other subject," he said.

The Associated Press reported in late May that Serbs receive most of their information about Russia from coerced typists who parrot the Kremlin party line. As a result, there is a widespread belief in Serbia that Kiev officials are neo-Nazi, according to the AP.

Last month, Forbes columnist Paul Roderick Gregory said an article he wrote the day after the MH17 incident, in which he alleged Russian separatists shot down the plane, had received more than 100 comments from Russian trolls.

Putin’s keyboard operatives “assert the offending bloggers are CIA spies, professional photoshoppers, forgers, Russia haters, hokhols (a derogatory expression for Ukrainians) — perhaps even insane,” Gregory wrote in July. “These trolls keep busy by poking holes in the evidence, and the more absurd, the better (false facts, photoshopped images). Their job is to raise doubts and cause confusion.”

A Captive Audience

Now other countries, including Middle Eastern regimes not too happy about the Arab Spring, are staging messaging operations to counter Western views. 

Advocacy group Freedom House noticed that last year, 24 of the 65 countries the organization monitors for online censorship were engaging in some form of pro-government social media tampering, said Sanja Kelly, director of the group’s Freedom on the Net project. 

One recent example: Azerbaijan, which hosted this summer’s European Games, took to Twitter to deflect international criticism of its human rights record.

At several points in the run-up to the games, pro-government tweets from multiple accounts appeared at roughly the same time, the BBC reported.

The official organizing committee and a coordinated fleet of users tweeted positive images of Azerbaijan’s capital, Baku. A large group photo of participants crouching and standing, with national flags from across Europe in the background, carried the message: 

"#Azerbaijan athletes with @azpresident #Azerbaijan#baku2015 #realbaku2015 #europeangames #biginbaku #ilove azerbaijan"

It’s hard to quantify the success rate of fabricated online campaigns, but anecdotal evidence suggests the oppressors are winning the hearts and minds of the citizenry, Kelly said.

"Just looking at it more from a qualitative perspective, it seems that it is being effective — because most people cannot tell the difference between a legitimate tweet or a legitimate comment and comments made by these trolls," she said. "If you are the average reader, there would be no reason for you to believe that this exchange" tarnishing the reputation of a dissident "is not legit."

Based on general studies of regimes that filter Internet content, the governments “are able to propagate their message and hold their population captive, in a way, because they are not able to get alternative sources of information,” Kelly added. 
