Revelations that Russia used Google, Facebook, Instagram, Pinterest, Pokémon Go and a growing list of other digital platforms to try to influence America’s electoral process through targeted posts, and that it attempted to penetrate the very systems used to administer elections, call for stronger efforts to understand the vulnerabilities of our electoral institutions and systems.
They call, too, for more study of the ways adversaries integrate weaponized information into their doctrine and operational planning. Government security leaders should explore potentially critical changes to the technologies that underlie those systems, and either secure those technologies or replace them entirely. Research and development in this domain must include ways to manage the federal, state and local administration of elections throughout the United States more securely. Importantly, government agencies must remind social media companies that foreign support for specific American candidates is against federal law.
A recent report revealed that Russian-generated disinformation may have been shared “hundreds of millions of times” on Facebook alone. As the study’s author, social media analyst Jonathan Albright, points out, Russia likely took a measured two-pronged approach to undermining American democracy and compelling U.S. citizens to digest, believe and share misinformation aimed at tarnishing the country’s election system.
First, Russian operatives published organic posts that connected with very specific groups of voters on very specific issues, such as immigration, health care, veterans’ services, gun control and racial equality. These posts were discovered by Facebook users in the course of their normal social media activity and pushed to everyone in those users’ networks.
Second, Russian actors placed money behind geo-targeted ads that hit on specific issues or aimed to damage the reputation of specific candidates irreparably. The goal: to inspire inaction on the part of American voters through the mass distribution of false information, ensuring, in essence, that they do not vote in the election at all. One must surmise that Russia’s efforts were (and remain) sophisticated in their conception, use of social analytics, and execution.
The reality gets darker.
Russia’s troll farms and disinformation agents used social media to divide voters based on political issues, appeal to demographic vulnerabilities and encourage people to avoid the polls entirely. Essentially, they attempted to weaponize the very thing that makes democracy work: informed free will.
Consequently, the tech, policy and national security communities need to collaborate and share R&D findings where possible to diminish the returns of future Russian disinformation campaigns that reach unwitting American citizens directly.
In practice, this means establishing collaboration mechanisms between government agencies like the FBI and private social media companies to develop shared tools—or, at least, channels of communication—for uncovering information weaponization campaigns and responding to curb illegal efforts to influence America’s elections. Counterintelligence officials and researchers should conduct R&D, supported by data released by social media companies, to “reverse engineer” foreign efforts to use social analytics to target specific voter populations and derive messages designed to resonate with those populations.
Social media companies should work with U.S. legislators to craft online advertising governance that addresses the real issues plaguing election security and prevents social media platforms from being weaponized by foreign adversaries moving forward. The freshly introduced bipartisan Honest Ads Act from Sens. Mark Warner, D-Va., Amy Klobuchar, D-Minn., and John McCain, R-Ariz., is a start and provides a model for public-private partnership on election security.
The U.S. government should also look down the road to emerging technologies like artificial intelligence, machine learning and blockchain to address data weaponization moving into the 2018 election cycle and beyond. AI can be developed to spot possible instances of weaponized information with precision, particularly if the technology can correlate that information with specific messages and voter demographics. Machine learning might be used to help agencies reverse engineer the rapidly evolving methods foreign adversaries use to target specific voter demographics, and to support government work to craft counter-messages accordingly. America’s public sector, from federal down to municipal, should consider blockchain as a scalable way of securing electoral systems.
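To make the machine-learning idea concrete, here is a minimal sketch of how a similarity-based filter might flag posts that resemble known troll-farm messaging. The corpus, threshold and function names are all hypothetical illustrations; a production system would use trained models over far richer features than simple word counts.

```python
# Hypothetical sketch: flag posts whose wording resembles known
# disinformation messages, using bag-of-words cosine similarity.
from collections import Counter
import math


def bag_of_words(text: str) -> Counter:
    """Represent text as a bag-of-words frequency vector."""
    return Counter(text.lower().split())


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors (0.0 to 1.0)."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


# Hypothetical corpus of messages already attributed to troll farms.
KNOWN_DISINFO = [
    "do not vote the election is rigged stay home",
    "your vote does not count skip the polls",
]
KNOWN_VECTORS = [bag_of_words(t) for t in KNOWN_DISINFO]


def flag_post(post: str, threshold: float = 0.5) -> bool:
    """Flag a post if it closely resembles any known disinformation message."""
    vec = bag_of_words(post)
    return any(cosine_similarity(vec, k) >= threshold for k in KNOWN_VECTORS)
```

A filter like this would only catch near-duplicates of known messaging; the harder research problem the article points to is generalizing to novel messages aimed at the same voter demographics.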
There is much debate to be had on whether technology companies themselves have an obligation to do more to protect their users. In general, tech companies with a U.S. presence operate under the freedom of speech accorded by the First Amendment. On the other hand, these companies present opportunities, mostly unwittingly, to spread information that is intended to have an illicit effect on U.S. elections, drive populations of American residents apart based on social and political issues, and weaponize public perception tied to other events important to the national interest.
I believe that at a minimum, tech companies have an obligation to warn users regarding nefarious information immediately and transparently. They must take responsibility for the sale of advertising and other capabilities designed to reach voters for illicit purposes—and work from this day forward with government R&D partners to ensure we have the tools necessary to prevent a breach of American free will from happening again.
Ultimately, confidence in the United States’ democratic processes demands confidence in the way votes are collected and counted throughout the nation. In response to confirmed data breaches and social media penetration, public and private cybersecurity entities should undertake research and development that gives social media organizations the means to spot and characterize foreign efforts, through troll farms and other channels, to influence electoral opinion, and to help halt such efforts. The future of America’s democracy depends on it.
Samuel Sanders Visner is senior vice president of cybersecurity and resilience at ICF. He is also an adjunct professor of cybersecurity policy, operations and technology at Georgetown University.