Facebook CEO Mark Zuckerberg appears on a screen as he speaks remotely during a hearing before the Senate Commerce Committee on Capitol Hill, Wednesday, Oct. 28, 2020, in Washington. Michael Reynolds/Pool via AP

Meet the AI That’s Defeating Automated Fact Checkers

Social media companies are using lie-detecting algorithms to reduce the amount of disinformation they spread. That’s not going to be good enough.

As social media platforms and other sites grapple with misinformation surrounding the 2020 election, they’re relying on machine-learning algorithms to spot and tag false or deliberately deceptive posts. Such algorithms weigh a post’s headline, content, and source, and, increasingly, the reader comments attached to it. But what if you could deploy fake commenters convincing enough to fool the algorithms into thinking a false story is real?
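To see why the comments matter, consider a bare-bones, hypothetical detector that folds reader comments into its features alongside the article text. Nothing below reflects Facebook’s or Twitter’s actual in-house systems; the toy data, the [COMMENTS] separator, and the choice of model are all invented for illustration.

```python
# A minimal sketch of a comment-aware fake-news classifier: it concatenates
# the article text with its reader comments and trains one text classifier
# over both. Toy data, invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# (article body, reader comments) pairs with 1 = fake, 0 = real.
articles = [
    ("Miracle cure found, doctors stunned", "This is obviously a scam. Source?"),
    ("City council approves new budget", "Good summary. Matches the minutes."),
]
labels = [1, 0]

# Fuse article text and comment text into one document per post.
docs = [f"{body} [COMMENTS] {comments}" for body, comments in articles]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(docs, labels)

# Because the comments feed straight into the features, planting fluent,
# supportive fake comments can shift the score toward "real".
print(detector.predict_proba(docs))
```

Because the comments feed directly into the classifier’s features, an attacker who controls them can nudge the model’s verdict in either direction.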

A team of Penn State researchers did just that, using a generative adversarial network, or GAN, to generate comments that could fool even the best automated comment spotters. They described the results in a new paper for the 2020 IEEE International Conference on Data Mining.

A GAN pits two neural networks against one another: one generates candidate outputs while the other tries to tell them apart from real examples, whether the task is writing comments or finding mobile missile launchers in satellite photos. Each round of that contest exposes the other network’s gaps and weaknesses, so both improve. It’s sort of like pitting two high-performing chess-playing AIs against each other and then using the resulting games to learn the tactics that chess programs typically employ.

“In our case, we used GAN to generate malicious user comments that appear to be human-written, not machine-generated (so, internally, we have one module that tries to generate realistic user comments while another module that tries to detect machine-generated user comments. These two modules compete [against] each other so that at the end, we end up having machine-generated malicious user comments that are hard to distinguish from human-generated legitimate comments),” paper author Dongwon Lee told Defense One in an email. 
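Lee’s description maps onto the standard GAN training loop. The sketch below is a generic, stripped-down version of that loop, not Malcom’s actual architecture: Malcom generates text conditioned on the news article, while this toy operates on fixed-size comment embeddings, and every dimension, learning rate, and helper in it is an assumption made for brevity.

```python
# Generic GAN training loop over stand-in "comment" vectors (not Malcom itself).
import torch
import torch.nn as nn

EMB_DIM, NOISE_DIM = 64, 16

# Generator: turns random noise into a vector standing in for a comment.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, EMB_DIM)
)
# Discriminator: tries to tell generated "comments" from real ones.
discriminator = nn.Sequential(
    nn.Linear(EMB_DIM, 128), nn.ReLU(), nn.Linear(128, 1)
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_comment_embeddings(batch_size):
    # Stand-in for embeddings of genuine human-written comments.
    return torch.randn(batch_size, EMB_DIM) + 2.0

for step in range(200):
    real = real_comment_embeddings(32)
    fake = generator(torch.randn(32, NOISE_DIM))

    # Discriminator step: label real comments 1, generated comments 0.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator call its output "real".
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Once the contest converges, the generator’s output is, by construction, the kind of material the detector struggles to flag.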

Dubbed “Malcom,” the comment-writing AI beat five of the leading neural network detection methods around 93.5 percent of the time. It bested “black box” fake news detectors — neural nets that reach their conclusions via opaque statistical processes — 90 percent of the time. 

“This is the first work proposing an attack model against neural fake news detectors, in which adversaries can post malicious comments toward news articles to mislead cutting edge fake news detectors,” they write. 

Malcom also outperformed other machine-learning methods for generating fake comments to fool moderation systems, such as randomly copying words or phrases from real comments, or taking a real comment and swapping a critical word for a positive one.

Malcom vs. other ML comment generators

“The lower the purple bar and the green bar, the better (i.e., Malcom-generated malicious comments were less likely to be detected by defense methods, so more stealthy). The yellow bar then is the overall, or % of samples filtered by either misspelling or topic-coherency detectors. Malcom’s generated samples are filtered least among all baselines. This means Malcom’s generated comments have [fewer] misspellings and [are] more coherent [with] the post’s topics,” said Lee.
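The simpler baselines in that comparison are easy to mimic in a few lines. The sketch below shows hypothetical versions of two of them: a copy attack that stitches together words from genuine comments, and a word-swap attack that flips critical words to positive ones. The comment strings and the swap list are invented for illustration.

```python
# Two naive comment-generation attacks of the kind the paper compares against.
import random

real_comments = [
    "this story is totally fake and misleading",
    "great reporting, well sourced article",
]

# Invented example mapping of critical words to positive replacements.
SWAPS = {"fake": "accurate", "misleading": "convincing"}

def copy_attack(comments, n_words=6):
    """Stitch together words sampled from genuine comments."""
    words = " ".join(comments).split()
    return " ".join(random.choices(words, k=n_words))

def word_swap_attack(comment, swaps=SWAPS):
    """Replace critical words in a real comment with positive ones."""
    return " ".join(swaps.get(w, w) for w in comment.split())

print(copy_attack(real_comments))
print(word_swap_attack(real_comments[0]))
```

Attacks like these tend to produce garbled or off-topic text, which is exactly what the misspelling and topic-coherency filters in the chart catch; Malcom’s comments slip past them far more often.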

But Malcom doesn’t just fool moderating programs into giving fake news a pass. It can also be used to demote real news in people’s feeds, the researchers write. 

Lee said there is no swift cure for attacks that use GANs like Malcom against content moderation. Platforms can employ more human moderators or different machine-learning methods to spot them, but “with only limited success for now,” he said. In the future, “one may try to improve such algorithms more by using Malcom-generated malicious comments as training examples to learn (so called adversarial learning),” he wrote.
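That adversarial-learning idea amounts to folding the attacker’s output back into the defender’s training data. The fragment below is a bare, hypothetical sketch of that step; generate_malicious_comment stands in for a Malcom-style generator and is not a real function in any library.

```python
# Sketch of adversarial training for a comment/fake-news detector:
# append machine-generated attack comments to the training set, labeled
# as attacks, before retraining the detector.
def adversarially_augment(train_texts, train_labels, generate_malicious_comment, n_new=100):
    """Add generated attack comments (label 1 = attack) to the training data."""
    for _ in range(n_new):
        train_texts.append(generate_malicious_comment())
        train_labels.append(1)
    return train_texts, train_labels
```

The retrained detector then sees examples of the very comments meant to fool it, though, as Lee notes, this has had only limited success so far.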

The main takeaway from the paper is that “state-of-the-art machine learning models (like those Facebook or Twitter may use in-house)…known to detect fake news very accurately can be still fooled by attacks. Hence, policy-wise, we show that one can no longer rely on the result of [a] fake news detection algorithm fully (as such algorithm[s] may have been attacked and fooled to give [a] wrong result). [We] need [a] second or third sanity check for important cases.”