New Tech Aims to Help Societies Learn to Spot Fake News

Helsinki, Finland. Studies show that societies with less income inequality are less susceptible to fake news.

astudio / Shutterstock



Qatar’s QCRI and Finland’s Faktabaari make tools that help users from all over the political spectrum realize when they’re getting played.

Despite its relatively recent entrance into common parlance, “fake news” is not a new phenomenon. Wherever there are people, different cultures, and contrasting political opinions, there will be biased reporting based on questionable sources of information. 

Fortunately, an emerging set of technologies is increasingly capable of identifying fake news for what it actually is, thereby laying the foundations for communities to do the same. The challenge is to ensure that these platforms reach the places where they are needed most.

Thanks to social media, fake news can now be disseminated at breakneck pace to vast audiences that are often unable or unwilling to separate fact from fiction. Studies suggest that fake news spreads up to six times faster on social media than genuine stories, and that false news stories are 70 percent more likely to be shared on Twitter. Observers call it “spam on steroids”: if one spam email is sent to only 1,000 people, it effectively dies, but if fake news reaches the same number of recipients, it is more likely to be shared, go viral, and eventually reach millions.

Fake news punctuated some of the most important elections of recent years, including 2016’s Brexit referendum and U.S. presidential campaign. Not that this has overly harmed the winning parties; support for the victors on both sides of the Atlantic remains relatively buoyant. But the aftermath of both elections demonstrated that one person’s fake news is another’s cast-iron proof of fact. Put another way, it is difficult to consume fake news free from the influence of personal opinion. That’s where technology can help.


Objectivity is not a problem for a new wave of technologies that perform analysis to help citizens understand what they are reading. These include Tanbih, a platform developed by Hamad Bin Khalifa University’s Qatar Computing Research Institute, or QCRI, to encourage media literacy and a “healthy news diet” among the general public.

Tanbih begins by grouping news articles by event and gathering additional information about each media source. From there, the platform supports offline analysis by generating profiles of media outlets that cover their political ideology, reputation for factuality, past use of propaganda, and bias.

Tanbih then taps into the political leaning of social media users who interact with specific news outlets, weighing users’ willingness to express opinions on controversial issues by sharing news articles. The premise is simple: a user’s own bias says a lot about the outlets whose articles they share to support their arguments on polarizing topics. For example, analysis of tweets containing links to CNN shows that the organization attracts social media followers from across the political spectrum, though it skews toward users with left or liberal leanings.
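Tanbih’s actual models are more sophisticated than this, but the underlying idea of inferring an outlet’s audience leaning from the users who share its links can be illustrated with a minimal sketch. All names and data here are hypothetical, with user leanings encoded on a scale from -1 (left) to +1 (right):

```python
from collections import defaultdict

def estimate_outlet_leaning(shares):
    """Estimate each outlet's audience leaning as the mean leaning
    of the users who shared links to it (-1 = left, +1 = right)."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for user_leaning, outlet in shares:
        totals[outlet] += user_leaning
        counts[outlet] += 1
    return {outlet: totals[outlet] / counts[outlet] for outlet in totals}

# Hypothetical share events: (user leaning, outlet domain shared)
shares = [
    (-0.8, "cnn.com"), (-0.4, "cnn.com"), (0.2, "cnn.com"),
    (0.9, "example-right.com"), (0.7, "example-right.com"),
]
print(estimate_outlet_leaning(shares))
```

On this toy data, the outlet shared mostly by left-leaning users averages out slightly left of center, mirroring the CNN observation above in miniature.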

Tanbih then goes even further, examining specific pieces of content for common propaganda techniques, including loaded language, stereotyping, and stretched facts. It then trains users to spot these techniques in texts and to think critically when interacting with news. In doing so, Tanbih underscores the importance of going after the source in order to highlight the potential for fake news before it is even written.

QCRI’s platform is by no means the only product that identifies fake news without human bias and sentiment. However, converting these platforms into consumer-friendly applications capable of promoting media literacy en masse will take time. The magnitude of the task they will eventually confront is demonstrated by an experiment conducted in the run-up to Ireland’s 2018 abortion referendum. Researchers from University College Cork showed over 3,000 volunteers a series of fabricated news articles. Nearly half claimed to have prior recollection of at least one story, and many failed to question their memories even when told they were reading fake news.

According to an academic involved in the experiment, the results demonstrate how difficult it is to undo false memories once they’ve taken hold. Such findings might also help to explain why public support for Brexit has remained relatively solid despite attempts to debunk claims made by Vote Leave and official inquiries into social media adverts that allegedly broke electoral law. This hardly bodes well for efforts to encourage societies to think more objectively about the information they consume. But there are grounds for optimism.

In 2014, Finland embarked on a multi-layered campaign to prepare its citizens for an increasingly complex digital landscape. This includes an initiative to help Finns identify fake news and counter narratives designed to sow division within the country. Through its critical thinking curriculum, Finland also encourages schoolchildren to examine YouTube videos, social media and news articles for factual and statistical errors. To help, the fact-checking organization Faktabaari has designed tools specifically for use in Finnish schools.

Finland’s efforts to tackle fake news undoubtedly benefit from a national narrative that places a high premium on the rule of law. The country’s position near the top of global indices for education and media literacy also suggests that fake news struggles to gain a foothold in countries with more equal standards of living, so much so that states with a similar social and economic make-up now look to Finland for inspiration. These include Singapore, a state known for its controversial limits on free speech, which recently introduced stiff penalties for publishing content deemed to be fake news. The scope of Singapore’s legislation drew concern, a reminder that state-led efforts to objectively identify and eliminate fake news are not without detractors.

It also remains to be seen whether the Finnish model is transferable to lower-income countries and regions. Central and southeastern Europe are a case in point: both have been vulnerable to fake news in recent years, and neither possesses the financial muscle, media literacy, or societal cohesion of Finland. To compound matters, countries like Macedonia are home to website operators that churn out fake news on an industrial scale. When and how this multimillion-dollar industry will ultimately be broken up by governments and legitimate service providers remains uncertain.

What’s more certain is that technology designed to objectively highlight fake news will continue to evolve and become more user-friendly. There is also potential for these technologies to work hand-in-hand with national campaigns to develop the levels of media literacy required to consistently discern the “fake” in fake news. These include efforts in India to rein in the polarization caused by the spread of disinformation on WhatsApp and Telegram. Reason enough to encourage the development of tools that objectively flag fake news for all.
