Big Tech Bulks Up Its Anti-Extremism Group. But Will It Do More than Talk?

A man passes a facebook screen at the Gamescom in Cologne, Germany, Tuesday, Aug. 20, 2019.

AP Photo/Martin Meissner

Facebook and others launched GIFCT to stop violent groups from exploiting online platforms.

A group effort by giant tech companies to thwart online extremism is gaining members, staffers, and at least a little independence. But some industry watchers wonder whether the Global Internet Forum to Counter Terrorism, or GIFCT, will be allowed to do more than convene meetings.

Founded in 2017 by Facebook, Microsoft, Twitter and YouTube, GIFCT aims to “prevent terrorists and violent extremists from exploiting digital platforms.” On Monday, the group announced three new members — Amazon, LinkedIn and WhatsApp — and that GIFCT will become an independent organization with its own executive director and operational teams.

“GIFCT will establish working groups in order to engage stakeholders from government and civil society focused on specific projects and advise GIFCT’s efforts,” officials said in a statement. “Initial working groups are expected to address topics such as positive interventions with respect to radicalization, algorithmic outcomes, improving the multistakeholder Crisis Response Protocol and legal challenges to data sharing.”

The new group will aim to help digital platforms develop programs and business operations to disrupt extremist activity, to conduct exercises to help companies and other groups work with one another, and to support new research into extremism.

The move marks another step in the story of technology companies learning to detect and thwart extremist content.

In 2014, as ISIS spread across the Levant, the jihadist group turned to Twitter, YouTube, and Facebook to drive recruitment and fundraising. The social media companies faced pressure from lawmakers and lawyers to improve their game, which they did, bit by bit, by establishing a joint database of extremist images and videos so that content blocked by, say, YouTube could also be blocked by Facebook.

Intelwire editor J.M. Berger offered cautious praise for the move. “An independent GIFCT helps put some distance between policy advice and research funding and the sources of that funding, which should empower higher quality research and policy guidance,” Berger wrote in an email.

But he said he would wait to see whether the new GIFCT would go beyond its current efforts to convene stakeholders for discussions.

“Convening is a great activity, if it leads to action, but there’s also a lot of convening for convening’s sake in the CT/CVE online field. We don’t really need another placeholder in this area, we need a dynamic organization that can marshal resources to solve problems. The companies also need to commit to the venture; GIFCT can only succeed if the companies take its advice and research to heart.”

Social media companies have achieved success tackling some types of extremist content, particularly Islamist content. But they’ve had a much harder time containing right-wing extremist groups.

The varying forms of violent right-wing content make algorithmic detection harder. Moreover, it’s tough for algorithms to determine when, say, Nazi imagery is being used in legitimate historical or educational contexts. Finally, those who share violent right-wing imagery may be truly dangerous — or may simply be looking for social acceptance or shock value.

It’s one reason why the next wave of extremist content will be harder to stop than the last.
