Twitter reports that it has suspended more than 125,000 accounts since the middle of 2015 “for threatening or promoting terrorist acts primarily related to ISIS.”
Another step the company announced is deploying spam-fighting software to identify potential pro-ISIS Twitter accounts, even those that users have not reported for posting explicit or violent content.
“We also look into other accounts similar to those reported,” company officials wrote. “We have already seen results, including an increase in account suspensions and this type of activity shifting off of Twitter.”
That tactic is along the lines of what some in the technology community have been urging. In a conversation with Defense One at SXSW in 2015, data researcher Jonathan Morgan said that Twitter ought to take a more networked approach to ISIS (and other, similar extremist groups), targeting not just individuals who had explicitly violated the terms of service by posting violent content, but also potentially linked accounts.
In its blog post, Twitter also emphasized partnerships with organizations such as People Against Violent Extremism (PAVE) and the Institute for Strategic Dialogue to “empower credible non-governmental voices against violent extremism.”
The moves come after a woman named Tamara Fields filed a complaint against Twitter in a California court in January seeking unspecified damages for “knowingly or with willful blindness” providing material support to ISIS, according to The Hill. Her husband, Loyd Fields, a contractor, was killed in an ISIS attack in Jordan in November. “Without Twitter, the explosive growth of ISIS over the last few years into the most-feared terrorist group in the world would not have been possible,” the complaint reportedly reads.
But Twitter has also been targeted by the extremist group itself. Last March, the group declared virtual war on Twitter co-founder Jack Dorsey. In the post today, the company essentially acknowledged the difficult balancing act of trying to remain an open platform while still restricting some types of users and content.
“There is no ‘magic algorithm’ for identifying terrorist content on the internet,” Twitter wrote in the post, “so global online platforms are forced to make challenging judgement calls based on very limited information and guidance. In spite of these challenges, we will continue to aggressively enforce our Rules in this area, and engage with authorities and other relevant organizations to find viable solutions to eradicate terrorist content from the Internet and promote powerful counter-speech narratives.”