Smoke rises following a Syrian government air strike on rebel positions in eastern Aleppo, Syria, Monday, Dec. 5, 2016. AP Photo/Hassan Ammar

Tech Companies Are Deleting Evidence of War Crimes

Algorithms that take down “terrorist” videos could hamstring efforts to bring human-rights abusers to justice.

If grisly images stay up on Facebook or YouTube long enough, self-appointed detectives around the world sometimes use them to reconstruct a crime scene. In July 2017, a video capturing the execution of 18 people appeared on Facebook. The clip opened with a half-dozen armed men presiding over several rows of detainees. Dressed in bright-orange jumpsuits and black hoods, the captives knelt in the gravel, hands tied behind their backs. They never saw what was coming. The gunmen raised their weapons and fired, and the first row of victims crumpled to the earth. The executioners repeated this act four times, following the orders of a confident young man in a black cap and camouflage trousers. If you slowed the video down frame by frame, you could see that his black T-shirt bore the logo of the Al-Saiqa Brigade, an elite unit of the Libyan National Army. That was clue No. 1: This happened in Libya.
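
The frame-by-frame inspection investigators relied on here is easy to reproduce with off-the-shelf tools. Below is a minimal sketch in Python using OpenCV that dumps a clip into still images for close examination; the file names are hypothetical, and this illustrates the general technique, not the investigators' actual tooling.

```python
# Sketch: split a video into still frames so small details (unit
# patches, logos, terrain) can be examined one image at a time.
# Requires OpenCV (pip install opencv-python); paths are hypothetical.
import cv2
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, every_n: int = 1) -> int:
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    count = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream
            break
        if count % every_n == 0:
            cv2.imwrite(f"{out_dir}/frame_{count:06d}.png", frame)
            saved += 1
        count += 1
    cap.release()
    return saved

# e.g. extract_frames("clip.mp4", "frames", every_n=5)
```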

Facebook took down the bloody video, whose source has yet to be conclusively determined, shortly after it surfaced. But it existed online long enough for copies to spread to other social-networking sites. Independently, human-rights activists, prosecutors, and other internet users in multiple countries scoured the clip for clues and soon established that the killings had occurred on the outskirts of Benghazi. The ringleader, these investigators concluded, was Mahmoud Mustafa Busayf al-Werfalli, an Al-Saiqa commander. Within a month, the International Criminal Court had charged Werfalli with the murder of 33 people in seven separate incidents—from June 2016 to the July 2017 killings that landed on Facebook. In the ICC arrest warrant, prosecutors relied heavily on digital evidence collected from social-media sites.

Werfalli has thus far evaded justice. But human-rights activists still hail the case as a breakthrough for a powerful new tool: online open-source investigations. Even in no-go combat zones, war crimes and other abuses often leave behind an information trail. By piecing together information that becomes publicly accessible on social media and other sites, internet users can hold the perpetrators accountable—that is, unless algorithms developed by the tech giants expunge the evidence first.

Shortly after the Werfalli arrest warrant was issued, Hadi Al Khatib, a Syrian-born open-source investigator based in Berlin, noticed something that distressed him: User-generated videos depicting firsthand accounts of the war in Syria were vanishing from the internet by the thousands. Khatib is the founder of the Syrian Archive, a collective of activists that, since 2014, has been scouring the internet for digital material posted by people caught in Syria’s war zone. The Syrian Archive’s aim is “to build a kind of visual documentation relating to human-rights violations and other crimes committed by all sides during the eight-year-old conflict,” Khatib said in an interview.

In the late summer of 2017, Khatib and his colleagues were systematically building a case against the regime of Bashar al-Assad in much the same way ICC investigators pursued Werfalli. They had amassed scores of citizen accounts, including videos and photos that purportedly showed Assad’s forces targeting hospitals and medical clinics in bombing campaigns. “We were collecting, archiving, and geolocating evidence, doing all sorts of verification for the case,” Khatib recalled. “Then one day we noticed that all the videos that we had been going through, all of a sudden, all of them were gone.”
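
Work like this depends on preserving material before it disappears. The sketch below shows the archiving step in its simplest form, using only the Python standard library; the metadata fields and file layout are assumptions for illustration, not the Syrian Archive's actual schema. Hashing the file yields a fingerprint that lets anyone later verify that a preserved copy is unaltered.

```python
# Sketch: archive a saved clip with a SHA-256 digest and a provenance
# record, so the copy's integrity can be verified later. Standard
# library only; the metadata fields are illustrative, not a real schema.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def archive_clip(video_file: str, source_url: str,
                 archive_dir: str = "archive") -> dict:
    data = Path(video_file).read_bytes()
    digest = hashlib.sha256(data).hexdigest()  # content fingerprint
    dest = Path(archive_dir) / digest
    dest.mkdir(parents=True, exist_ok=True)
    (dest / Path(video_file).name).write_bytes(data)  # preserved copy
    record = {
        "sha256": digest,
        "source_url": source_url,  # where the clip was found
        "archived_at": datetime.now(timezone.utc).isoformat(),
        "original_name": Path(video_file).name,
    }
    (dest / "record.json").write_text(json.dumps(record, indent=2))
    return record
```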

It wasn’t a sophisticated hacking operation by pro-Assad forces that wiped out their work. It was the ruthlessly efficient machine-learning algorithms deployed by social networks, particularly YouTube and Facebook.

With some reluctance, technology companies in Silicon Valley have taken on the role of prosecutors, judges, and juries in decisions about which words and images should be banished from the public’s sight. Lately, tech companies have become almost as skilled at muzzling speech as they are at enabling it. This hasn’t gone unnoticed by government entities that are keen to transform social networks into listening posts. Government, in effect, is “subcontracting” social-media platforms to be its eyes and ears on all kinds of content it deems objectionable, says Fionnuala Ní Aoláin, a law professor and special rapporteur for the United Nations Human Rights Council.

But some of what governments ask tech companies to do, such as suppressing violent content, cuts against other legitimate goals, such as bringing warlords and dictators to justice. Balancing these priorities is hard enough when humans are making judgments in accordance with established legal norms. In contrast, tech giants operate largely in the dark. They are governed by opaque terms-of-service policies that, more and more, are enforced by artificial-intelligence tools developed in-house with little to no input from the public. “We don’t even know what goes into the algorithms, what kind of in-built biases and structures there are,” Ní Aoláin said in an interview.

For years, social networks relied on users to flag objectionable content: hate speech, calls to arms, and other material espousing violence. But as this content filled the platforms’ fringes and spilled into plain view, pressure mounted on Facebook, YouTube, Twitter, and other popular social networks to automate the cleanup. They turned to machine learning, a powerful subset of artificial intelligence that can make sense of huge amounts of data with little to no oversight from human minders.
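
To see why such systems worry investigators, it helps to look at a toy version. The sketch below, built with scikit-learn on invented training examples, has the general shape of a text-based content filter. A real platform model is vastly larger, but it shares the core limitation: it scores the surface features of a post, not the intent behind it.

```python
# Toy content filter (scikit-learn; training examples are invented).
# Real platform classifiers are far bigger, but the weakness is the
# same: the model scores surface features, not intent or context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "join our fighters and strike down the unbelievers",  # propaganda
    "glorious video of the execution of apostates",       # propaganda
    "cute cat compilation part 3",                        # benign
    "my hiking trip to the mountains, day 2",             # benign
]
train_labels = [1, 1, 0, 0]  # 1 = take down, 0 = leave up

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# A clip documenting a war crime shares its vocabulary with the
# propaganda the model was trained to purge, so it will likely be
# flagged for removal as well:
print(model.predict(["video of the execution of detainees by a militia"]))
```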

Designed to identify and take down content posted by “extremists”—“extremists” as defined by software engineers—machine-learning software has become a potent catch-and-kill tool, keeping the world’s largest social networks far more sanitized than they were just a year ago. Google and Facebook break out the numbers in their quarterly transparency reports. Facebook removed 15 million pieces of content it deemed “terrorist propaganda” from October 2017 to September 2018. In the third quarter of 2018, machines performed 99.5 percent of Facebook’s “terrorist content” takedowns; just 0.5 percent of the purged material was reported by users first.

Those statistics are deeply troubling to open-source investigators, who complain that the machine-learning tools are black boxes. Few people, if any, in the human-rights world know how they’re programmed. Are these AI-powered vacuum cleaners able to discern that a video from Syria, Yemen, or Libya might be a valuable piece of evidence, something someone risked his or her life to post, and therefore worth preserving? YouTube, for one, says it’s working with human-rights experts to fine-tune its takedown procedures. But deeper discussions about the technology involved are rare.

“Companies are very loath to let civil society talk directly to engineers,” says Dia Kayyali, a technology-advocacy program manager at Witness, a human-rights organization that works with Khatib and the Syrian Archive. “It’s something that I’ve pushed for. A lot.”

These concerns are being drowned out by a counterargument, this one from governments: that tech companies should clamp down harder. Authoritarian countries routinely impose social-media blackouts during national crises, as Sri Lanka did after the Easter-morning terror bombings and as Venezuela did during the May 1 uprising. But politicians in healthy democracies are also pressing social networks for round-the-clock controls, in an effort to protect impressionable minds from violent content that could radicalize them. Platforms that fail to comply could face hefty fines, and their executives could even face jail time. New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron intend to up the ante at a summit next week, calling on tech executives and world leaders to band together to eliminate extremist content online. After the March 15 mosque massacre in Christchurch, New Zealand, was streamed live on Facebook, countries including New Zealand, Australia, and the United Kingdom passed or proposed sweeping new online-terror laws.

A proposed European Union law has been in the works for months. It would require technology companies to pull down harmful user-generated material—whether words or images—that “incites or solicits the commission or contribution to the commission of terrorist offenses, or promotes the participation in activities of a terrorist group.” That standard is extraordinarily broad. And if the companies don’t eliminate such posts within one hour, they face fines of up to 4 percent of global revenues.
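
To put that penalty in perspective: for a platform with, say, $50 billion in annual global revenue, the ceiling would be $2 billion.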

Human-rights advocates worry about the decisions tech giants and their algorithms will make under such outside pressure. “The danger is that governments will often get the balance wrong,” argued Ní Aoláin. “But actually we have the methods and means to challenge governments when they do so. But private entities? We don’t have the legal processes. These are private companies. And the legal basis upon which they regulate their relationships with their users, whether they’re in conflict zones or not, is determined by [the company’s] terms of service. It’s neither transparent nor fair. Your recourse is quite limited.”

In July, she wrote an open letter to Facebook’s founder, Mark Zuckerberg, finding fault with how Facebook defines terrorism-related content, a key determination in what it decides to flag and take down. From what Ní Aoláin can tell, “they just came up with a definition for terrorism that bears no relationship to the global definition agreed by states, which I think is a very dangerous precedent. I made that very clear in my communications with them.”

When I asked Facebook to comment on Ní Aoláin’s complaint, a company spokesperson shared detailed minutes from a December content-standards forum. The minutes are a remarkable document, one that underscores the complexity of the judgments tech companies are being asked to make as they seek to monetize human interactions on a global scale. Is a terrorist organization one that “engages in premeditated acts of violence against persons or property,” or should the definition expand to include any non-state group that “engages in or advocates and lends substantial support” to “purposive and planned acts of violence”? “It would shock me,” one person at the meeting commented, “if in a year we don’t come back and say we need to refine this definition again.” (A company spokesperson said recently that there’s no update on the matter to announce.)

How the tech giants’ algorithms will implement these subtle standards is an open question. But a new crop of anti-terrorism bills, post-Christchurch, will thrust technology companies into an even more assertive enforcement role. Under the threat of massive fines, tech giants are likely to invest more in aggressive machine-learning content filters to suppress potentially objectionable material. All this will have a chilling effect on those who are trying to expose wrongdoing in war zones.

Khatib, at the Syrian Archive, said the rise of machine-learning algorithms has made his job far more difficult in recent months. But the push for more filters continues. (As a Brussels-based digital-rights lobbyist deadpanned in a separate conversation, “Filters are the new black, essentially.”) The EU’s online-terrorism bill, Khatib noted, sends the message that sweeping unsavory content under the rug is okay; the social-media platforms will see to it that nobody sees it. He fears the unintended consequences of such a law—that in cracking down on content deemed off-limits in the West, it could have ripple effects that make life even harder for those living in repressive societies, or worse, in war zones. Any further crackdown on what people can share online, he said, “would definitely be a gift for all authoritarian regimes. It would be a gift for Assad.”

“On the ground in Syria,” he continued, “Assad is doing everything he can to make sure the physical evidence [of potential human-rights violations] is destroyed, and the digital evidence, too. The combination of all this—the filters, the machine-learning algorithms, and new laws—will make it harder for us to document what’s happening in closed societies.” That, he fears, is what dictators want.

This article is part of “The Speech Wars,” a project supported by the Charles Koch Foundation, the Reporters Committee for Freedom of the Press, and the Fetzer Institute.
