In this July 30, 2019, photo, Sandra Swint, right, campus security associate for Fulton County School District, and Paul Hildreth, background, the district’s emergency operations coordinator. AP Photo/Cody Jackson

New Intelligence-Community AI Principles Seek to Make Tools Useful — and Law-Abiding

The promise of artificial intelligence comes with new challenges for the U.S. intelligence community, which needs AI-driven results to be useful to analysts, transparent to lawmakers, and in line with privacy and other laws. 

On Thursday, the Office of the Director of National Intelligence released a list of principles to guide its use of AI and the development of new tools. 

The list and accompanying framework call on the IC to use “explainable and understandable methods”; “mitigate potential undesired bias”; and routinely test and review algorithms to ensure they comply with the law, among other guidelines.

The list shares much with the Defense Department’s ethical principles for artificial intelligence and builds on the intelligence community’s 2019 Augmenting Intelligence Using Machines strategy.

Over the last five years, AI has grown considerably, as has public concern about its use, particularly for national security purposes. Several cities have banned facial recognition technology, for instance. Much of that worry stems from data bias: the possibility that programmers might build a machine-learning algorithm using only data that is easily available or obvious to the programmer, rather than data that provides a full picture of reality. Data bias is one reason why, in 2015, a Google Photos algorithm labeled images of Black people as gorillas.
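As a rough illustration of that failure mode, consider the sketch below. It is not drawn from any real IC system: the data is synthetic, the group labels are hypothetical, and scikit-learn is assumed. A classifier trained on a sample dominated by one group performs well on that group and barely better than chance on the underrepresented one.

```python
# Minimal sketch of sampling bias: a model trained mostly on one group
# generalizes poorly to the group underrepresented in training.
# All data is synthetic; "group A"/"group B" are illustrative labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-class data; `shift` moves this group's distribution,
    so the true decision boundary differs between groups."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# The "easily available" training data: 950 samples from group A, 50 from B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=3.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group separately: group A scores
# high, group B lands near coin-flip accuracy.
for name, shift in [("group A", 0.0), ("group B", 3.0)]:
    Xt, yt = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
```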

Dean Souleles, the chief technology advisor to the principal deputy Director of National Intelligence, said such bias is widespread. “Every single algorithm, every single dataset, has bias,” Souleles said on a phone call with reporters on Thursday.
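If every dataset carries bias, the practical response the principles gesture at is routine measurement. A minimal sketch of that kind of check, with hypothetical records and field names, is to break a model’s error rate out by subgroup rather than reporting a single aggregate number:

```python
# A minimal sketch of a routine bias audit: report error rates per
# subgroup instead of one aggregate figure. Records here are hypothetical
# (group label, model prediction, ground truth) tuples.
from collections import defaultdict

def error_rate_by_group(records):
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

results = [("dialect_a", 1, 1), ("dialect_a", 0, 0),
           ("dialect_b", 1, 0), ("dialect_b", 0, 1)]
print(error_rate_by_group(results))  # {'dialect_a': 0.0, 'dialect_b': 1.0}
```

An aggregate accuracy of 50 percent here would hide the fact that the model is perfect on one dialect and always wrong on the other.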

The bias problem helped shape the guidelines and the framework. The IC’s use of AI is fundamentally different from the way either business or the military uses it. Whereas the military may need some AI to make tactical decisions, such as identifying incoming fire, faster than human speeds, the intelligence community, according to Souleles, can’t outsource decision-making, even if it can use AI to help humans make better decisions. AI in the hands of the IC needs to produce results “that policy makers can interpret and use their human judgement to act on,” he said. That means anyone should be able to understand how the program reached its conclusion.
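One family of “explainable and understandable methods” is models whose per-feature contributions can be read off directly. The sketch below is an illustration of that idea only, with made-up feature names and scikit-learn assumed, not a description of any actual IC tool: a linear model scores an input, and an analyst can see exactly which features pushed the score up or down.

```python
# A minimal sketch of an "explainable" method: a linear model whose
# per-feature contributions to a decision can be inspected directly.
# Feature names and data are illustrative, not from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["mentions_of_topic", "source_reliability", "recency_days"]
X = np.array([[5, 0.9, 2], [0, 0.2, 30], [3, 0.8, 5], [1, 0.1, 60]])
y = np.array([1, 0, 1, 0])  # 1 = flagged for analyst review

model = LogisticRegression().fit(X, y)

# Explain one prediction as a sum of per-feature contributions
# (coefficient * feature value), plus the intercept.
x = np.array([4, 0.7, 3])
for name, c in zip(feature_names, model.coef_[0] * x):
    print(f"{name}: {c:+.3f}")
print(f"intercept: {model.intercept_[0]:+.3f}")
print("flagged" if model.predict([x])[0] else "not flagged")
```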

Potential uses range from highly classified applications to others that could be released to the public, such as translating open-source information. Even there, the need for human guidance remains considerable, since dialects and speech patterns vary tremendously, Ben Huebner, chief of the ODNI Civil Liberties, Privacy, and Transparency Office, told reporters.
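One common way to build that human guidance into a pipeline is a confidence threshold: machine output the system is unsure about goes to a linguist instead of being passed along automatically. The sketch below is a hypothetical illustration of that pattern; the translate() stub and the threshold value stand in for a real machine-translation system.

```python
# Minimal sketch of a human-in-the-loop translation pipeline: output
# below a confidence threshold is routed to a human linguist. The
# translate() stub and threshold are hypothetical stand-ins.
def translate(text: str) -> tuple[str, float]:
    """Stub MT engine: returns (translation, model confidence in [0, 1])."""
    return "translated text", 0.62  # placeholder output

def process(text: str, threshold: float = 0.85) -> str:
    translation, confidence = translate(text)
    if confidence < threshold:
        return f"QUEUED FOR HUMAN REVIEW (confidence {confidence:.2f})"
    return translation

print(process("open-source broadcast transcript"))
```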

Souleles identified five or six areas where intelligence-community needs overlap with those of the private sector. Among them are cybersecurity, countering foreign influence operations, and identifying entities “who are the bad actors of the world that want to do us harm. … If you’re the world’s largest e-commerce vendor on the Internet, you have that same problem: people who want to do harm to your networks and customers. … You need to understand who those actors are and how to characterize them.”