Lt. Gen. Michael Groen, Joint Artificial Intelligence Center director, conducts a press briefing about the DOD's efforts to adopt and scale artificial intelligence capabilities, from the Pentagon, Washington, D.C., Nov. 24, 2020. DoD photo by U.S. Air Force Staff Sgt. Jack Sanders

China Is ‘Danger Close’ to US in AI Race, DOD AI Chief Says

JAIC leader stresses that AI ethics guidelines don’t slow down the United States. In fact, they are essential.

The Pentagon must move faster to standardize its data, adopt cloud services, and integrate AI into operations if it is to stay ahead of China in artificial intelligence, the head of the Joint Artificial Intelligence Center, or JAIC, said Tuesday. Beijing is accelerating its Made in China 2025 effort and aims “to be dominant in the AI space in 2030,” Lt. Gen. Michael Groen told a National Defense Industrial Association audience. He noted that Pentagon budgeteers are currently building five-year Program Objective Memorandums out to 2027. “You know, to a Marine, that’s danger close,” Groen said.

Groen said that integrating networks across the Defense Department, fielding new enterprise-level cloud capabilities, and adopting common data standards will be key to helping the U.S. military stay ahead.

“If we are not in an integrated enterprise, we’re going to fail,” he said. “If we’re still flying in hard drives [to remote bases] because it’s more efficient to fly in a hard drive than to connect our networks, that’s a symptom that we’re not where we need to be.”

Groen also devoted a large part of his talk to the Pentagon’s ethical guidelines for AI, which are more detailed and restrictive than many similar lists used by industry players — and certainly more so than any list China’s military has made public. He rebuffed suggestions that these guidelines and restrictions were slowing the development and deployment of AI tools, arguing that only ethical systems will garner the trust of commanders in the field. 

“From an ethical baseline comes trust in AI systems. It comes from a trust-and-verification, test-and-validation environment where you’re actually ensuring that systems work, they work in areas where they’ll be deployed, and they work with other kinds of systems in achieving the overall operations effect you’re trying to achieve,” he said. “If AI is not trusted in the department by...commanders, etc., then it won’t be used, right?”

In a conversation with Defense One to air on Thursday, officials from the JAIC, the CIA, the Defense Intelligence Agency, and the Office of the Director of National Intelligence agreed that paying attention to ethics was a key to adoption of AI tools across government. 

One of the largest ethical challenges is detecting and avoiding bias in AI tool development. Bias can creep in for a variety of reasons, but often it occurs because the data set programmers used to train the AI wasn’t broad or diverse enough. In 2018, Google deployed a photo app that was trained almost exclusively on pictures of White people; it tended to misinterpret pictures of people of other races as non-human.
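For a concrete sense of this failure mode, here is a minimal sketch in Python: a classifier trained on data dominated by one group performs visibly worse on an underrepresented group whose features are distributed differently. The groups, sample counts, and feature shift are all synthetic assumptions for illustration, not anything drawn from the JAIC or Google.

```python
# Minimal sketch of training-data bias, on synthetic data only.
# "Group A" dominates the training set; "group B" is scarce and its
# features follow a shifted distribution, so the learned boundary
# fits A well and misclassifies much of B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """n samples whose true class boundary is offset by `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training set: 950 samples from group A, only 50 from group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Balanced held-out sets expose the accuracy gap between groups.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(1000, shift)
    print(f"{name} accuracy: {model.score(Xt, yt):.2f}")
```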

Alka Patel, the JAIC’s chief of responsible AI, said her office is documenting “critical aspects around the data” it uses: how it was collected, its provenance, its intended use, and so on.
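Patel did not describe the JAIC’s format, but documentation like this is often captured as a structured record per dataset. The sketch below shows one plausible shape in Python; every field name is an illustrative assumption, loosely following the “datasheets for datasets” idea rather than any official schema.

```python
# Hypothetical dataset documentation record; field names are
# illustrative assumptions, not the JAIC's actual schema.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetRecord:
    name: str
    collected_by: str       # who gathered the data
    collection_method: str  # how it was collected
    provenance: str         # origin and chain of custody
    intended_use: str       # the use it was gathered for
    known_gaps: list[str] = field(default_factory=list)

record = DatasetRecord(
    name="overhead-imagery-v1",
    collected_by="example research unit",
    collection_method="aerial electro-optical imagery",
    provenance="internal archive, 2015-2019",
    intended_use="vehicle-detection research",
    known_gaps=["few nighttime images", "single geographic region"],
)
print(json.dumps(asdict(record), indent=2))  # audit-friendly output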

Mikel Rodriguez, who oversees MITRE’s decision science research program, said bias isn’t just a political concern but one that can result in broken algorithms and, ultimately, the abandonment of tools that could be decisive in combat. 

Rodriguez described these challenges as “left of algorithm,” since so many AI tools are trained on poorly constructed datasets before they are even released. “There’s all these potentials for vulnerabilities to be introduced [via] poisoned data sets, etc.,” he said.
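The poisoning risk is easy to demonstrate on toy data. The sketch below flips labels on a tenth of a synthetic training set, a crude stand-in for a real targeted attack, and compares the result with a model trained on clean data; the dataset, model, and 10 percent poison rate are all assumptions for illustration.

```python
# Toy label-poisoning demo: random label flips are a crude stand-in
# for a targeted attack, but still degrade the trained model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# "Attacker" flips labels on 10% of the training examples.
rng = np.random.default_rng(0)
poisoned = y_tr.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[idx] = 1 - poisoned[idx]
dirty = LogisticRegression(max_iter=1000).fit(X_tr, poisoned)

print(f"clean model accuracy:    {clean.score(X_te, y_te):.3f}")
print(f"poisoned model accuracy: {dirty.score(X_te, y_te):.3f}")
```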

One way to protect against such problems is to build a more ethnically diverse coding team. Another is to bring in red teams early in the process to find vulnerabilities.

“From a security perspective, you could weaponize this bias and it’s a real issue in the sense that the companies that we’re working with are really producing incredible algorithms but they tend to be for these types of problems where you have these more balanced datasets, cats versus dogs,” he said. “It’s not these where you’re looking for, say, a rare target, a transporter or a launcher that [the adversary] might only have one or two of.”
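His point about rare targets is the classic class-imbalance trap, and it can be sketched in a few lines: when “launchers” make up 1 percent of the data, a model, or even a do-nothing baseline, can post sky-high accuracy while missing most of the targets that matter. The numbers below are illustrative assumptions.

```python
# Rare-target sketch: with a 99-to-1 class imbalance, accuracy is
# misleading; recall on the rare class tells the real story.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(
    n_samples=5000, n_features=20,
    weights=[0.99, 0.01],  # class 1 ("launcher") is ~1% of samples
    random_state=0,
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

baseline = np.zeros_like(y_te)  # always predict "no launcher"
print(f"always-majority accuracy: {accuracy_score(y_te, baseline):.3f}")
print(f"model accuracy:           {accuracy_score(y_te, pred):.3f}")
print(f"model rare-class recall:  {recall_score(y_te, pred):.3f}")
```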

Rodriguez called it “the Achilles heel of a lot of the crop of existing AI systems.”