
The Pentagon’s AI Chief Is ‘Scared to Death’ of ChatGPT

But other defense leaders are more eager to deploy new artificial-intelligence tools.

Large language models and generative artificial intelligence agents like ChatGPT have captured the public’s attention, but the Defense Department’s chief digital and AI officer said he worries about the profound havoc that such tools could wreak across society.

“I’m scared to death” of how people might use ChatGPT and other consumer-facing AI agents, Craig Martell said Wednesday.

Such tools, which can respond to simple prompts with long text answers, have raised concerns about the end of the academic essay and have even been floated as a better way to answer patients’ medical questions. But they don’t always produce factually sound content: they stitch together plausible-sounding text from human-created sources and can present errors with confidence. Martell, who comes to the job with experience in academia as well as managing machine learning at Lyft, didn’t mince words when asked what large language models like ChatGPT mean for society and national security.

“My fear is that we trust it too much without the providers of that service building into it the right safeguards and the ability for us to validate” the information, Martell said. That could mean people rely on answers and content that such engines provide, even if it’s inaccurate. Moreover, he said, adversaries seeking to run influence campaigns targeting Americans could use such tools to great effect for disinformation. In fact, the content such tools produce is so expertly written that it lends itself to that purpose, he said. “This information triggers our own psychology to think ‘of course this thing is authoritative.’” 

While using such tools can feel like an exchange with a human being, Martell warned that they lack a human understanding of context. That gap is why technologist Aza Raskin was able to pose as a 13-year-old and get an LLM to give him advice on how to seduce a 45-year-old man.

The Chief Digital and Artificial Intelligence Office, which Martell heads, is primarily responsible for the Defense Department’s AI efforts and for all the computer infrastructure and data organization that goes into them. Martell made his comments during AFCEA’s TechNet Cyber event in Baltimore to a room full of software vendors, many of whom were selling AI platforms, tools, and solutions.

“My call to action to industry is: don’t just sell us the generation. Work on detection,” so that users and consumers of content can more easily distinguish AI-generated content from human-created content, Martell said.

In terms of his own priorities for the Defense Department, Martell said the first is putting in place the data-sharing infrastructure and policies that will allow the military to realize its aspirations for Joint All-Domain Command and Control, or JADC2.

“It needs the appropriate infrastructure to allow data to flow in the right places. So if I can set the building of that infrastructure to allow the data to flow back and forth and up and down properly, correctly” across differing levels of classification, that would be a good first step in realizing the vision, he said. Part of that is helping combatant commands get a much better understanding of the data they have, the data they need, and the data they need to share. 

Not everyone in the Defense Department shares Martell’s apprehension about AI and large language models. Just a day earlier, Lt. Gen. Robert Skinner, who leads the Defense Information Systems Agency, or DISA, gave a speech that was partially written by ChatGPT. Speaking to reporters during a roundtable discussion on Wednesday, Skinner said, “I'm not scared generally about it… I think it's gonna be a challenge” for the Defense Department to use AI correctly, but the challenge is one the department can rise to. “What I'm cautious of is: this has to be a national-level issue.”

Steve Wallace, DISA’s chief technology officer, said, “There’s a number of places…that we’re looking to possibly take advantage of [next-generation AI], from back office capabilities and contract generation, data labeling, right?”

But even here, Martell cautioned against being too enthusiastic about the promise of AI, particularly AI tools for labeling data. “They just don't work…What works is human beings who are experts in their field telling the machine this is A; this is B; this is A; this is B; and this is B; and then that's what gets fed into the algorithm generator…to generate a model for you.”

Martell isn’t necessarily opposed to deploying AI, even in very high-stakes settings. His primary concern is that the ease of use of such tools suggests the user doesn’t need to do the hard work of training and monitoring them. AI, in Martell’s view, is a highly human-driven asset.

“No model ever survives first contact with the world. Every model ever built is already stale, by the time you get it. It was trained on old data, historical data, because that's what they had to train…. We need to build tools that allow the systems to be monitored to make sure they're continuing to bring the value that they were paid for in the first place.”