A protestor holds up a banner during Sam Altman's visit to The Cambridge Union to receive the Professor Hawking Fellowship on behalf of OpenAI on November 1, 2023, in Cambridge, England. Nordin Catic / Getty Images For The Cambridge Union

AI has a political problem

The military is growing increasingly enthusiastic about AI. The public, less so.

Left-leaning media outlets are more skeptical of artificial intelligence than right-leaning outlets, a new study shows, a divide that could make a significant difference in voters’ attitudes toward military and government use of AI, as well as in how those technologies are regulated.

The study, published in the journal Social Psychological and Personality Science in September and made public last week, looked at the way media outlets such as the Washington Post, CNN, the New York Post, and The Wall Street Journal discussed AI, paying particular attention to specific sentiment tags to determine whether the coverage was positive or negative. The authors found “that liberal-leaning media show a higher aversion to AI than conservative-leaning media.” “These partisan media differences toward AI are driven by liberal-leaning media’s greater concern about AI’s ability to magnify societal biases,” they wrote.

The authors also note that the social justice protests and campaigns that emerged after the 2020 death of George Floyd had a broad effect on sentiment toward AI.

“The results indicated that this event heightened sensitivity toward social biases in society and, consequently, influenced sentiment toward AI in both liberal and conservative media. Thus, these results provide convergent support for the notion that media reactions to AI are influenced by social bias concerns.”

The results come as the Pentagon is growing more vocal in its ambition to use AI to transform the way it operates on multiple levels, but to do so in line with the ethical principles it first published in 2019.

But the study suggests a possible disconnect between public trust around AI and the Defense Department and Biden Administration messaging around it. Lawmakers, as well as industry leaders like Eric Schmidt, former head of Alphabet, have cast AI development as a critical aspect of the competition between democratic and autocratic states, with the potential to determine economic realities and greatly accelerate military operations.

The findings follow other polls showing that young people—who are increasingly left-leaning in their political views—are also wary of the role the United States plays in the world and a perceived over-reliance on military solutions. The public in general is also increasingly worried about AI and its potential for harm. 

In theory, that poor sentiment toward AI and the military could hurt recruiting or talent acquisition. It could also result in smaller military budgets, further putting the United States behind China. It also suggests that a willingness to acknowledge concerns about social and ethical AI use will be key to winning more hearts and minds. 

On Tuesday, Schuyler Moore, CENTCOM’s chief technology officer, cautioned against the assumption that military operators, officials, or others automatically trust AI.

“I've been worried sometimes that the AI community frames the discussion around trust as something that has to be pre-built or pre-prepared…Trust builds over time and… it can be improved over time if you set expectations early that there will be a performance improvement if you engage with it in different ways,” she said.