
Are AI Professionals Actually Unwilling to Work for the Pentagon?

A CSET survey finds a more nuanced situation than public perception might indicate.

When Google employees protested their company’s work on Project Maven in 2018, their public letter against the company’s involvement in “the business of war” drew attention to the idea of a “culture clash” between the tech sector and the U.S. Department of Defense. A tense, adversarial relationship makes headlines — but how many AI professionals are actually unwilling to work with the U.S. military?

Recent research by the Center for Security and Emerging Technology, a policy research organization within Georgetown University’s Walsh School of Foreign Service, suggests a more nuanced relationship, including areas of potential alignment. A CSET survey of 160 U.S. AI industry professionals found little outright rejection of working with DOD. In fact, only 7 percent of respondents felt extremely negative about working on DOD AI projects, and only a few expressed absolute refusal to work with DOD.

AI professionals acknowledge several reasons to work on DOD-funded research. A majority see an opportunity to do good and are particularly drawn to projects with humanitarian applications. Many also see professional benefits, including the promise of working on hard problems, especially the kind not being explored in the private sector. In their own words, surveyed AI professionals note opportunities to “expand the state of the art without market forces” or do “research which doesn’t have an immediate commercial application.” DOD has long recognized its ability to offer intellectually and technically challenging problems as its ace in the hole when it cannot compete with private-sector salaries.

On the whole, professionals who were more familiar with Defense Department AI projects or had experience working on DOD-funded research were more positive about working on DOD AI projects specifically. These impressions could be a testament to the work done by the Defense Innovation Unit and the Defense Innovation Board to build bridges with tech companies and streamline the contracting and procurement process.

That said, many surveyed AI professionals are simply not familiar with DOD’s AI research and development efforts. Sixty-seven percent are not at all familiar with DOD’s new ethical principles for AI, and 45 percent report that they have never worked at an organization doing DOD-funded work. Only 27 percent have ever worked directly on a DOD-funded project.

This lack of awareness among AI professionals, combined with concerns about how AI developed in U.S. military projects might be used, suggests DOD could do more to communicate its AI priorities. While connecting with technology companies has been a key priority for the Pentagon in recent years, many DOD efforts remain shrouded in mystery, causing AI professionals to question the motives behind DOD funding and feeding fears that working with the U.S. military on AI is akin to “expanding the efficiency of the murderous American war machine,” as one surveyed professional put it.

Part of the challenge is dispelling misconceptions. For instance, some AI professionals are concerned that collaborating with DOD means creating “autonomous killer drones” or “weaponized research [without] human in the loop circuit breakers.” Yet recent CSET research on U.S. military investments in autonomy and AI found that these investments chiefly support systems that complement and augment human intelligence and capabilities rather than replace or displace them.

Indeed, the U.S. military sees many benefits to human-machine teaming, including reducing risk to service personnel, improving performance and endurance by reducing the cognitive and physical load, and increasing accuracy and speed in decision-making and operations. By focusing the conversation on solving problems related to human-machine teaming, DOD could allay AI professionals’ fears about misuse and safety as well as engage with ongoing debates in the AI research community about collaborative human-AI systems. Moreover, this focus may appeal to AI professionals interested in working on research that makes “defense better or more efficient [by] reducing our casualties...increasing deterrence and shortening engagement,” as one survey respondent wrote.

Better messaging from DOD is not a panacea; some AI professionals may never want to work with the U.S. military, while others will need additional assurances regarding the potential impact of their research. But shifting the conversation to areas of shared interest such as human-machine teaming can help demystify DOD’s activities in AI and perhaps foster a more collaborative relationship between the two communities. 

Dr. Catherine Aiken is a Survey Specialist at the Center for Security and Emerging Technology where she manages the design and distribution of all CSET surveys.

Dr. Margarita Konaev is a Research Fellow at the Center for Security and Emerging Technology, specializing in military applications of AI and Russian military innovation.