U.S. Air Force Second Lt. Christopher Ahn, a Pilot Training Next student, trains on a virtual reality flight simulator at the Armed Forces Reserve Center in Austin, Texas, June 21, 2018. U.S. Air Force photo by Sean M. Worrell

Solving One of the Hardest Problems of Military AI: Trust

There are many gaps, and most won’t be solved by code but by conversation.

The U.S. Department of Defense is making big bets on artificial intelligence – rolling out new strategies, partnerships, organizations, and budgets to develop the technology for military uses. But as DOD moves to harness this technology, its success may hinge in part on something that is not technical in nature: overcoming the massive gaps in trust around AI. That trust gap is actually many gaps – between humans and machines, the public and the government, the private sector and the government, and among governments – and undertaking the hard task of working through them will be key to integrating AI into national defense. 

In February, DOD rolled out its new AI strategy, coming on the heels of an Executive Order directing the executive branch to prioritize work in the field. The strategy was only the latest sign of a major new emphasis on the technology. Over the past year, DOD has established a new Joint Artificial Intelligence Center and appointed a highly regarded general to lead it, announced a $2 billion Defense Advanced Research Projects Agency program to develop new AI technologies, launched a collaboration with leading robotics and autonomous-technology experts at Carnegie Mellon University, and stood up a four-star Army Futures Command in the tech hub of Austin, Texas. These initiatives come in the wake of several years of the Pentagon deepening ties with Silicon Valley, most notably through its Defense Innovation Unit, a small cell that works with the most innovative tech companies to adapt their technologies for DOD use, and through invitations to tech heavyweights like Amazon CEO Jeff Bezos and Google executive Eric Schmidt to join its innovation advisory board.

These moves all come in an environment in which China and Russia have made AI a clear priority, and they reflect a realization, emphasized by DOD officials across two administrations, that harnessing the most sophisticated AI and autonomous technologies is key to keeping the edge in an increasingly intense technological arms race. Yet as DOD makes these investments in technology, actually integrating these technologies into the military will require investing in trust.

To a technologist, the concept of trust in AI is nothing new; new technologies often face human-machine trust challenges. Yet AI and autonomy will force a deeper human-machine reckoning than anything we have grappled with to date. At the core of this challenge is that machine learning, which powers AI, is fundamentally different from human learning. Machine learning often relies on pattern detection made possible by ingesting massive amounts of data, rather than the inferential reasoning that defines human intellect. To use an oft-cited explanation, a human recognizes a cat as a cat because it carries certain feline characteristics in its appearance and movement, while a computer recognizes a cat as a cat because it looks like other objects classified as cats in the massive data library the AI has trained on. It's an elementary example, but it illustrates how the way a machine reaches its conclusions can create real challenges, because users may not trust those conclusions. What if this is a cat like none the machine has ever seen? What if it's a dog groomed in a particularly cat-like way? Further, AI is generally not set up to explain its reasoning to a skeptical user or to assure that user that it has reached the right conclusion.

Some of this trust gap is the natural course of technology uptake, in which humans struggle to trust new inventions until they build a demonstrated track record of safety. But the challenge is particularly acute in the military, where commanders, or even the machine itself, may have to make life-and-death decisions on the basis of information provided by an AI-enabled system. Indeed, these risks have been exposed in dramatic fashion by recent experiments showing that changing just a few pixels in an image can make a school bus look like a tank to an AI-enabled analytic tool, potentially with disastrous consequences. It's easy to see how adversaries might exploit these dynamics to fool AI, which only widens the trust gap further. While many critics frame the human-machine trust problem around more advanced uses of AI, such as autonomous weapons, these examples demonstrate that military commanders may be reluctant to trust even the simplest AI applications, things like image classification or data translation.
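To make the pixel-flipping problem concrete, the sketch below shows roughly how an adversarial perturbation can be generated against an image classifier using the well-known fast gradient sign method. The model, inputs, and epsilon value are illustrative assumptions, not details drawn from any DOD system.

```python
# Minimal sketch of an adversarial perturbation (fast gradient sign method).
# Assumes PyTorch and some differentiable image classifier `model`; the
# epsilon value and tensor shapes are illustrative, not from any real system.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged so the classifier is likely to err,
    even though the change is nearly invisible to a human viewer."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Illustrative use: a correctly classified "school bus" image may come back
# labeled as something else entirely after the perturbation.
# perturbed = fgsm_perturb(model, bus_image, bus_label)
```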

Yet for all the challenges here, the solutions are largely technical, focused on improving the technology and building a better human-machine interface. The Intelligence Community and DOD have already begun developing technologies that would allow AI to better explain its reasoning. To be sure, there are still divisions within the tech community about how to think about human-machine trust, but if AI engineers focus on the problem, they may well solve it. Trust is much trickier when it's applied to how humans interact around AI. In particular, the U.S. government's ability to harness AI for national defense may rely on its ability to foster trust with the American public, the private sector, and foreign governments.
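As one hypothetical illustration of what "explaining its reasoning" might look like in practice, the sketch below computes a simple gradient-based saliency map, highlighting which pixels most influenced a classifier's decision so a human reviewer can sanity-check it. The approach and names are assumptions for illustration, not a description of any tool the Intelligence Community or DOD is actually building.

```python
# Minimal sketch of one common explainability technique: a gradient-based
# saliency map. Assumes PyTorch and an arbitrary image classifier `model`;
# this is illustrative only, not any agency's actual tooling.
import torch

def saliency_map(model, image, target_class):
    """Score each pixel by how strongly it influenced the model's output
    for `target_class`, so an analyst can see what the model 'looked at'."""
    image = image.clone().detach().requires_grad_(True)
    score = model(image)[0, target_class]   # logit for the class of interest
    score.backward()
    # Largest absolute gradient across color channels marks the pixels
    # that most affected the decision.
    return image.grad.abs().max(dim=1).values.squeeze(0)
```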

First is the American public, which, while generally sanguine about the prospects for AI to improve everyday life and more trusting of the military than of other public institutions, has in recent years shown increasing reservations about the use of advanced technology for national security purposes. A vocal plurality of the American public and the media has consistently opposed the U.S. lethal drone program. And the American public largely reacted with outrage after Edward Snowden disclosed classified documents showing a massive U.S. effort to collect data on Americans' communications and analyze it with big-data tools. There is not yet extensive public polling exploring Americans' views on AI in national security; at this point, their understanding may not be much more nuanced than what they have seen in sci-fi films like "The Terminator." But it is certainly important to engender as much trust as possible, so Americans can have faith that their government is not creating an AI-enabled surveillance state or an army of uncontrolled killer robots.

Lessons from the controversies over the drone and surveillance programs provide a playbook for building trust, through a mixture of public transparency, clear policies governing such programs, effective engagement with civil society, and appropriate congressional oversight. Trust begins with transparency, with leaders explaining why it is essential to integrate AI into national security and laying out clear limits on what AI can be used for and what controls will be in place to prevent misuse. To the government's credit, some of this dialogue has already begun, most notably with senior DOD officials giving speeches that put AI in context and assure the public that human beings will always be involved in decisions on lethal action. DOD should continue this dialogue with the public, as well as with civil-society groups like Human Rights Watch, which have made constructive recommendations on how to properly govern the new technology. Such engagement should be reinforced by official guidance, ideally coming from the President, that clearly sets out parameters for how AI may be used and how misuse will be prevented. Congress must play a role too, first by getting smart on the technology and the challenges it presents (a very steep learning curve, if the recent tech-related hearings are any indication) and then by providing oversight of DOD and considering legislation to ensure AI stays within appropriate bounds.

Even with all of these steps, parts of the public will remain understandably leery about the prospect of robotic warfare. Some of this is inevitable, but DOD can help overcome the skepticism by taking more care with the language it uses to discuss AI. DOD leaders, for example, have frequently insisted that there will "always be a human in the kill chain," and key AI programs have code names like Overlord, Ghost Fleet, and Undertaker. Assuaging the concerns of those who fear a world of robotic warfare may require making sure that every discussion doesn't sound like foreshadowing of a post-apocalyptic future.

Running in tandem with building trust with the public is earning the confidence of private industry, particularly the Silicon Valley companies that are skeptical about working with the U.S. government on national-security issues, both generally and specifically on AI. After a revolt by 4,000 of its employees, Google terminated its participation in DOD's Project Maven, an initiative that used AI to improve the military's target-assessment capabilities, helping it better distinguish between combatants and civilians in drone footage. Elon Musk remains one of AI's most outspoken critics, arguing that AI could be more dangerous than nuclear warheads or North Korea and warning that machines could learn at a rate that outstrips humans' ability to control them. This reticence of the most innovative sector of the American economy to collaborate with the government is a far cry from earlier generations, in which industrial giants were the backbone of national defense.

Whatever the roots of this distrust, overcoming it will not be easy. Beyond the public confidence-building measures described above, DOD will need to make a concerted effort, building on the Defense Innovation Unit's successes and the personal overtures from Secretaries Ash Carter and James Mattis, to build relationships with key players and hubs in the Valley. Cultivating trust with a larger network of credible executives who have spoken out about the importance of the tech sector supporting national security, like Palantir CEO Alex Karp and Michael Bloomberg, may provide top cover for others to speak on DOD's behalf. The traditional defense sector can play a role as well, perhaps by hiring leading civilian tech firms as partners and subcontractors, investing in leading AI firms (as Lockheed's and Boeing's venture-capital arms have done), and impressing upon them the sense of mission and patriotism that pervades most defense contractors.

Ultimately, however, a larger evolution in thinking may need to take place before the tech sector is fully on board. New technologies with potential military applications have rarely, if ever, been fully excluded from military use. Indeed, just over the past century, promising civilian-developed technologies (e.g., the airplane) have been adapted for military use, while key civilian technologies (e.g., the internet) began as national-security projects. Further, our rivals in AI, China and Russia, don't appear to have the same scruples about integrating AI into national security. The transfer of technology is inevitable, so the question for leading tech firms should be whether they want to be involved in designing military uses of AI from the ground up, so that those uses are as effective and ethical as possible, or leave that work to others who may be less skilled or have fewer scruples.

Finally, DOD and the State Department should engage in earnest in the hard work of establishing international norms around AI and national security, which will be key to overcoming trust issues among nations. The United Nations has already begun this dialogue by convening a Group of Governmental Experts on Lethal Autonomous Weapons Systems, which is evaluating and beginning to set parameters around the development and employment of autonomous weapons. Deepening the U.S. government's early engagement with these dialogues will be critical to ensuring that AI is employed within appropriate legal and ethical bounds and that international norms reflect both the reality of the technology and the national-security needs of the nations developing it. Such a realistic framework and set of norms, developed through a collaborative process, is more likely to be accepted by the nations building this technology.

The U.S. government and the UN should also be thinking ahead to how any arms-control regime for lethal autonomous weapons, once developed, would be enforced. This will be much more complicated than enforcing nuclear arms-control regimes, in which the production of fissile materials and the testing of delivery systems can often be detected from afar. Monitoring the development and employment of autonomous weapons will be much harder, as such weapons may look like conventional military systems, with the autonomy baked into the software that powers them. Those developing arms-control regimes will have to engage with leading technologists early on to devise technical and practical means of monitoring compliance. And as has been the case since the end of World War II, any arms-control and international-security regimes that prove necessary will work only if the United States leads. The State Department should be heavily involved not only in developing the framework for AI in national security but also in cobbling together a coalition of like-minded nations that will form the base of an eventual intergovernmental regime and can apply diplomatic pressure to the more reticent members of the international community.

As AI becomes integral to so many aspects of our daily technology experience, so too will it become increasingly integral to our national-security apparatus. Although the technology's greatest promise may still be years away, now is the time to begin building a framework for its effective governance and for trust in its deployment.