U.S. Air Force Second Lt. Christopher Ahn, a Pilot Training Next student, trains on a virtual reality flight simulator at the Armed Forces Reserve Center in Austin, Texas, June 21, 2018. U.S. Air Force photo by Sean M. Worrell

Solving One of the Hardest Problems of Military AI: Trust

There are many gaps, and most won’t be solved by code but by conversation.

The U.S. Department of Defense is making big bets on artificial intelligence – rolling out new strategies, partnerships, organizations, and budgets to develop the technology for military uses. But as DOD moves to harness this technology, its success may hinge in part on something that is not technical in nature: overcoming the massive gaps in trust around AI. That trust gap is actually many gaps – between humans and machines, the public and the government, the private sector and the government, and among governments – and undertaking the hard task of working through them will be key to integrating AI into national defense. 

In February, DOD rolled out its new AI strategy, coming on the heels of an Executive Order directing the executive branch to prioritize work in the field. The strategy is only the latest sign of a major new emphasis on the technology. Over the past year, DOD has established a new Joint Artificial Intelligence Center and appointed a highly regarded general to lead it, announced a $2 billion Defense Advanced Research Projects Agency program to develop new AI technologies, launched a collaboration with leading robotics and autonomous-technology experts at Carnegie Mellon University, and stood up a four-star Army Futures Command in the tech hub of Austin, Texas. These initiatives come in the wake of several years of the Pentagon deepening its ties with Silicon Valley, most notably through its Defense Innovation Unit – a small cell that works with the most innovative tech companies to adapt their technologies for DOD use – and by inviting tech heavyweights like Amazon CEO Jeff Bezos and Google executive Eric Schmidt to join its innovation advisory board.

These moves come in an environment in which China and Russia have demonstrated a strong prioritization of AI, and they reflect a realization, emphasized by DOD officials across two administrations, that harnessing the most sophisticated AI and autonomous technologies is key to keeping the edge in an increasingly intense technological arms race. Yet as DOD makes these investments in technology, actually integrating them into the military will require investing in trust.

To a technologist, the concept of trust in AI is nothing new; new technologies often face human-machine trust challenges. Yet AI and autonomy will force a deeper human-machine reckoning than anything we have grappled with to date. At the core of this challenge is that machine learning, which powers AI, is fundamentally different from human learning. Machine learning often relies on pattern detection made possible by ingesting massive amounts of data, rather than the inferential reasoning that defines human intellect. To use an oft-cited explanation: a human recognizes a cat as a cat because it carries certain feline characteristics in its appearance and movement, while a computer recognizes a cat as a cat because it resembles other objects classified as cats in the massive data library the AI has trained on. It’s an elementary example, but it illustrates how differences in the way a machine reaches conclusions can create real challenges, because users may not trust those conclusions. What if this is a cat like none the machine has ever seen? What if it’s a dog groomed in a particularly cat-like way? Further, AI is generally not set up to explain its reasoning to a skeptical user or to offer assurance that it has reached the right conclusion.

Some of this trust gap is the natural course of technology uptake, in which humans struggle to trust new inventions until they have a demonstrated track record of safety. But the challenge is particularly acute in the military, where commanders – or even the machine itself – may have to make life-and-death decisions on the basis of information provided by an AI-enabled system. Indeed, these risks have been exposed in dramatic fashion by recent experiments showing that, with a change of a few pixels in an image, a school bus can be made to look like a tank to an AI-enabled analytic tool, with potentially disastrous consequences. It’s easy to see how adversaries might exploit these dynamics to fool AI, which only widens the trust gap further. While many critics frame the human-machine trust problem around the most advanced uses of AI, such as autonomous weapons, these examples show that military commanders may be reluctant to trust even the simplest AI applications, such as image classification or translation.
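
That "few pixels" vulnerability is a well-documented phenomenon known as an adversarial example. The sketch below illustrates the basic mechanics with the classic Fast Gradient Sign Method; it is a toy, assuming a recent PyTorch and torchvision install, using a placeholder model and random data rather than any system described in this article.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Untrained stand-in classifier; a real attack would target a trained model.
model = models.resnet18(weights=None).eval()

def fgsm_perturb(image, label, epsilon=0.01):
    """Return a copy of `image` nudged to increase the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel in the sign of its gradient: the direction that most
    # increases the loss, pushing the model toward a wrong answer while
    # changing each pixel by at most `epsilon`.
    return (image + epsilon * image.grad.sign()).detach()

# Illustrative usage with random placeholder data.
x = torch.rand(1, 3, 224, 224)   # placeholder "image"
y = torch.tensor([0])            # placeholder true-class index
x_adv = fgsm_perturb(x, y)
print(model(x).argmax().item(), model(x_adv).argmax().item())
```

The point of the sketch is not the specific attack but the asymmetry it reveals: a change too small for a human to notice can move a model across a decision boundary, which is exactly what makes commanders hesitate to rely on such systems.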

Yet for all the challenges here, the solutions are largely technical, focused on improving the technology and building a better human-machine interface. The Intelligence Community and DOD have already begun developing technologies that would allow AI to better explain its reasoning; a toy example of one such technique is sketched below. To be sure, there are still divisions within the tech community about how to think about human-machine trust, but if AI engineers focus on it, they may well solve these problems. Trust is much trickier when applied to how humans interact around AI. In particular, the U.S. government’s ability to harness AI for national defense may rest on its ability to foster trust with the American public, the private sector, and foreign governments.
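
To make "explaining its reasoning" more concrete, one of the simplest techniques in this family is a gradient saliency map, which highlights the input pixels that most influenced a classifier's decision. This is an illustrative sketch in PyTorch, not the approach any agency actually fields; it would work with the stand-in classifier and placeholder image from the previous sketch.

```python
import torch

def saliency_map(model, image, target_class):
    """Per-pixel influence on the model's score for `target_class`.

    Large values mark pixels whose small changes most move the output:
    a crude, first-generation answer to "what was the model looking at?"
    """
    image = image.clone().detach().requires_grad_(True)
    score = model(image)[0, target_class]   # scalar logit for one class
    score.backward()
    # Collapse the color channels to one importance value per pixel.
    return image.grad.abs().amax(dim=1)

# Illustrative usage, reusing `model` and `x` from the sketch above:
#   heat = saliency_map(model, x, target_class=0)
```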

First is the American public, which, while generally sanguine about the prospects for AI to improve their lives and more trusting of the military than of other public institutions, has in recent years shown increasing reservations about the use of advanced technology for national-security purposes. A vocal plurality of the American public and the media has consistently opposed the U.S. lethal drone program. And the American public largely reacted with outrage after Edward Snowden disclosed classified documents showing a massive U.S. effort to collect data on Americans’ communications and analyze it with big-data tools. There is not yet extensive polling on Americans’ views of AI in national security; at this point, those views may be shaped less by nuance than by sci-fi films like “The Terminator.” But it is certainly important to engender as much trust as possible, so Americans can have faith that their government is not creating an AI-enabled surveillance state or an army of uncontrolled killer robots.

Lessons from the controversies over the drone and surveillance programs provide a playbook for building trust: a mixture of public transparency, clear policies governing the program, effective engagement with civil society, and appropriate congressional oversight. Trust begins with transparency – with leaders explaining why it is essential to integrate AI into national security and laying out clear limits on what AI can be used for and what controls will prevent misuse. To the government’s credit, some of this dialogue has already begun, most notably with senior DOD officials giving speeches that put AI in context and assure the public that human beings will always be involved in decisions on lethal action. DOD should continue this dialogue with the public, as well as with civil-society groups like Human Rights Watch, which have made constructive recommendations on how to govern this new technology. Such engagement should be reinforced by official guidance, ideally from the President, that clearly sets out parameters around AI and its misuse. Congress must play a role too, first by getting smart on the technology and the challenges it presents – a very steep learning curve, if recent tech-related hearings are any indication – and then by overseeing DOD and considering legislation to ensure AI stays within appropriate bounds.

Even with all of these steps, parts of the public will remain understandably leery about the prospect of robotic warfare. Some of this is inevitable, but DOD can help overcome the skepticism by taking more care with the language it uses to discuss AI. DOD leaders have frequently insisted that there will “always be a human in the kill chain.” Key AI programs have code names like Overlord, Ghost Fleet, and Undertaker. Assuaging the concerns of those who fear a world of robotic warfare may require making sure that every discussion doesn’t sound like the foreshadowing of a post-apocalyptic future.

Running in tandem with building trust with the public is earning the confidence of private industry, particularly the Silicon Valley companies that are skeptical about working with the U.S. government on national security, both generally and on AI specifically. After a revolt by 4,000 of its employees, Google ended its participation in DOD’s Project Maven, an initiative in which AI would improve the military’s target-assessment capabilities, allowing it to better distinguish between combatants and civilians in drone footage. Elon Musk remains AI’s most outspoken critic, arguing that AI could be more dangerous than nuclear warheads or North Korea and warning that machines could learn at a rate that outstrips humans’ ability to control them. This reluctance of the most innovative sector of the American economy to collaborate with the government is a far cry from earlier generations, in which industrial giants were the backbone of national defense.

Whatever the roots of this distrust, overcoming it will not be easy. Beyond the public confidence-building measures described above, DOD will need to make a concerted effort – building on DIU’s successes and the personal overtures of Secretaries Ash Carter and James Mattis – to build relationships with key players and hubs in the Valley. Cultivating a larger network of credible executives – like Palantir CEO Alex Karp and Michael Bloomberg, who have spoken out about the importance of the tech sector supporting national security – may provide top cover for others to speak on DOD’s behalf. The traditional defense sector can play a role as well, perhaps by hiring leading civilian tech firms as partners and subcontractors, by investing in leading AI firms (as Lockheed’s and Boeing’s venture-capital arms have done), and by impressing upon them the sense of mission and patriotism that pervades most defense contractors.

Ultimately, however, a larger evolution in thinking may need to take place before the tech sector is fully on board. New technologies with potential military applications have rarely, if ever, been fully excluded from military use. Just over the past century, promising civilian-developed technologies (e.g., the airplane) have been adapted for military use, and key civilian technologies (e.g., the internet) were initially developed by the national-security sector. Further, our rivals in AI, China and Russia, do not appear to share these scruples about integrating AI into national security. The transfer of technology is inevitable, and so the question for leading tech firms should be whether they want to help design military uses of AI from the ground up, so that they are as effective and ethical as possible, or leave that work to others who may be less skilled or less scrupulous.

Finally, DOD and the State Department should engage in earnest in the hard work of establishing international norms around AI and national security, which will be key to overcoming trust issues among nations. The United Nations has already begun this dialogue by convening a Group of Governmental Experts on Lethal Autonomous Weapons Systems, which is evaluating and beginning to set parameters around the development and employment of autonomous weapons. Deepening the U.S. government’s early engagement with these dialogues will be critical to ensuring that AI is employed within appropriate legal and ethical bounds, and that international norms reflect both the reality of the technology and the national-security needs of the nations developing it. A realistic framework and set of norms, developed through a collaborative process, is more likely to be accepted by the nations developing this technology.

The U.S. government and the UN should also be thinking ahead to how any arms-control regime focused on lethal autonomous weapons would be enforced. This is much more complicated than enforcing nuclear arms-control regimes, in which the production of fissile materials and the testing of delivery systems can often be detected from afar. Monitoring the development and employment of autonomous weapons will be much harder, as these weapons may look like conventional military systems, with the autonomy baked into the software that powers them. Those developing arms-control regimes will have to engage leading technologists early on to develop technical and practical means of monitoring compliance.

And, as has been the case since the end of World War II, any arms-control and international-security regimes will only work if the United States leads. The State Department should be heavily involved not only in developing the framework for AI in national security but also in cobbling together a coalition of like-minded nations who will form the base of an eventual intergovernmental regime and who can apply diplomatic pressure to the more reticent members of the international community.

As AI becomes part of so many aspects of our daily technology experience, so too will it become increasingly integral to our national security apparatus. Although the technology’s greatest promise may still be years away, now is the time to build a framework for effective governance of this technology and for trust in its deployment.
