Google’s Withdrawal from Pentagon AI Project Risks US Lives, Says Work

Former Deputy Defense Secretary Bob Work speaks at the Defense One Tech Summit at the Newseum in Washington, D.C., on June 26, 2018.

Defense One / Bradley Peniston

Former deputy defense secretary says the tech giant should consider how its work might help save U.S. troops — and how it is currently helping China.

Google’s decision to withdraw from some Pentagon business following an internal revolt is creating a moral hazard for itself and its employees, former Deputy Defense Secretary Bob Work said.

In April, some 3,100 Google employees signed a letter urging the company to forgo work on Project Maven, a pioneering, if still small-scale, Air Force program that uses machine learning and artificial intelligence to sort through surveillance video footage. In May, company officials announced they would do no more work on Maven after their contract runs out next year.

Speaking Tuesday at the Defense One Tech Summit in Washington, D.C., Work said that Google employees who worry the Pentagon’s artificial intelligence will be used to take lives should also consider that withholding their work could put other lives at risk.

“They say, ‘What if the work is ultimately used to take lives?’ But what if it saves American lives? 500 American lives? Or 500 lives of our allies?”

Work said he isn’t hearing the same employees complain that Google also opened an AI research center in China last year.

“Google has a center in China, where they have a concept called civil-military fusion,” he said. “Anything that’s going on in that center is going to be used by the military.”

Work, who stepped down as the Defense Department’s No. 2 civilian leader last July, recently joined the board of advisors of big-data firm Govini. He said that Maven was launched as a pilot program, to help the Pentagon learn how AI could help, and how the military could put it to work.

Related: 2018 Defense One Tech Summit Livestream

“Project Maven was supposed to be the least objectionable” task for a Defense Department AI project, he said. Analyzing the hours and hours of surveillance video taken by drones over combat zones appeared to be that task. “Three seven-person teams working constantly could look through about 15 percent of the tape, so we wanted an algorithm” to help, he said.

Work said he hopes the Maven decision doesn’t foreshadow more such moves by Google or the tech industry generally, and noted that the Pentagon continues trying to persuade Silicon Valley companies to work with the military.

“I was alarmed that it happened. I hope that it’s not a canary in the coal mine. I hope that the [Defense] Department is focused on that, and that…will save lives in a big way.”

Despite Google’s decision on Project Maven, the company continues its vigorous pursuit of military work. In May, company representatives were passing out pitch sheets at SOFIC, the annual special-operations industry conference in Tampa, Florida.

Other Views

Later at the event, several speakers on the “Silicon Valley Meets National Security” panel reacted to Work’s criticism.

Heather Roff, an associate fellow at Cambridge University: “Using the metric ‘lives saved’ starts to obfuscate second, third, and fourth-order effects of using technology. We’re talking big tech, multinational corporations. Aiding the U.S. defense industry might also aid other countries.”

Christina Monaco, chief ventures officer at the National Geospatial-Intelligence Agency: “Peace and prosperity are team sports…But thinking about lives saved as a metric is hard; how many lives do you save if you avoid an action?”

Josh Marcuse, executive director of the Defense Innovation Board: “I just came back from meeting with NATO allies. This is not just an issue for us…We must defend democracy from adversaries that will use AI. I do not want to show up to a smart battlefield with a dumb weapon.”

Marcuse added that the Pentagon needs industry’s help to use AI prudently.

“We applied international and humanitarian law in the undersea domain and in the aerial domain, and we’re going to keep applying it in cyber,” he said. But “AI done properly is really, really dangerous, and we want to be safe practitioners, and want the companies to work with us.”

Capt. Sean Heritage, acting managing partner of DIUx, said of the Google revolt, “The things we are reading about in the press are a small number,” and stressed that he has found patriots in the industry whom he enjoys working with.

He also said that attitudes in the Valley have changed since the immediate aftermath of the Edward Snowden revelations. “A lot of people are willing to work with us.”

