
The US Military Should Red-Team Open Source Code

The Pentagon can reduce threats and increase security by finding and fixing bugs lurking in its software.

The U.S. military routinely engages in red-teaming—searching for weaknesses in its war plans—by having its own members role-play as adversaries. Software security researchers also red-team, using the same adversary mindset to conduct penetration testing and to find and fix flaws in software.

Unfortunately, there’s an aspect of modern U.S. military operations that has so far escaped this devil’s-advocate approach: the open-source software that underpins military missions. 

The secret of all modern software is that it is mostly open-source—that is, code created by enthusiasts (and companies) around the world and released for anyone to study and use. Whether it’s your iPhone app, military mission-planning software, spy-plane computer, or big-data analytic tool, it’s open-source software all the way down.

Building apps with open-source components reduces time and cost. And by exposing its source code, open-source software invites the world to find and even fix the inevitable bugs. But open-source software, like all software, has security flaws. Nearly a decade ago, the Heartbleed bug in OpenSSL let attackers read sensitive server memory, including passwords and private keys, across much of the web. More recently, the log4j flaw let attackers easily take control of affected computers, ranging from Minecraft servers to software from Apple and Amazon.

Malicious actors can and do tamper with open-source software, too. Known cases of open-source software supply-chain compromise alone number in the thousands.

Fortunately, the military’s red-teaming instincts can help reduce the threat. First, the U.S. military ought to undertake a software census to understand the open-source software components embedded in the software it uses. A good model is a recent Harvard University-Linux Foundation census of open-source usage in corporate codebases.
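To give a loose sense of what such a census involves, the toy sketch below walks a codebase and inventories the open-source components its manifest files declare. It is an illustrative assumption, not a real census tool: a production inventory would use software-bill-of-materials formats such as SPDX or CycloneDX, and this script reads only Python `requirements.txt` and Node `package.json` files.

```python
# Toy "software census": map each declared open-source component to the
# manifest files that declare it. Illustrative only; real inventories
# rely on SBOM tooling (SPDX, CycloneDX), not ad hoc manifest parsing.
import json
import pathlib
from collections import defaultdict

def census(root: str) -> dict:
    """Return {component name: set of manifest paths that declare it}."""
    found = defaultdict(set)
    for path in pathlib.Path(root).rglob("*"):
        if path.name == "requirements.txt":
            for line in path.read_text().splitlines():
                line = line.strip()
                if line and not line.startswith("#"):
                    # Keep only the package name, dropping version pins.
                    name = line.split("==")[0].split(">=")[0].strip()
                    found[name].add(str(path))
        elif path.name == "package.json":
            data = json.loads(path.read_text())
            for dep in data.get("dependencies", {}):
                found[dep].add(str(path))
    return dict(found)
```

Even a crude inventory like this makes the next step possible: you cannot red-team a dependency you do not know you have.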

Second, the military should red-team the open-source software components on which it has become dependent. The military could fund organizations like the Open Source Technology Improvement Fund that have a track record of exactly this type of work. In addition, the military could assign its own personnel to help with this task, building the software security skills of its own members. The military could even directly assist the Open Source Security Foundation with a nascent related initiative called Alpha-Omega. Alternatively, open-source software bug bounties, paid by the military, could spur security researchers around the world to find and report bugs.
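To make the red-teaming step concrete, here is a minimal sketch of one workhorse technique, fuzzing: feeding a program large volumes of malformed input and watching for unexpected failures. Everything here is a hypothetical stand-in; `parse_record` is a deliberately buggy toy parser, and real efforts would use coverage-guided fuzzers such as AFL or libFuzzer rather than purely random inputs.

```python
# Minimal random fuzzer. A red team generates many malformed inputs,
# treats documented failures (ValueError) as expected, and flags any
# other exception as a potential bug worth reporting and fixing.
import random
import string

def parse_record(data: str) -> str:
    """Toy length-prefixed parser with a lurking bug: it never checks
    that the input is non-empty before reading the length digit."""
    length = int(data[0])            # IndexError on empty input: the bug
    payload = data[1 : 1 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")
    return payload

def fuzz(trials: int = 1500, seed: int = 7) -> list:
    """Return the generated inputs that crashed the parser unexpectedly."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        candidate = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(0, 12))
        )
        try:
            parse_record(candidate)
        except ValueError:
            pass                     # expected, documented failure mode
        except Exception:
            crashes.append(candidate)  # unexpected: a bug to report
    return crashes
```

The payoff of the exercise is the `crashes` list: each entry is a reproducible test case that can accompany a bug report, or a bug fix, back to the upstream project.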

Third, the security bugs identified should be fixed and fixed quickly. Military members with expertise in software can provide bug fixes directly to the open-source software maintainers. The military could also fund third parties or the maintainers directly to fix the bugs. At the very least, the security flaws should be discreetly reported to the relevant open-source software projects.

Fourth, rinse and repeat. The open-source software that the military depends on will change. Additionally, open-source software projects are constantly evolving, fixing some bugs and inevitably introducing new bugs too. These facts mean that this whole process ought to be repeated periodically.

In the wake of log4j, the open-source software vulnerability that led one observer to declare that “the internet is on fire,” the Open Source Security Foundation recently proposed red-teaming 200 major open-source projects a year at a cost of roughly $40 million annually. For the military, that’s budget dust. In short, with a modest investment in open-source software red-teaming, the military could reduce the security vulnerabilities lurking in its software, improve aggregate software security for Americans and the world, and increase the probability of mission success.

The military tries to never go into battle without red-teaming its plan. It’s time to apply that same technique to open-source software.

John Speed Meyers is a security scientist at Chainguard. Zack Newman is a software engineer at Chainguard. Jacobo McGuire is a summer policy research intern at Chainguard.