Linux containers can minimize cross-domain security headaches

Containers can help expedite and improve the reliability of application and data transfers from the “low side” to the “high side,” allowing developers to safely move applications and data across domains while maintaining a secure pipeline.

Cross-domain security is vitally important for controlling and protecting applications as they pass between unclassified and classified networks, but the act of securing those applications has always been a challenge. Not long ago, it wasn’t unusual to see one developer without a high security clearance yelling to a peer working in a sensitive compartmented information facility (SCIF) about how to code a particular piece of software. While there’s little need for such primitive communications today, there’s still plenty of friction involved when it comes to transferring applications and data from unclassified to classified networks.

Reducing this friction in the cross-domain process is key to balancing efficiency and effectiveness with solid security, and Linux containers can be useful in achieving that balance. They make transfers from the “low side” to the “high side” faster and more reliable, letting developers move applications and data across domains while maintaining a secure pipeline.

Let’s examine two primary causes of friction during the cross-domain transfer process and how containers can help.

Undermining efficiency, increasing frustration

Processes like DevOps and DevSecOps that take place on low-side unclassified networks can produce applications and data continuously and rapidly, delivering critical content at a fast pace. However, when it comes time for those applications to be transferred to high-side classified networks, they’re subject to meticulous, time-consuming analysis, which can impede agencies’ efforts at efficiency and speed.

Even if an application successfully passes from the low to the high side, there’s no guarantee the software will perform as intended or that all of the necessary data components will get transferred. Pieces of code or key features may be inadvertently flagged as malware and never make it over the transom. That’s frustrating for everyone: developers must rectify incomplete applications while managing the flood of new apps being continuously produced, and users on classified networks don’t receive all of the features or data they need to complete their tasks.

Minimizing data transfer

Containers make the data transfer easier and more reliable by stripping the process down to its barest essentials. They help minimize the amount of information that’s transferred, which reduces the chances for errors and smooths the transition from one network to another.

Containers are made up of multiple layers. The base layer includes all of the core components needed to make applications run – the core operating system, for example. On top of the base are additional layers that represent functionality, key features and so on. The base layer of a container remains consistent and hardly ever changes, while the top layers can be updated as necessary.
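
A rough way to picture this: each layer is a set of files stacked on the one below it, with upper layers adding to or overriding what the base provides. The Python sketch below illustrates that idea only, with hypothetical file paths and contents; real container runtimes implement layering with union filesystems rather than anything like this.

```python
# Simplified model of container image layers: each layer maps file
# paths to file contents, and upper layers add to or override what the
# layers beneath them provide. Contents here are hypothetical labels;
# real runtimes use union filesystems, not Python dicts.

base_layer = {
    "/etc/os-release": "minimal OS userland",
    "/usr/bin/python3": "language runtime",
}

feature_layer = {
    "/opt/app/app.py": "application code, v1",
}

update_layer = {
    "/opt/app/app.py": "application code, v2",  # replaces v1
    "/opt/app/config.yaml": "newly added feature flag",
}

def flatten(*layers):
    """Merge layers bottom-up into the filesystem a container sees."""
    view = {}
    for layer in layers:
        view.update(layer)
    return view

# The base never changes between releases; only the upper layers do.
release_1 = flatten(base_layer, feature_layer)
release_2 = flatten(base_layer, feature_layer, update_layer)
print(sorted(release_2))
```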

When developers push an application developed within a container over to the high side, they only need to send the base layer once. All future updates can be sent over in bits and pieces and automatically merged with that base layer. The net effect is that less information crosses the domain at any given time. That results in less data that must be analyzed, which helps to expedite the process and minimizes the chances for error.
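
Because each layer is identified by a digest of its contents, tooling on either side can tell exactly which layers the high side already holds and send only the rest. Here is a minimal sketch of that idea under assumed inputs: the layer archive paths and the file recording high-side digests are hypothetical, and real pipelines track this through standard container image manifests rather than ad hoc scripts.

```python
import hashlib
import json
from pathlib import Path

def layer_digest(path: Path) -> str:
    """Content-address a layer archive by the SHA-256 of its bytes."""
    return "sha256:" + hashlib.sha256(path.read_bytes()).hexdigest()

def layers_to_transfer(image_layers, high_side_digests):
    """Return only the layers the high side does not already hold."""
    pending = []
    for name, path in image_layers:
        digest = layer_digest(path)
        if digest not in high_side_digests:
            pending.append({"layer": name, "digest": digest})
    return pending

if __name__ == "__main__":
    # Hypothetical layer archives, ordered base layer first.
    image = [
        ("base-os", Path("layers/base-os.tar")),
        ("runtime", Path("layers/runtime.tar")),
        ("app-update", Path("layers/app-update.tar")),
    ]
    # Digests recorded from earlier transfers (hypothetical file name).
    already_there = set(json.loads(Path("high-side-digests.json").read_text()))
    print(json.dumps(layers_to_transfer(image, already_there), indent=2))
```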

Employing standardization

One of the reasons moving applications across domains has historically been so painful is that the applications often depend on specialized enterprise resources, including proprietary hardware. Software-defined solutions like containers are a better option because they are standardized and, thanks to their open-source nature, platform and resource agnostic.

The metadata packaged with container images is robust and standardized, and the images themselves are effectively immutable: once built, they hardly ever change. As a result, if developers on the high side need to validate that they received the data and features they expect, they can easily inventory an image’s components against that metadata, confirm everything made it across the domain and match components up correctly and reliably.
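
In practice, that validation can be as simple as checking each received component against the digests recorded in the image’s metadata. The sketch below shows the idea with a deliberately simplified manifest format and hypothetical file names standing in for the full standard metadata.

```python
import hashlib
import json
from pathlib import Path

def verify_transfer(manifest_path: Path, received_dir: Path):
    """Compare received files against the digests the manifest expects.

    The manifest here is a simplified stand-in: a JSON list of
    {"name": ..., "digest": "sha256:..."} entries.
    """
    expected = json.loads(manifest_path.read_text())
    problems = []
    for entry in expected:
        blob = received_dir / entry["name"]
        if not blob.exists():
            problems.append({"name": entry["name"], "issue": "missing"})
            continue
        actual = "sha256:" + hashlib.sha256(blob.read_bytes()).hexdigest()
        if actual != entry["digest"]:
            problems.append({"name": entry["name"], "issue": "digest mismatch"})
    return problems

if __name__ == "__main__":
    # Hypothetical paths; the resulting report can go back to the low side.
    report = verify_transfer(Path("manifest.json"), Path("received"))
    print(json.dumps(report, indent=2))
```

A non-empty report is exactly the kind of feedback described next: a concrete list the high-side team can hand back to the low-side team.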

If a component doesn’t match or parts are missing, the high-side developer can let the low-side developer know that the application came across incomplete. This feedback loop supports continuous development and DevSecOps efforts, and because containerized content is typically cryptographically verifiable, developers can exchange this feedback with minimal security concerns.

Bringing remote teams closer together

Establishing these feedback mechanisms is particularly important today, when so many teams are working remotely. In a traditional environment, most developers would be co-located and working together in teams, likely doing consolidated data transfers and collaborating closely, a true example of a successful DevSecOps operation.

That traditional environment is no more, at least for the time being, and remote work is creating an even greater gulf between low-side and high-side development teams. Low-side developers are now likely submitting components independently, which will need to be reintegrated into applications upon reaching the high side. Using the standardized metadata provided by containers simplifies this reintegration process, as it makes identifying missing or non-matching components much easier. 

Revolutionizing the process

While the days of yelling into the SCIF might be thankfully over, the need for a better solution for cross-domain transfers is as relevant today as it’s ever been. Containers have the potential to revolutionize this process and improve collaboration among development teams, driving more efficient, effective and secure application and data transfers between unclassified and classified networks.