Lightning strikes: Air Force center doubles its supercomputing power

Lab at Wright Patterson unveils a 1.28 petaflop Cray XC30, one of the world’s fastest distributed memory platforms.

AFRL Cray Lightning supercomputer

The Air Force Research Laboratory at Wright Patterson Air Force Base has effectively doubled its supercomputing power with the installation of a 1.28 petaflop Cray XC30 machine dubbed Lightning.

Nicknamed after the F-35 Lightning Joint Strike Fighter, the $20.8 million supercomputer will be paired with the Spirit supercomputer at AFRL's DoD Supercomputing Resource Center. The pair will be used to research and test new weapons systems and capabilities that might be too hazardous or expensive to evaluate with traditional, kinetic methods, and to handle research involving large amounts of data, the Air Force said in a release.

Lightning and Spirit can run modeling programs in computational fluid dynamics, chemistry, nanotechnology, electromagnetics, acoustics and advanced materials and structures, the Air Force said, and can simulate wind-tunnel, chemical and other tests that could be risky.

The Air Force is investing in its DOD High Performance Computing Modernization Program as a cost-effective way of developing and testing new systems. The modernization program is spending $150 million this year on new supercomputing assets, of which Lightning is the first to go online. The supercomputing center at Wright Patterson is the largest of five supercomputing centers in the Air Force.

A petaflop is equal to a quadrillion floating-point operations per second. While 1.28 petaflops doesn't place Lightning among the world's fastest in raw computing terms (China's Tianhe-2 tops the Top 500 supercomputing list at 33.86 petaflops), it is among the world's fastest distributed memory platforms, with a total disk space of 4.5 petabytes, the Air Force said. A petabyte is equal to a million gigabytes of storage, and is large enough, for example, to hold all the DNA information of twice the population of the United States.
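The figures quoted above are easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, using only the numbers from the article (variable names are illustrative):

```python
# Back-of-envelope arithmetic on the article's figures.
PETA = 10**15  # "peta" prefix: a quadrillion

lightning_flops = 1.28 * PETA   # Lightning: 1.28 petaflops
tianhe2_flops = 33.86 * PETA    # Tianhe-2, top of the Top 500 list

# Tianhe-2 is roughly 26x faster in raw floating-point terms.
ratio = tianhe2_flops / lightning_flops

disk_bytes = 4.5 * PETA         # Lightning's 4.5 petabytes of disk
gigabyte = 10**9
disk_gigabytes = disk_bytes / gigabyte  # 4.5 million gigabytes

print(f"Tianhe-2 / Lightning: {ratio:.1f}x")
print(f"Disk space: {disk_gigabytes:,.0f} GB")
```

So "1.28 petaflops" is modest next to the Top 500 leader, but the 4.5-petabyte disk pool works out to 4.5 million gigabytes.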

Changing computational demands have changed some of the ways supercomputers are measured for different tasks. Rather than going just by raw floating-point speed, for example, the Graph 500 organization four years ago began ranking supercomputers by how well they handle the graph-type problems inherent in big data. The group has also developed a Green Graph 500 list of the most energy-efficient supercomputers.
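The Graph 500 benchmark's core kernel is a breadth-first search, scored in traversed edges per second (TEPS) rather than flops. A toy sketch of that metric, on a tiny hypothetical graph (real Graph 500 runs use enormous synthetic Kronecker graphs):

```python
from collections import deque
import time

def bfs(adj, source):
    """Breadth-first search; returns the number of edges traversed."""
    visited = {source}
    queue = deque([source])
    edges_traversed = 0
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            edges_traversed += 1          # every edge inspection counts
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return edges_traversed

# Illustrative 4-node graph as an adjacency list (not the benchmark's input).
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

start = time.perf_counter()
edges = bfs(adj, 0)
elapsed = time.perf_counter() - start
print(f"{edges} edges traversed; {edges / elapsed:.0f} TEPS")
```

Dividing edges traversed by wall-clock time gives TEPS, which rewards fast irregular memory access rather than dense floating-point throughput.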