The next generation of high-performance computers

IARPA wants to develop faster, more durable ways to design new architectures and applications for high-performance computers.

Researchers for the intelligence community want ideas on how to improve modeling and simulation of high-performance computing architectures and applications.

As HPC systems advance, they are becoming more exotic, dynamic, complex and vast, potentially overwhelming traditional methods of designing, testing and optimizing them, according to a June 25 request for information posted by the Intelligence Advanced Research Projects Activity. All that complexity -- from many-core designs to burst buffers, novel networking and parallel computing -- can produce systems made up of disparate computers, storage systems and other large-scale data sources, it said.

“This additional challenge of heterogeneous data sources makes modeling the execution of an application an even more important, but complicated effort,” the RFI said.

IARPA is asking for help with modeling and simulation research that can eventually tackle large-scale computational and data-analytic applications that run on HPC systems. Those models, it said, should be able to act on dynamic information about hardware, power sources, performance, resiliency and other variables, and respond with appropriate trade-offs as those variables change.
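To illustrate the kind of dynamic trade-off behavior the RFI describes, here is a minimal, hypothetical Python sketch. The metric names, thresholds and policy below are assumptions for illustration only and are not part of IARPA's request; a real modeling effort would explore these trade-offs across an entire design space rather than through a simple rule.

```python
from dataclasses import dataclass


@dataclass
class SystemState:
    """Hypothetical snapshot of dynamic HPC telemetry (field names are illustrative)."""
    power_draw_kw: float      # current facility power draw
    power_budget_kw: float    # available power budget
    node_failure_rate: float  # observed failures per node-hour
    throughput: float         # application throughput (jobs/hour)


def choose_tradeoff(state: SystemState) -> str:
    """Toy policy: shift emphasis among performance, power and resiliency as conditions change."""
    if state.power_draw_kw > 0.9 * state.power_budget_kw:
        return "throttle: favor power efficiency over raw performance"
    if state.node_failure_rate > 1e-3:
        return "add checkpointing: favor resiliency over throughput"
    return "scale out: favor performance"


if __name__ == "__main__":
    snapshot = SystemState(power_draw_kw=920.0, power_budget_kw=1000.0,
                           node_failure_rate=2e-3, throughput=140.0)
    # Power draw exceeds 90 percent of budget, so the policy favors power efficiency.
    print(choose_tradeoff(snapshot))
```

The point of the sketch is only the responsiveness IARPA describes: as the telemetry feeding the model changes, the recommended configuration changes with it.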

IARPA also wants input on using machine learning and artificial intelligence to help develop simulations and to model dynamic power, resiliency and other dynamic factors in systems.

Responses are due July 29.