Analog this: Is an old way the key to solving complex simulations?

DARPA wants a new processor architecture to solve continuous-variable equations that CPU supercomputers aren’t built for.

In this nearly all-digital age, analog systems are a vanishing breed. Certain cars still have analog dashboards. A lot of people prefer analog clock faces to something that looks like the timer on a bomb. And a growing, if still small, number of folks are turning back to vinyl records for their listening pleasure. But those formats are what’s generally regarded as retro—a look back into a time gone by.

In one sense, however, military researchers think analog could hold the key to the future, at least when it comes to improving the extremely complex simulations that fuel research into everything from fluid dynamics to climate change.

The Defense Advanced Research Projects Agency is considering the possibility, and has issued a solicitation looking for new processing paradigms, possibly including analog approaches, for simulations that involve continuous rates of change. The Analog and Continuous-variable Co-processors for Efficient Scientific Simulation, or ACCESS, program is intended to get past the current barriers to computer performance, by any means that works.

“In general, we’re interested in information on all approaches, analog, digital, or hybrid ones, that have the potential to revolutionize how we perform scientific simulations,” Vincent Tang, program manager in DARPA’s Defense Sciences Office, said in a release.

Computationally intensive simulations, typically handled by supercomputers, are an essential part of solving complex mathematical problems. They’re used to measure the acoustic signature of a new ship’s hull design, for instance, or the aerodynamics of a stealth aircraft, or the rate of decay in the nation’s nuclear arsenal. They’re extremely valuable when problems are too complicated to work out theoretically or when real-world tests would be too costly. (The nuclear arsenal is a good example.)

But as powerful as supercomputers are—and they’re getting faster all the time—their computing architectures have hit a wall with regard to certain problems. A supercomputing cluster of thousands of CPUs, each working on a piece of a problem, “is just not designed to solve the kinds of equations at the core of large-scale simulations, such as those describing complex fluid dynamics and plasmas,” Tang said.

Those partial differential equations describe motion, diffusion, equilibrium and other factors that “involve continuous rates of change over a large range of physical parameters,” often over long distances, and as a result aren’t conducive to being broken up into pieces, he said.
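To make that concrete, consider one of the simplest equations of this kind, the heat (diffusion) equation. It is offered here purely as an illustration and does not come from DARPA’s solicitation: it says that the rate at which a quantity such as temperature changes in time at every point depends on how that quantity varies continuously across the surrounding space.

```latex
% Illustrative only: the heat (diffusion) equation, a canonical example of the
% kind of partial differential equation described above (not taken from the RFI).
% u is a field such as temperature, t is time, alpha is the diffusivity, and
% \nabla^2 is the spatial Laplacian (how sharply u curves at each point).
\frac{\partial u}{\partial t} = \alpha \, \nabla^{2} u
```

Splitting the spatial domain for such an equation across thousands of cores means every core must constantly trade boundary values with its neighbors at every step, which is exactly the kind of burden that makes these problems hard to break into pieces.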

That’s where analog approaches might come in. Analog systems date back more than a century, and although digital architectures long ago surpassed them in speed and reliability, they could have one advantage for specific types of problems: they work by manipulating continuously changing values rather than the discrete values digital cores operate on. That’s why DARPA thinks an analog system, combined with other new, advanced technologies in a modern architecture, could surpass CPU-based performance for specific problems.
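For a sense of the contrast, here is a minimal Python sketch, purely illustrative and not part of DARPA’s program, of how a conventional digital processor would march the heat equation above forward: space and time are both chopped into discrete steps, and every value is recomputed step by step. The grid size, time step and diffusivity are made-up values chosen only to keep the example stable.

```python
# Illustrative sketch (not from the RFI): a digital processor approximates the
# continuous heat equation du/dt = alpha * d2u/dx2 by discretizing space into
# grid points and time into steps, then updating every point, step after step.

import numpy as np

alpha = 0.01          # diffusivity (made-up value for illustration)
nx, nt = 100, 500     # number of grid points and number of time steps
dx, dt = 1.0 / nx, 0.001

u = np.zeros(nx)
u[nx // 2] = 1.0      # a spike of heat in the middle of the domain

for _ in range(nt):
    # Discrete approximation of the continuous second derivative d2u/dx2
    # (np.roll gives periodic boundaries, fine for a toy example).
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * alpha * lap   # advance the field by one discrete time step

print(u.max())  # after 500 discrete steps, the spike has diffused outward
```

An analog co-processor, by contrast, would let a physical quantity such as a voltage evolve continuously according to the same relationship and simply read off the result, rather than iterating through discrete updates.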

Whether analog, digital or some combination of the two, however, what DARPA wants is a processor specially designed for these continuous-variable equations. Such a processor “may enable revolutionary new simulation capabilities for design, prediction, and discovery,” the agency said.

In its request for information, DARPA said it is looking for:

  • Scalable, controllable, and measurable processes that can be physically instantiated in co-processors for acceleration of computational tasks frequently encountered in scientific simulation.
  • Algorithms that use analog, non-linear, non-serial or continuous-variable computational primitives to reduce the time, space and communication complexity relative to von Neumann/CPU/GPU processing architectures.
  • System architectures, schedulers, hybrid and specialized integrated circuits, compute languages, programming models, controller designs, and other elements for efficient problem decomposition, memory access, and task allocation across multi-hybrid co-processors.
  • Methods for modeling and simulation via direct physical analogy.

Responses to the RFI are due by April 14.