Pentagon wants an imaging sensor that can think

Under the ReImagine program, researchers seek to develop a programmable array of pixels that can use machine learning to react to what they’re seeing, as a way to improve situational awareness.

Digital imagery has come a long way in the last 20 years or so, from early images where you could see the square edges of the pixels to cameras with top-quality lenses and 160 megapixels or more, and phone cameras good enough to shoot professional sporting events. The Energy Department also is developing a 3.2-gigapixel camera to serve as the eye of the Large Synoptic Survey Telescope.

What could be next? How about an imaging sensor that essentially can think, combining data from multiple sensors and using machine learning to adjust the image based on what’s happening in the frame? That’s the idea behind a new Defense Advanced Research Projects Agency program called ReImagine, or Reconfigurable Imaging.

The goal of the program isn’t so much to improve everyday photography as to improve situational awareness for warfighters by combining such inputs as infrared emissions, different resolutions or frame rates and 3D LIDAR (light detection and ranging). The system would use a million pixels in an array the size of a thumbnail, with more than 1,000 transistors giving each pixel a programmable ability to adjust to the image being delivered.

In DARPA’s vision, the pixel array would react autonomously, in real time, with the ability to switch between different sensing modes in a way that no camera can do now.
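To make the idea concrete, here is a minimal sketch in Python of how a programmable array might assign different sensing modes to different regions of a frame. The mode names and the toy decision rule (standing in for an on-chip machine-learning model) are invented for illustration; this is not DARPA's design.

```python
from enum import Enum
import numpy as np

class SensingMode(Enum):
    """Hypothetical sensing modes a reconfigurable tile might support."""
    VISIBLE = "visible"
    INFRARED = "infrared"
    HIGH_FRAME_RATE = "high_frame_rate"
    LIDAR = "lidar"

def choose_mode(tile: np.ndarray) -> SensingMode:
    """Toy rule in place of a real ML model: dark regions switch to infrared,
    busy regions get a higher frame rate, everything else stays visible."""
    if tile.mean() < 0.15:      # too dark for the visible band
        return SensingMode.INFRARED
    if tile.std() > 0.3:        # lots of local variation, likely motion or detail
        return SensingMode.HIGH_FRAME_RATE
    return SensingMode.VISIBLE

def reconfigure(frame: np.ndarray, tile_size: int = 256) -> dict:
    """Split a roughly 1-megapixel frame into tiles and assign each tile a mode,
    mimicking how a programmable array might react to the current scene."""
    modes = {}
    for y in range(0, frame.shape[0], tile_size):
        for x in range(0, frame.shape[1], tile_size):
            tile = frame[y:y + tile_size, x:x + tile_size]
            modes[(y, x)] = choose_mode(tile)
    return modes

if __name__ == "__main__":
    # Simulated 1,024 x 1,024 (~1 megapixel) luminance frame with values in [0, 1].
    frame = np.random.rand(1024, 1024)
    for (y, x), mode in list(reconfigure(frame).items())[:4]:
        print(f"tile at ({y}, {x}) -> {mode.value}")
```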

One key to the program is miniaturization, taking the kinds of sensors now available in larger, individual military platforms and moving them to one small chip. “What we are aiming for is a single, multi-talented camera sensor that can detect visual scenes as familiar still and video imagers do, but that also can adapt and change their personality and effectively morph into the type of imager that provides the most useful information for a given situation,” said Jay Lewis, program manager for ReImagine.

DARPA has posted a Special Notice on the FedBizOpps website and expects to make a Broad Agency Announcement soon, with a Proposer’s Day set for Sept. 30.

Meanwhile, MIT Lincoln Laboratory will work on developing a reconfigurable layer for what will be a three-layer piece of hardware. Potential vendors will be asked to come up with the megapixel detector layers, as well as the software and algorithms for converting signals into digital data, creating a two-way flow of information between sensors and algorithms.

“Even as fast as machine learning and artificial intelligence are moving today, the software still generally does not have control over the sensors that give these tools access to the physical world,” Lewis said. “With ReImagine, we would be giving machine-learning and image processing algorithms the ability to change or decide what type of sensor data to collect.”
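A closed loop of that kind might look something like the following toy sketch, in which a stand-in classifier labels each captured frame and its output is fed back as the sensor configuration for the next capture. Every function, setting and label here is hypothetical, chosen only to illustrate algorithms deciding what sensor data to collect.

```python
import random

# Hypothetical sensor settings an algorithm could request; names are illustrative.
DEFAULT_CONFIG = {"mode": "visible", "frame_rate_hz": 30}

def capture(config: dict) -> dict:
    """Fake sensor readout; a real system would return pixel data for this config."""
    return {"motion_score": random.random(), "light_level": random.random()}

def classify(frame_summary: dict) -> str:
    """Stand-in for an image-processing/ML step that labels the current scene."""
    if frame_summary["motion_score"] > 0.7:
        return "fast_moving_target"
    if frame_summary["light_level"] < 0.2:
        return "low_light"
    return "static_scene"

def next_config(label: str, current: dict) -> dict:
    """The two-way part: the algorithm's output decides what data to collect next."""
    config = dict(current)
    if label == "fast_moving_target":
        config.update(mode="visible", frame_rate_hz=240)   # chase motion with speed
    elif label == "low_light":
        config.update(mode="infrared", frame_rate_hz=30)   # switch bands in the dark
    else:
        config.update(mode="visible", frame_rate_hz=30)
    return config

if __name__ == "__main__":
    config = DEFAULT_CONFIG
    for step in range(5):
        frame_summary = capture(config)
        label = classify(frame_summary)
        config = next_config(label, config)
        print(f"step {step}: scene={label}, next config={config}")
```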