
Robotic device dubbed Digidog on display in Times Square. Lev Radin / Pacific Press / LightRocket via Getty Images

Microchip breakthrough may reshape the future of AI

IBM’s new NorthPole chip could enable smarter, more efficient, network-independent devices, and might even help the U.S. win the microchip war against China.

A prototype microchip design revealed today by IBM could pave the way for a world of much smarter devices that don’t rely on the cloud or the internet for their intelligence. That could help soldiers who operate drones, ground robots, or augmented-reality gear against adversaries who can target electronic emissions. But the new chip—modeled loosely on the human brain—also opens the door to a different sort of AI, one that doesn’t rely on big cloud and data companies like Amazon or Google.

Unlike traditional chips that separate memory from processing circuits, the NorthPole chip combines the two—like synapses in the brain that hold and process information based on their connection to other neurons. Writing in the journal Science, IBM researchers call it a “neural inference architecture that blurs this boundary by eliminating off-chip memory, intertwining compute with memory on-chip, and appearing externally as an active memory.”
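
To make that distinction concrete, here is a minimal, hypothetical sketch in Python (not IBM’s actual design, and all byte counts are made up): it contrasts a conventional accelerator that streams layer weights in from off-chip DRAM for every inference with a weight-stationary, NorthPole-style layout where weights are loaded once and stay resident on-chip.

```python
# Toy illustration, not IBM's design: count off-chip memory traffic for a
# conventional accelerator that re-fetches layer weights from DRAM on every
# inference, versus a weight-stationary layout where weights stay resident
# in on-chip memory and only inputs cross the chip boundary.

LAYER_WEIGHT_BYTES = [4_000_000, 2_000_000, 1_000_000]  # hypothetical layer sizes
INPUT_BYTES = 150_000                                   # hypothetical input frame size

def off_chip_traffic_conventional(num_inferences: int) -> int:
    # Worst case with no caching: all weights re-fetched for every inference.
    return num_inferences * (sum(LAYER_WEIGHT_BYTES) + INPUT_BYTES)

def off_chip_traffic_weight_stationary(num_inferences: int) -> int:
    # Weights are loaded once and kept on-chip; afterwards only inputs move.
    return sum(LAYER_WEIGHT_BYTES) + num_inferences * INPUT_BYTES

for n in (1, 1_000):
    print(f"{n:>5} inferences: conventional {off_chip_traffic_conventional(n):>13,} B, "
          f"weight-stationary {off_chip_traffic_weight_stationary(n):>13,} B off-chip")
```

The gap between the two columns grows with every inference, which is the intuition behind intertwining compute with memory on the same die.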

Why is that important and what does it have to do with the future? Today’s computers have at least two characteristics that limit AI development. 

First, they need a lot of power. Your brain, running on just 12 watts of power, can retain and retrieve the information you need to hold a detailed conversation while simultaneously absorbing, correctly interpreting, and making decisions about the enormous amount of sensory data required to drive a car. But a desktop computer requires 175 watts just to process the ones and zeros of an orderly spreadsheet. This is one reason computer vision in cars and drones is so difficult, and a huge limiting factor for autonomy. This energy inefficiency is also why many of today’s AI tools depend on enormous enterprise cloud farms that consume enough energy to power a small town.
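
For a rough sense of scale, here is that same comparison as simple arithmetic, using only the 12-watt and 175-watt figures cited above (illustrative only).

```python
# Illustrative arithmetic from the figures above: a brain at roughly 12 watts
# versus a desktop machine at roughly 175 watts.
BRAIN_WATTS = 12
DESKTOP_WATTS = 175
HOURS = 8  # one working day

print(f"Brain:   {BRAIN_WATTS * HOURS / 1000:.2f} kWh over {HOURS} hours")
print(f"Desktop: {DESKTOP_WATTS * HOURS / 1000:.2f} kWh over {HOURS} hours")
print(f"Ratio:   {DESKTOP_WATTS / BRAIN_WATTS:.1f}x more power draw")
```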

The second problem is that we’re reaching the atomic limit of how many transistors we can fit on a chip.

Since the end of the vacuum-tube era, computers have gotten more powerful even as they’ve gotten smaller, thanks to a phenomenon called Moore’s Law (named for scientist Gordon Moore). The law says that the number of transistors on an integrated circuit will double roughly every 18 months. That’s what gives the phone in your pocket more power than the room-sized computers of the 1970s, and, of course, the cost of integrated circuits has fallen proportionally. But the laws of thermodynamics trump Moore’s Law, so there is a hard limit to the number of transistors that can fit on an integrated circuit, and therefore, so long as chip architectures remain as they are, a limit to how much better, faster, smaller, and cheaper computers can get. That limit is expected to arrive later this decade.
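
As a back-of-the-envelope illustration of what that doubling implies, the short calculation below projects transistor counts forward from the Intel 4004 of 1971 (roughly 2,300 transistors) using the 18-month period cited above. It is illustrative only; 18 months is the aggressive end of the commonly cited 18-to-24-month range, so the projection overshoots real chips.

```python
# Back-of-the-envelope Moore's Law projection using an 18-month doubling
# period and the Intel 4004 (1971, ~2,300 transistors) as the baseline.
BASE_YEAR, BASE_TRANSISTORS = 1971, 2_300
DOUBLING_PERIOD_YEARS = 1.5

def projected_transistors(year: int) -> float:
    doublings = (year - BASE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASE_TRANSISTORS * 2.0 ** doublings

for year in (1971, 1990, 2010, 2023):
    print(f"{year}: ~{projected_transistors(year):.2e} transistors per chip")
```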

The NorthPole chip prototype may help solve both problems. “What we really set out to do is optimize every joule of energy, every capital cost of a transistor, and every opportunity for a single clock cycle, right? So it's been optimized along these three dimensions, energy, space and time,” IBM senior fellow Dharmendra S. Modha said in an interview. 

The NorthPole chip has 22 billion transistors and 256 cores, according to the paper. There are, of course, chips with more transistors and more cores. But NorthPole’s unique architecture allows it to operate far more efficiently on tasks like processing moving images. Against a comparable chip built on a “12nm silicon technology process node and with a comparable number of transistors, NorthPole delivers 25✕ higher frames/joule,” according to the paper. If you wanted to connect a lot of them in an enterprise cloud environment to run a generative AI program like ChatGPT, you could shrink that cloud down considerably: cloud computing that used to take a massive building of servers could suddenly fit in the back of a plane. You would also need fewer chips for things like small drones and robots.
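
To see what a 25✕ frames-per-joule advantage could mean for a fixed workload, here is a small illustrative calculation; the baseline figure is an assumption, since the paper reports the ratio rather than these absolute numbers.

```python
# What a 25x frames-per-joule advantage means for a fixed workload.
# The baseline value is assumed for illustration only.
BASELINE_FRAMES_PER_JOULE = 40.0
NORTHPOLE_FRAMES_PER_JOULE = 25.0 * BASELINE_FRAMES_PER_JOULE

FRAMES = 30 * 60 * 60  # one hour of 30 fps video

for name, fpj in (("baseline 12 nm chip", BASELINE_FRAMES_PER_JOULE),
                  ("NorthPole-like chip", NORTHPOLE_FRAMES_PER_JOULE)):
    print(f"{name}: {FRAMES / fpj:,.0f} J to process one hour of 30 fps video")
```

That same 25✕ factor is what shrinks a building-sized inference cloud, or lets a small drone run vision models within its battery budget.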

Pentagon interest

IBM has been working on such neuromorphic chips for more than ten years, with funding from DARPA’s SyNAPSE program. The program spent more than $50 million between 2008 and 2014, and DOD has invested another $90 million in the chips since 2019, a top Pentagon research official said.

“This is a prime example of what we would call patient capital,” said Maynard Holliday, assistant defense secretary for critical technologies in the Office of the Under Secretary of Defense for Research and Engineering.

“For us, in the Department of Defense, we’ve always been looking for low…size, weight and power, and then increased speed for our processors. With the advent of generative AI, we recognized that to do [computation] in a low-power fashion, we would need this kind of architecture…especially in a contested environment where our signals may be jammed. GPS may be denied. To be able to do compute at the tactical edge is an advantage.”

Smarter and network-independent chips could vastly improve the ability of various military systems—drones, ground robots, soldier headsets—to perceive and interpret the world around them. They could help ingest a wider variety of data, including audio, optical, infrared, sonar, and LiDAR; and enable the creation of new kinds of sensors, such as “micro power impulse radar,” Holliday said. 

“It can do segmentation, which means it can discern, you know, people in a picture; it could classify sounds for you, again, all at the edge” without the help of the internet, he said. It could also revolutionize self-driving cars, not just for the military but in the commercial sector. “You can think about this from a vehicle standpoint [or from a dismounted soldier standpoint].”

Various military labs, including the Air Force Research Lab and Sandia National Lab, are already looking into uses for the prototype chip, Holliday said. 

NorthPole may even enable the military to do more with fewer and more domestically producible chips. That’s a rising concern as more and more officials warn of a potential Chinese invasion of Taiwan, one of the main suppliers of advanced microprocessors for phones, cars, etc. 

Holliday said NorthPole already rivals the most advanced chips out of Asia, and future versions are expected to be even more efficient.

“NorthPole is at 14 nanometers. You know, the state-of-the-art stuff that's in our iPhones and other commercial electronics is three nanometers. And that's produced all in Asia at TSMC [in Taiwan] and Samsung. And so the fact that this chip is performing the way it does at 14 nanometers bodes very well. As we descend that technology node curve to single digit nanometers, it's just going to be ever better performing.”

But the United States still has to substantially boost its ability to fabricate such chips in large quantities, he said, a process that has barely begun.