Engineering researchers at the University of Minnesota Twin Cities have developed a groundbreaking hardware device that could significantly reduce the energy consumed by artificial intelligence (AI) computing applications. Published in the journal npj Unconventional Computing, the research introduces computational random-access memory (CRAM), a new paradigm in which data is processed entirely within the memory array itself, eliminating the power-hungry transfers of data between logic and memory that conventional systems require.
The International Energy Agency forecasts that energy consumption for AI applications will double from 2022 to 2026, reaching a level roughly equivalent to the electricity consumption of Japan. The researchers estimate that a CRAM-based machine-learning inference accelerator could deliver an energy improvement on the order of 1,000x compared with traditional methods. This breakthrough, more than two decades in the making, builds on the team's work with magnetic tunnel junction (MTJ) devices to enhance microelectronics systems. The CRAM architecture performs computation within the memory cells themselves, breaking down the von Neumann bottleneck between computation and memory in traditional computer architecture.
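To build intuition for why eliminating that data movement matters, consider the toy energy model below. The transfer and compute costs are illustrative assumptions, not figures from the paper; the point is only that when moving an operand costs far more energy than operating on it, keeping computation inside the memory array removes the dominant term.

```python
# Toy energy model contrasting a conventional (von Neumann) accelerator,
# which moves every operand between memory and logic, with an in-memory
# architecture like CRAM, where the operation happens at the memory cell.
# All constants are illustrative assumptions, not figures from the paper.

TRANSFER_ENERGY_PJ = 100.0  # assumed cost to move one operand memory <-> logic
COMPUTE_ENERGY_PJ = 1.0     # assumed cost of the logic operation itself

def von_neumann_cost(num_ops: int) -> float:
    """Each op reads two operands from memory, computes, writes one back."""
    moves_per_op = 3  # 2 reads + 1 write cross the memory/logic boundary
    return num_ops * (moves_per_op * TRANSFER_ENERGY_PJ + COMPUTE_ENERGY_PJ)

def in_memory_cost(num_ops: int) -> float:
    """Operands never leave the array; only the compute cost remains."""
    return num_ops * COMPUTE_ENERGY_PJ

ops = 1_000_000
print(f"von Neumann: {von_neumann_cost(ops) / 1e6:.1f} uJ")
print(f"in-memory:   {in_memory_cost(ops) / 1e6:.1f} uJ")
print(f"ratio:       {von_neumann_cost(ops) / in_memory_cost(ops):.0f}x")
```

Under these assumed constants the ratio comes out to roughly 300x; the researchers' estimate of a 1,000-fold improvement reflects the actual device-level costs in their design rather than this simplified accounting.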
Jian-Ping Wang, senior author on the paper and a Distinguished McKnight Professor at the University of Minnesota, reflects on the initial skepticism that surrounded the idea of using memory cells for computing, an idea that has now proven feasible. The interdisciplinary team at the University of Minnesota has been instrumental in advancing the technology, with contributions from researchers in physics, materials science, computer science, and engineering. CRAM offers an energy-efficient digital substrate that can be reconfigured to match a variety of AI algorithms, surpassing traditional building blocks in energy efficiency.
CRAM leverages spintronic devices, which store data using the spin of electrons rather than electrical charge. These devices are more energy-efficient and faster than transistor-based chips, making them a promising substrate for AI computing. The team plans to collaborate with semiconductor industry leaders to demonstrate and scale up the hardware for widespread AI applications. The research was funded by several agencies and companies, including DARPA, NIST, NSF, and Cisco Inc., with nanodevice patterning and simulation work conducted in collaboration with the Minnesota Nano Center and the Minnesota Supercomputing Institute.
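As a rough intuition for how logic can happen inside a spintronic memory array, the sketch below models an MTJ as a two-state resistor (parallel = low resistance, antiparallel = high resistance) and evaluates a three-input majority gate by thresholding the combined conductance of cells driven in parallel. This is a simplified illustration of the general idea behind MTJ-based in-memory logic, not the specific CRAM circuit; the resistance values and threshold scheme are assumptions chosen for the example.

```python
# Highly simplified model of a magnetic tunnel junction (MTJ): a two-state
# resistor whose value depends on whether its free layer is parallel (P,
# low resistance, logic 0) or antiparallel (AP, high resistance, logic 1)
# to its fixed layer. Resistance values below are illustrative only.

R_P = 3_000.0   # ohms, parallel state (assumed)
R_AP = 6_000.0  # ohms, antiparallel state (assumed)

def resistance(bit: int) -> float:
    return R_AP if bit else R_P

def majority3(a: int, b: int, c: int) -> int:
    """Sketch of a 3-input majority gate evaluated 'in the array':
    drive three MTJs in parallel and threshold the combined conductance.
    The threshold sits midway between the 1-of-3 and 2-of-3 cases."""
    conductance = sum(1.0 / resistance(x) for x in (a, b, c))
    g_one_high = 2 / R_P + 1 / R_AP  # exactly one input is 1
    g_two_high = 1 / R_P + 2 / R_AP  # exactly two inputs are 1
    threshold = (g_one_high + g_two_high) / 2
    # Lower conductance (higher total resistance) means more AP cells,
    # i.e. more inputs set to 1.
    return 1 if conductance < threshold else 0

# Majority plus inversion is logically complete, so gates like this can,
# in principle, be composed into arbitrary digital logic within memory.
for bits in [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 1, 1)]:
    print(bits, "->", majority3(*bits))
```

Because the state of each cell is read and combined electrically in place, no operand ever has to travel to a separate logic unit, which is the property that the toy energy model earlier in the article rewards so heavily.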
The development of CRAM represents a significant advance in AI computing, pairing high performance and lower costs with far greater energy efficiency. Because computations occur within the memory cells themselves, the energy-intensive data transfers between logic and memory disappear entirely. This approach has the potential to reshape AI applications and improve the overall efficiency of computing systems, and continued research and collaboration with industry partners are expected to drive the adoption and integration of CRAM technology in the near future.