New CRAM Technology Reduces AI Chip Energy Consumption Dramatically

TapTechNews July 31st news: a research team at the University of Minnesota Twin Cities has developed a new Computational Random Access Memory (CRAM) that can cut the energy consumption of AI chips to as little as one-thousandth of current levels.


The International Energy Agency (IEA) projects that AI-related electricity consumption will more than double, from 460 terawatt-hours (TWh) in 2022 to an estimated 1,000 TWh in 2026.

The team explained that traditional AI chips must continuously shuttle data between logic (processing) and memory (storage), and this data movement accounts for a huge share of their power consumption.

The new CRAM memory solves this problem by processing data in place: information never leaves the memory array in which it is stored, and computation is carried out entirely within that array.
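
To make the contrast concrete, here is a minimal toy energy model in Python. The per-word transfer cost and per-operation compute cost are invented assumptions for illustration only, not figures from the University of Minnesota research; the point is simply that when data movement dominates, keeping operands inside the memory array removes most of the cost.

```python
# Toy energy model: a von Neumann design (data shuttled between memory
# and a separate processor) vs. a compute-in-memory design in the spirit
# of CRAM (operations performed inside the memory array).
# All cost constants below are illustrative assumptions, not measurements.

TRANSFER_COST = 100.0  # assumed energy units to move one word over the memory bus
COMPUTE_COST = 1.0     # assumed energy units for one arithmetic operation

def von_neumann_energy(n_ops: int) -> float:
    """Each op reads two operands over the bus, computes, and writes one result back."""
    return n_ops * (2 * TRANSFER_COST + COMPUTE_COST + TRANSFER_COST)

def in_memory_energy(n_ops: int) -> float:
    """Operands and results stay inside the array; only in-array compute is paid for."""
    return n_ops * COMPUTE_COST

ops = 1_000_000
print(f"von Neumann: {von_neumann_energy(ops):,.0f} units")
print(f"in-memory:   {in_memory_energy(ops):,.0f} units")
print(f"ratio:       {von_neumann_energy(ops) / in_memory_energy(ops):.0f}x")
```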

The team said that, compared with traditional approaches, a machine learning inference accelerator based on CRAM can cut energy consumption to one-thousandth, and in some application scenarios to as little as 1/1,700 or 1/2,500.
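
As a back-of-the-envelope illustration of what those factors mean, the short sketch below applies them to a hypothetical workload; the 10 kWh baseline is an invented figure, not data from the research.

```python
# Apply the reported reduction factors (1/1,000, 1/1,700, 1/2,500)
# to a hypothetical inference workload. Baseline is illustrative only.

baseline_kwh = 10.0  # assumed energy of an inference batch on a traditional accelerator

for factor in (1_000, 1_700, 2_500):
    reduced_wh = baseline_kwh / factor * 1_000  # convert kWh to Wh
    print(f"reduction 1/{factor}: {reduced_wh:.1f} Wh")
```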

The team, established in 2003, brings together experts in physics, materials science, computer science, and engineering, and has been developing this technology for more than 20 years.

The research builds on patents related to Magnetic Tunnel Junctions (MTJs), nanostructured devices used in hard drives, sensors, and other microelectronic systems, including Magnetic Random Access Memory (MRAM).


The CRAM architecture overcomes the bottleneck of the traditional von Neumann architecture, in which computing and memory are two separate units, and can meet the performance requirements of diverse AI algorithms more efficiently than conventional systems.

The University of Minnesota team is currently collaborating with leaders in the semiconductor industry to scale up its demonstrations and produce the hardware needed to reduce AI energy consumption at larger scale.
