NEO Semiconductor Unveils 3DX-AI Chip Technology with Massive AI Processing Power and Ultra-Low Power Consumption

TapTechNews, August 6th news: NEO Semiconductor announced its 3DX-AI chip technology on August 5th local time, claiming the technology can deliver up to 100 times the AI processing performance of current HBM memory solutions while cutting power consumption by 99%.


TapTechNews notes that 3DX-AI can be understood as the combination of two techniques: it uses 3D DRAM technology to build the DRAM dies of an HBM-style memory stack, achieving higher capacity, and it embeds a local processor inside each DRAM die, similar to the processing-in-memory (PIM) concept proposed earlier.

On the first front, a single 3DX-AI die contains 300 layers of 3D DRAM cells for a capacity of 128 gigabits (16 GB); stacking 12 dies yields 192 GB per stack, enough to hold larger AI models. By comparison, the largest single-stack capacity of HBM3E memory today is only 36 GB.
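The capacity figures can be sanity-checked with a little arithmetic. Note that the numbers only add up if the per-die capacity is read as 128 gigabits (16 GB), since twelve 16 GB dies give the quoted 192 GB stack:

```python
# Back-of-envelope check of the capacity figures in the article.
# All inputs are the article's numbers; this is an illustrative
# sketch, not vendor data.
GBIT_PER_DIE = 128    # one 3DX-AI die: 128 Gbit of 3D DRAM
DIES_PER_STACK = 12   # 12-die stack
HBM3E_STACK_GB = 36   # largest HBM3E stack cited in the article

die_capacity_gb = GBIT_PER_DIE / 8                    # bits -> bytes
stack_capacity_gb = die_capacity_gb * DIES_PER_STACK

print(f"Per-die capacity: {die_capacity_gb:.0f} GB")          # 16 GB
print(f"Stack capacity:   {stack_capacity_gb:.0f} GB")        # 192 GB
print(f"vs. HBM3E stack:  {stack_capacity_gb / HBM3E_STACK_GB:.1f}x")
```

With these inputs, a 3DX-AI stack works out to roughly 5.3 times the capacity of today's largest HBM3E stack.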


On the second front, NEO Semiconductor says each 3DX-AI die carries a layer of neural circuitry containing 8,000 neuron circuits, which performs AI processing directly inside the 3D memory and greatly reduces the power consumed moving data to the GPU.

NEO Semiconductor expects each layer of neural circuitry to deliver an AI processing throughput of 10 terabytes per second (TB/s), so a 12-die stacked 3DX-AI memory stack reaches 120 TB/s, about 100 times that of the conventional scheme.

NEO Semiconductor founder and chief executive officer Andy Hsu said:

Today's AI chips waste a great deal of performance and power because of inefficient architecture and technology.

The current AI chip architecture stores data in HBM and relies on the GPU to perform all computation. By separating data storage from data processing, this design makes the data bus an unavoidable performance bottleneck: shuttling huge volumes of data across it limits performance and drives up power consumption.

3DX-AI performs AI processing inside each HBM chip, dramatically reducing data transfer between the HBM and the GPU, which improves performance and significantly cuts power consumption.
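The argument in the quote can be illustrated with a toy energy model. The per-bit energy constants below are illustrative assumptions chosen only to show the shape of the claim (off-chip transfers cost far more energy per bit than on-die accesses); they are not figures from NEO Semiconductor:

```python
# Toy model of why moving compute into the memory die cuts power.
# ASSUMED constants, for illustration only:
PJ_PER_BIT_OFFCHIP = 5.0   # assumed: DRAM -> GPU over the data bus
PJ_PER_BIT_ONDIE = 0.05    # assumed: access inside the DRAM die

def transfer_energy_joules(gigabytes: float, pj_per_bit: float) -> float:
    """Energy to move `gigabytes` of data at `pj_per_bit` picojoules/bit."""
    bits = gigabytes * 8e9
    return bits * pj_per_bit * 1e-12

model_gb = 100  # assumed working set streamed per inference pass
off = transfer_energy_joules(model_gb, PJ_PER_BIT_OFFCHIP)
on = transfer_energy_joules(model_gb, PJ_PER_BIT_ONDIE)

print(f"off-chip: {off:.2f} J, on-die: {on:.2f} J")
print(f"saving: {100 * (1 - on / off):.0f}%")
```

With a 100:1 assumed ratio between off-chip and on-die energy per bit, the model reproduces a 99% reduction, matching the headline claim in shape; the real saving depends on actual per-bit energies and on how much traffic truly stays on-die.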
