TapTechNews, June 3 — AMD today announced its Instinct GPU AI accelerator roadmap through 2026.
AMD says the Instinct product line will move to an annual update cadence, matching NVIDIA's, to keep pace with rapidly growing AI workloads.
The AMD Instinct MI325X accelerator will arrive first, in the fourth quarter of this year; it is essentially a refresh of the MI300X with its memory swapped to HBM3E.
Memory capacity rises from 192GB on the MI300X to 288GB, while memory bandwidth increases slightly from 5.3TB/s to 6TB/s (TapTechNews note: the equivalent per-pin memory rate rises from about 5.2Gbps to 5.9Gbps).
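The quoted equivalent memory rates follow directly from the aggregate bandwidth once a bus width is assumed. A minimal sketch of that arithmetic, assuming an 8-stack HBM3E configuration with a 1024-bit interface per stack (8192 data pins total, which the announcement does not state):

```python
# Derive the "equivalent memory rate" from aggregate bandwidth.
# Assumption (not from the announcement): 8 HBM3E stacks x 1024-bit
# interface each = 8192 data pins in total.

PINS = 8 * 1024  # assumed total memory bus width in bits

def per_pin_rate_gbps(bandwidth_tb_s: float) -> float:
    """Convert aggregate bandwidth (TB/s) to a per-pin data rate (Gbps)."""
    bits_per_second = bandwidth_tb_s * 1e12 * 8  # TB/s -> bits/s
    return bits_per_second / PINS / 1e9          # bits/s per pin -> Gbps

print(round(per_pin_rate_gbps(5.3), 1))  # MI300X: 5.2 (Gbps)
print(round(per_pin_rate_gbps(6.0), 1))  # MI325X: 5.9 (Gbps)
```

Under this assumption the computed rates match the article's 5.2Gbps and 5.9Gbps figures.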
AMD claims the MI325X offers twice the memory capacity of NVIDIA's H200, and can therefore run models with twice the parameter count on a single server, while its memory bandwidth, peak theoretical FP16 throughput, and peak theoretical FP8 throughput are each 1.3 times those of the H200.
In 2025, AMD will launch the MI350X, the first product in the next-generation MI350 accelerator series. Built on a 3nm process and the CDNA4 architecture, it will also carry 288GB of HBM3E memory.
The MI350X adds support for the FP4 and FP6 data types, and AMD claims its AI inference performance will be roughly 35 times that of the current MI300 series.
In addition, AMD says the MI350's HBM capacity will be 1.5 times that of NVIDIA's B200, with 1.2 times the AI compute.
Finally, AMD expects to launch the Instinct MI400 accelerator series, based on the CDNA Next architecture, in 2026.
2024 Taipei Computex Special