Wuwenxinqiong Unveils Heterogeneous Distributed Hybrid Training System

TapTechNews July 5th, Xia Lixue, co-founder and CEO of Wuwenxinqiong, yesterday unveiled the company's heterogeneous distributed hybrid training system for large models at the AI Infrastructure Forum of the World Artificial Intelligence Conference, saying that the kilo-card heterogeneous hybrid training cluster has reached a peak computing power utilization rate of 97.6%.
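
The article does not define how the 97.6% figure is computed, but "computing power utilization" for training clusters is conventionally reported as model FLOPs utilization (MFU): sustained training FLOPs divided by the cluster's theoretical peak. The sketch below shows that standard calculation; all throughput and peak numbers in it are hypothetical placeholders, not figures from the announcement.

```python
# Minimal sketch of model-FLOPs-utilization (MFU), the conventional way to
# report a training cluster's "computing power utilization". The exact
# formula behind the 97.6% figure is not given in the article; every number
# below is a hypothetical placeholder.

def mfu(model_flops_per_token: float,
        tokens_per_second: float,
        peak_flops_per_device: float,
        num_devices: int) -> float:
    """Achieved training FLOPs per second divided by the cluster's peak."""
    achieved = model_flops_per_token * tokens_per_second
    peak = peak_flops_per_device * num_devices
    return achieved / peak

# Example: a 3B-parameter model costs roughly 6 * N = 1.8e10 FLOPs per token.
print(mfu(model_flops_per_token=1.8e10,
          tokens_per_second=1.6e7,       # hypothetical cluster throughput
          peak_flops_per_device=3.0e14,  # hypothetical per-card peak
          num_devices=1000))             # kilo-card cluster -> 0.96
```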

Xia Lixue also announced that Wuwenxinqiong's Infini-AI cloud platform has integrated this heterogeneous kilo-card mixed-training capability for large models, making it the world's first platform able to run a single training task at kilo-card scale across heterogeneous chips. The platform scales to ten thousand cards and supports mixed training of large models across six heterogeneous chip types: AMD, Huawei Ascend, TianShu Zhixin, Muxi, Moore Threads, and NVIDIA.
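
Mixing chip vendors in one training task requires balancing work against each card type's speed so that no device group idles at synchronization points. The article does not describe Wuwenxinqiong's scheduler; as a generic illustration of one ingredient of such a system, the sketch below splits a global batch across device groups in proportion to their measured throughput. The group names and throughput values are hypothetical.

```python
# Generic sketch of throughput-proportional batch partitioning, one common
# ingredient of heterogeneous mixed training. This is not Wuwenxinqiong's
# actual scheduler; device group names and throughputs are hypothetical.

def partition_batch(global_batch: int,
                    throughput: dict[str, float]) -> dict[str, int]:
    """Assign each device group a batch share proportional to its speed."""
    total = sum(throughput.values())
    shares = {name: int(global_batch * t / total)
              for name, t in throughput.items()}
    # Hand any rounding remainder to the fastest group.
    remainder = global_batch - sum(shares.values())
    fastest = max(throughput, key=throughput.get)
    shares[fastest] += remainder
    return shares

# Hypothetical tokens/s per card type in a mixed cluster.
print(partition_batch(4096, {"vendor_a": 310.0,
                             "vendor_b": 240.0,
                             "vendor_c": 180.0}))
```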

Wuwenxinqiong said:

Before turning on a faucet, we don't need to know where the water comes from. Similarly, when we use various AI applications in the future, we won't need to know which base models they invoke or which accelerator cards supply the computing power - this is the best AI-Native infrastructure.

According to TapTechNews' previous report, Wuwenxinqiong and Moore Threads jointly announced in May that the two sides had completed training of MT-infini-3B, a 3B-parameter large model, on a kilo-card cluster of domestic full-featured GPUs, with performance ranking among the best for models of the same scale.
