Zero One Technology Releases Yi-Large Model And Upgrades Yi-1.5 Series

TapTechNews May 13th news, Kai-Fu Lee, founder and CEO of Zero One Technology, today released Yi-Large, a hundred-billion-parameter closed-source model. He said that on most benchmarks Yi-Large matches or even surpasses GPT-4. In the latest AlpacaEval 2.0 evaluation from Stanford University, it ranked first among large models worldwide by win rate, and second by length-controlled win rate (LC Win Rate).

Meanwhile, Zero One Technology has upgraded its smaller open-source models Yi-34B, Yi-9B, and Yi-6B to the Yi-1.5 series, with each version achieving SOTA (state of the art, i.e. the best-performing model in current research) in its size category.

According to TapTechNews, Zero One Technology recently launched Wanzhi, a one-stop AI work platform that can create meeting minutes, weekly reports, and PPTs, interpret financial reports and papers, and serve as a writing assistant.

Kai-Fu Lee also said that falling inference costs for large models will drive the adoption of Chinese AI large models, and that this year will see an explosion of large model applications.