Microsoft's Phi-3 Small Language Models and Intel's Cooperation

TapTechNews May 26th news: Microsoft announced the Phi-3 family of small language models (SLMs) between April and May this year. The models are claimed to be lightweight enough to run on mobile devices, and some also have vision capabilities, able to understand both text and images; they are mainly intended for low-power computing scenarios.

Intel has now issued a press release stating that developers have optimized its Gaudi AI accelerators, Xeon and Core Ultra CPUs, and Arc graphics cards for the Phi-3 models, further lowering the software and hardware barrier to running AI models:

We are using the industry's latest AI models and software to develop and deliver powerful artificial intelligence solutions for customers. We are actively cooperating with Microsoft to ensure that Intel hardware (covering data centers, edge computing, and client devices) fully supports Microsoft's Phi-3 models, bringing AI "to anywhere."


TapTechNews has previously covered the Phi-3 series; interested readers can refer to the following articles for more details:

Generating 12 tokens per second locally on the iPhone, Microsoft releases the phi-3-mini model: 3.8 billion parameters.

Microsoft once again sets off an AI productivity revolution late at night: Altman takes the stage and 'exposes' the new model himself.
