Microsoft Announces Support for Fine-Tuning AI Models on Azure and Launches New Service

TapTechNews, July 26 news: Microsoft yesterday (July 25) published a blog post announcing that developers can now fine-tune the Phi-3-mini and Phi-3-medium AI models on Azure to improve the models' performance for different use cases.

For example, developers can fine-tune the Phi-3-medium model for tutoring students, or build a chat application that follows a specific tone or response style.
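To give a rough sense of what training data for such a tutoring use case might look like, the sketch below writes a few chat-format examples to a JSONL file. The file name, example content, and exact schema expected by Azure's Phi-3 fine-tuning workflow are assumptions for illustration, not details taken from the announcement.

```python
import json

# Hypothetical tutoring-style training examples in chat format.
# The exact schema required by Azure's fine-tuning workflow is an assumption;
# consult the Azure AI documentation for the authoritative format.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a patient math tutor who guides students step by step."},
            {"role": "user", "content": "How do I solve 2x + 3 = 11?"},
            {"role": "assistant", "content": "Subtract 3 from both sides to get 2x = 8, then divide both sides by 2, so x = 4."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a patient math tutor who guides students step by step."},
            {"role": "user", "content": "What is a prime number?"},
            {"role": "assistant", "content": "A prime number is a whole number greater than 1 whose only divisors are 1 and itself, such as 2, 3, 5, and 7."},
        ]
    },
]

# Write one JSON object per line (JSONL), a common layout for chat fine-tuning data.
with open("tutoring_finetune.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```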

The Phi-3-mini model, released in April this year, has 3.8 billion parameters and is available in 4K and 128K context-length versions; the Phi-3-medium model has 14 billion parameters and is likewise available in 4K and 128K context-length versions.

After the Phi-3-mini model was updated in June, its benchmark performance improved further. TapTechNews attaches the performance comparison below:

| Benchmark | Phi-3-mini-4k (April) | Phi-3-mini-4k (June) | Phi-3-mini-128k (April) | Phi-3-mini-128k (June) |
| --- | --- | --- | --- | --- |
| Instruction Extra Hard | 5.7 | 6.0 | 5.7 | 5.9 |
| Instruction Hard | 4.9 | 5.1 | 5.0 | 5.2 |
| JSON Structure Output | 11.5 | 52.3 | 1.9 | 60.1 |
| XML Structure Output | 14.4 | 49.8 | 47.8 | 52.9 |
| GPQA | 23.7 | 30.6 | 25.9 | 29.7 |
| MMLU | 68.8 | 70.9 | 68.1 | 69.7 |
| Average | 21.7 | 35.8 | 25.7 | 37.6 |

Microsoft also announced the official launch of its Models-as-a-Service (serverless endpoint) offering today. Developers can access the Phi-3-small model through a serverless endpoint without managing the underlying infrastructure, letting them build AI applications quickly; the Phi-3-vision model will be made available through serverless endpoints later.
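As an illustration only, the sketch below sends a chat-completion request to a deployed serverless endpoint over HTTPS. The endpoint URL, API key, and authorization header format are placeholders that depend on the actual Azure deployment, and the request body follows the widely used chat-completions format rather than anything specified in the announcement.

```python
import os
import requests

# Placeholder values: a real serverless endpoint URL and key come from the
# Azure AI deployment, not from this article.
ENDPOINT_URL = os.environ.get(
    "PHI3_ENDPOINT_URL",
    "https://<your-endpoint>/v1/chat/completions",
)
API_KEY = os.environ.get("PHI3_API_KEY", "<your-api-key>")

# Chat-completions style payload (an assumption about the endpoint's request format).
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the benefits of serverless model endpoints in two sentences."},
    ],
    "max_tokens": 128,
    "temperature": 0.7,
}

response = requests.post(
    ENDPOINT_URL,
    headers={
        "Authorization": f"Bearer {API_KEY}",  # auth scheme is deployment-specific
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=60,
)
response.raise_for_status()

# Print the model's reply from the first returned choice.
print(response.json()["choices"][0]["message"]["content"])
```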
