Apple Unveils Apple Intelligence at WWDC24

TapTechNews reported on June 12 that Apple announced the highly anticipated Apple Intelligence at yesterday's WWDC24, introducing a series of AI features for the iPhone, Mac, and other devices.

Following the announcement, Apple's machine learning research website published detailed information about Apple Intelligence. According to Apple's official introduction, Apple Intelligence is built on two foundation models:

On-device model: a language model with approximately 3 billion parameters that runs on the device and, in Apple's benchmarks, outperforms many 7-billion-parameter open-source models such as Mistral-7B and Gemma-7B;

Server model: a larger language model that runs on Apple silicon servers through Private Cloud Compute.

Apple stated that Apple Intelligence comprises multiple high-performance generative models that are specialized for users' everyday tasks and can adapt on the fly to their current activity. The foundation models built into Apple Intelligence are fine-tuned for user experiences such as writing and refining text, prioritizing and summarizing notifications, creating playful images for conversations with family and friends, and taking in-app actions to simplify interactions across apps.

For pre-training, Apple's foundation models are trained with AXLearn, an open-source framework Apple released in 2023. Built on JAX and XLA, it allows Apple to train models efficiently and scalably across a variety of hardware and cloud platforms, including TPUs as well as cloud and on-premises GPUs.
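
Apple has not released the training code described here, but a minimal JAX training step gives a feel for the style of framework AXLearn builds on; the toy linear model, loss, and data below are illustrative placeholders, not Apple's:

```python
# Minimal sketch of a JAX training step, illustrating the jax.grad +
# jax.jit style that frameworks like AXLearn are built on. The linear
# model, loss, and random data are illustrative placeholders.
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # Toy linear model: predictions = x @ W + b
    preds = x @ params["W"] + params["b"]
    return jnp.mean((preds - y) ** 2)

@jax.jit  # XLA compiles this step for whatever accelerator is available
def train_step(params, x, y, lr=1e-3):
    grads = jax.grad(loss_fn)(params, x, y)
    # Plain SGD update; production frameworks use sharded optimizers
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = {"W": jax.random.normal(key, (8, 1)), "b": jnp.zeros((1,))}
x = jax.random.normal(key, (32, 8))
y = jax.random.normal(key, (32, 1))
params = train_step(params, x, y)
```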

TapTechNews noted that Apple promises it never uses users' private personal data or user interactions when training its foundation models, and that it applies filters to remove publicly available personally identifiable information from the web, such as Social Security and credit card numbers. Apple also filters out profanity and other low-quality content to keep it out of the training corpus. Beyond filtering, Apple performs data extraction and deduplication, and applies model-based classifiers to identify high-quality documents.
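
Apple did not publish its cleaning pipeline; a minimal sketch of what such a pipeline could look like, with hypothetical regex patterns and a placeholder quality_score() classifier standing in for the model-based one, follows:

```python
# Illustrative sketch of a pre-training data-cleaning pipeline: PII
# scrubbing, profanity filtering, exact deduplication, and a quality
# gate. The patterns, blocklist, and quality_score() are hypothetical
# stand-ins, not Apple's actual pipeline.
import hashlib
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")       # US Social Security numbers
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")     # rough credit-card shapes
PROFANITY = {"badword1", "badword2"}                # placeholder blocklist

def scrub_pii(text: str) -> str:
    text = SSN_RE.sub("[REDACTED]", text)
    return CARD_RE.sub("[REDACTED]", text)

def quality_score(text: str) -> float:
    # Stand-in for a learned document-quality classifier.
    return min(1.0, len(text.split()) / 100)

def is_low_quality(text: str) -> bool:
    words = set(text.lower().split())
    if words & PROFANITY:
        return True
    return quality_score(text) < 0.5

def clean_corpus(docs):
    seen = set()
    for doc in docs:
        doc = scrub_pii(doc)
        digest = hashlib.sha256(doc.encode()).hexdigest()
        if digest in seen or is_low_quality(doc):
            continue               # drop duplicates and low-quality docs
        seen.add(digest)
        yield doc
```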

In terms of optimization, Apple uses grouped-query attention (GQA) in both the on-device and server models. The on-device model uses a vocabulary of 49K tokens, while the server model uses a 100K-token vocabulary that includes additional language and technical tokens.
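
In grouped-query attention, several query heads share a single key/value head per group, which shrinks the key/value cache relative to standard multi-head attention. A minimal single-sequence sketch, with illustrative shapes and head counts that are not Apple's, follows:

```python
# Minimal single-sequence sketch of grouped-query attention (GQA):
# 8 query heads share 2 key/value heads (4 query heads per group).
# All dimensions here are illustrative, not Apple's.
import numpy as np

def gqa(x, wq, wk, wv, n_q_heads=8, n_kv_heads=2):
    seq, d_model = x.shape
    d_head = d_model // n_q_heads
    group = n_q_heads // n_kv_heads           # query heads per KV head

    q = (x @ wq).reshape(seq, n_q_heads, d_head)
    k = (x @ wk).reshape(seq, n_kv_heads, d_head)
    v = (x @ wv).reshape(seq, n_kv_heads, d_head)

    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group                       # KV head this query head shares
        scores = q[:, h] @ k[:, kv].T / np.sqrt(d_head)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[:, h] = weights @ v[:, kv]
    return out.reshape(seq, d_model)

x = np.random.randn(16, 64)                   # 16 tokens, d_model = 64
wq = np.random.randn(64, 64)                  # 8 query heads * 8 dims
wk = np.random.randn(64, 16)                  # only 2 KV heads * 8 dims
wv = np.random.randn(64, 16)
y = gqa(x, wq, wk, wv)                        # -> shape (16, 64)
```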

Through these optimizations, Apple says the iPhone 15 Pro achieves a time-to-first-token latency of roughly 0.6 milliseconds per prompt token and a generation rate of 30 tokens per second.
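
Putting those two figures together gives a back-of-the-envelope response-time estimate; assuming first-token latency scales linearly with prompt length (an assumption for illustration, not Apple's claim):

```python
# Back-of-the-envelope response-time estimate from Apple's figures:
# ~0.6 ms of first-token latency per prompt token, 30 tokens/s generation.
# Linear scaling with prompt length is an assumption, not Apple's claim.
PER_PROMPT_TOKEN_MS = 0.6
GEN_TOKENS_PER_S = 30

def estimated_response_s(prompt_tokens: int, output_tokens: int) -> float:
    time_to_first_token = prompt_tokens * PER_PROMPT_TOKEN_MS / 1000
    generation_time = output_tokens / GEN_TOKENS_PER_S
    return time_to_first_token + generation_time

# A 500-token prompt with a 100-token reply:
# 500 * 0.6 ms = 0.3 s to first token, plus 100 / 30 ≈ 3.33 s of generation
print(f"{estimated_response_s(500, 100):.2f} s")   # ≈ 3.63 s
```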

In the instruction-following evaluation (IFEval) benchmark, Apple's on-device model outperforms Phi-3-mini, Mistral-7B, and Gemma-7B, and is comparable to DBRX-Instruct, Mixtral-8x22B, and GPT-3.5-Turbo, while the server model is roughly on par with GPT-4-Turbo.

Apple plans to make Apple Intelligence available in the iOS 18, iPadOS 18, and macOS Sequoia developer betas launching this summer, followed by a public beta in the fall, though some features, additional languages, and platform support will not arrive until next year.

Apple Intelligence is free to use, but only on devices equipped with the A17 Pro chip or any M-series chip. That means you need an iPhone 15 Pro or iPhone 15 Pro Max to use these features, and the upcoming iPhone 16 series will also support Apple Intelligence.

On the Mac, you need a machine with an M1 chip or later; for the iPad, an iPad Pro or iPad Air with an M1 chip or later.
