Apple Sets Strict Instructions for Smart Reply to Avoid 'Hallucinations' in macOS 15.1 Beta

TapTechNews August 7 news: Users have recently discovered a series of internal instructions that Apple set for the Smart Reply feature of Apple Intelligence in the macOS 15.1 beta. The instructions exist as JSON files and spell out the feature's workflow in detail. Most notably, Apple explicitly requires the system to 'not fabricate facts and not hallucinate'.
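The contents of these files have not been published in full, but based on the article's description, an entry in such a JSON file might look roughly like the sketch below. The key names and structure are illustrative assumptions rather than Apple's actual schema; only the anti-hallucination directive and the question-extraction constraint are drawn from the report.

```json
{
  "_comment": "Hypothetical sketch of a Smart Reply instruction file; keys and structure are assumptions, not Apple's actual schema.",
  "feature": "mail_smart_reply",
  "system_prompt": "You are an assistant that helps the user respond to their mail. Only answer questions that are explicitly asked in the mail. Do not fabricate facts and do not hallucinate."
}
```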

According to TapTechNews, Smart Reply is a feature of Apple's Mail app that analyzes a message's content and automatically generates suggested reply options. The feature is powered by Apple's in-house generative AI technology. Like other generative AI systems, however, Smart Reply is susceptible to the 'hallucination' problem, in which the model produces false or misleading information.

To guard against this, Apple has embedded strict instructions in the system. According to the leaked files, Smart Reply is restricted to extracting only the questions explicitly asked in the mail and offering corresponding answer options, a constraint that helps reduce errors in the generated replies, as the sketch below illustrates.
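To make that constraint concrete, here is what a question-extraction result might look like for a simple mail. The field names are invented for illustration and do not come from the leaked files; the extract-explicit-questions-then-offer-options behavior is what the article describes.

```json
{
  "_comment": "Illustrative example only; field names are assumptions.",
  "mail_excerpt": "Can you make the 3 PM meeting on Friday?",
  "extracted_questions": [
    {
      "question": "Can you make the 3 PM meeting on Friday?",
      "reply_options": ["Yes, I can make it.", "Sorry, I can't make it."]
    }
  ]
}
```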

Although Apple explicitly instructs the AI not to fabricate information, industry observers believe that fully eliminating the 'hallucination' problem remains challenging, because generative AI models do not genuinely understand text and cannot reliably judge whether information is true.
