Google Updates AI App Guidelines to Reduce Inappropriate Content

TapTechNews reported on June 7 that on Thursday local time, Google updated its guidelines for AI applications, aiming to reduce "inappropriate" and "prohibited" content.


Google noted in the new policy that applications offering generative AI features must prevent the generation of restricted content, including pornography and violence, and must conduct "rigorous tests" on their AI models.

These rules apply to a variety of applications, which TapTechNews summarizes as follows:

Apps that use generative AI to produce content from any combination of text, voice, and image prompts.

Chatbot apps, image-generation apps (text-to-image, audio-to-image, image-to-image), and voice- and video-generation apps.

The rules do not apply to apps that merely host AI-generated content or use AI as a productivity tool.

Google Play makes clear that offending AI-generated content includes, but is not limited to, the following:

Non-consensual AI-generated deepfake material.

Voice or video recordings of real people used to facilitate fraud.

Content that encourages harmful behavior, such as dangerous activities or self-harm.

Content generated to enable bullying and harassment.

Content primarily intended to satisfy "sexual needs".

AI-generated "official" documents that enable dishonest behavior.

Creation of malicious code.

Google will also add new app onboarding features in the future, aiming to make the process of submitting generative AI applications to the store more open, transparent, and streamlined.

