Google DeepMind Launches AI Safety Framework

TapTechNews, May 21st: Google DeepMind has launched an AI safety framework named the 'Frontier Safety Framework', designed to detect risks in AI models. DeepMind says the framework can proactively identify 'AI capabilities that may cause significant risks in the future' and show researchers exactly at which levels the relevant models could be exploited by hackers.


According to the introduction, version 1.0 of DeepMind's Frontier Safety Framework contains three key components: identifying whether a model has capabilities that could cause significant risks, estimating at what stage of development the model would pose potential safety hazards, and optimizing the model to prevent it from causing those risks.


DeepMind stated that the company 'has been constantly pushing the boundaries of AI' and that the models it has developed have changed its perception of what AI can achieve. While the company believes future AI technologies will bring valuable tools to society, it also recognizes that the risks of these technologies could have a devastating impact, and it is therefore gradually enhancing the safety and controllability of its models.

TapTechNews noted that DeepMind is still developing the Frontier Safety Framework and plans to refine it through collaboration with industry, academia, and relevant government departments.
