The World's First Comprehensive AI Regulation Takes Effect in the EU

TapTechNews August 1st news: 20 days after the official publication of the final text of the European Union's Artificial Intelligence Act, the world's first comprehensive artificial intelligence regulation officially came into effect on August 1 local time.

The Artificial Intelligence Act aims to ensure that AI developed and used in the European Union is trustworthy, with safeguards in place to protect people's fundamental rights. The regulation also seeks to establish a unified internal market for artificial intelligence in the EU, encourage adoption of the technology, and create a supportive environment for innovation and investment.

Under the Act, violations of the prohibited AI practices can be fined up to 7% of global annual turnover, violations of other obligations up to 3%, and the supply of false information up to 1.5%. All provisions of the Act will become fully applicable within two years, though some will take effect earlier.

The Artificial Intelligence Act puts forward a forward-looking definition of artificial intelligence based on EU product safety and a risk-based approach, which TapTechNews summarizes as follows:

Minimal risk: Most AI systems, such as AI-enabled recommendation systems and spam filters, fall into this category. Because these systems pose minimal risk to citizens' rights and safety, the Act imposes no obligations on them. Companies may voluntarily adopt additional codes of conduct.

Specific transparency risk: AI systems such as chatbots must clearly disclose to users that they are interacting with a machine. Certain AI-generated content, including deepfakes, must be labeled as such, and users must be notified when biometric categorization or emotion recognition systems are being used. In addition, providers must design their systems so that synthetic audio, video, text, and image content is marked in a machine-readable format and detectable as artificially generated or manipulated.
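The Act does not prescribe a specific marking format, but the idea of a machine-readable label for synthetic content can be sketched as follows. This is a minimal, hypothetical scheme (the field names and structure are assumptions, not any official standard), in which a provider attaches a provenance record to generated content:

```python
# Minimal sketch of a machine-readable provenance label for AI-generated
# content. The schema (field names, structure) is hypothetical and is NOT
# an official EU or industry standard.
import hashlib
import json
from datetime import datetime, timezone

def label_synthetic_content(content_bytes: bytes, generator_name: str) -> str:
    """Return a JSON provenance label for a piece of generated content."""
    label = {
        "ai_generated": True,                # declares the content synthetic
        "generator": generator_name,         # which system produced it
        "sha256": hashlib.sha256(content_bytes).hexdigest(),  # ties label to content
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label)

# Example: labeling a generated image's raw bytes
record = label_synthetic_content(b"<image bytes here>", "example-model")
print(record)
```

In practice, real marking schemes embed such metadata directly in the media file or watermark the content itself; a detached JSON record as shown here is only the simplest possible illustration of "machine-readable".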

High risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high-quality data sets, activity logging, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. Examples of high-risk AI systems include those used for recruitment, for assessing whether someone is eligible for a loan, or for operating autonomous robots.

Unacceptable risk: AI systems considered a clear threat to people's fundamental rights will be banned. This includes AI systems or applications that manipulate human behavior to circumvent users' free will, such as toys using voice assistance that encourage dangerous behavior in minors, systems that allow social scoring by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, such as emotion recognition systems used in the workplace, certain systems for categorizing people, and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).
