EU Council gives final nod to risk-based rulebook for AI

Man interacting with a chat interface.

Image Credits: Natee127 / Getty Images

It’s a wrap: European Union lawmakers have given final approval to the bloc’s flagship, risk-based regulation for artificial intelligence.

In a press release confirming the approval of the EU AI Act, the Council of the European Union said the law is “ground-breaking” and that “as the first of its kind in the world, it can set a global standard for AI regulation.” 

The European Parliament had already approved the legislation in March.

The Council’s approval means the legislation will be published in the EU’s Official Journal in the coming days, and the law will come into force across the bloc 20 days afterward. The new rules will be implemented in phases, with some provisions only becoming applicable after two years or longer.

The law adopts a risk-based approach to regulating uses of AI and bans a handful of “unacceptable risk” use cases outright, such as cognitive behavioral manipulation or social scoring. It also defines a set of “high-risk” uses, such as biometrics and facial recognition, or AI used in domains like education and employment. App developers will need to register their systems and meet risk and quality management obligations to gain access to the EU market.

Another category of AI apps, such as chatbots, is considered “limited risk” and subject to lighter transparency obligations.

The law responds to the rise of generative AI tools with a set of rules for “general-purpose AIs” (GPAIs), such as the model underpinning OpenAI’s ChatGPT. However, most GPAIs will face only limited transparency requirements, and only GPAIs that pass a certain compute threshold and are deemed to pose a “systemic risk” will face tougher regulation. (For more on how the EU AI Act responds to GPAIs, see our earlier reporting.)

“The adoption of the AI act is a significant milestone for the European Union,” said Mathieu Michel, Belgian secretary of state for digitization, in a statement. “This landmark law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies. With the AI act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation.”

In addition, the law establishes a new governance architecture for AI, including an enforcement body within the European Commission called the AI Office.

There will also be an AI Board comprising representatives from EU member states to advise and assist the Commission on consistent and effective application of the AI Act — similar to how the European Data Protection Board helps steer application of the GDPR. The Commission will also set up a scientific panel to support oversight and an advisory forum to provide technical expertise.

Standards bodies will play a key role in determining what’s demanded of AI app developers, as the law seeks to replicate the EU’s long-standing approach to product regulation. We should expect the industry to redirect the energy it had focused on lobbying against the legislation toward efforts to shape the standards that will be applied to AI developers.

The law also encourages setting up regulatory sandboxes to support development and real-world testing of novel AI applications.

It’s worth noting that while the EU AI Act is the bloc’s first comprehensive regulation for artificial intelligence, AI developers may already be subject to existing laws such as copyright legislation, the GDPR, the bloc’s online governance regime and various competition laws.

