Development: EU reaches political consensus on an AI Act

Artificial intelligence
The AI regulatory framework will guarantee safety, legality, trustworthiness, and respect for fundamental rights in AI systems.

In 2023, the widespread adoption of Artificial Intelligence (AI) accelerated thanks to models such as ChatGPT and the recent debut of ‘Gemini’ by Google. As AI models continue to become more capable, the legal services sector faces yet another disruptive technological frontier, demanding vigilant oversight and management.

World’s first AI regulatory regime

Pioneering the oversight of AI applications, the European Council and Parliament have reached a provisional agreement on an EU AI Act. Delegates have emphasised that the key tenets guiding AI regulation will be ethics, safety and trustworthiness. The AI Act will become the first global standard for governing AI models, marking a historic step towards responsible innovation and supporting the development of human-centric AI.

The EU’s AI Act is poised to have an influence comparable to that of the General Data Protection Regulation (GDPR). Much like the GDPR, the AI Act aims to provide an overarching framework that promotes accountability and consistent standards for the development and application of AI systems.

How will it work?

The EU AI Act will focus regulation on identifiable risks, categorising them into four levels: (1) minimal or no risk, (2) limited risk, (3) high risk and (4) unacceptable risk. The Act will also contain rules to encourage a unified market for AI applications, in line with the EU’s coordinated strategy on artificial intelligence, with the aim of accelerating AI investment throughout Europe.

Regulation will also extend to banning certain applications of AI that pose a threat to the rights and democratic freedoms of EU citizens. It was agreed that the AI Act should prohibit the following:

  • Systems which categorise individuals based on political, religious or philosophical beliefs, sexual orientation or race;
  • Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases;
  • Models that identify and interpret emotions in professional settings;
  • Social scoring based on behaviour or personal characteristics;
  • AI models designed to influence or manipulate human behaviour in a way that undermines freedom of choice;
  • AI used to exploit the vulnerabilities of people based on age, disability, or social or economic situation.

However, negotiators have agreed on narrow exceptions to these bans for law enforcement purposes, subject to judicial approval and limited to strictly defined criminal activities.

Addressing recent innovations in AI

One area that has not been fully addressed by this agreement is how generative AI systems and foundation models are to be governed within the framework. Because the agreement builds on proposals drafted in 2021 and 2022, it does not fully incorporate these recent innovations in AI. In particular, it has yet to articulate the definitions needed to categorise generative AI systems with clarity and precision. Although such systems may fall within the scope of ‘general-purpose’ AI, fuller discussion may have to wait until the adoption process.

What next?

The regime will establish new accountability measures for entities involved in AI. These include providers, deployers, importers, distributors and product manufacturers. Entities should proactively prepare for upcoming regulations by conducting risk assessments on their in-house AI systems, raising awareness, developing ethical frameworks, assigning responsibility, and establishing formal governance structures.

Author: Zaki Zeini, paralegal.