
Navigating the nexus of Policy, Digital Technologies, and Futures (S1/E10)

S1/E10: The European Union’s Artificial Intelligence Act – The EC proposal

In the last episode you had a helicopter view of probably the noisiest piece of legislation of recent years: the Artificial Intelligence Act, or AI Act, or also AIA for the very intimate. True, it was somewhat philosophical and I perhaps exaggerated a little. Today, I’ll zoom in on its details, starting with the first step in the governance process: the proposal by the European Commission. This time I’ll stick almost entirely to the informational side.

But before we start: be aware that the AI Act isn’t alone in the European Commission’s (EC) drive to establish a European legal framework for AI, in order to address fundamental rights and safety risks specific to AI systems. As a matter of fact, the main proposed legislation has two sister proposals, namely the AI Liability Directive and the Product Liability Directive – adapting liability rules to the digital age and to AI –, both proposed in September 2022. Plus some cousins, like the Cyber Resilience Act (CRA).

The EC’s stated goal for this framework was to address the risks of AI and to position Europe for a leading role in the field globally. As explained by an EC official at a conference organised by ENISA in June 2023, the framework was needed from a safety viewpoint, with the important proviso that the protection of fundamental rights is to be included in the overall notion of safety. Whence, also, the sister liability directives.

That’s why the EC proposed a risk-based approach, with four levels:

  • Unacceptable risk
  • High risk
  • Limited risk
  • Minimal or no risk

Unacceptable risks posed by AI systems

In the EC proposal, the Unacceptable-risk class would cover AI systems considered a clear threat to the safety, livelihoods, and rights of people; these would be banned. The class includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance to encourage dangerous behaviour in minors) and systems that allow ‘social scoring’ by governments. AI-based biometric identification systems would also be prohibited in some cases, as explained further below.

High-risk AI systems

The High-risk category includes AI technologies used in:

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
  • Educational or vocational training, that may determine access to education and the professional course of someone's life (e.g. scoring of exams);
  • Safety components of products (e.g. AI application in robot-assisted surgery);
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
  • Law enforcement that may interfere with people's fundamental rights (e.g. evaluation of the reliability of evidence);
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

In order to be considered safe for use, High-risk AI systems will be subject to strict obligations before they can be put on the market. These would include the following:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimise risk;
  • High level of robustness, security and accuracy.

Of particular note is that all remote biometric identification systems would be considered high-risk and subject to stricter requirements. In the EC proposal, their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat, or to detect, locate, identify, or prosecute a perpetrator or suspect of a serious criminal offence). Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach, and the databases searched.

Lower classes of risk

There are two main classes of lower-risk AI systems:

Limited-risk AI systems, like chatbots, would have to comply with specific transparency obligations. For instance, users should be aware that they are interacting with a machine so that they can take an informed decision to continue or step back.

Minimal-risk AI systems, such as AI-enabled video games or spam filters, would enjoy free use of their applications. According to the EC proposal, the vast majority of AI systems fall into this category. The draft regulation does not intervene here, as these AI systems are deemed to represent only minimal or no risk for citizens' rights or safety.

Governance

The European Commission proposes several governance measures in the draft AI Act. They include, for instance:

  • The supervision of the new rules would be the competence of national market surveillance authorities.
  • A European Artificial Intelligence Board would be established in order to facilitate the regulation’s implementation, as well as drive the development of standards for AI.
  • Voluntary codes of conduct are proposed for non-high-risk AI.
  • Regulatory sandboxes should be put in place to facilitate responsible innovation.

Interestingly, the AI Act and yet another parallel piece of EU legislation, the Cyber Resilience Act, are intimately related, even with cross-citations. According to the EC official referenced above, one could see their relationship as the AIA ruling on procedures and the CRA ruling on content.

The EC official also stated that standards will play a very important role in the AIA’s implementation. Accordingly, in May 2023 the EC issued a request to CEN and CENELEC (international non-profit associations officially recognised as European Standardisation Organisations, alongside ETSI, the European Telecommunications Standards Institute) to develop standards in support of the regulatory requirements of the AI Act, with the aim of having these standards adopted as harmonised standards. Notably, such standards must be compatible with the CRA.

 

I guess this is enough information to digest for this episode. As I wrote before, this regulation has been the hottest topic in Brussels for about a year. Consider that it was actually proposed well before the public launch of ChatGPT, meaning that the proposal didn’t even dream of generative AI. You may thus start to imagine how the co-legislators (Council and European Parliament) accelerated their activity from the end of November 2022 onwards, so that their adopted positions could include whatever clever ideas they could come up with, in so little time, about a brand-new technology with largely unknown potential impacts.

More on this in the next episode, where I’ll briefly explore the modifications adopted by each of the co-legislators prior to the negotiations for the final text, which should start around July 2023. Keep watching this space!

[This blog series is inspired by research work that is or was partially supported by the European research projects CyberSec4Europe (H2020 GA 830929), LeADS (H2020 GA 956562), and DUCA (Horizon Europe GA 101086308), and the CNRS International Research Network EU-CHECK.]

 

Afonso Ferreira

CNRS - France

Digital Skippers Europe (DS-Europe)