Legal issues were also on the Nexus2050 agenda. Photo: Romain Gamba/Maison Moderne

The regulatory aspect of artificial intelligence was one of the key topics on the second day of the Nexus2050 conference. This was an opportunity to take stock of the European regulation adopted on 21 May 2024.

To regulate artificial intelligence, Europe chose a risk-based approach in its AI Act. The regulation is part of the EU’s political drive to create a single market for data by putting in place the conditions for data, the driving force behind artificial intelligence, to circulate more easily within the EU and to be reused without infringing the privacy of Europeans. It builds on the General Data Protection Regulation (GDPR) and the Data Governance Act, which sets out the rules and mechanisms for reusing certain protected data held by the public sector. Indeed, it was on the basis of these European regulations that Luxembourg introduced the “once only” principle, the cornerstone of the Frieden government’s administrative simplification policy.

A broad definition

The European Commission has chosen a broad definition of artificial intelligence: an artificial intelligence system is software developed using one or more techniques and approaches, such as machine learning, that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions that influence the environments with which it interacts. The definition is deliberately broad so as not to hinder innovation. The EU considers that AI-related technologies bring many social and economic benefits, but that they can also undermine citizens’ fundamental rights, such as the right to human dignity, respect for privacy and the principle of non-discrimination.

A scale of risk

The regulation of 21 May 2024 classifies AI systems according to the risks they pose to fundamental rights, and adjusts constraints accordingly. The risk scale is graded as follows: minimal risk, low risk, high risk and unacceptable risk.

In the name of unacceptable risk, the EU is banning technologies such as social scoring, large-scale real-time remote biometric identification (with a few exceptions, such as searching for missing persons or fighting terrorism), large-scale processing of personal data and the development of ‘deepfake’ videos.

For technologies classified as high risk, such as biometric identification, the management of critical infrastructure (water, electricity, etc.), human resources management, access to essential services (bank loans, public services, social benefits, justice, etc.) or law enforcement, rules on traceability, transparency and robustness apply.

For the “low risk” category, the regulation requires a degree of transparency from the provider: AI-generated content, for instance, must be flagged as such.

The “minimal risk” category covers all uses that do not pose a risk to citizens’ rights and are therefore not subject to specific regulation. Examples include spam filters.
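The four tiers described above can be summarised in a short, purely illustrative sketch; the labels and one-line obligation summaries below paraphrase this article, not the legal text of the regulation.

```python
# Illustrative summary of the AI Act's four risk tiers as described in
# this article. Labels and obligation summaries are paraphrases, not
# legal definitions.
RISK_TIERS: dict[str, str] = {
    "unacceptable": "prohibited (e.g. social scoring, real-time remote biometrics)",
    "high": "traceability, transparency and robustness requirements",
    "low": "transparency: AI-generated content must be flagged as such",
    "minimal": "no specific obligations (e.g. spam filters)",
}

def obligations(tier: str) -> str:
    """Look up the obligation summary for a given risk tier."""
    return RISK_TIERS[tier]

print(obligations("high"))  # -> traceability, transparency and robustness requirements
```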

Deterrent penalties

Penalties are modulated according to these categories. Failure to comply with the rules on prohibited practices can result in a fine of up to €35m or 7% of worldwide annual turnover, whichever is higher. For the other two categories, the penalty can reach €15m or 3% of turnover.
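For readers who want the ceiling spelled out, here is a minimal sketch of how the two caps combine, assuming the “whichever is higher” rule from the regulation’s penalty provisions; the turnover figure is illustrative, not legal advice.

```python
def max_fine_eur(worldwide_annual_turnover: float, prohibited_practice: bool) -> float:
    """Upper bound of the fine: the higher of a fixed ceiling and a
    percentage of worldwide annual turnover (AI Act penalty regime)."""
    if prohibited_practice:
        return max(35_000_000, 0.07 * worldwide_annual_turnover)  # €35m or 7%
    return max(15_000_000, 0.03 * worldwide_annual_turnover)      # €15m or 3%

# Example: a company with €1bn in turnover engaging in a prohibited practice
print(max_fine_eur(1_000_000_000, prohibited_practice=True))  # 70000000.0, i.e. €70m (7% > €35m)
```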

The regulation will become fully applicable in 2026. It is accompanied by a European Artificial Intelligence Board made up of one representative per member state.

This article was first published in French on Paperjam. It has been translated and edited for Delano.