On 14 June, the European Parliament adopted its negotiating position on artificial intelligence (AI) legislation by 499 votes to 28, with 93 abstentions. The legislation aims to ensure that AI developed and used in Europe fully complies with the rights and values of the European Union. Negotiations with the Council on the final form of the legislation will now begin.
The draft introduces stronger oversight and transparency obligations so that AI fully complies with EU rights and values, “in particular on human surveillance, security, protection of privacy, transparency, non-discrimination and social and environmental well-being,” the European Parliament said.
The obligations introduced will depend on the level of risk that an AI system may generate. In addition to assessing and mitigating potential risks, suppliers of these systems will have to register their models in the EU database before placing them on the market.
Beyond these obligations, the aim is quite simply to ban AI systems that present an “unacceptable level of risk to personal safety,” or that are used for social scoring purposes.
Between supporting innovation and protecting citizens
The MEPs, who voted by a large majority in favour of these new rules, went even further. The text also includes bans on remote biometric identification systems, predictive policing systems, emotion recognition systems, and the untargeted capture of facial images from the internet or video surveillance footage.
AI systems used to influence voters and the outcome of elections, as well as recommendation systems operated by social media platforms with more than 45m users, have been added to the list of high-risk systems. Compliance with all these rules will be monitored by the European AI Office, whose role has been revised by MEPs.
If the negotiations result in the legislation being finalised, it would apply in 2026 at the earliest.
This story was first published in French on . It has been translated and edited for Delano.