For Jean-Louis Schiltz, by focusing on principles rather than applications, the EU has chosen the best way to regulate artificial intelligence. Photo: Marie Russillo/Maison Moderne

The regulatory aspect of artificial intelligence was one of the key topics on the second day of the Nexus2050 conference on 27 June. With the EU adopting a regulation on AI in May, Jean-Louis Schiltz looks back at the “spirit of the law” that should prevail in this area.

Jean-Louis Schiltz has had several lives. First, in politics. The lawyer was elected to the Chamber of Deputies under the CSV colours, but never took his seat: he was called directly into government, where he held the posts of minister for cooperation and humanitarian action and minister for communications. He was also defence minister from 2006 to 2009. Schiltz resigned from all his offices in 2011 for personal reasons and returned to practising law. As a senior partner at Schiltz & Schiltz, he specialises in technology issues. An honorary professor at the University of Luxembourg, he devotes his teaching and research to all aspects of "innovation through law," with a particular focus on fintechs.

Should AI be regulated or not?

Jean-Louis Schiltz: I believe that the objective of any regulation of artificial intelligence must be to create a framework of trust. With the growing number of users of this technology, particularly ChatGPT, there is a real need to establish such a framework to avoid abuses. So the answer is clearly yes.

The underlying question, given the speed at which things are moving, is whether it is already too late for the regulator to try to regulate the phenomenon. Can it keep up with innovation?

This is the crux of the problem, and for the regulator it comes down to two questions. The first: should we regulate AI itself, or its applications? In my view, it is the applications that should be regulated, and that is the choice the European Union has made. The second question for regulators is whether to adopt very detailed regulations, as was done with the GDPR (General Data Protection Regulation). That is also what was done for the AI Act, which runs to 378 pages. The other approach would have been a French-style approach based on principles. That was not the fundamental choice made for the regulation of 21 May 2024. After all, it has 58 articles... Look at the Civil Code: it is 200 years old.


The latter approach, if followed rigorously, allows the regulator to keep pace with and adapt to new developments. In nine cases out of ten, a principle can be found to which a new use can be linked. If you regulate on the basis of use cases, it quickly becomes very complicated.

European regulations have opted for a risk-based approach. Isn’t that too simplistic?

As with the Dora regulation [Digital Operational Resilience Act], there are a certain number of principles that underlie all European regulations affecting technology. First, there is the principle of proportionality, then accountability and then the risk-based approach. This means that the text becomes a general framework and that it is up to the operators to decide, under the supervision of the AI Office set up in Brussels, which cases they think they should be involved in and how they should comply.

Many people think that the best way to regulate artificial intelligence is at international level. What is your opinion on the subject?

The international framework seems to me to be the most appropriate. One event that went somewhat unnoticed was the first global summit on the risks of AI, organised by London on 1 and 2 November 2023, which brought together Americans and Chinese and set out to define a global approach to regulating the use of AI. The participants admittedly agreed only on very general principles. But they did agree, in particular, on the principle that regulation is needed. The next summit (which we could organise in Luxembourg, I'll throw the idea out there) will provide an opportunity to take the debate further. I think that this is where things will really come into play, as in the financial sector. Look at what happened with Iosco (the International Organization of Securities Commissions), the international body where everyone meets and where the anti-money laundering rules applicable to the financial sector at global level were forged, together with the FATF; those rules are now almost universal.

This is the direction in which we need to move to avoid the risk of each geographical bloc adopting its own laws with, what’s more, extraterritorial rules. This is a path that Europe is currently taking, inspired by the Americans. If each bloc does this, it will be a huge mess.

Given the state of international relations and the fact that AI is widely used for military applications, as seen in Ukraine, isn’t this path doomed to be ineffective?

Not many people know this, but at European level, military applications are excluded from the regulation. It has largely gone unnoticed, and it just goes to show how sensitive and important security issues are.

After that, the geopolitical situation is what it is... However, I remain convinced that the best solution will be found through dialogue. Despite everything that’s happening around the world, governments are still talking to each other, and I believe that AI is a good subject on which to find common solutions, even if we remain opposed on many other issues.

This article was first published in French on . It has been translated and edited for Delano.

Updated to clarify the legislation referenced