Yusuf Yassin: In June, the European Parliament voted to adopt its own negotiating position on the AI Act, triggering discussions between the European Parliament, the European Commission, and the Council of the EU. What are the latest developments you are hearing from Brussels?
The three institutions of the EU have their own views on the AI Act. They must reconcile their differences through the so-called “trilogue” procedure, and the aim is to agree on a final version of the act this year. The emergence of generative AI has gained a lot of traction in EU institutional discussions. What we know is that the Parliament’s version includes amendments introducing generative AI into the AI Act; it did not feature in prior versions. The European Banking Federation continues its advocacy regarding the definition of AI, supervision, and the treatment of creditworthiness assessment, or credit scoring, as a high-risk AI system. There is a need for a more targeted definition of AI because, according to the EBF, the one provided by the European Commission is too broad. At this stage, the definition of AI proposed by the European Parliament is more in line with the expectations of the European banking community. Concerning generative AI, I expect this will be in the final version of the AI Act, due in December of this year.
The proposal also outlines certain obligations that generative AI providers must fulfil. For example, it includes a requirement to inform users when they are interacting with an AI system and to disclose whether content was generated by a human or a machine. It also requires safeguards against generating content that violates EU law. Providers must also make available a summary of the data they used for training. These additions to the text will affect providers in the EU market and may also affect US counterparts working with EU data or EU customers.
When do you expect the AI Act to come into effect, and how are banks in Luxembourg preparing for this regulation?
I anticipate the AI Act will be approved by the end of 2023. It will likely require two additional years to implement, possibly taking effect around 2025. For now, financial institutions largely remain in an exploration phase, particularly regarding the various possible applications of AI systems. What is clear, and this is reflected in our recent survey on AI use in banks, is that no customer data is being used in these tests. All respondents told us that they are never going to put customer data into publicly available AI tools like ChatGPT; it is just not safe from a data protection point of view.
We are seeing more and more firms developing their own AI solutions. For example, Allen & Overy is beta testing an AI tool that streamlines its processes and reviews contracts. According to our survey, 35% of the banks in Luxembourg said they are now developing internal generative AI solutions, compared to 28% of respondents who said they are using an off-the-shelf AI tool.
We are also working with our members to help them understand the AI Act and develop their knowledge. Many banks have recruited teams of data scientists to build up their data and make sure it is of a high quality. In partnership with the Luxembourg House of Training, we have organised training sessions to raise awareness about artificial intelligence. We also set up a working group at the ABBL focusing on the data economy; representatives of data centres are part of the group, and they are very involved and exchange a lot of information on best practices.
Your survey shows that many senior managers are not familiar with AI tools like ChatGPT; only 28% of respondents said senior managers were familiar with the technology, and the figure falls to 7% when you narrow it to banks. What is ABBL doing to educate senior managers about AI?
We still have a lot of work to do to inform senior managers about the different technologies available, and AI is one of the biggest ones. It is going to take some time and we are engaging with them.
Our Fintech & Innovation Forum, a big group uniting our members and fintech firms, will centre on this topic. We want to make sure senior executives attend a forthcoming event dedicated to this subject so they can hear about the different experiences and concerns around the technology. A recent survey by PwC Luxembourg showed that banks rank last among industries in terms of awareness of the European AI legislation. A large part of that is because discussions are often held with data science teams, who tend to already be extremely knowledgeable on the subject. We clearly need to broaden our scope and continue our efforts to inform executives about AI and the AI Act. I anticipate that once the act is approved at the end of the year there will be plenty of discussion about the regulation and its impact on the industry, and that will help educate senior managers.
Are banks already budgeting for the increased costs associated with implementing AI tools and adhering to the AI Act?
If we assume that the AI Act enters into force in 2023 and applies by the end of 2025, banks that use AI would be required to start reporting in 2026. Furthermore, several titles, chapters and articles of the act will apply as early as 2024. So banks will need to pay attention to these extra costs. They will need to find ways to analyse their data and build the internal competencies necessary to comply with the new regulation. The impact will therefore come in a phased manner, starting as early as next year and leading to full-fledged reporting obligations from 2026.
Based on your observations, are you seeing any examples of companies taking the right approach to AI, and how are they using the technology?
The most common use case we have seen so far in the financial sector is fraud detection. We have a research project with several ABBL members and the University of Luxembourg’s Interdisciplinary Centre for Security, Reliability and Trust (SnT) that uses AI tools to improve transaction monitoring. Currently, banks have systems in place so that whenever there is a suspicious transaction, for example if a customer has the same name as a terrorist, the system will flag it. These hits are false 95% of the time; we call them false positives. Banks can reduce the number of false positives through the use of AI.
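To make the false-positive idea concrete, here is a minimal sketch, not drawn from the ABBL/SnT project itself: a fuzzy name match against a watchlist raises alerts, and a classifier trained on the outcomes of past alerts estimates which new alerts are likely to be genuine. All names, features and training data below are hypothetical.

```python
# Minimal sketch: classic name screening plus an AI layer that scores alerts.
# All names, features and training data are hypothetical.
from difflib import SequenceMatcher

import numpy as np
from sklearn.linear_model import LogisticRegression

WATCHLIST = ["John Doe", "Jane Roe"]  # stand-in for a sanctions/terror watchlist


def best_match(customer_name: str) -> float:
    """Best fuzzy-match score (0..1) between the customer and any watchlist entry."""
    return max(
        SequenceMatcher(None, customer_name.lower(), w.lower()).ratio()
        for w in WATCHLIST
    )


# Historical alerts: [name-match score, amount in kEUR, cross-border flag],
# labelled 1 for a true hit and 0 for a false positive (synthetic data).
X_hist = np.array([
    [0.90, 250, 1], [0.88, 5, 0], [0.95, 120, 1], [0.86, 2, 0],
    [0.99, 300, 1], [0.87, 10, 0], [0.91, 1, 0], [0.93, 200, 1],
])
y_hist = np.array([1, 0, 1, 0, 1, 0, 0, 1])
clf = LogisticRegression().fit(X_hist, y_hist)

# A new transaction trips the rule-based screen (strong name match) ...
score = best_match("Jon Doe")
if score > 0.85:
    # ... but the model estimates how likely the alert is to be a genuine hit,
    # so only high-risk alerts need to reach a human analyst.
    p_hit = clf.predict_proba(np.array([[score, 3, 0]]))[0, 1]
    print(f"name match {score:.2f}, estimated probability of a true hit {p_hit:.2f}")
```

In this kind of setup the model does not decide the case; it ranks the flood of rule-based alerts so analysts look at the riskiest ones first, which is how the false-positive workload comes down.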
Another interesting use case we are working on with our members relates to the “next best offering”. For example, whenever customers enter a bank branch, or have any interaction with the bank, AI can help identify the next best service the bank could offer that the customer would actually need. It does not necessarily have to be a product or service the bank wants to sell; it could be access to a service the customer already has but has not used before. We have seen a few banks developing next-best-offer solutions to propose to customers.
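As a toy illustration of what a “next best offering” rule can look like, assuming nothing about any particular bank’s implementation: the recommendation can simply be the unused service, among those the customer is already entitled to, with the highest estimated propensity. All services, customers and scores below are made up.

```python
# Toy "next best offering" rule: suggest an eligible but unused service,
# ranked by a propensity score. All services, customers and scores are
# hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class Customer:
    name: str
    eligible_services: set[str]  # services available under the customer's contract
    used_services: set[str]      # services the customer actually uses


# Propensity scores, e.g. estimated from the behaviour of similar customers.
PROPENSITY = {
    "mobile_payments": 0.8,
    "savings_plan": 0.6,
    "travel_insurance": 0.4,
    "investment_advice": 0.3,
}


def next_best_offer(customer: Customer) -> str | None:
    """Return the eligible, unused service with the highest propensity score."""
    candidates = customer.eligible_services - customer.used_services
    return max(candidates, key=lambda s: PROPENSITY.get(s, 0.0), default=None)


alice = Customer(
    name="Alice",
    eligible_services={"mobile_payments", "savings_plan", "travel_insurance"},
    used_services={"mobile_payments"},
)
print(next_best_offer(alice))  # -> savings_plan
```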
Another use case is cybersecurity. We have established a working group on cybersecurity, and some members have told us they are using AI to improve their cybersecurity capabilities. For example, whenever a fax is received, it can already be translated into the system to initiate the payment. That is, of course, a very dated process, and some automation already exists.