Artificial intelligence stepped out of science fiction years ago and is at work in Luxembourg in insurance, financial services and fraud detection. Photo: Shutterstock.

Artificial intelligence stepped out of science fiction years ago and is at work in Luxembourg in insurance, financial services and fraud detection. Andreas Braun, director of artificial intelligence and data science at PwC Luxembourg, and Ajay Bali, associate partner for digital advisory services at EY Luxembourg, discussed the subject.

How is artificial intelligence changing the way financial services are delivered in Luxembourg?

Andreas Braun: It’s a broad question, because AI has changed the way financial services are offered on multiple fronts. Take the retail customer: the bank runs a machine learning-based analysis of your income and expenditure. If your spending is increasing, it can offer you a credit product; if you are saving, that is an opportunity to offer retirement plans.
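
As a rough illustration of the spending-trend analysis Braun describes, here is a toy sketch in Python. The function name, features and decision rule are invented for this example; a real system would draw on far richer customer data than a single spending series.

```python
# Toy sketch only: fit a linear trend to a customer's monthly spending and
# map the slope to a plausible next-best offer. The function name and the
# simple slope rule are hypothetical, for illustration.
import numpy as np

def suggest_product(monthly_spend: list[float]) -> str:
    months = np.arange(len(monthly_spend))
    slope = np.polyfit(months, monthly_spend, deg=1)[0]  # spend trend per month
    if slope > 0:
        return "credit product"        # spending is rising
    return "retirement savings plan"   # spending flat or falling: a saver

print(suggest_product([1200, 1350, 1500, 1700]))  # -> credit product
print(suggest_product([1500, 1400, 1300, 1250]))  # -> retirement savings plan
```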

In the back office, where the cost of compliance is ever increasing, we see automation powered by ML and natural language processing to read documents, pick out the important parts and make life easier for the back office. In ‘know your customer’ processes, it has been common for years to take into account a customer’s social networks and to carry out a sentiment analysis. In the past, this would have been done by Googling; now it’s done through sophisticated ML. The same principle applies to anti-money laundering, where ML can pick out anomalous behaviour. In fact, what has been used in AML has flowed into other anomaly detection tools, such as cybersecurity for financial institutions.

Ajay Bali: I would say that AI has already changed the way we deliver financial services. With something like AML, the volume of data is huge. What ML can do is pull together different combinations of data and flag them to the decision makers. Often the customer is unaware that AI is being applied in very different ways to make decisions on their behalf.
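
A minimal sketch of the anomaly flagging both speakers describe, assuming scikit-learn’s IsolationForest and synthetic transaction features. The feature set and contamination rate are assumptions for the example; the point is the pattern of scoring transactions and routing flags to a human reviewer, not the specific model.

```python
# Sketch of ML-based AML anomaly flagging. Features and thresholds are
# invented for illustration; real systems combine many more data sources.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic transactions: [amount_eur, transactions_per_day, hour_of_day]
normal = rng.normal(loc=[120, 4, 14], scale=[40, 1, 3], size=(1000, 3))
suspicious = np.array([[9500, 30, 3], [12000, 25, 4]])  # large, frequent, at night
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 = anomalous, 1 = normal

# Route anomalies to a human decision maker rather than auto-blocking
for row in X[flags == -1]:
    print("flag for review:", np.round(row, 1))
```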

Braun: In Luxembourg, AI also helps a lot with the multilingual factor. The translation tools that AI can deliver really help Luxembourg financial services companies with exposure to markets like Asia.

You say that the customer is unaware. Are there best practices for the use of AI in financial services?

Bali: There are governance and best practices around how data is fed into AI in order to prevent bias, for example when getting a loan approved. If loans are frequently granted to customers in one postcode and not to customers from another, governance is needed in the way you feed data, so that the AI does not automatically learn that bias. Take facial recognition in the KYC process: unless you train the AI with images of people with many different appearances, you could build in a failure to recognise some of them, and therefore a bias.
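
One simple governance check of the kind Bali implies can be expressed in a few lines: compare approval rates across postcode groups before the data is used for training. The column names and the 0.8 threshold, the widely used “four-fifths” heuristic for disparate impact, are assumptions for this sketch.

```python
# Hedged sketch of a pre-training bias check on loan data. Column names
# and the four-fifths threshold are assumptions for illustration.
import pandas as pd

loans = pd.DataFrame({
    "postcode_group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved":       [1,   1,   0,   0,   0,   1,   0,   1],
})

rates = loans.groupby("postcode_group")["approved"].mean()
ratio = rates.min() / rates.max()  # disparate-impact ratio across groups

print(rates)
if ratio < 0.8:
    print(f"Potential bias: approval-rate ratio {ratio:.2f} < 0.8")
```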

Braun: The European Commission is defining best practices in AI to allow the ongoing functioning of markets but also to ensure that critical things affecting the consumer are being addressed. This includes bias.

Bali: There are a number of risks with AI: not only bias, but also a business risk, in that the data being fed in may lack integrity; a financial risk, if AI forms the basis of financial decisions; and a privacy risk.

In the past it has been suggested that AI poses a risk to jobs in financial services as well. What do you think?

Braun: Around 10 years ago, there was certainly a fear that AI would put everyone out of a job. In reality, the world we are living in is getting more and more complex. Regulation in financial services has become more and more of a challenge, so we see AI as a necessary tool to manage a complex world, not as a replacement for people.

The level of information that needs to be processed is always increasing. Can AI and ML help with issues like asset identification?

Braun: AI has certainly helped in complicated regulatory cases in the past. With GDPR, for example, there is suddenly a big risk to clients if they keep personal information about the people with whom they interact, and we can use AI as a tool to help companies find where this information is stored. Or take the example of transitioning away from the Libor and Euribor indices: ML can help a client identify all their links to these indices.
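
As a minimal illustration of the index-identification task, the sketch below scans contract text for Libor/Euribor references with a regular expression. Real engagements apply NLP across large document stores; the file names and pattern here are invented for the example.

```python
# Illustrative only: flag documents that reference Libor/Euribor so they
# can be reviewed for transition. Document contents are invented.
import re

IBOR_PATTERN = re.compile(r"\b(?:LIBOR|EURIBOR)\b(?:\s*\d+\s*[MW])?", re.IGNORECASE)

contracts = {
    "loan_001.txt": "Interest accrues at 3-month LIBOR plus 150 bps.",
    "loan_002.txt": "Rate resets quarterly against EURIBOR 3M.",
    "loan_003.txt": "Fixed rate of 2.1% for the full term.",
}

for name, text in contracts.items():
    hits = IBOR_PATTERN.findall(text)
    if hits:
        print(f"{name}: references {hits}")
```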

How else do you use AI to support your clients?

Braun: How we support our clients depends on their maturity. We do an assessment and dig into what they have done so far. It’s important to do data governance first. From there, we can set up processes and provide algorithms that help clients improve their competencies in-house. We have an AI lab in Luxembourg, where we can run workshops with clients to build a proof of concept.

What would you say the role of fintechs is in artificial intelligence?

Bali: The role of fintechs is to bring innovation to AI. They are completely critical to its development. On one side I work with classic banks on AI, and on the other with fintechs. What is important is to create an ecosystem that brings fintechs and banks together. That way, rather than building new technology from scratch, we are working from a more sophisticated springboard. One of the things EY does in Luxembourg is to bring large companies together with fintechs in our labs. It’s a bit like matchmaking, and it lets them build conceptual models together far more quickly than would otherwise be possible. This approach can move banks away from their standard, process-oriented thinking.

What do you think are the biggest challenges of the upcoming EU regulation on artificial intelligence?

Braun: The idea is to regulate the use of AI and ML. As with any regulation, there is always the fear that it will lead to a lot of new processes, a kind of GDPR 2.0. However, it’s still very much in draft form, and we welcome the opportunity to standardise processes and protect the consumer.

Bali: The regulation is due to come into force in 2024. It essentially categorises AI applications into three tiers according to their assumed risk to the individual: from unacceptable risk, through high risk, down to low-risk applications that will be left largely unregulated. At the moment, business owners feel there is no risk in using AI. Hopefully, with this regulation, they will understand more, and it will ensure that the critical aspects of AI that protect the consumer are addressed.

This article first appeared in the April 2022 edition of .