Deepfakes are built on a flourishing set of technologies that are causing rapidly growing damage. And the authorities’ response is far too timid. Photo generated by ChatGPT 4.0

The scene is comical: researchers at the university have made a deepfake of the prime minister to show him--the prime minister himself, I mean--how they’re going to detect deepfakes. Journalists are not allowed to capture the deepfake, even though these days I could make my own “FakeLuc” in a matter of minutes. These are the days we’re living in.

“It’s forbidden to take videos! No photos of the screen!”

From over my shoulder comes the panic.

On the screen in question is prime minister Luc Frieden, or something that looks and sounds like him, introducing a new government act that requires people to wear socks with flip-flops and have pineapple on their pizza. The “Chill Act” it’s been dubbed. A spokesperson for the University of Luxembourg had been scrutinising the video along with me before the photo faux pas occurred. Soon the prime minister’s security detail “invited” me to refrain from posting the photo, if I’d got one. He was polite. Friendly. But quite firm.

But the deepfake, created by the SnT for educational purposes, was running all day anyway! Who issued this curious order forbidding journalists from snapping a photo of it? The university said it was the prime minister. The prime minister said it was the university. “The grand duke has already been targeted,” said Frieden. “I’ve already been targeted, and my services very quickly did what was necessary.” He added that people believe these deepfakes and have called him out on things he’s been depicted as saying.

This tendency to believe that what looks and quacks like a duck is--in fact--a duck is something that researchers have already demonstrated: a quarter of people exposed to a deepfake believe what the video says. Back in 2022, some 72% of people didn’t know what a deepfake was. “Probably,” Frieden explains, “the university wanted to avoid this deepfake ending up on social networks.” It was rather like watching a firefighter put out a fire.

At the end of the day, the university sent an email stating that it had been instructed “not to let anyone photograph or film the prime minister’s deepfake during his visit. This file was produced solely for educational purposes surrounding ’s visit to Partnership Day.” It should be added that the deepfake was deleted as soon as the head of government left.

The deepfake shows a fake prime minister announcing something called the “Chill Act,” with implications for how people must eat their pizza. Photo: Maison Moderne

Phew. All that remains is the question: did the prime minister really need to be confronted with his own false image to understand what was at stake with deepfakes? While the answer is undoubtedly “no” for Frieden, a great many business leaders and politicians deserve to be made aware of this new form of “president fraud,” in which a fake authority figure calls their accountant, likely on a Friday evening, to ask that a large sum of money be wired urgently as part of a juicy new deal. This is a kind of fraud that is now exploding.

A deepfake every five minutes

“Exploding,” yes. Need figures to measure this explosion?

—Deepfake attempts increased 31-fold in 2023 compared to the previous year, representing a staggering growth of 3,000%, according to the Onfido Identity Fraud Report 2024.

—The same report in 2025 shows a deepfake every five minutes (+244% year-on-year).

—Deepfakes account for 40.8% of biometric financial fraud.

—A cybersecurity firm has recorded an increase of 1,400% in deepfake attacks in the first half of 2024.

—According to a Signicat report, deepfake fraud attempts in the financial sector have increased by 2,137% over the past three years. Three years ago, deepfakes were not even among the three most common types of digital identity fraud.

—The estimated number of deepfake videos has shown an exponential increase, from less than 8,000 in 2019 to a forecast of 200,000 in 2025.

—The global cost of deepfake-related fraud has risen from $0.1bn in 2019 to $4.7bn in 2023. Forecasts for 2025 potentially reach $10.5bn.

—Some easy-to-use deepfake tools attract up to 10m searches per month.

Three technologies that feed off each other

The most frightening figure is that last one: it shows how ChatGPT has put these tools in the hands of the average person, including those with bad intentions. The cost of producing a deepfake has fallen practically to zero, the time required to a few minutes and the quality to “truer than life.”

Until now, the experts would tell you: “Look at the eyelashes and you’ll be able to tell if it’s a fake.” But that was until now.

Cybercriminals use three basic technologies:

Auto-encoders: start with two images, one of you, one of somebody else. This technology will modify the other image pixel by pixel until it looks like you… like an expert painter touching up a poorly finished portrait of one of his pupils.

Generative adversarial networks: two networks that go head-to-head. One is a forger who generates the fake image, the other is a detective who analyses the fake image… until the forger’s image no longer elicits comments from the detective.

Convolutional neural networks: like those heavy-duty magnifying mirrors in hotel bathrooms, this is a network that will analyse each pore of skin in a photo with such precision that it facilitates the creation of the deepfake. This technology is often the basis for the other two.
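The forger-versus-detective back-and-forth at the heart of adversarial networks can be sketched in a few lines. The toy below is purely illustrative (real systems pit two neural networks against each other; here each side is reduced to a simple numeric rule, and every name and number is an assumption made for the example):

```python
import random

# Toy sketch of the adversarial idea: a "detective" has learned one
# statistic of real images; a "forger" keeps adjusting its fake until
# the detective no longer flags it. All values are illustrative.

TARGET = 0.75  # the statistic of "real" images the detective has learned

def detective(sample: float) -> bool:
    """Flags a sample as fake if it deviates too far from the real statistic."""
    return abs(sample - TARGET) > 0.05

def forger(rounds: int = 1000) -> float:
    """Starts from a random guess and nudges it using the detective's feedback."""
    guess = random.random()
    for _ in range(rounds):
        if not detective(guess):
            break  # the fake now passes as real
        guess += 0.1 * (TARGET - guess)  # move a step closer to "real"
    return guess

fake = forger()
print(detective(fake))  # False: the detective no longer flags the fake
```

In an actual generative adversarial network, both sides improve together: the detective is also retrained on the forger’s latest fakes, which is what drives the quality of the forgeries up.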

It’s up to experts, then, to spot how the videos circulating have been cobbled together. Then they must act to prevent the content from spreading and, if the target is a public figure, issue an official communication.

That’s what this SnT project does: it seeks to detect such tinkering. The aim is not to sell software with this capability, as , head of the SnT Technology Transfer Office, made clear during the press visit, but to teach the Luxembourg market--and Post in particular, which is co-funding this research project--how deepfakes work so that they can be better understood and combated.

The urgent need for a national strategy

The Frieden government’s battle plan is not yet known--it will be presented in broad outline next week--but Luxembourg should move faster and harder on two fundamental aspects.

Firstly, contrary to popular belief, deepfakes are not mainly used on TikTok to make everyone laugh. They’re mainly used to bypass the identification procedures of the banking sector and the financial centre. Some 40.8% of biometric frauds are due to deepfakes. Using extensive data from the US Federal Reserve, the Deloitte Center for Financial Services predicted back in 2023 that synthetic identity fraud would generate at least $23bn in losses by 2030.

Traditional banks offering credit are particularly targeted, so much so that last November the US financial regulator published a specific alert on the subject, with nine very pragmatic red flags.

1. Inconsistent customer photo: the photo appears altered or inconsistent with other identifying information (e.g. the date of birth suggests an age different from that apparent in the photo).

2. Inconsistent identity documents: the customer presents several identity documents that are not consistent with each other.

3. Use of third-party webcam software or technical anomalies: the customer uses a webcam plug-in during live verification, or reports suspicious technical issues to avoid this verification.

4. Refusal to use multifactor authentication (MFA) to prove identity.

5. Photo ID found in a public database of AI-generated faces during a reverse image search.

6. Detection by anti-deepfake software: the customer’s photo or video is identified as potentially manipulated.

7. Suspicious text content in customer profile: text produced by GenAI is detected in responses or information provided.

8. Inconsistent geographical or technical data: information on location or type of device used does not match the identity documents provided.

9. Suspicious financial behaviour: a newly opened or not very active account shows fast transactions, payments to risky sites (gambling, crypto, etc.), or numerous refusals/contests of payments.

The financial sector must therefore urgently and strongly address the problem.

Then, as with many issues, the government must take action to raise awareness and to educate. One such strategy already dates back seven years, but it absolutely needs to be given a different scope.

So far, China, California, the UK and France (in certain circumstances) have severely cracked down on the use of deepfakes… and Europe--and Luxembourg--should consider whether they need to be clearer on this front too.
