Thierry Labro: What's the point of a House of Ethics?
Katja Rausch: A house, in general, promises hospitality, inclusivity, sharing... All these values, which are part of an individual ethic, are very important. Four years ago, when I wanted to launch the House of Ethics, I saw that there were only institutes, academies and laboratories, which seemed to me to be extremely elitist. Ethics is what makes us who we are; it’s not a science that you have to learn. We live it, we speak it, we eat it.
Applied to artificial intelligence?
Because what was needed was precisely a non-academic discourse, one where everyone retains their own style, which is very important in ethics. You mustn’t squeeze it into a corset and make it regulatory, which it is not at all. It’s motivating, it’s inviting, it’s organic, it’s human. We were the first House of Ethics in the world.
Behind its title, what is done in this house?
First, we democratise ethics, providing a platform for information, articles and interviews, where we invite professionals and experts to talk about their experiences in all sectors. After that, we are also a think tank: we reflect, we do research in ethics, we observe what’s going on, we decode it, and we also develop tools to help people put these ethics into practice. And they are more than just tools.
In the current context, could we assume that Donald Trump has ethics?
He has his own ethics. Ethics is a system of values, and we all have different values. His system is not ours. We don’t share it. He thinks he’s very ethical. Read his messages on his social networks. He’s doing this for Americans: “I got this deal done”; “we’re going to stop this war for you.” He’s sacrificing himself, in his world.
That’s his narrative, anyway...
Of course. But every narrative is political rhetoric. You have to be careful when you’re in the political arena. The alarming thing is that we have technological tools that have become weapons. The emergence of political technology is deeply worrying. We’ve been warning about it for years.
Donald Trump has one ethic, the Chinese president has another, Vladimir Putin another, and Westerners, depending on who you talk to, yet another. A bit complicated to get everyone together, isn’t it?
Absolutely. This is another very important subject: there are other ethics. Ethics is a plural truth. Generalising will not work. Ethics must be collective, which is very different from general or universal. We need to be inclusive, polycultural, multidisciplinary and the friction this generates brings out new values.
The positive side of the new US president’s behaviour is precisely the assertion that we need to oppose him with a global, technological ethic... Is that realistic?
Artificial intelligence has been around since 1956. Giving definitions is reductive and exclusive. We have to draw these distinctions when we talk about sovereignty, independence and autonomy. There are differences between all these things, and we tend to mix them up! Which leads us to an impasse, where we say it’s difficult to find anything in common. As for something universal, yes, it is.
The only universal thing is the Universal Declaration of Human Rights, adopted after the Second World War, in 1948. A universal declaration of ethics seems to me really unrealistic, and not even desirable. We need hubs where everyone can define, nationally, geographically and culturally, what they mean by “ethics.” Then comes governance, and then interoperability. That way we would have a much broader base for finding consensus.
The creation of hubs is conceivable. But today, technology, whether American, Chinese or elsewhere, is always transversal. Would we have to free ourselves from all these technologies to start discussing ethics again?
We can’t say we’re too late, otherwise we’d never do anything in life. We’re in a period of acceleration that has made us lose our bearings. Everyone has been caught off guard since ChatGPT’s “technoputsch” of November 2022. What was launched was not a product or a piece of software, but an operating system. A system that has infiltrated all the way to the human being, to the human “bios.” I don’t think we need to go backwards. In general, with these national, ideological and political interests, as soon as you present them with a win-win situation, they suddenly become very flexible, very elastic. I don’t think we need to react within a rigid framework.
Let’s look at it in a fluid framework. It is perfectly possible to create different centres, different hubs, but with equal sharing. Without China or the United States dominating, and without Europe alone asserting itself. Africa, Asia and Latin America must also be integrated. Brazil and Argentina are giants, and we don’t include them at all.
But how do we do this in practice? The technologies are all based on a kind of modern juice extraction. Machines that swallow your data and return a mush... We’d still have to unravel the models.
What you’re talking about is what I call the whitewashing of knowledge. Everything is being standardised, and we are being led towards a single way of thinking. Ethics alone is not enough in the face of what you call systematic infiltration. We need allies like human rights and regulation. This triad has to work. Today, we are in silos: there are the ethicists, who don’t like the human rights people, who in turn don’t like the regulators. At the House of Ethics, we have invited Dr Susie Alegre, who is one of the leading advocates of human rights applied to technology. It’s important that we pool our intelligence, collectively, to confront this systemic siege.
In the past, we didn’t talk about systemics. Ethics has always been something ad hoc, individual and “hard-coded.” We need to change. We need to get our collective act together. And not just the people, but also the disciplines. Interdisciplinarity is really very important: it’s where new ideas are born that we hadn’t really considered before. I like young people to talk to me, older people to give me their impressions. It’s very important to listen to everyone. We need to get out of this tunnel where there are only two ways out, yes or no. That’s not true. In a complex, non-linear world, there are many more solutions.

Katja Rausch: “It’s entirely possible to create different centres, different hubs, but to respect equal sharing.” Photo: Nader Ghavami
Isn’t ethics a luxury item? You can worry about it when you’ve got nothing more important to do...
That’s the cognitive approach to ethics, the one that belongs to the philosophical discipline: Kant, Aristotle and so on. Yes, that is a leisure ethic. But if you apply ethics to something, as the name suggests, it’s applied ethics, and that’s different.
For me, for an ordinary person, where does the difference manifest itself? Please explain.
Ethics applied to artificial intelligence is concerned with how we should think about the product we are coding. By coding this, am I going to help people? Is there a definite risk?
Is it 100% possible?
Nothing is 100% possible. You already have to know the technology. I taught information systems for 12 years. How can a system be biased? Simply because it’s trained on data, and that data is entered by people. There is already a huge amount of bias at the data-entry stage; it can be in comments, especially in police data entry…
When trained, these biases are amplified, that’s clear! At the end of the day, it’s always a system that uses probability. You have to understand that too. These systems are tuned like music... Technologies will always be subject to bias. You have to have a human being behind them! Automating decisions with no human behind them, in bank loans, in driving...
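(Editor’s note: a minimal sketch, in Python, of the probability mechanism described above. The loan records, postcodes and outcomes are entirely invented for illustration; the point is only that a count-based, probabilistic model faithfully reproduces whatever bias people entered into the data.)

from collections import Counter, defaultdict

# Hypothetical historical loan decisions, as entered by people.
# Note the skew: postcode B is always refused, whatever the income.
records = [
    {"postcode": "A", "income": "high", "approved": True},
    {"postcode": "A", "income": "low",  "approved": True},
    {"postcode": "B", "income": "high", "approved": False},
    {"postcode": "B", "income": "low",  "approved": False},
]

# "Training" here is just counting: estimate P(approved | postcode).
counts = defaultdict(Counter)
for r in records:
    counts[r["postcode"]][r["approved"]] += 1

def p_approved(postcode: str) -> float:
    c = counts[postcode]
    total = c[True] + c[False]
    return c[True] / total if total else 0.5  # no data: coin flip

# The model reproduces the human bias, with no human left behind it:
print(p_approved("A"))  # 1.0: always approved
print(p_approved("B"))  # 0.0: never approved, regardless of income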
For driving, it’s still great for avoiding accidents...
If it stops us from driving into a wall, yes. But if you’re driving your Tesla and it’s snowing in Norway and the sensors can’t see the road lines, recognise the lights or read the stop sign because of the snow, there’s no point in your automatic driving.
You’re against automated driving. But are there other technologies that provoke the same rejection?
A lot of things cross the red line. Stupid little applications, like DeepNude, which allowed you, as you walked down the street, to undress everyone... Of course, we asked the developer why he had put this on the market. He was amused... No thought was given to the consequences. What’s the point? And I’m not even talking about all the privacy issues. Since the beginning of the House of Ethics and the launch of ChatGPT, and again because I knew the technology: this large-scale theft of personal data, this systemic and systematic violation of intellectual property to feed a commercial product, that is not right.
Isn’t that art? Art is a bit like that too, reinventing from what exists or moving away from it?
ChatGPT sits on ethical permafrost. There are no ethics. Even Sam Altman acknowledges this by saying he’s going to add them as decoration or icing at the end.
Can a tech entrepreneur or developer be expected to ask these questions from the start?
Of course! Would you buy a car without brakes? Would you get on a plane that you knew was going to lose a wing? We’re so used to having safety and precautionary measures for every other product, even the food we eat! Road safety, civil aviation, health: it’s a good thing we have organisations that carry out safety checks! Why not apply that here? It’s beyond me.
Ethics is first and foremost an individual concept that could become universal, but it is entirely conceivable that the same technology will not be perceived as equally ethical depending on the population it is aimed at.
There is no ethics without ethos. It’s who I am: my values, my decency, my benevolence, and so on, which define how I develop a product or offer a service to the world. If several ethoses come together, is it possible to make something significantly better than what a single, perhaps malevolent, actor would make? The worst cocktail for ethics is a black, opaque box plus an irresponsible actor. That’s where the community can put the brakes on! There are now developments in small language models, over which we have more control and a better-defined scope. I believe in neurosymbolic AI…
Neurosymbolic AI?
It’s a hybrid mix. We have the neurons, but we don’t yet have the symbolic side. For the moment, the system works on tokenisation: every time you put in a word, it tries to guess the next one. It’s collage plus probabilities. But a large language model doesn’t understand anything. They’re just tokens, little entities that are put together. That’s why these models have problems with maths: if the same sequence isn’t in the training data, they’ll have a really hard time.
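(Editor’s note: a toy sketch, in Python, of the “collage plus probabilities” mechanism Rausch describes: a bigram model that guesses the next token purely from counts in its training text, with no understanding. The corpus is invented for illustration; real LLMs are vastly larger but rest on the same next-token statistics.)

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which: a frequency table of continuations.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def guess_next(token: str) -> str:
    # Pick the most frequent continuation: statistics, not meaning.
    options = follows[token]
    return options.most_common(1)[0][0] if options else "<no idea>"

print(guess_next("the"))   # "cat": it most often followed "the"
print(guess_next("fish"))  # "<no idea>": that sequence was never seen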
If we add the symbolic side, i.e., semantics, the understanding of a context, we arrive at a completely different dimension of chatbot. The interaction will be much stronger. In fact, this is the technology used in humanoid robots. You can’t put ChatGPT in a robot! That would be dangerous. They had a contract with the Chinese, which has been broken.
But it’s also the one that scares us, isn't it? Able to think and decide for itself...
Exactly. LLMs are not going to get us to general artificial intelligence. Neurosymbolic AI will give us a chance to build intelligence that understands us. When Deep Blue won against Kasparov, it didn’t know it was playing chess. We have to be careful with words. We say “the machine thinks”; “it reflects.” No! The machine processes data. It separates it. It’s 0 and 1. These days, we use this anthropomorphic vocabulary too much, and often too carelessly, for technological subjects. That’s not the way to make these systems more trustworthy.
Announcements are multiplying. Many experts say that we’ve lost the momentum needed to put the brakes on technological development and set it back on a more ethical track, in the sense you mean?
Everyone was up in arms about China’s DeepSeek. I thought it was fantastic. DeepSeek told Silicon Valley, which is pumping billions and billions into this, that things could be done differently, with much less money and much more quickly. From an ethical point of view, they are all just as lamentable as each other, but it gives us Europeans hope, because we have extraordinary brains and skills.
Let’s really try to think “out of the box,” “out of the black box”! That’s what they did. It has given a boost to small businesses, to SMEs. In Europe, 80-90% of the economic fabric is made up of SMEs. It is also our mission, at the House of Ethics, to talk to these people, and less to those who have the money to afford everything.
Not to mention GDPR, under which we have decentralised the authorities to some extent in Europe. But regulators don’t have the same will everywhere...
It’s a nightmare, this journey. It takes so long. There are so many barriers that the sanctions are ridiculous.
Maybe they’re ridiculous because there's nothing to sanction.
Yes, there is! Look at Max Schrems: he has hundreds of cases! He himself says that things are going nowhere. The Italian regulator is going to great lengths, even banning technologies and applications. The Irish, on the other hand, are very lax.
They’ve got the tech giants at home...
It’s all very well to talk about data protection all the time, but at some point you have to move on! From my point of view, the most important thing is the independence of ethics. Most laboratories and institutions are subsidised by Google, by Meta... That shouldn’t be the case. We organise meetings, we bring in experts from all over the place; our conferences are paid for, yes, but that’s how it should be done.
Unless you consider that funding comes from everywhere and that this guarantees balance.
It’s something I asked myself at the beginning, and at first I always said no to partnerships. Today we have one, with the Institute of Philosophy and Technology in Athens. If we brought in the tech giants and the Big Four, like PwC, who were interested, it wouldn’t work. This way is more difficult, but we can say what we want.
Saying is fine, but you also have to be heard... Even those who go against the system are actually part of the system.
Like rappers. They often come from ghettos, and once they’re rich, they acquire the same value system as the rich: the villas, the luxury cars and so on. But it’s not that difficult. I asked 30 people to come to our conferences, and all 30 said yes. We don’t format anyone. Ethics is alive, it’s biological, it evolves. It’s an organism that has to evolve.
Three lectures not to be missed
In 2025, the House of Ethics will organise a series of three lectures on AI and ethics moderated by Katja Rausch, while the round tables will be moderated by its interdisciplinary research director, Daniele Proverbio.
On 2 April: Susie Alegre, a lawyer specialising in human rights and artificial intelligence, author of the bestseller Human Rights, Robot Wrongs, has been invited.
On 15 May: it will be the turn of Carissa Véliz, a data protection expert at Oxford University and author of The Economist’s best book of the year, Privacy is Power.
On 18 June: another topic of interest to Luxembourg. Ingrid Vasiliu-Feltes, an expert in digital twins and deeptech, will come to talk about these subjects. She founded DoUtDes Cyber-Ethics, dedicated to 360-degree cyberintelligence.
Katja Rausch’s mini-biography
Double master’s degree from the Sorbonne
A graduate in marketing, audiovisual and publishing, Katja Rausch headed to Louisiana for an MBA at the Freeman School of Business, before returning to the Sorbonne to complete her training with a master’s degree in specialised modern literature, linguistics and rhetoric.
From New York to Paris and Luxembourg
Rausch began her career in New York with Booz Allen & Hamilton, in management consulting, before returning to France to manage the digital strategy of an IT engineering services company, working for clients such as Nestlé France, Richemont (Cartier), Lafuma and the Bank of Central African States. She then made a marked shift towards the intersecting issues of ethics and technology.
The “prof”
Throughout her career, she taught information systems at the Sorbonne School of Management and became a pioneer in ethics applied to AI. As early as 2016, Rausch introduced one of the first courses on data ethics at the Paris School of Business.
This article was written for Paperjam magazine, published on 26 March. The content is produced exclusively for the magazine. It is published on the site to contribute to the full Paperjam archive.