(Note to readers: if you’re looking for an optimistic note in the following, it comes only at the end… alongside my doubt that it will do any good.)
“The grand duchy of alcoholics: what life is like in the richest country in the world.” Long spared by the global fake news campaign, Luxembourg has now received its first glowing article in its very own edition of Pravda (the Russian word for “truth”), a propaganda organ.
Let’s skip the details of how this digital monster describes the country and think instead about the monster itself, an opaque network that is now considered the world’s largest fake news farm. Deployed from Russia and its satellites, it tirelessly feeds misleading, polarising and viral content to social platforms, including Facebook, X, TikTok, YouTube, Telegram and others. Thousands of fake accounts and bogus sites churn out fake videos, hate messages and conspiracy theories in a well-oiled machine. The aim? To create doubt, inflame tensions, undermine confidence in institutions and sometimes even incite violence. In 2023, Meta admitted to having dismantled “the largest disinformation campaign ever seen on its platforms,” orchestrated in large part by this network.
But that’s not the most worrying thing. What gives Pravda unprecedented power is that it doesn’t work alone. It works with algorithms, the same recommendation systems that decide, every day, what we see, read and believe.
Welcome to the age of algorithmic truth.
Today, “reality” is no longer constructed solely by facts, but by streams of content, sorted, filtered and amplified by artificial intelligences that have neither morals nor critical distance, but that obey a single logic: maximising our attention. What we see online, what comes up in our feed, what goes viral or stays invisible… it’s all the result of automated arbitration.
Bubbles and feedback loops
For example, the more a message (whether true or false) is repeated and massively disseminated online, the more it will tend to appear credible and to anchor a certain version of reality. In this way, “repetition ends up generating a convincing effect,” notes one expert, to the point where constant exposure to biased content leads to the emergence of “a discourse on reality shaped by algorithmic truth.” On Facebook, for instance, content that attracts a large number of reactions will be deemed relevant by the system and displayed widely, creating an apparent social validation of that content. Similarly, on TikTok, the “For You” page will very quickly pinpoint a person’s areas of interest and offer them almost exclusively videos on those themes, giving the impression that those subjects dominate the news.
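To make the mechanism concrete, here is a minimal sketch in Python, with invented post data and weights, of the kind of engagement-weighted ranking described above. Nothing in the score reflects accuracy; it only counts reactions, so heavily shared content rises to the top whether it is true or not.

```python
# Minimal sketch of engagement-based ranking (illustrative, not any platform's real formula).
# A post's "relevance" is computed purely from reactions, shares and comments;
# nothing in the score reflects whether the post is accurate.

posts = [
    {"id": "viral-rumour", "reactions": 48_000, "shares": 12_000, "comments": 9_500},
    {"id": "sober-correction", "reactions": 1_200, "shares": 150, "comments": 80},
]

def engagement_score(post):
    # Shares and comments are weighted more heavily than passive reactions,
    # a common pattern in engagement optimisation (weights are made up here).
    return post["reactions"] + 3 * post["shares"] + 2 * post["comments"]

# The feed simply shows the highest-scoring content first.
feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(post["id"], engagement_score(post))
# The viral rumour outranks the correction by a wide margin.
```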
The “truth” then becomes relative to each person’s information bubble. Each individual is locked into a flow of information calibrated to their preferences and profile. We also talk about “echo chambers” where users only hear points of view that confirm their beliefs. These exist because the algorithms learn from our interactions: if we show an interest in a certain type of content (like videos questioning climate change), the platform will recommend other similar content. Over time, the news feed becomes homogenised around our pre-existing opinions. As Sciences Po’s digital chair explains: “By coupling the human tendency to confirm one’s own beliefs with the narrowing of information sources, filter bubbles only amplify the effects of confirmation bias by suppressing opposing viewpoints.”
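The narrowing itself can be caricatured in a few lines. In the hypothetical sketch below, a recommender keeps a simple count of the topics a user interacts with and always serves the topic that dominates that history; after a handful of clicks on one theme, everything else effectively disappears from the feed.

```python
from collections import Counter

# Illustrative filter-bubble loop: the recommender only learns from clicks
# and always serves the user's single most-clicked topic. Topic names are invented.

catalogue = {
    "climate-scepticism": ["video A", "video B", "video C"],
    "climate-science": ["report X", "report Y"],
    "local-news": ["story 1", "story 2"],
}

interest = Counter()

def recommend():
    if not interest:
        return "local-news"               # cold start: something generic
    return interest.most_common(1)[0][0]  # otherwise, the dominant topic wins

def user_clicks(topic):
    interest[topic] += 1                  # every interaction reinforces the profile

# A few clicks on one theme are enough to lock the feed onto it.
for _ in range(3):
    user_clicks("climate-scepticism")

print([recommend() for _ in range(5)])
# -> ['climate-scepticism', 'climate-scepticism', ...]: opposing content no longer surfaces.
```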
And the consequences are many. Firstly, society becomes more polarised. By evolving in parallel media universes, groups of citizens develop increasingly divergent and mutually incomprehensible visions of the world. Studies have shown that, in the United States, exposure to political content from only one side can not only radicalise opinions, but also increase mistrust and even hatred of the other side. Do we still need to talk about the assault on the Capitol?
And then, these bubbles encourage the spread of fake news. Since the algorithms show us what we already like, a user who is a fan of conspiracy theories will be offered more and more conspiracy content. If this content is false, there is little chance of a correction or contradictory information breaking through the wall of the bubble. In this way, “filter bubbles reproduce and suggest content that has already been consulted by the user, which can lead to a lack of access to factual information,” explains the Sciences Po chair. The user ends up seeing almost no external elements likely to refute the disinformation they have integrated; they are evolving in a biased ecosystem where disinformation can circulate without counterweight.
This is where feedback loops come in. Misinformation appears (e.g. a false testimony, a sensationalist rumour or deliberately manipulated content), which may be posted by an individual, a malicious group seeking to deceive, or an unreliable media organisation. If this news item arouses strong reactions (indignation, fear, enthusiasm), users will comment on it, “like” it and share it en masse. Recommendation algorithms are programmed to detect peaks in engagement. Content that generates a lot of interaction is interpreted as “interesting.” Seeing this surge of interest, the platform will push the content to even more users, including those who would not otherwise have heard of it. More people see the news item, and if some of them believe it and react, this creates new interactions. Each new share implicitly reinforces the impression that lots of people are talking about it, so there must be something to it. Without immediate verification, the fake news gains credibility simply by virtue of its popularity. The algorithm, seeing that the trend is confirmed, continues to push it, and the misleading message spreads faster and further, out of control. It’s a self-reinforcing circle: misinformation leads to engagement, which prompts the algorithm to show it more, which generates even more engagement and so on.
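That loop can be simulated in a few lines. The sketch below uses made-up parameters (initial reach, engagement rate, amplification factor) purely to show the shape of the dynamic: as long as engagement feeds back into distribution, exposure grows geometrically until something external interrupts it.

```python
# Toy simulation of the engagement feedback loop (all parameters invented).
# Each round, the algorithm widens distribution in proportion to the
# engagement the content received in the previous round.

reach = 100            # people who see the false story initially
engagement_rate = 0.2  # fraction of viewers who like/share/comment
amplification = 4      # extra impressions the algorithm grants per interaction

for round_ in range(1, 7):
    interactions = reach * engagement_rate
    reach += int(interactions * amplification)   # engagement drives the next wave of distribution
    print(f"round {round_}: ~{reach:,} people reached")

# With these numbers, reach grows by roughly 80% per round: misinformation produces
# engagement, engagement produces distribution, distribution produces more engagement.
```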
Real news is slower to reach internet users
Research has quantified this phenomenon. An MIT study published in Science revealed that, on Twitter, false information spreads much faster and more widely than true information. On average, a true piece of information took six times longer than a false one to reach 1,500 people on Twitter. The authors note that “lies spread significantly further, faster, more deeply and more widely than the truth” on social networks. The novelty, shock value or emotional charge of fake news makes it more viral, and algorithms, which are sensitive to virality, mechanically amplify the trend. Facebook itself has observed that “our algorithms exploit the human brain’s attraction to division,” which leads to ever more controversial content being pushed at users to grab their attention.
The algorithms work to the millisecond, constantly adjusting themselves. Very fast feedback loops can be created (hence the term “instant feedback loop”). Human moderators or manual fact-checking mechanisms, on the other hand, can only intervene after the fact, sometimes when the fake news has already gone viral. Platforms are experimenting with automated solutions (AI detection of misleading content, proactive deletion), but this raises problems of reliability and censorship (risk of false positives in legitimate debates). In short, the internal workings of algorithms make fine-tuned regulation difficult: they are machines for optimising a quantifiable criterion (clicks, views, etc.) but not the quality of the information.
Faced with these biases, researchers and innovators are proposing alternative approaches. For example, the Tournesol project, led by Lê Nguyên Hoang, aims to create a “democratic” recommendation algorithm, where users can vote on the quality of content to guide what should be highlighted. The idea is to move away from the single criterion of engagement and introduce notions of general interest or reliability. Similarly, some suggest giving users the choice of the algorithm they wish to apply (for example, a pure “chronological” mode or a “diversity of opinions” mode).
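Here is a hypothetical sketch of that idea (not Tournesol’s actual algorithm): keep the engagement signal, but blend it with an explicit quality score contributed by users, and make the weighting, or even the whole ranking function, something the user can choose.

```python
# Illustrative multi-criteria ranking (not Tournesol's real model): engagement is
# blended with a community-assessed quality score, and the user picks the trade-off.
# All titles and scores are invented.

videos = [
    {"title": "outrage bait", "engagement": 0.95, "quality_votes": 0.20},
    {"title": "careful explainer", "engagement": 0.40, "quality_votes": 0.90},
]

def score(video, quality_weight):
    # quality_weight = 0.0 reproduces pure engagement ranking;
    # quality_weight = 1.0 ranks purely on what users judged reliable.
    return (1 - quality_weight) * video["engagement"] + quality_weight * video["quality_votes"]

for w in (0.0, 0.7):
    ranking = sorted(videos, key=lambda v: score(v, w), reverse=True)
    print(f"quality_weight={w}:", [v["title"] for v in ranking])
# With w=0.0 the outrage bait wins; with w=0.7 the careful explainer comes first.
```

The point of the sketch is simply that the ranking criterion is a design choice, not a law of nature: change the objective and the same catalogue produces a very different feed.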
And now, poisoning LLMs
And if few people have yet heard of Luxembourg’s version of Pravda, it’s because none of these versions are intended to be read by internet users. Initial studies show that this artificially generated content feeds large language models (LLMs), which can then serve it back to researchers and internet users as credible content. This is what the American Sunlight Project has dubbed “LLM grooming.” LLMs become “stochastic parrots” that imitate linguistic patterns without necessarily understanding the meaning or veracity of the content, and that are likely to reproduce and disseminate this misinformation in an apparently coherent and plausible form.
And this is where a war invisible to you and to me has begun inside the black boxes: laundered in this way, AI-generated content becomes plausible enough to escape detection, forcing those with a shred of conscience left to devise new tools for spotting fake news and bogus content, tools that criminals immediately set about circumventing. It’s a modern version of the race between cops and robbers, applied to fake news.
(And the good news, although I remain cynical, is that the European Union has decided to devote €5m to the launch of a fact-checking platform. If you read that figure correctly, you already know how useless it will be.)