What kind of future will be inaugurated by artificial intelligence? Predictions vary widely, the worst of them even apocalyptic in nature. Thomas Raleigh, associate professor, believes that it’s worth taking this technology seriously. Photo: Shutterstock

As AI platforms rise in power and popularity, it grows more important for regular people to understand what they are and how they work. This poses a challenge, however, because AI systems are so complex that even their makers can’t know all their inner workings. Delano spoke to philosophy researcher Thomas Raleigh to gain perspective.

“In certain circles, the idea that AI poses one of, if not the, greatest risk to the human race is taken almost as gospel.”

This comment is from Thomas Raleigh, associate professor of philosophy at the University of Luxembourg, whose research project “The Epistemology of AI Systems” will kick off later this year.

The social circles to which he refers are more mainstream than you might expect: they aren’t fringe groups or science fiction writers, but people in university communities and Silicon Valley. And the risk is, at its worst, an eventuality in which AI transforms our lives too radically and too quickly for us to control.

As new AI programs like ChatGPT continue to make headlines, part of what has changed recently isn’t (merely) a technological leap forward, but the fact that AI itself is being taken seriously as an object of study, legislation and concern. In an interview, Raleigh describes his own transition from scepticism about the technology’s capabilities to deep unease about its potential. “To be honest, the more I read about it… the more anxious I’m getting, the more I think treating it as a serious existential risk (as they say) is pretty reasonable.”

(Disclaimer: as a philosophy researcher rather than a programmer, Raleigh also stresses very clearly that his role is that of a bystander; he calls the potentially widespread change to human society due to AI--and associated timeline predictions--“incredibly speculative”.)

Preparing for a unique type of big change

Fear of artificial intelligence comes in part from the mystique surrounding it. Indeed, the computing power and input ranges of AI systems are vast--so vast, Raleigh points out, that even the humans who design these systems don’t fully understand their internal operations. In that context, it makes sense to return to a basic question: how can ordinary people make sense of this technology?

This is a central concern for Raleigh in his research, a three-year project with funding from the FNR. It is also behind an organised push (not related to Raleigh) currently underway in Luxembourg to get individuals to take the “Elements of AI” massive open online course, an introduction to AI and how it could--and already does--affect your life.

These developments, taken together with anecdotal evidence of rising public attention to AI, suggest that people are gearing up for the arrival of what threatens to be a society-upheaving leap in machine capability.

Trusting the machine

The internal workings of the newer neural networks and deep learning platforms, Raleigh points out, are too complex for humans to understand. “Even when it’s fully functioning, we can see the input/output, but we can’t see the internal patterns--or at least we don’t know what the internal patterns are.”

“These things are opaque in a certain sense,” he says.
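To make that opacity concrete, here is a minimal Python sketch (assuming numpy; the tiny network and its weights are invented for illustration, not drawn from any real system). The input/output behaviour is directly observable, but the “internal patterns” are just arrays of numbers with no obvious human-readable meaning.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network standing in for a trained model: its behaviour
# lives entirely in these weight matrices.
W1, b1 = rng.normal(size=(2, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 1)), rng.normal(size=1)

def forward(x):
    """What we can observe from outside: input in, output out."""
    hidden = np.tanh(x @ W1 + b1)
    return hidden @ W2 + b2

x = np.array([0.5, -1.0])
print("output:", forward(x))  # the visible input/output behaviour
print("weights:", W1)         # visible numbers -- but what do they mean?
```

Scaled up from a couple of dozen parameters to billions, this is the sense in which even the designers cannot read off what the system has learned.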

In this context comes the first of his research questions: under what circumstances is it rational to trust these technologies? You can start from what is measurable--a track record of past performance--but the system is designed to move beyond it. “You have a track record of reliability,” he explains. “You know that on the inputs so far it’s given mostly accurate outputs. But of course, that can change, right? As it moves away from its training sets into a new domain.”
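His point about track records and new domains can be illustrated with a toy example (a sketch assuming numpy, not anything from Raleigh’s project): a model fitted on one range of inputs looks perfectly reliable there, then fails badly once the inputs drift elsewhere.

```python
import numpy as np

# "Training" domain: fit a straight line to sin(x), which is nearly linear
# on [0, 1], so the fit looks excellent.
x_train = np.linspace(0.0, 1.0, 50)
slope, intercept = np.polyfit(x_train, np.sin(x_train), deg=1)

def model(x):
    return slope * x + intercept

# Track record on familiar inputs: tiny errors, so trust seems rational.
in_err = np.abs(model(x_train) - np.sin(x_train)).max()

# The same model far from its training set: the track record tells us
# nothing, and the error explodes.
x_new = np.linspace(5.0, 8.0, 50)
out_err = np.abs(model(x_new) - np.sin(x_new)).max()

print(f"max error on training domain: {in_err:.3f}")
print(f"max error on the new domain:  {out_err:.3f}")
```

A past record of accurate outputs, in other words, only licenses trust while the inputs stay close to the domain on which the record was built.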

A related issue is linked to the meaning behind what an AI machine can produce. Text generators that use natural language processing have, for example, become very advanced. “But it’s not totally obvious that [their output] has the same meaning as a sentence produced by a human,” says Raleigh. “It’s not clear that they mean the same thing--or under what circumstances we should think they mean the same thing.”

Tools to help simplify AI

Another research question has a more practical aspect: how can we go about discovering an AI system’s internal patterns and mapping them in a way that our brains can parse? Raleigh talks about finding an ontology for the internal processes of an AI system.

On that front, one of the project’s partners is Amro Najjar, a computer scientist at the Luxembourg Institute of Science and Technology (LIST) who works on explainable AI. Here, the idea is to create tools that can show us those opaque internal patterns. “Of course, these… are inevitably simplifications,” says Raleigh, “simplified models of the full complexity.” As part of the project, Raleigh and his team will work with Najjar to come up with formal techniques to grade the transparency of AI systems.
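One standard technique in this area--the global surrogate model--shows what such a simplification can look like. The sketch below (assuming scikit-learn; it illustrates the general idea, not the project’s actual tooling) trains a small, human-readable decision tree to imitate an opaque ensemble’s predictions, and measures how faithfully the simplification tracks the original.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# The "opaque" system: 200 trees voting together, no single readable rule.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate: a shallow tree fitted to the black box's *predictions*,
# i.e. a simplified model of the model.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the simplification agrees with the original.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate matches the black box on {fidelity:.0%} of inputs")
print(export_text(surrogate))  # a few human-readable if/then rules
```

The fidelity score makes Raleigh’s caveat tangible: the readable rules are an approximation, and whatever they miss is exactly the part of the system’s complexity that remains opaque.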

What you can do now

With these questions of complexity, trust and ultimately ethics in mind--taken perhaps as a backdrop--the Elements of AI course offers a more immediate way for individuals to learn about artificial intelligence.

The course was created by MinnaLearn and the University of Helsinki, but a local initiative--jointly put together by the University of Luxembourg, the University of Luxembourg Competence Centre (ULCC), the Service de Coordination de la Recherche et de l’Innovation pédagogiques et technologiques (SCRIPT) and Digital Luxembourg--provides support groups and extra webinars (both free) for participants in Luxembourg.

The webinars will be held by experts from the University of Luxembourg. The first one meets on 15 February.

Christian Weibel, communication manager at the ULCC, explains that similar efforts were made with this online course in 2021, when about 1,000 people registered. He hopes for even more takers for this iteration, which features support groups for the first time: one oriented towards women (though open to all) run by Women In Digital Empowerment (WIDE); one geared towards IT professionals run by the Digital Learning Hub; and one for teachers run by IFEN and focusing on how to use the content of the course in their classrooms (“very, very interesting now that ChatGPT has arisen,” says Weibel).

Weibel echoes Raleigh’s observation about anxiety, though from his professional perspective it manifests more immediately as an obstacle to education. “The interesting thing about AI is that it’s present everywhere,” says the communications manager, “so you get people interested quite easily--but then you have to work with lots of biases. Because people are scared of it.”

After that, he continues, a new problem arises: once confronted with the technical aspects of the discussion, people’s interest often drops off sharply.

The hope is that the webinars and support groups will help contextualise and localise the topic so that people can stay interested--and so they can ultimately dig deeper into what artificial intelligence is and what kinds of futures it could bring.