ChatGPT is transformative, says Jacques Klein of the University of Luxembourg. One of its biggest flaws, however, is that it does not fact check. Photo: Shutterstock

The artificial intelligence software ChatGPT has prompted universities to rethink how they teach, stoked fears over the future of white-collar jobs and spurred the development of new programmes to sniff out the robots. But it’s not outsmarting humans yet, say researchers at the University of Luxembourg.

“It’s transformative in the field,” said Jacques Klein, a professor at the university. Klein is part of a team at the university’s Interdisciplinary Centre for Security, Reliability and Trust (SnT) that is working with BGL BNP Paribas to develop a chatbot for the bank.

Chatbots are already widely used, for example in customer service. Users type in a question and software matches it with a pre-defined answer. When a problem is more complicated, customers are often still connected with a human.
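A rule-based bot of this kind can be sketched in a few lines of Python--the keywords and canned answers below are invented purely for illustration and are not taken from any bank's actual system:

```python
import re

# A minimal rule-based chatbot: match the user's question against
# pre-defined keyword sets and return a canned answer.
# Illustrative sketch only; keywords and answers are invented.
FAQ = {
    ("opening", "hours"): "Our branches are open 9:00-17:00, Monday to Friday.",
    ("reset", "password"): "You can reset your password under Settings > Security.",
    ("lost", "card"): "Please call the hotline immediately to block your card.",
}

def answer(question: str) -> str:
    words = set(re.findall(r"[a-z]+", question.lower()))
    for keywords, reply in FAQ.items():
        if all(k in words for k in keywords):
            return reply
    # Harder cases fall through to a human agent, as the article notes.
    return "Let me connect you with a human agent."

print(answer("What are your opening hours?"))   # matched, canned reply
print(answer("Why was my loan refused?"))       # no match -> human handover
```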

Artificial intelligence can also automate the reading and processing of some forms to speed up procedures. Luxembourg’s national health insurer CNS, for example, has begun working with automated processes for its reimbursements.

It seems to understand the question you ask and generates an answer that is not predefined.
Jacques Klein, professor, University of Luxembourg

ChatGPT, however, takes this to the next level. “It seems to understand the question you ask and generates an answer that is not predefined,” said Klein. “This is something that is much harder to do. You need to train your AI on very large data sets.”

But it also comes with technical limitations. A big one is that there is no way to know whether the information it provides is factually correct. “The only way to know whether an answer is correct is for the user to know the answer already. It’s very easy to spread fake news or wrong information.”

Catherine Léglu, vice-rector for academic affairs at the university, said, for example, that she had asked ChatGPT for text related to her area of research--medieval Occitan literature. While some extracts read more or less like Wikipedia entries, some of the output was factually incorrect, she said.

“The human dimension”

The university is working on updating its guidelines on plagiarism and fraud to raise awareness among students that the use of the software could jeopardise their grades. Students submit a form with their work to declare that it is original and produced by themselves, which is not the case if ChatGPT or other AI was used to write it.

“We’re looking at it in various ways,” Léglu said. “In terms of research, of output and academic work, of student assessment.” The humanities faculty is in the process of building a digital ethics centre, which is “busy analysing the tool,” she said.

Work produced by AI largely lacks original thought. It also does not include references, a key component of academic work. “What’s missing from it is precisely the human dimension,” said Léglu. And while the software has passed exams in some fields, it has so far proven to be at best an average student.

“It is not super smart,” said Klein, adding that it struggles with logic questions. For example, if you tell ChatGPT that you’re six years old and have a sister half your age, and then ask the system to tell you how old your sister will be when you’re 60, it cannot give an answer. “It does not have the ability to think like this.” Yet.
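The arithmetic itself is trivial once the trick is spotted: “half your age” at six fixes an age gap of three years, so the sister would be 57. A few lines of Python (ours, purely for illustration) make that explicit:

```python
# The logic puzzle Klein describes: at 6, a sister half your age is 3,
# so the age gap is fixed at 3 years and never changes.
my_age_then = 6
sister_age_then = my_age_then // 2       # half my age: 3
age_gap = my_age_then - sister_age_then  # 3 years, constant for life

my_age_later = 60
print(my_age_later - age_gap)            # 57
```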

“We’re only at the beginning,” said Klein. “At some point, it will be able to answer that question.” How long this will take, however, is anyone’s guess.

The makers of ChatGPT last week released a tool that will help users spot AI-written work, and Klein said that perhaps one AI will be better at spotting another than humans would be.

Lecturers have long been alert to plagiarism and essay mills, where students pay an external provider to write their work--which usually ends up being very formulaic, Léglu said. The university is encouraging lecturers to develop exams and assessments that make cheating more difficult.

AI monopoly

While AI will perhaps automate work previously done by white-collar workers, Klein said it could also free up time for staff to do more meaningful tasks. Or perhaps it will simply alleviate workloads for better work-life balance. “For now, I see it more as an assistant,” Klein said. And AI will also create new jobs.

How we use AI is still up to humans. Regulation in the financial sector, for example, heavily restricts access to external tools. Within some banks, employees cannot even use Google Translate from a work computer, never mind ChatGPT. A bank needs to be able to explain, for example, why a loan was refused; in an entirely automated procedure, that explanation would be lacking.

Only a really big company can build this kind of model.
Jacques Klein, professor, University of Luxembourg

“We need to live with it,” said Klein. “It exists, so why not use it?” In the health sector, AI could help prevent oversights by human staff. Doomsday visions of a robot apocalypse have existed for decades, and we are still far from the capabilities past generations imagined we would have reached by today.

Regulation must be considered when it comes to AI, Klein said, but an often forgotten aspect, he added, is the potential monopoly of a few powerful players. Google appears set to launch its own rival soon. “Only a really big company can build this kind of model. It’s really expensive… We need to think about that.”

The author attempted using ChatGPT to write this article but the website was consistently “at capacity” and experiencing “high traffic”.