“Let’s not lose sight of the fact that for the employer the obligations are enormous,” said Castegnaro. “The responsibilities must be assumed by management.” Photo: Marie Russillo/Maison Moderne

For lawyer Guy Castegnaro, employers need to get a grip on AI and its uses, because the risks are real.

“AI tools are becoming increasingly common in the workplace, impacting everything from recruitment to dismissal,” said the employment lawyer in a keynote speech delivered as part of the Nexus2050 tech conference at Luxexpo The Box.

In recruitment, said the lawyer, AI systems can write job descriptions; candidates can use a chatbot when submitting their application; and AI systems can sort and pre-select CVs. In performance management, he went on, algorithms can make comparisons, detect opportunities for improvement and efficiency, and even generate a performance evaluation. And in terms of monitoring, AI systems “can track employees for health and safety reasons,” said the lawyer, such as for taxi drivers, “but also for efficiency reasons.”

A matter of risk

Castegnaro, who two decades ago founded the employment law firm that bears his name, warned that these uses are not without risk for employers. The EU’s AI Act, adopted by the European Council last May, identifies four tiers of risk: unacceptable, high, limited and minimal. Applying these to the world of work, Castegnaro classifies AI systems used for recruitment and selection, as well as those that make decisions affecting the employment relationship, as “high risk.”

The resulting obligations are numerous: “The employer must use the AI system in accordance with the system developer’s instructions; designate a person to oversee the AI system who is trained, competent and has the necessary support and authority; ensure that the input data over which it has control is relevant and sufficiently representative; monitor the AI system in accordance with the supplier’s instructions; report incidents to the supplier; inform and consult and, in some cases, seek the authorisation of employee representatives before implementing an AI system.”

“In some cases,” he added, “the employer must carry out a fundamental rights impact assessment before using an AI system, for example if it is a public body providing a public service or if it operates in the banking and insurance sector.”

Eight recommendations for employers

“From my point of view,” said Castegnaro, “companies right now are too inclined to put the question of artificial intelligence back on the shoulders of IT departments. Of course it concerns IT, but let’s not lose sight of the fact that the employer’s obligations are enormous. Management must take responsibility. The time to act is now. And that means getting ready.”

To this end, the lawyer has eight recommendations for employers:

—launch an internal audit of all AI systems currently in use or planned

—identify what needs to change with regard to AI systems

—understand the AI systems in use

—don’t blindly trust AI systems

—think carefully about what type of data is really useful and necessary

—carry out due diligence where necessary

—inform and consult employee representatives and, where appropriate, seek their approval before implementing an AI system

—define and negotiate with employee representatives the amount of AI needed for personnel management

What about employees? “As far as I’m concerned,” said Castegnaro, “artificial intelligence must remain a simple working tool for both employers and employees. Before using it in the workplace, employees must inform their employer, because they may be engaging their company’s liability.”

This article was first published in Paperjam. It has been translated and edited for Delano.