If you have read the book “I, Robot” by Isaac Asimov, or seen the movie featuring Will Smith, you’re already familiar with the Three Laws of Robotics: “1: A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws”.
Of course, this is pure science fiction. But those three laws just appeared in a draft European Parliament committee report on robots and artificial intelligence. In this non-binding document, which could potentially be endorsed by the full parliament in January, Mady Delvaux, a Luxembourg Socialist MEP, calls for the creation of a legal framework for such machines.
“Robotics and AI have become one of the most prominent technological trends of our century. The fast increase of their use and development brings new and difficult challenges to our society”, writes Delvaux. Therefore, the reasoning goes, “robots and AI would increase their interaction with humans”, raising “legal and ethical issues which require a prompt intervention at EU level”.
What should Europe do, then? Pass a law, Delvaux answers. She wants the European Commission to draw up rules defining smart robots and to establish a classification and registration system for them. The rules would also tackle robot liability, notably the question of “allocating responsibility for damage caused by robots”.
In addition, Delvaux wants a “European agency for robotics and artificial intelligence” that would provide analysis and expertise. Then, concluding her draft text, she looks beyond the EU and calls for “regulatory standards under the auspices of the United Nations”.
Last but not least, Delvaux would like robotics engineers and robot users to comply on a voluntary basis with a “Charter on Robotics”. And to be sure it happens, she actually wrote the charter herself, in an annex of the report. According to the text, engineers should abide by a few principles, such as beneficence (robots should act in the best interests of humans), non-maleficence (they should not harm humans), autonomy (everyone should be able to make un-coerced decisions regarding interactions with robots), and justice (there should be a fair distribution of the benefits of robotics). As for users, they would be “permitted to make use of a robot” and “have the right to expect [it] to perform any task for which it has been explicitly designed”. But they would not be permitted “to enable it to function as a weapon”.
In the Brussels bubble, sources describe the report as far-fetched. It tackles some very theoretical issues, and the commission does not seem very eager to create a specialised agency or to draft a law on robot liability. Despite several messages left with her office, Delano could not reach Delvaux for comment.
The draft report “Civil Law Rules on Robotics” (PDF) is available on the European Parliament’s website.
This article was first published in the Winter 2017 issue of Delano magazine. Be the first to read Delano articles on paper before they’re posted online, plus read exclusive features and interviews that only appear in the print edition, by subscribing online.