The borderless nature of the internet, where a website may be hosted in one country, operated by staff in another, and commented on by readers in a third, poses a thorny problem for website operators and government agencies seeking to tackle harmful online content.

In Britain, the telecommunications regulator Ofcom recently issued a report discussing the issues around online harm and potential ways forward. A UK government white paper on the subject is also expected this autumn, and health secretary Matt Hancock announced at the Conservative Party conference that he would direct the UK’s chief medical officer to draw up guidelines about social media use among children and teenagers amid growing concerns over potential harm.

Most forms of online content are not subject to the Ofcom Broadcasting Code with which radio and television services based in the UK must comply. In fact, Ofcom highlights that a wide range of popular online content – including videos uploaded to YouTube, content posted on social media or sent through messaging services, content on many online news sites, and political advertising – is subject to little or no specific UK regulation. These different platforms are subject to different rules, which means the same content shared on each would be treated differently depending on how it was accessed.

Ofcom sees this “different screen, different rules” approach as arbitrary and problematic, providing no clear level of protection for viewers. While addressing this disparity is a legitimate aim, is it even possible to subject all content accessible by UK audiences via the internet or online services to UK law, regardless of where in the world it originates?

Rule of territory vs access

Under international law, one of the primary means for states to exercise their jurisdiction is the territorial principle, the right to regulate acts that occur within their territory. UK law would apply to online content hosted on servers located in the UK, for example, or to an internet user uploading content online from the UK.

But of course internet users can access content created and hosted all over the world, and it is not always possible to tell where it comes from or where it is hosted. This limits the usefulness of the territorial principle and makes establishing a territorial connection with “un-territorial data” a key requirement. Unfortunately, there is no international agreement on how to do so.

Instead, states have interpreted the principle quite broadly, arguing that the mere accessibility of online content from within their territory is sufficient to establish jurisdiction. For example, in court cases against Perrin and Yahoo, UK and French courts respectively applied their national laws to online content accessible in their countries, even though it had been uploaded from and was hosted in the US. The act of publishing content online, the courts argued, is equivalent to physically acting or producing adverse effects within their territory, irrespective of where the content originates.

The too-long arm of the law?

So we increasingly see states tending to impose measures that go well beyond their borders. For example, in a case regarding “the right to be forgotten”, the French Data Protection Authority ordered Google to remove search results not just from its European versions, but from all its geographical extensions, in order to make the search results inaccessible worldwide. This case shows a national authority using the fact that the US-based company Google conducts business in France to impose the global application of its domestic laws.

This is problematic because it inevitably runs into the rights and freedoms of foreign citizens abroad, who should in theory only need to comply with the laws of their own country. The French case is currently pending before the European Court of Justice. The “right to be forgotten”, while protected by EU law, does not have universal application: internet users in the EU might have a right to have some personal information removed, but internet users in countries where that information is legal have a right to access it.

But if the global delisting order were enforced, internet users in those other countries would see their freedom to access information violated by a decision of a foreign authority in a foreign jurisdiction based on foreign law. If all states adopted this approach, it would only be a matter of time before internet users in Britain found their right to freedom of information on the line.

Defining and regulating ‘harm’

On the other hand, states have a right to regulate in order to protect their citizens from harm. Indeed, the way foreign-based corporations manage online content can negatively affect internet users’ rights worldwide. As has been pointed out, multinational companies’ choices about where to host their data, where to base their operations (and consequently which laws they must comply with), and what to include in or exclude from their terms of service all significantly affect the rights to privacy and freedom of expression of internet users worldwide.

Some companies voluntarily perform global takedowns of content at the request of governments or users, based on their own terms of service. But international cooperation is needed, and it is preferable to unilateral actions by courts that have very broad extraterritorial effects. A number of international multistakeholder groups working on internet governance are exploring possible solutions, including the World Summit on the Information Society, the Internet Governance Forum, meetings organised by ICANN and the Regional Internet Registries, and international conferences organised by the Internet & Jurisdiction Policy Network.

But many missing elements make progress difficult. When it comes to regulating online content, there is no international agreement on how states should exercise their jurisdiction, what kind of content should be considered abusive, whether internet companies should be responsible for their users’ content at all, how or whether they should remove content considered harmful or abusive, and whether such removals should be global or limited in scope.

These problems are so intertwined with the sovereignty of each state that a single international agreement covering all the issues and acceptable to all is unrealistic. But agreed common international guidelines on how to address these issues are needed, and drawing them up will require input from nation states, internet companies, the internet technical community, and voices representing users’ rights and civil society.

Sara Solmone, Postgraduate Teaching Assistant, University of East London

This article is republished from The Conversation under a Creative Commons license. Read the original article.