Abstract
This article examines the seven standard questions that must be confronted in nearly all decisions in the online content regulation field. By mapping out and discussing those standard questions, the article aims to aid both legislators and judges engaged in online content regulation. The checklist of questions canvassed here may also be used as a tool to evaluate both laws aimed at online content regulation and judgments made within this field.
The online content available to Australian Internet users is regulated in several different ways. Areas such as defamation law, intellectual property law, data privacy law, hate speech law and national security law all set parameters for what may be posted online as well as offline. Underpinning this is a constant balancing between fundamental rights including the right of freedom of expression. This regulatory landscape is constantly evolving. Recent proposed reform initiatives include new rules in Australia regarding defamation, data privacy and combatting misinformation and disinformation.
All, or at least nearly all, decisions made by lawmakers (including both the courts and the legislators) in the online content regulation field involve a set of standard components or questions that must be confronted. This article seeks to map out those standard questions and discuss them briefly. It is hoped that this mapping will aid both legislators and judges engaged in online content regulation. The checklist of questions canvassed here may also be used as a tool to evaluate laws aimed at online content regulation as well as judgments made within this field.
Having made these remarks, I hasten to acknowledge that content regulation – both in Australia and beyond – also comes in several more informal forms. There is significant public pressure to regulate content that is not currently unlawful, and that in some cases, arguably cannot constitutionally be made illegal. 1 In addition, the Internet intermediaries that host, or otherwise facilitate, much of the content dissemination on the Internet play major roles in content regulation both by upholding the law and their own content policies, and via the decisions they make as to which content-related orders they accept and which they seek to fight back against. These matters, while highly significant, fall outside the scope of this article. 2
To prepare the ground for the checklist of standard questions that must be confronted by law makers, a few initial observations will first be made about online content regulation.
Online content regulation
Debates about online content regulation are characterised by the clash of strong – in some cases, extremist – points of view. Free speech advocates may favour the uncompromising position that any restrictions imposed on what may be communicated online are unacceptable. At the same time, those in the ‘all laws apply all the time’ camp take for granted that all online content must conform to all applicable laws regardless of source.
The reality is that neither of these positions is realistic and, admittedly, each may be driven by a kind of ‘ask for everything to get something’ mentality. On the one hand, absolutists on the free speech side may be criticised for failing to properly take account of the fact that some types of content (such as child abuse content) are so vile that they simply must be addressed. On the other hand, those calling for compliance with all applicable laws seemingly fail to recognise that they are asking for the impossible – Internet actors (including those posting content or facilitating the posting of content) typically cannot possibly: (1) identify all laws that appear to regulate their online behaviour; (2) access all laws that appear to regulate their online behaviour; or (3) understand all laws that appear to regulate their online behaviour.
Additionally, even the largest Internet intermediaries may be placed in positions where legal compliance is made impossible by competing duties stemming from different countries’ laws that make a claim of applying to their conduct (true conflicts of laws). Thus, the proposition that all laws from all sources – including different countries – can and should be complied with simultaneously is unrealistic.
A key problem is that, while most people would expect Internet actors to abide by the law of their respective countries, they would probably not wish for Internet actors to abide by all laws of all other countries, as such compliance would force Internet actors to take account only of the most restrictive laws found anywhere in the world. Such a race to the bottom is an unhealthy direction for the Internet and would entail a loss of key features of the Internet of today.
Against this background, it seems clear that we need to approach online content regulation with great care and a willingness to be realistic in our expectations. We need to be proactive rather than merely reactive; we need to recognise the international dimension and interconnectedness of the online environment, and we need to aim to reconcile – and, where necessary, balance – legitimate competing interests. Having said this, we can now proceed to examining the seven standard questions.
Who is the law/order directed at?
Where a content regulation matter is brought before the courts, the application to the court will of course typically include some information about the intended respondent. In that sense, the question of ‘who is the order directed at?’ is partly answered for the courts. However, courts are still tasked with deciding whether to grant the order sought, so they still need to engage with this matter. For example, the court may grant a content-related order – such as an order for content to be removed – against the original poster but without also granting an order against the relevant Internet intermediary. The same holds true in relation to orders for damages based on online content.
For the legislator, the question of the appropriate addressee is of more obvious application. Whenever making law affecting online content, the legislature must engage with the question of who must comply with that law.
Legislation and court orders are predominantly directed at either the party originally posting the content in question, or the Internet intermediary (or intermediaries) involved in its distribution. The appeal of targeting the intermediary may stem from a range of reasons, such as the fact that intermediaries are typically easier to identify and, where monetary redress is at issue, commonly have ‘deeper pockets’ than the original poster.
In this context, it is interesting to note how the recent proposed amendments to Australia’s defamation laws – to take effect during 2024 – place greater emphasis on actions against Internet intermediaries than before. 3 This is possibly part of a broader trend, given that ongoing law reform in the area of mis- and dis-information is also targeted at Internet intermediaries. 4
In some circumstances, content-related legislation and court orders may seek to apply to everyone rather than to specified individuals or intermediaries. Suppression orders and non-publication orders, for example, are typically not limited to specific individuals or intermediaries. Rather, they seek to ensure that the content in question is not disseminated by anyone.
On what basis is this content being regulated?
To be able to make any form of decision about online content, the body making that decision must have jurisdiction to do so. This applies equally to legislators, courts, and other bodies. The regulation of online content raises particular issues and risks due to the inevitable interconnectedness and risk of ‘spillover effects’ typical of the online environment. Furthermore, international law on the topic of law-making in relation to the Internet is largely a grey zone. 5 Thus, it would be helpful for States to discuss in detail how they see their claims of jurisdiction over the Internet being justified. This would not only strengthen their claims but would also help develop international law in this field.
Courts must also take care to provide a full discussion of the basis for their jurisdiction. And in doing so, they may wish to show some degree of restraint, or at least an understanding of the international implications of their decisions. Regrettably, that does not always happen.
When considering the question of ‘on what basis is this content being regulated?’, the matter of jurisdiction is only one part of the calculation; the other is the matter of applicable law. Even where a court concludes that it has jurisdiction over a given matter, it needs to determine which State’s law it must then use as the standard against which the content in question is assessed. Similarly, a legislator making law regulating online content must confront the question of whether the domestic law it makes is always, and exclusively, the law to be applied by those subject to it.
An interesting phenomenon in this context is provisions such as section 5B of the
Elsewhere, I have been advancing the following jurisprudential framework for jurisdiction:
In the absence of an obligation under international law to exercise jurisdiction, a state may only exercise jurisdiction where: (1) there is a substantial connection between the matter and the state seeking to exercise jurisdiction; (2) the state seeking to exercise jurisdiction has a legitimate interest in the matter; and (3) the exercise of jurisdiction is reasonable given the balance between the state’s legitimate interests and other interests. 8
I argue that this framework – which has the advantage of not anchoring jurisdiction in territorial thinking – may serve as a baseline against which claims of jurisdiction in the content regulation field may also be assessed.
Against what standard is the content assessed?
A central question for all online content regulation is, of course, against what standard the content is assessed. This will vary depending on the area of law that is involved. For example, determining whether to restrict content based on hate speech laws involves, in part, different types of assessment from those involved in deciding whether to restrict content based on data privacy concerns, or doing so under defamation law. Furthermore, the assessments that parliaments must make when enacting such laws, and those that courts must make when adjudicating disputes under them, differ in part from the assessments that social media companies must make when making take-down decisions, and so on. Given the nature of this brief overview, it is not possible to provide a detailed discussion here of all such assessments.
However, one point that does need to be made is that the assessment process typically involves both a consideration of the content itself and the surrounding circumstances of its publication. Furthermore, available defences must be considered within the assessment. Thus, while the assessments involved in online content regulation may occasionally be relatively straightforward, they are often highly complicated.
Having noted the complicated nature of assessments involved in online content regulation, it may be observed that, while courts faced with making such decisions typically take months to do so, Internet intermediaries may only be able to devote minutes to each decision given the large volume of content about which they must decide.
What type(s) of ‘order’ are involved?
The types of decision relevant in the context of online content regulation may usefully be divided into four categories: removal orders, blocking orders, de-listing orders, and must-carry orders. While distinct, some of these types of order are frequently made in conjunction with each other. For example, it is commonly the case that a removal order is combined with a blocking order. Finally, by way of introduction, it should be noted that, in addition to the content-focused types of order discussed here, other related types of order exist, such as account suspension and domain suspension orders. Such orders may be combined with content-focused orders. 9
Removal orders
The aim of a removal, or ‘take-down’, order is obvious from its name. The party against which such an order is granted is required to remove the content in question. Removal orders are relatively common and have been granted by several courts around the world, for example, in the Australian and EU cases discussed below.
Where content is taken down, there is an obvious and justified concern that it may be reposted. 10
Blocking orders
Blocking orders are concerned with the future availability of content. For example, a court may order that a party not only remove specific content (under a removal order), but also ensure that the same content is blocked from reappearing on the site in question. Thus, a distinguishing feature of blocking orders is that they require some degree of ongoing monitoring. Such orders are thus both more onerous to comply with and pose a greater threat to freedom of expression than removal orders do. Yet, they appear to have become increasingly common. 11
Blocking orders can be divided into at least three sub-categories. In the example above, the blocking related to the same content that had been removed under the removal order. This is the first sub-category. A second sub-category occurs where the blocking covers ‘similar’ content to that removed under the removal order. Finally, the third sub-category is exemplified in those cases where the blocking order stands alone, that is, where there is no identified content being removed but, rather, the order is only forward-looking and aimed at future content.
Perhaps the most extreme blocking order issued in a Western democracy is that made in 2017 by Pembroke J of the Supreme Court of NSW in
Finally, in the context of blocking orders, we may arguably distinguish between the type of blocking orders discussed so far and what may be termed ‘targeted blocking orders’ (or perhaps more commonly, ‘filtering orders’). In the case of the latter, the order is more nuanced, for example, by setting content filtering rules, such as access depending on the user’s age.
De-listing orders
De-listing, or de-indexing, orders are unique in that – at least for now – they are specifically addressed to search engines. Such orders require that the search engine, against which the order is given, must ensure that specific search results are not displayed in response to specific search terms/methods.
A key aspect of a de-listing order is that the original content – the content in dispute – is unaffected and remains available on the relevant website(s). Thus, the order is only aimed at making the content less easily discoverable.
De-listing orders were made ‘famous’ by the Court of Justice of the European Union (CJEU) ‘right to be forgotten’ decision in

Google is an innocent bystander but it is unwittingly facilitating the defendants’ ongoing breaches of this Court’s orders. There is no other practical way for the defendants’ website sales to be stopped. 16
The last sentence in this quote is, of course, entirely incorrect. For example, sales require some form of payment method, so one other practical way for the defendants’ website sales to be stopped would be to seek to cut the payment mechanism. The bigger question is this: if the law fails to prevent certain Internet content, is it reasonable that it places the burden of preventing that content on the shoulders of what it describes as ‘an innocent bystander’?
Must-carry orders
Must-carry, or ‘stay up’, orders are essentially the opposite of removal orders. A must-carry order means that the party to whom it is directed is prevented from removing, and/or required to reinstate, the content in question.
Such orders remain rare and have gained comparatively little attention. 17 To date, must-carry orders have been discussed primarily in the context of US, German and Brazilian law. While such orders have been sought unsuccessfully under US law, 18 in Brazil and Germany, for example, they have forced Internet platforms to reinstate content that the platforms had determined violated their community guidelines. 19
Finally, there is one type of order – with quite a long history – that arguably is somewhat related to those that fit under the ‘must-carry’ label; namely orders requiring the publication of a correction and/or apology. Like ‘must-carry’ orders, such orders obviously demand the publication of certain content and are frequently sought both in the online and offline context, and often in combination with other remedies such as monetary compensation. 20
What is the scope of (remedial) jurisdiction?
All content-related decisions need to take account of the matter of ‘scope of jurisdiction’ 21 or ‘scope of remedial jurisdiction’ as preferred by the Court of Appeal for British Columbia in the
Thus, for example, a court that is in the process of ordering a given person to remove certain online content must also decide whether to order the relevant content to be removed only in the State where the court sits, or globally, or somewhere in between these extremes, such as within a specific region. It goes without saying that this is a key question from the perspective of Internet governance. Indeed, while it remains an emerging area of study, it is a topic that will only increase in significance over the coming years.
Regrettably, courts have often been rather cavalier in their approach to the matter of scope of jurisdiction. In the mentioned case of
Similarly, legislators have too often adopted a ‘head in the sand’ approach when it comes to the matter of scope of jurisdiction. This was on display in the discussions that resulted in the reform to Australia’s defamation law 25 mentioned above. In the few lines by which the crucial background paper that paved the way for the proposal engaged with the topic, it merely notes:

Some stakeholders raised concerns that any power to make orders to non-party intermediaries, particularly where such orders are framed as applying worldwide, may face difficulties of jurisdiction and enforcement in relation to foreign based intermediaries. Other stakeholders pointed out that to the extent that a court already has power, it is not the power that is in issue, but its enforcement, particularly in relation to worldwide orders. Stakeholders generally did not support any attempt to ‘fix’ the inherent jurisdiction and enforcement issues that arise in the context of the global online environment via the MDPs. Some considered that there may need to be international agreement to resolve this issue. 26
This is a clear misunderstanding of what was at stake. It is also a conflation of issues, as it lumps the scope of jurisdiction issue together with the practicalities of effective enforcement. The reform to Australia’s defamation law can, of course, not solve – or ‘fix’ – all the inherent jurisdiction and enforcement issues that arise in the context of the global online environment. Nevertheless, the reform initiative ought to have confronted those jurisdiction and enforcement issues, including the question of whether the new court powers, which allow for non-party orders to remove online content, are to have worldwide scope or not.
In creating new court powers for non-party orders to remove online content with potentially worldwide reach, the defamation law reform contributes to the current trend of States claiming worldwide scope of jurisdiction for their court orders. Unfortunately, Australia here (again) fails even to mention the need to comply with international law in this context, and thereby fails to provide Internet intermediaries with clear guidance as to the intended scope of jurisdiction of such orders.
The above suggests that there is a great need for further work on the scope of jurisdiction.
What is the duration of the order?
Another question that legislators and courts need to consider in relation to online content regulation is the duration of the effect of any given order or other content-related decision. For example, should a given removal or blocking order be permanent or limited in time? The reality is that most such orders are not limited in time. However, some types of orders – such as suppression orders – commonly are limited in time.
A question that arises in this context is who will reinstate content once a removal or blocking order expires. For example, it may be questioned whether a search engine has a sufficiently strong interest in taking steps to ‘re-list’ de-listed search results once they are no longer covered by the ‘right to be forgotten’. This is a topic that has gained remarkably little attention.
Will the content poster be notified?
A final question that must be confronted in some situations is whether the person who posted the content affected by a content regulation order must be notified. Obviously, where the decision is directly aimed at that person, they will be notified. However, where an Internet intermediary is ordered, for example, to remove or block content, the question arises whether the person who posted the content must be informed of that decision.
This question gives rise to complex policy considerations. At the most basic level, if such a person is informed, they may be given an incentive to re-post the material. At the same time, if they are not informed, their due process rights and freedom of expression are undermined even more severely.
Concluding remarks
As explained above, the regulation of online content raises a range of complex questions with which courts and legislators typically must engage. Given the centrality of freedom of expression in any functioning democracy, the issues at stake are important indeed. But while the complexity of balancing – or preferably reconciling – fundamental rights in a domestic setting is obvious, it is greatly amplified where it is brought to an international level as is often the case with online content regulation. Put simply, there is no international consensus about online content regulation.
In this context it is worthwhile to pause and reflect on the type of online environment that Australia wants for the future. Freedom of expression is a fundamental human right that must be upheld both online and offline. At the same time, an Internet that communicates mostly hate speech, ‘fake news’ and child abuse images – clogging our social media feeds, instant messages and emails – would no longer be a valuable resource. Such ‘junkification’ is thus a serious threat to the Internet if the freedom of expression argument is pushed too far.
With this in mind, perhaps it may be said that the way Australia regulates online content is important not only for the obvious policy considerations pursued, such as the protection of reputation, data privacy, and human dignity, but also for the protection of the Internet as such.
Footnotes
Acknowledgment
This article draws, and expands, upon: Dan Jerker B Svantesson, ‘Global speech regulation: Extraterritoriality in the context of internet content blocking, removal, de-listing, and must carry orders’ in Austen Parrish and Cedric Ryngaert (eds),
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
