Abstract
Public figures are subject to higher rates of online abuse than everyday users. This article presents findings from a study on digital platforms’ higher threshold for protecting public figures in contrast to everyday users. After summarising the extant literature on the experience, impact and harms of online abuse of public figures, we analyse 31 platform terms of service and related policies to understand the extent to which platforms openly differentiate between public figures and other users. We focus on platforms’ use of ‘newsworthiness’ and ‘public interest’ to justify the differential threshold. Using a culturally informed approach, we argue that these justifications are utilised without regard for the histories, risk assessment, ethics and labour-intensive processes through which the concepts of newsworthiness and public interest became familiar among more traditional media forms such as news organisations.
Introduction
Public figures are subject to high rates of online abuse and harassment and are often at greater risk of abuse than everyday private individuals due to the professional requirement to be an engaged digital user with regular exposure (eSafety Commissioner, 2023; Park and Kim, 2021; Wagner, 2022). The impacts of online abuse on public figures have been substantial and varied, depending partly on profession or role, and are known to include negative career outcomes, withdrawal from public life and serious negative wellbeing outcomes (including suicidality), particularly among women and minorities (Deavours et al., 2022; Duffy et al., 2022). Online abuse targeting public figures has also had a negative impact on communities through the loss of key figures from public life, the lowering of the quality of public debate, and the toxification of the digital ecology (Cover, 2022a).
One of the reasons public figures are victimised at higher rates than other users online is that many digital platforms have significantly higher thresholds for addressing online abuse against public figures, often on the basis that content about public figures is deemed ‘newsworthy’ or a matter of ‘public interest’. Platforms often explicitly mention this differentiation in their terms of service and community guidelines. Even platforms which have strong standards of protection against abuse and harassment have been known not to apply their policies evenly to all users (Takano et al., 2022). To date, there has only been nascent scholarship on the experiences and impacts of online abuse against public figures, despite growing public and governmental interest in the topic, including some official inquiries into how to better protect public figures through regulation of platforms to enhance policy and moderation protections (e.g. Federal Trade Commission, 2022; UNESCO, 2021).
This paper is part of a wider study on online abuse against public figures commissioned by the Australian Government to determine current scholarship, platform and regulatory policy frameworks and public debates on the topic. Among the project activities, we analysed a selection of platform terms of service and related policies to understand the extent to which the online abuse of public figures is related to platforms openly differentiating between public figures and other users. We draw on applied cultural and media theory to make sense of the themes, discourse and conceptual frameworks by which digital platforms justify a different protection threshold for public figures. This article reports on one aspect of the study findings: how platforms’ policy statements justify the differential threshold for intervening in the abuse or misrepresentation of public figures using ‘newsworthiness’ and ‘public interest’ concepts. In the first section, we draw on the emerging corpus of literature to outline the key issues and impacts of online abuse on public figures, incorporating the range deployed across scholarship – entertainers, politicians, journalists, sportsplayers – an inclusive category used by platforms that fails to account for the diverse experiences, resources and institutional supports of public figures (Cover et al., 2024). The second section presents an overview of platform terms of service and community guidelines, drawing on a selection of 31 of the most popular digital platforms to outline the ways in which they articulate that differential threshold. This is followed by a culturally informed analysis of platforms’ use of the ‘newsworthiness’ and ‘public interest’ justifications. We argue that these justifications are utilised without regard for the histories, risk assessment, ethics and labour-intensive processes through which the concepts of newsworthiness and public interest became familiar among more traditional media forms such as news organisations.
Public figures, online abuse and its impacts
A nascent but growing field of scholarly and grey literature shows that public figures are subject to high rates of online abuse and suffer substantial negative impacts across career, health and wellbeing. We conceptualise online abuse of public figures as falling broadly into two categories of problematic communication: misrepresentation (including disinformation and misleading content) and directly targeted abuse (including trolling, doxing, harassment and pile-ons). Both categories can often fall short of the definitions of illegal or unlawful content recognised in various jurisdictions under hate speech and anti-discrimination laws, particularly where a pile-on can involve relatively mild shaming content at harmfully high rates (Thompson and Cover, 2022).
Misrepresentation broadly refers to forms of disinformation, misleading content and the use of synthetic media such as ‘deepfakes’ – realistic-looking fake photos or videos created using artificial intelligence – that affect how the public perceive and understand a public figure (Lazer et al., 2018; Westerlund, 2019: 39). These methods of misrepresentation commonly target public figures for two key reasons: first, complex knowledge about a public figure attracts disinformation in a vacuum of clarity; and second, imagery of public figures for use in synthetic media is widely available (Cover, 2022a). Misrepresentation harms public figures who are dependent for their employment or public roles on authenticity, credibility and control of their narrative or brand, particularly politicians (Roberts, 2023), journalists (Waisbord, 2022) and high-profile entertainers (Bode, 2021; Shahzad et al., 2022).
Public figures, ranging from journalists to entertainers, also experience direct online abuse, particularly given their use of digital platforms for promotion of themselves or the content (television, music, films, party policy announcements, news alerts) in which they appear. Most often targeted in this category are women and minorities (Ghaffari, 2022), and there is an indication of a growing culture of ‘celebrity bashing’ as a cultural form of online engagement (Park and Kim, 2021). Community activists and civil society advocates – an emergent category of public figures – are frequently exposed to direct forms of online abuse, including trolling, doxing and harassment, for instance, in relation to anti-racist advocacy (Park, 2017), family violence campaigns (Whiting et al., 2019), climate change activism (Duvall, 2022), and pro-vaccination messaging since the COVID-19 pandemic (Pérez-Arredondo and Graells-Garrido, 2021).
Addressing the high rates and extent of online abuse targeting public figures is politically and socially important, given the known negative impacts on public figures. Our analysis of the existing literature suggests negative impacts can be understood across three categories: victim-survivor health and wellbeing; withdrawal from public life; and impacts on the quality of public discourse. We discuss each impact in turn briefly below.
First, the literature shows that journalists (Deavours et al., 2022; eSafety Commissioner, 2023), politicians (Einarsdóttir and Ólafsson, 2022), entertainers and other celebrities (Ghaffari, 2022; Park and Kim, 2021) have been found to suffer substantial personal harms to health and wellbeing resulting from online abuse and harassment. The frequency and severity of the abuse has been found to result in significant emotional labour to maintain employment and public presence (Miller and Lewis, 2022). One empirical study noted that celebrities who are the victims of online aggression often struggle with negative consequences and hurt in ways that can form the basis for more serious issues such as depression and problematic alcohol or drug use (Ouvrein et al., 2021). In more extreme cases, suicidal behaviour has followed online abuse; a number of key international case studies document instances in which harassment, trolling, pile-ons and doxing have led to suicide, with coronial reports noting the abuse as a significant contributing factor (Jane, 2015; Park and Kim, 2021; Thompson and Cover, 2022).
Second, several studies have noted that online abuse substantially reduces the willingness and desire to remain engaged in public life, with some public figures known to change their field of employment as a result (Lee and Kim, 2022; Sarikakis et al., 2023). Online abuse is now recognised as a barrier to women’s future representation in parliament and public service roles in the United Kingdom, Sweden and Finland (Erikson et al., 2021; Harmer and Southern, 2021; Mannevuo, 2023), while some sportsplayers have withdrawn from participation at elite competitive levels for the same reason (Kavanagh and Jones, 2016).
Third, the outcome of both the health concerns and the reduction in online engagement by public figures alongside the broader toxification of digital communication can lead to a stifling of quality public debate by making it more difficult for opinion-makers and debate leaders to participate safely in online discourse (Karatas and Saka, 2017). Studies have noted this has had political consequences; for example, Colombia, India and other non-Western and Global South countries have had their political discourse disrupted by the curtailment of speech subsequent to online harassment of journalists and politicians (Barrios and Miller, 2021; Bhat and Chadha, 2022; Pain and Chen, 2019).
Given the known extent and rate of abuse experienced by public figures, and the evidence of its negative impact on both the people themselves and the wider digital ecology, it is important to investigate the extent to which platforms’ differentiation of public figures from other users actively contributes to the issue, alongside the exposure to harms a public figure generates simply by being well-known (Furedi, 2010). In the next section, we analyse a selection of major platforms’ policies with regard to how they differentiate public figures from other users.
Platform differentiation by policy
Despite their origins in a laissez-faire communication environment in the 1990s, platforms are now facing mounting advocacy for increased regulatory pressure both to improve their protection policies and to practise them through consistent, timely moderation and intervention (Blanco et al., 2022; Flew, 2021). This has included advocacy to equalise their policy protections for all users, regardless of status or public notoriety. Given the significance of the more popular platforms for most kinds of public figures’ self-representation in the course of their work, we surveyed the policies, community guidelines and terms of service of 31 of the most popular platforms. Platforms were chosen based on a list of the 28 most popular platforms by user number provided in 2022 by Semrush (Lyons, 2022), plus three platforms relevant to the research team’s previous research. User numbers ranged from 150 million (Discord) to 2.9 billion (Facebook) monthly active users for the first quarter of 2022.
We present here a qualitative thematic analysis of key samples drawn from individual cases to indicate the extent to which public figures are protected differentially. All policies and terms of service were current as of May 2023. Of the 31 platforms whose policies, terms of service or community guidelines were surveyed, approximately one-third (n = 10) had policies that did not explicitly differentiate public figures from everyday users. The remaining two-thirds (n = 21) made public figures an exception to one or more forms of prohibited content or behaviour. This does not mean public figures are necessarily considered ‘fair game’ by the platforms – for example, certain kinds of extremist content, personal threats and illegal hate speech are usually prohibited in policies even among platforms that differentiate protections for public figures – yet milder forms of harmful content or behaviour that carry the risks and impacts for public figures described above were often permissible in policy statements. For the sake of space, we will give a few key examples of platform policies from both groups.
The platforms under the Meta group provide a clear example.
However, Facebook’s ‘Bullying and Harassment’ policy does indeed make a distinction between public figures and private users on the basis that the platform wishes to encourage critical commentary: We distinguish between public figures and private individuals because we want to allow discussion, which often includes critical commentary of people who are featured in the news or who have a large public audience. For public figures, we remove attacks that are severe, as well as certain attacks where the public figure is directly tagged in the post or comment. We define public figures as state- and national-level government officials, political candidates for those offices, people with over one million fans or followers on social media and people who receive substantial news coverage (Facebook, 2023a).
The policy is grounded in a public interest test by invoking the idea that critical commentary by other users is valuable in itself. Furthermore, the policy notes ‘substantial news coverage’ as a justification for a higher threshold before take-down of content. This is reflected in Facebook/Meta’s ‘Privacy Violations’ policy, which notes that it is a violation to post content that is believed to come from a hacked source, whether about public figures or private individuals ‘[e]xcept in limited cases of newsworthiness’. As with the concept of public interest, newsworthiness has long been recognised in scholarship as arbitrary and sometimes indefinable, dependent on the substantial labour of news media editorial teams (Archard, 1998; Gauthier, 2002).
Similarly, LinkedIn states:
We do allow for disagreements, commentary or criticism on policies and matters of public interest or organizations as long as they do not insult or vilify. Members may express heightened negative criticism and disapproval towards public figures, such as politicians, celebrities, prominent business leaders, or other individuals voluntarily in the public eye (LinkedIn, 2023).
As with Twitter, LinkedIn understands public figures to include those who have made a choice to be public, although this is ill-defined (Cover et al., 2024). While a politician would ordinarily be recognised as a person who voluntarily entered public life, it is unclear if a person whose employing organisation expected them to engage online audiences as part of their work – journalists, some entertainers, charity leaders or company executives – would be considered as having made a voluntary choice to be public, or whether they would perceive themselves to be ‘public figures’ in the form recognised for politicians or actors. Indeed, a core problem lies with the policy’s assumption of a neat alignment between being a public figure, agency and individual responsibility, drawing on outdated liberal-humanist presumptions that public figures are natural and rational agents who in all cases consent freely to their public engagement.
TikTok likewise permits some critical commentary of public figures:
We allow some critical comments of public figures, understanding that they are in a position of public attention and have ways to counter negative speech, and that the critique may be in the public interest to view. However, we still remove content that violates other policies (such as threats, hate speech, and sexual exploitation), as well as serious forms of harassment (such as doxing and expressing a desire for someone to experience serious physical harm) (TikTok, 2023a).
The basis of TikTok’s policy is that public figures are understood to have greater resources and support to combat misrepresentation and direct abuse (written as ‘negative speech’). This broadly fits within what is sometimes referred to as a ‘power transfer model’ of media ethics (Cover, 2004; Gauthier, 2002), whereby harms to a person with greater resources or a position of public power or influence are deemed more tolerable because criticism of them produces a levelling effect across the broader society. However, such a policy significantly overestimates the extent to which all public figures are resourced to counter such negative speech. For example, there are significant differences in institutional support for a politician belonging to a major party as opposed to an independent parliamentarian, just as a freelance journalist is arguably less well supported than one who works for a major international news outlet (Brummel et al., 2019; George, 2014). The policy may also imply that public figures have better mental health and wellbeing resources than everyday users, which is not necessarily the case. Drawing on the same principles, TikTok protects only those who are not public figures from the harms of synthetic media misrepresentation such as deepfakes (TikTok, 2023b).
Discussion and forum-based platforms such as
Finally, popular video-sharing site YouTube exempts from its harassment policies ‘Debates related to high-profile officials or leaders: Content featuring debates or discussions of topical issues concerning individuals who have positions of power, like high-profile government officials or CEOs of major multinational corporations’ (YouTube, 2023).
It is notable that the examples of public figures given are primarily limited to government actors and corporate leaders, unlike other platforms such as those in the Meta group or Twitter, which use ‘public interest’ and ‘newsworthiness’ to declare a substantially wider group of individuals as potential public figures who will be exempt from protections enjoyed by everyday users.
Among the 31 platforms surveyed, only two were found to have ambiguous or more nuanced differentiation. The image-sharing and saving platform Pinterest was one of these.
One-third of the platforms either did not discuss public figures in their terms of service or were explicit that their terms and community guidelines applied to all users. These included Amazon’s live-streaming Web site Twitch.
The newsworthiness justification
These platforms hold significantly higher thresholds for addressing online abuse against public figures, on the basis that content about public figures is ‘newsworthy’ or a matter of ‘public interest’. The concept of newsworthiness has often been described as an arbitrary set of values that governs what ‘counts’ as news, and the selection of events to be published as news (Bednarek and Caple, 2017), operating within liberal-humanist concepts of media, truth and service to democratic processes (Gans, 2010). Broadly, what is newsworthy tends to be culturally specific, location-aligned and relevant to a news outlet’s own values and perceptions of what interests its readership or audience, but nevertheless has a very strong association with the traditional institutional practices that utilise it, test it through reporting and writing practices, and explicitly separate it from opinion (Deuze et al., 2007).
Most contemporary analysis of how newsworthiness is operationalised stems from work conducted in the 1960s by Johan Galtung and Mari Ruge (1965), which analysed a ‘chain of news communication’, the processes of determining news value in print and broadcast news organisations. They outlined the steps and personnel involved in determining if an event or story should be considered newsworthy, and therefore worth reporting upon. The process involves: selection, whereby an event is chosen if it aligns with one or more ‘factors’ such as frequency, intensity or reference to elite persons or nations; distortion, which involves accentuation of the newsworthy elements of the story; and replication, whereby once something has become newsworthy it is more likely that similar events or personages will be treated in the same way. Later research (e.g. Harcup and O’Neill, 2001) noted that in the subsequent decades, celebrity stories and more positive news stories were being incorporated into what counts as newsworthiness based principally on repetition.
Whitaker and colleagues (2004) showed how judging newsworthiness has been further refined based on one or more of the following eight criteria: timeliness and currency; proximity to media consumers; prominence (related to a famous person or place); impact on readers’ lives; suspense (related to conflict and resolution); human interest and appeal to human emotions; novelty (the ‘first’, ‘last’ or ‘only’ concern); and progress (in relation to the goal achievements of people and communities).
Media scholars have noted that while newsworthiness as a concept is driven by a ‘will to truth’ which motivates an institutional framework governing what is and should be publicly known (Matheson, 2004: 445), newsworthiness is not constructed in a neutral or objective way, but depends heavily on the news outlet’s interests and values, political alignment, and market decisions related to circulation figures and advertising income (Bignell, 1997). This does not suggest that all editorial decisions are made cynically (McRobbie and Thornton, 1995) but that they are subject to competing interests beyond the criteria of newsworthiness.
It is therefore important to draw a distinction between the use of the concept of newsworthiness by platforms and others not traditionally involved in the production of news and the labour-intensive institutional processes by which it is traditionally determined. In the traditional determination, the editor and editorial meeting are involved in weighing up the risks and ethics of making a person, event or organisation newsworthy. That is, even though information may have come to a journalist about, say, a public figure that might meet newsworthiness criteria, an assessment of risk to the subject of the story, to the public, or to democratic processes is a key part of the decision-making process (Kitzinger, 1999). Editors of more sophisticated news outlets rely on a combination of instinct, journalistic ethics, experience and, where warranted, consultation to determine if coverage of something newsworthy will have unwanted or violent consequences on the subject of the story or on others (Miller and Williams, 1993). For example, stories of high-profile suicides, which are known to prompt suicidality among others (Blood et al., 2007), may meet newsworthiness criteria but might not be covered if the story breaches privacy, grieving or puts others at risk (Archard, 1998). Similarly, stories that may meet some newsworthiness criteria are sometimes withheld through the traditional gate-keeping process of institutional news production subsequent to government pressure if it is perceived to create a serious security risk (Boyd-Barrett, 1995) – a negotiation that reflects on a news publisher’s credibility and position in society (Rupar, 2007).
These perspectives on the ethics and process of newsworthiness, then, open important questions as to digital platforms’ use of the concept to justify the differential protections of public figures. While a public figure as a personage may meet some of the recognised criteria for newsworthiness (e.g. fame or novelty), it is not at all clear that a policy-level decision not to intervene in abuse or misrepresentation of public figures has undergone the rigorous process of risk assessment, ethical consideration and consultation involved in news production, nor whether moderators are equipped with the skills and knowledge framework to make editorial decisions regarding newsworthiness. Arguably, newsworthiness is misused by platforms to allow problematic content in line with business models that rely on the maximisation of user engagement through networked logic, algorithmically determined feeds and the orchestration of user-relevant content (Flew, 2021). As such, platforms may misread newsworthiness as content which entertains, sensationalises or shocks, without regard for the news production practices that traditionally protect individuals and societies.
The public interest justification
The second frame of justification for differential protection of public figures used by platforms identified above is ‘public interest’. Public interest is broadly understood as pertaining to the wellbeing of general society and a nation-state’s population or citizenry, often used to justify the decisions made by traditional news media (Napoli, 2019). It is a liberal-humanist concept that emerged first in the eighteenth century whereby the public was beginning to be understood not only as a population or count of people, but as a set of practices and beliefs that needed to be nurtured, protected and preserved (Foucault, 2007: 75). The concept was refined in the nineteenth century to incorporate liberal perspectives of utilitarianism, whereby the interests of the public are best served by the maximisation of the happiness of the greatest number (Mill, 1972). In the twentieth century, the concept was used as a guiding notion marking the regulation of media, such as the United States’ Radio Act 1927 and the royal charters that authorise the British Broadcasting Corporation and present its mission.
What ‘counts’ as public interest is not, of course, always genuinely in the public interest but may actively serve vested interests that come to stand in for the public (Berlant, 2007). Arguably, the deployment of a public interest justification by platforms is similar to that used in political rhetoric to justify other actions or failures to act, or by government media organisations that represent a racial dominance status quo rather than consider the public benefits of diversity (Murdock, 1992). Nevertheless, a more ethical use of the concept is familiar in media and communication, where it serves as a guiding principle for maximising objective reporting (Hallin, 1992), underpins a self-perception of news media as a fourth estate (Hartley, 2009), and is used by public relations and strategic communication personnel (Whitaker et al., 2004) and by creative industries and film and entertainment television producers (Cover, 2023) to determine balance that seeks to minimise harm.
In relation to more extreme online hate speech against minorities, the United Nations’ Special Rapporteur on Minority Issues (2022: 8) has argued against the misuse of ‘overly broad and ill-defined “public interest” exemptions’ that permit hate speech to circulate in violation of other content policies. What is indicated here is that several platforms use a concept of public interest selectively to exonerate certain policy exceptions, while actively eschewing any perception that they provide a public service, positioning themselves instead as openly for-profit institutions (e.g. Meta, 2022). Indeed, the higher threshold before intervening in the abuse or misrepresentation of public figures is arguably not in the public interest but maximises engagement, traffic and thereby profit through converting celebrity value into platform revenue (Su and Jin, 2022) in line with their for-profit mission. Extreme abuse that results in scandal is thereby tempered by the risk of advertiser withdrawal, rather than by a balancing of public interest. In this sense, we argue, public interest concepts are actively co-opted by platforms to excuse differential policies that increase revenue at the expense of the wellbeing of public figures or the condition of the digital ecology itself.
Ethics and the ‘value’ of public figures to platforms
Putting aside the market drivers that are most likely to be a platform motivation for permitting online abuse and other problematic content about public figures (Furedi, 2010), it is important to critique the newsworthiness and public interest justifications from the perspective of ethics by asking if there are indeed ethical frameworks that may permit a differential threshold for public figures.
In an important analysis, Candace Gauthier (2002) identifies three broad media-ethics positions related to how media treat public figures. The first of these draws on Immanuel Kant’s philosophy to argue that abusing or misrepresenting a public figure treats that person not as a full subject but as a ‘thing’ or a ‘means’, interfering with their rational choices over how they are represented and their choice to participate in a respectful media setting. Gauthier suggests that this ethical position is practised by weighing up potentially misleading or abusive content against the subject’s dignity. That is, it is neither news nor in the public interest to allow content that does not treat a public figure respectfully as a social subject.
A second ethical model is the traditional liberal-humanist position asserting John Stuart Mill’s utilitarian framework. For Mill, the harms and benefits of any action must be identified in order to determine the rightness or wrongness of the action, where wrong is understood as greater harms outweighing any expected benefits. In this context, harming a public figure through harassment or misrepresentation would be considered unethical unless it was possible to identify clear benefits to a wider public – with entertainment, audience pleasure in hatred or platform profits not being considered genuine benefits according to a liberal perspective (Cover, 2022b).
Finally, Gauthier identifies what she calls the Power Transfer model, in which public figures are seen as already having a greater power over their own representation, and in which user actions against them transfer some of that power to a wider audience. For example, public figures are sometimes perceived as having resources such as public relations and management agents, political party operatives, or supportive followers that help protect their representation and keep them from seeing direct abuse, in addition to the funds available to pursue legal remedies in ways not as available to everyday users, alongside the perceived financial, social and cultural benefits of being a public figure. A power transfer model affords a transaction of power from public figures to the public whereby the abuse or misrepresentation of public figures in ways from which everyday users are protected may be perceived as having a democratising or levelling effect.
It is disputable, of course, whether any of the above three models of ethics are knowingly deployed by platforms that differentiate protections for public figures, although the power transfer model may well be a spectral influence on platform policy. There are, however, two problems with this framework in the context of platform differentiations. First, there is limited indication that public figures, even those of means or with high-end support, have access to health and mental health resources that allow them to withstand substantially higher rates of abuse without remedy. Indeed, the suicide attempts and completions among public figures noted above indicate that their wellbeing resources are no greater than those of others.
Second, although journalists, politicians and well-paid entertainers may have greater financial or institutional resources alongside the social benefits of being a public figure, not all public figures are equally resourced or share in the same social status. Indeed, core to the problem with platform policy is an inadequate definition of public figures that lumps together many different professions and roles, both those in public service and those who have achieved fame through private means. Traditionally, public figures usually include politicians, journalists, sportsplayers, entertainers (actors, singers, authors, models), high-profile corporate executives, public servants, heads of charities, religious leaders and celebrities who achieved fame in other ways. The term ‘influencers’ is also often used as a broad label for Internet celebrities, given that they ‘make a living from being celebrities native to and on the Internet’ (Abidin, 2018: 75). Many are supported by employers, organisations, management agencies, political parties, television networks and the resources of others. Markedly different in terms of protections, resources, institutional support and social standing are the public figures who emerged from the 1990s onwards: reality television stars who were everyday people selected to become famous (Roscoe, 2001), ‘micro-celebrities’ who achieved public interest by growing a following through digital engagement and usually operate individually (Senft, 2013), and online influencers who extended micro-celebrity by attracting larger followings and the interest of corporate sponsors (Abidin, 2018). High-profile community advocates, such as Greta Thunberg who became internationally noteworthy for leading climate protests (Duvall, 2022), and accidental public figures who may have become famous for being the victim of online abuse in the first instance (Thompson and Cover, 2022), also fall into this category.
In this respect, no ethical framework adequately captures the vast range of personal circumstances, the extent to which being a public figure was actively chosen, or the resources available to individuals, and it is ethically untenable to apply a single standard to all people who may be understood as public figures. Platform policy creates a category of ‘public figure’ and risks applying a differential standard to everyone who might fall within it, without regard to specific circumstances. This cannot be accounted for in any of the above ethical frameworks, suggesting that platform differentiation of public figures may warrant new ethical approaches.
Conclusion
Public figures experience high rates of online abuse and harassment, and are at greater risk than everyday individuals owing to the professional requirement to be engaged digital users (including through self-promotion activities). Compounding this greater risk is the widely held perception that public figures are ‘fair game’. Online abuse has significant impacts on victims, whether they are public figures or private everyday individuals. Such impacts can include negative health and wellbeing outcomes, negative career outcomes, and withdrawal from public life. Online abuse can also lower the quality of public debate and contribute to the toxification of the digital ecology.
In this study, we examined the policies on online abuse against public figures across 31 digital platforms. Our findings show that definitions of what counts as a public figure are diverse and that the protections afforded to public figures differ across platforms. Some platforms, for instance, use a broad definition that encompasses anyone who has voluntarily entered public life, while others are unclear on what counts as a ‘public figure’. Some platforms make no reference to public figures in their policies or terms of use, while others state that everyone has equal protection against online abuse.
Newsworthiness and public interest are often cited by platforms as justifications for not regulating harmful content directed at public figures. We argue, however, that public figures should enjoy the same protections against online abuse as everyday users, both to minimise the risk of harm, including self-harm, and to promote an equality of digital citizenship. The concept of public interest, as applied to platform content regulation, is vague and subject to alteration, which makes its ethical grounding questionable. We argue that new ethical approaches are required to adequately address online abuse against public figures, with a focus on mitigating harms through clear, unambiguous policies and transparent practices. One limitation of the study is that we were able to examine only publicly available policies and terms of service; we do not report here on how online abuse against public figures is regulated in practice. More research is needed to understand the experiences and impacts of online abuse against public figures, and the extent to which platforms differentiate in moderating such content compared with content targeting everyday users.
