Abstract
Researching on platforms through platforms poses challenges to researchers, particularly when observing subcultures and content at the margins. Inspired by Massanari's essay on researching under the ‘alt-right’ gaze, this paper uses autoethnography to address the impact that the system of platform governance has on researcher vulnerability in data collection, persona management and results dissemination, particularly for researchers gathering data censored by platforms and for researchers constructing their public profile through digital media. My goal is to examine how the intersection of platform power, academic precarity and the creator economy affects researchers and academics. At the heart of this are the questions: How can researchers gather data, disseminate results and establish a professional profile under platforms’ all-encompassing gaze? What does platform governance and its focus on specific areas of control mean for researching content and users at the margins? What risks do platforms themselves pose to researchers’ work? And how does the broader precarity of particularly early-career academic work intersect with the effects of platform power? To this end, this paper discusses the increasing digital labour required by the ‘impact agenda’ and the difficulty of managing a researcher's online persona in an age of growing digital censorship. Sharing personal experiences of censorship in research, I define ‘the platform gaze’ as a gendered, raced, heteronormative and puritan surveillance that constructs a social reality in which marginalised individuals and dissent are both hyper-visible and vulnerable to harassment and silencing. I conclude with considerations on activist interventions in the platform governance field.
Introduction
In 2021, Meta, the tech giant and parent company of Facebook, WhatsApp and Instagram, shut down two New York University researchers’ accounts, preventing them from studying political ads and misinformation under the pretence of protecting user privacy (Hatmaker, 2021). Many, however, viewed Meta's move as a way to hinder the researchers’ transparency work, and the shutdown raised various concerns about platforms’ power over knowledge and their gatekeeping of information (Hatmaker, 2021), which affect researchers and users alike.
This essay addresses the challenges that researching on social media platforms through the same platforms poses to researchers, particularly when they are part of and/or observe subcultures and content at the margins, such as nudity and sex work. My goal is to examine how the intersection of platform power, academic precarity and the creator economy affects researchers and academics. At the heart of this are the questions: How can researchers gather data, disseminate results and establish a professional profile under platforms’ all-encompassing gaze? What does platform governance – or the regulatory dynamics that determine the freedoms, responsibilities and liabilities of platform companies (Tiidenberg, 2021) – and its focus on specific areas of control mean for researching content and users at the margins? What risks do platforms themselves pose to researchers’ work? And how does the broader precarity of particularly early-career academic work intersect with the effects of platform power?
These questions are urgent and important, because while platforms can offer extraordinary research opportunities, ‘their design and day-to-day functioning can impose constraints largely outside the control of researchers’ (franzke et al., 2020: 12), making them both a site of opportunity (i.e. work) and oppression in the form of abuse and surveillance, for users and researchers alike (Coombes et al., 2022). It is this lack of control, leading to new research challenges and, often, heightened risks, that I wish to address here, reflecting on platforms’ influence and chilling effects on research and society within an already opaque social media governance system influenced by systemic offline inequalities.
I am no stranger to platform censorship myself, both in a research and in a personal capacity: my social media accounts, essential to my research and, during some stages of my life, to my income, have been repeatedly censored, affecting my ability to continue my work and to support myself (Are, 2023).
The entities that my research participants have defined as ‘nameless masters’ of platforms (Are, 2024) have created plenty of risks and challenges during my own studies on platform governance and its effects on sex working and sex-positive communities.
This paper therefore situates my experience within broader platform governance literature, conceptualising it as researching under ‘the platform gaze’. Inspired by Massanari (2018), I define ‘the platform gaze’ as a gendered, raced, heteronormative and puritan surveillance, constructing a social reality where marginalised individuals and dissent are rendered both hyper-visible and vulnerable to harassment by users and silencing by platforms. Following this definition, I offer thoughts on how the platform gaze affects researchers gathering data subject to platforms’ rules and their construction of a professional profile through digital media. I highlight researcher vulnerability in data collection, results dissemination and persona management, which I problematise in connection with the increasing digital labour required by the ‘impact agenda’, or the quest for publications, promotion and visibility within the academy (Jerome, 2020).
I then share my personal account of how these challenges affected my work and research using autoethnography – a method effectively utilised in the study of platforms, inequalities and the academy (Are, 2023; Are and Paasonen, 2021; Elhinnawy, 2022; Hager and Peyrefitte, 2021; McMillan Cottom, 2015; Stanley, 2023), consisting of the interpretation and creation of knowledge rooted in the native context via tenets of autobiography and ethnography (Mitra, 2010) – to describe direct experience of social media moderation processes and of life in the academy. This is not to centre my experience, but to highlight how the censorship of specific communities and stigmatised topics and the withholding of access to vital information about the inner workings of Big Tech do not only affect users’ lives and livelihoods – they create a chilling effect on the production of knowledge and research too, making a handful of opaque, mysterious and powerful companies all the more powerful. I conclude with considerations on activist interventions in the platform governance field and with changes in the academy to mitigate researcher precarity.
Systemic issues within platform governance
Platform governance is both an area of research and activism, questioning ‘the implications and impact of platform features, functions and rules’ as well as ‘the international regulatory dynamics that currently delineate the freedoms, responsibilities and liabilities of platform companies’ (Tiidenberg, 2021: 2). Content moderation, or the practice of deleting and/or censoring online content, is a key aspect of platform governance, and an ever-evolving area of research (Gillespie et al., 2020). Through content moderation, social media platforms rule over the visibility of what is posted in their spaces, enforcing ‘community guidelines’ or ‘standards’ on the basis of which a blend of human moderators and algorithms are trained to make decisions (Gillespie et al., 2020; Kaye, 2019, etc.).
The way platforms govern content has had significant impacts on the freedom of expression, labour, network-building, activism and memory-making abilities of their users (Blunt and Stardust, 2021; Blunt and Wolf, 2020; Gillespie et al., 2020). While most social networks ban content that is against the criminal law of most countries (Goanta and Ortolani, 2021), they have also over-censored posts they found legal but potentially objectionable in order to protect their commercial interests, displaying overzealous compliance with recent legislation (see: Are and Paasonen, 2021; Blunt and Stardust, 2021; Blunt and Wolf, 2020; Gillespie, 2010; Kaye, 2019; Nolan Brown, 2022, etc.). Platforms’ human and automated moderation has been known to disproportionately target users at the margins, over-focusing on nudity and sexual communications instead of on violence (Are, 2024; Coombes et al., 2022), in response to a United States law known as the Fight Online Sex Trafficking Act (FOSTA)/Stop Enabling Sex Traffickers Act (SESTA).
The Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) and SESTA remove internet platforms’ immunity for their users’ criminal activities pertaining to commercial sex, consensual or not (Are, 2024; Coombes et al., 2022). Although aimed at fighting sex trafficking, FOSTA/SESTA instead made platforms over-censor content, deleting posts and accounts by sex workers, athletes, lingerie and sexual health brands, sex educators and activists worldwide to avoid being accused of facilitating trafficking, governing content posted around the world through US law (Bronstein, 2021; Haimson et al., 2021). After FOSTA/SESTA, creators at the margins became the targets of repeated surveillance and silencing, resulting in loss of earnings and significant emotional distress (Are and Paasonen, 2021; Are and Briggs, 2023; Are and Gerrard, 2023).
FOSTA/SESTA is a relevant case study not just in the governance of nudity and sex in online spaces, but also in platform power over words, images, expression and, ultimately, work. Platforms’ governance means content and profiles can always be removed at the flick of a switch – it just happened to affect nudity and sexual content first, allowing this power to be dressed up as online safety.
Since researchers are, often, users themselves, platform governance and FOSTA/SESTA affect them too, meaning that internet regulation and its subsequent translation into content moderation is not just a matter of freedom of expression – it is also a matter of work. The study of platform governance and the experiences of researchers in spaces governed by corporate companies such as social media thus overlap with new, platform-generated workstreams, such as the influencer or creator economy. The creator economy sees microcelebrities with large online followings make a living by creating content, building communities and promoting products and services through social media platforms such as YouTube, Instagram and TikTok, generating revenue for themselves and for brands (Cotter, 2023). Known as creators or influencers, these users must become ‘jack-of-all-trades entrepreneurs within a highly competitive industry, simultaneously videographers, editors, photographers, on-screen talent, brand ambassadors, merchandise producers’ (Glatt, 2022: 4) – abilities that, as we will see, modern researchers often must mirror to collect data, promote their studies and profile.
The creator or influencer economy is a ramification of the ‘gig’ or ‘platform economy’, an ‘insecure, often short-term or piecemeal, employment, frequently facilitated by a platform or app’ (Easterbrook-Smith, 2022: 3). This line of work is extremely precarious for those who rely on it (Duffy, 2020; Duffy and Meisner, 2022; Glatt, 2022, etc.), as it is strictly reliant on platform governance variables, and particularly on visibility through the accumulation of metrics such as likes, following and engagement (Banet-Weiser, 2018). In the creator economy, views, channel subscriptions and followings become essential to a user's legitimacy (García-Rapp, 2017). This quest for visibility greatly affects users’ lives and, as we will see, has an impact on research.
While previous studies explored the little control researchers have over platforms (franzke et al., 2020) and the risks and challenges of being visible to abusers in the process of researching (Massanari, 2018; McMillan Cottom, 2015), there is a gap in our methodological and ethical reflections, which have so far only touched on platform governance and its power over our work as researchers in passing. To address this gap I, like Massanari, briefly draw from the work of Foucault (1995) and Mulvey (1989), but largely base my reflections on creator economy research (e.g. Bishop, 2019; Duffy and Meisner, 2022; García-Rapp, 2017) and on work by sex worker activists and researchers (Armstrong, 2022; Blunt and Stardust, 2021; Blunt and Wolf, 2020; Coombes et al. 2022; Jokubauskaitė and Stegeman, 2022; Stardust et al., 2020, etc.). I use this practical lens to explore platforms’ impact on research because firstly, as the user populations most affected by surveillance, platform work precarity and censorship, sex workers and creators are valuable experts to cite; secondly, because academics relying on platforms as research and result dissemination intermediaries can find themselves in similarly precarious situations due to platform power.
Surveillance is ‘the negative side of information gathering, processing, and use that is inextricably bound up with coercion, domination, and (direct or indirect; physical, symbolic, structural, or ideological) violence’ (Fuchs, 2011: 126). Usually linked with the state, it can often be experienced during ‘asymmetrical power relations’ of any sort (Fuchs, 2011), meaning that platforms’ omniscient power over users’ content, contacts and lives can be conceptualised as a form of surveillance. Foucault (1995), too, stressed the strength of technologies as a tool for discipline and surveillance, with the surveilled acting as if they were always under scrutiny, disciplining their actions as a result. In turn, technological tools for discipline and surveillance become normalised and de-institutionalised, ruling numerous aspects of our daily lives without much government oversight (Foucault, 1995).
Non-conforming, raced and marginalised communities such as sex workers, Black, Indigenous, and People of Color (BIPOC), disabled, queer users, and those sharing nudity and sex are disproportionately affected by censorship (Blunt and Wolf, 2020; Browne, 2015; Haimson et al., 2021; Stardust et al., 2020), which can assume a gendered dimension, particularly towards individuals that express themselves through their bodies. Already used to conceptualise platforms’ content moderation approach, Laura Mulvey's (1989) male gaze theory, where the active male observes the passive female as an erotic spectacle, has been translated into platform governance by arguing that a largely male Big Tech workforce (Jee, 2021; Roberts, 2019) governs passive female bodies to be consumed but contained for viewers’ safety (Are, 2022). Browne (2015) adds a racial dimension to this gaze, arguing that old and new information technologies carry a ‘white gaze’, with a largely white workforce surveilling people of colour and providing information to institutions. Further, if recent content removals and oversight body decisions are anything to go by, this gaze is heteronormative, and views anything deviating from heteronormativity as harmful: as Meta's oversight body the Oversight Board (2023) argued when overturning the deletion of posts depicting non-binary and trans nudity, content moderation is often characterised by a binary view of gender that disproportionately harms and silences women, Lesbian, Gay, Bisexual, Transgender, Queer, Intersex, and Asexual (LGBTQIA+) and non-binary people.
This surveilling gaze overlaps with the quest for social media visibility in the creator economy, where hyper-visible marginalised communities on platforms are ‘punished and disciplined precisely when the spotlight falls on them’ (Banet-Weiser, 2018: 25), meaning that creators at the margins in particular find themselves targeted by invisibility in the shape of shadowbanning, and by hypervisibility to both harassers and platforms, triggering de-platforming and online abuse (Haimson et al., 2021; Stegeman et al., 2024). Indeed, platforms ‘employ surveillance technologies that screen for sexual content and nudity, share user information with law enforcement and advertisers and hold double-standards when assessing the explicitness of content created by lay users’ (Stardust et al., 2020: 6). This surveillance, justified as a means of keeping users safe particularly from the harms of sex trafficking, ‘has yielded few of its intended benefits for reduction of sex trafficking’ but has instead interfered ‘with the legal rights of sex workers and our collective abilities as citizens to control our own bodies and express our sexual selves’ (Bronstein, 2021: 370).
Surveillance has also been affecting researchers working and collaborating with sex workers and sex work-adjacent topics – or topics revolving around sex but sitting outside the commercial sex trade, such as sex education, sexual and gender expression, performance etc. – in the form of shadowbanning and de-platforming. For example, during Coombes et al.'s (2022) data collection and results dissemination, platforms’ targeting of sex work as both a profession and as a term to be removed or hidden affected the authors’ possibilities to access participants and to share their findings. Since ‘[o]n most major social media platforms, using terms like “sex worker” or “OnlyFans” appears to trigger content moderation protocols, putting the user's account at risk’, researchers themselves must jump through hoops, self-censor and avoid de-platforming and shadowbanning to continue producing knowledge (Coombes et al., 2022; Jokubauskaitė and Stegeman, 2022).
Researchers using platforms for work therefore perform their labour under a specific gaze: a blend of ‘male’, ‘white’, ‘alt-right’, cis, heteronormative and puritanical gaze (Browne, 2015; Coombes et al., 2022; Massanari, 2018; Mulvey, 1989) that I will now introduce as ‘the platform gaze’.
Defining the platform gaze
Massanari defined the ‘alt-right’ as groups of ‘White nationalist, Islamophobic, fascist, misogynistic, anti-immigrant, and anti-intellectual communities’ (Massanari, 2018: 2), and its gaze as ‘an amorphous networked community that gazes, illuminates, objectifies, and actively constructs a particular social reality’ with a ‘gendered and raced subjectivity’ (Massanari, 2018: 4). She argued that conducting research under the gaze of a ‘loud, angry, networked, and technologically proficient group of mostly White men’ (Massanari, 2018) in circumstances such as #Gamergate put individuals in marginalised positions such as women, people of colour and members of the LGBTQIA+ community particularly at risk. Further, ‘alt-right’ spectators benefited from an imbalance of power, remaining shrouded in darkness while their targets – for example, the people they harassed, but also journalists and/or researchers – were rendered visible (Massanari, 2018). This is consistent with McMillan Cottom's (2015) experience of being a visible researcher writing publicly online, something that, as a woman of colour, resulted in harassment and accusations about her supposed lack of expertise.
Yet this visibility, scrutiny and surveillance should not be identified only with malicious users, but with the platforms we use for research and for our personal life as well. Indeed, platform governance mirrors the characteristics of Massanari's ‘alt-right’ gaze, in the sense that their surveillance is gendered, raced and, I argue, heteronormative and puritan, constructing a social reality where marginalised individuals are rendered both hyper-visible and vulnerable to harassment by users and silencing by platforms alike (Stegeman et al., 2024) – platforms that originate from ‘the specific and rarefied sociocultural context of educated, economically elite, politically libertarian, and racially monochromatic Silicon Valley, USA’ or in socially conservative China (Roberts, 2019: 93–94). This vulnerability is, as Schoenebeck and Blackwell (2021) write, both platform-enabled and platform-perpetrated, therefore aided by and enacted through networked technologies.
Similarly to Massanari's ‘alt-right’ gaze, the platform gaze also works as an intimidation tactic, simultaneously isolating the targeted individual and giving them the impression of being at the mercy of an amorphous, undifferentiated mob. This means that the platform gaze feels very personal for its targets, who feel rejected and ostracised due to unchangeable characteristics such as their line of work, their appearance, their race, gender identity or sexual orientation (Are and Paasonen, 2021; Are and Briggs, 2023; Are and Gerrard, 2023; Haimson et al., 2021). Indeed, while the ‘alt-right gaze’ can silence researchers and users by driving them off platforms, the platform gaze can de-platform them altogether.
As users, researchers – and particularly those from marginalised backgrounds, or those researching on stigmatised topics such as sex work and nudity – are also subject to the platform gaze because visibility is both a key aspect of the creator economy and a variable researchers have to work around, both when gathering data about the creator economy and platform governance, and when disseminating study results.
On social media platforms, visibility is afforded or restricted by algorithms, ‘codified step-by-step processes’ which remain largely opaque to users and researchers alike (Bishop, 2019: 2590). As such, platform governance has been found to have a ‘top-down’ approach, meaning the power dynamic between social media companies and their users is unbalanced: it is platforms that hold the reins of and the information about how their spaces are run, while users’ work and even presence online remains precarious without this knowledge (see: Are, 2022; Bishop, 2019; Cotter, 2023; Duffy, 2020). As a result, professional content creators who earn a living through platforms do not know how their content will perform, if it will be seen by others and if it will bring them further work opportunities, leading them to share ‘algorithmic gossip’ (Bishop, 2019) or ‘folk theories’ (Eslami et al., 2016) in the hope of being seen by others. Similarly, researchers working through platforms cannot fully control their workspaces: they do not know whether their accounts will be revoked due to their research, whether platform processes will make their research invisible – affecting their data collection abilities and their chances of publishing and of building a career and academic persona – or whether they may be sued for using platforms to collect data, something that platforms’ community guidelines hint at under the pretence of protecting users’ privacy (Sandvig v Barr, 2020).
It is in this scenario that my experience is situated and that, as I will explain below, the platform gaze has affected my work and profile.
Researching stigmatised social media topics through platforms
This is where it gets personal, and by personal I mean autoethnographic.
Elhinnawy writes that ‘[a]utoethnography brings the personal experience, the concrete facts, and an emphasis on storytelling to academic scholarship’ (2022: 56), but that since it is often disputed as an overly self-centred method, it needs a critical dimension to truly elicit change. To be critical, autoethnography needs to be engaged in ‘becoming’, to help ‘envision ways of embodying change’ and capture the cultural and institutional contexts in which personal experience is examined (Elhinnawy, 2022; Jones, 2016).
Autoethnographic approaches have been used by sex workers and those adjacent to sex work (Are, 2022; 2023; Jokubauskaitė and Stegeman, 2022; Stardust et al., 2020), women, and particularly women of colour (Elhinnawy, 2022; Hager and Peyrefitte, 2021; McMillan Cottom, 2015; Stanley, 2023), as well as queer people (Malta, 2022; Schramm et al., 2022; Weatherall and Ahuja, 2021) to examine experiences in and outside the academy beyond the recounting of facts and feelings, towards sharing events steeped in the cultural, technical and institutional processes that trigger such facts and feelings, making them worthy of research. In this sense, autoethnography provides a unique tool for researchers to critique the impacts systems like White supremacy and the heteropatriarchy have on academia, platforms and their governance and, by extension, on them, bringing such experiences to light and normalising them for the broader worker and user base. According to Ahmed's feminist approach to research (2017), theory and daily experiences can be interrelated, making the personal political. While showing vulnerability does constitute additional emotional labour for academics coming from non-dominant groups (Elhinnawy, 2022; McMillan Cottom, 2015), sharing our experience does not just highlight unique situations developing in our field – it can also broaden the breadth of experience and emotion tied to what constitutes a professional or expert in the academy.
Although I come to the study of social media governance with the significant privilege afforded by being a white, cisgender, university-educated bisexual woman with no lived experiences of sex work, posting sex work-adjacent content and researching on the moderation of nudity and sex work has meant facing significant online and offline challenges as a user and as an academic alike. These experiences have shaped me as a researcher and as a person and will therefore shape this paper. Similarly to Coombes et al. (2022), I argue that these instances of censorship make me an expert of my own experiences, which are applicable to various user populations and therefore worth addressing.
As the first in my family to pursue a doctorate, and as an immigrant to the United Kingdom, I did not have what Stanley (2023: 417) calls ‘the connections, processual understandings, and the resultant confidence that others unthinkingly enjoyed’ in the academy, something that affected my ability to receive funding for my PhD. Therefore, throughout my doctorate, for which my institution awarded a fee-waiver but not a scholarship, I supported myself through a variety of jobs, including teaching online and offline pole dancing classes and being a professional content creator through my blog and through Instagram and TikTok. Although physically exhausting, teaching pole gave me enough time, flexibility and mental energy to make my own schedule and to continue my doctorate, mitigating the precarity of my situation.
During the second year of my PhD, Instagram started using ‘shadowbanning’, a light censorship technique that hid content from the platform's Explore page without informing users, to moderate pole dancers’ posts and hashtags (Are, 2022; Are and Paasonen, 2021). Because of shadowbanning, I soon realised I was in a unique position to develop knowledge in a growing but then under-researched area, and to make an impact on the experiences of fellow censored users. Indeed, as a platform governance researcher, I had the means to contextualise what was happening. As a blogger and former public relations and social media strategist, I had the professional knowledge to be an intermediary between Instagram and my pole dancing followers. Importantly, I had not just observed content creation and censorship for research – I knew it in practice: my Instagram profile was deleted and repeatedly shadowbanned throughout 2020 and 2021; my TikTok account was deleted four times in 2021, three times in just one week (Are, 2023). On the back of these experiences, I became part of networks of censored users who were not being listened to by platforms or the mainstream media (Are, 2022; 2023).
Despite my privileged position as a researcher, my experiences with platform governance mirrored those of censored users: faced with no transparency from platforms, I found myself reverse-engineering their moderation by stringing together my experiences of censorship to help others in my situation and demand better communication (Are, 2022), not unlike Bishop's (2019) beauty bloggers exchanging ‘algorithmic gossip’ to thrive under algorithmic management's opacity, or Eslami et al.'s (2016) ‘folk theories’ about curation algorithms, developed by users to plan their behaviour under algorithmic uncertainty.
These experiences of precarity in my academic and platform work meant I often chose my research methods not only according to which methodology was more appropriate to my studies, but also according to what was available to me in terms of time and resources, having started to observe platform governance ahead of securing a research post. Platforms were my means of data collection, as they were largely free to use when observing my own experiences. Just like any other user, I had to post about my research by acting within their community guidelines and opaque governance to share calls for participation and my findings. My personal experiences as a user therefore blended with my work experiences as a researcher, generating specific risks and challenges borne out of academic and platform work precarity.
One of these challenges was surveillance, both by platforms and by users. It meant losing my content creator and researcher accounts – a source of income before I became fully employed – to automated, unexplained platform decisions and to swathes of negative comments and reports by other users, who believed my Instagram and TikTok pole profiles to be inappropriate and flagged them to the powers that be (Are, 2023).
This powerlessness, only reversed after news articles about my deletions written by contacts within the media and support received from contacts within Big Tech (Are, 2023), had both short-term and long-term consequences: in the short term, it affected my ability to post about my research and to support myself financially through my social media profiles; in the long term, and to this day, it creates a deep sense of unease about my reliance on platforms as a work and research tool – something I still find necessary because of years of networks and trust built among specific, unheard communities.
Platform governance also affected my data collection abilities, particularly when I began gathering participants’ experiences through a qualitative survey for a study on the de-platforming of sex. At the time, I found my call for participants and, as a result, my profile's visibility hit by different degrees of censorship. Having circulated an anonymous qualitative survey through my own social media profiles on Facebook, Twitter, Instagram and TikTok, I found that my call for participants received significantly reduced views on both my TikTok and Instagram profiles compared to my usual posts. Further, Instagram's automated moderation flagged my Qualtrics survey as a ‘dangerous link’, likely affecting the number of users who, already under threat of de-platforming and hacking, may have decided not to complete it for fear of losing access to their account. My survey link, often shared by sex working and sex-positive communities, may have been mistaken for spam or ‘sexual solicitation’ by Instagram's automated moderation, causing a chilling effect on my ability to gather data (Are, 2024).
Following my call for participants, which relied on regular posting, direct messaging and requests to share my survey, both my Instagram and TikTok profiles also experienced a drop in engagement akin to shadowbanning, so much so that even users who followed me claimed they were not seeing the content I posted. Although it is often difficult to ascertain whether an account is shadowbanned (Are, 2022), it would be fair to say that, given platforms’ existing ostracisation of sexual content and communication, both the topic of my call for participants and the support it received amongst communities they notoriously censor may have resulted in further shadowbanning.
These are just a few examples of the lack of control that researchers who work and recruit through platforms face due to their governance. However, my challenges did not stop here: aside from platform-related challenges, I faced academic challenges too, namely due to my choice of method, or my approach to ‘reverse engineering’ platform governance, often deemed not rigorous enough by journals. The irony is not lost on me that while my first autoethnographic paper was initially rejected by three journals and put through (welcome and necessary) ethical scrutiny and peer review for over a year, a study written by a man and largely based on masturbating to comics depicting underaged boys was published with nothing more than a sentence acknowledging its potential ethical issues (Poole, 2022). Similarly, my research topic's proximity or adjacency to sex work has at times resulted in peer review disagreements on an ideological basis, and perhaps in a year and a half's quest to find a postdoc. Meanwhile, my preference for focusing on user experience in the face of platform opacity often resulted in desk rejections due to reviewers’ preferences for statistical data about content moderation – data that, precisely due to platform opacity, researchers are often not privy to.
At this juncture, it therefore becomes necessary to discuss how the systemic inequalities of the platform gaze and power over users and society intersect with and influence researchers, and early-career researchers in particular, in the neoliberal university.
The researcher as user: visibility and the ‘impact agenda’
The circumstances in which researchers’ work has come to be affected by platforms are not isolated to Big Tech and social media, but are part of a wider shift towards neoliberalism in the academy. Neoliberalism has been identified as the early 21st-century, market-based governance characterised by ‘practices such as privatization, commodification and commercialization, as well as mobility across borders and authoritarian surveillance of workers everywhere’ (Hager and Peyrefitte, 2021: 2). These practices have been integrated into higher education through the introduction of metrics, competitiveness and the measuring of performance in universities, starting in the 1980s in the UK, US and Australia and then extending to the rest of the world. They have led academics to internationalise their work and constantly churn out research, while bearing the pressures of increased teaching and administration loads and academic precarity (Hager and Peyrefitte, 2021), in order to drive student recruitment, the main source of university income after governments’ continuous underfunding of institutions (Hearn, 2015). In McMillan Cottom's words, this ‘academic capitalism’ that ‘promotes engaged academics as an empirical measure of a university's reputational currency’ forces us to adopt techniques and practices that keep us in line with the aims of the neoliberal university (2015: 2), where burnout and overwork are rife, particularly for early-career researchers (ECRs) juggling several jobs and unpaid commitments aimed at securing the metrics necessary for a permanent position (Stanley, 2023).
One of the ways in which many researchers, particularly ECRs, and I have tried to keep up with the neoliberal university is through visible ‘impact work’, or what McMillan Cottom calls ‘academic public-ness’ (Stanley, 2023). This has been critiqued as a form of marketing of academic knowledge to the detriment of those unwilling or unable to commodify their work, but also praised as a way to make knowledge more accessible (Stanley, 2023; Berman, 2011). And while digital visibility may indeed allow for the broadening of knowledge outside the ivory tower, in the neoliberal university, impact can be shorthand for status, a transformation of research into capital for institutions that separates academic celebrities from those caught in the teaching and marking slog (Berman, 2011; Slaughter and Rhoades, 2004). This publicness and ‘impact’ now often happen through social media platforms.
On social media, researchers navigate their online presence as scholars and users simultaneously, and while they may not necessarily be earning a living through platforms, content creation and community engagement may be at the centre of their work, for example, in data gathering and results dissemination, and certainly in profile building, particularly on the back of what is known as the ‘impact agenda’. Wróblewska (2021) describes ‘impact’ as ‘something scientists have always done – interacting with society to offer solutions to real problems’. But according to her, ‘What the impact agenda did, was to give a name to this phenomenon, along with a definition and evaluation criteria’ (Wróblewska, 2021). ‘Impact’ and its related agenda mean different things to different researchers: seeking status through the publishing of peer reviewed academic journal publications; achieving permanent positions such as Professorships or Head of Department titles; teaching and imparting knowledge; and making a positive impact on society through ‘real-world’ impact – for example, translating research into new practices, business efficiency, organisational change, societal benefit, government policies or public awareness (e.g. Jerome, 2020).
In this sense, just like in industries such as journalism and the arts, making an impact in academia can assume a social media dimension, requiring researchers to achieve visibility – or, at the very least, requiring them to consider their degree of privacy and publicness depending on their field. This can lead certain academics to become what Marwick and boyd (2011) call a ‘microcelebrity’, creating and establishing a persona that is then performed through the illusion of closeness with ‘fans’. However, this public engagement itself is not equal: for McMillan Cottom (2015), the burden of the deinstitutionalisation of knowledge often falls on the same academics trying to achieve stability in the academy while coming from marginalised backgrounds. These academics, often from first-in-family, queer, working class and/or non-white backgrounds, may already have faced unconscious bias against them in the academy and/or have lived through trauma (Malta, 2022; Weatherall and Ahuja, 2021), and they may come from a more precarious, less connected position, meaning they may be more in need of proving impact and, therefore, more reliant on platforms and their connected precarity and hypervisibility (Stegeman et al., 2024).
The quest for impact and visibility in the academy cannot therefore be separated from precarity: with major PhD funders being forced to increase doctorate stipends by 10 percent after a backlash during the 2022 cost of living crisis (Inge, 2022), and with the ‘personal and financial pressures, and a narrowing of employment options’ faced in particular by early-career academics (Spina et al., 2022: 546), social networks ‘often provide essential professional and social support’ as well as a self-promotion tool (Massanari, 2018: 3).
In this crowded and precarious professional scenario, early-career academics find themselves having to juggle research work with the trappings of the impact agenda not just to ensure their research has value, but also to build enough of a profile to be considered for postdoctoral and/or permanent roles. This was certainly my case: it was mostly thanks to my visibility and online network that I could publish the papers that led me to my current role, and that I could support myself financially during my lengthy search for a postdoc.
Further, early-career academics often find themselves supplementing inadequate research stipends and precarity with other, non-academic work (Spina et al., 2022) – some of which can also be reliant on forms of social media content creation and promotion (Are, 2022). This means that, like users, they work through platforms they have very little control over, striking a balance between ‘good’ visibility – for example, impact, public awareness, an established profile – and ‘bad’ visibility – for example, harassment, targeting and censorship (Massanari, 2018).
However, as argued by McMillan Cottom (2015: 2), ‘microcelebrity and attention do not operate in the same way for all status groups’. Indeed, working through platforms is, for Massanari (2018), both a risk and a challenge, particularly for researchers from marginalised communities, untenured researchers or researchers on the job market, who find themselves hyper-visible to both harassers and platforms, replicating the ‘hyper(in)visibility’ highlighted by the marginalised content creators interviewed by Stegeman et al. (2024). For instance, in her autoethnography of her experiences as a scholar of colour and an academic microcelebrity, McMillan Cottom (2015) argues that, in the academy's increased call for researchers to publicly engage, different backgrounds will trigger different results in the interfacing with multiple publics, citing white women's experience of rape threats and her own experiences of having her expertise questioned by harassers in her audience as characteristics of the racial split in online harassment. At the same time, with LGBTQIA+, sex working or sex work-adjacent, and non-white users being often targeted by platform censorship (Haimson et al., 2021), researchers from those backgrounds and/or focusing on such research topics may find themselves being censored by the same platforms they rely on for impact and, therefore, for academic visibility (Jokubauskaitė and Stegeman, 2022).
Having already experienced content and account censorship, I myself am finding that the stakes of platform surveillance and censorship are becoming higher the further I go on in my research journey: not only do I risk losing what once was my main source of income and a significant means of memory-making and network-keeping but, were I to be de-platformed again, I would also lose my main tool to gather data, disseminate my findings and raise awareness of my work, alongside my reputation as a highly networked, ‘influencer researcher’ – a brand that relies on my following.
Additionally, while researching on platform governance is, in Tiidenberg's (2021) words, both an area of study and of activism, academic publicness in the neoliberal university carries with itself a certain political correctness and an apolitical approach to remain marketable (McMillan Cottom, 2015) and, in my case as a platform governance researcher, to be palatable not just to universities and academics, but to the opaque platforms whose workers I need to keep having access to in order to maintain that publicness and opportunity for impact. This constant juggling of audiences, to which I add the activist and creator participant networks that are so crucial to my research, adds further strain to an already precarious working situation.
Even more importantly, what my experience with Instagram flagging my survey as a dangerous link shows is that even by sharing an apparently innocuous link to a survey or call for participants post, I may be putting users who are already disproportionately affected by de-platforming at further risk of censorship. Managing this risk, my relationship with participants and with different publics is a challenge that platform governance researchers need to address.
Ethics of social media research under the platform gaze
The reliance on social media platforms and their powerful gaze calls for new ethical approaches to research, not just in liaising with participants, but also in presenting our public research personae online and even in the peer-reviewing and hiring process.
With regard to participant relations, it is important to reconsider online informed consent, especially when gathering content through application programming interface scraping of semi-public data or gathering posts and profile information which are already public due to terms of use agreements (franzke et al., 2020), and which should be accessible to researchers, as their use towards academic studies does not constitute criminal activity or unfair use of data (Sandvig v Barr, 2020). While obtaining informed consent may be challenging, it is essential to plan for the fact that collecting data and posts from groups that have notoriously been subject to the platform gaze, such as sex workers and users posting nudity, might further endanger them by making them more visible to platforms and/or dangerous audiences. We have no power over the directions our studies may take under the platform gaze: as such, even with informed consent being implied in posts that are already public, it is worth at least anonymising their authors and/or anything that may identify them to protect users from platforms’ retaliation in the case of virality (franzke et al., 2020). More work is therefore needed to put participants – especially marginalised, over-researched participants – at the centre of creating new ethical guidelines, by allowing them to either consult on the most appropriate ethics in the studies they are involved with, or by contracting them to consult on the creation of new industry guidelines.
Furthermore, the Association of Internet Researchers (AoIR) stresses the importance of protecting researchers as much as informants and participants (franzke et al., 2020). Yet researcher risk is still viewed as something static, occurring only during the research process. For Massanari, however, risk is not just ‘something that occurs in a particular time and place’, because our field is not ‘a static place that we go to, collect data from, and leave’ (Massanari, 2018: 5). Internet researchers use platforms as a research tool and field, but also as a personal and administrative life tool. Therefore, their protection cannot just be about avoiding a ‘strong ideological reaction’ such as death threats, ‘doxing’ or identity theft (franzke et al., 2020: 11), but also about making sure that carrying out their research does not make them hyper-visible to the platform gaze, potentially excluding them from society if de-platformed. It is therefore necessary to make room for new, covert methodologies that can grant further protection from both users’ abuse and platform surveillance.
Researching under the user and platform gaze has shown the benefits of qualitative methodologies such as ethnographies, autoethnographies, diary studies and the like, but it has also run up against ethics boards’ and peer reviewers’ unease with covert research, which is essential to researcher safety (Massanari, 2018). Further, while discussions around ethics have largely focused on data collection, they have overlooked peer-reviewing, publishing, and certainly workplace politics. And indeed, digital methodologies rooted in the personal have proven to be controversial with reviewers, who have shown unease with the validity of digitally ethnographic and autoethnographic methods (Poole, 2022), even though these are often the only methodologies available to unpack opaque platform governance, especially from the precarious and under-funded position of unaffiliated, independent and/or early-career researchers (Are, 2022).
I am not attempting to argue that covert methods or methods rooted in the personal should go without scrutiny. Indeed, the recent publication of the aforementioned study largely based on masturbating to comics depicting underaged boys shows the damage that inadequate peer-review can do to the credibility of ethnography and qualitative research (Poole, 2022). However, the fact that methods may be rooted in the personal does not make them less valid, particularly in dealing with the platform gaze: with little to no platform communications and transparency about their governance processes, focusing on the personal experience of users or, even, of the researcher as a user recognises the content moderation and algorithmic expertise of those affected by it (Bishop, 2019), something too often belittled by platforms (Cotter, 2023).
The ethics of peer-reviewing under the platform gaze are therefore strictly linked with the ethics of accepting new modes, participants and topics of research, particularly within the sex and the sexual. Ethics are indeed often weaponised against ‘controversial’ topics, allowing higher education to ‘continue to marginalise other ways of knowing’, contributing to the perpetuation of injustices (Simpson, 2022: 27). While ethics are often strictly discussed to address power imbalances between the researcher and the researched, Simpson argues that this leaves ‘other forces of patriarchal power unquestioned, namely, that of “ethical regulation” and Research Ethics Committees’, meaning that ethics are misused to police the production of knowledge and to exclude specific topics and researchers – for example, focuses on gender, sexuality and colonialism (Simpson, 2022: 26). Simpson adds that we cannot solve social problems without broadening the experiences and topics reflected in our field.
A primary example of the policing of knowledge is the unease or even stigma towards sexuality research, which Irvine (2014) has argued academia views as ‘dirty work’. Drawing on Hughes’ concept of ‘dirty work’ (1958, 1962) – work that society disavows even while considering it an essential form of labour – Irvine argues that sexuality researchers have struggled to gain legitimacy, particularly through conventional steps such as ‘publishing, funding, and ethical regulation’ (Irvine, 2014: 633), because sexuality research is constructed as ‘dirty work’. This is particularly relevant when it comes to research on sexual labour and its surrounding spheres.
Sex work researchers and sex workers within the academy have written about experiencing stigma, or being marked as ‘other’ due to their attributes or characteristics, while carrying out their research: examples cited exposed challenges by universities’ ethical approval processes and comments from colleagues who belittled their work (see Ahearne, 2015; Hammond and Kingston, 2014; Simpson, 2022). This is, for Simpson, indicative of academia's whorephobia – or ‘the hatred, disgust and fear of sex workers – that intersects with racism, xenophobia, classism and transphobia’ (Simpson, 2022: 20).
Hammond and Kingston define this experience as ‘stigma by association’, where the existing stigma against sex workers, which excludes them from opportunities, public services and public life, is spread onto scholars because of their research area, echoing some of the ways that stigma affects sex workers (Hammond and Kingston, 2014: 330). This has resulted in sex work research being dubbed as invalid, inaccurate, unprofessional and even dangerous, and in assumptions being made about researchers’ sexuality, promiscuity or, indeed, that they have worked in the sex industry by default due to their area of research (Hammond and Kingston, 2014; Simpson, 2022).
The stigma experienced by sex work researchers, and by sex work adjacent individuals posting on social media like me, pales in comparison with that experienced by sex workers themselves, making the academy an unwelcome place for anyone with a sex working past (Armstrong, 2022; Simpson, 2022). As Simpson stresses: ‘Nevertheless, carrying out sex work research can elicit whorephobic effects’ (Simpson, 2022: 20). Therefore, questioning, reflecting on and challenging this stigma ‘when carrying out research considered in any way “controversial” helps to expose and work towards overcoming harmful practices and misuses of power that limit knowledge production’ and, I would argue, fair working practices (Simpson, 2022: 19). Indeed, the stigmatisation of specific areas of knowledge production goes hand in hand with academic labour politics: sex workers have become an over-researched population, but their own research is not cited, and they are not paid for their expertise (Hall, 2022). Academia is de facto inaccessible to sex workers, leading the knowledge it produces to either further stigmatise sex work, or exclude the expert viewpoints of those who perform that labour.
Therefore, discussions around updated ethics should also encompass the way academics, and particularly early-career academics, can access and perform labour under the platform gaze in an increasingly precarious industry that tends to exclude or frown upon certain demographics. Tacitly, the ‘impact agenda’ has meant that researchers looking to impress funders and employers, or merely looking to build a profile to disseminate their research, must engage with social and mainstream media, effectively doing content creation, public relations and social media marketing for their research and their persona. This, alongside challenges surrounding visibility and institutional dislike of sexuality research, adds a further layer to their precarity: a dimension of additional, time-consuming digital labour, which is often not supported by institutions.
The added digital labour of having an online presence may include various costly and time-consuming tasks to be performed regularly: purchasing equipment and software for shooting and editing different forms of media; subscribing to website hosting; copywriting and editing; media and social media monitoring; and media relations. Yet it is often not institutions that take care of this aspect, which may be crucial to both researchers’ career advancement and institutions’ reputation: the burden of building and maintaining a profile rests entirely on the researcher's shoulders, even more so if they are early-career and unaffiliated. In my experience as a PhD student, for instance, I regularly engaged with mainstream media and social media, securing a variety of speaking opportunities as a result; yet, despite earning my former institution mentions in news outlets ranging from Vice to Dazed, from the MIT Technology Review to the British Broadcasting Corporation, this was mainly my own work rather than that of my university's public relations department.
Although this digital labour has, eventually, proven to be extremely beneficial to my career, it relied on my previous expertise as a PR account director, and on hours of unpaid labour in the hope that certain mentions, or certain viral pieces of social media content, would help me secure a more permanent position. And as the news agenda is fickle, dependent on breaking news, I would sometimes spend hours filming with or talking to journalists only for my story to be dropped.
It is this awareness – often mediated through the same platforms, which may take it away at the flick of a switch – that has helped me gain a profile in social media research. But this power over one's time, and this consuming labour of creating content and maintaining media relations, can be entirely inaccessible to researchers who do not have PR expertise, who are forced to work multiple jobs to survive in the precarious academy, or who have indeed been de-platformed.
Conclusion: challenging platform power in academia
This essay has addressed some of the challenges of researching under the platform gaze, which I defined as gendered, raced, heteronormative and puritan surveillance, constructing a social reality where marginalised individuals and dissent are rendered both hyper-visible and vulnerable to harassment by users and silencing by platforms alike. In highlighting the platform gaze and its power over academics, I have also reflected on how such power is wielded and made stronger thanks to the inherent precarity and worker inequalities generated by academia as a workplace, and by the neoliberal university's focus on the impact agenda.
I now conclude with pointers to address the risks and challenges mentioned in this paper: the chilling effect of doing research under the platform gaze, particularly after FOSTA/SESTA; the need for new ethics to protect researchers as much as participants from both malicious actors and the platform gaze; the welcoming of new methodologies; and challenging the academy's unease towards sexuality research.
Developing research networks on platform governance
Similarly to Massanari (2018), who recommends adopting an ethics of care perspective which considers the ‘alt-right’ gaze, I argue that collectively, the academy should work from an ethics of care perspective under the platform gaze. This includes: being honest with students and colleagues about the risks they face when studying platform governance and carrying out research on platforms through platforms; acknowledging that platform-enabled and platform-perpetrated harms will disproportionately affect marginalised and vulnerable populations (Massanari, 2018); and putting measures in place for institutions and more senior academics to provide public and private support to those who may be targeted and/or affected by precarity. As such, the creation of research networks for researchers studying platform governance – and particularly the governance of sex, nudity and sex work – is essential: while the Association for Computing Machinery and the AoIR have created helpful ethics guidelines for professional and research practice (franzke et al., 2020; Massanari, 2018), researchers working under the platform gaze can greatly benefit from the real-time, experience-based support of a research network, particularly when researching stigmatised topics.
Engaging in research advocacy
Both Massanari and Jokubauskaitė and Stegeman recommend that those who are comfortable with it should engage in research advocacy ‘with technologists, legislative bodies, and law enforcement’ to improve their working conditions (Jokubauskaitė and Stegeman, 2022; Massanari, 2018: 6). Precisely because researchers face precarity similar to that of content creators, they are in a unique position to highlight the precarity of platformed working conditions as well as the chilling effect that the platform gaze can have on art, research, expression and activism. Thanks to their governance expertise and personal experience, researchers can raise awareness of issues related to how platforms are run with different stakeholders (Are, 2023).
Managing agendas
Researching under the platform gaze means managing different agendas: platforms’ own PR and economic agenda, the academic ‘impact agenda’ but also, crucially, the agendas of journals, peer reviewers, funders and institutions who may disagree with specific research focuses. As such, navigating academic work becomes a complex game of hopscotch, trying not to fall when balancing the interests of different stakeholders. Therefore, more rigour is required, particularly in peer-reviewing, editing and hiring committees, to balance out views that show distaste towards certain methodologies or topics with differing opinions: as Simpson (2022) argued, we cannot allow ‘ethics’ to be weaponised to police knowledge production and inclusion in the academy.
Training for media and social media literacy
Institutions’ communications departments should provide more support and training, particularly for early-career academics wishing to promote their work, in order to reduce the unpaid digital labour they must perform to secure visibility. Separately, consulting and intervention are needed in the academy to bridge the divide between outside industry and academics, providing training not just through designated staff members, but through industry journalists, social media marketing and PR professionals, to maximise learning, minimise unnecessary labour and highlight the potential for researchers’ work to be misrepresented within media agendas.
Talking about researching under the platform gaze inevitably strings together issues of political internet governance, stigma against marginalised communities, discomfort with change and labour precarity. As such, an essay like this cannot aid researchers without systemic interventions to tackle these social and political issues. Therefore, questioning power imbalances not just in the relationship with platforms, but in society and in the academy itself, becomes a crucial part of mitigating risks for researchers and users alike.
Acknowledgements
The author would like to thank the Centre for Digital Citizens and Northumbria University for mitigating some of the challenges addressed in this article.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship and/or publication of this article: This work was supported by the Engineering and Physical Sciences Research Council (grant number EP/T022582/1).
