Abstract
This study builds on theories of user imaginaries by examining how LGBTQ+ creators in Montreal, Canada and Berlin, Germany respond to perceived algorithmic bias. Through observation and close reading of creators’ Instagram content, the study finds that expectations of discrimination based on sexual and gender identity, embedded in geographical and sociocultural contexts, shape these users’ understandings of threats posed by algorithmic governance. Findings also identified three main responses to perceived algorithmic bias: direct calls for engagement, strategies for eluding algorithmic surveillance, and adaptation to presumed algorithmic parameters. Rather than giving up or leaving the platform, users responded with participatory resignation: an expectation of algorithmic bias, informed by past experiences of identity-based discrimination, paired with a determination to negotiate such bias in order to endure on the platform. Thus, this article contributes a novel comparative analysis that expands conceptualizations of algorithmic imaginaries while revealing how resignation is mobilized as resistance to algorithmic governance.
Introduction
Users have long developed folk theories to explain how technologies work when the nuances of functionalities and their consequences are not completely clear (Ytre-Arne and Moe, 2021). Although sometimes dismissed (Cotter, 2021), the folk theories of people facing historical and present systems of oppression warrant attention, as they are often canaries in the coalmine for how new policies and automated regulation affect and curtail user behaviour. Increasingly, platform algorithms impact LGBTQ+ 1 people's activity and visibility through automated sorting, categorization, filtering, and recommendation systems (Myles et al., 2023). As an expanding number of everyday users feel pressure to self-brand and optimize their visibility on social media (Bishop, 2023), LGBTQ+ users become cultural producers with the need to develop an understanding of algorithms and how to negotiate them.
This article contributes to the growing scholarship into user imaginaries, especially research developed in relation to Bucher's (2017) conceptualization of ‘algorithmic imaginaries,’ to consider how LGBTQ+ identities and geocultural factors shape the development of such imaginaries. As scholars consider the influence of platform affordances, governance, political economy, norms and practices, we offer this comparative study – an approach that is still relatively rare in social media research (Matassi and Boczkowski, 2021) – that provincializes two Western contexts by considering similarities and differences in algorithmic imaginaries across locations. This study examines the content of LGBTQ+ Instagrammers located in Montreal, Canada and Berlin, Germany, to identify how they understand, and respond to, perceived algorithmic bias through algorithmic imaginaries. Perceived algorithmic bias arises as users attempt to explain differentially applied moderation, filtering, and curation that lead to reduced engagement (‘shadowbanning’), a task requiring imaginative speculation because platforms’ underlying logics and mechanisms are unclear. Instagram posts and Stories were collected and analyzed through an ethnographic observation of the social media content of LGBTQ+ individuals whose social and financial goals were tied to their visibility, such as drag performers and artists.
Findings revealed that these users’ algorithmic imaginaries were structured by expectations and past experiences of discrimination in predominantly cisheteronormative digital and social spheres. Specific geographical and cultural considerations also shaped these users’ imaginaries concerning who they felt was most threatened by algorithmic bias and its impacts on visibility. Similar to other studies of platformed cultural production (e.g., Duffy and Meisner, 2023), we found that users acted upon these algorithmic imaginaries, calling on followers for greater engagement, attempting to elude moderation, and adapting to presumed algorithmic parameters. However, our focus on LGBTQ+ users illuminated how these individuals engaged in what we conceptualize as participatory resignation, constituted by the acceptance of algorithmic bias as a precondition for platform use – cultivated through longstanding resilience in the face of stigma – and the determination to circumvent or negotiate it as integral to enduring platform participation. Therefore, this article contributes to the study of platformed cultural production on multiple levels, as it provides a comparative case study, identifies the role of sexual identity and geocultural subjectivity in algorithmic imaginaries, and makes sense of the approaches to perceived algorithmic bias that enable continued presence on a platform. The following sections provide background literature regarding LGBTQ+ people's negotiation of platforms and an overview of theories concerning algorithmic imaginaries alongside research into creators’ mobilization of collective imaginaries. Then we contextualize Montreal and Berlin regarding distinctions and similarities in their LGBTQ+ scenes before detailing the study's methods. The article's findings and discussion sections provide examples regarding the role of sexual identity and geocultural positionalities in understandings of, and responses to, perceived algorithmic bias. 
This allows us to conclude with considerations for future research building on the understanding of participatory resignation as a form of resistance to platforms’ governmentality.
Platformed negotiation of sexual identity
LGBTQ+ people have long negotiated the stigmatization of non-cisheterosexual identities by cultivating counterpublic spaces and discourse allowing for solidarity and challenges to such stigma (Warner, 2002). 2 For several decades, such negotiations have played out online, giving rise to research demonstrating LGBTQ+ people's strategies for managing sexual identity expressions in digital environments (DeVito et al., 2018; Robards et al., 2018). Such research has often focused on how LGBTQ+ people imagine their vast and unknowable audiences, determining what to share across platforms’ collapsed contexts of queer-friendly and potentially hostile acquaintances and strangers (Marwick, 2023). This includes decisions about how to represent sexual identity, such as through coded messages that only land with some audiences – what Marwick and boyd (2014) term social steganography – and which platforms, visuals, and other communicative practices to adopt (Duguay, 2022).
Even so, LGBTQ+ people's digital self-representation is negotiated in relation to platforms’ technological functionalities, policies, governance, and economic incentives, many of which (re)embed stigma relating to gender and sexual identity. Many platforms reinforce a gender binary and require static forms of identification (i.e., ‘real name’ policies) that preclude fluid, multiple, and transitioning identity expressions (Bivens and Haimson, 2016; Lingel and Golub, 2015). Further, interface arrangements defaulting toward public sharing lead to unintended outing, often with greater consequences for the privacy and wellbeing of queer youth of colour (Cho, 2018). In his study of Gaydar, a gay men's social networking service, Cassidy (2016) found that the platform constrained users’ activity (oriented toward casual sex), identity expression (stereotyped), and options (few alternative platforms seemed viable), resulting in ‘participatory reluctance’ as begrudging but continued use. As niche LGBTQ+ platforms have struggled to stay afloat, users have more widely adopted popular platforms like Facebook, Instagram, and Tumblr, developing identity-specific practices to foster community (Robards et al., 2018). On popular platforms, LGBTQ+ cultural producers still engage in identity negotiations. For instance, queer drag artists curate identities specifically for streaming on Twitch but their mediated performances are shaped by platform affordances, audience expectations, and avenues for monetization (Persaud and Perks, 2022).
Platforms’ governance policies have also become increasingly hostile to LGBTQ+ people, with the introduction of legislation like FOSTA-SESTA 3 in the United States of America (USA) contributing to a widespread deplatforming of sex and sexuality (Tiidenberg and van der Nagel, 2020). While this legislation purports to render platform companies responsible for sex trafficking content, it holds them accountable for any consensual or coerced sex work occurring through their platforms, and their response of over-censoring and banning sexual content has had devastating impacts on queer, trans, and Black, Indigenous, People of Color (BIPOC) sex workers as well as LGBTQ+ communities more broadly (Liu, 2020). In highlighting how platforms equate sex with danger and unsafety, Paasonen et al. (2019) locate platforms’ censorship and punishment of sexual content in an American Puritan cultural legacy that instils a ‘structure of feeling’ (citing Williams, 1977) in platform moderation. Platforms’ avoidance of sexual content also safeguards their focal revenue stream by keeping their services advertiser-friendly – an imperative that has led to the demonetization of LGBTQ+ creators, often conducted opaquely and applied to even policy-abiding content, prompting user speculation about why certain creators have been targeted (Caplan and Gillespie, 2020).
Algorithmic curation is a key enforcement mechanism of platform governance. While algorithmic personalization can enable LGBTQ+ people to find each other, it can also reduce the visibility of their content or render users into targets for harassment (Simpson and Semaan, 2021). Automated removals and decreases in visibility generally occur without human moderation and leave little recourse for creators. As such, LGBTQ+ people and sexual content creators attempt to discern and anticipate platforms’ algorithmic functionality (Are and Briggs, 2023; DeVito, 2022). They also speak out publicly about shadowbanning, drawing attention to how platforms like TikTok demote content by queer, trans, and disabled users while providing little to no protection from malicious reporting that keeps these creators out of algorithmic feeds (Rauchberg, 2022). Thus, the stakes are high for LGBTQ+ cultural producers given their management of stigmatized content, and intensified for those whose livelihoods are linked to their social media presence.
Imaginaries and platformed cultural production
As algorithmic curation has become a key mechanism for visibility and platforms have assumed a central role in cultural production (Poell et al., 2021), users’ anticipative strategies for self-representation have also become more sophisticated. In an exploration of user perceptions of Facebook's algorithms, Bucher (2017: 31) defined algorithmic imaginary as ‘the way in which people imagine, perceive and experience algorithms and what these imaginations make possible’. Diverging from perspectives on folk theories in Human-Computer Interaction (HCI) and cognitive psychology (DeVito et al., 2018; Ytre-Arne and Moe, 2021), Bucher conceptualizes algorithmic imaginaries in relation to affect and power. Algorithmic imaginaries are affective interpretations and sense-making processes regarding algorithmic operations that are otherwise unknowable (e.g., News Feed rankings). Applying a Foucauldian lens (Foucault, 1982) that views power not solely as a mechanism for repression or hierarchical domination but also, focally, as generative, Bucher underscores that algorithmic imaginaries are productive in their ability to shape how people respond to platforms and other users. While the concept has been applied widely – for instance, to analyze how youth understand algorithmic news curation (Swart, 2021) – the factors shaping algorithmic imaginaries remain ambiguous in the original conceptualization.
Scholars have further examined the impact of various actors on imaginaries. When theorizing ‘data imaginaries,’ Beer (2019) points to analytics companies as key actors cultivating shared ideals, norms, and expectations associated with data and its uses. He builds on Taylor's (2004) social imaginary that encompasses how ‘people imagine their social existence,’ such as through images, stories, and legends that become shared by large groups and form an ‘understanding that makes possible common practices and a widely shared sense of legitimacy’ (p. 23). Taylor's expansive definition of social imaginaries is useful, given how sexual identity comprises personal subjectivities that exist in relation to broader social, cultural, political and economic structures (Warner, 2002). Further, specific locations and national identities give rise to shared histories, narratives, and logics reinforced through ‘imagined communities’ of collective likeness (Anderson, 1983). Returning to a platform context, Schellewald (2022) identifies the importance of such narratives in the ‘stories about algorithms’ (p. 1) that users create. Further, users develop practical knowledge of algorithms at the intersection of collective practice and discourses deployed within their online social worlds (Cotter, 2024). These considerations account for how personal and social understandings of identity and context contribute to an algorithmic imaginary.
However, algorithms must be considered within the broader platform and media ecologies that shape imaginaries. Reflecting these multiple factors, Van Es and Poell's (2020) conceptualization of platform imaginaries elucidates how ‘social actors understand and organize their activities in relation to platform algorithms, interfaces, data infrastructures, moderation procedures, business models, user practices, and audiences’ (p. 3). They attend to features alongside economic and governance structures that open or foreclose avenues for action. Media and public discourse can also shape platform imaginaries, such as the journalistic coverage and meme culture that framed OnlyFans as a NSFW 4 platform (van der Nagel, 2021). Platform owners also position themselves as authorities, deploying messages to shape users’ understanding of algorithms, exemplified in Instagram's denial of shadowbanning. This ‘black box gaslighting’ (Cotter, 2021: 1227) is ‘how platforms leverage their epistemic authority to prompt users to question what they know about algorithms, and thus destabilize the very possibility of credible criticisms’. As such, multiple factors outside users’ personal experiences shape algorithmic imaginaries.
Imaginaries are also socially shaped among users, many of whom counter platform opacity and gaslighting through mutual support and sharing intel (Cotter, 2021). Influencers, or those whose main vocation is content creation, form ‘pods’ or informal networks committed to amplifying each other's content (Abidin, 2018). They exchange algorithmic gossip as ‘communally and socially informed knowledge about algorithms and algorithmic visibility’ (Bishop, 2019: 2590). Such gossip feeds into broader ‘influencer imaginaries’ (Arriagada and Bishop, 2021) through which creators negotiate commerciality and authenticity as they work to monetize their following. LGBTQ+ influencers are also subject to these tensions, negotiating authentic performances of sexual identity while channelling identity-based relatability into brand partnerships and platform fame (Raun, 2018). However, as with creators’ response to YouTube's demonetization phases of the ‘adpocalypse’ (Caplan and Gillespie, 2020), overtly discussing precarious labour conditions threatens this balance as creator outcry brings the instrumental elements of their audience relationships into view.
Recent studies have focused on creator approaches to negotiating such precarious platform conditions through imaginaries. Examining Instagram influencers across the USA, Germany, and Japan, Richter and Ye (2023) uncovered that influencers intricately negotiate relationships with followers, platforms, and commercial partners by treating Instagram simultaneously as a social space, workplace, and marketplace. Studying influencers who experience marginalization or stigmatization, Duffy and Meisner (2023) found that these creators experienced ruptures to their platform visibility through formal (e.g., account bans) and informal (e.g., shadowbans) mechanisms. In response, creators enacted anticipatory practices, suppressing identity-related content or experimenting with tactics for increasing visibility, and reactive practices like censorship circumvention informed by previous experiences. They discuss resignation as another reaction, involving users’ compliance with platform requirements, analyzing it as a response to the platform governmentality imposed upon users through modes of surveillance and discipline. Sexual content creators subject to deplatforming express a related constellation of emotions, including disengagement, vigilance, shame, grief, and defiance coupled with an ‘uncomfortable dependence upon platforms’ (Are and Briggs, 2023: 8). Even users whose creative work is not centred on social media production can be subject to this dependence, experiencing influencer creep, as the imperative to self-brand, apply techniques of algorithmic optimization, and perform authenticity on platforms to participate in creative economies (Bishop, 2023). As such, our study attends to the imaginaries of LGBTQ+ creators, noting how these precarious platform conditions shape their algorithmic imaginaries.
Examining LGBTQ+ Montrealers’ and Berliners’ algorithmic imaginaries
This study is geographically anchored through a focus on cultural producers in two urban centres with similarly active queer scenes: Montreal and Berlin. Studying individuals across these sites enables examining transnational elements of culture (Szulc, 2023) and its digital flows across transnational platforms as well as localized meaning-making in terms of vernacular understandings of sexual identity and technology. This comparative approach is still rare in social media scholarship but holds value in allowing for contextualization of platform activity and attention to patterns across cases ‘while avoiding normalization and universalization’ (Matassi and Boczkowski, 2021: 209). In contrast to, for instance, Richter and Ye's (2023) focus on transnational trends, our study is attuned to geocultural specificities through its focus on these two cities, provincializing Western contexts that are often treated as homogenous. While a comprehensive cultural-historical background on each city is outside this article's scope, the following section provides information relevant to the study's framing and findings before discussing research methods.
Montreal and Berlin have gained reputations as lively centres for the acceptance, protection, and promotion of LGBTQ+ identities (Club Culture Berlin, 2019; Podmore, 2021). Their evolution as queer-friendly cities reflects years of grassroots-led mobilization by LGBTQ+ communities. Since the mid-20th century, Montreal has sustained thriving gay and lesbian social scenes, though they have also experienced adversity, such as through police raids on venues in the 1970s-1990s (CBC News, 2017). In the 1980s, numerous gay commercial venues took hold in a formerly rundown neighbourhood, which led to the establishment of The Village (Giraud, 2013). While The Village symbolizes LGBTQ+ presence in the city, gay men have been its most welcome patrons while women and gender non-conforming people have established more ephemeral social networks, such as through queer and lesbian cabarets and dance parties (McLeod et al., 2014; Podmore, 2021). With gatherings dispersed, social media has become integral to defining and promoting LGBTQ+ identity, sociality, and community (Krishnan and Duguay, 2020).
In Berlin, following the Nazi regime's repression of sexual minorities, queer people's acceptance fluctuated greatly post-World War II in the Federal Republic of Germany (West) and the German Democratic Republic (East), albeit in relation to different political motives. On both sides, LGBTQ+ people were seen as working against the stability and legitimacy of newly established political regimes (Huneke, 2022). Nevertheless, after homosexuality's legalization in the 1970s in West Germany, queer culture proliferated in Berlin's western districts, fostering solidarity and mobilization among LGBTQ+ people. In East Germany, LGBTQ+ people pushed for tolerance within the socialist structure, efforts that allowed for significant political reforms fostering the rights of LGBTQ+ communities in the 1980s (Huneke, 2022: 12). Post-reunification, propelled by novel opportunities and freedoms, the LGBTQ+ movement in Berlin was dynamic and led to the development of queer scenes that have remained prevalent into the 2000s (Stührmann, 2019). Beyond specific locations or venues, LGBTQ+ social media with geotags in Berlin show a range of identity-related representations, expression, and sociality dispersed across this urban space (Sedrez, 2022).
Queer spaces in both cities have been weakened in recent years due to reduced funding and resources, gentrification and rising rents, erosion of government support, and the COVID-19 pandemic (Conseil Québecois LGBT, 2022; Trott, 2020). LGBTQ+ people in either city remain at higher risk of discrimination than cisheterosexual people, with queer women and transgender people facing the highest risks (Bayrakdar and King, 2023; CQ-LGBT, 2022). LGBTQ+ people also experience intensifying political threats. In Germany, far-right parties like Alternative for Germany (AfD) have fuelled anti-LGBTQ+ sentiment, while in Québec, echoing conservative American policies, there have been calls to limit trans people's and drag artists’ access to public spaces (Hutton, 2018; Oduro and Jones, 2023). Overall, these cities have different linguistic profiles (French and English in Montreal; German and English in Berlin) and histories that have rendered them into contested yet progressive urban centres.
This article investigates how such geocultural dispositions combine with sexual identity to shape algorithmic imaginaries through non-participant, online ethnographic observation of cultural producers in Montreal and Berlin. Heeding Hine's (2017) assertion that contemporary digital ethnographies are necessarily networked and multi-sited, the study's observations began with creators’ Instagram accounts and then branched out to their content on other platforms, such as Facebook and TikTok. Instagram was a logical starting point given its primacy in controversies surrounding algorithmic governance (Are, 2022; Cotter, 2021) and the platform's professionalization within the influencer industry (Richter and Ye, 2023). The study's inclusion criteria focused on content creators with a stake in algorithmic influence on their visibility given their occupation, creative outputs, or community endeavours. As such, the research began with compilation of lists of LGBTQ+ creators in each city who were visible on the platform, tracing follower networks to identify a range of users, including drag performers, writers, activists, visual artists, tattoo artists, politicians, and more.
Overall, the social media content of 48 Montrealers and 46 Berliners was observed from 11 January to 13 March 2023. Since much content was ephemeral (e.g., Stories), extensive field notes were recorded alongside screenshots of posts, stored securely for analysis. Since all accounts were public and users implemented hashtags and other mechanisms to bolster visibility, indicating an intended level of publicness to which researchers should be sensitive when identifying digital data for research (Franzke et al., 2020), we quote users and describe their content as it appeared. However, we refer to individuals through pseudonyms (reflecting in them the creative ethos of the original usernames) and omit screenshots to safeguard users’ privacy given the longevity of academic publications that may contrast with desires to remain unsearchable outside the platform or to delete content in the future.
Analysis combined close reading of user content with approaches for examining visual social media. Stories and posts were individually examined and then collectively sorted according to key patterns, themes, and narratives (Brummett, 2019). Since individuals develop vernaculars and metaphors in everyday dialogue concerning technologies (Puschmann and Burgess, 2014), this required watching for posts that did not necessarily name algorithmic bias but instead discussed moderation decisions, experiences of limited reach, adaptations to perceived algorithmic shifts or censorship, and suspicions of shadowbanning. While close reading considered textual and visual aspects of content, we further heeded Highfield and Leaver's (2016) call for Instagram-related research to attend to visual trends, practices, and technocultural contexts. We paid attention to visual flows, such as by noting the order of Stories posted; visual elements like emojis, which can communicate affect; and references to media and (sub)cultures. Bucher's (2017) attention to the affective registers of algorithmic imaginaries also attuned us to users’ affective expressions through written tone, emoji, or visual expressiveness. Our positionalities as queer individuals living in Montreal, who are avid social media users and researchers, along with Chartrand's considerable time spent in Berlin and trilingual capacity across French, German, and English, facilitated interpretations of user content.
This combined approach identified individuals’ feelings, opinions, and reactions in their own written and audio-visual expressions. Yet, as observers, we were unable to extrapolate how such expressions related to individuals’ overall experience as cultural producers across online and offline spaces (e.g., on stage or in the studio). This study is also limited in its focus on Instagram as the main platform upon which we observed dialogue about algorithmic bias, given the threat such bias posed for public and professional personas. In particular, ephemeral Instagram Stories functioned as the main mechanism for LGBTQ+ users to communicate quick reactions to moderation without detracting from their persistent posts. While these limitations can be addressed through future research, the present study elucidates key considerations in these creators’ algorithmic imaginaries and reveals their responses to perceptions of algorithmic bias.
Findings and discussion
The following sections draw on examples from our observation in which cultural producers react to changes to their content (e.g., removal) or engagement levels that could be attributed to algorithms, or reflect on algorithmic processes more broadly (e.g., filtering and curation). Several posts reflected users’ algorithmic imaginaries, as affective expressions of amusement, anger, and despair associated with conjectures about algorithms and their consequences. Examples show how these imaginaries were shaped by individuals’ pre-existing expectations about how platforms handle displays of sexual identity and sexuality as well as their notions of geocultural factors affecting platform curation and moderation. Such algorithmic imaginaries are productive in their incitement to act, as Bucher (2017) notes, prompting users to respond to perceived algorithmic bias through three main approaches: direct calls to action, attempts to render offending content opaque to algorithmic moderation, and adaptation to the algorithm's assumed demands. Altogether, these responses reflect an affective approach of participatory resignation, combining users’ acceptance of algorithms’ immutability with their determination to endure on the platform regardless.
Imaginaries of sexual identity
JupiTramp, a Montreal drag artist, posted a Story remarking that one of his images was removed 1.5 years after he posted it. The photo was a portrait in which he had a ballgag in his mouth, referencing BDSM 5 practices and aesthetics. JupiTramp remarked, ‘Way to be puritanical Instagram,’ subsequently mentioning the hashtags he presumes triggered this removal, which he will avoid in the future. Within his algorithmic imaginary, JupiTramp associated content removal with puritanical inclinations, picking up on the underlying rejection of sexuality as immoral and dangerous imbued in Silicon Valley platforms (Paasonen et al., 2019). He included the ‘grinning face with sweat’ emoji in his commentary, recognizing that he had been caught by what he already assumed was a prohibition on such content. Despite disagreeing with the decision, he anticipated that removals would happen again, noting ‘guess I’ll avoid these [hashtags] in the future.’
The account Dyke Illusion Berlin displayed similar assumptions that LGBTQ+ expression would lead to censorship, albeit upon experiencing different consequences. The account, representing a group of genderqueer dance and strip performers, posted about reduced engagement: ‘We knew it would happen eventually… All those uses of the term D¥ke got us told off too many timeessss.’ They inferred that using ‘dyke,’ as a reclaimed term associated with lesbians, led to an algorithmic shift in their content's visibility. Their declaration, ‘We knew it would happen,’ indicates familiarity with identity-based discrimination and certainty that Instagram reinforces it. For these queer users, platform-led discrimination is not a matter of if it will happen, but when and for what small transgression (e.g., the use of a specific term or aesthetic).
Mik, the Montreal-based founder of an agency for diverse models and interdisciplinary artists, was subject to repeated takedowns after posting about the book The Faggots & Their Friends Between Revolutions. Written by Larry Mitchell, founder of the gay press Calamus Books, the novel integrates themes of queer community and sexual liberation into a fictional narrative (Brim, 2009). Nonetheless, the post was removed, restored, and then removed again, indicating discrepancies in content moderation. Instagram's Community Guidelines state: ‘When hate speech is being shared to challenge it or to raise awareness, we may allow it’ (Instagram, 2023). It is possible that the post was automatically flagged as ‘hate speech’ and later restored by a human moderator with the discretion to understand that ‘faggot’ was being reclaimed, only to be flagged again. Therefore, while Instagram's guidelines may allow this use, Mik's experience demonstrated a lack of clarity concerning when such a post is allowed and could reflect conflicting approaches between algorithmic and human moderation. In a reactive commentary, its hastiness evident in disjointed writing, Mik expressed the belief that LGBTQ+ content was repeatedly targeted for censorship, actively limiting potential discussion and exchange about queer issues and terminology. Specifying that deletion was rapid, they noted that Instagram removed the post without allowing for further appeals. The message's tone and use of the ‘mending heart’ emoji expressed sadness and despondency, while Mik concluded that the problem would continue regardless of their actions.
These constitute just some of many instances in which users demonstrated algorithmic imaginaries shaped by their positionality as LGBTQ+. They showed an understanding that their activity and the platforms they use exist within a societal context that remains cisheteronormative and favours normative sexual practices (Warner, 2002). Furthermore, many indicated an awareness that platform policies, and their enactment through moderation, complied with these dominant norms relating to sexual identity and sexuality. This awareness was often informed by their social world (Cotter, 2024) of other LGBTQ+ cultural producers, as their posts sometimes mentioned other accounts that shared about removals or shadowbanning after posting similar content. User critiques also reflected a sense of public and media discourse about algorithmic moderation. Therefore, individuals’ algorithmic imaginaries were shaped by their experiences of living out non-normative sexual and gender identities while being collectively informed by other users and popular understandings of platform governance, as reflected in the notions of platform imaginary (van Es and Poell, 2020) and algorithmic gossip (Bishop, 2019).
Geocultural imaginaries
While similarities across examples from Montreal and Berlin demonstrate how cultures are entangled in transnational ways (Szulc, 2023) and how platform practices coalesce across locations (Richter and Ye, 2023), this section attends to geocultural context to note differences in algorithmic imaginaries. Creators’ reactions to perceived algorithmic bias did not always have the same political motivations, speak against the same threats, or reflect lived experiences in both cities. Lenas_Ausblick, a Berlin-based Jewish activist, shared two Stories from Keshet Deutschland, an organization catering to Germany's Jewish LGBTQ+ community. In German, the posts denounced numerous antisemitic comments the organization had received on a recent TikTok video. In a practice that Bucher (2017) found was common – discussing an algorithmic imaginary pertaining to one platform on a different platform – Keshet Deutschland asserted that such behaviour was exacerbated by TikTok's algorithm, which threatened Jewish LGBTQ+ people by using algorithmically detected location and language to present content, thereby exposing them to harassers. By reposting these Stories, Lena indicated that she believed this information was important for other LGBTQ+ Jewish people in Germany. Lena's algorithmic imaginary of this post's relevance was shaped not only by her sexual identity but also by her alignment with the organization's religious and political perspective. Furthermore, Germany's political climate intensified the salience of algorithmic bias, as the recent strengthening of right-wing politics has fed a resurgence of antisemitic sentiment (Angelos, 2019).
While Montreal-based creators did not post about these themes, many made political statements against rising transphobic attitudes alongside anti-trans and anti-drag legislation proposed in the nearby USA (Laberge, 2023). Montreal-based visual artist and writer Les lubies volatiles included an Instagram Stories highlight section titled LGBTQ+ with posts countering discrimination against trans and non-binary individuals, such as one with the phrase ‘Les femmes trans sont des femmes’ (translation: trans women are women). Such statements appeared in sequence with Stories critiquing the algorithmic censorship of LGBTQ+ people. Les lubies volatiles's inclusion of these posts together stressed the importance of algorithmic justice for LGBTQ+ rights, specifically the defence of transgender people's rights. Several cultural producers in Montreal made similar statements, with their urgency underscored by local developments, such as Quebec's Conservative Party leader, Éric Duhaime, launching a petition to ban drag performances in the presence of children (Radio-Canada, 2023). 6 Through posts calling for fairer algorithmic representation, creators illustrated that the stakes of experiencing algorithmic bias were high in relation to the hostile local political environment, which likewise threatened to reduce the visibility of trans people and drag performers.
Beyond location and sexual identity, other facets of identity combined to shape users’ algorithmic imaginaries. For instance, Les lubies volatiles's neurodivergence underscored their dedication to posting about protection from discrimination. Other LGBTQ+ creators mentioned how their efforts to gain algorithmic visibility were related to financial needs, noting that they perceived algorithmic bias against LGBTQ+ people as a threat to their economic sustainability. A couple of tattoo artists posted along these lines, as their photos of freshly tattooed body parts (often on visibly queer people) were affected by the dual regulation of sexual identity and nudity on social media (Are and Briggs, 2023). Geocultural context also shaped how prominently creators represented certain elements of identity. Despite both cities being known as hubs for migration, more creators in Montreal than in Berlin identified in their profiles as immigrants or as part of diasporic communities. FrailIdylle, who self-identified as a queer ‘Egyptian Africunt,’ posted multiple times about navigating limited reach on Instagram with captions such as, ‘Not even a shadowban can stop this ART from circulating,’ framing posts that expressed multiple facets of their identity as ‘art’ whose algorithmic suppression needed to be countered. Such vocal expressions occurred against the backdrop of a province in which political leaders, including Quebec's premier, have refused calls to acknowledge systemic racism (Jung, 2021). Similar to Taylor's (2004) conceptualization of collective social imaginaries, instances in the previous two sections reflect how identity and situatedness played into cultural producers’ shared understanding of algorithms and why they matter.
Acting on algorithmic imaginaries
Our observations also uncovered the ‘productive and affective power that these imaginaries have’ (Bucher, 2017: 41) in shaping how individuals act regarding algorithms, evident in posts demonstrating strategies and behaviours directly informed by algorithmic imaginaries. While the three main modes of response we observed are not unique to these creators, this study's lens enabled us to identify how identity-related positionalities and contexts featured in such responses. First, some creators issued direct calls to followers for increased engagement. After reviewing their engagement metrics, Dionysus_tattoos, a tattoo artist in Montreal, shared a screenshot of their Instagram page indicating that recent posts had much lower reach than usual, stressing that reduced engagement results in fewer appointments and severely diminished revenue. Creators frequently use such displays of evidence to support claims of shadowbanning (Are, 2022; Cotter, 2021). Admissions of platform precarity, common among creators who see Instagram as a marketplace (Richter and Ye, 2023), set the stage for requests for engagement to alleviate the problem. Similarly, another Montreal artist, Lucie Cinq-Mars, pleaded, ‘If you like, comment, share or save my post, I will bless you over 7 generations, promise’ (English translation). Direct calls to action involved the affective labour inherent to cultural producers’ instrumentalization of relationships to boost their algorithmic visibility (Duffy, 2017). In many instances, creators leaned on relationships with followers whom they presumed to share their positionalities, especially LGBTQ+ identities, hoping this common ground would spur others to action.
Second, individuals whose algorithmic imaginaries included theories of automated content moderation and filtering systems targeting LGBTQ+ content altered their behaviour to elude these forms of algorithmic governance. In the example from Dyke Illusion Berlin, the account spelled ‘dyke’ with the monetary symbol ¥, following practices of ‘algospeak,’ an emergent linguistic practice specifically intended to circumvent content moderation (Steen et al., 2023). Similarly, Mik anticipated Instagram's inability to discern the contextual use of a sensitive word by writing ‘F*ggots’ in the book title, though this still triggered the post's removal. These alterations reflect LGBTQ+ people's longstanding practices of coded self-presentation, from the ‘hanky code’ to encoded online usernames (Livia, 2002), though in these cases, algorithms were the primary audiences these users hoped to elude. These responses align with Duffy and Meisner's (2023) findings that stigmatized creators often employ circumvention practices as creative yet laborious workarounds or linguistic signals that allow messages to land with audiences while eluding the algorithm. They describe how creators redirect practices of social steganography (Marwick and boyd, 2014) toward algorithmic moderation, which resonated in our findings, though the creators we observed expressed little certainty about which signs or symbols could trigger moderation.
Lastly, some users adapted to perceived requirements imposed by algorithms to avoid repercussions for their visibility. Rain, a tattoo artist based mainly in Montreal but also practising in Berlin, posted a selfie with the caption, ‘Haven’t posted my face on my feed in a long time, let's hope it fixes my algorithm.’ The post indicated a concession: the artist would rather not share this kind of content but believed it was what the algorithm favoured. This concession also reflected the feeling that one's desired self-expression, such as their art and opinions on LGBTQ+ topics, was insufficient for sustained engagement according to the platform's algorithmic criteria. This imperative toward visibility and active audience maintenance reflects influencer creep (Bishop, 2023), as even jobs with a significant offline component, such as tattooing, now require platformed self-branding optimized for algorithmic circulation and demand authentic performances of the self, including selfies. The German author Farideh posted back-to-back Stories about this approach, the first a meme that, translated from German, read: ‘No one: / People on Insta: Here is my face because of the algorithm.’
Overall, these users’ actions reflect LGBTQ+ people's strategies to negotiate otherwise hostile environments, which are often resonant with strategies employed more broadly by cultural producers to navigate platform visibility. However, for these individuals whose platform activity combined cultural production with rendering LGBTQ+ identities visible, their approaches enabled them to retain their presence by mobilizing engagement, obscuring content from algorithmic moderation, and adapting to presumed platform parameters. We see this as a move from participatory reluctance, Cassidy's (2016) concept for understanding how gay men remained on Gaydar despite the hurdles it posed, toward participatory resignation, defining resignation as ‘the acceptance of something undesirable but inevitable’ (Oxford Languages, n.d.). Our study's creators were similarly aware of platforms like Instagram perpetuating business aims and policies that largely conflict with LGBTQ+ people's goals and livelihoods, as these platforms censor and obscure identity-related content that is unfriendly to advertisers and conflicts with dominant norms. These users also exhibited a sense that catering to the algorithmic requirements of popular platforms was necessary, given that dominant platforms hold the possibility of greater visibility and, in turn, the potential for economic stability or community outreach. Therefore, LGBTQ+ individuals faced with perceived algorithmic bias show participatory resignation as resigned acceptance of the platformed conditions they must negotiate, informed by personal and geocultural understandings of the continued need for LGBTQ+ people to endure in the face of discrimination.
Reluctance morphs into resignation as users develop an algorithmic imaginary that expects, anticipates, and looks to negotiate algorithmic bias as a precondition for continuing to use these platforms. Building from other studies of stigmatized creators that have identified resigned sentiments regarding algorithms (Are and Briggs, 2023; Duffy and Meisner, 2023), our research highlights the continuity between users’ cultivation of resigned acceptance and their past experiences of identity-related discrimination and resilience stemming from such experiences. We conceptualize participatory resignation not as giving up or moving on to another platform, as some creators speak of doing in these other studies, but as an active (i.e., participatory) form of acceptance that fosters endurance. Participatory resignation involves users conceding, giving algorithms just enough of what they presume platforms expect (e.g., a face picture), to continue expressing themselves how they want. It is a productive understanding of negotiation and resistance toward algorithms (Bucher, 2017) and a defiant stance towards platforms, which supposes that even with these concessions, queer expression can and should still circulate on platforms.
Conclusion
This study has examined the algorithmic imaginaries of LGBTQ+ cultural producers in Montreal and Berlin through ethnographic observation of their Instagram posts pertaining to algorithms. Findings illustrate how personal and social understandings of sexual identity and geocultural context shape algorithmic imaginaries. Examples showed how positionality as LGBTQ+ and awareness of algorithmic governance mechanisms, alongside others’ discussions of algorithmic bias, connected gender and sexual discrimination to the perceived consequences of algorithmic curation, filtering, and moderation. Individuals’ geographic and political contexts also influenced the issues they decided to speak out about, such as transphobia in the Canadian context and antisemitism in the German context, as they emphasized the importance of making certain issues algorithmically visible. In response to these algorithmic imaginaries, these users adopted an overarching stance of participatory resignation, accepting the regulating power of algorithms, informed by an expectation of discrimination and a history of negotiating it, through which they developed several strategies for remaining visible on these platforms.
Overall, this research broadens conceptions of imaginaries pertaining to platforms. It integrates aspects of the social imaginary (Taylor, 2004) regarding collective understandings, narratives, and experiences of sexual and geocultural identity into Bucher's (2017) concept of algorithmic imaginaries. Further, it uncovers imaginaries through which these LGBTQ+ creators make sense of perceived algorithmic bias and shows how such imaginaries are informed by identity-related perspectives that enable individuals to endure in the face of platform adversity. As a future direction for research, user interviews would provide fuller understandings of users’ imaginaries and the impetus for their responses to algorithms. We also uncovered affects of frustration, rage, and cynicism fuelling individuals’ participatory resignation that warrant further exploration. In particular, the aspect of resignation that views a scenario as inevitable, and thus unchangeable, raises concern about users’ perceived lack of agency over platform conditions. As such, this study contributes to growing research on the need for platforms to respect users’ sexual and human rights (Spišák et al., 2021) and to studies calling for greater algorithmic transparency and accountability (Are, 2022). We echo Duffy and Meisner's (2023) concerns over platform governmentality that steers the creator economy through ‘discipline and punishment’ (p. 301). However, we are hopeful that, through legacies of queer appropriation, the participatory resignation of LGBTQ+ creators indicates enduring possibilities for subversion and resistance.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Concordia University Research Chair funding, Fonds de Recherche du Québec-Société et Culture (Bourse au doctorat en recherche).
Author biographies
Alex Chartrand is a PhD candidate in Communication Studies at Concordia University in Tiohtià:ke/Montreal and a member of the Digital Intimacy, Gender & Sexuality Lab. His interests include online social movements, queer countercultures, platform governance, algorithmic studies, and alternative uses of technology. His work focuses on how the algorithmic imaginaries constructed by LGBTQ+ communities shape their resistance against the biases and discrimination they encounter on social media platforms.
