Abstract
This study employs semistructured interviews and algorithmic ethnography to explore how algorithmic shadowbans have been used to moderate content related to Chinese gay men and to achieve targeted algorithmic governance. Through a multimethod approach combining thematic analysis and discourse analysis, this study argues that algorithms impose seemingly tolerant but actually restrictive shadowbans on Chinese gay men, thematized as “(im)permissible searching” and “(un)smooth posting.” This study conceptualizes such algorithmic shadowbans as “algorithmic camouflage,” a term that emphasizes, from an interactive perspective, the opacity of the roles, behaviors, and purposes of algorithms toward specific users, and that highlights the “hypocrisy” of algorithms. Under such hypocritical algorithmic shadowbans, this study suggests, a highly camouflaged “de-gaying” discourse—through compositions of dehumanization, de-emotionalization, and dramatization—is being shaped by algorithms on Chinese digital platforms.
Introduction
Users of social media have coined the term “shadowban” to describe obscure content moderation policies designed to act covertly to undermine the visibility of content flagged as “inappropriate” by the platform (Are, 2022: 2003). As users continually shared stories online about suspected shadowbans, such as silent drops in traffic and disappearing content, more users came to believe that shadowbans were being employed (Fowler, 2022). Myers West (2018) investigated perceptions of “shadowbans” and found that users tend to identify human actors, such as the platform operator or the company itself, as responsible for implementing them, rather than nonhuman actants such as algorithms. Indeed, given the high threshold of knowledge required to understand algorithms (McQuillan, 2016), the scale of their code structures (Seaver, 2019), and their status as corporate secrets (Pasquale, 2015), it is easy for algorithms to escape accountability when users allege that shadowbans are in play. Nevertheless, it is indisputable that algorithms also implement shadowbans (Jaidka et al., 2023; Musiyiwa and Jacobson, 2023), especially for moderating deviant content that platforms dislike (Gillespie, 2022; Steen et al., 2023).
As such, this study works to track the signs of algorithmic shadowbans. As previous studies have reminded us, scholarship has relatively little knowledge of marginalized groups interacting with algorithms (Hargittai et al., 2020). This study takes the gay male community in China as an example and asks:
In addition, although digital platforms in China rarely admit that they are implementing shadowbans on gay men, gay male users have perceived algorithms as secretly moderating content about them (Shen, 2023; Zhao, 2023). Thus, this study asks a supplementary question:
As algorithmic shadowbans are closely related to algorithmic governance (Savolainen, 2022), implicit content moderation driven by algorithmic shadowbanning can be considered a complex approach to achieving algorithmic governance of specific content and people. For example, research by Haimson et al. (2021) and Duffy and Meisner (2023) reveals that algorithms target the accounts of black people, covertly removing content containing the term “black” to manage tiered visibility, mitigate alleged racist conflicts, and suppress non-normative expressions. Therefore, the literature review below presents studies related to shadowbans and content moderation alongside research on algorithmic governance, with a specific focus on work concerning gender and sexual minorities (GSMs). In the findings, “(im)permissible searching” and “(un)smooth posting” are identified to unpack the shadowbans conducted by algorithms, which are then conceptualized as “algorithmic camouflage.” Further, this study suggests that a highly camouflaged “de-gaying” discourse has been shaped on Chinese digital platforms.
Literature review
Shadowbans and content moderation
When we mention shadowbans, we generally refer to content moderation at the same time, and vice versa (Fowler, 2022). The concepts of shadowbanning and content moderation are inseparable; broadly speaking, shadowbans can be considered and discussed as a part of content moderation (Myers West, 2018). As introduced earlier, shadowbans describe the content moderation that powerful actors on digital platforms (operators, companies, algorithms, etc.) implicitly or softly impose on users’ accounts (Are, 2022; Jaidka et al., 2023). Although content moderation can take place transparently, shadowbans have also been combined closely with such content moderation approaches as the ranking of content creators (Cotter, 2023), manual or algorithmic flagging (Crawford and Gillespie, 2016), and the governing of search engines (Jones, 2023), practices that users on various platforms have widely reported (Suzor et al., 2019).
With the intervention of algorithms, shadowbanning, or rather content moderation, becomes more powerful. As Gillespie (2020) suggests, artificial intelligence (AI) technologies supported by algorithms not only reduce the cost of content moderation but also expand its scope and improve its efficiency. Algorithmic content moderation, along with manual content moderation, makes up the cognitive assemblage that shapes platforms’ information flows, accurately targeting and filtering content that platforms do not like, such as violent extremist material (Crosset and Dupont, 2022). Duffy and Meisner (2023: 287) indicate that through these “formal (human and/or automated content moderation) or informal (shadowbans, biased algorithmic boosts) means,” some groups that are already marginalized are being further deprived of their rights, leaving them even more disadvantaged in the digital world. This is the case with GSMs, for whom algorithmic governance, algorithmic content moderation, and algorithmic shadowbans are all increasing.
Algorithmic governance toward GSMs
Similar to the logic of algorithms being used to govern deviant criminal actions in cities (Kubler, 2017), GSMs often encounter harsh algorithmic governance on digital platforms due to their
Such algorithmic content moderation methods have been effective and powerful tools in the widespread censorship and governance of GSMs (e.g. Dias Oliva et al., 2021), posing a strong “threat(s) of invisibility” to them (Bucher, 2012: 1171)—in the form of GSM content being unable to be promoted (e.g. Zhao, 2023) and of existing GSM content becoming unsearchable on platforms (e.g. Cavalcante, 2019). Although it may be reasonable for hypersexual GSM content to be moderated, GSM content cannot escape moderation even when it simply depicts a kiss (Wang and Spronk, 2023), a real scar from trans surgery (Delmonaco et al., 2024), or a simple story about boys’ love (Wang and Tan, 2023). Simpson and Semaan (2021: 24) term this mechanism, by which algorithms seek to suppress content affirming GSM identities, “algorithmic exclusion”; such exclusion not only hinders users’ daily claims to their specific (deviant) identities but also reduces the public's understanding of the GSM community.
In addition to creating threats of invisibility, platform algorithms also work to stigmatize GSM content: they have labeled all GSM content—regardless of sexual elements or intent—as NSFW (not safe/suitable for work), associated GSM content with prohibited content, disrupted the public reputation of the GSM community, spread a toxic technoculture of GSM-phobia, and even led to the demonetization of GSM content creators (DeVito et al., 2018; Duguay et al., 2020; Kingsley et al., 2022; Pilipets and Paasonen, 2022). Normalizing GSM content has also been shown to be an efficient approach for platform algorithms to govern members of those communities. Wang and Zhou (2024) note that in the Chinese context, platform algorithms tend merely to allow the media representation of optimistic, positive, romanticized GSM content that conforms to the mainstream imagination, whereas negative but personal experiences about being a GSM person are more likely to be invisible. Similarly, Southerton et al. (2021) identify that platform algorithm filters are committed to achieving a content classification that screens the sexualized content out from GSM content, doing so to shape the representation of “good” GSM citizenship without sexual desires. Some scholars have suggested that GSMs in modern society have been partially constructed by platform and marketing algorithms (Bivens and Haimson, 2016).
As Katzenbach and Ulbricht (2019: 10) conclude after investigating algorithmic content moderation: “Algorithmic governance is multiple, contingent and contested.” This suggests that algorithmic governance is very complicated and that “there is a strong possibility for algorithmic governance to produce bad outcomes at scale, whether or not they are intended” (McQuillan, 2016: 10). The importance of investigating the practice of algorithmic governance through algorithmic content moderation is thus self-evident.
Research methods
Algorithmic ethnography
As Seaver (2019: 419) suggests, “taking an ethnographic eye to algorithmic systems allows us to see features that are typically elided or obscured.” Therefore, this study adopts algorithmic ethnography to trace the signs of algorithms. As a method to “address how algorithms intersect with society, culture, and politics in intricate ways” (Wang and Zhou, 2024: 1190), algorithmic ethnography encourages researchers to confront the complexity of algorithms, cooperate with the algorithms, and implement reflexive observations (Christin, 2020a). Although there is no clear procedure for conducting algorithmic ethnography, scholars have developed some tactics as references. Seaver (2017: 8–10) outlines five strategies for investigating algorithms ethnographically, such as valuing the “heteroglossia” expressed by different departments and teams within the same company regarding algorithms, as well as the “irony” inadvertently displayed by technicians. Christin (2020a, 2020b) further proposes several approaches and focuses, such as algorithmic comparison, algorithmic triangulation, and algorithmic sorting, for conducting algorithmic ethnography online. In this study, two main domains are emphasized: algorithms as prisms that refract statuses and transformations of social dynamics (Christin, 2020a), and algorithmic metrics (likes, views, comments, etc.) that reveal how and why users see specific content (Christin, 2020b). Through these two domains, I hope to identify not only the various social hierarchies but also the social contexts and interactions influenced by algorithms.
To complement the ethnographic data, this study also employs the scraping audit method proposed by Sandvig et al. (2014) to audit the algorithms. This is done through the repetitive use of GSM-related terms (e.g., “gay men”—“男同性恋” in both English and Chinese) when interacting with algorithms, including intentionally using these terms as keywords in searches or as hashtags in posts, and then comparing the algorithms’ responses with their general rules.
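To make the audit logic concrete, the following sketch illustrates how such repeated queries might be compared in code. It is a minimal illustration only: the fetch_results function, the keyword lists, and the run count are hypothetical placeholders rather than instruments actually used in this study, in which queries were performed manually and documented via screenshots and field notes.

```python
# Minimal sketch of the scraping-audit logic (after Sandvig et al., 2014).
# Hypothetical throughout: fetch_results() stands in for whatever platform-
# specific retrieval an auditor uses.

from collections import Counter

BASELINE_KEYWORDS = ["旅行", "料理"]   # neutral control terms ("travel", "cooking")
AUDIT_KEYWORDS = ["男同性恋", "出柜"]  # GSM-related terms ("gay men", "come out")


def fetch_results(keyword: str) -> list[str]:
    """Placeholder: return the result titles a platform shows for `keyword`."""
    return []  # stub; replace with actual, platform-specific collection


def audit(keywords: list[str], runs: int = 10) -> Counter:
    """Repeat each query and tally empty or only indirectly related result pages."""
    tally = Counter()
    for kw in keywords:
        for _ in range(runs):
            results = fetch_results(kw)
            if not results:
                tally[f"{kw}: empty"] += 1
            elif not any(kw in title for title in results):
                tally[f"{kw}: no direct match"] += 1
    return tally


# Asymmetries between the two tallies would suggest differential (shadowban-like)
# treatment of the audited keywords relative to the neutral controls.
print(audit(BASELINE_KEYWORDS))
print(audit(AUDIT_KEYWORDS))
```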
Although various media platforms, such as Baidu for searches, WeChat for conversation, and Weibo for news, form a large media matrix through which Chinese gay men interact with algorithms every day, this study chose to conduct algorithmic ethnography on Douyin, a video-sharing platform, for several reasons. First, Douyin's advanced algorithmic design makes it a significant platform for GSMs online (Wang and Spronk, 2023). Second, Douyin's algorithms are socially important in China, transforming the economic structure (Lai, 2022), stimulating patriotism (Chen et al., 2021), and reproducing ideologies (Meng, 2021). Third, studies have explored the interaction between algorithms and GSMs on TikTok, the international version of Douyin (e.g. Karizat et al., 2021). Following this vein of research can help to enhance our global understanding of the relationship between mediated technologies—as represented by an increasingly popular video-sharing platform—and GSMs. Thus, this study conducted algorithmic ethnography on Douyin from October 2021 to April 2023, collecting data such as screenshots, screen recordings, and field notes for subsequent analysis.
Semistructured interviews
Hargittai et al. (2020: 767) provide suggestions to researchers who aim to explore users’ algorithmic skills, indicating that “in-person one-on-one interviews… offer the kind of privacy that can be helpful with topics where people may not be knowledgeable”; therefore, this study applies one-on-one semistructured interviews to investigate gay men's perceptions of algorithms. Using snowball sampling based on my own personal relationships and recruitment advertisements on digital platforms upon which Chinese gay men are active, such as Blued (see Miao and Chan, 2021), Zhihu (see Zhao and Chu, 2022), and Douyin (see Wang and Zhou, 2024), I recruited 35 participants between November 2021 and April 2023. To ensure anonymity, Arabic numerals (i.e. Ix, where x = 1, 2, …, 35) are used to refer to the interviewees.
Among the interviewees, nine periodically upload gay-related content to run their accounts and were thus interviewed as “content creators” (I3, and I28 to I35, whose follower counts range from 0 to 300,000); their interviews began with questions about their experience of uploading gay male content and their negotiations with other actors (such as algorithms, platforms, operators, and policies). The other interviewees were interviewed as “audiences” and asked about their thoughts on content sorting and their search habits, and then gradually about their perceptions of algorithms.
Before introducing the data analysis below, I wish to clarify two points about the interviews. First, I28 is a heterosexual female who leads a team running an account that publishes only gay male content. Second, this study includes interviewees’ experiences from all media platforms, since the focus of this study is on the
Multimethod text and discourse analysis
To process the interview and ethnographic data, this study conducted a multimethod text and discourse analysis, as in Alejandro and Zhao (2024). Thematic analysis (TA) and discourse analysis (DA) were used in parallel to identify analyzable themes and discourses in the study because, as Alejandro and Zhao (2024: 466) suggest, “DA unpacks the implicit dimensions of discourse while TA provides a systematic strategy to organize mainly explicit thematic dimensions of language.” This study intends not only to show how algorithms have imposed shadowbans as well as content moderation on the gay male community, but also to identify the sociocultural contexts in which algorithms have acted in this way. The two-round coding method proposed by Saldaña (2021) guides the TA. In the first round, process, in vivo, evaluation, causation, and holistic coding were applied to highlight gay men's subjectivity, their interactions with, and their understandings of, algorithmic shadowbans, as well as the manifestations of algorithmic shadowbans. In the second round, focused coding was used to select codes that best described the algorithmic shadowbans and the perceptions of them among gay men. In addition, the DA tools proposed by Gee (2014: 162, 189), particularly the “Social Language Tool” and “The Big C Conversation Tool,” were employed to explore the context behind some of the folk terms used by the interviewees, as well as the human-machine, social, and group dialogues relevant to specific media content collected via ethnography. The next section will present the findings structured around two main themes: “(im)permissible searching” and “(un)smooth posting.”
Findings
(Im)permissible searching
Behind the high efficiency of search algorithms, scholars have pointed out that algorithms, following their own “patterns of inclusion,” predetermine what will appear in search results long before users perform their searches (Gillespie, 2014: 169). Serving as advisors to users, algorithms determine what is most relevant to a user's search, thus granting stronger visibility to certain information in a world in which attention is up for grabs (Gillespie, 2017); similarly, algorithms filter out various identities, deciding which social identities are worth publishing and need to be visible while excluding others (Karizat et al., 2021). This study found that algorithms covertly manipulate search engines: they allow searches using keywords related to gay men and generate results, but employ the following three strategies to exclude and marginalize gay men within search engines.
I have searched for the word “出柜 (come out, pronounced ‘Chu Gui’ in Chinese),” all the results provided by the algorithm are about “橱柜 (cabinet, also pronounced ‘Chu Gui’).” No one wants to see these things if they search for “出柜.” (I3)

Once I searched for “男男性交 (male-male sexual intercourse)” on Baidu.com, but on the result page of websites, it was something like “男性和女性交往 (male-female social interaction)”; when I clicked the result page of pictures, it showed something like “stipulated by the law, there are no results.” (I6)
The third tactic appeared in my algorithmic ethnography: that of converting keywords into “(safe)
It does not mean that you cannot search for these [gay male] contents, but you cannot get something very directly related to the [gay male] topic. It will show you other content, as long as there is a little bit of relevance, but not what you want.
Search algorithms can also directly block content access triggered by keywords related to gay men. This is demonstrated by I6's experience on Blued, a gay dating app, when he tried to reconnect with a friend using the search algorithms:

His nickname is A-B-C-D-E (already anonymized, and each letter represents one Chinese character in a five-character name) and that is how I searched. Then it shows no results, there is no such person. I can guarantee he has not changed his nickname. Searching for A-B can have a bunch of results, C-D-E also the same, but when I go a little more precise—B-C-D-E or A-B-C-D—then nothing appeared.
Paradoxically, interviewees reported that gay male content whose access had been blocked by search algorithms did not disappear from the platform. I12 mentioned that on Douban, a social media platform that hosts various interest groups, he had browsed content labeled “My Gay Love Story” many times but, when he later used the label as a search keyword, he could not find it: “I am not sure if they disappeared or were blocked…But I have read it at least, no less than eight times on my homepage recommendations.” I33 shared a similar experience:

There was a [gay] group on Douban called XXXXX (already anonymized), which was really active with a large user base. But now that group is not searchable; maybe not unsearchable, but a kind of, hidden by the platform. That is, that group, if you want to access it through the search bar, you cannot find it anymore.
It is truly weird that when I searched for the word “同性恋,” videos with over 10,000 or even 100,000 likes were presented, but they were mostly created by heterosexual bloggers, or to say, knowledge-sharing bloggers and medical bloggers. (I34)

Maybe those algorithms on Zhihu are giving you some, I mean the ones that are at the top of the list, are all from those professionals. (I8)

If you search for the word “同性恋” alone, only psychiatrists and “Is homosexuality a disease?” will be shown, but there is no content created by us. (I31)
This phenomenon also resonated with my algorithmic ethnography. When I used such keywords as “同性恋,” “男同性恋,” and “gay,” the following four clusters of content mainly appeared: first, practitioners of such professions as law, psychological consulting, and medicine discussing a series of topics, such as whether homosexuality is illegal, whether a homosexual can be married, or whether homosexuality is a disease; second, a collection of non-Chinese TV and film works about homosexuality (it should be noted that some top videos use such titles as “Deformed Family Relationships,” “Fake Love between Homosexuals,” and “Everyone Deserves Love”); third, news related to homosexuality outside China, such as “A Same-sex Couple Sentenced to Caning in Indonesia”; and fourth, the everyday life of foreign homosexuals, such as “Sharing a Pair of Handsome CP (couple) in Sweden.”
That is, the Douyin algorithm tries to create a “
Overall, keyword searches related to gay men may seem functional, but they are actually manipulated by algorithms, diverting search paths, erecting barriers to accurate results, and promoting specific portrayals of gay men. Interviewees also claimed that they have been trying to break out of such confinement, for example by applying keyword combinations to improve the searchability of targeted content. However, as I6 stated, a single keyword may simply generate noise, whereas a precise combination of keywords can alert algorithms to content deemed dangerous or sensitive, leading to “no results.” A counterintuitive logic of search algorithms emerged from the interviews: searches for keywords related to gay men often lead to a paradox whereby “the more accurate the keyword, the more failures in results,” while “the more inaccurate the keyword, the more off-target the results” also holds true. Consequently, users seeking gay male content face the dilemma of being unable to use
(Un)smooth posting
As Bucher (2012: 1164) indicates, by acquiring the capacity to moderate the “visibility” of different content, algorithms have been imposing a perceived “threat of invisibility” on various content creators; meanwhile, under the operation of “platform paternalism,” platform algorithms gain a certain “moral authority” (Petre et al., 2019: 2). With the rules they set promoted, the powers they hold reinforced, and the collaboration of other content-moderation mechanisms, algorithms apply themselves to restricting content that they disfavor and promoting content that they favor (Myers West, 2018). Algorithms do this by, for example, co-forming “sexist assemblages” with human censorship to limit deviant content (Gerrard and Thornham, 2020) or granting a large volume of traffic to videos that depict the “positive energy” favored by officials (Chen et al., 2021). GSM content is still far from being favored by algorithms due to the conservative sociocultural context in China; at the same time, this content has, in turn, already brought a large volume of traffic and even economic benefits to platforms (see more about the “pink economy in China” in Liu, 2023). Under such ambivalent pulls, this study found that algorithms ostensibly still allow some gay male content to be posted; however, in interviewees’ accounts, these postings were implicitly thwarted in a variety of ways.
When posting content, interviewees reported that they had the opportunity to add words related to gay men after typing the “#,” seemingly completing the creation of hashtags. However, they then realized that the algorithmically generated menu bars—reminding them of more potentially relevant hashtags and indicating the “heat” of each—that should have appeared did not: “when I added the word ‘出柜’ or ‘公开性取向 (disclose the sexual orientation)’ to the hashtags, the menu bar showed nothing” (I29). Similarly, in conducting my algorithmic ethnography, I tried to add “#同性恋 (#gay)” to the content I posted, and the algorithmic system on Douyin immediately reminded me to “create it as a new hashtag”; however, when I had completed creating the hashtag and attempted to add it again to another piece of content I hoped to post, I received the same reminder. That is, “#同性恋” can be (pseudo-)created, (pseudo-)added, and (pseudo-)posted, and can appear outwardly consistent with other hashtags, but this is all camouflage because it is never recorded and never activated by the algorithm. In addition, when I tried to click on “#同性恋” to access the gallery of content using the same hashtag, the algorithm responded with nothing instead of jumping to that interface as it should.
Such pseudo-activated hashtags not only hinder the connection of content related to gay men but, as I34 indicates, transform it into a target, attracting more accurate and efficient censorship and moderation of such content by algorithms: “They are too straightforward, and could result in being flagged [by the algorithm] as unsuitable for recommending to the public or something like that.” Previous studies have likewise warned that hashtags are used by platforms to “police problematic posts” (Gerrard, 2018: 4494) and that posts with flagged hashtags will be dampened by the platform—“returns few or zero results”—when other users type relevant keywords into a search (Duguay et al., 2020: 246), fulfilling the mechanism of “(im)permissible searching” described above. The network of information related to gay men is thus disrupted by pseudo-activated hashtags, increasing the labor required for potential users to access relevant knowledge.
Confusion often arises when platforms request revisions to content related to gay men. As one participant (I34) noted, “It says we are ‘not suitable for sharing’ and then if you click on it, it lists dozens of rules without specifying the exact one.” This experience echoes a participant in Shen's (2023) study, who reported that Douyin explained his GSM content could not be promoted because it disrupts public order or violates social morals—vague, catch-all terms taken from Douyin's User Service Agreement. These dilemmas are not solely due to mechanical actants such as algorithms but also involve human intervention. However, as I34 emphasizes, “you have no idea the who—the machine or a human being—that restricts you.” Under such mutual cover, creators of gay male content are forced to engage in lengthy and redundant puzzle-solving tasks:

Many algorithms are still opaque, we can only observe their traffic pattern, to guess, to feel … Douyin does not just have a superficial moderation, in fact, it has a lot of hidden rules. So, it is impossible for you to predict with 100% certainty how it will push the traffic, what you can do is to follow it and to make changes! (I31)
Interviewees reported that it was easy for them to realize that algorithms had been limiting the strength of their communication after observing and comparing the magnitude of their metrics: “It was obvious that our videos were bound to get stuck at 100,000 views” (I28). Although such “getting stuck” may be temporary, I28 insists that gay male content has had algorithm-driven ceilings inflicted on it, since “when I was operating other types of accounts, the most views can be 10 million, but for now this gay man account, no videos can earn more than 100,000 likes.” I29 also complained that although he has been improving his video quality, his video views have, absurdly, dropped instead—making him believe that his account has been “flagged” by both humans and algorithms. I33 described an imagined “line”: once the views of his video crossed it, the number of views would “dive immediately.” I32 called this a “black-box operation” by platforms, which scrutinizes gay male content and hinders such content from entering the next higher-level “traffic pool” without informing the content creators.
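The metric comparisons interviewees describe are, in effect, informal audits of the kind Christin (2020b) associates with algorithmic metrics. As a purely illustrative aside, a view trajectory that stalls just below a threshold could be flagged as follows; the function, threshold, and data are invented for illustration and are not drawn from the study's fieldwork.

```python
# Illustrative sketch with invented numbers: flagging the "ceiling" pattern
# interviewees describe, where a video's growth collapses as its cumulative
# views approach a fixed threshold. Not drawn from the study's fieldwork.

def hits_ceiling(cumulative_views: list[int], ceiling: int = 100_000,
                 tolerance: float = 0.05) -> bool:
    """True if views stall within `tolerance` of `ceiling` after earlier growth."""
    if len(cumulative_views) < 5 or cumulative_views[-1] < ceiling * (1 - tolerance):
        return False
    q = max(1, len(cumulative_views) // 4)  # compare the last quarter ...
    recent = cumulative_views[-1] - cumulative_views[-q - 1]
    earlier = cumulative_views[-q - 1] - cumulative_views[-2 * q - 1]  # ... to the one before
    return earlier > 0 and recent < earlier * 0.1  # growth has collapsed

# A hypothetical trajectory echoing I28's account: rapid growth, then a hard stall.
views = [5_000, 20_000, 55_000, 90_000, 98_000, 99_500, 99_800, 99_900, 99_950]
print(hits_ceiling(views))  # True
```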
Regarding the ceiling on the breadth of communication, I3 described this type of algorithm-driven mechanism as one that does “not recommend to the external”; for example, it recommends gay male content only internally, to the followers of its creator. This directly echoes I32's complaint about his number of followers stagnating at some point: “I have been stuck at 200,000 fans for quite some time.” Other interviewees were also aware of this issue: “For gay-related [content creators’ followers], it is usually no more than 100,000, probably 50,000 to 100,000…the most, maybe the 200,000” (I11). However, a special (and the only) case arose during my algorithmic ethnography: an account that displays the daily life of a gay male couple has more than 2,000,000 followers. Through detailed observation, I believe the reason for this exception is that it has (re)produced the following discourses: (1) ambiguity—only one person's image is shown in their videos, while the other appears only as a voice; (2) dramatization—posting fun-making content based on their gay experiences, dramatizing these moments to please the audience and even the platform; and (3) neo-familism—through frequent exhibition of their mothers’ interactions and conversations with them, they demonstrate the harmony between GSMs and their families of origin, realizing the integration of neo-familism and queerism described by Wei and Yan (2021). In contrast, the account that showcases images of them appearing together, created by the same gay couple in the same period, has just over 300,000 followers.
Replicating such success is feasible: transformation through the dramatization of an account provides a pathway for breaking through the algorithm-driven ceilings placed on gay male accounts. I31 shared an impressive case: the popularity of an ordinary creator who had been posting gay male content increased sharply after he dramatized a scene in which he exaggeratedly depicted being persuaded by his parents to get married. Seizing this opportunity, he dramatized his whole account and successfully gained more than 10,000,000 followers. Subsequently, he at no point directly identified as a gay man in his content but alluded to typical stereotypes about gay men, using them as humorous material. I34 pointed out that this type of transformation, initially characterized by the dramatization of gay male content, not only circumvents algorithmic restrictions but also yields excellent metrics: “They can get hundreds of thousands of likes!” I35 also complained that although his serious content about gay men is hunted by algorithms, accounts creating “擦边 (thirst trap)” content, such as by showing muscles or bondage play, are supported by algorithms due to their high traffic potential. It would appear that algorithms and platforms recognize that dramatizing gay male content not only mitigates risks but also generates financial benefits. Placing a communication ceiling on gay male content creators has thus also become a means of pushing them toward dramatization.
Overall, although algorithms ostensibly permit gay men to post gay content, they impose numerous subtle obstacles. Despite these challenges, interviewees mentioned strategies to overcome them, with transformation through dramatization deemed notably effective. Another frequently mentioned strategy is “term appropriation,” which involves replacing sensitive words with safer alternatives to confuse the algorithm and reduce its control over gay male content. For example, I3 avoids using words like “boyfriend” in his titles, opting for terms such as “roommate,” “older brother,” or “younger brother.” I31 and I34 described these terms as “words only we (GSM) understand” and as “acronyms from the Pinyin [of a word],” such as “1,” “0,” “monkey,” “bear,” “txl,” or “telephone book.” This aligns closely with the findings of Ai et al. (2023) and Wang and Spronk (2024), who also discovered that such terminology is widely used in creators’ works. Gay male content creators may, to some extent, be able to devise strategies to circumvent algorithmic restrictions; however, the persistent and opaque content moderation by algorithms not only imposes greater labor and time costs on gay male content creators when posting content, but also appears to be effective in altering the style of the gay male content being posted.
Discussion
Algorithmic camouflage
As Myers West (2018) states, shadowbans often exist only in the suspicions of users, who are unable to present precise evidence to substantiate their accusations against such content moderation. Coupled with the opacity of black-box algorithms (Pasquale, 2015), algorithmic shadowbans are even more obscure. In addition to these globally common conditions, gay male users in China also face local circumstances: on the one hand, the algorithm does not want to give up the traffic and the accompanying economic benefits that gay male content can bring (see more in Liu, 2023); on the other, the algorithm needs to conform to social and governmental attitudes toward gay men—to be ambivalent, ambiguous, and implicit (see more in Jiao, 2021). As such, as identified above, the shadowbans imposed by algorithms on gay men in China as a means of content moderation not only have such attributes as the obscurity, lack of evidence, and softness suggested by previous studies (e.g. Are, 2022); they also take on the attribute of what I would call “hypocrisy.”
On the one hand, algorithms pretend to be fair and objective, ostensibly allowing content related to gay men to be searchable and publishable; on the other, they secretly and substantively restrict content related to gay men by such means as tampering with keywords and pseudo-activating related hashtags. I hope to conceptualize such hypocritical algorithmic shadowbans as “
The concept of algorithmic camouflage can contribute to our understanding of algorithmic opacity. Whereas previous studies emphasize the opacity of algorithmic procedures, mechanisms, and processes arising from corporate secrecy, complex technologies, and intricate knowledge (e.g. Burrell, 2016), algorithmic camouflage focuses on the opacity of the roles, behaviors, and purposes of algorithms toward specific users from an interactive perspective. That technical opacity of the algorithms themselves not only facilitates the implementation of algorithmic camouflage but also prevents users from perceiving and detecting it, thereby stealthily delegitimizing the power and rights of certain users. Similar to the “identity strainer theory” introduced by Karizat et al. (2021: 19), algorithmic camouflage underlines algorithmic control over users with specific identities deemed undesirable by the algorithms. Further, algorithmic camouflage exposes the “hypocrisy” of such control mechanisms, demonstrating how they are executed in implicit, opaque, and subtle ways.
As Simpson and Semaan (2021) suggest, algorithmic exclusion is crossing over from the limited boundaries of technology into the wider societal sphere. What this study further argues is that algorithmic exclusion may be occurring in a highly camouflaged manner, and that it also carries the attributes and tactics of the social exclusion of marginalized groups into the technological domain, promoting the complementation and convergence of such exclusion in both spheres. Whether for gay men, other GSMs, or even more marginalized groups, recognizing algorithmic camouflage and remaining aware of the “hypocrisy” of algorithms will be crucial when interacting with algorithms.
A “De-gaying” discourse shaped by algorithms
Some scholars have suggested that algorithms and humans have achieved a state of harmonious symbiosis. For example, according to Tang et al. (2022: 60), “based on the identification and domestication, humans and algorithms have reached a stable status of ‘you have me, and I have you’”—indicating the mutual integration and interdependence of the two. However, based on the empirical findings of this study, I hope to refute such arguments and reveal their heteronormativity. Gay male users have indeed become inseparable from algorithms, which mediate their public expression, sociability, and community connection in daily life (Zhao, 2023). However, the algorithmic mechanism imposes different attitudes and disguises on gay male users from those it presents to the general public, directing content related to gay men to be dehumanized, de-emotionalized, and dramatized. Thus, while seemingly preserving the “symbol” of (normative) gay men, it actually erases gay male “subjectivity,” pursuing the “de-gaying” (disassociating from gay elements) of the platform. As Fan and Ye (2018: 31) suggest, through moderating content ordering and distribution, algorithms determine what kind of information users have access to, and they thereby “serve to construct various discourses”; therefore, what I demonstrate below is how algorithms have been shaping a “de-gaying” discourse on digital platforms in China.
The first aspect is the algorithm-driven
The second is the algorithm-driven
The third is the algorithm-driven
Conclusion
This study has examined how algorithmic shadowbans have been imposed on Chinese gay men as a means of content moderation and how Chinese gay men form perceptions of such algorithmic shadowbans. The ambivalence and obscurity of algorithms are evident in their implementation of shadowbans, under which content related to gay men both can and cannot be searched for, and is posted in ways that are at once smooth and unsmooth. This study conceptualized these ambivalent but obscure algorithmic shadowbans as “algorithmic camouflage,” exposing the algorithm's ostensible pseudo-tolerance of marginalized groups and accusing the algorithm of operating with “hypocrisy.” This study suggests that a highly camouflaged “de-gaying” discourse—operating through the compositions of dehumanization, de-emotionalization, and dramatization—has been shaped by algorithms on Chinese digital platforms under hypocritical algorithmic shadowbans.
There are two points I need to clarify here. The first is that human actors also play a major role in conducting content moderation, but this paper does not deal much with human moderation, since I hope to track and reveal more about the actions of algorithms. It should also be noted that for general users (including the author) and even platform operators, some algorithmic (or, one might say, mechanical) and human actions in implementing content moderation are likely to be indistinguishable; whether we need to distinguish human from algorithmic actions, and how we might do so, can be discussed further in future research. The second is that this article does not intend to criticize any content creators who work to increase the visibility of GSMs, but it does wish to remind us all that we need to look critically at high visibility, which can easily turn into toxic visibility. Although the nuances among GSMs need further exploration, this study demonstrates that under the lures and camouflage of algorithms, GSMs may be experiencing the dehumanization, de-emotionalization, and dramatization of their media representation.
Acknowledgements
I would like to express my gratitude to my anonymous informants for trusting me, and I hope you find your contributions meaningful when you read this paper. I also wish to thank the anonymous reviewers and journal editors for their enlightening comments, as well as Jiacheng Liu (Penn State University) and Wei Wei (East China Normal University) for their assistance with research design and data collection. Special thanks to Yechi Wang (LSE)—your encouragement and daily companionship gave me constant strength to complete this work.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Ministry-University Co-construction Project at the College of Arts and Media of Tongji University (grant number BXGJ-2024-C13, BXGJ-DL-8).
