Abstract
This article examines how ‘pleasing the algorithm’, or engaging with algorithms to gain rewards such as visibility for one’s content on digital platforms, is treated from a moral perspective. Drawing from Harré’s work on moral orders, our qualitative analysis of Reddit messages focused on social media content creation illustrates how so-called folk theories of algorithms are used for moral evaluations about the responsibilities and worthiness of different actors. Moral judgements of the actions of content creators encompass ideas of individuals and their agency in relation to algorithmic systems, and these ideas influence the assessment of algorithm-pleasing as an integral part of the craft, as condemnable behaviour, or as a necessary evil. In this way, the feedback loops that arrange people and code into algorithmic systems inevitably make theories about those systems also theories about humans and their behaviour and agency.
Introduction
‘Pleasing the algorithm’ encapsulates a core concern of content creators, who constantly have to tackle the problem of acting in ways that reward visibility on digital platforms (e.g. Cotter, 2019; Glatt, 2022; Savolainen, 2022). Yet, the phrase is also fundamentally vague: while it highlights how datafied audiences are intrinsically interwoven with the behaviour of recommendation systems, precisely what or whom the ‘algorithm’ refers to – and what or who is being pleased – might change with a speaker’s aim or perspective (e.g. Gillespie, 2016; Ziewitz, 2016). As such, ‘pleasing the algorithm’ can, for example, be used in a derogatory way to paint the pleaser as someone who submits to the whims of digital platforms or in a more neutral, technical sense to discuss how these platforms work. Tracing how the phrase is invoked reveals the degree to which the ways people make sense of the interrelations of humans and technologies in the context of algorithmic systems are full of morally laden judgements.
In this article, we take these judgements as our point of departure and scrutinize how pleasing the algorithm is treated from a moral perspective in Reddit messages focused on content creation and social media. Our analysis brings to light an under-researched and under-theorized dimension of algorithm talk: that is, not how individuals practically or technically decode algorithmic systems, but how they discursively make sense of how people engage with these systems, and how such processes are often characterized by a strong evaluative component. We illustrate how technical features of algorithmic systems become layered with fundamentally human ones, even with concerns at the very core of humanness: the acceptability of conduct, agency and the responsibility individuals have for their actions. We approach this dynamic using the concept of a moral order, which we consider ‘a collectively maintained system of public criteria for holding persons in respect or contempt’ (Harré, 1987: 222). This framework allows us to show how the moral status of pleasing the algorithm depends on the moral order against which it is evaluated and how a focus on agency reveals that when people talk about algorithmic systems, they also talk about what is acceptable in different contexts: that is, about the moral worth of individuals.
Our investigation of the interplay of human forces and technological systems slots into two research streams on human–machine relationships. First, we respond to recent calls to re-humanize algorithmic systems (Pink et al., 2022; Ruckenstein, 2023; Ruckenstein and Turunen, 2020), which emphasize the need to establish humans as critical agents in relation to algorithmic systems. These ‘dynamic arrangements of people and code’ (Seaver, 2019: 419) are not simply more or less autonomously operating technologies but are entangled with socio-cultural relationships and contexts in consequential ways (Pink et al., 2022). Our contribution to the re-humanization agenda begins with the view that when people consider acts of pleasing the algorithm, ideas about how algorithms work become interwoven with understandings of humans who create, use or otherwise encounter them; conceptualizations or theories about algorithmic systems are not simply ‘folk theories’ (e.g. Eslami et al., 2016) about the technical operation of algorithms but often also ideas about what humans are.
Second, and relatedly, our approach based on moral orders contributes to discussions on human agency in relation to algorithmic systems. Agency has become a key problematic in the literature (e.g. Rydenfelt, 2022; Siles et al., 2023); the concept relates closely to ideas of responsibility (Semin and Manstead, 1983), which is another notably thorny subject in these scholarly debates (e.g. Orr and Davis, 2020; Popova et al., 2024). Depending on approach and context, research has by and large highlighted individuals either as pressured or controlled by powerful technologies or as active agents in relation to them. Attention is called to the dimension of control by highlighting how platforms push content creators to publish increasing amounts of content (Gregersen and Ørmen, 2023) or how the data economy has made manipulating choice architectures possible by using data harvested from individuals’ online activities (Yeung, 2017). Others focus on presenting individuals engaging with technologies as agents who use and manipulate them for their own goals (Cotter, 2019; Haapoja et al., 2020; Savolainen and Ruckenstein, 2024).
While the distinction between individuals controlling or being affected by algorithmic systems is not a simple binary matter, it echoes decades of social science debates: in psychology, for example, it appears in discussions about whether individual behaviour is causally determined or agentic (e.g. Harré, 1993). Similarly, sociological debates are marked by the agency–structure debate (e.g. Fuchs, 2001). Our intention is not to take a stance in these long-standing discussions; instead, we draw attention to how the feedback loops that connect people and code and arrange them into algorithmic systems make theories about those systems always also theories about human behaviour and agency. As we illustrate, both the agency of individuals and causal forces affecting what people do are evoked in descriptions of human–algorithm relations. From the perspective of moral orders, how agency is presented in these descriptions is crucial; different constructions of agency – or its absence – govern respect or contempt, thereby managing responsibility in discussions about pleasing the algorithm. This enables a consideration of how theories about algorithms and human connections with them do not simply invoke different claims on human–technology relationships but are themselves invoked to achieve something such as blaming or justifying conduct.
From folk theories to moral orders
People’s descriptions of how algorithmic systems work are often treated as reflecting their underlying beliefs about those systems’ technical operations. A particularly influential concept in this regard is ‘folk theories’, or working understandings that frame user expectations and guide user behaviour, with consequences for satisfaction, trust and a sense of control (DeVito et al., 2017; Eslami et al., 2016). A burgeoning literature maps and typologizes folk theories of algorithmic systems in specific contexts, with a focus on operational aspects (e.g. Siles et al., 2020), such as the logics applied to data in content recommendation (DeVito et al., 2017; Eslami et al., 2016).
Recent work on everyday engagements and perceptions of algorithms has taken into consideration how values, identity and emotions colour perceptions and beliefs about technology (Bucher, 2018; Cotter, 2022; Karizat et al., 2021; Lomborg and Kapsch, 2020; Ytre-Arne and Moe, 2021). For instance, rather than approximating an objective technical reality ‘out there’, folk theories can be viewed as discursive and social constructs deployed to achieve different things (see Edwards and Potter, 1993). Nader and Lee (2022), for example, describe how users of dating services uphold their sense of self-worth by attributing their negative experiences to algorithms rather than their own desirability. Meanwhile, Lomborg and Kapsch (2020) and Cotter (2022) demonstrate how engagements with algorithms can be understood as value-laden and ideological.
However, mobilizing the folk theory concept can all too easily end up reproducing a human–machine divide in the sense that algorithms are depicted as possessing causal and scalable power, while the agency of people is located primarily in locally interpreting or resisting algorithms’ outputs. By contrast, we turn our attention to the moral and evaluative dimension of the very theories deployed to describe or understand how algorithmic systems work. We aim to highlight that these notions are inseparably interwoven with understandings of human agency. This, we contend, calls for paying attention to exactly how algorithm-related theories are deployed to imply active human involvement and to suggest responsibility and worthiness.
Discussing the ubiquitous moral dimension of language use, Drew (1998) points out that all reports of conduct, whether one’s own or others’, are incomplete and selective and designed for local interactional purposes; as such, they should ‘always and irretrievably be understood as doing moral work – as providing a basis for evaluating the “rightness” or “wrongness” of whatever is being reported’ (p. 295). Folk theories about algorithmic systems always include reports of conduct, such as implicit or explicit ideas of the actions and goals of the systems’ designers or reasoning for the ways people engage with them, so they are never devoid of this moral work either. Attributions of responsibility included in these reports can be approached as actions carried out to manage the moral status of individuals (Edwards and Potter, 1993). This management often includes descriptions of agency, and having agency relates to being responsible for one’s actions (Semin and Manstead, 1983). From this perspective, agency is ‘a discursive, political and moral concept’ (Olakivi and Niska, 2017: 25) deployed to manage responsibility.
Mundane discourse is so full of moral evaluations that little attention is generally paid to that fact (Bergmann, 1998), which suggests why moral aspects have received limited attention in folk theory research. Yet, a focus on the moral considerations tied to algorithmic systems is not completely novel. Earlier studies have taken particular notice of the moral criticism levied against platform companies. For example, folk theories are utilized to blame platforms for suppressing certain opinions while allowing and amplifying the expression of opposite views (Riedl et al., 2023). Karizat et al. (2021) document users criticizing TikTok for how its algorithms suppress content related to social identities involving race or physical appearance. Yet, users also criticize one another on moral grounds, as illustrated by Cotter’s (2019) study depicting how social media influencers may scrutinize their colleagues attempting to succeed by manipulating visibility metrics.
Beyond social media, Ziewitz (2019) discusses the negotiation of boundaries between acceptable conduct and manipulation, or ‘gaming the system’ in the field of search engine optimization, illustrating how what is illegitimate gaming of search engines ‘is not given in advance, but needs to be established, navigated and negotiated in specific situations’ (p. 723). Moral evaluations are also engaged in by those who design and build algorithmic systems; they may at times accept responsibility for what their creations do in the world but may also shift blame to other actors, including users (Orr and Davis, 2020; Popova et al., 2024). The purported difficulty that designers face in understanding and controlling their creations, which continuously learn from complex, datafied user behaviours (e.g. Ananny, 2023), can also serve to protect developers from blame and moral responsibility.
Expanding on these insights positioning algorithmic systems within the moral sphere, and beginning from the starting point of language as a field of moral evaluations, we build our approach on Harré’s (1987) conceptualization of moral order, which
includes a collectively maintained system of public criteria for holding persons in respect or contempt and rituals for ratification of judgments in accordance with these criteria. The moral value of persons and their actions are publicly displayed by such a system. It is realized in practices such as being deferential to someone or censuring someone, by trials, by punishments, by insults, by apologies and so on. (p. 222)
Moral orders can thus be understood as systems of meaning which can be drawn from to argue whether someone is worthy or unworthy: that is, as a way of managing moral identities. Notably, several such systems can coexist, so that the same action can be commendable in one system and contemptible in another (Harré, 1983, 1993). This points to the existence of different and contrasting moral orders that are negotiated and applied situationally (Van Langenhove, 2017); as we demonstrate, this is also true for evaluating acts of pleasing algorithms.
This perspective resonates with approaches that treat morality as a social phenomenon and argue that both the ways of justifying one’s actions and the ideas of which actions require justification are standardized and shared (e.g. Semin and Manstead, 1983). For example, in their theory of justification, Boltanski and Thévenot (2006) identify reasonably stable moral repertoires that actors mobilize to justify their chosen course of action. This relates to the need to provide ‘accounts’ or justifications and excuses (e.g. Scott and Lyman, 1968) that seek to maintain or repair one’s moral status or the status of someone whose reputation matters: for individuals, moral orders come to matter when there are reasons for upholding reputations and a sense of self-worth. Failure in reparative actions may not only lead to feelings of unworthiness and shame but bear upon individuals’ future prospects, as the wider community may shun those deemed morally flawed (e.g. Harré, 1993).
Harré’s formulation of moral orders shares an affinity with Erving Goffman’s perspective on morality, in which individuals are considered to be preoccupied with maintaining respectable identities and norms become a moral concern when used to categorize people in positive or negative ways (Bergmann, 1998). Moral orders extend from classifying individuals and their actions as simply good or bad to encompassing a broader perspective of a person’s worth, primarily centred on respect and contempt (e.g. Harré, 1983). The approach therefore encourages the identification of different ways people argue for worthiness, respect or contempt as they describe the behaviour of others or themselves in relation to algorithmic systems. Sometimes these moral considerations are overtly expressed evaluations. They may also be subtle but still perform moral work in Drew’s (1998) sense by being selective and thus providing a basis for evaluation.
Data and methods
Below, we analyse attributions of responsibility as negotiations of the moral status of acts of pleasing the algorithm in a sample of Reddit messages. On Reddit, individuals (known as ‘Redditors’) submit content such as text, links and pictures and discuss them with others. Reddit is organized into subreddits that serve as discussion boards for specific topics and interests. Reddit proved a suitable platform for discovering conversations about the topic under study: for example, in our dataset, one prominent subreddit was NewTubers (https://www.reddit.com/r/NewTubers/), which calls itself a ‘Premiere “Small Content Creator” Community, created to allow up-and-coming channels to improve with resources, critiques, and cooperation among tens of thousands of peers!’ and is focused on video content platforms such as YouTube and Twitch.
We collected 409 unique Reddit messages posted in 2022 with at least one of the following phrases in singular or plural form: ‘pleasing the algorithm’, ‘please the algorithm’, ‘pleasing algorithm’ and ‘please algorithm’. Message length varied between 4 and 1016 words, with an average of approximately 94 words. We used the PushShift API to search all of Reddit during January and February 2023. We selected ‘pleasing’ as a key search term because it regularly appears in user lingo, as documented by academic research (Glatt, 2022; Savolainen, 2022) and reflected in digital culture more generally (e.g. ViralMango, 2021). Other vernacular phrases, such as ‘beating’ or ‘gaming’ the algorithm, might have offered a different view into the relationships between people and technology. However, our search conditions revealed rich and multidimensional algorithm talk that indicated various practical and normative stances vis-à-vis algorithmic systems.
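The phrase-matching step described above can be sketched in a few lines. The four phrases are taken from the text; the matching details here (case-insensitive search, an optional plural ‘s’ on ‘algorithm’, whitespace-based word counts) are our illustrative assumptions, not a reconstruction of the authors’ actual PushShift query.

```python
import re

# Search phrases listed in the article; we allow singular and plural
# forms of "algorithm" and match case-insensitively (an assumption for
# illustration, not the authors' exact query).
PHRASES = [
    "pleasing the algorithm",
    "please the algorithm",
    "pleasing algorithm",
    "please algorithm",
]

PATTERN = re.compile(
    "|".join(re.escape(p) + r"s?\b" for p in PHRASES),
    re.IGNORECASE,
)

def matches_search(text: str) -> bool:
    """Return True if the message contains any target phrase."""
    return PATTERN.search(text) is not None

def word_count(text: str) -> int:
    """Crude whitespace-based word count, as one might use for the
    corpus statistics reported above (range and mean message length)."""
    return len(text.split())
```

A filter like this would retain a message such as ‘I’m so tired of pleasing the algorithms’ while discarding one that merely mentions algorithms without the pleasing framing.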
Our initial reading through the data made clear that the messages often discussed specific content creators and their algorithm-related actions. Other messages offered advice or feedback directed to someone creating content or aspiring to be a content creator, typically on social media, but sometimes in online marketplaces such as Amazon. At times, the messages simply discussed the general state of content creation and digital culture. As our analysis progressed and we discussed the material and our impressions, we noticed a moral tone in many messages. This appeared to shed light on a dimension of everyday algorithm-related discourse that has lacked theoretical exploration. The messages deploying the notion of pleasing the algorithm were not simply decoding the practical or technical aspects of algorithmic systems but also focused on evaluating, on moral grounds, how people engage with algorithms. This insight further guided our analysis. We thus began with a broad, inductive approach before homing in on specific phenomena present in the material.
In the analysis, we sought to determine the different moral orders that messages presuppose: that is, the context in which messages find their relevance. In practice, this involved identifying the moral meaning ascribed to acts of pleasing algorithms and the related roles and responsibilities of different actors. Below, we illustrate the discursive resources people employ to assign blame or maintain respectability concerning algorithms and their effects. We thus operationalize moral orders as a kind of interpretative repertoire or culturally shared ‘resources for making evaluations, constructing factual versions and performing particular actions’ (Potter and Wetherell, 1995: 89). As may already be clear, we do not view ‘pleasing the algorithm’ as a stable, fixed or objective phenomenon but as a discursive one. Tracing the phrase allowed us to access morally laden depictions of algorithmic systems and the roles that humans play in them.
We have sought to strike a balance between transparency of analysis and ethical treatment of the data we collected. Since Reddit messages are available online, many of the messages could be found and the Redditor in question identified with search engines. While Reddit has since limited its API, datasets containing Reddit content still circulate online, so our dataset could be replicated. To protect the anonymity of the Redditors we quote, we edited and paraphrased the message excerpts, and at times we resort to only describing the messages: both approaches are common and preferred ways to protect users in Reddit research (Fiesler et al., 2024; Proferes et al., 2021). We have carefully aimed to capture the original meaning of the messages to make the basis of our interpretations transparent. In addition, we removed all references to individuals, as we do not consider them consequential for this analysis.
Findings: moral evaluations of pleasing the algorithm
A former colleague admitted to creating such clickbait videos. He said he just takes someone’s reaction video [. . .] slices it and adds a couple of meme clips to make it over 10 minutes long to please the algorithm and posts it. It’s apparently quite profitable, but not enough to restore the respect I lost for him.
The above example illustrates how tracing the discursive construct of pleasing the algorithm exposes morally laden descriptions of algorithmic systems, their human elements and their social impacts. Reddit users typically categorized certain acts as pleasing the algorithm: using thumbnails that include faces, participating in trends, creating ‘clickbaity’ titles, focusing on maximizing engagement metrics and ensuring a certain video length, as in the example above. Rather than an objective or well-established phenomenon, though, the phrase could refer across situations and Reddit communities to different kinds of actions. Notably, similar actions could be interpreted differently: they could cause contempt or be justified or even respectable. Thus, in the following sections, we illustrate how different moral orders – ways of allocating contempt or respect – are constructed by managing responsibility when talking about pleasing the algorithm.
We distinguish three moral orders: in the first, acts of pleasing were discussed as part of content creators’ craft and thus morally acceptable, with most of the responsibility placed on the shoulders of the audience. In the second, pleasing the algorithm was cast as condemnable manipulation, with a focus on chastising content creators. Finally, pleasing the algorithm was considered potentially detrimental to both creators and audiences but still justifiable because the content creator role requires it, indicating inculpability on the part of creators.
Pleasing the algorithm as part of the craft
In the first moral order, we identified acts of pleasing the algorithm that were constructed as morally acceptable. One way of constructing this acceptability was to define algorithms as representing the tastes of the audience and acting as extensions of the audience’s intentional actions. From this starting point, which can be viewed as a type of folk theory, pleasing the algorithm becomes part of a skilful content creator’s repertoire; pleasing the audience, as a content creator should, also pleases the algorithm.
This acceptance of pleasing algorithms was common in advice messages that either implicitly or explicitly positioned the author as knowledgeable about what being a content creator entails. As one Redditor indicated, it is impossible to please the algorithm, but creators can please their audiences, and as they win over audiences, they also win the algorithm. The Redditor continued:
When viewers are happy, they engage with your content and boost your metrics and do the sharing for you. When there are enough happy viewers for your videos, all new content you create will boost the algorithmic rankings.
Another message listed several factors to which a content creator should pay attention, including not only content quality but also various elements that were argued in other contexts to be signs of pleasing the algorithm. These include thumbnails and catchy titles, which the Redditor identified as ‘attention grabbers’. This highlights how these elements are treated as aimed at garnering the audience’s attention and inviting them to view one’s content, not at a technical system.
In this moral order, content quality is constructed as the determinant of one’s success and – because quality is something the audience judges and appreciates – succeeding itself becomes respectable and a sign of quality; achieving success therefore becomes inherently commendable. Although quality was not always explicitly referred to as a concept, it was readily apparent that when pleasing the algorithm was considered morally acceptable, producing quality content was inextricably linked to satisfying and engaging the audience.
From the perspective of moral order, this emphasis on quality achieved two things in relation to agency and responsibility: first, the content creator who succeeds is painted as someone who truly earns it, as an agent who has proven worthy of success. Second, the audience is also constructed as active agents who deliberate and choose between content rather than passive beings controlled by platforms and content creators through recommendation algorithms and frivolous tricks. When the audience enjoys content, they engage with and share it, which is reflected in the way the audience’s actions boost the metrics of the platform: that is, the algorithm rewards those that the audience rewards.
While constructing acts of pleasing the algorithm as morally commendable thus required success with algorithms to be aligned with the audience’s tastes and preferences, it was not necessary to collapse the categories of algorithm and audience. The algorithm could be treated as a separate entity that has to be considered – and pleased in this sense – but success is still earned based on pleasing the audience’s tastes. Rather than consisting of mere tricks played by manipulative content creators, pleasing the algorithm was a core part of the craft that needed to be mastered: ‘Publish content consistently on relevant channels and design it for what your audience values. No need to overproduce and content can be reused as reels and so on to please the algorithms’.
Even with this variation – the algorithm as an extension of audience taste and deliberation, or as an entity separate from those factors – the moral order at play here is built on the assumptions that the audience is in control and that its decisions and agency are respected. The audience judges who succeeds and who does not. While content creators have agency over their success, they ultimately rely on their capability to please the audience rather than on figuring out how some technical system works. In a sense, illegitimate ways of succeeding cannot exist from this perspective. The example below shows how this idea can be used to challenge a version of events that places responsibility for who succeeds on the decisions the platform company has made regarding the algorithm rather than on the audience:
I don’t exactly recall what [content creator] stated, but didn’t they claim that shorter, under 20 minute videos please the algorithm and gather more views? It’s not exactly true. It seems like a distraction from the real reason, video quality. Looking around YouTube, there are tons of long videos with over 100k views.
The message shifts the blame for being unsuccessful to the content creator who is simply not good enough, and what is good enough is determined by the audience rather than the algorithm. Here, the moral order under scrutiny is drawn upon to show contempt for someone who has accounted for a lack of success with a certain theory of how the algorithm grants visibility, illustrating how different versions of algorithmic systems are used and challenged in interactions.
As it puts the onus on human audiences, this moral order allowed Redditors to absolve even the platform companies from responsibility for their design choices, including the kinds of content their algorithmic systems favour. Here, this is done by linking the audience’s preferences to the competition that platforms are facing:
I understand the frustration of users disliking videos and longing for the old days, but Instagram has to push reels to survive. In social media people prefer different things compared to a few years ago. The majority prefer video content compared to photos. This is why TikTok is so popular, it’s hard to challenge that with pictures. Instagram serves its interests by pushing reels, the data says people want them.
This sympathy towards a platform company shows that it is possible to craft versions of events that shift much of the responsibility for how a platform’s recommendation algorithm works to factors external to the company – in this case, it is social media users who direct the actions of platform companies, not the other way around. This example also emphasizes the knowledge the company has, making it appear to be a rational actor doing what is needed to survive, regardless of what some users may prefer.
Pleasing the algorithm as condemnable
In contrast to the first moral order, pleasing the algorithm was also treated as morally condemnable. Our material included several ways to construct this position: Redditors might cast the act of pleasing as audience manipulation or describe audiences as being drawn to low-quality content, a trait of which some content creators then took advantage. Other examples highlighted the negative consequences of pleasing the algorithm for the audience, other content creators and/or digital culture more generally. This criticism tended to treat algorithm-pleasers as greedy or otherwise morally flawed. One Redditor, for example, painted ‘YouTubers’ as only caring about ‘pleasing the algorithm’, referring to a particular example where a content creator arguably tactically offered audiences ‘bullshit’ to increase views instead of caring for their community of followers.
Treating the audience as simply reacting to external forces rather than as acting intentionally made it possible to criticize acts of pleasing the algorithm as manipulations driven by greed or other morally dubious motivations. Agency and thus responsibility were attributed to content creators, as one Redditor expressed it:
People can be ‘hooked’ for profit. Videos are often designed for monetization by pleasing the algorithm. Getting hooked by your brain’s dopamine system is easy, like catching a cold if the immune system cannot resist it. The minds of humans are ill-prepared for the combination of greed and artificial intelligence hacking our brain chemistry.
This example captures two aspects that make pleasing the algorithm condemnable: first, the audience’s behaviour is explained as causally related to – if not determined by – certain kinds of content, a claim relying on a psychological theory of individuals as open to manipulation; second, algorithmic systems are viewed as providing opportunities and profit for those who can exploit recommendations of popular and attention-grabbing content. Here, the responsibility is assigned to greedy creators who hook people for money.
‘Impure’ motivations such as greed were contrasted with creating content that is somehow significant, whether because of its originality or its creator’s passion for the subject. This contrast could be used to praise those not viewed as focused on pleasing the algorithm: ‘[he] is among the remaining interesting and sincere reviewers on YouTube. He just recommends the game or not without clickbait. He does in-depth, long reviews, rather than just over 10 minutes with a dumb thumbnail for pleasing the algorithm’. Here, a distinction is made between this content creator, who is asserted to be honest, and less praiseworthy actions others take to please the algorithm. Thus, resources used to make pleasing the algorithm morally objectionable can also be used to argue for the worthiness of those content creators whose actions are framed as something else – notably, something more proper or ‘right’.
Moral tones were also present in messages that took a stance on the general state of content production and digital culture:
Short videos keep you staring at the screen, and so social media platforms decided to pivot to them from pictures that users have learned to have a measure of self-control over, and from longer videos created in smaller numbers since they require effort to make and commitment to watch. Influencer wannabes then reacted to the algorithm, as becoming an ‘influencer’ requires pleasing the algorithm.
Pleasing the algorithm was condemned not only based on accusations of greed or otherwise impure motivations of those engaging in the practice but also by highlighting its undesirable outcomes. Frequently mentioned negative consequences for the audience included the homogenization of content and the consequent lack of diversity. Pleasing the algorithm was also described as immoral when it involved generating engagement through shock value – for example, by a content creator doing something dangerous, which could have negative effects on both audiences and content creators themselves. In addition, algorithm-pleasers were described as harming the content economy. Legitimate content creators – that is, those who created content for ‘correct’ motivations such as helping others and simply enjoyed putting genuine effort and real thought into content creation – were forced to compete with those who were merely pleasing the algorithm and exploiting the system. The distinction between legitimate content creators and algorithm-pleasers reflects the management of moral identities of worthy and unworthy creators, implying that those focusing on algorithm-pleasing tactics lack a right to enjoy success, visibility or fame.
Pleasing the algorithm as a necessary evil
This third moral order casts acts of pleasing the algorithm as negative in their consequences but morally acceptable because content creators are simply victims of circumstance. Pleasing the algorithm was considered a necessary part of being a successful content creator, and responsibility for this necessity was placed elsewhere: with the platforms, which hold the power to define the requirements for visibility, whether in algorithmic systems or even among human audiences.
One strand of discussion related to this moral order revolved around the concept of burnout. Creators were described as burning out due to the need to please algorithms and thus act for externally motivated reasons, in contrast to being driven by internal motivation, which was described as more sustainable. This approach was used to argue that creators are forced to please the algorithm and personally suffer for it, casting them as victims. A central reason for burnout was that achieving success in terms of viewers and engagement was argued to demand posting new content very frequently, which was described as smothering one’s creativity and ability to express oneself freely:
The need to please the algorithm is the problem. Publish content often enough, get views from the right audience, gather engagement. Fail in this and you’re at the bottom. Social media should be about presenting yourself however you want, but now people burn out attempting to act correctly for visibility without getting it.
Notably, while the blame for burning out content creators was placed on how the algorithmic system worked, this was not solely described as the responsibility of the platform companies; part of the responsibility could also be attributed to the human audience. According to one Redditor, that audience expects content creators to act like machines and, when displeased, simply moves on to the next creator, who is then also in danger of burning out. Similarly, while blaming platforms was a common way of arguing why certain types of content flourish, some of the responsibility for content being undesirable – socially detrimental, low-effort or generally negative – could also be attributed to the audience. In the following example, a Redditor reports having tried to create more positive content, but human nature revealed itself through the metrics of audience engagement and tied their hands:
Sadly, pleasing the algorithm is generally necessary for gaining a following, and it requires engagement. And engagement necessitates affinity and interest. Building affinity produces and re-creates a community. Problematically, we have biases regarding what we prefer to click and consider important. The worst things tend to draw our attention. [. . .] The aims are sometimes amiable: I try to gather engagement and interest with videos focusing on non-partisan topics, but holding individuals engaged calls for attention. [. . .] With its shades of gray, reality is flawed instead of absolute. And people dislike it immensely, so we are biased towards absolutes.
In this instance, part of the responsibility for demands imposed on content creators lies with the audience, or ‘human nature’, but the blame was also placed on platforms for amplifying the affinity of humans for negatively affective content. The human audience with its interests is thus difficult to separate from ideas about the technical reality of recommender systems, underlining how folk theories of algorithms are intertwined with the agency of the humans who interact with them. The example also portrays its author as a moral actor and skilled content creator who tries to produce content on non-polarizing issues but also needs to please the algorithm to gain attention.
Justification of acts of pleasing the algorithm could also be achieved by highlighting extenuating circumstances: the genuinely respectable content that a creator has produced, their need to make a living and other potential ways of arguing that they are virtuous despite their actions. Here, Redditors tended to indicate sympathy towards or respect for a specific content creator, as in this example:
He has the right to please the algorithm with as much clickbait as he wants. I have subscribed to him from early on and have always admired him. He hasn’t bought expensive real estate or cars and has not begun to create boring videos discussing his success and wealth. He’s loyal to his origins. He put his wealth wisely towards his company and employed dozens of people to share his success. So even if clickbait is repulsive and irritating, if he does it, I know there is still content I relish.
Here, both the character of the creator and the evaluation of his content are used to justify why it is acceptable to use clickbait. This type of talk retains creators’ moral worth and casts them as genuinely agentic: in this case, as actors who do not squander the fruits of their labour but rather share them. In another message, the responsibility is shifted from the same content creator to ‘humanity’ in general, a reference to the audience:
People simply click thumbnails with faces on them more often. He has discussed it too: he says taking ‘reaction pictures’ makes him feel stupid and he dislikes it, and he needs his people to create these thumbnails. However, according to him, thumbnails with a face increase views by a fifth. So instead of him, humanity is to blame.
By describing the content creator as unwilling to participate in algorithm-pleasing practices and then explaining the rationale for engaging in them anyway, the blame is shifted to the audience, and the moral worth of the content creator is retained. In addition, the content creator is described as being open about these practices and their purpose: confessing that one has a stake in an issue can serve to show that a content creator is not trying to dupe anyone and is honest (Potter, 1996). These types of methods of crafting descriptions depict content creators as victims of structural matters while at the same time highlighting them as moral agents who either have earned the right to please the algorithm or only do so unwillingly.
Discussion and conclusion
Our analysis shows that algorithmic systems coexist with contradictory moral orders that individuals uphold and draw on to distribute respect and blame and that descriptions of these systems are rife with moral evaluations of agency and responsibility. In the three moral orders outlined above, acts regarded as pleasing the algorithm may be treated as contemptible, acceptable or in some cases respectable. Reasons as varied as opportunism, greed, willingness to serve the audience and the necessities of the daily struggle to make a living as a victim of a hostile structure all bear on the acceptability of these acts. The analysis underlines that moral orders – in Harré’s sense of systems of meaning that arbitrate worthiness and unworthiness – represent not only the criteria for determining which actions are acceptable or laudable but also the situations in which that praise can be applied (Van Langenhove, 2017). Indeed, the moral orders hinge on how participants in the situations described in the messages were discursively allocated agency; for people to be responsible for something, they need to be genuinely agentic.
In the first moral order, pleasing the algorithm was considered a morally acceptable act, an integral part of the craft of content creation. This moral evaluation relied on the idea that success, visibility and fame are determined by the human audience, with algorithms simply extensions of the audience’s tastes. Alternatively, algorithms can mediate the audience’s tastes indirectly; when algorithms are interacted with correctly, they can support a content creator’s attempts to gain access to the human audience. Either way, it is ultimately not the algorithm but the human audience’s reactions, preferences and even whims that content creators – and the digital platforms that design and build algorithmic systems – need to please.
The second moral order, by contrast, depicted acts of pleasing the algorithm as condemnable. The Reddit messages operating within this moral order drew from different and potentially complementary resources: their authors might emphasize that content creators exploited the audience’s vulnerabilities through algorithms, or they might state that creators manipulating algorithms are gaining an unfair advantage in competition. From the perspective of the moral order, the result is similar: pleasing the algorithm merits contempt. Justification for showing contempt implies that pleasing the algorithm is condemnable because it hurts other content creators; if no one exploited or manipulated the system, visibility could be gained by fair means.
The third moral order was more ambiguous. Depicting pleasing the algorithm as a necessary evil enabled justifying acts of pleasing, casting content creators as victims of circumstances and relieving them of responsibility while simultaneously accepting that pleasing the algorithm could have negative effects. Accounts drawing from this moral order protected content creators as morally worthy individuals, distancing them from the role requirements (Goffman, 1972) enforced on them. While appealing to role demands is a common way of justifying actions (Edwards and Potter, 1993), for that approach to be plausible, the role must be defined in a way that allows for shifting responsibility. In the context of the third moral order, responsibility for pleasing the algorithm can be dispersed across the whole system: platforms, audiences and algorithms. While different accounts might emphasize the platforms’ responsibility over the audience’s or vice versa, blamelessness on the part of content creators is at the core of this moral order.
Reflection in terms of moral orders offers two novel perspectives on folk theories that describe the workings of algorithmic systems. First, analysed in terms of moral orders, folk theories can be understood as assessments of responsibility rather than just understandings of technology. From this perspective, folk theories concern the judgement of acts that take advantage of algorithmic systems; these theories are crafted and mobilized not only to explain technical operations but also to do things such as justify or condemn actions of people interacting with and making use of algorithms. Second, and simultaneously, folk theories contain notions about people's role as audiences or targets of algorithmic systems. As feedback loops connect people and code, arranging them into what Seaver (2019) calls algorithmic systems, it becomes difficult to separate human audiences from the technical realities of those systems. Our examination of how the notion of pleasing the algorithm is invoked highlights how, on digital platforms, not just humans but humans tied in with algorithmic systems are the ultimate targets, recipients and audiences of content. Theories about algorithmic systems are therefore inevitably theories about human audiences' behaviour and agency in areas such as deliberate choice-making, reactions to impulses and being misled. The very nature of algorithmic systems as inseparable constellations of humans and technology brings with it perpetual uncertainty regarding where to locate the causes of and thus responsibility for events. This enables the construction of different accounts of how the systems work and, consequently, contradictory evaluations of agency.
At its core, this perspective towards folk theories is, we suggest, a means of re-humanizing algorithmic systems, or establishing humans as consequential agents in relation to them. Our analysis shows that people use different conceptualizations of algorithmic systems to assess and argue whether something or someone should be blamed for what happens, and who or what that is. For this 'blame negotiation' (Edwards and Potter, 1993), explanations of what human actors are in relation to algorithms and why they act as they do are as important as the technical descriptions given. As blame can be assigned using certain discursive versions of events, responsibility can equally well be shifted by alternative versions. Riedl et al. (2023), for example, describe how both anti-abortion and pro-choice activists blame social media platforms for suppressing their opinions: despite their polar-opposite beliefs, both blame platforms (and by extension their algorithms) rather than a lack of engagement from the audience. Similarly, Cotter (2019) shows how explanations of what influencers do with algorithmic systems are not just descriptions of events but are also mobilized to signify whether or not relationships with audiences are proper. Our results demonstrate how blame might be assigned differently depending on who or what is stated to have agency in the situation: that is, how the system becomes humanized. Content creators aiming to please the algorithm, for example, could be responsible for duping users or be victims of the system themselves.
Moral orders, and more generally a focus on morality, can help broaden the view of ethically relevant acts and actors in the digital landscape. Moral orders afford an understanding of accountabilities and social norms that are beginning to condense around the use of algorithmic systems, even as they are distributed in nature and shaped by various agents and acts (Seaver, 2019). The bulk of contemporary ethical discussion focuses on the responsibilities of those creating algorithmic systems (but see, e.g. Orr and Davis, 2020); however, since these systems mediate and facilitate social cooperation, what is required is careful consideration of the rights and responsibilities of different stakeholders, including creators and consumers. As we have illustrated by distinguishing between three distinctive ways the same actions with recommendation algorithms can be evaluated morally, the focus on moral orders makes it clear that perceptions of rights and responsibilities may conflict with one another in everyday practice.
As increasing parts of everyday human life become entangled with different algorithmic systems, different conceptualizations of humans, their agency and their motives in relation to these systems will also become relevant building blocks for doing things with language. In terms of re-humanization, we suggest paying attention to the different and often conflicting ways in which individuals humanize algorithmic systems and especially the purposes for which they use these ways of humanizing: as our example of pleasing the algorithm shows, focusing on the ways algorithmic systems are humanized in everyday talk opens a view on the role folk theories play in the negotiation of such fundamental issues of social life as the moral worth of individuals.
Acknowledgements
The authors would like to thank the anonymous reviewers, members of the Datafied Life Collaboratory and the Helsinki Social Computing Group and Airi Lampinen for their invaluable comments on various stages of this work.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work has been supported by the Helsingin Sanomat Foundation grant ‘Pleasing the Algorithm’, the Kone Foundation grant ‘Digital Ideologies’ and the ‘REPAIR’ project funded by the Strategic Research Council established within the Research Council of Finland.
