Abstract
Algorithmic persuasion is a mode of organizing that happens through inducing affective experiences that covertly seek to influence behavior by presenting an ongoing stream of recommendations. This essay advances the thesis that this mode of algorithmic organizing has the capacity to affect individuals’ sense of their self and explores how and why this may happen. It suggests that individuals may be susceptible to experiencing AI recommendation systems as sublime. Their sublime qualities give normative force to their recommendations and, through them, appeal to one’s affective drives, fears, and hopes, revealing who or what one is or may become. This subjecting of one’s self to these recommendations warrants two critical observations: a behavioral preference we call “people-like-you” and the emergence of “algorithm conformity” as an organizing force. Yet there can be epistemic corruption in the recommendations. This epistemic corruption is the else, the experience of which can evoke uncanny feelings and carries with it the possibility of breaking the spell of the sublime and of escaping and resisting algorithm conformity. But can such experiences, individual and dispersed, give rise to acts of collective resistance in a world of algorithmic capitalism?
Questions of algorithmic organizing, for example, its reach, impacts, and forms, have become a prominent theme in management and organization research (Kellogg et al., 2020; Kim et al., 2024; Lindebaum et al., 2020; Shrestha et al., 2021). Their prominence is partly driven by the ubiquity of algorithmic systems (Fisher, 2022; Gillespie, 2016) and partly by the transformative impacts algorithmic systems are having on capitalist society through technological advances broadly labeled artificial intelligence (Pasquale, 2015; Schildt, 2020; Zuboff, 2019). Key to the argument we advance in this essay is that algorithmic systems are transformative in changing the way organizing itself is made possible (Noponen et al., 2024).
Research on algorithmic organizing points toward two competing views: one is focused on algorithmic control, the other on algorithmic facilitation (Duggan et al., 2020; Etter and Albu, 2021; Wood et al., 2019). They diverge in their understanding of how human activity is organized through the core device of algorithmic management: the platform (Bucher et al., 2021). Algorithmic “Taylorism” (Noponen et al., 2024) is a mode of algorithmic organizing oriented toward control; it is typical of the low-skill gig economy as found in food delivery, transportation, and crowdsourcing (e.g. DoorDash, Uber, and Amazon MTurk). The other view sees algorithmic organizing as oriented toward enabling and facilitating coordination; this mode is typical of collaborative consumption and crowdfunding platforms (e.g. Airbnb and Kickstarter; Curchod et al., 2020). The two views share, however, the idea that the platform’s goal is to organize the human search for utility and that, therefore, algorithmic organizing operates under a sense of utility-informed rationality associated with acts mediated by these platforms.
This essay focuses on a third mode of algorithmic organizing, namely algorithmic persuasion (Gunaratne et al., 2018), which is distinct from but operates alongside the previous two modes. In algorithmic persuasion, organizing happens through inducing affective experiences that covertly seek to influence behavior by presenting an ongoing stream of recommendations (Pignot, 2023). As such, the assumption that utility-informed rationality guides the use of AI systems is loosened. Recommendations flow to users through multimodal platforms; the systems can interact with their users through dedicated devices (e.g. a Fitbit activity wristband or an Oura ring) and customized applications (e.g. TikTok or Temu), or they can be embedded in other software (e.g. Grammarly or Google’s predictive search). Many commentators have pointed out how algorithmic persuasion may increase users’ levels of learned helplessness (Moore, 2019), psycho-emotional deficitisation (Lindebaum and Langer, 2024), and self-inflicted immaturity (Scherer et al., 2023; Scherer and Neesham, 2023), often for the benefit of the powerful organizations that control these systems: the “FAAMGs” and “BATXs” of the popular business press.
We refer to these persuasive systems as AI recommendation systems, because that is essentially what they do and what they have in common. Their outputs are profiled behavioral recommendations (Coeckelbergh, 2020: 125) that are derived from the extraction, categorization, optimization, and extrapolation of huge amounts of data. The recommendations are both individualized, as a recommendation is derived from data specific to that user, and dividualized (Cheney-Lippold, 2017; Deleuze, 1992), as it combines user-specific data with those of numerous other “people-like-you,” those individuals whose algorithmic profiles have been calculated to be similar to that of the focal user (Chun, 2016). A recommendation may remain merely a suggestion or a nudge that may or may not be followed, but recommendations are intended to be hard to resist and subvert (Walker et al., 2021).
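For readers less familiar with how such profiling works computationally, the following is a minimal, purely illustrative sketch of one common approach, neighborhood-based collaborative filtering; it is not drawn from any specific system discussed in this essay, and all data and names in it are hypothetical. It shows how a recommendation can be individualized (built on the focal user’s own traces) and dividualized at the same time (built on the traces of algorithmically similar “people-like-you”).

```python
import numpy as np

# Hypothetical user-item interaction matrix: rows are users, columns are items,
# entries are, for example, watch counts or purchase flags. Purely illustrative data.
interactions = np.array([
    [5, 0, 3, 0, 1],   # focal user
    [4, 0, 3, 1, 0],   # a "person-like-you"
    [0, 4, 0, 5, 4],   # a dissimilar user
    [5, 1, 2, 0, 2],   # another "person-like-you"
])

def cosine_similarity(a, b):
    """Similarity between two behavioral profiles (1.0 = identical direction)."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user_index, interactions, k=2, n_items=2):
    """Recommend items that the k most similar users ("people-like-you") engaged
    with but the focal user has not yet engaged with."""
    focal = interactions[user_index]
    # Rank all other users by similarity to the focal user's behavioral profile.
    others = [i for i in range(len(interactions)) if i != user_index]
    neighbours = sorted(
        others, key=lambda i: cosine_similarity(focal, interactions[i]), reverse=True
    )[:k]
    # Aggregate the neighbours' behavior: the "dividualized" part of the recommendation.
    cluster_profile = interactions[neighbours].mean(axis=0)
    # Exclude items the focal user already engaged with: the "individualized" part.
    cluster_profile[focal > 0] = -np.inf
    return np.argsort(cluster_profile)[::-1][:n_items]

print(recommend(0, interactions))  # item indices suggested to the focal user
```

Even in this toy form, the logic makes visible why the “you” addressed by a recommendation is always also a cluster: the items suggested to the focal user are those that the computed neighbors, not the user themselves, have engaged with.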
Everybody encounters AI recommendation systems the moment they interact with the digital world, whether at work, when studying, or in private life. Often, their recommendations appear mundane, for example, when offering suggestions to polish a digital photograph. In other instances, they nudge users toward certain consumptive behaviors, position them on social media, and inform them about the preferences to which people-like-you subscribe: newsfeeds, songs, films, books, and other online purchases. Their recommendations can, however, be consequential and obtrusive, especially after prolonged and repeated use of AI recommendation systems, as they may create deep, affective-embodied, experience-based social clusters; one can think of the homophilic “echo chambers” of contemporary cyberspace and their structuring of perceptions of, for example, ideal bodies, consumption patterns, and ideological beliefs (Chun, 2016; Cinelli et al., 2021). However, responding to the recommendations of AI systems does not always seem to follow from a utility-informed rationality on the part of the user.
It is understandable that much of the critical (management) literature on AI focuses on the control that powerful organizations, through their ownership and use of AI recommendation systems, are able to exercise over their customer-citizen-client-employee users (Al-Amoudi, 2023; Vesa and Tienari, 2022). The analysis typically centers on the agency and intent of the owners of the systems, with the systems themselves as proxies for their owners (Maaranen et al., 2022). Hence, the systems are understood as intermediaries between their owners and users. Research has explored the subjectivities of the latter in relation to these systems (e.g. Lindebaum and Langer, 2024; Pignot, 2023), identifying feelings of anxiety, precariousness, and performance fatigue stemming from being subjected to them (Manley and Williams, 2022). Yet, Burgess (2023: 1243–1244) argues that “scholarly work that aims to respond critically to datafication can end up centering the large technology companies and the State, leaving ordinary people out of the picture.” In light of arguments that posit how being human is co-constituted with its technology (e.g. Bailey et al., 2022; Den Hond and Moser, 2023; Murray et al., 2021) and that “all interaction (technological as well as nontechnological) changes the self and is a primary precondition for selfhood” (Åhman, 2017: 170), we argue that we need to delve deeper into the relationship between the individual user and the technology, and more specifically into why the use of AI recommendation systems is attractive, in spite of their potential for enhancing learned helplessness and “[eroding] humans’ capacity for using reason” (Scherer and Neesham, 2023: 2). We believe that insufficient attention has been given to the possible affective shaping of the self through the use of AI recommendation systems. We advance the thesis that the individual’s sense of their self may be affected by AI recommendation systems, alongside and concomitantly with the intent of the systems’ owners yet independently thereof, and we explore how and why that may happen.
Our argument proceeds as follows. We argue that the use of AI recommendation systems may affect one’s sense of self when they are experienced as sublime. They do so by judging and revealing, through their recommendations, who or what one is or may become. This sublime quality has a normative force, and therefore AI recommendation systems stimulate the individual to emulate their recommendations. They are an increasingly important “other” through whose “gaze” the user “constructs their self” (King, 2016: 74). Relying on Fromm’s (1969) insights regarding automaton conformity, we explain why some users may be susceptible to the normative force of the sublime qualities of AI recommendation systems and perceive their recommendations as helpful guidance in dealing with feelings of anxiety and isolation that stem from the present societal condition. The result is a subjectification of the self to the recommendations of AI systems; this “algorithm conformity” has not only a psychological but also a sociological condition. Cheney-Lippold’s (2017) notion of the else—the idea that there is some epistemic corruption or misspecification in the recommendations from AI systems—may trigger a sense of uncanniness that can be an entry to emancipation from algorithm conformity and to resistance and collective action.
We offer three contributions. First, we advance the idea that the use of AI recommendation systems may affect the user’s sense of their self, concomitantly with but separately from the intentions of the systems’ owners. Unlike views that depict the use of AI recommendation systems in terms of utility-informed rationality, AI recommendation systems may, in our view, also play into and appeal to one’s affective drives, fears, and hopes. Second, we propose the notion of “algorithm conformity” to elucidate the aggregate, social control over people’s sense of their self that these systems may occasion through their capacity to reveal and to pass judgment as responses to anxiety in late modernity. Third, we explore how Cheney-Lippold’s (2017) notion of the else, the epistemic corruption in a recommendation, may occasion an uncanny experience that can act as a trigger to disentangle the self from algorithm conformity and search for emancipation. Thus, AI recommendation systems themselves contain the seed, captured in the else, for the possibility of countering algorithm conformity and developing the self through a critical engagement with them. We conclude by arguing how such individual experiences of the else can, under specific conditions, build collective resistance, as exemplified by the German Bundesliga football club FC St. Pauli’s decision to abandon the social media platform X.
The sublime and AI recommendation systems
Cohn and Miles (1977) explicate the convoluted history of the meaning of the word “sublime” in Western thought and art. For example, in the first century CE, Longinus wrote on the sublime of elevating rhetoric. In the Enlightenment period, Edmund Burke associated the sublime with feelings of terror and vastness, while Immanuel Kant understood the sublime as a confrontation with the overwhelming. In the late 20th century, Jean-François Lyotard reflected on the sublime in post-modern painting as a presentation of the unpresentable. The sublime has throughout been evoked to capture the dynamics between an overwhelming outside and the self’s affective response when so exposed. We retain from Cohn and Miles’ (1977) discussion two connotations of the sublime: one is descriptive—the attribution or experience of an entity as sublime; the other is normative to the self—the sublime as an urge to transcend one’s limitations by proposing a standard or quality to emulate. We argue that AI recommendation systems can be experienced both as descriptively sublime entities and as normatively sublime to the self.
An entity is descriptively sublime when it is imputed with a quality that is beyond rational understanding, “bigger than life.” Natural phenomena (a thunderstorm, a fire) and human accomplishments (a piece of art, a place of worship, a warship) can be descriptively sublime by simultaneously evoking admiration, amazement, and attraction, and making one feel small and vulnerable. Technology can be sublime (Nye, 1994), as in the algorithmic sublime (Ames, 2018) and the cyber sublime (Gardner, 2009). AI recommendation systems can be experienced as endowed with intrinsic exceptionality by revealing themselves with an aura of true knowledge (Rasch, 2022: 67) and having “magnetic power” (Mosco, 2004: 118), to which the intractability and incomprehensibility of their inner workings add their own share (Pasquale, 2015). Through their recommendations, they may invoke a mix of feelings: admiration and reverence, astonishment and awe. AI recommendation systems can be experienced as descriptively sublime.
A recommendation may incite curiosity, surprise, and estrangement: the entire range of affects that can be captured with the emphasis on “this” in the question “Is this what I need to be/see/hear/read/think next?” AI recommendation systems thus perform acts of revelation: they reveal something beyond the knowledge of the subject with confidence-inspiring certainty of an unknown probability. Like an oracle, AI recommendation systems appear to hold answers that pertain to people-like-you. 1 They may pick up some clues about one’s subconscious drives and desires. They may “know” more about you than you do yourself (McCarthy-Jones, 2020) and confront you with parts of your being of which you were unaware, or with memories or desires that you would rather ignore, forget, or not have known by your partner, parent, or best friends. Han (2017) uses psychoanalytic language to suggest that “Big Data can even read desires we do not know we harbor. After all, under certain circumstances we develop inclinations that elude consciousness. Big Data is making the id into an ego to be exploited psychopolitically.” The revelatory quality of their recommendations may result in transcending the everyday banality of one’s here-and-now by the suggestion of elucidating something fundamental or essential about one’s being, whether in terms of presence or absence and for good and for bad. When an AI system recommends something that is experienced as profound, when its revealing plays into one’s deepest affective drives, desires or fears, hopes of pleasure and fulfillment, or feelings of shame and distress, it reveals who or what one is or may become. It has the potential to influence one’s sense of self. This potential is enhanced as the descriptively sublime AI recommendation systems obtain normative force.
In its normative connotation, the sublime becomes something to aspire to and worthy of accomplishment; it stimulates the individual to transcend their own limitations. After all, “the self constitutes itself in the mirror and through the resonance of the other . . . [the self] is constructed through the gaze of the other” (King, 2016: 72, 74). AI recommendation systems, hence, not only reveal but also judge by carrying the potential for presenting their recommendations with normative, perhaps even moral, force as that which—supposedly—is superior to the individual and therefore worthy of pursuit. They have the potential to show how one is deficient against a standard espoused in a recommendation and to insert themselves into the will and struggle to become a (morally) better version of oneself, into the “mental process of suppressing man’s lower desires and substituting for them higher goals” (Cohn and Miles, 1977: 302). Their suggestion is that “you ought to be like this” and that “you may not be like this” (Freud, 1989: 30, emphasis in original). Thus, AI recommendation systems also pass judgment. Having accounted for all available data, AI recommendation systems pass a seemingly moral verdict on what concerns the individual as data points in a cluster of similar data points, a “dividual” (Deleuze, 1992). But the better version of oneself, the higher goal, derived from AI recommendation systems is constructed from data not just on oneself but also on a myriad of other individuals; the “you” in “people-like-you” refers to a cluster of “like-minded,” algorithmically calculated recommendation profiles. This is an often-overlooked organizing outcome of AI recommendation systems. Due to AI recommendation systems’ capacities for revelation and judgment and the ubiquitous proliferation of AI profiles across all walks of life, the individual is constantly confronted with these features. But not everybody is equally susceptible to the “smart power” (Han, 2017) of AI recommendation systems. The question is, why is that so? What induces algorithm conformity and, as we will later explain, the potential for resistance?
Algorithm conformity: People-like-you
The question then becomes, how can we better think of this organizing quality of AI recommendation systems: What do we mean by “people-like-you,” and what does such a condition imply for understanding algorithmic organizing as an aspect of a technologically evolving late modernity? We begin by foregrounding our response to these questions through the work of Erich Fromm on the nature of freedom. We focus in particular on his concept of automaton conformity and propose that it takes the form of “algorithm conformity” in the present condition of late modernity and algorithmic organizing. This focus on conformity develops our argument in an important manner, because it explains why the mechanisms of revelation and passing of judgment through which AI recommendation systems act may exert such pertinent and surprising power in our contemporary societies.
Fromm (1969, 2002) discusses automaton conformity as a behavioral disposition in which the individual has relinquished their ability for autonomous thinking and yielded to the external authority of cultural norms in coping with their feelings of anxiety and isolation that stem from the societal conditions of modernity. It is a mode of being in which “the individual ceases to be himself [sic]; he adopts entirely the kind of personality offered to him by cultural patterns; and he therefore becomes exactly as all others are and as they expect him to be” (Fromm, 1969: 208–209). Automaton conformity is a defense mechanism against the existential fear of freedom, the impossibility of choosing, and the feelings of isolation and anxiety that come with this predicament. It is associated with the advance of modernity; it operates on readily available, pre-programmed desires and thoughts, the adoption of which helps individuals to avoid the complexities and uncertainties of establishing an authentic selfhood.
In Fromm’s analysis, automaton conformity was a quality of industrial mass society, from mass movements and mass media all the way through to mass consumption and mass marketing (Fromm, 1999). Today these institutional forces and cultural norms have weakened, while perceived individuality is valued more highly than in the mass society of industrialism. Yet, in the present societal condition of late modernity, similar feelings of anxiety and isolation still pertain (Bude, 2019). But now AI recommendation systems take on the task of the external authority and, when internalized as normatively sublime, the ensuing conformity is algorithm conformity. They incite desires, wishes, fears, and dreams while saying that people-like-you ought to be like this. In a world in which every imaginable possibility is just a few clicks away, “everything is open but nothing is meaningless . . . [t]he stress of anxiety is the stress of the search for meaning” (Bude, 2019: 106).
One source of susceptibility to this conditioning is precisely the anxiety and isolation generated by the quest for increased or better self-understanding, by the search for an answer to the question: Who am I for myself? As one seeks an answer, AI recommendation systems begin to assume epistemic power, in extreme cases perhaps even primacy, because in their revealing and passing judgment, they generate what look like answers. They are readily given answers, tailored to the desires and needs that AI recommendation systems have calculated people-like-you are likely to have. It does not demand much effort or engagement to follow up on them; there is little if any friction involved (Han, 2018). Buy this. Read that. Watch this. Like that. Believe this. Be like that. Be like people-like-you.
We believe that, under today’s conditions, AI recommendation systems may fill gaps left empty by the receding institutions of industrial society which we mentioned earlier. As one responds to, acts through, and derives meaning from AI recommendation systems, they can provide the externalized personality, “the individual’s relationship to his [sic] own self” (Fromm, 1969: 140). By computationally clustering individuals into aggregate social groups, these systems make recommendations which obtain authority through the power of their sublime quality. Thus, in a subversive manner, AI recommendation systems convey relational power: the sense of self is filled up with computational solutions to anxiety and isolation. Conformity stemming from cultural norms or institutional pressures is mixed with the smart power (Han, 2017) of algorithmically recommended actions and choices. And herein lies the promise of algorithm conformity: in a society with receding structures for mass organization, it points the individual to their place in society, not as an authentic self, but as a computationally generated, “relentlessly pointed yet empty, singular yet plural YOU” (Chun, 2016: 3) with computationally clustered attributes and behavioral characteristics.
The uncanny in AI recommendations
Our argument, then, is that the organizing capacity of AI recommendation systems lies in their capacity to organize through clustering people as people-like-you, while leaving their individual selves exposed to an experience of insufficiency by confronting the self with the normative expectation of how people-like-you ought to behave. Algorithm conformity is not totalitarian but co-exists with other, older forms of organizing. The trade unions, mass media, and mass marketing of modernity still form important, although weakening, components of late-modern society. But what is perhaps more important for our argument is that in the mechanisms of revelation and judgment, through which AI recommendation systems organize, lies also the very possibility for breaking their spell.
The encounter with an AI recommendation system may be a “wonderfully creepy” experience, one that is “endlessly fascinating yet boring, addictive yet revolting, banal yet revolutionary” (Chun, 2016: ix). The dualities Chun invokes make an important point: there is something “wonderful,” sublime in the technical prowess of these systems but also something “creepy,” uncanny. Their uncanniness shows in particular when, occasionally, their recommendations are a bit off the mark, misspecified, despite them seemingly being all-knowing; AI recommendation systems do suffer from epistemic corruption (Cheney-Lippold, 2017).
The uncanny, the unheimlich Heimliches, “insinuates itself in the form of affective anxiety and a blurry feeling of the strange and the familiar” (Orr, 2023). It is associated with feelings of unease, or eeriness. Mori et al. (2012) argue that technological artifacts, such as a robot or a prosthesis, can provoke an uncanny feeling from perceiving them as very similar to, yet disturbingly not quite being, the original, real thing. In the case of AI recommendation systems, uncanniness stems from their profiling (and subsequent recommending) being based on a recombination of data extracted from numerous individuals, and not exclusively from the focal self. The recommendation speaks to you, but when the system’s profile does not square with the conception you retain of yourself, uncertainty may settle in. Might the AI recommendation system be mistaken? Did you perhaps, unnoticed and unwillingly, leave some data traces not intended for profiling? Or would you rather prefer to forget about some events in your past that were nevertheless picked up as data, processed, and fed into the profiling, such that after all the recommendation may still be “correct” but in an unwelcome manner? Another possibility is that the AI recommendation system suffers from epistemic corruption. When profiling and a subsequent recommendation are somewhat misspecified, their experience can be uncanny because the unfamiliarly familiar invokes feelings of unease (Mori et al., 2012).
There are several reasons why misspecification between the profile and the self, and by implication between the AI recommendation and the sense of self, is possible. First, biographically significant parts of every individual’s being and doing are still offline; not every single choice, action, behavior, or movement is captured as data. Second, although some of the choices and behaviors made in daily life are guided by routines, other choices and behaviors are non-routine but may still be significant for one’s life; that is, frequency is not a reliable indicator of relevance. Third, the datafication and digitalization of the choices, actions, behaviors, and movements that are being captured and registered imply a considerable loss of information that is relevant for interpretation, for example regarding context, motivation, and meaning. Finally, old data are rarely if ever deleted; there is a legacy of the past in today’s algorithmic profiles. Therefore, and although they are intimately connected to one’s behavioral traces and continuously updated, not all data traces are equally meaningful in informing a profile; profiles are not fully accurate representations of “who we are” or “who we think we are.” Cheney-Lippold (2017) discusses the resulting epistemic corruption as the else, as the “wiggle room” between the instantiation of a profile and what it is supposed to represent, “between datafied and nondatafied life” (p. 179).
The else: Emancipation through misspecification
If it is indeed so that some people experience AI recommendation systems as sublime, that their revealing and passing judgment “speaks to you,” and that these systems seem to offer answers and consolation from anxiety and isolation, if they are the other through whose eyes one conceives of one’s self (King, 2016), how then can one resist, or recover from, algorithm conformity? This question has received various answers. One answer is to frustrate the data extraction that feeds AI recommendation systems or to refrain as much as possible from using them. There is no need to have a smartphone, a smartwatch, and a smart house; there are good reasons not to use social media (Lanier, 2018). A second answer asserts the individual’s ability and courage to think autonomously and independently as a counter to immaturity (Scherer and Neesham, 2023), perhaps aided by education and awareness programs. A third answer is to rely on protection by organizations, regulation, conventions, and institutions such as the State, news media, social movements, and business organizations (Al-Amoudi and Latsis, 2019; Scherer et al., 2023). Answers such as these are undoubtedly helpful in some, but problematic in other respects. Technical fixes and regulative and institutional solutions cannot be expected to be fully effective, and, in a way, relying on them is like exchanging one external authority for another. But more important—in relation to our argument—is that they do not break the spell of the sublime; they are rational responses to something that is at its core an affective experience.
We suggest, perhaps paradoxically, that an answer to the question of resisting algorithm conformity resides in Cheney-Lippold’s else. The else may be perceived as irrelevant, just slightly irritating noise that can easily be ignored. Or one may be utterly puzzled, annoyed, or distracted by the epistemic corruption in the judging and revealing implied in a recommendation (not to speak of the extreme instances of being categorized as a criminal, a terrorist, or a financially insolvent person). Experiencing the else is uncanny, the feeling that something is weird, not quite correct. The else is the difference between who one believes oneself to be as an individual and how one is represented by AI recommendation systems. This else, Capurro et al. (2013: 11) insist, is an ethical difference, because it poses questions like: Who am I—or who can or should I be—for myself in the gaze of the other? The else has the potential to arouse affects such as surprise, annoyance, or irritation; it can raise curiosity; or it may make one think, incite reflexivity and inquiry into oneself. AI recommendation systems may invoke or confirm one’s deepest desires, wishes, fears, and dreams. But when the else seeps through, uncertainty sets in, and one may begin to question one’s behaving as people-like-you behave. Its uncanniness “can sensitize us to the possibility for alternatives” (Orr, 2023: 2011).
On the one hand, the else may disrupt the sense of self as reinforced by AI recommendation systems, yet with a nagging feeling, due to the sublime, that there still may be something valid in the recommendation. Perhaps the else is not a misspecification in the AI recommendation, but a misconception in one’s sense of self? There is, after all, the possibility that it “knows” more about you than you do yourself, that it really knows you (Han, 2017) and points out some “imperfection” in its revealing and passing judgment. If so, it offers a cue for self-reflection and the possibility of developing oneself by stimulating consideration of who one can or would like to be. It may stimulate reflection on the desirability of one’s desires, the capacity of which is “essential to being a person” (Frankfurt, 1971: 10).
On the other hand, the else may unmask the AI recommendation systems’ normative sublime once one has recovered from having been propelled from the center of one’s own world and had the occasion to observe from a distance this strange experience of the else in the recommendation of an AI system. Upon the experience, perhaps upon multiple experiences, of the else, one may have the epiphany that AI recommendation systems have a voice but not a voice of their own, that their “knowing you” is really advanced data processing. And then it may provoke a good sardonic, “kynical” laugh (Sloterdijk, 1988) through which the normatively sublime halo of AI recommendation systems is dissipated and its representational illusion broken beyond repair. Such a laugh may assert confidence in the self and lead to a reclamation of “freedom, awareness, joy in living” (Sloterdijk, 1988: 166) and be an occasion for play, experiment, and ultimately resistance (Cheney-Lippold, 2017: 26) to algorithm conformity.
Epilog: The else and collective resistance?
It is understandable that organizational scholars may not feel at ease with the individuality of an experience of the else as a seed of resistance to AI recommendation systems, because it is not immediately evident how this experience can generate collective responses of resistance and emancipatory forms of organizing. We think this is a genuinely complex and difficult question and would not wish to come across as peddling snake oil to critical management studies’ constant search for the silver lining of emancipatory outcomes. How subjectivities are shaped and power is expressed algorithmically is stunningly subversive. We can witness a seeping realization of this in our news feeds. For example, “‘Teenage girls are feeling vulnerable’: Fears grow over online beauty filters,” writes Booth (2024) in The Guardian, or “Help! My political beliefs were altered by a chatbot!” exclaims Mims (2023) in the Wall Street Journal. But at the same time, we may be observing the possible antecedents for accountability, in particular through how organizations relate to the social media platform X. Writes Bundesliga club FC St. Pauli:
FC St. Pauli is withdrawing from the social media platform, X. The Boys in Brown joined the platform in 2013 and had 250,000 followers. Announcing its reasons for withdrawing, the club said that owner Elon Musk had turned a space for debate into an amplifier of hate that was capable of influencing the German parliamentary election campaign. . . . The club would like to thank its members for the critical exchange on what to do about X and calls on its followers on the platform to switch to BlueSky.
This message leads us to believe that there is potential for collective resistance through individual experiences of the else, but only if these experiences arise in a context where an alternative infrastructure for mobilizing dissent and resistance exists (Shantz, 2016). It is important to highlight the club’s recognition of the exchange with its members guiding this moral decision. Whilst FC St. Pauli’s argument (probably reflecting its left-wing heritage) is unusually straightforward, similar announcements have been made by many major organizations and companies. 2 Their decisions indicate that there is indeed demand for alternative forms of online engagement. Some of these actions stem from informed discontent with the direction of X, and some are more reactions aimed at mitigating harm to the organization’s public image, but all are nevertheless responses to perceived collective pressures.
In lifting up the trajectory from X to Bluesky as a form of collective resistance arising from the uncanny in AI recommendation systems and an ensuing experience of the else, we do want to underline its potential problems, too. We do not know the future trajectory of Bluesky. Much of the behavior of AI recommendation systems is down to the way in which algorithmic capitalism rewards systems that generate and capture attention. Maybe Bluesky will devolve as many of the original social media companies have. Possibly, the move to Bluesky is another, if more collective, kind of echo chamber: one where liberals and progressives emigrate to a new platform whilst conservatives and alt-right views remain behind. Yet the context of this struggle contains elements similar to those identified by Fromm and other prominent members of the Frankfurt School: ideological structures that foster alienation, the stifling of authentic individuality, and the triumph of instrumental, technologically mediated reason. Or perhaps, could this be the time for something else?
Footnotes
Acknowledgements
We are grateful for the questions and comments by the journal’s editor, its reviewers, and a number of colleagues: Ingrid Becker, Christian Fieseler, Mariel Jurriens, Lise Justensen, Florian Krause, Othmar Lehner, Christine Moser, Ursula Plesner, Claudia Schnitzler, Yuliya Shymko, and others. Their comments, questions, and suggestions helped us in developing and streamlining our argument. We are, of course, responsible for all remaining shortcomings.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
