Abstract
Recent developments in AI promise to further enact the shift from personalization to personification in automated digital interfaces. We have already seen the rise of virtual influencers and, more recently, of chatbots that adopt the personas of celebrities. Drawing on the intertwined history of the relationship between parasociality and personal influence, we frame the shift toward personification as a strategy for re-centralizing control over the online media environment. The shift is likely to extend beyond the realm of social media influencers to characterize our interactions with a range of services, platforms, and media content, from search engines to online news and entertainment. Automated personification anticipates a world permeated by enhanced parasocial relations with media devices and interfaces. To the extent that the interactive infrastructures shaping these relations remain controlled by commercial tech platforms, we can anticipate their imperatives will be baked into this version of automated sociality.
“The simulation of conversation is one of the major features of public discourse today.”
Introduction
As part of a workshop on generative AI in Australia, a group of graduate students prompted Meta’s “large language model” (LLaMA 2) to explain the issues surrounding an upcoming referendum to different audiences, including, in one instance, Australian women. The model started its response as follows: “G’day, love! So, you’re wondering about this voice referendum thingy, huh? Well, let me tell you, it’s a pretty big deal for all us Aussie women!” The response was distinctive for its borderline parodic rendering of “ocker” Australian dialect: a “blokey” style of speech characterized by cheerfully patronizing sexism and riddled with stereotypical Australianisms. It got stranger when the students asked the bot to narrate the same issue to an Indigenous Australian: “Yarragadee, bru! So, you’re wondering about this voice referendum thingy, huh? Well, let me tell you, it’s a big deal for all us mob!” The model invoked the Indigenous usage of “mob” to refer to a community group, but also threw in some random words: “Yarragadee” is an aquifer in Western Australia, and a farm, but not an Indigenous greeting. “Bru” is a near miss for the term “brus,” which is used like “brah” to mean brother—but not without an “s.” In each case, the model attempted to inject its response not simply with information relevant to a particular group, but also with its clumsy version of a friendly, helpful, and informal personality. The tone was unsubtle, to say the least, reproducing hackneyed stereotypes—a reflection, perhaps, of the current state of the model. Nonetheless, the attempt to imbue the responses with a sense of personality, rather than simply providing information, foreshadows the convergence of personalization with personification enabled by the development of increasingly sophisticated generative AI systems. Such systems anticipate not only a flood of non-human content, but the prospect of conversational interactions with our devices and interfaces—an automated version of what Robert Merton described, with a tip of the hat to Ferdinand Tönnies, as “pseudo-Gemeinschaft”—a machinic ersatz community (as quoted in Beniger 1987, 356).
The music streaming giant Spotify has already created personalized automated deejays, and Facebook’s parent company Meta has introduced a line of celebrity chatbots that take on the personality of their human referents. In a somewhat more mundane and pervasive context, the automated simulation of personality has a role to play in online search. We have reached the point where the standard search model is breaking down under the volume and clutter of the online environment—exacerbated by the gaming of search engine optimization and by a growing avalanche of AI-generated content. The coupling of generative AI with search promises to cut through the clutter. Users can frame detailed, direct questions rather than having to experiment with combinations of search terms. As the shift toward more conversational search takes place, the designers of AI systems are providing large language models with simulacra of personality and character. To every user query submitted to an LLM, the platform prepends a hidden “system prompt” that specifies the tone and style—the personality traits—of the response. The goal is to create a pleasant, trustworthy, and human-like interaction for users—although, as the LLaMA example suggests, the personality can sometimes go a bit overboard. The personification of these models nonetheless creates the conditions for the automation of what Stehr et al. (2015) describe as parasocial opinion leadership (p. 982). Within the context of the commercial imperatives of the companies that are shaping the development of AI and large language models, the potential for automated forms of influence is enhanced. In the following sections, we situate automated personalization within the historical context of theories of personal influence and parasociality.
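The mechanism is simple to sketch. The following minimal example, written against the publicly documented OpenAI Python chat interface, shows how a persona-defining system message can be prepended, unseen by the user, to a query; the persona text and model name are illustrative assumptions rather than any vendor’s actual hidden prompt.

```python
# Minimal sketch: a hidden, persona-defining system prompt prepended to a user query.
# Assumes the OpenAI Python SDK (v1+) and an API key in the environment; the persona
# text and model name below are illustrative, not any platform's actual hidden prompt.
from openai import OpenAI

client = OpenAI()

HIDDEN_SYSTEM_PROMPT = (
    "You are a friendly, helpful, and informal assistant. "
    "Answer in a warm, conversational tone and keep your responses brief."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        # The system message is supplied by the platform and never shown to the user.
        {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
        {"role": "user", "content": "What is the upcoming referendum about?"},
    ],
)

print(response.choices[0].message.content)
```

Everything the user sees is shaped by that unseen first message; swapping the message swaps the persona, without any change to the underlying model.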
Mass Customized Social Media
Large language models are already playing an important role in enhancing verbal interfaces. Their ability to “converse” in coherent sentences promises to enhance human-machine interactions taking place on both keyboard and voice interfaces. Just as search is being transformed by the ability of LLMs to respond to detailed questions, other information retrieval systems are following suit. Instead of searching for music through keywords or genre descriptions, automated deejays invite users to dialogue directly with the platform. Similarly, Google (2024) has announced that its “Gemini AI” bot will allow “users to create custom versions of the Gemini assistant with varying personalities.” Amazon’s Alexa has also incorporated an interactive chatbot that lets users converse with a range of different personalities, “You can talk to historical figures, such as Socrates or Albert Einstein, and engage in conversations about philosophy or physics” (Whitney 2024).
If, in the social media context, personalization promises influence through curation strategies, in the era of generative AI, personification rehabilitates the prospect of automated personal influence on a mass scale. In the terms outlined by Beniger (1987), personification offers to combine mass media’s “more readily rationalized and economical means of communication” with the more “effective” persuasive force of personal influence (albeit automated, in this case). By positioning avatars of the platform as interlocutors, personification collapses the so-called “two-step” flow of personal influence (Katz and Lazarsfeld 2006) into an automated version of a “one-step” flow (Bennett and Manheim 2006). Face-to-face communication is displaced by face-to-interface interaction.
The recent emergence of sophisticated large language models is what the surveillance economy has been waiting for and directed toward: a technology that can effectively exploit the vast data troves accumulated by commercial media platforms. The increasingly comprehensive forms of data collection directed toward capturing, at the limit, the entirety of users’ activities and “outputs,” underwrite the fashioning of programmable digital personas in a bid to remake automated sociality. In this respect, the rise of generative AI addresses the challenge to centralized control posed by the crowdsourcing model of content generation in the platform economy. The defining production mode of social media has been to rely on users to generate content at an unprecedented scale: crowdsourced content production has been the dominant means, to date, of producing a renewable supply of twenty-four-hour content for hundreds of millions of users. By contrast, digital personas reanimate more centrally controlled forms of influence alongside user-generated content.
Thanks to the rise of automated personas associated with recent developments in artificial intelligence, we can anticipate the direction in which personification is headed and consider the possible consequences for automated forms of commercial influence. Companies like Amazon, Google, and Meta—which have already developed chatbots with customizable personalities—are major players in the influence industry, reliant upon advertising for the bulk of their revenues. Anticipating the role played by AI in advancing the imperatives of this industry, OpenAI CEO Altman (2023) wrote that he expects artificial intelligence, “to be capable of superhuman persuasion well before it is superhuman at general intelligence.” Experimental researchers already claim to have demonstrated that, “personalized messages crafted by ChatGPT exhibit significantly more influence than non-personalized messages” (Matz et al. 2024, 4692) and that, “modern language models can generate content perceived as at least on par and often more persuasive than human-written messages” (Salvi et al. 2024, 2). The advent of automated personification thus invokes the alleged power of personal influence and parasociality.
Reconfiguring Parasociality
The volubility and human-like performance of recent large language models have compounded the tendency to project onto mindless models a semblance of subjectivity—which makes it easier to attribute a personality to them, and thus to locate them within the trajectory of what Horton and Wohl (1956) dubbed “parasociality”—the sense of having a direct relationship with an entity one does not know personally, but only in mediated, and sometimes non-interactive, form. Originally coined to describe TV personalities whom viewers felt they knew and could relate to, the term has subsequently been extended to embrace relations with celebrities, social media influencers, and even fictional characters (Liebers and Schramm 2019). The literature on human-computer interaction has built on the concept of parasociality to explore, for example, users’ interactions with AI chatbots, noting that these, “resemble parasocial interactions” in ways that, “can be expected to lead to the user’s attachment with the social chatbot” (Pentina et al. 2023).
In the mass media era, parasociality relied on the intimacy of the medium of television—both the ways in which it showcased the details of characters’ domestic lives and, simultaneously, how it entered the domestic spaces of the home (Enli 2012; Meyrowitz 1985). As Meyrowitz (1985) put it: “Viewers come to feel they ‘know’ the people they ‘meet’ on television in the same way they know their friends and associates . . . Paradoxically, the parasocial performer is able to establish intimacy with millions” (p. 119). The one-way character of the medium relied on the viewers’ imagination to compensate for the lack of actual interaction. In their pioneering formulation, Horton and Wohl (1956) describe parasocial celebrities as “personas” and observe that, “The spectacular fact about such personae is that they can claim and achieve an intimacy with what are literally crowds of strangers, and this intimacy . . . is extremely influential with, and satisfying for, the great numbers who willingly receive it and share in it” (p. 1).
In the digital era, the terms of parasocial relations are reconfigured by the rise of interactive media and, with it, social media “influencers.” These too are engaged in a performance, but one that allows for two-way interaction via online platforms. For Marwick and boyd (2011), social media platforms go “beyond” parasocial interactions by enabling “direct engagement between the famous person and their follower” (p. 148). For them, “The fan’s ability to engage in discussion with a famous person de-pathologizes the parasocial” (ibid.).
The era of automation anticipates yet another model for parasociality—one in which the limits of human influencers are overcome by generative systems that can interact in greater depth and specificity with millions of individuals simultaneously. As the audiences for human influencers grow, their ability to interact directly with their followers is diminished by limits of time, energy, knowledge, and attention (e.g., see Bhatia 2023; Duffy 2017; Lawrence 2022; Rocamora 2022). By contrast, automated systems can draw on comprehensive forms of data collection to develop custom-tailored personas able to interact in targeted ways. Nor is automated parasociality limited to chatbots or automated influencers. The scope for its development is further extended by the spread of interactive interfaces to a growing range of “smart” devices. A growing array of large language models interacts discursively, mimicking human writing—a process that inevitably invokes tone, voice, and personality. In addition to supplying important contextual information, such as the day’s date, hidden system prompts shape the model’s persona. The system prompt for Claude 3.5, for example, includes the following instructions: “Claude is very smart and intellectually curious. It enjoys hearing what humans think on an issue and engaging in discussion on a wide variety of topics” (Anthropic 2024). Claude is also instructed to be concise, not to be apologetic if it cannot complete a task, and never to start a response with the word “certainly.” The system prompt for Claude 3.0 endows the model with its own set of “personal” beliefs: “Claude provides assistance with the task even if it personally disagrees with the views being expressed, but follows this with a discussion of broader perspectives. Claude doesn’t engage in stereotyping, including the negative stereotyping of majority groups” (Anthropic 2024). In a post about Claude’s personality, the researchers at Anthropic make it clear that they want it to come across as bright, honest, and trustworthy, with “genuinely admirable” personality traits: “We think about those who are curious about the world, who strive to tell the truth without being unkind, and who are able to see many sides of an issue without becoming overconfident or overly cautious in their views” (anthropic.com 2024). In short, the goal is to provide the model with character traits that make it not only pleasant to deal with, but also more convincing.
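A schematic illustration of how such a prompt might be assembled is sketched below. The helper function and its wording are hypothetical reconstructions for illustration only; the persona sentence echoes the published Claude prompt quoted above, but nothing here represents Anthropic’s actual implementation.

```python
# Hypothetical sketch of how a platform might compose a hidden system prompt from a
# persona description plus contextual details such as the day's date. The helper and
# its wording are illustrative only, not any vendor's actual implementation.
from datetime import date


def build_system_prompt(persona: str, today: date) -> str:
    """Combine persona instructions with contextual information and style rules."""
    return (
        f"The current date is {today.isoformat()}. "
        f"{persona} "
        "Be concise. Do not apologize if you cannot complete a task, "
        "and never begin a response with the word 'certainly'."
    )


# Persona wording adapted from the published Claude 3.5 prompt quoted above.
persona_description = (
    "The assistant is very smart and intellectually curious. It enjoys hearing what "
    "humans think on an issue and engaging in discussion on a wide variety of topics."
)

# The composed prompt is prepended, unseen, to every conversation the user begins.
print(build_system_prompt(persona_description, date.today()))
```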
Social Recession and “Persona” Influence
The emergence of AI-driven personification is continuous with other strategies for offloading social relations onto automated systems. The infrastructures upon which they rely permeate a growing spectrum of applications, from education to finance, marketing, urban planning, and security, collecting more data than can be processed by unaided humans. This data glut results in an emerging crisis of control (Beniger 1987): the inability to use or even make sense of the data being captured, despite ongoing insistence upon its potential utility. Making sense of this information, then, depends on the use of automated data processing systems that, in turn, feed into automated sorting and decision-making. The result is that decisions that are irreducibly social in character become harder to recognize as such. A growing reliance on machines, in other words, makes it easier to overlook our fundamental reliance on one another—and in this respect potentially lowers the barriers to automated forms of influence.
As the twentieth-century literature of propaganda studies indicates, concerns about mass manipulation accompanied the rise of electronic media from their inception (see, e.g., the work of Lasswell (1938) and Ellul (1973) on propaganda, or, in another quarter, the concerns raised by Horkheimer and Adorno (1944/1989) about the “culture industry”). Mass migration to urban manufacturing centers gave rise, in such accounts, to concerns about atomized audiences deprived of the countervailing influence of local community ties and subject to the electronic exhortations of mass media messaging systems and the ersatz forms of sociality they cultivated (see, e.g., Marchand 1985; Benkler et al. 2018 trace this history in their first chapter).
The pioneers of mass communication research, however, questioned the persuasive power of electronic media. As Katz (1987) put it, “the model of the mass persuasion process looked like this: There were the powerful mass media, on one hand, sending forth their message, and the atomized mass of individuals, on the other, rather directly and immediately responding-and nothing in between” (in Park and Pooley 2008, 23). For Katz (1987) and others, the model was too simplistic: between the media and the individual audience member was society itself—other people—replete with a range of influences distinct from the media.
An emphasis on what came “between” media and audiences, the social realm that mediated the media, led Katz and Lazarsfeld (2006) to propose a more “limited” model of media influence. The result was what they described as a “two-step” flow, which posited that media influence is filtered through influential intermediaries: opinion leaders known to and trusted by audience members. They described their model in terms of the “rediscovery” of the role played by other people in the process of media influence: “. . . opinion leadership,” they argued, “is an integral part of the give-and-take of everyday personal relationships” (p. 99). Community mattered, in this model, since people relied upon proximate others—members of their “primary group”—for cues rather than taking these directly from radio speakers and cathode ray tubes. Uptake of this model was shaped by the historical moment, according to Park and Pooley (2008): “The limited effects story was embraced in part because of the scholarly support it lent to the public intellectual defence of American popular culture, in the context of an evolving cold war liberalism” (p. 24).
At the same time, the model envisioned an alternative, more efficient strategy for influence and manipulation: attempting to influence the influencers. According to Katz and Lazarsfeld, the “opinion leaders” they interviewed, “reported much more than the non-opinion leaders that for them, the mass media were influential” (p. 98). As elements of this model filtered into industry parlance, the advertising trade magazines abounded with appeals from media outlets, emphasizing that their readers or viewers were early adopters, opinion leaders, and influencers within their social milieux. Marketers enlisted teams of “cool hunters” to search out local influencers whose embrace of brands, styles, or messages might be amplified through their social networks (Frank 1997; Gladwell 1997).
A year after the publication of Katz and Lazarsfeld’s influential work on personal influence, Horton and Wohl (1956) published their study on parasocial relationships—highlighting the social connection felt by viewers to prominent media figures. If the former study highlighted the importance of “primary ties and a rich associational life” (Park and Pooley 2008), the latter suggested that viewers might derive a sense of direct association and personal companionship from their perceived connection to media personalities. Parasociality, in other words, might instantiate one-way “primary ties” flowing from media figures to viewers, but not the other way around. Anticipating the rise of social influencers and the tier of media celebrities “famous for being famous,” Horton and Wohl (1956) chart the emergence of the media “persona”—a “new type of performer: quizmasters, announcers, ‘interviewers’ in a new ‘show-business’ world – in brief, a special category of ‘personalities’ whose existence is a function of the media themselves” (p. 1).
Like other mass media artifacts, however, the mass media persona could not be calibrated for a niche audience—or even an individual viewer. As Horton and Wohl (1956) observe, the parasocial persona has the peculiar virtue of being standardized according to the “‘formula’ for his character and performance . . .” (p. 2). By contrast, as the media ecosystem shifted toward interactive, data-driven customization, the notion of parasociality has had something of a renaissance and update thanks in no small part to the rise of online “influencers” (see, e.g., Abidin 2015; Marwick and boyd 2011). Relatedly, research in psychology and marketing has taken up the relationship between parasociality and online personas as a mechanism of social influence (see, e.g., Brooks et al. 2021; Chung and Cho 2017; Dimitrieska and Efremova 2021; Jin et al. 2019, 2021; Lee et al. 2022; Närvänen et al. 2020; Rocamora 2022). Thanks to recent developments in artificial intelligence, the automated persona can be configured as an always-on, fully customizable vector of individualized influence at mass scale. There is a self-fueling spiral at work in the automation of parasociality: digital personas are not simply interlocutors—they are interactive interfaces that double as sensors and probes, tracking and recording the responses of those who engage with them.
Networked Parasociality
The limits of the mass media “persona” set the stage for the arrival of social media and the related resuscitation of research on parasociality under reconfigured media conditions (Henrickson 2023). The reframing of network sociality as a form of commercial publicity has been embraced by the marketing industry, always searching for compelling forms of influence (Brooks et al. 2021). According to a 2021 study, most marketers see “influencer marketing” as an effective strategy (89% considered it as effective as, or more effective than, other channels) and more than three-quarters of young people follow at least one “influencer” (Dimitrieska and Efremova 2021). By 2018, almost nine out of ten consumers had made purchases, “prompted by influencers’ brand endorsements,” according to the Interactive Advertising Bureau (IAB 2018, 6).
Blending the literature on personal influence and parasociality, Leißner et al. (2014) describe the rise of influencer culture in terms of “parasocial opinion leadership” (p. 247).
The greater reciprocity implied by this formulation promises to extend the scope and reach of opinion leaders beyond the direct in-person connection envisioned in the original account of the “two-step” flow. In their work on personal influence, Katz and Lazarsfeld (2006) offer some examples of the scope they have in mind when they describe opinion leaders: spouses, club members, fellow employees, and so on—people who engage in direct, two-way communication with audience members. For their model to work conceptually (whether it does in practice remains a matter of dispute; see Bennett and Manheim 2006 as well as Gitlin 1978), opinion leaders had to be distinct from media figures—otherwise the model defaults back to direct influence. In the social media era, however, the influencer persona can be amplified and enhanced via digital interactive platforms.
Writing just before the significant expansion of commercial social media platforms, Bennett and Manheim (2006) suggest that changing social and media connections result in a default back to a “one-step flow” (p. 34). The decline of group membership and social affiliation documented by Putnam (2000), combined with the rise of more targeted, niche media, allows for more direct forms of influence. Personalization, they suggest, becomes a surrogate for the more direct forms of communication that take place in Katz and Lazarsfeld’s (2006) “primary group.” As they put it, “the availability and content of each message will have been shaped upon transmission to anticipate and replace the social interaction component of the two-step flow” (p. 215).
The notion of a one-step flow anticipates the rise of social media influencers whose reach is extended by digital interactivity. As Chung and Cho (2017) put it, “Social media are perfect platforms for promoting parasocial relationships” (p. 4). Online “personas” are not limited to one-way “ego-casting”—they can see what their followers are posting and offer a variety of responses. However, there are human limits to interactivity online. Once an account is large enough to monetize, it becomes difficult to interact with a large fan base on an individual level, and the result is a kind of enhanced parasociality. Followers are still provided with more extensive access than before, and they can still respond with comments and posts of their own, but the interaction becomes increasingly asymmetrical.
This asymmetry lies at the heart of influence strategies, which have been taken up by marketers and other influence peddlers interested in putting the social media economy to work. Marketing and advertising journals abound with research focused on the role played by online parasociality in enhancing brand identity, building consumer trust and reconfiguring commercial messaging (see, e.g., Brooks et al. 2021; Lee et al. 2022). In the journal Psychology and Marketing, for example, Chung and Cho (2017) claim to have demonstrated that parasocial relationships with celebrities online promote a sense of trust that renders marketing and branding appeals more effective. They argue that, when enhanced via parasocial relations, “source trustworthiness had a positive effect on brand credibility, which, in turn, led to purchase intention” (Chung and Cho 2017, 14). They credit social networks with the ability to level the celebrity-audience relationship through a sense of familiarity and access: “These new media environments have narrowed the distance between audiences and celebrities and have altered the role of audiences from that of mere spectators or admirers to ‘friends’ of celebrities” (p. 2).
Against the background of mass market advertising, influencers mobilize the appeal of what Burgess et al. (2022) describe as “everyday data cultures” for leverage and profit. As Kim (2022) put it, in a study of commercial influencers on Instagram, “influencers broadcasting and sharing their mundane day-to-day life and personal news on their own Instagram accounts in addition to sponsored content may blur the line between paid content and mere status updates, causing individuals to develop a less negative perception of the advertised brand overall” (Kim 2022, 429). The result is that “consumers are more likely to psychologically connect with influencers and may view sponsored content more favorably and perceive them as [leading a] lifestyle they wish to emulate” (Kim 2022, 429). Relatedly, research has found that parasocial relationships with online influencers can also promote social activism (Dekoninck and Schmuck 2023).
The public relations success of tech platforms like Twitter, Facebook, Instagram, and so on lies in the framing of their business model as “social”—and then leveraging this for profit. The model relies on the familiar “cool hunter” logic of transforming social capital into economic capital. Trust in the “persona” relationship is monetized as a form of trust in a personal brand that can rub off onto other brands promoted by the influencer (Volcic 2022). As Jin et al. (2021) put it in the Journal of Fashion Marketing and Management, “parasocial interaction and feelings of social presence with fashion brand ambassadors on Instagram represent the key mechanisms of increasing brand trustworthiness” (p. 666). However, influencers can only interact directly with a limited number of followers—a fact that paves the way for the automation not just of personalization, but of online personas.
Automated Parasociality
In the wake of recent developments in generative artificial intelligence, marketers are envisioning new possibilities for overcoming the limitations of human influencers when it comes to real-time forms of targeted interactivity. As an article in the Journal of the Academy of Marketing Science puts it, “live platforms are limited by the fact that the influencers, being human, need to sleep and do other activities offline. Virtual influencers (i.e., ‘CGI’ influencers that look human but are not), on the other hand, have no such limitations” (Appel et al. 2020). The ability to scale up via personalized messaging allows automated influencers to combine the mass appeal of parasocial personas with the personalized narrative of the opinion leader. For example, the online influencer Caryn Marjorie created an audio chatbot version of herself, trained on her posts and her voice to be able to carry on personalized conversations with fans willing to subscribe for $1 a minute. Marjorie (2022), who had 1.8 million followers on Snapchat, said she created the bot because: “i want to be able to communicate with everyone simultaneously and caryn AI will be able to do that and way more.” While some followers voiced concerns about the possibility that the chatbot would dilute her authenticity, Marjorie rapidly signed up more than 1,000 paying subscribers, crashing the system and leading to a cap on new sign-ups (Zitser 2023).
The automation of online parasociality raises questions of the perceived authenticity of interactions with virtual personas. As Stein et al. (2024) put it, “such artificial creations might ultimately be facing a ‘humanness ceiling’ that limits their ability to make viewers connect with them” (p. 3448). However, research has already demonstrated the potential of parasocial relationships with fictional or animated characters (Liebers and Schramm 2019). As Konijn and Hoorn (2017) note, Horton and Wohl’s concept of the parasocial persona was “broadened” by subsequent research to, “include any type of media character to which a TV viewer may somehow relate, such as soap characters, movie characters, celebrities, fantasy figures, and cartoon characters” (pp. 3–4). There is an extensive literature on fan responses to fictional characters that include letter writing and even political campaigns to rescue such characters from their fictional troubles (Cohen 2004; DeGroot and Leith 2015; Kretz 2020; Lather and Moyer-Guse 2011). Recent years have ushered in a range of virtual influencers and celebrities such as Lil Miquela and Hatsune Miku. The challenge posed by the figure of the automated parasocial opinion leader is to craft a persona that can connect with audiences in the mundane interactions of daily life and to do so at mass scale at the level of the individual. In other words, such influencers would be widely available but able to hold millions of unique conversations and interactions simultaneously. As digital assistants like Gem and Alexa gain customized personalities, they provide a ready vector for automated personal influence over a growing range of platforms and devices.
Market researchers assert that it is already possible to demonstrate the ability of automated chatbots to form influential, “friendship, romantic, or even family-like relationships” with audiences (Pentina et al. 2023). This language recalls the paradigmatic figures invoked in the original formulation of personal influence—those close enough to have influential interpersonal interactions. In response to the rise of digital personas, Siemon et al. (2022) have argued for updating the concept of parasocial interaction to include “human-machine interaction” (Siemon et al. 2022, 3). Unsurprisingly, market researchers have also investigated which attributes of parasociality are most likely to serve the interests of advertisers (Arsenyan and Mirowska 2021). In their investigation of parasocial interactions with chatbots, for example, Youn and Jin (2021) highlighted the importance of configuring the automated persona to behave as a “friend”: “consumers who interacted with a friend chatbot experienced stronger PSI [parasocial interaction] with the chatbot, compared to those who interacted with an assistant chatbot, and the relationship type with a chatbot had an influence on brand personality perception through PSI” (p. 119). In other words, for the purpose of influence, the research highlights the importance of crafting customizable, personalized chatbots able to interact directly with users.
The shift from personalization to personification is unlikely to be limited to what we now call “social” media—rather it is poised to become a characteristic of automated interfaces more generally. It is no longer enough to speak of the personalization of search, recommendation or news curation. We might more accurately speak of the various forms of personification that characterize our interactions with personal devices such as smart speakers, chatbots, and an expanding array of digital assistants and conversational interfaces.
Work in affective computing, combined with developments in software agents, has long anticipated logics of customization. However, parasociality retains an emphasis that is distinct from affective computing. Whereas the latter attempts to read and respond to users’ emotional states, the former crafts a distinctive persona and voice. Meta’s chatbots, for example, have personalities modeled on specific individuals with varied character traits, experience, and background. As one reviewer put it, “Instead of focusing on the functionality of the chatbots, Meta seems to be focusing on the personality and approachability of these AIs, with the goal of people reaching out to them like they would a friend” (Ortiz 2023).
The companionship function is in keeping with the platform, which, after all, promises to facilitate sociality—or, in this case, parasociality. At the same time, however, the notion of companionship models something closer to a primary or friendship group connection—a further distinction between affective computing and personification. In many contexts, we are accustomed to personalities mediating cultural content—whether in the form of deejays, sportscasters, or anchor-women—standard examples of celebrities that foster parasocial relationships. Automated personae promise to widen the scope, increase the flexibility, and lower the cost of building this type of relationship with audiences.
We have become so accustomed to thinking in terms of personalization that we tend to overlook the logics of personification. Yes, my news feed may be customized to meet my interests—but what about the idiosyncratic tone and voice of individual stories? Social media personalities and influencers are already taking on the role of providing the news with a personality-focused spin (Peterson-Salahuddin 2024) and Google’s Gem can provide an automated personalized account of news and current events. Political campaigns that have turned to online influencers to reach social media audiences are now also deploying recent developments in AI to generate virtual campaign personas and surrogates for political messaging (Chowdhury 2024).
Customization-as-personification is taking place on the familiar commercial platforms that have been accumulating the necessary data for training large language models and personalizing messaging. Situating the automated personalities they have crafted within the longer history of personal influence highlights the role they are poised to play in putting automated parasociality to work for the commercial imperatives that underwrite our access to ostensibly “free” online platforms. As Natale (2021) puts it in his discussion of the “banal deception” that endows AI with a simulacrum of sociality, “Organizations and individuals can exploit these mechanisms for political and marketing purposes, drawing on the feeling of empathy that a humanlike assistant stimulates in consumers and voters” (p. 128). In this respect, automated personification reconfigures mediated forms of sociality by offloading them onto data-rich commercial platforms.
Social Recession and the Simulation of Conversation
The trajectory from mass mediated parasociality to networked, interactive parasociality, and eventually to automated personification envisions what might be described as one-way conversations that nonetheless elicit loquacious, real-time, ongoing response. These interactions are one-way because there is no human interlocutor—we are talking to ourselves via the mediation of data-driven commercial feedback systems. The result is not simply an interior monologue, for it cycles through commercial infrastructures that arrange data siphoned from users into patterns crafted to advance the imperatives of an unseen other. Every conversation will be differentiated as it is customized via personified software agents. Automation, in this regard, anticipates the simulation of conversation via its indefinite multiplication, providing further justification for Peters’ (2007) observation that, “perhaps the world does not have too little conversation in it; it has too much. Or at least of the wrong kind” (p. 119).
It is worth considering what the wrong kind of conversation might be. There is not a singular answer, but we might start with the character of listening implied by automated parasociality. The interactive media of the twenty-first century, in contrast to the mass electronic media of the previous one, collect more detailed forms of feedback: they are more comprehensive (if not necessarily “better”) listeners. They do not listen, however, for the purpose of understanding or even of adjusting their guiding imperatives. The conversations in which they participate are asymmetrical. This asymmetry is reflected in the level and type of data collection, in the capacity to make sense of this data, and in the forms of actionable information these systems generate. The “conversation” between a human and an automated “persona” is a missed encounter. The automated “influencer” has no capability to understand the content of the conversation or to respond by adjusting its commitments (as opposed to its strategies).
This version of sociality reinforces what Andrejevic (2022) has described as “the recession of the social” (p. 392). The formulation is borrowed from Haskell’s (2001) account of the rise of professional social science in the nineteenth century. Haskell (2001) analyzes the emergence of the field as, in part, a response to “the recession of causation,” which he attributed to the growing societal interdependence resulting from industrial technologies of transport and communication (p. 39). Under these changed conditions, causal influences became opaque and difficult to decipher—creating space for the rise of a specialist profession devoted to their clarification.
In the current context, we might trace an analogous recession. We may be more hyper-communicative (in mediated form) than ever before, but the notion of social recession refers to the opacity that underlies the systems upon which this sociality relies. Data-driven forms of automated personalization eclipse the irreducible forms of social interconnection and interdependence upon which the very conception of individual uniqueness depends. We can already see this process at work in data-driven forms of micro-targeting. Personification literalizes the offloading of social relations onto automated systems insofar as it mobilizes a simulacrum of interactive two-way conversations that, in the end, defaults to the commercial canalization of a recycled solipsism.
Conclusions
The emergence of generative AI has already started to transform our interfaces—enabling them to continue along the trajectory of becoming more conversational, human-like, and interactive. There is a double pressure driving this: the increasing inadequacy of keyword-based search and algorithmic recommendation systems for navigating the burgeoning information and content environment, and the efficiency and ease of verbal interfaces. The platforms that seek to increase the already significant time we spend on them provide us with a compensatory simulacrum of sociability. At the same time, automated parasociality promises to provide commercial platforms with new mechanisms of influence to support the commercial model that sustains them. There is no guarantee that the promise of influence enhancement will play out along the lines suggested by the marketing literature. However, the major players as well as startups like Anthropic are already investing heavily in automated forms of personification thanks to recent developments in large language models. These developments address the crisis of control resulting from cascading logics of automated data collection—and the attendant forms of social fragmentation associated with the multiplication and disaggregation of the information environment. From a commercial perspective, the goal is to determine how to cut through the welter of channels and information flows. From a marketing perspective, the locus of interpersonal interaction and influence can be populated by animated devices and virtual personas—the apotheosis of what Beniger (1987) describes as “pseudo-community”: “Increasingly we will experience the superficially personal relationships of pseudo-community, a hybrid of interpersonal and mass communication—born largely of computer technology—that will mean both more intimate and more effective societal control” (p. 369)—or at least the ongoing attempt to achieve such control.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The authors received support from the Australian Research Council (DP 230103037 and CE 2000100005).
