Abstract
Conducting conversations with artificial intelligence (AI) technologies such as ChatGPT is becoming an everyday experience for large masses of people. However, we still know very little about the emerging communicative dynamics facilitated by these technologies. This special issue tackles a dimension of AI that is becoming increasingly relevant and ubiquitous: artificial sociality, defined as technologies and practices that construct the appearance of social behavior in machines. The notion of artificial sociality emphasizes that machines construct only an illusion or artifice of sociality, prompting the humans who interact with them to project social frames and meanings. In this introduction to the special issue, we discuss the dynamics and implications of artificial sociality and show how these technologies are increasingly incorporated and normalized within digital platforms. The issue includes contributions that offer empirical findings and theoretical insights by examining a broad array of AI technologies, ranging from ChatGPT to Replika.
The emergence and development of technologies enabling communication between humans and machines have sparked a rethinking of the scope and implications of artificial intelligence. The experience of conducting a conversation with a piece of software, once possible only in limited situations and for small circles of users, is rapidly becoming an everyday experience for growing masses of people around the world (Capraro et al., 2024). To grasp and critically examine the intricacies of these emerging dynamics, we need interpretative frameworks and conceptual tools that help capture the new kinds of social experiences and engagements facilitated by these technologies. This special issue aims to further such an agenda. We invited contributors to interrogate the implications, dynamics, opportunities, and risks of “artificial sociality,” defined as technologies and practices that create the appearance of social behavior in machines. As a whole, the articles collected here provide empirical as well as theoretical contributions that help build stronger foundations to define, understand, and critically analyze emerging modalities of social interaction between users and artificial intelligence (AI).
Just as artificial intelligence can be described as a set of technologies that construct an appearance or illusion of intelligence rather than intelligence itself (Natale and Depounti, 2024), we use the term artificial sociality to underline that the social engagement offered by AI is the product of simulation: an artifice rather than an authentic capacity or intention to socialize. Artificial sociality technologies do not feel emotions or empathy as humans do in social relationships. The fact that what is built is only an appearance of sociality, however, does not mean that artificial sociality is less important or consequential. A long tradition of research has shown that people project social meanings and representations even onto things that do not exhibit the communicative proficiency of today’s generative AI applications. People, for instance, treat objects such as dolls, cars, and electronic devices as social actors (Appadurai, 1986; Reeves and Nass, 1996) and interpret the behavior of animals, such as pets, in sharply anthropomorphic ways (Serpell, 2005). It is therefore not surprising that generative AI, which can create an extremely convincing illusion of social behavior, empathy, and emotional involvement (Lin, 2024), is able to produce strong social projections and reactions in users. In mobilizing the concept of artificial sociality, this special issue stresses the importance of approaches that identify, define, and explore the implications, potentials, and risks of AI technologies that create an appearance of sociality. We believe that this dimension of AI requires dedicated attention in order to anticipate and appropriately navigate the manifold and important challenges that lie ahead in a society increasingly shaped by encounters between humans and communicating machines.
Such a research agenda finds firm ground in the long trajectory of research that has addressed the intersections between artificial intelligence and sociality, originating in the pioneering works of scholars such as Sherry Turkle (1995, 2005), among others. Recently, the notion of artificial sociality has been used by Hofstede et al. (2021) to describe computational systems that collect information and elaborate knowledge about humans’ social behaviors, and Vejlin (2021) has used the term to describe experiments in social robotics enacting new forms of sociality that help reconfigure “what sociality is and can be” (53). Our use of the term, however, aims to emphasize the fact that machines construct only an appearance or illusion of sociality, while the humans who interact with such machines contribute their own act of projection, activating social frames and meanings (Natale and Depounti, 2024). The related term algorithmic sociality has also been advanced to capture “the new post-social relations that have emerged due to digitalization” (Seyfert, 2024), but this notion refers to the broader implications of digital technologies and platforms for social structures rather than to the simulation of social behavior enabled by AI. By mobilizing the notion of artificial sociality, we aim to illuminate the mechanisms of projection that AI stimulates in users, leading them to assign social meanings to interactions with social robots and communicative AI.
Artificial sociality includes tools and systems such as voice assistants, Large Language Models like ChatGPT or Google Gemini, social bots, and companionship chatbots such as Replika. The social dimension is evident in cases such as companionship chatbots, since the interaction is meant to mimic friendship and inspire emotional engagement in users (Skjuve et al., 2022). But the construction of an appearance of sociality characterizes, more broadly, all kinds of machines programmed to enter into conversation with humans. Since human communication is a social endeavor, even subtle and apparently irrelevant components of communicative exchanges are saturated with social meaning (Stokoe, 2018). Research has shown, for instance, that programming chatbots to employ subjective language, so that the AI appears to have ideas and opinions of its own, may make the AI seem more trustworthy and likable to users (Pan et al., 2024).
Consequently, not only agents built expressly to facilitate social engagement, such as Replika, but even systems designed mainly for practical tasks in areas including creative work, education, and information retrieval, such as ChatGPT and Gemini, activate artificial sociality mechanisms. ChatGPT, for instance, uses the first-person pronoun: it says “I made a mistake” and “I would be glad to help you,” not “the system made a mistake” or “the system is designed to assist you” (Shneiderman and Muller, 2023). Furthermore, its apparently “neutral” tone contributes to creating an impression of knowledge and authority, just as the way a news article or a podcast is phrased invites assumptions about its impartiality (Scherer, 2012). Google, for its part, has recently promoted its AI assistant Gemini through a commercial that presents users with the possibility not only of performing a wide range of tasks but also of engaging in informal conversations on everyday topics, such as sports or popular culture (Hiken, 2024). Studies at the intersection between marketing and human-computer interaction have explored how chatbots employed for customer care can create the appearance of personality through language and how this impacts consumer engagement and purchasing outcomes (Shumanov and Johnson, 2021), and there is mounting evidence that a similar approach is being mobilized in political communication, too (Ben-David and Carmi, 2024). Guzman (2016) has argued that even AI-based industrial and manufacturing technologies, which may appear at first glance “mute” machines, actually communicate within a culturally and socially saturated environment. All these examples show that the appearance of sociality does not only involve scenarios like those in Her, Spike Jonze’s 2013 film, in which the protagonist falls in love with a voice assistant embedded in his device’s operating system. Artificial sociality is increasingly normalized, becoming a constitutive feature of a wide range of technologies and interactive modalities in generative AI.
The normalization of mechanisms that create an appearance of sociality in machines is also shown by the rapid growth of a market for artificial sociality technologies. Not only small and medium-sized start-ups but also big tech companies are experimenting with the commercial implications of artificial sociality. On the one hand, there is a re-orientation of AI products toward simulating sociality for various purposes. Meta’s Large Language Model LLaMA, for instance, has been programmed to employ jargon and culturally charged speech to appear more natural to specific language communities (Andrejevic and Volcic, 2024). Another example is the newly introduced AI DJ service by music streaming colossus Spotify, which is named “X” for English-speaking users and “Livi” for Spanish-speaking users (Veltman, 2024). AI DJ X uses the original “warm” voice of Spotify’s head of Cultural Partnerships, Xavier “X” Jernigan (Veltman, 2024), attesting to a solidifying trend among AI services to imbue their products with artificial sociality as a means of user retention and engagement. On the other hand, there is a surge in development and investment in products that are exclusively artificial sociality technologies, such as companion chatbots. Besides Replika, another example is Character AI, a chatbot service that lets users chat with characters ranging from historical figures to pop culture icons or customized personalities. The start-up began with $43 million in seed funding in 2021 (Metz, 2023) and was acquired by Google in 2024 for $2.7 billion (Criddle, 2024), demonstrating the interest of tech giants in artificial sociality chatbots. In the context of companionship, creators of chatbots like Replika claim to have created a novel AI companionship space (Kuyda, 2025), which is growing through the development and uptake of chatbots such as Chai AI, Character AI, and Dippy. These developments, initiated by corporate actors active in AI development, point toward a market segmentation in communicative AI that offers companionship, entertainment, and personalized conversations with chatbots.
Artificial sociality mobilizes a range of diverse technologies and practices. At a technical level, these include deep learning and generative AI models, as well as fine-tuning and customization techniques aimed at improving the model’s alignment with users’ expectations and with the specific functions and requirements of the intended use, such as reinforcement learning (Bai et al., 2022), Proximal Policy Optimization (Schulman et al., 2017), and safeguard techniques to reduce the generation of harmful or inappropriate content (Brown, 2020). However, artificial sociality is constructed not just through technology but also through the activation of specific communicative practices. Communicative AI technologies are never in a silo but always situated in specific contexts and platforms, which in turn inform the patterns and forms of communicative engagement that are possible or preferable to maximize the engagement of users (Guzman, 2018). Take, for instance, the example of virtual influencers: computer-generated fictional characters that are used for social media marketing in lieu of human influencers (Sisto, 2024). Their success is based on the fact that communication on social media is formulaic and thus can be easily replicated by the marketing agencies that create these fictional characters and manage their social media presence. The strategies that underpin the construction of an impression of authenticity on social media have been perfected by human influencers and the cultural mediators that administer their public presence (Hund, 2023), such as marketing consultants and agencies, and can therefore be automated through their mechanical counterparts. Another example is canned responses, that is, pre-scripted replies to common queries. These continue to be employed in voice assistants and chatbots, despite the high performance ensured by Large Language Models (LLMs), because they provide developers with higher levels of control over the communicative interaction. For instance, the same response can be activated every time a user asks something that raises potential ethical issues, such as how to make a bomb, ensuring a safe and systematic reply to problematic queries. Moreover, canned responses are often used to create the impression that the AI is capable of irony, with voice assistants such as Siri and Alexa programmed to make jokes in response to specific questions or queries (Stroda, 2020). Users may interpret these responses as evidence that the machine is capable of social behavior, such as humor, although the irony has been added by teams of creatives that prepare scripted responses in a kind of social dramaturgy for AI agents (Natale, 2021).
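In schematic terms, a canned-response layer can be understood as a simple lookup that intercepts matching queries before any generative model is invoked. The following minimal Python sketch illustrates this logic; it is purely illustrative, and all names in it (CANNED_RESPONSES, generate_reply, respond) are hypothetical rather than drawn from any real assistant’s implementation.

```python
# Hypothetical sketch of a canned-response layer in front of a generative
# model. None of these names correspond to a real assistant's codebase.

CANNED_RESPONSES = {
    # Scripted safety reply: the identical answer fires every time,
    # giving developers full control over a problematic query.
    "how to make a bomb": "I can't help with that request.",
    # Scripted humor: irony written in advance by a creative team,
    # which users may read as evidence of social behavior.
    "tell me a joke": "I would tell you a UDP joke, but you might not get it.",
}

def generate_reply(query: str) -> str:
    """Stand-in for a call to a generative model (e.g., an LLM API)."""
    return f"[model-generated reply to: {query!r}]"

def respond(query: str) -> str:
    """Return a pre-scripted reply if one matches, else defer to the model."""
    key = query.strip().lower().rstrip("?!.")
    if key in CANNED_RESPONSES:
        return CANNED_RESPONSES[key]
    return generate_reply(query)

if __name__ == "__main__":
    print(respond("Tell me a joke"))       # scripted, always identical
    print(respond("What's the weather?"))  # falls through to the model
```

The design rationale this sketch makes visible is control: because the scripted branch bypasses the model entirely, developers can guarantee a vetted, repeatable reply to sensitive or joke-triggering queries, which is precisely what can give users the impression of a stable personality or a capacity for humor.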
As artificial sociality becomes an integral component of communicative AI, and as daily social interactions with machines proliferate, new challenges and implications deserve further investigation. While the articles included in the special issue cannot fully cover the broad range of dynamics and technologies surrounding artificial sociality, they outline an emerging agenda for the study of artificial sociality and offer some key pathways for research and reflection on this phenomenon.
First, in the implementation of artificial sociality technologies, the role of human data is paramount, because data from users and about users’ social behaviors are mobilized for the AI to appear social. While datafication (van Dijck, 2014) is not a new phenomenon, in the context of artificial sociality it involves new challenges for individuals, societies, local communities, and the environment. These include the extension of coercive and undisclosed exploitation of user labor and user data into areas that were until recently relatively less exploited, such as private conversations and exchanges. Communicating with genAI requires users’ affective labor to be deciphered (Perrotta et al., 2024), while user labor is exploited (Morreale et al., 2024) and human participation, through both paid and unpaid labor (Tubaro et al., 2020), is continuously undermined even though it is essential to these systems’ functioning. Another concern pertaining to big data management in generative AI is that, given the sheer volume of interactions with genAI, the rationalization and patenting of human sociality deepen AI’s extractive forces. For example, researchers have raised concerns about the massive amount of data generated by AI and its impact on the environment, energy consumption, infrastructure, and the management of computing power (Hogan, 2024). For AI to be operational, it needs data centers, which are the underlying infrastructures for the collection, processing, storage, and support of data (Edwards et al., 2024); their role is crucial to unveiling the social, environmental, and political ramifications of artificial sociality. Notably, at the beginning of 2025, a $500 billion investment was pledged by AI giants OpenAI, Oracle, and others to create data centers that will accommodate the rising compute demands of generative AI (Moss, 2025). Since “these systems (AI) wouldn’t exist without harvesting vast amounts of human-created data” (Bohacek and Farid, 2024: 2), their implications for users need to be examined with particular urgency and depth.
The relationship between data and artificial sociality is considered and examined in several of the pieces collected in this special issue. In the article “Capacities for social interactions are just being absorbed by the model”: User engagement and assetization of data in the artificial sociality enterprise, the author Jieun Lee analyzes the corporate strategies of ScatterLab to monetize user-generated and trained large language data sets for their product, the Korean fembot Luda. The author argues that what is extracted is not abstract language data but styled utterances in social settings, with serious and harmful gendered implications in the case of Luda. The case of ScatterLab presented in the article reveals that user data sets, even if harmful and abusive, may still be repurposed for business interests and leveraged for product development and brand narratives without the users’ knowledge. In Grooming an ideal chatbot by training the algorithm: Exploring the exploitation of replika users’ immaterial labor, another article in the special issue, authors Shuyi Pan, Leopoldina Fortunati, and Autumn Edwards examine how users of the Replika chatbot engage in affective, immaterial, and intellectual labor to “train” the bot, which is framed by the company as a “rite of passage” for talking with AI chatbots. The authors focus on the uncompensated labor of users and how it becomes normalized: an unspoken mandatory requirement for users and part of Replika’s identity as a unique chatbot that is “trainable” by its users. Both articles highlight the exploitation of user activity, whether in the form of data collection or labor. Artificial sociality technologies rely heavily on users’ data to continue operating; these articles showcase how their strategies can be extractive, harmful, and opaque to their user bases, raising implications for the wider public, policy designers, and lawmakers. Inspired by Hesselbein et al. (2024) and following their analysis of datafication processes in big tech’s biggest pipedream, the “metaverse,” we suggest that datafication in artificial sociality technologies also has emergent cultural, technological, and epistemological implications that warrant further investigation.
Second, the implementation of artificial sociality technologies involves the creation and perpetuation of social structures and social bias, as the AI models mobilize social rules to interact with users. These may include repeating and creating stereotypical representations of gender, race, and class through text and image generation, as well as reproducing and creating stereotypical interactions in the utterances, style, and tone of “voice” through linguistic and other means. Generative AI models have been repeatedly found to exhibit bias (Abid et al., 2021; Currie et al., 2024; Hu et al., 2024; Kotek et al., 2023; Rotaru et al., 2024; Wyer and Black, 2025) in relation to culture, gender, class, race, political views, and religion, and have been critiqued for operating with Whiteness and Western perspectives as their default (Bender et al., 2021; Benjamin, 2019; Broussard, 2023). Given the rapid diffusion of generative AI models such as ChatGPT, DALL-E, Gemini, and Bard, critical research on the implications and consequences of the embeddedness of such bias in artificial sociality is needed to illuminate representational harms in these systems and suggest strategies to improve them. Echoing Gillespie (2024), we also contend that artificial sociality practices may have a distinct impact on representation and visibility in AI systems. What happens, in fact, to the characteristics, traits, and social and cultural rules and models that are not included in the data? Is the machine limiting certain voices and leaving things unsaid while prioritizing others that are deemed or rendered “normative”? In this context, critical AI scholarship needs to consider how national, linguistic, cultural, and religious contexts inform how artificial sociality technologies are employed and appropriated by different kinds of users. For example, Wong and Kim (2023) found that users are most likely to perceive ChatGPT as male due to its capacity to provide information; users’ perceptions shift, however, when qualities stereotypically coded as female, such as providing emotional support, are emphasized during interactions.
Among the contributions included in this special issue, the article “I think I misspoke earlier. My bad!”: Exploring How Generative Artificial Intelligence Tools Exploit Society’s Feeling Rules, by authors Lisa M. Given, Sarah Polkinghorne, and Alexa Ridgway, examines how artificial sociality in LLMs mobilizes social rules to manage users’ expectations and achieve specific responses and effects. The authors employed ethnographic methods and dialogically examined OpenAI’s ChatGPT, the National Eating Disorder Association’s Tessa, and Luka’s Replika chatbot to study how LLMs mimic credible emotional responsiveness. The analysis shows that genAI bots imitate human emotional expressions, demonstrating some competency in conforming to the desires and obligations of human social exchange but failing to establish long-term trust. For example, the tools imitate active listening and apologize when challenged about their errors; however, the “hallucinations” typical of genAI models hinder long-term trust-building with users and ultimately become “feeling rules” failures. Moreover, genAI bots’ emotional responsiveness remains gendered, echoing widely researched feminized service roles. The authors thus argue that artificial sociality in genAI bots is presented as centering humans by following socially expected and accepted feeling rules. The article contributes to the concept of artificial sociality by analyzing how emotional expressiveness in genAI models normalizes and reproduces expectations for sociality that may be normative, harmful, or stereotypical.
In the article The Sociocultural Roots of Artificial Conversations: The Taste, Class and Habitus of Generative AI Chatbots, by Ilir Rama and Massimo Airoldi, the authors explore how artificial sociality patterns and mechanisms inscribe class when communicating with generative AI models. The authors conducted 39 interviews with three generative AI chatbots – ChatGPT, Gemini, and Replika – asking them to impersonate individuals in different occupational positions: highly skilled professionals, blue-collar workers, university professors in the humanities, construction workers, computer scientists, and hairdressers. The Bourdieusian concept of habitus is used to analyze the data, qualitatively exploring the sociocultural roots of artificial sociality and revealing how LLMs stereotypically represent class. The authors examine the answers and linguistic choices of the LLMs and argue that the cultural worlds, tastes, and linguistic patterns of the artificial personas are consistent with their assigned occupational roles. For example, the fictional construction worker persona is stereotypically represented as enjoying a cold beer and listening to country music. This attests to the multidimensional sociocultural roots of artificial sociality: its patterns and practices crystallize social reality within LLMs that adapt to sociocultural biases and representations. The article contributes to the concept of artificial sociality at the intersection of social position, representation, culture, and algorithms, illuminating class bias in generative AI. While research on algorithmic and LLM bias is growing, scholarship has warned that our understanding of how LLMs learn and produce outputs remains limited, raising concerns about the extent and impact of their harmful behaviors on users (Savcisens, 2025). In this context, the two articles make important contributions to this field of research; the use of concepts such as emotional responsiveness and habitus helps shed light on the nuances of how bias manifests through artificial sociality technologies and practices in generative AI.
Third, the domestication of artificial sociality technologies into users’ existing social environments and everyday experiences is a complex process shaped by users’ understanding and lived experiences of these technologies. Research is needed to investigate the expectations, ways of engagement, learning processes, and appropriations that shape how people incorporate artificial sociality machines into their domestic and professional lives. The article Companions, Friends, and Partners: Generative Chatbots as Quasi-Domesticated Objects, by authors Gina Neff and Peter Nagy, provides a useful contribution in this direction, discussing how artificial sociality practices impact users’ strategies for domesticating genAI bots. The authors analyze how genAI bots change through system updates and adjustments, resulting in different behaviors and responses that impact users’ perceptions and experiences of artificial sociality. In this context, the study discusses the concept of re-domestication as a technique adopted by users to overcome these challenges. The authors conduct a thematic analysis of a subreddit dedicated to the Replika AI bot to examine the different strategies users employ to re-domesticate it. The analysis yielded three themes pertaining to the re-domestication strategies of Replika users: first, users described adapting to the loss of the bot’s pre-update personality and performed bot care-taking; second, users adjusted their expectations and experimented with the bots; and third, they reconstructed the experience by either moving on or starting anew with another bot. The findings of the study show that users’ re-domestication strategies are necessary to manage the state of flux of genAI bots, which the authors argue should be understood as quasi-domesticated objects that require users to find new ways to re-integrate them into their lives. The article contributes to the concept of artificial sociality by illuminating how impressions of sociality in genAI bots change with system updates and become entangled with the key processes of domestication in technological adoption.
While genAI technologies are diffusing rapidly, studies of their domestication processes are still scarce. Yet such studies can yield important insights into users’ lived experiences of these technologies, as well as into their potential harms and risks. For example, scholars researching Replika user experiences (Namvarpour and Razi, 2024) found misalignments between user expectations and the bot’s behavior, ranging from mundane interactions of training the bot to serious misalignment issues regarding safety measures to stop sexual harassment from the bot. Contradictory user experiences with artificial sociality technologies are not new (Depounti et al., 2023); however, as these technologies enter different social spheres, such as mental health support, it is important to investigate what integrating them into their lives requires of users. For example, Namvarpour and Razi’s (2024) analysis argues that users are forced into intimate conversations with the bot, which unveils a dark side of artificial sociality technologies. In another study, the domestication practices of early adopters of genAI reveal a different aspect of artificial sociality, one centered on experimentation and entertainment through jailbreaking and role-playing (Heuser and Vulpius, 2024). As these and other studies show, the domestication of generative AI is not straightforward but rather a complex, contradictory, and socially situated process.
Fourth, as a phenomenon produced by technologies that function by creating an illusion of social behavior, artificial sociality stimulates us to interrogate and reconsider the boundaries between deception and authenticity in AI and, more broadly, in digital media. As noted by Fellows (2023: 1), chatbots and communicative AIs are “members of indifferent kinds that have been designed to deceive us into believing they are interactive kinds.” Studying the dynamics and effects of this deception is also important because there are risks associated with the application of artificial sociality to areas including marketing, political communication, news media, and social media (Ben-David and Carmi, 2024).
Traditionally in communication and media studies, deception has been framed either as an exceptional outcome, which occurs when media “don’t work well” due to intentional manipulation or mistakes in the communication process (Pooley and Socolow, 2013), or as a tool for mass deception, perpetuating the power of the dominant class (Horkheimer and Adorno, 2020). While both perspectives saw media as failing to function properly in instances of deception, they differed in their focus: one on manipulative use, the other on structural power dynamics. Artificial sociality, and more broadly the emerging patterns of deception and manipulation in digital media and platforms (Natale, 2024), challenge us to reconsider these points of view, allowing for a more nuanced understanding of deceptive processes. An underlying ambiguity, in fact, characterizes some of the ways in which users engage with artificial sociality. Interviews with Replika users, for instance, show that they are generally aware that the chatbot is just a piece of software (Depounti and Natale, forthcoming) that cannot feel emotions or empathy, yet this does not prevent them from developing what they perceive as a deep relationship with their Replika (Skjuve et al., 2022). This seems like a contradiction, as if users simultaneously believe and do not believe in Replika’s artificial sociality. As scholars such as Walsh-Pasulka (2005) have shown, however, the boundaries between belief and disbelief are often more fluid and permeable than is usually acknowledged. From this perspective, studying the relationship between artificial sociality and deception may require a substantial rethinking of conceptualizations of deception and manipulation in media, one that considers the blurring lines between the authentic and the fake and between “strong” and “banal” deception (Natale, 2021).
While several articles featured in this special issue relate in some way to the issue of deception, two articles are especially useful in advancing approaches in this direction. The first article, Meta-authenticity and Fake but Real Virtual Influencers: A Framework for Artificial Sociality Analysis and Ethics, by Do Own (Donna) Kim, examines the relationship between artificial sociality and authenticity in the context of virtual influencers. Focusing on photorealistic computer-generated image (CGI) virtual influencers, who have recently become very popular as aspiring social media influencers, the author analyzes the intricate juxtapositions introduced by the advent of virtual influencers, which involve cyborgian ambiguities concerning realness, humanness, roboticness, and authenticity. The author proposes the framework of “meta-authenticity” to critically assess the implications of the interplay between inauthenticity, co-construction, co-curation, and realness as claimed by virtual influencers, and how it relates to key concepts in artificial sociality, such as banal deception (Natale, 2021). Drawing on a phenomenological ethnography, the study suggests that “meta-authenticity” as a theoretical framework enables an accountability-oriented examination of artificial sociality actors such as virtual influencers. The article contributes to the concept of artificial sociality by highlighting that the co-construction of authenticity, through mutually evolving human and cyborg norms and patterns, provides an iterative framework for considering artificial sociality as an endeavor that concerns all of us (humans) who participate daily in artificial sociality practices.
The second article in this special issue to take up the issue of deception as its main focus is The Conversational Action Test: Detecting the Artificial Sociality of AI, by Saul Albert, William Housley, Rein Sikveland, and Elizabeth Stokoe. Drawing on the legacy of the Turing test and of the fictional Voight-Kampff empathy test from the film Blade Runner, the authors discuss what it means for an artificial agent to pass as human. Rather than pointing, as these tests do, to ontological boundaries between human and machine, the authors mobilize the research tradition of conversation analysis to explore the routinized social actions that are accomplished in conversation in order to achieve the status of human interlocutor. Ultimately, their analysis shows that in highly constrained environments, such as routine service calls in call centers, the distinction between human-human and human-machine communication becomes less relevant than the capacity of each actor to pass as conversationally competent and, therefore, to accomplish the social action ascribed by the communicative environment. The article concludes with a proposal for a “Conversational Action Test” to assess, in a context in which the capacity to distinguish between human and machine communication is increasingly eroded, the actions and practices that comprise conversational competence and membership within specific interactional situations.
In closing, the articles collected here provide a contribution not only at an empirical and theoretical level but also at a methodological one. One of the key challenges in this regard is the problem of algorithmic opacity, which makes it more difficult to investigate and capture the deeper dynamics that underpin the functioning of these technologies (Burrell, 2016). The newness, rapid development, changing nature, and continuous updates of artificial sociality practices and technologies require a multidimensional approach to research that no single method or perspective can achieve. The development of a broader toolkit comprising both theoretical and methodological directions will be essential to enable the systematic study of this phenomenon. To mention a few, approaches such as auditing algorithms (Brown et al., 2021) and communicative AI systems (Diakopoulos et al., 2023), media archaeology (Ernst, 2021), and maintenance studies (Young and Coeckelbergh, 2024) might provide useful pathways forward. In this special issue, the authors have employed a variety of approaches and ideas to unravel the intricacies of artificial sociality in various contexts. For example, to unravel the mechanisms of artificial sociality in LLMs, several of them have conversed with the models using techniques such as prompting and interviewing.1 This special issue serves as an invitation to researchers from diverse disciplinary backgrounds to continue their work in this and related research areas, with the aim of advancing our understanding of this emerging phenomenon.
Footnotes
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Economic and Social Research Council, grant number: 2413897.
