Abstract
This article examines the mediating role of social scientists in the cultural integration and regulation of artificial intelligence (AI), with a particular focus on the creative industries. Drawing on an ethnographic case study within a film cooperative, it identifies four modalities through which social scientists become enrolled in AI-related organizational processes: as middlemen linking theory and practice, as distributors facilitating the flow of agency, as coordinators bridging innovation and appropriation, and as hosts observing the reproduction of technical skills. Situated at the intersection of Science and Technology Studies (STS) and Media Studies, the article rethinks mediation not as passive translation, but as an active montage of fragmented meanings, practices, and actors. It argues that AI is not merely an object of study but a distributed assemblage whose significance emerges through situated associations. By articulating how social scientists engage with AI through organizational consultation, cultural programming, and collaborative experimentation, this paper reframes the sociology of AI as a field of strategic, reflexive, and creative intervention. In doing so, it highlights the importance of problematizing mediation as a relational practice that connects cultural actors, technologies, and institutions in the evolving ordering of artificial intelligence.
Last year, I contacted the director of a filmmaking cooperative to do participant observation in his organization. Because my research focuses on the use of digital technologies in creative practices, he offered me the opportunity to participate in a “Cycle on AI” they had just launched and asked me to coordinate an organizational reflection on the use of generative AI. The managers of the cooperative had only a vague idea of their members’ experiences with AI and of their hopes and fears. For six months, I conducted research on the use of AI in filmmaking, collaborating with a consulting firm to conduct surveys and interviews with their members and organizing masterclasses with various experts.
In recent years, I have participated in several collective discussions on AI among groups of artists, organizations, sectoral associations, and government working groups. Every time, I have observed cultural workers foreseeing upcoming social transformations of their field due to the progressive integration of AI in creative practices. Since the promises made by AI developers regarding the potential impacts of their innovations are so broad, cultural actors find themselves seeking mediators who do not have a marketing program behind them. For social scientists to be effectively involved in those situations, however, it is necessary to problematize mediation and its place in the sociology of AI.
In this article, I will discuss the roles assigned to social scientists involved in organizational reflections on the integration of AI and the resistance to its adoption. To do so, I will describe the mediating role of social scientists in the collective reflection on the integration of AI that is currently underway in the cultural sector. After a brief theoretical presentation of mediation in Science and Technology Studies (STS) and Media Studies, I will reflect on the symmetrical mediation of AI and social scientist in the context described above, my participation in a film cooperative as a researcher in Communication Studies. I will cover four forms of enrollment in this context: (1) as middleman, I processed middle-range theories of AI to connect research and practice; (2) as distributor, I consulted the members of the cooperative and participated in a form of distribution of agency between managers and filmmakers; (3) as coordinator, I connected conceptions and appropriations of technology, from innovation to action; (4) as host, I witnessed the development of technical skills and hosted the reproduction of new technology myself. What matters here is not the forms of mediation and their associated social orders in themselves, but how the social researcher studying AI is positioned within them. I will argue, inspired by the theory of film montage, that the role of mediation is as much to cut and select from previous significations of AI as to connect various pieces together. Mediation thus reveals new forms of meaningful association situated in the meeting of AI and its mediators.
The work supporting this argument is based on an autoethnographic approach to the fieldwork I conducted (Ellis et al., 2011). As such, my involvement as a social scientist was analyzed not so much to understand the applied sociology of AI in research projects that engage researchers and participants symmetrically, but rather as a means to study the cultural practices and meanings associated with AI in the situation in which I participated. The particularity of this fieldwork is that, aside from my fieldnotes, the data were collected for the organization I joined: documents, surveys, and interviews were created for and with the actors I observed. I still analyzed them using a method inspired by grounded theory (Paillé, 2011), but my fieldnotes were necessary to account for my involvement in the creation of these data. The four figures of the social scientist described below emerged iteratively through the comparison of tasks assigned to me throughout my fieldwork. In the discussion, I will demonstrate how these roles are connected to the meanings of AI for the cultural workers I have observed.
On Regimes of Delegation and Mediation
Bruno Latour synthesized his now-famous proposition, known as Actor-Network Theory (ANT), in a conference titled “Information Technology and Changes in Organizational Work” (Latour, 1996). One of the conference’s subjects was AI, which was understood at the time as a complement to information and expert systems. Latour’s argument was quite simple: since sociological categories had been deconstructed by poststructuralists and reordered by waves of new interdisciplinary fields of study, including Science and Technology Studies (STS), sociological descriptions of AI and its contextual understanding within organizations were problematic. “In the following statement, ‘information science and artificial intelligence in human organizations,’ only the two couplers ‘and’ and ‘in’ have remained unscathed! Each of the six other words has been reformatted beyond recognition” (Latour, 1996, p. 300). His arguments, which are well-known to anyone familiar with ANT, are still worth examining for their clarity.
Information is rooted in the world, or a world, if you take the full relativistic approach, as it links entities to one another through situated reference, allowing for action at a distance. Science is the animation of society by a network of actants, ranging from sophisticated instruments to tiny non-human observers, through local practices that establish “real” connections between wider entities. Artifacts, the material components that make up artificial machines, are not passive receptacles for social values, but are part of the collective as active social agents, adding intentions and meanings to scripts of actions. Intelligence is unrecognizable to psychologists, “more akin to heterogeneous engineering and world-making, a distributed ability to link, associate, tie, fragments of reasoning, stories, action routines, subroutines, and to hang them to many holders” (Latour, 1996, p. 301). The human is disfigured by the engineering dream of morphing it into a rational machine and the humanist counter-dream of recovering its intentionality, reflexivity, and coherence of values; torn between these two dreams, it finds itself resembling a cyborg, neither a perfect machine nor a pure human. Finally, organizations are now filled with humans and technologies, and their stories and programs of actions: “It is no longer clear if a computer system is a limited form of organization or if an organization is an expanded form of computer system” (Latour, 1996, p. 302).
The goal here is not to boil down “what is sociological about AI” to artifacts’ intentionality and distributed world-making capabilities. It will be argued, though, that we find ourselves in a similarly fragmented situation as Latour’s, with recent developments in AI and how we collectively make sense of it. From mapping the extraction of rare minerals, the value of labor, and everyday data (Crawford, 2022) to decolonizing data territories (Mejias & Couldry, 2024), the spatial depiction of AI represents its embeddedness in complex sociotechnical systems that cannot be reduced to systemic structures but must be made sense of through their fragments. This critique helps us understand the global power relationships between Big Tech companies and other actors involved in developing AI’s infrastructures and applications. Closer to ANT, it is even argued that the problematization of AI is regionalized by unequal translators who support and critique AI in the public sphere (Roberge et al., 2020).
Regardless of how the fragments are displayed, this global image does not convey the social context in which AI applications are used. In the creative industries, for example, the integration of AI is part of a redefinition of labor, from creators’ practices (Chow & Celis Bueno, 2025) to policy reforms (Lee, 2024). The intersection of AI and creativity calls for a re-evaluation of political economic issues of the creative industries as much as the sociotechnical imaginaries associated with creation (Kofler et al., 2024). To make sense of things such as AI through these various scales, Latour refers to different “regimes of delegation,” which describe the relative extension of programs of actions, from personal intentions to infrastructural coordination. To illustrate the shift in various degrees of technical mediation, Latour (1994) refers to the famous prelude to Stanley Kubrick’s 2001: A Space Odyssey (1968). After a dark monolith—a mysterious and powerful black box prefiguring the tenebrous HAL 9000—appears to a band of monkeys, one of them takes a bone and transforms it into a weapon. Then, as the bone flies into the sky, it becomes a futuristic space station through the power of montage. The transformation from bone to weapon to high technology is also the translation of sociotechnical humans from apes to warriors to space travellers. In a Latourian interpretation of the sequence, this is not a representation of linear progress, but the mediation of various social and technical actors in complex programs of actions.
Mediation is one of those fragmented concepts presented by Latour. Leah Lievrouw (2014) highlights three components of mediation in Communication and Media Studies: artifacts bring material considerations related to communication technologies in use; practices highlight actions, interactions, and habits of certain individuals and communities; and social arrangements form interpersonal relations, organizations, and institutions. This mediation approach, applied to the social study of AI, helps us understand not what is sociological about it at a definitional level, such as what is artificial and what is intelligent in its most basic form, but where artificiality and intelligence are being distributed and how to describe the displacements of those fragmented concepts in various situations (Celis Bueno et al., 2024). In the following sections, I will describe the symmetrical mediation of AI and social scientist in four contexts, which we can associate with as many “regimes of delegation” (using Latour’s vocabulary) or as associations of the components of Lievrouw’s mediation.
Sociotechnical Mediation of AI and Social Scientist
A classic problem in STS consists in asking how artifacts and other non-human entities act as intermediaries in a given situation, ranging from scallops (Callon, 1984) to robots (Vertesi, 2015). With AI, this problem is particularly evident, for example, in the representation of “smart” objects in technological innovation discourses (Irani, 2023), or in understanding the infrastructural politics of algorithmic recommendation systems (Seaver, 2021). In all cases, but more specifically with AI, artifacts are both mediated and mediating as objects and subjects of complex sociotechnical assemblages in which, as processes of mediation usually work, most actors are listening to a minority of representatives expressing a collective program of actions (Ananny, 2024; Marres et al., 2024). It is almost as classical a problem in STS—but far more debated—to reflect on the role of social scientists in that mediation (Lynch, 2000; Woolgar, 1988).
As described above, I was recently enrolled in a filmmaking cooperative whose managers sought to understand the current state of AI in the practices of their members. This description, they reckoned, would help develop policies that prescribe the use of those technologies within the organization and outside (e.g., in their relationships with financing institutions and film festivals). In this scenario, the social scientist (me) was enrolled in four distinct but related types of mediation of AI: as middleman, distributor, coordinator, and host. The description of those figures will help understand the mediation of AI in the context of this creative organization.
Social Scientist as a Shifty Middleman
When I started my internship, my manager suggested I begin my job by reviewing research on the use of AI in filmmaking and other creative practices. Before doing the literature review, I expected the members of the cooperative to be both hopeful that certain tedious tasks would be performed automatically by AI, and anxious that new procedures would constrain the artistic process. Up until then, filmmakers had been faced with discourses expressing opposite visions of AI: on one side, the apologetic discourse of start-ups continually offering new products to free creators from non-creative tasks, and on the other, the famous resistance of unions to the restructuring of creative work in the American film and television industries. Yet, Silicon Valley and Hollywood’s technophile and technophobic visions of AI and audiovisual creation are rooted in the American studio system; AI will not have the same impact on filmmakers in a regional town in Canada as it will in the globalized American creative industries. So, I set out with a generic goal: to present research on AI in the world of cinema to members of the film cooperative, taking specific consideration of their local production, post-production, and distribution activities, as well as a more general questioning of the transformations in the worlds of art associated with AI.
Considering the context, it seemed pertinent to search for studies of local integration of AI and issues specific to independent filmmaking outside of production centers. Although this specific type of research is scarce, there is a variety of representations of AI that can be associated with “middle-range” descriptions and conceptions of the technology (on middle-range theory in STS, see Wyatt & Balmer, 2007). That is, actual case studies that enrich AI’s meaning beyond its mediatic representation. Those studies focus on algorithmic recommendation (Frey, 2021), prediction (Chow, 2020), and content generation (Hales, 2021). Acting as a liaison between theorists (the authors of the literature) and practitioners (the cooperative’s filmmakers) in a collaborative reviewing process, I was enlisted as a “middleman” in this script. I was no more than a speed bump forcing drivers to slow down: my role was to bring a series of past intentions of authors and make them converge, through ongoing discussions and dynamic presentations, in the present of the filmmakers with whom I collaborated. In the process, my collaborators slowed down from their ordinary practices to shift their use of AI-as-a-tool to their appropriation of AI-as-a-program, as a modus operandi, a dispositif. Through this shift, they began reflecting on their own transformation from creators of works of art to users of technology.
This mediation is reminiscent of Kubrick’s montage, where the social scientist is not the almighty editor, but merely the cut—the connector of previously independent entities. As in the film 2001, the historical mediation of AI was particularly meaningful, especially in a text by film historians interested in technological innovation, who argue that AI and other digital technologies may appear as both a revolution and a transition in cinema (Gaudreault & Marion, 2023). According to them, forms of disruptive discontinuity and tranquil continuity can be observed in plural temporalities, depending on the sector of cinema being examined. With digital technologies, including AI, Gaudreault and Marion described three recent crises faced by filmmakers. First, a crisis of the profession, with AI-facilitated technologies promising to awaken the professional within all of us, multiplying the expressive and creative possibilities of every amateur who wants to play at being a pro. Second, a crisis of authorship, in which the revered author is replaced by anonymous workers enrolled in a technology-driven process that sums and averages databases of scripts and films to compose “original” creations. Third, a crisis of representation, as the act of filming is progressively replaced by that of generating, leading to a Copernican decentering of the figurative image and the representation of reality.
As a middleman, when I presented these theories on film history to filmmakers, I was told that they personally explore the same themes in their experimentations with AI as in their previous practices: the same narratives, characters, frames, movements, sounds, and places. However, the filmmakers I interacted with approach those themes differently with AI because they do not collaborate with co-writers, directors of photography, operators, actors, and musicians, and they do not operate a camera on set. They interact with models of characters, worlds, and styles they have generated and trained through prompts and settings. Despite the change in practice, they do not view this shift from filming, capturing, and recording to content generation, prompt engineering, and training algorithmic models as a “crisis of representation.” Yet, the latter actions require different skill sets than the former, because the use of multiple tools constantly offered through the rapid cycle of technological innovation calls for continued exploration of procedures rather than the standard industrial practice of professional and unionized sets. If the artists I have observed and consulted do not identify this shift in activities and required skill sets as a crisis, it is because they are already accustomed to being flexible, adaptable, and multitasking—the ethos of independent and regional filmmakers. They are used to constant exploration of tools and workflows. It is not difficult, though, to see that this transition towards new technologies and skills will be experienced as a crisis in other contexts. It might even be perceived as a revolution by some filmmakers. However, the role of the middleman is only to provide collaborators with a shift in perspective, not the meaning they assign to the objects they see from it.
Social Scientist as Distributor
Before I arrived at the cooperative, a consultation firm specializing in the integration of AI in businesses had already been hired to investigate how to govern the use of AI by the organization’s members. My managers asked me to collaborate with two of the firm’s employees to revise a form they wanted to use for a survey and to participate in interviewing the artists who agreed to share their experiences and thoughts on AI with them.
Although my two collaborators wished to adopt a neutral stance towards AI, their survey appeared to me techno-optimistic. My role, at the time, was not to inscribe a critical perspective in the form, nor to make it more neutral. The formal apology of AI was not a problem in itself. The problem was that the survey translated the claims of AI promoters rather than the interrogations of the cooperative’s managers. Along with my supervisor, I had to modify some of its formulations, add questions, and remove suggested answers to make it appear more like the mediation of the organization rather than the representation of a consulting firm. We did not aim to integrate AI into the members’ practices, or even block it entirely, but we needed to empower the managers in the governance of the cooperative.
The problem was similar in the interviewing process. Again, the challenge was about mediation: we needed to incorporate the organizational objectives that brought us together in an interview guide and actualize this script in a procedural interview. Our focus was not on the technology or the practices of our interviewees, but on the social arrangement of the cooperative and the development of a framework that would lead to organizational policies regulating the use of AI in the creative practices of its filmmakers. At some point in an interview, for example, our interviewee wondered what AI was. One of my collaborators, a computer scientist, proceeded to define AI technologically, describing its mechanisms at length. I had to refocus our conversation to obtain the interviewee’s own definition of AI, as understanding what AI is within the complex arrangement of creation matters as much as, if not more than, its mechanisms if the goal is to regulate its use within this creative organization.
My role here was not to act as a middleman, but as a distributor. In the film, television, and other media industries, the term “distributor” is applied to various human and organizational entities, including content producers, marketing companies, and channels, whose activities encompass multiple tasks such as marketing, dubbing, and formatting (Lotz, 2021). The role of the scholar distributor is as varied as in the media sector, but its goal remains the same: to focus on where content is going rather than defining what it is. What was in circulation, in the context described above, was not a material creation, such as a film, but rather questions that needed to be shared among the various actors within the organization, including its managers and filmmaking members. As a mediator, I facilitated the distribution of agency among those actors. It is through this distribution that the answers to the questions we asked would inform an organizational reflection on the use of AI, and eventually, an adaptation of its governance.
The social scientist’s role as a distributor of agency in reflecting on AI should be understood within the complex context of the creative industries. The problem of distributing agency in the discussion and use of AI might be developed as a complex of flows: not only the flow of questions, but also the creative workflow of filmmakers, the flow of data to train AI applications, and the flow of money in developing them, adapting social institutions for their integration, and monetizing what is produced with Generative AI technologies. In practice, the embeddedness of a creator’s practice in the “world of art” or in the “creative industries” leads to different evaluations of the value of a work of art, how it might be used as data to train machine learning models and generate new content, the compensation for this use of art-as-data, the monetization of generated content, and the production of organizational policies or guidelines for this whole process. Indeed, rules don’t apply in the art world, in the tech world, or in the business world the same way as in the social world of the law-abiding citizen. Some artists we surveyed and interviewed campaigned for artistic freedom, accepting that an organization, such as a film cooperative, adopt guidelines for the use of AI by its members, but rejecting the idea of establishing organizational policies, including disclaimers. Others, related more closely to the “film industry,” argued that the use of their work to train AI models should be based on the rules for its distribution: if an artwork is used to train AI, shown in a cinema or on TV, there should not only be compensation, but the rights holders should decide where and how it is used or shown. The role of the scholar-distributor is to ensure that those ideas circulate, are respected, and are evaluated, but not to evaluate them himself. This is where the distributor must be distinct from the consultant, and the researcher from the service provider.
Social Scientist as Coordinator
While reviewing the literature and supporting the consultants surveying the cooperative’s members, I began organizing masterclasses with various AI experts. My aim was to bring together the regional film industry’s creative workforce to collectively engage with the emergence of this new technology. The idea, proposed by my managers, was to give participants a chance to better understand AI innovations through expert presentations, while also encouraging a shared artistic critique of this supposedly inevitable shift. Three main activities were planned: the masterclasses themselves, film screenings, and networking workshops.
The selected speakers for the masterclasses shared diverse views on AI: developers showed how data and algorithms are conceived to predict a film’s financial success; a sociologist stressed the importance of including diverse voices in how AI is funded, built, assessed, and used; an artist saw new technologies as a way to reflect on humanity’s place in the Anthropocene era; research-creation scholars examined how technology shapes creators’ relationships with others and with themselves; managers raised questions about how to integrate AI into organizations and how the arts community can reflect on it together; and I discussed the strengths and limits of academic research on AI in the cultural industries.
AI is often viewed as a technology of the future, but many artists have been utilizing AI tools for years. Some of these tools are already in use in parts of the film industry, particularly in special effects and animation. As part of my role in this project, I organized a short film screening at a nearby cinema to showcase what is already being done—not just in those areas, but also in live-action fiction and documentaries. This screening helped masterclass participants place AI within a wider landscape of digital tools, trace the evolution of image classification and generation over the past decade, and explore how humans and machines interact. The films also raised critical questions about social biases in how identities are represented, and showed how generative AI is being used to explore intimacy, human-nature-urban relationships, ecological concerns, and the beauty that can emerge from both handmade and algorithmic creation.
Given the importance of the aesthetic and ethical issues associated with AI, and above all, the economic repercussions that they may have on professional artists, we decided to encourage the mobilization of both emerging and established artists. We organized workshops, a lunch, and drinks to motivate those exchanges. Along with the masterclasses and the short film program, the networking activities showcased a different yet related role of the social scientist in such a context: as part of an apparatus of coordination, I connected AI from its conception to its appropriation, cinema from filmmakers to films, and practitioners from established to aspiring professionals.
Madeleine Akrich famously related programs of action embedded in technological innovation to forms of coordination. Action, as defined by Akrich in technical terms, involves cooperation between a user and a device, relying on the adjustment of both parties. From the inscription of such actions in the user’s device at the conception stage to their incorporation in the user’s body at the appropriation stage, the coordination of this program also involves intermediaries, guides, auxiliary instruments, and socialized forms of learning (Akrich, 1993, p. 18). As seen in the previous sections, the social scientist-as-middleman and the social scientist-as-distributor are part of this apparatus, acting in this network of collaborators without being a direct user or critic of the technology. Sociologists of AI are particularly attentive to the role of coordinator, as AI is both scientifically and publicly problematized in its conception and use (see, for example, Lysen & Wyatt, 2024 on the refusal to participate in this coordination). The social scientist-as-coordinator, as seen from the activities described above, is engaged in the situation, translating specific claims through the invitation of one expert rather than another, making some affordances more visible in the technology through the programming of a given film, and facilitating connections between users and non-users, among other things. Adopting a technology, as well as critiquing and abandoning it, is facilitated through socialization, which is partly coordinated by social scientists in such contexts.
Social Scientist as Host
The cooperative where I did my internship had a mentoring program. When they receive artists-in-residence, for example, they match them with someone who has the experience to advise them on their project. My manager wanted to do the same with me, as part of a “research-in-residence” program. Through a series of meetings, I met with an artist who actively experimented with AI. He showed me his favorite applications and the discussion groups on various social media platforms where he exchanged knowledge and expertise with other artists. During our final meeting, he demonstrated, live in front of me, new functionalities he had just discovered. The primary goal of the mentorship was to observe AI in action, as it is integrated into a real-world creative workflow, and discuss its cultural value.
At some point, the mentorship stopped appearing to be about or with AI and started to be by AI. My human mentor seemed to be a host. The script we had agreed upon enrolled him as a mentor, but to actualize this scenario, he had to act as a mediator. Showing me how the technology is incorporated into his practices was not a performance one could qualify as “advising.” He morphed into the role of a user in front of me, using his tools without speaking, before asking me to perform some tasks he had experimented with earlier. In the process of mentorship, AI was infecting one host after the other, a kind of “reproduction” closer to Ridley Scott’s Alien (1979) than to Karl Marx’s or Pierre Bourdieu’s sociology: this mentoring was less a political-economic ordering of class and capital between this person as an experienced artist and me as a researcher, and more a construction of what Latour conceptualized as the “body corporate,” the meeting of proliferating mediators in the making of an object-institution (1994, pp. 44–45).
Shortly after my mentor taught me his process of creating a film using generative AI applications, distinguishing this creative process from his usual filmmaking workflow, I recalled Latour’s comparison of the traditional “Pasteur’s” pipette and the new automatic “Pipetman.” According to him, work is not so much “automated” by this innovative pipette, replacing the technician’s practice, as it is displaced, necessitating a new set of technical skills embedded in the artifact and incorporated by the device’s user. The same is happening with AI’s reported automation, which is actively performed while its program is incorporated by its users. The mentorship was just one instance of that corporal/corporate absorption of the properties of a new technology, in both material and symbolic forms. Observing the use of AI in action, I witnessed a composite of functionalities, discussions, and experimentations that fused the object, my mentor, and now me in this institutional context of mentorship. Users and observers, artists and social scientists, serve as hosts for the reproduction of new technologies, acting as a single corporate body at the intersection of proliferating mediators.
Conclusion: The Four Figures of Janus
There is a wide range of arguments for and against AI, as well as proposals on how the development and use of generative AI should be approached. Many individual and collective actors argue, for instance, that the success of AI relies on transparency on the part of technology companies, so that users can understand how those machines are constructed instead of having their mechanics hidden behind an opaque marketing program; trust, they reckon, depends on this understanding (Manovich, 2017; SMPTE, 2023; Weatherbed, 2023). This interpretation flattens the social understanding of AI users, who are treated almost as cultural dopes, constructed by technological firms and their technologies. We must deepen our understanding of users’ practices and the organization of their activities, as their adoption and critique of AI are social performances. In the case discussed, filmmakers’ uses and non-uses of AI were a cooperative performance. In other cases, actors will adopt other forms of collective action.
Elsewhere, it has been argued that we must describe the “thingness” of AI, which will need to be situated (Suchman, 2023). Claudio Celis Bueno et al. (2024) have adapted the mediation theoretical framework presented earlier for a critical study of AI in the creative industries, shifting to an approach that fragments “creativity” in its various situations. Creative labor situates creativity in the age of generative AI, marking an encounter between global social arrangements (e.g., post-industrial capitalism and the creative industry) and specific material practices (e.g., having a workforce whose activities and schedules are flexible, yet still under control). AI’s automation displaces labor more than it replaces it: a workforce is needed to design AI, train it by generating datasets, and maintain it. The question, then, is whether AI exploits the property of workers who do not benefit from it (e.g., the creators of artworks used as data), or whether technological algorithms and digital capitalism are built from objects we should legally inscribe as “commons.” Following the shift from asking what agency is to asking where it is, AI is conceived as a partner in the co-creation of a work. Beyond economic and legal frameworks, AI’s technical affordances demonstrate the distributed nature of creative agency and blur the conceptual distinction between human and machine labor. In other words, even if we can observe a replacement of human creative work by machines at the production level, human labor is still promoted as the foundation of product value.
As noted by Hye-Kyung Lee (2022), the conceptualization of the art world as a “creative industry” shifts the question of labor to that of capital, and that of copyright to that of “intellectual property.” In discourses about the creative industry, the economic value of creation exceeds its aesthetic and symbolic value because creation is perceived as something that can be accumulated, reinvested in the production process, and transferred to other sectors. Creativity no longer highlights the quality, excellence, and critical capacity of the worker, but the popularity of intellectual property on the market. Ironically, even if AI generally does not produce historically transformative creations on a large scale, its creativity being limited to the way it combines what it is provided with, we must also recognize that many human creators produce only exploratory creations on a small scale due to the industrial situation in which they work.
As a final point of discussion, to consider the social construction of creativity, Atkinson and Barker (2023) argued for the study of generative AI applications external to the ordinary workflows of creators. But AI is also embedded in the everyday applications used in creative processes. They assessed how AI production can alter the relationship between art world participants (creators and the other social actors who inform and validate their creations) and objects of knowledge. They situated AI in a variety of communicative practices: recommendation algorithms acting as gatekeepers of what artists see, filtering their sources of inspiration and what they can show to their public, and generative applications shifting the creative process from the production of content to the selection of AI-generated content. Their theorization accentuated the transformations of creativity brought about by AI. However, the integration of AI in creative practices must be considered within a more complex framework that encompasses the arrangement of technology and social organization.
The mediating role of social scientists in the organizational regulation of AI, as presented in this article and, more generally, in the intervening role social scientists play in the development and appropriation of emerging sciences and technologies, helps researchers reveal arrangements of technology, uses, and social practices such as creation. Taking as an example my engagement as a middleman, distributor, coordinator, and host in a film cooperative, we must acknowledge not only that the social scientist can be enrolled in different ways, each figuring the researcher differently, but also that a configuration of mediation can draw the researcher further away from the node of actors involved in regulating AI, such as an organization’s managers. Governing AI is a responsibility managers share with their team members, as well as with external experts and other stakeholders, all involved in understanding the technologies, practices, and social arrangements that require regulation.
Furthermore, we must acknowledge that the role of a social scientist has never been to promote, or even to suggest, a social model: we know all too well that no model can be actualized from such a claim. Some actors will always find “policies” too strict and forgo any compliance with normative uses of technologies, while others will find a “guide” too weak to promote the collective value negotiated and adopted democratically in an organization. Some will say that reviewing scientific literature is too theoretical or based on contexts too different from their own, that consulting a few members is unrepresentative of the whole community, that coordinating masterclasses does not allow enough dialogue between technophiles, technophobes, and everyone in between, and that mentorships are fictional and do not allow the building of strong relationships.
The goal is not to say that social science has a strong or weak impact on the mediation of AI in and outside a social organization, but to show that the intervention of social scientists is as real as it gets: in a world fragmented by the ambiguity of all the promises made by AI developers, reviewing, consulting, coordinating, and hosting questions around the use of AI situates the technology in history, in the particular social arrangement of the organization, in various claims by experts, in films and personal relationships, and, more importantly, in the practices of creation. As mediators, social scientists do not have to replace the fragments of theory and meaning once associated with technology; instead, they can create new connections between those fragments. Like a film montage situating meaning in the connection between shots, for example, associating the most basic form of weapon with space technology as a critique of the “progress” of humanity, the sociology of AI must connect different fragments through mediation: theory to practice, users to managers, developers to adopters, films to filmmakers, applications to forums, and so on. If the question is not what but where AI is in creative sites, then it must be found in those connections.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
