Abstract
AI-generated art has its roots in the 1960s, driven by pioneering artists experimenting with algorithmic and rule-based systems. In contrast, over the past decade, new AI art generators, fuelled by increased computing power, are often associated with extractive data practices spearheaded and monopolised by big tech companies. Both historical and contemporary approaches involve learning patterns from existing data to generate new content; however, early work was primarily artist-led and exploratory in nature, whereas today’s generative AI is largely shaped by big tech for-profit interests. This shift has provoked both excitement and scepticism in the creative industries, while raising concerns around authorship, privacy, forgery, discrimination, and the ethical implications of unjust machine learning techniques which erode creators’ rights, under the guise of technological progress and the democratisation of artistic creativity. This paper examines how values, beliefs, and emotions shape UK arts practitioners’ engagement with narrow AI and artistic automation. Through thematic analysis of interviews and focus groups with artists, curators, and organisers, three major narrative themes emerge: (1) a critical perspective on tech-driven artistic automation, (2) calls for improved human-machine collaboration, and (3) tensions arising from personal values, beliefs, and emotional responses. These findings highlight that arts-led AI practitioners offer a necessary counterbalance to the widespread adoption of tech-centric automation in art, advocating for more ethical, collaborative, and value-driven approaches as automation becomes increasingly pervasive in the creative sector.
Arts-led AI practice
This paper explores the role that values, beliefs and emotions play for arts practitioners working with narrow AI as they navigate the tech-led automation of artistic processes in the UK arts sector. Renewed interest in AI-generated art has grown steadily over the past few years with the emergence of commercially accessible user-facing digital AI content generators, such as DALL-E, Midjourney, Boomy and ChatGPT. The rise of generative AI and its potential to automate artistic production has raised new questions about the nature of human-machine collaboration in the arts, where human expertise both guides and is enhanced by automated technologies (Van Rooy, 2024). Automation foregrounds a range of ethical concerns, including those related to authorship, privacy, human creativity, forgery, ownership, discrimination, and AI’s environmental impact (see e.g. Brevini, 2020; Crawford and Paglen, 2021; Elgammal, 2018b). Moreover, the shift toward tech-led automation in creative processes prompts critical reflection on the commercial convergence of art, technology, and aesthetics and the broader societal implications (Manovich and Arielli, 2024). This warrants consideration given the role of artistic inquiry in knowledge production and its ability to offer audiences new ways of perceiving, understanding, and thinking critically about the world (Rogers et al., 2021). As suggested by Burri (2021): ‘Art practices can be seen as epistemic practices that assemble materials, tacit knowledge, bodies, and aesthetic judgements in experimental setups. In these spatial and temporal settings, such entities are associated, enacted, and performed in ways that follow inherent and implicit logics. From these enactments, new forms of knowledges and insights may emerge which are perceived not only in cognitive but also in sensual and often tacit ways. Art making can thus be viewed as a practice of knowledge production emerging in a research-like process’ (p. 183).
While ethical frameworks are often introduced in AI research and policy for responsible technology use and design (e.g. AoIR 2019; UK Government 2025), they frequently overlook the lived experiences, values, beliefs and emotions that shape how practitioners engage with AI. Moreover, dominant big tech discourse is reflected in some computer science research on AI art (e.g. Generative AI and HCI, 2023), which in turn reflects longstanding debates about perceiving algorithmic techniques as neutral and decontextualized (Christin, 2017). Indeed, empirical research and policy discourse often fail to recognise the critical contribution of arts practitioners to the broader societal AI debate. We draw on existing research that has asserted that algorithmic techniques are not neutral (Gillespie, 2014; O’Neil, 2017; Noble, 2018; Benjamin, 2019) and that knowledge production practices are influenced by broader cultural forces, such as algorithmic imaginaries, feelings, emotions and affects (Bates and Elmore, 2018; Christin, 2017).
Arts practitioners can approach issues related to AI and automation from a critical perspective. Although they work within a landscape shaped by the rise of big tech’s generative AI (Michaels, 2024), arts practitioners are often independent and not tied to commercial production driven by capitalist values of efficiency, productivity and profit (see Arif and Takefuji, 2025). This affords artists greater freedom to express critical perspectives (Bates et al., 2025) and to experiment with how they incorporate AI into their artistic practice (Ploin et al., 2022). This includes supporting audiences to ‘look at problems in AI in a holistic way’, beyond viewing AI solely as a technical matter or tools for creative expression (Hemment, 2023). It is argued that arts practitioners recognise AI technologies as material manifestations of cultural, social and political forces that come together to shape AI use (Broeckmann, 2019; Ploin et al., 2022). Therefore, exploring values, beliefs and emotions of arts practitioners is key to understanding the implications of human-machine collaborations. This study emphasises the critical need to consider these dimensions across diverse arts practices. This helps to refocus how technologists might design more ethically responsible AI systems, and also addresses the significant gap in current empirical AI research and policy discourse, which too often fails to recognise the critical contribution of arts practitioners to the broader societal debate on AI.
We examine how UK-based arts practitioners are navigating the pervasive big tech-led explosion of narrow AI tools to automate artistic processes. We draw on interviews and focus groups with 14 artists, curators and arts organisers spanning music, storytelling, visual art, performance, installation and combinations thereof. Our findings suggest that most arts practitioners see ongoing AI developments as being in conflict with their own personal values. We argue that the way they navigate automation presents a counterpoint to the pervasive tech-led explosion of narrow AI tools that automate artistic processes.
Automation and arts practice
Automation involves the replacement of human labour with technology, which can lead to the displacement of manual workers. This in turn has significant social, economic and environmental – and importantly, cultural – consequences. Aligned with Simmler and Frischknecht (2020), we understand ‘automation’ to refer to a ‘state in which a technical system performs a formerly human task or parts of that task, respectively’. Importantly, the task assigned to the technical system can encompass partial or complete automation (Vagia et al., 2016). In arts practice, this includes the automation of creative processes such as image generation, music composition, text production, or aspects of editing and curation, often facilitated by generative AI tools and models. In the context of human-machine collaboration, the level of automation delineates the degree to which the machine contributes to the collective performance (Simmler and Frischknecht, 2020).
From the literature we identified two main trajectories of AI arts practice and automation: (1) arts-led, which extends arts practice as a form of human-machine collaboration and is driven by artists for personal use in their professional practice, and (2) tech-led arts, driven by big tech and the commercialisation of technology and data (Andrews and Hawcroft, 2025; Ploin et al., 2022). Arts-led AI practice dates back to the 1960s and 1970s and includes the visual art of Vera Molnár (Broeckmann, 2019). Molnár pioneered computer-generated art, using a radical systems-based approach called ‘machine imaginaire’. Molnár’s work helped to establish the contemporary intersections between art and technology as forms of experimentation and artistic collaboration with computers (Thaddaeus Ropac, nd). While early computer-generated art required artists to manually encode aesthetic parameters of their work (see e.g. Tate, nd), contemporary arts-led AI production typically involves partial automation, such as the use of machine learning. A prominent computational trend in this vein is the application of Generative Adversarial Networks (GANs), introduced by Ian Goodfellow et al. in 2014 (Elgammal, 2018a). While these arts-led practices often rely on technologies developed within the broader tech industry, the use of partial automation allows artists to retain creative control and navigate the challenges posed by big tech-driven AI systems (Ploin et al., 2022).
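To give a concrete sense of the technique, the sketch below shows the adversarial training loop at the heart of a GAN in minimal form: a generator proposes synthetic samples while a discriminator learns to tell them apart from a reference corpus. It is an illustrative Python/PyTorch example written under our own assumptions; the toy 64-dimensional vectors standing in for encoded artworks, the network sizes, optimiser settings and placeholder data are ours rather than drawn from any artwork or system discussed in this paper.

```python
# Minimal, illustrative GAN training loop (PyTorch assumed as the framework).
# Toy vectors stand in for encoded artworks; nothing here reproduces any
# specific artist's pipeline.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps random noise to synthetic "artworks".
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: estimates the probability that a sample came from the real corpus.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_corpus = torch.randn(256, data_dim)  # placeholder for an artist-curated dataset

for step in range(200):
    real = real_corpus[torch.randint(0, 256, (32,))]
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator update: learn to separate real samples from generated ones.
    d_loss = (
        loss_fn(discriminator(real), torch.ones(32, 1))
        + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    )
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to fool the discriminator into labelling fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

In arts-led uses of this kind of partial automation, the placeholder corpus would typically be replaced by material the artist has curated themselves, which is one way creative control remains with the human rather than the model.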
The tech-led arts trajectory is driven by a big tech-led resurgence in AI art emerging from the explosion of online data, large-scale technology investment, increasing computing power, and ML algorithm optimisation (Hwang, 2018; Tatar et al., 2024). AI art generators are often funded and integrated into platforms owned by large, commercial tech companies like Google and Microsoft, which frame them as broadening access by enabling a wider audience to experiment with automated AI art creation (Broeckmann, 2019). As Zylinska (2020) argues, this framing functions as a strategic marketing narrative, appealing to audiences’ desire to be seen as creative while masking the core commercial aim of attracting user engagement and harvesting valuable data (Klinge et al., 2022).
While tech-led automation is often framed in terms of economic growth, business optimisation, and social innovation, these narratives are deeply rooted in capitalist values that prioritise efficiency, productivity and profit (Egan, 2025; Lowitzsch and Magalhães, 2024), which can lead big tech companies to deprioritise broader societal and ethical concerns surrounding AI. These include the significant environmental costs of AI, such as the high energy and water demands of data centres, increased mineral extraction and mounting e-waste (Brevini, 2020; Crawford and Paglen, 2021). At the same time, automation raises critical issues related to job displacement, security threats, misuse for political or economic gain, privacy breaches, surveillance, and the potential erosion of human sovereignty and control over technology (Saurwein et al., 2023). For arts practitioners, tech-led automation specifically challenges the artist’s role in the creative process (e.g. Bidshahri, 2019). Tech-led automation also raises issues regarding privacy, authenticity, originality and the aesthetics of the art created, and its subsequent cultural value to society (Andrews and Hawcroft, 2025).
Our study foregrounds these cultural dynamics to address a gap in current research, practice and policy that predominantly focuses on economic and technical factors (Andrews and Hawcroft, 2025).
Arts practitioners’ responses to AI automated arts practice
A tension between artistic practice driving AI innovation and AI reshaping arts practice has been recognised, alongside the ways in which artists bridge disciplinary silos and transcend sectors (Andrews and Hawcroft, 2025). For many arts practitioners, the current AI hype and ongoing developments conflict with their human, societal and ethical values (see Elgammal, 2018b). A frequently cited example is the 2018 artwork generated using GANs titled ‘Portrait of Edmond Belamy’, which made headlines by selling for $432,500 at Christie’s auction house. Created by the Paris-based art collective Obvious, the piece faced criticism when it was revealed that its authors had used open source code from another artist, Robbie Barrat (Vincent, 2018). This discovery, among other factors, led some arts practitioners to describe GAN art as derivative and lacking originality (Cohn, 2018), raising questions about the authenticity of AI-generated art, its attribution, originality and purpose.
AI-assisted creativity is becoming mainstream within arts and creative industry production, alongside rising resistance from some artists and arts practitioners to the prospect of complete automation of art. Hollywood writers and actors strongly opposed the use of AI. Taking strike action, they argued against the commercial use of automated imagery of actors, voice alterations and automated script generation, without proper compensation for the original creators (Beckett and Paul, 2023). These concerns are echoed in other art forms. Illustrators expressed concerns about the legality of AI image generators and their potential to devalue illustration and photography skills (Thomson, 2023; Shaffi, 2023; Wilkins, 2022). In response to these challenges, arts practitioners grapple with asserting control over the use of their work (e.g. lawsuits, see Ashby, 2023). Artists often create experiences that reframe the challenges associated with AI, and at times offer innovative solutions to navigate these challenges (Andrews and Hawcroft, 2025; Hemment, 2023). By way of example, Holly Herndon and Mat Dryhurst’s exhibition ‘The Call’ (Serpentine Galleries, 2024) considers the ways in which AI can support the transformation of the individual to the collective by exploring collaborative singing practices such as ‘call and response’ as a means to collect and share information. The artists recorded community choirs across the UK as a Data Trust Experiment that allows power to be distributed between the choir members creating the choral dataset and the people using the training models. This collaborative human-AI experiment delves into creativity’s collective nature. Further, Herndon and Dryhurst created a new tool, Kudurru, through their start-up, Spawning (Andrews and Hawcroft, 2025; Knibbs, 2023). The platform operates as a network of websites that identifies and blocks web scraping of data in real time. By targeting URLs listed in datasets, Kudurru disrupts the downloading process during scraping attempts.
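By way of illustration only, and not as a description of Kudurru’s actual implementation or API, the toy function below sketches the general logic such a blocker might follow. It assumes a site owner holds a list of their own URLs that they have found in public scraping datasets and a list of known scraper user-agent fragments; all names and values are hypothetical.

```python
# Hypothetical illustration of dataset-aware scrape blocking.
# This is NOT Kudurru's implementation; names and values are invented.
from urllib.parse import urlparse

# URLs an artist has found listed in a public training dataset (illustrative values).
DATASET_LISTED_PATHS = {
    "/images/portrait-01.png",
    "/audio/choir-take-03.wav",
}

# Fragments of user-agent strings associated with bulk scraping tools (illustrative).
KNOWN_SCRAPER_AGENTS = ("img2dataset", "CCBot")


def should_block(request_path: str, user_agent: str) -> bool:
    """Return True if the request looks like a dataset-driven scraping attempt."""
    path = urlparse(request_path).path
    listed = path in DATASET_LISTED_PATHS
    scraper = any(fragment.lower() in user_agent.lower() for fragment in KNOWN_SCRAPER_AGENTS)
    return listed and scraper


# Example: a scraper requesting a listed image is refused; a browser is served normally.
print(should_block("/images/portrait-01.png", "img2dataset/1.0"))  # True
print(should_block("/images/portrait-01.png", "Mozilla/5.0"))      # False
```

The point of the sketch is simply that the same URL lists which enable extraction can be repurposed by creators as a means of refusing it, even if such blocking can only ever be partial.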
Despite these concerns, there has been limited research delving into arts practitioners’ experiences and perspectives on the automation of the arts through narrow AI tools in the context of art creation and production (Manovich and Arielli, 2024). Ploin and others (2022) explore the implications of AI within arts practice and the spectrum of influences, from automation to complementing artistic practice. Notably, artists’ embodied experience of AI and what it feels like to work with AI is explored here; artists expressed a mix of surprise at the outputs generated and frustration at having less control over ML as a medium, noting that a degree of patience and openness was required when reworking pieces. In particular, there has been limited discussion of the role of emotions, values, and beliefs in navigating automation within the context of the tech-led explosion of narrow AI tools. While recognising the artistic domain’s capacity for automation, especially in tasks like generating diverse images, this research brings attention to the nuanced decision-making inherent in artistic creativity.
In this paper, we highlight the importance of value-based, nuanced decision-making in artistic practice, a facet often overlooked in the tech-led discourse reflected in some existing research on AI and art.
Arts practitioners’ emotions, values, and beliefs
Our theoretical perspective adopts a social constructionist framework rather than a cognitive approach to understand the interplay of values, beliefs, and emotions. This acknowledges that these elements are intricately woven into the fabric of social phenomena, shaped by socio-political forces (Bates, 2017). Here, individuals hold beliefs and ideas that they assert as truths, often recognised as loose ideologies (James, 2019; Harrison and Boyd, 2018) or fragmented common sense representing perceptions of how the world operates (Hall, 1987). Aligned with the definition by Bates (2017), values significantly influence and rationalise people’s data practices, with organisational structures and cultural logics playing a pivotal role in shaping and expressing them (Ustek-Spilda et al., 2019). Individuals hold convictions and possess a sense of discernment regarding what is desirable, which guides their behaviour (Schwartz, 1992). Drawing on the cultural sociology of emotions (Bericat, 2016), we understand that emotions extend beyond biological responses: they are inherently social and conditioned by cultural influences. As highlighted by Kennedy and Hill (2017), data has the capacity to ‘stir up emotions’, and practitioners experience distinct feelings concerning the adoption of ML systems in their professional environments (Eubanks and Boyd, 2018).
For example, research by Bates and Elmore (2018) demonstrates that ideational and affective factors are pivotal in data scientists’ decision-making within data mining and machine learning. There is a need, as urged by Christin (2017), for more ethnographic research on ‘algorithms in practice’ to understand the ‘practices, representations, and imaginaries’ of those relying on algorithmic technologies. This understanding is crucial for an empirical counterbalance to the prevailing discourse on ‘algorithmic power’ (Christin, 2017) and ‘data power’ (Kennedy and Bates, 2017). Specifically, there is a gap in understanding how affective and ideational factors interact to shape the ‘data cultures’ framing expert practitioners’ involvement with narrow AI practices and outputs, impacting practitioners and the wider community.
In terms of ideational factors, Veale and others (2018) explore the notion of ‘fairness’ with practitioners, including how they define the term and the challenges of achieving ‘fairness’. Orr and Davis (2020) examine how AI practitioners understand responsibility for upholding ethical values to be distributed across different actors.
Existing research on the emotional dimensions of engagements with data and algorithms tends to focus primarily on non-experts’ everyday experiences (e.g. Bucher, 2017; Kennedy and Hill, 2017; Ruckenstein and Granroth, 2019) or, more recently, on emotional experiences of chatbots, as exemplified by a study on the norms and values underlying ChatGPT anger (Monrad, 2024). These studies fail to consider how emotional factors shape the practices and decision-making processes of practitioners.
As Kitchin (2014) points out, subjectivities and systems of thought play important roles in the complex ‘data assemblages’ that define ‘what is possible, desirable, and expected of data’ (p. 24). Importantly, artists can enrich these debates by offering a unique perspective that connects to societal needs. Artists also critique the negative impacts of AI, challenging the narrative promoted by big tech companies that suggests understanding AI is unnecessary because we are already immersed in it (Zylinska, 2020). Moreover, emotions have emerged as the most prominent factor differentiating artists from computers in recent discussions on human-machine collaboration. As Anantrasirichai and Bull (2022) argue, the emotional and ideological depth that human artists bring to their creations cannot be replicated by AI, making the study of arts-led AI practice essential.
While arts practitioners are increasingly adopting AI techniques in their practice (Manovich and Arielli, 2024; Ploin et al., 2022), research has yet to address how their values, beliefs and emotions influence their responses to the automation of arts in creative practices. With this in mind, our research question was: how are AI artists navigating the tensions of the automation of arts practice in line with their values, beliefs and emotions?
Methodology
Our research focused on the role that values, beliefs and emotions play when arts practitioners work with narrow AI. Given the role values, beliefs and emotions play in guiding artistic practices, they are also essential in shaping engagement with automation.
We carried out empirical research in the UK in 2022 and 2023 via interviews (n = 14), focus groups (n = 3) and observations (n = 5). Fieldwork was conducted either in person or online with a diverse range of 14 arts practitioners. Most of the practitioners were publicly or self-funded and not engaged in purely commercial work; some had primary or supplementary sources of income beyond their art which gave them greater freedom to be critical, as their livelihood was not directly tied to the outcomes of their artistic production. Participants at times worked both as independent practitioners and within creative industry contexts; however, in this study our focus was on their independent arts practice and on how they engaged with AI in producing original artworks or curated exhibitions. This included artists (n = 10), curators (n = 3) and an AI arts organiser (n = 1) of artworks spanning a range of art forms, including music, visual arts, storytelling, performance, installation and combinations thereof. To source interviewees, we searched online and within the literature for existing UK-based AI arts programmes to identify a diverse range of artists, curators and commissioners. We also drew on our existing networks and sought new connections via our project advisory board. We had a remit to bring diverse perspectives, balancing for gender, ethnicity, disability, sexuality and intersections thereof, as well as different artforms. The focus groups were conducted with 10 interview participants to discuss the initial findings, following up on emerging themes so as to determine which aspects of the findings resonated with them. The focus groups also aimed to explore the cultural dynamics between interviewees and their differently situated practices.
The interview and focus group questions explored topics such as: participants’ experience of working in the arts sector, including their engagement with narrow AI and, specifically, ML and data mining in their artistic work; their views and feelings about recent developments and priorities around AI adoption in the arts sector; aspects of their work that they feel positively and negatively about, and what they feel is important about their work; and their engagement with colleagues about the adoption of narrow AI in the sector, together with their expectations about future uses of ML and data mining in the arts. Observational data from experiencing AI artworks and events was used to sensitise the researchers to practitioners’ contexts of practice, rather than forming part of the analysis. Our data collection started in Summer 2022 and finished in Summer 2023.
Data from interviews and focus group transcripts were analysed using thematic analysis (Braun and Clarke, 2006) to draw out the key themes that emerged when practitioners talked about their beliefs, values and emotions in relation to AI integration into their workflow. Ethical approval for the study was gained from the University of Sheffield, and all participants were anonymised using pseudonyms.
Findings and discussion
Three themes arose from the interview and focus group data around the interplay of culturally situated values, beliefs and emotions of arts sector practitioners: (1) Human values: being critical of complete automation of arts practice, (2) Societal and ethical values: calling for improvement to human-machine collaboration, and (3) Where the magic happens: where tensions between beliefs, values and emotions are creatively resolved. We argue that each of these themes presents strategies for navigating tech-led automation of the arts through arts practitioners’ socio-politically constituted values, beliefs and emotions.
Human values: Being critical of tech-driven automation of arts practice
Most of the arts practitioners that we spoke to recognised the recurring pattern of hype surrounding AI and automation and addressed it with scepticism and criticism. They had witnessed previous technology trends come and go, with promises to revolutionise the creative landscape. A large proportion of these practitioners did not believe there had been any significant breakthroughs in AI art beyond the increased availability of processing power facilitated by graphics cards and the widespread adoption of the latest generative AI tools. As a result, they had developed a critical mindset when faced with such hype, and many believed that ‘it’s not long enough for the hype to die away’ (Ann, artist).
The artist Sam commented on the world’s first ultra-realistic humanoid robot artist, which had sparked discussion and critiques among arts practitioners (Bidshahri, 2019; Broeckmann, 2019; Ploin et al., 2022) regarding the nature and meaning of art, as well as the role of creativity in society, both presently and in the future. Sam added to this debate, highlighting the recurring hype around complete automation: ‘the kind of breakthrough stories that you see, like, the recent one about AI-Da, this robotic artist, and it’s clearly an absolute sham. It’s like AI in theory but it’s just like a puppet. (…) this – yeah, compliant female stereotype of a robot, but it’s supposedly making artwork but clearly there’s not much going on there [laughs]. Yeah, far better work was done in the ‘80s by people like Harold Cohen, making robots paint. So yeah, these tropes coming round and they break through in the media, but because it’s the same story coming back again and again, it doesn’t really move forward’ (Sam, artist).
Similar to Sam, most arts practitioners we spoke to were not interested in tech-driven arts practice and complete automation. They were arts-driven and aligned their practice with a human-centric approach (Shneiderman, 2022), wherein humans maintain control of ever-increasing automation, with machine learning serving merely as a tool for specific tasks, either as part of the art process or integrated directly into the artwork itself. They advocated for the value in ‘human-centric’ technological development (Hasselbalch, 2019) and ‘defending human expertise’ (Pasquale, 2020). Sam’s further reflections highlighted these perspectives: ‘I think my interest is in really human-centric technology. So where I’m using machine learning or AI, it’s kind of like a tool in the toolbox sort of thing. It’s not really upfront. (…) but really where I’m working with algorithms it’s humans that are making the algorithms and the algorithms are the actual outcome…or part of the outcome. They’re not tools that are being developed for generating art or something, they’re really part of the art themselves’ (Sam, artist).
We argue that the arts-led practitioners we spoke to were orientated towards the value of ‘humanness’ or human values. This approach tends towards the development of AI technologies that complement, rather than override humanity (Anantrasirichai and Bull, 2022). In particular, artists tried to ensure that their creative processes maintain an authentic connection to these values. They often emphasised the importance of self-efficacy and authenticity when working with narrow AI, viewing these qualities as integral to their artistic practice.
Many practitioners treated narrow AI not just as tools but as an intrinsic part of the art itself, a perspective articulated earlier by Sam. They believed that the ideological depth that artists bring to their creations cannot be replicated by AI, emphasising the need to rethink what it means to be ‘human’ as an evolving process rather than a fixed state (Amaro, 2022; Wynter, 2001). As a result, they resisted the claim that machines can replace human creativity (Bidshahri, 2019). By contrast, they spoke about the imitation of existing creative practices, as one artist expressed when discussing automation and new generative AI tools: ‘I wouldn't necessarily say it’s automated creativity. I think it’s doing a very good job of imitating existing creative practices, you know, certain ways. So like, it’s very good at generating something that looks like a drawing that was done using a digital drawing tool, or whatever… I don’t think it's like replacing the human…’ (Rob, artist)
Many of our participants emphasised the frequently observable lack of balance between utilising novel narrow AI tools and the artistic concept intended for audiences. They suggested that much of what is labelled as AI art is performative or derivative, failing to address critical issues that deserve attention. They believed that the current hype surrounding AI and automation has led some artists to engage with narrow AI and generative tools superficially, as deep or critical engagement may not be necessary ‘to get recognition for that kind of work’ (Ann, artist). They suggested that these artists often gravitate towards the tech-driven proliferation of narrow AI tools that prioritise technical efficiency, productivity and profit and are aimed at aesthetic outcomes (Zylinska, 2020). This opinion was echoed by a curator who frequently collaborates with big tech companies: ‘So maybe, it would be good to see artists incorporate these – the latest AI tools - as a middle stage in the process, as opposed to the final result. And if they do use it as a final result then they would spend more time thinking about the meaning of the image and what they’re trying to communicate, and how that fits into their practice because that’s not really being done yet by many artists’ (Natalia, curator).
On the other hand, when asked about the ‘ideal scenario’ for the use of AI and automation in the arts field or what they considered important in their artistic practice, a large proportion of practitioners emphasised the necessity for more ‘critical analysis’ and ‘meaningful investigations’. They wanted to understand how AI works and envision better ways of integrating automation into human life, rather than simply being given the tools to use it. Nearly all practitioners disagreed with the profit- and industry-led approach, recognising that media discourse shapes society's perceptions of technological innovations and contributes to sociotechnical imaginaries (Saurwein et al., 2023). One artist, Ann, elaborated on her stance on this matter: ‘I just think that our attitude to it as a society is a bit warped, and very much kind of driven by an ideology of a kind of efficiency and productivity that I think I disagree with’.
The arts practitioners critically engage with the automation of the arts, particularly as it is driven by capitalist values such as efficiency, productivity and profit, and ‘escalatory logics’ of contemporary capitalism (Rosa et al., 2017). These values often conflict with those of practitioners, who feel they are ‘expected to adapt their practices in the service of AI-driven capitalism’ (Bates et al., 2025). Instead, they express a desire for and belief in creativity, using human values, such as self-efficacy and authenticity, to guide their artistic practice.
Societal and ethical values: Call for improving human-machine collaboration
Our participants recognised numerous challenges and risks from automation. They acknowledged the systematic discrimination embedded within automation, including narrow AI tools, and how these tools are set up to structure the world through a racialised lens (Noble, 2018; Benjamin, 2019). For our participants, these technologies are more than mere tools; they embody cultural, social and political influences that collectively shape both their functioning and how society is organised. They were acutely aware of the potential implications of automation in the art field, consequently confronting various ethical considerations that guide their decision-making when integrating narrow AI tools into their practice. Our participants often turned to their personal values and beliefs to ensure that AI tools were used in a way that respects and aligns with societal and ethical values, including fairness, justice and privacy. For example, Ann elaborated when asked about her approach to using narrow AI tools: ‘I think, for me, it always feels like the ethical considerations of working with this tool, especially if this tool becomes the centre of the work, is – in what way is this being represented and for whom? And like, by whom? And is that a compromise that I feel like I'm okay with? And I think there's lots and lots of different layers to that’ (Ann, artist).
Arts practitioners we spoke to believed they should take responsibility for the artworks they create, and aimed to challenge discrimination within AI systems and the exploitation of vulnerable communities through data extraction (Benjamin, 2019). They actively addressed discrimination in AI by striving to ensure fairness in the data they use. This commitment was emphasised by Elena, a curator, who explained how she collaborated with other arts practitioners: ‘And it’s something that we discuss when it comes to thinking about ethics as well, because with ethics, we talk a lot about the – one side, thinking about bias, thinking about race and gender, but also thinking about people who are excluded, etc., and impacted’ (Elena, curator).
Many participants mentioned the environmental impact of working in the AI art field and its effects on vulnerable populations. This includes CO2 emissions arising from large-scale data collection and the training of ML models, as well as mineral use, e-waste, and the field’s role in fossil fuel extraction (Brevini, 2020; Crawford and Paglen, 2021). As Hannah, a curator, suggested, the environmental cost of automation was not something traditionally attributed to the art world, making it important to consider in one’s artistic practice: ‘It’s just like a huge, huge, huge amount of data and computing power required to work in this field that if it’s just going to be a craze, it’s got a big environmental cost. I think it’s this kind of unknown thing where people say, like, oh yeah, it’s bad for X, Y and Z reasons but people think of it much more in terms of, like, Bitcoin and crypto currency, but not so much in its art world outcomes, I guess’ (Hannah, curator).
Due to these issues, as well as privacy and data breaches, most arts practitioners we spoke to were not opting for complete automation. Instead, they explored ways to collaborate with narrow AI to complement their creative process and address or mitigate potential risks associated with automation. Some artists expressed a preference for working only with small datasets and/or their own data, rather than adopting large-scale datasets, which they deemed extractive. For example, Eva, an artist, explained: ‘For me working with machine learning it’s important that I made my own data set and that I knew all of the source material that went into it. And it’s also because… how I was working with the model, I was asking questions about my practice and the sounds that I wanted to work with. So I wanted to know what was going into that model to then be able to analyse the outputs’ (Eva, artist).
Other participants believed that thinking about their artwork as data allows it to be extracted freely, much like social media data. As a result, they navigated the tension between perceiving it as mere data or as an artistic creation and a form of craft. Nick’s explanation highlights their perspective: ‘And actually, what you want is not like a universalising dataset but actually a super-subjective dataset, which can be small and can be even gathered by a certain person, a single person. So yeah, I tend to lean into that a little bit and I think I really appreciate the act of dataset making as a kind of artistic practice in its own right’ (Nick, artist).
A few artists we talked to experimented with using narrow AI tools in unique ways, aiming to improve human-machine collaboration and to produce new works in line with their values. For example, Nick discussed open source tools and using them differently, on his own terms, describing this as ‘a little bit kind of boutique’: ‘For all of the voice synthesis stuff I’ve been using Coqui which is an offshoot of Mozilla’s deep voice project. It’s like a half open source/half private speech synthesis company. But I reached out to them and they’ve actually been supporting me throughout the whole project if I have questions about like, how to train this kind of model, or how to set this up. ‘Cos I’ve been doing a little bit of boutique, let’s call it [laughs], boutique things with their models and with their code base’ (Nick, artist).
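For readers unfamiliar with the tool Nick mentions, the minimal sketch below shows how an artist might call Coqui’s open source text-to-speech package from Python. The model name and the high-level TTS API are assumptions based on Coqui’s public Python interface and may differ between versions; Nick’s bespoke (‘boutique’) training setup is not reproduced here.

```python
# Minimal sketch: off-the-shelf voice synthesis with Coqui TTS (assumed API).
# Install with: pip install TTS
from TTS.api import TTS

# Load a pre-trained model from Coqui's public model zoo (illustrative choice).
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")

# Synthesise a short phrase to a WAV file.
tts.tts_to_file(
    text="A small, self-curated dataset keeps the artist in control of the model.",
    file_path="example_output.wav",
)
```

Because the code and many models are open, artists can move beyond such off-the-shelf use and retrain or adapt models on their own terms, which is the kind of working relationship with the tool that Nick describes.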
Such efforts demonstrate that arts practitioners we spoke to engage with narrow AI not just as a tool or topic (The Alan Turing Institute, 2022), but as a sociotechnical system requiring ethical considerations. They acknowledge the potential of narrow AI tools to enhance human creativity, but highlight the tensions inherent in collaborating with AI. To address and mitigate the potential risks posed by automation, they strive to align their artistic practices with the ethical and societal values they uphold. As a result, many arts practitioners advocate for enhancing human-machine collaboration by using small data, open source tools, or making use of narrow AI as a way to navigate tech-driven automation of art.
‘Where the magic happens’: Tensions between beliefs, values and emotions
Arts practitioners we spoke to brought a critical perspective to tech-driven automation of the arts, integrating their beliefs, personal and societal values into discussion of their artistic practice. In terms of emotions and feelings, most arts practitioners emphasised that they were enthusiastic and motivated by the potential for achieving social impact through their work and using AI tools responsibly by being critical and conducting meaningful investigations. This was what inspired them the most, as expressed through their responses: ‘It seems to me that kind of late capitalism thrives upon the unseen. It thrives upon hiding aspects of production from publics. And I think I have faith that those publics would make the right decisions about how to live better and more sustainably if they could just see some of those things. So I see my role as asking questions about that and trying to reveal or make visible some of those hidden aspects of digital production. Especially as a way of helping the world to kind of make better decisions about how to live, I think’ (Liam, artist).
They were excited by the results of their work and the potential impact artists can have on society, particularly in terms of improving human-machine collaboration. Furthermore, they observed the potential for artists to influence the tech industry itself. For example, Elena, a curator, emphasised the importance of critically evaluating data sources to avoid perpetuating biases and inequalities inherent in many datasets (see e.g. Noble, 2018; Benjamin, 2019; Amaro, 2022) highlighting how the tech industry could learn from these critical artistic practices: ‘(…) so for example, if I see work by – how artists bring in data from people who are not represented at all in data or are underrepresented, and suddenly they train machines (…) to kind of recognise these people (…) – then this is exciting, I think, and it’s something that maybe the industry can learn from’ (Elena, curator).
Most practitioners agreed that they personally benefited from using narrow AI tools. They felt positive about incorporating them not only as a topic for critical investigation but also as a creative tool in their work to enhance their artistic practice and development. Artists spoke about a space where ‘unique creativity’ and ‘novel ideas’ can be explored through engagement with these technologies that brought joy, excitement, and curiosity to their work. For example, Eva explained: ‘I might have got cut off before but that goes back into my interest musically into the composition around chaos and chance and working within systems, and how humans kind of relate to machines and how autonomous we are and where we can control or where we can collaborate with machines. So for me that’s the really exciting bit and that’s the kind of, yeah, collaboration with these models’ (Eva, artist).
Many arts practitioners we spoke to also expressed excitement when describing their collaboration with AI tools in terms of being innovative and pushing boundaries of artistic practice. Isabelle’s explanation, as the AI arts organiser, highlights this perspective: ‘Again the fact that you’re trading on a territory where it’s developing now and you’re doing something that’s quite innovative is definitely something that excites me’. Similar to Isabelle, other arts practitioners discussed their enjoyment of learning and applying new skills while experimenting with AI tools, treating them as ‘new things to play with’ (Hannah, curator).
A few of our participants were positively surprised about unexpected outcomes arising from human-machine collaboration. For example, Rob discussed a project involving the creation of a large dataset that he used to train an AI model. While initial assessment suggested potential limitations, the model unexpectedly produced promising results, leading to a deeper focus on this work: ‘So I didn’t actually think it was going to work. And then when it did and it started producing interesting results, then yeah, I was kind of sort of blown away’ (Rob, artist).
Nathan also shared his experience of unexpected pleasant surprises during the discussion, highlighting how human-machine collaboration positively impacted his artistic practice. He explained how these collaborations enhanced both his creative process and his overall experience as an artist: ‘There is room for interesting things to happen that go beyond the training data, and allow you to recombine concepts, to fuse styles, to imagine things that you might never have been able to imagine or visualise, to have a conversation between an artist and a piece of technology that is surprising’ (Nathan, artist).
However, as Nathan continued, this joy, excitement, and surprise were most present when he engaged with narrow AI as a tool, which he distinguished from viewing it as a career path or as a topic within a broader social context with its potential consequences: ‘Well, it’s a wonderful experience. I have to say that using this technology is fantastic. It’s fun, but you sort of have to detach from any notion that it’s your career [laughs], and that’s difficult, because if I’m simply just to engage with it as a tool, I can spend hours and hours and hours prompting Midjourney and being endlessly fascinated by it. So, creatively, it is inherently amazing and there is joy in using it, and there are possibilities that are fantastic, that we’ve only just touched the surface of’ (Nathan, artist).
Nathan concluded that for him human-machine collaboration represented ‘disruptive, revolutionary technology that is a pleasure to use, endlessly fascinating and deeply disruptive’. When pressed further on how this tension between emotions made him feel, he expressed a mix of excitement and concern. While he appreciated the creative possibilities and the innovation these tools brought, he also acknowledged a sense of unease about their broader implications and the complexities they introduce into both artistic and societal context: ‘I think probably my feelings are fairly clear, but if I had to state them, I feel uncertain. I feel anxious. I feel existential. I feel a small amount of excitement. I feel exhausted. I feel threatened. I feel curious, resentful’ (Nathan, artist).
Many arts practitioners expressed similar mixed emotions about using narrow AI tools and how AI in arts practice could evolve. These mixed experiences created a tension between the criticism of AI and tech-driven automation, aligned with practitioners’ beliefs and values discussed in the previous sections, and their enthusiasm for using narrow AI tools in their creative work, particularly when engaging with it as a tool. A few participants emphasised the need for collective exploration of these technologies. This approach aims to prevent their monopolisation by big tech companies’ interests, as suggested by Hannah: ‘[Laughs] I mean, I feel optimistic and also scared, I would say. I think I feel that it’s really important that we are – yeah, that artists are creatively experimenting with these tools so that there’s a shared ownership that’s not just about, like – kind of regulated by financial companies’ (Hannah, curator).
We suggest that the sub-theme ‘where the magic happens’ aptly captures the emotions expressed by most of our study participants to reflect the tension between personal beliefs, values and emotions related to the use of narrow AI tools. Participants navigated these tensions by focusing on enthusiasm and excitement rooted in values and beliefs they consider most important, such as the potential to benefit society through their work and the drive to push the boundaries of artistic practice and creativity through innovation.
Conclusion
Building upon existing critical scholarship about practitioners’ experiences of the integration of AI into workflows, this paper addressed a specific way in which arts-driven, mostly publicly or self-funded AI practitioners navigate these issues. They are not pursuing the potential of complete automation of their artistic practice; instead, they try to navigate it guided by their own values, beliefs and emotions. Whilst the current hype surrounding tech-driven AI promises automation of artistic processes, fuelled by values of efficiency, productivity, and commercial profit (see Arif and Takefuji, 2025), arts-led AI practitioners use their own personal values to guide their practice. This approach acknowledges the importance of broader cultural forces in shaping artistic practice, understood here as a form of knowledge production and inquiry. We identified three narrative themes arising from our empirical analysis: arts practitioners expressing critical views on the tech-driven automation of artistic practice, calling for improvements in human-machine collaboration, and experiencing tensions between their beliefs, values, and emotions.
Arts practitioners we spoke to struggle with the hype around AI. They navigate this by being critical of tech-driven automation of arts practice and using human values, such as authenticity and self-efficacy, to guide their practice. They believe that working as arts practitioners carries a responsibility – particularly regarding the context and consequences of their work – and that tech-driven automation is often in conflict with their societal and ethical values such as fairness, justice and privacy. In the absence of formal ethical guidelines for AI artists and of legal repercussions for scraping data (Bates et al., 2025), these personal principles became essential in shaping their perceptions of what is desirable and ethical when applying narrow AI techniques and automation in diverse artistic practice contexts. They focus instead on improving human-machine collaboration by, for example, using small data or open source tools, because they value the potential of automation to enhance their artistic process in ways that go beyond extractive practices rooted in capitalism, even while utilising tools rooted in capitalist systems to achieve this vision.
The sub-theme ‘Where the magic happens’ is where the tension between personal beliefs, values and emotions related to the use of narrow AI tools and automation is effectively navigated. Arts practitioners are motivated by enthusiasm and joy rooted in the values and beliefs they consider most important, such as the potential to benefit society through their work and the drive to push the boundaries of artistic practice and creativity through innovation. They emphasise the need for collective exploration and critical engagement with these technologies to ensure shared ownership and to prevent their monopolisation by the interests of big tech companies. Arts practitioners' values, beliefs and emotions lie at the heart of human-machine collaboration, reflecting a long history that extends beyond the current hype surrounding AI and automation.
We argue that, at its core, the way arts-led AI practitioners navigate tech-driven automation presents a counterpoint to the pervasive tech-led explosion of narrow AI tools for automating artistic processes, a trajectory often reflected in decontextualized Computer Science and Big Data discourse. It does so by acknowledging the importance of broader cultural forces in knowledge production practices (Bates and Elmore, 2018; Christin, 2017), such as the values, beliefs and emotions of arts practitioners. This warrants an important cross-disciplinary dialogue (as called for by, e.g. The Alan Turing Institute, 2022; Hemment et al., 2025) on the responsible and ethical integration of AI. We suggest that navigating tech-driven automation involves not just adapting to new generative AI tools but also recognising the cultural and social value that artists bring by redefining the boundaries of human-machine collaboration within arts practice. By illuminating arts-led practitioners’ perspectives, we seek to inform the development of more critical, reflective and holistic cultures of algorithmic practice, in which the impact of complete automation is carefully considered, with a focus on ethical, cultural and societal implications balanced against technical efficiency, productivity and profit. This exploration helps ensure that cultures of AI practice contribute to reimagining the possibilities of what it means to be human (Amaro, 2022; Wynter, 2001), using technology as a tool for that reimagining.
This values-based exploration of AI arts practice has a role to play in considering the unique ways that art fosters critical understanding and opens us to new ideas, experiences, and cultural relationships emerging from automation. First, this inquiry is of value to the creative industries in prompting critical reflection on the cultural dynamics shaping AI tool adoption and use within arts practice, with a view to recognising and respecting values-driven approaches to arts practice. Second, it offers an opportunity for AI practitioners in other sectors to reflect on and consider the societal implications of AI’s cultural dynamics and its relevance to their work. Finally, the study invites policy makers and technologists to critically engage with how AI, as currently imagined and implemented, contributes to broader societal and environmental inequalities – offering space to consider alternative, more equitable design approaches. The study also speaks to the experiences of audiences engaging with AI-driven artworks, encouraging a deeper, critical reflection on the role of automation in society and its potential to both challenge and reinforce existing norms and structures.
Future research could explore how practitioners carry the values developed in their independent arts practice into their work within the creative industries and under what conditions these values and practices are maintained, transformed, or constrained. Such an inquiry would provide a richer understanding of how values travel across different cultural contexts and how they shape AI use within more commercially orientated creative settings.
Acknowledgements
We would like to thank our participants and partners for their time, expertise and contributions to the project.
Ethical Considerations
This study was approved by the Ethics Committee of Sheffield University (Ethics Code: R/163905) on January 17, 2022.
Consent to Participate
All participants provided written informed consent prior to enrolment in the study. This research was conducted ethically in accordance with the UKRI policy and guidance on the governance of good research practice.
Consent for Publication
Informed consent for publication was provided by the participants.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Arts and Humanities Research Council - UKRI [Grant number AH/T013362/1].
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
