As artificial intelligence (AI) rapidly transforms industries, the world of art and cultural management finds itself at a critical crossroads. AI-generated images, music, literature, and curatorial tools are no longer experimental. They are now part of the mainstream creative process. The question is no longer if AI belongs in the arts, but how we engage with it ethically, meaningfully, and imaginatively.
At its best, AI is a tool that can challenge creative norms, democratize access to production, and reveal patterns and possibilities that human artists might overlook. Optimistically, it is seen by some not as the end of creativity but as a new kind of collaboration.
However, the risks are real. When AI generates works trained on massive datasets without attribution or consent, questions of authorship and ownership surface. When algorithms are used to curate or program cultural content based on popularity metrics, there is risk of narrowing the spectrum of artistic voices rather than expanding it. If cultural institutions prioritize AI for cost-efficiency over human insight, the soul of curatorial and artistic work could be lost.
As part of my efforts to immerse myself in the field of AI and explore its relevance to the arts and creative industries, I had the pleasure of speaking with Octavio Kulesz, a philosopher and a widely respected expert in AI ethics. His work focuses on cultural diversity and the creative industries in the digital age. In 2018, his UNESCO report Culture, Platforms and Machines foresaw many of the challenges generative AI would later pose to the cultural sector. Two years later, he was selected as one of 24 international experts to co-author UNESCO's landmark Recommendation on the Ethics of Artificial Intelligence, the first global standard-setting instrument on the topic. His insights offered a rich approach to the multifaceted implications of AI for the arts and the broader creative industries.
The question of AI's role in creativity has sparked intense global debate, particularly among those concerned with the future direction of the creative industries (Öztas et al., 2025; Mazzone and Elgammal, 2019; Manovich and Arielli, 2024). In particular, notions of artistic quality in AI-generated content remain contested and evolving issues in contemporary aesthetics and media theory, especially in relation to autonomy, authenticity, and authorship (McCormack et al., 2019). Unlike human imagination, which draws on lived experience, intuition, and originality, AI systems generate content by processing and recombining patterns from vast training datasets (Öztas et al., 2025). Rather than a singular or autonomous source of creativity, AI operates as a derivative engine shaped by the data it consumes. I asked Octavio to reflect on the status of AI as a tool, or as something more.
Visanich: Based on your expertise, Octavio, how do you think AI is impacting the creative process? Is it simply a new tool artists can use, or is it fundamentally reshaping what it means to be a creator in today's world?
Kulesz: It is evident that AI presents significant advantages as well as considerable challenges for artists. And beyond the specific effects of AI on individual creative sectors, such as music, literature, visual arts and film, to name a few, it is crucial to pay attention to the broader patterns that emerge across fields.
If we consider the positive impacts of AI, two clear trends stand out. First, many artists refer to the augmentation function that AI serves in their work. AI is increasingly seen by users, both professional and amateur, as a source of inspiration and creative potential. This entails not only significant gains in productivity but also a distinctive way of conceiving creation, not merely as an individual act but as a co-construction between human and machine.
Second, it is evident that such tools have lowered the barriers to entry for creation. Indeed, they have enabled millions of people without specialized knowledge in certain artistic disciplines to produce technically high-quality content. This is why there is frequent discussion about the democratization brought about by the AI era.
However, the negative effects of the current situation are also quite evident. Beyond the impressive technical quality of a photograph or video clip produced with AI, it is equally necessary to question the artistic quality of such content: to what extent does it convey meaning, values or worldviews? Considering that the so-called democratization of creation applies not only to humans but also to all kinds of bots generating mass content subsequently disseminated online, one could argue that we are witnessing a kind of banalization of art, which does nothing to improve the situation of artists.
The evaluation of AI-generated art thus presents a critical challenge: Who determines whether something possesses artistic merit, and according to which standards? This issue is not merely epistemological but also political. The very act of assessing artistic quality, especially when applied to works created by non-human agents, raises fundamental questions about authority, gatekeeping and cultural legitimacy. If AI-generated content is judged solely through traditional, human-centric frameworks, the evaluation process may be inherently biased against emergent forms of expression that diverge from established norms.
Moreover, since generative AI models are trained on large datasets of existing artworks, they often produce outputs that resemble pastiche, imitation, or recombination of past styles, lacking the originality or contextual nuance that characterizes human-made art. Tools like DALL·E, Midjourney, and GPT-based models exemplify this, as they generate new content by drawing from and reassembling patterns found in historical data.
Visanich: AI tools are often referred to as producing pastiche rather than truly original works, since they are trained on existing data and patterns. Would you agree with that assessment? Or do you think AI is capable of contributing to the evolution of artistic knowledge in a more meaningful way?
Kulesz: I believe it is important to keep in mind that AI, in itself, is inert and without true agency. When working with chatbots, there must be an initial prompt that triggers the interaction. In the prompt lies a vast space for creativity that determines the quality of the final result. Moreover, that result never comes after the first prompt but instead requires an ongoing conversation between human and machine. If well guided by the human, this exchange can lead to remarkable creations. It is up to us to know which keys to press on this supposed gigantic pastiche-generating machine in order to achieve creative outcomes.
It is worth recalling another metaphor that, alongside the pastiche machine, has been used to describe generative AI, particularly large language models (LLMs). In 2021, Emily Bender, Timnit Gebru, Angelina McMillan-Major and Margaret Mitchell described LLMs as stochastic parrots, that is, models capable of repeating patterns from data without real comprehension, just as a parrot can mimic speech sounds without understanding what it is saying. It is true that the very essence of these systems is to predict the next token in a sequence through statistical methods, and that they can produce grammatically coherent, yet at times nonsensical texts. But I believe the parrot metaphor does not fully capture the rich creative potential that an artist who understands how to use an LLM or other AI tools can unlock.
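The next-token mechanism Kulesz refers to can be made concrete with a deliberately tiny sketch: a bigram model that records which word tends to follow which in a toy corpus, then samples continuations one token at a time. The corpus and the `generate` helper below are illustrative inventions for this article, not anything from the interview; real LLMs use neural networks trained on vast corpora, and this sketch mirrors only the statistical principle that Bender et al. describe.

```python
import random
from collections import defaultdict

# Toy corpus; real models train on billions of tokens.
corpus = "the parrot repeats the phrase and the parrot repeats the sound".split()

# Record which words follow each word (a bigram table).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a continuation one token at a time, 'parrot' style."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # dead end: no observed successor
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the", 5))
```

Every output is a plausible-looking recombination of the training data with no comprehension behind it, which is precisely the "stochastic parrot" worry; the creative leverage Kulesz describes lies in how a human steers such a system.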
The use of AI technologies theoretically broadens access to the tools of cultural production, democratizing creation. Yet the power to define and legitimize quality art frequently remains concentrated in institutional or elite structures. Whether a creative work is produced by a human or an algorithm, the central question persists: Who decides what constitutes artistic value, and how are those decisions made and shaped by broader dynamics of power within the cultural field? It also raises a critical question: how might the cultural sector be reshaped if profitability becomes the primary driver of its development? Octavio confronted the possibility in the context of his work for UNESCO.
Kulesz: This was the alarming scenario I described as plausible in my report Culture, Platforms and Machines, which I wrote for the Intergovernmental Committee for the Protection and Promotion of the Diversity of Cultural Expressions at UNESCO in 2018. It can result in a situation where culture risks becoming merely another commodity, stripped of identity, values, and meaning. This would have consequences far beyond the cultural sector.
Such a statement warrants a closer examination of the evolving role of culture in the AI era, particularly the growing risks of its fragmentation, commodification, and algorithmic curation.
In the years since, these dynamics have intensified. Algorithms now mediate nearly all major cultural experiences, particularly through social media platforms. As AI systems increasingly generate cultural products, ranging from music and literature to visual art, these outputs are trained on massive datasets, often scraped from human creators without consent. This not only detaches cultural expression from its original context, but also reinforces a wider neoliberal trend: the commodification of culture, where value is dictated by engagement metrics rather than meaning or creative intent.
Thus, from this critical standpoint, the current trajectory does not represent a liberation of imagination, as techno-optimists often suggest, but rather an outsourcing of creativity for profit, further entrenching corporate power over the tools and platforms of cultural production (Franklin, 2023). This shift risks eroding human agency in meaning-making, fulfilling the very reshuffling of cultural values that Striphas foresaw a decade ago. The pervasiveness and accessibility of AI tools are drivers of this reshuffling. Such dramatic change raises the question of how artists are using these tools and which tools predominate.
Visanich: It seems like there's a growing number of platforms out there, from generating images and music to helping with writing and animation. Could you walk me through some of the most common AI tools that artists are actually using in practice right now, and maybe share how they’re impacting the creative process?
Kulesz: There is a myriad of AI applications currently used by creators of all kinds. This includes both ready-to-use tools such as ChatGPT and Midjourney, as well as more open and customizable systems that, while offering greater flexibility, require more technical expertise.
In general terms, I would say that chatbots based on large language models such as ChatGPT and Claude and the various image-generation models developed by Midjourney, OpenAI, and others are radically transforming the creative landscape. Think of, for instance, a writer: the new possibilities for drafting, editing, and applying formats or styles directly to paragraphs signal significant changes in the way we work with text. In the realm of images, the ability not only to create but also to edit and iterate with specific styles and characters represents a qualitative leap in the visual arts. For those who know how to use them skillfully, these two families of tools (closely intertwined, since most chatbots today are multimodal and can generate images from text or vice versa) can be profoundly transformative.
Transformations such as these can be seen both in educational systems and in the creative industries. In education, these tools can personalize learning, improve accessibility, and foster critical thinking through interactive and adaptive content. AI applications in education can bring about meaningful impact on teaching and learning (Zawacki-Richter et al., 2019). In the creative sector, they can streamline workflows, broaden artistic possibilities, and democratize content creation by lowering technical barriers for non-experts. Ultimately, AI applications can reshape how knowledge and art are produced, shared, and experienced. Moreover, AI agents are expected to possess capabilities that extend beyond the mere generation of information.
As AI tools and agents become more integrated into artistic and cultural production, a growing tension has emerged between viewing AI as a creative collaborator and seeing it as a potential replacement for human artists. This debate has gained urgency as generative models increasingly produce content (text, images, music, even video) that rivals or mimics human-made work. On one hand, AI offers opportunities for experimentation, efficiency, and accessibility in creative practice. On the other, its rapid adoption raises concerns about devaluing artistic labour, erasing authorship, and reshaping the cultural economy in ways that may marginalize human creators. Understanding this tension is crucial, as it sits at the heart of broader conversations about the future of creativity, cultural identity, and technological agency.
Kulesz: In the coming months and years, we will likely also witness the proliferation of AI agents, which no longer merely generate content but also have access to tools enabling all kinds of actions, particularly on the web. Although it is still too early to draw definitive conclusions in this area, it seems quite likely that AI agents will have a significant impact across various creative sectors, including those such as the performing arts, that have so far remained somewhat on the sidelines of the generative AI boom.
Visanich: In view of this constant oscillation between viewing AI as a taking-over agent on one hand and as a creative collaborator on the other, do you think AI agents will replace artists, or is it more a question of their acting as creative collaborators?
Kulesz: AI is a tool that does not replace human collaboration. AI has no intention, no point of view, no emotions. However, this does not mean that AI is not relevant as an assisting machine. In creative work, AI can help save a great deal of time, generate sketches or initial drafts to build upon, and identify patterns that might otherwise go unnoticed, among other key uses. In any case, it is always important to remember that AI can be an excellent servant, but a poor master. It is the human user who must lead.
Visanich: This increase in automated tasks brings to mind worries about what you referred to as the banalization of creative work. How are AI agents replacing specialized expertise and banalizing this process? What is your take on this?
Kulesz: Related to this banalization, we are witnessing the rapid erosion of numerous creative professions. Consider, for example, translation, editing and proofreading, photography, illustration, among others. A somewhat concerning trend in this context is that many, both within the creative sectors and beyond, are almost entirely replacing interns or junior assistants with AI systems. This not only deprives organizations of fresh perspectives and emerging knowledge, but also denies an entire generation the opportunity to learn collaboratively in a real work environment. In the long term, this is likely to have detrimental effects on any cultural field.
Moreover, many organizations are being lured, often uncritically, by the promise of AI and end up dismantling entire internal departments, even those staffed by experienced professionals. In many cases, the outcome has been counterproductive and has forced companies to rehire human workers. Again, AI is an excellent complement for those who know how to use it effectively. For those who underestimate or overestimate it, its adoption can prove fatal.
Kulesz expands on the rapid erosion of several professions, whose work can gradually be carried out by AI. He warns that as the use of AI in these fields expands, it may deliver a serious blow to the creative sector, leading many artists and cultural workers to fear that their roles could become obsolete.
In view of this, the report Generative AI and the Future of Work in America argues that by 2030, up to 30% of work hours across the U.S. economy could be automated, a shift accelerated by the rise of generative AI (McKinsey and Company, 2023). Yet rather than causing widespread job loss, generative AI is expected to augment the work of professionals in STEM, creative fields, business, and law. The most significant impacts of automation are likely to be felt in other sectors, particularly in office support, customer service, and food service, where employment may continue to decline (ibid.). Additionally, the World Economic Forum (2020) estimates that by 2025, 85 million jobs could be displaced while 97 million new ones may emerge, requiring new skillsets. I asked Octavio to comment on the impact such transformations might have.
Kulesz: In my view, the current situation is more complex than mere job loss, as many of these professionals work on a freelance basis. Rather than causing abrupt unemployment, AI has made commissions increasingly scarce and poorly paid. In many fields, such as writing, editing, photography, illustration, and translation, professionals are witnessing a steady decline in income.
There has also been a shift in the nature of the tasks demanded; instead of being hired to write, edit, or translate a text, take photographs, or produce illustrations, many creators are now commissioned to revise or adjust machine-generated outputs, which can feel disheartening and diminish their sense of professional value.
In this regard, although there are cross-cutting studies on the impact of AI on employment, it would be useful to expand research on the specific effects of AI adoption in each creative sector and across different countries.
It is worth noting that in 2023, approximately 7.8 million Europeans were employed in the cultural and creative sector, encompassing roles such as musicians, artists, designers, dancers, and journalists (Eurostat, 2023). Yet artists are known for precarious working conditions (Gill and Pratt, 2008; Visanich and Attard, 2020). Nearly half of the respondents in the 2024 Creative Pulse Survey reported poor working conditions, including abusive subcontracting, false self-employment, underpaid or unpaid work, and coercive contracts (Culture Action Europe, 2024).
Employment in the cultural and creative sector is unevenly distributed across the EU, with Northern and Western Europe having higher concentrations of cultural workers compared to Southern and Eastern Europe. Economic security appears to be a significant factor influencing artistic success. Migrants moving from less affluent regions to cultural hubs may still encounter pay disparities compared to local counterparts. For example, in the Netherlands, which has the highest proportion of cultural workers, studies have highlighted that women and individuals from migrant backgrounds earn less than their male and non-migrant colleagues across the creative sector (ibid.). Reflecting on the potential dangers that Kulesz identifies, I was interested in what other ways AI is being used in cultural management and in the implications for cultural policy-making.
Kulesz: AI is increasingly used in cultural management, though it is still far from reaching the level of adoption seen among creators. There are indeed institutions that use AI to manage collections, optimize ticketing and visitor flows, analyze audience engagement, and personalize cultural experiences, for example. These are not necessarily generative AI tools but rather more traditional AI applications.
The gap between cultural policy-making and the advanced grassroots uses in creative practice is striking. While among artists and cultural professionals AI tools—particularly generative AI—are expanding at a rapid pace, in the field of cultural policy we still operate in what often feels like a pre-AI and sometimes even pre-digital era. There is an urgent need to change the approach with which policies are designed and decided. We must move away from a vertical model, typical of territorial and analogue power structures, toward more dynamic and adaptive frameworks. Technology itself will not solve that problem. Change must come from a renewal of mindsets and strategies. In the public sector, as in the private sector, a lack of understanding of the potential and challenges of AI can lead to deeply harmful outcomes.
Despite the perception that cultural policy often operates in a pre-AI era, Rindzevičiūtė (2025) argues that AI poses a wicked problem for cultural policy because of its rapid development, opacity, and potential to disrupt established norms in cultural participation and governance. Yet Kulesz stresses the importance of a comprehensive understanding of new technologies in designing cultural policies.
Kulesz: To design effective new cultural policies, it is essential that those in positions of power develop a solid grasp of emerging technologies. Since the advent of generative AI, and especially with the COVID-19 crisis, which accelerated the digitalization of life, a unique opportunity to modernize cultural governance at the global level, particularly in its relationship with AI, has been missed. In this regard, it is useful to invoke the Collingridge Dilemma, which posits that when a technology is new, it is still possible to influence its development and steer it socially, although its future impacts are uncertain and difficult to predict. Once those impacts become evident, however, the technology is usually so entrenched that altering its course or regulating it becomes extremely difficult. The decisive opportunity to act at the intersection of culture and AI was, I believe, at the end of 2021, when UNESCO's Recommendation on the Ethics of AI was published, offering concrete and bold guidance for interventions in the cultural sectors.
The Collingridge Dilemma, set out by David Collingridge in his 1980 treatise The Social Control of Technology, suggests that the development of new technology will always outpace the ability to regulate or control it. Intended to address this need to regulate, UNESCO's Recommendation outlines a clear roadmap for cultural development, calling for collaboration across governments and NGOs in four key areas: governance for culture, artist mobility, integration of culture into sustainable development, and the promotion of human rights (UNESCO, 2018). Linked to the 2030 Agenda, the report has improved global monitoring of cultural policy since 2015. It identifies emerging strategic priorities such as artistic freedom, gender equality, and digital creativity. Highlighting the positive impact of local and regional innovations, especially in the global South, it also addresses persistent inequalities and artist vulnerabilities. The inclusion of new data makes the report a crucial tool for shaping responsive and inclusive cultural policies.
UNESCO advocates for a human-centred approach to AI in culture, emphasizing the importance of safeguarding artistic freedom, promoting cultural diversity, and ensuring equitable access to technological tools. This approach recognizes that AI should serve as a means to enhance human creativity, not replace it, and must be guided by ethical frameworks that respect cultural rights and values. Octavio, however, has some reservations about UNESCO's efficacy in this matter.
Kulesz: In the four years since the publication of the Recommendation, virtually no meaningful cultural policies have emerged in this area, and it may now be too late, as we appear to be entering the final phase of the Collingridge cycle. This was not a failure of foresight, since many articles and reports anticipated much of what we are now discussing, but rather a failure of coordination and policy implementation. A parallel can perhaps be drawn with climate change, where the worsening of conditions has been diagnosed for years, yet national policies and multilateral fora remain paralyzed by a range of conflicting interests.
The integration of AI into artistic creation has brought about a profound need to address in international and national policies how we understand authorship, ownership, and originality. As AI-generated works increasingly mimic or remix existing styles, datasets, and cultural content, questions arise about who, or what, owns the resulting outputs. Traditional intellectual property frameworks are largely built around the notion of a human creator, making them ill-suited to address works produced autonomously or collaboratively by machines. This legal grey area creates uncertainty for artists, cultural institutions, and policymakers alike, especially when it comes to attribution, licensing, and fair compensation. Regarding this evolving context, I wondered what the biggest concerns might be relating to intellectual property.
Kulesz: First, it is important to note that the legal status of AI outputs remains unclear. In most jurisdictions, for a work to qualify as a human creation and be attributed to a specific individual, it must be established that the individual made an original and creative contribution to its expression. However, if one simply uses a prompt such as “generate an illustration” and obtains an output, who is the creator of that image? Thus far, most courts and administrative bodies have held that a vague prompt, standing alone, does not constitute sufficient human authorship to warrant copyright protection. By contrast, if there is a more active process on the part of the user (for example, through extensive conversation with the chatbot, the use of much more complex prompts, or the employment of additional editing tools to improve the initial content), then the notion of an original human contribution seems more plausible. In any case, the question remains unresolved.
The other dimension, with profound ethical and social implications, concerns what happens upstream. This relates to the indiscriminate use of all kinds of content (photographs, illustrations, songs, novels, essays, and films, among others) that AI companies have employed to train their models. Although there are some exceptions, most AI providers do not disclose information about the datasets used, seldom seek authorization from rights holders, and even less often offer them financial compensation.
The situation is doubly unfair: not only is prior authorization not requested, but, as Ed Newton-Rex, founder of an AI music company, observes, AI competes with its training data. Indeed, an LLM trained on short stories can produce competing short stories; an AI image model trained on stock images can generate competing stock images; and so forth. In this way, works are not only used without permission, but the machines thus trained end up competing against the very artists whose works they ingested.
If creators are not fairly and transparently included in the equation, the entire scheme risks collapsing in the long term. Current AI models are enjoying a boom thanks to a kind of original accumulation of cultural wealth extracted without consent. As creative professions decline, these machines will have fewer high-quality inputs to ingest and will increasingly rely on synthetic creations, a development that is already known to produce harmful results.
Visanich: As a philosopher, what do you see as the most serious ethical issues raised by the use of AI in the creative sector, for example deepfakes, bias, and ownership?
Kulesz: The impact of this new technological wave is so considerable that there is virtually no dimension of human life that is not being profoundly affected by these changes. In the creative sectors, the effects are multifaceted and carry significant ethical implications.
We have already discussed the impact on labour to some extent. The prospect of a partial collapse of the cultural fabric within just a few years is bleak. While it is true that new jobs are being created alongside those lost, this does not diminish the gravity of what is unfolding. We are speaking of the possible demise of creative ecosystems that have taken generations to build. And the problem is not merely that some jobs disappear but also the pace at which this happens. If we consider the advent of Gutenberg's printing press in the mid-15th century, the response to this technology by scribes was generally one of rejection. There was even a petition from the guild of copyists in Genoa in 1474 asking the Senate to ban the innovation. Yet we must acknowledge that this sector had several decades to adapt. Indeed, print and manuscript copies coexisted for quite some time, even giving rise to interesting hybrid forms, such as books with printed typefaces and hand-drawn illustrations.
By contrast, in the case of AI, the speed of change is unprecedented in human history. Consider that ChatGPT has accumulated 800 million users in just over two years. This tool has completely transformed the landscape of writing and text analysis, with consequences for culture, education and science of a magnitude and rapidity never before seen.
An issue that has thus far received little attention is how the use of AI poses ethical concerns for Indigenous communities whose cultural knowledge, often considered communal, sacred, or non-commercial, can be repurposed by AI systems. Unlike individuals who can assert personal intellectual property rights, these communities often lack formal mechanisms to protect their cultural expressions, which may not be covered under existing IP regimes. Without robust safeguards and culturally informed AI governance, there is a risk that AI technologies will further marginalize traditional knowledge holders, erode cultural sovereignty, and perpetuate a one-sided digital appropriation of heritage. Kulesz elaborated on this point.
Kulesz: The issue of intellectual property is a crucial challenge, as previously discussed. Without respect for creators’ rights, the entire cultural value chain collapses. Beyond the individual level, there is also the risk of cultural appropriation of collective traditional cultural expressions. AI has become an efficient machine for generating content in the style of specific traditions, leaving creators from Indigenous communities particularly vulnerable.
Another point to highlight is the growing dependence on large tech companies. Artists, cultural organizations and small and medium creative enterprises have no real ability to negotiate the terms and conditions they agree to each time they use the most well-known AI systems. Their ability to develop their own systems is also limited, given the costs involved in terms of computing power, skilled talent and access to high-quality data. The risk of economic and power concentration in the hands of AI companies is therefore clear, which bodes ill for cultural diversity.
Closely related to this is the proliferation of biases, whether gender-based, political, ethnic or otherwise, that perpetuate stereotypes, foster discrimination and risk homogenizing future cultural expressions. This could even coexist with the opposite trend: generative AI enables content tailored to the tastes of individual users, but can we still speak of a shared culture in a scenario of hyper-personalization?
Finally, the ease of creating text, images, video and music will only exacerbate the current proliferation of deepfake content, which will likely have a significant impact not only on culture but also on politics and other spheres. We are already witnessing how such realistic creations play an increasingly influential role during election periods in various countries.
Visanich: Looking at the future, how do you envision collaborations between human artists and AI evolving?
Kulesz: We must continue to emphasize that AI is a tool. By this I do not mean to suggest that AI is just a tool, since it has enormous economic, political and cultural effects. However, I find it important to reiterate the crucial role that humans play in all of this. Any narrative about a superintelligent AI seeking to dominate the world is, for now, nothing more than that: a narrative, a myth, a story created by human beings.
The only way to truly benefit from AI tools is to understand them as deeply as possible and to use them in an intelligent and ethical manner. The main risks of AI stem above all from human ignorance and inaction, which lead both to exaggerating its capabilities and to dismissing its potential. The combination of these attitudes results in overlooking the necessary precautions when using these tools, such as preventing economic concentration, mitigating biases, combating cultural appropriation, and other critical safeguards.
Those best positioned to make a meaningful contribution are the ones who understand these technologies both practically and theoretically, and who are able to employ them within a framework of co-creation. In this sense, the potential for future collaborations between human artists and AI is immense.
Concluding remarks
Debates surrounding the role of AI as either a tool or an autonomous agent are central to understanding the current and future trajectories of the creative industries. AI technologies are widely acknowledged to be reshaping the ways in which content is created, distributed, and consumed, while also disrupting traditional employment structures within the cultural sector. From a critical standpoint, this shift is less indicative of a liberation of human imagination and more reflective of the commodification of creativity.
In my conversation with philosopher and digital culture expert Octavio Kulesz, the evolving relationship between AI and the arts was brought into sharp focus. Kulesz outlined the dual nature of AI's impact: on one hand, it accelerates creative processes and broadens access to artistic tools; on the other, it raises pressing concerns about the future roles of artists, the valuation of creative labour, and the maintenance of artistic authenticity. He warned of the banalization of creative professions, in which the uniqueness and depth of human creativity risk being diminished by algorithmically generated content.
For arts managers, these dynamics present both opportunities and profound challenges. Effective arts management in the age of AI demands a careful balance between embracing innovation and upholding ethical standards. This includes fostering meaningful collaboration between human artists and AI systems, and advocating for regulatory frameworks that protect artists’ rights while ensuring that AI contributes constructively to artistic and cultural development. As Kulesz affirms, the sector must remain critically engaged and proactive to safeguard its values in the face of accelerating technological change.
Footnotes
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
