Abstract
Artificial intelligence (AI) is the focus of significant academic attention, yet relatively few works address its relationship to culture and art. That relationship is the focus of this article, with central questions emerging from several recent texts on art, culture, and AI, examined from the perspectives of cognitive science, philosophy, and art itself. Such central questions include: “Can intelligent machines and programs be said to be creative and to produce true art?”; “What is the socioeconomic context of the rise of AI in relation to art and cultural production?”; “How is artificial intelligence as an explanatory framework understood in relation to arts and culture?”; and “Can an apparatus of artificial intelligence be an authorial subject in relation to a work of art?” Critical examination of these questions across significant but disciplinarily varied recent works shows that they return to philosophical uncertainties which already exist in the definitions of art and culture, but also that the development of AI art in turn informs and alters both the questions and the possible answers about art and culture themselves.
Introduction
Any inquiry into art or culture inevitably confronts the unresolved question: what is art? More broadly, what are art and artifice? Even when not posed directly, these questions persist in the background, casting doubt on any systematic theory of the arts. Yet the development of theory and practice often depends less on definitive answers than on the act of questioning itself. With this epistemological premise, this essay argues that artificial intelligence, in its artistic applications, informs and re-informs basic questions about the nature of art. Cultural mediators such as arts managers must then develop both theoretical and practical responses. Through a review of selected texts on artificial intelligence (AI) and art, I show that interpretations of AI and its cultural products invariably reflect back on human intelligence. If AI constitutes something genuinely different, and this difference carries profound implications for culture, then we cannot understand it without allowing it to reshape our assumptions about art, culture, and even human cognition. The essay employs an informal content analysis of several recent books to highlight emerging debates and insights. Given AI's rapid development, research and scholarship have not yet caught up. Rather than a frequency count of concepts or a thoroughgoing comparison, my aim is to use these works as a point of departure for discussion and to highlight how contemporary scholars and practitioners are framing the interaction between AI, art, and culture.
A central pattern is that attempts to analyze AI's relationship to art and culture inevitably return to basic philosophical questions about each of the terms of comparison: What is intelligence? Is intention relevant in judgments about value? Is the subject the author of a work? Does non-human knowledge exist, and if so, what does it look like? Approaching the field as a philosopher, I selected books that emphasize such theoretical reflections, which I argue can deepen cultural management practices because they elicit reflection on the very themes that help to answer why such a field is needed, now or in the future.
The key texts, all published within the last six years, are: From Fingers to Digits: An Artificial Aesthetic (Boden and Edmonds, 2019; hereafter Fingers), Art and Artificial Intelligence (Hermerén, 2024), and the edited collection AI and the Future of Creative Work: Algorithms and Society (Filimowicz, 2023; hereafter Future). Boden and Edmonds establish conceptual frameworks for examining the relation between AI and art, with particular attention to creativity. Hermerén addresses the philosophical question of whether AI can truly create art. Filimowicz's contributors examine AI's impact across cultural industries, situating artistic production within broader economic and technological contexts. Across these works, recurring themes include representation, consciousness, authorship, authenticity, subject-object relations, process, and production.
Underlying all of these debates is a differentiation—sometimes framed as opposition—between human beings and artificial intelligence. The authors nonetheless attempt to remain objective. Rather than favoring either human or non-human intelligence, they are keen to develop frameworks through which to interpret AI's place in culture. In doing so, they converge on recognizing that the relationship between AI and the human creative agent is interactive and complementary. This aligns with perspectives in general AI research, even when it is not directly concerned with the arts.
Yet challenges to the legitimacy of AI-generated art remain central. In Art and Artificial Intelligence, Goran Hermerén, Professor Emeritus of Medical Ethics at Lund University, poses the core question: can computers create art? He also draws on the work of Margaret Boden, Research Professor of Cognitive Science at the University of Sussex, who emphasizes autonomy as a criterion for creativity. Boden notes, however, that autonomy is often too loosely defined and requires further analysis in order to distinguish between AI-generated and human-generated art. In collaboration with Edmonds, Boden argues that AI art may display creativity in some respects but not in others. Importantly, she highlights that creativity has historically been a marginal concern in both the production and analysis of AI art. A methodological problem is that much scholarship judges AI art through human-centered categories. If AI art is regarded as deficient or fundamentally different, however, the issue may lie less in AI itself than in the anthropocentric frameworks through which it is assessed. Despite current scholarly recognition of the Anthropocene as a critical frame for human history, AI is still forced to validate itself against human art.
The comparison of AI-generated and human-generated art directs attention to different focal points: the artwork itself, the techniques of production, the processes involved, the artist's intentions and feelings, and the perceptions of audiences, all of which have implications for management of cultural production and enterprises.
Notably, Michael Filimowicz's edited collection, AI and the Future of Creative Work, is distinctive in foregrounding the broader context of artistic labor and production. A Senior Lecturer in Interactive Arts and Technology at Simon Fraser University, Filimowicz gathers essays that situate AI within cultural economies and industries. An important concern in the field has always been the precarity and relatively low rate of return for artistic and cultural labor.
With the above broad patterns in the literature in mind, it is useful, as a first consideration, to look at how the various authors deal with the concept of intelligence. Defined in myriad ways across a broad range of fields, intelligence is as thorny a concept as art and culture. It is a vast understatement to say that finding a commonly agreed-upon definition of human intelligence is difficult (Bartholomew, 2004; Sternberg and Detterman, 1986). Although AI “does not originate from the same underlying human cognitive or emotional process” (Gignac and Szodorai, 2024) as human intelligence, its advent undoubtedly introduces new dimensions to theories of intelligence. For example, one might consider whether intelligence can exist without a conscious subject as its bearer. Such a human-centric view, however, may be beside the point when considering AI's production of art and cultural forms.
AI and authenticity
Assessing AI creations as authentic, as art, or even as evidence of intelligence may be less productive than analyzing how AI reshapes structures of cultural production, especially in the case of emerging musical artists. Forced to contend with the market as it is, they consciously and unconsciously package themselves as artists within AI-generated collections and forms of music. The packaging of one's artistic identity can give the illusion of an individuated authenticity, even though the parameters of an expressible (and marketable) subjectivity are increasingly fabricated by AI programs, which select repertoires of music based both on preselected music and on the choices consumers make from preselected bodies of content. Artists, in turn, to be included on any media or social media platform, must present themselves within a context of prefabricated musical repertoires deemed worthy of promotion and production by AI programs, often algorithms. The point is that authenticity and creativity are already subsumed under an intelligence that operates outside of human consciousness and in favor of music industry forces.
In this vein, Sarah Keith and colleagues shift attention from AI's algorithmic functions to broader structures of streaming oligopolies, datafication, and labor. In Future, they argue that while AI enables new modes of music creation, structural changes in labor and compensation better explain shifts in the industry. This perspective reframes AI as one factor within long-standing dynamics of industrial organization, rather than a singular technological rupture. Such analysis tempers the tendency to sensationalize AI by situating it within broader social and economic contexts. In much liberal scholarship, art and culture are treated primarily as first-order elements of society, so are not examined within underlying frameworks of production. This approach often obscures the political stakes of cultural labor. In Future, Matthew Garvin and colleagues reject the pretense of disinterested analysis, highlighting instead the complicity of academic scholarship in producing cultural labor markets through, for example, arts management training. They argue for a strategic deployment of AI-art theory and practice to support what they term a “liberated economy” (Filimowicz, 2023: 71). From this perspective, the key question is not simply how algorithms produce art, but how they are mobilized within socio-political contexts.
Positive and negative effects
The emphasis on the algorithm raises a critical question: has AI—specifically its algorithms—become fetishized in cultural discourse? Conversations around Silicon Valley, for instance, often reveal an overconfidence in algorithms, presented as transformative forces independent of their technological and social contexts. Such views neglect the role of binary code, hardware, and above all, the political economy of technology. In the arts, similar distortions appear. One arts manager described how San Francisco's art market suffered when technology investors, perceived as lacking cultural refinement, supported algorithmically inflected works widely regarded as low-quality, while ignoring more critically acclaimed art. This fixation on algorithms, the manager suggested, distorted artistic valuation. Both this anecdote and the arguments in Future converge on the same point: fascination with algorithms can obscure the economic and social realities that shape cultural production.
Christophe Magis's contribution to Future develops this critique further. In A New Economy of Blockbusters? Netflix, Algorithms, and the Narratives of Transformation in Audiovisual Capitalism, Magis argues that algorithmic mythologies distract from the deeper industrial logics of audiovisual capitalism. While firms like Netflix deploy algorithms to allocate investments, their decisions remain guided by traditional patterns of risk and market calculation. Industrial logics evolve, but not always in the ways imagined by technological discourse. This suggests that overestimating AI's cultural power is as misleading as underestimating it. Indeed, the development of AI may depend less on its technical capabilities than on whether it can recognize and integrate its own cultural, historical, and economic contexts—a task often overlooked by human designers. In this way, the study of AI reflects the value of structural analysis in cultural mediation.
On the more deleterious side of social effects, algorithms used in the curation of content in the cultural and creative industries have been linked to binge-watching of television and movies (Shlott and Gaenssle, 2025), which in turn has been linked to increased stress and anxiety (Alimoradi et al., 2022). In contrast, as Garvin et al. argue, using the example of artisanal art, such socio-political deployments of algorithms can be mobilized to improve social effects, as when artisans themselves control AI with the goals of healthier or more authentic products. One such pilot project of artisanal control described by Garvin began by successfully distinguishing artisanal cloth from mass-produced cloth in compiling its sources.
Attention to historical and socioeconomic conditions and the cultural forms they produce is similarly important for understanding how art is produced and circulated. While Boden, Edmonds, and Hermerén often anchor their analyses in artistic processes and conceptual categories, and most contributors to Filimowicz's collection focus on cultural industries, Sven Nyholm's lead essay in Future (Can a Robot Be a (Good) Colleague?) shifts the focus to the workplace. Drawing on his previous work on ethics and human–robot interaction, Nyholm interrogates whether robots can be understood as colleagues, or even as companions. This question is not only philosophically provocative but also socially timely, given public debates about automation and job displacement. Cultural management provides an especially striking case, as robots have already been employed as museum guides (Styx, 2024) and as performers in theater (Lin et al., 2013). Nyholm suggests that these roles position robots as actants within organizational systems, raising concerns that resonate with broader socio-structural dynamics. The narratives relating to economic life inevitably frame how we interpret such developments (Pederson, 2013; Shiller, 2019).
In turn, the presence of AI colleagues in cultural labor may reshape those same narratives, reinforcing the reciprocal relationship between AI and our self-understanding. To take one simple example, the ideal traits of work colleagues are possessed in degrees, rarely in full measure. Attention to detail, the ability to work with others, and other attributes constitute the strengths and weaknesses found in any workforce. Helen Ryland (2021) takes this up to argue that robots should be judged as co-workers by the same standard: that perfection should not be demanded of either humans or robots in the workplace.
Nyholm situates this debate within the longer philosophical tradition of assessing technology's social role. He draws on Aristotle, who simultaneously defended slavery and speculated about technology's capacity to eliminate the need for enslaved labor. Later, radical feminist theorists such as Shulamith Firestone (1993) and Valerie Solanas (1997) similarly envisioned technology as potentially liberatory, particularly for women. Yet, as Nyholm points out, ethical evaluations of robots in the workplace remain deeply contested. Some, like Joanna Bryson (2010), provocatively argue that robots should remain slaves, while others emphasize technology's emancipatory potential. For Nyholm, the very existence of such divergent evaluations underscores the conceptual inadequacy of existing categories. As Filimowicz notes, “new concepts and categories are needed” (2023: 16). Here again, AI does not simply fit into inherited frameworks; its range of pros and cons compels us to revise our frameworks, reinforcing the argument that AI art reopens and clarifies basic questions about agency, creativity, and social meaning.
Socioeconomic perspectives provide further clarity. One recurring concern is whether AI-generated art contributes to a broader shift from knowledge to information, or from art to entertainment. Digitalization often appears to favor the entertainment register of visual and aural art. Stephen Roddy, in his essay for Future, connects this fear to earlier critiques of cybernetics and automation, noting persistent anxieties about human replacement. Yet, like other scholars, Roddy concludes that what is occurring is not displacement but collaboration: humans and machines interact to produce hybridized forms. Rather than subtracting the human, AI incorporates human inputs into its processes. This hybridization is visible across domains, from simultaneous live streaming and broadcast television to parallel commentary on social media. Such practices stack multiple representational layers, producing additive rather than obsolescent forms. Once again, AI prompts reconsideration of representation, not as a static correspondence but as a recursive process of re-presentation.
AI and the artwork itself
Boden and Edmonds's work highlights a different vantage point than the larger socioeconomic formations of AI and art by centering on the artwork itself. This perspective, while limited, is indispensable for a full understanding of AI, art, and cultural management practice. Their systematic categorization of artistic processes emphasizes production and creativity at the level of the individual work, in contrast to the industry-level focus of Future. The juxtaposition of these approaches illustrates how even the concept of intelligence is shaped by anthropocentric assumptions. Too often, inquiries into intelligence begin from the standpoint of human cognition, asking, for instance, what occurs in the artist's mind, rather than tracing the broader evolution of the concept through other societal forces and through the practical contributions of the range of cultural workers who are also involved.
A focus on the processes of artistic production, therefore, directs attention to the question of authorship. In this context, Hermerén, in an approach similar to earlier work by Howard Becker (2008), emphasizes the multiplicity of participants involved in producing a work of art. For Hermerén, clarity about contributions is essential, particularly in cases where both human and non-human elements are involved. This includes AI systems, programmers, and curators who intervene at various stages of production. To navigate these complexities, Hermerén underscores the importance of conceptual frameworks that can systematically map human–machine interactions and thereby provide a basis for determining authorship. This perspective positions authorship not as a singular act but as a distributed process, highlighting how AI requires us to reconsider established categories of artistic creation.
The challenges of attribution become concrete in Hermerén's analysis of Portrait of Edmond de Belamy, an AI-generated work sold by Christie's auction house in 2018 for over $400,000. The painting's production involved programmers who drew on pre-existing code, a collective that devised the artistic concept, and an AI system that generated the final output. Despite this complexity, Christie's credited authorship solely to the AI program, a move widely interpreted as commercially motivated. This decision not only marginalized the human contributors but also exemplified the risks of treating authorship in non-systematic ways. While ethical debates about attribution remain important, the Christie's case illustrates how economic imperatives and market logics can override nuanced assessments of authorship. Thus, the discourse on AI art cannot be separated from the political economy of cultural production—a point that returns us to the broader argument that AI forces a reexamination of foundational questions about art itself.
Uncertainty about how to situate authorship extends to the very definition of the artwork. Boden insists that outputs of computer-generated art programs should be regarded as artworks, but she also cautions that the program itself may lay a stronger claim to being the definitive work (Boden and Edmonds, 2019). Edmonds, in his chapter Programming as Art, advances a similar view, suggesting that programming constitutes the artwork itself rather than merely producing it. This reconceptualization shifts attention from object to process, dissolving the discreteness of the final artifact and elevating the generative procedure as the locus of artistic value (Shanken, 2002). Subjectivity remains a factor—through the perceptions of artists and responses of audiences. But the emphasis moves away from discrete outputs and toward the systems that generate them. The result is a reconfiguration of the subject–object relationship, not in Nietzsche's (1989) reversal where the artist is more interesting than the work, but in a reframing of the process as the central object of inquiry. Such a shift repositions representation as a criterion of art (an idea originating, at least, in ancient philosophy). The idea of art representing something of the world takes on an interesting dimension when the artistic output results from a human prompt, but even more so if the output is solely generated by the AI algorithm.
Representation, however, is only one way of considering artistic output. Artistic movements in the 19th and 20th centuries emphasized line, color, texture—or in other words, the process and execution of art works. By situating AI art within this shift, we may be able to see how longstanding debates in art theory resurface in new forms, reaffirming that AI art functions less as an anomaly than as a lens on enduring questions: what is art? What do we value in and about art?
Even when attention shifts to broader economic contexts, questions about the nature of art and intelligence remain central. In Fingers, Boden theorizes across multiple levels: individuated artistic processes, the intermediary space of coding, and machine-generated art. Given her background in philosophy of mind, it is fitting that she turns to R.G. Collingwood's philosophy as articulated in The Principles of Art (1960). Rather than relying on the philosopher's better-known works on history or nature, Boden foregrounds Collingwood's claim that art is fundamentally the expression of emotion. This choice is instructive: it highlights the continued salience of theories that anchor art in human subjectivity, even as AI art problematizes the very notion of a sapient, emotional subject.
Collingwood's emphasis on intention and emotion reflects a longstanding view that art requires conscious human agency. Such assumptions underlie many critiques of AI art, which often echo debates in moral philosophy—dividing assessments between consequentialist and deontological approaches. From a consequentialist perspective, the value of art could be judged independently of intention, increasing the potential that AI outputs can be considered art. From a deontological perspective, however, intention and subjectivity remain essential, rendering AI-generated works inauthentic.
Nietzsche's critique of the priority of the subject over its actions complicates these dichotomies, further destabilizing the assumption that intention must anchor artistic value. In AI art, input replaces intention, and yet aesthetically significant works may still emerge. This again shows how AI compels us to question established linkages between subjectivity and art.
Collingwood's theory foregrounds the role of the artist as constructor of emotion, while treating the audience as secondary participants in a guided process of evocation via the work. Contemporary AI art disrupts this asymmetry. Audience interpretation now plays a more central role in determining whether AI outputs are recognized as art, a shift that resonates with broader trends in interactive and participatory art practices. Ernest Edmonds's contribution in Fingers, for instance, examines interactive art from the perspective of artists’ own theories and technological innovations, but also takes into consideration the audience perspective. The rise of AI-generated art underscores how crucial audience reception of the work has become, particularly when the artist-subject is absent or redefined. In this sense, AI art not only destabilizes Collingwood's framework but also invites reconsideration of where creativity and consciousness should be located.
The ontology of art
The question of consciousness has been central to the examination of AI in general. Within Boden's field of cognitive science, consciousness and cognition have been partially decoupled, acknowledging that cognition can occur without consciousness. Yet this recognition has not resolved the difficulties of using either concept as a criterion for understanding intelligence, intention, or creativity. AI art brings these limitations into sharp relief. By producing works that are aesthetically persuasive without being grounded in consciousness or cognition, AI highlights the inadequacy of existing categories. Thus, far from simply failing to meet established criteria for what is truly art, AI-generated works challenge us to rethink the criteria themselves. In this way, AI art not only tests but also transforms our conceptual understanding of art and human intelligence.
In The Philosophy of Creativity (2014) and The Creative Mind (2004), Boden complicates the notion of consciousness, treating it less as a unified entity and more as a dynamic interplay of elements. Hermerén, likewise, observes the oscillation between centripetal and centrifugal forces in philosophical accounts of consciousness. The core issue remains whether consciousness is best understood as subject or object, a difficulty tied to the broader philosophical problem of distinguishing between the real and the apparent world.
Hermerén draws on Glannon (2022), who introduces a moral dimension to consciousness, to show that definitions of consciousness tend to move in circles (Hermerén, 2024: 18). Glannon's attempt to normativize consciousness grounds the problem in practical situations, suggesting how consciousness might serve as a criterion for defining art. Yet Hermerén emphasizes that the “explanatory gap” persists (Hermerén, 2024: 20). For him, consciousness remains indispensable to discussions of artistic authenticity, but its instability as a concept mirrors the difficulties of using it as a philosophical foundation for defining art. Thus, as in broader debates about AI, the very uncertainties regarding consciousness can illustrate how AI compels us to revisit fundamental problems rather than resolve them.
Art and cultural mediation
For cultural mediators and managers, these philosophical uncertainties translate into practical dilemmas. Mazzone and Elgammal's (2019) study, which compared AI-generated art with works exhibited at Art Basel, raises questions about whether the contemporary aesthetic environment itself contributes to the indistinguishability of AI and human art. If AI systems are trained on up-to-date artistic practices, they may be especially adept at generating outputs that align with current styles, thereby creating interpretive uncertainty for audiences. In this sense, the evaluation of AI art is inseparable from the questions posed to it: AI's answers reflect the framing of its inputs. For mediators, the issue is not whether AI is conscious but how audiences negotiate meaning when confronted with works that blur established distinctions. Once again, AI does not merely imitate human art but destabilizes the very grounds on which judgments are made.
Artists and institutions have themselves foregrounded this destabilization. Ai Weiwei's public projects (Circa, 2024), which displayed seemingly unanswerable questions across electronic screens in major cities, emphasized uncertainty as a distinctively human characteristic. Similarly, the 2020–2021 exhibition Uncanny Valley: Being Human in the Age of AI at the DeYoung Museum explored how AI-generated answers can produce social disruption when guided by problematic questions (E-Flux, 2020). Both cases highlight a central dynamic: in the cultural field, questions often outweigh definitive answers, and uncertainty itself becomes part of the aesthetic experience. This dynamic mirrors the approach of Boden, Edmonds, and Hermerén, who collectively stress that mediators—curators, trustees, philosophers—must operate without final resolutions about authenticity. AI art thus reinforces the point that cultural practice is often conducted under conditions of provisionality, where decision-making cannot rely on absolute criteria.
This reframing of inquiry to focus on the questions rather than answers raises further comparative possibilities. If questions about art can be asked outside the AI context—such as whether process outweighs intention or product in performance art—then AI art is one instance of a broader inquiry into the status of subjectivity in cultural production. Extending the discussion to other forms of intelligence, such as animal or object intelligence, could enrich debates about AI art by expanding the criteria under which we evaluate artistic practice. If multiple intelligences exist, then art might be better understood through a pluralist lens, challenging anthropocentric assumptions like the need for representability of objects and the priority of an emotional subject, and situating AI art within a spectrum of creative agencies.
Representation and representability have been central elements of art, especially in its naturalistic forms. Representation, however, cannot by itself serve as a definitive criterion for evaluating AI-generated art. Both humans and AI employ representational strategies, though AI depends on human inputs, at least initially. Harold Cohen's experiences with his painting machines illustrate this point. For Cohen, the machines’ representations forced him to interrogate his own cognitive processes, even as he struggled to escape anthropocentric assumptions about representation. Similarly, Simon Colton, creator of The Painting Fool, a computer program that generates its own artwork, critiques the assumption that computer-generated art must emulate human behavior to be valuable (Colton, 2008). Such reflections destabilize claims that human art is inherently more authentic or valuable, emphasizing instead the importance of interrogating representational practices themselves. AI art thus operates as a provocation, compelling us to ask not only what representation is and what qualifies as representable, but also how different representational logics inform our definitions of art. Here, then, a pattern emerges among practitioners of the arts with regard to anthropocentrism. Across the literature on AI and art, when it is artists who ask the philosophical questions, alongside their actual creation of works, AI is given serious consideration as a valid but different origin of art, and anthropocentrism is actively challenged or avoided.
Hermerén extends, as it were, Cohen's interrogation of human processes in art by positing five modes of thought relevant to the analysis of AI art, of which hermeneutic and empathic thinking pose the greatest challenges. Analytic and instrumental thinking, by contrast, align more closely with AI's current capacities. This typology underscores the ways in which AI exposes distinctions in human cognition itself. Cohen, approaching the problem as an artist, emphasized introspection and creative process, while Hermerén, as a philosopher, abstracts toward general types of thought. Both, however, suggest that reflecting on human thought is central to understanding AI art. Here again, AI compels a reexamination of established categories, whether the locus of creativity lies in product, process, or cognition itself.
Building on Edmonds's claim that programming has introduced new concepts into artistic practice, it becomes evident that system-level thinking is increasingly central to discussions of the nature of AI art. Computer-generated works foreground the role of systems, though the concept of a system in this context carries particular meanings.
This systemization invites questions about what types of systems we prioritize in cultural analysis and why. Importantly, Boden links such system-level concerns to the audience's experience. In Fingers, she critiques practices that overwhelm visitors with program outputs, suggesting that such approaches risk reducing art to instruction rather than aesthetic encounter (Boden, 2019). Her taxonomy of computer art, including categories such as Evo-art—generated through genetic algorithms that evolve the program itself—illustrates how process and pedagogy intersect in AI art reception. For cultural mediators, this raises practical questions: how much should audiences be educated about the generative processes behind AI art, and where does explanation risk eclipsing experience? These tensions highlight how institutional contexts, not just technical capacities, shape the reception and evaluation of AI art.
The case for creativity
Boden's broader framework situates AI within a typology of creativity—combinatorial, exploratory, and transformational (Boden, 2004)—and includes, as well, her taxonomy of computer art. As analytical tools, these structure the debate by clarifying relationships among types of creativity and forms of AI art. Boden also invokes John Ruskin's theory of Gothic art (1900), comparing its characteristics—including grotesquerie—to computer-generated works. She concludes that, by Ruskin's criteria, most but not all AI-generated art would fail to qualify as “true art” (Boden, 2019: 169). Like Collingwood, Ruskin emphasizes the centrality of human emotion and fellow feeling as criteria. In measuring AI art against these benchmarks, Boden provides a framework for understanding why many scholars and audiences remain skeptical of AI's artistic validity. Yet her work also demonstrates that AI forces us to deepen our understanding of natural intelligence and creative processes, even as it challenges us to remain open to new evaluative categories.
But whatever concept centers an examination of AI and art—creativity, the algorithm, the author, the audience, the work, or the process—the analytical focus may lead in directions that exclude certain possibilities, including ones that would look like good answers, while favoring others. The bias against the validity of AI as a source of art and culture that one detects in much philosophical and critical writing on AI may stem from beginning examinations with anthropocentric assumptions, but it might also stem from an overfocus on AI itself.
Conclusion
This essay has offered an informal analysis of several recent books treating the subject of AI and art. A fuller analysis would treat these topics more deeply in order to assess the frequency of these themes beyond the literature examined here, including their impacts on the field of cultural management and its practices. The very newness of research on AI and art, especially from a philosophical perspective, limits the possibilities for such an analysis. Even so, this essay lays out a set of predominant themes that serve as a starting point for reflection and discussion: representation, intelligence, authorship, authenticity, process, production, and the relation between subject and object are relevant to the field apart from considerations of AI. I argue that they become more salient as the use of AI advances.
The complexity of the interface, as it were, between AI art and human art, and between AI-generated cultural forms and pre-AI forms, will be part of significant debates, at least in the near future. For cultural mediators and managers, this means recognizing that questions of authenticity and value in AI art will continue to resist definitive answers. Instead, the challenge is to manage the interface between human and artificial creativity in ways that both acknowledge uncertainty and foster new possibilities.
One thing that much of the literature on AI and art shows is that AI is part of a landscape of cultural forms that may change, ontologically, over time. With the development of the new idea of object intelligence, for example, and in light of renewed attention to animal intelligence, AI may come to be seen as only one part of a comprehensive array of intelligent processes. In such a view, human art serves as a lens for looking at AI that allows for an expansion of interpretations rather than as a lens for critique.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
