Abstract
In this comment piece, we argue that mass-produced generative AI (GenAI) images, commonly referred to as “AI slop,” should be considered a form of aesthetic alienation. Specifically, we focus on GenAI images of fall, arguing that GenAI images not only alienate artists from their art, but also produce an alienating aesthetic in and of themselves. Closely attending to the aesthetic registers of GenAI images opens up important sociological questions about the role of the image in contemporary society and the affective logics of late capitalism. Finally, we highlight how GenAI images are profoundly implicated in the extractive and destructive materialities of late capitalism.
On Trying and Failing to Photograph the Moon
Have you ever tried to take a photo of the moon and been dissatisfied with the result? Amateur and professional photographers alike know that such photos often fall comically short of the encounter in the moment. For years, Samsung marketing claimed that its phones were better equipped to take high-quality photos of the moon, boasting “moon-zoomed technology”. In March 2023, Samsung was caught “faking” photos of the moon (Vincent & Porter, 2023). The “faking” was discovered by a Reddit user, who took a blurry photo of the moon and then photographed that blurry photo with a Samsung phone (the S23 Ultra) (Vincent & Porter, 2023). The phone inserted details that were not present in the original photograph, and importantly could not have been, producing a crisp and highly detailed “photograph” of the moon. Users had accused Samsung of overlaying texture filters onto photographs of the moon since the feature first launched in a 2020 model, but Samsung repeatedly denied that this was the case.
The process at work here is more than simply removing blur using Artificial Intelligence (AI) tools; it is also inserting details where there are none. What appears to be happening is that Samsung’s technology recognizes that you are attempting to take a photo of the moon and inserts an AI-generated image in its place. In the context of a broader range of image editing and manipulation practices becoming commonplace (Bravo, 2017), one might consider AI-generated images as yet another challenge to the notion of what a “real” photograph is (Campbell, 2014). However, this thinking positions AI-enhanced or altered photographs as simply another advancement in a long line of (photographic) technological progress, another transformation of what constitutes art.
We argue that AI images are inseparable from their enrollment in capitalistic wealth creation, especially in the way they are increasingly mobilized on social media as cheap and easy ways to drive attention traffic. Marx’s (1932) theory of alienation is newly applicable to the proliferation of low-quality AI-generated images across digital platforms, dubbed “AI slop” (Mahdawi, 2025). This concept enables us to consider the aesthetics driven and produced by AI-generated images, attunes us to the materiality of this socio-technical configuration, and raises important sociological questions about aesthetics and the social world. We examine a specific AI slop example of the “fall aesthetic,” a popular social media trend involving images of aesthetically pleasing autumnal scenes, which is now dominated by AI-generated images. AI slop is a qualitatively different type of image from those defined by Steyerl (2009) as “poor images.” Where poor images are degraded, pixelated, and generally low quality, AI slop is marked by a slick, uncanny, and almost hyper-real quality. We contend that AI-generated image aesthetics represent a new iteration of surplus value creation on digital platforms, wherein their production alienates the artist from the art, producing aesthetics that alienate and can only replicate rather than capture beauty.
Throughout this piece, we use the term “AI-generated images,” as we consider the lay practice of AI-image generation discussed here as sitting between the photographic and the artistic. As Hausken (2024) argues, AI-generated images mimic photographs without being photographs. She contends these images are more akin to the style of “photorealism” than photographs themselves. However, many mass-generated AI images are not yet photorealistic themselves; their uncanny feel and aesthetic excesses instead produce a form of affective literalism. Given the position of AI images as adjacent to and perhaps claiming some of the same cultural space as photographic work, we argue that theoretical work that understands the social functions and affective dimensions of photographs is useful for understanding and analyzing what an AI image does.
AI Art, Images, and Alienation
The aesthetic and representational qualities of AI-generated images not only complicate the relationship to the real described by Sontag, where photographs document the world (2003, p. 63), but also trouble its aesthetic project. Where the knowledge project of photography falters, Sontag argues that creativity steps into the breach, and the photograph is then considered on the basis of its originality, “equated with the stamp of a unique, forceful sensibility” (2003, p. 118). AI images have neither a clear relationship to knowledge, nor a “unique, forceful sensibility.” Where does this leave the AI-image? Arielli (2024) argues that AI generators function as a “quintessence machine” (p. 16), transforming the unique sensibility that Sontag identifies in artistic images into “instances of a general idea” (Arielli, 2024, p. 17) that can be infinitely reproduced. Arielli points to Benjamin’s work on reproduction as one way of understanding the reproducibility of art through AI generators. However, as Arielli goes on to argue, Benjamin is concerned with the reproduction of specific works of art, not a general style of an artist’s work. Benjamin (2018) argues that reproducing artwork, even as faithfully as technically possible, means that we lose the aura of the original, or its specific, contextual presence in time and space.
While Benjamin argued that reproduction might result in a loss of aura, the reproductive technologies of digital spaces do produce logics that are increasingly dominated by “vibes,” which are more fluid than Benjamin’s aura. Given that such vibes are partly fueled by associations that emerge through machine learning (other users who liked this product also liked this product) (Yeetgenstein, 2024), it is not surprising that AI image generators try to reproduce the “vibe” of human-produced art. In this way, they operate similarly to digital advertising flows, which attempt to both create and respond to the shifting moods and “vibe” of users (Brown et al., 2024). With respect to AI-generated images that seek to replicate a “vibe,” Arielli (2024) argues that we may enter a scenario where the aesthetic moods and motifs of individual artists’ work are also legally protected.
Let us, for a minute, consider the “art” produced by Generative AI (GenAI) tools such as MidJourney, Dall-E, and AI “art” generating tools now embedded across Meta products, including Facebook and Instagram, and in Microsoft tools and Adobe. There are a range of authorship concerns with these tools, most prominently that they have been trained on the stolen work of artists who have neither consented to nor been compensated for the use of their work to train large language models (Goetze, 2024). Here, we can extend Marx’s (1932) critique of the alienating aspects of capitalism to the aesthetics produced by GenAI and other forms of synthetic media. One of the primary planks of Marx’s analysis is his description of how capitalism modifies labor, and in the process of doing so, distances the worker from the product of that labor, alienating them. For Marx, labor, and by extension work, was a productive, creative orientation to the world, and one of the ways in which humans make and find meaning in their lives. Artists whose work has now been “reproduced” by GenAI models are alienated from their creative labor—they no longer have control over what they produce. While one could argue that any artist may be “trained” in the style of another, the reproductive capacities of AI far exceed mere resemblance. Indeed, a recent US study found that widely used image diffusion models such as Dall-E 2 “memorize” images from their training data and may reproduce near identical copies when generating “new” images (Carlini et al., 2023). The earlier example of the moon represents a type of doubling of this alienation, where the attempt to create art is itself alienated through the silent insertion of AI into this process. However, these configurations are not purely mechanical; they also depend on the labor of moderating humans.
Applying a Marxist lens to AI images extends Sekula’s analysis of photography, labor, and capitalism. This work offers an analytical frame for GenAI images, particularly through his distinction between the esoteric and exoteric economies. In the former, “profit is derived from the value added to the commodity by labor power” (Sekula, 2014, p. 21), while in the latter, money is created out of money, outside the surplus value produced by labor (Mulvey, 1993). GenAI images, we argue, sit in an uneasy space between the two: they are not removed from the material realities of labor, but they also form part of the tech-hype cycle (Maddox & Smith, 2025), which produces money from money. By their design, AI art tools cannot generate new ideas or aesthetics, or be inspired. Rather, the expression of art they generate is constrained by the fact that they can only reproduce reconfigurations of what has been.
Is it artistic practice to generate art through engagement with a machine? Is there an art to the prompting? One AI artist, Jason M. Allen, reported spending over 100 hours creating his prize-winning partially AI-generated image Théâtre d’Opéra Spatial, only to find he was unable to copyright it, and his image is now being repeatedly used without authorization (Prada, 2024). His attempt to copyright the image was denied by the US Copyright Office, and further appeals by the artist were denied by a federal US judge, on the basis that copyright must be held by human authors (Knibbs, 2023). GenAI image generation is oriented towards surplus value creation through the exploitation of already existing human creative work without consent, rather than artistic expression. Allen’s attempt to engage in creative labor reveals this as a foundational struggle with the medium. By and large, GenAI art bypasses the creative labor needed to produce art and instead creates only the product, with Allen’s 100-hour effort being the exception in a landscape where AI art is largely produced instantaneously. Simultaneously, when attempts are made to integrate GenAI into creative practice, as Allen has done, the artist must contend with the tool for ownership of the creation.
In a study of Facebook pages posting GenAI images, Renée Diresta and Josh Goldstein (2024) found that many include captions that reinforce a reading of the image as authentic human creativity. Many of the AI images collected by Diresta and Goldstein carry captions such as “I made it with my own hands,” paired with a picture of, for example, a young girl holding a cake she has made. We argue that these captions are, in some ways, a tacit admission that the labor of human creativity has value, and that these images seek to cultivate this value through association. Even with human creative labor involved, the aesthetics produced by GenAI reproduce and intensify the alienation inherent in late capitalist society, both alienation from creative labor and subsequently from the self, in part because of the strange and uncanny registers of AI images as they attempt to replicate the beauty and transcendence of human creativity.
In attempting to create “beauty” through GenAI images that exist only because we command them to exist, we no longer stand “in a different relation to the world than we were a moment before” (Scarry, 1999, p. 112). Without a tether to the real, nor the capacity to decenter the viewer, can AI art be a response to beauty? If not, what are AI images for, and what do they do? To examine this question, we explore AI image aesthetics as they are commonly produced and circulated on and through social media platforms with a clear intention to create or respond to beauty in the natural world.
The Excessive Aesthetics of AI Fall
“Aesthetic” autumn images are widely popular content across visually oriented platforms. As of December 2024, there are over 82 million posts on Instagram tagged #fall, and nearly 8 million tagged #fallvibes. In her reporting for Vox, Jennings (2021) argues that each autumn (or fall) generates its own social media “micro trends” (see also, Abad-Santos, 2019). These seasonal aesthetics are based on real people and real places, even though they are curated for Instagram and other similar platforms.
The North American fall of 2024 saw yet another iteration of this trend. This time, however, it was notably mediated by AI. On September 24, 2024, the X (formerly Twitter) account @fallaesthetic0 posted an AI-generated image with the caption “dreaming of fall days like this (autumn leaf emoji).” At first glance, the image appears to be of a village square somewhere in Europe, or perhaps the old quarter of Montreal or somewhere in Boston. The image positions the viewer inside a cozy cafe, looking out. The sky is overcast, and the trees are bare; the interiors of the shops on the other side of the square are lit with a warm, inviting light. Festoon lights are hung between the lampposts. The image conveys the feeling of fall with a clear attempt to be aesthetically beautiful. Closer inspection reveals the tell-tale signs of AI wonky-ness: The cups are a strange shape, and one appears to have a dead bird floating in the coffee. The seat on the left-hand side of the picture is strangely constructed, and the table is wet, as if it has been raining inside. At the time of writing, this image had been viewed 11.6 million times. It was reposted by the same account on October 19, 2024, with the caption, “this aesthetic is literally all I want from life.” The affective pull of aesthetics overwhelms the real; the viewer desires the signifiers of fall, seemingly independent of the actual experience of the season.
Instagram is also replete with GenAI’s confusing attempts to capture the fall aesthetic. Top search results in hashtags such as #fall or #fallvibes include an array of GenAI images and, increasingly, videos seeking to capture the seasonal vibe with image after image of perfect autumnal rain-soaked streets, covered with excessive leaf litter in a range of shades of orange. Interior scenes depict fireplaces surrounded by pumpkins, truly confusing amounts of pumpkins. The images are moody and saturated, with flickering light. Searching for similar terms on Pinterest will yield similarly uncanny scenes—reading a book in a bed layered with many cozy woolen blankets, which is also inexplicably covered in leaf litter despite being inside. While the affective lure of the AI image is that it feels like something that might exist, that could exist, the images often linger at the boundaries of possibility, frequently straying from implausible to impossible.
In Figure 1, a GenAI image we produced using the design tool Canva to capture “fall vibes,” we can see the key elements of uncanny AI aesthetics in the scenes and objects. At first glance, the image appears familiar, likely taking place in a cafe, where we see a table with two coffees placed on it. The table sits in front of a large window, revealing a busy, rainy autumn day where the orange leaves of the trees are striking against the grey skies. These scenes are typical and familiar, seeking to evoke a sense of coziness in the viewer, reminiscent of a warm drink on a cold day. However, many parts of the image are unsettling and alien. The boundaries of the cafe are permeated by the trees outside as they grow into and blend with the frame of the window. The floor of the cafe is littered with orange leaves, piled so high that no chairs can be seen to sit on. In place of a chair, a large, organic-looking, ginger-colored mushroom-like shape appears next to the table, sprouting up among the leaf litter. Looking closer at the scene outside, the streets are not only bustling but chaotically filled with shapes and cars sitting at odd angles, as if some emergency were unfolding. If AI images are not photorealistic and are more uncanny than alluring, how can we understand their social function?
Figure 1. A GenAI image created by the authors using the design tool Canva, with a prompt requesting a “cozy cafe in the fall with a table and coffee.”
AI Aesthetic Economies
Part of the increasing ubiquity of AI images on social media is a response to the incentives of platform economies. For example, Meta has a monetization policy that enables accounts to earn money on eligible “Facebook Pages, profiles in professional mode, Events and Groups” (Meta, 2024). Diresta and Goldstein (2024) note that AI images are one way page owners gain profit and attention. They also note that the algorithmically driven nature of Facebook’s feed means that users are likely to see these images, even if they do not follow the page itself. Diresta and Goldstein (2024) divide the pages using AI images into two categories: spammy and scammy. Spam pages use AI images to grow their follower count (sometimes inauthentically) and direct people to outside domains for the purposes of generating income. Scam pages were categorized as such if they deceived followers by “stealing, buying or exchanging Page control” (Diresta & Goldstein, 2024, p. 2), falsely claiming names or addresses, or deceiving customers into buying fake products. Whichever category the page fell into, AI images generated high amounts of engagement, with users who responded to these images appearing not to realize they were AI-generated (Diresta & Goldstein, 2024, p. 4), a theme also taken up by popular reporting (Bond, 2024; Klee, 2024). On platforms like Instagram, which do not straightforwardly monetize AI images and videos, they appear to function as a way to drive users to the account’s profile, which then, in a manner very similar to that described by Diresta and Goldstein (2024), links out to other websites.
This content, introduced above as “AI slop,” and its associated circulation practices demonstrate a function different from that of art. The aesthetic imaginary of AI slop does not connect us to a great “out there” of beauty (Scarry, 1999). It does not radically decenter us; rather, these AI images are the average of our aesthetic desires, encompassing neither the jarringly camp nor the transcendentally beautiful. Their function as fodder for content recommendation algorithms, seeking to direct attention towards profitable ends, results in an overall smoothness to the images that seeks to create a kind of copy of beauty—a familiar sense of beauty. Indeed, it is only through their errors, through their uncanny not-rightness, that AI art generates feeling (Figure 2).
Figure 2. Another GenAI image created by the authors using the design tool Canva, with a prompt requesting a “cozy cafe interior in the fall.”
Every AI image is the work of human, technical, and material factors, all with their own cost. While these images attempt to invoke natural beauty, the AI tools behind their creation are implicated in significant environmental harms. We can understand this as a form of alienation from our species-being, wherein capitalist production processes deprive us of what it means to be human (Marx, 1932). GenAI uses significant energy and water resources, especially when the life cycle of the technology is considered (Ligozat et al., 2022). The resources required to acquire, store, and analyze the training data add significantly to the carbon footprint of GenAI. These technologies also consume vast quantities of fresh water, which is used to cool data centers; this demand for water is an increasing concern, as GenAI may begin to compete with human beings for water (Ponce Del Castillo, 2024). While contributing to environmental degradation and threatening long-term human survival, these images offer perverse and uncanny forgeries in place of actual natural beauty.
Beyond the environmental concerns, the human labor GenAI requires is also largely hidden from its consumers. While the outputs of AI appear as instantaneous computational achievements, they are the result of considerable low-wage human labor, wherein workers in factory-like conditions “clean” training data and correct errors (Altenried, 2020; Pogrebna, 2024). These workers are alienated from the products of their labor, completing an essential part of a fragmented and complex process of production that may culminate in an image they have no relationship to and will never see. They are not recognized as artists, but without their labor there would be no end result for the person entering the prompt at the interface. Like Marx, we are concerned with the materialities of aesthetic alienation: our relationship to the environment, the material implications of AI use felt most acutely in the intensification of natural resource extraction, and the ways in which the aesthetics of GenAI solidify the power, control, and extractive logics of the tech monopolies that create them.
Conclusion
While it may be tempting to examine GenAI aesthetics only at the level of artistic merit (or lack thereof), AI “art” is a material issue that cannot be disentangled from structures of power and capital. AI images also alienate us from the material world: in directing the machine to show us the world as we wish it to be, or feel it ought to be, we elide its material impact. AI slop is worth considering in its own right, as it sits among the most commonly produced and consumed types of AI-generated images.
The logic of AI is what gives AI images their simultaneously affective and uncanny qualities. As Arielli argues, for AI, “ambiguity might be a problem to solve, in aesthetics, ambiguity is a feature. An artwork’s aesthetic richness often lies in its resistance to a singular interpretation, remaining open to multiple readings” (2024, p. 10). AI images are affective literalism, excessively adopting and reproducing signifiers of feeling. The GenAI images of “fall aesthetics” reveal an attempt to create the feeling of autumnal coziness by including as many symbols as possible—orange leaves, pumpkins, a rainy day, deciduous trees, woolen fabrics, coffee—all in a chaotic excess that cultivates a sense of the uncanny. Further underscoring the uncanny, GenAI image generators often struggle with the human, particularly hands. This struggle underscores Sekula’s argument that photography as a fine art relies on the iconography of the human body, particularly its hands. It is the “represented body, within the frame, [that] conjures up a recognition of the presence of two other bodies, that of the photographer and that of the spectator…this is a roll call from which no one is missing” (Sekula, 2014, p. 23).
But in GenAI images, it is hard to shake the sensation that something is, in fact, missing. Art is the response to beauty, as beauty, Scarry (1999, p. 3) argues, “bring copies of itself into being” through affect and human action. In beholding beauty, we seek to replicate it and bring new beauty into the world. In some ways, the descent into pure simulacrum (Baudrillard, 1994) through GenAI images is understandable, being endless repetitions of the beauty that moves us. In its repetition, AI is the rut maker, reproducing repetition without reference. The AI image is solipsistic in its orientation to the world and attempts to transcend the impermanence of beauty by at least partially automating its production.
Author Contributions
Each author contributed to the conceptualization, argumentation, and preparation of this manuscript.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Data Availability Statement
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
