Abstract
ChatGPT’s debut in 2022 heralded the entry of generative AI into mainstream public attention. This radical technology could do what no machine had done before: mimic humans’ complex linguistic abilities. The ghost had entered the machine. In their essay, Phillips, Kalvapalle, and Kennedy (2024) argue that one of the important aspects of generative AI is that it participates in the social construction of categories. Many other technologies also participate in the social construction of categories, yet this process often goes unnoticed. Why? We argue that the degree to which technologies are perceived to participate in the social construction process depends on three elements: the degree to which we anthropomorphize the technology, whether its affordances allow for easy interaction, and the vested interests of powerful stakeholders. We agree that humans and machines co-construct categories, but we argue that this process is itself socially constructed through an iterative process among participating stakeholders.
Introduction
Generative AI marks a new era of technological development. According to Phillips et al. (2024), generative AI is especially disruptive because it participates in the social construction of categories—the “partitions that group together objects perceived to be similar” (Zunino et al., 2019, p. 1). Categories form the basic lens through which humans and, by extension, organizations make sense of the world. To show generative AI’s potential in shaping categories, Phillips et al. (2024, p. 3) suggest that generative AI can pass what they term the “participation game”—“engag[ing] in framing, argumentation, and persuasion that parallels the underlying processes of category formation in social construction.” However, generative AI is not the only technology which participates in the social construction of categories.
Scholars have long argued that technology is involved in social construction processes (Leonardi, 2011; Leonardi & Barley, 2010; Orlikowski & Scott, 2008). Research on historical cases shows that new technology disrupts how people construe categorical boundaries (Basalla, 1988). For example, the invention of the automobile not only created a new category, but changed people’s perception of the existing “carriage” category (Grodal et al., 2015); over time, the carriage was prefixed as “horse-drawn” to distinguish it from the “horseless carriage,” or the automobile (Rao, 2004). What is puzzling, though, is that generative AI has received a tremendous amount of attention, whereas other technologies go nearly unnoticed. Take the buying and selling of stock, for example, which today is largely done through “quantitative trading.” These algorithms, set up to automatically buy and sell stock, were made possible by significant advances in computing power. Quantitative trading determines the ebbs and flows of a large share of global assets because it influences which stocks and sectors are perceived as booming or busting. By guiding how bankers evaluate and trade stock, quantitative trading participates in the social construction of the stock market (Beunza & Stark, 2004, 2012). While algorithmic management has received some attention from academic scholars, public discourse around how quantitative trading is shaping society at large, and categories in particular, is sparse. Many people do not even know that quantitative trading exists, let alone understand its outsized influence on the creation of wealth. There are few calls or proposals for controlling and regulating quantitative trading. These cases suggest that, even when a technology actively impacts the social construction of categories, this participation does not necessarily attract public attention.
In this essay, we argue that technologies’ involvement in the social co-construction of categories receives greater attention when a technology is perceived to challenge the human–machine boundary. When this happens, it sets off a recursive process in which the boundary between humans and machines is itself increasingly co-constructed. This recursive process can be dampened or reinforced by the technology’s affordances and by the interests of people in power, who will work either to limit or to increase control of the technology. We suggest that a new technology, such as generative AI, is more likely to be viewed as a participant in the social construction of categories when it is categorized as more human-like, when its affordances make it more accessible to a diverse set of stakeholders, and when powerful stakeholders have vested interests in promoting the technology. In contrast, these reactions will be attenuated when a new technology is seen as less human-like, when its affordances make it hard to access and use, and when powerful stakeholders have vested interests in hiding or protecting the technology. We thus argue that the degree to which we perceive generative AI to participate in the social construction of categories is itself socially constructed. Throughout this essay we develop a theoretical model (Figure 1), which depicts the iterative social construction process between humans and generative AI. Below we elaborate on the elements of this model and how it generalizes beyond the case of generative AI.

Figure 1. The perceived human–machine boundary and the social construction of categories.
Co-Constructing Generative AI: The Human–Machine Boundary, Affordances, and Power
Challenges to the human–machine boundary
The creation, change, and decline of categories influence how we understand the world, and subsequently how we react to everything around us, from economic markets to the organization of work (Hannan et al., 2007; Rosch, 1978; Zerubavel, 1997). Due to the interconnectivity of categories, the emergence of new categories can augment, shrink, and challenge existing categories (Boghossian & David, 2021; Murphy, 2004). One important categorical boundary that the emergence of new high-tech categories can disrupt is the boundary between the categories “human” and “machine.” Challenges to the human–machine boundary are important because this boundary helps define what it means to be human (and by implication, not machine) and is thus core to our identity.
The meanings of the categories “human” and “machine” have often been constructed in opposition to each other. Some of the central characteristics of what it means to be human have been defined as things that machines cannot do: talk, think, love, possess self-awareness, and have consciousness. In the 1950s, Turing proposed the “imitation game,” later called the “Turing test,” as a gauge for whether machines could pass as human, based on how well a machine can “use words (and, perhaps, to act)” the same way human beings do (Oppy & Dowe, 2003). Although the Turing test was long considered a central criterion for distinguishing machines from humans (French, 2000), scholars have begun to challenge this criterion because most current chatbots, such as ChatGPT and Bard, now pass the Turing test.
Along with being able to use words the way humans do, AI is also making inroads into other activities that, up until now, were the exclusive domain of humans. For instance, generative AI exhibits creativity as it generates complex images, audio, and even video. An AI-generated song which simulated the vocals of Drake and The Weeknd went viral, generating over 15 million views on TikTok, 275,000 views on YouTube, and over 600,000 streams on Spotify (Snapes, 2023). In this instance, generative AI breached the boundary between the categories “human” and “machine” because AI created a new song which sounded as if it had been performed by the human artists themselves.
More generally, Phillips et al. (2024) point out that the boundary between “human” and “machine” is more acutely challenged because machines, particularly generative AI, now participate in the social construction of categories. As they note, “while it is not the case that the technology can pass as a human exactly, it is certainly interacting at a level where something akin to human-to-human conversation is happening.” They further present an example where an “AI system convinced the engineer [who built it] that it was sentient” which, according to them, suggests the AI “‘passed’ as an interaction participant” (Phillips et al., 2024, p. 20).
We argue that this discussion—that generative AI may play an outsized role in the social construction of categories—occurs because generative AI, to a greater extent than other technologies around us, is perceived to disrupt the categorical boundary between “human” and “machine.” When non-human entities such as objects or animals behave in human-like ways, we tend to anthropomorphize them; in other words, we “imbue the real or imagined behavior of nonhuman agents with humanlike characteristics, motivations, intentions, or emotions” (Epley et al., 2007, p. 864). When we observe machines carrying out acts that our categorical understandings would suggest only humans are able to do, we anthropomorphize these machines by attributing human characteristics to them. Anthropomorphism operates along a spectrum; at one end, people use analogies to make inferences based on superficial similarity.
The consequences of challenging the human–machine boundary
Technologies that are perceived to challenge the human–machine boundary typically elicit stronger and more polarizing emotional reactions among stakeholders, many of whom call for strict regulation. Because generative AI has been perceived to breach this boundary, these kinds of reactions have been common and predictable. What is notable in this case is that the human-like capabilities of generative AI have spurred AI experts, CEOs, and engineers themselves to believe in AI’s potential to become truly dangerous (Roose & Newton, n.d.). In 2023, thousands of technology experts and AI developers signed an open letter calling for a pause on all generative AI development. They argue that “AI systems with human-competitive intelligence can pose profound risks to society and humanity . . .”
In contrast, people who see the human–machine boundary holding strong have received generative AI with enthusiasm, or at least indifference, particularly when it comes to the reorganization of work (Kellogg et al., 2020; Pakarinen & Huising, 2023). These scholars and experts acknowledge that many jobs may be completely replaced by generative AI; however, they see generative AI generally as “more augmentation rather than replacing workers” (Hamer, Vice President of Gartner, as quoted in Abril, 2023). The tasks and jobs AI can replace are not conceived of as distinctly human; rather, they are seen as ripe for automation. Furthermore, generative AI even creates opportunities for new jobs to emerge—jobs that will require, once more, uniquely human abilities:

In Accenture PLC’s global study of more than 1,000 large companies already using or testing AI and machine-learning systems, we identified the emergence of entire categories of new, uniquely human jobs. These roles are not replacing old ones. They are novel, requiring skills and training that have no precedents. (Wilson et al., 2017, p. 14)
Those who do not perceive AI to challenge the human–machine boundary are less likely to have a strong emotional reaction. Instead, it is when people perceive the boundaries between humans and machines as blurring that concerns and even panic over the technology manifest. Yet, even for those who panic, there is hope. Studies of other technologies suggest that, although a technology may initially be viewed as challenging the human–machine boundary, the ensuing uproar seldom holds over time. The panic that nanotechnology was going to destroy humanity because nanorobots would turn the world into “grey goo” (Drexler, 1986) has slowly subsided and given way to indifference (Grodal, 2018). Likewise, the introduction of computers also initially challenged the boundary between humans and machines. Before the modern computer, the ability to calculate and manipulate numbers was considered a distinctly human ability. The dominant metaphor that emerged, the computer as a “brain” (Bingham & Kahl, 2013; Kahl et al., 2016), exemplifies this belief. The predictions about the computer’s impact on society were very similar to generative AI’s predicted impact: the computer was going to replace human workers and disrupt society at large (Yates, 2005). When we sit at our computer today, it does not seem human-like at all. But it is not the computer that has veered away from human territory; if anything, the computer has acquired abilities to carry out ever more human-like tasks. Rather, it is our categorical understanding of what counts as distinctly human that has shifted.
The technology’s affordances
The degree to which a technology is perceived to breach the human–machine boundary is shaped by the technology’s affordances (Orlikowski & Scott, 2008). While nearly all technologies participate in the social construction of categories (Clark, 1985; Kaplan & Tripsas, 2008; Grodal et al., 2015), a technology’s affordances—that is, the potential pathways of action rooted in the features of the technology and their prescribed uses (Gibson, 1977)—may allow it to participate more or less actively in shaping categories. For example, developments in digitalization technology enabled new forms of publishing, sales, and consumption of books; this digitalization changed, both directly and indirectly, the way we categorize books (Orlikowski & Scott, 2023). Likewise, the technologies underlying the sharing economy, gig-work platforms, data analytics, and mobile devices had affordances that facilitated easy engagement and instant connectivity, consequently shaping the enormous impact these technologies have had on the reorganization of work and society. The extensive adoption of these technologies has not only challenged existing cultural categories such as “yellow taxi,” but has also shaped employment categories, with workers framed as “independent workers” rather than employees, thus limiting their protections under the law (Cornelissen & Cholakova, 2021). These examples highlight how our evolving perception of technology shapes how we define and redefine not just the category of the technology itself, but other categories as well.
The affordances of recent generative AI, characterized by the user-friendly interfaces of chatbots and image generation tools, allow the public to participate in its social construction and categorization. Some of these users actively comment on its capabilities (or lack thereof) and thus shape our collective understanding of the degree to which generative AI is participating in social construction processes. For example, one heated debate concerns the degree to which generative AI can take over the role of educators, with some arguing that there will be no need for teachers in the future, and others arguing that generative AI is more like an advanced calculator (Fong, 2023). In contrast, the affordances of the algorithms underlying quantitative trading are complex, hidden, and unavailable to the public. These affordances make it difficult for most stakeholders to access and understand the technology and to see how it shapes the social construction of categories, such as market sectors. The degree to which a technology’s affordances are accessible to the public is thus an important mechanism that can increase or decrease both the social construction of categories itself and the perception of the technology’s role in this process. The affordances of AI might allow it to have more far-reaching and disruptive effects on society than preceding technologies. However, stakeholders’ abilities and their vested interests in the diffusion of the technology will also play a role in this process.
Powerful stakeholders with vested interests
We know from categorization research that multiple stakeholders shape the social construction of categories (Granqvist et al., 2013; Kahl & Grodal, 2016; Kennedy, 2008; Navis & Glynn, 2010). In a classic example, Hargadon and Douglas (2001) show that when Edison introduced electric lighting to the world, he manipulated its design so that it seemed similar to the existing gas lighting system. However, not all stakeholders will have the same influence; powerful stakeholders with vested interests will have an outsized influence on this process (Grodal, 2018; Grodal & Kahl, 2017). AI developers attempt to manipulate whether generative AI appears to be “human” or “machine” by varying its features, including its training, interaction, and moderation mechanisms. Whether they attempt to make generative AI seem human depends on whether they perceive the technology’s breach of the human–machine boundary as beneficial to the sale of their products. Some organizations intentionally design their AI-powered products to distance them from the “human” category to ensure that generative AI products do not frighten the public. For example, Google veered away from human-like characteristics because “a chatbot that imitates humans comes off as eerie, rather than scientific and innovative” (Seetharaman & Wells, 2023). Similarly, “Altman said OpenAI explicitly decided to call its chatbot ‘ChatGPT’ and not a person’s name so people wouldn’t confuse the tool with a person” (Seetharaman & Wells, 2023, p. 1). In the case of high-frequency trading, institutional custodians successfully steered the perception of disruptive algorithms from a threat to an unproblematic part of society (Marti et al., 2024). In contrast, some organizations try to make their AI technologies appear more human-like when they believe that breaching the boundary will benefit the sale of their products.
Although organizations strategically frame and categorize their products in particular ways, not all organizations have the power to convince stakeholders of their viewpoints (Kaplan & Tripsas, 2008). As Hsu and Bechky (2024) illustrate with the case of the 2023 screenwriters’ strike, management and employees may have different views about the social impact and desirability of generative AI. Although extensive adoption of generative AI in screenwriting could increase the efficiency and quality of average work and lower entry barriers to the profession (Hsu & Bechky, 2024), the strong fear harbored by writers, together with their union power, spurred negotiations with management to create a “controlled” use of generative AI. Importantly, while control over generative AI is being actively contested and negotiated, control over other technologies often happens in silence, outside public view. For example, at the same time as newspapers were filled with reports of the writers’ strike, the White House released a new standards strategy for critical and emerging technologies (Page, 2023). However, despite technology standards setting the backbone for all technologies (Yates & Murphy, 2019), and thereby profoundly shaping the social construction of categories, this event received sparse public attention.
We suggest that, for generative AI to participate in the social construction of categories, people need to perceive it as crossing the human–machine boundary, to be able to interact with it through its affordances, and to see powerful stakeholders publicly promote and contest its meaning.
This essay raises a series of new questions for future research. Future studies can empirically document whether the perception that a technology challenges the human–machine boundary generates more audience attention. Scholars can also extend this by examining whether technologies that challenge the human–machine boundary impact market exchanges. While recent studies have shown that consumers generally react negatively when producers use technologies that challenge the human–machine boundary (Jago, 2019; Luo et al., 2019), future studies can investigate when and why crossing the human–machine boundary may be accepted (or even welcomed) by customers. Other boundaries, such as symbolic, social, and material boundaries (Grodal, 2018; Lamont & Molnar, 2002; Lawrence & Phillips, 2019), also shape the trajectory and reception of new (or past) technologies. For instance, the affordances of technologies are mutually shaped by producers, who envision their potential uses, and by users, who translate this potential into intended and unintended actions. Future research might examine how producers and users jointly negotiate technology boundaries in AI and other technological fields. Lastly, we call for a deeper examination of power dynamics during technological emergence (Grodal & Kahl, 2017). Power structures among stakeholders, users, and the media shape public discourse around a new technology, consequently enabling its widespread use or stifling its development. Scholars might examine the extent to which this happens around AI.
Conclusion
Generative AI has attracted an outsized amount of attention for its participation in social construction processes (Phillips et al., 2024). However, many other technologies with similar potential also participate in the social construction of categories, but such processes have often gone unnoticed. Why? We argue that a technology’s participation in the social construction of categories depends not only on its technical potential but also on people’s perceptions of the technology’s role. In particular, we argue that people’s beliefs about a technology’s role are shaped by how deeply the technology is perceived to challenge the human–machine categorical boundary. By anthropomorphizing generative AI, we begin to view it as an active and potentially threatening participant in the social construction of categories. This perception is further moderated by the affordances of the technology and the vested interests of powerful stakeholders. The affordances of generative AI make it easy for relevant stakeholders to interact with the technology. The vested interests of powerful stakeholders mean that stakeholders publicly debate and contest the meaning and significance of generative AI. In contrast, categorizing AI as a machine, regardless of its impressive generative potential, will render it as mundane and innocuous as the screen you are staring at. How we categorize generative AI shapes the extent to which we believe generative AI participates in the social construction of categories, as well as our reactions to it. We therefore argue that how vested stakeholders—including organizations, users, scholars, regulators, and the general public—categorize generative AI grants it the ability to participate in the social construction of categories. The more powerful these stakeholders are, the more influence they will have on the social construction process.
The human–machine boundary is always shifting, but how and where? In the future, will we be unfazed when generative AI creates texts, images, songs, videos, code, and more? Like so many other technologies that have come before it, will we come to accept that generative AI is a machine? Or will we continue to perceive it as closer to humans, truly becoming as disruptive and dangerous as many fear? Only time will tell.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
