Abstract
In this article, we introduce the term “conjuration of algorithms” to describe how the tech industry uses the language of magic to shape people’s perceptions of algorithms. We use the image of the magician as a metaphor for how the tech industry strategically deploys narrative devices to present their algorithms. After presenting a brief history of the Western European and North American understanding of stage magic, we apply three principles of magic to a recent case, OpenAI’s public discussion of ChatGPT, to show how tech leaders present algorithms as magical entities. We argue that the conjuration of algorithms allows the tech industry to forge vivid, overly positive, and deterministic narratives that make it challenging for their critics to call attention to the very real harms that algorithmic systems pose to users. We call for discourses of reality instead of magic, as a way to support responsible technology design, development, use, and governance.
Introduction
From managing social media news feeds through personalizing online shopping wish lists to artificial intelligence (AI) chatbots, algorithms now play a ubiquitous role in people’s everyday lives. They generate profits for the companies that design them, and they serve as mysterious black boxes for the customers who use them (Lomborg and Kapsch, 2020). An algorithm is an “abstract, formalized description of a computational procedure” (Dourish, 2016: 3). Algorithms are governed by codes, infrastructures, and data. They follow instructions and execute orders. And yet, when talking about what algorithms are and what they do in the world, people often refer to them as magical entities beyond human understanding that can seemingly break the normal physical rules of the world and achieve miraculous feats (e.g., Finn, 2018; Kidd and Birhane, 2023; Sharkey and Sharkey, 2006). Technology professionals are not immune from this conjuring of magic. For instance, when interviewing computer vision engineers, Thomas et al. (2018) found that “they spoke of false promises and lost opportunities when algorithms did not deliver, and of the magic and faith necessary when they did” (p. 2).
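To make the contrast concrete, consider what an algorithm in Dourish’s sense actually looks like: a formalized procedure that follows explicit instructions step by step. The routine below is a generic textbook example (our illustration, not drawn from any company’s system).

```python
def binary_search(items, target):
    """Return the index of target in sorted items, or -1 if absent.

    A textbook instance of an "abstract, formalized description of a
    computational procedure": explicit steps, no mystery.
    """
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # prints 4
```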
In this article, we aim to look at how the tech industry uses the language and images of magic to present algorithmic systems to the public and advance their own political goals. Magic and technology have long been intertwined. By linking the two deliberately in how they talk about algorithms, the tech industry tries to manipulate people’s perceptions of algorithms in order to deflect attention and calls for accountability away from their own already immense power. We use the word “magic” to refer to a particular set of practices that emerged as magic was institutionalized as a theatrical spectacle during the nineteenth century in Western Europe and the United States (Bell, 2009). At its core, magic is a unique and provocative art form that is concerned with producing unexpected and seemingly impossible outcomes. Magic tricks catch spectators by surprise, contradict the laws of nature, and produce dazzling and extraordinary effects (Camí and Martínez, 2022). Through various tricks and illusions, magicians entertain their audience and generate profit for themselves. We suggest that tech companies act like magicians when they present their algorithmic systems to the public. We are far from the first to make the link between magic and modern technology. As the anthropologist Gell (1988) noted, “The propagandists, image-makers and ideologues of technological culture are its magicians, and if they do not lay claim to supernatural powers, it is only because technology itself has become so powerful that they have no need to do so. And if we no longer recognize magic explicitly, it is because technology and magic, for us, are one and the same” (p. 9).
We extend this link by showing how tech companies engage in what we call the conjuration of algorithms—a strategically deployed narrative device that uses the principles of magic to manipulate the public perception of technologies. We argue that, like the principles of stage magic, the conjuration of algorithms serves three purposes for the technology sector: to conceal the design of technologies, create confusion around the capabilities of technologies, and produce dazzling effects of technologies. In the following, we briefly present the history and concept of magic to draw a parallel between what magicians do and what the tech industry does when presenting their algorithms to the public. We will use the case of OpenAI’s ChatGPT to demonstrate how the tech industry engages in the conjuration of algorithms and to argue that the concept of conjuration can help us better frame calls for accountability that are rooted in reality, not magic.
From the theater to the world: magic as a unique and provocative art form
Magic, as the professional magician Neale (2002) noted, is a unique art form. Magicians intentionally and artfully perform spectacles—they entertain and provoke their audience by creating elaborate mysteries through their performance. The origin of the entertainment magic industry can be traced back to 1845, when Jean-Eugène Robert-Houdin opened his Théâtre des Soirées Fantastiques in Paris (Jones, 2017). Robert-Houdin’s ideas and practices of magic eventually informed the so-called Golden Age of Magic (late 1800s to early 1900s), characterized by “the zeitgeist of technological progress and aesthetic innovation in a race for novel effects” (Jones, 2018: 146). Magic shows of this time already relied on various sophisticated—and often hidden—technological apparatuses that the magician on stage could use to manipulate devices, such as bird cages and person-holding cabinets (Smith, 2015). With the passing of the Golden Age of Magic, the theatrical version of magic gradually moved on to television specials and later to the computer screen (Waltz, 2018).
This theater-based definition of magic differs from how anthropologists think of magic: as a collection of practices and tools individuals utilize to achieve their goals and bring about particular outcomes via controlling the supernatural (Brown, 1986). Magic in this sense is about control—exploring the unknown, foreseeing the future, and manipulating the social and material world. This magical intention was lost as magic was disconnected from older Indigenous realities and traditions and transformed into a spectacle (Jones, 2018). It is this modern and Western notion of magic that we draw on here. Modern stagecraft reduces magic from the supernatural and the attempt to control nature to the performance of illusions, often involving misdirection, sleights of hand, trickery, or deception.
Whether they are produced in traditional or digital settings, magic tricks always have a double reality. When presenting a magic trick, the magician makes a distinction between the “external life” and the “internal life” of the effect (de Ascanio, 2005 [1964]). The “external life” of the magic trick is what the audience sees. In this phase, the magician may tell a story or carry out a demonstration. However, the “external life” of the trick merely serves as a facade that hides the magician’s deception by using materials and methods that the audience cannot see. This is what constitutes the “internal life” of the magic trick—a secret, parallel reality that the audience cannot see due to the use of various concealments, procedures, gadgets, technologies, techniques, and devices. These materials and methods are magicians’ closely guarded secrets and are often kept hidden in unique catalogs (Camí and Martínez, 2022).
Given the tension between the “external life” and the “internal life” of magic tricks, magic is participatory—the magician establishes a direct relationship with the audience members and invites them to take part in the performance (Neale, 2008). Magic tricks are presented with a logic and a naturalness that hardly seem suspicious to the audience. Everything is predictable until the surprising outcome that shatters the audience members’ expectations. This surprising outcome is the crucial part of the trick. It is notoriously difficult to master, given that it requires the magician to be able to challenge people’s capacity to infer and anticipate events (Smith, 2015). If executed well, magic tricks can disrupt the audience’s conscious perception of the world. The audience is captivated by the disparity between what they expect to happen at the end of these performances and what they finally witness happening (Camí and Martínez, 2022).
Magic, therefore, allows audiences to experience awe while challenging them to imagine alternative explanations for the magician’s tricks or even to attempt to figure out the secret behind the magic trick (Jones, 2010). As such, the stage magician’s success relies on misdirection so the audience does not notice how the trick was produced (Hass, 2008). If the misdirection is done successfully, the audience will be able to fully immerse themselves in the experience (Kuhn et al., 2008).
For misdirection to work, magicians need to follow certain principles described by the psychologist Macknik et al. (2008). First, according to the “an action is a motion that has a purpose” principle, magicians need to disguise the unnatural actions that their tricks require in order to reduce their audience’s suspicion. What makes this challenging is that unnatural actions need to seem natural—otherwise, the tricks will not work. For instance, if the magician wants to hide a small object in their hair (unnatural action), they may pretend to scratch their head while doing it. This way, they can distract their audience and turn the unnatural action (hiding something in their hair) into an innocent action (scratching their head).
Second, in misdirection, the “closing all the doors” principle builds upon the importance of repetition to confuse the audience and induce sensory illusions more successfully. Centuries ago, magicians learned that by manipulating the speed of their maneuvers they could make them seem invisible. Trained magicians are capable of executing tricks that are impossible to detect visually as they sneak or conceal cards, coins, balls, or other gadgets. Magicians can use repetition to hide the method behind the trick they are performing. For instance, when the audience sees the magician’s actions repeated, they probably assume that each repetition is done the same way. However, the magician can manipulate their audience’s perception by covertly changing their method in an unpredictable rhythm. As Macknik et al. (2008) noted, “in this way, the magician closes the door on every possible explanation for the trick, until the only remaining possibility is ‘magic’” (p. 877). However, magic tricks are not merely the products of the magician’s marvelous speed. According to the philosopher Max Dessoir (1893), the magician’s success stems from their mastery of the art of concealment. For him, magic is the art of convincing the spectators that there are no logical explanations for the wonders they have witnessed other than the ones that the magician provided.
Finally, according to the principle of “never doing the same tricks twice,” audiences become more likely to see through deceptions to identify the trick if magicians fail to convince them about the “newness” and “uniqueness” of their performance. As Smith (2015) pointed out, “This need to avoid repetition captures the contradiction at the heart of the modern style: exactingly designed and constructed performances, repeated with machinic reliability show after show, but each time appearing effortlessly of the moment” (p. 332).
Everything that happens around magicians when producing a magic trick affects how the audience members experience it and the emotion they feel after the conclusion of the performance (Camí and Martínez, 2022). As such, the success of the performance relies on the magician’s ability to produce dazzlement and manipulate their audience into believing that their tricks are the stuff of magic. That is, even when performing a familiar trick, the magician needs to make the audience believe that they in fact have never seen this trick done this way before.
Conjuring algorithms as a narrative device
Modern magic is a technically sophisticated and unique art form—a web of carefully crafted tricks and illusions intended to amaze and deceive the masses and generate profit (Bell, 2009). The conjuration of algorithms reflects this idea of stage magic. The conjuration of algorithms allows tech companies to create a “magical aura” around their technologies while distracting the public from being able to see through their illusions. The use of the principles of magic enables the tech industry to portray algorithms as omnipotent and captivating entities that possess various magical capabilities. As such, by conjuring algorithms, tech companies can shape people’s perceptions in a favorable way and avoid (or delay) criticism and accountability.
As a strategically deployed narrative device, the conjuration of algorithms allows tech companies to produce vague and often contradictory interpretative frames about their algorithmic systems. Because they are less coherent, the narratives produced by the conjuration of algorithms “invite” people to engage in imaginative cognitive processes. One reason why the conjuration of algorithms can promote imaginative cognitive processes, we would argue, is that it tends to provoke strong emotional responses in people. For instance, the tech industry borrows popular images of technologies informed by science-fiction stories and metaphors (Hermann, 2023) as well as technological myths (Natale and Ballatore, 2020). While these popular images are often based on dichotomies (e.g., good/bad and rational/irrational), they serve as important building blocks for the tech industry to conjure their algorithms. Because these popular images are vague, and therefore less coherent and specific, they can be used in a variety of situations to conjure algorithms in a positive light. Take the example of Amazon’s virtual assistant algorithm, Alexa. Amazon describes their voice-activated agent as someone who “lives in the cloud and is happy to help anywhere there’s Internet access and a device that can connect to Alexa” (Alexa Features). Here, Amazon conjures the image of the benevolent technology—a conveniently vague magical narrative around Alexa, strategically deployed to shape how people imagine and experience the algorithm’s capabilities.
It also matters who shares the narrative. When narratives are told by people with power and authority (e.g., CEOs, entrepreneurs, and celebrities), they gain more attention and recognition (Barron, 2015). Given the immense financial and economic power they hold over the world, the tech industry can relatively easily engage in the conjuration of algorithms to shape people’s perceptions of their technologies (and themselves) in a favorable way. For instance, through sensational press releases, the tech industry can portray themselves as neutral actors working tirelessly to uphold the ideals of authenticity and integrity (Petre et al., 2019). Tech giants, such as Amazon, Google, Meta, or Microsoft, have the resources and necessary infrastructure to craft narratives that can reach a great number of people in just a few seconds. Search engines—the world’s most popular ones are owned and operated by Google and Microsoft—already manipulate people’s perceptions of technologies given that they create and magnify biased representations of the tech industry itself (e.g., Gezici et al., 2021; Papakyriakopoulos and Mboya, 2022). Similarly to the most successful magicians in the Golden Age of Magic, tech giants can be viewed as modern-day celebrity conjurers whose voice and message are more likely to reach the masses given their power and influence.
As a narrative device, the conjuration of algorithms also acts as a cognitive filter, highlighting certain events and themes (Harding et al., 2017). Narratives do not simply describe what happened in the past but actively shape people’s perceptions of the world, orienting them on what information to pay attention to and what to ignore (Rappaport, 1993). When conjuring algorithms, tech companies act just like magicians—their purpose is to prevent their audience, or in this case, the user from seeing through their intricate deceptions. Consequently, the tech industry magnifies the potential benefits of their algorithms while downplaying the potential harm they may cause to users.
By building upon as well as extending the three principles of magic, we suggest that the conjuration of algorithms serves three purposes for the tech industry—concealing the design of technologies, creating confusion around the capabilities of technologies, and producing dazzling effects of technologies.
Concealing the design of algorithms
The first principle of magic is about concealment. As such, for the conjuration of algorithms to work, tech companies need to be able to hide from the public what their technologies actually are and what they can do. From simple search engines like Google Search through face filters developed by Meta or TikTok to Amazon’s Alexa, algorithms seemingly “help” achieve users’ goals (e.g., finding a good restaurant and enhancing a profile picture), but in reality, these technologies serve as effective tools for the tech industry to collect data and feed them into their algorithmic systems, either for the purposes of what Zuboff (2023) calls “surveillance capitalism” or, in the case of the large language models (LLMs) we discuss below, to train them to produce more accurate outputs. Algorithms, as Sarah Lamdan (2023) put it, enable companies to operate as data cartels who hide their immense informational and social power “by maintaining each of their product lines in separate silos, and by obscuring what their data products do by giving them vague names like ‘special services’ and ‘risk solutions’” (p. 127). By concealing the design of their algorithms, companies can act like magicians, hiding important information about how their technologies actually work so that the public cannot see through the illusion.
Creating confusion around the capabilities of algorithms
The second principle of magic highlights the importance of confusion. It is not enough for the tech industry to hide information about their algorithms; it is also important to create confusion—so the only explanation left for the public to imagine how these technologies work is “magic.” At its core, magic is about misdirection. By creating confusion around the capabilities of their algorithms, the tech industry can generate a unique form of illusion, also known as illusory correlation (Kuhn et al., 2008). Arising when some events capture more attention than others, illusory correlation describes people’s tendency to see connections between things even when they are not related at all (Hamilton and Gifford, 1976). Highly successful magicians can use a variety of misdirection techniques to draw illusory correlations between two unrelated events. Similarly, the tech industry can conjure algorithms to make people believe that their technologies possess “superhuman” and “miraculous” attributes by withholding crucial information that could “demystify” their technologies, such as what methods and resources the algorithm uses to execute orders (Elish and boyd, 2018).
Producing dazzling effects of algorithms
The third principle of magic refers to the importance of producing dazzlement so the audience may not get the opportunity to see through the deception. By producing dazzlement, the tech industry evokes futurity—they portray their algorithms as an infallible panacea that will miraculously “fix” all the problems of the world (e.g., “predicting” future events with great accuracy and “choosing” the right candidates for job positions). When focusing on the dazzling effects of algorithms, the tech industry constructs narratives that highlight the “newness” and “disruptive nature” of technologies (Balbi, 2015). Their inventions, as the carefully crafted corporate narratives go, are novel entities with no connection to the past and therefore may not be critiqued based on the potential shortcomings of their predecessors. By highlighting the dazzling effects of algorithms, the tech industry conjures the image of a futuristic utopian world in which technology works perfectly in line with people’s expectations.
To highlight how the tech industry uses the three principles of magic to conjure algorithms, we turn to the example of OpenAI’s ChatGPT. Below, drawing on news coverage, scientific reports, and OpenAI’s press releases and public announcements, we illustrate how the company engaged in the conjuration of algorithms to conceal the design of, create confusion around, and produce dazzling effects of ChatGPT.
Exploring how OpenAI conjures ChatGPT
Since its release on November 30, 2022, the AI chatbot ChatGPT has captivated the public imagination with the promise that it will radically transform how people work, learn, and have fun (Heaven, 2023a). ChatGPT accumulated 100 million users just 2 months after launch (Hu, 2023). OpenAI—the company behind ChatGPT—has become “the talk of Silicon Valley” and “a magnet for investors,” expecting $200 million in revenue in 2023 and $1 billion by 2024 (Dastin et al., 2022).
The allure of ChatGPT is the promise that it can accomplish virtually everything—if not now, then in the future. The success and longevity of OpenAI, we would argue, also rest on how masterfully they can conjure ChatGPT to avoid scrutiny and make the public believe in the company’s vision.
Concealing the design of ChatGPT
On their website, OpenAI (2023a) discusses how ChatGPT works. First, they “pre-train” their LLMs, or deep learning algorithms that are trained on data from a wide range of sources, such as books, articles, and web pages. These LLMs allow ChatGPT to complete sentences provided that the user gives appropriate and specific prompts. Next, OpenAI “fine-tunes” their chatbot on a curated dataset supervised and managed by human reviewers. The reviewers’ feedback allegedly enables ChatGPT to provide more accurate and less biased information for users. From these descriptions, it is apparent that ChatGPT is not a final product but a continuously changing proof of concept built on LLMs created and maintained by OpenAI. Through each iteration, ChatGPT becomes an increasingly complex tool allegedly capable of producing more reliable results (Perkel, 2023). For instance, before its public release in late 2022, ChatGPT was trained with GPT-3.5 (Generative Pre-trained Transformer 3.5), which allowed it to hold more “human-like” conversations, generate articles and stories about various topics, and summarize texts or answer questions about them (Heaven, 2023b). In this sense, GPT-3.5 was a more robust version of GPT-3, the model OpenAI introduced in 2020 (Heaven, 2021).
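OpenAI’s actual pipeline is undisclosed, but the two-stage logic the company describes can be illustrated with a deliberately toy sketch. The Python snippet below is our hypothetical illustration, not OpenAI’s code: it reduces “pre-training” to estimating next-token frequencies from a broad corpus, and “fine-tuning” to re-weighting those frequencies with curated, human-reviewed examples (the FEEDBACK_WEIGHT constant is an invented stand-in for reviewer feedback).

```python
from collections import Counter, defaultdict

def count_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

# "Pre-training": next-token statistics estimated from a broad corpus.
web_corpus = [
    "the model is magic",
    "the model is software",
    "the model is magic and mystery",
]
counts = count_bigrams(web_corpus)

# "Fine-tuning": curated, human-reviewed examples are added with extra
# weight, shifting the distribution toward preferred continuations.
# FEEDBACK_WEIGHT is an invented stand-in for reviewer feedback.
curated_examples = ["the model is software"]
FEEDBACK_WEIGHT = 5
for sentence in curated_examples:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += FEEDBACK_WEIGHT

# Before fine-tuning, "magic" dominated after "is"; afterward it does not.
total = sum(counts["is"].values())
for nxt, c in sorted(counts["is"].items()):
    print(f"P({nxt!r} | 'is') = {c / total:.2f}")
```

Real LLMs replace these counts with billions of learned parameters, and human feedback enters through far more elaborate procedures; the point is only that both stages amount to adjusting a probability distribution over next tokens, nothing more mysterious.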
OpenAI made yet another improvement to ChatGPT when it released GPT-4—a multimodal LLM capable of responding to both text and images—on March 14, 2023. Unlike its previous iterations, however, GPT-4 can at present only be accessed via the paid version of ChatGPT (called ChatGPT Plus). On their website, OpenAI (2023b) described GPT-4 as an algorithm that “while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks” (paragraph 1). Although for many simple tasks the difference between the GPT-4 and GPT-3.5 models is not significant, GPT-4 supposedly outperforms previous models in more complex reasoning situations (OpenAI API, 2023). However, OpenAI has not yet revealed any specific information about what data and training techniques they used to create GPT-4 (Heaven, 2023b).
When it comes to the design of ChatGPT, OpenAI promotes transparency and responsibility in their public announcements and press releases. However, just like magicians who use the “external life” of their tricks to conceal the “internal life” of their performance, OpenAI creates an overly positive narrative around ChatGPT in order to distract the public from how they train and manage their LLMs, GPT-3.5 and GPT-4. As such, the conjuration of algorithms is also about manipulating people’s algorithmic imaginary—“the way in which people imagine, perceive and experience algorithms and what these imaginations make possible” (Bucher, 2017: 31). This algorithmic imaginary, however, does not only involve the user but also the processes of the technological system itself (see also Schulz, 2023). That is, ChatGPT’s algorithm “imagines” users’ future behavior via its LLMs, which are supposed to predict users’ next actions; at the same time, OpenAI anticipates the range of actions users might perform through the datasets it feeds into those LLMs.
While readily acknowledging the potential shortcomings and biases of their dataset, OpenAI is much less transparent about where they actually get the data from and how they supervise it. They often use vague terms, like “publicly available corpus of web pages,” to describe what data they use to design ChatGPT. Recent news coverage, however, paints a different picture of the company’s practices. For instance, several lawsuits accused OpenAI of “secret web-scraping” by claiming the company harvested and used copyrighted materials (Creamer, 2023) and sensitive personal data (Thorbecke, 2023) to train the LLMs operating ChatGPT. Similarly, a recent investigation by the US-based news magazine TIME revealed that what OpenAI actually means by “supervised training” is hiring outsourced Kenyan workers—who are compensated with less than $2 per hour for their work—to manage and curate ChatGPT (Perrigo, 2023). The article also quotes the AI ethicist Andrew Strait, who noted, “ChatGPT and other generative models are not magic—they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent. These are serious, foundational problems that I do not see OpenAI addressing” (paragraph 27).
By concealing the design of ChatGPT, OpenAI acts like a magician attempting to hide the techniques of the illusion from the audience: in this case, hiding who does the work of ensuring that the model fits.
Creating confusion around the capabilities of ChatGPT
When it comes to the potential capabilities of their chatbot, OpenAI CEO Sam Altman described ChatGPT as a “co-pilot”—“someone” who can help users solve problems, compose a wide range of texts, and even write computer code (Ordonez et al., 2023). For Altman, ChatGPT is less a very large collection of text parsed from the Internet than a reasoning engine that can exhibit reasoning abilities. His arguments—seemingly—are backed up by research. For example, a recent study suggested that the most up-to-date version of ChatGPT can solve challenging and complex tasks that span various disciplines, without needing any special prompting (Bubeck et al., 2023). Reports of ChatGPT’s performance highlight its skill at university exams (e.g., those of the MIT Mathematics and Electrical Engineering and Computer Science (EECS) curricula) (Zhang et al., 2023), and representatives of OpenAI (2023c) argued that the new ChatGPT can pass a simulated bar exam. According to OpenAI (2023b), the most recent version of ChatGPT already performs close to human level on various professional and academic benchmarks. However, these studies have been challenged, at times withdrawn, and replication is difficult given that the tech industry rarely shares any specific details about the methods and code used in their studies.
OpenAI creates confusion around the capabilities of ChatGPT by propagating the narrative that the chatbot can act like a “co-pilot” that solves almost all problems with great accuracy, while seemingly contradicting itself on their official website. That is, under the question “Why does the AI seem so real and lifelike?” on the OpenAI website, the answer reads as follows: “These models were trained on vast amounts of data from the internet written by humans, including conversations, so the responses it provides may sound human-like. It is important to keep in mind that this is a direct result of the system’s design (i.e. maximizing the similarity between outputs and the dataset the models were trained on) and that such outputs may be inaccurate, untruthful, and otherwise misleading at times” (paragraph 3).
In parallel with acknowledging the limitations of ChatGPT’s design, however, OpenAI also promotes the idea that their chatbot acts like “a reasoning co-pilot” with close to human-level performance. This contradiction exemplifies one of the fundamental characteristics of magic—subverting expectations and creating confusion to make people question their understanding of the world. Contrary to OpenAI’s narrative, their LLMs do not merely produce outputs that are inaccurate at times; these systems are fundamentally flawed by design (Bender et al., 2021). That is, GPT-4 still produces biased, factually inaccurate, and harmful text due to its design (Heaven, 2023b). In some cases, ChatGPT cannot interpret the question; at other times, it confidently provides an incorrect answer (Perkel, 2023). When generated code fails to run, such problems are obvious. Sometimes, however, the code runs but generates the wrong output or a “made up” answer—a phenomenon referred to as “hallucination” (Alkaissi and McFarlane, 2023). When it comes to tasks such as solving math problems, answering sensitive or dangerous questions, generating code, and visual reasoning, the performance of both GPT-3.5 and GPT-4 varies significantly, and some have noticed that it becomes substantially worse over time (Chen et al., 2023).
When discussing the capabilities of ChatGPT, OpenAI conjures confusion and further stabilizes the already strong anthropomorphic perception of technologies (see also Ziewitz, 2016). Sometimes OpenAI refers to ChatGPT as a tool; sometimes they refer to it as “someone” with human-like characteristics, such as intent, strategy, and creativity, even though in reality groups of engineers and designers create these systems to solve specific tasks. OpenAI CEO Sam Altman’s international tour to meet with parliaments only underscored this process of conjuring ChatGPT as an uncontrollable entity with its own agency.
Text generated by ChatGPT or other LLMs is not grounded in communicative intent or any model of the world. ChatGPT is an algorithmic system designed to use probabilistic information to stitch together sequences of linguistic forms from large training data, but without any reference to meaning. Bender et al. (2021) call such technologies stochastic parrots. A stochastic parrot acts like a human not because it is one, but because people perceive stochastically generated, parroted text that way due to their tendency to imbue technologies with human-like characteristics. Given that some of Bender’s co-authors were dismissed from their positions working in technology ethics at Google for what they say was sounding the alarm about this flaw of LLMs, the stakes for the power of conjuring algorithms are high (Hao, 2021).
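A toy sketch can make the “stochastic parrot” point concrete. The generator below, our illustration rather than Bender et al.’s or OpenAI’s code, stitches together word sequences purely from the conditional frequencies of its training text; it has no model of the world and no communicative intent, yet its output can read as fluent.

```python
import random
from collections import Counter, defaultdict

random.seed(42)  # fixed seed so the illustration is reproducible

training_text = ("the parrot repeats the words it has heard "
                 "the parrot has no idea what the words mean")

# Learn conditional frequencies: which word tends to follow which.
chain = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    chain[prev][nxt] += 1

# Generate by repeatedly sampling the next word from those frequencies.
# There is no grammar, no world model, and no intent, only probabilities.
word, output = "the", ["the"]
for _ in range(10):
    followers = chain.get(word)
    if not followers:  # dead end: this word never appeared mid-sequence
        break
    word = random.choices(list(followers), weights=followers.values())[0]
    output.append(word)

print(" ".join(output))  # fluent-looking, but a meaningless recombination
```

Scaled up by many orders of magnitude, this is the sense in which LLM output is parroted: a sampling procedure over learned frequencies, not a speaker with something to say.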
Producing dazzling effects of ChatGPT
When it comes to the potential effects of ChatGPT, OpenAI yet again seems to create and share conflicting narratives about the social and technical impact of their chatbot. While they highlight the potential shortcomings and biases of their algorithmic system in the present, OpenAI also promotes the idea that ChatGPT will become increasingly sophisticated and serve as a vehicle of change in the future. According to this narrative, ChatGPT and the continuously evolving LLMs it is based on will somehow magically change how people work, have fun, and exercise their creativity (Heaven, 2023a). The CEO of OpenAI, Sam Altman, even noted in an interview that ChatGPT is just one step toward his company’s goal to eventually build artificial general intelligence (AGI)—an AI system that can match or exceed human performance (Ordonez et al., 2023). Given the design and capabilities of GPT-4, some computer scientists from Microsoft Research have already suggested that the most recent version of ChatGPT should be viewed as an early but incomplete version of AGI (Bubeck et al., 2023).
By portraying ChatGPT and their LLMs as continuously evolving entities, OpenAI acts like a magician producing magic tricks and dazzling effects that captivate the public. The performance of ChatGPT may look like the “dazzling magic” of algorithms, but it is based on work that treats language as predictable, the byproducts of stochastic parrots (Bender et al., 2021), and is the product of the labor of content moderation and labor automation, not intelligence per se (Burrell and Fourcade, 2021). The idea that ChatGPT has the evolutionary power to become AGI is pure fantasy, but it is carefully presented as the result of the magic of technological advancement. By discussing the dazzling effects of a future ChatGPT, OpenAI attempts to distract from the real dangers ChatGPT and other LLMs pose to people now. Various studies have already demonstrated that ChatGPT can produce a wide range of harmful and racist content (e.g., Deshpande et al., 2023; Zhuo et al., 2023). As Kidd and Birhane (2023) put it, “Unrealistic and exaggerated capabilities permeate how generative AI models are presented, which contributes to the popular misconception that these models exceed human-level reasoning and exacerbates the risk of transmission of false information and negative stereotypes to people” (p. 1222).
To put it simply, the simplistic and yet alluring “magical narratives” of algorithms are inherently seductive and highly effective tools in shaping how people imagine technologies of the future while distracting them from the potential negative effects of technologies in the present (Healey and Woods, 2017).
By highlighting the dazzling effects of ChatGPT, OpenAI promotes a particular discourse called “enchanted determinism” (Campolo and Crawford, 2020). By emphasizing the “superhuman” effects of ChatGPT, the company uses various exaggerated technological calculations and predictions to describe their algorithm’s “mysterious” mechanisms and “magical” effects. In doing this, OpenAI conjures the image of a techno-utopian future to distract people from calling for accountability today. For instance, OpenAI, along with other tech giants such as Amazon, Anthropic, Google, Inflection, Meta, and Microsoft, “voluntarily” promised the White House (Kelly, 2023) and European Union officials (Barr, 2023) that they would develop technologies responsibly to mitigate the risks their algorithmic systems may pose to society. However, while publicly calling for stricter regulations, OpenAI’s CEO, Sam Altman, secretly started a lobbying operation (Deutsch, 2023) to avoid regulations that would hurt his company’s financial future (Weatherbed, 2023). We see what magicians do: creating dazzling effects to conceal their real intent.
Conclusions
As a framework, the conjuration of algorithms can help communication and media researchers reflexively account for how narrative devices are being strategically deployed and transmitted by the tech industry. The frame of conjuring algorithms helps bring attention to two main problems scholars and policymakers face when studying the tech industry.
First, the concept of the conjuration of algorithms resonates with what prominent technology researchers have already noted—the importance of studying the politics of algorithms (Sandvig et al., 2016). Algorithmic systems are particularly valuable tools for companies that already hold immense economic, technological, and political power. While the tech industry is often heavily criticized for its potential to perpetuate harm to historically marginalized communities (e.g., Birhane, 2021), violate citizens’ privacy rights (e.g., Gebru et al., 2017), or exacerbate existing racial discriminatory practices (e.g., Benjamin, 2019), the “trick” of conjuring algorithms helps shield these companies from accountability. They rarely have to face consequences for the immense harm they cause to users, especially those who are historically marginalized.
Most algorithms that we have discussed here are built to serve the needs of those who already have the most privilege in society. The conjuration of algorithms reflects capitalism’s logic: providing new financial opportunities and reproducing beneficial relations for powerful tech companies. Too often, industry stakeholders portray their proofs of concept as if they were already an actual service by using imaginary projections for predicting the future (Tsing, 2000). Similarly to other technological futures (e.g., smart cars and service robots), chatbot futures are enmeshed with predicted sales. Because the goal of tech companies is to sell their products and services, stakeholders alter the details and exaggerate predicted numbers in support of marketing claims (Ruckenstein and Trifuljesko, 2022). As the case of OpenAI’s ChatGPT shows, the tech industry—which already has access to resources and infrastructure that their critics do not—conjures algorithms to limit users’ imagination and create distraction, confusion, and dazzlement. Representatives of the tech industry deliberately invoke the principles of magic to make it more challenging for their critics to counter their arguments. Critical voices of technologies are often ignored and dismissed (Mohamed et al., 2020). The skills of the magician, the resources they have, and the techniques of their magic make it difficult for skeptics to see through and counter their deceptions and illusions. We argue the same is true for tech companies.
Second, by holding a powerful role in shaping how people imagine what technology can and cannot do, the conjuration of algorithms can lead to the formation of sociotechnical imaginaries. Originating from science and technology studies (STS), sociotechnical imaginaries are based on society’s collective beliefs about the potential implications of technologies for their present and future lives (Sadowski and Bendor, 2019). Sociotechnical imaginaries encompass what Jasanoff (2015) calls “collectively held and performed visions of desirable futures” that are “animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology” (p. 19). The field of STS recognizes that history and culture shape the implementation and use of existing and new technologies (Vicente and Dias-Trindade, 2021). When the tech industry conjures algorithms, they contribute to misinformed debate about algorithms, with potentially significant consequences for technology research, funding, regulation, and reception. False fears and wishful thinking encapsulated by the conjuration of algorithms may also lead to missed opportunities through failure to imagine and discuss the actual benefits and risks algorithmic systems pose to individuals and society.
When conjuring algorithms, the tech industry portrays their technological inventions as neutral entities with agency and power that are less susceptible to human biases and errors and, therefore, more useful in solving problems (Gillespie, 2014). Technological inventions, according to this view, are deterministic forces that are inevitable products of technological progress (Crawford, 2016). However, these systems are built upon hidden human labor and vast amounts of data—two critical issues that the tech industry either does not address publicly at all or shares only in vague, ambiguous, and confusing narratives. Stage magicians conceal the design of their tricks and create confusion around their capabilities in order to produce dazzling effects. Tech companies rely on the same playbook to present algorithms to the world, making it even more difficult to hold their already immense financial, political, cultural, and social power to account. Working toward responsible technology will rely not on appeals to magical technologies but on public discourse anchored in reality, building capacity for accountability. That involves bringing the public in on the so-called illusion through greater education, expanded digital literacy, and increased clarity on the appropriate roles, functions, and regulations of algorithms in everyday life.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
