Abstract
Artificial intelligence in its myriad forms is taking the world by storm. In this Provocation Essay, I reflect on my learning in studying human-artificial intelligence interaction. I embarked upon the journey as an organization and management scholar who was clueless about technology but mustered the courage to face artificial intelligence. I learned to approach it as a moving target and slippery subject of inquiry. Viewing artificial intelligence as magic orchestrated by magicians and driven by future-oriented discourse helped me make sense of what is done and sold in its name, how, and why. Based on my autoethnography, I propose a perspective for studying human-artificial intelligence interaction that is grounded in what organization and management scholars can do: critical reflexive inquiry that is open to collaborating with scholars and practitioners with different insights into artificial intelligence and committed to keeping arguments open-ended and leaving room for readers’ imagination. In doing so, we can speak up for humans in human-artificial intelligence interaction.
Introduction
Oh crap. This was my first thought when I found out in December 2022 that we had received funding for a research project on managing people and artificial intelligence (AI). I knew about people and management, but not much about AI. OpenAI had released ChatGPT (Chat Generative Pre-trained Transformer) just days earlier, and I had no clue about the new technologies that AI solutions are based on. Yet, technologies advance rapidly, and they insidiously steer our lives. This became evident as our research proceeded and AI, and generative AI in particular, took the world by storm. Generative AI tools such as ChatGPT are trained on massive data sets and, when prompted, they come up with new content such as text, images, or audio. Computer scientists Arvind Narayanan and Sayash Kapoor (2024) note that generative AI “often feels surreal, and the future of AI will no doubt be even weirder” (p. 177). Unlike previous transformative technological innovations such as mobile phones or the Internet, AI is not a communication tool or global network. It has autonomy and intelligence that are not present in traditional computing or communication technologies.
To paraphrase the science fiction writer Arthur C. Clarke (1984), it was all magic to me. I was awed by the magic of AI technologies. I was also disarmed and a little fearful when confronted with their disruptive potential in my professional and private life. I was skeptical about technology hype, and mindful of the gendered, racialized, and other biases that coders code into their algorithms. I was also excited. I mustered the courage to learn about AI, its impact on workplaces and organizations, and how humans and AI interact.
Reflecting on my experiences and learning, I am writing this Provocation Essay as an autoethnography from my position as a technologically clueless organization and management scholar. I am a white cisgender senior male professor, and I have always had an uneasy relationship with technologies. I am skeptical about the proliferation of all things technological in academic work, and my courage is equipped with a healthy dose of critical thinking. I consider myself a critical scholar in that I engage with critical theories and theorizing and seek to challenge taken-for-granted assumptions and established “truths” and make the foundations and practices of power and inequalities visible.
This autoethnography goes far beyond our research project to comprise my learning more generally. While our field is filled with authoritative voices on AI, my approach is modest and my writing personal and polemic. Autoethnography enables this. It is reflexive research grounded in personal experience (Ellis, 2007; Ellis et al., 2011) that helps illuminate social and cultural phenomena that would be difficult to capture otherwise (McDonald, 2016). Engaging with analytical autoethnography, I aim to theorize my experiences and point to connections with others’ experiences and contributions (Anderson, 2006). My provocation is grounded in reflection and informed by critical studies on management learning. I hope to help others find new ways to think about, and study, human-AI interaction (cf., Behar, 1996).
In December 2022, I began to keep a diary of what I learned in our research and in what I read, discussed with others, and experienced in different events. I was certain that something that I could not foresee was bound to happen. I wanted to learn to study AI, something I was not fully comfortable with, and I was certain that others were experiencing something like this. I returned to this text regularly as I learned more about AI and human-AI interaction. I found myself reflecting on a moving target.
AI proved to be difficult to pin down as a subject of inquiry, and I began to view it as magic. This is not as strange as it sounds. Drawing on a rich tradition of studying new technologies with a magic lens, I looked at AI as illusion, rather than supernatural, although the distinction is difficult to make in practice (see e.g. Gell, 1988; Kuhn et al., 2008; Leaver and Srdarov, 2023; Obadia, 2022; Sharkey and Sharkey, 2006). I began to understand AI magic as appearing to perform supernatural feats. I also realized that magic requires active participation by its audience—us—to be convincing and successful. My attention turned to AI as magic orchestrated by magicians and driven by future-oriented discourse. This helped me to make sense of what is done and sold in its name, how, and why—and to think about human-AI interaction in new ways.
This essay recounts my learning about AI and human-AI interaction. When I finished the first version in July 2024, I decided to keep the question mark in the main title. Generative AI proliferated after our project kickoff, and the context of our research seemed to change radically. This led me to reflect on my assumptions about humans and technologies and to think differently about studying human-AI interaction. I received constructive critical feedback on the first version of this essay from the handling Editor and reviewers, and with their guidance I continued my learning journey. The idea of viewing AI as magic developed—and it became more critically oriented. The feedback continued to be constructively critical in the second round, and I sharpened my focus on AI as magic.
In this Provocation Essay, I invite you to join me in learning about the magic of AI and in creating new ways to think about and study human-AI interaction. I propose that this is grounded in critical reflexive inquiry; open to collaborating with scholars and practitioners with different insights into AI; and committed to keeping arguments open-ended and leaving room for readers’ imagination. It is about speaking up for humans in human-AI interaction.
Next, I recount how I got acquainted with AI and human-AI interaction as a subject of inquiry. I then reflect on my experiences and learning, develop the AI as magic lens, and elaborate on my provocation.
This time everything will change!
AI is a prime example of how new technologies are sold to us as transformative or disruptive (Fleming, 2019). While previous tech hype cycles have ended in some form of disappointment, we are persuaded to believe that this time everything will be different—that we really are living a transformation where AI technologies will disrupt just about everything in our lives (Vesa and Tienari, 2022). Narayanan and Kapoor (2024) acknowledge that “consumer-facing AI has finally, after many, many decades, crossed the threshold of usefulness” (p. 166). However, they warn us against “AI snake oil,” referring to “AI that does not and cannot work as advertised” (p. 2). Their message is that AI is exciting and very often useful, but we must approach it with caution.
AI hype shifted into high gear in December 2022, just days before we kicked off our research project. “AI bot ChatGPT stuns academics with essay-writing skills and usability,” Alex Hern of The Guardian wrote after it was first introduced to the public (December 4, 2022). “Latest chatbot from Elon Musk-founded OpenAI can identify incorrect premises and refuse to answer inappropriate requests.” Now this was impressive. And it came with a warning. “Professors, programmers and journalists could all be out of a job in just a few years,” Hern pondered.
After I started writing my learning diary in December 2022, it seemed that the world was being transformed by the inescapable advancement of all sorts of AI-related tools and technologies. AI was constantly in the headlines. I was puzzled as the context of my learning seemed to change fast and furiously. It was a strange and somewhat paradoxical feeling: being anxious, frustrated, and even scared on the one hand, and curious, excited, and inspired, on the other. I began to think about what makes human-AI interaction so special as a subject of inquiry.
What is AI?
My early diary notes remind me how I asked myself some basic questions. For example, what is AI? Put simply, I learned, it is intelligence (as in perceiving, synthesizing, and inferring information) demonstrated by machines. AI models are created by using algorithms and training data, and they learn from experience. I also learned that AI should be thought of in the plural. There is not one AI but an endless number of technologies, tools, and applications. These are computer systems that can perform tasks traditionally understood to require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
I learned that algorithms as procedures for solving problems or performing computations are part of AI. They are finite sequences of rigorous instructions coded by humans and recursive mathematical functions that operate upon themselves (Totaro and Ninno, 2014). Algorithms are simplifications, however, as they are based on choices about what is important enough to include and code in, and what can be excluded and left out (O’Neil, 2015). Algorithms are not free from biases but have learned to automate them efficiently (Maaranen et al., 2022). They help AI tools to “automate bullshit,” as Narayanan and Kapoor (2024) put it, following philosopher Harry Frankfurt (2005) who defined bullshit as speech that is intended to persuade without regard for the truth.
A management consultant friend explained that while experts can, in principle, track down, or reverse engineer, algorithms to reveal what they are made of, many AI technologies with their constant ability to learn from experience have remained more of a “black box.” This depends on the type of AI and how it is built. AI models are non-linear and probabilistic, and while their inputs and outputs may be clear, it is often difficult to understand how they make decisions. Generative AI is an example. I learned that ChatGPT is a type of large language model (LLM) that responds to user prompts with images, texts, or videos created by AI, simulating conversational interactions. It is based on sophisticated machine learning algorithms that have been trained on large data sets published on the Internet (Jiang and Hyland, 2025). When we prompt them, generative AI tools such as ChatGPT respond in apparently smart and sometimes surprising ways.
What to study about AI?
Reading up on research, I learned early on that AI is ubiquitous as a subject of inquiry. Scholars are instructed to zoom in on specific phenomena related to data, algorithms, and decisions and solutions (cf., von Krogh, 2018). AI enables machines to learn and act autonomously (Balasubramanian et al., 2022) and to interact with humans in decision-making and problem-solving (Murray et al., 2021). AI technologies affect organizations in new ways (Murray et al., 2021). AI carries the potential to both substitute and complement cognitive capabilities of humans, and it was already impacting on strategic decision-making when I began writing this text (Krakowski et al., 2023; Raisch and Krakowski, 2021).
Research on organizations and AI is mushrooming. In developing a way to think differently about human-AI interaction—and as a basis for looking at AI as magic—I offer a selective review that reflects my early learning of the intriguing research.
First, studies show how humans and AI are becoming in interaction, warranting a focus on “co-constitutive relations” between technologies and organizing (Faraj and Pachidi, 2021), or their “co-constitutive interactions” (Balasubramanian et al., 2022). There are many ways to conceptualize how AI is entangled in social relations and interaction, with different assumptions and vocabularies (Larson and DeChurch, 2020). “Sociomateriality” and “sociomaterial practices,” for example, are established concepts in studying how the social and material are entangled (e.g. Leonardi, 2013; Orlikowski, 2007). The “agency” of AI is debated, and studies of whether new technologies automate or augment human labor abound (Raisch and Krakowski, 2021). While Andreas Kaplan and Michael Haenlein (2019) pondered, in a manner typical of the time, that it is likely that “humans will always have the upper hand where artistic creativity is concerned” (p. 19), recent developments suggest that this is not necessarily so. AI technologies seem ever more artistic and creative, and humans and AI “become” and “entangle” in constantly new ways.
Second, studies suggest that AI operates under a veil of rationality. The promise of rationality positions critique of AI as old-fashioned and irrational (Vesa and Tienari, 2022). This has consequences for human-AI interaction. We behave as if AI were rational, and we help to create self-fulfilling prophecies that bolster (assumptions of) AI rationality (cf., Friedland, 2019). Often unwittingly, we let algorithms and AI think on our behalf. However, the rational thinking that algorithms and AI do on our behalf is formal (Balasubramanian et al., 2022) or procedural (Lindebaum et al., 2020), and boundedly rational, at least for now. ChatGPT, for example, calculates probabilities in its training data and generates coherent texts, but it still does not understand context in the way humans do (Jiang and Hyland, 2025). Yet, ever more sophisticated AI tools that appear rational are guiding us to self-inflicted states of “learned helplessness” (Lindebaum et al., 2020).
Third, studies elucidate how AI is intruding in our lives with a discourse that is oriented toward the future. AI rationality comes with a promise of inevitable progress. Through their impressive uses (think of ChatGPT and its new versions!), cutting-edge technologies become objects of desire (Vesa and Tienari, 2022). They contain a radical promise to change practices and routines in organizations (Bailey et al., 2022) and to challenge professional roles, considerations of status, and forms of collaboration (Sergeeva et al., 2020). AI takes the form of powerful “unruly” discourse about the “latent future present,” as media studies scholars Lagerkvist and Reimer (2023) put it. At the same time, AI tools and technologies (and their algorithms and training data) remain opaque (e.g. Hannigan et al., 2024; Leaver and Srdarov, 2023). As humans, we are increasingly vulnerable as everything seems to be accelerating (Coeckelbergh, 2022) and as unruly technologies shape and transform our existence, often in secrecy (Hannigan et al., 2024).
Fourth, studies illuminate how human-AI interaction is embedded in power relations. How power operates in and around AI has deterministic elements: “claims about ‘superhuman’ accuracy and insight, paired with the inability to fully explain how these results are produced, form a discourse about AI that we call enchanted determinism” (Campolo and Crawford, 2020: 1). When the predictive accuracy and the unexplainable properties of AI are combined, and its transcendent or “superhuman” capacities are turned on, its power (and the power of those who develop it and sell it to us) hits us. AI works on data and detects patterns that give unprecedented access to our identities, emotions, and social character (Campolo and Crawford, 2020). AI gets under our skin but often so subtly that we do not recognize its power over us—or even care about it (cf., Hannigan et al., 2024).
Fifth, studies remind us that there is a dark side to technologies and AI. Philosopher Andrew Feenberg (2002) argues that technologies are not neutral and critiques the antidemocratic values that govern their development. AI takes exploitative forms if we allow it to do so, and it is “never separate from the assembly of institutional arrangements that need to be in place for it to make an impact in society” (McQuillan, 2022: 1). Sociologists Jenna Burrell and Marion Fourcade (2021) talk about “coding elites” who have the means and expertise to control and extract value from data and the “cybertariat” whose task is to produce that data. Anthropologist Mary Gray and computer scientist Siddharth Suri (2019) discuss the “paradox of automation’s last mile.” This means that as AI advances, it creates temporary labor markets for unforeseen tasks. When a new form of automation takes over work previously done by humans, training AI creates new needs for human labor elsewhere. When companies developing AI seek to minimize costs, this means that the “invisible” workforce is often gathered in the Global South, becoming a “new global underclass,” as Gray and Suri (2019) argue. AI, then, often becomes exploitative, even if this exploitation remains hidden.
Reading and reflecting on all this led me to think about why we humans behave according to what is technologically predicted and how we take part in creating that predicted future. My diary notes remind me how I began to reflect on how people like me are positioned in relation to AI. While a layperson in terms of technology, I am an organization and management researcher. While people like me may suspect that a lot of what is sold to us as AI is fake, we have no vocabulary, expertise, or authority to question its technologies (Kaltheuner, 2021). As organization and management scholars, we are vulnerable in relation to AI technologies that remain hidden when we study workplaces and organizations (cf., Lange et al., 2019). Acknowledging this vulnerability became the basis for my learning about AI.
Studying a moving target
Over the past few years, I have made notes about struggles in my learning diary and reflected on how my relationship with AI is changing. In addition to reading studies in different fields, my learning about AI and human-AI interaction is based on encounters and discussions with practitioners, working on our research project, and engaging in conversations with fellow scholars.
Reflecting on AI performances
I continuously struggle with my dual feelings about AI. I am anxious, frustrated, and fearful as well as curious, excited, and inspired. My diary notes depict how I have admired AI experts’ ability to control my attention and to influence how I think about AI. Their skillful AI performances have given me a sense of wonder. These practitioners seem to be so well versed in AI tools and technologies that they can steer my assumptions about the possibilities on offer, even if I continue reading critical research on AI. They keep me on my toes about what can happen next in what they construe as a whirlwind of changes.
For example, I attended a public seminar in my business school and witnessed representatives from tech companies sharing their excitement about AI. The dominant discourse was about dreaming, exploring, bravery, and making things happen and, typically for my native country, peppered with warnings that “we” are falling behind developments elsewhere. The event was opened by an AI expert who is a seasoned tech executive. They boldly claimed: “You’re already using AI—you just don’t see it!” We were sucked into a future with no alternatives.
The AI expert went on to postulate that “Learning is the true superpower in the AI era.” Later I asked Google and ChatGPT about this and noticed that such talk about human-AI interaction being all about learning is constantly reiterated online. However, I struggled to understand what learning meant here. Perhaps it was not human learning after all. “AI isn’t just about technology, it’s about learning faster than the world is changing,” the AI expert told us. With our limited capabilities, I suspected, few humans can learn “faster than the world,” but perhaps some can make AI do it. We humans seemed merely a sidekick in an inevitable development, because “no fuel, no flight.” The AI expert’s conclusion was that “data” is the “fuel” and the most important “strategic asset” for companies “in the AI era.” Humans were relegated to data sources for AI technologies to feed on. All this was presented in an engaging and celebratory way. I wrote in my notes that I had just witnessed a very skillful AI performance (and magic, to which I’ll return later).
The seminar also included short research-based presentations by scholars. My colleague offered a critical talk about how AI is eroding freedom. They discussed how AI organizes our anxieties and argued that “we are not individuals but dividuals: recombined and recalculated blips of data in a sea of big data.” I thought my colleague’s critique was convincing, but corporate guests sitting in the first row looked distracted and some fiddled with their mobile phones. In a panel discussion featuring corporate representatives that followed the scholars’ presentations, no one referred to the critique. It was as if nothing had happened: AI is here, AI changes everything, “we” (= you, the audience) are not getting it, “we” (you) are losing out. There was no room for critical voices in the inevitable trajectory of progress.
I talked to some seminar participants in the foyer afterwards. I mentioned the lack of acknowledgment of the dark side of AI among the corporate presenters and panelists. I talked to people who work in companies that are not as AI alert and savvy as the tech companies highlighted in the seminar, and they were nodding. They shared examples of challenges in human-AI interaction that reminded me of the companies we study. The bold future-oriented AI discourse that dominated the seminar made me and others anxious as well as inspired.
On my way out, I bumped into a business-oriented colleague. Their eyes were shining with excitement. They said that “we must dig deeper, we must become much better!” They talked about all the wonderful AI tools available to make academic research more efficient and relevant to practice. It was clear that my comments about the crucial role of humans in research and writing did not resonate: “Think about all the data and all the reports AI writes for you if you just prompt it right!” I was puzzled. Biases in, and lack of transparency of, AI did not seem to be issues. I went home to scribble notes about struggles with my emotional rollercoaster ride when I witnessed AI performances and exchanged views with others about what is happening when humans and AI interact.
Expectations for AI to transform businesses and societies are growing exponentially. I have had countless discussions with business practitioners who develop and use AI tools and technologies. A management consultant friend of mine has for many years followed media and social media coverage on AI. I have saved all the emails the consultant has sent me, with examples of how AI is discussed and links to interesting stuff, spiced up with their own reflections on what is going on. The baseline argument is that AI will carry out most, if not all, routine tasks far quicker and more efficiently than any human. Beyond this, there are examples (usually quite convincing) and future predictions (often seemingly possible) of AI creating its own performances, truths, cheating, and authenticity—and replicas and clones of us humans. I learned that media accounts and social media commentary include alarmist as well as celebratory visions, depending on who you ask and what you read.
Yet it seems to me that AI performances are getting wilder and wilder. Sometimes they are just plain scary. Our destiny seems to be in the hands of people who treat AI as technologies for making money and who are often incapable or unwilling to acknowledge its dark side.
Some time ago, I started playing around with AI tools such as ChatGPT myself. I notice how the tone in my notes on using ChatGPT has changed. Earlier I could make sarcastic remarks about its clumsiness and hallucinations. Now its advancement keeps surprising me. Translations between my own language and English, for example, work astonishingly well. As a user, I have been thinking a lot about my cluelessness and vulnerability. As I get more courageous and acquainted with AI, the technologies advance and AI performances abound, and I feel clueless on a higher level. Time and again, I witness how AI is a slippery subject of inquiry. The need to muster courage never ends, and I am learning to be reflexive about this.
Studying human-AI interaction critically
My diary notes depict my reflections on our research that focuses on managing people and AI. The companies we study operate in service businesses that are affected by the proliferation of new technologies. I have made notes in my learning diary as our studies have progressed. I share two examples here: the first is about anthropomorphizing AI that is not ready, and the second is about developing AI in gendered ways. Both examples show how AI can—and must—be studied critically, going behind the façade of exciting (and scary) AI performances and appreciating humans in human-AI interaction.
First, while we were kicking off our research project in late 2022, I had a trial run on studying humans and AI. Two colleagues in our project engaged in studying a company that was investing heavily in “digitalizing” its business. They carried out interviews and observed the workplace. Harnessing AI was (and is) a key strategic question there. We studied how the top managers worked to anthropomorphize a robot. Anthropomorphizing or humanizing refers to activities where a robot is attributed human characteristics, imbuing its real or imagined behavior with human-like properties, motivations, intentions, and emotions (cf., Mori, 1970). Top managers decided to give the robot a human name. However, as it turned out, it was not a robot but an algorithm (AI-driven rule-based software) without physical or visual form. It was a bot that was to carry out back-office tasks that were earlier done by humans. We ended up calling it a (ro)bot. Curiously enough, it was treated like a human “colleague,” and it seemed to influence relations and interactions between managers and employees.
We witnessed how AI entered the workplace in a clumsy way and decided to study it through an affect lens (see Einola et al., 2024). Affect refers here to feelings and emotions that become shared and collective as they circulate among organizational members (Barsade and Gibson, 2007). What we found was that the anthropomorphized (ro)bot prompted excitement and hope among top managers and frustration and anger among some of the employees. We learned that the (ro)bot became an inanimate amplifier for existing discontent between managers (its creators) and employees (its colleagues). The affect lens allowed us to see how humans impacted upon human-AI interaction and turned it into a dispute, to our surprise, along the “traditional” battle-lines between managers and employees.
We learned that discursive ambiguity was an integral part of the clumsy entry of the (ro)bot into the workplace. Anthropomorphizing led to assumptions and expectations among employees that the (ro)bot was not (yet) able to fulfill. On the contrary: it created extra work and led to hassles. The temporal element in all this was noteworthy, too. While employees were slowly getting used to interacting with the (ro)bot, the management already talked about its future “cousins.” For them, the (ro)bot was merely a first step in what was to become a full-blown AI-based transformation. While the top managers were firmly in the future, excited and hopeful, employees were stuck with helping the (ro)bot in the here-and-now, frustrated and angry. The future-oriented AI discourse led to contradictions and conflicts in the workplace, which conditioned human-AI interaction.
With this study, I learned that managing human-AI interaction is grounded in how people are managed as people. When introduced to new settings, AI solutions are work in progress. Incompleteness (combined with promises of rationality and improvement) conditions their interaction with humans. The temporal element in introducing AI to the workplace was significant as the future-orientation in AI discourse became contested. However, the status and authority of managers with up-to-date knowledge on AI were bolstered, while employees’ skepticism and resistance proved futile. There were consequences of AI that only became apparent after our study was completed. I heard that some of those who were vocal in challenging AI were no longer employed by the company.
Second, we are studying another service company where my colleagues interviewed people who have been engaged in different roles in relation to the company’s new service chatbot, that is, an AI-powered application and web interface designed to have conversations with customers. At different points in time, my colleagues interviewed developers and managers overseeing the process; those who had become “AI trainers”; and those whose customer service work roles were changing. While all interviewees agreed that human work would change with AI, they had different ideas about the extent to which the chatbot would replace humans in customer service. Again, AI was work in progress and, again, it spurred different viewpoints and disputes. We began to form an understanding of how the chatbot affected customer service delivery. My diary notes remind me about my own frustration as we did not get a chance to tap into how strategic decision-makers at top echelons made sense of the changing contexts of AI.
How the chatbot played into gender relations in the company caught our eye. This offered another critical angle to study human-AI interaction. Feminist theorizing suggests that technologies, AI, and chatbots are gendered. Technologies are traditionally associated with men and masculinities (Cockburn, 1985), and designing, implementing, and using technologies relies on social categories such as gender (Wajcman, 2017), often misattributing these categories (Sutko, 2019). Male norms are inscribed into AI tools and technologies that increasingly influence our lives, exacerbating gendered power imbalances (Borau, 2025). We decided to explore AI and gender, with a focus on the chatbot’s “gendered personality and identity,” as my colleague initially put it. The industry where the company operates has a tradition of gender segregation where top management is dominated by men, while customer service work is disproportionately carried out by women.
Analyzing the interviews, we found reflections on whether the chatbot should be a “female” human or a more abstract technological entity. While the initial chatbot as it appeared to customers was made to resemble a female (with long hair and a friendly smile) and it was given a female-sounding name (not surprisingly in the light of existing research), we found that it was later made into a more abstract tech representation. Curiously, while the feminized chatbot had originally been notably unhuman in its dealings with customers, the abstract chatbot became increasingly human-like in its interactional style, reflecting the rapid advancement of AI technologies. This eventually gave way to a more matter-of-fact style as customer service overall was dehumanized. We developed the idea that gendered AI acted as “scaffolding” in the process of dehumanizing. Gender played into human-AI interaction and how it was managed, but in an indirect, sneaky way. We could depict a gendered dehumanizing trajectory in an AI-driven transformation where not only humans but also human representation was being erased from customer service work. Some of our interviewees expressed regret about this.
Engaging in these studies taught me that AI is never complete, and that this influences human-AI interaction. AI seems to engender tensions and disputes, and its future-making is a source of contestation. As organization and management scholars, we can bring to light what more technologically oriented research tends to ignore: the human in human-AI interaction. However, I struggled to come to terms with the AI performances I witnessed and read about elsewhere. While our research on human-AI interaction proceeded, I was concerned by news of what generative AI seemed to be achieving in all walks of life. AI in its myriad forms seemed to develop at a chaotic pace. Making sense of this became an increasingly central part of my autoethnography. I wanted to learn more about contextualizing our findings, and I was again reminded that AI is a slippery subject of inquiry.
Meanwhile, back in academia
Throughout my learning journey, I have jotted down notes about scholars discussing how AI is affecting academia. As with research on AI discussed earlier, reflections on AI, universities, and changing academic work are mushrooming. Generative AI is described as an “exogenous shock” to academia (Krammer, 2023), and it is transforming the way research is done and written (Barros et al., 2023). With the help of ChatGPT and other AI tools, “irrelevant” and “inadequate” article manuscript submissions were already abounding when I embarked on my learning journey (Barros et al., 2023). I also suspect manuscript review processes in journals are changing. For all I know, the reviews received for this essay could be generated by AI.
In these exciting but perplexing conditions, key figures in our field argue that critical thinking is sorely needed (Larson et al., 2024) and that we must hold onto our intellectual engagement and academic communities based on human interaction (Bechky and Davis, 2025). My experiences confirm this, although I realize that there is a divide in academia in the sense that some scholars are wholeheartedly embracing AI while others remain more skeptical (Kulkarni et al., 2024).
Turning attention to the research generative AI helps us produce, organization theorists Dirk Lindebaum and Peter Fleming (2024) call for our reflexivity and responsibility as human scholars to engage in creative, contextual, and committed research. I have noticed how some of my colleagues are enthusiastically exploring AI tools in generating and analyzing qualitative data. I can only agree with Lindebaum and Fleming (2024) that there are fundamental risks involved in this. With automated coding and analyses, and the development of virtual research assistants to support the work, researchers can lose touch with what they are trying to understand as well as with their assumptions and commitments to why they do research in the first place. They can lose touch with in-depth contextual understandings of whatever they study.
I doubt that AI could have come up with the critical perspectives on human-AI interaction described earlier, looking at anthropomorphizing through an affect lens and considering gendered AI as “scaffolding” in dehumanizing customer service. This required our human efforts and collaboration where we worked together through endless discussions. We dealt with disputes and found ways to make up.
I have learned to remain open to collaborating with scholars as well as practitioners with different insights into AI. There is no way I can learn on my own or only with like-minded people. This has led me to think that we as organization and management scholars should keep our arguments open-ended. Precise descriptions and theorizing on AI technologies may be redundant when read a few years later. We should leave room for our readers’ imagination when we try to say something worthwhile about the moving target. This is important as human-AI interaction takes place in conditions of technological hype. This hype—and AI performances by experts—led me to consider AI as magic. The lure of magic is palpable when AI is becoming ubiquitous, and magic offers a way to understand human-AI interaction and its context.
AI as magic
I began to refer to magic early on in my learning diary notes. My point of entry was Arthur C. Clarke’s (1984) well-known dictum that any sufficiently advanced technology is indistinguishable from magic. I continued to read about magic and learned that technologies have a special relationship with magical thinking (Obadia, 2022). I learned that technologies are likened to magic because of the unrealistic expectations society sets for them (Stivers, 2001) and because magic helps us “address the fundamentally indeterminate condition of human existence” (Larsson and Viktorelius, 2024: 189). I dug deeper into how and why technologies and AI are associated with magic. Eventually magic became a lens through which I could make sense of my learning about AI and human-AI interaction.
It’s magic!
Magic spans time, place, and culture, and it is present in all parts of the world. Magic draws from tales, legends, and myths, and it employs oracles, seers, and soothsayers to offer us glimpses into the unknown. Magic has intrigued anthropologists who distinguish it from comprehensive belief systems such as religion (Lévi-Strauss, 1963). While the concept is multifaceted (Bailey, 2006), in essence, magic is a way of dealing with the world and trying to achieve something by doing something else. Magic is based on the human desire to find connections between things that do not appear to be connected. For this, magic evokes symbols and symbolism. Part of the magic is that it appears effortless, but in practice, magic is seldom achieved without effort (Gell, 1988; Larsson and Viktorelius, 2024).
Magic is often associated with the supernatural, and it straddles our fantasies and social realities. It is about fiction and abstraction that promise enchanted universes, on the one hand, and about injecting the extraordinary into the ordinary, on the other (Obadia, 2022). We can summon magic to cope with what we encounter in our lives. In other words, magic is inversely located in the abstractions of the imagination and embedded in everyday reality (Jarvie, 2018). In going beyond distinctions between rational and irrational, magic serves a purpose. Magic manipulates symbols to bring about change in the world (Obadia, 2022) and it helps us achieve something through actions that are not physically related to the goal (Larsson and Viktorelius, 2024). Magic is about performances where the natural world is manipulated through rearranging objects in another discursive or symbolic universe (Larsson and Viktorelius, 2024).
Magic feeds on our assumptions. This is demonstrated by stage magic that has been with humans since ancient times, using a variety of techniques in the service of illusion (Truitt, 2015). Magic as illusion is grounded in the magician’s ability to control attention, distort perception, and influence choice (Kuhn et al., 2008). It produces a sense of wonder in the spectator. Skilled magicians can manipulate our assumptions, leading to a result that seems inconsistent with what is occurring. At the same time, we are kept in suspense as to what we are about to witness next.
What AI magicians can do
Computer scientist, mathematician, and philosopher Norbert Wiener (1964) pondered the implications of the rise of intelligent machines. He wrote about the mind of the master who is delighted when he realizes how some of the supposedly human functions of his slaves can be transferred to machines. “This type of mastermind is the mind of the sorcerer in the full sense of the word,” Wiener concluded (p. 53). Today, magicians come in the form of developers of AI and those who sell AI to us and profit from it. Magicians and their AI performances are supported by media and researchers who are tempted to follow the money, keep up the hype, and fail to act as a check on industry power (Narayanan and Kapoor, 2024).
The public seminar in my business school described earlier was like a magic show. The opening presenter’s performance resembled that of a magician. Abracadabra, “You’re already using AI—you just don’t see it!” The magic trick was performed before our eyes as we were led to believe that it is all inevitable. We were persuaded to believe that we must change, although we were not told how exactly. Finally, we were given the impression that expectations set for AI are sky-high . . . and we were left wondering about what will happen next. We were warned about the perils of not adjusting to, and embracing, AI, but we were left confused about what this means. This is what magicians do: they rely on their relationship with us, the audience, and they work on our assumptions and emotions.
The appearance of magic in new technologies is nothing new. Allen Newell (1990), co-creator of two of the earliest AI programs, said: “I see the computer as the enchanted technology. Better, it is the technology of enchantment. I mean that quite literally” (p. 47). Computer scientists Noel Sharkey and Amanda Sharkey (2006) argue that “deception is an integral part of AI and robotics; in some ways they form a science of illusion” (p. 9). When working closely with humans, AI requires “some illusion of animacy and thought” (Sharkey and Sharkey, 2006). AI represents a new technological medium through which magic can be expressed (Davis, 2015; Obadia, 2022).
Psychology scholar Roberto Musa Giuliano (2020) agrees that AI “has always been entwined with the fictional. Its language echoes strongly with other forms of cultural narratives, such as fairytales, myth and religion” (p. 1009). He says that coding resembles knowing the sacred words of a spell. Sharkey and Sharkey (2006) add that the magic in AI is about “convincing people that they are dealing with a machine that understands them or that has feelings” (p. 12). This means that AI magicians rely on people like me, laypeople and researchers, who become complicit in magical acts and in amplifying AI hype. For the magic to materialize, we must believe that AI will deliver on the promises made in its name. I notice this in my experiences: “Learning faster than the world is changing,” for example, is fiction, although it was served to us as a fact that is easy to believe.
The capacity to straddle fantasy and reality fuels representations of AI as magic, and magic triggers “the hyped imagination of what is possible, not what is realistic” (Elish and Boyd, 2018: 58). References to the lexicon of magic thus help signify the potential that AI holds for us (Francisco, 2015), for example, in dealing with our desire to find connections and understand complex phenomena (Larsson and Viktorelius, 2024). When we assume that AI is (like) magic, it helps us make sense of the world and imagine what could be. My business-oriented colleague, eyes shining with excitement, is a good example. They embraced the magic wholeheartedly. I have encountered lots of business practitioners and scholars who give the same impression—and I have noticed how I enjoy a good magic show myself.
I began to see how AI as magic requires active participation by its audience—us—to be considered convincing and successful. This helped me to think through connections between AI hype and magic. Keeping up the hype is needed to attract our interest time and again so that AI can appear to perform supernatural feats. Magic in AI performances feeds on hype, and what is hype today may turn into “reality” tomorrow.
Viewing AI as magic helps to study what is done and sold in its name, how, and why. Companies selling and profiting from the use of AI tools and technologies can position their products and services as seemingly magical—as “digital minds” that not even their creators can understand—to make them even more appealing to potential customers and investors (Leaver and Srdarov, 2023). Because magic is about manipulating symbols, likening AI to magic is a discursive and narrative endeavor. The language of magic fuels collective imagination about AI, and magic “engages a reflection on technological progress and its social acceptability: it sheds light on the mental and imaginary dimension of the human relationship to AI” (Obadia, 2022: 25). This is what AI performances and hype are for: they get prospective buyers to pay attention.
I also learned how the language of AI and magic is related to the motivations and intentions of the magicians. Anthropologist Lionel Obadia (2022) discusses the polarization of images in the language of AI experts. On the one hand, experts describe magic as “a fertile imagination, a technology-friendly and psychological lubricant” for accepting AI technologies (Obadia, 2022: 27). On the other hand, they present magic as a stigma and an obstacle to technological advancement. While talking about AI as magic may be beneficial for companies and their decision-makers, they can also distance themselves from it when it suits their purposes. They can, for example, claim that likening AI to magic results from hype, fantasy, and “intellectual laziness” that does not do justice to the complexity of AI and the knowledges it is based on (Obadia, 2022). However, non-fictional and fictional become intertwined as popular culture helps frame understandings of AI (Leaver and Srdarov, 2023; Wilks, 2019). AI remains on the verge of magic, no matter what position is taken.
Comments on whether AI is magic or not magic, then, can serve many purposes, and they can be motivated by different business objectives. When I share with practitioners the idea of viewing AI as magic, the response tends to be one of suspicion or denial. Talk is directed back to AI technologies and what they will be tomorrow. This is also evident in public commentaries about AI. While some claim that “AI has lost its magic” (Bogost, 2024), others are quick to point out that AI “is not magic” (Zambach, 2023). Yet, references to magic remain part of the future-oriented and technologized discourse on AI. For example, while not studying AI as magic, Kaplan and Haenlein (2019) begin their article with the words “Once upon a time, there was a magic mirror . . .,” referring to the evil queen in the Grimm brothers’ tale of Snow White. It seems that magic is on Kaplan and Haenlein’s minds, even if they do not elaborate on it. Magic serves as a metaphor to set the scene, and to hook the reader, in this authoritative piece on the evolution of different types of AI systems. This shows how associating AI with magic can take many forms. It seems that sometimes it is done for appearances or entertainment.
Toward provocation or the problem with AI as magic
After playing around with the idea for quite some time, I began to take an increasingly critical look at AI as magic. In my diary, I noted how associating AI with magic must be done with care. Media theorist and historian Simone Natale (2021) refers to AI tools as “deceitful media” when they are designed to deliberately appear generally intelligent. Magic is in some ways always about misdirection. At the same time, prejudiced gendered social practices found in magic can be instantiated within cutting-edge AI devices (Toncic, 2021). However, AI magicians are never fully in control of their magic. Our research indicates that when AI-powered tools enter workplaces and organizations, they tend to be half-baked, and they engender different viewpoints, tensions, and disputes among humans.
Opacity as the crux of AI magic
The opacity of AI perpetuates its associations with magic. Internal workings of AI are not fully accessible to users (Brewer et al., 2024) as both algorithms and the data used to train AI are typically not revealed to us. Hannigan et al. (2024) maintain that “users often do not care to understand how technology does its magic because the business model and algorithms behind the technology can be secret, opaque, inaccessible, and fixed” (p. 582; my italics). Lack of transparency in terms of the data used to train generative AI tools makes it virtually “impossible to know which perspectives, presumptions, and biases are baked into these tools” (Leaver and Srdarov, 2023). Like magic, opaque AI feeds on our assumptions and imagination. When the seemingly impenetrable workings of AI are the subject of media hype, its magical appearance is bolstered (Leaver and Srdarov, 2023), paving the way for all sorts of AI performances.
Escaping the hype deployed by AI companies and media is thus essential for repositioning AI not as a magical savior, but as new technologies that must be critically understood (Leaver and Srdarov, 2023). While the idea of AI as magic is compelling, Narayanan and Kapoor (2024) warn us that portraying AI as inherently mystical may serve to reduce human agency in coping with it. Contrary to claims that AI is “unknowable,” as it is for me from a technology point of view, many experts “know exactly how AI is trained” (Narayanan and Kapoor, 2024: 252). For me, a critical perspective on AI as magic means approaching how AI is talked about and how it interacts with humans critically and reflexively, rather than dismissing AI as unapproachable. It helps shed light on how humans and AI “become” and “entangle” in constantly new ways, how AI operates under a veil of rationality and with a discourse that is always oriented toward the future, how human-AI interaction is embedded in power relations, and how it has a dark side.
Magic offers a way to approach moving targets and slippery subjects of inquiry and to develop a critical perspective on AI and its many sides. It seems that the magic in (generative) AI, and in human-AI interaction, is in the movement. I learned that just when I think I pinned it down for empirical analysis and theorizing, it evades me and pops up elsewhere in apparently more developed and magical forms.
Of course, presenting AI as perpetually beyond comprehension may be good for business. With exponential potential for making money, its magic serves to defy regulation. It is, perhaps, a conscious design imperative to produce moments of awe for us. This is how AI magic tricks happen. As such, AI continues to be a basis for exploitative business practices. Algorithms and models trained on flawed, biased, and secret data continue to conquer ground. After all, AI is a moving target for a reason. Alongside technological advancement, there is always someone who benefits from incompleteness and movement: the magicians or masterminds (Wiener, 1964), those who should be accountable for what AI does but usually are not (Vesa and Tienari, 2022). This is why opacity is problematic as the crux of AI magic.
What critical organization and management scholars can do to unveil AI magic
In my learning diary, I became increasingly critical of what is done in the name of AI. The idea of AI as magic developed into an alternative way of understanding human-AI interaction that highlights those aspects that seem to be silenced in the technologically oriented theorizing. While mainstream studies continue to manufacture technologized “truths” about AI, organization and management scholars can develop alternatives to this. We can be critical and give voice to those who suffer and those who are misrepresented, silenced, and excluded when AI enters organizations and workplaces. In our studies, this includes employees who serve the needs of technologies that eventually make them redundant.
Furthermore, because AI as magic requires active participation by its audience—us—it is important that organization and management scholars show what it means to be reflexive in studying human-AI interaction. There are different ways to understand reflexivity, and I think of it as awareness about how our assumptions and judgments influence our doings and sayings. Reflexivity builds on a willingness to reflect on our reflections (Antonacopoulou and Tsoukas, 2002) and on “a constantly changing sense of our selves within the context of the changing world” (Etherington, 2004: 30). Reflexivity extends to scholarly writing, “recognizing and making explicit the relationship between the writer and what, how and why they write” (Grey and Sinclair, 2006: 447). All this can take the form of what organization theorists Emma Bell and Hugh Willmott (2020) call disruptive reflexivity that “amplifies doubt by breaching convention and challenging the basis of knowledge claims” (p. 1370).
In my autoethnography, I have tried to do this and to develop ways to think differently about AI and studying human-AI interaction. This has meant acknowledging my vulnerability in the face of AI, reflecting on how and why I struggle as I get excited as well as anxious by the magic of AI performances, and thinking through how and why contextualizing our research findings is paramount. I have also learned the value of conversing with others. This means that reflexivity is seen as a joint and shared practice; being courageous together. Developing a critical reflexive perspective on AI as magic, then, is about teaming up with others (who may look at AI differently) and scrutinizing how magic feeds on hype around AI performances so that magicians can achieve their hidden goals.
For critical and reflexive inquiry to be meaningful, I keep coming back to how we must hold onto our judgment and thinking as human scholars. Organization theorists Christine Moser et al. (2022) offer intriguing ideas about learning when they work to “demystify” AI. They remind us that learning always involves judgment and that judgment implies not only reasoning but reflection (or reflexivity) as well as imagination and empathy, among other things. AI is (as far as I can see, still) incapable of handling these qualities. C. Wright Mills (1959) argued that sociological imagination is the ability to “see the context” that shapes our individual experiences. Understanding relations between self and society is where humans still seem to have an advantage over AI. When we use our imagination in research, however, we must be open and empathic not only to those we work with but also toward those whose actions and experiences we study (Tienari, 2024). I suspect that critical reflexivity, imagination, and empathy will be challenges for AI for some time to come. This is why they are crucial for us humans.
Provocation
In the spirit of Provocation Essays, I have engaged with personal and polemic writing to think differently about studying AI and human-AI interaction (cf., Reed et al., 2024). Autoethnography has enabled me to focus on the self while taking a wider ethnographic gaze, expressing how we struggle to make sense of our experiences (Ellis et al., 2011) and extending understanding about an important societal phenomenon (Anderson, 2006). I have recounted my learning about AI, and studying human-AI interaction, and how this led me to view AI as magic. The world of AI is replete with magicians who share their visions of the future, with engaging symbolism, rallying for what appears to be an inevitable technological transformation. To challenge the alleged inevitability, I propose we speak up for humans.
Using magic as a lens, we organization and management scholars can help to unmask the conditions where AI is performed and reflect on our complicity in this. We can study human-AI interaction through critical reflexive inquiry that is open to collaborating with scholars and practitioners with different insights into AI and committed to keeping arguments open-ended and leaving room for readers’ imagination. All this is crystallized in approaching AI as magic in ways that do not glorify it but contribute to critically understanding what is done and sold in its name, how, and why.
First, in terms of critical reflexive inquiry, I propose that we do not lose sight of how AI changes relations and configurations of power in organizations, society, and the global technology marketplace, on the one hand, and how people, organizations, and society contribute to this, on the other. In Feenberg’s (2002) words, “In choosing our technology we become what we are, which in turn shapes our future choices” (p. 14). Narayanan and Kapoor (2024) remind us that “much of the downside of AI comes down to factors outside the technology itself” (p. 261). As organization and management scholars, we can unmask this by doing research that centers humans critically and reflexively. Criticality means that we find new ways to question how truths and taken-for-granted assumptions about AI are manufactured. Reflexivity means that we constantly challenge our own assumptions and understandings in doing this.
Second, I propose that we remain open to collaborating with scholars and practitioners with different insights into AI. As a subject of inquiry, AI in its many forms is too complex to be studied among the like-minded. I have found inspiration in research published in other fields and in interacting with scholars and practitioners who look at AI differently from myself. This has helped me put my technological cluelessness in perspective and to muster courage to continue to face AI by working and learning with others.
Finally, I propose that we as organization and management scholars keep our arguments open-ended, leaving room for readers’ imagination. Human-AI interaction is a moving target and slippery subject of inquiry. Viewing AI as magic may merely be a starting point in thinking differently about it. Magic can be moved to the background and abandoned as the research progresses, and new insights are developed. We must speak up for humans in different ways.
Acknowledgements
I am deeply grateful to Provocations Editor Cara Reed and the anonymous reviewers for their insightful comments, guidance, and support throughout the review process. I extend my thanks to Katja Einola, Violetta Khoreva, Martina Čaić, and Robert Ciuchita for our great research collaboration on managing people and AI. Thank you Katja, Violetta, and Martina for our inspiring work on gender and AI.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Funding from dr. h.c. Marcus Wallenberg’s Foundation for Research in Business Administration is gratefully acknowledged.
