Abstract
In this article, we reflect on the dark side of generative artificial intelligence. We identify several concerns associated with generative artificial intelligence in relation to knowledge processing and developing understanding, including misalignment between the artificial intelligence’s intended and actual use, its rhetorical duplicity, risk of technical dependence, negative impact on creativity and contextual understanding, overall decreased knowledge quality, the production of ‘illusory truths’, artificial intelligence’s progressive encapsulation and its exponential proliferation. We call for reflection on its potential implications for management learning, as well as for learning more broadly.
Introduction
Affordances and consequences of generative artificial intelligence (GAI) use in academia have been explored in two recent editorials published in Management Learning. The first editorial identified the impacts of GAI in academia and the possible responses to it along three main domains of an academic’s professional responsibilities: research, teaching and service (Barros et al., 2023). The second highlighted certain challenges posed by artificial intelligence (AI) to educational systems, for instance, in the context of academic integrity and student assessment (Krammer, 2023). However, in this frenetically evolving context, a remarkable increase in both the extent of AI use and its applications, as well as in the scale of GAI rollout and planned adoption nationally and globally (e.g. Le Monde, 2025a), motivates further in-depth inquiry. Compounded by economic and political agendas and vested interests – increasingly impossible to disentangle (see Acemoglu and Johnson, 2023; Crawford, 2021), as was recently (and notably) evident in the AI-driven US government overhaul attempt (e.g. Grillaert et al., 2025; Kimery, 2025) – and by the increased use of AI becoming a global issue involving planet-wide risks (Crawford, 2021; Le Monde, 2025b), while also promising deep social change (Lei and Kim, 2024), the impact of AI on how and what we learn will be an ongoing concern for Management Learning’s community.
The use of GAI tools, such as ChatGPT or Claude, is associated with advantages, including enhanced productivity and streamlined decision-making (Adedoyin and Christansen, 2024). However, it also brings challenges which have the potential, at the least, to complicate and, at worst, to upset individual and organizational learning processes. In this article, we explain the key concerns associated with GAI more broadly before focusing on its implications for learning. We further offer recommendations for organizational learners and those capable of facilitating learning for others. In sum, we explore how recent analyses of GAI’s potentially damaging impacts enable us to better position our understanding of its role in learning, particularly by remaining alert to its potentially disruptive repercussions.
The dark side of generative AI
One may posit that problems with GAI are not intrinsic to the technology but are rooted in its development and application. The applications we are particularly concerned with in this article mostly involve large language models (LLMs), a subset of GAI focused on producing human-like text, mostly built on ‘transformer’ architectures (Vaswani et al., 2017); GAI can, however, also create other outputs, such as images, audio and code. While generative models can structure and synthesize vast quantities of existing data, their outputs are constrained by these inputs and by the model’s limitations, such as its architecture, training objectives and representational assumptions. Therefore, it would be inappropriate to expect GAI to be, strictly speaking, ‘accurate’ or to generate new insights, as it is bound by the patterns it identifies in its training data. This is not what a well-informed user, an expert, would expect. Nor is it necessarily what such an informed user would find useful, because they would not perceive an algorithm as a source of true statements, being sufficiently well-positioned to assess a statement’s veracity themselves.
Crucially, though, most AI users are non-experts and are not well-equipped to judge the accuracy of the outputs produced by GAI. While GAI can boost non-experts’ capabilities, it may also foster over-reliance, inhibiting users’ deeper skill development (Anthony, 2021). Access to a fair level of output is thus broadened without a strong foundational understanding of the knowledge in use. As outputs from GAI are a mixture of accurate and false content (Alkaissi and McFarlane, 2023), collaborating with these systems is more challenging than it might appear. This, coupled with a range of users that includes non-experts, students and the general public, results in a risk of ‘false expertise’, whereby ‘opinions’ generated by AI are employed to create and bolster alleged expertise (Alkaissi and McFarlane, 2023); thus also winding up the falsity spiral by ‘training’ the AI algorithms to legitimize the ‘false experts’’ bogus claims (Badau, 2023). In addition to the concerns over model degradation due to recursive training on synthetic data (Shumailov et al., 2024), this dynamic is fraught not only with ethical risks but also with tangible dangers if applied in fields such as medicine (e.g. Amann et al., 2020). The misalignment between the intended goals and the performance of GAI has long been a challenge for machine learning systems, involving questions around ethics, psychology and the understanding of what ‘intelligence’ means (Christian, 2020).
GAI’s alignment issue is exacerbated by its rhetorical duplicity, with false or imprecise information cloaked in scientific jargon that conveys a sense of certainty (Azaria et al., 2024). Lay users confronted with false information, enveloped in plausible data and suffused in scientific parlance, are relatively vulnerable to misinterpreting or misrepresenting data, irrespective of whether they do so intentionally (Hannigan et al., 2024).
Repeated, uncritical exposure to GAI may lead to overdependence (Mannuru et al., 2023; Zhang and Xu, 2025), over-reliance (Christou, 2024) and reduced learning opportunities (Anthony, 2021). Emerging research suggests that features such as personalized responses, emotional validation and continuous engagement may create dependency, further exacerbated by the practical allure of GAI’s purported capacity to boost productivity and efficiency (Yankouskaya et al., 2025). While consulting generative AI may seem like an easy fix to a complex problem, it creates issues of its own. For example, among healthcare professionals, the relative convenience and expediency of use, coupled with a lack of training, may result in ‘ChatGPT Dependency Disorder’, potentially leading to suboptimal treatment plans or missed critical considerations (Chakraborty et al., 2024). This phenomenon, labelled ‘skill erosion’, may have long-term effects as new entrants in a field lose touch with previously established practices and may lack the opportunities to develop their expertise (Anthony, 2021; Rinta-Kahila et al., 2023).
Although GAI can enhance innovation through improved content awareness (Kietzmann and Park, 2024) and improve productivity for complex tasks (Dell’Acqua et al., 2023), the creative value of AI-generated content remains dubious in other contexts, such as music composition (Baidoo-Anu and Owusu Ansah, 2023). Moreover, AI usability in creating content is limited by its lack of contextual understanding (Akinwale et al., 2025), which may result in inappropriate explanations or irrelevant responses (Baidoo-Anu and Owusu Ansah, 2023).
GAI is known to disrupt socialization and interpersonal exchange, obstructing knowledge ties and knowledge-sharing (Retkowsky et al., 2024). As a result, it decreases the opportunities for embodied contact with others, affecting knowledge-sharing at the organizational level (Anthony, 2021). It might also negatively affect the emergence of communities of practice or of other commitment-driven ‘social worlds’ (Elkjaer, 2004). This relative siloing of knowledge pursuits forces organizational members to develop new tactics for acquiring the expertise needed to learn and perform their tasks (Beane, 2019). When such inquiries are conducted by non-experts, the outcomes might suffer. Even skilled professionals might fail to critically consider content created by GAI, as exemplified by the Manhattan lawyer who used nonexistent case law at a court hearing, claiming that he ‘did not comprehend that ChatGPT could fabricate [it]’ (Weiser and Schweber, 2023). It appears that, at all levels, a lack of active engagement in the process of acquiring knowledge decreases the relevance of that knowledge and the stakes involved in deploying it (Lindebaum and Fleming, 2023). Likewise, a dearth of social mediation in the knowledge acquisition process, and scarce or absent oversight of its effects, might lead to lower-quality knowledge (Retkowsky et al., 2024).
Furthermore, the cognitive consequences of the increasingly protracted process of distinguishing between truth and falsity when using GAI should be considered. While LLMs can generate content which appears sensible, it often results in the consumption of ‘botshit’ (Hannigan et al., 2024) or ‘coherent nonsense’ (Curran et al., 2022), with relatively infrequent errors woven into plausible-sounding and overall ‘correct’ discourse. As many academics would agree, and as firsthand experience in assessing student coursework confirms, spotting such coherent nonsense is difficult: submissions aided by GAI are certainly harder to mark than genuinely poor submissions, the latter normally being peppered with errors and nonsensical claims throughout. Indeed, research shows that a significant proportion of entirely AI-generated content passes as genuine work (Scarfe et al., 2024). The ‘illusory truth effect’ dictates that repeated information is often perceived as more truthful than new information (Hassan and Barber, 2021). Hence, the more often a nonsense claim goes unnoticed yet is repeated – whether in the context of academia or elsewhere – the more difficult it becomes to spot.
Manifestly, the difficulty of distinguishing between truth and falsity can engender grave consequences. For example, in high-stakes situations like those entailing significant financial consequences, people tend to over-rely on AI-generated content to the detriment of other available contextual information and their own judgement (Klingbeil et al., 2024). However, even the most genuine commitment to truth-seeking may be jeopardized, because our capacity to deal with nonsense is further undermined by AI’s ‘progressive encapsulation’: the reduced visibility into and control over GAI’s relations and functions (Hinds and von Krogh, 2024). This opaqueness characterizing input–output connections in GAI is known as the ‘black box effect’ (Hinds and von Krogh, 2024) – what goes on inside remains ‘AI’s little secret’ as far as most of the population is concerned. Consequently, to most users the data itself will be black-boxed as well and beyond knowing; in fact, even developers are often unsure about how a GAI application works (Asatiani et al., 2020). Meanwhile, experts also face a challenge in engaging with outputs from black-boxed systems to reach decisions that humans must uphold (Lebovitz et al., 2022).
While the use of GAI may be defended on grounds of increased efficiency (e.g. Mollick, 2024), this argument is not clear-cut either. A ‘jagged technological frontier’ looms in various organizational contexts, meaning that, for tasks falling outside AI’s current capabilities, its use by workers may make them less likely to produce correct solutions (Dell’Acqua et al., 2023).
Implications for (management) learning
Even this brief review of some of the challenges posed by GAI to knowledge processing and developing understanding – including misalignment between the AI’s intended and actual use, its rhetorical duplicity, risk of technical dependence, negative impact on creativity and contextual understanding, overall decreased knowledge quality, the production of ‘illusory truths’, AI’s progressive encapsulation and its exponential proliferation – calls for reflection on its potential implications for management learning, as well as for learning more broadly.
First, a myth should be dispelled: describing what content-generating AI does as ‘learning’ is tantamount to mislabelling it. This is problematic for a plethora of reasons. It may be argued that, on ontological grounds, whatever this process is, it is not learning as we have known it for centuries – if not for other reasons, then because in this case the very learning subject is an ambiguous entity. Unlike a human (or animal, or perhaps even plant; Gagliano et al., 2016) being, it – an AI model – does not come into the picture until after it has ‘learned’. A human learner, by contrast, is subject to learning throughout and thus logically pre-dates the very process itself. While learning can be considered an accessory to a human being, coveted as it may be, an AI model is an epiphenomenon of learning – it comes later, as its consequence. This ontological difference is complemented by an axiological one: while there may be good reason to endow a human learner with intrinsic value independent of the learning process, in AI’s case this learning-independent element of intrinsic value is wholly absent. After all, at the point of commencement of learning, an AI model is nothing but a presence yet to be created through various data analysis processes (Salvaggio, 2024).
The ontological and axiological status of AI as the learning (non)subject has further consequences and characteristics, which ramify as well as contextualize its capacity to support the learning process. GAI significantly complicates drawing actionable conclusions from the single-loop learning process to inform the modification of goals and rules in light of experience, rendering the double-loop learning process difficult to achieve (Argyris, 1977). Despite GAI’s conversational veneer, the logic of using these tools is hardly dialogic; it can, by and large, be described as one-way traffic. This is not to dismiss the analytic prowess of GAI in compiling and summarizing content at a rate unachievable by human beings. Yet ‘interactions’ between humans and AI are not driven by the desire to comprehend the intrinsically valuable position of ‘the other’, allowing it to enrich our own understanding and to amend the next iteration of the exchange so that it takes into account what was thereby learned – which is the essence of dialogue and dialogic learning (MacIntosh et al., 2012; Izak, 2025). This is so, first, because (as noted above) there is no intrinsically valued conversation partner at the other end and, second, because, by and large, the logic of this interaction is to seek an answer to a problem – indeed, the more strictly defined the problem, the ‘better’ the answer will be – rather than to participate in a dialogic journey towards understanding. The latter factor does not merely result from the practicalities of how AI is used in everyday life, be it in organizational or other contexts (although it certainly is influenced by this applied concern), but also from its very logic and the features of its interface. Although ostensibly a learning algorithm, GAI disincentivizes deeper conversational engagement, not only by making errors harder to spot due to the ‘illusory truth effect’, but also because of how encompassing the generated content may appear on its surface. LLMs can be ‘trained’ to use a certain style and apply particular rhetorical models, including scientific jargon, specifically designed to persuade. However, when the tone is mistaken for content and the extent of content is overwhelming, users risk shying away from critical dialogic engagement. The lack of a human and/or community mediator engaged in content production – unlike on community-driven content-generating platforms such as Wikipedia – perpetuates this effect: a GAI model, as a non-subject, does not ‘feel ashamed’ after committing an error, nor does it have an emotional commitment to protecting its own authority, and it therefore has no embodied incentive not to mislead. It has no stakes in what it is producing, as Lindebaum and Fleming (2023) have highlighted.
Coupled with their pretence of authority, LLMs become inherently non-dialogic (cf. Matusov et al., 2024) and therefore fundamentally unable to move beyond monologue (Izak et al., 2022) and the single-loop mode of learning (Argyris, 1977). GAI models are fit to patternize and organize content, not to enrich or enhance it; nor are they fit to learn from others or with (or through) others. Nevertheless, researchers have been looking into new ways to deploy GAI in learning environments, which have apparently generated some promising results (Kestin et al., 2024).
The question remains, then: Where will this path lead us? On the one hand, current trajectories of GAI development give little basis for optimism. The more frequently it is employed, the more ‘encapsulated’ GAI becomes. The opacity of GAI systems obscures their functioning and erodes transparency; moreover, when such systems are recursively trained on AI-generated content, this may lead to model collapse and epistemic degradation (Shumailov et al., 2024). In addition, uncritical use of GAI tools may undermine critical thinking and dialogic engagement, particularly as students begin to treat AI outputs as social actors rather than contestable sources of information (Larson et al., 2024). Becoming dependent upon a tool that is functionally opaque and rhetorically persuasive may also undermine the capabilities of learners and practitioners over time (Anthony, 2021; Rinta-Kahila et al., 2023). For future learners, this may mean impediments to their critical thinking and the dismantling of their dialogic journey towards understanding (Lindebaum and Fleming, 2023). For societies, it may be the harbinger of increasingly superficial engagement with ideas and knowledge from the humanities and the natural and social sciences among students, educational institutions and, eventually, forthcoming organizational workforces.
Acemoglu et al. (2023) argue that the current trajectory of AI deployment risks exacerbating inequality and diminishing the scope of meaningful human work. The dominant private-sector model emphasizing automation over augmentation, not least owing to fiscal incentives, risks displacing labour rather than enabling it. This is not inevitable, however. Lei and Kim (2024) underscore that the dichotomy between automation and augmentation is overstated: rather than standing in opposition to augmentation, automation could lead to increased labour demand and more meaningful work. Nevertheless, without safeguards and policy interventions, the unreflexive, unregulated and ubiquitous deployment of AI tools risks marginalizing human expertise and tacit knowledge to the detriment of organizations and people. Collective learning, through adaptation and coordination, is essential to learning to augment, and to do so at scale. As these learning infrastructures erode, augmentation can be captured by actors with pre-existing technical or organizational advantages, entrenching unequal access to its benefits. In this way, the decline of collective learning not only inhibits the diffusion of augmentation, but also undermines its equitable distribution.
GAI certainly has the potential to catalyse a catastrophic plummeting of the quality of knowledge as well as our understanding of ‘how things work’ and ‘what they mean’. Whether this will ultimately result in the proverbial irrigating of future crops with Gatorade to their imminent demise and ensuing hunger on a global scale – as evocatively portrayed in the dystopian comedy ‘Idiocracy’ (2006) – depends solely upon us.
Where to next – recommendations for (management) learners
Bar the science-fiction scenarios of human beings enslaved by robots, and despite AI models’ innate infrastructure, to this day AI has not been able to dictate its conditions to us; it remains incapable, for instance, of imposing the ways in which we should use it. Nothing around AI is natural or a given, and its development is conditioned by multiple contexts (Crawford, 2021), some of which we outlined in this article. GAI can certainly be engaged with in an informed, cautious and sustainable manner – the way, as some would argue, it was intended to be used in the first place. GAI’s ramifications, once properly recognized, can be turned into strengths and opportunities.
First, in formal educational contexts – including in business and management schools – LLMs, and GAI more broadly, should be approached as tools that can support educators in developing student engagement. This means understanding how and when to deploy AI, rather than taking for granted that learning can be automated. Second, cultivating dialogical learning environments where lecturers and students can bring knowledge from ‘outside the box’ is crucial, since human interaction can hardly be substituted, despite the recent push for digital artificial friends and romantic partners. Third, AI literacy needs to be placed at the forefront of discussions in every learning environment, not only in universities and business schools, but also in organizations that trade in expertise or appreciate the role of tacit knowledge within their ranks. Finally, it is critical that governments and societies recognize that AI deployment is not the sole turf of techno-optimists and billionaires with hidden or occult agendas. Societies, and all the types of organizations that constitute them, must consider the critical change AI tools can bring and develop innovative ways to safeguard how interactions and developments take place. To harness the power of LLMs and GAI without eroding our capacity for learning, critical thinking and reflexive management and organizational practice, we must engage with them intentionally and remain committed to an educational experience that goes beyond spitting out GAI-fabricated, though eloquent-sounding, rhetoric.
