Abstract
There is an emerging consensus that traditional management roles could—and maybe should—be performed by machines infused with Artificial Intelligence (AI). Yet, “true” leadership—that is, motivating and enabling people so that they can and will contribute to the collective goals of an organization—is still predominantly viewed as the prerogative of humans. With our opinion piece, we challenge this perspective. Our essay aims to be a wake-up call for large parts of academia and practice that romanticize human leadership and think that this bastion can never be overtaken by AI. We delineate why algorithms will not (need to) stop short of the core characteristics of leadership and may even cater to employees’ psychological needs better than human leaders can. Against this background, conscious choices need to be made about what role humans are to play in the future of leadership. These considerations hold significant implications for the future of not only leadership research but also leadership education and development.
Thus far, organizational leaders have been in charge of managing organizational change toward greater digitalization. While this has mostly concerned lower to mid-level employees, it now seems that higher management strata themselves are coming under pressure. The question is to what extent. Will machines only be used for standardized managerial functions, or will people leadership—the supposed prerogative of humans—also fall to advanced technological solutions?
In this essay, we approach this question by specifically considering recent developments in artificial intelligence (AI): technology that is able to gather and interpret information, recognize and make sense of patterns, generate predictions and results, evaluate and improve its own performance, as well as give instructions to other systems or agents (Ferràs-Hernández, 2018; Glikson & Woolley, 2020). Such AI has been increasingly used to automate routine tasks in organizations (Raisch & Krakowski, 2020), but the technology's development is anything but linear (cf. Moore's law; Moore, 1965). Indeed, recent accelerations in computing power and the development of sophisticated machine-learning approaches have sparked a lively debate about whether and how AI might subsume more complex managerial tasks (Balasubramanian et al., 2020; Parent-Rocheleau & Parker, 2021; Raisch & Krakowski, 2020; Tschang & Almirall, 2020). Assuming that digitization continues apace—because it is more efficient and predictable—most scholars agree that AI will replace humans in several standardized managerial functions (Balasubramanian et al., 2020; Parent-Rocheleau & Parker, 2021; Raisch & Krakowski, 2020; Tschang & Almirall, 2020). These AI managers would recruit employees, give them task instructions, evaluate their performance, and even make autonomous promotion or retention decisions (Höddinghaus et al., 2021; Parent-Rocheleau & Parker, 2021). As such, AI managers conduct tasks that match people's implicit assumptions about typical behaviors associated with activity-based manager prototypes (Kniffin et al., 2019).
In contrast to AI taking on management tasks, most scholars still tend to romanticize true human leadership, thinking that this bastion will never fall to AI. Typical leadership activities (i.e., motivational and relational functions that enable people so that they can and will contribute to the collective goals of an organization; Antonakis & Day, 2018; Kniffin et al., 2019) have so far remained a “safe space” untouched by AI substitution scenarios. The consensus seems to be that while traditional management roles could and maybe should be taken over by algorithms, “true” leadership—catering to the needs of employees—will remain the province of humans.
In the following, we question that assumption. Given the steep increase in publications about AI in research fields outside the leadership domain, it is astonishing that the emerging literature on AI in organizational management lacks an equally rich discussion about the potential substitution of people leadership with AI leaders. This oversight may stem from a preference for staying within our comfort zone, allowing us to preserve established conceptual, methodological, and pedagogical paradigms. But whatever the reason, we believe the field has a duty to look at the topic of AI leadership and engage in a candid discussion about its implications for human leaders in the future of work. Otherwise, we risk sleepwalking into an unexamined reality and failing to adjust in time.
Against this background, this essay considers recent developments in AI-human interactions in applied fields in order to explore how sophisticated algorithms can embody the core characteristics of leadership. Specifically, we will argue that technology is moving from a mere tool for human leaders (i.e., the NOW) to a (pro)active advisory/support role for human leaders (i.e., the NEW) to eventually substituting for human leadership (i.e., the NEXT). In doing so, we offer a comparison of what human vs. AI leaders can accomplish today (i.e., NOW) and in the future (i.e., NEW + NEXT) when it comes to optimally enabling and motivating employees at work. These considerations provide the backdrop for rethinking the future of leadership research as well as the future of leadership education and development.
The NOW of Leadership
In the NOW, digitalization in leadership refers to digitally mediated communication and the associated challenges of leading a remote or hybrid workforce. While scholars have investigated leadership in virtual settings for several years (Avolio et al., 2014; Cortellazzo et al., 2019), this research stream gained massive momentum amidst the omnipresent challenges of the coronavirus disease 2019 (COVID-19) pandemic (e.g., Bartsch et al., 2020; Dirani et al., 2020; Stoker et al., 2022; Wyatt, 2020). Scholars have discussed, for example, how technology can expand the way leaders lead teams (Larson & DeChurch, 2020), whether the ethical and trust-related aspects of leadership differ in virtual versus face-to-face interactions (Cascio & Shurygailo, 2003; Lee, 2009), and why transparency constitutes a particular challenge for leaders in such settings (Turesky et al., 2020). For the moment, the interested reader can gain a comprehensive overview of the literature on the NOW of leadership from the bibliometric review of digital leadership by Tigre and colleagues (2022). We keep this section short because technology in the NOW of leadership is merely about translating traditional leadership to a digital space. While certain aspects of leadership (e.g., delegating or crafting a sense of “we”) may be more relevant than others in such virtual or hybrid working models (Haslam et al., 2021; Stoker et al., 2022), the prevalent issues (e.g., direction-setting, including appropriate verbal and non-verbal communication while considering context) are not radically new. In any case, in this setting, human leaders are still the main initiators of task-, relations-, and change-oriented leadership functions—even if these are enacted via digital channels.
The NEW of Leadership
The NEW of leadership concerns how algorithms can effectively (and proactively) augment or support leadership, such that humans and AI collaborate in leading employees (Raisch & Krakowski, 2020; Tsai et al., 2022). This augmented leadership approach is occasionally also described as the AI serving as a partner or co-leader, with algorithms making suggestions about what leaders should look at, which team members may require more attention, or how to improve team dynamics. In doing so, the AI also provides corresponding background information based on identified patterns in employee-related data. The underlying data are either provided by the employees themselves (e.g., via regular short pulse-check surveys) or automatically collected (e.g., communication patterns derived from electronic channels, movement, and wearable-generated physiological data).
While such an AI-supported leadership approach is not yet the reality for the majority of leaders in contemporary organizations, we are seeing a considerable number of start-ups offering sophisticated technological solutions to help leaders in their core responsibilities (i.e., fulfilling task-, relation-, and change-oriented leadership functions; DeRue et al., 2011). As such, it is only reasonable to expect that AI-based leader-support dashboards or communication assistance systems will soon become more mainstream as companies implement them and see better results. Accordingly, we expect the NEW of leadership to be the standard in a few years. But what does that mean in concrete terms?
Task-related leadership includes observing employees’ work progress, offering task-related advice, helping with problems, or contributing to effectively structuring the work process. If you have ever used Microsoft Viva Insights or allowed your smartphone or smartwatch to suggest when to move, sleep, or take a break, you can easily imagine how AI can support leaders. In the corporate world, the respective AI-based solutions include feedback dashboards for leaders that provide high-frequency insights into team challenges along with data-based recommendations for actions to tackle these challenges (e.g., MONDAY.ROCKS). As a cheaper solution, leaders can also simply ask freely available AI tools such as ChatGPT (https://chat.openai.com/) for advice on concrete employee problems and will get quite elaborate answers with concrete steps to address the problem. Of course, even with such support, humans still have to consider what they want to be supported on and decide whether or not to follow the AI's advice (Burton et al., 2020; Longoni & Cian, 2022). That being said, we may speculate that leaders will think twice before overturning an action suggested by AI. Indeed, examples from the US judicial system show that the recommendations of AI systems used to support judges in sentencing are rarely overturned (O'Neil, 2016). After all, who would want to be responsible for handing out a mild sentence, only for the person to commit another crime shortly after? By the same token, which leader would want to be responsible for a team conflict that prevents the timely delivery of a product when the algorithm told them to intervene and remove a certain person from the team?
Relation-oriented leadership functions comprise several facets: considering followers’ individual needs, motivating employees according to their personal preferences, or bolstering work engagement through high-quality interactions. Here too, we already see a number of AI-based solutions in place. For example, the chatbot Amber—an employee engagement tool that allows employees to share their feelings—has been around for a while and provides leaders with real-time insights into their employees’ experiences, but its implementation truly gained momentum during the COVID-19 pandemic (Dutta, 2021). The tool relies on automatic sentiment analysis and warns leaders via a dashboard, for example, about disengaged employees, so that countermeasures can be taken in a timely manner. Moreover, Amber not only identifies attrition risks but also offers advice about whether and how leaders should intervene (e.g., based on the respective employee's previous performance and potential status). As another example, an algorithm could be used to pre-screen a human leader's emails and provide in situ recommendations on how to write in ways that are more empathic, directive, visionary, etc. (depending on the followers’ identified state). While this may seem new to the leadership field, such tools are already in use in the dating industry, in which many individuals rely on AI “wingbots” to craft their messages more effectively for potential lovers (Basu, 2019) or on an AI “slutbot” to practice their flirting skills (e.g., juiceboxit.com). To translate these capabilities to leadership, relation-oriented leadership can, for example, utilize machine learning in combination with eye tracking (via omnipresent webcams or face reader technology) (Cheng et al., 2022; Gerpott et al., 2018) to provide leaders with real-time updates on their employees’ attention, emotional expressions, and assumed mood.
Such AI-based tools could, for example, be used in video calls to give leaders real-time information about whom to address to regain attention, who might be the “ring leader”, or what kinds of topics will cause employees to withhold their opinions (Sawar, 2022).
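To make the mechanics of such tools more concrete, consider the following heavily simplified sketch of a pulse-check pipeline. Everything here is an illustrative assumption: the keyword lexicon, the threshold, and the function names are toy stand-ins for what a trained sentiment model inside a production engagement tool (such as Amber) would provide.

```python
# Toy sketch of an engagement-monitoring pipeline (illustrative only).
# A real tool would use a trained sentiment model, not a keyword lexicon.

NEGATIVE = {"overwhelmed", "frustrated", "ignored", "exhausted", "stuck"}
POSITIVE = {"excited", "proud", "supported", "motivated", "energized"}

def sentiment_score(message: str) -> int:
    """Crude lexicon-based sentiment: +1 per positive word, -1 per negative word."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def flag_disengaged(pulse_checks: dict, threshold: int = -1) -> list:
    """Flag employees whose cumulative sentiment across pulse-check messages
    falls at or below the threshold, so that a leader-facing dashboard can
    surface them for timely follow-up."""
    return [employee for employee, messages in pulse_checks.items()
            if sum(sentiment_score(m) for m in messages) <= threshold]
```

For instance, `flag_disengaged({"avery": ["I feel ignored and exhausted"], "blake": ["proud of our launch"]})` would flag only "avery". The point is not the scoring itself but the loop it enables: continuous employee data in, prioritized leader attention out.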
Lastly, change-oriented leadership functions refer to formulating an attractive vision for employees, charismatically spreading enthusiasm and energy among followers, or initiating change. Here, leadership scholars often express the most reservations about whether and how AI could assist human leaders. After all, aren’t creativity and inspiration exclusive to the human mind? Probably not for long. A few years ago, we saw YuMi—a two-armed collaborative humanoid robot—conduct Italian tenor Andrea Bocelli at a charity concert in close cooperation with and preparation by the human director of the Philharmonic Orchestra (Wiltz, 2017). By now, algorithms such as AIVA, Autotune, WavTool, or Jukebox create tunes that artists use to get inspiration for their tracks. Several AI-based applications (such as DALL·E, Midjourney, or Runway) generate art from a natural language description, with research projects (e.g., magenta, https://magenta.tensorflow.org/) openly inviting people to participate in the further development of such tools.
Due to its sophisticated pattern recognition, AI is also increasingly being used to generate advice in corporate strategy development (Herse, 2021) as well as in R&D departments and the innovation industry (Cockburn et al., 2018). And we are only seeing the beginning now that BloombergGPT has entered the stage: a 50-billion-parameter language model trained largely on proprietary professional data as well as additional highly curated general-purpose datasets (Wu et al., 2023). It is a tool that can easily move from mere analysis to more active change suggestions, such as the allocation of finances and other resources.
In the leadership domain, think, for instance, of an AI's support in applying charismatic leadership techniques (Antonakis et al., 2011). Because these techniques are specific and well-defined, an AI tool could easily be built (or is already freely available when using an API language generator such as the GPT-based tool offered by OpenAI) to provide leaders with inspiration about how to address their followers more charismatically (cf. TED and Twitter in Tur, Harstadt & Antonakis, 2023). As such, an AI-based application could help leaders not only to envision sound strategic changes but also to translate these into concrete communicative actions, which could then even be personalized for different audiences (e.g., efficient vs. analytical thinkers; Carton & Lucas, 2018).
To conclude, AI-supported leadership is already taking shape and will quickly spread among companies across the world. In a sense, the COVID-19 pandemic paved the way: It increased people's openness toward these technologies but, more importantly, it compelled the digitization of many (previously offline) leadership communication acts, which AI could then process. That said, in the NEW of leadership, the human being still remains in charge of decisions and execution. For most leaders, this means they can stay in their technology-related comfort zone (Haesevoets et al., 2021). Yet, history has shown that technology inevitably takes over wherever it is more efficient, more predictable, less expensive, and lower risk. Thus, we argue that our future is one where AI takes over leadership roles.
The NEXT of Leadership
The NEXT of leadership is that AI will not only support but substitute human leadership: completely assuming authority over the task-, relations-, and change-oriented functions that people prototypically associate with human leaders. Here, consider (re-)watching the movie “Her”, which was released a decade ago. As the movie makes clear, interacting with an AI-based avatar would not be a sci-fi oddity but an extension of how we already use smartphones. While we initially might still interact with it via text-based chat (cf. ChatGPT embedded into Microsoft's Office 365, Bing, and Azure products; Bass, 2023), sooner rather than later, interaction will occur even more naturally via voice (i.e., natural language; think Siri, Alexa, Cortana, or Google Assistant), and we will build a relationship with a companion that remembers what we inquired about (cf. the Replika app). And once we move from much-hated video calls to the more immersive Virtual Reality (VR) experience, the AI's physical forms (i.e., avatars) will be indistinguishable from our human colleagues’ avatars.
Aside from mimicking human beings’ appearance, one of the main game changers will be that AI leaders will not only be able to communicate better but in doing so, actually address followers’ fundamental psychological needs. That is, AIs will be able to cater to human nature and not just humans’ instrumental value for the business.
Self-Determination Theory (SDT) has identified three such basic psychological needs: the need for autonomy (i.e., the striving to feel ownership and freedom), the need for competence (i.e., the striving to feel a sense of mastery over the environment and to develop new skills), and the need for relatedness (i.e., the striving to feel connected to others) (Deci & Ryan, 2000; Van den Broeck et al., 2016). According to SDT, meeting these needs supports human flourishing, with the consequence, at work, that employees are more motivated to engage and more satisfied (Deci & Ryan, 2000; Gagné & Deci, 2005).
The litmus test for AI leadership is therefore whether it can cater to these fundamental needs by effectively performing task-, relations-, and change-oriented leadership behaviors. On that point, we invite you to consider whether and how AI leadership reaches parity with—or possibly even outperforms—human leaders (McFarlin, 2019; McKinsey & Company, 2019). To make the thought experiment fair, though, compare an AI to the average leaders you have come across, rather than to the greatest human leadership you can imagine.
Fulfilling employees’ need for autonomy. Because algorithms can provide a lot more (individualized) real-time information transparency than human leaders, AI leaders can make employees feel more in control of their work—and thereby enhance employees’ intrinsic motivation. For instance, AI leaders can increase employees’ experiences of autonomy by sending messages that inform them about potential actions while also giving employees the opportunity to substantiate their own positions (i.e., the AI can ask a lot of challenging questions without annoying the employee). In short, the transparency and voice options that AI leadership affords are likely to positively affect employees’ sense of autonomy. Indeed, scholars have demonstrated that humans have a “theory of machine” that inclines them to think of algorithms as fair, fast, and unbiased—and consequently, they may prefer to listen to an AI over a human (Logg et al., 2019). We argue that this “autonomy advantage” is not just a daily reality for people working in gig economy jobs (who are, in many cases, already being led via dashboard; Safak & Farrar, 2021) but will soon be the new normal in all kinds of organizations. After all, human leaders are often experienced as a bottleneck for getting accurate and real-time information, due to their perceived reluctance to justify decisions, let alone have them challenged (Duggan et al., 2020).
Fulfilling employees’ need for competence. A leader is expected to teach and guide employees (Kniffin et al., 2019) by honing their sense of mastery. The transparency guaranteed by an AI leader can empower employees to try out new ways of working and directly see the results of their actions, potentially in comparison to others (Velez et al., 2018). In other words, an AI leader can use automated and individualized feedback systems to engage employees in real-time learning, thereby sharpening their sense of competency and efficacy. In fact, we expect AI leaders to outperform their human counterparts in this domain once the former's feedback is paired with gamification elements (e.g., challenges that are adjusted to skill levels), status symbols (e.g., badges), and appreciative motivational communication—all at a level of granularity, personalization, and immediacy that is difficult for a human leader to match. Notably, humans seem to be more open to negative or developmental feedback provided by an AI (Yalkin et al., 2022), indicating that many of the harmful internal incompetence attributions that may keep employees from learning when feeling monitored by human leaders might be overcome through AI leaders (Raveendhran & Fast, 2021).
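The gamified, skill-adjusted challenges mentioned above follow a simple adaptive logic that can be sketched as follows. The target success band (60 to 80 percent) and the single-step adjustment are hypothetical design choices for illustration, not parameters of any specific product.

```python
# Illustrative adaptive-difficulty rule for gamified feedback (assumed values).

def adjust_difficulty(current_level: int, recent_outcomes: list,
                      target_low: float = 0.6, target_high: float = 0.8) -> int:
    """Raise the challenge level when the recent success rate exceeds the
    target band, lower it when below, and hold it steady inside the band,
    keeping tasks in a motivating 'sweet spot' for the employee."""
    if not recent_outcomes:
        return current_level
    success_rate = sum(recent_outcomes) / len(recent_outcomes)
    if success_rate > target_high:
        return current_level + 1
    if success_rate < target_low:
        return max(1, current_level - 1)  # never drop below the easiest level
    return current_level
```

Under these assumptions, an employee succeeding at 9 of their last 10 challenges would be moved up a level, while one succeeding at 5 of 10 would be moved down—mirroring how a human coach might calibrate stretch assignments, but at a speed and granularity no human leader could match.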
Fulfilling employees’ need for relatedness. The last bastion of leadership tasks is speaking to employees’ need for relatedness. In this respect, in a future where much of the interaction will occur digitally, the avatars of algorithms will ultimately be indistinguishable from those of real humans. In fact, their communication style already passes the Turing test (which probes whether humans can identify a text as written by a machine or a human; cf. Else, 2023). That said, even in an offline world, the human-like features of machines (e.g., an animated face, a dynamic voice) already encourage us to project human-like qualities and engage in bonding (Moussawi et al., 2021; Sheehan et al., 2020). Given that algorithms can analyze patterns much better and make logical jumps that go beyond linear relationships (Wenzel & Van Quaquebeke, 2018), AI leaders can, for example, facilitate social connections at work by introducing colleagues to each other who have a high chance of getting along (Zhu, 2017). Not yet convinced? Then consider evidence from areas such as psychotherapy or sports: fields that underwent an analogous development of first doubting and then utilizing the “empathic” abilities of AI. For example, research has demonstrated that AI-powered chatbots, such as Woebot Health, effectively help treat depression or anxiety. The evidence has even convinced some insurance companies to expand their coverage to include chatbot-assisted treatment because of the chatbots’ proven ability to make patients feel understood and supported. In sports, AI coaches not only easily outplay human coaches, given their ability to analyze data from ongoing competitions and derive actionable advice (Schmidt, 2021); they have even convincingly delivered motivational speeches (Nichols, 2021). Through tailored rhetoric, they can foster connection and create a sense of belonging.
Naturally, these developments can be extrapolated to the leadership field—to the point that we may soon see human followers describe their relationships with AI leaders in glowing terms.
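The idea of an AI leader introducing colleagues who are likely to get along can itself be sketched as simple similarity matching over interest profiles. The profiles, weights, and cutoff below are invented for illustration; a real system would learn such matches from far richer behavioral data.

```python
# Illustrative colleague-matching sketch via cosine similarity (toy data).
import math

def cosine_similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse interest profiles."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def suggest_introductions(profiles: dict, cutoff: float = 0.5) -> list:
    """Return pairs of colleagues whose interest profiles are similar enough
    that an AI leader might suggest introducing them to each other."""
    names = sorted(profiles)
    return [(x, y) for i, x in enumerate(names) for y in names[i + 1:]
            if cosine_similarity(profiles[x], profiles[y]) >= cutoff]
```

With toy profiles such as two chess-and-hiking enthusiasts and one opera fan, only the first pair would be suggested. The design choice worth noting is that the matching logic is transparent and auditable, which matters once such suggestions shape real workplace relationships.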
In sum, there should be little doubt that people-oriented leadership can and will be executed by AI. Those who remain skeptical should consider the exponential curve of technological advancement in this field. ChatGPT, released in November 2022, was already quite powerful with its 175-billion-parameter model; just a few months later, GPT-4 took it to an entirely new level with advanced image processing, programming, and language processing skills. Natural language processing (NLP) models, such as those behind ChatGPT, have made impressive jumps in just a few years—and this only represents the developments known to the public. At the time of writing, Google has launched its first robotic applications relying on even more powerful NLP models (e.g., PaLM) (Wheatley, 2023), and there seem to be even more advanced applications at its disposal.
Generally, we have only seen the very beginnings of what is referred to as the metaverse, shifting our face-to-face world even more online (explored via VR goggles such as Meta's Oculus Quest or Apple Vision). Given the exponential development, what we outlined above may prove to only be the tip of the iceberg: AI's potential reach into leadership could be even deeper than we can currently imagine—and come even sooner.
Implications for our Field
If the NEXT of leadership increasingly employs AI leaders, then what remains for human leaders and the scholars who are supposed to educate and study them? First, let's face it: We will very likely need fewer human leaders, particularly at the lower and middle management levels, because their leadership functions can easily be taken over by AI. This may even be for the better, as fewer and fewer people want, or are willing, to take on leadership roles (Zhang et al., 2020).
But will we still need human leaders in a broader sense? Our answer is yes—but those leaders will be different in nature: They won’t be leading the humans within an organization but the machines that lead the humans. This new crop of leaders therefore needs to understand how humans AND AIs operate. They become stewards of an extremely powerful technology that will govern the fate of their business and the humans within it. If humanity is to thrive, these leaders will need to resist the temptations of ever more powerful machines (Köbis et al., 2021) and withstand the competitive dynamics that arise when others have similar powers at their disposal (Grant, 2023). Naturally, this scenario has important implications for the leadership research and education of the future.
Implications for Leadership Research
Algorithms follow probabilistic optimization patterns, much like the empirical research of the different fields at a business school. The AI, like a scientist, looks for answers to questions like: How much should be spent on advertising to best drive consumer behavior? How can operations be optimized so that supply is guaranteed while warehousing is minimized? How should one communicate to best motivate employees and thereby drive performance? However, any kind of unsupervised pattern recognition initially often presents a black box, limiting our human understanding of the logics involved. If humans are to continue increasing their understanding of the world—and, by extension, retain some control over the steering logics—leadership research needs to evolve.
First, one concrete approach lies in research that feeds or co-creates logic with AI (cf. Lee et al., 2022). On the one hand, this will require leadership scholars to partner with computer scientists—and to speak the same language. This entails that leadership scholars at least understand the fundamentals of the algorithms with which they are working, or they risk becoming overdependent on technology and underdeveloped in their reasoning capabilities (Bauer, 2022). On the other hand, it will also require leadership scholars to summarize their assumptions about leadership in a more formalized way: Causally identified predictions in leadership theorizing with meaningful measurement units are needed to build an evidence-based foundation for programming the algorithms responsible for leading employees. To reach this goal, leadership scholars have to seriously address critiques of former research that often conflates cause and effect when studying leadership styles (Fischer & Sitkin, 2022) and finally answer calls to get to the behavioral level of leadership (Banks, Woznyj & Mansfield, 2021; Hemshorn de Sanchez et al., 2022; Van Quaquebeke & Felps, 2018). These debates are not exercises in academic navel gazing but crucial to remaining relevant in a future in which AI is omnipresent.
Second, we urge leadership research to train AI to broaden its scope: to look at the implications not only for workplace performance but also for personnel well-being. Otherwise, algorithms may encourage abusive leadership behaviors for the sake of short-term benefits (Tröster & Van Quaquebeke, 2021), despite the potential for human suffering. The AI leader may, for instance, recognize that certain types of followers are particularly likely to intensify their work efforts when being abused and kicked down (Gerpott & Van Quaquebeke, 2022). Such pattern recognition may reinforce biases or ill-suited behaviors toward certain employee groups. One might argue that an AI would be programmed to consider long-term effects, but it could counterbalance those concerns against turnover potential. Who is to say that the algorithm will not find a “tolerable” balance between exploitative leadership and employee churn? It falls to us leadership researchers to step out of our simplistic theories and instead proactively find answers as to how such trade-offs should be modeled.
Relatedly, third, while initial evidence indicates that humans are less likely to follow the unethical instructions of an AI (versus human) leader (Lanz et al., 2023), humans also seem to experience less moral outrage over algorithmic discrimination than over human discrimination (Bigman et al., 2022). Accordingly, there is a risk that decisions of AI leaders that one “has to follow” will be used as an excuse for unethical behavior, with implications for the potential weakening of collective action to address systematic discrimination and other societal issues (Bigman et al., 2022). A promising research area will thus be to study how we can prevent the “diffusion of responsibility” for human well-being when AI leaders are in charge.
Fourth, the general topic of ethics within organizations will need a new, invigorated focus (De Cremer et al., 2022). Sure, a first endeavor may include a manifesto akin to Asimov's science-fiction laws for robots (“A robot shall not harm a human, or by inaction allow a human to come to harm. A robot shall obey any instruction given to it by a human. A robot shall avoid actions or situations that could cause it to come to harm itself.”). At the heart of the matter, however, lies the question of whether ethics must remain under the full control of humans, or whether it, too, is a task that can be directly or indirectly delegated to machines, which could then effectively govern decision-making based on centrally formulated principles. This topic will be difficult to navigate, as enabling machines to act as our moral agents may come with unintended consequences: The machine might judge that, if humanity is to survive, it needs fewer humans; that, in the case of an organization, the business model is hurting the human condition (by whatever logic) and thus needs to be shut down immediately; or, even more extreme, that human leadership cannot be trusted to make unbiased decisions and thus needs to be “turned off.” Evidently, who is to oversee ethics is a debate we need to have. Now.
Fifth, we call for research that investigates how we can measure human leaders’ AI literacy. As we expect future human leaders to need to understand both how humans AND how AIs operate, this will require validated new measurement tools beyond self-assessments (e.g., Wang et al., 2022). Such objective measures could, for example, be used to find out whether leaders are able to “provide the right prompts” to the AI leaders they are supposed to supervise (cf. prompt engineering; Brown, 2023). In other words, there is a need to assess whether leaders are sufficiently educated on the technological side of things; accordingly, scholars should find ways to objectively measure whether human leaders understand the ins and outs of AI and how it potentially benefits or harms employees.
To conclude, the time for convenient and “interesting” studies is over (Pillutla & Thau, 2013; Tourish, 2020). We must commit to a serious review and expansion of our research foci, methods, and conceptualizations if we are to play a role in how AI shapes the future of leadership.
Implications for Leadership Education
The presumed automation of regular managerial tasks would imply a paradigm shift in what business schools teach going forward. AI tools will likely take over certain jobs, as we can already see in advertising (e.g., agencies allocating budgets to different channels based on AI predictions of the best returns), in finance (e.g., robo-traders executing much of the daily trading), and in operations (e.g., mathematical models increasingly governing ordering, sales, and more). Yes, there will still be a need for (a small group of) people who are educated in the details of marketing, finance, accounting, operations, and so on. But that group will consist of business engineers rather than business managers, honing AI management tools rather than directly lending their knowledge to businesses.
That leaves the topic of people leadership. Currently, AI is an exciting tool to play around with and perhaps incorporate into the classroom (e.g., Mollick & Mollick, 2022). Some institutions have started using VR goggles in leadership development programs to train future managers in (public) speaking, drawing on machine learning to provide feedback and continuously generate new scenarios (e.g., Ovation). Smartphone apps such as Bunch aim to make leaders more transformational on the job by tailoring advice to the leadership style and goals of the user. Universities are beginning to invest in such amenities—if only to signal that they are in touch with the latest trends. It won’t be long, though, before business schools seem outdated if they aren’t engaging with the latest AI-supported leadership development tools: not simply because they lack coverage of an increasingly mainstream technology, but also because traditional lecture formats will seem even more dated by comparison.
When envisioning the future of leadership development, business schools should make it a point to help their students develop an in-depth digital literacy that allows them to use AI support in their leadership tasks. In doing so, we should also consider AI's capacity not only to directly improve leadership by offering concrete suggestions on how to handle certain leadership tasks, but also to help aspiring leaders learn more effectively. First, many AI-based leader support tools collect feedback (either directly from the leader or from other sources) about the effectiveness of the implemented solutions (e.g., identifying effects on reciprocal communication, on follower performance, or on other KPIs). Beyond supporting the algorithm's continuous evolution, such feedback gives leaders a chance to reflect on their behavior. Research underscores that timely feedback—and the resulting self-reflection—is an effective tool for developing leadership skills (Densten & Gray, 2001; Dormody, 1996; Nesbit, 2012). Second, (aspiring) leaders can learn from the AI's recommendations (i.e., immediate on-the-job feedback, for instance, regarding how to hone one's email communication) and then proactively adapt their future behavior. Evidence from the sports domain suggests that such recommendations can tremendously accelerate learning because the AI can simulate many variations and then generate advice that falls outside most humans’ comfort zone. For example, one AI program taught human sailors new techniques that initially felt counterintuitive but actually worked (Agrawal et al., 2022). Just as the sailors retained the new strategies and broadened their behavioral repertoire, we believe AI could similarly expand leadership development.
This brings us to one of the core pedagogical implications for future leadership development. The most important task is to develop future leaders capable of ensuring good ethics in the face of ever more powerful machines. Evidently, algorithms themselves do not give employees unethical instructions in order to intentionally harm or discriminate against certain people; rather, they reproduce biases existing in the data they were trained on. If human managers do not comprehend at least the basics of AI systems (and how developers may misuse them to impose their values on others), they cannot meaningfully provide future oversight and direction in their work environment. This risk is pressing: at the time of writing, many famous tech leaders had just signed a letter calling for a pause in the training of AIs until their ethical risks become more manageable again (Loizos, 2023). Accordingly, future leaders must learn how to evaluate AI technology from an ethical perspective, because simply offering algorithmic transparency seems insufficient (Leib et al., 2023). This means that ethics will remain a crucial domain of leadership education; if anything, it seems imperative to put it even more center-stage. Or, to use the words of ChatGPT when asked about its moral convictions: “As an artificial intelligence language model, I am not capable of being ethical or unethical. I am programmed to provide helpful responses to users based on the input I receive, but I have no moral or ethical values or beliefs of my own. Ultimately, it is up to the humans who design, develop, and use me to act ethically in their interactions with me and in their use of the information and knowledge I provide.” (Authors’ own ChatGPT interaction, 2023)
Based on this notion, we hope that the longstanding discussion of whether and how business schools can humanize leadership (Petriglieri & Petriglieri, 2015) will regain momentum. Such humanization and reconsideration of ethical principles needs to be interdisciplinary: metaphorically, the needs of humanity (psychology, sociology, biology, medicine, and philosophy) should be in dialogue with business (the profit motive in a competitive world) and engineering (the curiosity about how much machines can do). In this way, we can hopefully maintain a “human-in-the-loop” pattern (Grønsund & Aanestad, 2020) whereby human leaders still (co-)generate a ground truth against which to assess algorithmic leadership and potentially adapt the underlying AI. Students need to develop a digital backbone in order to stand their ground: against the technology itself when it provides ethically questionable advice (e.g., firing certain employee groups because they underperform, Lanz et al., 2023); against engineers who see only the opportunities of the machine (Köbis et al., 2021); and against a multitude of consultants who want to integrate ever new technologies without considering their broader impact.
The question is no longer whether AI will play a role in leadership; the question is whether we will still play a role. And if so, what role that might be. It's high time to start that debate.
Acknowledgments
Special thanks to Mark Beall for his provocative keynote at the 2022 New Directions in Leadership Research (NDLR) hosted by the University of Virginia Darden Business School and his subsequent feedback on our manuscript.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
