Abstract
Faculty use of technology for teaching has been a part of business education for many years, but there is no doubt that recent advances in artificial intelligence (AI) have ushered in a period of major disruption in how and what we teach that is likely to continue for some time. To stay relevant, educators must embrace AI-induced change, yet the implications of such a stance are manifold. A central question is how business education can stay human-centered in a world increasingly driven by AI. This Curated collection of essays looks at this question from many different perspectives, encouraging faculty to cultivate a mindful relationship with AI and to see AI adoption as a change management initiative that brings both risks and rewards, and offering practical examples of how AI can be incorporated into business curricula.
Introduction
Cynthia V. Fukami and Aimee L. Hamilton
This curation offers a companion piece to another recent curated contribution (Kulkarni et al., 2024). While that curation focused on research, we consider herein the implications of artificial intelligence (AI) for teaching and learning. Our orienting concept, as captured in our title, borrows from design thinking, and is intended to remind us that business education, regardless of the tools employed, must remain human-centered (Lerman, 2017). To be human-centered means to have empathy for the wants and needs of the key stakeholders, such as faculty, students, and staff, who are affected by the technological innovations we introduce. In business education, humans and their development are at the heart of what we do and should be our north star; we should not innovate for the sake of innovation. While some educators have rushed to immerse themselves in the new tools, techniques, and toys that are available, others have been reticent or downright resistant to incorporating AI in their teaching. Whether one is an early adopter, a laggard, or somewhere in between on the adoption curve, staying true to that north star is challenging as the AI landscape continues to shift beneath our feet and is expected to do so for years to come (Orf, 2024). As AI innovations propel society quickly toward the inevitable "Singularity," the point at which AI will surpass the intelligence of all humanity (Vinge, 1993), how can any of us, let alone educators, stay human-centered? It is hoped that, like Tron,1 the Singularity will "fight for the users." In the meantime, this curation explores how educators might steady themselves on this unstable ground and maintain a human-centered focus when it comes to using AI innovations.
Faculty use of technology to assist in training, research, and teaching has been part of higher education, and business education, for many years. For example, the MED Division of the Academy of Management has been running technology-based sessions, such as symposia and professional development workshops, for over 25 years. In those earlier days, the big technological issue was the introduction of personal computers, and sessions primarily involved computer simulations to enhance learning and the development of online education.
Today, our attention has turned to the Fourth Industrial Revolution (4IR; Schwab, 2017) and AI, "computing systems that are able to engage in human-like processes such as learning, adapting, synthesizing, self-correction and the use of data for complex processing tasks" (Popenici & Kerr, 2017, p. 2). The present curation warrants additional definitions before going any further. Artificial narrow intelligence applies AI to limited tasks for which a computer is more efficient than a human, such as email spam filters, while Generative AI (GenAI) can create new content (Kalota, 2024). Relatedly, Machine Learning (ML) is a form of AI that "allows the computer to learn automatically without human intervention or assistance" (Jaggia et al., 2023, p. 386), and a Large Language Model (LLM) uses AI to predict and suggest the most likely next word in a sentence (Kalota, 2024). You may have noticed an LLM at work when you composed a message in recent updates of your email platform. Another type of AI, Artificial Neural Network (ANN), mimics the structure of the human brain and can therefore quickly identify highly complex relationships (Kalota, 2024).
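The next-word idea behind LLMs can be made concrete with a toy sketch. Real LLMs use artificial neural networks trained on vast corpora, but the prediction objective is the same; the tiny corpus and the `predict_next` function below are our own illustrative assumptions, not drawn from any of the essays:

```python
from collections import Counter, defaultdict

# A tiny toy corpus standing in for the vast text that real LLMs train on.
corpus = (
    "the student writes the paper . "
    "the student reads the paper . "
    "the professor reads the paper ."
).split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most likely to follow `word`, per the corpus counts."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "paper": it follows "the" most often here
```

A production LLM replaces the raw counts with a learned probability distribution over an entire vocabulary, conditioned on far more context than one word, but the "suggest the most likely next word" behavior is the same.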
Perhaps the most impactful development of the lot so far is ChatGPT, a type of GenAI that uses both an ANN and LLM to generate new content based on a prompt given by the user (Kalota, 2024). The burgeoning use of ChatGPT has made an impact unlike any other in our adult lives. Technological changes of this magnitude disrupt the status quo and bring both opportunities and threats (Schumpeter, 2015). Our collective generations have experienced at least one other technological revolution: The Third Industrial Revolution or the introduction of computers (McGinnis, 2018). While disruption occurred (modems, computer simulations, and online education, to name a few), it seemed like a minor earthquake, perhaps a three on the Richter scale. However, the notion that the computer could think for us (albeit after we create the prompts), seems bigger, more like a six on the Richter scale. In our view, it is the generative aspect of GenAI that makes it more compelling and confounding. The notion that the superintelligent Singularity will think better than us within a few years is off the scale, and its implications are hard to comprehend. Can computers think? Should they? Are computers partners in the educational enterprise, or are they enemies threatening our very existence? Will GenAI create chaos or be a useful tool? As has been asked many times in popular culture, will the future be dystopian or utopian or somewhere in the middle? In this vein, we haven’t yet created much useful, research-based information on how to effectively use GenAI in teaching. It is the purpose of this curation to start filling that gap by keeping our north star in mind: How do educators avoid the trap of reacting to the tool itself and forgetting our obligation to humankind?
While we recognize that some early, dire predictions about AI were a bit overstated, namely the prediction that 47% of jobs were at high risk of automation (Frey & Osborne, 2013), there is still concern expressed almost daily about the possible harm from AI at a very macro level. How do those concerns impact our work as business educators? Others have written about the importance of technological literacy for faculty (Allen, 2020), and, in this journal, of the potential impact of AI on our research (Kulkarni et al., 2024). We have also written about the impact of the 4IR and AI on the future of work and organizations (Allen et al., 2022) and our efforts to create a course to address these issues with our students as they prepare for their careers in this landscape. To be sure, there already are, and will continue to be, changes in the content that we teach in the various fields in business schools alongside changes in the way we educate.
In this curation, we provide food for thought on the implications of AI for us as educators, the way we teach, and the way students learn. The impact on management content straddles different issues and levels of analysis within the content of our courses, for example, global cooperation, global economies, government oversight, the evolution of the labor market, the birth of new industries, numerous topics on how we lead employees effectively, and the multiple ethical issues involved in the deployment of AI before we have developed the wisdom to use it. How can we have answers when we do not yet know the questions and the technology in question continues to develop rapidly, almost daily?
At the time of this writing, GenAI has indisputably taken hold in higher education. Faculty and staff are scrambling not only to determine how to harness its potential (Mills, 2023) but also how to police its use (Supiano, 2023). Current controversies concern not “whether” but “how much” GenAI is acceptable for tasks and projects that have heretofore been the domain of critical thinking and embodied knowledge, such as writing term papers and preparing lesson plans. By the time this article appears online, answers to those issues may have already been institutionalized. New, thornier questions will undoubtedly arise to take their place. Thoughtful engagement with these issues and careful consideration of the pros and cons of each new GenAI tool are important; however, educators must first see these ongoing disruptions through a clear and unclouded human-centered lens.
To this end, we have organized this Curated collection into three “sections,” bringing that human-centered lens to bear on three specific domains. The first section turns the lens on us, the educators, with three essays that bring empathy to how we think of our relationships with GenAI and our students, and our need for upskilling. The second section widens the lens to take in the business school as a whole, considering AI adoption as a change management problem and providing a clear-eyed exploration of AI-related uncertainty and risk and why new risk assessment tools are needed. Finally, the third section brings empathy into the classroom, with thoughts from authors who have developed ideas that represent some possible solutions for educators. These include GenAI-based tools that have already been integrated into two areas of business education: ethics and negotiation, albeit without strong evidence of effectiveness as of this date. The authors in this section report on efforts to provide students with a sense of agency and confidence about a future immersed in AI. They also provide specific examples of developments in using GenAI to teach negotiation more effectively and offer ideas about using our tools as scholars to make data-based assessments of its effectiveness.
The impact of the rate of AI-induced change hits our industry head-on when we think of our duties as educators of the generations to follow. If a computer can write final exam papers, what does that mean for learning? In the field of higher education, we've evolved from the technology of the pen to the printing press, to the typewriter, to using a word processor, to a computer being an author. When does the tool become the ideator? The decision maker? In his classic work on wisdom, Aristotle (1941) said that the use of a tool itself illustrated "techne," or the craft involved in a task, and the ideators illustrated "episteme," or its theoretical knowledge. In our previous work on management education (Wittmer & Fukami, 2017), we have argued that for business schools in particular, our goal should be "phronesis," or the wisdom to understand when and where to use the tool in particular situations. We suspect that to be the case with GenAI. On the other hand, while our own expertise falls within the walls of the Business School, we look forward to seeing what colleagues in other disciplines are learning about the promise and perils of GenAI in higher education.
The essays that form this curation challenge us to be mindful and intentional when evaluating GenAI but also to use the tools from our discipline to support our decisions. In other words, how we approach AI gives us the opportunity to practice what we preach in the management classroom. A common theme among the contributions is that the AI landscape in higher education is rapidly and continually evolving. Many universities, including our own, espouse a student-centered approach to higher education in general and business education specifically. What does this mean as GenAI innovations keep emerging and increasingly permeate teaching and learning? What role can we as management professors play in wisely guiding our students? Is it an opportunity for us to learn with our students, as Allen suggests, or even from them, as Wittmer, Allen and Aghdasi model for us? Our intention with this curation is to empower a human-centered orientation to better adapt to AI-enhanced business education and to begin a legacy of evidence-based understanding to guide our way.
Section 1: The Need for Intentionality
How Can Business Educators and Students Harness a Symbiotic Relationship with GenAI That is Mutually Beneficial, Intentional, Conscious and Mindful?
Christine Rivers and Anna Holland
The adoption of GenAI in business education can be divided into four phases: chaos, order, acceptance, and symbiosis. In this essay, we propose a new perspective to reframe our relationship with GenAI. For this relationship to flourish, we need to move beyond acceptance and towards symbiosis between humans and technology: an intentional, conscious, and mindful process that is deeply human-centered. This process is equally important for business and management educators and students alike. We understand intentional as the motivation behind an action (Goldstein, 2013), conscious as the knowing and experience of mental states (Thomasson, 2006), and mindful as the ability and practice of present-moment awareness of the body, mind, feeling, and mind-objects (Purser & Milillo, 2015).
In late 2022, GenAI platforms such as ChatGPT disrupted the learning and teaching environment in business and management education. This phase of disruption combined excitement of the new with fear of change and uncertainty surrounding the impact of AI on our profession, as noted below by Fischbacher-Smith et al. We witnessed trial-and-error behavior by both students and academics in the use of GenAI. As with most situations that appear out of control and carry a heightened level of fear, we looked to educational leaders for guidance and to bring order into the chaos through regulation and policies. Webinars, workshops, and guidance documents flooded our inboxes, yet they did little to calm the fears some of us felt. The answer from educational leaders has been to use GenAI platforms in a responsible manner (De Cremer & Kasparov, 2022) and to align with the principles of academic integrity, as Morse and Leben note below and others (Bin-Nashwan et al., 2023; Cronan et al., 2018) have suggested. In this context, responsible refers to the ethical, transparent, and accountable use of AI technology (Mikalef et al., 2022), conceptual constructs that often lack the actionable recommendations we seek or the necessary organizational change to enable it (see Caporarello below).
During this first phase, alongside many others, we conducted studies with students and academics to understand the lived experiences of GenAI in learning and teaching. We found that students and academics predominantly used GenAI to support learning and speed up output generation. Both shared the understanding that using GenAI to produce an assessment or publications would be unethical. Intentions to use GenAI varied, but all intentions had one aspect in common: they were focused on producing academic work more effectively (achieving the goal) and efficiently (how well the goal is achieved). We could describe this approach as outsourcing the slow and painful elements of learning and writing, namely curiosity, creativity, and critical thinking (Rivers & Holland, 2023), while still, at least conceptually, using GenAI responsibly. As educators we increasingly struggle to comprehend the new reality of reduced student engagement and attendance, and we need to think of new ways to design student-centric curricula with GenAI in mind (see Brady & Fellenz below). It is easy to let GenAI bear the brunt of the blame for a shift towards a goal-oriented, somewhat mindless, and passive approach to learning fueled by demands of immediacy and instant gratification. We suggest, however, that this could not be further from the truth. The real issue is that we have accepted this new reality without examining our relationship with GenAI in this conundrum: is AI a partner, a coach, or something else (see Brett below)?
Learning is understood (conceptually and experientially) as an active, dynamic, and conscious process that takes place in the present moment (Rivers & Kinchin, 2019). There is beauty in taking time to sit, to focus, and to produce intellectual work in the present moment. Our fear-based reaction to the rise of AI in business education left some paralyzed, while others embraced GenAI by giving it permission to do the learning (or research) to a greater or lesser extent. The danger is that we are losing human qualities such as contemplation and concentration, and the art of setting intrinsically meaningful intentions that serve our inner goal of growing and developing. Consequently, our experience of learning is at risk of moving from an intentional, conscious, and mindful approach to a goal-oriented and mindless one.
By examining our relationship with GenAI in business education, we have an opportunity not merely to advance learning, teaching, and scholarship but to support students in shaping their relationship with GenAI beyond the university. Such introspection starts with setting an intention, for instance to use GenAI when feeling “stuck” (Kulkarni et al., 2024). At an early stage of a creative output such as research, human centrality and agency have been recognized as crucial to the authorship process (Kulkarni et al., 2024), because they enable us to move beyond surface learning and into deeper inquiry (Dyer & Hurd, 2016). This allows a conscious and mindful way of working with GenAI that complements our cognitive skills of curiosity, creativity, and critical thinking rather than replacing them.
Employers still rate curiosity, creativity, and critical thinking as top skills (IBM, n.d.; Sebastião et al., 2023). These are human qualities that must be nurtured and developed. If we increasingly outsource these skills to GenAI, i.e., giving up our human qualities, then we train our mind not to use them and we easily lose the ability to make conscious and informed decisions. An intentional, conscious, and mindful approach will allow us to advance our practice and develop guidance suitable for learning, teaching, and scholarship in the future and move beyond accepting the existence of GenAI towards symbiotically aligning GenAI with our human qualities. New technologies will continue to arise and disrupt business education, and the ability to adapt and upskill is vital for future success (see Allen below). Developing a different way of interacting and coexisting with technology will enable us to equip our students to move skillfully from using technology because it exists and accepting it, to using technology symbiotically with intention, consciously, and mindfully. Consequently, we move from the phase of acceptance and a simple relationship between two entities to a symbiotic relationship, which becomes long-term, mutually dependent, and beneficial. This also requires ongoing exploration of our values and changing identity in relation to AI advances.
Learner-Specific AI Deployment in the Student-Centered Business School
Mairead Brady and Martin Fellenz
Recent advances in AI-supported general and educational technologies offer a valuable perspective on how to progress to a more student-centered business school. Despite the concept of student centricity having immense intuitive appeal, there is as yet no dominant or widely agreed definition of the term (Bremner, 2020; Starkey, 2017), nor consensus on how to operationalize it. Varying perspectives on student-centricity conceptualize it as increased student autonomy (Pineda & Ashour, 2023) or reduced instructor interventions, with students as active co-creators of the educational journey (Weimer, 2013). It often places the much-criticized measurement of student satisfaction, simplistically operationalized through student teaching evaluations (Calma & Dickson-Deane, 2020; Kornell, 2020; Trinidad, 2020), at the heart of the matter. In contrast, we see student-centricity as an unwavering strategic and operational focus (Fellenz & Brady, 2010) on student learning as the key value creation process across all parts of the business school, with self-responsible learners as active partners in the co-created educational process (Robayo-Pinzon et al., 2024).
The "student as customer" view is contentious (Gupta et al., 2025), so an important caveat is that student centricity differs from customer centricity: business schools should not treat their students primarily as customers. To do so would undermine the academic standing of business schools, detract from the developmental and collaborative nature of the student-instructor relationship, compromise educational quality, and limit academic independence and freedom (for further discussion see Calma & Dickson-Deane, 2020; Guilbault, 2016).
Nevertheless, the student-centric perspective owes much to the classic tenets of customer centricity (e.g., Drucker, 1954) and the rich vein of research and theory in this area (Fader & Toms, 2018; Fader, 2020; Gummesson, 2008; Shah et al., 2006; Sheth et al., 2000). Business schools can achieve student-centricity by adapting the five key steps of customer centricity (Kotler et al., 2024) and applying them, with AI support. The first step (student learning focus) is the adoption of an overarching philosophy that truly focuses on student learning as the central value creation process. As a second step (deep student knowledge), business schools need to fully commit to using research, data analytics and AI insights to learn about their students (see also Morse & Leben below). The third step (student segmentation) is to actively use the insights gained to recognize, delineate and select student segments—representing groups of students with distinct learning needs and approaches. The fourth step (customized learning journeys) requires them to customize their activities to personalize and maximize student learning within these segments. As a fifth step (data-driven feedback loop) business schools must constantly measure, monitor, and improve student learning through insight derived from active experimentation, collaborative design, and feedback from intensive data and AI analysis.
Business schools are slow to change (Fellenz et al., 2022; Caporarello, below), but we explore below how these five steps and AI can advance a customizable, scalable, student-centered model.
Step 1: Student Learning Focus
The focus on supporting student learning must be pervasive and shape all decision-making, employee behavior, workplace culture, educational design, and educational interactions in the business school. This requires a shift in both mindset and activities away from treating business school students as revenue sources or cash cows for universities (Hibbert & Foster, 2022), or as a distraction for faculty from their primary focus on research output (Berbegal-Mirabent et al., 2018). Faculty and administrators alike must emphasize students as active partners in education, and their learning as the key metric of success. This metric must gain priority in individual and collective decision-making, guide institutional, faculty and student conduct, and determine AI and technology deployment.
Step 2: Deep Student Knowledge
Business schools must combine expertise in learning design with data-driven insights from learning analytics. This requires organization-wide development of skills in using high-quality data for decision-making (Davenport et al., 2023; Thomke, 2020), and the willingness to challenge traditional perspectives and opinions. We can use sophisticated AI and data analysis to unlock the potential of already available yet underutilized datasets from virtual learning environments such as Canvas, Moodle or Blackboard (Boulton et al., 2018), aligned to relevant learning metrics on academic performance, engagement levels, formative and summative assessments, interactive classroom dynamics, and emotional and sentiment analysis. These can provide an immensely rich and deep knowledge base that offers the chance to both (dis-)confirm existing beliefs and explore new insights about students and their learning progress.
Step 3: Student Segmentation
Like businesses that use AI-based analytics to uncover nuanced insights about customer preferences, needs, and behaviors (Palumbo & Edelman, 2023), business schools can utilize AI to distinguish student learner segments that have distinct learning needs in close to real-time (preterm, in-term, post-term). There is value in using population-, cohort-, group-, and individual-level data to create the most meaningful and usable segments, possibly even segments of one. Data-informed subcohort segmentation allows programs and instructors to move away from the one-size-fits-all approaches of most current educational models and provides the basis for effective customization of learning journeys and learning support for specific segments in student-centric ways.
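As a hypothetical illustration of this kind of segmentation (the essay does not prescribe a particular algorithm), a minimal two-cluster k-means over invented per-student engagement metrics might look like the following sketch; the student identifiers, metrics, and numbers are all assumptions made for the example:

```python
# Hypothetical per-student metrics pulled from a virtual learning environment:
# (logins per week, average assessment score). All values are invented.
students = {
    "s1": (1, 55), "s2": (2, 60), "s3": (1, 52),
    "s4": (9, 88), "s5": (8, 91), "s6": (10, 85),
}

def dist2(p, q):
    """Squared Euclidean distance between two metric tuples."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans_two_segments(data, iters=10):
    """Minimal 2-cluster k-means with deterministic initialization
    at the two extreme points; returns {student: segment_index}."""
    pts = list(data.values())
    centroids = [min(pts), max(pts)]
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for name, p in data.items():
            i = 0 if dist2(p, centroids[0]) <= dist2(p, centroids[1]) else 1
            groups[i].append((name, p))
        # Recompute each centroid as the mean of its assigned points.
        centroids = [
            tuple(sum(v) / len(g) for v in zip(*(p for _, p in g)))
            for g in groups
        ]
    return {name: i for i, g in enumerate(groups) for name, _ in g}

segments = kmeans_two_segments(students)
```

In practice a business school would use many more metrics and a production clustering library, and would have to weigh the data-protection concerns raised later in this essay, but the principle of letting the data delineate learner segments is the same.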
Step 4: Customized Learning Journeys
The above-mentioned data can be used to validate and further tailor the student learning journey for identified student segments. Such evidence-based customization is already possible (Zhang & Aslan, 2021); however, the vision of personalized educational journeys, immersive learning environments with customized instructions for self-directed study, and AI-automated personalized feedback is only slowly arriving in business schools (Barros et al., 2023; Zawacki-Richter et al., 2019). Real-time analytics using natural language processing for classroom engagement can provide immediate feedback to professors regarding student engagement level, while AI systems can analyze individual student performance, learning styles, and preferences to tailor the course content and suggest additional resources. For assessments, AI-generated individualized feedback can replace traditional grading methods that often fail to maximize student learning. This AI-based approach could enable students to become more engaged and active participants in their educational journey and offers faculty the opportunity to tailor curriculum design, teaching methods, personalized feedback, and assessments to meet segment-specific learning needs and preferences in timely, effective, and increasingly efficient ways.
Step 5: Data-Driven Feedback Loops
Effective customization needs to be monitored, and educational design and delivery must be regularly reviewed and aligned with AI-driven recommendation systems, predictive analytics models, and academic outcome measures to adjust and optimize each step of the learning journey. Ongoing experimentation and innovation must be core features (Fader & Toms, 2018; Thomke, 2020) of a student-centric business school. A learning-oriented experimentation approach to testing existing and novel educational designs can provide a source of deep understanding of learners and their learning. Central to this is the effective measurement of student learning, achieved through the selection and operationalization of segment-specific learning metrics, which enable a data-driven approach to the development, deployment, and/or cessation of AI-supported learning interventions based on rigorous, ongoing efficacy analysis.
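The keep-or-cease decision driven by efficacy measurement could be sketched as follows; the segment names, pre/post scores, and gain threshold below are all hypothetical assumptions, not data from the essay:

```python
# Hypothetical pre/post assessment scores for one AI-supported learning
# intervention, recorded per student segment. All numbers are invented.
results = {
    "segment_A": {"pre": [52, 55, 60], "post": [70, 74, 78]},
    "segment_B": {"pre": [80, 82, 85], "post": [81, 83, 84]},
}

MIN_GAIN = 5.0  # assumed efficacy threshold, set by the program team

def mean(xs):
    return sum(xs) / len(xs)

def review_intervention(results, threshold=MIN_GAIN):
    """Keep the intervention for segments whose mean learning gain clears
    the threshold; flag the rest for redesign or cessation."""
    decisions = {}
    for segment, scores in results.items():
        gain = mean(scores["post"]) - mean(scores["pre"])
        decisions[segment] = "keep" if gain >= threshold else "review"
    return decisions

decisions = review_intervention(results)
```

A real feedback loop would of course use proper experimental controls and statistical tests rather than a raw mean difference, but the loop structure (measure, compare against a segment-specific metric, then keep, adapt, or cease) is what the fifth step calls for.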
Operationalization and Implementation Challenges
While AI's potential has captured attention across industries, its adoption often presents significant challenges. As Wixom notes, "AI creates no value unless it's used properly […] and you need the right capabilities to work with and manage it properly" (Eastwood, 2024: 1). Many well-resourced companies struggle with AI adoption, with their initiatives remaining tentative (Davenport & Mittal, 2023a, 2023b). The challenge of integrating AI into management education is daunting for business schools that are often underfunded and less technologically adept. Despite some innovative early AI adopters (Mollick & Mollick, 2023; Zawacki-Richter et al., 2019), most business schools remain stuck in the very early stages of technology adoption in teaching and assessments, reflecting both technological apprehension in higher education (Brady et al., 2019) and gaps in knowledge and skill in implementing student-centric education (Trinidad, 2020). The challenges of ongoing data security and personal data safety for students remain critical (Liu & Khalil, 2023; Zuboff, 2019) and are aligned with ethical, safety, and broader concerns about whether AI can or should be implemented (Bevan et al., 2024). As with previous technological advancements such as the internet, mobile phones, and social media, we see both avid supporters (Mollick & Mollick, 2023) and commentators concerned about the impact on students' actual learning (Lindebaum & Fleming, 2023).
Nevertheless, a student-centric learning philosophy is increasingly essential for business schools to remain relevant and to sustain their role in the management education eco-system in this AI era, where education will become more, not less, valuable. However, to achieve the promise of a truly student-centric approach to business education, business schools need to be more proactive, more determined, and more courageous in deploying AI-augmented educational approaches and models. They need to invest in their own learning capacity and their ability to constantly adapt to changes in student learning needs, relevant technology affordances, regulation and legislation, and other factors relevant for supporting student learning. We need to consider how to leverage the immense power and scalability of AI-supported data analysis to enhance the pedagogical processes that drive student learning. Rather than striving to look like a typical business school, institutions must focus on providing the best possible platform for management students' learning. Business schools that want to enhance their student-centricity must start by adopting student-learning-oriented philosophies along with data-driven practices that demonstrably enhance student learning. If business schools fail to take the opportunity to exploit AI for the benefit of students and their learning, other players, first among them the large technology companies (Zuboff, 2019), stand ready to step in and replace business schools as the primary suppliers of business education (Fellenz et al., 2022).
Navigating the Tech Revolution: Considerations for Academic Upskilling
Scott J. Allen
I recently attended an EdTech Conference in Ohio, and it was fascinating to watch each speaker (industry experts and novices alike) collectively stand in awe of the current situation around technologies enabling disruption (TED). It seemed all were trying to understand what the future holds and how TED, such as generative and agentic AI, will impact the landscape of education and business (Ackerman, 2025). “Adaptation” seemed to be the word of the day, and rightfully so. As academics in business colleges, we, too, need to adapt regardless of discipline. This will ensure our relevance as faculty and the relevance of our students (Allen, 2020). This brief piece highlights the importance of general and domain-specific tech literacy. I provide several concise and actionable considerations for academic upskilling (i.e., faculty development), which is defined as “the process of learning new skills or refining existing skill sets to enable employees to continue practicing with ease” (Gasteiger et al., 2021). Upskilling will be critical if business school faculty are to implement the suggestions throughout this curation.
General and Domain-Specific Tech Literacy
An essential first step in academic upskilling is determining the knowledge gap, and when navigating the shift to Industry 4.0, one consideration is focusing on general and domain-specific tech literacy. While it's easy to fall into a singularly focused conversation about AI tools (e.g., Claude, ChatGPT), the technology is not new, nor is it the only one enabling disruption. Several technologies are altering the landscape and converging to spark new business models, differentiators, and opportunities for all businesses. For instance, in its Big Ideas 2025 (ARK, 2025), ARK Invest highlights tech breakthroughs and trends such as AI agents, scaling blockchains, autonomous logistics, robotaxis, and smart contract networks as technologies to watch in the coming year. Unfamiliar with these technologies? You are not alone. The reality is that faculty need to have a level of tech literacy. As Stephan et al. (2017) aptly suggested: "All workers—from executives to interns—will need to learn much more about critical systems: Their capabilities and adjacencies, their strategic and operational value, and the particular possibilities they enable. In other words, individuals will need to become tech fluent" (paragraph 5).
A Concise and Actionable Roadmap for Academic Upskilling
With a clear understanding of the need for faculty upskilling, program architects can choose from several options that serve as the “how.” This section briefly explores several viable options for upskilling, including research streams, formal education, co-learning, professional development, practical experience, community/network building, and micro-learning.
Aligning faculty research incentives and agendas with upskilling for TED is a natural path for institutions that matches the flow of academic life. Built into the process of research is the activity of learning and upskilling (Miller et al., 2012). Faculty may even submit proposals to journals for a special issue on a topic of interest (e.g., the intersection of management education and extended reality). In addition to research, faculty may choose to engage in formal education in the form of degrees, certificates, or executive education. For instance, Cornell offers a certificate in blockchain, and UCLA Extension has a certificate in Blockchain Technology Management. Beyond formal education, several professional development opportunities exist via technology organizations, such as becoming a Microsoft Certified Azure AI Engineer Associate via Microsoft or completing Fundamentals of Google AI for Web-Based Machine Learning via Google. These learning opportunities also provide a chance to network with industry leaders. Likewise, faculty may gain hands-on experience with cutting-edge tools highly relevant to business (Oliveira et al., 2022).
Another path forward is co-learning with students (Haddock et al., 2023). Rather than serving as the expert, faculty orchestrate and design learning experiences where they learn alongside their students from industry leaders and experts (Wittmer et al., below, and Robertson, 2022). These partnerships provide opportunities to build one's network and may lead to other collaborative efforts such as research and co-teaching. A variation of co-learning is to serve as a convener of industry experts. Hosting a conference or a symposium is a wonderful way to spark dialogue and may help your institution become a center of gravity for conversation in your community/region. This approach may lead to faculty establishing networks as learning communities for students, colleagues, and external partners alike (Lieberman, 2000). You might even connect with alumni and community leaders engaged in the work (Stuber et al., 2011). Naturally, these upskilling activities can ultimately lead to designing and developing courses, course sequences, certificates, or degree programs (Allen et al., 2022). Finally, there is an opportunity to engage in daily, ongoing micro-learning activities via books (e.g., The Coming Wave by Mustafa Suleyman [2023]), podcasts (e.g., For Your Innovation), newsletters (e.g., The Neuron), and blogs by industry leaders (e.g., Peter Diamandis’ blog).
Potential Barriers to Faculty Upskilling and Strategies to Overcome Them
While there are many potential paths, there are three primary barriers to faculty upskilling: motivation, incentives, and time. First, faculty chart their own course, and those whose interests do not intersect with TED may need more motivation to re-orient their research agenda. Second, faculty members may prioritize other tasks, such as publishing in known domains or securing grants; time spent on “staying the course” may have a more immediate impact on promotion to full professor or other career objectives. Third, a core feature of faculty life is the freedom to choose one's research agenda and how to spend one's time, and disrupting this norm will require some convincing.
The quickest way to overcome these barriers is to incentivize faculty members with resources (e.g., grants, funding, sabbatical) to re-orient their focus areas. Even better, a well-designed upskilling initiative should be intentionally connected to multiple aspects of faculty life (e.g., mission/vision, sabbatical, annual reviews, hiring, tenure and promotion, accreditation, and faculty development grants; Pedota et al., 2023). Done well, upskilling becomes an ongoing and ever-present process for management educators and ensures that faculty maintain a positive relationship with AI (see Rivers & Holland above). By doing so, colleges of business will be well-positioned to work alongside industry leaders who need human talent and expertise as they work to stay on the cutting edge themselves. Naturally, upskilling and the incentives that motivate action are critical to driving participation and engagement among faculty. However, administrators and faculty alike must be thoughtful and intentional about their upskilling strategy and theoretical framework (Steinert, 2012).
Section 2: Readiness for AI Adoption
A Blueprint for Successful AI-Induced Change in Business Schools
Leonardo Caporarello
Readiness for AI adoption by business schools isn't just a buzzword—it's a blueprint for survival. AI technologies provoke high-magnitude and fast-paced change in institutions of higher education (Kroshilin, 2022; Rodríguez-Abitia & Bribiesca-Correa, 2021; Teker et al., 2022; Zawacki-Richter et al., 2019). This essay takes the stance that schools of business should not only adopt AI but also shape its future advancements and guide its development to enhance management practices, including fostering the creation of AI tools specifically designed to optimize business processes and support informed managerial decision-making.
Unfortunately, business schools struggle with both AI adoption and shaping an AI-driven future (Lee et al., 2019; Mikalef & Gupta, 2021; Pillai & Sivathanu, 2020). Institutions reinforce stability and consistency (Suddaby, 2010), and people resist change for many reasons (Kanter, 2012). Staying human-centered during such a consequential and large-scale initiative as AI adoption and leadership requires an organizational change management framework that accounts for the vagaries of the people who must embrace and implement the change. By AI leadership, we refer to a proactive and responsible approach by business schools in embracing AI—experimenting with various implementation strategies and sharing the most effective models and practices across the academic and business communities.
Despite its evident utility, the application of change management principles remains underexplored within the context of AI-induced change. Drawing on Kotter's blueprint for organizational change (1995), this essay proposes several practices for leading successful AI adoption in business schools.
Establish a Sense of Urgency: Articulate Persuasive Reasons for Adopting AI for Business Education
Like many organizational changes, the introduction of AI-related systems creates uncertainty and turbulence, fomenting resistance to change (Bartunek et al., 2006). To facilitate readiness for change, leaders must help key stakeholders, such as faculty, see the necessity and urgency of AI adoption (Chonko et al., 2002; Fuller et al., 2019). Faculty and staff, as well as students and external stakeholders, need to see and understand how AI-related initiatives will benefit them directly—and more critically, what stands to be lost if they do not commit to the change initiative sooner rather than later. The urgency of implementing AI in education can be supported by several justifications. First, AI streamlines faculty routine tasks, allowing professors to redirect their focus toward the core of learning and interactions with students (Rivers & Holland above; Guilherme, 2019). Second, AI brings benefits to faculty members, including career advancement and competitive advantages, which are difficult without mastering modern technologies (Ahmed et al., 2022; Hannan & Liu, 2021).
Form a Powerful Guiding Coalition: AI Adoption is Not a One-Person or One-Committee Job
Leaders play a major role in the success of organizational transformation efforts (e.g., Fernandez & Rainey, 2017), yet when it comes to AI adoption, they might be tempted to delegate it to a task force or, even worse, to make a lone chief technology officer accountable for the rollout. Instead, leaders would do well to stay involved and assemble around them a broad coalition of influential supporters who can keep the momentum for change going. Forming a coalition to advocate for AI adoption energizes other stakeholders and contributes to the successful implementation and acceptance of AI (Carvalho et al., 2022; Larson & DeChurch, 2020).
Bridging these diverse perspectives to create mutual understanding poses a challenge. In practice, this could involve announcing open calls for individuals interested in joining a working group that will serve as the guiding coalition for AI-induced change. Engaging various stakeholder groups—practitioners, students, and leaders of higher education institutions—can help bring key figures into this coalition. It is equally important to include administrative staff, leading experts, and prominent authorities in academia. This approach will increase the credibility and influence of the proposed changes. Such interdisciplinary teams will foster change performance and shorten the timeline of AI implementation (Mikalef & Gupta, 2021).
Define the Vision and Overcommunicate it by a Factor of 10: Broadcast a Compelling Vision for the School's Future AI Agility
Impactful leaders must define how AI can effectively support—or reshape—teaching and learning processes and facilitate objective- and standard-setting for stakeholders (Groulx et al., 2025; West & Unsworth, 1998). Furthermore, active engagement of stakeholders through impactful communication strategies (Sawagvudcharee & Yolles, 2022) is essential in the evolving landscape (Kitchen & Daly, 2002; Men, 2014). Communicating the vision is especially important for AI implementation because resistance to digital change can be quite strong (Holmström, 2022). Communication can reduce uncertainty, overpower fear of risk, and help increase acceptance of AI (Booyse & Scheepers, 2023). Highlighting the role of AI in addressing fundamental educational challenges is crucial in vision creation (Fengchun & Holmes, 2023). In this vein, as stated in the Forbes article “Is AI Going to Transform Higher Education and How?” (Press, 2024), it is important to focus on the positive vision of AI adoption.
Adopting a multi-stakeholder, participatory approach (Barreteau et al., 2013) to vision creation is highly effective when there is a need to bring together diverse perspectives and interests. Engaging a broad spectrum of stakeholders helps to find a balance between the interests of diverse groups of stakeholders. As previously noted, a crucial aspect of this vision should include not only the adaptation of AI but also the proactive shaping of AI's trajectory.
Business schools exist both as independent institutions and as units within universities. In university settings, these schools—characterized by their innovative culture and resources—are uniquely positioned to pioneer and disseminate best practices throughout the broader academic community.
Remove Obstacles and Plan for and Create Short-Term Wins: Support Faculty Training and Provide Resources
There is extensive literature on possible obstacles to organizational change (Blanco-Portela et al., 2017; Burnes, 1991; Hoag et al., 2002), and a significant body of work focuses on obstacles to AI technology adoption specifically. For example, Lukito and Mirwan (2022) found 64 reasons for the failure of digital transformation. Booyse and Scheepers (2023) investigated barriers to AI adoption and identified a number of obstacles, including lack of trust, lack of creativity, and human resistance. Technological changes in education bring multiple risks and barriers (Blanco-Portela et al., 2017; Fengchun & Holmes, 2023). Malik et al. (2019) note that institutional, technological, and personal barriers arise in the implementation of AI in universities. Among the major obstacles to AI implementation in education is the lack of institutional support and resources.
Celebrate Elements of AI Adoption in Real Time
Faculty are the workers on the front lines of AI-induced change in higher education. A major obstacle to successful AI implementation is the lack of faculty proficiency in working with GenAI and other educational tools. Allen (herein) provides some suggestions for tackling this critical obstacle, and leaders would do well to prioritize and support faculty upskilling initiatives.
Short-term positive results play a key role in organizational change initiatives (Amabile & Kramer, 2011; Reeves, 2006). These results also boost stakeholder motivation and help to overcome resistance to change (Abbas, 2023). For example, AI implementation often leads to higher student evaluations of teaching, because AI learning solutions (such as immersive gaming technology) increase student engagement and enhance the learning experience (Popenici & Kerr, 2017). In addition, business schools could introduce small projects, such as collaborative platforms or single tools for routine tasks (e.g., tools that create individual learning plans for students, track progress, and offer recommendations, such as the platform of the British company Century Tech and RiPPLE at the University of Queensland). Such small-scale projects demand minimal institutional and financial resources and could also serve the goal of upskilling faculty (see Allen above). Yet, while short-term wins promise immediate benefits, they may not always be part of transformative change. More often than not, these short-term wins become endpoints in themselves (Chhibber, 2021), which is a pitfall to avoid.
Don’t Declare Victory Too Soon: Where AI is Concerned, Institutions Should Probably Never Declare Victory
“Empirical studies that have linked digital transformation to more sustainable business are still scant” (El Hilali et al., 2020, p. 52), and therefore, the challenge of sustaining digital change remains poorly understood by both researchers and practitioners. Further, the AI landscape is by its nature dynamic and ever-evolving, and uncertainty is a constant (see Fischbacher-Smith et al. below). Experts estimate that the “plateau of productivity” from digital technology will not reach a mature phase for quite some time (Gartner, 2024), suggesting that institutional leaders must take care to avoid declaring AI adoption initiatives to be final and complete. When implementing AI, a change mindset is vital to avoid lagging behind the field. Given the limited resources in most educational contexts, continuous change could be especially challenging because it requires significant investment, institutional support, and ongoing training for professors (Holmes & Tuomi, 2022).
Anchor Change in the Organization's Culture: Leaders Must Embed AI Change in the Institution's Culture
Among the antecedents of organizational transformation failure, researchers have identified the absence of innovation within organizational culture as paramount (Vey et al., 2017). Organizational culture represents a key success factor of digital transformation (Frost et al., 2020), serves as an important element of digital strategy (Gorjian Khanzad & Gooyabadi, 2022), and allows AI transformation to become sustainable (Isensee et al., 2021). In educational settings, data science and AI application should become a part of the organizational culture (Passi & Jackson, 2018; Vey et al., 2017). Financial incentives, such as internal university grants and performance-based bonuses, would foster engagement with AI-induced change and signal its value to the university. In addition, human-centered ideation through hackathons (Briscoe, 2014), conferences, and interdisciplinary projects would likely encourage knowledge exchange and help culturally embed AI adoption, which is necessary for enduring change. Implementing AI-related change in institutions of higher learning holds both high promise and peril, and the change management literature offers useful advice for leaders who want to increase the likelihood of successful adoption in their institutions.
To Uncertainty and Beyond: Managing the Challenges of AI in Business Education
Denis Fischbacher-Smith, Moira Fischbacher-Smith and Stephan von Delft
Revenge effects happen because new structures, devices, and organisms react with real people in real situations in ways we could not foresee—Tenner (1996, p. 9)
Our abilities to “foresee and forestall” potential hazards (Bateson, 1972) are brought into sharp focus by AI and the uncertainty that it creates for business schools. Business education typically promotes a rational positivistic approach to management—put simply, if you can measure it, you can manage it. While this approach can work where good data are available and enable predictive validity, it can be challenging when dealing with the uncertainty of “messy” organizational problems.
Managing uncertainty is especially difficult with any new technology. Emergent conditions within socio-technical systems generate challenges that require us to manage radical uncertainty (Kay & King, 2020a, 2020b). The ability to then determine potential failure modes and effects is constrained by a departure from a system's designed-for state, creating fractures in organizational controls because of the emergent nature of the threat matrix (see, for example, Hodge & Coronado, 2007). Business schools, therefore, need to anticipate and adapt to the task demands and vulnerabilities that AI will generate, yet shape their learning and teaching environment in a human-centered way. This requires a fundamental adaptation of the prevailing business education models.
Framing the Problem Space
The emergence of GenAI and the pace of its development have started to challenge many of the assumptions held around learning design (Zawacki-Richter et al., 2019). According to a recent study by AACSB, AI generates potential threats to business education (e.g., skill loss, inequity and bias, academic integrity challenges) and opportunities (e.g., enhancing the learner experience, new educational approaches; AACSB, 2024). For example, an AI-powered automated grading system can both drastically shorten the time taken to return grades to students and raise issues around fairness and discrimination (Binns, 2018). Threats and opportunities around AI create new challenges in terms of the leadership of innovation (Antonopoulou et al., 2021), and the skills and competencies required by business school leaders. Importantly, this has implications across the various stages of business school digital transformation strategies and will require that managers develop the “requisite imagination” (Westrum, 1993) to deal with the potential for emergent conditions generated, thereby ensuring that controls are in place to manage vulnerabilities.
Joglekar and colleagues (2024) argue that there are three stages in digital transformation that can be seen to have relevance to business schools. The first is concerned with a focus on improvements in efficiency, where many routine administrative tasks can be automated by AI-based systems. One difficulty arises where this becomes a black-box process in which the details of the AI-managed processes are opaque to end-users. Other concerns relate to potential misuse in student assessments and academic publications, requiring investment in training for students and staff around the ethical use of the technology (see also Wittmer et al. in this collection). Determining the likely impact of a new technology in the short and longer term is, however, problematic. Blin and Munro (2008) point to early claims that “e-learning would transform and disrupt teaching practices in higher education” (p. 475), yet those aspirations went unrealized in the short term. Arguably, however, GenAI is already proving disruptive within higher education; determining the nature and extent of a response to the affordances of such technology is a leadership issue. Requisite imagination requires managers to think outside of the silo-based approaches that often prevail within business schools and also requires that the curriculum covers a multidisciplinary approach to dealing with complex, multilayered problems. While AI will make it possible to handle large volumes of information quickly, that information will still require an assessment of its validity and applicability to a particular problem space. The end-user will also need to assess the provenance of that data and extract meaning from it.
The second stage identified by Joglekar et al. (2024) relates to the need to build additional digital capabilities. This will require addressing the range of “source types” (Reason, 1990) within a school's culture, namely: awareness, commitment, and competence among all staff and across all job families, while maintaining a human-centered approach to learning and teaching by prioritizing human agency, ethical considerations, and the enhancement of human capabilities. Again, this is a leadership issue and affects the approach taken by a school towards recruitment and staff development (see Allen's piece in this curation). Leaving the technicalities of AI development to those with a technical understanding of the issues, such as learning technologists, may not lead to an integration of the technology into a wider school strategy. AI will also expose any existing problems in a school's underpinning technological infrastructure and any digital poverty that exists among stakeholders, such as students and staff. Business schools will need to embark on a structured program of staff development to ensure that academic colleagues are able to apply the imagination needed to manage emergent conditions and to work across the traditional disciplinary silos within business schools and universities that have shaped decision-making in the past.
The third and final stage of Joglekar et al.'s (2024) framework is that of growth through the process of digital transformation. Business schools are already agents of growth for many universities, and there is a potential problem that growth through digital transformation will be seen principally in terms of student numbers rather than through greater efficiencies and an enhanced learning experience. The issue of customer value is an important part of Joglekar et al.'s framework, but it is perceived, in some business school sectors, as secondary to revenue growth. Moreover, customer value may involve expectations of AI to improve efficiencies in academic activities (such as improving the consistency and timeliness of feedback), accompanied by repurposing the time saved into other value-adding engagement with the student. With much emphasis on cost saving and lack of time across the sector, redirecting staff time to “value-adding activities” will prove very challenging for many business schools.
Another important aspect of this stage is the development of networks and the creation of a wider digital infrastructure in support of a school's mission. At the same time, AI also has the potential to allow new entrants into the market, especially for non-accredited forms of learning, and this could impact the intended growth strategies of schools. There are, therefore, potential vulnerabilities associated with AI as a form of digital transformation across each of the transformation stages. Appendix A (see Supplemental Material) illustrates the potential challenges that might be generated by AI within business schools. The data were generated by asking ChatGPT what risks might be associated with Joglekar et al.'s (2024) three stages of digital transformation (OpenAI ChatGPT, 2024).
Addressing potential vulnerabilities will be a continuous process, while an inability to address these issues, and ineffective digital leadership, will generate the potential for failure within many business schools. These are dynamic issues, and emergence within the system will generate further problems that have not been anticipated, thereby requiring staff to be vigilant about the potential for failure. Unfortunately, discussion in the sector is often framed in terms of the risks associated with the introduction of AI, yet there is a problem with how risk as a construct is often defined (Dowie, 1999). For risk assessment to be meaningful to decision-makers, it must have a degree of predictive validity around the probabilities and consequences associated with a hazard, in this case AI (Knight, 1957). Discussions around AI are, however, less about risk management and more about uncertainty and vulnerability management, and this will require a change in the ways that business schools are managed (Fischbacher-Smith, 2023).
Risk, Uncertainty and Emergence
AI threatens business schools’ operations in several ways. There is an obvious threat to curriculum relevance coming from advances in AI if current content and training become rapidly obsolete. AI could, however, be seen here as a potential collaborator in the learning design processes, rather than as a competitor, but that has to be carefully managed from an ethical perspective (Carroll, 2025). This would require a level of agility from business schools and their parent institutions that has not always proved possible (Twidale & Nichols, 2013).
Ethical and other issues associated with AI-based technologies are increasingly well known. The opaque nature of the algorithms used by AI and the lack of effective provenance associated with AI-generated scripts, for example, mask the degree of robustness of the text generated, because it is currently impossible to test its veracity. There are also intellectual property and copyright issues associated with the output: it is impossible to list the sources of information that underpin ChatGPT's answers, highlighting the problems around verification.
The challenge with any new technology is that the assessment of “risk” as probability and consequence is invariably challenging. If we accept Knight's definition of risk as measurable uncertainty, the problem we face is a lack of statistical evidence to judge that technology “in the wild.” The analysis of risks from AI is invariably based on the judgements of those with expertise in the field, and this, according to Knight, is the weakest form of assessment. This is especially problematic with AI when it operates in autonomous conditions (Trusilo, 2023). Most assessments about AI will relate to unmeasurable uncertainty, thereby highlighting the links between uncertainty and organizational knowledge (Rumsfeld, 2002, 2011).
Business schools typically adopt a risk management approach that is designed for managing well-understood and predictable risks within defined areas. However, this conventional focus on risk management will not serve business schools well in an environment where AI has the potential to be so disruptive and ubiquitous. Accepting uncertainty, recognizing the inability to predict emergent effects (and planning appropriately), and emphasizing instead the need for adaptability and evolution will become key differentiators between the schools that thrive in the coming years and those that do not. Business schools will need to be more agile in their responses to AI, address the ethical issues raised, recognize the knowledge limitations underpinning their decisions, and develop a range of contingency strategies to deal with the potential for emergent conditions. For example, business school leaders may want to consider implementing processes for AI experimentation and integration, and for collaboration with other subject areas and disciplines, to identify emergent conditions and better manage vulnerabilities. To further support faculty in developing the requisite imagination to deal effectively with emergent conditions, business schools also need to foster a culture of “conscious inquiry” (Westrum, 1993, p. 402) by encouraging faculty to observe, probe, and openly share conclusions about issues around AI. Importantly, faculty will require the imagination to identify possible modes of failure around AI as well as possible ways of testing the learning and teaching system. Business school leaders should therefore communicate the importance of sharing information, observations, and ideas around AI and establish a culture where these are shared wherever they arise within the school, regardless of hierarchy or status and without fear of criticism or judgment. Another aspect is to promote the exchange of ideas and diverse insights by breaking down silos.
Discussions with students about these same issues are equally important if students are to learn from and experiment with the tools that they will use in the workplace.
Moreover, given the uncertainties around AI, business schools’ typical approach to strategic planning that depends on stable (or at least predictable) environments may no longer be suitable. Instead, business schools may need adaptable strategies that prepare for multiple outcomes in each transformation stage and take advantage of change through business model innovations, thereby shaping the evolving business education landscape. Learning design will also need to be human-centered and yet be adaptive to technological change and evolve more rapidly than in the past. If the business school curriculum is to remain current, relevant, and reflective of how AI is changing business and business education, then assessing vulnerability rather than risk is critical.
Section 3: AI in Use in the Business Classroom
A Student-Centered Teaching Strategy for Ethics and AI
Dennis Wittmer, Douglas Allen and Farzin Aghdasi
Business schools need to assist students in arming themselves with the values, perspective and knowledge that will allow them to take leadership roles in the current wave of technological development. In the past six years we have developed and taught a course designed to help business students consider the impact emerging technologies will have on their role as future leaders and managers and on the employees and other stakeholders to whom they have responsibilities (Allen et al., 2022).
As we survey the tech landscape, we have concluded that our job in the business school is as much about helping our students evaluate and understand the impact of AI and other advanced technologies on jobs, organizations, and society, as it is about training students to use AI as a tool. We see our challenge as helping our students acquire knowledge, skills, and perspective that will allow them to navigate the continuing transition toward an AI-influenced future with impact and integrity.
The fact that recent technological advances, including AI, have so far outpaced the ability of individuals, businesses, and public policy to keep up has been well documented (e.g., Deloitte, 2017). Interestingly, recognition of the gap between technical and social innovation is not new. Often focusing on the organizational level, Emery and Trist (1960), Evan (1966), Damanpour and Evan (1984) and others have discussed this lag and have shown through their research that organizational performance declines as the gap between technical and social innovation widens. At present, this gap makes business schools and their students vulnerable to the risks associated with AI advances (see Fischbacher-Smith et al. above).
In our view, this gap is not inevitable at either the organizational or societal level, but is due at least in part to asymmetrical attention and investment: the technical development and deployment of AI has not been matched by equal attention and resources devoted to its ethical and societal implications. Market forces overwhelmingly reward speed and being “first to market” rather than careful consideration of ethical and societal concerns. Too often, the incentive has been to “move fast and break things,” as a common Silicon Valley mantra advocates (see Olson, 2024, for instance, for discussion of how these forces are playing out in the context of OpenAI and DeepMind). Given the fast pace of AI innovation, faculty have perhaps not had the time to reflect on their own relationship with AI (see Rivers & Holland above). In our view, colleges of business and other technical programs (such as computer science and engineering) have inadvertently contributed to this growing gap between technical and social/ethical innovation by failing to sufficiently prepare their graduates to critically assess the ethical and societal consequences—both good and bad—associated with these technologies. See additional examples of ethically based discussions concerning AI and advancing technology in Binns (2018), Binns et al. (2018), Zuboff (2019), and Nissenbaum (2004).
As a result, we focus on helping students build the capacity to see themselves as protagonists in the development of this new tech-informed world and to gain a sense of agency: confidence that they can participate in writing the future they desire rather than passively accepting a future they may fear is beyond their control. We hope to cultivate in our students the conviction that the future is not inevitable, but something they can help shape through their actions and decisions.
The impact of AI is expected to evolve over many years, meaning that students will be challenged to evolve their understanding of this technology over their entire careers. At the same time, the global nature of AI invites, indeed demands, a level of global coordination and consideration seldom if ever seen to this point in the realm of business and government (see, for example, World Economic Forum, 2019; Bahá’í International Community, 2021). As a result, students will need to build and maintain a modifiable and global viewpoint that allows them to bridge the current gap between the technical advancement of AI and the values that could guide its responsible development and use, both in their home country and globally.
While we don’t expect to resolve these thorny problems in one course, we seek to expose students to the practical and ethical questions associated with these issues, allowing them to experiment with tools, concepts, and frameworks as they struggle with often contradictory trends and pressures. By the end of the course, our intent is that students will have constructed their own viewpoint about the use, development, and societal implications of AI, anchored in a solid set of values. See Allen et al. (2022) for detailed information about the course we developed, and for some of the student-centered activities embedded in our “Leading in the Digital Age” course. To reinforce the interaction between the technical and ethical dimensions of AI, three in-class values exercises are conducted around the beginning, middle and end of the course. These exercises illustrate how student-centered learning can be easily incorporated into courses that cover AI.
Exercise 1. Values Identification
Students reflect on and create lists of important values, norms, and ethical principles involved with the design, development, and use of AI. In class, students are asked to individually generate a list of ethical and moral values they believe would be important to them as employees, customers, businesses, and society. Next, students share their list with others in small groups and create a consolidated list, perhaps prioritizing those points most commonly raised by members of the group. Using the consolidated list, students report out and create a class list on a whiteboard, which is retained by faculty for future use. By way of comparison, we show students recent studies which highlight ethical principles and norms identified as most important in relation to AI (Khan et al., 2022, 2023).
Exercise 2. Identification of Current and Future Uses of AI That May Compromise Ethical and Moral Values
In this exercise, students identify a specific example of how AI is being used that raises potential ethical concerns. Before the class session, we assign students to find articles that discuss a current example of the use of AI that may compromise or ignore ethical values and norms. Next, students post one of these examples on a discussion thread, including the title and link for the article, along with an analysis of the ethical values at stake. During class, students discuss their examples in small groups, looking for themes among the most commonly occurring ethical norms being compromised by applications of AI. Groups record their list of norms and principles (e.g., privacy, justice, cross-cultural respect, transparency) on a whiteboard, and the list is compared to the lists generated from Exercise 1. This exercise is intentionally positioned later in the course, after students have done readings about AI and advanced technology and their impact on management.
Exercise 3. Identification of Company Best Practices and Public Policy/Legal Initiatives to Address AI and Ethics Issues
In this exercise, students review and assess different corporate and governmental approaches (both local and global) for addressing the ethical challenges related to AI applications. Strengths and weaknesses are assessed and “best practices” gleaned. First, we assign students to find one business example of how a company is addressing the ethical adoption and use of AI. They are also assigned to find one example of how governments are trying to address the ethical and social challenges of AI and its impact on society (again, students are encouraged to look for global as well as local examples). Next, students post their examples on a discussion thread prior to class, including relevant links, along with their summary/analysis of the corporate policies and strategies. In class, we divide students into small groups to share and compare both corporate and government responses. Groups select one best practice from business and one government policy to share in a discussion with the entire class. See Smit et al. (2020) and Zhou and Chen (2023) for company examples and academic research related to moving from principles to practice.
As part of this third exercise, students might also identify “best practices” in achieving what has become known aspirationally as “trustworthy AI.” One likely company example is Nvidia, which has been a leader in advancing trustworthy AI. At Nvidia, the management team is highly aware of the need for guardrails in AI. Accordingly, a team including engineers, lawyers and activists has been hired and tasked to study the social discourse on the subject, and to attempt to create guidelines to protect both the users and creators of AI models, along with the associated datasets for training, evaluation and testing of the models. As one example, all new models need to have what is termed “model card++,” a document that transparently discloses how the model was trained and its known limitations and biases. Legal language is inserted to caution the user. While this is not wholly adequate protection, at least it calls for the users (all of whom are other developers creating the final products) to take shared responsibility for any unintended consequences.
The advent since late 2022 of ChatGPT and other GenAI-driven platforms capable of generating very human-like output has greatly accelerated the use of AI in daily work and personal activity—and in the process, intensified the need to identify and develop “best practices” in the ethical development, adoption and deployment of AI-driven platforms. Even if output from these models is “hallucination”-free (too often a big “if”), new questions about the appropriateness and societal impacts of these AI platforms are raised. Can this sophisticated environment impact human agency? Questions of privacy, safety, security, transparency, bias, discrimination and longer-term human and societal impact are becoming part of the work of academia, as well as industrial developers of AI. Fortunately, through independent investigation, students are able to find many examples of companies attempting to learn how to address these issues.
To complete the series of three exercises we outlined above, students are asked to assess the strengths and weaknesses of each of their examples and identify further actions required by individuals, companies, and societies to ensure that AI advances with balanced consideration of the ethical and technical opportunities and challenges. We have not performed formal assessments of the value and benefit of these three exercises; informally, however, we have evidence of their effectiveness. First, student teams are assigned responsibility to lead discussion on the topic for the day (e.g., the impact of AI on particular industries, the competition for AI supremacy between China and the United States, and the use of AI in HR practices, among others). Teams consistently included discussions related to values and ethical norms in their analyses. Evidence of the impact of the exercises was also seen in the final individual assignment, which asked students to outline the kind of future they envisioned as it related to AI and management. Consistently, students included ethical values and principles as elements of their future assessments and prescribed actions that could improve the future as influenced by AI and other advanced technologies.
Many faculty may feel that they are not adequately prepared to address these issues with their students. We feel the same way. None of the faculty who developed this course has a technical background, and incidentally, at the time we developed it, we were the three senior-most faculty in our department. The technology is advancing at an exponential pace with no end in sight.
This is exactly why we designed our course as one that is co-created in real time with our students. By harnessing the minds, experience, and investigative skills of every student in the class, we create a learning community capable of generating far more knowledge than the faculty could possibly bring to the classroom. We do our best to keep abreast of developments in the field—exchanging literally hundreds of articles among ourselves in the months leading up to each iteration of our course. Instead of a textbook, a reading list curated from this exchange becomes a resource for the class. After teaching this class for the past six years, each iteration has been significantly different; very little similarity exists between the first iteration and the sixth.
In an earlier essay in this collection, Rivers and Holland encourage business educators to “equip our students with the necessary skills to move skillfully from using technology because it exists, to using technology with intention, consciousness and mindfulness.” We agree, and by empowering students to see themselves not just as passive users of technology, but as protagonists in its design and deployment, we believe the existing gap between technical and social innovation can be successfully closed for the betterment of society.
Does AI-Assisted Negotiation Training Facilitate Learning?
Jeanne M. Brett
Does AI-assisted negotiation training facilitate learning, or is it just a gimmick, a change of pace for students, or, even worse, a crutch that means students no longer learn negotiation strategy or how to use it effectively? To evaluate these three reasons why AI-assisted negotiation training might not facilitate learning, let us consider how business schools teach negotiation skills, what AI-assisted negotiation training is, which criteria are most relevant for evaluating AI negotiation learning, and what some preliminary data suggest.
Negotiation training is grounded in experiential learning—the process of learning by reflecting on an experience (Kolb, 1984). In negotiation training, the experience is a simulated negotiation exercise in which students are assigned to roles (for example, buyer/seller) and given information about the issues to be negotiated, their role's positions on those issues, their interests (e.g., why they take those positions, and their priorities), their alternatives to a negotiated agreement, and some amount of information about the other party. They negotiate with each other and turn in their results, typically not to be graded, because grading generates an anti-learning, win-at-all-costs environment. The learning comes from reflecting on their own experience and via an instructor-guided discussion designed to reveal the concepts and theory that the negotiation exercise emphasizes. To encourage deeper reflection, some instructors require or encourage students to keep diaries, and some provide systematic peer feedback. (See negotiationandteamresources.com for a peer feedback system.) There is relatively little traditional academic learning—reading assignments, lectures—in negotiation classes.
AI should be able to facilitate negotiation students’ learning that goes beyond the traditional classroom experience. It can coach students through preparing for the negotiation. It can act as their negotiation counterpart or teammate. It can provide personalized feedback. Examples of each are contained in the following paragraphs.
A negotiation planning document is a rubric that guides students to prepare systematically for negotiation. A planning document organizes the information in negotiators’ roles: who the parties are, what the issues are, and the parties’ positions, interests, priorities, BATNAs (best alternatives to a negotiated agreement), and walkaways (Brett, 2014). Some planning documents also ask students to write down their opening offer, their plan for questioning, and so on. A currently available GenAI planning document coach reviews the student's work, gives the student feedback, and iterates with the student, making suggestions for deeper analysis until the AI coach is satisfied that the student has prepared up to the professor's standard. (See Morse & Leben below.)
AI can negotiate with a student as a counterpart, or as a teammate. It can play both roles in a mediation exercise, and it can populate all but one member of a group for multiparty negotiation. These options are currently available in a variety of negotiation exercises from idecisiongames.com. This platform also has AI-facilitated exercises for leadership, entrepreneurship, strategy, and sustainability.
Negotiation exercises give students a lot of information about what they need to negotiate, but seldom much direction about how they should negotiate. The result is that students vary rather widely in how they enact their roles. Some are more cooperative, others more competitive; some are more and others less trusting, deceptive, emotional, and so on. Even when the professor instructs students to act angry, to give them experience managing their own and the other negotiator's emotions, some students cannot bring themselves to do so. (Holly Schroth's Teaching Notes for Myti-Pet, available at negotiationandteamresources.com, provide guidance for stimulating and debriefing anger in negotiation exercises.) Not so AI. Although OpenAI prompted ChatGPT to be kind, helpful, positive, etc., it is possible to prompt a generative AI negotiator to be angry and emotional (see Jeanne Brett's exercise with teaching notes, At Your Service AI, available at idecisiongames.com). It should also be possible to prompt an AI negotiator to be deceptive, reluctant to trust, pro-social or pro-self, or to negotiate culturally, for example, to negotiate from a position of low or high trust or to use low- or high-context communication. However, as OpenAI has produced newer versions of ChatGPT, it has become more and more difficult to prompt an AI negotiator to act contrary to the norms of integrative negotiation that dominate the negotiation literature, which of course ChatGPT has read.
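In practice, this kind of persona prompting amounts to writing a system prompt that fixes the counterpart's opening disposition. The following is a minimal Python sketch of how an instructor might assemble one; the function name, parameters, and wording are illustrative and not drawn from any published exercise.

```python
def build_counterpart_prompt(role_brief, emotion="angry", trust="low",
                             motive="pro-self"):
    """Assemble a system prompt instructing an LLM to enact a negotiation
    counterpart with a fixed starting disposition. Hypothetical wording;
    a real exercise would tune this against the model's default norms."""
    return (
        f"You are a negotiation counterpart. {role_brief}\n"
        f"Begin the negotiation in an {emotion} emotional state, with "
        f"{trust} trust in the other party and a {motive} social motive.\n"
        "Become more cooperative only if the student uncovers and "
        "addresses your underlying interests."
    )

prompt = build_counterpart_prompt(
    "You are a hotel guest whose reserved room was given away."
)
```

The same pattern would extend to the other dispositions mentioned above (deception, reluctance to trust, cultural styles) simply by varying the parameters.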
The importance of being able to prompt the AI negotiator on how to enact the role is that every student negotiates with an AI counterpart that, at least at the beginning of the negotiation, enacts its role in the same way, giving each student the same negotiation strategy challenge to overcome. Although the AI counterpart follows its prompt, it also responds to the student negotiator. In At Your Service AI, as the student learns and responds to the AI counterpart's interests, the AI counterpart becomes more cooperative. Comments from a few EMBA students who negotiated with an AI counterpart in At Your Service AI in early 2024 reveal that they were not prepared for an angry and defensive AI counterpart: “The AI was quick to start the conversation in a heated manner. They should start out more level-headed.” “The bot can be stubborn or does not fully understand my message.” “Bot was not imaginative.” They blamed the bot until they figured out that overcoming this behavior was their challenge.
In the multiparty context, GenAI plays more than one role. One example is the manager as a mediator. Exercises from negotiationandteamresources.com that illustrate the manager as a mediator include Paradise and Telepro. In the non-AI classroom, it is time-consuming to give all the students the manager as a mediator experience. Students in groups of three rotate through three exercises playing a disputant twice and a manager-mediator once. In the AI-assisted classroom, all the students play the manager-mediator role and AI plays both disputants (ParadiseAI is available from idecisiongames.com). In the multiparty, group decision-making context, AI can play multiple roles and challenge the student to lead the group to a particular decision (multiparty HarborcoAI is available from idecisiongames.com).
Generative AI can give the student personalized feedback in all of the above learning situations. It does so by reviewing the exercise and the negotiation transcript and evaluating the student's behavior against a rubric. The rubric should organize the feedback according to the negotiation concepts that the exercise is supposed to illustrate. For example, did the student negotiator reciprocate the AI negotiator's angry behavior? Or did the student use interest-based questioning to learn about the AI negotiator's interests? By addressing those interests, did the student turn the AI negotiator away from anger? GenAI feedback can tell students what they did well and what else they might have done to potentially improve their outcome. It can give the student information about how the AI negotiator “feels” about the student negotiator and the negotiation process and outcomes. To do this, the AI prompt needs to identify the words the student used that research shows would, in a human, stimulate such feelings. Not too far into the future, students will be negotiating with avatars that show facial emotions. With students’ video cameras on, AI can read students’ facial emotions and give feedback. I am aware of two experiments in which the AI gives students feedback and coaching during the negotiation (Shea et al., 2024; Shaikh et al., 2024). A third project, in process, proposes to develop a measure of AI Negotiation Competence (Wang et al., 2025).
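This transcript-plus-rubric loop can be sketched as prompt assembly: pair the transcript with the concepts the exercise is meant to illustrate and ask the model for criterion-by-criterion feedback. A minimal Python sketch follows; the rubric items are illustrative and not taken from any published course.

```python
# Illustrative rubric keyed to the concepts an exercise is meant to teach.
RUBRIC = {
    "interests": "Did the student use interest-based questions to learn "
                 "the counterpart's interests?",
    "emotion": "Did the student reciprocate the counterpart's anger, or "
               "defuse it by addressing interests?",
}

def build_feedback_prompt(transcript, rubric=RUBRIC):
    """Pair a negotiation transcript with a concept rubric so an LLM
    grader can report, per criterion, what the student did well and
    what might have improved the outcome."""
    criteria = "\n".join(f"- {name}: {question}"
                         for name, question in rubric.items())
    return (
        "Evaluate the student negotiator in the transcript below "
        "against each criterion, noting strengths and possible "
        "improvements.\n"
        f"Criteria:\n{criteria}\n"
        f"Transcript:\n{transcript}"
    )

p = build_feedback_prompt("Student: It sounds like reliability matters most to you.")
```

The instructor would then send the assembled prompt to whatever model the course uses.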
The current and future levels of personalized feedback available in the AI negotiation classroom generally are not available in the traditional negotiation classroom. A traditional class extensively using peer feedback provides the closest comparison. The AI feedback can also provide the student with comparative information from classmates’ experiences or from normative information. An example of the latter might be if the instructor wanted the exercise to illustrate differences when negotiators have prevention versus promotion goals and gave half the negotiation groups a prevention focus and the other half a promotion one (Peng et al., 2015). Comparative feedback is typically not available in a standard negotiation classroom.
In the relatively sparse literature on criteria for evaluating learning in negotiation (Cohn et al., 2009; Ebner et al., 2012), four criteria stand out: knowledge, behavior, outcomes, and satisfaction. These criteria can be applied to the AI-assisted negotiation classroom as well as to the traditional negotiation classroom. Knowledge refers to whether students understood the concepts that the course or exercise(s) was intended to teach. Testing knowledge is straightforward: Does the student know the definition of the concept, and how and why it is used in negotiation? For example: What is a BATNA? Why is a BATNA important to a negotiator? How does a negotiator set a BATNA? Using technology to facilitate teaching and assessment has been around since before psychologist B.F. Skinner applied the reinforcement techniques he had developed for training pigeons to teaching students in the 1950s (https://americanhistory.si.edu/collections/object/nmah_690062). Another possible assessment tool is the GenAI planning document coach. Although not currently used for assessment, it could certainly be adapted to that end.
Behavior refers to whether students demonstrated skill development: did they use the concepts appropriately during the negotiation? Pre- and post-self-reports are one way of measuring behavior; so is peer feedback. However, there are some AI-assisted means of evaluating whether students are learning skills. One is the opponent model developed by Chawla and colleagues (2022). A core negotiation skill is inferring the counterpart's interests and priorities from social interaction. In the opponent modeling exercise, the automated agent and the student each infer the needs of the negotiation partner from a conversation they both read. The student receives feedback on the difference between what the automated agent infers and what the student infers, and can then review the conversation for the cues the automated agent picked up and the student failed to notice.
A very fine-grained analysis of negotiation behaviors (tactics) is now possible by using a validated AI system to code negotiation transcripts (Friedman et al., 2023). Tedious, time-consuming, and error-prone human coding of negotiation transcripts has long been the way negotiation researchers have learned how negotiators with different social motives, or from different cultures, use strategy to reach agreements (Adair & Brett, 2005; Weingart et al., 1990). It should be possible to run negotiation transcripts through the AI coder and give students feedback on their use of tactics, reciprocity, and the timing of that use, regardless of whether the counterpart is AI or another student.
Outcomes, that is, individual and joint gains, are relatively easy to assess when negotiation exercises are quantified. Qualitative outcomes require a coding rubric, which GenAI should be able to summarize quickly for the instructor. The more difficult outcome criterion to evaluate is the transfer of behavior from training to job or life; yet this may ultimately be the most important criterion for evaluating negotiation learning against the goal of taking a negotiation class. Some limited evidence suggests that AI negotiation training can transfer (Tschoe et al., 2024).
Satisfaction refers to how students feel about the learning system and what they feel they got out of participating in it. Surveys are the usual way to evaluate satisfaction: Was the system easy and enjoyable to use? Was engaging in the learning system a worthwhile use of my time? Did I learn something from doing the AI exercise? Would I recommend the learning system to a friend? (Murawski et al., 2024). AI-assisted negotiation training may facilitate satisfaction because of the individual feedback. It may also help students who take the negotiation course, but are reluctant to negotiate, get over that reluctance. The AI negotiator isn’t a classmate, doesn’t have feelings, and is infinitely patient. The AI feedback emphasizes what the student is doing well and what the student can do to improve. Negotiating with an AI counterpart is a very private way to learn and gain confidence. On the one hand, there is a risk that confidence negotiating with AI doesn’t transfer to confidence negotiating with a human. On the other hand, these AI-trained and confident negotiators are going to be the scourge of the customer service AI that we are already negotiating with.
Finally, please note that I have proposed evaluating AI-assisted negotiation learning against a set of criteria, not against the human-to-human, face-to-face classroom experience. Although the classroom may or may not be the gold standard for learning, classroom negotiation training is not always possible. Much negotiation training is occurring online today. People are going to be, are already, negotiating with AI customer service representatives. The ultimate question is the goal of teaching negotiation. I want my students to become skillful negotiators and to have confidence in their skills regardless of with whom or with what they are negotiating. Does negotiating with GenAI facilitate that goal? This is a major opportunity for research. At this point the advances in technology have far outstripped the evidence of learning.
AI for Analytical Reasoning: Insights for Teaching Business Ethics and Negotiation
Lily Morse and Derek Leben
How will GenAI, specifically LLMs, transform the way we teach business ethics and negotiation? Given their rapid adoption in organizational and everyday life, equipping students to use LLMs responsibly is essential preparation for future career success. In our own classes, we developed two LLM-powered chatbots to help students develop analytical reasoning skills. We designed the chatbots to engage business ethics students in argumentation (e.g., critiquing the validity of arguments, identifying objections, and evaluating evidential support) and negotiation students in strategic planning (e.g., prioritizing interests, anticipating counteroffers, and preparing bargaining strategies). Overall, students responded positively to these tools and felt they provided a meaningful learning experience.
However, we also faced challenges in leveraging this technology without compromising human-centered learning objectives. A significant concern was that students could become dependent on the chatbots, which would undermine our goal of cultivating their problem-solving and decision-making abilities (Chan & Hu, 2023; Dergaa et al., 2024). That is, students might rely on the immediate responses provided by the chatbots rather than form their own insights (Brantley, 2023). This essay identifies the need for rigorous learning assessments to mitigate this risk and proposes three recommendations for constructing them. Notably, the assessments are not limited to business ethics and negotiation courses and may be adapted for use in related disciplines that emphasize analytical reasoning, such as operations management and business law courses.
Recommendations
First, we encourage instructors to establish a uniform set of benchmarks that measure the effectiveness of LLMs for cognitive skill-building. The assessments should evaluate student performance at both “global” and “local” levels without technological assistance. Global benchmarks should measure generalized skills that are applicable across a wide range of topics. For instance, instructors could incorporate questions from standardized tests like the LSAT analytical reasoning section or the Critical-thinking Assessment Test (CAT) to determine students’ core analytical reasoning abilities (Cullen et al., 2018). 2 By contrast, local benchmarks should assess content-specific knowledge and skills through peer and faculty evaluations. To ensure alignment with course objectives, educators can construct their own local benchmarks using existing instructional frameworks. For example, Berland and Reiser (2008) recommend evaluating analytical reasoning and argumentation by measuring students’ capacity for sensemaking (ability to develop understandings), articulation (ability to explain one's reasoning process to others), and persuasion (ability to convince others of arguments and successfully defend against counterarguments).
Second, we recommend conducting pre-post assessments to measure knowledge gains and skill acquisition. Consider Susskind et al. (2024), who described their experiences using LLM negotiation “coaching” bots for multiparty bargaining simulations. The authors assessed students’ negotiation skills before and after the simulation. Their findings revealed significant improvements in students’ ability to reason strategically about their negotiation goals and the interests of their counterparts.
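Pre-post comparisons of this kind are often summarized with a normalized gain score: the fraction of the available improvement a student actually realized (Hake's normalized gain is one standard formulation). A minimal Python sketch, assuming a 100-point assessment:

```python
def normalized_gain(pre, post, max_score=100):
    """Hake's normalized gain: the fraction of possible improvement
    realized between a pre-test and a post-test score."""
    if pre >= max_score:
        return 0.0  # no room left to improve
    return (post - pre) / (max_score - pre)

# A student moving from 60 to 80 realized half of the possible gain.
g = normalized_gain(60, 80)  # 0.5
```

Averaging such gains across experimental and control groups gives a simple, comparable summary of how much each condition moved students toward the assessment ceiling.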
Where feasible, controlled experiments with randomly assigned experimental and control groups can provide deeper insights into the causal impact of LLMs (Shea et al., 2024). To demonstrate, Leben designed an LLM “Ethics Debate Coach” for students to use in writing workshops during the Fall 2024 undergraduate business ethics course at the Tepper School of Business at Carnegie Mellon University. The chatbot had access to the textbook and other course materials, and was instructed to act as a “sparring partner” that challenged students’ arguments in a warm and friendly manner but did not directly evaluate arguments or provide suggestions. Students across three sections were randomly divided into experimental and control groups: the control group debated their policies with a peer, and the experimental group debated their policies with the chatbot. Students in the experimental group were only able to access the LLM during the writing workshops, and transcripts of the interactions were submitted after the workshops.
To evaluate learning outcomes, both qualitative and quantitative metrics were employed. The qualitative metrics included surveys that students in all conditions completed after the workshop (Appendix B; see Supplemental Material), and the quantitative metrics included rubric scores on their papers as assessed by the course graders (Appendix C; see Supplemental Material). Especially relevant are rubric features that assess analytic skills, such as structure and internal consistency, strength/validity, consideration of alternatives/counterexamples, examples/case studies, evidence, and implications. This evaluation method satisfies many of our recommendations, though it has a weakness: it includes only local measures, not global ones. Future evaluations will strive to incorporate global measures as well.
Our third recommendation calls for a structured approach to mitigate broader ethical harms and safety risks associated with LLM adoption in the classroom, beyond the issue of overreliance (Ji et al., 2023; Piers, 2024; Zhou & Chen, 2023). Fischbacher-Smith et al., herein, identify several risks and vulnerabilities associated with AI adoption in universities, which we build upon by discussing practical safeguards for LLMs.
For instructors who are less technically inclined, the Holistic Evaluation of Language Models (HELM) framework offers an accessible starting point for assessing potential harms linked to LLMs. HELM evaluates how popular language models (e.g., ChatGPT, LLaMA) perform across several benchmarks, including accuracy, robustness, fairness, toxicity, efficiency, information retrieval, and consistency (Liang et al., 2023). Instructors with greater technical expertise can apply more advanced safeguards, such as configuring customizable “guardrails” that constrain or direct LLM behavior. Tools like Nvidia NeMo Guardrails and IBM Dromedary can help ensure that conversational chatbots stick to curriculum-relevant topics, deliver accurate information, and produce safe responses in the face of malicious users (Rebedea et al., 2023).
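To give a flavor of what such a guardrail involves, NeMo Guardrails expresses topic rails in its Colang language. The sketch below follows the spirit of the library's documented examples; the utterances are illustrative, and the exact syntax should be checked against the current NeMo Guardrails documentation.

```
define user ask off topic
  "What do you think about the election?"
  "Can you write my history paper for me?"

define bot refuse off topic
  "I can only discuss topics covered in this course."

define flow off topic
  user ask off topic
  bot refuse off topic
```

A rail like this lets the instructor keep the chatbot on curriculum-relevant ground without retraining the underlying model.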
It is also important to provide robust privacy and security protections that align with the educational context (Nissenbaum, 2004). For example, training LLMs on sensitive data from sources like class transcripts may require explicit student consent. Because LLMs can often be hacked to reveal private or copyrighted material in the training data (Nasr et al., 2023), instructors should adopt secure access methods. This might include using locally hosted models or internally managed platforms for students interacting with the models. Implementing these safeguards can enhance students’ perceived fairness and trust in the technology (Morse et al., 2022). Ultimately, we argue that integrating LLMs into business ethics and negotiation courses requires thoughtful instructor oversight to ensure student learning is enhanced, not diminished.
Concluding Thoughts
Aimee L. Hamilton and Cynthia V. Fukami
After reading these essays, we expect that at least a few of you are overwhelmed. Questions abound about where each of us should start with our journey to remain human-centered while satisfying our needs for critical thinking, rigor, and introducing our students to the awesome technological changes around us. University policies and the guidance we receive from administrators are often murky, forcing us to grapple with AI-related questions on the “front lines,” and to develop personal policies regarding AI use. Some of us may question our expertise in this new realm. Are we experts in technology? What support is being provided to faculty? Do we have the motivation, either extrinsic or intrinsic, to learn how to best use these tools? If we are motivated, which experts do we trust to educate us?
More than this, how far are we out of our comfort zones? We were trained in the theory and methodology of an academic discipline, and technology was a tool for us to research that discipline. Now we are asked to create web-based material, to use GenAI, to assure the learning of (as well as police) students whose sophistication with it likely exceeds our own, and to be IT troubleshooters when our classroom technology isn’t working as it should. This is probably not what many of us signed up for, and we may perceive that our psychological contracts have been violated. Simply put, the Fourth Industrial Revolution (4IR) has changed our job descriptions, and it seems that there is no going back. Recently, Nick Machol, CEO of Unburdn.ai, gave a talk at our College of Business about the importance of teaching GenAI tools to business students. Machol contrasted the old way of thinking about AI with the new way (Table 1).
Table 1. The Old Way Versus the New Way of Thinking About Technology.
Note. Reproduced with permission of the author (Machol, 2025).
The first author of these concluding thoughts, in a somewhat rude awakening, recognized that her GenAI policies in the classroom reflected “old way” thinking. She wonders how many of her colleagues are also old way thinkers. Perhaps a crucial enabling condition for human-centered, GenAI-enhanced, business education is that all faculty members start thinking about technology in the new way: as an amplifier, not a replacement.
Technological advancement signals change not only in our classrooms but also across higher education. We leave it to other authors to address the disruption of higher education. This Curated collection focuses on faculty, who have long been the authorities of their classrooms. Our expertise was our calling card. With the Singularity looming, what is our new value proposition? How do we move forward and remain human-centered in this brave new AI-driven world? It is our hope that bringing empathy to our relationship with and use of GenAI will light the path.
Supplemental Material
Supplemental material (sj-docx-1-jmi-10.1177_10564926251364213) for “Human-Centered Business Education in an Artificial Intelligence-Driven World” by Cynthia V. Fukami, Aimee L. Hamilton, Christine Rivers, Anna Holland, Mairead Brady, Martin Fellenz, Scott J. Allen, Leonardo Caporarello, Denis Fischbacher-Smith, Moira Fischbacher-Smith, Stephan von Delft, Dennis Wittmer, Douglas Allen, Farzin Aghdasi, Jeanne M. Brett, Lily Morse, and Derek Leben, Journal of Management Inquiry.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
