Abstract
The widespread applicability of generative artificial intelligence (AI) in various organizational settings has led to the realization that ethical guidelines for responsible use are needed. Undoubtedly, the treatment of operational topics in corporate settings is expected to differ from their treatment in human service organizations, such as educational institutions. However, the significantly higher volume of development and utilization of AI technologies in private corporations has also been accompanied by a more organized and advanced effort to address the gap in needed ethical standards. This article discusses core principles, values and decision-making trends in AI ethics, as reflected in the recent business literature, and it conceptualizes their relevance to educational institutions and educational leaders’ strategic role.
Introduction
A growing number of studies have criticized educational institutions for an ongoing corporatization, claiming that this phenomenon has diminished the unique institutional character of educational organizations (Blum and Ullman, 2012; McCartney and Metcalfe, 2018). Nonetheless, a critical scrutiny of trends in business management scholarship and practice can benefit educational leadership scholars and practitioners alike, when such material is carefully used as a foundation for crafting adapted versions of existing industry solutions, or highlighted as erroneous actions to be avoided when developing new directions (Smith and Riley, 2010). This approach becomes especially useful when dealing with novel advancements, whose treatment cannot benefit from existing knowledge bases and policy frameworks.
Artificial intelligence (AI) constitutes such an example. In the context of this article, AI is defined as any developed machine that behaves as though it were intelligent; in other words, one that imitates the way human minds acquire, process and distribute information (Ertel, 2025). In the field of education, such systems have been applied in teaching and learning practices as well as in the enhancement of administrative procedures and managerial outcomes. The latter manifestation of the AI phenomenon, referred to below as organizational AI, is the concern of this article. Organizational AI encompasses the various applications of AI in the organizational environment, including front-end processes, such as the provision of support to service recipients, as well as back-end procedures, such as the facilitation of managerial functions (Mäntymäki et al., 2022). While widely recognized benefits occupy strategic importance in an organization's decision to adopt AI solutions, including their proven operational efficiency and their functioning as a foundational layer for creating data-driven service environments (Schaefer, 2020), prolific concerns have also arisen around these systems’ limitations and drawbacks. These include adjacent ethical risks extending from susceptibility to cybercrime (Kateryna et al., 2020), accidental or malicious mishandling of model training (Mou and Meng, 2023), and biased and misleading responses (Ivanov and Webster, 2017), to the expansion of robotization and the obsolescence of physical frontline workers (Cheng and Jiang, 2020), potentially provoking stakeholder dissatisfaction and uncertainty, as well as reputational damage for the institution (Bouhia et al., 2022).
Thus, organizational AI in the corporate world has admittedly emerged as a crucial technological force that has already demonstrated the potential to continually revolutionize diverse business activities and operational functions, extending from marketing and entrepreneurship to operational management and customer support, among others (Oueslati and Ayari, 2024). Indeed, AI appears to be growing faster than regulators and lawmakers can react to it (Cannarsa, 2021), leaving individual organizations, and the leaders who represent them as moral authority figures, expected to make decisions that prioritize the well-being of not only their operations but also their wide spectrum of stakeholders (Anshari et al., 2023). The novelty of these situations adds a significant layer of complexity for organizations, especially educational organizations, where technological management and the integration of innovation have traditionally suffered from either slower responses or a broad disinterest.
The purpose of this article is to highlight central trends in the more recent business literature pertaining to corporate AI ethics and to discuss their significance for educational leaders seeking effective approaches to ethical AI administration. The applicable trends are clustered in three main categories that are examined separately in the three parts of the main body of the article. The first part explores the conceptualization and operationalization of co-operative solutions to ethical challenges raised by the use and development of AI systems, coupled with the need for innovative knowledge management approaches. The second part discusses an expanded model of social responsibility and the inclusion of responsive ethical automation in the formation of novel corporate digital responsibility paradigms. The third and final part underlines the importance of thinking beyond intellectual property, which seems to have monopolized the AI ethical debates in educational management, with an emphasis on the labour market and novel challenges in the moral economy. Parenthetically, it should be underlined that generative artificial intelligence was not used at any point during the production of this article. While the ability of such systems to produce acceptable summarizations of the literature is recognized, they currently lack the capacity to create intellectually sophisticated syntheses of knowledge, with the latter serving as the ultimate goal of this article.
Knowledge management and deliberative cooperation
As AI systems progressively become more sophisticated, business leaders have undertaken efforts to introduce AI-informed solutions to a range of frontline operations and even to executive planning and strategizing (Watson et al., 2021). To successfully navigate the coming era, in which AI is anticipated to be even further embedded within the management and day-to-day administration of organizations, leaders will require a new agenda of competencies, which is expected to be further popularized and established as a required skillset for future managerial appointments. Such skills, or leadership qualities (Pierog, 2023), include digital know-how, data-driven decision-making, ethics, networking, and agility (Tasnim, 2024; Watson et al., 2021). Agility, in particular, is identified as a potential key characteristic of the model AI-informed leader. An agile leader is humble, visionary, adaptable, and engaged (Neubauer et al., 2017). Such attributes have previously been considered foundational for ethical management more broadly (Geva, 2006), and ethical technology management in particular (Brendel et al., 2021).
In the meantime, current senior leaders are assigned the task of reskilling their workforce, recruiting new talent, building an intrapreneurial culture, and managing the unprecedented changes and ethical challenges in the AI-informed workplace (Watson et al., 2021). In the pursuit of answers to the proliferating ethical challenges of AI, a growing wave of business scholars and practitioners have embraced an institutionalist approach to AI ethics, arguing that organizations should seek cooperative options and encourage the government to co-develop standardized regulatory solutions, which carry greater legitimacy than self-regulation (Ferretti, 2022). Others, while recognizing the importance of co-operating with governments to identify common solutions, also underline the significance of individual organizations developing their own policy solutions, by instilling an interest in intraorganizational cooperation through deliberative decision-making and ethical policy-making (Nahavandi, 2019). This latter approach more fully respects the idiosyncratic nature of individual organizational cultures (Weaver, 2001). This debate is crucial for educational institutions, as they have a long history of isomorphic propensities, which, although they may assist institutions in their pursuit of legitimacy, can dilute their individual institutional and community character (Ashworth et al., 2009; Fay and Zavattaro, 2016).
Additionally, while we can conceive of AI-enabled corporations as nodes participating in a network of innovative knowledge development and exchange, that web is dominated by an oligopoly of centralized mega-corporations, peripheralizing businesses that lack access to capital and thus cannot develop their own AI services (Verdegem, 2024). This reality is creating an uneven playing field that bears inequitable outcomes and monolithic representations of stakeholder interests (Montes and Goertzel, 2019). Educational institutions, as organizations that employ services from AI development businesses owing to their increasingly AI-enabled organizational character, have the potential to contribute to the development of a distributed, decentralized and democratized market for AI technology (Montes and Goertzel, 2019), by embracing cross-institutional support in a progressively more competitive global educational system (Armstrong and Ainscow, 2018).
Indeed, the early dominance of the AI space, and especially the novel generative AI space, by a few key players has created the need for joint ventures and entrepreneurial collaboration that can increase inclusivity, but also legitimacy, based on the extent of involvement of broader societal institutions (Salgado-Criado et al., 2024). Thus, educational institutions can play an integral role in this endeavour as legitimacy-granting entities, by actively soliciting and partaking in such collaborations. Furthermore, educational institutions can benefit from replicating the paradigm of joint ventures, through the formation of interinstitutional collaboratives to co-design AI systems and policies for shared implementation, since the dominance of the AI space by only a few educational institutions is also an immense risk, especially when considering how knowledge exchange and innovation have been gatekept in the past (Swartz, 2009). At the same time, institutions are presented with the opportunity to create the necessary channels of communication and equitable networks of collaboration around AI-related knowledge management, which they can subsequently utilize for other organizational functions.
Another identified point of cooperation requiring further attention, according to the business literature, is the distribution of responsibility between operators and providers of AI systems. More specifically, although AI systems can augment a corporation's efficiency and effectiveness, the growing embeddedness of AI algorithmic architectures in the physical world, as in the example of the Artificial Intelligence of Things, where human-machine interactions are deployed to enhance data management (Loureiro et al., 2021), has complicated processes of moral attribution. If an AI system causes harm to a customer, the firms that operate the system will argue that they lack control over its algorithmic basis and therefore its actions, whereas the firms that developed the system will assert that they lack control over its actual use, leading to a conundrum in the attribution of ethical responsibility (Lüthi et al., 2023). This is further complicated by the inherently subjective nature of ethics, as what is right or good to one person or group of stakeholders might not be to another, creating the need for measurable outcomes with scales that have first been developed through deliberation and communal trust and that are applicable to the whole lifecycle of conception, development, implementation, and evaluation of AI systems (Radclyffe et al., 2023). Educational leaders can potentially escape this conundrum by implementing a symbiotic approach to AI management that juxtaposes evidence-based decision-making with value-based moral decision-making (Wang, 2021).
Expanding social responsibility
Enhancing trust in AI entails convincing stakeholders and service recipients to place their trust in a sociotechnical system (Chen et al., 2023). This requires leadership to consider the wider context of AI system development and use, as well as to successfully calibrate ethical principles, including conflicting priorities and risk trade-offs, which can affect people's trust differently depending on the societal status quo and diversity in cultural interpretability (Duenser and Douglas, 2023; Seeamber and Badea, 2023). This calibration requires a socially responsible backdrop in order to be executed in an ethical manner. Of course, it is safe to assume that AI will only be as ethical as the purposes of the social system that operates it (Benthall and Goldenfein, 2021).
AI is another manifestation of the Promethean gap as conceived by Günther Anders, according to which the relations between humans and technology are characterized by a growing asynchronization (Schwarz, 2019). In the case of AI, the properties of the produced tools, especially in the form of generative AI systems, have exceeded the human ability to develop governing principles, ethical guidelines, and acceptability policies for their use. As AI's abilities keep expanding, this gap will most likely grow rather than shrink, since new, unimaginable challenges and exchanges with the unknown of innovation will follow. This gap, or what Anders has described as the outdatedness of human beings, can lead to the adoption of a pessimistic sense of uncontrollability, or of uncanniness in the Heideggerian sense (Whithy, 2015), which, however, may positively influence decision-making by enabling diverse actors to approach these technological achievements with humility and with ongoing scrutiny of not only their inputs and outcomes, but also their wider implications for the history of humanity.
Organizations are expected to recognize this reality by acknowledging that they not only have a social responsibility to comply with current and future AI regulatory frameworks, but also an integral human responsibility to weigh the trade-offs between increased AI-driven solutions and their ramifications for both internal and external stakeholders’ well-being (Fioravante, 2024). This human-centric approach, which is rooted in human rights (Yeung et al., 2020), adds a significant layer of meaningfulness to the process of ethical (self-)assessment and the avoidance of policy tokenism. Specifically, while socially responsible approaches to AI use and development have propelled the emergence of a nuanced manifestation of corporate social responsibility (CSR) known as corporate digital responsibility (CDR), businesses on a global scale regularly fail to actualize such practices, prioritizing instead the uninterrupted leveraging of the financial benefits that automation and user data collection can generate, such as customer experience personalization and cost reduction (Wirtz et al., 2023). Thus, similarly to greenwashing in the context of performative adherence to socially responsible sustainability, digital washing, that is, the communicated but not actualized adherence to AI ethical guidelines, has also been occurring (Fioravante, 2024).
This generates the need for an honest renewal of CSR, previously described as CSR 3.0 and defined as ‘a company's socially responsible strategies and practices that deal with key ethical and socio-technical issues associated with AI and related technologies, on the one hand, and leverage the power of AI and related technologies to tackle social and environmental problems, on the other’ (D’Cruz et al., 2022: 884). While general macro-approaches can offer institutional guidelines for responsible AI development and use, a dedicated CSR approach reminds us that all dimensions of ethical AI governance, including accountability, transparency, explainability, interpretability, reproducibility, fairness, inclusiveness, privacy and safety, ultimately lie at the individual level of responsibility, and specifically with the leader's ability to integrate these aspects into their day-to-day decision-making (Camilleri, 2024). A person-centered approach to social responsibility can further reinforce the need to develop related self-leadership attributes that cut vertically across organizational limits of authority and the distribution of deliverables.
As organizations worldwide face the universal consequences and challenges of the digital revolution, the argument that AI-related ethical challenges constitute a global problem requiring a global solution, potentially through the formation of an inclusive international regulatory agency, is progressively winning ground (Erdélyi and Goldsmith, 2018). However, not only can a global solution with internationally agreeable terms be stalled by prolonged deliberations and face immense hardship in its implementation, but ongoing attempts to develop internationally relevant AI ethical principles for both private corporations and public institutions, such as those led by the Organization for Economic Cooperation and Development (OECD, 2019), also tend to carry significantly vague descriptions and recommendations, rendering individual organizations the primary agents of moral authority. Therefore, there is a pragmatic need not to dismiss the importance of individual accountability for responsible AI. Leaders at all levels are expected to play an immediate role in this process, as organizations and the multiple nuanced divisions within them are called to act as independent units of ethical decision-making with regard to AI development and utilization.
Automated work and the new moral economy
The vast majority of empirical studies on ethical decision-making regarding AI in corporations have shown that business leaders significantly prioritize issues related to wider social and economic responsibility, along with issues pertaining to privacy, data protection, bias, fairness, transparency, explainability, and accountability, over challenges associated with intellectual property and ownership (Rezaei et al., 2024). This stands in stark contrast to educational studies, which have so far focused primarily on issues pertaining to intellectual property (Kumar et al., 2024). As previously discussed, AI technology can help streamline a diverse portfolio of operations, adding significant workflow efficiencies. Nonetheless, its associated ethical complications also extend to the wider socio-economic sector, and more specifically the labour economy, since the growing trust in technology-based outputs can disrupt the workplace by causing job insecurity, leading employees to explore alternative careers (Pandya and Wang, 2024).
Businesses have even embraced the development and adoption of anthropomorphic characteristics for their AI systems, expecting that human-likeness in the systems’ behaviour during interactions with stakeholders and clients will reduce feelings of uncertainty, alienation, and distrust toward artificiality (Obrenovic et al., 2025). Nonetheless, this plan also propels the expansion of Industry 4.0, that is, a model of automated production that relies on the rise of data analytics, digital connectivity, and human-machine interaction (Javaid et al., 2022), beyond the manufacturing sector into the realm of human services, including education. This has created a quest for knowledge-focused workers, such as accountants in the field of business, to utilize AI systems as co-creating tools without allowing them to outperform their own capabilities, by engaging in continuous skill development and advocating for the importance of the human element in knowledge work (Sutton et al., 2018). The mastery of AI-related skills can be understood as a lifelong learning process, which can revitalize the importance of training and development endeavours at the organizational level.
As the abilities of AI systems expand, educational leaders, among others, will have to undertake the task of maintaining the significance of that human element in both the instructional and the administrative facets of their institutions. While skill development and the enhancement of technical expertise are imperative allies in support of the human element, current trends demonstrate that only some highly skilled workers are slated to succeed in this new environment, while far more will face displacement into lower-paying jobs or even permanent unemployment (D’Cruz and Noronha, 2021; D’Cruz et al., 2022), a phenomenon previously described as technological unemployment (Kim and Wolf, 2019). On the positive side, the ability of AI to boost innovation and add large-scale efficiency to organizational operations has rendered it a crucial enabler of the circular economy, that is, an economic development model that prioritizes scalable circularity in product and service development over the high levels of resource extraction, consumption and waste associated with the current unsustainable approach to economic development (Roberts et al., 2022).
A deeper dive into the qualitative characteristics of co-creating AI systems, such as generative AI text agents, has led to the realization that many of their ethical implications are closely tied to issues of quality management and work performance. For example, the overutilization of these systems can result in: (a) massive low-quality content production, also known as the lowest denominator problem; (b) a growing buffer in the communication between stakeholders, also known as the mediation problem; and (c) automated mass manipulation and disinformation, also known as the fake agenda problem (Illia et al., 2023). Therefore, while intelligent automation technologies can play an integral role in raising efficiency and even financial gains, from frontline client service provision to a novel approach to human resource management known as algorithmic management (Vrontis et al., 2022), the ethical challenges raised can have a direct negative effect on both people and their work. Stakeholder theory can assist leaders in defining the macro-effects that automation can have on their diverse constituency of stakeholders, and thus in conceiving risk-mitigation plans not only for the end users of AI systems but also for their operators (Wright and Schultz, 2018). In education, this could entail inviting administrators, educators, and students to become co-creators of leadership transitions, by responsibly considering the impact of their knowledge-based decisions on an ever-changing global reality (Orfanidis, 2023).
Conclusion
While ethical issues associated with the rapid developments in AI technologies continue to proliferate, AI applications in diverse operational settings within corporate organizations continue to expand. As such applications are also progressively introduced in educational institutions, this article aimed to identify trends in AI management in the corporate world that could potentially be relevant to the strategic reflection of educational leaders.
Firstly, a deliberative model of cooperation among institutions and an inclusive approach to knowledge management were identified as able to propel innovation, enhance the legitimacy of decisions, and attribute ethical responsibility more fairly to each involved AI agent, from conceptualization and design to day-to-day operation and evaluation. The educational administrative literature has previously indicated the need for systematic approaches to leadership that involve cooperative initiatives, which are certainly shaped, and more often constrained, by geographical contextualities (Lin et al., 2023), with the need for digital expertise exchange functioning as an optimistic avenue for extended result-driven collaboration (Armstrong et al., 2021).
Secondly, AI ethics in corporations has initiated an organized attempt to expand strategies of social responsibility. Educational leaders need to be aware of the opportunities that this approach can generate, such as introducing socially contextualized ethical decision-making, but also of the associated challenges, such as the risk of digital washing, analogous to greenwashing when sustainability is treated as a component of social responsibility. A socio-ethical approach to the administrative incorporation of AI involves an uninterrupted linkage between tangible technical implementations and the reflection on more abstract values, which nevertheless have a direct impact on the current and future social lives of humans (Hagendorff, 2020).
Lastly, educational leaders should carefully consider ways to integrate matters pertaining to automated work, as it has evidently already started affecting the experience of all three major educational stakeholder types (students, administrators, and instructors), creating the need for a new type of moral economy that is indeed technology-based yet does not neglect the significance of the human element. A moral economy perspective involves a systematic accounting of the social and environmental costs of the materialities associated with the design and functioning of AI systems (Murdock, 2018), the politics integrated in the utilization and sharing of innovation in the context of so-called high-tech modernism (Farrell and Fourcade, 2023), and the values that underpin our participation in the larger economic structures that host the digital commodity exchange on which AI solutions rely (Elder-Vass, 2018).
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
