Abstract
The exponential growth and rapid development of generative artificial intelligence (AI) tools have brought into sharp focus the need for a robust, ethical, and human-centred framework to govern their use and application. For higher education, the need for such a framework cannot be overstated. Currently, universities lack the digital infrastructure and expertise to adapt, regulate, and exploit this new technology. Moreover, the absence of clear policies on the use of generative AI confronts universities with more challenges than solutions, inverting the original promise of AI. In this paper, I offer critical reflections on the implications of AI for higher education and on the need for an inclusive, ethical, and human-centred approach to its regulation. Furthermore, I propose a four-step roadmap for ethical governance and outline what it entails for universities of the future.
Introduction
The use of Artificial Intelligence (AI) in education, including higher education, is neither a new nor a recent phenomenon; it has been in use for nearly three decades (Akour et al. 2023). The conversation about its ethical use and its impact on the future of education has been ongoing for at least 25 years (Aiken and Epstein 2000). Drawn by its promise of efficiency and cost-effectiveness, some universities were quick to embrace the new technology, leaning into AI-powered tools and agents to streamline administrative processes, transform teaching and learning, and support their knowledge management systems (KMS) (Rafik 2023). Campus closures during the COVID-19 pandemic and the learning loss of that period confronted universities with unprecedented challenges, such as shortages of skills and staff, and accelerated the transition to AI-assisted classrooms to facilitate online and mobile learning (Holmes et al. 2022). In some cases, the use of AI in higher education has proven beneficial in improving student engagement, increasing efficiency, and enhancing teaching and learning (Hannan and Liu 2023). However, the swift integration of AI within academia has outpaced the growth in evidence-based research substantiating the benefits of its application (Li and Zhang 2024). Indeed, robust evidence on the ethical and human-centred use of AI in higher education remains limited, lacking both breadth and depth (Chen et al. 2020). Furthermore, several concerns have been expressed regarding its vulnerability to errors, fabrications, plagiarism, and bias (Farrokhnia et al. 2023).
Despite various attempts to define AI, the literature suggests that AI inherently functions as a ‘black box’, and that its definition and our understanding of it should therefore adhere to a “relational epistemology” in which AI’s actions are “contextually bound and untraceable” (Bearman and Ajjawi 2023, 1160). This conceptual ambiguity has profound implications for the future of higher education. First, it underscores that our understanding of AI has not attained the maturity required for its widespread implementation. Second, the regulatory frameworks proposed to govern its ethical and human-centred use and application must be unequivocally rooted in social, environmental, and economic contexts. Third, while universities plead for resources to invest in AI-powered tools and agents, equal investments should be channelled into regulating its use, developing students’ and staff’s skills in its responsible use, and critically evaluating its implications for the future of higher education. In the absence of clear, holistic, ethical, and human-centred governance frameworks, universities have been left in a state of utter chaos (Crompton and Burke 2023). While some universities reacted quickly by banning the use of AI tools by students and staff (Yu 2023), others were engulfed by a ‘tsunami’ of AI, permitting its use by students and staff alike (Tobenkin 2024).
To mitigate the considerable risks of its growing popularity and the challenges surrounding the regulation and governance of artificial intelligence, international bodies have developed frameworks and resources to guide the ethical and human-centred application of AI, such as the European Union’s Artificial Intelligence Act, UNESCO’s Recommendation on the Ethics of Artificial Intelligence (UNESCO 2022), and the OECD’s AI Policy Observatory. While these initiatives offer important guidance for integrating AI into education, there remains a striking lack of understanding regarding its practical use within academia (Slimi and Carballido 2023). For instance, a survey conducted by Inside Higher Ed revealed that only 20 per cent of university leaders reported having a formal policy framework governing the use of AI in teaching and research (Paris et al. 2025). A key challenge facing universities lies in the recognition that developing AI regulatory compliance frameworks, and ensuring adherence to them, is resource-intensive (Abdelaal and Al Sawi 2024). Given the sector-wide shortage of skilled academic and non-academic staff, many universities struggle to navigate the complexities of AI-enhanced teaching and learning (Daniel et al. 2025).
Furthermore, while the proposed guidelines and frameworks offer valuable pointers for practitioners, university leaders, and educators, they do not necessarily speak to the realities of universities in the Global South. Faced with limitations on both human and non-human resources, the low levels of e-readiness across universities in the Global South pose significant threats.
Against this backdrop, this paper seeks to critically examine the multiple dimensions of disruption brought by the increasing reliance on generative and predictive AI in higher education. The first section takes stock of current and emerging applications of AI, such as developments in AI agents, chatbots, and Artificial General Intelligence (AGI), and their impact on teaching and learning, scientific research, and management. The second section identifies the potential risks, limitations, and ethical considerations of generative AI. The third section offers a roadmap for universities to mitigate these challenges and risks. The paper concludes by pointing to future directions for ethical, inclusive, and human-centred governance frameworks.
The Current Landscape of AI in Higher Education
The core literature on Artificial Intelligence (AI) in higher education emphasises its multifaceted and transformative benefits for teaching and learning (Pisica et al. 2023). Some examples of its potential applications include enhancing adaptive and personalised learning, supporting the design, implementation, and evaluation of online and hybrid learning (Hsiao and Chang 2024), streamlining administrative tasks (Saaida 2023), improving assessment and feedback (Rekha et al. 2024), offering accurate data and analytics for leadership (Geetha 2025), and aiding students with learning differences (O’Dea and O’Dea 2023). The growth in the application of AI, defined as “computing systems capable of emulating human-like processes such as learning, adaptation, synthesis, self-correction, and data utilization for complex tasks” (Popenici and Kerr 2017, 1), compels universities to rethink pedagogical practices of teaching and learning (Cardona et al. 2023), assessment and feedback processes (Lodge et al. 2023; Perkins et al. 2024), and online learning strategies (Amer-Yahia 2022). The ripple effects of the AI-driven digital transformation of higher education in recent years mean that universities are under increasing pressure to adapt their conventional methods of teaching and learning, research, and administration (Almusaed et al. 2023).
There is a compelling case for AI tools and agents offering cost-effective and efficient solutions to some of the persistent challenges facing universities, especially in the Global South (Slimi 2021). For instance, by harnessing the power of data analytics and algorithms, AI-powered tools can boost adaptive and personalised learning tailored to individual student needs (Taylor et al. 2021). In contexts where universities struggle with a massive intake of students each year, the integration of AI can help streamline administrative tasks such as student admissions and support services (Siddiqi 2024), diversify course design, development, and delivery (Chang and Ke 2013), facilitate data-driven decision-making and leadership (Madanchian et al. 2024), elevate the quality of educational services and hands-on pedagogies, and improve assessment and feedback methods to support better employability prospects (Simuka 2022). For educators, AI tools hold immense potential to provide affordable resources for curating engaging and interactive educational content (Adetayo et al. 2024), increasing collaboration in teaching and research (Rawas 2024), and managing large classrooms more effectively (Mounkoro et al. 2024). For students and parents, evidence shows that AI-assisted learning has increased students’ digital literacy and peer collaboration and supported adaptive, innovative, and engaging learning experiences that match their needs at different levels (Joseph et al. 2024).
Historically, universities - particularly in the Global South - have tended to resist, or be slow to adopt, new digital technologies in teaching and learning (Lubinga et al. 2023). However, the recent increase in demand for online and hybrid learning, first motivated by the need for alternatives to in-person learning during the COVID-19 closures, pressured universities to redirect investments towards online course delivery, such as the development of e-learning platforms (Broadband Commission 2021). Across OECD countries, the majority of students, educators, and administrators expect universities to continue the provision of blended and hybrid learning (OECD 2023). While increased reliance on digitally powered learning offered universities alternatives to continue teaching during the pandemic, several limitations and barriers impeded its seamless integration. Studies report that students often express feelings of isolation and boredom due to the lack of interactive learning content, which ultimately translates into higher dropout rates (Bañeres et al. 2023). In several countries in the Global South, the pedagogies of online teaching and learning remained unchanged, whether due to a lack of training or of expertise, mirroring traditional methods of teaching except that they used digital media such as videoconferencing (Biltagy 2021; Niehues-Jeuffroy and Rusnak 2020).
The absence of participatory pedagogy in online course delivery adds another layer of complexity and raises serious concerns about its effectiveness (Park 2015). Studies suggest that massive open online courses (MOOCs) often employ a one-directional approach that, from a participatory pedagogy standpoint, fails to sustain students’ motivation, involvement, and engagement (Ogunyemi et al. 2022). Additionally, the absence of an instructor to build connections among students can exacerbate feelings of loneliness, isolation, and detachment, leading to a deterioration in mental health and well-being (Kaufmann and Vallade 2022). Recent studies indicate areas where AI can offer remedies to the limitations of online, mobile, and hybrid learning. For example, AI and machine learning (ML) can offer valuable resources for educators seeking to create tailored and engaging content for personalised learning experiences (Gligorea et al. 2023). Furthermore, evidence shows that Intelligent Tutoring Systems (ITS) and AI-powered chatbots have enhanced students’ comprehension of basic concepts, language acquisition, and engagement with peers (Basri 2024).
While there is limited empirical evidence on their effectiveness in higher education, some studies have already shown a positive correlation between their implementation and improved interactions among students. For example, previous studies found that ITS outperformed teacher-led classroom instruction and non-ITS computer-based instruction, showing better results in students’ comprehension of complex subjects (Nesbit et al. 2014). Recent evidence underscored the potential of educational chatbots in delivering lessons in a responsive, interactive, and confidential manner, leading to improved learning outcomes (Chen et al. 2023). Similarly, research evaluating the effectiveness of task-oriented chatbots in supporting postgraduate students in Saudi Arabia reported higher levels of motivation among study participants compared to their peers. The study also found that participants relied on cognitive and metacognitive learning strategies while using the chatbot, thereby mitigating concerns of cognitive laziness and encouraging further research and development of chatbot systems for postgraduate education (Al-Abdullatif et al. 2023).
While further extensive and rigorous research is required to provide a comprehensive evaluation of AI chatbots in enhancing online and hybrid learning, concerns have been raised in relation to limited student engagement, lack of trust, plagiarism, and the extent to which chatbots resemble human behaviour (Almutairi et al. 2023; King 2023). These mounting concerns underscore the challenges associated with the adoption of chatbots and their negative impact on students. To significantly improve students’ learning experiences, research in this area proposes the incorporation of human-like avatars, gamification elements, and emotional intelligence into chatbots to increase students' engagement (Wu and Yu 2023). This is well-supported by evidence from another study that explored the benefits and limitations of using AI-generated avatars in the redesign of business ethics material in a postgraduate course, and found overall positive perceptions among students and improved critical thinking and analysis skills (Vallis et al. 2023).
In large classrooms, the integration of AI tools can revolutionize teaching practices, course preparation, and classroom management, thus alleviating teachers’ workload and allowing more time for creative and innovative pedagogies that are typically inaccessible in traditional classroom settings (Archibald et al. 2023). For instance, in India, where teachers face a significant workload due to massive student intake, the integration of AI tools has proven to benefit teachers’ class preparation and management (Chatterjee and Bhattacharjee 2020). Similarly, in the United Arab Emirates, the incorporation of AI and data analytics into curriculum design and teaching brought satisfactory learning outcomes, enhancing students’ creativity and problem-solving skills (Jarrah et al. 2023). Recent studies show that teachers increasingly rely on AI to grade students’ assignments, identify incidents of plagiarism, and ensure academic integrity (Zentner 2022). Where it has been tried, the use of AI tools for academic assessment has enhanced learners’ ability to identify errors and delivered tailored, objective feedback on essay writing (Putra et al. 2023). Another study suggests that AI-enhanced grading reduces human subjectivity and improves consistency and fairness compared with traditional human grading (Gobrecht et al. 2024). This extends to other forms of assessment: for instance, emotionally intelligent conversational agents can simulate a mock viva-voce setting, offering students innovative and interactive tools with which to prepare and thus alleviating the anxieties associated with oral examinations (Alaswad et al. 2023).
The use of AI tools and analytics can iteratively improve accessibility by addressing the diverse needs of students with learning differences. Evidence has shown that, when integrated properly, AI tools can support the identification of at-risk students and provide educators with resources for remedial support (Saad and Tounkara 2023). Moreover, AI-generated voice-overs for educational materials can improve students’ accessibility and comprehension of course material (Kit et al. 2023). Recent developments in AI, such as agentic AI, mean that current models are increasingly equipped with a “cognitive architecture” as a foundational framework, enabling them to act, behave, and reason autonomously, with minimal or no human intervention (Wiesinger et al. 2024, 5). While research on agentic AI in education is still in its nascent stages, emerging studies have begun to outline its potential benefits, risks, and limitations. The evidence strongly suggests that AI agents “can significantly lower the barriers to creating effective, engaging simulations, opening up new possibilities for experiential learning at scale” (Mollick et al. 2024, 1). Other studies suggest that AI agents hold promising potential to enhance assessment and feedback and reflective practices, and to increase the effectiveness of administration and management practices (Yusuf et al. 2024, 2025).
Emerging studies indicate that the benefits of AI for universities extend beyond teaching and learning to improving administration, governance, and decision-making. Administrative systems powered by AI can streamline routine tasks, such as admissions, enrolment, and financial aid processing, thereby allowing for efficient resource allocation (Saaida 2023). Universities that incorporate AI-powered Knowledge Management Systems (KMS) report high levels of accuracy in planning and resource management, which can lead to increased efficiency, productivity, collaboration, and general satisfaction among both academic staff and students (Galgotia and Lakshmi 2023). Furthermore, the use of process mining and rule-based artificial intelligence has facilitated the analysis, detection, and visualisation of deviations in students’ study paths, based on data extracted from campus management systems and study programmes (Wagner et al. 2023).
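To make the idea of rule-based study-path checking concrete, the following is a minimal illustrative sketch only. The course codes, the prescribed plan, and the two rules are invented for illustration; this does not reproduce the process-mining system described in the cited study.

```python
# Illustrative sketch: flag deviations of a student's course history from
# a prescribed study plan. All course codes and the plan are hypothetical.

PLAN = ["MATH101", "MATH102", "STAT201", "STAT301"]  # prescribed order


def path_deviations(taken: list[str], plan: list[str] = PLAN) -> list[str]:
    """Return human-readable deviations of `taken` from the prescribed plan."""
    deviations = []
    # Rule 1: courses required by the plan but absent from the record.
    for course in plan:
        if course not in taken:
            deviations.append(f"missing: {course}")
    # Rule 2: planned courses taken out of the prescribed order.
    positions = {c: i for i, c in enumerate(plan)}
    filtered = [c for c in taken if c in positions]
    for earlier, later in zip(filtered, filtered[1:]):
        if positions[earlier] > positions[later]:
            deviations.append(f"out of order: {earlier} before {later}")
    return deviations


print(path_deviations(["MATH101", "STAT201", "MATH102"]))
# → ['missing: STAT301', 'out of order: STAT201 before MATH102']
```

A real campus-management integration would, of course, extract event logs and prerequisite rules from institutional systems rather than hard-code them; the sketch only shows the shape of the rule-based comparison.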
Developing robust quality assurance (QA) systems is paramount for fostering vibrant teaching and learning and scientific research ecosystems (Mishra 2019). Universities, particularly in the Global South, often encounter significant challenges in maintaining high standards of quality in classroom and online teaching and learning (Kaliisa and Picard 2017; Sehoole et al. 2023). For instance, universities in the Arab region face monumental barriers in conducting independent internal and external QA assessments, as well as in establishing high-quality standards for teaching, research, administration, governance, and resource planning (Badran et al. 2019). A combination of factors, including underfunding, a lack of technical expertise, and the absence of a quality culture, contributes to the deteriorating state of quality assurance (Keser 2015). Recent evidence suggests that AI data analytics, for example, can leverage the power of algorithms to develop a standardised and comprehensive quality assessment framework (Chemlal 2023).
This support is particularly valuable for universities in contexts where inadequate funding for higher education has led to the deterioration of infrastructure and human resources. For instance, research conducted in Philippine higher education illustrates how AI-enabled Quality Management Systems (QMS) generated accurate and precise audit reports, facilitating the development and implementation of globally competitive educational policies, programs, and accreditation standards (Tobias et al. 2023). According to another study encompassing 28 educational institutions in the UAE, the adoption of AI has enhanced capacities for smart learning and significantly improved educational management systems (Akour et al. 2023).
The Need for Robust, Ethical, and Human-Centred Governance Frameworks
The launch of OpenAI’s ChatGPT in November 2022 marked a major shift in the use of generative AI and Large Language Models (LLMs) in higher education. The COVID-19 pandemic and the subsequent rapid adoption of digital technology to facilitate online and hybrid learning served as a precursor to the widespread use of generative AI. Notably, one appealing aspect that attracted both students and teachers was accessibility - requiring no training, fees, or sophisticated resources - making its integration into classrooms inevitable (Hashmi and Bal 2024). Reactions to the growing use of generative AI in higher education have been mixed, as is typical with the introduction of any new digital technology (Singh and Hiran 2022). While some have expressed enthusiasm, citing benefits such as improved student engagement (Eager and Brunton 2023), research collaboration (Al-Zahrani 2024), and assessment and management (Ogunleye et al. 2024), others have cautioned about its negative effects on students’ learning and motivation (Fan et al. 2025) and its risks and ethical implications for academic integrity (Nguyen 2025). A number of studies have urged universities to respond firmly to the disruptions and uncertainties brought by the unsupervised and unregulated use of generative AI, and have advised against its rapid adoption without first setting proper safety frameworks to protect the privacy, agency, and creativity of staff and students (Cotton et al. 2023; Yaroshenko and Iaroshenko 2023).
Although it is still too early to determine conclusively, present studies offer some indication of a positive correlation between the integration of generative AI tools like ChatGPT and the enhancement of learning and teaching (Baidoo-Anu and Ansah 2023). However, a primary challenge lies in the limited evidence available on the benefits of generative AI in educational contexts (Chaka 2023). AI researchers and ethicists have advised a cautious approach and emphasised the need for further empirical investigation into the use of generative AI before widespread adoption (Essa et al. 2023). Furthermore, the lack of a clear understanding of its purported benefits, such as adaptive and personalised learning, hinders efforts to measure its effectiveness and efficiency. More often, these terms are “opaque, nebulous, and not distinctly understood by educators, particularly as the technology itself is still in its formative stages” (Taylor et al. 2021, 18).
One aspect that has drawn particular attention is the impressive performance of generative AI tools in standardized exams, which, unsurprisingly, raised valid concerns among educators, prompting a thorough re-evaluation of feedback and assessment practices. For example, a study assigned ChatGPT to complete the Test of Understanding in College Economics (TUCE), a standardized economics knowledge assessment in the United States. The results revealed that ChatGPT scored in the 91st percentile for Microeconomics and the 99th percentile for Macroeconomics, surpassing the performance of students who took the TUCE exam at the end of their course. Moreover, ChatGPT consistently provided responses that outperformed the average performance of students from all institutions (Geerling et al. 2023). Similarly, in Australia, another study experimented with the use of ChatGPT in exams for STEM subjects, particularly engineering education assessments, and found that ChatGPT demonstrated proficiency in certain subjects and excelled in specific types of assessments (Nikolic et al. 2023). These findings indicate that, if left unregulated, generative AI tools may be used to facilitate cheating and plagiarism, thereby increasing the risk of academic misconduct among students. Furthermore, the findings underscore the significant challenges that generative AI poses to traditional assessment methods in higher education, highlighting the urgent need to rethink existing mechanisms of assessment, feedback, and evaluation.
While the integration of generative AI holds considerable promise for transforming current teaching and learning, research collaborations, and publication productivity, it also presents significant challenges to the integrity of the global knowledge economy. The use of generative AI tools has sparked debates about their impact on authorship, creativity, and human voices (Monte-Serrat and Cattani 2023). To address the increasing use of generative AI by students, universities have invested in AI detection tools. However, evidence suggests a substantial margin for errors and considerable limitations in identifying human authorship. A study looking into the false positives and false negatives of generative AI detection tools in education and academic research indicates that the detection rates of AI-generated abstracts are notably lower than those in other parts such as the literature sections of selected articles, suggesting a higher likelihood of falsely attributing AI-generated text to human authors (Dalalah and Dalalah 2023). Additionally, a comparison between human authorship and AI-generated authorship reveals significant overlap in their distributions, highlighting the potential for errors and inaccurate detection of AI-generated text (Akram 2023; Bellini et al. 2024).
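The two error types at issue here - human text falsely flagged as AI-generated, and AI-generated text attributed to a human - can be made precise with a short sketch. This is an illustrative example only: the detector, labels, and sample counts are invented, not drawn from the cited studies.

```python
# Illustrative sketch: false-positive and false-negative rates of a
# hypothetical AI-text detector. Labels: True = AI-generated text,
# False = human-written text; predictions use the same convention.


def detector_error_rates(labels, predictions):
    """Return (false_positive_rate, false_negative_rate).

    A false positive is human text flagged as AI-generated; a false
    negative is AI-generated text attributed to a human author.
    """
    fp = sum(1 for y, p in zip(labels, predictions) if not y and p)
    fn = sum(1 for y, p in zip(labels, predictions) if y and not p)
    humans = sum(1 for y in labels if not y)
    ais = sum(1 for y in labels if y)
    return (fp / humans if humans else 0.0,
            fn / ais if ais else 0.0)


# Four human texts, one wrongly flagged; four AI texts, two missed.
labels      = [False, False, False, False, True, True, True, True]
predictions = [False, True,  False, False, True, False, True, False]
fpr, fnr = detector_error_rates(labels, predictions)
print(fpr, fnr)  # → 0.25 0.5
```

The asymmetry the cited studies report - detection rates varying by text section, with substantial overlap between human and AI distributions - is exactly why both rates, not overall accuracy alone, should be examined before a detector's verdict is used in a misconduct case.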
Ensuring the accuracy and fairness of AI algorithms to prevent bias requires collective efforts from universities, policymakers, and governments (Kordzadeh and Ghasemaghaei 2022). The limitations on educators’ control over the selection of learning materials and the curriculum design proposed by generative AI may lead to the spread of misinformation, including the dissemination of falsehoods and inaccuracies and the presentation of content that fails to resonate with students’ lived experiences (Almela 2023). Broadly speaking, the predominance of English as the main language of instruction for most online content may lead to the extinction of native languages, thereby exacerbating disparities in access and achievement (Abdel Latif and Alhamad 2023). The utilization of generative AI in content development amplifies concerns regarding biases against non-English learning materials and the criteria used to evaluate recommended readings (Farrelly and Baker 2023). As for other operations of higher education, such as admission processes, the EU AI Act identifies the use of predictive AI as a ‘high-risk’ area (Edwards 2021) owing to its vulnerability to racial profiling (Obermeyer et al. 2019). Furthermore, it is imperative to employ AI in a manner that aligns with the overarching objectives of higher education, such as promoting critical thinking and creativity in classrooms (Gerlich 2025), rather than focusing solely on cost-effectiveness and efficiency (Al Ka’bi 2023).
By extension, concerns have been voiced about the lack of regulations to safeguard students’ privacy (Huang 2023), as well as the precarity faced by academic staff due to the potential displacement that may result from the widespread integration of AI (Acemoglu and Restrepo 2019). While regional frameworks such as the European Union’s General Data Protection Regulation (GDPR) are exemplary models and offer some guidance on data and privacy protections, universities must establish and enforce their own robust data protection measures to ensure the safety of students and staff. One way to achieve this is by requiring informed consent from both students and faculty members before collecting and using their data for AI-driven applications. Additionally, clarity and transparency should be the cornerstone of any policy on the use and sharing of data with third parties. For example, by adopting the risk-based approach outlined in the EU AI Act, the Higher Education Act for AI (HEAT-AI) offers educators and students a structured framework for engaging with AI in educational settings (Temper et al. 2025). Several countries, including Egypt, Saudi Arabia, India, Uzbekistan, and South Africa, are also developing regulatory frameworks and national strategies to govern the use of AI in higher education and scientific research (Baradei et al. 2025; Castle et al. 2025).
The race towards Artificial General Intelligence (AGI) has renewed concerns about algorithm bias, fairness, and privacy and underlined the need “for codes of conduct to ensure responsible AGI use in academic settings like homework, teaching, and recruitment” (Latif et al. 2024, 1). To this end, the AI tech industry needs to address these issues by improving transparency around how models are trained, collaborating with universities to build trust and standardise operations, and supporting higher education in adopting more robust regulatory policies for the use of generative AI.
The Roadmap: Addressing the Ramifications of AI for Higher Education
Against this backdrop, AI researchers and ethicists have underscored the urgent need for comprehensive, ethical, and human-centred governance of AI technologies (Ko 2023). Within academia, universities must inevitably confront the risks of AI’s integration into teaching and learning and prepare for the uncertainty it brings. Universities must address the impact of the automation brought by AI, and of rapid advancements in the field, on their administrative and management systems. Instead of imposing an environment of detection, universities must work collectively with students, academics, and practitioners towards cultivating the responsible and ethical use of AI. Generative and agentic AI can bring tremendous benefits to universities and are expected to revolutionize teaching and learning and reshape the professional landscape in the foreseeable future (Ahmad 2020). To this end, this section proposes a four-step roadmap to guide efforts by universities of the future:
A Paradigm Shift is Needed in Higher Education on the Adoption of AI into Teaching, Learning and Assessment
The imperative lies in ensuring that AI use, application, and integration align with the vision and ethical conduct of universities, while also safeguarding transparency, privacy, and protection for students and staff. Students’ increasing reliance on AI in various aspects of academic work necessitates that universities re-conceptualise plagiarism, assessment, and feedback (Cortinhas and Deak 2023). A shift in positionality means that universities need to reposition themselves from consumers or end users of digital technology to central players in its creation, development, utilisation, and governance. To achieve this, increased investment in resources, meticulous planning, collaboration, and continuous assessment are pivotal to ensuring the effective, ethical, and responsible use and integration of AI by staff and students. To reap the benefits of the digital transformation brought by AI, university leadership needs a strategic vision that critically and impartially evaluates its implications (Hinojo-Lucena et al. 2019).
Additionally, the seamless integration of AI-powered models and agents necessitates increased investment in digital infrastructure, the development of staff competencies, support for leadership, and the establishment of robust IT strategies for cybersecurity (Sakasa and Mawela 2023). In the era of big data, universities, particularly those in the Global South, are pressured to compete in an increasingly digitalized knowledge economy while facing budget constraints and limitations in developing human capacities, which puts them at risk of falling behind and exacerbates digital disparities on a global scale (Dhar et al. 2023). Furthermore, the slow progress of policy and regulation emerging from the Global South that speaks to its needs hinders the development of robust digital ecosystems in universities, impeding their efforts to address the disruptive changes brought about by the rapid pace of digital technological advancements such as generative AI (Raffaghelli and Sangrà 2023).
Collecting More Evidence-Based Data on the Impact of AI’s Integration in Higher Education
It must be noted, of course, that AI is not a remedy for all the sector’s challenges, barriers, and limitations. While universities are often urged to act quickly in response to changes brought by digital technology, it is pivotal to pause, critically examine, and reflect on current trends and the disruptions brought by the use of AI. As highlighted earlier in this paper, there is a lack of large-scale studies on the benefits of AI in higher education. As such, both caution and scrutiny are strongly advised when considering its adoption in all aspects of teaching and learning. While AI can offer cost-efficient resources to educators, administrators, and university leaders, AI researchers and scholars propose striking a balance between human-centred and AI-powered innovation in education and carefully assessing the potential risks associated with increased reliance on AI in decision-making processes (McConvey et al. 2023; Sain et al. 2024).
Indeed, in response to this limited evidence, there has been a recent surge in publications exploring the trends and risks of AI in academia (Soliman et al. 2023). This is a welcome development that can guide universities and educators on how best to deploy AI-powered tools across all domains of academic work. Nonetheless, especially in high-stakes decisions such as admissions, grading, and assessment, results generated by AI tools and agents should undergo vigilant monitoring and periodic auditing to ensure fairness and mitigate unintended consequences.
Empowering Teachers and Students by Channelling More Investments in Developing AI Literacy and Data Analytics Skills
To facilitate the successful integration of AI into higher education, institutions should prioritise faculty development initiatives that provide educators with capacity-building opportunities and support their proficiency in using the technology. Effective teacher-centred leadership is crucial in shaping student character and guiding students towards using generative AI agents like ChatGPT ethically and responsibly. Studies show that, even with the growing influence of AI agents, students are open to AI-supported assessments under certain conditions; however, they emphasize the importance of human interaction in feedback and evaluation processes (Braun et al. 2023). In terms of teaching and learning, at the didactic level, there is evidence that students prefer clear teaching practices that foster interaction and relationships, both between teachers and students and among students themselves (Álvarez-Álvarez and Falcon 2023).
This begins with the development of digital citizenship among educators. Universities’ efforts should be geared towards developing clear AI adoption strategies at the institutional level, modernizing existing digital infrastructure, and providing capacity-building programs to enhance the AI proficiency of academic and administrative staff. A stronger focus on teachers’ digital competencies in higher education should include the development of new competency frameworks, such as UNESCO’s AI competency framework for teachers, the inclusion of algorithm literacy in teacher training, attention to disciplinary intersections with artificial intelligence, and the safeguarding of privacy and confidentiality (Howard and Tondeur 2023).
Fostering Multi-Level International Cooperation and Empowering Universities from the Global South to Shape AI Policies and Regulations
To ensure a balanced representation of voices from different contexts, universities from the global south must actively engage in the conversation surrounding the use of AI. Universities must take a central role in shaping AI policies that prioritize respect for human rights and address the technology’s risks. Ethically rooted actions are needed to reassess AI regulations, re-evaluate public policies, and bolster investment in digital infrastructure (Roumate 2023). Efforts have recently been made to regulate the use and integration of AI in education, including the approval of the world’s first comprehensive AI law by the EU (Piachaud-Moustakis 2023). Additionally, the European Commission has published guidelines to help teachers dispel misconceptions about artificial intelligence and promote its ethical use (European Commission 2022). Similarly, UNESCO has released a new roadmap for the use of AI in education (Holmes and Miao 2023). These examples are set to stimulate critical debate among member states and other stakeholders, helping higher education systems respond proactively and effectively to the opportunities, barriers, limitations and risks presented by AI’s integration in higher education. However, there remains a pressing need for universities to develop their own policies that respond to their diverse needs across different contexts (Sam and Olbrich 2023).
Conclusion
In only three years, the world of education as we know it has undergone a profound transformation. Within this relatively brief period, the conventional understandings, practices and methods of learning and teaching have ceased to be confined to traditional classroom settings. The ramifications of the COVID-19 pandemic, followed by the introduction of accessible and open-source generative artificial intelligence agents, brought seismic changes to the social, political, and technological economies of higher education. What these changes indicate is that, sooner than anticipated, traditional pedagogical practices of learning and teaching will become, for lack of a better word, obsolete. The rapid advancement of generative AI and its use by students has brought into sharp focus the pressing need to improve its governance and regulation.
While governments have pledged tremendous financial investment in developing AI agents, evidence-based data is needed to deepen understanding among policymakers, educators and students. Technological advancements such as big data, large language models and AI present both opportunities and challenges that can be effectively harnessed for pedagogical practice and learning outcomes. Achieving this goal, however, requires fostering reciprocal and mutually beneficial relationships between diverse stakeholders, including academia, governments, civil society, multilateral organisations and the private sector. University leaders must make unswerving and sincere efforts to promote transparency and to direct investments and resources towards AI research collaboration and peer learning. Additionally, the use of AI in higher education should be understood in pedagogical terms, guided by relevant ethical and human-centred frameworks, and subjected to empirical testing to validate its efficacy.
In conclusion, this paper identified different aspects of AI applications in higher education and their potential to revolutionise teaching and learning. In the same breath, it reflected on the barriers, limitations, and challenges universities face in exploiting this highly sophisticated technology. With the end goal of advancing our understanding of AI in higher education and exploring future research directions, the paper offered a four-step roadmap for universities of the future towards an inclusive, ethical, and human-centred adoption of AI.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
