Abstract
The surge in artificial intelligence (AI) adoption in higher education, exemplified by Large Language Models like ChatGPT, offers unprecedented opportunities for learning and collaboration while posing potential threats to academic integrity. This study investigates the complex relationship between personal factors, external factors, technological advancements, especially AI adoption, and the academic integrity of post-graduate management students.
The central element of this study is a mini-project assigned to 60 PGDM students in a Business Research Methods course. Preliminary evaluations of the project reports submitted by students suggested extensive use of AI-generated content. The submitted reports were classified by level of AI-generated content, and this classification was then integrated with student responses to a structured questionnaire grounded in eight theoretical constructs related to academic integrity.
Employing an explanatory sequential mixed-method design, the study first used discriminant analysis to identify the factors that influence students’ ethical decision-making within AI-integrated educational environments, followed by thematic analysis of students’ qualitative responses to triangulate the findings.
This research finds that peer pressure, academic stress, and perceived institutional unfairness drive management students toward unethical use of AI tools. The study highlights the need for transparent evaluation practices, mechanisms for peer accountability, institutional support systems, and clear guidelines on ethical AI use. If institutions fail to provide such support, students are more likely to engage in behaviours that may put their academic future at risk. The study has implications for teaching interventions, policy formulation, and future research that may help create a culture of integrity within AI-enhanced learning environments.
