Abstract
As African universities increasingly adopt AI technologies for teaching, research, learning, and administration, concerns arise about safeguarding the personal data and privacy of faculty, staff, and students. The adequacy of existing legal frameworks to protect personal information amid these emerging technologies remains uncertain. This study examines how African universities can balance the opportunities of AI-enhanced education with the protection of data privacy rights. Using the Technology-Organization-Environment (TOE) framework, the study employed an explanatory sequential mixed-methods approach across universities in South Africa, Kenya, and Ghana. Quantitative data from 90 participants were analysed using SPSS (t-tests, regression, ANOVA, chi-square, correlation), while qualitative data from key informants were thematically analysed using NVivo. Findings indicate moderate AI adoption in African universities, with Kenya showing the highest rate, followed by South Africa and Ghana. Technological context and organisational readiness significantly influenced positive perceptions of data protection. However, weak enforcement of Ghana's Data Protection Act, South Africa's POPIA, and Kenya's Data Protection Act negatively affected trust. Qualitative insights revealed poor policy communication, insufficient staff training, and widespread distrust in regulatory institutions. The study proposes a three-pillar framework to guide African universities towards robust data protection and ethical AI integration, identifying institutional accountability, privacy by design, and effective legal enforcement as crucial for fostering trust in AI-driven educational systems. The research recommends strengthening institutional policies and capacity, urging universities to develop and implement comprehensive data protection policies aligned with both national and international standards.