Abstract
Academic libraries in Sub-Saharan Africa and China are adopting AI-driven personalisation, yet comparative evidence on performance, trust, and metadata governance remains scarce. Our objective was to assess three knowledge-graph (KG) personalisation engines across diverse infrastructures and to propose a scalable integration model for libraries. A mixed-methods study across six institutions used a simulated repository of 600 theses, interaction logs, and feedback from 360 students and 48 staff members to compare rule-based, embedding-based, and hybrid engines. Primary outcomes were top-10 relevance and recall (precision@10 and recall@10). Perception outcomes were measured with the Knowledge Graph–Personalisation Acceptability Scale for Students (KG-PASS) and the Knowledge Graph–Transparency and Control Index (KG-TCI). Interaction behaviours included toggle frequency (toggles per session), override actions, and micro-survey feedback, while metadata audits assessed field depth, tagging quality, and semantic sparsity. Hybrid engines achieved the highest relevance, recall, and user acceptability. Greater metadata depth and tagging quality improved the user experience, whereas semantic sparsity reduced trust among lower-proficiency users. Interaction behaviour showed a mean toggle frequency of 2.5 per session, a mean dwell time of 18.9 s, a mean session duration of 10.0 min, and latencies of 1208–1562 ms. We contribute a scalable model that is technically modular (a five-layer KG architecture interoperable with Koha/DSpace, multilingual schema support, and metadata tiering) and organisationally portable (a governance loop for policy, staff training, and routine audits), offering actionable guidance for librarians, developers, and administrators to prioritise phased deployment, metadata remediation, and continuous monitoring with KG-PASS/KG-TCI.
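The primary outcome metrics named above, precision@10 and recall@10, follow their standard information-retrieval definitions. A minimal sketch of how they could be computed over a ranked result list (the function names and example documents are illustrative, not taken from the study's code):

```python
def precision_at_k(retrieved, relevant, k=10):
    """Fraction of the top-k retrieved items that are relevant."""
    top_k = retrieved[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

def recall_at_k(retrieved, relevant, k=10):
    """Fraction of all relevant items that appear in the top-k results."""
    top_k = retrieved[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / len(relevant) if relevant else 0.0

# Hypothetical example: a ranked list of thesis IDs and a relevance set.
retrieved = [f"thesis{i}" for i in range(1, 13)]
relevant = {"thesis1", "thesis3", "thesis20"}
print(precision_at_k(retrieved, relevant))  # 2 of the top 10 are relevant -> 0.2
print(recall_at_k(retrieved, relevant))     # 2 of 3 relevant found -> ~0.667
```

Both metrics are computed per query and then averaged across the evaluation set when comparing the rule-based, embedding-based, and hybrid engines.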