Abstract
In the digital and intelligent era, integrating AI technology into library services has spurred innovation but also introduced potential risks. This paper identifies and assesses AI-related risks in smart libraries from a technostress perspective, proposing governance strategies to enhance service quality and provide a reference for smart library development. Using content analysis and technostress theory, potential risk sources of AI applications in smart libraries are analyzed across five dimensions: techno-overload, techno-invasion, techno-complexity, techno-insecurity, and techno-uncertainty. The Decision Making Trial and Evaluation Laboratory (DEMATEL) method is then applied to assess causal relationships among risks, revealing two categories: technical-level risks (AI malfunction, emotional disconnection, AI misjudgment, algorithmic bias, and responsibility ambiguity) and societal-level risks (security threats, fairness challenges, regulatory ambiguity, copyright concerns, and occupational maladaptation). Key findings highlight AI malfunction and AI misjudgment as driving risks, while regulatory ambiguity and occupational maladaptation are resultant risks. The paper proposes hierarchical risk governance strategies, including prioritizing high-driving risks, managing resultant risks, and dynamically adjusting adaptation rules.
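The DEMATEL classification described above (driving vs. resultant risks) can be sketched computationally. The standard procedure normalizes an expert-rated direct-influence matrix, derives the total-relation matrix, and classifies each factor by the sign of its net influence. The matrix values and the four-risk subset below are illustrative assumptions, not the study's actual expert data:

```python
import numpy as np

# Hypothetical direct-influence matrix (0-4 expert ratings) over an
# illustrative subset of the paper's ten risks; the real study rates
# all ten risk factors pairwise.
risks = ["AI malfunction", "AI misjudgment",
         "regulatory ambiguity", "occupational maladaptation"]
A = np.array([
    [0, 3, 2, 2],
    [3, 0, 2, 2],
    [1, 1, 0, 2],
    [0, 1, 1, 0],
], dtype=float)

# Step 1: normalize by the largest row sum so the series converges.
D = A / A.sum(axis=1).max()

# Step 2: total-relation matrix T = D (I - D)^(-1),
# i.e. the sum of direct and all indirect influence paths.
T = D @ np.linalg.inv(np.eye(len(A)) - D)

# Step 3: classify by net influence.
d = T.sum(axis=1)        # influence each risk dispatches
r = T.sum(axis=0)        # influence each risk receives
prominence = d + r       # overall importance of the risk
relation = d - r         # > 0: driving (cause); < 0: resultant (effect)

for name, rel in zip(risks, relation):
    print(f"{name}: {'driving' if rel > 0 else 'resultant'}")
```

Under these assumed ratings, AI malfunction and AI misjudgment emerge as driving risks (positive net influence) and occupational maladaptation as resultant, mirroring the causal structure the abstract reports.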
