Abstract
This study provides a detailed comparative evaluation of freely available AI-powered tools designed to support the preparation of scientific literature reviews, a core competency in researcher education and training. Adopting a mixed-methods approach, seven prominent AI tools were assessed against a predefined set of weighted criteria. The findings reveal significant variation in tool performance, underscoring the need for educators and practitioners to provide targeted guidance on tool selection. While Scispace demonstrated the strongest overall functionality (overall score: 91.5%), other tools offered notable strengths in specific niches relevant to different learning and research stages. This study contributes evidence-based insights for information science educators, trainers, and librarians on integrating these emerging technologies into research methods curricula, with the aim of enhancing the efficiency, quality, and critical skills of students and researchers in their scholarly work.
