Abstract
Introduction
The COVID-19 pandemic highlighted the potential of digital tools in clinical neuropsychology, prompting a shift from traditional paper-based assessments to digital alternatives. However, research on this topic remains limited. This study evaluates the usability of digital neuropsychological assessment tools and examines healthcare professionals’ perceptions of their reliability and efficiency in clinical settings.
Methods
A cross-sectional observational study was conducted at the Centro Neurolesi “Bonino-Pulejo.” Healthcare professionals, including neuropsychologists and clinicians, were asked to alternate between digital and traditional paper-based assessments. The usability of the digital tools was measured using the System Usability Scale (SUS), while qualitative feedback was gathered through open-ended questions.
Results
A panel of 29 healthcare professionals participated in the study. The quantitative analysis of the SUS scores revealed a mean score of 89.48 (SD = 10.12) for the digital format and 81.38 (SD = 11.49) for the traditional format. A Wilcoxon signed-rank test showed a statistically significant difference between the two conditions (p = 0.0003), indicating that participants found the digital tools to be more usable compared to the traditional assessment format. Key qualitative feedback indicated that participants appreciated the speed, efficiency, and reduced error rates of digital tools, with many noting improvements in data organization and reporting.
Conclusions
Although the study's single-center design, small sample size, and reliance on self-reported measures may limit the generalizability of the findings, the results underscore the high usability and effectiveness of digital neuropsychological assessments. Healthcare professionals reported improved efficiency, accuracy, and data organization, supporting the tools' potential integration into clinical practice. These findings highlight the promise of digital tools in modern neuropsychology, paving the way for future multicentric and longitudinal research to validate their broader applicability.
Keywords
Introduction
In recent years, technology has significantly impacted the medical field, extending to clinical neuropsychology, especially as the COVID-19 pandemic highlighted the potential of digital tools in healthcare delivery.1 This rapid shift pointed out the need to explore digital solutions in neuropsychological assessment, where traditional paper-based tools continue to dominate despite notable limitations. A key limitation of paper-based assessments is their infrequent data collection, providing only a snapshot of cognitive functioning instead of capturing fluctuations over time.2,3 Traditional assessments overlook real-time factors like fatigue and time of day, which impact cognitive performance.4,5 Studies indicate that cognitive performance fluctuates throughout the day, with older adults generally performing better in the morning than in the afternoon or evening.6,7
Additionally, these assessments are often time-consuming, limiting the capacity to assess more patients and contributing to longer wait times.8 Human error during scoring and data transcription also poses a risk, potentially affecting result accuracy and diagnostic reliability.9 Furthermore, paper-based tests lack adaptive testing capabilities, meaning that test difficulty remains fixed regardless of the participant's performance, unlike digital assessments that can dynamically adjust based on real-time responses.10 Another significant challenge is data management and storage. Paper records require considerable physical space, are susceptible to loss or damage, and make longitudinal tracking of cognitive changes more difficult. The absence of automated data integration further complicates research applications and clinical follow-ups.11 Given these challenges, the transition to digital assessments represents a promising solution, offering improved efficiency, accuracy, and accessibility in neuropsychological evaluation. Adaptive testing enables shorter, more personalized sessions by adjusting item difficulty based on previous responses, while automated scoring enhances measurement accuracy by minimizing manual errors.12,13 Digital formats also enable clinicians to conduct group assessments, allowing them to reach more patients within the same timeframe.14,15 Despite these advantages, digital assessments are not yet widely adopted. A key barrier is their perceived usability among healthcare professionals (HCPs), whose acceptance is critical for routine implementation in clinical settings. Although digital cognitive assessments can enhance testing efficiency, accuracy, and accessibility,16 many neuropsychologists express concerns about their reliability and validity compared to traditional methods.17
To encourage wider adoption, it is essential to ensure that digital tools meet the psychometric standards of traditional assessments. The shift from paper-based to digital neuropsychological tests offers the opportunity to improve efficiency and data collection.18 However, usability remains a critical factor for successful implementation: digital tools must be intuitive and accessible, ensuring that they measure cognitive abilities rather than users' proficiency with the technology.19 Furthermore, HCPs must trust these tools, emphasizing the need for adequate training and support.20,21 Most existing research has primarily focused on fully remote testing or the development of specific digital tools, often overlooking their practical implementation in clinical settings.18,22 Addressing these challenges, this study contributes to an emerging field by evaluating the usability and feasibility of digital neuropsychological tools when administered in person. Given these gaps in the literature, it is essential to investigate how digital neuropsychological assessments perform in real-world clinical settings and how they are perceived by HCPs. Unlike previous research, which has primarily examined fully remote assessments or the implementation of specific digital tests,22,23 this study focuses on the usability of digitized neuropsychological assessment tools in face-to-face contexts. Additionally, it explores HCPs' perceptions regarding their reliability, efficiency, and potential for integration into routine clinical workflows.
Methods
Study design and study population
This study is a cross-sectional observational study conducted at the IRCCS Centro Neurolesi “Bonino-Pulejo” from August 2023 to August 2024. The aim was to evaluate the usability and perception of digital neuropsychological assessment tools among HCPs. Written informed consent was obtained from all participating HCPs. The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (IRB) and Ethics Committee of IRCCS Centro Neurolesi “Bonino-Pulejo,” Messina, Italy (Ethics Approval ID: 19/2023).
The study included a diverse group of HCPs such as neuropsychologists, physiotherapists, speech therapists, and clinicians involved in cognitive and functional assessments. The inclusion criteria were: (i) a degree in a relevant healthcare field (e.g. psychology, neuropsychology, physiotherapy, speech therapy, or related disciplines); (ii) at least 1 year of experience in administering neuropsychological or functional assessments; (iii) at least basic experience with both digital and paper-based test formats, ensuring an adequate level of familiarity for a comparative evaluation; (iv) active engagement in cognitive assessment, rehabilitation, or neuropsychological testing in routine practice. Moreover, the following exclusion criteria were applied: (i) lack of prior exposure to digital neuropsychological assessments, as this could introduce variability in usability perception; (ii) limited clinical experience (i.e. professionals with less than 1 year of practice in neuropsychological or functional assessment); (iii) roles without direct involvement in patient assessment.
Procedure
Before entering the study, all HCPs attended an introductory session with an IT engineer to review the digital system's functionalities and operations. The session served as a brief refresher rather than a formal training, minimizing potential bias in usability perceptions. During the 1-year study, HCPs alternated between digital and paper-based formats for the neuropsychological assessments most used in their practice (see Table 1). Both formats were integrated into routine clinical assessments, allowing HCPs to gain practical experience with digital tools while maintaining the traditional paper method as a reference.
Table 1. Tests/scales in digital format.
Microsoft Forms, part of the Microsoft Office Suite, was used to convert the paper-based assessment tools into an interactive digital format. The scoring of responses was conducted using Microsoft Excel, ensuring automated calculation and data organization while maintaining consistency with standard clinical scoring procedures. The digital tools used in this study were adapted from existing standardized neuropsychological assessments to ensure consistency with traditional formats. While Microsoft Forms was utilized for digitization, the structure, scoring criteria, and administration procedures remained aligned with their validated paper-based counterparts. These tools were not independently pre-validated but were designed to adhere to established clinical protocols.
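The paper describes exporting responses from Microsoft Forms and scoring them automatically in Excel. As an illustration only, the same automated tally can be sketched in Python with pandas; the column names, example responses, and answer key below are hypothetical, not the study's actual test items or scoring rules.

```python
import pandas as pd

# Hypothetical export from Microsoft Forms: one row per patient,
# one column per test item (names and values are illustrative).
responses = pd.DataFrame({
    "item_1": ["apple", "pear", "apple"],
    "item_2": ["7", "7", "9"],
})

# Illustrative answer key mirroring the paper-based scoring criteria.
answer_key = {"item_1": "apple", "item_2": "7"}

# Automated scoring: 1 point per correct item, summed per patient.
responses["total_score"] = sum(
    (responses[item] == correct).astype(int)
    for item, correct in answer_key.items()
)
print(responses["total_score"].tolist())  # [2, 1, 1]
```

The point of the sketch is the design choice the paper reports: the scoring rule lives in one place (the key), so every response sheet is scored identically and manual transcription errors are avoided.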
At the end of the study period, a comprehensive evaluation was conducted to assess HCPs’ experiences and perceptions of both formats, focusing on usability factors such as ease of use, time efficiency, and perceived reliability. HCPs completed a detailed feedback questionnaire, facilitating a direct comparison of usability and perception between the digital and traditional formats.
Outcome measures
Usability and perception were assessed via a customized questionnaire completed by HCPs at the end of the study. The questionnaire included items on ease of use, efficiency, and reliability, with responses rated on a Likert scale ranging from “Strongly Agree” to “Strongly Disagree.” Additionally, open-ended questions allowed participants to provide qualitative feedback, capturing in-depth perspectives on both digital and traditional formats. The System Usability Scale (SUS), developed by Brooke, is a quick and reliable method for assessing the usability of design solutions.68 In the present study, it was also administered to quantify the usability of both the digital and traditional formats.
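For readers unfamiliar with how SUS produces its 0–100 score, the standard scoring procedure described by Brooke can be expressed compactly; this is the generic SUS formula, not code used in the study.

```python
def sus_score(item_ratings):
    """Standard SUS scoring (Brooke): 10 items rated 1-5.
    Odd-numbered (positively worded) items contribute rating - 1;
    even-numbered (negatively worded) items contribute 5 - rating;
    the sum (0-40) is multiplied by 2.5 to yield a 0-100 score."""
    assert len(item_ratings) == 10
    total = 0
    for i, rating in enumerate(item_ratings, start=1):
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5

# Maximally positive responses (5 on odd items, 1 on even items):
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Under this scoring, the study's digital-format mean of 89.48 sits well above the commonly cited acceptability range of the scale.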
Statistical analysis
The statistical analysis for this study was conducted using Jamovi (Jamovi Project).71 Demographic variables were summarized using descriptive statistics: continuous variables, such as age, were reported as means and standard deviations, while categorical variables, such as gender distribution, educational background, and profession, were presented as percentages. For quantitative analysis, SUS scores were assessed using non-parametric methods due to the small sample size and the non-normal distribution of the data. Normality was tested using the Shapiro–Wilk test, which confirmed a non-normal distribution. Consequently, a Wilcoxon signed-rank test was used to compare paired SUS scores between the digital and traditional formats, with the significance level set at p < 0.05.
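The analysis was run in Jamovi, but the same pipeline (Shapiro–Wilk normality check followed by a paired Wilcoxon signed-rank test) can be sketched in Python with SciPy. The simulated scores below are hypothetical, loosely shaped like the reported means and SDs; they are not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical paired SUS scores (NOT the study's data), one pair per rater.
digital = rng.normal(89, 10, size=29).clip(0, 100)
paper = rng.normal(81, 11, size=29).clip(0, 100)

# Shapiro-Wilk on the paired differences, mirroring the normality check.
w_stat, p_norm = stats.shapiro(digital - paper)

# Non-parametric paired comparison (Wilcoxon signed-rank test).
res = stats.wilcoxon(digital, paper)
print(f"Shapiro-Wilk p = {p_norm:.3f}; Wilcoxon p = {res.pvalue:.4f}")
```

The Wilcoxon test is applied to the pairs (each rater scores both formats), which is why it, rather than an unpaired test, matches the within-subject design described above.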
Additionally, a qualitative analysis was conducted on open-ended responses regarding participants' experiences with the digital format. This involved a thematic analysis, a method used to systematically identify, analyze, and report patterns (themes) within qualitative data, as described by Braun and Clarke.72 The analysis aimed to uncover key themes related to usability, such as speed and efficiency, error reduction, and data organization.
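Once responses are coded into themes, the per-theme percentages reported in the Results (the share of participants mentioning each theme) reduce to a simple frequency count. The sketch below uses invented coded responses and illustrative theme labels, not the study's qualitative data.

```python
from collections import Counter

# Hypothetical coded responses: the set of themes each participant mentioned
# (labels are illustrative, echoing the themes reported in the Results).
coded = [
    {"speed_efficiency", "error_reduction"},
    {"speed_efficiency", "data_organization"},
    {"ease_of_use"},
]

# Count participants per theme (sets prevent double-counting a participant).
counts = Counter(theme for themes in coded for theme in themes)
n_participants = len(coded)
for theme, c in counts.most_common():
    print(f"{theme}: {100 * c / n_participants:.0f}% of participants")
```

Using a set per participant is the key detail: a theme mentioned twice in one response still counts once, so the percentages are proportions of participants, as in the paper.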
Results
In this study, a total of 29 participants were included. The majority of the sample consisted of women (93.1%), while men accounted for only 6.9%. The mean age of the participants was 34 years, with a standard deviation of 6.93 years. Regarding educational background, 48.3% of the sample held a postgraduate specialization, while 10.3% were in internship or postgraduate training. In terms of profession, the largest group was composed of health researcher psychologists, representing 41.67% of the sample, while there was only one physiotherapist health researcher, accounting for 4.17% of the total sample. The second largest group consisted of psychology research trainees, who represented 25% of the participants. Both speech therapists and neuropsychomotor therapists constituted 8.33% each. Finally, professional research collaborators made up 4.17% of the sample. These results indicate a diverse range of skills among the HCPs involved in the study.
The quantitative analysis of the SUS scores revealed a mean score of 89.48 (SD = 10.12) for the digital format and 81.38 (SD = 11.49) for the traditional format. A Wilcoxon signed-rank test showed a statistically significant difference between the two conditions (W = 22.5, p = 0.0003), indicating that participants found the digital tools to be more usable compared to the traditional assessment format. For more details see Figure 1.

Comparison of SUS scores between traditional and digital tools. The bar graph illustrates the mean SUS scores for traditional (orange) and digital (blue) assessment tools.
In addition to the quantitative findings, qualitative feedback from participants highlighted several key themes. The main themes that emerged, along with the corresponding percentages of participants mentioning each theme, are: speed and efficiency, with 65% of participants highlighting the time savings during administration and result processing; error reduction, with 52% of participants appreciating the increased reliability of the results due to automated scoring and reduced human errors; and data organization, which was seen as a key advantage by 48% of participants, as digital data are easier to manage, store, and retrieve for further analysis. Additionally, a theme related to the ease of use of digital tools emerged, with 38% of participants finding the devices easy to use after a brief familiarization period.
These themes indicate strong acceptance of the digital format, suggesting that participants perceived significant improvements over traditional methods in terms of efficiency, accuracy, and ease of use. The responses were coded and categorized in a way that provides a clear understanding of the perceived benefits, which supports the quantitative results and points toward a potential direction for the future adoption of these tools.
Discussion
The results of this study demonstrate a high level of usability for digital neuropsychological assessment tools, as indicated by SUS scores. The mean SUS score suggests that HCPs found the digital format effective and intuitive. Previous research has shown that usability is a critical factor in the adoption of digital tools in clinical settings, as it directly influences clinicians' satisfaction and their willingness to integrate new technologies into their practice.73,74 Qualitative feedback from participants further highlighted the perceived benefits of digital tools, particularly in terms of speed and efficiency. Many respondents noted significant time savings during assessments, corroborating the findings of other studies highlighting the efficiency of digital formats in clinical workflows.75 For example, the shift to digital assessments significantly reduced the time required for patient assessments, which in turn enabled HCPs to effectively manage larger workloads.
In addition to time savings, participants also reported that digital tools enhanced the consistency of assessments.76 The reduction in errors reported by participants aligns with existing literature, which highlights the increased accuracy associated with digital assessments.77 Standardized prompts and automated scoring reduce variability across assessments, minimize human error in data entry and interpretation, and improve patient data reliability.78,79 This consistency is particularly valuable in neuropsychology, where standardized assessment protocols contribute to more accurate diagnoses and treatment plans, ultimately enhancing continuity of care.80 Furthermore, the improved data organization mentioned by participants is supported by research indicating that digital systems improve data management capabilities, allowing for easier retrieval and analysis of patient information. As healthcare becomes increasingly data-driven, integrating digital assessment tools can improve clinical decision-making and patient outcomes.81
Recent studies have increasingly focused on the usability and feasibility of digital neuropsychological assessments. For instance, Um Din et al.82 developed a supervised, tablet-based version of cognitive and manual dexterity tests to screen people with cognitive impairment. The authors demonstrated high system acceptance among older adults, despite low familiarity with technology. Similarly, Polk et al.22 evaluated remote and unsupervised cognitive assessment tools in preclinical AD samples. They reported positive usability ratings and found that remote assessments showed good construct validity with traditional neuropsychological tests.22 Moreover, baseline memory performance assessed digitally was associated with future cognitive decline, indicating the potential of such tools for monitoring Alzheimer's disease progression.83 Our study systematically evaluated the usability of digitized versions of standardized neuropsychological tests administered in person, an approach that has received limited investigation. By assessing the perceptions of HCPs who routinely use these tools, our research provides insights into their practical applicability and integration into clinical workflows.84 This combination of quantitative and qualitative approaches offers a comprehensive understanding of usability, efficiency, and potential barriers to adoption in real-world settings.85 Collectively, these studies, including ours, highlight the growing evidence supporting the usability and validity of digital neuropsychological assessments across various populations and settings.18 Therefore, the clinical implications of this study are significant, as they highlight the potential benefits of integrating digital assessments into routine practice.
By improving usability, these tools can improve workflow efficiency, reduce errors, and ultimately lead to better patient outcomes.18,22 Healthcare organizations should consider investing in training and support to facilitate the transition to digital formats.86 Additionally, ongoing evaluations of usability and effectiveness are essential to ensure that these tools continue to meet the needs of both healthcare providers and patients.87
Strengths, limitations, and future perspectives
Previous research on digital neuropsychological assessments has primarily focused on fully remote testing or developing specific digital tools, often neglecting their practical implementation in clinical settings.22,23 Our study addresses this gap by evaluating the usability of digitalized versions of in-person standardized neuropsychological tests, an approach that has received little attention. Additionally, we assessed the perceptions of HCPs who regularly use these tools, providing insights into their practical applicability and integration into clinical workflows. By combining quantitative and qualitative approaches, this study offers a comprehensive understanding of usability, efficiency, and potential barriers to adoption in real-world settings.
Despite these contributions, several limitations must be acknowledged. Although informative, the sample of 29 participants may not fully capture the diversity of HCPs using digital assessment tools. Additionally, the monocentric design limited the ability to conduct statistically robust subgroup analyses across different professional categories. However, the inclusion of a wide range of professionals ensured a broad assessment of usability perceptions. Future research should expand the sample size across more institutions to allow for a more detailed assessment of interprofessional differences. A larger, multicenter study could provide more generalizable results, accounting for institutional and geographical differences in digital tool adoption and usability perceptions. Such studies could also explore whether usability perceptions vary across healthcare settings with different levels of technological infrastructure.
Another limitation is the gender distribution, as 93.1% of participants were female, which may impact the generalizability of the findings. Additionally, the reliance on self-report measures introduces potential biases, such as social desirability bias or overestimation of usability. The study also did not include a patient-centered assessment, which is critical to understanding the broader impact of digital tools on clinical outcomes and patient experience. Moreover, the cross-sectional design captures only a snapshot of usability perceptions at one point in time, preventing an analysis of how these perceptions evolve with increased exposure. This study also does not extensively address barriers to adoption, such as cost, training, and accessibility, which are key factors influencing digital assessment implementation. Cost remains a key barrier, as some digital neuropsychological tools require expensive licensing fees or specialized hardware, which may not be feasible for all institutions. Training requirements must also be considered: while digital tools can streamline assessment workflows, their effectiveness depends on adequate clinician training to ensure accurate administration and interpretation. Finally, infrastructure disparities across institutions may affect accessibility, with some facilities lacking the necessary technological resources to implement digital assessments seamlessly. Future studies should focus on identifying and addressing these challenges to facilitate smooth integration into routine workflows.
Finally, a potential limitation of this study is that participants had prior exposure to digital tools, as required by the inclusion criteria. While this ensured that usability evaluations were not confounded by a complete lack of digital familiarity, it may limit the generalizability of the findings to professionals without previous experience with digital neuropsychological assessments. Future studies should include clinicians with varying levels of digital proficiency to assess whether usability perceptions differ based on prior exposure. This would provide insights into potential barriers to adoption for individuals less experienced with digital tools and inform tailored training strategies.
Despite these limitations, our findings provide an important foundation for further research and clinical advances in digital neuropsychological assessments. Increasing sample size and diversity across multiple institutions will improve the generalizability of the findings. At the same time, longitudinal studies could provide deeper insights into how perceptions of usability change over time with repeated use. Incorporating patient perspectives would also offer a more holistic assessment of the impact of digital tools on neuropsychological care. Finally, identifying cost-effective implementation strategies, improving training protocols, and enhancing accessibility could help optimize the adoption of digital tools in clinical practice.
Conclusions
In conclusion, this study highlights the promising potential of digital neuropsychological assessment tools to enhance clinical practice. With a high SUS score, HCPs found these tools effective and intuitive, demonstrating their readiness to integrate digital technologies into their workflow. This aligns with previous research highlighting that usability is critical for the successful integration of digital tools in healthcare settings, influencing both clinician satisfaction and technology adoption.
The study also shows significant time savings, with digital tools enabling quicker assessments and allowing clinicians to manage larger caseloads, which supports findings from other studies on the efficiency of digital formats.
Moreover, the consistency and accuracy of assessments were enhanced, with fewer errors reported by participants, which is crucial for neuropsychological evaluations.
Digital tools help standardize processes, reduce human error, and improve data reliability, contributing to more accurate diagnoses and treatment plans.
Additionally, the improved organization of patient data facilitates better decision-making and patient outcomes.
Future research should focus on multicenter validation studies to ensure that these findings are generalizable across diverse clinical settings. Investigating longitudinal usability perceptions will also be crucial to assess whether clinicians’ acceptance of digital tools changes over time as they gain more experience. Furthermore, patient-centered evaluations should complement clinician perspectives to explore how digital assessments impact patient engagement, adherence, and clinical outcomes. Integrating digital assessments into routine practice can improve efficiency, reduce errors, and enhance patient care. To maximize their benefits, healthcare organizations should prioritize training and support programs, ensuring a smooth transition to digital workflows while continuously evaluating usability to meet evolving clinical needs.
Footnotes
ORCID iDs
Ethics considerations
The study was conducted in compliance with local legislation and institutional requirements. Participants provided written informed consent to participate in the study. Written informed consent was also obtained from the individuals for the publication of any potentially identifiable images or data included in this article.
Author contributions/CRediT
Conceptualization, methodology, and writing—original draft preparation were done by FMG, MGM; software was done by FMG, PDP; validation was done by MGM, AR; formal analysis and data curation were done by FMG, MGM, PDP; investigation was done by FMG, MGM, AQ, RSC; resources were provided by AR, MB, FB; writing—review and editing was done by MGM, AR, MB, FB, RSC; visualization and supervision were done by AQ, RSC; project administration was done by FMF, MGM, AR, PDP, AQ, RSC. All authors have read and agreed to the published version of the manuscript.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Current Research Funds, 2025, Ministry of Health, Italy.
Conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data availability
The data supporting the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
