Abstract
The rapid integration of artificial intelligence (AI) into healthcare is a double-edged development, making systematic assessment of users’ benefit-risk perceptions critical. However, a unified, multidimensional framework for such measurement is currently lacking. This review aims to systematically identify and synthesize existing measurement instruments for users’ benefit-risk perceptions of AI in healthcare, and to propose an integrated framework based on the evidence. Guided by Arksey and O’Malley’s 5-stage framework, we retrieved quantitative studies describing measurement dimensions for users’ benefit-risk perceptions regarding AI in healthcare. The search covered 8 Chinese and English databases from their inception to December 6, 2025. Two reviewers independently performed study screening and data extraction, with subsequent synthesis and visual presentation of findings. Based on a synthesis of 49 eligible studies, we developed a measurement framework encompassing 5 benefit and 6 risk dimensions, in which technological attributes often exhibit a dual nature. Current measurement instruments consistently emphasize functional benefits, cost benefits, and privacy risks across diverse healthcare contexts, user groups, and geographical regions. In contrast, social benefits and capability development risks generally receive less consideration. Furthermore, variations in instrument design are primarily reflected at the subdimension level. This framework extends classical technology acceptance theories, provides a theoretical basis for standardized instrument development, and offers guidance for the clinical implementation of AI in healthcare. Future research should explore how perceptions evolve with advancing AI maturity and clinical integration to support responsible adoption.
Introduction
Artificial intelligence (AI) refers to computer systems that perform tasks or make decisions that usually require human intelligence. 1 With its breakthrough technological advancements, AI has demonstrated immense potential in enhancing the quality of healthcare services and improving patient outcomes. 2 The Guidelines on the Ethics and Governance of Artificial Intelligence for Health issued by the World Health Organization (WHO) 3 mark a global consensus that healthcare AI has emerged as a pivotal driver for optimizing resource allocation and promoting the equity of diagnosis and treatment. Driven by the confluence of policy support, technological innovation, and clinical demands, AI technology is integrating into the healthcare sector at an unprecedented pace,4,5 while continuously reshaping the global healthcare delivery paradigm. 6 Practical evidence has shown that existing healthcare AI systems are able to autonomously or semi-autonomously execute a diverse array of healthcare tasks, such as image-based medical diagnosis, 7 treatment recommendation, 8 surgical intervention, and healthcare administration.9,10 Indeed, studies indicate that AI tools often outperform human clinicians in medical image assessment and analysis.6,11 In the long run, AI is not only expected to enhance patient safety and health outcomes by minimizing human errors, but also to alleviate the routine workload of healthcare professionals—by taking over repetitive tasks, efficiently managing patient health data, and delivering mental health support, among others—enabling them to focus on more complex clinical tasks.6,12
However, while improving service quality and efficiency, medical AI technology may also introduce multidimensional patient safety risks—the core dilemma of its ‘double-edged sword’ effect. 13 Scientist Stephen Hawking once warned that ‘if the rapid development of artificial intelligence cannot be effectively controlled, its impact may be catastrophic’. 14 As a sentinel mechanism for patient safety and the basis of decision-making behavior, risk perception refers to the process through which individuals collect and interpret risk information and form subjective judgments. 15 In the context of medical AI applications, this process manifests as a systematic evaluation of technological uncertainty and potential adverse consequences.16,17 Meanwhile, issues such as privacy risks due to data breaches, fairness controversies caused by algorithmic biases, and decision-making uncertainties arising from technological black boxes are constantly heightening users’ perception of AI risks.18-20 In contrast, benefit perception refers to individuals’ subjective cognition of the potential positive outcomes or impacts of medical AI technology based on their usage expectations.21,22 The coexistence of these positive expectations of technological benefits and negative concerns about its risks constitutes the core contradiction determining users’ acceptance of medical AI.23-25
Currently, research on the willingness to adopt medical AI technology is primarily based on classic theoretical frameworks such as the Technology Acceptance Model (TAM) 26 and the Unified Theory of Acceptance and Use of Technology (UTAUT). 27 As research advances, scholars have increasingly recognized the significant role of users’ benefit-risk perceptions of AI in this process. Consequently, they have also begun to incorporate the Benefit-Risk Assessment (BRA) model into their research frameworks to establish a more comprehensive analytical perspective. 28 However, due to the relative novelty of modern AI systems, the diversified benefit-risk perceptions of medical AI among stakeholders—such as healthcare professionals, patients, and caregivers—have not yet been fully clarified. Most existing studies either remain confined to isolated investigations of individual dimensions (eg, privacy risks 29 or occupational displacement risks 30 ), or settle for a generalized assessment of overall benefits and risks, neglecting factors such as potential socioeconomic risks. 31 This has resulted in a notable gap between theoretical research and clinical practice. Studies have shown that benefit-risk perceptions of AI in healthcare scenarios encompass multiple dimensions.32,33 For instance, the European Parliament’s Directorate General for Parliamentary Research Services (EPRS) has identified 7 key risks of potential harm to patients posed by AI technologies. 34 Variations in benefit-risk perceptions often exist across AI user groups with distinct characteristics and different healthcare application scenarios. 35 Healthcare professionals may place greater emphasis on dimensions such as workflow optimization and liability attribution, 36 while patients tend to be more concerned with personal privacy, humanistic care, and related issues. 37 Such perceptual variations highlight the complexity of interactions between AI technologies and social, economic, and ethical dimensions.
Given the fragmented exploration and generalized assessment of AI perception in existing studies, as well as the current lack of a validated, mature scale for measuring users’ benefit-risk perceptions of AI in healthcare, 38 it is necessary to systematically synthesize and integrate the measurement dimensions covered by current research instruments—that is, the diverse theoretical facets or observational perspectives of the construct of AI benefit-risk perceptions. As a research methodology tailored to exploring emerging and heterogeneous topics, a scoping review is particularly well-suited to the systematic synthesis of scattered research findings and conceptual dimensions within a field. 39 This approach aligns with the aim of the present study to integrate perceptual dimensions across different user groups and application scenarios, and to construct a multidimensional framework for users’ benefit-risk perceptions of AI in healthcare. Not only can this framework extend the explanatory power of existing technology acceptance theories regarding core constructs such as perceived usefulness and adoption intention, but it can also provide essential theoretical support for the subsequent development of a reliable and valid measurement instrument for users’ benefit-risk perceptions of AI in healthcare. Furthermore, it offers scientific guidance for the clinical implementation of related AI technologies, ultimately facilitating the integration of AI into healthcare service systems in a more responsible and acceptable manner.
Methods
This scoping review followed the 5-stage framework described by Arksey and O’Malley, 40 incorporating methodological steps of the Joanna Briggs Institute (JBI). 41 The reporting was based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) Checklist (Supplemental Table 1). 39 The study protocol was registered with the Open Science Framework (OSF) registry (https://doi.org/10.17605/OSF.IO/GV8JQ).
Stage 1: Identifying the Research Question
This review explored existing literature on measuring users’ benefit or risk perceptions of AI in healthcare. The research questions were designed to cover the breadth of the available literature and were as follows: (1) What specific dimensions are used to measure current AI users’ benefit-risk perceptions in healthcare? (2) When addressing different user groups or distinct healthcare scenarios, do the measurement dimensions for AI users’ benefit-risk perceptions vary in design? (3) Do the target user groups of interest and the design of measurement dimensions for AI benefit-risk perceptions differ between Chinese studies and international studies?
Stage 2: Identifying Relevant Studies
The searches were conducted in the following 8 electronic databases: PubMed, Embase, Web of Science, EBSCO, China National Knowledge Infrastructure (CNKI), Wanfang Data Knowledge Service Platform, China VIP Database, and SinoMed, from inception (the earliest available date in each database) to December 6, 2025. The following keywords and the corresponding Medical Subject Heading (MeSH) terms were combined to search the databases: ‘perceived benefit’, ‘benefit perception’, ‘positive perception’, ‘perceived advantage*’, ‘perceived gain*’, ‘perceived value’, ‘perceived outcome’, ‘perceived risk’, ‘risk perception’, ‘perceived threat’, ‘perceived concern’, ‘perceived danger’, ‘Artificial intelligen*’, ‘AI’, ‘robot*’, ‘chatbot’, ‘smart healthcare service*’, ‘computer-assisted’, ‘computer-aided’, ‘large language model*’, ‘chatGPT’, ‘machine learning’, ‘deep learning’, ‘intelligent assistive technolog*’, ‘health*’, ‘medical’, ‘care*’, ‘clinical’, ‘diagnos*’, ‘therap*’, ‘treat*’, ‘rehab*’, ‘public health’, ‘nurs*’, ‘patient*’, and ‘physician*’. Supplemental Table 2 shows the detailed search strategy.
We further identified potentially eligible studies by searching gray literature using Google Scholar and performed manual searches of the reference lists from identified studies and related reviews.
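For illustration, the minimal Python sketch below shows how a Boolean search string of the kind used in this stage can be assembled from concept blocks (perception terms, AI terms, and healthcare terms). The block grouping and the term subsets shown are our simplification; the exact syntax and field tags differ across databases (see Supplemental Table 2).

```python
# Minimal sketch: assembling a Boolean search string from three concept
# blocks. Illustrative only; per-database syntax is in Supplemental Table 2.

perception_terms = [
    "perceived benefit", "benefit perception", "positive perception",
    "perceived advantage*", "perceived risk", "risk perception",
    "perceived threat", "perceived concern",
]
ai_terms = [
    "artificial intelligen*", "AI", "robot*", "chatbot",
    "large language model*", "machine learning", "deep learning",
]
health_terms = [
    "health*", "medical", "care*", "clinical", "diagnos*",
    "nurs*", "patient*", "physician*",
]

def or_block(terms: list[str]) -> str:
    """Join synonyms with OR, quoting multiword phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

# Concept blocks are combined with AND to require all three facets.
query = " AND ".join(or_block(b) for b in (perception_terms, ai_terms, health_terms))
print(query)  # ("perceived benefit" OR ...) AND (...) AND (...)
```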
Stage 3: Study Selection
The inclusion criteria were: (1) The study participants were users or potential users of AI or intelligent assistive technologies in healthcare; (2) The research variables encompassed benefit and/or risk perceptions; (3) The measurement instruments for benefit-risk perceptions were questionnaires or scales, with specific perceptual dimensions or constructs extractable; (4) The objects of benefit-risk perceptions were AI or intelligent assistive technologies.
The exclusion criteria were: (1) Narrative reviews, qualitative studies, comments, editorials, meeting abstracts, or letters; (2) Literature with full-text unavailable or incomplete information; (3) Literature published in languages other than Chinese or English; (4) Duplicate publications.
Duplicate records were removed using NoteExpress software. Two researchers (HS and YX) independently screened records by reviewing titles and abstracts, followed by full-text screening based on the inclusion/exclusion criteria. The results were cross-checked for consistency. Any discrepancies were resolved through discussion between the 2 researchers or, if necessary, by consultation with a third researcher (HH) to reach a consensus on final inclusion. The quality assessment of the included studies was conducted by 1 author (YX) using the methodology checklist for cross-sectional studies developed by the Agency for Healthcare Research and Quality (AHRQ). 42 The purpose of this assessment was to clarify the rigor of each study in aspects such as design, participant selection, and confounding control, thereby providing a critical basis for evaluating the reliability of the synthesized measurement instrument dimensions. On the 11-point scale, studies were classified as low quality (scores 0-3, high risk of bias), moderate quality (scores 4-7, moderate risk of bias), or high quality (scores 8-11, low risk of bias).
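The classification rule can be expressed as a minimal sketch, assuming integer scores on the 0-11 AHRQ checklist:

```python
# Minimal sketch of the AHRQ quality classification described above
# (assumption: integer checklist scores from 0 to 11).

def classify_ahrq(score: int) -> str:
    """Map an AHRQ checklist score (0-11) to a quality/risk-of-bias band."""
    if not 0 <= score <= 11:
        raise ValueError("AHRQ score must be between 0 and 11")
    if score <= 3:
        return "low quality (high risk of bias)"
    if score <= 7:
        return "moderate quality (moderate risk of bias)"
    return "high quality (low risk of bias)"

assert classify_ahrq(2).startswith("low")
assert classify_ahrq(5).startswith("moderate")
assert classify_ahrq(9).startswith("high")
```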
Stage 4: Charting the Data
Two researchers (HS and YX) independently extracted data from eligible literature for final analysis, with any discrepancies resolved by discussion. The extracted data included: authors, publication year, country, study participants, types of intelligent products, healthcare application scenarios, dimensions of benefit-risk perceptions, sources of measurement instruments, and reliability and validity of the instruments. Furthermore, it should be clarified that the extraction and coding of the dimensions of benefit-risk perceptions from the literature were conducted based on our prior grounded theory study focusing on healthcare professionals’ perceptions of medical AI. In this earlier study, semi-structured in-depth interviews were conducted with 18 healthcare professionals to systematically gather data regarding their anticipated benefits and perceived risks related to medical AI applications. Through a 3-stage coding process—including open coding, axial coding, and selective coding—initial dimensions of AI benefit-risk perceptions were refined. The literature coding for the present scoping review was subsequently carried out using this initial dimensional framework as a foundation.
Results
Stage 5: Collating, Summarizing, and Reporting the Results
Literature Search
A total of 1429 records were initially identified through database searches, with 7 additional records included via manual searching of other resources. After removing duplicates, 897 studies underwent title and abstract screening. Subsequently, 107 studies were read in full, of which 58 were excluded based on the predefined inclusion/exclusion criteria. Ultimately, 49 studies were included in this scoping review. The detailed study selection procedure is presented in Figure 1.
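The reported flow can be verified with simple arithmetic; the sketch below assumes, as implied by the text, that all identified records entered deduplication:

```python
# Minimal sketch: consistency check of the PRISMA flow counts reported above.
identified = 1429 + 7          # database records + manual searching = 1436
screened = 897                 # records remaining after duplicate removal
duplicates_removed = identified - screened   # 539 duplicates (implied)
full_text = 107                # reports assessed in full
excluded = 58                  # reports excluded at the full-text stage
assert full_text - excluded == 49            # studies included in the review
```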

PRISMA flow chart of the selection process.
Characteristics and Risk of Bias of the Included Studies
A total of 49 eligible studies in English or Chinese met the inclusion criteria. As shown in Figure 2, the annual number of publications generally increased over time, with only a slight dip in 2022. These studies were distributed across 16 countries, among which China and the United States accounted for the largest number of studies. It is worth noting that 24.5% (12/49) of the included studies were master’s theses from China, which is one of the reasons for the high number of studies originating from China. Regarding healthcare application scenarios, 38.8% (19/49) of the studies focused on general healthcare, followed by clinical decision support (24.5%, 12/49), personal health management (14.3%, 7/49), treatment and surgical assistance (12.2%, 6/49), primary and community healthcare (6.1%, 3/49), and mental health (4.1%, 2/49). The study participants comprised healthcare professionals (46.9%, 23/49), patients and caregivers (24.5%, 12/49), the general public (18.4%, 9/49), and university students and faculty (10.2%, 5/49). Regarding the sources of measurement instruments for users’ perceived benefits and risks of AI, 63.3% (31/49) of the studies adapted instruments from other research, while 36.7% (18/49) used self-developed ones. The majority of studies (71.4%, 35/49) conducted both reliability and validity tests for their measurement instruments, 24.5% (12/49) of the studies performed either reliability or validity tests only, while only 4.1% (2/49) of the studies did not provide any description of such assessments. The characteristics of the included studies are summarized in Table 1 and detailed in Supplemental Tables 3 and 4.

The number of studies by published year.
Summary of Study Characteristics (n = 49).
The risk of bias assessment revealed that the majority of studies (83.7%, 41/49) were at moderate risk, with fewer at low risk (10.2%, 5/49) or high risk (6.1%, 3/49). Specifically, most of the included studies clearly specified their data sources, study time periods, and response rates of the study participants, yet commonly lacked detailed descriptions of the exclusion criteria for study participants, the control strategies for confounding factors, and approaches adopted for handling missing data (Supplemental Table 5).
Measurement Dimensions of Users’ Benefit-Risk Perceptions of AI in Healthcare
Table 2 presents the measurement dimensions of users’ benefit-risk perceptions of AI in healthcare across all included studies. Among the 49 studies, 38 measured both benefit and risk perceptions, while 3 and 8 studies measured only benefit perceptions and only risk perceptions, respectively. Regarding benefit perceptions, 5 dimensions were identified: functional benefit, empowerment benefit, social benefit, emotional benefit, and cost benefit. Among these, functional benefit received the highest attention (36 of 41 studies), whereas only 8 studies measured users’ social benefit perception of AI.
Measurement Dimensions of Users’ Benefit-Risk Perceptions of AI in Healthcare.
Concerning risk perceptions, 6 dimensions emerged: technical efficacy risk, safety and privacy risk, ethical and social risk, legal and liability risk, capability development risk, and resource consumption risk. Among these, safety and privacy risk garnered the most attention (40 of 46 studies), while only 8 studies measured users’ capability development risk perception of AI.
To further clarify the core connotation of each dimension, Table 3 systematically specifies the definitions and characteristics of the aforementioned 5 benefit perception dimensions and 6 risk perception dimensions. The detailed subdimensions of benefit perception and risk perception are provided in Supplemental Tables 3 and 4, respectively.
Operational Definitions of Benefit-Risk Perception Dimensions for AI Users.
Notes: This table is compiled based on the consensus descriptions of benefit-risk perception dimensions from the 49 included studies.
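To make the framework’s structure explicit, the sketch below encodes the 5 benefit and 6 risk dimensions as a nested mapping. The subdimensions listed are only those named in the main text; empty lists are placeholders rather than an indication that a dimension has no subdimensions (the full set appears in Supplemental Tables 3 and 4).

```python
# Minimal sketch: the framework's 5 benefit and 6 risk dimensions as a
# nested mapping. Subdimensions shown are examples mentioned in the text.

FRAMEWORK: dict[str, dict[str, list[str]]] = {
    "benefit": {
        "functional benefit": ["assisted diagnosis and treatment",
                               "workflow optimization",
                               "healthcare quality improvement",
                               "problem-solving", "medical safety"],
        "empowerment benefit": [],
        "social benefit": ["information sharing", "service accessibility"],
        "emotional benefit": ["emotional experience"],
        "cost benefit": ["time efficiency"],
    },
    "risk": {
        "technical efficacy risk": ["output accuracy",
                                    "professional competence",
                                    "system reliability"],
        "safety and privacy risk": ["privacy", "physical"],
        "ethical and social risk": ["occupational replacement"],
        "legal and liability risk": [],
        "capability development risk": [],
        "resource consumption risk": [],
    },
}

assert len(FRAMEWORK["benefit"]) == 5 and len(FRAMEWORK["risk"]) == 6
```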
Variations in the Design of AI Benefit-Risk Perceptions Measurement Dimensions Across Healthcare Scenarios
Figure 3 shows the measurement dimensions of user benefit perception for AI across 6 different healthcare application scenarios, including general healthcare (n = 13), primary and community healthcare (n = 3), clinical decision support (n = 12), treatment and surgical assistance (n = 5), personal health management (n = 6), and mental health (n = 2). In the most researched scenarios of general healthcare (n = 13) and clinical decision support (n = 12), the assisted diagnosis and treatment benefit subdimension under the functional benefit dimension (12/13 vs 9/12; the number of studies including the dimension/total number of studies in the scenario) and the time efficiency benefit subdimension under the cost benefit dimension (8/13 vs 9/12) received the highest attention, while the emotional benefit dimension (3/13 vs 3/12) and the social benefit dimension (5/13 vs 1/12) received the least attention. Similar results were observed in other scenarios. In primary and community healthcare (n = 3), treatment and surgical assistance (n = 5), and mental health (n = 2) scenarios, measurement instruments did not cover the social benefit dimension. Notably, in the treatment and surgical assistance scenario (n = 5), measurement instruments primarily focused on the healthcare quality improvement benefit subdimension under the functional benefit dimension (5/5). In the personal health management scenario (n = 6), the problem-solving benefit subdimension under the functional benefit dimension (5/6) received more attention. More detailed measurement dimensions of AI benefit perceptions across the 6 types of AI healthcare application scenarios are presented in Supplemental Figure 1.

Radial tree graph of AI benefit perception measurement dimensions for different healthcare scenarios.
Figure 4 shows the measurement dimensions of user risk perception for AI across 6 different healthcare application scenarios, including general healthcare (n = 19), primary and community healthcare (n = 3), clinical decision support (n = 12), treatment and surgical assistance (n = 6), personal health management (n = 5), and mental health (n = 1). In the most researched scenarios of general healthcare (n = 19) and clinical decision support (n = 12), the privacy risk subdimension under the safety and privacy risk dimension (17/19 vs 8/12) received the highest attention. In addition, a notable difference is that studies in the general healthcare scenario also focused on the occupational replacement risk subdimension under the ethical and social risk dimension (16/19), whereas the clinical decision support scenario paid more attention to the output accuracy risk subdimension and the professional competence risk subdimension under the technical efficacy risk dimension (12/12). Across all 6 healthcare scenarios, the legal and liability risk, capability development risk, and resource consumption risk dimensions received relatively low attention. It is worth noting that in the treatment and surgical assistance scenario (n = 6), the system reliability risk subdimension under the technical efficacy risk dimension (5/6) and the physical risk subdimension under the safety and privacy risk dimension (6/6) received more focus. More detailed measurement dimensions of AI risk perceptions across the 6 types of AI healthcare application scenarios are presented in Supplemental Figure 2.

Radial tree graph of AI risk perception measurement dimensions for different healthcare scenarios.
Variations in the Design of AI Benefit-Risk Perceptions Measurement Dimensions Across User Groups
The 49 studies measuring users’ AI benefit-risk perceptions involved 4 distinct user groups: healthcare professionals (n = 23), patients and caregivers (n = 12), university students and faculty (n = 5), and the general public (n = 9). Figure 5 presents the design of measurement dimensions for AI benefit-risk perceptions across different user groups. A common focus across all user groups is the subdimensions of time efficiency benefits (under the cost benefit dimension) and privacy risks (under the safety and privacy risk dimension), which receive the highest level of attention in dimensional design.

Radial network graph of AI benefit-risk perceptions measurement dimensions across different user groups. The dots represent the user groups and the measurement subdimensions of AI benefit-risk perceptions, with their size corresponding to the number of studies and their color indicating the classification of measurement dimensions. The lines denote the connections between different user groups and these subdimensions, with their thickness reflecting the number of studies.
Beyond this commonality, designs diverge according to user group. For healthcare professionals, measurement instruments also incorporate functional benefits (with subdimensions: assisted diagnosis and treatment benefits (11/23), workflow optimization benefits (9/23)) and technical efficacy risks (with subdimensions: output accuracy risks (12/23), professional competency risks (9/23)). For patients and caregivers, the dimensional design also places emphasis on functional benefits (with subdimensions: healthcare quality improvement benefits (5/12)), emotional benefits (with subdimensions: emotional experience benefits (5/12)), and technical efficacy risks (with subdimensions: system reliability risks (4/12)). In studies measuring AI benefit-risk perceptions among the general public, functional benefits (with subdimensions: medical safety benefits (3/9)) and technical efficacy risks (with subdimensions: output accuracy risks (3/9)) also receive some attention. Furthermore, dimensions such as legal and liability risks, as well as ethical and social risks (with subdimensions: occupational replacement risk (9/23)) are accorded greater consideration primarily in research designs focusing on healthcare professionals.
Variations in the Design of AI Benefit-Risk Perceptions Measurement Dimensions Across Chinese and International Studies
Given the relatively large number of Chinese studies among the included studies, we analyzed the differences in the target populations and the design of measurement dimensions for benefit-risk perceptions of AI in healthcare based on their geographic origin (Figure 6). Of the 49 studies, 26 were from China (53.1%), while 23 were from other international regions (46.9%). It is evident that Chinese studies focused more heavily on patients’ benefit-risk perceptions of AI in healthcare (11/26, 42.3%), followed by those of healthcare professionals (8/26, 30.8%). In contrast, international studies showed the opposite pattern (8.7% vs 60.9%). No significant differences were identified in the design of measurement dimensions for AI benefit-risk perceptions between Chinese and international studies; both prioritized functional benefits, cost benefits, safety and privacy risks, and technical efficacy risks to the highest degree.

Radial tree graph of AI risk perception measurement dimensions for Chinese and international studies.
In summary, this scoping review integrates measurement instruments from 49 cross-sectional studies to construct a framework for measuring user benefit-risk perceptions of AI in healthcare, encompassing 5 benefit dimensions and 6 risk dimensions. Current research exhibits significant commonality in the focus of user perception measurement: functional benefits (with the subdimension of assisted diagnosis and treatment), cost benefits (with the subdimension of time efficiency), and privacy risks receive the most attention across measurement instruments for all healthcare scenarios, user groups, and geographic origins. In contrast, social benefits and capability development risks are generally less emphasized. Furthermore, differences among measurement instruments are more evident at the subdimension level. Regarding specific scenarios, in treatment and surgical assistance scenarios, measurement instruments also focus on the subdimension of healthcare quality improvement under functional benefits, whereas in personal health management, the problem-solving subdimension attracts greater attention. In terms of populations, measurement instruments designed for healthcare professionals are more likely to include dimensions such as legal and liability risks, as well as occupational replacement risk, while instruments targeting patients place greater emphasis on emotional benefits.
Discussion
This study presents the first comprehensive, systematic scoping review that synthesizes the measurement dimensions of users’ benefit-risk perceptions regarding AI in healthcare. To examine the social implications of AI, an assessment of users’ benefit-risk perceptions is central. 35 Multiple studies have confirmed that benefit-risk perceptions are critical determinants shaping individuals’ attitudes toward technology, 53 with positive benefit perceptions and negative risk perceptions jointly shaping the formation of AI-related preferences and usage intentions.25,29,47 Unlike previous studies that primarily focused on descriptive analyses of users’ attitudes toward AI, 89 this study centers on the methodological core of ‘how to measure’ such perceptions. By synthesizing 49 quantitative studies, we constructed a comprehensive measurement framework encompassing 5 benefit dimensions and 6 risk dimensions, and mapped the landscape of measurement dimensions for AI benefit-risk perceptions across different healthcare application scenarios, user groups, and geographical regions. This work not only addresses the theoretical gap of lacking a systematic measurement framework in this field but also provides an empirical foundation for developing standardized assessment tools, thereby advancing research on AI benefit-risk perceptions in healthcare from fragmented discussions toward systematic inquiry.
The contribution of this study lies in achieving a systematic integration and theoretical advancement of fragmented measurement instruments. The dual-nature framework of benefit-risk perceptions that we constructed demonstrates that AI technical attributes are frequently perceived in a paradoxical manner by users—meaning that the same attribute often serves both as a source of benefit and a potential risk. 90 For instance, a qualitative study found that patients considered the greatest advantage of AI in skin cancer screening to be its ability to enhance diagnostic accuracy, while its greatest disadvantage was precisely the potential to compromise diagnostic accuracy. 91 On the one hand, AI may generally perform with greater accuracy than humans; on the other hand, its performance may not meet expectations in specialized scenarios lacking tailored training or necessary contextual knowledge.78,91,92 In terms of the social dimension, while the application of medical AI may help narrow the healthcare gap by improving the accessibility of medical resources, it could also exacerbate health inequities due to economic constraints.91,93 Similarly, users’ perception of the costs associated with AI also has a dual nature: it may lead to improvements in economic and time efficiency, yet it may also result in negative experiences due to high costs or complex deployment processes.51,78 This finding transcends the original formulation of classic technology acceptance theories, which primarily focused on facilitators such as perceived usefulness and perceived ease of use while largely overlooking the role of multidimensional benefit-risk perceptions.26,27 It reveals the psychological mechanism underlying users’ complex value trade-offs in practical decision-making regarding AI adoption. Thus, this framework not only summarizes existing research but also significantly extends technology acceptance theories, offering a novel theoretical perspective for understanding contradictory perceptions surrounding AI technology adoption.
From the perspective of the measurement framework of users’ benefit–risk perceptions of AI in healthcare that we have constructed, scholars in this field have generally focused on the functional benefits and cost benefits of AI—particularly its benefits in terms of assisting diagnosis and treatment and improving time efficiency.29,37 This finding is also supported by the qualitative research conducted by Nelson et al, 91 in which improved diagnostic accuracy and efficiency were widely mentioned by patients as key benefits of AI. These 2 types of benefits directly address the core challenges faced by both the supply and demand sides of healthcare services. Improved precision in diagnostic assistance is crucial for enhancing diagnostic quality and treatment efficacy, which are key to improving patient health outcomes. Meanwhile, gains in time efficiency can significantly streamline healthcare service workflows and increase patient satisfaction. 50 Moreover, it is noteworthy that attention to the social and emotional benefits of AI remains relatively insufficient. In fact, AI is even perceived as a potential barrier that may undermine the humanistic elements in doctor-patient interactions,18,25 such as increasing doctors’ ‘screen time’ and reducing face-to-face patient interactions. 94 We found that the social benefits of AI, particularly in terms of information sharing and service accessibility,38,79 have been incorporated into some scholars’ measurement instruments for assessing AI benefit perceptions. However, the dimension of service equity has been largely overlooked and warrants future inclusion. For certain vulnerable patient groups, such as those with language barriers or complex medical needs, machine learning predictive analytics can identify and prioritize their need for relevant services, thereby helping to alleviate inequalities in healthcare resource allocation. 82
The privacy risks associated with AI have also attracted widespread scholarly attention, and this concern has been consistently integrated into the design of measurement instruments across various healthcare scenarios and user groups. Grover and Kar 95 argue that security of privacy constitutes a critical element in understanding the complexity of the relationship between society and AI. Although numerous international organizations and countries, including the World Health Organization (WHO), 96 the European Union (EU), 97 and China, 98 have successively issued AI-related regulatory guidelines, challenges remain in safeguarding data privacy and promoting the reasonable use of data. 99 This widespread privacy anxiety stems, on one hand, from the highly sensitive nature of healthcare data itself, and on the other hand, from the potential for substantial harm to users’ health rights and personal privacy in the event of AI system failures. When data collection relies on third-party applications and privacy policies are shaped by technology companies rather than healthcare institutions, 100 users’ doubts about data control and compliance are further intensified. To address these challenges, future safety and privacy measures may consider incorporating blockchain distributed storage and access control mechanisms. Meanwhile, privacy protection laws need to further clarify responsible entities and define the boundaries of data usage, 101 thereby establishing a trustworthy AI application environment through both institutional and technical dimensions.
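To illustrate the tamper-evidence principle underlying such blockchain-style proposals, the toy sketch below chains each access record to its predecessor by a hash, so that retroactively altering any entry invalidates the chain. This is a conceptual illustration only; a real deployment would additionally require distributed consensus, key management, and fine-grained access control.

```python
# Toy sketch of tamper-evident access logging: each record embeds the hash
# of the previous record, so altering any past entry breaks verification.

import hashlib
import json

def record_access(log: list[dict], actor: str, action: str) -> None:
    """Append an access record chained to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"actor": actor, "action": action, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def verify(log: list[dict]) -> bool:
    """Recompute every hash; any retroactive edit is detected."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("actor", "action", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

log: list[dict] = []
record_access(log, "model_service", "read:imaging_record")   # hypothetical actors
record_access(log, "clinician_a", "read:prediction_output")
assert verify(log)
log[0]["action"] = "read:entire_ehr"   # tampering with history...
assert not verify(log)                  # ...is detected
```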
A study has pointed out that AI applications are by no means isolated technological products; instead, they should be regarded as complex sociotechnical systems composed of numerous interacting components. 102 Multiple survey studies have found that respondents’ demographic characteristics—such as age, gender, education level, occupation, and degree of religious belief—as well as factors like their endorsement of conspiracy theories and prior experience with AI or computers, significantly influence their benefit-risk perceptions of AI.1,35,90 These findings provide crucial methodological guidance for the subsequent development of population-specific scales for measuring users’ benefit-risk perceptions of AI, thereby enhancing the relevance and validity of such measurement instruments. In the studies included in this review, such differences are also reflected to a certain extent. Measurement instruments targeting healthcare professionals reveal that risks related to accountability allocation and occupational displacement are focal points of scholarly concern. As current AI systems do not fully possess human consciousness and sensibility, they can hardly be regarded as moral agents. 103 Consequently, healthcare professionals often express concerns regarding accountability allocation in the event of adverse outcomes. With the increasing integration of AI technology into the healthcare sector, the structure of roles and the nature of job responsibilities are undergoing profound transformations. Traditional repetitive tasks are gradually being replaced by automation, while emerging positions require proficiency in AI-related skills. This shift has forced healthcare professionals to reconstruct and adapt their career paths.104,105 The study by Scott et al 106 indicates that patients tend to express concerns about losing clinical oversight and opportunities to participate in decision-making during the diagnostic and treatment process. Essentially, external benefits and risks are often objective technological or social facts, while benefit-risk perceptions are the results of individuals’ subjective judgments. 107 It can be said that even though different groups have varying focuses, all AI users generally share a common expectation: to ensure that humans remain at the core of decision-making and that humanized communication is maintained in human-AI interactions. 108
Limitations
This study had several limitations. Firstly, although we systematically searched mainstream databases and screened potential records, this review only included literature published in Chinese and English, which may affect the comprehensiveness of the research findings. Secondly, the core of this review is to summarize ‘what has been measured’, that is, the dimensional content, rather than ‘how well it has been measured’, which refers to the reliability and validity of scales. Therefore, although certain dimensions are frequently mentioned, the scientific rigor of their measurement instruments may vary. Developing standardized scales in the future remains a critical task for our research. Finally, despite following a systematic process for summarizing measurement dimensions, this process still involves a certain degree of subjective judgment. This is particularly true for items with ambiguous or multifaceted meanings, where dimensional categorization may be subject to differing interpretations. Therefore, the dimensional framework proposed in this study should be regarded as an interpretive synthesis based on existing literature, rather than a definitive or exclusive classification standard.
Conclusions
Empirical research on the adoption of healthcare AI technologies has amassed a robust evidence base, underscoring the importance of systematically evaluating users’ benefit-risk perceptions. By synthesizing the measurement dimensions of these perceptions—characterized by both commonalities and idiosyncrasies across diverse application scenarios, user groups, and geographic regions—this scoping review develops an integrated, multi-dimensional measurement framework for users’ benefit-risk perceptions of AI in healthcare. This framework not only transforms fragmented perceptions into a systematic and operationalizable theoretical tool—providing a clear structural definition for this complex construct—but also extends the explanatory scope of classical technology acceptance theories within AI-in-healthcare contexts. Furthermore, it offers a direct foundation for developing standardized measurement instruments that are scientifically aligned with specific healthcare scenarios and user roles, thereby advancing evaluation practices from experience-based judgment toward evidence-informed decision-making. Specifically, future measurement instruments can operationalize the dimensions based on this framework and validate their reliability and validity through multi-stage empirical research. Moreover, instrument design should incorporate stratified measurement for different user groups while balancing clinical context specificity and cross-context comparability, thereby enhancing the relevance and applicability of assessments. Subsequent research may further examine the dynamic evolution of perceptions across different stages of AI technological maturity and levels of clinical integration, in order to continually promote the responsible and acceptable integration of AI into healthcare service systems.
Acknowledgements
The authors wish to extend their sincere gratitude to all members of the Nursing Research Center at the First Affiliated Hospital of Chongqing Medical University for their advice throughout this study.
Ethical Considerations
Not applicable.
Consent to Participate
Not applicable.
Author Contributions
HS contributed conceptualization, data curation, formal analysis, methodology, visualization, writing—original draft, writing—review & editing. YX and XY contributed data curation, formal analysis, methodology, writing—original draft. QZ contributed funding acquisition, supervision, writing—review & editing. HH contributed conceptualization, funding acquisition, supervision, writing—review & editing. All authors critically reviewed and revised the initial draft. All authors have read and approved the final version of the manuscript.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: this work was supported by the General Program of Chongqing Natural Science Foundation (project No. CSTB2025NSCQ-GPX1118) and the Key Nursing Research and Innovation Project of the First Affiliated Hospital of Chongqing Medical University (project No. HLPY2025-04).
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data Availability Statement
Not applicable.
Guarantor
HH.
Supplemental Material
Supplemental material for this article is available online.
References