Abstract
Background
The use of artificial intelligence (AI) in healthcare in general and scientific research in particular has become increasingly prevalent as it holds great promise for optimizing research processes and outcomes.
Aims
This study described predictors and differences in students’ perceptions of the risks and benefits related to using AI in nursing research.
Methods
A quantitative cross-sectional study was implemented utilizing a convenience sample of 434 nursing students from governmental and private universities. Data were analyzed using descriptive and inferential statistics.
Results
Nursing students perceived AI in nursing research positively, with an overall mean score of 3.24/5 (SE = .024). Their feelings about AI were generally positive (Mean = 3.54/5; SE = .049; 95% CI = 3.45–3.64). Perceived risks of using AI in research were high (Mean = 1.59/2, SE = .016), especially concerning liability issues (Mean = 3.50/5, SE = .031), communication barriers (Mean = 3.48, SE = .035), unregulated standards (Mean = 3.37, SE = .034), privacy concerns (Mean = 3.37, SE = .034), social biases (Mean = 3.33, SE = .033), performance anxiety (Mean = 3.31, SE = .034), and mistrust in AI mechanisms (Mean = 3.28, SE = .032). The perceived benefits were also high (Mean = 3.46, SE = .030), with a strong intention to use AI-based tools (Mean = 3.52, SE = .033). Key predictors were high GPA and training in public hospitals.
Conclusion
AI in nursing research has many benefits; however, it comes with risks that need immediate management. Nursing students’ GPAs and the hospitals where they received their training were often the key factors that shaped how well they understood the use of AI in nursing research. High-achieving students who were trained in public and teaching hospitals tended to be better users of AI in nursing research.
Background
Artificial intelligence (AI) enables computers to simulate human thinking (Xu et al., 2021). John McCarthy launched AI research in 1956 at a Dartmouth College conference (Xu et al., 2021). Remarkable progress was made in the ensuing years, as its integration into healthcare settings was increasingly acknowledged (Kueper et al., 2020; Xu et al., 2021; Yelne et al., 2023). AI has the potential to advance research by addressing the vast amounts of data that humans are unable to process meaningfully, especially in light of the recent explosion in data availability (Bajwa et al., 2021; Kueper et al., 2020).
In nursing, AI found its way into research in the 1950s, and into practice and education in the early 1980s, owing to advancements in computing technology (Shang, 2021). Nursing research could benefit from AI integration; however, some risks cannot be ignored. By using AI-driven technologies, nursing researchers can capitalize on data analysis, predictive modeling, and decision assistance, which can provide valuable insights from extensive healthcare data (Ahmed, 2024; Yelne et al., 2023). AI algorithms can detect research areas that require attention, suggest possible approaches for conducting studies, and encourage cooperation among researchers from varying fields (Ahmed, 2024; Yelne et al., 2023). As a result, AI expedites the research process and promotes collaboration, making significant contributions to the nursing field (Ahmed, 2024).
On the other hand, AI has many risks and drawbacks. There is a possible danger of excessively depending on AI systems, which could undermine the development of critical thinking abilities and creativity in academic pursuits (Abbas et al., 2024; Ahmed, 2024). The importance of ethical considerations regarding data privacy, algorithmic bias, and the proper utilization of AI technologies cannot be overstated (Ahmed, 2024; ElHassan & Arabi, 2024).
In order to address the risks associated with AI, it is necessary to take a balanced approach that prioritizes ethical standards and human rights, particularly those of patients. Nursing researchers must play an active role in designing, implementing, and regulating AI technologies, ensuring that they adhere to research ethical principles and clinical expertise (Ahmed, 2024; Mennella et al., 2024). Therefore, it is important to provide training programs that equip nursing researchers with the skills necessary to collaborate effectively with AI systems (Cary et al., 2024). Additionally, clear guidelines and regulations are needed to govern the responsible use of AI (Ahmed, 2024; Mennella et al., 2024). Nursing researchers should supervise the creation as well as application of AI technology to guarantee that it adheres to the core values of academic integrity (Ahmed, 2024; Mahmud, 2024). AI could be fully utilized while preserving the core principles of nursing research by encouraging cooperation, openness, and accountability (Ahmed, 2024; Mennella et al., 2024).
In conclusion, while AI can revolutionize research procedures by streamlining the process of discovery and promoting collaboration, there are significant risks of over-dependence on AI systems, ethical dilemmas surrounding data privacy, and algorithmic bias. Therefore, nursing researchers must proceed with caution and carefully navigate the potential drawbacks. Maintaining the authenticity standards of research is crucial to maximizing the potential of AI in nursing research. By doing this, nursing scientists can improve nursing research while also benefiting from AI.
Methods
Design
To find out how nursing students perceived the use of AI in nursing research, a cross-sectional online survey was used. A cross-sectional study allows data to be collected at a single point in time from a variety of participants (Polit & Beck, 2019). This design is advantageous for answering different kinds of research questions, as preliminary data can be collected quickly and cheaply (Polit & Beck, 2019). However, the cross-sectional design offers no control over the selection of variables and cannot establish cause-and-effect relationships, as subjects are not followed over time; moreover, confounders may influence the relationships between variables, and the risk of personal bias is high, influencing data collection, interpretation, and generalization (Polit & Beck, 2019).
Sample and Settings
The study's respondents were Jordanian nursing students drawn from a diverse range of universities, with the accessible population comprising students from the selected institutions. To ensure the target population was reached, the survey's initial filtering questions were utilized. The primary independent variable in this study was AI. At the same time, the six additional student characteristics examined were gender, age, grade point average (GPA), educational level, type of hospital where training occurred, and type of university attended. Table 1 provides details on the measurement of these variables. The equation N = 10(k) + 50 was used to determine the sample size, where k represents the number of variables, and an additional 50 cases were added to compensate for potential dropouts (Polit & Beck, 2019). Although the recommended sample size should have been at least 120 nursing students, the study ended up with a convenience sample of 434 nursing students. The inclusion criteria focused on proficiency with technological platforms and enrollment in academic nursing programs at higher education institutions.
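For illustration only, the sample-size rule above can be expressed in code; the function name is hypothetical, and k = 7 reflects the one primary variable (AI) plus the six student characteristics described above:

```python
def minimum_sample(k: int) -> int:
    """Rule-of-thumb sample size: 10 cases per variable, plus 50 for potential dropouts."""
    return 10 * k + 50

# One primary variable (AI) plus six student characteristics -> k = 7
print(minimum_sample(7))  # → 120
```

With k = 7, the rule yields the minimum of 120 students noted above, well below the 434 students actually recruited.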
Students’ Characteristics (N = 434).
A convenience sample of nursing students was collected from three government-run and two private universities, chosen purposefully. The first researcher presented the study and allowed participants to self-select for involvement. Responses were gathered through online platforms such as WhatsApp and Facebook.
Ethics
On January 13, 2023, the study was approved by the university where the first author is employed; the Institutional Review Board (IRB) reference number was 22/4/2022/2023. The online survey was optional, and participants could decline to complete it. All participant data were kept anonymous, and the coded responses stored in Google Drive were protected by a password maintained by the researcher. Confidentiality was maintained by providing only overall aggregates to nursing administrators at the designated universities.
Data Collection
Following a pilot study that required no modifications, data were collected online in February 2024 using Google Forms for a self-report survey conducted in English, the formal language of nursing education in all Jordanian institutions. The first researcher shared the survey link on the Faculty of Nursing's Facebook pages and through colleagues' Facebook and WhatsApp contacts. Students were encouraged to invite their contacts, and their responses to the survey served as their consent. A single reminder was sent after one week, and data were gathered over 20 days.
Tool
In 2020, Esmaeilzadeh developed and validated a tool to evaluate the “Use of AI-based tools for healthcare purposes.” The scale comprises ten subscales and 54 items in total, rated on five-point Likert scales ranging from 1 (strongly disagree) to 5 (strongly agree). The current researcher adopted and modified the instrument to measure the risks and benefits of using AI in nursing research; a pilot study run before data collection indicated that no changes were needed.
The subscales related to risks and benefits of using AI in nursing research were the perceived performance anxiety (5 items), perceived social biases (5 items), perceived privacy concerns (6 items), perceived mistrust in AI mechanisms (5 items), perceived communication barriers (5 items), perceived unregulated standards (5 items), perceived liability issues (6 items), perceived risks (very low/very high) (5 items), perceived benefits (7 items), and intention to use AI-based tools (5 items). The first author added an overall question stated as “Taken all together: How positive or negative do you feel about the use of AI in nursing research?” (very positive, positive, neutral, negative, and very negative).
By examining the standardized factor loadings, composite reliability, and the average variance extracted (AVE), Esmaeilzadeh (2020) determined that the original scale has convergent validity. Furthermore, the author stated that the square roots of all AVEs were higher than .700 and greater than the correlations between any two constructs, demonstrating discriminant validity. Esmaeilzadeh (2020) also determined the internal consistency of the instrument by calculating Cronbach's alpha for each construct (perceived benefits = .940, perceived risks = .900, performance risks = .910, perceived social biases = .880, perceived privacy concerns = .940, perceived mistrust in AI mechanisms = .920, perceived communication barriers = .930, perceived unregulated standards = .940, perceived liability issues = .940, and intention to use AI-based devices = .940). In the current study, the overall Cronbach's alpha was .964, and the subscale alphas were perceived benefits = .874, perceived risks = .746, performance anxiety = .864, perceived social biases = .813, perceived privacy concerns = .859, perceived mistrust in AI mechanisms = .826, perceived communication barriers = .869, perceived unregulated standards = .846, perceived liability issues = .862, and intention to use AI-based devices = .897, indicating that the tool is internally consistent. However, the current study's reliability coefficients were lower than those of Esmaeilzadeh (2020), which could be related to the students' proficiency in using AI technologies.
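As a minimal sketch of how Cronbach's alpha values such as those above are computed (illustrative only; the toy response matrix below is hypothetical, not the study data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy 5-point Likert responses (4 respondents x 3 items), for illustration only
scores = np.array([[4, 5, 4],
                   [2, 2, 3],
                   [5, 4, 5],
                   [3, 3, 3]])
print(round(cronbach_alpha(scores), 3))  # → 0.916
```

By convention, alpha values near or above .70, as in both the original and current study, indicate acceptable internal consistency.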
Data Analyses
The data were analyzed using the Statistical Package for the Social Sciences (SPSS) version 25 (IBM, 2017), accepting a value of .05 as statistically significant. The risks and benefits of using AI in nursing research were examined as interval variables in the current study. Thus, descriptive statistics, including means, standard deviations, standard errors, frequencies, percentages, and ranges, as well as the 95% confidence intervals (CI) of the means, were provided.
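As an illustration of how a mean, its standard error, and a 95% CI of the mean are derived (a sketch with hypothetical Likert responses, not the study data or its SPSS output):

```python
import math
import statistics

def mean_se_ci(sample, z=1.96):
    """Mean, standard error of the mean, and normal-approximation 95% CI."""
    n = len(sample)
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)  # SE = SD / sqrt(n)
    return m, se, (m - z * se, m + z * se)

# Hypothetical 5-point Likert responses, for illustration only
sample = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
m, se, (lo, hi) = mean_se_ci(sample)
print(f"Mean = {m:.2f}, SE = {se:.3f}, 95% CI = {lo:.2f}-{hi:.2f}")
# → Mean = 3.30, SE = 0.300, 95% CI = 2.71-3.89
```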
The general linear model (GLM) was used to assess whether the sample's characteristics (independent variables) were related to the perceived risks and benefits of using AI in nursing research (dependent variable) (Polit & Beck, 2019). Differences in the perceived risks and benefits of using AI in nursing research were evaluated using t-tests (in the case of two groups) or analysis of variance (ANOVA) with Scheffe's post hoc test (in the case of more than two groups) (Polit & Beck, 2019).
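The group comparisons described above can be sketched as follows; this uses SciPy rather than SPSS, and the group names, sizes, and simulated scores are hypothetical, illustrating only the generic t-test/ANOVA workflow:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 5-point perception scores for three hospital-training groups
public = rng.normal(3.5, 0.6, 40)
private = rng.normal(3.2, 0.6, 40)
teaching = rng.normal(3.4, 0.6, 40)

# Two groups: independent-samples t-test
t, p_t = stats.ttest_ind(public, private)
# More than two groups: one-way ANOVA (a post hoc test would follow a significant F)
f, p_f = stats.f_oneway(public, private, teaching)
print(f"t = {t:.2f} (p = {p_t:.3f}); F = {f:.2f} (p = {p_f:.3f})")
```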
Results
Sample's Characteristics
A total of 434 out of 700 potential nursing students agreed to participate, resulting in a response rate of 62.00%. The majority were academically excelling female students with very good to excellent GPAs, under 25 years old, and mostly in their third year of study. These students primarily received their training in public hospitals and were enrolled in public universities. Table 1 provides more information on nursing students’ characteristics.
Perceived Risks and Benefits of Using AI in Nursing Research
Answering the first research question revealed that nursing students had a generally favorable perception of using AI in nursing research. On a 5-point scale, where one meant “strongly disagree” and five meant “strongly agree,” the average score was 3.24/5 (SE = .024), suggesting a positive perception. Additionally, when asked to give an overall assessment, ranging from “very positive” to “very negative,” the mean response was 3.54/5 (SE = .049, 95% CI = 3.45–3.64), further indicating a positive perception of AI's role in nursing research (Table 2).
Means, Standard Errors of the Means, Maximum, Minimum, and 95% Confidence Interval (CI) of the Means of Using AI in Nursing Research (N = 434).
Likert scale rated from 1 (strongly disagree) to 5 (strongly agree). SE = standard error of the mean; 95% confidence interval (CI) of the mean, using standard errors.
The overall mean for perceived risks of using AI in scientific research was high (Mean = 1.59/2, SE = .016), particularly related to the perceived risks of liability issues (Mean = 3.50/5, SE = .031), communication barriers (Mean = 3.48, SE = .035), unregulated standards (Mean = 3.37, SE = .034), privacy concerns (Mean = 3.37, SE = .034), social biases (Mean = 3.33, SE = .033), performance anxiety (Mean = 3.31, SE = .034), and mistrust in AI mechanisms (Mean = 3.28, SE = .032); the highest mean indicated the highest risks. On the other hand, the overall mean for the perceived benefits of using AI in scientific research was high (Mean = 3.46, SE = .030), with a high intention to use AI-based tools (Mean = 3.52, SE = .033); the highest mean indicated the highest benefits and intention to use, and both were above the mean of 3/5 (Table 2).
Perceived Risks
The greater the average score, the greater the perceived risks associated with employing AI technology in nursing research. Of the perceived risks of using AI in scientific research (very low/very high), the highest perceived average was linked to the possibility of unfavorable outcomes associated with the use of AI research tools (Mean = 1.65/2, SE = .026). Conversely, the lowest perceived average was related to the degree of uncertainty involved in the use of AI research tools (Mean = 1.56/2, SE = .024) (Table 2).
Perceived Performance Anxiety
The highest perceived mean was that the mechanisms used by AI-based devices would result in research errors (Mean = 3.56, SE = .063), while the lowest perceived mean was that the predictive models of AI-based tools might malfunction (Mean = 3.34, SE = .055) (Table 2).
Perceived Social Biases
The highest perceived means were that AI-based tools used in research might be unfair to a certain group of the population (Mean = 3.46, SE = .042) and that AI devices could lead to morally flawed practices in research (Mean = 3.52, SE = .059). In contrast, the lowest perceived means were that AI devices could lead to unethical practices in research (Mean = 3.26, SE = .047) and the high possibility of biases by AI devices toward certain groups of the population (Mean = 3.26, SE = .044) (Table 2).
Perceived Privacy Concerns
The highest perceived mean was related to AI-based applications helping research entities collect too much personal information from people (Mean = 3.50, SE = .044). In contrast, the lowest perceived mean was related to clients’ information being shared with other entities without their explicit consent (Mean = 3.25, SE = .044) (Table 2).
Perceived Mistrust in AI Mechanisms
The highest perceived mean was that nursing students trusted that AI-based tools could adapt to specific and unforeseen scientific research situations (Mean = 3.35, SE = .040). In contrast, the lowest perceived means were nursing students' trust in the AI algorithms used in scientific research (Mean = 3.24, SE = .042) and in the accuracy and predictive power of current AI algorithmic models used in scientific research (Mean = 3.24, SE = .041) (Table 2).
Perceived Communication Barriers
The highest perceived means were that students were concerned that AI tools may eliminate the contact between researchers and clients (Mean = 3.54, SE = .041) and that by using AI devices, they might lose face-to-face cues and personal interactions with researchers (Mean = 3.54, SE = .047). On the other hand, the lowest perceived mean was that using AI devices could result in a more passive position in making research decisions (Mean = 3.38, SE = .041) (Table 2).
Perceived Unregulated Standards
The highest perceived mean was that nursing students were concerned that appropriate regulatory and accreditation systems regarding AI-based devices were not in place yet (Mean = 3.42, SE = .042) and the lack of clear guidelines to monitor how AI tools perform in the research context (Mean = 3.42, SE = .042). On the other hand, the lowest perceived mean was that students were concerned that the safety and efficacy of AI tools were not regulated clearly (Mean = 3.32, SE = .045) (Table 2).
Perceived Liability Issues
The highest perceived mean was that nursing students were concerned because it is not clear who is responsible when errors result from the use of AI research tools (Mean = 3.59, SE = .044). In contrast, the lowest perceived mean was students’ concerns that the use of AI tools for scientific research purposes increased their liability (Mean = 3.39, SE = .040) (Table 2).
Perceived Benefits
The higher the average score, the greater the perceived benefits of leveraging AI in nursing research. Nursing students believed the biggest benefit was that AI-powered research tools could enhance data management systems (Mean = 3.55/5, SE = .038). In contrast, they perceived AI's influence on prognosis as the least significant benefit (Mean = 3.41/5, SE = .035) (Table 2).
Perceived Intention to Use AI-Based Tools
Nursing students showed a strong inclination to leverage AI technologies for research endeavors. The average score indicates that they generally agreed (Mean = 3.56/5, SE = .040) with the use of AI-based tools for scientific investigations, and they expressed a similar level of intent to utilize these tools (Mean = 3.56, SE = .038). However, their willingness to continue this practice in the future appeared slightly lower (Mean = 3.48, SE = .041), suggesting a cautious yet progressive attitude toward AI integration in nursing research activities (Table 2).
Discussion
Perceived Risks and Benefits of Using AI in Nursing Research
The results of this study highlighted the perceived risks and benefits of using AI in nursing research. The overall mean score for both risks and benefits suggests that nursing students had a positive perception of AI in research. This is an important finding, as it indicates that AI was viewed as a valuable tool in the field of nursing research.
One of the main concerns when integrating AI into research was the risk of liability issues. This perceived risk was high, indicating that nursing students were aware of the potential legal implications of using AI in research (supported by Abd El-Monem et al., 2023; Elsayed & Sleem, 2021). Therefore, it is essential to establish clear guidelines for the use of AI in research to mitigate these risks (Jiang et al., 2017). Another significant risk associated with AI in research was communication barriers. This perceived risk was also high, suggesting that nursing students were concerned about the potential difficulties in communicating effectively with AI systems (similar to Dash et al., 2019). Effective communication is critical in research, as it ensures that data is accurately collected and analyzed. Thus, it is essential to develop strategies for improving communication between humans and AI systems to minimize these barriers (Maddox et al., 2019).
On the other hand, the overall mean for the perceived benefits of using AI in scientific research was high. This result suggests that nursing students believe AI can significantly enhance the research process. AI could be used to automate tasks, analyze available large datasets, and identify patterns that may not be clear to humans (Mehdipour, 2019). These benefits could lead to more accurate and efficient research, ultimately improving patient care (Haleem et al., 2020).
The results of this study indicate that nursing students had a positive perception of AI in research, but they were also aware of the potential risks. To fully understand the potential benefits of AI in research, it is essential to address these risks and develop strategies for mitigating them. By doing so, AI could be used to enhance the research process and improve patient care (Bai et al., 2020).
Perceived Risks of AI in Scientific Research
These results highlight the key perceived risks associated with using AI in scientific research. The most prominent concern was the potential for unintended consequences linked to the use of AI research tools (as in Mehdipour, 2019). This concern suggests that researchers were worried about the unpredictable nature of AI systems and the possibility of unforeseen outcomes arising from their application in scientific investigations (Huisman & Helianthe, 2019). Interestingly, the least worrying aspect was the level of uncertainty surrounding the application of AI-driven research tools. This finding indicates that while researchers acknowledged the potential risks, they were relatively less concerned about the overall uncertainty involved in using these technologies (reported by O'Connor et al., 2023).
Regarding performance anxiety, the prevalent concern was that the mechanisms used by AI-based devices could result in research errors. This result reflects a worry that the inner workings of AI systems may not be fully transparent or reliable, potentially leading to flawed research findings (similar to Seibert et al., 2021). At the same time, the least concerning issue was the potential for malfunction in the predictive models of AI-assisted tools. This finding suggests that researchers may have more confidence in the predictive capabilities of AI models, even if they were worried about the underlying mechanisms (Risling & Low, 2019). Ronquillo et al. (2021) were most concerned about the unintended consequences that might arise from the use of AI in scientific research, highlighting the need for careful monitoring and risk assessment. While uncertainty surrounding AI applications was present, it was not the primary concern, indicating a willingness to explore these technologies with appropriate safeguards.
Transparency and reliability of AI-driven research tools are crucial, as researchers were particularly worried about the potential for research errors due to the mechanisms used by these systems (like Seibert et al., 2021). By addressing these perceived risks through robust research protocols, ongoing evaluation, and continuous improvement of AI-based tools, the scientific community could harness the power of AI while mitigating the associated concerns (Abuzaid et al., 2022). The most concerning issues raised were the potential for AI-based research tools to impact certain segments of the population unfairly and the possibility of AI devices leading to morally flawed practices in research. These concerns were linked to the potential for AI to perpetuate existing biases and inequalities, which could have significant negative influences on individuals and society (Bai et al., 2020).
The least concerning issues raised were the potential for AI-powered devices to steer research down ethically compromised paths and the heightened risk of these devices exhibiting biases toward specific groups of the population. These concerns were more related to the ethical implications of AI in research and the potential for AI to be used in ways that were not aligned with ethical principles (aligned by Abuzaid et al., 2022; Seibert et al., 2021). In terms of privacy concerns, the highest level of concern was linked to AI technologies enabling research institutions to amass an excessive amount of personal data from individuals (Haleem et al., 2020). These privacy concerns raised significant ethical and legal issues about data protection and the potential for misuse of personal data.
The lowest level of perceived privacy risk was associated with clients’ personal information being shared with other parties without their clear authorization. This perceived privacy risk highlights the importance of ensuring that individuals have control over their data and that it is not shared without their consent. Regarding mistrust in AI mechanisms, the highest perceived mean was that nursing students trusted that AI-based tools could adapt to specific and unforeseen scientific research situations (Kueper et al., 2020). This suggests that AI could be a valuable tool in research, particularly in situations where human judgment may be limited.
Perceived Intention to Use AI-Based Tools
The nursing students showed a strong inclination toward utilizing AI-based tools for scientific research purposes, as indicated by the highest perceived mean. However, students’ willingness to continue using such tools in the future was found to be comparatively lower (supported by Ahmed, 2024; Mennella et al., 2024). This result suggests that while nursing students recognize the potential benefits of AI in research, they may have some reservations or concerns about the long-term adoption and integration of these technologies into their daily practice. Factors such as trust in the reliability and accuracy of AI-based tools, ease of use, and the perceived impact on their professional roles and responsibilities could be influencing their intention to use AI in the future (Abuzaid et al., 2022; Ronquillo et al., 2021).
Strengths and Limitations of the Study
This research fills gaps in the literature on integrating AI into nursing research. Improved comprehension of AI's use in nursing research would be advantageous for nursing research as well as the nursing profession. This study, however, has a number of methodological limitations. Nursing students’ self-reporting of the risks and benefits of using AI in nursing research is subject to bias. Because the survey was conducted online and samples were gathered through social media groups, students with an interest in technology and AI may have been more likely to respond, introducing self-selection bias. As a result of these limitations, the study's conclusions may not be applicable to other nursing student populations or to all university students.
Furthermore, the cross-sectional research design and non-random convenience sampling were used to gather data from a limited purposive sample of governmental and private universities, which limits the generalizability of the results to other universities. Thus, it is advised that future studies employ a longitudinal research design and random cluster sampling. Particular socioeconomic settings may also influence AI's application in nursing research, and data collected in Jordan have limited generalizability to other nations.
Conclusions
The study demonstrates that nursing students perceive the use of AI in nursing research positively, with an overall mean score of 3.24/5 (SE = .024). Their feelings about AI were generally positive (Mean = 3.54/5; SE = .049; 95% CI = 3.45–3.64). However, students also identified significant perceived risks, particularly concerning liability issues, communication barriers, unregulated standards, privacy concerns, social biases, performance anxiety, and mistrust in AI mechanisms. Despite these risks, the perceived benefits of AI were high (Mean = 3.46, SE = .030), with a strong intention to use AI-based tools (Mean = 3.52, SE = .033). Key predictors influencing students’ perceptions were high GPA and training in public hospitals. High-achieving students and those trained in public and teaching hospitals were more adept at understanding and utilizing AI in nursing research. Immediate management of the identified risks is crucial to maximize the benefits of AI in this field.
Footnotes
Acknowledgments
The authors sincerely thank everyone who participated in this study.
Patient and Public Involvement
There was no patient or public involvement in this research's design, conduct, reporting, or dissemination, as the sample included nursing students from three public and two private universities.
Ethical Considerations
The study received Institutional Review Board (IRB) approval (reference 22/4/2022/2023, dated January 13, 2023) from the first author's university in Jordan.
Informed Consent
The invitation letter informed the participants that their participation in the survey constituted their consent to participate in the study.
Author Contributions/CRediT
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data Availability
The dataset used and analyzed during this study is available from the first author upon reasonable request.
