Abstract
Objective
Generative AI is increasingly used to provide health-related information alongside online health information seeking (OHIS). Its adoption, however, depends on users’ willingness to use it. This study investigates individual factors associated with more frequent OHIS: health status, health anxiety, and eHealth literacy. Using the Technology Acceptance Model (TAM), we examined whether these factors are related to greater trust in generative AI for health-related purposes and a higher willingness to use it.
Methods
Using structural equation modeling (SEM), we analyzed cross-sectional survey data (N = 4775) representative of adult Czech internet users (50% female; aged 18–95 years).
Results
Trust in AI was strongly associated with willingness to use AI. Health status and health anxiety were related to willingness to use AI only indirectly through trust. Higher eHealth literacy was associated with more trust only marginally and had no direct relationship with willingness to use AI. Wellness-related OHIS was positively associated with willingness to use AI for wellness purposes, and illness-related OHIS was associated with willingness to use AI for illness purposes.
Conclusion
Although not emphasized in the TAM and its health-related extensions, trust seems to be a critical mediator in the adoption of generative AI for health purposes. The other factors related to OHIS were not associated with the willingness to use AI beyond their relationship with trust. Notably, eHealth literacy was practically unrelated to both trust and the willingness to use AI, whereas health anxiety and worse health status, which are associated with riskier and higher-stakes use of online health information, were related to higher acceptance.
Introduction
The introduction of artificial intelligence (AI) is transforming healthcare practices. This includes the introduction of AI-assisted medical care and the way patients gain information about health. Generative AI systems, such as ChatGPT by OpenAI and Bard by Google, are trained on broad datasets, including medical information, and are designed to generate human-like text. As a result, although not intended for medical purposes, they are becoming sources of health-related knowledge, potentially more comfortable than a common web-based search. 1
A key advantage of generative AI is that it offers information tailored to individual needs. When seeking information on the internet, users have to navigate inconsistent information that varies in quality and relevance. This can provoke feelings of uncertainty and overload, and it assumes sufficient health and digital literacy.2–4 Generative AI can overcome this by delivering information in a concise way that minimizes unnecessary alarm.5,6 In terms of credibility, AI-generated answers are at least comparable to information found through a Google search.7,8 On the other hand, there are new types of limitations that may complicate the adoption of generative AI for health-related purposes. For instance, because these systems were intended to converse rather than to provide information, they tend to “hallucinate” to maintain the conversation when credible information is lacking. 6
Altogether, generative AI is an increasingly popular source of health information, which complements traditional online health information seeking (OHIS).8,9 Nonetheless, as with other new technologies, its use depends on whether people are willing to adopt it despite its limitations. Research on non-medical generative AI for health purposes is still very limited, and it focuses on the features of AI that could be improved to increase user acceptance.1,9 However, we lack insight into which users are willing to adopt it in its current form. Identifying the individual factors linked to trust in health-related information from generative AI, and to the willingness to use it, can support its safe and effective integration into healthcare. This study addresses this gap by examining how individual differences relate to trust in non-medical generative AI and the willingness to use it for health purposes.
To some extent, assumptions can be deduced from research on patient attitudes toward medical AI (i.e., tools that augment or replace practitioners, evaluate test results, propose a diagnosis). Yet, we need to consider several differences. First, even compared to an independent medical AI,10,11 the level of supervision is low, and the accountability is not clear. This may be counterbalanced by the fact that non-medical generative AI is intended for more low-risk activities (e.g., preliminary explanations, clarifications, and advice for a healthy lifestyle) and should refrain from any diagnostics.6,8
Second, non-medical generative AI can be used for illness-related topics (i.e., diagnostics and the management of diseases) and for wellness-related topics (i.e., promoting a healthy lifestyle), which may differ in causes, consequences, and potential risks.12,13 For instance, while AI-generated illness information may substitute for other sources,14,15 wellness information, especially when sought by people with no acute health issues, likely does not. Yet, research on medical AI has not distinguished between the types of topics researched. 1 To address this gap, we focus separately on the two types of topics, providing new insight into the area of wellness-related information.
Technology acceptance model (TAM) and willingness to use AI
Our study focuses on the willingness to adopt generative AI as a new technology for health-related internet use, oriented toward illness- and wellness-related information seeking. In line with this aim, its conceptual framework is inspired by the Technology Acceptance Model (TAM) 16 and its extensions relevant to health-related topics. The TAM posits that the willingness to accept a new technology is affected by individual characteristics that determine whether users perceive it (1) as useful and (2) as easy to use. In turn, perceived usefulness and ease of use combine to foster a greater willingness to adopt the new technology. 16
In a health-related context, perceived usefulness is determined by health consciousness and perceived health risk, which make health information more salient and relevant. 17 As a result, health information should be more useful for individuals who worry about their health, whether due to worsened health status or health anxiety (an ungrounded fear about one's health). Furthermore, the individual factors that should facilitate acceptance include eHealth literacy (i.e., the ability to understand health information and to seek it online). A person who perceives themselves as competent in eHealth should perceive the technology as easier to use and also as more useful, as their skills contribute to its usefulness. 18
Finally, in their extension of the TAM, Ghazizadeh et al. 19 have shown that these factors do not affect the willingness to adopt a new technology only directly, as posited by the original model, but mainly through the user's trust. Although trust seems to be an important factor in establishing the willingness to use medical AI, 20 the integrated model by Ahadzadeh et al., 17 in its current state, does not reflect it. Also, the role of trust has not been studied for general-purpose AI used for medical purposes. In our study, we aim to enrich the health-related TAM by including trust as a mediator between the antecedents of technology acceptance and the willingness to use the new technology.
Health status
According to the health-oriented extension of the TAM, 17 a worse health status makes the topic of health more salient and, consequently, health-related technologies more useful. Among other behaviors, this applies to health-related internet use: people with worse health status tend to use the internet for health-related purposes more frequently and in a more diverse way. 21 Although these trends have not yet been studied with regard to AI, we assume that the mechanisms are similar. Moreover, we assume that people with worse health status are motivated to use AI for both illness- and wellness-related purposes because the healing process often involves both the treatment of symptoms and the adoption of healthy habits. 22
H1: Worse health status is associated with a higher willingness to use AI for (a) wellness-related and (b) illness-related purposes.
Health anxiety
Health anxiety represents an ungrounded fear of illness and a tendency to misinterpret or exaggerate the importance of bodily sensations. A robust body of studies shows that people with higher health anxiety are generally more interested in health-related topics and tend to seek online health information more often.23,24 Moreover, people with high health anxiety tend to seek health-related information as a reassuring behavior and a coping strategy. 3 This includes consulting multiple sources, such as specialists, family, and media.25,26 While the relationship between health anxiety and the health-related use of generative AI has not yet been studied, we assume that generative AI is of interest to people with health anxiety and that they are more willing to use it.
H2: Health anxiety is positively associated with the willingness to use AI for (a) wellness-related and (b) illness-related purposes.
eHealth literacy
eHealth literacy represents a set of health-related knowledge and digital skills, including the ability to navigate web-based searches, find health-related information online, and evaluate its reliability and relevance for the user. 4 These skills are related to more frequent health-related internet use because they enable and encourage the search.27,28 The relationship between eHealth literacy and the use of AI in a medical context has been studied less. Higher eHealth literacy is related to more positive attitudes toward AI-assisted medical consultations, both directly and indirectly, through its perceived efficiency. 29 In line with the TAM, we expect eHealth literacy to facilitate the use of AI for health-related purposes.
H3: eHealth literacy is positively associated with the willingness to use AI for (a) wellness-related and (b) illness-related purposes.
Trust
Trust in the performance and reliability of health-related AI seems to be crucial for its adoption, while mistrust in some of its components presents a barrier. 30 These concerns appear to grow as supervision decreases and the independence of the AI increases. In Esmaeilzadeh et al., 10 patients expressed a number of reservations, which were only salient when AI would substitute for, not augment, a standard appointment. These reservations relate to privacy, transparency, the lack of direct human supervision, and unclear accountability, 10 all of which can also be expected for non-medical generative AI. Non-medical generative AI brings additional concerns related to the trustworthiness of the results. First, some users fear potential inaccuracy, based either on previous experience or on the fact that the training data may be outdated or contaminated with misinformation. Second, they question the reasoning of the system, which limits the range of medical decisions they are willing to make based on recommendations from AI.31,32
Accordingly, the importance of trust in the intention to use AI also seems to grow with higher levels of its independence. Huo et al. 33 have shown that, while trust was not important to accept AI as an assistive tool for a doctor, it did play an important role in accepting an independent diagnosis by medical AI. Also, trust is strongly associated with the intention to use generative AI, in general, 1 although studies focused on trust related to the use of generative AI for more specific health-related purposes are lacking. Therefore, we assume that, similar to medical AI systems, trust is an important factor in the willingness to use general AI for health-related purposes.
H4: Trust in health-related information from AI is positively related to the willingness to use AI for (a) wellness-related and (b) illness-related purposes.
Trust as a mediator between other factors and willingness to use AI
Moreover, we expect that trust in AI not only facilitates its adoption but also mediates the relationship between the other abovementioned variables and the willingness to use it. People with more health problems are more concerned about the accountability of medical AI, and they may not fully follow its recommendations. At the same time, they seem to appreciate it as a quick solution and trust its performance. 10 Esmaeilzadeh et al. 10 show that some specifics exist for chronic patients, who need more long-term collaboration and whose relationship with their doctor cannot be replaced. However, the authors focused only on situations where medical AI replaced or augmented medical appointments. These reservations may not translate into lower trust in complementary generative AI use. 10 Therefore, we cautiously hypothesize that worse health status might be related to higher trust toward AI.
H5: Worse health status is associated with higher trust toward health-related information from AI.
Similarly, people with health anxiety seem to approach sources of health-related information less critically. Consequently, they have a higher tendency to trust and share health-related information, even when it is unverified and untrustworthy. 34 Therefore, it may be easier for them to trust health-related information generated by AI, and we propose the following hypothesis:
H6: Health anxiety is positively associated with trust toward health-related information from AI.
Finally, higher health and eHealth literacy facilitate access to sources that may be demanding in terms of seeking, understanding, and evaluating information (e.g., medical websites, as compared to celebrities or TV, which appeal to the authority of the speaker). This could be associated with higher trust in information from these sources.35,36 At the same time, eHealth literacy has been associated with slightly increased distrust of AI. 29 However, that study addressed beliefs about AI as a replacement for doctors rather than as a source of medical information, toward which attitudes emerged as rather positive. Therefore, people with higher eHealth literacy could trust AI more as a source of medical information:
H7: eHealth literacy is positively associated with trust toward health-related information from AI.
Online health information seeking (OHIS)
According to the Theory of Channel Complementarity, 37 people combine various sources to harness their benefits and offset their limitations, which is also typical for health information seeking.38,39 Web-based and AI-based health information seeking have unique strengths, and each compensates for the weaknesses of the other. While AI-generated health information may be easier to comprehend and may simulate human interaction, web-based information is better referenced and easier to verify. 8
Moreover, previous studies have shown that seeking online health information focused on illness and wellness topics tends to co-occur.12,13 Although the patterns and needs may differ across the lifespan40,41 or according to health status, 22 health-related internet searches often combine both, such as when searching for treatment together with preventive and complementary measures. Therefore, we expect that illness- and wellness-related OHIS will be related across web-based and AI-based information seeking (i.e., more frequent seeking of wellness-related information on the web may be associated with a higher willingness to use AI for illness-related purposes, and vice versa):
H8: Wellness-related online health information seeking will be positively associated with the willingness to use AI for (a) wellness-related and (b) illness-related purposes.
H9: Illness-related online health information seeking will be positively associated with the willingness to use AI for (a) wellness-related and (b) illness-related purposes.
Control variables
Additionally, we control for several demographic variables. First, gender can be associated with health-related AI adoption in various ways, because women are generally more frequent seekers of health information 42 and men are more frequent AI users. 43 Second, older adults tend to be less frequent health-related internet users, 42 and they tend to be slower in adopting new technologies. 44 Third, higher education tends to be associated with both more frequent health-related internet use and more AI use. 42 Finally, we control for whether our participants had used generative AI before, because it might be easier to use for those with experience.
The current study
Generative AI models, such as ChatGPT, are becoming a promising source of health-related information that may complement web-based searches. However, their dissemination strongly depends on the willingness of users to adopt them. It is unclear whether factors that facilitate web-based health information seeking are also associated with the willingness to use AI for similar purposes. While some of the mechanisms may be similar, others may differ, at least because of the interplay between the willingness to seek health information on the internet and the willingness to use a new technology. The aim of the current study is to explore the individual factors that contribute to online health information seeking and that may be associated with the willingness to use AI.
Framing our study within the Technology Acceptance Model,16,17 we aim to identify the factors related to the perceived usefulness and ease of use of health-related AI, namely health status, health anxiety, and eHealth literacy. We extend the model by studying an indirect path through trust in AI. Additionally, we study the relationship between web-based OHIS and the willingness to use AI for health-related purposes. Finally, unlike previous studies, we conceptualize health-related information as illness- and wellness-related, because the motives for each may differ slightly. For a summary of our hypotheses, see Figure 1.

Figure 1. Conceptual model, with hypotheses.
Materials and methods
Participants
The current study is based on cross-sectional data collected from Czech internet users in an online survey. Our procedure followed the guidelines for observational cross-sectional studies according to STROBE (see Supplement 1 for the checklist). Participants were eligible if they were aged 18 years or older and used the internet. The data were collected between October 2 and 16, 2023, by STEM/MARK, a Czech market research agency, which contacted members of its research panel (Czech National Panel, a part of National Sample s.r.o.). The agency is a member of ESOMAR and follows its guidelines for data protection and panel management. The Czech National Panel consists of 64,000 Czech panelists, including specific populations and populations that are harder to reach online, ensuring the representativeness of various Czech sub-populations. The agency used quota sampling; specifically, the participants were representative of the adult Czech population in terms of age, education, household income, municipality size, and NUTS 3 region, according to Eurostat. We required equal representation of men and women in the sample according to the gender they had stated to the agency (with ±7% tolerance). The agency approached its panelists either online or face-to-face, and the survey was administered with the computer-assisted web interviewing method. We requested the largest sample the agency could provide within the project's budget, with a minimum of 3500 participants to ensure meaningful representation of marginal groups (e.g., more than 100 participants in the smallest region).
The study was approved by the Research Ethics Committee of Masaryk University (no. EKV-2023-102). Before entering the survey, the participants gave their informed consent. The participants were guaranteed the option not to respond to every item and to leave the study at any time. They were assured that their data would be fully anonymized, which meant that their responses could not be tracked and deleted once submitted. For finishing the survey, participants were given a monetary reward, the amount of which is unknown to the researchers but is in line with the standard system of European National Panels.
Of 5480 participants who initially opened the survey, 4921 were eligible to participate. The rest could not participate either because they did not comply with the quota (e.g., due to not being internet users) or because the quota was already filled. Additionally, data for 146 participants who filled in the questionnaire were removed due to the insufficient quality of their responses (i.e., more than 10% missing data, an unrealistically short time spent on the questionnaire, or highly logically inconsistent responses). The final sample consisted of 4775 Czech internet users aged 18–95 (M = 45.37, SD = 16.4). Although the agency recruited participants who identified as male or female in the pool, 50.4% of participants reported their gender as male in the survey, 49.5% identified as female, and 0.1% (N = 4) reported their gender as “other.” Of the participants, 28% had experience with using a generative AI system. Men in our sample had a slightly higher income than women (t = −10.50, P < .001, Cohen's d = −.30), and women were slightly more educated than men (t = 3.29, P < .001, Cohen's d = .10). Both differences are in line with trends in the Czech population, in both direction and effect size. 45 In line with the quota, men and women did not differ in age (t = 0.57, P = .57). For further demographic information, see Supplement 2.
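For illustration, the exclusion criteria described above could be expressed as follows. This is a minimal R sketch, not the study's actual cleaning script: the data frame, column names, and the exact cutoff for completion time are hypothetical placeholders.

```r
# Illustrative sketch of the exclusion criteria described above.
# 'raw_survey', the q_* item columns, 'duration_sec', 'flag_inconsistent',
# and the 180-second cutoff are hypothetical placeholders.
library(dplyr)

cleaned <- raw_survey %>%
  mutate(prop_missing = rowMeans(is.na(across(starts_with("q_"))))) %>%
  filter(prop_missing <= 0.10,   # drop respondents with >10% missing data
         duration_sec >= 180,    # drop unrealistically fast completions
         !flag_inconsistent)     # drop highly logically inconsistent responses
```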
Measures
All measures used in this study, except for the eHealth Literacy Scale, were adapted from English using the TRAPD procedure. First, the items were translated by two independent researchers. A third researcher then formulated the final translation in consultation with both translators. All scales used in the survey were thoroughly pretested in cognitive interviews: five participants, diverse in terms of gender and education and ranging from 26 to 73 years of age, gave feedback on the survey's content, clarity, and comprehensibility. The full survey is available in Supplement 3.
Willingness to use AI for health-related purposes
We created this measure for the purpose of this study. We asked whether the participants would use an artificial intelligence system, such as ChatGPT or Bard, to obtain information on three wellness-related topics (i.e., how to exercise, how to eat healthily, and how to achieve/maintain/lose weight) and three illness-related topics (i.e., self-diagnostics, treatment, and the cause of symptoms). The scale was adapted from the questions for OHIS (see below) and combined with a scale measuring the willingness to use AI, 46 which is an integral part of the TAM. The response scale ranged from 1 (Definitely not) to 4 (Definitely yes). The internal consistency of the scale was excellent (ω = .95; M = 1.84, SD = .86 for wellness use; M = 2.06, SD = .92 for illness use).
Trust in health-related information from AI
Trust was measured by a 3-item sub-scale on trust from Shin. 47 We asked whether, in the context of health, the participants trusted recommendations from AI-driven services, whether they considered AI health recommendations to be trustworthy, and whether they believed that the results generated by AI were reliable. The response scale ranged from 1 (Strongly disagree) to 5 (Strongly agree). The internal consistency of the scale was excellent (ω = .95; M = 2.36, SD = .95).
Health status
To measure health status, we used an adaptation of the Patient Health Questionnaire-15. 48 Compared to the original tool, we aggregated the respective symptoms into four broad categories: gastrointestinal, cardiopulmonary, musculoskeletal, and sleep/energy. We asked the participants how much they had been bothered by these symptoms over the preceding 4 weeks. The response scale ranged from 1 (Never) to 5 (Always). The overall score was computed as the mean of the items (M = 2.55, SD = .83).
Health anxiety
We measured health anxiety with 14 items from the Short Health Anxiety Inventory 49 that related to the perceived illness likelihood and body vigilance. In line with recommendations by Alberts et al., 50 we decided not to include four items related to the perceived negative consequences of illness, which likely represent a unique construct and could endanger the validity of the scale. The response scale was similar to that in Lagoe and Atkin 51 and it ranged from 1 (Strongly disagree) to 5 (Strongly agree). The internal consistency of the scale was excellent (ω = .95; M = 2.1, SD = .78).
eHealth literacy
To measure eHealth literacy, we used seven items from the eHealth Literacy Scale (eHEALS). 52 The translation had been used successfully in previous Czech studies with other samples. 13 Participants evaluated their own skills in working with health information online, namely knowing where and how to find it, how to assess its quality and reliability, and how to use it to their advantage. The item contributing least to internal consistency (Q8; “I feel confident in using information from the internet to make health decisions”) was left out to shorten the scale while maintaining its validity. The response scale ranged from 1 (Strongly disagree) to 5 (Strongly agree). The internal consistency of the scale was excellent (ω = .90; M = 3.70, SD = 0.69).
Online health information seeking
We asked our participants how often in the preceding six months they had used the internet to search for information, discussions, articles, or posts about health. Online health information was divided into two domains: a focus on wellness and a focus on illness. Each domain was represented by six items, which converged the scales developed ad hoc for previous studies (wellness items12,13; illness items 53 ). Wellness-related information included diet, exercise, weight management, promotion of health/prevention of illness, lifestyle facilities (e.g., massages or gyms), and dietary supplements and vitamins. Illness-related information included specific diseases, the causes of symptoms, diagnosed and undiagnosed conditions, treatment options, and healthcare facilities and services. The response scale ranged from 1 (Never) to 6 (Several times a day), and the reliability of the scale was excellent (ω = .94; M = 2.09, SD = .85 for wellness-related OHIS; M = 2.12, SD = .77 for illness-related OHIS).
Control variables
Gender was assessed by a single item with the options “man,” “woman,” and “other.” The participants were asked to enter their age in years as an open question. We asked participants about their highest completed level of education, which was further coded as (1) primary or secondary without General Certificate of Secondary Education (GCSE), (2) secondary with GCSE, and (3) university or higher (including higher vocational school). Previous experience with AI was assessed with the following question: “Have you used any artificial intelligence system in the last year where you can ask questions and the system automatically answers you? These are, for example, ChatGPT, OpenAI, Bard, etc.” It was handled as a binary variable.
Statistical analysis
The mediation analysis was conducted with the R package “lavaan.” 54 In this model, the dependent variables were the willingness to use AI for wellness-related health information (“AI wellness use”) and the willingness to use AI for illness-related health information (“AI illness use”). The mediator was trust in AI-generated health information. The predictors were health anxiety, health status, eHealth literacy, and OHIS behaviors. Age, gender, educational level, and previous AI use (AI users vs. non-users) were included as control covariates. Health anxiety, trust in AI, willingness to use AI, and eHealth literacy were modeled as latent variables. For other variables with multiple items, we calculated their mean score. We did not impute missing data. Relevant data and script are available through OSF: https://osf.io/h64kw/?view_only=9068d0058aa9408d9743bc1ec0f91809
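For orientation, the structure of this model can be sketched in lavaan syntax as follows. This is a minimal illustration: the item names and indicator counts are hypothetical placeholders, and the authors' actual script is available in the OSF repository linked above.

```r
# Minimal sketch of the SEM described above (lavaan syntax).
# Item names and indicator counts are hypothetical placeholders;
# 'dat' stands for the survey data frame.
library(lavaan)

model <- '
  # Latent variables (measurement model)
  trust    =~ tr1 + tr2 + tr3
  anxiety  =~ ha1 + ha2 + ha3
  elit     =~ el1 + el2 + el3
  ai_well  =~ aw1 + aw2 + aw3
  ai_ill   =~ ai1 + ai2 + ai3

  # Mediator regressed on predictors and covariates (H5-H7)
  trust ~ a1*health_status + a2*anxiety + a3*elit +
          ohis_well + ohis_ill + age + gender + education + ai_user

  # Outcomes regressed on trust (H4) and the direct paths (H1-H3, H8-H9)
  ai_well ~ b1*trust + health_status + anxiety + elit +
            ohis_well + ohis_ill + age + gender + education + ai_user
  ai_ill  ~ b2*trust + health_status + anxiety + elit +
            ohis_well + ohis_ill + age + gender + education + ai_user

  # Example indirect effects (health status -> trust -> willingness)
  ind_well := a1 * b1
  ind_ill  := a1 * b2
'

fit <- sem(model, data = dat)
summary(fit, standardized = TRUE, fit.measures = TRUE)  # CFI, TLI, RMSEA, SRMR
```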
Results
Bivariate correlations between the variables (see Table 1) show positive relations between all of the independent variables and both trust in health-related AI and the willingness to use it, suggesting that our hypotheses are plausible. The final model exhibited a good fit (χ2(808) = 12991.5, CFI = 0.92, TLI = 0.94, RMSEA = 0.06, SRMR = 0.03).
Table 1. Correlations between key variables. *P < .001.
Due to our sample size, the analysis tended to detect very weak effects as statistically significant, although their practical significance is negligible. Therefore, we decided to accept only results with a β-value of .10 or higher as strong enough to support our hypotheses, which is in line with previous research in this field. 13 Results that are supportive of our hypotheses are presented in Figure 2. For all results, see Table 2.

Figure 2. Conceptual model, with results. Note: Only significant results with β ≥ .10 are displayed.
Table 2. Results of the main analysis. Note: Due to the sample size, even marginal effects were statistically significant; therefore, P-values should always be interpreted in the light of effect sizes.
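As an illustration of the decision rule above, the |β| ≥ .10 threshold can be applied to the standardized model output. This is a sketch, assuming `fit` is the fitted lavaan object from the analysis script; `standardizedSolution()` is lavaan's accessor for standardized estimates.

```r
# Sketch: flag regression paths that meet the |beta| >= .10 threshold.
# 'fit' is the fitted lavaan model from the analysis script.
est <- standardizedSolution(fit)   # standardized estimates, SEs, and p-values
paths <- subset(est, op == "~")    # keep only regression paths
paths$supported <- abs(paths$est.std) >= 0.10 & paths$pvalue < 0.05
paths[, c("lhs", "rhs", "est.std", "pvalue", "supported")]
```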
The standardized SEM model indicated that, after adjusting for gender, age, education level, and previous AI use, trust in AI was strongly associated with the willingness to use AI for both wellness purposes (H4a; β = .59, P < .001) and illness purposes (H4b; β = .61, P < .001), in line with our expectations.
In line with H5, people with worse health status trusted health-related AI slightly more (β = .10, P < .001). Worse health status was also marginally associated with a higher willingness to use AI, both for wellness purposes (H1a; β = .05, P = .002) and illness purposes (H1b; β = .07, P < .001). However, we consider this effect too weak to convincingly support H1, especially given the sample size. Similarly, in line with H6, people with higher health anxiety were slightly more trusting toward AI for health-related purposes (β = .13, P < .001). However, the direct association between health anxiety and the willingness to use AI was too marginal to support H2, for both wellness purposes (H2a; β = −.03, P < .001) and illness purposes (H2b; β = .04, P < .001). Altogether, the effect of health status and health anxiety on the willingness to use AI for health-related purposes seemed to be fully mediated by trust.
Contrary to our expectations, higher eHealth literacy was associated with more trust toward health-related AI only very weakly (β = .08, P < .001), which practically does not support H7. The direct association between eHealth literacy and the willingness to use AI for wellness purposes was negative and negligible (β = −.02, P = .002), and there was no association between eHealth literacy and the willingness to use AI for illness purposes (β = .00, P = .99). Therefore, H3 was not supported.
Hypotheses 8 and 9 presumed that the willingness to use health-related AI would be positively associated with the frequency of OHIS. Both hypotheses were supported, but only for the same type of information (i.e., wellness to wellness, illness to illness). More frequent wellness-related OHIS was associated with a higher willingness to use AI for wellness purposes (H8a; β = .13, P < .001), and more frequent illness-related OHIS was associated with a higher willingness to use AI for illness purposes (H9b; β = .11, P < .001). However, the relationship with the other type of information was marginal, and even negative for wellness-related OHIS (H8b), and absent for illness-related OHIS (H9a).
Finally, although we did not present a hypothesis regarding this effect, more frequent wellness-related OHIS was also associated with more trust toward health-related AI (β = .16, P < .001), and this trust partially (46.13%) mediated the relationship between wellness-related OHIS and the willingness to use AI for wellness purposes. This effect was also somewhat visible for illness-related OHIS, but it was marginal, with the significance likely being an artifact of the sample size (β = .05, P < .001).
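For clarity, the proportion mediated follows the standard decomposition of the total effect into indirect and direct components. Using the rounded standardized paths reported above gives only an approximation; the exact 46.13% comes from the unrounded model estimates:

$$\text{proportion mediated} = \frac{a\,b}{a\,b + c'} \approx \frac{.16 \times .59}{.16 \times .59 + .13} \approx .42,$$

where $a$ denotes the path from wellness-related OHIS to trust, $b$ the path from trust to the willingness to use AI for wellness purposes, and $c'$ the direct path between the two.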
Discussion
The advancement of systems based on artificial intelligence has impacted various areas, including electronic health. One new opportunity is the possibility of consulting health-related topics, including self-diagnostics and lifestyle, with generative AI systems such as ChatGPT. The aim of this study was to explore whether factors associated with traditional web-based OHIS may also facilitate the willingness to use generative AI for similar purposes. Building on the Technology Acceptance Model, we studied factors that may contribute to the willingness to use the new technology. In line with studies from other areas that apply the TAM, we enriched the model by studying an indirect path through increased trust in the technology.
We expected that worse health status (H1) and higher health anxiety (H2) would be associated with a higher willingness to use AI for (a) wellness-related and (b) illness-related purposes. In line with the health-related TAM, health topics should be more salient for people with higher health anxiety or worse health status, who, albeit for different reasons, experience elevated health consciousness and perceived health risk. 17 As a result, they should also be more engaged in health-related technology use, as has been shown for online health information seeking. 42 However, unlike web-based health information seeking,21,24 neither health status nor health anxiety was directly associated with the willingness to use AI for health-related purposes. Similarly, we assumed that eHealth literacy may serve as a proxy for ease of use. 17 Therefore, we expected that users with higher eHealth literacy would be more willing to use AI for health-related purposes (H3a,b), because the adoption of the new technology is easier for them, from the perspective of both health knowledge and digital skills. However, again, we did not find support for this direct relationship.
Altogether, it seems that factors that should motivate health information seeking and facilitate its evaluation do not directly motivate the use of health-related AI in its current state of development. This may be due to the novelty and potential limitations of the technology. 6 Presumably, people with certain traits are more likely to use it only if they consider it a trustworthy source of health information. Therefore, we considered the role of trust in the process and studied whether health status, health anxiety, and eHealth literacy contributed to more trust, which would contribute to the potential adoption of the technology, at least indirectly.
Following our expectations (H4a,b), trust in health-related information from AI was strongly associated with the willingness to use it for both wellness- and illness-related purposes. This is in line with previous literature, which suggests that trust is a crucial driver of the adoption of AI for health-related purposes, 20 and that it is all the more important when there is no medical professional to supervise the technology and guarantee the results. 33
Previous research has yielded conflicting results for the association between perceived health status and trust in online health information in general. 55 In the case of medical AI, patients with worse health status trust its performance more, but they are more aware of its drawbacks if it were to replace or augment physicians in their decision-making. 10 In the current study, people with worse perceived health status seemed to be slightly more trusting toward the health-related use of AI than those with fewer health issues, which is in line with H5. It is possible that, for non-medical generative AI, people with worse health status endorse the benefits but, at the same time, do not have to worry about the drawbacks, because such systems were never intended to replace medical appointments. In line with H6, health anxiety was also associated with more trust in health-related AI. People with health anxiety have a high need for health-related information, 24 which leads them to consult all available sources and to approach them less critically. 34 Our results suggest that this also includes generative AI as a new, not yet established, source of health-related content.
On the other hand, although we found a relationship between eHealth literacy and trust in health-related information from non-medical AI, in line with H7, it was marginal and below practical significance. Yet, it is likely that these weak to null effects are at least partially a result of the multifaceted—and perhaps also the non-linear—relationship of eHealth literacy with trust in AI and the willingness to use it for health-related purposes. First, although commonly used as a one-factor scale, eHealth literacy consists of at least two dimensions: health literacy and digital literacy. 52 As was shown in Kang et al., 29 different dimensions of health literacy may contribute differently to attitudes toward AI. For instance, while higher digital and disease prevention literacy were associated with higher distrust in AI, higher health promotion and healthcare literacy were not. 29
Second, the effect of each of the sub-dimensions may not be linear. For instance, digital literacy is associated with a higher perceived efficiency of AI, but also with higher distrust. 29 Moreover, this may change with growing literacy. Huang and Ball 56 studied trust in online AI diagnoses and in hospital diagnoses according to AI literacy. Although users with higher AI literacy were generally more trusting of the health applications of AI, there was a drop in trust among users with intermediate AI literacy, which suggests more skepticism. 56 As a result, both of these inconsistencies may lead to seemingly null effects, potentially obscuring the true role of eHealth literacy in health-related AI use.
Overall, the abovementioned results show that the factors associated with OHIS, namely health status, health anxiety, and possibly eHealth literacy, are associated with the willingness to use AI for health-related purposes. However, our results indicate that their role in AI adoption is not direct; it lies in the fact that users who score higher on these factors are more likely to trust the technology. This underlines the role of trust, which is not automatically granted in the case of generative AI, especially due to its novelty and the concerns related to its black-box nature. We have discussed that the role of trust seems to rise with the independence of the system. For non-medical generative AI, this may be amplified by more sources of distrust compared to highly specialized medical AI. Generative AI brings new concerns related to accuracy and new privacy issues (e.g., becoming a part of training data). Also, as the most popular generative AI systems are products of companies such as OpenAI, Google, or X, trust in the system may be associated with attitudes toward the company and beliefs about its interests.
Our findings have implications for the TAM adapted for health technologies, 17 which does not acknowledge the role of trust in the acceptance of a new technology. In the current study, individuals with higher health anxiety and poorer health were more willing to use generative AI for health-related purposes—but only because they were more likely to trust the technology. This is important to acknowledge because trust can easily be undermined, potentially leading to the rejection of the technology. We therefore suggest that health-related technology acceptance models (TAM) be extended to include the indirect role of trust. This would clarify the extent to which individual factors influence acceptance directly, and the extent to which they operate through trust—which may, but does not necessarily, lead to actual use.
In line with this, we were still able to observe variables that were associated with the willingness to adopt AI regardless of the trust level. In addition to the effects based on the TAM, we explored the association between web-based OHIS and the willingness to use AI for similar purposes (H8, H9). We expected that, in line with previous research, 37 users would combine both modes to maximize their benefits. Moreover, seeking wellness-related and illness-related topics is often combined, 40 and the two were strongly correlated both in online health information seeking and in the willingness to use AI (see Table 1). In our study, we found support for this complementarity, but only to some extent. More frequent wellness-related OHIS on the web was associated with a higher willingness to use AI for wellness-related topics (H8a), but not for illness-related topics (H8b). Similarly, a higher frequency of illness-related OHIS was associated only with a higher willingness to use AI for illness-related topics (H9b).
Apparently, web-based and AI-based searches may complement one another within a thematic domain, potentially differing in the specific topics of interest. At the same time, more wellness-related OHIS, a relatively low-risk behavior, may not translate into the willingness to use a new technology for more serious illness-related information, such as self-diagnostics or treatment suggestions. 9 Similarly, some illness-related motivations for using AI may not be linked to OHIS related to wellness and prevention. For instance, seeking information on illnesses out of curiosity does not present much risk, and it is not linked to wellness-related OHIS. Alternatively, people may be willing to discuss their health status with AI before a physician appointment, which may not be associated with seeking prevention and a healthy lifestyle. Research focused on the specific patterns of the actual health-related use of generative AI is needed to shed light on these potential differences.
Limitations and implications
This study showed that user characteristics related to online health information seeking may facilitate the use of AI for health-related purposes. Generative AI may become an additional source of information for people who already seek health information online, potentially opening space for the benefits of the combination of both. However, our study should be interpreted with several limitations in mind.
First, while we based our hypotheses on a well-established theoretical model and used stable predictors, the data were cross-sectional, which limits causal conclusions, including the mediation effects. Second, we examined the willingness to use AI for health-related purposes rather than actual usage, because generative AI was not widely adopted for serious health reasons at the time of data collection. Future studies should focus on actual usage. Moreover, they should examine whether the role of trust has changed, because as generative AI becomes more established, trust may be less crucial. Third, although we controlled for several demographic variables in the analysis, there may be societal trends that we did not account for. For instance, we did not include income, due to its expected non-linearity, although it could play a role in AI adoption. While we expect the willingness to use AI to rise with income, at least due to technology provision or education, our sample includes multiple sub-populations that violate this expectation, such as well-educated young adults who still study, have only begun their careers, or are on parental leave. However, this approach may have omitted a specific type of user distinguished by income but not by education (e.g., IT workers without a university education). Also, although we tried to ensure our sample was as representative as possible of the Czech population, we assume that some sub-populations may still be unwilling to participate in surveys, and therefore under-represented (e.g., the technology-averse or research-averse).
Finally, the Czech Republic is a country with high technological provision and digital literacy, comparable to other European countries. 57 ChatGPT became available in the Czech Republic in late 2022 and was already usable in the Czech language; Bard supported Czech at the time of data collection. In 2023, Czech users showed a balanced ratio of curiosity and skepticism toward generative AI. 58 At the time of the data collection, Bard was available for free, and although the premium version of ChatGPT (using GPT-4 for $20/month) was available, we can assume that, similarly to other countries, the vast majority of users were using the free version with GPT-3.5. Therefore, the results may be difficult to generalize to countries with lower technological provision or different attitudes toward AI.
Despite these limitations, our study has implications for future research and practice. First, users who seek online health information seem to utilize all available sources, including AI. Future studies should examine the differences between web-based and AI-based health-related internet use and map whether users combine them in a complementary way to gain the benefits of both or whether they tend to replace one mode with the other. Second, people seeking medical care, who have poorer health or more health-related worries, are more likely to trust AI. Practitioners should acknowledge this, educate their patients about using AI for health-related purposes, and be ready to discuss the information their patients find. Third, the developers of generative AI and of health-related applications, such as Dr ChatGPT, should acknowledge that the willingness to use their systems is strongly conditioned by trust. Adjusting a system to account for factors that may violate trust is vital for its adoption. Given that users are willing to accept these systems even while acknowledging that they may not be fully accurate, developers could provide clearer descriptions of their limitations to guide use. These may include a clear description of the data the AI was trained on or has access to, recommendations on what the system is and is not capable of in the area of health, and transparent information on whether the system will train on users' data or is capable of compiling a broader anamnesis from individual prompts.
Conclusion
This study explored the factors associated with the willingness to adopt non-medical generative AI for health-related purposes. To our knowledge, it is one of the first to focus on wellness-oriented AI, rather than AI developed specifically for medical care. Building on the literature on OHIS, we examined the individual predictors that may be relevant for this emerging form of health support. In doing so, we included both illness-related and wellness-related information to reflect the diverse nature of the health content available online.
Despite this broader focus, our findings align with earlier research,30,33 highlighting trust as a central factor in AI adoption. We propose that individual characteristics influence the willingness to use AI primarily through their impact on trust. This has implications for the Technology Acceptance Model (TAM), which may be enriched by explicitly considering trust to be a mediating factor in the adoption of new technologies.
Although higher eHealth literacy was only weakly associated with trust in AI, future research should disentangle the contributions of digital and health literacy. 4 Users may feel confident interacting with AI tools yet lack the ability to critically evaluate their output, an important consideration when assessing the potential risks and benefits.
We also observed patterns specific to the use of non-medical generative AI. Individuals with poorer perceived health status were more likely to trust the technology, possibly because it is not seen as a threat to professional care. This finding has clear implications for practice: healthcare professionals should be aware that patients with worsening health may turn to generative AI as a source of guidance. Supporting them in using it safely and critically could help promote informed and responsible health information use.
Acknowledgements
This output was supported by the NPO “Systemic Risk Institute” number LX22NPO5101, funded by the European Union—Next Generation EU (Ministry of Education, Youth and Sports, NPO: EXCELES). The funding institution was not involved in the study in any of the stages of its preparation.
Ethical considerations
The study was approved by the Research Ethics Committee of Masaryk University (no. EKV-2023-102).
Author contributions
Adela Svestkova: conceptualization, methodology, investigation, and writing—original draft; Yi Huang: conceptualization, methodology, formal analysis, investigation, and writing—review and editing; David Smahel: conceptualization, methodology, investigation, writing—review and editing, supervision, and funding acquisition.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This output was supported by the NPO “Systemic Risk Institute” number LX22NPO5101, funded by the European Union—Next Generation EU (Ministry of Education, Youth and Sports, NPO: EXCELES). Ministerstvo Školství, Mládeže a Tělovýchovy (grant number LX22NPO5101).
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data availability statement
All relevant data and script are available through OSF: https://osf.io/h64kw/?view_only=9068d0058aa9408d9743bc1ec0f91809
AI assistance
During the preparation of this work, the authors used ChatGPT 4.0 in order to improve the readability of the abstract. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the published article.
Consent
Before entering the survey, the participants gave their informed consent to participate in the study.
Supplementary material
Supplemental material for this article is available online.
References
