Abstract
We explored attitudes among Danes toward a healthcare project under development that combines artificial intelligence, healthcare surveillance, and big data, and that aims to improve the detection of vaccine side effects. Similar to other studies in the field, we found a dual attitude of overall support for the project coupled with apprehension about risky elements within it. We expanded on the existing literature about such technologies by framing this as “ambivalence” and acknowledging the tension it creates for interviewees. This allowed us to detect a variety of ambivalence-reducing strategies used by interviewees to square their support for the project with their awareness of the risks it introduces. Thus, in addition to conditioning their support for the project, interviewees presented attitudes of technological determinism, powerlessness, and reduced personal risk. We conclude by charting the implications of the current and future levels of public support for projects like the one discussed here.
Keywords
Introduction
New technologies are introducing new opportunities for improving public health. The combination of big data, healthcare surveillance techniques, and advanced artificial intelligence (AI) methods enables public health projects that were unimaginable just a few years ago. At the same time, such projects present ethical challenges associated with each of these technologies in a way that requires new reflection, not just on the challenges themselves, but on how to balance these concerns against the potential benefits of such projects.
Challenges include potential risks, such as data leaks or data abuses, from centralized big data repositories that contain private and comprehensive healthcare data about individuals. If such data leaks, it can have detrimental effects on these individuals and undermine confidence in the project altogether. Another example of such issues is the difficulty of providing effective human oversight over advanced AI systems, which are intrinsically black-boxed due to their use of complex machine learning techniques.
It is important, among other things, to understand the stance of members of the public toward such projects and the benefits and challenges they introduce. The perspective of affected people is crucial in shaping these initiatives, first, because it is their health data that feeds into these systems and allows them to exist (Mursaleen et al., 2017). Second, it is members of the public who stand to gain (e.g., through improved public health) and lose (e.g., through loss of privacy) from these novel healthcare systems, which renders them an important stakeholder. Thus, it is essential that members of the public have the opportunity to weigh in on them for the sake of good governance (Kalkman et al., 2022). Moreover, a lack of support from large parts of the public may cause controversy, delay, or derail such projects entirely, underscoring the influence of public opinion (Aitken et al., 2016).
Therefore, this article analyzes public opinion concerning a Danish public health project that combines healthcare surveillance, secondary use of big data, and AI (described in more detail below) on the basis of a set of interviews with a sample of members of the public. The notion of “the public” is a topic of discussion in its own right, in particular in the context of participatory activities aiming to arrive at and legitimize decisions (Felt and Fochler, 2010). Our construction of “members of the public” in our sample should be viewed in relation to our case-sampling logic (Flyvbjerg, 2006). Interviewees are not representative of the general public in a strict statistical sense. Rather, we use “the public” as a convenient shorthand for the opinions and attitudes expressed by individual members of the public, such as our interviewees.
Earlier research within healthcare about public opinion toward AI, surveillance, and big data projects has found a pattern of supporting such technologies as a whole while having reservations about elements within them (e.g., Aggarwal et al., 2021; Aitken et al., 2016; Cascini et al., 2024; Esmaeilzadeh, 2020; Fink et al., 2018; Jonmarker et al., 2019; Jutzi et al., 2020; Kalkman et al., 2022; Nelson et al., 2020; Yang et al., 2019; Young et al., 2021). In other words, members of the public seem to wish to enjoy the healthcare benefits of these systems while avoiding the risks they create to privacy, autonomy, and other considerations they hold dear.
While scholars have mainly treated this supportive but apprehensive stance as a reflection of nuance and complexity on the part of the public, thus preserving the idea of a purely rational calculus, we think a more accurate depiction is that it reflects an ambivalence that asks to be alleviated. “Ambivalence exists when someone experiences both positive and negative feelings about an issue […] facing a surge of conflicting feelings, you hesitate between” options (Nai, 2014). In these studies, members of the public are thus essentially negotiating out loud whether, and mostly how, they can support such technologies (due to the healthcare benefits) despite their risks and downsides. Thus, to understand public opinion about these technologies, we should consider not only attitudes of support and reservation, but also how members of the public choose to alleviate the tension created by their ambivalence towards these technologies.
To study this, this paper employs qualitative research, focusing on a project (later referred to as “the project”) currently under development by a consortium of Danish academic and commercial organizations. The development team aims to build a system that will draw healthcare data from multiple sources and allow real-time vaccine side effect monitoring with the help of an AI model (Innovation Fund Denmark, 2022; Nielsen et al., 2021). The novelty of this project lies in the variety of healthcare data sources, the large quantity of data it will analyze, and the advanced AI techniques that will be used to track vaccine side effects.
We conducted in-depth interviews with members of the Danish public to examine their opinions on this project. Our research questions are: What is the general attitude towards the project? Towards its potential benefits? Towards its AI component? Towards its surveillance component? How do members of the public perceive the big-data consolidation realized in such a project? Which partners are legitimate for such a project, and which are not? How do members of the public balance potential benefits against risks and ethical concerns? Are there differences in attitudes between subgroups of Danes?
Both the length of the interviews and the degree to which we “interrogated” the interviewees allowed us first to acknowledge and later examine the existence of ambivalence toward the project and how this ambivalence is alleviated. Results show that most interviewees displayed a supportive but apprehensive pattern and used a number of strategies to alleviate the tension created by this dual position.
Attitudes and ambivalence
The study of public attitudes towards the implementation of new technologies in healthcare has proven popular among scholars (e.g., Aitken et al., 2016; Cascini et al., 2024; Young et al., 2021), not least due to the high stakes involved. The use of advanced technologies, such as machine learning and deep learning, renders these new systems extremely data-hungry, accounting for the recent sharp increase in surveillance and data-gathering practices (Duke, 2023). The raw material of these systems is the public's own healthcare data, which is private since it can reveal compromising information about individuals. However, beyond issues of privacy and data security, there are also issues of ownership. Whether members of the public are the actual owners of their health-related data, and what responsibilities bodies training AI models have towards the individuals who provide this data, are now highly debated issues (Dziedzic et al., 2024; Mursaleen et al., 2017).
Moreover, members of the public are not only the providers of raw materials for these systems; they are also potential beneficiaries of these systems, primarily of their promise of improved healthcare. Taken together, issues of data ownership, direct risks created by these systems, and potential benefits from them position the public as a primary stakeholder. Literature about the implementation of technologies stresses the importance of incorporating stakeholders into the decision-making process so that they can weigh in on them (Bostick et al., 2017; Callon et al., 2009; Frankenfeld, 1992; Stenekes et al., 2017). Nowadays, this is considered good governance, and like Kalkman et al. (2022), we believe that public attitudes towards such healthcare systems are a form of weighing in and should thus be taken into consideration.
When examining public attitudes towards the implementation of big data and/or AI technologies, it is first worth noting that the medical application of such technologies is considered riskier than their application in other sectors (Schepman and Rodway, 2020). At the same time, medicine and healthcare seem to be sectors in which potential benefits from these tools are easily and frequently perceived (e.g., Aggarwal et al., 2021; Aitken et al., 2016; Cascini et al., 2024; Esmaeilzadeh, 2020; Young et al., 2021). It thus comes as no surprise that this sector consistently produces the abovementioned dual position of support and apprehension when it comes to implementing these technologies (e.g., Aggarwal et al., 2021; Aitken et al., 2016; Cascini et al., 2024; Esmaeilzadeh, 2020; Fink et al., 2018; Jonmarker et al., 2019; Jutzi et al., 2020; Kalkman et al., 2022; Nelson et al., 2020; Yang et al., 2019; Young et al., 2021).
In practice, this means positive attitudes towards AI and/or big data projects, but also many reservations, which often translate into risk-reducing conditions or expectations. For instance, when it comes to big data, anonymizing data, securing access to data, informing about data use, and seeking consent are among the conditions that individuals set (Aitken et al., 2016; Jutzi et al., 2020; Kalkman et al., 2022; Young et al., 2021). Similar conditions pertaining to AI technologies include transparency, interpretability, and strong human oversight (Fink et al., 2018; Jonmarker et al., 2019; Jutzi et al., 2020; Nelson et al., 2020; Yang et al., 2019; Young et al., 2021).
While dual and conditioned attitudes towards these technologies and projects can be perceived as signs of complexity and nuance on the part of the public, we believe they reflect ambivalence. That is, when confronted with a technological project that may improve healthcare but put patients’ privacy or autonomy at risk, individuals become ambivalent towards the project. Ambivalence, as it pertains to the public's attitudes, is often described as a state of contradiction and tension. For instance, Webb describes it as “simultaneously saying ‘yes’ and ‘no’ […which] may reflect the contradictions and compromises” made by those individuals (Webb, 2017: 77). Nai similarly describes it as “facing a surge of conflicting feelings” (Nai, 2014: 292), while Lavine points out that people have a “problem of reconciling strongly held but conflicting principles and consideration” (Lavine, 2001: 915).
The ample social psychology literature on ambivalence differentiates it from a lack of attitude, indecisiveness, or indifference (Eagly and Chaiken, 1993; Rothman et al., 2016), stressing that “Individuals who are indifferent have weak positive and negative associations, whereas those who are ambivalent have strong positive and negative associations” (van Harreveld et al., 2015: 3). This literature also indicates that the feeling of discomfort within ambivalent individuals can be empirically gauged, and that whether conflicting attitudes will produce discomfort is contingent upon context (e.g., whether a person is required to make a decision) and on individual traits (Eagly and Chaiken, 1993; van Harreveld et al., 2015). Moreover, existing texts on the subject suggest that individuals' reactions can go either towards more complex and knowledge-inclusive attitudes toward a topic, or towards cognitive shortcuts and motivated reasoning (Eagly and Chaiken, 1993; Rothman et al., 2016).
The literature also teaches us that once the tension becomes apparent, people try to alleviate it: “when attitudinal components are not incongruent with each other. People have a motive to reduce inconsistency, because inconsistency might cause a negative affective experience” (Liu and Xu, 2020: 2). This is also supported by van den Haak and Wilterdink, who exemplify the use of downplaying to alleviate ambivalent tension (van den Haak and Wilterdink, 2019). Other tension-reducing strategies found in the literature are biases (a selective focus on one side of an issue) and the affirmation strategy, which involves adopting a theory about how the world works that eases the tension (van Harreveld et al., 2015). Applied to the field of public attitudes towards novel technologies, we can understand the abovementioned practice of conditioning not merely as nuance but as a technique to reduce this ambivalence and shift to wholehearted support of the project. Moreover, if we recognize that ambivalence is at play, there might be further tension-alleviating techniques used by individuals in order to feel better about supporting the project. Indeed, van den Haak and Wilterdink provide evidence that in-depth interviews are a powerful tool for studying ambivalent attitudes, as they are effective in charting contradictions (van den Haak and Wilterdink, 2019).
Why is understanding public attitudes towards these projects as ambivalent important? First, we believe it is a more accurate depiction of reality, as individuals do not just make a rational calculus of pros versus cons, but may also enlarge or diminish their perception of risks according to their perception of benefits (e.g., Esmaeilzadeh, 2020; Jasanoff, 1993).
Thus, by focusing on ambivalence, we gain insight into the creative distortions that people employ in the process of weighing risks and benefits. Second, we believe the ambivalence frame can explain more of the contradictions that public members present. Third, while conditioning appears to prioritize public reservations, other techniques for reducing ambivalence may conceal these reservations and lead to the conclusion that there is less opposition to the project than actually exists. Fourth, while conditioning support requires know-how (e.g., understanding that anonymization is a way to improve privacy and thus can be demanded), other ambivalence-alleviating techniques do not necessarily require know-how and are therefore more available to less informed individuals. Lastly, the literature on ambivalence in political attitudes teaches us that ambivalent attitudes are less stable, and the final actions of ambivalent individuals are harder to predict (Lavine, 2001; Liu and Xu, 2020; Webb, 2017), which may also be relevant with regard to attitudes toward technologies.
The problem of vaccine side effects and the PHAIR project
Drug side effects constitute a significant challenge for healthcare authorities, both for patients’ well-being and as a drain on resources. For instance, 5% of admissions to Danish hospitals, amounting to around 30,000 admissions, are due to drug side effects (Beijer and de Blaey, 2002; Hallas et al., 1990). With such a high number of admissions, reducing side effects can be highly beneficial, yet this requires an effective system of side effect reporting and tracking, where there is room for improvement. Adverse drug reactions are severely under-reported, with under-reporting in developed countries estimated as high as 95% (Hazell and Shakir, 2006; Khalili et al., 2021). With such high rates of under-reporting, the ability to effectively address the side effect challenge seems limited.
The current method of side effect reporting in Denmark seems to rely on lengthy, labor-intensive, manual processes in which individual case safety reports are submitted, amalgamated, and analyzed (Nielsen et al., 2021). Many drug users and prescribers are unaware of existing side effect reporting options, and patients have trouble distinguishing between symptoms of a given condition and those of drug side effects. These problems may account for the high rates of adverse drug reaction under-reporting consistently observed around the globe (Hazell and Shakir, 2006; Khalili et al., 2021). It is thus no surprise that Danish patients have voiced concerns “about the substantial lag from the actual identification of serious side effects until the relevant authority decides to act on the safety signal” (Nielsen et al., 2021: 8).
To address these issues, a consortium of Danish academic and commercial organizations is developing a system that combines healthcare surveillance, big data, and an AI model (Innovation Fund Denmark, 2022). As is stated by the team behind this project: The long-term aim of the project is to build a platform for surveillance of side effects (and effectiveness) of medical treatments using all relevant available data to enable a fine-grained analysis, and in the short term apply this to surveillance of the side effects of Covid-19 vaccines. (Nielsen et al., 2021: 12)
Indeed, COVID-19 serves as the initial application of this developed system (Innovation Fund Denmark, 2022), but also exemplifies how “the current drug monitoring systems do not provide sufficient and timely answers to concerns and questions and thus conspiracies and movements nourish on alternative and non-evidence based stories, events and narratives” (Nielsen et al., 2021: 8).
While the current system seems to be “prone to bias due to reliance on spontaneous case safety reporting as it mainly reports known side effects” (Nielsen et al., 2021: 8), the developed system's drawing of healthcare data is planned to be automatic, continuous, and unbiased (Innovation Fund Denmark, 2022; Nielsen et al., 2021). While the developed system involves a significant amalgamation of private healthcare data, it also promises to be a privacy-preserving project that will safeguard the used data from privacy-infringing practices (Innovation Fund Denmark, 2022; Nielsen et al., 2021). Regarding AI technology, it is claimed that “machine learning methods will be used for generating adverse event hypotheses using standard random forest or boosting, but also more modern deep neural network-based autoencoder embeddings and time-recursive modelling using autoregressive RNNs” (Innovation Fund Denmark, 2022: 43).
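While the project documents name advanced machine learning methods, the underlying idea of automated signal detection can be illustrated with a much simpler classical statistic. The sketch below is purely illustrative and is not part of the PHAIR system: it computes the proportional reporting ratio (PRR), a standard pharmacovigilance measure of whether a drug-event pair is reported disproportionately often. All counts are invented.

```python
# Illustrative only: the classical proportional reporting ratio (PRR),
# a simple pharmacovigilance signal-detection statistic. PHAIR plans far
# richer methods (random forests, autoencoders, RNNs); this merely shows
# the basic idea of automatically flagging drug-event pairs that occur
# disproportionately often in reports. All counts below are invented.

def prr(a: int, b: int, c: int, d: int) -> float:
    """Proportional reporting ratio.
    a: reports of the event of interest for the drug of interest
    b: reports of all other events for that drug
    c: reports of the event of interest for all other drugs
    d: reports of all other events for all other drugs
    """
    return (a / (a + b)) / (c / (c + d))

# Toy counts: the event appears in 20 of 120 reports for the vaccine,
# but only in 50 of 1050 reports for everything else.
signal = prr(a=20, b=100, c=50, d=1000)
print(round(signal, 2))  # 3.5, above the conventional signal threshold of 2
```

A PRR above roughly 2 (combined with minimum-count criteria) is conventionally treated as a signal worth investigating; an automated system would compute such scores continuously over incoming data rather than waiting for manual analysis of spontaneous reports.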
Methods
To understand Danish public opinion toward the PHAIR project and similar projects that combine healthcare surveillance, big data, and AI, we conducted 20 in-depth, semi-structured interviews with an unrepresentative sample of members of the Danish public. All interviews lasted approximately one hour and were conducted between April and May 2023. Thomas A.M. Skelly and a colleague from the department conducted all interviews in Danish. The research was approved by the research ethics review boards associated with the University of Copenhagen (Ethics approval number: 504-0396/23-5000).
Recruitment and design
Citizens were recruited through the online panel Norstat and were compensated for their participation with a gift certificate of 400 DKK. We used a purposive sampling strategy (Robinson, 2014) based on responses to a questionnaire administered to Norstat panelists (n = 303). The main factors along which we aimed to reach variation and intensity in our case selection were general trust in public health authorities, conspiracy beliefs, and personal experience with prescription drugs, as we anticipated possible attitudinal differences toward the PHAIR project between these subgroups. Of these three factors, personal experience with prescription drugs was asked about directly in the questionnaire. For trust, we replicated the Trust in Public Health Authorities (TiPHA) scale (Holroyd et al., 2021), and, for conspiracy beliefs, the Conspiracy Mentality Questionnaire (CMQ) (Bruder et al., 2013). Among the questionnaire respondents who were willing to participate in a follow-up interview, we constructed a 12/8 split in personal experience with prescription drugs. We also established variation with respect to TiPHA and CMQ by combining responses to the two measures into four groups. Individuals from three of these subgroups were included: eight with Low-Trust; High-Conspiracy, six with High-Trust; Low-Conspiracy, and six with Medium-Trust; Medium-Conspiracy. These subgroups were constructed according to principles of strategic qualitative case selection (Flyvbjerg, 2006), which made it probable that we would encounter the most extreme stances, positive and negative, toward novel health technologies in the Danish public (see Supplementary Material 1 for more details).
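The combination of the two scale scores into subgroups can be sketched as follows. This is a hypothetical illustration only: the actual cutoffs used in the study are documented in its supplementary material, and the thresholds and 1–5 scale assumed below are placeholders, not the study's values.

```python
# Hypothetical sketch of the subgroup construction described above.
# The real cutoffs are in the study's supplementary material; the
# thresholds and the assumed 1-5 scales here are illustrative only.

def subgroup(tipha: float, cmq: float) -> str:
    """Assign a panelist to a trust/conspiracy subgroup from their
    mean TiPHA (trust) and CMQ (conspiracy mentality) scores,
    each assumed here to lie on a 1-5 scale."""
    if tipha <= 2 and cmq >= 4:
        return "Low-Trust; High-Conspiracy"
    if tipha >= 4 and cmq <= 2:
        return "High-Trust; Low-Conspiracy"
    if 2 < tipha < 4 and 2 < cmq < 4:
        return "Medium-Trust; Medium-Conspiracy"
    return "Mixed (excluded)"

print(subgroup(1.5, 4.5))  # Low-Trust; High-Conspiracy
print(subgroup(3.0, 3.0))  # Medium-Trust; Medium-Conspiracy
```

The fourth logical combination (e.g., low trust with low conspiracy mentality) falls into the residual group, matching the study's decision to interview only three of the four constructed groups.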
The interview guide (Supplementary Material 1) was structured according to six themes where the four latter themes focused on the PHAIR case study: (1) personal experiences with health authorities; (2) knowledge and perceptions of health data use; (3) perceptions of the PHAIR project; (4) perceptions of privacy and data ownership; (5) perceptions of potential misuse of data; (6) legitimacy of the use of AI, as well as of corporate and international involvement in PHAIR. To contextualize the PHAIR project for interviewees and compensate for citizens’ limited knowledge of pharmacovigilance, we used simple image vignettes (Törrönen, 2018; see Supplementary Material 6). We also asked several follow-up questions to validate interviewee positions (Small and Calarco, 2022) by probing whether and how the involvement of different types of actors (e.g., commercial companies), beneficiaries, and interests involved in implementing the PHAIR project would change their attitudes. Lastly, terms such as “privacy,” “consent,” or “AI” were not defined for the interviewees to prevent potential biases.
Data analysis
After obtaining informed consent, all interviews were recorded, transcribed, corrected, anonymized, and translated into English. A hybrid method was used for coding, combining deductive and inductive processes with several iterations to improve the coding scheme and familiarity with the text (Fereday and Muir-Cochrane, 2006). Coding was carried out by Shaul Duke and subsequently sampled and examined by Thomas A.M. Skelly in order to check consistency and avoid bias. Skelly also conducted independent coding for the overall assessment of the interviewees’ attitudes, and this coding was checked against Duke's assessment. Subsequently, Duke and Skelly discussed cases of disagreement in order to reach consensus, assess robustness, and root out bias. Codes were arranged under seven themes, reflecting the main themes of the interview guide (see Supplementary Material 2). The themes were: (1) past experience with the Danish healthcare system and assessment of the current situation of this system; (2) reservations regarding data privacy; (3) reservations regarding AI; (4) reservations regarding project members and setting; (5) supportive stances; (6) ambivalence-alleviating strategies, and (7) overall assessment. Note that while Theme 5 contains all the codes that voice support for the project, Themes 2–4 contain all the codes that reflect reservations.
Results
All interviewees expressed both a positive and a negative inclination towards the developed system (Supplementary Material 3). In what follows, we will review both positive and negative stances and then link them to strategies of ambivalence alleviation.
Supportive attitudes toward the developed system
Overall, the interviewees showed significant support for the project, especially initially when it was depicted to them and they were asked to give their impression. While they did not all convey support to the same degree, even the most critical among them (Mina, Beate, Sanne, and Vera) were not devoid of positive attitudes towards the developed tool.
One example of a supportive attitude comes from Kiera, whose initial reaction was, “I think it seems like it makes perfect sense. Yes, take advantage of all the potential knowledge there is” (Kiera). Similarly, Mia's reaction was, “I actually think that's very good because then the side effects may come from more places” (Mia).
The idea that the developed system will improve healthcare and the medical understanding of side effects was the one most frequently raised as part of expressing supportive attitudes. For instance, one interviewee points out, “I think it would be beneficial for the entire health care system that you had easier access to see these patterns based on the good method design” (Anton). A second interviewee stated, “I see this only as an advantage. Both patient-wise, but also medically” (Barry), and a third added, “It would be the end user who got the most out of this” (Johnny). These findings are in line with existing literature about data reuse (e.g., Aitken et al., 2016; Cascini et al., 2024).
Some interviewees assessed that this developed tool would benefit them personally in terms of health. For instance, an interviewee stated, “I expect myself to benefit from this kind of research being carried out” (Milo). Another added, “I think that could benefit me. Yes, if it is promoting more precise analyses and knowledge” (Kiera). Another interviewee figured it would take some time until such a system would be effective. Still, it could benefit his children: “So if we look a few years ahead, I think it's something […] that will benefit my kids” (Terje).
A benefit that many interviewees raised, which is also supported by the existing literature (e.g., Young et al., 2021; Nelson et al., 2020), was the time-saving that the developed system promises to introduce. For instance, one suggested, “There are benefits to it all being gathered in one place, and you find the side effects faster” (Barry). A second interviewee adds, “The fact that you can learn something faster will be an advantage” (Maren).
Another benefit interviewees could see in this project is the cost savings for Danish healthcare authorities. For instance, it might help “better understand what the money you threw into the health system, whether it was also used as efficiently as possible” (Anton). Another interviewee assessed both costs and benefits: “It may draw some resources out of the health care system […] In the long run, I think that then you can save despite it” (Terje). A third interviewee pointed out that if this way of side effect reporting is less labor intensive for the clinical staff, then this will save money for the healthcare system: “After all, they can save a few minutes here and there on various conversations, and all in all, it can probably result in a chief physician position or a few nurses” (Ben).
Negative attitudes toward the developed system
All interviewees also voiced negative attitudes towards the developed system. While most of the voiced reservations clustered around the three topics listed below (data privacy, the AI component, and the identity of the partners), some interviewees had other reservations, such as doubts about the effectiveness of this new tool. Mina, who comes from a healthcare background (she worked as a nurse), questioned both whether the tool would effectively detect side effects and what would be done with the results. With regard to the latter point, she states: “But it is also a question of what you want to use it for. Because if you get a lot of side effects reported, will you ban that drug?” (Mina). Beate, for her part, raised doubts about whether the system being developed would be able to effectively distinguish the vaccine's side effects from those of other drugs the individual is taking or from the effects of the virus: “Now, is it a side effect of one or the other, or is it just some sensation or is it a virus or?” (Beate).
Reservations regarding data privacy
Similar to previous studies (Aggarwal et al., 2021; Cascini et al., 2024; Kalkman et al., 2022), the interviewees raised a variety of reservations regarding the developed tool concerning the lack of protection of privacy. First, most interviewees raised the issue of sensitive data in one way or another. For instance, one interviewee raised his concern about “the use of data about […] whether it was safe enough and whether it was anonymized so that it did not go back to people” (Johnny). Another interviewee voices reservations regarding sharing sensitive information with this system: “if this was something that automatically came in – if it was passed on, like personal illnesses and stuff, that it was something that had been more taken into account […] I actually think you should stay away from it. I think you should refrain from sharing that kind of information” (Terje).
Yet another interviewee states, “The more information, the more one can also reveal the truth about many different people” (Beate).
Regarding data privacy, concern about the risk of data leakage was especially salient for interviewees; that is, a scenario in which data from this repository leaks out. For instance, Mandi stated, “I think it's a very real concern that there could be leaks in the future” (Mandi). Another interviewee offered a similar statement: “When you put data in, there will always be a risk of leaks” (Vera). Moreover, interviewees pointed out that leaks can be accidental or a result of hacking. Sanne gives an example of an accidental leak: “There are still people behind, and people make mistakes” (Sanne). Another interviewee points out the hacking angle: “You can come in and hack it and see” (Ben). Furthermore, some of the salience of the possibility of data leaks stems from interviewees being aware of recent leaks. For instance, “You regularly experience leaks from municipalities and hospitals, etc.” (Ben), or “there is a leak every now and then, and we hear in the media” (Mina).
Another recurring theme in the interviews is data abuse, which occurs when someone with access to the system uses it in a privacy-violating way. Consider, for instance, these three utterances, which voice very similar sentiments: “There's a lot of data that can be misused” (Kiera), “of course, everything can be abused and be twisted and turned and exploited” (Beate), “I think there will probably be someone trying to abuse it” (Kenny). Moreover, several interviewees mentioned a recent incident of medical record data abuse in Denmark, in which the digital records of an abducted 13-year-old were accessed by unauthorized personnel (Anton, Beate, Sanne, Kenny, Barry, Paige), which may explain some of the salience of this risk.
When thinking about the detrimental consequences of data leakage and data abuse, the interviewees were able to offer concrete scenarios in which this data may be used against individuals. For instance, “there are some important people, VIPs, politicians, prominent businessmen, if there's like something there, it could well be used by different people to expose vulnerabilities” (Connor). Other non-VIP scenarios revolved around leaked or abused data being used by a potential employer or an insurance company. For instance, “A bank adviser will not know that I have high blood pressure. Strictly speaking, they do not. But it can help decide whether to give me a loan or not to give me a loan” (Ben).
Last, on the issue of privacy, interviewees raised the issues of informing, consent, and opt-out options. First, regarding informing, several interviewees raised the need to notify affected persons that their data is now being used for this system. For instance, one interviewee stated, “It must be knowledge that is available to the citizen so that they know that their information has passed on” (Terje). At the same time, another added that “it will probably also be really good to be able to explain very, very thoroughly and very transparently what it was that you should use this data for” (Anton).
Interestingly, while some interviewees perceived informing as an alternative to consent, others saw it as a prerequisite for users’ consent: “I would like that. I am not saying yes to anything” (Sanne). Indeed, consent seemed to be a salient issue for the interviewees, with almost half of them thinking consent should be obtained before their data is used for this project. For instance, “there must be somewhere in such a new system where you check that you want to share it. If you don't want it to be shared. Then it must be your right not to tick that box” (Maren). Among the interviewees who did not state that the system's use of the data required consent, some supported a no-consent scenario, while others supported an opt-out option. For instance, Kenny stated, “It seems to me that no, you probably don't have to give consent” (Kenny), while Milo stated, “I believe it should be something you actively opt out of” (Milo).
Reservations regarding AI
Another salient issue among interviewees concerns reservations regarding the AI component of the system under development. Most of them displayed distrust of AI of one type or another. For instance, when asked, “What if this tool […] contained AI?”, one interviewee answered, “I don't know if I will trust it enough” (Vera). Another stated, “I don't really know enough about that to be able to say with certainty. But scary, it seems” (Ben).
These common sentiments were reflected in utterances that sought either human oversight of the AI or human involvement in the analysis of the findings. The following excerpt exemplifies the wish for oversight: “It may even end up being an AI that controls some of these things in one form or another. But you have to have some sharp and smart people to monitor this as well” (Johnny). A second interviewee voiced a similar sentiment: “With such a supercomputer there, should also be some operator somewhere, that everything is as it should be” (Morten). While the wish for human oversight assigns humans the role of monitoring that the AI is running correctly and has not been tampered with, some interviewees went a little further and required human involvement in the side effect analysis itself. For instance, one interviewee stated, “I would not like the conclusions that are made on the data that are shared and collected in this way, that they are made by machines themselves” (Anton). Another declared that an analysis lacking human involvement would make them turn against the project: “But if the output comes directly from the machine, then I get off” (Sanne).
While a small part of the AI distrust stems from the fear of artificial general intelligence (AGI) going haywire, most comes from issues such as input errors and risky decision-making. Four interviewees raised the issue of an AGI becoming independent, with utterances such as “so much of these horror scenarios that the artificial intelligence has suddenly just run its own show” (Mandi). However, even for these four interviewees, a fear of an independent AGI is not the only reason for their AI distrust. Going back to Mandi, she also voiced a concern that without human oversight, “our information ended up in all sorts of random places” (Mandi) in a way that creates risky decision-making.
Indeed, risky decision-making was one of the recurring themes among interviewees. For instance, Kiera thinks that an AI system can make false links between variables: “the results or the analyses […] it may well be that you can see a connection between them, but it may just as well be random” (Kiera). A few interviewees also raised the issue of errors in input producing errors in the output of the AI system. For instance, Jan points out that “if you feed it something wrong, sometimes it comes to wrong conclusions” (Jan).
All in all, the reservations towards AI that we encountered among our interviewees align well with public reservations voiced in the existing literature (e.g., Esmaeilzadeh, 2020; Fink et al., 2018; Schepman and Rodway, 2020; Yang et al., 2019).
Reservations regarding this public health project's partners and collaborators
Most interviewees showed significant reservations about the identity of the organizations the project will partner with. By default, the interviewees seemed to believe that Danish universities and government agencies would run this project. This seems to grant them a sense of security in the project since they display great faith in these institutions. These two excerpts best summarize this sentiment: “Fortunately, we have this relationship of trust in Denmark. We can trust the authorities” (Connor), and “I hope, and rather naively, believe that our country is still where things are going fairly honestly” (Vera).
In light of this trust in Danish authorities, some interviewees spontaneously considered other entities being part of this project, while a majority were prompted by the interviewer to consider this idea. When asked, “How would you feel if such a tool with or without artificial intelligence was developed and implemented by organizations outside Denmark?” many interviewees had reservations. For instance, “I would not be comfortable with that” (Tina), or “I do not think I would like that” (Mina).
Moreover, many interviewees had specific reservations about partners from certain countries, with China, Russia, and the United States as conspicuous examples. For instance, Johnny stated he “could be a little suspicious if it was a Chinese company […] which we know at present has an interest in collecting data” (Johnny), while Kiera pointed to “China, North Korea, which monitors about everything and uses it” (Kiera). Another interviewee stated that the issue is a lack of transparency: “I would have a hard time with most not too transparent private actors out in the world […] China is not a very transparent country in principle. I also don't think Russia” (Anton).
A few interviewees objected to partnering up with organizations from regimes with undemocratic procedures that might use medical data against part of their population. For instance, Jan stated that “you can always use that data […] to find and eradicate a minority group, then maybe you can find something […] states that have a dictatorial regime […] undemocratic” (Jan). Connor adds that “it could well be an instrument of mass surveillance by a totalitarian regime” (Connor).
The second major reservation regarding the project's partners and collaborators was about commercial entities. A majority of interviewees showed aversion to including commercial entities in general or certain types of corporations. For instance, “I am probably against, shall we say, some corporate interests which exploit this knowledge for their own purposes, which are not necessarily research-based” (Johnny), and “I think that risk exists as soon as something commercial is mixed into it” (Milo).
Within this distrust of commercial entities, the most salient concern seems to be that corporations would use their access to this project and its data for profiteering rather than for the public good. For instance: “They shouldn't have too much information because it's something they can use for a commercial purpose they can monetize” (Tina), or “If only it were to be able to help the Danes. But they don't, other than they make money from it” (Maren). This distrust towards commercial players is in line with the existing literature about public concerns (Aggarwal et al., 2021; Aitken et al., 2016).
Mitigating ambivalence
All interviewees listed both benefits and downsides that they could see in the developed system (see Supplementary Material 3). Yet they did not dispassionately list the elements they liked and disliked; rather, they seemed to negotiate their reservations in a way that allowed them to approve of the system (as it offers potential healthcare benefits). This indicates a tension between contradictory ideas. What follows are some of the recurring themes in interviewee utterances that seem to ease this ambivalence.
The most popular mitigating explanation for bridging the tension between concern for individuals’ privacy and support for a project with a significant surveillance component baked into it is the diminished personal concern argument. For instance, Kiera states, “I think my medical history is so harmless that it wouldn't—It somehow wouldn't matter if people knew. But I could imagine there were others where it will affect them or have consequences for them” (Kiera). Others put it more simply: “I don't feel like I have anything to hide” (Mia). This sentiment of not seeing privacy risks as personally consequential, while seeing them as potentially impactful to others, was echoed by most interviewees in our cohort.
Another mitigating explanation that several interviewees raised was to point out that people are already, to a large degree, under surveillance, and thus adding another system of surveillance would not really change anything. For instance, Kenny talked about the inability nowadays to control one's data: “The race is over, I think. It's out of our hands” (Kenny). Similarly, Paige suggested that the discussion about restricting access to private healthcare data is superfluous: “So basically, even though we sit and talk about who has rights and whether people should be allowed to enter, I think there are many more people looking at it than you realize” (Paige).
A third mitigating explanation was that the developed system is part of the current technological advancement, and (according to this logic) you cannot or should not hold back technology. For instance, one interviewee who stated that the developed system constitutes a “necessity” was asked to explain and answered: “We must continuously improve our health system and use the technology we have. We change. After all, we cannot continue to live as we have already done, then we will all be running around with one stick in our hands and a stone in the other” (Terje). When thinking about the possibility of data abuse, another interviewee replied, “You can imagine that it could be abused, but you can't stop it […] That is an illusion” (Beate).
Conditioning support
A fourth scheme to mitigate the ambivalence was to raise certain expectations regarding how the future system will operate. Once these expectations were set, interviewees either conditioned their support or just assumed that their expectations would materialize. In this section, we will go over some of their expectations.
First, there was the expectation that no data breach or data abuse would take place. Maren, for example, explicitly conditioned her support on her personal information remaining confidential: “This must not happen. There must be a guarantee of this. Otherwise, we should not create something like this” (Maren). Mia, when talking about the restriction of access to sensitive data, stated, “After all, only professionals can get to it, right?” (Mia). Later in the interview, she added, “I have been promised that things will not be abused in this way, then I have to trust that they will not be” (Mia).
Another, even more salient point on which interviewees conditioned their support of the developed system is one-directional anonymization. That is, they need to know that sensitive data is fully anonymized, with the tacit assumption that the anonymization cannot be reversed. A good example came up when talking about the use of large quantities of private data: “As long as it can't be used to find me as a person. Then I don't have any problems with that” (Milo). Milo was also one of those who stated that no consent was needed in order to use the public's personal healthcare data, yet this too was conditioned on one-directional anonymization: “We are still talking about anonymized data, then the answer [if consent is needed] is no” (Milo).
A similar, but less explicit, conditioning seems to occur with regard to human oversight of the otherwise AI-driven system. For instance: “It is a concern that there should then be supervision with the supercomputer. And you're going to have to put some people to do it” (Mandi), or “there must still be some person inside also who can sort through it” (Vera), or “it must be at least controlled in some way” (Ben).
Discussion
When interviewing Danish individuals about a public health system being developed that combines AI, healthcare surveillance, and big data, we encountered a dual position of support and apprehension (for discussion of this dual position, see the “Attitudes and ambivalence” section). All interviewees saw potential benefits for public health in a system that draws on a multitude of data from several sources and analyzes it to detect vaccine side effects. Some interviewees also suggested it would save time, thus accelerating the detection of adverse side effects, while others suggested it might save money for healthcare authorities.
While voicing general support, interviewees also expressed reservations regarding key aspects of the developed system. This included reservations about the risks to individuals’ privacy, reservations about the AI component of the system and the variety of risks that it introduces, and reservations about the partners that may play a role in such a system, with potentially harmful consequences.
Among our sampled interviewees, there was somewhat more positivity towards the developed system among men than among women. Yet overall, we found the differences according to gender, medicine use, and trust level to be minor, with a dual stance towards the system being prevalent across the groups. This supportive but apprehensive stance aligns closely with existing literature about attitudes towards advanced technological systems in healthcare. For instance, studies examining patients’ and potential patients’ attitudes toward the implementation of AI in medical settings have repeatedly found this dual position (e.g., Esmaeilzadeh, 2020; Fink et al., 2018; Jonmarker et al., 2019; Jutzi et al., 2020; Nelson et al., 2020; Yang et al., 2019; Young et al., 2021). A similar dual position was found regarding big data projects in healthcare (e.g., Aggarwal et al., 2021; Aitken et al., 2016; Cascini et al., 2024; Kalkman et al., 2022).
Moreover, the existing literature in the field highlights this specific dual position of overall support with particular reservations, which raises the question of why individuals do not seem to go the other way, objecting to the project while supporting elements within it. One possible factor at play here is that regret is a strong motivator when it comes to ambivalence (van Harreveld et al., 2015). Therefore, when dealing with new technology, it may be that the prospect of missing out on a technology's benefits produces more anticipated regret than the prospect of supporting it despite its risks.
How to alleviate the ambivalence
The dual stance we found among members of the public reflects an apparent contradiction between the desire to enjoy the benefits of a promising technological system in healthcare and the wish to avoid the risks that this system entails. Following both van den Haak and Wilterdink's (2019) and van Harreveld et al.'s (2015) texts (see the “Attitudes and ambivalence” section), we now turn to examining how this ambivalence is alleviated in our case.
First, and in line with the existing literature (e.g., Aitken et al., 2016; Kalkman et al., 2022), our study shows that some members of the public condition their support of the project on steps taken to secure their privacy and minimize risk. This conditioning varies in its level of explicitness among interviewees, yet it appears very frequently. It serves as a bridge between an interviewee's support and reservations by reducing the grounds for concern. For instance, many interviewees conditioned their support of the AI component on the system having robust human oversight.
Another ambivalence-reducing strategy that appeared in the interviews was to assume that the risk-reduction steps interviewees deemed necessary would in fact be implemented. That is, interviewees repeatedly voiced their reservations, closing the topic with the assumption that the Danish healthcare authorities would take care of them. This strategy can most likely be attributed to the high trust of the Danish people in their healthcare agencies (Nielsen and Lindvall, 2021; Rothstein and Stolle, 2003), which some interviewees mentioned explicitly: high levels of trust make the assumption that the Danish healthcare authorities will reduce risk seem plausible. This strategy appears to correspond to the affirmation strategy identified by social psychologists (van Harreveld et al., 2015), as it adopts a theory about how things work in order to alleviate tension.
Another element of ambivalence alleviation at play was a feeling of powerlessness. Some interviewees expressed what can be described as a reluctant acceptance of risky components within the developed system, due to a feeling of being unable to resist. This may be the inability to resist technological advancement, with technology perceived as an unstoppable force in modern society (e.g., Beate, Terje), or the inability to avoid privacy infringement because surveillance is ubiquitous (e.g., Mina, Ben). Indeed, in the Danish context, both feelings seem to be supported by reality: vast amounts of healthcare data are already collected on citizens, and new technologies are rapidly being implemented within the healthcare system (Hoeyer, 2019; Skovgaard and Hoeyer, 2022). Thus, in the Danish case, this deterministic stance has some anchoring in reality, given the existing trajectory of technological implementation and the minor role ordinary citizens play in determining it. This phenomenon has also been referred to as digital resignation – “an expression of legitimate powerlessness by individuals, who are limited in their potential actions” (Bagger et al., 2023).
The last ambivalence-reducing strategy is the idea of reduced personal risk, a popular stance voiced by a majority of interviewees. Its logic is that, despite their reservations regarding the system under development, many interviewees do not seem to think it will harm them personally. While supporting a tool that you believe may be harmful to others but not to you may be an ethically contentious position, it may also be seen as a way to endorse the project (with its potential benefits) despite its risks.
Ethical insights
Taken together, these strategies give evidence that interviewees’ positions are not the result of a straightforward calculus of pros and cons, but rather rest on different strategies that allow interviewees to be apprehensive about concerning elements within the developed system and support it at the same time. This does not mean the process is devoid of rational analysis. Yet it does mean that there are also rationalizations (e.g., privacy is dead, so we shouldn't bother) and selective viewpoints (e.g., I am personally not at risk from this system, so it's fine). It means that we, as scholars, should recognize the presence of tension and the discomfort that ambivalent attitudes create.
Furthermore, even the conditioning practised by all interviewees is not as clear-cut a calculus as it might appear at first glance. While some interviewees state that X is a condition of their support, to be withdrawn if X does not hold, many other times they say that they expect X without explaining the consequences of it not happening, or assume some risk-reducing process will be at play, which may, in some cases, amount to nothing more than wishful thinking. Indeed, going by the available data about the developed system (Innovation Fund Denmark, 2022; Nielsen et al., 2021) and the technology it intends to implement, it seems safe to say that several of the interviewees’ conditions and expectations are misaligned with how the system is currently being designed and molded (e.g., the nonuse of commercial partners).
Realizing the ambivalent dynamics has several implications. First, it means we should give limited weight to the overall support for the system that interviewees displayed, since some of it was enabled by ambivalence-alleviating techniques that deviate significantly from a rational process. As the literature suggests, perceived benefits reduce the perceived risk of individuals (e.g., Esmaeilzadeh, 2020; Jasanoff, 1993), which is evident in our case study.
Second, since so much of this dynamic is driven by perceived benefits, any erosion of the benefits the developed system stands to deliver may unsettle the ambivalence and cause members of the public to judge the project less favorably. This is important to note, since technologies seldom deliver on their initial promises, which are often overestimated (McDaniel and Pease, 2021; Topol, 2019).
Third, since conditioning, expecting, and assuming are so central to how interviewees alleviate their ambivalence, these risk-reducing conditions should be taken very seriously in assessing the support for, and legitimacy of, the project. For instance, interviewees who conditioned their support of the system on being asked for consent for their data to be included cannot simply be counted as supportive if no such consent is likely to be sought. Similarly, some interviewees consider certain variables to be sensitive or irrelevant to the study (e.g., an individual's weight) and thus expect them not to be used in the system's big data analysis. Another part of the interviewed group expects to be able to opt out of contributing their data to the system. Moreover, a non-trivial part of the interviewees considers data leakage from the system a breach of their confidence, and a no less significant portion highly disapproves of the involvement of commercial parties. As for the AI component, some interviewees expect the actual analysis to be done by humans, while others demand strong human oversight. This point of taking conditions seriously is further underscored by the recurrent observation that ambivalent attitudes (in our case, support for the system) are less stable (Lavine, 2001; Liu and Xu, 2020; Webb, 2017).
Conclusion
Using interviews, we examined the Danish public's attitudes towards a system currently being developed that combines healthcare surveillance, big data, and AI-driven analysis, aiming to detect vaccine side effects. Similar to other texts in the field, we found a dual stance of supporting the project while being reserved about elements within it. Also similar to other texts, we found a prevalent practice of conditioning support for the system on risk-reducing measures being taken.
We have expanded on existing writing about public attitudes towards new healthcare systems by shifting the frame from a dual position, which only acknowledges the contradictory positions that interviewees hold, to ambivalence, which also acknowledges the tension this dual position creates for them. This focus on ambivalence enabled us to explore ambivalence-alleviating strategies and to reframe interviewees’ practice of conditioning their support on risk-reducing measures as one such strategy. It also enabled us to identify several other strategies that interviewees use in order to support a project that promises healthcare benefits but also carries risks. These additional strategies both paint a more complete picture of the attitude-formation process and distance us from ideas of a purely rational weighing of pros and cons. We also pointed out the potential misalignments between interviewee perceptions and how the system under development is being designed, as well as the instability of ambivalent attitudes.
Taken together, these elements suggest that reservations may increase in the future, once more information about the system and its inner workings comes out. Thus, we interpret the supportive but apprehensive ambivalence as one in which the support is much more fragile than the apprehension. Moreover, we view it as a warning sign which, if ignored, could escalate into public opposition, potentially delaying or derailing the project, rather than as a minor issue that is easily addressed.
Strengths and limitations
Although the number of Danish public members interviewed is limited (n = 20), the fact that we reached conclusions regarding the dual position and the conditioning of support similar to those in the existing literature on public opinion towards such healthcare technologies is a positive indication of generalizability. Finding no significant pattern among subgroups within our cohort underscores this point further and suggests a fairly uniform reaction towards such technologies among different groups of individuals.
Supplemental Material
Supplemental material for “Supportive but apprehensive: Ambivalent attitudes towards a Danish public health AI and surveillance-driven project” by Shaul A. Duke, Peter Sandøe, Thomas Andras Matthiessen Skelly and Sune Holm, published in Big Data & Society, is available online.
Acknowledgements
We would like to thank Thomas Bøker Lund for his helpful comments and abundance of assistance in developing this article. We would also like to thank Klaus Lindgaard Høyer for providing background information for this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by Innovation Fund Denmark through a Grand Solutions grant (PHAIR, grant no. 1061-00077B).
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
