Abstract
Objective
The adoption of generative artificial intelligence (genAI) is rapidly transforming all sectors. In public health, emerging genAI technologies have shown promise in facilitating tailored communication, public health surveillance, and administration and decision making. However, the adoption of genAI gives rise to several concerns, including the perpetuation of systemic inequities and the erosion of public trust. The lack of clear guidelines and directives presents challenges for the responsible integration of genAI. Therefore, this study aims to: (1) explore public health professionals' understanding and application of genAI; (2) identify barriers and enablers to responsible genAI use; and (3) explore the perceived governance needs and opportunities for community engagement to guide responsible and trustworthy implementation of genAI.
Methods
A semistructured interview guide was iteratively developed, and Canadian public health professionals with experience in genAI technology were recruited via purposive and snowball sampling. All interviews were conducted between 5 June and 21 July 2025.
Results
Data from 13 interviews were analyzed using reflexive thematic analysis, from which 7 unique themes emerged: (1) uses of genAI and shift in priorities, (2) emerging skills demands, (3) shift in public health values, data use, and equity, (4) governance imperative, (5) organizational-level guidance, (6) importance of fostering trust, and (7) inclusion of community as co-creators.
Conclusion
These themes offer insight into the complexities and challenges of responsible genAI adoption, underscoring the need for governance and organizational frameworks that support equitable, accountable, and transparent implementation within public health. They provide guidance to facilitate the responsible integration of genAI in public health and highlight national and organizational governance considerations.
Introduction
Digital transformation is widespread, with generative artificial intelligence (genAI) technologies being deeply integrated into many aspects of our personal and professional lives, including education, 1 psychology, 2 and research. 3 With continuous developments and updates, genAI tools, such as Copilot and ChatGPT, employ advanced machine learning algorithms to analyze large amounts of raw data and generate novel outputs in the form of text, images, videos, and other forms of media in response to user prompts. 4 For instance, OpenAI's ChatGPT is a genAI technology that is able to generate contextually relevant outputs in response to user prompts. 5 Their ease of use and accessibility have led to their widespread adoption into daily life, with ChatGPT in particular amassing over 4.61 billion visits per month. 6
In healthcare, genAI tools have been applied for disease detection, treatment, and patient care, successfully informing clinical services.7,8 For instance, genAI shows potential in supporting radiological decision making for breast cancer, 9 and genAI tools, such as Socrates 2.0, have been developed and adopted to facilitate cognitive behavioral therapy. 10 Similarly, in the workforce, genAI has been explored for its application in task automation and improving efficiency and productivity. 11 In the broader health system, public health plays a critically important role in promoting and protecting health at the population level through policy, programs, and services. 12 GenAI shows potential for helping public health achieve better health outcomes, including facilitating the analysis of health behaviors using social media, tailoring communication to vulnerable populations, facilitating misinformation control, epidemic modelling, summarizing surveillance information, and enhancing the efficiency and capacity of public health surveillance.13–15
Transparency, equity, and bias challenges and the implications for trust
While their widespread adoption has notable advantages, genAI tools have been met with skepticism due to their inadvertent risks. In public health, notable concerns include the lack of transparency and the perpetuation of existing inequities and biases. For instance, the “black box” nature of genAI technologies, referring to the lack of algorithmic transparency, has raised concerns about ownership, accountability, and re-traceability of generated outputs.16,17 Another pressing concern is algorithmic bias: the perpetuation of existing inequities, such as racism, sexism, and socioeconomic discrimination, often exacerbated by genAI technologies. 18 GenAI systems often reflect systems of oppression, and training these technologies on nonrepresentative datasets can exacerbate harm based on factors such as race and gender,18,19 and disproportionately impact priority populations, such as Indigenous Peoples and older adults. 18 Left unaddressed, these concerns may erode trust in the public health system, leading to decreased uptake of health recommendations and harmful downstream effects for communities and populations. 20 Therefore, bolstering trust as new technologies emerge and are implemented in public health is important for maintaining public confidence and promoting population health. 21
The legislative landscape and the governance imperative
The need for governance and organizational-level policies that reflect principles of responsible AI use, including mechanisms to evaluate potential bias and equity implications, and that encourage innovation in public health, is well recognized. 14 Internationally, the European Union's Artificial Intelligence (AI) Act is the first comprehensive legal framework regulating AI. It categorizes AI applications by risk, regulating applications and systems classified as high-risk and banning those posing unacceptable risk. 22 The Act stipulates obligations for providers and users, drawing on principles of risk tiering, transparency, promoting innovation, and overseeing implementation. 22 These obligations include effective human oversight across the use of genAI models to prevent or minimize risks to health, safety, and fundamental rights. 26 Unlike Canada's defunct Artificial Intelligence and Data Act (AIDA), 23 the EU AI Act considers high-risk systems to include those that could result in harm to individuals or society. 24 AIDA was primarily concerned with individual-level harms rather than harms to groups and communities, and government uses were exempt. 25
In the absence of legislation, the Government of Canada issued a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. 27 The Code reflects guiding principles for the responsible integration of genAI in government, stressing engagement with stakeholders, transparency, prioritizing public needs, mitigating risks, and evaluating outputs, among others. 27 The Government of Canada also has a Guide on the Use of Generative Artificial Intelligence that pertains to Government of Canada institutions only. 28 This fragmented governance landscape demonstrates a clear need for public health organizations to introduce and enact policies that reflect principles of responsible use and fully consider the broader and indirect harms to the communities they serve. This is essential for building and maintaining public trust in an already fragile system, as well as for protecting communities from associated harms. Public opinion toward genAI adoption in healthcare remains uncertain, 29 and the lack of regulatory frameworks and clear guidance addressing transparency, human oversight, equity, and bias may further erode trust and exacerbate health inequities.
National context for community engagement in digital innovation
In Canada, patient and public involvement is embedded in national guidance relevant to digital and AI innovation. The CIHR SPOR Patient Engagement Framework 30 and Framework for Patient Engagement in Health Technology Assessment 31 provide models for involving citizens in the design and assessment of emerging health technologies. Health Canada and the Public Health Agency of Canada similarly emphasize public engagement in policy and program development, including digital health tools. The Tri-Council Policy Statement (TCPS2, Chapter 9) 32 reinforces community engagement in research involving First Nations, Inuit, and Métis Peoples, and the Pan-Canadian Health Data Strategy 33 explicitly links trustworthy technology implementation to public trust, stewardship, and shared governance.
Emerging guidance relevant to health and the public sector increasingly ties trustworthy AI to public and community engagement. The World Health Organization's ethics and governance guidance for AI 34,35 calls for participation and accountability in health AI deployment. However, there are currently no public health-specific frameworks that operationalize community engagement or co-production in the governance of genAI. In the absence of clear legislation, policy, and directives, the governance and integration of genAI in public health, a field fundamentally grounded in transparency, trust, and equity, remains poorly understood. This research addresses these gaps by exploring public health professionals’ experiences, perceptions, and governance needs related to the responsible and trustworthy use of genAI in Canadian public health practice. To our knowledge, it is the first empirical study capturing frontline public health perspectives on genAI practice and governance. The objectives of the study include:
(1) exploring public health professionals’ understanding and application of genAI in their work, including the perceived benefits, risks, and equity implications; (2) identifying organizational and policy-level enablers and barriers to the responsible use of genAI in public health contexts; and (3) exploring the perceived value, governance needs, and opportunities for community engagement in guiding the ethical implementation of genAI in public health.
Methods
This qualitative research 36 employed key informant interviews to investigate how public health professionals in Canada perceive, utilize, and govern generative artificial intelligence (genAI) technologies. A reflexive thematic analysis approach was used to analyze the interview data.37–39 The research is described according to the Consolidated Criteria for Reporting Qualitative Research (Supplemental File 1). 40
The research team is based in Ontario, Canada, and consists of two female researchers affiliated with the Department of Population Medicine at the University of Guelph. MM holds a PhD in public health, and HS holds an MSc in Epidemiology. Both team members bring expertise in public health and health communication, with experience in qualitative research, genAI governance, and applied practice. Their academic and professional backgrounds include work in public health agencies, academic institutions, and collaborative research initiatives focused on communication, workforce development, and community engagement.
These professional and disciplinary positions may have influenced how the researchers approached the data, including the emphasis placed on systems-level thinking, communication and trust, and workforce development. Throughout the analysis, the team engaged in reflexivity around their assumptions, values, and interpretations, and remained attentive to how their positionalities and professional experiences informed the identification and interpretation of themes.
Ethics
Ethics approval was granted by the University of Guelph's Research Ethics Board (REB#1709).
All participants provided written informed consent that included information about the research aim, and interviewers summarized informed consent information prior to each interview. Participants were informed of their right to withdraw from the study at any point until interview analysis began on 1 August 2025.
Participant recruitment
Canadian public health professionals with expertise in data science, information technology, or AI within the field of public health were recruited using purposive and snowball sampling. Participants fulfilled the following eligibility criteria: aged 18 years or older, working in public health in Canada, and proficient in English. Eligible individuals included those working in public health roles relevant to genAI use, such as Directors of Information Systems, Chief Privacy Officers, Data Scientists, and professionals or researchers with experience in public health communication, digital health, or AI governance. The research team was mindful of the potential influence of prior relationships with participants. One researcher (MM) had limited prior familiarity with one participant through professional networks, but this connection was not ongoing and did not extend into the interview context. Another participant was known more directly to MM; to mitigate potential bias or influence, that interview was conducted by a different member of the research team (HS). The researchers had no prior relationship with any of the other participants before the study began.
Key informants were contacted via email, and up to three reminders requesting their participation were sent. When participants expressed interest, the informed consent form was shared, along with a request for their availability.
Participants were offered a $50 gift card as a token of appreciation, with no obligation to accept or complete the full interview.
Interviews
Semistructured interview questions (Supplemental File 2) were developed by MM with feedback from HS. After the first two interviews, questions were prioritized to remain within the approximately 45-minute timeframe requested of participants. The questions remained consistent across all interviews, allowing flexibility for probing and follow-up questions consistent with semistructured interview methods. The interview guide explored the roles and experiences of public health professionals with genAI, their perceptions of the benefits and risks, and their views on trust, equity, implementation needs, and governance. It included questions aiming to understand current organizational policies, the need for community engagement, and the safeguards needed to facilitate the responsible and trustworthy use of genAI in public health practice.
All one-to-one interviews were conducted using Microsoft Teams version 25198.1302.3822.1091 between 5 June and 21 July 2025. All sessions were audio- and video-recorded and transcribed verbatim using the embedded tools in Microsoft Teams. MM and HS independently facilitated the interviews and took field notes. MM conducted the interviews from her office at the University of Guelph, while HS conducted interviews from her home office. Transcripts were verified by MM and HS.
Data analysis
Data analysis followed five phases of reflexive thematic analysis as described by Braun and Clarke37–39: (1) data familiarization; (2) generation of initial codes; (3) theme searching; (4) reviewing themes; and (5) defining and naming themes. MM read and re-read all interview transcripts and field notes before coding. An inductive approach was adopted, with MM coding the transcripts line by line in NVivo 14 and developing codes in relation to the research question and objectives. A coding tree was first developed in NVivo 14 by MM and then refined collaboratively by HS and MM. Codes were then organized into higher-order categories (theme searching) by MM, and subsequently in collaboration with HS in Microsoft Word. Codes capturing similar ideas were iteratively grouped and compared across transcripts to identify candidate themes, which were refined collaboratively to capture both explicit and underlying meanings (Table 1). The initial codes and higher-order categories served as a conceptual map that guided the refinement of themes. Reflexivity was maintained throughout the analytic process as the researchers critically considered how their professional backgrounds in public health communication and digital health might influence coding decisions and interpretation of meaning. The final thematic framework was developed by MM with input from HS and reflects both shared and divergent participant perspectives on the responsible use of genAI in public health. Participants did not provide feedback on the interview transcripts or results due to time constraints.
Overview of coding structure and thematic development.
In line with Braun and Clarke's reflexive approach, data saturation was not used as a criterion to determine sample adequacy or analytic completeness, as meaning was understood to be generated through the interpretive process rather than discovered through redundancy. 41 Instead, adequacy was ensured by the richness and depth of the data and the iterative engagement with the material. 41
Results
Participants
Thirteen public health professionals participated in semistructured interviews, each lasting an average of 55 minutes (range 40–80), with no repeat interviews conducted. No participants refused to participate, stopped an interview before its end, or withdrew after the interview.
Participants represented a diverse mix of organizations across public health practice, research, and policy, including international governance bodies, Canadian universities, regional and national public health agencies, professional associations, and knowledge translation hubs. Participants held roles such as Director, Epidemiologist, Policy Lead, Data Scientist, Program Manager, Researcher, and Advisor across local, provincial, federal, and international contexts. Participants had been in their current roles in public health or related sectors for an average of approximately 6 years (range 1.5 to over 25 years), with several holding doctoral or master's degrees in public health, informatics, health policy, or communication. Several participants also had fellowships or leadership positions related to AI, innovation, or health data governance. This mix of technical, managerial, and policy perspectives provided insights into both implementation-level challenges and broader ethical and governance considerations. Table 2 summarizes the participants’ backgrounds in aggregate form to maintain confidentiality.
Summary of participant characteristics.
LLM: large language model.
Thematic analysis
Across the 13 interviews, 7 themes describing the uses, perceived benefits and risks, trust, equity, and implementation of genAI emerged: (1) adoption of genAI across the workforce simultaneously increases capacity and shifts priorities; (2) growing applications of genAI are shifting the roles and skills needed as a public health professional; (3) genAI is prompting perceived shifts in public health values, data use, and equity; (4) comprehensive governance is necessary for the responsible and sustainable adoption of genAI; (5) organizational-level policies are required to guide the responsible adoption of genAI; (6) building and maintaining trust is dependent on inclusion and transparency; and (7) inclusion of community members as contributors to the design, data, and oversight of genAI is critical for its equitable and responsible integration.
Theme 1: Reconfiguring responsibilities in public health practice
Across roles and jurisdictions, participants described a growing number of practical applications for genAI. These included automating repetitive tasks like transcription and summarization, assisting with coding and troubleshooting, supporting planning and program evaluation, and enhancing communication strategies. Participants emphasized the potential for genAI to support more equitable and timely health interventions. Participant 13 described how genAI could improve communication with communities by ensuring “people get the information they need at the level that they are best able to engage with.” Rather than relying on one-size-fits-all campaigns, participants described how genAI could “give a lot more modalities or ways to connect to people on their terms” (Participant 3). Translation was another anticipated use: “translation is something that we're looking to use with AI. It's not quite there yet for us, but we expect it will increase the accessibility of services like nondigital service, even perhaps in person or over phone” (Participant 5).
Participants reflected on how genAI can alleviate the burden of routine and repetitive tasks, with participant 8 noting that genAI, “helps eliminate some of those repetitive tasks… so we can actually spend more time on the science and analyzing data that actually requires our human brains.” Participants also described genAI as an equalizer that can help professionals in areas where they may not be trained, such as communication. As participant 10 shared, “It helps you with writing. If you're not a strong writer, like, that's great. Like … you don't go into public health because you're a writer, you know what I mean? And it kind of helps, I think, even the playing field.” Even when genAI saved time, its role in replacing foundational and critical thinking raised concerns. “It does save you time in creating summaries here and there, but it should not be a replacement of core understanding… and I think that that's a severe risk we do face” (Participant 13).
Adoption of genAI may reshape the nature of public health work, shifting capacity toward more meaningful but cognitively demanding work. As participant 11 reflected: So it's going to minimize our use of these repetitive tasks and it's going to increase the proportion of tasks that we do that require higher knowledge work… I could spend days thinking about one particular problem and have nothing to show for it. Whereas 10 years ago, I could do some of the work that ChatGPT is doing, and I have something to show for it. So maybe that's a challenge.
This reallocation of effort away from administrative outputs and toward complex, often ambiguous problem-solving was seen as both a benefit and a burden. Participant 6 cautioned that this shift could also remove aspects of work that offer balance: “So it might take away what is pleasurable from some jobs, leading some to faster burnout.”
Theme 2: Transforming workforce roles and capacity
Participants reflected on how genAI is reshaping not only tasks but also the roles, responsibilities, and skill requirements across public health teams. Participants described genAI as demanding new forms of human supervision, technical translation, and ethical oversight. Competencies in the responsible use of genAI were framed as essential to future public health work: “Essentially, you need AI management skills… AI supervision skills in all areas of work” (Participant 5).
Professional development and education were also emphasized as key contributors to retaining and equipping future public health professionals with the competencies needed. Participant 11 pointed to the curriculum already being developed and what is needed to prepare the workforce: “Just changing the curricula across some higher education and public health… there are courses now around it… I think just having that as like a formal part of some course or a standalone course would be really beneficial to train folks in public health… what are some appropriate uses and what are some things to look out for… that people should be careful about. And I think that's number one because that's going to cascade into the next generation of public health professionals being prepared to use AI.”
Several participants suggested the need for dedicated hybrid roles to bridge gaps between technical teams and frontline professionals. Participant 6 described the value of having someone who “isn't necessarily the engineer but is someone who can translate for the engineer that they're working with. [Someone that] bridges the divide and therefore can kind of help with acceptance rates.” Roles that both understand the technical aspects of genAI and the public health context and applications were discussed as important to the ethical and responsible use of models in practice.
Finally, participants discussed workforce concerns about job displacement due to genAI, particularly around tasks that can be automated. Participant 5 noted: “I wouldn't say jobs, but there are certain tasks that are at very high risk of replacement by AI, and document transcription is one of those. Tell them [about the role genAI will play]. Involve them in the process. Train them on the new skills that are required, such as supervising AI systems.”
Theme 3: Potential shifts in public health data use and equity
Participants envisioned genAI as a catalyst for transforming how public health generates, interprets, and acts on information. Participant 3 noted, “I just think that's a whole area that could have a renaissance of how documentation is happening and how much intelligence are we getting.” Participants described a more nimble, personalized, and data-literate public health system with genAI. As Participant 8 explained, “You can combine all the possible data sets and come up with insights that can be used to create interventions and then monitor in real time… what the impacts are across population groups.” This was discussed as a shift toward a more strategic and data-driven approach to public health: “I think all the value will be at the public health, population health level—not in individual care contexts” (Participant 3). Others suggested genAI could enable more prevention and thus healthcare-related savings: “This also has immense potential, then from a financial point of view…investing in prevention rather than having massive acute care” (Participant 1).
Appropriate implementation can be valuable for advancing resource allocation in public health, as participant 12 explained: If it's implemented correctly, I think it's a great equalizer of resources, right? So say in the community where they can't or they don't have the same resources as Toronto, they can't hire as many Health Officers. Implementing an AI tool really equalizes the playing ground there where they can have a very similar level of support at a very high level.
However, participants raised concerns about the reliability and generalizability of the data underpinning genAI systems. Participants discussed biased or incomplete datasets that could reproduce or worsen historical exclusions and harms: “The data is already biased… the AI is then going to be built by [that bias]” (Participant 6). GenAI tools, such as chatbots, are only as equitable as their training data: “You make the assumption that the bots have the opportunity to be inequitable in its communications, and nothing could be further from the truth. So, the bot is only as equitable as the knowledge that it is trained on” (Participant 7).
Furthermore, participants cautioned that tools trained on mainstream knowledge and unrepresentative datasets may not yield nuanced insights, potentially reproducing dominant narratives rather than fostering new understandings. There's the risk that it lures us into thinking if a document is reviewed by chatbots or whatever, then we have the full scope of the issues. Again, not necessarily true. What I find is that they do quite poorly sometimes with understanding nuances within documents, whether it's papers in research or whether it's, you know, policy documents here and there. So, there still needs to be that human and look constantly reviewing. (Participant 13)
Participant 10 offered a similar perspective when discussing evidence synthesis: My biggest concerns with the evidence synthesis is that it's hard to do as a human and people are going to want [a shortcut]… why wouldn't I use this big computer brain instead of my single human brain to do this? But… we're still better at weighing… across all these studies, where were the big effects? What were the biggest studies? Which were the stronger studies? And looking at those nuances, than I think the AI is maybe never going to be able to do that [nuanced assessment]. Nevermind adding values, ethics, equity, and different biases.
Theme 4: System-level governance and strategic direction
Despite the growing use of genAI in practice, participants discussed the lack of coordinated and comprehensive governance to guide the responsible integration of genAI in public health. While there is pressure to adopt AI tools, many felt public health is underprepared in terms of regulations, policy, and collaboration. Participant 8 described this governance gap: “We need rules and regulations and well-known ways to verify, validate, evaluate…All that I find is not there yet. I think we're still in the infancy.”
Governance was discussed as a system that should actively promote equity and enable responsible use. Some participants called for public health and government to take a more proactive role in shaping digital health ecosystems, given the influence of technology companies on public health outcomes: There's this arms race to develop the technologies… with the idea that there's a monetary gain to be had and power to be consolidated… issues around equity and bias are sort of secondary thoughts. The models are trained with the majority… the central focus is on profits… [so] it is very likely that biases will be replicated, perpetuated, and sometimes even amplified. (Participant 13)
Some participants also discussed the importance of shared infrastructure to provide equal access to genAI models across jurisdictions. Participant 12 advocated for shared access models to level the playing field: It's like not necessarily developing their own LLM [large language model] per se, but like at least making the endpoints available to everyone… then that way each of the offices can have access to this like main program and then …everyone has this baseline level of support, right?
A few participants also saw governance as requiring a long-term, systems-level view that incorporates sustainability. Although few participants raised environmental concerns, those who did linked them to the infrastructure and energy demands of large AI models. As participant 3 reflected, “So let's say if you used an AI tool for newborn, you know maternal newborn care let's say, but now you're gonna hurt the planet and that tips the scale. Then it's not a good tool.” Participant 10 reflected on the water use and lack of widespread knowledge: “It uses a ton of water. And I think somebody brought this up, and we were like, wait, what? Why on earth does it use water? And we didn't know, right?”
Theme 5: Organizational policy foundations for responsible genAI adoption
Participants emphasized that clear, coordinated, and enforceable policies are critical to moving governance from aspiration to action. Policy frameworks, developed alongside the systems and structures needed to operationalize them, were described as essential for providing public health units with a clear starting point for implementation. The absence of clear standards, organizational policies, and legislative frameworks has led to hesitation and inconsistency across jurisdictions: “Standards, policies, are just a starting point—I think that would give some public health units more confidence to start approaching or thinking about implementing new tools because they have a starting place” (Participant 4).
Infrastructure funding was also seen as a core enabler of responsible genAI use. Without investment, policies would remain unenforceable and unevenly applied. Participant 7 discussed underinvestment in innovation and technology as a major barrier: “The biggest issue that public health units have is that they’re not funding the necessary infrastructure for innovation… how they fund it is way more important than how they operationalize it.” As participant 4 noted, outdated procurement frameworks can undermine policies and block local innovation: “A lot of our procurement bylaws don’t favor startups or small companies… It's a little bit of a concern… how are we ever gonna keep these innovations in Canada or Ontario when our health services are publicly funded?”
Theme 6: Trust in a complex information ecosystem
Participants described trust as fragile and dependent on transparency and inclusion for the responsible integration of genAI in public health. Some participants saw genAI as a potential opportunity to build trust, while others felt it could be detrimental to public trust. “Public trust is a huge component of the use of artificial intelligence, in my opinion. And if you really want a community to stand by something and have full trust, they need to be engaged, and things need to be transparent” (Participant 8). Similarly, participant 13 discussed transparency as a key aspect of genAI use: “Doing AI in the open is a way of maintaining that trust.”
Participants acknowledged that trust in public health was already precarious before genAI gained traction. “I think there's already a lack of trust with public health to begin with. So, I'm not 100% sure if that trust level will change, or if it's just the reasons that people don't trust will change” (Participant 6). GenAI itself was also described as a serious risk to public health credibility, as participant 13 warned: “We take such a long time to build relationships and to build trust. It can disintegrate in an instant… it's like dynamite. It can do a lot of good. It could do a lot of harm.”
Finally, inclusion was a crucial aspect of maintaining trust, with several participants arguing that trust cannot be built through top-down approaches. Instead, genAI must be implemented in ways that allow communities to shape both the content and the tools: “I think what I really mean by making it more specific is being able to connect whatever AI tool to your community's data… allowing the community members to also modify the prompt… It's using the community's own data… making those toolings available to all of the different communities, I think is going to be really important.” (Participant 12)
Theme 7: Community involvement and the need for co-production
Participants emphasized that realizing genAI's potential to support health, equity, and trust requires repositioning communities as active contributors in the design, data collection, and oversight of genAI, rather than passive recipients of its outcomes. Several participants discussed the importance of engaging communities early and meaningfully. Participant 1 underscored its importance for equity: “To engage in users in its adoption is going to be crucial to reduce the health disparities that we already encountered in the past with other technologies, rather more basic ones such as EHR [electronic health records], for example.” Similarly, participant 13 noted: “Equity becomes really central to the work that we do… specifically also for populations routinely left behind.” For these communities, inclusion must go beyond communication to shared decision making about when, how, and why genAI is deployed.
Co-production requires community involvement in decisions about governance and use of genAI from the outset. Participant 6 explained: “I think it is not a one-size-fits-all kind of mitigation technique. I think the mitigation is just… Being embedded either being from the community yourself that you are trying to build for or being embedded within that community before you even start coming up with your research questions. Otherwise, how do you know that people are needing what you are trying to produce?”
Others emphasized the need for ongoing dialogue and reciprocity. Participant 2 described co-production as a process grounded in mutual understanding: “We need to have a way… of meaningful engagements and discussions with the public in order to understand what their expectations are… and then we say based on what we hear from you, yes, we can act in that way.”
However, some participants expressed caution about the assumptions underpinning community involvement, particularly around capacity and representation. Participant 9 reflected on this tension: “I'm sure the community will endorse sensible general principles, but whether they'll have enough knowledge to translate those principles into particular applications seems really dubious to me. Are you consulting the people who are best positioned to know what's important to their communities, as opposed to a random community member who hasn't thought about this issue before?”
Discussion
This study examined how public health professionals with experience in technology, data science, communication, and other related areas perceive the responsible integration of generative AI in public health practice. Across thirteen interviews, participants described both the opportunities and the challenges that genAI introduces into a system built around wellbeing, equity, and trust. The analysis generated seven interconnected themes that illustrate how genAI is reshaping roles and responsibilities, redefining workforce capacity, and requiring new systems and relationships for data, governance, and community engagement. The findings show that while genAI offers promise for improving efficiency, insight generation, and communication, its adoption raises important questions about accountability, workforce readiness, and the maintenance of public trust.
At the organizational level, training and human oversight are required to address the shifting responsibilities, skills demand, and inequity challenges
As genAI transforms various domains, the opinions of public health professionals toward it also continue to evolve. Participants viewed genAI as a transformative tool that can shift public health toward a more strategic, data-driven landscape and serve as an ‘equalizer’ by bridging resource gaps across organizations and promoting consistent care. Participants broadly described genAI as a tool to automate tasks, such as transcription and summarization, and to increase the productivity and efficiency of their work. A 2024 U.S. survey of genAI adoption at work and home found that genAI increases time and capacity in the workplace, saving the average worker up to 4 hours per week. 43 However, with the automation of these tasks, participants observed a shift in responsibilities toward more complex, problem-solving work. Some welcomed this transition because it reduced administrative burden, while others perceived the shift toward higher output demands as a contributor to burnout. Indeed, a 2024 survey of knowledge workers in the United States, the United Kingdom, and Canada found that, while 96% of top executives expected genAI to increase productivity, 77% of employees reported decreased productivity and increased workload, and 61% indicated that using genAI would increase the likelihood of burnout. 44
To address these challenges, participants emphasized the importance of technical training to prepare young professionals for the workforce, knowledge intermediaries, enhanced AI literacy, and dialogue between genAI developers and public health professionals. Structured upskilling and training are necessary for the integration of genAI, fostering workforce readiness, and supporting its responsible use.11,45 For instance, Malaysia's national “AI at Work 2.0” initiative integrates Google Workspace, including Gemini (a genAI model), into public service operations, supported by training to enhance AI literacy and advance the digitalization of the public sector. 46 Training and education are also necessary to address transparency concerns among public health professionals. The lack of transparency partly stems from the “black box” nature of AI algorithms, which offer poor explainability or interpretability of outputs. 16 Training practitioners on the “black box” nature of genAI, how to identify biases and hallucinations, and how to engage with models responsibly is key. 47 Training needs to cover creating effective prompts, verifying outputs, understanding the limitations and risks of the technology, and, ultimately, exercising human oversight across the use of genAI models. 47 In public health, maintaining the transparency of AI models may increase their adoption by public health professionals. 48
The lack of training to facilitate responsible genAI use may also create, sustain, or exacerbate inequities. 49 GenAI technologies are generally trained on data in European languages with Western cultural contexts, 50 and as one participant emphasized, these models will only perform as well as the data on which they are trained. As participants repeatedly highlighted, genAI relies on training data that reflect systemic inequities, bias, and stigma. When applied in public health, algorithmic bias has the potential to perpetuate and amplify existing inequities and biases, potentially impairing informed decision making and leading to downstream impacts at the community or population level, particularly in marginalized communities. 18 For instance, Indigenous communities have noted that Indigenous knowledge systems, such as Two-Eyed Seeing, are not reflected in AI systems, which can lead to the amplification of biases against Indigenous Peoples. 18 Therefore, human oversight remains necessary to counteract these equity implications. As one participant noted, humans are still better at making informed decisions when presented with nuanced information. In addition, AI models can produce plausible yet false content, and without human oversight, such content will continue to perpetuate systemic inequities. 51
Institutional and interpersonal trust are needed to ensure transparency and foster public trust
Participants discussed the necessity of trust for the responsible integration of genAI in public health. Fostering trust is essential, as individuals’ confidence in and adherence to public health recommendations are closely linked to their trust in public health organizations and professionals.52,53 Trust at both the institutional and interpersonal levels is therefore needed to guide the ethical and responsible integration of genAI. At the institutional level, public health organizations should aim to foster confidence between the organization and the public by ensuring that genAI technologies are used in ways that do not erode trust. This is particularly relevant in the current public health context, where trust is fragile and the general public reports low trust in government organizations as sources of health information. 29 Public health can maintain trust while using genAI by embedding equity and inclusion into data and system design, engaging communities in co-development, and ensuring transparency in how AI is used and evaluated.54,55 In practice, this means conducting regular equity audits, establishing clear ethical and accountability frameworks, communicating openly with the public and professionals about AI's role and safeguards, and continuously monitoring for unintended harms.54–56
At the interpersonal level, public health professionals play an important role in shaping the public's decision making and uptake of recommendations. 57 For instance, during the 2014 Ebola epidemic, misinformation and tension between health workers and the community led to a breakdown of trust. 20 While system-level safeguards, such as equity audits, ethical frameworks, and transparency, are critical, trust is ultimately established through human relationships.54–56 Professionals must maintain strong connections with communities, clearly explain the role and limitations of AI in decision making, and tailor communication in culturally competent ways.54,55 In doing so, they ensure that AI augments rather than undermines the relational trust that underpins effective public health practice.54–56 Synergistically, building trust at both the institutional and interpersonal levels is essential to ensure that genAI implementation is transparent, equitable, and directed toward the betterment of communities and populations.
Responsible genAI in public health depends on clear governance, accountability, and community relevance
The need for governance and strategic guidance to implement genAI responsibly is urgent and well recognized. For example, the World Health Organization34,35 has released guidance on large language models (one type of genAI) as well as on AI in general, providing principles to ensure that AI benefits the public. Public health representation in AI governance is important for minimizing harm and maximizing societal benefits, with a focus on addressing equity and bias challenges and advancing population health. 58 In addition, robust governance frameworks that ensure transparency, accountability, and equitable human oversight are foundational to fostering public trust. 54 Fisher and Rosella 42 emphasize that governance priorities should include a transparent and clear definition of roles and responsibilities, strict human oversight, and accountability. To achieve this, as some participants highlighted, there is a governance opportunity to develop open access tools that ensure transparency and increase public trust. In addition, monetary resources can be invested to bridge the gap between public health experts and AI developers by establishing AI liaison roles. These individuals, with expertise in both public health needs and AI development, can ensure ethical, responsible, and locally relevant implementation.
Promotion of sustainable AI, one of the six ethical principles for the use of AI for health, 35 should also be pursued as a key aspect of governance and decisions around genAI model use. While participants seldom discussed the environmental implications of genAI adoption, some cited the large volume of water needed to cool the systems running these technologies as a cause for concern. A ChatGPT query is estimated to consume five times more electricity than a simple web search, 59 and consumption of water by AI systems is projected to reach 6.6 billion cubic meters by 2027. 60 AI infrastructure, particularly data centers, consumes massive amounts of electricity (often fossil-fuel-based) and freshwater for cooling, and generates significant electronic waste. 61 Further, marginalized populations bear the environmental and social costs of AI mineral extraction. 62 These resource demands raise critical questions for public health, which is increasingly focused on planetary health and the links between climate change, environmental degradation, and population wellbeing.
Sustainability in this context means more than reducing energy use; it involves making informed, values-driven decisions about when, where, and how genAI tools are used. It also involves considering whether the use of genAI meaningfully advances health equity and outcomes relative to its environmental cost. Governance frameworks can operationalize this principle by requiring environmental impact assessments of AI adoption, supporting innovation in “green AI,” and encouraging transparency from technology providers about resource consumption. 63 As one participant noted, sustainability involves weighing benefits against risks and harms—ensuring that the adoption of genAI enhances public health without exacerbating the very environmental and health crises it seeks to mitigate.
While governance provides an overarching framework, individual public health organizations play an important role in operationalizing the oversight of equity, bias, trust, and transparency in genAI. Although genAI has the capacity to augment public health service delivery, it is important to recognize the extent to which this can be done responsibly. Governance structures provide the flexibility to adapt standards to specific mandates, resources, and population needs, while setting expectations for appropriate use, implementing safeguards for data privacy and security, and ensuring ongoing monitoring of genAI technologies. 64 Organizational guidance, communicated in clear and accessible language, can also mandate training and evaluation, build workforce capacity, and reinforce accountability.45,64 Participants emphasized the importance of establishing evaluation and monitoring mechanisms to ensure the responsible and effective use of AI. GenAI algorithms should maintain accountability through human oversight and reflexive evaluation of the tools to ensure that they are implemented responsibly. 65 Participants also identified other considerations for organizational guidelines that promote responsible integration of genAI, including confidence scores, grounding evaluation in public health outcomes, and standardized frameworks for validation and verification.
In addition, participants consistently emphasized the inclusion of experienced community members in the design, data collection, and oversight of genAI as essential for the responsible integration of genAI technologies. Participants emphasized the need for genAI technologies that are adaptable to community and population needs and able to support public health's key role in promoting and protecting health. Co-creation of genAI technologies can lead to the development of technologies tailored to community needs, 66 fostering public trust. 67 In general, communities are more receptive to new technologies when they are involved in governance oversight, such as policy development, 21 which enhances institutional and interpersonal trust. Beyond fostering receptivity, co-creation is crucial for promoting equity and inclusion by ensuring that marginalized voices inform the design and use of technologies, thereby reducing the risk of perpetuating existing health inequities. Co-production grounds genAI in local cultural norms, languages, and priorities, enhancing contextual relevance, demonstrating accountability, and creating pathways for ongoing feedback.68,69
Recommendations
It is clear that genAI needs to be deployed with caution in public health. By capturing the perspectives of public health professionals, our study highlights the challenges surrounding the adoption of genAI in public health and provides actionable insights into the considerations necessary for guiding its responsible integration. The themes identified can inform policy development, workforce training, and governance frameworks to ensure that the adoption of genAI supports equity, transparency, and public trust.
In synthesizing our findings, we propose the following recommendations to facilitate the responsible integration of genAI in public health:
Limitations and future research
This study reflects the perspectives and opinions of thirteen public health professionals. Most participants held positions in federal and provincial public health organizations; therefore, this study may not represent the views of all public health professionals in Canada. In addition, participants held higher-level or leadership positions within public health, and there was a lack of representation from frontline professionals in under-resourced settings and equity-deserving groups. As a result, some themes, such as the need for community co-creation, reflect participants’ perceptions rather than direct input from community members, which limits the breadth of perspectives captured in the findings. Finally, the sampling strategies used may have led to self-selection and response bias: responses among those who participated may differ from those who did not, owing to greater interest in or stronger opinions about public health governance. This may have led to the overrepresentation of certain perspectives, which should be considered when interpreting the findings.
Future research should aim to include larger and more diverse groups of public health professionals and community members to capture a broader range of perspectives. In addition, expanding participant groups to include AI developers, policymakers, and community representatives may provide a more comprehensive understanding of the broader impacts of genAI integration in public health.
Conclusion
This study interviewed 13 public health professionals and explored their perspectives on the emerging role of genAI and the considerations needed to support its responsible and equitable integration in public health. Seven themes were identified, highlighting the uses, opportunities, and risks: genAI can increase workforce capacity, shift priorities toward higher-level knowledge work, and support more tailored, data-driven interventions. However, it also creates new skill demands, risks perpetuating inequities, and raises concerns about trust, transparency, and sustainability. Participants emphasized the imperative for system-level governance, organizational policies, and community co-creation to ensure responsible and equitable integration. While genAI holds transformative potential for strengthening public health practice, its adoption must be guided by governance frameworks, organizational supports, and inclusive processes that safeguard equity, accountability, and trust.
Supplemental Material
sj-docx-1-dhj-10.1177_20552076261416713: Supplemental material for “It's like dynamite—It can do a lot of good. It could do a lot of harm”: A qualitative study on the uses, benefits, and risks of genAI in public health, by Hisba Shereefdeen and Melissa MacKay, in DIGITAL HEALTH.
sj-docx-2-dhj-10.1177_20552076261416713: Supplemental material for the same article.
Footnotes
Ethical approval
Ethics approval was granted by the University of Guelph's Research Ethics Board (REB#1709).
Consent to participate
All participants provided written informed consent that included information about the research aim, and interviewers summarized informed consent information prior to each interview.
Contributorship
MM was involved in conceptualization, resources, supervision, and validation; HS and MM in methodology, formal analysis, investigation, and writing—review & editing; and HS in writing—original draft. All authors have read and approved the final manuscript.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Financial support for this research was provided by the Centre for International Governance Innovation as part of the CIGI Digital Policy Hub Fellowship program.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data availability statement
Data from the interviews are not available, in accordance with the ethics approval. Coded data or preliminary themes may be made available upon request.
Supplemental material
Supplemental material for this article is available online.
Guarantor
MM accepts full responsibility for the execution and content of this research and publication.
References
