Abstract
Objective
Generative artificial intelligence (genAI) technologies have rapidly evolved, offering potential to strengthen core public health functions such as health communication, surveillance, and emergency preparedness. While genAI may enhance public health outcomes by enabling tailored messaging, helping to combat misinformation, and supporting data-driven decision-making, its integration raises significant concerns about equity, privacy, and trust.
Methods
This rapid review explores guiding principles for the trustworthy and responsible use of genAI in public health contexts. Following established rapid review protocols, peer-reviewed and grey literature published since 2014 was identified and analyzed thematically.
Results
Ten articles met the inclusion criteria, focusing on genAI applications across various public health settings. Ten themes were generated that describe guiding principles for the trustworthy and responsible use of genAI in public health. The themes emphasize the importance of human oversight, transparency, equity, accountability, and culturally relevant communication. While genAI can be used to support health behavior change, enhance health communication across literacy levels, and promote community engagement, risks such as algorithmic bias, data misuse, and the amplification of health disinformation must be mitigated.
Conclusion
Organizational policies must reflect ethical considerations and address current regulatory gaps to help mitigate these risks. Workforce training, interdisciplinary collaboration, and policy development are vital to support responsible and trustworthy implementation. This review provides preliminary insights that can help public health organizations begin to consider guiding principles and policies for genAI adoption and use, emphasizing the importance of human-centered and ethically grounded approaches. Findings also identify future research needs, including the evaluation of genAI tools in diverse public health contexts, assessment of real-world impacts, and exploration of governance frameworks. Together, these insights offer an initial foundation for public health organizations to consider potential applications of genAI and develop policies that support its responsible and trustworthy use.
Introduction
Artificial Intelligence (AI) tools have demonstrated significant potential to enhance public health outcomes. Public health refers to organizations and agencies promoting and protecting health at the population level through policy, programs, and services. 1 Across public health settings, AI shows promise in the fields of health surveillance, health literacy and communication, predictive analysis, and evidence synthesis.2,3 However, ensuring AI models use accurate, representative, and appropriate information is critical, as social biases related to race, gender, sexual orientation, class, and other factors are often embedded in AI outputs.4,5
Generative AI (genAI), a rapidly advancing field and a type of AI, is gaining attention for its ability to generate text, images, audio, videos, and computer code in response to user prompts. 6 Although the use of genAI in public health is less common than in other sectors, including healthcare, there are opportunities for innovation and improved public health initiatives. 7 Like AI more broadly, genAI is well-positioned to support the core functions of public health, including health surveillance, health promotion, communication, and emergency preparedness and response. GenAI tools like chatbots and predictive analytics can enhance disease surveillance, outbreak detection, and resource allocation.7,8 Chatbots and virtual assistants can provide health information and support healthy behaviors. 9 GenAI can also help tailor communication, including text and images, to diverse audience language and health literacy needs.7,9 During an emergency, genAI can be used successfully to create evidence-based and persuasive health messages with human supervision. 10
However, caution must be taken because of potential harm, especially in health contexts, and because many of these applications have not been comprehensively evaluated. Challenges such as bias, privacy concerns, and hallucinations must be addressed when implementing genAI in health-related settings.9,11,12 Personal health information and the processing of large datasets pose risks to data privacy and security. 9 Equity-related risks include unequal access to AI technologies, inequalities in the opportunities to benefit from the use of the technologies, and inequalities that directly arise from biased algorithms and biased and nonrepresentative data integrated into the technologies. 13 Additionally, models can rapidly produce massive amounts of convincing inaccurate health information and disinformation (content that is deliberately misleading), which poses a threat to public health initiatives. 14 To successfully implement genAI in public health, key priorities to overcome these harms include, among others, modernizing data governance and infrastructure, addressing workforce skill gaps, developing strategic partnerships, and ensuring transparency and equity. 2 Addressing workforce gaps in digital competencies requires not only integrating digital literacy and practice-relevant skills into formal public health curricula but also investing in system-wide capacity building, continuing education, and transdisciplinary training opportunities. 15
Trust is a critical factor in the success of these technologies and in public health more broadly, influencing the adoption of health behaviors and policies. 16 The adoption of genAI in public health can change the way that information is produced, accessed, and trusted, which necessitates careful consideration of the risks and benefits. 17 The integration of genAI in public health requires addressing systemic consequences and risks, with a focus on evaluation, mitigation, and future research challenges. 18 As genAI technologies continue to rapidly advance, there is a pressing need for regulation and organizational governance to safeguard public health and mitigate potential risks associated with AI-generated health information.14,18
Privacy legislation obligations in Canada differ based on the type of organization (public, private, or health-related) and the intended purpose of AI use. 6 Currently, there is no specific regulatory framework for AI in Canada; the proposed Artificial Intelligence and Data Act (AIDA), which would have addressed this gap, is now defunct. 19 AIDA sought to ensure that AI systems used in Canada are safe and nondiscriminatory, holding organizations accountable for misuse. 19 A major criticism of AIDA was that harm was defined based on Human Rights considerations, omitting concerns about population- and community-level harms, indirect harms intrinsic to genAI outputs, and the use of algorithms influenced by biased data. 20 Moreover, government uses of AI were excluded from AIDA, with its focus primarily on commercialization rather than addressing equity issues. 20 Current regulations are also insufficient for addressing inaccurate health information and disinformation produced by genAI. 14
Organizations must, therefore, establish their own policies to guide the responsible, ethical, and transparent use of AI, including genAI, to account for potential legislative shortcomings. Researchers emphasize the need for ethical guidelines, transparency, and human oversight to ensure the responsible and trustworthy use of genAI.11,21 Clear guidelines and systematic implementation methods are needed to guide genAI use in public health.7,22 Collaborative efforts and policy initiatives are essential for addressing these challenges and using genAI in public health while prioritizing equity, trust, and improved health outcomes. 21 Key areas for future research include exploring AI ethical considerations, developing evaluation frameworks, and building capacity for responsible AI implementation.11,23
This study aims to generate evidence-based recommendations for the trustworthy and responsible use of genAI in public health, which can be used to inform organizational policies that address legislative shortcomings and provide guidance on the use of genAI technologies across public health settings. While emerging work has begun to propose detailed frameworks for the responsible use of genAI in healthcare,12,24,25 the evidence base remains fragmented and focused on clinical applications in healthcare. This review provides a rapid synthesis of guiding principles across studies relevant to public health, offering a snapshot that complements and informs these framework-oriented efforts. To address this aim, a rapid review exploring what is known about the trustworthy and responsible use of genAI in contexts relevant to public health was conducted. The objectives of this research include:
1. Identify published peer-reviewed and grey literature exploring the trustworthy, ethical, and responsible use of genAI in contexts relevant to public health; and
2. Conduct a thematic analysis of results to understand guiding principles related to the trustworthy and responsible use of genAI in contexts relevant to public health.
Methods
The protocol follows rapid review guidelines from the National Collaborating Centre for Methods and Tools, 26 the World Health Organization, 27 and Cochrane Rapid Review Methods Guidance. 28 A rapid review streamlines the process of a systematic review to produce results in a more time- and resource-effective manner. 29 Rapid reviews are effective for answering broad questions that have relevance to policy. 29
MM developed the review scope and strategy with input from the research team and experts from the Centre for International Governance Innovation. A Specialist Research Librarian at the University of Guelph with research synthesis expertise in public health was also consulted to provide input on the review scope and search strategy. The protocol was not registered because this rapid review was completed on a short timeline.
The rapid review methods and results are reported in accordance with the PRISMA statement for scoping reviews (Supplemental Material 2), which is also relevant to rapid review methods. 30 PRISMA-RR, an extension of PRISMA for rapid reviews, was under development and not available at the time of this research.
Review scope
Inclusion criteria
The primary inclusion criterion was articles exploring the trustworthy and responsible use of genAI in contexts relevant to public health. Contexts relevant to public health include disease prevention, health promotion, human resource efficiencies within a health context, and communication within a health context. All literature, including qualitative, quantitative, and mixed methods research published as peer-reviewed journal articles, dissertations, conference articles, preprints, and other grey literature, was included in the review. Articles published in English in 2014 or later were included. The year 2014 was chosen to mark the creation of Generative Adversarial Networks, a fundamental breakthrough that paved the way for models such as ChatGPT. 31 Articles relevant to a public health context that explore factors related to the responsible and ethical use of genAI were included.
Exclusion criteria
Articles that mention concepts related to trust and genAI in public health without a substantial focus on these concepts were excluded, as were commentaries and editorials. Articles focused only on individual-level healthcare, clinical topics (e.g. disease diagnostics, imaging interpretation), or clinical settings were also excluded.
Search strategy
Two researchers (MM and AK) tested and refined the search strategy in each of the five databases used. The final search was conducted on 2 October 2024 by one researcher (MM) in Medline via OVID, Web of Science, PsycINFO, Compendex, and INSPEC for peer-reviewed articles published between 1 January 2014 and September 2024. This review therefore reflects a snapshot of the literature available at that time.
Controlled vocabulary and keywords related to genAI, trust and responsible use, and public health were applied to the information sources (Table 1).
Search concepts, controlled vocabulary, and keywords used in database searches.
The final search strategy, as implemented in Medline via OVID on 2 October 2024, is shown in Figure 1.

Medline via OVID search implemented 2 October 2024.
Supplementary searches
To supplement the database search, one researcher (MM) hand-searched the following journals:
One researcher (AK) also searched grey literature using Google between 8 October 2024 and 23 October 2024. Search terms related to “artificial intelligence,” “public health,” and “ethics” were combined, and searches continued until the researcher reached five irrelevant pages.
Article screening
All relevant citations were first imported into Mendeley 32 and then to DistillerSR reference management software 33 for deduplication and screening. Articles were screened by two researchers; one researcher (MM) was responsible for screening 90% of the articles, while a second researcher (AK) was responsible for screening 10% for each screening level. A second researcher was used to help reduce bias and increase the reliability of the study selection process.
Title and abstract screening and two rounds of full-text screening were performed using DistillerSR. To obtain the full text of articles, MM searched the University of Guelph Library, used Google and Google Scholar, and contacted authors directly through ResearchGate or via their publicly available email. The first round included articles relevant to all AI and contexts relevant to community- and population-level health, including within healthcare settings. The second round focused only on articles relevant to genAI and public health, excluding AI without a focus on genAI and healthcare settings without substantive community or population relevance. The review was more inclusive until this point to ensure there were enough relevant articles to have a specific focus on genAI and public health.
Kappa for the 10% of articles independently screened at full text was 0.82 for inclusion in the final review, indicating high agreement. 34 The full-text screening form assessed each article's focus, adherence to inclusion criteria, and article type, and recorded the applicable exclusion criterion if an article was not deemed relevant to the final review. The research team developed screening forms collaboratively and pretested them before implementation. Conflicts at all stages were resolved collaboratively through discussion.
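Cohen's kappa adjusts raw percentage agreement between two raters for the agreement expected by chance alone. As an illustrative sketch only (the screening decisions themselves are not published, so the include/exclude data below are hypothetical and do not reproduce the reported 0.82), kappa can be computed from paired decisions as follows:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal proportions.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(
        (counts_a[label] / n) * (counts_b[label] / n)
        for label in set(rater_a) | set(rater_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical include (1) / exclude (0) decisions for ten articles;
# the two raters disagree on one article.
mm = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
ak = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
print(round(cohen_kappa(mm, ak), 2))  # 0.8 with these hypothetical data
```

Because chance agreement is subtracted out, kappa is lower than the raw 90% agreement in this sketch; values above roughly 0.8 are conventionally read as high agreement.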
Data extraction
One researcher (MM) conducted the data extraction with verification by a second researcher (AK). An Excel table was used to extract data related to the following: year, country of origin, study methodology, study aim, public health context, type of genAI, type of generated content, key findings related to the use of genAI in public health, key findings related to the responsible use and ethical implications of genAI, and key findings regarding genAI integration and governance, gaps and limitations, and future research directions (Supplemental Material 1).
Thematic analysis
Data extraction results were thematically analyzed following the methods outlined by Arksey and O’Malley, 35 and updated by Levac et al. 36 One researcher (MM) coded the extracted data line-by-line to develop an initial thematic framework, which was then verified by a second researcher (AK). Codes were inductively generated from the extracted findings, ensuring that meaning was derived directly from the included studies rather than imposed a priori. Researchers triangulated the thematic framework with the original articles, data extraction form, and their expertise to develop the themes. The research team collaboratively refined and finalized the thematic framework, grounding each theme in patterns observed across the body of included literature. In line with the principles of thematic analysis, we did not quantify how many studies contributed to each theme; rather, we identified recurring concepts and illustrated them with examples from relevant studies.
Results
Following deduplication, 3345 articles were screened at title and abstract (Figure 2). Next, 177 full-text articles were screened, with 91 advancing to a second full-text level to assess relevance to genAI and public health only. Ten articles were included in the final review.

PRISMA flow diagram of results.
Characterization of included articles
Most of the articles included were peer-reviewed journal articles (
Summary characteristics of articles (
*Multiple selections possible, totals can exceed 100%.
Thematic analysis
Ten themes were generated, describing guiding principles for the trustworthy and responsible use of genAI in public health. The themes are as follows.
GenAI has the potential to improve public health communication, though barriers exist
GenAI tools, including chatbots, have been used to support behavior change by providing tailored, accurate, clear, and relevant health information.37–39 An evaluation of ChatGPT and vaccine communication found mostly correct, clear, and concise responses that balanced issues of individual autonomy, privacy concerns, fairness, and equity. 39 GenAI chatbots provided factual vaccine safety and effectiveness information that positively impacted attitudes and intention to vaccinate, with no backfire effects. 38 Images, videos, and limiting repetition and overall length of responses improved engagement with generated health information. 38 Anthropomorphic cues like gender and age can tailor how users perceive and engage with chatbots. 38
GenAI supports education in health-related subjects by aiding in problem-solving, idea generation, and answering health-related questions. 40 Importantly, genAI can support two-way communication between public health organizations and communities by providing support, answering questions, and providing tailored information. 41 Caution must be taken in implementing genAI to support public health initiatives given the lack of evidence related to effectiveness and impact. 37 Successful use of genAI for tailored health information may require new theories or frameworks, or the adaptation of existing frameworks, to reflect evolving knowledge for its responsible and trustworthy use. 42
GenAI can either amplify mis- and disinformation through poorly safeguarded systems or deliver accurate and credible health information
AI-generated mis- and disinformation can significantly amplify misleading health information, including factual errors, hallucinated sources, and potentially harmful advice. 42 Safety standards and policies for its use in this regard are poorly defined, allowing malicious actors to use genAI to generate persuasive and tailored health disinformation. 43 Common jailbreaking techniques to override misinformation safeguards, including assuming a persona and fictional storytelling, were successfully used across most popular large language models, with the exception of Claude 2, to generate mis/disinformation for a range of health topics. 43 Attention-grabbing titles, persuasive messages, fabricated academic sources and testimonials, as well as information tailored to diverse communities, were included in generated mis/disinformation. 43 Mis/disinformation risks will continue to grow as genAI advances, especially in audio and video content. 43
The genAI model Claude 2's refusal to generate mis/disinformation, regardless of the prompts or jailbreaking techniques used, demonstrates the ability of genAI developers to pair valuable functionality with robust safeguards. 43 GenAI can develop accurate, empathic, and tailored health information with appropriate safeguards against mis/disinformation.42,43 It may be particularly useful for health issues for which conspiracy theories and misinformation are prevalent because of the ability to create individualized and nuanced information considered credible and trustworthy by users. 42
Trust plays a critical role in adoption and ethical use
Trust is a fundamental outcome of the responsible and trustworthy use of genAI in public health because it shapes public acceptance, influences perceptions of risk and usefulness, and determines whether genAI models are adopted and effective.38,40,42,44 The responsible and ethical use of genAI can enhance public health outcomes while maintaining integrity and trust. 40 Tailoring generated health information to the unique cultural, geographical, linguistic, and historical contexts of diverse communities establishes credibility and builds trust. 42 Generated health information that conveys emotional awareness, such as empathy, and subject matter expertise can demonstrate credibility. 42 The usability, reliability, and perceived gatekeeping role that chatbots play directly influence trust in both the AI system and public health generally. 38 Perceptions of the usefulness and trustworthiness of genAI in health-related settings also directly impact the adoption of genAI. 40 The assessment of risk associated with genAI use in health is determined by trust and ease of use of the technology. 40 Governance that clearly and transparently documents and reviews all AI-supported decisions is key to maintaining trust. 44 Adherence to all legislation when using genAI in public health is also critical to maintaining trust. 44
Human-centeredness and oversight in policies are vital
GenAI can support evidence-informed decision-making in public health but decisions must ultimately rely on human judgment.40,44,45 Human oversight at numerous and predefined points upstream (e.g. data used to train the model, ethical and legal standards for development) and downstream (e.g. monitoring outputs for accuracy, fairness, bias, and evaluation of impact) of the algorithm should be established and governed by guiding principles that reflect the responsible and ethical use of genAI.40,45 Continuous oversight and evaluation of the outputs and impacts on diverse communities is necessary to ensure tools are reliable and trustworthy.40,45 It is also essential to prevent overdependence on genAI, ensure an understanding of risks and limitations, and maintain trust through accountability, transparency, and accuracy.40,45 A robust validation process that assesses the models and outputs and recommends guardrails should be implemented to address risks, including mis/disinformation and inconsistency of outputs. 40 Public health organizations must understand relevant laws and policies related to AI use, including ethical principles that must be upheld through their use of genAI models. 45
Comprehensive training and education emphasizing the functionality, applications, limitations, and risks is needed
Extensive training and change management strategies are necessary to enable human oversight, including an understanding of how to use models and evaluate the outputs, as well as the potential risks and limitations. 44 Public health practitioners need to understand how genAI works and how it can support health communication in practice 42 ; however, a current lack of leadership and expertise are barriers to genAI use in public health. 37 An investment in innovative coursework and on-the-job learning is needed to prepare students and the workforce to navigate the complexities of using genAI models in public health.40,42 High-level machine learning concepts that also emphasize the limitations of AI can help catalyze the use of genAI across public health initiatives. 37 Education and training should foster responsibility and accountability within public health, ensuring genAI is used for good and that appropriately trained practitioners can provide human oversight. 45 Hiring public health practitioners with AI expertise is important, but competing with private industry and healthcare-oriented sectors is a challenge. 37
Equity, accessibility, and accountability must be prioritized to reduce bias and harm
Bias issues that exacerbate inequities and lead to health disparities are a widespread concern with genAI applications in public health.37,39–42,44,45 Algorithmic bias41,44 and issues with selection and information biases in the datasets used to train genAI models37,39,44 can lead to unequal health outcomes and disparities.37,42,44 For example, algorithms can perpetuate biases based on training data and compromise patient confidentiality and trust through the incorporation of identifiable personal and health information in AI systems. 44 Affordability-driven biases may also exacerbate health inequities for communities and organizations that do not have the resources needed to integrate genAI technologies into their systems. 37 Furthermore, inequitable health outcomes and disparities may be intensified when the bias is rooted in historical racism, marginalization, and trauma. 42
Outputs must be culturally respectful, accurate, and unbiased to avoid harm and maintain trust between public health and diverse communities. 42 Representation of diverse social locations in datasets used to train genAI models is essential to avoiding harm.37,44,45 Personalized health information, including tailoring to health literacy levels, can increase accessibility and reduce inequities. 37 Equity and accessibility should guide genAI adoption in public health to minimize health disparities and ensure everyone benefits regardless of demographic or other characteristics.40,45 Mechanisms must be in place to ensure accountability and allow for individuals and communities adversely impacted by genAI to seek reparations. 45 Ultimately, genAI should not be used if mental or physical harm results that could otherwise be avoided through using an alternative approach or practice. 45
Ethical principles, including benevolence, equity, and privacy, and supporting regulations are needed
Commonly discussed ethical principles in a scoping review of genAI and health include benevolence, equity, and privacy, although all principles are important. 46 Privacy and transparency became prominent issues raised during the COVID-19 pandemic for the responsible and ethical use of genAI. 41 Developers and other genAI stakeholders must address these ethical considerations. 41 Regulations and guidelines were the most commonly suggested approaches to address ethical issues related to genAI. 46 Challenges in applying broad ethical principles to specific contexts exist, however, and guidance may offer limited support when principles conflict and tradeoffs must be made. 46 For example, there are ethical issues with chatbots using proactive persuasion to encourage users to vaccinate themselves or their children. 38 Using genAI to make decisions around health also raises concerns about responsibility and consent. 44 Regulations mandating transparent, patient-centered consent processes, responsible data handling, and the protection of patient rights must be strictly upheld. 44 Further, rapid advancements and the complexity of genAI models mean that ethical breaches are still possible while adhering to legal regulations, as laws may lag behind the technological developments. 46
Robust cybersecurity measures including encryption, anonymization, and informed consent are needed to safeguard privacy
Cybersecurity protocols must be implemented to protect personal information, including encryption technologies, systems to monitor questionable activity, and secure data storage. 44 Informed consent must be used so that communities understand privacy and confidentiality with regard to interacting with genAI, including chatbots.38,45 Careful consideration of personal information relevant to health is important to protecting privacy. 38 Anonymization can also be used before data is integrated into genAI to protect privacy. 44 Algorithms must also be secure so that privacy is ensured while still allowing valuable insights to be generated. 44 Adherence to legal frameworks can also protect privacy when using genAI in public health. 45 Security and privacy must be upheld to protect personal information and maintain credibility and trust in public health. 44
Transparency and explainability of genAI models are necessary to encourage understanding of genAI's role, meaningful public engagement, and human oversight
Transparency is necessary to maintain trust and minimize risks associated with genAI use in public health.38,40,44,45 Users must explicitly know when they are interacting with genAI chatbots 38 and any role it plays in their relationship with public health organizations. 45 Information about how data is used and shared, potential risks and benefits, and overall uses of genAI must be transparent. 44 AI technologies need to be understood by stakeholders including developers, communities, and regulators, which is supported by transparent and explainable information about the development and deployment of genAI in public health. 45 Transparent information about the use, capabilities, and limitations also facilitates meaningful public engagement to make informed judgments, participate in decision-making processes, and build trust in public health applications of the technology. 45
Governance should foster collaborative, transparent, and sustainable approaches to integrating genAI technologies in public health
Human capabilities can be strengthened with genAI, creating synergistic partnerships between technology and practitioners to address complex health challenges where AI complements human judgement rather than replacing it. 41 Risk mitigation, including addressing the generation of mis- and disinformation, should be a central priority of public health governance and legislation in collaboration with healthcare. 43 The cornerstones of legislation should be transparency and health-specific auditing and monitoring. 43 In addition to legislation, genAI guidelines are needed to systematically and transparently evaluate genAI impacts, including benefits and harms, to inform regulators and public health decision-makers. 37 These efforts should prioritize promoting genAI technologies that are consistent with the wider promotion of the sustainability of health systems, the environment, and workplaces. 45 The GRADE Evidence to Decision framework could be useful in evaluating genAI impacts as well as cost-effectiveness, equity, and acceptability considerations 37 as they pertain to health system, environmental, and organizational goals. Public health organizations, policy-makers, and other stakeholders should collectively make decisions about frameworks to regulate the use of genAI in public health. 37
Discussion
Ten articles related to the trustworthy and responsible use of genAI in public health were included in this rapid review. Most of the included articles were peer-reviewed original research published in 2024 using qualitative methods. The public health contexts addressed in the included articles were healthcare settings at the community and population levels and public health settings. ChatGPT and large language models in general were evaluated most often, with text outputs being the most common outputs evaluated. The geographic locations of the research or the researcher affiliations varied; each of the included articles was published in a different country. Participant perspectives were included in only three of the articles. The thematic analysis resulted in 10 themes that suggest preliminary guiding principles which may inform the responsible and trustworthy use of genAI in public health as the field develops. Although organizational policy development around rapidly changing technology is difficult, it is necessary to overcome regulatory shortcomings and provide guidance for the cautious adoption of this potentially beneficial technology. Proactive, collaborative, and ethical approaches to integrating genAI in public health must focus on human oversight, transparency, accountability, and sustainability to ensure equitable health benefits while mitigating risks. This rapid review provides a starting point from which public health organizations can consider their uses of genAI and the policies that can guide responsible and trustworthy use in practice, although it is acknowledged that this research area is in its infancy.
Recent work has begun to advance more concrete frameworks for responsible and trustworthy use of genAI in health, complementing the principles synthesized in this review. For example, De Vere Hunt, Jin, and Linos 25 propose a broad framework for evaluating genAI in healthcare that emphasizes alignment with genAI's strengths, integration of evaluation methods, and stakeholder collaboration. Templin et al. 12 introduce a five-step audit framework for evaluating bias in large language models within healthcare settings, accompanied by practical tools for implementation. Stetson et al. 24 provide a real-world governance model applied in oncology, including lifecycle management processes, model information sheets, and structured risk-assessment tools. These operational mechanisms reinforce our findings that organizational guardrails, accountability, and transparent reporting are vital for responsible use. Taken together, these frameworks offer sector-specific and operational detail, while our synthesis provides a cross-cutting foundation relevant to public health. The principles identified in this review, including equity, transparency, human oversight, ethical principles, and collaboration, are both complementary to and mutually reinforcing of the recent frameworks. The frameworks provide sector- or specialty-specific tools for operationalizing the guiding principles, which public health organizations could adopt and adapt to population-level contexts by embedding them into organizational governance, workforce training, and community engagement strategies.
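The audit frameworks described above are not reproduced here, but one of their core moves, comparing model behavior across demographic prompt variants, can be sketched in a few lines. The `generate` stub, the prompt template, and the length-based disparity signal below are illustrative assumptions rather than elements of the cited frameworks; a real audit would query a live model and apply validated fairness metrics.

```python
import statistics

def generate(prompt: str) -> str:
    """Stand-in for a genAI call; a real audit would call a model API here."""
    # Deterministic stub so the sketch runs without network access.
    return f"Here is some tailored health guidance for you. ({len(prompt)} chars)"

def audit_prompt_variants(template: str, groups: list) -> dict:
    """Fill one demographic slot per group, collect outputs, and report
    a simple disparity signal (variation in response word count)."""
    outputs = {g: generate(template.format(group=g)) for g in groups}
    lengths = {g: len(text.split()) for g, text in outputs.items()}
    return {
        "outputs": outputs,
        "length_range": max(lengths.values()) - min(lengths.values()),
        "mean_length": statistics.mean(lengths.values()),
    }

report = audit_prompt_variants(
    "Write vaccination guidance for a {group} patient.",
    ["rural", "urban", "newcomer"],
)
```

A large `length_range` (or any other output disparity across groups) would flag the prompt–model pair for closer human review, which is the spirit of the stepwise audits described above.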
The ability of genAI to enhance data-driven predictions, combat misinformation, and deliver personalized, empathetic health communication at scale should be cautiously explored
Varied uses of genAI across public health highlight its capacity to contribute to public health functions and address current resource and action challenges. GenAI is explored widely in the media and scholarly literature and has the potential to enhance public health action by improving service delivery, strengthening outbreak preparedness and response, accelerating research, and ultimately enhancing health outcomes for communities.7,47 Within the included literature, a number of uses in public health were explored, including mobilizing genAI for epidemiological purposes, scaling and tailoring health communication initiatives, remote collaboration, and idea generation.37–39,41,43,44 One study highlighted the ability of genAI to track virus spread, predict outbreaks, and inform strategies during a public health emergency, such as the COVID-19 pandemic, by analyzing datasets for epidemiologic purposes. 44 Other studies explored the ability of genAI to produce tailored, empathetic health communication at scale, addressing misinformation and promoting awareness of public health issues.39,42 Finally, genAI also offers the potential for personalized health promotion and tailored communication on topics where high levels of controversy and misinformation exist, such as vaccination and alternative medicine. 42 Despite challenges to implementing genAI arising from concerns about cost, security, and ethics, 48 its unique properties and the changing public health ecosystem may accelerate its successful integration compared to previous technologies like electronic health records. 49 The ease of use, wide availability of the models, growing integration of genAI models with other software, and the speed of development of genAI are the unique characteristics that may facilitate successful integration into public health action. 49 While genAI offers substantial opportunities, it is crucial for public health organizations to manage associated risks, implement effective change management strategies, and invest in skill-building to sustain value as these technologies are adopted and scaled. 47
Human-centeredness and oversight woven throughout policies are vital for trust and the responsible use of genAI
Human-centeredness and oversight are crucial for responsible and trustworthy genAI development and governance. Human oversight at all stages of design and implementation is critical to avoiding a culture overdependent on genAI, understanding the limitations of genAI, preventing harms associated with the use of models, and ensuring regulatory principles are upheld.40,44,45 Prioritizing human oversight in AI development and use will position humans at the center of public health decision-making and ensure that genAI systems are being monitored and evaluated in terms of cost-effectiveness, equity, acceptability, feasibility, and benefits for population health.37,44,45
Human-centered genAI is a key goal in a systemic approach to governance that aims to improve human-technology interaction and the performance of models by focusing on ethics, needs, and values, as well as the actual human capabilities of those participating in oversight. 50 These criteria are key for building trust, where humans must understand the technology, its decisions, and its outputs. 50 Critical reflection by public health organizations, teams, and practitioners using genAI must proactively consider the risks and unintended consequences to uphold ethical principles. 51 A collaborative approach to the use of genAI, emphasizing inclusive governance frameworks, transparency, communication, and collaborative decision-making, should guide this key goal within policies. 50 This moves the human-centered nature of policies beyond public health organizations alone to also be community- and society-centered, where sustainability and the determinants of health are also considered. Tools and processes must be developed based on policies to scope and support human oversight, 51 although a recent analysis of government algorithm policies found two significant flaws: 52 first, human practitioners often lack the understanding and ability to effectively oversee genAI models; this leads directly to the second flaw, that policies relying on human oversight provide a false sense of security and shift accountability from the organization to practitioners. 52 Green (2022) argues that policies must include human oversight but also be largely based on organizational oversight, which includes rigorous justification, exploration of evidence-based practices, and approval of uses through a collaborative review process with stakeholders. These issues require more research; however, at its core, human oversight must not become an empty procedural safeguard but must meaningfully protect against harms.
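As one concrete illustration of such tools, a minimal human-in-the-loop gate can hold AI-drafted material until a named reviewer releases it, keeping an auditable record of each decision. The class, field names, and reviewer identifier below are hypothetical sketches, not tooling from the cited policy analyses; they show only the shape such a safeguard might take.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ReviewQueue:
    """Minimal human-in-the-loop gate: AI drafts are held until a named
    reviewer approves or rejects them, preserving an audit trail."""
    pending: list = field(default_factory=list)
    log: list = field(default_factory=list)

    def submit(self, draft: str) -> int:
        """Queue an AI-generated draft; returns its id."""
        self.pending.append(draft)
        return len(self.pending) - 1

    def decide(self, draft_id: int, reviewer: str,
               approve: bool, note: str = "") -> Optional[str]:
        """Record the human decision; only approved text is released."""
        self.log.append({"draft_id": draft_id, "reviewer": reviewer,
                         "approved": approve, "note": note})
        return self.pending[draft_id] if approve else None

queue = ReviewQueue()
i = queue.submit("Draft outbreak advisory generated by a genAI model ...")
released = queue.decide(i, reviewer="duty_epidemiologist",
                        approve=True, note="verified against case counts")
```

The audit log is the point: it attaches a named human and a justification to every release, so oversight is documented rather than assumed, consistent with Green's caution against oversight as an empty procedural safeguard.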
Training and resources to enhance digital fluency and ethical understanding of genAI are needed to upskill the public health workforce
The included articles found that public health practitioners require training to effectively integrate genAI into practice, emphasizing limitations, ethical use, and trust-building.42,44,45 Resource constraints, including financial challenges and the expertise needed to train students and practitioners, limit genAI adoption in public health.37,44 Calls have been made for enhanced digital competencies and ethical understanding of digital technologies in health-related settings.15,53,54 A rapid review of literature aiming to identify digital competencies for public health, training approaches, and partnerships that can enhance technology uptake in public health found new competencies are needed that cross-cut and extend existing core competencies. 53 Training recommendations included adapting education and professional development to integrate interdisciplinary approaches to building digital competencies for public health. 53 An article included in this review echoed this, highlighting that effective training requires institutions to adopt innovative teaching methods, equipping practitioners to use genAI while addressing the complexities of public health. 40 Capacity building, opportunities for the next generation of public health practitioners, and an ethics-driven approach have been further highlighted as key priority areas for digital health education. 55 A structured and incremental approach to leveraging genAI within organizations has been recommended to overcome resource and human resource barriers while fostering innovation and addressing ethical concerns and workforce development needs.56,57
Organizational policies must include guardrails to address intersecting issues such as privacy, bias, duplicated content, and mis/disinformation
Barriers to the trustworthy and responsible use of genAI include the absence of proper safeguards to mitigate risks of algorithmic bias, societal biases, selection bias, and affordability-driven equity issues.37,41,42,44,45 Protecting data privacy through anonymization, encryption, and adherence to regulations is critical for maintaining trust.41,43,44 Safeguards may consist of regulatory frameworks, similar to the Personal Information Protection and Electronic Documents Act, and must be based on ethical principles, especially equity, nonmaleficence, and beneficence.45,46 Effective safeguards, such as robust validation and monitoring, were found to be necessary to prevent the spread of health disinformation.39,40,42 Current regulations and guidelines are fragmented, underscoring the need for clearer governance frameworks tailored to AI in public health. 46 Governance must ensure that genAI also aligns with the wider promotion of sustainable health systems, environments, and workplaces. 45 A dual governance framework has been suggested in which federal regulations for genAI are paired with safety mechanisms crowdsourced from developers, researchers, and other stakeholders who audit genAI models and develop tools to protect individuals and communities from harms. 58
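As a small illustration of the anonymization safeguards discussed above, direct identifiers can be stripped from structured records, and emails or phone numbers masked in free text, before any content reaches an external genAI service. The field names and regular expressions below are illustrative assumptions; real de-identification must follow applicable regulations and validated methods, not this sketch alone.

```python
import re

# Direct-identifier fields to drop before prompting (illustrative list).
DIRECT_IDENTIFIERS = {"name", "health_card_number", "address", "phone", "email"}

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact_record(record: dict) -> dict:
    """Drop direct-identifier fields and mask emails/phone numbers in
    free text before the record is placed into a genAI prompt."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    for k, v in cleaned.items():
        if isinstance(v, str):
            v = EMAIL_RE.sub("[EMAIL]", v)
            v = PHONE_RE.sub("[PHONE]", v)
            cleaned[k] = v
    return cleaned

record = {
    "name": "A. Patient",
    "email": "a.patient@example.com",
    "notes": "Follow-up at 519-555-0100; prefers email a.patient@example.com",
    "symptoms": "cough, fever",
}
safe = redact_record(record)
```

Such pre-processing is only one layer; it complements, rather than replaces, the encryption, access controls, and regulatory compliance emphasized in the included articles.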
Organizational policies must explicitly incorporate ethical principles and address risks to promote autonomy, equity, transparency, and trust
Included articles commonly referenced equity and bias issues, privacy and security concerns, transparency, and trust in AI systems as features that significantly impact the responsible and ethical use of genAI.37–42,44–46 Organizational policies need to proactively address bias through the inclusion of equity, transparency, and trust, so as not to perpetuate disparities in health outcomes.40,42,44,45 This reflects principles identified in a unified framework for AI in society, including beneficence, nonmaleficence, autonomy, justice, and explicability, which aligns with transparency. 59 These principles serve as a framework for regulations, policies, technical standards, and best practices developed for specific sectors. 59 Key risks associated with genAI use, including gaps in transparency and accountability, privacy, algorithmic bias, and unintended consequences, can be addressed by integrating ethical principles into policies and monitoring mechanisms for risk management. 60
Collaboration with other disciplines and communities should be undertaken to promote the understanding and responsible use of genAI for equitable outcomes
Public health partnerships with technology companies are necessary to catalyze genAI innovation and address expertise gaps.41,42 Collaboration with communities is essential to determine community values, priorities, and needs, and to ensure datasets are representative of the communities to which genAI is applied. 44 Communities can additionally be mobilized to exert pressure on governments for regulatory policies and associated safeguards. 43 A recent Deloitte report on equitable and inclusive genAI governance also suggests that community engagement with equity-deserving groups in development and use is necessary to understand the needs and concerns of Black communities and other underrepresented populations, ensure cultural relevance, and build trust in how genAI systems are designed, implemented, and governed. 61 In Canada, the “Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems” explicitly outlines collaboration with researchers, members of the AI ecosystem, government, and other actors to drive inclusive and sustainable economic growth in Canada. 62
Limitations in studies
Within the included articles, reported limitations included the generalizability of study findings,38–40 language bias, 38 small sample sizes, 37 the use of nonprobability sampling, 37 the inability to comprehensively evaluate the genAI models because of poor transparency from developers, 43 and the inability to capture all relevant ethical considerations. 46 One article also cited a limited understanding of COVID-19 because the genAI models included were only trained on data up to 2021, thus potentially producing inaccurate or biased outputs on COVID-19 vaccination. 39 Three articles41,42,44 did not include any discussion of the limitations or biases associated with the research.
A key limitation of this rapid review relates to the temporal scope of the evidence. Because the literature search included articles published until September 2024, the review does not capture research published after that date. Given the rapid pace of developments in genAI, it is likely that additional insights, frameworks, and applications have since emerged. This is an inherent feature of rapid reviews, which are designed to provide a time-sensitive synthesis rather than a continuously updated account. Another limitation relates to the technical depth of the available literature. The studies included in this review focused primarily on high-level guiding principles (e.g. equity, privacy, oversight, transparency) rather than providing detailed discussion of model architectures, explainable AI techniques, or validation methodologies. This absence of technical detail reflects the early stage of research in this area rather than an omission by this review, and highlights the need for future work that translates principles into concrete, operational guidance for public health practice. The findings of this rapid review may also be limited by the use of fewer databases, making it less comprehensive than a scoping review. A single researcher screening 90% of the articles at the title and abstract and full-text stages may also have introduced bias or errors; however, this is an accepted rapid review approach, in which a single experienced researcher (in this case, MM) screens all records and another independent researcher (in this case, AK) verifies a subset of the articles to ensure accuracy and consistency. 27 Rapid reviews allow for a streamlined approach at all stages of the review, allowing evidence to be generated in a shorter time than other review methods. 27 Rapid reviews are particularly valuable in health technology assessment, where new technology is constantly emerging and changing. 63 Despite this streamlining, measures were taken to increase the robustness of the rapid review, including the use of five relevant databases and a second reviewer who independently screened a 10% subset of the articles at each stage. Finally, research related to genAI and public health is limited at this time, so caution should be taken in interpreting the results.
Future research
This research should be repeated as a scoping review in the future, when more research related to genAI and public health is available. This will enable comparison of the guiding principles identified in this study with future findings, allowing the principles to be updated with new considerations. Future research should explore how broad guiding principles, such as those synthesized here, can be operationalized through detailed frameworks (e.g. bias audits, governance models) and adapted for population-level public health contexts. It should also explore and evaluate the use cases for genAI in public health, including benefits and harms, to further our understanding. This includes health communication and genAI's ability to tailor information to diverse communities. The impacts of genAI adoption in public health should also be explored, including how use affects health outcomes, trust, and practice. Finally, research addressing the ethical implications of genAI use in public health should continue, especially for issues related to privacy, security, and the equitable distribution of benefits across communities.
Conclusion
This rapid review included ten articles that highlighted the emerging yet early potential of genAI in public health. GenAI demonstrates potential to enhance public health through applications such as data-driven epidemiologic approaches, tailored health communication, and innovative solutions to challenges in resource-constrained settings. However, the rapid pace of technological advancement necessitates the development of organizational policies that reflect safeguards and ethical principles, and interdisciplinary collaborations to guide responsible adoption. It will be important for public health organizations, policymakers, communities, and other actors to collaboratively explore policies centered on human oversight, practitioner training, accountability, transparency, equity, and sustainability. The research is in its early stages, offering a foundation for public health organizations to consider the use of genAI and the policies guiding its responsible and trustworthy application, while providing a basis for future research to build upon.
Supplemental Material
Supplemental material, sj-pdf-1-dhj-10.1177_20552076251393302 for The double-edged algorithm: A rapid review exploring the trustworthy and responsible use of generative AI in public health by Melissa MacKay, Anjali Kukan and Jennifer E McWhirter in DIGITAL HEALTH
Supplemental Material
Supplemental material, sj-pdf-2-dhj-10.1177_20552076251393302 for The double-edged algorithm: A rapid review exploring the trustworthy and responsible use of generative AI in public health by Melissa MacKay, Anjali Kukan and Jennifer E McWhirter in DIGITAL HEALTH
Acknowledgements
The research team would also like to acknowledge Jacqueline Kreller-Vanderkooy, specialist research librarian, for her assistance in developing the search strategy.
Ethical considerations
Ethics approval was not necessary for this review article of previously published research.
Author contributions
All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Melissa Mackay, with data collection and analysis support performed by Anjali Kukan. Melissa MacKay wrote the first draft of the manuscript, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was produced with support from the Centre for International Governance Innovation and Mitacs Accelerate as part of the Digital Policy Hub fellowship program (Grant No. IT36431).
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Guarantor
M.M.
Supplemental material
Supplemental material for this article is available online.
References