Abstract
Contemporary uses of artificial intelligence (AI) in global health are shaped not only by technical expertise but also by embedded narrative logics such as assumptions about whose experiences count, whose perspectives define the problem, and whose futures are imagined in algorithmic attempts to provide solutions. This paper examines the narrative machine embedded within AI-driven health technologies and argues that the epistemological foundations of such systems are deeply entwined with colonial-era patterns of knowledge extraction, abstraction, and representation. Through a theoretical lens informed by postcolonial and decolonial studies as well as narrative ethics, this paper proposes a decolonial analytic of AI systems as narrative machines: tools that not only process data but also inscribe particular worldviews. I explore how these systems often exclude or distort local health epistemologies, particularly in the Global South, leading to interventions that are technologically sophisticated but culturally disembedded and ethically fraught. In practical terms, the paper examines case studies of AI-enabled diagnostic platforms and epidemiological modeling tools in the Caribbean and Africa. It identifies three domains where decolonial intervention is possible: (1) participatory design methodologies that center narrative sovereignty; (2) ethical audit frameworks that account for epistemic inclusion; and (3) policy structures that resist data extractivism in favor of relational, consent-based data practices. This paper contends that addressing global health inequities through AI demands not just better data or fairer algorithms, but a transformation of the narrative structure through which technological futures are conceived and operationalized.
Introduction: The Ethics of Artificial Intelligence in Health
As artificial intelligence (AI) becomes increasingly embedded in global health infrastructures, from diagnostic tools (Sussman et al., 2022) and disease surveillance platforms (Anjaria et al., 2023) to mobile health applications (Pifeleti et al., 2025) and triage chatbots (Schmude et al., 2022), the ethical stakes of who is seen, how health is defined, and what futures are made possible become ever more critical. While much attention has been given to algorithmic bias and technical transparency (Kerasidou, 2021; Panch et al., 2019), far less scrutiny has been directed toward the deeper narrative architectures that undergird these systems: whose worldviews are encoded, whose bodies are modeled, and whose experiences are rendered legible, or illegible, by the logic of machine learning.
This logic travels through infrastructures shaped by historical and ongoing patterns of epistemic extraction (Sekalala & Chatikobo, 2024), particularly in the Global South, thereby replicating dynamics of data colonialism (Couldry & Mejias, 2019). As with earlier moments of colonial medicine, AI development today frequently privileges external data sources, biomedical framings, and technocratic visions of health that sideline locally grounded ways of knowing, community storytelling traditions, and culturally situated understandings of illness and care (Kakar, 2021). The result is not just biased data but a narrative narrowing: a re-inscription of health futures authored from elsewhere.
AI systems should be understood not merely as computational instruments but as narrative machines (Metz, 1990): devices that construct, circulate, and institutionalize particular stories about health, risk, and intervention. When these narrative machines are rooted in Euro-American datasets and commercial priorities and then exported wholesale to low- and middle-income countries, they risk reproducing data colonialism that flattens local complexity.
This paper advances a framework for decolonial AI ethics suited to the Caribbean that centers relationality, pluralism, and community-grounded knowledge. Decolonial AI is not merely a call to diversify datasets or improve accuracy. It is a methodological and ontological commitment to redesigning AI systems so that they are accountable to the histories, geographies, and sociocultural realities of the communities they claim to serve (Birhane, 2020; Mohamed et al., 2020). Decolonial AI resists the extractive logic of data empires and seeks to amplify localized forms of sense-making. Such an approach recognizes the potential of AI to support health equity, but only if the narrative machine is reprogrammed through participatory, co-design paradigms with ethical safeguards.
This research note uses a comparative case study approach to examine how AI health interventions reflect and reproduce different narrative and epistemic models. Jamaica and Ethiopia were selected for their contrasting approaches: Jamaica's AI-powered breast cancer screening exemplifies a more transactional, externally driven model relying on non-local data and minimal community input. In contrast, Ethiopia's Amharic-language chatbot, co-developed with local stakeholders, reflects a more relational and culturally grounded design. The Ethiopian case offers a pathway for Caribbean health systems to resist the extractive nature of Global North AI systems. To this end, I emphasize design approaches that involve communities in shaping the technology, review processes that ensure diverse voices and ways of understanding health are included, and policy frameworks that prevent outside groups from taking data without meaningful local control.
Theoretical Groundings: A Narrative Medicine Approach to AI Health
AI is often described in technical terms such as models, datasets, and algorithms (Bingley et al., 2023; d'Elia et al., 2022; Kerasidou, 2021). These terms frame AI as a matter of prediction, diagnosis, or optimization. Yet AI is not simply a tool for analyzing data; it is also a powerful narrative infrastructure, because it shapes how stories are told about the world and whose voices, experiences, and knowledge forms are legitimized or erased in the process. In matters of health and illness, the narrative function of AI systems shapes understandings of risk and responsibility and determines whose lives are rendered intelligible through the computational gaze.
Arthur Frank's (1995) distinction between “narrative wreckage” and “narrative repair” is relevant here: AI design that is inattentive to lived experience risks contributing to the marginalization of underrepresented people by turning complex sociocultural and historical experiences into fixed checkboxes, symptom clusters, or health results that ignore people's real-life situations. Alongside ongoing misrepresentation (Kleinberg et al., 2022), the increasing use of clinical data by private companies for profit (Terry, 2019) further reduces diverse, lived health experiences into one-size-fits-all narratives. A preference for consolidating AI systems that prioritize efficiency and scalability over equity (Roberts & Salib, 2024) leads to interventions that fail to meet the needs of marginalized communities.
Data Colonialism and Extractive Infrastructures
Narrative bias is reinforced through what Couldry and Mejias (2019) term data colonialism: the appropriation of human life for the purpose of extracting data and transforming it into value within asymmetrical global systems. Unlike the extraction of raw materials under classical colonialism, data colonialism operates through the capture of everyday activities (health interactions, digital communications, biometric indicators) and their integration into global systems of surveillance, computation, and profit-making. This dynamic mirrors older colonial patterns. Udupa and Dattatreyan (2023) note how digital infrastructures carry colonial residues not only in their extractive logic but in the very ways they organize knowledge, that is, by prioritizing quantifiability, technical legibility, and universality over cultural specificity. The political and colonial contexts in which AI is developed and deployed require an ethical response that includes not only safeguards against bias or harm but a rethinking of AI as part of a wider ecology of meaning-making: who defines the problem? Who interprets the data? Who decides what counts as “health”?
Artificial Intelligence as a Narrative Machine
Whether through the outputs of a diagnostic model, the red flags raised by a disease surveillance system, or the structured responses of a health chatbot, AI tools generate stories about individuals and populations. These stories are embedded in metrics, protocols, and recommendation engines, but they are stories nonetheless, and they are far from neutral. The term narrative machine captures the dual function of AI: as a technological system and as a cultural producer. These narratives are often shaped by developmentalist framings of the Global South as technologically lacking, administratively inefficient, logically incomplete, and therefore in need of “smart” interventions.
Artificial Intelligence in Caribbean Health Contexts: Navigating Narrative and Epistemic Tensions
Techno-narrative concerns such as narrative bias, data colonialism, and the ethical erasure of lived experience are apparent in the implementation of AI systems in real-world healthcare settings. In May 2025, Jamaica's Ministry of Health and Wellness launched a pilot project using a portable, AI-enhanced breast screening tool at the University Hospital of the West Indies. The device, a radiation-free pre-screener, was introduced as part of a broader effort to increase access to early detection technologies in low-resource regions and countries. Although its potential is notable, the device's deployment reveals tensions between technical innovation and the risks of exclusion and bias.
Notwithstanding the sociocultural barriers involving fear, pain, privacy, and distrust, which technology alone cannot overcome, findings related to inaccurate diagnostic assessments and low generalizability (Freeman et al., 2021) in AI image analysis of breast cancer screening programs are compounded by geographic bias in dataset development which “significantly limits the equitable application of AI in BC [breast cancer] mammography-based evaluations, given the vast difference in healthcare infrastructure, imaging access, and population characteristics between lower- and high-income countries” (Miyawaki et al., 2025). This raises the risk of false positives or negatives that undermine patients’ trust in healthcare institutions, particularly in marginalized communities (Chinta et al., 2025). A deeper problem, however, lies not only in mismatched data, but in the dataset's lack of grounding in local illness narratives, community knowledge, and lived experiences of risk (Doede et al., 2018). In this way, AI screening tools operate as un/intentional gatekeepers that privilege standardized biomedical inputs while muting the lived, emotional, and social dimensions of women's experiences of breast cancer in Jamaica.
Toward a Decolonial Ethics of Health Artificial Intelligence
As the case study involving the BC pre-screening tool shows, health AI systems must be constructed with communities, not merely delivered to them. Without co-creation, even the most technically sophisticated AI tools, including those developed for use in Global South territories, risk imposing dominant narratives and marginalizing lived experiences. The following section proposes a decolonial shift in how such technologies are conceived, audited, and governed. Drawing on the success of Ethiopia's Amharic-language chatbot, I highlight three interlocking dimensions: participatory design, inclusion in ethical audits, and transformative data governance, which together scaffold a more geo-culturally attuned approach to health equity and inclusivity.
Participatory Design and Narrative Sovereignty
Participatory design models that embed historically marginalized voices at each stage, whether through collaborative data collection, narrative framing, or interface development, have been shown to increase fairness, transparency, and trust (Chen et al., 2023). An example of such a model is the AI-powered chatbot developed for Nigist Eleni Comprehensive Hospital, the only public hospital in Hossaena city, Ethiopia, to serve as a personal virtual doctor in Amharic, the most widely spoken language in Ethiopia. The chatbot was trained on a dataset of 12,127 question-and-answer pairs gathered in Amharic from the hospital. The co-designed, locally based chatbot is intended to provide medical services for remotely located patients unable to travel to the hospital. Its 95.7% accuracy rate demonstrates how involving local stakeholders in the co-creation of assistive technologies can produce linguistically and culturally meaningful, relatable, and accessible interventions.
The operationalization and success of the Amharic text-based chatbot in Ethiopia shows how participatory design, when rooted in local language, context, and community needs, can transform AI from an extractive tool into a tool for inclusion and equitable care. The chatbot challenges dominant AI storytelling structures by embedding local voices and vernacular knowledge into its very architecture. Rather than relying on imported datasets or decontextualized biomedical jargon, the chatbot was trained on vernacular health questions and expressions collected from real interactions at Nigist Eleni Comprehensive Hospital. These everyday narratives about pain and illness formed the core training data by encoding lived experience directly into the machine's logic. This marks a significant departure from dominant models where patient voices are often translated or erased entirely.
Health, as represented in this system, is not reduced to standardized metrics or Western disease categories. Instead, it emerges through locally and culturally specific concerns. Users might describe symptoms through idiomatic expressions, moral language, or social context, all of which the system can interpret due to its culturally resonant training. This re-embedding of AI within a culturally familiar linguistic frame directly challenges the homogenizing tendencies of Global North-designed health technologies and shifts the focus from extractive data collection toward co-construction, where the design and functionality of the technology are shaped in dialogue with local communities. This form of local governance, in which communities are not merely passive users but active co-authors of the epistemic and ethical foundations of the technology, offers a counter-model to dominant data empires. Building on these insights and grounded in the principle of non-traditional Community-Based Participatory Research (Villar & Johnson, 2021), the following framework proposes a threefold approach to embedding decolonial ethics into health AI systems, one that centers narrative sovereignty, epistemic justice, and relational data governance as key design and implementation principles.
Narrative co-creation workshops conducted at the earliest stages of AI development would allow community members to define relevant symptoms, risk factors, and case examples that reflect their lived realities. Datasets and algorithmic structures should integrate local knowledge: elements such as oral histories, metaphors, and illness narratives that carry cultural knowledge into the logic of the system. Capacity-building efforts are essential to equipping communities with foundational AI literacy so that they can meaningfully participate in both the design and critique of emerging technologies. Tools such as interactive prototyping, ethical scenario-building, and collaborative decision platforms like Miro can serve as inclusive and iterative spaces for such engagement.
These practices help shift the design of health AI from extractive models toward collaborative and contextually resonant systems, ensuring that the narratives and knowledges of the communities most affected are not only heard, but structurally embedded in the technologies that shape their health futures.
Ethical Audits through Inclusion
In healthcare, particularly in postcolonial settings, what is often missing from the technocratic development of AI systems is a deeper consideration of inclusion to ensure that local ways of knowing, reasoning, and interpreting health and illness are embedded within the design and governance of AI tools. Drawing from Miranda Fricker's (2007) concept of epistemic injustice, two forms of exclusion become particularly relevant in health AI: testimonial injustice, where the knowledge of marginalized groups is discounted or discredited, and hermeneutical injustice, where there is a structural gap in how certain experiences, in this case of illness, suffering, or care, can be expressed or understood within dominant institutional frameworks. These forms of injustice are not simply philosophical concerns; they also materialize in algorithmic outputs that fail to capture nuanced, community-specific health narratives and decision-making logics.
To counter these risks, ethical audits of health AI must be reconceptualized to include epistemic inclusion as a core criterion. This entails not only diversifying training datasets, but also involving community members, patients, and local knowledge-holders in the evaluation of AI systems.
Policy and Governance Against Data Extractivism
International and Caribbean-regional frameworks, such as UNESCO's (2021), the WHO's (2021), and PAHO's (2024), have advocated for more relational forms of data governance. These approaches recognize that consent should not be a one-time transaction but an ongoing, dialogical process, and that data must remain under the stewardship of those from whom it originates, respecting cultural values, social trust, and communal modes of knowledge.
Reviews in bioethics and global governance warn that unregulated and opaque healthcare algorithmic systems, such as the “black box” nature of AI development (Morley & Floridi, 2025), can replicate exploitative “northern domination” of the Global South (Monasterio Astobiza et al., 2022) and may result in health standards that are misaligned with local epidemiologies, infrastructures, or understandings of care. An ethical, context-sensitive governance of health AI in postcolonial settings such as the Caribbean requires that relational consent models (Peltier et al., 2024) replace transactional approaches to ensure individuals and communities retain ongoing, revisable control over how their data is used. These models should be grounded in cultural practices of negotiation, storytelling, and collective decision-making, rather than merely satisfying legal formalities (De Matas et al., 2025).
Caribbean-wide policy frameworks should establish robust standards for data stewardship, prohibit the export of identifiable health data without explicit safeguards, and prioritize the development of locally curated datasets. Policies tailored to Caribbean contexts would enable the region to build AI systems that reflect its specific public health needs, linguistic diversity, and cultural values.
AI ethics cannot be achieved without investments in local infrastructure, technical capacity, and data literacy (Kiosia et al., 2024). Effective governance must be matched by tangible resources: funding for regional cloud computing infrastructure, education in data science and ethics, and support for community-based institutions to lead on AI innovation.
Conclusion: Reimagining Health Futures
The ethical integration of AI into global health is not solely a technical endeavor: it is a narrative project. This paper has argued that to meaningfully engage with the ethical challenges posed by AI, we must look beyond datasets, metrics, and algorithmic accuracy and interrogate and transform the narrative architectures that underpin how AI systems are conceptualized, designed, and deployed. These architectures, often encoded with Euro-American assumptions about health, illness, efficiency, and objectivity, can reproduce forms of epistemic and structural violence when uncritically exported into Global South contexts. To counter this, the future of health AI must be rooted in a recognition that knowledge emerges from multiple sites, languages, and lived experiences. Embedding these perspectives requires a shift from extractive datafication to co-design, where communities are not subjects of innovation but co-authors of their technological futures.
This requires a methodological reorientation that elevates testimony, community memory, and cultural metaphor as legitimate forms of input in the creation of digital tools. It also demands new modes of governance and accountability that honor relationality, trust, and reciprocal learning between developers, policymakers, and local stakeholders. Stories are not merely representational but also infrastructural. They shape how we model risk, allocate care, and define whose lives are legible to the system. To build AI that heals rather than harms, we must cultivate infrastructures of care that are as attentive to narrative as they are to code.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
