Abstract
Artificial intelligence (AI) has the ability to revolutionize global healthcare delivery, offering opportunities to accelerate progress toward Sustainable Development Goal 3: Good Health and Well-being. This potential is especially significant in the Global South, where resource constraints, infrastructure limitations, and diverse sociocultural approaches to health and healing create complex, or “wicked,” problems. From clinical decision support systems and diagnostic tools to predictive analytics and drug discovery platforms, AI applications are manifold. However, their rapid deployment raises ethical concerns, particularly regarding their potential to perpetuate harm. Current governance mechanisms often fall short in addressing structural inequities or empowering affected communities. These frameworks typically focus narrowly on risk identification/mitigation and technical fairness, neglecting crucial historical and sociocultural realities. This article introduces Reparative Algorithmic Impact Assessments (R-AIAs) as a transformative framework for ensuring ethical and equitable AI deployment in healthcare. Grounded in decolonial and intersectional values, R-AIAs emphasize challenging Western epistemologies, promoting data inclusivity and sovereignty, fostering participatory governance, and redressing systemic biases, inequities, and power imbalances. The framework's ability to center diverse knowledge systems, such as Ubuntu philosophy, highlights its catalytic potential. R-AIAs operationalize these principles through six interconnected steps, exemplified by a case study of an AI-powered maternal health system in sub-Saharan Africa. These steps offer actionable strategies for bridging global divides. By embedding reparative practices, R-AIAs elevate impact assessments from compliance exercises to tools for empowerment, challenging colonial legacies and advancing global health equity.
The framework underscores that achieving “AI for All” demands a sustained commitment to justice and redress.
The Dual Nature of AI: Opportunities and Ethical Challenges
Artificial intelligence (AI) has the potential to transform global healthcare delivery, offering unprecedented opportunities to accelerate progress toward the Sustainable Development Goals (SDGs), particularly SDG 3: Good Health and Well-being. Healthcare AI applications are manifold—from clinical decision support systems that guide complex medical decisions, to diagnostic imaging AI that enhances detection and interpretation of radiological studies, to drug discovery platforms that accelerate pharmaceutical development, to predictive analytics that forecast disease progression and patient outcomes. These tools have the potential to address longstanding inequities between the Global South and North, particularly in regions facing resource constraints such as physician shortages and limited specialist access.
Real-world deployments of AI are already demonstrating tangible impacts across diverse healthcare contexts.
Clinical decision support: In Kenya, the primary care provider Penda Health has partnered with OpenAI to integrate an LLM-powered clinician copilot into their electronic health records system (Korom et al., 2025). Analysis of 39,849 clinical encounters revealed that practitioners supported by the system experienced a “16% relative reduction in diagnostic errors and a 13% reduction in treatment errors compared to those without” (Korom et al., 2025).
Diagnostic image recognition: In South Sudan, Médecins Sans Frontières facilities are utilizing AI-powered software trained on 380,000 snake photographs to support species identification and antivenom selection, with preliminary assessments indicating the system's accuracy can exceed that of clinical specialists—particularly valuable where incorrect antivenom administration wastes scarce resources that may cost patients the equivalent of their annual income (Ahmed, 2024).
Fetal monitoring: At Malawi's Area 25 Health Centre, PeriGen's AI-augmented fetal monitoring system—implemented via partnerships with the country's National Health Ministry and Texas Children's Hospital—has achieved an 82% decrease in stillbirths and newborn deaths over 3 years of deployment (Kimeu, 2024). The technology tracks fetal vital signs during delivery and flags concerning patterns for clinical attention, while requiring considerably less infrastructure, equipment, and specialized personnel than conventional monitoring approaches (Chiweza et al., 2022; Kimeu, 2024).
Patient retention: In another maternal and child care application, the Indian nonprofit ARMMAN collaborated with Google DeepMind to develop an AI model that predicts which pregnant women are likely to disengage from its mHealth messaging program (Hu, 2025). In a pilot with 100,000 women, the system improved retention rates by 30% through the strategic allocation of interventions (Hu, 2025; Verma et al., 2024).
Traditional medicine digitization: India has pioneered the world's first comprehensive digitization of traditional medical knowledge through its AI-enhanced Traditional Knowledge Digital Library (Council of Scientific and Industrial Research [CSIR], 2025). Employing natural language processing, it transforms millions of ancient manuscripts across multiple languages (e.g., Sanskrit, Tamil, Persian, Urdu, Arabic) into structured, searchable databases that both safeguard against intellectual property appropriation (biopiracy) and facilitate computational analysis of traditional therapeutic disciplines like Ayurveda and Unani (CSIR, 2025; The Economic Times, 2025, July 22).
While these examples illustrate AI's potential to tackle pressing health challenges (Vinuesa et al., 2020) and democratize access, they also reveal how the integration of AI into healthcare systems worldwide raises unique ethical complexities that go beyond general AI governance challenges. Existing governance structures often overlook the ways AI can entrench biases, perpetuate injustices, and widen global divides (Ashar et al., 2024; Davis et al., 2021; Igarapé Institute, 2024; Racine, 2024). This lack of effective oversight is particularly concerning given that these systems make life-critical decisions that have direct bearing on patient well-being. AI-powered healthcare systems are frequently opaque, with their clinical decision-making processes poorly understood by both healthcare providers and the patients they affect. These challenges are further compounded in Global South healthcare settings, where resource constraints, infrastructure limitations, and unique sociocultural approaches to health and healing create an environment marked by what scholars term “complex” or “wicked” problems.
Current AI governance faces fundamental challenges that extend across sectors, including healthcare, and threaten the goal of “AI for All.” The extreme concentration of power—with 14 of the 15 largest AI companies being U.S.-based (Stash, 2024)—systematically excludes affected communities from development and oversight processes (see Lehdonvirta et al., 2024). While AI safety initiatives exist, they rely primarily on voluntary guidelines and technical audits that neglect diverse cultural knowledge systems and approaches. These mechanisms lack enforcement power, treat community (e.g., patient) input as optional rather than essential, and result in systems optimized for Western paradigms but potentially harmful in other contexts. This includes Western medical paradigms, which are rooted in colonial histories and often privilege individualistic, biochemical approaches to health while marginalizing holistic, community-based, and Indigenous understandings of well-being (Richardson, 2021). Such models can misclassify or pathologize cultural variations in health experiences and healing practices. Furthermore, they can propagate existing inequalities.
The stakes are especially high for the Global Majority—communities spanning Africa, Asia, Latin America, and beyond—who are often excluded from meaningful participation in all stages of the AI lifecycle (Igarapé Institute, 2024; Racine, 2024). The consequences for healthcare delivery are profound:
Clinical decision support: AI systems trained primarily on Western patient populations often omit diverse approaches to treatment and care, leading to potentially harmful recommendations. For instance, an AI system might recommend formula feeding for HIV-positive mothers—following Western clinical guidelines—while failing to consider limited access to formula, clean water, or facilities to sterilize bottles in many contexts. Such decontextualized recommendations can lead to worse health outcomes when local realities and resource constraints are not considered.
Diagnostic systems: Like facial recognition technologies that show systematic bias against certain demographic groups, such as people of color and nonbinary and genderqueer individuals (Birhane, 2022b; Buolamwini & Gebru, 2018; Scheuerman et al., 2019), AI diagnostic systems can exhibit differential accuracy rates that disadvantage underrepresented populations, potentially leading to missed or incorrect diagnoses. For instance, recent research exploring AI-exacerbated bias in dermatological diagnoses found that while deep learning systems improved overall diagnostic accuracy, they widened the accuracy gap between light and dark skin tones among primary care physicians by 5 percentage points (Groh et al., 2024; Harris, 2024).
Data exploitation: Data from Global South communities, including sensitive medical information, is frequently harvested without adequate protections or fair compensation, benefiting companies in the Global North while disempowering source communities (Mohamed et al., 2020). These extractive practices reinforce global power imbalances and undermine data sovereignty.
Cultural misalignment: AI technology is frequently not designed to adapt to diverse sociocultural contexts. For instance, content moderation algorithms can simultaneously allow harmful content while censoring legitimate cultural expressions (Sambasivan et al., 2021). In healthcare settings, this can manifest as inappropriate risk scoring or misclassification of symptoms when AI-powered systems struggle to account for cultural differences in how health conditions are described and experienced.
Knowledge systems: Indigenous and traditional knowledge systems, including approaches to health, disease, and care, are often marginalized or entirely excluded by AI systems operating from Western epistemological positions (Benjamin, 2019; Mohamed et al., 2020). This epistemic violence extends beyond technical misalignment to reflect deeper power structures that privilege Western scientific knowledge while dismissing other ways of knowing and healing. Such exclusion undermines cultural sovereignty and can erode vital local healthcare practices.
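The differential accuracy problem described above can be surfaced with a straightforward disaggregated audit. The following Python sketch (using entirely hypothetical records, not data from the cited studies) computes per-group diagnostic accuracy and the gap between the best- and worst-served groups, the kind of metric an assessment process might track alongside community-defined indicators:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute diagnostic accuracy per subgroup from (group, prediction, truth) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit records: (skin_tone_group, model_prediction, ground_truth)
records = [
    ("light", "melanoma", "melanoma"), ("light", "benign", "benign"),
    ("light", "melanoma", "melanoma"), ("light", "benign", "benign"),
    ("dark", "benign", "melanoma"), ("dark", "benign", "benign"),
    ("dark", "melanoma", "melanoma"), ("dark", "benign", "melanoma"),
]

acc = accuracy_by_group(records)
gap = max(acc.values()) - min(acc.values())
print(acc)                          # → {'light': 1.0, 'dark': 0.5}
print(f"accuracy gap: {gap:.2f}")   # → accuracy gap: 0.50
```

An audit like this is only a starting point: which groups to disaggregate by, and what gap is tolerable, are governance questions that affected communities should answer.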
Moreover, these technologies are being rapidly deployed in critical care settings where algorithmic bias can have immediate, life-threatening consequences. Current power dynamics heavily favor industry interests, with AI expertise and resources concentrated in the Global North, while Global South communities lack both decision-making authority over deployed systems and mechanisms for safeguarding fair benefit-sharing.
This duality—the transformative potential of healthcare AI alongside its ethical risks—calls for a governance approach that balances medical innovation with health justice. How can we ensure that clinical algorithms promote accountability and transparency? What mechanisms exist to identify and rectify bias in medical AI? How do we embed equitable and inclusive practices into the development and deployment of healthcare technologies? Without intervention, rapid AI deployment will further entrench global inequities and algorithmic colonialism, leading to continued discrimination, exploitation of data and resources, erasure of diverse knowledge systems, and limited technological sovereignty for affected communities.
While existing Algorithmic Impact Assessments (AIAs) offer important oversight mechanisms, they often fall short in ameliorating power asymmetries and historical inequities (Racine, 2024). Building on these established frameworks, this paper examines how Reparative AIAs (R-AIAs) can help realize the vision of “AI for All” in healthcare contexts by bridging deep-rooted divides between the Global South and North, as well as between cultural East and West. While R-AIAs were developed as a broader framework for ethical AI governance, their principles are particularly well-suited to healthcare contexts, where issues of bias, fairness, and cultural competency directly impact patient outcomes and well-being.
We demonstrate how R-AIAs can prioritize the development of inclusive, equitable AI-powered medical technology that centers patient agency, dignity, and well-being as defined by the Global Majority. By grounding R-AIAs in diverse knowledge systems (e.g., Ubuntu philosophy), we can protect and elevate different approaches to health and healing while ensuring these tools serve all communities effectively and ethically. This application of R-AIAs to healthcare contexts enhances traditional AIAs by incorporating reparative, decolonial principles, thus offering a feasible but impactful path forward for just healthcare-related AI deployment. By surpassing conventional notions of algorithmic fairness and responding to serious gaps in existing assessment frameworks, R-AIAs can actively redress historical, structural, and systemic inequities in global health, while delivering measurable improvements in patient care and health outcomes.
Beyond Technical Checklists: The Case for R-AIAs
AIAs have emerged as a valuable tool for surfacing shortcomings in how AI systems are designed, evaluated, and governed. These assessments provide structured frameworks for examining the societal, economic, environmental, and cultural impacts of algorithmic systems before their deployment (Ada Lovelace Institute, 2021; Moss et al., 2021; Racine, 2024; Stahl et al., 2023). In doing so, AIAs aim to foster accountability, explainability, transparency, and reflexivity, which in turn can help mitigate risks, maximize benefits, and build public trust in AI-powered technologies (Ada Lovelace Institute, 2021; Ashar et al., 2024; Metcalf et al., 2021; Moss et al., 2021; Racine, 2024; Reisman et al., 2018; Selbst, 2021; Stahl et al., 2023).
But the potential of AIAs extends beyond technical considerations to realizing the vision of “AI for All.” When implemented successfully, these assessments can help ensure AI systems benefit populations across the Global South and North. However, analysis of current AIA implementations reveals various limitations in practice. As noted in a recent systematic review, the field is young, AIA frameworks are still evolving, and scholars continue to debate how these assessments should be structured and implemented (Watkins et al., 2021). Other research shows that current participatory approaches to algorithmic accountability and fairness have systematically excluded marginalized voices (Birhane, 2021, 2022a; Birhane et al., 2022; Davis et al., 2021; Racine, 2024).
Building on the limitations acknowledged in our introduction, we note that traditional AIAs consistently fail to (a) address deeper structural inequities, underlying power asymmetries, and sociohistorical realities that shape how AI systems affect communities or (b) provide mechanisms for redress. Rather, existing frameworks tend to narrowly focus on immediate risk mitigation and technical fairness, prioritizing algorithmic performance over actively advancing equity, justice, benefit-sharing, or community power and capability (Racine, 2024; see also Birhane, 2021, 2022a; Birhane et al., 2022; Davis et al., 2021). They also frequently treat assessment as a technical checkbox rather than a transformative process, leading to superficial, even tokenistic, community engagement, with limited meaningful influence on impact identification and assessment or system design and deployment. Additionally, successful components of present AIAs—such as structured evaluation protocols and participatory mechanisms—often operate in isolation rather than as part of a comprehensive approach.
These limitations underscore the urgent need for not only clearer frameworks, but more robust, cohesive, context-specific, and human-centered strategies. This requires rethinking how we approach algorithmic governance. The concept of algorithmic reparations, first conceptualized by Davis et al. (2021), offers an invaluable praxis to build on. This praxis strives to “name, unmask, and undo allocative and representational harms as they materialize in sociotechnical form,” recognizing that algorithmic systems both reflect and reinforce (a) existing power structures and (b) broader patterns of marginalization, oppression, privilege, and disadvantage (Davis et al., 2021; Johnson, 2021; Kalluri, 2020; Racine, 2024). R-AIAs offer a novel and transformative framework that combines the strengths of conventional accountability mechanisms with reparative practices. R-AIAs encompass four key principles rooted in decolonial, intersectional scholarship:
Challenging Western epistemologies and centering local contexts: Valuing diverse ways of knowing, understanding, and engaging with health, disease, and technology, appreciating that these paradigms emerge from many traditions, histories, and sociocultural contexts. AIAs adapt to reflect the specific realities of Global South healthcare systems and the communities they serve, including deep consideration of lived experiences and colonial legacies. This principle emphasizes bridging divides between different knowledge systems—from Global South to North, East to West—rather than privileging any single approach, empowering communities to develop and deploy AI-powered systems that align with their values and needs.
Promoting data inclusivity and sovereignty: Making certain that data collection, storage, and use respect the rights, autonomy, and diversity of Global Majority populations. This includes identifying data gaps, cultivating representational datasets that embrace and celebrate rather than censor and erase local complexity, and establishing mutually beneficial relationships between AI developers and communities.
Fostering participatory governance and transdisciplinary collaboration: Developing mechanisms that meaningfully and actively involve marginalized and minoritized perspectives throughout the algorithmic lifecycle, that is, system research, design, development, deployment, evaluation, and governance. This means building partnerships across sectors, disciplines, and stakeholders (e.g., between researchers, technical experts, healthcare providers, traditional practitioners, community leaders, and policymakers), integrating diverse expertise with community insights, and ensuring communities have real decision-making power. Moreover, this is vital for combating system opacity.
Redressing systemic biases, inequities, and power imbalances: Critically examining and confronting biases and power dynamics—for instance, between Global North technology providers and Global South healthcare systems and communities. This includes creating concrete mechanisms for sustainable development and long-term community benefit, such as pathways for technological self-determination.
Through these principles, R-AIAs work to ensure AI-powered systems serve and empower all communities. The framework's adaptability is a particular strength that allows for the incorporation of additional guiding philosophies, such as Ubuntu, to further enhance its impact. Ubuntu philosophy, rooted in African traditions, emphasizes interconnectedness, collective well-being, and the idea that an individual's humanity is inextricably linked to the humanity of others (Ajitoni, 2024). Integrating this ethos into an R-AIA process can further operationalize the above principles. For example, an Ubuntu-inspired R-AIA application might focus on co-creating AI systems that not only meet local healthcare needs but also strengthen community bonds and promote reciprocal benefits between developers and the communities they serve. In the next section, we offer specific steps for how reparative AIAs can be actioned in practice.
R-AIAs from Principles to Practice: Operationalizing “AI for All”
The path to “AI for All” requires frameworks that can bridge divides while addressing global challenges. We propose six interconnected steps that operationalize R-AIA principles to transform how AI systems are developed and deployed across different contexts. For each step, we provide concrete examples of aligned and misaligned practices to illustrate how R-AIAs can be effectively applied in practice. To make this section more tangible, we employ a case study of a U.S.-based technology company interested in developing an integrated AI-powered maternal health system for sub-Saharan Africa—a region where AI-powered technologies could help tackle critical healthcare gaps but where historical and contemporary inequities and power dynamics demand careful consideration. The system combines diagnostic tools for predicting pregnancy complications, resource allocation algorithms for medical supplies and personnel, and monitoring systems that track maternal and infant health outcomes. This case study illustrates how R-AIAs can promote equitable technological development.
Step 1: Sociohistorical Context Grounded in Diverse Knowledge Systems
Our first critical step involves in-depth research into the complex sociohistorical landscape in which the AI-powered system will be embedded. This entails examining historical injustices, the lived experiences of marginalized and minoritized communities, and documented harms associated with comparable technologies (see Partnership on AI [PAI], 2024). It also goes beyond traditional impact assessments by fostering a deeper understanding of the systemic dynamics at play and considering how past technological interventions have either narrowed or widened global divides. For maternal health in sub-Saharan Africa, this means analyzing both the colonial legacies that have shaped healthcare systems and the rich traditions of local maternal care that have sustained communities for generations. The aim is to uncover historical barriers to access while recognizing and valuing the resilient, community-based care networks that have evolved in response. This foundation helps identify not just potential impacts or harms, but also opportunities for meaningful collaboration between diverse knowledge systems.
Aligned: A collaborative research team is formed. Local librarians and information specialists with archival expertise analyze regional healthcare histories, with a particular focus on injustices. Traditional birth attendants and community health workers share generations of maternal care knowledge and practices, offering invaluable insights that are integrated into system design. Medical anthropologists and health historians map how different communities understand and approach maternal well-being, ensuring the system aligns with local values and needs. The team examines how past technologies affected traditional health practices and networks, deriving lessons to avoid repeating harms or marginalization.
It is equally important to highlight practices that fail to align with the principles of reparative governance. Misaligned approaches can inadvertently reinforce systemic inequities and undermine the potential benefits of “AI for All.” Examples of such practices include:
Misaligned: Limiting research to English-language medical journals and Western healthcare databases, overlooking essential insights from local and Indigenous sources and perpetuating a narrow, Western-centric view of healthcare knowledge. Relying solely on quantitative health metrics, ignoring qualitative lived experiences from communities and patients that are vital to understanding nuanced healthcare challenges. Treating local healthcare practices as obstacles to overcome rather than knowledge systems to be respected and integrated into AI solutions. Conducting desktop research without engaging local expertise, resulting in AI systems that fail to account for critical sociocultural and contextual nuances.
Step 2: Power Analysis
Comprehensively analyzing power dynamics and asymmetries is crucial for understanding how AI systems can reinforce or challenge existing structures—and how these impact human well-being. Drawing from intersectionality theory, this analysis acknowledges that AI-powered technology does not exist in isolation. Rather, these tools are inseparable from broader, long-standing social, economic, and political contexts (Birhane, 2022a; Davis et al., 2021; Kalluri, 2020; Racine, 2024). Understanding these layered relationships, including how different aspects of identity and marginalization interact and compound, is essential to understanding how individuals and communities experience an AI system.
This analysis should examine multiple dimensions of power: (a) institutional power—who makes decisions and controls resources, (b) knowledge power—whose expertise is valued and legitimized, (c) data power—who controls data collection, storage, and usage, (d) economic power—how value is created and distributed, (e) technological power—who has access to and control over technical infrastructure, (f) narrative power—who shapes the stories and frameworks around AI development, and (g) other local power structures identified through the sociohistorical research and community engagement. This power analysis directly informs subsequent implementation steps: shaping equitable engagement processes, informing data governance structures, guiding monitoring frameworks, and ensuring meaningful redistribution of resources and capabilities.
Aligned: Create detailed power maps and establish formal mechanisms that elevate historically underempowered groups into decision-making positions, making certain that their lived experiences shape system development. For example, implement a Community Advisory Board with veto power and equal representation from various affected communities. Acknowledge how power dynamics influence people's ability to exercise agency and self-determination when interacting with these systems.
Misaligned: Limiting power analysis to formal institutional structures while ignoring informal community power dynamics. Conducting power mapping exercises without meaningful input from affected communities. Treating power analysis as a one-time documentation exercise rather than an ongoing process. Focusing solely on technical power dynamics while ignoring cultural and social power structures. Making recommendations that reinforce existing power hierarchies rather than transforming them. Failing to scrutinize how intersecting forms of marginalization (e.g., gender, race/ethnicity, socioeconomic status, educational attainment, sexual orientation, geographic location) shape access and agency.
Step 3: Participant Engagement and Community-led Impact Coconstruction
The identification and evaluation of potential impacts require fundamentally redistributing power. Rather than treating community input as advisory, this process must give real authority to those most affected by AI-powered systems—in this case, patients, traditional birth attendants, and community health workers—to determine how impacts are defined, assessed, and rigorously mapped to potential harms (Metcalf et al., 2021). This shift in power dynamics is crucial for establishing genuine accountability (Metcalf et al., 2021). Engagement must be meaningful and nontokenistic. And while sustained engagement is valuable, careful attention must be paid to avoiding consultation fatigue, particularly given the sensitive nature of the topic and the existing intensive demands on healthcare workers (see PAI, 2024). Rigorous ethical frameworks must underpin all community engagement, with particular attention to informed consent and cultural protocols around maternal health information. Lastly, equitable partnerships between institutions with diverse resources and expertise—such as those in resource-rich regions and the Global Majority—can promote knowledge exchange and capacity building while establishing a foundation for impactful and inclusive co-construction. This relational approach prioritizes collaboration over extraction, fostering trust and co-ownership in the process.
Aligned: Use sociohistorical research findings to guide inclusive participant recruitment across different communities and healthcare traditions. Establish mechanisms to navigate divergent values between Western and local approaches to maternal care. Develop iterative feedback loops that continuously integrate emerging community knowledge with the purpose of system refinement. Meet accessibility needs and implement a variety of flexible engagement formats in local languages (e.g., mobile consultations via SMS or apps, written feedback, in-person or broader digital options, rotating meeting locations and times to accommodate different schedules). Provide fair compensation and comprehensive support for participation, including translation services, childcare support, and reimbursements for transportation costs and lost work time. Make certain feedback directly shapes system functionality, from diagnostic criteria to resource allocation priorities. Create clear documentation of how community input influences each development phase.
Misaligned: Defining and assessing impacts based on superficial consultations with high-prestige experts (PAI, 2024), like hospital administrators or international health organizations, while neglecting input from patients, traditional birth attendants, and community health workers. Relying on Western maternal health metrics without considering local definitions of positive birth outcomes. Conducting rapid assessment workshops that do not allow for deep community engagement. Making decisions about system design before consulting local communities, only ostensibly seeking their “approval” afterward. Using community input selectively, only implementing suggestions that align with pre-existing technical plans. Failing to accommodate local languages, cultural protocols, and accessibility needs.
Step 4: Reparative Data Sovereignty
Effective data governance frameworks must prioritize the agency of affected communities, shifting control over the collection, storage, and use of sensitive health information into their hands. These frameworks should align with local and Indigenous data sovereignty principles (Carroll et al., 2024; Kukutai & Taylor, 2016), ensuring that data practices not only protect individual privacy but also uphold collective rights. In the context of maternal health, this involves addressing the systemic underrepresentation of diverse experiences and creating data systems that actively contribute to equitable, community-defined healthcare outcomes. By centering on reparative and inclusive practices, these frameworks can correct historical inequities (e.g., past exclusions of marginalized communities) and foster trust while empowering communities to shape their own healthcare futures.
Aligned: Establish community-controlled data trusts with authority over maternal health data collection and use. Design data systems that document historically excluded maternal health experiences and outcomes, as well as a full spectrum of maternal care practices, including traditional approaches. This involves implementing data collection methods that resolve historical gaps in maternal health records from rural and underserved areas. Build local data infrastructure and training programs that enable communities to control their own information. Create data governance structures that reflect local values, such as Ubuntu principles of reciprocity, interconnectedness, and collective responsibility.
Misaligned: Extracting sensitive maternal health data without clear benefits, robust protections, informed consent, or meaningful community participation in decision-making; practices that perpetuate data colonialism and exploitation. Training AI systems primarily on data from urban hospitals and/or Western sources while excluding local, rural healthcare experiences. Relying exclusively on Western medical metrics (like specific vital signs or intervention rates) while missing important cultural indicators of maternal well-being and birth success. Creating data dependencies that require ongoing external technical support.
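One way community-governed consent could be enforced in software is to gate every data release on purposes a community data trust has explicitly approved. The Python sketch below is a minimal, hypothetical design: the purpose labels, the policy set, and the record structure are illustrative assumptions, not part of any cited system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HealthRecord:
    patient_id: str
    consented_purposes: frozenset  # purposes this patient/community has approved

# Purposes authorized by the community data trust (not by the technology vendor).
TRUST_APPROVED_PURPOSES = {"local_care", "community_research"}

def release(records, purpose):
    """Release only records whose community-governed consent covers the purpose."""
    if purpose not in TRUST_APPROVED_PURPOSES:
        raise PermissionError(f"purpose '{purpose}' not authorized by the data trust")
    return [r for r in records if purpose in r.consented_purposes]

records = [
    HealthRecord("a1", frozenset({"local_care"})),
    HealthRecord("a2", frozenset({"local_care", "community_research"})),
]

print(len(release(records, "local_care")))           # both records qualify
print(len(release(records, "community_research")))   # only the second record
```

The design choice worth noting is that the policy set sits outside the vendor's code path: changing what counts as an approved purpose is a governance decision of the trust, and any unapproved purpose fails loudly rather than silently returning data.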
Step 5: Ongoing System Monitoring, Evaluation, and Adaptation
Advancing equity and justice is an iterative process that requires sustained commitment. In acknowledgment of this, R-AIAs require that system assessment extend well beyond initial deployment, with regular evaluation of real-world impacts through metrics and processes determined by affected communities. This continuous monitoring should track not only technical performance but also how the AI-powered system shapes healthcare practices, community relationships, power dynamics, and other sociohistorical realities. For our use case, this means evaluating impacts on both clinical care and traditional birthing practices.
Aligned: Build adaptation protocols that allow the system to regularly evolve based on community feedback, needs, and goals. Develop evaluation frameworks that challenge Western-centric definitions of system success and instead incorporate diverse cultural values, priorities, and knowledge systems. This includes mechanisms for key stakeholders (e.g., traditional birth attendants, community healthcare workers, patients) to lead regular system assessments.
Misaligned: Evaluating system performance solely through technical accuracy metrics. Ignoring feedback from community stakeholders about system impact, including effects on care relationships and power dynamics. Maintaining rigid evaluation frameworks that cannot evolve with community needs or goals. Making system modifications without community consultation.
Step 6: Active Redress and Capacity Building
R-AIAs go beyond simply identifying impacts. Rather, they demand concrete, actionable strategies that actively redress deep-rooted inequities and combat algorithmic coloniality (Racine, 2024; see also Davis et al., 2021). This includes creating pathways for communities to develop their own AI capabilities while ensuring technological development serves community-determined needs and goals.
Aligned: Partner with local medical schools, midwifery programs, technical institutes, and community health centers to develop healthcare AI expertise. Create paid apprenticeship programs through which local healthcare providers develop AI expertise in ways that respect and integrate diverse knowledge systems. Invest in regional computing infrastructure to support local AI innovation. Transfer technical knowledge and resources to community-controlled institutions. Establish funding mechanisms for community-led, AI-powered maternal health initiatives.
Misaligned: Maintaining technical dependencies on external experts. Offering superficial training without meaningful knowledge transfer. Restricting access to system development tools and documentation. Failing to provide resources for local AI capacity development.
Concluding Remarks: Toward Bridging Global Divides
The rapid deployment of AI in healthcare settings worldwide demands immediate action to ensure these systems realize the vision of “AI for All.” As highlighted by cases like the Optum algorithm—in which bias led to discriminatory medical resource allocation harming millions of Black patients (Obermeyer et al., 2019)—the consequences of inadequate governance frameworks are both immediate and far-reaching. Merging algorithmic impact assessments with a reparative praxis yields a powerful yet practical justice-oriented tool for advancing global health equity. By moving beyond the mere identification of potential harms and breaking away from narrow technical fairness metrics, this approach transforms impact assessments from compliance exercises into instruments of empowerment. R-AIAs actively challenge the colonial legacies that continue to shape both health outcomes and technological development and deployment, recognizing that bridging global divides requires a sustained commitment to redress.
The framework's ability to integrate diverse philosophical traditions, such as Ubuntu, underscores its transformative potential. By embracing local epistemologies and value systems rather than imposing a one-size-fits-all model, R-AIAs create opportunities for more accountable, transparent, inclusive, and ethical AI-powered innovation. Moreover, while our focus has been on the Global Majority—particularly, maternal health in sub-Saharan Africa—the framework holds significant relevance across healthcare settings, including for marginalized communities in Western contexts. In adopting a human-centered approach, R-AIAs also address a critical gap in current AI safety efforts by ensuring governance mechanisms reflect global diversity in human values and conceptions of well-being. This advances both the technical and social aspects of AI safety while working toward genuine human flourishing for all those impacted by these systems.
Acknowledgments
I would like to thank the anonymous reviewers and editorial team for their thoughtful feedback and dedication to bringing this timely Special Issue to fruition. I am particularly grateful to Prof Michael Parker, PhD, for his invaluable comments on an earlier version of this paper and his ongoing mentorship. Lastly, I greatly appreciate the generous support from my funders that has made this research possible.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by a Medical Sciences Graduate School Studentship (Nuffield Department of Population Health) and the Baillie Gifford–Institute for Ethics in AI Scholarship, both associated with the University of Oxford.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
