Abstract
Artificial Intelligence (AI) applications in healthcare are evolving rapidly. The integration of AI into the Canadian healthcare system has demonstrated significant potential for enhancing the efficiency of care and improving patient outcomes. However, as this transformative technology continues to advance, it is crucial to take into account the unique perspectives and requirements of Indigenous Peoples in Canada. This article delves into the political, ethical, and practical considerations associated with introducing AI into Indigenous healthcare, emphasizing the paramount importance of equity and inclusion, which are rooted in the Two-Eyed AI framework. It also underscores the significance of co-creating AI technology in collaboration with Indigenous communities and multidisciplinary development teams. To illustrate these principles, this article spotlights an international AI epistemology-focused working group example. Healthcare professionals who engage with AI, whether through research, management, development, or leadership, are implicated in this contemporary paradigm shift toward decolonizing novel AI technology.
Introduction
Artificial Intelligence (AI) in healthcare encompasses the application of advanced technologies and computer algorithms that simulate human intelligence and decision-making processes. This technology is utilized for the analysis of medical data and aims to support healthcare professionals in accurately diagnosing, treating, and predicting patient health outcomes. 1 The transformative potential of AI in healthcare is revolutionary. In Indigenous communities, for example, AI technology aids in overcoming geographical challenges, giving care providers and patients access to advanced diagnostics and expert medical insights that can be harder to obtain in under-resourced and remote healthcare facilities. 2 There are, however, ethical considerations surrounding AI and Indigenous Peoples within the Canadian healthcare context that revolve around multiple facets, such as cultural sensitivity, the incorporation of Indigenous Knowledge (IK) and values, equitable access, data sovereignty and governance, community engagement, and informed consent. 3 These complex ethical concerns are rooted in Canada’s historical legacy of colonialism, intertwined with unethical medical research studies that have been harmful to Indigenous communities, specifically in the contexts of Western medicine and healthcare delivery. 4
As Canada grapples with the Indigenous Rights Movement and endeavours towards reconciliation, it is imperative to prioritize efforts aimed at rectifying past shortcomings in the current implementation of AI policies within the healthcare sector. 5 In 2017, Canada became the first country globally to launch a national AI research strategy, backed by the Canadian Institute for Advanced Research (CIFAR). 6 The Canadian government has a unique opportunity to incorporate Indigenous perspectives into AI development, in line with its obligations under Section 35 of the Canadian Constitution 7 and commitment to the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP). 8 While AI holds great promise for enhancing healthcare service efficiency, the absence of adequate consideration for Indigenous perspectives of health and well-being in the creation of such technology may result in healthcare technologies that are not well-suited to the specific challenges and socio-cultural determinants of health that Indigenous populations face. 9 Furthermore, AI can be enriched by Indigenous worldviews through the practice of Two-Eyed Seeing, 10 which balances Indigenous and Western scientific viewpoints.
The Silicon Valley doctrine
The composition of the technology workforce, dominated by affluent male engineers of European descent, influences the development of AI technology. 11 The Silicon Valley perspective refers to a set of values, attitudes, and approaches to technology, entrepreneurship, and innovation that have emerged and become associated with a renowned tech innovation region in California. 5 The Silicon Valley doctrine embodies a Western perspective that regards both humans and non-humans as exploitable resources, 12 a perspective deeply intertwined with the legacies of colonization, capitalism, and slavery. 13 Moreover, humans may need to develop an understanding of the ethical considerations and algorithmic biases that AI technologies may inherit from their creators. Algorithmic bias has been defined as the application of algorithms exacerbating existing inequities in socioeconomic status, race, ethnicity, religion, gender, disability, or sexual orientation, particularly within health systems.14,15 For example, algorithms employed by prominent health systems have exhibited racial bias against marginalized populations, including Black, 16 LGBTQ2S+, Asian, 17 and Indigenous populations. 18 Bias and underrepresentation in the creation of AI technologies have a trickle-down effect, exacerbating social inequalities and potentially causing patient harm through the misinterpretation of personal health data. 19 Ultimately, the peril in Silicon Valley may lie in its exclusive incorporation of the dominant Western worldview in the development of conscious machines.
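The algorithmic bias described above can be made concrete with a simple measurement. The sketch below, which is illustrative only and uses entirely hypothetical data (the groups, labels, and predictions are not drawn from any cited study), compares a model's false-negative rate across two patient groups; the kind of disparity the cited audits of clinical algorithms uncovered appears as a gap between the two rates.

```python
def false_negative_rate(labels, preds):
    """Share of truly positive (urgent) cases the model misses."""
    positives = [(y, p) for y, p in zip(labels, preds) if y == 1]
    if not positives:
        return 0.0
    misses = sum(1 for y, p in positives if p == 0)
    return misses / len(positives)

# Hypothetical triage labels (1 = urgent) and model predictions
# for two patient groups of equal size and equal true need.
group_a = {"labels": [1, 1, 1, 1, 0, 0], "preds": [1, 1, 1, 0, 0, 0]}
group_b = {"labels": [1, 1, 1, 1, 0, 0], "preds": [1, 0, 0, 0, 0, 0]}

fnr_a = false_negative_rate(group_a["labels"], group_a["preds"])  # 0.25
fnr_b = false_negative_rate(group_b["labels"], group_b["preds"])  # 0.75

# A large gap means one group's urgent cases are missed far more
# often, even though both groups have identical true need.
print(f"FNR gap between groups: {abs(fnr_a - fnr_b):.2f}")
```

The point of the exercise is that bias of this kind is auditable: it surfaces only when error rates are disaggregated by group rather than reported as a single overall accuracy figure.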
The breath of AI
Kinship-based knowledge systems emphasize the collective, holistic, and relational aspects of Indigenous ways of knowing to promote human flourishing. 20 This belief is core to many Indigenous epistemologies. Whereas Western philosophical and scientific traditions often lack a “relational” understanding of humanity’s place in the world, many Indigenous cultures place importance on relationships with both inanimate and animate objects.20,21 For example, in Lakota ontologies, non-human relationships are prioritized, especially those with stones. 22 In essence, non-humans are viewed as having volition and decision-making capacity. 23 These epistemologies form the foundation for ways of knowing and communicating that recognize the interconnected relationships that extend to wildlife, plants, the elements, water, and land. 24 Similarly, AI can be understood as an extension of personhood to an inanimate object such as a technological device, software code, or algorithm. 21 It is essential to highlight that there is no singular Indigenous epistemology. This article will use the terms Indigenous epistemology and perspective to describe frameworks originating from Indigenous nations in North America and the Pacific Ocean, which hold commonalities in their perspectives on human to non-human kin relationships. 22 These relational epistemologies can inform the development of more ethical, sustainable, and culturally sensitive AI applications that serve the well-being of both individuals and the broader ecosystem, ultimately contributing to a more balanced society. 25
Two-Eyed AI
Two-Eyed AI represents an AI approach that encompasses both IK and Western scientific perspectives. This concept of Two-Eyed AI is rooted in the Two-Eyed Seeing principle, championed by prominent Mi’kmaw Elders, Albert and Murdena Marshall. 19 Two-Eyed Seeing entails the art of perceiving the strengths inherent in Indigenous wisdom balanced with the strengths found in Western knowledge and ways of understanding. 19 This approach involves one eye focused on human ethics concerns and the other on recognizing the technological potential.10,19 For example, in Canada, AI is being used to help preserve polysynthetic Indigenous languages like Mohawk, Algonquin, and Michif through speech- and text-based applications. 26 In essence, Two-Eyed AI is a practical manifestation of decolonization theory in the realm of AI. It promotes a seamless fusion of AI technologies with Indigenous culture, with the goal of empowering Indigenous communities in the digital era while preserving their culturally rich heritage.
Case study: Co-creation of AI technology with Indigenous communities
Background
In 2019, the CIFAR released a position paper in conjunction with the Pan-Canadian AI Strategy, focusing on AI development through the lens of Indigenous perspectives. 27 This document marked the inception of a more ethically oriented AI strategy called the Abundant Intelligences Project (AIP). Co-directed by Prof. J.E. Lewis (Concordia University, Quebec) and Prof. H. Whaanga (Massey University, New Zealand), the AIP aims to make AI more inclusive of Indigenous perspectives. 28 This ongoing government-funded project involves investigators and collaborators, primarily of Indigenous background, from universities and Indigenous community-based organizations across Canada, the United States, and New Zealand. 10
Strengths of the abundant intelligences project
The three-pronged framework of the AIP is a unique application of the Two-Eyed Seeing principle. The AIP explores three distinct axes, each addressing vital aspects of IK integration into mainstream AI applications. Integration, the first axis, prioritizes active community engagement while merging IK practices and mainstream AI research frameworks. The establishment of physical spaces, “pods,” facilitates deep engagement within participating Indigenous communities. 27 Indigenous communities recognize stakeholders as part of their extended network of relations, whereas technology companies typically identify stakeholders as comprising the board of directors, shareholders, employees, and consumers. 15 This inclusive and collaborative process helps ensure that co-created AI solutions respect both human and non-human entities.
The strength of the second axis, Imaginaries, lies in designing AI systems tailored to the specific needs and aspirations of Indigenous communities. 27 By taking into account geographic isolation, linguistic diversity, and cultural traditions, it addresses the unique challenges faced by these communities. The incorporation of Lakȟóta design principles into AI wearable devices, for example, highlights this approach’s innovation and adaptability. 29 Moreover, including Indigenous imaginaries respects the fundamental principles of sovereignty and self-determination, 16 ensuring that AI technology aligns with the community’s values and priorities.
Intelligence, the third axis, addresses technical challenges in mainstream AI research by applying Indigenous worldviews. By striving to translate the full spectrum of human intelligence into machine intelligence, the AIP ensures that AI technologies are developed and deployed with respect for and preservation of Indigenous cultures, languages, and traditions. 27 For example, when AI design is approached from a Māori perspective, it accentuates the significance of hapū (“sub-tribes”) and whānau (“family”), 30 underscoring the potential for more holistic and culturally resonant AI applications. Technologies like facial recognition, which often process images of both living and deceased individuals, can breach Tapu (“sacred”) principles. 18 Therefore, AI development needs to be culturally sensitive, even challenging Western technocratic norms where cultural content may be seen as expendable for efficiency and optimization. 18 This approach enriches AI research while also promoting cultural preservation and inclusivity in technology development.
Limitations of the abundant intelligences project
AI has the potential to promote equity, but its positive impact is more likely when the development is undertaken by entities not primarily motivated by profit, such as hospitals or universities. 31 The allocation of government funding can profoundly influence the trajectory and outcomes of the AIP. The Pan-Canadian AI Strategy, which supports the CIFAR and the AIP, currently receives financial backing from major technology players such as Facebook, a commercial bank, and government funds, exemplifying a strong government-industry-academia collaboration. 6
However, the 5-year, $125 million investment in the Pan-Canadian AI Strategy pales in comparison to funding in other countries, subsequently prompting concerns about Canada’s historical challenges in commercializing research outcomes. 6 There is a danger in AI development that developers may tokenistically apply these values without bringing about substantial changes in how AI is conceived and crafted. 32 Highlighting this concern is the proposed Bill C-27, which would update AI and privacy laws and for which the government has been criticized as “anti-democratic” for not consulting Indigenous leaders and relying too heavily on feedback from industry. 33 Moreover, the contemporary fixation on technological speed, scale, and innovation ought to be disrupted through Indigenous-centred AI development. 34 Perhaps only governmental and legislative drivers can force policymakers and other influential technology-sector parties to ground their goals in larger questions of equity, colonial legacies, and justice. The challenge of regulating the complex decisions for ethical AI will demand significant public engagement and scrutiny because the essentials for fair AI development go far beyond regulatory boundaries. 32 In the end, a well-defined national policy is essential to delineate the parameters within which AI and Indigenous worldviews will collaboratively evolve in Canada. 20
Considerations for health leaders
Abundance over singularity
“Abundant intelligences” encapsulates the notion of diverse and thriving knowledge practices within specific environments. 30 Drawing on the AIP framework, health leaders and AI working groups may actively embrace strategies within their organizations to ensure a culturally sensitive and inclusive approach. The term “abundant” signifies not only the richness of these practices but also their potential to shape numerous futures and challenge narrow perceptions of intelligence. 35 Health leaders who understand the importance of incorporating Indigenous worldviews into AI development are better positioned to manage culturally sensitive, equitable, and inclusive organizations. As emphasized by the 2015 TRC Calls to Action, 36 the praxis of the AIP can guide healthcare leaders to ensure the inclusion, respect, and empowerment of Indigenous communities during this technological transformation. By de-centring the predominant Western viewpoint and incorporating alternative paradigms, AI developers may become more adept at achieving social and ethical objectives.
Health leadership towards abundantly intelligent organizations
There is potential for infusing Indigenous worldviews into the integration of AI at multiple levels within healthcare organizations. At the development level, designers, engineers, and computer scientists can incorporate diverse perspectives to inform the creation process. 35 This may involve developing adaptable, modular systems customized to fit specific contexts while maintaining core elements that respect Indigenous cultural identities, histories, and worldviews. Creating wider gender identification spectra to include Two Spirit is one example. 37 At the management tier, tools such as end-to-end AI development frameworks that allow for recurring audits enable managers to better mediate the implementation of new technology in conjunction with Indigenous stakeholders. 38 Such tools can enable development teams to assess how an AI model performs under various conditions, including its effectiveness across diverse human health datasets.10,39 Moreover, culturally diverse AI development teams are crucial for designing and developing inclusive AI applications. 40 Emphasis should also be placed on recruiting and retaining Indigenous talent in both healthcare and AI fields to further enhance the creation and uptake of abundantly intelligent software.
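The recurring-audit idea above can be sketched in a few lines. The example below is a minimal illustration, not an implementation of any cited framework: it assumes an organization maintains labelled evaluation sets for each community it serves (the dataset names, the threshold, and the toy model are all hypothetical), runs the model over each set, and flags any community where performance falls below an agreed level for stakeholder review.

```python
def audit_model(predict, eval_sets, min_accuracy=0.8):
    """Run a model over per-community evaluation sets and flag weak spots."""
    report = {}
    for community, (features, labels) in eval_sets.items():
        preds = [predict(x) for x in features]
        correct = sum(1 for p, y in zip(preds, labels) if p == y)
        accuracy = correct / len(labels)
        report[community] = {
            "accuracy": accuracy,
            "flagged": accuracy < min_accuracy,  # triggers stakeholder review
        }
    return report

# Hypothetical model: predicts a risk flag from one numeric feature.
model = lambda x: 1 if x >= 5 else 0

# Hypothetical per-community evaluation sets: (features, true labels).
eval_sets = {
    "urban_dataset": ([2, 6, 7, 1], [0, 1, 1, 0]),
    "remote_dataset": ([4, 5, 9, 3], [1, 1, 1, 0]),
}

report = audit_model(model, eval_sets)
for community, result in report.items():
    status = "REVIEW" if result["flagged"] else "ok"
    print(f"{community}: accuracy={result['accuracy']:.2f} [{status}]")
```

Because the audit is a small, repeatable function rather than a one-off analysis, it can be re-run whenever the model or the underlying datasets change, which is what makes the "recurring" part of the audit practical for managers working with community stakeholders.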
Providing cultural competency training for healthcare staff, including AI developers and users, enables more culturally sensitive care and the culturally appropriate use of AI technologies. Participatory exercises such as the “KAIROS Blanket Exercise,” led by Indigenous elders or educators, promote truth, understanding, and reconciliation among both Indigenous and non-Indigenous communities alike.41,42 Creating shared spaces, coined “pods” by the AIP, whereby Indigenous community members facilitate deep learning and IK-sharing practices, such as talking circles and smudge ceremonies, reflects a culturally sensitive approach.29,43
Establishing ethical oversight committees with Indigenous representation to review AI technologies in tandem with current and novel government policies can help to ensure better alignment with Indigenous ethical principles. Furthermore, sharing best practices and successful case studies related to integrating Indigenous perspectives into AI in healthcare contributes to interdisciplinary and cross-sector knowledge sharing. 42 Without widespread understanding of the impacts of singular AI systems, healthcare equity challenges, as seen in differential outcomes during the COVID-19 pandemic in Canada, will persist. 39
Although an exhaustive discussion of decolonization is beyond the scope of this article, this brief overview is a catalyst for practically merging Two-Eyed Seeing principles into healthcare organizational change strategies. At the heart of this discussion is the need for health leaders to honour the ethical imperative of “nothing about us without us” through ongoing consultation with Indigenous Peoples. 44 In the same vein, as health leaders understand the issues within AI systems, lines of accountability can become more clearly defined within the healthcare organizations that they serve.
Conclusion
As AI advances, so should the integration of the distinct perspectives and needs of Indigenous Peoples. Principles deeply embedded in the Two-Eyed AI framework can guide the inclusion of Indigenous ethos and imaginaries into AI creation, balanced with scientific and technological constructs. 27 Moreover, as health leaders strive towards fostering collaboration with Indigenous Peoples, AI, in turn, can aid in restructuring our societal landscapes, reshaping our perceptions of digital territories, and supporting individuals in acknowledging and respecting Indigenous viewpoints. 22 By prioritizing Indigenous viewpoints in the advancement of AI and AI policy, exemplified through the federal government’s endorsement of the AIP, Canada can enhance its endeavours to foster ethical and inclusive AI.
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Ethical approval
Institutional Review Board approval was not required.
