Abstract
In the age of rapid AI advancement, digital colonialism poses a significant threat to Indigenous communities, perpetuating inequalities and exploiting their data. This commentary delves into the concept of Indigenous data sovereignty as a powerful framework for resisting digital colonialism and promoting ethical AI development.
The rapid advancement of artificial intelligence (AI) has brought forth many ethical challenges for businesses, particularly concerning the rights and well-being of marginalized communities. Among these challenges is the issue of digital colonialism, which perpetuates colonial power structures and reinforces existing inequalities through the extractive and exploitative practices of digital technologies (Kwet, 2019). Indigenous peoples, who have long been subjected to physical and cultural colonization, now face the added threat of digital colonialism in the form of data exploitation and misrepresentation in AI systems developed by influential businesses (Mohamed et al., 2020). This commentary presents the concept of Indigenous data sovereignty as a novel framework for promoting ethical AI development and resisting digital colonialism in the business context. By examining the importance of Indigenous data sovereignty, the consequences of extractive data practices, and the implications for businesses, this commentary offers original insights that bridge the gap between Indigenous rights, data governance, and AI ethics.
The Importance of Indigenous Data Sovereignty in AI Development: Consequences and Implications for Businesses
Indigenous data sovereignty asserts the inherent rights of Indigenous peoples to govern the collection, ownership, and application of data about their communities, knowledge systems, and territories (Kukutai & Taylor, 2016). This concept challenges the dominant paradigm of technological determinism in AI development, which often prioritizes rapid innovation over social responsibility, cultural diversity, and ethical accountability. Indigenous data sovereignty recognizes that Indigenous data are not merely a neutral resource to be exploited, but a living, relational entity that is deeply connected to the identities, histories, and futures of Indigenous peoples (Rainie et al., 2017). By centring Indigenous perspectives and knowledge systems, this framework offers a departure from conventional approaches to AI governance, which have often overlooked the unique needs and concerns of Indigenous communities.
The ongoing legacy of colonialism and racism further accentuates the need for Indigenous data sovereignty in AI development. Colonialism has facilitated the subjugation, exploitation, and erasure of Indigenous peoples and their cultures, often justified by the development of Western science and technology. Racism, deeply intertwined with colonialism, has marginalized Indigenous peoples and devalued their knowledge systems and ways of life. The result is a systemic underrepresentation of Indigenous perspectives in AI development and the perpetuation of biases and stereotypes in AI systems that can have harmful impacts on Indigenous communities. This colonial and racist legacy manifests in the extractive approach to Indigenous data, whereby data are collected, used, and monetised without the knowledge, consent, or benefit of Indigenous communities, reinforcing existing power imbalances and structural inequalities (Mohamed et al., 2020).
The extractive approach to Indigenous data has far-reaching consequences for Indigenous communities. By removing Indigenous data from its cultural context and reducing it to data points, this approach erases the cultures, histories, and ways of knowing integral to the identities and well-being of Indigenous peoples (Kukutai & Taylor, 2016). This erasure perpetuates the economic and social marginalization of Indigenous communities and represents a significant loss of valuable knowledge and insights that could inform the development of more sustainable, equitable, and culturally responsive technologies.
By failing to engage with and learn from Indigenous knowledge systems, businesses risk missing valuable opportunities to create more socially and ecologically responsible technologies. Moreover, the extractive approach to Indigenous data often perpetuates harmful stereotypes and biases that can become embedded in AI systems, leading to discriminatory outcomes and further marginalization of Indigenous communities.
The consequences of this extractive approach have significant implications for businesses developing AI technologies. By failing to respect Indigenous data sovereignty and by engaging in extractive data practices, companies risk perpetuating the same harms and injustices historically inflicted upon Indigenous communities, undermining their own social license to operate and eroding public trust in their products and services.
In an era of increasing public scrutiny and demand for corporate social responsibility, businesses that continue to engage in extractive data practices risk significant reputational damage and loss of market share. By failing to respect Indigenous data sovereignty and engage in meaningful partnerships with Indigenous communities, businesses risk being seen as complicit in the ongoing legacy of colonialism and racism.
Moreover, the legal and regulatory landscape around Indigenous data sovereignty is evolving rapidly, with many countries and international organizations recognizing the rights of Indigenous peoples to control their own data. Companies that fail to adapt to these changing norms and standards risk facing legal challenges, regulatory sanctions, and being left behind by competitors who are more proactive in engaging with Indigenous communities and respecting their data rights.
The Path Forward
To address the challenges posed by the extractive approach to Indigenous data in AI development, powerful tech businesses must recognize and respect Indigenous data sovereignty. This requires a fundamental shift in the way these companies approach AI ethics and governance, moving away from an extractive and deterministic model of AI development and toward a more collaborative, culturally responsive, and accountable approach.
One key aspect of this shift is the need for meaningful engagement and collaboration with Indigenous communities throughout the AI development process. This means going beyond superficial forms of consultation and toward deep, sustained partnerships that center the voices, knowledge, and priorities of Indigenous peoples. It also means creating mechanisms for Indigenous communities to exercise control over their data, such as data trusts, research agreements, and community-driven protocols.
Another critical aspect of Indigenous data sovereignty in AI development is the need for greater transparency and accountability in the way AI systems are designed, deployed, and governed by powerful tech businesses. This means ensuring that Indigenous communities have access to information about how their data are being used in AI systems, as well as the ability to contest and correct inaccurate or harmful representations of their cultures and knowledge. It also means establishing clear mechanisms for redress and accountability when AI systems developed by these companies cause harm to Indigenous communities, such as through the misuse or misappropriation of Indigenous data.
A third step that practitioners can take is to actively support and invest in Indigenous-led AI initiatives and startups. This can include providing funding, mentorship, and technical resources to help Indigenous entrepreneurs and innovators develop AI solutions that are grounded in their own cultural values and knowledge systems. For example, the Indigenous AI Initiative at the University of Waikato in New Zealand is working to develop AI technologies that are culturally responsive and beneficial to Indigenous communities, such as tools for language revitalisation and environmental monitoring (Lilley et al., 2024). Similarly, the Indigenous AI Network in Canada is bringing together Indigenous researchers, entrepreneurs, and community leaders to explore the potential of AI for Indigenous self-determination and well-being.
Business and society scholars and business schools have a crucial role in supporting Indigenous data sovereignty in AI development. They can conduct research on the social and ethical implications of AI for Indigenous communities, develop best practices for responsible AI governance, and integrate Indigenous perspectives into their curricula to equip future leaders with the necessary cultural competence and ethical frameworks. Furthermore, business schools can collaborate with Indigenous communities to co-create AI solutions that address their specific needs, as exemplified by the University of Melbourne’s partnership with the Indigenous Knowledge Institute and the University of Arizona’s initiative on Indigenous Entrepreneurship and Innovation. By taking these steps, businesses and business schools can redress the harms of extractive data practices, unlock new opportunities for innovation grounded in Indigenous wisdom, and contribute to a more inclusive, sustainable, and equitable future.
Conclusion
Indigenous data sovereignty is crucial for resisting digital colonialism and technological determinism in AI development. By centring Indigenous rights and perspectives, tech companies can build more equitable, inclusive, and sustainable AI systems. However, this requires a fundamental shift in approach: meaningful collaboration with Indigenous communities, transparency, accountability, and support for Indigenous-led AI initiatives. Business scholars and schools should conduct research, integrate Indigenous perspectives into curricula, and partner with Indigenous organizations. Achieving Indigenous data sovereignty in AI development necessitates a commitment from all stakeholders to build relationships of trust, reciprocity, and mutual respect, ultimately leading to a more just and equitable future for all.
Acknowledgements
The author thanks Frank de Bakker and Simon Pek for their continuous support throughout the development of this commentary.
Author’s Note
Vishal Rana is also affiliated with University of Doha for Science and Technology, Doha, Qatar.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
