Abstract
The rapid expansion of artificial intelligence (AI) has triggered significant ethical and human rights concerns. While much of the debate focuses on risks such as discrimination, disinformation and algorithmic bias, much less has been written about AI's potential to support human rights practice and scholarship. This article engages with both perspectives by reflecting on the early development of RedressHub, a tool that integrates AI-assisted information retrieval with a participatory, stakeholder-driven design to map and connect redress initiatives for colonial harm and its legacies across Europe. We discuss the ethical and epistemological implications of fine-tuning large language models (LLMs) to support the documentation of redress efforts for colonial injustice. Contributing to current debates on process-based, value-driven AI deployment in human rights, we argue that a co-creative approach that engages relevant stakeholders from conceptual design to interface development offers a crucial framework for addressing these challenges. By embedding participation at every stage, this approach has the potential to enhance explainability and help mitigate bias, to open crucial conversations on addressing extractivism, and to explore how and under what conditions AI tools can be leveraged to serve the needs and priorities of affected communities.
Introduction
The rapid expansion of artificial intelligence (AI) across nearly every societal domain has sparked intense ethical debates. Calls for legislative and governance frameworks to address AI's societal challenges – from privacy violations and disinformation to discrimination and algorithmic authoritarianism – underscore the need for AI solutions designed in a rights-respecting manner, drawing on established international human rights principles and value frameworks (McGregor et al., 2019; Ünver, 2024). While ethical debates predominantly focus on rights-related risks, a smaller but growing body of literature explores how AI can be leveraged for advancing human rights documentation, analysis, monitoring, and reporting, as well as for generating new insights and methodologies for human rights scholarship. Civil society-driven and bottom-up collaborations between data scientists, victims(-activists), practitioners and academics highlight AI's potential to enhance the capacity and agency of those affected by rights abuses (see Dulka, 2023; Teo, 2025).
We position ourselves at the intersection of these two debates, writing from a critical and victim-centric perspective, while exploring the pertinence of AI technologies for furthering human rights practice and research. This commentary reflects on the early development stages of RedressHub, a social value creation project that proposes a novel database repository and interface to map and connect redress initiatives for colonial harm across Europe, using AI-assisted information retrieval. Writing as the project launches, we share emerging insights from ongoing efforts to conceptualise and implement a participatory, stakeholder-driven approach to AI design in the context of redress for massive (historical) rights violations. Below, we outline the RedressHub project, its use of multilingual large language models (LLMs) for semi-automated data retrieval, classification and structuring, and the tensions this raises between technical opportunities and ethical and epistemological challenges. We then argue that a co-creative design process offers a promising path for addressing these axiological challenges. In doing so, we build on and seek to contribute to emerging debates on process-based participatory approaches to value-driven AI in human rights research and practice.
RedressHub: mapping redress initiatives for colonial injustices
RedressHub is conceptualised as an online database to map and connect redress initiatives addressing colonial harms and their legacies across Europe, featuring an interactive interface with advanced search capabilities and built-in geospatial, network, and time-series visualisations. It aims to offer users a comprehensive overview of ongoing efforts, approaches, and key actors involved in redress efforts, facilitating multi-directional knowledge exchange, fostering new networks and alliances, and supporting stakeholders to design and implement meaningful, scalable redress initiatives. Its development is coordinated by an interdisciplinary team at Ghent University, inspired by Justice Vision’s research on redress initiatives in consolidated democracies, as well as interactions with the Transatlantic Redress Network.
Since the early 2020s, there has been growing – albeit still limited – public and policy attention to colonial legacies, largely due to the relentless activism of civil society and grassroots movements (Sierp, 2020). While the Black Lives Matter movement, which originated in the United States, may be the most well-known example, justice actors across Europe (e.g. in Denmark, the Netherlands, Belgium, Germany, France, Italy, Spain and Portugal) have also advanced demands and initiated actions for legal and policy reforms, truth-seeking, apologies, reparations, restitution of looted artefacts, the dismantling of colonial symbols and memorialisation.
A recent mapping of redress initiatives in the Belgian cities of Antwerp, Brussels and Liège (Boddin, 2024) revealed two key challenges. First, from a practical point of view, struggles and initiatives for redress remain highly fragmented. Despite related goals and action repertoires, there is limited knowledge-sharing and lesson-learning across contexts and initiatives. Second, from a research point of view, manually mapping these initiatives is a time-consuming and labour-intensive process, even at a limited sub-national scale. Both challenges hint at the relevance of novel data technologies to effectively identify and analyse redress initiatives on a larger European scale.
Since no off-the-shelf solution exists to facilitate this exercise and given the importance of engaging redress actors in decisions over why and how data for this mapping is collected, categorised and interpreted, RedressHub will be developed through closely interwoven technical and co-creation tracks. The technical track integrates web crawling and scraping, text classification, and entity recognition to help identify and structure relevant data, in addition to building the database structure and online user interface. The co-creative track proposes a staggered series of bilateral conversations, collective design sessions and ethics consultations with a diverse community of redress actors to inform and steer the design process. Through close interactions between both tracks, we seek to balance the opportunities and risks of integrating AI into the project.
Leveraging AI-assisted information retrieval
While AI can be categorised in numerous ways, at RedressHub we focus more narrowly on key subfields relevant to the information retrieval strategy that will be implemented to populate the database. This includes the application of Natural Language Processing (NLP), a subfield of AI that enables computers to understand, interpret, and generate human language using machine learning techniques. It involves tasks such as Named Entity Recognition, which identifies and categorises names of organisations, places, dates and so on, within unstructured text. With the increasing availability of pre-trained LLMs, these methods have become more readily accessible and can be further fine-tuned using few-shot learning and supervised training with manually labelled data for improved text relevance classification and entity recognition (Littman, 2021).
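To make the few-shot approach concrete, the sketch below shows how a handful of manually labelled examples might be assembled into a classification prompt for a generic LLM. This is a simplified illustration, not RedressHub's actual pipeline: the example texts, labels and prompt wording are all hypothetical.

```python
# Minimal sketch: assembling manually labelled examples into a few-shot
# prompt for relevance classification with a generic LLM completion API.
# All example texts, labels and wording are illustrative assumptions.

LABELLED_EXAMPLES = [
    ("City council votes to return looted artefacts to Nigeria.", "relevant"),
    ("Local football club wins regional championship.", "not relevant"),
    ("Museum launches provenance research into its colonial-era collection.", "relevant"),
]

def build_few_shot_prompt(text: str) -> str:
    """Prepend labelled demonstrations so the model can infer the task."""
    lines = ["Classify each text as 'relevant' or 'not relevant' to colonial redress.", ""]
    for example, label in LABELLED_EXAMPLES:
        lines.append(f"Text: {example}")
        lines.append(f"Label: {label}")
        lines.append("")
    # Append the unlabelled text; the model is expected to complete the label.
    lines.append(f"Text: {text}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = build_few_shot_prompt("Parliament debates an official apology for colonial abuses.")
```

In practice, the same labelled data could instead be used for supervised fine-tuning of a smaller pre-trained model, which trades prompt flexibility for lower inference cost.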
Combined with dictionary-based web crawling and scraping, the benefits of AI-assisted information retrieval for RedressHub are manifold. It can significantly improve the capacity to recognise and organise relevant data from vast amounts of unstructured digital text sources (social media, news outlets, institutional repositories, aggregate websites, etc.). The speed and continuity with which data collection can happen ensure the kind of steady and up-to-date information needed to make RedressHub an actionable platform. Moreover, the multilingual capabilities of LLMs allow for data detection and analysis across different linguistic and regional contexts, overcoming language barriers that typically hinder cross-border research. Beyond efficiency, this strategy offers new possibilities for identifying connections, trends, and shared strategies among disparate initiatives, which might not emerge through targeted manual querying.
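The dictionary-based filtering step can be pictured as follows: crawled pages are scored against a multilingual keyword dictionary, and only pages above a threshold proceed to the more expensive NLP stages. The dictionary, languages and threshold here are toy assumptions for illustration, not the project's actual search dictionary.

```python
# Simplified sketch of dictionary-based filtering of crawled pages:
# score each document by distinct matches against a multilingual keyword
# dictionary and keep those above a threshold. Terms below are toy examples.

import re

REDRESS_TERMS = {
    "en": ["reparations", "restitution", "colonial", "apology"],
    "fr": ["réparations", "restitution", "colonial", "excuses"],
    "nl": ["herstelbetalingen", "restitutie", "koloniaal", "excuses"],
}

def relevance_score(text: str) -> int:
    """Count distinct dictionary terms appearing in the text (any language)."""
    lowered = text.lower()
    hits = set()
    for terms in REDRESS_TERMS.values():
        for term in terms:
            if re.search(r"\b" + re.escape(term) + r"\b", lowered):
                hits.add(term)
    return len(hits)

def filter_pages(pages: list[str], threshold: int = 2) -> list[str]:
    """Keep pages whose score meets the threshold for downstream NLP."""
    return [p for p in pages if relevance_score(p) >= threshold]

pages = [
    "De stad biedt excuses aan en onderzoekt restitutie van koloniaal erfgoed.",
    "Weather forecast: sunny with a chance of rain.",
]
selected = filter_pages(pages)  # only the first page passes
```

A real crawler would add language detection, stemming or lemmatisation, and deduplication, but the core logic of a curated, stakeholder-informed dictionary gating the pipeline remains the same.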
These gains in scale, speed, multilingual reach and pattern detection enable RedressHub to offer a more comprehensive view of redress efforts across countries, sectors, and time periods. This can reveal, for example, how policy responses evolve differently across countries or how community-led initiatives in one region may inspire or influence similar efforts elsewhere. This approach can also support the development of a practice-based typology of redress – that is, a typology grounded in how organisations themselves define and enact redress. For instance, some actors frame redress as an inward-facing process aimed at transforming their own processes, collaborations and structures, while others focus on outward-oriented actions such as public memorialisation or policy reform. Such a typology offers a starting point for assessing the relevance of existing legal or societal frameworks underpinning redress claims, and for revisiting those frameworks to better reflect practices on the ground.
The integration of LLMs into human rights research is, however, not merely an exercise in improved efficiency. It alters how knowledge is produced, analysed and disseminated, and it introduces genuine risks. In this sense, the specific context of RedressHub raises critical ethical and epistemological questions about knowledge production within colonial redress struggles: the influence of AI-related biases on how historical injustices and responses to them are acknowledged and understood, the prioritisation of quantification over the contextual narrativisation of harm and redress, the replication of environmental, human and knowledge extractivism, and questions about visibility and data protection when compiling redress-related information.
Understanding ethical and epistemological challenges
Much has been written about the risks of bias in the outputs of generative LLMs (Dulka, 2023). In the context of RedressHub, the notion of bias extends to how colonial harm is defined, both in its historical and ongoing forms. Historical records and legal frameworks have frequently been shaped by the perspectives of former colonial powers, privileging certain forms of harm – such as economic loss – while minimising or excluding others, such as cultural erasure, epistemicide, or intergenerational trauma. When LLMs are trained on data that reflects dominant legal, political, societal, and institutional narratives, this training-data bias can lead models to overlook or downplay the experiences of affected individuals and communities. Algorithmic bias in the model's processing methods, reinforcement learning, and fine-tuning can further reinforce narrow, exclusionary understandings of harm, marginalising alternative conceptions of injustice and claims for justice.
Bias in (semi-)automated information retrieval outcomes can also reflect existing hierarchies and underrepresentation within the information ecosystems from which data is drawn. In the context of RedressHub, these risks can potentially emerge from disparities in how different organisations and institutions document, categorise, and make information available online. Larger or more established entities with more resources may produce and disseminate more structured, accessible, and widely referenced data, using ‘recognised’ registers of language and terminology, while smaller grassroots and community-led initiatives – often central to redress efforts – may lack visibility in digital spaces.
These issues may be compounded by reliance on the collection of data fragments, and the prioritisation of quantification and pattern recognition, which risk oversimplifying the complexities of the harm experiences that give rise to redress claims and undervaluing qualitative insights and lived experiences, thereby marginalising alternative ways of knowing and expressing (see also Merry, 2016). These issues result, at a more fundamental level, in a real concern that the implementation of AI tools can exacerbate existing power imbalances between privileged institutional, academic and practitioner ‘experts’ and marginalised individuals and communities directly affected by injustices, as well as reinforce a hierarchy of whose and what kind of knowledge is considered credible or valuable (Bakiner, 2023).
In addition, LLMs and their dependence on big data call for a critical examination of the replication of the very practices and logics that underpinned coloniality (Couldry and Mejias, 2018), including the extraction of knowledge, natural resources and human labour. The development and deployment of LLMs relies heavily on rare minerals and on un(der)compensated labour, both often sourced from the Global South, as well as on vast amounts of knowledge and data for training, often without consent or compensation. These new forms of data colonialism reflect enduring historical imbalances in the valuation and capitalisation of knowledge, labour and environment. For initiatives like RedressHub, this means considering open-source and non-commercially developed alternative models where possible, even where this entails trade-offs in model performance or convenience.
Lastly, using AI-assisted information retrieval to collect publicly available information on a larger scale also raises questions about data visibility and protection, including issues of consent, the potential for data breaches, and how this information might be used beyond its intended purpose (Dulka, 2023). In the context of RedressHub, as information about redress initiatives becomes more accessible, it introduces vulnerabilities by raising the profile of certain actors and initiatives, notably among those resisting decolonisation efforts. This could expose activists, organisations and affected communities to targeted backlash. Additionally, not all redress actors may want to draw public attention to initiatives that have a more inward-looking restorative, community-building character.
Addressing risks through participatory and co-creative design
In designing RedressHub, we consider it paramount to critically engage with these risks and challenges, together with those actors who stand to be impacted. Existing debates in the field of Critical Data Studies (Iliadis and Russo, 2016; Kitchin, 2024) suggest that adopting participatory approaches in AI design can help mitigate some of the most pressing challenges at the intersection of AI and human rights. In conceptualising the development of RedressHub, we therefore prioritised three approaches: understanding the platform as a sociotechnical system, focusing on process-based design, and taking a community- or stakeholder-centric approach.
A sociotechnical perspective recognises that AI systems are not merely technical tools but operate within complex social, political, and economic contexts; it necessitates examining the broader societal impact and ensuring diverse perspectives inform their development to align with ethical and legal standards (Aizenberg and van den Hoven, 2020). This perspective underpins RedressHub's dual-track technical and co-creation development process, through which redress actors and stakeholders shape the platform's goals and functionalities, information retrieval, database and interface design, and the underlying ethical framework. Building the platform in this way enables ongoing reflection on how such data infrastructures and technologies, themselves entwined with hierarchical colonial epistemologies, might be reoriented or repurposed to support more equitable forms of knowledge production.
Process-based design, in turn, shifts attention from AI's outcomes to its development process, ensuring transparency, explainability and adaptability (Yu et al., 2024). This perspective underpins the platform's hybrid ‘AI-in-the-loop’ strategy (Natarajan et al., 2025), meaning that human oversight is integrated at all critical junctures of information retrieval, structuring, and representation, and that AI-identified outputs are supplemented by targeted manual searches and contextual expertise. This not only enhances the accuracy, relevance and traceability of retrieved data but also supports the meaningful inclusion of underrepresented initiatives, voices, and narratives – particularly grassroots and diaspora perspectives.
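One way such oversight might be operationalised is schematically sketched below: model outputs whose confidence falls below a threshold are routed to a manual review queue rather than entering the database directly. The threshold value and record fields are hypothetical assumptions, intended only to show where human review slots into the pipeline.

```python
# Schematic sketch of an 'AI-in-the-loop' routing step: classifications
# below a confidence threshold go to a manual review queue instead of
# being accepted automatically. Threshold and fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Classified:
    text: str
    label: str
    confidence: float  # model's estimated confidence in [0, 1]

def route(items: list[Classified], threshold: float = 0.85):
    """Split items into auto-accepted entries and a manual review queue."""
    accepted, review_queue = [], []
    for item in items:
        if item.confidence >= threshold:
            accepted.append(item)
        else:
            review_queue.append(item)
    return accepted, review_queue

items = [
    Classified("Museum returns looted artefacts", "restitution", 0.96),
    Classified("Community dialogue on shared history", "unclear", 0.41),
]
accepted, queue = route(items)  # second item awaits human review
```

Reviewed items could then feed back as labelled training data, so that human judgement gradually improves the model as well as the database.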
A community- or stakeholder-centric approach prioritises the voice, rights and needs of those vulnerable to and seeking redress for (legacies) of harm, while incorporating safeguards against discrimination and misuse (Teo, 2025), and informs the co-creation process itself. Ongoing bilateral conversations will progress, in the coming months, to ethics consultations and collective design sessions to support, for example, the delineation of context-sensitive and diverse understandings of harm and redress, and the development of a corresponding search dictionary. Redress actors’ situated knowledge can also enhance model transparency and help mitigate bias in model outputs through incorporating techniques of interpretable and explainable AI in NLP that align with the call for transparency, accountability, fairness and legal compliance (EU Artificial Intelligence Act, 2024).
The participatory approach also extends to the ethical governance of RedressHub by reflecting on privacy, sensitivity and inclusivity together with stakeholders to establish guidelines for acceptable uses and boundaries for data retrieval and representation. This complements, but also goes beyond, conventional ethics and data management policies and protocols on, for example, personal data processing or data access controls.
Concluding remarks
In building RedressHub, we focus on how values, methodologies, and co-creation frameworks can guide the deployment of LLMs for information retrieval and documentation in ways that support human rights research and social value creation. RedressHub's overarching ambition is to be an actionable tool for those directly involved in the struggle for redress, as well as for a broader user community, including educators wanting to update their curricula, museums setting up restitution programmes, municipalities addressing the glorification of colonial histories in public places, legal clinics, artists, policymakers, media, and many more. Through constant conversation between the technical and co-creation tracks, we aim to ensure that RedressHub is not only functional but also contextually relevant, culturally sensitive, and attuned to the needs and experiences of the redress actors it seeks to support.
Footnotes
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the European Research Council (grant number Proof of Concept ERC-2024-POC-101212937-RedressHub).
Declaration of conflicting interests
The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: The authors are involved in the design and coordination of RedressHub at Ghent University. This disclosure is made in the interest of transparency, and the authors declare no other competing interests.
