Introduction
It is acknowledged that evaluation theory and practice can catalyse transformative change (Van den Berg et al., 2019). Given the complexity of development initiatives owing to political, socio-cultural, environmental and economic factors, Naidoo (2022, p. 17) argues that evaluation practice can no longer remain static but must be adapted by developing contextually appropriate tools and methodologies. Realist evaluation, one of the theory-driven evaluation approaches, has been lauded for its responsiveness to context and, thus, its principles and practices can potentially contribute to the decolonisation agenda (Renmans et al., 2022). However, its lack of acknowledgement of power imbalances and an overreliance on Western-based concepts thwart this potential (Renmans et al., 2022). There is, therefore, a need to explore how the realist evaluation approach can contribute to the decolonisation agenda and address inequity and socio-economic injustices in formerly colonised societies.
The realist evaluation approach is based on Pawson and Tilley's seminal work and seeks to understand what works, how, why, for whom and under what circumstances (Pawson & Tilley, 1997). Realist evaluation starts with a programme theory and ends with a refined programme theory (Pawson & Tilley, 1997). The approach uses the Context, Mechanism and Outcome (CMO) configuration as its analytical framework (Mukumbang et al., 2018a), with the aim of capturing the interaction between context, programme resources and stakeholders' reasoning that generates programme outcomes (Westhorp, 2014). Attention is specifically given to (a) the formulation of CMO configurations, which relies on multiple sources of evidence; and (b) the testing of the CMOs to identify the causal processes at play within the programme context (Wong et al., 2016). Randell et al. (2014) have described realist evaluation as methodologically pragmatic in its approach to data collection, which is characterised by the use of mixed methods. Pawson et al. (2005) encourage realist evaluators to explore mixed methods approaches but, at the same time, remain methodologically innovative. While quantitative methods may be more effective for identifying outcomes and certain aspects of context, qualitative methods can help to investigate mechanisms and identify unanticipated aspects of context and outcomes (Westhorp et al., 2011). There is also consensus among realist evaluators that realist evaluation questions should first be defined before choosing the appropriate methods (Randell et al., 2014; Rycroft-Malone et al., 2010; Westhorp et al., 2011, 2014).
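For readers less familiar with the CMO heuristic, the configuration described above can be thought of as a simple structured record. The following Python sketch is purely illustrative (the field contents are hypothetical examples, not findings from the ARISE programme) and shows how a single CMO configuration links context, mechanism and outcome to its supporting evidence:

```python
from dataclasses import dataclass

# Illustrative only: a minimal record for one CMO configuration.
# The field names mirror the realist heuristic; 'evidence' holds
# the data fragments (quotes, document excerpts) that support it.
@dataclass
class CMOConfiguration:
    context: str         # circumstances in which the programme operates
    mechanism: str       # stakeholders' reasoning/response to resources
    outcome: str         # intended or unintended result generated
    evidence: list[str]  # supporting quotes or document excerpts

# A hypothetical configuration for a research capacity programme.
cmo = CMOConfiguration(
    context="PI embedded in a well-resourced host institution",
    mechanism="confidence to pursue an independent research agenda",
    outcome="increased research leadership capacity",
    evidence=["interview excerpt", "progress report note"],
)
```

A set of such records is what the theory-testing stages of a realist evaluation iteratively refine: evidence either supports a configuration as stated or prompts a revision of one of its three elements.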
Mbava and Chapman (2020) suggested that embedding foundational monitoring and evaluation (M&E) systems and convening sector players to agree on strategic priorities, desired outcomes and outcome indicators could make realist evaluation more sensitive to African settings. According to Mbava and Chapman (2020), these two foundational stages should be completed before an evaluator can implement realist evaluation in an African setting. Their adaptation was meant to help programme designers and implementers lay the foundation for a realist evaluation by designing M&E systems with a realist lens, ensuring that the project stakeholders are involved and contextual data are collected throughout the programme period (Mbava & Chapman, 2020). However, the adapted model by Mbava and Chapman (2020) does not explicitly describe how an evaluator should actively engage with issues of power imbalance and inequities among programme stakeholders and beneficiaries. Notably, programmes implemented in Africa may not have been designed with a realist evaluation lens. Importantly, when the realist approach is used by evaluators external to a setting, it may be characterised by a lack of contextual familiarity, which might lead to loss of insights (Manzi et al., 2020; Marjanovic et al., 2017). Based on these limitations, this paper seeks to explore how the realist evaluation cycle can be implemented in a manner that adheres to the indigenous research principles.
Evaluation in African Settings
Evaluation practice in the African context, according to Abrahams et al. (2022), is a microcosm of the African development space, which is dominated by Western funding, development ideals and initiatives. With this asymmetry, the Made in Africa Evaluation principles (MAE) emerged to counter the Western epistemological dominance in evaluation theory and practice in Africa (Abrahams et al., 2022; Frehiwot, 2022). Simply, the MAE approach emerged to “address issues that mainstream evaluation approaches fail to address – both in theory and practice – including the inability to respond to cultural and contextual relevance, appropriateness of evaluation methodologies and approaches and ethics and values in evaluation” (Chilisa, 2015, p. 8).
Recent writings (Abrahams et al., 2022; Fish, 2022; Khumalo, 2022; Tirivanhu, 2022) have highlighted the need for legitimising African knowledge systems, adopting methodological approaches that borrow from African-rooted paradigms and pursuing the decoloniality agenda by interrogating power structures. Chilisa (2015, p. 17) has argued that instead of reinventing the wheel, adaptation of the existing (Western-based) evaluation theory and practice is a good strategy to ensure that evaluation practice is African driven and rooted in African values. To do this, Chilisa (2015) highlighted the need for evaluators to (a) design evaluations and evaluation tools that are sensitive to the African setting and its diversity, (b) conduct evaluations that have real value to the participants, and (c) design evaluations that give primacy to African data collection methods such as storytelling, oral traditions, music and folklore.
In 2021, the African Evaluation Association developed the MAE principles, which seek to ensure that evaluations are empowering for Africans, technically robust, ethically sound and Africa-centric (Tirivanhu, 2022). In their paper, Chilisa and Mertens (2021) describe indigenous research principles which ensure that evaluations conducted in indigenous and/or formerly colonised societies are ethically sound, rigorous and transformation-focused. These principles include:
(a) Relationality – the evaluation should consider the ability of the programme beneficiaries to prioritise their needs and judge the relevance of the programme.
(b) Responsibility – the evaluator can use their role to bring about socio-economic and environmental justice by unravelling injustices and speaking about and on behalf of the voiceless.
(c) Reverence – recognises that spirituality and values contribute to ways of knowing, and the evaluator must examine how the two intersect.
(d) Reciprocity – the evaluation must examine the value added to the beneficiaries' lives by the programme and what the beneficiaries bring to the programme.
(e) Respectful representation – the evaluation cycle should be community/beneficiary-centred, and the beneficiaries should have ownership of and access to the data.
(f) Reflexivity – the evaluator continuously reflects on their positionality and how it may affect their attitude, judgement or actions throughout the evaluation.
(g) Responsivity – the evaluator learns from the process and adapts their approaches, methodologies and tools to become context- and culturally sensitive.
(h) Rights and regulations – ethical protocols should grant programme stakeholders and beneficiaries the right to prioritise their needs, participate in the evaluation and have ownership of the data and report.
(i) Decolonisation – the evaluator challenges Western values and standards and examines power structures and dynamics in evaluations conducted in Africa.
Study Aim
This paper describes how a realist evaluation of a health research capacity strengthening programme can be carried out in a manner that respects the indigenous research principles.
Programme Context
The study will be positioned within the African Research Initiative for Scientific Excellence (ARISE) Pilot Programme, funded by the European Commission (EC) and jointly implemented by the African Academy of Sciences (AAS) and the African Union (AU). ARISE is a five-year programme (2022–2026) seeking to: (a) enhance the capabilities of emerging African research leaders committed to a research and teaching career in Africa; (b) strengthen institutional research management and support systems for pan-African research to thrive; and (c) support the generation of cutting-edge research that will contribute towards the transformation of lives in Africa.
The programme supports African early-career scientists, also referred to as Principal Investigators (PIs), who have demonstrable potential to be research leaders. The PIs are offered up to five-year research grants to establish their own research teams comprising collaborators, research assistants (i.e., PhD and master's trainees) and research support staff, whose composition depends on the requirements of the field of research and the objectives of the research project. The research capacity strengthening focuses primarily on the PI as an emerging research leader. The PI is required to attend research capacity strengthening activities such as specialised training/workshops; recruit, train and supervise master's/PhD students; participate in networking events and establish research collaborations; undertake stakeholder engagement; and promote research uptake. At the institutional level, the research capacity strengthening includes the Good Financial Grants Practice (GFGP) and the Good Research Management Practice (GRMP) assessments. These areas, together with the desired outputs and outcomes, are summarised in Figure 1 (the ARISE logic model).
Study Design and Methods
A multi-case study design will be employed. According to Koenig (2009), case study designs are useful in understanding the premise upon which an initiative is built, determining causal chains and sustaining theory building throughout the evaluation process. Given the limited available resources, three cases will be selected and studied (labelled Case A, B and C). The selection of the three cases will be based on the size of the host research institution or university and the nature of the research project. The case studies will be conducted sequentially, and each case will form a unit of analysis. Both within-case and cross-case syntheses will be conducted in line with the guidance provided by Yin (2017).
Purely qualitative methods will be employed. The indigenous research principles emphasise the use of indigenous African data collection methods such as storytelling, folklore and music (Abrahams et al., 2022; Chilisa, 2015), which are qualitative in nature. Storytelling, for instance, allows the participants to choose what aspects of their story or experience to tell. This fosters shared power between the researcher and the researched, which is critical in African settings (Rooney et al., 2016). The study will be operationalised in four stages: (a) theory development, (b) data collection, (c) data analysis and (d) theory refinement, as described in detail below.
Stage 1 – Theory Development
The initial programme theories (IPTs) will be elicited through a review of the ARISE programme documents, followed by a focus group discussion with project-level stakeholders and beneficiaries and finally, a review of published literature reporting on similar research capacity strengthening initiatives. The aim will be to formulate overarching theories on ‘what works’, ‘how’, ‘for whom’, and ‘in what circumstances’ in relation to health research capacity strengthening.
Review of ARISE Programme Documents
The ARISE programme documents (e.g., funding proposals, programme description of action, the M&E framework, and the grantees progress reports) will be reviewed to gather information about how the programme was designed and identify the underlying assumptions – if articulated. The document appraisal template by Molitor et al. (2023) will be used to extract and organise content related to context, mechanism and outcomes from each document source. Potentially, the initial programme theories will be framed around the research capacity strengthening activities (described in the Programme Context section above) namely research collaboration, mentorship, networking, student training/supervision, Good Financial and Grants Practice (GFGP) and Good Research Management Practice (GRMP) assessments, research dissemination and uptake. Once the content has been extracted from the programme documents, the evaluator will apply their own professional hunches and experience to articulate the draft programme theories. The draft programme theories will then be discussed in the stakeholder focus groups.
Stakeholder Focus Groups
Three focus group discussions will be conducted, one with each of the selected research project teams. In traditional realist evaluation, elicitation of IPTs involves talking to the programme designers and practitioners (Pawson, 2006), but in this study, the indigenous relationality principle emphasises the need to involve everyone affected by a programme throughout the evaluation process. The draft IPTs will be discussed during the focus groups with the aim of refining them. The realist interviewing technique, which uses the programme theories as the subject for discussion, will be employed to elicit stakeholders' views and comments. The goal of the realist interviewing technique is to unearth the underlying mechanisms that are triggered in specific programme contexts. For instance, some of the interview questions will be framed as follows: What is it about the ARISE [context/programme resource] that makes a difference? We theorise that the ARISE programme generates XY [outcomes] by triggering XY [mechanism]; is this a fair assessment?
While in traditional realist evaluation IPTs are refined by seeking views and inputs from the programme designers and practitioners (Pawson & Tilley, 1997), indigenous research principles emphasise the involvement of and participation by the programme beneficiaries at every stage of the evaluation. Each focus group will include 8–12 participants comprising the PI, collaborators, the PI's mentor, research support staff and master's/PhD trainees. Besides their contribution to the IPT refinement, the focus group will seek to understand the level of involvement of beneficiaries in the design of the ARISE programme and how well the programme is aligned to their research capacity needs and priorities, in line with the relationality principle. Importantly, during the focus group discussions, the existing power structures and dynamics among the funding partners, the programme stakeholders and beneficiaries will be explored (in line with the decolonisation principle), and stakeholders' and beneficiaries' input on the evaluation design and tools will be sought to ensure they are context sensitive (in line with the responsivity principle).
Literature Review
After the stakeholder focus groups, the IPTs will be further refined through a literature review process. A literature review can lead to the extraction of "nuggets" of data and insights on context, mechanism and outcomes, which can be useful in further refining the IPTs (Fick & Muhajarine, 2019). The review of relevant literature will help to explore tacit theories of health research capacity strengthening as described by other researchers and gather information on actors, contexts, mechanisms and outcomes. Using ("research capacity strengthen*" OR "research capacity build*" OR "research capacity develop*") as the key search terms, Google Scholar, Scopus, Embase and Web of Science will be searched. Only papers published in English will be included.
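The boolean search string above can be assembled programmatically, which keeps the term list auditable and reusable across databases. The sketch below is illustrative only; the truncation wildcard (*) captures morphological variants such as "strengthening" or "strengthened", although the exact wildcard syntax may need minor adjustment per database:

```python
# Illustrative sketch of assembling the boolean search string
# used across Google Scholar, Scopus, Embase and Web of Science.
terms = [
    '"research capacity strengthen*"',
    '"research capacity build*"',
    '"research capacity develop*"',
]

# Join the quoted phrases with OR and wrap in parentheses so the
# disjunction stays intact when combined with other filters
# (e.g., language or date limits) in a database search field.
query = "(" + " OR ".join(terms) + ")"
print(query)
```

Keeping the terms in a list also makes it straightforward to document any later additions to the search strategy in the evaluation's audit trail.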
Once papers have been identified, each will be read through and mined for relevant data related to the contexts, mechanisms and outcomes that could inform further refinement of the IPTs. The realist document appraisal template by Molitor et al. (2023) will be used to organise and synthesise the extracted contexts, mechanisms and outcomes, which will then be examined in the light of the draft IPTs. Besides informing the refinement of the IPTs, insights from the review will help determine the plausibility of the programme theories as articulated. This will ensure that they are evidentially sound before they are tested.
Stage 2 – Data Collection
Data collection in realist evaluation, according to Pawson and Tilley (2004), allows for interrogation of the CMO hypotheses, and the aim is to verify, validate or refute the programme theories. The realist interviewing technique will be employed to gather ontologically deep evidence on contexts, mechanisms and outcomes that would, as Mukumbang et al. (2020) suggest, clarify, modify, confirm or discredit the IPTs. The aim will be to identify if and to what extent the participants' experiences square with the programme theories. For instance, questions will be framed as: 'I think the ARISE [mentorship, networking, collaboration, etc.] works in context XY by triggering XY mechanism, thus generating XY outcomes. How true is this?' Probing questions will seek to elicit deeper information about the participants' responses by asking them to provide more information about the context, mechanism or outcomes. For instance: 'What is it about the [for example, mentorship, collaboration, networking] that makes it work to generate research capacity for you?' The goal will be to identify the actual outcomes and effects of the programme, whether intended or unintended, positive or negative. This is in line with the indigenous reciprocity principle, which emphasises that the evaluation process should be able to establish how the stakeholders and beneficiaries have responded to the programme resources and the outcomes achieved. In line with Manzano's (2016) suggestions, the interview questions will be framed around causal links within programme theories. The first two or three interview questions will seek to understand the participant's involvement in the ARISE programme; an understanding that would inform which programme activities and theories the participant is deeply involved in and thus able to share their experiences on.
Throughout the data collection process, and in line with the reflexivity principle, reflective journaling will be used to document how personal perspectives and judgements (in relation to the ARISE programme and the research capacity strengthening concept) influence or affect the way questions are posed or probed. Importantly, active learning from the data collection sessions and interactions will be pursued, informing the adaptation of the interview questions where necessary, in line with the responsivity principle. The evaluator will use their role/voice to speak up about power imbalances, social injustices and inequities that might be perpetuated, ignored or reinforced by the ARISE programme, in line with the responsibility and decolonisation principles.
Data collection will involve (a) in-depth interviews with the Principal Investigators (PIs), research collaborators, PIs' mentors and supervisors, research support staff, programme staff and partners; (b) participant observation with the PIs; and (c) storytelling sessions with the master's/PhD trainees. The case study data will be collected sequentially, starting with Case A, then Case B and finally Case C. By doing this, insights and experiences from one case study will inform the engagements in subsequent ones. Data analysis will begin during data collection, and this will involve, as Pawson (2006) posits, a continuous iterative process of identifying and placing information within wider CMO configurational maps. Starting the analysis this early will help to identify emerging data gaps that need to be addressed in subsequent interviews.
Interviews with Partners and Programme Staff
In-depth interviews with the partners and programme staff will seek to understand the partnership, the ARISE grant conditions, power dynamics, how the partners have been involved in the programme design stage, their role in the programme implementation, reported outcomes and the contextual factors that have enabled or hindered the implementation process.
Participant Observations and Interviews with the Principal Investigators (PIs)
Participant observations (of PIs) will be conducted within their institutional settings. All the PIs are fully supported by their ARISE grants and therefore, they are expected to fully devote their time towards implementation of their ARISE projects. The evaluator will spend a day with the PIs observing how they go about their day, the time they spend in teaching versus research, their interactions with research collaborators and masters/PhD trainees, their networking efforts, the available research resources (e.g., laptops, Internet, office space), how they utilise those resources, and, how these are linked to the ARISE project. Field observation notes will be taken. Once the participant observation is completed, interviews with the PIs will be carried out to clarify the observations made. The interview will also involve interrogation of the programme theories and discussing how the PIs’ experiences square with the theories.
Interviews with the PI Mentors
The ARISE programme encourages the PIs to seek mentorship support from senior researchers. The programme also matches the PIs with potential mentors in case they do not have one. Interviews with the mentors will seek to understand their experience mentoring the PIs, the research capacity outcomes achieved, what has worked well and what has not worked well, and how the context has affected or influenced the success of the mentorship relationships.
Interviews with Collaborators and Research Support Staff
Most PIs are working with other researchers (collaborators) to implement their research project activities. Interviews with the collaborators will seek to understand the programme resources provided by the ARISE, the research capacity outcomes attributable to the research collaboration, what has worked well and what has not worked well, how the context has influenced the success of the collaboration including any existing power structures and dynamics between the PIs and their collaborators. Additionally, interview sessions with the research support staff tasked with the day-to-day coordination of the research project activities (e.g., organising stakeholder engagement activities, procurement of research equipment, management of project budgets, completion of the Good Financial Grant Practice assessment) will be conducted. The research support staff will provide insights on the institutional context within which the projects are implemented and how those contextual realities moderate/mediate research capacity outcomes.
Storytelling with Masters/PhD Trainees
As each PI is required to recruit, train and supervise two PhD trainees and four master's trainees over the project period, the storytelling interviews will be aimed at documenting the experiences of the trainees with regard to training, supervision and mentorship. A researcher using a storytelling approach elicits a participant's experiences by asking them questions designed to have the participant respond in a narrative (McCall et al., 2021). The storytelling session will comprise (a) the story phase, where the participant will be given time to tell their story, and (b) the narrative follow-up phase, where clarification about the story will be sought and contexts, mechanisms and outcomes will be interrogated.
Stage 3 – Data Analysis
In line with the indigenous rights and regulations principle, participants will be given the opportunity to review their interview transcripts and edit where necessary. This process will help to validate the participant's voice and to empower the participant by giving them control over what is written (Mero-Jaffe, 2011). The interviewee transcript review is anticipated to work in the ARISE programme context because the programme participants are university-level staff, researchers or trainees. During data analysis, the evaluator will: (a) ensure that the voices of all stakeholders and beneficiaries, where applicable, are captured, including those with negative opinions about the value of the ARISE programme (relationality principle); (b) identify data on stakeholders'/beneficiaries' responses to the programme resources (e.g., intended or unintended, positive or negative outcomes) and how the programme has contributed to those outcomes (reciprocity principle); (c) identify how the participants connect spirituality and the research capacity strengthening work and how spirituality (values) manifests in or shapes the contexts, mechanisms or outcomes (reverence principle); (d) identify data that demonstrate power imbalances, social injustices and inequities and how these affect or influence (where applicable) the programme contexts, mechanisms and outcomes (decolonisation principle); and (e) transparently document how their positionality influences or affects their judgement and actions in the analysis (reflexivity principle).
Consistent with the realist approach, retroductive theorising will be applied to the various data sources. The evaluator's professional hunches and experience related to research capacity strengthening programmes in Africa will also be applied to uncover generative causation. Since the elicitation of IPTs and the refinement of theories are partially dependent on the evaluator's judgement and knowledge (Gilmore et al., 2019), reflective journaling will be used to document the decisions that are made during the analysis, thus ensuring transparency. Using the CMO heuristic tool, fragments of data that denote contexts, mechanisms and outcomes will be identified.
NVivo software will be used for data management. The IPTs will be used as the data organisation framework, and this will include (a) creating a parent node for each IPT, (b) creating a case for each of the participant descriptors and indigenous research principles and (c) populating relevant data in both the nodes and cases. Each IPT will be represented as a code (parent node), and the participant category (PI, mentor, collaborator, student, etc.) and indigenous research principles will each be represented as a case. Coding will happen whenever a CMO manifests in the data, whether new or related to any of the IPTs. Once a CMO is identified, the fragments of data representing the context, mechanism and outcome will be dragged and dropped into the respective IPT (node), or a new node created. Additionally, where the fragments of data are relevant to any of the indigenous principles, they will also be dragged and dropped into the specific case. This will likely result in overlaps between and among elements; for instance, data linked with specific IPT nodes may also be linked with one or more cases.
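The node/case structure described above can be mimicked with plain data structures to make the intended overlaps concrete. The following sketch is purely illustrative (the node and case names are hypothetical, and NVivo itself handles this internally); it shows how one data fragment can be filed under an IPT node and simultaneously under a principle case:

```python
# Purely illustrative mock-up of the NVivo organisation framework:
# one parent node per initial programme theory (IPT), and one case
# per participant category and per indigenous research principle.
nodes = {
    "IPT1_mentorship": [],
    "IPT2_collaboration": [],
}
cases = {
    "participant:PI": [],
    "principle:decolonisation": [],
}

# A hypothetical coded fragment relevant to both an IPT and a principle.
fragment = "Quote illustrating power dynamics in mentorship"
nodes["IPT1_mentorship"].append(fragment)           # coded to an IPT node
cases["principle:decolonisation"].append(fragment)  # overlapping case coding
```

The overlap is deliberate: the same fragment now sits in both a node and a case, which is exactly the cross-linking the analysis relies on when examining how the indigenous principles manifest within specific programme theories.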
Stage 4 – Theory Refinement and Reporting
Data from the empirical study will inform the confirmation, refutation or refinement of the IPTs. The refined programme theories should articulate the specific elements of contexts (C) that trigger specific motivations, incentives, reasoning or resource aspects of mechanisms (M) to generate positive or negative, intended or unintended outcomes (O) (Dalkin et al., 2015). If the identified power imbalances, social injustices and inequities have utility in the refinement of the programme theories, they will be appropriately included in the description of the refined CMO configurations. While traditional realist evaluation starts with a programme theory and ends with a refined programme theory (Pawson & Tilley, 1997), in this study, the evaluator will play an active role in initiating dialogue among partners, stakeholders and beneficiaries on how inequity and social injustice could be addressed.
In circumstances where a CMO configuration is identified in only one case, it will still be considered and presented, provided there is theoretical evidence to support the interpretation of the refined theory, as guided by Robert et al. (2019). Once all the CMO configurations are identified, the refined per-case theories will be presented and described. Since the three cases (research projects) are implemented in different countries and focused on different health research areas, the programme theories will not be consolidated, given the potential uniqueness of contexts. In contrast to Mukumbang et al. (2018b), who synthesised CMO configurations to present the bigger picture (overarching CMOs), the within-case CMO configurations will be maintained given the unique insights generated by each case, thus preserving the granularity of the programme theories. This means that even where similar mechanisms are triggered across two or three cases, the synthesis of the CMO configurations remains within-case. The refinement of the IPTs will be carried out continuously throughout the data analysis and within-case synthesis, so that multiple programme theories are refined simultaneously.
Once theory refinement is completed, a stakeholder-friendly report will be drafted and presented to the ARISE stakeholders for validation and feedback (in line with the respectful representation principle). The development of the report will be guided by Punton et al.'s (2016) recommendations on how to develop jargon-free and stakeholder-friendly realist evaluation reports. The report will include a set of recommendations for the various ARISE stakeholders and beneficiaries that, if implemented, could help optimise the impact of the programme. All the raw data will be shared with the programme partners, who possess the ownership rights (in line with the rights and regulations principle). Importantly, the evaluator will also commit to providing technical support to the ARISE programme partners and stakeholders (over a six-month period) to help them develop action plans and a measurement framework for the implementation of the recommendations. This is in line with the indigenous responsibility principle, which emphasises that the evaluation should generate not only a report but clear actionable points, with the evaluator providing requisite support post-evaluation.
Conclusion
In her synthesis paper, Chilisa (2015) argued that instead of reinventing the wheel, adapting evaluation theories and tools to ensure that they are sensitive to African settings is one way of decolonising them. This evaluation protocol describes how a realist evaluation of a health research capacity strengthening programme can be carried out in a manner that respects the indigenous research principles and is thus sensitive to indigenous or formerly colonised communities. While the realist evaluation approach seeks to systematically examine what works, how, why, for whom and in what circumstances, the indigenous research principles emphasise the need to ensure that the evaluation is conducted in an ethical manner, centres the voices of the participants, explores the connection (where possible) between the living and non-living, examines power structures, addresses power imbalances and, importantly, is useful to stakeholders. In adherence to the indigenous research principles, a realist evaluator should explore the existing power dynamics, speak up on behalf of the voiceless and initiate dialogues with programme partners and stakeholders geared towards addressing the inequity and social injustice that might be perpetuated, reinforced or ignored by the programme.
Generally, this indigenised realist evaluation is a proof of concept, and the resultant report will describe the extent to which realist evaluation adheres to indigenous research principles, as well as any tensions and incompatibilities that researchers need to be aware of. For instance, the use of the storytelling method, which is deemed an indigenous research method, in a realist study is the first of its kind; therefore, examining the (in)compatibilities between the two will be a significant methodological contribution. Importantly, a reflection on the evaluation methods and processes will yield an 'indigenous-inspired realist evaluation' framework, which could then be tested in different programmatic contexts and further refined by researchers in Africa.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the School of Management Studies, University of Cape Town.
