Abstract
The Community and Provider-driven Social Accountability Intervention (CaPSAI) Project was a complex intervention consisting of interacting components, actors and processes, implemented in Ghana and Tanzania from 2018 to 2020. It aimed to measure the impact of a social accountability intervention, implemented in family planning services, on contraceptive uptake and use. The intervention incorporated a process evaluation to identify key causal pathways and to better understand the factors influencing uptake and use in the context in which the intervention was implemented. This study used a unique methodology to conduct the process evaluation and looked at several outcomes, including service utilisation, continuation rates, and attitudes and behaviours. Qualitative methods were used to evaluate the effects of the intervention – (1) context mapping, (2) a process evaluation of the different steps of the intervention, and (3) compiling cases of change. Document review, non-participant observation and in-depth interviews were conducted. Data analysis included summaries and coding of the qualitative data. Coding and thematic content analysis were conducted collaboratively across the two countries, using a single codebook that was relevant to both countries yet enabled country-specific analysis. Each country team drafted reports that drew on the three data sets. Various lessons were learnt, including the importance of teamwork, the development of standard operating procedures to guide practical study components, and how to prioritise study activities over a long study timeline. The study demonstrated that a process evaluation of a complex intervention across multiple countries is feasible and can yield differentiated results based on the specificities of each country's context. Based on this study, similar process evaluation activities can be replicated in future work.
Introduction
There is wide recognition that health interventions are complex: they involve several components, target several types and levels of behaviour across a number of groups or settings, and require diverse expertise and skills from those delivering and receiving the intervention (Skivington et al., 2021). Process evaluation is an essential part of designing and testing such complex interventions, and these process evaluations aim to “assess fidelity and quality of implementation, clarify causal mechanisms and identify contextual factors associated with variation in outcomes” (Moore et al., 2015). They are important for exploring intervention implementation processes, the setting, contextual factors, the role and participation of different actors, the use of different structures and resources, and how these may have impacted the outcomes, results or mechanisms of change (Limbani et al., 2019; Oakley et al., 2006). Medical Research Council (MRC) guidance provides a framework for conducting process evaluation studies that includes implementation, mechanisms and context (Medical Research Council, 2006), yet how each process evaluation is designed differs.
This manuscript reports on the process evaluation surrounding an initiative to strengthen social accountability approaches in family planning services – the Community and Provider-driven Social Accountability Intervention (CaPSAI) Project. In this manuscript, social accountability is defined as a citizen-led, collective process for holding duty-bearers (including politicians, government officials and/or service providers) accountable for their actions (Joshi, 2017). Social accountability interventions are complex interventions because they use a combination of activities and aim to empower and educate community members to demand their entitlements, and to support those with the responsibility to provide services to recognize and act on these demands (McMullen et al., 2022; Steyn et al., 2020 (a)).
Specifically, the CaPSAI project was a complex intervention consisting of interacting components, actors and processes. It was implemented in Ghana and Tanzania from 2018 to 2020, and incorporated a process evaluation to identify the key causal pathways and better understand the contexts in which the intervention was implemented. The process evaluation built on the evidence on the effects of social accountability and participatory processes in the context of family planning programmes (Steyn et al., 2019). The study looked at several outcomes – service utilization, continuation rates, and attitudes and behaviours (Steyn et al., 2019; Steyn et al., 2020 (a)). As per MRC guidance on studying complex interventions (Medical Research Council, 2006), the project included a process evaluation composed of qualitative methods (Moore et al., 2014; Moore et al., 2015).
This study used a unique methodology to conduct a process evaluation of a complex intervention – in this case, social accountability in family planning services across two countries. Three methods were used to evaluate the effects: (1) context mapping, (2) a process evaluation of the different steps of the intervention, and (3) compiling cases of change. Qualitative data collection was conducted for each of these methods, including document review, non-participant observation and/or in-depth interviews (IDIs). In this paper, we describe the CaPSAI project, the design of the process evaluation, and the data collection and analysis. We also highlight how the team in this multi-country study worked together, and offer recommendations for future process evaluations of complex interventions.
The CaPSAI Project: How the Process Evaluation Fit into the Study Design
The CaPSAI project aimed to measure the impact of a social accountability intervention on contraceptive uptake and use and to understand the mechanisms and contextual factors that influenced and generated these effects (with emphasis on health services actors and community members) in selected districts in Ghana and Tanzania (Steyn et al., 2020 (a)). Contraceptive uptake was measured by the number of new users (those who had never used a family planning method, were switching to a modern method, or were restarting after not using a method for a period of six months); contraceptive use was measured in terms of continuation, method switching and informed decision making. A process evaluation was designed to document the implementation of the social accountability intervention in eight intervention sites per country. The process evaluation was intended to capture how context affects intervention implementation and outcomes, and to explore the mechanisms of impact, or how the delivered intervention could produce change.
The MRC guidance on complex interventions recommends developing a theoretical understanding of the change process based on existing evidence and theory, and then modelling the complex intervention to inform the intervention and evaluation design. The CaPSAI study design and theory of change (ToC) were informed by a review of social accountability literature by the Evidence Project (which used implementation science to pinpoint how family planning and reproductive health services can operate more effectively, equitably, and at scale) (Boydell & Keesbury, 2014; Boydell et al., 2018), and by the formative phase study of the UPTAKE Project (Cordero et al., 2019; Steyn et al., 2016).
The CaPSAI project intervention consisted of eight key steps distilled from a review of social accountability processes (Boydell & Keesbury, 2014; McMullen et al., 2022) (see Text block 1). Each step involved activities which were conducted at the intervention sites, and the different steps included various sub-steps (McMullen et al., 2022). Each country had an implementation team to deliver the intervention during the period 2018–2020. The implementation teams were local civil society organizations with previous experience implementing social accountability processes in-country that contained the eight common steps (Text block 1), which they adapted to focus on family planning and contraceptive services. During CaPSAI, they implemented their social accountability process in districts where they did not have an active programme taking place. More details of the intervention (McMullen et al., 2022; Steyn et al., 2022; Steyn et al., 2020 (a); World Health Organization, 2021) and the role of the implementation teams in CaPSAI (Cordero et al., 2022) have been published elsewhere.
Text block 1. The eight steps of the social accountability intervention*
Step 1: Introduction of the intervention to the community
Step 2: Mobilisation of participants for the intervention
Step 3: Health, rights and civic education with community participants
Step 4: Prioritisation meeting with community
Step 5: Prioritisation meeting with duty bearers
Step 6: Interface meeting and joint action planning
Step 7: Three month follow up meeting with community and duty bearers
Step 8: Six month follow up meeting with community and duty bearers
*Preparatory and post intervention follow-up activities were also documented.
To evaluate the effects of the social accountability process, a range of qualitative methods and data sources were used (Steyn et al., 2020 (a)). The CaPSAI process evaluation comprised three main components: 1) context mapping; 2) the process evaluation of intervention activities/steps (Text block 1); and 3) identification and exploration of cases of change. The process evaluation was conducted by a research team in each country, independent of the implementation teams. Each component of the process evaluation had a specific research objective corresponding to the MRC guidance that process evaluations include implementation, mechanisms and context (Medical Research Council, 2006).
• For context, context mapping was undertaken to understand the context within which CaPSAI was implemented and to gauge whether there were any existing activities that had fostered community understanding of FP, or any unsuccessful community participation activities that may have resulted in negative impressions of such interventions.
• For implementation, the process evaluation of Step 1 to Step 8 (including pre- and post-implementation activities) aimed to understand whether the intervention was delivered in the relevant quantity (dose) and as intended per protocol (fidelity); whether it reached the target audiences of healthcare providers, duty bearers, citizens and health service users (reach); and what factors facilitated or hindered the implementation of the intervention in the study intervention sites (implementation process) in each country.
• For mechanisms, the cases of change were case studies that tracked change stories; their aim was to describe the context and factors that enabled broad changes that improved the provision of FP services in intervention facilities.
Ethical and Regulatory Considerations
This study was coordinated by the Contraception and Fertility Care Unit in the Department of Sexual and Reproductive Health and Research at the World Health Organization (WHO). It was reviewed and approved by the respective ethics committees and institutional boards in the implementing countries and at WHO, including: the World Health Organization's Ethics Review Committee (A65896) and the Human Reproduction Programme (HRP) Research Project Review Panel (RP2); the Ghana Health Service Ethics Review Committee (GHS-ERC:009/08/2017) and the Population Council Institutional Review Board (exemption approval - # EX201714) for Ghana; and the Ifakara Health Institute Ethical Review Board (IHI-IRB) (IHI/IRB/No: 18–2017) and the National Institute for Medical Research (NIMR) (approval number NIMR/HQ/R.8a/Vol.IX2668) for Tanzania. In both participating countries, the Ministry of Health also provided clearance for the study to take place.
Selecting Intervention and Process Evaluation Sites for CaPSAI
In Ghana and Tanzania, districts were selected for the intervention to take place (see Figure 1 for a diagrammatic representation of the selection of sites and process evaluation activities). Districts with healthcare facilities offering FP services, and with similar cultural, religious and socio-economic contexts, were selected to enable comparisons. Healthcare facilities in the selected districts were mapped, and FP and facility data were gathered for up to 20 of these facilities. From these, eight intervention and eight control sites were identified. Eligible sites were facilities 1) that offered FP services; and 2) where methods, including barrier methods, short- and long-acting methods, emergency contraception, and at least referral for permanent methods, were available.
Figure 1. Diagrammatic representation of selection of sites and process evaluation activities.
Criteria were used to ensure that the control and intervention healthcare sites matched, and that contamination between control and intervention sites was minimised. Matching criteria included: facility type and level, average number of service users, and number of new users. Control sites were facilities that continued with usual care, while the intervention was implemented in the eight intervention sites. The implementers documented the intervention process in detail, with pre-implementation reports outlining plans for each activity per site and post-implementation reports documenting how the activities were actually conducted at each site.
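To make the matching step concrete, the sketch below shows one way paired sites could be selected on the stated criteria (facility type and level as hard constraints; user volumes as a closeness score). This is a minimal illustration only: the field names, scoring rule and exhaustive pairing search are our assumptions, not the procedure the study teams actually used.

```python
# Illustrative sketch of matching intervention and control facilities.
# Field names and the scoring rule are hypothetical, not the CaPSAI procedure.
from dataclasses import dataclass
from itertools import permutations

@dataclass
class Facility:
    name: str
    facility_type: str        # e.g., "health centre", "dispensary" (assumed categories)
    level: int                # facility level (assumed to be an ordinal code)
    avg_service_users: float  # average number of FP service users
    new_users: float          # number of new FP users

def match_score(a: Facility, b: Facility) -> float:
    """Lower is better. Type and level must match exactly (hard criteria);
    user volumes should be as close as possible (soft criteria)."""
    if a.facility_type != b.facility_type or a.level != b.level:
        return float("inf")
    return abs(a.avg_service_users - b.avg_service_users) + abs(a.new_users - b.new_users)

def pair_sites(intervention: list[Facility], control: list[Facility]):
    """Exhaustively search orderings of controls for the pairing with the
    lowest total score (feasible for eight sites per arm: 8! = 40,320)."""
    best_pairs, best_total = None, float("inf")
    for perm in permutations(control, len(intervention)):
        total = sum(match_score(i, c) for i, c in zip(intervention, perm))
        if total < best_total:
            best_pairs, best_total = list(zip(intervention, perm)), total
    return best_pairs
```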
The process evaluation of the intervention was conducted across the intervention sites in both countries. Specifically, document review of relevant documents related to the implementation of the intervention was done for all eight intervention sites. After deliberating on feasibility and budget, the project management decided to focus on only four intervention sites for a more detailed process evaluation comprising qualitative data collection activities, which included in-depth interviews with participants of the intervention activities and non-participant observations. This focus allowed for a thorough understanding of implementation in these four sites, rather than a less intensive understanding across all eight sites.
The following criteria were used to select the four sites for the in-depth qualitative process evaluation activities: at least one site should be selected at random in each of the selected districts to get a better understanding of the intervention across different subpopulations; and at least one of the sites should be in a rural setting to help assess the effect of the intervention across rural and urban areas.
The Process Evaluation Team
Each country had its own research team, independent of the implementing teams (whose role was to implement the social accountability intervention), who collected data for the various process evaluation activities so that data collection could be independent and unbiased. In Ghana, the principal investigator and co-investigator trained and monitored different sets of data collectors for the process evaluation components of the study. In Tanzania, the principal investigator trained and monitored the data collectors for the process evaluation. The structure of the CaPSAI project teams is described in more detail elsewhere (Cordero et al., 2022). A process evaluation co-ordinator from a separate institution, not involved in the intervention implementation, provided independent support to the process evaluation activities. Representatives from the UNDP-UNFPA-UNICEF-WHO-World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP) provided ongoing oversight and support to the project.
Data Sources – What was Involved in Data Collection
Three main methods were used in the process evaluation, each with its own data collection tools – document review, IDIs and non-participant observation. (The data collection instruments for the various activities are available online (Steyn et al., 2020 (b)).) The instruments were developed from data collection instruments used in the Evidence Project study in Uganda (Boydell et al., 2020), updated based on learning from the Uganda study, and then refined in consultation with both research teams.
Context Mapping
Context mapping activities were conducted in both intervention and control districts (not at facility/site level) at baseline (February to May 2018), midline (April and May 2019), and endline (March/April 2020 in Tanzania, and later, in July/August 2020, in Ghana – due to delays caused by COVID-19). The purpose of the context mapping was to understand the context in both intervention and control districts, and to become aware of and describe other FP-related initiatives and community participation programmes. The interview schedule included questions exploring the community engagement and monitoring activities that had taken place and the activities conducted to improve FP and reproductive health services. This was important in order to contextualise the implementation and effect of the study intervention. The activities were conducted at different time points to be able to explore any possible changes in context over time.
The context mapping consisted of in-depth interviews with key stakeholders, such as community members/local leaders, healthcare personnel, and ministry of health representatives, who were knowledgeable about health systems and community engagement activities. Stakeholders were sampled purposively and through snowballing. After providing written consent, stakeholders were asked about social accountability and reproductive health interventions in the region/district. The interviews were conducted in Kiswahili in Tanzania, and in English or Fante/Twi in Ghana, and were audio recorded with participants' consent. In some cases, the same person was interviewed at different time points, but new stakeholders were also identified and interviewed over time, due to transfers of or changes in staff, or stakeholder absence during the data collection period (e.g., being on leave).
Process Evaluation
Three data sources were used for the process evaluation of the pre-implementation activities, Step 1 to Step 8, and the post-intervention activities.
a. Document review: Document review consisted of the review of any relevant documentation necessary to understand the implementation process and its impacts more fully. These included documents related to the implementation of the intervention - local action plans, reports and budget commitments, training materials, and facilitators' terms of reference (McMullen et al., 2022). The pre- and post-implementation reports were drafted by the implementation team and reviewed by the research team to explore content fidelity, reach and dose (see the description of how the implementation teams documented the interventions elsewhere (McMullen et al., 2022)). Document review was conducted for all eight steps of the intervention, as well as for documents used prior to the implementation of the intervention and during follow-up activities, and was done for all eight intervention sites.
b. Non-participant observation: Activities related to the eight steps of the intervention, such as introduction and interface meetings, were observed by trained researchers. The researchers used a non-participant observation sheet which covered the layout of the activity being observed, what happened, the formal and informal protocols observed, who participated, the quality of participation, and other observations. There were numerous activities that could have been observed; the implementation team therefore provided details of the various activities to the research team, highlighting the most relevant ones. The final selection of intervention activities to be observed was made by the researchers, based on information received from the implementation team (Cordero et al., 2022). The observations were conducted only at the four process evaluation sites, and allowed for a broader understanding of the context and implementation of the intervention by capturing non-verbal social information relevant to the social accountability process. In all meetings, participants provided group consent to the observation.
c. In-depth interviews: Trained researchers conducted in-depth interviews (IDIs) with stakeholders, including healthcare providers, community members and duty bearers, who took part in the activities related to the eight steps of the intervention. These interviews were conducted at the four process evaluation sites in the predominant language of the area (Kiswahili in Tanzania, and English or Fante/Twi in Ghana), and were audio recorded with the consent of the participants. The interviews collected experiential data from actors in the social accountability intervention. The researchers used an interview topic guide to ask about the activities, namely: who participated or did not, what happened, what was discussed, how the intended audience interacted, and the results of the activities. Participants for these in-depth interviews were selected purposively and/or via snowball sampling. They were recruited directly during the activities related to the eight steps of the intervention, based on criteria including their exposure to the social accountability intervention or their role. Participants were identified during observations of the activities and were then invited by researchers to participate in an in-depth interview, if willing.
Case Studies of Change
Once the data collection activities described above were nearing completion, and after the interface meeting (Step 6 of the intervention), possible case studies of change were considered. To understand how changes occurred during the course of the intervention, retrospective case studies were compiled. The purpose of documenting these cases was to describe the context and factors that enabled the broad changes which improved FP service provision/uptake in the intervention sites. The cases of change were identified collaboratively by the research and implementation teams. The researchers identified cases during fieldwork and during data analysis of the in-depth interviews and documents. Implementation teams identified cases during their work in the field and documented these in the post-implementation reports. These cases were combined and compared to corroborate information, and were discussed during meetings between the implementation and research teams and the CaPSAI management team. The cases were categorized as cases related to FP uptake and use, cases related to the CaPSAI intervention and activities, and cases that included different types of FP outcomes, including those related to infrastructure, access, behaviour/knowledge and commodities. Following categorization, the research teams selected between five and nine cases per country for further exploration and verification.
Once the cases were selected, they were explored via audio recorded in-depth interviews with stakeholders involved in the change. The IDIs aimed to collect data on specific instances of change related to CaPSAI, to determine what factors were present and key for a change to take hold. In addition, document reviews were conducted where applicable (e.g., to verify information provided in interviews). Pre- and post-implementation plans and reports were reviewed to identify issues that were prioritized but not taken forward. For the in-depth interviews, participant selection was purposive. Implementing partners submitted lists of key persons who were both knowledgeable about and related to the particular cases of change identified. The research team then selected participants from this list, or from the facility where the change occurred, using insights related to the particular case of change as well as the data collectors' experiences in the field. These key persons were then invited and provided written informed consent to participate in the research and be audio recorded. These IDIs were conducted after Step 8, i.e., after the six-month follow-up meetings.
Data Analysis Process – How Process Evaluation Data were Analysed
To accommodate the multiple sources of data informing the process evaluation, a synthesised data analysis plan was developed. This was done to produce an overall summary of the outcomes of the process evaluation. In addition, this synthesised analysis enabled evaluation at country level that was comparable across countries.
Summary of Data and How it was Considered for Analysis
Participation of researchers and coders from the respective countries in which the data were collected ensured the analysis was embedded in the context; this enabled a detailed understanding of country-specific data and facilitated its interpretation and analysis. Furthermore, for the data to be comparable across contexts, an independent process evaluation co-ordinator assisted with and facilitated the data analysis processes. This independent co-ordination was also important for maintaining the necessary tempo, improving data reliability and reducing potential data biases.
Code List Development
The development of the code list for data coding was done collaboratively by research team members in both countries, with additional input from the process evaluation co-ordinator. It was an iterative process over several months and followed four key steps: identification of codes; creation of an initial/draft code list; testing of the code list; and finalisation of the master code list (Milford et al., 2017).
Identification of codes was done by reading a subset of transcripts and documents from the two countries and process evaluation activities (in-depth interviews, non-participant observation, context mapping and document review). Researchers from both countries and the process evaluation co-ordinator independently read the same subset of transcripts and documents. Based on these readings, preliminary codes were generated from emergent themes in the transcripts/documents, and from questions in the interview guides. They were also informed by the theory of change model that was used to measure change in this intervention (Steyn et al., 2020 (a)). Where necessary, country specific codes and codes for the different stages and activities in the intervention were included.
Preliminary code lists were drafted at country level and then these preliminary codes were consolidated across the countries into an initial/draft code list. Following this, the code list was tested. There were multiple iterations of this step, testing and revising codes and definitions, until a single master code list was decided on. This final (master) code list was agreed upon by researchers from both countries, the process evaluation co-ordinator, and representatives from WHO management (HRP). As new process evaluation activities took place, however, new iterations of the master code list were developed – codes were added, revised and updated with agreement of all team members, as necessary. A single code list was used to capture thematic areas from process evaluation, cases of change, and context mapping activities.
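As an illustration of what such a master code list can look like in practice, the sketch below represents a shared codebook that accommodates both cross-country and country-specific codes, tagged by intervention step. All code names, definitions and fields are hypothetical; the actual CaPSAI codebook is not reproduced here.

```python
# Hypothetical fragment of a shared master code list. Each code carries a
# definition, the intervention steps it applies to (empty = all steps), and
# an optional country restriction (None = used in both countries).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Code:
    name: str
    definition: str
    steps: list[int] = field(default_factory=list)  # intervention Steps 1-8
    country: Optional[str] = None

MASTER_CODE_LIST = [
    Code("mobilisation/reach",
         "Who was invited to and attended intervention activities",
         steps=[1, 2]),
    Code("implementation/barriers",
         "Factors that hindered delivery of an activity as planned"),
    Code("context/other-fp-programmes",
         "Other FP initiatives or community participation programmes in the district"),
    Code("language/local-terminology",
         "Locally specific terms requiring translation notes",
         country="Ghana"),
]

def codes_for(step: int, country: str) -> list[Code]:
    """Return the codes applicable to a given intervention step and country."""
    return [c for c in MASTER_CODE_LIST
            if (not c.steps or step in c.steps)
            and (c.country is None or c.country == country)]
```

A structure of this kind supports the iterative revisions described above: codes can be added or redefined centrally while country-specific analysis remains possible.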
Throughout the process, the iterations of the code list took into consideration the aspects of concept fidelity (intervention delivered as intended), dose (quantity of intervention implemented) and reach (whether, and how, the intended audience comes into contact with the intervention) (Moore et al., 2015) – all important for process evaluation.
Data Coding
We used a thematic content analysis approach to coding, with both inductive and a priori code identification; a priori themes were generated with input from the underlying ToC. In addition, through a constant comparison method (Boeije, 2002; Ryan & Bernard, 2003), we were able to further explore the data and allow additional themes to emerge.
The master code list was entered, and transcripts were imported, into the qualitative analysis software NVivo (version 11, QSR International), in which coding and analysis were conducted. A portion of the transcripts was double coded by independent researchers to increase data reliability: some transcripts were double coded within country teams, and some by the process evaluation co-ordinator. Where double coding revealed inconsistencies, the code list was revised and updated after discussion and agreement amongst team members. In addition, new concepts were included in the code list as new data were gathered and analysed.
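In CaPSAI, coding inconsistencies were resolved through discussion rather than by a formal statistic; nevertheless, a simple agreement check can help flag where discussion is needed. The sketch below, with entirely invented data, computes percent agreement and Cohen's kappa for one code applied (or not) by two coders across a set of transcript segments.

```python
# Hypothetical double-coding check: for a single code, each coder marks every
# transcript segment as coded (True) or not coded (False).
def agreement_stats(coder_a: list[bool], coder_b: list[bool]) -> tuple[float, float]:
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Cohen's kappa corrects observed agreement for agreement expected by chance.
    p_a, p_b = sum(coder_a) / n, sum(coder_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    kappa = (observed - expected) / (1 - expected) if expected < 1 else 1.0
    return observed, kappa

# Example: two coders applying one code to ten segments (invented data).
a = [True, True, False, True, False, False, True, True, False, True]
b = [True, False, False, True, False, True, True, True, False, True]
pct, kappa = agreement_stats(a, b)
print(f"Agreement: {pct:.0%}; Cohen's kappa: {kappa:.2f}")  # Agreement: 80%; kappa: 0.58
```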
During analysis of the process evaluation data, any changes that had occurred in the healthcare facilities or associated districts were identified and noted. In addition, changes noted in the field by researchers were highlighted. Changes included, but were not limited to, deviant cases, increased involvement of men, improved attitudes of healthcare providers, and high or low performing sites. Through identification of these cases, the process evaluation team, with input from the implementers, was able to prioritise cases for inclusion in the case studies of change component. The coding of pre- and post-implementation reports also enabled the identification of prioritized issues which were not addressed during the intervention. The broad methods of data collection and analysis for these cases of change (Table 1) enabled a story to be told, and the relationship between the change and the intervention could be explored. Furthermore, these cases of change provided an additional method of data consolidation and synthesis.
Data analysis validity was maintained by comprehensive data treatment – all data were coded for inclusion in the analysis, to reduce instances of chance results. In addition, any data that contradicted emerging conclusions, for example deviant case data, were purposefully sought out and explored in detail.
Outputs/Reporting Strategies
Each country drafted a process evaluation report. A report outline was collaboratively developed so that the reports would be similarly structured, enabling cross-country comparisons. The reports comprised three main sections, reflecting the various process evaluation activities: 1) context mapping, 2) process evaluation of the steps, and 3) cases of change.
The section on context mapping was structured to demonstrate variations in context between baseline, midline and endline. Context mapping data were presented by district/site, allowing for comparisons between areas and between control and intervention sites. The section on the process evaluation of Step 1 to Step 8 included information on pre-intervention and post-intervention follow-up. It was presented per step, summarizing the following points: 1) whether the intervention was delivered as intended and reached its target audiences, and 2) what factors facilitated or hindered the implementation of the intervention. This section provided evidence of the dose, reach and conceptual fidelity of the intervention. The final section presented the cases of change, highlighting priority changes that happened as a result of the intervention and the factors enabling or challenging each change.
The process evaluation data have been used to contextualise and understand the causal pathways explaining the main outcomes of CaPSAI (Steyn et al., 2022). Several manuscripts reporting on these findings are being written and will be published separately. (All published manuscripts related to the Community and Provider-driven Social Accountability Intervention (CaPSAI) Project are and will be uploaded to the Australian New Zealand Clinical Trials Registry (ACTRN12619000378123).)
Teamwork in Analysis
The process evaluation was implemented in 2018 and continued through 2021. Throughout this timeline, the process evaluation team met regularly, beginning with in-person country meetings where a data management plan and suggestions for teamwork were discussed. In addition, discussions were held to decide on an appropriate qualitative analysis software to be used across the different countries. Training on the chosen qualitative software (NVivo, QSR International, version 11) was conducted at the in-person meetings, to ensure that all team members were at the same level for data management and analysis processes. There were four team members from Ghana (from research assistant to site principal investigator (PI)), and two team members from Tanzania (including the site PI and co-investigator) at the initial in-person training.
Following the in-person consultation, regular, biweekly remote calls were conducted (via meeting platforms such as Skype, Zoom and Microsoft Teams). Where necessary, the frequency of these meetings was revised in line with project activities, with weekly remote meetings conducted when project activities intensified. These meetings were co-ordinated and chaired by the process evaluation co-ordinator and attended by representatives from WHO management (HRP) and by process evaluation team members from both countries, all of whom had a role in data collection, transcription, coding and analysis.
These regular meetings followed a standard (yet flexible) agenda throughout the project timeline, including team updates on process evaluation activities and timelines (e.g., data collection, transcription and translation progress, and troubleshooting of fieldwork activities), and code list development, coding and analysis updates and comparisons. The later part of each meeting included detailed coding comparisons and discussions about coding queries and suggestions. Ad hoc additional training on NVivo (QSR International, version 11) was also conducted where necessary. These regular discussions enabled the timely identification of challenges and the sharing of new coding information; they also facilitated regular reviews and updates of the codes in a collaborative manner. Towards the end of the project timeline, meeting agendas included discussions on strategies for reporting on process evaluation activities and analyses. Furthermore, there were ongoing email communications between team members throughout the project timeline.
Lessons Learned
This unique process evaluation, which drew on qualitative research methods, provides insights not only into the methods themselves but also into the processes required to support such research approaches. In this section, we reflect on the lessons we learnt about conducting process evaluations of complex interventions. First and foremost, process evaluations that look at implementation, context and mechanisms often require several concurrent methodologies, which is complex in and of itself; this is further compounded when these multiple sources have to be combined through a single analysis, often involving many different actors. Complex processes require complex methods, which in turn require supporting procedures and processes that can often be obscured. Here we draw out the learnings.
Teamwork is important – although members of the team were from multidisciplinary backgrounds and different countries and contexts, these multiple insights added richness and quality to data interpretation and analysis. Teamwork enabled the project team members to learn from each other's strengths, experiences and expertise. This was true not only for building up the teams' common understanding of process evaluation and qualitative research methodologies, but also of sexual and reproductive health in the specific cultural contexts, and of participatory processes and social accountability. However, the long timeline of the different project activities meant that there were changes in some team members at country level, and, due to the complexity of the various study activities, additional training had to be provided for new staff.
Numerous standard operating procedures (SOPs) were drafted at study initiation for multiple aspects of the project (e.g., transcription, storage of study documentation, authorship). Although time consuming, this was useful when put into practice and when needed for reference and guidance. Given the multiple in-country and cross-country teams, the creation of SOPs and the exercise of thinking through the practical aspects of a complex intervention were useful and could be applicable to future research/implementation endeavours.
Complex interventions have multiple project activities with competing priorities, and different project components are interconnected, requiring attention to linkages and co-ordination of the various data sources. The research team structures varied between the two countries. In Ghana, all project team members (including the study principal and co-principal investigators) worked on both the process evaluation and the impact evaluation, enabling team members to support each other and step in easily when a colleague was not available. In Tanzania, the co-principal investigator worked on the impact evaluation but not on the process evaluation, which meant that their understanding of this study component was limited. However, there were team members involved in the process evaluation throughout the study who had the expertise to conduct the process evaluation activities.
At a practical level, there were some challenges with the implementation of the process evaluation activities that impacted on data analysis timelines, including delays in fieldwork due to factors such as inclement weather, community-wide funerals, and COVID-19 restrictions. The various process evaluation data sources amounted to large volumes of data to be reviewed and analysed, which was time consuming and complex. However, the regular team meetings, particularly during intense study periods, were important in helping country teams to maintain focus (especially given the multiple data collection points, methodologies and large sample size).
Choosing the right software to facilitate data analysis is equally important. Inconsistencies in electricity supply and Internet capability, as well as access by multiple team members at any time, were important considerations when choosing the qualitative data analysis software. The software chosen (NVivo, QSR International, version 11) is an offline programme, meaning it does not depend on users having access to the Internet, and it has the capacity to manage large quantities of data. However, ongoing, coordinated communication between country teams was needed around sharing and transfer of the data – to coordinate database sharing, facilitate version control, and merge the latest versions of country data and analyses – especially where Internet capacity was limited.
As per the MRC guidance (Moore et al., 2015), the process evaluation required the research teams to work with the implementers. As described, the implementers collected data through the writing of pre-implementation plans and post-implementation reports for each step. They also had a role in identifying case studies of change. Working with implementers allowed the process evaluation to capture varied data on the intervention. The interaction between the research and implementation teams is further explored in a separate paper (Cordero et al., 2022).
In this paper, the CaPSAI Project team reflected on the use of the MRC guidance to conduct a process evaluation that captures the interaction between the intervention and the context (Moore et al., 2015). It is important to note that complex intervention research is an evolving field. For example, since the inception of CaPSAI, the MRC, in partnership with the National Institute for Health Research, has published an updated framework for developing and evaluating complex interventions (Skivington et al., 2021). There is increased focus on asking a broader range of questions beyond efficacy and effectiveness. Complex intervention research has increasingly integrated identifying other impacts, assessing cost-effectiveness, theorising on mechanisms of change, taking account of how interventions interact with the context, how they contribute to system change, and how the evidence can be used to support real-world decision making (Skivington et al., 2021). There is a more pluralistic approach where theory-based and systems approaches are considered alongside efficacy and effectiveness in the design phase. These developments should be considered in future studies of social accountability, in addition to the lessons learned reported here.
Conclusions
The CaPSAI project has demonstrated that it is feasible to implement a process evaluation of a complex intervention, such as social accountability, across multiple countries – it is possible to implement a unified approach across two countries and yield differentiated results based on the specificity of each country's context. The qualitative data obtained from the process evaluation activities were rich, provided detailed insight into and understanding of the intervention, and facilitated triangulation of data within the project. Based on this study, similar process evaluation activities can be replicated in future social accountability interventions and studies. Furthermore, the process evaluation enabled us to capture how context affects intervention implementation and outcomes, and to explore the mechanisms of impact, or how the delivered intervention could produce change.
Acknowledgements
The CaPSAI Project team acknowledges the support of Dr. James Kiarie (WHO) who provided conceptual input on the CaPSAI Project. The authors would like to thank the data collection teams (including Beatrice Kingu, Tusekile Mwotela, Irene Mashasi and Sigilbert Mrema from Tanzania; and Rachel Narki Anum, Kojo Mensah Sedzro, Martin Agbodzi, Seth Boateng, and Emmanuel Amevor from Ghana). In addition, we would like to thank healthcare providers from participating facilities and participants who gave time to contribute to this research.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the United States Agency for International Development (to the UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP)) and the Bill and Melinda Gates Foundation (OPP1084560).
Data access
Data gathered in the CaPSAI study are not publicly available as they contain information that could compromise research participant privacy/consent. However, some anonymised aspects of the datasets may be available upon reasonable request from the corresponding author and with permission of the Department of Sexual and Reproductive Health and Research, World Health Organization. Note that data sharing is subject to WHO data sharing policies and data use agreements with the participating research centres.
