Abstract
This exploratory paper seeks to shed light on the methodological challenges of education systems research. There is growing consensus that interventions to improve learning outcomes must be designed and studied as part of a broader system of education, and that learning outcomes are affected by a complex web of dynamics involving different inputs, actors, processes and socio-political contexts. How should researchers in comparative and international education respond to this call for complexity? To begin to answer this question, we draw on recent and ongoing research within the Raising Learning Outcomes in Education Systems research programme – a programme of 30 projects funded by the UK’s Department for International Development (DFID) and Economic and Social Research Council (ESRC). The paper explores critical ways in which the methods used by individual research projects, and across the programme as a whole, offer opportunities and raise challenges for advancing systems thinking in education research.
Introduction
Education research has seen substantial evolution, particularly over the last two decades. The availability of more and better data, and the use of credible research designs and methods, have contributed significantly to improving our understanding of key issues in education in developing country contexts. This evolution has also been accompanied by a shift in the focus of research from identifying those policies that improve children’s access to schooling to identifying those that improve the quality of their learning experience, as reflected in learning outcomes.
New evidence on learning outcomes has also generated an understanding in education research that, whilst planning for improvements in education quality will necessitate ‘more of the same’ (in the form of more spending on inputs such as building classrooms and providing more teachers), doing this alone will not be enough to achieve the improvements in quality that are urgently needed (Pritchett 2015). As a result, there is far greater emphasis on both the generation of more rigorous evidence and the use of that evidence to inform the design of interventions, programmes and policies that go beyond ‘more of the same’. Substantial effort has gone into generating high-quality data that can be used to test specific hypotheses, answer key questions and provide evidence that is objective, accessible and policy-relevant.
This changed focus, from access to learning, has also shifted the emphasis away from individual interventions and programmes to the system as a whole. With this shift in thinking comes the recognition that addressing social problems through individualised, discrete interventions does not suffice to produce desired outcomes. Adopting such a lens requires reforms that cover the full span of education provision, involve a range of actors and maintain a clear focus on the inputs, processes, people and politics which, in their entirety, will determine whether and how much children learn. Some large-scale research programmes funded by various governments and research councils across the world are testament to this evolved thinking. The Raising Learning Outcomes (RLO) research programme – a multi-country programme based on three annual research calls (2014, 2015 and 2016) across three different but complementary themes (system elements, context and dynamics) and how these impact on raising learning outcomes – presents an example of a programme adopting a specific systems framework. This framework is premised on a very real-world problem faced by policy makers: not all reforms can cover the full span of education provision. Employing the RLO framework, however, it may be possible to use the findings from individual research projects, based in different contexts and using different research methods, to adopt a systems lens. In this respect, the RLO programme offers an opportunity to test one proposed ‘systems approach’.
Adopting a systems lens, however, also implies that the use of methods and tools within disciplinary or methodological silos will not be useful and that a more mixed-methods approach, which combines approaches and tools in creative and innovative ways, is called for to answer challenging and complex systems-level questions. In this paper, we test this systems approach from a ‘methods’ lens – to what extent do the methods used by individual research projects (and the projects taken as a whole), both within each theme and across the three themes, offer an opportunity for the investigative study of education systems?
The article is structured as follows. The first section discusses the evolution of research methods in the field of education and the increased focus on systems research. This is followed by a brief discussion of the RLO programme and its proposed systems framework. This framework is ‘tested’ using a methods lens, taking examples from the various research projects funded within this programme to tease out critical ways in which the programme offers opportunities for advancing systems thinking in education research. The article concludes with thoughts on the implications and limitations of the study, and points to potential future directions for conceptual and theoretical work on systems frameworks for education research.
Evolving methods and designs in education research: adopting a systems lens
Education research in the developing world has focused to date, very broadly, on two main questions: (1) what education policies improve children’s participation in education and facilitate school completion, and (2) what education policies improve the quality of schooling available to children in these contexts? Whilst extant research has traditionally tended to focus more extensively on the former, the last few decades have seen greater attention being devoted to the latter. This shift has stemmed from the emergence of more and better-quality evidence from the developing world revealing that, whilst ever increasing numbers of children are participating in school, learning (in even the most basic literacy and numeracy competencies) is persistently low, and often declining (see the various Annual Status of Education Reports (ASERs) from India and Pakistan in South Asia and from Kenya, Tanzania and Uganda in Africa).
A range of tools has traditionally been available to education researchers to answer these questions. Quantitative researchers have used a variety of designs and methods – ranging from large-scale sample surveys to exploiting ‘natural experiments’ (such as changes in laws or policies), to creating ‘panels’ of longitudinal data by following the same unit of observation over time. Much of the growing complexity of designs and methodologies has been a direct result of the desire to obtain credible answers to key questions. This necessitates an effort to eliminate biases that arise from the researcher’s inability to observe the ‘unobservables’ which, nevertheless, determine outcomes. An example is provided by the challenges faced by education researchers attempting to measure ‘teacher quality’ in both developed and developing country contexts. Despite agreement that teacher quality matters in determining pupil outcomes, researchers have grappled with the problem of how teacher quality can be accurately measured (Aslam et al., this issue). This is mainly because teacher quality encompasses a range of competencies, attitudes, behaviours and practices, many of which cannot be easily observed or quantified, whether within the classroom or outside it. Another example is provided by studies exploring the impact of school choice on pupil learning. Whilst various tools can be used to capture and proxy for the observed characteristics of pupils and their family background (i.e. those factors that might determine whether a child studies in a fee-charging school as opposed to a state school), some factors (such as a child’s innate ability or parental motivation and attitude towards schooling) are simply unobservable or can only be proxied at best. These factors do, however, determine school choice, and failing to account for them appropriately will bias any resultant estimates.
These methodological issues plague most quantitative research interested in identifying causality in learning outcomes.
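To make the ‘unobservables’ problem concrete, the simulation below shows how omitting a variable such as innate ability, which drives both school choice and test scores, biases a naive estimate of the private-school effect. The data and effect sizes are entirely hypothetical and not drawn from any of the studies cited.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data: unobserved 'ability' drives both school choice and
# test scores. Nothing here comes from the studies cited in the text.
ability = rng.normal(size=n)                    # the 'unobservable'
private = (ability + rng.normal(size=n) > 0)    # choice depends on ability
score = 2.0 * private + 3.0 * ability + rng.normal(size=n)

def ols(y, *regressors):
    """Least-squares coefficients, with an intercept prepended."""
    X = np.column_stack((np.ones(len(y)),) + regressors)
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive_effect = ols(score, private)[1]             # omits ability: biased up
controlled_effect = ols(score, private, ability)[1]

print(f"true effect 2.0 | naive {naive_effect:.2f} | "
      f"controlled {controlled_effect:.2f}")
```

The naive regression substantially overstates the true effect of 2.0 because higher-ability children select into private schools; conditioning on the (here artificially observable) ability variable recovers it. In real data, of course, ability cannot simply be added as a regressor, which is precisely the methodological challenge the text describes.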
Qualitative research, unlike quantitative research interested in establishing cause-and-effect relationships, aims to identify the pathways through which these relationships work. In doing so, qualitative research designs explore the how and why of the relationships that quantitative research investigates in more statistical ways. Qualitative research designs, therefore, typically involve collating and analysing rich data to infer meaning and explore the mechanisms behind cause-and-effect relationships or impact. Various research methods are available to qualitative researchers, including, among others: interviews and focus groups involving a relatively small sample of respondents who can be asked questions on attitudes, behaviours and perceptions (something quantitative research finds challenging to quantify); ethnographic research, which involves the study of social interactions, behaviours and perceptions within groups or communities; and case study methods, which involve descriptive, exploratory analysis of an individual or a group. Qualitative research faces its own challenges, one of the main ones being the relatively small scale at which it is conducted, yielding smaller sample sizes that raise concerns about generalisability. Qualitative research is also subject to bias: like quantitative designs, its methods cannot observe or capture everything, and the bias from ‘unobservables’ affects all types of research.
Both research designs have seen substantial evolution over the last few decades. Central to quantitative design progression, and premised on the strong desire to ‘test’ interventions to measure their efficacy, has been the ever-widening use of randomised controlled trials (RCTs) in education research (Pampaka et al., 2016). Whilst RCTs have been extensively used as the core research method in health research, the evidence-based practice movement in education has led to their popularity increasing in this sector as well. The ability of this research method to identify ‘causal impact’ and establish cause-and-effect relationships (at least far better than other research methods) has tended to earn it the accolade of the ‘gold standard’ of experimental research. RCTs in education follow a similar design process to those in health, with ‘treatment’ being administered to a ‘treatment’ group and withheld from a ‘control’ group, and the resulting difference between the groups being assessed using inferential statistical analysis. Recent years have seen persuasive experimental work in education in developing and low-income contexts using RCTs. Studies have explored a variety of policy-relevant topics, including the impact of additional school inputs (e.g. provision of flip charts in Kenyan schools); provision of scholarships to girls; remedial tutoring for children to improve learning outcomes; teacher incentives to improve teacher effort and student learning; camera monitoring of teachers to improve accountability; provision of additional contract teachers to reduce teacher shortages or as an alternative means of hiring cheaper teachers; computer-assisted learning; and the provision of conditional cash transfers, such as those targeted at students from disadvantaged backgrounds or females, with the aim of improving choice and pupil outcomes (e.g. Glewwe et al., 2000; Kremer et al., 2009; Friedman et al., 2011; Banerjee et al., 2007).
This research has contributed extensively to the field of education, particularly in demonstrating the effectiveness of educational interventions with the ‘…insights produced by two decades of randomised evaluations … larger than the sum of the individual impact estimates’ (Ganimian, 2017).
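The core estimator behind the RCT designs described above can be sketched in a few lines: random assignment makes treatment status independent of pupil characteristics by construction, so a simple difference in group means (with a standard error for inference) identifies the average effect. The data and the five-point effect below are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2_000

# Simulated pupils: assignment to a hypothetical tutoring 'treatment' is
# random, so it is independent of every pupil characteristic by design.
treated = rng.random(n) < 0.5
baseline = rng.normal(50, 10, size=n)      # score absent any treatment
score = baseline + 5.0 * treated           # true average effect = 5 points

# Difference in group means identifies the average treatment effect;
# the standard error supports the usual inferential statistics.
diff = score[treated].mean() - score[~treated].mean()
se = np.sqrt(score[treated].var(ddof=1) / treated.sum()
             + score[~treated].var(ddof=1) / (~treated).sum())

print(f"estimated effect {diff:.2f}, standard error {se:.2f}")
```

Note the contrast with the school-choice problem discussed earlier: because assignment is random rather than self-selected, no ‘unobservable’ correlates with treatment and the naive comparison is unbiased.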
However, whilst RCTs have been popular in both research and policy circles in driving ‘evidence-based’ decisions, they have also faced strong opposition from critics (Pampaka et al., 2016). Some argue that this research method is not always suitable for education research. The design typically requires randomly assigning large numbers of individuals to treatment and control groups, which makes it a costly, time-consuming and sometimes unethical method. Moreover, because RCTs can often be conducted on only a limited sample of participants, what works for a small group of individuals or a given region may not always generalise to the entire population, raising questions about the external validity of these approaches. Researchers disagree, however, on the extent to which these limitations are real, with some arguing that RCTs in education are not narrowly restricted to impact evaluations and can help answer questions about systemic reform (Crawfurd and Sandefur, 2015). Others have criticised another aspect of RCTs: even when successful, not all interventions have been adopted by governments or taken to scale. This argument has been countered by supporters of RCTs, who have noted that most such criticisms ignore the multiple purposes of RCTs, are based on overly simplified views of how interventions should scale and fail to offer clear alternative strategies for informing system reform (Crawfurd and Sandefur, 2015).
Many critics, however, have called for a reassessment of the ‘gold standard’ nature of RCTs, noting that the use of this term undermines the value of other research methods which often greatly enhance the usefulness of RCTs when used in conjunction with them (Hanley et al., 2016). Deaton and Cartwright (2017) are also critical of RCTs and suggest that researchers have tended to place idealistic expectations on their use. More specifically, requiring ‘external validity’ from a single experiment is an unrealistic demand and whilst ‘… RCTs can play a role in building scientific knowledge … they can only do so as part of a cumulative program, combining with other methods, including conceptual and theoretical development, to discover not “what works”, but “why things work”.’ (Deaton and Cartwright 2017: 1). Hanley et al. (2016) argue similarly, giving two examples of instances in which standard RCTs in education research were supplemented by process and implementation evaluations focusing on classroom practices and feedback from teachers and pupils (one an investigation evaluating the impact of the development of science subject leader skills and expertise at the primary level and the other focusing on co-operative learning of primary maths in the UK). The authors note that adopting this more mixed approach greatly enhanced the research, making it more ‘holistic and richly interpretative’, and conclude that ‘… privilege for particular paradigms should be set aside when designing effective evaluations of education interventions, and that it is insufficient to ask “what works?” without also asking “why?”, “where?” and “how?”’ (Hanley et al., 2016: 287).
The focus on learning and seeking innovative ways to improve the quality of schooling available for all has also resulted in the search for system actions that accelerate the overall pace of progress in learning (RISE vision document 2, 2015). As a result, the focus has shifted away from individual interventions and programmes to the system as a whole. This shift has meant that public policy makers increasingly recognise that addressing social problems through discrete interventions does not necessarily produce desired outcomes. There are examples of very well-intentioned policies and interventions resulting in unexpected consequences which either become manifest in other parts of the system or address the symptoms without tackling the key problems. This has led to a drive for adopting a systems approach in education research, with ‘systems’ defined as ‘elements joined together by dynamics that produce an effect, create a whole or influence other elements within a system’ and the acknowledgement that changing the dynamics of a system is complex and requires innovative approaches to examining problems. This includes adopting ‘…bold decision making that fundamentally challenges public sector institutions’ (OECD, 2017: 5).
Looking at the system in its entirety rather than as comprising individual parts, therefore, allows a better understanding of where the greatest impact of a given change can be achieved. Adopting such a lens also requires a sharper focus on how institutions, actors and processes are organised, as well as greater alignment between actors both within and across sectors to achieve desired outcomes. Most crucially, a systems approach entails moving away from notions of reforms as ‘isolated interventions’ to a set of comprehensive changes across a range of actors, institutions and processes in various sectors all working towards a common goal (OECD, 2017). System reform within an education system, therefore, necessitates reforms covering the full span of education provision, including a range of actors (such as non-state and civil society actors in many contexts), with a clear focus on the inputs, processes, people and politics which in their entirety determine whether and how much children learn. Inputs comprise the core components that underpin the delivery of education (e.g. teachers, infrastructure, teaching and learning materials); processes focus on the ways existing resources are used (e.g. how the workforce is structured, how finances are managed); people refers to the array of individuals who are involved in a child’s learning experience either directly or indirectly (parents, teachers, community members, bureaucrats, civil society organisations, etc.); and politics concerns the overarching structure of governance and power within which the education system operates (DFID, 2018).
Systems thinking in education research builds on a substantial body of evidence. This evidence has steered researchers and policy makers towards an approach that recognises the complexity of policy making, where the effects of interventions are complex, wide-ranging and often unintended. Substantial improvements in empirical research standards have meant that education researchers have used increasingly credible approaches aimed at identifying ‘causal’ impacts of education policies on key outcomes. This has led to the creation of a large body of evidence using a variety of methodologies. Glewwe and Muralidharan (2015) summarise some of the more quantitative literature in the field. The authors note that there has been a sharp increase in the quantity and quality of empirical research in developing country contexts, facilitated by the increasing use of randomised experiments in particular to evaluate the impact of interventions, as well as the increased availability of administrative and survey data sets on education outcomes in developing country contexts. Researchers in education have also increasingly attempted to learn from the experiences of the health sector. Hanson (2015) notes that education and health face several similarities, including the delivery of services and universal commitments, and that the field of education can gain by adopting a ‘systems lens’ in the same vein as the health sector and by focusing on systems-level research rather than looking at narrowly defined questions and impact evaluations of interventions with ‘short causal chains’.
To address such complex and interdependent phenomena, many education researchers are choosing to adopt a mix of research methods that can combine ‘careful measurement with “up-close”, deep understanding of real-world contexts’ (Osborn, 2004: 265). Mixed methods have been defined as an approach that combines quantitative and qualitative research techniques, methods, approaches, concepts or language into a single study. This approach is not new to education research, but emerged in the 1980s as a ‘third wave’ that sought to move past the paradigmatic stalemate between quantitative and qualitative research techniques (Johnson and Onwuegbuzie, 2004). Both qualitative and quantitative methods have inherent strengths and weaknesses: as Taylor et al. argue in this issue, robust experimental designs are well-suited to establishing cause and effect, but cannot answer how certain outcomes were achieved; qualitative studies can shed light on questions of how, but findings are not easily extrapolated to wider contexts. In other words, ‘quantitative has a validity problem; qualitative has a reliability problem’ (Taylor et al., this issue).
Although mixed methods have become common in education research, questions remain about how qualitative and quantitative approaches can best complement each other from the design stage through to analysis. Most common is a sequential design, where qualitative and quantitative research is conducted in two separate phases. Qualitative data are either used initially to help develop quantitative instruments or used after quantitative data have been gathered in order to further explain findings (Creswell and Plano-Clark, 2011; Johnson et al., 2007). It is also possible to adopt mixed methods across the research process, with integration of qualitative and quantitative approaches occurring from research design to analysis stages. Concurrent mixed methodology may hold particular value for studying complex systems. Such an approach can help us to design research that links systems, schools and individual learners in a comparative study (Osborn, 2004) and to develop an iterative approach to implementing the project so that qualitative and quantitative data help feed into the evolving research questions and analysis (Lucero et al., 2018). Indeed, as Greene (2005; 2008) notes, by accepting all valid research traditions into their fold, mixed-methods approaches in education and social inquiry offer a more enriching and insightful understanding of the research.
Case study: RLO research programme
Systems research requires moving beyond a single research method and understanding the interplay between different elements and exploring new ways of combining different methods. It also requires being sensitive to the context, the culture and the limitations of different research designs and methods. This does not require reinventing the wheel and identifying alternatives to already credible research methods, but rather using these methods in innovative ways that will allow researchers to look at the interplay between different system components to examine existing problems.
Recent years have seen the emergence of several large-scale research projects which have advocated the use of mixed-methods designs and innovative research methods that potentially allow for a systems focus. The Research on Improving Systems of Education (RISE) programme is a large-scale, multi-country research programme funded by the UK Department for International Development (DFID) and Australian Aid. RISE is currently conducting research in Ethiopia, India, Indonesia, Pakistan, Tanzania and Vietnam. The World Bank’s Early Learning Partnership (ELP) is a similar, large-scale programme that is attempting to understand systems in early childhood development.
Another example of a systems approach in education research is provided by the Raising Learning Outcomes in Education Systems (RLO) research programme funded by DFID and the Economic and Social Research Council (ESRC) – the focus programme of this special issue. This multi-country programme involved three annual research calls (2014, 2015 and 2016) that have taken different but complementary themes within an overall focus on interactions between system elements, context and dynamics and how these impact on raising learning outcomes. Call 1 focused on effective teaching as a system element, call 2 focused on challenging contexts, and call 3 focused on the dynamic of accountability.
The RLO research programme aims to build the evidence on critical policy areas that currently constrain education systems in developing countries from translating resources into better learning for all, and ultimately positive social and economic change. A total of 30 projects from 28 research teams have been funded and the projects span a variety of contexts (e.g. Africa, Latin America and South Asia). The programme aims to enable more effective policies and interventions by providing policy makers and practitioners with concrete ideas on how to improve learning for all and an understanding of how these ideas will translate to their specific contexts and institutions. It is achieving this through funding a portfolio of research that increases understanding of how complex relationships between elements of the education system, the context in which they are embedded and the dynamics operating within the system impact on efforts to raise learning outcomes for all.
The Appendix lists the projects being carried out by the 28 RLO grant-holding teams, with a note on the research methods and designs that each employ. Seven of these have contributed articles to this special issue; many more are discussed in the following section of this article. As will become evident, all of the RLO research projects are using innovative methods and tools to generate new evidence on policy-relevant issues across multiple contexts. Some are adapting or developing new research tools, while others are applying methods from other disciplines to the study of education systems.
The RLO projects share a common concern to improve researchers’ and policy makers’ understanding of how complex relationships between inputs, processes, people and politics ultimately impact learning outcomes across a variety of contexts. The RLO visualises a systems framework as depicted in Figure 1. In this framework, learning outcomes are envisaged as being influenced by three overlapping and mutually constitutive inputs – system elements (particularly teaching and assessment), context (the political, economic and social fabric of a given locale) and the dynamics of governance (including mechanisms of accountability). This research programme provides an especially interesting case study because it imagines the individual research projects (broadly categorised into the three themes) generating data that allow policy makers to view the findings holistically and through a systems lens. In doing so, the RLO programme offers an opportunity to test one proposed systems approach (Figure 1). In this paper, we test this approach from a ‘methods’ lens – to what extent do the methods used by individual research projects (and the projects taken as a whole), both within each theme and across the three themes, offer an opportunity for the investigative study of education systems? This question is explored in the sections that follow.
Figure 1. RLO systems approach.
Testing the RLO systems framework: emerging insights
As we look at research being conducted across the RLO programme, we notice a number of themes emerging related to the challenges and opportunities for systems-focused research in education. The discussion below explores several of these themes. What is the role of mixed methods and new tools in assessing systems change? How can we develop research approaches, tools and teams that can be simultaneously contextually relevant and internationally comparable? What is the role of participatory research and co-learning within a systems approach? Can we draw together findings from RLO projects that share a regional or topical focus in order to illuminate the value of systems research at country or sub-sector level? This is by no means meant to be an exhaustive list of all the methodological issues or insights involved in systems-based research; rather, the discussion that follows is indicative of the emerging learning on methods and approaches from within the RLO programme, situated in the broader literature on international and comparative education research.
Studying systems through individual research projects: using mixed methods
A systems approach requires us to think about educational elements (teachers, curriculum, infrastructure), context (the political economy and historical trajectory of education in a given locale) and dynamics (e.g. educational governance and accountability) as an integrated system and not as individual inputs. Studying complex systems is a process-heavy approach: the researcher needs to understand the non-linear relationships between the elements of a system and how these change across time and space. The research questions need to address not just the ‘what’ of education systems, but also the ‘how’ and ‘when’ in order to shed light on the factors that enable or constrain quality, equitable learning outcomes.
Various individual research projects are exploring key hypotheses and questions using mixed-methods designs, sometimes in multiple contexts, which combine quantitative research with qualitative research. Ehren’s research in South Africa explores accountability relationships and processes in education systems that can enable or inhibit improvements in learning outcomes. The researchers use macro social network analysis combined with survey work to evaluate their hypotheses. Mixing methods in this way allows the researchers to explore their hypotheses regarding the education system in South Africa in a more nuanced way. Another example of a mixed-methods design is provided by McCowan’s research, which explores pedagogical reform to improve critical thinking in higher education in three contexts: Kenya, Ghana and Botswana (Rolleston et al., this issue). The researchers combine a longitudinal study of student outcomes with a qualitative study of how institutions can encourage processes of pedagogical change. The mixing of designs across multiple contexts allows for a cross-context analysis which is likely to provide useful policy insights. Rose’s research in India and Pakistan employs a mixed-methods design that combines large-scale school and household survey data with richer qualitative data from a small number of schools to explore effective teachers and teaching for the most disadvantaged children in rural contexts (Aslam et al., this issue). Ghosal et al.’s projects provide examples of research that combines RCT/intervention or experimental research with qualitative research, complementing cause-and-effect relationships with the how, why and where.
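The flavour of the social network analysis mentioned above can be illustrated with a toy example. The sketch below, in which the actor names and accountability ties are entirely hypothetical and not drawn from Ehren’s study, computes a simple degree-centrality measure to flag which actor sits at the hub of accountability relationships, a natural candidate for follow-up survey work.

```python
from collections import defaultdict

# Hypothetical accountability ties among education-system actors
# (illustrative only; not the actual network from the RLO project).
ties = [
    ("district_office", "school_head"),
    ("school_head", "teacher"),
    ("parent_committee", "school_head"),
    ("province", "district_office"),
    ("inspectorate", "school_head"),
]

# Degree centrality: count the ties incident to each actor.
degree = defaultdict(int)
for a, b in ties:
    degree[a] += 1
    degree[b] += 1

# The actor with the most ties is a candidate 'broker' through whom
# accountability pressure travels in this toy network.
central = max(degree, key=degree.get)
print(central, degree[central])
```

In practice such analyses use far richer relational data (and measures such as betweenness or eigenvector centrality), but the underlying idea of locating influential actors by their position in the network is the same.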
Another way in which several RLO projects are individually bettering our understanding of education systems through the methods used is by exploring the question of ‘scale’. The question of scale lies at the very heart of systems research; small-scale interventions that cannot be embedded in government structures or that cannot be cost-effectively scaled up may not offer policy solutions to constrained governments. It is also important for researchers to understand the fidelity of the implementation of interventions when undertaken by governments as compared to alternative providers (e.g. NGOs or private providers) whose incentives and system dynamics differ completely, as has been convincingly shown by Bold et al. (2016) in Kenya. Some RLO research projects are attempting to address this particular critique, mainly of first-generation RCTs, where seemingly successful interventions were not taken to scale by governments. This was partly due to the nature of the RCTs, which were implemented in relatively small convenience samples (raising questions of external validity), and often not by governments, which limited the extent to which their findings could then be effectively used by governments to scale up to entire states or countries (Ganimian, 2017).
Two examples from RLO research are especially relevant. Thornton’s work in Uganda uses an RCT design to directly explore the question of scale of a literacy programme combining teacher training, materials, assessment, literacy infrastructure and parental engagement (Kerwin and Thornton, 2018). This research uniquely evaluates an NGO-implemented model and a partially government-implemented model and compares both to a control group to understand the system dynamics underlying the scaling up of such programmes. Muralidharan’s research in Madhya Pradesh, India, embeds a randomised trial in an ambitious state-wide scale-up of a school governance intervention – the Madhya Pradesh School Quality Assessment Programme (MPQSA). In addition to using an RCT in this innovative way to address scale, Muralidharan’s research combines extensive administrative data with independent assessment data and collects detailed process metrics (e.g. on teacher absence and effort) to allow a more credible and nuanced picture of the school governance environment to emerge.
Exploring context within and across education systems
Context is central in a systems approach to education research. Our investigations centre on how actors and elements in a given context interact with each other, and what impact this has on learning outcomes in the time and place under study. Can our findings have broader applicability outside the context in which our studies are rooted? How will political and social changes in a region impact on education systems and learning outcomes? If our aim is to scale a successful education intervention, how can we maintain integrity as we shift to new contexts?
The problem of contextualisation, and how research methods can adapt to new and changing political, social and cultural contexts, is a long-standing issue in international and comparative education and has been frequently discussed in the pages of this journal. Phillips (2006) has asked what research methods are most appropriate to grapple with cultural context and to compare across these contexts. His response is to pay close attention to the historical antecedents of any particular education intervention: what we witness in a specific time and place has a historical background that needs to be taken into account in our analysis (Phillips, 2006; Crossley and Jarvis, 2001). A 2016 special issue of Research in Comparative and International Education explored the tension between comparative analyses and cultural specificity in studies of teachers and teaching. Taken together, the contributions in that issue point to the importance of qualitative research that can shed light on the everyday lived experience of classroom practices and link these to quantitative data on learning outcomes.
A deep engagement with the historical trajectories and current socio-cultural contexts of education is central to the systems framework adopted by the RLO programme. Many research teams in the programme include historical and political economy analyses as a starting point for their investigations. For example, comparative work by Zeshan et al. (2016) seeks to develop and implement a deaf-led, community-based, learner-focussed teaching programme in Ghana, Uganda and India. In each case-study country, researchers have begun with a thorough historical analysis of inclusive education policy and practice. Another research team is working to scale up and evaluate the Northern Uganda Literacy Project, a locally-owned and developed literacy intervention that was piloted as a response to curricular reforms and related educational challenges in northern Uganda (Kerwin and Thornton, 2018). Many projects within the Raising Learning Outcomes programme are comparative in design: seven of the 30 projects draw findings from two or more countries to broaden the scope of their investigations. Some of this work, such as that by Zeshan et al. (2016) described above, compares across regions by examining teaching and learning in African and Asian countries. More commonly, comparative research looks either at different areas in a single country or at neighbouring countries that share a common historical or cultural trajectory. Other research in the programme looks across geographically distant locations that nonetheless share common attributes on which to focus comparison: work by Ansell et al., for example, explores education, aspiration and learning in remote rural communities in Laos, Lesotho and India (Ansell et al., 2018).
Comparative research raises additional challenges for ‘establishing constants and contexts in educational experience’ (Osborn, 2004). Again, the field of international comparative education has long grappled with questions about how to draw conclusions on teaching and learning across similar or different contexts. Throughout the past century, much of the work of comparativists sought to draw policy lessons from one or more contexts that could be applied elsewhere – what Alexander (2000) has referred to as ‘policy-directed educational comparison’. At the same time, this body of work has acknowledged limitations in equivalence when conducting research in different contexts. These include challenges of identifying conceptual or linguistic equivalence, as well as equivalence of measurement in different settings (Osborn, 2004; Broadfoot, 2000).
These challenges of equivalence have arisen across research projects in the RLO programme; exploring how they have been met sheds some light on methodological approaches to systems research in education. Differences in language and terminology are perhaps the most obvious equivalence challenge to international and cross-national comparative work. Ansell et al. (2018) have explored how the term ‘aspiration’ is defined and understood across their three case study countries of Laos, India and Lesotho. Crucial here is that the researchers do not just offer a proximate definition for aspiration in local languages; they engage in a deep reflection of aspiration as a concept and how it is shaped by the unique culture and history of each location. They explain that aspiration ‘depends on an idea that the future can be envisaged, planned and achieved through deliberate action. Aspiration can also only exist where there are perceived choices to be made’ (Ansell et al., 2018: 9). A thick, textured analysis of cultural meaning helps us to understand the interplay of education elements, actors and dynamics within and across contexts, and seems an integral component of a systems approach to education research.
Contextualisation of research tools: relevance and reliability
Another critical way in which RLO research methods are contributing to a better understanding of education systems is through the development of new tools. Several research projects have developed new tools for exploring issues such as teaching and teacher support (e.g. Seidman, Hopfenbeck, Aker); adapted learning assessment tools (e.g. Shields, McCowan, Aber, Rose, Ghimire and Tsimpli); developed new metrics (e.g. Walker and Unterhalter); and explored how old tools can give erroneous answers and why using the right tools is important (Rose). The last is an important contribution of Rose’s work, which has shown how the questions used to measure the incidence of disability in the India and Pakistan contexts have severely underestimated the numbers of people with disability in these contexts. By implementing the Washington Group questions in their surveys, the researchers have demonstrated the true value of appropriate tools in these contexts.
Several projects in the RLO programme have developed or adapted learning and assessment tools as part of their research, raising questions about how these travel across borders. In this issue, Kim et al. note that cultural complexities in different locations limit the extent to which standards-based assessments can validly be applied outside the context in which they were designed. They have designed the Teacher Instructional Practices and Processes System (TIPPS), an observation tool that aims to understand the quality of the classroom environment, with these cultural considerations in mind. As with Ansell et al., the starting point of their work was understanding how the terminology used in their tool could be translated into Ugandan culture and language. They built in room for fluidity in language and examples in order to keep the tool adaptable and relevant to new cultural contexts.
McCowan et al. have adapted a widely-used assessment tool – in this case for critical thinking among university students – for use across different national contexts. Recognising that terminology may be understood differently in the case study countries of Kenya, Ghana and Botswana, they chose to conduct a pre-pilot of the instrument with a small sample of students in each location. This led to slightly modified wording of some questions to improve understanding and relevance in specific contexts (Rolleston et al., this issue). Similarly, a project on multilingualism and multiliteracies in India (Tsimpli et al., this issue) began with pre-pilot consultation to assess the applicability of standardised tools and tests developed mainly in Western settings. The researchers found that the tools required linguistic as well as cultural modifications to ensure that they were suitable for the Indian socio-cultural multilingual educational context.
Designing assessments and tools with some ‘wiggle room’ for contextualisation is an important way to ensure the validity and cultural sensitivity of our research and can illuminate the interplay of elements, actors and dynamics within education systems. But contextualising instruments for different locales can make data comparability and reliability a challenge. A very different approach is illustrated through the development of the Rapid Assessment of Cognitive and Emotional Regulation (RACER) assessment tool (Ford et al., this issue). RACER is a tablet-based tool that measures two executive function skills, inhibitory control and working memory, through very simple touch-based games. Instructions are visual rather than linguistic, meaning that the tool can be used across a variety of country contexts and test administration settings without variation. Work by Tsimpli et al. in this issue also uses non-verbal measures for working memory to negate linguistic differences between Hindi, Telugu and English. This approach, then, avoids the challenge of finding cultural or linguistic equivalence by employing simple signs and images that can be universally understood.
Using simple and prescriptive tools also helps ensure that reliable and comparable data can be gathered from diverse settings. Taylor et al. (this issue) note that highly-structured instruments are used in their research to encourage data reliability by reducing the potential for fieldworkers to interpret responses in different ways. The costs associated with conducting large survey and experimental research mean that fieldworkers and enumerators are not necessarily highly trained or skilled. Thus a number of research teams in the RLO programme have touched on the balancing act of developing research tools that are responsive to local culture and language (including the language and culture of local enumerators) but that are prescriptive and low-inference so as to negate issues of fieldworker skill or bias (see, e.g., Aslam et al., Kim et al. and Taylor et al., this issue).
The role of fieldworkers in the research process is perhaps an overlooked one, but the construction of strong research teams is crucial in the large-scale, mixed-methods, international and systems-focused research typical of the RLO programme. One option is for a single research fellow to support in-country teams across the study countries. This can provide coherence in data collection and comparative analysis, but does raise questions about the relative power and authority of the role as a ‘go-between’ for principal investigators and in-country researchers (Sugden and Punch, 2014). Osborn (2004), in describing the construction of a transnational research team for a study of attitudes to secondary schooling, argues for in-country teams to collaborate closely and to write up observations of countries other than their own. This can help to illuminate the complex interactions between socio-political context, institutions and individual action at the heart of a systems approach to research. The number of such multidisciplinary and multisite research projects is likely to increase in this era of impact-oriented and value-for-money research funding (Sugden and Punch, 2014), so giving more thought to how to construct robust cross-national teams should be a priority for systems-focused research.
Participation and engagement
Research across the RLO programme is seeking to engage directly with education stakeholders in order to effect genuine improvement in learning outcomes. How do participation and engagement fit within a systems approach to education research? As discussed above, the heart of a systems approach to education is illuminating the dynamic relationships among and between actors (teachers, learners, school management, policy makers, parents and carers) and inputs (textbooks, curricula) in fluid socio-political and cultural contexts. Participatory research methods call for a recognition that researchers and research outputs are part of this dynamic system, not outside looking in. Implicit in participatory research is a critique of traditional power dynamics involved in knowledge production and a recognition of all that can be gained by engaging ‘research subjects’ in the co-production of knowledge (Cornwall and Gaventa, 2001). Using a systems lens takes us further: participatory research can help us to investigate partnerships between education stakeholders (including researchers) as complex ecological phenomena in dynamic contexts (Lucero et al., 2018; Creswell and Plano Clark, 2011). Such an approach can illuminate how these contexts and relational dynamics impact research outputs and, ultimately, learning outcomes.
Within the RLO programme, stakeholder engagement and participation are particularly prominent in research projects focused on effective teaching. Many of these projects include a component of teacher or facilitator training to ensure that interventions are implemented in a way that engages teachers and learners and is relevant to local context. Work led by Seidman is illustrative of this trend (Kim et al., this issue). TIPPS is a tool designed to engage directly with teachers and trainees in Uganda, India and Ghana to facilitate reflective practice and continuous improvement. The researchers recognise the need for teachers’ professional feedback to be sustained and supported across the ecosystem of the school. As classrooms and schools are complex social settings, assessment tools require a high degree of granular contextualisation to be valid in a given setting; this requires the ‘buy-in’ and continual engagement of teachers, school leaders and other education stakeholders.
The RLO project led by Lynch underscores how participatory research can strengthen a systems approach to education research. This work explores ways to improve the quality of the early childhood curriculum and teaching methods for children with disabilities in Malawi. Alongside empirical research, the project looks at ways of supporting the skill development of pre-school carers and community workers in order to draw out lessons for inclusive early years education. Conceptually, the project draws on a bioecological systems perspective to understand how young children with disabilities are situated in a specific cultural context. The researchers’ model acknowledges that the context shifts over time and is shaped by factors inside a child’s immediate environment (home, school, community) as well as beyond (national early childhood policies and training centres, child welfare laws, etc.), and the interactions between these factors (Lynch et al., 2018; McLinden et al., 2018). By engaging and co-learning with families and carers throughout the research process, the team is able to adapt its approach as it learns to ensure that appropriate solutions are offered to improve learning outcomes for children with disabilities (McLinden et al., 2018).
Work led by Jean-Francois Trani has similarly looked at how to improve education inclusivity and equity through a participatory approach. Across the RLO portfolio, this project is perhaps truest to the roots of participatory action research: the research team invited teachers, parents, community members and children across 108 schools in Pakistan and Afghanistan to participate in sessions where they could co-develop a vision of inclusive classrooms. These sessions were designed as Group Model Building workshops, where participants’ visions would result in the development of a Community-Based System Dynamics (CBSD) protocol to map out barriers to inclusive education and to inform teacher training.
System dynamics, a mathematical modelling approach to understanding the non-linear behaviour of complex systems over time, has been used to analyse public and global health policies (Homer and Hirsch, 2006), but has rarely been used in the education field. Geurts et al. (2001) have argued that participatory systems modelling has huge potential for analysing and leveraging change in complex policy problems by identifying pressure points within a system beyond the gaze of traditional policy analysis approaches. Trani et al.’s participatory research has led to the creation of two initial causal loop diagrams for inclusive classrooms (Trani et al., this issue), which help to identify leverage points for inclusive education policies and practice. This important contribution highlights the need for more systems modelling and more participatory research approaches as part of a systems-strengthening education research agenda.
Studying systems across research projects: cross-cutting and country-level lenses on system elements, contextual factors and dynamics
The 30 projects in the RLO programme present a unique opportunity for studying systems of education geographically and thematically. In several cases, there are multiple research projects based in the same country context but exploring different issues and using a variety of research designs and methods. Synthesis and analysis across the projects provide an opportunity to examine collective research findings at the country level. For example, Seidman’s, Thornton’s and Zeshan’s work is based in Uganda. The first two research projects use experimental methods, with the former focusing on developing a new classroom observation tool to measure teacher quality and to provide feedback to teachers at the secondary level, whilst the latter explores the question of taking a comprehensive package of teacher training to scale sustainably for early grade learners. Being placed in the same context and exploring the same theme – effective teaching – these two research projects offer a unique opportunity for exploring whether aspects of education interventions aimed at effective teaching are similar or different at the two education levels. Zeshan’s work also offers the opportunity to complement the research in the Ugandan context by focusing on literacy development with deaf communities using sign language, peer tuition and learner-generated online content. Unlike the first two experimental projects, Zeshan’s work uses a case study approach, exploring the literacy needs and practices of the deaf community in Uganda. The researchers also propose using focus groups and dissemination workshops to review the tools and to discuss their possible adaptation and scalability to teaching situations in deaf communities in Sub-Saharan Africa. This research provides a lens on persons with disabilities in the Ugandan context and, taken together with the research by Seidman and Thornton, is likely to provide useful system-level insights into this context.
Another distinct way in which the research projects in the RLO programme offer an exciting systems lens is by exploring cross-cutting themes across projects and multiple contexts, and using different research designs and methods. Whilst the research projects are spread across three calls (each ensconced firmly in a theme), the RLO systems framework allows us to identify cross-cutting themes that resonate with projects across the three cohorts. These cross-cutting themes include youth-to-adult transitions and the school–employment interface; exploring and moving beyond literacy and numeracy as valued outcomes; alternative models for schools and school management; and disability as a marker for disadvantage.
Using the last theme as a more detailed example, several RLO projects have focused on intersecting disadvantage, with disability emerging as a clear cross-cutting theme explored across multiple contexts, for different age groups and at different educational levels, offering policy makers a lens on distinct policy levers. For example, Zeshan (India, Uganda and Ghana), Rose (India and Pakistan), Lynch (Malawi) and Bakshi (global) focus on disability as a disadvantage. Between them, they use a variety of methods: mixed-methods designs using action research and ethnography in the deaf community (Zeshan); large-scale household and school surveys combined with qualitative research using observations and interviews focusing on children aged 8–12 years in rural contexts (Rose); ethnographic research of ecologies of schooling and community engagement implementing a community participatory framework and focusing on people with disabilities in the early years (Lynch); and a systematic review of existing evidence to better decipher the concept of social exclusion, drawing on participatory research to determine learning for children with vulnerabilities, including those with disabilities and girls (Bakshi). These projects offer a unique opportunity to identify how disability manifests as a disadvantage and whether engaging the community (in the early years or later primary years) is an effective policy lever across different contexts.
Conclusion
This paper has explored how projects in the RLO programme contribute to our understanding of systems approaches in education. We have looked in particular at the methodological tools and designs that help us grapple with the complexity of systems thinking and the interplay of actors, inputs, context and governance in education. This discussion has also underscored a key value of large-scale research programmes such as the RLO: the whole is greater than the sum of its parts. Synthesis and analysis across projects allow us to break apart the boundaries between individual projects to illuminate collective learning. The final section of this paper has pointed to emerging cross-project knowledge generation on education systems in specific country contexts and in key thematic areas such as disability and marginalisation.
Notably, there is always a tension between what researchers propose to achieve methodologically and the challenges that subsequently arise in delivering the methodology. Whilst many researchers propose using mixed-methods research, applying it in diverse contexts and involving and engaging various stakeholders, they face numerous challenges in the day-to-day running of the programmes, and compromises are made in response to those challenges. Recognising the difference between what is proposed and what is delivered, and the multiple decisions that are made to deliver high-quality methodology, is important.
It is also important to note, as we have done in the introduction to this article, that there is limited theory underpinning systems thinking. Learning outcomes are shaped by myriad, often unobservable, social, political, economic and institutional forces. These are not static, but are in a constant state of flux as different actors vie for authority in and through education systems. To construct a ‘grand theory’ that can account for the complexity and fluidity of such a system will be challenging, but the potential rewards are great in terms of furthering our understanding of how to achieve equitable quality learning for all.
The RLO model proposes a first iteration of a systems framework with a focus on the interplay between system elements, contextual factors and governance dynamics. This paper has attempted to test this framework through a methods lens and has noted that using a variety of methods helps to illuminate different aspects of education system functioning. We have found this three-pronged approach particularly useful for highlighting the importance of context (understood in the broadest sense to incorporate historical trajectories, political economy, cultural norms, etc.) and the social and relational nature of education systems (including the central role of teachers and the potential for stakeholder participation and engagement), and how these influence learning outcomes. Although each of the RLO projects relies on conceptual underpinnings that are most suitable for answering the questions it poses, we would argue that, in the absence of a ‘grand theory’ for systems research, a systems framework such as that offered by the RLO programme provides insight into the functioning of education systems in more creative and potentially more valuable ways than have been achieved before.
Appendix
| Principle investigators | Propositions/assumptions | Research design and research methods | Data interpretation and learning | System focus | Geographical focus | |
|---|---|---|---|---|---|---|
|
|
||||||
| 1 | Kind | Teaching is improved through social and dialogical discourse and making argument explicit. | Intervention/experiment involving observing teachers receiving treatment before and after and comparing them to control group; teacher interviews; observation using a corpus of video footage of teaching strategies. |
Insights into pedagogy/teaching styles/strategies. Learning through interaction. | Secondary | Ethiopia |
| 2 | Seidman | Teachers improve teaching through feedback that draws on development of classroom observation tool. | Intervention; generation of a new classroom observation tool; corpus of video footage on teaching styles/strategies over the course of a year. Combines this with engaging with various stakeholders to facilitate buy-in and ownership through interviews and focus groups to inform them about new tool and for maximum effectiveness. |
Insights into pedagogy/teaching styles/strategies. | Secondary | Uganda |
| 3 | Thornton | Uptake, regularisation and cognitive gains in reading and writing in local languages through extension of literacy programme that involves teacher training, materials, assessment, literacy infrastructure and parental engagement. Evaluates an NGO-implemented model, a partially government implemented model and a control group to understand scale-up of programme. |
RCT; training intervention; collect a rich set of pupil, parent, teacher, classroom and school-level longitudinal data mainly through quantitative methods. |
Student learning and progression; insights into scale-up; insights into effective teaching. |
Primary | Uganda |
| 4 | Aber | Teacher learning and learners’ social and emotional learning and cognition are an outcome of approaches to teaching that are sensitive to schooling in the context of conflict and violence. | Training intervention; large-scale cluster randomised evaluation. | Insights into impact of teaching on learning in difficult contexts; student growth and emotional development. | Primary | Democratic Republic of Congo |
| 5 | McCowan | Pedagogical reform in higher education results in improved critical thinking. | Mixed-methods design involving a longitudinal study of student outcomes and a qualitative study of how institutions can encourage processes of pedagogical change. | Learning progression; institutional learning and change. | Higher Education | Kenya, Ghana and Botswana |
| 6 | Zeshan | Improved learning and social interaction in deaf communities through model of teaching. | Mixed methods from action research and ethnography. | Learning about learning in activity. | Community | India (Uganda and Ghana) |
| 7 | Rose | Learning improved through identification of teaching strategies sensitive to children facing multiple disadvantages. | Mixed methods using household survey, school survey and classroom observations combined with qualitative case study data on six schools/country. |
Insights into impact of teaching on learning in difficult contexts. | Primary | India, Pakistan |
| 8 | Lynch | Early learning sensitive to social and community-based approaches to human development. | Ethnographic study of ecologies of schooling and sociocultural and activity theory model of community engagement; designs and implements a community participatory framework. |
Learning in activity settings; interactional data | Community engagement and early childhood | Malawi |
| 9 | Murphy-Graham | Effective teaching strategies lead to sustained learning gains and development. | Mixed methods involving quantitative surveys and assessments combined with qualitative in-depth interviews, extensive classroom observations and observation of teachers’ professional standards. | Learning development across life transitions (school to adulthood). | Secondary | Honduras |
| 10 | Bakhshi | Understanding concepts of social exclusion as determinant of equality and capability. | Systematic review; secondary data analysis; participatory research | – | – | Global |
|
|
||||||
| 11 | Walker | Develop Human Capabilities Index for learning in higher education through studies of students and families in social settings. | Multi-methods data set combining quantitative longitudinal data (using existing and new large data sets) with qualitative data and participatory research. | Insights into different forms of learning in multi-layered social settings. | Higher education and social settings. | South Africa |
| 12 | Khwaja | Effectiveness of low-cost private schools will increase if constraints are alleviated and learning participation and outcomes increased as a result thereof. | RCT | Impact of social and political constraints on learning and mechanisms for their alleviation. | Primary | Pakistan |
| 13 | Islam | Improve child outcomes through testing and identification of innovations sensitive to social and educational conditions in rural Bangladesh. | RCT | Innovations and strategies that improve child outcomes. | Early Childhood | Bangladesh |
| 14 | Van Der Berg | Identify resilience and exceptionalism in schooling. | Quantitative survey; case studies. | Influence of institutional arrangements on human dispositions and growth. | Primary | Rural South Africa |
| 15 | Tsimpli | Exploring the paradox of low learning achievement in bilingual and multilingual settings. | Longitudinal survey; linguistic tasks. | Analysis of bilingualism and complex linguistic settings and their relationship to learning achievement. | Primary | India |
| 16 | Ansell | Manner in which schools shape youth aspirations. | Individual and school-based ethnographic research; interviews with stakeholders in policy community; design of questionnaire and piloting with 200 young people/country to explore how qualitative findings may be operationalised for quantitative research. |
Explore the processes through which schooling shapes young people’s aspirations, and how young people’s aspirations shape their engagement with schooling and the learning they achieve. | Lesotho, India and Laos | |
| 17 | Muralidharan | Effects of administrative arrangements on learning outcomes. | RCT; extensive administrative data analysed alongside independently administered tests and detailed process metrics (e.g. on teacher absence) collected through regular, unannounced visits. | Understanding the relationship between school inspection, testing and learning outcomes, school management and pedagogy. | Elementary and secondary | India |
| 18 | Hopfenbeck | Improved teacher numeracy skills enable better pedagogy and feedback for learners (assessment for learning). | Developing classroom materials and using them as the basis for workshops and the development of teacher–learner communities. | Understanding effects of teacher use of feedback (assessment for learning) on student numeracy outcomes. | Primary | Tanzania and South Africa |
| 19 | Aker | Mobile phones can be used in remote rural communities to monitor teacher attendance, improve community engagement and to provide more frequent ‘long-distance’ pedagogical support to teachers. | Intervention; RCT with different types of mobile monitoring of teachers and pedagogical support for an adult education programme. | Using mobile phones to improve adult learning in a difficult context. | Adult Education | Niger |
| 20 | Unterhalter | An indicator framework of gender equality in education can be developed that can be used to evaluate progress on SDG 4 targets at national and international levels. | Action research; key informant interviews. | Develop an indicator to evaluate SDG 4 targets at national and global levels. | Global | Malawi and South Africa |
| 21 | Ehren | Accountability relationships and processes in education systems can enable or inhibit the improvement of learning outcomes. | Macro social network analysis combined with survey work to evaluate these questions. | Understanding accountability relationships, system capacity and trust for improved learning outcomes. | Primary | South Africa |
| 22 | Sabates | Whether and how changes occur when school actors are supported to view their accountability as being primarily to their local community and their goal as being to raise all children’s learning. | Intervention; RCT. | Changes in school actors’ perceptions, attitudes and actions that can improve accountability for children’s learning. | Primary | India |
| 23 | Ghimire | Improve child outcomes through developing and using accountability tools and linking improvements/gains in student learning with various measures of accountability. | Longitudinal survey data; build on pre-existing 20-year panel data in an area of the country. | Develop measures of accountability and identify their association with gains in student learning outcomes. | Secondary | Nepal |
| 24 | Dyer | Analysing multi-scalar accountability relations. | Survey; narrative and visual qualitative data. | Understanding cross-scalar accountability relationships to improve learning for the disadvantaged. | Elementary | India |
| 25 | Ghosal | Pro-poor accountability frameworks (using Community Score Cards and Citizen Report Cards) and training programmes aimed at disadvantaged parents’ active participation and engagement can positively impact their beliefs and agency, improve their engagement in school management committees and improve children’s learning outcomes. | Intervention (pro-poor accountability framework and training for parents); RCT and qualitative methods. | Understanding relationship between pro-poor accountability and training programmes aimed at parents with respect to their beliefs, engagement and learning outcomes. | Primary | India |
| 26 | Sandefur | Public–private partnership (PPP) arrangements such as those in Liberia, where the government has contracted private companies to run government primary schools, offer different accountability relationships with private providers: better managerial accountability (teachers), better bottom-up accountability (private schools being more responsive to parental demands) and better top-down/results-based accountability, as contracts can be terminated by the government if providers do not meet certain conditions. | Intervention; experiment; RCT. | To provide a rigorous and independent measure of the effect of Liberia’s PPP programme. | Primary | Liberia |
| 27 | Trani | Parents and community members can play a role in improving the quality of education through innovative social accountability mechanisms. | Intervention; RCT using social accountability intervention combined with inclusive education training (parents, teachers and children). | Understanding social accountability mechanisms and their impact on learning outcomes and the psycho-social skills of disadvantaged children (particularly those with disabilities); stigma. | Primary | Afghanistan and Pakistan |
| 28 | Shields | Accountability of schools differs across school management types, and accountability is linked to differences in learning outcomes and social inclusion. | School surveys (parents, teachers, students); qualitative case study. | Understand how perceived accountability is related to models of school management, learning outcomes and social inclusion. | Primary | India and Nepal |
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported through a grant from DFID and ESRC as part of the Programme Research Lead (PRL) of the Raising Learning Outcomes in Education Systems Research Programme (RLO).
