Abstract
In this colloquium, the authors provide an update on the Organisation for Economic Co-operation and Development’s proposal for an International Early Learning and Child Well-being Study, the ‘first wave’ of which is now being implemented in three countries: England, Estonia and the USA. The authors argue that as the International Early Learning and Child Well-being Study progresses, its superficiality and pointlessness become more apparent. They also locate the International Early Learning and Child Well-being Study in a ‘global web of measurement’ centred on the Organisation for Economic Co-operation and Development, whose aim appears to be the reduction of education to a purely technical exercise of producing common outcomes measured by common indicators, with the Organisation for Economic Co-operation and Development acting as the global arbiter, assessor and governor of education. They call on the Organisation for Economic Co-operation and Development and its partners to start engaging with legitimate concerns and criticisms.
Two previous colloquia published in Contemporary Issues in Early Childhood have provided information to the early childhood community about the International Early Learning and Child Well-being Study (IELS), initiated by the Organisation for Economic Co-operation and Development (OECD), and also offered some critical perspectives on this consequential and secretive project (Moss et al., 2016; Moss and Urban, 2017). To recap, the IELS is a cross-national assessment of five-year-olds on four ‘early learning domains’ (early literacy and numeracy skills, self-regulation, and social and emotional skills). The assessment will be done using tablets, supplemented by information (individual background, home learning environment, early childhood education and care (ECEC) experience) from staff and parents using online and paper questionnaires. In this colloquium, we update earlier information to share our understanding of where the project has got to, and once again offer some critical commentary.
A main source of our information on this occasion is PowerPoint presentations by representatives of the English Department for Education, England being one of three countries participating in the first round or wave of the IELS, and of the National Foundation for Educational Research (NFER), the organisation awarded the contract to be the ‘national centre’ for the IELS in England (both presentations are available from the authors on request). These presentations were made at a public seminar on the IELS held in London in February 2018, which was organised by the Early Childhood Studies Degrees Network. We also draw on professional discussions between educators at a public seminar held in New York in April 2018, organised at and by Hunter College, City University of New York, and attended by a representative of Westat, the contractor acting as the ‘national centre’ for the IELS in the USA.
Where has the IELS got to?
Following a tendering process, the OECD chose a consortium to act as ‘international contractors’ to lead and coordinate the IELS, consisting of the Australian Council for Educational Research, the International Association for the Evaluation of Educational Achievement and cApStAn. The project is supported by an Expert Advisory Group, a Technical Advisory Group and a further group of experts; of the seven members of these groups who are not representing consortium organisations, six come from three countries – Australia, the USA and the UK – reflecting a strong ‘anglophone’ orientation to the study (English Department for Education presentation). This orientation is further emphasised by the participating countries. The OECD had originally aimed to have three to six countries in the first round of the IELS; in the event, three were recruited: England, the USA and Estonia. All but two of the 16 countries that originally showed some interest in the study, through membership of a ‘scoping group’, chose not to participate.
A ‘national centre’ has been selected to undertake the study in each participating country. In England, the centre is the NFER, described as ‘the UK’s largest independent provider of research, assessment and information services for education, training and children’s services’. 1 In the USA, the chosen centre is Westat, described as ‘an employee-owned professional services corporation’ which ‘offers innovative professional services to help clients improve outcomes in health, education, social policy, and transportation’. 2 Both organisations are heavily involved in delivering the OECD’s Programme for International Student Assessment (PISA) study in their respective countries. We have no information on which organisation is the ‘national centre’ in Estonia.
Following development and piloting work, field trials were conducted towards the end of 2017 involving 400 or more children in 30 schools in each country. The main study or ‘first wave’ is scheduled to take place in the autumn of 2018. In England, this will involve 200 schools, selected to be ‘representative of primary schools … stratified by school type, pupil deprivation and region’, with ‘15 five-year-olds randomly sampled in each school’, making a total of 3000 children tested. National and international reports are ‘expected to be published in Autumn 2019’ (NFER presentation). We have no information about how schools will be selected in Estonia or the USA.
The cost of participating in the IELS is considerable. For England, the figures quoted by the English Department for Education representative at the London seminar are £1.4 million for the work undertaken in England, plus a contribution of €2 million towards the overall international costs (approximately US$4.2 million in total). According to the FBO Daily, 3 a listing of federal contracting notices, the awarded contract for the IELS in the USA is for US$7 million; it is not clear if this figure also includes the contribution to the international costs of the study.
After the first wave
In her presentation at the London seminar, the representative of the English Department for Education described the ‘first wave’ of the IELS as ‘gaining experience to refine the study methodology and instruments’, and envisaged the possibility of ‘future waves’: ‘depending on the outcome of IELS 2018, repeat IELS [will take place] periodically in a larger number of countries’. So, despite what must have been a disappointing level of interest from OECD member states in the first wave of the IELS, the OECD will hope for better take-up in a second wave, having demonstrated to its own satisfaction the feasibility and value of cross-national assessment of five-year-olds in this initial exercise. This outlook is confirmed by Westat, the US contractor, which states on its website that ‘the U.S. pilot study of IELS [is being implemented] to see if its model can be expanded to other countries’ (https://www.westat.com/project/implementing-pilot-study-how-childrens-early-learning-can-be-developed). Attempts to recruit more countries for future participation will surely follow.
Further critical perspectives
There has been substantial criticism of the IELS since information about it first began to leak into the public domain in 2016 (to those cited in our two earlier colloquia, we can add two US pieces: Wasmuth (2017) and Defending the Early Years (2018)). We note that neither the OECD, nor any of the three participating countries, nor any of the organisations contracted to implement the IELS has offered any response to these criticisms; in our opinion, this silence reflects a disdain for the early childhood community, symptomatic of an ingrained culture of secrecy and of indifference to the wider educational community. This is exemplified in the case of England by the government establishing a National Advisory Committee ‘to provide independent advice, guidance and constructive feedback to DfE [the Department for Education] and NFER’ – but only doing so after the key decision to participate in the IELS had been taken, a decision made without any ‘independent advice’ or indeed any consultation with the early childhood community. It should be noted that both of the recent public events referred to above – in England and the USA – occurred only after work on implementing the IELS had begun, and both were initiated by concerned organisations.
As the IELS progresses and we come to know more about it, we are increasingly struck by its superficiality and pointlessness. Its superficiality is exemplified by the study’s lack of interest in context. The IELS claims to collect ‘contextual information’ but, on closer examination, this information turns out to be confined to some background information about individual children (e.g. age, gender, family composition, home learning environment and ECEC experience). There is nothing on the wider political, social, cultural and pedagogical context in the participating countries, essential though this is to gaining any understanding of early childhood education or, indeed, to interpreting test results with any degree of insight.
Take just three glaring examples of context that are ignored by the IELS: (1) five-year-old children in England are already in compulsory primary education, while in Estonia they are still in ECEC; (2) England is a highly centralised state, with a national curriculum and inspectorate, while the USA is a federal state, with extensive state-level responsibility for education; and (3) England and the USA are usually defined as liberal welfare regimes, while Estonia is defined as a post-communist welfare regime. These are only the most obvious examples of diverse context, not touching on more complex issues such as the educational purpose, values and understandings that find expression in the curriculum, pedagogy and workforce.
In the approach adopted by the OECD, the IELS shows the dangers of an unquestioned commitment to such decontextualised quantification – dangers alluded to by Jerry Muller in his recent book The Tyranny of Metrics:

Quantification is seductive, because it organizes and simplifies knowledge. It offers numerical information that allows for easy comparison. But that simplification may lead to distortion, since making things comparable often means they are stripped of their context, history, and meaning. (Muller, 2017: 24)
Or, to recall the words of Loris Malaguzzi: ‘Anglo-Saxon “testology” … is nothing but a ridiculous simplification of knowledge, and a robbing of meaning from individual histories’ (Malaguzzi, 1990, quoted in Cagliari et al., 2016: 378).
Lack of attention to context contributes not only to superficiality and simplification, but also to the pointlessness of the IELS. This becomes particularly apparent when we consider the participating countries. Serious comparative research requires the careful selection of the countries to be compared, with a clear rationale that will enable insights into the subject of the research. The countries in the IELS, by contrast, have not been so selected, but are included because they have volunteered (and are prepared to pay) to participate. There is no theoretical rationale, no rhyme or reason for the inclusion of Estonia, England and the USA, and consequently no point to comparing them. To what substantive questions can this decontextualised comparison of test scores from an arbitrary collection of three countries provide answers? What policy issues can the study illuminate? What can anyone learn from this? So, from a comparative perspective, the IELS makes no sense at all – except as a precursor exercise, designed to promote a much larger second-wave study.
A global web of measurement
Giving the George F Kneller lecture to the 2017 annual conference of the Comparative and International Education Society, Professor Antonio Novoa (2017), a comparative educationalist from the University of Lisbon, argues that ‘instead of turning comparison into a “mode of governance”, we can value and broaden a public space for discussion and deliberation’. In other words, comparative work should not be telling us what to do, should not be offering a spurious ‘best practice’. Rather, it should provide us with opportunities to think and question, to contest and argue; it should provide a stimulus to democratic dialogue about possible futures. The OECD could have chosen to provide such a ‘public space for discussion and deliberation’ about ECEC, as indeed it did with its landmark studies Starting Strong I and Starting Strong II (OECD, 2001, 2006); instead, it has chosen to develop comparison as a ‘mode of governance’, seeking to give itself a pre-eminent position as education’s global governor.
This will to govern becomes more apparent if we consider the wider context of the OECD’s growing global web of measurement, of which the IELS is (for the moment) just one small part. At the centre of the web is PISA, the well-established and well-known international assessment of 15-year-olds now spanning over 70 countries and regions. PISA has, in turn, spawned the ‘PISA-based Test for Schools’, ‘a student assessment tool geared for use by schools and networks of schools to support research, benchmarking and school improvement efforts’, 4 and ‘PISA for Development’, ‘which further develops and differentiates the PISA data-collection instruments to produce results that better support evidence-based policy making in middle- and low-income countries’. 5
Then, moving up the education age range, there is the ‘Assessment of Higher Education Learning Outcomes’, ‘a feasibility study for assessment of higher education outcomes that will allow comparison between higher education institutions across countries’. 6 Nor does the OECD stop at higher education. There is also a ‘Survey of Adult Skills’, conducted in more than 40 countries, which ‘measures adults’ proficiency in key information-processing skills – literacy, numeracy and problem solving in technology-rich environments – and gathers information and data on how adults use their skills at home, at work and in the wider community’. 7
And finally, at least for the moment, coming down the road is ‘The Study on Social and Emotional Skills’, another international survey that ‘assesses 10 and 15 year old students in a number of cities and countries around the world, identifying the conditions and practices that foster or hinder the development of these critical skills’. 8 A field trial is due this year (2018), with the main study in 2019.
Ben Williamson, of the University of Stirling, argues:

[The OECD] is seeking to measure student personality to gather policy-relevant insights for participating countries. The inevitable consequence in countries with disappointing results will be new policies and interventions to improve students’ personalities to ensure competitiveness in the global race. Just as PISA has influenced a global market in products to support the skills tested by the assessment, the same is now occurring around social-emotional learning and personality development. (Williamson, 2018; our emphasis)
We find the IELS problematic and disturbing on its own, but far more problematic and disturbing when seen as part of this spreading global web of measurement. The aim of this web appears to be the reduction of education to a purely technical exercise of producing common outcomes measured by common indicators, with the OECD acting as the global arbiter of what those outcomes and indicators should be, and as the global assessor of national performance (supported by a narrow coterie of experts and organisations, and strongly influenced by the thinking and language of the anglophone world). But the OECD is not just the global arbiter and assessor; it is also the global governor of education, governing the school and the classroom and, through its work on social and emotional skills, increasingly, in the evocative words of Nikolas Rose, ‘governing the soul’ (Rose, 1990). Appearances may be mistaken, but to prove they are, the OECD and its partners must really start engaging with legitimate concerns and criticisms.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
