Abstract
This paper examines the rise of value-added as a measure of quality in education. As a point of departure, the paper begins with an analysis of the rise of the concept of quality in education and discusses how, at times, various contradictory determinants of quality have managed to influence the evaluation and assessment frameworks of most countries. Leading on from this, the second part of the paper provides a discussion on the use of value-added as a determinant of quality in education. Finally, the study concludes with a discussion on the challenges relating to the introduction of value-added into the Irish education system, a development which will, arguably, become a contentious educational reform initiative within the future landscape of Irish education.
Introduction and background
Interest in the quality of education provided in schools is now intense. Old-world economic activities, such as labour and manufacturing, are being replaced by newer key determinants of economic success, such as knowledge and innovation. Indeed, it would be reasonable to suggest that among most stakeholders involved in education, ‘improving the micro-efficiency of the schools has come to be seen as a vehicle for addressing some of the macro problems of the state and society’ (Riley, 2000: 29). The increased interest in the quality of education may also be attributed to the view that ‘only when the results will be costly in financial or personal terms, our reflectiveness is proportional to the importance of the issue’ (Guthrie, 1984: 790). As a result of the increasingly pressing need to improve the ‘micro-efficiency’ of the school, various methodologies and frameworks have been proposed to ascertain and improve the quality of education provided. For example, for a variety of reasons, such as advances in technology and statistical modelling (see Conway and Murphy, 2013; Sloane et al., 2013), and in parallel with other modes of evaluation such as school self-evaluation, the appeal of value-added as a measure of quality has gained momentum in most OECD countries (see Nusche et al., 2013).
Value-added models are concerned with the effect that a school and/or teacher has on a student's progress. In other words, according to Peng and Klieme (2014: 1), ‘The value-added school effects are defined as the “net” contribution of a school to students’ learning after sorting out the impact of other factors’. The impact of these ‘other factors’ specifically relates to the effect that, for example, socioeconomic circumstances and student prior attainment have on value-added test scores. As asserted by He and Tymms (2014: 26): ‘Test scores reflect the combined influences of a number of factors such as the learning environment in the school, the socioeconomic background of the students, the student’s attitudes towards study, the academic achievement attained before entering the school, and many others’.
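The logic of ‘sorting out’ the impact of other factors can be illustrated with a deliberately simplified sketch. All of the data, the three schools and the assumed ‘true’ school effects below are invented for illustration; operational value-added models are considerably more sophisticated than a single pooled regression on prior attainment.

```python
import numpy as np

# Illustrative toy data: 300 students across three hypothetical schools.
rng = np.random.default_rng(0)
n = 300
school = rng.integers(0, 3, size=n)           # school ids 0, 1, 2
prior = rng.normal(100, 15, size=n)           # prior attainment scores
school_effect = np.array([-2.0, 0.0, 3.0])    # assumed "true" net contributions
current = 20 + 0.8 * prior + school_effect[school] + rng.normal(0, 5, size=n)

# Step 1: regress current scores on prior attainment (pooled OLS).
X = np.column_stack([np.ones(n), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
expected = X @ beta

# Step 2: a school's value-added estimate is the mean residual of its
# students -- what remains after prior attainment is "sorted out".
residual = current - expected
value_added = np.array([residual[school == s].mean() for s in range(3)])
print(value_added)
```

With the invented effects above, the estimated values broadly recover the −2 / 0 / +3 pattern, because prior attainment was the only confounding input in the toy data; the paper's later discussion shows why real intakes are far less tidy.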
There are, however, various interpretations of the meaning of the term ‘value-added’. Indeed, according to Saunders (1999: 234), ‘Sometimes value added seems to mean whatever the writer/speaker chooses it to mean; and often it is seen as a question of measurement only, as distinct from judgement’. This is no surprise given the different types of ‘value-added’ models that place varying emphases on factors, such as socioeconomic circumstances, that can have an effect on student performance (see Thomas and Mortimore, 1996; Timmermans et al., 2011; van de Grift, 2009).
However, regardless of the various value-added models that exist, the increased interest in value-added can largely be attributed to the globalisation of education in the form of comparative education studies, what Arnove (2012) refers to as ‘educational borrowing’. The benefits of comparative education research stem from the belief that this field of educational research has the potential to contribute to ‘the improvement of educational policy and practice world-wide and advances in theoretical work relating specifically to education and to the social sciences more generally’ (Crossley, 1999: 249). Further to the point made by Crossley (1999), ‘another reason for studying other societies’ education systems is to discover what can be learned that will contribute to improved policy and practice at home’ (Arnove, 2012: 6). Certainly, the use of ‘educational borrowing’ has resulted in policy makers and other agents of change either developing or adapting elaborate assessment and evaluation systems where ‘the government of country x explicitly “borrows” policy y from country z, legitimising it with reference to the attractiveness of country z, and playing on the desire for externalisation in a globalising world’ (Crossley and Schweisfurth, 2009: 457). One such example of ‘educational borrowing’ can be seen in the Department of Education and Skills in Ireland (DES) draft strategy to improve literacy and numeracy standards that proposed to use value-added as a mode of evaluation. In this document, a benchmarking data analysis tool referred to as ‘Schools Like Ours’ is prescribed as allowing a school to ‘have access to its own data as well as the data from the “matched” schools’ (DES, 2010: 41). In 2007, the Literacy and Numeracy Secretariat of the Ontario Ministry of Education developed a similar benchmarking module called ‘Schools Like Ours’. 
Its purpose is also to ‘find similar schools to any selected school’ (Ontario Ministry of Education, 2007: 4) using any combination of the available indicators such as similar demographics but higher achievements.
The use of value-added is appealing for a variety of reasons, but perhaps chiefly because it reduces the component of error in comparison with raw-score comparative data. Furthermore, with the facility to numerically track student attainment of skills and knowledge at regular fixed points in time, the collation of vast amounts of data can be used to develop specific targets, which in turn can be linked to improved teaching and learning strategies. The rise of value-added may also be attributed to the need for a more robust form of accountability which other modes of school accountability, such as school inspection and self-evaluation, are unable to provide. For example, Borooah and Knox (2013: 19) make the following observation in reference to school evaluation in Northern Ireland: ‘there needs to be a shift in focus within inspections to the value which schools add to pupils’ learning rather than a reliance on self-evaluation and improvement, a system which lacks incentives or punitive measures for poorly performing schools’. This perspective resonates with Danielson and Ferguson (2014: 101), who state: ‘Many analysts prefer value added for measuring teacher effectiveness because, if implemented properly, value added approximates a condition in which there is no difference across classrooms in the characteristics of the students. Hence, value added for any particular teacher is an estimate of how much that teacher adds to students’ skills and knowledge’. Researchers have long advocated greater reliance on so-called ‘value-added’ measures that seek to adjust for prior achievement, intake and other school and student factors, but there has been a reluctance to embrace them, first because of a commitment to the notion that all should be assessed against the same standard, and second because value-added indices are inherently complex and difficult to grasp for those lacking an understanding of the underlying statistical manipulations.
High-stakes testing has radically altered the kind of instruction that is offered in American schools, to the point that “teaching to the test” has become a prominent part of the nation’s educational landscape. Teachers often feel obliged to set aside other subjects for days, weeks, or (particularly in schools serving low-income students) even months at a time in order to devote to boosting students’ test scores. Indeed, both the content and the format of instruction are affected; the test essentially becomes the curriculum.
A further interlinked issue to that of teaching to the test concerns a narrowing of the curriculum to obtain higher performance measures in subjects that are being tested using value-added measures (see O’Day, 2002; OECD, 2008; Rothstein et al., 2008; Sanders, 2000). For example, in the case of Sanders’ (2000) research on value-added testing systems in North America, it was found that in some states, the amount of class time allocated to mathematics and reading was dependent on the value-added test taken during a particular year. It is no surprise therefore that, at 4th-grade level, the concentration of class time to reading resulted in a value-added test score of ‘outstanding’. On the other hand, ‘the 4th grade gain in Math was near zero (based upon traditional achievement test results)’ (Sanders, 2000: 42). Ironically, however, at the 5th-grade level, the reverse occurred when there was a significant increase in the class time allocated to mathematics, resulting in significantly positive value-added test results. Sanders (2000: 337) asserts the following: ‘No surprise, since in this state, the statewide tests in Reading are in the 4th grade and the statewide tests in Math are in 5th’.
Indeed, given that most countries use value-added testing for a limited number of subjects such as English, Mathematics and Science, OECD (2008: 42) make the point that ‘School principals and teachers therefore have an incentive to focus more heavily on the subjects included in the performance measurement’.
Finally, even if mechanisms are put in place to reduce the adverse effects of narrowing the curriculum and teaching to the test, there are also concerns relating to the impact of, for example, socioeconomic circumstances and student prior attainment on value-added test scores. On the one hand, Muñoz-Chereau and Thomas (2016: 47) recommend that ‘for a policy-oriented to support accountability mechanisms with high-stakes consequences for schools, a CVA [contextual – value added] model that adjusts for student background factors and compositional or context effects, outside of the control of school, as well as prior attainment may be most appropriate’. On the other hand, at a system level, uncertainty relating to the utility of adjusting value-added raw scores has arisen even in countries that have already implemented this mode of evaluation. For example, in 2011, England removed contextual value-added indicators from its evaluation criteria for a variety of reasons, as alluded to by the chief inspector: ‘The government was right to drop the “contextual value-added” measure, introduced by Labour five years ago, from this week's league tables, he says. Talk about social factors simply “entrenches mediocrity”’ (Abrams, 2012). More recently (2013), however, the education secretary for England stated: ‘I agree it is a good thing to have a value-added measure that takes account of socio-economic background’ (cited in Stewart, 2013). From these statements, one can easily see how uncertainty and inevitable tensions arise where the use of value-added as a measure of quality is concerned.
However, regardless of the contesting views on the use of value-added indicators, in the case of Ireland it is apparent that this is the clear direction of policy. The introduction of standardised testing at primary level and, between 2014 and 2017, at lower secondary level in mathematics, English and science means that the scaffolding for value-added has been put in place.
To tease out these conflicting arguments the purpose of this paper is to examine the rise of value-added as a measure of quality in education and in particular as it relates to Irish education. As a point of departure, the paper begins with an analysis of the rise of the concept of quality in education and discusses how, at times, various contradictory determinants of quality have managed to influence the evaluation and assessment frameworks of most countries. Leading on from this, the second part of the paper provides a discussion on the use of value-added as a determinant of quality in education. Finally, although Hislop (2013: 6) is correct in stating: ‘sometimes, fundamental change is necessary but, of course, radical change is much less likely to emerge from a consensus-led approach’, the study concludes with a discussion on the challenges relating to the introduction of value-added assessment into the Irish education system, a development which will arguably become a contentious educational reform initiative within the future landscape of Irish education.
Quality and the acculturation of education
Since the 1950s, interest in the quality of education provided by schools has increased considerably. It is no surprise, therefore, that ‘quality as a concern has dominated the educational debates triggered and sustained by international aid and cooperation, and by the ethos of economic globalisation’ (Kumar, 2010: 8). The increased interest in the quality of education provided in schools is in part driven by marketplace demands in which ‘globalisation has increased international competition and boosted the demand for quality education and school accountability’ (Wong and Li, 2010: 206). The quality improvement agenda is also fuelled by an assemblage of stakeholders who Bangs et al. (2010: 9) refer to as ‘the architects of policy (mainly politicians, senior civil servants and advisers), the critics (mainly academics …) and the prophets (those whose ideas challenge existing ideas and led to new policy initiatives)’.
Unsurprisingly, because of the broad spectrum of stakeholders with diverse expectations of what actually constitutes educational quality, there is no agreed-upon definition of quality in education. Similarly, there is little consensus on the most suitable quality improvement frameworks necessary to ascertain and improve the educational quality of schools. One possible reason for the enigmatic nature of quality is that it, in itself, is a dynamic idea, which resides at the very core of educational provision, expanding at different rates through various influential lenses within the system. In other words, quality, by nature, is dynamic, a reflexive human condition. As Pirsig (1991: 119) states: ‘dynamic quality is the pre-intellectual cutting edge of reality, the source of all things, completely simple and always new’. However, Sallis (2002: 11) also warns that ‘there is the danger that much of the vitality of the concept can be lost if it is subjected to too much academic analysis’. This type of over-analysis, according to Doherty (2008: 256), ‘is all good knock-about fun. Sadly, however, the “quality issue” is more than an academic argument about definitions of meaning’.
Nonetheless, we cannot get away from the fact that, as Leu (2005: 4) suggests, ‘the argument can be made that education systems are always structured around a vision of quality’, resulting in a need for a description of quality as it applies to educational assessment and evaluation. Indeed, Kumar (2010: 8) states that quality can have two meanings: the first is ‘the essential attribute with which something may be identified’ (e.g. a school’s ethos) and the second is the ‘rank of, or superiority of one thing over another’ (e.g. school league tables). Furthermore, Creemers (1996: 23) states that ‘The quality of a school is the average score on an output measure corrected for input characteristics, thereby indicating the “value added” by the school’. Taking these conceptions of quality into account, it is no wonder that tensions and contradictions have arisen in the quality improvement arena. Indeed, Drew and Healy (2006) are of the view that ‘quality has always been a particularly difficult concept to define, and many academics have struggled to provide the all-encompassing definition’.
Nevertheless, Kumar (2010: 8) states that the quality debate continues ‘at least partly because sufficient attention is not paid to the tension that arises between the two meanings when the term “quality” is applied to education’. Therefore, as a means of deconstructing how and why value-added evaluation has come to be used in education, and also how tension may arise, it is important to describe how various concepts and connotations of quality have come to influence almost every part of the educational debate.
Quality as human capital
A UNESCO paper by Kumar and Sarangapani (2004: 2) suggests that ‘the usage of the term “quality” in the discourse of education became significant from the 1950s, and more visibly from the 1960s onwards’. This same period also saw the emergence of human capital theory (see Becker, 1964; Schultz, 1960), which suggests that the acquisition of knowledge and skills is proportional to an individual’s potential earning power. In other words, education has a production function, and the quality of education that individuals receive can potentially be correlated with their earning power. In theory, therefore, a quality education has a multiplier effect on the economic prosperity, social wellbeing and living standards of a country. According to Heckman and Jacobs (2010: 4), ‘only when individuals acquire sufficient human capital at the beginning of their life cycles, can they avoid getting stuck in poverty and productivity traps later on in life’. However, as with any investment, an initial outlay of capital is required before a return can be realised. To use the language of the market, the maximum return accrued from the initial investment is dependent on the quality of the brokers who are tasked with managing the investment. As a result, the important questions are: who should bear the costs of the investment, who is responsible for the process, and who is responsible if the desired outputs are not achieved? The OECD (1990, cited in Baptise, 2001) affirms that each individual is responsible for bearing the cost, as those who make a greater investment in education will be rewarded with higher earnings later. ‘This, insisted the OECD, is one reason why students should pay for their own studies and why support for them should be in the form of loans rather than grants or scholarships’ (Baptise, 2001: 189).
However, while it is accepted that the more education an individual receives, the greater the chance that person has of achieving economic prosperity, it would be naive to suggest that this theory should be considered absolute. Other key determinants of an individual’s potential economic earning power must also be taken into account if education has a production function: ‘A production function, education or otherwise, describes the maximum level of outcome possible from alternative combinations of inputs. It summarizes technical relationships between and among inputs and outcomes. The production function tells what is currently possible. It provides a standard against which practice can be evaluated on productivity grounds’ (Monk, 1989: 31).
However, correlating the quality of school personnel to the potential earnings of a student creates both quantitative and qualitative problems. For example, a post-primary school in an urban, disadvantaged area in Ireland found that ‘up to 90 per cent of first years coming in have a reading age below their chronological age and in many cases it’s well below’ (Hunt, 2009). While it might be assumed that, prior to entering the school, the education received by these students was not of the same quality as that received by students of a similar chronological age with a higher reading age, it could equally plausibly be argued that this disturbing figure has very little to do with the quality of teaching and learning but rather is connected to the cultural and social inequities that exist in society more generally. Similarly, at the other end of the social spectrum, it seems equally difficult to separate the various factors which impinge on student outcomes and thus could be said to be the factors informing ‘quality’. The OECD (2010), when referring to the PISA 2009 reading assessment scores, found that it is not always the case that students who attend private education have an advantage over students in the public education system. In fact, ‘of the 15 OECD countries that demonstrate a positive relationship between attendance in private schools and performance, only 3 show a clear advantage in attending private school: in Slovenia, Canada and Ireland, students of similar backgrounds who attend private schools score at least 24 points higher in the reading assessment than students who attend public schools’ (OECD, 2010: 43).
As Simons (2004: 411) observes: ‘at a political level the drive for more efficient management of investments in the public services results in favouring a model of investigation that promises conclusive answers about what works. Is this just a passing fad not worthy of our attention? I think not; we have made that mistake too often, and the result is a school performance system that is grossly unjust and a health service at the mercy of political priorities’. Similarly, it has been observed that ‘“quality” is used far more frequently, in practice, as shorthand for the bureaucratic procedures than to refer to the concept of quality itself. It is thus, not quality itself that is regarded as undesirable but the paraphernalia of quality monitoring that is seen as so intrusive’ (2005: 272).
Quality as value-added
Quality as value for money (VFM) has become synonymous with many public service reform initiatives, such as external accountability, decentralisation, and performance indicators. It has become one of the core motives for introducing external accountability systems as a means of assessing quality in terms of the financial returns received from public investments in education, at both a system and an individual school level. This is evident where evaluations are conducted by inspectorates. The Office for Standards in Education, Children’s Services and Skills (OFSTED) (2012) states that ‘the aim of all this work is to promote improvement and value for money in the services we inspect and regulate, so that children and young people, parents and carers, adult learners and employers benefit’. Although there is no standard definition of quality as VFM, according to Davidson, Miskelly and Kelly (2008: 4), ‘the relationship between inputs and outputs is an important element of VFM in schools’. Assessments of outputs have historically meant analysing the results of high-stakes, externally devised examinations, with schools that achieved higher grades relative to their level of expenditure being deemed to provide a quality education. In contrast, there is also a widely held belief that students’ levels of achievement are inhibited by factors outside the confines of the school grounds and that assessing quality based on the results of high-stakes, externally devised examinations gives an incomplete picture of the quality of education. Yet, in Ireland, during subject inspections and external school evaluations, comparing schools against the average raw score for the entire population in state examinations is now common practice where measures of quality are concerned.
However, using this mode of assessment as a proxy for educational quality is statistically unreliable for a variety of reasons, not least because of the component of error attached to such judgements, but also because of other factors, such as social disadvantage, which can have a profound effect on student performance. Therefore, as a point of logic, other factors that directly inhibit or enhance student achievement should be considered when school output is evaluated. Sammons (2007: 7) states: ‘In most systems students from disadvantaged backgrounds (especially those from minority ethnic backgrounds, and those experiencing a range of social disadvantages, such as low income, parents lacking qualifications, unemployed or in low SES work, poor housing, etc.) are more likely to experience educational failure or under-achievement, though the equity gap in achievement is wider in some systems than others’.
Furthermore, as is the case with Ireland, students from disadvantaged backgrounds tend to be concentrated in the same schools. This situation can also have a multiplier effect on the overall achievement levels of the general population of the school where, according to Smyth and McCoy (2009: 57), ‘there is indeed a “multiplier effect” whereby those in schools with a high concentration of disadvantaged students experience poorer outcomes in relation to attendance, achievement and early school leaving’. In fact, it can be argued that, due to the plethora of antecedent variables that affect student test scores, comparing the outputs of high-stakes, externally devised examinations as an indicator of quality is almost meaningless unless other contextual variables that inhibit or promote student achievement are also taken into account. This has resulted in a more complex paradigm for evaluating quality, in the form of ‘Contextual – value added’ (CVA) that uses multi-level statistical techniques to adjust student test scores based on a number of factors such as socioeconomic circumstances and the academic achievement of students prior to entering the school. As a result, it is suggested that, by using CVA, there is a greater likelihood that schools can be compared on a more equitable basis in comparison to using raw score data to measure school performance (see Davidson et al., 2008; Muñoz-Chereau and Thomas 2015; Scheerens et al., 2003; Scherrer, 2011). Indeed, according to OECD (2008: 15), ‘The adjustment to raw scores made with the inclusion of contextual characteristics provides measures that better reflect the contribution of schools to student learning than the use of “raw” test scores to measure school performance’. 
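The difference between raw-score and contextually adjusted comparisons can be sketched as follows. This is a minimal illustration assuming a single invented SES indicator and two hypothetical schools; actual CVA models are multilevel and include many more covariates than this pooled regression.

```python
import numpy as np

# Toy data: school B serves a more disadvantaged intake than school A,
# yet (by construction) teaches slightly better. All numbers are invented.
rng = np.random.default_rng(1)
n = 200
school = rng.integers(0, 2, size=n)               # 0 = school A, 1 = school B
ses = rng.normal(0, 1, size=n) - 1.0 * school     # B's intake is poorer on average
prior = 100 + 8 * ses + rng.normal(0, 10, size=n)
current = 20 + 0.7 * prior + 4 * ses + 3 * school + rng.normal(0, 5, size=n)

# Raw-score comparison: B looks worse, purely because of its intake.
raw_gap = current[school == 1].mean() - current[school == 0].mean()

# CVA-style comparison: adjust for prior attainment AND SES first.
X = np.column_stack([np.ones(n), prior, ses])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
residual = current - X @ beta
cva_gap = residual[school == 1].mean() - residual[school == 0].mean()

print(raw_gap, cva_gap)  # raw gap negative; adjusted gap substantially higher
```

The sketch shows the mechanism behind the OECD's point above: adjusting raw scores for context can reverse the apparent ranking of schools, which is precisely why the choice of covariates is so contested.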
This perspective resonates with Scheerens, Glas and Thomas (2003: 304), who are also of the view that ‘the more information it is possible to have about individual students, sub-groups of students, and all students in a school as well as comparative data across a whole population (or representative sample) of schools, the more reliable and informative any subsequent analysis is likely to be’. The claimed benefits of this approach include the following:

- Teachers and administrators can focus on the quality of education rather than reputation, resources, or other variables.
- Achievement data and trends derived from value-added assessments are more meaningful for educational change than school-wide data, and they beget a process more equitable than simply measuring raw scores.
- The focus on outcomes and individual growth avoids micro-management of schools and reflects the assumption that both students and schools are responsible for achievement.
- Data derived from value-added assessment can aid parents in making informed choices about schools because higher or lower scores would no longer be equated with better or worse schools.
However, as previously stated, calculating the effect a school has on student performance is a complex process due to the wide variety of factors that inhibit student progress. As Doherty (2008: 258) states: ‘How do you measure value-added where people are concerned? Or in a system where one of its most cherished characteristics is diversity? There are just too many contextual variables, some of them immeasurable in numerical terms, for even the most sophisticated statistical methods to cope with.’ On this view, value-added calculations are rather worse than pointless, because their apparent precision and technical sophistication may have misled analysts, observers and commentators into believing that they had succeeded, or that a greater range of variables or a more complex analytical model would somehow solve the outstanding problems.
Value-added and the case of Ireland
As previously stated, introducing value-added into mainstream education has both positive and negative effects. However, there appears to be a consistently increasing desire to introduce value-added frameworks into the Irish education system. Indeed, usage of quality as VFM can be seen in Ireland as far back as the late 1990s, where ‘the government approved a series of what are called “value for money and policy” reviews to be carried out as part of a new system of comprehensive programme evaluation’ (McNamara et al., 2009: 105). Further, during the launch of the Delivering Equality of Opportunity in Schools (DEIS) initiative, which was used to address the educational needs of children and young people from disadvantaged communities, the then Minister for Education stated: ‘Apart from our own interest in measuring outcomes, there is a strong consensus on the need for better data at social partnership level. Regular information on the extent of “value added” being achieved from our investment will support any future case for further targeted investment’ (Hanafin, 2005).
Likewise, the draft literacy and numeracy strategy envisaged a benchmarking system that ‘would give schools access to information about the achievement levels of students in “matching” schools … All teachers and schools would need to do is administer the tests. A central unit, operating on behalf of the Department of Education and Skills, would look after everything else’ (DES, 2010: 41). Such proposals, however, arrive at a time when, as has been observed, ‘Irish society is traumatised by a crisis of trust … There is no trust in the figures, no trust in the understandings, no trust in the promises, no trust in the will to deliver. There is no trust that the words mean what they seem to mean. There is a suspicion. A presumption on each side of the intention to deceive or renege by the other party.’
The INTO is concerned that ‘proposals referring to the Schools Like Ours initiative where data from individual schools would be aggregated by the DES, or an agency on its behalf, and returned to schools so that they compare their results with Schools Like Ours could lead, perhaps unintentionally, to the establishment of league tables and competition between schools’ (INTO, 2011: 11). Nonetheless, policy has continued in this direction: ‘The government intends to commission research to explore the potential to analyse assessment data from schools so as to enable the provision of national trend data on achievement in different categories of schools (schools serving students from different socioeconomic and demographic contexts, etc.) and the potential for this analysis to assist schools in benchmarking their standards against a norm for similar schools and to set targets for improvement’ (DES, 2011: 83). More recently, it has been proposed that ‘The DES will provide each school with a Data Profile … The Data Profile will also provide schools with information on their patterns of achievement relative to schools with a similar school context … These data will help schools to refine their assessment and moderation practice. They will also be a valuable source of information for schools’ self-evaluation processes … In the event of an unusual pattern of achievement, the Inspectorate of the DES will be advised, and support and evaluation measures will be provided for the school’ (DES, 2012: 27).
Examples of the move towards accountability include the introduction of regular whole-school inspection to secondary schools in 2003, the publication of school inspection reports in 2006, and the introduction of mandatory standardised testing in primary schools in 2007. The National Strategy for Literacy and Numeracy outlines additional accountability measures, such as the development of national standards of students’ achievement and the collection of national data on student achievement.
Discussion and conclusion
It is evident that value-added assessment is on the cusp of becoming embedded in Irish education. However, while recognising the immense benefits of data-driven planning, if value-added is to be introduced into mainstream education, Davidson et al. (2008: 21) are of the view that the purpose and function of the measures themselves must be clear: ‘If CVA is used as a measure of school improvement (or to allocate funding), there is little point in capturing factors which schools cannot influence. Similarly, if CVA data are to be used for accountability purposes, the model must be understandable and usable by relatively non-technical stakeholders.’ They further warn that ‘If value-added models become more complex in the search for greater accuracy, education professionals may find them harder to understand, challenge, and act upon’ (ibid.: 22), and that ‘Some contextual factors are included because they are easier to measure (e.g. entitlement to free school meals (FSM)), while others are omitted because they add too much complexity to the model’ (ibid.). For example, the National Research Council and National Academy of Education (2010: 41) state that ‘Other contextual factors are more difficult to quantify, such as home environment and peer influences, as well as various school characteristics’. Moreover, ‘It is unrealistic to suggest that schools will add value to all pupils in equal measure’, and CVA does not facilitate an assessment of the contributions individual teachers make to student attainment levels (Davidson, Miskelly and Kelly, 2008: 22). Indeed, ‘Representative sampling of individual pupils’ progress may prove a more cost-effective way of assessing value added than considering every child from every postcode and from every ethnic background’ (ibid.).
For example, rather than testing all students, as is the case with value-added assessment practices in jurisdictions such as England and the United States, in New Zealand the National Monitoring Study of Student Achievement (NMSSA) samples across the nation in order to monitor national achievement in areas such as English, Science, Mathematics, and Health and Physical Education (see New Zealand Ministry of Education, 2012). More fundamentally, teachers are not solely responsible for student test scores:

It would be good if our nation’s education leaders recognized that teachers are not solely responsible for student test scores. Other influences matter, including the students’ effort, the family’s encouragement, the effects of popular culture, and the influence of poverty … Since we can’t fire poverty, we can’t fire students, and we can’t fire families, all that is left to fire are teachers.

Similarly, it has been argued that parental engagement in children’s learning in the home makes the greatest difference to student achievement. Most schools involve parents in school-based activities in a variety of ways, but the evidence shows that this has little, if any, impact on the subsequent learning and achievement of young people.
In conclusion, the use of value-added data, for all its perceived benefits and connotations of quality, is a dilemma that faces most countries now and will continue to face them in the future. However, given the political desire to introduce value-added assessment into mainstream education, the authors concur with Hargreaves’ (2011) observation on standardised testing in England, which could equally be applied to its likely introduction in Ireland:

In England … the last remaining test is currently under review because of what the UK coalition government calls the perverse incentives that standardised testing applied to all populations causes: teaching to the test and concentrating on some students who make the numbers look good at the expense of other students. Whether this is seen as a dilemma or not rests not only in the numbers and the assessments but also in how leaders and teachers together use the numbers to initiate conversations about all students and the progression of their achievement over time, whether they are a 2.8 or whether they are a 1; but it is a dilemma that most teachers and leaders feel they have, in some fundamental way, to encounter. (Available at: https://youtu.be/hVkRFELPPzE)
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
