Abstract
The aim of the paper is to contribute to the critical study of digital data use in education, through examination of the processes surrounding school inspection judgements. The interaction between pupil performance data and other (embodied, enacted) sources of inspection judgement is scrutinised and discussed, with a focus on the interaction between people (in this case inspectors) and data in the forming of judgements. I look at the changing position, work and personnel of the Office for Standards in Education (Ofsted) in England, drawing on recent Economic and Social Research Council funded research, with an emphasis on changes in the basis of the authority claims of its personnel (who traditionally operated through elite social networks and knowledges) and on the rise of data, especially in its new interactive forms. These changes illustrate how the demands and logics of data production and use influence the performance of authority in school inspections in England, with some reference to wider developments in Europe.
Introduction
This paper looks at the interaction between data and people in the framing of inspection judgements, drawing on research that is part of an overarching enquiry into governing and governing knowledge (see, for example, Fenwick et al., 2014; Grek and Lindgren, 2015; Ozga et al., 2011). The specific case of inspection is understood as exemplifying a wider shift in the knowledge-governing relationship: away from ideas, possibilities and informed expert analysis, and towards the application of rules derived from recurring data patterns (Savage and Burrows, 2007) and towards data-led practices in which the possibility of obtaining ‘good’ results (i.e. large numbers) may displace more creative thinking about fundamental problems and possibilities. Such a shift is hailed by its more excitable proponents as signalling the end of traditional preoccupations with understanding human activity, and a turn towards ‘simple’ measurement as a guide to action:
This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology and psychology. Who knows why people do what they do? The point is that they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves. (Anderson, 2008)
Critics of this potential reframing of the epistemologies of social scientific enquiry suggest flaws in this vision of the end of theory, pointing out that even ‘big’ data are incomplete, that they are shaped and created by particular assumptions that also frame the algorithms used in their production, and that making sense of data is always similarly shaped by some conceptual framework, however implicit (Kitchin, 2014: 5).
There is, then, a debate about data analytics, the impact of big data and the uses of data in surveillance to be found in the wider social science literature (see, for example, boyd and Crawford, 2012; Mayer-Schönberger and Cukier, 2013), but as Neil Selwyn points out, this debate does not include data impacts on schooling, nor are education researchers fully alive to the consequences of data use: they are, he suggests, ‘generally slow to respond to the rising significance of data’ (2014: 5). Education research is largely disconnected from sociological studies of data – especially those drawing on the sociology of digital cultures and practices – that seek to open the ‘black box’ of digital data in order to reveal the political work that data do in defining what is known and knowable in contemporary society.
This paper attempts to connect to that critical literature and to question the ‘normalisation’ of digital data work within education/schooling, where it is conventionally understood and defended as the basis of improvement. As Siemens et al. (2011: 4) put it, the ‘measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs’ is universal. Data mining and big data supposedly enhance efficiency, increase transparency, enable greater competitiveness and evaluate the performance of schools and teachers. The generation, accumulation, processing and analysis of digital data is understood transnationally, nationally and institutionally as a solution to problems of schooling. What counts as knowledge work – and especially knowledge work for policy – is now highly dependent on data patterning and its interpretation: a shift that connects to (leads into and follows from) technical development – software and coding. I return to this point in more detail below, but first let me situate the discussion of data within the wider conceptual framing of the problem of governing and its relation to knowledge, which informs the research on which this paper draws.
Governing knowledge
In relation to governing, there have been changes in the practices and processes of both governance and knowledge in recent years, and these changes are interdependent, contributing to a new relationship between governing and knowledge (Delvaux and Mangez, 2008). The interdependence of governing and knowledge may be identified through attention to the ways in which expertise, especially expertise in developing ‘practical knowledge’, has moved from the traditional task of policy-informing conventionally carried out through elite or professional knowledge production in bureaucratic, hierarchical relations, to the ‘applied’ or integrated use of expertise in the formation of policy in a more complex, networked form of governing. Put briefly, our argument is that, as governing has changed to become more networked, less bureaucratic, more flexible and interrelated, so too has knowledge changed, moving from its traditional construction and location in disciplinary silos into a more problem-based form, involving new actors in its production, working in new ways and often driven by data (Grek and Ozga, 2010).
These developments are promoted by the centrality of knowledge and information (especially information about comparative performance) to the neo-liberal project (Hayek, 1969). In the neo-liberal imaginary, society is organised in networks held together through the flow of comparative knowledge and data, and standards, benchmarks and indicators serve to manage some of the tensions that arise between centralised and decentralised levels of governance, deregulation and (re-) regulatory instruments of governance (Ozga and Segerholm, 2015). Because of the need to create diversity in provision – so that choice and competition can operate appropriately – the landscape of provision looks increasingly differentiated and involves new actors, including public–private hybrids, and this ‘systemless system’ (Lawn, 2013) requires the production and circulation of apparently objective data in order to make choice possible and manageable. Statistical data reduce the complexities of new national and local education practices through their selection of key indicators on the basis of which schools may be compared, and these ‘thin descriptions’, stripped of contextual complexity, make statistical data a key governing device (Ozga et al., 2011). Furthermore, because there is such a strong emphasis from policy-makers on ensuring that these data enable comparisons to be made (whether of pupil performance, teachers or pupil types), the knowledge claims that are most powerful are those that are de-contextualised, trans-historical and trans-situational, indeed:
…the decline or loss of the context-specificity of a knowledge claim is widely seen as adding to the validity, if not the truthfulness, of the claim. (Grundmann and Stehr, 2012: 3)
The explosion of knowledge production in recent years combined with its increased capacity to travel at speed produces a more intense and intimate relationship between data-based knowledge and governing. As a recent OECD publication puts it: ‘the key question posed is: how do governance and knowledge mutually constitute and impact on each other in complex education systems?’ (Fazekas and Burns, 2012: 6). In this mutually constitutive relationship, policy problems do not appear in the external environment but are identified through their statistical representation from which solutions are (apparently) also derived. As Grundmann and Stehr further suggest, knowledge becomes relevant when ‘it includes the policy options that need to be manipulated’ [and such] ‘practical knowledge (…) provides knowledge that identifies the levers for action’ (2012: 179).
These new governing forms, and the knowledges that support them, create a demand for new governing skills and new kinds of governing work from particular groups of actors who are positioned at key points of intersection of knowledge production and practical problem-solving. This work demands skills in translating information into ‘practical knowledge’, mediating conflict and brokering interests (Clarke, 2015). There is a growing literature on the influence, interconnections and work of networks of experts (Ball and Junemann, 2012), who promote cognitive consensus that makes political action easier. These experts are ‘more than the diffusers of ideas; they develop conceptual knowledge in order to promote educational reforms, drawing on their substantial experience as policy advisers to governments and IOs’. Moreover, ‘their attributes as experts and consultants tend to obscure the ideological and political dimension of their activities of knowledge production for policy’ (Shiroma, 2014: 2). The rapid growth of experts, advisers and consultants in education arises from the rapid expansion of data-based knowledge; this creates a need for simplification that strengthens the trend towards comparison and the search for trends and patterns in comparative data, and also increases the influence of analysts and gives considerable power to those who can interpret data and identify the ‘levers for action’ (Grundmann and Stehr, 2012: 20–21).
Comparative data at the national and institutional levels are perceived to have the capacity to create necessary distance from contamination through attachment to a particular context and thus to support objective analysis of system and institutional strengths and weaknesses. These analyses and the extraction of meaning from them are increasingly in the hands of ‘experts’ – in some cases commercial consultants, in others members of professional communities – whose work is being redesigned by data coding.
Coding is not only a technical process, but a system of regulation that shapes conduct and transmits messages about what counts as knowledge; it embeds and transmits ‘codes of conduct’ that shape the practices of schooling. Some of its power comes from what Thrift and French (2002) call the ‘automagical’ properties of software, that is, its hidden and ceaseless activity – the data systems at work in schools in England have become more and more ‘alive’, so that they exist somewhere between the dead and the living – always ‘on’ and constantly requiring to be ‘fed’.
The processes of coding – that is, the algorithms that construct the performance data software and the processes that shape inspectors’ observation and reporting – codify the world into ‘rules, routines, algorithms, captabases that are then used to do work in the world’ (Kitchin and Dodge, 2011: 160). We can explore that codification through scrutiny of the work that inspectors do and of the rules that frame it. Code is both structured and structuring, and that combination is clearly visible in our research – inspectors are disciplined and discipline others through their coding practices. Code acts in particular ways to frame inspection activity: it constructs the inspection event so that observation and reporting are driven by analysis of performance data, and the development of predictive profiling also frames inspection as an assessment of progress. I return to these issues below, following a brief review of the work of inspectorates and its changing nature in relation to data use.
I draw in what follows on our study of the background, training, experience and ‘assumptive worlds’ of three national inspectorates, their claims to expertise and their modes of operation (Ozga et al., 2015). Data were gathered from published official documentation and also from the inspectorates themselves. Two sources were particularly important: (i) the documentation required for inspection, including self-evaluation reports, inspection reports and post-inspection development plans; and (ii) interviews with key system and school-level actors (20 interviews in each system, 60 in total). We also undertook a detailed analysis of a large sample of inspection reports (Flórez, 2014). More details of all phases of the research may be found in Grek and Lindgren (2015). Data gathering in England was quite challenging, and we guaranteed confidentiality to our informants, who are identified only by a description of a role and a number (for example, Lead inspector 01).
Inspection and inspectorates
Inspectorates may be understood as epistemic communities (Haas, 1992), with strong claims to expertise: they are positioned as mediators and translators of information, because of their particular and unique positioning in the work of governing. As Clarke (2015) has pointed out, there are three distinctive aspects of inspection as a mode of governing. (i) It is directly observational of sites and practices. That is, in the case of schooling, inspectors are empowered (and required) to enter the world of the school and observe what takes place within it. (ii) It is a form of qualitative evaluation, involving the exercise of judgement rather than only the calculation of statistical regularity/deviation. Judgement is at the core of the activity and thus provides opportunities to study the articulation of knowledge and power. (iii) It is embodied evaluation: the inspector is a distinctive type of agent whose presence is required at the site of inspection and who embodies inspectorial knowledge, judgement and authority.
Inspectors come to these tasks with varying degrees of experience and expertise, depending on their history and the relationship of inspection to the development of national systems of education. They combine embodied and encoded knowledge (Lave and Wenger, 1991), and the balance between the two shifts over time and in different contexts. Inspectors bring their expert judgement and ‘objective’ data into relationship with one another, within more or less prescribed parameters; they are responsible for making knowledge about system performance available for translation into use by policy-makers at all levels, and by practitioners; they are also to a greater or lesser degree engaged in building improvement and knowledge about improvement within and across systems. This summary highlights the fact that inspectorates embody complex and layered identities: the ways in which they have related to governing work and to knowledge have changed, and continue to change, over time (see Ozga et al., 2015).
Our research (Grek and Lindgren, 2015; Grek et al., 2013) provides evidence of increased concern among European inspectorates of education to protect their claims to authority by constructing a new technology of inspection, in which they attempt to position themselves as mediators of the relationship between expertise and data, with an emphasis on the uniqueness of their position as directly observational of school sites and practices:
We, the European inspectors, are the only people going into the classroom, going to see how qualitative lessons are given. All the others don’t do it, they just have data …. (European Inspector 05 quoted in Grek et al., 2013: 10)
European Inspectors in the Standing International Conference on Inspection (SICI) also stress the advantages that their collaborative and reflexive work produces:
We learn from one another through discussion. We learn even more about the principles and processes of inspection by working alongside one another in schools on real inspections … As inspectors we have a key contribution to make and this will be much valued by educational policy makers. (SICI, 2001: 23)
In the repeated emphasis on their unique capacity to interpret through experience and observation, inspectorates in Europe attempt to consolidate their claims to authority and to claim space for their knowledge in order to protect their position in the context of increasing data use.
Data in the inspection process
Indeed, the growth of data has undoubtedly changed the nature of inspection work. In England, the commitment to data use in governing education has been particularly strong (Ozga, 2009). Here I briefly review some of the data resources within which the work of inspection is embedded – and that arguably shape the construction of the inspection ‘event’ and the hierarchy of judgements that inspectors make in the course of it. A key point to grasp is the scale and scope of England’s data machinery – indeed, in the course of our research it was described by a senior analyst in Brussels as ‘monstrous’. The origin and growth of the system owe much to New Labour’s commitment to technology and targets and the UK coalition government and its successor Conservative administration continue the commitment to the dissemination and use of data-based information, perhaps especially in schooling:
We will dismantle the apparatus of central control and bureaucratic compliance. We will instead make direct accountability more meaningful, making much more information about schools available in standardised formats to enable parents and others to assess and compare their performance. (…) In future: parents, governors and the public will have access to much more information about every school and how it performs. (DfE, 2010: 72)
The development of data connects directly to successive governments’ prioritisation of attainment (measured by test results) and to a determined effort to shift school cultures so that data monitoring and active data use became the driving force of school activity and hence improved performance. As this comment from a former senior inspector reveals, such extensive data use, along with the costs of inspection, may indeed pose a challenge to the continued use of observation by inspectors in the school context:
….we’re putting more information out all the time about the performance of schools-and that’s another thing this government has done-to make all this data available-and part of the reason was to maximise commercial usage-commercial agencies as well as academics were to come in and just use the data –if that becomes more regularly used and you get a kind of ‘trip adviser’ view of how schools are doing-you might think, well-pretty imperfect that.…. I don’t know if the inspectorates are thinking about this-but all those [established] ways of capturing data can so easily be overtaken by people just passing information around much more rapidly and presenting and re-presenting it-you could argue that in this more dynamic system what you don’t need is rather stately inspections. (Former Inspector 01)
The data world of inspection is rich in acronyms: PANDA (Performance and Assessment) and PAT (Pupil Achievement Tracker) became RAISEonline in 2008 (https://www.raiseonline.org/About.aspx). PAT was designed to enable schools to become data users: tracking, reviewing and predicting pupil progress, attendance, behaviour and target setting – tracking was promoted by the government department as ‘an integral part of day-to-day teaching and learning’. The construction of these instruments is protected from external scrutiny by considerations of commercial confidentiality: a Freedom of Information (FOI) request made for the full source code for the Pupil Achievement Tracker version 4.2.5 was rejected under section 43(2) of the Freedom of Information Act 2000, because it was considered commercially sensitive information. The Department for Education (DfE) argued that the general public interest in releasing the information requested had to be balanced against the public interest in protecting commercially sensitive information, and that disclosure of information relating to the full source code for the Pupil Achievement Tracker would be likely to be used by competitors in a particular market to gain a competitive advantage.
Among the battery of data-driven technologies in operation in schooling in England, the school data dashboard (SDD) constructs comparator schools so that teachers can see how ‘similar’ schools perform on a range of measures. Each school has its own group of similar schools for each measure on the dashboard. The methodology described by the Office for Standards in Education (Ofsted) in its SDD guidance in order to define similar schools and groups is exclusively concerned with attainment measures:
The measure does not take into account other contextual factors such as deprivation or levels of special needs because these factors should already be reflected in the prior attainment of the pupils. (Ofsted, 2014: 8)
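The logic Ofsted describes can be rendered as a minimal sketch (in Python, with invented school records and a placeholder group size; the actual SDD measures and parameters are Ofsted’s own and are not available in code form): for each measure, a school’s comparator group is simply the set of schools closest to it on intake prior attainment.

```python
# Illustrative sketch only: invented data and a simplified rendering of the
# 'similar schools' logic described in Ofsted's SDD guidance. The real
# methodology, measures and group sizes are Ofsted's own.

from dataclasses import dataclass

@dataclass
class School:
    name: str
    prior_attainment: float  # e.g. mean prior attainment of the intake


def similar_schools(target: School, all_schools: list[School], n: int = 3) -> list[School]:
    """Return the n schools whose intake prior attainment is closest to the
    target's. Note what is absent: deprivation, special needs and other
    contextual factors play no part in defining the comparator group."""
    others = [s for s in all_schools if s.name != target.name]
    return sorted(others, key=lambda s: abs(s.prior_attainment - target.prior_attainment))[:n]


schools = [School("A", 27.1), School("B", 26.8), School("C", 29.4),
           School("D", 27.0), School("E", 24.9), School("F", 27.3)]
print([s.name for s in similar_schools(schools[0], schools)])  # ['D', 'F', 'B']
```

What the sketch makes visible is the exclusion: nothing in the comparator calculation knows anything about a school’s context, which is precisely the ‘thin description’ character of such governing devices noted above.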
As data systems developed, and capacity to integrate data increased, the inspectorate too was changing substantially, both in terms of its personnel, and in relation to the frameworks that govern inspection practices. Ofsted came into existence in 1992 to bring inspection into line with a new system of schooling that brought a new governing architecture to education, dependent on forms of scrutiny, evaluation and audit (Ozga et al., 2015) and with the promise that every school (primary and secondary) in England would be inspected within four years, and would then receive repeated inspections. The much-expanded scope of inspection required a change in personnel: Her Majesty’s Inspectorate (HMI), the established embodiment of authority in education in England since 1839, was reduced in number from over 500 to around 300, and the bulk of the work of inspection was sub-contracted. The recruitment of this new inspection force, employed initially by a large number of commercial contractors and, from 2005 by just three – SERCO, TRIBAL and CfBT – required efforts to ensure standardisation and consistency across the system, in the absence of the coherence previously achieved through unwritten rules, professional expertise and the social cohesion of HMI. As a result there was a massive increase in inspection training and in inspection documentation, including inspection frameworks and handbooks – a shift that is also a shift in the governing knowledges that are being mobilised and circulated. There is a move away from the pre-reform resources – often implicit – of officer-class social behaviour, combined with professional experience and (at least in some cases) subject or pedagogic expertise, to the following of rules constructed elsewhere, and able to be applied in (increasingly) different school types, but without reference to context. This shift may be understood as a re-coding of inspection knowledge.
There were constant changes to inspection frameworks in the period 1992–2010, with corresponding changes in the accompanying handbooks and web-based documentation. Analysis of these key texts (Baxter, 2013; Flórez, 2014) reveals quite sharp contradictions in the knowledge claims and the relationship to governing that they contain: there is oscillation between tighter and looser forms of regulation, and an unresolved tension between data use and inspection judgement. The picture is complicated by the entry of commercial, competitive agencies into the field; this means that the frameworks attempt to impose consistency and quality control alongside pressures to minimise costs and maximise profit. Price is a key determinant in winning and keeping contracts, and contracts also influence the ways in which knowledge can be shared between the three commercial contractors; as one inspector reported:
It would be good to share this good practice across agencies, but they [the inspection agencies] often consider this business-sensitive information; to be used when the contracts come up for renewal. (Lead inspector 12)
Whatever the requirements of the different frameworks of inspection, the key criteria (pupil attainment levels in relation to national performance targets) continue to set the agenda. Here, I focus on the ways in which the inspection event is coded by the centrality of attainment data, so that the space for judgement through observation is curtailed.
The pre-inspection process ensures that performance data dominate: inspectors are required to scrutinise performance data in order to arrive at a baseline evaluation, using centralised data banks that provide detailed pupil- and class-level information over time on the school’s performance against national targets and in relation to comparator schools. This forms the basis of the pre-inspection commentary (PIC) that guides the work of the inspection team. However, these data may be out of date, and circumstances at the school may have changed; indeed, the rules governing inspection may have changed: for example, taking account of socio-economic circumstance – the contextual value added (CVA) measure – is no longer applied in calculating the performance of the school. Tensions between the rigidity of the framing and the experiences of inspectors ‘on the ground’ were very evident in our research, as this quotation, from an inspector who wanted to find a way of acknowledging the progress made since the last inspection by a school in challenging circumstances, illustrates:
But I could easily have made it a failing school because the data was pretty basic, of course it’s all changed this time round. With contextual value added it would have looked different, but without it, and attendance wasn’t good, a variety of those sorts of what people may have thought of as irrefutable facts, what people at Ofsted would say were irrefutable facts -but we, the team and I were absolutely clear that without the effort that they had put into the school, it would be a whole lot different, really awful. (Lead Inspector 21)
Before the inspection event, the lead inspector analyses evidence including: a summary of the school’s self-evaluation (if that is available from the school); data from RAISEonline plus the sixth-form performance and assessment (PANDA) report, the learner achievement tracker (LAT) and available data about success rates from the SDD; the previous inspection report; the findings of any recent Ofsted survey and/or monitoring letters; any responses from parents/carers on Parent View (Ofsted’s online survey available for parents); and issues raised about, or the findings from, the investigation of any qualifying complaints about the school, along with information available on the school’s website, which may include a prospectus and other information for parents.
The preparatory data analysis includes pupils’ attainment in relation to national standards (where available) and compared with all schools, based on data over the last three years where applicable, noting any evidence of performance significantly above or below national averages; trends of improvement or decline; and inspection evidence of current pupils’ attainment across year groups using a range of indicators, including where relevant the proportion of pupils attaining particular standards; capped average points scores; average points scores; pupils’ attainment in reading and writing and pupils’ attainment in mathematics. As Matt Finn argues, these different data forms are nested in ‘scales of interaction’ (Finn, 2015) that connect records about attendance, behaviour, attitudes towards learning, teachers and school, pupils’ views on school life, and information about academic attainment. Moreover, as he further suggests, and as I have illustrated in the quotation from Lead inspector 21 above, the data perform two functions that are in tension: the monitoring of performance (and its public grading) and improvement through evaluation of progress.
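A minimal sketch may help to fix what this preparatory analysis involves in practice. The figures, thresholds and measures below are invented placeholders, not Ofsted’s; the sketch simply shows the pattern: compare recent attainment against a national average, flag ‘significant’ deviation, and look for a trend.

```python
# Illustrative sketch of the preparatory analysis described above, using
# invented figures. Thresholds and measures are placeholders, not Ofsted's.

NATIONAL_AVERAGE = 60.0   # hypothetical % of pupils reaching the expected standard
SIGNIFICANCE_BAND = 5.0   # hypothetical margin for 'significantly' above/below

school_attainment = {2012: 52.0, 2013: 55.5, 2014: 58.0}  # last three years


def pre_inspection_summary(attainment: dict[int, float]) -> dict:
    years = sorted(attainment)
    latest = attainment[years[-1]]
    deviation = latest - NATIONAL_AVERAGE
    # Flag performance significantly above or below the national average.
    if deviation > SIGNIFICANCE_BAND:
        flag = "significantly above"
    elif deviation < -SIGNIFICANCE_BAND:
        flag = "significantly below"
    else:
        flag = "broadly in line"
    # A crude trend: is each year better than the last?
    values = [attainment[y] for y in years]
    improving = all(b > a for a, b in zip(values, values[1:]))
    trend = "improving" if improving else "not consistently improving"
    return {"latest": latest, "vs_national": flag, "trend": trend}


print(pre_inspection_summary(school_attainment))
# {'latest': 58.0, 'vs_national': 'broadly in line', 'trend': 'improving'}
```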
However, it is the weight of the performance monitoring machinery that dominates and constrains, even before the inspectors enter the school. In the course of our research we heard many expressions of frustration from inspectors and school staff about the degree to which the inspection event was shaped by the preparatory stage. Although the guidance for inspectors places considerable emphasis on the importance of the time spent by them in school – where they should gather evidence on teaching and learning, observe lessons, scrutinise work (including in pupils’ books and writing), talk to pupils about their work, gauge their understanding and their engagement in learning, and obtain pupils’ perceptions of typical teaching – the coding of the event prioritises data. Tension is especially acute around judgements of teaching quality:
The tension is that the actual judgement to make is about teaching over time, actually you spend a lot of time on learning methods and if there is one aspect of the current framework that teachers really are quite struggling with, teachers and inspectors, it’s that inspectors are going into lessons observing teaching and making a judgement on what they see-for example that teaching is good, but actually the judgement that comes out is satisfactory because they’ve got to get in the notion of teaching over time, and the impact of teaching over time, so that requires them to look at data. Data… so we have RAISEonline, but then you have the- well why bother coming in just look at RAISE.….it is the only thing that the inspector has to show performance over time. So they do have that, progress and attainment, so they have to weigh that up with the data that the school provides and what they see, and that at the moment is the single biggest tension that inspectors are facing really. (Lead inspector 11)
The same inspector, commenting on the difficulty of reporting on a school that had made progress, but where the data could not support his judgement and that of his team, highlighted the strong coding of inspection report writing. Inspectors must decide whether the school is ‘outstanding’ (grade 1), ‘good’ (grade 2), ‘requires improvement’ (grade 3 – changed from ‘satisfactory’ in 2011) or ‘inadequate’ (grade 4). These grades carry consequences, including closure of schools judged to be inadequate. The grades are awarded on the basis of performance across four categories, namely: the achievement of pupils at the school; the quality of teaching in the school; the behaviour and safety of pupils at the school; and the quality of leadership in, and management of, the school.
In the case illustrated above, the school had been graded ‘satisfactory’ in two of the four categories. The inspector tries to find a way of acknowledging the improvement the school has made, but this is not possible within the inspection framework:
Because it had satisfactory in the last category and they are saying ‘well you can’t have two satisfactorys because that means they haven’t made sufficient progress since last time’. But I said, hold on, if you are raising the bar all the time, raising expectations all the time you have to make progress just to stay satisfactory! So hang on you can’t go and penalise schools by saying you have been satisfactory twice and then say we are raising expectations. And so that’s why a lot of my colleagues have stopped leading inspections, there are a lot of us who feel straight-jacketed by this emphasis on the numbers, not that I think that the data is a weakness of the system, I think that the data can be really powerful, if it’s using the data to get underneath what is going on in school ….I mean it’s a much more complex process I think… the real difficulty I suspect is that doing the inspections is the easy bit, writing the report is the difficult bit and it’s got increasingly more difficult. (Lead inspector 21)
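The grading scheme just described can be put in schematic form. The sketch below is illustrative only: the grade labels and four categories are those set out above, but the aggregation rule is an invented placeholder, since the actual rules for combining category grades into an overall judgement sit in the frameworks and handbooks.

```python
# Illustrative only: the grade labels and judgement categories described
# above, expressed as a simple structure. The aggregation rule (overall
# grade no better than the weakest category) is a placeholder invented for
# the sketch; the real rules sit in Ofsted's frameworks and handbooks.

GRADES = {1: "outstanding", 2: "good", 3: "requires improvement", 4: "inadequate"}

CATEGORIES = ("achievement of pupils", "quality of teaching",
              "behaviour and safety of pupils", "leadership and management")


def overall_label(category_grades: dict[str, int]) -> str:
    # Higher numbers are worse grades, so max() picks the weakest category.
    return GRADES[max(category_grades.values())]


example = {"achievement of pupils": 3, "quality of teaching": 2,
           "behaviour and safety of pupils": 2, "leadership and management": 3}
print(overall_label(example))  # 'requires improvement'
```

Even this toy rule reproduces the inspector’s complaint: once any category grade is fixed by the data, the overall judgement is constrained regardless of what the team observed.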
Time may also be understood as an element in the coding of the inspection event. Time is very limited – not just for the event, which takes place over two days, but also for the reporting, which has to be done by the end of the week of inspection. As one headteacher commented:
Let’s be honest, you come and do a two day inspection. Do you really get a grip; a feel for what school’s about in just two days? You go in and see around 50 observations, say my best staff work Monday, Tuesday and Wednesday and they [the inspectors] come Thursday, Friday? It’s the inconsistency. I think they should flip that coin, have their first conversations with the SIP [School Improvement Partner] the LEA [Local Education Authority]: tell me about the leadership and management of the school, where do you see the grades over the next 3 years under this management? Then go and inspect it. Otherwise stay in London and look at RAISEonline. (Headteacher 020)
As indicated above, the coding of reporting is extremely strong, given the time pressure on inspectors to write the report that must be with the school’s governing body by the end of the week of the inspection, a constraint that produces reliance on formulae and concern to ‘get it right’. The monitoring system that such a complex knowledge production regime generates is also significant in shaping the performance of inspection and the relations between the different actors involved (i.e. the inspection team; the contracted inspection service providers, who suffer financial penalties if there is delay in submitting the report; and HMI, who have oversight of the process), as this rather lengthy but very revealing quotation illustrates:
They [the lead inspectors] are responsible for putting it all together in one report, and at the same time they will Quality Assure [QA] the sections that come in from other inspectors. When completed they will send it to the inspection service provider [i.e. CfBT, SERCO or TRIBAL] and they will also send the report to the QA readers that QA the report, then it goes to Ofsted and an HMI signs it off………now if HMI say no we are not signing it off, then it becomes a key performance indicator failure for the provider, so they are paranoid about this because they get slapped, you get contract action notices that will say, that unless you improve this will happen ….so you get tied up in these knots and in the end what inspectors are doing is saying ok well I have to follow this rule….there isn’t a rule but I have to follow it….
The most recent inspection framework is notable for its heavy deployment of the term ‘professional’, but this is combined with a simplified set of judgements in a way that creates considerable dissonance between the latitude implied by professional judgement and the rule following required by the framework. For example, inspectors are now required to ‘use their professional knowledge and engage in a professional dialogue with the headteacher or senior member of staff’ (Ofsted, 2012: 11). At the same time, their capacity to translate or mediate judgement as a result of such engagement is much reduced through the simplification of the framework. There is considerable disquiet among the inspectorate; the operationalisation of the new procedures is far from smooth, and may further reveal tensions within the inspectorate itself, especially in relation to the basis of its claims to authority. Indeed, our research suggests that Ofsted’s attempts to incorporate a professional discourse into a data-driven disciplinary and regulatory regime are weakened by absence of trust, while its increased alignment with political agendas aimed at increasing school choice also undercuts its mobilisation of references to professionalism as a source of its authority.
Discussion
Data processes that, on first encountering them, seem to promote transparency, provide information and assist in sorting things out, reveal themselves on closer scrutiny to be highly powerful social practices (e.g. processes of observing, measuring, describing, categorising, classifying, sorting, ordering and ranking) (Halford et al., 2013: 180). Moreover, in the interaction of data and professional judgement within school inspection, the shaping of the process of preparation, of the inspection event, and of reporting by data is profound and inescapable. The coding of inspection practices extends beyond the interaction of data and observation in the inspection event, to the broader meanings of code as shaping social practices and doing political work. The strong framing of professional judgement by data revealed in our research also illustrates the strengthening of the political framing of inspection. Ofsted’s attempts to combine a professional discourse that references judgement with a data-driven disciplinary and regulatory regime are weakened by absence of trust; ironically, the dominance of numbers undermines trust in the inspection process. At the same time, the inspectorate’s increased alignment with political agendas aimed at increasing school choice also weakens its claims to professionalism.
This development is indicative of the political work that data do on a wider scale. In the context of increasing involvement of new actors – especially corporate actors such as Pearson and McKinsey – in data production and use, data systems enable relations to be established between political and other authorities, offer quick and cost-effective ways of forming and maintaining these connections, and apparently supply practical solutions to complex and hitherto intractable problems (Rose and Miller, 1992). Data systems create governing assemblages that shape individual conduct while simultaneously enabling autonomous, choice-making activity. These data are public – they are described as ‘transparent’: no longer produced for and distributed among a bureaucratic elite, they are distributed among, and do political work in, the wider population; their use is no longer limited to politicians and civil servants.
They thus have what Nelli Piattoeva refers to as ‘popular and official currency’ (Piattoeva, 2014: 7–8). The official currency of data is visible in their legitimation of governing through rendering an account to taxpayers (Ozga et al., 2011: 92–93), while at the same time populations, including school pupils, become visible through data and so amenable to governing practices. Local government and schools that used to be relatively closed to public and central government scrutiny are now calculable (Ozga et al., 2011: 92). Data expressed as public rankings, league tables and Programme for International Student Assessment (PISA) results are both official and popular knowledge forms, and so, as Piattoeva argues, we can see them as doing political work – for example enabling and consolidating control over a wide network of actors and institutions – local authorities, schools and teachers included.
The ‘popular’ work that data do is to make connections to individual citizens/learners/pupils in such a way as to steer or mediate their decisions and actions in relation to economic demands, to family relationships and to all other aspects of everyday life (cf. Rose and Miller, 1992: 180), for example through Ofsted’s Parent View in making a choice of school. As Piattoeva points out, this development is not confined to ‘official’ data:
…social media has grown into an efficient tool for the government of individuals and masses. The Internet sites that promote the neoliberal evaluation culture are motivated by and deeply embedded in contemporary social and political reforms, and rely on a new mode of regulation – governing at a distance through the regulated choices of individual citizens, and through specifying subjects of responsibility, autonomy and choice. (Piattoeva, 2014: 8)
In schooling, data act powerfully on individuals and groups through their predictive capacities and individuals are encouraged and, indeed, obliged to work with this information, becoming engaged in their own production (Finn, 2015). Dataveillance constructs ‘predictive’ profiling, where the future behaviours of an individual are calculated and then acted on pre-emptively, using ‘actionable intelligence’ to make decisions and set priorities. Data thus ‘make people up’ – not only by making them visible, but through encouraging people to think of themselves in particular ways, that is, to classify themselves (Williamson, 2014). Furthermore, as Selwyn (2014: 11) argues, the availability of integrated digital data that can be interrogated in real time reduces schools to ‘computational’ projects. The danger that Selwyn highlights is that the ‘modelling’ of education through digital data fosters a sense of algorithmically driven ‘systems thinking’ through which complex (and unsolvable) social problems associated with education can be seen as complex (but solvable) statistical problems. Trust in numbers, in the context of the development of big data, increasing capacity for data integration, and predictive data use in schooling, reinforces this perspective on education, as does the tendency of education research to accept data use in education as a vehicle for improvement, narrowly defined.
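What such ‘predictive’ profiling amounts to computationally can be suggested with a deliberately naive sketch: project past scores forward and trigger action on the calculated, rather than the observed, future. All figures, the target and the intervention rule below are invented for illustration.

```python
# Illustrative sketch of 'predictive' profiling as described above: a naive
# linear projection of a pupil's past scores, used to trigger pre-emptive
# action. Data, the target and the intervention rule are all invented.

def project_next(scores: list[float]) -> float:
    """Extrapolate one step ahead via a least-squares line through past scores."""
    n = len(scores)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(scores) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, scores)) / \
            sum((x - x_mean) ** 2 for x in xs)
    return y_mean + slope * (n - x_mean)


TARGET = 65.0  # hypothetical expected score

pupil_scores = [58.0, 60.0, 61.0]
predicted = project_next(pupil_scores)  # 62.7 on these invented figures
if predicted < TARGET:
    # 'Actionable intelligence': the pupil is acted on for a future
    # that has been calculated, not observed.
    print(f"predicted {predicted:.1f} < target {TARGET}: flag for intervention")
```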
This paper, and the example of the inspectorate, seeks to promote a more critical assessment of the political work that data do, and draws attention to Marion Fourcade’s reminder that:
Behind each set of rational instruments always stands a particular political and economic philosophy, as well as particular social groups. (Fourcade, 2010: 571)
Declaration of conflicting interest
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The author acknowledges the support of the UK Economic and Social Research Council (ESRC) RES 062-23-2241.
