Abstract
This article examines evidence of teachers’ work and learning in one school setting in the northern regions of Queensland, Australia, revealing how globalized performative practices that circulate around the collection and use of data in schooling settings are both confirmed and contested. Drawing upon the literature on the nature of accountability, particularly in relation to educational governance, and alternative theorizing of educational practice as praxis, the research reveals how teachers’ work and learning are heavily implicated in the development and perpetuation of reductive, performative conceptions of education, even as teachers seek to foster a more educative disposition. The research draws upon interviews and meeting transcripts of teachers seeking to enhance their own learning for student learning. The article reveals that even as teachers are actively involved in developing and analysing a plethora of data to foster their own learning for enhanced student learning, they also struggle to do so in a context that positions particular standardized forms of school, regional and national data as the data of most value. At the same time, the article describes how more sustainable practices do exist, and serve as examples of forms of praxis that challenge more reductive, performative conceptions of learning in schools more broadly.
Introduction
This article explores how the complex and contested conditions that surround teachers’ engagement with data influence their learning, and ultimately, the nature of their engagement with the children in their classrooms. Drawing upon the learning practices of teachers in one school in a regional city in the northern half of Queensland, the research reveals how such learning is influenced by the pressure to ensure enhanced outcomes against more standardized measures of student learning – particularly in relation to national standardized literacy and numeracy assessment in Australia, the National Assessment Program-Literacy and Numeracy (NAPLAN). These national tests represent part of a broader, globalized policy framework of standardization of educational practice, sometimes described as the Global Education Reform Movement (GERM; Sahlberg, 2015). Drawing upon neo-Aristotelian insights, the research reveals that even as teachers were constrained by more reductive and performative pressures and demands associated with this particular Australian manifestation of the GERM standardized testing regime, they also sought to contest such narrow foci, and endeavoured to engage with one another in more genuinely collaborative and substantive ways to enhance their understandings of students’ learning. Such responses reflect efforts to foster a more genuinely inclusive, democratic conception of education which takes children’s needs into account, and which contests conceptions of childhood that fail to respond to their situated needs (Rodriguez, 2017: 235; in this case, via standardized assessment practices).
Teacher learning in the context of an infrastructure of accountability
In this article, I focus upon the nature of teacher learning under current policy conditions. As such, teacher learning is understood as all the practices and processes in which teachers engage as part of their work, including the formal and informal activities in which they participate at and beyond the school level, and that contribute to informing understandings of their work; this includes both ‘formal’ and ‘informal’ modes of professional development (sometimes described as ‘PD’; Hardy, 2012). Increasingly, teachers’ learning is made sense of through the lens of specific forms of data, and what Anagnostopoulos et al. (2013) refer to as the substantial ‘infrastructure of accountability’ that enables the collection, analysis and engagement with such data. Henig (2013) argues the motivations behind these initiatives are not new, and include a trust and belief in the ability to develop institutions that can operationalize seemingly productive organizational incentives rather than being beholden to individual whims and self-interest. Such motivations also include trust in technical expertise, and that such expertise can provide solutions to seemingly intractable problems. Referring to the US context, Henig (2013) argues a third motivation is the belief that politics can be corrupting, and that systems need to be put into place that are not beholden to partisan approaches or philosophies.
However, as Henig (2013) also argues, while the motivations behind the development of new forms of managing and monitoring evidence of learning are not new, the specific technologies deployed to do so, are. Such an infrastructure refers to the myriad forms of technology and technical expertise that make it possible to manage and monitor student and teacher performance. Test-based accountability has proved particularly consequential in the US context, where, especially since the adoption of the No Child Left Behind Act in 2001, state and federal policymakers have sought to introduce mechanisms to use test results to manage and monitor educational practices: ‘Test-based accountability has spurred states to create large-scale information systems that gather, process, and disseminate information on the characteristics and performance of schools, teachers, and students’ (Anagnostopoulos et al., 2013: 1). Various forms of ‘value-added measures’ (often expressed in relation to standardized tests) have also been introduced as vehicles to ascertain whether teachers have made a difference to student learning. However, as Konstantopoulos (2014) argues, there is inadequate research into whether and how such value-added measures are actually accurate indicators of teachers’ effects, and that decisions should not be made against such unstable data: ‘It is not obvious that we are better equipped now to make such important decisions about teachers than we were 35 years ago’ (p. 1).
Ball (2003) refers to such technologies to manage and monitor these performances as reconstituting the work of teachers, indeed of teachers themselves. In a battle for what he describes as the very soul of the teacher, and channelling Lyotard (1984), Ball (2003) refers to the ‘terrors of performativity’ that surround the series of technologies that seek to govern teachers’ work. This includes various forms of data, particularly those associated with test-based accountabilities.
In the Australian setting, engagement with data has been most clearly associated with national testing in the form of the NAPLAN. However, even as such data may be used to enhance some aspects of teachers’ learning and practice, and teachers may seek to appropriate performative applications for more educative purposes (Hardy, 2014), there is also evidence of a questioning of the validity of the multiple uses of NAPLAN data (Klenowski, 2016), and associated forms of data more broadly.
Teacher learning and engaging with data
While such conditions matter, much of the literature on teachers’ learning fails to engage with the challenges that surround the plethora of data and its uses. Teachers’ engagement with data occurs at multiple levels; engagement with data constitutes part of their learning, as well as part of their teaching practice. By engaging with data as part of their work, teachers are constantly learning about the nature of data, how it can be used and its limits. They also learn that some forms of data are more useful than others for informing their teaching practice, and the learning of students. From a more technicist perspective, there is an assumption that various forms of standardized and numeric data are inherently ‘objective’, and incontestable. However, analyses of actual measures of attainment problematize such simplistic assumptions. Value-added measures, for example, can be analysed and modelled in multiple ways, leading to varying outcomes; this can result in negative associations between achievement and achievement gains (Ready, 2013). There is a sense, including on the basis of the statistics that inform them, that such measures should be used cautiously, particularly for significant and consequential decision-making (Pivovarova et al., 2016).
Also, in spite of the myriad forms of data currently available, as Datnow and Hubbard (2015) argue, there is actually relatively little research into how teachers engage with assessment data to inform teaching practice, and the circumstances/conditions that influence such engagement. Some research has shown how engagement in professional learning communities and with various kinds of data coaches and instructional coaches has enhanced instructional practices; in particular, vertical expertise (individual knowledge) in conjunction with horizontal expertise (knowledge co-created with others) has been proffered as a way in which coaches and communities foster changes in pedagogy (Marsh et al., 2015). Such coaches were selected by school and district administrators. There is also some evidence of various forms of ‘productive tension’ as teachers grapple with accountability-oriented demands in the context of high-stakes testing and associated reforms, and how these can cultivate teacher learning and pedagogical improvement (Stillman, 2011). In this case, the pressure around accountability, and data collected for accountability purposes, can be seen as ‘productive’, helping to stimulate teachers to question their practices – to learn from what is not working when they engage with their students, and to thereby alter their practice. However, the concern with these accountability-oriented approaches to managing student and teacher learning is that they often do little to cultivate the inherent capabilities that should characterize teachers’ work and learning more broadly. They are focused upon teacher accountability for external reasons, not on teacher learning to enhance teachers’ understandings of their work, so as to improve their teaching practice. They are ‘external’ systems that do not and cannot respond to the ‘internal goods’ that characterize particular forms of good practice.
An alternative approach: praxis in practice
An alternative approach to these more performative practices is one that articulates educational practice as other than a decontextualized response to various forms of external accountability, or imposed, externalized criteria. Rather, educational practice is understood as a situated social practice, wholly informed by efforts to make the best possible decisions under inherently complex, specific conditions. Kemmis et al. (2014) describe such ‘situatedness’ as educational praxis. They argue, in an Aristotelian sense, that praxis is conceptualized as a particular kind of action that seeks to foreground the approaches and traditions that reflect efforts to engage productively in the world:

Praxis is a particular kind of action. It is action that is morally committed and oriented and informed by traditions in a field. It is the kind of action people are engaged in when they think about what their action will mean in the world. (Kemmis and Smith, 2008: 4; emphasis original)
Such an approach foregrounds the importance of cultivating forms of excellence inherent to the particular field to which it pertains; as MacIntyre (2007) argues in relation to the virtues more broadly, ‘there is no way to possess the virtues except as part of a tradition in which we inherit them’ (p. 127). On this rendering, a practice is a complex mode of human activity which seeks to cultivate the particular ‘goods’ inherent within that activity, and endeavours ‘to achieve those standards of excellence which are appropriate to, and partially definitive of, that form of activity’ (p. 187). The consequence is that ‘human powers to achieve excellence, and human conceptions of the ends and goods involved, are systematically extended’ (p. 187). Such an approach to practice is able to ‘speak back’ to standards-based accountability reforms because it foregrounds the particular, situated nature of actual teaching and learning practices, not reified, generic, standardized conceptions of teaching and learning that are meaningless in the absence of consideration of context, and that actually seek to eviscerate context. In this way, power is evident in the advocacy for attention to these situated practices, rather than construing generic, standardized approaches as productive of more substantive outcomes.
Such an account is infused with power relations, challenging more scientistic approaches that do not adequately address the nature of the specific practices that come to characterize excellence in relation to a particular practice. This includes late-modern conceptions of human activities that place great faith in various sorts of technologies and techniques that seek to ‘capture’ and ‘control’ practice, including teaching and learning. Such ‘means’ – such as standardized tests, and the uses of data from these tests to critique and evaluate teaching practice and student learning – do not account adequately for the particular ‘ends’ (a robust education for individual and collective well-being) to which such means are supposedly directed. As MacIntyre (2007) argued in his elaboration of how scientistic reasoning inhibits such ends, and inhibits the potential for a truly virtuous life: ‘Reason is calculative; it can assess truths of fact and mathematical relations but nothing more. In the realm of practice therefore it can speak only of means. About ends it must be silent’ (p. 54). Such ends cannot be captured external to the particular practice – in this case, educational practice – to which they refer. Consequently, teacher engagement with data in the form of generic, standardized tests, rather than in the form of students’ everyday practices and in relation to the curriculum more broadly, is an example of an ‘external’ practice. This contrasts with more substantive, praxis-oriented approaches which focus upon the ‘goods’ internal to education – that is, actual instances of student learning in the context of daily practices, and in relation to substantive, ongoing curricula learning. When teachers engage with more substantive forms of data generated through close scrutiny of students’ daily learning, and substantive curriculum learning, they adopt a more praxis-oriented position.
Carr (2007) captures this effectively when he describes the nature of education as inherently focused upon commitment to particular educational values inherent to the practice itself, and not external to it. It is through educational practice as a form of ethical action that ‘a commitment to some educationally worthwhile “end” is given practical expression’ (p. 276). Carr (2007) goes on to argue that it is the values that characterize education as a practice that are integral to its evaluation: ‘the educational character of any practice can only be identified by reference to the educational values immanent in the practice itself – values which serve to distinguish practices which are educational from practices which are not and good educational practice from that which is indifferent or bad’ (p. 276). Education as a practice thus constitutes a form of excellence recognized in relation to that field, not against some set of external criteria.
Educational policy and practice in context
The research presented in this article was undertaken in one primary (Prep to Year 6) school site in the northern half of Queensland, in the context of significant policy reform. This included increased attention to students’ results, particularly in relation to national standardized literacy and numeracy testing – NAPLAN – and ongoing curriculum reform. NAPLAN is held in May each year; all students in Years 3, 5, 7 and 9 sit the test. The results are released in August, and can be accessed through the publicly available MySchool website. Queensland schools have been subject to particular scrutiny in relation to these results after their relatively poor performance on the inaugural NAPLAN test in 2008. As a result of this outcome, a review of literacy, numeracy and science teaching was undertaken in 2009 (Masters, 2009), leading to increased attention to testing practices more broadly, as well as various teaching and learning audits throughout the state. Also, because the results are publicly available for all to see on MySchool, the test has come to be used as a marker of the relative ‘quality’ of schools, with attendant concerns and angst among teachers, principals, parents and schooling systems. In an effort to make such comparisons more meaningful, schools are also compared against various ‘like’ schools identified as having similar characteristics to each school, although the methodology for identifying such schools is heavily contested.
Serving a lower to medium socio-economic community in a regional city, and with a population of approximately 850 students (large by Australian primary schooling standards), the school had a relatively stable and broad staffing profile, with a mix of recent, experienced and very experienced teachers. Since 2013, the school had also made a commitment to focusing upon teachers’ learning in an ongoing and systematic way. This was most evident in the way teachers from each year level met together 1 day per term in what were described as ‘Inquiry Cycles’. These days involved teachers closely scrutinizing their students’ work samples and data to help inform their teaching practice.
Methods and methodology
The research comprises a case study of the teacher learning practices in one school setting in northern Queensland, Australia. This school was part of a larger study into the nature of teacher learning practices in Queensland (undertaken in 17 schools identified by state and regional personnel as schools striving to improve their teacher learning practices for student learning) under current policy conditions. The school was chosen for closer inquiry because, through the Inquiry Cycles, it had the most systematic approach to teacher learning out of all the schools in the larger study. The research question was: How is teacher learning currently constructed in the school in the context of current educational policy reforms? A more detailed case study of its teacher learning practices was appropriate to answer this question, as it enabled deeper scrutiny of teacher learning practices, including in relation to various forms of data. Data informing the research comprise interviews with 23 teachers and school administrators – approximately half the teaching workforce in the school – in 2017. These teachers were selected to ensure balance across year levels within the group, as well as representative inclusion of perspectives (including in relation to school demographic profile, teaching experience, gender and year levels taught). They also included various ‘coaches’ – teachers identified by senior school administrators as exemplary practitioners, who were relieved of their own classroom duties, and allocated to work closely with teachers at each year level to improve their teaching practice; this included how to engage with various forms of student data to enhance their students’ learning.
While this involved close scrutiny of the work of students who struggled to achieve passing grades in relation to a broad array of classroom data, NAPLAN data were also an area of attention, reflecting constant policy pressure to improve students’ results on these national tests.
Teachers were asked to reflect upon the nature of the teacher learning practices currently undertaken in the school, the nature of data collected within the school and how they engaged with these data. These interviews form part of a larger corpus of data relating to the nature of teacher learning practices undertaken in this school since 2013. Trustworthiness of the data was achieved by comparing teachers’ responses in 2017 with interview and professional development meeting data collected during the previous 4 years. Individual teacher responses were also compared with one another throughout the data collection and analysis phase to identify any anomalies, and further clarifying questions were asked if anomalies became apparent. Teachers were purposively sampled (Bogdan and Biklen, 2007) across year levels to ensure coverage across the school, and teachers’ roles within the school. The interviews from 2017 were also selected to provide the latest insights into teachers’ perspectives on educational reform within the school. Interviews were approximately 20–40 minutes in duration, tape-recorded and transcribed remotely. The research was conducted in accordance with the university and educational authority ethical guidelines and protocols.
Analytically, the research employs both deductive and inductive approaches to data coding to identify key themes (Miles and Huberman, 1994); this entailed drawing upon the literature on the nature of educational practices under current conditions (as outlined in an earlier section of this article), at the same time as inductively analysing the data. Such an approach was also somewhat ‘abductive’ in nature, focusing upon ‘new’ insights emerging from the data (Reichertz, 2014), but in light of relevant theorizing and research: preliminary hypotheses were identified, and these insights then subjected to further data analysis and theorizing. The result was the development of key themes on the basis of the data as a whole, and in relation to the relevant theorizing and literature (as outlined earlier).
Findings
This process revealed three key themes in relation to the deployment of various techniques and technologies to foster teacher engagement with data at the school: various forms of coaching and engagement in ‘short-term data cycle’ conversations, sometimes explicitly oriented towards national testing; significant attention to setting particular academic targets/goals to attain, and the associated identification of ‘target students’ for close scrutiny and attention; and efforts to ‘align’ various forms of data, including through teachers’ engagement with ongoing ‘spirals of inquiry’.
National testing, coaching and the short-term data cycles
A key practice within the school was the ongoing engagement of teachers with specific ‘coaches’ dedicated to their year level. These coaches primarily focused upon the younger year levels, with an individual coach dedicated to Prep and Year 1 students, another coach for Year 2, another for Year 3, another for Year 4, and with the Head of Curriculum devoting some of her time to work with teachers of Year 5 students. It was believed that by allocating increased attention to students in the younger years in particular, problems in later years could be circumvented. However, reflecting more performative logics (Lyotard, 1984), in the context of constant policy pressure to improve students’ literacy and numeracy, particularly in relation to NAPLAN, the coaching and associated data conversations were also overtly construed as a vehicle to help redress lower NAPLAN results, even as they had evolved from that initial focus:

Initially the thinking was around NAPLAN in fact, … So the thinking was around some support for the teachers to increase perhaps, boost the NAPLAN a little bit initially. (Yvette, Head of Curriculum, March 2017)
The influence of more competitive and performative logics was also evident in the way the Year 3 coach made sense of the different sorts of activities occurring in the classes she visited as part of her coaching role. This included how the curriculum was organized more broadly with attention to NAPLAN-like activities:

That’s just the – so just in the unit so in Unit 1 they do, it’s just a persuasive, they were looking at the purpose of a persuasive and the audience … and then this unit is more narrative. And to be honest, I think that’s because in NAPLAN next term it’s either one or the other; so it kind of just sets them up for a bit of both. (Sonya, Year 3 Coach, March 2017)
Nevertheless, and reflecting a more praxis-oriented approach, this coach also sought to frame the coaching as multifaceted and substantive, noting how it had shifted from being dominated by the coaches modelling particular lessons and approaches to their colleagues, to a greater focus upon serving as a stimulus for on-the-spot learning, and ongoing reflection upon teachers’ practices during what were described as ‘short-term data cycle’ (STDC) meetings:

At the moment we’re trying to build teacher capability and capacity depending on their needs, … So last year I found that we were doing a lot more modelling of lessons; a lot more so we’d model for an hour, they’d give feedback, then we would observe them, give feedback [during STDC meetings], whereas this year we’re doing a lot more co-teaching alongside them. (Sonya, Year 3 Coach, March 2017)
Even as it was seen as stressful, the focus on data and coaching was also valued by teachers. While performative logics were evident in the emphasis upon the short-term nature of efforts to enhance students’ results, teachers found the coaching beneficial:

Like the coaching agenda – I worry that that is being viewed as someone’s looking over my shoulder all the time so I better be doing a good job! So a little bit of accountability there. Having said that, every time I ask for feedback, and even this morning [the principal] asked some of the Year 3 teachers ‘Is it the coaching agenda that’s putting you under the pump’, and they said ‘No, they love it’. (Yvette, Head of Curriculum, March 2017)
And this was corroborated by teachers themselves, including some time into the year, after experiencing engagement with the coach since the beginning of the year. Even as teachers appeared to be responding to immediate pressures to enhance students’ results, there was also a sense in which the input of coaches was valued for helping build teachers’ capacities more broadly, and beyond more standardized modes of data:

And [the coach] is always checking, ‘Well do you think you’ll be able to make it to that goal?’ And sometimes I’m not sure, so I bring in evidence of work and discuss whether she thinks that this is over C level … So, I find that [coaching] really beneficial, because I obviously explain how I’ve taught the lesson, and then strategies I use. But I ask her, ‘What would you do in this situation?’ So, I find her input really valuable. (Tina, Prep Teacher, May 2017)
Reflecting the influence of a commitment to educational values (Carr, 2007) rather than being solely results-focused, the weekly data conversations (STDC meetings) with coaches were considered very helpful for enabling teachers to reflect upon their practice more broadly, to consider where their students were at in their understanding, and to set goals for the future:

Yep, so that’s been really helpful this year actually to have that weekly meeting with Yvette and just to sort of, you don’t get a lot of time to sort of sit and reflect about what you’re doing … I find that it’s helpful to talk about it with someone else. (Lauren, Year 5 Teacher, May 2017)
This opportunity to sit with another teacher and reflect on her practice and to take into consideration the ‘ends’ of seeking to improve her work with her students, rather than being dominated by the ‘means’ of doing so is indicative of a more ‘virtuous’ approach (MacIntyre, 2007) – in this case to teacher learning for student learning. Teachers valued both the input provided by their respective coaches, and the time to reflect upon their practice with a colleague – forms of horizontal knowledge development (Marsh et al., 2015). This was the case even as there was considerable initial focus upon this work as arising out of more performative concerns about results on NAPLAN.
‘Audacious goals’ and targeting students
At the same time, and reflecting more performative concerns within the school, there was a clear sense in which teachers were encouraged to ensure a specified proportion of students achieved passing grades in their classrooms; in this case, a much more technicist approach to the field of education seemed apparent. Described by some teachers as an ‘audacious goal’, this goal was focused primarily on English, with a target of 85% of students achieving a level ‘C’ or above:

So if we have a target … an ‘audacious goal’ was 80% of students would have a C or B or A. C level or above. (Jess, Prep Teacher, May 2017)

I think the goal is 85% of all students to achieve a C and above in English. And that’s scrutinized. (Lou, Prep Teacher, May 2017)
There was also evidence that this goal had changed over time, and that there could also be pressure on teachers in relation to this target:

85% here … C and above. … It was 80 at one stage. I know when I first moved here it was 80 … There can be [pressure around that number]. (Constance, Year 4 Teacher, May 2017)
Indeed, this pressure was seen as having increased over time, as had the specific target. The collection of data about these goals at the regional level reflected part of the infrastructure of accountability (Anagnostopoulos et al., 2013) in place to enhance the proportion of students achieving at C standard. On this rendering, the cultivation of notions of excellence within the particular field (MacIntyre, 2007) appeared wanting. The focus on these numbers, and associated pressures, was also seen as a result of more performative pressure upon the principal at the regional level to ensure that certain goals were attained within the school:

[Principal] tries not – as far as possible – he tries not to put the pressure on, but in his role, he has to meet targets from the ARD [Assistant Regional Director] and above for what regional directives are. (Jess, Prep Teacher, May 2017)
For this teacher, there was also a sense in which teachers were expected automatically to be able to respond to these goals. The raising of the goals was seen as appropriate, as the proportion of students attaining the goal was seen as increasing:

So 80% of students in [name of school]. And then it was, ‘Well, you’re nearly there, so why not make it 85!’ (Jess, Prep Teacher, May 2017)
Also, while teachers were influenced by different sets of pressures and practices from those outlined in the literature on value-added measures in the United States (Konstantopoulos, 2014), there was a clear sense that teachers were not only endeavouring to attain particular goals, but were doing so by ‘targeting’ particular students. In this case, there was significant attention to students performing at a D standard; these were typically students performing just below the threshold for passing, who were targeted for particular intervention to enable them to achieve a passing grade. In this instance, by focusing on selected students, a more reductive form of educational practice seemed to be at play, explicitly challenging a conception of praxis as morally informed and committed to more inclusive traditions within the field of education (Kemmis and Smith, 2008). Teachers were expected to ‘add value’ to these students, who were themselves clearly part of a form of ‘triage’ practice (cf. Booher-Jennings, 2005) within the school, emphasizing attention to some students over others. These were described as ‘target students’, and their progress was a key focus of attention in the school:

We do have targeted kids. … So at the moment they are about a D level, so I’m looking to shift them to a C. (Loretta, Year 1 Teacher, March 2017)

I think we targeted about five students each, some four, some six. But the ones who were basically the Ds, [who] had potential to make it a C with just a little bit of extra attention. (Mitch, Year 6 Teacher, May 2017)
Significantly, these were students who were not too low (‘E kids’), and who were deemed likely, with additional intervention, to achieve passing grades. Some teachers were explicit that these students were subject to extra scrutiny in an effort to enhance the data:

I guess it’s always been a bit of a focus and I guess the idea is if you push those Ds to Cs, your LoA [Level of Achievement] data looks better. (Gillian, Year 4 Coach, March 2017)

It looks better in the data. It’s data driven. … [It’s] regional pressure. … Yep. Definitely. (Felicia, May 2017)
However, and reflecting a more morally committed, praxis-oriented stance (Kemmis and Smith, 2008; MacIntyre, 2007), teachers also expressed reservations about such goal setting for selected students, even as more performative pressures were evident. Rather than focusing upon the internal ‘goods’ inherent to substantive educational practice (MacIntyre, 2007), the school administration team was described by one teacher as heavily influenced by this focus upon data, even though this was not what the work of teaching was really about:

They want to improve their data. … but that’s not what it’s about. It’s about trying to improve the academics about that child, the learning of that child. But for them [school administration team] it’s about the improvement in their data and that little graph, or whatever it is we look at so we have more in the C range than we do in the D range. (Lou, Prep Teacher, May 2017)
And while there was a sense that teachers were focusing heavily upon targeted students, they took seriously efforts to enhance the learning of all students, and not just the ‘data’ to which that learning pertained:

I feel that if I can sort of put a bit of extra into them and just sort of consolidate – like I feel that they’ve got the skills there; they just need that extra bit of a push to build upon their knowledge and build their confidence so that they can be sort of moving up, and just sort of consolidating their understanding. (Riley, Year 2 Teacher, March 2017)

I’m definitely running more than one [target group] … I don’t know if it’s supposed to be like that but that’s how it seems to be for me. … I like to target all of my students. (Jess, Prep Teacher, May 2017)
In these ways, even as teachers’ work was heavily prescribed by the focus upon particular students and data, a much more praxis-oriented stance, emphasizing the values inherent to education itself (Carr, 2007), was evident. Such a stance was characterized by broader concerns about students’ learning, and by ensuring teachers attended to the needs of all students, not just those who were ‘targeted’. Teachers did not simply accept the reductive ways in which data were sometimes used or engaged with in the school. This did not entail open conflict with the school administrators, whom teachers also recognized as struggling with such reductive, test-centric approaches to data, but it did involve striving to take a holistic approach, and to be attentive to the learning needs of all of their students, not simply those ‘targeted’ for additional intervention.
Aligning data and the Inquiry Spirals
Nevertheless, the focus on data was pervasive, and there was a sense in which processes of monitoring data occurred from the outset of the year. This included attention to the ‘alignment’, or lack thereof, between the different forms of data collected, as evident in students’ individual profiles. An extensive infrastructure of accountability (Anagnostopoulos et al., 2013) was in place, and included classroom-based ‘Level of Achievement’ data (A to E), national standardized data (NAPLAN), regionally collected standardized data (Progressive Assessment Test (PAT) data for Reading, Mathematics and Vocabulary) and school-based standardized data (particularly levelled reader data, sometimes described as ‘benchmark’ data):

At the start of the year … we were given our class profiles and looking at the alignment of the LOAs, so Level of Achievement, in English for students, their NAPLAN results for English, the reading and writing and also the PAT testing … Yeah, so all of that evidence or data of each individual student from the previous year … And we could look at that, and see if there was alignment. (Shelby, Year 4, March 2017)
For this teacher, a lack of alignment was construed as problematic, and something that teachers should aspire to address:

I would say you’d be questioning why, definitely. … If they were achieving higher on the national testings and their reading level was high, yet they’re achieving lower in the classroom in terms of through the Units of work in the classroom. Yeah, you’d definitely be asking why. Like, what’s the case there? Is – yeah, through your teaching in the classroom? (Shelby, Year 4, March 2017)
This attention to aligning data was seen as the result of multiple reform initiatives happening in the school, and almost appeared to take on a life of its own. Again, there was the potential for attention to data for their own sake, challenging more substantive engagement with data, and reflecting a challenge to a more praxis-oriented approach. Attention to this alignment and engagement with data also occurred in the Inquiry Cycle meetings – day-long meetings held every term for each year level since 2013. Evidence of the success of such initiatives was construed in the form of enhanced data outcomes, and increased alignment of data:

The evidence of effective work from the level of the instructional coaching and Inquiry Cycle is improved data; increased level of achievement, increased … size in the PAT-R, increased levels in terms of reading PMs etc. … So and they’re all aligning nicely. (Dorothea, Deputy Principal, March 2017)
During 2017, these meetings became increasingly associated with the development of various ‘Spirals of Inquiry’. The focus of the Spirals of Inquiry was to develop ‘hunches’ about students’ performance, to gather evidence to test whether these were valid, to analyse these data and, if necessary, to make a new ‘hunch’ about why students were underperforming. This included collecting not only standardized data (such as that associated with NAPLAN and PAT-R tests) but also evidence of students’ ongoing reading practices in the classroom. Reflecting more active, interrogative and potentially productive tensions around data use (Stillman, 2011), this involved ongoing processes of collecting data, particularly reading data, in a variety of forms, and asking questions about their validity:

And a lot of our learning is around reading predominantly; so as a cohort we made a ‘hunch’. At the moment, we have a diverse group of people – little people who are at different stages of learning to read and that was the hunch. … And this spiral of inquiry [is] to refine that particular hunch now that we’ve had more data. (Lou, Prep Teacher, May 2017)
However, and again reflecting the potential to narrow the focus to a more reductive conception of education, the Spirals of Inquiry, as the latest iteration of the Inquiry Cycles, had become more tightly focused upon the needs of the ‘targeted students’:

So we had a PD [professional development] on developing a Spiral of Inquiry with our cohort … And the teachers brought along evidence of student work – their targeted students. … It’s really trying to shift our D kids up to that C. (Hyacinth, Year 2 Coach, March 2017)
Interestingly, however, and reflecting a more inclusive conception of education, teachers adopted a broader approach to the Inquiry Cycle days, endeavouring to collaboratively develop better understanding and knowledge about specific aspects of the curriculum they were teaching for all students, thereby potentially transforming their own learning practices (Author et al., 2017). As a result of engaging with one another (horizontal expertise), and learning individually from the coaches and facilitator of the Inquiry Spirals (vertical expertise; Marsh et al., 2015), teachers were able to elaborate in considerable detail upon how they had narrowed their focus of attention to more specific aspects of curriculum and teaching practice to enhance students’ learning. A more praxis-oriented conception of education was evident in how the ‘targeting’ in this case was focused on specific curricular issues that were pertinent to the needs of all students, rather than just the target students:

[Inquiry Cycle days] are used for our learning and development as teachers to be able to target areas that we need to work on. … Sometimes like phonemic awareness, phonics could be a target area as well. … It was working on their phonemic awareness and I think their one-to-one correspondence and their letter-sound knowledge in order to better facilitate their reading skills and writing as well. (Mave, Prep Teacher, May 2017)

It was about focussing on their comprehension in their reading … So we thought that it was in that comprehension level in the reading of it that they were falling down, and then not able to transfer into their writing. And so that’s where we sort of focussed. (Coralie, Year 5 Teacher, May 2017)
In this way, the Inquiry Cycle days were valued for what teachers continued to learn in relation to the curriculum and their teaching, and about their students’ understanding and comprehension, even as attention was simultaneously focused upon a more limited ‘targeted’ group of students.
Discussion
The way in which teachers’ work and learning were oriented towards enhancing outcomes on various standardized measures of learning, including national test results (NAPLAN), is a manifestation of how a broader infrastructure of accountability has been exercised most recently in schooling settings, and how education can be reduced to numbers. Such an infrastructure entails new forms of managing and monitoring particular construals of evidence as learning (Henig, 2013), and contributes, at least in part, to increased performative pressures upon those in schools as they grapple with these new modes and modalities of ‘capturing’ and denoting ‘learning’. However, at the same time, even as more performative pressures were evident in teachers’ comments and concerns about specific approaches to data generation and use in the school, there was also evidence of efforts to ameliorate some of these effects, perhaps to appropriate the focus on data for more educative purposes (Hardy, 2014), and a desire to engage more productively with particular data technologies and infrastructures as these permeated teachers’ practices. In short, practices oriented towards enhancing the learning of teachers to improve their capacity to work with all of their students were apparent, and reflected traditions of excellence within the field (MacIntyre, 2007) – in this case, the field of education.
This was challenging work. Indeed, the very name of the ‘short-term data cycles’ (STDCs) reflected how a particular kind of ‘quick fix’ infrastructure was enabled, as part of various forms of accountability, including test-based accountability, and associated data (Anagnostopoulos et al., 2013). The development of this initiative as part of the broader infrastructure of accountability reflects not only the hope that data would ‘shift’ fairly rapidly through this process, but also the assumption that short-term approaches to teachers’ learning were appropriate for such work, and that they could somehow be productive of enhanced learning – in terms of student outcomes, but also in relation to teachers’ learning and capabilities. Again, such short-term thinking is yet another feature of the broader GERM (Sahlberg, 2015).
However, the focus on the STDCs was multifaceted and existed at multiple levels, displaying both performative and more educative, praxis-oriented foci and approaches. At one level, because they were associated with intimate and ongoing coaching of teachers in an intensive one-on-one format, the discussions that arose during these meetings were construed as an opportunity for teachers to engage with a coach on an ongoing basis around the nature of students’ specific learning needs, and for such conversations to potentially become important mediating factors in enhancing teachers’ pedagogies (Marsh et al., 2015). The conversations with the coaches were valued for the way in which they served as vehicles for teachers to reflect upon their practice, and to engage with a colleague about how they would seek to enhance their practice further, based on multiple forms of student data (not just various standardized forms of data). These included standardized data collected nationally (NAPLAN), regional data (Progressive Assessment Tests for Mathematics (PAT-M), Reading (PAT-R) and Vocabulary (PAT-V)) and school-level data (typically described as ‘PM Benchmark’ data, after one of the large commercial providers of ‘levelled’ reading books). However, these data also included Level of Achievement data (A to E), collected as evidence of students’ ongoing learning based on the curriculum, as well as various samples of student classwork. This broader conception of data reflects a much more praxis-oriented stance, as did discussions about teachers’ teaching practices more broadly, and the extent to which they provided the opportunity to enhance student learning; a commitment to a broader conception of educational values was apparent (Carr, 2006).
Teachers’ descriptions of the STDC meetings reveal a much more detailed, elaborated approach to teachers’ practice than if they had focused upon short-term improvements in students’ results alone, or upon longer-term enhancements to standardized data – even as these foci were clearly in evidence. These elaborations reflect a more praxis-oriented stance on the part of teachers, as these discussions became oriented towards enhancing teachers’ understandings of their practice as a vehicle to enhance their teaching more broadly, and constituted part of the ‘internal goods’ (MacIntyre, 2007) of education – in this case, in relation to teachers’ professional learning. The ‘situatedness’ of children’s learning was in focus, through teachers’ learning.
This was, however, a fraught process, given the more performative demands within which teacher and student learning were undertaken. In the context of pressure on teachers to ensure enhanced student performance on measurable outcomes (e.g. the number of students achieving at or above a C standard), construals of teachers’ work, such as the coach taking an ongoing and active interest in teachers’ work (‘and (coach) is always checking, “Well, do you think you’ll be able to make it to that goal?”’), reflect performative logics writ large. In a sense, what was required was constant scrutiny of teachers’ practice, and of understandings of their practice, to secure enhanced student outcomes; that teachers reflected that ‘there could be pressure’ around the push to enhance ‘the data’ reflects such performative concerns. Under such circumstances, the extent to which the best traditions of practice within the field (in this case, education) can be actively cultivated, characterized by a logic internal to that field (MacIntyre, 2007), is a moot point.
Nevertheless, the genuine gratitude of teachers for the input of the coaches, enabling them to discuss their practice more broadly, and the active interrogation of a broader conception of data in the Inquiry Cycle meetings, give some hope that more reductive, data-centric practices were not the only logics at play; such logics could be resisted by teachers’ efforts to cultivate a more ‘virtuous’ disposition, focused on ‘educational values immanent within the practice itself’ (Carr, 2007: 276). The way in which teachers valued the conversations reveals something of the ‘productive tension’ (Stillman, 2011) between more accountability-oriented logics, and the potential to focus attention beyond a limited array of ‘target students’.
There is no doubt that the setting of specific ‘targets’ in the school, and the attention to achieving certain ‘goals’, exerted considerable influence. That the ‘terrors of performativity’ (Ball, 2003) could permeate these practices was evident in the way teachers reflected upon the demands to ensure students attained particular goals. The targeting of specific students perhaps best represents processes of datafication more broadly. Even as the evidence for the value of various value-added measures remains equivocal at best (Konstantopoulos, 2014), there was no doubt that these teachers were seeking to ‘value-add’ to their students, and that the value added was to be measured and monitored. The latest iteration of the Inquiry Cycles, the Inquiry Spirals, while still providing teachers with opportunities to analyse the curriculum, and their teaching and assessment practices more broadly, tended to focus upon the outcomes of a more ‘select’ group of students.
Again, however, teachers did not simply capitulate to the focus upon specific students. A much more praxis-oriented stance was evident in how some teachers openly critiqued the emphasis upon the ‘target students’ as being about making the data ‘look better’; more performative logics could be resisted, at least through naming problematic practices. Such an approach reveals a more educative disposition, focused upon the ‘goods internal to that form of activity’ (MacIntyre, 2007: 187) – in this case, education. Such critiques reflect the valuing of a substantive approach to educating all students, not simply those flagged as likely to enhance school data for its own sake. The way in which some teachers sought to target all their students, even as they felt pressure to ensure specific students’ results were enhanced, reflects concern beyond more performative logics. Similarly, a more praxis-oriented stance was evident in how teachers also focused upon specific samples of students’ work within the Inquiry Cycles, including maintaining an emphasis upon what these samples revealed about the teaching practices that had given rise to them. There was also a sense that the way in which they elaborated upon students’ work, and upon what students needed to do to enhance their learning (‘their one-to-one correspondence and their letter-sound knowledge’), was indicative of a more educative disposition on the part of teachers, and one not simply dominated by data for its own sake. In these ways, a form of education as praxis – as ‘morally committed and oriented and informed by traditions in a field’ (Kemmis and Smith, 2008: 4) – was evident.
Conclusion
The focus of teachers, coaches and school administrators within the school on specific goals and students reflects how broader processes of measuring, managing and monitoring schooling practices can arise as a result of more performative policy conditions, and influence all aspects of schooling, including teachers’ learning (Hardy, 2017). Such practices transpired in the context of a sense of urgency about students’ academic results, both on standardized tests and in relation to students’ classwork more broadly. This was expressed most overtly in the form of what were described as ‘short-term data cycles’, as well as in engagement with various forms of standardized data within the various repositories – infrastructures of accountability – at the state and school levels within which data are collected and collated. Nevertheless, while there is clear evidence of data governance through short-termism, and an increasingly specific focus upon particular students, this was challenged. That teachers also sought to resist the more limited and limiting conceptions of their work reflects a much more educationally centred, praxis-oriented approach to schooling, reflective of a commitment to particular educational values inherent to the practice itself (Carr, 2007). Arguably, such a commitment was also evident in the way teachers seemed to make the most of, and valued, the opportunity to work closely with a colleague as a coach to enhance their own practice. A more educative disposition was also evident in their efforts to work with all children, to challenge more reductive conceptions of the focus upon data, and in the way the Inquiry Cycles were recognized as vehicles for deep and collaborative inquiry into their practice, for enhanced student learning.
Indeed, this notion of Inquiry Cycles would seem to have cogency beyond Australia; analysis of such work in different national settings could also serve to challenge more performative conceptions of childhood as currently constituted through various forms of ‘global’ data, particularly if the cycles are fostered as vehicles for teachers to develop deep and collaborative understandings about their practice. A more critical approach could enable a much richer conception of ‘global’ studies of childhood. However, such possibilities need to be proffered cautiously, and always in the knowledge that more reductive approaches to understanding and deploying data in schooling settings are enabled by uncritical conceptions of data, and by an infrastructure of accountability that foregrounds more standardized and enumerative forms of data as the evidence of most worth.
Acknowledgements
We wish to acknowledge the support of the teachers and school administrators involved in this study, without whom the research would not have been possible.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Australian Research Council under Grant No. FT140100018.
