Abstract
This article discusses the effects of the datafication and digitalisation of education policy in the context of the Russian Federation. It taps into the policies and practices invented as a result of rising audit cultures and the scientisation and datafication of education governance. These processes turn sites of public examinations into sites of numerical data production on education, and make school systems accountable to data production. The article draws on the notions of power and the non-human object, as developed by Actor-Network-Theory scholars, in order to make sense of the recent introduction of obligatory video surveillance equipment during public examinations. The article argues that demands for accountability and data objectivity have led to a complex surveillance regime – a surveillance assemblage – that intensifies observation through various technical means and generates new data, i.e. data on data production. Video surveillance is called upon to act as a mediator that translates and holds parts of the fragile assessment-network together by coercing each participant into the role of docile data producer. The introduction of video surveillance manifests and endorses a deep mistrust of the human being, translates data objectivity into a procedural matter and opens up new business opportunities for the commercial surveillance and security sectors.
Introduction
In May 2014, I travelled to St Petersburg, Russia, for a conference organised by the Russia Education Aid for Development (READ) Trust Fund, financed by the Russian authorities and administered by the World Bank (www.worldbank.org/en/programs/read). The conference, entitled ‘Measuring for Success. The role of assessment in achieving learning goals’, offered its participants a day of site visits, aiming to ‘(i) expose participants to the Russian education system, particularly examples of good practices within that system’ and ‘(ii) help participants understand the role of learning goals in the Russian system and how assessment is used to monitor and achieve those goals’ (www.worldbank.org/en/events/2014/02/20/read-conference-2014#5). I signed up for a visit to the Regional Centre for Education Quality Evaluation and Information Technologies, knowing that one of its main responsibilities is to administer the Unified State Exam (USE) on the territory of St Petersburg and the Leningrad Oblast (a federal unit within Russia). The USE is a national, mandatory, standardised school leaving examination. As we strolled around the building, our guide – the head of the centre – introduced us to the new surveillance devices installed in rooms that looked like ordinary classrooms and were, indeed, often used as such for in-service teacher training. Due to the new security regulation introduced in 2014, the rooms were equipped with CCTV, usually two cameras per room, with one delivering footage to Moscow and the other to a control room equipped with a big screen and located on the centre’s premises. Our guide explained that during the USE the rooms accommodate the education specialists who mark part C of the standardised exam. This section of the test is scored manually by the appointed subject specialists, as opposed to parts A and B, which are scored by a machine.
We learned that the cameras need to be placed with care in order to observe the room entirely, and that all important procedures, such as the opening and sealing of the examination packages, in addition to marking, should be captured on camera. Our host also mentioned that she controls the procedures by watching the screen in the control room.
This paper is an attempt to understand the context and meaning of what I saw on that trip, analysing education policy documents, newspaper articles and various internet sources in order to explore the recent history of video surveillance as a technology involved in administering standardised testing in Russia. In broad terms, the aim of this article is to shed light on particular new practices introduced to foster knowledge production on education. These practices, in this case the introduction of surveillance cameras, are presented by their authors as necessary and legitimate precisely due to their important contribution to knowledge accumulation. They appear at a controversial moment defined by audit cultures (Shore, 2008) and the scientisation and datafication of education governance (Grek and Ozga, 2010; Selwyn, 2015) as the ingrained conditions of contemporary education. Thus it is important to raise a number of questions, such as how these practices attempt to reorganise life, remodel public sector institutions, refashion working environments and transform the subjectivities of citizens and professionals (Shore, 2008: 280), and, consequently, rearrange old and instigate new power relations.
Inspired by the ontological and epistemological thinking in studies assembled under the broad label of Actor-Network-Theory (e.g. Fenwick and Edwards, 2010; Latour, 2007 [2005]; Law and Hassard, 2005 [1999]) my concern lies with the role of newly introduced non-human actors in the processes of knowledge production on education. Instead of attempting to map the actor-networks that make up standardised testing in Russia, I am at this time interested in capturing the mediating role of surveillance cameras in the weaving together of the network. The opportunity to explore the role of an object is ripe when new or renewed associations are made, in particular, when a ‘thing’ is invoked to exert a connection in place of a weak or failed social tie. Drawing on and paraphrasing a question once posed by Bruno Latour in order to explain how to envisage such ‘things’ as full-blown actors in social processes, I ask what difference video surveillance technology is expected to make to other agents’ action (see Latour, 2007 [2005]: 71, and also more generally 70–82).
Research materials
This article draws on textual data, qualitatively extracting the facts and arguments that shed light on how the Russian authorities officially motivate the introduction of compulsory video surveillance during high-stakes state examinations. The analysis is based on various sources of information collected specifically for this paper and related research projects. The materials comprise public education policy documents that mandate video surveillance, and decrees and recommendations that specify how the technology is to be used. In pursuing the argumentation behind the new regulation, the study examined news items posted on the official website of Rosobrnadzor (Federal’naya Sluzhba po Nadzoru v Sfere Obrazovaniya i Nauki, the Federal Service for Supervision in the Sphere of Science and Education; http://obrnadzor.gov.ru/ru/), an official website dedicated to information on the USE (http://www.ege.edu.ru/), and a newly established, state-sponsored website committed to promoting ‘honesty and objectivity’ during the USE (http://честныйегэ.рф/). In addition, I analysed news items posted on the website of Rostelekom, the state’s main telecommunications partner in implementing CCTV. The news items were a fruitful source of information, as they guided the research to official policy documentation, and simultaneously enriched the data with opinions expressed in official meetings and seminars, and in reported interviews with representatives of the education authorities and Rostelekom.
The purpose of expanding the research materials was also served by selecting relevant articles from a range of Russian newspapers. The articles, 65 in total, were collected through keyword searches in Russian (e.g. ‘USE and video’, ‘USE and cameras’) in Integrum, the largest collection of Russian and CIS (Commonwealth of Independent States) databases, covering all national and regional newspapers. In selecting the newspapers, I focused on those that I assumed to be well known and widely read, i.e. Argumenti i Fakti (Arguments and Facts), Izvestia (News), lenta.ru (a news website) and Moskovski Komsomolets (Moscow Komsomolets), and included an official newspaper of the Russian government, Rossiyskaya Gazeta (Russian Gazette). I also incorporated two more specialised weekly newspapers: Poisk (Search), which caters to the national scientific community, and Uchitel’skaia Gazeta (Teachers’ Gazette), which serves Russian-speaking schoolteachers. These periodicals specialise in publicising new legislation and practices and in encouraging public and professional discussion of topical issues related to education, training and science. The starting year for the searches was 2012 in order to (potentially) capture debates prior to the advent of the new regulation, with the materials from Uchitel’skaia Gazeta starting in 2009 due to the publication’s central role as an information channel between the central education authorities and the school community throughout Russia.
Personal fieldwork notes from an earlier research project on Russia as a returning donor to education (documented in Piattoeva and Takala, 2015, and Takala and Piattoeva, 2012), during which I met evaluation professionals from the post-Soviet space and beyond, encouraged the present focus on surveillance and security technologies as important but under-researched actors in standardised testing. The introduction of strict control measures and their reliance on advanced technology is not a unique feature of the Russian testing system, and further research will be necessary to show how this phenomenon unfolds in different contexts. It goes without saying that the current paper is only able to document and explore a small fraction of the research material accumulated.
Scientisation and datafication of education governance
The spread of the New Public Management (NPM) approach to governance, manifested in the rising demand for data about the working of the public sector and, particularly, that of education (Lawn et al., 2011), is inherently embedded in the broader phenomenon of the rise of the ‘evaluative state’ (Neave, 1998). It operates through an elusive frame that splits up and assigns tasks to an array of new instruments, and new intermediary bodies and actors, whose functions are legitimised through extensive legal modifications (see Neave, 1998). The ‘multiplication of the levels of oversight’ (Neave, 1998) solicited from and produced in these new networked structures constitutes the system whose distinct parts are intertwined in a complex arrangement of observation, data production and calculation. Equally important as a context is the complex association between the upsurge in ‘knowledge-based economy’ thinking, which defines national competitiveness in terms of the level of national human capital, increasingly manifested by positions in cross-national student achievement testing, and the redefinition of policy-making as a knowledge-intensive and knowledge-dependent endeavour. As a result of these broad transformations, education is squeezed between two broad, interlinked agendas that pursue measurable knowledge and knowledge about knowledge (cf. Fenwick et al., 2014).
The calculative rationality and the consequent datafication of school work arise from concerns over effective governance and the maximisation and accountability of the results of school work (e.g. Finn, 2015). Embedded in these moves is a shift from teaching to learning as the focus of pedagogical work, and the consequent re-positioning of teachers as accountable for individual and aggregate learning outcomes. Knowledge, in particular statistical knowledge on education, is charged with the task of ‘revealing’ problems and shaping solutions, adding a scientific and thus objective flavour to policy-making – a phenomenon referred to as the scientisation of education governance by Grek and Ozga (2010: 272). The data combines different pieces of information, its prime components being numerical indicators of academic achievement and progress in achievement. It is then used in hard and soft forms of regulation, that is, to command and to discipline, as well as to enlist and to seduce at a distance by means of ‘soft tools’, such as comparison, benchmarking and incentivisation, as the core new government technologies (cf. Piattoeva, 2015a). The data is thus introduced as a technology of government with a dual and contradictory purpose, that is, to conduct others’ conduct and to authorise policy-makers. The former function of data is performative, meaning that evaluation processes and the data that they generate are primarily seen as a means to govern at a distance, with the quality of the produced data being secondary to the processes of change and compliance that it is expected to inspire (e.g. Power, 2004). For the latter function, data has to be seen as objective and reliable, no matter to exactly what use it is put in political decisions.
In the present education accountability contexts, the paradox of datafication lies in the fact that educators are required to collect or produce the very data that is then used to ‘[m]onitor, measure, and, potentially, punish them’ (Anagnostopoulos and Bautista-Guerra, 2013: 56). Since digital monitoring and surveillance systems communicate to employees that they are not trusted, these technologies breed mistrust and resentment in return. Consequently, surveillance technologies promote the behaviours that they then seek to curtail, and the corrosive effects of surveillance extend to those doing the surveillance, so that they increasingly mistrust the ones whom they monitor (Kramer, 1999). The issue of soliciting data from the very people who are most affected by this data surfaces in education policy debates. This means that the authorities recognise the constitutive effects of performativity-driven education policy, such as fabrication, gaming, creative accounting and cheating – the anti-programmes of governance by numbers, to put it in Latour’s (1991) terms. It is clear that such calculated (re)actions potentially jeopardise the two governmental roles attributed to the data.
In the national context reported in this article, the complex issue of soliciting data from those who are then affected by it is commonly translated into a problem of ineligibility. The informed professional judgement of traditional experts, especially teachers, is highly mistrusted because, as the argument goes, it is either arbitrary or biased and incommensurable beyond its context, and thus those who produce the results should not be put in a position to evaluate them (e.g. Civic Chamber of the Russian Federation, 2014b). As the ‘problem’ is defined in terms of teachers’ poor reliability and inability to pass commensurable judgement, the resulting policy proposals advocate independent evaluation. This notion implies a standardised, formal and largely quantified procedure performed by specialised, often external agencies and digital testing technologies that can produce objective data.
‘Objectivity’ is a term that figures prominently as an argument to justify the introduction of independent structures, external audits and digital technologies. However, objectivity is here imbued with a particular, administrative connotation, and it is conflated with impersonality. Impersonality reveals itself as a matter of correct procedure rather than correct method of inquiry, and the standardisation of and tighter control over evaluation procedures are substituted for attaining what is ‘true’ (cf. Porter, 1994; Megill, 1994). As Megill (1994: 10) brilliantly writes, the procedural sense of objectivity plays with a ‘[g]overning metaphor that is tactile, in the negative sense of ‘hands off!’ Its motto might well be ‘untouched by human hands’. Ironically, ‘impersonality’ is employed in a literal sense – because humans are not trusted to perform objectively, the procedures are impersonalised by removing the human as far as possible, and replacing her with a thing.
This paper links concerns for data objectivity and legitimacy with the increasing use of technological devices that seek to steer data production. The task of technological devices, in particular video surveillance cameras, is to improve objectivity by controlling the actions of the people who are involved in data production in various roles and at different phases. This diverse group of people composes the assembly line of the data factory. In this manner, the present article shares a concern with the workings of the infrastructure of accountability (Anagnostopoulos et al., 2013) and data infrastructure (Sellar, 2014) against the backdrop of emerging digital governance and data-driven governance (Selwyn, 2015; Selwyn et al., 2015; Williamson, 2015) in education settings. The focus on infrastructure intends to shift the spotlight onto the assemblages of technology, people and policies that enable the production, flow and diverse utilisation of data across and beyond the boundaries of the education system (Anagnostopoulos et al., 2013: 2). Importantly, the data infrastructure perspective aspires to explore how data production, and not only its use, serves as a type of governance (Anagnostopoulos and Bautista-Guerra, 2013: 55). The aim of tackling the socio-material practices complicit in data production is an important step towards re-politicising data, which is commonly posited as ‘[i]nformation rather than knowledge or interpretation’ and as ‘[t]ransparent, pre-interpretive, pure, raw’ (Birchall, 2015: 187).
Concern with data manufacture echoes Nikolas Rose and Peter Miller’s Foucault-inspired writings on governmentality and government technologies, in which they emphasise that the collection of information is never a neutral process of recording, but is in itself a way of acting upon the world, making it susceptible to intervention (Rose and Miller, 1992: 185). Even when people or organisations are simply requested to write things down, for instance, to make the work of their institution transparent to the public in a particular manner, this practice is a kind of government, as it urges them to ‘[t]hink about and note certain aspects of their activities according to certain norms’ (Rose and Miller, 1992: 200). Similarly, the work of Selwyn et al. (2015) has shown how the early stages of assembling data create compliance. This initial digital work, as mandated by the authorities, contributes to building or strengthening hierarchies and unequal power relations between those who assemble the data and those who are acted upon as data (Selwyn et al., 2015: 11). At the same time, the data-driven regime of transparency operates through an opaque positioning of the human subject – those who are acted upon as data are interchangeably positioned as active data users and producers, and as examiners and examinees (see Birchall, 2015; Piattoeva, 2015a).
Researchers highlight the fact that most data infrastructure remains hidden and poorly understood (e.g. Anagnostopoulos et al., 2013; Williamson, 2015); thus, little is known about which human and non-human elements make up the infrastructure and how it alters the kinds of information that matter (Anagnostopoulos et al., 2013). The surveillance devices analysed in this article represent a distinct segment of the accountability infrastructure and a particular type of technology that combines and relies upon the visible and the invisible attributes of the infrastructure. On the visible side are the material objects that make different categories of exam participants aware of the prospect of moment-to-moment observation for the duration of the examination. On the invisible side lies the complex work that goes into maintaining the surveillance infrastructure, such as legislation, routine documentation, recruitment, architecture and technical management. Equally invisible are the layered observational activities that may operate in real time or retrospectively from multiple points of surveillance, thus extending the gaze and its consequences in space and time (Hope, 2010: 233), and a myriad of as yet unknown events that the ‘surplus of information’ (Haggerty and Ericson, 2000) produced as video footage might induce. The infrastructure perspective seeks to answer a new set of questions, such as how infrastructure is built, what new technologies and actors it gives rise to, how it defines what and who is counted in schools, and how test-based accountability is changing the educational landscape (Anagnostopoulos et al., 2013). All of these questions are centrally related to questions of power, to which I return in the Discussion section of the paper.
On management by results and standardised tests as sources of knowledge for governance in Russia
In 2009, a country-wide standardised examination – the USE, which combines in a single procedure the school leaving examination and the entrance exams to tertiary education – replaced the dual system of separate and independent school graduation and admissions exams that had existed since Soviet times (Andrushchak and Nathov, 2012; Drummond and Gabrscek, 2012; Luk’yanova, 2012; Solodnikov, 2009). After a lengthy piloting period of eight years, the recently passed law on education confirmed the USE as the only format for school graduation examinations (Ministry of Education and Science (MOES), 2013). The exam items comply with the state education standards, that is, with the part of the curriculum compulsory in all corners of the Russian Federation. The decree on the implementation of the USE, which entered into force in 2011, states that the exam consists of ‘standardised exam items which enable the evaluation of the level of attainment of the federal education standards’ (MOES, 2011). When I later deploy the term ‘exam’, I ascribe to it a meaning that is more complex than simply depicting a situation in which a student answers exam questions on a piece of paper. By exam, I mean a more extensive process that starts with a political will to administer examinations in a particular format and an institution assigned the task of their implementation, and ends with the multiplicity of ways in which the exam answers are first evaluated, and then enter and re-enter various socio-political contexts as de-contextualised data (cf. Piattoeva, 2015a).
Tyumeneva (2013: xi) calls the USE the most important education assessment procedure in the country, and explains that since there is no national large-scale assessment programme for ‘[s]ystem monitoring and accountability purposes, the USE has ended up being used to fill this gap’, despite the fact that it was not initially designed to yield this kind of information. The USE thus acts as a source of evidence and legitimation for policy-making, though it was not designed to play these roles, and it still requires extra measures to attain credibility. An indication of the central role of the USE as evidence for policy-making can be found in a recent circular distributed by Rosobrnadzor (2014b) under the title ‘On implementing the USE in 2015’. The document argues that the USE data should motivate measures that improve the quality of education in schools as well as the level of pedagogical proficiency among teachers. Moreover, further analysis of the USE scores is expected to shed light on the level of attainment of the federal education standards, and to enable profound development work by education institutions and education authorities.
The USE data enters policy-making in many ways beyond student selection and certification (Piattoeva, 2015a). These include informing pedagogy, holding regions, schools and teachers accountable and monitoring education quality (Piattoeva, 2015a; Tyumeneva, 2013). Tyumeneva (2013) describes how the scores appear in annual comparisons of student achievement in Russian language and maths, which are the only two nationally mandated exam subjects. The comparisons that aim to track the development of student achievement across time occur despite the fact that the tests are not equated, which means that the difficulty of the tests and the criteria used to define performance levels may (and do) vary from year to year, making comparisons statistically dubious (Tyumeneva, 2013). The USE statistics are also used to make sense of regional and municipal school systems and to help in formulating policies on sub-national levels (Piattoeva, 2015a). Finally, scholars utilise the scores in academic research and to complement the officially published reports. The problematic roles of the exam – as a selection and certification procedure, as a means to hold different actors accountable and as a source of information for decision-making – should be borne in mind as we proceed to examine the reasons for introducing surveillance technology in examination situations.
The utilisation of the USE outcomes in education governance is linked to the rise of management by results in the spirit of NPM across the entire Russian public policy sector, a development informed by the advice given to the Russian authorities by major international organisations, such as the World Bank and the OECD (Organisation for Economic Co-operation and Development) (see Gusarova and Ovchinnikova, 2014; Piattoeva, 2015a). Recent policy documents explicitly state that their successful realisation depends on the availability of data to monitor results and that the absence of objective information would put the policy agenda at risk (e.g. Government of Russia, 2011). The Russian government is currently striving to build an ‘effective state’, which aims at increasing the efficiency of federal regulation and local level self-governance in order to bridge the gap between policy and its implementation. Government programmes for different public sectors, including education, now embrace detailed, quantitatively articulated objectives and a system of numerical indicators to monitor the accomplishment of results on an annual basis. For instance, the latest Government Programme on Education (Government of Russia, 2013) contains a long list of performance indicators, including an indicator that utilises USE scores to monitor the discrepancy between high- and low-achieving schools, and an indicator based on the results of international learning achievement studies such as PISA, TIMSS and PIRLS (Government of Russia, 2013).
In addition, the above-mentioned government programme emphasises the need to develop and implement regular national studies of individual learning achievements at different levels of schooling and in a variety of school subjects in order to complement the existing ways of collecting data through public examinations (Government of Russia, 2013). One such study started in autumn 2014 with a focus on numeracy skills in secondary school. As the head of Rosobrnadzor, Sergey Kravtsov, explained, the National Study of Education Quality (NIKO) was introduced as a means to grasp how children are taught in different parts of the country, in order to identify problematic zones and best practices and to compile nation-wide statistics on education. He particularly emphasised the difference between NIKO and the USE, for the former poses no risk to the students and schools assessed in the survey: ‘We are not going to punish the schools which show low results; it is more important for us to have an objective picture of the situation’ (Kommersant, 2014b). One more quote from Kravtsov further explains the role of diverse data:
Our USE provides massive data that could be used for developing the education system. We also plan to introduce other assessment studies that are analogous to the PIRLS, TIMSS and PISA. So far, we participate in those studies as a country, but we want to compare regions, different levels of education and schools. This is not to tell which region or school is bad, but to understand what to do better (Rossiyskaya Gazeta, 2013).
The statements above and the launch of NIKO reveal that the Russian government is seeking to build a comprehensive database on education achievement to support decision-making on education. The government officials also seem to recognise the problems related to using public examination results as statistics for accountability and evidence in policy decisions, and are thus searching for alternative ways of data collection. That said, it is equally important to bear in mind a wealth of academic literature that shows how management by results is first and foremost a tool of political rhetoric that only adds an illusion of objectivity and rationality to the otherwise politically motivated decisions. This is exactly how Russian civil servants experienced management by targets and indicators on the regional level in a study reported by Kalgin (2012).
Introduction of CCTV
The quality of information generated by the USE is a constant focus of political and academic discussion, largely due to the contradictions provoked by simultaneous demands for nation-wide standardised examination, increased accountability and policy-relevant knowledge; emergent political decisions seek to strike a delicate balance between the three. One key objective of the on-going government programme on education is to provide up-to-date security and technology for the procedures that control the quality of education achievements (Government of Russia, 2013). A recent government decision to withdraw USE scores from the list of indicators evaluating executive authorities in the federal units of the Russian Federation was motivated by concerns about the objectivity of the examination process. The decree acknowledged that various actors distort the scores of the USE due to their high stakes for regional administrations (President of Russia, 2014).
However, while removing one indicator, the Minister of Education and Science, Dmitri Livanov, announced that starting in 2014, the leaders of education authorities in the regions would be evaluated on the basis of the level of transparency of the USE as a procedure (Ria Novosti, 2014). Rosobrnadzor (2015a) now ranks regions according to how ‘objectively’ they implement the USE. The indicators of ‘objectivity’ include diligence in adhering to the stipulated preparation timetable prior to the exam, the number of exam rooms with online video surveillance and the number of buildings equipped with mobile signal jammers. Upon evaluation, the regions are categorised into three zones – green, yellow and red – with those in the red zone singled out for targeted control and surveillance. This mode of governance through rankings is designed to facilitate alignment with a specific, state-driven agenda (cf. Hansen and Flyverbom, 2014), in this case the marked securitisation and increased surveillance of the education space in the period of standardised testing, as will be shown below.
In 2014, for the first time on a nationwide scale, the Ministry of Education and Science of Russia mandated that each examination location, that is, each building that accommodates the mandatory state exam, be equipped with surveillance cameras. Each building must have at least one surveillance camera in every examination room (there are 15 examination rooms on average in each building) and in the staff room. The number of cameras is not left to the discretion of the exam administrators, but instead depends on the shape and size of the room, with the regulations requiring the entire room to be made observable on CCTV. Surveillance cameras are also mandatory in the regional centres of information processing: in the locations where the papers are stored before and after the exam, or where the exam papers are printed prior to the exam if they are not transported by the special delivery services; during the opening and sealing of the examination packages; in the rooms where the exam assessors sit to correct the exam papers (only the sections of the exam that are scored manually by subject experts); and at the meetings of the committees established to address complaints and conflictual situations (Rosobrnadzor, 2014a). In 2015, it was expected that over 55,000 rooms would be equipped with CCTV, with the percentage of rooms with functioning online CCTV rising to over 80% (Tusevich, 2015).
Furthermore, test developers are placed under video surveillance while creating test items. A decree that came into force in 2013 confirmed the following measures to improve the ‘information security’ of the exam: the test items are to be developed in zones of limited access under video surveillance, these zones are also to be placed under special security arrangements, the computer-assisted and manual test development is to be monitored, and the local internet web is to be protected against unauthorised access (Rosobrnadzor, 2013). As one test developer who travels to the Moscow-based Institute of Pedagogical Measurement to develop tests in Russian language recalls in a reported interview:
The work trips to Moscow related to this job [as a test developer, author’s note] are very meticulous, but an incredibly interesting, in a professional sense, type of work […]. Everything happens in an isolated space under video surveillance without any means of communication (the mobile phone is seized by the guard at the entry) (Uchitel’skaia Gazeta, 2014).
To comply with the new regulation, the federal authorities invested 600 million Russian roubles in surveillance technology and support functions alone (Government of Russia, 2014). The surveillance costs exceeded the earlier total costs of the exam by at least 100 million Russian roubles (Rostelekom, 2014). In addition to CCTV, the authorities stipulated that each examination point be equipped with metal detectors and, preferably, mobile signal jammers, and that the presence of police and/or private security firms at the entrance be guaranteed. As stated by the head of Rosobrnadzor, the push toward tighter ‘information security’ and ‘maximum objectivity of the exam procedure’ through surveillance technology was provoked by analyses of the exam statistics from 2013 (Poisk, 2014a). These analyses showed significantly higher than average scores in a number of regions in comparison to others. The statistics from 2013 reported 1500 contraventions of the USE, twice as many as in 2012 (Civic Chamber of the Russian Federation, 2014a). Statistical operations were performed to compare the examination scores obtained to the ‘ideal exam model’ that predicts proper grade distribution, leading to further investigation of anomalies. Education Minister Livanov argued that in 2014, after CCTV had been in operation for the first time, the exam results became more objective, as demonstrated by comparison with the aforementioned ideal exam model (Poisk, 2014b). Despite the fact that cheating scandals have accompanied the USE since its introduction, 2014 marks a new era in the control operations associated with the exam, now increasingly delegated to ICT on a national scale.
A significant innovation is the opening by Rosobrnadzor of the Information Centre that hosts a 24-hour telephone hotline for USE candidates and, more importantly for our case, functions as a control centre for monitoring examination procedures and related events at a distance. The Centre is equipped with a massive TV screen and sophisticated technology, allowing those on the Information Centre’s premises, including federal education inspectors, heads of Rosobrnadzor and accredited journalists, to follow the course of the exam in real time all over Russia without the consent of those being observed. A squad of on-line observers, approximately 6000 of them in 2014, comprises university students who volunteer to receive special on-line training and monitor the exam at a distance on PCs (Rostelekom, 2014). They are provided with credentials to log in to the newly established, password-protected web-portal Смотриегэ.рф (smotriege.ru, watchUSE.ru) in order to monitor examinations and to report on suspicious activity around the clock (inevitable in a country with multiple time zones and a nationally uniform exam schedule). Their notes on suspected wrong-doing trigger re-examination of the video footage by Rosobrnadzor and local and regional examination committees to verify irregularities (e.g. Kommersant, 2014a).
The sole provider of the video infrastructure is one of Russia’s leading telecommunications companies, Rostelekom, which describes itself as an ‘absolute market leader in providing telecommunications services to Russian government departments and corporate users at every level’ (www.rostelecom.ru/en/about/info/). The company supplied its ‘unparalleled’ video technology for the 2012 presidential elections, and it is this technology, the devices and the know-how, that were relocated to the education sphere to ensure the ‘informational security’ of the USE (www.rostelecom.ru/projects/ege/). In addition to being involved in exam preparation and implementation, Rostelekom is responsible for the collection and storage of the video footage for at least three months after the exam has been held (Government of Russia, 2014).
A myriad of new policy documents was made necessary to mandate and explain the usage of CCTV down to the last detail, thus standardising the procedure and building up an infrastructure for more control and discipline. The document contents range from the list of technical appliances that should constitute the ‘PAK package’ – a standard kit for each examination room containing two cameras, a number of cables and camera straps, a stationary computer or a laptop and an uninterruptible power supply battery – to the specifications on the rights of observation for different categories of exam participants. These documents generate more paperwork, for instance, by obliging technical specialists – a new category of personnel, introduced due to the technical sophistication of the security measures and responsible solely for managing the surveillance system – to keep an official, standardised diary of all the operations performed with the ‘PAK package’. With respect to the emerging hierarchy, the on-line observers are permitted on-line access only to the examination rooms they are assigned to, while those at the top of the administrative hierarchy can monitor all categories of exam participants (Rosobrnadzor, 2015c). For instance, only the staff of Rosobrnadzor can watch the events inside the regional centres of information processing without requesting their consent.
The significance of video surveillance could hardly be made more explicit than by the instruction to terminate the exam in case of technical failure (Rosobrnadzor, 2015c). While another qualified person can replace an indisposed exam administrator, no human agent can substitute for a broken-down video surveillance system. The instructions state that if a technical specialist fails to repair a fault identified on the advice of the operator, the exam administrator(s) ought to terminate the exam, annul the exam results and arrange for the examinees to take the exam on a different day. Finally, while the introduction of surveillance technology is often depicted as a means to combat corruption that compromises true merit, academic mobility and social justice, the earlier-mentioned national study of education quality, NIKO, which plays no role in certification and admissions to further education, is likewise implemented under the watchful eye of CCTV. The Information Centre at Rosobrnadzor enables the authorities to monitor the implementation of NIKO in real time, just as it does with the USE. When the cases of the USE and NIKO are examined together, it becomes increasingly hard to sustain the common argument put forward by the authorities that video surveillance cameras are introduced as a means of social fairness. The facts and arguments analysed so far attest to the view that the introduction of CCTV is directly linked to the production and protection of numerical data, and not (only) to the fairer distribution of life opportunities among the young generation. The following section analyses this claim further by adopting an ANT sensibility and some of its core concepts.
Discussion: the role of surveillance cameras in weaving complex power relations
Invoking ANT in the service of interpretation is a risky business. After all, ANT scholars warn that their texts should not be used to explain anything, but rather to inform the technique of inquiry (e.g. Latour, 2007 [2005]). However, my motivation for using ANT in this manner rests on the capacity of its position on the power and actorhood of non-human objects to extend the analysis of new surveillance practices against the backdrop of digital governance and data infrastructure. Testing and video surveillance are distinct but well-integrated elements of the burgeoning and totalising surveillance assemblage (Haggerty and Ericson, 2000). They manifest in slightly different ways the overarching phenomenon of the ‘intensification of technologised forms of observation’ (Haggerty and Ericson, 2000: 610), since they aim to watch, monitor, track and analyse data for the purposes of control (Monahan and Torres, 2010: 6). At the same time, surveillance is not ‘[s]imply about monitoring or tracking individuals and their data – it is about the structuring of power relations through human, technical, or hybrid control mechanisms’ (Monahan and Torres, 2010: 2), and it is this perspective that I engage with next. In other words, I try to move beyond the imperative of data security in an attempt to understand what data security stands for as a function of surveillance.
The way in which ANT comprehends power is different from analysis that uses power as an explanans rather than an explanandum (see Piattoeva, 2015b). For ANT, as well as for other strands of post-structuralist theorising, power is not a priori owned by the so-called powerful centres, but is ‘made of’ the wills of all the others and is thus a consequence of collective action (Latour, 1986: 269). ANT emphasises the instability of actors as well as the relationality, impermanence and performativity of hierarchies and domination, which are the results or effects, but not the causes, of particular processes. It is through the intense and strategic activities of translating, enrolling, convincing and enlisting of actors by the one(s) who seek(s) compliance (Latour, 1986: 273) that hierarchies and domination can be temporarily established. Thus it is the others who are really powerful and have to attribute their action ‘to one amongst them who becomes powerful in potentia’ (Latour, 1986: 274, emphasis in original). Since every power relationship is fragile and distributed, it relies on resources of both human and non-human nature to be constituted and made durable.
Latour (2007 [2005]) argues that non-human objects are invoked precisely to weld together and make durable an association in place of a weak or failed social tie. In a context where consensus is elusive and its social elements are constantly tempted to cheat and betray the system as envisioned by the enunciator, other entities are recruited to counter the anti-programmes of reluctant actors so that the desired action can continue (see Latour, 1991). In the controversial context of data-reliant governance, characterised by desires to render legible through data collection and to avoid or circumscribe that legibility, the relational nature of the endeavour is particularly obvious. Even when data generation is prescribed by law, there are many ways to endanger the process, to distort the data and, consequently, to pervert the envisioned path and thus undermine the authority of ‘scientific’ policy-making. As discussed earlier, the ideal of uncontaminated measurement provides quantification with authority and maintains its ‘scientific stamp’ (Hansen and Flyverbom, 2014: 11), but the appearance of purity is hard to maintain.
On the one hand, those placed at the top of the administrative structure, like Rosobrnadzor, are able to exert more influence than other agents through the power and authority vested in them by law. They are not only permitted to watch everyone else, but literally own the video data and possess extensive rights to employ the video footage obtained as they please. For example, at the time of finalising this text, Rosobrnadzor (2015b) posted a short video clip to remind the examinees that over 4000 on-line observers and 35,000 public observers and federal and regional experts will challenge them should they violate the rules of the exam. This clip is a collage of irregularities identified and recorded by on-line observers, and as such it does not hesitate to circulate the images of rule-breakers, whose faces are easily recognisable on the video (Rosobrnadzor, 2015b).
On the other hand, the introduction and pervasive spread of video surveillance as a technology that secures the outcomes of standardised testing should not be viewed narrowly as an undisputed manifestation of the centre’s ability to control. The developments documented in the empirical part of the article signal the authorities’ ongoing efforts to perform themselves into a powerful role by enlisting other actors to support their cause. Multiple actors are re-enlisted in the government’s programme that seeks to uphold its (scientific) authority and to build up, as it appears, a hierarchical chain of observation that would both manifest and guarantee such authority through objective data by introducing powerful non-human objects into the chain. CCTV is required to transform and temporarily stabilise the identities of a diverse group of actors, all of whom play a role in the implementation of the exam and thus in the very instance of data generation. When the exam is envisioned as a problematic site of data collection, and all the USE participants as necessary but unreliable data producers, additional resources are introduced to sustain the linkages and to ensure that the actors keep the data-production process rolling as it should.
While generating knowledge on individual learners to inform policy as to which pedagogies and practices work best, all those surveyed in the process undergo a series of displacements: ‘[s]chool pupils are to be translated into learners, learners into performances, performances into data and data into locally, nationally and globally comparable databases, tables and visualisations’ (Ozga et al., 2011, as cited in Williamson, 2014: 487). Learning is made knowable, measurable and calculable by re-making learners into transactional data resources to be collected, collated and calculated into comparable governing knowledge (Williamson, 2014: 487). The present article shows that surveillance devices are used to facilitate the displacement processes that de-associate actors from rival identities and agendas of action, and attempt to reconnect them to the project of the one who seeks to exert power. Borrowing Michel Callon’s (1986) terminology, video cameras perform a role as devices of interessement. These devices are called upon ‘to come in between’ potentially intervening associations that supposedly endanger the actor identities envisioned by the strategist. The moment of interessement is usually connected to the formulation of particular problematisations that characterise the phenomenon in need of attention and delineate the involvement of different actors. The actors (that is, their interests and wishes) are then identified and defined in relation to one another and to the problematisation. Consequently, interessement ‘[i]s founded on a certain interpretation of what the yet to be enrolled actors are as well as what entities these actors are associated with’ (Callon, 1986: 211). The devices of interessement seek to sustain actor identities as envisioned by the strategist, and extend and materialise the assumptions that underlie the submitted problematisation.
Our empirical case illustrates the complexity of emerging power relations as stimulated or supported through video surveillance, for instance, by implicating local and regional education authorities as exam implementers and observers of the examinees, and as observable subjects themselves. CCTV plays a central role in locking human actors into relations of mutual accountability. The in-built hierarchies signify complex power relations within which all actors, including those who at one point perform the measures and audits, ‘[s]it in the middle of vast hierarchies with people above and below them’ (Kipnis, 2008: 282). The existing practices of surveillance target nearly everyone by default: everyone is observed from multiple angles and social positions. As Hansen and Flyverbom (2014) attest, this complexity reflects the polycentric character of contemporary social formations in which governing subjects, such as representatives of governments and corporations, are also governed subjects. Thus, the authority of Rosobrnadzor is likewise vulnerable, as a unit that needs to demonstrate its worth within the larger structure of the education ministry, and as accountable for its outcomes to the government in the context of management by results and indicators that make all executive structures potentially legible for intervention. What is more, the specific case of on-line observers and public observers who monitor the exam on the spot is an essential reminder of the state’s reliance on ordinary citizens to advance its agenda.
Conclusion
This article sought to contribute to a topical debate on the multifaceted effects of the datafication and digitalisation of education policy and governance mechanisms. It showed that simultaneous demands for accountability and information re-signify public examinations as sites of data collection and make the school system accountable to data production (cf. Koyama, 2011). Moreover, these demands generate and legitimise surveillance practices that prioritise and seek to protect the manufacture of numerical data. Surveillance then turns into an indispensable condition for ensuring the transparency and standardisation of data production, which are assumed to be the main constituents of data objectivity. The need for surveillance technology is embedded in and reproduces profound mistrust of all human agents associated with the data assembly-line at its various stages, and mistrust of the human paves the way for firm faith in the capacity of surveillance devices to achieve objectivity. Thus video surveillance is called upon to act as a mediator that translates and holds parts of the assessment network together by coercing each participant into the role of docile data producer. In other words, CCTV obliges actors to stay faithful to the task of data generation (cf. Callon, 1986). In the course of this translation, the distinctive roles and professional identities of humans (as students, specialists, teachers, test developers, etc.) are erased. Instead, they are treated as semi-competent data producers whose capacities for data generation are fallible, pose a risk to the data, and thus require the imposition of tighter control.
The empirical case examined in the article demonstrated the workings of the ‘infrastructure of accountability’ (Anagnostopoulos et al., 2013) that functions to a large extent as a ‘surveillant assemblage’ and consists of multiple objects, processes and phenomena all working in concert (Haggerty and Ericson, 2000). This development leads to a situation in which groups previously exempt from surveillance are now increasingly being monitored (cf. Haggerty and Ericson, 2000: 606). The infrastructure of (digital) accountability thus works in ways that make surveillance ever more complex: not only does it produce numerical data on education quality, which is then used to monitor different participants of the education process, and to reward and punish them on the basis of measurable results, but in striving to improve this function, the infrastructure is integrated with new forms of surveillance that likewise rely on technological advances. Audits of quality are then complemented with audits of the procedures that produce information on education quality, and the participants of these processes become targets of more intense, layered surveillance. In this manner, surveillance extends in both depth and breadth: people and institutions are monitored by different digital technologies and devices from various points of observation, and increasing segments of the population fall under such practices. This leads to ever larger amounts of data being produced, as all surveillance technologies leave a record specific to their function and technical nature (e.g. examination results versus digital recordings of examination situations).
Video surveillance is implicated in generating two types of data. First comes its role in facilitating data production on learning achievements; this data later enters many governing processes, in addition to being a measure of individual learning (see Piattoeva, 2015a). Second comes its role in generating data on data production. While the act of observing, embodied in the video cameras, is in itself a powerful governing device, surveillance technology also produces data that can be stored, analysed and utilised in future governance projects. However, it is clear that video surveillance is employed selectively: while points of data production are observed in the name of data transparency and objectivity, the centres of calculation largely remain out of sight.
The article shows that a new and perhaps unrecognised participant in the evaluation industry is the commercial security business, with its expertise pervading education-related policy decisions and everyday practices. The securitisation of testing has not yet been discussed in the academic literature on assessment. However, recent developments in Russia and more broadly in the post-Soviet space indicate that this angle of inquiry deserves more attention. In earlier fieldwork, I encountered stories about new ‘high-security buildings’ where developers of test items, ICT staff and typography workers are placed under extreme surveillance conditions. Their daily working lives are embedded in security measures that require fingerprints, secret codes, restrictions on means of personal communication and personal security checks upon recruitment (fieldwork in Moscow, May 2014). Even though education research that focuses on the rise of governance by numbers has documented the link between audit policies and the generation of large profits by commercial companies (e.g. Koyama, 2011; Taubman, 2009), further research should also scrutinise the roles of security businesses in education accountability.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The research for this article was supported by the Academy of Finland grant 2501273874.
