Abstract
While algorithmic systems today are transforming state practices globally, their everyday manifestations vary in particularistic contexts. Drawing from discourse analysis, interviews and documentation of state initiatives introducing algorithmic systems in Kerala, India, this paper analyses the effect of algorithmic infrastructures on the practices of discretion within the police force. In this regard, I propose that a dialectical framing of discretion, operating on the fundamental contradiction of liberty and discipline, is a more robust theoretical framework through which we may effectively document everyday practices of algorithmic governance. Using this framework, I note two emergent tendencies of insulation and invisiblisation, wherein algorithmic infrastructures insulate subordinate personnel from public interaction by placing them behind a digital façade, and invisiblise the operation of their discretion. These tendencies, if left unchecked, may potentially lead to these institutions transforming into Kafkaesque, opaque organisations with low democratic accountability.
Introduction
Algorithmic systems are transforming policing and governance globally. Their manifestations, however, vary in particularistic settings. While proponents argue that algorithmic infrastructures make policing efficient (Europol, 2024), critics argue that they automate systems of inequality, exacerbate marginalisations and insulate institutions from democratic accountability (Benjamin, 2019; Eubanks, 2018). In this paper, I engage with the dominant debates on discretion and algorithmic governance in policing by drawing from a postcolonial nation in the global south – an often-underrepresented region in these debates – to unpack the effect of algorithmic systems on the discretion of subordinate-level police personnel. For this purpose, I draw from a mixed-methods study conducted in Kerala, India, documenting the ongoing incorporation of algorithmic governance in its policing framework. Kerala offers a unique context in which to see how algorithmic policing is transforming everyday practices because of its alternative and progressive development trajectory, even within the postcolonial context of India, where postcolonial developments are conditioned by colonial legacies.
In this complicated terrain, Kerala, as a progressive state, offers a unique standpoint from which we can see some emergent tendencies inherent to algorithmic governance, especially insulation and invisiblisation. To comprehend these tendencies more accurately, I develop a conceptual framework of the dialectics of discretion that allows us to visualise them as practices of discipline. I propose viewing discretion not as a binary concept, but as a dialectical concept operating on the fundamental contradiction of liberty and discipline. This framework is a useful tool for visualising algorithmic governance and algorithmic policing not only as a socio-technical relation, but also as a social relation mediated by algorithmic infrastructures, fundamentally conditioned by pre-existing power relations that guide the possibilities of change. In this framework, discretion and discipline are contextualised in the colonial legacies of policing in India to note how the emergent tendencies of insulation and invisiblisation may be seen as practices of discipline. Insulation may be seen as processes that institutionalise a digital façade in public service, removing subordinate-level personnel from nodes of public interaction. Invisiblisation may be seen as a related phenomenon that removes personnel's discretion from the zone of public perception by placing it behind a digital façade. Critical literature in the field has importantly shed light on the adverse impact of algorithmic systems implemented by state institutions on populations and communities. However, state institutions in these debates often appear as homogeneous entities, obfuscating the power relations that populate their internal functioning. For the purposes of this paper, I turn the gaze inwards to explore how algorithmic infrastructures affect the practice of discretion within the police force.
In the following paper, in section ‘Literature review and method’ I begin with an engagement with key debates on discretion in algorithmic governance and policing to note the dominance of a binary framework of discretion (as either present or absent), and then outline the methods used for this study. In section ‘Algorithmic governance and emergent tendencies’ I present the main empirical observations by first tracing the development of algorithmic governance in Kerala and then documenting the two key emergent tendencies of insulation and invisiblisation. Following this, I take two tangential, but necessary, detours. In section ‘Dialectics of discretion’ I develop the framework of dialectics of discretion to note why insulation and invisiblisation may be seen as practices of discipline, and in section ‘Suspicion of discretion, and discipline: colonial, post-colonial legacies’ I locate practices of discipline historically by tracing colonial legacies of the everyday conditioning of discretion. I conclude by highlighting the central contribution of this paper: the ‘dialectics of discretion’ allows us to move away from a false dichotomy of discretion and posit discretion itself as a site of contradictions and transformation, premised on pre-existing power relations. I argue that algorithmic systems do not remove human discretion, but rather relocate, condition and discipline its practice.
Literature review and method
Discretion in algorithmic governance and policing: dominant debates
The incorporation of ICT in bureaucratic decision-making processes drew attention to the transition from ‘street-level’ to ‘screen-level’ bureaucracies (Bovens and Zouridis, 2002; Lipsky, 1980) and the changing role of discretion in new decision-making practices (Bullock, 2019). While human discretion has operationalised administrative tasks throughout history, it has also been deemed limited in its capacity and deeply flawed in the pursuit of administrative efficiency (Bullock, 2019). As more societies adopt AI and algorithmic systems in administration, human discretion, it is said, is likely to be removed (Bullock, 2019).
The ‘loss’ of discretion in digital public administration has been debated since the 1980s globally (Adler and Henman, 2009; Alexander, 1990; Brodkin, 1997; Eubanks, 2018; Garson, 1989; Scheepers, 1994; Zouridis et al., 2020). Algorithmic systems vary based on how they are positioned in relation to human decision makers – ranging from acting as guides to dictators (Henman, 2022; Høybye-Mortensen, 2013). To accurately understand how, and why, societies adopt specific forms of algorithmic administration, it is important to unpack this very dynamic of how algorithms are positioned in relation to human participants in particularistic settings. Unpacking this relation reveals the operation of dominant power relations and politics of algorithmic governance.
Fearing that removing human discretion would lead to opaque, Kafkaesque machines, solutions such as co-design and human-in-the-loop arrangements (Dekker et al., 2022) have been suggested as mitigation strategies. Scholars critiquing the human-in-the-loop system state that while human participation in algorithmic decision making might enhance justice, legitimacy, fairness and accountability, it needs to be genuine enough to ensure that human participation does not amount to a rubber stamp (Binns, 2022; Bryson et al., 2017; Wagner, 2019). A binary conceptualisation of the presence or absence of discretion in algorithmic systems thus tends to dominate contemporary debates.
Discretion in public administration is also problematised by the manifestation of contested versions of justice in the discretionary application of law. This is especially evident in policing, where police personnel often function with their own visions of justice, which may or may not correspond with legal or institutional imaginaries of justice. Through algorithmic systems we see the emergence of a new class of ‘screen-level bureaucrats’ alongside ‘street-level bureaucrats’ (Alkhatib and Bernstein, 2019; Binns, 2022; Bovens and Zouridis, 2002), whose visions of justice may differ from each other. In hierarchical governance systems, like the police, the contested terrain of discretion is conditioned by disciplinary power relations. Simply putting humans in the algorithmic loop is not enough to secure individual justice (Binns, 2022); we need a clearer understanding of how algorithmic systems correspond with and are overdetermined by existing power relations in particularistic settings.
The mere existence of humans-in-the-loop does not mean that the nature of human agency and discretion is unaffected by algorithmic governance. With mounting criticisms of predictive policing and automation systems leading to sustained racial biases and the disproportionate targeting of marginalised communities, algorithmic systems have come under considerable attack for their ability to automate inequality and create unaccountable structures of administration hiding behind a veneer of objectivity and neutrality (Benjamin, 2019; Brayne, 2021; Eubanks, 2018; Noble, 2018). The retention of human participants must therefore be analysed beyond the binary of presence and absence of discretion. We must look at how algorithmic systems condition discretion dialectically.
Black Lives Matter, Abolish the Police and various reports of police brutality in different parts of the world have forced police organisations to review their institutional practices, at least in terms of how they are perceived publicly. The suspicion of police discretion, along with the preexisting suspicion of discretion in legal scholarship and public administration, has further equated human discretion with ‘corruptibility’. I argue that it is precisely in this context that human discretion, especially in policing, becomes constructed as a problem. Here, discretion is a problem when it is deployed in the service of individual and personal desires. The ‘need’, therefore, is to condition the operation of discretion in an objective and impersonal capacity, which requires the disciplining of the personnel deploying discretionary powers. In practice, subordinate personnel's discretion is qualitative in nature and heavily dependent on public interaction – for instance, a traffic cop assessing whether to book a person they have stopped for jumping a traffic light during a medical emergency, or to let them off with a warning. Superior-level officials' discretion, in contrast, is often quantitative in nature – for instance, a district police chief gauging the allocation of more vehicles and personnel to a specific jurisdiction in response to rising crime rates. Algorithmic policing aims to transform qualitative policing through processes of quantification, that is, the ‘process through which context and information are reduced to a single number, which can then be used to evaluate, score, or rank …and to homogenize, regularize, and expand an activity’ [emphasis added] (Besteman, 2019: 167) – for instance, by replacing traffic cops with AI cameras that only detect traffic violations and do not venture into the reasons for the violation.
Literature on predictive policing has amply demonstrated that even when algorithmic systems are deployed in policing, personnel retain discretionary agency and do not deterministically follow algorithmic recommendations; moreover, being sceptical of software capabilities, personnel even resist algorithmic imposition (Leese, 2023; Marciniak, 2023; Ratcliffe et al., 2020). This is not to say that algorithmic systems have no effect on the everyday of the police. As Perry et al. (2013) highlight, predictive policing models are not fundamentally about making crime-related predictions, but rather about implementing a prediction-led policing business model. This mechanism functions by conditioning discretion, not necessarily removing it. Conditioning discretion in this way, however, adversely affects accountability. It appears to remove decision-making powers from the hands of street-level personnel and vest them in algorithmic systems, to prove that discretionary personnel are not ‘out of control’ with their use of legally vested authority. In doing so, it makes the system less accountable, since the decision-making process is outsourced to a non-human ‘actant’, thereby creating an accountability gap (Moses and Chan, 2018).
As in the case of public administration, discretion in policing has been subject to critical scrutiny for decades (Campbell, 1999; Davis, 1969; Lipsky, 1980; Pepinsky, 1984). Some equate effective policing with automatic policing (Joh, 2007), arguing that discretion-less policing leads to the automatic and ‘impartial’ application of law. Others state that discretion-less policing insulates police from accountability towards the population they police, as police become accountable to law alone and not the public; they thereby advocate for political and democratic accountability of policing practices rather than automatic policing (Pepinsky, 1984). The tension between algorithmic policing/automation and discretion mirrors these long-standing debates. Algorithmic systems and police practices, we may note, co-constitute and overdetermine each other (Kaufmann et al., 2019; Marciniak, 2023).
Through this brief review of dominant debates on discretion and algorithmic governance, I wish to highlight the tendency to view discretion in a binary framework, that is, as though algorithmic systems function either in the ‘presence’ or ‘absence’ of discretion. I wish to complicate this understanding by noting that the binary framework presents a false dichotomy, and to posit that discretion itself is a contested site which different algorithmic systems seek to condition and discipline. These systems must be understood and documented in their particularistic settings to see how such transformations are conditioned by material-social relations (Varghese, 2022). In the subsequent sections, I elaborate upon the methods deployed to document these particularistic manifestations in Kerala and outline a new theoretical framework to document the emergent tendencies of algorithmic governance.
Method
This paper primarily draws from data collected for a mixed-methods study conducted from August 2022 to August 2024 analysing the digital transformation of policing in Kerala. In this two-year period, national newspapers were tracked every day to document reports tagged with the keywords ‘AI’ and ‘Kerala’. From these I selected and analysed specific reports and articles related to the inauguration and incorporation of AI systems in state services. These included opinion pieces, news reports of the launch of new initiatives, failures of newly implemented systems and controversies. From this iterative process I constructed a genealogy of algorithmic governance in Kerala. Having constructed this genealogy, I identified and analysed two key policy documents that provide the broad framework for AI in state services in Kerala, namely the Administrative Reforms Commission Report 2021 and the Kerala Police Vision Document 2030. Through these two sources (news reports and policy documents) I brought the genealogy into conversation with the policy framework of algorithmic governance in Kerala, thereby tracing the broad terrain in which discretion operates.
To complicate this story, it was necessary to incorporate qualitative insights from key participants who were directly involved in the creation/adoption/implementation of AI systems in Kerala Police. To this end, I identified four key organisations/institutions from the genealogy and policy framework that played an instrumental role in spearheading the adoption of AI systems in everyday policing – (1) Digital University of Kerala, (2) Kerala Police, (3) Kerala Motor Vehicle Department and (4) police associations of subordinate-level police personnel. I approached these institutions and secured permissions to interact with academicians from the Digital University of Kerala, and office bearers of two police associations – Kerala Police Association and Kerala Police Officer's Association. Participants were briefed about the research project and its key objectives, following which they agreed, under conditions of anonymity, to unstructured, informal interviews/interactions/conversations. Following these conversations, I was also able to secure invitations to two public functions and seminars organised by the Kerala Police Officer's Association (KPOA) in 2023 – the KPOA State Committee Seminar and the KPOA Annual Conference in May 2023 – which I attended in person. Through these public meetings, I was able to draw insights into popular imaginaries of AI among police personnel.
To analyse the data from these three sources – newspapers, policy documents, and unstructured interactions with key participants – I relied on policy analysis to outline the terrain of algorithmic governance and policing in Kerala, and discourse analysis to unpack the dominant rhetoric (as it emerges in news announcements and reports) and popular imaginaries (as they emerged in news reports and in-person interactions) of algorithmic policing. Finally, I brought these observations from the current study into conversation with my observations from a decade-long engagement with the study of policing in Kerala and India to develop the theoretical framework of dialectics of discretion and locate it in the legacies of colonialism.
Algorithmic governance and emergent tendencies
Algorithmic governance and ‘smart policing’ in Kerala, India
Intelligence Led Policing (ILP) – premised on data analysis and criminal intelligence coordinated around strategic risk management (Ratcliffe, 2014) – has become the clarion call for police transformations in postcolonial nations in the global south. In countries like India, ILP – incorporating technology and forensic investigation in all stages of policing – is presumed to be the gateway to modernisation as well as ‘decolonisation’. In this regard, since at least the early 2000s, different police forces in India have been modernising their everyday functioning by integrating ICT, and more recently ML, Big Data and AI, into their policing infrastructure. India is slowly emerging as a regional leader in south and central Asia for government readiness in the adoption of AI (Hankins et al., 2023). Underlying this vision of algorithmic policing are mechanisms of quantification (Besteman, 2019). In the context of postcolonial states, however, mechanisms of quantification are further conditioned by mechanisms of discipline that regulate the operation of discretion.
The state of Kerala, in southern India, is part of the quasi-federal framework of India. Since its inception in 1956, Kerala has charted an alternative development trajectory, both within India and globally, focusing on human development indicators, along with traditional parameters of development resulting in Kerala achieving the highest HDI score in India (Kerala State Planning Board, 2024) and comparable HDI markers to some developed nations like Sweden (Karlsson, 2018). This model of development and governance is popularly referred to as the ‘Kerala model’ which scholars have argued offers a democratic, pluralist and humanitarian model of development and governance (Isaac and Franke, 2004).
Kerala was one of the first states in India to embrace the digitalisation and modernisation of state services as a necessary feature of its development strategy. In 2019, Kerala became the first state in India to declare the ‘right to Internet’ a basic human right. In 2021, through its ‘e-Governance’ policy, it laid out elaborate plans for the integration of ICT in existing state services with the stated aim of providing ‘better governance’ and serving a ‘pluralist democracy’ (Kerala Police Department, Government of Kerala, 2021; Varghese, 2024). This includes the modernisation of policing practices, where under ‘smart policing’ initiatives, Kerala police is integrating existing practices with ICT, ML, and AI. The decade of 2020–2030 is dedicated to the incorporation of new technology for criminal information management for ‘better’ decision making through AI, Big Data and cyber-surveillance (Kerala Police Department, Government of Kerala, 2021).
This includes the installation of 675 AI-enabled cameras by the Motor Vehicle Department across Kerala in 2021 to automatically monitor, report and process traffic violations. With the perceived success of this initiative, Kerala hosted the first ever international Gen-AI conclave in 2024 (The Hindu Bureau, 2024), subsequently unveiling an industrial policy to invite global investments in generative AI with the promise of transforming the state into the country's hub for AI-assisted technology (Press Trust of India, 2024). A state-run university, the Digital University Kerala, has been at the forefront of assisting this digital transformation. Researchers in the university have developed an AI chip (George, 2024), and are conducting training and capacity-building workshops for the integration of AI and ML in existing state practices, offering technical expertise as well as pedagogical support in the form of training courses on digital governance (The Hindu Bureau, 2022).
A unique feature of the state's vision of digital transformation is outlined in its e-Governance plan of 2021, which focuses on incorporating new technology in governance by developing state capacity and resisting dependence on external agencies. This desire, I argue, represents postcolonial aspirations of reducing dependency on new forms of data colonialism. On the one hand, Kerala is attempting to bypass its dependence on proprietary software and develop state capacity through open-source software. On the other, it is attracting investments from big tech companies like IBM and providing incentives to startups to develop indigenous capacity. Thus, a curious mix of public-private investment is emerging in Kerala, which, owing to its history of left movements and labour militancy, has traditionally resisted reliance on big capital (Heller, 2000). Here we encounter a unique dilemma of quantification and postcoloniality, where postcolonial aspirations are driven by the desire to reduce dependency and drive modernisation, but the modalities of quantification reproduce dominant dependencies of data colonialism. Kerala offers a unique lens through which to study the inherent tendencies of algorithmic governance, beyond the usual public-versus-private binary, because unlike dominant systems in the west, Kerala is developing algorithmic governance by attempting to bypass private enterprise and has channelised its alternative development trajectory to build a progressive state. Despite this progressive model of state, the adoption of algorithmic infrastructures in Kerala reveals tendencies of insulation and invisiblisation that, if left unchecked, may prove dangerous. Whether a new form of algorithmic governance is emerging, one which aligns with the Kerala model of development, is worth exploring but lies beyond the scope of this paper.
For the purposes of this paper, I draw critical attention towards two emergent tendencies of algorithmic governance, namely insulation and invisiblisation.
Insulation and invisiblisation
During fieldwork, I observed that many participants perceived existing modes of policing as outdated. An inspector stated (in paraphrase), ‘crime has become very advanced, criminals are a lot more advanced than us. Only now are we able to even get close to the level of high-tech crime that has been taking place… which is not possible without the help of AI’. Personnel often noted that ‘ChatGPT is everywhere, but police technology is outdated’, arguing that this mandates that policing become high-tech by incorporating algorithmic infrastructures to keep pace with the changing mode of life. Policing in Kerala had a moderately high pendency figure of 15.6% in 2019 (National Crime Records Bureau, 2019). It is believed that the incorporation of modern technology, automation, digital evidence and ILP will reduce pendency, indicating efficiency in policing. This sentiment was echoed by various state functionaries. During a seminar on the topic ‘The police of New Kerala’, one keynote speaker argued that ‘new technology will open new possibilities’ for the police force that have erstwhile been unavailable. He argued that, ‘with the linking of bank accounts, Aadhaar Card,1 mobile phone, and Google account, case investigation would become so much easier’. He elaborated, noting that ‘to track the activities of an accused person, there would be no need to go out to investigate and search, no need to collect testimonies’. He remarked that things that people themselves may not know or remember – like where they were on a specific date – would be possible for a police officer to know in an instant. Noting that access to this kind of information was already available, he argued that ‘with AI these capabilities will develop further, and therefore in such an era, new opportunities will become possible for the police’ (KPOA State Committee, 2023).
I wish to build on these two imaginaries of the necessity of AI and its potential to make policing ‘so much easier’. On the surface, these hint at the myriad possibilities that digital surveillance opens for the police. If we scratch the surface a bit more, we can note subtle transformations in the everyday of the police as well. The digital footprint from the Digital Public Infrastructure – which the linking of bank-Aadhaar-telephone reveals – potentially offers justiciable and incontrovertible evidence that will stand scrutiny in court, in the guise of objectivity and value-neutrality. This contrasts with a more traditional and ‘outdated’ mode of investigation, where personnel collect testimonies, eye-witness accounts and statements from bystanders to recreate an incident. This becomes clear from the statement that ‘there is no need to collect testimonies’. Conventional investigation methods rely heavily on social interactions, mandating the integration of subordinate personnel and the public. Evidence in the form of testimonies, however, is seen as outdated because testimonies are deemed suspicious, motivated or dubious when subject to interrogation in a court of law. In contrast, algorithmic infrastructures are seen to provide ‘objective’ evidence: the digital footprint becomes the harbinger of data-driven policing, replacing its reliance on human testimony. This new mode of investigation, premised on ‘scientific investigation’ and data-driven policing, is also imagined to be ‘easier’ – since all it requires is access to the digital data that the mass surveillance network makes possible, as opposed to hours of social interaction – supposedly saving many hours of work, resources and energy which can then be put to ‘better’ use.
The chronically overworked police personnel that I interacted with themselves viewed technology-enabled policing as beneficial because of its potential to ‘save time’ and provide incontrovertible evidence. Underlying this perceived transformation, however, we may note a technocratic reimagination of policing as the mechanical execution of procedure, where everyday policing could be transformed into acts done behind a screen, without ‘going out’, removing the need to interact with the public. This new modality is clearly evident in the AI traffic enforcement system in Kerala. Through publicly available reports and my interactions with police personnel, I learned that the AI cameras installed at specific traffic junctions in Kerala were trained in-house to automatically detect traffic violations – such as drivers not wearing seat belts or helmets, or jumping traffic lights – and capture images as ‘evidence’. These images are transmitted to a control room, where staff verify the reported incidents and send them to a senior officer, who batch-approves violations for the issuing of fines by the Motor Vehicle Department. In short, the AI camera replaces the subordinate traffic cop, removing the node of social interaction between the traffic cop and the public. This is not to say that algorithmic policing completely insulates the police as an institution from social interaction; rather, I argue that it aims to invisiblise the operation of subordinate personnel's discretion and human participation, at least in the domains where police work is visible to other agencies and the public. It creates a hierarchy in which human discretion and human testimonies are suspect, and inferior to data-driven evidence.
Subsequent reports of the AI traffic enforcement system, showing the success of this mechanism, carried news of incidents where state officials themselves were fined for traffic violations (Mathrubhumi, 2023), outlining how these new systems, unlike the old, were objective and impartial towards violators irrespective of their identity.
This mode of policing replaces socialised forms of policing through quantification – more specifically, policing premised on in-person interactions with multiple stakeholders, mandating the social integration of subordinate personnel with the local population – with technical tasks like extracting call records and tracking the GPS location of the accused, or batch-approving traffic violations. In this context, two emergent tendencies of algorithmic governance become amply clear: (1) insulation – where personnel's discretion is operational only in the domain of collecting scientific evidence, and not in building social networks with the population; and (2) invisiblisation – where the subordinate personnel's discretion is hidden from perception. I argue that these tendencies are better understood through the dialectics of discretion as practices of discipline, which I elaborate upon in the subsequent section.
This invisiblisation of discretion is also evident in other tasks of policing that algorithmic frameworks transform. At the time of writing this paper, in the winter of 2024, the two major developments in AI systems in policing were (1) the AI-enabled traffic enforcement system to monitor traffic violations and automatically process fines (briefly referred to above), and (2) Facial Recognition System (FRS) and AI for data management through iCoPS, an indigenously developed software to record and analyse police data. Kerala Police reported having successfully identified suspects in theft cases using FRS in the iCoPS software. The personnel I interacted with noted that these systems were useful because they allowed an officer to instantaneously compare images of suspects against a database of 1.5 lakh ‘criminals’ – a task which would earlier have taken many hours or days, since subordinate-level personnel would have had to manually compare the image against the database and create a report for the investigating officer to approve. To assist with the adoption of these new systems, the state has also announced plans to transform 20 police stations into ‘smart’ police stations, incorporating modern technology and stationing officers with technical expertise, such as in AI and Facial Recognition Technology, to assist in these stations (Mathrubhumi, 2024).
Here one may note, in the way the ‘modernisation’ plans are laid out, that Kerala police envisions the incorporation of AI technology to automate ‘routine’ tasks – tasks which are primarily done by subordinate personnel – not in a way that assists subordinate personnel, but in a way that reduces the need to recruit more subordinate personnel into the chronically understaffed police force. For instance, with FRS scanning images of 1.5 lakh ‘criminals’ and automatically detecting the identity of the accused, or AI-enabled cameras at traffic junctions automatically detecting traffic violations and processing fines, there is potentially no need to recruit more personnel in the subordinate ranks to perform these tasks.
Moreover, government functionaries claim that e-Governance has reduced corruption by making governance services available online, and thereby directly accessible to the public. State functionaries at the KPOA state committee annual conference noted that the e-Governance system has created an infrastructure wherein ‘people can access and avail state services directly, without relying on human interface [which may lead to corruption]’ (KPOA State Committee, 2023). In this narrative, social interactions between subordinate personnel and the public are deemed the sites of corruption; bypassing them allows the state to create ‘corruption-free’ infrastructures.
These aims are laid out clearly in the e-Governance plan which states that e-Governance frameworks focus on: providing various government services with transparency, effective interaction between the government and the public, people empowerment by enabling information dissemination, alleviation of corruption, reduced expenditure on governance and, overcoming delay in providing various government services. (Administrative Reforms Commission, 2021: 7)
In these ways, by relocating subordinate personnel and their discretion to technical tasks and by making state services ‘directly’ accessible to the public, e-Governance and smart policing appear to make the core tasks of policing less dependent on social integration, especially for subordinate personnel. Core policing tasks are being reconceptualised through algorithmic policing as a set of technical tasks that appear discretion-less. Since the incorporation of algorithmic processes in policing in Kerala is in its pilot stages, it remains to be seen how subordinate personnel will respond to these transformations. However, two dominant tendencies of algorithmic governance are clearly emerging in Kerala, tendencies that may be inherent to algorithmic governance in general.
1. Insulation: Insulation may be defined as the set of processes that relocate subordinate personnel's discretion to internal tasks, removing them from nodes of public interaction through the institutionalisation of a digital façade. Automation and algorithmic processes appear to be a means of removing interpretative, intellectual action on the part of subordinate personnel, conditioning their action into mechanical, technical tasks. In this tendency, a dialectic of discretion operating on the contradiction of liberty and discipline becomes apparent: the state and the police force aim to discipline subordinate personnel's discretion by limiting the zone of its application, so that liberty may be exercised only in technical and mechanical choices, and not in social situations where subordinate personnel may find avenues to integrate with the public and build social bonds.
Absolute removal of discretion, however, is not possible in social institutions like the police. Despite the overwhelming ways in which algorithmic systems condition choice, they are not capable of completely removing discretion. This reveals the second tendency:
2. Invisiblisation: Invisiblisation may be defined as a related phenomenon that removes personnel's discretion from the zone of public perception by relocating it behind a digital façade. Algorithms enable invisiblisation because it appears that the discretion of subordinate personnel is transferred to an ‘objective’ and ‘impartial’ system. For instance, in the case of AI enabled cameras monitoring traffic violations, the decision on whether a traffic violation has been committed is no longer made by the traffic cop, but by the AI enabled detection system. Personnel may be involved in assessing whether there are any technical problems in the decision, but not in assessing whether a social act amounts to a violation of traffic laws. Algorithmic policing in this manner seems to be primarily geared towards removing subordinate personnel from discretionary decision-making tasks that involve nodes of interaction between subordinate personnel and the local population, while retaining subordinate personnel for routine but internal tasks.
In all these instances, it is important to note that algorithmic infrastructures do not radically replace existing mechanisms, and therefore do nothing to address systemic problems like ingrained biases. They are geared towards increasing the speed or rate at which decisions can be made, though not necessarily their accuracy. Speed is thereby equated with efficiency. For instance, FRS produces fast ‘evidence’ for instant police action. Policing today is seen to be outdated because it is perceived to be slow. Algorithmic infrastructures allow for fast policing and fast decision-making, improving ‘efficiency’ in statistical metrics such as pendency rates and conviction rates, and thereby creating a perception of efficiency.
Insulation and invisiblisation are two emergent tendencies of algorithmic policing. On their own, these tendencies may be misinterpreted as merely relocating police personnel to other tasks within the police organisation. When analysing these tendencies through the dialectics of discretion, we see that algorithmic policing is not only relocating subordinate personnel but is geared towards removing them from discretionary decision-making roles involving public interaction. This does not mean that the police as a social institution does not interact with the public; rather, these interactions are now mediated by algorithmic infrastructures instead of by subordinate level police personnel. What makes insulation and invisiblisation potentially dangerous tendencies is the fact that the upper echelons of the police hierarchy are already insulated from everyday public interactions. As noted above, subordinates' discretion is more qualitative in nature (recall the traffic cop assessing whether a traffic violation has taken place), whereas superiors' discretion is already guided by statistical metrics (recall the district police chief assessing the number of personnel to be deployed in response to crime statistics).
Rarely do police personnel in supervisory roles have any real and meaningful public interaction, since their primary task is to supervise and monitor the actions of those at the subordinate levels. Behind the algorithmic infrastructures are also police personnel, but these are not the same discretionary personnel as before. The decision to approve whether a traffic violation has taken place is reserved for a superior level officer with the authority to batch approve violations, rather than the subordinate level traffic cop who, through human interaction, gauges not only whether a traffic violation has taken place but also its potential reasons, and on that basis decides through discretionary powers whether or not to activate legal procedures in processing fines and/or punishment. This ability to selectively activate and deactivate legal procedures depends on the discretionary powers of ranks that are in direct contact with the public. In the disciplinary schema, however, these would be treated as leakages or lapses in procedure. These power relations are not produced by algorithmic governance; on the contrary, algorithmic governance is conditioned by these pre-existing dynamics. At a superficial level, it would appear that retaining a human-in-the-loop – for instance, the officer responsible for batch approving violations – retains human agency in the system, but the dialectics of discretion reveals that in hierarchical organisations discretion is neither removed nor retained; rather, it is relocated and conditioned through the fundamental contradiction of liberty and discipline. Quantification is thus selectively deployed in algorithmic policing to condition discretion.
Dialectics of discretion
The literature on algorithmic governance has noted how AI models embody the vision of new public management, conditioned by ideas of ‘efficiency’, ‘cost-effectiveness’ and ‘data-driven decision making’, that clears human discretion out of its way (Kuldova et al., 2021; Kuldova, 2022). Despite the overwhelming ways in which algorithms condition our choices, algorithmic governance does not strip the individual of their agency completely. ‘Algorithmic agency’ allows us to see how institutions deploying algorithmic infrastructures selectively deploy regulatory frameworks to condition the agency and discretion of subordinate workers, revealing operational power relations (Bonini and Treré, 2024; Ferrari and Graham, 2021; Kennedy et al., 2015). To more fully understand these power relations, I propose a theoretical framework of the dialectics of discretion, operating on the fundamental contradiction of liberty and discipline.
The contradiction of liberty and discipline is not unique to algorithmic governance. It predates algorithmic systems, but nevertheless overdetermines the particularistic manifestations of algorithmic governance. It is especially evident in institutions like the police, where the organisational system is overdetermined by a disciplinary hierarchy. In this context, I define liberty as the freedom to act according to individual will, and discipline as the conditioning of social action premised on the suppression of individual will for the execution of the will of the dominant. Liberty and discipline, defined thus, exist in a fundamental contradiction in most hierarchical social relations. This contradiction can be seen in labour conditioning practices in factories that monitor and control when one can eat, take a break, or use the lavatories, in disciplining practices in the gig work economy operating through a system of monetary incentives and disincentives, and in the modalities of conditioning discretion in routine police work. The modalities of the contradiction of liberty and discipline vary according to the dominant dynamics of the institution; however, in institutions like the police and military, which are formally conditioned by disciplinary systems, these contradictions appear starker and are easier to visualise.
That liberty and discipline exist in a contradiction is self-evident. The factors that make them dialectical contradictions in institutions like the police are that they are in principle distinct entities but inseparably linked, that they are ‘synchronically internally related’, and that they ‘existentially presuppose the other’ (Bhaskar, 2008: 53; Ollman, 2003). Liberty and discipline are dialectical contradictions in the police in so far as they represent opposing tendencies that are neither fully realised, nor is the contradiction ever fully resolved. It subsumes the Foucauldian conduct, counter-conduct power relation (Foucault, 2007). Yet the desire to realise one tendency conditions social action by conflicting groups in these institutions. For instance, in the case of police organisations, while the desire to acquire more liberties may be expressed by personnel across ranks, it manifests more stringently among the subordinate ranking officers, who feel constrained by norms, rules and procedures. However, the mechanisms of discipline are not equally at the disposal of all ranks. Rarely if ever are there any mechanisms where the bottom can discipline the top. The desire and ability to achieve complete discipline and control manifest more stringently among the supervisory positions, which wish to direct the action of subordinate level personnel into unified, disciplined behaviour to achieve institutional aims. These contradictions may become more overtly visible in moments of clashes between the top brass and subordinate ranks, but they nevertheless operate in the everyday practices of discretion in these institutions as well. Dialectical contradictions are also ‘tendentially transformative’ (Bhaskar, 2008: 54); in other words, as internal relations that presuppose each other, their particularistic manifestations also guide future transformation and change (Ollman, 2003).
This analysis of change, I argue, is what makes the dialectics of discretion a useful framework in understanding algorithmic transformation of policing today.
Discretion in hierarchical structures like the police is always conditioned by the mechanism of discipline, which makes subjects of the personnel first, by stripping them of their liberty. Weber (1946) defines discipline as ‘consistently rationalized, methodically trained and exact execution of the received order, in which all personal criticism is unconditionally suspended, and actor is unswervingly and exclusively set for carrying out the command’ (p. 253). Foucault (1991) expands on this and notes that ‘discipline produces subjected and practised bodies, “docile” bodies. Discipline increases the forces of the body (in economic terms of utility) and diminishes these same forces (in political terms of obedience)’ (p. 138). Discipline, as an interpersonal act, may therefore be seen as the ways in which an individual is conditioned or constrained in their possibilities of action. This relies on limiting the scope of discretionary decision making on the part of the actor upon whom discipline is imposed. However, discipline cannot be fully understood as an interpersonal relation alone. Foucault (1991) notes that discipline fulfils an additional demand: ‘to construct a machine whose effect will be maximized by the concerted articulation of the elementary parts of which it is composed. Discipline is no longer simply an art of distributing bodies, of extracting time from them and accumulating it but of composing forces in order to obtain an efficient machine. This demand is expressed in several ways. The soldier is above all a fragment of mobile space, before he is courage or honour’ (p. 164). For disciplinary institutions to become ‘efficient machines’, therefore, it is imperative that the elementary parts – that is, subordinate personnel in the police – function only as intended or instructed, with conditioned discretion. In this manner, the schema of discipline attempts to create, as Foucault (1991: 138) notes, ‘docile bodies’.
In this schema, the ultimate expression of discretion lies in its appearance as impersonal or impartial action.
Insulation and invisiblisation in algorithmic policing, I argue, may be seen as practices of discipline because they contribute towards the building of an ‘efficient machine’ by fundamentally relocating the constituent elements – away from nodes of social interaction with the public – and, in doing so, conditioning the discretion of the constituent elements through mechanisms of discipline. The contradiction of liberty and discipline thus conditions the everyday of the police, wherein discretionary (and seemingly abstract) police powers enable police intervention in the everyday lives of the public, for instance through mechanisms of surveillance and automated decision making systems, under a disciplinary apparatus that attempts to convert personnel into operators of these mechanisms, and thereby into agents of ‘impersonal’ and ‘objective’ action. Like any other dialectic, the dialectic of discretion is fraught with fissures and tensions.
Foucault (2007), reflecting on the role of policing within the framework of governmentality as it emerges at the turn of the 19th century in Europe, notes that the police is a technology of the state to control populations. Here a double movement must be noted: as the primary security apparatus, the police attempts to discipline society by regulating the everyday lives of the population, but does so by subjecting its personnel to a disciplinary schema first. Algorithmic governance allows both these movements to take place more effectively, simultaneously improving the state's surveillance capacities while ensuring the subjectification of subordinate personnel, precluding any real, meaningful communication between the police and the public.
This fundamental contradiction between liberty and discipline helps us visualise why discretion is framed as a problem in algorithmic governance. Discretion here is constructed as a problem by those who wish to condition and discipline it. The desire to control police discretion has animated police reform debates globally for several decades. The rise of the intelligence-led policing (ILP) model was premised on the desire to curb intuitive policing, for its perceived problems of bias, corruption and misuse. ILP and, more recently, algorithmic policing function by promising to ‘remove’ discretion through scientific method and data-driven policing, converting these institutions into objective and accessible entities. Research on algorithmic policing, however, has convincingly shown that this is not the case (Egbert and Leese, 2021; Marciniak, 2023; Sandhu and Fussey, 2021). When discretion is ‘removed’, gatekeeping increases, and rests in the hands of the technical experts who operationalise the algorithmic framework.
The dialectic of discretion, operating on the contradiction of liberty and discipline, I argue, conditions the evolution of policing systems in an era of algorithmic governance, wherein the emergent tendencies are those of insulation and invisiblisation. However, this dialectic is not abstract and universal but itself historically situated, and therefore needs to be documented in particularistic settings. The dialectics of discretion furthermore allows us to see algorithmic governance and algorithmic policing not only as new socio-technical relations, but also as social relations which, in their contemporary manifestations, are mediated by algorithmic infrastructures and conditioned by pre-existing power relations. In contexts like India, policing is heavily conditioned by the legacy of colonialism. It is therefore imperative that the suspicion of subordinate personnel's discretion, and its disciplining, be located historically.
Suspicion of discretion, and discipline: colonial, post-colonial legacies
The modern police in Kerala, like the rest of India and many other erstwhile colonial states, emerged with the introduction of the modern state by the colonial British administration. This modern state and modern police emerged as a militarised and racialised technology in the absence of effective agencies of surveillance. In this regard, the colonial census produced knowledge of the Indian population as composed of racialised identities, which the modern state, and most importantly its professional police were to control, discipline and regulate. In its origin therefore, the objects of policing in India were racialised colonial subjects (Varghese, 2023).
The primary purpose of the colonial police and the state was that of securing the economic and political interests of the dominant propertied classes and the state. For this purpose, structural inequalities of a racialised character were embedded in the policing mechanism, wherein a large force of indigenous subordinate police were to be regulated and controlled by an exclusively European superior police (Arnold, 1986). This is where the need to control subordinate personnel's discretion becomes amply clear. Arnold (1986) notes that a fear of the police among local communities was harboured and promoted, and that the British found a pragmatic utility in police excesses: maintaining the idea of the police as corrupt and intimidating alienated the people from the police, and served to prevent any dangerous collusions between them.
In this model of policing, the discretion of subordinate level personnel, who were recruited from indigenous populations, was always viewed with suspicion. The model of internal disciplinary control was implemented to ensure that no ‘dangerous’ collusions that might threaten the colonial state were taking place between the subordinate ranks and local populations. Though introduced in the colonial framework, this suspicion of the subordinate ranks and their discretion has survived into the postcolonial framework as well.
Witnessing the success of a disciplined police force in executing the will of the state, the newly independent Indian state after 1947 adopted the mechanism of the colonial police, to serve the needs of a new nation, thus continuing embedded structural inequalities. The racialised conflict apparent between the superior and subordinate police in colonial era transformed into a caste-class conflict informed by racialised imaginations in the postcolonial context (Varghese, 2023). Postcolonial nations like India, retained discipline as the fundamental organisational principle to coordinate everyday policing. The various legal and constitutional frameworks that establish the policing mechanism of postcolonial India, identify the need of disciplinary control in police by stating that discipline is necessary to ensure that personnel do not abuse the lawful authority vested in them, and that they are protected from ‘unwarranted influences’ to enable them to function ‘impartially’.
India is not unique in this regard, police forces across the world justify the need for discipline in similar terms. However, it is important to note how everyday practices of discipline take shape in particularistic settings. In the context of India, discipline emerged first in the colonial era, in the form of conditioning discretion of subordinate personnel to keep them from forming any social connections with the local population. In the postcolonial framework, discipline was legally deemed necessary to prevent the abuse of ‘lawful authority’, and ‘unwarranted influence’. Discretion in postcolonial India thus became equated with the potential abuse of power, and unwarranted social connections. Suspicion of the discretion of subordinate personnel, in different forms, has been a foundational logic of the policing organisation. This indicates hierarchical power relations primarily because this suspicion of discretion does not apply to all ranks. Discretion of superior personnel – that is, the upper echelons of police hierarchy – is not imbued with suspicion in the same manner. Discretion and its management therefore, I argue, must be seen as part of the mechanism of disciplinary control in police.
Discretionary power, when codified in rules and laws, especially for subordinate personnel, is always subject to specific conditions. There are no instances of absolute discretion, where personnel are free to act according to their own will. A review of how discretion is managed in everyday policing under the Kerala Police Manual, 1969 reveals that discretion is already a matter of hierarchical control. It is primarily vested in (1) the Government, for the monitoring of individual opinions of officers, as personnel are prohibited from expressing opinions on the actions of the government; (2) in Courts, for matters pertaining to investigation and the handling of prisons and litigants; (3) in supervisory ranks, like the superintendent of police, for the reallocation of resources and deployment of officers; and (4) only in one condition, in an ‘officer in charge’ from the subordinate ranks, for the refusal of investigation – subject to the fulfilment of certain listed conditions, and the scrutiny of supervisors as a check against the ‘improper use of discretion’.
Other than the discretion of the officer in charge, all other acts by subordinate personnel may be interpreted as potentially unqualified discretionary powers. This shows that discretion itself is a matter of rank and hierarchy in the mechanism of the police which is conditioned by disciplinary control of subordinate officers. Other forms of police action are not conceptualised as discretionary powers but rather as execution of procedures, which algorithmic infrastructures are deemed capable of automating.
Mechanisms of postcolonial policing, like in Kerala, which are conditioned by hierarchised access to discretionary power resting on supervisory ranks and the state, entail that the discretionary use of power by subordinate personnel is always subject to conditioning and discipline. It is worth remembering here, as Go (2023) highlights, that modern policing, in both postcolonial nations and erstwhile colonial metropoles, is conditioned by modalities of the ‘racialised imperiality of civil policing’ (p. 19), reminding us that the origin of policing as a technology rests on empire and its colonial character. He highlights how global policing models are multiscalar and multidirectional, and how imperial (now postcolonial) and metropolitan police forces condition each other. For instance, he highlights how new technologies of police surveillance have their roots in the development of fingerprinting technology in British India. Drawing from this, I note that modern developments in the technology of civil policing continue to be conditioned by these colonial legacies and manifest in the mechanisms of discipline. This internal logic of discipline aligns well with the tendency of algorithmic governance to implement mechanisms that embolden policing the bottom (Kuldova, 2022). Recall how the traffic cop is replaced by an AI enabled traffic camera, and a senior level official is placed to batch approve cases of violation. In this way, processes of quantification (Besteman, 2019), while homogenising state activities, also discipline discretion, which in postcolonial contexts like Kerala builds on pre-existing power relations. When documenting practices of algorithmic governance, it becomes evident that the regulation of discretionary power is directed towards efficient control. Algorithmic policing does not alter pre-existing systems and tendencies but rather intensifies power differentials and executes procedures that focus on disciplining the discretion of subordinate personnel.
Such practices of discretion, I argue, are effectively documented through a dialectical framing of discretion operating on the fundamental contradiction of liberty and discipline.
Conclusion
The construction of human discretion as the cause of bias and corruption, and therefore as a ‘problem’ in algorithmic governance and policing, obfuscates the issue by drawing attention away from the structural features of police organisations. While removing nodes of interaction between subordinate personnel and the public aims to condition subordinate personnel's discretion by relocating its zone of application to technical tasks, it creates the illusion of a solution to a problem that was erroneously placed on discretion in the first place. It removes the focus from actual structural features like colonial legacies of racialisation and discipline, unequal laws and discriminatory procedures, whose automation, along with the conditioning of discretion, not only automates inequality (Eubanks, 2018) but also automates docility.
In this paper I have outlined how algorithmic governance reveals tendencies that condition the discretion of subordinate personnel in ways that insulate them from social interactions and invisiblise the operation of their discretion. I draw from the dialectics of discretion and the colonial legacies of policing to note how these tendencies are guided by disciplinary practices. States must recognise that these processes, if left unchecked, may potentially lead to institutions transforming into Kafkaesque, opaque organisations with low democratic accountability. Any state that hopes to achieve the democratisation of state services must be critical in its adoption of algorithmic frameworks.
Algorithmic systems alone are unlikely to transform policing to make it ‘better’ equipped for serving the needs of a democratic society. While algorithmic systems may have their merits, it is important to understand their limits and their underlying tendencies to insulate the police from the population and invisiblise the operation of subordinate personnel's discretion. While these tendencies are becoming apparent in Kerala, they are not unique or limited to Kerala. As Go (2023) notes, global policing models are multiscalar and multidirectional: postcolonial (erstwhile imperial) and metropolitan police forces have historically conditioned each other's development, and continue to do so today. These tendencies, visible in Kerala despite its progressive development strategies, are applicable to India in general, and to other postcolonial nations as well. For policing to evolve into a democratic and civic service serving the needs of a pluralist democracy, as the Kerala Police Vision Document 2030 claims, the state must harbour avenues that promote social integration between police and public, in ways that make the police democratically accountable.
In this regard, Kerala, owing to its development trajectory, is uniquely poised to implement countermeasures. The state in its e-Governance policy acknowledges that the absence of human discretion in automated processes may have adverse effects on the life of the people. Similarly, in the Kerala Police Vision document, the organisation acknowledges that big data and algorithm driven policing pose threats to the personal security, privacy and constitutional rights of citizens. It appears that, to counter the tendencies of insulation and invisiblisation, the state is implementing countermeasures. While in core policing activities algorithmic systems appear to insulate the police from social integration, social policing and community policing measures like the Kerala Police Janamaithiri Policing initiative – spearheaded by the Kerala Police subordinate personnel's association – focus on increasing police-people integration. Similarly, in the adoption of AI systems, the state embarked on an ‘AI for the People’ campaign, which aimed to demystify artificial intelligence for everyday use (Praveen, 2024). The e-Governance policy notes that ‘people-centricity, choice, consultation, engagement and empowerment of the people’ (Administrative Reforms Commission, 2021: 111) are the core features of the state's e-Governance policy.
Despite these policy priorities, the tendencies of insulation and invisiblisation in algorithmic governance are apparent, which suggests that, in the absence of effective countermeasures, the inherent tendencies of algorithmic governance may be more susceptible to disciplining mechanisms. The incorporation of algorithmic systems in the state of Kerala is an ongoing process, with many initiatives still in their early phases. It remains to be seen how, and whether, the state in its implementation of algorithmic frameworks will be able to counter the tendencies that algorithmic systems pose. Given the state's history of implementing a pluralist, democratic and humane model of governance and development, one may hope that the state will find new ways to implement a democratic form of algorithmic governance, which would be a truly unique contribution. Till then, one may only speculate.
Acknowledgements
The primary research for this work was conducted as per a collaboration agreement between Oslo Metropolitan University and O.P. Jindal Global University for the implementation of the R&D project Algorithmic Governance and Cultures of Policing: Comparative Perspectives from Norway, India, Brazil, Russia, and South Africa (AGOPOL). The author would like to thank Sharath Srinivasan and Ella McPherson (Co-directors, Centre of Governance and Human Rights, University of Cambridge) for their invaluable comments on early drafts of this paper. The author would also like to thank Tereza Østbø Kuldova and Christin Thea Wathne, project leaders of the AGOPOL project for their support and guidance with this research.
Ethical approval and informed consent
Verbal informed consent has been secured from all research participants and the study was conducted in accordance with the ethical guidelines of O.P. Jindal Global University, India and the British Sociological Association Guidelines on Ethical Research.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by The Research Council of Norway under project no. 313626 – Algorithmic Governance and Cultures of Policing: Comparative Perspectives from Norway, India, Brazil, Russia, and South Africa (AGOPOL).
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
