Abstract
From 29 to 30 August 2022, a diverse group of international researchers convened under the Arctic northern lights in Tromsø. They set out to discuss some of the most pressing questions facing European criminal and public security law. This scientific event was co-organised by the Research Group on Crime Control and Security Law at The Arctic University of Norway under Nandor Knust and Jon Petter Rui, and the Otto Hahn Research Group on Alternative and Informal Systems of Crime Control and Criminal Justice at the Max Planck Institute for the Study of Crime, Security and Law under Emmanouil Billis. As Emmanouil Billis and Nandor Knust noted in their opening speech, the conference had set itself the goal of critically assessing how modern technologies, and especially artificial intelligence (AI), can serve to strengthen the efficiency and effectiveness of crime control and criminal justice systems, while at the same time complying with established rule-of-law principles and human-rights standards. In my report, I summarize the conference’s main discursive themes.
From 29 to 30 August 2022, a diverse group of international researchers convened under the Arctic northern lights in Tromsø. They set out to discuss some of the most pressing questions facing European criminal and public security law. This scientific event was co-organised by the Research Group on Crime Control and Security Law at The Arctic University of Norway under Nandor Knust and Jon Petter Rui, and the Otto Hahn Research Group on Alternative and Informal Systems of Crime Control and Criminal Justice at the Max Planck Institute for the Study of Crime, Security and Law under Emmanouil Billis.
As Emmanouil Billis and Nandor Knust noted in their opening speech, the conference had set itself the goal of critically assessing how modern technologies, and especially artificial intelligence (AI), can serve to strengthen the efficiency and effectiveness of crime control and criminal justice systems, while at the same time complying with established rule-of-law principles and human-rights standards. This is a daunting task, not just because of the vast complexity and rapid evolution of the relevant technologies, but also because of the sheer range of relevant use cases – the participants were called to discuss scenarios ranging from predictive policing, crime prevention and detection to risk and recidivism assessment, the processing of evidence and the determination of criminal punishment. Equally crucial was the interdisciplinary dialogue at the heart of these issues. Legal researchers must properly convey their craft to AI developers so that the latter have a firm grasp of the main ideas behind key legal concepts and of how these concepts may differ across legal traditions. In the end, it will be up to these developers to “translate” central legal notions and protective principles into programming language.
I can confidently say that the group of researchers who gathered in Tromsø was well-equipped to tackle these challenging questions. This was due not only to their well-established expertise, but also to their diversity along several dimensions: cross-generationality (the participants included both senior and junior researchers), internationality and interdisciplinarity, both within the legal disciplines and in the dialogue between legal and IT experts.
Given the breadth of contributions, it seems impossible to do all the speakers justice, but I will try to briefly outline what, to me, seemed to be three of the main discursive themes permeating our discussions:
I. The ambivalence of technological progress
Many contributions highlighted the advantages of artificial intelligence: Dag Johansen demonstrated, inter alia, how intelligent surveillance could protect fish populations in the Norwegian Sea from overfishing. Bjørn Aslak Juliussen examined how homomorphic encryption, trusted execution environments and differential privacy could make data transfers safer under the conditions of the CJEU’s Schrems jurisprudence. Areeg Samir Ahmed Elgazaaz investigated how modern AI applications could enhance health care.
Indeed, at its best, AI can make law enforcement not only more efficient, but also more targeted and rational. However, researchers also emphasised how new technologies radically expand the modern state’s powers of surveillance and coercion, thus raising all sorts of rule-of-law concerns. For instance, Mathias Hauglid eloquently illustrated to what extent AI systems can exacerbate biases in decision-making. Magne Frostad and Clementina Salvi highlighted the different ways in which AI applications can facilitate the manipulation of information, thus endangering our democratic discourse and threatening privacy and personality rights. These risks create the need for clear, rule-of-law-based limits and safeguards. In her presentation, Niovi Vavoula dissected how the CJEU’s gradually developing jurisprudence on the automated processing of personal data for law enforcement purposes advances these legal protections. In so doing, she analyzed how the Court tackles issues such as the opacity or “black box” character of modern AI applications.
The contributions, however, also underscored how we have only begun to come to terms with this profound ambivalence of technological progress. Much more interdisciplinary work needs to be done in order to reconcile AI’s promise with its peril.
II. Who should set the standards?
Throughout our discussions, it became clear that before even determining what future standards ought to be, it is necessary to think about who should set them. This problem, though perhaps simple at first glance, actually touches on fundamental questions about legitimacy, multilateralism and the distribution of power. This puzzle can be conceived as operating along three axes:
The first axis runs between the state and the private natural or legal person. In his talk, Valsamis Mitsilegas laid out how modern security law is characterised by an increasing shift of surveillance and law enforcement responsibilities from the state to private corporations, such as financial institutions or airline carriers. This trend raises all sorts of questions, ranging from the fundamental problem of the legitimacy of private power and its chilling effects on our freedom, through the conflict between business secrets and the necessity of transparency and control of technologies used for law enforcement purposes, to more technical problems such as data security and infrastructure.
The second axis runs between the unilateralism of the national state and the multilateralism of a globalised legal community. Katalin Ligeti pointed out how, just like our societies themselves, threats to our security are becoming increasingly globalised. Likewise, technological progress is achieved through global cooperation, transcending national borders. Big data technologies can only fulfill their potential for European societies if data, rather than being monopolised by large US companies, can flow freely. This can only be achieved if the necessary rules on data retention, transfer and processing are set at a global level. It will be challenging, however, to reconcile this need for multilateral data governance with our legitimate desire not to lower our standards, given the existing transatlantic asymmetries in levels of data protection, as set forth in the CJEU’s Schrems jurisprudence.
The third axis runs between human and machine cognition. In my research, which I was invited to present at the conference, I tackle the question of the extent to which our legal order allows us to delegate decision-making powers from humans to machines. Assume that in a given scenario, a machine can make a functionally better (meaning a less biased, less noisy and more freedom-preserving) decision than a human ever could. I investigate what remaining interest we have in preferring human over machine judgment. This thought experiment provokes us to confront the fundamental question of what precisely it is that makes a decision human. While at first appearing rather abstract, this question bears practical consequences for open doctrinal problems in the interpretation of norms that postulate human intervention in automated decision-making processes. Take for instance Art. 22 GDPR or Art. 11 of the EU Law Enforcement Directive: as long as we cannot clearly put our finger on what exactly defines a decision’s human element, we can neither be certain about identifying a “decision based solely on automated processing”, nor can we determine which forms of human intervention these norms compel us to undertake.
III. Evolving technology means evolving law
Many speakers reminded us that, as technology evolves, our established legal notions need to evolve as well. As automation is introduced into our legal concepts, they are transformed – potentially beyond recognition. Hence, as we determine to what extent we are willing to allow for this transformation, we are encouraged to critically assess our pre-conceived notions. This reinvigorates our thinking about them.
Julian Roberts investigated how AI can supplement sentencing without supplanting it, thus reaffirming established principles such as transparency and contestability. Richard Vogler eloquently demonstrated how modern surveillance systems transform what it means to protest and to challenge emerging power structures. Lorena Bachmaier and Eftychia Bampasika analyzed how the opportunities that modern AI applications, such as voice or image recognition, bring to criminal procedure can be reconciled with established safeguards and terminology, such as probable cause or suspicion. Artem Galushko investigated how new technologies, especially methods of hybrid cyber warfare, alter the face of war, thus challenging common European approaches to national security.
Such research endeavors not only serve to instill old principles with new life; they also remind us legal researchers what makes our craft so fascinating. Yet they highlight the limits of our knowledge, too. There is so much translation work left to do. We will only achieve AI applications that reflect our values if we not only critically re-assess our thinking, but also communicate it to computer scientists and AI practitioners – and then listen and learn how our concepts work in practice. This is as challenging a task as it is exciting. The Tromsø Academic Conference on Crime Control, Security and New Technologies, given its interdisciplinary profile, was a promising step in this long process.
Footnotes
Author’s note
Christian Thönnes is a doctoral researcher at the Department of Public Law at the Max Planck Institute for the Study of Crime, Security and Law in Freiburg/Germany.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
