Abstract
This paper articulates and defends a novel type of precautionary argument for situations of severe uncertainty in science and policy, which I term precautionary slippery slope argument. The paper explicates the structure of precautionary slippery slope arguments, identifies the main factors that bear on the strength of these arguments, and illustrates how the proponents of such arguments can address several influential objections put forward against standard slippery slope arguments and other prominent forms of precautionary reasoning.
1. Introduction
Policymakers frequently advocate the adoption of precautionary measures by relying on slippery slope arguments (henceforth, SSAs). SSAs oppose particular actions/policies by arguing that allowing or implementing these actions/policies will likely lead to specific unacceptable—i.e., morally impermissible or otherwise objectionable—consequences (e.g., van der Burg 1991, 42; Walton 1992, 50-52; Whitman 1994, 85). SSAs are articulated and debated in several contexts across science and policy (e.g., Crawford 2003; Jones 2011; Rizzo and Whitman 2009). However, policymakers often face situations of severe uncertainty, i.e., situations where policymakers lack knowledge of both the probabilities of various possible outcomes and some of the possible outcomes themselves (e.g., Stern 2014, on climate change policies; Connell 2017, on genetic manipulation technologies; Bostrom 2014, chaps. 7-9, on AI applications). 1 Moreover, SSAs have limited applicability in situations of severe uncertainty. For in such situations policymakers commonly lack the knowledge of probabilities and possible outcomes required to show that the examined actions/policies will likely lead to specific unacceptable consequences (e.g., Hansson 1996, 376-377; Hill 2019, 225-226). 2
In this paper, I articulate and defend a novel type of precautionary argument for situations of severe uncertainty in science and policy, which I term precautionary slippery slope argument (henceforth, PSSA). Whereas standard SSAs oppose particular actions/policies by arguing that allowing or implementing these actions/policies will likely lead to specific unacceptable consequences, PSSAs prescribe specific precautionary measures (e.g., requiring further research, imposing specific safety standards, delaying or banning the rollout of a novel technology) by arguing that the examined actions/policies place people or society on a slippery slope with possibly catastrophic and yet-undetermined endpoints (e.g., Callies 2019, on the possibly catastrophic consequences of geoengineering; Evans 2021, on the possibly catastrophic consequences of human germline genome editing; Roff 2014, on the possibly catastrophic consequences of lethal autonomous weapon systems). My main claim is that despite influential objections put forward against standard SSAs and other prominent forms of precautionary reasoning (e.g., precautionary principles), PSSAs provide cogent reasons/evidence for the precautionary measures they prescribe in a wide range of situations of severe uncertainty across science and policy. As such, PSSAs effectively demonstrate how policymakers can justify precautionary measures against potentially catastrophic outcomes even in cases where they lack detailed information about such outcomes. 3
The paper is organized as follows. Section 2 explicates PSSAs’ argument structure and identifies several factors that bear on PSSAs’ strength, that is, the extent to which PSSAs provide cogent reasons/evidence for the precautionary measures they prescribe. Section 3 illustrates how the proponents of PSSAs can address six influential objections put forward against standard SSAs and other prominent forms of precautionary reasoning, namely: the objection from insufficient evidence (Subsection 3.1); the objection from excessive precautions (Subsection 3.2); the objection from underdetermined precautions (Subsection 3.3); the objection from arbitrary risk thresholds (Subsection 3.4); the objection from absolutism (Subsection 3.5); and the objection from diachronic incoherence (Subsection 3.6). 4
Before proceeding, let me put forward two preliminary remarks concerning this paper’s aim to extend the applicability of precautionary arguments to situations of severe uncertainty and the relevance of this extension for the ongoing philosophical debate concerning precautionary reasoning in science and policy. First, one may distinguish situations of more or less severe uncertainty depending on how much knowledge policymakers lack about the relevant probabilities and possible outcomes (e.g., Aven 2011, 1517-1522; Bradley et al. 2017, 501-503). Even so, one may identify a wide range of situations of severe uncertainty in science and policy (e.g., Ongaro and Andreoletti 2022; also van de Poel 2016, 681, holding that for many novel technologies, policymakers “do not know the potential benefits and drawbacks well enough to list all possible effects and to assign probabilities”). PSSAs are especially concerned with situations of severe uncertainty where the examined actions/policies impose significant risks of catastrophic consequences (e.g., permanent disruption of valuable socio-economic or political institutions; loss of human lives on major scales; extinction of multiple species; collapse of entire ecosystems). And policymakers frequently face such situations (e.g., Stern et al. 2022; Weitzman 2012, for illustrations concerning climate change policymaking; also Head 2019; Lazarus 2009, on various so-called “wicked” and “super wicked” problems). 5
And second, PSSAs are more aptly regarded as argumentation schemes requiring interpretation rather than algorithmic procedures providing policymakers with immediately applicable prescriptions about what precautionary measures to adopt in specific contexts (e.g., Walton 2015, 273-274, for similar remarks concerning SSAs; Gardiner 2006, 36-37, for similar remarks concerning precautionary principles). This, however, does not detract from PSSAs’ suitability to inform policymakers’ evaluations of precautionary measures in situations of severe uncertainty. For such evaluations are not straightforwardly reducible to algorithmic operations (e.g., Elliott 2017, chaps. 5-7; Khosrowi 2019, on the dependence of policymakers’ evaluations on various types of value judgments). And policymakers can justifiably rely on PSSAs even if PSSAs are not “always, and on [their] own, sufficient to guide [policymaking]” (Gardiner 2006, 57-58, commenting on precautionary principles). I shall expand in Sections 2 and 3 on the interpretation of PSSAs and on PSSAs’ suitability to inform policymakers’ evaluations of precautionary measures in situations of severe uncertainty. For now, I note that PSSAs are plausibly taken to provide pro tanto (rather than decisive) reasons/evidence for precautionary measures and that these reasons/evidence are open to further specification in light of the examined actions/policies and contexts of application.
2. PSSAs: Argument Structure and Strength
In this section, I explicate PSSAs’ argument structure and identify several factors that bear on PSSAs’ strength. PSSAs’ argument structure includes the following components: (1) a targeted action/policy (e.g., rolling out a novel AI application in a given population; removing emission caps in various carbon-intensive sectors); (2) the claim that this action/policy imposes significant risks of leading via slippery slopes to yet-undetermined catastrophic consequences (henceforth, CC; e.g., major socio-economic harms; disastrous environmental losses); and (3) the claim that the risks of CC imposed by the targeted action/policy justify adopting specific precautionary measures concerning such action/policy (e.g., delaying or banning the rollout of the involved AI application; tightening existing caps and regulatory constraints on permissible carbon-intensive emissions). 6
As to PSSAs’ strength, the following remarks are worth making. First, PSSAs’ strength is plausibly taken to come in degrees. For this strength directly depends on both how likely it is that the examined actions/policies lead to CC and how catastrophic such CC are (e.g., Fumagalli 2020, 413; Holtug 1993, 404). In particular, justifying precautionary measures requires PSSAs’ proponents to demonstrate that the risks of CC imposed by the examined actions/policies pass normatively significant thresholds. These risk thresholds are commonly vague (rather than precise) since PSSAs target situations of severe uncertainty, and in these situations policymakers commonly lack the knowledge of probabilities and possible outcomes required to specify precise risk thresholds. This does not prevent PSSAs’ proponents from specifying normatively significant risk thresholds, since severe uncertainty does not entail complete ignorance concerning the relevant probabilities and possible outcomes. In this respect, PSSAs sharply differ from so-called “mere possibility” arguments, which derive precautionary prescriptions “from the mere possibility that a course of action may lead to [negative] consequences” (Hansson 2011, 140; also Steel 2013, 326-328). For according to PSSAs, the mere possibility that an action/policy might lead to CC falls short of justifying precautionary measures targeting such action/policy. 7
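The contrast with mere-possibility arguments can be made explicit in a stylized condition (my own illustrative formalization, not drawn from the cited sources; the threshold t and the probability bounds are assumptions introduced here for illustration):

```latex
% Let [p_{\min}, p_{\max}] bound policymakers' (imprecise) estimate of the
% probability that action a leads to CC, and let t > 0 be a normatively
% significant, possibly vague risk threshold.
\text{PSSA: precaution is justified only if } p_{\min}(\mathit{CC} \mid a) \geq t .
\text{Mere-possibility argument: } p(\mathit{CC} \mid a) > 0 \text{ suffices.}
```

Because t may be vague and the probability estimate imprecise, the first condition can hold even when policymakers cannot specify a precise threshold; but it fails when CC is merely possible.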
Second, the fact that PSSAs speak of consequences does not commit PSSAs’ proponents to evaluating the examined actions/policies in consequentialist terms (e.g., Christiansen 2019, 6). In particular, PSSAs’ proponents may consistently rely on deontological and other non-consequentialist considerations in evaluating the examined actions/policies (e.g., Lazar 2018, 134-142; Tenenbaum 2017, 704-707). Relatedly, there does not have to be anything politically conservative in the precautionary measures prescribed by PSSAs (e.g., Rizzo and Whitman 2003, 541, for similar remarks concerning SSAs). In fact, one may think of several PSSAs prescribing precautionary measures that are commonly opposed by politically conservative movements (e.g., Section 3 on PSSAs prescribing precautionary measures that aim to tackle the risks of CC imposed by climate change). 8
Third, PSSAs provide policymakers with additional guidance compared to decision-theoretic approaches such as expected utility theory (henceforth, EUT) and cost-benefit analyses (henceforth, CBA) in situations of severe uncertainty. For these decision-theoretic approaches provide limited guidance in such situations (e.g., Gilboa et al. 2009, 285-286; Hájek 2021, 189-191, on EUT; Adler 2012, chap. 7; Hansson 2007, 171-176, on CBA). To be sure, policymakers may be able to combine precautionary considerations with EUT or CBA in various policy applications (e.g., Bartha and DesRoches 2021, 8720; Heath 2020, chap. 5, on the possibility of comparing distinct precautionary measures by means of CBA). Still, PSSAs can inform policymakers’ evaluations of precautionary measures in several situations of severe uncertainty where EUT and CBA provide limited guidance. And PSSAs can ground prima facie plausible evaluations in many such situations. For instance, PSSAs allow that policymakers’ degree of caution may justifiably vary depending on how high the stakes involved in their decisions are. And most decision theorists concur that “it [is] reasonable to act more cautiously in high-stakes situations than in low-stakes ones” (Bradley et al. 2017, 509; also Bottomley and Williamson 2024, 716; Hill 2019, 227-233). 9
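To see why EUT provides limited guidance here, recall its canonical form (a standard textbook statement, not a formulation taken from the cited authors):

```latex
% Expected utility of action a, over outcome set O with probability p and
% utility u:
EU(a) = \sum_{o \in O} p(o \mid a)\, u(o)
```

Under severe uncertainty as defined above, policymakers lack both a complete specification of the outcome set O and the probabilities p(o | a), so EU(a) is undefined and cannot be used to rank the available actions/policies.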
Finally, PSSAs may be put forward to advocate the adoption of a wide range of precautionary measures (e.g., requiring further research, imposing specific safety standards, delaying or banning the rollout of a novel technology). The justifiability of specific precautionary measures may vary significantly across times and situations (e.g., what individuals and demographic groups are affected by the examined actions/policies; over what time horizons). Policymakers can rely on various adequacy conditions to inform the evaluation of precautionary measures. Three such conditions are especially relevant in the context of PSSAs, namely: scientific plausibility, according to which PSSAs’ prescriptions should be supported by the best available scientific evidence (e.g., Birch 2024, chap. 6; Jamieson 1998); proportionality, according to which PSSAs’ prescriptions should not impose excessive burdens on the involved agents and should not go beyond what is necessary to achieve policymakers’ precautionary goals (e.g., Karliuk 2023; Steel 2014, chap. 1); and revisability, according to which PSSAs’ prescriptions should be subject to ongoing monitoring and periodic re-evaluation by policymakers to mitigate the risk of adopting excessive or counterproductive precautions (e.g., Novelli et al. 2024; Whiteside 2006, chap. 2). Multiple issues may arise regarding how each of these adequacy conditions is most plausibly interpreted by policymakers in situations of severe uncertainty (e.g., what levels of scientific plausibility should the available evidence achieve to be able to justify PSSAs’ prescriptions? How can policymakers ground informative and reliable proportionality claims in situations where the relevant CC are yet-undetermined? And how frequently should policymakers monitor and re-evaluate PSSAs’ prescriptions?). 
I expand in Section 3 on how policymakers’ interpretation of such adequacy conditions bears on the evaluation of precautionary measures in various situations of severe uncertainty.
3. A Defense of PSSAs
In this section, I illustrate how the proponents of PSSAs can address six influential objections put forward against standard SSAs and other prominent forms of precautionary reasoning, namely: the objection from insufficient evidence (Subsection 3.1); the objection from excessive precautions (Subsection 3.2); the objection from underdetermined precautions (Subsection 3.3); the objection from arbitrary risk thresholds (Subsection 3.4); the objection from absolutism (Subsection 3.5); and the objection from diachronic incoherence (Subsection 3.6). Some of the considerations put forward in this section may be used to defend not only PSSAs, but also standard SSAs and other forms of precautionary reasoning targeting situations of risk and uncertainty. However, below I focus on PSSAs and on situations of severe uncertainty.
3.1. Objection from Insufficient Evidence
The objection from insufficient evidence holds that PSSAs do not withstand scrutiny because PSSAs typically fail to demonstrate that the examined actions/policies impose significant risks of CC (e.g., LaFollette 2005, 475-476, targeting SSAs). The idea is that in situations of severe uncertainty, policymakers are typically unable to show that the risks of CC imposed by the examined actions/policies are sufficiently high to justify the adoption of PSSAs’ precautionary measures (e.g., Lenman 2000, 345-348) and that therefore PSSAs “are hard to support” (Walton 2015, 305, targeting SSAs).
This objection correctly notes that in situations of severe uncertainty, the limitations affecting policymakers’ knowledge of the relevant probabilities and possible outcomes may constrain policymakers’ ability to demonstrate that the examined actions/policies impose significant risks of CC. However, there are at least two reasons to doubt that the objection undermines PSSAs. First, demonstrating that the examined actions/policies impose significant risks of CC is often less demanding than the objection presupposes. For instance, policymakers do not have to agree on precise assessments of the relevant probabilities and possible outcomes to demonstrate that carbon-intensive emissions leading to atmospheric temperature increases of over 2°C above pre-industrial levels would impose significant risks of environmental CC (e.g., Frisch 2020, 982-989; Stern et al. 2022, 191-192) or that the unregulated rollout of AI applications would impose significant risks of socio-economic CC (e.g., Prainsack and Forgó 2024, 1236; Zanotti et al. 2024, 7-8).
And second, policymakers are frequently able to demonstrate that the examined actions/policies impose significant risks of CC via slippery slopes. For instance, the actions/policies examined by policymakers frequently involve continuous variables (e.g., Lenton et al. 2019, 592-593, on “atmospheric temperature”) and vague concepts (e.g., Gardiner 2006, 51-52, on “reasonable doubt”). And the presence of continuous variables and vague concepts, in turn, makes it difficult to distinguish between acceptable and unacceptable actions/policies, thereby leading to slippery slopes (e.g., Rizzo and Whitman 2003, 540-544; also Volokh 2003, 1048-1055, on cases where implementing moderate policies leads policymakers to implement more extreme policies due to people’s multi-peaked preferences). 10
A critic of PSSAs may object that policymakers are frequently able to avoid slippery slopes by drawing sharp demarcation lines between acceptable and unacceptable actions/policies (e.g., Thaler and Sunstein 2008, 236-237). However, policymakers are frequently unable to draw sharp demarcation lines between acceptable and unacceptable actions/policies (e.g., Fumagalli 2020, 417-418, on policymakers’ evaluations of actions/policies involving contested notions such as personhood or voluntariness). Moreover, slippery slopes often arise even in the presence of sharp demarcation lines between acceptable and unacceptable actions/policies (e.g., den Hartogh 1998 [2009], 325-327, on cases where sharp demarcation lines are ignored or transgressed because they are regarded as arbitrary). To be sure, policymakers may occasionally be able to adapt actions/policies so as to avoid slippery slopes (e.g., Walton 2015, 303-304). However, policymakers frequently lack the ability or the incentives to adapt actions/policies so as to avoid slippery slopes. For instance, policymakers’ reliance on past legislative and judicial decisions frequently leads them to implement controversial policies because they take past legislative and judicial decisions to give them reason to implement such policies (e.g., Rizzo and Whitman 2003, 557-560). In fact, policymakers’ ability to adapt actions/policies may increase (rather than decrease) the likelihood or the severity of slippery slopes. For instance, due to widespread categorization effects, policymakers’ classifying some action/policy as justifiable may significantly increase the probability that they classify further actions/policies as justifiable, thereby leading to slippery slopes (e.g., Hahn and Oaksford 2006, 229-232).
A critic of PSSAs may further object that in many cases where policymakers are unable to adapt actions/policies so as to avoid slippery slopes, policymakers can avoid CC by mitigating the harms caused by such actions/policies (e.g., Lomborg 1998 [2001], chap. 24, on climate change mitigation policies). However, precautionary measures are not a substitute for harm-mitigating measures, and can be effectively combined with such measures. In fact, policymakers often advocate combining precautionary and harm-mitigating measures to tackle risks of CC (e.g., Weitzman 2012, 227-228, on climate change policymaking; Taddeo and Floridi 2018, 751-752, on AI policymaking). Moreover, many actions/policies examined by policymakers are such that, once the relevant harms become apparent, it will be too late or too costly for policymakers to adopt effective harm-mitigating measures (e.g., Lenton 2013, 18-21, on several environmental harms caused by climate change). In this respect, it would be of limited import to object that thanks to ongoing scientific progress and technological innovations, future generations will “be in a far better position” to tackle several risks of CC (Sunstein 2010, 242, on risks of environmental CC). For postponing the adoption of harm-mitigating measures until the relevant harms become apparent may leave future generations with significantly fewer resources and impaired abilities to tackle risks of CC (e.g., Gardiner 2004, 573-575). 11
3.2. Objection from Excessive Precautions
The objection from excessive precautions holds that PSSAs do not withstand scrutiny because PSSAs’ prescriptions typically prevent people or society from enjoying the benefits of scientific progress and technological innovations (e.g., Castro and McLaughlin 2019, 15-18, targeting precautionary calls for stringent AI regulation). The idea is that in situations of severe uncertainty, most actions/policies may yield valuable benefits as well as CC and that PSSAs’ focus on potential CC tends to objectionably neglect actions’/policies’ potential benefits (e.g., Launis 2002, 176, on putative cases where tightening safety regulations in biomedical research would tend to delay the development of new medical treatments).
This objection correctly notes that in situations of severe uncertainty, focusing predominantly on the risks of CC imposed by the examined actions/policies may lead policymakers to neglect or underestimate the potential benefits of such actions/policies. However, there are at least two reasons to doubt that the objection undermines PSSAs. First, due to various economic, political, and psychological factors, policymakers often tend to take insufficient (rather than excessive) precautions to tackle risks of CC (e.g., Burri 2021, 127-129; Gardiner 2011, chaps. 3-6, on risks of environmental CC). 12 This tendency is especially pervasive when the examined actions/policies concern fast-developing technologies having hard-to-reverse effects. To illustrate this, consider AI applications such as large language models (LLMs). Many of these applications are experimental—in that their potential harms and benefits are hard to estimate before their actual adoption (e.g., van de Poel 2016, 669)—and general-purpose—in that they have heterogeneous and unpredictable uses (e.g., Novelli et al. 2024, 2493). These features significantly constrain policymakers’ ability to provide reliable ex ante estimates of the involved risks and avoid (or mitigate) unintended harms (e.g., Prainsack and Forgó 2024, 1236). In this context, PSSAs can effectively help policymakers tackle risks of CC by prescribing timely and stringent precautions (e.g., Knott et al. 2023, 1-3, calling to subordinate the public release of LLMs to the availability of reliable tools for detecting LLM-generated content; also Schiaffonati 2022, 296-297, advocating the gradual introduction of several AI applications).
And second, policymakers can frequently avoid prescribing excessive precautions by subjecting the precautionary measures they examine to stringent adequacy conditions (Section 2). To illustrate this, consider how proportionality constraints inform the ongoing debate concerning what precautionary measures should be adopted to tackle the risks of CC imposed by climate change. Suddenly banning all fossil fuels would cause catastrophic economic downturns (e.g., Sunstein 2005, chap. 1). Conversely, a gradual and incentive-compatible phasing out of fossil fuels would likely lead to better economic and environmental outcomes than non-mitigation across several scenarios (e.g., Steel 2014, chap. 2). To be sure, policymakers may occasionally prescribe excessive precautions (e.g., Morris 2000, 16, on cases where policymakers overestimate the risks of CC imposed by novel technologies). Yet, in cases where policymakers prescribe excessive precautions, the involved excesses will likely be limited in time and magnitude. For in many of those cases, the people affected by the prescribed precautions will have powerful incentives to oppose (and call to revise) such precautions (e.g., Section 2 on various authors’ calls to subject precautionary measures to stringent revisability constraints).
A critic of PSSAs may object that many actions/policies may lead to extraordinary gains as well as CC and that one may point to such gains to undermine PSSAs’ prescriptions (e.g., Castro and McLaughlin 2019, 12-14, on the potential socio-economic gains of AI applications). However, there is a fundamental asymmetry between extraordinary gains and CC. For although the extraordinary gains made possible by risky actions/policies may prevent some CC, many CC are such that their occurrence would prevent people or society from enjoying any of the extraordinary gains obtainable in the absence of CC (e.g., Christiansen 2019, 13). And this asymmetry, in turn, supports PSSAs’ prescriptions in many cases where policymakers face significant risks of CC (e.g., Hansson 1996, 385, calling policymakers to avoid “irreversible [harms] as much as possible”; also Lazarus 2009, 1205, cautioning against “irreversible consequences that dramatically limit the options [of] future generations”). 13
A critic of PSSAs may further object that precautionary measures themselves may impose significant risks of CC (e.g., Sunstein 2010, 239, on the risks of “serious health harms” imposed by precautionary measures designed to avoid antibiotics overuse) and that therefore one could construct PSSAs against the same precautionary measures prescribed by other PSSAs (e.g., Manson 2002, 270-274, targeting the precautionary measures prescribed by various precautionary principles). However, policymakers can often avoid prescribing contradictory precautionary measures by subjecting the precautionary measures they examine to the stringent adequacy conditions mentioned in Section 2. To illustrate this, consider the proportionality constraints embedded in the European Union Artificial Intelligence Act (2024), which groups AI systems into different risk categories and prescribes different levels of regulation for systems involving unacceptable risks (which are prohibited), high-risk systems (which must comply with strict security requirements), limited-risk systems (which must respect transparency requirements), and minimal-risk systems (which should only be subject to voluntary codes of conduct). The AI Act is not immune to criticisms (e.g., Novelli et al. 2024, 2493-2494, for a critical appraisal of the AI Act’s definition of risk categories). Still, the proportionality constraints embedded in the AI Act provide policymakers with a reliable springboard to identify and adopt coherent precautionary measures (e.g., Floridi 2021; Zanotti et al. 2024, for recent discussion). 14
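The Act’s tiered structure, as summarized above, can be sketched as a simple mapping (an illustrative simplification, not the Act’s legal text; the function name and the exact label strings are my own):

```python
# Stylized sketch of the EU AI Act's tiered, proportionality-based structure
# as described in the text. Category names follow the Act's four risk tiers;
# the regulatory responses are paraphrased, not quoted from the regulation.
REGULATORY_TIERS = {
    "unacceptable": "prohibited",
    "high": "strict security requirements",
    "limited": "transparency requirements",
    "minimal": "voluntary codes of conduct",
}

def required_regulation(risk_category: str) -> str:
    """Return the (simplified) regulatory response for a given risk category."""
    if risk_category not in REGULATORY_TIERS:
        raise ValueError(f"unknown risk category: {risk_category}")
    return REGULATORY_TIERS[risk_category]

print(required_regulation("high"))  # prints: strict security requirements
```

The point of the sketch is proportionality: the regulatory burden scales monotonically with the assessed risk tier rather than applying one uniform precaution to all AI systems.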
3.3. Objection from Underdetermined Precautions
The objection from underdetermined precautions holds that PSSAs do not withstand scrutiny because PSSAs typically fail to provide informative and reliable criteria to compare the many precautionary measures available to policymakers (e.g., Posner 2004, chap. 3, targeting precautionary principles). The idea is that in situations of severe uncertainty, policymakers can typically choose among many precautionary measures (e.g., requiring further research, imposing specific safety standards, delaying or banning the rollout of a novel technology) and that PSSAs leave the choice among such measures underdetermined (e.g., Sunstein 2005, chap. 1, targeting precautionary principles).
This objection correctly notes that policymakers may frequently choose among a wide range of precautionary measures in situations of severe uncertainty. However, there are at least two reasons to doubt that the objection undermines PSSAs. First, cases of genuine underdetermination—that is, cases where providing informative and reliable comparisons between precautionary measures is impossible or unfeasible (rather than just costly or difficult)—are not sufficiently widespread to cast general doubt on PSSAs. For although policymakers are occasionally unable to provide informative and reliable comparisons between precautionary measures (e.g., Colombo and Steele 2016, 1197-1198, on putative cases where distinct precautionary measures avoid a given environmental CC at the same level of scientific plausibility), one may identify several cases where policymakers are able to provide such comparisons (e.g., Christiansen 2019, 14-16, on several cases where policymakers are able to provide informative and reliable ordinal comparisons between precautionary measures in environmental policymaking). To be sure, policymakers’ comparisons between precautionary measures may vary significantly depending on the time at which policymakers make such comparisons. For instance, policymakers’ comparisons between public health precautionary measures during the COVID-19 pandemic (e.g., social distancing measures, mandatory vaccinations, mask wearing recommendations) varied significantly across different stages of the pandemic (e.g., Adler et al. 2023). Still, in many cases, the sensitivity of policymakers’ comparisons between precautionary measures to intertemporal considerations is plausibly regarded as a strength (rather than a weakness) of such comparisons (e.g., Fumagalli 2024, on policymakers’ comparisons between public health precautionary measures during the COVID-19 pandemic).
And second, PSSAs can often enable policymakers to provide informative and reliable comparisons between different precautionary measures and to significantly narrow down the set of prima facie justifiable precautionary measures without having to select one single precautionary measure. To illustrate this, consider the ongoing debate concerning what precautionary measures should be adopted to tackle the risks of CC imposed by experimental and general-purpose AI applications. Reasonable disagreements remain about the justifiability of the precautionary measures proposed to tackle such risks (e.g., Zanotti et al. 2024, 12-15). Still, such disagreements do not prevent policymakers from providing informative and reliable comparisons between different precautionary measures and from significantly narrowing down the set of prima facie justifiable precautionary measures (e.g., Birch 2024, chap. 17, on the relative merits of a ban, an international moratorium, regular monitoring and sector-wide codes of good practice).
A critic of PSSAs may object that policymakers’ reasonable disagreements about the justifiability of the proposed precautionary measures justify delaying the adoption of such measures (e.g., Lomborg 1998 [2001], chap. 24, on disagreements about climate change policies). However, not all disagreements about the justifiability of the proposed precautionary measures are reasonable (e.g., Miller 2021, 918-920, on various instances of epistemically inappropriate disagreement in climate change policymaking). Moreover, the existence of reasonable disagreements about the justifiability of the proposed precautionary measures does not per se justify delaying the adoption of such measures. For delaying the adoption of precautionary measures also counts as a decision which stands in need of justification on par with the decision to adopt such measures (e.g., Gardiner 2004, 565). And the decision to delay the adoption of precautionary measures often imposes significant risks of CC (Subsection 3.1).
A critic of PSSAs may further object that policymakers frequently face competing risks of CC (e.g., Norheim et al. 2021, on various public health and economic risks faced during the COVID-19 pandemic) and that PSSAs do not enable policymakers to determine how resources should be distributed to take precautions against competing risks (e.g., Sunstein 2021, chap. 4, targeting precautionary principles). However, policymakers are frequently able to distinguish between risks of CC that require immediate precautionary measures, less pressing risks of CC that are nonetheless worth tackling, and risks of CC that are too low to justify the adoption of precautionary measures (e.g., Bartha and DesRoches 2021, 8731-8732, on the risks of CC imposed by climate change, potential asteroid impact, and possible alien invasion, respectively). Moreover, policymakers can often rely on plausible criteria to determine how resources should be distributed to take precautions against competing risks of CC (e.g., Steel and Bartha 2023, 263-265, for various criteria to distinguish cases where policymakers may justifiably prioritize tackling CC whose probabilities can be most efficiently reduced and cases where policymakers may justifiably prioritize tackling the most probable CC). 15
3.4. Objection from Arbitrary Risk Thresholds
The objection from arbitrary risk thresholds holds that PSSAs do not withstand scrutiny because PSSAs typically fail to provide precise and non-arbitrary specifications of what levels of risk of CC justify the adoption of PSSAs’ precautionary measures (e.g., Morris 2000, 9-12, targeting precautionary principles). The idea is that in situations of severe uncertainty, policymakers may rely on dissimilar specifications of what levels of risk of CC justify the adoption of PSSAs’ precautionary measures (e.g., Jackson and Smith 2006, 275-278) and that PSSAs typically fail to provide precise and non-arbitrary specifications of such risk thresholds (e.g., Morris 2000, 9-12).
This objection correctly notes that in situations of severe uncertainty, policymakers may rely on dissimilar specifications of what levels of risk of CC justify the adoption of PSSAs’ precautionary measures. However, there are at least two reasons to doubt that the objection undermines PSSAs. First, subordinating the justifiability of adopting precautionary measures to the provision of precise specifications of risk thresholds would impose an overly demanding requirement on policymakers in situations of severe uncertainty (e.g., Burri 2022, 1-3). For in these situations, policymakers commonly lack the knowledge of probabilities and possible outcomes required to provide precise specifications of risk thresholds (Section 2). And in such situations, subordinating the justifiability of adopting precautionary measures to the provision of precise specifications of risk thresholds may hamper or delay the adoption of urgent precautionary measures (e.g., Andreou 2006, 107-108; Steel 2014, chap. 2, for illustrations concerning environmental policymaking).
And second, policymakers are frequently able to justify the adoption of PSSAs’ precautionary measures by relying on vague (rather than precise) specifications of the relevant risk thresholds. For instance, policymakers are often unable to specify precise thresholds for the risks of environmental CC imposed by climate change (e.g., Bradley et al. 2017, 501-502; Parker 2010, 270-271, on various policymakers’ reliance on probability intervals qualified by qualitative notions of confidence). Even so, the available evidence concerning the risks of environmental CC imposed by climate change is sufficiently reliable and robust to justify adopting precautionary measures to tackle such risks of CC (e.g., Buchak 2019, 79-80; Stern et al. 2022, 181-185, for illustrations).
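For concreteness, vague threshold specifications of this kind often take the form of calibrated probability intervals paired with qualitative likelihood terms, as in the IPCC's uncertainty guidance (the pairings below summarize that guidance and are offered purely as an illustration, not as part of the arguments cited above):

```latex
\begin{align*}
\text{virtually certain} &: \; P > 0.99 \\
\text{very likely}       &: \; P > 0.90 \\
\text{likely}            &: \; P > 0.66 \\
\text{about as likely as not} &: \; 0.33 < P < 0.66
\end{align*}
```

On such a scheme, a policymaker need not fix a precise risk threshold in order to judge that a risk of CC falling within the "likely" band warrants the adoption of precautionary measures.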
A critic of PSSAs may object that in the presence of vague specifications of risk thresholds, policymakers are often unable to determine whether the risks of CC imposed by the examined actions/policies pass the relevant thresholds (e.g., Walton 2015, 296-301, targeting SSAs). However, the vagueness of a risk threshold does not per se prevent policymakers from identifying several clear cases where the risks of CC imposed by the examined actions/policies pass such a threshold (e.g., Gardiner 2006, 52). To be sure, in the presence of vague specifications of risk thresholds, policymakers might face some borderline cases where minimal differences in actions/policies make implausibly significant differences to the justifiability of PSSAs’ precautionary measures (e.g., Jackson and Smith 2006, 276; Thoma 2022, 66). Still, such borderline cases are not sufficiently widespread to cast general doubt on the justifiability of relying on vague specifications of risk thresholds. For policymakers can identify several cases where minimal differences in actions/policies are plausibly taken to make significant differences to the justifiability of PSSAs’ precautionary measures (e.g., Lenton et al. 2019, 592-595, on borderline cases where minimal differences in environmental policies lead to passing catastrophic climate tipping points).
A critic of PSSAs may further object that policymakers are often unable to provide non-arbitrary specifications of what levels of risk of CC are imposed by the examined actions/policies since these actions/policies may be categorized in terms of different reference classes (e.g., Sunstein 2006, 860-864) and policymakers frequently lack non-arbitrary criteria to determine which reference classes should be adopted (e.g., Hájek 2007, 564-566). However, policymakers are often able to demarcate a range of plausible reference classes and provide non-arbitrary specifications of what levels of risk of CC the examined actions/policies impose as members of such classes (e.g., Adler 2003, 1348-1365; Cheng 2009, 2101-2105, for various illustrations in legislative and judicial contexts). Moreover, the difficulties faced by policymakers’ attempts to determine which reference classes should be adopted in particular cases do not selectively bear against PSSAs rather than other approaches to policymaking in situations of severe uncertainty. For most proposed approaches face analogous difficulties in such situations (e.g., Hájek 2007, 583-584, on EUT; Hansson 2007, 165-170, on CBA). 16
3.5. Objection from Absolutism
The objection from absolutism holds that PSSAs do not withstand scrutiny because PSSAs typically assign absolute priority to avoiding CC over other reasonable goals (e.g., maximizing expected benefits) and can thereby prescribe irrational decisions such as prohibiting moderately risky actions/policies having large expected benefits (e.g., Colyvan et al. 2010, 224-226, targeting precautionary prescriptions grounded on ascriptions of infinite value to the environment). The idea is that in situations of severe uncertainty, PSSAs’ prescriptions typically depend on highly unlikely unfavorable contingencies and that “it is extremely irrational to make [one’s decisions] wholly dependent on [such] contingencies” (Harsanyi 1975, 40; also Huemer 2010, 336-339).
This objection correctly notes that in situations of severe uncertainty, focusing predominantly on highly unlikely unfavorable contingencies may lead policymakers to advocate the adoption of questionable precautionary measures. However, there are at least two reasons to doubt that the objection undermines PSSAs. First, PSSAs do not commit policymakers to assigning absolute priority to avoiding CC over other goals (e.g., Section 2 on the difference between PSSAs and “mere possibility” arguments). In particular, policymakers may consistently rely on PSSAs and incur some risks of CC. For PSSAs prescribe the adoption of precautionary measures in cases where the risks of CC imposed by the examined actions/policies pass normatively significant thresholds (Subsection 3.4). And the precautionary measures prescribed by PSSAs do not generally require policymakers to avoid any risks of CC (Subsection 3.3). In this context, PSSAs can be aptly regarded as a plausible non-absolutist approach to policymaking in situations of severe uncertainty where policymakers commonly lack the knowledge of probabilities and possible outcomes required to reliably pursue goals such as maximizing expected benefits.
And second, in cases where the examined actions/policies do impose significant risks of CC (e.g., loss of human lives on major scales; irreversible environmental disasters), policymakers can often justifiably attempt to avoid even small risks of such CC for the sake of comparatively limited benefits (e.g., Stern et al. 2022, 191-192, on cases where policymakers face significant risks of environmental CC; also Hansson 2020, 250-251, for similar remarks concerning safety standards in engineering). To be sure, policymakers may be led to violate prima facie plausible continuity conditions if they maintain that “no [expected] benefit in terms of noncatastrophic outcomes could make up for [specified increases in the risk] of a catastrophic outcome” (Stefánsson 2019, 1213, targeting precautionary principles; also Peterson 2006, 598-599). However, policymakers may consistently rely on PSSAs and allow that large expected benefits in terms of non-catastrophic outcomes can make up for minimal increases in the risk of some catastrophic outcomes. In particular, policymakers may identify fine-grained categorical distinctions between outcomes (e.g., Steel and Bartha 2023, 261-263, on categorical distinctions that allow policymakers to regard some catastrophic outcomes as worse than others) and may effectively rely on some of those distinctions to justify the adoption of PSSAs’ precautionary measures (e.g., Lazar and Lee-Stronach 2019, 98-103, on “weak” categorical distinctions that allow policymakers to make some trade-offs between risks of catastrophic outcomes and expected benefits in terms of non-catastrophic outcomes). 17
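Schematically, and in my own illustrative notation (not drawn from the cited authors), the contrast between the continuity-violating claim and the weaker claim compatible with PSSAs can be put as follows, where $\Delta p$ denotes an increase in the probability of a catastrophic outcome and $B$ an expected benefit in terms of non-catastrophic outcomes:

```latex
\begin{align*}
&\text{Strong (continuity-violating) claim:} &&
  \neg\exists B \,\bigl(B \text{ outweighs } \Delta p\bigr)
  \quad \text{for all } \Delta p > 0; \\
&\text{Weak claim compatible with PSSAs:} &&
  \exists B \,\bigl(B \text{ outweighs } \Delta p\bigr)
  \quad \text{whenever } \Delta p < \varepsilon,
\end{align*}
```

for some small $\varepsilon > 0$ fixed by the relevant categorical distinctions between outcomes.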
A critic of PSSAs may object that in their attempts to justify the adoption of PSSAs’ precautionary measures, policymakers frequently ascribe infinite values to differences between catastrophic and non-catastrophic outcomes, and that such value ascriptions hamper policymaking (e.g., Colyvan et al. 2010, 225-226, holding that if the environment is ascribed infinite value, then any actions/policies that protect the environment should be ascribed infinite expected value). However, justifying the adoption of PSSAs’ precautionary measures does not require policymakers to ascribe infinite values to differences between catastrophic and non-catastrophic outcomes. In particular, policymakers may justify the adoption of PSSAs’ precautionary measures by demonstrating that the risks of catastrophic outcomes imposed by the examined actions/policies are disproportionate to these actions’/policies’ expected benefits (e.g., McCauley 2006, 27-28, on various risks of environmental catastrophic outcomes). And policymakers may provide such a demonstration without having to ascribe infinite values to differences between catastrophic and non-catastrophic outcomes (e.g., Randall 2011, chaps. 7-9).
A critic of PSSAs may further object that adopting the precautionary measures prescribed by PSSAs often leads policymakers to impose their own risk aversion on the people affected by the examined actions/policies (e.g., future generations) and thereby fail to respect those people’s risk preferences (e.g., Heath 2020, chap. 5, targeting various applications of precautionary principles in climate change policymaking). However, in situations of severe uncertainty, policymakers are frequently unable to identify what people are potentially affected by the examined actions/policies (e.g., van de Poel 2016, 672) and determine the risk preferences of these people (e.g., Fleurbaey 2010, 650-652). In such situations, implementing actions/policies that impose significant risks of CC on other people would lead policymakers to impose their own risk proneness on these people (e.g., Bovens 2015, 403-404). And in many cases, this imposition would be more objectionable than imposing risk aversion on those people (e.g., Buchak 2019, 72-76; Lazarus 2009, 1194-1195, on various cases in climate change policymaking where imposing risk proneness on other people forecloses valuable options to such people). 18
3.6. Objection from Diachronic Incoherence
The objection from diachronic incoherence holds that PSSAs do not withstand scrutiny because PSSAs’ prescriptions typically vary in implausible ways depending on whether policymakers target specific actions/policies (taken individually) or multiple actions/policies (taken collectively; e.g., Sunstein 2006, 891, targeting precautionary principles). The idea is that policymakers may target dissimilar sets of actions/policies in situations of severe uncertainty (e.g., Sunstein 2021, chap. 1) and that PSSAs typically fail to specify what sets of actions/policies should be targeted by policymakers (e.g., Thoma 2022, 61-65, targeting precautionary principles).
This objection correctly notes that policymakers may frequently target dissimilar sets of actions/policies in situations of severe uncertainty. However, there are at least two reasons to doubt that the objection undermines PSSAs. First, PSSAs’ prescriptions may justifiably vary depending on whether policymakers target specific actions/policies (taken individually) or multiple actions/policies (taken collectively). For policymakers may justifiably adopt dissimilar precautionary measures depending on whether they target specific actions/policies or multiple actions/policies (e.g., Buchak 2013, chap. 7). And policymakers’ decision to adopt a given precautionary measure when they target specific actions/policies “need not be irrational just because […] it would be irrational [for them] to adopt” such precautionary measure when they target multiple actions/policies (Arrhenius and Rabinowicz 2005, 180; also Tenenbaum 2017, 704-705, on cases where policymakers face a series of decisions such that each decision in the series imposes acceptable risks of CC, but the whole series of decisions imposes unacceptable risks of CC).
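The cases Tenenbaum describes can be illustrated with a simple, purely hypothetical calculation (the figures are mine and serve only to make the structure vivid): suppose each of $n = 10$ independent decisions imposes a probability $p = 0.01$ of CC, against an acceptability threshold of $t = 0.05$. Each decision taken individually stays below the threshold, yet the series taken collectively does not:

```latex
P(\text{CC over the series}) \;=\; 1 - (1 - p)^{n}
\;=\; 1 - (0.99)^{10} \;\approx\; 0.096 \;>\; t = 0.05.
```

This is one concrete sense in which PSSAs’ prescriptions may justifiably vary depending on whether actions/policies are assessed individually or collectively.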
And second, policymakers are often able to provide plausible specifications of what sets of actions/policies should be targeted in the situations they examine (e.g., Lazar and Lee-Stronach 2019, 108, holding that “when a risky act is causally sufficient to realise some expected good it can be considered in isolation [whereas] when one risky act depends on others to realise its expected good, then they must be assessed together”). To be sure, policymakers may occasionally find it difficult to establish what sets of actions/policies should be targeted in situations of severe uncertainty (e.g., Thoma 2019, 250-252, for similar remarks concerning situations of risk and uncertainty). Still, these difficulties do not cast general doubt on PSSAs’ prescriptions. For policymakers are often able to assess the justifiability of targeting the examined sets of actions/policies rather than broader (or narrower) sets of actions/policies in situations of severe uncertainty (e.g., Fumagalli 2021, 344-353, for similar remarks concerning situations of risk and uncertainty).
A critic of PSSAs may object that policymakers often face cases where the risks of CC imposed by the examined actions/policies are continuously resolved (e.g., think of the risks imposed by a nuclear power plant’s continued operation) and that in those cases policymakers cannot justify PSSAs’ prescriptions since in such cases “the risk already incurred through past choices [is] irrelevant for the evaluation of the next risky choice” (Thoma 2022, 65). However, policymakers can often justify PSSAs’ prescriptions by pointing to the yet-unresolved risks of CC imposed by the examined actions/policies (e.g., Birch 2024, chaps. 15-17, on various precautionary prescriptions targeting AI applications). Moreover, policymakers frequently face cases where the risks of CC imposed by the examined actions/policies are not continuously resolved (e.g., think of the risks imposed by actions/policies that contribute to climate change). And in many of those cases, policymakers can justify PSSAs’ prescriptions by demonstrating that the risks of CC imposed by the examined actions/policies pass normatively significant thresholds (e.g., Lenton et al. 2019, 592-595, on thresholds that track the risk of passing catastrophic climate tipping points).
A critic of PSSAs may further object that policymakers face systematic procrastination problems in cases where the risks of CC imposed by the examined actions/policies are not continuously resolved. For in many of those cases, the risks of CC imposed by the examined actions/policies do not pass normatively significant thresholds, and the expected benefits that incurring such risks yields to policymakers give policymakers reason to postpone the adoption of precautionary measures (e.g., Gardiner 2011, chaps. 3-6, on various cases of procrastination in environmental policymaking). However, the proponents of PSSAs may be able to address this concern by prescribing precautionary measures that make procrastination irrational or inconvenient for policymakers (e.g., Andreou 2007, 245-246, on clean air laws binding policymakers to monitorable investments to attain future air quality targets). To be sure, if adopting a precautionary measure involves significant immediate costs and requires that policymakers refrain from actions/policies that benefit them while having individually negligible effects, policymakers may have reason to postpone the adoption of such a measure. Yet, the proponents of PSSAs may be able to address this concern by prescribing precautionary measures that significantly increase the costs of such procrastination and are not vulnerable to higher-order procrastination (e.g., Lazarus 2009, 1205-1230, on enforceable pre-commitment measures such as mandatory scientific advisory, judicial review provisions, and stakeholder consultation requirements).
4. Conclusion
In this paper, I have articulated a novel type of precautionary argument for situations of severe uncertainty in science and policy, which I termed PSSAs. I have then illustrated how the proponents of PSSAs can address several influential objections put forward against standard SSAs and other prominent forms of precautionary reasoning. PSSAs are more aptly regarded as argumentation schemes requiring interpretation rather than algorithmic procedures providing policymakers with immediately applicable prescriptions about what precautionary measures to adopt in specific contexts. This, however, does not detract from PSSAs’ suitability to inform policymakers’ evaluations of precautionary measures in situations of severe uncertainty. In fact, PSSAs provide cogent reasons/evidence for the precautionary measures they prescribe in a wide range of situations of severe uncertainty across science and policy. As such, PSSAs effectively demonstrate how policymakers can justify precautionary measures against potentially catastrophic outcomes even in cases where they lack detailed information about such outcomes.
Acknowledgements
I wish to thank two anonymous reviewers, Matthew Adler, Susanne Burri, Francesco Guala, Donal Khosrowi, Malvina Ongaro, Thomas Rowe, and Orri Stefánsson for their comments on earlier versions of this paper. I also received valuable feedback from audiences at King’s College London, Rutgers University, Politecnico di Milano (META Research Unit), and the 13th Conference of the European Network for the Philosophy of the Social Sciences (ENPOSS, University of Bergen).
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
