Abstract
Crossing the road within the traffic system is an example of an action human agents perform successfully day-to-day in complex systems. How do they perform such successful actions given that the behaviour of complex systems is often difficult to predict? The contemporary literature contains two contrasting approaches to the epistemology of complex systems: an analytic and a post-modern approach. We argue that neither adequately accounts for how successful action is possible in complex systems. Agents regularly perform successful actions without obeying (explicit or implicit) algorithmic rules (as the analytic approach suggests) and without an existential leap to action (as the post-modern approach suggests). We offer an alternative: a common-sense pragmatist epistemology, one focused on the kinds of actions that make up most agents’ successful moment-to-moment conduct in complex systems. Successful actions obtain when agents apply ceteris paribus rules-of-thumb during predictive and decisional practices while achieving some desired goal.
Introduction
Following Richardson and Cilliers, we take a complex system to be ‘a system that is comprised of a large number of entities that display a high level of nonlinear interactivity’ (2001, p. 8 emphasis removed). Interactivity is nonlinear when cause-effect relationships between the interacting entities are unpredictable; the system can be chaotic. There are many other definitions of ‘complex system’ and ‘complexity’. Ladyman and Wiesner (2020) identify 10 features that complex systems can have. Nonetheless, our short definition captures those that are central to our purposes. Examples of complex systems include the economy, ecosystems, and traffic systems. Complex systems are notoriously recalcitrant to precision modelling and to being reduced to deterministic laws or simple underlying mechanisms (Gell-Mann 1995; Cilliers 1998; Ladyman & Wiesner 2020; Van der Merwe 2023).
Human agents frequently encounter and navigate complex systems as they go about their day-to-day lives. Despite complexity, agents prima facie regularly and reliably act in – i.e. perform successful actions in – complex systems. Spurrett defines ‘action’ as broadly ‘any functional activity that the agent produces, that is any deployment of its relatively transient “degrees of freedom”, whether muscles, glands or other kind of effector…’ (2021, p. 6 emphasis removed; see also Godfrey-Smith 2002). Examples of successful actions performed in complex systems include purchasing groceries (in the economy), fishing (in the ecosystem), and crossing the road at a pedestrian crossing (in the traffic system).
Given the recalcitrance of complex systems to modelling and their tendency to behave in non-lawlike ways, a question naturally arises regarding how successful action within complex systems is possible. When one purchases a container of milk, the amount of milk and change one receives is highly predictable: one can anticipate both with confidence. How do agents successfully navigate complex systems like the economy if such systems are predictively recalcitrant? Shouldn't the complexity of complex systems interfere with and distort agents’ navigational efforts to the extent that successful action is impossible? Agents’ actions in complex systems are not inputs into a framework of lawlike regularities, yet agents somehow regularly and reliably perform such actions successfully.
A first response might be to say that complex systems do display stability in some circumstances (Ladyman & Wiesner 2020, Ch. 2). However, this would be too quick. It pushes the epistemological worry back. How do agents know when they are dealing with stable complex systems (or features of complex systems)? There are cases where agents encounter stable complex systems and fail to know it, leading them to, for example, pack an umbrella on a clear morning that turns into a long sunny day. There are also instances where it is far from clear that the system is stable in any strong sense, and yet agents act successfully anyway, as when the ace striker scores the winning goal. So, the answer cannot merely be that complex systems are sometimes stable.
There are two contrasting views of successful action in the complexity literature. According to the Analytic Approach (AA), there are deterministic laws potentially discoverable ‘beneath’ complex systems’ superficial complexity; these laws can then purportedly guide action. Successful action is supposed to occur when agents know the pertinent laws (or at least parts of their implications). Agents can act by means of algorithms that depend on and reflect knowledge of underlying laws. An agent might recite ‘Cross the road when and only when the green man shows’ and then confidently step into the road when she sees the green man (and if there is no applicable rule, then she cannot cross).
According to the post-modern approach – exemplified by a view called Critical Complexity (CC) – agents are largely adrift in a sea of complexity with the volitional will as their primary determinant of action. Successful action involves an existential leap of sorts, a leap into the unknown. 1 Regardless of whether she waits for the green man, the agent's decision to cross is tantamount to throwing herself into the road without knowing whether she will make it to the other side. While confidence and uncertainty are both part of road-crossing, AA and CC emphasise one or the other, and an account somehow combining the two would have obvious appeal.
We explicate AA and CC in the sections titled The Analytic Approach: Algorithmic Rules for Action and Critical Complexity: Radical Voluntarism, respectively, and reject them both. In the section titled A Pragmatist Epistemology of Successful Action in Complex Systems, we propose that agents follow ceteris paribus rules-of-thumb (RoTs) when performing generic actions in complex systems. We set out a six-stage process within which RoTs are identified and deployed. We label this process ‘STRATEGY’. In the section titled Possible Objections, we reply to three possible objections to our thesis.
The analytic approach: algorithmic rules for action
We now introduce AA and highlight some of the problems with the view. CCists have discussed these problems at some length (e.g. Cilliers 1998, Chs. 1, 4, and 5; Woermann 2016, Ch. 2). As will become clear in the next section, CC's proposed alternative is, however, also unsatisfactory.
A slogan some complexity theorists use is ‘Beyond complexity lies simplicity’. This sums up the AA understanding of complexity, which is that complexity arises from the operation of simple laws and can be understood by identifying these laws. We can think of Descartes and Newton as forerunners to AA (Rosen 1991; Cilliers 1998; Kauffman 2019). Although they did not use the terminology of modern complexity theory, these mechanists believed that beneath apparently diverse kinds of macro-behaviour the world is a deterministic machine obeying a few relatively simple laws. This would mean that accurate prediction is, in principle, possible provided enough is known about the laws and initial conditions.
In the context of contemporary complexity studies, AAists employ quantificational formal methods in attempting to simplify or reduce complex systems to some set of principles, laws, or algorithmic rules – rules that can presumably serve as a guide to action. For example, the patterns arising when a murmuration of starlings swirls in the sky arise from just a few simple rules each bird obeys. AAists might take this as a paradigm case of the successful study of complexity. If one knows these rules, then one can predict which patterns will be formed provided one knows enough about the velocity of the starlings and can calculate fast enough. The analogy with Cartesian and Newtonian epistemologies is obvious.
Cilliers (1998) calls AA the ‘rule-based approach’ to complexity. He considers Chomsky, Fodor, Searle, and Habermas to be exemplars because they putatively reduce the behaviour of complex semantic or linguistic systems to formal rules. Regarding the reduction of general, rather than specific, complex systems, Bak (1996) argues for self-organised criticality as the essential feature underlying complex systems. According to Lloyd (2006), all complex systems are products of quantum computation. Kauffman (2008, 2019) (despite expressing fierce anti-reductionism) argues that complex systems can be modelled as (reduced to?) auto-catalytic sets.
According to Woermann (2016 Ch. 2), general systems theory (incorporating cybernetics) is exemplary of a discipline subscribing to AA. This is because, as before, general systems theorists attempt to reduce complex systems to simple laws. General systems theorists concerned with action naturally appeal to such laws in grounding their views. Beer (1979), for example, in developing his management cybernetics, attempts to reduce complex organisations (e.g. economic or business systems) to laws that can govern interventions in and the management of such systems (see also Domicini 2013; Melé, Nuria Chinchilla and López-Jurado 2019).
AA-style modelling has been criticised at length by anti-reductionist complexity theorists. Although not CCists, Ladyman and Wiesner (2020) argue that complex systems cannot be reduced to a simple concept, description, or model without obvious exceptions. They, therefore, develop a family resemblance notion of complexity composed of a list of features that complex systems exhibit. These include numerosity of interactions, disorder and diversity of components, feedback, and non-equilibrium. Some, but not necessarily all, of these features will be shared among different complex systems to different degrees. Thus, ‘[i]f complexity is a collection of features rather than a single phenomenon, then all quantitative measures of complexity can quantify only aspects of complexity rather than complexity as such’ (Ladyman & Wiesner 2020, p. 87).
As pragmatist philosophers of science are at pains to point out, a modeller cannot assume a detached third-person view of her model and/or what is being modelled. Empirical inquirers are immersed in empirical inquiry. Following Arthur Fine, we cannot stand outside the ‘game’ of science (1986, p. 156; see also Putnam 1981; Davidson 2001). Given this anthropic ‘interference’ in scientific modelling practices, a model is not a perfectly accurate and complete representation of its subject matter, and – like laws or rules – cannot then serve as an infallible guide to action (see also McIntyre 1998; Chu, Strand and Fjelland 2003; Morin 2008, Ch. 2; Mitchell 2009). Being a suitable guide to action implies raising the probability of success, and models can, of course, raise the probability of success in their rightful context. They are, however, non-algorithmic in the sense described above. After constructing a model, the modeller must stand back and think about what it gets wrong. Model-based predictions are relativised to the context of the model even if the success of ensuing actions is judged according to goals antecedent to modelling (as outlined in the introduction) (see also Van der Merwe 2022, 2024).
CCists make a similar point when they argue that agents qua complex systems are entwined with other complex systems. Complex systems are open; they have fuzzy boundaries that cannot be strictly delineated. As Woermann writes, ‘how we frame [complex] systems (in other words, the boundaries that we draw around systems) is not only a function of the activity of the system itself but is also a product of the description that we give to the system’ (2016, p. 89). Moreover, ‘[w]e never approach the act of modelling from a clean slate. Our models are premised on our sensory apparatus and our physical and cognitive resources (including our individual judgements, preferences, biases, opinions, etc)’ (2016, p. 117; see also Cilliers 2000; Hurst 2010; Woermann, Human and Preiser 2018).
The point that modellers are always located in relation to what they model gives rise to another important distinction that AA theorists sometimes miss, a distinction between acting on and acting in a complex system. In the former case, agents might, for example, act on the traffic system to optimally balance journey times, injury rates, living and shopping spaces, emissions, and other factors. Such interventions are frequently difficult for agents to perform. They require considerable expertise and ingenuity, and agents often get them wrong. For example, building an extra lane on a highway might be followed by a significant increase in traffic on the highway, resulting in a lower improvement in journey times than hoped for. AAists are primarily concerned with modelling these kinds of cases.
In AA, even when a model is of a system without a ‘controller’ (e.g. starling murmurations), the task of complexity science is to understand how individual actions relate to the behaviour of the whole. However, when a pedestrian safely crosses the road, she does so without any interest in the whole. She does not seek to alter the system, even if she invariably does so in virtue of changing her position. She might seek to leave the system unaffected by avoiding forcing cars to brake or she might carelessly get in their way. Regardless, her objective is to cross the road, and explaining how she achieves this objective is different from explaining either how a town planner deliberately effects a change in traffic flows or how a starling non-deliberately affects shapes in the sky. As noted in the introduction, acting in a complex system has been relatively little discussed, and its ease and reliability stand in contrast to modelling and/or acting on a complex system. Our question relates to how action in a complex system is achieved, and AAists do not provide explicit answers to this question.
CCists are, in contrast, concerned with actions in complex systems. We now assess CC's epistemology of successful action in complex systems and (as with AA) find it wanting.
Critical complexity: radical voluntarism
CCists draw on a Derridean post-structural 2 understanding of complexity in attempting to account for the epistemology of successful action in complex systems. Although we mostly agree with CCists on the deficiencies of AA, we will argue that CC's alternative is an overreaction. After rejecting AA's reductionist approach, CC latches onto and somewhat dramatises the freedom involved in agents’ actions in complex systems. The result suggests that actional decisions involve existential crises and radically uncertain outcomes. CC comes too close to saying that agents simply throw themselves into the road and hope for the best. Although CCists are more sensitive to the question of how agents act in complex systems and to the volition involved, they fail to distinguish between high-stakes potentially life-changing actional decisions (e.g. choosing whether or not to get divorced) and low-stakes day-to-day actional decisions (e.g. choosing a pedestrian crossing point with a view to waiting the shortest time).
For CCists, actional decisions can be overwhelming given the oversaturated complexity agents encounter in the world. Complexity irredeemably interferes with attempts at modelling, prediction, decision-making, and action. This situation, says Woermann, ‘marks the heart of our human condition’ (2016, p. 83); ‘we are always in trouble’, and acknowledging as much can be daunting (Woermann & Cilliers 2012, p. 452). Following Derrida (e.g. 1995, pp. 70–77), action ultimately occurs by way of an existential leap into the unknown (Preiser, Cilliers and Human 2013). We can think of such a leap as an act of pure will or volitional freedom that obtains given the absence of any formulaic determinants of action (e.g. rules, rationality, or reason). Derrida goes so far as to say that a decision represents a moment of faith (1995, p. 80) or even madness (1995, p. 65). For CCists, rationality is radically overdetermined by complexity and ‘it is these overdeterminations that generate freedom…’ (Woermann & Cilliers 2012, p. 455). We will call this post-structural account of action in complex systems radical voluntarism.
Moreover, with freedom comes responsibility (Cilliers 2005; Woermann 2016; see also Derrida 1995, Ch. 3, 2002). For CCists, action is unavoidably ethical given its non-algorithmic and non-rational nature. We cannot defer accountability for the consequences of the choices we make onto factors extrinsic to ourselves. Ethical considerations, thus, come to the fore. Preiser and colleagues express radical voluntarism as follows: ‘The ethical moment is situated in the moment in which we take the leap from that which is known to that which is uncertain or unknown… the ethical moment is born once we enter into the gap of the infinite abyss that is created by the limits of our models’ (Preiser, Cilliers and Human 2013, pp. 270–271).
However, despite its initial appeal, CC faces a thorny problem. Radical voluntarism (if it applies at all) only seems to apply to a proper subset of cases where agents navigate complex systems – those satisfying the following conditions:
(1) The actional decision is one whose outcome agents care about greatly (perhaps ‘existentially’).
(2) The actional decision is one whose outcome is informed by minimal evidence and a high degree of uncertainty about the future.
Decisions involving divorce, abortion, or career choice might be candidates for something like radical voluntarism, where a positive versus negative outcome is often difficult to calculate with anything resembling precision (perhaps especially where the experiences are ‘transformative’ in Paul's [2014] sense). However, not all decisions satisfy both (1) and (2). Some highly uncertain decisions can even be enjoyable because agents do not care much about the outcome (for example, a small bet with a friend over whether it will rain in the next hour).
Other decisions are for high stakes, yet are undaunting because agents are almost certain of the outcome (for example, crossing the road in a city where traffic rules are generally obeyed and a green light is showing at the pedestrian crossing).
Nothing in general day-to-day road-crossing activities appears to resemble anything like gambling for high stakes, and the decisional process does not resemble radical voluntarism. In a seemingly effortless way, agents successfully follow the ‘rules of the road’ as they navigate motor vehicles, bicycles, dogs, and other pedestrians (often with orthogonal interests) in achieving the goal of getting safely to the other side of the road. Things sometimes go terribly wrong, but it is remarkable how often they do not given that traffic is generally complex and complexity is often thought to imply unpredictability. Radical voluntarism is not an account of the simple and easy actions agents mostly perform in complex systems (road-crossing being just one example). Agents regularly and reliably perform these actions without any kind of existential leap. They just do them – without fear, doubt, or uncertainty – and often without thinking much about it at all.
A reason for the failure to distinguish between existential leaps and everyday decisions can be traced to one of post-structuralism's core principles, which (in)famously states that all distinctions (dichotomies, delineations, or dualisms) can and should be deconstructed 6 (Derrida 1982; Hurst 2010; Woermann 2016, pp. 100–104; Woermann, Human and Preiser 2018). CC, therefore, does not, and by its own lights cannot, delineate existential actions from everyday actions. 7 Nanay (2014, Ch. 4) calls the former ‘decision-making actions’, a special case of the latter, ‘actions’; we should surely account for the generic case and not only the special case. For someone not already wedded to post-structuralism, deconstruction seems a poor reason to ignore what appears on its face to be a rather obvious difference between difficult and stressful versus easy and stress-free decision-based actions. CC's Derridean account intimates that all actional decisions involve radical voluntarism, implying that agents can never make an easy decision in the face of complexity. Given how often agents encounter complex systems and given that many decisional actions are seemingly easy, CC overdramatises the issue. Perhaps radical voluntarism applies to some interactions with complex systems. 8 Yet, even if we grant that it does, CC (like AA) does not adequately account for the vast majority of agents’ (moment-to-moment, day-to-day) actional encounters with complex systems.
A pragmatist epistemology of successful action in complex systems
Neither AA nor CC appears suitable at this point: AA because it does not account for actions in complex systems, and CC because it accounts only for actions in complex systems that are both stressful and uncertain. We now turn to how successful action in complex systems is de facto possible. As noted in the introduction, this is primarily an epistemological issue related to how agents know that some action will probably succeed. Furthermore, given the arguments from the sections titled The Analytic Approach: Algorithmic Rules for Action and Critical Complexity: Radical Voluntarism, we will be concerned specifically with actions where (a) agents care about and work toward the outcome and (b) success is not the product of randomness or lucky guesses.
Note that our aim in this section is not to settle once and for all the metaphysics of action or to offer necessary and sufficient conditions for its instantiation. Rather, we aim to describe the epistemic properties of everyday actions that explain the easy success of most actional encounters with complex systems. The outcome is a pragmatist – that is, common-sense or ‘easy’ – description of the epistemological process involved in successful actions in complex systems. Some might wonder why such a description is necessary. What is the purpose of writing a paper that simply offers a common-sense description of seemingly mundane activities like crossing the road?
The answer is that AA- and CC-style approaches to action are relatively widespread (even if proponents do not always state things in complexity theoretical terms). As mentioned in the introduction, there is a general failure to distinguish between different types of interactions with complexity. Much of the topical literature focuses on dramatic existential kinds of interactions with complexity like those involved in macroeconomic policy-making or pandemic interventions. This paper is an attempt to point out that the vast majority of agents’ interactions with complexity are ‘easy’ rather than existential, and then suggest an appropriate epistemology. We see it as a strength, rather than a weakness, of our thesis that it describes the seemingly mundane. Our account can be thought of as an appeal to common-sense given AA and CC's polarised (and, as argued, problematic) views.
Note also that we take the epistemological puzzles involved in successful actions in complex systems to centre around predicting and deciding. Prediction and decision-making are, of course, not always reliable; sometimes errors creep in and the outcomes can be disastrous. We will argue, however, that our road-crossing case (as exemplary of generic cases) demonstrates how predictions and decisions related to successful actions in complex systems are generally reliable sans algorithmic rules or radical voluntarism.
For reasons that will become clear, we think it apt to situate prediction and decision-making within a broader temporal process constituting an action. We, therefore, consider prediction and decision-making to be two of the stages that obtain in a six-stage temporal process immediately preceding and culminating in action. Crossing the road at a pedestrian crossing within some complex traffic system, we contend, roughly involves the following six-stage strategy (in approximately temporal order): STRATEGY: (1) goal determination, (2) surveyance, (3) recall, (4) prediction, (5) decision, and (6) action.
We take Stages 1–3 to be largely self-evident. We have defined action in terms of goal determination (Stage 1). Surveyance of the environment (Stage 2) is required because, without it, an action is a leap in the dark. Actions performed without any information at all are not the subject of interest here, even for CCists (or their conclusion would be unremarkable). The role of recall (Stage 3) – construed in an undifferentiated sense as retrieving relevant information from memory – is plainly necessary for similar reasons. As mentioned, we will focus on Stages 4 and 5, which involve making predictions about possible alternative courses of action and then deciding between them. This deciding, we maintain, occurs via the application of RoTs. Action (Stage 6) is included in STRATEGY because prediction and decision-making are ongoing in action. Action does not begin where prediction and decision-making end; otherwise, agents would have to leap into action and radical voluntarism would come into play. As mentioned, our goal here is not to give a physiological (or metaphysical) account of action. Our primary concern is epistemological.
Stage 4: Making a prediction
Drawing on preceding goals, sensory information, and memory, agents predict both the behaviour of their environment and themselves within that environment to plot some suitable trajectory during everyday actions in complex systems. We propose that this inductive step – although intuitive and approximate rather than algorithmic – regularly and reliably results in successful actions in complex systems.
As an agent stands at the side of the road ready to cross, she may, for example, see a green pedestrian light, a truck moving towards the pedestrian crossing, and various other phenomena that hold different degrees of salience for her current situation. The truck, let us say, appears to be slowing down, so the agent infers that the truck driver is seeing a red light and will halt at the pedestrian crossing. This is how trucks and truck drivers have behaved in the past and agents reasonably infer that they will do so this time as well. The same goes for other entities (e.g. traffic lights and fellow pedestrians) making up the various other components of the traffic system. There is no strict rule here to form the basis of an algorithm; circumstances can defeat the inference. The pedestrian might notice that the road is wet or that the truck is overloaded. The list of defeaters is open-ended. It is either infinite or far too long for any human agent to complete; yet the past informs a prediction.
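The defeasible structure of such a prediction can be made explicit in a short sketch. The following toy model is purely illustrative (the observation labels, the defeater registry, and its contents are our hypothetical stand-ins, not an empirical model): a past regularity is projected forward unless a recognised defeater is present, and the defeater registry remains open-ended in that new defeaters can be added at any time.

```python
# Toy sketch of a ceteris paribus prediction: the regularity 'a truck
# slowing at a pedestrian crossing will halt' is projected forward
# unless a known defeater is observed. All names are illustrative.

# Open-ended registry of defeaters; never complete, extendable at any time.
DEFEATERS = {"wet_road", "overloaded_truck", "driver_distracted"}

def predict_truck_halts(observations):
    """Project the regularity forward, ceteris paribus."""
    if "truck_slowing" not in observations:
        return False  # regularity not engaged; no basis for the inference
    # Ceteris paribus clause: any recognised defeater blocks the inference.
    if DEFEATERS & observations:
        return False
    return True

assert predict_truck_halts({"truck_slowing"}) is True
assert predict_truck_halts({"truck_slowing", "wet_road"}) is False

# The list of defeaters is open-ended: a newly noticed circumstance
# can be registered and immediately defeats the inference.
DEFEATERS.add("black_ice")
assert predict_truck_halts({"truck_slowing", "black_ice"}) is False
```

The point of the sketch is simply that the inference is inductive and exception-permitting rather than algorithmic: no finite rule set fixes the outcome, because the set of defeaters is never closed.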
Along with the reliability of sensations and memory, such predictions are possible for the same reason that Newtonian physics is predictively successful in some complex systems (despite being strictly false at a fundamental level). They are possible because complex systems often give rise to emergent – that is, phenomenal (rather than fundamental) – regularities. These regularities ceteris paribus behave consistently. Agents can cognitively track them and project them forward in making predictions that can inform decision-making (Stage 5) and action (Stage 6) (see also Wuketits 1986; Gigerenzer & Sturm 2012). 9 Although complex systems do not seemingly obey deterministic laws, they do exhibit temporarily stable behaviour that is lawlike – if not lawful – and conducive to successful prediction-making, at least over some time period (see Ladyman & Wiesner 2020).
Talisse thinks of prediction in very similar terms to us. Agential actions, he says, are most often the result of decisions, plans, and intentions, and these are, in turn, projections into the future: ‘when we act, we aim to do something; we act for the sake of bringing about some or other result. Yet our actions typically are not simply stabs in the dark; they are typically the outcomes of plans and decisions, the result of our assessments of present circumstances, evaluations of the potentialities inherent within the present… The forward-looking nature of action presupposes a prior assessment of the present. But our assessments of the present involve a backward-looking element; when we deliberate about what to do in a given situation we bring to bear on the present a fund of past experience, and the expectation that the past will resemble the future in the relevant respects’ (Talisse 2009, p. 81 original emphasis). Such deliberation draws on folk theories, which allow agents to ‘make predictions, correct mistakes, invent, and improvise… What counts in the case of a folk theory is that it works “on the ground” as it were. Accordingly, folk theories are necessarily incomplete theories; they do not aspire to capture and systematize all the phenomena, but only the phenomena that are most centrally relevant to the tasks at hand’ (2009, p. 82 original emphasis).
Some might object that such road-crossing scenarios are not as predictable as we suppose. A stranger could unexpectedly push our pedestrian into the path of an oncoming truck, perhaps the truck driver is drunk and swerves into her, or she could trip and fall as she walks. Such events are, of course, possible, but they are infrequent; they are highly unlikely to occur.
Take road-crossing in South Africa, for example. According to the Automobile Association of South Africa, South Africa has one of the highest national rates of road deaths globally. 10 Approximately 30% of these are pedestrian deaths: 5339 according to the report. Although, of course, tragic for those involved, these fatal instances make up a tiny fraction of the overall sample pool. Simplifying grossly, imagine that there is one road-crossing per South African per day (obviously, some will cross many more times and others might not cross at all; set this aside for the sake of argument). The total South African population is approximately 60 million, resulting in approximately 60 million road-crossings per day. This amounts to approximately 20 billion road-crossings each year, out of which 5339 result in fatalities.
Now, we recognise that not everyone crosses roads (e.g. infants who cannot walk) and that not all pedestrian deaths involve road-crossings. Yet, even if there is only one road-crossing per person per year (surely a gross underestimate), there will still be 60 million successful road-crossings per year compared to 5339 fatalities. This back-of-the-envelope calculation suggests that there are, on average, overwhelmingly more successful than unsuccessful road-crossing actions performed in South Africa (even while South Africa is one of the most dangerous countries for pedestrians). Although not 100% reliable, agents’ everyday predictive capabilities are reliable enough. Most of us will live to old age having successfully crossed the road thousands of times without tragedy. The CCist's existential crisis does not seem justified even in a country where crossing the road is as dangerous as it is in South Africa.
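The back-of-the-envelope estimate above can be reproduced in a few lines (the one-crossing-per-person-per-day figure is, as stated, a gross simplification adopted purely for the sake of argument):

```python
# Back-of-the-envelope estimate of road-crossing reliability in South Africa,
# using the simplifying assumption of one crossing per person per day.
population = 60_000_000            # approximate population
crossings_per_day = population     # simplifying assumption
annual_crossings = crossings_per_day * 365
pedestrian_deaths = 5339           # reported annual pedestrian fatalities

fatality_rate = pedestrian_deaths / annual_crossings
print(f"annual crossings  ~ {annual_crossings:.2e}")   # ~2.19e+10 (~20 billion)
print(f"fatality rate     ~ {fatality_rate:.2e}")      # ~2.44e-07 per crossing
print(f"success rate      ~ {1 - fatality_rate:.7f}")  # ~0.9999998
```

Even on the deflated one-crossing-per-person-per-year assumption (60 million crossings against 5339 fatalities), the failure rate remains below one in ten thousand, which is the force of the argument in the text.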
Others (who are not wedded to AA or CC) might object that it is plainly obvious and therefore trivial to point out that agents utilise inductive methods to make goal-directed predictions during actional decisions in complex systems. We hope this is the case. However, we feel the need to state as much given that there is ongoing debate around the epistemological status of notions like induction, volition, and laws.
Stage 5: Making a decision
With the resources above, an alternative to AA and CC – a third option – is available. Agents choose the action they believe most likely to maximise the chances of achieving their goal based on predicting the outcomes of various alternatives (see also Gigerenzer 2008; Mercier & Sperber 2017; Nanay 2017; Van der Merwe 2022). In doing so, agents follow RoTs: inductive ceteris paribus rules that track emergent regularities identifiable in complex systems.
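As a purely illustrative sketch of this third option (the candidate actions, observation labels, and probability values below are our hypothetical stand-ins, not empirical estimates), Stage 5 can be rendered as scoring candidate actions against goal-relevant RoTs and choosing the one with the highest estimated chance of success:

```python
# Toy sketch: decision (Stage 5) as selection among predicted alternatives.
# A rule-of-thumb (RoT) maps the observed situation to a rough, ceteris
# paribus success estimate for each candidate action; values illustrative.

def rot_estimate(action, observations):
    """Ceteris paribus success estimate for a candidate action."""
    if action == "cross_now":
        if "green_man" in observations and "truck_halting" in observations:
            return 0.999  # tracked regularities aligned: near-certain success
        return 0.2        # regularities not aligned: crossing now is risky
    if action == "wait":
        return 0.95       # waiting usually preserves the goal, at some delay
    return 0.0

def decide(candidates, observations):
    """Choose the action with the highest estimated chance of success."""
    return max(candidates, key=lambda a: rot_estimate(a, observations))

assert decide(["cross_now", "wait"], {"green_man", "truck_halting"}) == "cross_now"
assert decide(["cross_now", "wait"], {"red_man"}) == "wait"
```

The sketch is not meant as a cognitive model; it merely illustrates that RoT-based choice is neither an algorithm derived from underlying laws (the estimates are defeasible and approximate) nor a leap into the unknown (the choice is systematically informed by tracked regularities).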
Morin has the following to say about action: Action is strategy. The word strategy does not mean a predetermined program we can apply ne variatur over time. Strategy permits, from an initial decision, to envisage a certain number of scenarios of action, scenarios that can be modified according to information arriving in the action and according to chance occurrences that will occur and disrupt the action (2008, p. 54 original emphasis).
We can perhaps think of decision-making as involving an epistemic ‘tipping point’ – what complexity theorists might call a phase transition. When Stages 1–4 in STRATEGY synchronise in a suitable manner, an agent transitions from pre-action to action. Although a phase transition can be fast or slow, it does not occur in a moment. When heated, water does not turn to steam in an instant; the process is gradual, even if it can be rapid at high temperatures. In the same way, an agent engaged in actional decision-making shifts rather than leaps to action. Which course of action (or inaction) is chosen is determined by the context-specific ways that Stages 1–4 interact and align at some moment in space and time.
Given the above, our road-crossing agent proceeds as follows. In line with her goal (Stage 1) and informed by information gathered in sensation (Stage 2) and from memory (Stage 3), she makes a prediction (Stage 4) and then a concomitant decision to move forward (Stage 5). If Stages 1–4 are functioning reliably (as we have argued they generally do), then the resulting decision will be one that the agent can ceteris paribus know will produce a successful outcome. Such success is borne out in the evidence; the evidence being that, statistically speaking, agents mostly succeed in performing generic actional activities like crossing the road (previous section). Action (Stage 6) – the road-crossing event itself – naturally follows from Stage 5. The decisional outcome of Stage 5 ‘activates’ (or stimulates) the agent's relevant physiological apparatuses in such a way that she progresses towards her goal (the other side of the road).
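The six stages can be sketched schematically. The sketch below is purely illustrative: the function names, the toy observation values, and the numerical ‘rules of thumb’ are all invented for the example; we offer no formal model of STRATEGY, only a picture of how the stages feed into one another:

```python
# Illustrative sketch of the six-stage STRATEGY, applied to road-crossing.
# All names and the toy RoT probabilities are hypothetical, invented here.

def strategy(goal, senses, memory):
    """Stages 1-6: goal -> sensation -> memory -> prediction -> decision -> action."""
    # Stage 1: the goal is given (e.g. reach the other side of the road).
    # Stage 2: survey the environment.
    observation = senses()
    # Stage 3: draw on relevant past experience (a set of RoTs).
    relevant_rots = memory(observation)
    # Stage 4: predict the outcome of each candidate action via ceteris paribus RoTs.
    predictions = {action: rot(observation) for action, rot in relevant_rots.items()}
    # Stage 5: decide on the action judged most likely to achieve the goal.
    decision = max(predictions, key=predictions.get)
    # Stage 6: act (here, simply report the chosen action).
    return decision

# A toy instantiation: cross if the nearest car seems far enough away.
observation_fn = lambda: {"nearest_car_distance_m": 80, "car_speed_kmh": 40}
memory_fn = lambda obs: {
    "cross": lambda o: 0.95 if o["nearest_car_distance_m"] > 50 else 0.2,
    "wait":  lambda o: 0.5,  # waiting neither helps nor hurts much
}
print(strategy("reach other side", observation_fn, memory_fn))  # -> cross
```

Nothing in the sketch computes an optimal policy; the ‘prediction’ is a rough, defeasible estimate, which is all that a RoT requires.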
Our road-crossing case can easily be extrapolated to many other everyday cases (e.g. grocery shopping in the economic system and fishing in the ecosystem). As before, we see the commonsensical nature of this decision-making description as a strength rather than a weakness. Sans algorithmic rules or radical voluntarism, STRATEGY (or something like it) generates reliable knowledge, knowledge that regularly informs decision-making related to successful actions in complex systems.
Possible objections
Objection 1: Human irrationality
A possible objection arises from celebrated exposés of human irrationality and faulty decision-making that supposedly demonstrate the prevalence of cognitive illusions, biases, faulty statistical reasoning, and the like. Recent work in psychology and behavioural economics (e.g. Ariely 2008; Thaler & Sunstein 2008; Kahneman 2011) purports to show how susceptible we are to the gambler's fallacy, confirmation bias, priming, framing effects, and similar errors of reasoning.
In their aptly titled The Rational Animal (2013), Kenrick and Griskevicius respond to this line of thinking by stating that humans, like all animals, evolved to make choices in ways that promote deeper evolutionary purposes. Once we start looking at modern choices through this ancestral lens, many decisions that appear foolish and irrational at the surface level turn out to be smart and adaptive at a deeper evolutionary level (2013, p. 3; see also Haselton, Nettle and Andrews 2005).
Gerd Gigerenzer's ecological rationality – or ‘rationality for mortals’ – expresses a similar motif. Here, agents employ reasoning heuristics (viz. RoTs) that ‘work in real-world environments of natural complexity… where an optimal strategy is often unknown or computationally intractable’ (Gigerenzer 2008, p. 8 emphasis removed; see also Van der Merwe 2022). Like the views of Kenrick and Griskevicius and of Mercier and Sperber, Gigerenzer's emphasises the Darwinian nature of (human) agents’ decision-making faculties. Agents employ rough-and-ready or ‘fast-and-frugal’ RoTs rather than anything resembling an algorithm during successful actional decisions. Gigerenzer uses the example of playing chess. We play chess by a kind of intuition and sometimes play it very well without calculating all possible outcomes of every move and without being overwhelmed by the complexity inherent in the game. Ex hypothesi, the same goes for agents’ generic actions in complex systems. Some sort of calculation is going on when agents cross the road. However, such a ‘calculation’ – such an application of a RoT – at best only approximates anything like deductive logic or formal probability calculi. Neither does it prima facie consist in anything resembling the existential leap at the heart of CC's radical voluntarism.
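One well-known fast-and-frugal heuristic from Gigerenzer's research programme is ‘take-the-best’: consult cues in order of validity and decide at the first cue that discriminates, rather than weighing all cues together. The sketch below is a minimal rendering of that idea; the road-crossing cues and their ordering are invented here for illustration and are not drawn from Gigerenzer's own examples:

```python
# Minimal sketch of a Gigerenzer-style "take-the-best" heuristic: check cues
# in order of assumed validity; the first cue that discriminates decides.
# Cue definitions and their ordering are hypothetical, invented for this example.

def take_the_best(option_a, option_b, cues):
    """Return the option favoured by the first discriminating cue, else None."""
    for cue in cues:  # cues assumed ordered from most to least valid
        a, b = cue(option_a), cue(option_b)
        if a != b:
            return option_a if a > b else option_b
    return None       # no cue discriminates: the agent must guess

# Toy example: which of two gaps in traffic is the better one to use?
gap1 = {"distance_m": 120, "car_speed_kmh": 60}
gap2 = {"distance_m": 70, "car_speed_kmh": 30}
cues = [
    lambda g: g["distance_m"] > 100,    # most valid cue: a big gap
    lambda g: g["car_speed_kmh"] < 40,  # fallback cue: slow traffic
]
print(take_the_best(gap1, gap2, cues))  # decided by the first cue alone
```

The point of the sketch is the stopping rule: most cues are never consulted, which is precisely why such heuristics remain tractable in environments where an optimal strategy is not.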
Objection 2: Complexity is irrelevant
Another kind of objection is that complexity is essentially forgotten in our account. At best, this would deprive the account of its own motivation. At worst, it could ground one or more arguments that the account is deficient because it fails to properly integrate the complexity of the systems concerned.
One might object that RoTs are just as applicable in a deterministic world without complexity. RoTs are effective because of regularities that are not universal but reliable in the circumstances, and these can exist in a non-complex world as well as a complex one. The complexity of the world might then be ‘doing no work’ in our account.
Our response is that it is no surprise that RoTs can work in some (not all) non-complex worlds. AAists think that fully fledged algorithms work in such worlds. It is, however, a significant advance to show how RoTs can work for realistic agents in complex worlds. As agents navigate the world, they are almost always doing so in complex systems (often multiple complex systems at a time). Our goal has been to sketch an epistemology for how such navigations are (on average) so successful. In doing so, we have assumed that systems like the traffic system are complex. We live in a complex world rather than a Humean world.
Objection 3: RoTs not numerous enough, not fast enough, …
A further objection is that RoTs are inadequate for the task we assign them. There are various ways they could fail to deliver what we hope for. There might simply not be enough of them to account for all novel situations. Each road is different, and when agents encounter a new road, they do not always have a RoT available. RoTs might be too slow. By the time agents have applied the RoT, the opportunity to cross has passed and they will have to wait longer.
The objector might offer an alternative, better account: decisions in a complex world (like deciding when to cross the road) fall out of an account of continuous actions. This might involve the entrainment – or harmonisation – of one complex system with another. It is well-established that nonlinear systems like reservoir computers can entrain to chaotic deterministic signals (Falandays, Nguyen and Spivey 2021). These signals have regularities, but not the kind that would permit the application of RoTs. This would have the advantage that what agents experience as a practical decision-making process is a special case of a larger process of continuous (rather than stage-like) interactions with the world. The alternative account could explain the special case and more besides. RoTs could, at best, explain the special case alone, and perhaps not even that.
But, even if complex systems can be entrained to harmonise with each other, and even though agents are complex systems, it does not follow that entrainment explains successful action – specifically human action – in complex systems. On its face, the sort of action we are concerned with is quite different from entrainment. Indeed, it might be close to its opposite.
Perhaps a case of entrainment (or something like it) can account for certain kinds of habitual or reflexive behaviours. An example is sleep patterns adjusting to the day-night cycle. This cycle is not entirely deterministic, at least from the subjective perspective. Daylight hours change, travel alters things to a greater or lesser degree, sometimes people are in a light environment outside daylight hours. Nonetheless, the circadian rhythm does a pretty good job of ‘keeping the beat’. Yet, this is hardly a case of decision-making. Decisions involve a conscious, volitional process. This is, however, not the purely volitional process CC's radical voluntarism suggests. Part of our project is to suggest that decision-making is partly a product of agential volition and partly a product of tracking emergent regularities ‘out there’ (the details of how this works are, though, the subject of a future paper).
If entrainment is all there is to decisions, then the experience of decision-making is a special sort of illusion; the reasoning that agents think they are performing is a mere epiphenomenon. Alternatively, if entrainment-driven events count as decisions, then even actions we would regard as purely automatic are decided: unknown to them, people actually decide to feel sleepy in the evening. Neither horn seems plausible.
We are concerned with cases of human action in complex systems. And, humans are volitional agents. The notion of entrainment does not capture this feature of action. At best, entrainment captures habitual, reflex, or instinctual events (rather than actions). Humans are complex systems, but not the same kind of complex systems as, for example, a reservoir computer (entrained to chaotic deterministic signals). Likewise, the dynamics of a road-crosser do not in any meaningful sense resemble the dynamics of a traffic system, unless the term ‘resemble’ is stretched beyond its usual meaning. A road-crosser is a volitional agent acting within a system, not a system whose dynamics have been tuned to match those of another system.
We are, likewise, confident that RoTs are sufficiently numerous. They are acquired through a learning process. Where there has been no learning process, there is no RoT. But this is no objection. Action in a complex system can be routine, but only after it has been mastered. It is not part of our brief to argue that agents are intrinsically equipped, qua agents, to cross the road. They must learn, usually from a mix of inputs from other agents, reasoning about situations they face, and experience. The upshot is a set of RoTs permitting the crossing of roads. So long as agents can recognise roads, they can apply the appropriate ‘rules’.
Conclusion
We have outlined one way for successful action to be possible in complex systems. What has emerged is a pragmatist epistemology predicated on a simple description of some typical case of successful action in complex systems (crossing the road). When performing such actions, agents follow something like STRATEGY: (1) set some pertinent goal, (2) use one's senses to survey the environment, (3) draw on relevant memories, (4) formulate predictions about the behaviour of one's environment and oneself, (5) make an appropriate decision based on Stages 1–4, and then (6) act accordingly. STRATEGY is the temporal framework within which a suitable epistemology of successful action in complex systems is situated. This epistemology centres around the notion of RoTs. Neither the application of an algorithm (as in AA) nor an existential leap (as in CC) explains how agents regularly and reliably act successfully in complex systems, while RoTs do.
As mentioned, STRATEGY is a common-sense description of agents’ successful actions in complex systems. We think that appeals to common-sense are often underappreciated. Our thesis is, in part, an attempt to press this point. If common-sense is relevant in the philosophy of action, then it might enjoy fruitful application in other domains of philosophical inquiry.
Regarding RoTs, we have mostly cited literature from psychology rather than philosophy. As mentioned, the idea that agents employ RoTs while successfully navigating complex systems might be obvious to some, especially those immersed in the topical psychological literature. It is, however, not obvious to all philosophers. AA- and CC-style approaches to action appear relatively widespread (even if not always expressed in complexity terms and even if the focus is often on actions on, rather than actions in, complex systems). Our thesis is then, in part, an attempt to encourage epistemologists and philosophers of action to take on board certain lessons from psychology, specifically lessons regarding decision-making and actional encounters with complexity. Engagement between psychologists and philosophers can, we think, only be fruitful if we seek a better understanding of these issues.
Declaration of interest statement
The authors declare that there is no conflict of interest.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the John Templeton Foundation, (grant number 61408).
