Abstract
In our 2023 paper, entitled “Modeling interactions between the embodied and the narrative self: Dynamics of the self-pattern within LIDA,” Kugele, Newen, Franklin, and I propose a functional description and implementation of a central element of Gallagher & Newen's pattern theory of self, which identifies an agent's self with a dynamic pattern of so-called cognitive aspects that govern their thought and behavior (Gallagher, 2013; Newen, 2018; Gallagher & Daly, 2018). The pattern theory explicitly rejects the traditional conceptualization of the self as a unitary entity with certain properties that resides within agents; the idea of a pattern of aspects is central to its ability to account for the dynamic, yet relatively stable development of most natural agents’ selves. Implementing the pattern theory within the Learning Intelligent Distribution Agent (LIDA) cognitive architecture revealed that, in order for a cognitive architecture to account for both the dynamic and stable nature of an agent's self-pattern, aspects of that pattern had to be realized by dispositions of the agent to think or act in a certain way. In this commentary, I argue that this fundamental role of dispositions extends to cognitive processes in general and that cognitive systems should be understood in terms of the dynamical interactions of dispositions over time. To facilitate such an understanding, dispositions will have to be identified with topologies of cognitive (sub)systems. I provide an example of such a topology by reference to informational topologies in neuronal systems.
In our 2023 paper, entitled “Modeling interactions between the embodied and the narrative self:
Dynamics of the self-pattern within LIDA,” 1 Kugele, Newen, Franklin, and I propose a functional description and implementation of a central element of Gallagher & Newen's pattern theory of self, which identifies an agent's self with a pattern of so-called cognitive aspects which govern their thought and behavior.2–4 The pattern theory explicitly rejects the traditional conceptualization of the self as a unitary entity with certain properties that resides within agents, which it replaces with the concept of a pattern of aspects. This allows it to account for the dynamic, yet relatively stable development of most natural agents’ selves—a phenomenon particularly well-documented in humans.5,6
While the pattern theory of self, as currently developed by cognitive scientists, represents a promising first step towards providing a scientifically fruitful definition of the term “self,” there is much work left to be done, particularly in spelling out its relation to commonly used terms and concepts from the empirical literature in psychology and neuroscience. To help address this issue, our paper aimed to clarify the notion of “interaction between aspects of a self-pattern,” which captures the idea that the dynamic nature of an agent's self-pattern arises from constantly occurring interactions between different aspects of that pattern. One example we use to demonstrate this interaction is that of learning a new sport such as tennis and steadily becoming better at it, which in turn motivates the agent to spend more of their free time practicing, watching tennis matches, etc. This shift in behavioral routines and skills gradually begins to influence how the agent justifies and narrates their own choices, as well as their past and future actions—that is, it affects how they think and talk about themselves. While this functional-level description seems to capture a real and quite common phenomenon, some of the core concepts it rests on, particularly that of an “aspect of a self-pattern,” have unfortunately remained less than well-defined. In an effort to facilitate a clear definition of both the concept of self-aspects and the stipulated interactions between them, our 2023 paper outlined a model of two possible interactions of this kind within the Learning Intelligent Distribution Agent (LIDA) cognitive architecture 7 and identified some of its entities and processes with the relevant self-aspects and their interactions. 1
Our LIDA implementation of interactions between the two self-aspects revealed that, at least in the case of the embodied and the narrative self, these interactions take place over long periods of time, as they involve the gradual modification of both behavioral and cognitive dispositions such as behavioral routines and distal intentions.8–10 Within the modular architecture of LIDA, this modification of dispositions is largely modeled by processes occurring within the Action Selection module (AS), which determines whether the agent will continue instantiating (or attempting to instantiate) some ongoing action, switch to a different action, or deliberate about which action to choose next. 11 Which action is selected depends on a variety of factors that cannot be discussed here due to space constraints; for current purposes, it suffices to say that the selection process may be biased in favor of certain actions that repeatedly come to consciousness (i.e. that are globally broadcast) due to factors such as accessibility (the action is readily afforded by environmental conditions), the possession of action-related skills, or knowledge and personal desires that shape the deliberation over which action to take next. In our model, this bias corresponds to a behavioral or cognitive disposition. 2
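The biasing mechanism described above can be sketched in a few lines of code. This is a minimal toy illustration, not LIDA's actual implementation; the class and method names (`ActionSelector`, `broadcast`, `select`) are my own illustrative assumptions.

```python
# Toy sketch of disposition-as-bias in action selection (not the actual
# LIDA codebase): repeated "conscious broadcasts" of an action leave an
# accumulated bias that tilts future selections toward that action.

class ActionSelector:
    def __init__(self, actions):
        # Each action starts with zero accumulated bias.
        self.bias = {a: 0.0 for a in actions}

    def broadcast(self, action, strength=1.0):
        # A globally broadcast (conscious) action increments its bias;
        # this accumulated bias plays the role of a disposition.
        self.bias[action] += strength

    def select(self, affordances):
        # Choose among currently afforded actions, weighting each by its
        # situational activation plus the learned dispositional bias.
        scored = {a: activation + self.bias.get(a, 0.0)
                  for a, activation in affordances.items()}
        return max(scored, key=scored.get)

selector = ActionSelector(["practice_tennis", "watch_tv"])
# Repeated conscious broadcasts of tennis-related actions build a bias...
for _ in range(3):
    selector.broadcast("practice_tennis")
# ...so with equal situational activation, the biased action wins.
choice = selector.select({"practice_tennis": 0.5, "watch_tv": 0.5})
print(choice)  # practice_tennis
```

The point of the sketch is that no semantic structure is consulted at selection time: the "disposition" is nothing over and above the accumulated bias term.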
An interesting upshot of the way that self-aspects in LIDA interact is that nearly the entire process of interaction can be described adequately by reference to (cognitive and behavioral) dispositions only. This is particularly remarkable in the case of narratives, which form the core of the narrative self, as these are traditionally understood as semantically structured entities that enter into causal relations with other cognitive entities or processes due to the content they bear. In our LIDA model, however, narratives such as “I am too absorbed in my work, and am neglecting my child as a result” are conceptualized not as semantic structures, but as narrative goals that dispose the agent towards certain behaviors. This disposition is not the result of a semantic process such as inferential reasoning, but the outcome of a process that Slors calls “self-programming,” during which the agent becomes increasingly responsive to certain types of environmental situations or cues that allow the agent to take steps towards reaching that goal.
In this commentary, I discuss the possibility that an understanding of behavioral and cognitive dispositions, similar to the one exhibited in our paper, may provide a superior alternative to traditional views of semantic memory processes (such as semantic recall, narrative generation, and verbal narration) as involving semantic structure or content that directly enters into causal relations with other types of cognitive entities or processes. This alternative view limits the explanatory role of semantic structures to that of products of cognitive processes (such as verbal utterance or sentence writing), whose “building blocks” are non-semantic dispositions for certain patterns of thought, action, and language use.13,14 As an example, in the above-mentioned case of a parent who has come to realize that they’ve been neglecting their child, this realization would not be the outcome of a process of conscious deliberation, but of a clash between conflicting narrative goals—for instance, wanting to spend as much time at work as possible on the one hand and wanting to be a caring parent on the other. As Dings points out, conflicts between narrative goals are usually not recognized until some external condition highlights them (for instance, the parent seeing their child cry after being unable to come to their soccer game due to having worked overtime). This is because narrative goals, like distal intentions, are almost never cognitively salient, but instead passively guide behavior. Narrative goals, therefore, may be described in semantic terms, such as when the neglectful parent narrates their recent realization to their spouse, but the goals themselves are not semantic in nature.
Rather, they are dispositional: They may dispose the agent to react differently when seeing their child before or during work hours (thus acting as a behavioral disposition) or to think about their child at work more often (thus acting as a cognitive disposition). Of course, they may also dispose the agent to say certain phrases in certain contexts, such as “I noticed a while ago that I’ve been neglecting my child, so I’m trying to focus less on work now.” While such phrases undoubtedly possess semantic content, I argue that they are not the result of states or processes that involve semantic content. Rather, what causes them to be spoken, written, or otherwise semantically expressed is the existence of a disposition within the agent (perhaps within a context of other dispositions or other cognitive states) to express a certain narrative semantically when the environment affords its expression. 3
A further argument for rejecting any causal-explanatory role for semantic content in research on semantic memory is the following: It is not clear how content—that is, the meaning carried by a physical instantiation of a sentence, a thought, an image, etc.—could have any causal effect on a physical system over and above the causal effect of the system carrying that content. This worry goes back to Hutto & Myin's formulation of the “Hard Problem of Content,” 17 which distinguishes between the physical implementation of some content (its vehicle) on the one hand and the content as a multiply realizable entity on the other. As Hutto & Myin point out, for some content to have causal efficacy that goes over and above that of its vehicle, it must be an entity that is sensitive to truth-conditions. In other words, content becomes a difference-maker within a physical system because it allows for the possession of mental states such as beliefs, which may be true or false, with these truth-conditions themselves having some causal impact on the system's development. However, as the argument goes, such truth-conditions do not exist independently of human activity, since this would require normative states such as truth or falsehood to be both realized in physical systems and perceptually accessible to us. Therefore, there is no way for cognitive systems to perceive or extract meaningful content from their environments. At best, they could somehow create it by adding novel information that isn’t perceptually available, but it is unclear where this novelty (the basis for what philosophers call non-derived mental content) is supposed to come from. After all, even memories originally result from perceptual situations, and in those situations, all that was available to the organism was perceptual information—not information about semantics or truth-conditions.
Throughout the last two decades, the debate on the causal role of (semantic) mental content has slowly developed into one of the main lines of demarcation between traditional and contemporary cognitive science. While traditional approaches to cognitive systems usually rely heavily on the idea of semantic processing and assume a unique causal role for semantic entities such as sentences, words, and meanings within cognitive systems, contemporary cognitive science emphasizes the situatedness of these systems within others and their constant need to manage relations between their internal states and processes and those of their environment. The dispositional view that I described above is an example of an attempt to identify these quite general features with more concrete, organism-internal states, relying on the notion of dispositions originally developed by Ryle. 18 However, with the methods and knowledge of twenty-first-century cognitive science in hand, I believe that this notion is due for an update. Thanks to the advances of complex systems theory, we can now precisely characterize those states and processes that traditional cognitive science understood as involving non-derived mental content in terms of dispositions for cognition and action, and we can spell out these dispositions with the help of dynamical systems terminology.
My view of dispositions starts with the topic of information processing. Most cognitive scientists agree that the brain processes information in some sense, but disagree on what this sense is. A first attempt at clarifying this notion may be to consider the brain as a system that is sensitive to information—in other words, one whose internal states change in some regular way due to the amount or type of information present in another system. Thanks to the advances made in theoretical mathematics throughout the 1970s and 80s, we can spell out these ideas of sensitivity and regular change in terms of concepts and equations from Dynamical Systems Theory, a branch of mathematics concerned with systems whose development is a result of their own history of internal states and the states of other systems they interact with. One type of state is particularly relevant when considering cognitive systems: informational states. These are system states that covary with states of other systems (thus carrying covariant information about them) and/or states of the cognitive system itself (thus co-realizing the informational dynamics of that system). Using Dynamical Systems Theory, we can think of cognitive systems as possessing a certain informational topology that results from the ways in which various system states are informationally related to each other. 19 Within the brain, such informational topologies exist on multiple different spatiotemporal scales and are realized by various different structures—from the structural connections between individual neurons to the firing patterns between neuronal populations, all the way up to the kind of whole-brain neural activity patterns evident during memory consolidation and neuronal avalanches. 20 Understanding and acting upon a stimulus then becomes the result of a complex mesh of informational topologies at different scales forwarding, altering, or blocking neural activity patterns originating from the perceptual systems, such that coordinated action can emerge.
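The idea of an informational topology recovered from state covariation can be made concrete with a small simulation. This is a deliberately simplified sketch under my own assumptions: three toy units, of which two are dynamically coupled and one is independent noise; correlating their trajectories yields a graph whose edges mark informational relations.

```python
import math
import random

# Sketch: recover an "informational topology" from covarying trajectories.
# Unit 0 is driven by a slow oscillation, unit 1 is coupled to unit 0 with
# a one-step lag, and unit 2 is independent noise.
random.seed(0)
T = 2000
x = [[0.0] * T for _ in range(3)]
for t in range(1, T):
    drive = math.sin(t / 10.0)
    x[0][t] = drive + random.gauss(0, 0.1)               # driven unit
    x[1][t] = 0.9 * x[0][t - 1] + random.gauss(0, 0.1)   # coupled to unit 0
    x[2][t] = random.gauss(0, 1.0)                       # independent unit

def corr(a, b):
    # Pearson correlation of two equal-length trajectories.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    sa = math.sqrt(sum((ai - ma) ** 2 for ai in a))
    sb = math.sqrt(sum((bi - mb) ** 2 for bi in b))
    return cov / (sa * sb)

# Edges of the informational topology: pairs whose states covary strongly.
edges = [(i, j) for i in range(3) for j in range(i + 1, 3)
         if abs(corr(x[i], x[j])) > 0.5]
print(edges)  # [(0, 1)] — only the coupled pair is informationally related
```

Real analyses of neural data use far richer measures (lagged, directed, multi-scale), but the principle is the same: the topology is read off from how states covary, not from any semantic content they carry.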
The informational topology of a given cognitive system will differ depending on whether the system in question is a neuronal population, a hippocampus, or a whole brain. Crucially, as a result of neural plasticity, it will also differ depending on the perceptual input that it has received, either directly (as in V1 or the auditory cortex) or indirectly, after the input has percolated through other neuronal populations or regions. For current purposes, we can think of this perceptual input as leaving an “imprint” in the topology of that system, changing it in a way that biases the system to respond differently to subsequent inputs. 4 These biases are jointly realized by the structural, functional, and effective connectivity of the given system. 20 As neuronal systems get increasingly complex at larger scales, their biases can become both selective and conditional, for instance allowing neural populations to limit the propagation of activity to only that which originated from a certain type of stimulus, under a certain condition. This is exactly what we observe in sub-regions of both the occipital and parietal cortices that are selectively responsive to certain visual stimuli or even to objects such as motorcycles or fire hydrants. 23 The latter cases, in particular, have traditionally been interpreted as observations of the loci of “concepts” or “object representations,” which were often understood to carry semantic information about e.g. motorcycles, as if the causal role of activity in that area were to produce or signal the semantic content “motorcycle” to other, downstream areas. This interpretation, however, is overly liberal, since the fMRI or MEG recordings referenced in such studies never display truth condition-sensitive semantic content, but simply selective and/or conditional sensitivity, which again points to the Hard Problem of Content.
I argue that an understanding of behavioral and cognitive dispositions based on Dynamical Systems Theory can adequately fill this explanatory gap.
Rather than understanding activity in some neuron or region as a meaningful signal (i.e. one with semantic content), we can understand it simply as system-internal activity at a given spatiotemporal scale, which, on the one hand, influences the informational topologies of other systems at similar scales (as input) and, on the other, constitutes part of the ever-changing topology of the larger-scale system that constrains its own development. The existing topology of a given population or region may then be identified with parts of a certain behavioral or cognitive disposition, allowing us to understand dispositions in terms of the informational topologies of neural populations, brain regions, or even larger organismic states involving more than just the brain. This has the benefit of making the—arguably quite philosophical—term “disposition” more concrete, and of setting it up to provide a serious alternative to content-based approaches to understanding cognitive systems.
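The "imprint" picture of perceptual input biasing a topology can also be sketched computationally. The following is a purely illustrative toy, not a model of any specific cortical circuit: a linear population whose weights are nudged Hebbian-style by each input, so that the population later responds selectively to patterns resembling past input, without any semantic content entering the story.

```python
# Sketch of a perceptual "imprint": inputs leave a trace in a toy
# population's connectivity (its topology), which then biases its
# response to subsequent inputs. Illustrative assumptions throughout.

def respond(weights, pattern):
    # Response = overlap between the population's weights and the input.
    return sum(w * p for w, p in zip(weights, pattern))

def imprint(weights, pattern, rate=0.25):
    # Hebbian-style update: each input nudges the weights toward itself.
    return [w + rate * p for w, p in zip(weights, pattern)]

seen  = [1.0, 0.0, 1.0, 0.0]   # repeatedly encountered stimulus
novel = [0.0, 1.0, 0.0, 1.0]   # never encountered

weights = [0.0, 0.0, 0.0, 0.0]
for _ in range(4):
    weights = imprint(weights, seen)

# The imprinted topology now biases the system's response: the familiar
# pattern elicits strong activity, the novel one elicits none.
print(respond(weights, seen))   # 2.0
print(respond(weights, novel))  # 0.0
```

The selectivity here is exactly the kind of "selective and/or conditional sensitivity" discussed above: a dispositional fact about the system's topology, with no truth condition-sensitive content in sight.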
This view of dispositions and the role they play in explaining cognitive processes such as the formation of narratives unfortunately had to be left out of our original research paper due to both space constraints and disagreements between the authors regarding the content-less view in general. The goal of writing this commentary was to suggest the dispositional view as a possible vantage point on the paper, and on the LIDA cognitive architecture in general. As a matter of fact, most of a LIDA agent's aspects can be modeled completely without incorporating semantically structured entities such as sentences—however, this is still often done for practical purposes, e.g. when modeling semantic memory. Consequently, leading members of the Cognitive Computing Research Group such as Sean Kugele, who are currently working with LIDA, do not take strong positions regarding the explanatory role of semantic content (personal communication, March 15, 2021). However, as cognitive science as a whole is increasingly moving towards content-less views of cognitive processes and systems, I argue that researchers working with LIDA (or indeed any other cognitive architecture) should aim to clarify their position in this debate, lest their architecture of choice get sidelined in the ever-growing literature on twenty-first-century general-purpose AI. This threat is particularly salient when considering the recent practical successes of Artificial Intelligence such as DALL-E and Stable Diffusion, which rely on the generation of abstract feature spaces via diffusion processes, and which are readily analyzable using the Dynamical Systems methodology outlined in this commentary.
Acknowledgements
I would like to thank my collaborators from the original paper, Stan Franklin, Sean Kugele (University of Memphis) and Albert Newen (Ruhr-University Bochum), for their in-depth feedback on my ideas and the many open discussions we have had on the issues presented in this commentary. I also extend my thanks to Abullahi Ali (University of Nijmegen) for feedback on some of the arguments presented in this commentary, as well as the concepts of informational and mental topologies.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Work on this commentary was funded by a 1-year stipend awarded by the Faculty for Philosophy and Educational Research of the Ruhr-University Bochum.
Author biography
Alexander Hölken is currently a Doctoral Researcher at the Institute for Philosophy and Educational Science at the Ruhr-University Bochum. He started his Doctoral studies in April 2022 and is planning to conclude them in March 2025.
