Abstract
In this article, I consider the possibility of a theoretical integration of phenomenology and a mechanistic framework. First, I discuss the mechanistic model of explanation and the idea of theoretical integration in science as opposed to unification. I argue that the mechanistic model of explanation is preferable for integrating the cognitive sciences, although it is limited and in the case of consciousness studies should be complemented with phenomenology. Second, I examine three possible approaches to the integration of phenomenology and the mechanistic model of explanation. First, I discuss Integrated Information Theory (IIT) of consciousness and propose a new argument against IIT’s axiomatic method—namely, I argue that IIT misuses the notion of axiom. Next, I discuss two different proposals for the integration of phenomenology with cognitive sciences: front-loaded phenomenology and neurophenomenology. I argue that these proposals cannot be integrated with a mechanistic framework unless requisite modifications are made.
The last two decades have brought many theoretical approaches to consciousness (e.g., Crick & Koch, 2003; Metzinger, 2003; Rosenthal, 2005; Zahavi, 2005). Most contemporary theories of consciousness try to address the phenomenological aspect of conscious experience as well as its intentional character. Whereas some theories understand phenomenology rather superficially as a qualitative aspect of experience (Rosenthal, 2005, pp. 23–26), others elaborate the idea of phenomenological subjectivity and explore first-person perspectives (Zahavi, 2005). Still others recognize the importance of phenomenological descriptions of first-person and subjective experience and try to integrate them with third-person descriptions, for example, neurobiological ones (e.g., Metzinger, 2003). This, however, leads to the problem of matching different descriptions (Wiese, 2018), i.e., the problematic relation between first-person descriptions/predicates and third-person descriptions/predicates. The most unlikely option is to establish the identity of phenomenal and neurobiological predicates. Another possibility is to search for nomological relations between levels and then make an epistemological reduction.
As I will argue, there is yet another possibility, namely, a non-reductive integration of phenomenology and empirical research according to a mechanistic framework. A non-reductive, multilevel pluralistic explanatory approach would be desirable for explaining complex phenomena such as consciousness or its pathologies, e.g., mental disorders. Especially in the latter, we see that a complete understanding of a disorder requires bringing together different research fields covering such aspects as the psychological, social, phenomenological, and, last but not least, neurobiological (e.g., Engel, 1977). In the first section of this article, I discuss the ideas and methodologies of reductionist (unificatory) and mechanistic (integrative, multilevel) approaches to consciousness and argue for the latter. In the second section, I consider three different proposals for integrating phenomenology and empirical studies of consciousness. First, I evaluate Integrated Information Theory (IIT), which I criticize for misusing the notion of axiom and thus for employing an incorrect axiomatic method. I argue that IIT’s axioms are not self-evident, contrary to what proponents of this theory claim; furthermore, introducing axioms already requires a theory which these axioms systematize, and IIT does not offer a phenomenological theory. Second, I consider front-loaded phenomenology and, third, neurophenomenology. I argue that these proposals cannot be integrated with a mechanistic framework unless necessary modifications are made. Front-loaded phenomenology is too weak to deliver constraints on possible mechanisms and thus cannot be integrated with a mechanistic framework unless we reconsider the role of phenomenological analyses applied at the beginning of the explanatory process. With that in mind, I argue that phenomenological analyses can be understood as analogous to functional analyses and can similarly contribute to the model of the hypothetical mechanism responsible for the explanandum phenomenon.
Neurophenomenology seems open to integration with mechanistic explanations, but, first, some methodological misunderstandings about mechanisms need to be clarified and, second, neurophenomenology needs to improve its explanatory power in order to deliver dynamical models which could then be used as constraints on possible mechanisms.
Towards integrated science of consciousness
The strategy of deductive-nomological reduction of the vocabulary of one scientific discipline to that of another is related to the idea of the unification of science (Oppenheim & Putnam, 1958). For example, the vocabulary of psychology was thought to be potentially reducible to the vocabulary of biology, which would in turn reduce to the terminology of the fundamental level of physics. The theoretical tools providing for such a reduction were hypothetical bridge laws, in virtue of which upper-level predicates were reduced to lower-level predicates. After decades of discussion, most contemporary philosophers (see e.g., Craver, 2007; Dupré, 1995; Fodor, 1974; Roth & Cummins, 2017) think that the realization of this idea is impossible. Instead, some of them have begun considering another strategy called theoretical integration. By theoretical integration of two or more fields of study, I understand the integration of research from these fields without eliminating or reducing any one of them (Darden & Maull, 1977). Integration is thus different from unification, which searches for one general and elegant explanation (Miłkowski, 2016, p. 48).
Recently, several researchers have proposed that the mechanistic model of explanation is the most promising strategy for the integration of cognitive neuroscience (Craver, 2007; Miłkowski, 2016). As Craver puts it, “The goal of finding multilevel explanations provides an abstract sketch or scaffold for integrating fields. The findings in different fields of neuroscience are used, like the tiles of a mosaic, to elaborate this abstract mechanism and to shape the space of possible mechanisms” (2007, p. 228). The mechanistic model of explanation is a form of causal explanation and focuses on describing, as well as modeling, the mechanism which is responsible for the phenomenon to be explained. The result of a mechanistic explanation is a model or scheme of a mechanism, describing its parts, activities, and organization. In other words, a mechanistic approach seeks to answer the question of “how?” (e.g., how is a specific behavior produced?) rather than “why?” (e.g., why does a system behave so and so?). Importantly, mechanistic explanations, especially in cognitive neuroscience, are multilevel explanations. To put it briefly, explaining how a behavior is produced requires decomposing the behavior into tasks and subtasks (functional decomposition) together with corresponding parts (structural decomposition; Bechtel & Richardson, 2010). As a result, the hypothetical mechanism responsible for the target phenomenon is divided into levels of organization, and each level can constitute a different field of study. For example, a proposed mechanistic explanation of spatial memory in mice consists of four levels: mouse behavior (navigating in a maze), a neural structure encoding a spatial map in the hippocampus, a long-term potentiation mechanism in neurons, and neurotransmitters along with their receptors (activation of the NMDA receptor; Craver, 2007, p. 166).
In rejecting the deductive-nomological model (Hempel & Oppenheim, 1948), mechanists also reject a vision of unified science through reduction to one fundamental scientific discipline. Multilevel explanations require, however, a method of integration. Whether such an integration is reductive in some sense, though different from the covering-law model, is a matter of debate. Some philosophers (e.g., Hensel, 2013) argue that there is much more reduction in the mechanistic approach than is explicitly declared by its proponents, because explanations of higher-order phenomena are formulated in lower-level terms. Bechtel uses the term mechanistic reduction, but he claims that it is Janus-faced and that a mechanist can be reductionist and emergentist at the same time (Bechtel, 2008, p. 128). This is so because, according to the mechanistic approach, the behavior of a whole system surpasses the behavior of its parts and, furthermore, it is context dependent. In this regard, a mechanistic explanation can be read, at least in Bechtel’s version, as ontologically reductive (mental phenomena are produced by physical mechanisms, but there are emergent properties) and epistemologically non-reductive (explanations of mental phenomena are multilevel and include multiple languages of description and methods of research). If the mechanistic approach does not aim to eliminate higher-order descriptions of the target phenomenon and it accepts the importance of research from various fields of study, it can be conceived of as a non-reductionist strategy of multilevel and pluralistic explanation. Integration with a mechanistic framework does require, however, that each field of study provides constraints on the space of possible mechanisms (Craver, 2007). Thus, integrating phenomenology with a mechanistic framework depends on the possibility of delivering such constraints.
As I will argue later, there are at least two ways in which phenomenologically informed research can deliver such constraints.
The mechanistic approach can be and is applied to the study of consciousness. It is important to note that the concept of consciousness is an umbrella term covering a set of diverse phenomena or cognitive functions, such as attention, intentionality, self-monitoring, sense of agency, embodiment, perceptual awareness, qualia, affectivity, subjectivity, etc. There is also no consensus about which research methods and approaches are best (see, e.g., Irvine, 2013). It is quite plausible that there is no single mechanism responsible for conscious phenomena but rather a set of diverse mechanisms responsible for different conscious phenomena. Thus, it seems that the mechanistic multilevel explanatory strategy, i.e., a strategy which allows for integrating different fields of study, is the most promising strategy for explaining such a complex phenomenon. Such an approach would deliver a naturalistic explanation of consciousness while also doing justice to its complex and multifaceted nature.
However, the mechanistic model of explanation is a model of causal-mechanical explanation, thus it has limited resources to capture and describe the phenomenal level of subjective experience. This limitation is especially troublesome in studying cases of abnormal experiences in mental disorders. Without first-person insights and careful analyses of a subject’s experience, we are unable to capture the phenomenon and formulate hypotheses and heuristics to guide further empirical research and to ultimately propose a mechanistic model of a malady. An example of a phenomenologically informed model of mental disorder is the phenomenological model of schizophrenia proposed by Sass and Parnas (Sass, 2014; Sass & Parnas, 2003). On this account, schizophrenia is a disturbance of the “minimal self” that involves malfunctions such as hyper-reflexivity (exaggerated self-consciousness resulting in objectification of experiences which are lived through implicitly in normal conditions) and diminished self-affection (a decline of the sense of existence as a unified subject; Sass, 2014, p. 368). Although Sass and Parnas use a phenomenological conceptual apparatus, mainly taken from the Husserlian and Heideggerian tradition, and argue for the autonomy of such a phenomenological explanation, it seems plausible that this kind of phenomenological model of mental malady could be informative for mechanistic models.
At this point, it is important to mention that phenomenology can be understood in at least two different ways. One is more general and concerns the experiential dimension of consciousness, which is captured by the notion of qualia—the famous “what it’s like.” Phenomenology, in this respect, is often understood as an introspective investigation of this qualitative dimension of experience. The problem of putting qualia into a naturalistic framework is called by some philosophers the “hard problem” of consciousness (e.g., Chalmers, 1996). Others, however, criticize the notion of qualia and reject its importance in explanatory pursuits (e.g., Dennett, 1991; Neisser, 2015).
A more advanced and interesting notion of phenomenology can be found in the philosophical works of Edmund Husserl (e.g., 1982) and the rich phenomenological tradition he initiated. In general, breaking with naïve studies of experience based on introspection was one of the main goals of Husserl’s phenomenological philosophy, which was a highly advanced theory of consciousness as well as a method of study of lived experience. Rejecting qualitative states of experience as the main theme of studies of consciousness was also an important issue in Husserlian phenomenology. Husserl argued that investigating qualitative, or hyletic, as he calls it, aspects of consciousness is secondary, and of primary importance are functional considerations (Husserl, 1982, pp. 207–210, para. 86), i.e., analyses of the intentional functions of consciousness which produce the experience of perceptual objects. To put it differently, the key to understanding consciousness is not the “what-is-likeness” but the subjectivity, the “for-me-ness” of experience (Zahavi & Kriegel, 2015). Husserl was fully aware that this task could be accomplished only by a rigorous and intersubjective method of studying structures of experience, thus he proposed a list of methodological steps, such as “eidetic variation,” in order to reduce biases and the individual character of experience and to extract its general structure (see, e.g., Sowa, 2012).
Husserlian phenomenology, as the most developed and methodologically aware philosophical account of experience, is often an important point of reference in the debate on the naturalization of phenomenology (for an overview, see Petitot, Varela, Pachoud, & Roy, 1999). Naturalization is understood here as an integration “into an explanatory framework where every acceptable property is made continuous with the properties admitted by the natural sciences” (Roy, Petitot, Pachoud, & Varela, 1999, pp. 1–2). However, proponents of naturalized phenomenology see the project as non-reductionist (Gallagher, 2010). It seems right to say that the naturalization of phenomenology is an attempt to integrate it with the cognitive sciences, rather than to unify it in a reductionist manner. The problem with the naturalization of phenomenology is that although the discussion generated several proposals for such an integration, their impact on empirical findings was very weak. As I will argue in the second part of this paper, one reason for such poor outcomes is that proposals for naturalized phenomenology are methodologically undeveloped; in particular, they do not consider which model of explanation in the cognitive sciences is preferable or what role phenomenological insights could play in the whole explanatory process. Therefore, it is worth reconsidering these proposals in the context of an integrative mechanistic model of explanation.
In the second part of the paper, I will discuss three proposals, all of which try to include the phenomenological level in the empirical study of consciousness. First is Integrated Information Theory (IIT; Tononi, 2004), in which phenomenology is understood differently than in the Husserlian tradition, but which is an important position in contemporary studies of consciousness and the naturalization of phenomenal experience. The next two are proposals for a non-reductive integration of phenomenology with the cognitive sciences: the so-called front-loaded phenomenology (Gallagher & Brøsted Sørensen, 2006) and neurophenomenology (Varela, 1996). I will consider whether there is a theoretical possibility of integrating them with a mechanistic framework and what role phenomenological insight could play—whether phenomenology can deliver only methodologically controlled descriptions of a target phenomenon, namely a subjective conscious experience, or whether it can contribute more, e.g., providing analyses and constraints on the space of possible mechanisms.
Putting phenomenology into the science of consciousness
The axiomatic method of Integrated Information Theory
Integrated Information Theory was first presented by Tononi (2004) and further developed by Tononi and colleagues (Balduzzi & Tononi, 2009; Oizumi, Albantakis, & Tononi, 2014; Tononi, Boly, Massimini, & Koch, 2016). IIT is a novel approach to consciousness which has the ambition to be not only a theory of consciousness (understood as a subjective experience), but also a method of measuring consciousness and a research program for searching for the mechanisms responsible for the phenomenon of consciousness. The main claim of IIT is that “consciousness has to do with the capacity to integrate information” (Tononi, 2004, p. 2). The idea behind it is that consciousness is basically the ability to integrate and generate information, i.e., to differentiate various perceptual states and to integrate them in one coherent percept. According to Tononi, generating information is a common feature of many living and artificial systems. The integration of information is, however, not so common and is the key aspect of consciousness. The greater the amount of integrated information, the higher the degree of consciousness.
Proponents of IIT rightly claim that empirical research alone cannot answer the hard questions about consciousness and “should be complemented by a theoretical approach” (Tononi et al., 2016, p. 450). IIT is an example of a top-down approach to consciousness. In contrast to bottom-up approaches, which begin from an empirical study of actual neural mechanisms, it begins by formulating phenomenological axioms. The notion of axiom is related to mathematics, and it does not seem that the proponents of IIT use it differently or metaphorically. They claim to “follow the classical tradition according to which an ‘axiom’ is a self-evident truth, . . . truths about consciousness—the only truths that, with Descartes, cannot be doubted and do not need proof” (Oizumi et al., 2014, p. 2). Furthermore, IIT’s axioms of consciousness are thought to express “essential properties” (Oizumi et al., 2014, p. 15) of consciousness. Accordingly, there are five phenomenological axioms: (a) existence (consciousness exists); (b) composition (consciousness is structured, i.e., “it consists of multiple aspects”); (c) information (consciousness is informative, i.e., “each experience differs in its particular way from other possible experiences”); (d) integration (consciousness is integrated in a non-reducible way to its components); and (e) exclusion (“each experience excludes all others—at any given time there is only one experience having its full content”; Oizumi et al., 2014, pp. 2–3). Each of these axioms, depending on the version of IIT, is usually supplemented with a short phenomenological description which serves as an illustration.
The next methodological step is deriving postulates from these axioms. The postulates concern individual mechanisms: (i) “Mechanisms in a state exist. A system is a set of mechanisms”; (ii) “Elementary mechanisms can be combined into higher order ones”; (iii) “A mechanism can contribute to consciousness only if it specifies ‘differences that make a difference’ within a system”; (iv) “A mechanism can contribute to consciousness only if it specifies a cause–effect repertoire (information) that is irreducible to independent components”; (v) “A mechanism can contribute to consciousness at most one cause–effect repertoire, the one having the maximum value of integration/irreducibility”; (vi) “A set of elements can be conscious only if its mechanisms specify a set of ‘differences that make a difference’ to the set – i.e. a conceptual structure”; (vii) “A set of elements can be conscious only if its mechanisms specify a conceptual structure that is irreducible to non-interdependent components (strong integration)”; (viii) “Of all overlapping sets of elements, only one set can be conscious – the one whose mechanisms specify a conceptual structure that is maximally irreducible (MICS) to independent components” (Oizumi et al., 2014, p. 3). These postulates concern the physical realization of consciousness, i.e., they must deliver constraints on the model of organization and properties of individual neural mechanisms and systems of mechanisms if they are to generate conscious experiences. Finally, IIT delivers a mathematical framework to describe the properties of the mechanisms of consciousness and to measure the degree of consciousness.
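To give a concrete flavor of the kind of measure IIT’s mathematical framework provides, consider the following toy sketch. It is emphatically not IIT’s actual algorithm (the real measure Φ is computed over cause–effect repertoires and a search across all possible partitions of a system); it merely illustrates, for a hypothetical two-node network of my own construction, the guiding idea that integration can be quantified as the informational difference between the intact system and the same system with its internal connections cut.

```python
from itertools import product
from math import log2

# Toy two-node Boolean network: each node copies the OTHER node's state.
def step(state):
    a, b = state
    return (b, a)

STATES = list(product([0, 1], repeat=2))  # (0,0), (0,1), (1,0), (1,1)

def whole_repertoire(state):
    """Distribution over next states for the intact system (deterministic)."""
    nxt = step(state)
    return {s: 1.0 if s == nxt else 0.0 for s in STATES}

def cut_repertoire(state):
    """Distribution over next states after cutting the A<->B connections:
    each node's input is replaced by unconstrained noise, so each node's
    next state is a fair coin flip, independent of the current state."""
    return {s: 0.25 for s in STATES}

def kl(p, q):
    """Relative entropy (Kullback-Leibler divergence) in bits."""
    return sum(p[s] * log2(p[s] / q[s]) for s in STATES if p[s] > 0)

for s in STATES:
    print(s, f"{kl(whole_repertoire(s), cut_repertoire(s)):.1f} bits")
```

In this toy case every state yields 2 bits: the intact system determines its next state exactly, while the partitioned system leaves it maximally uncertain, so all of the network’s predictive power depends on the connections between its parts.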
This raises the following question: what concept of mechanism is used in IIT? Are the mechanisms referred to in IIT mechanisms in the sense in which the term is used in philosophy of science? If so, then IIT can in fact be regarded as a mechanistic approach, and would therefore qualify as a candidate for an integrated approach, because it also purports to take the phenomenological level into account. In a mechanistic framework, a mechanism is understood as “a structure performing a function in virtue of its component parts, component operations and their organization” (Bechtel, 2008, p. 13). In IIT, the definition is as follows: “Mechanism: Any subsystem of a system, including the system itself, that has a causal role within the system” (Oizumi et al., 2014, p. 4). This characterization seems very broad since it includes nothing about parts, activities, and relations between them. More details about mechanisms are in the postulates of IIT. According to those postulates, mechanisms of consciousness have a hierarchical structure; many mechanisms create a system, they perform operations and functions, and the behavior (experience) of a system is irreducible to its component parts. Tononi’s mechanisms are, generally speaking, hierarchical informational mechanisms whose main function is to generate and integrate information, and in this way generate conscious experience.
I agree with the top-down approach and with including the phenomenological level in creating a theoretical framework for mechanisms of consciousness, but the proposed axiomatic method seems problematic. Various critical arguments against IIT’s axiomatic method have already been presented in several papers (e.g., Bayne, 2018; Pokropski, 2018), and I am not going to repeat them. Instead, I would like to propose a new argument which does not concern any specific axiom proposed by IIT, but concerns the nature of the axiomatic method in general. I believe that the axiomatic method IIT proposes rests on a common myth about the role of axioms in science: it is often thought that mathematical axiomatic theories are derived from a set of self-evident axioms. However, the truth is that axioms are not self-evident and do not precede a theory: they are a method of systematizing scientific knowledge (Kitcher, 1984, pp. 218–221). Consider, for example, Euclidean geometry, whose main theorems were already known before it was formulated by Euclid of Alexandria around 300 BCE. Euclid formulated five axioms, not all of which were considered self-evident; the fifth axiom, the so-called parallel postulate, was considered especially problematic and far from obvious, and its rejection resulted in the discovery of non-Euclidean geometries in the 19th century. The contemporary axiomatization of Euclidean geometry, introduced by David Hilbert in 1899, consists of not 5 but 20 axioms. As Kitcher (1984) argues, axioms should not be understood as self-evident basic principles of a theory, but as a method of systematization of the domain of an already existing theory. Axiomatization systematizes the domain and unifies the theory, showing that theorems are derivable from a certain group of basic principles. These axiomatic principles are not self-evident; rather, they justify themselves by doing the work of systematization.
IIT does not introduce a phenomenological theory that would be systematized by its proposed axioms. The lack of a phenomenological theory also raises questions concerning the choice of axioms. One could formulate another set of alleged phenomenological truths, and there are plausible candidates, including, for instance, subjectivity (every conscious experience is subjective, i.e., it is an experience of a subject), perspective (every experience is an experience from a certain perspective), unity (consciousness is primarily a unitary stream of experiences), intentionality (consciousness is about something), and temporality (consciousness is a temporally bound process). IIT does not give plausible arguments as to why its set of proposed axioms covers all of the essential properties of consciousness, because it does not propose a phenomenological theory in support of those axioms to begin with.
To conclude, the notion of phenomenology used in IIT is very broad, and it is used as synonymous with experience or consciousness. IIT’s phenomenological axioms are thought to express the “essential properties” of this dimension and to deliver a basis for mechanistic postulates. However, the axiomatic method proposed in IIT seems implausible: it misconstrues the role of axioms in scientific theories. Furthermore, the proposed set of phenomenological axioms seems arbitrary and is not self-evident. If one wants to search for the axioms of consciousness, one should first introduce a plausible phenomenological theory of consciousness with empirical consequences, and then, perhaps, introduce a set of axioms which will systematize and unify the theory. It is likely, however, that the axiomatic method is not the best approach to integrating the science of consciousness either way.
Phenomenologically informed research
If not phenomenological axioms, then what approach to phenomenology is a better candidate for integrating the level of first-person experience with searching for mechanisms of consciousness? If we want to avoid the mistake of IIT’s axiomatic method, i.e., introducing essential properties of consciousness without theoretical support, then we need to search for a plausible phenomenological theory suitable for integration with empirical findings.
Front-loaded phenomenology is one way of incorporating phenomenological theory into naturalistic cognitive sciences. It requires neither training subjects nor acknowledging a phenomenological theory; it simply proposes the use of phenomenological concepts and distinctions from phenomenological literature in the early stages of research, namely in experimental design. “Phenomenology comes into the picture by contributing to the experimental design, by providing clear phenomenological distinctions, which also inform part of the analytic framework for interpreting the results” (Gallagher & Brøsted Sørensen, 2006, p. 126). As an example of such an experiment, proponents of this approach use research on the sense of agency (Chaminade & Decety, 2002; Farrer & Frith, 2002) and the sense of ownership (the so-called alien hand experiment first done by Nielsen, 1963).
Front-loaded phenomenology favors experimental practice over methodological considerations. If phenomenological concepts and distinctions taken from the works of Edmund Husserl, Maurice Merleau-Ponty, and other phenomenologists seem to contribute to our understanding of experience, then let us use them in an experimental design and see what the results are. However, it is not clear what role phenomenological insights play in the whole scientific process. One possibility is that phenomenological terminology, next to scientific or folk psychological concepts, contributes to the body of knowledge which underlies an experimental design. This is certainly plausible, but it does not give phenomenology a strong position. First, this possibility does not move us any further forward in the project of naturalizing phenomenology because background knowledge is something to control rather than to naturalize. Also, it seems unclear why we should choose the phenomenological tradition represented by Gallagher instead of any other approach to first-person phenomena or any other method of description and analysis of subjective experience, including folk psychology.
Another possibility is that phenomenological insights could play the role of heuristics or testable hypotheses. The former, again, does not give priority to phenomenology because formulating heuristics is a process that is more informal than strict and methodologically driven. Heuristics can be formulated on the grounds of any scientific theory, and it is not clear why the phenomenological approach should be preferable. Testable hypotheses are more promising, especially because Gallagher claims that experiments can “test and verify the phenomenological description” (Gallagher & Brøsted Sørensen, 2006, p. 131) and that

it is quite possible that experimenting with phenomenology will lead to a productive mutual enlightenment, where progress in the cognitive sciences will motivate a more finely detailed phenomenological description developed under the regime of phenomenological reduction, and a more detailed phenomenology will contribute to defining an empirical research program. (pp. 131–132)
This sounds promising, but the empirical testability of phenomenological hypotheses is controversial, as Husserlian phenomenology was conceived as an a priori transcendental science. Some contemporary Husserlian scholars argue that phenomenological transcendentalism is a sufficient argument to reject the possibility of the naturalization of phenomenology (e.g., Moran, 2013). In response, one can argue for a softer version of phenomenology which would distance itself from Husserlian transcendentalism. However, even if such a “weak” version of phenomenology is possible, arguing for the testability of phenomenological insights through experiments requires recognition of the relation between phenomenological claims and experimental results.
There is, however, yet another problem with Gallagher’s proposal of front-loaded phenomenology. In the examples of the phenomenologically informed experimental designs which he considers, phenomenology seems to provide a description of the explanandum phenomenon rather than testable hypotheses. In both the sense of agency experiment and the alien hand experiment, phenomenological concepts serve to identify and describe the target phenomena, whose explanations are searched for on the neurobiological level. If that is the case, then phenomenological insights are not testable hypotheses and play a different role in the research process, i.e., they contribute to the description of the target phenomenon.
In the context of integration with a mechanistic framework, front-loaded phenomenology seems too simple and methodologically undeveloped to provide constraints on the space of possible mechanisms. To change this, phenomenological insights would have to be supplemented by thinking in terms of cognitive systems, and ultimately translated into a more mechanistic framework. This can be achieved by reading phenomenological analyses as functional analyses. The idea itself is not alien to phenomenology. According to McIntyre (1986), Husserlian phenomenology shares similarities with computational functionalism, such as mental representations (noemata in Husserlian terminology) and ontological neutrality: “like the functionalists and the computationalists, then, Husserl seeks abstract accounts that would capture what is common to various mental capacities, no matter how different in their natural make-up the entities having these capacities may be” (p. 104). Furthermore, as I mentioned above, Husserl claims in the first book of Ideas that

the greatest problems of all are functional problems, or those of the “constitution of consciousness-objectivities.” . . . The point of view of function is the central one for phenomenology. . . . In place of analysis, of comparison, of description and classification restricted to single particular mental processes, consideration arises of single particularities from the “teleological” point of view of their function, of making possible a “synthetical unity.” (1982, pp. 207–208, para. 86)
The task of phenomenology, as Husserl sees it, is not the description and analysis of particular experiential phenomena but, on the contrary, a sort of functional analysis, which captures the general structure of experience including activities involved in the production of the object of experience.
A historical example of what I would call a phenomenological decomposition, analogous to functional decomposition, is the Husserlian analysis of the perception of temporal objects, for example the perception of a melody (Husserl, 1991). Briefly speaking, Husserl distinguishes several subfunctions which constitute the experience of a temporal object. These subfunctions are retention (retaining in consciousness the parts of the object which are no longer present), protention (anticipating the parts of the object which are not yet present), and primal impression (the sensual core of the experience in the present moment). Furthermore, he proposes dividing the retentional function into two subfunctions: so-called longitudinal retention and transverse retention. The former is directed towards the “how” of the stream of consciousness, namely, the succeeding phases synthesized into a non-objectified continuity. The latter is directed towards what appears in the flow, i.e., a temporal object. These and other examples of phenomenological analyses suggest that phenomenology delivers a decomposition of a cognitive phenomenon into the subfunctions which constitute the experience.
Phenomenological functional analyses seem to share some similarity with the best-known version of functional analysis, developed by Cummins (1975). The explanandum in both cases is a cognitive capacity (in the example above, the capacity to perceive a temporal object, say, a melody). The target function or capacity is decomposed into a set of interrelated abstract subfunctions or sub-capacities. The decomposition can then be repeated for the subfunctions. As a result of this explanatory strategy, the hypothetical functional organization of a cognitive system can be represented in both approaches. Also, in both cases, the functional description abstracts from implementational and physical details. Finally, both approaches stress the explanatory autonomy of such analyses and the irreducibility of psychological states to biology.
That said, I agree that there are also differences. Husserlian considerations are meant to be transcendental (concerning the conditions of experience of every subject), whereas Cummins’ analyses operate at the psychological level. Cummins uses dispositional and causal language to define target capacities and sub-capacities, which, furthermore, requires characterizing the normal conditions under which a specific disposition occurs. Husserl, on the contrary, describes relations between mental phenomena in the weaker terms of motivation rather than causation; for example, some perceptual experiences motivate some beliefs. Despite these differences, when the two approaches are viewed as explanatory strategies (in both cases, methods of decomposing mental capacities), it seems that both phenomenological and functional decomposition can deliver at least heuristics, if not stronger constraints, on possible mechanisms.
In fact, Piccinini and Craver (2011) have argued that functional analyses in general, e.g., Cummins-style analyses, which deliver decompositions of cognitive tasks into subtasks, can be read as incomplete, elliptical mechanistic explanations. Accordingly, functional analyses can deliver a sketch of the possible mechanisms responsible for the target function. Such a sketch, for example, would consist of a system’s functional modules that realize subtasks and are related in such a way that the realization of the subtasks amounts to the realization of the target function. At the same time, the sketch would ignore the structural details of the physical mechanism and its parts, on the expectation that there is a physical structure responsible for each module. Once the omitted details were added, the explanation and the scheme of the mechanism would be complete. Piccinini and Craver go even further and claim that, because they lack physical-realization details, psychological/functional explanations are not complete and therefore are not a distinct and autonomous kind of explanation; they need to be supplemented by mechanistic explanation. However, according to Roth and Cummins (2017, p. 40), one does not have to accept this strong claim about the nature of psychological/functional explanation: one can accept its autonomy (functional explanations are autonomous because they do not have to refer to implementation details in order to describe the target system’s design and predict its behavior) while still arguing that functional analyses can deliver constraints, e.g., constraints on hypotheses about mechanisms.
Setting aside the question of autonomy, my claim is that phenomenological analyses can be considered analogous to functional analyses and can similarly contribute to the model of the hypothetical mechanism responsible for the target phenomenon. Both the phenomenological and the functional approach share an interest in identifying the cognitive functions involved in the production of mental phenomena, e.g., the experience of an object of perception. The outcome of such analysis, namely the hypothetical functional design, is just a “how-possibly” model and should be supplemented by implementation details. However, it is likely that different implementations of the same functional design are possible.
To sum up, although front-loaded phenomenology is an interesting idea for applying phenomenology in experimental design, it does not explain what role phenomenological insights play in research and the explanatory process and how they are related to experimental results. Furthermore, front-loaded phenomenology seems methodologically too weak to constrain the space of possible mechanisms unless it delivers analyses analogous to functional analyses. My proposal is thus to reconsider phenomenological analyses as a strategy of decomposition of cognitive phenomena into a set of interrelated cognitive functions in an analogous manner to functional decomposition. Such a modification makes clear how phenomenology can contribute to both the sketch of a mechanism and to experimental designs.
Phenomenologically trained subjects and dynamic models
Another proposal for the naturalization of phenomenology is neurophenomenology (Lutz & Thompson, 2003; Varela, 1996), which aims to correlate a disciplined first-person description of experience with neuroimaging data (EEG). In the paper “Neurophenomenology: A Methodological Remedy for the Hard Problem,” Varela (1996) introduces the working hypothesis of neurophenomenology, which states that “phenomenological accounts of the structure of experience and their counterparts in cognitive science relate to each other through reciprocal constraints” (p. 343). Furthermore, as he argues, “disciplined first-person accounts should be an integral element of the validation of a neurobiological proposal, and not merely coincidental or heuristic information” (p. 344). Neurophenomenology is, therefore, a non-reductionist approach to the naturalization of phenomenology that sees phenomenology as having an important role to play as part of empirical cognitive science—namely, delivering reliable first-person descriptions correlated with neural dynamics and validation of empirical theories of consciousness.
Neurophenomenology recognizes the problem of introspective reports as an unreliable source of information about one’s inner life. Our beliefs about what we experience are often shaped by background knowledge and folk psychology. The reliability of introspective reports has also been called into question by cognitive research on phenomena such as illusions, self-deception, and confabulation (e.g., Dennett, 1991). It is true that first-person access to consciousness differs from third-person access, but that does not mean it is infallible; introspective reports therefore need to be elaborated and validated. One option is to develop more sophisticated methods of phenomenological interviewing or to train subjects to become more aware of, and cautious about, their experiences and their language of description (Froese, Gould, & Seth, 2011).
In neurophenomenology, descriptions are produced by experimental participants trained in a phenomenological method, which is a simplified version of the Husserlian phenomenological reduction. The vocabulary used to describe the studied experiences is elaborated together with the experimenter, who guides the participant. In short, participants are trained (a) to suspend their commonsensical and theoretical beliefs about their experiences and mental states in general and (b) to imaginatively vary their own experience in order to apprehend and describe invariants, which are used to identify the structural properties of experiences. The objective of the phenomenological method used in neurophenomenology is precisely to describe this invariant structure of experiences. According to Varela, this kind of phenomenological structural description can provide constraints on empirical observations (1996, p. 343). It is not entirely clear what kind of constraints Varela has in mind. It seems clear that phenomenological description can inform empirical research; e.g., searching for neural correlates of experience often requires a first-person report. As I will argue below, neurophenomenology can deliver a more informative input, namely a dynamical model of the studied experience.
It is not easy to see how neurophenomenology works in practice since only a few experiments have been conducted within this paradigm. An example is Lutz’s study of degrees of perceptual readiness, i.e., the subjective feeling of being prepared to perceive an emerging stimulus. In this study, participants used categories such as steady readiness, fragmented readiness, and unreadiness (Lutz, 2002). Descriptions of the participants’ lived experience were then correlated with EEG recordings of their neural activity. Proponents of neurophenomenology argue that their approach differs from the study of neural correlates of consciousness (NCC). What they are interested in is not activations of specific brain regions allegedly responsible for a specific conscious experience, but the synchronization of distributed neuronal assemblies hypothetically responsible for the whole studied experiential process. What is correlated, then, is first-person descriptions of experience with the dynamics of distributed brain processes. The correlation task in neurophenomenology is, however, not an easy one. Not only are ex-post correlations of experiential categories with EEG recordings problematic, but the low spatial resolution of EEG also makes it impossible to correlate experience with functionally distinct neural processes.
An improved version of neurophenomenology was recently proposed by Petitmengin (Petitmengin & Lachaux, 2013). What she proposes is to study the micro-dynamics of experience at both the first-personal and the neural level. According to Petitmengin, we can learn to investigate the cognitive micro-processes involved in experience, which otherwise usually go unnoticed in everyday activities. To do so, one has to focus on particular occurrences of a cognitive process and shift attention from “what” is experienced to “how” it is experienced. In this way, the very process of the emergence of experience can be grasped and analyzed. This process can also be supported by the interviewer in the so-called “elicitation interview,” which was elaborated by Petitmengin (2006) and has been used successfully in the treatment of epileptic patients. The first-person descriptions are then correlated with recordings from intracranial EEG (iEEG), whose spatio-temporal resolution is high enough to capture the neural dynamics of singular experiences.
It seems that the best input neurophenomenology can give to a mechanistic framework is a description of the dynamics of neural activity correlated with the dynamics of experience. According to Lutz, the best way to describe these dynamics is dynamical systems theory (DST), whose “central idea is that the dynamical trajectories of global moments are shaping the ‘dynamical landscape’ of the system into a specific geometry (named phase space)” (2002, p. 155). The result of a neurophenomenological study can be a formal dynamic model of the studied phenomenon, capturing its stability phases as well as its trajectories of change. In this sense, neurophenomenology can be understood as a type of dynamic explanation, which treats explanation as a formal (mathematical) dynamic model of the target phenomenon (for a general discussion of dynamic explanations see, e.g., Chemero, 2000; van Gelder, 1995; for a discussion of DST and phenomenology see, e.g., Yoshimi, 2017).
However, at first glance the neurophenomenological approach, understood as a type of dynamic explanation, seems contrary to the mechanistic approach (e.g., Thompson, 2007). Such opinions are based on a misunderstanding of the mechanistic model of explanation. As I mentioned above, the main explanatory strategies of the mechanistic approach are the decomposition of a system’s behavior into subfunctions and the localization of these subfunctions in the parts of the system (Bechtel & Richardson, 2010, pp. 23–24). The dynamic approach, on the contrary, abstracts from the composition of systems and focuses on their evolution over time. Proponents of the dynamic approach criticize the strategy of localization as inadequate in the case of human cognitive systems, which, as they argue, are not decomposable, so that we cannot localize functions in specific regions of the brain (e.g., Lamb & Chemero, 2014). The notion of localization they employ, however, is that of simple or direct localization. As Bechtel and Richardson (2010) argue, direct localization is not the aim of explanation but a simple heuristic in the preliminary stage of research. In the course of research, direct localization may have to be abandoned and replaced by the notion of distributed or complex localization. Distributed localization is, however, still localization.
Decomposition is more problematic. Introducing decomposition as an explanatory strategy, mechanists (e.g., Bechtel & Richardson, 2010) usually follow Simon’s (1962) and Wimsatt’s (1986) division into several levels of decomposability: a system can be decomposable, nearly decomposable, minimally decomposable, and non-decomposable. Decomposable systems are simple aggregative systems in which the sum of a part’s activities equals the activity of the whole. Decomposable systems have a clear hierarchical organization, and the activity of the parts can be explained independently of the whole. Also, the organization of the parts is irrelevant in principle. Nearly decomposable systems are more complex and difficult to decompose since the components of such systems interact with each other, and the higher level of the organization determines lower levels to some degree. Thus, in nearly decomposable systems we have not only bottom-up but also top-down relations. Minimally decomposable systems are systems in which components are not only organized and interact with each other, but in which top-down relations are more relevant for the whole than bottom-up relations. Finally, there are non-decomposable systems in which top-down determinations make decomposition into parts impossible. Such systems have no hierarchy whatsoever.
Now the question is whether or not cognitive systems can be studied as decomposable systems. While mechanists hypothesize that cognitive systems are nearly decomposable systems, dynamists argue that living cognitive systems are either minimally decomposable or non-decomposable. For example, Thompson argues that the brain is a dynamic network of processes with strong top-down determinations and is thus minimally decomposable if not non-decomposable (2007, pp. 417–447). However, he also admits that sometimes it is useful for explanatory purposes to characterize the brain as nearly decomposable (pp. 422–423). Interestingly, Bechtel and Richardson (2010) also argue that decomposability is a heuristic strategy which may fail. It is clear, therefore, that decomposability should be understood as an epistemic category, not an ontological one. Both decomposition and localization are fallible heuristic strategies, but it does not follow that the mechanistic model of explanation is incorrect; instead, it needs to be further elaborated and integrated with other models of explanation.
Despite these differences, some of which seem to be based on misunderstandings, it has recently been argued that dynamic and mechanistic explanations are not exclusive but complementary (Kaplan & Bechtel, 2011; Zednik, 2011). An example of a successful explanatory process that began with dynamic modeling and was then successively supplemented with mechanistic details is the Hodgkin–Huxley model of the action potential. The original dynamic model consists of a set of equations describing the dynamics of neuronal behavior, which informed the subsequent discovery of mechanistic components such as ion channels (Kaplan & Bechtel, 2011). Furthermore, Bechtel and Richardson (2010) argue that the mechanistic model of explanation is not limited to phenomena produced by decomposable or nearly decomposable systems, but can also give an account of non-linear dynamics in minimally decomposable systems. This makes it possible to explain emergent phenomena mechanistically (pp. xliv–xlvii).
To sum up, neurophenomenology can be considered a candidate for integration with a mechanistic framework if it is a type of dynamic explanation and delivers a dynamic model of the target phenomenon. A dynamic model can be informative for the mechanistic approach and deliver constraints on the dynamic characteristics of possible mechanisms. Furthermore, dynamic modeling is an efficient way not only to describe complex systems distributed across brain, body, and environment, but also to capture continuous inter-level relations in minimally decomposable systems. If neurophenomenology develops in this direction and delivers such dynamic models, then it is plausible that it will provide constraints on the space of possible mechanisms and guide the search for the mechanisms responsible for conscious experience.
Conclusion
In this paper, I have discussed the mechanistic model of explanation and its possible integration with a phenomenological approach. I have argued that incorporating phenomenology into a multilevel mechanistic explanation of consciousness can improve descriptions of subjective experiences and deliver constraints on the space of possible mechanisms. Three proposals for the integration of phenomenology with a mechanistic framework of explanation were considered: Integrated Information Theory, front-loaded phenomenology, and neurophenomenology. All three positions acknowledge the importance of both the phenomenological and the empirical level in explaining consciousness. First, I discussed Integrated Information Theory and its axiomatic method, which introduces a set of phenomenological self-evident truths and then derives postulates concerning the physical mechanisms responsible for conscious experience. I argued that using the axiomatic method for integrating phenomenology with a mechanistic explanation of consciousness is highly questionable because it misunderstands the role of axioms in scientific theories: axioms are not self-evident truths; rather, their function is to systematize an already existing theory. I then discussed two non-reductionist proposals for the naturalization of phenomenology: front-loaded phenomenology and neurophenomenology. Both proposals have the ambition of introducing phenomenological analyses of conscious experience into experimental research. However, it seems that theoretical integration with a mechanistic framework requires these proposals to be modified so that they are more informative and able to deliver constraints on the space of possible mechanisms. Phenomenological insights and analyses, as I argued, can be informative for the mechanistic approach if we understand them as analogous to functional analyses, since these can be applied in sketches of mechanisms.
The integration of neurophenomenology depends on the quality of its outcomes and proposed dynamic models of experience. Dynamic modeling can be conceived of as complementary to mechanistic approaches.
The debate on the integration of phenomenology with cognitive science is still open. A mechanistic framework of explanation seems well developed enough to integrate other fields of study, including the study of subjective experience. It is also promising that more and more researchers in the cognitive sciences see possible connections between various explanatory strategies and the benefits we can enjoy from an integrated approach.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Research on this topic was supported by Grant No. 2017/27/B/HS1/00735 financed by the National Science Centre, Poland.
