Abstract
Cognitive neuroscience seeks to pinpoint the neural basis of cognitive function. Application of scientific methods can be credited for its advancement within the field of psychology. Past approaches such as phrenology, that linked bumps on the skull to mental capabilities, initially gained popularity, but the lack of experimental testing contributed to its demise. Research in neuropsychology and the use of the double dissociation experimental technique subsequently emerged. Objective measurements of behaviour following selective damage within the brain led to a paradigm shift. More recently, application of the subtraction technique, coupled with the emergence of cognitive neuroimaging tools, has allowed psychologists to isolate and measure specific functions such as language, vision, memory, and recognition of emotion. Importantly, these approaches enable reliable prediction of behaviours, given parameters of brain integrity, a key goal within the field of psychology.
An emerging branch, growing in dominance within psychology, is cognitive neuroscience. It seeks to discover and pinpoint the neural basis of various cognitive capabilities and functioning. Researchers in this field seek to understand how mental operations are generated, and how these are affected by individual differences and environmental contexts. Establishing the causal link between loss of neural tissue and specific cognitive functions such as language, decision-making, and memory is a central focus. The systematic application of methods and principles from the natural sciences can be credited for the field’s rapid growth and its major contributions to psychology. Cognitive neuroscience methodologies, such as the double dissociation technique in patients, coupled with structural and functional brain imaging and noninvasive brain stimulation, have allowed researchers to capture in vivo patterns of neural activity associated with complex mental operations. Importantly, these techniques have provided evidence for causal brain–behaviour relationships, which have been used to support interventions aimed at altering, treating, or ameliorating brain function in healthy populations and those with pathologies or disorders. Such work suggests that our understanding of the biological basis of psychological function has benefitted tremendously from the application of methods from the natural sciences.
For example, studies of patients with cognitive deficits due to brain lesions constitute an important aspect of cognitive neuroscience. When head injury occurs, the damage can cause the neural circuits in the brain to change, leading to a detectable malfunction in basic cognitive processes such as language processing, memory, and learning. Based on knowledge gained about the underlying brain basis of cognitive functions, researchers can now better understand those with learning disabilities or problematic behaviours. Evaluation of the degree to which changes in brain tissue integrity contribute to deficits in performance, compared to individuals with healthy neural circuits, has become possible. Following this, scientists have been able to draw conclusions about the brain basis of various cognitive processes and specify whether damage or malfunction in area X would produce a deficit in performance of task Y. To advance the field of psychology, researchers have created models of function that are falsifiable, which enable prediction of behaviour in different contexts. In a nutshell, the methods and principles of natural science have been used to measurably advance the field of psychology. This is not to say that other approaches do not have their merits. For example, an emerging approach called neurophenomenology (Varela, 1996) makes use of subjective reports, asking humans for introspective descriptions of their experiences. Most recently, such an approach has allowed for exploration of how insight emerges (Horowitz et al., 2023), by examining dream reports. Subjective reports have indeed illuminated possible functions of our dreams (Kahan & Claudatos, 2016; Wamsley, 2013), particularly in helping to consolidate memories (Wamsley, 2014). Certainly such descriptions of lived experiences can provide a window into the mind and how it functions.
Nonetheless, so far, neuroscience research has made its greatest contributions to the study of cognition by illuminating mechanisms (providing a “how”) that underlie behavioural observations made by earlier psychologists (Diamond & Amso, 2008).
Early beginnings: Origins and issues
Although the goal of cognitive neuroscience is to describe the neural mechanisms associated with the mind, historically it progressed by investigating how a certain area of the brain supports a given mental faculty. One of the predecessors to cognitive neuroscience was phrenology, an approach pioneered in Europe which claimed that mental capabilities could be inferred from the shape of the skull. Examining persons with particularly pronounced personal characteristics (e.g., wit or cautiousness) and correlating these with the bulges and depressions on their skulls, phrenologists believed that they could identify the areas of the brain controlling certain functions (Gall & Spurzheim, 1810). They believed that the human brain was localized into approximately 35 different sections, and claimed that a larger bump over one of these areas meant that that area of the brain was used more frequently by that person. Phrenology had two key assumptions: first, that different regions of the brain perform different functions and are associated with different behaviours; second, that the size of these regions produces distortions of the skull and correlates with individual differences in cognition and personality (Ward, 2019).
While phrenology was a popular activity at fairs and carnivals, it did not enjoy wide acceptance within the scientific community. The major criticism of phrenology was that researchers were not able to test its theories empirically, and were influenced by faith more than scientific principles (Knight, 2007; McGrew & McGrew, 1985; Parssinen, 1974; van Wyhe, 2004). That is, there was no experimental research to accompany the claims, and no evaluations were conducted to verify whether the “bumps” were accurately identified and related to their purported functions (van Wyhe, 2004). Claims were taken on faith from existing practitioners and confirmative observations; contradictory findings were ignored. As van Wyhe (2004) proclaimed, phrenologists were not out to find the truth—they already had it! Such an approach runs counter to the methods and principles of natural science, which involve formulating hypotheses based on observations and then testing them. A scientific theory is an explanation of an aspect of the natural world that can be repeatedly tested and corroborated using accepted protocols of observation, measurement, and evaluation of results (Gower, 2012). Phrenology lacked application of such methods. As the Nobel prize-winning physicist Richard Feynman described, science involves bending over backwards to prove ourselves wrong (Feynman et al., 1985). Such a view echoes Skinner’s (1965) characterization: “Science is first of all a set of attitudes. It is a disposition to deal with the facts rather than with what someone has said about them” and a “willingness to accept facts even when they are opposed to wishes” (p. 12). The phrenology movement failed to apply such experimentation, and has since been rejected.
A similar problem was faced by the American Karl Lashley in his well-known article “In Search of the Engram” (Lashley, 1950). Lashley conducted research on rats. He surgically damaged or removed specific areas of a rat’s cortex, either before or after the animal was trained to navigate efficiently through mazes. He reported that only very large cortical lesions affected acquisition and retention of knowledge; the location of the removed cortex had no differential effect on the rats’ performance. This led Lashley to conclude that memories (of how to perform trained tasks) are not localized in any particular region of the neocortex, and that no one region is critical to the formation and retrieval of learned information. However, his work failed to lesion subcortical regions such as the hippocampus, known today as the seat of memory. To explain his failure to find precise localization of cognitive function, Lashley (1931) erroneously appealed to the “principle of mass action,” asserting that the entire cortex participates in every behaviour, and “equipotentiality,” asserting that intact areas of the cortex could simply take over specific cognitive functions following brain injury. He did so instead of undertaking more rigorous and systematic lesioning and assessing the remaining function of other regions (as did Smith, 1959), including subcortical areas now known to be critical to memory formation. Being so tied to a cortical basis for representation, Lashley failed to uncover one of the most reliable and remarkable findings about human cognitive functioning (Soyland, 1994).
More fundamentally, Lashley failed to fully consider or see merit in alternative, or opposing, viewpoints (Nadel & Maurer, 2020). He was entrenched in a “field theory” (equipotentiality) framework (see Pearce, 2009 for a review), and appealed to his personal observations to discount competing localization accounts. For example, he cited cases in which motor functions were preserved in patients with spinal cord injury, and noted that specialized cognitive functions (e.g., playing the piano) emerged without needing a one-to-one correspondence within the brain (see Nadel & Maurer, 2020 for examples). Other views at that time offered alternative accounts of how the brain accomplished behaviours. In particular, Hebb (1949) proposed that synapses in the brain can be altered by experience (the basis of current views on brain plasticity). Each time a certain action or thought is repeated, the connection between the co-activated neurons is strengthened (the Hebbian loop). Lashley (1931) had rejected the idea that synaptic change in a focused region could underlie learning, appealing instead to his principles of mass action and equipotentiality. He held to those beliefs despite Hebb’s continued experimentation, data, and repeated measurements. Admittedly, psychology at that time did not focus on biological processes in the brain, nor consider their role in psychological capacities. In contrast to Lashley, however, Hebb believed that being a psychologist required also being a physiologist; he saw this as critical to inform our understanding of how perceptions, memories, emotions, and actions might operate (Nadel & Maurer, 2020). Incorporating ideas, analogies, and methodological approaches from other fields within the natural sciences allowed for the emergence of modern cognitive neuroscience, which is ultimately an interdisciplinary approach to psychology.
Birth of neuropsychology
The Equipotentiality and Mass Action principles put forward by Lashley (1931), claiming that all areas of the brain participated in all behaviour, were rejected as a result of brain mapping studies. These began with Hitzig and Fritsch’s experiments (Fritsch & Hitzig, 1960, 2009) and Hughlings Jackson’s studies of patients with brain damage, particularly those with epilepsy. Hughlings Jackson (1873), a neurologist from England, described epileptic convulsions, known today as Jacksonian epilepsy, which progress through the body in a series of spasms. He traced them to damage in the motor region of the cerebral cortex (Hughlings Jackson, 1876). He observed that patients with epilepsy often made the same muscle movements during their seizures, leading him to believe that the seizures must be caused by activity in the same place in the brain every time. He proposed that specific functions were localized to specific areas of the brain (Hughlings Jackson, 1876). His careful studies of patients with epilepsy initiated the development of modern methods of clinical localization of brain lesions and the investigation of localized brain functions (York & Steinberg, 2007).
Importantly, Hughlings Jackson’s hypotheses were supported by Fritsch and Hitzig’s experimental evidence of a motor cortex in dogs. Hitzig, a German doctor working at a military hospital in the 1860s, conducted experiments on patients who had suffered skull fractures in battle. He systematically stimulated different portions of their exposed brains with wires connected to a battery. By doing so, he discovered that weak electric shocks, when applied to areas at the back of the brain, caused the patients’ eyes to move. Later, Hitzig and Fritsch together designed and published experiments in dogs that followed principles from the natural sciences. They found not only that they could cause movements of the dogs’ bodies, but that different muscles contracted depending on which areas of the brain were or were not electrically stimulated (Fritsch & Hitzig, 1960, 2009). These experiments led to the proposition that individual functions are localized to specific areas of the brain rather than being supported by the cerebrum as a whole, as Lashley would later claim (Gross, 2007). The model of organization that Hughlings Jackson had proposed, in which one part of the cortex is devoted to one part of the body, was now backed by experimental evidence.
These early findings have since been developed into what is known as a somatotopic arrangement: a point-for-point correspondence between an area of the body and a specific point in the central nervous system. For example, the somatosensory cortex receives sensory information from the hands in one specific location and information from the feet in another. Such unique brain–behaviour relations have been verified by a series of experiments from different researchers in different labs (Brown-Sequard, 1968; Dreyer et al., 1975; Foerster, 1936; Penfield & Boldrey, 1937), forming the basis of modern cognitive neuroscience knowledge. In the 1860s, physicians had no systematic procedure for diagnosing diseases of the nervous system; they had no conceptual basis by which to organize their thinking about how the nervous system works. By 1910, thanks to the pioneering experiments of Fritsch and Hitzig (1960) and Hughlings Jackson (1873, 1876), which used methods from the natural sciences, neuropsychologists were able to diagnose neurological disorders using maps of brain–behaviour relations.
Patient studies
The field of neuropsychology grew from these earlier scientific studies, and with it came the development of new measurement techniques to further refine our knowledge of brain–behaviour associations. Perhaps the first serious attempts after these studies to localize mental functions to specific locations in the brain were made by Broca and Wernicke. These were mostly achieved by studying the effects of injuries to different parts of the brain on psychological functions. In 1861, the French neurologist Paul Broca came across a man who was able to understand language but unable to speak. The man could only produce the sound “tan.” It was later discovered that the man had damage to an area of his left frontal lobe now known as Broca’s (1861a, 1861b) area. Around the same time, Wernicke (1874), a German neurologist, found a patient who could speak fluently but whose speech was semantically nonsensical. The patient had had a stroke and could not understand spoken or written language. This patient had damage in the area where the left parietal and temporal lobes meet, now known as Wernicke’s area. These cases, which suggested that lesions caused specific behavioural changes, strongly supported a localizationist view of the brain (Bouillaud, 1865). By being open to experimentally testing localizationist accounts, Broca and Wernicke offered fundamental knowledge about the brain and language that is central to today’s understanding in psychology of how they work.
Canadian neurosurgeons Penfield and Rasmussen (1950) also challenged Lashley’s (1931) claims by testing an alternative hypothesis to the Principle of Mass Action and Equipotentiality. They obtained the first evidence that episodic memories may be localized in specific brain regions. As a presurgical procedure, Penfield applied small jolts of electricity to the brain to help him determine which neural area was the epileptogenic focus causing seizures in different patients with epilepsy. Remarkably, when he stimulated parts of the temporal cortex, some of his patients reported vivid recall of random past events. Several years later, the American neurosurgeon William Scoville and the Canadian neuropsychologist Brenda Milner (Scoville & Milner, 1957) provided a test of whether the hippocampal region is necessary for memory function. To treat the uncontrollable epileptic seizures of the now well-known patient Henry Molaison (H. M.), Scoville removed a large portion of the medial temporal lobes from both hemispheres, including the hippocampus and adjacent brain areas. As a consequence of this surgery, H. M. lost the ability to form new episodic memories (anterograde amnesia) as well as the ability to recall episodes and events from his recent past (temporally graded retrograde amnesia). Scoville and Milner went on to assess and categorize the types of cognitive functions that were intact and impaired. By systematically assessing various categories of memory function, they were able to demonstrate that the brain resection affected episodic memory but left motor and procedural memories, as well as language functions, largely intact. Their work showed that episodic memory formation, but not other cognitive functions, required the specific involvement of the medial temporal lobes, specifically the hippocampus.
Borrowing from methods in the natural sciences, by engaging in a systematic investigation of the various types of memory function affected by the surgery they performed, Scoville and Milner provided evidence contrary to Lashley’s (1931) claims. Their work led to a subsequent burgeoning of research that examined other cognitive task performance before and after brain damage, allowing a means of testing the specificity of brain–behaviour dissociations.
Application of the double dissociation technique
Broca’s and Wernicke’s aphasias remain today a classic example of the application of the double dissociation technique. In cognitive neuroscience, double dissociation is an experimental technique by which two areas of the neocortex can be functionally dissociated using two behavioural tests, each test being affected by a lesion in one zone and not the other. When the part of the brain called Broca’s area is damaged, patients may still understand language but be unable to speak fluently. They know what they want to say but are unable to express themselves. Conversely, when the part of the brain called Wernicke’s area is damaged, patients may still speak fluently but be unable to comprehend language. This results in properly constructed but nonsensical sentences. By establishing a double dissociation, psychologists were able to determine which mental processes are specialized to which areas of the brain.
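The crossed pattern of impairment that defines a double dissociation can be sketched in a few lines of code. The accuracy values below are hypothetical, chosen only to illustrate the logic; they are not clinical data:

```python
# Hypothetical accuracy scores for two lesion sites on two language tasks.
performance = {
    # (lesion site, task): proportion correct
    ("broca", "production"): 0.20,        # impaired speech production
    ("broca", "comprehension"): 0.90,     # spared comprehension
    ("wernicke", "production"): 0.85,     # spared (fluent but nonsensical) production
    ("wernicke", "comprehension"): 0.25,  # impaired comprehension
}

def is_impaired(site, task, cutoff=0.5):
    """Treat accuracy below an arbitrary cutoff as impairment (illustration only)."""
    return performance[(site, task)] < cutoff

# A double dissociation holds when each lesion impairs one task and
# spares the other, in opposite directions across the two sites.
double_dissociation = (
    is_impaired("broca", "production")
    and not is_impaired("broca", "comprehension")
    and is_impaired("wernicke", "comprehension")
    and not is_impaired("wernicke", "production")
)
print(double_dissociation)  # True
```

A single dissociation (one lesion impairing one task) could reflect mere task difficulty; it is the crossed pattern checked above that licenses the inference of separable functions.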
The field of experimental neuropsychology grew from this work. It now involves categorizing the qualitatively different behavioural and cognitive consequences of brain damage to various areas. Using carefully controlled experimentation, neuropsychologists can now test existing theories of brain–behaviour relationships or new hypotheses concerning the source of differences in perceptual, cognitive, linguistic, and behavioural changes. The term “neuropsychology” emerged in the 1930s and 1940s, and its popularization is often attributed to Hans-Lukas Teuber (Gross, 1994). Many early neuropsychological procedures were developed during wartime to assess the cognitive status of individuals and their suitability for special military service. Subsequently, penetrating missile wounds to the brain became the focus of localization studies. Accordingly, many tests (still in use today) were created with the goal of providing metrics to quantify and assess quality of function that are sensitive to the effects of a variety of focal brain insults (Gross, 1994). Creating standardized tools for the measurement of function is a requirement for applying scientific principles. The development of modern clinical neuropsychological assessments within the field of psychology has allowed for reliable identification of disorders, along with objective metrics to denote severity. It is this that has permitted reliable predictions of behaviours, given neural integrity.
Prior to the advent of modern neuroimaging procedures, neuropsychological tests emerged as front-line assessment procedures for the identification and localization of acquired cerebral damage and subsequent loss of cognitive function. While neuropsychological techniques continue to provide this aspect of neurodiagnostic assessment, more commonly, neuropsychological test results are used to describe and quantify behaviour. The results of these evaluations are now used to infer cerebral integrity versus dysfunction, to delineate cognitive strengths and weaknesses, and to assist in differential diagnosis. These are the hallmarks of psychology.
Application of the subtraction technique in cognitive and neuroimaging studies
Outside of neuropsychology, success in applying principles from natural science can be seen in studies of intact brains. Donders’ (1868/1969) work in the Netherlands in 1868 attempted to describe the processes going on in the mind by measuring cognitive activity in separate stages. Until Donders’ work, many scientists had assumed that the mental operations involved in responding to a stimulus occurred instantaneously. Donders was particularly interested in “timing the mind” and used a subtraction technique to time the different mental processes that the brain goes through when faced with different tasks. The subtraction technique is a scientific procedure for estimating the duration of a psychological process: it involves measuring the reaction time for a task that incorporates the target psychological process and then subtracting the reaction time for a task that does not incorporate it. To illustrate, Donders (1868/1969) measured response times in three different tasks: (a) a simple reaction time (RT) task—press a button when a light turns on; (b) a discrimination reaction time task—press a button only when the target light, out of four, turns on; and (c) a choice reaction time task—press the one button, of several, that corresponds to the light that turns on.
Based on response times, Donders (1868/1969) predicted the kinds of cognitive processes that might be involved in each task. That is, the simple RT task requires perception and motor function, with that RT representing the time it takes to receive a stimulus and then execute an action. RT on the discrimination task would be longer, as it involves the processes required for the simple task plus a discrimination stage. Finally, RT on the choice reaction time task would be longest, as it requires both of the earlier-mentioned processes plus a response-selection stage. Results fell in line with these predictions. Donders was able to calculate the amount of time required for each stage using the subtraction technique. For example, perception and motor processing equals the RT on the simple task; the time required to discriminate equals the RT on the discrimination task minus that of the simple task; and the time required to make selections/choices equals the RT on the choice task minus that of the discrimination task.
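The stage logic above reduces to simple arithmetic. The reaction times below are hypothetical values chosen for illustration, not Donders’ actual measurements:

```python
# Sketch of Donders' subtraction logic with hypothetical reaction times (ms).
simple_rt = 220.0          # perception + motor execution
discrimination_rt = 290.0  # simple-task stages + stimulus discrimination
choice_rt = 350.0          # discrimination-task stages + response selection

perception_and_motor = simple_rt                     # baseline stages taken whole
discrimination_time = discrimination_rt - simple_rt  # isolates the discrimination stage
selection_time = choice_rt - discrimination_rt       # isolates the response-selection stage

print(perception_and_motor, discrimination_time, selection_time)  # 220.0 70.0 60.0
```

The subtraction is valid only under the assumption (known as pure insertion) that adding a stage leaves the other stages unchanged, a point later refined by Sternberg’s additive-factors work.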
Later, in the 1960s, the application of this scientific procedure saw the proliferation of studies showing that human cognitive task performance can be described as a completion of a series of mental operations or subroutines (Sternberg, 1969a, 1969b). Thus began a means to break down tasks into their component parts using mental chronometry (subtraction technique) to measure operations. Since then, the subtraction technique has been adopted and used in functional magnetic resonance imaging (fMRI) to study brain activation associated with specific cognitive processes or tasks. It is a key method for identifying brain regions that become more or less active during a particular task or condition compared to a baseline or control condition. Today, applying subtraction logic in an fMRI study, psychologists typically gather data in a baseline or control condition and an experimental condition. The baseline condition serves as a reference point against which brain activity during the experimental condition is compared.
Here, I illustrate how this technique has been successfully used, in the field of psychology, to distinguish differences (and similarities) in where and how various types of stimuli are represented in the brain. During an fMRI scan, the fMRI machine measures changes in blood oxygenation levels in various regions of the brain. This measurement is used as a proxy for neural activity since active brain regions require more oxygenated blood. In the experimental condition, the participant performs a specific cognitive task such as solving a maths problem, viewing emotional images, remembering a list of studied words, or listening to music. The control condition is designed to be as similar as possible to the experimental condition except for the critical cognitive factor of interest. For example, in a study on emotional processing, the control condition might involve viewing neutral images instead of emotional ones. The subtraction method involves subtracting the brain activation (blood flow) patterns observed during the control condition from those observed during the experimental condition. This subtraction isolates brain activity that is specifically associated with the cognitive task or stimulus of interest. Thus, the subtraction method has been valuable in allowing researchers to pinpoint the neural substrates associated with specific cognitive functions or processes.
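The same subtraction logic can be sketched numerically. The voxel values below are invented for illustration; in practice, fMRI analyses rely on statistical modelling (e.g., general linear model contrasts) rather than raw differences:

```python
# Toy sketch of the fMRI subtraction method; each value is a hypothetical
# activation level for one voxel (arbitrary units).
baseline = [100.0, 101.0, 99.5, 100.2]      # control condition (e.g., neutral images)
experimental = [100.1, 106.0, 99.4, 100.3]  # experimental condition (e.g., emotional images)

# Subtracting the control map from the experimental map isolates activity
# attributable to the critical factor of interest.
difference = [e - b for e, b in zip(experimental, baseline)]

threshold = 2.0  # arbitrary cut-off for this illustration
active = [i for i, d in enumerate(difference) if d > threshold]
print(active)  # only voxel 1 survives the subtraction -> [1]
```

The design burden falls on the control condition: the subtraction isolates the process of interest only if the two conditions differ in nothing else.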
The approach allows for the systematic manipulation of components and observation of resultant behaviours—a scientific approach. For example, the technique allowed discovery of selective motion-sensitive areas in the brain using an experimental condition of moving dots versus a control condition of a simple visual stimulus (Zeki et al., 1993). If the experimental condition is changed to be one requiring processing of colour or form, then those unique corresponding brain areas are active. The subtraction method applied in fMRI has become a powerful tool for investigating brain–behaviour relationships, and has contributed significantly to our understanding of cognitive processes, perception, emotion, and various other aspects of neural functioning.
Modern cognitive neuroscience
Over time, and across many such studies, some generalizations have emerged about mental processes and the human brain regions that support them. From this, we now have models of many diverse human cognitive capabilities. In the realm of memory research, early studies used the subtraction technique with positron emission tomography to isolate brain regions contributing to the encoding and retrieval phases of memory. Encoding of episodic information (e.g., a list of to-be-remembered words) was associated with activation of the left prefrontal cortex and the retrosplenial area of the cingulate cortex, while retrieval from episodic memory was associated with activation of the precuneus bilaterally and of the right prefrontal cortex (Fletcher et al., 1995). In the realm of visual processing, neuroimaging evidence indicated that some cortical regions respond selectively during perception of certain categories of visual stimuli. That is, there is a selective response to faces in the fusiform face area, to scenes in the parahippocampal place area, and to bodies in the extrastriate body area (Downing et al., 2006).
Most recently, neuroimaging has contributed to answering even more refined questions. From the 1970s to the 2000s, there was debate regarding the nature of the mental representation used when humans engage in mental imagery (Pearson & Kosslyn, 2015). The main question revolved around which format the brain uses to represent this information: propositional (verbal) or depictive (pictorial). With the advent and proliferation of neuroimaging studies and experimental methods on the matter, results showed a blood oxygenation level-dependent response (or positron emission tomography signal) in the primary visual cortex when a person engaged in mental visual imagery. Given this, the debate has largely been put to rest: imagery-based representation involves the primary visual cortex (Pearson, 2019). What is more, while the majority of neuroimaging work on mental imagery has focused on the overlap or similarities between visual imagery and visual perception, researchers did not stop there. In line with the general principles from natural science, they probed further and used experimentation to examine differences in neural connectivity, revealing a distinct top-down cascade of activation during imagery but a bottom-up one during visual perception (Dijkstra et al., 2017). During imagery (relative to visual perception conditions), there was increased top-down signal flow running “backwards” from frontal cortical regions: signals started in the frontal cortex, proceeded to more posterior regions implicated in memory retrieval, and finally reached the parieto-occipital regions involved in sensory and spatial representations, where the image was formed. Visual perception, on the other hand, operated largely in reverse. Such knowledge could not have been gathered without systematic manipulation of factors, reliable methods of data collection, and the development of analysis tools common across the field.
These are the building blocks of scientific investigations, and upon which much (though not all) of psychology is based.
From this, subsequent work could build on past findings to determine whether they generalized to other types of materials, using common experimental methods and neuroimaging analysis tools. Neuroimaging studies extended earlier findings by systematically varying the to-be-encoded materials. In one early study, emotionally arousing aversive versus neutral film clips were shown to subjects during positron emission tomography. Activity in the right amygdala and orbitofrontal cortical areas at the time of viewing the emotional movie clips was correlated with enhanced memorability three weeks later (Cahill et al., 1996). This work highlighted the involvement of additional brain regions for emotional visual materials, and offered a testable mechanism of action to explain why emotional material is typically better remembered than neutral material. Current efforts are now finding that weakened connectivity between these brain regions is notable in older adults with mild cognitive impairment (Kazemi-Harikandei et al., 2022), and can serve as an early marker of Alzheimer’s disease. A recent major breakthrough in assessing the need for intact connectivity in various memory functions comes from the use of transcranial magnetic stimulation (TMS). TMS is a form of noninvasive brain stimulation that uses powerful time-varying magnetic fields to induce an electrical current through the scalp, affecting the function of the underlying brain tissue. When applied in humans during the performance of a cognitive task, magnetic pulses can depolarize or hyperpolarize large populations of neurons within discrete regions of the cerebral cortex. Researchers have used this approach to causally test whether, and to what extent, network-level brain connectivity influences memory (Hebscher & Voss, 2020). Modern cognitive neuroscience is now aimed at testing which networks of connected tissue underlie human cognitive functions.
As can be seen historically and in modern cognitive neuroscience studies, experimentation and application of principles from natural science are what have allowed psychologists to uncover consistencies in cerebral organization. Such knowledge offers biological mechanisms to explain a variety of behaviours. Importantly, the knowledge base accrued through careful experimentation allows psychologists to identify, with ever-growing accuracy, the deficiencies within neural systems that characterise various disorders, cognitive states, and neurodegenerative diseases. Cognitive neuroscience has grown in dominance within the field of psychology, likely because it offers a means to understand human cognition and create models of function that enable psychologists to make relatively accurate predictions about behaviours, given neural as well as environmental parameters. Much of that is owed to the development and application of scientific principles.
Footnotes
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by a Discovery grant (2020-03917) from the Natural Sciences and Engineering Research Council of Canada (NSERC) awarded to author MF.
