Abstract
This paper analyzes notions and models of optimized cognition emerging at the intersections of psychology, neuroscience, and computing. What I somewhat polemically call
Introduction
In their best-selling book
Books like
In the first part of the paper, I draw on analyses of corporate mindfulness and meditation apps to show how rest is reframed within them. This links mindfulness trainings and apps to research on wakeful rest in cognitive neuroscience, which I sketch in the second part of the paper. Thanks to investigations into the brain’s so-called default mode, rest has come to be understood not as idleness but as a period of covert, ongoing cognitive activity.
Google’s artificial intelligence division
Mindfulness, Incorporated
In two articles published in 2018, a group of researchers working in contemplative neuroscience voiced their concerns regarding the ongoing muddling of the ancient Buddhist practice of mindfulness through corporate programs and apps. The authors write that many of the practices that undergird mindfulness “arose in religious and spiritual contexts where the motivations and goals for what could and would be achieved through meditation differed greatly from secular Western notions of health, well-being, and flourishing” (Van Dam et al. 2018b, 68). Even if reliable measures of “flourishing” and “well-being” could be attained, it would remain unclear, from a psychological standpoint, whether the practices conceived as mindfulness trainings are well-suited to attaining such high-level goals.
Theirs is a rebuttal of an ongoing commodification of mindfulness practices that also finds support in their own fields. For instance, neuroscientist Alissa Mrazek and colleagues argue that apps provide “an unprecedented opportunity to deliver high-quality training to an increasingly internet-connected global audience” (Mrazek et al. 2019, 81). They emphasize the reach and seamlessness of apps in comparison to classic, place-based psychiatric therapy, an advantage that becomes ever more important in workplaces that are primarily or entirely digital and screen-based.
In general, the two camps do not disagree about the value of mindfulness practices, yet they are divided over the question of whether mindfulness should be reconceived to neatly fit into the packed schedules of North American and European executives and white-collar workers. “With the current use of umbrella terms,” the contemplative neuroscientists write, “a 5-minute meditation exercise from a popular phone application might be treated the same as a 3-month meditation retreat (both labeled as meditation) and a self-report questionnaire might be equated with the characteristics of someone who has spent decades practicing a particular type of meditation (both labeled as mindfulness)” (Van Dam et al. 2018a, 38). Contemplative neuroscientists are concerned that mindfulness might turn into a mere technological fix for the problems that creative economies and digital cultures have wrought.
In this context, mindfulness trainings and apps sit next to digital detox programs (Beattie and Cassidy 2020; Sutton 2020), conceived to counter the effects of affective bonds introduced through the patents, policies, and business models of digital platforms (Baym, Wagman, and Persaud 2020). Apps promise to alleviate the effects of “how our cognitive capacities are captured and modulated on these platforms at the level of affective flow” (Karppi 2018, 10-11). In contrast to digital detox programs, however, mindfulness trainings and apps are typically designed to allow for affective bonds to be sustained. The very same devices that first induce digital distractions are employed to support changes in behavior and promise “attention by design” (Jablonsky forthcoming).
Corporate mindfulness trainings pursue similar objectives and reconceive mindfulness as a behavioral technique. One training, for instance, compares the mind to a snow globe: We’re constantly shaking it with information overload, distractions and task switching. This results in reduced clarity of our priorities and a lack of focus. By practicing a brief meditation (as short as five minutes!)—we can let the “snow” settle and see things more clearly and vividly. Clarity of mind can help us prioritize what’s important, solve problems better, figure out new strategies or uncover issues we may have ignored. (Parcerisa 2019)
Janice Maturano, a former Vice President and Deputy General Counsel at the American consumer food manufacturer General Mills and founder of
Corporate mindfulness projects an industrious subject that is never really idle—a mode of subjectivity that is in fact firmly rooted in cognitive neuroscience research on the brain’s “resting state” (Callard and Margulies 2010). Stulberg and Magness, for instance, prominently reference the work of neuroscientist Marcus Raichle, who began studying the brain “at rest” in the 1990s and has since continued this line of research by investigating the brain’s “default mode” of operation. This change of perspective—from rest to default mode—is based on a “flipping of contrasts” in the psychology laboratory that occurred in the 1990s (Callard and Margulies 2011).
Rest, Reframed
In 2007, neuroscientists Alexa Morcom and Paul Fletcher (2007) published an article in the highly influential journal
I myself remember many off-the-record conversations in neuroscience labs from roughly ten years ago, where researchers mocked resting state research as mere experimental laziness. Most experiments I witnessed while doing fieldwork in cognitive neuroscience labs had been dominated by problems, instructions, or stimuli defined by the experimenter and executed by the volunteer. Any measurements of mental and cognitive activity—whether via electroencephalography, positron emission tomography, or functional magnetic resonance imaging (fMRI)—were typically conducted if and when the volunteer was occupied with an experimental task.
In fact, experimental psychology and neuroscience had since the late nineteenth century been characterized by “an uncanny proximity between subjective responses to a task delivered in the laboratory and one prescribed on the shop floor” (Morrison et al. 2019, 64). Nevertheless, asking volunteers to put their brains “at rest” in the fMRI scanner had always been an important element of brain imaging studies, for brain activity at rest was conceived as a “control condition” and thus “the flipside of a range of focused, controlled and externally oriented processes: an image in negative of the aware and externally attentive brain” (Alderson-Day and Callard 2016, 12). In the 1990s, neuroscientists developed a vested interest in distinguishing the components of resting state activity, and initially they did so simply by looking the other way: instead of subtracting the activities of the brain at rest from what happens when the volunteer’s brain is hard at work, they started to search for brain regions that show increased activity during periods of rest, and found a network of brain regions we now know as the “default network.”
In this process, what had been considered mere background noise that tends to obscure the cognitive activity of the brain became the target of analysis—“an organized, baseline default mode of brain function that is suspended during specific goal-directed behaviors” (Raichle et al. 2001, 676). Crucially, cognitive neuroscientists gave up on the idea that the default mode is bound to extended periods of rest and began to analyze mental processes that had previously been disregarded because they were considered unrelated to active, cognitive processing and hard to summon in the laboratory. They developed strategies that would keep volunteers from following the traditional, attentive routines of the psychology lab so that they stayed “off task.”
The goal of these experimental strategies gradually changed, from identifying brain regions that are active when we rest to creating conditions in which volunteers’ minds could stray. The underlying cognitive activity is now variously referred to as self-generated, task-independent, stimulus-independent, unconstrained, or spontaneous thought. The rising interest in these phenomena and the brain’s default mode was based on the idea that “conscious experience is relatively more dependent on the individual’s concerns, preoccupations and hopes (i.e., self-generated), rather than immediate perceptual input (i.e., perceptually generated)” (Callard et al. 2013, 1). These experimental shifts in brain imaging have since contributed to the idea that brains are, in fact, entirely unrestful and have allowed a default mode phenomenology to emerge.
Default Mode Phenomenology
In 2010, resting state forerunner Marcus Raichle published a paper on the brain’s “dark energy” (Raichle 2010). The metaphor latched onto the concept of dark energy in physics, which allows one to speak about phenomena that cannot be reliably measured or explained, although their effects can be observed. In physics, dark energy was introduced as an auxiliary hypothesis to explain why the expansion of our universe keeps accelerating; in neuroscience, the metaphor helped explain why the brain remains active when the demands of the environment abate.
The dark energy metaphor captured the growing interest in the contents of self-generated mental activity, which researchers had largely ignored, not least because experimenters technically need the help of volunteers to know when it occurs and to catch their minds wandering. In the process of mind wandering, memories are recalled to simulate the future on the basis of past experience, which is why we sometimes imagine lying on a pristine beach while staring into gray, postindustrial landscapes that whizz by the windows of a commuter train.
More than fantasy and (day) dreaming, mind wandering has been linked to the process of drifting away and interrupting whatever activity had been carried out before. In psychology, it had been conceptualized as task-unrelated since laboratory practices “have repeatedly assumed that experimental subjects must have some task to wander
Yet, the differentiation between highly valued attentiveness and pathological introversion has been complicated throughout the last two decades. The fact alone that humans spend up to 50 percent of their waking life mind wandering speaks against the cognitive insignificance or any inherently pathological character of mind wandering. Proponents of the “perceptual decoupling” hypothesis suggest that mind wandering is characteristic of a cognitive state in which we attend to normally sub- or nonconscious processes that occupy parts of our brain throughout the day, not only when we rest (Baird et al. 2014; Hove et al. 2016). That is, neuroscientists meanwhile believe that mind wandering might be indicative of a subconscious yet system-critical mode of information processing, which occupies our attention whenever we indulge in our thoughts but generally benefits our ability to stay focused and “on task” (Shepherd 2019).
Uncontrolled mind wandering is billed as the source of cognitive pathologies such as attention deficit hyperactivity disorder, autism, depressive rumination, schizophrenia, and obsessive thought (Tang, Hölzel, and Posner 2015). If mind wandering is voluntary, however, it is linked to creative thinking, imagining the future, social problem-solving, memory consolidation, and a general openness to new experiences (Beaty et al. 2018; Murphy et al. 2018). In this case, mind wandering amounts to a particularly vivid form of “off-line thought” (Smallwood and Schooler 2015) or “off-line perception” (Fazekas, Nanay, and Pearson 2021). In other words, a phenomenological ambiguity sits at the heart of the concept: mind wandering is considered a source of pathology if it cannot be controlled, but it can be beneficial and productive if it is “goal-directed” (Christoff et al. 2016).
Kieran Fox, one of the currently most prolific researchers in the field of contemplative neuroscience, explains this phenomenological ambiguity of mind wandering in an interview: I don’t think of mind-wandering as a conscious state—I think of these processes as more or less ongoing, below the level of awareness, competing with other inputs and signals in the brain for our attention. We can tune in and pay attention to them, or not, and sometimes the thoughts will be strong enough or emotionally salient enough to grab our attention even when we don’t want them to. I think of the stream of inner thought in a way similar to other perceptual channels; for instance, you are constantly receiving a stream of auditory information, even when you’re asleep, but your brain is very good at blocking out probably 99% of this information as totally irrelevant, and you never become conscious of it…. I suspect the brain is constantly generating thoughts, imagery, and so on at a “subthreshold” level as well, and noticing it is more a matter of this content catching our attention and becoming illuminated by our conscious awareness than of entering a particular conscious state where mind-wandering then starts or is allowed to take place. (Fox and Koroma 2018, 4)
At the same time, this reframing suggests putting the burden of managing attention on the individual. In the context of trainings and apps, mindfulness is reconceived as a behavioral technique that regenerates
Mindfulness was once considered an antidote to mind wandering; contemplative neuroscience now suggests that it can help “steer people away from the negative biases that we see in mental illness, and instead nudge them toward positive, constructive, and creative patterns of thinking” (Fox and Koroma 2018, 11). A remarkable passage in Stulberg and Magness’s (2017b) book reads: Our subconscious mind functions in an entirely different manner than our conscious mind. It breaks from the pattern of linear thinking and works much more randomly, pulling information from parts of our brain that are inaccessible when we’re consciously working on something. It is in these parts of the brain, in the vast forests bordering the narrow “if-then” highway that our conscious mind runs on, where our creative ideas lie…it’s only when we turn off the conscious mind, shifting into a state of rest, that insights from the subconscious mind surface.
Algorithmic Modulations of Attention
Adam, the restless Google engineer who rarely replies to texts, is only one of many examples presented in Stulberg and Magness’s (2017a) book; and yet he is a very memorable one, since Adam was at that time wholly immersed “in the brains and guts of a car…to teach an inanimate object moving at 70 miles per hour to differentiate between a stray plastic bag and a stray deer.” This is to say that Adam and Google’s prototype self-driving car essentially face the same problem: neither Adam nor self-driving cars have the opportunity to escape their informationally dense environments. They are called upon to sustain attention to task while being confronted with an endless stream of information that threatens to overwhelm their cognitive capacities.
Without necessarily taking account of ongoing exchanges between cognitive neuroscience and machine learning,
This parallelization of human and machine in cognitive science, computing, and public discourse goes back to mid-twentieth-century North American social science and the concept of information overload (Levine 2017). Nick Levine traces it to the work of American psychologist James Grier Miller and his article “Information Input Overload and Psychopathology,” which was published by the
Miller’s universalist understanding of information overload became characteristic of the complex systems theory of the 1960s and early 1970s, which was largely indifferent to the fundamental disparity of human and machine. For instance, social scientist and artificial intelligence forerunner Herbert Simon (1971) observed that information overload “creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it” (p. 41). Throughout the 1980s and 1990s, the concept leaked into management science and became the dominant element of an emerging public discourse on the dangers that accompany the data deluge as well as the proliferation of gadgets, screens, and user interfaces.
The idea of finite cognitive bandwidth is now firmly embedded in both the neurosciences and computing. Neuropsychologists frame the attendant problem as a “stability-plasticity dilemma” that haunts artificial and biological systems in similar ways (Mermillod, Bugaiska, and Bonin 2013). A pertinent example of a pathology that derives from the stability-plasticity dilemma is “catastrophic forgetting.” As a concept, catastrophic forgetting is native to the machine learning domain, but it compares to the traumatic memory loss that humans experience under conditions of shock. Catastrophic forgetting occurs when an artificial neural network is trained on different tasks in sequence and “forgets” one task in favor of another. Take, for instance, a network that is trained to play legacy ATARI games such as Space Invaders and Pac-Man. The network will start by trying out random strategies and successively “learn” to master Space Invaders by memorizing the strategies that lead to success in this very game. If the network is subsequently trained on Pac-Man, it might completely erase and overwrite its knowledge of Space Invaders.
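The mechanism can be made concrete with a deliberately minimal sketch: a single linear model trained by gradient descent on one regression “task” and then on a second, incompatible one. All names, numbers, and parameters here are invented for illustration; real game-playing networks are vastly more complex, but the arithmetic of overwriting is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two incompatible regression "tasks" for the same two-parameter linear model.
w_task_a = np.array([1.0, -2.0])   # ground truth for task A
w_task_b = np.array([-3.0, 0.5])   # ground truth for task B

X_a = rng.normal(size=(200, 2)); y_a = X_a @ w_task_a
X_b = rng.normal(size=(200, 2)); y_b = X_b @ w_task_b

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, lr=0.05, epochs=300):
    # Plain full-batch gradient descent on the mean squared error.
    for _ in range(epochs):
        w = w - lr * (2 / len(y)) * X.T @ (X @ w - y)
    return w

w = train(np.zeros(2), X_a, y_a)   # learn task A
err_a_before = mse(w, X_a, y_a)    # near zero: task A has been mastered
w = train(w, X_b, y_b)             # then train on task B only
err_a_after = mse(w, X_a, y_a)     # task A has been "forgotten"

print(f"task A error before: {err_a_before:.6f}, after: {err_a_after:.3f}")
```

Because both tasks compete for the same two weights, training on task B simply overwrites the solution to task A; this is the whole problem in miniature.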
The problem of catastrophic forgetting has been approached as an issue of algorithmic attention. In contrast to humans, artificial neural networks are not good at differentiating between useful and superfluous knowledge, which is why their attentional capacities need to be carefully managed. Some researchers suggest forcing the network into paying “hard attention to task” (Serra et al. 2018), while others promote machinic variants of synaptic memory consolidation (Kirkpatrick et al. 2017). What these seemingly different strategies have in common is that they seek to protect knowledge from being accidentally erased. Attention is figured “as a means of guarding against undesirable synaptic changes” (Lindsay 2020, 16).
Yet, humans remember and forget primarily when they rest, which is why many machine learning researchers draw on insights from resting state and default mode neuroscience. Wakeful rest and sleep are considered to play a double role in the learning process: when we do not need to pay attention to our environment, our brains supposedly “take out the garbage” and purposefully forget in order to make space for new knowledge. At the same time, we replay experiences from memory to solve problems creatively and to store what is important in long-term memory (Langille 2019; Lewis, Knoblich, and Poe 2018).
Current designs for artificial neural networks involve mechanisms that reproduce this behavior in very coarse ways—yet, without factoring actual rest into the equation. Researchers discuss mind wandering as a principle of resilient and creative information processing in machine learning (van Vugt 2018) or suggest encoding artificial rest and sleep into their systems (González et al. 2020). In artificial neural networks, rest turns into a mere algorithmic mechanism—after all, artificial neural networks never actually rest.
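A common way of operationalizing such replay, again sketched on a toy problem with invented parameters, is to keep a small buffer of old experiences and interleave rehearsal steps on that buffer with training on the new task. “Rest” becomes, quite literally, scheduled rehearsal.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy continual-learning setup: linear model, two conflicting tasks.
X_a = rng.normal(size=(200, 2)); y_a = X_a @ np.array([1.0, -2.0])
X_b = rng.normal(size=(200, 2)); y_b = X_b @ np.array([-3.0, 0.5])

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# A small "memory buffer" of task-A experiences, kept for later rehearsal.
buffer_X, buffer_y = X_a[:20], y_a[:20]

def train_b(w, replay, lr=0.02, epochs=1000):
    for _ in range(epochs):
        # An "awake" step on the current task B ...
        grad = (2 / len(y_b)) * X_b.T @ (X_b @ w - y_b)
        if replay:
            # ... interleaved with an "offline" rehearsal step on buffered
            # task-A experiences.
            grad += (2 / len(buffer_y)) * buffer_X.T @ (buffer_X @ w - buffer_y)
        w = w - lr * grad
    return w

w_start, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)  # learn task A first
w_no_replay = train_b(w_start.copy(), replay=False)
w_replay = train_b(w_start.copy(), replay=True)

print(mse(w_no_replay, X_a, y_a), mse(w_replay, X_a, y_a))
```

As the text notes, nothing here actually rests: the “offline” phase is just another gradient step, which is precisely why rest turns into a mere algorithmic mechanism in such systems.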
As one researcher explains: Sleeplike states in neural networks are very different from the mode your PC enters after some set period of inactivity. A conventional computer that has gone to “sleep” is effectively in suspended animation, with all computational activity frozen in time. And the age-old advice from the IT department to try “turning your computer off and then on again” when a PC gets glitchy is tantamount to exposing your machine to a brief period of brain death. That kind of sleep mode would do nothing to settle an unstable neural network. And power cycling would simply reset the network and undo any prior training, effectively giving the network a severe case of amnesia. In neural networks as well as living creatures, a sleeplike state is not inactivity, but a different kind of activity that is crucial to the proper functioning of neurons. (Kenyon 2020)
It is this reframing of rest that fueled the hype around brief and intermittent mindfulness exercises as a source of psychological resilience and creativity. Corporate mindfulness trainings subsume the Buddhist practice of mindfulness meditation to the logics of psychological resilience and thus reflect “a disturbing utilitarianism—a partial adoption of asceticism that is actually the antithesis of productivity’s insatiable appetite for self-enhancement” (Gregg 2018, 122). Recent experiments in machine learning and artificial intelligence contribute to this reframing of rest as a cognitive technology. Against this backdrop, I would like to conclude by focusing on the question of what it would take to escape these logics of necessity: can machine learning systems be employed to experiment with alternative cognitive subjectivities?
Conclusion
The current prominence of mindfulness trainings and apps suggests that North Americans and Europeans increasingly think about mindfulness, and about their lives more generally, in algorithmic terms. Technologies play an important part in this process: as Ruckenstein and Schüll (2017) observe, apps, trackers, and new, device-based pedagogies “bring machinic agency to bear on human ways of defining, categorizing, and knowing life” (p. 269). At the same time, machine learning researchers experiment with implementing, in artificial neural networks, coarse principles of cognition in the human brain, and thus prepare the ground for selling these networks as generative models of how humans perceive and learn. Whereas the algorithmic modeling of cognitive processes is conceived to augment artificial intelligence, neural networks and neuromorphic devices supposedly further our understanding of cognition in the brain.
In other words, both contemporary neuroscience and neuroscience-inspired machine learning research appear to close in on algorithmic understandings of cognition in humans and machines. The related—by now rather speculative—transpositions are not always and inherently problematic. Yet, in their current form, they invite us to think about cognition primarily as a problem of preventing pathology. This tendency is exacerbated in neuroscience-inspired, artificial neural networks. They provide working models of cognitive labor under conditions of overload and lend themselves well to experiments with technological fixes for the effects of working at or over capacity.
Yet, Adam’s issues with work–life balance and the difficulties that (Google’s) driverless cars have with differentiating between plastic bags and stray deer appear comparable only within an information processing framework that presupposes the inevitability of overload. While these techniques and technologies may help alleviate the effects of working at or over capacity, they simultaneously burden the worker, and the worker alone, with managing overload—and thus reframe rest as yet another form of labor. Current exchanges between cognitive neuroscience and cognitive computing suggest that “there is no idle time, either for human or non-human actors,” as media historian Markus Krajewski (2018) writes in his book
If we want to think beyond this reframing of rest as cognitive labor, we need to situate and historicize the underlying epistemology. The current interest in determining the algorithms of mindfulness is rooted in a reorientation toward nonconscious cognitive processes in North American and European cognitive neuroscience since the 1990s. It gradually drew attention to patterns of infrastructural activity in the brain and thus aligned our understanding of biological cognition with contemporary paradigms of information processing, exemplified in cloud computing (Bruder 2019).
Thinking about humans as information processors, however, implies neither that we gear our entire lives toward managing an overload that is imposed on us nor that the search for algorithms of mindfulness needs to turn into an endeavor of relentless optimization. The knowledge that contemporary neuroscience produces, and that machine learning research selectively perpetuates, lends itself well to unsettling old dichotomies, such as those between attention and distraction or between task and rest. Rather than resorting to this knowledge only to overcome pathologies that derive from our social and informational environments, it might aid in exposing the frameworks that naturalize overload and define the inability, or unwillingness, to succumb to it as pathological.
If, as I suggested earlier, the reframing of mind wandering as subconscious information processing is a de-pathologizing gesture, this gesture may also be understood as losing interest in perpetuating the notion of psychopathology more generally. Could machine learning systems support this process? In
I believe that studying and experimenting with these systems can contribute to thinking beyond currently paradigmatic epistemologies of machine learning, and toward diversifying or queering related notions in North American and European neuro-psychologies. The flipping of contrasts that created an opening for de-pathologizing mind wandering might be a good model for algorithmically unsettling the idea of pathological cognition. This process demands continuous and recurring engagement with the technicalities of algorithmic systems and the knowledge practices they implement—“continual, careful, collective, and always partial reinscriptions of a cultural-technical situation in which we all find ourselves” (Philip, Irani, and Dourish 2012, 5).
Author’s Note
Part of the research for this article was conducted during a fellowship granted by the Institute for Advanced Studies “Media Cultures of Computer Simulation,” Leuphana University Lüneburg.
Acknowledgments
I am extremely grateful to Rebecca Jablonsky, Nick Seaver, and Tero Karppi for their feedback on earlier versions of this article and would like to thank the anonymous reviewers as well as the participants of the panel “Attention,” held at the 4S Annual Meeting in New Orleans, for their very valuable comments and suggestions.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The research that undergirds this paper has been partly funded through the SNSF Sinergia Grant “Governing through Design” (grant no. 189933).
