Abstract
It is now a common belief that the truths of our lives are hidden in the databases streamed from our interactions in smart environments. Amid the current hype of big data, the Internet of Things (IoT) has been suggested as the idea of embedding small sensors and actuators everywhere to unfold the truths beneath the surfaces of everything. However, as the IoT remains a technology that promises more than it has so far delivered, what has mattered for its actual expansion into various social domains, more than any actual discovery of hidden truths, is people's speculation about unknown problems, such as hidden security issues or lifestyle concerns, that lie beyond narrow human knowability but are assumed to leave their traces in IoT-collected big data. This paper discusses this speculation as the concealed cognitive labor of IoT users, which projects fictitious values onto the big data that IoT companies accumulate. Under the term pan-kinetics, the systemic operation of smart actuators is analyzed as the process through which these fictitious values of data are converted into real values as the actuators draw profitable correlations from the IoT's physical domains. Analyzing smart electroencephalogram (EEG) headsets as unique IoT devices operating on human brains, the paper shows how the IoT translates this speculative realism of unknown problems into its big data, which IoT developers believe to be full of machine-learnable correlations that would lead to smart solutions to those problems.
The current expansion of the Internet of Things into various cultural and geographical domains radically redefines its operational environments. A person's physiological body, domestic space, and urban neighborhood, for instance, can now be tracked by different sets of sensors and actuators embedded in her smartwatch, smart home, and smart city, all under the clouds of the IoT. Security problems and lifestyle issues are potentially everywhere and subject to constant optimization through the intensive sensor-actuator interoperations of smart objects, which constitute the artificial nervous systems of the IoT (Hayles, 2016; Thrift and French, 2002). However, as the IoT remains a technology that promises more than it can currently deliver, what facilitates its expansion, more than its actual preemption of problems, is a certain form of speculation typical of today's media industries: that there remain unknown computational problems whose hidden correlations with other observable events could eventually be discovered through the interoperations of ubiquitous smart objects (Falconer and Mitchell, 2012; Gonzalez-Wertz et al., 2018; Kulesa and Dirks, 2009). As the digital double of this physical reality of hidden correlations, IoT-collected big data has, in turn, attracted venture capital to tech companies seeking to data-mine profitable correlations, the key resources for their IoT services to preempt and optimize more of users' security and lifestyle concerns (Srinivasan et al., 2014).
Scholars such as Srnicek (2017: 101) point out that this speculative trend of media industries, which he diagnoses as a new tech bubble, "involves constantly pressing against the limits of what is socially and legally acceptable in terms of data collection" since "the suppression of privacy is at the heart of this business model." And the suppression users must accept regarding their privacy amounts to more than resignation to the 24/7 monitoring of their behaviors by ubiquitous sensors; it also includes, as Zuboff (2019) puts it, their tolerance of the secret operations of ubiquitous actuators that "nudge, coax, tune, and herd behavior toward profitable outcomes." In other words, to bring forward the time when collected big data turns a profit and to postpone the bubble's imminent burst, it is not enough to mine profitable correlations from behavioral data; they must also be proactively cultivated through the actuators' subliminal interventions. What has not been discussed enough in these diagnoses of the current speculative structure of media industries, and what this paper examines, is the role of people's cognitive labor of speculation behind this resignation and tolerance toward ubiquitous sensors and actuators. I define speculation as what IoT users perform within its wireless sensors and actuators as they recognize that there always remains something problematic in their daily lives to which they fail to pay enough attention. Users' constant speculations about these hidden realities beyond knowability then function, indirectly through their subsequent manual labor of clicking "yes" to all terms of use for their personal data, to maintain the networks of ubiquitous sensors and actuators.
This paper poses its questions as follows. First, how does this cognitive labor contribute to the IoT's transformation of our bodies, homes, and neighborhoods into things with unknown problems? Here, I discuss the post-9/11 metaphor of a needle in the haystack as a speculative means of thinking about the way the IoT has redefined its operational environments as full of human-inaccessible symptoms of imminent dangers, whose timely preemption requires its smart sensors to organize a pan-optic surveillance network. Second, how does the IoT translate this speculative realism of unknown problems into abundant correlations embedded in its big data? Here, I focus on a less-discussed aspect of the IoT that I term its pan-kinetics, describing how smart actuators' constant and proactive interventions in environments cultivate as-yet-unknown but machine-learnable patterns among sensor data. In the last section, the answers to these questions are tested as a theoretical framework for analyzing current smart electroencephalogram (EEG) headsets, EMOTIV and Muse, unique IoT devices that exemplify the current speculative trend of media with their constant streaming and data-mining of human minds.
Speculative economy of the IoT
When Kevin Ashton suggested the term Internet of Things in 1999, it was primarily about a change in the data entry method for the internet, from entry performed entirely by the human labor of "typing, pressing a record button, taking a digital picture or scanning a bar code" to the environmental sensing of autonomous objects (2009). His definition emphasized the benefits users could have if computers "knew everything there was to know about things" so that they were "able to track and count everything, and greatly reduce waste, loss and cost". But left relatively aside in his subjunctive description was the question of the human rationale for this transition to the nonhuman practice of data collection, a question we may answer by relocating "everything", which he used simply for anything that causes unnecessary "waste, loss, and cost", to the actual need to trace it all. Speculation about technologies that extend human knowability to virtually everything has not been new since the Enlightenment, especially after its "archive fever" (Hong, 2020), its pursuit to enlighten everything, was coupled with another impetus of social change, currently termed the optimization of societies, their capitalist productions, democracies, and securities. In the recent resurgence of this Enlightenment fever through digital technologies, however, the most decisive factor in drawing political traction to the IoT's "track and count everything" approach was the zero-tolerance policy of national security in the wake of the September 11 attacks. Suggesting that "even a single terrorist attack is an unacceptable failure of prediction," zero tolerance signifies the preemptive logic of counterterrorism in the US and has justified the state agencies' practice of collecting big data about people's everyday lives, taken as the "whole haystack" under "the assumption that there must be a needle" (Hong, 2020: 20, 60).
As McClanahan (2009) points out, this data-driven counterterrorism means probability is no longer the sole criterion for predicting possible terrorist attacks, since preparing for the most probable catastrophe scenarios is not enough on its own. What appeared urgent after 9/11 was instead the proactive search for the missing links between distant events that could make even the least probable scenarios sound plausible.
In this respect, the "haystack" metaphor, which the former NSA director Keith Alexander (2005–2014) used for the big data collected about people's everyday lives, reveals how these raw data have, in fact, already been mediated by a certain political speculation about the physical reality in which data collection occurs. Zero tolerance, for him, meant a shift in the metaphor's focus: from how impossible it is to find a needle to how imperative it is to continue this almost impossible search when the needle in question could risk many lives. And, for this effort to be repeatable even without meaningful results, the metaphor also supplied an imagery in which everything could potentially lead, to varying degrees, to the prediction of the problem through the hidden equation of missing links between a hay string and the needle. Exemplified in the case of a Bolivian woman subjected to terrorist profiling due to her recent "interest in the purchase of pressure cookers", among many similar cases (Gabbatt, 2013), the banality of hay suggests that even the equation with the weakest links is worth speculating on to prevent its catastrophic consequence. The excessive vigilance toward lurking dangers since 9/11 had formed a condition in which "paranoia is the new normal", as Kirn (2015) later wrote in The Atlantic about the Snowden leaks of 2013. It is noteworthy that paranoia for Kirn was primarily about people's affective repulsion toward the secret surveillance networks everywhere (Hong, 2020: 38). But this normalization of paranoia as a form of resistance to, or detachment from, state-driven surveillance meanwhile obscured another of its forms, one the haystack imagery internalized as people's mode of cognition in a time of ubiquitous dangers, perpetuating their paranoid attachment to surveillance.
In the context of post-9/11 homeland security, paranoia has in fact functioned as what Sedgwick (2003) calls "a strong theory," preparing people for any plausible scenario of danger beyond their knowability, such as the hidden terrorist networks of the "ticking bomb scenario" (Hannah, 2006), by making them "capable of accounting for a wide spectrum of phenomena which appear to be very remote, one from the other" (Sedgwick, 2003: 133–134). This preemptive logic of paranoia was, however, operable only at the huge expense of subjects' ceaseless cognitive labor of generating untestable suspicions about any plausible correlations among distant events. Alexander's haystack, on the other hand, suggested a rationale for justifying paranoid citizens' resignation to state surveillance as the only realistic way to reassure themselves (Dencik and Cable, 2017). Implying a manifold of equations that each hay string draws to the hidden needle far beyond human knowability, his imagery presents ubiquitous surveillance technologies and the machine learning of collected data as the only alternative to which the paranoid can delegate their helpless efforts to preempt all imaginable links between tiny anomalies in their neighborhoods and hidden terrorist attacks. As Hong argues, "the massive expansion of data collection and storage continued" in this context as what he calls Technologies of Speculation, "under the idea that if everything could be tracked about everybody, the hidden correlations to the most unpredictable threats could be disclosed" (2020: 53).
As an earlier application of the IoT's "track and count everything" approach, the zero-tolerance policy thus suggests how its opportunity cost, namely the disclosure of personal data, is compensated not so much by actual preemptions of terrorist attacks, which have not been that effective (Logan, 2017), as by the resignation of people who believe the needle is real and that only secret networks of smart surveillance could greatly reduce the waste, loss, and cost of their paranoid concerns. While this "state of exception precipitated by the terrorist attacks of 9/11 produced surveillance exceptionalism," the war on terror meant, for software companies like "Google and other rising surveillance capitalists" to whom data collection and analysis were outsourced in a way that guaranteed their own "paranoid style" of "self-management regimes that imposed few limits on corporate practices," the opportunity for "further enabling the new market to root and flourish" (Zuboff, 2019). And, as such nicknames for the current commercial IoT as the Internet of Everything (Evans, 2012) and Everywear (Greenfield, 2006) exemplify, surveillance is no longer an exceptional condition in the flourishing of this new data business. A mild form of paranoia is now a normative state of media audiences, since inaccessibility to narrow human perception is a common feature of the problems that software companies commercialize and assume to be literally everywhere, from one's body, home, car, and office to the whole city. In many of these domains, problems are not quite life-threatening; a smart home, for instance, serves our informed decision-making to optimize our domestic lives rather than preventing something catastrophic (Ahn, 2021).
But, even in these everyday conditions, the inefficiencies to be optimized, not to mention the hidden symptoms of health problems under smart wearables, now involve people's nonconscious behaviors that can be tracked, counted, and analyzed only through ubiquitous sensors embedded in their daily routines. Paranoia, which Kirn defined as a defensive reaction to surveillance, is therefore also at work in these commercial domains. Cheney-Lippold writes that "in most instances of algorithmic identification, we are seen, assessed, and labeled through algorithmic eyes" and, "affected by the thought, we know that something else is going on, but we're not exactly sure what it may be" (2017: 24). This reactive form of paranoia would then be convertible into a proactive gesture of preparing ourselves more properly for unknown problems if we accept that they really are issues more significant than our being a little paranoid. In this respect, everything as the emerging domain of the commercial IoT has something in common with hidden terrorist networks in that both have appeared, under the post-9/11 mentality and the recent trend of commodifying efficiency/optimality, to be what Morton (2013) calls hyperobjects: objects felt immediate to human lives but always withdrawing 1 + n dimensions away from human knowability. While climate is for Morton the most distinguishable hyperobject of the current time, the haystack, now a metaphor for any domain of massive dataveillance, signifies hyperobjects anywhere as the type of problem that requires the IoT's "track and count everything" approach. The more people attempt to grasp the needles embedded in this anywhere, the more they feel stuck in bottomless speculations about hidden correlations between every tiny detail of their surroundings and some unknown problems, unless a certain actor smarter than humans eventually takes over this paranoid practice of self-surveillance.
Facing this sort of problem, as Morton (2013) argues about climate, to postpone taking immediate action or paying for suggested solutions by insisting that we still need more than mere speculation is not rational behavior; given the imminence of their harmful manifestations, it is even hypocrisy. Regarding other security and lifestyle issues, less catastrophic but still optimizable only through big data about people's collective behaviors, this responsibility for immediate action is rephrased as the reason why the decision "to opt out of a data commons" to protect one's privacy could cause a "free-rider problem" or betray "data philanthropy" (Espinoza and Aronczyk, 2021). The providers of commercial IoT services today seem to know that these speculations are exploitable as a means to compensate for the IoT's opportunity cost of personal data disclosure, and thus the common narratives of their white papers usually begin with diagnoses like these: "Cities and communities around the world face intractable challenges" (Falconer and Mitchell, 2012: 2); "There are a lot of significant interrelated challenges … that are interrelated and need to be dealt with in a holistic way" (Kulesa and Dirks, 2009: 2). But "there's also good news … full of pervasive technologies such as sensors or networks … allow us to measure, to monitor, to manage and to optimize our use of finite resources … to understand the interrelationship between systems" (4). As Hu (2015: 124) puts it, "a paranoid worldview" is required to form the affective background for smart objects to infiltrate their commercial IoT domains, one "in which everything is hopelessly complex but, with the right (data) tools, can be made deceptively simple and explainable."
In this narrative setup, users' speculation is what the invisible narrator, or salesperson, of the white papers mobilizes to maintain the connections between two kinds of objects beyond human control: hyperobjects and smart objects. And the way the term speculation is used in current speculative realism may let us redefine it as a common practice of realist sense-making. As Graham Harman's object-oriented ontology (2005) articulates most clearly, speculation is an ontological concern with things that constantly withdraw from any access by human or nonhuman actors but leave some residues for these actors to communicate about the world beyond their reach. In a critique of recent realist philosophies, Galloway (2013) points out how this philosophical speculation about things' elusiveness, at once limiting an actor's access and promising her further communicability beyond the limit, coincides with the recent "software companies reliant on object-oriented infrastructures" (351). Regarding the complicity he alludes to between "the structure of ontological systems and the structure of the most highly evolved technologies of post-Fordist capitalism" (347), I argue that speculation as a purely ontological concern is these days also converted into a sort of economic concern of IoT users and software developers. What they feel constantly withdrawing from their understanding are, most of all, such hyperobjects as terrorism, climate, viruses, minds, and all other unknown dangers and needs (Ahn, 2019a), which are increasingly believed to be redirectable into the predictable range of the IoT's algorithmic preemption.
Noting the similarity between the work of entrepreneurs in the "creative industry" and that of artists in contemporary post-conceptual art, Vishmidt (2018) suggests "speculation as a mode of production," characterized by a common process to "gather all kinds of data and material and reproduce them" into a product subject to "infinite self-realisation without guarantees" (4, 18). The speculation I redefine here, on the other hand, is a form of cognitive labor common to audiences as prosumers, characterized by its ontological leap beyond human perceptibility; in this respect, it is more comparable to attention, another cognitive labor still dominant on the internet. As a form of "immaterial labor" reflective of the real subsumption of human cognition under capital (Lazzarato, 1996), attention is said to produce real values exchangeable in media industries, especially for advertisers, just as Beller (2006) analyzes its industrial, production-like process in the human mind. Provoked, conversely, by hyperobjective problems we always fail to pay proper attention to (Ahn, 2019b), what our speculation adds to the big data collected by smart sensors about our surroundings are as yet fictitious values. And to sustain our investment of personal data in the fictitious future where all elusive problems are already preempted, the IoT also needs to let its smart actuators constantly re-stimulate physical domains to draw profitable responses there, potentially correlatable with the problems under search. The section that follows defines this operation of IoT actuators as the IoT's pan-kinetics and shows how it converts the speculated values of sensor data into real values.
Pan-kinetics of the IoT
For Foucault (1995), the Panopticon was an apparatus for generating the hallucinatory omnipresence of surveilling eyes by hiding the absence of actual eyes in the central tower. What the inmates on the other side of the prison could then do with their denied access to the hidden sensor was simply display normative behaviors the whole time. However, the pan-optics that the ubiquitous IoT sensors seem to realize physically is not enough on its own to perform this 24/7 surveillance, because the problems they should preempt are hardly measurable any longer by the mere sum of the sensors. Unlike the illegitimacies of prisoner behaviors, identifiable in terms of their deviations from the normative responses to the hidden sensor-observer, security issues under the IoT tend to remain invisible until they prove recalcitrant to continuous and seamless communication among smart objects and applications. (For example, a person's excessive heartbeat would be urgent when a smartwatch detects it but cannot transmit it to the health app, fitness app, sleep app, and so forth.)
For IoT users, ubiquitous sensors may nonetheless still be what they rely on to relieve their paranoid concerns about lurking problems in daily activities. But their investment of personal data into sensor networks (through their clicks on all terms of use) would no longer be a rational decision if all the mere sum of sensors can construct is an oligopticon of "partial vantage points with limited view sheds" (Kitchin, 2014: 11), and if the problems are speculated to be embedded in this difficulty of translating between subsets of sensor data from different outlets. The current commercialization of the IoT strategically presents this elusiveness from limited viewsheds as common to any problem it commodifies, even one as trivial as a person's bad habits regarding energy efficiency, since symptoms distributed over her daily routines require holistic analyses of data streams from everywhere. Furthermore, given that current IoT services usually allow users to access many sensor devices and their viewsheds (through a control panel, for instance, a person can monitor the current state of a smart home or of her own body as measured separately by different sensors or webcams), their "deliberate exposal of the self," or resignation to ubiquitous dataveillance, has less to do with strategic responses to inaccessible sensors than the behavior of the Panopticon's prisoners toward the central tower once did (Koskela, 2006: 175–176). The placeless and faceless position of the observer in Foucault is now somewhat democratized as something users can enjoy over their own homes and bodies (Andrejevic, 2004; Cascio, 2005), and the reason they still share their personal data beyond domestic control panels does not pertain to the value of data's mere accumulation. The fictitious values their speculations add to the streams of sensor data concern, rather, the yet unknown correlations supposed to exist among different subsets of accumulated data.
Therefore, to convert these fictitious values into real values from which users can derive actual benefits, such as data-driven insights into the hidden truths or problems of their daily lives (indispensable for keeping them enrolled in the sensor networks), IoT developers need to offer their visions of machine learning that would "integrate and bind data streams together, work to move the various oligopticon systems into a single, panoptic vantage point" (Kitchin, 2014: 11). Regarding this realization of the fictitious values of big data, what users may feel more blocked from accessing is the control of smart actuators: the motors of security cameras whose secret motion tracking is black-boxed beneath dark surfaces, the actuators by which smart objects activate themselves or each other, or their direct interventions in space, distributing perceptible or imperceptible cues or nudges to arouse certain responses from environments.
In "the new democracy of devices where users are in control" (Cohn et al., 2015) of rearranging the "sensory organs" of a customizable "digital organism," such as a smart home (Chai, 2020), these actuators are often marginalized as the system's passive ends that "convert the data/energy [already processed in the earlier stage] to motion to control system" (Rayes and Salam, 2016: 82). However, for this behaviorist closure of sensors and actuators to organize certain sensorimotor activities that substitute for one's physical interactions in fulfilling her daily needs, simply matching already known patterns (which sensors detect) to well-defined actions (which actuators perform) is not enough. To transform even yet-unknown problems into something resolvable through these algorithmic sensorimotor arcs, actuators should be allowed to intervene proactively in physical domains, and users to unfold yet-unexamined responses as new optimization problems across various sensors (Ahn, 2016). In the AI research based on behavioral robotics in the 1980s–90s, wiring multiple arcs among sensors and actuators was an arduous engineering task for human researchers, whose goal was to simulate the evolutionary history of human intelligence through a machine's proper responses to, and proactive manipulations of, controlled lab environments (Brooks, 1991; Brooks and Stein, 1994). The artificial intelligence currently believed to reside in the IoT's wireless sensor and actuator networks, on the other hand, does not replicate this previous model of intelligence restricted to a set of hardwired sensors and actuators; instead, the task of cultivating as many sensory patterns as possible to correlate with ubiquitous smart actuators is handed over to the constant data-mining of IoT-collected big data.
As an ordinary term, correlation means the statistically significant co-occurrence of one event with another, usually involving seemingly unrelated variables. For IoT companies, on the other hand, correlations are what should be abundant between user behaviors and other environmental variables manipulable by their actuators, so that the fictitious values imposed on their big data can be gradually converted into the real values of future services predicting ever more various life events. Recently, thanks to pervasive sensors and actuators, the process of discovering these correlations is guided less by theory-driven causation and relies more on increased computing power to find equations indicative of possible co-occurrences among distant variables in sensor and actuator data (Anderson, 2008). In this respect, machine learning, as the means of discovering the optimal coefficients that complete these hidden equations, may represent the speculative structure of data-mining; it is also the moment when the Sensor→Actuator order of normative IoT applications, their behaviorist closure, is reversed.
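Purely as an illustration of the kind of correlation screening described here, and not as any vendor's actual pipeline, the operation can be sketched in a few lines of code. All device names, data, and thresholds below are hypothetical:

```python
# Illustrative sketch: screening an actuator-stimulus log against sensor
# streams for candidate correlations, i.e., statistically significant
# co-occurrences among seemingly unrelated variables.
import numpy as np

rng = np.random.default_rng(0)
n = 500  # time steps

# Hypothetical actuator log: 1 when a thermostat warms the room, else 0.
thermostat_on = rng.integers(0, 2, size=n).astype(float)

# Hypothetical sensor streams: one quietly responds to the stimulus,
# the other is unrelated noise.
motion_sensor = 0.8 * thermostat_on + rng.normal(0, 0.5, size=n)
door_sensor = rng.normal(0, 1, size=n)

def pearson(x, y):
    """Plain Pearson correlation coefficient between two streams."""
    x, y = x - x.mean(), y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

# Screen every actuator/sensor pair; keep pairs above a crude threshold
# as candidate "hidden equations" worth further machine learning.
streams = {"motion": motion_sensor, "door": door_sensor}
candidates = {name: pearson(thermostat_on, s) for name, s in streams.items()}
strong = [name for name, r in candidates.items() if abs(r) > 0.3]
print(strong)  # the responsive motion stream should surface as a candidate
```

The point of the sketch is the Actuator→Sensor direction: the stimulus log, not the sensor data, is taken as the target variable, and the search simply asks which behavioral streams fluctuate with it.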
For IoT users, the presence of smart objects might be felt most vividly when an application intervenes in their behaviors or environments according to algorithmic sensorimotor arcs, which begin with sensors' detection of known problems and conclude with actuators' resolution of them. However, besides these goal-oriented executions, smart objects in non-active states also perform ambient operations, streaming massive environmental and behavioral data to the cloud, where machine learning probes hidden correlations. Streamed to the cloud, sensor data then switches its function from triggering actuators to serving as a reservoir of unstructured variables, from which certain equations are to be assembled as correlated with the target variables already set at an earlier stage of machine learning. Any well-defined subset of data can be used as the target if its correlations with sensor data are at least plausible. But what the routine practice of machine learning in marketing research (Wang and Minor, 2008) suggests considering first for this target is the actuators' systemic distribution of human-perceivable or machine-readable cues over space, since these are more than just plausible candidates for stimulating certain responses from environments, and thus fluctuations in sensor data. For instance, the variables correlated with hidden patterns of sensor data could be such stimulations from smart actuators as the warming of a room by a smart thermostat, the switching on and off of smart traffic lights, or a push notification from a smartwatch; put differently, the "smart nudging" of digital choice architectures "that require people to make judgments or decisions" in response to "automatic stimuli" (Mele et al., 2021: 952, 956), or the "choice architecture machine" currently under experiment in such real-world consumer spaces as "Amazon's prototype of a check-out free convenience store," which "could even alter the arrangement of the cafeteria—change labels and placement of choice" (Risdon, 2017).
Rather than intervening directly in user behaviors, these actuators influence the "rules of the game through the modulation and regulation of environments" (Gabrys, 2014: 35), "managing and controlling affordances" of the environments (Hörl, 2018: 160) within which people design their own actions. If these techniques of "environmental psychology" investigate the way "all modes of existence and all forms of life" behave under "multilayered technologically distributed environments of control" (156–157, 162), what the IoT commodifies is the responsiveness of things and humans re-sensitized to these constant environmental interventions. Machine learning's spatial governance of people's daily activities thus reflects the IoT's speculative concerns: how people's nonconscious responses to ubiquitous actuators are inscribed in sensor data (Actuator→Sensor), and how to cultivate and commodify these responses into health/security issues or efficiency/sustainability problems of lifestyle choices.
While an IoT application's Sensor→Actuator operation is focused on a certain registered problem, the Actuator→Sensor operation of machine learning aims primarily to cultivate as many correlations as possible among distant events, on the assumption that something will eventually prove optimally manageable within this web of correlations. In other words, while the system of Sensor→Actuator reduces its physical domain to a short-lived "problem space" in which everything is defined along a linear spectrum from the detection to the resolution of the problem (Newell, 1980), the Actuator→Sensor operation redefines the collected big data as a sort of "state-space," which describes all possible states the domain could assume in response to stimuli given either by smart actuators or by outside factors (Ahn, 2021). The extended Sensor→Actuator→Sensor operation of smart objects then describes both the short-term and long-term approaches of the IoT to its physical domain: on one hand, its executions of smart applications temporarily bifurcate each well-defined service domain for a registered problem, demonstrating a single use value of big data; on the other, its ambient operation for machine learning constantly turns the physical domain back into a thing still preserving hidden correlations abundant enough for long-term investment in the gradual conversion of more fictitious values into real values. We may understand the first, sensor-initiated half of the cycle (Sensor→Actuator) as representing the pan-optics of the IoT, which promises the full visibility of known problems under its 24/7 surveillance. The second, actuator-initiated half (Actuator→Sensor) can then be termed the IoT's pan-kinetics, and what this regime requires individuals to internalize, in addition to the panoptic sensors, would be the ubiquitous nudges or subliminal cues as the means of gradually unfolding and optimizing all hidden correlations within their smart bodies, homes, and cities.
This correlation-cultivating operation of the IoT, or its pan-kinetics, is comparable to that of an early experimental apparatus of behavioral psychology, the Skinner Box, in that this small container for an animal was also designed to cultivate certain behavioral responses of the subject in-between its stimulator (actuator) and recording device (sensor). Even though pan-kinetics seems to replace this early hardwired container with wireless sensors and actuators ubiquitous enough to encompass a wide range of objects stretching from a body to a city, the directions their operations head in are opposite to each other. Signifying the confidence of behaviorist programs in grasping any observable behaviors, the Skinner Box’s stimulations of animal subjects were meant to single out certain disciplined responses under black-boxed input-output relations, in other words, to transform the subjects into reproducible objects. Defined as a system to analyze hyperobjects, the IoT’s pan-kinetics, on the other hand, does not black-box the middle but transforms even the objects once considered stabilizable under behaviorist programs, including animal and human bodies and their societies, into hyperobjects that still preserve hidden dynamics of correlations, far beyond human perceptibility yet partially machine-learnable.
Hansen (2015: 4) writes that “today’s media industries have honed methods for mining data about our behavior that feature as their key element the complete bypassing of consciousness.” Requiring its users to be always responsive to nonconscious nudges everywhere, the IoT has also honed its pan-kinetics as a method to bypass users’ consciousness. The “nonconscious realm of sensibility” it intervenes in is experienceable (un)consciously only with the “missing half-second” of delay, “the temporal gap between brain activation and awareness” (98, 190). In other words, it is felt as most immediate to users’ minds yet always withdraws from their conscious access. Our genealogy of the IoT may, in this respect, be stretched farther back to an Actuator→Sensor system older than the Skinner Box, Helmholtz’s myograph: an experimental apparatus used in early neurophysiology research to examine this elusive domain of human minds. A detailed description of this genealogy is not the concern of this paper, but what is noteworthy about this comparison are the opposite effects these Actuator→Sensor systems have brought to the materialist understanding of human behaviors in the nineteenth and twenty-first centuries, respectively. As machinic extensions of the reflex arcs of experimental subjects, the hardwired sensors and actuators of early neurophysiology were designed to produce linear and graphical inscriptions of subject responses. These mechanical images of life were expected to drive mythical concepts of life, such as the “vital force” of vitalist philosophy, out to the peripheries of scientific discourses (Patton, 2018).
On the other hand, as the new inscription method of subject behaviors, or the nonconscious responses of their bodies under constant subliminal stimulation, the big data collected through the IoT’s pan-kinetics is resummoning something fetishistic in the form of the mathematical sublime, or the feeling that something significant is still hidden beneath the visible surface of big data (McCosker and Wilken, 2014). While previous discussions of data fetishism focus on how it reduces “all phenomena and means of accounting for phenomena to numbers” at the expense of displacing “other less easily quantifiable albeit insightful ways of expressing phenomena” (Sharon and Zandbergen, 2016: 1698), the speculative belief that maintains our attachment to the pan-kinetic networks is more tolerant of this kind of epistemological criticism in that something less easy to quantify by current methods is no longer simply displaced but speculated to leave hidden correlations behind its constant withdrawals. For ordinary IoT users, the fetishistic overvaluation of big data and the consequent overconcern for the needles inside are therefore based more on a somewhat animistic imagination of big data, spurred by the lack of proper means (such as machine learning) to correlate it to their restricted understanding, than on the belief in the numerical closure of selves that the early practitioners of the quantified self once had. For the IoT developers, on the other hand, fetishizing their big data as bigger than currently known is economically required to justify their speculative investment in “any and all future scenarios and technologies” to unfold its unknown correlations (Andrejevic, 2013: 78). This new fetishism is, however, more than mere delusion in the current media environment, where recognizing our inherent epistemological limits is the first step toward being smart audiences.
Encountering imperceptible security and lifestyle issues, all that remains for us to invest our restricted reasoning power in is the belief that something is certainly hidden out there and that the IoT’s systemic stimulations will bring it back into the knowable domain of artificial intelligence.
Anything can become this new commodity fetish if its natural/cultural responsiveness is re-stimulatable by ubiquitous actuators and transcribable into sensor data with potentially unfathomable fictitious values. From a body under multiple wearables to a smart city, the spaces once promised to become more transparent than ever through the literal pan-optics of sensors have ironically appeared to be possessed by something hidden once again. The full visibility of these spaces is now constantly updated through the pan-kinetic unraveling of hidden correlations. Under the IoT, even human brains come to be physical domains with abundant unexplored responses and, as the following section discusses, the current smart EEG headsets unravel these responses as profitable resources for the IoT’s new speculative economy.
Smart EEG headsets and exploitation of brain power
The electroencephalogram is “the graphic recording of the electric activity of the human brain, [which has] kindled far-reaching speculations about the imminent deciphering of mind and brain” (Borck, 2008: 367). As a neuroimaging technology for both clinical use and science research, EEG has functioned to discover “a direct and strange correlation [of brainwaves] to mental processing” (367), often stimulated by actuators in experimental setups. From its invention in the early twentieth century to its current commercialization, the history of EEG has been one of repeated efforts to communicate with something fetishistic about the human mind by unveiling its worldly correlations to technological measurements. For instance, one important motivation that led German physician Hans Berger to invent EEG in the 1920s was his interest in “the physical basis of mind” to explain his own “telepathic experience” by “measuring the ‘energy of mind’” (La Vaque, 1999: 1, 3). Motivated by Helmholtz’s physiological research with the myograph, his experiment was to demystify the spiritual sense of the human mind. However, with the vacuum-tube amplification of Berger’s original design, what English electrophysiologist Edgar Adrian really discovered in the 1940s was “the inaccessibility of the mind within the brain” (Borck, 2008: 372). The brainwaves scanned by the advanced EEG sensors were “not merely a more complicated version of reflex machinery” but “a noisy crowd” constantly withdrawing from “the dictatorial bonds of reflex physiology” (371–372). It was only after Norbert Wiener applied cross-correlation analysis in the 1950s that this noisy crowd of brainwaves eventually began to unfold its “responses in the EEG to sensory stimulation” (Barlow, 1997: 449). Once this noisy crowd turned out to be correlatable with outside actuators, the exploration of the human mind through EEG was reprogrammed as a sort of pan-kinetic project.
A variety of actuators have since been added to this project, stimulating its hyperobject, the brain, to unfold more correlatable brainwaves not only for medical and research uses but also for engineering brain-computer interfaces, neuromarketing, and even aiding sleep and meditation.
In line with this brief history, recent commercial EEG headsets, such as EMOTIV and Muse (Figure 1), can be understood as the latest attempt to correlate human brainwaves to a greater number of actuators through cloud-based online platforms. Embedded with several noninvasive sensors (seven electrodes for Muse and fourteen for EMOTIV) and connectable to a smartphone or PC with monitoring software, these headsets transmit the wearers’ brainwaves to the clouds, where the data gathered from each electrode are encrypted, stored in the wearers’ personal accounts, and shared with internal research teams and, probably, with some third-party researchers and developers. As to the recent neuroentrepreneurship that adds the prefix neuro- to almost every scientific field involving human minds and behaviors, Rose and Abi-Rached argue that “neurobiological self-fashioning,” by means of taking care of the “nonconscious determinants of our choices, our affections, our commitments” in our brains, becomes our responsibility not only for our mental lives but also for “improving the well-being of our societies” (2013: 22). While this responsibility seemed difficult for nonexperts to perform with limited knowledge as of their writing, EMOTIV and Muse now make it as easy as wearing a headset and clicking “yes” to all the terms for sharing our brainwaves through their clouds. These headsets thereby organize a “political economy of hope” (19) in which the fictitious values the wearers add to the big data of brainwaves in the clouds are sustained by the hopes they nurse as users of advanced smart wearables or as researchers/entrepreneurs interested in capitalizing on the human mind.

Figure 1. EMOTIV EPOC X (EMOTIVE|EPOC X, n.d.) and Muse 2 (Introducing Muse 2, n.d.).
The previous section defined pan-kinetics as the IoT’s operation to transform its physical domains into hyperobjects with enormous fictitious values and discussed how these values are converted into real values in uses and exchanges through constant data mining. The smart EEG headsets then exemplify most concisely, with their narrowly focused physical domain, human brains, how the IoT organizes a speculative economy around an assemblage of digital sensors, actuators, and users-speculators. To demonstrate the usefulness of my theoretical framework in analyzing the IoT, this case study reconstructs, from promotional and technical texts, the way in which the producers of the headsets have built their own speculative economies. It examines (1) how they have mobilized the wearers’ speculations as sources of the fictitious values projected onto their brain data and (2) how they have converted these fictitious values into real values through pan-kinetic arrangements of user-customized brain-actuating devices. Despite the slight difference in EMOTIV’s and Muse’s emphases on usability, it focuses on their common features since all the functions one supports are also available in the other.
According to the producers of these headsets, the migration of the noisy crowd of brainwaves to the clouds for machine learning provides the following solutions to the interested parties. First, for an individual looking for better “brain fitness,” the headsets provide “an increased awareness of how [her] brain responds to different activities” and help her “make more informed decisions in [her] daily life that improves [her] productivity and long-term well-being” (MyEmotiv, n.d.). Second, for those interested in the wireless connectivity of their brains, the headsets train them to activate “Internet of Things devices, online services like social media, and robotics like Arduino” using only the brainwaves the headsets transmit (Introduction, n.d.). Third, for corporate customers interested in “workplace wellness, safety & productiveness,” the headsets allow companies to “build custom enterprise solutions or applications informed by brain data-driven insights” into stress and attention, rather than relying on “inaccurate self-reports” (Enterprise Neurotechnology Solutions, n.d.). Lastly, for individuals and institutes interested in brain research and education, the headsets provide a digital platform for brainwaves, which supports application customization for researchers who aim to unfold the hidden correlations of brain activities to certain events, environmental cues, or nudges (What It Measures, n.d.).
The benefit of the application customization these headsets support is obvious compared to other wearable devices such as Fitbit and Apple Watch, whose functional values are restricted by what their sensors can measure. For instance, one function of Apple Watch’s Fitness app is activated after its photoplethysmographic sensor detects the wearer’s heart rate above an acceptable limit and is then terminated by the actuation of the vibrator to push her to slow down. In this case, the app, following the order of Sensor→Actuator, reduces the wearer’s body to a simple problem space along a linear spectrum of heart rates. The customizability of the EEG headsets, on the other hand, inverts this pan-optic order by allowing the wearers to experiment with multiple spectra of their brainwaves that can be unfolded through each application they customize with actuators in local setups. For instance, a common actuator available to most headset users is a gaming device, such as a PC or gaming console. Paired with an EEG headset, the gaming device arouses manifold brainwaves tentatively correlatable with each conscious or nonconscious task the gamer-wearer performs. Depending on which kind of stimulus in the gaming is chosen as the target for the machine learning algorithm to refer to in searching for the best-matched set of brain-sensor data, this customized Actuator→Sensor interoperation can unravel numerous brain data-driven insights potentially relevant to all four “solutions” mentioned above: how a gamer’s brain responds to different gaming activities, which subset of brainwaves represents her motor intention to control the joystick, and how a part of the brain indicating her stress level changes under exposure to various stimuli.
Although this actuator customizability is what enables the headset producers to promise all these future “solutions” with fewer than fifteen sensors on a single headset, none of the user-customized EEG applications currently works as seamlessly as Fitbit or Apple Watch does. However, for many individual EEG users, these customization experiences could still serve as proof of “neuro-realism” (Gruber, 2017), the belief that there are hidden realities in their brains correlatable to any internal and external events they experience. Furthermore, through the headsets’ wireless transmission of brainwaves, this belief in a brain’s unexamined potentials can expand even to those never actualizable by a too-restricted human body and its hardwired sensorimotor responses but available through these cloud-based IoT devices, for instance, the remote control of a machine using one’s brainwaves alone. The smart solutions these headsets promise are in this respect very “correlationist” in a sense similar to what another speculative realist, Meillassoux (2008), meant by this term, but with one important revision: human reason is no longer at the center of correlations. For the producers of the headsets, any conscious or nonconscious experience is correlated with brainwaves, every cognitive process of a person and her physical, psychological, and social needs should have right matches in her brainwaves, and those matches simply not yet data-mined are just the missing links to complete these correlations.
Although not enough empirical EEG data has yet been mined to corroborate this assumption, some packaged services of the headsets, such as the real-time and aesthetic visualization of EEG-scanned brain activities and their brief analysis into “six key cognitive metrics” (Figure 2), have been provided to hold the interest of individual users, to whom the headsets have promised that the brain data accumulated in the clouds will eventually enable them to make more informed decisions in daily life. On the other hand, in an effort to keep believable the indefinitely postponed smart future in which every IoT device would operate seamlessly through EEG-based brain-computer interfaces, the companies have continued supporting user customizations of rough EEG-based remote-control systems, such as one for driving a Tesla car back and forth (Touch Titans, 2016). In a similar vein, suggesting as many brain data-driven insights as possible into employees’ stress and attention levels has been the primary concern of the producers in holding the interest of corporate customers, rather than filling the missing links to actual workplace wellness and productivity. In short, as Silicon Valley tech companies, EMOTIV and Muse have constructed a fetishistic image around their headset-collected big data to mobilize people’s speculations about the hidden values of their brainwaves, signaled as yet only too faintly through the rough EEG applications currently available or too aesthetically through the glittering brain visualization.

Figure 2. 3D brain visualization and performance metrics for brainwaves (MyEmotiv, n.d.).
At the same time, to convert these fictitious values into the real values of practical insights or life-optimizing services, the companies needed to complement their IoT networks, somewhat incomplete in the first place as consisting only of EEG sensors, with enough actuators to activate as-yet-unknown brain responses. To that end, they have cooperated with many third-party institutes and individual developers, who have exposed the headset users to a variety of sensory cues and cognitive tasks: walking around “different urban environments” and shopping centers full of subliminal signals, driving a luxury car, performing “language processing” and “meditation,” playing videogames, and using other wearable devices such as a VR headset (Brain Research & Education, n.d.). As Bruder (2019) points out about current neuroimaging technologies, the headsets have infrastructuralized human brains as a platform to which numerous research institutes and experimenters of human behaviors have brought their field-specific brain-actuating apparatuses, alongside their informational labor, to realize the yet-undeveloped use values of brainwaves speculated under each different research agenda.
Regarding neuro-realism’s disenchantment of the human mind, Gruber (2017: 25) writes that the “datafication of so-called ‘hidden truths’ [of the mind] eradicates, or may at least intend to dislocate, the mystical.” For EMOTIV and Muse, however, the place into which the mystical is dislocated has been the sublime image of brain activities, a means to arouse people’s speculations about something hidden in the human brain while infinitely postponing the moment of its full disclosure. In other words, for the version of neuro-realism they espouse, the brain should be both an object with well-defined functions under readymade applications and a hyperobject remaining mystical to some degree so as to preserve its inexhaustibility.
For the executions of smart EEG applications to replicate the seamlessness of Apple Watch and Fitbit, the brain must be defined as an object that sends known signals to the sensors and responds regularly to the “push notifications” the actuators send back for the wearer’s behavioral change, that is, unconscious nudges capable of directly “pushing the right buttons in the brain” for this change (Brenninkmeijer et al., 2019: 71). Exemplifying typical Sensor→Actuator operations, these applications assign the EEG sensors the role of trigger for the actuators, which take only a passive role in intervening in the wearer’s behaviors or environments. However, for the EEG-based improvements of behaviors, lifestyles, and corporate environments to continue suggesting more brain data-driven solutions, a much greater number of actuators need to be added to this IoT platform by third-party participants. Put differently, for more brain-sensor data to be discovered as correlatable to these corporate and individual goals with practical values, the ambient operations of actuators should constantly stimulate still not-well-defined brain areas with fictitious values. Even to design a simple brain-computer interface whose operation is, for instance, initiated by the sensors’ detection of the wearer’s “Mental Commands” and completed by the actuators’ moving an object on a computer screen according to her commands in thinking, she first needs to tolerate arduous training sessions. In this training, what takes the position of the commander is the software that asks her to visualize this object moving in her mind so that the EEG sensors can detect certain brainwaves correlated with her motor intention (EmotivBCI, n.d.).
Aside from a small number of reported successes in moving the object by brain alone, these Actuator→Sensor operations during the training usually involve numberless experimental failures to unfold working event-related potentials, though her tolerance of these unsavory software commands would be easily dismissed as the payable cost of her becoming smart and wireless.
According to the privacy policies of EMOTIV and Muse, the EEG data generated from her training are shared with third-party participants either “on a de-identified basis” (Interaxon’s Privacy Policy, 2019) or in “individualized and aggregated” forms (EMOTIVE Privacy Policy, 2018). To compensate for the potential privacy issues of consumer vulnerability and corporate surveillance, this sharing practice is declared to serve “scientific, medical, and historical research purposes” (EMOTIVE Privacy Policy, 2018) “related to improving the scientific understanding of the brain/body” (Interaxon’s Privacy Policy, 2019). Converting the human brain into a hyperobject that streams a massive “Volume” of data with a “Variety” of unstructured correlations at a real-time “Velocity” (Kitchin and McArdle, 2016), EMOTIV and Muse give rise to a new market for big data-driven IoT services constructed around the following two consumerist speculations. First, there are underdeveloped potentials of human brains, whose IoT-driven realization could help headset users make more informed decisions in daily life. Second, there are hidden buttons in consumer or employee brains, whose IoT-driven activation could make them more vulnerable or hardworking.
For the customers of the headsets, however, neuro-realism, a data fetishism about the hidden correlations in brainwaves, is more than something they can simply choose to believe or not. They may feel an urgent need for something to be hidden in human brains if their everyday lives are to remain optimizable even when no human expert can tell exactly what the cause of their chronic depression is. For marketing research to continue suggesting new strategies even if consumers always lie, there should be something more truthful in brainwaves than human consciousness. Something more exploitable should also exist there to squeeze more productivity out of employees even when “the most imminent threat to capitalism might be some combination of lack of engagement and general apathy” of workers (Caring-Lobel, 2016: 197). The headset producers might claim that betting on these inaccessible realities undergirding human minds does not require paying a lot, but merely sharing their and their employees’ brainwaves in the clouds under the IoT’s pan-kinetics and constant machine learning.
The new capitalist realism
Unknown realities are thriving under the skull, or beneath the surface of brain visualization, and they are responsive not to our conscious access but to the cues and stimuli from ubiquitous actuators. In the first section, this speculative realism was discussed as something fed by people’s resignation to the problems they feel in their very vicinity but can never grasp. As Dencik and Cable (2017) argue, since the Snowden leaks in 2013, people’s “feelings of widespread resignation” to pervasive surveillance networks have made them reluctantly accept the unavoidable reality of “ubiquitous data collection” for lack of any alternative for addressing the elusive security problems everywhere. The realism in their concept of “surveillance realism” is thus no longer subject to ideological criticism since it is neither something misrecognized nor linguistically constructed but speculated from the subject’s reactive feeling of resignation to inaccessible problems.
In the second section, on the other hand, through the case of EEG headsets, I focused on a different kind of affective response, somewhat proactive rather than reactive, that the new capitalist realism of the IoT cultivates from the people stuck in-between its ubiquitous sensors and actuators. People still feel something hidden wherever massive data collection occurs. But the reason they cannot help speculating about inexhaustible correlations beneath the surface is not the absence of alternatives. It is rather that this speculative realism is the only alternative for breaking through the status quo of capitalism. Industrial capitalism once ripened through people’s resignation to its exploitative mode despite the nagging feeling that it would never be sustainable. The current software capitalism, on the other hand, seeks its substantiality in its speculation on hidden resources, the ubiquity of data, ever re-minable and thus sustainable. The network of sensors and actuators in which the EEG headsets place human brains is where our hope for this sustainable future is cultivated.
Acknowledgments
I would like to thank Professor James Hay, my teacher and mentor. The initial idea of this article was developed in my last year of doctoral studies under his support and guidance. I also thank the three anonymous reviewers for their helpful comments.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the School of Communication and Arts at the University of Queensland (Early Career Researcher Funding).
