Abstract
Recent proposals distinguish scientific enactivism, which operationalizes enactive concepts in empirical research, from utopian enactivism, conceived as a broad philosophy of nature. This paper questions that divide by showing how enactive principles can fruitfully guide human–computer interaction (HCI) research. We argue that enactivism offers a dialectical alternative to both cognitivist models, which reduce behavior to utility optimization and internal representations, and Heideggerian accounts, which stress situated embodied coping but often neglect the social and transformative dimensions of collective sense-making. Emphasizing participatory sense-making as the enactment of ecological norms, we propose a framework that shifts HCI from preference optimization to the co-creation of collective norms. A case study of an AI-mediated energy community illustrates how enactive principles inform empirical design. Participants interacted for one month with an AI-integrated interface while coordinating domestic energy use, incorporating the sociotechnical system into everyday routines and orienting to neighbors’ activities. We show that an enactive framework enables analysis of interaction across multiple temporalities and spheres of experience. The study demonstrates a concrete continuity between enactive theorizing and empirical implementation, suggesting that the enactive approach can be fruitfully mobilized in the study of sociotechnical environments.
Introduction
The epistemological status of enactivism has become a central object of recent enactive research. In introducing the notion that enactivism constitutes a philosophy of nature, Gallagher (2018) argues that while most existing research programs operate within the confines of a well-established concept of nature as the ensemble of objective physical reality, enactivism implies a more profound reexamination of the concept of nature itself. Rather than conceptualizing nature solely as an external, objective reality independent of human observers, enactivism presents nature as an all-encompassing notion that includes the observer, as a living and cognizing organism structurally coupled with its environment.
Accordingly, Gallagher suggests that enactivism should no longer be regarded as a scientific enterprise in the strict sense, but rather as a philosophy of nature. Enactivists, in this view, can accept without question the specific measurements produced by neuroscientists, while nonetheless challenging the cognitivist interpretations neuroscientists often attach to their data. One might worry, however, that this position enjoys all the advantages of theft over honest toil: if enactivism is merely a philosophy of nature, then enactivists need no longer concern themselves with generating the kind of data required for a post-cognitivist scientific research program.
From the perspective of establishing such a philosophy of nature, direct engagement with cognitive science in practice does not appear to be a priority. Meyer and Brancazio (2022) further question the status of enactivism as a theory within contemporary cognitive science. They argue that there is no current “paradigm crisis” within the cognitivist mainstream, and therefore enactivism should not be expected to bring about a paradigm shift in the way cognitive science is presently conducted. Cognitivism remains valid and effective within its own explanatory paradigm, leaving no compelling reason for cognitive scientists to abandon it in favor of enactivism. Nevertheless, they concur with Gallagher in regarding enactivism as a philosophy of nature, even if it is not a viable scientific alternative to cognitivism. Similar concerns appear elsewhere in the literature, where the distinction between philosophy of nature and scientific operationalization is emphasized as a crucial metatheoretical boundary in investigating key concepts like teleology (Nahas & Sachs, 2023). As Meyer and Brancazio (2023) note, paradigmatic cases of scientific enactivism (largely outnumbered by contributions to a broad philosophy of nature) include Varela’s (1996) neurophenomenology, modeling work on autopoiesis (e.g., Beer, 2015, 2020; Egbert & Di Paolo, 2009), empirical studies in the sensorimotor tradition (Froese & Ortiz-Garin, 2020; O’Regan & Noë, 2001; Rensink et al., 1997), and participatory sense-making research (De Jaegher & Di Paolo, 2007; Froese & Di Paolo, 2010).
In this paper, we question the sharpness of this distinction and use it to draw attention to a deeper philosophical issue: the proper interpretation of the relationship between philosophical conceptualization as expressed within a philosophy of nature and the theoretical operationalization inherent to scientific research programs. We argue that the kind of philosophy of nature we adopt profoundly influences not only how we interpret phenomena, but crucially also how we model and design them. To develop this argument, we focus on a particular domain: Human–Computer Interaction (HCI). This field is especially well-suited for such an inquiry, given its inherently interdisciplinary nature and its emphasis on both conceptual frameworks and practical design applications.
We examine how an enactive approach to modeling can inform and transform practices in HCI, particularly by shifting the goals of modeling from optimization toward embedding models within dynamic, lived practices. This enactive perspective emphasizes supporting agents in managing their environments through reciprocal interactions with the models themselves. Ultimately, this framework bridges the gap between philosophy of nature and design, advocating for models that are participatory, adaptive, and responsive to the realities of lived experience.
We develop this argument as follows: in Section “Human-Centered Artificial Intelligence,” we review the main positions in current debates on HCI methodology; in Section “Social Cognition and Ecological Norms,” we introduce two key enactive concepts: participatory sense-making and ecological normativity; in Section “Enactive Design in Practice,” we demonstrate how these concepts apply to a specific case study concerning an energy community in London; and we finally offer some concluding reflections.
Human-Centered Artificial Intelligence
Artificial intelligence is increasingly interwoven into the fabric of our everyday lives, shaping our activities across domains from online shopping to traffic management and policing. While many AI systems are designed to model human behavior, they are typically evaluated through statistical performance metrics rather than their contribution to real-world contexts (Gonzalez, 2024). These models, built on massive datasets, tend to overlook the nuanced, idiosyncratic, and often serendipitous nature of actual human experience (Steyvers & Kumar, 2024).
The Human-in-the-Loop (HitL) paradigm has emerged as a response to some of these limitations, introducing continuous human input into the development and refinement of AI systems, for example, through the manual labeling of text or images, or through outcome validation such as thumbs-up and thumbs-down buttons (Dange et al., 2024). This approach has proven effective in helping systems manage uncertainty and account for local context.1 Unsurprisingly, HitL has found a strong foothold in HCI, where there is growing interest in making AI more user-centered (Akbar & Conlan, 2024).
However, even as HitL marks a shift in practice, many implementations still carry forward traditional AI’s computationalist assumptions about human cognition and behavior (Jokinen et al., 2022). Researchers often design systems to align with user “cognitive models” (Oulasvirta et al., 2022), evaluating success in terms of the AI’s contribution to optimal performance in a predefined task (Erlei et al., 2024; Jokinen et al., 2021). Prevalent HitL approaches like Reinforcement Learning from Human Feedback (RLHF) model human feedback as individual preferences expressed through reward functions, reducing complex human actions to quantifiable costs and benefits. Critics argue that this framing fails to capture human purposiveness: the development of our goals, intentions, and understandings through activity (Bowling et al., 2023; Skalse & Abate, 2022). From this perspective, the Human-in-the-Loop increasingly risks becoming a Loop-in-the-Human: a configuration where human thought, both individual and social, resembles the algorithmic systems we’ve built.
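The reductive move these critics target can be made concrete. The sketch below, with entirely invented feedback data, fits a Bradley–Terry preference model of the kind that underlies RLHF-style reward modeling: pairwise human judgments are collapsed into a single scalar utility per option, and everything about why a judgment was made is discarded.

```python
import math

def fit_bradley_terry(comparisons, n_items, lr=0.1, steps=2000):
    """Fit one scalar utility per item from pairwise (winner, loser) judgments
    by gradient ascent on the Bradley-Terry log-likelihood."""
    u = [0.0] * n_items
    for _ in range(steps):
        grad = [0.0] * n_items
        for win, lose in comparisons:
            # Probability the current utilities assign to the observed outcome.
            p = 1.0 / (1.0 + math.exp(-(u[win] - u[lose])))
            grad[win] += 1.0 - p
            grad[lose] -= 1.0 - p
        u = [ui + lr * gi for ui, gi in zip(u, grad)]
    return u

# Hypothetical feedback: option 0 is mostly preferred to 1, and 1 to 2.
feedback = [(0, 1)] * 4 + [(1, 0)] + [(1, 2)] * 3 + [(2, 1)]
utilities = fit_bradley_terry(feedback, 3)
```

Whatever purposive, situated activity produced the judgments survives only as the ordering of three numbers, which is precisely the reduction at issue.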
Within the broader HitL landscape, we can distinguish between systems that passively infer user preferences and those that actively support users in understanding and shaping their interactions with AI (Mosqueira-Rey et al., 2023). This distinction brings us to the field of eXplainable AI (XAI), which aims to make AI decisions more transparent by representing their mechanisms or outputs in more understandable ways. In HCI, this often involves “post-hoc” explanations—clarifications of why an AI classified a particular image (e.g., as a tawny owl), or recommended a specific action (e.g., because it associated a particular word with a target concept) (Morrison et al., 2024).
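In its simplest form, such a post-hoc explanation can be sketched in a few lines. The toy scorer and weights below are invented for illustration; the technique is occlusion-style attribution, which zeroes out each input feature and reports how much the model's output changes.

```python
WEIGHTS = [0.8, -0.5, 0.3]  # invented parameters of a toy linear "model"

def score(features):
    """Stand-in for an opaque model's output."""
    return sum(w * f for w, f in zip(WEIGHTS, features))

def occlusion_attribution(features):
    """Post-hoc explanation: zero out each feature and record the score drop."""
    base = score(features)
    return [base - score(features[:i] + [0.0] + features[i + 1:])
            for i in range(len(features))]

# For a linear scorer, each attribution is exactly weight * feature value.
attributions = occlusion_attribution([1.0, 2.0, 0.0])  # approx. [0.8, -1.0, 0.0]
```

Note that the attribution is entirely internal to the model: nothing in it speaks to whether the explanation is relevant or informative in the user's situation, which is the gap discussed next.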
Yet, explainability alone does not escape the Loop-in-the-Human paradigm. The goal of AI explanations in HCI is widely framed as aligning the cognitive model of the user with that of the AI, for example, to optimize planning on a set task (Andrews et al., 2023; Barkouki et al., 2024). Here again, researchers often borrow assumptions about human behavior from decision theory and game theory to formalize the usefulness of an explanation (Mucha et al., 2021). Improving the intelligibility or reducing the “cognitive load” of an explanation is abstracted from the context being explained (Mohseni et al., 2021). What’s more, the focus on accurate mental models makes it difficult to explain why formally successful explanations often fail to be relevant or informative for the user (Alqaraawi et al., 2020). As Ehsan and Riedl (2020) argue, the focus on AI explaining itself does not adequately account for the sociotechnical contexts in which these technologies operate. What the AI model is actually “doing” in practice encompasses far more than representations of its outputs and mechanisms can capture. As a consequence, explanations are often informative to developers but of little use to everyday users.
Some emerging approaches to HitL have explicitly rejected the computationalist framing of human activity: supporting users in actively shaping models to their needs and purposes in specific embodied practices such as music and visual art (Gillies et al., 2016). For example, an artist can develop a repertoire of gestures to produce specific musical effects (Birhane et al., 2022; Tomás et al., 2021), thereby blurring the line between explanation and participation.
These critiques are not entirely new. Dreyfus’s What Computers Still Can’t Do questioned the idea that we can pre-program needs and goals into AI systems, since humans often discover and redefine their goals through the process of activity itself (Dreyfus, 1992, p. 277). For Dreyfus, human behavior cannot be formalized because human intelligence involves our embodied capacities to cope with the world: AI systems are insensitive to the shifts of context that characterize human adaptiveness to a situation. Building on this work, Lucy Suchman’s Plans and Situated Actions (Suchman, 1987) has become one of the most influential texts in HCI. Suchman criticizes traditional technology designers’ focus on scripts and plans, and instead calls on researchers to foreground how a technology is actually used by people in everyday situations, such as photocopying at a workplace. Suchman’s post-cognitivist turn to practice (Rogers, 2004) has established alternative traditions within HCI that contrast with the computationalist mainstream described above.
Suchman, Elizabeth Shove, and other theorists of technology, drawing on a synthesis of Heidegger and Wittgenstein, reject characterizations of social practice in terms of internal mental states such as desires and beliefs, focusing instead on routines, embodied activity, and situated interactions (Shove et al., 2012, p. 6). Suchman, using the lens of ethnomethodology, argues that “every instance of meaningful action must be accounted for separately with respect to specific, local, contingent determinants of significance” (Suchman, 2007, p. 84). From this standpoint, social practices are best described not as directed by rationality or normativity, but through observable performances of an activity in a particular context (Crabtree et al., 2000).
Recent ethnomethodological studies in HCI have explored how users engage with AI systems through practical interaction (Mlynář et al., 2024), for instance, how participants opened and closed their interactions with an AI agent such as a robot or voice assistant. Some studies in this review paid attention to instances of non-verbal conduct such as posture and “emotive displays,” with participants often drawing on their repertoire of practices from human-human interaction. However, this tradition’s focus on the local order and how it is maintained and repaired may limit its analytical scope. What could ethnomethodologists say about XAI other than noting how explanations are documents sustaining the local order? If belief, motive, and critique are excluded from analysis, ethnomethodology risks reducing practice to mere convention. As Kaptelinin and Nardi (2009, pp. 17–22) argue, this makes it difficult to imagine or advocate for alternatives to current systems. Conventionalist accounts can come to vindicate AI models of human behavior in ways equivalent to reward-function-based accounts. While this can be technically useful (Leibo et al., 2024), it could also have the social effect of maintaining the status quo by bracketing out the autonomy of individuals and the groups to which they contribute. Neither the conventionalist nor the game-theoretic computationalist accounts of behavior appreciate the sedimenting and transforming effects of AI practices beyond the local and present order.
A few authors are already bringing an enactive approach to HCI. Potapov et al. (2025) draw on an enactive lens to describe how participants with chronic pain dynamically couple their activity to structural features of music sonification to direct their own movements. Relatedly, Nygren et al. (2024) extend the enactive perspective to science learning, showing how bodily engagement and material interaction can reveal “enactive potentialities” in intergenerational exploration. Their analysis of adult–child collaboration in a museum setting illustrates how design can scaffold autonomy through embodied participation rather than cognitive representation. Pérez-Verdugo and Barandiaran (2023) explore how design choices of technological interfaces can encourage behavior in ways that can undermine or support the user’s autonomy. In contrast to traditional cognitivist and behaviorist approaches to “dark patterns” of habit reinforcement, they bring a dynamic enactive understanding of how habits develop in interactions with the environment. They give the example of rearranging the icons on your desktop to structure your own habits in light of your goals, priorities, and other existing valued habits. De la Torre et al. (2025) develop these themes, noting how AI models often undermine habits that would give the user greater autonomy over their attention. Pérez-Verdugo and Barandiaran (2025) offer a nuanced discussion of how large language models differ from other technologies in anticipating and completing the intentions of their users. They emphasize how AI systems appear to us in a form already structured by a history of purposeful use. Arguably, however, Pérez-Verdugo and Barandiaran do not make full use of the resources of the enactive approach, keeping the focus of analysis on the dyadic interaction between user and technology rather than on the wider sociomaterial worlds in which these technologies and their users are embedded.
Social Cognition and Ecological Norms: An Enactive Approach
The neglect of the social and dynamic dimensions of HCI is a limitation shared by both game-theoretical approaches and their embodied-Heideggerian counterparts. Here we use “game-theoretic approaches” as an illustrative subset of the broader cognitivist paradigm. While cognitivism encompasses many representationalist approaches, game theory is particularly salient in HCI and AI contexts, where it is often employed to model human behavior formally. Our critique of game theory is therefore intended as a case study in the broader critique of cognitivism. A key theoretical issue in applying Heidegger to HCI research lies in a seldom acknowledged feature of his philosophy: its fundamentally transcendental orientation. Although it has often been claimed that Heidegger was not a transcendental philosopher (largely because of his sharp critique of Husserl’s transcendental phenomenology), recent scholarship (Crowell & Malpas, 2007; Zahavi, 2017) has demonstrated that his relationship to transcendental philosophy is far more intricate than Dreyfus and his followers assumed. One consequence, as evident in Dreyfus and those influenced by him, is that social practices have little or no impact on the dynamics of agency. This leaves no room to account for the fact that agency itself can be fundamentally reshaped through transformations in modes of action and interaction. In this respect, as Gallagher and Jacobson (2012) have argued, Heideggerian approaches to cognitive science suffer from the marginal role assigned to intersubjectivity.
In this light, what HCI research needs is a shift from a transcendental to a dialectical approach to cognition. This is precisely what enactivism offers. Enactivism is dialectical because it rejects any fixed theoretical vantage point for defining cognition: while autopoietic organization provides the fundamental condition for a minimally cognitive system, the sensorimotor domain complexifies this minimal cognition in ways that are relational and context-sensitive, largely incorporating materials from the environment (Mojica & Di Paolo, 2025). As a dialectical approach to inquiry into cognition (and to inquiry into inquiry into cognition), enactivism gives the HCI researcher tools for understanding how individual and collective embodied agency is not fixed but plastic: how we interact with technology invites changes in how we understand ourselves as users of that technology, and further changes in how we understand the structure of our agency.
This perspective also clarifies a deeper shortcoming in both kinds of HCI approaches discussed in the previous section: their inability to account for the materially embedded and historically sedimented nature of human agency. In this section, we develop an enactive alternative. Rooted in the pioneering work of Varela et al. (1991), the enactive approach challenges computational and representationalist views, reframing cognition as a dynamic, embodied, and relational process of sense-making grounded in the adaptive autonomy of living systems (Di Paolo & Thompson, 2014). From this standpoint, historical and material embeddedness are not merely external background conditions, but constitutive dimensions of the cognitive self. For this reason, recent developments have made explicit the convergence between enaction and dialectics (Di Paolo et al., 2018; Di Paolo & Potapov, 2024; Gambarotto & Mossio, 2024; Gambarotto & van Es, 2025a, 2025b).
A key insight from this dialectical perspective for HCI is that the complexity of human agency cannot be modeled solely by reference to factors external to the individual, such as infrastructures or background conditions. Both game-theoretic and Heideggerian approaches, despite their structural differences, tend to emphasize systemic and environmental constraints, often downplaying the agent’s own sense-making. For instance, Shove suggests that individual practices are largely shaped, if not overshadowed, by the weight of existing infrastructures and material routines, what Pickering (1995) famously calls the “mangle of practice.” This third-person perspective risks overlooking the embodied dimension of human agency.
Both approaches overlook the dialectic by which the constraints mediating our social interactions form our very needs and preferences, as well as the means for their realization; for instance, in scaffolding a child’s behavior, a mother also shares her sense of the behavior’s value (Di Paolo, 2020). The enactive approach offers a fundamentally different orientation to modeling: it seeks to support agents in becoming more attuned to their own practices and to empower them to reshape those practices in light of their evolving purposes. Social and material structures, from this view, are not external impositions but evolving contexts of participation: frames through which meaning is collectively constructed and negotiated, and which must themselves be continually re-embedded through practice.
This enactive-dialectical approach entails a view of (collective) practices as issuing in a process of subjectivation, whereby an agent becomes truly self-determining through engagement with collectively normative and historically sedimented practices. In this light, personal informatics and digital tools can be understood as mediating collective processes of meaning-making and self-understanding. To elaborate on these ideas, we focus here on two central enactive concepts: participatory sense-making and ecological normativity.
Participatory sense-making is the enactive notion used to account for social cognition (De Jaegher & Di Paolo, 2007). It arises when at least two autonomous agents interact in a way that gives rise to a new autonomous system (the social relation itself) endowed with its own normativity. This emergent system is both grounded in and partially independent from the individuals involved. Participatory sense-making thus emphasizes the co-dependent, emergent dynamics between agents. A paradigmatic example is the hallway “dance,” where two people attempt to sidestep each other and inadvertently mirror movements. Here, interaction gains momentum and structure beyond the control of either individual. Such interactions sediment their own normativity, shaping future engagements by establishing the conditions for ongoing understanding and coordinated action.
This coupling transforms individual autonomy through collective regulation, expanding the scope of self-regulation via the development of shared norms. This mutual transformation gives rise to a self-sustaining relational domain. More recent elaborations (Di Paolo et al., 2018) have emphasized how such interaction patterns exhibit forms of historical drift that tend, under the right conditions, to sediment normativities beyond individual encounters into more (meta)stable forms. Examples of this include recurring arguments in relationships, inside jokes among friends, as well as broader social customs. This process describes a form of collective autonomy: a relational pattern with its own persistence, rooted in the historically sedimented interactions of two or more autonomous agents.
As with the organic bodies of the participants, the autonomy of social interaction remains precarious and prone to breakdowns, whether due to external interruptions, mistimed responses or incompatible expectations. Participatory sense-making arises precisely within this precarious relational space: a space without which autonomy is impossible, and within which sense-making becomes an intertwined, co-regulated process. This co-regulation affects not only how agents act, but how they perceive, feel, and make meaning. It constitutes a form of shared know-how that cannot be reduced to individual cognition but emerges from the interaction itself.
In other words, rather than presupposing stable subjectivities, the enactive-dialectical approach conceives of interaction as a historically sedimented and precarious process through which domains of significance emerge via coordination. Within this framework, interactions are never reducible to individual intentions; they always take shape through the ongoing negotiation among autonomous agents whose sense-making activities become interdependent (Di Paolo, 2021). What matters most is not just coordination per se, but the fact that these dynamics evolve historically, sedimenting over time and reshaping each participant’s embodied repertoire.
These relational dynamics do not negate individual autonomy but transform and reconfigure its scope. Participatory sense-making mediates the tension between individual sensorimotor autonomy and social normativity. Through this mediation, agents internalize social norms while continuously negotiating their relation to them. Over time, these engagements crystallize into interactional repertoires, forming the basis of complex forms of social agency. The development of these capacities is an open-ended and precarious process, shaped by cycles of coordination, breakdown, and repair.
Recent developments have further specified the enactive notion of sense-making as the enactment of ecological norms (Sepúlveda-Pedro, 2023). This concept articulates the historical and ecological dimensions of enactive sense-making. The environment, from this view, is not a passive background—as implied by both reinforcement learning and ethnomethodological approaches—but an active ecological field in which cognition unfolds. Cognition is fundamentally world-involving: it both shapes and is shaped by the environment. Through this co-constitution, the environment is imbued with ecological normativity: a preexisting normative structure that the cognizer encounters and also contributes to through embodied activity. Meaning is thus entangled from the outset in a normative field that precedes and guides its development.
Sense-making should therefore be understood not as the emergence of an Umwelt from an otherwise neutral environment, but as a reconfiguration of already sedimented patterns of interaction within the agent-environment system. This is a fundamentally dialectical process: the individual’s sensorimotor identity is shaped by norms that are themselves historically and materially constituted. Body and world are co-constitutive poles. Enactive cognition is always “in place,” shaped by a dynamic interplay of multiple, often conflicting, individual and collective normative dimensions that are continuously renegotiated through participatory activity.
Unlike the transient norms of co-regulation, ecological norms are sedimented into the environment through the construction and design of tools, which condense social histories and shape future possibilities for action. In this respect, personal informatics tools can be understood, enactively, as channels of social regulation that crystallize into durable artifacts, transforming domains of interaction by embedding norms materially. This has significant implications for HCI. Technological mediation (as both tool and form of mind-extension) can channel individual action-potentials and regulate them through socially embedded and ecologically sedimented norms. In this way, technological mediation becomes a potential enabler of autonomy, provided that it supports self-organization rather than extrinsic control.
Enactive Design in Practice: AI Mediated Energy Communities
There has been growing interest in energy communities in which households or businesses are in some way responsible for locally generated renewable energy (Ahmed et al., 2024). The EU’s “Clean energy for all Europeans” package and the UK’s Clean Power 2030 Action Plan emphasize that such socio-material arrangements could help lower-income households access cheaper energy, increase the resilience of energy grids, and reduce reliance on fossil fuels. Artificial intelligence has been recognized as key to managing the complexities within such arrangements, for example, charging shared solar batteries to manage unpredictability in solar energy supply (Costa et al., 2024). Much of the literature on such AI integration implicitly or explicitly adopts game-theoretic models to simulate human behavior. The particularities of household energy practices are often ignored here (Barth et al., 2018), reduced to fixed user profiles (Panagoulias et al., 2023), or accounted for by adding “noise” to the simulation (Kirsch et al., 2025). The unpredictability of human practice and of meteorological change appears as noise to a machine learning model, making it much harder to design reinforcement learning for this context than for less “noisy” environments (with fewer exogenous variables) such as video games or plant cells. While human behavior is stochastic and exogenous from the point of view of the model, it makes sense to the humans themselves: it is not randomness but self-organization under precarious, unanticipated conditions.
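The reduction at issue, a fixed user profile plus stochastic "noise," can be caricatured in a few lines. The profile values and noise parameters below are invented for illustration; a game-theoretic or reinforcement-learning simulation would treat each household roughly this way.

```python
import random

# Invented 24-hour baseline demand (kWh) for one household "profile."
BASE_PROFILE = [0.3] * 7 + [0.8, 0.6] + [0.4] * 8 + [1.2, 1.5, 1.1] + [0.5] * 4

def simulated_demand(hour, noise_std=0.3, rng=None):
    """Household behavior as fixed profile plus Gaussian noise: everything
    situated and purposive about practice is folded into the noise term."""
    rng = rng or random.Random(0)
    return max(0.0, BASE_PROFILE[hour] + rng.gauss(0.0, noise_std))
```

From the model's standpoint, a household's deliberate rescheduling of energy use is indistinguishable from a random draw; the purposiveness of the practice is lost in the noise term.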
These approaches have been challenged by theorists adopting a post-cognitivist perspective (Pierce et al., 2013). The determining role of historically accumulated and materialized social practices in energy behavior has been foregrounded by Shove (2017). Shove approaches human behavior in broad brush strokes, with a focus on social systems and trends: for example, that French people are more likely to stop for lunch than Finnish people, or the effect of the introduction of air conditioning on cooling practices.
A separate body of work, adopting an ethnomethodological framework, pays greater attention to individual energy practices. The emphasis here is on the “little data” of local contexts rather than the normalized “big data” on which AI systems rely (Tolmie et al., 2016). Much of this work focuses on eco-feedback technology, which supports households’ interpretation of energy practices through in-home displays of their energy data. Data visualizations allow users to partially make sense of what are typically black-boxed systems and infrastructures.
Analyses of this data work highlight how collaborative talk makes energy data meaningful, documenting social actions like “giving examples” and “elaborating current practice.” However, there is less focus on the actual sense participants make of their data: its relevance or impact for them (Fischer et al., 2016). Throughout this work, authors stress the indexicality of interpreted data, characterizing it as “accountable to the temporally-ordered practices that organize everyday life” (ibid.), for instance, when a user points to data saying “here is where you were putting the kids to bed.”
Yet this approach offers only a thin account of sociality. While these authors stress sociality’s importance, they reduce it to colocation in time and place and to whether actions conform to or deviate from local routines. This monodirectional understanding of environmental relations overlooks the dynamic nature of social interaction: what each agent brings to the encounter and what emerges through their exchange.
Both the game-theoretic and ethnomethodological approaches to energy communities, despite their differences, share a common limitation in failing to account for how technological mediation can actively transform the normative landscape within which agents operate. Game-theoretic models treat preferences as fixed inputs to optimization algorithms, while ethnomethodological approaches focus on how existing norms are maintained and repaired through local interaction. Neither approach adequately addresses how new forms of collective agency might emerge through human-AI interaction.
An enactive approach to energy community design suggests that AI systems might serve not merely as optimization tools or objects of interpretation, but as mediators of participatory sense-making among community members. Rather than modeling individual preferences or documenting local practices, such systems would create ecological conditions within which participants could collectively enact new forms of energy-related agency. Gapenne et al. (2024) characterize this as a shift of focus from tool design to enaction design.
In the rest of this article we report on findings that illustrate how enactive design principles can manifest in practice, particularly in the emergence of new forms of collective agency mediated by AI systems. First, let us lay down a few further theoretical considerations that frame the interpretation of the data. As already highlighted, we adopt a version of dialectical enactivism. Dialectically inclined approaches analogous to this have recently been developed within enactive accounts of social institutions, where institutions are conceived as objectively sedimented forms of participatory sense-making (Werner, 2024). By altering the ecological landscape of affordances available to autonomous agents, such institutions contribute to shaping agents’ subjectivity (their sense of agency) through a form of “mind-shaping” that fundamentally involves embodied cognitive and affective processes (Hanna & Maiese, 2019). In this respect, drawing on Vygotskian insights, Potapov (2021, 2025) emphasizes the central role of perezhivanie, the affectively charged lived experience that enables agents to interiorize past interactions and thereby reshape their individual identity, a process resonant with the Hegelian notion of Er-innerung (recollection as internalization).
In the case of the present study, what is presented is a particular instance of socially determined AI, where the interaction between humans and technologies contributes to processes of social subjectivation. Changes in modes of subjectivity are changes in how agents enact new worlds of significance. As discussed in Section 2, the aim of classical computational design is to isolate variables as much as possible, thereby downplaying the situated, embedded, and fundamentally non-linear aspects of autonomous agency. In contrast, enactive design integrates interaction as a key component within a framework where cognition is understood as fundamentally shaped through relational and embodied activity. The crucial point is that, when attempting to model complex dynamics involving multiple autonomous agents—each with a different history of coupling and varying social viability conditions (e.g., different class belonging or social habits)—and seeking to devise habit change and instantiate new (more sustainable) subjectivities, failing to adopt such an enactive approach rapidly leads to practical modeling dead ends, as real interactions cannot be isolated without distorting the overall design picture.
As discussed earlier, the enactive account of social cognition challenges cognitivist models based on mind-reading, which presuppose two largely self-contained cognizers attempting to establish an external relation. By contrast, participatory sense-making emphasizes emergent, co-dependent dynamics between agents. The dynamics generated in social encounters cannot be reduced to individual intentions; rather, they produce new identities through “complex co-regulated patterns” (De Jaegher et al., 2010, p. 442) that are always situated and relational. Nonetheless, it has been argued that participatory sense-making, in its original formulation, remains too individualistic, insofar as it continues to take individual autonomous agents as its point of departure (Kyselo, 2014). A genuinely dialectical account of social cognition must go beyond emphasizing the emergent character of social interaction to explain how individual identities are socially constituted from the outset. On this view, individual embodiment and social embeddedness form an entwined, co-determined pair. The self-other relationship is an originary feature of selves (minimal or otherwise), which always emerge as “selfless meshworks” of biological, behavioral, and social dynamics (Varela, 1991).
To explore this possibility, we report on work carried out by the second author and colleagues. We deployed a month-long technology probe simulating shared solar energy management among groups of four households in housing co-ops and council estates. Participants experienced a simulated shared solar panel and battery system managed by a reinforcement learning agent that balanced solar forecasts, predicted demand, and energy prices.
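The balancing logic of such an agent can be pictured in simplified form. The following is an illustrative sketch only, not the study's implementation: the state variables (solar forecast, predicted demand, price, battery charge) follow the description above, but the scoring function, its weights, and all numbers are our own assumptions.

```python
# Illustrative sketch of the trade-off a community energy agent negotiates.
# NOT the study's implementation: scoring rule and weights are assumptions.

def schedule_value(solar_forecast_kw, predicted_demand_kw, price_per_kwh,
                   battery_charge_kwh, load_kw, w_cost=1.0, w_solar=0.5):
    """Score running a load of `load_kw` in a given slot: higher is better.

    Rewards soaking up surplus solar and penalizes expensive grid draw.
    """
    surplus = max(solar_forecast_kw - predicted_demand_kw, 0.0)
    covered_by_solar = min(load_kw, surplus)
    covered_by_battery = min(load_kw - covered_by_solar, battery_charge_kwh)
    grid_draw = load_kw - covered_by_solar - covered_by_battery
    return w_solar * covered_by_solar - w_cost * price_per_kwh * grid_draw


def best_slot(slots, load_kw):
    """Pick the candidate time slot with the highest schedule value."""
    return max(slots, key=lambda s: schedule_value(
        s["solar"], s["demand"], s["price"], s["battery"], load_kw))


if __name__ == "__main__":
    slots = [
        {"name": "morning", "solar": 0.5, "demand": 1.0, "price": 0.30, "battery": 0.2},
        {"name": "midday",  "solar": 3.0, "demand": 1.0, "price": 0.10, "battery": 1.0},
        {"name": "evening", "solar": 0.0, "demand": 2.5, "price": 0.35, "battery": 0.5},
    ]
    print(best_slot(slots, load_kw=1.5)["name"])
```

In the deployed probe this role was played by a reinforcement learning agent, so a learned policy would stand in for the hand-written scoring rule; the trade-off it negotiates among solar forecasts, predicted demand, and prices is the same.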
The interface allowed participants to book appliance usage times, creating signals for both the AI system and other community members. These bookings served dual purposes: providing meaningful communication within the community and enabling participatory AI through co-regulation of system behavior. Participants could view visualizations of their own daily energy usage, battery status, local energy prices, and others’ anonymized bookings.
Within such a sociotechnical system, the realization of autonomy is not an individual accomplishment but a collective one. Individual agents enact their autonomy only through participation in social practices and institutions that sustain and regulate the conditions for self-determination. Agency thus always embodies a historically sedimented, socially embedded dynamic. The normativity inscribed within social institutions, in turn, feeds back onto individual agents, shaping their actions according to shared norms.
Though participants used the system for only a few weeks, it could nonetheless reshape how they experienced and interacted with their environment. This is demonstrated by Dasha, a participant who was interviewed a couple of weeks after she stopped interacting with the study’s eco-feedback system: Since that first week, I associated the dishwasher with that block on the graph. It took much more electricity than I thought and I remember that big block. And still now, if I want to put the dishwasher on, I feel like it’s this much electricity and it still affects me. I still have a sense of it and I try to do it outside the peak times.
Here, we see how this initial pattern has begun to sediment into what enactivists call a “portable norm” (Di Paolo et al., 2018). Through her prior dynamic coupling with the interactive visualizations she has incorporated a novel normative sensitivity into her habitual engagement with her domestic environment. The participant’s engagement with the system has settled into an embodied understanding that guides her behavior without external prompting. This is not the AI’s prescription acting upon her, but a self-directed enactment of an internalized normativity. The technology’s prior role now operates transparently within her field of affordances, reshaping her sense of agency and normative orientation within a new ecological landscape of “situated normativity” (Rietveld & Kiverstein, 2014). In this respect, Angie, another participant, commented: I wouldn’t say I was thinking of the AI as such. To be honest, I didn’t really think about it much. But you get a feel for, OK, it’s sunny and we don’t use much electricity around now and then I can do my washing or whatever and it’s obviously going to come out of the battery.
Angie’s understanding of norms around energy has fused aspects of the sociotechnical system with her own common-sense “feel” of the situation. Angie can develop a “feel” for the AI without needing a full explanation of its mechanisms.

While the technology was still necessary to ensure her feel was aligned with the wider unfolding dynamics, Angie was able to sustain a critical relationship to its functions: So the AI was just part of it there and if the cost went way up for some reason, as it did sometimes, then I tended to just ignore it.
This selective engagement demonstrates that Angie has not surrendered her agency to the AI but has incorporated it into her autonomous sense-making in a discriminating way. She actively evaluates the system’s predictions against her own needs and priorities. This illustrates what Sepúlveda-Pedro (2023) describes as ecological normativity, the negotiation between pre-existing norms (embodied in the AI) and the agent’s ongoing sense-making activities.
Where our enactive approach most clearly diverges from traditional modeling paradigms is in how it accounts for the emergence of collective agency. This became evident in Bob’s reflection: It wasn’t just “price is low so I’m going to stick the washing on.”
Rather than treating the system as a price-optimization tool, Bob described a relational process of coordination within the community. He elaborated: It just made me reflect on how Sam has a big family, so if I put my washing on—even if it’s not a conscious thing—I know that others are using the energy.
Here, Bob interprets the interface’s price signals not as incentives for individual utility maximization (as game-theoretic models would suggest), but as socially meaningful cues that mediate relations among community members. The AI system mediates a form of indirect coordination among households, enabling a form of participatory sense-making even without direct communication.
Bob’s later comments suggest this sense-making extends beyond the immediate task of scheduling: If the price is high, I could be like, “That’s Sam maybe,” and it’s more fair that way, that we’re all sharing it out in a way and it’s a community view…
The pricing signal, originally designed as a proxy for energy availability, has been reinterpreted as a marker of community fairness and reciprocity. This shift exemplifies how participants collectively re-signify the system’s outputs, transforming a computational model into a sociotechnical practice.
For Bob, the key concern, above individual price optimization, is the social ecology of energy use within the community. In this instance, the AI tool fostered a form of “collective like-mindedness” (Pippin, 2008) that prioritizes community over individuality. From this enactive perspective, design should not aim to predict or optimize human behavior, but to provide an ecological scaffold through which users enact shared norms and negotiate autonomy.
Bob’s engagement also revealed the limits of the system’s capacity to sustain collective coordination. Reflecting on how the AI failed to respond to his scheduling choices, he explained: To me it was very stupid if I book my washing in because I can do it during the day and then the bot [AI] isn’t responding at all. Because if I put my washing then, I want to know that others can do theirs at more peak times and it would be cheaper for them or for all of us.
Bob’s frustration stems not from misunderstanding the interface, but from a mismatch between his normative expectations and the system’s design. He understands the community energy tool not as a mechanism for price optimization, but as a medium for coordinating shared responsibility. His intention is to create a social signal yet the system treats his input as an isolated data point. The breakdown exposes how the interface fails to recognize or support the teleological dimension of participation: users’ efforts to enact fairness, care, and reciprocity through their actions.
This same tension surfaced in smaller interactional moments. While scrolling through his energy graph, Bob attempted to use the visualization itself to convey meaning: It used up a lot… used almost 2, so quite a lot of energy. Compared to the kettle, that’s a lot.
Here, the data display becomes part of his communicative gesture to express something of the quality of how much energy his microwave is using. Yet his difficulty navigating the interface interrupts this embodied expression. Meaning emerges through the struggle, as he revises and reorients his interpretation around the tool’s responses. Such moments highlight how the AI’s representational framing constrains the fluid, interdependent quality of human sense-making.
As Barandiaran and Pérez-Verdugo (2025) emphasize, AI systems adjust to user intentions to create an ersatz form of participatory sense-making: a dynamic co-regulation in which the user can draw on a repertoire of portable norms from human-human interaction. This framing becomes more problematic in the present context, however, where the sociotechnical system is not anthropomorphic. These episodes show that Bob is not simply responding to system feedback but attempting to co-constitute its meaning. The technology’s rigidity limits this process, narrowing the scope for participatory sense-making. From an enactive perspective, this suggests a key design challenge: AI systems that mediate collective practice should remain open to users’ normative improvisations, enabling feedback loops through which social meanings, rather than just predefined data, can evolve. That is, design should aim to support the kinds of “human feedback” to AI models that actually emerge in contexts of interaction (cf. Caramiaux & Alaoui, 2022).
A similar form of missed coupling is found in other participants, such as Gavin, who reports: The council installed a smart meter here… things like smart meters, smart anything, is a way for the council to keep tabs on you… so when I was putting my oven on I just felt the whole time like they’re watching me and like if I’m not doing it right with the energy, if I’m wasting it, is that going to affect like… it could affect my tenancy… maybe they could evict you because they don’t like your lifestyle because you’re not using your energy right or I don’t know… If I’m worried like do I have enough money this month for food and that then I’m just not in that mode that I’m going to look at the graph or whatever because I’m just trying to stay afloat.
A focus on decision-making would overlook the affective elements that shape the system’s affordances for Gavin. Cognitivist and game-theoretic models fail to capture the situatedness of an agent’s sense-making within broader social contexts, and how this can have a tangible impact on one’s sense of agency. His reflections illustrate how sense-making is always shaped by an agent’s history of coupling with its environment, which in the case of human social agents chiefly includes material and financial conditions. A standard computational account would explain the lack of engagement here in terms of a reduced availability of discretionary time, that is, flexibility for demand shifting (Mar et al., 2021). Yet an enactive perspective offers a different picture: due to his more precarious social position, for Gavin the smart meter is not a neutral information device but a potential vector of social control eliciting anxiety about his tenancy and clashes with other pressing concerns tied to his economic vulnerability. Gavin’s history of social coupling generates a relation to the AI system characterized by mistrust and fear, making the normative expectations built into the system (optimizing use around community benefits and affordability) effectively inaccessible to him.
Other participants, by contrast, recognize the usefulness of the AI system as a mediator of needs within the community. Becky, for instance, reports: Like three people are like one woman’s there with her kids being like “We had a disaster. I have to do laundry now” And like somebody else is like “oh, but I booked it in for my own washing machine.” Quite tricky, maybe, to manage that between people… [That’s where having an AI could help].
Becky acknowledges the AI system’s role as a coordination tool within a shared infrastructure. The priority for the system here is not to explain its decisions but to mediate between different needs. The system can act as a mediator within the ecology of sense-making in the energy community. It supports participatory sense-making among participants, according to shared norms. Chloe’s comment further illuminates the emergence of collective norms through AI-mediated interaction: If they’re like the DJ that likes to do home discos for like 3 hours and they want to put that on the solar: that’s like, no, don’t you dare!
Here, Chloe articulates how normative evaluation is embedded in the complex landscape of affordances of the sociotechnical system: Like just put laundry off the solar rather than like weird stuff that we don’t want to be subsidising through the collective…
This distinction is not pre-programmed into the AI system but emerges through the participants’ ongoing interactions with it and with each other.
These vignettes reveal participatory sense-making operating at multiple scales. At the individual level, Dasha’s embodied attunement to energy patterns shows how human-AI coupling can sediment into portable norms. At the dyadic level, Bob’s attempt to signal availability to neighbors through system interaction demonstrates how participants seek to establish coordination dynamics. At the community level, Chloe’s articulation of collective values shows how participatory sense-making can generate new normative distinctions. At wider social levels, Gavin reflects on how economic precarity could affect his engagement with the system. Crucially, these are not separate phenomena but mutually constituting.
Traditional AI design often assumes that effective systems must approximate a complete model of the user or the world (map = territory). From this perspective, the goal is to build ever more accurate internal representations so that the system’s model mirrors objective reality and, ideally, aligns with the user’s own “mental map.” Yet this objective of exhaustive mapping overlooks how meaning and purpose emerge through situated activity. People do not act by consulting internal maps but by navigating environments that are already charged with significance through their ongoing practices.
Rather than designing systems that aim to represent the world for users, we can design systems that help users enact and transform their own worlds of meaning (Gapenne et al., 2024). In our study, the energy interface supported this by enabling participants to build shared understandings of fairness and responsibility: norms that were not pre-programmed but developed through collective use. Here, design becomes less about constructing a perfect model and more about cultivating ecological conditions in which new forms of sense-making can arise.
From this standpoint, the challenge for HCI is not to produce ever more complete or transparent representations, but to create interactive contexts that support people in perceiving, negotiating, and transforming their relations with others and with technology. In AI-mediated energy communities, this means designing for the growth of collective autonomy: systems that remain responsive to emerging social norms and allow these norms to influence system behavior. Future work might explore adaptive feedback mechanisms that detect and respond to evolving community practices, ensuring that algorithmic mediation reinforces rather than constrains participatory sense-making, for example by aligning with freeform text annotations on energy visualizations (cf. Panagiotidou et al., 2024). This could also involve dynamic system control through communally developing gestures (Tomás et al., 2021).
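One way to picture such a feedback loop, offered purely as a sketch under our own assumptions (the keyword cues and multiplicative weighting are illustrative inventions, not a proposal drawn from the cited work): freeform annotations attached to time slots could be aggregated into weights that nudge a scheduler toward communally endorsed slots and away from contested ones.

```python
# Illustrative sketch only: freeform community annotations nudge a
# scheduler's per-slot weighting. Data model and weights are assumptions.
from collections import defaultdict

ENDORSE = {"fair", "good", "share"}   # crude keyword cues (assumed)
OBJECT = {"unfair", "no", "waste"}


def norm_weights(annotations):
    """Aggregate (slot, text) annotations into a multiplicative weight.

    Each endorsing annotation nudges the slot up 10%; each objecting
    annotation nudges it down 10%. Unannotated slots keep weight 1.0.
    """
    weights = defaultdict(lambda: 1.0)
    for slot, text in annotations:
        tokens = set(text.lower().split())
        if tokens & ENDORSE:
            weights[slot] *= 1.1
        if tokens & OBJECT:
            weights[slot] *= 0.9
    return weights


if __name__ == "__main__":
    notes = [("midday", "good time to share the solar"),
             ("evening", "unfair to run the disco off the battery"),
             ("midday", "fair for laundry")]
    w = norm_weights(notes)
    print(w["midday"], w["evening"])  # midday endorsed, evening objected
```

A scheduler could then multiply each slot's score by its weight, letting articulated community norms, rather than only price and demand signals, shape system behavior.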
Embracing a form of dialectical enactivism was crucial not only for our theoretical framing but also for the design of the study itself. Dialectics, in this broad sense, can be understood as the science of embeddedness and entanglement, whose distinctive gesture is to move from the abstract to the concrete (Di Paolo et al., 2018). Rather than isolating variables to establish causal relations, abstracting cognition from its conditions of enactment, we sought to study how norms and meanings emerge through embodied and situated practice. The design therefore did not aim to measure “outcomes” in a means-ends framework typical of traditional AI evaluation, nor merely to describe the “work” people perform with technology in ethnomethodological terms. Instead, it foregrounded the enacted experience of norm formation: how participants, through ongoing interaction, brought new forms of coordination and value into being. As Levins (1998) reminds us, such a dialectical investigation of life and mind must remain attuned to historicity and path-dependency: cognition is never a closed system but an evolving nexus of interdependences that must be studied as such.
Our case study illustrates the potential of such an approach. The AI-mediated energy community became a site where participants enacted new forms of collective agency: reinterpreting price signals as social cues, developing shared expectations, and integrating community concerns into individual decisions. These dynamics elude both game-theoretic optimization and purely ethnomethodological description. An enactive framework, attentive to participatory sense-making and ecological normativity, meanwhile captures the generative interplay between individual action, collective norm formation, and technological mediation. Designing for autonomy, in this sense, means designing for the conditions of enaction rather than for predefined behaviors or representations.
Conclusion
We have argued that concepts from enactivism can be used to guide a specific research project in HCI. One might therefore think that we have been arguing for enactive-inspired HCI as an example of scientific enactivism, as Meyer and Brancazio (2023, p. 7) define it: “the aim of scientific enactivism is to find ways to incorporate enactivist principles into the cognitive sciences and involves empirical work or specific, achievable proposals for such work.” To be sure, we are doing that, to an extent. However, Meyer and Brancazio sharply distinguish scientific enactivism from what they call “utopian enactivism,” which they classify as a “philosophy of nature” rather than as a scientific research project. Against this sharply drawn distinction, we have been concerned to show that enactivism as a philosophy of nature can guide the generation of productive research programs, especially in domains, such as HCI, that have been confronted with a stalemate between cognitivist and Heideggerian methodologies. By taking enactivist concepts such as participatory sense-making and the enactment of ecological norms as foundational to a socio-ecological account of embodied cognition, we can go about HCI research in a different way than was previously conceived. This is not, pace Meyer and Brancazio, merely implementing enactivist principles within scientific research; it is using enactivist ideas to inspire a research methodology different from those that have characterized past HCI research.
One important consequence of our enactive approach to HCI research is that it gives us reasons to re-evaluate other uses of enactive concepts in AI research. Some enactivists claim that our interactions with AI count as a new form of participatory sense-making (Pérez-Verdugo & Barandiaran, 2025; Zebrowski & McGraw, 2022). As we see it, that claim is not supported by the incorporation of enactive concepts into the design, implementation, and interpretation of an HCI study. This is because there is a crucial difference between an AI that is implemented in ways that enact novel forms of participatory sense-making among the humans whose interactions with each other it mediates, and an AI that is itself engaging in participatory sense-making. In other words, enactivists should not simply apply enactive concepts such as participatory sense-making to the interpretation of AI and HCI research; we should incorporate enactive concepts into the design and implementation of HCI, and then determine from what users say just how successful or unsuccessful the results have been.
Arguably, Bob’s frustration at what the AI is telling him arises because he expects the AI to engage in participatory sense-making with him. We regard this as indicating that more work needs to be done to determine when an AI is itself engaging in participatory sense-making and when it is facilitating the enactment of new, technologically mediated, collective forms of participatory sense-making. By drawing upon the enactive approach both to motivate the need for a new HCI methodology and to guide the interpretation of the results of a new HCI study, we underscore the fundamental continuity between philosophical theorizing and scientific practice. A genuine philosophy of nature should equip researchers with conceptual resources for orienting inquiry, guiding experimental design, and interpreting empirical findings. We have sought to illustrate this by turning to HCI, where the enactive approach provides a powerful framework for modeling the dynamics of user experience, from micro-interactions to community-level patterns of engagement, enabling researchers to navigate productively across different scales of analysis.
Footnotes
Author Contributions
AG coordinated the interdisciplinary project, leading the overall development of the manuscript. KP drafted Section 2 and provided key quotations for Section 4. CS contributed to the discussion throughout the project, assisted in editing the manuscript, and drafted the conclusion. All authors actively contributed to the discussion, revision, and final approval of the manuscript.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Andrea Gambarotto’s work was funded by the Luxembourg National Research Fund (FNR), grant reference FNR/O23/18084432/AUTONOMY. Open Access was kindly funded by the Department of Humanities of the Université du Luxembourg.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
