Abstract
To continue the discussion, I will revisit the modes of engagement and the broader aims of situating collaborative research within the current technological landscape, drawing on six insightful commentaries. These contributions from Lucie Chateau, Kari Dahlgren, Noortje Marres, Dan Podjed, Martin Tironi, and Haiqing Yu contextualize collaborative efforts, provide further confirmation and examples, and highlight shortcomings in my reflections. Society and social structures are being reconfigured through algorithmic techniques, resulting in losses and gains that might be difficult to predict. This requires close attention to detail, as well as proactive and clever approaches to studying the digital. Breathing space serves as the epistemic arena where these approaches can be planned and nurtured.
I wrote “Collaboratory Explorations as Breathing Spaces for Digital Futures” unusually quickly and with a distinct programmatic flair. This was a conscious choice, as my aim was to lay out a research agenda for studying the digital by attending to firsthand experiences of epistemic conditions and struggles inherent in collaborations with technology experts. I intentionally steer clear of considerations regarding the general applicability of my findings or whether my experiences within the MyData collaboration were representative of other collaborative efforts or forms of data activism. Instead, I sought to examine collaboration through the lens of a specific venture, providing a retrospective account of what I had learned and showcasing that “epistemic commitment takes humility, perseverance, and creativity” (Chateau, 2025, this issue).
In reflecting on the MyData collaboration, I identified three modes of engagement: creating trouble, composing futures, and securing breathing space. These modes immediately resonated with me as important pursuits for knowledge formation, even if their exact implications were not entirely clear. Taking advantage of the dialogical opportunities that this new journal offers, I decided to present these modes not as robust research approaches but as integral aspects of reflexive collaborative research. In this context, moving from one mode to another appears to be more fruitful for knowledge formation than focusing solely on perfecting specific engagement techniques.
To continue the discussion, I will revisit the modes of engagement and the broader aims of situating collaborative research within the current technological landscape, aided by the six insightful commentaries that complement my article. These contributions contextualize collaborative efforts, provide further confirmation and examples, and highlight shortcomings in my reflections. By engaging with them and building on their insights, I aim to further emphasize the importance of maintaining distance in collaborative explorations without falling into positions that reproduce abstract and detached techno-critique. The recognized need to address the specificities of locally implemented algorithmic systems and their consequences means that collaborative work is defined by various kinds of balancing acts that require purpose and imagination.
What I read in the commentaries reflects a shared appreciation for collaborative research as a timely and much-needed craft. Merely pointing out the harms and risks of technological development and addressing the concerns associated with the penetration of algorithmic systems into everyday life is insufficient. Critical work is essential and can, of course, be extremely beneficial to collaborative research. For instance, critical insights can serve as prompts in the effort to compose livable digital futures. What is crucial, however, is that “the digital is not an abstract elsewhere” (Tironi, 2025, this issue) but an ongoing process that demands situated and engaged action and reflection.
Rehumanizing does not equal human-centric
After reading the commentaries, I had to check whether I had used the term “human-centric” in my piece, as this is not what the sensibility of rehumanizing suggests. Tironi (2025, this issue) writes that “the author constantly raises the importance of moving away from the ‘techno-centricity’ of current debates on digital society to a human-centricity or shifting the attention to the human arrangements and abilities in technology development.” In my understanding, however, there is a significant difference between promoting human-centricity and focusing on human aims and actions in technology development. The MyData initiative defines itself as human-centric, and the term has become widely used in policy circles and by technology companies, but I would not use it to describe the work we do. In fact, the rehumanizing lens can also be applied to human-centric initiatives to uncover how narrowly they define “the human.” The reflexivity that I am promoting rarely has space in human-centric initiatives, where the human is seen either as in need of AI literacy, as a data source to be used in the techno-loop, or as a capable master of AI tools.
The rehumanizing lens proposes that various kinds of human agencies and aspirations should be brought into the conversation regarding technological developments. As Podjed (2025, this issue) suggests, humans are not just “micro-particles flying around in the turmoil of ‘datafication’ processes but they are steering these processes.” Humans are the ones who promote technological advancements, deciding to prioritize human needs over those of other species. For instance, in our ongoing work on data centers, it is human aspirations that enthusiastically pave the way for new centers in Finnish municipalities. Thus, the rehumanizing approach does not mean an exclusive focus on human interactions or the neglect of the material relations on which the digital depends; rather, it acknowledges that it is a human gesture to take, or not to take, a more-than-human or more-than-technology standpoint.
Reconsidering epistemic assumptions
The modes of engagement that I outlined, ranging from creating trouble to securing breathing space, rely on the willingness to contribute to collaborative setups. However, they also require a situational understanding of their possibilities and limitations. Marres (2025, this issue) argues that creating trouble requires more qualification, and I agree that it makes a difference where and how the researcher engages in troublemaking. Marres uses the example of a teenage kid in a hoodie, for whom creating trouble is typically merely a way to get into trouble. The point I want to make is that researchers need to take epistemic risks. Not everyone can create trouble, but those of us who can have the privilege of using it as a research technique. A privileged researcher can create trouble on behalf of those who cannot, whose perspectives are often ignored in the development of algorithmic systems. The most vulnerable members of society are typically those whose lives are made legible by means of computational techniques, so they need special attention in scholarly troublemaking.
The fact that algorithmic systems focus only on those aspects of life that can be computationally tackled calls for serious consideration of “the asymmetric treatment of social and computational aspects of social life,” as Marres puts it. The programming of the social is always selective and partial, treating people differently. Furthermore, humans possess different qualities than computational processes, and by reminding ourselves of the specific features of that qualitative difference, whether we call it intuition, attentiveness, or “feeling right,” we can bring depth to lived experiences in the digital realm.
Social scientists might avoid creating trouble because they fear it could skew their research data, damage their reputation, or jeopardize their access to the field. Has the fear of exclusion become a form of self-censorship that conforms to stakeholder expectations? I would like to see more reflection on how access, or the fear of losing it, currently shapes the way we study digital developments in interdisciplinary projects. The need to maintain proximity to technical experts and their practices may be epistemologically detrimental. For instance, the social aspects become overly narrow if researchers discuss values or “value alignment” without paying attention to how selectively technology experts define and approach values.
Dahlgren (2025, this issue) notes my ambivalence about the success of composing futures, suggesting that “the interventional power” of working with collaborators is not fully realized in my account and that more could be done in this regard. I fully agree. Many scholars who conduct collaborative research are much more skilled in the work of composing futures and have a keen eye for aligning their interests and timing their collaborations. As Tironi (2025, this issue) suggests, composing futures can include techniques, visual prompts, and critical prototyping that allow us to problematize current developments and engage with the formulated problems. As a mode of engagement, composing futures benefits from experimentation and tools that promote collaborative thinking, “opening up questions that would hardly emerge from theory alone,” as Tironi puts it.
The question that occupies my current work is what happens after a successful workshop. The challenge lies in making insights and interventions tangible within current academic or policy formats, allowing different audiences to reflect on them. It is no coincidence that we are witnessing a trend toward arts-based methods. At its best, composing futures collaboratively creates spaces where scholarly visions and insights can inform and intervene in others’ perspectives. A technology expert I once worked with stated, “There is no ideology in progress.” This assertion often resurfaces in my reflections on all that needs to be unpacked when studying the digital. The fact that devices and services appear “neutral” explains how technical systems become embedded in mundane practices. In collaborative explorations, this neutrality might be the first aspect to tackle. Instead of seeking certainties or definitive solutions, collaborative engagements that are successful from an epistemological standpoint typically address processes of negotiation and friction, where stable notions of what constitutes progress or what is good for society are reconsidered.
Breathing space as an epistemic arena
The final mode of engagement, securing breathing space, partly overlaps with the other two modes I proposed, but it has its own character—a character that the commentators of my article identified so well. It is not surprising that the metaphor of breathing space resonates in the current techno-culture. Among my colleagues, “writing retreat” has become a symbol of a shared space to think, a mirage that points to the possibility and promise “to breathe in and breathe out,” as Podjed (2025, this issue) writes. In the midst of various external pressures, whether related to funding, the metrification of higher education, or the efficiency-driven ethos of digital society, the academic community that studies the digital needs spaces to “engage with the very workings of technologies, to understand their inner logics, and to explore alternative ways of relating to them” (Tironi, 2025, this issue).
Breathing space can be thought of as a means “to expand the socio-economic frame of social and cultural studies of algorithmic systems,” as Marres (2025, this issue) puts it in her response. Ideally, this approach facilitates conversations that intervene in straightforward ways, reconsidering the place of the digital and formulating alternatives. New metaphors and concepts can address what is missing in the conversation when there is little room to assemble shared futures or to identify what might be needed for the critical work of composing futures to become possible. As I suggested in my piece, questions concerning regulation are of key interest in this regard because they may offer little room for societal expertise. Even if the aim of technology regulation is to safeguard society from risks and harms, a narrow, technology-focused approach may inadvertently undermine these goals.
Tironi (2025, this issue) notes the risk that breathing space becomes an “epistemic shelter,” referring to “a place where shared prejudices between researchers and communities are reinforced, rather than enabling the generation of dissent about the needs and possibilities of the digital future.” I would argue that even if such shelters temporarily solidify prejudices, they are needed to create conditions for thinking and acting together. An epistemic shelter might be exactly what is required for collaborative agenda-setting or for deciding that some forms of collaboration must be refused. Ultimately, the aim is not to confirm consensual viewpoints or retreat from further collaborations but to arrive at specific framings for what needs addressing and problematizing. This creates the possibility to rethink potential prejudices that have formed while also testing whether those prejudices might be justified, considering the research topic at hand. As Yu (2025, this issue) suggests, the practical conditions for seeking and securing breathing space depend on local policies and data governance. Comparative and complementary approaches can deepen our understanding of what the modes of engagement require and can do in studying the digital.
I never planned to become a troublemaker in my research, but not intervening in a direct manner may simply not be an option when witnessing “epistemic coups” that “can only be enacted through civic engagement,” as Chateau (2025, this issue) puts it. Yes, there are important historical continuities, and epistemological struggles are persistent, but the ways in which society and social structures are being reconfigured with the aid of algorithmic techniques represent a new process, resulting in losses and gains that might be difficult to predict. This requires close attention to detail, as well as proactive and clever approaches to studying the digital. Breathing space serves as the epistemic arena where these approaches can be planned and nurtured.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
