Abstract
This paper reflects on collaborative explorations within the MyData initiative, delineating a non-linear, recursive research approach to emerging technologies in society. Three distinct yet interconnected modes of engagement are discussed: creating trouble, which involves questioning those who shape the technological agenda; composing futures, a technique to claim expertise and broker between different ways of knowing; and securing breathing space, which seeks to establish reflective domains where concepts and actions concerning data practices and algorithmic systems can be deliberated. Together, these modes of engagement suggest the reconceptualization of collaborative explorations as ‘breathing spaces for digital futures’, thereby advocating the proactive integration of social science perspectives into the core of digital society-making. This approach allows us to find new epistemic partners and respond to the epistemic coups we witness when technology experts and policymakers define the terms of debate.
Almost a decade ago, Tuukka Lehtiniemi and I began engaging with the MyData community, which focuses on developing new digital infrastructures and tools to enable individuals to control the use of their personal data (Lehtiniemi and Ruckenstein, 2019). The idea of MyData originated with an Open Knowledge Finland working group, where it was developed collectively. The ensuing white paper (Poikola et al., 2015) outlined the transformation from an “organization-centric” into a “human-centric” system. According to MyData, the right to decide on the uses of personal data collected by organizations, including data on economic transactions, transport, social media engagement, smart home appliance uses, and occupational health check-ups, should reside with the data subjects themselves, instead of being monopolized by organizations.
We referred to the core community members as data activists because they were critically responding to the imbalances in data control and knowledge production capabilities that favor large corporations over individuals. The advocates viewed themselves as promoters of a more equitable and fair digital economy, where data is not just a tool for corporate gain but a resource that individuals can manage and leverage for their own benefit.
Our collaboration with the MyData community is a typical example of a strengthening trend in the study of what is alternatively called “digital,” “automated,” “datafied,” or “algorithmic” society. Digital society can be defined as a variety of societal forms, ranging from the material to the institutional, connected by computational means that can render individual and social phenomena observable and analyzable (Marres, 2017). Collaborative research in multi-stakeholder projects is promoted in the social sciences and humanities through new funding instruments, and researchers are exploring how to make the most of it in the study of digital society and its emerging data practices, algorithmic systems, and AI initiatives (Schäfer et al., 2024). We initiated the collaboration with the goal of opening a reflexive conversation about the political and ideological underpinnings of MyData. Although we shared funding with MyData advocates, we maintained enough independence to define our own aims.
Scholars have written about collaborative, participatory, and action-oriented forms of research for decades, and have immersed themselves in the worlds of the people they study—people with whom they also engage in forms of co-research (Estalella and Criado, 2018; Holmes and Marcus, 2008; Pink, 2022; Rabinow, 2011). What I bring to this discussion is a specific focus on how ongoing dialogue and the testing of perspectives with other disciplines and non-academics can strengthen the study of emerging technologies in society. The ways in which data practices and algorithmic systems become part of society-making cannot be sufficiently addressed through theoretical debates alone. Collaborative experimentation and conceptual work of different orders are critical for examining the conditions under which knowledge is produced, which means revising established research forms and developing both conceptual and practical tools for understanding the evolving dynamics of society (Marres, 2017; Nafus, 2016).
Collaboration is inherently tied to its context, and our cooperation with data activists would have looked very different if we were studying a radical critical tech initiative. Compared to initiatives that use data practices to pose questions about inequalities and justice (D’Ignazio and Klein, 2020), MyData represents “moderate” activism. The initiative is neither anti-capitalist nor anti-market. In fact, many of the MyData community members did not even like the fact that we called them data activists. We shared the data power critique but had reservations about the individualistic emphasis of MyData and the idea that new technical infrastructures would make for a fair digital economy. From our perspective, the visionary aims of MyData were both narrow and unrealistic, while raising a plethora of questions: Who are the “empowered individuals” benefiting from access to personal data? What forms of agency and expertise are needed to manage the proposed infrastructural arrangements? What about collective mechanisms that protect digital society and vulnerable groups from excessive data gathering and harmful data uses? By asking these questions, which are embedded in the history of social scientific inquiry of technologies (Winner, 1978), at the time when new technical tools and infrastructures were being envisioned and built—rather than after the fact—we participated in processes whereby data infrastructure visions become part of society and shape its future.
In academic practice, collaboration with stakeholders, especially commercial entities, might still be seen as risky. In the past decade, critical colleagues have dismissed our collaborative explorations as “practical” and “fashionable” and seen us as “co-opted” or even “neoliberal.” At a time when algorithmic systems are becoming key sites wherein people's relationships with themselves, others, everyday practices, and the broader society are being tested, explored, and defined (Ruckenstein, 2023), disengagement would, however, seriously limit our perspective on the processual aspects of the digital and hinder their integration into research. As Marres and Stark (2020: 425) argue, “Engineering is today the very stuff of where society happens.” They advocate a new sociology of testing that takes as its starting point the fact that “engineering tests the very fabric of the social.” From this perspective, digital society is a huge testing site where data practices and algorithmic systems are part of ongoing society-making by technological means. Engineers and data scientists do not merely engage in technical work, but involve people and societies. Collaborative explorations are a way to participate in society-making and problematize the actions of professionals who steer it.
In the following, I first provide context for the collaborative research that our group, the Datafied Life Collaboratory, conducts at the University of Helsinki, highlighting a research agenda that goes beyond individual projects and focuses on the more emergent and lived qualities of collaborative explorations. With the aid of our experiences with MyData, I then introduce the three modes of engagement—namely, creating trouble, composing futures, and securing breathing space—that define our approach. These modes of engagement are not discussed as a static, three-stage methodological approach, but rather as facets of a dynamic and recursive research process that involves ongoing attuning and refinement. The three modes highlight conceptual experimentation and the testing of experts, examine the epistemic conditions of future-making, and promote the securing of space in which to reflect on and act intentionally in relation to technology developments. With the conceptual work that we do, we aim to respond to the epistemic shifts that we observe when technology experts and policymakers define key questions for digital society. Treating collaborative explorations as “breathing spaces for digital futures” suggests that social scientists need to be more proactive in problematizing technological development and related decision-making processes. Collaborative explorations become redefined as productive spaces of academic renewal, which is vital for engaged scholarship on data practices and algorithmic systems.
Rehumanizing emerging technologies
The MyData collaboration benefited from scholarship that studied how data arrangements might be harnessed to challenge accepted norms and practices and promote social justice, equality, new forms of agency, political participation, and collective action (Baack, 2015; Delfanti and Iaconesi, 2016; Milan and van der Velden, 2016; Pybus et al., 2015). This body of research operated in a context where the capacity to gather, store, and analyze physiological, behavioral, and geolocational data was affecting a broadening array of everyday life domains, from policymaking to policing, corporate marketing to public administration, and media to urban planning. The development, known as “datafication,” refers to the ability to convert attributes of individual bodies and actions, social groups, and organizational processes into digital data (Ruckenstein and Schüll, 2017). It is intimately linked to economic, political, social, and cultural aims, thereby setting the scene for more general trends and concerns with the current socio-technical moment (Kitchin, 2014). This has resulted in a flood of work in the social sciences and humanities that examines the social impacts of data and the algorithms they feed, as well as the politics and ideologies involved in their design and implementation.
While our empirical cases are varied, they are brought together by the research approach or sensibility of “rehumanization.” We study how power and practices, emotions, and expertise co-evolve with technological developments (Kristensen and Ruckenstein, 2018; Ruckenstein, 2023). Across the board, we resist imaginaries of technological development that are devoid of people, exploring the human connections essential to the promotion, evaluation, and experience of data practices and algorithmic systems (Ruckenstein and Turunen, 2020; Pink et al., 2022). In our study of prisoners training AI, for instance, we investigate technical features, material contexts, and social aspirations that define how AI influences human engagements with digital society-making (Lehtiniemi and Ruckenstein, 2022). Similarly, research on social media goes beyond questions that typically animate the politics of algorithms to explore dynamic feedback loops between algorithmic and social systems (Savolainen, 2023), while the study of engagements with platform infrastructures in Hangzhou provides insights that serve as a foundation for speculating about a digital future in which platforms and infrastructures are even more deeply embedded in everyday life, calling for a collective response (Grön et al., 2023).
Algorithmic systems are dynamic constructs that are shaped and evolve through humans making promises and decisions about technology. Seaver (2019: 419) defines them as “arrangements of people and code,” underlining that it is not merely the algorithm, narrowly defined, that has sociocultural effects, but the overall system. Since digital futures are shaped by current inequalities and can accelerate them further, rehumanizing is a way to engage with the human concerns and aspirations in these uneven processes, thus broadening and politicizing perspectives on computational decision-making. Algorithmic processes do not treat people, or the everyday, evenly; rather, they focus on those aspects that can be computationally tackled (Thornham, 2019). This means that some aspects of the everyday are accentuated and amplified, while others are reduced and overlooked.
Both the anticipation of digital futures and how politics and values emerge in practices are relevant for our projects. A pilot trial can serve as an empirical probe to examine expectations about AI (Lehtiniemi, 2023), or junk news sites as a mechanism whereby people express their belonging (Savolainen et al., 2020). Our investigations cover practices and aims expressed throughout the lifecycle of algorithmic systems—including the development stages of a service—and already established company practices that aim for seamless data loops to create intimate relations with those who use their services. In the realm of insurance, for example, customers are invited to allow self-tracking devices to scan and record their lives. In the past, marketing materials persuaded people to trust insurance agents; now, behavior-based insurance products offer consumers the option to invite digital recording tools into their lives, creating new kinds of tensions and negotiations (Tanninen et al., 2021).
Developers of commercially driven digital services are encouraged by their companies to think about automation and AI within industrial horizons of efficiency and optimization, and they tend to exhibit public confidence about the aims and futures of automation. At work, they are surrounded by products and services that align with the forward-looking goals of innovation and profit-making, performing necessary tasks faster and more conveniently, or “without friction,” as one industry expert put it. As I discuss below, however, in our research we tend to urge developers to relinquish overconfidence and acknowledge inconsistencies in their claims and promises, and the uncertainties ahead. What if AI is not merely a tool for efficiency but, in STS terms, an “extension of politics” or “politics by other means”? This message typically appeals to professionals who are willing to reflect on the many trials, errors, and unknowns in developments connected with data and algorithms, and curious to learn about the societal and human aspects of technologies.
When making sense of ongoing collaborations, we use metaphors and concepts to capture the specificities of algorithmic systems and digital society-making, following Wyatt (2004), who argues for detecting existing metaphors and using new ones to construct socially sensitive framings to guide the development of new technologies. The future visions of technology promoters, especially insofar as they concern AI, might be replicating the same script: “Must advance AI…well, because everyone else is doing it.” This script is symptomatic of a growing gap between technological advancement and digital society developments, suggesting a pressing need to complicate existing stories, and generate new ones (Markham, 2021; Pink, 2022). Technology experts and developers use metaphors—such as “black box,” “digital twin,” “human in the loop” and “AI teammate”—to offer selective accounts of algorithmic systems and AI, and it is our task to point out what their metaphors are hiding and suggest alternatives (Räisänen, 2024). This might involve posing questions that appear counter-intuitive: What would it mean to think of data labor in terms of “welfare” or efficiency in terms of “care”? By finding empirically robust and imaginative ways to address ongoing developments, we hope to interrupt and renew mainstream conversations relating to algorithmic systems and AI.
Three modes of engagement
When reflecting on how our experiences with MyData have shaped our current work, I identified the three modes of engagement we advocate, from creating trouble to securing breathing space. Here I introduce them briefly, before engaging with them more thoroughly. This kind of reflective approach aligns with John Dewey's educational philosophy, which holds that learning takes place through processes of discovery and problem-solving. The MyData collaboration enabled us to engage in hands-on activities and discuss shared endeavors, including those that made us angry or insecure, or did not lead to any results. These experiences shaped us as researchers. We learned how thinking develops through confronted problems and how concepts and metaphors may be needed to tackle them. Concepts are not meant to resolve problems, but they facilitate discussion and provoke further inquiry.
Uncertainty is an inevitable feature of digital society, and it is alleviated by the comforting words of experts and consultants who create future scenarios and risk assessments, and build models to make digital societies more predictable. Yet, instead of viewing the digital future as a target of controlling measures, it can be thought of as collectively produced in interactions, practices, and imaginaries (Lanzeni et al., 2022). Digital futures are not determined by technologies, but by how technologies are woven into attempts to master, respond to, and be inspired by ongoing transformations. Here, we are all “at the edge of the future,” as Pink (2022) would argue, and it matters what we do, and what we think is occurring at this critical moment. Tensions and vulnerabilities are features of digital society that cannot be eliminated, but we can learn how they are currently overlooked, handled, and lived with.
All three modes of engagement that I discuss share a willingness to make sense of the messy, ambivalent, and uncertain human involvements in digital society. Like many others, we want to attune ourselves to the concerns and contradictions of digital developments without explaining them away (Ziewitz, 2016). However, our approach goes beyond “staying with the trouble,” as the much-used trope suggests; rather, we actively create trouble by provoking and questioning those who promote and engage in the shaping of digital society. In practice, this means presenting our perspectives and research findings as “interventions,” “probes,” and “tests” to see whether practitioners regard our findings as worth engaging with and whether they accept our vocabulary and frameworks.
The second mode of engagement, composing futures, suggests that researchers act as “brokers” by bringing together different parties and diverging perspectives. If creating trouble aids in making separations, brokering is part of an attempt to coordinate and assemble futures in collaboration with others. Here, collaboration emerges as a technique for reclaiming expertise and bringing together different ways of knowing. Mosse and Lewis (2006) discuss brokers as intermediaries between development institutions and local communities and emphasize that they do not merely transmit aid or ideas from donors to recipients but actively shape how development is done on the ground. Brokers play crucial roles in interpreting and translating development policies and projects, often reshaping them according to local contexts and needs. In the simpler version, we act as mediators between professionals and organizations, but this mediation can also develop into more proactive brokering between different logics and conflicting views to foster collaborations where new perspectives can emerge. This is also a way to test and refine empirical findings with collaborative partners.
The third mode of engagement, securing breathing space, is where the most valuable work in terms of our own research efforts takes place. Breathing space is a response to the need to envision digital futures unrestricted by the relentless pace and pressure of ongoing changes (Cohen, 2013; Minkkinen, 2020). Indeed, the metaphor of breathing space itself emerged from an attempt to articulate a shrinking dimension of autonomy in human-algorithm relations (Savolainen and Ruckenstein, 2022). It is a reminder that breathing is culturally connected to thinking, time, and space. As a breathing space, collaborative exploration turns into a thinking retreat, which is crucial for conceptual work that integrates diverse elements into new forms. To create a space where we can reflect and collaborate, it might be essential to step back from proximity to digital technologies or the environments where technology is currently being utilized. On the other hand, it might also require a certain level of proximity in the sense that we need to understand in detail what the technology does and to whom.
Creating trouble
Critical scholars create awareness of the corporate power of data arrangements, but, in terms of having a dialogue, a more effective way is to confront a professional with a suggestion or a claim about that power. This can be grounded in the notion that power is exercised through small-scale, everyday forms of persuasion and actions that affect others’ actions (Foucault, 1989). Within the MyData community, participants acknowledged the presence of powerful market actors and their surveillant technologies, and they understood that everyday behavior is increasingly modified by algorithmic means. Yet where they stood as experts and professionals in all of this was a question that they might not have considered.
Addressing the situatedness of power represents a rehumanizing move. It steps away from the tendency to treat data power as if it were separate from the people who envision, promote, design, and interact with technologies. Within the MyData community, questions concerning who has or should have the power to steer society into the future generated thought-provoking discussions. Technologists, privacy advocates, and entrepreneurs held opposing views on the means and ends of data control, especially regarding collaboration with public and commercial actors. Tuukka demonstrated in his PhD research (Lehtiniemi, 2020) that MyData advocates could be roughly divided into two groups: those who viewed participation in data arrangements through the lens of market dynamics and those who promoted rights-based and more citizen-centric approaches (see also Lehtiniemi and Haapoja, 2020). In practice, the separation between market-based and rights-based approaches to data management is never strictly binary, as the digital society is co-constituted across state, economy, and civil society. However, this separation characterized the different aims of MyData.
The market vision supports the notion that empowerment follows from the ability to act as an economic agent who can negotiate, manage, and benefit from personal data. The idea is that data holds commodity value, and its efficient, targeted distribution yields personal and social benefits via economic transactions. Consequently, MyData fosters new business models that rely on a more balanced use of personal data as their driving force. The alternative framing, emphasizing rights-based and citizen-centric approaches, presents a conception of societal engagement that does not only depend on market dynamics. This vision advocates ensuring that individuals do not merely manage their data but also engage in the governance and ethical oversight of data practices. It stresses the importance of embedding data practices within a framework of digital rights and civic participation, aiming for a society that respects and promotes both individual and collective interests.
This tension prompted us to consider our position, because both market-based and rights-based approaches promoted narrow ideas of data practices and future digital society (Lehtiniemi and Ruckenstein, 2019). In practice, “rights-based” often equated with “privacy-based.” The differences in informational privacy within Europe and beyond were not the focus of attention (Ribak, 2019). Privacy was typically defined as the guarding of personal boundaries, and therefore limited to a set of defensive rights that overlooked its dynamic, open-ended and collectively resonant qualities (Cohen, 2013; Taylor et al., 2016). Ethical oversight, on the other hand, meant listing principles that were much needed but operated on an overly abstract level (“Who would argue against human rights?”).
Both we and the MyData activists recognized the far-reaching consequences of datafication, as new societal forms affect the production and distribution of knowledge, organizational practices, and governance. Yet there was a lack of a mutual epistemic basis. In the shared meetings we learned how useless we were in terms of steering future developments. We watched a stream of PowerPoint diagrams depicting databases and data flows, against which our ideas about societally “more robust” data activism (Kennedy, 2018) seemed oddly irrelevant. Witnessing how future societies were being shaped by data interoperability and information systems forced us to think about how to broaden the socio-critical imaginary that we had internalized through our disciplinary training. In trying to intervene in the aims and practices of MyData, the initiative began to intervene in us (see Moats and Seaver, 2019). We had to ask ourselves, how did we become so “useless”?
Technology experts had found a way to be part of experimenting with digital society and the means to modify it, while our role was merely to observe, or add a critical sidenote. It felt a bit like crashing someone else's party. The technological imaginary that the MyData community promoted was fed by practical and future-oriented aims. The experts shared an engineering attitude that does politics through infrastructures; thus, while unaware of STS lingo, they were doing politics by other means. Their political statement was that current digital infrastructures need to be reversed and redirected towards fairer and more responsible data practices by building new infrastructures and services. The advocates were outspokenly techno-solutionist: technologies need to solve the problems in digital society that technologies have created.
We could see that emerging data practices increase rather than diminish the need for the knowledge and skills of social scientists and humanities scholars—but their expertise and reflective skills were not included in the debate. This epistemic exclusion opened a rather gloomy vista at the edge of the digital future: technologically advanced but societally underdeveloped.
Social scientists often find themselves assigned a predefined role in large multi-stakeholder research projects. They are expected to occupy the realm of “the ethical” and study trust and responsibility in relation to a specific technology. Or their task is to sort out the regulatory barriers, track the adoption of a tool or service from a user-centric perspective, or figure out how to reduce bias in an algorithmic system that does not even exist. Instead of engaging in open-ended research that would allow them to explore emerging social dynamics, they are offered “a service role” that forces them to narrow down their perspective to what a certain technology or technical system does, or aspires to do.
The dominance of technological expertise in shaping future society suggests that active problematization or “troubling” of collaborations is necessary. Whose futures and “edges” are researchers authorizing with their collaboration? While social scientists may struggle to get the attention of their busy collaborators, in other situations their involvement might be used to elevate the status of a project or an initiative (“social scientists are involved”). For example, Tuukka and I have experience of our research being used as a quality stamp for an ongoing governmental project: by studying it, we guarantee its societal value.
Making choices about how to position themselves in relation to other disciplines and stakeholders allows digital society scholars to take more interventionist stances, while “claiming back alternative futures” (Pink and Salazar, 2017: 18). This proactive stance not only shifts the dialogue but encourages a reevaluation of the scope of contributions. I have taken advantage of opportunities to adopt more interventionist stances by positioning myself as a critic, advocate, or provoker. This kind of shifting of roles means that researchers need to be mindful about the nature of their involvements, and be prepared to articulate their positions clearly and take responsibility for their aims and actions. Yet, if it is true that “engineering tests the very fabric of the social,” as Marres and Stark (2020: 425) argue, it is only logical that scholars test the engineers to see whether they know what they are doing. Indeed, in deliberately positioning myself as a critic, I create a test environment where I become a tester; I approach the professional with a claim or provocation to see how the expert responds. This testing reinforces an “us versus them” division to tease out value clashes and differences in how an issue is handled. Creating trouble is a way to learn about the anticipations, practices, and methods promoting algorithmic systems and AI in organizations. Confrontations are revealing in terms of the kinds of expertise and disciplinary backgrounds that are favored, and the methods and future predictions on which professionals rely.
Adopting a tester role is a departure from many of the less oppositional, caring, and apolitical ways in which researchers discuss involvements with their research participants. This approach does not suit everybody, as it can create uncomfortable and controversial situations that are very far from any type of “feel good” collaboration (see Pink, 2018: 205). Indeed, over the years, I have faced disputes with researchers, policymakers, technology developers, and journalists where it would have been simpler to end the conversation. The first meeting might start off with a fierce argument and end with guilt and regret: emotional responses that make you think. Will I be excluded from future conversations? What kinds of dividing lines emerged in the conversation? Is the disagreement about “us versus them” at all? Confrontations are helpful in making gaps and discrepancies in argumentation visible. Among professionals, heated debates might also be appreciated. They clarify stances in ways that might become instrumental in finding common ground and negotiating further access.
Experts shaping the digital society might be very confident about their doings and eager to dismiss perspectives that do not fit into their agenda. Tamar Sharon (2021: 54) argues that digital expertise is “an entry ticket to previously autonomous spheres, bringing with it other values and interests and granting newfound power to reshape spheres according to those values and interests.” In my experience this means that if the expert knows about technology, he (yes, it is usually he) can also feel qualified to offer authoritative commentary on education, law, health, public administration, medicine—or “people” in general, who are lazy, biased, fallible, ill-informed, or too slow to adopt changes. The envisioned technological solutions to complex social issues, which require historical, political, and social awareness, are presented with such simplicity that they leave social scientists gasping for air. Digital experts inhabit a realm of certainty, whereas researchers who seriously consider the evolving nature of emerging technologies are more likely to be confronted with states of uncertainty and not-knowing, which force them to evaluate whether the questions they ask are even relevant. From this perspective, troubling experts’ perspectives serves as an attempt to derail them, to generate doubt and ambiguity that would make them reflect on their approaches and consider alternative perspectives. It might not work, but when it does, it marks an important step towards meaningful conversations.
Composing futures
MyData advocates pushed us to offer “our solution” to remedy the current ills of the data economy (Lehtiniemi and Ruckenstein, 2019). Since their aims were practical and future-oriented, this was their way of making us more “useful” and “interesting.” After the publication of the white paper, the Finnish MyData promoters were contacted by developers, activists, and policymakers across Europe and beyond, and the inaugural MyData conference, held in Helsinki in August 2016, was a success, attracting 700 participants. The following year, we facilitated a track called “Our Data” at the conference to promote the reimagining of knowledge practices alongside infrastructural data arrangements. By talking about “our” instead of “my” data, we aimed to combine technology-oriented MyData with a critical stance on the individual-centricity of the initiative. We argued that it is not enough to develop data technologies and leave it to the market to correct the economic imbalances (Lehtiniemi, 2017). “Our solution” drew attention to the creation of “data commons” in the form of platform cooperatives and data sharing communities. We wanted to show how collectively steered initiatives can address the inefficiencies of informed consent and privacy protections, as well as asymmetries in data usage and distribution, and to call for rearticulating concepts of participation that have been co-opted by technology companies relying on users’ active data generation.
In retrospect, we were practicing critical engagement but not composing futures in collaboration with others. This would have demanded an approach that remains open and adaptable, not predetermined by disciplinary divides, epistemological commitments, and power dynamics. By trying to influence the direction of MyData with scholarly insights, we hoped to provide additional resources for the aims and futures of data activism. Together, the activities led to attempts to synthesize visions of data futures in a manner that aligns with the approach advocated by Latour (2004: 247–248), where the critic should not be “the one who debunks, but the one who assembles.” In practice, however, we would have had to move beyond our disciplinary comfort zone and use collaboration to strengthen new kinds of interactions; instead, we were still trying to adapt our perspectives to established MyData formats and our disciplinary conventions. As a result, we avoided taking epistemic risks and operated at an overly abstract level, speculating on what it would mean to make data futures more human-centric, naturally bearing in mind that there is no universal “human.” We had a concept and a direction but lacked concrete practices and methods to mobilize others to join us in thinking about how to move forward.
These experiences taught us the importance of asking whether it is possible to compose futures in collaboration and act as “epistemic partners” (Holmes and Marcus, 2008). Professionals engaged in MyData might have been so committed to techno-centric and regulatory frameworks that they overlooked the everyday and organizational realities of the data practices that we studied. At the time of our collaboration, MyData advocates focused their attention on the coming General Data Protection Regulation (GDPR), which provided legal backing for data access and interoperability, both foundational to the technologies that MyData promotes. A specific aspect of the GDPR, data portability, which allows individuals to obtain and reuse personal data across platforms, was discussed as a legal tool that supports MyData projects by permitting the transfer of data between digital platforms. Contrary to what activists predicted, however, data portability has not aroused great enthusiasm as a means of assembling digital futures, suggesting a disconnect between MyData visions and practical, digital, everyday realities.
Regulation and ethical guidelines have traditionally served to strengthen the collective foundations of society. Yet policy proposals often overlook the everyday aims and unease that people have about data practices and interactions with algorithmic systems. While policymakers and ethicists place their trust in detached policy programs and ethical guidelines, they talk past the ethical tensions and concerns that emerge through everyday interactions with technologies (Pink, 2022; Pink et al., 2022). In our projects, exploring future developments has been easier with technology experts and consultants than with policymakers, who primarily listen to advocates and other policymakers. Although we may interpret ongoing developments differently, there is potential for dialogue since we both address the generative qualities of everyday life. Any consultant working with “real people” or “in the wild” will inevitably learn about the many ways that technologies become entangled with everyday life circumstances and environments.
Engagements with technologies trigger irritation, fear, and frustration, which are emotional responses directed at their power and limitations, and these need to be taken seriously when thinking about digital futures (Choroszewicz, 2022; Ruckenstein, 2023). In our experience, it is not difficult to find professionals in companies who can relate to this. Their daily work may also involve negotiating the tensions between ethical and technical ideas and values. Ribak (2019) describes how privacy regulations and the everyday challenges to privacy, as well as the practical solutions developed, are mediated by locally valued aims, and technology consultants often recognize their role as mediators and brokers between regulatory aims, values, and logics. This experience of brokerage provides common ground for composing futures in collaboration with others.
The concrete work of composing futures often takes place in workshops, which have become an established methodology in collaborative research (Ørngreen and Levinsen, 2017). We have used workshops in an interventional and generative manner to clarify, define, and redefine concepts and future visions (Ruckenstein and Trifuljesko, 2022), as they enable iterative approaches wherein insights from earlier sessions can be integrated into emerging ones—flexibility that is crucial when dealing with developing phenomena where assumptions and understandings need to be continuously revised. Shared discussions create a firmer foundation for collaboration, as participants learn how commonly used concepts, such as data, algorithms, human-centricity, or AI futures, can vary in meanings.
Workshop participants might be aware of the society-making qualities of algorithmic systems, and be prepared for critical and open-ended debates, while others have little experience in thinking about exactly how algorithmic systems are entangled with social and societal dynamics. Yet, contrary to all the talk about the black-boxed nature of algorithmic systems that remain inaccessible to public scrutiny, people often have no difficulties in understanding what these systems aim to do (Bucher, 2016). They care about the consequences of algorithmic feedback loops for their everyday lives and future interactions as citizens, raising questions about human responsibility and autonomy. In one of the workshops, for instance, we learned that it is often autonomy that people care about, rather than privacy, suggesting an exploration of dimensions of autonomy in service development and human-algorithm relations (Savolainen and Ruckenstein, 2022; Tanninen et al., 2022).
Workshop engagements aid in rehumanization efforts by departing from the technocentricity of the current debate and shifting the focus to human arrangements that are built with technical systems. Automation or algorithmic systems per se are typically not the problem; rather, the problem stems from the limitations and consequences of current imaginaries and failures in their implementation. Over the past year, we have experimented with a tool called DEDA (Data Ethics Decision Aid), developed at Utrecht University’s Data School (Franzke et al., 2021) and originally conceived as an auditing and assessment tool. For us, however, it has worked to bring together students and professionals to share their insights into the uses of data and algorithms. We see DEDA as an anticipatory technique that proactively tries to repair data practices and algorithmic systems. Not merely a tool to be applied, it also supports the emergent quality of relationships that form when people are moving forward in complex terrain.
Securing breathing space
As part of our MyData collaboration, we proposed that by reimagining knowledge practices the initiative could have better aligned with social dynamics, as the community would have had to engage in conversations about how data relates to everyday realities and potential data futures: Who constitutes the “we” in discussions about data? In the past years, the closed nature of large data companies and the opacity of their business models have become increasingly prominent features of digital society. Predictive analytics and recommender systems predefine data for us in ways that are almost impossible to uncover, and the more intimately data capturing technologies are tied to daily lives, the more problematic the informational secrecies become (Cheney-Lippold, 2017). By studying self-tracking practices, I have learned how personal data can be aggregated to observe processes beyond the individual by identifying collective patterns that have to do with health, everyday mobilities, time use, and environmental exposure (Nafus, 2019; Pantzar et al., 2017). This raised questions about the formation of new kinds of data collectives. An influential aspect of data is the possibility it offers to transcend and bypass familiar ways of approaching bodies and lives. Data can act as a collective resource for “data feminism,” for example, indicating issues related to inequalities and other neglected concerns (D’Ignazio and Klein, 2020).
The MyData initiative, however, rarely incorporated findings from research on emerging data practices. Since the aims of MyData aligned with the GDPR, as well as the European data strategy and its suite of new regulations, the initiative began to turn more consistently towards policy making, stabilizing in ways that made it increasingly difficult for us to view it as a venue for composing futures. Consequently, we, along with many others who had been active in the early days of MyData, began to explore alternative avenues where ideas about data futures could have broader resonance and impact. Soon after, we were gathering evidence for a European mapping exercise by the Berlin-based NGO AlgorithmWatch.
What we observed with MyData, coupled with subsequent calls for AI ethics and regulation, reflects a desire to ensure that the road to the digital future is properly managed. When discussions become complex, technology experts might conclude them with the appeal, “We need to regulate.” Yet they rarely specify what exactly needs regulation, how it should be implemented, and what kind of oversight it requires. Regulation is presented as a magic solution. As a space of regulation, “the digital society” is extremely complex because it is entangled with various kinds of societal and administrative forms. Institutions that aid in implementing and enforcing regulatory aims are gradually being built. While the digital society is being stabilized with policy frameworks and oversight mechanisms, data usages and associated algorithmic systems continue to evolve. They are shaped by everyday acts of knowing and doing, and influenced by the resilience, curiosity, and related feelings of losses and gains that people experience in their daily lives. This ongoing process challenges the efficacy of regulatory frameworks and underscores the need for policies that are adaptable and inclusive of diverse human experiences.
The disconnect between MyData’s future vision and the digital everyday is symptomatic of enduring tensions. Recent regulatory approaches in the European Union aim to address consequences caused by algorithmic technologies. To do so, regulators aim at identifying and policing those actors who are (seemingly) in control of the technology (Hakkarainen, 2024). For instance, the European Union’s AI Act regulates artificial intelligence by identifying providers and operators steering the technology and imposing obligations on them. I was part of a national expert group for research on AI and digitalization, promoted by the Finnish government (2020–2022), that reviewed various draft versions of the AI Act: a tedious task that clarified that stabilizing AI as an object of regulation requires immense effort. This recalled our earlier experience with MyData: once again, I was useless in terms of future-making. The drafts of the AI Act were commented on in the expert group by computer science professors, lawyers, and civil servants. I initiated a discussion about the societal goals of the AI Act and its underlying “European values,” but there was little opportunity for this conversation, as the AI Act process was already underway. The final text presents a major challenge for lawyers, and it remains to be seen whether it will steer AI actors and future societal developments in the desired direction. Instead of sociologists and anthropologists, who know quite a bit about risk (Beck, 1992; Douglas and Wildavsky, 1983), it will be lawyers who sketch the boundaries of “low risk” and “high risk.”
In the midst of ongoing changes, we offer our collaborators “breathing space” to address both tensions around digital expertise and gaps and inconsistencies in regulation and policy making. Informed consent is ineffective if people do not understand what they are consenting to. Data portability has no appeal without ideas on how to use machine-readable datasets. Regulation might enable the aims of the tech-savvy, but what about the rest of us? Discussing the metaphor of breathing space—the need for personal space to foster goals, self-definitions, and reasoning—with Laura Savolainen, we concluded it was a crucial dimension of autonomy in human–algorithm relations (Savolainen and Ruckenstein, 2022). The metaphor can, however, be fruitfully broadened to cover a shared space for articulating and grappling with issues at stake in digital society-making. In a world marked by uncertainty and an uneven and fragmented understanding of what needs promoting, breathing space safeguards the ability to think critically, make plans, and reflect on and foresee how to steer complex developments (Cohen, 2013). Minkkinen (2020) discusses privacy as a “breathing space for futures” in the midst of rapid technological changes and datafication. Privacy is not simply “a barrier” against intrusions, but an opportunity to reserve autonomous spaces for personal and collective future-making. This rehumanizing move reconfigures privacy in an OurData way, as a societal value that fosters long-term thinking, creativity, and the human ability to challenge and reimagine paths for digital society. Privacy creates the possibility to imagine alternative futures beyond current technocentric trajectories.
The DEDA tool introduced above is an example of how to create conditions for breathing space. It offers a chance to negotiate the aims of a data project in a diverse group of experts or pause the development process for structured reflection and deliberation. In an organizational context, a breathing space can be seen as a guard against pressures that might compel individuals to act without proper guidance or against their own interests and values. In the course of collaborations, we learn about the ambivalence, frustration, and ethical stress that professionals feel in organizations when they cannot make informed or responsible choices. This reminds us how important it is not to approach technology professionals merely as “carriers of values” or undermine their reflexive capacities. We gain empirical insights when our collaborators explain their dilemmas, justify their everyday actions, and share concerns about the lighthearted way management discusses AI implementations, or how ethics become sidelined in favor of profit-making (“off the record”). Taking time to reflect on organizational practices with researchers is poorly recognized as research impact, but in our collaborative research it is probably the process that adds the most value to professionals. The shared reflections can later manifest as new vocabulary for framing the problems at hand. At times the experience of having space to breathe leads to aspirations for a new career, or even a PhD project.
To steer our own research, we need to guard our own breathing space alongside securing it for our collaborative partners. Breathing space is fostered by a proactive attempt to envision digital developments unrestricted by disciplinary conventions, policy expectations, and funding pressures. Through shared discussions, breathing space is what guides our work to new areas of inquiry, and what we have been observing lately is how industry-influenced conversations concerning algorithmic systems and AI narrow inequalities to bias, justice to fairness, values to preferences, and anticipatory practices to mere predictions. Technology-driven applications of terms like trust and transparency are crude simplifications in light of rich conceptual histories in the social sciences. For instance, Pink and colleagues (2018) challenge the notion of trust as a transactional element in human–technology interactions and propose seeing it as more of “a feel”—an anticipation of future events. AI systems cannot be intrinsically labelled as trustworthy because trust is a context-dependent characteristic rather than a fixed quality or tangible asset that can be obtained. Strathern (2000), on the other hand, taught us to engage with transparency by asking what visibility conceals. The way transparency is employed in discussions about algorithmic systems and AI should prompt social scientists to approach questions of visibility and invisibility on their own terms, and create vocabulary that better captures what is going on.
Reflections that emerge in conversations are part of breathing spaces for digital futures, but also material for them as they identify problems that need addressing, and subsequent conceptual work. Once we have coined a concept that addresses a specific aspect of AI, we can use it to explore whether it applies to other cases and has potential for composing futures.
Tuukka’s research on a risk prediction pilot in the field of social work demonstrates how the turn to prediction overlooked the anticipatory nature of social workers’ interactions with their clients (Lehtiniemi, 2023). Rather than relying on experienced social workers, datasets stored in social care registers and related databases are mobilized as a resource for discovering patterns and features that can aid in intervening in the client’s situation. After this research was published, we held a workshop with social workers to discuss their views of their expertise and AI implementations. We offered them breathing space. The workshop participants were not against AI; on the contrary, they wanted more AI to manage their heavy workload. Yet they did not need a risk prediction tool. The beneficial side-effect of the failed pilot was that AI was no longer something abstract for the social workers, but a concrete tool that they could evaluate. Instead of AI that tampers with their “intuition,” they would benefit from AI that filters out unnecessary information.
Responding to epistemic coups
The three modes of engagement that I have discussed are aspects of an ongoing research process, each promoting different kinds of interactions. They can be read as attempts to return data practices and algorithmic systems to their messy worldliness. In addressing “contests over interpretations of emerging realities” (Holmes and Marcus, 2008: 84), scholarly roles can be thought of as small-scale exercises of power, raising issues of who gets to affect the ideas and actions of others. If digital society scholars are “studying up” (Nader, 1969), meaning that they are researching those with the power to steer future developments, experienced anthropologists and STS scholars can find themselves ignored and undervalued in social interactions. Yet instead of accepting marginalization, the process of creating trouble suggests that social scientists should place their epistemic frameworks more boldly in dialogue with technocentric ideologies. This is also a way to make their work collaborative, rather than merely cooperative.
Composing futures in collaboration with other disciplines and professions presents an opportunity to develop “metaexpertise,” a critical awareness of how emerging digital societies prioritize certain interpretations of realities, with associated forms of expertise, at the expense of others. It is perhaps ironic that the quest for composing futures in collaboration has made us semi-professionals in debating technology developments. If we want to look outward from academia, we need to become “solutionists” to some degree and offer research findings as recommendations. I have been asked, for instance, how our research identifying dimensions of autonomy in human–algorithm relations contributes to societally desirable technology design. It is not difficult to imagine that in terms of strengthening autonomy, algorithmic systems should accommodate varied contexts and qualifications of users, thereby enhancing the capacity to make informed choices. People should be equipped with the understanding of how and why algorithmic decisions and recommendations are made. Plurality in algorithmic systems can only be achieved through designs that augment human capabilities and choices rather than diminish them. These and related goals have occupied activists, who “imagine and create alternatives to the techno quo” (Benjamin, 2019: 12). Such alternatives can clarify that safeguarding opportunities for human intervention and oversight in algorithmic decision-making requires systems where crucial decisions are reviewed, and if needed, can be overridden by human operators (Masso and Kasapoglu, 2020). Mechanisms to provide feedback on algorithmic outputs to adjust and improve them can better ensure that human autonomy is protected, rather than compromised.
The willingness to offer recommendations about “good tech” makes social scientists appear useful, yet it might mean that scholars are promoting somebody else's interpretations of emerging digital society. Unlike tech proponents and policymakers who offer assurances to guide us through the uncertainties and complexities of the present, social scientists work with the ambivalences and tensions that define emerging technologies in society. This is their strength in composing futures. Their interpretations clarify how meanings and values that define algorithmic systems are negotiated, pointing out that they might not be stable in any way, as they relate to different forms of knowledge and changing notions of the harms and benefits of implementing such systems. Rather than critiquing values such as efficiency and optimization that are associated with algorithmic systems, it is more fruitful to approach emerging technologies as mediators of values. Efficiency, for instance, is not inherently detrimental to values such as solidarity or autonomy, but its impact in relation to other values needs to be examined.
At its best, collaborative exploration offers time and space for working through epistemic differences, reflecting on values and desires in relation to algorithmic systems, and making room for options and decisions about steering them. Breathing space might emerge when you least expect it—after a panel discussion, or as a response to a social media post. In terms of formulating new ideas and exploring prospects, breathing space is where candid, undefined, and hypothetical ideas about algorithmic futures can be expressed, and identifying its importance has taught us to foster it in our own projects. Yet there is never enough time for shared reflection, raising the question of how we have managed to build organizations and institutions where breathlessness is a collectively shared experience. The scarcity challenges us to protect moments of contemplation and shelter “the processes of play and experimentation from which innovation emerges” (Cohen, 2013: 1906). In breathing spaces for digital futures, we can articulate our stances and evaluate ongoing debates and practices in ways that speak to our own disciplinary backgrounds, but also reach beyond them. This is the space that allows us to respond to the epistemic coups we witness when technology experts and policymakers define the terms of debate.
Ways forward
When we began our MyData collaboration, we were perhaps more optimistic about the ability of social scientists to grasp and steer future developments. In our research, the learning curve was steep, as there were countless options to follow datafication developments and we were busy trying to understand what data activists were doing, work that brought us to a very different place. We are still exploring the connections that data practices and algorithmic systems make, and what has changed and will be changing because of those connections, but we no longer try to fit our perspectives to pre-defined frameworks. Empirically grounded research demonstrates the importance of a thorough understanding of social dynamics and institutional practices for the successes and failures of algorithmic technology. The cases we examine introduce us to the realms of other disciplines, challenging us with the limits of our expertise. If we want to say something about prisoners training AI, we need to know about the prison system, and when we study AI in the field of social work, the essential starting point is to understand social work as a profession. The questions posed by emerging technologies are complex and multifaceted. Studying and observing current developments often falls into the “cracks” between disciplines, yet disciplinary promiscuity no longer threatens us. Instead, it appears to be a logical way forward.
I have suggested that social scientists need to deepen their perspectives by asserting their frameworks and concepts in dialogue with other disciplines and non-academics. Even if this might lead to an uncomfortable reexamination of their own frameworks, the epistemic risk is worth taking. A decade ago, I did not see as clearly how emerging technologies test us, our social relations, our societies, and our planet. Today, people are reacting to these developments by pushing back, improvising with technologies, and creating alternatives. Often, they are not in academia but part of NGOs and corporations. Collaborating with other disciplines and non-academics presents an opportunity to focus on shared problems and envision alternative future trajectories. The question of what constitutes a livable digital society needs to be posed in a manner that increases, rather than decreases, the chances of its success. Conversations can be used for epistemic partnering and planning concrete practices and methods that mobilize joint endeavors. The identified modes of engagement—creating trouble, composing futures, and securing breathing space—offer ways to examine mutual problematizations. By embracing these approaches, it is possible not merely to observe and criticize digital society-making, but to learn about its composition and breathe life into it.
Acknowledgements
I would like to thank the researchers of the Datafied Life Collaboratory, as well as those involved in the REPAIR and REIMAGINE ADM projects, for their active participation in collaborative endeavors, even when the direction of our work is often uncertain. Kirsikka Grön, Jenni Hakkarainen, Maja Hojer Bruun, Kaarina Nikunen and Laura Savolainen provided valuable suggestions at different stages of writing this piece. A decade-long cooperation with Tuukka Lehtiniemi has been pivotal in shaping and enriching my ideas around collaborative explorations.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the REIMAGINE ADM funded by CHANSE ERA-NET Co-fund programme, Grant Agreement no 101004509.
