Abstract
This commentary responds to ‘Collaborative Explorations as Breathing Spaces for Digital Futures’, which highlights the role of social scientists in shaping ethical and inclusive digital governance. The commentary reflects on the three collaborative research modes proposed by the original article – creating trouble, composing futures, and securing breathing space – demonstrating how social scientists challenge dominant technological narratives and mediate between stakeholders. It focuses on ‘breathing space’ as a key concept in emphasising critical reflection on and engagement with digital transformation – a crucial role that social scientists can play in multi-stakeholder research to develop human-centred digital futures. While the analysis focuses on European contexts, the commentary advocates for a broader, global perspective on data activism, governance, and AI ethics.
Introduction
The article ‘Collaborative explorations as breathing spaces for digital futures’ is a visionary, courageous, and thoughtful reflection on the rehumanising move to empower individuals to take control of and manage their data and to engage in the governance and ethical oversight of data practices. It discusses a reflexive methodological approach that uses collaborative, participatory research methods to co-design parameters for research, public engagement, and policymaking. Through a discussion of MyData, the author introduces three modes of collaborative research engagement: creating trouble, composing futures, and securing breathing space. These modes aim to address the complexities of datafication and the societal influence of algorithmic systems.
The article reflects on the author's experience as a social scientist collaborating with data activists and technologists. It provides valuable insights into the challenges of interdisciplinary research, particularly in a field dominated by technological expertise. It provides insightful background on MyData's evolution from a technology-centric initiative to one striving to address issues of personal data control. This process is marked by ‘trouble making’ – the article calls it ‘creating trouble’, in the spirit of Donna Haraway's ‘staying with the trouble’ – by social scientists who emphasise critical engagement with complex issues, the need for public debate rather than merely technical solutions, and the insistence on making visible the power relations and tensions among powerful stakeholders such as the state and corporations. Questions of trust and trustworthiness, public interest and surveillance, and commercialisation and citizen rights in data usage can thereby be foregrounded.
The article effectively highlights the ideological tension between market-based business models and rights-based, citizen-centric approaches to data practices. Such tension can be mediated through the role of social science researchers as ‘brokers’ among diverse stakeholders – data activists, technologists, policymakers, citizen interest groups, civil society organisations – to collaboratively envision digital futures that are human-centred. Such a human-centred approach to digital futures hopes to nudge a shift away from the market-centric model of corporations and the organisation-centric model of government agencies by emphasising ‘data commons’, which allow communities to have control over their own data and shared resources. Such an approach also creates ‘breathing space’ that allows researchers and their collaborators to step back from the logic of connectivity, speed/efficiency, surveillance, and information abundance (and overload) – the pressure that rapid technological change has brought. Through shared discussions in workshops, ‘breathing space is what guides our work to new areas of inquiry’ (p. 19).
The discussion of breathing space adds a unique dimension to the study of digital futures. Defined as moments of reflection and critical distance, breathing spaces allow researchers to stop chasing the speed of digital transformations and to reflect and deliberate on the trade-offs and broader implications of speed, connectivity, and convenience. As such, breathing space serves as a counterpoint, advocating for a more purposeful and mindful approach to digital innovation, much like the ‘slow tech’ movement.
The three modes of engagement of social scientists with multiple stakeholders in the collaborative research process ‘can be thought of as small-scale exercises of power’ (p. 20) by scholars who are otherwise marginalised in digital societies dominated by scientists, technologists, regulators, and policymakers. Securing breathing spaces, in particular, allows multidisciplinary researchers and participants of diverse backgrounds to foster more inclusive, thoughtful, and ethical imaginaries of our digital futures. In the process, social science and humanities scholars, such as those in philosophy, sociology, anthropology, and communication, can not only broaden the scope of reflexive engagement with critical issues in our digital societies but also encourage stakeholders to take a human-centric approach to technology and system design by prioritising human values over efficiency and optimisation.
Social scientists and inter- or cross-disciplinary collaborations
The social scientist can be a critic, advocate, provoker, broker, tester, or troublemaker in inter- and cross-disciplinary dialogues and collaborations. As pointed out above, the idea of ‘creating trouble’ resonates with the critical role that social scientists play in questioning the assumptions held by technologists and bureaucrats, while ‘securing breathing space’ underscores the need for rehumanising digital society and futures. This is easier said than done. Not everybody has the good sense of humour, communication skills, and open-mindedness that the author of the article has; nor does everybody command trust and authority in collaborative projects, which typically marginalise ‘troublemaker’ social scientists. This is seen both in interdisciplinary collaborations among social scientists and in cross-disciplinary collaborations between social scientists and science and technology experts.
The social implications of digital transformations are complex and multifaceted in the AI era. Interdisciplinary collaboration among social scientists has become increasingly crucial to address AI-related challenges. The article advocates composing and imagining digital futures in collaboration with data activists, civil society organisations, civil servants, and public policymakers (as well as technology experts), who may be versed in fields like law, economics, politics, and psychology. They may remain ‘open, adaptable, not predetermined by disciplinary divides, epistemological commitments, and power dynamics’ (p. 13). However, when discussions of the social aspects of technology are confined to ethics, governance, and policy, it remains to be seen whether the resulting regulatory frameworks can be ‘adaptable and inclusive of diverse human experiences’, and whether scholars such as sociologists and anthropologists would feel less ‘useless’ (p. 17).
When social scientists and computer scientists collaborate, they bring complementary expertise that can lead to innovative solutions for understanding and addressing complex societal issues. In an ideal situation, such transdisciplinary collaborations would combine the social scientists’ expertise in human behaviour, power relations, and social dynamics with the computer scientists’ technical skills in algorithms, data analysis, and computational modelling. For example, the COVID-19 pandemic saw collaborations between computer scientists and social scientists to analyse public health data (Budd et al., 2020). Computer scientists use methods such as chord diagrams, modularity analysis, and eigenvector centrality analysis in Social Network Analysis to analyse large corpora of data (Li et al., 2024), while social scientists help interpret mobility and social behaviour data.
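For readers less familiar with the network methods named above, eigenvector centrality can be illustrated with a minimal sketch. This is not the method of Li et al. (2024) or of any cited project; it is a generic power-iteration implementation on a hypothetical toy network, with made-up node names, intended only to show what the measure computes: a node's score depends on the scores of its neighbours, so being connected to well-connected nodes counts for more than raw degree alone.

```python
def eigenvector_centrality(adj, iterations=100, tol=1e-8):
    """Eigenvector centrality for an undirected graph given as an
    adjacency dict {node: set(neighbours)}, via power iteration."""
    nodes = list(adj)
    # Start with a uniform score for every node.
    scores = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # Each node's new score is the sum of its neighbours' scores.
        new = {n: sum(scores[m] for m in adj[n]) for n in nodes}
        # Normalise to unit Euclidean length to keep values bounded.
        norm = sum(v * v for v in new.values()) ** 0.5 or 1.0
        new = {n: v / norm for n, v in new.items()}
        if all(abs(new[n] - scores[n]) < tol for n in nodes):
            return new
        scores = new
    return scores

# A hypothetical toy interaction network: 'hub' talks to everyone,
# 'a' and 'b' also talk to each other, 'c' only talks to 'hub'.
graph = {
    'hub': {'a', 'b', 'c'},
    'a': {'hub', 'b'},
    'b': {'hub', 'a'},
    'c': {'hub'},
}
centrality = eigenvector_centrality(graph)
# 'hub' scores highest; 'c' lowest despite also touching the hub.
```

In practice such analyses use mature libraries (e.g. NetworkX offers an `eigenvector_centrality` function) rather than hand-rolled iteration; the sketch simply makes the underlying computation visible.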
In cross-disciplinary collaborations, the human-centred approach has become accepted as a natural overlap between STEM (science, technology, engineering, and mathematics) and social science fields. Putting real people at the centre of technology design and development processes, the human-centred approach begins with knowledge and understanding of humans. It is a problem-solving technique that enables innovative solutions – or the mapping of solutions as products or services – to resonate with human needs and align with human values. The Centre of Excellence for Automated Decision-Making and Society (ADM + S), funded by the Australian Research Council, for example, brings together leading researchers in the humanities and the social and technological sciences to create the knowledge and strategies necessary for responsible, ethical, and inclusive automated decision-making (admscentre.org.au). In its research on measuring misinformation and people's fact-checking behaviours, the centre's social scientists and computer scientists use creative co-design methods to map the field; they also leverage simulation modelling and sensing technologies developed by computer scientists to characterise user behaviour through physiological signals – particularly cognitive load, affective arousal, and valence – in order to quantify bias in information engagement (Spina et al., 2024). This is one of many examples of multi-stakeholder, cross-disciplinary research that the ADM + S Centre has developed.
From human-centred design to data-driven social research, the collaboration of social scientists with computer scientists calls for metaexpertise within positivist frameworks and leads towards methodological cosmopolitanism. Metaexpertise is ‘a critical awareness of how emerging digital societies prioritize certain interpretations of realities, with associated forms of expertise, at the expense of others’ (p. 20). Social scientists must learn how and when ‘to claim expertise and broker between different ways of knowing’ (ibid). Otherwise, they would find themselves relegated to a ‘service role’, where their primary task is to address ethical or regulatory concerns, or to being a ‘social token’ and an add-on in computer-science-driven projects. In some cases, social scientists are expected to exercise self-censorship to comply with science and technology norms (Burch et al., 2023). This is a concern that the article raises at its end, calling for a more proactive role for social scientists and positioning them as equal partners in shaping digital futures.
The challenges of multi-stakeholder and cross-disciplinary collaboration call for shared reflection on methodological parochialism. When social scientists meet multiple stakeholders with diverse backgrounds, agendas, and mindsets towards complex technology-induced problems, it is not enough simply to understand and foreground power dynamics in digital society. It is more important to have the human empathy and skills to engage and associate with fellow human beings, from technology design to conducting research on the social impacts of technologies. And when conducting research on social impacts, social scientists can contribute more than translating difficult and complex technical norms and systems to enable wider public education and acceptance.
A methodological cosmopolitanism responds to the ‘cosmopolitan condition’ (Beck, 2007); it is an epistemology that transcends national and disciplinary boundaries, and worries about risks, by adopting a trans-local and trans-national perspective that values diversity, inclusivity, and interconnectedness. In the context of multi-stakeholder and multidisciplinary research on technology (such as data, algorithmic, and AI systems and infrastructures), a methodological cosmopolitanism involves creating frameworks, methods, and practices that embrace and incorporate a wide range of perspectives, knowledge systems, and cultural contexts. The aim is to ensure that the benefits and risks of technology are understood comprehensively and distributed equitably across different contexts, that the needs of diverse populations are met, and that a wide range of human rights and values are upheld.
Implications for digital society and future research
While the article focuses on the European context, particularly through its analysis of MyData, it would benefit from a broader, global perspective in future research by integrating perspectives from other cultures. Data is often described as the oil that powers the AI engine, but what constitutes ‘good data’ is defined differently, depending on differing views of privacy, consent, relevance, ownership, and data sovereignty. Similarly, AI for social good – AI initiatives to deliver socially beneficial outcomes – has different approaches and frameworks, ranging from the SDG (Sustainable Development Goals)-based framework of the UN's AI for Good Global Summit, to the impact-oriented framework of Google's Data Science for Social Good Fellowship, to the human-centred design and participatory approach of MIT Media Lab's AI and Human Rights Lab.
Data activism takes diverse forms in different regions, reflecting varying cultural, political, and economic contexts. For example, data governance in countries like China involves a top-down, ‘Big Brother’-driven approach to ensure national security and competitive advantage in science and technology. 1 This departs from the European focus on individual rights. In authoritarian contexts, data activism may take new forms and characteristics. An example is a citizen-led environmental data initiative in China: starting as bottom-up data activism, the initiative was co-opted by institutional actors and reshaped by state-sponsored environmental campaigns (Sun and Huang, 2021). Future research could explore how data activism and collaborative research practices are shaped by regional and cultural differences, offering a more comprehensive and comparative view of digital society and developing a globally conscious digital society framework.
The concept of breathing space is both innovative and timely, offering a counterbalance to the fast-paced, efficiency-driven ethos of digital society. The concept can be applied to different contexts, and its practical applications can be explored in specific cases. For example: What are the practical implications of breathing space for policy and governance? How might regulatory frameworks incentivise collective data governance or mitigate the influence of big tech? How might breathing spaces be institutionalised within organisations, in technology processes, or in policy frameworks? By addressing these questions, social scientists can help bridge the gap between theoretical critique and practical action, providing alternative visions or imaginaries for collective decision making.
Footnotes
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
