Abstract
Artificial intelligence (AI) presents transformative opportunities and complex ethical challenges. This paper adopts a socio-technical perspective, emphasizing that AI is not an isolated technology but rather deeply embedded in evolving societies. It critiques governance models, particularly rule-based approaches in the West, which, whilst addressing some risks, often stifle innovation and fail to engage diverse societal needs. This paper proposes an alternative framework integrating Western risk-management strategies with Chinese ethical principles rooted in Confucianism and Daoism. These principles emphasize dynamism, flexibility, relational stakeholder participation, and context-sensitive solutions to align AI with societal and environmental goals. The proposed model advocates a co-learning approach to AI ethics, recognizing the dynamic interactions among developers, users, policymakers, and the public. By fostering participatory governance and adaptive ethical frameworks, it addresses both known and unknown risks while promoting equitable, sustainable development. It calls for cooperation to harness AI's transformative potential, ensuring it evolves in ways that benefit society and mitigate harm.
Introduction: A socio-technical perspective
Artificial intelligence (AI) is not an isolated technology; it is deeply embedded in, and co-evolving with, human society.
From a Science and Technology Studies (STS) perspective, AI is an outcome of digital communication technology and of a data (digitized information) society that arose with the advancement of the internet and of computational and software programming capabilities. AI can realize revolutionary potential through its growing ability to mimic human cognition and reasoning and through its deep learning capabilities. Today's technology enables AI systems to process vast amounts of data at speeds far beyond human capacity.
AI inevitably reflects the biases, intentions, and limitations of the individuals and institutions that create it. Human cognition and knowledge are shaped by diverse cultural, social, historical, and epistemological backgrounds. This ontological–epistemological (Brad, 2007) diversity means that no two individuals think or process information in the same way. As a result, AI systems, which replicate human decision-making, are influenced by this inherent complexity.
AI, like all other technologies, is deeply intertwined with human society. Scientific knowledge is socially constructed (Barnes, 1974; Bloor, 1976). Theories and practices of science, technology and innovation are influenced by social factors, institutional norms, and human interests (Knorr-Cetina, 1981; Shapin and Schaffer, 1985). Science, as a socially embedded process, is not an objective truth. Technology and innovation are not fixed products but continuously evolving concepts that have been subject to control and manipulation under the current power structures of our society. In weighing the beneficial as well as the detrimental social impacts of science and technology, we must recognize that the powerful things we have created are, by their nature, double-edged swords. Humanity faces pressing challenges of sustainable development and climate change, as visually illustrated in Edward Burtynsky's documentary films.
Historically, technological development has largely been driven by corporations and the commercial sector, often with outcomes that have fallen short of societal expectations. In the recent past, societal development has been modelled on Western modernity, which operates under capitalist market systems. Early industrialized nations, learning from past experience, were well aware of the risks posed by the increasing polarization of wealth and health within their own societies and across the world. While leading global science and technology development, the governments of these countries, especially in Western Europe, implemented regulations to mitigate the detrimental impacts of technology on society. Principles for the governance of science and technology development, such as Responsible Research and Innovation (RRI), which called for general public participation, and Corporate Social Responsibility (CSR), which created societal mechanisms to monitor corporate activities, have constrained corporate greed that can be detrimental to society. However, public engagement often failed due to information asymmetry, and CSR, motivated by profit (Friedman, 1970), did not result in significant changes, particularly during crises like the COVID-19 pandemic.
Ethical governance of technology and innovation is long overdue. At the dawn of broad AI application, without effective governance, current problems will become further entrenched, and the impacts on society and the climate could become more detrimental and irreversible.
Ethical AI is a must. A sound understanding of AI's potential, together with efforts to lead AI development toward the betterment of society and a sustainable environment, could bring new light to this troubled world. The broader issue lies in addressing the socio-technical challenges that shape humanity's development through science and technology.
This paper argues that effective ethical governance of technology and innovation is urgently needed. AI has the potential to play a transformative role in modern technological development, involving a complex chain from design to implementation. This chain includes scientists, technical designers, intermediaries, commercial entities, users, the general public, and policymakers.
Challenges facing AI ethics
AI governance is often generalized as a matter of managing risks associated with data—the foundational element underpinning AI systems—with a primary focus on issues like personal data security, privacy, transparency, data leaks and algorithmic biases. While important, this narrow framing creates a misleading perception for the public, oversimplifying AI as a uniform technology with a single set of ethical concerns. As discussed earlier, technology is shaped by the social, political and cultural contexts in which it is developed. As such, the ethical challenges surrounding AI vary depending on the actors involved and their particular interests, the points of interaction within the technology's lifecycle, and the broader societal context. These issues also differ across countries, communities, and social groups, reflecting diverse value systems, interests, and priorities.
In Western countries, AI projects have often encountered resistance due to public mistrust, especially around the misuse of personal data by corporations. This scepticism is rooted in historical experiences in capitalist societies, where powerful corporations have often prioritized profits over social and environmental sustainability. Consequently, the public is wary of new technologies, particularly those involving sensitive personal data.
In contrast, China faces a different set of challenges. Chinese public awareness of the risks posed by AI technologies, particularly around data security and privacy, is generally lower. This leaves Chinese citizens more vulnerable to potential exploitation by corporations and criminals seeking commercial gain.
These divergent attitudes toward AI governance stem from different historical lived experiences. In the West, where societies have faced the negative consequences of unchecked corporate power, caution prevails. In contrast, Chinese citizens are more open to embracing new technologies, partly due to China's historical memory of the Opium Wars, when technological inferiority contributed to national defeats. For many in China, technological advancement is seen as key to national strength and economic growth that underlies the process of national rejuvenation.
In the West, public scepticism about AI has led to stricter regulations and debates around privacy, transparency, and corporate accountability. This reflects a deeper mistrust of corporations, which, as Korten (2001) argued in When Corporations Rule the World, have accumulated power that routinely places profit above the public good.
On the other hand, AI development in China has primarily focused on industrial applications aimed at improving productivity and efficiency in sectors such as manufacturing and services. These AI projects have largely bypassed personal data concerns, leading to positive economic outcomes for businesses and contributing to the nation's overall economic growth. China's rapid integration of AI is seen as a strategic advantage in its technological race to catch up with the West. However, China's economic rise was historically supported by a large pool of cheap labour. Today, the government's emphasis on ‘quality development’ reflects a shift toward balanced growth, relying on grassroots innovation to transform society into a more equitable and well-educated one. If China's 1.4 billion citizens become more engaged in learning and understanding their roles in socio-technical systems, the country could lead the world in science and technology. The idea is that Chinese citizens and technology can evolve together, regardless of whether these technologies are considered low or high risk, as long as they are developed through mutual growth.
Ultimately, technology, including AI, is embedded in and shaped by broader social, economic, and political forces. The ethical challenges AI poses are not inherent to the technology itself but instead emerge from the socio-technical systems in which it is developed and deployed.
Shortfalls of the Western approach to governance
The European Union's comprehensive regulations, guided by the precautionary principle and a rule-based approach, have inadvertently slowed technological advancement. While these measures aim to protect society, they have not effectively steered technological development toward broader societal benefits. Many transformative technologies originate from European scientific research before being more extensively developed and commercialized in the United States. This is largely due to public distrust of corporations and concerns over the social, ethical, and environmental risks of new technologies.
A prominent example is genetically modified (GM) food. Initially, GM technology held the potential to increase agricultural productivity and benefit both farmers and consumers. In developing countries like China and India, GM crops could enable farmers to breed locally adaptable seeds, offering a sustainable alternative to hybrid seeds, which lose their productivity after one generation and require annual repurchasing (Shen, 2010). However, public outrage in Europe, sparked by companies like Monsanto, stifled this potential. Monsanto's ‘Roundup Ready’ crops were designed to tolerate the herbicide glyphosate, allowing farmers to kill weeds without harming crops. Yet the company's enforcement of intellectual property agreements, which prohibited farmers from reusing seeds, prioritized profits at the expense of public trust.
Monsanto's practices met fierce social resistance, particularly in the United Kingdom, where environmentalists who were concerned about biodiversity damage burned GM crop fields in protest. In developing countries like India, Monsanto's aggressive intellectual property enforcement led to lawsuits against subsistence farmers who violated these agreements, contributing to severe social consequences, including farmer suicides.
In response to public concerns, the European Union invoked its precautionary principle and imposed a moratorium on GM foods in 1999, reflecting deep scepticism about environmental risks, biodiversity, and health impacts. Although the moratorium was lifted in 2004 following the introduction of stringent safety assessments, labelling, and traceability, by then, the market for GM foods in Europe had collapsed. The delay stifled research and development incentives, and public opposition remained entrenched. As one ethical scholar observed, ‘By the time they understood the real questions people were asking, many members of the public were already fixed in their opposition. Decades later, companies and scientists are still trying to rebuild public trust in these technologies, while the technology has moved on’ (Stilgoe, 2024).
Developing countries like China and India initially embraced GM crop innovation for its benefits in weed management and agricultural productivity. However, European ethical positions on GM foods influenced global markets, limiting the broader adoption of these technologies in developing nations.
The core issue in Western technology governance, particularly within the European Union, lies in the complex power dynamics between governments, corporations, and the public. Rule-based regulations have slowed the deployment of new technologies in Europe. Although this has curbed some corporate excesses, it has also hindered innovations that may contribute to societal well-being. Moreover, these rigid regulatory approaches have led to missed opportunities for collective learning and the improvement of governance frameworks for emerging technologies, particularly those with unpredictable risks, like AI.
The ongoing AI revolution presents similar challenges. While AI offers enormous potential benefits, not just in boosting productivity but also in enhancing democratic governance, the West, particularly Europe, remains cautious. Many large companies are developing AI systems, but they must tread carefully to avoid provoking public backlash. In practice, it is not difficult for corporations to obtain user consent as required by regulation. Many users, however, are unaware of how their data is being used despite having technically given consent. It is nearly impossible for individuals to track how their personal data is collected and used, as companies often obscure this process.
The European Union's introduction of the General Data Protection Regulation (GDPR) was an early attempt to address these concerns. The regulation imposed significant penalties for non-compliance, but its effectiveness has been limited. The overwhelming responsibility of understanding and managing cookie consent has fallen on users, who frequently click ‘Accept all’ without reading the details, a phenomenon known as ‘cookie banner fatigue’ (Utz et al., 2019). The privacy advocacy organization noyb analysed over 500 websites and found that 81% did not offer a ‘reject’ option on the initial page, requiring users to navigate through sub-menus to find it. Additionally, 73% used deceptive colours and contrasts to lead users toward the ‘accept’ option, and 90% did not provide an easy way to withdraw consent.
The advent of large language models like ChatGPT has reignited interest in AI's potential but also brought new regulatory challenges. While businesses recognize AI's vast opportunities, stringent data regulations remain an obstacle. As discussed earlier, realizing AI's full potential for societal good, such as using AI to address democratic issues in public management, requires access to personal data. Yet, the Western governance model struggles to balance privacy rights with technological innovation. In an increasingly polarized public sphere across Europe, finding this balance is crucial but complex. Standard datasets used to train AI systems often fail to account for the needs of diverse social groups and communities, limiting the societal benefits of these technologies. Worse, this model blocks the active engagement of the general public, missing the opportunity to increase AI literacy through participatory involvement in AI projects.
Meanwhile, developing countries like China cannot entirely escape the regulatory influence of the West. Although AI-assisted public governance holds immense potential for these nations to form tailored systems that meet their unique cultural, economic, and political demands, they remain subject to Western regulatory pressures.
The shortcomings of the European approach to technology governance are becoming increasingly apparent, particularly in the rigid application of frameworks like RRI and CSR. While these frameworks are well-meaning and idealistic, they often struggle to adapt to real-world complexities, making them difficult to implement effectively (Rip, 2014).
Many Western societies, long proud of their democratic traditions and political systems, are now grappling with profound challenges that question their stability and adaptability. The rise of populism and right-wing political movements has disrupted traditional political norms, often fuelled by public discontent with social inequalities, economic stagnation, and perceived failures to manage immigration effectively. These shifts have fragmented political consensus, fostering polarization and eroding trust in established institutions.
The historical advancement of science and technology in the West has been marked by transformative milestones such as the Scientific Revolution, the Industrial Revolution, and the Digital Revolution. Today, however, rule-based regulations, especially stringent data regulations, have created significant barriers to the innovative application of digital technologies. For instance, the use of surveillance technologies to address uneven social development and manage immigration challenges has been curtailed, leaving governments less equipped to respond effectively to these pressing issues. The general public is not a monolithic dataset: digitized information about diverse social groups is crucial for societal management, particularly for addressing current challenges of unequal development between communities.
Scholars and practitioners have called for alternative theoretical frameworks that focus on human-centred innovation. However, data-related regulations frequently become roadblocks—obtaining approval for specialized datasets, even for research purposes, remains challenging.
Adopting Chinese ethical thinking
Drawing from the STS framework, we understand that science and technology are deeply intertwined with social development. This socio-technical entanglement highlights the dynamic relationship in which society and technological innovation co-evolve. As societies become more diverse and complex, technological advancements must adapt accordingly. In the case of AI, the potential to transform society is immense, making AI awareness and literacy essential for individuals and institutions alike. The ethical considerations surrounding AI must be seen as an extension of broader human ethics, not as isolated issues. AI ethics should guide technology's role in society, just as traditional ethics shape humanity's relationship with progress.
We propose a hybrid ethical approach integrating Chinese philosophical thinking with Western experience-based risk governance. Two major schools of Chinese thinking, Confucianism and Daoism, offer profound insights into ethical decision-making, emphasizing human-centred values and adaptable governance frameworks.
Confucian ethics, though often abstract and poetic, focus on flexibility and contextual interpretation rather than rigid definitions of ‘rightness’. For example, in the Analects, Confucius gives different answers when different disciples ask about ‘benevolence’ (仁), tailoring his guidance to each questioner's character and circumstances rather than offering a single fixed definition.
Other classical passages show the same pattern: ethical teaching is adapted to the person, the moment and the situation, reinforcing this contextual mode of moral reasoning.
Daoist philosophy also emphasizes adaptability and fluidity. The opening line of Laozi's Daodejing, ‘The Dao that can be spoken of is not the eternal Dao’, suggests that no fixed formulation can fully capture reality, pointing toward understanding and governance that remain open to change and context.
Chinese ethical thinking, rooted in Confucianism and Daoism, promotes an adaptive approach to governance. It emphasizes flexibility, active listening, and context-driven solutions over top–down rule imposition. This approach fosters active engagement among those directly impacted by technological advancement and involved in the process. Only through such engagement can the advancement of technologies, including AI, mitigate the potential negative impacts on society.
Incorporating Chinese ethical thinking into modern technology governance requires recognizing the diversity and complexity of both technology and society. Instead of imposing one-size-fits-all regulations, ethical considerations must be tailored to specific contexts, relationships, and times. This flexible approach aligns with contemporary goals of sustainability, requiring global cooperation toward common aims. Achieving these goals demands an alternative ethical framework to counter the dominant top–down, rule-based systems.
Prominent Chinese scholar Liang Shuming observed that Chinese culture is fundamentally a ‘culture of life’, where the core lies in human attitudes and values. Central to Chinese philosophy are the concepts of ‘benevolence’ (仁) and ‘harmony’ (和), which help to foster coexistence among individuals, society and nature (Liang, 2015). Liang compared Western and Chinese traditions, observing that Western culture often seeks to conquer nature through science and technology, exerting external control to achieve material progress. In contrast, Chinese culture seeks internal harmony, prioritizing moral cultivation and interpersonal relationships. While Western values encourage external action, Chinese ethics focus on inner balance and collective well-being.
Liang (1949) also noted key differences between Chinese and Western scholarship. Western methods emphasize empirical, static, and divisible approaches, relying on scientific and rational analysis. Conversely, Chinese scholarship adopts dynamic, metaphysical and holistic approaches grounded in natural laws and the concept of life. While Western methodologies excel in generating material prosperity and advancing science, Chinese ethics offer complementary insights into fostering societal harmony and sustainable development.
It is important to distinguish Chinese ethical thinking from its historical applications. Following the Song dynasty, social and economic decline limited educational access for most Chinese, weakening the influence of Confucian and Daoist values. In contrast, Western societies expanded education, fostering individual creativity and innovation. This divergence allowed Western civilization to achieve material progress through accumulated knowledge and governance based on best practices and methods that China lacked.
Today, the strengths of Western methodologies, particularly in risk assessment and governance, remain invaluable. However, integrating these with Chinese ethical perspectives can provide a balanced framework for AI development. This hybrid approach can mitigate risks, foster innovation, and promote societal well-being.
In the era of AI, a dual approach is necessary. On the one hand, a bottom–up strategy that emphasizes adaptive ethical frameworks for grassroots development is crucial. On the other hand, global regulations informed by Western experience and best practices must be integrated. Broad social engagement is essential to shape AI advancements that serve humanity's collective good. While imperfect, this ongoing co-evolution between science, technology, and human civilization underscores the importance of fostering innovation within ethical boundaries.
By raising the quality of education and enhancing AI literacy, societies can harness the creativity and understanding of their citizens. For China, with its 1.4 billion people, this represents an unparalleled opportunity to drive progress through collective human potential.
Socio-technical specificities of AI
Different technologies have their own distinct socio-technical characteristics. From the perspective of STS, AI is not an isolated or abstract technology but rather an integral part of the ongoing evolution of socio-technical systems. Its advanced deep learning and data-processing capabilities make AI one of the most powerful tools for enhancing technological operations.
While AI can drive technological progress, it also has the potential to exacerbate social inequalities and contribute to environmental degradation. For instance, AI-assisted missile technology may improve targeting accuracy, but it also increases the potential for human and environmental destruction. Similarly, AI-driven algorithms in stock trading have accelerated wealth accumulation for corporations and investors. Although these systems may seem to contribute to economic growth, they often worsen inequality by disproportionately benefiting the wealthy. In some cases, they can distort key economic indicators, such as the Gross Domestic Product, leading to artificial inflation and increased market volatility, ultimately threatening long-term economic stability (World Economic Forum, 2023). With this in mind, it is crucial to apply AI selectively, drawing on European experience-based risk-management approaches, to prevent further social distortions.
Data is fundamental to AI development and the quality of its operations. In Western societies, academics and practitioners are eager to leverage AI to support public management, but these innovations often require specialized datasets to address the needs of diverse social groups. However, strict personal data protection laws create significant barriers, making it challenging to obtain approval for personalized datasets. In contrast, under Western influence, developing countries like China, with their large-scale manufacturing industries, have shifted focus. Many Chinese companies prioritize using industrial data to enhance productivity while avoiding the complexities and legal challenges associated with personal data.
Every technological system contains ‘black boxes’—complex mechanisms often hidden from public view and accessible only to select actors, such as scientists and technologists. In practice, the development of technology rarely follows a straightforward or linear path (Rönnbäck et al., 2006). Throughout the value chain, from design to application and impact, configurations and reconfigurations frequently occur at socio-technical interfaces. These changes can result from technical considerations, emerging technologies, user feedback, or new regulatory demands. Technological improvements often arise from these intersections of socio-technical factors. However, due to information asymmetry and the specialized knowledge required, most technical components remain ‘black-boxed’ to non-experts. This is particularly true for AI, where operations are often opaque. Yet, these socio-technical intersections offer opportunities for non-technical actors to play an active role in guiding technological progress without needing to fully understand the underlying technical units, which can remain black-boxed.
In AI-assisted technological processes, every data input feeding deep learning across the value chain and lifecycle takes place at socio-technical interfaces. Technological changes or improvements are driven by these new data inputs. Under corporate control, such adjustments tend to prioritize profit-making, whereas ethical AI applications can take a different approach. AI has the potential to create new socio-technical interfaces or open up the black boxes of existing ones. Data can be carefully designed to capture information relevant to particular social groups. For instance, AI could assist vulnerable communities in specific locations or address the needs of technical groups, such as farmers who adapt their practices to local soil conditions and landscapes. By focusing on these interfaces, AI can be used to meet diverse social and environmental needs rather than just corporate goals.
Borrowing Callon's concept of the ‘device of interessement’, recruiting and engaging interested actors can help both hard and soft man-made systems achieve common goals for the social good. An AI-assisted system can enhance these processes by facilitating interactions between actors, such as balancing power structures among them or prioritizing those who need the most help. In this way, AI can be framed as a mediator, helping to connect and coordinate actors to work toward shared objectives while ensuring equitable participation and addressing social needs.
The socio-technical interfaces of any technological process involve interactions between developers, users, regulators, and the broader community, all of whom shape how systems are designed, used, and governed. By framing AI as a mediating tool, we can recognize its potential to foster interactions, negotiations, and alignment of interests among stakeholders, working toward common goals within specific contexts and timeframes. This perspective aligns with the Chinese definition of ethics, which are plural, dynamic, and context-sensitive. Ethical AI, therefore, may require the adaptation of diverse models to broader applications, such as accommodating both distributed and centralized data sources and management systems.
Conclusion: Ethics as a dynamic, context-sensitive process
From an STS perspective, AI is not an isolated technical entity; it can be applied across a wide range of artefacts. In AI-assisted technology development, particularly those technologies with known risks, unknown risks and ‘unknown unknowns’, we need to highlight the importance of engaging diverse stakeholders. Ethical AI systems, designed to incorporate large data inputs—including personal data—can facilitate the involvement of technical professionals, users, intermediaries and governing bodies. Through socio-technical interfaces, these stakeholders can shape and reshape the development of technology, guiding it toward societal and environmental benefits.
Given this, the challenges of current Western rule-based governance models reveal fundamental shortfalls. The Western risk-oriented approach, while built on valuable past experiences, is limited in addressing the risks of unknown and unpredictable outcomes. Its reliance on the precautionary principle and top–down regulations can stifle innovation at early stages and create barriers for scientific and technological research with positive potential.
We propose an alternative approach that incorporates Chinese ethical thinking, which emphasizes the plurality and dynamism of ethics to better address the complexities of diverse human societies and technologies. The lack of material prosperity over China's long and recent history means that Chinese ethical thinking has not had the influence it could have had. However, we argue that its emphasis on grassroots individual learning as the essence of progressive, equitable social development could and should be at the core of today's AI ethics.
An integrated ethical approach to AI is essential in today's world. Both the West and China face the rapid advancement of science and technology capabilities, and people's participation in technology development has become crucial to mitigate the known and unknown risks of negative impacts on society, improve ethical AI systems and shape science and technology innovation for the broader social good.
Rather than creating barriers, this integrated AI ethical model can be expected to promote responsible AI development by fostering engagement, adaptability and ongoing learning. Specifically, this model would help identify challenges faced by different communities and enhance the processes guiding AI-assisted technologies. This dynamic, inclusive approach ensures that AI evolves in a way that benefits society as a whole while mitigating risks in a flexible and thoughtful manner.
To encourage meaningful public participation, training programmes should be developed to help stakeholders identify socio-technical interfaces and engage with the technology. Learning through experience, including trial and error, is essential for advancing AI governance in a responsible manner.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Social Science Fund of China project ‘From Participation to Co-governance: Biotechnology Governance from an STS Perspective’ (grant number 21FZXB063); the Chinese Academy of Sciences Strategic Research Project ‘Strategic Technology Research and Frontier Discipline Development’ (grant number E4291Z09) and the Ministry of Education Major Project for Philosophy and Social Sciences Research ‘Fundamental Theoretical Issues in the Philosophy of Engineering Science’ (grant number 23JZD0006).
Author biographies
Xiaobai Shen is an associate professor at the University of Edinburgh Business School. Her academic background is in Science & Technology and Innovation Studies, with previous research focused on socio-technical analyses of technological capabilities in information and communication technology (ICT) and biotechnology sectors in developing countries. Her current research interests include digital and data technology innovations, such as creative cultural content, open-source software, infrastructural ICT, applications of artificial intelligence, and the impact of intellectual property protection regimes, standards and governmental policies and regulations.
Lu Gao is an associate professor at the Institute for the History of Natural Sciences, Chinese Academy of Sciences, where she serves as the Director of the STS Center. She has held visiting scholar positions at the University of Edinburgh, Stanford University and the University of Kent. Her research focuses on the governance of emerging technologies, particularly biotechnology and artificial intelligence, integrating perspectives from STS and the history of science. She advocates a humanizing-science framework to analyse the dynamics of knowledge production and embed ethical governance practices into technological development.
