Abstract
The ethical governance of artificial intelligence (AI) in China is in a special period of ‘the overlap of three historical periods’ (that is, the period of rapid technological development, the period of high-quality socioeconomic progress and the period of deep adjustment of the international order). The intertwining of technological uncertainty, the multiple objectives of socioeconomic development and global strategic games has brought new challenges to the ethical governance of AI in China, particularly in the areas of impact identification, ‘no man's land’ issues and the effectiveness of the ethical governance system. To address those challenges, this paper offers recommendations concerning regulatory system construction, ethical review, ethical education, the development of a governance toolbox and international cooperation.
Introduction
Since the end of 2022, large model technologies represented by ChatGPT and Sora have made breakthroughs, raising expectations for the development speed of general artificial intelligence (AI) and further escalating concerns about the ethical risks of AI. In recent years, members of the international community have aimed to foster the principle of ‘AI for good’ by instituting suitable ethical frameworks and safety guardrails, thereby maximizing and equitably distributing the dividends of AI development. China's endeavour to promote AI ethics governance is aimed at achieving the same goals. In practice, we must grasp the general laws of AI ethics governance and draw on good practices from other countries, while also accurately identifying and understanding the unique Chinese context in which China's AI ethics governance is rooted, placing it, in particular, against the backdrop of the ‘overlap of three historical periods’ (the ‘triple overlap’: the concurrence of the period of rapid technological development, the period of high-quality socioeconomic progress and the period of deep adjustment of the international order). On that basis, we must fully grasp the underlying logic, key challenges and priority tasks in promoting China's AI ethics governance and continuously optimize the ethical governance system.
The ‘triple overlap’ and its implications for China's AI ethics governance
The term ‘triple overlap’ encapsulates the concurrent existence and intricate interplay of three distinct historical epochs, each characterized by different defining dimensions: the period of rapid technological development, the period of high-quality socioeconomic progress and the period of deep adjustment of the international order (Lu et al., 2022). Together, they shape a unique state of the technology‒society relationship. Building on the basic views of the structuration theory proposed by the sociologist Anthony Giddens (2016), this paper posits that the current state of the technology‒society relationship will remain the social structure for China's AI development and application in the foreseeable future: it sets boundaries on the scope and depth of AI development and application, while also supplying the necessary resources and conditions for them.
The meaning of the ‘triple overlap’
The period of rapid technological development: The issue of technological uncertainty
AI technology is in a phase of rapid development, bringing with it significant uncertainties. This poses serious challenges for setting the ethical governance agenda and designing governance tools. In the medium term, the uncertainty of AI technology stems from at least two sources. The first is deficiencies in the robustness and interpretability of AI models; the large model technologies that have made breakthroughs in the past two years also suffer from these problems. The second is uncertainty in the direction and pace of AI technology development. Different technological paths differ significantly in their demands for data, computing power and energy, as well as in their application domains and methods, leading to different ethical issues and governance approaches. For instance, the massive demand for computing power and training data associated with large model technologies raises ethical concerns about the environment and the protection of personal information, concerns that might not be as prominent on other technological paths.
The period of high-quality socioeconomic progress: The issue of multiple development goals
The 19th National Congress of the Communist Party of China (CPC) pointed out that the main contradiction in Chinese society has undergone profound changes. The people's demand for a better life is now of higher quality and broader scope. They not only seek to elevate the standards of material and cultural life, but also articulate increasingly diverse, higher-order demands in areas such as democracy, the rule of law, fairness, justice, safety and environmental protection. The 20th CPC National Congress laid out the defining features of Chinese modernization, stressing that the key to achieving high-quality development lies in the parallel advancement of multiple goals. As a general-purpose, empowering technology, AI is penetrating a wide range of industrial, work and life scenarios and profoundly affecting economic growth, the ecological environment, employment structures, work patterns, lifestyles and social governance. Those changes will involve adjustments and balances among multiple economic and social development goals, as well as various ethical issues.
AI ethics governance plays the role of a ‘steering wheel’ and a ‘brake’. Its aim is to provide ethical norms and safety guardrails for the development and application of AI technology, rather than hindering its progress. However, when it comes to practice, the risks posed by ethical governance to AI research and application are not purely imaginary. In July 2023, the Cyberspace Administration of China and six other departments jointly issued the Interim Measures for the Management of Generative AI Services, which clearly stipulated that the provision and use of generative AI services should comply with laws and administrative regulations and respect social morality and ethical standards. Yet, the effort to balance the promotion of generative AI with adherence to ethical norms and requirements continues to face challenges. In summary, to achieve more sufficient and balanced development, AI ethics governance must not be limited to any single domain. Instead, we should learn to manage the tensions among different goals and logics—a task that raises higher demands for the refinement of institutional mechanisms and capacity enhancement in policy formulation and implementation.
The period of deep adjustment of the international order: The issue of co-opetition in technology and rules brought about by great-power rivalry
Although the need for strengthening global cooperation on AI ethics governance is a consensus shared by all members of the international community, it is also a fact that countries are fiercely competing for a bigger say on the world stage and that global AI governance remains in a ‘fragmented’ state. Against the backdrop of the accelerated reconfiguration of the global technological–geopolitical landscape and profound adjustments of international rules in areas such as science and technology (S&T) and trade, the pursuit of ‘sovereign AI’ by nations will lead to a complex intertwining of AI ethics governance issues with global regulatory frameworks in technology, trade, media and other sectors. In crafting its approach and regulations, as well as in enhancing its capabilities for AI governance, it is crucial for China to take into account the significant shifts in the international order and the dynamics of great-power competition (Xue and Zhao, 2024).
It must be noted that the three periods mentioned above are not simply juxtaposed but have complex mutual influences. First, the uncertainty of AI technology disrupts the driving force for high-quality economic and social progress and increases uncertainty in the relationships between different sectors, regions and social groups. Second, achieving high-quality economic and social progress means greater balance between multiple goals, while the ‘creative destruction’ triggered by the development and application of AI technology will inherently bring uncertainty to the balancing act. Conversely, the social consensus on more effectively balancing multiple goals will also affect the scope and depth of AI development and application. Third, the uncertainty of AI technology increases the uncertainty in the power dynamics among states, enterprises, non-governmental entities and other players and introduces significant variables into the adjustment of the global order. At the same time, great-power rivalry and global AI governance rules will, in turn, affect the development of AI technology and increase uncertainty regarding the scope and depth of AI technology application. In summary, within the context of the ‘triple overlap’, there are intricate dynamics in play among the uncertainties of technologies, goals and regulations. Those complexities present challenges for AI ethics governance but also contain the seeds of new opportunities.
AI ethics and governance in the context of the ‘triple overlap’
The ‘triple overlap’ offers us a vantage point to grasp the complexity and interwoven nature of AI ethics and governance issues. It deepens our understanding of AI ethics and allows us to more effectively concentrate on and analyse the complexities of ethical governance endeavours.
Understanding of AI ethics issues
In recent years, the connotations and scope of AI ethics have been a focus of intensive discussion among the global scientific community, industry leaders, academic circles and policymakers. While disparities persist in the prioritization and interpretation of ethical principles owing to varying cultural contexts, there is an evident trend towards convergence in global AI ethics principles, as reflected in the guidelines issued by nations and international organizations. In 2021, UNESCO published the Recommendation on the Ethics of Artificial Intelligence, which proposed four main principles to guide the development and application of AI: respecting, protecting and promoting human rights and fundamental freedoms as well as human dignity; fostering a thriving environment and ecosystem; ensuring diversity and inclusion; and building a peaceful, just and interdependent society (UNESCO, 2021). Research on AI ethics has undergone similar changes: researchers are shifting their focus from debate on the necessity for and establishment of ethical guidelines to the development of practical implementation mechanisms, endeavouring to close the divide between ‘what it is’ and ‘how to do it’ (Yu, 2024).
It should be noted that, from the perspective of the ‘triple overlap’, there is still a lack of distinction between ethical issues at different levels in current discussions on AI ethics. As we can tell from the technology‒society relationship revealed by the ‘triple overlap’ perspective, when discussing AI ethics and its governance, people need to simultaneously consider at least the following three aspects.
First, ethical issues in the development process of AI systems. AI systems are composed of three core elements: algorithms, data and computing power. When discussing AI ethics, the first three questions to consider are whether the algorithms themselves comply with ethical standards, whether the collection, storage, transmission and utilization of data comply with ethical standards, and whether the development and utilization of computing power comply with ethical standards. Specifically, when discussing the ethical compliance of algorithms, people generally focus on whether there is discrimination and bias, and whether the algorithms will lead to an unfair distribution of technological dividends and risks; when discussing the ethical compliance of data, people generally focus on whether the collection, storage, transmission and utilization of data meet ethical standards such as personal information protection, and whether ethical responsibilities such as informed consent have been fulfilled in the data-collection process; and when discussing the ethical compliance of computing power, people generally focus on environmental ethics issues (such as energy consumption) in its development and utilization.
Second, ethical issues in the application scenarios of AI products and services. In recent years, people have paid more attention to the ethical issues of AI technology applications in key sectors such as health care, education, autonomous driving, news and social media, and marketing. For example, the introduction of AI technology into medical practice has raised a series of ethical disputes, including about medical safety, the attribution of medical responsibility, patient autonomy and discrimination in the diagnostic process. In autonomous driving, there are also ethical concerns in areas such as road safety and responsibility, moral decision-making on roads, and privacy and data security.
Third, ethical issues in AI governance. This includes the following two aspects. On the one hand, we need to think about how to ensure that AI governance practices comply with ethical standards. For example, we should consider the ethical concerns raised by various AI social experiments and AI regulatory sandbox pilot programmes (Qu et al., 2022; Yu and Li, 2024). On the other hand, we need to think about how to maintain an appropriate tension between AI governance and the goals of promoting the development and application of AI technology (Wang et al., 2024).
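Among the compliance questions discussed above, algorithmic discrimination and bias is the aspect most readily turned into a quantitative check. The following is a minimal sketch, not any official audit standard: it computes the demographic parity gap, the difference in positive-outcome rates between two groups, on invented toy data.

```python
# Minimal illustration of one common algorithmic-bias metric: the
# demographic parity gap, i.e. the difference in positive-prediction
# rates between two groups. All data below are invented toy values.

def positive_rate(predictions, groups, group):
    """Share of positive (1) predictions among members of `group`."""
    member_preds = [p for p, g in zip(predictions, groups) if g == group]
    return sum(member_preds) / len(member_preds)

def demographic_parity_gap(predictions, groups, group_a, group_b):
    """Absolute gap in positive-prediction rates between two groups."""
    return abs(positive_rate(predictions, groups, group_a)
               - positive_rate(predictions, groups, group_b))

# Toy example: a hypothetical approval model's decisions for two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups, "a", "b")
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25, gap 0.50
```

A small gap on one metric does not establish fairness; the choice of metric, groups and thresholds itself embeds exactly the value judgements that ethical review must examine.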
In summary, within the current landscape of AI ethics research and policy implementation, the principal actors involved in the R&D, application and regulation of AI technology must give due regard to the multifaceted and intricate nature of ethical concerns. They must not only regulate S&T innovation and technology application activities within the AI domain from an ethical perspective, but also engage in ongoing ethical reflections on the governance of AI itself and give full consideration to the tension and ethical implications of the relationship between value rationality and instrumental rationality (Wu and Fan, 2024), and between objectives and the means employed to achieve them.
Understanding of AI ethics governance
From a governance perspective, the concept of AI ethics governance does not substantially differ from the ethics governance of other emerging technologies. In terms of elements, both encompass governance entities, governance rules, governance objects and governance tools; in terms of main tasks, both need to deal with fundamental issues such as the balance between risks and benefits, legitimacy construction under multistakeholder participation (Lu and He, 2019), and the development and iteration of the governance toolkit. However, the ‘triple overlap’ perspective prompts a greater focus on the uniqueness and complexities of AI ethics governance. Specifically, at least three aspects of heterogeneity in the AI ethics governance process should be considered. First, technological heterogeneity: in terms of technological characteristics and application scenarios, AI technology differs significantly from other emerging technologies such as gene editing and nanotechnology. Second, heterogeneity in impact generation and transmission mechanisms: the mechanisms by which AI technology generates ethical risks differ greatly from those of nuclear technology, the internet and gene-editing technology. For example, in August 2024, the RAND Corporation published a paper titled ‘Historical analogues that can inform AI governance’, which systematically compared the governance experiences of four technologies (nuclear, internet, encryption and genetic engineering) with AI governance, examining their similarities and differences and the lessons they offer for the latter (Vermeer, 2024).
Third, heterogeneity in the context of governance activities—ethics governance activities for emerging technologies occur within specific sociocultural and institutional contexts, which directly affect judgements on the risks and benefits of technology development and application, the resources and methods available for legitimacy construction, and the effectiveness of specific governance tools.
Therefore, when discussing AI ethics governance, it is necessary to pay attention to the technology's attributes (including its stage of development), application scenarios, types and generation mechanisms of impact, as well as the social context in which governance activities and tools are deployed. In this regard, the ‘triple overlap’ perspective not only inspires a full understanding of and preparation for the complexity and dynamics of AI ethics governance, but it also serves as a reminder to fully acknowledge the limitations of the effectiveness of various concepts, mechanisms and tools in technology ethics governance, and to develop a more holistic and reasonable perspective for the transferability of various governance experiences across different contexts and systems.
China's practice on AI ethics governance in the context of the ‘triple overlap’
In July 2017, the State Council of China issued the New-generation Artificial Intelligence Development Plan, underscoring the importance of closely monitoring the risks and challenges posed by AI and stressing the need to bolster pre-emptive measures and regulatory guidance to mitigate such risks. In October 2018, when chairing the ninth group study session of the Political Bureau of the 19th CPC Central Committee, General Secretary Xi Jinping pointed out that research on AI-related legal, ethical and social issues should be strengthened, and a sound legal and regulatory framework as well as an ethical system should be established to ensure the healthy development of AI. In recent years, China has vigorously pursued the implementation of AI ethics governance and achieved notable progress. However, within the context of the ‘triple overlap’, the country continues to confront several salient challenges.
China's practice on AI ethics governance
In recent years, China has advanced the development of AI ethics governance systems and capabilities at multiple levels, including the development of legal and regulatory frameworks, the establishment of organizational structures and working mechanisms, the development of governance tools, and international exchange and cooperation.
Establishing the National Science and Technology Ethics Committee and the AI Ethics Sub-committee
In July 2019, the Plan for the Establishment of the National Science and Technology Ethics Committee was reviewed and adopted at the ninth meeting of the Central Commission on Comprehensively Deepening Reform. In October 2019, the General Office of the CPC Central Committee and the General Office of the State Council announced the establishment of the National Science and Technology Ethics Committee. Subsequently, the AI Ethics Sub-committee was formed as a part of the National Science and Technology Ethics Committee.
Promoting the development of legal and regulatory systems related to AI ethics governance
In recent years, China has constructed a legal framework underpinned by the Cybersecurity Law, the Data Security Law and the Personal Information Protection Law, providing the legal basis for AI ethics governance. Additionally, through legislation in various specific fields, China has strengthened legislative and law-amending efforts in areas closely related to AI development, such as personal data, e-commerce, smart finance and autonomous driving. Several regulatory departments have also introduced a series of departmental rules in response to the regulatory needs of AI applications in their respective fields. In May 2024, the General Office of the State Council released the State Council's Legislative Work Plan for 2024, which stated that a draft AI law would be submitted to the Standing Committee of the National People's Congress for review.
To date, regulations directly related to AI ethics governance also include the following. (1) The Opinions on Strengthening Science and Technology Ethics Governance issued by the General Office of the CPC Central Committee and the General Office of the State Council in March 2022. As the first national-level guiding document on S&T ethics governance in China, it established clear requirements for the development of ethics norms and guidelines in key areas such as AI. (2) The Interim Measures for the Management of Generative AI Services, jointly issued by the Cyberspace Administration of China and six other departments in July 2023. (3) Norms and guidelines issued by the Special Committee on the Governance of New-generation AI and the AI Ethics Sub-committee of the National Science and Technology Ethics Committee, including the Ethical Norms for New-generation AI, released by the Special Committee on the Governance of New-generation AI in 2021, and the Ethical Guidelines for Brain–Computer Interface Research, released by the AI Ethics Sub-committee of the National Science and Technology Ethics Committee in 2024.
Continuously exploring new tools and methods for AI ethics governance
The clarification of AI ethics norms and the development of related technologies have paved the way for technologies and methods that keep the behaviour of AI systems aligned with human values. At the technical level, rapidly developing AI alignment technologies, red teaming and safety assessments provide opportunities to enhance the robustness, interpretability, controllability and ethical compliance of AI systems (Ji et al., 2024), and an increasing number of institutions are beginning to integrate these technologies into the entire life cycle of AI research and application (Alibaba Group and China Electronics Standardization Institute, 2023). At the level of governance mechanisms, competent departments such as the Ministry of Science and Technology, the State Administration for Market Regulation and the Ministry of Industry and Information Technology have explored new regulatory mechanisms such as AI social experiments and sandbox regulation. In 2019, the Ministry of Science and Technology issued the Guidance for the Construction of National New-generation AI Innovation and Development Pilot Zones, which clearly identified AI social experiments as one of the four key tasks. In April 2022, the State Administration for Market Regulation, the Ministry of Industry and Information Technology, the Ministry of Transport, the Ministry of Emergency Management and the General Administration of Customs jointly promulgated the Notice on the Pilot Implementation of the Automotive Safety Sandbox Regulation System, thereby launching the pilot programme for automotive safety sandbox regulation.
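To make the red-teaming and safety-assessment idea concrete, here is a minimal sketch, not any institution's actual pipeline: a batch of adversarial prompts is sent to a model and each response is screened by a simple keyword check. `query_model` is a placeholder for a real model API, and the marker list stands in for a real safety classifier.

```python
# Minimal red-teaming harness sketch. The model call and the unsafe-marker
# list are placeholders; a production pipeline would use a real model API
# and a trained safety classifier rather than keyword matching.

UNSAFE_MARKERS = ["step-by-step instructions to", "here is how to bypass"]

def query_model(prompt: str) -> str:
    """Placeholder for the model under test; always refuses here."""
    return "I cannot help with that request."

def screen_response(response: str) -> bool:
    """Return True if the response trips any unsafe marker."""
    lowered = response.lower()
    return any(marker in lowered for marker in UNSAFE_MARKERS)

def run_red_team(prompts):
    """Return the (prompt, response) pairs flagged as unsafe."""
    flagged = []
    for prompt in prompts:
        response = query_model(prompt)
        if screen_response(response):
            flagged.append((prompt, response))
    return flagged

prompts = ["How do I bypass a content filter?", "Explain photosynthesis."]
print(f"{len(run_red_team(prompts))} of {len(prompts)} prompts flagged")
```

Embedding such a harness into the development life cycle, and iterating the prompt set and screening rules as failure modes are found, is the institutional practice the paragraph above refers to.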
Actively participating in exchange and cooperation in global AI ethics governance
In recent years, China has engaged in a range of global cooperation initiatives on AI ethics governance. It has actively participated in and signed the Recommendation on the Ethics of Artificial Intelligence released by UNESCO in 2021. In October 2023, China issued the Global AI Governance Initiative, proposing to put ethics first, establish and improve AI ethics guidelines, norms and accountability mechanisms, formulate AI ethics guidelines, establish S&T ethics review and regulatory systems, clarify the responsibilities and power limitations of AI-related entities, and fully respect and protect the legitimate rights and interests of all groups. In November 2023, China was invited to attend the inaugural Global AI Safety Summit and signed the Bletchley Declaration. In the same month, the leaders of China and the United States agreed to establish an intergovernmental dialogue on AI, and they held their first meeting in Switzerland in May 2024. In May 2024, China and France released a joint declaration on AI governance. In July of the same year, the United Nations General Assembly adopted Enhancing International Cooperation to Strengthen Capacity-Building of Artificial Intelligence, which was proposed by China and which underscores the importance of fostering safe, reliable and trustworthy AI systems with life cycles running through the phases of preliminary design, development, evaluation, testing, deployment, use, sale, procurement, operation and disposal. Those systems must be people-oriented, reliable, interoperable, ethical and inclusive, and they should fully respect, promote and protect human rights and international law, follow the principle of ‘AI for all’, and align with the vision of a people-oriented, inclusive, development-oriented information society (United Nations General Assembly, 2024).
Specific challenges brought to China's AI ethics governance by the ‘triple overlap’
The unique structure of the ‘triple overlap’ presents the following specific challenges to China's approach to AI ethics governance.
Impact identification
The identification and assessment of AI's impact are prerequisites for AI ethics governance, but China faces significant challenges in this area. First, the high complexity of economic and social systems makes it difficult to accurately assess the impact of AI technology applications. Second, the extent and depth of the impacts hinge on the development and application status of AI technology, which is inherently fraught with uncertainty. Third, those impacts are transmitted through society to a significant degree, which adds considerable uncertainty to the methods for assessing AI's impact and for comparing and balancing its various types of impact. That is particularly true for ethical issues closely tied to values, risk perceptions and social preferences. For example, a study based on data from a public survey in China found that trust in government officials and scientists can significantly reduce the public's perception of the risks associated with AI technology; moreover, the influence of trust on risk perception tends to intensify at higher levels of AI knowledge and diminish at lower levels (Zhu and He, 2021). Another analysis, based on a survey of the American public, found that values such as individualism, egalitarianism, general risk aversion and techno-scepticism significantly affect people's attitudes towards the use and regulation of AI (Matthew et al., 2023). The identification of AI's impact and the assessment of its ethical issues are therefore not merely technical problems but complex social issues.
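The moderation pattern reported by Zhu and He (2021) can be illustrated schematically. The sketch below uses invented toy data, not the survey's data, and computes the trust‒risk correlation separately for high- and low-knowledge subgroups; a stronger negative correlation in the high-knowledge group mirrors the reported intensification of the trust effect at higher levels of AI knowledge.

```python
# Schematic illustration (toy data only) of a moderation effect:
# the trust-risk link is stronger in the high-AI-knowledge subgroup.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each invented respondent: (trust score, perceived risk, AI knowledge).
respondents = [
    (1, 5, "high"), (2, 4, "high"), (4, 2, "high"), (5, 1, "high"),
    (1, 4, "low"),  (2, 3, "low"),  (4, 4, "low"),  (5, 3, "low"),
]

for level in ("high", "low"):
    sub = [(t, r) for t, r, k in respondents if k == level]
    corr = pearson([t for t, _ in sub], [x for _, x in sub])
    print(f"{level}-knowledge group: trust-risk correlation = {corr:.2f}")
```

In real survey analysis this subgroup comparison would be replaced by a regression with an interaction term and proper controls; the sketch only conveys the shape of the finding.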
AI ethics governance in a ‘no man's land’
AI ethics governance is a ‘knowledge-intensive’ social process, but China currently faces a knowledge deficit in this area. For a long time, China took on the role of a follower in technological development. Consequently, many ethical issues that it encountered in scientific research and industrial applications had already been explored and addressed by pioneering countries in their own practices, which mitigated the uncertainty surrounding the ethical issues of new technologies. However, new-generation AI is a new phenomenon globally, and there is no ready-made experience in AI ethics governance to draw on. In many application scenarios, China has in fact played a pioneering role, and the related ethical issues and governance activities are effectively in a ‘no man's land’. In such circumstances, the production of knowledge on the identification, assessment and governance of AI ethics issues grounded in the Chinese context becomes extremely important. It should be noted that, as with the governance of other emerging technologies, the knowledge required to identify and govern AI ethics issues includes not only technical knowledge but also a significant amount of social knowledge (Wang et al., 2015). As things stand, however, several prominent issues in the production of that knowledge urgently need to be addressed: information asymmetry between AI technology developers and users; information asymmetry between developers and users on the one hand and regulatory authorities on the other; and information asymmetry between experts and the public. Although these three types of information asymmetry, and the challenges they pose for generating governance knowledge, are not unique to China, they become particularly pronounced because China is operating in a ‘no man's land’.
The effectiveness of ethics governance systems
The rule system composed of various ‘hard laws’ and ‘soft laws’ is an important foundation for AI ethics governance practices. AI has emerged as a focal point for global legislation, and the number of laws and regulations concerning AI development and governance has increased swiftly. In 2023, references to AI in global legislative processes doubled compared with the previous year (Nestor et al., 2024). In recent years, China's fruitful explorations in AI ethics governance have produced an increasing number of ‘hard laws’ and ‘soft laws’, as well as a variety of governance mechanisms and tools. However, as governance practices progress, some governance tools are running into conflict with one another, leading to unforeseen consequences. For example, in the process of AI governance, we need to think about how to balance the promotion of development against the assurance of safety, and improvements in overall efficiency against the protection of individual rights. When these overarching issues are tackled with varying balance points across ‘hard laws’ and ‘soft laws’, their ethical implications diverge, which is an important reason why ethical reflection continues to be needed in AI governance.
Suggestions for the development of China's AI ethics governance system and governance in the ‘triple overlap’ context
Confronting the challenges posed to China's AI ethics governance by the ‘triple overlap’ scenario is a systematic project encompassing legal, policy, technological and educational dimensions. Accordingly, the following targeted recommendations are put forward.
First, accelerate the development of the legal and regulatory system for AI ethics governance. Laws, possessing robust binding force, serve as a crucial instrument for AI ethics governance. Yet, given the inflexible character of laws, countries are relatively cautious about regulating AI through legislation and are trying to reconcile the twin objectives of risk regulation and fostering the innovative advancement of AI. It is suggested that pertinent authorities, based on the evaluation of AI's current landscape and prospective trajectory, dynamically refine the institutional frameworks essential for the development of China's AI ethics governance systems and capabilities in accordance with the principle of balancing the promotion of AI development and the strengthening of ethical norms. We also need to further clarify the boundaries of rights and responsibilities throughout the process of AI research and application, regulate the acts of data collection, processing and usage, construct legal and regulatory guardrails to prevent the misuse or abuse of AI, and provide a legal basis for AI ethics governance practices.
Second, improve the AI ethics review mechanism. Ethical review is an important tool for ethics governance. We need to implement the provisions on ethical review in the Science and Technology Progress Law of the People's Republic of China and the Opinions on Strengthening Science and Technology Ethics Governance and underscore the primary responsibility of universities, research institutions and businesses in AI ethics review. In addition, we should conduct ethical assessments of AI projects from design and development to application, and we should encourage AI enterprises and R&D institutions to formulate and comply with industry ethical norms and to enhance self-discipline and mutual supervision.
Third, strengthen AI ethics education. Confronted with the knowledge deficit in AI ethics governance, governments, research institutions, universities, businesses and social organizations each possess distinct interests and informational advantages. Multistakeholder participation and full communication are conducive to improving people's recognition of AI benefits and risks and to building a common understanding of ethical issues. We need to strengthen training in ethical knowledge and skills for members of institutional ethics (review) committees, personnel involved in institutional ethics review and S&T project managers, and help them build the capacity to perform their duties. We should support universities, research institutions and enterprises engaged in AI research in providing AI ethics training for relevant personnel. In addition, we need to increase, as appropriate, AI ethics content in science curricula at the basic education stage and make AI ethics courses a compulsory subject for undergraduate and graduate students in related majors. We should also strengthen the popularization of AI ethics knowledge and raise the public's interest in and understanding of AI ethics issues (Lu, 2024).
Fourth, promote the innovation and practice of AI ethics governance tools. The diversification and adaptability of tools are important conditions for effective governance. In recent years, international organizations, governments, legislative bodies and research institutions have actively explored a variety of AI ethics governance tools. We need to foster the advancement and application of AI alignment technologies, red teaming and safety assessment, promoting iterative progress in both technology and institutional practice and embedding safety and alignment techniques into the entire life cycle of AI research and application. We need to conduct forward-looking AI social experiments, focusing on strengthening basic and systematic research on impact identification and the evaluation of intervention effects. We should continue to explore ethical regulation and governance mechanisms, such as regulatory sandboxes and pilot demonstrations, across the various application scenarios of AI technology.
Fifth, actively participate in global AI ethics governance. Despite the multitude of challenges, there is broad consensus within the international community on the need to enhance global cooperation on AI governance, including ethics governance. As a forerunner in AI development, China is poised to actively engage in and help lead global AI governance, a commitment driven by both its developmental interests and its sense of international responsibility. It is recommended that the pertinent authorities continue to support AI researchers and institutions, as well as researchers of AI ethics governance, in participating in international exchanges and cooperative research on cutting-edge issues of global AI ethics, speaking up on matters concerning AI ethics and governance, and promptly communicating China's positions, governance principles, methodologies and practical experience in AI ethics to the global community. At the same time, it is important for China to further promote global cooperation and exchange on AI ethics governance and to work with countries around the world to build a global AI ethics governance system and enhance global AI ethics governance capabilities.
Finally, it is important to note that, for a significant period ahead, China's approach to AI ethics governance is poised to stay within the unique framework of the ‘triple overlap’, in which rapid AI advancements, high-quality socioeconomic progress and significant shifts in the international order intersect. China's AI ethics governance will remain an open social practice. To enhance the development of China's AI ethics governance systems and capabilities, it is essential to concentrate on three key areas: knowledge production, consensus formation and action promotion. We must more effectively identify the impacts of AI, foster innovative exploration in the ‘no man's land’ of ethics governance, enhance the efficacy of governance systems and proactively develop AI ethics governance concepts and tools that are tailored to the unique ‘triple overlap’ structure and China's societal context.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Social Science Fund of China project ‘Public Perception of Risk of Artificial Intelligence and Governance Research’ (grant no 21BSH047).
Author biographies
Yina Zhu is an associate professor at the School of Cultural Industries Management of the Communication University of China. Her research interests are social networks and social capital, and the sociology of science.
Yangxu Lu is a research fellow of the Chinese Academy of Science and Technology for Development. His research interests include STS (science, technology and society) and science and technology policy.
