Science and technology (S&T) ethics is the foundation for promoting the healthy development of S&T, and the governance of emerging-technology ethics has become an important part of global S&T ethics governance. This study discusses the ethical governance of leading technologies of the new round of the scientific revolution, such as artificial intelligence, human gene editing and stem cell technology. We analyse the approaches taken by international organizations such as the United Nations, the European Union, the World Health Organization and the International Society for Stem Cell Research, as well as the practices of the United States in the fields of emerging technologies. Based on this analysis, we propose relevant policy recommendations.
Science and technology (S&T) ethics is a crucial foundation and prerequisite for the advancement of S&T innovation. Over recent years, China has placed a significant emphasis on the development of ethical standards in S&T. In January 2022, the newly revised Law of the People's Republic of China on Progress of Science and Technology was promulgated, laying out multiple requirements on S&T ethics, including critical initiatives such as establishing and enhancing S&T ethical governance systems, refining S&T ethics mechanisms and establishing a national committee on S&T ethics. In March 2022, the General Office of the CPC Central Committee and the General Office of the State Council issued the Opinions on Strengthening the Governance of Science and Technology Ethics. It is the first guiding document on the governance of S&T ethics adopted at the national level in China, providing a systematic framework for its reinforcement. In September 2023, the Ministry of Science and Technology, together with nine other departments, issued the Measures for the Review of Science and Technology Ethics (Trial). The document provides detailed stipulations on the entities, procedures and regulatory oversight pertaining to the review of S&T ethics, ushering China's ethical framework in S&T into a new phase.
Over the years, China has enacted a host of documents with clauses on the governance of S&T ethics, particularly within the domains of life sciences and medicine. The Regulations on the Protection of Experimental Animals, introduced by the former State Science and Technology Commission in 1989, mandated care for and prohibited mistreatment or abuse of experimental animals. This was followed by a succession of regulatory documents and norms, including the Safety Management Measures for Genetic Engineering (1993), Ethical Guidelines for Human Embryonic Stem Cell Research (2003), Guiding Opinions on the Kind Treatment of Experimental Animals (2006), Regulations on Human Organ Transplantation (2007), Ethical Review Measures for Biomedical Research Involving Humans (2016), Safety Management Measures for Biotechnology Research and Development (2017), Regulations on the Administration of Human Genetic Resources (2019), Implementation Rules for the Regulations on the Administration of Human Genetic Resources (2023) and Ethical Guidelines for Human Genome Editing Research (2024). These measures have collectively contributed to the robust improvement of the S&T ethics system in the field of life sciences and medicine. The rapid evolution of the scientific revolution and industrial transformation has given rise to a plethora of emerging technologies that are profoundly altering the fabric of daily life. Concurrently, the ethics of these new technologies has emerged as a critical component of global S&T governance and an important topic in international S&T exchanges and global governance. In particular, against the backdrop of China–US rivalry in S&T, international cooperation is constrained in various technical domains, which makes dialogue and exchange in non-technical areas, such as the governance of S&T ethics, more important than ever before.
Developed countries started early in the governance of S&T ethics. Organizations such as the United Nations (UN) and the European Union (EU) and countries such as the US have accumulated valuable experience in this field through years of exploration and innovation. In 1947, the Nuremberg Military Tribunal set a precedent with the Nuremberg Code, providing the first international standard for human experimentation. The World Medical Association followed in 1964 with the Declaration of Helsinki, establishing the ethical principles for medical research involving human subjects. Having been revised 10 times by October 2024, the declaration lays a solid foundation for the development of biomedical ethics. In the 1970s, several documents related to S&T ethics were released. For example, the UN issued the Declaration on the Use of Scientific and Technological Progress in the Interests of Peace and for the Benefit of Mankind in 1975; the US issued the National Research Act (1974) and the Ethical Principles and Guidelines for the Protection of Human Subjects of Research (1979), providing the guiding principles for S&T activities (Liu and Li, 2023). In 1999, UNESCO adopted the Declaration on Science and the Use of Scientific Knowledge and the Science Agenda at the World Conference of Science, addressing the issues of scientific ethics and the social responsibilities of scientists, which had a global impact. Since the turn of the millennium, countries including the UK and Germany have introduced moral standards related to S&T ethics and bolstered their own national frameworks.
Over the years, two distinct governance models have emerged as a result of different perspectives on technological innovation and ethical governance. The US model, with its principle-based regulatory approach, holds that ethical issues can always be resolved as technology advances and that, therefore, no constraint should be imposed on technological progress. In contrast, the EU employs a precautionary principle, advocating for preventive measures when activities or policies pose potential risks to the public or environment (Huang and Zhai, 2024; Li et al., 2018). To harness the potential of new technologies while mitigating negative consequences, international organizations and developed countries have implemented a range of measures to tackle the challenges of S&T ethics.
Emerging technologies are pivotal in propelling S&T transformation and enhancing social productivity. Globally, major countries are strategically investing in R&D to secure a competitive edge in the current wave of technological revolution and industrial transformation. However, these technologies also present risks and uncertainties, making the prevention of ethical risks in emerging technologies as critical as their application. Artificial intelligence (AI) and biotechnology, as leading technologies in this round of technological revolution, are under intense scrutiny regarding their ethical governance. This paper focuses on key institutional measures taken by international organizations, such as the UN, the EU, the World Health Organization (WHO) and the International Society for Stem Cell Research, in areas such as AI, genetic editing and stem cells. It also examines the ethical governance practices of the US, the global leader in this round of technological revolution, to inspire similar efforts in China.
International practices on the governance of science and technology ethics for emerging technologies
International practice of AI ethics governance
UN agencies’ values and principles for AI governance
The UN places high importance on the governance of AI ethics, recognizing it as a central aspect of AI management. On 25 November 2021, UNESCO unveiled the Recommendation on the Ethics of Artificial Intelligence in Paris (UNESCO, 2022)—a document that provides an important reference and a basis for countries around the world to conduct AI ethics governance. The recommendation has five primary objectives. First, to establish a global framework of values, principles and actions that guides the establishment of AI-related national laws, policies and other instruments in alignment with international standards. Second, to direct individuals, organizations, communities, institutions and private enterprises to integrate ethics into every phase of the AI system life cycle. Third, to safeguard, promote and respect human rights and fundamental freedoms, human dignity and equality, including gender equality, for both present and future generations; to protect the environment, biodiversity and ecosystems; and to honour cultural diversity throughout the AI system life cycle. Fourth, to foster dialogue among multiple stakeholders, across disciplines and with diverse perspectives to build consensus on AI ethics. Fifth, to encourage equitable development and knowledge access in AI, sharing benefits with a particular focus on the needs and contributions of the least developed countries, landlocked developing countries and small island developing states. The recommendation emphasizes that the development and application of AI should reflect four core values: respect for, protection of and enhancement of human rights and dignity; promotion of environmental and ecological development; assurance of diversity and inclusion; and construction of a peaceful, just and interdependent human society. The document outlines 10 ethical principles to be adhered to throughout the AI system life cycle (Table 1). 
These values and principles are essential for orienting the development of AI systems and constitute a crucial foundational framework for AI ethics governance. Furthermore, the recommendation provides key policy interpretations in 11 domains, including ethical impact assessments, ethical governance and management, data policies, development and international cooperation, environmental and ecosystem considerations, gender issues, cultural aspects, education and research, communication and information, economic and labour force implications, and health and social welfare. The guidelines are intended to assist member states in applying ethical values and principles more effectively in practice.
Table 1. Ethical governance principles proposed in the Recommendation on the Ethics of Artificial Intelligence.
1. Proportionality and do no harm
2. Safety and security
3. Right to privacy and data protection
4. Multistakeholder and adaptive governance and collaboration
5. Responsibility and accountability
6. Transparency and explainability
7. Human oversight and determination
8. Sustainability
9. Awareness and literacy
10. Fairness and non-discrimination
In December 2023, the UN High-Level Advisory Body on Artificial Intelligence released an interim report titled Governing AI for Humanity (AI Advisory Body of UN, 2023). The report noted that the rapid development of AI presents unprecedented opportunities and potential for human society, and also brings risks such as bias reinforcement, surveillance expansion, blurring of responsibility boundaries and dissemination of disinformation. The report compiled a list of AI risks in six aspects: individuals, groups, society, economy, (eco)systems, and values and norms. It proposed the following guiding principles for AI: inclusivity, meaning that the governance of AI should be inclusive, managed by all, and for the benefit of all; public interest, meaning that AI must serve the public interest; centrality of data governance, meaning that the governance of AI should be conducted in tandem with data governance and the promotion of data sharing; universality, networking and multistakeholder participation, meaning that the governance of AI must be universal, networked and rooted in adaptive cooperation among multiple stakeholders; international law, meaning that the governance of AI should be based on the UN Charter, international human rights law and other agreed international commitments, such as the Sustainable Development Goals. In April 2024, the UN General Assembly adopted its first resolution on AI, Seizing the Opportunities of Safe, Secure, and Trustworthy Artificial Intelligence Systems for Sustainable Development (UN General Assembly, 2024), which proposes that the entire AI life cycle must be human-centric, reliable, explainable, ethical and inclusive; fully respect, promote and protect human rights and international law; safeguard privacy; be oriented towards sustainable development; and be responsible.
EU principles, evaluation system and tiered management approach for AI governance
The EU's approach to AI governance is firmly rooted in ethical considerations, emphasizing the advancement and innovation of AI while also safeguarding against its risks in order to protect the fundamental rights and security of citizens. In December 2018, the European Commission introduced the Coordinated Plan on Artificial Intelligence, setting forth an ambitious goal of establishing the EU's position as a global leader in ethical AI governance and advocating an ethics-first governance approach on the global stage (Li, 2024). Building on this, in April 2019, the European Commission published the Ethics Guidelines for Trustworthy AI (EU High-Level Expert Group on AI, 2019). The guidelines are instrumental for promoting AI governance among EU member states and at the global scale. They have laid out the key ethical principles to be observed in the development, deployment and use of AI systems, such as respect for human autonomy, prevention of harm, fairness and explicability, while also considering the potential interplay and conflicts among these principles and the need to strike a balance. The guidelines particularly stress the importance of attending to vulnerable groups, such as children and individuals with disabilities, as well as addressing information asymmetry between employers and employees and between businesses and consumers in the development, deployment and use of AI systems. To foster the development of trustworthy AI, the guidelines have identified three overarching requirements that should be met throughout the AI system life cycle. The first is legality, which involves compliance with all relevant laws and regulations. The second is ethicality, which ensures alignment with ethical principles and values. The third is robustness and reliability, emphasizing that AI systems must be robust and reliable at both the technical and societal levels. 
Sometimes, even with good intentions, AI systems may still cause unintended harms, which makes reliability even more important. The guidelines have also outlined seven critical requirements for the development, deployment and use of AI: maintaining human agency and oversight, ensuring technical robustness and safety, safeguarding privacy and data governance, maintaining transparency, ensuring diversity, non-discrimination and fairness, advancing societal and environmental well-being, and ensuring accountability. Based on these seven key requirements, the guidelines have provided a detailed checklist with 23 specific indicators for the assessment of trustworthy AI (EU High-Level Expert Group on AI, 2019).
In April 2021, the European Commission proposed the Artificial Intelligence Act, marking the first regulatory framework for AI within the EU. The document stipulates that AI systems can be categorized based on the risks they pose to users, with different levels of risk entailing more or less regulation, and strictly prohibits AI systems that present unacceptable risks to human safety. On 21 May 2024, the EU Council approved the Act. The Act represents a significant milestone for the EU and is the world's first law of its kind. The EU aims to fully implement the Act by 2026, and the ban on the items with the highest risks is expected to take effect by the end of 2024. The Act categorizes AI systems into four risk levels: unacceptable risk, high risk, limited risk and low risk (European Parliament and EU Council, 2024). The regulatory measure for unacceptable risk is a complete ban, and the application scenarios include social rating systems and the manipulation of cognitive behaviour. The regulatory measure for high risk is conditional permission, requiring prior approval and compliance with a set of requirements and obligations to enter the EU market; the application scenarios include recruitment, health care and biotechnology. The regulatory measure for limited risk is conditional permission, requiring advance notification and transparency; the application scenarios include impersonation and forgery systems, such as chatbots. The regulatory measure for low risk is permission without restrictions, and the application scenarios include AI systems that pose no or minimal threat to citizens’ rights or safety.
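As an illustrative sketch only (not part of the Act itself), the tiered logic described above can be expressed as a simple lookup that maps an assessed risk level to its regulatory treatment; the tier names follow the Act, while the mapping and example scenarios merely paraphrase the paragraph above.

```python
from enum import Enum


class RiskLevel(Enum):
    """The four risk tiers defined by the EU Artificial Intelligence Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    LOW = "low"


# Regulatory measure attached to each tier, paraphrasing the Act's scheme.
REGULATORY_MEASURES = {
    RiskLevel.UNACCEPTABLE: "complete ban (e.g. social rating, cognitive-behavioural manipulation)",
    RiskLevel.HIGH: "conditional permission: prior approval and compliance obligations (e.g. recruitment, health care)",
    RiskLevel.LIMITED: "conditional permission: advance notification and transparency (e.g. impersonation systems)",
    RiskLevel.LOW: "permission without restrictions",
}


def regulatory_measure(level: RiskLevel) -> str:
    """Return the regulatory treatment attached to a given risk tier."""
    return REGULATORY_MEASURES[level]
```

The point of the sketch is that regulation scales monotonically with assessed risk: classification, not case-by-case negotiation, determines the obligations a system faces before entering the EU market.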
AI ethics governance system of the US
Through years of development, AI governance has been further refined in the US. A comprehensive institutional framework encompassing national legislation, strategic initiatives, policy directives and normative guidance has been put in place, and the National Artificial Intelligence Initiative Office and similar bodies have been set up to oversee AI coordination and management. Academic institutions and corporations have also played an integral role in AI ethics governance, and a foundational ecosystem for AI ethics governance has taken shape.
From 2020 onwards, US government agencies have intensified oversight and law-based regulation of AI, introducing a series of key legislative and policy instruments, including the National Artificial Intelligence Initiative Act (2020), the Principles of Artificial Intelligence Ethics for the Intelligence Community (2020), the Principles for the Stewardship of AI Applications (2020) and the Blueprint for an AI Bill of Rights (2022). Following the initial release of the National Artificial Intelligence Research and Development Strategic Plan in 2016, the US National Science and Technology Council released subsequent updates in 2019 and 2023. Across all three iterations, a central strategic task has been to ‘understand and address the ethical, legal, and social implications of AI’ (US National Science and Technology Council, 2023). This has led to the identification of four priority areas: first, investing in foundational research, including through the designing of social technology systems, to enhance core values and study the ethical, legal and social implications of AI; second, comprehending and reducing the societal and ethical risks associated with AI; third, utilizing AI to address moral, legal and social challenges; and fourth, gaining insights into the broader impacts of AI. In October 2023, President Biden signed the Executive Order on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, outlining a regulatory approach to govern the AI industry (US White House, 2023a).
Back in December 2016, the Institute of Electrical and Electronics Engineers (IEEE) released the first draft of Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (version 1), emphasizing the importance of protecting human rights, maximizing benefits to humanity and the natural environment, and mitigating technological risks and negative impacts. The document outlined four key principles: (1) human benefit, ensuring that AI respects human rights; (2) responsibility, requiring people and institutions to have a clear understanding of the manufacturing of AI systems to avoid potential harm, with manufacturers able to demonstrate why the system operates in certain ways to address legal issues of culpability and prevent public fear; (3) transparency, ensuring that stakeholders understand how and why AI systems make certain decisions; and (4) education and awareness, informing citizens about the risks of AI misuse, such as hacking or exploitation by unethical manufacturers (IEEE, 2023a). In December 2017, building on societal input, the IEEE released a second draft for public discussion, underscoring that the ethical design, development and implementation of AI should observe five ethical principles: (1) human rights, safeguarding against violations of internationally recognized rights; (2) well-being, prioritizing welfare in design and application; (3) accountability, ensuring that designers and operators honour their responsibilities; (4) transparency, maintaining open operational methods; and (5) awareness of misuse, working to minimize the potential for misuse (IEEE, 2023b). 
After extensive consultation, the final version was officially published in 2019, expanding to eight principles with three additional ones: data agency, mandating AI system creators to provide individuals with both data access and the means to secure their data, while also preserving individuals’ control over their personal information; effectiveness, requiring developers to demonstrate that systems can achieve their intended purposes; and competence, requiring developers to designate personnel or specify the qualifications necessary to ensure the safe and effective operation of these systems. This version also presented a framework for implementing these principles in practice.
AI enterprises are central to AI ethics governance. In July and September 2023, the US government prompted 15 companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, OpenAI, Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI and Stability) to make voluntary commitments in the following aspects: first, ensuring product safety before market release; second, establishing protective measures against external security threats and internal vulnerabilities; third, actively earning societal trust through initiatives such as developing digital watermarking systems to identify AI-generated content. They also committed to research on potential risks and issues such as preventing bias and discrimination and protecting privacy (US White House, 2023b, 2023c).
International experience of science and technology ethics governance in the biotechnology field
Biotechnology is a key leading technology in the current round of scientific revolution and industrial transformation. With technologies such as genetic editing, synthetic biology and stem cells rapidly permeating into related fields, products based on these new technologies, such as genetic drugs, stem cell therapies, molecular breeding and genetically modified products, are making their way into people's lives and playing an increasingly important role. The ethics of biotechnology has always been a key aspect of global governance on S&T ethics. As early as 1997, UNESCO adopted the Universal Declaration on the Human Genome and Human Rights, which received public attention worldwide. The declaration clearly requires respect for human dignity and strict prior assessment and compliance with the law in the study of the human genome as well as in diagnosis and treatment. It also lays out clear conditions for conducting research activities related to the human genome. The declaration has played a positive role in encouraging member states to formulate their own laws, regulations, standards or ethical guidelines and principles in line with its spirit.
Subsequently, a series of globally influential documents, such as the Convention for the Protection of Human Rights and Dignity of the Human Being with Regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine (in force under the Council of Europe since 1999), the Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects (as revised by the World Medical Association in 2000), the International Ethical Guidelines for Biomedical Research Involving Human Subjects (published by the Council for International Organizations of Medical Sciences in 2002), the International Declaration on Human Genetic Data (adopted by UNESCO in 2003) and the Universal Declaration on Bioethics and Human Rights (published by UNESCO in 2005), have greatly strengthened the governance of ethics in the field of biotechnology.
In recent years, with the rapid advance of biotechnology, relevant international organizations and countries have successively updated or introduced a series of documents to address new ethical governance challenges. This section focuses on two documents that could serve as references for China in managing S&T ethics—the Framework for the Regulation of Human Genome Editing and the Guidelines for Stem Cell Research and Clinical Translation.
Regulatory principles and measures for human genome editing
In December 2018, the WHO convened a global multidisciplinary advisory committee: the Expert Advisory Committee on Developing Global Standards for Governance and Oversight of Human Genome Editing. The committee was tasked with offering advice and recommendations to guide institutions, nations, regions and global governance in the realm of human genome editing. In July 2021, the committee launched Human Genome Editing: A Framework for Governance (WHO, 2021), with the aim of fostering the safe, effective and ethical application of gene-editing technologies across the globe. The framework is articulated in six distinct sections. The first section delineates the committee's mandate and the rationale underpinning its work. The second section defines the concept of governance and the hallmarks of good governance. ‘Governance’ here refers to the norms, values and rules that encompass the management of public affairs, founded on the principles of transparency, participation, inclusiveness and responsiveness. Good governance is portrayed as an iterative and continuous process, equipped with mechanisms for periodic review. It is envisioned as proactive rather than merely reactive, with the benefit of fostering public trust. It is underpinned by ample resources, capabilities and technical expertise, and includes the participation of education, scientific, medical and health professionals, as well as the general public. Good governance is fundamentally driven by values and principles. The third section identifies the values and principles that help explain why governance measures may be needed and how those charged with reviewing or strengthening governance measures may undertake such a task. The fourth section presents 12 tools and mechanisms for governing human genome editing (such as laws, judicial rulings, departmental orders and certification and approval). 
It also delineates the various groups that should be engaged in the governance of human genome editing and offers a reference checklist for bolstering oversight measures, which could be tailored to specific contexts. The fifth section outlines seven scenarios that illustrate how the governance framework can be operationalized in practical applications. The sixth section concludes with an analysis of the factors that influence the governance of human genome editing.
Ethical principles and guidelines and classified management approach for stem cell research and clinical translation
In May 2021, the International Society for Stem Cell Research (ISSCR) released the Guidelines for Stem Cell Research and Clinical Translation (ISSCR, 2021) after a rigorous peer-review process involving scientists and ethicists from 14 countries. The guidelines expanded upon the 2016 edition, incorporating the latest developments in stem cell research, such as stem cell–derived embryo models, human embryo studies, chimeras, organoids and genomic editing. At the beginning of the document, five core ethical principles and guidelines were outlined. The first is integrity of the research enterprise, which demands that information acquired for the purpose of foundational, preclinical and clinical research should all be credible, reliable, accessible and attuned to scientific uncertainties and critical health needs. Central to this integrity are processes such as independent peer review, oversight, replication, institutional governance and accountability throughout the research continuum. The second is the primacy of patient/participant welfare, ensuring that vulnerable patients and research subjects are shielded from undue risk. Clinical trials must not sacrifice the welfare of current research subjects for the potential benefits to future patients. The third is respect for patients and research subjects, which requires researchers, clinicians and medical institutions to obtain effective informed consent from potential human research participants who possess the capacity to make decisions. Patients must be provided with accurate information about risks and the current state of evidence regarding novel stem cell–based interventions, in both research and care settings. The fourth is transparency, ensuring timely communication of accurate scientific information to other stakeholders.
Researchers and funders are encouraged to promote the open and timely sharing of ideas, methods, data and materials by publishing both positive and negative results in a timely manner. The fifth is social and distributive justice, demanding that the benefits of clinical translation be equitably distributed globally, with a particular emphasis on addressing unsatisfied medical and public health needs. The guidelines have explained and provided recommendations for the review requirements that institutions and researchers should observe in laboratory-based human embryonic stem cell research, embryo research and related activities. To ensure that human embryos and related stem cell research are fully considered and to ensure consistency in research practices among scientists worldwide, the guidelines have divided the research activities into three categories: those exempt from oversight review, those requiring oversight review and those explicitly prohibited. Furthermore, the guidelines have addressed the scientific, clinical, regulatory, ethical and social issues that should be resolved in the clinical translation of stem cell-based interventions, in order to responsibly transform basic stem cell research into appropriate clinical applications.
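The three-category review scheme described above can be sketched, purely for illustration, as a routine that routes a proposed research activity to its oversight category; the category names follow the guidelines, while the example activities and the lookup table itself are hypothetical placeholders (real assignments are made by institutional oversight processes, not code).

```python
from enum import Enum


class OversightCategory(Enum):
    """The three review categories in the ISSCR guidelines' classified approach."""
    EXEMPT = "exempt from oversight review"
    REVIEW_REQUIRED = "requires oversight review"
    PROHIBITED = "explicitly prohibited"


# Hypothetical examples used only to illustrate routing into the three tiers.
ACTIVITY_CATEGORIES = {
    "routine in-vitro culture of established stem cell lines": OversightCategory.EXEMPT,
    "research on human embryo models": OversightCategory.REVIEW_REQUIRED,
    "reproductive use of genome-edited human embryos": OversightCategory.PROHIBITED,
}


def route_activity(activity: str) -> OversightCategory:
    """Route a proposed activity to its oversight category (illustrative only)."""
    return ACTIVITY_CATEGORIES[activity]
```

The design mirrors the guidelines' intent: every activity lands in exactly one of the three tiers, so scientists worldwide can apply a consistent review procedure before work begins.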
Inspirations from international governance of science and technology ethics in emerging technologies
By analysing the ethical governance practices of the UN, the EU and other international organizations as well as the US in the fields of AI and biotechnology, we can draw the following insights for China's efforts to promote the governance of S&T ethics in emerging technologies.
First, there is a global consensus on the principles and values guiding the S&T ethics of emerging technologies, which aligns with the practical needs of ethical governance for new technologies. The principles, as synthesized from various sources, can be primarily distilled into six key points. The first is an emphasis on the centrality of human beings in the development and application of technology, with sufficient control and regulatory authority; ensuring human autonomy in the advancement of emerging technologies is essential for technology to serve humanity. The second is the principle of safety, which asserts that the development and application of technology must be safe, cause no harm to humans and introduce no exploitable vulnerabilities. The third is the principle of transparency, which requires all information to be traceable, phenomena to be explicable, and communication to be timely and effective. The fourth is the principle of fairness, which ensures equitable and non-discriminatory access to technology across different regions and groups to promote social fairness and inclusiveness. The fifth is the principle of integrity, which guarantees the truthfulness and reliability of all information and honestly discloses potential risks. The sixth is the principle of accountability, which requires that responsibility be clearly assigned so that negative impacts are minimized and corrected promptly.
Second, the governance of S&T ethics is evolving towards a more systematic and sustainable direction. The US experience shows that it has already laid the groundwork for a comprehensive system encompassing laws, strategic planning, policies and regulations, along with a management system overseen by specialized agencies and a social system co-constructed and co-governed by academia and corporations. Such a systematic approach provides a robust safeguard and an effective pathway for governing S&T ethics under current circumstances. Human Genome Editing: A Framework for Governance elucidates good governance, and, as technology advances and environments change, the governance of S&T ethics will become a regularly iterative and sustainable process.
Third, the strategy of classified governance of S&T ethics is worth emulating. The EU's Artificial Intelligence Act sorts technologies into four risk-based classes, an approach that can significantly promote the use of low-risk technologies and their iterative progress while ensuring safety, and that gives competent authorities a more scientific and authoritative legal basis for ethical oversight. The ISSCR's Guidelines for Stem Cell Research and Clinical Translation likewise apply a classified approach to the review and supervision of different research activities, clearly defining three major categories of research and translation activities: those exempt from review, those subject to special review and those that are prohibited. This classified management approach is worth considering by other nations.
Fourth, the governance of S&T ethics requires early intervention and full-process oversight. Both the Recommendation on the Ethics of Artificial Intelligence and Governing AI for Humanity underscore the principle of ethical governance throughout the AI system life cycle, encompassing all of its stages. This holistic approach strengthens ethical oversight from the earliest stages to final implementation, mitigating risks and enhancing the credibility of the entire research process or system design.
Policy recommendations
Drawing on the analytical insights from international experiences and considering the current state of S&T ethical governance in China's emerging sectors, we propose the following five recommendations.
First, enhance the S&T ethical governance system. While China's ethical framework in S&T has improved continuously, it is not yet systematic: laws on ethics are lacking in key areas such as AI, and the existing system remains largely general-purpose, with dedicated ethical management provisions in only a few technological fields. Future policymaking should therefore place greater emphasis on emerging technologies. In addition, the management system for S&T ethics should be strengthened: ethics committees at various levels have not yet fully played their intended roles, as their work is currently concentrated on ethical review and provides little systematic input into the broader governance of S&T ethics.
Second, develop a monitoring and evaluation system tailored to the ethical governance of emerging technologies. With reference to the EU's Ethics Guidelines for Trustworthy AI, China could establish guidelines or standards for ethical governance across all stages of the life cycles of emerging technologies and, on that basis, build two indicator systems: one monitoring and evaluating the S&T ethics of key innovation actors, and a macro-level one monitoring and evaluating the governance of S&T ethics in emerging-technology fields. These would guide the ethical management practice of key innovation actors, including institutions and managers, and give competent departments methodological support for keeping track of the situation.
Third, introduce a classified and graded mechanism for managing S&T ethics. Drawing on the tiered governance approaches of the Ethics Guidelines for Trustworthy AI, the Artificial Intelligence Act, the Guidelines for Stem Cell Research and Clinical Translation and Human Genome Editing: A Framework for Governance, China could implement classified, graded ethical management in its emerging technology sectors, prioritizing the regulation of technologies that pose high risks and present significant ethical governance challenges, so as to make preventive and control measures more precise, transparent and stable. Application scenarios of emerging technologies that carry no ethical risk should be exempt from regulation, leaving technology full room to develop and expand its applications. The aim is to maximize the benefits of technology while minimizing its risks.
Fourth, embed S&T ethics into the complete cycle of emerging technologies and strengthen ethical supervision across the whole process. Both the Recommendation on the Ethics of Artificial Intelligence and the Coordinated Plan on Artificial Intelligence stress that ethics should be embedded in all stages of the AI system life cycle, including development, deployment and use. We should prioritize the full-cycle regulation of S&T ethics of technology companies in emerging fields such as AI and biotechnology, ensure their compliance with ethical norms, and help them improve their capabilities and performance in the governance of S&T ethics.
Fifth, enhance international cooperation and exchange. In January 2023, the EU Directorate-General for Communications Networks, Content and Technology and the US Department of State signed the Administrative Arrangement on Artificial Intelligence for the Public Good, highlighting that AI presents unprecedented legal, regulatory and ethical challenges that transcend national borders and industries. The responsible development and deployment of AI requires meaningful and sustained cooperation, closer collaboration between the private and public sectors, and joint efforts by stakeholders across industries. In recent years, China has published or participated in publishing the Global AI Governance Initiative and the Shanghai Declaration on Global AI Governance, which have strongly boosted China's involvement in global AI governance. Nevertheless, cooperation on the ethical governance of AI and other emerging technologies still needs to be strengthened, particularly in personnel exchange and training, academic interaction and cooperation, and the alignment of regulations and guidelines.
Footnotes
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was supported by the National Natural Science Foundation of China (grant no 72341007) and the Ministry of Science and Technology of the People's Republic of China (grant no 2024JP012).
ORCID iD
Yao Yang
Author biographies
Yao Yang is an associate research fellow at the National Center for Science and Technology Evaluation (NCSTE). He holds a PhD in entomology and is a postdoctoral researcher engaging in science, technology and innovation (STI) policy research and evaluation. His research interests include research integrity, STI strategy and policy evaluation, and insect evolutionary biology. He has presided over and participated in more than 20 national and ministerial projects in those areas.
Xiaoyong Shi is a research fellow and the head of the Department of Innovation Strategy Research at NCSTE. He joined NCSTE in 2005 and is engaged in research and evaluation of S&T strategy and policy. He has hosted a series of research and evaluation projects concerning S&T plans, programmes, policies and megaprojects. In 2013 and 2014, he served as a policy analyst at the Organisation for Economic Co-operation and Development.
References
Huang MT, Zhai XM (2024) Hot topics and visual analysis of research on science and technology ethics at home and abroad. Medicine & Philosophy 45(7): 24–29 (in Chinese).
IEEE (Institute of Electrical and Electronics Engineers) (2023a) Ethically Aligned Design: A Vision for Prioritizing Well-being with Artificial Intelligence and Autonomous Systems (version 1). Available at: http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html (accessed 15 July 2024).
ISSCR (International Society for Stem Cell Research) (2021) Guidelines for Stem Cell Research and Clinical Translation. Available at: https://www.isscr.org/guidelines (accessed 22 July 2024).
Li ZZ, Dong YL, Gao YW (2018) Designing life: Safety risks and ethical challenges of synthetic biology. Bulletin of Chinese Academy of Sciences 33(11): 1269–1276 (in Chinese).
Liu X, Li X (2023) The value orientation of science and technology ethics and its important role in international cooperation in science and technology. Chinese Science Bulletin 68(13): 1611–1616 (in Chinese).
UN General Assembly (2024) Seizing the Opportunities of Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development. Available at: https://digitallibrary.un.org/record/4043244?v=pdf (accessed 25 June 2024).
WHO (World Health Organization) (2021) Human Genome Editing: A Framework for Governance. Available at: https://iris.who.int/handle/10665/342484 (accessed 20 July 2024).