Dear Editor,
I read with interest the paper entitled “Chinese colposcopists’ attitudes toward the colposcopic artificial intelligence (AI) auxiliary diagnostic system (CAIADS): a nation-wide, multicenter survey” by Wang et al.1 AI-assisted diagnostic systems have been shown to improve diagnostic accuracy in several clinical areas and to bridge the competency gap between doctors at different levels of expertise.2,3 This is critical to improving the health and well-being of patients. The potential benefits of CAIADS are particularly evident for examinations such as colposcopy, which require a high level of diagnostic competence.4 However, several constraints impede the adoption of CAIADS in clinical settings, including ethical considerations, implementation costs, and doctors’ attitudes. Wang et al. found that doctors from different levels of healthcare organizations held positive attitudes toward using CAIADS in clinical settings.1 This is undoubtedly a positive sign for the widespread use of CAIADS. However, realizing the goal mentioned by Wang et al., namely improving women’s health and reducing health inequalities through the widespread adoption of CAIADS in clinical settings, requires addressing at least two further constraints; doing so will involve stakeholders beyond doctors, including the developers of AI tools, policymakers, and hospital administrators.
First, beyond the issue raised by Wang et al. of determining responsibility for AI-assisted clinical diagnosis, there is a more fundamental ethical problem: algorithmic bias.5 For AI tools to mitigate rather than exacerbate health inequalities, their training data must be broadly representative. Developers must therefore attend carefully to data representativeness when building AI tools, ensuring that their products perform appropriately across different ethnic, racial, gender, age, and other groups. At the same time, policymakers should consider making representativeness a mandatory criterion when vetting AI tools, for example by requiring that a certain percentage of data from minority groups be included in the training data.
Second, deploying AI tools in healthcare organizations entails significant costs, including the direct costs of the technology itself and indirect costs such as staff training, time, and ensuring compatibility between the old and new technologies available in the organization.6,7 These high costs raise at least two concerns. First, governments and healthcare organizations need assurance that the substantial sums spent on implementing AI tools deliver valuable results rather than being wasted. Second, if only large healthcare organizations can afford AI tools while others cannot, then using AI tools to mitigate health inequalities becomes an empty promise. Therefore, hospital administrators and policymakers need to define appropriate evaluation models to assess the health and economic benefits of AI tools and make decisions accordingly.8 In addition, policymakers may need to consider selecting healthcare organizations of different levels in different regions as pilot sites, offering them free or low-cost opportunities to implement AI tools.
Footnotes
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Guarantor
HNY.
