Abstract
Endodontics is the dental specialty foremost concerned with diseases of the pulp and periradicular tissues. Clinicians often face patients with varying symptoms, must critically assess radiographic images in 2 and 3 dimensions, derive complex diagnoses, make treatment decisions, and deliver sophisticated treatment. Paired with low intra- and interobserver agreement for radiographic interpretation and variations in treatment outcome resulting from nonstandardized clinical techniques, there exists an unmet need for support in the form of artificial intelligence (AI), providing automated biomedical image analysis, decision support, and assistance during treatment. In the past decade, there has been a steady increase in AI studies in endodontics but limited clinical application. This review focuses on critically assessing the recent advancements in endodontic AI research for clinical applications, including the detection and diagnosis of endodontic pathologies such as periapical lesions, fractures and resorptions, as well as clinical treatment outcome predictions. It discusses the benefits of AI-assisted diagnosis, treatment planning and execution, and future directions including augmented reality and robotics. It critically reviews the limitations and challenges imposed by the nature of endodontic data sets, AI transparency and generalization, and potential ethical dilemmas. In the near future, AI will significantly affect the everyday endodontic workflow, education, and continuous learning.
Introduction
Artificial intelligence (AI), a term coined by John McCarthy, was originally defined as “the science and engineering of making intelligent machines.” In 1955, McCarthy and colleagues proposed a 2-mo, 10-man study based on the conjecture that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” (McCarthy et al. 2006). This landmark project evolved rapidly and continues to affect multiple aspects of our lives today. AI is now a powerful tool that is used to solve complex problems.
Machine learning (ML), a subset of AI, uses algorithms instead of explicit programming to analyze and learn from data and then make informed decisions (Choi et al. 2020). The approach of machine learning is to let computers learn to program themselves through experience. It starts with gathering and preparing data on which the ML model is trained to find patterns or make predictions. A human programmer may tweak this model to yield more accurate results. Then, new evaluation data are used to test the accuracy of the ML model.
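The gather-train-tweak-evaluate workflow described above can be sketched with a deliberately simple learner. The 1-nearest-neighbor "model" and the tiny data set below are hypothetical, chosen only to make the held-out evaluation step concrete; they do not represent any endodontic system.

```python
# Toy illustration of the machine-learning workflow: gather data, train a
# model, then evaluate it on unseen data. The data and the 1-nearest-neighbor
# "model" are hypothetical placeholders.

def euclidean(a, b):
    """Distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(train_X, train_y, sample):
    """Predict the label of `sample` as the label of its nearest neighbor."""
    nearest = min(range(len(train_X)), key=lambda i: euclidean(train_X[i], sample))
    return train_y[nearest]

# 1) Gather and prepare data (features could stand in for image descriptors).
train_X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
train_y = ["healthy", "healthy", "lesion", "lesion"]

# 2) "Training" a nearest-neighbor model is simply storing the examples.
# 3) Evaluate on held-out data the model has never seen.
test_X = [[0.15, 0.15], [0.85, 0.85]]
test_y = ["healthy", "lesion"]
correct = sum(predict(train_X, train_y, x) == y for x, y in zip(test_X, test_y))
accuracy = correct / len(test_X)
print(accuracy)  # 1.0 on this toy split
```

In practice, the evaluation step would use a much larger held-out set, and a poor score would prompt the programmer to adjust the model before retesting.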
Deep learning (DL), itself a subdomain of ML, involves constructing neural networks with multiple layers to learn from data sets and make predictions (Choi et al. 2020). Artificial neural networks may contain thousands or millions of processing nodes that are interconnected and organized into layers similar to the human brain. DL has become particularly valuable for the analysis of complex imagery data, such as biomedical images.
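As a minimal illustration of the layered architecture described above, the following sketch passes an input through two fully connected layers of an artificial neural network. All weights, biases, and layer sizes are arbitrary placeholders, not a trained model.

```python
# Minimal forward pass through a 2-layer artificial neural network,
# illustrating how interconnected nodes organized into layers transform an
# input into a prediction. All numeric values are arbitrary placeholders.
import math

def relu(x):
    """Common hidden-layer activation: passes positives, zeroes negatives."""
    return max(0.0, x)

def sigmoid(x):
    """Squashes the output node into (0, 1), a probability-like score."""
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases, activation):
    """One fully connected layer: each node sums weighted inputs plus a bias."""
    return [activation(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Layer 1: 2 inputs -> 3 hidden nodes; Layer 2: 3 hidden nodes -> 1 output.
W1 = [[0.5, -0.2], [0.3, 0.8], [-0.6, 0.1]]
b1 = [0.0, 0.1, 0.2]
W2 = [[1.0, -1.0, 0.5]]
b2 = [0.0]

hidden = dense([0.4, 0.7], W1, b1, relu)
output = dense(hidden, W2, b2, sigmoid)
print(output)
```

Deep models stack many such layers, and training consists of adjusting the weight matrices so the final output matches labeled examples.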
AI has enabled breakthroughs in diverse health care fields, including radiology, cardiovascular surgery, and neurology (Zhu et al. 2022). The Food and Drug Administration (FDA) has approved 171 AI-based applications (Food and Drug Administration 2023). In endodontics, AI has the potential to improve the diagnosis, treatment, and prevention of pulpal and periapical disease. While AI in endodontics is focused mainly on radiologic diagnosis, AI-based applications are also being developed for clinical treatment and prediction of outcomes. Several reviews have described the application of AI in endodontics (Boreak 2020; Aminoshariae et al. 2021; Das 2022; Karobari et al. 2023; Khanagar et al. 2023; Patel et al. 2023; Ramezanzade et al. 2023; Sudeep et al. 2023). While some of these reviews addressed particular limitations and challenges (Aminoshariae et al. 2021; Das 2022; Patel et al. 2023), for example, regarding the need to validate the reliability, applicability, and cost-effectiveness of AI models for clinical implementation, the steady increase in endodontic AI publications warrants a more thorough critical review. This review will highlight the different AI-based technologies for endodontics and discuss some of the challenges, limitations, and ethical concerns.
Advancements in endodontic AI research for clinical applications include the detection and diagnosis of periapical lesions (PLs), fractures, resorptions, and root canal anatomy, as well as clinical treatment outcome prediction. Representative studies with methods and outcomes are presented in Table 1.
Representative Studies of Clinical AI Applications Detailing AI Methods and Outcomes.
2D, 2-dimensional; 3D, 3-dimensional; AI, artificial intelligence; AUC, area under the curve; CBCT, cone-beam computed tomography; CNN, convolutional neural network; DCNN, deep convolutional neural network; DL, deep learning; DWD, ; ECR, extracanal cervical resorption; ML, machine learning; NPV, negative predictive value; PL, periapical lesion; PPV, positive predictive value; ROC, receiver-operating characteristic; ROI, region of interest; SSL, self-supervised learning.
Significant work on AI applications in endodontics, especially for lesion detection, has been based on 2-dimensional (2D) radiography, such as periapical and panoramic radiographs. The introduction of 3-dimensional (3D) radiography in the form of cone-beam computed tomography (CBCT) has significantly improved the detection of PL compared with 2D radiography. Nevertheless, CBCT interpretation by clinicians suffers from low inter- and intraobserver agreement, and sensitivity and specificity for PL detection in endodontically treated teeth are low (Parker et al. 2017). Thus, AI-based CBCT applications have become critical to reduce observer bias. A systematic review and meta-analysis of the diagnostic test accuracy of DL algorithms on pooled data from 12 studies reported the sensitivity range for radiographic detection of PL to be 0.65 to 0.96 (Sadr et al. 2023), comparable with lesion detection accuracy by human clinicians using CBCT. Setzer et al. (2020) used DL algorithms for automated PL detection and simultaneous multilabel segmentation of 5 categories—lesion, tooth structure, bone, restorative materials, and background—from CBCTs (Fig. 1). Multilabel segmentation approaches require significant time and labor to provide training and validation data sets based on expert clinician annotation.
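A common way to score an automated segmentation against clinician-labeled ground truth is an overlap metric such as the Dice coefficient. The following minimal sketch uses tiny hypothetical label maps with 3 of the categories, not the data or metrics of Setzer et al. (2020); real inputs would be full CBCT volumes.

```python
# Sketch of scoring an automated multilabel segmentation against
# clinician-labeled ground truth with the per-label Dice coefficient.
# The tiny 3x3 label maps below are hypothetical.
import numpy as np

def dice(pred, truth, label):
    """Dice overlap for one label: 2|A ∩ B| / (|A| + |B|)."""
    p, t = (pred == label), (truth == label)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

# 0 = background, 1 = bone, 2 = lesion (a reduced version of the 5 categories).
truth = np.array([[0, 1, 1],
                  [0, 2, 2],
                  [0, 0, 2]])
pred  = np.array([[0, 1, 1],
                  [0, 2, 0],   # one lesion voxel missed by the model
                  [0, 0, 2]])

scores = {label: dice(pred, truth, label) for label in (0, 1, 2)}
print(scores[2])  # lesion Dice = 2*2 / (2 + 3) = 0.8
```

A Dice score of 1.0 indicates perfect agreement with the expert annotation; segmentation studies typically report one score per label class.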

Full 3-dimensional multilabel segmentation with periapical lesion detection of dental limited field-of-view cone-beam computed tomography (CBCT) of the maxillary left quadrant (canine through wisdom tooth). Periapical lesion on the first maxillary left molar. Comparison of clinician-labeled ground truth segmentation (Clinician) with fully automated segmentation of the identical areas with the AI platform (AI). Ground truth labeling techniques as described by Setzer et al. (2020). Labels: lesion, red; tooth structure, yellow; bone, blue; restorative materials, green; background, black.
Cracked teeth are the third most common cause of tooth loss in industrialized countries. The early detection of cracks, followed by appropriate interventions to prevent crack propagation, is an effective strategy to avert tooth loss. The development of objective and reliable AI-based methods to detect cracks is imperative. Early attempts used convolutional neural network–based segmentation for individual teeth before applying fracture detection algorithms. However, this approach was not optimal for clinical data, as clinical scans are often acquired using different acquisition parameters, a problem well known to the ML community as domain shift. To address this problem in an unsupervised manner, Sahu et al. (2023) developed a novel 3D Fourier domain adaptation model for tooth segmentation from a source domain to an adapted target domain (i.e., 2 different CBCT scans and acquisition protocols). Their experiments demonstrated that the proposed domain adaptation method can significantly improve the segmentation performance for the target domain. This technology is currently being further refined to increase the predictive validity of CBCT in detecting cracks (Fig. 2; Table 1).
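The Fourier-based idea can be sketched in 2 dimensions: the low-frequency amplitudes of a source image are replaced with those of a target-domain image, while the source phase, which carries structural content, is kept. This is a simplified illustration of the general amplitude-swapping principle, not the 3D model of Sahu et al. (2023); the random arrays stand in for image slices.

```python
# Minimal 2D sketch of Fourier-based domain adaptation: swap the central
# (low-frequency) amplitude band of a source image with that of a
# target-domain image, keeping the source phase. Inputs are synthetic.
import numpy as np

def fourier_adapt(source, target, beta=0.1):
    """Return `source` restyled with the low-frequency amplitudes of `target`."""
    fs = np.fft.fftshift(np.fft.fft2(source))
    ft = np.fft.fftshift(np.fft.fft2(target))
    amp_s, phase_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)

    h, w = source.shape
    bh, bw = max(1, int(beta * h)), max(1, int(beta * w))
    ch, cw = h // 2, w // 2
    # Replace only the band around the spectrum center (low frequencies).
    amp_s[ch - bh:ch + bh, cw - bw:cw + bw] = amp_t[ch - bh:ch + bh, cw - bw:cw + bw]

    adapted = np.fft.ifft2(np.fft.ifftshift(amp_s * np.exp(1j * phase_s)))
    return np.real(adapted)

rng = np.random.default_rng(0)
src = rng.random((64, 64))          # stand-in for a source-domain slice
tgt = rng.random((64, 64)) * 2.0    # stand-in for a differently acquired scan
adapted = fourier_adapt(src, tgt)
print(adapted.shape)  # (64, 64)
```

A segmentation network trained on such adapted images sees source-domain structures rendered with target-domain intensity statistics, reducing the domain gap without requiring labels in the target domain.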

Crack detection.
An innovative AI application developed by Lee et al. (2023) analyzed 2D preoperative radiographs to predict the outcome of endodontic treatment after 3 y (Table 1). Success was defined as a periapical index (PAI) score of 1 and failure as a PAI score of 4 or 5, or radiographic evidence of extraction (Fig. 3). A limitation of the study was that it was based solely on radiographs. Advanced algorithms should aim to include patient factors such as medical history or the quality of restorations to achieve higher accuracy.

Visualization of 2 “test set” case examples. Each example shows the gray-scale preoperative periapical radiographs and corresponding Grad-CAM heat map of each feature or endodontic prediction. The red region represents a larger weight, which can be decoded by the color bar on the right. In this figure, 4 clinical features and the endodontic outcome prediction Grad-CAM heat map were superimposed on a preoperative preprocessed image.
The review of text records by AI models has been applied for training purposes and to improve the detection and classification of pathologies. In endodontics, text recognition has not yet been harnessed. However, future applications would likely benefit from its incorporation, particularly as an AI review of patient records would be objective and not subjective as with clinicians (Karobari et al. 2023). Text record implementation might benefit training (e.g., to learn the differential diagnosis of lesions) and also improve outcome prediction. However, there are concerns regarding selection bias and privacy when large quantities of original patient data are assessed for purposes of AI training (Das 2022). Synthetic data, as in artificially created patient records modeled on real patients, may provide an alternative solution.
Potential Benefits of Using AI in Endodontic Diagnosis and Treatment Planning
Integrating AI collaboratively for endodontic diagnosis and treatment planning may offer multiple benefits, including increased accuracy and efficiency (Aminoshariae et al. 2024). AI algorithms can analyze large amounts of patient data, including radiographic and clinical images, medical records, and clinical symptoms. Integrating more patient data points with knowledge gained from AI-driven prognostication and outcome studies will significantly improve clinical decision making and treatment planning using AI support. AI models will continue to learn and adapt after new information and feedback from clinicians become available and provide clinicians with recommendations and probabilities for different endodontic pathologies, aiding in accurate diagnosis. Over time, the algorithms will refine their diagnostic and treatment-planning capabilities, improving accuracy and efficiency. This may contribute to reduced costs and burdens on the health care system and improved integration of all stakeholders, including patients, providers, and insurance carriers (Schwendicke and Büttner 2023).
AI can provide objectivity in diagnosing endodontic pathologies, evaluating features and patterns that a human observer may not easily detect and reducing subjective bias clinicians add to medical image analysis (Hosny et al. 2018). In situations of failed root canal therapy, a decision between nonsurgical and surgical retreatment has to be made if a patient opts for tooth retention. Histologically, odontogenic PLs may be granulomas, cysts, or abscesses. While abscesses can be diagnosed clinically by the presence of a sinus tract or clinical symptoms such as swelling, redness, or pain, cysts cannot be distinguished from granulomas clinically or radiographically. This affects the potential outcome of a retreatment procedure, as there is a general consensus that granulomas heal after adequate nonsurgical retreatment. However, this may be different for epithelial-lined cysts, which may require surgical excision (Setzer and Kratchman 2022). AI-supported differential diagnosis based on radiographic imaging to distinguish apical granulomas from cysts or other types of lesions may provide practitioners with valuable decision support to favor either nonsurgical or surgical retreatment options.
Earlier work by Simon et al. (2006) aimed to identify cysts versus granulomas based on gray-scale values in CBCT images and achieved a 76.5% accuracy compared with the histology results. Building on this work, Okada et al. (2015) used a semiautomatic combination of graph-based random walks segmentation with ML-based boosted classifiers and evaluated 28 CBCT data sets that included the original 17 CBCT data sets of Simon et al. (2006) to attempt automated differential diagnosis of periapical granulomas from cystic lesions. The results of Okada et al. (2015) coincided with those of Simon et al. (2006) in 94.1% of cases. Ver Berne et al. (2023) published a 2-step DL approach for the radiologic detection and classification of radicular cysts versus periapical granulomas and achieved a sensitivity of 1.00 (0.63 to 1.00), specificity of 0.95 (0.86 to 0.99), and area under the receiver-operating characteristic curve (AUC) of 0.97 for radicular cysts and a sensitivity of 0.77 (0.46 to 0.95), specificity of 1.00 (0.93 to 1.00), and AUC of 0.88 for periapical granulomas. Moreover, several studies investigated AI-based approaches to detect and classify ameloblastomas from odontogenic keratocysts, lesions that are important for differential diagnosis from lesions of endodontic origin; these studies reported excellent results for lesions exceeding 10 mm in diameter (Chai et al. 2022).
AI could help to implement greater consistency and standardization, aid in lowering interobserver variability, and ensure that diagnostic and treatment decisions align with best practices, improving the quality and consistency of endodontic care across different settings and clinicians (Aminoshariae et al. 2024). AI decision support and expertise augmentation may provide clinicians with evidence-based recommendations, second opinions, and “red flag” warnings based on probabilities acquired from large data sets and information gained from clinical studies (Setzer et al. 2020). The use of AI could result in a reduction in unnecessary medical imaging or consultations. The predictive capabilities of AI can also further provide risk assessment, helping practitioners identify cases with a high risk of failure and possibly avoiding future complications by either modifying the overall treatment plan or suggesting different time points for needed interventions (Mallishery et al. 2020).
AI-Assisted Endodontic Treatment
AI-based applications may aid clinical root canal therapy in various ways. Automated radiographic image analysis, including CBCT imaging, should render the root canal system precisely, including the number and complexity of root canals as well as lengths, curvatures, and general morphology, such as canal fusions, divergences, or anatomical variations. As outlined above, AI-based technologies have already been developed for automated segmentation of the pulp cavity (Lin et al. 2021) and the root canal system (Wang et al. 2023). Specific root canal anatomies, such as the second mesiobuccal canal in maxillary molars (Duman et al. 2023) or C-shape configurations in mandibular molars (Sherwood et al. 2021) have been successfully detected and classified using AI-based models. Similar advances have been made to detect pulp calcifications. For surgical endodontics, the automated detection and segmentation of tissues surrounding the operating site can reduce the risk of iatrogenic errors and injury to anatomical structures (e.g., the inferior alveolar or mental nerves) (Oliveira-Santos et al. 2023).
However, there is still a need to develop clinically approved applications, either as stand-alone programs or incorporated into existing image analysis applications, such as proprietary CBCT software. These programs may be cloud based or run on existing hardware. At the time of this report, only 2 dental AI companies have received FDA clearance to detect clinical conditions with a specific interest in endodontics in 2D bitewing and periapical radiographs, including tooth and root canal segmentation and PL detection. However, it is unlikely that AI support will stop at applications that aid the detection or classification of pathologies or raise red flags if a clinician is at risk of overlooking specific entities. AI support will include office management applications, review of drug regimens for patients, or treatment planning and decision support (Patel et al. 2023).
AI-based real-time support techniques are being developed for clinicians. Two cadaver studies evaluated the ability of an AI model to identify the minor apical foramen and determine the working length (Saghiri et al. 2012), the latter with 96% accuracy compared with determinations by experienced endodontists. These tools can aid in identifying the precise location of the anatomical constriction, allowing for more effective and efficient root canal treatment.
Augmented Reality and Robotics
Augmented reality (AR) is an interactive experience that combines and enhances the real world with computer-generated content. AR and AI are related in endodontics via dynamic guided navigation techniques. Imaging sensors for dynamic navigation or AR headset techniques feed data into computer vision and ML algorithms to track the position of a tool or its user and provide real-time orientation feedback. Three-dimensional positional tracking allowing for dynamic navigation has been used for locating calcified canals (Jain et al. 2020), the retreatment of fiber posts, and for guided root-end resection in endodontic microsurgery.
Farronato et al. (2023) implemented a markerless AR system to drill preplanned virtually guided access cavities and compared the system’s accuracy and efficiency with that of pre- and postoperative high-resolution CBCT scans. A similar in vitro study using AR for endodontic access cavity preparation was also published by Faus-Matoses et al. (2022). Several studies have explored the use of AR technologies for surgical endodontics, evaluating its use for osteotomy and apicoectomy and comparing its accuracy to that of template-guided approaches in in vitro models (Remschmidt et al. 2023). Ultimately, this will also result in the development and application of robotic endodontic procedures. The first descriptions of robotic dental procedures are all related to autonomous implant placement (Cheng et al. 2021). Artificial intelligence and robotics are closely related fields (Karobari et al. 2023). Endodontic robotic systems will support automated root-end surgery or root canal instrumentation by providing precise and controlled movements and haptic feedback based on real-time data from intraoral sensors.
To train these robots, the growing adoption of AR systems in endodontics lays a solid foundation by accumulating a wealth of data about surgery planning and real-time execution for individual patients together with CBCT characterizing their 3D anatomy. This data-rich environment will pave the way for training robots in endodontics to learn from human execution with supervised learning algorithms and develop autonomy. To enhance the training process, generative AI can be employed to augment the training data sets beyond real patient data, allowing robots to encounter a wide range of scenarios and challenges (Bandi et al. 2023). In addition, reinforcement learning (RL) can be leveraged to enable robots to learn from their actions and improve over time (Hu et al. 2023). Integrating human oversight and feedback into the RL process can ensure a more robust learning experience. It is worth noting that robot training and development in endodontics is an emerging field, with the first case reports recently published (Isufi et al. 2024; Liu et al. 2024), and may borrow experience from other successful fields such as autonomous vehicles.
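As a toy illustration of the reinforcement learning principle mentioned above, the tabular Q-learning loop below lets an agent improve from the outcomes of its own actions. The 1-dimensional "corridor" environment is entirely hypothetical and unrelated to any clinical or robotic system; real robotic RL operates on continuous sensor and actuator spaces.

```python
# Toy reinforcement-learning loop (tabular Q-learning): the agent learns,
# purely from trial and error, to walk right along a 1D corridor to a goal.
# The environment is a hypothetical stand-in for illustration only.
import random

N_STATES, GOAL = 5, 4          # states 0..4; reaching state 4 ends an episode
ACTIONS = [-1, +1]             # move left or move right
alpha, gamma, eps = 0.5, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action] value table
rng = random.Random(0)

for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda i: Q[state][i])
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Q-learning update: adjust the estimate toward the observed outcome.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# The greedy policy after training (1 = move right, 0 = move left).
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(GOAL)]
print(policy)
```

Integrating human oversight, as noted above, would correspond to shaping the reward signal or vetoing actions during such a training loop rather than letting the agent learn unsupervised.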
Challenges and Limitations of Current AI Applications in Endodontics
A variety of pitfalls exist for AI in health care (Schwendicke and Büttner 2023). AI-based applications (e.g., DL models) typically rely on large data sets for training, validation, and testing of the algorithms. Annotation and labeling of the training data are time and labor intensive, particularly if CBCT data are assessed. To date, there are no image repositories for AI training in dentistry, although such repositories have been established in the medical field to advance AI in medicine through transparent and reproducible collaborative research. Creating medical image repositories for dentistry and pooling resources from research institutions should be undertaken to gain larger sample sizes for AI training and validation (Huang et al. 2024).
To cope with limited sample sizes, approaches such as transfer learning, in which AI networks are pretrained on other data sets with larger data availability (Kora et al. 2022), and self-supervised learning, which allows a model to pretrain and learn from unlabeled data (Shurrab and Duwairi 2022), have become commonplace in AI. These models may then be fine-tuned on different target data sets for specific tasks (Caron et al. 2021). Initially designed for natural language processing, transformer-based models have been adapted for various vision tasks. Transformers offer greater flexibility than architectures based on convolutional neural networks, as they use patches of input images and self-attention mechanisms to identify dependencies across the entire image (Dosovitskiy et al. 2020). However, they typically require larger training data sets. To date, only 1 study in endodontics has attempted to use a transformer-based architecture to describe and classify radiolucent lesions in panoramic radiographs (Silva et al. 2024). Another technique for overcoming small data sets is active learning, in which training data are assessed using uncertainty quantification and samples with the highest uncertainty scores are labeled to train the AI (Huang et al. 2024).
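The active learning strategy mentioned above can be sketched as follows: from a pool of unlabeled samples, those whose predicted class probabilities have the highest entropy (i.e., the most uncertain) are selected and sent to an expert for labeling. The probabilities below are hypothetical model outputs, not results from any cited study.

```python
# Sketch of uncertainty-based active learning: rank unlabeled samples by the
# entropy of their predicted class distributions and pick the most uncertain
# ones for expert annotation. The probabilities below are hypothetical.
import math

def entropy(probs):
    """Shannon entropy of a class distribution (higher = less certain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_labeling(pool_probs, k):
    """Return indices of the k most uncertain samples in the unlabeled pool."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: entropy(pool_probs[i]), reverse=True)
    return ranked[:k]

# Predicted (lesion, no-lesion) probabilities for 4 unlabeled radiographs.
pool = [[0.99, 0.01],   # model is confident: low labeling value
        [0.55, 0.45],   # uncertain
        [0.90, 0.10],
        [0.50, 0.50]]   # maximally uncertain: highest labeling value
print(select_for_labeling(pool, 2))  # [3, 1]
```

Concentrating scarce expert annotation effort on such high-uncertainty samples is what allows active learning to reduce the labeling burden described earlier.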
If data availability is restricted to 1 or a few institutions, obtaining diverse and representative data from different populations can be challenging, potentially introducing biases and limiting the generalizability of AI models to other patient groups or settings (Khanagar et al. 2023). Variations in patient demographics, treatment protocols, and equipment may affect the applicability and performance of AI models. AI models should be validated across diverse populations and settings. Overfitting may occur when an AI architecture gives accurate predictions for the training set but not for new data (Ramezanzade et al. 2023). In addition, domain shift may complicate matters for imaging applications (Sahu et al. 2023). Different machines may have different parameters and may present challenges to AI algorithms. Therefore, it has been suggested that AI applications must be tested on different machines to verify that prediction results achieved with different devices are reproducible. Challenges of interpreting radiographic images can also relate to the anatomical complexity and variation of the human dentition, noise and artifacts, image quality and variability, lack of standardization, limited training data, issues with ground truth labeling and interobserver variability, or the lack of clinical validity due to variations between different environments.
In particular, the lack of standardization in endodontics may affect AI studies. Practice philosophies may vary among clinicians, depending on education and training, variations in treatment guidelines, and a general lack of standardized protocols, complicating the development of universally applicable and accepted AI models. AI-based applications should consider diverse treatment approaches, anatomical variations, and cultural factors. AI models, especially if developed using DL approaches, also raise concerns regarding interpretability and explainability. DL models often operate as black boxes, making it difficult to comprehend how the AI arrived at specific conclusions or recommendations, which may hinder the acceptance and trust in AI-based applications. AI-generated results must be accompanied by explanations and justifications so that clinicians can understand and validate the outcomes. Approaches such as transparency in algorithms (Shah 2018), interpretable visualization (Schwendicke et al. 2020), and explainable AI (Kundu 2021) were developed to ensure that AI’s decisions and reasoning are understandable by health care professionals and patients. The human-in-the-loop approach (Uegami et al. 2022), in which AI recommendations are reviewed by health care professionals before being finalized, maintains the benefits of AI while ensuring human oversight. Legislation such as the Algorithmic Accountability Act can encourage responsible AI use. In addition, the medical and dental fields need to develop standards and certification processes specific to AI in health care.
To date, AI in endodontics has been mostly explored for research purposes and lacks clinical validation. Next steps in AI research must ensure that AI algorithms can be safely employed after confirming that AI-based assessments and recommendations align with clinical reality and demonstrate consistent and verifiable results. For example, studies should be conducted that compare clinician-based evaluations with results from automated applications in retrospective cohort studies or prospective clinical trials. Moreover, as ethical considerations arise with introducing and adapting new technologies, AI must follow the ethical principles of nonmaleficence, beneficence, justice, autonomy, and veracity. Challenges that arise need to be recognized and addressed by the stakeholders, including clinicians, patients, developers, and health care insurers, as the integration of AI may disrupt existing workflows (Rokhshad et al. 2023). Practitioners, researchers, and developers must ensure that governing ethical principles remain guaranteed, especially if AI-based decision support is to be followed.
Finally, we would like to point out that AI in endodontics shares broader issues with other AI applications in health care, such as data quality, biases, annotation accuracy, and keeping pace with technological advancements (Schwendicke et al. 2020). The effectiveness of AI models heavily depends on the quality of the data used for training and validation (Whang et al. 2023). High-quality, diverse data sets are essential to develop generalizable AI models. In addition, inherent biases in data can lead to skewed AI predictions, which may mislead clinical decision making. Addressing data biases and ensuring diverse representation are important for developing AI models that are fair and effective for all patient groups. Also, the accuracy of annotations in training data sets directly influences the performance of AI models. This is particularly challenging in complex medical imaging such as CBCT used in endodontics. Reliable annotations require expert input, which is not always available. Robust or data-efficient AI algorithms that are less dependent on high-quality or high-number annotations are desirable (Azizi et al. 2023). Furthermore, recent technological advancements such as transformer models have led to a breakthrough in natural language processing and are affecting image-based AI applications (Zhang et al. 2023). These models, through pretraining using large, multisource public data sets, have a powerful capability to discover complex patterns in medical imaging such as CBCT, which could enhance the capabilities of AI in endodontics.
AI in Endodontic Education
AI will impact endodontic education in multiple ways. A recent scoping review on the impact of AI on endodontic education (Aminoshariae et al. 2024) identified 10 areas of potential impact. Students will benefit from AI-assisted training, including radiographic interpretation, differential diagnoses and treatment options, evaluating risks and benefits, and making recommendations for endodontic referrals. AI can help calibrate researchers and educators based on existing standardizing criteria, aid with administrative tasks, monitor student progress, or facilitate personalized education. As AI algorithms can continuously learn and adapt based on new data and feedback from clinicians, AI is ideally suited to contribute to continuous learning and improvement, helping clinicians enhance their diagnostic skills, refine treatment-planning abilities, and stay updated with the latest advancements.
Conclusions
AI applications will have a significant impact on the everyday endodontic practice of the future, transforming various aspects of patient care and practice management and potentially providing increased precision and efficiency. Integrating AI into daily endodontic practice for diagnosis and treatment planning involves data-driven decision making and interdisciplinary collaborations. This includes biomedical image analysis and interpretation, the development of personalized treatment plans incorporating patient data, historical outcomes, and evidence-based guidelines. In addition, it may encompass real-time guided procedures using augmented reality, improve workflow efficiency and quality assurance, foster research and knowledge synthesis, and aid continuous education for professional development.
Author Contributions
F.C. Setzer, J. Li, and A.A. Khan contributed to conception, design, literature search, and data interpretation and drafted and critically revised the manuscript. All authors gave their final approval and agree to be accountable for all aspects of the work.
Footnotes
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Dr. Asma Khan acknowledges funding through the National Institutes of Health (NIH) 5R44DE027574-03. Dr. Jing Li and Dr. Frank Setzer acknowledge funding through NIH 1R41DE031485-01.
