Abstract
Artificial intelligence (AI) is an exciting new technology poised to drastically improve the practice of medicine. Interventional pulmonology (IP) is particularly well situated to implement AI due to the variety of complex diagnostic and therapeutic techniques within its scope. Integrating AI into the field should make procedural planning and the management of pulmonary disease easier, more accessible, and more effective. AI has already been implemented in the diagnostic techniques of navigational and virtual bronchoscopy and endobronchial ultrasound, and in the rapid onsite evaluation of pathological specimens. The goal of this review is to summarize recent utilization of AI in IP and to discuss the origins of the technology, ethical considerations, and future directions.
Plain Language Summary
In this paper, we discuss advanced computer programs, commonly called artificial intelligence (AI), in the field of medicine that specializes in procedures performed in the lungs, called interventional pulmonology (IP). We begin by explaining that AI thinks in a way similar to human minds, but more powerfully. Then, we describe the origins of medical AI. Next, we discuss AI's use in IP today. First, we examine AI in bronchoscopy, which involves passing a special camera through a person's mouth or nose to look inside the lungs. As one might expect, knowing exactly where the camera is in the lung is very difficult, and pulmonologists have developed AI that can help identify where the camera is. Another use for bronchoscopy is collecting samples from the lung to test for diseases, including cancer; however, knowing exactly where to go in the lung is difficult. Programs broadly referred to as navigational bronchoscopy use AI to create “maps” and then guide the doctor like a GPS. During bronchoscopies, pulmonologists can also look at parts of the lung using a special tool called endobronchial ultrasound. These ultrasound pictures are hard to understand, so AI is being used to help interpret them. Once samples are collected, they are traditionally examined under a microscope to look for signs of disease. This is quite difficult, so AI programs have been made to help with this. We then discuss some of the ethical issues AI might face, as well as how we think AI will be used in IP in the future.
Introduction
Artificial intelligence (AI) is a catch-all term for computer algorithms that solve problems in a manner mimicking human rationality. Nearly every industry is experimenting with this new technology, so it should come as no surprise that a field as reliant on technology as healthcare has become a major frontier for AI.
AI can be divided by scale into two main categories: general and narrow. General AI can complete a broad array of tasks. 1 Just as one physician can take a history, perform a physical examination, and synthesize a plan to complete the task of practicing medicine, a general AI would be able to tackle many diverse problems to attain its goal. Narrow AI is a program developed to complete one specific task, such as playing solitaire or identifying pulmonary nodules on a radiograph. 2 AI can also be described by its technological details and the types of algorithms it utilizes.
Machine & Deep Learning in Artificial Intelligence
In medicine, most AI programs rely on machine learning (ML). ML programs use specific traits in a data set to identify patterns that they then apply to other scenarios. 3 Additionally, as more data are fed into the model, it becomes more experienced and skilled at its task. A subcategory of ML, called deep learning (DL), relies on algorithms modeled after the way human brains process information, fittingly called artificial neural networks (ANNs). In an ANN, information is entered into one layer of nodes, which then passes it to another, hidden layer. This process is repeated, modifying and analyzing the information, until it is passed on to output nodes. By processing information this way, DL programs extract the needed features from samples themselves, in addition to performing the analysis and modeling. 4 A key difference between the two is that DL requires larger data sets for training than traditional ML does, but DL can extract the key features of a sample itself while ML cannot. 5 Figure 1 illustrates the relationship between these related forms of AI. Today, medicine is utilizing AI primarily for its ability to analyze large amounts of data and find connections that physicians could not.
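As a rough illustration of the layered node structure described above, the following toy network passes two inputs through one hidden layer to a single output node. This is purely a teaching sketch: the weights are arbitrary illustrative values, whereas a real ANN learns them from training data, and clinical systems use far larger networks.

```python
import math

def relu(values):
    # A common activation: negative signals are zeroed out.
    return [max(0.0, v) for v in values]

def layer(inputs, weights, biases):
    # Each node in a layer sums its weighted inputs plus a bias.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# Illustrative fixed weights (a real network learns these from data).
W1 = [[0.5, -0.2], [0.8, 0.3]]   # input layer -> hidden layer
b1 = [0.1, -0.1]
W2 = [[1.0, -1.5]]               # hidden layer -> output node
b2 = [0.2]

def forward(x):
    # Information flows layer to layer, as described in the text.
    hidden = relu(layer(x, W1, b1))
    out = layer(hidden, W2, b2)
    # A sigmoid squashes the output into a 0-1 score.
    return 1.0 / (1.0 + math.exp(-out[0]))

print(forward([0.6, 0.9]))  # a score between 0 and 1
```

Training would adjust `W1`, `b1`, `W2`, and `b2` to minimize prediction error over many labeled examples; stacking many such hidden layers is what makes the learning "deep."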

The relationship between artificial intelligence, machine learning, and deep learning.
Origins of Artificial Intelligence in Interventional Pulmonology
Despite only recently reaching the cultural zeitgeist, medical AI has existed for some time. Considerable research compared AI and physician outcomes during the 1970s. A multicenter study of patients with acute abdominal pain found that the use of a diagnostic AI program improved diagnosis and patient outcomes. 6 Pulmonology and other respiratory specialties also began to develop AI programs. As early as 1989, AI was used to analyze pulmonary lesions. 7 In 1991, AI was used to find correlations between occupation and respiratory illness. 8 Since 1999, researchers have been developing AI that can interpret plain chest radiographs in some capacity. 9 Despite being a newer specialty, interventional pulmonology (IP) has produced considerable research in AI. In 2008, an AI program analyzed endobronchial ultrasound (EBUS) images with moderate success. 10 Around the same time, AI was being used to label bronchial anatomy on computed tomography (CT) images, and this labeling could be successfully overlaid in a bronchoscopy system. 11 AI has been a subject of study in medicine and IP for several decades; however, in the past decade interest in the technology has exploded.
The Purpose and Scope of This Review
As a specialty, IP is uniquely situated to integrate AI into its practice. Because the field relies on advanced technology in the bronchoscopy suite, AI can reasonably be incorporated into the workflow of most IP procedures, and much research has already been done in this area. Today, AI can drastically improve the diagnostic techniques of visual bronchoscopy, navigational bronchoscopy, and EBUS. Some have also integrated AI into therapeutic procedures. This review will explore the state of AI in IP in detail.
Methods
The authors queried the PubMed and Google Scholar databases for literature relevant to AI in IP, using the following terms: “Artificial Intelligence in Interventional Pulmonology,” “Artificial Intelligence in Pulmonology,” “Artificial Intelligence in Bronchoscopy,” “Artificial Intelligence in EBUS,” and “Artificial Intelligence in ROSE.” Further articles were selected from the references of other work. Articles were chosen based on their relevance and value to the field. Another search for more general information on AI was also performed in the same databases, and articles with relevant and worthwhile information about AI were chosen for inclusion. The literature was reviewed through October 2024. We did not perform a structured systematic review or assess risk of bias in the selected studies, which are limitations of our manuscript.
Integration of Artificial Intelligence into Bronchoscopy Routing and Steering
The goal of many IP procedures is the procurement of tissue samples to determine if a patient with suspicious radiology or symptomatic presentation has lung cancer. Typically, bronchoscopy is the IP physician's tool of choice for this task. The challenge of these diagnostic bronchoscopies is manipulating the instrument through the lung to the desired destination while avoiding complications. Thus, much research has been done into implementing AI into different types of bronchoscopies for the purpose of navigation assistance.
Visual Bronchoscopy
Much effort has been spent integrating AI into visual bronchoscopy. One successful application has been the development of programs that can identify various anatomical segments of the bronchial tree, such as the vocal cords and tracheal rings, using real-time video images. 12 One group developed an AI that can distinguish between the carina and main bronchi from bronchoscopy images more accurately than pulmonologists. 13 Another AI could distinguish 9 specific anatomical positions, including the carina, main stem bronchi, and lobar bronchi, from bronchoscopy images and video with accuracy superior to that of pulmonologists, 14 while a different AI system was developed to aid physicians in recognizing specific areas of the bronchial lumen. 15 Although most of the work in this area has been limited to smaller, single-center studies, it demonstrates that AI can successfully identify anatomical locations from bronchoscopy images and video.
Computer-aided diagnosis (CAD) technologies have been developed to identify pulmonary pathology from plain visual bronchoscopy using AI. A model was developed that could diagnose tracheobronchopathia osteochondroplastica, a rare multinodular disease, from bronchoscopy images with accuracy approaching 90%. 16 Another AI could classify suspicious lesions on visual bronchoscopy as normal tissue, Mycobacterium tuberculosis, or cancer with accuracies of 87%, 54%, and 91%, respectively. 17 Additionally, an AI analyzing bronchoscopy images of malignant lesions identified adenocarcinoma and squamous cell carcinoma in a small sample study. 18 As of now, very few programs have been developed to identify disease conditions from visual bronchoscopy alone, but present data suggest that visual bronchoscopy with AI can become a useful diagnostic tool in the future.
Navigational Bronchoscopy
Navigational bronchoscopy is an advanced technique that consists of bronchoscopy aided by navigational guidance to reach a target lesion. The two most common forms are electromagnetic navigation bronchoscopy (ENB) and virtual navigation bronchoscopy (VNB). This section will explore the use of AI in these two in detail, as well as discuss some AI tools that provide navigational assistance to simple visual bronchoscopy.
Electromagnetic Navigation Bronchoscopy
In ENB, a preoperative CT scan creates a three-dimensional (3D) model of the lung. During the procedure, the patient lies in an electromagnetic field, and the pulmonologist passes the tip of a sensor probe with the bronchoscope through the airways to match the CT images. Once the airways are mapped to the patient, the bronchoscopist then tracks the scope through the virtual 3D airways to the lesion of interest. 19
AI can considerably improve ENB. One area of research is using AI to improve the mapping and route planning of the ENB software. One model, NaviAirway, launched in 2023, used ML for airway segmentation and planning in navigational bronchoscopic biopsy. 20 Similar AI models have been created that report similar levels of success; however, these AI-generated models were compared to models generated by visual bronchoscopy, so their accuracy remains to be verified.21,22 A new technology that tracks not just the tip of the scope but the full catheter through shape-sensing tracking with AI guidance provided even greater accuracy in navigational bronchoscopic biopsy. 23
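The route-planning step that segmentation tools automate can be pictured as a search over a branching airway graph: once the CT is segmented into a tree of airway branches, finding the path from the trachea to the target is a graph traversal. The following is a deliberately simplified sketch; the airway labels and the tiny tree are hypothetical, and real systems (such as the ones cited above) derive a far finer tree directly from the CT.

```python
from collections import deque

# Hypothetical, coarse airway tree keyed by branch name; a real
# segmentation model would produce many generations of branches.
AIRWAYS = {
    "trachea": ["right_main", "left_main"],
    "right_main": ["RUL", "bronchus_intermedius"],
    "bronchus_intermedius": ["RML", "RLL"],
    "left_main": ["LUL", "LLL"],
    "RUL": [], "RML": [], "RLL": [], "LUL": [], "LLL": [],
}

def plan_route(tree, start, target):
    """Breadth-first search for the branch sequence from start to target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for child in tree[path[-1]]:
            if child not in seen:
                seen.add(child)
                queue.append(path + [child])
    return None  # target not reachable in this tree

print(plan_route(AIRWAYS, "trachea", "RML"))
# ['trachea', 'right_main', 'bronchus_intermedius', 'RML']
```

During the procedure, the navigation system's job is then to keep the scope's tracked position registered against this precomputed route.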
Virtual Navigation Bronchoscopy
VNB is an alternative to ENB for navigational bronchoscopy. Again, a preoperative CT creates a 3D model of the lung; then, instead of tracking the bronchoscope location with an electromagnetic field, the system displays the constructed lung model, with a guided path to the lesion, next to the video bronchoscopy image so the bronchoscopist can follow a route through a near-exact mapping of the airways.
AI is being implemented in VNB, especially in efforts to improve tracking of the bronchoscope during the procedure using real-time bronchoscopy video. For example, in 2017 a fully convolutional network was developed that mapped monocular bronchoscopy images to the 3D model of the lung to track the bronchoscope location. 24 Similarly, Offsetnet used real-time camera footage and AI to localize the bronchoscope during the procedure. 25 A similar AI-video-augmented VNB relied on a three cycle-consistent generative adversarial network for bronchoscopic guidance. 26 AI could offer significant navigational improvements in VNB, but as of now it is not generally being utilized.
Plain Visual Navigational Bronchoscopy
ENB 27 and VNB 28 have proven to be very successful in obtaining biopsies of peripheral pulmonary lesions; however, they are complex procedures with considerable skill and technological requirements. An advanced bronchoscopy suite is needed, as well as a recent CT scan to serve as the map. Developing a form of navigational bronchoscopy for accurate biopsy that requires less technological and radiological investment is a current focus of research. Recently, an AI was developed that relies strictly on visual images and a generic model of the lung to guide bronchoscopists; it was able to reach its target lesion with 98% accuracy in phantom lung models. 29 Another similar program used a depth-based dual-loop framework that could run at real-time speeds and was accurate to within 6.49 ± 3.88 mm in models of actual patient lungs. 30 One group developed a similar program, as well as a dataset and model to standardize testing of visual navigational bronchoscopy programs. 31 Although in the early stages of development, plain visual navigational bronchoscopy is an exciting advancement that could expand the options for navigational bronchoscopy in resource-limited settings.
Artificial Intelligence in Other Bronchoscopic Localization Techniques
Other technologies traditionally utilized during bronchoscopy have been improved with AI. Illumisite™, a program that uses AI and real-time fluoroscopy, reports a diagnostic yield of 79%. 32 Lungvision™, another bronchoscopy system with AI and fluoroscopy, localized to nodules with a success rate of 93%. 33 The program was also proven to be effective with cryobiopsy procedures during bronchoscopy. 34 Narrow band imaging (NBI) is another exciting bronchoscopy technology being improved with AI; a recently developed AI was able to analyze real-time NBI videos and detected lesions with 93% sensitivity and 86% specificity, superior to physicians interpreting NBI. 35 AI has also advanced autonomous robotic bronchoscopy. A DL program has been developed that allowed a robotic bronchoscope to navigate and center itself very effectively. 36 Another robotic bronchoscope is able to navigate to fifth-generation airways successfully. 37
Artificial Intelligence in Training and Assisting Inexperienced Pulmonologists
In addition to improving the efficacy of bronchoscopic diagnostic interventions, AI tools have been developed to help inexperienced bronchoscopists. An AI tool was created that provided feedback to learners while they were performing bronchoscopy on a phantom lung model; it outperformed written instructions 38 and expert feedback in randomized controlled trials. 39 Another group created an AI that assists in steering, with which a novice doctor was able to outperform experts in obtaining target images in a porcine lung. 40 In the future, AI may be a key part of effective simulation training for bronchoscopists and interventional pulmonologists.
Artificial Intelligence in Bronchoscopic Sample Identification
The goal of many bronchoscopies is to obtain a sample for further analysis. As described above, mapping and following a path through the lungs is the first challenge faced by a bronchoscopist. The next challenge is ensuring that the location reached contains the material they intended to sample, and EBUS is becoming the preferred method of confirmation. Once the sample is procured, it needs to be evaluated pathologically to determine whether the correct sample was taken. Rapid onsite evaluation (ROSE) can provide this answer within minutes. As expected, there has been much research into integrating AI into both tools.
Endobronchial Ultrasound Bronchoscopy
EBUS is routinely used to determine if lymph nodes (LNs) or lung lesions show any signs of malignancy. As expected, considerable effort has been put into developing AI programs that can interpret these images.
Brightness Images
The standard ultrasound image is a grayscale projection that renders different densities of material as differing levels of brightness. Colloquially, these brightness images are referred to as “B images,” and many AI have been developed to interpret them. As early as 2008, an ANN was able to distinguish between various types of lung cancer and sarcoidosis better than experts on B images. 10 Since then, significant work has been done on AI that can interpret EBUS images in real time. AI have been developed that can detect malignancy on B images of mediastinal LNs with impressive accuracy.41–43 In a study at two centers, models based on a fine Gaussian support vector machine (SVM) and a weighted K-nearest neighbor (KNN) classifier achieved accuracies of 95.9% and 96.4%, respectively. 44 Direct comparison of a convolutional neural network (CNN) AI to 4 pulmonologists showed that the AI was superior, with 83.4% diagnostic accuracy compared to 68.4% for the pulmonologists. 45 The improvement of B-mode EBUS AI-CAD has been rapid in recent years and will hopefully be validated through larger multicenter trials soon.
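Classifiers like the KNN models cited above work by comparing a new image's extracted features against labeled training examples and taking a majority vote among the closest matches. The following is a minimal sketch of k-nearest-neighbor classification in plain Python; the two-feature descriptors and labels are entirely hypothetical and are not drawn from any study discussed here.

```python
import math

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote of its k nearest
    training examples (Euclidean distance)."""
    dists = sorted(
        (math.dist(feats, query), label) for feats, label in train
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical 2-feature descriptors (e.g., an echogenicity score and
# a margin score) extracted from B images; labels are illustrative only.
train = [
    ([0.9, 0.80], "malignant"), ([0.8, 0.90], "malignant"),
    ([0.7, 0.70], "malignant"), ([0.2, 0.10], "benign"),
    ([0.1, 0.30], "benign"),    ([0.3, 0.20], "benign"),
]

print(knn_predict(train, [0.8, 0.75]))  # malignant
```

A clinical-grade pipeline differs mainly in scale: the features come from image processing or a neural network rather than two hand-picked scores, and the distance weighting and value of k are tuned on validation data.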
Other Ultrasound Imaging Modalities
Although B images are seen as the default for ultrasound, other ultrasound imaging modalities are important diagnostic tools and have been integrated with AI for several years. For instance, an AI that interprets elastography on EBUS performed as well as experts at identifying malignancy. 46 In a retrospective analysis of EBUS-derived images, a multimodal AI using elastography, B mode, and Doppler was statistically as effective as experts in determining LN malignancy, with an accuracy of 80.82%. 47 Combining different ultrasound modalities can certainly create an effective AI for determining LN malignancy; however, the research described above does not determine the accuracy of each individual modality. One study found that heterogeneous echogenicity and absence of a central hilar structure (CHS), both from grayscale images, had high sensitivity (93.1% and 89.1%, respectively), and heterogeneous echogenicity showed the highest diagnostic accuracy (87.2%); 48 however, absence of CHS and blue-dominant elastography images had the highest specificity (7.7% and 89.0%, respectively). 48 AI have been able to identify malignant LNs from multiple EBUS modalities for several years, and more research in this field is warranted.
Future Directions
Recently, new approaches have been used to optimize malignancy identification from EBUS images. One strategy is combining two separate CNNs; by doing so, a program was able to reach a diagnostic accuracy of 82%. 49 Another option is to incorporate information beyond what is derived by DL in the CAD programs. Combining clinical features entered by the physician, radiomic features collected from grayscale images, and deep neural analysis led to the best results, with an accuracy of 80.6%. 50 Lastly, new AI algorithms have pushed the CAD of malignant LNs to new heights; using new algorithmic strategies, one group achieved an accuracy of 99.38%. 51 These advancements suggest that EBUS CAD can become an effective diagnostic tool.
Real-Time Artificial Intelligence Ultrasound Interpretation
Most of the AI programs developed to identify malignant lesions were tested retrospectively on relatively homogeneous single-center data sets. For this technology to be more clinically useful, it should be integrated into a real-time system and validated. An AI model was created that interprets EBUS videos instead of still images, with sensitivity, specificity, and accuracy improved to 72.7%, 79.0%, and 75.8%, respectively. 52 In a prospective study, AI CAD was found to have a specificity of 91% but a sensitivity of only 28.1%, limiting its utility in LN biopsy. 53 Another group trained a model on data from two hospitals and tested it on data from two other hospitals, reporting areas under the curve of 0.78 and 0.82 for the test hospitals. 54 This suggests that real-time AI-integrated EBUS is progressing and could be implemented.
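The sensitivity, specificity, and accuracy figures quoted throughout this section all derive from a simple confusion matrix over a validation set. A brief sketch with hypothetical counts (the numbers below are invented for illustration, not taken from any cited study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    # Sensitivity: fraction of truly malignant nodes the model flags.
    sensitivity = tp / (tp + fn)
    # Specificity: fraction of truly benign nodes the model clears.
    specificity = tn / (tn + fp)
    # Accuracy: fraction of all nodes classified correctly.
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts from a validation set of 200 lymph nodes.
sens, spec, acc = diagnostic_metrics(tp=72, fp=21, tn=79, fn=28)
print(f"{sens:.1%} {spec:.1%} {acc:.1%}")  # 72.0% 79.0% 75.5%
```

The trade-off reported in the prospective study above (specificity 91%, sensitivity 28.1%) corresponds to a decision threshold that rarely flags nodes as malignant: few false positives, but many missed cancers.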
ROSE
Substantial research is being done on AI applications in the ROSE of cytological samples obtained via bronchoscopy. As discussed above, procurement of samples is one of the key tasks for interventional pulmonologists; however, the interpretation of those specimens requires a cytopathologist at the bedside, which drastically increases cost and limits the procedure to facilities with expert cytopathologist support. A program that could interpret these specimens in real time at the bedside could make the procedure drastically more efficient. Such a system was developed that could identify whether malignancy was present with an accuracy of 92.9%. 55 Another team used whole-slide samples from mediastinal LNs to train a deep CNN that identified metastatic lesions with a precision of 93.4%. 56 A similar slide image-based CNN was developed with an accuracy of 83.30%, performing slightly better than a junior cytopathologist but worse than a senior cytopathologist at identifying malignancy, 57 and another AI model for ROSE was as accurate as a pathologist in a different single-center study. 58 So far, the results of AI-powered ROSE have been exciting; however, testing has been limited to smaller single-center studies, limiting how confident IP physicians should be in its applicability.
Artificial Intelligence in Multidisciplinary Care of Interventional Pulmonology Patients
Many IP procedures are performed to diagnose or manage pulmonary pathologies, many of which are comanaged by cardiothoracic surgeons. That field has also seen rapid advancements in AI technology relevant to the interventional pulmonologist. AI has been used widely by clinicians in the past decade to improve surgical planning. For instance, AI is used preoperatively to assist with pulmonary segmentectomies to treat cancer and benign masses, allowing 3D visualization of a patient's anatomy to provide better insight. 59 More recent studies have highlighted the benefits of AI incorporation in robotic-assisted thoracic surgery, for example, robotic stapling, fluorescence imaging, and 3D reconstructions.60,61 With the evolution of these techniques, robotic intervention in surgery has resulted in less trauma and faster recovery times for patients. 61 Sadeghi et al also highlighted expansion in the field regarding in-human augmented reality (AR) robotic lobectomies, with an emphasis on real-time adjustments to factors such as fissure orientations. 60 Such advances, especially in a field that works closely with IP physicians, hold promise for the way AI can be positively integrated into the care of IP patients.
Ethical Concerns and Other Challenges of Artificial Intelligence in Interventional Pulmonology
While the integration of AI in IP has demonstrated great promise in enhancing patient care, it has also begun to pose complex ethical, legal, and social challenges. Gerke and colleagues recently posited that there are four primary ethical challenges that AI poses to the healthcare industry. 62 The first challenge, informed consent, is of particular interest to IP due to the field's invasive, procedural nature. 62 Informed consent requires patient understanding of the procedure, and one could argue that AI is impossible to understand. Many programs are so-called “black box” algorithms that find and utilize connections they cannot explain. 63 So, if a patient were to ask, “How does this AI work?”, their physician could not offer an explanation. Next, they stress the importance of safety and transparency of the technology and its development. 62 Any algorithm used in clinical decision making or procedures must be accurate to be safe; however, algorithms can be wrong. 64 AI companies have a financial incentive to obfuscate any failures in their AI tools. Thus, transparency in the development and validation of these tools is crucial. They then discuss how algorithms can reflect and propagate racial biases,62,65 and the importance of data privacy and participant permission when developing AI tools that require massive amounts of patient data.
Before adopting this new technology, it is necessary to determine who is responsible for any failures of an AI program. Intuitively, a physician is responsible for their tools; if a bed were to collapse during a bronchoscopy, the physician would certainly apologize. Still, that does not necessarily entail legal culpability. A recent review found that clinicians should be held accountable if an AI program led to a bad outcome, but that there is little case law in this area to determine whether they will be held legally responsible. 66 As of now, AI is typically regulated under frameworks for Software as a Medical Device (SaMD). 67 In 2021, the FDA developed its “AI/ML-Based SaMD Action Plan,” which included directives for AI to follow good ML practices, be patient-centric, reduce bias, and monitor real-world performance. 68 In the United Kingdom, the Medicines and Healthcare products Regulatory Agency enacted the “Software and AI as a Medical Device Change Programme” as a regulatory framework; it stresses minimizing data security risks, reducing biases, and increasing interpretability. 69 The EU has the AI Act, which regulates AI according to how much risk each individual program poses; higher-risk programs are those with access to identifiable patient information. 70 Even though AI is at the cutting edge of medicine, governments have wisely already begun regulating its use and development.
Beyond ethical and regulatory considerations, much work remains before AI tools become commonplace in IP. Many of the traditional barriers to implementing AI, such as computing power and machine size, have largely been solved through technological advances in computing. Still, AI requires large, labeled data sets to develop. In IP, where images and other forms of data are challenging to acquire, this poses a significant challenge. Relative to DL, traditional ML can use smaller data sets, but because its inputs require manual feature extraction, developing ML still takes much effort. DL removes the manual feature-extraction requirement but uses much larger data sets. 71 Furthermore, most AI systems have been developed from retrospective and single-center databases. To create truly validated tools, multicenter and prospective trials will be needed. IP is well situated to utilize AI, but the technology needs much more development before it is ready for widespread use.
Future of Artificial Intelligence in the Field
AI in IP will continue to be integrated into bronchoscopic techniques such as VNB and EBUS, and robots with AI will likely be developed to assist with these procedures. Before these new programs can be implemented, more rigorous studies are needed. Thus far, nearly all the research in the field has been limited to retrospective single-center studies. While these studies are internally consistent, their external validity needs to be proven, especially their diagnostic value. Because bronchoscopy is an invasive procedure, patients need to be confident that it will be a risk worth taking. Additionally, most of the studies discussed here test AI's ability to diagnose lung cancer in some form. Lung cancer is the leading cause of cancer deaths worldwide, 72 so its accurate diagnosis is of the utmost importance. To implement these tools broadly, they need to be verified in diverse populations. To accomplish this, AI should be studied in multiple centers, ideally across the world. Furthermore, most studies have been retrospective; more prospective research would bolster these technologies. IP has produced quality research into AI in recent years; however, larger and broader studies are required before the technology is widely utilized.
Conclusion
The integration of AI in IP demonstrates a notable shift in the field of pulmonary medicine. While still a rapidly changing field, early AI systems in medicine paved the way for the increased capabilities of modern AI applications in pulmonology. This dynamism has yielded numerous benefits over the years, such as significantly improved visual bronchoscopy through anatomical identification and the ability to distinguish between normal and pathological tissue. Furthermore, advancements in AI have allowed for enhanced procedural accuracy and accessibility, streamlining complex biopsy procedures. AI integration alongside ROSE and EBUS has improved diagnostic capabilities, showing high accuracy in identifying malignancies while potentially reducing the need for onsite pathologists. Advancements in AI have increased the potential for treatment planning and surgical intervention in pulmonary pathologies, guiding personalized strategies and improving surgical precision. While future research will continue to shape the principles of AI integration in IP, the boundaries continue to be pushed, with significant promise for enhanced diagnostic accuracy, procedural efficiency, and personalized care.
Footnotes
Acknowledgements
None.
Author contribution(s)
Availability of Data and Materials
Not applicable.
Consent for Publication
Not applicable.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Ethics Approval and Consent to Participate
Not applicable.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
