Abstract
Objective
This systematic review investigates Augmented Reality (AR) systems used in minimally invasive surgery of deformable organs, focusing on initial registration, dynamic tracking, and visualization. The objective is to acquire a comprehensive understanding of the knowledge, applications, and challenges associated with current AR techniques, aiming to leverage these insights for developing a dedicated AR workflow for pulmonary Video- or Robotic-Assisted Thoracic Surgery (VATS/RATS).
Methods
A systematic search was conducted within Embase, Medline (Ovid), and Web of Science on April 16, 2024, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The search focused on intraoperative AR applications and intraoperative navigational purposes for deformable organs. Quality assessment was performed, and studies were categorized according to initial registration and dynamic tracking methods.
Results
Thirty-three articles were included, of which one involved pulmonary surgery. Studies used both manual and (semi-)automatic registration methods, established through anatomical landmark-based, fiducial-based, or surface-based techniques. Diverse outcome measures were considered, including surgical outcomes and registration accuracy. The majority of studies that reached a registration accuracy below 5 mm applied surface-based registration.
Conclusions
AR can potentially aid surgeons with real-time navigation and decision making during anatomically complex minimally invasive procedures. Future research for pulmonary applications should focus on exploring surface-based registration methods, considering their non-invasive, marker-less nature, and promising accuracy. Additionally, vascular-labeling-based methods are worth exploring, given the importance and relative stability of broncho-vascular anatomy in pulmonary VATS/RATS. Assessing clinical feasibility of these approaches is crucial, particularly concerning registration accuracy and potential impact on surgical outcomes.
Keywords
Central Message
This systematic review explores Augmented Reality techniques in minimally invasive surgery of deformable organs, aiming to identify suitable AR applications for pulmonary Video or Robotic Assisted Thoracic Surgery. Surface-based registration methods show promise for accurate navigation. Clinical feasibility assessment is crucial, focusing on registration accuracy and impact on surgical outcomes.
Introduction
Background
In pulmonary surgery, minimally invasive surgery (MIS) techniques reduce tissue trauma and allow for accelerated postoperative recovery.1,2 (Robotic) Video-Assisted Thoracoscopic Surgery (RATS/VATS) is a minimally invasive procedure that is currently widely performed.1 Despite the advantages in surgical outcomes, the limited field of view (FoV) and restricted access to the surgical target in VATS remain a challenge.3 Moreover, patient-specific anatomical variations in bronchial or vascular structures can introduce challenges, elevating procedural complexity.4 The complexity is further exacerbated when zooming in on specific anatomical details, potentially leading to a loss of contextual awareness.5 This emphasizes the need for exact knowledge and evaluation of the patient-specific anatomy, such as the location of the tumor in relation to the intrathoracic vessels, bronchi, and segmental borders.6,7 To obtain a better understanding of anatomical relationships and to plan surgery beforehand, three-dimensional (3D) visualization has become popular, and advanced technologies are available, such as 3D computed tomography (CT)-based reconstruction and simulation, combined with immersive extended reality techniques such as Virtual Reality (VR), Mixed Reality (MR), and Augmented Reality (AR).8,9 These immersive techniques enable visualization, manipulation, and interaction with patient-specific 3D models in a fully virtual or hybrid simulated world.1,8,10,11 VR has limited value for intraoperative use, as the user is immersed in a completely virtual environment, resulting in full occlusion of the actual operative field.11 By contrast, AR and MR are suitable for intraoperative use by augmenting 3D models onto the physical environment, e.g., the surgical field.12 This hybrid simulated world is perceived either through a head-mounted display or indirectly using a monitor.11,13 The superimposition of 3D overlays on the surgical field may provide enhanced intraoperative perception and can serve as guidance during surgical procedures, navigating towards anatomical objects of interest and maintaining orientation regarding critical landmarks.14-16 AR/MR-guided MIS could potentially shorten learning curves for young surgeons, decrease surgical exploration and procedural time, and increase the safety of the procedure for both surgeon and patient.17 However, disadvantages of direct overlays in a surgical context include compromised spatial perception, restricted bandwidth of the surgical video, potential obscuring of critical aspects of the actual surgical scene, and delayed responsiveness to intraoperative changes.18 In practice, the terminology for MR varies among studies and, depending on the definition, may include applications of AR and vice versa.11,15 In the remainder of this article, both AR and MR technologies are referred to as AR.
Correct alignment and visualization of the 3D models onto the surgical field necessitates several steps, identified as the AR-workflow: 1. 3D reconstruction, 2. initial registration, 3. dynamic tracking, and 4. AR visualization (Figure 1).19,20 While the AR-workflow has been increasingly implemented for surgical navigation, most applications consider target organs that maintain a constant spatial relationship with adjacent anatomical structures.18
As such, most research has been performed for neurosurgery,21 otolaryngology,22 and orthopedic surgery.15
Therefore, it is essential to understand that registration is further divided into two subtypes: rigid and non-rigid registration. Rigid registration considers translation, rotation, and scaling, while deformations of the object are not considered. Rigid registration is particularly suited for the aforementioned specialties, e.g., orthopedic surgery.16,23 Registration of deformable organs with their rigid preoperative 3D models is more challenging due to inherent organ movement and deformation, requiring more extensive non-rigid registration that does account for deformation to achieve correct alignment.16
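To make the rigid case concrete: the transformation described above (rotation plus translation; isotropic scaling can be added analogously) can be estimated from paired landmark points with the Kabsch algorithm. The sketch below is purely illustrative and is not drawn from any of the included studies:

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid transform (R, t) mapping paired source landmarks
    onto target landmarks (Kabsch algorithm); no scaling or deformation."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1), which is not a valid rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

Given at least three non-collinear paired landmarks, `R` and `t` map preoperative coordinates into the intraoperative frame; the residual distances after applying them correspond to the fiducial registration error.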
Therefore, the application of AR in pulmonary surgery, and surgery of other deformable (non-rigid) organs, remains limited. During RATS/VATS procedures, the lungs undergo further deformation, including intraoperative collapse of the lung, altered effects of gravity due to differences in patient positioning, and manipulation by the surgeon.1
These factors impose additional challenges for AR alignment of the 3D models onto the surgical view.1,24-26 Successful application of AR should meet the specific needs of pulmonary surgery, focusing on correct alignment of the complex broncho-vascular anatomy and its relationship with the tumor, particularly considering the extensive deformation of a collapsed, highly movable, and pliable lung.

Figure 1. AR workflow pipeline. The steps needed for AR visualization are summarized in this figure. First, imaging (for example, CT imaging) and 3D segmentation are performed. Then registration is performed, which refers to the correct alignment of the coordinate systems of the virtual model and the true patient anatomy. Registration consists of two steps: initial registration and dynamic alignment. Initial registration refers to the initial alignment of the preoperative 3D model to the intraoperative anatomy. Dynamic alignment refers to constantly updating the alignment to account for intraoperative changes such as manipulation by the surgeon. Registration is performed using a transformation matrix, which includes translation (movement in the x, y, and z directions), rotation, and scaling. This registration can be based on anatomical landmarks, fiducials, or surfaces. Anatomical landmark-based registration calculates the transformation matrix between a paired set of anatomical landmark points, for example the ribs or sternum. The use of anatomical landmarks in pulmonary surgery can be difficult because of the altered anatomical distribution due to intraoperative collapse of the lung. Fiducial-based registration implies the use of artificial landmarks, or fiducial markers, that are attached to the patient for registration. The placement of these fiducials carries a risk of tissue damage and could complicate the workflow. Finally, surface-based registration is based on minimizing the distance between the two 3D surfaces.
In surface-based registration, a point cloud is created by finding corresponding features in image pairs and calculating the disparity between the two. The disparity can be used to calculate the depth of the object in the image. Subsequently, the surface of the point cloud is registered to the surface of the preoperative 3D model. In surface-based registration, insufficient feature detection, poor texture appearance, and partial visibility are challenges to be tackled. After registration, AR visualization is performed using several visualization methods, such as a transparent overlay of the 3D model onto the intraoperative view.
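As an illustrative sketch of the stereo step described above (with hypothetical, rectified camera parameters; not taken from any included study), depth follows from disparity as Z = f·B/d, after which each valid pixel is back-projected into a 3D point:

```python
import numpy as np

def disparity_to_points(disparity, f, baseline, cx, cy):
    """Back-project a stereo disparity map into a 3D point cloud.

    Assumes a rectified stereo pair: f is the focal length in pixels,
    baseline is the camera separation, and (cx, cy) is the principal point.
    Depth per pixel: Z = f * baseline / disparity.
    """
    v, u = np.nonzero(disparity > 0)        # keep only valid disparities
    d = disparity[v, u]
    Z = f * baseline / d                    # depth from disparity
    X = (u - cx) * Z / f                    # back-project pixel to metric X
    Y = (v - cy) * Z / f                    # ... and metric Y
    return np.column_stack([X, Y, Z])       # (N, 3) point cloud
```

The resulting intraoperative point cloud would then be matched against the surface of the preoperative 3D model.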
Aim
The purpose of this review is to systematically assess current applications of AR within minimally invasive surgery of (non-rigid) deformable organs, focusing on the employed registration, dynamic tracking, and visualization methods. The objective is to explore the adaptability of these insights to the specific requirements of a dedicated pulmonary AR-workflow. Through this approach, our aim is to identify the optimal combination of technologies, while considering both the complexity of pulmonary surgery and the technical possibilities of AR applications.
Methods
Search Strategy
A systematic search was conducted on May 5, 2023, and updated on April 16, 2024, in three databases: Embase, Medline (Ovid), and Web of Science. The results from the three database searches were combined. For this systematic review, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed.27 The search was based on three characteristics: the use of extended reality (AR or MR), minimally invasive intraoperative applications on deformable organs, and an image-guided or navigational purpose. The search terms are included in the appendix section (Appendix A). Articles involving animals or humans (including cadaveric material or in vivo clinical data) were included if the eventual purpose was for human use. The search was restricted to articles written in English.
Articles were excluded according to the following criteria: 1. Review articles; 2. Registration/navigation based on bony structures; 3. Training/simulation/educational purpose; 4. No application of AR/MR; 5. Comment/erratum to an article; 6. User-experience studies; 7. Non-intraoperative application; 8. Open surgery; 9. Ultrasound as imaging modality; 10. No information on the (XR) technology/application; and 11. No full text available.
Article selection was performed by two authors (JP, MD). First, articles were excluded based on title and abstract eligibility according to selection criteria predefined by all authors. Additional full-text analysis was performed on articles for which uncertainty about in- or exclusion remained. Any remaining uncertainty in the study selection process was resolved by consensus between the two authors (JP, MD).
Data Extraction and Quality Assessment
Data extraction consisted of author, surgical discipline, main topic, preoperative imaging modality, intraoperative imaging modality, initial registration, tracking registration, registration method, type of research, sample type, sample size, and main outcome measures. For further analysis the included studies were sub-grouped according to the AR workflow, considering the type and method of initial registration and dynamic tracking.
Depending on the study design, the following scales were used for quality assessment: the Newcastle-Ottawa scale for case-control studies,28 the Joanna Briggs Institute checklist for case reports and case series,29 and the QUACS (Quality Appraisal for Cadaveric Studies) scale for pre-clinical animal studies.30
Results
Systematic Search
The initial search yielded 2081 records, of which 1101 were duplicates. After title and abstract analysis, 847 articles were excluded, leaving 133 articles for full-text assessment. Eventually, 33 articles were found eligible and were included for further evaluation in the systematic review, as shown in the PRISMA flowchart (Figure 2).

Figure 2. PRISMA flowchart of the included and excluded studies. AR – Augmented Reality, MR – Mixed Reality, US – Ultrasound.
Study Characteristics
The included articles in this systematic review were published between 2009 and 2024 (Figure 3A). All study characteristics are summarized in Supplemental Table S1. The majority of the included studies investigated urological surgical procedures (45%, n = 15) concerning a prostatectomy or a (partial) nephrectomy, followed by gastroenterological (39%, n = 13), gynecological (9%, n = 3), general laparoscopic (3%, n = 1), and thoracic (3%, n = 1) surgical applications (Figure 3B). Of the reviewed studies, 76% (n = 25) used the AR technology intraoperatively in patients, of which 18 (55%) included fewer than 10 subjects. Furthermore, some studies performed experiments using in- or ex-vivo animal material (18%, n = 6).

Figure 3. (A) Number of publications on AR for minimally invasive deformable organ surgery per year according to our search strategy, between 2009 and 2024, and (B) per surgical specialty.
3D Reconstruction
As presented in Supplemental Table S1, 31 (94%) of the included articles used CT or Magnetic Resonance Imaging (MRI) to construct a 3D model based on preoperative imaging data. Intraoperative imaging was required in 12% of the studies (n = 4). Subsequently, segmentation was performed to identify and extract organs and structures of interest. Overall, segmentation was most often accomplished manually or semi-automatically, using software such as 3D Slicer,16,31-33 ITK-SNAP,34,35 Blender,36 or Meshmixer.37 After segmentation, structures were reconstructed through conversion to a 3D model, for example using triangle meshes.16 In one study, fully automatic segmentation and reconstruction was pursued through the holographic Visual3D modelling software.38
Registration
Registration Methods Pros and Cons.
Automation of Registration Methods.
Outcome Measures
Registration Accuracy.
FRE – fiducial registration error, SRE – surface registration error, TRE – target registration error, RMS – root mean square, RPE – reprojection error.
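For clarity, the error metrics abbreviated above share a common root-mean-square form and differ mainly in which points they are evaluated at; a minimal illustrative sketch (not taken from any included study):

```python
import numpy as np

def rms_error(a, b):
    """Root-mean-square of the Euclidean distances between paired points."""
    a, b = np.asarray(a), np.asarray(b)
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

# FRE: rms_error evaluated at the fiducials that drove the registration.
# TRE: rms_error evaluated at independent target points (e.g. a tumor)
#      that were NOT used to compute the registration -- the clinically
#      meaningful figure, since FRE can be small while TRE is not.
```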
Quality Assessment
Quality assessment was performed for 17 (52%) studies evaluating human patients or animal specimens. Of the assessed studies, two (6%) included animal specimens for pre-clinical research.33,54 Among the clinical studies, three (9%) case reports were identified,41,44,53 with an average quality score of 88% (range 81-94%), six (18%) case series,24,37,49-51,59 with an average score of 83% (range 70-95%), and six (18%) prospective case-control studies,38-40,42,45,46 with an average score of 98%. The pre-clinical studies were rated 77%33 and 85%.54 Due to diversity in evaluation metrics, the individual studies could not be compared reliably. In addition, due to heterogeneity of surgical specialty, study type, sample type, and sample size, no valid comparisons could be performed.
Discussion
This systematic review provides an overview of the current literature on the application of AR for MIS of deformable organs and the underlying registration, tracking, and visualization methods. Among the included studies, a considerable amount of diverse research has been conducted on the use of AR. However, most of these studies are early stage, and no large-scale randomized controlled trials have been conducted yet. In addition, the studies cannot be compared reliably, and no straightforward relationship can be established between the different registration methods and their accuracies, due to considerable heterogeneity in registration approaches, applications, and evaluation metrics. Finally, the different AR methods are often custom-tailored for a specific application and vary from entirely manual support to fully automated systems, using either fiducial-, anatomical landmark-, or surface-based registration methods. In the next sections, we briefly discuss and highlight our understanding of the current state-of-the-art techniques for the eventual realization of AR for MIS of deformable organs and their translatability to pulmonary surgery.
In order to obtain patient-specific 3D anatomical information, segmentation of pre-operative imaging can be performed. Next, these models can be leveraged for pre-operative planning and intra-operative guidance during MIS procedures.4,9 While segmentation was traditionally performed manually or semi-automatically, recent advances in artificial intelligence (AI) have introduced automated algorithms for this purpose.60-63 Furthermore, biomechanical properties of deformable organs can be considered to simulate some extent of intraoperative deformation.1,39,40,43,45,64 Additionally, deformable models are being developed to enable dynamic simulations of surgical procedures.1,6 Moreover, in the context of MIS, it would be interesting to also consider that deformable organs are subject to plastic deformation, due to intraoperative traction and dissection. The next phase of development will involve integrating this plastic deformation into the dynamic models, ensuring the coherence of the intraoperative 3D model with the real-time changing anatomy. 43 These deformable models are a promising innovation, potentially facilitating the replication of intraoperative deformations for AR visualization.
Besides the creation of (dynamic/deformable) 3D models, registration is a critical step to enable AR-based visualization during MIS (Figure 1). Unfortunately, to date, no automatic and robust solution for dynamic, deformable AR organ registration is available for routine clinical use. In this systematic review, we have identified three different methods that have been explored previously. Fiducial-based registration, using varying multi-modality artificial landmarks, e.g., infrared reflection or fluorescence,24,53,65 appears least promising for registration of deformable organs. This is attributed to potential registration inaccuracies caused by the fluctuating relationship between the fiducial and the target deformable organ, insufficient attachment, and the invasive nature of fiducial placement.65
The second potentially interesting method for registration is anatomical landmark registration. Varying anatomical landmark techniques can be employed, based on organ ridges and shapes23,39,40 or vascular structures.37,38,44 While the use of anatomical landmarks is intuitive and non-invasive, it is also potentially unreliable. Anatomical landmark registration is fully dependent on optical perception, is highly influenced by adequate visibility of the concerning structures, and often requires manual intervention. This may be associated with a lack of registration accuracy measurements. 66 Nevertheless, using AR based on manual anatomical landmark registration was associated with improved surgical outcomes for both prostatectomy and nephrectomy procedures.39,40 In pulmonary resections, the identification of the origin and bifurcations of the broncho-vascular anatomy is most crucial for achieving adequate anatomic resection, surpassing the significance of focusing purely on the lung parenchyma. 67 This parallels the importance of understanding renal hilar and biliary-vascular anatomy in nephrectomy and hepatectomy procedures.38,67,68 Therefore, the “vascular bifurcation labelling” method, considering bifurcations as anatomical landmarks, may prove to be particularly relevant for AR registration of these deformable organs, 38 because the bifurcation landmarks are relatively stable in deformable organ procedures.
Finally, surface-based registration can be employed. Surface-based registration requires pre- and intraoperative point-clouds of the organ surface, obtained through various methods,16,32 providing 3D shape information. 58 To detect and match these surface point clouds, various feature mapping algorithms have been used.16,19,32,36 The main challenge for surface-based registration remains the partial-to-whole image registration, as the intraoperative surface is often a fraction of the complete preoperative model. Most of the time, the intraoperative surface reconstruction is incomplete, since only a small surface region of the organ can be reconstructed when the laparoscope is close to the target. 69 However, surface-based approaches demonstrated the most promising registration accuracies, holding significant potential for the marker-less alignment of a 3D stereoscopic scene with a pre-operative 3D model. 34
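The surface-matching loop described above can be illustrated with a minimal iterative-closest-point (ICP) sketch. This is a didactic simplification (brute-force correspondences, rigid alignment only), not the method of any included study:

```python
import numpy as np

def _kabsch(src, tgt):
    """Rigid transform (R, t) that best maps paired src points onto tgt."""
    sc, tc = src.mean(axis=0), tgt.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (tgt - tc))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, tc - R @ sc

def icp(source, target, iterations=20):
    """Minimal ICP: alternate nearest-neighbour matching and rigid alignment."""
    src = source.copy()
    for _ in range(iterations):
        # Brute-force nearest neighbour in the target cloud (O(N*M) memory).
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)
        matched = target[np.argmin(d2, axis=1)]
        R, t = _kabsch(src, matched)
        src = src @ R.T + t
    return src
```

Real systems replace the brute-force matching with spatial indices (e.g., k-d trees) and add robustness to outliers, non-rigid deformation, and partial overlap, which is exactly where the partial-to-whole challenge described above arises.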
After (advanced) 3D modeling and registration, the next step is dynamic alignment and tracking. In general, the main challenge of AR is the discrepancy between preoperative imaging and the intraoperative situation due to continuous organ tissue deformation.45,54 Throughout the included studies, several (semi-)automatic tracking methods were explored to continuously update the registration in response to intraoperative changes.19,36,45,54,69,70 These tracking methods can be unreliable when organs are moving and deforming, surface textures are repetitive, or the view is occluded or blurred, as is often the case in deformable organ MIS.54 In most included articles, some type of manual intervention was required for initial registration and tracking. This may be attributed to the absence of cross-modality similarities between preoperative and intraoperative situations, complicating the automation process.36 Manual registration often requires the use of a 3D mouse and is highly dependent on the surgeon's or remote assistant's anatomical and technical expertise. As a result, the accuracy of manual registration and tracking methods is confined to a visual level and is subject to large inter- and intra-user variability. Consequently, several studies applied semi-automatic registration methods, which involve a combination of automatic initial registration with manual correction for misalignments or errors, and vice versa.1,19,20,31,33,34,38,50-52,57,59,69,71 Fully automatic frameworks, as proposed by Zhang et al, have investigated marker-less non-rigid registration and are thus pioneers for future fully automatic and deformable AR systems.16,32
Next to technical challenges for registration and dynamic tracking, validation of the approaches and methods remains a critical step in the deployment of certain workflows for routine clinical practice. Most of the studies included in this review have validated their registration and tracking methods on short video sequences. Meanwhile, the overarching goal remains to achieve real-time dynamic augmentation, prompting numerous studies to investigate algorithms for longer-duration sequences. However, it is crucial to critically assess the necessity of continuous real-time intraoperative overlays throughout the entire MIS procedure. Given that the added value of AR navigation lies in the precise identification of anatomical structures, which is most relevant during specific surgical phases, a continuous real-time overlay may not even be necessary. Among the included studies, the quality of the AR technology was often assessed in terms of registration accuracy, ranging from submillimeter to >5 mm. However, throughout the studies it remains unclear what level of registration accuracy is necessary for safe implementation in a clinical setting. The required level of accuracy may not need to be submillimeter, as it can vary depending on the specific procedure and the intended purpose of the AR navigation. For instance, a lower registration accuracy might be acceptable for identifying visible vascular branches compared to the accuracy needed to localize a tumor concealed within the parenchyma. Furthermore, the primary focus currently lies on technical aspects, such as registration accuracy and error. However, it is of great importance to also incorporate clinical validation methods before implementing the AR technology in clinical practice. Finally, following the registration of preoperative models onto the surgical field, it is crucial to ensure the appropriate visualization of these models.
While most studies use semi-transparent overlays, research focused on AR visualization methods is limited and no consensus on the best visualization method has been established.31 Nevertheless, the TilePro™ application (Intuitive Surgical, California, USA) enables intraoperative display of 3D virtual overlays directly within the surgical console.37,39-42,57,72 In AR visualization, an ongoing challenge is the establishment of realistic depth perception and the occlusion of surgical instruments by the AR overlay. Current research addresses these issues by focusing on real-time automated instrument delineation and luminescence, adding a sense of depth to the 3D model-tissue interaction.73,74 While these two studies provided interesting insights into current AR visualization methods, they were excluded from this review due to a lack of information on the registration technique. Simultaneously, considerable research is being conducted on other approaches aimed at improving intraoperative guidance and navigation. These include (automatic) intraoperative anatomy recognition and surgical phase recognition.5,75 The use of these methods to potentially facilitate or contribute to the application of AR during MIS can be investigated.
Future Potential and Outlook for AR Applications in Pulmonary Surgery
It is evident that the extent of deformation varies significantly among different deformable organs, and methods remain relatively organ-specific.45,56 The phenomenon of intraoperatively induced pneumothorax during pulmonary MIS imposes additional challenges compared to other deformable organs, as the intraoperative anatomy does not directly translate to the preoperatively created 3D reconstruction, impacting intraoperative orientation and correct structure identification.1 Therefore, compensation for lung collapse may be crucial for accurate AR-based guidance in pulmonary surgery. Several studies are investigating this intraoperative deformation by simulating the deformation that a lung experiences during a pneumothorax, often utilizing intraoperative cone-beam CT.76-79 However, research on this topic remains limited due to minimal usage, a lack of clinical datasets, and the lower accuracy of intraoperative CT of lungs resulting from atelectasis and bronchial collapse.78 Furthermore, the actual impact of this deformation on the accuracy of the AR overlay requires investigation as well. Anticipating future advancements, the incorporation of a 3D deformable lung model into AR technologies promises intriguing applications. This integration allows for realistic intraoperative simulations, offering a dynamic portrayal of pulmonary manipulations across varying orientations, such as an interlobar view on the fissure.1
Given the potentially clinically applicable registration accuracy of surface registration for nephrectomy procedures,16 surface-based registration emerges as a promising and relevant method for further exploration. In addition to surface-based registration, exploring a vascular bifurcation-based method is highly intriguing, given that (broncho-)vascular bifurcations are critical landmarks for pulmonary resections and maintain relatively stable relationships with each other. Using these methods, the eventual goal is to obtain an AR system that can automatically and accurately achieve intraoperative 3D augmentation, even during rapid motion, occlusion of view, and pulmonary deformation. Additionally, the applied AR technology should be user-friendly, should not significantly delay or interrupt a procedure, and should ideally even improve the intraoperative workflow. Achieving these goals can contribute greatly to the future of pulmonary surgery, where AR can provide enhanced visibility while navigating through the complex and highly vulnerable anatomy of the lung, offering guidance and confidence while making critical surgical decisions.
To accomplish this, it is necessary to validate the time-saving impact of AR navigation in assisting surgeons with the localization of specific anatomical structures. The enhanced anatomical insight and confirmation of surgeons' anatomical knowledge may improve patient safety by minimizing unnecessary exploration, potentially reducing blood loss and postoperative air leaks. Upon validation of these aspects, a phased integration of AR technology into the clinical workflow can be contemplated. Additionally, the objective is to demonstrate that the enhanced intraoperative anatomical insight provided by AR navigation has the potential to expedite the learning curve of pulmonary RATS/VATS procedures for novice surgeons. Therefore, anatomical AR-based 3D overlays can potentially contribute to the education and training of novice surgeons, residents, and medical students when integrated early in their training.
Conclusion
Increasing interest in and development of AR systems and the underlying registration and tracking methods for MIS of deformable organs is observed. Although AR has not been implemented in large patient studies, current research shows great potential to facilitate and improve the intraoperative use of AR across a wide range of surgical disciplines. With regard to the specific pulmonary requirements, surface-based registration combined with anatomical vascular labeling was identified as the most promising and applicable method for pulmonary application, as the parenchyma is highly deformed and the broncho-vascular and tumor relationships are the most important for intraoperative navigation (Table 1). These methods should be explored and adjusted further for pulmonary MIS purposes. Moreover, before clinical AR implementation during RATS/VATS pulmonary procedures, the potential added value during pulmonary procedures has to be assessed, and a clinically feasible registration accuracy should be investigated and defined.
Supplemental Material
Supplemental Material - Augmented Reality Implementation in Minimally Invasive Surgery for Future Application in Pulmonary Surgery: A Systematic Review
Supplemental Material for Augmented Reality Implementation in Minimally Invasive Surgery for Future Application in Pulmonary Surgery: A Systematic Review by Marie-Claire J. Doornbos, Jette J. Peek, Alexander P. W. M. Maat, Jelle P. Ruurda, Pieter De Backer, Bart M. W. Cornelissen, Edris A. F. Mahtab, Amir H. Sadeghi, and Jolanda Kluin in Surgical Innovation.
Footnotes
Acknowledgements
The authors wish to thank S.T.G. Meertens-Gunput and W. Bramer from the Erasmus MC Medical Library for developing and updating the search strategies.
Author Contributions
MD: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Visualization, Writing-original draft; JP: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Visualization, Writing-original draft, AM: Supervision, Writing – review & editing, JR: Supervision, Writing – review & editing, PB: Supervision, Writing – review & editing; BC: Conceptualization, Methodology, Supervision, Writing – review & editing, EM: Supervision, Writing – review & editing, AS: Conceptualization, Methodology, Supervision, Visualization, Writing – review & editing; JK: Supervision, Writing – review & editing.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Appendix
References
