Abstract
Background:
The differential diagnosis of malignant pleural effusion (MPE) and benign pleural effusion (BPE) presents a clinical challenge. In recent years, the use of artificial intelligence (AI) machine learning models for disease diagnosis has increased.
Objective:
This study aimed to develop and validate a diagnostic model for early differentiation between MPE and BPE based on routine laboratory data.
Design:
This was a retrospective observational cohort study.
Methods:
A total of 2352 newly diagnosed patients with pleural effusion (PE), seen between January 2008 and March 2021, were eventually enrolled. Among them, 1435, 466, and 451 participants were randomly assigned to the training, validation, and testing cohorts in a ratio of 3:1:1. Clinical parameters of PE patients, including age, sex, and laboratory parameters, were abstracted for analysis. Based on 81 candidate laboratory variables, five machine learning models were developed: extreme gradient boosting (XGBoost), logistic regression (LR), random forest (RF), support vector machine (SVM), and multilayer perceptron (MLP). Their respective diagnostic performances for MPE were evaluated by receiver operating characteristic (ROC) curves.
Results:
Among the five models, the XGBoost model exhibited the best diagnostic performance for MPE (area under the curve (AUC): 0.903, 0.918, and 0.886 in the training, validation, and testing cohorts, respectively). Additionally, the XGBoost model outperformed carcinoembryonic antigen (CEA) levels in pleural fluid (PF), serum, and the PF/serum ratio (AUC: 0.726, 0.699, and 0.692 in the training cohort; 0.763, 0.695, and 0.731 in the validation cohort; and 0.722, 0.729, and 0.693 in the testing cohort, respectively). Furthermore, compared with CEA, the XGBoost model demonstrated greater diagnostic power and sensitivity in diagnosing lung cancer-induced MPE.
Conclusion:
The development of a machine learning model utilizing routine laboratory biomarkers significantly enhances the diagnostic capability for distinguishing between MPE and BPE. The XGBoost model emerges as a valuable tool for the diagnosis of MPE.
Introduction
Pleural effusion (PE), characterized by the accumulation of a large amount of fluid in the pleural cavity, can be divided into benign pleural effusion (BPE) and malignant pleural effusion (MPE). 1 MPE is a common complication arising from pleural metastasis of malignant tumors, with lung cancer being the most prevalent etiology.2,3 The presence of MPE usually indicates an advanced stage of cancer and carries an unfavorable prognosis, with a median survival of merely 3–12 months.4,5 Thus, the early and accurate differentiation between MPE and BPE is crucial for optimizing treatment strategies and improving clinical outcomes.
Currently, the gold standard for diagnosing MPE is PF cytology, obtained through thoracentesis, or pleural biopsy. While this method exhibits a high specificity of up to 100%, its diagnostic sensitivity for MPE remains in the range of 50–60%. 6 In clinical practice, tumor markers (TMs), including carcinoembryonic antigen (CEA),6–9 carbohydrate antigen (CA) 125,8,9 CA15-3, 8 cytokeratin 19 fragment (CYFRA21-1), 7 squamous cell carcinoma antigen (SCC), 9 and neuron-specific enolase (NSE), 9 are commonly employed for distinguishing between MPE and BPE. Among these markers, CEA showed the highest detection frequency and the greatest diagnostic value for MPE compared with other traditional TMs. 9 However, the sensitivity of CEA remains unsatisfactory, and there is no consensus on the optimal cutoff level for CEA in the diagnosis of MPE.
In recent years, there has been an increase in research focusing on the application of artificial intelligence (AI) in disease diagnosis. AI machine learning techniques have been explored to aid clinicians in the automatic diagnosis of COVID-1910–12 and in the segmentation of PE13,14 using computed tomography (CT) imaging. In addition, Yang et al. 15 developed a PET-CT scoring model to differentiate MPE from BPE, with an AUC of 0.949. Another study by Ren et al. 16 involved the construction of four machine learning models for the diagnosis of tuberculous pleural effusion (TPE) based on clinical features, showing that the random forest (RF) model provided the best diagnostic performance. However, the application of machine learning models based on the laboratory information for identifying MPE from BPE has been relatively limited in previous studies.17,18
This study aims to establish an advanced machine learning model that surpasses CEA in the differential diagnosis of MPE and BPE. First, we collected and analyzed the clinical data, predominantly comprising routine laboratory variables, from patients with PE. Next, five machine learning models based on laboratory variables were created and rigorously validated. The primary objective was to identify the optimal model with the best diagnostic performance in distinguishing MPE from BPE. Finally, an extensive comparison was conducted between the diagnostic performances of our ultimate model and CEA in the area of diagnosing MPE, particularly focusing on cases associated with lung cancer-induced MPE.
We conducted this study according to the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis guidelines (TRIPOD) 19 and the Standards for Reporting of Diagnostic Accuracy Studies (STARD) guidelines. 20
Methods
Study population
This was an observational study with retrospective data collection, and 5566 patients who were diagnosed with PE and had undergone thoracentesis at Shanghai Changzheng Hospital, China, between January 2008 and March 2021 were consecutively enrolled.
The inclusion criteria
PE diagnosis was confirmed through methods such as ultrasound, chest computed tomography (CT), or X-ray. The diagnostic criteria for MPE were determined by the presence of malignant cells on PF cytology or pleural biopsy. 21 The diagnostic criteria for BPE were based on the absence of malignant cells in PF, coupled with the absence of a malignant disease diagnosis during the follow-up period (at least 1 year). Parapneumonic pleural effusion (PPE) was diagnosed under the following conditions: (1) PE associated with pneumonia or bronchiectasis and absence of other causes and (2) patient’s symptoms improved and PF was absorbed after antibiotic treatment. TPE was confirmed if patients met one of the following criteria: (1) positive acid-fast staining or positive culture or polymerase chain reaction for mycobacterium tuberculosis in PF or pleural biopsy specimens and (2) presence of caseous granuloma in pleural biopsy specimens.
The exclusion criteria
(1) patients lacking clinical laboratory information; (2) patients with an indeterminable etiology of PE or those unable to provide information during follow-up; (3) patients with a history of cancer or prior anticancer treatment; and (4) patients with PE due to trauma or surgery.
Ultimately, patients with a definitive cause of PE were included.
Data collection and quality control
To develop a diagnostic model in this study, we utilized existing routine laboratory data, which were retrieved from the electronic medical record system of Shanghai Changzheng Hospital. The following data were collected: anonymized demographic features (age and gender), clinical diagnosis, pathological diagnosis, laboratory variables, including hematologic parameters, and PF parameters.
All laboratory variables were measured within 21 days prior to the time of diagnosis. In cases of multiple pleural punctures, only the data from the initial PF sample preceding the inclusion period were used for statistical analysis. Furthermore, we selected the hematologic parameters closest in time to the first pleural puncture for further analysis. During the initial screening process, variables with a missing value rate of less than 40% were retained, yielding 79 laboratory variables for potential feature selection. Finally, a total of 81 candidate variables, including age and gender, were selected for training the machine learning models (Supplemental Table 1). Missing values in the laboratory data generally arose when clinicians deemed specific tests unnecessary, typically because the results were expected to fall within the normal range. Therefore, missing values were filled in as ‘negative’ after reviewing the medical records.
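The variable screening and imputation steps described above can be sketched as follows (a minimal illustration with hypothetical column names and values, not the actual dataset):

```python
import numpy as np
import pandas as pd

# Toy example: a few laboratory columns with missing values.
df = pd.DataFrame({
    "pf_cea": [5.3, np.nan, 12.1, 0.8],
    "serum_cyfra21_1": [3.3, 2.1, np.nan, 1.0],
    "rare_test": [np.nan, np.nan, np.nan, 4.0],  # 75% missing -> dropped
})

# Keep only variables whose missing-value rate is below 40%.
keep = df.columns[df.isna().mean() < 0.40]
df = df[keep]

# Flag the remaining missing entries as 'negative' (presumed normal),
# mirroring the chart-review imputation described in the text.
df = df.fillna("negative")
```

In this sketch the 75%-missing column is removed at the screening step, while sparse gaps in retained columns are imputed with the ‘negative’ flag.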
Construction of machine learning models
To differentiate between MPE and BPE, we employed five machine learning models: extreme gradient boosting (XGBoost), logistic regression (LR), RF, support vector machine (SVM), and multilayer perceptron (MLP). The five methods’ parameters used in our analysis were set as follows:
(1) XGBoost: n_estimators = 10, max_depth = 2, eval_metric = ‘auc’;
(2) LR: penalty = ‘l1’, solver = ‘liblinear’;
(3) SVM: kernel = ‘linear’, probability = True, max_iter = 2000, tol = 0.005, C = 1.0;
(4) RF: max_depth = 2, n_estimators = 20;
(5) MLP: hidden_layer_sizes = 100, solver = ‘adam’, activation = ‘relu’.
For these models, the parameters above were fixed to ensure consistent and reproducible results throughout our analysis. We used SHapley Additive exPlanations (SHAP) values to interpret the XGBoost model. 22 The SHAP method was employed to illustrate the contribution of each feature to the model and the effect of individual features on the model’s output. 23
Statistical analysis
The aim of this study was to develop a diagnostic model for differentiating between MPE and BPE. For the development of a diagnostic model with a binary outcome, it was assumed that 10 predictive parameters would be considered with an error margin of ⩽0.05, and that the proportion of the expected outcome in the study population was 0.5. 24 The sample size calculation using StatBox indicated that at least 120 samples were required for each group; consequently, a minimum total sample size of 240 patients with PE was deemed necessary. Notably, the sample size in each cohort substantially exceeded these calculated minimums, ensuring robust and statistically sound results.
The Kolmogorov–Smirnov test was performed to evaluate variable distributions. Continuous variables were expressed as mean ± standard deviation or median and interquartile range (IQR), depending on the data distribution, and compared by Student’s t-test or the Mann–Whitney U test, as appropriate. 25 Categorical variables were expressed as counts and percentages and compared by Fisher’s exact test or the Chi-square test, as appropriate. Receiver operating characteristic (ROC) curves were drawn, and the AUC was calculated to evaluate the diagnostic value of each model for identifying MPE. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and diagnostic accuracy of each model were also calculated as follows: sensitivity = true positives/(true positives + false negatives), specificity = true negatives/(true negatives + false positives), PPV = true positives/(true positives + false positives), NPV = true negatives/(true negatives + false negatives), and accuracy = (true positives + true negatives)/(true positives + false positives + true negatives + false negatives). The optimal cutoff value, sensitivity, and specificity were estimated according to the Youden index. Sample size calculations were performed using the Power and Sample Size Software (PASS 2020; NCSS, LLC, Kaysville, UT, USA). Statistical analyses were performed using Python version 3.9. A p value <0.05 was considered statistically significant.
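The metric definitions and the Youden-index cutoff described above can be reproduced in a few lines (the labels and predicted probabilities below are illustrative, not study data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Toy labels (1 = MPE) and predicted probabilities.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_prob = np.array([0.9, 0.8, 0.7, 0.25, 0.6, 0.3, 0.2, 0.1])

auc = roc_auc_score(y_true, y_prob)

# Youden index J = sensitivity + specificity - 1 = TPR - FPR;
# the optimal cutoff is the threshold that maximizes J.
fpr, tpr, thresholds = roc_curve(y_true, y_prob)
cutoff = thresholds[np.argmax(tpr - fpr)]

# Confusion-matrix metrics at that cutoff, matching the formulas above.
pred = (y_prob >= cutoff).astype(int)
tp = int(((pred == 1) & (y_true == 1)).sum())
tn = int(((pred == 0) & (y_true == 0)).sum())
fp = int(((pred == 1) & (y_true == 0)).sum())
fn = int(((pred == 0) & (y_true == 1)).sum())
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
accuracy = (tp + tn) / (tp + fp + tn + fn)
```

With these toy values the Youden-optimal cutoff is 0.7, giving a sensitivity of 0.75 and a specificity of 1.0.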
Results
Characteristics of the patients
A total of 2352 newly diagnosed patients with PE were eventually enrolled in the study; a detailed flow chart of patient selection is presented in Figure 1. The clinical characteristics of the MPE and BPE groups are shown in Table 1. Notably, statistically significant differences were observed in age, gender, and most laboratory indices between the MPE and BPE groups, suggesting that these indices might be risk factors for MPE. Moreover, the disease types of all participants are listed in Table 2, revealing lung cancer as the predominant cause of MPE, while PPE emerged as the primary type of BPE.

Study design and enrolment of participants.
Clinical characteristics of patients.
Values were presented as median (interquartile range) for continuous variables and absolute number for categorical data.
A/G, albumin-to-globulin ratio; ADA, adenosine deaminase; ALB, albumin; BPE, benign pleural effusion; CA125, carbohydrate antigen 125; CEA, carcinoembryonic antigen; CRE, creatinine; CRP, C-reaction protein; CYFRA21-1, cytokeratin 19 fragment antigen21-1; HCT, hematocrit; LDH, lactate dehydrogenase; MPE, malignant pleural effusion; PLT, platelet; RBC, red blood cell; RDW, red blood cell distribution width; NSE, neuron-specific enolase; PF, pleural fluid; TP, total protein; WBC, white blood cell.
Origins of pleural effusion.
Values were presented as absolute number (%) for categorical data.
BPE, benign pleural effusion; MPE, malignant pleural effusion; PPE, parapneumonic pleural effusion; TPE, tuberculous pleural effusion.
XGBoost model showed the best diagnostic performance in the differential diagnosis of MPE and BPE
Patients were randomly assigned to the training cohort (n = 1435), validation cohort (n = 466), and testing cohort (n = 451) in a ratio of 3:1:1 (Figure 1). After data preprocessing and feature selection, a total of 81 candidate variables (Supplemental Table 1) were included in the construction of the five machine learning models. The diagnostic performance of these models in the differential diagnosis of MPE and BPE was evaluated using ROC curves. Notably, the XGBoost model provided the best diagnostic performance, with AUC values of 0.903, 0.918, and 0.886 in the training, validation, and testing cohorts, respectively (Figure 2 and Table 3). These results demonstrated that the XGBoost model outperformed the other models for MPE diagnosis, and it was therefore selected as the final model for this study.
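A 3:1:1 partition of this kind can be produced with two successive random splits, as in the sketch below (assuming scikit-learn; the seed is hypothetical, and the resulting cohort sizes differ slightly from the reported 1435/466/451 because the original split is not published):

```python
import numpy as np
from sklearn.model_selection import train_test_split

indices = np.arange(2352)  # one index per enrolled patient

# First take ~3/5 for training, then split the remainder 1:1
# into validation and testing cohorts.
train_idx, rest_idx = train_test_split(indices, train_size=3 / 5, random_state=42)
val_idx, test_idx = train_test_split(rest_idx, train_size=1 / 2, random_state=42)
```

The three index arrays are disjoint and together cover every patient, so each record lands in exactly one cohort.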

Diagnostic performance of five machine learning models in differentiating MPE from BPE. ROC curves of five machine learning models for patients with MPE versus BPE in the training cohort (a), validation cohort (b), and testing cohort (c).
Diagnostic performance of five machine learning models in differentiating MPE from BPE.
ACC, accuracy; AUC, area under the curve; BPE, benign pleural effusion; LR, logistic regression; MLP, multilayer perceptron; MPE, malignant pleural effusion; NPV, negative predictive value; PPV, positive predictive value; RF, random forest; SEN, sensitivity; SPE, specificity; SVM, support vector machine; XGBoost, extreme gradient boosting.
The impact of laboratory variables on the accuracy of XGBoost model
We used SHAP values to evaluate the potential effect of each laboratory variable on the discriminative power of the XGBoost model. The XGBoost model included the top seven routine laboratory variables; its feature importance is shown in Figure 3, which displays the importance of each variable for the model output, ordered from highest to lowest according to the mean absolute SHAP value. The most important factor in the diagnostic prediction of the XGBoost model was PF CEA, followed by serum CYFRA21-1, and then PF CA125, hematocrit, creatinine, calcium, and the percentage of neutrophils.

Distribution of the impacts by each feature on the XGBoost model using the SHAP value. (a) The SHAP plot depicted the dot estimation on the XGBoost model. Each dot represented the feature value of each individual patient for the model by color (high in red, low in blue). (b) Average absolute impact of variables on the XGBoost model output magnitude ordered by decreasing feature importance.
XGBoost model showed better diagnostic performance than CEA in the differential diagnosis of MPE and BPE
In comparison with PF CEA, serum CEA, and the PF/serum CEA ratio, the XGBoost model demonstrated significantly improved diagnostic performance in distinguishing MPE from BPE (AUC: 0.903 versus 0.726, 0.699, and 0.692, respectively) in the training cohort. This trend persisted in the validation and testing cohorts [Figure 4(a)–(c)]. In addition, as displayed in Table 4, the XGBoost model surpassed CEA in sensitivity and NPV for the diagnosis of MPE.

Diagnostic performance of the XGBoost model and CEA in differentiating MPE from BPE. ROC curves of the XGBoost model and CEA in various forms for the differential diagnosis of MPE in the training cohort (a), validation cohort (b), and testing cohort (c).
Diagnostic performance of the XGBoost model and CEA in differentiating MPE from BPE.
ACC, accuracy; AUC, area under the curve; BPE, benign pleural effusion; CEA, carcinoembryonic antigen; MPE, malignant pleural effusion; NPV, negative predictive value; PF, pleural fluid; PPV, positive predictive value; SEN, sensitivity; SPE, specificity. Cutoffs: 5.3 μg/L for PF CEA; 6.78 μg/L for serum CEA; 1.75 for the PF/serum CEA ratio.
XGBoost model showed better diagnostic performance than CEA in the differential diagnosis of MPE and BPE in its subgroups
According to Light’s criteria, 26 a subgroup of exudative PE cases was identified, comprising 541 MPE patients and 521 BPE patients. In this subset, the XGBoost model consistently displayed exceptional diagnostic capabilities in detecting exudative MPE, with AUC values of 0.936, 0.948, and 0.919 in the training, validation, and testing cohorts, respectively, surpassing PF CEA, serum CEA, and the PF/serum CEA (Figure 5 and Table 5).

Diagnostic performance of the XGBoost model and CEA for patients with exudative MPE. ROC curves of the XGBoost model and CEA in various forms in the training cohort (a), validation cohort (b), and testing cohort (c).
Diagnostic performance of the XGBoost model and CEA for the differential diagnosis of exudative MPE.
Cutoffs: 5.17 μg/L for PF CEA; 6.7 μg/L for serum CEA; 1.39 for the PF/serum CEA ratio.
ACC, accuracy; AUC, area under curve; BPE, benign pleural effusion; CEA, carcinoembryonic antigen; MPE, malignant pleural effusion; PF, pleural fluid; PPV, positive predictive value; NPV, negative predictive value; SEN, sensitivity; SPE, specificity; XGBoost, extreme gradient boosting.
As shown in Table 2, lung cancer was the main malignancy of MPE, and infections (pneumonia, empyema, and tuberculosis) were the main cause of BPE. Notably, within the subgroup of MPE patients caused by lung cancer, the XGBoost model maintained its superior performance, boasting AUC values of 0.923, 0.935, and 0.901 in the training, validation, and testing cohorts, respectively, surpassing CEA in its various forms (Figure 6 and Table 6).

Diagnostic performance of the XGBoost model and CEA for MPE caused by lung cancer. ROC curves of the XGBoost model and CEA in various forms in the training cohort (a), validation cohort (b), and testing cohort (c).
Diagnostic performance of the XGBoost model and CEA for the differential diagnosis of MPE caused by lung cancer.
Cutoffs: 8.73 μg/L for PF CEA; 6.78 μg/L for serum CEA; 1.39 for the PF/serum CEA ratio.
ACC, accuracy; AUC, area under curve; BPE, benign pleural effusion; CEA, carcinoembryonic antigen; MPE, malignant pleural effusion; NPV, negative predictive value; PF, pleural fluid; PPV, positive predictive value; SEN, sensitivity; SPE, specificity; XGBoost, extreme gradient boosting.
Similar improvements in diagnostic accuracy were observed in the subgroup of exudative MPE caused by lung cancer (n = 263), with the XGBoost model achieving AUC values of 0.972, 0.971, and 0.980 in differentiating MPE from BPE in the training, validation, and testing cohorts, respectively, outperforming PF CEA, serum CEA, and the PF/serum CEA ratio (Figure 7 and Table 7). These comprehensive findings underscore the promising and robust diagnostic performance of the XGBoost model in differentiating MPE from BPE, surpassing the capabilities of CEA across various subgroups.

Diagnostic performance of the XGBoost model and CEA for exudative MPE caused by lung cancer. ROC curves of the XGBoost model and CEA in various forms in the training cohort (a), validation cohort (b) and testing cohort (c).
Diagnostic performance of the XGBoost model and CEA for the differential diagnosis of exudative MPE caused by lung cancer.
Cutoffs: 5.99 μg/L for PF CEA; 6.78 μg/L for serum CEA; 1.39 for the PF/serum CEA ratio.
ACC, accuracy; AUC, area under curve; BPE, benign pleural effusion; CEA, carcinoembryonic antigen; MPE, malignant pleural effusion; NPV, negative predictive value; XGBoost, extreme gradient boosting; PF, pleural fluid; PPV, positive predictive value; SEN, sensitivity; SPE, specificity.
Discussion
The emergence of MPE commonly indicates an advanced stage of malignant disease and is closely associated with poor prognosis, imposing serious medical burdens. Despite this, the clinical presentation of MPE can be similar to that of BPE, complicating the accurate identification of its etiology. Over the past decades, tremendous efforts have been made to distinguish MPE from BPE. Conventional PF cytology has limited sensitivity and may be negative in up to 40% of MPE patients.27,28 Many studies have also explored the potential diagnostic value of biomarkers for differentiating MPE from BPE, including traditional TMs and some novel molecules. 26 The conventional TMs often exhibit high specificity but low sensitivity.7,29 For instance, Fan et al. 30 found that PF CEA had the largest AUC (0.890) with high specificity (95.5%) but low sensitivity (74.1%). In addition, the diagnostic accuracy of some novel biomarkers, including vascular endothelial growth factor (VEGF),31,32 the B7 family, 33 and cell-free microRNA, 34 has been investigated for MPE diagnosis in a few studies. 35 However, these novel biomarkers are not routinely measured in clinical practice, and additional studies with larger sample sizes are needed to further validate their diagnostic power and stability. Therefore, none of the aforementioned TMs has achieved satisfactory diagnostic performance in distinguishing between MPE and BPE.
AI machine learning can process large amounts of data and yield highly accurate diagnostic models.36,37 In clinical practice, its emergence may enhance the value of laboratory medicine in the evolving healthcare ecosystem by processing and combining the available electronic health record data.38,39 Therefore, it may be valuable for improving the diagnostic accuracy of MPE. Wang et al. 40 developed an AI system for segmentation and classification of BPE and MPE based on thoracic CT images. However, few studies have explored the potential of AI machine learning to distinguish MPE from BPE using routine laboratory variables. Li et al. 17 developed a gradient boosting machine model based on clinical characteristics that showed favorable diagnostic performance for identifying MPE, with an AUC of 0.951 in the validation set (sensitivity: 84.75%, specificity: 95.58%). However, that study only investigated the potential impact of PF CEA on the model; the effect of other TMs, such as CYFRA21-1, CA125, and NSE, was not considered. Our study developed and validated five machine learning models based on routine laboratory indicators and showed that the XGBoost model had the best diagnostic performance for the identification of MPE, consistent with a previous study: Zhang et al. 18 reported that XGBoost, based on combinations of TMs, comprehensively improved the diagnostic accuracy of MPE. However, their sample size was relatively small, the model was neither validated nor interpreted, and the impact of other laboratory indicators on model prediction was not considered.
The XGBoost model in our study was constructed using the top seven laboratory variables related to MPE. Feature importance analysis by SHAP value showed that PF CEA had the highest relative importance in model prediction, consistent with previous studies highlighting its diagnostic relevance to MPE.8,9 CYFRA21-1 and CA125 were also found to have certain diagnostic value in differentiating MPE from BPE.7–9 Importantly, all of the laboratory variables used in this model are routinely measured in real-world clinical practice and can be obtained quickly and easily.
The diagnostic values of various forms of CEA, such as PF CEA, serum CEA, and the PF/serum CEA ratio, in the differential diagnosis of MPE and BPE have been explored.41,42 However, variability in cutoff values and detection methods and the heterogeneity of study populations have resulted in inconsistent sensitivities and specificities for CEA. In addition, few studies have directly compared the diagnostic power of machine learning models and CEA for MPE. Our study demonstrated that the XGBoost model outperformed CEA in terms of AUC and sensitivity in diagnosing MPE. It is worth noting that PF CEA may not be elevated in MPE cases arising from mesothelioma.43,44 Therefore, mesothelioma cases were excluded, and our model maintained higher sensitivity and diagnostic efficacy than CEA for exudative MPE caused by lung cancer.
Based on these findings, our work demonstrates promising clinical value and advantages. First, our model utilizes real-world data, which may be more representative of the true condition of patients and more clinically applicable. Second, the primary advantage of our model is its ability to integrate a wide array of routine laboratory parameters to predict MPE with high accuracy and sensitivity. It is a valuable addition to the diagnostic toolset, especially given the unsatisfactory diagnostic performance of any single TM. In addition, because our model uses readily available laboratory data, it is convenient and can be readily implemented in the clinical setting. In future clinical practice, our model could be linked to existing electronic medical record systems to analyze data automatically. Therefore, further development of the machine learning model for clinical use in the differential diagnosis of MPE and BPE is warranted.
In addition, the study has some limitations. First, our model was based on a retrospective cohort study, which may introduce selection bias. Second, this study only focused on laboratory variables, and other clinical information of PE patients, such as previous medical histories, smoking history, objective clinical symptoms, and imaging data, could be included in the model to potentially increase its predictive power. Finally, the study only included patients with newly diagnosed PE and did not evaluate the performance of the model in monitoring disease progression or response to treatment. Therefore, further prospective studies from multiple medical centers are needed to verify the generalizability and reproducibility of our model in the future.
In conclusion, we developed and validated a novel machine learning model, the XGBoost model, to differentiate MPE from BPE using routine laboratory data. The XGBoost model, with its high accuracy and sensitivity, has significant clinical advantages over the other models and over CEA in its various forms. Our study demonstrates the potential of AI machine learning to improve the diagnosis of MPE, which could enhance the accuracy of clinical decision-making and support better patient care. More broadly, our findings highlight the potential of machine learning in laboratory medicine and pave the way for the development of more accurate and efficient diagnostic tools for various diseases.
Supplemental Material
sj-docx-1-tar-10.1177_17534666231208632: Supplemental material for ‘Development and validation of a machine learning model for differential diagnosis of malignant pleural effusion using routine laboratory data’ by Ting-Ting Wei, Jia-Feng Zhang, Zhuo Cheng, Lei Jiang, Jiang-Yan Li and Lin Zhou in Therapeutic Advances in Respiratory Disease.
