Due to the unavailability of specific vaccines or drugs to treat COVID-19 infection, the world has witnessed a rise in the human mortality rate. Currently, the real-time RT-PCR technique is widely accepted for detecting the presence of the virus, but it is time consuming and has a high rate of false-positive/false-negative results. This has opened research avenues to identify substitute strategies for diagnosing the infection. Related works in this direction have shown promising results when RT-PCR diagnosis is complemented with chest imaging results. Finally, integrating intelligence into and automating diagnostic systems can improve the speed and efficiency of the diagnosis process, which is extremely essential in the present scenario. This paper reviews the use of CT scan, chest X-ray and lung ultrasound images for COVID-19 diagnosis, discusses the automation of chest image analysis using machine learning and deep learning models, and elucidates the achievements, challenges, and future directions in this domain.
The novel coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has caused a pandemic that has claimed numerous lives across the globe. Though initially an outbreak in China [1], it was soon declared a pandemic because the virus spread rapidly across the globe and no specific vaccine or drug was available to treat the infection. It is believed that SARS-CoV-2 [2, 3, 4] first infected bats and later spread to humans. People get infected when they encounter virus-laden droplets expelled by an infected person while sneezing or coughing, which can also settle on surfaces in and around the human carrier of the virus. The incubation period of the virus ranges from 2 to 14 days, and the commonly identified symptoms of the infection are cough, fever, sore throat, fatigue, breathlessness, malaise and headache, among others [5]. People with good immunity are not affected much, and many turn out to be asymptomatic. People above the age of 60, those with underlying medical conditions, and children below the age of 6 are found to be vulnerable and at a higher risk of infection. In some cases the infection may be mild, but in others it may lead to acute respiratory distress syndrome (ARDS), pneumonia and multiple organ dysfunction, resulting in the death of the patient. Mortality rates of infected people have risen sharply over the past few months in many countries, challenging their health infrastructure. Complete lockdowns were enforced in most countries, urging people to stay home to be safe. Though the lockdowns helped control the situation in most countries, they also had a huge adverse impact on the economies of those countries. This increased the need for identifying infected people and isolating them, thus preventing further transmission of the disease and aiding the return to normalcy in our lives.
In addition, timely and accurate diagnosis is the need of the hour and can save many lives. Currently, the sensitivity of the most widely used screening technique for coronavirus, the reverse-transcription polymerase chain reaction (RT-PCR), is relatively poor: a negative RT-PCR result does not exclude the possibility of the virus being present in the suspect. It is therefore of prime importance to find complementary or substitute methods that yield more accurate results. In this respect, medical imaging techniques [6] like chest computed tomography (CT), lung ultrasound and chest X-ray appear to be better choices. However, analysis of medical images for COVID-19 classification increases the demand for skilled medical imaging professionals, which in turn escalates the pressure on these skilled professionals for faster and more accurate diagnosis owing to the increasing rate of COVID-19 cases. This stresses the importance of automating the COVID-19 diagnosis process to reduce the burden on medical professionals. As deep learning is already popular in the medical domain, employing it for COVID-19 diagnosis automation is highly advisable.
The remaining sections of the paper are organized as follows. Section 2 describes the research methodology of the literature review process followed. Section 3 discusses the RT-PCR technique, its shortcomings, and the importance of medical images in the diagnosis of COVID-19. Section 4 compares viral pneumonia with bacterial pneumonia and identifies features differentiating COVID-19 pneumonia from other types of viral pneumonia. Section 5 focuses on the use of AI in COVID-19 diagnosis, the stages of image analysis, machine learning and deep learning approaches to COVID-19 medical image analysis, and sheds light on approaches for addressing limited-dataset problems. Section 6 highlights open research challenges and future directions, followed by the conclusion of the review.
Research methodology
This section gives an overview of the methodology followed in writing this review paper. The methodology is structured as follows:
Problem formulation: As the world battles the COVID-19 pandemic, coupling the RT-PCR testing method with medical imaging and automating the diagnosis process using deep learning can lead to time-efficient and accurate diagnosis of the disease.
The purpose of the review: The purpose is to gain up-to-date knowledge of medical imaging applications for COVID-19 diagnosis, provide collective information about related works, findings and limitations, and identify the open challenges and future scope, which can be particularly useful to researchers in this field.
Identifying sources of literature review:
Sources: COVID-19 articles on World Health Organization (WHO) websites, IEEE transactions papers, ScienceDirect journals, Springer proceedings, and medical domain journals.
Domain: COVID-19 diagnosis, pneumonia diagnosis, medical imaging, machine learning, deep learning in COVID-19 diagnosis, automation of COVID-19 diagnosis.
Reference types: Review articles, research articles.
Analysis of findings: Analyse the related work by various researchers, draw comparisons between their works, and identify the limitations and drawbacks.
Identify the open challenges and future scope: Based on the analysis of other researchers' work, identify the research gaps and future directions in this domain of research.
Figure 1 provides an overview of the steps in the research methodology of literature review.
Research methodology.
Coronavirus diagnosis
Figure 2 shows the taxonomy of various testing methods for COVID-19 diagnosis.
Taxonomy of COVID-19 diagnosis methods.
Lab tests for coronavirus diagnosis
The tests currently available [7, 8, 9] for diagnosing COVID-19 infection are viral (molecular) tests, antibody (serology) tests and antigen tests. Molecular tests are best suited if a person exhibits symptoms of coronavirus infection or may have been exposed to someone with the virus, as they give positive results only if the person is currently infected. Antibody testing is suggested if a person has previously been infected by the virus or is suspected of having had COVID-19, as it detects the presence of antibodies to SARS-CoV-2. Antigen tests detect SARS-CoV-2 proteins in respiratory samples, but they have not yet received widespread acceptance. The sample collected for molecular tests is a nasopharyngeal, nasal or throat swab of the patient. In the case of an antibody test, a blood sample of the patient/suspect is used for testing.
Medical imaging for COVID-19 diagnosis
Medical imaging modalities [10, 11, 12, 13, 14] like lung ultrasound, chest X-ray (CXR) and CT scan are important in the recognition of lesions in the lungs and in assessing the evolution, size and density of the lesions. Examination of CXR is quick, easy and time efficient, but its specificity and sensitivity for patients with mild symptoms are comparatively low, so it is not advised for initial-stage COVID-19 patients. Chest CT images can show nearly all abnormalities, including mild initial exudative lesions, and are hence useful in early-stage COVID-19 pneumonia diagnosis. Lung ultrasound [15] seems suitable for inspection of lung abnormalities in suspected or infected patients because it is flexible, portable, and convenient. Figure 3 shows the taxonomy of image-based diagnosis modalities, components, AI approaches and methods to address limited-dataset issues.
Taxonomy of image-based diagnosis of COVID-19.
RT-PCR
Real-time reverse transcriptase-polymerase chain reaction (RT-PCR) [16, 17, 18] is based on nucleic acid detection. At present it is the widely accepted standard coronavirus detection test, as it is a simple and specific qualitative assessment method. One of the major drawbacks of this technique is the risk of producing false-positive and false-negative results. A negative COVID-19 test result does not guarantee the absence of the virus in the suspect; hence patient treatment decisions must not depend solely on this test. There are many RT-PCR testing kits [19] currently available in the market, but none of them gives 100 percent accuracy. Hence, there is a need to complement RT-PCR with other methods of diagnosis for an effective approach towards handling the pandemic. Notably, blending real-time RT-PCR with medical image analysis is a promising direction for finding complementary testing methods for COVID-19 diagnosis.
Chest X-rays in COVID-19 diagnosis
Chest X-rays [20] play an important role in detecting COVID-19 as they display pneumonia-like patterns which can aid in identifying the infection. The most frequent radiographic findings include ground-glass opacity and consolidation, with bilateral, peripheral and lower-zone distributions being predominant. Chest X-ray (CXR) [21] displays lower sensitivity than CT images in the recognition of COVID-19 lung disease. In CXR, the pulmonary opacities can sometimes be blurry, complicating the task of anomaly identification. Multifocal air-space disease can be essential in identifying COVID-19 infection in the CXR report; according to the initial investigations conducted on COVID-19, the air-space disease is bilateral and mostly concentrated in the lower lung distribution. Unique features of COVID-19 include peripheral air-space opacities, and CXR can easily identify peripheral lung opacities that are patchy and multifocal. Even though CT scan is better than CXR at COVID-19 detection, chest X-rays remain a good choice because they are cheaper than CT scans.
Chest CT scan images in COVID-19 diagnosis
The chest CT scan [22, 23, 24] images of COVID-19 suspects are evaluated for the presence of ground-glass opacity (GGO), consolidation, laterality between GGO and consolidation, presence of nodules, number of lobes affected, presence of pleural effusion, fibrosis, airway abnormalities, axial distribution of disease, and the degree of involvement of each lung lobe. The most common early finding of COVID-19 on chest CT scan is considered to be GGO. Apart from GGO, bilateral shadow patches, consolidation, multiple lesions, pulmonary fibrosis and crazy-paving patterns are most frequently seen in the CT scan reports of coronavirus patients. Based on the results of some studies [25] of current RT-PCR testing, a large fraction (81 percent) of patients with negative RT-PCR results but positive CT scans were identified as COVID-19 cases; their CT reports revealed pulmonary abnormalities consistent with COVID-19 despite the preliminary negative RT-PCR results. This conveys the message that the RT-PCR test is a time-consuming procedure that lacks sensitivity and stability. In such a situation, CT scan diagnosis can be considered a complementary boon in detecting the infection caused by the deadly virus.
Characteristic lung ultrasound findings reported in COVID-19 patients include:
Multiple focal and/or diffuse B-lines, with some areas displaying thickened subpleural interlobular septa
Irregularly shaped and thickened pleural lines with spread-out discontinuities
Subpleural consolidations associated with localised and discrete pleural effusion
Inflammatory lung lesions represented as avascular in Colour Doppler images
Alveolar consolidation, with either static or dynamic air bronchogram, indicating a progressive and severe case of the disease
During the recovery stage, bilateral A-lines reappearing and aeration being restored
Lung ultrasound image analysis for COVID-19 diagnosis
Chest CT scans are highly recommended as an alternative measure to RT-PCR testing for coronavirus because of their high sensitivity and their ability to detect COVID-19 traces even when RT-PCR gives false negative results. But the price and the huge size of CT scan machines make them unavailable outside hospital settings, which paves the way to finding a portable device that does not compromise on imaging quality. Lung/thoracic ultrasound [26, 27, 28, 29, 30, 31, 32] has been considered for detecting COVID-19 infection due to its portable nature. The abnormalities found on lung ultrasound primarily include pleural-line changes, consolidation and B-lines, with bilateral involvement predominantly distributed in the posterior portion of infected patients' lungs. The compositions involving consolidation regions and various B-line densities varied in parallel with the severity of the infection. Diffuse bilateral interstitial pneumonia, displayed as patchy, asymmetrically distributed lesions in the periphery of the lungs, indicates the presence of coronavirus and can be effectively recognized under ultrasound analysis. Lung ultrasound images can also depict ground-glass opacity (GGO) alternating with crazy-paving patterns, as well as consolidations. But lung ultrasonography [33] fails to identify deep lesions in the lungs, as transmission of ultrasound waves is obstructed by the aerated lung.
Nevertheless, lung ultrasound images can be considered an important tool for identifying and tracking the progression of abnormalities in lung lesions indicating the presence of COVID-19 pneumonia, because of their cost-effective, flexible and radiation-free nature.
Coronavirus pneumonia
Pneumonia [34] is a medical condition caused by viruses, bacteria or fungi that involves inflammation of the lungs and blockage of the oxygen supply to the lungs, which can eventually lead to breathlessness and finally death. Viral pneumonia differs from bacterial pneumonia in terms of symptoms, treatment and diagnosis. Viral pneumonia usually appears as an infection resulting from viruses such as coronavirus, adenovirus, influenza, parainfluenza and respiratory syncytial virus (RSV). Antibiotics are effective in treating bacterial pneumonia, but this medication is ineffective against viral pneumonia. Fungal pneumonia usually occurs when a spore enters the lungs and begins to multiply in the infected person; people with weak immune systems or chronic underlying health conditions are the most vulnerable to it. COVID-19 pneumonia [35] is caused by a virus of the family Coronaviridae and cannot be treated with antibiotics. Unfortunately, even existing viral pneumonia vaccines are not effective against coronavirus. Since most physiological symptoms are common to other types of viral pneumonia, distinguishing COVID-19 pneumonia from the other types has become a challenging task. Recent work has revealed that chest imaging can be extremely helpful in differentiating COVID-19 pneumonia from the others.
Table 1 [36, 37, 38, 39] describes the types of pneumonia, their symptoms, and the predictors for their diagnosis. Table 2 gives an overview of significant findings by researchers in distinguishing COVID-19 from other viral pneumonias based on medical imaging reports.
AI in medical imaging
Medical image diagnosis workflow
Medical images of the lungs captured using imaging techniques such as X-ray, CT and ultrasonography have been considered complementary measures in diagnosing COVID-19 pneumonia. An imaging-based diagnosis workflow includes three stages: the scan preparation stage, the image acquisition stage, and the disease diagnosis stage. In the preparation stage, a technician assists the patient in preparing for the scan. During the image acquisition stage, imaging machines capture and acquire the X-ray or CT images, with the necessary image reconstruction. In the final stage, the acquired images are analyzed for diagnosis. Computer-aided image analysis [40] comprises segmentation, feature extraction and classification. However, analysis of medical images for COVID-19 classification involves a radiologist, and this increases the demand for radiologists as COVID-19 infections grow at a rapid rate. This puts medical professionals at a higher risk of contracting the virus and escalates the pressure to perform diagnosis in considerably less time. AI-powered [41, 42] contactless diagnosis systems are very much needed to avoid severe risks to healthcare professionals, lessen their burden and accelerate the diagnosis process. In this section we focus on the automation of COVID-19-related image analysis and diagnosis.
Image segmentation
Image segmentation [43] is the method of dividing an image into several segments and detecting objects and margins in images. It delineates regions of interest in lung images, such as the lung lobes, lesions, infected areas and bronchopulmonary segments, for further assessment and quantification. Segmentation can be performed as a manual, semi-automatic or fully automated process. Manual segmentation is time consuming and suitable only for small datasets, because it requires experts to detect regions of interest and accurately annotate each pixel in the image. In semi-automatic segmentation, automated algorithms perform the segmentation with some user interaction at certain stages [44], whereas fully automatic techniques require no user interaction at any level. Current segmentation methods [45] are thresholding-based, region-based, shape-based, neighboring-anatomy-guided, machine learning and deep learning methods. Segmentation in COVID-19 cases is grouped into lung-region-oriented and lung-lesion-oriented segmentation. In lung region segmentation, the whole lung and lung lobes are separated from unnecessary background details in CT or X-ray images. In the lung-lesion-oriented method, the focus is on separating lesions from the lung region; as lesions may be extremely small and may vary in pattern, lung lesion segmentation is a challenging task. The projection of ribs onto soft tissues in a 2D X-ray image makes segmentation of X-ray images particularly challenging. Segmentation is considered the most important prerequisite step in the COVID-19 image analysis process. The attention mechanism, which is known to be effective in localization tasks [41], can be adopted when dealing with X-ray images for COVID-19 diagnosis.
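As a toy illustration of the thresholding-based family of methods mentioned above, the sketch below implements Otsu's classic threshold selection in plain NumPy and uses it to produce a binary mask. It is a minimal example on a synthetic intensity image, assuming pixel values normalized to [0, 1]; it is not the pipeline of any cited work.

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Find the intensity threshold that maximizes between-class variance."""
    hist, bin_edges = np.histogram(image, bins=n_bins, range=(0.0, 1.0))
    hist = hist.astype(float) / hist.sum()
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2

    best_t, best_var = bin_centers[0], -1.0
    for i in range(1, n_bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:i] * bin_centers[:i]).sum() / w0   # class means
        mu1 = (hist[i:] * bin_centers[i:]).sum() / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, bin_centers[i]
    return best_t

def segment(image):
    """Binary mask: True for pixels above the Otsu threshold (e.g. bright lesions)."""
    return image >= otsu_threshold(image)
```

Real lung-region segmentation adds morphological clean-up and anatomical priors on top of such a raw intensity split, but the principle of picking the split that best separates two intensity populations is the same.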
Feature extraction
Feature extraction [46], an essential part of image analysis, is the analysis of images to identify and extract the most prominent features representing the categories of different objects and images. Shape descriptor features are calculated from an object's contour [47, 48]. The texture of an image is defined by the spatial association of the values of neighboring pixels; any variation in the local texture of the image manifests as variation in local spatial frequency [49]. Texture analysis identifies texture primitives, extracts essential features from them, and constructs spatial or statistical distributions of the primitives based on the identified features. Parametric mapping usually identifies functionally dedicated responses and is mainly used to characterize functional anatomy and variation related to a particular disease. Lately, researchers have worked towards employing machine learning and deep learning techniques for a better feature extraction process in COVID-19 diagnosis [50, 51, 52].
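To make the texture-analysis idea concrete, the following sketch computes a small grey-level co-occurrence matrix (GLCM) and three classic Haralick-style statistics (contrast, homogeneity, energy) in NumPy. The quantization level and pixel-pair offset are illustrative choices, not parameters taken from the cited studies.

```python
import numpy as np

def glcm_features(image, levels=8, offset=(0, 1)):
    """GLCM texture features for a 2-D image with values in [0, 1].

    offset is the displacement between pixel pairs; returns contrast,
    homogeneity and energy, three statistics of how grey levels co-occur."""
    q = np.minimum((image * levels).astype(int), levels - 1)  # quantize
    dr, dc = offset
    glcm = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[q[r, c], q[r + dr, c + dc]] += 1   # count pair occurrence
    glcm /= glcm.sum()                              # joint probability

    i, j = np.indices((levels, levels))
    contrast = ((i - j) ** 2 * glcm).sum()          # high for rough texture
    homogeneity = (glcm / (1.0 + np.abs(i - j))).sum()
    energy = (glcm ** 2).sum()                      # high for uniform texture
    return contrast, homogeneity, energy
```

A perfectly flat patch gives zero contrast and maximal homogeneity and energy, while an alternating pattern pushes contrast up, which is exactly the kind of statistic a texture-based classifier would consume.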
Image classification
Classification of COVID-19 patients based on medical image diagnosis involves identifying the abnormalities related to coronavirus pneumonia. Classification of images [53] is a supervised learning problem which involves categorizing the segmented input CT, X-ray or ultrasound images into various predefined disease classes, or sometimes binary classification of whether the disease is present or not. Segmentation and feature extraction [54] form the basic pre-processing steps before classification: after segmentation, shape- and texture-based features are extracted in the feature extraction phase and then passed to a classifier model that classifies the images. Imaging modalities are widely performed to provide evidence for radiologists due to their quick acquisition nature. However, chest CT images consist of numerous slices, so the duration of diagnosis might be longer. Also, COVID-19 pneumonia has indicators comparable to other viral pneumonias, which creates a need for skilled and experienced radiologists for an accurate diagnosis and makes COVID-19 image diagnosis a crucial and challenging task. Thus, AI-supported diagnosis of medical images is extremely desirable.
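As a minimal illustration of the classification step that follows feature extraction, the sketch below implements a k-nearest-neighbors vote over feature vectors. The two-dimensional features and the class labels are purely synthetic stand-ins for the shape/texture descriptors discussed above.

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training vectors under the Euclidean distance."""
    dists = np.linalg.norm(train_X - x, axis=1)   # distance to every sample
    nearest = np.argsort(dists)[:k]               # indices of k closest
    votes = [train_y[i] for i in nearest]
    return max(set(votes), key=votes.count)       # majority label
```

In a real pipeline `train_X` would hold descriptors extracted from labelled chest images, and the returned label would be one of the predefined disease classes.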
AI Approaches
Machine learning approaches
Machine learning (ML) [55, 56] is the ability of computers to self-learn a task without explicit manual programming, instead learning from experience or historical data.
Machine learning is extremely helpful in medical practices [63] that depend on imaging, including radiology, radiation therapy and oncology. Machine learning approaches are applicable to image analysis components such as segmentation and classification, to automate the image analysis process. The categories of machine learning are supervised learning (using a labelled dataset), unsupervised learning (using an unlabelled dataset) and reinforcement learning. Some of the supervised learning algorithms [64] are K-Nearest Neighbors [65], Logistic Regression [66], Decision Trees [67], Linear Regression [68], Support Vector Machines [69], Naïve Bayes [70] and Artificial Neural Networks [71]. As supervised learning methods require a labelled dataset, the labor-intensive and time-consuming data labelling process is considered the major drawback of these methods. On the other hand, an unsupervised learning algorithm takes unlabelled datasets as input and works towards finding similar patterns in the data, grouping instances with similar traits into clusters. Such algorithms include K-means clustering [72], hierarchical clustering [73], DBSCAN [74], Gaussian mixture modeling and ISODATA (iterative self-organizing data) [75, 76]. Automated image segmentation [77] splits images based on visibly dissimilar regions. Most ML-based segmentation techniques are supervised and need well-annotated training data. Moreover, huge variations in color, shape and texture across patient images pose additional challenges to automated segmentation algorithms [78]; variations are also caused by noise and inconsistency in the data acquisition process. These variations have limited the application of machine learning (ML) based approaches, as they lack global applicability in most cases. Besides, manual feature engineering techniques are time intensive and not easily adaptable to new information.
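To illustrate the unsupervised side, here is a compact NumPy sketch of Lloyd's algorithm for K-means, the first clustering method listed above: it alternates between assigning points to the nearest centroid and recomputing centroids as cluster means. The data and initialization here are deliberately simple toy choices.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Lloyd's K-means: returns a cluster label per row of X and the centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # random init
    for _ in range(n_iter):
        # distance from every point to every centroid, then nearest assignment
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):            # move each centroid to its cluster mean
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

Applied to unlabelled image feature vectors, such clustering can surface groups of visually similar scans without any annotation effort, which is exactly the appeal of unsupervised methods when labels are scarce.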
Machine learning techniques like KNN, neural networks and SVM have been applied in the past for the classification of medical images [40, 79, 80]. The use of traditional machine learning methods for medical image classification is limited by the time-consuming feature extraction/selection process, which is highly variable from one application to another [81].
Deep learning approaches
Deep learning in healthcare [82, 83, 84, 85] has shown promising technological advancement that may revolutionize AI in the health sector. Deep learning [86] methods employ automatic feature engineering and learn more complex and sophisticated patterns in the data than conventional machine learning techniques. This becomes an advantage in the field of medical imaging analysis, as manual feature determination might take a long time. Application of deep learning algorithms improves the efficiency, accuracy and quality of diagnosis while reducing its duration. The convolutional neural network (CNN) [87] is the most widely used deep learning model for image classification. The majority of deep learning models are applied to medical image types like CT and MRI for applications such as segmentation, classification and diagnosis [88]. The diagnostic performance of deep learning models [89] has proven to be comparable to that of medical professionals. Deep neural networks [90] are like artificial neural network [71] structures with many hidden layers and automatic feature extraction ability; the additional layers in a DNN allow modeling of complex data by composing representations from lower to upper layers. Research is in progress on several deep learning models, including the deep neural network, deep autoencoder, convolutional neural network, deep belief network, deep convolutional extreme learning machine, deep Boltzmann machine and recurrent neural network (RNN). In particular, convolutional neural networks (CNNs) have been widely accepted and applied for segmentation and classification [91] of natural images. This accomplishment is mainly due to automatic feature extraction combined with substantial advances in computational power; however, automatic feature extraction in deep learning is heavily dependent on the availability of a huge training dataset. Recent years have seen a tremendous rise in deep learning applications owing to CPUs and GPUs with high computational power, which have greatly reduced training and execution time, and to the generation of huge volumes of big data [92].
Convolutional neural networks are used even in medical image analysis to augment the performance of computer aided medical image analysis processes.
Convolution neural network
A CNN [93] is a deep learning model for handling images and is intended to learn spatial hierarchies of features, from low to high level, adaptively and automatically. CNNs [94] perform dimensionality reduction while preserving local image relations, which is significant in capturing feature relationships in images and reduces the number of parameters to be computed, further increasing the computational efficiency of CNN models. CNNs can accept and process both 2D and 3D images with minor changes. This is an added advantage for designing automated systems for hospitals, as medical images can be 2D or 3D: X-rays are 2D, while CT and MRI are 3D. CNN architectures [95] such as 2D U-Net, 3D U-Net and multichannel 2D U-Net are widely used in the medical image segmentation process because they do not rely on user-defined image features but determine their own features.
Category-wise paper references.
Popularly used convolutional neural networks for COVID-19 detection and classification [96] are AlexNet, ResNet50 and GoogLeNet. The components [97] of a CNN are convolution layers, pooling layers and fully connected layers. The convolution layer is the fundamental part of a CNN and performs feature extraction, consisting of a linear convolution operation followed by a non-linear activation operation: the convolution results are passed through a nonlinear activation function. A down-sampling operation in the pooling layer introduces invariance to small translations and distortions while decreasing the dimensionality of the feature maps and the number of learnable parameters.
The result of the final convolution or pooling layer is usually flattened into a 1D feature vector and connected to fully connected layers, in which every input is connected to every output through a learnable weight. The number of output nodes in the final fully connected layer equals the number of desired classes. A nonlinear activation function follows every fully connected layer, and the final-layer activation function [97] varies with the type of classification: for binary classification it is the sigmoid, for multiclass single-label classification the softmax, for multiclass multi-label classification the sigmoid, and for regression to continuous values the identity function. As discussed in the earlier sections, higher accuracy of COVID-19 diagnosis is achieved by combining the RT-PCR test with chest CT, chest X-ray or lung ultrasound images, and integrating automation into these methods using deep learning frameworks provides faster diagnostic systems.
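The convolution, pooling and final-layer activation operations described above can be sketched in a few lines of NumPy. This is a toy single-channel version with a fixed kernel; real CNN frameworks add channels, batching and learned kernels, but the arithmetic per layer is the same.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most DL frameworks):
    slide the kernel over the image and take elementwise products summed."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = (image[r:r + kh, c:c + kw] * kernel).sum()
    return out

def relu(x):
    """Nonlinear activation applied to the convolution output."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling: downsample by taking block maxima."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    """Final-layer activation for multiclass single-label classification."""
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()
```

Stacking `conv2d` + `relu` + `max_pool` blocks, flattening, and finishing with a dense layer and `softmax` reproduces, in miniature, the pipeline the paragraph above describes.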
Table 3 displays summary of related work by different authors on applying deep learning for medical image analysis of COVID-19 diagnosis, achieved results and the limitations of the work.
Addressing problem of limited dataset
Enormous amounts of good-quality training data are essential for deep learning [90] models to achieve high accuracy. However, the unavailability of a balanced dataset is the major obstacle to successfully applying deep learning in medical imaging. Generation of huge annotated medical imaging datasets is an extremely daunting and time-consuming job, and annotation may not even be possible in the absence of competent experts. Another quite common and key issue in the health sector is imbalanced data, because rare infections like COVID-19 are not well represented in the datasets. As discussed in the earlier sections, diagnosis of coronavirus is more effective if radiology imaging is combined with clinical lab tests. The ongoing research into using radiology images for COVID-19 diagnosis, and into applying deep learning models to enhance their performance, is limited by the non-availability of data on affected COVID-19 patients, which may ultimately lead to overfitting and degrade model performance. As deep learning models need huge amounts of data to give accurate results, researchers have tried various methods like transfer learning, data augmentation and Generative Adversarial Networks (GANs) to handle the issue of limited and imbalanced datasets. Each of these techniques is discussed in the next sections.
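A minimal sketch of the data augmentation idea, one of the three remedies named above: generating label-preserving variants of an image by flipping, rotating and adding mild noise. The specific transforms and noise level are illustrative choices; medical pipelines pick transforms that respect anatomy.

```python
import numpy as np

def augment(image, rng):
    """Yield simple label-preserving variants of a 2-D image in [0, 1]."""
    yield np.fliplr(image)                        # horizontal flip
    yield np.flipud(image)                        # vertical flip
    yield np.rot90(image, k=rng.integers(1, 4))   # random 90-degree rotation
    noisy = image + rng.normal(0.0, 0.01, image.shape)
    yield np.clip(noisy, 0.0, 1.0)                # mild additive intensity noise
```

Each training image thus yields several distinct samples with the same label, artificially enlarging a small dataset and reducing overfitting.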
Research gaps and proposed solutions

Research gap: The related work was not carried out on standard datasets, so the results obtained vary and cannot be accepted as standard results.
Future direction: Aggregate COVID-19-related medical images from multiple sources to form standardized (benchmark) datasets, uploaded to a public repository accessible to researchers across the globe.

Research gap: Most researchers have worked on chest X-ray images, even though chest CT provides better diagnosis than X-ray, and ultrasound scanning is preferable as it is radiation free compared to the other two, especially for pregnant patients. This drawback is due to the unavailability of datasets.
Future direction: Focus on creating datasets of CT scan and ultrasound scan images of COVID-19-infected persons; employ GAN networks to augment CT scan and ultrasound scan images.

Research gap: Most work focuses on supervised techniques, which are heavily dependent on annotated data, and the lack of annotated data affects model performance.
Future direction: Use unsupervised or semi-supervised approaches to solve the limited-annotated-dataset problem; meta-learning approaches such as few-shot and one-shot learning can also be explored.

Research gap: Deep learning models are uninterpretable, which poses a major challenge in the medical field, where doctors must be able to explain their diagnoses.
Future direction: Building explainable and interpretable deep learning models shows tremendous scope for research in this direction.
Transfer learning
Transfer learning [97, 110] is an effective method of taking a model pretrained on a huge dataset such as ImageNet [111] and re-using it for a chosen task. The idea behind transfer learning is that knowledge acquired while solving one problem can be utilized to solve a different but related problem. This makes it possible to apply learned generic features to several small-dataset task domains. Some of the publicly available pretrained models are ResNet, VGG, AlexNet, DenseNet and Inception. Fine tuning and fixed feature extraction are the two ways of using pretrained models. In fixed feature extraction, the fully connected layers are removed from a network pretrained on some huge dataset, while the convolution and pooling layers are retained as a convolutional base that acts as a fixed feature extractor. On top of this fixed feature extractor, any conventional machine learning classifier or a new series of fully connected layers can be added. This simplifies training by limiting it to the additional classifier on the dataset of the chosen task. Due to the dissimilarity between ImageNet images and medical images, this approach is seldom used for medical image diagnosis. The fine-tuning approach, on the other hand, has been widely accepted in medical image diagnosis. In this method, along with replacing the fully connected layers, the kernels in the convolution and pooling layers are also fine-tuned using backpropagation. In some situations, a few of the earlier layers can remain unchanged while the deeper layers are fine-tuned to suit the chosen task domain, because the earlier layers capture generic features and the higher layers are more specific to the domain and task. CNN-based classification models have proved to be good feature extractors, which is evident in the performance of most transfer learning approaches.
To improve the performance of these models, they can be re-trained on fresh labelled datasets. Combining the resulting models with other existing architectures can further boost performance.
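The fixed-feature-extraction strategy described above can be sketched conceptually as follows. This is our own simplified illustration, not code from any cited work: a frozen random linear map stands in for the pretrained convolutional base, and a logistic-regression head is trained on top of it; in fine-tuning mode the base weights would also receive gradient updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a convolutional base pretrained on a large dataset:
# here just a frozen random linear map (illustrative only).
W_base = rng.normal(size=(64, 16))       # maps 64-dim inputs to 16-dim features
W_base_before = W_base.copy()            # kept to verify the base stays frozen

def extract_features(x):
    """Fixed feature extractor: the 'pretrained' base with frozen weights."""
    return np.maximum(x @ W_base, 0.0)   # ReLU features

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# New task-specific head: a logistic-regression classifier trained from scratch.
w_head = np.zeros(16)
b_head = 0.0

# Tiny synthetic binary task standing in for a small medical dataset.
X = rng.normal(size=(200, 64))
y = (X[:, 0] > 0).astype(float)

lr = 0.1
for _ in range(300):                     # train ONLY the head (fixed-feature mode)
    F = extract_features(X)
    p = sigmoid(F @ w_head + b_head)
    w_head -= lr * (F.T @ (p - y)) / len(y)
    b_head -= lr * np.mean(p - y)
    # In fine-tuning mode, W_base would also be updated here via
    # backpropagation; in fixed-feature mode it is never touched.

acc = np.mean((sigmoid(extract_features(X) @ w_head + b_head) > 0.5) == (y > 0.5))
```

The same division of labour applies with a real convolutional base: only the head's parameters enter the optimizer in fixed-feature mode, which is why training remains cheap on small datasets.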
Data augmentation
Data augmentation [112, 113] is a technique to address the problem of limited and imbalanced datasets by synthetically generating additional images. Data augmentation artificially increases the quantity of training data using oversampling or data warping [114] techniques. In data warping, transformations are applied in data space, while in oversampling, synthetic samples are generated in feature space. Oversampling augmentations such as feature-space augmentation, image mixing and generative adversarial networks (GANs) generate artificial instances that are added to the training set. The Synthetic Minority Over-sampling Technique (SMOTE) [114] was applied to solve class imbalance problems in digital handwriting recognition tasks and has also been extensively used on medical datasets with a significant minority class. In SMOTE, a fresh artificial sample is formed by picking a random point in feature space along the line joining a sample to one of its k randomly selected same-class neighbours. Classical transformations [115] apply a combination of affine transformations to the training data, such as rotation, cropping and zooming, along with histogram-based methods. Though the earlier data augmentation [116] approaches, based on a combination of colour modification and affine image transformations, are easy, quick and effective, they are vulnerable to adversarial attacks and fail to create fresh visual structures in the images.
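The SMOTE interpolation rule described above can be sketched in a few lines. This is a simplified illustration, not the reference implementation from [114]: a synthetic sample is placed at a random position on the line segment between a minority-class point and one of its k nearest same-class neighbours.

```python
import numpy as np

def smote_sample(X_minority, k=5, n_new=10, rng=None):
    """Generate synthetic minority-class samples in the style of SMOTE."""
    rng = np.random.default_rng(rng)
    n = len(X_minority)
    new_points = []
    for _ in range(n_new):
        i = rng.integers(n)
        # Distances from point i to every other minority-class point.
        d = np.linalg.norm(X_minority - X_minority[i], axis=1)
        d[i] = np.inf                        # exclude the point itself
        neighbours = np.argsort(d)[:k]       # its k nearest same-class neighbours
        j = rng.choice(neighbours)
        t = rng.random()                     # random position on the segment
        new_points.append(X_minority[i] + t * (X_minority[j] - X_minority[i]))
    return np.array(new_points)

# E.g. 20 feature vectors of the rare (minority) class, 4 features each.
minority = np.random.default_rng(1).normal(size=(20, 4))
synthetic = smote_sample(minority, k=5, n_new=8, rng=2)
```

Because every synthetic point is a convex combination of two minority samples, it always lies inside the region already occupied by the minority class, which is what distinguishes SMOTE from simply duplicating samples.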
General adversarial networks
The two components of a Generative Adversarial Network (GAN) are the generator and the discriminator. The generator produces synthetic data based on a random noise vector, while the discriminator distinguishes original data from the generated artificial data. The input to the generator [117] is a fixed-length random vector, based on which it generates artificial samples in the chosen domain. The discriminator model accepts any sample, original or artificial, and predicts its class label. GANs are efficient at synthesizing images from scratch in any given domain, and combining them with other methods can yield desirable results. Generally, the input to a GAN is a random noise vector, but additional parameters can be added to the input signal to permit variation or adaptation in the network output. Such Conditional Generative Adversarial Networks [113] are GANs that accept an additional input. Several researchers [120, 121, 122] used GANs to augment COVID-19 datasets and combined them with transfer learning models to construct better classifiers for detecting COVID-19 from radiology images. Their experimental results indicate that GANs improved the robustness of the models and overcame the overfitting problem. GANs along with fine-tuned deep transfer learning models alleviated the problem of limited and imbalanced datasets and considerably improved classifier accuracy. However, GANs have a few limitations, such as the need for high computational power, lack of a notion of perspective, problems with counting, and trouble coordinating global structure [128]. Table 4 summarizes the related work by many researchers to address the limited dataset problem. Figure 4 graphically represents the category-wise distribution of the papers referred to in this review.
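The adversarial game between generator and discriminator can be illustrated with a minimal one-dimensional sketch (purely illustrative; real GANs for image synthesis use deep convolutional generators and discriminators). An affine generator tries to imitate a Gaussian "real" distribution while a logistic-regression discriminator tries to tell the two apart, with alternating gradient steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: the distribution the generator must learn to imitate.
real = rng.normal(loc=3.0, scale=0.5, size=1000)

a, b = 1.0, 0.0        # generator: G(z) = a*z + b, z ~ N(0, 1)
w, c = 0.0, 0.0        # discriminator: D(x) = sigmoid(w*x + c)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for _ in range(2000):
    z = rng.normal(size=64)
    fake = a * z + b
    x_real = rng.choice(real, size=64)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    p_real, p_fake = sigmoid(w * x_real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((p_real - 1) * x_real) + np.mean(p_fake * fake))
    c -= lr * (np.mean(p_real - 1) + np.mean(p_fake))

    # Generator step: push D(fake) toward 1 (non-saturating loss),
    # differentiating through fake = a*z + b.
    p_fake = sigmoid(w * fake + c)
    a -= lr * np.mean((p_fake - 1) * w * z)
    b -= lr * np.mean((p_fake - 1) * w)

# Draw fresh samples from the trained generator.
samples = a * rng.normal(size=1000) + b
```

As training proceeds, the generator's offset drifts toward the real mean until the discriminator can no longer separate the two distributions, which is the equilibrium the adversarial objective aims for.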
Open research challenges
The review highlights that using deep learning to analyze medical images for COVID-19 diagnosis is still in its infancy. Although several researchers have headed in this direction, there are still many issues that need attention, as listed in Table 5.
Conclusion
The increasing number of positive coronavirus cases and the rise in mortality at an alarming rate have put countries in a state of emergency to end this pandemic. One of the most effective ways to deal with this is to identify people infected with the virus so that they can be isolated, stopping community transmission. In this paper, we have reviewed various tests for coronavirus diagnosis and conclude that the RT-PCR test along with medical image analysis proves to be an effective way of correctly diagnosing the disease. Automating medical image analysis using deep learning models not only reduces the burden and risks to medical professionals but also speeds up the diagnosis process. However, the performance of deep learning models is restricted by the unavailability of relevant datasets. Even though a few researchers have found solutions to address this issue, most of the work was carried out on CXR images. From the review it was concluded that RT-PCR combined with CT scan or ultrasound images is the best choice for COVID-19 diagnosis, because the finer details of the chest and lungs are captured better in CT and ultrasonography than in CXR. CXR, however, is cheaper, portable, and faster, while ultrasound has the upper hand in terms of zero exposure to radiation, which is a matter of concern in CT and CXR imaging. This review summarizes the related work on applying deep learning models to coronavirus diagnosis and the challenges faced, and highlights future directions of research to find an accurate, efficient, and fast automatic COVID-19 diagnosis model, which is the need of the hour.
References
1. Segars J, Katler Q, McQueen DB, Kotlyar A, Glenn T, Knight Z, et al. Prior and novel coronaviruses, coronavirus disease 2019 (COVID-19), and human reproduction: What is known? Fertility and Sterility. 2020; 113(6): 1140-1149. doi: 10.1016/j.fertnstert.2020.04.025.
2. Jin YH, Cai L, Cheng ZS, Cheng H, Deng T, Fan YP, et al. A rapid advice guideline for the diagnosis and treatment of 2019 novel coronavirus (2019-nCoV) infected pneumonia (standard version). Military Medical Research. 2020; 7(4): 1-23. doi: 10.1186/s40779-020-0233-6.
3. Coronavirus disease (COVID-19) [Internet]. World Health Organization. [cited 2020 Nov]. Available from: https://www.who.int/health-topics/coronavirus.
4. Coronavirus cause: Origin and how it spreads [Internet]. Medical News Today; 2020 [updated 2020 Jun 12; cited 2020 Nov]. Available from: https://www.medicalnewstoday.com/articles/coronavirus-causes.
5. Singhal T. A review of coronavirus disease-2019 (COVID-19). Indian J Pediatr. 2020; 87(4): 281-286. doi: 10.1007/s12098-020-03263-6.
6. Diagnosis and treatment protocol for COVID-19 (trial version 7) [Internet]. National Health Commission of the People's Republic of China; 2020 [updated 2020; cited 2020 Nov]. Available from: http://en.nhc.gov.cn/2020-03/29/c_78469.htm.
7. COVID-19 testing overview [Internet]. Centers for Disease Control and Prevention; 2020 [updated 2022 Feb 1; cited 2020 Nov]. Available from: https://www.cdc.gov/coronavirus/2019-ncov/symptoms-testing/testing.html.
8. Coronavirus (COVID-19) testing [Internet]. Testing.com; 2020 [updated 2021 Nov 9; cited 2020 Nov]. Available from: https://labtestsonline.org/tests/coronavirus-covid-19-testing.
9. Coronavirus testing basics [Internet]. U.S. Food and Drug Administration; 2020 [updated 2022 Feb 2; cited 2020 Nov]. Available from: https://www.fda.gov/consumers/consumer-updates/coronavirus-disease-2019-testing-basics.
10. Yang Q, Liu Q, Xu H, Lu H, Liu S, Li H. Imaging of coronavirus disease 2019: A Chinese expert consensus statement. Eur J Radiol. 2020; 127: 109008. doi: 10.1016/j.ejrad.2020.109008.
11. Zheng Z, Yao Z, Wu K, Zheng J. The diagnosis of pandemic coronavirus pneumonia: A review of radiology examination and laboratory test. J Clin Virol. 2020; 128: 104396. doi: 10.1016/j.jcv.2020.104396.
12. Use of chest imaging in COVID-19: A rapid advice guide [Internet]. World Health Organization; 2020 [updated 2020 Jun 11; cited 2020 Nov]. Available from: https://www.who.int/publications/i/item/use-of-chest-imaging-in-covid-19.
13. Stogiannos N, Fotopoulos D, Woznitza N, Malamateniou C. COVID-19 in the radiology department: What radiographers need to know. Radiography (Lond). 2020; 26(3): 254-263. doi: 10.1016/j.radi.2020.05.012.
14. Shuja J, Alanazi E, Alasmary W, Alashaikh A. COVID-19 open source data sets: A comprehensive survey. Appl Intell (Dordr). 2021; 51(3): 1296-1325. doi: 10.1007/s10489-020-01862-6.
15. Buonsenso D, Pata D, Chiaretti A. COVID-19 outbreak: Less stethoscope, more ultrasound. Lancet Respir Med. 2020; 8(5): e27. doi: 10.1016/S2213-2600(20)30120-X.
16. Shen M, Zhou Y, Ye J, Abdullah Al-Maskri AA, Kang Y, Zeng S, Cai S. Recent advances and perspectives of nucleic acid detection for coronavirus. J Pharm Anal. 2020; 10(2): 97-101. doi: 10.1016/j.jpha.2020.02.010.
17. Lipsitch M, Perlman S, Waldor MK. Testing COVID-19 therapies to prevent progression of mild disease. Lancet Infect Dis. 2020; 20(12): 1367. doi: 10.1016/S1473-3099(20)30372-8.
18. Tahamtan A, Ardebili A. Real-time RT-PCR in COVID-19 detection: Issues affecting the results. Expert Rev Mol Diagn. 2020; 20(5): 453-454. doi: 10.1080/14737159.2020.1757437.
19. van Kasteren PB, van der Veer B, van den Brink S, Wijsman L, de Jonge J, van den Brandt A, et al. Comparison of seven commercial RT-PCR diagnostic kits for COVID-19. J Clin Virol. 2020; 128: 104412. doi: 10.1016/j.jcv.2020.104412.
20. Sarkodie BD, Osei-Poku K, Brakohiapa E. Diagnosing COVID-19 from chest X-ray in resource limited environment - case report. Med Case. 2020; 6(2): 135. doi: 10.36648/2471-8041.6.2.135.
21. Jacobi A, Chung M, Bernheim A, Eber C. Portable chest X-ray in coronavirus disease-19 (COVID-19): A pictorial review. Clin Imaging. 2020; 64: 35-42. doi: 10.1016/j.clinimag.2020.04.001.
22. Dong D, Tang Z, Wang S, Hui H, Gong L, Lu Y, et al. The role of imaging in the detection and management of COVID-19: A review. IEEE Reviews in Biomedical Engineering. 2021; 14: 16-29. doi: 10.1109/RBME.2020.2990959.
23. Li M. Chest CT features and their role in COVID-19. Radiol Infect Dis. 2020; 7(2): 51-54. doi: 10.1016/j.jrid.2020.04.001.
24. Miao C, Jin M, Miao L, Yang X, Huang P, Xiong H, et al. Early chest computed tomography to diagnose COVID-19 from suspected patients: A multicenter retrospective study. Am J Emerg Med. 2021; 44: 346-351. doi: 10.1016/j.ajem.2020.04.051.
25. Ai T, Yang Z, Hou H, Zhan C, Chen C, Lv W, et al. Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: A report of 1014 cases. Radiology. 2020; 296(2): E32-E40. doi: 10.1148/radiol.2020200642.
26. Bonadia N, Carnicelli A, Piano A, Buonsenso D, Gilardi E, Kadhim C, et al. Lung ultrasound findings are associated with mortality and need for intensive care admission in COVID-19 patients evaluated in the emergency department. Ultrasound Med Biol. 2020; 46(11): 2927-2937. doi: 10.1016/j.ultrasmedbio.2020.07.005.
27. Xing C, Li Q, Du H, Kang W, Lian J, Yuan L. Lung ultrasound findings in patients with COVID-19 pneumonia. Crit Care. 2020; 24: 174. doi: 10.1186/s13054-020-02876-9.
28. Volpicelli G, Gargani L. Sonographic signs and patterns of COVID-19 pneumonia. Ultrasound J. 2020; 12(1): 22. doi: 10.1186/s13089-020-00171-w.
29. Sofia S, Boccatonda A, Montanari M, Spampinato M, D'ardes D, Cocco G, et al. Thoracic ultrasound and SARS-COVID-19: A pictorial essay. J Ultrasound. 2020; 23(2): 217-221. doi: 10.1007/s40477-020-00458-7.
30. Aggeli C, Oikonomou E, Tousoulis D. A reappraisal of the role of transthoracic ultrasound in the era of COVID-19: Patient evaluation through new windows. Hellenic J Cardiol. 2021; 62(2): 180-181. doi: 10.1016/j.hjc.2020.06.003.
31. Salehi S, Abedi A, Balakrishnan S, Gholamrezanezhad A. Coronavirus disease 2019 (COVID-19): A systematic review of imaging findings in 919 patients. AJR Am J Roentgenol. 2020; 215(1): 87-93. doi: 10.2214/AJR.20.23034.
32. Tan G, Lian X, Zhu Z, Wang Z, Huang F, Zhang Y, et al. Use of lung ultrasound to differentiate coronavirus disease 2019 (COVID-19) pneumonia from community-acquired pneumonia. Ultrasound Med Biol. 2020; 46(10): 2651-2658. doi: 10.1016/j.ultrasmedbio.2020.05.006.
33. Sultan LR, Sehgal CM. A review of early experience in lung ultrasound in the diagnosis and management of COVID-19. Ultrasound Med Biol. 2020; 46(9): 2530-2545. doi: 10.1016/j.ultrasmedbio.2020.05.012.
34. Pneumonia [Internet]. National Heart, Lung and Blood Institute; 2020 [cited 2020 Nov]. Available from: https://www.nhlbi.nih.gov/health/pneumonia.
35. Coronavirus disease (COVID-19) advice for the public: Mythbusters [Internet]. World Health Organization; 2020 [cited 2020 Nov]. Available from: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/advice-for-public/myth-busters.
36. Coronavirus and pneumonia [Internet]. WebMD; 2020 [cited 2020 Nov]. Available from: https://www.webmd.com/lung/covid-and-pneumonia#1.
37. Pneumonia caused by coronavirus is likely to be more severe than other types of pneumonia [Internet]. The Swaddle; 2020 [updated 2020; cited 2020 Nov]. Available from: https://theswaddle.com/what-is-the-difference-between-covid-and-bacterial-pneumonia/.
38. Pneumonia [Internet]. Cleveland Clinic [cited 2020 Nov]. Available from: https://my.clevelandclinic.org/health/diseases/4471-pneumonia.
39. Kim JE, Kim UJ, Kim HK, Cho SK, An JH, Kang SJ, et al. Predictors of viral pneumonia in patients with community-acquired pneumonia. PLoS One. 2014; 9(12): e114710. doi: 10.1371/journal.pone.0114710.
40. Kim TY, Son J, Kim KG. The recent progress in quantitative medical image analysis for computer aided diagnosis systems. Healthc Inform Res. 2011; 17(3): 143-149. doi: 10.4258/hir.2011.17.3.143.
41. Shi F, Wang J, Shi J, Wu Z, Wang Q, Tang Z, et al. Review of artificial intelligence techniques in imaging data acquisition, segmentation, and diagnosis for COVID-19. IEEE Rev Biomed Eng. 2021; 14: 4-15. doi: 10.1109/RBME.2020.2987975.
42. Swapnarekha H, Behera HS, Nayak J, Naik B. Role of intelligent computing in COVID-19 prognosis: A state-of-the-art review. Chaos Solitons Fractals. 2020; 138: 109947. doi: 10.1016/j.chaos.2020.109947.
43. Ying T. GPU-based parallel implementation of swarm intelligence algorithms. 1st ed. Morgan Kaufmann; 2016.
44. Iglesias JE. Globally optimal coupled surfaces for semi-automatic segmentation of medical images. In: Niethammer M, Styner M, Aylward S, Zhu H, Oguz I, Yap PT, Shen D, eds. Information Processing in Medical Imaging: 25th International Conference, IPMI 2017; Boone, NC, USA. Cham, Switzerland: Springer. 610-621.
45. Mansoor A, Bagci U, Foster B, Xu Z, Papadakis GZ, Folio LR, et al. Segmentation and image analysis of abnormal lungs at CT: Current approaches, challenges, and future trends. Radiographics. 2015; 35(4): 1056-1076. doi: 10.1148/rg.2015140232.
46. Yang F, Murat H, Yan CB, Yao J, Kutluk A, Kong XM, et al. Feature extraction and classification on esophageal X-ray images of Xinjiang Kazak nationality. Journal of Healthcare Engineering. 2017; 2040-2295. doi: 10.1155/2017/4620732.
47. Pathak SD, Ng L, Wyman B, Fogarasi S, Racki S, Oelund JC, et al. Quantitative image analysis: Software systems in drug development trials. Drug Discov Today. 2003; 8(10): 451-458. doi: 10.1016/s1359-6446(03)02698-9.
48. Clark MW. Quantitative shape analysis: A review. Journal of the International Association for Mathematical Geology. 1981; 142(4): 303. doi: 10.1007/BF01031516.
49. Drabycz S, Stockwell RG, Mitchell JR. Image texture characterization using the discrete orthonormal S-transform. J Digit Imaging. 2009; 22(6): 696-708. doi: 10.1007/s10278-008-9138-8.
50. Apostolopoulos ID, Aznaouridis SI, Tzani MA. Extracting possibly representative COVID-19 biomarkers from X-ray images with deep learning approach and image data related to pulmonary diseases. J Med Biol Eng. 2020; 40(3): 462-469. doi: 10.1007/s40846-020-00529-4.
51. Farid AA, Selim GI, Khater HAA. A novel approach of CT images feature analysis and prediction to screen for coronavirus disease (COVID-19). International Journal of Scientific and Engineering Research. 2020; 11(3): 1141. doi: 10.14299/ijser.2020.03.02.
52. Hasan AM, Al-Jawad MM, Jalab HA, Shaiba H, Ibrahim RW, Al-Shamasneh AR. Classification of COVID-19 coronavirus, pneumonia and healthy lungs in CT scans using Q-deformed entropy and deep learning features. Entropy (Basel). 2020; 22(5): 517. doi: 10.3390/e22050517.
53. Balaji K, Lavanya K. Medical image analysis with deep neural networks. In: Sangaiah AK, ed. Deep Learning and Parallel Computing Environment for Bioengineering System. Academic Press; 2019. 75-79.
54. Gupta S, Walia P, Singla C, Dhankar S, Mishra T, Khandelwal A, et al. Segmentation, feature extraction and classification of astrocytoma in MR images. Indian Journal of Science and Technology. 2016; 9(36). doi: 10.17485/ijst/2016/v9i36/102154.
55. Fu GS, Levin-Schwartz Y, Lin QH, Zhang D. Machine learning for medical imaging. J Healthc Eng. 2019; 2019: 9874591. doi: 10.1155/2019/9874591.
Bishop CM. Pattern recognition and machine learning. Berlin, Germany: Springer; 2006.
58. Li X, Fang X, Bian Y, Lu J. Comparison of chest CT findings between COVID-19 pneumonia and other types of viral pneumonia: A two-center retrospective study. Eur Radiol. 2020; 30(10): 5470-5478. doi: 10.1007/s00330-020-06925-3.
59. Bai HX, Hsieh B, Xiong Z, Halsey K, Choi JW, Tran TML, et al. Performance of radiologists in differentiating COVID-19 from non-COVID-19 viral pneumonia at chest CT. Radiology. 2020; 296(2): E46-E54. doi: 10.1148/radiol.2020200823.
60. Shi H, Han X, Jiang N, Cao Y, Alwalid O, Gu J, et al. Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: A descriptive study. Lancet Infectious Diseases. 2020; 20: 425-434.
61. Zhao W, Zhong Z, Xie X, Yu Q, Liu J. Relation between chest CT findings and clinical conditions of coronavirus disease (COVID-19) pneumonia: A multicenter study. American Journal of Roentgenology. 2020; 1. doi: 10.2214/AJR.20.22976.
62. Hani C, Trieu NH, Saab I, Dangeard S, Bennani S, Chassagnon G, et al. COVID-19 pneumonia: A review of typical CT findings and differential diagnosis. Diagn Interv Imaging. 2020; 101(5): 263-268. doi: 10.1016/j.diii.2020.03.014.
63. Machine learning in medical imaging and analysis [Internet]. aitrends; 2018 Dec 18 [cited 2020 Nov]. Available from: https://www.aitrends.com/healthcare/machine-learning-in-medical-imaging-and-analysis/.
64. Erickson BJ, Korfiatis P, Akkus Z, Kline TL. Machine learning for medical imaging. Radiographics. 2017; 37(2): 505-515. doi: 10.1148/rg.2017160130.
Hosmer DW, Stanley L. Applied logistic regression. 2nd ed. New York, NY: Wiley; 2000.
67. Quinlan JR. Induction of decision trees. Mach Learn. 1986; 1(1): 81-106.
68. Seber GAF, Lee AJ. Linear regression analysis. 2nd ed. New York, NY: Wiley; 2012.
69. Cristianini N, Shawe-Taylor J. An introduction to support vector machines and other kernel-based learning methods. NY, USA: Cambridge University Press; 1999.
70. Lowd D, Daniel L, Pedro D. Naive Bayes models for probability estimation. In: Proceedings of the 22nd International Conference on Machine Learning: ICML '05. New York, NY: Association for Computing Machinery; 2005.
Birant D, Kut A. ST-DBSCAN: An algorithm for clustering spatial-temporal data. Data Knowl Eng. 2007; 60(1): 208-221.
75. Roberts SJ, Husmeier D, Rezek I, Penny W. Bayesian approaches to Gaussian mixture modeling. IEEE Trans Pattern Anal Mach Intell. 1998; 20(11): 1133-1142.
76. Dunn JC. A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters. J Cybern. 1973; 3(3): 32-57.
77. Haque IRI, Neubert J. Deep learning approaches to biomedical image segmentation. Informatics in Medicine Unlocked. 2020; 18: 100297. doi: 10.1016/j.imu.2020.100297.
78. Roth HR, Shen C, Oda H, Oda M, Hayashi Y, Misawa K, et al. Deep learning and its application to medical image segmentation. Medical Imaging Technology. 2018; 36(2): 1-6.
79. Pathan N, Jadhav ME. Medical image classification based on machine learning techniques. In: Luhach A, Jat D, Hawari K, Gao XZ, Lingras P, eds. Advanced Informatics for Computing Research: Proceedings of ICAICR 2019; Shimla, India. Singapore: Springer.
80. Yoon HJ, Jeong YJ, Kang HJ, Jeong JE, Kang DY. Medical image analysis using artificial intelligence. Progress in Medical Physics. Korean Society of Medical Physics. 2019; 30(2): 49-58.
81. Yadav SS, Jadhav SM. Deep convolutional neural network based medical image classification for disease diagnosis. J Big Data. 2019; 6: 113.
82. Purushotham S, Meng C, Che Z, Liu Y. Benchmarking deep learning models on large healthcare datasets. J Biomed Inform. 2018; 83: 112-134. doi: 10.1016/j.jbi.2018.04.007.
83. Faust O, Hagiwara Y, Hong TJ, Lih OS, Acharya UR. Deep learning for healthcare applications based on physiological signals: A review. Comput Methods Programs Biomed. 2018; 161: 1-13. doi: 10.1016/j.cmpb.2018.04.005.
84. Dai Y, Wang G. A deep inference learning framework for healthcare. Pattern Recognition Letters. 2020; 139: 17-25. doi: 10.1016/j.patrec.2018.02.009.
85. Yang HC, Islam MM, Jack Li YC. Potentiality of deep learning application in healthcare. Comput Methods Programs Biomed. 2018; 161: A1. doi: 10.1016/j.cmpb.2018.05.014.
86. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015; 521(7553): 436-444. doi: 10.1038/nature14539.
87. Ker J, Wang L, Rao J, Lim T. Deep learning applications in medical image analysis. IEEE Access. 2018; 6: 9375-9389. doi: 10.1109/ACCESS.2017.2788044.
88. Bakator M, Radosav D. Deep learning and medical diagnosis: A review of literature. Multimodal Technologies Interact. 2018; 2(3): 47. doi: 10.3390/mti2030047.
89. Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: A systematic review and meta-analysis. Lancet Digit Health. 2019; 1(6): 271-297. doi: 10.1016/S2589-7500(19)30123-2.
90. Razzak MI, Naz S, Zaib A. Deep learning for medical image processing: Overview, challenges and the future. In: Dey N, Ashour A, Borra S, eds. Classification in BioApps. Lecture Notes in Computational Vision and Biomechanics, vol 26. Springer, Cham; 2017. 323-350.
91. Seo H, Badiei Khuzani M, Vasudevan V, Huang C, Ren H, Xiao R, et al. Machine learning techniques for biomedical image segmentation: An overview of technical aspects and introduction to state-of-art applications. Med Phys. 2020; 47(5): e148-e167. doi: 10.1002/mp.13649.
92. Shen D, Wu G, Suk HI. Deep learning in medical image analysis. Annu Rev Biomed Eng. 2017; 19(1): 221-248.
93. Indolia S, Goswami AK, Mishra SP, Asopa P. Conceptual understanding of convolutional neural network - a deep learning approach. Procedia Computer Science. 2018; 132: 679-688. doi: 10.1016/j.procs.2018.05.069.
94. Ahammed K, Satu MS, Abedin MZ, Rahaman MA, Islam SMS. Early detection of coronavirus cases using chest X-ray images employing machine learning and deep learning approaches [preprint]. 2020. doi: 10.1101/2020.06.07.20124594.
95. Orkun F, Mingyan W, Matthias N, Lukas P, Matthias W, Carl EK, et al. Machine learning techniques for the segmentation of tomographic image data of functional materials. Frontiers in Materials. 2019; 6: 145. doi: 10.3389/fmats.2019.00145.
96. Sharma N, Jain V, Mishra A. An analysis of convolutional neural networks for image classification. Procedia Computer Science. 2018; 132: 377-384.
97. Yamashita R, Nishio M, Do RKG, Togashi K. Convolutional neural networks: An overview and application in radiology. Insights Imaging. 2018; 9(4): 611-629. doi: 10.1007/s13244-018-0639-9.
98. Varshni D, Thakral K, Agarwal L, Nijhawan R, Mittal A. Pneumonia detection using CNN based feature extraction. In: Proceedings of the International Conference on Electrical, Computer and Communication Technologies (ICECCT); Coimbatore, India. IEEE; 2019. Available from: https://ieeexplore.ieee.org/document/8869364.
99. Kumar S, Mishra S, Singh SK. Deep transfer learning-based COVID-19 prediction using chest X-rays [preprint]. doi: 10.1101/2020.05.12.20099937.
100. Elaziz MA, Hosny KM, Salah A, Darwish MM, Lu S, Sahlol AT. New machine learning method for image-based diagnosis of COVID-19. PLoS ONE. 2020; 15(6): e0235187. doi: 10.1371/journal.pone.0235187.
101. Abed MM, Hameed AK, Alaa SAW, Salama AM, Shumoos AF, Musa DA, et al. Benchmarking methodology for selection of optimal COVID-19 diagnostic model based on entropy and TOPSIS methods. IEEE Access. 2020; 8: 99115-99131. doi: 10.1109/ACCESS.2020.2995597.
102. Wang J, Bao Y, Wen Y, Lu H, Luo H, Xiang Y, et al. Prior-attention residual learning for more discriminative COVID-19 screening in CT images. IEEE Trans Med Imaging. 2020; 39(8): 2572-2583. doi: 10.1109/TMI.2020.2994908.
103. Singh D, Kumar V, Vaishali, Kaur M. Classification of COVID-19 patients from chest CT images using multi-objective differential evolution-based convolutional neural networks. Eur J Clin Microbiol Infect Dis. 2020; 39(7): 1379-1389. doi: 10.1007/s10096-020-03901-z.
104. Kang H, Xia L, Yan F, Wan Z, Shi F, Yuan H, et al. Diagnosis of coronavirus disease 2019 (COVID-19) with structured latent multi-view representation learning. IEEE Trans Med Imaging. 2020; 39(8): 2606-2614. doi: 10.1109/TMI.2020.2992546.
105. Roy S, Menapace W, Oei S, Luijten B, Fini E, Saltori C, et al. Deep learning for classification and localization of COVID-19 markers in point-of-care lung ultrasound. IEEE Trans Med Imaging. 2020; 39(8): 2676-2687. doi: 10.1109/TMI.2020.2994459.
106. Hu S, Gao Y, Niu Z, Jiang Y, Li L, Xiao X, et al. Weakly supervised deep learning for COVID-19 infection detection and classification from CT images. IEEE Access. 2020; 8: 118869-118883. doi: 10.1109/ACCESS.2020.3005510.
107. Fan DP, Zhou T, Ji GP, Zhou Y, Chen G, Fu H, et al. Inf-Net: Automatic COVID-19 lung infection segmentation from CT images. IEEE Transactions on Medical Imaging. 2020; 39(8): 2626-2637. doi: 10.1109/TMI.2020.2996645.
108. Pathak Y, Shukla PK, Arya KV. Deep bidirectional classification model for COVID-19 disease infected patients. IEEE/ACM Transactions on Computational Biology and Bioinformatics. 2021; 18(4): 1234-1241. doi: 10.1109/TCBB.2020.3009859.
109. Amyar A, Modzelewski R, Li H, Ruan S. Multi-task deep learning based CT imaging analysis for COVID-19 pneumonia: Classification and segmentation. Comput Biol Med. 2020; 126: 104037. doi: 10.1016/j.compbiomed.2020.104037.
110. Weiss K, Khoshgoftaar TM, Wang D. A survey of transfer learning. J Big Data. 2016; 3(9). doi: 10.1186/s40537-016-0043-6.
111. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM. 2017; 60(6): 84-90. doi: 10.1145/3065386.
112. Shorten C, Khoshgoftaar TM. A survey on image data augmentation for deep learning. J Big Data. 2019; 6: 60. doi: 10.1186/s40537-019-0197-0.
113. Perez L, Wang J. The effectiveness of data augmentation in image classification using deep learning. arXiv:1712.04621 [preprint]. 2017. Available from: https://arxiv.org/abs/1712.04621.
114. Wong SC, Gatt A, Stamatescu V, McDonnell MD. Understanding data augmentation for classification: When to warp? In: International Conference on Digital Image Computing: Techniques and Applications (DICTA); 2016 Nov-Dec; Gold Coast, Australia. IEEE; 1-6. doi: 10.1109/DICTA.2016.7797091.
115. Frid-Adar M, Diamant I, Klang E, Amitai M, Goldberger J, Greenspan H. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing. 2018; 321: 321-331. doi: 10.1016/j.neucom.2018.09.013.
116. Mikołajczyk A, Grochowski M. Data augmentation for improving deep learning in image classification problem. In: International Interdisciplinary PhD Workshop (IIPhDW); Świnoujście; 2018. 117-122. doi: 10.1109/IIPHDW.2018.8388338.
117. A gentle introduction to generative adversarial networks (GANs) [Internet]. Machine Learning Mastery; 2019 [updated 2019 Jul 19; cited 2020 Nov]. Available from: https://machinelearningmastery.com/what-are-generative-adversarial-networks-gans/.
118. Pereira RM, Bertolini D, Teixeira LO, Silla CN, Costa YMG. COVID-19 identification in chest X-ray images on flat and hierarchical classification scenarios. Comput Methods Programs Biomed. 2020; 194: 105532. doi: 10.1016/j.cmpb.2020.105532.
119. Apostolopoulos ID, Mpesiana TA. COVID-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys Eng Sci Med. 2020; 43(2): 635-640. doi: 10.1007/s13246-020-00865-4.
120. Loey M, Smarandache F, Khalifa NE. Within the lack of chest COVID-19 X-ray dataset: A novel detection model based on GAN and deep transfer learning. Symmetry. 2020; 12(4): 651. doi: 10.3390/sym12040651.
121. Khalifa NEM, Taha MHN, Hassanien AE, Elghamrawy S. The detection of COVID-19 in CT medical images: A deep learning approach. Big Data Analytics and Artificial Intelligence Against COVID-19: Innovation Vision and Approach. 2020; 78: 73-90. doi: 10.1007/978-3-030-55258-9_5.
122. Waheed A, Goyal M, Gupta D, Khanna A, Al-Turjman F, Pinheiro PR. CovidGAN: Data augmentation using auxiliary classifier GAN for improved COVID-19 detection. IEEE Access. 2020; 8: 91916-91923. doi: 10.1109/ACCESS.2020.2994762.
123. Hu R, Ruan G, Xiang S, Huang M, Liang Q, Li J. Automated diagnosis of COVID-19 using deep learning and data augmentation on chest CT [preprint]. 2020. Available from: https://www.medrxiv.org/content/10.1101/2020.04.24.20078998v2.
124. Loey M, Manogaran G, Khalifa NEM. A deep transfer learning model with classical data augmentation and CGAN to detect COVID-19 from chest CT radiography digital images. Neural Comput Appl. 2020; 1-13. doi: 10.1007/s00521-020-05437-x.
125. Nishio M, Noguchi S, Matsuo H, Murakami T. Automatic classification between COVID-19 pneumonia, non-COVID-19 pneumonia, and the healthy on chest X-ray image: Combination of data augmentation methods. Scientific Reports. 2020; 10(1). doi: 10.1038/s41598-020-74539-2.
126. Ucar F, Korkmaz D. COVIDiagnosis-Net: Deep Bayes-SqueezeNet based diagnosis of the coronavirus disease 2019 (COVID-19) from X-ray images. Med Hypotheses. 2020; 140: 109761. doi: 10.1016/j.mehy.2020.109761.
127. Abbas A, Abdelsamea MM, Gaber MM. Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network. Appl Intell. 2021; 51: 854-864. doi: 10.1007/s10489-020-01829-7.
128. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial networks. In: Ghahramani Z, Welling M, Cortes C, Lawrence N, Weinberger KQ, eds. Advances in Neural Information Processing Systems. Curran Associates, Inc; 2014.