
    Deep Stacked Ensemble Learning Model for COVID-19 Classification

Computers, Materials & Continua, 2022, Issue 3

    G.Madhu,B.Lalith Bharadwaj,Rohit Boddeda,Sai Vardhan,K.Sandeep Kautish,Khalid Alnowibet,Adel F.Alrasheedi and Ali Wagdy Mohamed

    1Department of Information Technology,VNRVJIET,Hyderabad,500090,India

    2Department of Computer Science and Engineering,VNRVJIET,Hyderabad,500090,India

    3LBEF Campus,Kathmandu,44600,Nepal

    4Statistics and Operations Research Department,College of Science,King Saud University,Riyadh,11451,Kingdom of Saudi Arabia

    5Operations Research Department,Faculty of Graduate Studies for Statistical Research,Cairo University,Giza,12613,Egypt

    6Wireless Intelligent Networks Center(WINC),School of Engineering and Applied Sciences,Nile University,Giza,12588,Egypt

Abstract: COVID-19 is a growing problem worldwide with a high mortality rate. As a result, the World Health Organization (WHO) declared it a pandemic. In order to limit the spread of the disease, a fast and accurate diagnosis is required. A reverse transcription polymerase chain reaction (RT-PCR) test is often used to detect the disease. However, since this test is time-consuming, a chest computed tomography (CT) or plain chest X-ray (CXR) is sometimes indicated. The value of automated diagnosis is that it saves time and money by minimizing human effort. Our research makes three significant contributions. The first is to assess the behavior and efficiency of a variety of vision models, ranging from Inception to Neural Architecture Search (NAS) networks, using an appropriate finetuning methodology. Second, the behavior of these models is visually analyzed by plotting class activation maps (CAMs) for individual networks and assessing classification efficiency with AUC-ROC curves. Finally, stacked ensemble techniques are used to provide greater generalization by combining the finetuned models into six ensemble neural networks. Using stacked ensembles, the generalization of the models improved. Furthermore, the ensemble model created by combining all of the finetuned networks obtained a state-of-the-art COVID-19 detection accuracy of 99.17%. The precision and recall rates were 99.99% and 89.79%, respectively, highlighting the robustness of stacked ensembles. According to the experimental results, the proposed ensemble approach performed well in the classification of COVID-19 lesions on CXR.

Keywords: COVID-19 classification; class activation maps (CAMs) visualization; finetuning; stacked ensembles; automated diagnosis; deep learning

    1 Introduction

The coronavirus (COVID-19) was first noted in December 2019 in Wuhan City (Hubei, China). The viral infection quickly spread worldwide, eventually causing a global pandemic. Following a detailed study of its biological properties, the virus was found to be of zoonotic origin and to consist of a single-stranded ribonucleic acid (RNA) genome with a strong capsid. Based on this survey, it was concluded that the virus belongs to the Coronaviridae family and was subsequently named 2019-novel coronavirus (2019-nCoV). A person infected with 2019-nCoV may have no symptoms or develop mild symptoms, including sore throat, dry cough, and fever. If the human body hosts the 2019-nCoV for a long period, the virus can cause severe respiratory illness and, in the worst case, it can lead to death. Four stages are used to assess the virus's virulence in the human body. During the first four days of the infection, the patient is often asymptomatic. The second stage is the progressive stage, which generally occurs between the fifth and eighth day following the infection, whereby the patient may develop mild symptoms. Stage three is known as the peak stage, which occurs between nine and thirteen days. The final stage is the absorption stage, whereby the load of the virus exponentially increases [1]. These observations were reported with clinical experimentation in Fig. 1 [2].

Figure 1: An upsurge in the number of cases and death rate from January to July 2020 is depicted. The infection and death rates increased by a factor of approximately 10^5 within six months

Due to the rapid surge in cases, healthcare systems are finding it increasingly difficult to cope with the demand and to provide timely vaccination [3]. This problem is being further exacerbated by the global shortage of medical supplies. In order to reduce the burden on healthcare systems, several preventive measures such as social distancing, proper sanitization, the mandatory wearing of masks in public places, and lockdowns have been implemented worldwide to reduce the spread. Despite the implementation of all these measures, the mortality rate from the disease is still high in various countries. According to the Chinese National Health Commission (NHC), as of February 4th, 2020, the mortality rate from the disease was 2.1% in China and 0.2% outside of China. The mode of spread of the virus in asymptomatic cases remains controversial [4,5]. In order to identify COVID-19 in an asymptomatic person, precise and proper diagnostic tests are required. The diagnostic tests are typically performed by collecting samples from the individual patient for testing in a laboratory or at a point-of-care testing center [6]. Manual testing is time-consuming and labor-intensive. Therefore, this method is not suitable for obtaining a fast diagnosis during a pandemic. Computed tomography (CT) and chest X-ray (CXR) can be used to detect and assess the severity of the lung damage caused by the viral infection. However, a radiologist needs to analyze these images manually, which is time-consuming. Artificial intelligence (AI) can be used to develop algorithms to automatically assess the lung damage caused by the virus [2,7]. The findings of COVID-19 infection on CXR or chest CT vary from person to person. However, two common hallmark imaging features observed in infected patients were bilateral and peripheral ground-glass opacities and peripheral lesions with a rounded morphology [2]. These distinct features facilitate the use of machine vision learning models to automatically detect COVID-19 lesions on either CXR or CT images. However, traditional methods do not preserve the contextual information of CT scan images. In view of this, this study aimed to develop a robust diagnostic model for COVID-19 detection on CXR images. The objectives of this study were to:

• analyze the behavior and performance of various vision models, ranging from Inception to Neural Architecture Search (NAS) networks, followed by appropriate model finetuning,

• visually assess the behavior of these models by plotting class activation maps (CAMs) for individual networks,

• determine the classification performance of the models by calculating the area under the curve (AUC) of the receiver operating characteristic (ROC) curve,

• improve the generalization of the models by combining the finetuned deep learning models into stacked models (stacked ensembles technique).

    2 Previous Works

Numerous studies evaluated the use of deep learning methods for automatic detection, classification, feature extraction, and segmentation in COVID-19 diagnosis from CXR and CT images. This study discusses the relevant applications of pre-trained deep neural networks that highlight the key aspects impacting COVID-19 detection and classification. Fan et al. [8] proposed the deep learning network Inf-Net for the segmentation of COVID-19 lesions on transverse CT scan images. This network architecture utilized Res2Net as a backbone and obtained a dice score of 0.682. A similar semi-Inf-Net model attained a higher dice score of 0.739. Oh et al. [9] implemented two different approaches, global patch matching and local patch matching, for segmentation and classification. Their method used ResNet-18 as the backbone to classify four different types of lung infections similar to that of COVID-19. Their algorithm obtained an accuracy score of 88.9% and a specificity of 0.946 on randomly cropped patches using the local approach. Rahimzadeh et al. [10] constructed an 8-phase training procedure concatenating the Xception and ResNet-50 architectures. In each phase, samples were trained with proper stratification to overcome class imbalance for 100 epochs. This model attained an overall accuracy score of 91.4% under five-fold cross-validation. Ozturk et al. [11] proposed the DarkCovidNet model for binary and tri-class classification of CXR images infected with COVID-19. This model was trained by constructing a deep neural architecture with a series of convolutional layers and max-pooling layers. This method attained accuracy scores of 98.3% for the binary classification and 87.2% for the tri-class classification under five-fold cross-validation. Apostolopoulos et al. [12] applied transfer learning using diverse pre-trained architectures on two different datasets for the classification of COVID-19 CXR images. Their transfer learning methodology attained an accuracy score of 98.75% using VGG-19 pre-trained weights for binary classification and an accuracy of 94.7% with MobileNet-V2 for CXR image classification consisting of three classes. Li et al. [13] proposed the CovNet network by training a deep learning model with ResNet-50 as a backbone for sharing weights and attained an accuracy of 96%. Khan et al. [14] designed the CoroNet architecture with Xception as the underlying weight-sharing model. This model achieved an accuracy score of 99% for binary classification, 95% when using three non-identical classes (one class belonging to COVID-19), and 89.6% for four variant classes under a four-fold cross-validation framework. Wang et al. [15] proposed COPLE-Net to segment COVID-19 pneumonia lesions from CT images. A novel dice loss combined with an MAE loss for generalization was used to reduce noise and minimize the foreground-background imbalance for the segmentation task. This diagnostic framework obtained a dice score of 80.72±9.96. Most COVID-19 classification and segmentation methods for CXR and CT images described in the literature are based on deep neural networks. The advantage of deep neural networks is that they provide a versatile weight-sharing mechanism, thus improving the performance of the algorithm. Therefore, this study aimed to develop a robust diagnostic COVID-19 model using CXR images. The objectives of the study were to:

• examine the behavior and efficiency of different deep learning vision models ranging from Inception to NAS networks, using the proper finetuning procedure,

• visually assess the behavior of these models by plotting class activation maps (CAMs) for individual networks,

• determine the classification performance of the models by calculating the area under the curve (AUC) of the receiver operating characteristic (ROC) curve,

• improve the generalization of the models by combining the finetuned deep learning models into stacked models (stacked ensembles technique).

    3 Methodology

    3.1 Dataset Description

A total of 2905 CXRs were obtained from various databases, including the Italian Society of Medical Radiology (SIRM), ScienceDirect, The New England Journal of Medicine (NEJM), Radiological Society of North America (RSNA), Radiopaedia, Springer, Wiley, Medrxiv, and other sources (Fig. 2). The complete source list of the COVID-19 CXR image samples is available in the metadata file [16]. These images were reviewed by an expert radiologist. Eight percent (n=219) of these images were from patients infected with COVID-19, 46% (n=1341) of the images were from healthy persons, and the rest of the images were from patients suffering from either bacterial or viral pneumonia (n=1345) [16]. The data was then divided into a 75% training dataset (Dtrain) and a 25% testing dataset (Dtest). Due to the small number of CXRs with COVID-19 lesions, stratified random sampling was used to ensure that all three diagnoses were proportionally represented in both the training and testing datasets, hence minimizing the risk of introducing class imbalance in the data distribution.
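A minimal sketch of this stratified 75/25 split is given below. The file names, label strings, and the use of scikit-learn are illustrative assumptions; only the split proportions and the stratification itself follow the text.

```python
from sklearn.model_selection import train_test_split

# Toy stand-in for the real metadata: one (path, label) pair per CXR.
# The real labels in the paper are COVID-19, normal, and pneumonia.
image_paths = [f"cxr_{i}.png" for i in range(20)]
labels = ["covid"] * 2 + ["normal"] * 9 + ["pneumonia"] * 9

# 75/25 split with stratified sampling so all three diagnoses keep the same
# proportions in D_train and D_test (minimizing class imbalance).
train_paths, test_paths, y_train, y_test = train_test_split(
    image_paths, labels, test_size=0.25, stratify=labels, random_state=42
)

print(len(train_paths), len(test_paths))  # 15 5
```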

    Figure 2: Pie chart illustrating the CXR sources

    3.2 Convolutional Neural Networks for Feature Extraction

Convolutional neural networks (CNNs) are increasingly being used in computer vision to detect, classify, localize, and segment normal and pathological features in medical images [17]. The use of CNNs increased widely following their application in the large-scale image recognition challenge (ILSVRC-2010). In this challenge, AlexNet [18] made use of a deep CNN and achieved the lowest detection error rate. This motivated researchers to make use of this technology to develop multidisciplinary high-end applications [19]. The CNN architecture can be modified significantly by manipulating the width, depth, and channels (activation maps) to further improve the performance of the model with appropriate generalization. Furthermore, the model's performance can be further improved through parametric weight sharing from one network to another. This technique facilitates the feature extraction procedure in most networks, eventually reducing the computational and training cost [20,21]. Following the successful implementation of AlexNet, numerous other CNNs were developed. In the following sections, the advantages and limitations of each CNN are discussed.

    3.3 Inception

The Inception architecture is designed around a novel module ideology. This network architecture is trained by widening layers to increase the depth of the network with few computational parameters. There are two versions of the module: a naive version and a dimensionality-reduced version. The Inception module consists of three levels. The bottom level of the module feeds into four different layers stacked by width. The intermediate layers extract spatial information individually and correlate with each layer. The top layer concatenates all the intermediate layers' feature maps to maintain a hierarchy of features and improve the perceived performance of the network [22].

    3.4 VGG-Nets

After Inception, VGG networks were developed by stacking sequential convolutional layers with pooling layers. The sequential depth of the models ranges from 11 to 19 layers. The appropriate use of max-pooling layers in the 16- and 19-layer VGG-Nets is essential for spatial sub-sampling and the extraction of generic features in the rearmost layers. VGG-Nets use small receptive fields of 5x5 and 3x3 to capture small features, eventually improving their detection precision. The generalizability of the model for highly correlated inputs can be further improved by finetuning the learning-rate schedules to decrease the learning rate [23].

    3.5 Res-Nets

The Res-Nets were developed to address the problem of vanishing gradients by imparting identity mappings in large-scale networks. They reformulate deep layers by aggregating learned activations from a prior layer to form a residual connection. This residual learning minimizes the problems of degrading and exploding gradients in deeper networks. These residual connections help carry learned activations from preceding layers, maintain a constant information flow throughout the network, and eventually reduce the computational cost [24-26].

    3.6 Inception-Res-Nets

    This network was inspired by the Inception network modules and identity mappings from ResNets.This method integrates dimensionality-reduced Inception modules with sequential residual connections hence increasing the learning capability of the network while reducing its computational cost.This provides better generalization ability when compared to various versions of the ResNet and Inception Networks [25].

    3.7 Xception

This network was proposed to compete with the Inception network and to reduce its flaws. The simultaneous mapping of spatial and cross-channel correlations allows for improved learning with small receptive fields and improves perceptive ability. The depth-wise separable convolutional layers enhance learning through detailed feature extraction. These networks are computationally less expensive and perform better than the Inception network [27].

    3.8 Dense-Nets

These densely connected CNNs are motivated by the residual connections of Res-Nets and impose long-chained residual connections to form dense blocks. In Dense-Nets, for N layers, there are N(N+1)/2 connections (including residual connections) that enhance the network's capability for extracting detailed features while reducing image degradation. The sequential dense and transition blocks provide a collection of knowledge and a bottleneck receptive field of 3x3, eventually improving computational efficiency. The finetuning of larger weights improves generalization in deeper networks with depths ranging from 121 to 201 layers [28].

    3.9 Mobile-Nets

Mobile-Nets were designed for mobile applications under constrained environments. The main advantage of this network is the combination of inverted residual layers with linear bottlenecks. The constructed deep network takes a low-dimensional input, which is then expanded to a higher-dimensional space. These elevated features are filtered via depth-wise separable CNNs and are further projected back onto a low-dimensional space using linear CNNs. This contribution reduces the need to access the main mobile application memory, thus providing faster execution through the use of cache memory [29].

    3.10 Nas-Nets

    Nas-Nets make use of convolution cells by learning from distinct classification tasks.The design of this network is based on a reduced depth-wise stacking of normal cells, hence providing an appropriate search space by decoupling a sophisticated architectural design.This adaptability of Nas-Nets enables it to perform well even on mobile applications.The computational cost is significantly reduced, and its performance can be improved by enhancing the depth [30].

    4 Deep Stacked Ensemble Method

    This deep-stacked ensemble method was evaluated by classifying COVID-19 database inputs into a tri-class and a binary class, as shown in Fig.3.

    Figure 3: The complete methodology of the deep-stacked ensemble method

Samples from the COVID-19 dataset were first pre-processed to a resolution of 224×224×3. These pre-processed images were then fed into a variety of deep networks that use different paradigms to extract features from latent dimensions. The extracted feature vectors are then evaluated, and the best-performing models are selected to form a stacked ensemble. The COVID-19 class is given more weight in this ensemble, which was assessed on both tri-class and binary classification tasks.
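A minimal sketch of this feature-extraction step is shown below. The use of tf.keras and of ResNet50 as the backbone is an illustrative assumption (the paper evaluates thirteen different backbones); only the 224×224×3 input resolution and the idea of extracting latent feature vectors follow the text.

```python
import numpy as np
import tensorflow as tf

# Pre-trained backbone without its classification head; the pooled
# bottleneck activations act as the latent feature vector described above.
backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3),
)

def extract_features(images):
    """images: float array of shape (batch, 224, 224, 3) with values in [0, 255]."""
    x = tf.keras.applications.resnet50.preprocess_input(images)
    return backbone.predict(x, verbose=0)

# Toy usage: two random stand-in "CXR" images resized to 224x224x3.
dummy = np.random.rand(2, 224, 224, 3).astype("float32") * 255.0
features = extract_features(dummy)
print(features.shape)  # (2, 2048) for ResNet50
```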


    4.1 Finetuning of Neural Networks

Deep learning algorithms can detect pathology from bio-medical imaging with human-level precision. CNNs provide numerous advantages for feature detection in medical imaging. There are two methods that can be used to design neural architectures for medical imaging. The first method involves designing a novel architecture, or overhauling an existing architecture, and training it end-to-end. The second method involves model finetuning, by either transferring the weights of a pre-trained model (transfer of weights) or retraining an existing pre-trained architecture.

The training of an end-to-end CNN requires proper initialization, which can be computationally expensive. On the other hand, the transfer of weights from pre-trained models for a similar problem statement can be useful to reduce the computational cost. However, such models may not extract the relevant invariances if the class samples in the problem statement have never been seen during training. For example, a network pre-trained on ImageNet may not be able to extract the invariances in CXRs if these samples were never seen or trained on. This means that the model may end up capturing unwanted features on the CXR, leading to an inaccurate classification. In order to overcome this problem, the model is fine-tuned to obtain the appropriate features. Fine-tuning of the model is extremely important in medical imaging when the sample size is small, leading to class imbalance [31]. Hence, the existing models, from VGG-Nets to Dense-Nets, were all finetuned to extract invariant features and discriminate the COVID-19 class from the remaining classes. The fine-tuning of the individual models was performed as per Algorithm 1. The major parameters considered for finetuning in our methodology were the learning schedules and batch sizes. The algorithms were finetuned by constricting the noise caused during the training process to reduce the risk of misleading the model if not trained with appropriate initializations.

The Dtrain and Dtest samples were fed into each model to capture latent feature vectors. A feedforward neural network was built to classify the extracted feature vectors, and all models were fine-tuned using Algorithm 1. The final extracted feature vector consisted of different three-dimensional shapes according to the model. These latent representations were then classified by attaching a dense layer consisting of 256 neurons, followed by dropout [32] and batch normalization [33] layers for regularization. The final layer consisted of a softmax activation layer with "c" neurons, whereby "c" represents the number of classes. The dropout percentage was set to 30%. A generalization assessment was performed for all individual models. ReLU [34] was used for the non-linearity of the model architecture for all layers except the final layer, whose output was produced by softmax. Glorot-normal was used for the initialization of most of the layers [35]. These initializations with appropriate activations resulted in the extraction of intricate, deep features.
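A minimal Keras sketch of this classification head is given below. The 256-neuron dense layer, 30% dropout, batch normalization, Glorot-normal initialization, ReLU non-linearity, and c-way softmax follow the text; the global-average-pooling step, the exact ordering of dropout and batch normalization, the choice of DenseNet121 as an example backbone, and the Adam optimizer are assumptions for illustration.

```python
import tensorflow as tf

def build_classifier_head(backbone, num_classes):
    """Attach the dense classification head described above to a backbone."""
    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = backbone(inputs)                                   # 3-D feature maps
    x = tf.keras.layers.GlobalAveragePooling2D()(x)        # collapse to a vector
    x = tf.keras.layers.Dense(
        256, activation="relu", kernel_initializer="glorot_normal"
    )(x)
    x = tf.keras.layers.Dropout(0.30)(x)                   # 30% dropout
    x = tf.keras.layers.BatchNormalization()(x)            # regularization
    outputs = tf.keras.layers.Dense(
        num_classes, activation="softmax",
        kernel_initializer="glorot_normal",
    )(x)                                                   # "c" class neurons
    return tf.keras.Model(inputs, outputs)

backbone = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)
model = build_classifier_head(backbone, num_classes=3)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```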

All models were carefully finetuned, and their performance was evaluated using various performance metrics. The generalizations provided by the finetuned models are summarized in Tab. 1. All the models performed well and had a similar overall performance (Tab. 2). The classwise performance of the models is also summarized in Tab. 2. The classes are coded as C-0, C-1, and C-2, indicating COVID-19, normal, and pneumonia, respectively.

In the design of medical diagnostic prediction models, receiver operating characteristic (ROC) analysis is essential for analyzing model performance. The area under the curve (AUC) of the ROC of a classifier determines the diagnostic stability of the model. The AUC-ROC curve is insensitive to alterations in the individual class distributions [36]. An ROC curve for each model was therefore plotted, as shown in Fig. 4. The feature extraction ability of the models varied widely, as not all models were capable of recognizing features pertaining to COVID-19 lesions.

A prediction model for medical imaging needs to have a high sensitivity and specificity. A clinically useful COVID-19 model based on CXR needs to be able to differentiate COVID-19 from other infections. However, distinguishing CXR lesions caused by COVID-19 from those caused by other infections can be quite challenging. CAMs were therefore applied to all CXR input images [37]. CAMs apply global average pooling to the bottleneck activations of CNNs and provide a visual understanding of the discriminative image regions and/or the region of interest. CAMs provide a visual illustration, through heat maps, of the features extracted by the models to make predictions. Therefore, CAMs provide a clear understanding of whether the acquired features are distinctive of a COVID-19 lesion, as illustrated in Fig. 5.
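A minimal sketch of the CAM computation, following the global-average-pooling formulation of [37], is shown below. The layer name in the usage comment and the assumption that the target softmax layer is connected directly to the pooled convolutional channels are illustrative; for heads with intermediate dense layers, a gradient-based variant would be needed instead.

```python
import numpy as np
import tensorflow as tf

def class_activation_map(model, conv_layer_name, image, class_index):
    """Weighted sum of the last conv feature maps, where the weights are the
    softmax-layer weights of the target class (Zhou et al. [37])."""
    conv_layer = model.get_layer(conv_layer_name)
    cam_model = tf.keras.Model(model.inputs, [conv_layer.output, model.output])

    feature_maps, _ = cam_model(image[np.newaxis, ...])
    feature_maps = feature_maps[0].numpy()                  # (h, w, channels)

    # Weights connecting each globally pooled channel to the target class neuron.
    class_weights = model.layers[-1].get_weights()[0][:, class_index]

    cam = np.tensordot(feature_maps, class_weights, axes=([2], [0]))
    cam = np.maximum(cam, 0.0)                              # keep positive evidence
    return cam / (cam.max() + 1e-8)                         # normalize to [0, 1]

# Hypothetical usage: heat map for the COVID-19 class (index 0) of one image.
# cam = class_activation_map(model, "conv5_block16_concat", cxr_image, 0)
```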

    The CAMs analysis shows that some of the models extract the peripheral and bilateral ground-glass opacities while some of the other models also extracted the rounded morphology typical of COVID-19 lesions [38].Since both features were deemed essential for an accurate diagnosis, the models that provided the highest generalization and extracted different features according to the CAMs analysis were used to develop the neural model averaging or neural stacked ensembles models.

    Table 1: Variables used with their description

    Table 2: Individual models performance

    Figure 4: AUC ROC curves obtained from the fine-tuned neural networks

Figure 5: Class activation maps (CAMs) obtained from the finetuned neural networks: (a) Original CXR, (b) CAMs of VGG-16, (c) CAMs of VGG-19, (d) CAMs of InceptionV3, (e) CAMs of ResNet50, (f) CAMs of ResNet101, (g) CAMs of ResNet152, (h) CAMs of Xception, (i) CAMs of InceptionResNets, (j) CAMs of MobileNet, (k) CAMs of NasNetMobile, (l) CAMs of DenseNet121, (m) CAMs of DenseNet169, (n) CAMs of DenseNet201

    5 Model Averaging

Model averaging is the process of averaging the outcomes of a group of networks trained on a similar task, or of the same model trained with different parameters. Model averaging improves the generalization of the models by aggregating their predictions. The generalization for the model was obtained by minimizing the loss during stochastic optimization using Eq. (1), whereby x and y are the features and ground-truth class labels of a particular data distribution. If f_n is the n-th neural architecture that predicts the class label for a given feature set (where n = 1, 2, ..., N), the mean squared error loss function can be minimized as follows:
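The display equation referenced here did not survive extraction; a plausible reconstruction of Eq. (1), assuming the standard model-averaging formulation in which the ensemble prediction is the unweighted mean of the N member outputs, is:

\[
\mathcal{L} = \mathbb{E}_{(x,y)}\Big[\big(y - f_N(x)\big)^{2}\Big],
\qquad
f_N(x) = \frac{1}{N}\sum_{n=1}^{N} f_n(x)
\tag{1}
\]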

where f_N represents the final neural architecture.

Similarly, weights can be assigned to individual models based on their prediction performance. These weights are then applied to the corresponding models to obtain an aggregated generalization. This is known as weighted model averaging. In plain model averaging, all models are treated equally, whereas weighted model averaging assigns each network a weight according to its individual performance. This means that weighted model averaging gives more importance to the required models and discounts the poorly performing ones.
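The corresponding display equation is also missing; assuming normalized weights, weighted model averaging presumably takes the form:

\[
f_W(x) = \sum_{n=1}^{N} W_n\, f_n(x),
\qquad
\sum_{n=1}^{N} W_n = 1
\]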

where W_n are the weights of the n-th model (n = 1, 2, 3, ..., N).

The generalization provided by a committee of neural models improves when compared to that of model averaging and weighted model averaging. Hence, the models were stacked to improve the generalization ability of the model.

    6 Stacked Ensembles

The stacked ensemble integrates or groups different models to provide aggregated generalization by mapping the output predictions onto a logit function. Instead of assigning averaged weights to the grouped models, logistic regression or a multi-class logit is applied to map the predictions. Therefore, the predictions were gathered, and either a logistic regression was applied to them or an end-to-end neural model that applies a softmax non-linearity as the final activation was built [39,40]. The generalization improvements provided by the stacked ensemble (using neural networks) are mathematically described as follows.

Each network was first considered to be a function that makes a prediction for a given input x, where the true function is T(x) and the approximated function is f_i(x) ∀ i = 1, 2, 3, ..., n. Suppose,

where r_i is the generalization error ∀ i = 1, 2, 3, ..., n, whereby n represents the number of neural networks in the ensemble.

So, the average error across the individual networks can be estimated as follows:

The ensemble obtained by grouping the variant networks is presented in the following equations:

The estimated error obtained by stacking these ensembles is:

    Suppose,

    From Eqs.(13) and (8)
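The display equations of this derivation did not survive extraction, and the original numbering (including Eqs. (8) and (13) referenced above) could not be recovered. A plausible reconstruction of the standard committee-of-networks argument implied by the surrounding text is:

\[
f_i(x) = T(x) + r_i,
\qquad
E_{\mathrm{avg}} = \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big[r_i^{2}\big],
\]
\[
f_{\mathrm{ens}}(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x),
\qquad
E_{\mathrm{ens}} = \mathbb{E}\bigg[\Big(\frac{1}{n}\sum_{i=1}^{n} r_i\Big)^{2}\bigg],
\]
and, assuming zero-mean, mutually uncorrelated errors (\(\mathbb{E}[r_i r_j] = 0\) for \(i \neq j\)),
\[
E_{\mathrm{ens}} = \frac{1}{n}\, E_{\mathrm{avg}}.
\]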

If the individual networks are uncorrelated with one another, the generalization error of the stacked ensemble is reduced by a factor of n relative to the original generalization attained by the individual networks. However, in most scenarios, the generalization errors are correlated, which increases the generalization error to a certain extent.

To understand this scenario, the case r_ij ≠ 0 (i.e., correlated errors, with r_ij = E[r_i r_j]) is considered.
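The display equation that followed is also missing; with correlated errors it presumably becomes:

\[
E_{\mathrm{ens}} = \frac{1}{n}\, E_{\mathrm{avg}} + \varepsilon
\]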

This constant 'ε' is an additional error caused by the covariances underlying the predictions of the individual networks.

With this knowledge, it is clear that stacked ensembles can outperform single networks in terms of generalization. As a result, six different neural network committees were formed by combining different numbers of the finetuned neural networks, as described in Tab. 3, ranging from 2 to 13 networks (all). These ensemble networks were evaluated using the standard classification metrics. A small neural architecture was attached to the committee of networks as the fully connected layer. This fully connected network consists of 16 neurons with a dropout of 30% for regularization. The final activations were passed through a softmax non-linearity consisting of three neurons describing the class predictions pertaining to each individual class. The results obtained by the proposed stacked ensembles are described in detail in the next section.
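A minimal Keras sketch of the stacking head described above is given below for a two-member committee (an Ensemble-1-style pairing). The 16-neuron fully connected layer, 30% dropout, and 3-way softmax follow the text; stacking on the concatenated softmax outputs of the frozen base models (rather than on their penultimate features), and the Adam optimizer, are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

def build_meta_learner(num_members, num_classes=3):
    """Small stacking head: maps the concatenated softmax outputs of the
    frozen base models to the final class prediction."""
    inputs = tf.keras.Input(shape=(num_members * num_classes,))
    x = tf.keras.layers.Dense(16, activation="relu")(inputs)   # 16-neuron FC layer
    x = tf.keras.layers.Dropout(0.30)(x)                       # 30% dropout
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Toy usage: stack the predictions of two hypothetical base models.
preds_a = np.random.dirichlet(np.ones(3), size=8)   # stand-in for member A
preds_b = np.random.dirichlet(np.ones(3), size=8)   # stand-in for member B
stack_input = np.concatenate([preds_a, preds_b], axis=1).astype("float32")
meta = build_meta_learner(num_members=2)
print(meta.predict(stack_input, verbose=0).shape)    # (8, 3)
```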

    Table 3: Different ensemble models used in the experiment

    6.1 Results

As mentioned, the generalization error obtained by a committee of neural networks is always less than that of a single neural network. Six variant committees of networks were selected and combined as described in Tab. 3. The classwise classification metrics utilized to understand the behavior of the specific COVID-19 class are illustrated in Tab. 4.

A comparative study was then performed to compare the performance of the proposed network with other existing models described in the literature (Tab. 5).

Our generic training algorithm facilitates the training process by achieving faster convergence with low computation (fewer iterations). During the training process, the batch size and learning rate are increased cautiously for each iteration to obtain a balanced criterion, as explained by Smith et al. [41]. As the noise during training can be reduced by properly choosing the batching parameters fed into the network, the learning rate and momentum of the optimizer were tuned to enable a faster search.

    Table 4: Performance of the various stacked ensemble models

    Table 5: Performance comparison of the novel stacked ensemble model with existing methods

The noise due to training is theoretically represented as follows in Eq. (18).
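Eq. (18) itself is missing from the extracted text; based on the batch-size/learning-rate analysis of Smith et al. [41] cited here, the noise scale is presumably of the form:

\[
g \;\approx\; \frac{\epsilon}{1 - m}\cdot\frac{N}{B}
\tag{18}
\]

where \(\epsilon\) is the learning rate, \(m\) the momentum coefficient, \(N\) the number of training samples, and \(B\) the batch size.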

Here we assumed a constant momentum. A training algorithm was developed to conceptualize the noise constraint. Although a decaying learning rate can decrease the noise, it gradually increases the computational time for training. On the other hand, lowering the batch size can also reduce noise but comes at the cost of lowering the generalizing capacity of the model. These problems were overcome by developing an algorithm that increases the batch size at specified iterations while cautiously increasing the learning rate, as follows. The algorithm was first iterated for 16278 steps (iteration 1), whereby the learning rate was set to 10^-4 and 15 samples were sent as a batch at a time. In the next iteration (iteration 2), the batch size was increased by 50%, and the learning rate was increased tenfold. In order to maintain a consistent trade-off between generalization and faster convergence, the batch size was then increased to 150% of the initial batch size, and the learning rate was tuned as per the preceding iteration. During the experimentation, it was found that the proposed training procedure led to faster convergence by training for only a few steps (approximately 20 epochs).
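A minimal sketch of this staged schedule is given below. The stage-1 step count, initial learning rate of 10^-4, and initial batch size of 15 follow the text; the step counts of the later stages, the interpretation of the third stage's batch size as 150% of the initial value, the momentum value, and the use of Keras with recompilation per stage (which resets optimizer state) are assumptions.

```python
import tensorflow as tf

# Staged (steps, batch_size, learning_rate) schedule described above.
STAGES = [
    {"steps": 16278, "batch_size": 15, "lr": 1e-4},  # iteration 1
    {"steps": 16278, "batch_size": 22, "lr": 1e-3},  # iteration 2: batch +50%, lr x10
    {"steps": 16278, "batch_size": 22, "lr": 1e-3},  # iteration 3: ~150% of initial batch
]

def run_schedule(model, x_train, y_train):
    """Train one stage at a time, raising batch size/learning rate instead of decaying the lr."""
    for stage in STAGES:
        # Fresh optimizer per stage; momentum held constant, as assumed in the text.
        model.compile(
            optimizer=tf.keras.optimizers.SGD(learning_rate=stage["lr"], momentum=0.9),
            loss="categorical_crossentropy", metrics=["accuracy"],
        )
        steps_per_epoch = max(1, len(x_train) // stage["batch_size"])
        epochs = max(1, stage["steps"] // steps_per_epoch)
        model.fit(x_train, y_train, batch_size=stage["batch_size"],
                  epochs=epochs, verbose=0)
```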

Appropriate training with fine-tuning of the ensemble was therefore critical to obtain these insightful outcomes. The final Ensemble-6 model had the highest performance when compared with the other methods, with an accuracy score of 99.175%. Ensemble-1 and Ensemble-2 attained accuracies of 98.487% and 98.762%, respectively. When taking into consideration only the COVID-19 class, the precision rate was at least 97.674%, but the recall rate was lower. The highest and lowest recall rates were 89.795% and 69.387%, obtained by Ensemble-6 and Ensemble-4, respectively. However, due to the small sample of the COVID-19 class in our study, it was difficult to extract additional invariant features to improve the performance of the model further.

    6.2 Limitations

In this study, we observed that the stacked ensemble was slightly inefficient when a poor-performing model was included. The DenseNet-201 model evaluations were not always finetuned correctly, and the network depth was not always appropriate, leading to a high generalization error. A single model could not always capture all of the COVID-19 features derived from the individual models. The ensemble method offers more generalization, but the combination of multiple models increases the computational cost, which makes large ensembles (such as Ensemble-6) impractical for small-scale computational systems. As a result, in real-world scenarios, small, quick, and efficient models such as Ensemble-1 and Ensemble-2 are advantageous. The progression of the virus can be visualized better on chest CT axial images. However, there is a chance of missing disease progression on CXR [38], which could be dangerous. Therefore, future studies should focus on the development of models that can predict disease progression on CXR.

    7 Conclusion

In this study, various COVID-19 classification models were evaluated and compared using different classification metrics. Furthermore, a learning framework for finetuning these models was proposed, and their bottleneck activations were visualized using CAMs. The AUC-ROC curves were closely examined, and the output of each class was illustrated visually. These finetuned models were then stacked to outperform the previous models and provide a broad range of generalization. The ensemble models achieved an accuracy score of 97.66% in the worst-case scenario. Even after finetuning for class imbalance, the models were found to have a high generalization ability. The lowest error rate, obtained by the best-performing model built by stacking all the finetuned models, was 0.83%. The stacked ensembles method improved the performance of the model and could therefore be used to improve the prediction accuracy of diagnostic models in medical imaging.

Acknowledgement: The authors extend their appreciation to King Saud University for funding this work through Researchers Supporting Project number RSP-2021/305, King Saud University, Riyadh, Saudi Arabia.

    Funding Statement:The research is funded by the Researchers Supporting Project at King Saud University, (Project# RSP-2021/305).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
