
    Efficient Deep-Learning-Based Autoencoder Denoising Approach for Medical Image Diagnosis

    2022-03-14
    Computers, Materials & Continua, March 2022 issue

    Walid El-Shafai, Samy Abd El-Nabi 1,2, El-Sayed M. El-Rabaie, Anas M. Ali 1,2, Naglaa F. Soliman, Abeer D. Algarni and Fathi E. Abd El-Samie

    1 Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952, Egypt

    2 Alexandria Higher Institute of Engineering & Technology (AIET), Alexandria, Egypt

    3 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, 84428, Saudi Arabia

    Abstract: Effective medical diagnosis is dramatically expensive, especially in third-world countries. Pneumonia is one of the common diseases, and because of the remarkable similarity between its types and the limited number of medical images for recent pneumonia-related diseases, the medical diagnosis of these diseases is a significant challenge. Hence, transfer learning represents a promising solution by transferring knowledge from generic tasks to specific tasks. Unfortunately, experimentation with different transfer learning models does not achieve satisfactory results. In this study, we suggest an automatic detection model, namely CADTra, to efficiently diagnose pneumonia-related diseases. This model is based on classification, denoising autoencoder, and transfer learning. Firstly, pre-processing is employed to prepare the medical images. It depends on an autoencoder denoising (AD) algorithm with a modified loss function, based on a Gaussian distribution for the decoder output, to maximize the chances of recovering the inputs and clearly demonstrate their features, in order to improve the diagnosis process. Then, classification is performed using a transfer learning model and a four-layer convolution neural network (FCNN) to detect pneumonia. The proposed model supports binary classification of chest computed tomography (CT) images and multi-class classification of chest X-ray images. Finally, a comparative study of the classification performance with and without the denoising process is introduced. The proposed model achieves precisions of 98% and 99% for binary classification and multi-class classification, respectively, with different ratios of training and testing data. To demonstrate the efficiency and superiority of the proposed CADTra model, it is compared with some recent state-of-the-art CNN models. The achieved outcomes prove that the suggested model can help radiologists to detect pneumonia-related diseases and improve the diagnostic efficiency compared to the existing diagnosis models.

    Keywords: Medical images; CADTra; AD; CT and X-ray images; autoencoder

    1 Introduction

    Pneumonia is defined as an infection inside the lungs caused by bacteria, viruses, or other germs. Pneumonia is one of the main causes of death in children and elderly people worldwide [1]. Therefore, pneumonia threatens human life if it is not diagnosed promptly. Symptoms associated with pneumonia include a combination of productive or dry cough, fever, difficulty of breathing, and chest pain [2].

    Due to the similarity of symptoms associated with pneumonia and COVID-19, identifying them becomes complicated. Because of the mutations of the coronavirus and the continuous increase in the number of infected people, the COVID-19 pandemic is still widespread. The most critical step in confronting this virus is the effective and continuous examination of patients infected with pneumonia and COVID-19, so that they can receive treatment and isolate themselves to reduce the speed of spreading of the virus.

    The method used in the screening and detection of coronavirus is the polymerase chain reaction (PCR) test [1], which can detect SARS-CoV-2 RNA in respiratory system samples collected by various means, such as swabs of the oropharynx and the nose. The PCR test is considered as a gold standard for its high sensitivity, but it is time-consuming, expensive, and extremely complex. Alternatively, radiography examination of chest images, such as X-rays and CT scans, helps to discover infected cases quickly and isolate them to minimize the spread of infection. In recent studies, it was found that patients show differences and abnormalities in chest radiography, through which it is possible to identify those infected with the COVID-19 virus [2,3]. Some researchers have even suggested that chest radiography is a fundamental tool for detecting coronavirus in areas that suffer from the pandemic spread, because it is faster and available in modern healthcare systems [4]. Radiological images also show high sensitivity to the infection [5]. One of the serious problems faced when dealing with these images is the need for radiologists to interpret them, because visual features can be unclear. Therefore, computer diagnostic systems will greatly help specialists in interpreting images, as they are, by far, faster and more accurate in detecting cases of pneumonia and COVID-19.

    Deep convolution learning methods for learning feature representations of data in large dimensions have been successfully implemented. The learned features display the non-linear properties seen in the data. Unsupervised or supervised learning is a part of deep network preparation for feature extraction and classification. Noise reduction is required to analyze images properly. So, a study of methods to reduce noise is presented, because denoising is a classic problem in computer vision. Various techniques are used in denoising of medical images, such as the stacked denoising autoencoder (SDAE) model, which is used for pre-training of networks [6]. Researchers suggested a new efficient online model for variational learning of finite and infinite Gamma distributions [7]. It depends on the characteristics of the Gamma distribution, online knowledge scalability, and the performance of variational inference. Initial experiments on newly developed databases of COVID-19 images were carried out with a feed-forward strategy, CNNs, and image descriptors of texture features [8]. A modern class-decomposition-based CNN architecture was used to increase the efficiency of classification of medical images, with the DeTraC method for transfer learning and class decomposition. This method was presented in [9].

    The developments and progress in deep learning (DL) have led to new models for medical image processing [10-12]. Autoencoders have been used to reduce the noise in images [13,14], as they perform better than traditional methods. Comparing the autoencoder with traditional methods for noise reduction in medical images, we find that the autoencoder gives the same performance as the other methods in the case of feature linearity. On the other hand, with feature non-linearity, the traditional methods fail to reconstruct the image from noise. Automatic noise reduction devices (autoencoders) built with CNNs can effectively reduce noise in images, as they can exploit high spatial correlations. To solve the problem of data paucity in medical images, transfer learning is used to transfer what the model has learned from natural images in ImageNet competitions to medical image classification, saving data and training time.

    This paper presents a model for the automatic detection of pneumonia and COVID-19 in a multi-class classification (Pneumonia, COVID-19, and Normal) scenario for chest X-ray images and a binary classification (COVID-19 and Normal) scenario for chest CT images. This is performed using a deep CNN, namely CADTra, for better performance in terms of accuracy, precision, recall, f1-score, confusion matrix, and receiver operating characteristic (ROC) curve. It is applied to X-ray and CT datasets with an autoencoder model to denoise medical images and obtain higher evaluation metrics. Different types of noise are investigated, including Gaussian, salt-and-pepper, and speckle noise with different variances. The CADTra model works to reduce noise and achieve good performance in extracting different features from medical images. This is clear in the obtained values of Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). Then, the images resulting from the denoising process are used to improve classification efficiency using transfer learning. Different transfer learning models, including AlexNet [15], LeNet-5 [16], VGG16 [17], and Inception (Naïve) V1 [18], are used for DL, whose purpose is to retrain all weights of the pre-trained network from the first to the last layer. DenseNet121, DenseNet169, DenseNet201 [19], ResNet50, ResNet152 [20], VGG16, VGG19 [17], and Xception [21] are used for fine-tuning, whose purpose is to train more layers by tuning the learning weights, until an important performance boost is achieved. To test the rigidity of the proposed model, FCNN full training is used. Moreover, different ratios of training and testing are applied (80:20, 70:30, and 60:40).

    This empirical work leads to the following significant results:

    1) Differentiating between pneumonia, COVID-19, and normal cases, depending on multiple sources of medical imaging. This is performed on datasets of X-ray and CT scans.

    2) Proposing deep training of CNN transfer learning models and the FCNN model without using CADTra.

    3) Preparing and training CADTra on different datasets, by adding noise to the images and then removing it with AD.

    4) Using CADTra to enhance the same deep training and the diagnostic and classification functions of the knowledge transfer models.

    5) Comparing the outcomes of the models with varying training/testing ratios before and after the use of CADTra, to assess their effect on digital medical image categorization.

    6) Utilization of different state-of-the-art models from the literature for detailed comparisons and experimental research.

    The rest of this research work is organized as follows. Related work is presented in Section 2. The proposed model for classification (CADTra) with autoencoder denoising is given in Section 3. Results are presented in Section 4, and the concluding remarks are given in Section 5.

    2 Related Work

    Having an accurate diagnosis and identifying the cause of a disease and its complications quickly remain important tasks for physicians. This is needed to minimize patient distress. Indeed, image processing and deep learning algorithms in biomedical image analysis and processing have shown outstanding results. This section presents a short overview of a few significant literature contributions.

    A CNN was used to create a decompose, transfer, and compose (DeTraC) model for diagnosing COVID-19 based on X-ray scans [22]. The model dealt with any irregularity in the image dataset by identifying its class using a class decomposition mechanism. Based on a CT dataset, a dual-branch combination network (DCN) method was developed to map the classification likelihood from the slice level to the individual level to better recognize COVID-19 [23]. This model achieved an accuracy of 96.74% using an internal validation dataset, and 92.87% using an external dataset. Another model was devised depending on deep learning, using CT datasets to detect coronavirus. It also uses a stacked autoencoder to improve the performance of the entire model by extracting the features of the dataset and achieving better results [24]. Capsule networks have been proposed as new artificial neural networks, whose goal is to detect COVID-19 from chest X-ray images [25]. These networks were suggested for quick and accurate diagnosis of COVID-19. For obtaining results in a short time, they were used for binary and multi-class classification.

    Depending on artificial intelligence, especially deep learning, researchers evaluated eight pre-trained CNN models (VGG16, AlexNet, GoogleNet, SqueezeNet, MobileNet-V2, Inception-V3, ResNet34, and ResNet50) on chest X-ray images, and compared between the models based on several important factors [26]. The ResNet34 model achieved the best performance, with an average accuracy of 98.33%. A new CoroDet model was proposed for the automatic detection of COVID-19, based on a CNN and using X-ray and CT images [27]. It was used for binary as well as multi-class classification. The latter was performed when attempting to classify images into three and four categories. This model has good performance. A CNN was used to classify a set of X-ray images by extracting deep features from each [28]. Previously-trained models were used, such as ResNet18, ResNet50, ResNet101, VGG16, and VGG19. The support vector machine (SVM) classifier with different kernels, namely linear, quadratic, cubic, and Gaussian, was also used. Deep features extracted with ResNet50 and the SVM classifier achieved an accuracy of 94.7%, which was the highest among the results. A new multi-tasking model to identify pneumonia diseases, especially COVID-19, was devised based on DL. It works on CT scans and performs three tasks: classification, reconstruction, and segmentation [29]. The goal of this model is to improve the performance in both classification and segmentation, especially for small datasets.

    The multiple-kernels ELM (MKs-ELM-DNN) [30] is one of the methods used to identify coronavirus, based on the DenseNet201 structure. It was previously trained on a set of CT images, and it uses the extreme learning machine (ELM) classifier. Panahi et al. [31] introduced a new method for identifying people infected by COVID-19 using X-ray images. It is called the fast COVID-19 detector (FCOD), and it depends on the Inception architecture, as it reduces the number of convolution layers to reduce the computational cost and time and enable the model to be used in hospitals for assisting radiology specialists. The COVID-Screen-Net was used in the multi-category classification of X-ray image datasets, and it works to determine the distinctive features of images. These features were drawn with Grad-CAM, and the dataset was collected through hospitals and data available on the web [32].

    In this paper, we create a CADTra model based on a CNN, in addition to using an autoencoder to reduce noise in images, in order to achieve an outstanding performance and a high diagnostic accuracy. The main objective is to automatically identify the type of the disease in the shortest possible time. The proposed model is superior to other models published in previous studies, as seen in the results section.

    3 Proposed CADTra Model

    In this work, we propose an automatic model, using a type of deep CNN called CADTra, to detect and identify persons infected with pneumonia and COVID-19, as shown in Fig. 1. The proposed CADTra model consists of three stages: a pre-processing stage, an autoencoder denoising stage, and a classification stage using a CNN. The pre-processing stage is responsible for reading the dataset and for augmentation of images. Since the sizes of the images are different, because they have been collected from more than one source, this stage also resizes the images to 224×224×3. These numbers refer to the length, width, and channels of the image, and they are adopted for all images (X-ray and CT) to avoid overfitting and to prepare the images for classification by the CNN architecture. In the denoising autoencoder stage, various noise types (Gaussian, speckle, and salt and pepper) are treated to reduce their effect on the classification process. In the final classification stage, the FCNN and transfer learning models are used for classification.
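    As an illustration of the resizing step, a minimal pure-Python nearest-neighbor resize is sketched below. The interpolation method is an assumption, since the paper does not state which one is used:

```python
def resize_nearest(img, out_h=224, out_w=224):
    """Resize an image, given as a nested list [H][W][C], to out_h x out_w
    using nearest-neighbor sampling (channels are carried over unchanged)."""
    in_h, in_w = len(img), len(img[0])
    return [
        [img[(y * in_h) // out_h][(x * in_w) // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]
```

    In practice this step would be performed with a library call (e.g. image resizing in Keras or OpenCV); the sketch only shows the index mapping.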

    Figure 1: General steps of the proposed CADTra model

    3.1 Network of the Denoising Autoencoder

    An autoencoder consists of an encoder and a decoder. Each of them contains three layers, in addition to the batch normalization stage at the beginning of the model, as shown in Fig. 2. There are three convolution layers and three transposed-convolution layers (ConvTrans) in this autoencoder. An autoencoder extracts features from images and reshapes them. The encoder is made of an input layer, which compresses the image to extract all strong features and eliminate weak ones. The second component of the encoder is the neural network, which is usually shrunk to have the smallest possible number of nodes. From the extracted features, a decoder reconstructs the image based on its composition and features. The general purpose of this autoencoder is denoising of images. The training process is carried out by comparing the resulting image with the original one and updating the weights to obtain the most similar image to the original one.

    Figure 2: The structure of the denoising autoencoder network layer

    In general, using a smaller number of layers and achieving a lower computational cost improve the performance of the image denoising process. Image denoising is performed using an eight-layer convolution autoencoder network. The dimensions of the input image are 224×224×3, and those of the encoder output are 224×224×32. This output is the input of the decoder, whose output size is 224×224×3. In the proposed denoising autoencoder network, the batch normalization layer and the convolution layers form the encoder. The kernel size of each layer is 3×3, and the numbers of convolution filters for layers 1, 2, and 3 are 128, 64, and 32, respectively. The ConvTrans layers and the output convolution layer form the decoder, and the numbers of convolution filters for layers 1, 2, and 3 are 32, 64, and 128, respectively. The stride of the convolution calculation is 1, and the same applies to the padding operation. The rectified linear unit (ReLU) is used as the activation function in every ConvTrans and convolution layer [33]. It can be expressed using the following function:

    f(x_i) = max(0, x_i), where x_i is an input value. During the training phase, the proposed denoising autoencoder model is trained to reduce the reconstruction error and increase the chances of recovering the inputs [34], since both the encoder and decoder are non-linear. It is trained by minimizing the loss function through backpropagation to select the strongest features. The loss of the autoencoder, which depends on the number of layers and the convolution filters in each layer, is given by:

    L = −Σ_{i=1}^{n} log p(y_i | z), with p(y_i | z) = N(y_i; ŷ_i, σ²), which reduces to (1/2σ²) Σ_{i=1}^{n} (y_i − ŷ_i)² + (n/2) log(2πσ²),

    where y_i is the original input, ŷ_i is the reconstructed output, N(y_i; σ²) represents a Gaussian distribution for the decoder output with variance σ², n is the output dimension, and p(y_i | z) is the decoder distribution [35]. The denoising autoencoder distinguishes signals from noise and learns the features that capture the distribution of the training dataset, to allow the model to robustly recreate the output from a partially destroyed input.
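    Since the decoder variance σ² is fixed, minimizing this Gaussian negative log-likelihood is equivalent to minimizing the MSE up to a constant scale and shift. A small pure-Python check of that equivalence (the σ value is an arbitrary illustration):

```python
import math

def gaussian_nll(y, y_hat, sigma=0.1):
    """Negative log-likelihood of outputs y under N(y_hat, sigma^2)."""
    n = len(y)
    sq = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    return sq / (2 * sigma ** 2) + (n / 2) * math.log(2 * math.pi * sigma ** 2)

def mse(y, y_hat):
    """Mean square error between original and reconstructed outputs."""
    return sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat)) / len(y)

# Both losses rank reconstructions identically: gaussian_nll is a scaled,
# shifted version of mse, so gradient-based minimization finds the same optimum.
```

    This is why the AD can be trained with a plain MSE loss while still being interpreted through the Gaussian decoder distribution.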

    3.2 Transfer Learning and Full Learning Models

    The CNNs have been used in image classification to detect different types of pneumonia, especially COVID-19. In general, the CNN framework includes the following layers: batch normalization, input, convolutional, fully-connected (FC), pooling, dropout, dense, and output layers. It is well-known that the CNN framework can be trained end-to-end to permit selection and extraction of features, and ultimately, prediction or classification. In this work, several full-training and transfer learning models were utilized to compare and test the robustness and efficiency of our CNN model, before and after the proposed AD model, including the tuning of CNN frameworks such as AlexNet [15], LeNet-5 [16], VGG16 [17], and Inception (Naïve) V1 [18], used in DL. The DL scenario aims to retrain all the pre-trained network weights from the first to the last layer. The following CNN transfer learning models are also included: DenseNet121, DenseNet169, DenseNet201 [19], ResNet50, ResNet152 [20], VGG16, VGG19 [17], and Xception [21]. The pre-trained transfer learning models with fine-tuning have achieved outstanding performance in classifying CT and X-ray images [22]. Transfer learning is divided into three main scenarios, namely shallow tuning, deep tuning, and fine tuning. These models were used with fine tuning, which aims to train more layers by tuning the learning weights until a significant performance boost is achieved. A simple FCNN framework is designed to identify cases infected by coronavirus. It is composed of batch normalization, followed by four convolution layers consisting of 16, 32, 64, and 64 filters, successively, with a rectified linear unit (ReLU) and a kernel size of 3×3. It is also composed of two max-pooling layers with a pool size of 2, three FC layers with dropout probabilities of 0.22 and 0.15, using SoftMax as an activation function, and cross-entropy as a loss function in the classification layer. The comprehensive specifications of our suggested CNN classification framework are shown in Fig. 3.
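    A sketch of this FCNN in Keras follows. The placement of the pooling layers and the widths of the first two FC layers (128 and 64) are assumptions, since the text only fixes the filter counts, kernel size, pool size, and dropout rates:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fcnn(num_classes=3, input_shape=(224, 224, 3)):
    """Four-convolution-layer CNN classifier sketched from the description above."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.BatchNormalization(),
        # Four convolution layers with 16, 32, 64, 64 filters and 3x3 kernels
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=2),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=2),
        layers.Flatten(),
        # Three FC layers; dropout probabilities 0.22 and 0.15 as stated
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.22),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.15),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```

    With num_classes = 3, this covers the multi-class X-ray scenario; num_classes = 2 would cover the binary CT scenario.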

    Figure 3: The structure of the FCNN classification network

    3.3 The Overall Network

    Transfer learning and FCNN models are independently designed, trained, and used in the binary and multi-class classification performed on chest CT and X-ray datasets. Then, the convolution autoencoder network is designed to extract features from the images, reconstruct them, and train on them independently. Based on their structure, the convolution autoencoder network and the transfer learning models are used before and after the denoising stage to test the effect of AD on the classification process. Following this step, fine tuning is performed to improve the network performance in noise reduction and feature extraction. The whole network design can be explained as follows:

    Step 1: Constructing the transfer learning and FCNN models.

    Step 2: Utilization of the cross-entropy loss function to train the transfer learning and FCNN models. The trained weights are then saved.

    Step 3: Constructing the denoising autoencoder network.

    Step 4: Utilization of the mean square error (MSE) loss function to train the AD. The trained weights are then saved.

    Step 5: Reusing transfer learning and FCNN after AD.

    Step 6: Reorganizing the denoising autoencoder network, transfer learning, and FCNN to construct the composite network (CADTra).

    Step 7: Keeping the weights of the denoising autoencoder network unchanged in the image denoising process. The trained weights are then saved and used as input for transfer learning and FCNN.

    Step 8: Fine tuning of all overall network parameters, based on the weights of the denoising autoencoder network and transfer learning. The final proposed model (CADTra) is then saved.

    The above-mentioned network, which has been trained and whose weights have been maintained, is used to adjust the whole network. The composite network parameters have been designed and tested to obtain a more accurate performance and higher efficiency.

    4 Experimental Results and Comparative Analysis

    4.1 Dataset Description

    To accurately evaluate the performance of the proposed model, we obtained a total of 9201 X-ray images and 2762 CT images. The datasets are available through the links in [36-39]. The X-ray images have three categories (1161 COVID-19 images, 4240 pneumonia images, and 3800 normal images). The CT images have two categories, containing 1305 COVID-19 and 1457 normal images that are publicly available. These datasets have been compiled from various platforms and sources. It is noticeable that the X-ray dataset has categories of different sizes, meaning that it is not balanced. The COVID-19 images represent 12.5% of the total number of images, while the pneumonia and normal images represent 46.1% and 41.4%, respectively. This may lead to overfitting. In order to avoid this problem, dropout layers are adopted in the proposed model. On the other hand, the numbers of CT images in the two categories are close to each other, which means that this dataset is balanced. In this research, the dataset was divided with different proportions for training and testing: [80%:20%], [70%:30%], and [60%:40%]. The images were randomly selected to ensure that the proposed model works with high efficiency with different ratios.
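    The random train/test split with the stated ratios can be sketched in pure Python (real code would apply this per class to the lists of image files; the seed is an assumption for reproducibility):

```python
import random

def split_dataset(items, train_ratio, seed=0):
    """Shuffle items reproducibly and split them into train/test subsets."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```
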

    4.2 Data Augmentation

    One of the distinguishing factors for obtaining good classification performance is the dataset used. The classification performance depends heavily on the number of images. The CT dataset is small in size, and this may cause overfitting. An augmentation process was applied to the CT dataset to avoid this problem, as shown in Tab. 1. This includes transitions and changes in the images, such as changes in the image width and image rotation. Moreover, the brightness range and the application of augmentation to each training sample were investigated appropriately.

    Table 1: The employed data augmentation
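    A Keras augmentation configuration along the lines described above might look as follows. The numeric ranges are illustrative assumptions, not the published values from Tab. 1:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative settings covering the transformations mentioned in the text
# (width shifts, rotation, brightness range); the exact values are in Tab. 1.
augmenter = ImageDataGenerator(
    width_shift_range=0.1,        # horizontal shift, fraction of image width
    rotation_range=10,            # random rotation in degrees
    brightness_range=(0.8, 1.2),  # random brightness scaling
)
```
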

    4.3 Implementation Setup

    Binary and multi-class classification depend on a set of publicly available image datasets (chest X-ray and CT datasets). The dataset images were resized to 224×224 pixels to train the model. We set different batch sizes with different numbers of epochs. Samples for training and validation are assigned according to different ratios. To obtain accurate results, the Adam optimizer is used in both the classification models and the CNN autoencoder denoising. We used β1 = 0.9, β2 = 0.999 for optimization in the classification models, and β1 = 0.5, β2 = 0.999 in the autoencoder denoising algorithm. The adaptive learning rate (LR) was set to 0.00001 for all CNN classification models. It was decreased by a factor of 0.5 every 2 epochs to control the loss and validation loss. Early stopping with a patience of 4 epochs on the validation loss is adopted to obtain lower loss and higher accuracy. The LR is set to 0.0002 for the autoencoder algorithm to decrease noise. Epsilon is set to 10^-8, shuffling is enabled, and the verbosity level is 1. Since the medical images are normalized from 0 to 1, we used ReLU as the activation function in all layers and softmax for the output layer. After tuning all hyperparameters, the FCNN and CNN models achieved excellent performance in the classification of chest CT and X-ray images. The development and design of the proposed models are carried out on GPU machines, using Kaggle, which offers free access to NVIDIA TESLA P100 GPUs for notebook editors and 13 GB of RAM, running on Microsoft Windows 10 Professional (64-bit). Python 3.7 is used for simulation testing, and TensorFlow and Keras are used as the DL backend.
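    The learning-rate schedule described above, interpreting "decreased by 0.5" as multiplication by 0.5 every 2 epochs, can be sketched as:

```python
def scheduled_lr(epoch, base_lr=1e-5, factor=0.5, every=2):
    """Learning rate for a given epoch: base_lr halved every `every` epochs."""
    return base_lr * factor ** (epoch // every)
```

    In Keras this would typically be wrapped in a LearningRateScheduler callback, with EarlyStopping(patience=4) monitoring the validation loss.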

    4.4 Evaluation Metrics

    To evaluate the proposed CADTra and transfer learning models used in classifying the datasets (chest X-ray and CT images) and determining the type of disease, we studied the effect of AD on the model before and after its use. To ascertain the strength of the model, we split the dataset into different proportions for training and testing: (80%:20%), (70%:30%), and (60%:40%), successively. Since the proposed model (CADTra) is a feature-learning classification model, its output was tested and compared to those of the models that involve automated extraction of features. The suggested model outperforms all other models, as per the experimental findings. The models were evaluated by calculating accuracy, loss, precision [40], recall [40], f1-score [40], log loss [41], confusion matrix, precision and recall curve, and ROC curve. The denoising autoencoder model was also evaluated based on SSIM and PSNR [42]. These parameters are defined as follows:

    Accuracy = (TP + TN) / (TP + TN + FP + FN)
    Precision = TP / (TP + FP)
    Recall = TP / (TP + FN)
    f1-score = 2 × (Precision × Recall) / (Precision + Recall)
    Log loss = −(1/N) Σ y_T log(y_P)
    PSNR = 10 log10(MAX_I² / MSE)
    SSIM = ((2 μx μy + C1)(2 σxy + C2)) / ((μx² + μy² + C1)(σx² + σy² + C2))

    where TN is the true negative, TP is the true positive, FN is the false negative, and FP is the false positive. y_P refers to the predicted labels, y_T refers to the ground-truth (correct) labels, MAX_I refers to the maximum power of a signal or an image I, MSE is the mean square error evaluated pixel by pixel, μy is the average value of the second image, μx is the average value of the first image, σx is the standard deviation of the first image, σy is the standard deviation of the second image, and σxy = μxy − μx μy is the covariance. C1 and C2 are two variables used to avoid division by zero.
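    The classification and quality metrics above can be sketched directly from their definitions (pure Python; SSIM is omitted since, in practice, it is computed over local image windows):

```python
import math

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def psnr(mse, max_i=255.0):
    """Peak signal-to-noise ratio in dB for a given per-pixel MSE."""
    return 10 * math.log10(max_i ** 2 / mse)
```
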

    4.5 Comparison Results

    The performance and results of the proposed model, using CNN and transfer learning, were evaluated before and after using the AD to illustrate how it affects the results. The proposed model and the other transfer learning models showed better performance with AD, as shown in Tabs. 2 and 3. The FCNN model achieved average accuracy scores of 98.38% for X-ray images and 97.64% for CT images without the denoising autoencoder. For transfer learning, AlexNet with full training yields 95.63% on X-ray images and 95.12% on CT images. LeNet-5 with full training yields 93.47% on X-ray images and 90.77% on CT images, VGG16 with full training yields 95.47% on X-ray images and 95.66% on CT images, Inception Naïve V1 with full training yields 94.66% on X-ray images and 84.99% on CT images, DenseNet121 with pre-training yields 97.85% on X-ray images and 96.38% on CT images, DenseNet169 with pre-training yields 97.79% on X-ray images and 96.56% on CT images, DenseNet201 with pre-training yields 97.04% on X-ray images and 97.29% on CT images, ResNet50 with pre-training yields 97.74% on X-ray images and 94.58% on CT images, ResNet152 with pre-training yields 98.11% on X-ray images and 96.56% on CT images, VGG16 with pre-training yields 95.74% on X-ray images and 94.76% on CT images, VGG19 with pre-training yields 96.39% on X-ray images and 96.75% on CT images, and Xception with pre-training yields 94.99% on X-ray images and 85.35% on CT images without the denoising autoencoder.

    The use of AD to reduce noise and reveal important characteristics in medical images has achieved great success in enhancing the performance of the models, as shown in Tab. 3. The proposed FCNN model has achieved an average accuracy score of 98.42% on X-ray images and 98.34% on CT images. AlexNet with full training yields 96.56% on X-ray images and 96.13% on CT images, LeNet-5 with full training yields 95.20% on X-ray images and 92.64% on CT images, VGG16 with full training yields 97.27% on X-ray images and 96.87% on CT images, Inception Naïve V1 with full training yields 95.42% on X-ray images and 85.47% on CT images, DenseNet121 with pre-training yields 98.03% on X-ray images and 96.87% on CT images, DenseNet169 with pre-training yields 98.03% on X-ray images and 97.24% on CT images, DenseNet201 with pre-training yields 98.20% on X-ray images and 97.97% on CT images, ResNet50 with pre-training yields 98.03% on X-ray images and 95.03% on CT images, ResNet152 with pre-training yields 98.31% on X-ray images and 97.79% on CT images, VGG16 with pre-training yields 98.31% on X-ray images and 96.13% on CT images, VGG19 with pre-training yields 97.60% on X-ray images and 97.24% on CT images, and Xception with pre-training yields 95.31% on X-ray images and 90.80% on CT images.

    In general, the use of AD helps to increase the efficiency and performance of the models. These results cover accuracy, loss, precision, recall, f1-score, log loss, confusion matrix, accuracy and loss curves, precision and recall curves, and ROC curve. The denoising autoencoder model has also been evaluated in reducing noise of different types (Gaussian, salt and pepper, and speckle) and different variances, including 0.05, 0.10, 0.15, 0.20, and 0.25. The AD was also evaluated by calculating SSIM and PSNR, as shown in Tab. 5. It was found that Gaussian noise is the most severe type of noise.
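    The three noise models can be sketched in pure Python on pixel values normalized to [0, 1] (clipping to the valid range is an assumption about the pre-processing):

```python
import random

def add_gaussian(pixels, var=0.05, seed=0):
    """Additive zero-mean Gaussian noise with the given variance."""
    rng = random.Random(seed)
    sd = var ** 0.5
    return [min(1.0, max(0.0, p + rng.gauss(0.0, sd))) for p in pixels]

def add_salt_pepper(pixels, amount=0.05, seed=0):
    """Set a fraction `amount` of pixels to pepper (0) or salt (1)."""
    rng = random.Random(seed)
    return [rng.choice((0.0, 1.0)) if rng.random() < amount else p
            for p in pixels]

def add_speckle(pixels, var=0.05, seed=0):
    """Multiplicative speckle noise: p * (1 + n) with n ~ N(0, var)."""
    rng = random.Random(seed)
    sd = var ** 0.5
    return [min(1.0, max(0.0, p * (1.0 + rng.gauss(0.0, sd)))) for p in pixels]
```

    The listed variances (0.05 to 0.25) would be passed as the `var` (or `amount`) argument when building the noisy training pairs for the AD.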

    4.5.1 Classification Results of the FCNN Model and Transfer Learning Architectures Without the AD Model

    Tab. 2 displays the metrics used in evaluating the FCNN model and the transfer learning models on X-ray and CT images without AD, namely accuracy, loss, precision, recall, f1-score, and log loss, with different training-to-testing ratios of [80%:20%], [70%:30%], and [60%:40%]. The FCNN model gives the best results, while LeNet-5 with full training achieves the worst results on X-ray images and Inception Naïve V1 with full training achieves the worst results on CT images.
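As a hedged illustration of how the tabulated metrics are obtained, the sketch below computes accuracy, precision, recall, f1-score, and log loss from predicted labels and probabilities for the binary (CT) case in plain NumPy. The function name and the toy data are our own, not from the paper.

```python
import numpy as np

def classification_metrics(y_true, y_pred, y_prob, eps=1e-15):
    """Binary-classification metrics of the kind reported in Tabs. 2-3.

    y_true, y_pred: 0/1 labels; y_prob: predicted probability of the
    positive class (used only for log loss).
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = np.mean(y_pred == y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    # Log loss (binary cross-entropy), with probabilities clipped
    # away from 0 and 1 for numerical stability.
    p = np.clip(np.asarray(y_prob, dtype=float), eps, 1 - eps)
    log_loss = -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "log_loss": log_loss}

# Toy example: 8 test images (1 = COVID-19-positive, 0 = normal),
# 6 of which are classified correctly.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
y_prob = [0.9, 0.8, 0.7, 0.4, 0.2, 0.1, 0.3, 0.6]
print(classification_metrics(y_true, y_pred, y_prob))
```

The same computation extends to the multi-class X-ray case by averaging the per-class precision/recall scores.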

    4.5.2 Classification Results of the CADTra Model and Transfer Learning Architectures with the AD Model

    Tab. 3 presents the metrics used in evaluating the proposed FCNN and the transfer learning models on X-ray and CT images with AD, namely accuracy, loss, precision, recall, f1-score, and log loss, with different training-to-testing ratios of [80%:20%], [70%:30%], and [60%:40%]. The FCNN model gives the best results, while LeNet-5 (full training) gives the worst results on X-ray images, and Inception Naïve V1 (full training) achieves the worst results on CT images.

    Table 2: Comparison of the CNN architectures and the proposed FCNN algorithm without the AD algorithm on the X-ray images and CT scans

    In Tab. 4, simulation results of the proposed model using CNN on the X-ray and CT datasets, with and without AD, are presented using different evaluation metrics, including the confusion matrix, accuracy and loss curves, precision and recall curves, and the ROC curve. These results demonstrate the superior effect of AD in enhancing the efficiency of CNNs in the classification and diagnosis processes.

    Table 3: Comparison of the CNN architectures and the proposed FCNN algorithm with the AD algorithm on the X-ray images and CT scans

    Table 4: Simulation results of the FCNN model with and without the proposed AD algorithm on the X-ray and CT scan datasets (Full-Training)

    4.5.3 Denoising Results with the AD Model for Different Noise Variances

    Tab. 5 shows the results of the AD model on the X-ray and CT datasets, represented by calculating PSNR and SSIM according to Eqs. (9) and (10) for different types of noise (Gaussian, salt & pepper, and speckle) using different variance factors for each type: 0.05, 0.10, 0.15, 0.20, and 0.25.
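The three noise models and the two quality measures can be sketched as follows. This is an illustrative NumPy implementation under our own assumptions; in particular, the global-statistics SSIM below is a simplification of the standard local-window form referenced by Eq. (10).

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(img, kind, var):
    """Corrupt an image in [0, 1] with one of the three noise types
    evaluated in Tab. 5, with variance/amount `var`."""
    if kind == "gaussian":
        noisy = img + rng.normal(0.0, np.sqrt(var), img.shape)
    elif kind == "salt_pepper":
        noisy = img.copy()
        mask = rng.random(img.shape)
        noisy[mask < var / 2] = 0.0        # pepper pixels
        noisy[mask > 1 - var / 2] = 1.0    # salt pixels
    elif kind == "speckle":
        noisy = img + img * rng.normal(0.0, np.sqrt(var), img.shape)
    return np.clip(noisy, 0.0, 1.0)

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB for images in [0, 1]."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(1.0 / mse)

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM over whole-image statistics (simplified)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = rng.random((64, 64))
for var in (0.05, 0.10, 0.25):
    noisy = add_noise(img, "gaussian", var)
    print(f"var={var:.2f}  PSNR={psnr(img, noisy):.2f} dB  "
          f"SSIM={ssim_global(img, noisy):.3f}")
```

As the noise variance grows, both PSNR and SSIM drop, which is the trend measured in Tab. 5.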

    4.6 Discussions

    As the previous tables show, even without AD, the FCNN and transfer learning models produced satisfactory results for building an automatic model that detects and diagnoses pneumonia and COVID-19, as shown in Tab. 2. Then, we used AD for feature extraction from medical images and for noise reduction; Tab. 5 illustrates the corresponding evaluation metrics. These results confirm the distinct performance of the CADTra model with AD and transfer learning. Finally, we compared the proposed model with recent models that work on CT and X-ray datasets, and our work outperforms them, as shown in Tab. 6.
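To make the denoising-autoencoder idea concrete, the sketch below trains a tiny fully connected autoencoder on noisy/clean pairs. It is an illustrative stand-in under our own assumptions (one hidden layer, plain MSE loss, toy data), not the paper's convolutional AD with its modified Gaussian-distribution loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DenoisingAutoencoder:
    """One-hidden-layer denoising autoencoder trained with MSE."""

    def __init__(self, n_in, n_hidden, lr=0.5):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_in))
        self.b2 = np.zeros(n_in)
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1 + self.b1)       # encoder
        self.y = sigmoid(self.h @ self.W2 + self.b2)  # decoder
        return self.y

    def train_step(self, noisy, clean):
        """One full-batch gradient step: reconstruct clean from noisy."""
        y = self.forward(noisy)
        err = y - clean                     # dMSE/dy (up to a constant)
        dy = err * y * (1 - y)              # through decoder sigmoid
        dW2 = self.h.T @ dy
        dh = (dy @ self.W2.T) * self.h * (1 - self.h)
        dW1 = noisy.T @ dh
        n = noisy.shape[0]
        self.W2 -= self.lr * dW2 / n
        self.b2 -= self.lr * dy.mean(0)
        self.W1 -= self.lr * dW1 / n
        self.b1 -= self.lr * dh.mean(0)
        return np.mean(err ** 2)            # MSE on this batch

# Toy data: 256 flattened 8x8 "images" corrupted with Gaussian noise.
clean = rng.random((256, 64))
noisy = np.clip(clean + rng.normal(0, 0.1, clean.shape), 0, 1)
ae = DenoisingAutoencoder(n_in=64, n_hidden=32)
losses = [ae.train_step(noisy, clean) for _ in range(200)]
print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The key design point carried over from the paper is the training target: the network receives the noisy image but is penalized against the clean one, so the bottleneck learns noise-robust features that the classifier can then exploit.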

    Table 5: Results of an AD model for different types of noise on X-ray and CT datasets

    Table 6: Comparison of the proposed work with other literature models

    (Continued)

    Table 6: Continued

    5 Conclusions and Future Work

    In this work, we presented a method (CADTra) depending on autoencoder denoising for the early and rapid detection of lung infections and determination of the disease type, using a CNN on datasets of medical images (X-ray and CT). The proposed method has been implemented with the FCNN and 12 deep learning architectures, namely AlexNet (full training), LeNet-5 (full training), VGG16 (full training), Inception (Naïve) V1 (full training), DenseNet121 (pre-training), DenseNet169 (pre-training), DenseNet201 (pre-training), ResNet50 (pre-training), ResNet152 (pre-training), VGG16 (pre-training), VGG19 (pre-training), and Xception (pre-training). We performed a comparison with traditional deep learning models for the detection and identification of pneumonia and COVID-19 diseases. The models were evaluated based on different evaluation metrics, including accuracy, loss, precision, recall, f1-score, log loss, confusion matrix, precision and recall curves, and ROC curves. The experiments were conducted on a chest X-ray dataset containing 9,201 images and a CT dataset containing 2,762 images. The X-ray images consist of 1,161 COVID-19-positive images, 4,240 pneumonia-positive images, and 3,800 normal images. The CT images were divided into two categories: 1,305 COVID-19-positive images and 1,457 normal images. The proposed model achieved high performance in binary and multi-class classification. In future research work, we will look at developing and improving the classification model on more datasets and using more in-depth features, for example by employing generative adversarial networks (GANs) for super-resolution after the autoencoder denoising process, which may help to improve the classification performance.

    Acknowledgement:The authors would like to thank the support of the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University.

    Funding Statement:This research was funded by the Deanship of Scientific Research at Princess Nourah Bint Abdulrahman University through the Fast-track Research Funding Program.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
