
    Detection of Lung Nodules on X-ray Using Transfer Learning and Manual Features

Computers, Materials & Continua, 2022, Issue 7 (posted 2022-08-24)

Imran Arshad Choudhry and Adnan N. Qureshi

    Faculty of Information Technology, University of Central Punjab, Lahore, Pakistan

Abstract: The well-established mortality rates due to lung cancers, the scarcity of radiology experts and inter-observer variability underpin the dire need for robust and accurate computer-aided diagnostics to provide a second opinion. To this end, we propose a feature grafting approach to classify lung cancer images from the publicly available National Institutes of Health (NIH) chest X-ray dataset comprising 30,805 unique patients. The performance of transfer learning with pre-trained VGG and Inception models is evaluated against manually extracted radiomics features added to a convolutional neural network through a custom layer. For classification with both approaches, Support Vector Machines (SVM) are used. The results from 5-fold cross validation report an Area Under Curve (AUC) of 0.92 and an accuracy of 96.87% in detecting lung nodules with the proposed method. This is a plausible improvement over the observed accuracy of transfer learning using Inception (79.87%). The specificity of all methods is >99%; however, the sensitivity of the proposed method (97.24%) surpasses that of the transfer learning approaches (<67%). Furthermore, it is observed that the true positive rate with SVM is the highest at the same false-positive rate in experiments among Random Forest, Decision Tree, and K-Nearest Neighbor classifiers. Hence, the proposed approach can be used in clinical and research environments to provide second opinions very close to the experts' intuition.

Keywords: Lung cancer; convolutional neural network; hand-crafted feature extraction; deep learning; classification

    1 Introduction

Lung cancer is one of the deadliest forms of cancer and has a high mortality rate. It is the second most common cancer among men and women in the United States [1]. According to the World Health Organization (WHO), 9,771 new cases of lung cancer were reported in men and women in Pakistan in 2018, which is 5.6% of all cancer cases reported. It is more commonly diagnosed in men and accounts for 14.5% of total cases in men and 8.4% of total cases in women [2].

Lung cancer is further divided into two categories based on cell size: Small Cell Lung Cancer (SCLC) and Non-Small Cell Lung Cancer (NSCLC). The former is considered highly malignant, with early metastasis and poor prognosis, and accounts for 15%-20% of all cases of lung cancer [2]. SCLC is further categorized into two stages, namely the limited stage and the extensive stage. The cancer is confined to one side of the chest in the limited stage, while it spreads to both lungs and the lymph nodes in the extensive stage. Two out of every three patients are diagnosed at the extensive stage and have to undergo regular chemotherapy sessions as treatment [2].

As mentioned, this form of cancer is a leading cause of mortality around the world, and a timely detection mechanism in the early stages is critically important for the treatment regimen. Computer Aided Diagnosis (CAD) has gained the attention of researchers because it enables early diagnosis followed by appropriate treatment [3]. Precise assessment of pulmonary nodules can help ascertain the degree of lung cancer [4]. Currently, a fine needle biopsy is the most common method for examining the malignancy status of pulmonary nodules, but it is a severely painful, invasive test for the patient. Therefore, computer aided detection mechanisms are urgently needed to support clinical diagnostic methods and save time and lives. Non-invasive approaches such as Computed Tomography (CT) scans can be considered as a replacement; this procedure takes less time and is completely painless [4].

In image processing and computer vision applications, a feature is a measurable piece of information that has some unique capability to describe the image(s). These features are further grouped into several classes (sometimes called segments). However, the classification of grayscale images such as X-ray images is essentially spatially blind: generation of a Region of Interest (ROI) or cluster is usually driven by pixel intensity alone, and the feature space of the ROI (essentially pixel locations and gray-level intensities) is limited. In grayscale images, problems of inhomogeneity due to background contribution and quasi-homogeneity due to noise arise. Different approaches are used for medical image classification, such as region-based [5], clustering-based [6,7], atlas-based, hybrid-classification-based, partial differential equation-based, and thresholding-based methods [8].

The main aim of this article is to extract manual features from medical images such as lung X-ray images and concatenate them with the features automatically extracted by a Convolutional Neural Network (CNN), in order to obtain the precise and pertinent features used for classification.

    2 Background

Different approaches have been adopted for manual feature extraction in different environments. Cuenca et al. [5] and Freixenet et al. [9] proposed a classification-based concept of nodule detection using 3D region-growing algorithms. Initially, they used a selective enhancement filter and a thresholding approach. They achieved 71.8% accuracy with 0.8 False Positives (FP). They reported that region-growing based algorithms give poorer results compared to thresholding-based classification approaches.

To divide pixels into different groups, criteria functions or grouping algorithms based on similarity measures are used. These algorithms compute the similarity between two points A = {a1, a2, ..., an} and B = {b1, b2, ..., bn} and group similar pixels together; the general form in Eq. (1) is the Minkowski distance, from which the Manhattan and Euclidean distances are obtained as special cases. Examples of such algorithms are Expectation Maximization (EM), K-Means and Fuzzy C-Means (FCM). For nodule classification, Filho et al. [10] proposed a quality thresholding algorithm, and Javaid et al. [11] developed a CAD-based system using the K-Means algorithm for nodule detection.

$d(A, B) = \left(\sum_{i=1}^{n} |a_i - b_i|^p\right)^{1/p}$    (1)
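As an illustration of the similarity measure in Eq. (1), the short Python sketch below computes the Minkowski distance between two feature vectors; the vector values are hypothetical and serve only as an example.

```python
import numpy as np

def minkowski_distance(a, b, p=2):
    """Minkowski distance between two feature vectors a and b.

    p = 1 gives the Manhattan distance, p = 2 the Euclidean distance,
    matching the special cases of Eq. (1).
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

# Example: similarity between two hypothetical pixel feature vectors (intensity, x, y).
A = [120, 34, 57]
B = [118, 36, 60]
print(minkowski_distance(A, B, p=1))  # Manhattan distance
print(minkowski_distance(A, B, p=2))  # Euclidean distance
```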

Assefa et al. [12] developed a nodule detection approach that combines template matching and multi-resolution algorithms and reduces false positives. They achieved sensitivities in the range of 84% to 91% using template matching. Template matching is a brute-force approach to object detection and classification: an image is divided into sub-images, or templates, that contain the region of interest, and the template slides over the whole image to locate the desired pattern. For searching or matching templates over the image, well-known similarity metrics are used, such as Normalized Cross-Correlation (NCC), cross-correlation, Sum of Squared Differences (SSD), and Sum of Absolute Differences (SAD). The main drawbacks of the template-matching approach are the choice of matching metric and, as the literature indicates, the long time it takes to compute the correlation.
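A minimal sketch of the sliding-window template matching described above, using OpenCV; the file names and template are placeholders, and the two metrics shown correspond to normalized SSD and normalized cross-correlation.

```python
import cv2

# Placeholder paths: substitute a chest X-ray and a cropped nodule patch.
image = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("nodule_template.png", cv2.IMREAD_GRAYSCALE)

# Normalized SSD (TM_SQDIFF_NORMED): lower values indicate a better match.
ssd_map = cv2.matchTemplate(image, template, cv2.TM_SQDIFF_NORMED)
min_val, _, min_loc, _ = cv2.minMaxLoc(ssd_map)

# Normalized cross-correlation (TM_CCORR_NORMED): higher values are better.
ncc_map = cv2.matchTemplate(image, template, cv2.TM_CCORR_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(ncc_map)

h, w = template.shape
print("Best SSD match at", min_loc, "score", min_val)
print("Best NCC match at", max_loc, "score", max_val)
# The bounding box of the best NCC match spans
# (max_loc[0], max_loc[1]) to (max_loc[0] + w, max_loc[1] + h).
```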

Gong et al. [13] proposed a concept based on dynamic self-adaptive matching and Fisher Linear Discriminant Analysis (FLDA). They used Otsu thresholding and 3D region-growing algorithms for classification, and Gaussian smoothing for noise reduction. The main drawback of this approach is that it works well for binary classification but fails on multi-class problems.

In the last 20 years, computer vision applications have used the Scale-Invariant Feature Transform (SIFT), Haar Cascades, Speeded Up Robust Features (SURF), the Histogram of Oriented Gradients (HOG), and various statistical features such as Difference of Variance, Entropy, Energy, and Sum of Variance. Recently, researchers have shifted from hand-crafted feature extraction to automatic feature extraction methods such as deep learning. The pertinent reasons for this transition are:

1. Handcrafted feature extraction methods are time consuming: manually setting and tuning bounding boxes on a dataset or cropping the required portion of the images is tedious.

2. Sometimes the dataset has image quality too low to extract the required regions or pertinent features.

3. Handcrafted feature extraction requires the active participation of medical experts to obtain precise information.

Automatic feature extraction methods such as CNNs can extract features directly from the dataset. The network assigns random weights to all available features and, during training, adjusts these weights to extract the meaningful ones. Convolution is the first layer of a CNN and extracts features from the input image. It learns image features by considering small squares of input data, thereby establishing the relationship between pixels. Eq. (2) depicts the mathematical operation of the convolution, which takes two inputs: a portion of an image and a filter/kernel. The final output of the convolution between the image and the filter/kernel is called a "Feature Map".
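As a concrete illustration of the convolution described above (the exact form of Eq. (2) is not reproduced here), the sketch below correlates a small image patch with a 3×3 kernel using SciPy; the patch and kernel values are arbitrary examples.

```python
import numpy as np
from scipy.signal import correlate2d

# A toy 5x5 grayscale patch with a vertical intensity edge (example values).
patch = np.array([
    [10, 10, 10, 40, 40],
    [10, 10, 10, 40, 40],
    [10, 10, 10, 40, 40],
    [10, 10, 10, 40, 40],
    [10, 10, 10, 40, 40],
], dtype=float)

# A 3x3 edge-detection-style kernel (example values).
kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

# 'valid' keeps only positions where the kernel fully overlaps the patch,
# producing the 3x3 feature map analogous to the output of a convolutional layer.
feature_map = correlate2d(patch, kernel, mode="valid")
print(feature_map)
```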

A variety of computer aided diagnostic methods use both image processing and deep learning approaches, and the Convolutional Neural Network (CNN) is one example [14]. A CNN is a neural network that consists of one or more convolutional layers. These layers can be thought of as filters applied to the input data, or as functions applied to the whole or part of the input image. The number of layers depends on the number of features to be extracted from the input image and on the number of operations (convolution, batch normalization, max pooling, etc.) to be applied to it. Such networks are sometimes called Deep Convolutional Neural Networks (Deep CNNs) due to the large number of convolutional layers used in them.

Deep neural networks require large amounts of labeled data to work efficiently, while the number of publicly available annotated datasets is small. Imran et al. [15] proposed a multi-task learning model for learning a classifier for chest X-ray images along with a loss function (the Tversky loss, depicted in Eq. (3)) for convergence. It extracts features from the dataset in a completely black-box manner and consists of several convolution layers with additional layers such as max-pooling, dropout, and activation. These layers are learnable; their weights are transferred to the subsequent layers and finally passed to the classifier as a vector. The classifier assigns a label, or sometimes multiple labels, to each vector.

where α and β are used to control the magnitude of the penalties, $p_{0i}$ is the predicted probability of pixel $i$ belonging to the class of interest and $g_{0i}$ is the corresponding ground-truth value.
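The paper does not reproduce Eq. (3); the sketch below implements the standard Tversky loss formulation in NumPy, under the assumption that this is the form referred to, with α weighting false positives and β weighting false negatives.

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.5, beta=0.5, eps=1e-7):
    """Standard Tversky loss for a binary segmentation map.

    pred   : predicted foreground probabilities p_0i, shape (H, W)
    target : binary ground-truth mask g_0i, shape (H, W)
    alpha  : weight on false positives; beta: weight on false negatives.
    """
    p0, g0 = pred.ravel(), target.ravel().astype(float)
    p1, g1 = 1.0 - p0, 1.0 - g0
    tp = np.sum(p0 * g0)   # true-positive mass
    fp = np.sum(p0 * g1)   # false-positive mass
    fn = np.sum(p1 * g0)   # false-negative mass
    tversky_index = tp / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index

# Example with a random 8x8 prediction and mask (illustrative only).
rng = np.random.default_rng(0)
pred = rng.random((8, 8))
mask = (rng.random((8, 8)) > 0.5).astype(int)
print(tversky_loss(pred, mask, alpha=0.7, beta=0.3))
```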

Qin et al. [16] apply deep learning (a subset of machine learning based on artificial neural networks) to chest X-ray images. Their main focus is performing lung segmentation (identifying the boundary of the lungs against the surrounding tissues) efficiently. In the modern era of technology, machine learning and pattern recognition techniques are widely used in computer vision (image processing) and other AI-based systems [16].

Baltruschat et al. [17] focused on chest X-ray radiography, the most common type of examination in imaging departments. Machine-driven detection tools in the radiology and clinical workflow could have a substantial impact on the level of care. In their work, they analyze the effect of two advanced image preprocessing methodologies, originally formulated for reading by radiologists, on the performance of deep learning methods. Jaiswal et al. [18] propose a deep learning model based on Mask R-CNN to localize pneumonia on chest X-ray images. The authors incorporate local and global features for pixel-wise segmentation and report plausible performance on X-ray images; however, the model fails to segment pneumonia in low-quality images and incurs extra computational cost when analyzing high-quality images.

In their study, Hussain et al. [19] explained the concept of Reconstruction Independent Component Analysis (RICA) and sparse filter features for the detection of lung cancer using machine learning algorithms. They used multiple machine learning algorithms such as Gaussian Radial Basis Function (GRBF) networks, Decision Tree, Support Vector Machine (SVM), and Naive Bayes to classify lung cancer. Using RICA and sparse filters, they achieved plausible results with the jackknife cross-validation technique. Kesim et al. [20] proposed a small Convolutional Neural Network (CNN) model for X-ray image classification and achieved 86% accuracy on the Japanese Society of Radiological Technology (JSRT) dataset. Bhandary et al. [21] customized a pre-trained CNN model, AlexNet, for the detection of abnormalities in lung X-ray images. The authors focus on pneumonia detection using the customized AlexNet, introduce a new threshold filter for the feature ensemble strategy, and achieve a classification accuracy of 96%. Cao et al. [22] introduce a Variational Auto Encoder (VAE) in each layer of their Convolutional Neural Network (CNN) model. Their model is based on U-Net, a widely used segmentation model; the VAE is used to extract symmetrical semantic information from the right and left thoraxes, and an attention mechanism uses spatial and channel information to segment the region of interest in the lungs and improve segmentation accuracy. Salman et al. [23] explored deep learning methodologies on X-ray images collected from Kaggle, GitHub, and the OpenI repository, proposing a Convolutional Neural Network and applying it to X-ray images collected from GitHub.

We have observed that manual annotation tasks are very time-consuming and carry a high risk of human error. Therefore, the aim of this research is to evaluate the hybridization of manually extracted and convolutional features for the classification and detection of lung cancer nodules, which can significantly reduce reporting time and maximize accuracy.

    3 Proposed Methodology

    3.1 Datasets

The NIH chest X-ray dataset comprises 30,805 unique patients with disease-labeled data. The images are 1024×1024 pixels and there are 15 classes (14 diseases [Atelectasis, Consolidation, Infiltration, Pneumothorax, Edema, Emphysema, Fibrosis, Effusion, Pneumonia, Pleural Thickening, Cardiomegaly, Nodule, Mass, Hernia] and one for "No Finding"). Fig. 1 shows sample images from the NIH X-ray dataset. The labels were extracted with Natural Language Processing algorithms based on text-mining of the radiology reports, with disease classification supervised by medical experts. As we are interested in cancer detection, we consider Nodule as the class of interest. Because of data imbalance, we add the Lung Image Database Consortium image collection (LIDC, containing 244,527 low-dose lung images from 1,010 unique patients) to the experiments.

Figure 1: Sample images from the National Institutes of Health (NIH) chest X-ray dataset

Evaluating deep learning models can be quite difficult and tricky. Normally, we split the dataset into training and testing sets at different ratios. One of the most widely used statistical techniques to evaluate the performance of deep learning models and avoid overfitting is Cross Validation, or K-Fold Cross Validation. To improve model prediction, we used the k-fold cross validation technique as depicted in Fig. 2. In k-fold cross validation, the dataset is completely shuffled to make sure that our inputs are not biased. The dataset is then divided into k equal-sized portions with no overlap. Depending on the requirements and environment, k is typically set to 10 or 5; we used k = 5 to split the dataset into 5 equal-sized portions. Apart from k-folding, we improved the robustness of our proposed model and the pre-trained models using data augmentation. Using this technique, we generate several samples of the under-sampled class, which helps to nearly balance the class distribution. In this step, we artificially synthesize X-ray images from the original X-ray images through minor alterations such as rotation, horizontal/vertical flipping, scaling, zooming, padding, and random brightness changes. We achieved better results using data augmentation, which prevents data scarcity, increases generalization, and resolves class imbalance issues.
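A minimal sketch of the 5-fold split and the augmentation settings described above, using scikit-learn and Keras; the arrays X and y, the image size, and the augmentation ranges are placeholders rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Placeholder data: N chest X-rays resized to 224x224x3 and binary labels.
X = np.random.rand(100, 224, 224, 3).astype("float32")
y = np.random.randint(0, 2, size=100)

# Augmentation roughly matching the transformations listed above.
augmenter = ImageDataGenerator(
    rotation_range=15,
    horizontal_flip=True,
    vertical_flip=True,
    zoom_range=0.1,
    brightness_range=(0.8, 1.2),
)

# 5-fold cross validation with shuffling, as used in the experiments.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(skf.split(X, y), start=1):
    X_train, y_train = X[train_idx], y[train_idx]
    X_val, y_val = X[val_idx], y[val_idx]
    train_flow = augmenter.flow(X_train, y_train, batch_size=32)
    # model.fit(train_flow, validation_data=(X_val, y_val), epochs=...)
    print(f"Fold {fold}: {len(train_idx)} train / {len(val_idx)} validation samples")
```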

    Figure 2: Dataset representation of training and validation procedure employed in the 5-fold cross validation

Lung X-ray datasets are mostly grayscale, contain a large amount of noise, and include low-quality images due to the different protocols used during image acquisition. Extracting visual features from a low-quality dataset is quite a challenging task. Therefore, we applied contrast enhancement algorithms to these low-quality images and achieved better performance and efficacy. One of the most widely used techniques in image processing for background equalization and feature extraction is morphological operations, especially the bottom-hat and top-hat transformations.

In the top-hat morphological operation, depicted in Eq. (4), we apply an opening operation to the input image using a Structural Element (SE) and subtract the result from the original image to obtain the bright features and objects. In the bottom-hat morphological operation, we apply a closing operation to the input image using the SE and subtract the original image from the result to extract the dark features and objects. After the opening and closing operations, we combine them by adding the top-hat result to the image and subtracting the bottom-hat result, which yields the enhanced image. Before applying the morphological operations, finding an appropriate Structural Element (SE) that achieves good enhancement is itself a challenge. To obtain a suitable SE, we use Edge Content (EC) [24] to automatically select the SE based on a contrast matrix. The Edge Content is the magnitude of the gradient vector at pixel position (x, y) of the input image (In_img). Eq. (5) presents the calculation of EC, where (i, j) is the block size of the input image. Fig. 3 shows original and processed images, and Fig. 4 presents the abstract flow of the proposed methodology.
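A minimal sketch of the top-hat/bottom-hat contrast enhancement described above, using OpenCV; the image path and the structuring-element size are hypothetical choices, whereas the paper selects the SE automatically via Edge Content (Eq. (5)).

```python
import cv2

# Placeholder path to a low-quality chest X-ray (grayscale).
img = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)

# Hypothetical structuring element; the paper selects its size via Edge Content.
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))

# Top-hat = image - opening (bright details);
# bottom-hat (black-hat) = closing - image (dark details).
top_hat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, se)
bottom_hat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, se)

# Combination described in the text: add the top-hat, subtract the bottom-hat.
enhanced = cv2.add(img, top_hat)
enhanced = cv2.subtract(enhanced, bottom_hat)

cv2.imwrite("chest_xray_enhanced.png", enhanced)
```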

Figure 3: Original and contrast-enhanced images after preprocessing low-quality X-rays with top-hat and bottom-hat morphological operations

    Figure 4: Abstract level of proposed methodology

    3.2 Feature Extraction

We manually extract statistical features such as Sum of Variance, Entropy, Energy, and Difference of Variance from the NIH X-ray dataset and save them in feature vectors for later use with pre-trained models (if necessary). Before moving to transfer learning, we train different machine learning algorithms on these extracted feature vectors: Random Forest, which builds several individual trees during training and aggregates the predictions of all trees into the final prediction; Decision Tree, which uses entropy and the Gini index depicted in Eq. (6) to find the split with maximum information gain for categorical features; K-Nearest Neighbor, which stores all the training data and finds the nearest nodes by feature similarity; and Support Vector Machine, which handles linear and non-linear data using kernels and properly tackles outliers. The training and validation accuracies are depicted in Fig. 5. We used k-fold cross validation to train the machine learning classifiers and achieved plausible, consistent results with Random Forest (RF) compared to the other classification algorithms. Initially, the Decision Tree algorithm achieves good results, but after ingesting the k-fold data its training accuracy decreases. The K-Nearest Neighbor algorithm gives plausible results when K is between 3 and 26; when K is increased further, the training accuracy decreases and the validation accuracy shows large spikes. With the Support Vector Machine (SVM), k-fold cross validation achieves better results than the other classifiers; training the SVM with k-fold cross validation takes a long time but gives better results.

where $p_i$ is the probability of class $i$ (the Gini index in Eq. (6) is computed as $1-\sum_i p_i^2$), $b$ is the intercept of the SVM separating hyperplane on the y-axis, and $W$ represents the weight vector.
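A minimal sketch of training the four classifiers named above with 5-fold cross validation in scikit-learn; the feature matrix X and labels y are placeholders standing in for the manually extracted statistical feature vectors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Placeholder: rows are images, columns are statistical features
# (e.g., sum of variance, entropy, energy, difference of variance).
X = np.random.rand(500, 4)
y = np.random.randint(0, 2, size=500)   # 1 = nodule, 0 = no finding

classifiers = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Decision Tree": DecisionTreeClassifier(criterion="gini", random_state=0),
    "KNN (k=5)": KNeighborsClassifier(n_neighbors=5),
    "SVM (RBF)": SVC(kernel="rbf", probability=True, random_state=0),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```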

Figure 5: Training and validation of machine learning classification algorithms (Random Forest, Decision Tree, K-Nearest Neighbor and Support Vector Machine) on the National Institutes of Health (NIH) chest X-ray dataset

The results achieved with the machine learning classification algorithms did not meet our threshold, so we use pre-trained deep learning models to obtain better accuracy. For the pre-trained models, we use the concept of transfer learning with fine-tuning of custom Fully Connected (FC) layers.

The manually extracted features are used as the input to the machine learning classification algorithms, as depicted in Fig. 6. The features are evaluated using ROC and AUC metrics, and selection is performed using hand-crafted feature selection algorithms.

Figure 6: Complete architecture of the transfer learning concept with pre-trained models: the top model is a bird's-eye view of the pre-trained model and the bottom model is frozen and embedded with custom fully connected layers

Further, we extract radiomics features from the customized chest X-ray dataset. These features fall into groups such as shape- and size-based features (volume, flatness, surface area, etc.), gray-level co-occurrence matrix features (GLCM: Energy, Entropy, Sum of Average, Difference of Variance, Difference Entropy, etc.), depicted in Tab. 1, size zone matrix features (SZM: small area emphasis, gray-level non-uniformity) and run length matrix features (RLM: short run emphasis, zone percentage, size zone non-uniformity, etc.). We extract these radiomics features and pass them to machine learning classification algorithms such as the Support Vector Machine. Fig. 7 represents the SVM classification ROCs for multiple classes.
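A minimal sketch of extracting a few GLCM texture features with scikit-image and feeding them to an SVM; the random images, distances and angles are illustrative assumptions and do not reproduce the paper's exact radiomics configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(image_u8):
    """Compute a small set of GLCM texture features from an 8-bit grayscale image."""
    glcm = graycomatrix(
        image_u8, distances=[1], angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=256, symmetric=True, normed=True,
    )
    feats = [graycoprops(glcm, prop).mean()
             for prop in ("contrast", "homogeneity", "energy", "correlation")]
    return np.array(feats)

# Placeholder: a batch of random 8-bit "images" and binary labels.
images = (np.random.rand(50, 64, 64) * 255).astype(np.uint8)
labels = np.random.randint(0, 2, size=50)

X = np.stack([glcm_features(img) for img in images])
clf = SVC(kernel="rbf").fit(X, labels)
print("Training accuracy:", clf.score(X, labels))
```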

    Table 1: GLCM features

Figure 7: Handcrafted features extracted with radiomics methodologies from the NIH chest X-ray dataset and classified with a traditional machine learning classifier (SVM)

It may be observed that the ROC achieved using the pre-trained models is not satisfactory, with an AUC close to 0.5 in detecting all abnormalities. Generally, radiomics features require expert-annotated masks and a manual bounding box. Hence, improved results are expected using the hybrid framework proposed in this research. Since the results depicted in Fig. 7 are not up to the mark, we move towards automatic deep feature extraction using deep learning methodologies.

    3.3 Hybrid Framework

The proposed model is intended to perform better than the existing models, and its improved prediction accuracy is demonstrated through experimentation and results. Fig. 4 depicts the proposed methodology and Fig. 8 depicts our proposed, customized CNN model:

Figure 8: Our proposed customized CNN model with automated deep feature extraction and handcrafted features embedded in the middle of the network

In the proposed model, the input images are first preprocessed by applying horizontal and vertical flips, random crops, 45° rotations and color distortion. From the processed images, we manually extract meaningful features using multiple handcrafted feature methods, and these features guide the CNN towards classification. The preprocessed images are passed through a convolution, batch normalization and LeakyReLU block, which feeds another set of the aforementioned layers that splits into two sets of N similar layers running in parallel. The parallel sets of layers are concatenated at the end and passed through another set of three layers (convolution, batch normalization and LeakyReLU), followed by a convolution and LeakyReLU layer, before passing through a fully convolutional network. The processed data goes through another LeakyReLU layer before a final fully convolutional network produces the output, which is assigned to N clusters. As the model is not trained on a labeled set of input images at this stage, the proposed methodology makes use of unsupervised learning.
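A minimal Keras sketch of the general idea of grafting a handcrafted feature vector into the middle of a CNN; the layer counts, sizes and names are illustrative assumptions and do not reproduce the exact architecture of Fig. 8.

```python
from tensorflow.keras import layers, Model, Input

# Image branch: convolution / batch-norm / LeakyReLU blocks (illustrative sizes).
img_in = Input(shape=(224, 224, 1), name="xray_image")
x = layers.Conv2D(32, 3, padding="same")(img_in)
x = layers.BatchNormalization()(x)
x = layers.LeakyReLU()(x)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.LeakyReLU()(x)
x = layers.GlobalAveragePooling2D()(x)

# Handcrafted branch: e.g., GLCM / statistical radiomics features as a flat vector.
hand_in = Input(shape=(16,), name="handcrafted_features")
h = layers.Dense(32)(hand_in)
h = layers.LeakyReLU()(h)

# Feature grafting: concatenate deep and handcrafted features before classification.
merged = layers.concatenate([x, h])
z = layers.Dense(64)(merged)
z = layers.LeakyReLU()(z)
out = layers.Dense(1, activation="sigmoid", name="nodule_probability")(z)

model = Model(inputs=[img_in, hand_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```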

Our proposed Convolutional Neural Network (CNN) model achieves robust results and improved performance with handcrafted features. Through progressive extraction of deep features from the input image at each CNN layer, the initial layers learn about edges and boundaries, while the last layers can identify lung conditions such as Atelectasis, Consolidation, Infiltration, Pneumothorax, Edema, Emphysema, Fibrosis, Effusion, Pneumonia, Pleural Thickening and Cardiomegaly.

Apart from automatic feature extraction by the CNN layers, we also embed handcrafted features via our proposed customization layer, which further improves the classification accuracy. Eq. (7) depicts the mathematical operation of the basic CNN model, where In_img is an input image and F is a filter/kernel of dimension f1 × f2.

The output of Eq. (7) is followed by Eq. (8), where the filter is flipped horizontally or vertically during the operation.

where vec is the input vector of a layer during convolution, the second term is the output of the layer after convolution, and f(·) is the activation function of the activation layer.

To calculate the cost (C) of the network, Eq. (10) expresses the mathematical formulation of the cost value, where fc is the actual forecasting cost and nfc is the network-predicted cost.

The above equations are mostly used during CNN training. To integrate manual features within the convolution layer, we introduce a novel concept of hybrid convolution, which combines automated feature extraction with a fusion of handcrafted features to improve the efficacy of medical image classification. Eq. (11) presents the ingestion of handcrafted features into the automated features.

We apply backpropagation using the chain rule, subtracting our handcrafted features before computing the partial derivative. The chain rule is applied to find the gradient on individual weights. Eq. (12) shows the subtraction of the handcrafted features, which are placed in a single vector.

We apply k-fold cross validation to handle overfitting and achieve generalization during training. In this technique, the complete dataset is divided into folds, and combinations of folds are used for training. To achieve credible results from the proposed CNN model, we apply data re-sampling to handle the imbalanced classes, using data augmentation techniques such as rotation, horizontal/vertical flipping, scaling, zooming, padding, and random brightness to synthesize additional NIH X-ray images and resolve the imbalance. After that, we apply morphological operations (a combination of opening and closing) to handle the contrast issues in the NIH X-ray dataset, using Edge Content (EC) for the optimal selection of the Structural Element (SE). Using the optimal SE, we obtain reasonable results, filling and eliminating the smaller cracks and gaps in the NIH X-ray images.

Fig. 9 represents the activation maps (lines, edges, and texture patterns) of the CNN model at the second layer and at a deep (Nth) layer.

Figure 9: Middle-layer activation maps: visual activation maps of our proposed CNN model. Later layers of the CNN model construct features by ingesting deep features from the previous layers

4 Results and Discussion

For extraction of deep features, we used pre-trained models (VGG and Inception) and classified those features with traditional machine learning classifiers (Decision Tree, Random Forest, K-Nearest Neighbor and Support Vector Machine). We removed the top layer from each pre-trained model, embedded dense (FC) layers with the default optimizer (Adam) and regularization to avoid overfitting, and set the hyperparameters (learning rate = 1e-7, batch size = 32 and epochs = 100) empirically. We set up an early stopping mechanism and used k-fold cross validation to obtain optimal generalization performance from the pre-trained models.
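A minimal sketch of the transfer-learning setup described above, using Keras with a frozen VGG16 backbone, custom FC layers, the Adam optimizer, early stopping, and the stated hyperparameters; the input shape, dense sizes and data are illustrative placeholders.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping

# Frozen ImageNet backbone with the top (classification) layers removed.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Custom fully connected head with dropout regularization (illustrative sizes).
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
out = layers.Dense(1, activation="sigmoid")(x)

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer=Adam(learning_rate=1e-7),
              loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder data standing in for preprocessed chest X-rays and nodule labels.
X = np.random.rand(64, 224, 224, 3).astype("float32")
y = np.random.randint(0, 2, size=64)

early_stop = EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True)
model.fit(X, y, validation_split=0.2, batch_size=32, epochs=100,
          callbacks=[early_stop], verbose=0)

# The deep features from the frozen backbone can also be exported and
# classified with an SVM, as done in the experiments.
```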

In the field of medical imaging, a large number of annotated images is required for good classification accuracy, which is still a big challenge. Training on a small dataset can lead to overfitting, and deep learning models mostly require large datasets for optimal classification accuracy. Training on a large medical dataset also requires substantial processing power, time, and a large amount of annotated medical data. To overcome this, we initially apply the most widely used pre-trained models, VGG and Inception, which are already pre-trained on large natural-image datasets for different classification tasks. We therefore adopt the concept of transfer learning (pre-trained models) with our custom Fully Connected (FC) layers. Fig. 6 depicts the concept of transfer learning. In the transfer learning setting, we use the pre-trained low-level features of these models and embed our custom Fully Connected layers for fine-tuning. We have frozen the convolutional layers of the pre-trained models and train only our custom Fully Connected layers. The training, validation and testing accuracies of these pre-trained models and the complete details of their architectures as utilized in this study are given below.

VGG is a well-known and relatively small pre-trained convolutional network model that achieves 92.7% top-5 test accuracy on the ImageNet dataset. The VGG network is considered a simple model because it mainly comprises convolutional layers stacked on top of each other; max pooling handles the reduction in volume size, and fully connected layers followed by SoftMax complete the network. VGG comes in two versions, VGG16 and VGG19. The VGG16 architecture is made up of 16 weight layers: 13 convolutional layers and 3 fully connected layers, with 5 pooling layers. We used it with the Support Vector Machine (SVM) classifier for optimal results. We freeze the middle and last layers of the VGG model and embed custom Fully Connected (FC) layers to extract features, which are then sent to the machine learning classification algorithms. Using transfer learning with a machine learning classifier, we achieve better results compared to handcrafted features with machine learning classifiers and to the other pre-trained models. Furthermore, we embed handcrafted features into the VGG model and obtain even better results.

Fig. 10 presents the results obtained with the automatically extracted features (from the pre-trained VGG model) together with the handcrafted features; the Support Vector Machine (SVM) improves on the other machine learning classifiers (Random Forest (RF), Decision Tree (DT), and K-Nearest Neighbor (KNN)).

Figure 10: LEFT: using a pre-trained model (VGG) with fine-tuned fully connected (FC) layers for feature extraction, then classifying those features with a traditional machine learning classifier (Support Vector Machine). RIGHT: high-dimensional features acquired with VGG and custom layers and forwarded to the Support Vector Machine

Inception is a convolutional neural network used for object detection and image analysis. It was first used as a module in GoogLeNet and has shown an accuracy of 78.1% on the ImageNet dataset. It was developed from the ideas of different researchers and exists in different versions such as Inception V1, V2, and V3. It performs convolutions on the input with three different kernel sizes (1×1, 3×3, and 5×5) as well as max pooling, and the outputs are concatenated and sent to the next Inception module. To achieve reasonable results in less time and with more features, we apply this second pre-trained model. Using the Inception block and the Hebbian principle, it concatenates the 1×1, 3×3, and 5×5 convolutional layers into a single output layer. We freeze the middle and last layers of the Inception model and insert custom Fully Connected (FC) layers to acquire features, which are transmitted to the machine learning classification algorithms. Using transfer learning with a machine learning classifier, we obtain reasonable results compared to handcrafted features with machine learning classifiers and to other pre-trained models such as VGG. Fig. 11 shows the results of classifying the automatically extracted features with SVM and of integrating handcrafted features with the Inception model to achieve better results.

Figs. 12 and 13 show the results of our proposed hybrid CNN with handcrafted features, classified with traditional machine learning classifiers. Tab. 2 presents the accuracy, specificity, and sensitivity of the pre-trained models (VGG and Inception) and of the proposed model with a traditional machine learning classifier (SVM). We observe that the proposed model achieves high sensitivity due to the preprocessing module and the grafting of manual features with the CNN features.

Figure 11: LEFT: using a pre-trained model (Inception) with fine-tuned fully connected (FC) layers for feature extraction, then classifying those features with a traditional machine learning classifier (Support Vector Machine). RIGHT: high-dimensional features acquired with Inception and custom layers and forwarded to the SVM

Figure 12: High-dimensional features acquired with our proposed custom layers (for fusion of handcrafted features) and forwarded to a traditional machine learning classifier (Support Vector Machine)

Figure 13: Using our proposed CNN model with fine-tuned fully connected (FC) layers for feature extraction, then classifying those features with machine learning classifiers (Random Forest (RF), Decision Tree (DT), K-Nearest Neighbor (KNN) and Support Vector Machine (SVM))

    Table 2: Assessment results of novel proposed CNN model with pre-trained CNN models and traditional machine learning classifier Support Vector Machine (SVM)

The features from the proposed CNN model are further classified with other traditional machine learning classifiers such as Random Forest (RF), Decision Tree (DT), and K-Nearest Neighbor (KNN). The results are depicted in Fig. 13. We observe that the Support Vector Machine gives plausible results on the fusion of automated features with hand-crafted features compared to the other classifiers (KNN, DT, and RF).

    Figure 14: Validation accuracy and loss of LIDC dataset on our proposed model

    5 Conclusion

In this paper, we have proposed a hybrid framework utilizing both handcrafted features and features extracted by convolutional neural networks to categorize lung cancer images into classes of interest. In this technique, the manual features (radiomics GLCM) are inserted into the middle of the proposed sequential CNN model to improve performance, and the technique is evaluated on the publicly available NIH and LIDC datasets. It is observed that the GLCM features significantly improve the performance compared to the pre-trained VGG and Inception models, and this is validated using 5-fold cross-validation. The SVM classifier is used in the experiments as it gives significantly more robust results than KNN, Decision Tree, and Random Forest for the classification of features. This technique gives plausible results, with a sensitivity of 97.24%, a significant improvement over 65.69% with VGG + SVM and 66.49% with Inception + SVM. The specificity is 99.77% and the overall accuracy improves to 96.87%. In future work, we plan to insert the manually extracted features into the initial and/or final layers of the CNN and evaluate the performance. To validate our proposed model, we trained it on the LIDC dataset and report the validation accuracy and loss in Fig. 14.

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
