
Computer Decision Support System for Skin Cancer Localization and Classification

Computers, Materials & Continua, 2021, Issue 7

Muhammad Attique Khan, Tallha Akram, Muhammad Sharif, Seifedine Kadry and Yunyoung Nam

1 Department of Computer Science, COMSATS University Islamabad, Wah Campus, 47040, Pakistan

2 Department of Computer and Electrical Engineering, COMSATS University Islamabad, Wah Campus, 47040, Pakistan

3 Department of Mathematics, Beirut Arab University, Beirut, Lebanon

4 Department of Computer Science and Engineering, Soonchunhyang University, Asan, Korea

Abstract: In this work, we propose a new, fully automated system for multiclass skin lesion localization and classification using deep learning. The main challenge is to address the problem of imbalanced data classes found in the HAM10000, ISBI2018, and ISBI2019 datasets. Initially, we consider a pre-trained deep neural network model, DarkNet19, and fine-tune the parameters of the third convolutional layer to generate the image gradients. All the visualized images are fused using a high-frequency approach along with a multilayered feed-forward neural network (HFaFFNN). The resultant image is further enhanced by employing a log-opening based activation function to generate a localized binary image. Later, two pre-trained deep models, DarkNet53 and NasNet-Mobile, are fine-tuned according to the selected datasets. Transfer learning is then used to train both models, where the input feed is the generated localized lesion images. In the subsequent step, the extracted features are fused using the parallel max-entropy correlation (PEnC) technique. To avoid overfitting and to select the most discriminant feature information, we implement a hybrid optimization algorithm called entropy-kurtosis controlled whale optimization (EKcWO). The selected features are finally passed to the softmax classifier for the final classification. Three datasets are used for the experimental process, HAM10000, ISBI2018, and ISBI2019, achieving accuracies of 95.8%, 97.1%, and 85.35%, respectively.

Keywords: Skin cancer; convolutional neural network; lesion localization; transfer learning; feature fusion; feature optimization

    1 Introduction

Skin cancer, as per statistics from the World Health Organization (WHO) [1], is one of the deadliest types of cancer worldwide [2,3]. According to the reports, the number of deaths is expected to double over the next two years. However, the death rate could be controlled if the infection is diagnosed in its early stages [4]. Skin cancer typically has seven major types, but the most common are basal cell carcinoma, squamous cell carcinoma, and malignant melanoma. Melanoma is the deadliest skin cancer and causes many deaths worldwide [5]. According to the WHO, in 2020 alone the number of melanoma cases increased while the total number of deaths decreased by 5.3%. The estimated number of melanoma cases in the USA alone for 2020 is 196,060, of which around 95,710 will be noninvasive and 100,350 invasive (Skin Cancer Facts & Statistics, 2020). Based on previous reports, the number of new invasive cases has increased by 47% in the last ten years. Timely identification of skin cancer increases the survival rate to 95% [6]. This cancer is primarily diagnosed using clinical methods such as the ABCDE rules, a 3-point checklist, and a 7-point checklist. However, these methods face many constraints, including the unavailability of experts, limited resources, and time deficiency [7,8].

Dermoscopy is a new imaging technology used for skin lesion diagnosis, with the drawbacks of high cost, a limited number of expert dermatologists, and long diagnosis time [9,10]. Skin lesions are irregular in shape and texture, which is another factor behind inaccurate identification [11]. Researchers working in computer vision and machine learning have introduced several computer-aided diagnosis (CAD) techniques to identify skin cancer [12]. A CAD system can be helpful for dermatologists as a second opinion [13]. The classical techniques were introduced before 2010 and between 2010–2016 [14]; these conventional techniques were mostly based on thresholding and clustering [15]. Moreover, machine learning techniques were used for binary classification on balanced data. The main theme of these techniques is to extract handcrafted features, including shape, color, point, and texture features, and later use them for classification [16].

Recently, deep learning algorithms have been utilized to develop computerized medical image analysis methods [17,18]. Using deep learning, researchers have achieved remarkable results, especially in stomach and skin lesion classification [19,20]. The results of deep learning techniques are much improved compared to conventional techniques [21,22]. Moreover, information fusion of deep learning models has also shown improved performance in medical disease diagnosis [23,24]. Recently, Huang et al. [25] presented a lightweight deep learning approach for skin lesion classification. They employed two pre-trained models, EfficientNet and DenseNet, and optimized their features for multiclass classification. This work was tested on the HAM10000 dataset and achieved an accuracy of 85.8%. The main advantage of this work is that it is applicable on mobile devices for skin lesion diagnosis. Carcagnì et al. [26] presented a convolutional neural network (CNN) approach for multiclass skin cancer classification. They initially implemented a DenseNet deep model and fine-tuned it according to the dataset classes. Later they extracted the more relevant features and classified them using SVM. The experimental process was conducted on the HAM10000 dataset and achieved an accuracy of 90%. This method is only useful for balanced class datasets.

Thurnhofer-Hemsi et al. [27] presented an ensemble of deep learning models for multiclass skin cancer classification. They employed five pre-trained deep models: GoogleNet, InceptionV3, DenseNet201, Inception-ResNetV2, and MobileNetV2. They performed fine-tuning and trained the models using transfer learning. After that, they applied a plain classifier and a hierarchy-of-classifiers approach for final classification. For the experimental process, the HAM10000 dataset was used, achieving an accuracy of 87.7%. They concluded that the DenseNet model performed well and that, overall, this work is useful for balanced datasets. Mohamed et al. [28] introduced a deep CNN-based approach for multiclass skin cancer classification. They implemented this method in two stages. First, they trained the model on all deeply connected layers. Second, they balanced the data to resolve the issue of data imbalance. After that, they selected two pre-trained models, DenseNet121 and MobileNet, for classification. They fine-tuned these models and mapped the features for the classification phase. The HAM10000 dataset was used for the experimental process, achieving an accuracy of 92.7%. Because of the balanced training data, this model is useful for mobile applications. Chaturvedi et al. [29] presented a deep CNN-based computerized approach for multiclass skin cancer classification. They initially normalized the images and resized them according to the deep models. Later, five pre-trained models were selected and fine-tuned. Deep features were extracted from each model for classification. The classification performance was evaluated on the balanced HAM10000 dataset, achieving an accuracy of 92.83%. The main concept in this work was the fusion of information from different neural networks for better classification performance. Shahin et al. [30] ensembled features from two deep learning networks for the classification of skin cancer. They used ResNet50 and InceptionV3 models for this work. The experimental process was performed on the HAM10000 and ISBI2018 datasets, achieving 89.9% and 89.05% accuracy, respectively. Almaraz-Damian et al. [31] introduced a fusion framework for skin cancer classification from dermoscopic images. In the first stage, they fused handcrafted features and clinical features such as ABCDE to better represent lesion information. In the next phase, deep CNN features were also extracted and fused with the first-stage features. The classification was performed by logistic regression (LR), SVM, and relevance vector machine (RVM). For the experimental process, they used the ISBI2018 dataset and achieved an accuracy of 92.4%. Moreover, the authors in [32] presented a residual deep learning framework and achieved an accuracy above 93% using the ISBI2018 dataset.

The rest of the manuscript is organized as follows: the problem statement and major novelties are presented in Section 2. The proposed CAD system is described in Section 3. Section 4 presents the experimental results and analysis. Finally, the conclusion is given in Section 5.

    2 Problem Statement and Novelties

According to research in [33], poor lesion segmentation extracts poor features, which later degrade the classification accuracy. Poor contrast in skin lesions is the main factor behind poor segmentation; therefore, it is essential to improve local contrast before the lesion segmentation step. The second problem faced in the studies mentioned above is imbalanced datasets. Researchers resolve this issue by employing data augmentation, and a few of them balance the datasets based on a minimum class count. This is not a good approach because several images are ignored in the training process. Multiclass skin cancer classification is not an easy task due to the high similarity among skin lesions, as shown in Fig. 1. This figure demonstrates that bcc and bkl lesions have high similarity; similarly, melanoma and vasc lesions have high similarity. To handle this issue, more relevant and stronger features are required. Several extracted features are irrelevant and a few are redundant; hence, it is essential to remove these features before the classification phase.

Figure 1: Sample skin lesion types from the HAM10000 dataset [34]

    In this work, our major contributions are as follows:

i) We consider a pre-trained deep CNN model named DarkNet19 and apply fine-tuning for optimal weight generation. The third convolutional layer is utilized to fetch the gradient information after visualization. Later, all 128 visualized images are fused using the novel high-frequency approach along with a multilayered feed-forward neural network (HFaFFNN). The fused image is further enhanced by employing a log-opening based activation function.

ii) Two pre-trained deep neural networks, DarkNet53 and NasNet-Mobile, are fine-tuned and trained through transfer learning. The features from the second-last layers of both models are extracted and fused using a new approach named parallel max-entropy correlation (PEnC).

iii) A feature selection criterion is also proposed using the biologically inspired whale optimization algorithm (WOA) controlled by entropy- and kurtosis-based activation functions. Through this function, the best features are selected for the final classification.

    3 Proposed Methodology

A new end-to-end computerized method is proposed in this work for multiclass skin lesion localization and classification. Two main phases are included in this method. In the first phase, skin lesions are localized using a new CNN- and image-fusion-based approach. In the second phase, two pre-trained models are fine-tuned and trained using transfer learning. Features are extracted from the last feature layers and fused using a new approach named parallel max-entropy correlation (PEnC). After the fusion process, a new hybrid optimization method, named entropy-kurtosis controlled whale optimization (EKcWO), is implemented for optimal feature selection. The selected features are classified using the Softmax classifier for the final prediction. A complete flow diagram of the proposed method is shown in Fig. 2.

Figure 2: Proposed flow diagram of multiclass skin cancer classification

    3.1 Datasets

Three publicly available datasets are used in this work for the experimental process: HAM10000, ISBI2018, and ISBI2019. These datasets are used for the classification tasks. HAM10000 is one of the complex skin cancer datasets, containing 10,015 image samples of different resolutions. These images are categorized into seven lesion classes, akiec, bcc, bkl, df, nv, mel, and vasc, with 327, 514, 1,099, 115, 6,705, 1,113, and 142 images per label, respectively. The ISBI2018 dataset consists of 10,015 training images of seven skin lesion types: Nevus, Dermatofibroma, Melanoma, Pigmented Bowen's, Keratosis, Basal Cell Carcinoma, and Vascular. For testing and validation, 1,512 and 193 images are provided without ground truths. The ISBI2019 skin cancer dataset consists of eight classes: akiec, bcc, df, bkl, mel, SCC, vasc, and nv. This dataset is the combination of the HAM10000 and BCN datasets; moreover, a few clinical images are also included. The numbers of images in these classes are 867 (akiec), 3,323 (bcc), 239 (df), 2,624 (bkl), 4,522 (mel), 12,875 (nv), 628 (SCC), and 253 (vasc). A few sample images are shown in Fig. 1. Based on the above details, it is evident that these datasets are extremely imbalanced. We used the testing and validation images only for labeling, while a 50:50 split is considered for training and testing.

    3.2 Fine-Tuned DarkNet Model

The arrival of deep learning technology in machine learning has reformed the performance of AI. A deep model consists of a large number of layers. A deep model's structure starts from the input layer, and the input is then processed in the convolutional layer. In this layer, a convolutional operator is used to extract the features, called weights. This operation works based on filter parameters such as filter size and stride. Mathematically, it is formulated as follows. Consider $F_0$, an input image of the CNN having $r$ rows, $c$ columns, and $K$ color components, where $K = 3$ in this work. Hence, the image $F_0(x, y, z)$ is transformed in this model, where $0 \le x \le r$, $0 \le y \le c$, and $0 \le z \le K$ are the spatial coordinates. Using these points, a convolutional layer is defined as follows:

$$F_{map}(x, y) = \beta + \sum_{i=1}^{\omega} \sum_{j=1}^{\omega} \sum_{k=1}^{K} \omega_{i,j,k}\, F_0(x + i - 1,\; y + j - 1,\; k)$$

where the feature map of the convolutional layer is represented by $F_{map}$, $\beta$ represents an offset, and $\omega_{i,j,k} \in \omega \times \omega \times K$ represents the filters as a 2D array. Usually, in a CNN model, many filters are employed to increase the sharpness of object edges. Next, a batch normalization layer follows to reduce the number of epochs needed for fast training. Later, a leaky ReLU (LR) activation is added, which passes positive values unchanged to the next step and scales negative values by a small factor rather than discarding them. Mathematically, we define LR as follows:

$$LR(x) = \begin{cases} x, & x > 0 \\ 0.01\,x, & x \le 0 \end{cases}$$

Then, a max-pooling layer is applied to reduce the neighborhood. This layer is based on the filter size, mostly defined as $W \times H$. In this work, we consider the DarkNet19 pre-trained deep neural network [35] and fine-tune it according to our requirements. Our purpose is only the visualization of the convolutional layer features rather than training the full model. Therefore, we ignore the last layers and only consider the network up to the third convolutional layer, as shown in Fig. 3. The input of this network is 256×256×3. For the first convolutional layer, the filter size is 3×3, the number of filters is 32, the number of channels is 3, and the stride is 1×1. Following these operations, the learnable weight and bias matrices are of size 3×3×3×32 and 1×1×32, and the activation of this layer is 256×256×32. Then, batch normalization and leaky ReLU layers follow this convolutional layer, and a pooling layer is applied with filter size 2×2 and stride 2.

After this operation, the activation is 128×128×32. In the next step, a second convolutional layer is added, performing convolution with a filter of size 3×3, where the numbers of filters and channels are 64 and 32, respectively. The stride of this layer is 1, and the resultant activation is 128×128×64. The learnable weight and bias matrices are 3×3×32×64 and 1×1×64, respectively. Similar to the first convolutional layer, this one is followed by batch normalization and leaky ReLU layers. Then, a pooling layer is added with filter size 2×2 and stride 2, yielding an activation of 64×64×64.

A third convolutional layer is applied to these activations with filter size 3×3, where the numbers of filters and channels are 128 and 64. For this layer, the activation is 64×64×128, and the learnable weight and bias matrices are 3×3×64×128 and 1×1×128, respectively. In this layer, a total of 128 filters are employed, and for each filter the matrix size is 3×3×64. Hence, we visualize these filter matrices based on the filter size, as shown in Fig. 3. This figure illustrates that a total of 128 sub-images are reshaped. These images provide information about the input image according to its gradients and pixel-level content.
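To make the stated layer shapes concrete, the following PyTorch sketch (an illustration, not the authors' MATLAB implementation) reproduces the truncated DarkNet19 stem described above; the padding of 1 and the leaky-ReLU slope of 0.01 are assumptions taken from the surrounding text.

```python
import torch
import torch.nn as nn

# Truncated DarkNet19 stem, kept only up to the third convolutional layer
# whose 128 filter responses are visualized in Fig. 3.
stem = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1),    # 256x256x32
    nn.BatchNorm2d(32),
    nn.LeakyReLU(0.01),
    nn.MaxPool2d(kernel_size=2, stride=2),                   # 128x128x32
    nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),   # 128x128x64
    nn.BatchNorm2d(64),
    nn.LeakyReLU(0.01),
    nn.MaxPool2d(kernel_size=2, stride=2),                   # 64x64x64
    nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),  # 64x64x128
)

acts = stem(torch.randn(1, 3, 256, 256))
print(acts.shape)  # torch.Size([1, 128, 64, 64]): 128 feature maps to visualize
```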

Figure 3: Visualization of fine-tuned convolutional layer weights

We take these 128 images and fuse them into one image for better visualization of the lesion part. For this purpose, we implemented a hybrid approach named high-frequency fusion along with a feed-forward neural network (HFaFFNN). In this approach, these images are treated as high-frequency images, and their pixels are learned by a feed-forward neural network (FwNN), where the FwNN is multilayered. The multilayered FwNN (MLFwNN) considers one image pixel as one neuron. Two hidden layers, denoted by $H_l$ and $H_p$, with their respective inputs and neuron counts, are included in this network. The sigmoid is applied as the activation function of the hidden layers, and a linear function is used as the activation function of the output layer. The outputs of the $l$th and $p$th hidden layers are formulated as follows:

$$h_l = \sigma\Big(\sum_{i} \omega_{li}\, x_i + \beta_l\Big), \qquad h_p = \sigma\Big(\sum_{l} \omega_{pl}\, h_l + \beta_p\Big)$$

Here, the hidden-layer weights are represented by $\omega_{li}$ and $\omega_{pl}$, the outputs of the hidden layers by $h_l$ and $h_p$, and the bias of each hidden layer by $\beta_l$ and $\beta_p$, respectively. Finally, the output is computed by the following formulation:

$$\hat{y} = \sum_{p} \omega_{op}\, h_p + \beta_o$$

To assess this neural network's training performance, the mean square error rate (MSER) is computed. Based on the MSER value, the weights and biases are updated in the next step. Mathematically, the MSER of this network is calculated as follows:

$$MSER = \frac{1}{n} \sum_{i=1}^{n} \big(y_i - \hat{y}_i\big)^2$$

However, in our work we require a more useful and informative image; therefore, we update the weights and bias of the first hidden layer based on the number of iterations, where the number of iterations equals the number of image pixel values. Hence, the updated weights and bias are defined as follows:

$$\xi\omega_{li} = \omega_{li} - \varphi\, \frac{\partial\, MSER}{\partial\, \omega_{li}}, \qquad \xi\beta_{l} = \beta_{l} - \varphi\, \frac{\partial\, MSER}{\partial\, \beta_{l}}$$

Here, the updated weights are represented by $\xi\omega_{li}$ and the updated bias by $\xi\beta_l$. The learning rate in these equations is denoted by $\varphi$, with $\varphi = 0.001$. Hence, the newly updated hidden-layer output becomes:

$$h_l = \sigma\Big(\sum_{i} \xi\omega_{li}\, x_i + \xi\beta_l\Big)$$
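As an illustration only, the NumPy sketch below mirrors one training step of this network under the stated assumptions: two sigmoid hidden layers, a linear output, MSE loss, and a gradient-descent update of the first hidden layer with the learning rate φ = 0.001. The layer sizes and the backpropagation details are our assumptions, not the paper's.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
x = rng.random(16)                       # pixel inputs (one neuron per pixel)
y = rng.random(1)                        # target value
W1, b1 = rng.standard_normal((8, 16)), np.zeros(8)
W2, b2 = rng.standard_normal((4, 8)), np.zeros(4)
Wo, bo = rng.standard_normal((1, 4)), np.zeros(1)
phi = 0.001                              # learning rate from the text

h1 = sigmoid(W1 @ x + b1)                # l-th hidden layer
h2 = sigmoid(W2 @ h1 + b2)               # p-th hidden layer
out = Wo @ h2 + bo                       # linear output layer
mser = np.mean((y - out) ** 2)           # mean square error rate

# backpropagate the MSE to the first hidden layer and apply the update
d_out = 2 * (out - y) / out.size
d_h2 = (Wo.T @ d_out) * h2 * (1 - h2)
d_h1 = (W2.T @ d_h2) * h1 * (1 - h1)
W1 -= phi * np.outer(d_h1, x)            # updated weights (xi-omega_li)
b1 -= phi * d_h1                         # updated bias    (xi-beta_l)
```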

Finally, by employing these updated weights and biases, the fusion is performed. The fusion process is formulated through the following activation function:

Here, the first term represents the high-level decomposition images indexed by $(x, y)$, $L$ represents the levels, $d$ represents the directions, and $Iter(\cdot)$ represents the number of iterations per image based on the image pixels, as performed by the neural network. Finally, the high-level decomposition pixels are reconstructed using a MATLAB image reconstruction function to obtain an output image, shown in Fig. 4 (fused using the proposed approach). After this step, we apply a hybrid contrast stretching method to increase the pixel intensity range (Fig. 4, contrast stretching). A logarithmic function is applied for contrast enhancement, and the result is then converted into binary form using the Otsu thresholding approach (Fig. 4, binary image). An active-contour-based boundary is drawn on the original image based on the binary lesion image. The localized lesions are used for the subsequent classification task. A few sample qualitative results are illustrated in Fig. 5.
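A hedged sketch of the post-fusion steps just described (logarithmic contrast stretching followed by Otsu thresholding), using scikit-image in place of the authors' MATLAB routines; whether the lesion falls on the bright or dark side of the Otsu threshold depends on the fused image, so the mask may need inversion in practice.

```python
import numpy as np
from skimage import color, filters

def localize(fused_rgb):
    gray = color.rgb2gray(fused_rgb)
    stretched = np.log1p(gray)                                # log stretch
    stretched = (stretched - stretched.min()) / (np.ptp(stretched) + 1e-8)
    t = filters.threshold_otsu(stretched)                     # Otsu threshold
    return stretched > t                                      # binary lesion image
```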

Figure 4: Complete steps involved in lesion localization using the proposed approach

    3.3 Lesion Classification

Transfer Learning: Transfer learning is the reuse of a pre-trained deep learning model for a new task [36]. In this work, we use transfer learning to reuse a pre-trained model (trained on the 1,000 ImageNet classes) for skin cancer classification (a small dataset with at most 8 classes). Suppose the source data is $\Delta_s$, representing the ImageNet dataset, the source labels are $\Delta_L$ (1,000 object classes), and the objective function is $\Delta_o$. This process can be written as:

$$\Delta = \{\Delta_s,\; \Delta_L,\; \Delta_o\}$$

Figure 5: Proposed lesion localization visual results

Hence, we have three target components: the target data $\tilde{\Delta}_T$ (HAM10000, ISBI2018, and ISBI2019), the target labels $\tilde{\Delta}_L$, and the target objective function $\tilde{\Delta}_o$. This can be defined as:

$$\tilde{\Delta} = \{\tilde{\Delta}_T,\; \tilde{\Delta}_L,\; \tilde{\Delta}_o\}$$

Hence, transfer learning is applied to $\tilde{\Delta}$ by using the knowledge of $\Delta$. Visually, this process is shown in Fig. 6.
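A minimal sketch of this transfer-learning step in PyTorch. torchvision ships neither DarkNet53 nor NasNet-Mobile, so MobileNetV2 stands in here purely to illustrate swapping the 1,000-way ImageNet head for the skin-lesion classes.

```python
import torch.nn as nn
from torchvision import models

num_classes = 7  # e.g., HAM10000 / ISBI2018 lesion classes
model = models.mobilenet_v2(weights="IMAGENET1K_V1")      # ImageNet knowledge
# replace the 1000-way classifier head with a new skin-lesion head
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, num_classes)
# ... fine-tune with lr=0.002, mini-batch 64, weight decay 4e-5 as in the text
```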

Fine-Tuned NasNet-Mobile CNN: NasNet-Mobile [37] is a CNN architecture constructed through neural architecture search. This architecture contains building blocks, and each block consists of several layers (i.e., convolutional, pooling, and batch normalization layers). These blocks are optimized using reinforcement learning, and the process is repeated several times based on the capacity of the network. A total of 12 blocks are included in this network, with 5.3 M parameters and 564 M MACs. This network accepts an input image of $h \times w \times 3$, where $h = 224$ and $w = 224$. The input image pixels are considered the initial weights and are passed to the first convolutional layer, where a convolution is applied with filter size 3×3 and stride 2. Moreover, the numbers of channels and filters for the first layer are 3 and 32, respectively. For most of the blocks, a batch normalization layer with $\epsilon = 1.0 \times 10^{-3}$ is added. This process continues for all blocks in the network. In the end, all blocks are concatenated, a few additional layers are added, and a global average pooling layer follows, leading to the last fully connected layer.

Figure 6: Visual process of transfer learning for feature extraction

In this work, we remove the last fully connected layer and add a new layer according to the number of skin classes. Then, we train the new fine-tuned model using transfer learning. Due to the imbalanced datasets, we utilized 50% of the skin images for training in the learning process. For training, we initialized a learning rate of 0.002 and a mini-batch size of 64; moreover, the dropout factor was 0.5 and the weight decay 4e-5. After training this new model, we extract features from the last layer, the global average pooling (GAP) layer. At this layer, the number of extracted features is 1,056, giving a vector of dimension $N \times 1056$, where $N$ denotes the number of training images. Mathematically, this vector is represented by $\lambda_1^N$ of dimension $N \times k$, where $k$ denotes the feature length and $N$ the sample images.

Fine-Tuned DarkNet53 CNN: DarkNet53 [38] is a convolutional neural network (CNN) based model used to extract deep features for object detection and classification [39]. It has a 53-layer deep structure. This model combines the basic feature-extraction design of the YOLOv2 DarkNet19 with a deep residual network [40]. The network utilizes consecutive 1×1 and 3×3 convolutional layers and residual connections. Its smallest component consists of convolution, batch normalization (BN), and leaky ReLU layers. The input of this network is $h \times w \times 3$, where $h = 256$ and $w = 256$. The filter size of the first convolutional layer is 3×3 with stride 2. A batch normalization layer follows the convolutional layer, with epsilon $\epsilon = 1.0 \times 10^{-5}$. In this network, a leaky ReLU is added instead of a ReLU layer; through this layer, negative convolved weights are scaled by a factor of 0.01. This process continues for all 52 convolutional layers. In addition, a final stage follows with a global average pooling (GAP) layer and an FC layer.

We removed the last fully connected layer and added a new layer according to the number of skin cancer classes. Transfer learning is employed for training this new fine-tuned model. In the learning process, due to the imbalanced datasets, we utilized 50% of the skin images for training and the remaining 50% for testing. For training, we initialized a learning rate of 0.002 and a mini-batch size of 64; moreover, the dropout factor was 0.5 and the weight decay 4e-5. This new model's features are extracted from the last layer, the global average pooling (GAP) layer. At this layer, the number of extracted features is 1,056, giving a vector of dimension $N \times 1056$, where $N$ denotes the number of training images. Mathematically, this vector is represented by $\lambda_2^N$ of dimension $N \times l$, where $l$ denotes the feature length and $N$ the sample images.
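Feature extraction from the GAP stage of either fine-tuned backbone can be sketched as follows; treating `model.features` as the convolutional trunk is an assumption in the style of torchvision models, not the paper's code.

```python
import torch

@torch.no_grad()
def extract_gap_features(model, images):
    fmap = model.features(images)        # N x C x H x W feature maps
    return fmap.mean(dim=(2, 3))         # global average pool -> N x C matrix
```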

    3.3.1 Features Fusion

In this article, we propose a fusion technique based on a parallel process. The main reason for choosing a parallel approach instead of a serial one is to retain only the most relevant features and to minimize the dimension of the feature vector. Consider the two deep-learning feature vectors extracted above, $\lambda_1^N$ of dimension $N \times 1056$ and $\lambda_2^N$ of dimension $N \times 1056$, respectively. Suppose $\lambda_f^N$ is the fused feature vector of dimension $N \times \max(length)$. Initially, we calculate the maximum length among the extracted vectors as follows:

$$MLng = \max\big(length(\lambda_1^N),\; length(\lambda_2^N)\big)$$

Based on $MLng$, we define a resultant feature-vector matrix of dimension $N \times MLng$. Then, two entropy values, $E_1$ and $E_2$, are computed for the vectors $\lambda_1^N$ and $\lambda_2^N$, respectively:

$$E_i = -\sum_{j} p_j \log p_j, \qquad i \in \{1, 2\}$$

where $p_j$ denotes the normalized feature distribution of the respective vector.

The features of each corresponding vector are multiplied by their entropy value; the purpose of this operation is to retain only positive feature responses. Mathematically, this operation is formulated as follows:

$$\lambda_f^N = \max\big(E_1 \cdot \lambda_1^N,\; E_2 \cdot \lambda_2^N\big)$$
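A sketch of the PEnC fusion under stated assumptions: both feature matrices are padded to the maximum length MLng, each is weighted by its own entropy, and corresponding entries are fused in parallel; using the element-wise maximum as the final combination rule is our assumption.

```python
import numpy as np
from scipy.stats import entropy

def penc_fuse(f1, f2):
    m = max(f1.shape[1], f2.shape[1])                        # MLng
    f1 = np.pad(f1, ((0, 0), (0, m - f1.shape[1])))          # pad to MLng
    f2 = np.pad(f2, ((0, 0), (0, m - f2.shape[1])))

    def H(v):                                                # entropy weight E_i
        p = np.abs(v).ravel()
        return entropy(p / (p.sum() + 1e-12))

    return np.maximum(H(f1) * f1, H(f2) * f2)                # N x MLng fused matrix
```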

The resultant fused feature vector $\lambda_f^N$ of dimension $N \times 1056$ is further optimized for more accurate classification results. For optimization, we implemented a hybrid approach named entropy-kurtosis controlled whale optimization (EKcWO). The details of this technique are given in the next section.

    3.3.2 Features Optimization and Classification

Whale Optimization Algorithm (WOA) [41] is a recent optimization algorithm inspired by the natural hunting behavior of humpback whales. Three main steps are involved in this algorithm: (i) prey encircling, (ii) exploitation, and (iii) exploration [42]. The details of each step can be found in [41,42]. This algorithm is a wrapper-based approach because classification algorithms are applied to check the accuracy of the selected features. In this paper, we use the Softmax classifier to evaluate classification accuracy.

Given the fused feature vector $\lambda_f^N$ of dimension $N \times 1056$, let $\lambda_{sel}^N$ represent the selected optimal feature vector of dimension $N \times S$, where $S$ denotes the length of the optimally selected vector. Initially, we apply WOA, which returns the best features in each iteration. We add one more selection stage through a new activation function based on entropy and kurtosis (E&K). The features selected in each iteration are first passed through this function, and their fitness is then checked through a fitness function. This process continues until all initialized iterations are completed. The details of this hybrid algorithm are given in Algorithm 1. In this algorithm, the maximum number of iterations (max_Iter) is 50, $\alpha$ is a parameter that decreases linearly from 2 to 0, $vec_1$ and $vec_2$ are two coefficient vectors, $l$ is a random number in the range $[-1, 1]$, and $prob$ is another random number with a value between $[0, 1]$. In this algorithm, the position of the whales is updated for $prob < 0.4$ as follows:

$$\lambda(Iter + 1) = \lambda^{best}(Iter) - vec_1 \cdot Dist, \qquad Dist = \big|vec_2 \cdot \lambda^{best}(Iter) - \lambda(Iter)\big|$$

Algorithm 1: Entropy-Kurtosis controlled Whale Optimization (EKcWO)
Input: Fused feature vector λ_f^N of dimension N×1056.
Output: Optimally selected feature vector λ_sel^N of dimension N×S.
Step 1: Generate initial population ← λ_sel^N = (1, 2, 3, ..., f_n).
Step 2: Compute objective values using Softmax.
Step 3: λ*_best ← best selection.
While (Iter < max_Iter)
    for each search agent
        if (prob < 0.4)
            if (|vec_1| < 1) Update position ← λ*_best(Iter + 1).
            else Select random solution ← λ_rand.
            end if // end inner if statement
        else if (prob ≥ 0.4)
            Update position ← λ*_best(Iter + 1).
        end if // end outer if statement
    end for
    Step 4: Pass features through the kurtosis-controlled entropy activation function:
        — Compute kurtosis of λ*_best
        — Compute entropy of λ*_best
        — Combine in an activation function.
    Step 5: Compute fitness through the fitness function.
    Step 6: Update λ*_best.
    Step 7: Iter = Iter + 1.
End While
λ_sel^N = λ*_best ← final selected feature vector

Here, $Dist$ represents the distance between the current solution and the best selected features, and the random vectors take values in $[0, 1]$. For $prob \ge 0.4$, the position is updated as follows:

$$\lambda(Iter + 1) = Dist' \cdot e^{bl} \cdot \cos(2\pi l) + \lambda^{best}(Iter)$$

where $Dist' = |\lambda^{best}(Iter) - \lambda(Iter)|$, $b$ defines the shape of the logarithmic spiral, and $l$ is the random number in $[-1, 1]$ defined above.

Next, we propose a new activation function for an additional feature-selection step. The activation function combines the entropy and kurtosis of the candidate features, as outlined in Step 4 of Algorithm 1.

Based on this activation function, each selected feature is checked again, and the fitness is then computed through the following fitness function:

$$Fitness = \rho \cdot Er(Dist) + (1 - \rho)\, \frac{R}{f}$$

Here, $Er(Dist)$ represents the classification error, $R$ represents the cardinality of the selected subset, $f$ represents the total number of features, and $\rho \in [0, 1]$ weights the classification error against the subset size. This process continues for the maximum number of iterations, and at the end we obtain a final optimal selected feature vector of dimension $N \times 506$. This vector dimension can change according to the nature of the dataset and the selected iterations. Finally, these features are classified by the Softmax classifier [43] for the final classification. The proposed visual labeled results are shown in Fig. 7. The detailed testing results are discussed in Section 4.
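The selection machinery of Algorithm 1 can be sketched as follows; the multiplicative entropy-kurtosis gate and the weight ρ in the fitness function are our assumptions, since the paper's exact combination rule is not recoverable here.

```python
import numpy as np
from scipy.stats import entropy, kurtosis

def ek_gate(candidate):
    # assumed combined activation: entropy of the normalized magnitudes
    # scaled by the kurtosis of the candidate feature values
    p = np.abs(candidate) / (np.abs(candidate).sum() + 1e-12)
    return entropy(p) * kurtosis(candidate)

def fitness(error_rate, mask, rho=0.9):
    # classification error traded against the fraction of kept features,
    # Fitness = rho * Er + (1 - rho) * R / f
    R, f = mask.sum(), mask.size
    return rho * error_rate + (1 - rho) * R / f
```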

Figure 7: Proposed predicted labeled images

    4 Experimental Results and Analysis

Experimental Setup: In this section, the results of the proposed method are presented. As shown in Fig. 2, the proposed method works through a sequence of steps; therefore, we compute the results of each step to show the importance of the next step. As described in Section 3.1, three datasets are used for the experimental process; hence, we computed results on each dataset with several experiments. The Softmax classifier is used as the main classifier in this work for the classification of selected features. 70% of each dataset's dermoscopic images are used to train the model, while the rest are used for testing. For cross-validation, we used 10-fold validation. The performance of the Softmax classifier is also compared with a few other classifiers: fine tree (F-Tree), Gaussian Naïve Bayes (GNB), SVM with a cubic kernel function (C-SVM), extreme learning machine (ELM), fine KNN (F-KNN), and ensemble boosted tree (EBT). Each classifier's performance is analyzed using the following measures: sensitivity rate (Sen), precision rate (Prec), F1 score, AUC, accuracy (Acc), and computational time. The simulations in this work are conducted in MATLAB 2020b on a desktop computer with 16 GB of RAM and a 16 GB graphics card.
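For reference, the reported measures can be computed as below with scikit-learn; macro averaging over the lesion classes is our assumption, since the paper does not state the averaging scheme.

```python
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

def report(y_true, y_pred, y_scores):
    # y_scores: per-class probability matrix for the multiclass AUC
    return {
        "Acc":  accuracy_score(y_true, y_pred),
        "Sen":  recall_score(y_true, y_pred, average="macro"),
        "Prec": precision_score(y_true, y_pred, average="macro"),
        "F1":   f1_score(y_true, y_pred, average="macro"),
        "AUC":  roc_auc_score(y_true, y_scores, multi_class="ovr"),
    }
```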

    4.1 Results

Results of the ISBI2018 Dataset: Tab. 1 presents the proposed classification results for multiclass skin lesions. The Softmax classifier produced the best performance, with a 97.1% accuracy rate using the proposed framework. The sensitivity and precision rates of Softmax are 86.34% and 97.41%, respectively. The F1 score and AUC were also computed for this classifier, with values of 91.54% and 0.987. The computational testing time is 21.627 s. The C-SVM produced the second-best performance, with 92.3% accuracy; its sensitivity, precision, and F1 score are 77.07%, 84.40%, and 80.57%, respectively. For the rest of the classifiers, the precision rates are 66.20%, 72.37%, 92.02%, 92.01%, and 92.65%. The minimum computational time is 7.9919 s; however, the accuracy of that classifier is almost 12% lower than Softmax. Moreover, the performance of the Softmax classifier is verified in Fig. 8 in the form of a confusion matrix, which illustrates that the DF and VASC skin classes have high error rates. The main challenge in this work is the imbalanced dataset; hence, due to the low number of sample images, the error rate is high for these two classes.

Table 1: Proposed multiclass skin lesion classification results for the ISBI2018 dataset

Figure 8: Confusion matrix of the Softmax classifier using the proposed method for the ISBI2018 dataset

Tab. 2 presents the comparison of the proposed framework's accuracy with the individual steps involved in this work. Initially, we computed the classification results without lesion localization: the original images are passed directly into the proposed framework, achieving an accuracy of 90.8% with a computational time of 16.1295 s. In the second step, only DarkNet53 is employed; for this experiment, the noted accuracy was 92.1%, but the time increased to 34.8915 s. In the third step, results are computed for the NasNet-Mobile CNN, achieving an accuracy of 91.6%. In the fourth step, we removed the feature selection step and only fused the features; for this experiment, the accuracy improved to 94.2%, with a computational time of 24.1168 s. In the last step, we consider the entire proposed framework and achieve an accuracy of 97.1%, which is 6.3% higher than the first-step accuracy. The computational time of this experiment is 21.627 s. Overall, it is noticed that the lesion localization step consumes much time, but it improves the classification accuracy. Moreover, we compared the proposed accuracy with a few other neural networks, as illustrated in Fig. 9, which confirms that the proposed fusion and selection methods outperform them on this dataset.

Figure 9: Comparison of the proposed method with existing pre-trained deep learning models using the ISBI2018 dataset

Table 2: Comparison of the proposed accuracy with individual steps

Results of the ISBI2019 Dataset: Tab. 3 presents the proposed classification results for multiclass skin lesions using the ISBI2019 dataset. The Softmax classifier produced the best performance, with an 85.3% accuracy rate using the proposed framework. The sensitivity and precision rates of Softmax are 73.3% and 82.6%, respectively. Moreover, the F1 score and AUC computed for this classifier are 77.68% and 0.9725. The computational time of this classifier is 76.3046 s. The GNB obtained the second-best performance, with 84.9% accuracy; its sensitivity, precision, and F1 score are 74.13%, 81.25%, and 77.53%, respectively. For the rest of the classifiers, the precision rates are 71.21%, 78.96%, 79.07%, and 82.59%. The minimum computational time is 61.112 s; however, the accuracy of that classifier is almost 9% lower than Softmax. Moreover, the performance of the Softmax classifier is verified in Fig. 10 in the form of a confusion matrix, which illustrates that the BKL, DF, SCC, and VASC skin classes have low accuracy rates due to fewer images and high similarity.

Table 3: Proposed multiclass skin lesion classification results using the ISBI2019 dataset

Figure 10: Confusion matrix of the Softmax classifier using the proposed method for the ISBI2019 dataset

Tab. 4 presents the comparison of the proposed framework's accuracy with the individual steps shown in Fig. 2. Initially, we computed the classification results without lesion localization and obtained an accuracy of 78.6%, with a computational time of 39.448 s. In the second step, only DarkNet53 is employed, achieving an accuracy of 80.4%, but the time increased to 89.162 s. This time shows that the lesion localization step consumes much time. In the third step, results are computed for the NasNet-Mobile CNN, achieving an accuracy of 81.9%. In the fourth step, we removed the feature selection step and only fused the features of both networks; for this experiment, the accuracy improved to 82.6%, with a computational time of 91.290 s. In the last step, we consider the entire proposed framework and achieve an accuracy of 85.3%, which is 6.7% higher than the first-step accuracy and 2.7% higher than the fusion step. The computational time of this experiment is 76 s. Overall, it is noticed that the lesion localization step is important for improved classification accuracy, and the selection step reduces the computational time while increasing accuracy. Moreover, we compared the proposed accuracy with a few other neural networks, as illustrated in Fig. 11, which shows the significance of the proposed fusion and selection steps for this dataset.

Figure 11: Comparison of the proposed method with existing pre-trained deep learning models using the ISBI2019 dataset

Table 4: Comparison of the proposed accuracy with individual steps

Results of the HAM10000 Dataset: The proposed classification results for HAM10000 are presented in Tab. 5. The best precision rate and accuracy on this dataset are 92.22% and 95.8%, obtained with the Softmax classifier. The sensitivity rate is 84.20%, which can be verified in Fig. 12, showing the confusion matrix of the Softmax classifier. Moreover, the F1 score and AUC of this classifier are 88.03% and 0.9721, and its computational time is 9.546 s. The second-best accuracy, 94.9%, was achieved by C-SVM; its sensitivity, precision, and F1 score are 76.64%, 95.67%, and 85.10%, respectively. For the rest of the classifiers, the precision rates are 62.05%, 96.05%, 93.17%, 92.28%, and 93%. The minimum computational time is 8.189 s for F-KNN; however, the accuracy of this classifier is almost 3% lower than Softmax, and the time difference between the two is small. The comparison of the proposed accuracy with the individual steps involved in the proposed framework is presented in Tab. 6; from this table, the proposed accuracy is significantly better. Moreover, the proposed method's accuracy is also compared with other neural networks, as illustrated in Fig. 13, which shows the significance of the proposed accuracy.

Figure 12: Confusion matrix of the Softmax classifier using the proposed method for the HAM10000 dataset

Table 5: Proposed multiclass skin lesion classification results using the HAM10000 dataset

    4.2 Comparison

In this section, we analyze the proposed results based on the confidence interval and compare the accuracy of our proposed method with recent techniques, as presented in Tabs. 7 and 8. Tab. 7 shows that only a minor change in accuracy occurs after executing the proposed framework 100 times. In Tab. 8, each method is compared based on the dataset and evaluation measures. Accuracy is used as the main measure; however, a few works also consider precision and F1 score. Huang et al. used the HAM10000 dataset and achieved an accuracy of 85.8% and a precision rate of 75.18%. Carcagnì et al. reported an accuracy of 90% and an F1 score of 82%. In contrast, our method obtained an improved accuracy of 95.8% and a precision rate of 92.22%. Similarly, for ISBI2018, the most recent best accuracy was 93.4%, whereas our method achieved an accuracy of 97.1%. For the ISBI2019 dataset, our approach obtained an accuracy of 85.3%. From this table, it is shown that the proposed method works better on these selected datasets.

Table 6: Comparison of the proposed accuracy with individual steps using the HAM10000 dataset

Figure 13: Comparison of the proposed method's accuracy with existing pre-trained deep learning models using the HAM10000 dataset

Table 7: Confidence-interval-based analysis of the proposed accuracy

Table 8: Comparison of the proposed method with existing techniques

    5 Conclusion

This paper presents a computerized architecture for multiclass skin lesion classification using deep neural networks. The main challenge of this work was training a deep model on imbalanced datasets. Therefore, in our method, we first localize the skin lesions for more useful feature extraction. This process improves the classification accuracy but increases the overall system's computational time. The localized lesions are utilized for training the pre-trained CNN models. The features are extracted from the last layers and fused using a new parallel-based approach. This approach's main advantage is the fusion of the most correlated features while controlling the length of the feature vector. However, a few redundant and irrelevant features are also added, which degrades the final classification accuracy. Therefore, we implemented a hybrid feature optimization approach: the best features are selected using the proposed hybrid method and finally classified using the Softmax classifier. The experimental process was conducted on three extremely imbalanced datasets, on which our method achieved improved performance. This work's main strength is the localization and fusion process; moreover, the selection of the most optimal features decreases the computational time, as described in the results section. This work's main limitation is the occasional incorrect localization of the skin lesion, which leads to wrong features in the later stage. In the future, we will focus on a more optimized lesion localization approach that is useful for real-time lesion localization with improved accuracy.

Acknowledgement: The authors would like to thank COMSATS University Islamabad, Wah Campus, for technical support in this work.

Funding Statement: This research was supported by the Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
