
    Skin Lesion Segmentation and Classification Using Conventional and Deep Learning Based Framework

    Computers, Materials & Continua, 2022, Issue 5

    Amina Bibi, Muhammad Attique Khan, Muhammad Younus Javed, Usman Tariq, Byeong-Gwon Kang, Yunyoung Nam,*, Reham R. Mostafa and Rasha H. Sakr

    1 Department of Computer Science, HITEC University, Taxila, Pakistan

    2 College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Khraj, Saudi Arabia

    3 Department of ICT Convergence, Soonchunhyang University, Asan, 31538, Korea

    4 Information Systems Department, Faculty of Computers and Information Sciences, Mansoura University, Mansoura, 35516, Egypt

    5 Computer Science Department, Faculty of Computers and Information Sciences, Mansoura University, Mansoura, 35516, Egypt

    Abstract: Background: In medical image analysis, the diagnosis of skin lesions remains a challenging task. Skin lesion is a common type of skin cancer that exists worldwide. Dermoscopy is one of the latest technologies used for the diagnosis of skin cancer. Challenges: Many computerized methods have been introduced in the literature to classify skin cancers. However, challenges remain, such as imbalanced datasets, low-contrast lesions, and the extraction of irrelevant or redundant features. Proposed Work: In this study, a new technique is proposed based on a conventional and deep learning framework. The proposed framework consists of two major tasks: lesion segmentation and classification. In the lesion segmentation task, contrast is first improved by the fusion of two filtering techniques, and a color transformation is then applied to improve color discrimination of the lesion area. Subsequently, the best channel is selected and the lesion map is computed, which is further converted into binary form using a thresholding function. In the lesion classification task, two pre-trained CNN models were modified and trained using transfer learning. Deep features were extracted from both models and fused using canonical correlation analysis. During the fusion process, a few redundant features were also added, lowering classification accuracy. A new technique called maximum entropy score-based selection (MESbS) is proposed as a solution to this issue. The features selected through this approach are fed into a cubic support vector machine (C-SVM) for the final classification. Results: The experimental process was conducted on two datasets: ISIC 2017 and HAM10000. The ISIC 2017 dataset was used for the lesion segmentation task, whereas the HAM10000 dataset was used for the classification task. The achieved accuracies for the two datasets were 95.6% and 96.7%, respectively, which were higher than those of existing techniques.

    Keywords: Skin cancer; lesion segmentation; deep learning; feature fusion; classification

    1 Introduction

    Skin cancer is a popular research topic owing to the high number of deaths and diagnosed cases [1]. Cancer is a group of diseases characterized by the unrestrained development and spread of atypical cells, which may cause death if the expansion of irregular cells is not controlled. Skin carcinoma is an irregular expansion of skin cells that frequently appears on skin exposed to sunlight or ultraviolet rays. Skin cancer is a fatal disease that can be classified into two types: melanoma and benign (basal cell and squamous cell carcinoma). Benign lesions usually respond to treatment and rarely spread to other skin tissues. Melanoma is a dangerous type of skin cancer that starts in the pigment cells. Skin cancer develops as a result of malignant lesions, and melanoma accounts for approximately 75% of all skin cancer deaths [2].

    In the United States, 207,390 cases were reported in 2021, of which 106,110 were noninvasive and 101,280 invasive, including 62,260 men and 43,850 women. The estimated death count in the USA in 2021 was 7,180, including 4,600 men and 2,580 women (https://www.cancer.org/content/dam/cancer-org/research/cancer-facts-and-statistics/annual-cancer-facts-and-figures/2021/cancer-facts-and-figures-2021.pdf). The number of cases reported in the United States in 2020 was 100,350, including 60,190 men and 40,160 women, with 6,850 deaths from melanoma, including 4,610 men and 2,240 women. In 2019, the total number of skin cancer patients in the USA was 192,310, with a death count of 7,230, including 4,740 men and 2,490 women. In 2020, an estimated 16,221 new cancer cases were diagnosed in Australia, including 9,480 men and 6,741 women, with a death count of 1,375, including 891 men and 484 women (https://www.canceraustralia.gov.au/affected-cancer/cancer-types/melanoma/statistics). According to dermatologists, if a melanoma is not detected at a very early stage, it spreads to nearby tissues or the entire body. However, if detected early, there is a good chance of survival [3]. Melanoma has received considerable attention from the research community because of its high mortality rate.

    Dermatologists have previously used the ABCDE rule, a seven-point checklist, laser technology, and a few other methods [4]. However, these methods require an expert dermatologist. In addition, manual inspection and diagnosis of skin cancer using these methods is difficult, time-consuming, and expensive. Therefore, it is essential to develop a computerized method for automated skin cancer segmentation and classification [5]. Dermoscopy is a new technology for the diagnosis of skin cancer [6]. Through dermoscopy, RGB images of the skin are captured and later analyzed by experts. A computerized method consists of the following steps: preprocessing of dermoscopic images, segmentation of skin lesions, feature extraction, and finally classification [7]. Preprocessing is the step in which low-contrast images are enhanced and artifacts such as hair and noise are removed using different dermoscopic image techniques [8]. This step is followed by the segmentation step, in which the lesion region is segmented based on the shape and color of the lesion and the irregularity of its border [9]. Many techniques for lesion segmentation have been introduced in the literature; some focus on traditional techniques, and a few use convolutional neural networks (CNNs). Feature extraction is the third step, used to represent an image [10]. In this step, image features such as color, texture, and shape are extracted. Color is an important feature in skin cancer classification [11]. These different features are later fused to obtain the maximum image information [12]. However, one major disadvantage is the high computational time required to complete this step. Many researchers have implemented feature selection techniques to select the most valuable features. The main purpose of this approach is to obtain maximum accuracy with less computational time. In addition, this step is useful for removing redundant or irrelevant features before classification [13,14]. The final step is to classify the features. Features are assigned by different classifiers to a relevant category, such as benign or malignant [5].

    More recently, deep learning models have been shown to contribute significantly to medical image analysis for both segmentation and classification [15,16]. In deep learning, CNNs are used for classification as they are composed of several hidden layers, such as convolutional, pooling, batch normalization, ReLU, and fully connected layers [17,18]. Computer vision studies have introduced many techniques for the segmentation and classification of skin lesions. Afza et al. [19] presented a hierarchical framework for skin lesion segmentation and classification. They began with a preprocessing step to enhance the quality of images before running a segmentation algorithm. Later, the ResNet50 model was fine-tuned and features were extracted. The extracted features were refined using the grasshopper optimization algorithm and classified using the Naïve Bayes algorithm. The experimental process was conducted on three dermoscopy datasets, and improved accuracy was achieved. Zhang et al. [20] presented an intelligent framework for multiclass skin lesion classification. In this method, the skin lesions were initially segmented using Mask RCNN. In the classification phase, they proposed a 24-layer CNN model. Three datasets were used for the experimentation of the segmentation phase, and the HAM10000 dataset was used for classification. On these datasets, the accuracy of the proposed method was improved. Akram et al. [21] presented a CAD system for skin lesion localization. They applied a de-correlation operation in the initial step and then passed the result to Mask RCNN for lesion segmentation. In the next step, the DenseNet201 pre-trained model was modified, and features were extracted from two layers. The extracted features were fused and further refined using a selection block. The experimental process was conducted on dermoscopy datasets, and improved performance was achieved. Alom et al. [22] introduced a deep learning architecture for the segmentation of skin lesions. In this model, the best features are initially selected to better represent the lesion region, and then an inception RCNN is applied for the final lesion classification. Dermoscopy datasets were employed for evaluation, and improved accuracy was achieved. Thomas et al. [23] applied interpretable CNN models for the classification of skin cancers. In this method, outer padding was applied in the first step, and the image was then iterated through overlapping tiles. The next step segments the lesion, which is later cropped for the final segmentation. Al-Masni et al. [24] presented a two-stage deep learning framework for skin lesion segmentation and classification. The segmentation was performed using a full-resolution CNN (FrCN), and four pre-trained networks were considered for the final classification. Sikkandar et al. [25] presented a computerized method for the segmentation and classification of skin lesions using traditional techniques. The authors combined the GrabCut algorithm and a Neuro-Fuzzy (NF) classifier for the final classification. In the preprocessing step, top-hat filtering and in-painting techniques were applied. In the later step, the GrabCut algorithm was applied for the segmentation task. In the feature extraction phase, deep learning features were extracted and finally classified using the NF classifier. A mutual bootstrapping method was also presented in [26] for skin lesion classification.

    The methods discussed above have some limitations that affect the performance of skin lesion segmentation and classification. The major issues are as follows: i) the presence of hair, bubbles, and irrelevant areas not required for detecting accurate skin lesions; ii) low-contrast skin lesions, which are a factor in inaccurate lesion segmentation; iii) the need for knowledge of useful feature extraction for the accurate classification of skin lesion types; iv) the presence of irrelevant features that mislead correct classification; v) manual inspection of skin lesions is time-consuming; and vi) accuracy always depends on an expert. In this work, we propose a new computerized method that amalgamates traditional and deep learning methods. The proposed method includes contrast enhancement of dermoscopic images, segmentation of skin lesions, deep learning feature extraction and fusion, selection of the best features, and classification. Our major contributions are as follows:

    ·A contrast enhancement approach was implemented based on the fusion of a haze reduction approach and fast local Laplacian filters. The fusion process was followed by the HSV color transformation.

    ·The best channel is selected based on the probability value, and then a saliency map is constructed, which is later converted into a binary form using a threshold function.

    ·Two modified pre-trained models, MobileNet V2 and VGG16, were trained on dermoscopic datasets using transfer learning. Later, the features were extracted from the dense layers.

    ·Features were fused using canonical correlation analysis and later refined using maximum entropy score-based selection (MESbS).

    The remainder of this article is organized as follows: the proposed methodology is presented in Section 2, and the results are detailed in Section 3. Finally, the conclusions are presented in Section 4.

    2 Proposed Methodology

    The proposed method comprises two main tasks: lesion segmentation and classification. For lesion segmentation, a hybrid contrast enhancement technique was proposed, and the best channel was selected based on the histogram. Subsequently, an activation function was proposed to construct a saliency map. In the later stage, a threshold function is applied to convert the image into binary form, which is then mapped onto the original image for final detection. For classification, two pre-trained models were modified and trained through transfer learning. Features were extracted from both models and fused using canonical correlation analysis (CCA). Subsequently, the fused vector was further refined using the highest entropy scores. Finally, multiple classifiers were used for the classification of the selected features. Several datasets were used for the experimental process, and the results were obtained in visual and numeric form. The detailed architecture of the proposed methodology is illustrated in Fig. 1.

    2.1 Lesion Segmentation Task

    As shown in Fig. 1, the proposed method performs two tasks: lesion segmentation and classification. The lesion segmentation task is described in this section. Here, a hybrid method was initially proposed for contrast enhancement of the original dermoscopy images. Then, an HSV color transformation was applied, and the best channel was selected based on the activation function. Subsequently, a lesion map was constructed based on the selected channel. The resultant lesion map was finally converted into binary form based on a threshold function. The details of each step are as follows:

    Hybrid Contrast Enhancement: The first step was hybrid contrast enhancement. Here, the image quality is enhanced and bubbles are removed. For this purpose, two techniques were implemented, and the resultant information was fused into one image. First, a haze reduction technique was implemented to clear the boundaries of the lesion region. Assume U(x) is the input image, S(x) is the hazy image, and Y(x) is the medium of transmission. Seff(x) is the image affected by haze and is represented as follows:

    Figure 1:Proposed parallel architecture of skin lesion segmentation and classification

    This image is affected by reflected light represented as follows:

    Here, Ω represents a local patch with its origin at x. After this method, an estimation of the transmission Y(x) is required before proceeding. Second, a fast local Laplacian filter was implemented to smooth the image and emphasize the edges. The local Laplacian can be defined as follows:

    where Z(I) is a sample function, h is the reference value, γ controls the amount of increase and decrease, α controls the dynamic range compression and expansion, and wz defines the threshold function.

    where r represents the distribution time, i.e., how long the process runs, and u is the weighting function, which is 1.

    where t represents the number of iterations performed, A4 represents the 4-neighborhood of t, and P denotes the input image. Mathematically, it is defined as follows:

    where Hωx and Hωz are the Gaussian kernels. After that, the HSV color transformation is applied. HSV consists of three channels: hue, saturation, and value. Through this transformation, the image is refined in terms of lesion colors. The visual results of this transformation are shown in Fig. 2b. From this figure, we extracted the hue channel for lesion map construction. Mathematically, this channel is defined as follows:

    where R′ = R/255, G′ = G/255, B′ = B/255, Amax = max(R′, G′, B′), Amin = min(R′, G′, B′), and Δ = Amax − Amin. In the next step, an activation function was constructed based on the multiplication function, after which a lesion map was constructed; this was later converted into binary form using a threshold function. Mathematically, the activation and threshold functions are defined as follows:

    Figure 2:Proposed lesion segmentation task results.(a) Original image; (b) Enhanced image; (c)Binary segmented image; (d) Mapped image; (e) Localized image

    Here, 0.4 is computed as the mean value of all computed pixels of H. The visual results of the threshold function are shown in Fig. 2. In this figure, the binary images are shown in (c), whereas the lesion-mapped and final localized images are illustrated in (d) and (e), respectively. The final localized images are compared with the ground-truth images for the final evaluation process. The numerical segmentation results are presented in Tab. 1. The ISIC 2017 dataset was used for the experimental process, and an average dice rate of 95.6% was achieved. For each image, three parameters were computed: dice, Jaccard distance, and Jaccard index.
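    The hue extraction and mean-threshold binarization described above can be sketched in NumPy. This is a minimal illustration only: the hybrid contrast enhancement and lesion-map activation steps are omitted, and the function names (`hue_channel`, `segment_lesion`, `dice`) are our own, not from the paper.

    ```python
    import numpy as np

    def hue_channel(rgb):
        """Compute the HSV hue channel (scaled to [0, 1)) from an RGB image in [0, 255]."""
        rgb = rgb.astype(np.float64) / 255.0
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        amax = rgb.max(axis=-1)
        amin = rgb.min(axis=-1)
        delta = amax - amin
        h = np.zeros_like(amax)
        mask = delta > 0
        # Piecewise hue definition depending on which channel is the maximum.
        rmax = mask & (amax == r)
        gmax = mask & (amax == g) & ~rmax
        bmax = mask & ~rmax & ~gmax
        h[rmax] = ((g - b)[rmax] / delta[rmax]) % 6
        h[gmax] = (b - r)[gmax] / delta[gmax] + 2
        h[bmax] = (r - g)[bmax] / delta[bmax] + 4
        return h / 6.0

    def segment_lesion(rgb):
        """Threshold the hue channel at its mean value to obtain a binary lesion mask."""
        h = hue_channel(rgb)
        return h > h.mean()

    def dice(pred, truth):
        """Dice coefficient between two binary masks, as used for evaluation."""
        inter = np.logical_and(pred, truth).sum()
        return 2.0 * inter / (pred.sum() + truth.sum())
    ```

    The Jaccard index can be derived from dice as J = D / (2 − D), and the Jaccard distance is 1 − J.
    
    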

    Table 1:Sample numerical results of lesion segmentation task

    2.2 Skin Lesion Classification

    During this phase, skin lesions are classified into relevant categories such as melanoma, bkl, and others. For classification, features were extracted from the input images. Feature extraction is an important step in pattern recognition, and many descriptors have been proposed in the literature. More recently, deep learning has shown success in the classification of medical infections [27,28]. A CNN is a deep learning method used for feature extraction [29]. A simple CNN model consists of many layers, such as convolutional, ReLU, pooling, normalization, fully connected, and softmax layers.
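    As a minimal illustration of the layer types listed above, a single convolution → ReLU → max-pool forward pass can be written in NumPy (toy shapes and a hand-set averaging kernel; real CNNs stack many such blocks with learned kernels):

    ```python
    import numpy as np

    def conv2d_valid(img, kernel):
        """Single-channel 2-D convolution with valid padding, the core CNN operation."""
        kh, kw = kernel.shape
        h, w = img.shape
        out = np.empty((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
        return out

    def max_pool(x, size=2):
        """Non-overlapping max pooling, used for spatial downsizing."""
        h, w = x.shape
        x = x[: h - h % size, : w - w % size]
        return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

    relu = lambda x: np.maximum(x, 0)

    # Minimal conv -> ReLU -> pool forward pass on a toy 6x6 "image".
    img = np.arange(36, dtype=float).reshape(6, 6)
    feat = max_pool(relu(conv2d_valid(img, np.ones((3, 3)) / 9)))  # shape (2, 2)
    ```
    
    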

    VGG16—VGG-16 contains N fully connected layers, where N = 1, 2, 3, ..., with PN units in the Nth layer, for NST = 224, NX = 224, and NXH = 3. The dataset is represented by γ, each training sample belongs to γ, and each sample is real-valued.

    where r(·) represents the ReLU6 activation function, ST expresses the number of rows, X the number of columns, and XH the number of channels. The bias vector and the first-layer weights n(1) are defined as follows:

    The output of the first layer is used as the input of the second layer and so on.This is shown in the mathematical form below:

    where n(2) ∈ R^(N(2)×N(1)). φ(z) represents the last fully connected layer, which is used for high-level feature extraction. Mathematically, the last layer is expressed as follows:

    Visually, the architecture of VGG16 is shown in Fig. 3.

    Figure 3:Architecture of VGG-16 CNN model

    In Fig. 3, the original architecture includes a total of 16 layers; the first 13 layers are convolutional and the final three are fully connected. The output was generated using softmax. In this study, we modified the VGG-16 pre-trained CNN model for skin cancer classification. For this purpose, the last layer was removed, and a new layer that included seven classes of skin carcinoma was added. These classes are known as the target labels. Transfer learning was then applied to transfer the knowledge of the original model to the target model and obtain a new customized CNN model. This model can be used for feature extraction. The modified architecture of the VGG-16 model is shown in Fig. 4.

    MobileNetV2—MobileNet V2 is a CNN model designed specifically for portable and resource-constrained environments. It is founded on an inverted residual structure in which the connections of the residual structure are linked to the bottleneck layers [30]. There are 153 layers in MobileNet V2, and the size of the input layer is h×w×k, where h = 224, w = 224, and k represents the channels, of which there are three in the first layer. There are two types of residual blocks in MobileNet V2, with strides 1 and 2. These blocks have three types of layers and are used for downsizing. The first layer is a 1×1 convolution with ReLU6, where ReLU6 is the activation function min(max(x, 0), 6). The second layer is a depthwise convolution used to discard unnecessary information, and the third layer is a 1×1 convolution without nonlinearity. Each layer has batch normalization and an activation function, but the third layer has only batch normalization because its output has low dimensionality, and applying ReLU6 there would decrease performance [31]. The convolutional block of MobileNet V2 is shown in Fig. 5. In the basic architecture of MobileNet V2, there is a convolution layer with 32 filters, followed by 19 residual bottleneck layers. The detailed architecture is presented in Tab. 2.
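    The ReLU6 activation used in these blocks is simply the ReLU clipped at 6, which can be written in one line:

    ```python
    import numpy as np

    def relu6(x):
        """ReLU6 activation used in MobileNetV2: min(max(x, 0), 6)."""
        return np.minimum(np.maximum(x, 0.0), 6.0)
    ```

    Clipping at 6 keeps activations in a small fixed range, which helps low-precision inference on mobile hardware.
    
    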

    Figure 4:Fine-tuned VGG-16 model for lesion classification

    Figure 5:MobileNetV2 convolutional blocks [31]

    In the original architecture, there were 153 layers, with the output generated in the last layer. In our work, we used the MobileNet-V2 pre-trained CNN model for skin cancer classification. For this purpose, the original architecture was fine-tuned and the last layer was removed. Subsequently, a new layer was added that includes the seven skin lesion classes. These classes are known as the target labels. Subsequently, transfer learning (TL) was used to transfer the knowledge of the original model to the target model and obtain a new customized CNN model. The TL process is discussed in the next section. After the training process, features are extracted from the feature layer (convolutional layer).

    Table 2:Architecture of mobileNetV2 [31]

    2.2.1 Transfer Learning for Feature Extraction

    Transfer learning is a technique that transfers information from a pre-trained model to a modified CNN model for a new task. The primary objective is to obtain better performance on the target problem [32]. Given a source domain Ds and a target domain DT, the learning tasks are Ts and Tt, respectively. Transfer learning assists the learning of the target predictive function F(t) in the target domain using knowledge from the source domain and learning task, where Ds ≠ Dt and Ts ≠ Tt. Fig. 6 illustrates the TL process for the modified VGG16 model for skin lesion feature extraction. In Fig. 6, the source data are the ImageNet dataset, the source model is VGG16, and there are 1000 source labels. The target data ψτ are the HAM10000 dataset, the target model is the modified VGG16, and there are seven target labels, represented by ψL. Through TL, the weights and parameters of the VGG16 model are transferred to the modified VGG16 model while the following condition holds.

    Similarly, this process was performed for the modified MobileNet V2 CNN model. In this case, the MobileNet V2 model was used as the source model and the modified MobileNet V2 model as the target model (Fig. 6). After training both modified models, deep learning features were extracted from the FC7 layer (modified VGG16) and the convolutional layer (modified MobileNet V2). The sizes of the two extracted feature vectors were N×4096 and N×1056, respectively.
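    The head-replacement idea behind this TL step can be sketched schematically in NumPy. The layer names and shapes below are illustrative placeholders, not the exact VGG16 or MobileNet V2 parameterization:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # "Source" model pretrained on ImageNet: feature layers plus a 1000-class head.
    source = {
        "features_w": rng.normal(size=(4096, 512)),  # transferable representation layers
        "head_w": rng.normal(size=(512, 1000)),      # ImageNet-specific classifier head
    }

    # Target model for HAM10000: reuse the feature weights, replace the head with 7 classes.
    target = {
        "features_w": source["features_w"].copy(),   # knowledge transferred via TL
        "head_w": rng.normal(size=(512, 7)) * 0.01,  # new head, trained from scratch
    }

    # After fine-tuning, deep features are read from a penultimate layer,
    # giving an N x d feature matrix for N images (here N = 10, d = 512).
    acts = rng.normal(size=(10, 4096))
    deep_features = np.maximum(acts @ target["features_w"], 0)  # ReLU features
    ```

    In practice this is done inside a deep learning framework by swapping the final layer of the pre-trained network for a 7-unit layer and fine-tuning.
    
    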

    Figure 6:Transfer learning process for VGG16 CNN model

    2.2.2 Features Fusion and Selection

    Feature fusion is an important research area, and many techniques have been introduced for the fusion of two or more feature vectors [33]. The most useful fusion techniques are serial-based, parallel, and correlation-based approaches. In this study, we used the CCA approach [34] for the fusion of the two extracted feature vectors. Using CCA, a fused vector of dimension N×1750 is obtained. However, after the fusion process, we determined that some features were repeated and had to be removed from the final vector. For this purpose, a new method called MESbS is proposed. In this approach, the entropy vector is first computed from the fused feature vector (column-wise). Then, the entropy vector is sorted in descending order. Subsequently, we compute the mean value of the entropy vector and use this value as a threshold for selecting the best features. Mathematically, this process is defined as follows:

    where EnFV represents the entropy feature vector, μ is the mean entropy value, and Func is the final threshold function. Through this function, the features Fv(i) with values greater than the mean are considered for the final selection, and the remaining features are discarded. Finally, the selected features are classified using a multiclass SVM classifier with a one-against-all method.
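    A minimal NumPy sketch of the entropy-score selection described above follows. The histogram-based entropy estimate and the function names are our assumptions; the paper does not specify how per-feature entropy is computed:

    ```python
    import numpy as np

    def column_entropy(fv, bins=16):
        """Estimate the Shannon entropy of each feature (column) of a feature matrix."""
        ent = np.empty(fv.shape[1])
        for j in range(fv.shape[1]):
            hist, _ = np.histogram(fv[:, j], bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            ent[j] = -(p * np.log2(p)).sum()
        return ent

    def mesbs_select(fv):
        """Keep the features whose entropy exceeds the mean entropy (mean as threshold)."""
        ent = column_entropy(fv)
        return fv[:, ent > ent.mean()]
    ```

    Constant (zero-entropy) or near-duplicate columns fall below the mean threshold and are discarded, which is the intended redundancy removal.
    
    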

    3 Experimental Results and Discussion

    3.1 Experimental Setup

    This section presents the experimental setup for the proposed classification process. The HAM10000 dataset [35] was used. This dataset consists of approximately 10,000 dermoscopic images in RGB format covering seven skin lesion classes: Bkl, Bcc, Vasc, Akiec, Nevi, Mel, and Df. The dataset is highly imbalanced because of the high variation in the number of sample images in each class. Several classifiers were used to compare the accuracy of the proposed method against the cubic SVM. To train the classifiers, a 70:30 split was used: 70% of the images were used for training and 30% for testing. The recall rate (TPR), precision rate (PPV), FNR, AUC, accuracy, and time were calculated for each classifier in the evaluation process. All experiments were conducted in MATLAB 2020b on a system with an Intel(R) Core(TM) i5-7200U CPU running at 2.50–2.70 GHz, 16 GB RAM, and an 8 GB graphics card.
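    A "cubic SVM" corresponds to an SVM with a degree-3 polynomial kernel. The 70:30 hold-out protocol can be reproduced, for example, with scikit-learn; the features below are synthetic stand-ins, not the HAM10000 deep features:

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Synthetic two-class stand-in for the extracted deep features (200 samples, 5 dims).
    X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(3, 1, (100, 5))])
    y = np.array([0] * 100 + [1] * 100)

    # 70:30 hold-out split as used in the paper's experiments.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y
    )

    # "Cubic SVM": SVM with a degree-3 polynomial kernel.
    clf = SVC(kernel="poly", degree=3, coef0=1).fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    ```

    For the multiclass case, scikit-learn's SVC uses a one-vs-one scheme by default; a one-against-all scheme, as used in the paper, could be obtained by wrapping the classifier in `OneVsRestClassifier`.
    
    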

    3.2 Proposed Classification Results

    This section presents the proposed classification results in numerical and confusion-matrix form. The results were obtained from four different experiments. In the first experiment, features were extracted from the modified VGG16 CNN model and used in the experimental process. The results are presented in Tab. 3. From this table, it can be observed that the cubic SVM achieved a better accuracy of 78.2%, with a computational time of approximately 468 s. The minimum computational time in this experiment was 83.230 s, for the fine KNN classifier. The recall rate of the cubic SVM was 78.2%.

    Table 3:Classification results using modified VGG16 CNN model

    In the second experiment, features were extracted from the modified MobileNet V2 CNN model and used in the experimental process. The results presented in Tab. 4 show that the best accuracy of 82.1% was achieved with the cubic SVM, which performed better than the other classifiers listed in this table. The computational time of the cubic SVM was approximately 91 s, whereas the minimum recorded time was 20 s, for the linear discriminant classifier. The recall rate of the cubic SVM was 82.1%. This table also illustrates that the correct prediction accuracy of each class is better than in the confusion matrix of the modified VGG16 (Tab. 4). In addition, the accuracy of this experiment was improved compared with Experiment 1.

    Table 4:Classification results using modified mobileNetV2 model

    In the third experiment, we fused the features of both models using CCA. The results are presented in Tab. 5, which shows that the maximum accuracy achieved is 82.8%, on the cubic SVM. The other calculated evaluation measures include a recall rate of 82.1%, a precision rate of 82.97%, an FNR of 17.03%, and an AUC value of 0.97. The computational time of this classifier is 988.07 s. The recall rate of the cubic SVM is 82.81%, as shown in Fig. 7. The minimum time required for this experiment was approximately 245 s. From Tab. 5, it can be observed that the accuracy of all classifiers increases slightly; however, the execution time increases significantly. This indicates that many redundant features are included in the fused vector, which degrades the classification accuracy.

    Table 5:Classification result of fused models

    Figure 7:Confusion matrix of cubic SVM for fused features of both models

    In the fourth experiment, features were selected using the MESbS approach; the results are detailed in Tab. 6. It can be observed that the top accuracy is 96.7%, on the cubic SVM, whereas the additional calculated measures are a recall rate of 88.31%, a precision rate of 94.48%, an FNR of 5.52%, and an AUC value of 0.98. The computational time is 51.771 s, which is significantly lower than in Experiments 1 and 3. The recall rate of the cubic SVM was 88.31%, as shown in Fig. 8. From Fig. 8, it can be observed that the correct prediction accuracy of each skin lesion class is considerably higher than in the first three experiments. In addition, the overall computational time of this experiment decreased. Hence, based on these results, we can demonstrate that the proposed feature selection outperforms the individual and fused models. A fair comparison was also conducted with recent techniques, given in Tab. 7, which shows that the proposed framework outclasses them for multiclass lesion classification.

    Table 6:Skin lesion classification results using proposed framework

    Figure 8:Confusion matrix of cubic SVM using proposed framework

    Table 7:Comparison of the proposed method with recent techniques

    4 Conclusion

    A conventional and deep learning-based framework is proposed in this study for skin lesion segmentation and classification using dermoscopy images. Two tasks were performed. In the first task, skin lesions were segmented using conventional techniques. The contrast of lesions was improved for accurate lesion-map creation, which in turn improves segmentation accuracy. The segmentation performance was evaluated on the ISIC 2017 dataset, achieving an accuracy of 95.6%. In the classification task, the VGG16 and MobileNet V2 CNN models were fine-tuned and trained through TL on dermoscopic images. These models perform well according to recent studies in the medical image processing field. Features were extracted from these fine-tuned CNN models and fused using the CCA approach. The main purpose of fusion in this study was to increase image information. However, some redundant features were also added during the fusion process, which affects classification accuracy. Therefore, we propose MESbS, a novel feature selection method. This method selects the best features, which are then classified using the C-SVM classifier. Our experimental results demonstrate better accuracy than existing techniques. We conclude that the lesion contrast enhancement step improves segmentation accuracy, and that selecting the best features increases classification accuracy and minimizes execution time. Future studies will focus on CNNs for lesion segmentation and on providing segmented lesions to modified models for useful feature extraction.

    Funding Statement: This research was supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

欧美成狂野欧美在线观看| 久久精品人妻少妇| 精品久久久久久成人av| 亚洲精品美女久久av网站| 黑人操中国人逼视频| 久久久久久久精品吃奶| av在线蜜桃| 欧美黄色淫秽网站| 免费电影在线观看免费观看| 亚洲av成人一区二区三| 中亚洲国语对白在线视频| 三级国产精品欧美在线观看 | 国产真人三级小视频在线观看| 99久久综合精品五月天人人| 1024香蕉在线观看| 9191精品国产免费久久| 男女视频在线观看网站免费| 午夜福利高清视频| 熟女少妇亚洲综合色aaa.| 国产精品一区二区三区四区久久| 一本综合久久免费| 免费人成视频x8x8入口观看| 国产一级毛片七仙女欲春2| 日本撒尿小便嘘嘘汇集6| 男女做爰动态图高潮gif福利片| 亚洲在线观看片| 免费大片18禁| 亚洲电影在线观看av| 久久这里只有精品19| 热99在线观看视频| 天堂av国产一区二区熟女人妻| 成人午夜高清在线视频| 国产99白浆流出| 亚洲一区高清亚洲精品| 免费电影在线观看免费观看| 亚洲av片天天在线观看| 国产精品久久视频播放| 男人舔奶头视频| 亚洲国产精品久久男人天堂| 久久香蕉国产精品| 国产乱人伦免费视频| 亚洲人成电影免费在线| 午夜福利欧美成人| 窝窝影院91人妻| 国产成人精品久久二区二区91| av在线蜜桃| 99久久综合精品五月天人人| 日本精品一区二区三区蜜桃| 国产免费男女视频| 香蕉国产在线看| 少妇人妻一区二区三区视频| xxx96com| 国产伦人伦偷精品视频| 欧美日韩黄片免| 热99re8久久精品国产| 日本五十路高清| 日韩欧美国产在线观看| 欧美性猛交╳xxx乱大交人| 变态另类成人亚洲欧美熟女| 老司机午夜十八禁免费视频| 国产蜜桃级精品一区二区三区| 中文字幕高清在线视频| 成人无遮挡网站| 久久久久亚洲av毛片大全| 亚洲国产中文字幕在线视频| 国产一区二区在线av高清观看| 曰老女人黄片| 无人区码免费观看不卡| 久久久成人免费电影| 亚洲午夜理论影院| 高清毛片免费观看视频网站| 无遮挡黄片免费观看| 淫妇啪啪啪对白视频| 免费大片18禁| 老汉色∧v一级毛片| 午夜福利成人在线免费观看| 国产亚洲欧美在线一区二区| av福利片在线观看| 国产视频内射| 男女床上黄色一级片免费看| 亚洲国产精品合色在线| 国产美女午夜福利| 欧美午夜高清在线| 国产成+人综合+亚洲专区| 午夜精品在线福利| 91av网一区二区| 可以在线观看毛片的网站| 母亲3免费完整高清在线观看| www.自偷自拍.com| 国产又黄又爽又无遮挡在线| 国产精品自产拍在线观看55亚洲| 亚洲电影在线观看av| 99久久综合精品五月天人人| 国产精品九九99| 久久国产精品影院| 美女高潮的动态| 免费高清视频大片| 网址你懂的国产日韩在线| 久久久久性生活片| 男女床上黄色一级片免费看| 国产毛片a区久久久久| 三级毛片av免费| 久久中文看片网| 老司机深夜福利视频在线观看| 午夜精品在线福利| 亚洲片人在线观看| 国产亚洲精品av在线| 久久九九热精品免费| 一a级毛片在线观看| 午夜日韩欧美国产| 午夜免费激情av| 国产真人三级小视频在线观看| 亚洲av中文字字幕乱码综合| 日韩高清综合在线| 亚洲国产精品999在线| 丁香六月欧美| 高清毛片免费观看视频网站| 巨乳人妻的诱惑在线观看| 波多野结衣巨乳人妻| 美女cb高潮喷水在线观看 | 久久久精品大字幕| 啪啪无遮挡十八禁网站| 人人妻人人澡欧美一区二区| 怎么达到女性高潮| 色精品久久人妻99蜜桃| 无遮挡黄片免费观看| 国产精品野战在线观看| 俺也久久电影网| 一a级毛片在线观看| 中文资源天堂在线|