
    Gastrointestinal Tract Infections Classification Using Deep Learning

Computers, Materials & Continua, December 2021

Muhammad Ramzan, Mudassar Raza, Muhammad Sharif, Muhammad Attique Khan and Yunyoung Nam

1Department of Computer Science, COMSATS University Islamabad, Wah Campus, 47040, Pakistan

2Department of Computer Science, HITEC University Taxila, Taxila, 47080, Pakistan

3Department of Computer Science and Engineering, Soonchunhyang University, Asan, Korea

Abstract: Automatic gastrointestinal (GI) tract disease recognition is an important application of biomedical image processing. Conventionally, microscopic analysis of pathological tissue is used to detect abnormal areas of the GI tract. The procedure is subjective and results in significant inter-/intra-observer variations in disease detection. Moreover, the huge frame rate of video endoscopy makes it an overhead for gastroenterologists to observe every frame with a detailed examination. Consequently, there is a huge demand for a reliable computer-aided diagnostic system (CADx) for diagnosing GI tract diseases. In this work, a CADx was proposed for the diagnosis and classification of GI tract diseases. A novel framework is presented in which preprocessing (LAB color space) is performed first; then, local binary pattern (LBP) texture features and deep learning (InceptionNet, ResNet50, and VGG-16) features are fused serially to improve the prediction of abnormalities in the GI tract. Additionally, principal component analysis (PCA), entropy, and minimum redundancy and maximum relevance (mRMR) feature selection methods were analyzed to acquire the optimized characteristics, and various classifiers were trained using the fused features. Open-source color image datasets (KVASIR, NERTHUS, and stomach ULCER) were used for performance evaluation. The study revealed that the subspace discriminant classifier provided an efficient result with 95.02% accuracy on the KVASIR dataset, which proved to be better than the existing state-of-the-art approaches.

Keywords: Convolutional neural network; feature fusion; gastrointestinal tract; handcrafted features; feature selection

    1 Introduction

The medical industry is adopting advanced technology, through which it can improve healthy living. With the help of endoscopy and other techniques, medical doctors can visualize the human body's internal tracts, from the mouth to the intestines, that were unapproachable in the past. Generally, vast expertise is required of medical doctors for problem recognition in the gastrointestinal (GI) tract [1]. Upper endoscopy and colonoscopy are the two main endoscopic methods. In upper endoscopy, a tube is inserted through the mouth, throat, and stomach, and the small intestine is examined. During colonoscopy, the tube is inserted through the anus to examine the rectum and colon. The lower part of the GI tract consists of the bowel, which is affected by several illnesses, such as cancer and chronic inflammation.

In the United States, 60–70 million people are affected by GI diseases every year [2]. Early examination and tests are carried out to detect colon disorders with the help of colonoscopy. The screening test procedure requires significant time for the medical specialist and high costs, which causes an unpleasant environment and dissatisfaction for the patients. Norway and the United States have performed tests that cost $450 and $1100 per case of GI complaint, respectively [3].

Several research methodologies improve the health care system by using technologies such as artificial intelligence, multimedia data analyses, and distributed processing [4]. Several research societies have offered many proposals for automatic abnormality detection in the GI tract [5]. Different diagnostic imaging modalities are used for diagnosing human body abnormalities, such as CT scan, X-ray, and MRI. However, GI tract abnormalities are observed through colonoscopy and endoscopy (traditional and wireless) [6]. A challenge with endoscopy is that it is time consuming for gastroenterologists to go through each image and mark irregularities, making the procedure hectic and costly [7]. Similarly, colonoscopy faces miss-rate challenges because doctors fail to find abnormalities.

Wireless capsule endoscopy uses a CMOS camera housed in a capsule that is swallowed by the patient. The capsule endoscopy camera transmits the captured images to a receiving digital storage unit for up to 7 h. After swallowing, the patient can perform normal activities as usual [8]. In contrast, in traditional wired video endoscopy, the gastroenterologist can control the wire to observe the desired area in the GI tract, whereas in capsule endoscopy, the captured frames are beyond the gastroenterologist's control. Therefore, the fundamental aim of this study was to predict the variations from the norm in the GI tract through wired endoscopy. The major goal is to resolve the multi-class categorization issue in the GI tract by characterizing GI tract pictures into various categories. A computer-aided diagnostic system (CADx) assists medical experts in diagnosing and detecting abnormalities by providing an effective assistant for pathological findings. Therefore, the demand for medical image datasets is increasing worldwide for automatic disease detection, recognition, and assessment. Deep learning models are becoming vital players in spotting abnormalities in the GI tract.

The proposed method comprises five steps. Preprocessing is the first step, in which histogram equalization and color space transformation methods are employed for image enhancement. Visual information is learned in the second phase using handcrafted and deep learning methods. Accordingly, the local binary pattern (LBP) method is adopted to extract handcrafted features, while InceptionNet, ResNet50, and VGG-16 are utilized for acquiring deep features. Principal component analysis (PCA), entropy, and minimum redundancy and maximum relevance (mRMR) were analyzed in the third step, which improved the classification accuracy. In the fourth step, feature fusion is employed serially. The last and most important phase is classification, where several supervised classifiers are trained using the integrated features. The proposed model is compared with several state-of-the-art methods. We observed that the proposed approach achieved improved results and performance.

The manuscript is organized as follows: Related works are presented in Section 2; Section 3 details the proposed approach; Section 4 highlights the outcomes of the tests performed; Section 5 summarizes the achievements.

    2 Related Work

Endoscopy is the key to the treatment and diagnosis of diseases in the GI tract. CADx systems have recently been introduced, in which operator variations in existing endoscopy procedures are diminished and guidance is provided for accurate diagnoses of the disease [9]. The CADx system classifies diseases found in the GI tract using the training and testing feature sets. Generally, classification results depend on methods such as preprocessing, feature extraction, and feature selection. Additionally, preprocessing involves segmentation and image enhancement processes that help diagnose illness in the GI tract [10]. Feature extraction enriches system accuracy and reduces system computation [7]. It is categorized into two methods: handcrafted and deep learning. Handcrafted features include shape (superpixel), texture (Gabor), and statistical, cellular, and color features. Meaningful handcrafted approaches help refine features to classify melanoma dermoscopy images [11]. Additionally, color features are valuable and return the location information of the disease; shape features include the histogram of oriented gradients (HOG) [12]; and segmentation-based fractal texture analysis (SFTA) is employed to acquire features from the grayscale image to obtain information on shape gradient and orientation. LBP features render the information of image patterns from color images [12,13]. In the past decades, extracting well-organized and optimized image features has been the primary goal of image classification tasks. Information is extracted from images from different perspectives, such as handcrafted features using a color histogram [14], which calculates the color distribution of the images. Similarly, edge and texture information is collected by Gabor and LBP, whereas HOG can extract shape information that helps in disease detection. However, handcrafted features fail to capture all aspects of the frames. Thus, combined deep convolutional neural network (CNN) and handcrafted features have been utilized.

Additionally, CNN models, such as AlexNet, ResNet50, InceptionNet, and VGG16, learn visualized features more precisely than handcrafted methods. The performance of CNNs in image recognition tasks is therefore outstanding; however, various handcrafted features still play an important role in some domains. Handcrafted features provide image content from specific aspects, complementary to the information a CNN provides in image classification tasks. A CNN learns features automatically, and thus it is difficult to understand the kind of features learned by the network. Using a CNN, it is also difficult to control the composite features of a network. Therefore, some researchers have attempted to understand the interpretability and explainability of networks [15]. Consequently, in many studies, handcrafted and CNN features have been studied and implemented together. These studies provide ideas about the links between CNN and handcrafted features, which has now become a new research area in computer vision and image processing. Transfer learning techniques are introduced with different classification learning techniques without redesigning neural networks; moreover, classification performance is evaluated, and diseases are automatically detected [16]. In previous studies, feature fusion was introduced, where CNN and handcrafted features were fused; in some domains, many handcrafted features played an important role, but the obtained information did not describe all aspects of the images, so CNN features were introduced alongside handcrafted features [17]. Therefore, deep learning and texture features are integrated so that the performance of the model can be enhanced, domain information can be extracted from the frames of endoscopy, and multimedia content and machine learning techniques can be explored [18].

    3 Proposed Methodology

In this study, a novel framework is presented that comprises five phases: preprocessing, feature extraction, feature selection, feature fusion, and classification. The LAB color space transformation and histogram equalization methods were used in the preprocessing phase to increase the accuracy of the model. Features are learned using CNN and handcrafted methods, and feature selection methods, such as PCA, entropy, and mRMR, are analyzed as alternative ways of selecting features. Additionally, the proposed study centers on the feature fusion approach, in which CNN and handcrafted features are fused, and various classifiers are then trained. Fig. 1 shows the proposed model. The proposed model's results were compared with existing state-of-the-art methods, which proved the effectiveness and robustness of the model. The steps of the proposed model are discussed in detail in the following sections.

    3.1 Preprocessing

In this study, the transformation to the L*a*b* color space was performed, and the individual components of L*a*b* were extracted. The luminance of the L* component was equalized by the histogram equalization method, and the L* component was then merged with the a* and b* components, resulting in an enhanced L*a*b* frame. The complete KVASIR dataset, which consists of 4000 frames, was enhanced using this method. Preprocessing improves the overall performance of the model. The preprocessing process is illustrated in Fig. 2. The L*a*b* space evaluates colors better than the RGB color space and separates luminosity from color. The L*a*b* color space comprises three channels: luminosity L* and chromaticity a* and b*, where L* represents lightness levels such that 0 represents black and 100 represents white. Similarly, a* and b* both have intensity values ranging from -128 to +127.

Moreover, the L* component is used to adjust the contrast because it closely matches the human perception of luminosity. Therefore, for transformation to the L*a*b* space, the RGB channels are first converted to CIE XYZ channels and then to the L*a*b* space channels. The transformation is expressed as follows:

$$L^* = 116\,f\!\left(\frac{Y}{Y_n}\right) - 16,\qquad a^* = 500\left[f\!\left(\frac{X}{X_n}\right) - f\!\left(\frac{Y}{Y_n}\right)\right],\qquad b^* = 200\left[f\!\left(\frac{Y}{Y_n}\right) - f\!\left(\frac{Z}{Z_n}\right)\right]$$

where X, Y, Z and X_n, Y_n, Z_n are components of the CIE XYZ color space and its reference-white tristimulus values, respectively. In addition, histogram equalization is a common technique for image enhancement that equalizes individual pixel values and improves the overall dataset performance.
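The preprocessing step can be expressed in a few lines of code. The following is a minimal sketch, assuming OpenCV (cv2) and an 8-bit BGR input frame; the function name and file path are illustrative, not from the paper.

```python
import cv2

def enhance_frame(bgr_frame):
    # Convert to L*a*b* so luminosity can be treated separately from color.
    lab = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    # Histogram-equalize only the luminosity (L*) channel.
    l_eq = cv2.equalizeHist(l)
    # Merge the equalized L* back with the untouched a* and b* channels.
    lab_eq = cv2.merge((l_eq, a, b))
    return cv2.cvtColor(lab_eq, cv2.COLOR_LAB2BGR)

# Hypothetical usage on one endoscopic frame:
enhanced = enhance_frame(cv2.imread("kvasir_frame.jpg"))
```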

Figure 1: CNN and handcrafted features fusion model

    3.2 Feature Extraction

Detailed information cannot be represented efficiently when the images are in raw form. Therefore, descriptors were used for feature extraction, from which abnormalities were found in the images. There are several types of features in the image processing domain, such as spatial- and frequency-domain features. The spatial domain is employed for feature acquisition from endoscopic images. The techniques of feature extraction and reduction have become essential in computer vision owing to applications such as agriculture, robotics, surveillance, and medicine. The purpose of feature acquisition is to transform the input image data such that significant information can be extracted. Thus, the focus of feature extraction is to reduce the computation time and enhance the overall system performance. Two methods, handcrafted and CNN-based, are considered for feature acquisition.

Figure 2: Image enhancement with L*a*b* color space and histogram equalization methods

    3.2.1 Handcrafted Features Extraction

Several methods are used for feature extraction; however, LBP features return the best performance in combination with deep features. Hence, the LBP feature extraction method was employed in this study. The LBP operator represents texture information [19]. The LBP code represents the circular neighborhood of a pixel. Let LBP_{U,V} denote the LBP code, where U represents the number of sample points in a neighborhood of radius V, g_c is the gray intensity of the center pixel, and g_u is the gray value of its u-th adjacent pixel. The LBP_{U,V} mathematical model is as follows:

$$\mathrm{LBP}_{U,V} = \sum_{u=0}^{U-1} s(g_u - g_c)\,2^{u}, \qquad s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}$$

After LBP-based feature extraction, a histogram of the codes is constructed to represent an image and is used for pattern recognition as the feature vector. Fig. 3 illustrates the visualization of the extracted LBP features.
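As an illustration, here is a minimal sketch of LBP histogram extraction, assuming scikit-image; the non-rotation-invariant uniform variant with U = 8 sample points yields exactly the 59-bin histogram used as the 1 x 59 LBP feature vector in this study.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, points=8, radius=1):
    # "nri_uniform" LBP over 8 sample points produces 59 distinct codes,
    # matching the 59-dimensional LBP feature vector used in the paper.
    codes = local_binary_pattern(gray_image, points, radius, method="nri_uniform")
    hist, _ = np.histogram(codes, bins=59, range=(0, 59))
    return hist / (hist.sum() + 1e-8)  # normalized 1 x 59 feature vector
```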

Figure 3: LBP features (a) Original image (b) Visual features

    3.2.2 CNN Features Extraction

Generally, feature extraction techniques based on CNNs are used in several image processing problems [20], such as face recognition and breast cancer mitosis detection [21,22]. In this study, various experiments were performed for feature extraction (visual information) using various deep learning models; however, the best results were achieved with three transfer learning models, namely ResNet50 [23], InceptionNet [24], and VGG16 [25].

The architecture of the VGG16 model is a series network consisting of 41 layers. The network accepts a 224 × 224 input size; the most repeated active layers in the network are the convolutional, rectified linear unit (ReLU), and max-pooling layers. It has a total of 16 weight layers, of which 13 are convolutional and three are fully connected. The first convolutional layer has a 3 × 3 filter size, with stride and padding set to one. The features were acquired from the flattened layer, referred to as the fully connected (fc7) layer, with an output size of 1 × 1 × 4096, weight size of 4096 × 4096, and bias of 4096 × 1. Finally, the network provides a 4000 × 4096 feature set over the complete dataset. Fig. 4 shows the architecture of VGG16, including visual features selected from the convolutional 4, convolutional 5_1, convolutional 5_2, and convolutional 5_3 layers.

Figure 4: VGG16 architecture and visual features selected from convolutional layers (conv 4, conv 5_1, conv 5_2 and conv 5_3 layers)

The architecture of the ResNet50 model, referred to as a directed acyclic graph (DAG) network, consists of 177 layers. The architecture comprises five stages, each containing a convolutional block and identity blocks. There are three convolutional layers in a single convolutional block, and each identity block also consists of three convolutional layers. The network accepts a 224 × 224 input size; the most repeated active layers in the network are the convolutional, batch normalization, ReLU, and max-pooling layers. The first convolutional layer contains a 7 × 7 filter with a depth of 64, using padding 3. After the convolutional layer, a batch normalization layer is computed with 64 channels. The next layer is max pooling with stride 2 and padding 0. The convolutional and other operations are repeated by applying more layers to create a denser network, which can improve accuracy; however, the required computational power also increases, which cannot be ignored. The features are acquired from the fully connected layer, which has an output size of 1 × 1 × 1000, weight size of 1000 × 2048, and bias of 1000 × 1; finally, the network provides 4000 × 1000 features over the complete dataset. Fig. 5 shows the architecture of ResNet50 and the visual features selected from layers res2b_branch2a, res3c_branch2c, res4f_branch2b, and res5c_branch2a.

Figure 5: ResNet50 architecture and visual features selected from layers (res2b_branch2a, res3c_branch2c, res4f_branch2b and res5c_branch2a)

The architecture of the InceptionNet model is a convolution-based DAG network consisting of 316 layers. The network accepts a 299 × 299 input size; the most repeated active layers in the network are the convolutional, batch normalization, average-pooling, depth concatenation, ReLU, and max-pooling layers. The network branches are joined together at depth concatenation points, where the network is divided into three or four branches that represent a dense network. The features are acquired from the average pooling layer (avg_pool) in the network, which has an output size of 1 × 1 × 2048, an offset of 1 × 1 × 320, and a scale of 1 × 1 × 320; finally, the network provides 4000 × 2048 features over the entire dataset. Fig. 6 shows the visual features that were selected from convolutional layers conv2d_1, conv2d_10, conv2d_52, and conv2d_94.

Image recognition performance has been improved by deep CNNs in recent years. InceptionNet is an example of a deep neural network; very good performance is achieved using this architecture, while the computation cost is very low. The accuracy achieved is credible when CNNs are used in a composite fashion. The number of convolutional layers of VGG16 is greater than that of AlexNet, and this CNN retains three fully connected layers [26]. Adding the features of the VGG16 network to the InceptionNet and ResNet50 features in the feature fusion matrix improves the overall efficiency of the proposed model (see Fig. 7 for a detailed view of the extracted features and their fusion after feature selection). Fig. 7 represents the feature extraction method for a single frame, whereas the features of all 4000 frames are extracted from the KVASIR dataset. The size of the feature vector of the same image differs across CNN models: ResNet50 (1 × 1000), InceptionNet (1 × 2048), VGG16 (1 × 4096), and LBP (1 × 59). When the visual features of all 4000 frames are extracted, the feature set of every model becomes ResNet50 (4000 × 1000), InceptionNet (4000 × 2048), VGG16 (4000 × 4096), and LBP (4000 × 59). Subsequently, the best scores are computed by the PCA, entropy, and mRMR methods using the extracted features, which are then fused in a serial fashion and used by various classifiers for GI tract disease classification.
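As an illustration of this extraction step, the sketch below pulls the corresponding feature dimensions from pretrained ImageNet models, assuming TensorFlow/Keras; the Keras layer name "fc2" stands in for the fc7 layer referenced above, and the input batches are assumed to be preprocessed to each network's input size.

```python
from tensorflow.keras.applications import VGG16, ResNet50, InceptionV3
from tensorflow.keras.models import Model

# VGG16: tap the second 4096-d fully connected layer (Keras name "fc2",
# corresponding to the fc7 layer referenced in the text).
vgg = VGG16(weights="imagenet")
vgg_fc7 = Model(vgg.input, vgg.get_layer("fc2").output)      # N x 4096

# ResNet50: the 1000-d output of the final fully connected layer.
resnet = ResNet50(weights="imagenet")                        # N x 1000

# InceptionV3: the 2048-d global average-pooling (avg_pool) output.
inception = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def deep_features(batch_224, batch_299):
    # batch_224: N x 224 x 224 x 3 frames (VGG16/ResNet50 input size)
    # batch_299: N x 299 x 299 x 3 frames (InceptionV3 input size)
    f_vgg = vgg_fc7.predict(batch_224)
    f_res = resnet.predict(batch_224)
    f_inc = inception.predict(batch_299)
    # Over 4000 frames: 4000 x 4096, 4000 x 1000, 4000 x 2048, respectively.
    return f_vgg, f_res, f_inc
```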

Figure 6: InceptionV3 architecture and visual features selected from layers (conv2d_1, conv2d_10, conv2d_52 and conv2d_94)

Figure 7: CNN and handcrafted features extraction, selection and fusion model

    3.3 Features Selection

Three approaches are analyzed in this study for better feature selection.

    3.3.1 PCA

Feature selection methods such as PCA are used to reduce the size of the feature vectors. PCA is utilized to transform correlated variables into uncorrelated variables, also called clusters, and to calculate the optimized distance between each cluster in order to draw principal components between them. Moreover, PCA computes the learned features, such as the handcrafted and deep CNN extracted features. In addition, the dataset contains information on the common structure of latent content extracted by PCA. Generally, when the dataset size is very large, PCA is considered a popular technique in multivariate scenarios [27]. The first and second principal components, P_1 and P_2, respectively, are represented with N variables and multiple data samples, where x_1, x_2, ..., x_N enter the linear combinations of variables:

$$P_1 = \alpha_{11}x_1 + \alpha_{12}x_2 + \dots + \alpha_{1N}x_N, \qquad P_2 = \alpha_{21}x_1 + \alpha_{22}x_2 + \dots + \alpha_{2N}x_N$$

The first component shows the greatest variance among the components in the sample space, and $A = (\alpha_{11}, \alpha_{12}, \dots, \alpha_{1N})$ are the weights that provide the greatest value of P_1.

All transformations are performed mostly as matrix multiplications, which makes computation fast; P is the overall PCA transformation of the variables, where A contains the eigenvectors and the diagonal elements are the eigenvalues, which give the variance explained by each principal component [28].
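A minimal sketch of this reduction with scikit-learn, assuming each model's features are stacked as a 4000 x D matrix; the 200-component target mirrors the reduced dimensions reported in Section 3.3.3.

```python
from sklearn.decomposition import PCA

def reduce_with_pca(features, n_components=200):
    # Project the columns onto the top principal components, i.e., the
    # orthogonal directions of greatest variance in the sample space.
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(features)   # e.g., 4000 x 4096 -> 4000 x 200
    return reduced
```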

    3.3.2 Entropy

Entropy is an optimal feature searching algorithm that resolves the problem of the selected initial population. It reduces the features based on the highest entropy and computes them repeatedly up to the final optimal features. The entropy method finds the root node and computes all entities [29].

The entropies ε(LBP), ε(ResNet), ε(VGG), and ε(InceptionNet) are then defined by the Shannon entropy

$$\varepsilon(F) = -\sum_{i} p(f_i)\log_2 p(f_i)$$

where p(f_i) is the probability of the i-th value f_i in the feature set F.
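A sketch of how such entropy-based scoring could be realized (an assumption, since the paper does not give implementation details): each feature column is histogrammed, its Shannon entropy is computed, and the highest-entropy columns are kept.

```python
import numpy as np

def entropy_scores(features, bins=32):
    # Shannon entropy of each feature column's value distribution.
    scores = []
    for col in features.T:
        p, _ = np.histogram(col, bins=bins)
        p = p / p.sum()
        p = p[p > 0]                      # drop empty bins (0 log 0 = 0)
        scores.append(-(p * np.log2(p)).sum())
    return np.asarray(scores)

def select_by_entropy(features, k=200):
    # Keep the k columns with the highest entropy, preserving column order.
    top = np.sort(np.argsort(entropy_scores(features))[-k:])
    return features[:, top]
```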

3.3.3 Minimal-Redundancy-Maximal-Relevance (mRMR)

The heuristic technique for removing redundant features from the dataset is known as mRMR [30]. Specific and optimal characteristics are obtained using this method without compromising the classification accuracy. High-dimensional data increase the error rate of learning algorithms and cause overfitting of the model. The best features were selected based on PCA, entropy, and mRMR. The dimensions of the learned feature sets, originally ResNet50 (4000 × 1000), InceptionNet (4000 × 2048), VGG16 (4000 × 4096), and LBP (4000 × 59), were reduced to ResNet50 (4000 × 200), InceptionNet (4000 × 200), VGG16 (4000 × 200), and LBP (4000 × 59), which provide the best performance of the model.
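Below is a simplified greedy sketch of mRMR selection (one common approximation, not necessarily the exact variant used here), assuming scikit-learn: relevance is the mutual information between a feature and the class label, and redundancy is the mean absolute correlation with the already-selected features.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mrmr_select(features, labels, k=200):
    relevance = mutual_info_classif(features, labels)  # relevance to the class
    corr = np.abs(np.corrcoef(features.T))             # D x D feature correlations
    selected = [int(np.argmax(relevance))]             # start with most relevant
    while len(selected) < k:
        redundancy = corr[:, selected].mean(axis=1)    # mean |corr| to chosen set
        score = relevance - redundancy                 # maximal relevance, minimal redundancy
        score[selected] = -np.inf                      # never re-pick a column
        selected.append(int(np.argmax(score)))
    return features[:, selected]
```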

    3.4 Feature Fusion

In the proposed study, various transfer learning methods are employed for feature learning, namely ResNet50, InceptionNet, and VGG16. Several texture feature methods, such as LBP, are utilized to obtain texture information. However, only the best performing deep learning and handcrafted feature models were selected and represented in this study. A novel feature fusion approach is implemented in which the features learned by the deep models and the texture information of the LBP model are fused serially. The individual feature sets of each method are represented as

$$r \in \mathbb{R}^{4000 \times 59},\qquad s \in \mathbb{R}^{4000 \times 200},\qquad t \in \mathbb{R}^{4000 \times 200},\qquad u \in \mathbb{R}^{4000 \times 200}$$

where r is the feature set of LBP with dimensions 4000 × 59, s is the feature set of ResNet with dimensions 4000 × 200, t is the feature set of VGG with dimensions 4000 × 200, and u is the feature set of InceptionNet with dimensions 4000 × 200. The individual feature sets are fused by serial concatenation as

$$F_{\text{fused}} = [\,r \;\; s \;\; t \;\; u\,] \in \mathbb{R}^{4000 \times 659}$$
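In code, the serial fusion is a column-wise concatenation; a minimal sketch, assuming the selected feature matrices share the same 4000 rows (one per frame):

```python
import numpy as np

def serial_fusion(r, s, t, u):
    # r: 4000 x 59 (LBP), s: 4000 x 200 (ResNet50),
    # t: 4000 x 200 (VGG16), u: 4000 x 200 (InceptionNet)
    return np.hstack([r, s, t, u])   # fused feature set: 4000 x 659
```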

    3.5 Classification

In the classification step, anomalies are identified and classified by a classifier that takes the set of fused features as its input and predicts the class label after feature computation. The accuracy depends on several factors, such as the weight initialization, the activation function, and the selection of deep layers. Moreover, image preprocessing, learned features, and feature fusion methods also play an important role in enhancing the model accuracy. Several classifiers were trained to predict abnormalities in the frames of the GI tract. Many classifiers were investigated, including linear discriminant, linear support vector machine (SVM), cubic SVM, coarse Gaussian SVM, cosine KNN, and subspace discriminant. Consequently, the subspace discriminant classifier achieved the highest accuracy when compared with the other classifiers.
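A subspace discriminant ensemble can be approximated in scikit-learn as bagged linear discriminant learners trained on random feature subspaces; the sketch below is an assumption about the setup (the paper does not specify hyperparameters), with `fused` and `labels` standing for the hypothetical 4000 x 659 fused feature matrix and class-label vector.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

# Random-subspace ensemble of linear discriminant learners: each learner
# sees all rows but only a random half of the 659 fused feature columns.
subspace_discriminant = BaggingClassifier(
    estimator=LinearDiscriminantAnalysis(),
    n_estimators=30,
    max_features=0.5,
    bootstrap=False,            # subspace method samples features, not rows
    bootstrap_features=False,   # draw each feature subset without replacement
)

# 10-fold cross-validated accuracy, as in tests 1 and 2:
# scores = cross_val_score(subspace_discriminant, fused, labels, cv=10)
```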

    4 Experimental Setup and Results

The performance of the CADx was evaluated in this study, where the anomalies of the GI tract were automatically detected and classified using endoscopic frames. Experiments were performed using KVASIR as the main dataset, which consists of eight different classes of endoscopic frames: three normal and five disease classes. The model was also evaluated on two other datasets, ULCER and NERTHUS, for state-of-the-art system evaluation. The evaluation metrics addressed in the prevailing publications are also compared. The proposed model's results are reported in tabular form, where LBP features and the ResNet50, InceptionNet, and VGG16 deep CNN models are used for feature learning. The learned features are then serially fused. Several tests were carried out, and three of them were chosen based on high performance. Additionally, the selected models provided the best results in this study. The system used for all the evaluations was an Intel Core i5-4200U CPU running at 1.60 GHz with 8 GB RAM.

    4.1 Dataset

Three datasets, KVASIR [32], NERTHUS [33], and ULCER [13], were considered in this study. The annotated KVASIR dataset consists of 4000 images with eight categories, each class containing 500 images. A single frame of each of the eight classes is illustrated in Fig. 8. The major issues faced by qualified staff are high dimensionality and great similarities between certain disorders. The ULCER dataset consists of 2413 images with three classes, namely bleeding, healthy, and ulcer. The bleeding class contains 1086 images, the healthy class contains 709 images, and the ulcer class contains 618 images. This dataset was obtained by colonoscopy.

Figure 8: Eight types of classes taken from the KVASIR dataset (a) Dyed-Lifted-Polyp (b) Dyed-Resection-Margins (c) Esophagitis (d) Normal-Cecum (e) Normal-Pylorus (f) Normal-z-Line (g) Polyps (h) Ulcerative-Colitis

NERTHUS is an open-source dataset that shows different degrees of bowel cleansing in the GI tract; it comprises a total of 5525 bowel frames from 21 videos [33]. Tab. 1 lists the details of each of the datasets mentioned above.

Table 1: Datasets information with modalities

    4.2 Overview of Conducted Experiments on KVASIR Dataset

Various experiments were performed to improve the performance of the proposed model. Of the several tests, only the three that show the best results are presented. A summary of the three tests is presented in Tab. 2. Each test involved eight classes and 4000 images of the GI tract. Test 1 contained 280 features in total, comprising HOG (100), SFTA (21), LBP (59), and AlexNet (100) features; test 2 contained 459 features in total, comprising LBP (59), ResNet50 (200), and VGG16 (200) features; and test 3 contained 659 features in total, comprising LBP (59), ResNet50 (200), InceptionNet (200), and VGG16 (200) features. Performance measurement parameters were calculated for each test. Test 3 reported the best results compared to previous studies.

Table 2: Overview of conducted experiments

4.2.1 Test 1 (HOG = 100, SFTA = 21, LBP = 59, RESNET = 100, KVASIR Dataset)

In experiment 1, each of the eight classes contained 500 images; therefore, 4000 images were used collectively. A 10-fold cross-validation was utilized to evaluate all outcomes. Of several classifiers, six were trained, as shown in Tab. 3. Linear SVM performed well compared to the other classification methods, with an accuracy of 88.9% and a training time of 35.96 s. The performance evaluation of test 1 is presented in Tab. 3.

Table 3: Classification and performance evaluation of test 1

4.2.2 Test 2 (LBP = 59, RESNET = 200, VGG16 = 200, KVASIR Dataset)

In this experiment, 10-fold cross-validation was used to assess all the results. The subspace discriminant performed better than the other prediction techniques, with an accuracy of 93.62% and a training time of 91.079 s. The graphical comparisons of classification methods in terms of precision, sensitivity, accuracy, and training time are shown in Tab. 4.

4.2.3 Test 3 (LBP = 59, RESNET = 200, INCEPTIONNET = 200, VGG16 = 200, KVASIR Dataset)

In this experiment, 5-fold cross-validation was used to evaluate all results. A total of six classification methods were used. The subspace discriminant classifier's performance was the best in comparison to the other prediction methods, with an accuracy of 95.02% and a training time of 134.09 s; this was found to outperform the methods prevalent in the literature. Graphical comparisons of classification methods in terms of precision, sensitivity, accuracy, and training time for test 3 are presented in Tab. 5. The confusion matrix in Tab. 6 shows the satisfactory true positive values per class for test 3.

Table 4: Classification results and performance evaluation of test 2

Table 5: Classification results and performance evaluation of test 3

Table 6: Confusion matrix of test 3

    4.3 Analyzing Feature Selection Methods

Three feature selection approaches, PCA, mRMR, and entropy-based selection, were employed to identify optimal features. A performance comparison of these results is shown in Tab. 7.

The comparison highlights that the PCA method evaluated better than the other feature selection methods. The maximum achieved accuracy was 95.02% using PCA, with a training time of 134.09 s, which shows that the proposed approach is better than previous approaches. Based on the best results, we selected the configuration of test 3, including the PCA feature selection method. This configuration is compared with other state-of-the-art configurations.

Table 7: Comparison between PCA, entropy, and mRMR feature selection methods on the KVASIR dataset

Table 8: Datasets results comparisons

4.4 Results Comparisons Between KVASIR, NERTHUS and ULCER Datasets

The model's performance was checked with the configuration mentioned in test 3 on the other two datasets (NERTHUS and ULCER). The model also performed well, as shown in Tab. 8, which presents a comparison of six classifiers: linear discriminant, linear SVM, cubic SVM, cosine KNN, bagged trees, and subspace discriminant. KVASIR was the most challenging dataset, with an accuracy of 95.02% for the subspace discriminant classifier. The best accuracy on the NERTHUS dataset was 99.9%, achieved by four classifiers, as shown in Tab. 8. Cubic SVM showed the best result, with an accuracy of 100%, on the ULCER dataset. The subspace discriminant classifier showed stability and satisfactory accuracy on all datasets.

    4.5 Comparisons with Existing Approaches

Tab. 9 depicts the comparison of the proposed method with the existing approaches. The proposed system showed better results than the other methods.

Table 9: Comparison with existing approaches

    5 Conclusion

Automatic disease detection and classification using endoscopic frames of the GI tract were addressed in the proposed study. The handcrafted (LBP) and deep learning (VGG16, InceptionNet, ResNet50) features were extracted, and their subsets were selected using the PCA, entropy, and mRMR feature selection methods. The subsets were then fused using the serial feature fusion method. Three datasets were used for the performance evaluation. High accuracies of 95.02%, 99.9%, and 100% on the KVASIR, NERTHUS, and ULCER datasets, respectively, were achieved. The most stable classifier was the subspace discriminant classifier, with a satisfactory overall accuracy. Our experiments show that techniques such as preprocessing and feature fusion are efficient techniques that boost the overall performance of the model. Although this method achieved a fairly high accuracy compared with existing approaches, there is still scope for further improvement, which must be addressed in future research. Using other preprocessing techniques and deep learning models for feature extraction can improve the model performance.

Funding Statement: This research was supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
