
    A Fusion of Residual Blocks and Stack Auto Encoder Features for Stomach Cancer Classification

Computers, Materials & Continua, 2023, Issue 12

Abdul Haseeb, Muhammad Attique Khan, Majed Alhaisoni, Ghadah Aldehim, Leila Jamel, Usman Tariq, Taerang Kim and Jae-Hyuk Cha

1 Department of Computer Science, HITEC University, Taxila, 47080, Pakistan

2 Department of Computer Science and Mathematics, Lebanese American University, Beirut, 1100, Lebanon

3 College of Computer Science and Engineering, University of Ha'il, Ha'il, 81451, Saudi Arabia

4 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, Saudi Arabia

5 Department of Management Information Systems, College of Business Administration, Prince Sattam Bin Abdulaziz University, Al-Kharj, 16278, Saudi Arabia

6 Department of Computer Science, Hanyang University, Seoul, 04763, Korea

ABSTRACT Diagnosing gastrointestinal cancer by classical means is a challenging and risky procedure. Recent years have seen several computerized solutions for stomach disease detection and classification. However, existing techniques face challenges such as irrelevant feature extraction, high similarity among different disease symptoms, and reliance on the least-important features from a single source. This paper designs a new deep learning-based architecture built on the fusion of two models, residual blocks and an Auto-Encoder. First, the Hyper-Kvasir dataset was employed to evaluate the proposed work. The research selected a pre-trained convolutional neural network (CNN) model and improved it with several residual blocks; this improves the learning capability of deep models and lessens the number of parameters. Besides, this article designs an Auto-Encoder-based network consisting of five convolutional layers in the encoder stage and five in the decoder stage. The global average pooling and convolutional layers were selected for feature extraction, optimized by a hybrid of the Marine Predator and Slime Mould optimization algorithms. The features of both models are fused using a novel fusion technique and later classified using an Artificial Neural Network classifier. The experiments used the Hyper-Kvasir dataset, which consists of 23 stomach-infected classes. The proposed method obtained an improved accuracy of 93.90% on this dataset. A comparison with recent techniques shows that the proposed method's accuracy is improved.

KEYWORDS Gastrointestinal cancer; contrast enhancement; deep learning; information fusion; feature selection; machine learning

    1 Introduction

Gastrointestinal cancer, also known as digestive system cancer, refers to a group of cancers that occur in the digestive system or gastrointestinal tract, which includes the esophagus, stomach, small intestine, colon, rectum, liver, gallbladder, and pancreas [1,2]. These cancers develop when cells in the digestive system grow abnormally and uncontrollably, forming a tissue mass known as a tumor [3]. Depending on the type and stage of the disease, the symptoms of gastrointestinal cancer might include stomach discomfort, nausea, vomiting, changes in bowel habits, weight loss, and exhaustion [4]. According to the National Institutes of Health, one out of twelve cancer-related deaths is due to gastrointestinal cancer. Moreover, each year more than one million new cases of gastrointestinal cancer are diagnosed. Gastrointestinal tract cancer may be treated by surgery, chemotherapy, radiation therapy, or a combination of these. Detection and treatment at an early stage can enhance survival chances and minimize the risk of complications [5]. Despite a gradual decrease in gastric cancer incidence and mortality rates over the past 50 years, it remains the second most frequent cause of cancer-related deaths globally. However, from 2018 to 2020, both colorectal and stomach cancer showed an upward trend in their rates [6]. Global cancer statistics show that gastrointestinal cancers account for 26.3 percent of all cancer cases and 35.4 percent of all cancer deaths [7].


Identifying and categorizing gastrointestinal disorders subjectively is time-consuming and difficult, requiring considerable clinical knowledge and skill [8]. Yet the development of effective computer-aided diagnosis (CAD) technologies that can identify and categorize numerous gastrointestinal disorders in a fully automated manner could reduce these diagnostic obstacles to a great extent [9]. Computer-aided diagnosis technologies can be of great value by aiding medical personnel in making accurate diagnoses and identifying appropriate therapies for serious medical conditions in their early stages [10,11]. Over the past few years, the performance of diagnostic artificial intelligence (AI) computer-aided diagnosis tools in various medical fields has been significantly improved by deep learning algorithms, particularly artificial neural networks (ANNs) [12]. Generally, these ANNs are trained using optimization algorithms such as stochastic gradient descent [13] to achieve the most accurate representation of the training dataset.

Deep learning (DL) is a statistical approach that enables computers to automatically detect features from raw inputs such as structured information, images, text, and audio [14,15]. Many areas of clinical practice have been profoundly influenced by the significant advances made in DL-based AI [16,17]. Computer-aided diagnosis systems are frameworks that use computational assistance to detect disease. CAD systems in gastroenterology increasingly use artificial intelligence (AI) to improve the identification and characterization of abnormalities during endoscopy [18]. The CNN, a neural network inspired by the visual cortex of living organisms, uses convolutional layers with shared two-dimensional weight sets. This enables the algorithm to recognize spatial structure and employ layer pooling to filter out less significant information, eventually conveying the most pertinent and focused elements [19]. However, these classifiers face a challenge in interpretability because they are often seen as "black boxes" that deliver accurate outcomes without explaining them [20]. Despite technological developments, image classification for lesions of the gastrointestinal system remains difficult due to a lack of databases containing sufficient images to build models. In addition, the quality of accessible images has impeded the application of CNN models [21].

    1.1 Major Challenges

In this work, Artificial Neural Networks (ANNs) and Deep Neural Networks (DNNs) extract the features of images from the Hyper-Kvasir dataset. The dataset contains twenty-three gastrointestinal tract classes with images in each class. However, some classes have only a few images, creating a class-imbalance problem. Data augmentation techniques are used for classes with fewer images to address this issue. Furthermore, feature selection techniques are applied to obtain the best features among the feature sets.

    1.2 Major Contributions

Overall, researchers have steadily improved categorization on the Hyper-Kvasir dataset; however, a significant gap in the subject matter remains. A hybrid strategy incorporating deep learning and machine learning methodologies is therefore needed: automated deep feature extraction can uncover discriminative characteristics, and machine learning approaches can then select the key characteristics among them, which may further increase classification accuracy.

The major contributions of the proposed method are described as follows:

– A fusion-based contrast enhancement technique, called Duo-contrast, is proposed based on the mathematical formulation of local and global information-enhanced filters.

– A new CNN architecture is designed based on the concept of the pretrained Nasnetmobile. Several residual blocks are added to increase the learning capability and reduce the number of parameters.

– A stacked Auto-Encoder-Decoder network is designed that consists of five convolutional layers in the encoder phase and five in the decoder phase.

– The extracted features are optimized using improved Marine Predator and Slime Mould optimization algorithms.

– A new parallel fusion technique is proposed to combine the important information of both deep learning models.


– A detailed experimental process in terms of accuracy, confusion matrices, and t-test-based analysis has been conducted to show the significance of the proposed framework.

The rest of the manuscript is structured as follows: Section 2 describes the significant related work relevant to the study. Section 3 outlines the methodology utilized in the research, including the tools, methods, and resources employed. Section 4 comprises a discussion of the findings acquired from the study. Section 5 provides the conclusions of the research.

    2 Related Work

Gastrointestinal tract classification is a hot topic in research. In recent years, researchers have achieved important milestones in this domain [22]. In their article, Borgli et al. introduced the Hyper-Kvasir dataset, which contains over one million gastrointestinal endoscopy images and video frames from Baerum Hospital in Norway. The labeled images in this dataset can be used to train neural networks for classification purposes. The authors conducted experiments to train and evaluate classification models using two commonly used families of neural networks, ResNet and DenseNet. The labeled data in the Hyper-Kvasir dataset consist of twenty-three classes of gastrointestinal disorders. While the authors achieved the best results by combining ResNet-152 and DenseNet-161, the overall performance was still unsatisfactory due to imbalanced development sets [23]. In their proposal, Igarashi et al. employed the AlexNet architecture to classify more than 85,000 input images from Hirosaki University Hospital.

Moreover, the input images were categorized into 14 groups based on pattern classification of significant anatomical organs with manual classification. To train the model, the researchers used 49,174 images from patients with gastric cancer who had undergone upper gastrointestinal tract endoscopy, while the remaining 36,000 images were employed to test the model's performance. The outcome indicated an impressive overall accuracy of 96.5%, suggesting its potential usefulness in routine endoscopy image classification [24]. Gómez-Zuleta [25] developed a deep learning (DL) methodology to automatically detect polyps in colonoscopy procedures. For this task, three models were used, namely Inception-v3, ResNet-50, and VGG-16. Knowledge transfer through transfer learning was adopted for classification, and the resultant weights were used to commence a fresh training process utilizing the fine-tuning technique on colonoscopy images. The training data consisted of a combined dataset of five databases comprising more than 23,000 images with polyps and more than 47,000 images without polyps. The data was split into a 70/30 ratio for training and testing purposes. Different metrics, such as accuracy, F1 score, and the receiver operating characteristic (ROC) curve, were employed to evaluate the performance. The pretrained Inception-v3, VGG-16, and ResNet-50 models achieved accuracy rates of 81%, 73%, and 77%, respectively. The authors reported that pretrained network models demonstrated an effective generalization ability towards the high irregularity of endoscopy videos, and that their methodology may potentially serve as a valuable tool in the future [25]. In another study, the authors employed three networks to classify medical images from the Kvasir database. They began with a preprocessing step to eliminate noise and improve image quality. Then, they utilized data augmentation methods to improve the network's training and a dropout method to prevent overfitting; the researchers acknowledged that this technique doubled the training time. They also implemented the Adam optimizer to minimize the loss. Moreover, transfer learning and fine-tuning techniques were applied. The resulting models were then used to categorize 5,000 images into five distinct categories, with eighty percent of the database allocated for training and twenty percent for validation. The accuracy rates achieved by the models were 96.7% for GoogLeNet, 95% for ResNet-50, and 97% for AlexNet [26].

The Kvasir-Capsule dataset, presented in [27], includes 117 videos captured using video capsule endoscopy (VCE). The dataset comprises fourteen different categories of images and a total of more than 47,000 labeled images. VCE technology involves a small capsule with a camera, battery, and other components. To validate the labeled dataset, two convolutional neural networks (CNNs), namely DenseNet-161 and ResNet-152, were used for training. The study utilized a cross-validation technique with a categorical cross-entropy loss to validate the models. They implemented this technique with and without class weights and used weight-based sampling to balance the dataset by removing or adding images for every class. After evaluating the models, the best results were obtained by averaging the outcomes of both CNNs. The resulting accuracy rates were 73.66% for the micro average and 29.94% for the macro average.


    3 Proposed Methodology

The dataset used in this manuscript is highly imbalanced, as some classes have few images. To resolve this problem, data augmentation techniques are adopted. Nasnetmobile and a Stacked Auto-Encoder are used as feature extractors. Furthermore, the extracted feature vectors eV1 from Nasnetmobile and eV2 from the Stacked Auto-Encoder are reduced by applying feature optimization techniques: eV1 is fed to the Marine Predator Algorithm (MPA) [28], while eV2 is given as input to the Slime Mould Algorithm (SMA) [29], to extract the selected feature vectors S(eV1) and S(eV2), respectively. The selected feature vectors S(eV1) and S(eV2) are fused. Finally, artificial neural networks are used as classifiers to obtain the results. Fig. 1 shows the proposed methodology used in this paper.
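To make the data flow concrete, the sketch below traces the pipeline end to end. This is not the authors' code: every helper is a placeholder stub standing in for the component described in the corresponding subsection, with the feature dimensions taken from the text.

```python
# Hedged pipeline sketch; all helpers are illustrative stubs, not the paper's code.
import numpy as np

augment_contrast = lambda X: X                                  # Sec. 3.2: BPHE/DSIHE
nasnet_features = lambda X: np.random.rand(len(X), 1056)        # Sec. 3.3.2: eV1
autoencoder_features = lambda X: np.random.rand(len(X), 1024)   # Sec. 3.3.1: eV2
mpa_select = lambda F, y: F[:, :366]                            # MPA -> S(eV1), 366-D
sma_select = lambda F, y: F[:, :535]                            # SMA -> S(eV2), 535-D
serial_fusion = lambda a, b: np.hstack([a, b])                  # -> 901-D fused vector

X = np.random.rand(8, 224, 224, 3)        # stand-in image batch
y = np.arange(8) % 23                     # stand-in labels (23 classes)
X_aug = augment_contrast(X)
fused = serial_fusion(mpa_select(nasnet_features(X_aug), y),
                      sma_select(autoencoder_features(X_aug), y))
print(fused.shape)                        # (8, 901): input to the ANN classifiers
```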

Figure 1: Proposed methodology of stomach cancer classification and polyp detection

    3.1 Dataset Description

The Hyper-Kvasir dataset used in this study is a public dataset collected from Baerum Hospital in Norway [23]. The dataset contains 10,662 labeled gastrointestinal endoscopy images categorized into 23 classes. Among the twenty-three classes, sixteen belong to the lower gastrointestinal tract, while seven are related to the upper gastrointestinal tract. Table 1 illustrates the class-imbalance problem, as some of the classes have very few images. To mitigate this issue, data augmentation techniques are applied. Fig. 2 shows sample images for each class.

Table 1: Classes of the Hyper-Kvasir dataset and the number of images in each class

Figure 2: Sample images of each class of the Hyper-Kvasir dataset
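Since Table 1 shows a strong class imbalance, a minimal balancing loop is sketched below. The folder layout, the target count, and the enhance_and_save helper are all assumptions for illustration; in the paper, the extra samples come from the contrast-enhancement variants described in Section 3.2.

```python
# Hedged sketch: grow every under-represented class to the size of the largest
# one by saving enhanced copies of randomly chosen images (paths are assumed).
import os
import random

root = "hyper_kvasir/labeled_images"       # assumed one-folder-per-class layout
classes = os.listdir(root)
counts = {c: len(os.listdir(os.path.join(root, c))) for c in classes}
target = max(counts.values())

for cls, n in counts.items():
    files = os.listdir(os.path.join(root, cls))
    for i in range(target - n):
        src = random.choice(files)         # image to duplicate with enhancement
        # enhance_and_save(...) is a hypothetical helper applying BPHE or DSIHE:
        # enhance_and_save(os.path.join(root, cls, src), variant=i % 2)
```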

    3.2 Proposed Contrast Enhancement

Data is augmented by applying different image enhancement techniques to the whole Hyper-Kvasir dataset, as these techniques change intensity properties without affecting image content or orientation. Brightness Preserving Histogram Equalization (BPHE) [30] and Dualistic Sub-Image Histogram Equalization (DSIHE) [31] are used in preprocessing.

BPHE is a method employed in image processing to enhance an image's visual quality by improving its contrast. This approach involves adjusting the distribution of intensity levels to generate a more uniform histogram. Unlike conventional histogram equalization techniques, brightness-preserving histogram equalization considers both bright and dark regions in an image. It independently adjusts the histograms of each region to retain the details in both bright and dark areas while enhancing the overall contrast. This technique is particularly useful in applications such as medical imaging, where preserving the details in both bright and dark regions is crucial. The input image is divided into two subparts; the first consists of pixels with low intensity values, while the second consists of pixels with high intensity values. Mathematically, it is denoted as:


Moreover, the probability density function for both subparts is derived as:

    The transform function for subparts is as follows:

The final image, with an equalized histogram and preserved brightness, can be obtained by combining both transform functions, that is:


In the above equation, Img_BPHE is the brightness-preserved histogram-equalized image.
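The equations above did not survive extraction, so the sketch below gives one plausible NumPy reading of the described procedure: split the image at a threshold, equalize each sub-histogram within its own intensity range, and recombine. The mean-based split for BPHE is an assumption consistent with brightness preservation.

```python
# Hedged sketch of two-part (brightness-preserving) histogram equalization.
import numpy as np

def split_equalize(img, threshold):
    """Equalize the sub-images below/above `threshold` over their own ranges."""
    img = img.astype(np.float64)
    out = np.empty_like(img)
    for lo, hi, mask in [(img.min(), threshold, img <= threshold),
                         (threshold, img.max(), img > threshold)]:
        vals = img[mask]
        hist, edges = np.histogram(vals, bins=256, range=(lo, hi))
        cdf = hist.cumsum() / max(hist.sum(), 1)          # sub-image CDF
        idx = np.clip(np.searchsorted(edges, vals, side="right") - 1, 0, 255)
        out[mask] = lo + cdf[idx] * (hi - lo)             # map back into [lo, hi]
    return out.astype(np.uint8)

def bphe(img):
    """BPHE variant: splitting at the mean grey level preserves brightness."""
    return split_equalize(img, img.astype(np.float64).mean())
```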


DSIHE is an image enhancement approach that increases an image's contrast by separating it into two sub-images based on a threshold value and then applying histogram equalization independently to each sub-image. The significance of DSIHE lies in its capacity to improve the contrast of images containing both dark and light areas. Classical histogram equalization enhances contrast globally over the whole image, which can over-enhance bright parts and under-enhance dark regions. DSIHE tackles this issue by separating the picture into two sub-images based on a threshold value that distinguishes between the light and dark regions. Afterward, histogram equalization is applied separately to each sub-image, which helps to achieve an equilibrium between the two regions' contrast enhancement. The DSIHE technique has been demonstrated to enhance the visual quality of medical images. It is an easy, computationally efficient, and straightforward strategy to implement in image processing systems.

Let M_Inp be an input image to which DSIHE is applied, with grey levels M_grey. The sub-images are denoted by M_S1 and M_S2, and the center (median) pixel index is denoted by C_px.


    Aggregation of the grey-level original image is as follows:

    The aggregated PDF for the grey levels of the original image will be:

For both sub-images, the transformation function is given by:

The upper transformation is used for less bright images.

    The output image is mathematically denoted by:
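Under the same assumptions as the BPHE sketch above, DSIHE differs only in its split point: the median grey level C_px, which puts half of the pixels in each sub-image and balances the contrast gain between dark and light regions.

```python
# Hedged DSIHE sketch; reuses split_equalize() from the BPHE example above.
import numpy as np

def dsihe(img):
    c_px = np.median(img)                 # median split: equal pixel counts
    return split_equalize(img, c_px)      # equalize M_S1 and M_S2 separately
```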

3.3 Novelty: Designed CNN Model

Feature extraction is the process of extracting a subset of relevant features from raw data that are useful for solving a particular machine-learning task [32]. In deep learning, feature extraction involves taking a raw input, such as an image or audio signal, and automatically extracting relevant features or patterns using a series of mathematical transformations. Deep learning relies on feature extraction to help the network concentrate on the essential data and simplify the input, making it simpler to train and more accurate. In some cases, feature extraction can also help to reduce overfitting and improve generalization performance. In many deep learning applications, the network performs feature extraction automatically, typically using convolutional layers for image processing or recurrent layers for natural language processing. However, in some cases, manual feature extraction may be necessary, particularly when working with smaller datasets or when trying to achieve high levels of accuracy on a specific task. In this study, two feature extractors are used: a Stacked Auto-Encoder and Nasnetmobile.

CNNs have become a popular tool in the field of medical image processing. A neural network can be classified as a CNN if it contains at least one layer that performs convolution operations. During a convolution operation, a filter with multiple parameters of a specific size is applied to an input image using a sliding-window approach. The resulting image is then passed on to the next layer for further processing. This operation can be represented mathematically as follows:

Above, M_out is the output matrix having Horz_out rows and Vert_out columns, respectively. Furthermore, the rectified linear unit (ReLU) function is applied to set negative feature values to zero, which can be represented in the equation below:

Furthermore, a pooling operation reduces the computational complexity and improves the processing time. This operation extracts the maximum or average value from a specific region of the feature map and uses it to represent that region. A fully connected layer then flattens the features to produce a one-dimensional vector. Mathematically, this can be represented as:
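A short PyTorch illustration of the three operations just described (an illustration, not the paper's network): a sliding-window convolution, ReLU zeroing negative responses, pooling that summarizes each region by its maximum, and flattening into a one-dimensional vector for the fully connected layer.

```python
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # sliding-window filter bank
    nn.ReLU(),                                   # negative feature values -> 0
    nn.MaxPool2d(2),                             # each 2x2 region -> its maximum
    nn.Flatten(),                                # 1-D vector for the FC layer
)

x = torch.randn(1, 3, 64, 64)                    # one 64x64 RGB image
print(block(x).shape)                            # torch.Size([1, 16384]) = 16*32*32
```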

    3.3.1 Stacked Auto-Encoder

A stacked autoencoder is a type of neural network that utilizes unsupervised learning to develop a condensed representation of input data. The architecture consists of multiple layers; each "hidden layer" learns a compressed representation of the input data. The output of one layer is used as input for the subsequent layer, and the final output layer generates the reconstructed data. Hidden layers are added to the network to create a deeper architecture capable of learning more complex and abstract representations. During training, the difference between the input and the reconstructed output, known as the reconstruction error, is minimized using backpropagation to adjust the neural network's weights [33]. Stacked autoencoders are used in various applications, including speech and image recognition, anomaly detection, and data compression.

Let X_inp be the input data and Y_out be the reconstructed data. Let the stacked autoencoder have L_last layers, with the hidden layers denoted as h_1, h_2, ..., h_(L_last−1) and the output layer denoted as h_(L_last). Each layer of the stacked autoencoder can be represented by a transformation function f_trans that maps its input to its output; the transformation function of the l-th layer is denoted f_trans^l. The input data is fed into the first layer, which learns a compressed representation of the input. The output of the first layer is then passed as input to the next layer, which learns a compressed representation of the output from the first layer. This process continues until the final layer produces the reconstructed data Y_out. The compressed representation learned by each hidden layer can be represented as follows:

where h_k is the output of the k-th hidden layer, W_k is the weight matrix connecting the input to the k-th hidden layer, and b_k is the bias vector for the k-th hidden layer. The reconstructed output Y_out can be calculated by passing the compressed representation of the input through the decoder network, which is essentially the reverse of the encoder network:

where W_last is the weight matrix connecting the last hidden layer to the output layer, and b_last is the bias vector for the output layer. The stacked autoencoder is trained by minimizing the reconstruction error between input and output. A feature vector named Feat_AEvec, consisting of 1024 features, is extracted through the Stacked Auto-Encoder.
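The display equations for the encoder and decoder were lost during extraction. A plausible LaTeX reconstruction, consistent with the variable definitions above and assuming a generic nonlinearity f (e.g., a sigmoid), is:

```latex
% Hedged reconstruction; h_0 denotes the input X_inp and f is an assumed
% activation function.
\begin{align}
  h_k &= f\left(W_k\, h_{k-1} + b_k\right), \qquad k = 1, \dots, L_{last}-1,\\
  Y_{out} &= f\left(W_{last}\, h_{L_{last}-1} + b_{last}\right).
\end{align}
```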


    3.3.2 Feature Extraction Using Proposed CNN

Nasnetmobile is a pretrained neural network model [34] that is adapted here through transfer learning. Transfer learning is a method that transfers the knowledge learned by a pretrained model to a new task. Nasnetmobile has been trained on the ImageNet dataset; to adapt it for a new task, the transfer learning principles shown in Fig. 3 are used to refine the model. However, since the pretrained model has been trained on a different set of classes, it is not directly applicable to a medical image classification task. Therefore, the network is retrained on the augmented Hyper-Kvasir dataset, which is divided into 70% training and 30% testing images. Furthermore, the classification layer, soft-max layer, and last fully connected layer of the Nasnetmobile model are replaced with new layers called "new_classification," "new_softmax," and "new_Prediction," respectively. This allows the model to learn to classify medical images using the features extracted from the original pretrained model. Features are then extracted from the trained network, yielding a deep feature vector Feat_NNMobilevec containing 1056 features. The layer used for feature extraction is "global_average_pooling2d_1".

Figure 3: Generalization through the transfer learning technique
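The head replacement and global-average-pooling feature extraction described above can be sketched in Keras, where the stock NASNetMobile backbone indeed exposes a 1056-dimensional pooled output. This is a hedged sketch, not the authors' MATLAB implementation; training details are illustrative.

```python
import numpy as np
import tensorflow as tf

# Backbone with its own classifier removed; 'avg' pooling yields 1056-D features.
base = tf.keras.applications.NASNetMobile(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3))

# Replacement prediction head for the 23 Hyper-Kvasir classes.
head = tf.keras.layers.Dense(23, activation="softmax", name="new_Prediction")
model = tf.keras.Model(base.input, head(base.output))
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds)   # 70/30 split of augmented data

# After fine-tuning, features are read from the pooling output, not the head.
feat_extractor = tf.keras.Model(model.input, base.output)
x = np.random.rand(1, 224, 224, 3).astype("float32")   # stand-in image batch
Feat_NNMobilevec = feat_extractor.predict(x)           # shape (1, 1056)
```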

3.4 Novelty: Proposed Feature Selection

Feature selection is the operation of identifying a subset of appropriate features from a dataset's larger set of features [35]. Feature selection improves model performance and data interpretation and reduces computational resources. Two feature selection algorithms are used to tackle the curse of dimensionality. The Slime Mould Algorithm (SMA) is used to select the important feature vector S(Feat_AEvec) from Feat_AEvec, extracted through the Stacked Auto-Encoder, while the Marine Predator Algorithm (MPA) is used to extract the selected feature vector S(Feat_NNMobilevec) from Feat_NNMobilevec, obtained through Nasnetmobile. S(Feat_AEvec) consists of 535 features, whereas S(Feat_NNMobilevec) has 366 features.

The Slime Mould Algorithm is a nature-inspired feature selection technique centered around slime mould behavior. The method employs a system of artificial particles that interact with one another to identify the ideal solution. SMA approaches the food according to the strength of the odor the food source spreads. The following equations describe the approaching behavior of the slime mould:

The weight of the slime mould is calculated mathematically as:


Here, q is a random number in the range zero to one, b_d is the best fitness in the current iteration, and ω_d is the worst fitness in the current iteration. The position update is derived as:
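Since the SMA equations did not survive extraction, the snippet below sketches the standard weight and position updates in simplified form; for feature selection, positions are typically thresholded into a binary keep/drop mask. All of this is an assumed, simplified reading, not the authors' implementation.

```python
# Hedged, simplified Slime Mould Algorithm step for wrapper feature selection.
import numpy as np

def sma_weights(fitness, eps=1e-12):
    """Standard-style SMA weights: the fitter half of the agents gets
    1 + r*log10(...), the remaining agents 1 - r*log10(...)."""
    n = len(fitness)
    order = np.argsort(fitness)                   # ascending: best (lowest) first
    bF, wF = fitness[order[0]], fitness[order[-1]]
    W = np.empty(n)
    for rank, i in enumerate(order):
        r = np.random.rand()
        term = np.log10((bF - fitness[i]) / (bF - wF + eps) + 1)
        W[i] = 1 + r * term if rank < n // 2 else 1 - r * term
    return W

def sma_step(X, W, X_best, vb=0.5):
    """Move agents toward the best position using two random agents A and B."""
    A, B = np.random.randint(0, len(X), 2)
    return X_best + vb * (W[:, None] * X[A] - X[B])

# Binary feature mask: e.g., keep feature j where the best position exceeds 0.5.
```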

The Marine Predator Algorithm (MPA) is a metaheuristic optimization algorithm based on the foraging strategies of aquatic predators. MPA replicates the searching and preying behavior of deep-sea predatory animals such as sharks, orcas, and other ocean animals. Like most metaheuristic algorithms, MPA is a population-based approach in which the baseline solutions are dispersed uniformly over the search space in the initial phase. Mathematically, this is denoted by:

Here, A_min is the lower bound, whereas A_max is the upper bound for the variables, and Rand stands for a randomly chosen vector ranging from zero to one. Based on the notion of survival of the fittest, it is considered that the most efficient hunters in nature are the strongest predators. As a result, the top predator is regarded as the most efficient means of generating an elite matrix. These elite matrices are meant to detect and track prey by leveraging their location data. Each element in the elite matrix denotes a predator in a position to search for food. The second matrix is called the prey matrix, where each element represents prey looking for food. Both matrices have r × c dimensions, where r is the number of search agents and c is the number of dimensions. At each iteration, the fittest predator substitutes the previous fittest predator.

MPA contains three phases. Phase one is considered when a predator is moving faster than the prey, with a velocity ratio of (V ≥ 10). In this scenario, the best possible strategy is to stop updating the positions of the predators. Mathematically, it can be represented as:

Phase two, the unit velocity ratio, is considered when both prey and predator move at the same velocity (V ≈ 1). In this phase, the prey is in exploitation mode with Lévy motion, while the predator is in exploration mode with Brownian motion. For half of the population, this can be denoted by:

In phase three, the prey has a low velocity compared to the predator's velocity, with a low velocity ratio of (V = 0.1). In this scenario, the best motion for the predator is Lévy motion, as shown in Eq. (46).

Marine predators also change their behavior due to environmental effects incorporated in the algorithm, such as eddy formation and Fish Aggregating Device (FAD) effects. These two effects are denoted by:

Here, FADs = 0.20 represents the likelihood of the FADs' influence on the optimization procedure. A binary vector U is created by randomly generating a vector in the interval [0, 1] and replacing its elements with zero if they are less than 0.2 and with one if they are greater than 0.2. The subscript r denotes a uniformly random number in the interval [0, 1]. The vectors A_min and A_max contain the minimum and maximum values of the dimensions. u_rand1 and u_rand2 denote random indices of the prey matrix.
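The missing initialization and FADs equations can be sketched from the description above. The snippet follows the text's wording for the binary vector U; the rest follows the standard MPA formulation, so treat it as an assumed reconstruction rather than the authors' code.

```python
# Hedged sketch of MPA initialization and the FADs/eddy perturbation.
import numpy as np

def mpa_init(r, c, A_min, A_max):
    """Disperse r search agents uniformly over a c-dimensional search space."""
    return A_min + np.random.rand(r, c) * (A_max - A_min)

def fads_effect(prey, A_min, A_max, FADs=0.2, CF=1.0):
    """Perturb the prey matrix to mimic Fish Aggregating Devices and eddies."""
    r, c = prey.shape
    if np.random.rand() < FADs:
        # Binary vector U as described in the text: entries < 0.2 -> 0, else 1.
        U = (np.random.rand(r, c) > FADs).astype(float)
        prey = prey + CF * (A_min + np.random.rand(r, c) * (A_max - A_min)) * U
    else:
        u1, u2 = np.random.randint(0, r, 2)       # u_rand1, u_rand2 prey indices
        rr = np.random.rand()
        prey = prey + (FADs * (1 - rr) + rr) * (prey[u1] - prey[u2])
    return np.clip(prey, A_min, A_max)
```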

3.5 Novelty: Proposed Feature Fusion

The significance of feature fusion lies in its capacity to extract more meaningful information from numerous sources, which can lead to improved accuracy in classification, identification, and prediction [36]. By merging complementary information from many sources, feature fusion can increase the resilience and reliability of machine learning systems, especially when data are scarce or noisy. As stated before, two feature vectors, S(Feat_AEvec) and S(Feat_NNMobilevec), are retrieved from the two networks utilized in this process; hence, it is important to merge both vectors to create a larger, more informative feature vector. A correlation-extended serial technique is utilized to combine both vectors, which can be mathematically represented as follows:

With this procedure, the features with a positive correlation (+1) are placed into a new vector labeled Vec3, and the features with a correlation value of 0 or −1 are added to Vec4. Then, the mean value of Vec4 is calculated as follows:

Both vectors Vec_upd and Vec4 are fused using the following formulation:


The final fused vector Vec_Fused has 901 features.
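The fusion equations were also lost in extraction, so the following is only one consistent interpretation of the correlation-extended serial rule described in words: positively correlated features form Vec3, the rest form Vec4, Vec4 is filtered by its mean into Vec_upd, and the two parts are concatenated. The correlation reference used here is an assumption.

```python
# Hedged sketch of the correlation-extended serial fusion (an interpretation).
import numpy as np

def duo_fusion(S_ae, S_nn):
    """S_ae: (n, 535) selected AE features; S_nn: (n, 366) selected CNN features."""
    F = np.hstack([S_ae, S_nn])                      # serial concatenation
    ref = F.mean(axis=1)                             # assumed correlation reference
    corr = np.array([np.corrcoef(F[:, j], ref)[0, 1] for j in range(F.shape[1])])
    vec3 = F[:, corr > 0]                            # positively correlated -> Vec3
    vec4 = F[:, corr <= 0]                           # zero/negative -> Vec4
    keep = vec4.mean(axis=0) > vec4.mean()           # mean-threshold -> Vec_upd
    return np.hstack([vec3, vec4[:, keep]])          # fused vector (901-D in paper)

fused = duo_fusion(np.random.rand(64, 535), np.random.rand(64, 366))
print(fused.shape)
```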

    4 Results and Discussion

The Hyper-Kvasir dataset is used for the results and analysis. The dataset contains 10,662 images categorized into twenty-three classes. The data is highly imbalanced, so to address this issue, the data is augmented. The augmented dataset contains 24,000 training images, while 520 are retained for testing. The implementation uses a system with an Intel Core i7 quad-core processor and 16 GB of RAM. Moreover, the system contains a graphics card with 4 GB of VRAM. MATLAB R2021a is used to obtain the results.

    4.1 Numerical Results

Results are shown in tabular and graphical form. Table 2 presents the results for the features extracted through Nasnetmobile and given as input to the classifiers. The analysis shows that the Wide Neural Network (WNN) gives the best overall accuracy of 93.90 percent, while the Narrow, Bilayered, and Trilayered Neural Networks have the lowest accuracy of 93.10 percent. The time taken by WNN is also the highest among all classifiers, while the lowest time cost is for the Narrow Neural Network. The confusion matrix for WNN is shown in Fig. 4.

Figure 4: Confusion matrix for WNN using Nasnetmobile features

Similarly, Table 3 shows the results obtained by feeding the features extracted by the Stacked Auto-Encoder to the classifiers. The analysis shows that WNN has the best performance with 80.50 percent accuracy, yet its time cost is also the highest, with the lowest time cost for the Narrow Neural Network. Moreover, the lowest accuracy is achieved by the Narrow Neural Network. The confusion matrix for WNN is shown in Fig. 5.

Table 3: Performance of ANN classifiers using autoencoder features (1024 features)

Figure 5: Confusion matrix for WNN using auto-encoder features

Feature selection reduces the feature vector extracted through Nasnetmobile. Table 4 shows the results for the features selected using the Marine Predator Algorithm (MPA). The selected features are given to the classifiers to obtain results. The analysis shows that WNN has the highest accuracy (93.40 percent) along with the highest time cost. Furthermore, the lowest accuracy is obtained by the Trilayered Neural Network, and the Narrow Neural Network has the best time cost among all classifiers. The confusion matrix for WNN is shown in Fig. 6.

Table 4: Performance of ANN classifiers using selected Nasnetmobile features (366 features)

Table 5 shows the results achieved using the features selected from the Stacked Auto-Encoder by the Slime Mould Algorithm. WNN has the best performance, with an accuracy of 78.40 percent. Moreover, the time cost is highest for WNN and lowest for the Narrow Neural Network. In addition, the Trilayered Neural Network gives the lowest accuracy. The confusion matrix for WNN is shown in Fig. 7.


Table 5: Performance of ANN classifiers using autoencoder selected features (535 features)

Figure 6: Confusion matrix for WNN using Nasnetmobile selected features

Figure 7: Confusion matrix for WNN using auto-encoder selected features

The best performance obtained using the fused features is shown in Table 6. The fused features are given to the ANN classifiers and the results are analyzed. The analysis shows that the highest accuracy of 93.80 percent is achieved by WNN. Again, the time cost is highest for WNN, while the Narrow Neural Network has the best time cost. Moreover, the lowest accuracy is obtained by the Narrow Neural Network. The confusion matrix for WNN is depicted in Fig. 8.

Table 6: Performance of ANN classifiers using fused features (901 features)

Figure 8: Confusion matrix for WNN using fused features

    4.2 Graphical Results

This section presents the graphical representation of the results. Fig. 9 shows the accuracy bar chart for all classifiers using the proposed fusion approach. In this figure, each classifier's accuracy is plotted in a different color; the Wide Neural Network shows the best accuracy of 93.8%, which is higher than that of the other classifiers. Fig. 10 shows the bar chart of the time cost for all classifiers after the final step of the proposed approach. The Wide Neural Network (WNN) consumed the highest time of 772.93 s, whereas the Trilayered Neural Network spent the minimum time of 372.53 s. Based on Figs. 10 and 11, it is observed that the Wide Neural Network gives better accuracy but consumes more time due to its larger hidden layer. Fig. 11 shows the time-based comparison of the proposed method. This figure shows that the time is significantly reduced after the feature selection step; however, a slight increase occurs when the fusion step is performed. Overall, it is observed that the reduction of features lowers the computational time, which is a strength of this work.

Figure 9: Accuracy bar chart for all selected classifiers using the proposed method

Figure 10: Time bar chart for the classifiers used in the proposed methodology

Figure 11: Overall time-based comparison among all classifiers using the proposed method

A detailed comparison is also conducted among all classifiers for the intermediate steps of the proposed method. Fig. 12 gives an insight into this comparison. This figure shows that the original accuracy of the fine-tuned NasNet Mobile model is better, with a maximum of 93.9%; however, this experiment consumes more time, as plotted in Fig. 12. After the selection process, the accuracy is slightly reduced, but the time drops significantly. After the fusion process, the difference in the classification accuracy of the Wide Neural Network is just 0.1%, which is almost the same, yet the time is significantly reduced, which is a strength of this work.

Figure 12: Accuracy comparison of all classifiers using all intermediate steps of the proposed method

LIME-based Visualization: Local Interpretable Model-Agnostic Explanations (LIME) [37] is a well-known technique for explainable artificial intelligence (XAI). It is a model-independent technique that may be used to explain the predictions of any machine learning algorithm, including sophisticated models like deep neural networks. LIME aims to produce locally interpretable models that approximate the predictions of the original machine learning model in a limited part of the input space. The local models are simpler and easier to comprehend than the original model and can be used to explain specific predictions. The LIME approach generates a large number of perturbed versions of the input data and trains a local model on each perturbed version. The local models are trained to predict the output of the original model for each perturbed version and are then weighted according to their performance and resemblance to the original input. The final explanation offered by LIME combines the weights of the local models and the most significant characteristics of each local model. The explanation can be offered to the user in the form of a heatmap or other visualization, as shown in Fig. 13, indicating which characteristics of the input data were most influential in forming the prediction.
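As a concrete illustration, the lime package can generate such a superpixel heatmap for one endoscopy image. The `model` and `image` variables are assumptions here; the API calls are the package's standard ones.

```python
# Hedged sketch: LIME explanation for one image (`model` and `image` assumed).
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(batch):
    """LIME passes batches of perturbed images; return class probabilities."""
    return model.predict(batch.astype("float32"))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,                    # H x W x 3 image array
    classifier_fn,
    top_labels=3,             # explain the 3 most probable classes
    num_samples=1000)         # number of perturbed versions to generate

img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True,
    num_features=5, hide_rest=False)
overlay = mark_boundaries(img / 255.0, mask)   # heatmap-style visualization
```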

Fig. 14 shows the results of the fine-tuned Nasnetmobile deep model employed for infected region segmentation. The segmentation process employs the polyp images with corresponding ground truth images. This fine-tuned model is trained with static hyperparameters using the original and ground truth images. After that, testing is performed to visualize a few images in binary form, as presented in Fig. 14. For the segmentation process, the weights of the second convolutional layer have been plotted and then converted into binary form.

Figure 13: Explanation of the network's predictions using LIME

Figure 14: Proposed infection segmentation using the fine-tuned Nasnetmobile deep model

Table 7 compares the results achieved in this article with recent state-of-the-art works. Reference [38] used self-supervised learning to classify the Hyper-Kvasir dataset; the authors used six classes and achieved a highest accuracy of 87.45%. Moreover, reference [27] used the Hyper-Kvasir dataset to classify the gastrointestinal tract and obtained 73.66% accuracy, using only fourteen classes. In addition, reference [23] achieved 63 percent macro accuracy using all 23 classes. It is clear that the proposed method has outperformed the state-of-the-art methodologies of recent years, achieving the best accuracy of 93.80 percent. Moreover, the computational complexity of the proposed framework is O(T × K + C), where T denotes the middle steps, K the parameters of the deep learning architectures, and C the constant values.


Table 7: Comparison of the proposed framework's accuracy with state-of-the-art (SOTA) techniques

    5 Conclusion

Gastrointestinal tract cancer is one of the most severe cancers in the world, and deep learning models are increasingly used to diagnose it. The proposed model uses Nasnetmobile and an Auto-Encoder to extract deep features, which are used as input to Artificial Neural Network classifiers. Moreover, feature selection techniques, namely the Marine Predator Algorithm and the Slime Mould Algorithm, are implemented in a hybrid fashion to address the curse of dimensionality. In addition, the selected features are fused and fed to the classifiers. The results analysis shows that classification using the features extracted from Nasnetmobile gives the best overall validation accuracy of 93.90%. Overall, we conclude the following:


– Data augmentation using contrast enhancement techniques can improve the learning of deep learning models more than flip- and rotation-based approaches.

– Extracting both encoder and deep learning features gives better information on the selected disease classes.


– Selecting features in a hybrid fashion slightly affects the classification accuracy while significantly reducing the computational time.

    – The fusion process improved the classification accuracy.

This work has the following drawbacks: (i) segmentation of infected regions is challenging due to changes in lesion shape and boundary location; (ii) manual assignment of the hyperparameters of deep learning models is not ideal and affects the learning process of a network. In future work, the proposed framework will be extended to infected region segmentation using deep learning and saliency-based techniques. Moreover, we will opt for a Bayesian optimization technique for hyperparameter selection. Although the proposed methodology has achieved strong outcomes, better accuracy may be achieved through different approaches in the future.

Acknowledgement: This work is supported by the Princess Nourah bint Abdulrahman University Researchers Supporting Project, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Funding Statement: This work was supported by the "Human Resources Program in Energy Technology" of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20204010600090), and by the Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R387), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: A. Haseeb, M. A. Khan, M. Alhaisoni; data collection: A. Haseeb, M. A. Khan, L. Jamel, G. Aldehim, and U. Tariq; analysis and interpretation of results: M. A. Khan, J. Cha, T. Kim, and U. Tariq; draft manuscript preparation: A. Haseeb, M. A. Khan, L. Jamel, G. Aldehim, and J. Cha; validation: J. Cha, T. Kim, and U. Tariq; funding: J. Cha, T. Kim, L. Jamel and G. Aldehim. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The Kvasir dataset used in this work is publicly available at https://datasets.simula.no/kvasir/.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
