
    Optimizing Fully Convolutional Encoder-Decoder Network for Segmentation of Diabetic Eye Disease

Computers Materials & Continua, November 2023

Abdul Qadir Khan, Guangmin Sun★, Yu Li, Anas Bilal and Malik Abdul Manan

1Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China

2College of Information Science Technology, Hainan Normal University, Haikou, 571158, China

ABSTRACT In the emerging field of image segmentation, Fully Convolutional Networks (FCNs) have recently become prominent. However, their effectiveness is intimately linked with the correct selection and fine-tuning of hyperparameters, which can often be a cumbersome manual task. The main aim of this study is to propose a more efficient, less labour-intensive approach to hyperparameter optimization in FCNs for segmenting fundus images. To this end, our research introduces a hyperparameter-optimized Fully Convolutional Encoder-Decoder Network (FCEDN). The optimization is handled by a novel Genetic Grey Wolf Optimization (G-GWO) algorithm. This algorithm employs the Genetic Algorithm (GA) to generate a diverse set of initial positions. It leverages Grey Wolf Optimization (GWO) to fine-tune these positions within the discrete search space. Testing on the Indian Diabetic Retinopathy Image Dataset (IDRiD), Diabetic Retinopathy, Hypertension, Age-related macular degeneration and Glaucoma ImageS (DR-HAGIS), and Ocular Disease Intelligent Recognition (ODIR) datasets showed that the G-GWO method outperformed four other variants of GWO, GA, and PSO-based hyperparameter optimization techniques. The proposed model achieved impressive segmentation results, with accuracy rates of 98.5% for IDRiD, 98.7% for DR-HAGIS, and 98.4%, 98.8%, and 98.5% for different sub-datasets within ODIR. These results suggest that the proposed hyperparameter-optimized FCEDN model, driven by the G-GWO algorithm, is more efficient than recent deep-learning models for image segmentation tasks. It thereby presents the potential for increased automation and accuracy in the segmentation of fundus images, mitigating the need for extensive manual hyperparameter adjustments.

KEYWORDS Diabetic eye disease; image segmentation; deep learning; artificial intelligence; grey wolf optimization; FCN; CNN

    1 Introduction

Diabetes mellitus, more often referred to simply as diabetes, is a condition characterized by excessive blood sugar levels owing to inadequate insulin synthesis or an inappropriate insulin response by the body. It is a significant worldwide health complication primarily caused by a sedentary lifestyle, obesity, aging, as well as poor eating habits. The number of people diagnosed with diabetes is rising alarmingly, with an estimated 116 million individuals who have diabetes, according to the International Diabetes Federation (IDF) [1]. According to projections, around 700 million individuals throughout the globe will have diabetes by 2045 [2]. Diabetes can lead to various medical complications, including nerve damage, cardiovascular disease, kidney failure, and diabetic eye disease (DED). DED, which comprises diabetic macular edema (DME), diabetic retinopathy (DR), cataracts (CA), and glaucoma (GA), is the most common reason for blindness and visual impairment among people of working age. DED symptoms, such as abnormal blood vessel growth, lens degradation, optic nerve damage, and macular swelling, can appear in the retina [3]. Effective treatments for DED, including corticosteroids, laser photocoagulation, as well as anti-vascular endothelial growth factor injections, exist. However, early diagnosis is crucial for preventing vision loss, as DED often shows no symptoms in the preliminary stage. As a result, international and regional recommendations stress the need for monitoring for DED in diabetes patients [4].

The growing population of diabetes patients exceeds the number of retinal specialists worldwide, leading to prolonged waiting times for screening and diagnosis. Automated DED screening systems can address this issue by providing a cost-effective and rapid point-of-care solution. Traditional manual examination of colour retinal fundus images by ophthalmologists is difficult, expensive, time-consuming, and not immediately responsive to patients. On the other hand, automated DED screening systems can quickly analyze retinal images captured during regular screenings, saving time and cost. Early detection of DED through automated systems can prevent complete vision loss; with early diagnosis and treatment, 90% of cases can be prevented. Implementing automated DED detection systems would significantly benefit early screening, treatment, and prevention of vision loss caused by DED. According to the World Health Organization (WHO), DME and diabetic retinopathy may increase by 47% by 2024 and 71% by 2034 if not addressed. Glaucoma is also rising, particularly in older age groups and those with diabetes. Automated DED detection systems can be crucial in early screening, treatment, and preventing vision loss [5].

Deep learning (DL) methods, in particular Convolutional Neural Networks (CNNs), have become important tools in computer vision [6-9]. However, standard CNN architectures designed for image classification may not effectively handle segmentation problems, such as pixel-level classification in semantic segmentation [10,11]. To address this, Fully Convolutional Networks (FCNs) were developed, substituting the fully connected (FC) layer with convolution (Conv) and de-Conv layers to improve pixel-level segmentation [12-15]. FCNs eliminate dense layers, reducing network parameters and enabling faster training. Typically, the FCN structural design includes convolution (Conv), ReLU, pooling, and un-pooling (UP) layers. Conv and pooling layers downsample image features, while the UP layer upsamples the output to match the input size. On the other hand, the fact that the FCN utilizes only a non-trainable up-sampling (US) layer can restrict performance [13]. In order to improve pixel-level segmentation, a variation known as a Fully Convolutional Encoder-Decoder Network (FCEDN) is presented. This network incorporates trainable encoder and decoder components in its architecture [16,17]. The encoder consists of max-pooling (MP), convolution (Conv), and dropout (DO) layers for feature extraction. At the same time, the decoder comprises transpose Conv (TC), dropout, UP, and US layers that decode the output layer by layer. The output layer's dimensions match the input image's ground truth to complete decoding. The FCEDN is more productive than the FCN, which only has a non-trainable US layer, since both the encoder and the decoder in the FCEDN are trainable.
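The practical difference between the two up-sampling strategies can be seen directly in code. The following minimal Keras sketch (an illustration, not the authors' implementation) shows that a fixed up-sampling layer carries no trainable weights, whereas a transposed convolution of the kind used in the FCEDN decoder does.

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 64, 64, 32))           # a down-sampled feature map

us = layers.UpSampling2D(size=2)                 # FCN-style: fixed interpolation, no weights
tc = layers.Conv2DTranspose(32, kernel_size=3,   # FCEDN-style: learned up-sampling
                            strides=2, padding="same")

print(us(x).shape, len(us.trainable_weights))    # (1, 128, 128, 32) 0
print(tc(x).shape, len(tc.trainable_weights))    # (1, 128, 128, 32) 2  (kernel + bias)
```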

    1.1 Motivation

Deep learning methods, predominantly CNNs, have garnered interest in improving segmentation performance in computer vision. FCNs replace the FC layer of CNNs with both Conv and de-Conv layers; however, their performance is limited by a single non-trainable US layer [13]. To address this, we propose an FCEDN variant comprising trainable encoder and decoder components with fewer layers, resulting in improved pixel-level segmentation performance. Determining the optimal hyperparameters for the FCEDN, such as the number of layers, kernel sizes, dropout rates, and learning rate, can significantly enhance its performance. However, optimizing these hyperparameters manually is time-consuming. Optimization techniques like Particle Swarm Optimization (PSO), Quantum PSO, univariate dynamic encoding, and Grey Wolf Optimization (GWO) have been applied to CNN hyperparameter optimization. Still, no evidence exists for FCEDN hyperparameter optimization [18-21].

    1.2 Contributions

In this research, an FCEDN model is developed to carry out pixel-level image segmentation. The FCEDN hyperparameters are tuned using a novel Genetic Grey Wolf Optimization (G-GWO) method rather than being manually specified. GWO has emerged as a potential solution to many optimization problems in recent years by simulating the leadership hierarchy and group hunting behaviour of grey wolves (GWs) [22-25]. Since the inception of GWO, several variants have been developed to accelerate convergence while avoiding local optima [26-28]. This paper proposes a novel form of GWO called G-GWO that uses the GA to construct a significantly more suitable initial population. Results have been compared to alternative nature-inspired approaches, and the technique is evaluated on five conventional unimodal and five common multimodal benchmark functions. On these benchmark functions, G-GWO outperforms four different variations of the GA, GWO, and PSO algorithms. This approach is then utilized to optimize the hyperparameters of an FCEDN model, resulting in an efficient model. The major focus is on finding good approximations for the hyperparameters of the FCEDN's Conv, pooling, TC, UP, and dropout layers. The last stage is to build an FCEDN system using these adjusted hyperparameters and validate its segmentation results on image datasets. The model applies to any form of classification job as well as any other image segmentation challenge. The DME, DR, and GA image datasets IDRiD [29], DR-HAGIS [30], and ODIR [31] are utilized in this work to assess the model's effectiveness. Extensive quantitative experiments performed on DED image datasets demonstrated the effectiveness of the G-GWO approach in terms of the Jaccard coefficient, Jaccard loss, sensitivity, accuracy, specificity, and precision when compared to the GA [32], PSO, GWO [21], Modified GWO (mGWO) [26], Enhanced [27] and Incremental [28] GWO. We evaluate the segmentation performance of the G-GWO-based hyperparameter-optimized FCEDN model on the same datasets as other recently created segmentation networks, including Link-Net, U-Net, Seg-Net, as well as FCN.

    The following are some of the most significant contributions of this study:

• The study introduces a novel combination of Genetic and Grey Wolf Optimization algorithms to optimize the FCEDN.

• G-GWO addresses the limitations of the typical GWO algorithm by incorporating genetic crossover and mutation operators for faster exploration and improved solution quality.

• The effectiveness of G-GWO is demonstrated through comparisons with other nature-inspired optimization algorithms on benchmark functions.

• In addition, G-GWO is applied to fine-tune the FCEDN hyperparameters for pixel-level segmentation.

• Simulations conducted on DED datasets show that G-GWO outperforms other optimization algorithms with high accuracy.

The remaining sections of the paper are as follows: Section 2 covers some of the most recent developments in FCNs with hyperparameter tuning based on nature-inspired algorithms. The methodology is discussed in detail in Section 3. Section 4 describes the suggested model, whereas Section 5 presents and discusses the findings in depth. Section 6 provides a concise summary of the study's findings.

    2 Related Work

Semantic image segmentation, particularly in retinal disease, is important for effective diagnosis and treatment in ophthalmology. It involves identifying and delineating object boundaries within an image. One popular approach for image segmentation is the use of FCNs, which have gained prominence and continue to advance rapidly.

Numerous image segmentation networks based on FCNs have been reported in the literature. In [12], semantic segmentation using the FCN model incorporates a skip architecture to combine semantic and appearance information. Another network, U-Net [14], employs a U-shaped architecture with contracting and expanding paths to propagate context information and enable precise localization. In [11], a pixel-level image classification FCN model using the VGG-16 network was introduced, where the last layers were randomly initialized. SegNet [13], a deep FCN architecture, demonstrated superior performance compared with DeepLab-LargeFOV, FCN, as well as DeconvNet. A combination of FCN, SegNet, and U-Net was proposed in [15] for pleural cell nuclei segmentation, outperforming the individual models and majority voting. For hepatocellular carcinoma diagnosis, a computer-aided diagnosis (CAD) system integrating CNN and FCN was proposed [16], incorporating skip structures to aid liver and tumour segmentation. Additionally, a combination of FCN-8 and SegNet was developed for plantar pressure image segmentation [17].

DL techniques have proven effective for retinal image segmentation in the context of ocular disease. Reference [33] introduced a DL framework based on the U-Net model for optic disc recognition in DR. The authors employed CNNs to process retinal fundus images and used a U-Net framework to identify local images for further segmentation. Optic disc (OD) and optic cup (OC) identification were performed in [34] using watershed transformation and morphological filtering techniques. The exudate detection technique proposed by Prentašić et al. [35] utilized a deep convolutional neural network for feature extraction and a Support Vector Machine (SVM) classifier for classification, along with morphological procedures and curve modelling. Glaucoma optic neuropathy screening was addressed in [36] using Inception-v3 in conjunction with mini-batch gradient descent and the Adam optimizer. A disc-aware ensemble network combining global and local image levels was developed for automated glaucoma screening, incorporating a residual network (ResNet) and a U-shaped convolutional network [14].

Several other studies have proposed image segmentation and classification techniques for different applications. Santos Ferreira et al. [37] trained an OD segmentation U-Net convolutional network and utilized texture-based features for classification. Zhang et al. [38] investigated a deep convolutional neural network (DCNN) for cataract detection and grading. At the same time, Ran et al. [39] proposed a deeper network combining a DCNN and an RF classifier for cataract grading. Xu et al. [40] presented a local-global feature representation using an ensemble of CNNs and deconvolution networks (DN) for cataract classification. Li et al. [41] developed an 18-layer deep neural network for cataract diagnosis and localization, and Dong et al. [42] used the Caffe framework with a softmax classifier for cataract classification. GoogLeNet-CAM and AlexNet-CAM models were introduced by Li et al. [43] for automatic cataract detection, leveraging class activation maps (CAM) with pretrained GoogLeNet and AlexNet models.

The selection of hyperparameters is crucial in optimizing the performance of deep learning networks [21]. However, manual tuning of hyperparameters can be time-consuming and challenging, especially with complex FCN architectures. Researchers have offered many methods, such as those based on nature-inspired algorithms, as potential solutions to overcome this problem. A variant of PSO [18] and Quantum-Behaved PSO [19] were used to tune CNN hyperparameters. Univariate dynamic encoding was utilized in [20] to optimize CNN hyperparameters, while multiscale and multilevel evolutionary optimization (MSMLEO) with Gaussian process-based Bayesian optimization (GPEI) was proposed in [11]. In a recent study [21], the GWO algorithm was employed to optimize the hyperparameters of CNNs for classification, achieving improved performance. The optimized hyperparameters were used to build and train the CNN model using the backpropagation algorithm for multiclass DED detection. While nature-inspired algorithms have been used for CNN hyperparameter selection, no studies have focused on the hyperparameter optimization of FCNs. This study introduces a novel algorithm based on GWO [22], which simulates the social and hunting behaviour of GWs. Several GWO variants have been proposed, including MGWO [44], EGWO [27], RL-GWO [45], Ex-GWO, and the Incremental Grey Wolf Optimizer [25].

Additionally, I-GWO [26] incorporates a dimension learning-based hunting strategy. It is important to keep in mind, nevertheless, that the original GWO population is initialized entirely at random. This can lead to a lack of diversity among the wolf packs as they scour the landscape for prey. Several studies have shown that an initial population with adequate diversity is extremely beneficial for enhancing the effectiveness of optimization algorithms and that this diversity can have a significant effect on the global convergence speed and the quality of the final solution. Based on this central concept, we attempted for the first time to utilize the GA to produce a much more suitable starting population; GWO is then carried out using this diverse population. The G-GWO algorithm is applied to optimize hyperparameters for an effective FCEDN model used to segment DED fundus images.

    3 Related Methodologies

    The following section covers the fundamentals of the G-GWO optimization algorithm and how it relates to FCEDN’s architecture and hyperparameters.

3.1 Fully Convolutional Encoder-Decoder Network (FCEDN)

CNNs have gained popularity in computer vision due to their effective feature extraction, prediction, and classification capabilities [11]. However, when directly applied to image segmentation, standard CNN architectures designed for image classification yield poor results. This is because fully connected layers in CNNs ignore spatial data and provide a single class likelihood value, but pixel-level classification is necessary for semantic segmentation. To improve segmentation performance at the pixel level, FCNs were developed, replacing the fully connected layer with Conv and de-Conv layers. FCNs offer portability and time-saving advantages by eliminating fully connected layers [12]. There are two approaches to implementing semantic segmentation in FCNs. The first approach involves constructing the FCN architecture with Conv, ReLU, pooling, and UP layers [12]. Convolution and pooling layers downsample image features, while the UP layer performs final upsampling. However, this approach may result in limited performance due to the lack of a trainable UP layer, potentially losing spatial information. The second approach employs an encoder-decoder architecture, where the encoder consists of CNN-like layers, and the decoder uses TC and UP layers to upsample feature maps [13]. The trainable parameters in the up-sampling layers significantly improve semantic segmentation. In this study, we propose an FCN with an encoder-decoder mechanism (FCEDN) to enhance the effectiveness of pixel-level segmentation. The encoder comprises Conv, dropout, and MP layers for feature extraction through down-sampling. The trainable decoder employs TC, UP, and dropout layers to progressively up-sample the encoded output. The decoding process concludes with an output layer that matches the ground-truth dimensions of the input image. Compared to FCNs with a non-trainable US layer, the dual trainable encoder/decoder architecture of the FCEDN achieves superior performance. Fig. 1 illustrates a comparison of the FCEDN, CNN, and FCN architectures.

Figure 1: CNN, FCN, and FCEDN architectures

The FCEDN model involves numerous hyperparameters for each layer, including the kernel size, the number of Conv, de-Conv, MP, and UP layers, the number of layer-wise kernels, the learning rate, batch size, activation function, dropout rate, number of epochs, optimizer, and more [9]. The kernel size determines the features that comprise the following layer, whereas the total number of kernels determines the total number of features. MP and UP layers use pooling sizes to downsample and upsample features. The dropout rate aids in model regularization. The architecture of the FCEDN is decided according to the total number of Conv, TC, pooling, and UP layers. As the depth of the FCEDN increases, the number of hyperparameters escalates significantly. The performance of the FCEDN heavily relies on these parameters. However, manually reaching a near-optimal hyperparameter setup for the FCEDN through an extensive examination of all potential combinations is not only unfeasible but also costly. Therefore, in this study, we formulate the appropriate selection of the FCEDN's hyperparameters as an optimization problem to enhance the overall model performance.
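To make the optimization problem concrete, the candidate hyperparameters can be collected into a discrete search space from which each optimizer agent draws a configuration. The following Python sketch is purely illustrative; the names and ranges are assumptions rather than the paper's exact values (the actual ranges are given in Sections 4.2 and 4.3).

```python
import random

# Illustrative search space (names and ranges are assumptions, not the paper's
# exact values); each entry is a discrete set an optimizer agent can pick from.
SEARCH_SPACE = {
    "conv_kernel_size": [3, 5],
    "conv_num_kernels": list(range(20, 201, 10)),
    "pool_size":        [2, 3],
    "dropout_rate":     [0.2, 0.3, 0.4],
    "learning_rate":    [1e-2, 1e-3, 1e-4],
    "batch_size":       [10, 20, 32],
}

def random_config(space):
    """Sample one candidate FCEDN configuration (one agent position)."""
    return {name: random.choice(values) for name, values in space.items()}

print(random_config(SEARCH_SPACE))
```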

3.2 Genetic Algorithm (GA)

The concept of the GA, initially proposed by Holland [32], draws inspiration from the principles of Darwinian natural selection and genetics in biological systems. The GA is a search methodology based on adaptive optimization. It operates with a population of candidate solutions known as chromosomes, each comprising multiple genes with binary values of 0 and 1. In this study, the initial positions for GWO are generated using the GA. The following steps outline the process of generating initial population positions using the GA (a minimal code sketch follows the list):

• Chromosomes are randomly generated as the initial population.

• A roulette wheel selection technique is employed to choose parental chromosomes.

• A single-point crossover technique is applied to generate offspring chromosomes.

• Uniform mutation is adopted to introduce genetic diversity.

• The mutated chromosomes are decoded to obtain the preliminary positions of the population.

• By leveraging the GA, the study establishes the preliminary positions for GWO, facilitating the subsequent optimization process.
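A rough Python sketch of these steps is shown below. It is a generic illustration under stated assumptions (the binary encoding length, a non-negative fitness function, and the mutation probability are hypothetical choices), not the authors' implementation.

```python
import numpy as np

def ga_initial_positions(pop_size, genes_per_dim, dims, lower, upper,
                         fitness_fn, p_mut=0.05, rng=None):
    """GA-based seeding of wolf positions: random binary chromosomes ->
    roulette-wheel selection -> single-point crossover -> uniform mutation ->
    decoding to real-valued positions. Assumes fitness_fn returns values >= 0."""
    rng = rng or np.random.default_rng()
    n_bits = genes_per_dim * dims
    pop = rng.integers(0, 2, size=(pop_size, n_bits))        # random chromosomes

    def decode(chrom):
        # Map each block of bits to a real value in [lower, upper].
        vals = []
        for d in range(dims):
            bits = chrom[d * genes_per_dim:(d + 1) * genes_per_dim]
            frac = int("".join(map(str, bits)), 2) / (2 ** genes_per_dim - 1)
            vals.append(lower + frac * (upper - lower))
        return np.array(vals)

    fit = np.array([fitness_fn(decode(c)) for c in pop])
    prob = fit / fit.sum() if fit.sum() > 0 else np.full(pop_size, 1 / pop_size)

    children = []
    while len(children) < pop_size:
        i, j = rng.choice(pop_size, size=2, p=prob)           # roulette wheel
        cut = rng.integers(1, n_bits)                         # single-point crossover
        child = np.concatenate([pop[i][:cut], pop[j][cut:]])
        flip = rng.random(n_bits) < p_mut                     # uniform mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    return np.array([decode(c) for c in children])            # preliminary positions
```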

3.3 Genetic-Grey Wolf Optimization (G-GWO) Algorithm

A metaheuristic algorithm, GWO [22] was inspired by the pack behaviour and hunting techniques of GWs. Encircling, hunting, and attacking the prey are the three primary phases of the algorithm. In the mathematical representation of a wolf pack's social hierarchy, the optimal solution is denoted α, followed by the second and third optimal solutions, β and δ. The remaining set of solutions is referred to as ω. The dominance structure among GWs is shown in Fig. 2.

Figure 2: Grey wolves' hierarchy [22]

The encircling behaviour observed in GWs during the hunting process is described mathematically by Eq. (1):

$$\vec{D} = \left| \vec{C} \cdot \vec{X}_p(t) - \vec{X}_w(t) \right|, \qquad \vec{X}_w(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D} \tag{1}$$

The symbols and vectors have specific meanings in the context of the equations used in GWO. Let us denote the current iteration as t, the prey as p, and a GW as w. $\vec{X}_p$ stands for the prey's location vector and $\vec{X}_w$ for that of the GW, while $\vec{A}$ and $\vec{C}$ are coefficient vectors. The $\vec{A}$ and $\vec{C}$ vectors are calculated as in Eqs. (2) and (3):

$$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a} \tag{2}$$

$$\vec{C} = 2\vec{r}_2 \tag{3}$$

where the components of $\vec{a}$ decrease linearly from 2 to 0 over the iterations, and $\vec{r}_1$ and $\vec{r}_2$ are random vectors in [0, 1].

    Pseudocode 1 provides the implementation of the GWO algorithm.
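As a complement to Pseudocode 1, the following Python sketch implements one iteration of the standard GWO position update (Eqs. (1)-(3)), averaging the guidance of the α, β, and δ leaders. It is a generic illustration of standard GWO rather than the authors' exact code.

```python
import numpy as np

def gwo_step(positions, alpha, beta, delta, a, rng):
    """One GWO iteration: move each wolf toward the alpha, beta, and delta
    leaders using the encircling equations; `a` decays from 2 to 0 over the run."""
    new_positions = np.empty_like(positions)
    for i, X in enumerate(positions):
        guided = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A = 2 * a * r1 - a                  # Eq. (2)
            C = 2 * r2                          # Eq. (3)
            D = np.abs(C * leader - X)          # Eq. (1): distance to the leader
            guided.append(leader - A * D)       # candidate position per leader
        new_positions[i] = np.mean(guided, axis=0)   # average of the three guides
    return new_positions
```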

The random generation of the initial population in the original GWO algorithm may lead to a deficiency of diversity among the wolf swarms exploring the search space. Extensive research shows that the quality of the initial population is crucial for global convergence and solution quality in swarm intelligence optimization algorithms. To improve the GWO algorithm's performance, the proposed G-GWO approach therefore utilizes a GA to generate a more suitable initial population.

Pseudocode 2 presents the G-GWO algorithm.
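Conceptually, G-GWO replaces the random initialization step with the GA-based seeding and then proceeds with the usual GWO iterations. The sketch below strings together the two helper functions defined in the earlier sketches (`ga_initial_positions` and `gwo_step`); the population size, iteration count, and 16-bit GA encoding are placeholder values, not the paper's settings.

```python
import numpy as np

def g_gwo(fitness_fn, dims, lower, upper, pop_size=30, iters=500, seed=0):
    """Sketch of G-GWO: seed the pack with GA-generated positions instead of
    purely random ones, then run standard GWO updates."""
    rng = np.random.default_rng(seed)
    X = ga_initial_positions(pop_size, 16, dims, lower, upper, fitness_fn, rng=rng)
    for t in range(iters):
        fit = np.array([fitness_fn(x) for x in X])
        order = np.argsort(fit)[::-1]                      # maximizing fitness
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2 - 2 * t / iters                              # `a` decays from 2 to 0
        X = np.clip(gwo_step(X, alpha, beta, delta, a, rng), lower, upper)
    fit = np.array([fitness_fn(x) for x in X])
    return X[int(np.argmax(fit))], float(fit.max())        # best position, best fitness
```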

4 FCEDN Hyperparameter Optimization Using G-GWO

The proposed model consists of four fundamental steps, as illustrated in Fig. 3. These steps include image pre-processing, utilization of the G-GWO algorithm for optimal selection of hyperparameters, creation and training of the FCEDN using the selected hyperparameters, and finally, assessment of the model's performance.

Figure 3: Proposed methodology

    4.1 Image Processing

Several pre-processing steps address the challenges posed by the varying resolutions and large sizes of DED images. These images come in different resolutions, such as 4288×2848, 4752×3168, 3456×2304, 3126×2136, 2896×1944, and 2816×1880, among others. The large number of input images, coupled with their large sizes, can potentially lead to suboptimal segmentation performance and increased training time for the FCEDN model. Therefore, before inputting the images into the model, the training and testing images undergo resizing by the bilinear interpolation method [21]. This resizing ensures that the images are adjusted to a suitable size while preserving their aspect ratio. A median filter is also applied to eliminate noise, and contrast-limited adaptive histogram equalization (CLAHE) is utilized to enhance the image quality. These pre-processing steps collectively contribute to improved segmentation performance and reduce the training time for the FCEDN model.
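A minimal OpenCV sketch of this pre-processing pipeline is given below; the target size, median-filter kernel, and CLAHE parameters are assumed values chosen for illustration, not the paper's settings.

```python
import cv2

def preprocess_fundus(path, size=(512, 512)):
    """Illustrative pipeline: bilinear resize, median filtering for noise,
    and CLAHE on the luminance channel for contrast enhancement."""
    img = cv2.imread(path)                                    # BGR fundus image
    img = cv2.resize(img, size, interpolation=cv2.INTER_LINEAR)
    img = cv2.medianBlur(img, 3)                              # remove impulse noise
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```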

    4.2 Design of FCEDN

The FCEDN (Fully Convolutional Encoder-Decoder Network) is a deep learning architecture consisting of trainable encoder and decoder components, each comprising different layers. The down-sampling part, or encoder, includes Conv, ReLU, dropout, and MP layers. On the other hand, the decoder, or up-sampling part, consists of TC, UP, ReLU, and DO layers. Each layer plays a crucial role in the overall network. Designing an optimal FCEDN framework tailored to a specific application is challenging, as it often involves a trial-and-error process or is influenced by previous works. In this study, the initial structure of the FCEDN is built based on related works [15-17]. The encoder includes four Conv, one dropout, four ReLU, and two pooling layers. The decoder has four TC, two UP, four ReLU, and one DO layer. The kernel sizes of the convolution, TC, pooling, and UP layers are selected between 3×3 and 5×5. The number of kernels in the initial layers is lower than in the later layers, ranging from 20 to 200. The model is regularized by a dropout layer with a dropout rate of 0.2-0.4. The overall architecture of the FCEDN is controlled by the number of Conv, TC, pooling, and UP layers. Increasing the number of convolutions can lead to overfitting, while reducing them can result in underfitting. Furthermore, too many pooling layers can discard specific features, while too few can lead to repeated features. In this work, the number of Conv, TC, pooling, and UP layers ranges between 2 and 10. These ranges were established through multiple experiments to balance efficacy and computational time.
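For concreteness, the sketch below builds a Keras model with the layer counts just described (four Conv and two MaxPooling layers in the encoder; four transposed-Conv and two UpSampling layers in the decoder). The kernel counts, kernel sizes, and input shape are placeholders standing in for the hyperparameters that G-GWO tunes; this is an illustration consistent with the description, not the authors' released code.

```python
from tensorflow.keras import layers, models

def build_fcedn(input_shape=(512, 512, 3), num_classes=2, drop=0.2):
    """Hedged sketch of the FCEDN layout described above."""
    inp = layers.Input(input_shape)
    # Encoder: Conv + ReLU blocks with max pooling and dropout
    x = layers.Conv2D(20, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(50, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Dropout(drop)(x)
    x = layers.Conv2D(70, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(100, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    # Decoder: trainable transposed convolutions with up-sampling and dropout
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2DTranspose(70, 3, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(50, 3, padding="same", activation="relu")(x)
    x = layers.Dropout(drop)(x)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2DTranspose(20, 3, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(num_classes, 3, padding="same",
                                 activation="softmax")(x)    # pixel-wise classes
    return models.Model(inp, out)
```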

4.3 G-GWO for Hyperparameter Optimization of FCEDN

The process of optimizing the FCEDN's hyperparameters using G-GWO involves four steps: encoding, population initialization, fitness evaluation, and population update. In the encoding phase, the hyperparameters of the FCEDN, such as the Conv Kernel Size (C-KS), Transpose Conv Kernel Size (TC-KS), Conv Number of Kernels (C-NK), Transpose Conv Number of Kernels (TC-NK), Max Pooling Kernel Size (MP-KS), Unpooling Kernel Size (UP-KS), and Dropout Rate (DL-Dr) of the Dropout layer, are encoded into a k-dimensional vector. The encoded vector's values are chosen at random from a specified interval. The ith parameter vector is represented by Eq. (12):

$$X_i = (x_{i1}, x_{i2}, \ldots, x_{ik}) \tag{12}$$

Considering the presence of four convolution layers, two dropout layers, two max-pooling layers, four TC layers, and two UP layers, the vector size (k) would be 22, representing the hyperparameters of these layers. The specific hyperparameters corresponding to the vector elements are as follows: (C1-Nk&Ks, C2-Nk&Ks, MP1-Ps, DL1-Dr, C3-Nk&Ks, C4-Nk&Ks, MP2-Ps, DL2-Dr, UP1-Ps, TC1-Nk&Ks, TC2-Nk&Ks, UP2-Ps, TC3-Ks&Nk, TC4-Ks&Nk).
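The mapping between a wolf's position vector and a concrete FCEDN configuration might look like the following sketch. The element ordering mirrors the list above; the clamping bounds reuse the ranges from Section 4.2, and the helper itself is hypothetical.

```python
# Illustrative decoding of a 22-dimensional wolf position into FCEDN
# hyperparameters; element order follows the list above, bounds are assumptions.
PARAM_NAMES = [
    "C1-Nk", "C1-Ks", "C2-Nk", "C2-Ks", "MP1-Ps", "DL1-Dr",
    "C3-Nk", "C3-Ks", "C4-Nk", "C4-Ks", "MP2-Ps", "DL2-Dr",
    "UP1-Ps", "TC1-Nk", "TC1-Ks", "TC2-Nk", "TC2-Ks",
    "UP2-Ps", "TC3-Nk", "TC3-Ks", "TC4-Nk", "TC4-Ks",
]  # k = 22 elements, as in the text

def decode_position(x):
    """Map a continuous position vector to discrete hyperparameter values."""
    cfg = {}
    for name, val in zip(PARAM_NAMES, x):
        if name.endswith("-Dr"):
            cfg[name] = round(min(max(val, 0.2), 0.4), 2)     # dropout rate in [0.2, 0.4]
        elif name.endswith("-Ks") or name.endswith("-Ps"):
            cfg[name] = int(round(min(max(val, 2), 5)))       # kernel/pool sizes
        else:
            cfg[name] = int(round(min(max(val, 20), 200)))    # number of kernels
    return cfg
```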

Next, n encoding vectors are generated for the preliminary wolf population, denoted as X1, ..., Xn. Each Xi represents the location of the ith GW and is a k-dimensional vector representing the hyperparameters of the FCEDN. To reduce the computation time of fitness evaluation, a lightweight model is trained on small random samples, and it is checked whether the change in the fitness value is negligible. Then, the coefficient vectors of G-GWO are constructed employing Eqs. (2), (3), and (10), respectively. Each agent's fitness is then assessed, and the procedure of updating the general population while maintaining the top three agents α, β, and δ continues for a specified number of iterations, as outlined in the pseudocode. Finally, the agent with the greatest fitness yields the ideal FCEDN hyperparameters. These parameters are then used to construct the segmentation network. In this study, determining the FCEDN model's hyperparameters for segmenting images is presented as an optimization problem. The objective function of G-GWO is defined as maximizing the Jaccard coefficient, formulated for the mth image as follows:

$$JC_m = \frac{\sum_{j=1}^{r}\sum_{l=1}^{c} y_m(j,l)\,\hat{y}_m(j,l) + smooth}{\sum_{j=1}^{r}\sum_{l=1}^{c} \left( y_m(j,l) + \hat{y}_m(j,l) - y_m(j,l)\,\hat{y}_m(j,l) \right) + smooth}$$

In this formulation, $y_m(j,l)$ stands for the ground-truth value of pixel (j,l) and $\hat{y}_m(j,l)$ stands for the predicted label of pixel (j,l) for the mth image of size (r × c), obtained from the FCEDN built with the Xi position vector. The smoothing value is chosen randomly between 0 and 1, and tim is the ratio of images used for training. Class imbalances are more prevalent in segmentation assignments involving fewer classes. A deep neural network can achieve 80 percent accuracy by correctly classifying only the background pixels, which make up the majority of the image. However, in these tasks, the pixels belonging to the segmented areas constitute approximately 20 percent of the pixels.

Consequently, accuracy alone may not be the most suitable metric for evaluating automatic segmentation performance. Instead, a more reasonable measure of segmentation performance is the percentage correspondence between the ground truth and the predicted masks. The similarity between the predicted and ground-truth masks is measured by determining how many pixels are shared between the two sets and then dividing that number by the overall number of pixels in both sets. By considering this measure, we can better assess the model's performance in accurately segmenting the desired areas of the image.
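In code, the smoothed Jaccard coefficient used as the fitness of a candidate FCEDN could be computed as below; binary masks are assumed, and the smoothing constant is an assumed placeholder rather than the paper's value.

```python
import numpy as np

def jaccard_fitness(y_true, y_pred, smooth=1e-6):
    """Smoothed Jaccard coefficient between a ground-truth mask and a
    predicted mask of shape (r, c); `smooth` avoids division by zero."""
    y_true = y_true.astype(np.float32).ravel()
    y_pred = y_pred.astype(np.float32).ravel()
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - intersection
    return (intersection + smooth) / (union + smooth)

# The Jaccard loss is simply 1 - Jaccard coefficient.
```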

    4.4 Performance Metrics

Typical segmentation evaluation metrics, such as the Jaccard Coefficient (JC) and Jaccard loss (JL), are applied to determine the relationship between the segmented area and the ground truth. These metrics provide quantitative measures of the resemblance between the predicted segmentation and the actual ground truth. Furthermore, the overall accuracy (Acc), sensitivity (Sen), specificity (Spc), and precision (Pre) of the pixel-by-pixel segmentation method are also examined. These metrics are derived from the confusion matrix and permit a thorough evaluation of the segmentation performance. They are calculated from the confusion matrix as follows:

$$Acc = \frac{TP + TN}{TP + TN + FP + FN}, \quad Sen = \frac{TP}{TP + FN}, \quad Spc = \frac{TN}{TN + FP}$$

$$Pre = \frac{TP}{TP + FP}, \quad JC = \frac{TP}{TP + FP + FN}, \quad JL = 1 - JC$$

TP represents the number of pixels correctly classified as object, whereas TN is the number of pixels correctly classified as background. FN indicates the total number of pixels that belong to the object but are categorized as background. In contrast, FP indicates the number of pixels that belong to the background but are classified as objects.

5 Experimental Results and Discussion

This section compares the proposed G-GWO to other nature-inspired techniques, including GWO [21], mGWO [26], eGWO [27], iGWO [28], PSO, and GA, using ten standard benchmark functions. These include five unimodal reference functions and five multimodal benchmark functions (two of them are sketched in code after the list below).

Unimodal Functions:

Sphere (F-1): A continuous, convex function that evaluates optimization algorithms' converging ability.

Schwefel 2.22 (F-2): Often used to test an algorithm's robustness to local minima.

Schwefel 1.2 (F-3): Used to assess convergence speed.

Schwefel 2.21 (F-4): Tests premature convergence and exploration capabilities.

Generalized Rosenbrock (F-5): Known for its narrow, flat valleys, it is useful for testing the precision of algorithms.

Multimodal Functions:

Generalized Schwefel (F-6): Contains multiple local minima, ideal for evaluating global search ability.

Rastrigin (F-7): Known for its large search space and many local minima.

Ackley (F-8): Combines characteristics of several functions, useful for a comprehensive evaluation.

Griewank (F-9): Often used to evaluate the ability of algorithms to escape local minima.

Generalized Penalized (F-10): Suitable for testing an algorithm's efficiency in overcoming mathematical penalties.
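As referenced above, two of these standard benchmarks, the unimodal Sphere (F-1) and the multimodal Rastrigin (F-7), are reproduced below in Python using their usual textbook definitions; both have a global minimum of 0 at the origin.

```python
import numpy as np

def sphere(x):
    """Sphere function (F-1): sum of squares."""
    return np.sum(x ** 2)

def rastrigin(x):
    """Rastrigin function (F-7): many regularly spaced local minima."""
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

x = np.zeros(30)                  # 30-dimensional point at the optimum
print(sphere(x), rastrigin(x))    # both print 0.0
```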

All these benchmark functions are implemented with the same 10 × 30 population size and environment; the predetermined number of iterations is 500. Table 1 presents the mean as well as the standard deviation of the fitness error derived from 50 independent trials. While these metrics provide an overview of the performance, it should be noted that this evaluation lacks a deeper statistical analysis to determine the significance of the observed differences between the techniques. Without such analysis, conclusions about the superiority or inferiority of certain techniques may be tentative. In each experiment, the parameters of the competing algorithms are configured according to the specifications recommended in their original work. An examination of Table 1's results reveals that eGWO provides superior outcomes for F-1 and F-2, while GA outperforms other techniques in the case of F-6 [27]. G-GWO demonstrates extremely competitive performance in all other functions compared to other methodologies. However, it should be stressed that these findings are presented without statistical significance testing, and further analysis would be required to confirm these observed differences.

Extensive experiments were performed to evaluate the efficacy of G-GWO for hyperparameter optimization of the FCEDN utilizing the IDRiD [29], DR-HAGIS [30], and ODIR [31] datasets. These experiments were carried out using MATLAB and Python with the Keras, Scikit-learn, and OpenCV libraries. Experiments were conducted on Google Colab Pro, outfitted with a GPU, an Intel Core i7 8th-generation processor, and 32 GB of RAM. IDRiD, DR-HAGIS, and ODIR are image datasets that serve as input for superpixel-based feature extraction, classification annotations, and ground truths. The resolutions of these datasets varied, including 4288×2848, 4752×3168, 3456×2304, 3126×2136, 2896×1944, and 2816×1880. The IDRiD dataset contains 516 RGB images for the segmentation assignment, while the DR-HAGIS and ODIR datasets contain 30 and 362 RGB images, respectively. The ODIR dataset is subdivided into 177 glaucoma images, 49 DR images, and 136 DME images. Refer to Table 2 for more information about the simulated dataset and Fig. 4 for sample images from the datasets.

    Table 2: A detailed description of the dataset

The IDRiD [29] is a groundbreaking dataset specifically curated for India, consisting of 516 retinal fundus images captured at the Nanded (M.S.) eye clinic using a Kowa VX-10α fundus camera. With a focus on the macula and a field of view of 50 degrees, the images provide comprehensive coverage of diabetic retinopathy and normal retinal structures, meticulously annotated up to the pixel level. The dataset includes well-defined grading scores from 0 to 4 for diabetic retinopathy and 0 to 3 for diabetic macular edema, reflecting varying levels of severity according to international clinical relevance standards. The IDRiD dataset is an invaluable resource for developing and evaluating advanced algorithms, facilitating early detection and analysis of diabetic retinopathy in the Indian population. The DR-HAGIS [30] database is a collection of retinal fundus images. This database contains 39 high-resolution, colour fundus images from the United Kingdom's diabetic retinopathy screening program. The screening program utilizes different fundus and digital cameras provided by various service providers, leading to variations in image resolution and sizes.

Figure 4: Sample images from the datasets

Additionally, patients enrolled in these programs often exhibit other comorbidities alongside diabetes. To accurately represent the range of images assessed by experts during screening, the DR-HAGIS database includes images of different sizes and resolutions, as well as four comorbidity subgroups: diabetic retinopathy, age-related macular degeneration, hypertension, and glaucoma. The ODIR (Ocular Disease Intelligent Recognition) [31] dataset is a publicly available collection of retinal images captured using fundus cameras. Its purpose is to facilitate research in ocular disease recognition by developing and evaluating algorithms for disease detection and classification. The dataset includes diverse images from patients with various ocular diseases, as well as healthy individuals for comparison. Annotations are provided for the presence or absence of ocular conditions such as diabetic retinopathy, glaucoma, AMD, and hypertensive retinopathy. The dataset is split into training, validation, and testing subsets, enabling algorithm optimization and evaluation. By leveraging the ODIR dataset, researchers can advance the field, developing automated tools for early disease detection and improved patient care in ophthalmology.

CAD systems are designed to detect lesions related to DED. However, current methods still face challenges with a high rate of false-positive detections per image. Manual feature engineering and limited labelled data hinder accurate lesion recognition and deep learning model training. Fundus image databases suffer from privacy constraints and limited data, making training challenging. To address these issues, this study proposes a fundus image augmentation scheme using diverse techniques. Data augmentation methods, including geometric transformations and patch extraction, were used to increase the number of image instances, as explained in Table 3.

Table 3: Different techniques for data augmentation with invariance features

The images were artificially increased seven-fold by employing diverse data augmentation approaches, as represented in Table 4. The augmented dataset consists of 14385 fundus images, with 10070 images for training (70%) and 4316 for evaluation (30%).
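A simple way to realize such a seven-fold geometric augmentation, applied identically to each image and its ground-truth mask, is sketched below with OpenCV; the specific transforms are illustrative stand-ins for the ones listed in Table 3.

```python
import cv2

def augment_seven(img, mask):
    """Return seven (image, mask) pairs per input (original included):
    flips, 90/180/270-degree rotations, and a centre crop patch."""
    h, w = img.shape[:2]
    centre = (img[h//4:3*h//4, w//4:3*w//4], mask[h//4:3*h//4, w//4:3*w//4])
    return [
        (img, mask),                                           # original
        (cv2.flip(img, 1), cv2.flip(mask, 1)),                 # horizontal flip
        (cv2.flip(img, 0), cv2.flip(mask, 0)),                 # vertical flip
        (cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE),
         cv2.rotate(mask, cv2.ROTATE_90_CLOCKWISE)),
        (cv2.rotate(img, cv2.ROTATE_180),
         cv2.rotate(mask, cv2.ROTATE_180)),
        (cv2.rotate(img, cv2.ROTATE_90_COUNTERCLOCKWISE),
         cv2.rotate(mask, cv2.ROTATE_90_COUNTERCLOCKWISE)),
        centre,                                                # extracted patch
    ]
```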

    Table 4: Description of the experimental augmented dataset

This study evaluates the viability of the agents in G-GWO in order to identify optimal parameters for building the FCEDN model. The FCEDN model comprises four conv layers with ReLU activation, two MP layers, two DO layers, four TC layers, and two UP layers. The conv layers have 20, 50, 70, and 100 kernels, with a fixed kernel size of 4. The TC layers contain 70, 50, 20, and 2 kernels with kernel sizes of 4, 4, 4, and 2, respectively. A pooling size of 2 is used for both the MP and UP layers, and a DO rate of 0.2 is implemented. The last layer of the FCEDN model is a TC layer with two kernels and a kernel size of two, which is used to match the image's ground truth. The final output is generated using the softmax activation function. The FCEDN model is trained with the Adam optimizer, a constant learning rate of 0.001, and a batch size of 20. A sample dataset of 200 randomly selected images from the ISIC2016 dataset is used for training. Although the FCEDN architecture is chosen based on a review of previous works [15-17], simulations are also performed with alternative architectures by altering the number of conv, ReLU, MP, TC, and UP layers, as well as the number of kernels in the conv and TC layers. Other network parameters remain unchanged across these architectures. The FCEDN architecture with 4 Conv, 8 ReLU, 2 DO, 2 MP, 4 TC, and 2 UP layers is shown in Table 5. Therefore, this architecture is maintained for subsequent simulations that optimize the FCEDN's hyperparameters.

    Table 5: Parameter configurations

After determining the FCEDN architecture, its hyperparameters are encoded to be compatible with the G-GWO population. In addition to G-GWO, GWO [21], mGWO [26], eGWO [27], iGWO [28], PSO, and GA are used to optimize the FCEDN hyperparameters. All optimization strategies utilize an identical population size. Table 5 provides a summary of the hyperparameters obtained through the various optimization techniques, as well as the maximal Jaccard coefficient attained by the best-performing agent. G-GWO consistently outperforms GWO, iGWO, eGWO, mGWO, GA, and PSO when considering the value of the Jaccard coefficient on the sample dataset, as shown by the table analysis. These results demonstrate that G-GWO provides superior hyperparameter optimization for the FCEDN compared to other cutting-edge techniques.

The control parameters for the various optimization techniques are listed in Table 6. These values have been determined through simulation and are tailored to the application in question. In order to incorporate the FCEDN model with the G-GWO population, its hyperparameters have been encoded while its architecture remains unchanged. The hyperparameters of the FCEDN are optimized utilizing multiple optimization techniques, such as GWO [21], mGWO [26], eGWO [27], iGWO [28], PSO, and GA, in addition to G-GWO. All of these methods use the same population size. The maximal Jaccard coefficient attained by the fittest agent is presented in Table 6 alongside the hyperparameters of the FCEDN model derived through the various optimization techniques. G-GWO provides improved hyperparameter optimization for the FCEDN relative to other state-of-the-art techniques, as shown in Table 6. The FCEDN models were trained for 500 iterations on the IDRiD, DR-HAGIS, and ODIR datasets using the various hyperparameter methodologies. For each epoch, the Jaccard coefficient and Jaccard loss were computed, and the results were evaluated. The segmentation effectiveness of the datasets utilizing the different hyperparameter-optimized FCEDN models is presented in Table 7. The results show that the proposed G-GWO-based model yielded exceptional results, with Jaccard coefficients of 98.7%, 98.9%, 98.3%, 98.6%, 98.9%, 98.4%, and 98.7%, and Jaccard losses of 0.0129, 0.0326, 0.0397, 0.027, 0.0253, 0.0245, and 0.0217, respectively. It can be observed from Table 7 that the proposed model, evaluated on the IDRiD, DR-HAGIS, and ODIR datasets, achieved an average DR accuracy of 98.5% on the IDRiD dataset. On the DR-HAGIS dataset, the proposed model achieved 98.7%, 98.1%, and 98.4% for DR, DME, and glaucoma, respectively. The accuracy result of ODIR for DR is 98.8%, and the accuracy values for DME and glaucoma are 98.2% and 98.5%, respectively.

Table 6: FCEDN hyperparameters provided by different optimization methods

Table 7: Comparison of various optimization techniques based on the segmentation efficiency of a hyperparameter-optimized FCEDN model

In general, the results demonstrate the efficacy of G-GWO in optimizing the hyperparameters for various disease classification tasks, resulting in superior performance compared to other optimization techniques. The evaluation metrics further validate the enhanced accuracy, sensitivity, specificity, and precision attained by G-GWO across multiple datasets and disease scenarios. The graphical analysis representing accuracy, sensitivity, specificity, and precision is illustrated in Figs. 5a-5d.

Figure 5: Performance metrics comparison

Fig. 6 depicts the input, pre-processing, ground truth, and corresponding predicted mask obtained by the G-GWO-based hyperparameter-optimized FCEDN model for some sample images. These results indicate that the area is properly segmented.


Figure 6: Input, pre-processing, ground truth, and the predicted segmentation results obtained by the proposed model for some sample images

    6 Conclusion

Fundus images are valuable in detecting areas affected by diabetic eye disease. However, manually identifying these areas presents a significant challenge for ophthalmologists. In response to this challenge, we introduce an optimized method known as Genetic Grey Wolf Optimization (G-GWO) for hyperparameter tuning of the Fully Convolutional Encoder-Decoder Network. We aim to accurately identify the regions in fundus images associated with diabetic eye disease. The effectiveness of G-GWO is demonstrated through its comparison with four variants of the GWO algorithm, as well as the GA and PSO strategies for hyperparameter optimization. We conducted extensive experiments using the IDRiD, DR-HAGIS, and ODIR datasets. As a result of the proposed FCEDN model, several evaluation metrics, including the Jaccard coefficient, Jaccard loss, accuracy, sensitivity, specificity, and precision, have shown significant improvement. The proposed model outperforms other optimization techniques and the latest deep learning methods examined in this study. Despite our research covering several significant aspects, some areas warrant further exploration. For instance, the optimization process can be enhanced by introducing additional hyperparameters such as the regularization rate, activation functions, and training size and rate. Furthermore, we could examine the use of different optimization algorithms to increase the efficacy of the FCEDN model during the weight-updating phase. This study concludes by introducing a new method, G-GWO, for optimizing the hyperparameters of the FCEDN model in fundus image analysis. The experimental results validate its superiority over other optimization methods and show its potential in accurately identifying areas affected by diabetic eye disease.

The study also acknowledges potential limitations of the proposed approach, such as dataset generalization, computational complexity, sensitivity to hyperparameters, benchmarking against the state-of-the-art, and interpretability of results. Addressing these aspects will provide a comprehensive understanding of the approach's strengths and areas for improvement. Furthermore, suggesting future research directions will underscore the significance of the study in advancing image segmentation with genetic optimization techniques for Fully Convolutional Encoder-Decoder Network (FCEDN) models in medical imaging applications.

Acknowledgement: We would like to express our sincere gratitude to all those who have supported and contributed to the completion of this manuscript.

Funding Statement: This work was supported in part by the National Natural Science Foundation of China under Grants 11527801 and 41706201.

Author Contributions: The authors confirm their contribution to the paper as follows: Conceptualization, A.Q.K. and G.S.; Methodology, A.Q.K., Y.L., A.B.; Software, A.Q.K. and A.B.; Validation, Y.L. and M.A.M.; Formal analysis, G.S., M.A.M.; Writing-original draft, A.Q.K.; Writing-review & editing, A.B.; Visualization, G.S., Y.L.; Project administration, G.S.; Supervision, G.S.; Funding acquisition, G.S., Y.L.

Availability of Data and Materials: The data used to support the findings of this study are available from the first author upon request.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
