
    Optimized Deep Learning Approach for Efficient Diabetic Retinopathy Classification Combining VGG16-CNN

Computers, Materials & Continua, 2023, Issue 11

Heba M. El-Hoseny, Heba F. Elsepae, Wael A. Mohamed and Ayman S. Selmy

1Department of Computer Science, the Higher Future Institute for Specialized Technological Studies, Obour, 11828, Egypt; 2Department of Electrical Engineering, Benha Faculty of Engineering, Benha University, Benha, 13511, Egypt

ABSTRACT Diabetic retinopathy is a critical eye condition that, if not treated, can lead to vision loss. Traditional methods of diagnosing and treating the disease are time-consuming and expensive. However, machine learning and deep transfer learning (DTL) techniques have shown promise in medical applications, including detecting, classifying, and segmenting diabetic retinopathy. These advanced techniques offer higher accuracy and performance. Computer-Aided Diagnosis (CAD) is crucial in speeding up classification and providing accurate disease diagnoses. Overall, these technological advancements hold great potential for improving the management of diabetic retinopathy. The study's objective was to differentiate between the different classes of diabetic eye disease and verify the model's capability to distinguish between these classes. The robustness of the model was evaluated using metrics such as accuracy (ACC), precision (PRE), recall (REC), and area under the curve (AUC). In this study, the researchers utilized data cleansing techniques, transfer learning (TL), and convolutional neural network (CNN) methods to effectively identify and categorize the various diseases associated with diabetic retinopathy (DR). They employed the VGG16-CNN model, incorporating intelligent parameters that enhanced its robustness. The outcomes obtained with the auto enhancement (AE) filter surpassed those of the other filter, with an ACC of over 98%. The manuscript provides visual aids such as graphs, tables, techniques, and frameworks to enhance understanding. This study highlights the significance of optimized deep TL in improving the classification metrics for the four separate classes of DR and emphasizes the importance of the VGG16-CNN classification technique in this context.

KEYWORDS No diabetic retinopathy (NDR); convolution layers (CNV layers); transfer learning; data cleansing; convolutional neural networks; visual geometry group (VGG16)

    1 Introduction

Eye disorders cause different infirmities, such as glaucoma [1], cataracts, DR, and NDR, so early diagnosis and treatment prevent blindness. Glaucoma was estimated to affect 64.3 million people worldwide in 2013, rising to 76.0 million in 2020 and 111.8 million in 2040. The diagnosis of glaucoma is frequently delayed because it does not manifest symptoms until a relatively advanced stage. According to population-level surveys, only 10 to 50 percent of glaucoma patients are aware that they have the condition. To prevent these diseases, diabetic blood pressure must be kept consistent, and the eye must undergo routine examinations at least twice a year [2]. The second class is cataracts, one of the most common visual diseases, in which the lens of the eye appears cloudy. People who suffer from cataracts [3] find problems in reading, driving, and recognizing the faces of others. The World Health Organization (WHO) estimates that there are approximately 285 million visually impaired people globally, of whom 39 million are blind and 246 million have moderate to severe visual impairment [4]. Cataract surgery has improved markedly in recent years compared with the preceding ones. In patients without ocular complications such as macular degeneration, DR, or glaucoma [5], 85%-90% of cataract surgery patients will achieve 6/12 best-corrected vision [6]. The third class is DR [7], categorized into two retinal disease cases: proliferative DR (PDR) and non-proliferative DR (NPDR). NPDR is a low-risk form of DR in which the blood vessel membranes in the retina are compromised; the retina's tissues can swell, causing white patches to appear. In the high-risk form, called PDR, high intraocular pressure may cause the blood vessels to have difficulty transferring fluid to the eye, which destroys the cells responsible for transmitting images from the retina to the cerebrum [8]. The fourth class is NDR, the typical (healthy) case.

The paper's main contributions are distinguished from others by employing data cleansing techniques, TL, and intelligent parameter techniques. Various cleansing techniques are implemented to compare their performance in different applications, involving replacing, modifying, or removing incorrect or coarse data. This enhances data quality by identifying and eliminating errors and inconsistencies, leading to improved learning processes and higher efficiency. Two enhancement filters, auto enhancement (AE) and contrast-limited adaptive histogram equalization (CLAHE), are applied, along with steps such as augmentation and optimization using adaptive learning rates with different rates for each layer.

This work's primary contribution lies in analyzing and balancing the datasets as a cleansing step to achieve high-quality feature extraction. This is accomplished through the application of the AE and CLAHE filters. Subsequently, the VGG16 and CNN algorithms are employed with varying dropout values. Two evaluation stages are conducted: the first measures the performance of the cleansed images using a MATLAB program, indicating that AE outperforms CLAHE in the metrics. The second evaluation involves training the algorithm and compiling metrics such as AUC, ACC, PRE, REC, the confusion matrix (CM), and loss curves, reaffirming the findings of the first evaluation. AE is identified as the superior enhancement filter in this manuscript, achieving metrics not previously obtained with the same database.

The paper is structured as follows: Section 2 presents related work. Section 3 explains deep TL, CNN, and the employed cleansing filters with their corresponding mathematical equations, and discusses the VGG16 and CNN algorithms used to divide the original DR images into four groups. Section 4 presents the simulation results and visual representations. Finally, Section 5 concludes the paper and outlines potential future work.

    2 Related Works

The classification of ophthalmological diseases has been the subject of numerous research proposals. We conducted a literature review to determine the primary image-based methods for glaucoma (GL), cataract (CT), DR, and NDR diagnosis, to gain a deeper understanding of the issue, and to brainstorm workable solutions for raising the ACC of our TL model. We looked at recent journals and publications. After these steps, we reached the goal of using an open-source, freely downloadable dataset and examined a model to compare our efforts with previous experiments. This study introduced an ensemble-based DR system, the CNN in our ensemble (based on VGG16), for the different classes of DR tasks. We obtained evaluation metrics, expressed in terms of ACC, that were achieved with the help of the enhancement filters (98.79% for CNN with dropout 0.02 and the auto-enhancement filter, with a loss of 0.0390) and the outcomes of CLAHE (96.86% for TL without dropout, with a loss of 0.0775). In Ahmed et al.'s research on cataracts, CNN with VGG-19 was applied, where the ACC was 97.47%, with a PRE of 97.47% and a loss of 5.27% [9]. Huang et al. implemented a semisupervised classification based on two procedures, a Graph Convolutional Network (GCN) and a CNN, where they scored the best ACC compared to other conventional algorithms, with an ACC of 87.86% and a PRE of 87.87% [10]. A Deep Convolutional Neural Network (DCNN) was set up by Gulshan et al. [11] to recognize DR in retinal fundus images, and a deep learning algorithm was applied to develop an algorithm that autonomously detects diabetic macular edema and DR in retinal fundus images. The main decision made by the ophthalmologist team affected the specificity and sensitivity of the algorithm used to determine whether DR was moderate, worse, or both. The DCNN, with a vast amount of data and various grades per image, was used to create the algorithm with 96.5% sensitivity and 92.4% specificity. Kashyap et al. [12] implemented a TL technique using two CNNs. Li et al. [13] created another TL model for categorizing DR into four classes: normal, mild, moderate, and severe. This was in addition to applying TL in two ways while using baseline approaches such as AlexNet, GoogLeNet, VGGNet-16, and VGGNet-19 [14]. The TL model classified the optical coherence tomography (OCT) images for the diseases resulting from the diabetic retina. Kermany et al. [15], who also produced Inception-v3 [16], carried out this novel work. Their approach was trained, tested, and validated using OCT images from four different categories: choroidal neovascularization, diabetic macular edema, drusen, and NORMAL. Additionally, they tested the effectiveness of their strategy using 1,000 randomly chosen training samples from each category. Lu et al. [17] also described the TL technique for diagnosing DR using OCT images. They classified five classes from the OCT datasets [18]. Kamal et al. [19] used five algorithms (standard CNN, VGG19, ResNet50, DenseNet, and GoogLeNet). They concluded that the best metric measurements came from the VGG19 algorithm with fine-tuning, with an AUC of 94%; sensitivity and specificity were 87.01% and 89.01%, respectively. Ahn et al. [20] presented a CNN (consisting of three CNV layers with max pooling at every layer and two fully connected layers in the classifier). The researchers and their colleagues used a private dataset of 1,542 images. They achieved an ACC of 87.9% and an AUC of 0.94 on the test data. The authors in [21-24] used the algorithms VGG16, VGG16 with augmentation (using techniques like mirroring and rotating), VGG16 with dropout added to the architecture, and two cascaded VGG16 networks to get the highest ACC. Pratt et al. [25] added a CNN to this architecture of preprocessing methods for classifying micro-aneurysms, exudates, and hemorrhages on the retina. They achieved a sensitivity of 95% and an ACC of 75% on 5,000 validation images. Islam et al. [26] conducted experiments on eight retinal diseases using CLAHE as a pre-processing step and a CNN for feature extraction. Sarki et al. [27] submitted a CNN-based architecture for the dual classification of diabetic eye illness across several class levels: with VGG16, the maximum ACC for multi-classification is 88.3%, and likewise, for mild multi-classification, it is 85.95%. Raghavendra et al. [28] built a CNN (which included four CNV layers and applied batch normalization, one ReLU, and one max-pooling at each layer before adding fully connected and Soft-max layers). They hit an ACC of around 98.13% using 1,426 fundus photos from a specific dataset. Recently presented classification and segmentation approaches for retinal disease were used to increase the classification ACC. A novel tactic has been posited to boost the fineness of the retinal images (enhancement techniques) before the classifier steps. Simulation outcomes for that task reached 84% ACC without fuzzy enhancement, but when applying fuzzy enhancement, the ACC reached 100%. This meant that fuzzy enhancement was important for discriminating between the different types of retinal diseases [29]. Another paper proposed a model for Ocular Disease Intelligent Recognition (ODIR), using augmentation techniques to achieve balance in different datasets. As a result, the ACC of each disease in multi-labeled classification tasks was improved by producing better images and working with different TL algorithms [30]. In the paper by Sultan A. Aljahdali, Inception V3, Inception ResNet V2, Xception, and DenseNet 121 were used as a few examples of pre-trained models that were applied and provided CAD ways of CT diagnosis. The Inception ResNetV2 model had a test ACC of 98.17%, a true positive rate of 97%, and a false positive rate of 100% for detecting eye disease [31]. Nine models (ResNet50, ResNet152, VGG16, VGG19, AlexNet, GoogLeNet, DenseNet20, Inception v3, and Inception v4) served as the foundation for Kamal et al.'s survey paper, which had a best ACC of 0.80. In that article, the dataset is categorized into three ocular illnesses (strabismus, DR, and GL), where various techniques are used, such as TL, DL, and ML approaches [19]. Another review study is based on five retina classes sorted by severity (No DR, Mild, Moderate, Severe, and Proliferative). That paper is organized in terms of three model families (supervised, self-supervised, and transformer models), which achieved percentages of 91%, 67%, and 50%, respectively [32]. Li et al. presented enhancement methods through two filters. The first filter, Adaptive Histogram Equalization (AHE), enhances the contrast between four classes of images (about 7,935 images). The second filter was a non-local means filter, eliminating the noise. The results of these two filters were used as input, and the resulting performance metrics of ACC, specificity, and sensitivity were 94.25%, 94.22%, and 98.11%, respectively [33]. Different databases of more than two diseases were used and gave acceptable validation ACC for the training process. This process was checked by five different versions of the AlexNet CNN (Net transfer I, Net transfer II, Net transfer III, Net transfer IV, and Net transfer V): 94.30%, 91.8%, 89.7%, 93.1%, and 92.10%, successively [34]. Sharif A. Kamran presented the generative adversarial network VTGAN. Vision transformers depend on a generative adversarial network (GAN) made up of transformer encoder blocks for discriminators and generators, as well as residual and spatial feature fusion blocks with numerous losses, to provide brilliant fluorescein angiography images from both normal and abnormal fundus photographs for training. When network metrics were measured on a vision transformer using the three common metric criteria of ACC, sensitivity, and specificity, the scores were 85.7%, 83.3%, and 90%, respectively [35].

    3 Materials and Methods

    3.1 Data Cleansing(Retinal Image Enhancement)

The diagnosis of DR can be performed manually by an ophthalmologist or through CAD systems. Retinal images are typically captured using a fundus camera. However, several parameters can influence the quality of these diagnostic images, such as eye movements, lighting conditions, and glare. Image quality is critical in the classification, segmentation, and detection processes. Any abnormalities or malformations in the fundus images can have a negative impact on the ACC of the diagnosis. The presence of noise in the images can lead to a decrease in the evaluation metrics used to assess the performance of the diagnostic model. To address these issues, it is crucial to cleanse the datasets of images by removing any malformations or artifacts. The present study uses the AE and CLAHE filters to clean the data. This cleansing process aims to enhance the images' quality and improve the diagnostic model's ACC.

    3.1.1 CLAHE

The filter employs a clipping level in the histogram to determine the intensity of the local histogram mapping function. This helps reduce undesired noise in the retinal image [36]. Two essential variables in CLAHE are the block size and the clip limit, which are used to adjust the image quality. The clip limit is calculated using the following Eq. (1):

where β is the clip limit; S_max is the maximum of the new distribution; M represents the area (block) size; N represents the grey-level value (256); L is the level of the image; and α is the clip factor, which expresses the addition of a histogram limit with a value of 1 to 100 [37].
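The body of Eq. (1) is not reproduced in this version of the text. The standard CLAHE clip-limit formula, which matches the variable list above and is assumed here, is

$$\beta = \frac{M}{N}\left(1 + \frac{\alpha}{100}\left(S_{\max} - 1\right)\right)$$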

    3.1.2 AE Filter

This is the second filter used in our proposal. It increases the overall brightness of the image [38]. Brightness is simply lightness without regard to hue: the luminosity scale runs from 0 to 100, where 0 is no light (black) and 100 is white, and the arithmetic mean of the red, green, and blue components can be taken as the brightness. The AE function in the MATLAB code is tuned by changing three parameters to obtain the highest quality of enhanced images, chiefly the brightness of the various objects or areas and the relative amounts of dark and light regions of the fundus photos (the contrast). If the contrast is increased, then light-colored objects become brighter and dark-colored objects become darker; contrast can therefore be described as the difference between the lightest-colored object and the darkest-colored object, as in Eq. (2):
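Eq. (2) is likewise not reproduced here. A common formulation consistent with the description above, taken only as an assumption, expresses brightness as the arithmetic mean of the colour channels and contrast as the spread between the lightest and darkest intensities:

$$\text{Brightness} = \frac{R + G + B}{3}, \qquad \text{Contrast} = I_{\max} - I_{\min}$$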

    3.2 Enhanced Data Evaluation

After applying the MATLAB code to enhance the quality of the DR classes using both the CLAHE and AE filters, we conducted further analysis to measure the performance of these filters. A custom MATLAB function was developed to calculate evaluation metrics based on the outcomes of the filters. Table 1 presents the results of these evaluation measurements, including fused entropy, average gradient, edge intensity, and local contrast. From the table, it can be observed that the AE filter achieved higher values for these metrics compared to the CLAHE filter. The higher values obtained for the evaluation measurements suggest that the AE filter was more successful in improving the quality of the DR classes.
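The authors computed these measurements with MATLAB code that is not included here. As an illustration only, a minimal Python/NumPy sketch of two of the listed metrics (entropy and average gradient) on a greyscale image could look as follows; the function names and thresholds are ours, not the authors'.

```python
import numpy as np

def entropy(img):
    """Shannon entropy of an 8-bit greyscale image (higher = more information)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins
    return -np.sum(p * np.log2(p))

def average_gradient(img):
    """Mean gradient magnitude, often used as a proxy for detail and sharpness."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)         # vertical and horizontal gradients
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

# Usage: compute both metrics for the same fundus image before and after enhancement;
# higher values are read here as better contrast and detail.
```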

    Table 1: Comparison of enhanced outcomes metrics by CLAHE and AE filter

    3.3 Convolutional Neural Network

CNNs have recently gained significant popularity, particularly for image classification tasks. These deep learning algorithms utilize learnable weights and biases to analyze fundus images and differentiate between them. CNNs [39,40] have multiple input, output, and hidden layers. They have been successfully applied in various computer vision applications [41], such as semantic segmentation, object detection, and image classification. One notable advantage of CNNs is their ability to recognize essential features in images without human supervision. Additionally, the concept of weight sharing in CNNs contributes to their high ACC in image classification and recognition. CNNs can effectively reduce the number of parameters that must be trained while maintaining performance. However, CNNs do have some limitations. They require a large amount of training data to achieve optimal performance. Furthermore, the training process of CNNs can be time-consuming, especially without a powerful GPU, which can impact their efficiency. CNNs are designed as feed-forward neural networks [42] and incorporate filters and pooling layers for image and video processing [43].

    3.3.1 Convolution Layers

CNV layers are the main building blocks of a CNN; they involve output vectors (the feature map), filters (the feature detector), and input vectors (the image). After passing through a CNV layer, the image is abstracted into a feature map, sometimes called an activation map. Convolution occurs in CNNs when two matrices consisting of rows and columns are merged to create a third matrix. This process repeats with a particular stride (the step by which the filter moves). Doing so decreases the number of parameters in the system, and the calculation is completed more quickly [44]. The output feature-map size can be calculated from the mathematical Eq. (3). This layer can produce several outcomes. The first keeps the dimensionality by adding padding, which we can write in the code as "same" padding. The second decreases the dimensionality; in this case no padding is applied, which is expressed as "valid" padding. Each pixel in the new image differs from the previous one depending on the feature map [45].

Feature Map = Input Image × Feature Detector

W: the size of the input image; f: the size of the CNV layer filters; p: the padding of the output matrix; S: the stride.
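The body of Eq. (3) is not preserved in this version; with the variables defined above, the standard output-size formula for a convolutional layer, assumed here, is

$$\text{Output size} = \frac{W - f + 2p}{S} + 1$$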

    3.3.2 Activation Function

Activation functions in a CNN model are crucial in determining whether a neuron should be activated based on its input. They use mathematical operations to assess the significance of the information. In the hidden layers of the model, the Rectified Linear Unit (ReLU) activation function is commonly employed. ReLU helps address the vanishing-gradients issue by ensuring that the gradients do not become extremely small [46]. Compared to other activation functions like tanh and sigmoid, ReLU is computationally less expensive and faster. The main objective of an activation function is to introduce nonlinearity into the output of a neuron. In the suggested framework, the SoftMax function is used for making decisions. SoftMax is a straightforward activation function that produces outcomes from 0 to 1, as illustrated in Eq. (4). This activation function is often used for classification tasks [47].

ReLU(x) = max(0, x): if x is positive, the output is x; otherwise, it is 0. Range: 0 to +∞.
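Eq. (4) itself is not reproduced; the standard SoftMax definition, assumed here, maps a vector of K logits z to class probabilities that sum to 1:

$$\sigma(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \qquad i = 1, \dots, K$$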

    3.3.3 Pooling Layer

This strategy is applied to reduce the dimensions of the outcomes of the previous layers. There are two types of pooling: maximum pooling and average pooling. Max pooling removes the image noise, whereas average pooling merely suppresses it, so max pooling performs better than average pooling [48]. In our model, we introduced max pooling with dropout to prevent overfitting during model training (when a neural network overfits, it excels on training data but fails when exposed to fresh data from the issue domain).

    3.3.4 Fully Connected Layer(FC)

The input picture from the preceding layers is flattened and supplied to the FC layer. The flattened vector then proceeds through a few additional FC levels, where the standard mathematical operations happen. The classification process starts to take place at this level. The SoftMax activation function is applied in the FC layer to decide on the classification [49].

    3.4 Transfer Learning

TL is a crucial component of deep learning [22,50], concentrating on storing information obtained while solving one problem and applying it to other closely related problems. TL increases the effectiveness of new training models by removing the requirement for a sizable collection of labeled training data for each new model. Further benefits of TL include faster training times, fewer dataset requirements, and improved performance for classification, segmentation, and detection problems. VGG16 is the most widely utilized transfer network and is the one we employed in our article; the Visual Geometry Group (VGG) network has proved its effectiveness in many tasks, including image classification and object detection, and was based on a study of how to make such networks deeper.
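As an illustrative sketch only (not the authors' code), transfer learning with VGG16 in Keras typically follows the pattern below; the frozen base, the dropout value, and the 1024-node dense layer mirror the description in Sections 3.6 and 4, while the exact layer arrangement and names are our own assumptions.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.models import Model

# Convolutional base pre-trained on ImageNet, without the original classifier head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # keep the transferred weights frozen

# New classification head: dropout, one 1024-node dense layer, four output classes.
x = Flatten()(base.output)
x = Dropout(0.2)(x)                          # 0.02 or no dropout in the other cases
x = Dense(1024, activation="relu")(x)
outputs = Dense(4, activation="softmax")(x)  # CT, GL, DR, NDR

model = Model(inputs=base.input, outputs=outputs)
```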

    3.5 Implemented Data Sets

Our proposal utilized datasets divided into four categories: CT, GL, DR, and NDR. These datasets, consisting of 4,271 images, were obtained from the Kaggle dataset [51]. The images in the dataset are in colored red-green-blue (RGB) format and include both left- and right-eye images. The purpose of using these datasets was to classify the images into the categories mentioned above, and the distribution of the images among the classes is summarized in Table 2.

    Table 2: The implemented data sets

    3.6 The Proposed Algorithm of TL and CNN Architecture

The algorithm used is the VGG16 model; the current work suggests modifying the VGG [52,53] model to get better outcomes and achieve better results. In VGG16, only the ImageNet dataset was used for pre-training the model. VGG16 has fixed input tensor dimensions of 224 × 224 with RGB channels. The model passes the input through many convolutional (CNV) layers, where the smallest filters used are 3 × 3. The most important thing that distinguishes the TL algorithm is that it does not need many hyperparameters. The designers used 3 × 3 CNV layers with stride 1 and max pooling (2 × 2) with stride 2, and they consistently employed the same padding. Convolution and max pooling are organized in the model with the block-1 CNV layers having 64 filters, the block-2 CNV layers having 128 filters, the block-3 CNV layers having 256 filters, and the block-4 and block-5 CNV layers having 512 filters. This task starts with identifying the input RGB images, whose nominal dimensions are 224 × 224, but the images in the database have different sizes: 256 × 256 for CT with a file size of 8.84 KB, DR with 46.9 KB, and GL with 10.5 KB, and 224 × 224 with a file size of 63.8 KB for NDR. For the VGG16 models, the images were scaled down to 200 × 200 pixels. The four classes of datasets were used to perform classification procedures by CNN models based on deep VGG16. To prepare for the classification, we balance the datasets into groups of approximately 1,000 images for each of the four classes to prevent overfitting or underfitting. After balancing the data, two enhancement filters, the AE filter and the CLAHE filter, performed the data cleansing steps. Data cleansing removes noise from the original images using a MATLAB program after enhancing, resizing, and reshaping the original images. Image resizing is a necessary process because of the scale difference between images. The image augmentation approach increases the training dataset's size and improves the model's capacity. Augmentation in this algorithm occurs while preprocessing the datasets; three augmentation settings are used: zoom range = 0.15, rotation range = 20, and fill_mode = nearest [54]. The augmentation procedure aims to prevent or minimize overfitting on a small quantity of data [55].
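A minimal sketch of the augmentation settings quoted above, using the Keras ImageDataGenerator API; the directory path, rescaling, and target size are placeholders rather than the authors' exact configuration.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation parameters stated in the text: zoom 0.15, rotation 20, nearest fill.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,          # assumption: scale pixels to [0, 1]
    zoom_range=0.15,
    rotation_range=20,
    fill_mode="nearest",
)

# Placeholder directory; the paper mentions both 224 x 224 inputs and a 200 x 200 rescale.
train_gen = train_datagen.flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=32, class_mode="categorical"
)
```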

The first model used CNN with a dropout of 0.02 and without TL, which helped improve the classification network and prevent overfitting [56], a common problem that occurs when the training data for a CNN are insufficient; this technique is presented in [57]. The second is CNN without dropout. The third is CNN with a dropout of 0.2. These three architectures are applied in both cases of data division, (80%, 20%) and (90%, 10%). The same holds for VGG16; these three cases apply to our model. For the training process, Adaptive Moment Estimation (the Adam optimizer) gives the highest results [58]; with it, the network parameters were optimized. Compared to the other optimizers, Stochastic Gradient Descent with Momentum (SGDM) and Root Mean Square Propagation (RMSProp), Adam is the best optimizer in terms of ACC and loss, as presented in [59]. The datasets were used for training in this scenario with the following parameters: 20 training epochs, a batch size of 32, and a learning rate set by the Adam optimizer (adapted automatically by the program to fit the training model). The loss takes the form of categorical cross-entropy; these parameters are shown in Table 3. The final process is to classify the test data and predict the output classes, fit the VGG16 model, and extract the model's statistical evaluations, such as the CM, ACC, PRE, REC, AUC, and test loss. The block diagram of the proposed algorithm is introduced in Fig. 1.
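With the settings in Table 3 (Adam optimizer, 20 epochs, batch size 32 via the generator, categorical cross-entropy loss), the compile-and-fit step can be sketched as follows; model is the VGG16-based network from the earlier sketch, and train_gen/val_gen stand for the training and validation generators (val_gen is a placeholder not defined above).

```python
import tensorflow as tf

model.compile(
    optimizer=tf.keras.optimizers.Adam(),   # learning rate left to Adam's adaptive schedule
    loss="categorical_crossentropy",
    metrics=["accuracy", tf.keras.metrics.AUC(name="auc")],
)

# Batch size (32) is set by the generators; training runs for 20 epochs.
history = model.fit(train_gen, validation_data=val_gen, epochs=20)
```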

Figure 1: The proposed algorithm

    Table 3: Optimized parameters of proposed models

Table 5: Evaluation metrics for the CNN models with 90% training and 10% testing

    Table 6: Performance analysis of the three case architectures of VGG16

Table 7: Performance analysis of the three case architectures of VGG16 for the second section of the datasets

    3.7 Evaluation Metrics

According to our models, we observed that the AE filter yielded excellent ACC, and the number of incorrectly categorized instances for every class was small. The metrics evaluation depends on four essential measurements: ACC, PRE, REC, and AUC. We want to accomplish these objectives with our methodology; however, false predictions must be avoided. Our study's performance measurement can benefit from the CM because it makes it easy to compare the values of four indexes: True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) [60].

• True Positive (TP): the actual value is positive and the model's predicted value is also positive.

• True Negative (TN): the actual value is negative and the model's predicted value is also negative.

• False Positive (FP): the prediction is wrong; the actual value is negative, but the model's predicted value is positive.

• False Negative (FN): the prediction is wrong; the actual value is positive, but the model's predicted value is negative.

ACC plays a pivotal role in evaluating the metrics; it is the ratio of the sum of true positives and true negatives to the total number of samples. It can be determined from the following Eq. (5):

PRE is the total number of positive predictions (total number of true positives) divided by the total number of expected positives of class values (total number of true positives and false positives), as expressed in Eq. (6).

REC is the number of True Positives (TP) divided by the number of True Positives and False Negatives (FN); another name for REC is sensitivity. It can be measured from the arithmetic Eq. (7):
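The bodies of Eqs. (5)-(7) are not reproduced in this version; from the definitions above they correspond to the standard formulas

$$\text{ACC} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \text{PRE} = \frac{TP}{TP + FP}, \qquad \text{REC} = \frac{TP}{TP + FN}$$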

    4 Results and Simulation Graphs

This section examines how well the classification model works by looking at how CNN and VGG16 perform when the intelligent training parameters are changed. We evaluated the results of the model using the Kaggle dataset. The implementation utilized TensorFlow and Keras, with Keras serving as a deep machine learning package and TensorFlow acting as the backend for machine learning operations. A CNN model was employed for the classification experiments. The model consisted of four CNV layers, each followed by a max-pooling operation. Additionally, for the fully connected part, the original dense layers are removed and replaced with new dense layers: two with 1024 nodes and a final one with 4 nodes for classification. ReLU was applied to all layers. The experiments were conducted in two scenarios: with or without dropout. In addition, the division of the enhanced dataset was varied. Tables 4 and 5 illustrate the metric outcomes for the CNN models, and Fig. 2 illustrates the CNN architecture. The VGG16 architecture is formed of five blocks of 3 × 3 convolutions, each followed by a max-pool layer, used during the training phase. To further mitigate overfitting, a dropout of 0.2 or 0.02 was applied to the output of the last block. Dropout is a regularization technique that randomly sets a fraction of the input units to zero during training, which helps prevent overfitting on specific features. After the dropout layer, a dense layer consisting of 1024 neurons was added. The dense layer is fully connected, allowing for more complex interactions between the features extracted by the convolutional layers. Finally, the output layer is a dense layer with four outputs, each corresponding to a specific category of the DR images. This configuration enables the model to classify input images into the DR categories. This algorithm is illustrated in Fig. 3, and the results are in Tables 6 and 7.
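For illustration only, the plain CNN baseline described above can be sketched in Keras as follows; the filter counts and input size are our assumptions, and only the layer pattern (four convolution/max-pooling stages, 1024-node dense layers, a 4-way SoftMax output, optional dropout) follows the text.

```python
from tensorflow import keras
from tensorflow.keras import layers

cnn = keras.Sequential([
    keras.Input(shape=(224, 224, 3)),             # input size is an assumption
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(256, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dropout(0.2),                          # 0.02 or no dropout in the other cases
    layers.Dense(1024, activation="relu"),
    layers.Dense(1024, activation="relu"),
    layers.Dense(4, activation="softmax"),        # CT, GL, DR, NDR
])
```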

Figure 2: CNN architecture

Figure 3: VGG16 architecture

    4.1 Concluded Results

After analyzing the different cases of the CNN architecture and VGG16, it was observed that the results obtained using the AE filter were the most favorable in terms of metrics. Table 8 presents the ACC of 98.7% achieved when employing the AE filter.

Table 8: Best performance of the proposed algorithm with VGG16, dropout 0.02, the AE filter, and the 80%/20% dataset division

    4.2 CM

The database enhanced with the AE filter exhibited the highest performance improvement among the cases studied. Furthermore, the CM in Fig. 4 and the AUC in Fig. 5 provide additional insights into the classification outcomes.

    4.3 Receiver Operating Characteristic(ROC)Curve

Plotting the ROC curve is a trustworthy way to evaluate a classifier's classification ACC. The True Positive Rate (TPR) versus False Positive Rate (FPR) chart allows us to observe how the classifier responds to various thresholds. The closer the ROC curve comes to touching the upper-left corner of the plot, the better the model performs in categorizing the data. We may compute the AUC, which shows how much of the plot lies below the curve [61,62]. The model becomes more accurate as the AUC approaches the value of 1. The figure below shows the combined dataset's AUC score and ROC curve after being tested on the four classes of DR. The black, diagonally dashed line shows the 50% area. According to the graphic, the combined VGG16 and CNN model with the AE filter performs better at classifying DR and typical retinal pictures. The VGG16-CNN AUC curves are shown in Fig. 5 below.
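As an illustration only (not the authors' code), per-class ROC curves and AUC values for the four-way predictions can be computed with scikit-learn; y_true and y_score are placeholders for the integer test labels and the model's SoftMax outputs.

```python
from sklearn.metrics import auc, roc_curve
from sklearn.preprocessing import label_binarize

def per_class_roc(y_true, y_score, n_classes=4):
    """Return {class: (fpr, tpr, auc)} using a one-vs-rest binarisation."""
    y_bin = label_binarize(y_true, classes=list(range(n_classes)))
    curves = {}
    for c in range(n_classes):                   # 0=CT, 1=GL, 2=DR, 3=NDR
        fpr, tpr, _ = roc_curve(y_bin[:, c], y_score[:, c])
        curves[c] = (fpr, tpr, auc(fpr, tpr))
    return curves
```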

Figure 4: CM of the proposed algorithms

Figure 5: The ROC curves of VGG16 and CNN evaluated on the four classes: (0) CT, (1) GL, (2) DR, (3) NDR

    4.4 Comparisons

By comparing the results obtained from the proposed framework with other research [29,61], it can be concluded that the proposed model in this paper has achieved better ACC than the others. This is presented in Table 9.

    Table 9: Comparison between the proposed framework and other algorithms

    5 Conclusion

This study evaluated the performance of four distinct datasets related to eye conditions: GL, CT, DR, and NDR. The work started by performing data cleansing to ensure the quality of the datasets. Afterward, the classes were prepared for initializing algorithms based on the VGG16-CNN architecture. In this work, various parameters were adjusted and experimented with. The datasets were divided into training, validation, and testing sets using different ratios, such as 90% for training, 5% for validation, and 5% for testing, or 80% for training, 10% for validation, and 10% for testing. Dropout values, which help prevent overfitting, were set to 0.02 and 0.2. Multiple architectures were implemented and tested, leading to variations in the experimental setup.

Dropout was applied both with and without the CNN architecture or VGG16. To train and test the network for classifying the enhanced classes, a DTL approach was employed in this study. We used TL techniques to leverage pre-trained models and improve classification performance. The proposed model showed promising results when using the AE-enhanced classes in combination with the TL and CNN models. The achieved metrics included an ACC of 98.62%, a specificity (SPE) of 98.65%, and a REC of 98.59%. The authors suggest several improvements to further enhance the model's ACC in future work. One recommendation is to expand the dataset by adding new distinctive classes related to eye conditions. Increasing the diversity and size of the dataset can help the model generalize better and improve its performance. Additionally, incorporating new TL techniques beyond the ones used in this study may enhance the model's capabilities and overall performance.

Acknowledgement: The authors thank the Department of Electrical Engineering, Faculty of Engineering, Benha University, for providing intellectual assistance.

Funding Statement: The authors received no specific funding for this study.

Author Contributions: The authors confirm their contribution to the paper as follows: study conception and design: Heba F. Elsepae and Heba M. El-Hoseny; data collection: Ayman S. Selmy and Heba F. Elsepae; analysis and interpretation of results: Heba F. Elsepae, Heba M. El-Hoseny and Wael A. Mohamed; draft manuscript preparation: Ayman S. Selmy and Wael A. Mohamed. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
