
    Transfer Learning-based Computer-aided Diagnosis System for Predicting Grades of Diabetic Retinopathy

Computers, Materials & Continua, 2022, Issue 6

Qaisar Abbas, Mostafa E. A. Ibrahim2 and Abdul Rauf Baig

1College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, 11432, Saudi Arabia

2Department of Electrical Engineering, Benha Faculty of Engineering, Benha University, Qalubia, Benha, 13518, Egypt

Abstract: Diabetic retinopathy (DR) diagnosis through digital fundus images requires clinical experts to recognize the presence and importance of many intricate features. This task is difficult and time-consuming for ophthalmologists. Therefore, many computer-aided diagnosis (CAD) systems have been developed to automate the screening process for DR. In this paper, a CAD-DR system is proposed based on preprocessing and a pre-trained transfer learning-based convolutional neural network (PCNN) to recognize the five stages of DR from retinal fundus images. To develop this CAD-DR system, a preprocessing step is performed in a perceptually oriented color space to enhance the DR-related lesions, and a standard pre-trained PCNN model is then improved to obtain high classification results. The architecture of the PCNN model is based on four main phases. Firstly, the training process of the proposed PCNN is carried out using the expected gradient length (EGL) to decrease the image-labeling effort during the training of the CNN model. Secondly, the most informative patches and images are automatically selected using only a few labeled training samples. Thirdly, the PCNN method generates useful masks for prognostication and identifies regions of interest. Fourthly, the DR-related lesions involved in the classification task, such as micro-aneurysms, hemorrhages, and exudates, are detected and then used for recognition of DR. The PCNN model is pre-trained using a high-end graphical processing unit (GPU) on the publicly available Kaggle benchmark. The obtained results demonstrate that the CAD-DR system outperforms other state-of-the-art systems in terms of sensitivity (SE), specificity (SP), and accuracy (ACC). On the test set of 30,000 images, the CAD-DR system achieved an average SE of 93.20%, SP of 96.10%, and ACC of 98%. This result indicates that the proposed CAD-DR system is appropriate for screening the severity level of DR.

Keywords: Diabetic retinopathy; retinal fundus images; computer-aided diagnosis system; deep learning; transfer learning; convolutional neural network

    1 Introduction

One of the major causes of severe vision loss among diabetic patients is diabetic retinopathy (DR). DR is asymptomatic in its early stages, and many patients therefore suffer vision loss without proper diagnosis and treatment [1]. According to the statistics in [2], 285 million people have diabetes, and one-third of them show signs of DR. In daily practice, ophthalmologists use non-mydriatic fundus images and computer-aided diagnosis (CAD) programs for the early assessment and grading of the severity level of DR. The detection of lesions caused by DR is the basis of this early detection. These DR-related lesions appear in the fundus image as unhealthy objects on the retinal surface, such as micro-aneurysms (MAs), exudates (EXs), hemorrhages (HEMs) and cotton wool spots (CWS). A visual example of such DR-related lesions along with the severity levels is shown in Fig. 1. Proliferative (PDR) and non-proliferative (NPDR) are the two main types of DR, where PDR is the advanced form of the eye disease and NPDR is an early stage of DR. As shown in this figure, there are five grades of DR severity (severity 0: normal, severity 1: mild NPDR, severity 2: moderate NPDR, severity 3: severe NPDR, and severity 4: PDR). Manual segmentation and counting of DR-related lesions by clinicians is a difficult and repetitive task. Moreover, manual grading of DR requires extensive domain-expert knowledge and is subject to inter- and intra-reader variability [3–5].

Figure 1: A visual example of the five severity stages of DR and the two main categories, proliferative (PDR) and non-proliferative (NPDR), where (a) shows severity 0: normal, (b) severity 1: mild NPDR, (c) severity 2: moderate NPDR, (d) severity 3: severe NPDR, and (e) severity 4: PDR, along with a sample of DR-related lesions (f)

Previously, several CAD systems have been developed to recognize grades of DR in the clinical setting from digital retinograph images. These CAD systems assist ophthalmologists in better screening of patients and help clinical experts to identify early signs of eye-related abnormality, which are difficult to identify with the naked eye. DR can be detected by a CAD system that grades the severity level using image processing and machine-learning techniques [6] on retinograph images. Nowadays, CAD systems for recognition of DR grades are affected by various factors: (1) it is very difficult to identify DR-related lesions and the anatomical structure of the retinograph, (2) it is also difficult to detect the retinal structure accurately at an early stage because it changes over time, and (3) there is a dire need to develop an effective and automatic CAD system for accurate screening of DR-related diseases. Currently, many deep learning (DL) models, especially deep convolutional neural networks (CNNs), have demonstrated outstanding performance in the grading of DR severity levels. Thus, we have also used a deep transfer learning (TL) technique, together with a preprocessing step, to recognize the stages of DR and solve the above-mentioned problems. In addition, the proposed system is capable of working on much larger datasets and of reducing inter-reader variability.

    1.1 Research Highlights

The main contributions of the proposed CAD-DR classification system are as follows.

1) A preprocessing step is developed in a perceptually oriented CIE L*a*b* color space to enhance the contrast and adjust the light illumination.

2) A pre-trained TL (DL) approach is used through expected gradient length (EGL) to eliminate the need for a large number of labeled fundus images. This step reduces the training effort for the CNN model.

3) To develop the PCNN, the 14-layer CNN network was pre-trained using fewer labeled fundus images. This helps the PCNN system learn simple to complex fundus features.

4) Useful masks can be generated by the proposed PCNN system to predict and segment DR-related regions.

    5) To the best of our knowledge, there is no previous CAD-DR model in the medical imaging field that works in harmony with CNN parameters to select the most informative patches and images.

6) State-of-the-art comparisons are also performed to test and evaluate the performance of the proposed CAD-DR system.

    1.2 Paper Organization

The rest of the paper is organized as follows. Section 2 reviews recent work related to the recognition of multiple stages of diabetic retinopathy (DR). In Section 3, the proposed methodology is described along with the acquired dataset. Experimental results and comparisons with state-of-the-art methods are presented in Section 4. A discussion is provided in Section 5 and, finally, the paper is concluded in Section 6.

    2 Related Works

Deep learning (DL) models, especially deep convolutional neural networks (CNNs), have demonstrated outstanding performance in the grading of DR severity levels in several settings and on several datasets when compared with traditional hand-designed methods [6]. The data science platform Kaggle launched a DR detection competition in 2015, in which the top participants used different configurations of CNN models on approximately 35,000 high-resolution labeled fundus images. Their results showed that successful training of such CNN networks depends on a large number of annotated samples. In a previous study [7], a CNN model was developed with data augmentation to classify DR into five stages: normal, mild NPDR, moderate NPDR, severe NPDR, and proliferative PDR. Their model was trained on more than 100,000 labeled images and yielded performance comparable with a clinical expert. Similarly, Harry et al. [8] trained a 13-layer CNN model on 80,000 labeled images and obtained significant results in the classification of the five severity levels of DR. In another study [9], the training procedure of a CNN model was completed using 8,810 images and obtained results comparable with ophthalmologists. This presents a challenge in clinical practice, as such computational systems need thousands of labeled images to learn features, which is a time-consuming and expensive process. For real-time scenarios, a well-performing algorithm should be less data-intensive and able to learn from a few labeled samples. It was observed that previous CAD systems tried to detect DR-related lesions [10] to recognize diabetic retinopathy. Those CAD systems are briefly described in the following paragraphs and compared in Tab. 1.

Reference [11] presents a Faster-RCNN deep-learning (DL) based method to classify five stages of DR lesions without using an image preprocessing step for contrast enhancement and adjustment of light illumination. To extract features from retinograph images, the authors used the DenseNet-65 DL model, and the Faster-RCNN model is then utilized to recognize the severity level of DR. For evaluation of the Faster-RCNN model, they used the Kaggle and APTOS datasets and achieved 97.2% accuracy. Reference [12] showed that current CAD systems for DR are computationally expensive and lack the ability to extract the highly nonlinear features that are needed to classify DR into five stages. In that study, the lowest possible number of learnable parameters was used to speed up training and achieve faster convergence. They developed a VGG-NiN model based on VGG16 transfer learning and a spatial pyramid pooling layer. On the collected datasets, they reported 83.5% classification accuracy on five stages of DR in comparison to other systems. In [13], a pre-trained transfer learning algorithm (CNN) was used to detect five stages of DR from retinograph images; the CNN model based on a pre-training strategy achieved higher performance compared to other systems. Similarly, in [14], transfer learning (TL) with representation learning was used to recognize multiple stages of DR. They utilized the Inception-v4 TL pre-trained model with a fine-tuning step and achieved 96.6% accuracy in grading DR into five stages.

Table 1: Computer-aided diagnosis systems to recognize grades of diabetic retinopathy. Performance in terms of sensitivity (SE), specificity (SP), accuracy (ACC), and area under the curve (AUC) is shown for deep learning algorithms (DLA)


In [15], the authors developed a preprocessing-based segmentation approach. They used saliency map detection to highlight the anatomical structures of lesions relative to the background. Afterward, the structure tensor technique was applied to enhance the edges of the lesions, and active contours were used to accurately segment DR-related lesions. Finally, they used the VGG-19 pre-trained TL model to identify the severity level of DR. The experiments were performed on a Kaggle dataset consisting of 20,000 images. On average, they reported 82% sensitivity and 96% accuracy. In contrast to the above-mentioned approaches, the researchers in [16] developed a recognition system for three DR stages instead of five severity levels. In that study, they used semantic segmentation to detect microaneurysms along with a CNN model to recognize the three stages of DR.

    3 Proposed Methodology

Fig. 2 shows a systematic flow diagram of our proposed CAD-DR system based on a pre-trained transfer learning model. The CAD-DR system is developed in different phases. In the first phase, the retinograph image is transferred to the perceptually uniform CIE L*a*b* color space and preprocessed to adjust the light illumination and enhance the contrast. In the next phase, a fourteen-layer CNN architecture is proposed using a pre-trained transfer learning strategy. In the first 10 layers, different convolutional filters are used along with ReLU, batch normalization (BN) and max-pooling layers. Next, BN, max-pooling and dropout layers are integrated and, lastly, ReLU and SoftMax layers are integrated to recognize the five stages of DR.

    3.1 Data Acquisition and Platform

To pre-train and evaluate the proposed CAD-DR system, the retinograph images are obtained from the Kaggle platform [17]. The images in this dataset were captured from various patients under different lighting conditions, across many age groups and ethnicities. These variations distort the pixel intensities within the images and create other variations that affect the classification results. To overcome these issues, contrast enhancement and light adjustment of the retinograph images are implemented through a uniform color space and a non-linear wavelet technique as stated in [18]. After image normalization, the images are resized to 48×48 pixels, which retains the fundus features needed for identification and reduces the dataset to a memory size that the GeForce GTX TITAN X 1080 GPU can handle. Each patient image can only have one label corresponding to a single group, depending on the divisions outlined for the dataset. Moreover, when measuring the performance of the PCNN network, only unseen patches of patient images are considered.

    Figure 2:The proposed CAD-DR system to recognize five severity-level of diabetic retinopathy shown as a systematic flow diagram

The proposed CAD-DR system based on the PCNN model is trained using 80,000 images from the publicly available Kaggle dataset. Each image has a resolution of about 6 megapixels and was rated by a clinician for the presence of DR on a five-point scale: 0-normal, 1-mild DR, 2-moderate DR, 3-severe DR and 4-proliferative DR. These scales were used as labels to develop the PCNN model. The training and testing procedures of the PCNN are carried out on 50,000/30,000 images, respectively. All experimental code was written in Python 3.6 with the deep learning package Keras (http://keras.io/) and the TensorFlow (http://deeplearning.net/software/tensorflow/) backend. These platforms are used because of their low computational time, easy access to parameters and maturity level. The hardware used for the experiments is a GeForce GTX TITAN X 1080 GPU with 12 GB of memory. The proposed PCNN model classifies an image into a DR class in 0.04 s, which shows the possibility of real-time feedback to the patient.

    3.2 Data Augmentation

To avoid overfitting and to improve the localization power of the proposed CAD-DR system, a dropout value of 0.5 on dense layers 12 and 13 is used, together with data augmentation consisting of flipping (horizontal and vertical) and random rotation by 0°–270°. After the data augmentation and cropping steps, the dataset splits are made. For this purpose, patches of each class are chosen randomly as follows: 8,760/1,314 patches per class for the training and testing splits, respectively. This data augmentation step is implemented through the Albumentations library, as sketched below.
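As a rough illustration (not the authors' exact code), the flips and 0°–270° rotation described above could be expressed with the Albumentations library as follows; the function name augment_patch is hypothetical.

```python
# Minimal sketch of the augmentation described above: horizontal/vertical
# flips and random rotation in the 0-270 degree range with Albumentations.
import albumentations as A
import cv2

augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.Rotate(limit=(0, 270), border_mode=cv2.BORDER_CONSTANT, p=1.0),
])

def augment_patch(patch):
    """Apply the augmentation pipeline to a single RGB patch (numpy array)."""
    return augment(image=patch)["image"]
```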

    3.3 Preprocessing to Enhance Contrast and Illumination Adjustment

Retinograph images are captured with different devices and under different environmental conditions; a visual example is displayed in Fig. 3. The preprocessing step therefore tries to enhance the patterns present in the DR-related lesions while at the same time decreasing the training effort in the classification phase. The selected color space is kept as close as possible to human perception because the enhancement algorithm aims to help doctors in their diagnosis of retinopathy.

Figure 3: A visual example of the preprocessing step that enhances the original input retinograph image (a), corrects the light illumination (b, L* image) and improves the contrast (c) in a perceptually oriented color space

In practice, color retinograph images can be represented in different color spaces such as HSV (hue, saturation, value), RGB (red, green, blue), CIELUV, etc. The choice of a uniform color space depends on the application and is very important for image enhancement. HSV and RGB are not uniform color spaces, so they cannot be adopted for image enhancement. If the right color space is chosen, the image enhancement method can help ophthalmologists in the diagnostic eye-screening process. Hence, the selected space must be as close as possible to human perception. The CIE L*a*b* and CIE L*u*v* color spaces are close to human perception and both have been used extensively, but CIE L*u*v* has a problem with white adaptation that can lead to poor image enhancement results: its white adaptation is a subtractive change that involves a vector displacement instead of the multiplicative normalization that produces the desired proportional movement. Therefore, in this paper, we use the CIE L*a*b* color space. To perform image enhancement on retinograph images, our proposed algorithm first transforms the images from the non-uniform RGB color space used by the acquisition device into the perceptually oriented CIE L*a*b* color space.
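For illustration, the RGB-to-CIE L*a*b* conversion that opens the preprocessing pipeline can be sketched with scikit-image as follows (a minimal sketch, not the authors' implementation; the file name is a placeholder).

```python
# Convert a fundus image from the camera's RGB space to CIE L*a*b*,
# work on the channels separately, and convert back to RGB.
import numpy as np
from skimage import color, io

rgb = io.imread("retinograph.png")          # uint8 RGB fundus image (placeholder path)
lab = color.rgb2lab(rgb)                    # L* in [0, 100]; a*, b* roughly in [-128, 127]
L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]

# ... enhance only the L* (lightness) channel here ...
lab_enhanced = np.stack([L, a, b], axis=-1)
rgb_enhanced = color.lab2rgb(lab_enhanced)  # float RGB image in [0, 1]
```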

Our contrast enhancement method preserves the characteristic features of the images using the multiscale discrete shearlet transform (DST), the perceptually uniform CIE L*a*b* color space and a local-influence control function. The DST technique provides an efficient multiscale directional representation of the image in a discrete framework and is therefore better suited for multi-scale edge enhancement than a traditional wavelet decomposition. The method follows three main steps: firstly, the DST coefficients of the L* plane in the corresponding subbands are modified using Ben Graham's method [19] to enhance the illumination conditions of the images, which improves the perception of the eye images and adjusts the contrast. Secondly, the inverse transform is applied to the modified L* coefficients for better reconstruction and visualization without generating artifacts. Thirdly, the a* and b* planes are combined with this lightness component to perform the final enhancement.
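Since the shearlet-domain pipeline itself is not available in common libraries, the sketch below only illustrates the spirit of the lightness-channel enhancement, using Ben Graham-style local-average subtraction [19] on the L* plane as a simplified stand-in for the DST-based coefficient modification; the parameter values are illustrative assumptions.

```python
# Simplified stand-in for the enhancement stage: subtract a Gaussian-blurred
# estimate of local illumination from L*, re-center the lightness and map back
# to RGB. The multiscale shearlet (DST) decomposition is omitted for brevity.
import cv2
import numpy as np
from skimage import color

def enhance_lightness(rgb, sigma=10.0, weight=4.0):
    lab = color.rgb2lab(rgb).astype(np.float32)
    L = lab[..., 0]
    blurred = cv2.GaussianBlur(L, (0, 0), sigma)            # local illumination estimate
    lab[..., 0] = np.clip(weight * (L - blurred) + 50.0, 0, 100)  # re-centered lightness
    return color.lab2rgb(lab)                                # float RGB in [0, 1]
```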

    3.4 Architecture of Pretrain Transfer Learning

To develop this CAD-DR system, a pre-trained CNN model is utilized as a basis and the corresponding layers are selected as required to recognize the five stages of DR. After studying the literature on other complex image recognition tasks, this research uses the 14-layer CNN architecture shown in Fig. 4. An increased number of layers allows the network to learn deeper features: for instance, the initial convolutional layer learns basic features such as edges, while the last convolutional layer learns DR lesions. This PCNN model consists of an input patch layer followed by convolutional, max-pooling, and fully connected layers. A SoftMax classifier is used in the last fully connected layer to perform the five-level DR severity classification. A leaky rectified linear unit (ReLU) with a slope of 0.01 followed by batch normalization was used after each convolutional layer to prevent over-reliance on individual nodes in the network and to control the feature maps per block. A kernel size of 3×3 was used for convolution and 2×2 for max pooling. Similarly, the network layers were initialized using the weights and biases from the method stated in [20]. A Gaussian distribution was also used to initialize the network to reduce training time and to randomly generate biases for the last fully connected layer.

Figure 4: Proposed architecture of the pre-trained convolutional neural network (PCNN) model using a transfer learning-based CNN network
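A rough Keras sketch of a network in this style is given below; the filter counts and dense-layer width are illustrative assumptions, and the exact 14-layer configuration is the one given in Fig. 4.

```python
# Illustrative Keras sketch: stacked Conv + LeakyReLU + BatchNorm blocks with
# max pooling, dropout before the classifier, and a 5-way softmax output.
from tensorflow.keras import layers, models

def build_pcnn(input_shape=(48, 48, 3), n_classes=5):
    m = models.Sequential()
    m.add(layers.Input(shape=input_shape))
    for filters in (32, 32, 64, 64, 128):           # filter counts are assumptions
        m.add(layers.Conv2D(filters, (3, 3), padding="same"))
        m.add(layers.LeakyReLU(alpha=0.01))         # leaky ReLU with slope 0.01
        m.add(layers.BatchNormalization())
        m.add(layers.MaxPooling2D(pool_size=(2, 2)))
    m.add(layers.Flatten())
    m.add(layers.Dense(512))
    m.add(layers.LeakyReLU(alpha=0.01))
    m.add(layers.Dropout(0.5))                      # dropout on the dense layers
    m.add(layers.Dense(n_classes, activation="softmax"))
    return m
```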

    3.5 Procedure of Pretrain Transfer Learning

The training of our CNN is accomplished using the expected gradient length (EGL) [21,22] to decrease the image-labeling effort during CNN training and to make the CNN learn features from the most relevant data. The PCNN system is trained from scratch using the well-known stochastic gradient descent (SGD) optimization algorithm, which optimizes the parameters using single instances or small batches instead of the complete training set. Eq. (1) illustrates the optimization of the SGD cost function J with respect to the model parameters ω:

where x is an input matrix, xi is an element of that matrix, ω is a weight of the convolution filter, b is the bias term, max(.) is the maximum function and xj represents the region of x over which the pooling operation is applied.

In Eq. (1), Jc(ωi) represents the cost function evaluated at the ith training sample (xi, yi) in iteration i, σ is the learning rate and ∇ is the gradient operator. The ith training sample with its label is used to estimate the cost function Jc(ωi) and its gradient length ||∇Jc(ωi)||. To select the most relevant image patches, each SGD batch depends on the instance with the highest gradient value, weighted by the probability of the sample having the yth label. The term l in Eq. (2) indicates the total number of labels, while Φ represents the sorted values used by the EGL algorithm over the unlabeled data pool U. The most informative samples are selected by calculating two terms of Eq. (1): the probability of a sample having the jth label, obtained by forward propagation across the network from the last dense layer, and the gradient length, calculated by backward propagation as the Frobenius norm of the parameter gradient. This strategy is repeated for all labels of each sample. Finally, the k samples with the highest EGL values are chosen from the sorted data pool Φ. Algorithm 1 shows the patch selection steps of our PCNN network. After the most significant patches are found by Algorithm 1, it is straightforward to extend our experiments to choose the most informative images within the training dataset. This is done by calculating the interestingness of an image from its patches with a given stride, densely computing Φ, sorting the images by their top EGL value, and then adding the patches of the most relevant image to the training data for further parameter updates by Algorithm 1 until convergence. These steps are described in Algorithm 2.

Algorithm 1: A patch selection step to train the PCNN model
Requirements: labeled (patch) dataset P; initial model M trained using the patches in P; number k of most informative patches to select
1. While no convergence do
2.   Generate and mix sample batches from P
3.   For each batch do
4.     Calculate Φ(x) using M, for all x ∈ batch
5.   End for
6.   Sort the Φ values and return the k samples Pk with the highest values
7.   Update M via P ∪ Pk
8. End while

Algorithm 2: An image selection step to train the PCNN model
Requirements: labeled (patch) dataset P; training set T; number n of initial images to look at
1. Randomly choose an initial set Tn of images
2. Initially train the model M using the expert annotations from the n images
3. While no convergence do
4.   For each image in T \ Tn do
5.     Split the image into patches and calculate Φ_image = Σ_{patch ∈ image} Φ(patch), using M
6.   End for
7.   Sort all Φ_image values and return the image Imax with the highest value
8.   Tn = Tn ∪ Imax
9.   Pn = {patch ∈ Pi, for all i ∈ Tn}
10.  Update M with the patches Pn and the k patches selected via Algorithm 1
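A hedged Python sketch of the EGL score used by Algorithm 1 is shown below: for each candidate label, the predicted probability is multiplied by the Frobenius norm of the gradient that label would induce, and the k patches with the largest summed score are kept. The function names and the TensorFlow-based gradient computation are illustrative assumptions, not the authors' code.

```python
# Expected-gradient-length scoring of unlabeled patches (illustrative sketch).
import numpy as np
import tensorflow as tf

def egl_score(model, patch, n_labels=5):
    """Expected gradient length of one patch under the current model."""
    x = tf.convert_to_tensor(patch[None, ...], dtype=tf.float32)
    probs = model(x, training=False).numpy()[0]          # forward pass: label probabilities
    score = 0.0
    for y in range(n_labels):                            # loop over candidate labels
        with tf.GradientTape() as tape:
            preds = model(x, training=False)
            loss = tf.keras.losses.sparse_categorical_crossentropy(
                tf.constant([y]), preds)
        grads = tape.gradient(loss, model.trainable_variables)
        grad_norm = np.sqrt(sum(float(tf.reduce_sum(g ** 2))
                                for g in grads if g is not None))  # Frobenius norm
        score += probs[y] * grad_norm
    return score

def select_informative_patches(model, pool, k):
    """Return indices of the k patches in `pool` with the highest EGL score."""
    scores = [egl_score(model, p) for p in pool]
    return np.argsort(scores)[-k:]
```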

To calculate the loss of the PCNN model, the categorical cross-entropy (CCE) loss function is used, applied after the SoftMax function, to take the final decision among the five severity levels of diabetic retinopathy. The CCE loss is also known as the SoftMax loss; in practice, it is the combination of a SoftMax activation with a cross-entropy (CE) loss. When the CCE loss is used, the PCNN model is trained to output a probability over the C classes for each image. The CE loss is defined as:

CE = −log( exp(s_p) / Σ_{j=1..C} exp(s_j) )

where the parameter s_p is the PCNN score for the positive class. This loss function makes it possible to compute the gradient with respect to the output neurons of the PCNN model, backpropagate it through the network and tune the network parameters by minimizing the defined loss. As a result, the gradient of the CE loss is calculated with respect to each PCNN class score in s; the loss terms are zero for the negative classes. Fig. 5 shows the training and validation loss plots of the proposed PCNN model.
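The computation above can be illustrated numerically as follows; the class scores are made-up values and the function is a plain NumPy sketch of softmax followed by the cross-entropy of the positive class.

```python
# Softmax cross-entropy: only the score of the true (positive) class contributes.
import numpy as np

def categorical_cross_entropy(scores, positive_class):
    """scores: raw class scores s; positive_class: index of the true label."""
    exp = np.exp(scores - scores.max())      # numerically stabilized softmax
    probs = exp / exp.sum()
    return -np.log(probs[positive_class])    # loss terms for the other classes are zero

s = np.array([2.1, 0.3, -1.0, 0.5, 0.0])     # example 5-class scores (illustrative)
print(categorical_cross_entropy(s, positive_class=0))
```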

Figure 5: Training and validation loss and accuracy plots of the proposed PCNN model

    4 Experimental Results

    4.1 Hyperparameter Settings

The proposed PCNN model for recognizing the severity level of DR is formed by stacking fourteen network layers, including the dropout and SoftMax layers. Transfer learning is used for the convolutional layers of the PCNN network, and fine-tuning is applied to its fully connected layers. The hyper-parameters along with their values are shown in Tab. 2. An adaptive learning rate is used so that the learning process can be sped up and over-fitting avoided. Initially, the learning rate is set to 0.01; if the validation loss does not improve for five consecutive iterations, the learning rate is decreased by a factor of 0.1. In addition, we used a batch size of 8, a minimum learning rate of 0.0001, an initial learning rate of 0.01, a momentum of 0.9, 24 epochs and 14 network layers to train and test the PCNN model, as sketched below.
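A Keras-style sketch of this training configuration is shown below, assuming the architecture sketch from Section 3.4 and placeholder arrays (train_x, train_y, val_x, val_y) for the training and validation data.

```python
# SGD with momentum 0.9, initial learning rate 0.01 reduced by a factor of 0.1
# when the validation loss stalls for five epochs (floor 0.0001), batch size 8,
# 24 epochs, categorical cross-entropy loss on one-hot labels.
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.callbacks import ReduceLROnPlateau

model = build_pcnn()                                   # from the sketch in Section 3.4
model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

reduce_lr = ReduceLROnPlateau(monitor="val_loss", factor=0.1,
                              patience=5, min_lr=0.0001)

history = model.fit(train_x, train_y,                  # placeholder arrays
                    validation_data=(val_x, val_y),
                    batch_size=8, epochs=24,
                    callbacks=[reduce_lr])
```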

Table 2: Performance of the proposed PCNN model with the preprocessing step for classification of the five-class severity level of DR on 30,000 test images

    4.2 Statistical Metrics

Five evaluation metrics have been used to evaluate and compare our model with other systems: accuracy, F1-score, sensitivity, specificity, and ROC-AUC. The F1-score is the harmonic mean of precision and recall. We include the F1-score because there is a large class imbalance; since it is the harmonic mean of precision and recall, it is considered a better metric than accuracy in such cases, and a higher F1-score implies a better system. Sensitivity measures the true positive rate, which in our case means the correct identification of images with DR. In contrast, specificity measures the true negative rate and corresponds to the ability of our model to identify images without DR. AUC measures the ability of the model to discriminate between DR and non-DR cases. The ROC curve plots the true positive rate against the false positive rate at various thresholds, and AUC is the area under the ROC curve; the higher the AUC, the better the model. The suitability of the proposed PCNN system for the five severity levels of DR was evaluated on 30,000 test images using the statistical metrics sensitivity (SE), specificity (SP) and accuracy (ACC). We define SE as the number of images correctly classified as having DR among the total number with DR, and SP as the number of images correctly identified as having no DR out of the total number with no DR. Accuracy is defined as the proportion of patients that are correctly classified by the system.
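As an illustration, these metrics can be computed with scikit-learn for the binary DR/no-DR view as follows; the label and probability arrays are toy placeholders, not the paper's results.

```python
# Accuracy, F1, sensitivity (SE), specificity (SP) and AUC from predictions.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, roc_auc_score,
                             confusion_matrix)

y_true = np.array([0, 1, 1, 0, 1])            # 1 = DR present (toy example)
y_prob = np.array([0.1, 0.8, 0.6, 0.3, 0.9])  # predicted probability of DR
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                  # true positive rate (SE)
specificity = tn / (tn + fp)                  # true negative rate (SP)
accuracy = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
auc = roc_auc_score(y_true, y_prob)
```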

    4.3 Results Analysis and Comparisons

The retinograph image dataset is divided into 40% for testing and the rest for training; the training portion is then split again, with 40% assigned to the validation set and the remainder used for training. Out of a total of 30,000 images in the dataset, 60% serve as the training set, 20% as the validation set, and 20% as the test set, as illustrated in the sketch below. Training is run for 24 epochs with 10-fold cross-validation. Fig. 5 displays the training and testing loss versus accuracy of the proposed model. This plot was obtained without any fine-tuning of the proposed model. In addition, the figure shows the difference between the stage of DR predicted by our model and the true value in the form of the loss function, which is measured by categorical cross-entropy.
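One way to realize such a 60/20/20 split with scikit-learn is sketched below; `images` and `labels` are placeholders for the loaded dataset.

```python
# Stratified 60/20/20 train/validation/test split (illustrative sketch).
from sklearn.model_selection import train_test_split

train_x, test_x, train_y, test_y = train_test_split(
    images, labels, test_size=0.20, stratify=labels, random_state=42)
train_x, val_x, train_y, val_y = train_test_split(
    train_x, train_y, test_size=0.25, stratify=train_y, random_state=42)
# 0.25 of the remaining 80% yields the 20% validation share.
```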

Fig. 6 shows the confusion matrix for the recognition of the five stages of diabetic retinopathy. This confusion matrix is calculated on 20,000 retinograph images using the proposed pre-trained convolutional neural network (PCNN) model based on a transfer learning CNN network. On average, a detection accuracy of 0.90 is obtained for predicting the five stages; however, if the training and testing datasets are increased to 30,000, the detection accuracy increases as well. Tab. 2 reports highly acceptable SE values for the normal (98.15%), mild (93.45%) and proliferative (90.45%) DR classes of the proposed CAD-DR system using the preprocessing step and the pre-trained PCNN architecture, while the SP values and classification accuracies for the five DR classes were also found to be up to the mark. The proposed PCNN achieves an SE of 93.20%, SP of 96.10% and accuracy of 98% on the 30,000 test samples. Tab. 3 shows lower results because the preprocessing step was not used there. We have also compared the PCNN transfer learning model with other transfer learning (TL) algorithms such as VGG16, VGG16noFC1, VGG16noFC2 and InceptionV3. The average results are given in Tab. 5, while Tab. 4 describes the parameters used to compare the different TL models. The PCNN model outperforms all the other TL algorithms because of its effective layers and loss function.

We have also compared the proposed CAD-DR system with other state-of-the-art systems, namely CNN-Pratt-2016 [8], DenseNet-Albahli-2021 [11], and VGG-Khan-2021 [12], in terms of the recognition of the five stages of DR. We implemented these state-of-the-art DR systems ourselves; readers are referred to the original papers for their detailed implementations. We selected these DR systems because they are closely related to our proposed CAD-DR system. The comparisons are performed and evaluated with different training and testing ratios on 30,000 test images. Tab. 6 indicates that the proposed PCNN model outperforms the other systems in the recognition of the five stages of DR.

Figure 6: Confusion matrix of the pre-trained convolutional neural network (PCNN) model using a transfer learning-based CNN network on 20,000 retinograph images

Table 3: Performance of the proposed PCNN model for classification of the five-class severity level of DR on 30,000 test images without the preprocessing step

Table 4: Setup parameters used for comparison with other state-of-the-art transfer learning algorithms

Table 5: Performance comparisons of the proposed PCNN with different pre-trained CNN networks on 30,000 test images for the five-class severity level of DR

Table 6: Performance comparisons of the proposed CAD-DR system with other state-of-the-art DR recognition systems on 30,000 test images for the five-class severity level of DR

    5 Discussions

Statistics show that many diabetic patients have a high probability of severe vision loss caused by diabetic retinopathy (DR). In daily practice, ophthalmologists use non-mydriatic fundus images and computer-aided diagnosis (CAD) programs for the early assessment and grading of the severity level of DR. This early assessment of DR is based on the recognition of DR-related lesions, which appear as unhealthy objects such as micro-aneurysms (MAs), exudates (EXs), hemorrhages (HEMs) and cotton wool spots (CWS) on the retinal surface in the fundus image. Manual segmentation and counting of DR-related lesions by clinicians is a difficult and repetitive task. Moreover, manual grading of DR requires extensive domain-expert knowledge and is subject to inter- and intra-reader variability. As a result, several CAD systems have been developed in the past to recognize grades of DR in the clinical setting from digital retinograph images. These CAD systems assist ophthalmologists in better screening of patients and help clinical experts to identify early signs of eye-related abnormality, which are difficult to identify with the naked eye. DR can be detected by a CAD system that grades the severity level using image processing and machine-learning techniques [6] on retinograph images. Nowadays, CAD systems for recognition of DR grades are affected by various factors: (1) it is very difficult to identify DR-related lesions and the anatomical structure of the retinograph, (2) it is also difficult to detect the retinal structure accurately at an early stage because it changes over time, and (3) there is a dire need to develop an effective and automatic CAD system for accurate screening of DR-related diseases. Thus, there is a strong need to perform DR diagnosis on much larger datasets and to reduce inter-reader variability.

Deep learning (DL) models, especially deep convolutional neural networks (CNNs), have demonstrated outstanding performance in the grading of DR severity levels on several datasets and in several settings compared to traditional hand-designed methods [6]. The data science platform Kaggle launched a DR detection competition in 2015, in which the top competitors used different configurations of CNN models on approximately 35,000 high-resolution labeled fundus images. Their results showed that successful training of such CNN networks depends on a large number of annotated samples. In a previous study [7], a CNN model was developed with data augmentation to classify DR into five stages: normal, mild NPDR, moderate NPDR, severe NPDR, and proliferative PDR. Their model was trained on more than 100,000 labeled images and yielded performance comparable with a clinical expert. Similarly, Harry et al. [8] trained a 13-layer CNN model on 80,000 labeled images and obtained significant results in the classification of the five severity levels of DR. In another study [9], the training procedure of a CNN model was completed using 8,810 images and obtained results comparable with ophthalmologists. This presents a challenge in clinical practice, as such computational systems need thousands of labeled images to learn features, which is a time-consuming and expensive process. For real-time scenarios, a well-performing algorithm should be less data-intensive and able to learn from a few labeled samples. It was observed that previous CAD systems tried to detect DR-related lesions [10] to recognize diabetic retinopathy. These CAD systems have already been briefly described in Tab. 1.

To develop this CAD-DR system, a preprocessing step is performed in a perceptually oriented color space to enhance the DR-related lesions, and a standard pre-trained PCNN model is then improved to obtain high classification results. The architecture of the PCNN model has four main phases. Firstly, the training process of the proposed PCNN is accomplished using the expected gradient length (EGL) to decrease the image-labeling effort during the training of the CNN model. Secondly, the most informative patches and images are automatically selected using only a few labeled training samples. Thirdly, the PCNN method generates useful masks for prognostication and identifies regions of interest. Fourthly, the DR-related lesions relevant to the classification task, such as micro-aneurysms, hemorrhages, and exudates, are detected and then used for recognition of DR. The PCNN model is pre-trained on the publicly available Kaggle benchmark making use of a high-end graphical processing unit (GPU). The obtained results demonstrate that the CAD-DR system outperforms other recent state-of-the-art systems in terms of sensitivity (SE), specificity (SP), and accuracy (ACC). On the test set of 30,000 images, the CAD-DR system obtained an average SE of 93.20%, SP of 96.10%, and ACC of 98%. Some example images that are correctly classified by the proposed PCNN model are shown in Fig. 7. We achieved these results through several improvements to the CAD-DR system: a preprocessing step developed in a perceptually oriented CIE L*a*b* color space to enhance the contrast and adjust the light illumination, and a pre-trained TL (DL) approach used through the expected gradient length (EGL) to eliminate the need for a large number of labeled fundus images, which reduces the training effort for the CNN model. These results indicate that the proposed CAD-DR system is appropriate for screening the DR severity levels.

Figure 7: Fundus images depicting the five stages of diabetic retinopathy: (a) without DR, (b) mild, (c) moderate, (d) severe and (e) PDR

    6 Conclusions

In this paper, a new pre-training scheme for a CNN model (PCNN) is presented to develop a label-efficient training mechanism in the domain of retinal fundus images for the diagnosis of DR. In addition, we have developed a preprocessing step in a perceptually oriented color space to enhance the contrast and adjust the light illumination. The proposed CAD-DR system outperforms other state-of-the-art systems on 30,000 retinograph images. The DR-related lesion patterns are classified by the proposed PCNN system into five classes. Moreover, an additional interpretation layer is utilized to identify those image areas that should be labeled by the clinical expert. In this paper, an improved computer-aided diagnosis (CAD) system to assist ophthalmologists was developed. The presented PCNN architecture was evaluated on 80 thousand fundus images, and the achieved results illustrate the feasibility of the presented grading system for DR-related lesion detection and classification of the five-class severity level of DR. The PCNN system achieves high SE and SP rates, which makes it useful in practice. To build this PCNN system, fast image analysis methods with stable interpretation are utilized. After training the PCNN model, the classification of DR stages is accomplished in 0.04 s. Under these conditions, the obtained SE and SP rates indicate that the vast majority of images were accurately classified into one of the five stages of DR. To the best of our knowledge, there is no previous CAD-DR model in the medical imaging field that works in harmony with CNN parameters to select the most informative patches and images. The proposed PCNN method is computationally demanding when dealing with large-scale data; this issue can be mitigated with standard sampling techniques. As future work, other existing deep-learning (DL) methods can be used to train a CNN model for detecting and classifying diabetic maculopathy on large-scale annotated datasets.

    Acknowledgement:The authors extend their appreciation to the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University for funding this work through Research Group no.RG-21-07-01.

Consultant Works: We would also like to thank Dr. M. Arfan Jaffar for serving as a consultant who critically reviewed the study proposal and participated in the technical editing of the manuscript.

    Funding Statement:Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University for funding this work through Research Group no.RG-21-07-01.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
