
    Recognition and Detection of Diabetic Retinopathy Using Densenet-65 Based Faster-RCNN

Computers, Materials & Continua, 2021, Issue 5

Saleh Albahli, Tahira Nazir, Aun Irtaza and Ali Javed

1 Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia

2 Department of Computer Science, University of Engineering and Technology, Taxila, 47050, Pakistan

3 Department of Software Engineering, University of Engineering and Technology, Taxila, 47050, Pakistan

Abstract: Diabetes is a metabolic disorder that can lead to a retinal complication called diabetic retinopathy (DR), one of the four main causes of blindness worldwide. DR usually presents no clear symptoms before onset, which makes identification of the disease challenging. The healthcare industry may face unfavorable consequences if the gap in identifying DR is not filled with effective automation. Our objective is therefore to develop an automatic and cost-effective method for classifying DR samples. In this work, we present a custom Faster-RCNN technique for the recognition and classification of DR lesions in retinal images. After pre-processing, we generate the annotations of the dataset that are required for model training. We then introduce DenseNet-65 at the feature extraction level of Faster-RCNN to compute a representative set of key points. Finally, the Faster-RCNN localizes and classifies the input sample into one of five classes. Rigorous experiments performed on a Kaggle dataset comprising 88,704 images show that the introduced methodology achieves an accuracy of 97.2%. We have compared our technique with state-of-the-art approaches to show its robustness in terms of DR localization and classification. Additionally, we performed cross-dataset validation on the Kaggle and APTOS datasets and achieved remarkable results in both the training and testing phases.

Keywords: Deep learning; medical informatics; diabetic retinopathy; healthcare; computer vision

    1 Introduction

Diabetes, scientifically known as diabetes mellitus, is a metabolic imbalance that leads to an increased level of glucose in the bloodstream. According to an estimate provided in [1], about 415 million people are affected by this disease. Prolonged diabetes causes retinal complications that result in a medical condition called DR, one of the four main causes of blindness worldwide. More than 80% of people who are exposed to diabetes for a long time suffer from this condition [2]. The high level of glucose in circulating blood causes blood leakage and an increased supply of glucose to the retina. This often leads to abnormal lesions, i.e., microaneurysms, hard exudates, cotton wool spots, and hemorrhages in the retina, thus causing vision impairment [3]. DR usually presents no clear symptoms before onset. The most common screening tool used for the detection of DR is retinal (fundus) photography.

For treatment purposes and to avoid vision impairment, DR is classified into different levels according to the severity of the disorder. According to the Early Treatment Diabetic Retinopathy Study and the International Clinical Diabetic Retinopathy scale, there are five levels of DR severity. At the zeroth level there is no abnormality. The first, second, third, and fourth levels correspond to mild non-proliferative diabetic retinopathy (NPDR, microaneurysms only), moderate NPDR, severe NPDR, and proliferative DR, respectively. Tab. 1 summarizes the five levels of DR severity with their respective fundoscopy observations.

Table 1: Severity levels of DR

For computerized identification of DR, hand-coded key points were initially used to detect DR lesions [4-14]. However, these approaches exhibit low performance due to large changes in color and size, intra-class variations, bright regions, and high variation among different classes. Moreover, small signs other than microaneurysms, medical rule marks, and other objects also contribute to the unpromising results of CAD solutions. Another reason for the degraded performance of automated DR detection systems is the inclusion of non-affected regions with the affected area, which in turn yields a weak set of features. To achieve promising performance in computer-based diabetic retinal disease detection, an efficient set of key points is essential.

Object detection and classification in images using various machine learning techniques has been a focus of the research community [15,16]. Especially with the advent of CNNs, various models have been proposed to accomplish object detection and classification tasks in the areas of computer vision (CV), speech recognition, natural language processing (NLP), robotics, and medicine [17-21]. Similarly, there are various examples of deep learning (DL) use in biomedical applications [22,23]. In this work, we introduce a technique that covers data preparation, recognition, and classification of DR from retinal images. In the first step, we prepare our dataset with the help of ground truths. For detection and feature extraction, we propose a CNN model named DenseNet-65 for images of size 340×240 pixels. We also present a performance comparison of our model, in terms of accuracy, with DenseNet-121, ResNet-50, and EfficientNet-B5. Moreover, we compare our approach against the most recent techniques. Our analysis reveals that the introduced technique has the potential to correctly classify the images. The following are the main contributions of our work:

• Development of annotations for a large dataset comprising a total of 88,704 images.

• We introduce a customized Faster-RCNN with DenseNet-65 at the feature extraction level, which increases the ability to accurately locate small objects while decreasing both training and testing time complexity. By removing unnecessary layers, DenseNet-65 minimizes the loss of bottom-level high-resolution key points and preserves the information of small targeted regions, which would otherwise be lost through repeated downsampling.

• We develop a technique for classifying DR images using the DenseNet-65 architecture instead of hand-engineered features, improving cost-effectiveness and reducing the need for face-to-face consultation and diagnosis.

• Furthermore, we compare the classification accuracy of the presented framework with other algorithms such as AlexNet, VGG, GoogleNet, and ResNet-11. The results presented in this work show that the DenseNet architecture performs well in comparison to the latest approaches.

The remainder of this manuscript is organized as follows: Section 2 presents the related work, including approaches to DR image classification using handcrafted features and DL. Section 3 presents the proposed methodology for DR image classification using the custom Faster-RCNN. Section 4 presents the results and evaluation of the introduced work. Finally, Section 5 concludes the paper.

    2 Related Work

Over the years, several approaches have been introduced to correctly distinguish images of normal retinas from retinas with DR. In [24], the authors propose a technique that uses mixture models to dynamically threshold the images to differentiate exudates from the background. Afterward, edge detection is applied to distinguish cotton wool spots from the background texture. The proposed work reports a sensitivity of 100% and a specificity of 90%. The authors in [25] present an algorithm that performs two-step classification by combining four machine learning techniques, namely k-nearest neighbors (KNN) [26], Gaussian mixture models (GMM) [27], support vector machines (SVM) [28], and the AdaBoost algorithm [29]. The authors report a sensitivity and specificity of 100% and 53.16%, respectively. Priya et al. [30] propose a framework to categorize fundus samples into two classes: proliferative DR and non-proliferative DR. The technique first extracts hand-engineered features of DR abnormalities, for instance, hemorrhages, hard exudates, and swollen blood vessels. These hand-engineered features are then used to train a hybrid model of probabilistic neural networks (PNN), SVM, and Bayesian classifiers. The accuracy of each model is computed separately, i.e., 89.6%, 94.4%, and 97.6% for the PNN, SVM, and Bayesian classifiers, respectively. In [31], the authors propose a technique designed around the idea of a bag of visual words. In the initial stage, the algorithm detects points of interest based on hand-engineered features. Secondly, the feature vectors of these detected points are used to construct a dictionary. Finally, the algorithm uses an SVM to classify whether the input retinal image contains hard exudates.

With the introduction of DL, the focus has shifted to methods that classify DR images by employing deep neural networks as a replacement for hand-coded key points. Related work on categorizing normal and DR retinas using DL methodologies is summarized in Tab. 2.

Table 2: A comparison of DL-based approaches for DR classification


    3 Proposed Methodology

The presented work comprises two main parts: ‘dataset preparation’ and a custom ‘Faster-RCNN builder’ for localization and classification.

The first module develops the annotations for DR lesions to locate the exact region of each lesion, while the second component of the introduced framework builds a new type of Faster-RCNN. This module comprises two sub-modules: the first is a CNN framework, and the other is the training component, which trains the Faster-RCNN using the key points computed from the CNN model. Faster-RCNN accepts two types of input: the image sample and the location of the lesion in that image. Fig. 1 shows the functionality of the presented technique. First, an input sample along with the annotation’s bounding box (bbox) is passed to the nominated CNN model. The bbox identifies the region of interest (ROI) in the CNN key points. With these bboxes, reserved key points from the training samples are nominated. Based on the computed features, the Faster-RCNN trains a classifier and a regression estimator for the given regions. The classifier module assigns a predicted class to each object, and the regressor component learns to determine the coordinates of the potential bbox to locate the lesion in each image. Finally, accuracy is estimated for each unit according to the metrics employed in the CV field.

Figure 1: Architecture of the custom Faster-RCNN model

    3.1 Preprocessing

Like any other real-world dataset, our data contains various artifacts such as noise, out-of-focus images, and underexposed or overexposed images, which may lead to poor classification results. Therefore, we pre-process the samples before inputting them to the CNNs.

First, we correct the contrast. Each sample is blurred with a Gaussian filter, defined in Eq. (1):

G(x, y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))   (1)

where σ represents the standard deviation of the Gaussian, x and y represent the distance from the origin along the horizontal and vertical axes, and G(x, y) is the output of the Gaussian filter. Afterward, we subtract the local average color (the blurred image) from the original image using Eq. (2):

I′(x, y) = I(x, y) − (G(x, y) ∗ I(x, y))   (2)

where I′(x, y), I(x, y), and (G(x, y) ∗ I(x, y)) represent the contrast-corrected image, the original image, and the original image convolved with the Gaussian filter, respectively.

Second, we remove regions that contain no information. In the original dataset, there are certain areas of the image whose removal does not affect the output; therefore, we crop these regions from the input image. Cropping not only enhances classification performance but also helps reduce computation.
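To make these two steps concrete, below is a minimal sketch using OpenCV. The sigma value, the intensity threshold for cropping, and the +128 offset (used to keep subtracted values in a displayable range) are our assumptions; the paper does not report them.

```python
# A hedged sketch of the preprocessing described above: Gaussian-blur
# subtraction (Eqs. 1-2) followed by cropping uninformative borders.
# sigma, the threshold, and the +128 offset are illustrative assumptions.
import cv2
import numpy as np

def preprocess(path, out_size=(340, 240), sigma=10):
    img = cv2.imread(path)

    # Eq. (2): subtract the local average, G(x, y) * I(x, y), from I(x, y).
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma)
    corrected = cv2.addWeighted(img, 1.0, blurred, -1.0, 128)

    # Crop away the black border around the fundus, which carries no
    # information for classification.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ys, xs = np.where(gray > 10)  # pixels that belong to the retina
    corrected = corrected[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    return cv2.resize(corrected, out_size)
```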

    3.2 Annotations

The location of the DR lesions in every sample is necessary to detect the diseased area during the training procedure. In this work, we used the LabelImg tool to generate the annotations of the retinal samples and manually created a bbox for every sample. The dimensions of the bbox and the associated class of each object (xmin, ymin, xmax, ymax, width, and height) are stored in XML files. The XML files are used to generate a CSV file, from which a train.record file is created and later employed in the training procedure.
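As an illustration of this pipeline, the sketch below parses LabelImg XML files into a single CSV; a converter script (such as the widely used community generate_tfrecord.py, not shown) then turns the CSV into train.record. The directory and output names are illustrative assumptions.

```python
# A sketch of collecting LabelImg XML annotations into one CSV file.
# Directory and output names are illustrative assumptions.
import glob
import xml.etree.ElementTree as ET
import pandas as pd

rows = []
for xml_file in glob.glob("annotations/*.xml"):
    root = ET.parse(xml_file).getroot()
    for obj in root.findall("object"):
        box = obj.find("bndbox")
        rows.append({
            "filename": root.find("filename").text,
            "width": int(root.find("size/width").text),
            "height": int(root.find("size/height").text),
            "class": obj.find("name").text,  # lesion / severity label
            "xmin": int(box.find("xmin").text),
            "ymin": int(box.find("ymin").text),
            "xmax": int(box.find("xmax").text),
            "ymax": int(box.find("ymax").text),
        })

pd.DataFrame(rows).to_csv("train_labels.csv", index=False)
```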

    3.3 Faster-RCNN

The Faster-RCNN [19] algorithm is an extension of the earlier R-CNN [21] and Fast-RCNN [20] approaches, which employed the Edge Boxes [41] technique to generate region proposals for possible object areas. The functionality of Faster-RCNN differs from [21] in that it utilizes a Region Proposal Network (RPN) to create region proposals directly as part of the framework; that is, Faster-RCNN uses the RPN as an alternative to the Edge Boxes algorithm. The computational complexity of Faster-RCNN for producing region proposals is considerably lower than that of the Edge Boxes technique. Concisely, the RPN ranks the anchor boxes and returns those most likely to contain regions of interest (ROIs). Region proposal generation in Faster-RCNN is therefore fast and better attuned to the input samples. Faster-RCNN generates two types of output: (i) a classification that shows the class associated with each object, and (ii) the coordinates of the bbox.

    3.4 Custom Feature Faster-RCNN Builder

A CNN is a special type of neural network developed to perceive, recognize, and detect visual attributes from 1D, 2D, or ND matrices. In the presented work, image pixels are passed as input to the CNN framework. We employ DenseNet-65 as the feature extractor in the Faster-RCNN approach. DenseNet [42] is a recent CNN architecture in which the present layer is connected to all preceding layers. DenseNet comprises a set of dense blocks that are sequentially interlinked, with extra convolutional and pooling layers between successive dense blocks. DenseNet can represent complex transformations, which mitigates, to some degree, the absence of target position information in the top-level key points. DenseNet minimizes the number of parameters, which makes it cost-efficient. Moreover, DenseNet assists key-point propagation and encourages feature reuse, which makes it more suitable for lesion classification. Therefore, in this paper, we utilize DenseNet-65 as the feature extractor for Faster-RCNN. The architectural description of DenseNet-65 is given in Tab. 4, which lists the layers from which the key points are selected for further processing by Faster-RCNN, as well as the size to which the query sample is readjusted before key points are computed from the allocated layer. The training parameters for the customized Faster-RCNN are shown in Tab. 3. The detailed flow of our presented approach is shown in Algorithm 1.

Table 3: Training parameters of the proposed method

The main process of lesion classification through Faster-RCNN can be divided into four steps, as sketched below. First, the input sample along with its annotation is given to DenseNet-65 to compute the feature map. Second, the calculated key points are used as input to the RPN component to obtain the feature information of the region proposals. Third, the ROI pooling layer produces the proposal feature maps using the calculated feature map from the convolutional layers and the proposals from the RPN unit. Finally, the classifier unit outputs the class associated with each lesion, while the bbox produced by the bbox regression indicates the final location of the identified lesion.
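The following is a minimal sketch of this four-step pipeline using torchvision, which supports attaching a custom backbone to Faster-RCNN. Since the 65-layer variant is not a stock torchvision model, densenet121's feature extractor stands in purely for illustration; the anchor sizes and class count are assumptions.

```python
# A minimal sketch of plugging a DenseNet-style backbone into Faster-RCNN
# with torchvision. densenet121 stands in for DenseNet-65 (not a stock
# torchvision model); anchor sizes and the class count are assumptions.
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

backbone = torchvision.models.densenet121(weights=None).features
backbone.out_channels = 1024  # channel count of the last dense block's output

anchor_gen = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                             aspect_ratios=((0.5, 1.0, 2.0),))

model = FasterRCNN(backbone,
                   num_classes=6,  # five DR severity classes + background
                   rpn_anchor_generator=anchor_gen)

# One training step: the model returns the RPN and detection-head losses.
images = [torch.rand(3, 240, 340)]
targets = [{"boxes": torch.tensor([[30.0, 40.0, 120.0, 150.0]]),
            "labels": torch.tensor([1])}]
model.train()
loss_dict = model(images, targets)
```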

The proposed method is assessed using the Intersection over Union (IOU), as described in Fig. 2a, where X denotes the ground-truth rectangle and Y denotes the estimated rectangle containing DR lesions.

A lesion is considered detected when the IOU value is greater than 0.5; when the value is less than 0.5, the region is not counted as a detection. The Average Precision (AP) is commonly employed to evaluate the precision of object detectors such as R-CNN, SSD, and YOLO. The geometrical explanation of precision is shown in Fig. 2b. In our framework for the detection of DR lesions, AP is based on the notion of IOU.
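A minimal sketch of this decision rule follows; box coordinates are assumed to be (xmin, ymin, xmax, ymax).

```python
# IOU between a ground-truth box X and a predicted box Y, and the 0.5
# acceptance rule described above. Boxes are (xmin, ymin, xmax, ymax).
def iou(X, Y):
    ix1, iy1 = max(X[0], Y[0]), max(X[1], Y[1])
    ix2, iy2 = min(X[2], Y[2]), min(X[3], Y[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_x = (X[2] - X[0]) * (X[3] - X[1])
    area_y = (Y[2] - Y[0]) * (Y[3] - Y[1])
    return inter / (area_x + area_y - inter)

def is_lesion(gt_box, pred_box, threshold=0.5):
    # IOU > 0.5 -> counted as a detected lesion; otherwise background.
    return iou(gt_box, pred_box) > threshold
```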

Algorithm 1: Steps for DR recognition through the custom Faster-RCNN
START
INPUT: NI, annotation(position)
OUTPUT: Localized RoI, CFstDenseNet-65
  NI: total samples with DR lesions
  annotation(position): bounding-box coordinates of the lesions in the samples
  Localized RoI: lesion positions
  CFstDenseNet-65: custom Faster-RCNN model based on DenseNet-65 key points
imageSize ← [340 240]
α ← AnchorsEstimation(NI, annotation)  // anchor box estimation
CFstDenseNet-65 ← ConstructCustomDenseNet-65FasterRCNN(imageSize, α)  // custom Faster-RCNN
[tr, ts] ← partition the database into train and test sets
// Lesion detection training module
For each sample i in tr:
    extract DenseNet-65 key points → ni
End For
Train model CFstDenseNet-65 over ni and measure training time t_dense
η_dense ← PreLesionLoc(ni)
Ap_dense ← Evaluate_AP(CFstDenseNet-65, η_dense)
For each I in ts:
    a) compute key points using the trained model M → μI
    b) [bounding_box, objectness_score, class] ← Predict(μI)
    c) display the image along with bounding_box and class
    d) B ← B ∪ [bounding_box]
End For
Ap_test ← evaluate model M using B
FINISH

DenseNet-65 has two key differences from the traditional DenseNet: (i) DenseNet-65 has fewer parameters than the original model, since its first convolution layer has 32 channels instead of 64 and a kernel size of 3×3 instead of 7×7; (ii) the number of layers within each dense block is tuned to manage the computational complexity. Tab. 4 describes the structure of the proposed DenseNet-65 model.

Table 4: Structure of DenseNet-65
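As a sketch of difference (i), the modified stem would look like the following in PyTorch; the stride and pooling settings are assumptions carried over from the stock DenseNet.

```python
# The modified DenseNet-65 stem: a 3x3 convolution with 32 output channels,
# in place of DenseNet's 7x7 convolution with 64 channels. Stride and
# pooling values are assumptions from the stock architecture.
import torch.nn as nn

stem = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(32),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
)
```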

Figure 2: (a) IOU Venn diagram; (b) geometrical representation of precision

The dense block is the key component of DenseNet-65, as shown in Fig. 3, in which s×s×k₀ represents the feature maps (FPs) of the (L−1)-th layer: s is the spatial size of the FPs and k₀ is the number of channels. A non-linear transformation H(·) contains several operations: a batch normalization (BN) layer, a rectified linear unit (ReLU) activation function, a 1×1 convolution layer (ConvL) used to reduce the number of channels, and a 3×3 ConvL used for feature restructuring. The dense connection is represented by a long-dashed arrow that joins the (L−1)-th layer to the L-th layer and concatenates it with the result of H(·). Finally, s×s×(k₀+2k) is the output of the (L+1)-th layer.

Figure 3: Architecture of the dense block
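A sketch of one H(·) unit and its dense connection follows; the growth rate k and the 4k bottleneck width are assumptions taken from the standard DenseNet design.

```python
# One H(.) unit of the dense block in Fig. 3: BN -> ReLU -> 1x1 conv
# (channel reduction) -> 3x3 conv, with the input concatenated to the
# output. Growth rate k and the 4k bottleneck width are assumptions.
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    def __init__(self, in_channels, k=32):
        super().__init__()
        self.h = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, 4 * k, kernel_size=1, bias=False),
            nn.Conv2d(4 * k, k, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        # Dense connection: channels grow from k0 to k0 + k per layer.
        return torch.cat([x, self.h(x)], dim=1)
```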

After multiple dense connections, the number of FPs rises significantly, so a transition layer (TL) is added to reduce the feature dimension coming from the preceding dense block. The structure of the TL is shown in Fig. 4: it comprises BN and a 1×1 ConvL (which halves the number of channels), followed by a 2×2 average pooling layer that reduces the spatial size of the FPs. Here t denotes the total number of channels, and 'pool' denotes average pooling.

Figure 4: Architecture of the transition layer
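A corresponding sketch of the transition layer, following Fig. 4's description:

```python
# The transition layer of Fig. 4: BN, a 1x1 convolution that halves the
# t input channels, then 2x2 average pooling that halves the spatial size.
import torch.nn as nn

def transition_layer(t):
    return nn.Sequential(
        nn.BatchNorm2d(t),
        nn.Conv2d(t, t // 2, kernel_size=1, bias=False),  # t -> t/2 channels
        nn.AvgPool2d(kernel_size=2, stride=2),            # s -> s/2 spatial
    )
```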

    3.5 Detection Process

Faster-RCNN is a deep-learning-based technique that does not depend on methods such as selective search for proposal generation. Therefore, the sample with its annotation is given as input to the network, which directly computes the bbox showing the lesion location and the associated class.

    4 Results and Discussion

    4.1 Dataset

In this method, we employ the DR image database provided by Kaggle. There are two sets of training images with a total of 88,704 images. A label.csv file is provided that contains information regarding the severity level of DR. The samples in the database were collected using various cameras in multiple clinics over time. Sample images of the five classes from the Kaggle database are shown in Fig. 5.

Figure 5: Sample images from the Kaggle dataset; (a) no DR, (b) mild, (c) moderate, (d) severe, and (e) proliferative
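As a brief illustration, the sketch below pairs each fundus image with its severity grade from the provided CSV; the column names ("image", "level") follow the Kaggle DR competition format and are assumptions about the local copy.

```python
# Pairing Kaggle fundus images with severity grades from the CSV file.
# Column names ("image", "level") are assumed from the Kaggle DR format.
import pandas as pd

labels = pd.read_csv("label.csv")
class_names = ["No DR", "Mild", "Moderate", "Severe", "Proliferative"]

for _, row in labels.head().iterrows():
    print(f"train/{row['image']}.jpeg -> level {row['level']} "
          f"({class_names[row['level']]})")
```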

    4.2 Evaluation of DenseNet-65 Model

The detection accuracy of the proposed DenseNet-65-based method is compared with base models, i.e., DenseNet-121, ResNet-50, EfficientNet-B5, AlexNet, VGG, and GoogleNet.

In this part, we show the simulation results of ResNet, DenseNet-65, and EfficientNet-B5 in terms of accuracy for DR image classification. Tab. 5 compares the three models used in this work for the classification of DR images in terms of trainable parameters, total parameters, loss, and model accuracy. As presented in Tab. 5, DenseNet-65 has a significantly small number of total parameters, whereas EfficientNet-B5 has the highest number of model parameters. This is because the DenseNet architecture does not rely solely on the power of very deep and wide networks; rather, it makes efficient reuse of model parameters, i.e., it does not need to compute redundant feature maps, resulting in a significantly smaller number of total model parameters. The DenseNet architecture considered in this work, DenseNet-65, is 65 layers deep. Similarly, the ResNet used in this work has 50 layers; however, its number of parameters is still significantly higher than that of DenseNet-65.

Table 5: Comparison of various characteristics of the three methods used in our work for the classification of DR images. Note that DenseNet-65 is the best choice in terms of trainable parameters and classification accuracy

The number of trainable parameters of DenseNet-65 is small, i.e., 6,958,900, compared with the trainable parameters of ResNet and EfficientNet-B5. Consequently, the training time for the former network, DenseNet-65, is short compared with the latter methods, ResNet and EfficientNet-B5.

Our analysis reveals that the classification performance of DenseNet-65 is higher than that of the other methods, as shown in Tab. 6. DenseNet-65 correctly classifies 95.6% of the images representing human retinas suffering from DR. In contrast, the classification accuracies of ResNet and EfficientNet-B5 are 90.4% and 94.5%, respectively. Moreover, the techniques in [36] and [43] are computationally complex and may not perform well in the presence of bright regions, noise, or light variations in retinal images. Our method overcomes these problems by employing an efficient network for feature computation that can represent complex transformations, making it robust to such distortions.

Table 6: Comparison of our work with several approaches from the literature using different architectures for the classification of DR images

    4.3 Localization of DR Lesions Using Custom Faster-RCNN

For localization of the DR signs, the diseased areas are declared positive examples while the remaining healthy parts are treated as negative examples. A candidate area is categorized by an IOU threshold, set to 0.5: below this score, the area is considered background (negative), while areas with an IOU above 0.5 are classified as lesions. The localization output of the custom Faster-RCNN is shown in Fig. 6, with retinal samples evaluated by confidence value. The evaluation results exhibit high confidence scores, ranging from 0.89 up to 0.99.

The results of the presented methodology are analyzed using the mean IOU and precision over all samples of the test database. Tab. 7 demonstrates that the introduced framework achieved an average mean IOU of 0.969 and a precision of 0.974. Our method exhibits better results because of the precise localization of lesions by the custom Faster-RCNN based on DenseNet-65.

Figure 6: Test results of the custom Faster-RCNN for detection of DR lesions

Table 7: Performance of the proposed method on the Kaggle database

    4.4 Stage Wise Performance

The stage-wise results of the introduced framework are analyzed through experiments. Faster-RCNN precisely localizes and classifies the DR lesions. The classification results for DR in terms of accuracy, precision, recall, F1-score, and error rate are presented in Tab. 8. According to the results, the introduced methodology attains remarkable accuracy, precision, recall, and F1-score while showing a low error rate: the presented technique attains average values of 0.972, 0.974, 0.96, 0.966, and 0.034 for accuracy, precision, recall, F1-score, and error rate, respectively. The accuracy of the DenseNet-65 key-point computation, which represents each class in a viable manner, is the reason for the good classification. Moreover, a small association between the No DR and Mild DR classes is found; however, the two classes are still distinguishable. Thus, because of efficient key-point computation, our method delivers state-of-the-art DR classification performance, which exhibits the robustness of the presented network. The confusion matrix is shown in Fig. 7.

    Table 8:Stage-wise performance of the presented methodology

Figure 7: Confusion matrix of the presented methodology
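For reference, the sketch below shows how the stage-wise metrics in Tab. 8 follow from a multi-class confusion matrix; the matrix values are placeholders, not the paper's results.

```python
# Stage-wise metrics from a 5x5 confusion matrix C, where C[i, j] counts
# class-i samples predicted as class j. The matrix is a placeholder.
import numpy as np

def stage_metrics(C):
    tp = np.diag(C).astype(float)
    precision = tp / C.sum(axis=0)   # per class: TP / predicted positives
    recall = tp / C.sum(axis=1)      # per class: TP / actual positives
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / C.sum()
    return accuracy, precision.mean(), recall.mean(), f1.mean(), 1 - accuracy

C = np.array([[50, 2, 0, 0, 0],
              [3, 45, 2, 0, 0],
              [0, 2, 47, 1, 0],
              [0, 0, 1, 48, 1],
              [0, 0, 0, 1, 49]])
acc, prec, rec, f1, err = stage_metrics(C)
```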

    4.5 Comparative Analysis

In the present work, we report results averaged over 10 simulation runs. In each run, we randomly split the data at a ratio of 70% to 30% for training and testing, respectively; the performance evaluation metrics were then averaged across runs.
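A sketch of this protocol follows; train_and_score is a hypothetical stand-in for the full Faster-RCNN training pipeline.

```python
# 10 runs, each with a random 70/30 train/test split, with the accuracy
# averaged over runs. train_and_score is a hypothetical placeholder.
import numpy as np
from sklearn.model_selection import train_test_split

def train_and_score(train_idx, test_idx):
    # Placeholder: train the custom Faster-RCNN on train_idx and return
    # the measured test-set accuracy.
    return 0.972

indices = np.arange(88_704)  # one index per image in the Kaggle dataset
scores = [train_and_score(*train_test_split(indices, test_size=0.3,
                                            random_state=run))
          for run in range(10)]
mean_accuracy = float(np.mean(scores))
```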

In Tab. 9, we evaluate the proposed approach for DR classification against the methods presented by Xu et al. [32], Li et al. [33], Zhang et al. [36], Li et al. [40], Wu et al. [44], and Pratt et al. [45]. These techniques are capable of classifying DR from retinal images; however, they require intensive training and exhibit lower accuracy on training samples with class imbalance. Our method acquires the highest average accuracy of 97.2%, which signifies the reliability of the introduced solution against other approaches.

The proposed method achieves an average accuracy of 97.2%, while the comparative approaches attain an average accuracy of 84.735%; that is, our technique gives a 12.46 percentage-point performance gain (97.2 − 84.735 ≈ 12.46). Furthermore, the presented approach can easily be adopted and run on CPU- or GPU-based systems, and the per-sample test time is 0.9 s, which is faster than the other methods. Our analysis shows that the proposed technique can correctly classify the images.

Table 9: Comparison of the introduced framework with the latest approaches

    4.6 Cross-Dataset Validation

To further assess the presented approach, we perform cross-dataset validation: we train our method on the Kaggle database and test it on the APTOS-2019 dataset [46] by the Asia Pacific Tele-Ophthalmology Society. The dataset contains 3,662 retinal samples collected from Aravind Eye Hospital in India across several clinics under diverse image-capturing environments using fundus photography. This dataset consists of the same five classes as the Kaggle dataset.

We plot the cross-dataset evaluation as a box plot in Fig. 8, in which the training and testing accuracies are spread across the number line into quartiles, median, whiskers, and outliers. According to the figure, we attain an average accuracy of 0.981 for training and 0.975 for testing, which shows that our proposed work performs well on unseen samples too. Therefore, it can be concluded that the introduced framework is robust for DR localization and classification.
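A sketch of how such a box plot can be produced is shown below; the accuracy lists are placeholders, not the paper's measured per-run values.

```python
# A sketch of the box plot in Fig. 8, comparing per-run training and
# testing accuracies from the cross-dataset experiment. The accuracy
# lists are placeholders, not the paper's measured values.
import matplotlib.pyplot as plt

train_acc = [0.978, 0.980, 0.981, 0.982, 0.984]
test_acc = [0.971, 0.974, 0.975, 0.976, 0.979]

plt.boxplot([train_acc, test_acc],
            labels=["Train (Kaggle)", "Test (APTOS-2019)"])
plt.ylabel("Accuracy")
plt.title("Cross-dataset validation")
plt.show()
```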

Figure 8: Cross-dataset validation results: training on the Kaggle dataset and testing on APTOS-2019

    5 Conclusions

In this work, we introduced a novel approach to accurately identify the different levels of DR using a custom Faster-RCNN framework and presented an application for lesion classification as well. More precisely, we utilized DenseNet-65 to compute deep features from the given sample, on which Faster-RCNN is trained for DR recognition. The proposed approach can efficiently localize lesions and classify retinal images into five classes. Moreover, our method is robust to various artifacts, i.e., blurring, scale and rotational variations, intensity changes, and contrast variations. The reported results confirm that our technique outperforms the latest approaches. In the future, we plan to extend our technique to other eye-related diseases.

Acknowledgement: We would like to thank the Deanship of Scientific Research, Qassim University, for funding the publication of this project.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
