
    Modified Anam-Net Based Lightweight Deep Learning Model for Retinal Vessel Segmentation

2022-11-10 13:06:58  Syed Irtaza Haider, Khursheed Aurangzeb and Musaed Alhussein
Computers, Materials & Continua, 2022, Issue 10

Syed Irtaza Haider, Khursheed Aurangzeb and Musaed Alhussein

1 College of Computer and Information Sciences, King Saud University, Riyadh, 11543, Saudi Arabia

2 Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, 11543, Saudi Arabia

Abstract: The accurate segmentation of retinal vessels is a challenging task due to the presence of various pathologies as well as the low contrast of thin vessels and non-uniform illumination. In recent years, encoder-decoder networks have achieved outstanding performance in retinal vessel segmentation at the cost of high computational complexity. To address the aforementioned challenges and to reduce the computational complexity, we propose a lightweight convolutional neural network (CNN)-based encoder-decoder deep learning model for accurate retinal vessel segmentation. The proposed deep learning model consists of an encoder-decoder architecture along with bottleneck layers that perform depth-wise squeezing, followed by full convolution, and finally depth-wise stretching. The inspiration for the proposed model is taken from the recently developed Anam-Net model, which was tested on CT images for COVID-19 identification. For our lightweight model, we used a stack of two 3 × 3 convolution layers (without spatial pooling in between) instead of a single 3 × 3 convolution layer as proposed in Anam-Net to increase the receptive field and to reduce the trainable parameters. The proposed method includes fewer filters in all convolutional layers than the original Anam-Net and does not increase the number of filters as the resolution decreases. These modifications do not compromise the segmentation accuracy, but they do make the architecture significantly lighter in terms of the number of trainable parameters and computation time. The proposed architecture has comparatively fewer parameters (1.01M) than Anam-Net (4.47M), U-Net (31.05M), SegNet (29.50M), and most other recent works. The proposed model does not require any problem-specific pre- or post-processing, nor does it rely on handcrafted features. In addition, being efficient in terms of segmentation accuracy as well as lightweight makes the proposed method a suitable candidate for use in screening platforms at the point of care. We evaluated our proposed model on the open-access DRIVE, STARE, and CHASE_DB datasets. The experimental results show that the proposed model outperforms several state-of-the-art methods, such as U-Net and its variants, the fully convolutional network (FCN), SegNet, CCNet, ResWNet, the residual connection-based encoder-decoder network (RCED-Net), and the scale-space approximation network (SSANet), in terms of {dice coefficient, sensitivity (SN), accuracy (ACC), and the area under the ROC curve (AUC)}, with scores of {0.8184, 0.8561, 0.9669, and 0.9868} on the DRIVE dataset, {0.8233, 0.8581, 0.9726, and 0.9901} on the STARE dataset, and {0.8138, 0.8604, 0.9752, and 0.9906} on the CHASE_DB dataset. Additionally, we perform cross-training experiments on the DRIVE and STARE datasets. The results of this experiment indicate the generalization ability and robustness of the proposed model.

Keywords: Anam-Net; convolutional neural network; cross-database training; data augmentation; deep learning; fundus images; retinal vessel segmentation; semantic segmentation

    1 Introduction

Different chronic diseases, such as diabetic retinopathy (DR), glaucoma, cataracts, and others, gradually deteriorate certain parts of the eye, eventually leading to partial or total blindness. The impact of these chronic diseases varies from person to person. This implies that while some people have these chronic conditions, their vision is fine. Others experience a significant impact of chronic diseases on their eye health, due to either ocular weakness or the severity of the chronic condition. This observation effectively translates into the need for regular eye health monitoring, especially for those with a genetic history or those who suffer from chronic diseases. Regular eye health monitoring will lead to a timely prognosis of the condition, allowing us to prevent or at least delay the disease's impact until later in life. The analysis of the retinal vessels and the optic cup/disc has prime importance for the diagnosis of DR and glaucoma, respectively [1,2]. The manual diagnosis of these ocular diseases by physicians is time-consuming, exhausting, and can lead to inter- and intra-observer variations.

There are two types of glaucoma: closed-angle and open-angle. In closed-angle glaucoma, parts of the iris block the drainage of fluid, which results in pressure in the eye. The symptoms of closed-angle glaucoma are noticeable and include sudden ocular pain, high intraocular pressure, redness of the eye, and a sudden decrease in vision. Contrarily, in open-angle glaucoma, the fluid flow is not blocked, due to which the symptoms are not noticeable in the early stages [3].

The traditional procedures for retinal vessel analysis and treatment involve the manual grading and assessment of retinal images by optometrists and ophthalmologists, which is tiresome and susceptible to observer variation from doctor to doctor. It is also contingent on the availability of such experts, as well as their experience and expertise. Retinal images graded through manual analysis may show large disparities because of the grader's fatigue. Similarly, we may say that ophthalmologists' manual image analysis and grading imposes a constraint on the quality of information extracted, especially for population-scale screening programs, which are critical for the early diagnosis of vision-threatening eye diseases.

On the other hand, automated procedures for analyzing retinal images in order to diagnose eye diseases in a timely fashion have the inherent ability to be more accurate and faster. Furthermore, these automated diagnostic systems could be employed for the large-scale screening programs that are required for the detection and prevention of eye diseases in the general public, who are often unaware of the progression of these diseases [4].

Semantic segmentation can be used for observing variations in retinal structures, including the vessels, optic cup, and optic disc, which can help characterize and diagnose diseases such as glaucoma [5], DR [6,7], age-related macular degeneration (AMD) [8], retinal vascular occlusions [9], and chronic systemic hypoxemia [10]. Thanks to advancements in high-performance computing, image processing, and machine/deep learning, researchers have devised and explored encoder-decoder-based architectures for the semantic segmentation of biomedical images, specifically retinal fundus images for vascular segmentation. Those methods, however, have limited impact and accuracy. There is a need to explore and develop novel lightweight deep learning models for the accurate detection and diagnosis of retinal structures in fundus images for large populations at the point of care.

The effectiveness of a diagnostic system for population-scale screening depends on both the efficiency and the computational complexity of the deep neural network (DNN) model. Accurate retinal vessel segmentation is essential for the diagnosis of DR and is a highly challenging task due to the high density, tortuosity, and variable shape/diameter of retinal vessels, as well as the existence of lesions, including hard/soft exudates and microaneurysms, in the retinal images [11]. Numerous other challenges in retinal vessel segmentation include the central vessel reflex, vessel crossing and branching, and the creation of new vessels in advanced stages of the disease. Other parameters, such as camera shake at the time of capture and image brightness, are just as important and should be taken into account. All of these internal and external factors increase the challenges of developing eye disease diagnostic systems, both in terms of being competitive for high reliability and being usable at the point of care.

In recent years, the research community has put a lot of effort into developing automated methods for retinal vessel segmentation. However, a few challenges, such as the presence of the central vessel reflex, lesions, and low contrast, still need the researchers' attention. A robust deep learning method for retinal vessel segmentation should handle the aforementioned challenges. The biggest challenge with existing CNN-based models is their significantly higher computational complexity, caused by millions of trainable parameters.

Considering the aforementioned challenges and the computational complexity of state-of-the-art methods, we propose a lightweight CNN-based model, in which we adapted an encoder-decoder-based architecture along with bottleneck layers (depth-wise squeezing and stretching) for the implementation. The inspiration for the proposed model is taken from the recently developed Anam-Net model [12], which was tested on CT images for COVID-19 identification. The Anam-Net model is based on an encoder-decoder architecture along with the AD-Block. The attribute of being lightweight, in addition to being highly efficient in achieving better evaluation metrics, makes Anam-Net a suitable and competitive choice for use in screening platforms at the point of care. To the best of our knowledge, our work is the first to modify Anam-Net and evaluate its suitability for retinal vessel segmentation. The proposed modifications reduce the number of trainable parameters from 4.47M, as reported in [12], to 1.01M. The contributions of this work can be summarized as follows:

1. We propose a lightweight CNN-based encoder-decoder architecture based on Anam-Net. We used a stack of two 3 × 3 convolution layers instead of a single 3 × 3 convolution layer to increase the receptive field. In addition, we used fewer filters in all convolutional layers than the original Anam-Net and do not increase the number of filters as the resolution decreases. These changes significantly reduce the number of learnable parameters in the proposed model without compromising the segmentation accuracy.

2. We conduct extensive experiments on three publicly available databases, namely DRIVE, STARE, and CHASE_DB, for a fair comparison with previous similar works, and achieve improved evaluation metrics compared with the best models from the state-of-the-art.

3. The performance of the proposed model is evaluated on images with challenges such as the central vessel reflex, the presence of lesions, and low contrast to assess its generalization capability.

4. We perform cross-training experiments on the DRIVE, STARE, and CHASE_DB datasets, and achieve results that show the generalization ability and robustness of the proposed model.

The remainder of the manuscript is structured as follows. In Section 2, the related work on retinal vascular segmentation from the existing state-of-the-art is discussed. The proposed model and its modifications to Anam-Net are described in detail in Section 3. In Section 4, we discuss implementation details and experimental findings and present a detailed evaluation of our developed model using standard evaluation metrics. This section also includes a comprehensive comparison of the proposed retinal vessel segmentation model with other state-of-the-art competing models. Section 5 contains the discussion and analysis, while Section 6 concludes with some closing remarks and recommendations for further research.

    2 Related Work

Retinal vessel segmentation has been given due attention by academics and researchers throughout the globe, especially in the last decade. The reason for this extraordinary importance is partly due to the capability of recently developed DNNs for precise and accurate segmentation of retinal vessels and other parts of the retinal structure, which are usually required by ophthalmologists for diagnosing several eye diseases, including DR, glaucoma, etc. In general, vessel segmentation methods may be classified as supervised or unsupervised based on whether they rely on ground-truth images. In this section, we will briefly discuss unsupervised methods with a focus on state-of-the-art supervised retinal vessel segmentation methods.

In [13], the authors classify vessel segmentation methods into various categories such as kernel-based approaches, vessel-tracking approaches, mathematical morphology, multiscale-based techniques, local thresholding, and model-based methods. The authors in [14] developed a kernel-based approach to segment the retinal vessels under the assumption that the width of retinal vessels stays constant with distance. This assumption restricts the adaptation of the kernel-based method to changes in retinal vessel width and orientation. Vessel-tracking approaches, such as the one proposed in [15], use a set of starting points to trace the ridges of retinal vessels. This method requires user intervention to select the starting and ending points, which limits the automation of the approach. The morphological approaches use mathematical morphology operations to segment retinal vessels; in most cases, top-hat operators are used to detect retinal vessels [16]. The authors in [17] proposed a multi-scale detector that segments the retinal vessels at various scales and orientations. Such methods are fast; however, for thin vessel segmentation, their performance degrades significantly. In [18], the authors proposed an adaptive thresholding-based method for retinal vascular segmentation. The adaptive thresholding-based approach has the disadvantage of sometimes resulting in an unconnected vascular structure. Model-based methods segment vessels by considering them as flexible curves [19]. This approach is too sensitive to changes in intra-image contrast. For a comprehensive review of retinal vessel segmentation, especially unsupervised methods, please refer to [20].

In [21], the authors introduced a lattice neural network (LNN) with dendritic processing for retinal vessel segmentation. One of the important steps of their methodology was feature extraction and feature reduction. They compared their model with well-known methods such as support vector machines (SVM) and the multilayer perceptron (MLP). The authors report dice scores of 0.69 and 0.66 for the DRIVE and STARE datasets, which are quite low compared to the state-of-the-art methods.

In [22], the authors proposed an FCN conditional random field (CRF) model for vessel segmentation. They achieved dice scores of 0.79 and 0.76 for the DRIVE and STARE datasets. However, for retinal images with serious pathologies, such as hemorrhage inside the optic disc, their proposed model produces a large number of false positives. In a different approach, Mo et al. [23] proposed an FCN with deep feature fusion (FCN-DFF) method. Their method achieved good segmentation accuracy; however, the number of trainable parameters was approximately eight times higher than in our proposed method. In the work reported by [24], a vessel segmentation method based on an FCN with the stationary wavelet transform (SWT) was proposed. The method achieved good segmentation performance on the DRIVE and STARE datasets. However, the cross-training results for a model trained on the STARE dataset and tested on DRIVE images showed very low sensitivity, which limits the generalization ability of the model.

In [25], a CNN was applied to learn the discriminative features, whereas a combination of filters was used to enhance the thin vessels. Finally, a CRF was used for vascular segmentation. Their proposed method was not an end-to-end system, as the CRF parameters were not trained together with the CNN, which results in a weight gap between the CNN and CRF that limits the overall network performance. In the work by [26], the authors proposed a deep learning method combining a multiscale CNN with their improved loss function along with a CRF. Their method achieved low sensitivity for fundus images with lesions as well as for regions with low contrast, which resulted in an overall low sensitivity of the model.

The authors in [27] developed a cross-connected CNN (called Cc-Net)-based model, in which all convolution layers of the secondary and primary paths are connected to each other to facilitate multi-level feature fusion. In a different approach, Abbas et al. [28] proposed a novel approach based on a generative adversarial network (GAN). Their proposed method utilizes a generator network and a patch-based discriminator network. GAN-based models have some limitations, such as high sensitivity to hyperparameter selection, overfitting, generator gradient vanishing, and non-convergence, which make them undesirable for semantic segmentation tasks.

In the last few years, several researchers have proposed variants of U-Net for retinal vessel segmentation. In [29], the authors proposed a recurrent residual CNN named R2U-Net. Their proposed model utilizes the power of U-Net, residual networks, and recurrent CNNs. They achieved the second-best and third-best dice scores on the STARE and CHASE_DB datasets. However, the generalization ability of the model was not validated by performing cross-training experiments. In a different approach, Yan et al. [30] devised a U-Net-based model with an innovative joint loss to address the class imbalance between thick and thin retinal vessels in fundus images. During the training phase, the segment-level loss and the pixel-wise loss are used to train the kernels of the two branches, and the losses are merged to train the network to learn better features. However, their proposed method achieved low sensitivity on individual datasets as well as in cross-training experiments. In addition, the number of trainable parameters was approximately 36M, which resulted in a high computational complexity of the model.

In [31], a model based on U-Net and deformable convolutional units was proposed. Their proposed model uses a patch size of 48 × 48 and replaces the original convolutional layer with a deformable convolutional block. The results indicate that their model achieved low sensitivity compared to several state-of-the-art methods. In the work by [32] and [33], the authors applied a patch-based learning strategy in combination with Dense U-Net and Nest U-Net, respectively. The results in [32] indicate that breakage of the fine retinal vessels occurred during binarization, which requires heavy post-processing steps. In [34], the authors proposed a vessel segmentation model named Pyramid U-Net, where pyramid-scale aggregation blocks were employed in the encoder and decoder stages to extract the coarse and fine details of retinal vessels. In [35], a variant of U-Net named ResWnet was proposed, in which two contraction and expansion paths are used instead of one, allowing the model to extract the deeper details of the target feature images. To overcome the gradient vanishing problem and speed up convergence, an enhanced residual block that substitutes for the convolutional layers was developed.

The authors in [36] developed a hybrid approach for retinal vessel segmentation, in which they combined unsupervised and supervised learning. They applied a multi-scale matched filter with vessel enhancement features along with the basic U-Net model. They used the three channels of the retinal image separately to extract different features of the retinal vessels and fused the obtained results. Though they achieved better evaluation metrics, the computational complexity of their developed model is very high. In [37], the authors developed a DNN model that is a variant of the U-Net architecture. It combines batch normalization and residual blocks in the upsampling as well as downscaling parts of the encoder and decoder. During training, their model receives extracted patches as input, where the loss function used is based on the distance of each pixel from the vascular structure.

Lv et al. [38] present an attention-guided U-Net with atrous convolution for retinal vessel segmentation, which directs the network to distinguish between vessel and non-vessel pixels. In the feature layers, atrous convolution replaces the standard convolution layer to increase the receptive field. Their method achieved low sensitivity compared to several state-of-the-art methods. Moreover, although they tried to reduce the computational complexity, their model still has more than 28 million parameters. In a different approach, Zhuang [39] proposed a vessel segmentation method named Ladder-Net, which consists of a chain of multiple U-Nets. Their proposed model achieved the second-best dice score but low sensitivity on the CHASE_DB dataset.

In our previous work [40], we aimed to reduce the computational complexity and memory overhead of the developed model, called RCED-Net. We used skip connections to share the indices of the max-pooling operation from the encoder to the respective stages of the decoder. Sharing the max-pooling indices was used to improve the resolution of the feature map, which significantly reduced the computational overhead in terms of fewer parameters. Additionally, our developed strategy helped remove the need for pre- as well as post-processing.

Despite the enhanced accuracy of newly investigated and developed deep learning-based supervised techniques for retinal vessel segmentation, a number of issues remain that require substantial attention from scholars. The use of extensive pre- and post-processing steps significantly increases the computational complexity of the overall system. Additionally, these pre- and post-processing steps are mostly based on heuristic optimization algorithms; their tuning to different retinal pathologies and other noise sources has been overlooked and is highly needed. In addition, earlier studies did not pay much attention to the memory overhead and computational complexity of training a deep learning model with millions of trainable parameters, which limits its application in large-scale screening environments. By carefully observing the previous studies, it can be seen that the segmentation performance of most state-of-the-art methods was affected by the presence of various pathologies in fundus images. Most of the studies did not conduct cross-training experiments to validate the generalizability and robustness of their proposed models. The few studies that presented their cross-training results achieved either low sensitivity or high complexity due to a large number of trainable parameters.

    3 Proposed Framework

We propose a lightweight CNN-based model, in which we adapted the encoder-decoder-based architecture along with the AD-Block. In the proposed architecture, two modifications are made with respect to the basic Anam-Net model: (1) A stack of two 3 × 3 convolution layers is used instead of a single 3 × 3 convolution layer to increase the receptive field. (2) A fixed number of 64 filters is used in all convolutional layers, whereas in Anam-Net, the number of filters increases as we go deeper into the network. These modifications improve the segmentation accuracy compared to the state-of-the-art methods while drastically reducing the number of trainable parameters.

The proposed segmentation model utilizes the AD-block in the encoder and decoder stages. The AD-block consists of a 1 × 1 convolution for depth squeezing, followed by a 3 × 3 convolution for feature extraction, and finally a 1 × 1 convolution for depth stretching. The architecture details of the AD-block are presented in Tab. 2. The basic idea behind the AD-block is to squeeze the feature space dimension depth-wise before performing local feature extraction using the 3 × 3 convolution. To summarize the operations performed in the AD-block, the output h(x) of the AD-block can be written as,

where f(x;θ) represents the sequence of convolution operations parametrized by θ, and x represents the feature maps at the input of the AD-block.
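A plausible form of this relation, assuming a residual (skip) formulation consistent with the roles of x and f(x;θ) described above (an assumption, not confirmed here), is

$$h(x) = x + f(x;\theta),$$

where the identity term x carries the input feature maps forward and f(x;θ) denotes the squeeze-convolve-stretch path of the AD-block.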

Fig. 1 shows the architecture proposed in this study. Like the U-Net model, it consists of a contracting path (encoder) and an expansion path (decoder). The layer-wise details of the proposed model are shown in Tab. 1. The input fundus image is passed through a stack of two convolution layers, each with 64 filters. Each convolution layer represents a 3 × 3 convolution, followed by batch normalization and rectified linear unit (ReLU) activation. We then include a max-pooling layer, a downsampling operation that reduces the spatial dimensions by a factor of two and lowers the computational complexity. Afterwards, the AD-block is applied for robust feature learning. These steps are repeated several times until the resolution of the image is low enough. Our architecture consists of four AD-blocks in each of the encoder and decoder stages. In the expansion path, a transposed convolution layer is applied before the AD-block to upsample the feature map to the desired resolution. The learned features from the contraction path are concatenated with the layers of the expansion path at the decoder stage, allowing the network to learn at several scales.
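A minimal Keras sketch of these building blocks is given below. It follows the description above (two stacked 3 × 3 convolutions with 64 filters, batch normalization and ReLU, 2 × 2 max pooling, and an AD-block that squeezes the depth with a 1 × 1 convolution, applies a 3 × 3 convolution, and stretches the depth back with a 1 × 1 convolution). The function names, the squeeze ratio, and the residual addition inside the AD-block are illustrative assumptions, not the authors' exact implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_bn_relu(x, filters, kernel_size=3):
    """Convolution followed by batch normalization and ReLU activation."""
    x = layers.Conv2D(filters, kernel_size, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def ad_block(x, filters=64, squeeze=4):
    """AD-block sketch: 1x1 depth squeeze -> 3x3 convolution -> 1x1 depth stretch."""
    y = conv_bn_relu(x, filters // squeeze, kernel_size=1)  # depth-wise squeeze
    y = conv_bn_relu(y, filters // squeeze, kernel_size=3)  # local feature extraction
    y = conv_bn_relu(y, filters, kernel_size=1)             # depth-wise stretch
    return layers.Add()([x, y])                             # residual addition (assumed)

def encoder_stage(x, filters=64):
    """One encoder stage: two stacked 3x3 convolutions, 2x2 max pooling, then an AD-block."""
    x = conv_bn_relu(x, filters)
    x = conv_bn_relu(x, filters)
    skip = x                                   # features concatenated with the decoder stage
    x = layers.MaxPooling2D(pool_size=2)(x)
    x = ad_block(x, filters)
    return x, skip

# Minimal usage: one encoder stage on a 576 x 576 single-channel (enhanced green) input.
inputs = layers.Input(shape=(576, 576, 1))
x, skip = encoder_stage(inputs)
stage = tf.keras.Model(inputs, x)
```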

The loss function is one of the factors that most influences the segmentation accuracy achieved by the network. In the image segmentation literature, the majority of CNN-based networks employ cross-entropy as the loss function. In this work, we use a log dice loss, which focuses more on less accurately labelled pixels [41]. The loss function can be written as,
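A common log-Dice formulation, given here only as an assumption since the exact expression from [41] is not reproduced above, is

$$\mathcal{L}_{\text{dice}} = -\log\left(\frac{2\sum_{i} p_i\, g_i + \epsilon}{\sum_{i} p_i + \sum_{i} g_i + \epsilon}\right),$$

where $p_i$ is the predicted vessel probability of pixel $i$, $g_i$ the corresponding ground-truth label, and $\epsilon$ a small smoothing constant.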

    Figure 1:Proposed lightweight CNN-based encoder-decoder model

Table 1: Architecture details of the proposed model, where each convolution layer represents a 3 × 3 convolution, followed by batch normalization and ReLU activation

Table 2: Architecture details of the AD-Block, where N is the batch size, M the spatial extent, and Z the depth of the feature map

    4 Results and Comparative Analysis

The proposed model was evaluated on the open-access DRIVE, STARE, and CHASE_DB datasets. The DRIVE dataset consists of 40 fundus images with a resolution of 565 × 584 pixels, obtained from a DR screening program. The set of 40 images is divided into two sets: a training set and a test set, each with 20 images. The STARE dataset consists of 20 fundus images with a resolution of 605 × 700 pixels. Unlike the DRIVE dataset, the STARE dataset has no separate training and test data. In this work, for the STARE dataset, we applied a leave-one-out strategy, where the model is trained on n-1 samples and tested on the remaining sample. The CHASE_DB dataset consists of 28 images with a resolution of 999 × 960 pixels. We used a set of 20 images for training the network, whereas the remaining 8 images were used for testing. The number of training images in all three datasets is limited; therefore, we use a variety of data augmentation techniques to boost the network's generalization capabilities. The details of the data augmentation are discussed in the implementation details section.
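The leave-one-out protocol for STARE can be expressed as a simple split over the 20 image indices; the sketch below only illustrates the fold structure (training and evaluation code is omitted and the variable names are assumptions).

```python
from sklearn.model_selection import LeaveOneOut
import numpy as np

image_ids = np.arange(20)  # indices of the 20 STARE images (illustration only)
for train_idx, test_idx in LeaveOneOut().split(image_ids):
    # Train a fresh model on the 19 images in train_idx and evaluate it on the
    # single held-out image in test_idx.
    print(f"fold {int(test_idx[0]):02d}: train on {len(train_idx)} images, test on 1")
```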

All three datasets include two manual segmentations of the fundus images, with the first observer's manual annotations serving as ground truth for our evaluation metrics. The image size varies across the fundus images belonging to the different datasets; for this reason, we resized the images to 576 × 576 pixels. The output probability map of the network is rescaled to its original size using bilinear interpolation, so that the segmentation performance of the proposed model is evaluated at the original resolution of the images. This step ensures that the results are not skewed by the scale variations to which the image is exposed.
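A sketch of this resolution handling is shown below: the (preprocessed, single-channel) image is resized to 576 × 576 for the network, and the predicted probability map is rescaled back to the original resolution with bilinear interpolation before the metrics are computed. OpenCV is assumed, and `model` stands for a hypothetical trained network.

```python
import cv2
import numpy as np

def segment_at_original_resolution(model, image):
    """image: 2-D preprocessed fundus channel; returns a probability map at the original size."""
    h, w = image.shape
    net_input = cv2.resize(image, (576, 576), interpolation=cv2.INTER_LINEAR)
    prob_map = model.predict(net_input[np.newaxis, ..., np.newaxis])[0, ..., 0]
    return cv2.resize(prob_map, (w, h), interpolation=cv2.INTER_LINEAR)  # bilinear rescaling
```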

    4.1 Image Pre-Processing

The acquired retinal fundus images may have non-uniform luminosity and intra- and inter-image contrast variability; thus, preprocessing is required to suppress noise and improve contrast. The preprocessing steps are shown in Fig. 2, where the RGB image is transformed into the LAB color space and CLAHE is applied to the lightness channel. Next, the enhanced L-channel is merged with the original A and B channels. The image is then transformed back into the RGB color space, and the enhanced green channel is extracted. In the last preprocessing step, a gamma transformation with a gamma value of 1.2 is applied to the enhanced green channel to enhance the local details and adjust the image contrast.
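A minimal OpenCV sketch of this pipeline, assuming an 8-bit RGB input; the CLAHE clip limit, tile size, and the direction of the gamma mapping are assumptions, since they are not specified above.

```python
import cv2
import numpy as np

def preprocess(rgb_image, gamma=1.2):
    """RGB -> LAB, CLAHE on L, merge, back to RGB, take green channel, gamma correction."""
    lab = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # parameters assumed
    lab_enhanced = cv2.merge((clahe.apply(l), a, b))
    rgb_enhanced = cv2.cvtColor(lab_enhanced, cv2.COLOR_LAB2RGB)
    green = rgb_enhanced[:, :, 1].astype(np.float32) / 255.0
    return np.power(green, gamma)  # gamma transformation on the enhanced green channel
```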

Fig. 3 shows the application of the preprocessing steps on a retinal image, where parts (a), (b), (c), and (d) show the original image, the green channel of the original image, the green channel of the enhanced image, and the image after the gamma transformation, respectively. Comparing (b) and (c) in Fig. 3, the vessel information is more obvious in the green channel of the enhanced image. Moreover, (c) distinguishes well between foreground and background.

    Figure 2:Preprocessing Steps

    Figure 3:(a)Original image,(b)Green channel,(c)Green channel from enhanced image,(d)Gamma transformation

    4.2 Implementation Details

We use the Keras deep learning library to perform end-to-end training of the model. The well-known Adam optimizer is used with a learning rate of 0.001. If the validation loss does not improve after ten epochs, the learning rate is decreased by a factor of 0.1. The model is trained for 150 epochs with a batch size of 4. To avoid overfitting, we apply an early stopping criterion based on the validation loss.
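The training configuration described above maps directly onto standard Keras components; the sketch below reflects it (the early-stopping patience and the loss placeholder are assumptions).

```python
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping

optimizer = Adam(learning_rate=0.001)
callbacks = [
    # Reduce the learning rate by a factor of 0.1 if the validation loss stalls for 10 epochs.
    ReduceLROnPlateau(monitor="val_loss", factor=0.1, patience=10, verbose=1),
    # Early stopping on validation loss; the patience value here is an assumption.
    EarlyStopping(monitor="val_loss", patience=20, restore_best_weights=True),
]
# model.compile(optimizer=optimizer, loss=log_dice_loss)
# model.fit(train_data, validation_data=val_data, epochs=150, batch_size=4, callbacks=callbacks)
```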

All three datasets have a limited number of training images, i.e., 20 for DRIVE, 19 for STARE (leave-one-out approach), and 20 for CHASE_DB. It is very challenging to attain an acceptable segmentation accuracy by training a deep learning model on such a small dataset. Therefore, we apply several data augmentation methods to increase the robustness and improve the generalization ability of the network. The data augmentation strategies include, but are not limited to, horizontal flip, vertical flip, random rotations in the range of [0, 360] degrees, random width and height shifts in the range of [0, 0.15], and random magnification in the range of [0.3, 0.12].
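These augmentations can be expressed, for illustration, with Keras' ImageDataGenerator. The zoom interval below is an assumption, since the magnification range quoted above appears to contain a typo.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    horizontal_flip=True,
    vertical_flip=True,
    rotation_range=360,        # random rotations in [0, 360] degrees
    width_shift_range=0.15,    # random width shift in [0, 0.15]
    height_shift_range=0.15,   # random height shift in [0, 0.15]
    zoom_range=(0.88, 1.12),   # illustrative magnification range (assumption)
    fill_mode="constant",
    cval=0.0,
)
```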

All computations were done on Ibex at the High-Performance Computing (HPC) facility of King Abdullah University of Science and Technology (KAUST), where we used a single RTX 2080 Ti GPU for our experiments.

    4.3 Evaluation Metrics

The output of our proposed model is a probability map that describes the likelihood of each pixel belonging to a vessel or non-vessel. We obtain the binary segmentation of retinal vessels by thresholding the probability map with a value of 0.4 for all three datasets. A pixel in the probability map is considered a blood vessel pixel if its predicted value is greater than the threshold; otherwise, it is considered a background pixel.
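The binarization step is a single threshold comparison, for example:

```python
import numpy as np

def binarize(prob_map, threshold=0.4):
    """1 = vessel pixel (probability above the threshold), 0 = background pixel."""
    return (prob_map > threshold).astype(np.uint8)
```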

We used the well-known standard evaluation metrics that are commonly used for the evaluation of deep learning models in medical image segmentation and analysis. We aim to evaluate our developed DNN model for retinal vessel segmentation against the publicly available ground truth from experts. The terms true positive, false positive, true negative, and false negative are abbreviated as TP, FP, TN, and FN, respectively. The evaluation metrics used are: SN, specificity (SP), ACC, F1 score, and the Matthews correlation coefficient (MCC). The equations for these evaluation metrics are provided below.
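The standard forms of these metrics in terms of TP, FP, TN, and FN are

$$SN = \frac{TP}{TP+FN}, \qquad SP = \frac{TN}{TN+FP}, \qquad ACC = \frac{TP+TN}{TP+FP+TN+FN},$$

$$F1 = \frac{2\,TP}{2\,TP+FP+FN}, \qquad MCC = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}.$$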

Sensitivity and specificity indicate the proportion of correctly classified vessel pixels with respect to the ground-truth vessel pixels and the proportion of correctly classified non-vessel pixels with respect to the ground-truth background pixels, respectively. Accuracy indicates the ratio of all correctly classified pixels to the total number of pixels in the fundus image. Retinal vessel segmentation is a class-imbalance problem, since only 9% to 14% of the pixels belong to the retinal vessels, whereas the remaining pixels are background [42]. Under class imbalance, accuracy alone may be misleading for binary segmentation; therefore, we also consider the dice score and the MCC for the performance evaluation of the proposed model. In addition to the metrics listed above, the AUC, which ranges from 0 to 1, was employed to evaluate the image segmentation.

    4.4 Validation of the Proposed Method

To provide a comprehensive understanding of the overall segmentation performance of the proposed method, Tab. 3 shows various evaluation metrics for the individual test images of the DRIVE, STARE, and CHASE_DB datasets. The labels of the test images in Tab. 3 are the same as those in the original datasets. This allows researchers to compare the segmentation results of their proposed models on individual test images with our results. The binary segmentation of retinal vessels is obtained by thresholding the probability map, where the threshold value is set to 0.4 for all three datasets. We consider the manual annotations by the first observer as ground truth for the evaluation metrics. The best case (highest f1-score) and the worst case (lowest f1-score) for each dataset are highlighted in green and red, respectively. The average for each of these retinal image databases is presented at the end of the table and is used in later tables for comparison with recent deep learning models from the state-of-the-art. As shown in the table, the variation in the evaluation metrics between the best and the worst case is not high for the DRIVE and CHASE_DB datasets. However, for the STARE dataset, the variation between the best case and the worst case is slightly higher. Figs. 4-6 show the best (top row) and worst (bottom row) case segmentation results for the DRIVE, STARE, and CHASE_DB datasets, respectively.

    Table 3:Performance evaluation of proposed method on individual test images of the three datasets

    Table 3:Continued

    Figure 4:A visual representation of the best- and worst-case segmentation performances of the proposed DNN model for images from DRIVE database

To further investigate the worst-case test image for the STARE dataset, Fig. 7 shows a visual comparison among the manual annotations by the first and second observers and the segmentation probability map obtained by the proposed model. As shown in Fig. 7, the second human observer (c) identified additional vessels around the optic disc region, whereas the first human observer (b) did not identify some thick vessels. Although the model is trained on the ground truth annotations made by the first human observer, it effectively segments the retinal vessels (d) that are not annotated by the first human observer but are annotated by the second human observer, which directly affects the sensitivity value.

    Figure 5:A visual representation of the best- and worst-case segmentation performances of the proposed DNN model for images from STARE database

    Figure 6:A visual representation of the best- and worst-case segmentation performances of the proposed DNN model for images from CHASE_DB database

Figure 7: A visual comparison for STARE (worst case): (a) original fundus image, (b) manual annotation by the first observer, (c) manual annotation by the second observer, (d) probability map of im0004

    4.5 Comparison with the State-of-the-art(Training and Testing on same Database)

In this section, we compare the sensitivity, specificity, accuracy, AUC, f1-score, and MCC of our proposed technique with those of contemporary state-of-the-art methods from the literature. To test the efficacy of the proposed approach, we conducted two separate experiments. Images from the same dataset were used for both training and evaluating the model in the first experiment, whereas cross-database training and testing were used in the second experiment.

Tabs. 5-7 show the values of the aforementioned evaluation metrics, comparing the proposed model with the state-of-the-art methods listed in Tab. 4 on the DRIVE, STARE, and CHASE_DB datasets, respectively. The scores highlighted in green, blue, and red represent the best, second-best, and third-best results, respectively.

    Table 4:List of state-of-the-art methods for comparison with our proposed model

The results in Tab. 5 show that our proposed method achieved the best score in five out of six evaluation metrics for the DRIVE dataset. Our method achieved a sensitivity of 0.8561, which is the highest among the compared methods. Wu et al. [43] used a scale-space approximation network and achieved the second-best sensitivity. However, the computational complexity of their model is very high, with approximately 25 million trainable parameters compared to only 1.01 million in our proposed model. Our previous work [40] achieved the third-best sensitivity and the second-best accuracy among the other methods for the DRIVE dataset. The specificity achieved by Wang et al. [33] is the best of all the methods; however, their sensitivity and f1-score are very low compared to our proposed method. Alom et al. [29] achieved the second-best dice score and the third-best specificity, with a very low sensitivity among the various methods.

Table 5: Comparison of the proposed model with state-of-the-art methods on the DRIVE dataset

Regarding the STARE dataset, we ranked first in five out of six evaluation metrics, as shown in Tab. 6. The proposed method achieved a sensitivity and an accuracy of 0.8581 and 0.9726, respectively, which are the highest among the state-of-the-art methods. Wang et al. [33] ranked first in specificity and third-best in sensitivity; however, their accuracy, f1-score, and MCC are low compared to the other methods. The ResWNet proposed by Tang et al. [35] achieved the second-best specificity, the second-best accuracy, and the third-best AUC. However, they achieved a sensitivity of 0.7551, which is one of the lowest among the compared methods. Our method ranked first in terms of accuracy, f1-score, and MCC, with scores of 0.9726, 0.8233, and 0.8111, respectively, for the STARE dataset.

Table 6: Comparison of the proposed model with state-of-the-art methods on the STARE dataset

Reference | SN | SP | ACC | AUC | F1 | MCC
Vega et al. (2015) [21] | 0.7019 | 0.9671 | 0.9483 | - | 0.6616 | 0.6400
Orlando et al. (2017) [22] | 0.7680 | 0.9738 | - | - | 0.7644 | 0.7417
Mo et al. (2017) [23] | 0.8147 | 0.9844 | 0.9674 | 0.9885 | - | -
Zhou et al. (2017) [25] | 0.8065 | 0.9761 | 0.9585 | - | 0.8017 | 0.7830
Hu et al. (2018) [26] | 0.7543 | 0.9814 | 0.9632 | 0.9751 | - | -
Feng et al. (2020) [27] | 0.7709 | 0.9848 | 0.9633 | 0.9700 | - | -
Abbas et al. (2019) [28] | 0.7940 | 0.9869 | 0.9647 | 0.9885 | - | -
Yan et al. (2018) [30] | 0.7581 | 0.9846 | 0.9612 | 0.9801 | - | -
Jin et al. (2019) [31] | 0.7595 | 0.9878 | 0.9641 | 0.9832 | 0.8143 | -
Wang et al. (2019) [32] | 0.7882 | 0.9729 | 0.9547 | 0.9740 | - | -
Wang et al. (2021) [33] | 0.8230 | 0.9945 | 0.9641 | 0.9620 | 0.7947 | -
Tang et al. (2020) [35] | 0.7551 | 0.9903 | 0.9723 | 0.9863 | - | -
Lv et al. (2020) [38] | 0.7598 | 0.9878 | 0.9640 | 0.9824 | - | -
Khan et al. (2020) [40] | 0.8118 | 0.9738 | 0.9543 | 0.9728 | - | -
Khan et al. (2020) [40] | 0.8397 | 0.9792 | 0.9659 | 0.9810 | - | -
Uysal et al. (2021) [46] | 0.7558 | 0.9811 | 0.9589 | - | - | -
Proposed | 0.8581 | 0.9823 | 0.9726 | 0.9901 | 0.8233 | 0.8111

On the CHASE_DB dataset, we outperformed all other methods in terms of sensitivity, accuracy, AUC, f1-score, and MCC, and ranked second in terms of specificity, as shown in Tab. 7. Our previous work [40] achieved the second-best sensitivity and the second-best accuracy among the other methods for the CHASE_DB dataset. The FCN with the stationary wavelet transform proposed by Oliveira et al. [24] ranked first in specificity; however, their sensitivity is quite low.

Table 7: Comparison of the proposed model with state-of-the-art methods on the CHASE_DB dataset

Reference | SN | SP | ACC | AUC | F1 | MCC
Orlando et al. (2017) [22] | 0.7277 | 0.9712 | - | - | 0.7332 | 0.7046
Mo et al. (2017) [23] | 0.7661 | 0.9816 | 0.9599 | 0.9812 | - | -
Oliveira et al. (2018) [24] | 0.7779 | 0.9864 | 0.9653 | 0.9855 | - | -
Zhou et al. (2017) [25] | 0.7553 | 0.9751 | 0.9520 | - | 0.7644 | 0.7398
Alom et al. (2018) [29] | 0.7756 | 0.9820 | 0.9634 | 0.9815 | 0.7928 | -
Yan et al. (2018) [30] | 0.7633 | 0.9809 | 0.9610 | 0.9781 | - | -
Jin et al. (2019) [31] | 0.8155 | 0.9752 | 0.9610 | 0.9804 | 0.7883 | -
Wang et al. (2021) [33] | 0.8035 | 0.9787 | 0.9639 | 0.9832 | - | -
Zhang et al. (2021) [34] | 0.8235 | 0.9711 | 0.9559 | 0.9767 | - | -
Lv et al. (2020) [38] | 0.8176 | 0.9704 | 0.9608 | 0.9865 | 0.7892 | -
Zhuang et al. (2018) [39] | 0.7978 | 0.9818 | 0.9656 | 0.9818 | 0.8031 | -

    Table 7:Continued

From the aforementioned tables, it can be inferred that no method other than our proposed method achieved a best score in more than two metrics. We note that our method ranked first among the other state-of-the-art methods in terms of sensitivity, accuracy, AUC, f1-score, and MCC for all three datasets. Also, to the best of our knowledge, we are the first to report sensitivity, specificity, accuracy, AUC, f1-score, and MCC values above 0.856, 0.977, 0.967, 0.986, 0.818, and 0.802, respectively, for all three datasets.

    4.6 Comparison with the State-of-the-art(Cross-database Training and Testing)

The previous findings demonstrate how the various approaches behave when segmenting the vascular structure in the most favorable situation: the methods were trained and tested using similar data from images in the same database. However, in a real-world scenario, the method must show robustness and generalization on retinal images with high variability, i.e., the acquisition device may belong to a different manufacturer, or the acquired images may come from a wide variety of patients. It is not feasible to retrain the model every time a new retinal fundus image is available for segmentation. Thus, to obtain a more realistic performance evaluation of the proposed method, we perform cross-database training and testing on the DRIVE and STARE datasets.

The cross-database training segmentation results of the proposed method in comparison with other state-of-the-art methods are shown in Tab. 8. The first part of the table presents the evaluation metric scores on test images from the STARE dataset when the proposed model is trained on DRIVE images. Our method ranked first in four out of six evaluation metrics. The sensitivity and accuracy of our model went down from 0.8581 and 0.9726 to 0.8456 and 0.9639, respectively, for the STARE dataset, compared to Mo et al. [23], whose sensitivity and accuracy went down from 0.8147 and 0.9674 to 0.7009 and 0.9570, respectively. Similarly, the f1-score and MCC of our model fell from 0.8233 and 0.8111 to 0.7641 and 0.7770, respectively, compared to Zhou et al. [25], whose f1-score and MCC went down from 0.8017 and 0.7830 to 0.7547 and 0.7334, respectively.

The second part of Tab. 8 shows the evaluation metric scores on test images from the DRIVE dataset when the proposed model is trained on STARE images. Our method ranked first in four out of six evaluation metrics, second in terms of sensitivity, and third in terms of specificity. Zhou et al. [25] ranked first in sensitivity with an average score of 0.7673, whereas our method achieved a sensitivity of 0.7651 for the DRIVE dataset. The cross-training results for the model trained on the STARE dataset and tested on DRIVE images resulted in very low sensitivity for the work presented in [24], which limits the generalization ability of their model. The specificity of our model went up from 0.9777 to 0.9867, compared to [25], whose specificity went up from 0.9674 to 0.9703. In comparison to [25], whose f1-score and MCC dropped from 0.7942 and 0.7656 to 0.7770 and 0.7474 for the DRIVE dataset, our model's f1-score and MCC fell from 0.8184 and 0.8022 to 0.8015 and 0.7863, respectively. We observed that when the model is trained on STARE and tested on DRIVE, it recognizes fewer thin vessels, resulting in a decrease in sensitivity. In contrast, because the DRIVE database generally has more annotated thin vessels than STARE, the sensitivity increased significantly when training on DRIVE and testing on STARE.

Table 8: Cross-training segmentation result comparison of the proposed method with other state-of-the-art methods

We perform an additional experiment in which the models pre-trained on the DRIVE and STARE datasets are tested on the CHASE_DB dataset. As shown in Tab. 8, the model trained on the DRIVE dataset achieved higher evaluation metric scores than the model trained on the STARE dataset. The DRIVE dataset has more annotated thin vessels than the STARE dataset; therefore, the sensitivity of the proposed model is higher.

Furthermore, we evaluated the proposed model using ROC curves, as shown in Fig. 8. The AUC values obtained for the test images of the DRIVE, STARE, and CHASE_DB datasets were 0.9869, 0.9901, and 0.9906, respectively. The proposed deep learning model achieved an AUC value higher than 0.98 for all three datasets, demonstrating its generalizability. This also indicates that the segmentation probability map obtained by the proposed method is very close to the ground truth.

    Figure 8:The receiver operating characteristic curve based on the test images of DRIVE,STARE and CHASE_DB databases

    5 Discussion

Over the last few years, numerous methods have been proposed to improve the segmentation accuracy of retinal vessels. However, a few challenges still need the researchers' attention, such as the presence of the central vessel reflex, lesions, and low contrast. A robust deep learning method for retinal vessel segmentation should handle the aforementioned challenges. In this work, we investigate such challenging scenarios to compare the segmentation of the proposed method with the manual annotations. Fig. 9a shows the presence of central reflex (a light streak that runs along the central length of a vessel) in the retinal fundus image. We extract a patch of retinal vessels with the central reflex problem, as shown in the second column of Fig. 9a. The third and fourth columns of Fig. 9 are the corresponding manual annotation by the first observer and the segmentation probability map obtained by the proposed model, respectively. As shown in the figure, the proposed method segments the complete retinal vessels with high probability values. Fig. 9b shows the presence of a bright lesion in the fundus image, where the second column shows the enlarged patch. By looking at the segmentation probability map (fourth column), it can be inferred that the proposed model segmented the retinal vessels correctly and that there are no false positives caused by the presence of the bright lesion. The proposed encoder-decoder structure along with the DSS block learns better discriminative attributes, especially for retinal vessels in low-contrast regions, while also capturing the non-vascular structure, as shown in Fig. 9c. Moreover, it effectively segments the small blood vessels that have not been annotated by the experts. In summary, our proposed method is robust and effective in dealing with the central vessel reflex, bright lesions, and low-contrast challenging scenarios.

In another experiment, we select two pathological images, im0005 and im0044, from the STARE dataset, as shown in Figs. 10a and 10b, respectively. The top row (for im0005) shows a visual comparison of our proposed model with [22] as well as the first and second manually graded images, whereas the corresponding comparison for im0044 with [49] is shown in the bottom row of Fig. 10. The segmentation for im0005 obtained using the FC-CRF method produces a large number of false positives around the optic cup region, whereas the segmentation result of the proposed method is close to the vessels identified by the first human observer. The second human observer identified additional vessels in that region, whereas the first human observer (which is considered the ground truth) did not identify any vessels there, which affects the sensitivity. It can be observed that the segmentation of the pathological images using the proposed method is close to the ground truth annotations. Despite the fact that the model is trained on the first human observer's ground truth annotations, it efficiently segments the retinal vessels that are not annotated by the first human observer but are annotated by the second human observer.

Figure 9: Exemplar results of the proposed deep learning model on challenging cases: (a) central reflex vessels, (b) bright lesions, (c) low contrast. From left to right: retinal fundus image, an enlarged patch of the fundus image, the corresponding ground truth annotation, and the predicted probability maps

Figure 10: A visual comparison between different retinal vessel segmentation methods on serious pathological images from the STARE dataset. (a) Image im0005, (b) Image im0044. From left to right: retinal fundus image, segmentation obtained using the FC-CRF model [22] (top) or the Dense FCN model [49] (bottom), segmentation obtained using the proposed method, first human observer annotations, and second human observer annotations

Tab. 9 shows that the proposed model is lightweight, with only 1.01 million trainable parameters. Jin et al. [31] used a deformable U-Net with 0.88 million trainable parameters. However, for the STARE dataset, their method achieved a sensitivity of 0.7595, which is significantly lower than our 0.8581. Furthermore, the robustness and generalization capabilities of their model have not been tested using cross-database training. The Ladder-Net model proposed by Zhuang et al. [39] is also lightweight, with only 1.38 million parameters. However, none of the evaluation metric scores of their segmentation results were among the top three best methods. Wang et al. [32] used a dense U-Net model with approximately four million parameters. Their method achieved low sensitivity for the CHASE_DB dataset.

    Table 9:Parameters comparison of the proposed model with other best models from literature

    Table 9:Continued

The above findings demonstrate that our proposed method outperforms state-of-the-art methods for retinal vascular segmentation. In addition to being lightweight, the network's effectiveness has been tested on serious pathological images as well as on challenging cases with central reflex problems, bright lesions, and low contrast.

    6 Conclusion

We developed a lightweight CNN-based encoder-decoder architecture with an anamorphic depth block for retinal vessel segmentation. We modified the original Anam-Net model by using a stack of two convolution layers to increase the receptive field and by keeping the number of filters fixed as we go deeper into the network to lower the computational complexity. The performance of the network has been extensively assessed on retinal images from the DRIVE, STARE, and CHASE_DB datasets. The results show that our model outperforms state-of-the-art methods for segmenting retinal vessels on all three datasets. For effective generalization of the obtained results, we assessed the performance of the developed model using a cross-database training and testing strategy, which is more realistic but highly challenging at the same time. Our results indicate that even for cross-database training and testing, we achieved significantly better results compared to other state-of-the-art deep learning models. The proposed architecture has 4.43 times fewer parameters and 1.37 times lower memory requirements compared to the original Anam-Net model. The advantages of being highly robust, reliable, and efficient in terms of segmentation accuracy, in addition to being lightweight, make the proposed deep learning model an ideal candidate for deployment in computationally constrained computing facilities at the point of care. In the future, we aim to appropriately adapt the developed model to other biomedical imaging applications.

Acknowledgement: The authors acknowledge the technical support provided by the technical team as well as access to the high-performance computing resources (Ibex) of the KAUST Supercomputing Laboratory (KSL) at King Abdullah University of Science and Technology, Jeddah, KSA.

Funding Statement: The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia, for funding this research work through project number (DRI-KSU-415).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
