
    Automatic and Robust Segmentation of Multiple Sclerosis Lesions with Convolutional Neural Networks

Computers, Materials & Continua, 2021, Issue 1

H.M. Rehan Afzal, Suhuai Luo, Saadallah Ramadan, Jeannette Lechner-Scott, Mohammad Ruhul Amin, Jiaming Li and M. Kamran Afzal

1 School of Electrical Engineering and Computing, University of Newcastle, Callaghan, NSW 2308, Australia

2 Hunter Medical Research Institute, New Lambton Heights, NSW 2305, Australia

3 Department of Neurology, John Hunter Hospital, New Lambton Heights, NSW 2305, Australia

4 CSIRO Data61, Marsfield, NSW 2122, Australia

5 School of Computer Science and Technology, Xiamen University, Xiamen, 361005, China

Abstract: The diagnosis of multiple sclerosis (MS) is based on accurate detection of lesions on magnetic resonance imaging (MRI), which also provides essential ongoing information about the progression and status of the disease. Manual detection of lesions is very time consuming and lacks accuracy; most lesions are difficult to detect manually, especially within the grey matter. This paper proposes a novel, fully automated convolutional neural network (CNN) approach to segment lesions. The proposed system consists of two 2D patch-wise CNNs which segment lesions accurately and robustly: the first CNN segments lesions, and the second reduces the false positives to increase efficiency. The system consists of two parallel convolutional pathways, where one pathway is concatenated to the second and, at the end, the fully connected layer is replaced with a CNN. Three routine MRI sequences, T1-w, T2-w and FLAIR, are used as input to the CNN, where FLAIR is used for segmentation because most lesions appear as bright regions on MRI, and T1-w and T2-w are used to reduce MRI artifacts. We evaluated the proposed system on two publicly available challenge datasets from MICCAI and ISBI. Quantitative and qualitative evaluation was performed with metrics including the false positive rate (FPR), true positive rate (TPR) and Dice similarity, and compared with current state-of-the-art methods. The proposed method shows consistently higher precision and sensitivity than other methods and can accurately and robustly segment MS lesions from images produced by different MRI scanners, with a precision of up to 90%.

Keywords: Multiple sclerosis; lesion segmentation; automatic segmentation; CNN; automated tool; lesion detection

    1 Introduction

Multiple sclerosis (MS) is a common inflammatory neurological condition affecting the central nervous system (brain and spinal cord). It results in demyelination and axonal degeneration, predominantly in the white matter of the brain [1]. Symptoms vary greatly from patient to patient, with common symptoms including weakness, balance issues, depression, fatigue, or visual impairment. Depending on the location of the inflammation, called plaques, different symptoms arise. These plaques can be detected by magnetic resonance imaging (MRI) but not by computed tomography (CT). MRI is not only used for diagnosis but is also considered the best tool to monitor disease progression; yearly MRI is nowadays considered standard of care. Detection rates of new lesions vary between radiologists from 64% to 82% [2]. As current MRI technologies detect only 30% of the actual pathology [3], current research is focused on improving MRI techniques and analysis to detect lesions more accurately.

Radiologists use T1-w, T2-w and FLAIR sequences to detect inflammatory lesions and axonal damage, but sensitivity depends on slice thickness and the reading is laborious and time-consuming. T1-w, T2-w and FLAIR are different MRI pulse sequences acquired with different relaxation times. Automated segmentation and detection algorithms can overcome these issues. Significant advances have been made in the segmentation of medical images with traditional machine learning techniques [4,5], and over the last few years advanced deep learning techniques have made significant progress in segmentation, detection, and recognition tasks. Several methods have been described for the automatic detection and segmentation of MS lesions in MRI [6–8]. Online challenges for the segmentation of MS lesions, such as the International Symposium on Biomedical Imaging (ISBI) [9] and Medical Image Computing and Computer Assisted Intervention (MICCAI) [10] challenges, provide a platform for researchers to showcase their innovations. These challenges not only provide MRI datasets but also the opportunity to compare different automated segmentation algorithms on the same cohorts. The need for such algorithms has arisen from the limited human capacity to analyse the large number of clinical images and the prohibitive increase in health care costs. For example, when a single MS patient has more than 150 brain slices and all slices contain several lesions, it is nearly impossible to detect all lesions accurately by hand. Machine learning and deep learning techniques can improve the lesion detection rate and minimise analysis time, irrespective of the number of slices and lesions.

To address these issues, several algorithms claiming good efficiency for the segmentation of MS lesions have been proposed. These algorithms can be categorized into two main types, supervised and unsupervised [11]. A detailed literature review indicates that supervised methods are more favoured and have an edge over unsupervised methods for several reasons [12]. Unsupervised methods are not very popular for medical segmentation, although they have shown some promising results. They mostly depend on the intensity of MR brain images, where high intensities are considered outliers. Garcia-Lorenzo et al. [13] published a specific example of such an unsupervised method that uses intensity distributions. Among other unsupervised methods, Roura et al. [14] proposed a thresholding algorithm and Strumia et al. [15] presented a probabilistic algorithm. Tomas-Fernandez et al. [16] argued that additional information about the intensity distribution, together with the expected location of normal tissue, could help to outline lesions more precisely. Sudre et al. [17] proposed an unsupervised framework in which no prior knowledge is needed to differentiate between the patterns of different abnormal images; they were able to detect abnormal clusters, i.e., lesions, on clinical and simulated data, although segmentation was restricted to white matter lesions.

Supervised methods use templates consisting of MR images with lesions manually segmented by qualified radiologists. One of the best examples is Valverde et al. [18], who proposed a cascaded network of two CNNs; our proposed algorithm also follows this principle of cascaded CNNs. Han et al. [19] proposed two deep neural networks, each trained on mini-batches, where the two networks communicated with each other to decide which mini-batch should be used for training. They used different image datasets such as CIFAR-10, CIFAR-100, and MNIST to check the robustness of their network. A similar approach was proposed by Zhang et al. [20], called deep mutual learning (DML): instead of a one-way transfer from a static teacher to students, the students collaboratively learn from and teach each other during training. Valcarcel et al. [21] proposed an automatic lesion segmentation algorithm that uses covariance features from regression; they also took part in a segmentation challenge, reporting a Dice similarity coefficient (DSC) of 0.57 with a precision of 0.61. Jain et al. [22] proposed an automated algorithm that segments white matter lesions as well as white matter, grey matter, and cerebrospinal fluid (CSF); their method depends on prior knowledge of the appearance and location of lesions. Deshpande et al. [23] proposed a supervised method based on healthy brain tissues and dictionary learning. They also used the complete brain, including CSF, grey matter and white matter, and claimed that the dictionary learning technique performed well on lesion and non-lesion patches; for every class, their method automatically adapted the dictionary size to the complexity. Another supervised approach was proposed by Roy et al. [24]: a 2D patch-based CNN with two pathways that accurately and robustly segmented white matter lesions. After the convolutional pathway, they did not use a fully connected layer but instead used another pathway of convolutional layers to predict the membership function, which they claimed was much faster than a fully connected layer. Brosch et al. [25] suggested an approach to segment the whole brain with 3D CNNs, and Hashemi et al. [26] presented a novel method using a 3D patch-wise CNN based on a densely connected network. One of the latest and best methods in this group is that of Valverde et al. [27], who published two papers on lesion segmentation. In the first paper, they proposed a 3D patch-based approach using two convolutional networks: the first network found possible lesions, whereas the second was trained to remove misclassified voxels produced by the first. In the second paper, they examined the effect of the intensity domain on their previously proposed CNN-based method. They won the ISBI segmentation challenge. All the methods described in this section used the same measures, DSC, precision, and sensitivity, to assess accuracy; they are compared with the proposed method in Tab. 4.

The rapid development of deep learning, especially CNNs, has revolutionized progress in the medical field. CNNs are helping with segmentation, detection and even prediction of diseases [28]. Contrary to traditional machine learning techniques that require handcrafted features, deep learning techniques can learn features by themselves [29] and fine-tune themselves according to the input data, which is a remarkable achievement, and these methods achieve excellent accuracy. The literature on deep learning for medical imaging is accumulating rapidly. CNNs offer two main advantages. First, they do not need handcrafted features and can handle 2D and 3D patches, learning features automatically. Second, convolutional neural networks can handle very large datasets within a limited time span. This is due to advances in graphics processing units (GPUs), which allow algorithms to be trained in a fraction of the time. CNNs, with the help of GPUs, have therefore remarkably advanced the medical field in solving complex problems.

Motivated by the accuracy and achievements of deep learning, we propose a novel deep learning-based architecture that segments MS lesions accurately and robustly. The proposed method follows the principle of two cascaded convolutional neural networks: the first CNN finds the possible lesions and the second CNN removes false positives, giving better results in terms of both accuracy and speed. Our algorithm uses parallel convolutional pathways in which the fully connected (FC) layer is replaced with convolutional layers, similar to the approach of Ghafoorian et al. [30]. Replacing the FC layer with convolutional layers not only increases speed but also improves accuracy. The speed gain comes from removing the FC layer, which consumes memory, and from avoiding the question of how many FC layers are needed at the output. Several recent articles [31–37] may be consulted for a better understanding of deep learning techniques.

In the proposed algorithm, three MRI sequences, T1-w, T2-w, and FLAIR, are used as input to the CNNs and then concatenated; T1-w and T2-w are used only to remove MRI artifacts. Fig. 1 shows the general diagram of the proposed method. A detailed description of the proposed architecture is provided in Section 3.2 and illustrated in Fig. 2.

Figure 1: The general structure of the proposed method. The first CNN consists of 6 convolutional layers with a decreasing number of filters from 256 to 8. The filters alternate between 5 × 5 and 3 × 3 kernels; for example, the first layer has 256 filters of size 5 × 5, the next has 128 filters of size 3 × 3, and so on. Three sequences are used as inputs: T1-w, T2-w, and FLAIR. The second CNN is used as a parallel pathway to reduce the false positives

    2 Data/Material

For the evaluation of the proposed method, two publicly available datasets are used. These two datasets, ISBI and MICCAI, are also used for challenge purposes. The ISBI dataset consists of 82 scans in total, where 21 scans from 5 subjects are available for training and are already pre-processed with several steps, namely skull stripping, denoising, bias correction, and co-registration. Four of these subjects have 4 time points and one subject has 5 time points, with a gap of approximately one year between time points. These 21 scans are provided for training purposes only. For testing, 61 scans from 14 subjects are provided.

For all scans, three different sequences, T1-w, T2-w, and FLAIR, are provided along with manual expert annotations. The second dataset is composed of 45 scans, of which 20 training scans were acquired at the University of North Carolina (UNC) on a 3T Siemens Allegra scanner and at Children's Hospital Boston (CHB) on a 3T Siemens scanner; 25 scans are provided for testing. For a better understanding of what was used for evaluation, these details are tabulated in Tab. 1.

Figure 2: Detailed illustration of the internal structure of the proposed method shown earlier in Fig. 1. At the input, two datasets are used (ISBI and MICCAI). 2D patches are obtained from all three sequences (T1-w, T2-w, and FLAIR) and then passed through the convolutional pathway of 6 layers with a decreasing number of filters: 256, 128, 64, down to 8. Each layer uses either 5 × 5 or 3 × 3 filters, alternating; for example, the first layer has 256 filters of size 5 × 5, the next 128 filters of size 3 × 3, and so on

Table 1: ISBI and MICCAI datasets

    3 Method

    3.1 Preprocessing

Both datasets were provided already pre-processed; the pre-processing included skull stripping, image denoising, intensity normalization and image registration. Skull stripping was performed using a Brain Extraction Tool (BET) by Salehi et al. [38], while intensity normalization was implemented with N3 intensity normalization by Leger et al. [39]. Image denoising was performed with Gaussian pre-smoothing filters. Rigid registration was performed using the Functional Magnetic Resonance Imaging of the Brain (FMRIB) Linear Image Registration Tool (FLIRT).
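As an illustrative, hedged sketch only (the challenge data are distributed already pre-processed, so this is not part of the proposed pipeline), the equivalent steps could be scripted with the FSL command-line tools bet and flirt plus a Gaussian pre-smoothing filter; all file names below are hypothetical placeholders.

```python
# Hypothetical pre-processing sketch: skull stripping (FSL BET), Gaussian
# denoising, and rigid registration onto the FLAIR image (FSL FLIRT).
# The challenge datasets arrive already pre-processed; this is illustration only.
import subprocess
import nibabel as nib
from scipy.ndimage import gaussian_filter

def preprocess(t1_path, flair_path):
    # Skull stripping with FSL's Brain Extraction Tool
    subprocess.run(["bet", t1_path, "t1_brain.nii.gz", "-f", "0.5"], check=True)

    # Gaussian pre-smoothing as a simple denoising step
    img = nib.load("t1_brain.nii.gz")
    smoothed = gaussian_filter(img.get_fdata(), sigma=0.5)
    nib.save(nib.Nifti1Image(smoothed, img.affine), "t1_denoised.nii.gz")

    # Rigid (6 degrees of freedom) registration of T1-w onto FLAIR space
    subprocess.run(["flirt", "-in", "t1_denoised.nii.gz", "-ref", flair_path,
                    "-out", "t1_reg.nii.gz", "-dof", "6"], check=True)
```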

    3.2 CNN Architecture

Before training, all T1-w, T2-w, and FLAIR images were converted into 2D patches (p × p). The benefit of using 2D patches is speed: results show that 2D patches are considerably faster and more robust than 3D patches, which need more memory and therefore slow processing down. According to the ISBI dataset description, lesions occupy about 1% of the total brain volume, so we used large patches such as 25 × 25 or 35 × 35. This helps mitigate the class-imbalance issue, which is handled better with 2D patches, and according to Ghafoorian et al. [30], larger patches produce more accurate results. Each 2D patch has size p in both dimensions, and the patches are stacked into an array of shape (m × p × p), where m is the number of input modalities (m = 3 in our case).
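To make the patch construction concrete, the following sketch (with hypothetical array names, assuming co-registered 3D volumes) extracts a p × p patch centred on a chosen voxel from each of the three modalities and stacks them into an (m × p × p) array with m = 3.

```python
import numpy as np

def extract_patch(t1, t2, flair, row, col, slice_idx, p=25):
    """Stack a p x p patch from each modality into an (m, p, p) array (m = 3).

    t1, t2 and flair are co-registered 3D volumes; (row, col, slice_idx) is
    the centre voxel of the patch within the chosen axial slice.
    """
    half = p // 2
    patches = []
    for vol in (t1, t2, flair):
        sl = vol[:, :, slice_idx]
        patch = sl[row - half:row + half + 1, col - half:col + half + 1]
        patches.append(patch)
    return np.stack(patches, axis=0)   # shape: (3, p, p)
```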

Due to the increased popularity of cascaded convolutional neural networks, the proposed method was also implemented with such networks. The first network finds the possible true positives of the required lesions; the second network then refines the results of the already segmented lesions and decreases the false positives. This can be seen in Fig. 4, where (b) shows the manual segmentation of lesions, (c) shows the results of the first network, and (d) shows the results from the second network; comparing (c) and (d), it is clear that the second network reduces the false positives. After constructing the 2D patches, they are passed through convolutional filter banks, and then, instead of a fully connected layer, further convolutional filter banks are used to predict the membership function. The reason for not using an FC layer is that it is unclear how many layers would be needed for prediction, and the system might become too complex. We applied the approach of GoogLeNet and the more recently proposed ResNet by He et al. [40], which use fully convolutional layers instead of an FC layer. Traditional CNNs use a fully connected (FC) layer to predict the membership probability, but the proposed method uses convolutional filter banks for several reasons. The main reason is parameter complexity: it is usually unclear how many parameters are needed to handle the features from the preceding convolutional filters, which can leave unused free parameters and may result in overfitting.

Figure 3: Qualitative results for the ISBI dataset

The second reason for not using an FC layer is prediction time, even on a GPU: a single scan has hundreds of slices, and when these slices are converted to patches, the result is an enormous number of patches to evaluate, so avoiding the FC layer reduces the prediction time. The internal architecture of the proposed method is shown in Fig. 2. It consists of 6 convolutional layers, each followed by max-pooling and a rectified linear unit (ReLU). The objective of max-pooling is to reduce the dimensions of the output and discard unwanted features, making the system fast; ReLU is used because it is a computationally fast activation function. The 6 convolutional layers use filters of different sizes: the first layer has 256 filters of size 5 × 5, the next has 128 filters of size 3 × 3, and the following layers have a decreasing number of filters, 64 filters of size 5 × 5, 32 filters of size 3 × 3, 16 filters of size 5 × 5, and finally 8 filters of size 3 × 3.
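A minimal Keras sketch of one such six-layer pathway is given below. It follows the filter counts (256, 128, 64, 32, 16, 8) and the alternating 5 × 5 / 3 × 3 kernels described above, but the 'same' padding, the pooling placement, and the 1 × 1 convolution head with global average pooling are assumptions made so that the sketch runs on 25 × 25 patches; it is not the exact published configuration.

```python
from tensorflow.keras import layers, models

def conv_pathway(p=25, m=3):
    """Sketch of one convolutional pathway: six conv layers with ReLU and
    filter counts 256, 128, 64, 32, 16, 8, alternating 5x5 / 3x3 kernels.
    The (m, p, p) patches are assumed transposed to channels-last (p, p, m);
    padding and pooling placement are assumptions of this sketch, chosen so
    that 25 x 25 patches keep a valid spatial size."""
    inp = layers.Input(shape=(p, p, m))
    x = layers.Conv2D(256, 5, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)                       # 25 -> 12
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(64, 5, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)                       # 12 -> 6
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(16, 5, padding="same", activation="relu")(x)
    x = layers.Conv2D(8, 3, padding="same", activation="relu")(x)
    # A 1x1 convolution plus global average pooling stands in for the
    # fully connected layer, yielding a per-patch lesion probability.
    x = layers.Conv2D(1, 1, activation="sigmoid")(x)
    out = layers.GlobalAveragePooling2D()(x)
    return models.Model(inp, out)
```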

Figure 4: False positive comparison

After convolving the 2D patches with these 6 convolutional layers, the outputs are concatenated. The concatenated output is then passed through a further, parallel-implemented convolutional filter bank for prediction. Here we used small filter sizes, 3 × 3 and 5 × 5, for the convolutions: small filters are preferable to large ones because they help find the exact boundaries of lesions and train well for segmentation. At the input, we constructed 2D patches from the three available modalities to speed up the process and improve training. The patch size is denoted by p; we varied p and gathered results. In our experiments, we started with small patches (p = 5) and then increased the patch size; the outcome improved once the size reached p = 25.

Valverde et al. [27] use small patches and still obtain good results because they use a balanced training dataset, consisting of voxels both with and without lesions. This gives good results for smaller patches, but the drawback is that it consumes more time and is computationally expensive. Roy et al. used large patches, and we also adopted large patches since we do not need balanced training. Large patches cover a large area containing voxels both with and without lesions. We use only those large patches whose center voxel carries a lesion label, i.e., all selected lesion patches have a lesion label at the center voxel (see the sketch below). This not only speeds up the system but also makes it applicable to real applications due to the reduced computation.
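A hedged sketch of this patch-selection rule is shown below, assuming the lesion mask is a binary 3D array aligned with the FLAIR volume; only voxels labelled as lesion, and far enough from the image border for a full patch, are kept as patch centres.

```python
import numpy as np

def lesion_centered_centres(lesion_mask, p=25):
    """Return (row, col, slice) coordinates of candidate patch centres:
    only voxels labelled as lesion, far enough from the border that a full
    p x p patch fits. lesion_mask is a binary 3D array (H, W, slices)."""
    half = p // 2
    rows, cols, slices = np.nonzero(lesion_mask)
    keep = ((rows >= half) & (rows < lesion_mask.shape[0] - half) &
            (cols >= half) & (cols < lesion_mask.shape[1] - half))
    return list(zip(rows[keep], cols[keep], slices[keep]))
```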

Therefore, we tried patch sizes p = 20, p = 25, and p = 35 and obtained good results, as reported in the evaluation in Section 4. Three modalities, T1-w, T2-w, and FLAIR, are taken as input. The reason for using three modalities is that FLAIR is mainly used for segmentation, whereas the other two modalities, T1-w and T2-w, are used to reduce artifacts and help reduce false positives, as shown in Fig. 4. The comparison of one modality versus three modalities is shown in Fig. 5: when three modalities are used, MRI artifacts are greatly reduced. The exact implementation details are discussed in Section 4.1.

    3.3 Evaluation Metrics

For evaluation, two datasets, ISBI and MICCAI, have been used, and the proposed method is compared with different state-of-the-art methods, described in detail in Section 4.4. The evaluation is performed with the metrics described below by comparing the performance against human experts. The metrics are sensitivity, precision and the Dice similarity coefficient.

    3.3.1 Sensitivity

The sensitivity of the method is calculated as the lesion true positive rate (LTPR) between the automated segmentation and the manual lesion annotations:

LTPR = Tp / (Tp + Fn)

where Tp denotes the correctly segmented lesions (true positives) and Fn denotes the false negatives, i.e., missed lesion region candidates.

Figure 5: Comparison between 1 modality and 3 modalities. (a) Original image. (b) 1 modality. (c) 3 modalities

    3.3.2 Precision

Precision measures the fraction of automatically segmented lesions that agree with the manually annotated lesions; it is the complement of the false discovery rate and is closely related to the lesion false positive rate (LFPR). It is expressed as

Precision = Tp / (Tp + Fp)

where Fp denotes the false positives, i.e., regions incorrectly classified as lesions.

    3.3.3 DSC

The overall segmentation accuracy in terms of the Dice similarity coefficient (DSC) between the automated segmentation masks and the manually annotated lesion areas is defined as

DSC = 2 Tp / (2 Tp + Fp + Fn)
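The three metrics can be computed directly from binary segmentation masks. The sketch below uses voxel-wise counts purely for illustration; the challenge evaluation also reports lesion-wise versions of these rates.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Voxel-wise sensitivity, precision and DSC from binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    sensitivity = tp / (tp + fn)          # LTPR analogue
    precision = tp / (tp + fp)
    dsc = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, precision, dsc
```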

    4 Results

    4.1 Implementation Details

The proposed method is implemented in Python with Keras and TensorFlow, which are open source and provide comprehensive machine-learning libraries. Results were obtained after 20 epochs. A 2.7 GHz Intel Xeon Gold (E5-6150) processor was used together with an Nvidia GPU with 32 GB of memory. The data were divided into a training set and a validation set of 80% and 20%, respectively. The best results were obtained by running 20 epochs of training with a learning rate of 0.0001 and the Adam optimizer of Kingma et al. [41]. We used early stopping with patience = 10; in our case, the best results were observed at 20 epochs. The batch size was set to 128. Training took 2 h and 16 min, and testing, i.e., segmenting the lesions from an unseen image, took just 15 s on average.
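A hedged Keras sketch of this training configuration (Adam with learning rate 0.0001, early stopping with patience 10, batch size 128, an 80/20 train/validation split, and up to 20 epochs) is given below; the binary cross-entropy loss and the variable names are assumptions, since the paper does not state them.

```python
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping

# 'model' is the CNN and (patches, labels) the 2D patch dataset;
# binary cross-entropy is an assumption, not stated in the paper.
def train(model, patches, labels):
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    early_stop = EarlyStopping(monitor="val_loss", patience=10,
                               restore_best_weights=True)
    return model.fit(patches, labels,
                     validation_split=0.2,   # 80/20 train/validation split
                     epochs=20, batch_size=128,
                     callbacks=[early_stop])
```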

    4.2 MICCAI Dataset

The MICCAI dataset provides the three sequences, T1-w, T2-w, and FLAIR, required by the proposed algorithm. These modalities are input to the proposed CNN pathways, and results were obtained using the evaluation metrics of Section 3.3. The proposed method has two CNN pipelines. The dataset is first converted into patches before being input to the proposed CNNs, which increases the speed of training and validation. These patches are then split 80:20 between the training and validation sets, i.e., 80% of all patches were used for training and the remaining 20% for validation. For example, with a patch size of 25 × 25, the total number of generated patches was 242,775, of which 194,220 were used for training and 48,555 for validation. Tab. 2 shows the test results on MICCAI using the LTPR and LFPR metrics; the results of the first three scans (MICCAI 01, 02 and 03) are shown below.

Table 2: MICCAI dataset evaluation results using the LTPR and LFPR metrics

    4.3 ISBI Dataset

From the ISBI dataset, the results of the first 3 scans (ISBI 01, ISBI 02 and ISBI 03) are shown in Tab. 3. The ISBI results are evaluated both qualitatively and quantitatively. Qualitative results are shown in Fig. 3 and quantitative results in Tab. 3. The qualitative results in Fig. 3 show the original image, the manually segmented image, the proposed automatically segmented image, and the overlap of the manual and automatic segmentations. The results show that the proposed method agrees well with the manual segmentation.

    4.4 Comparison

In this section, different state-of-the-art methods are compared with the proposed method. All of these methods used the same ISBI dataset and the same evaluation metrics described in Section 3.3. A benchmark for the dataset is also provided on the challenge website to compare results; these benchmarks were manually annotated by experts in the field of MS. The methods are evaluated using metrics such as DSC, sensitivity, and precision, and the reported values are the mean over all results obtained with the proposed method. The quantitative comparison of the proposed method with the top-ranked existing methods is shown in Tab. 4; the values were extracted from the challenge website or from the related publications and are considered the best to date for lesion segmentation. The proposed method achieves a higher DSC than all existing methods, where DSC is also referred to as the overall segmentation accuracy on the ISBI challenge website. Qualitative comparisons with the manually annotated lesions provided by ISBI are shown in Figs. 3, 4 and 7. To examine the trade-off between sensitivity and precision, a receiver operating characteristic (ROC) curve was drawn for the proposed CNN approach on the ISBI dataset, shown in Fig. 6.
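A ROC curve such as the one in Fig. 6 can be reproduced from the network's per-voxel probabilities; the minimal scikit-learn sketch below assumes hypothetical flattened arrays of ground-truth labels (truth) and predicted lesion probabilities (probs).

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

def plot_roc(truth, probs):
    """truth: flattened binary ground-truth labels; probs: predicted
    lesion probabilities for the same voxels."""
    fpr, tpr, _ = roc_curve(truth, probs)
    plt.plot(fpr, tpr, label=f"AUC = {auc(fpr, tpr):.3f}")
    plt.plot([0, 1], [0, 1], "k--", linewidth=0.8)   # chance line
    plt.xlabel("1 - specificity (false positive rate)")
    plt.ylabel("Sensitivity (true positive rate)")
    plt.legend()
    plt.show()
```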

Table 3: ISBI dataset evaluation results using the LTPR and LFPR metrics

Table 4: Comparison of the proposed method with state-of-the-art methods on the ISBI dataset

Figure 6: ROC curve between sensitivity and specificity

The results show that the proposed method performs excellently not only in precision but also in speed and robustness, as testing takes only 15 seconds on average for automatic segmentation of MS lesions. Some scans have a smaller lesion load and some a larger one, where lesion load refers to the volume or number of lesions; the average time is computed over all available test scans. In Fig. 7 the results are demonstrated qualitatively and compared with the ground truth, i.e., the manually segmented scans. The first row of Fig. 7 shows the results of the first tested scan compared with the manually segmented mask: (a) shows the original FLAIR image, (b) the manual mask or ground truth, and (c) the proposed automatic method, where red denotes true positives, green false positives, and blue false negatives. Row 2 shows another scan from the test dataset: (d) is the original FLAIR image, (e) the manual segmentation, and (f) the proposed automatic method. This qualitative comparison makes clear that the proposed method segments lesions automatically and very accurately.

    4.5 Robustness

To check the robustness of the proposed algorithm, the same architecture was trained on two different datasets, ISBI and MICCAI, which come from different scanners. As mentioned in Section 2, three different MRI scanners (Siemens Aera 1.5T, Siemens Verio 3T, and Philips Ingenia 3T) were used for these two datasets, yet the results are promising, as shown in Tabs. 2, 3 and 4. Even though the images were acquired at different time points roughly one year apart, the proposed method remains stable across the different metrics (DSC, precision, and sensitivity) as well as across scanners and patients.

Figure 7: Comparison between the proposed method and the manually segmented area/ground truth. (a) FLAIR image. (b) Manually segmented image. (c) Auto-segmented image from the proposed method, with true positives (red), false positives (green) and false negatives (blue). (d) FLAIR image of a second patient. (e) Manually segmented area of the second patient. (f) Automatically segmented lesions of the second patient

    5 Conclusion

Automatic lesion segmentation is required for diagnostic and monitoring purposes in MS, and there is a pressing need for more efficacious and rapid lesion assessment. A six-layer CNN was implemented with two cascaded pipelines. The network does not need fully connected (FC) layers but instead uses convolutional layers to predict the membership probability, which not only increases speed but also decreases the false positive rate. The proposed method follows the supervised principle of using templates consisting of MR images with masks manually segmented by qualified radiologists. It accurately and robustly segments MS lesions, even in images from different MRI scanners, with a precision of up to 90%. This automated algorithm can therefore help neurologists segment lesions fully automatically and rapidly, improving disease monitoring.

However, the proposed algorithm has some limitations. During the experiments it was noted that when two lesions are very close or overlapping, the algorithm sometimes fails to segment them precisely. Lesions near the cortex of the brain were also difficult to segment; according to the ISBI dataset and expert opinion, cortical lesions are difficult to detect. In future work, as new MRI sequences are introduced, the proposed method will be evaluated on other sequences and larger numbers of MRI scans. It will also be evaluated with different parameters, such as the number of CNN layers, the batch size, and different filter configurations.

Acknowledgement: We are thankful to Peter and Shiami (Radiologists at John Hunter Hospital) for manual segmentation of the MRI scans.

Funding Statement: The authors thank the Research Training Program (RTP) of the University of Newcastle, Australia, and PGRSS, UON for providing funding. The APC of CMC will be paid from PGRSS, UON funding.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
