
    Rigid Medical Image Registration Using Learning-Based Interest Points and Features

    2019-08-13 05:54:24
    Computers Materials & Continua, 2019, Issue 8

    Maoyang Zou, Jinrong Hu, Huan Zhang, Xi Wu, Jia He, Zhijie Xu and Yong Zhong

    Abstract: For image-guided radiation therapy, radiosurgery, minimally invasive surgery, endoscopy and interventional radiology, one of the important techniques is medical image registration. In our study, we propose a learning-based approach named "FIP-CNNF" for rigid registration of medical images. Firstly, pixel-level interest points are computed by a fully convolutional network (FCN) with self-supervision. Secondly, feature detection, description and matching are trained by a convolutional neural network (CNN). Thirdly, random sample consensus (Ransac) is used to filter outliers, and the transformation parameters with the most inliers are found by iteratively fitting transforms. In addition, we propose "TrFIP-CNNF", which uses transfer learning and fine-tuning to boost the performance of FIP-CNNF. The experiment is done with a dataset of nasopharyngeal carcinoma collected from West China Hospital. For CT-CT and MR-MR image registration, TrFIP-CNNF performs slightly better than scale invariant feature transform (SIFT) and FIP-CNNF. For CT-MR image registration, the precision, recall and target registration error (TRE) of TrFIP-CNNF are much better than those of SIFT and FIP-CNNF, and even several times better than those of SIFT. Promising results are achieved by TrFIP-CNNF, especially in multimodal medical image registration, which demonstrates that a feasible approach can be built to improve image registration by using FCN interest points and CNN features.

    Keywords: Medical image registration,CNN feature,interest point,deep learning.

    1 Introduction

    The purpose of image registration is to establish the corresponding relationship between two or more images, bringing the images into the same coordinate system through transformation. For image-guided radiation therapy, radiosurgery, minimally invasive surgery, endoscopy and interventional radiology, one of the important techniques is image registration.

    For image registration, intensity-based registration and feature-based registration are the two recognized approaches. The intensity-based approach directly establishes a similarity measure function based on intensity information, and finally registers the images by applying the transformation that maximizes the similarity. Classic algorithms of this approach include cross-correlation, mutual information, the sequential similarity detection algorithm and so on. In general, it can be used for rigid and non-rigid registration. Its registration precision is correspondingly high, but it is slow due to high computational complexity, and it is also troubled by monotone texture. The feature-based approach registers images by using representative features of the image. Classical feature-based registration most commonly uses SIFT features [Lowe (2004)] + Ransac filtering, and secondly the speeded up robust features (SURF) [Bay, Tuytelaars and Gool (2006)] + Ransac filtering. The coordinates of matching pairs are obtained by these approaches, so the image transformation parameters can be calculated. Compared with the intensity-based approach, its computational cost is relatively low because it does not consider all image regions, and it has stronger anti-interference ability and higher robustness to noise and deformation, but its registration precision is generally lower. Overall, the feature-based approach is currently a hot research topic because of its good cost-performance trade-off.
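As a concrete illustration of the intensity-based family mentioned above, the mutual-information similarity measure can be sketched in a few lines of NumPy; the histogram-based estimator and the bin count are illustrative choices, not taken from the paper:

```python
import numpy as np

def mutual_information(fixed, moving, bins=32):
    """Mutual information between two same-shaped images, estimated
    from their joint intensity histogram."""
    hist_2d, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()          # joint probability
    px = pxy.sum(axis=1)                   # marginal of the fixed image
    py = pxy.sum(axis=0)                   # marginal of the moving image
    px_py = px[:, None] * py[None, :]      # product of marginals
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / px_py[nz])))
```

An intensity-based registrar would search over transformations of the moving image to maximize this quantity.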

    In recent years, deep neural networks, which simulate the human brain, have achieved great success in image recognition [He, Zhang, Ren et al. (2015)], speech recognition [Hinton, Deng, Yu et al. (2012)], natural language processing [Abdel-Hamid, Mohamed, Jiang et al. (2014)], computer vision and so on [Meng, Rice, Wang et al. (2018)], and have become one of the hot research topics. In the computer vision tasks of classification [Krizhevsky, Sutskever and Hinton (2012)], segmentation [Long, Shelhamer and Darrell (2015)] and target detection [Ren, He, Girshick et al. (2015)], deep neural networks, especially the convolutional neural network (CNN), perform well.

    For medical image registration, feature-based approaches have been developed with deep neural networks. Since Chen et al. [Chen, Wu and Liao (2016)] first registered spinal ultrasound and CT images using a CNN, researchers have achieved results with deep learning approaches in the registration of chest CT images [Sokooti, Vos, Berendsen et al. (2017)], brain CT and MR images [Cheng, Zhang and Zheng (2018); Simonovsky, Gutierrez-Becker, Mateus et al. (2016); Wu, Kim and Wang (2013); Cao, Yang and Zhang (2017)], 2D X-ray and 3D CT images [Miao, Wang and Liao (2016)], and so on. But overall, there are only a few studies on medical image registration using learning-based approaches. Shan et al. [Shan, Guo, Yan et al. (2018)] stated: "for learning-based approaches: (1) informative feature representations are difficult to obtain directly from learning and optimizing morphing or similarity function; (2) unlike image classification and segmentation, registration labels are difficult to collect. These two reasons limit the development of learning-based registration algorithms."

    In this study, we propose a learning-based approach named "FIP-CNNF" to register medical images with a deep-learning network. Firstly, an FCN is used to detect interest points in CT and MR images of nasopharyngeal carcinoma collected from patients in West China Hospital (this dataset is named "NPC"). Secondly, the Matchnet network is used for feature detection, description and matching. Thirdly, Ransac is used to filter outliers, and then the CT-CT, MR-MR and CT-MR images are registered by iteratively fitting transforms to the data. In addition, transfer learning is adopted on FIP-CNNF (named "TrFIP-CNNF"). Specifically, the Matchnet network is pre-trained with the UBC dataset to initialize the network parameters, and then trained with the NPC dataset. Experiments show that the registration results of TrFIP-CNNF are better than those of FIP-CNNF.

    The contributions of this work are:

    ● Two key steps of the classic feature-based registration algorithm are improved by learning-based approaches. A multi-scale, multi-homography approach boosts pixel-level interest point detection with self-supervision, and the Matchnet network using transfer learning contributes to feature detection, description and matching.

    ● For CT-MR registration, the precision, recall and TRE of TrFIP-CNNF are much better than those of SIFT. The experimental results demonstrate that a feasible approach is built to improve multimodal medical image registration.

    The rest of the paper is organized as follows: Section 2 reviews related work. Section 3 introduces the methodology. Section 4 describes the transfer learning. Section 5 presents the experimental setup and results. Section 6 concludes the paper.

    2 Related work

    The feature-based image registration approach focuses on the features of the image, so the key is how to extract features with good invariance. SIFT is currently the most popular algorithm for feature detection and matching. The interest points found by SIFT in different spaces are very prominent, such as corner points, edge points, etc. SIFT features are invariant to rotation, illumination, affine transformation and scale.

    SURF is the most famous variant of SIFT. Bay et al. [Bay, Tuytelaars and Gool (2006)] proposed: "SURF approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster."

    A performance comparison of SIFT and SURF is given in Juan et al. [Juan and Gwun (2009)]: "SIFT is slow and not good at illumination changes, while it is invariant to rotation, scale changes and affine transformations. SURF is fast and has good performance as the same as SIFT, but it is not stable to rotation and illumination changes." There are many other variants of the SIFT algorithm; for example, Chen et al. [Chen and Shang (2016)] propose "an improved sift algorithm on characteristic statistical distributions and consistency constraint."

    Although SIFT is widely used, it also has some shortcomings. For example, SIFT requires that the image has enough texture when it constructs 128-dimensional vectors for interest points; otherwise the constructed 128-dimensional vector is not distinctive enough, which easily causes mismatches.

    A CNN can also be used for feature extraction, feature description and matching. Given image patches, the CNN usually employs the FC or pooled intermediate CNN features. The paper [Fischer and Dosovitskiy (2014)] "compares features from various layers of convolutional neural nets to standard SIFT descriptors" and finds: "Surprisingly, convolutional neural networks clearly outperform SIFT on descriptor matching". Other approaches using CNN features include [Reddy and Babu (2015); Xie, Hong and Zhang (2015); Yang, Dan and Yang (2018)].

    Here we specifically discuss the Siamese network [Bromley, Guyon, LeCun et al. (1994)], which was first introduced in 1994 for signature verification. On the basis of the Siamese network, combined with spatial pyramid pooling (SPP) [He, Zhang, Ren et al. (2015)] (a network structure that can generate a fixed-length representation regardless of image size/scale), Zagoruyko et al. [Zagoruyko and Komodakis (2015)] proposed a 2-channel + central-surround two-stream + SPP network structure to improve the precision of image registration. Han et al. [Han, Leung, Jia et al. (2015)] proposed "Matchnet", an improved Siamese network. Using fewer descriptors, Matchnet obtained better patch-based matching results than SIFT and the Siamese network.

    3 Methodology

    This section focuses on the methodology of FIP-CNNF. FIP-CNNF has three modules: (1) interest point detection, (2) feature detection, description and matching, and (3) transformation model estimation, which are described in detail in the following.

    3.1 Interest point detection

    Inspired by Detone et al. [Detone, Malisiewicz and Rabinovich (2017)], we detect interest points in two steps. The first step is to build a simple geometric-shapes dataset with no ambiguity in the interest point locations, consisting of rendered triangles, quadrilaterals, lines, cubes, checkerboards and stars with ground-truth corner locations. An FCN named "Base Detector" is then trained with this dataset. The second step finds interest points using Homographic Adaptation; the process is shown in Fig. 1 [Detone, Malisiewicz and Rabinovich (2017)].

    Figure 1: Homographic adaptation [Detone, Malisiewicz and Rabinovich (2017)]

    To find more potential interest point locations on a diverse set of image textures and patterns, Homographic Adaptation applies random homographies to warp copies of the input image, helping the Base Detector see the scene from many different viewpoints and scales. After the Base Detector processes each transformed image separately, the results are combined to obtain the interest points of the image. The interest points from our experimental medical images are shown in Fig. 2 (the red interest points are obtained by SIFT and the green interest points by homographic adaptation).
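The warp-detect-unwarp-average loop above can be sketched as follows. This is a simplified stand-in, not the authors' implementation: nearest-neighbour warping, small random perturbations of the identity homography, and a plug-in `base_detector` returning a response heatmap are all our assumptions:

```python
import numpy as np

def warp_nn(img, H):
    """Nearest-neighbour warp: output[p] = img[H @ p] for each pixel p."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous (x, y, 1)
    src = H @ pts
    src = src[:2] / src[2]
    xi = np.clip(np.rint(src[0]).astype(int), 0, w - 1)
    yi = np.clip(np.rint(src[1]).astype(int), 0, h - 1)
    return img[yi, xi].reshape(h, w)

def homographic_adaptation(img, base_detector, n_homographies=10, rng=None):
    """Average the base detector's response heatmap over random homography
    warps, mapping each response back into the original image frame."""
    rng = np.random.default_rng(rng)
    acc = base_detector(img).astype(float)
    for _ in range(n_homographies):
        H = np.eye(3) + rng.normal(scale=1e-3, size=(3, 3))  # near-identity warp
        warped = warp_nn(img, H)
        resp = base_detector(warped)
        acc += warp_nn(resp, np.linalg.inv(H))  # unwarp the response
    return acc / (n_homographies + 1)
```

Interest points would then be read off as local maxima of the aggregated heatmap.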

    Figure 2: Interest points of CT and MR images

    The ingenious design of this approach is that it can detect interest points with self-supervision, and it can boost interest point detection repeatability.

    3.2 Feature detection, description and matching

    A Siamese network can learn a similarity metric and match samples of unknown classes with this metric. For images whose interest points have been detected, feature detection, description and matching can be carried out with a Siamese network. In our experiment, the deep learning network is "Matchnet", a kind of improved Siamese network. The network structure is shown in Fig. 3 and the network parameters are shown in Tab. 1.

    Figure 3: Network structure

    Table 1: Network parameters

    The first layer of the network is the preprocessing layer. "For each pixel in the input grayscale patch we normalize its intensity value x (in [0,255]) to (x-128)/160" [Han, Leung, Jia et al. (2015)]. The following convolution layers use Rectified Linear Units (ReLU) as the non-linearity. The last layer uses Softmax as the activation function. The loss function of Matchnet is the cross-entropy error, whose formula is as follows:

    E = -(1/n) Σ_{i=1..n} [ y_i log(ŷ_i) + (1 - y_i) log(1 - ŷ_i) ]

    Here the training dataset has n patch pairs; y_i is the 0 or 1 label for input pair x_i, where 0 indicates a mismatch and 1 a match. ŷ_i and 1 - ŷ_i are the Softmax activations computed on the values of v_1(x_i) and v_0(x_i), the two nodes in FC3:

    ŷ_i = e^{v_1(x_i)} / (e^{v_0(x_i)} + e^{v_1(x_i)})
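The two-node Softmax and cross-entropy above can be computed directly; a minimal NumPy sketch (with the usual max-subtraction trick for numerical stability, which is our addition):

```python
import numpy as np

def matchnet_loss(v0, v1, y):
    """Cross-entropy over the two FC3 outputs v0, v1; y_hat is the
    softmax probability of the 'match' class (label 1)."""
    v0, v1, y = map(np.asarray, (v0, v1, y))
    m = np.maximum(v0, v1)                      # stabilise the softmax
    y_hat = np.exp(v1 - m) / (np.exp(v0 - m) + np.exp(v1 - m))
    return float(-np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)))
```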

    Formally, let S_1 be the set of interest point descriptors in the fixed image and S_2 the set of interest point descriptors in the moving image. For an interest point x in the fixed image, y_i is a candidate corresponding point in the moving image, and m(x, y_i) is a measure of the similarity between the two points. The output of the Matchnet network is a value between 0 and 1, where 1 indicates a full match. To prevent matching when interest points are only locally similar, which often occurs in medical images, we want the match between x and y_i to be particularly distinctive. In particular, when we find the maximum m(x, y_1) and the second largest m(x, y_2), the matching score is defined as:

    h(x, S_2) = m(x, y_2) / m(x, y_1)

    The smaller h(x, S_2) is, the closer x is to y_1 compared with any other member of S_2. Thus, we say that x matches y_1 if h(x, S_2) is below a threshold η. In addition, the interest point x of the fixed image is considered not to exist in the moving image if h(x, S_2) is higher than the threshold η.

    We also need to consider how to set the threshold. When the threshold η is low, fewer real correspondences are recognized. After evaluating the effect on precision and recall for η of 0.6, 0.8 and 1.0 respectively, we set η = 0.8 in our experiment.
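A minimal sketch of this ratio test follows. The similarity m() here is a stand-in (negative exponential of descriptor distance), whereas the paper uses the Matchnet output; the function names are ours:

```python
import numpy as np

def match_score(x_desc, moving_descs):
    """Stand-in similarity m(): larger means more similar, in (0, 1]."""
    d = np.linalg.norm(moving_descs - x_desc, axis=1)
    return np.exp(-d)

def find_match(x_desc, moving_descs, eta=0.8):
    """Return the index of the best match in S_2, or None when the
    ratio h(x, S_2) = m(x, y_2) / m(x, y_1) is not below eta."""
    m = match_score(x_desc, moving_descs)
    order = np.argsort(m)[::-1]                 # best similarity first
    best, second = order[0], order[1]
    h = m[second] / m[best]                     # second-best over best
    return int(best) if h < eta else None
```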

    3.3 Transformation model estimation

    The outliers among the interest points are rejected by the Ransac algorithm, and the transformation parameters with the most inliers are found by iteratively fitting transforms. The fixed image is transformed into the same coordinate system as the moving image. The coordinate points after image transformation are not necessarily integers, but this is solved with interpolation.
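A toy version of this step, assuming a 2-D rigid (rotation + translation) model and a plain NumPy RANSAC loop rather than any particular library routine, might look like:

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rotation R and translation t with dst ≈ src @ R.T + t
    (Kabsch/Procrustes on (n, 2) point sets)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1, d]) @ U.T
    return R, cd - R @ cs

def ransac_rigid(src, dst, n_iter=200, tol=1.0, rng=0):
    """Sample minimal point pairs, count inliers, refit on the largest set."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 2, replace=False)  # minimal sample for 2-D rigid
        R, t = fit_rigid(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_rigid(src[best_inliers], dst[best_inliers]), best_inliers
```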

    4 Transfer learning

    Greenspan et al. [Greenspan, Ginneken and Summers (2016)] have pointed out: "the lack of publicly available ground-truth data, and the difficulty in collecting such data per medical task, both cost-wise as well as time-wise, is a prohibitively limiting factor in the medical domain." Transfer learning and fine-tuning are used to solve the problem of insufficient training samples. Matchnet is pre-trained with the UBC dataset, which consists of corresponding patches sampled from 3D reconstructions of the Statue of Liberty (New York), Notre Dame (Paris) and Half Dome (Yosemite); the weights of the trained Matchnet are then used as the initialization of a new, identical Matchnet; finally, the NPC dataset is used to fine-tune the learnable parameters of the pre-trained Matchnet. According to Zou et al. [Zou and Zhong (2018)]: "If half of last layers undergoes fine-tuning, compared with entire network involves in fine-tuning, the almost same accuracy can be achieved, but the convergence is more rapid", so half of the last layers undergo fine-tuning in our experiment.
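The transfer-and-fine-tune schedule can be sketched framework-agnostically; the helper below is hypothetical (not from the paper): it warm-starts a model from pre-trained weights and marks only the last half of the layers as trainable:

```python
def transfer_and_freeze(pretrained, layer_names):
    """Warm-start a new network from pre-trained weights, then mark only the
    last half of the layers as trainable for fine-tuning; earlier layers
    keep their pre-trained weights frozen."""
    weights = {name: pretrained[name].copy() for name in layer_names}
    cut = len(layer_names) // 2
    return weights, set(layer_names[cut:])  # names that will be fine-tuned
```

In a deep-learning framework, "frozen" would correspond to excluding those layers' parameters from the optimizer.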

    5 Experiment

    5.1 NPC dataset and data preprocessing

    This study has been conducted using CT and MR images of 99 nasopharyngeal carcinoma patients (age range: 21-76 years; mean age ± standard deviation: 50.3 ± 11.2 years) who underwent chemoradiotherapy or radiotherapy in West China Hospital; the radiology department of West China Hospital agreed that this dataset may be used and the experimental results published. There are 99 CT images and 99 MR images in the NPC dataset, all coded in DICOM format. The CT images are obtained by a Siemens SOMATOM Definition AS+ system, with voxel sizes ranging from 0.88 mm × 0.88 mm × 3.0 mm to 0.97 mm × 0.97 mm × 3.0 mm. The MR images are obtained by a Philips Achieva 3T scanner. In this study, T1-weighted images are used, which have a high in-slice resolution of 0.61 mm × 0.61 mm and a slice spacing of 0.8 mm.

    The images are preprocessed as follows:

    ● Unifying the axis direction of MRI and CT data.

    ● Removing the invalid background area from CT and MR images.

    ● Unifying the images to have a voxel size of 1 mm × 1 mm × 1 mm.

    ● Because the imaging ranges of MRI and CT are not consistent, we keep only the region from eyebrow to chin when slicing the images.

    ● We randomly selected 15 pairs of MR and CT slices for each patient,and registered them as ground truth using the Elastix toolbox.

    We augment the dataset by rotating and scaling.

    ● Rotation: rotating the slice by an angle from -15° to 15° in steps of 5°.

    ● Scale:scaling the slice with a factor in [0.8,1.2] with a step of 0.1.
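The two augmentation rules enumerate a fixed grid of 7 rotations × 5 scales per slice; a small helper (ours, for illustration) makes the grid explicit:

```python
import numpy as np

def augmentation_grid():
    """(rotation, scale) pairs used to augment each slice: rotations of
    -15..15 degrees in steps of 5, scales 0.8..1.2 in steps of 0.1."""
    rotations = np.arange(-15, 16, 5)                 # 7 angles
    scales = np.round(np.arange(0.8, 1.21, 0.1), 1)  # 5 factors
    return [(int(r), float(s)) for r in rotations for s in scales]
```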

    We use the approach introduced in Section 3.1 to detect interest points, and then, centered at each interest point, image patches of size 64 × 64 are extracted. If a patch pair is generated from the same slice or two corresponding slices and the absolute distance between the corresponding interest points is less than 50 mm, the patch pair receives a positive label; otherwise, it receives a negative label.
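A sketch of the patch extraction and labelling rule, with the 64 × 64 patch size and the 50 mm distance threshold taken from the text (function names and the boundary handling are our assumptions):

```python
import numpy as np

def extract_patch(img, center, size=64):
    """Crop a size×size patch centred at an interest point (row, col);
    returns None if the patch would leave the image."""
    r, c = center
    half = size // 2
    if r - half < 0 or c - half < 0 or r + half > img.shape[0] or c + half > img.shape[1]:
        return None
    return img[r - half:r + half, c - half:c + half]

def label_pair(corresponding_slices, p_mm, q_mm, max_dist_mm=50.0):
    """Positive label when the patches come from the same or corresponding
    slices and their interest points lie within max_dist_mm of each other."""
    dist = float(np.linalg.norm(np.asarray(p_mm) - np.asarray(q_mm)))
    return 1 if corresponding_slices and dist < max_dist_mm else 0
```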

    5.2 Experimental setup

    The CT and MR images of 60 patients are used for training and validation, and those of 39 patients for testing. More than 2 million patch pairs are produced in the way described in Section 5.1. From the training and validation data, 500,000 patch pairs are randomly selected for training and 200,000 patch pairs for validation. From the testing data, 300,000 patch pairs are selected for testing. The ratio between positive and negative samples is 1:1, and the proportion of MR-MR, CT-CT and CT-MR pairs is 1:1:2.

    5.3 Results of experiment

    The ground-truth displacement at each voxel of the test pairs is obtained by the Elastix toolbox, so we can independently verify each matched interest point and then calculate the precision of the features extracted by SIFT, FIP-CNNF and TrFIP-CNNF respectively. A true positive is a matched interest point in the fixed image for which a true correspondence exists, and a false positive is an interest point that is assigned an incorrect match.
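Under these definitions, precision and recall can be computed per test pair roughly as follows; the distance tolerance for accepting a match as correct is our assumption, not a value from the paper:

```python
import numpy as np

def precision_recall(predicted, ground_truth, tol_mm=2.0):
    """predicted: dict interest point -> matched point (or missing/None),
    ground_truth: dict interest point -> true corresponding point.
    A prediction is a true positive when it lies within tol_mm of the
    ground-truth correspondence; an unmatched point is a false negative."""
    tp = fp = fn = 0
    for x, true_y in ground_truth.items():
        y = predicted.get(x)
        if y is None:
            fn += 1
        elif np.linalg.norm(np.subtract(y, true_y)) <= tol_mm:
            tp += 1
        else:
            fp += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```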

    For CT-CT image registration, the precision and recall of SIFT, FIP-CNNF and TrFIP-CNNF are shown in Fig. 4 and Fig. 5.

    Figure 4: CT-CT Precision

    Figure 5: CT-CT Recall

    The X-coordinate (Scale, Rotation) represents the degree of scale and rotation respectively. The experimental results show that TrFIP-CNNF outperforms SIFT and FIP-CNNF. The mean precision of SIFT and FIP-CNNF differs little, while the recall of FIP-CNNF is better than that of SIFT.

    For MR-MR image registration, the precision and recall of SIFT, FIP-CNNF and TrFIP-CNNF are shown in Fig. 6 and Fig. 7.

    Figure 6: MR-MR Precision

    Figure 7: MR-MR Recall

    The experimental results show that TrFIP-CNNF and SIFT perform well. In most cases, the precision and recall of TrFIP-CNNF are relatively higher when the rotation is greater than 5°; conversely, the precision and recall of SIFT are relatively higher when the rotation is less than 5°. Overall, the precision and recall of FIP-CNNF are the lowest.

    For CT-MR image registration, the precision and recall of SIFT, FIP-CNNF and TrFIP-CNNF are shown in Fig. 8 and Fig. 9.

    Figure 8: CT-MR Precision

    Figure 9: CT-MR Recall

    For multimodal image registration, the deep learning approach has obvious advantages: FIP-CNNF and TrFIP-CNNF outperform SIFT in every task.

    To further verify the results in Fig. 8 and Fig. 9, the target registration error (TRE) is calculated to measure registration accuracy. TRE is defined as the root mean square of the distance errors over all interest point pairs of one sample. The TREs of multimodal image registration are shown in Tab. 2. The first row (Scale, Rotation) represents the degree of scale and rotation.
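The TRE definition amounts to a one-line computation over corresponding point pairs:

```python
import numpy as np

def target_registration_error(points_fixed, points_moved):
    """Root-mean-square distance between corresponding interest points
    after registration, over all pairs of one sample."""
    d = np.linalg.norm(np.asarray(points_fixed) - np.asarray(points_moved), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```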

    Table 2: TRE of CT-MR registration

    Fig. 10 provides a visual comparison of the registration of a random pair of CT-MR slices by SIFT, FIP-CNNF and TrFIP-CNNF.

    Figure 10: Color overlap registration results of SIFT, FIP-CNNF and TrFIP-CNNF

    6 Conclusion

    In our study, the CT and MR images of nasopharyngeal carcinoma are registered by a deep learning network. In particular, interest points are detected by an FCN, and feature detection, description and matching are trained by a CNN. Experimental results show that this approach provides a general way to improve medical image registration. Especially for CT-MR image registration, FIP-CNNF outperforms SIFT in every task due to the superiority of the high-level features learned by the CNN. TrFIP-CNNF outperforms FIP-CNNF due to the knowledge transferred from rich natural images, which indicates that transfer learning is feasible for medical images and that fine-tuning has a positive impact.

    Acknowledgement: We thank Xiaodong Yang for assistance with the experiment. This work is supported by the National Natural Science Foundation of China (Grant No. 61806029), the Science and Technology Department of Sichuan Province (Grant No. 2017JY0011), and the Education Department of Sichuan Province (Grant No. 17QNJJ0004).
