
    Multi-View Point-Based Registration for Native Knee Kinematics Measurement with Feature Transfer Learning

    Engineering, 2021, Issue 6

    Cong Wang, Shuaining Xie, Kang Li, Chongyang Wang, Xudong Liu*, Liang Zhao*, Tsung-Yuan Tsai*

    a Shanghai Key Laboratory of Orthopaedic Implants & Clinical Translational R&D Center of 3D Printing Technology, Department of Orthopaedic Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine; School of Biomedical Engineering & Med-X Research Institute, Shanghai Jiao Tong University, Shanghai 200030, China

    b SenseTime Research, Shanghai 200233, China

    c Engineering Research Center of Digital Medicine and Clinical Translation, Ministry of Education, Shanghai 200030, China

    d Department of Orthopaedics, New Jersey Medical School, Rutgers University, Newark, NJ 07103, USA

    e Department of Orthopaedics, Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Shanghai 200233, China

    Keywords:

    ABSTRACT

    Deep-learning methods provide a promising approach for measuring in-vivo knee joint motion from fast registration of two-dimensional (2D) to three-dimensional (3D) data with a broad capture range. However, if there are insufficient data for training, the data-driven approach will fail. We propose a feature-based transfer-learning method to extract features from fluoroscopic images. With three subjects and fewer than 100 pairs of real fluoroscopic images, we achieved a mean registration success rate of up to 40%. The proposed method provides a promising solution for applying a learning-based registration method when only a limited number of real fluoroscopic images is available.

    1. Introduction

    Accurate kinematics of the knee joint is critical in many orthopedic applications for understanding aspects such as the normal function of the joint [1], the development of knee osteoarthritis [2], the mechanisms of knee injuries [3], the optimization of prosthesis design [4], preoperative planning, and postoperative rehabilitation [5]. The measurement of knee kinematics is also essential for biomechanical studies of the musculoskeletal system. Given the significant demand for kinematics data in the clinical field, an efficient and reliable method to measure the dynamic motion of the joint is needed.

    Various measurement tools are now available for researchers to quantify three-dimensional (3D) knee kinematics, but only a few of them can provide millimeter-scale accuracy and rapid tracking velocity. Skin-marker-based optical tracking systems are widely used in the analysis of human motion, but their accuracy is affected by marker-associated soft-tissue artifacts, which can cause displacements of up to 40 mm [6]. Although several researchers have attempted to reduce the effects of soft-tissue artifacts by building mathematical models [7–9], the issue remains unsolved for any skin-marker-based motion-capture technique [10]. With the development of medical imaging, some techniques, such as real-time magnetic resonance (MR) tomography and computed tomography (CT), can measure dynamic joint kinematics directly [11,12]. However, the clinical promotion of these techniques is limited by low temporal resolution, restricted range of motion (ROM), the need to control motion speed, low image quality, and nonnegligible amounts of radiation [13,14]. In the past decade, the dual-fluoroscopic imaging system (DFIS) has been widely used and well-accepted for accurate in-vivo joint motion analysis because of its high accuracy [15], accessibility, sufficient ROM [16], and low radiation levels compared with traditional radiography (Fig. 1).

    Fig. 1. Virtual DFIS for measuring the dynamic motion of knee joints.

    To find the pose of the object (i.e., the native knee joint) in the DFIS, two-dimensional (2D) to 3D registration, which aligns the volume data (e.g., CT) with fluoroscopy (continuous X-ray images), is applied in the measurement procedure. The 3D position of the CT volume is adjusted iteratively, and a large number of digitally reconstructed radiographs (DRRs) are generated until the DRR is most similar to the X-ray image [17]. With the increasing use of the DFIS in clinical applications, researchers have attempted various automatic registration methods to accelerate the matching procedure. Optimization-based registration, which is composed of an optimizer and similarity metrics between images, has been investigated extensively [18,19]. Although the accuracy of optimization-based registration is high [20–22], several drawbacks, such as the strictly required registration initialization and the high computational cost of calculating DRRs and of the iterations during optimization, limit the widespread use of the DFIS [23].
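    The iterative render-and-compare loop described above can be sketched as a toy pose search: render a candidate DRR for each pose, score it against the X-ray with a similarity metric such as normalized cross-correlation, and keep the best-scoring pose. The sketch below is a hypothetical stand-in (a one-parameter image shift instead of a full 6DOF pose, and a `render` callback instead of a real DRR generator) meant only to illustrate the optimize-render-compare structure, not the methods cited in [18,19].

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation, a common DRR-to-X-ray similarity metric."""
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float((a * b).mean())

    def register_shift(xray, render, candidate_shifts):
        """Toy optimization-based registration: evaluate candidate poses (here
        just integer shifts), 'render' a DRR for each, and keep the pose whose
        DRR is most similar to the target X-ray image."""
        return max(candidate_shifts, key=lambda s: ncc(xray, render(s)))
    ```

    In a real DFIS pipeline the grid search would be replaced by a gradient-free optimizer over all six pose parameters, which is exactly the costly iteration the learning-based methods in this paper try to avoid.
    
    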

    With the rapid development of machine learning [24,25] in recent years, several learning-based methods have been developed to measure joint kinematics, with the advantages of computational efficiency and an enhanced capture range compared with optimization-based methods [21,26–28]. However, these methods are typically trained with synthetic X-ray images (i.e., DRRs), because training such models with a large amount of authentic labeled data is impractical. Even so, a considerable number of authentic images is still necessary to ensure the robustness of registration [22,27]. Another consideration is the discrepancy between DRRs and X-ray images. Compared with DRRs, fluoroscopic images show blurred edges, geometric distortion, and nonuniform intensity [29,30]; therefore, networks trained on DRRs do not generalize ideally to fluoroscopic images [22]. Previous studies have established various physical models to generate more realistic DRRs through additional measurements of X-ray quality [31,32]. Recently, a survey by Haskins et al. [24] has shown the feasibility of using transfer learning in such cross-modal registration, which may save the effort of building complicated DRR models or collecting authentic clinical images.

    In our work, we developed a pseudo-Siamese multi-view point-based registration framework to address the problem of a limited number of real fluoroscopic images. The proposed method combines a pseudo-Siamese point-tracking network and a feature-transfer network. The pose of the knee joints was estimated by tracking selected points on the knee joints with the multi-view point-based registration network, paired DRRs, and fluoroscopy. A feature extractor was trained by the feature-learning network with pairs of DRRs and fluoroscopic images. To overcome the limited number of authentic fluoroscopic images, we trained the multi-view point-based registration network with DRRs and pre-trained the feature-learning network on ImageNet.

    The remainder of this paper is organized as follows. Section 2 reviews deep-learning-based 2D–3D registration and domain adaptation. Section 3 presents the proposed learning-based 2D–3D registration method. Section 4 presents the experiments and results, and Section 5 concludes the paper.

    2. Related work

    2.1. Learning-based strategy

    To avoid the large computational cost of optimization-based registration, researchers have recently developed learning-based registration [24]. Building on the success of convolutional neural networks (CNNs), feature extraction from both DRRs and fluoroscopic images has been proposed, with the pose of the rigid object then estimated by a hierarchical regressor [33]. The CNN model improves the robustness of registration, but it is limited to objects with strong features, such as medical implants, and cannot perform registration of native anatomic structures. Miao et al. [28] proposed a reinforcement-learning network to register X-ray and CT images of the spine with a Markov decision process. Although they improved the method with a multi-agent system, the proposed method may still fail because it cannot always converge during searching. Recently, several attempts have been made to register rigid objects with point-correspondence networks [27,34,35], which showed good results in both efficiency and accuracy on anatomic structures. These methods avoid costly and unreliable iterative pose searching and correct out-of-plane errors with multiple views.

    2.2. Domain adaptation

    The discrepancy between synthetic data (i.e., DRRs) and authentic data (i.e., fluoroscopic images), also known as drift, is another challenge for learning-based registration strategies, in which the training data and future data must be in the same feature space and have the same distribution [36]. Compared with building complicated models for DRR generation, domain adaptation has emerged as a promising and relatively effortless strategy to account for the domain difference between different image sources [37], and it has been applied in many medical applications, such as X-ray segmentation [38] and multi-modal image registration [21,22,39]. For 2D–3D registration, Zheng et al. [21] proposed integrating a pairwise domain-adaptation module into a pre-trained CNN that performs rigid registration using a limited amount of training data. The network was trained on DRRs and performed well on synthetic data; the authentic features were therefore transferred close to the synthetic features with domain adaptation. However, existing methods are still inappropriate for natural joints, such as knees and hips. Therefore, a registration approach designed for natural joints that does not require numerous clinical X-ray images for training is needed.

    3. Methods

    The aim of 2D–3D registration is to estimate the six-degrees-of-freedom (6DOF) pose of 3D volume data from pairs of 2D multi-view fluoroscopic images. In the following sections, we begin with an overview of the tracking system and multi-view point-based 2D–3D registration (Section 3.1). Then, details of the two main components of our work are given in Section 3.2 and Section 3.3.

    3.1. Multi-view point-based registration

    3.1.1. 2D–3D rigid registration with 6DOF

    We consider the registration of each bone in the knee joint as a separate 2D–3D registration procedure. Pose reproduction of each bone is denoted as the 3D alignment of the CT volume data V through a transformation matrix T_{4×4}, which is parameterized by six elements of translation and rotation (x, y, z, γ, α, β) using Euler angles [40]. The transformation matrix T_{4×4} can be represented as a homogeneous 4 × 4 matrix, and the pose P can be derived as follows:

    T_{4×4} = [ R_{3×3}(γ, α, β)  t ; 0  1 ],  t = (x, y, z)^T,  P = (x, y, z, γ, α, β)

    where R_{3×3} is the rotation matrix composed from the three Euler angles and t is the translation vector.
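    The 6DOF parameterization above can be made concrete with a small helper that assembles the homogeneous transform. The Z-Y-X rotation order used here is an assumption for illustration; the paper states only that Euler angles (γ, α, β) are used, without fixing a convention.

    ```python
    import numpy as np

    def pose_to_matrix(x, y, z, gamma, alpha, beta):
        """Build a homogeneous 4x4 transform T from the 6DOF pose
        P = (x, y, z, gamma, alpha, beta). Rotation order Z-Y-X is an
        assumed convention, not one specified by the paper."""
        cg, sg = np.cos(gamma), np.sin(gamma)
        ca, sa = np.cos(alpha), np.sin(alpha)
        cb, sb = np.cos(beta), np.sin(beta)
        Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
        Ry = np.array([[ca, 0, sa], [0, 1, 0], [-sa, 0, ca]])
        Rx = np.array([[1, 0, 0], [0, cb, -sb], [0, sb, cb]])
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx   # R_{3x3}(gamma, alpha, beta)
        T[:3, 3] = [x, y, z]       # translation t
        return T
    ```
    
    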

    3.1.2. 3D projection geometry of X-ray imaging

    In the virtual DFIS, the four corners of each imaging plane and the location of the X-ray sources were used to establish the optical pinhole model during DRR generation (Fig. 1). After a polynomial-based distortion correction and spatial calibration of the two-view fluoroscopy, DRRs were generated by the ray-casting algorithm [41] with segmented CT volume data using Amira software (Thermo Fisher Scientific, USA). Combining the transformation matrix T_{4×4}, the final DRR I_DRR can be computed as follows:

    I_DRR(s) = ∫_{l(p,s)} μ(p) dp

    where l(p,s) is the ray s connecting the X-ray source and the image plane in the X-ray imaging model, and p is a point on the ray. μ(·) represents the attenuation coefficient at a point in the volume data.
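    The line integral of μ along each source-to-pixel ray can be sketched with a toy nearest-neighbour ray caster. This is an illustrative stand-in for the Amira ray-casting implementation used in the paper: voxel coordinates, sampling density, and interpolation are all simplified assumptions.

    ```python
    import numpy as np

    def render_drr(volume, source, pixel_centers, n_samples=64):
        """Minimal ray-casting DRR: for each detector pixel, sample points
        along the source-to-pixel ray and accumulate the attenuation
        coefficient mu from the CT volume (nearest-neighbour sampling)."""
        drr = np.zeros(len(pixel_centers))
        shape = np.array(volume.shape)
        for i, pix in enumerate(pixel_centers):
            ts = np.linspace(0.0, 1.0, n_samples)
            # Sample points p = source + t * (pixel - source) along ray l(p, s).
            pts = source[None, :] + ts[:, None] * (pix - source)[None, :]
            idx = np.rint(pts).astype(int)
            valid = np.all((idx >= 0) & (idx < shape), axis=1)
            drr[i] = volume[tuple(idx[valid].T)].sum() / n_samples
        return drr
    ```

    A ray that passes through the volume accumulates attenuation, while a ray that misses it integrates to zero, which is the behaviour that makes the DRR resemble a projection radiograph.
    
    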

    3.1.3. Registration by tracking multi-view points

    The final pose of each bone was reproduced with transformation matrix T.

    3.2. Pseudo-Siamese point tracking network

    In the proposed method, we used a pseudo-Siamese network to track points from each view. The pseudo-Siamese network has two branches: One is a visual geometry group (VGG) network [45] for extracting features from DRRs, and the other is a feature-transfer network, which transfers authentic features to synthetic features (Section 3.3). The overall workflow is shown in Fig. 3. The input of the network was unpaired DRRs and fluoroscopic images, and the output was the tracked points on the fluoroscopic images. In the upper branch of the network (Fig. 3), the exported features F_DRR around each selected point have the size M × N × C, where M and N are respectively the width and height of the DRR and C is the number of feature channels. In the lower branch of the network, the features of the fluoroscopic images, F_fluoro, were exported by the feature-transfer network without weight sharing. With the extracted features F_DRR and F_fluoro, a convolutional layer was applied to quantify the similarity between the two feature maps [27]. The similarity is denoted as

    S = W · (F_DRR ⊛ F_fluoro)

    where ⊛ denotes the convolution (cross-correlation) of the two feature maps.

    where W is a learned weighting factor for finding a better similarity for each selected point. The objective function to be minimized during training is the Euclidean loss (i.e., the registration loss), defined as

    L_reg = Σ_i ‖p_fluoro,i − p_DRR,i‖₂²

    where p_fluoro,i are the tracked 2D points and p_DRR,i are the projected 2D points in the DRR with known locations. With the tracked 2D points from different views, the 3D points were reconstructed using triangulation [43].
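    The paper cites triangulation [43] without giving the algorithm; a standard choice is linear (DLT) triangulation, sketched below under the assumption that the calibrated DFIS provides a 3 × 4 projection matrix for each view. The function names and matrix conventions are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def triangulate(P1, P2, pt1, pt2):
        """Linear (DLT) triangulation of one 3D point from its tracked 2D
        projections (pt1, pt2) in two calibrated views with projection
        matrices P1 and P2 (each 3x4). Solves A X = 0 for the homogeneous
        point X via SVD and dehomogenizes."""
        A = np.vstack([
            pt1[0] * P1[2] - P1[0],
            pt1[1] * P1[2] - P1[1],
            pt2[0] * P2[2] - P2[0],
            pt2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]            # null-space vector of A
        return X[:3] / X[3]   # back to inhomogeneous 3D coordinates
    ```

    Using two (or more) views this way is what corrects the out-of-plane errors that a single-view method cannot resolve.
    
    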

    3.3. Feature transfer using domain adaption

    For feature extraction of fluoroscopic images, we proposed a transfer-learning-based method to reduce the domain difference between synthetic images (e.g., the DRRs) and authentic X-ray images (e.g., the fluoroscopic images) (Fig. 4).

    Fig. 2. The workflow of the multi-view point-based registration method. A set of points was selected on the bone surface, and their 2D projections were tracked from each view in the virtual DFIS to reconstruct their 3D positions. The final transformation matrix was determined by the reconstructed points using Procrustes analysis [44].

    Fig. 3. The framework of the point-tracking network. Pairs of DRRs and fluoroscopic images were imported to the network, and their features were extracted by a VGG and a feature-transfer network, respectively. The selected points were tracked on the fluoroscopic images by searching for the most similar feature patch around the selected points in the DRRs. Conv: convolution layers.

    Fig. 4. Feature-transfer network with the paired synthetic image and authentic image. Synthetic images (i.e., DRRs) were generated at the pose after manual registration.

    To close the gap between the two domains, we used a domain-adaptation method. That is, an additional coupled VGG network with cosine similarity was set up during feature extraction of the fluoroscopic images (Fig. 5). Pairs of DRRs and fluoroscopic images, which share the same locations of the volume data through a model-based manual registration method [9], were used for training. We used cosine similarity as the cost function to measure the gap between the two domains. For the tracking problem, the cosine similarity can be stated as

    cos(F_X, F_D) = ⟨F_X, F_D⟩ / (‖F_X‖ ‖F_D‖)

    where ‖·‖ denotes the L2-norm, ⟨·,·⟩ denotes the dot product, and F_X and F_D are the feature maps. To improve the performance of feature transfer, we initialized the proposed method with weights pre-trained on ImageNet.
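    The cosine-similarity cost above can be sketched directly from its definition. Turning the similarity into a minimizable loss as 1 − cos is an assumption for illustration; the paper states only that cosine similarity is the cost function.

    ```python
    import numpy as np

    def cosine_gap(f_x, f_d, eps=1e-8):
        """Domain gap between a fluoroscopic feature map F_X and a DRR
        feature map F_D: 1 - cos(F_X, F_D), where cos is the dot product
        of the flattened maps over the product of their L2-norms.
        Minimizing this pulls the authentic features toward the synthetic ones."""
        fx = np.ravel(f_x)
        fd = np.ravel(f_d)
        cos = fx @ fd / (np.linalg.norm(fx) * np.linalg.norm(fd) + eps)
        return 1.0 - cos
    ```

    Identical feature maps give a gap near 0, while orthogonal maps give a gap of 1, so the loss is bounded and scale-invariant, which is one practical reason to prefer it over a raw Euclidean distance between feature maps.
    
    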

    Fig. 5. The architecture of synthetic X-ray image feature extraction.

    4. Experiments and results

    4.1. Dataset

    In this institutional-review-board-approved study, we collected CT images of three subjects' knees, and all subjects performed two or three motions that were captured by a bi-plane fluoroscopy system (BV Pulsera, Philips, the Netherlands) at a frame rate of 30 frames per second. CT scans (SOMATOM Definition AS; Siemens, Germany) of each knee, extending approximately 30 cm proximal and distal to the knee joint line (thickness, 0.6 mm; resolution, 512 × 512 pixels), were obtained. The size of the fluoroscopic images was 1024 × 1024 pixels with a pixel spacing of 0.28 mm. Geometric parameters of the bi-plane fluoroscopy imaging model, such as the polynomial distortion-correction parameters [46] and the locations of the X-ray source and detector plane, were used to establish a virtual DFIS, in which the poses of each bone were reproduced manually [47]. In this study, 143 pairs of matched fluoroscopic images were used (Fig. 6), of which 91 pairs were used for training the feature-transfer network of fluoroscopic images and the point-tracking network, and the remaining images were used as the testing set. Additionally, a three-fold validation was performed in the study. To evaluate the 2D–3D registration algorithm, a widely used 3D error measurement, the target registration error (TRE), was applied [48]. We computed the mean TRE (mTRE) to determine the 3D error. The average distance between the selected points defines the mTRE:

    mTRE = (1/N) Σ_{i=1}^{N} ‖P_bone,i − P_E,i‖₂

    where P_bone denotes the selected points and P_E denotes the estimated points. The success rate was defined as the percentage of all test cases with an mTRE of less than 10 mm.
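    The mTRE and success-rate definitions above translate directly into code:

    ```python
    import numpy as np

    def mtre(p_bone, p_est):
        """Mean target registration error: the average Euclidean distance
        between the selected bone points and their estimated positions."""
        return float(np.mean(np.linalg.norm(p_bone - p_est, axis=1)))

    def success_rate(mtre_values, threshold_mm=10.0):
        """Fraction of test cases whose mTRE falls below the threshold
        (10 mm in this study)."""
        return float(np.mean(np.asarray(mtre_values) < threshold_mm))
    ```
    
    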

    4.2. Loss selection in cross-domain feature extraction analysis

    We defined cosine similarity as the loss function for feature extraction on the authentic X-ray images. To find a better loss function, we also tested the mean squared error as the loss function [22]. The position of the loss function may also affect the final performance of the feature-extraction layer. Thus, we first compared the effects of loss functions located at different convolution layers. To obtain the best performance of the cross-domain features from the real fluoroscopic images, we placed the defined loss function between the pairs of conv2, conv3, conv4, and conv5 layers. In our data (Fig. 7), we preferred cosine similarity as the loss function because it yielded better performance for the final registration of the entire knee joint. Cosine similarity showed the best performance between the conv5 layers (see details in Appendix A, Table S1).

    Fig. 6. Paired raw fluoroscopic images and the corresponding images after manual matching. The raw fluoroscopic images are (a) and (b), in which additional noise (wearable electromyography sensors) can be seen on the surface of the lower limb. As described in the previous study [6], manual registration was performed until the projections of the surface bone model matched the outlines in the fluoroscopic images; the matched results are shown in (c) and (d). Reproduced from Ref. [6] with permission of Elsevier Ltd., © 2011.

    4.3. With or without transfer training network analysis

    Fig. 7. The success rate using cosine similarity and mean squared error (MSE) at different convolutional layers.

    Fig. 8. Mean target registration error with different registration networks.

    To test the effects of the proposed feature-based transfer-learning method, we compared it with the Siamese registration network (i.e., the POINT² network) [27]. Moreover, fine-tuning, a widely used transfer-learning tool, was also compared in the current study to find a better way to reduce the differences between the fluoroscopic images and DRRs. The weights of the proposed method were pre-trained on the ImageNet database. The average performance over 10 tests of each method was used as the final performance. The mTRE results are reported in terms of the 10th, 25th, 50th, 75th, and 95th percentiles to demonstrate the robustness of the compared methods. The proposed feature-based transfer-learning method performed significantly better than the Siamese registration network (Fig. 8), and it also performed better than fine-tuning, whose success rate was almost zero (Table S2 in Appendix A).

    4.4. Three-fold cross-validation

    We used three-fold cross-validation in this study and compared the proposed pseudo-Siamese registration network with and without transfer learning. Two of the three subjects were used for training the system, and the remaining subject was used to validate it. This procedure was iterated ten times by shifting the test subject randomly. The performance (mTRE) was evaluated in each iteration, and the performances recorded in all ten iterations were averaged to obtain the final mTRE. The mTRE results are reported in terms of the 10th, 25th, 50th, 75th, and 95th percentiles (Table 1). The final three-fold cross-validation showed that the proposed method also performed better with feature transfer.

    5. Conclusions

    To overcome the limited number of real fluoroscopic images in learning-based 2D–3D rigid registration via DRRs, we proposed a pseudo-Siamese multi-view point-based registration framework. The proposed method can decrease the demand for real X-ray images. With the ability to transfer authentic features to synthetic features, the proposed method performs better than the fine-tuned pseudo-Siamese network. This study also evaluated the POINT² network with and without transfer learning. The results showed that the proposed pseudo-Siamese network has a better success rate and accuracy than the Siamese point-tracking network. With a small amount of training data, the proposed method can work as an initialization step for optimization-based registration to improve accuracy. However, there are several limitations to the current work. First, because our method is designed for at least two fluoroscopic views, multi-view data were required to reconstruct the knee poses; otherwise, the out-of-plane translation and rotation errors would be large because of the physical imaging model. Second, the proposed method cannot reach sub-millimeter accuracy compared with an optimization-based strategy. Like other learning-based strategies, our method does not achieve the highest accuracy, but it is much faster than the optimization-based method because no iterative step is needed during matching. In clinical orthopedic practice, accurate joint kinematics is essential for determining a rehabilitation scheme [5], surgical planning [1], and functional evaluation [47]. The proposed method alone is therefore inappropriate for in-vivo joint kinematics measurement, and a combination of our method with an optimization-based strategy would be a viable solution.

    Table 1 Three-fold cross-validation with and without transfer learning.

    Acknowledgements

    This project was sponsored by the National Natural Science Foundation of China (31771017, 31972924, and 81873997), the Science and Technology Commission of Shanghai Municipality (16441908700), the Innovation Research Plan supported by the Shanghai Municipal Education Commission (ZXWF082101), the National Key R&D Program of China (2017YFC0110700, 2018YFF0300504, and 2019YFC0120600), the Natural Science Foundation of Shanghai (18ZR1428600), and the Interdisciplinary Program of Shanghai Jiao Tong University (ZH2018QNA06 and YG2017MS09).

    Compliance with ethics guidelines

    Cong Wang, Shuaining Xie, Kang Li, Chongyang Wang, Xudong Liu, Liang Zhao, and Tsung-Yuan Tsai declare that they have no conflict of interest or financial conflicts to disclose.

    Appendix A. Supplementary data

    Supplementary data to this article can be found online at https://doi.org/10.1016/j.eng.2020.03.016.
