
    Multi-View Point-Based Registration for Native Knee Kinematics Measurement with Feature Transfer Learning

    Engineering, 2021, Issue 6

    Cong Wang, Shuaining Xie, Kang Li, Chongyang Wang, Xudong Liu *, Liang Zhao *, Tsung-Yuan Tsai a,c,*

    a Shanghai Key Laboratory of Orthopaedic Implants & Clinical Translational R&D Center of 3D Printing Technology, Department of Orthopaedic Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine; School of Biomedical Engineering & Med-X Research Institute, Shanghai Jiao Tong University, Shanghai 200030, China

    b SenseTime Research, Shanghai 200233, China

    c Engineering Research Center of Digital Medicine and Clinical Translation, Ministry of Education, Shanghai 200030, China

    d Department of Orthopaedics, New Jersey Medical School, Rutgers University, Newark, NJ 07103, USA

    e Department of Orthopaedics, Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Shanghai 200233, China

    Keywords:

    ABSTRACT Deep-learning methods provide a promising approach for measuring in-vivo knee joint motion through fast registration of two-dimensional (2D) to three-dimensional (3D) data with a broad capture range. However, the data-driven approach fails when insufficient data are available for training. We propose a feature-based transfer-learning method to extract features from fluoroscopic images. With three subjects and fewer than 100 pairs of real fluoroscopic images, we achieved a mean registration success rate of up to 40%. The proposed method provides a promising solution for applying learning-based registration when only a limited number of real fluoroscopic images is available.

    1. Introduction

    Accurate kinematics of the knee joint is critical in many orthopedic applications for understanding aspects such as the normal function of the joint [1], the development of knee osteoarthritis [2], the mechanisms of knee injuries [3], the optimization of prosthesis design [4], preoperative planning, and postoperative rehabilitation [5]. The measurement of knee kinematics is also essential for biomechanical studies of the musculoskeletal system. Given the significant demand for kinematics data in the clinical field, an efficient and reliable method to measure the dynamic motion of the joint is needed.

    Various measurement tools are now available for researchers to quantify three-dimensional (3D) knee kinematics, but only a few of them can provide millimeter-scale accuracy and rapid tracking velocity. Skin-marker-based optical tracking systems are widely used in the analysis of human motion, but their accuracy is affected by marker-associated soft-tissue artifacts, which can cause displacements of up to 40 mm [6]. Although several researchers have attempted to reduce the effects of soft-tissue artifacts by building mathematical models [7–9], the issue remains unsolved for any skin-marker-based motion-capture technique [10]. With the development of medical imaging, some techniques, such as real-time magnetic resonance (MR) tomography and computed tomography (CT), can measure dynamic joint kinematics directly [11,12]. However, the clinical promotion of these techniques has been limited by low temporal resolution, restricted range of motion (ROM), the need to control motion speed, low image quality, and nonnegligible amounts of radiation [13,14]. In the past decade, the dual-fluoroscopic imaging system (DFIS) has become widely used and well-accepted for accurate in-vivo joint motion analysis because of its high accuracy [15], accessibility, sufficient ROM [16], and low radiation levels compared with traditional radiography (Fig. 1).

    Fig. 1. Virtual DFIS for measuring the dynamic motion of knee joints.

    To find the pose of the object (i.e., the native knee joint) in DFIS, two-dimensional (2D) to 3D registration, which aligns the volume data (e.g., CT) with fluoroscopy (continuous X-ray images), is applied in the measurement procedure. The 3D position of the CT volume is adjusted iteratively, and a large number of digitally reconstructed radiographs (DRRs) are generated until the DRR is most similar to the X-ray image [17]. With the increasing use of DFIS in clinical applications, researchers have attempted various automatic registration methods to accelerate the matching procedure. Optimization-based registration, which is composed of an optimizer and similarity metrics between images, has been investigated extensively [18,19]. Although the accuracy of optimization-based registration is high [20–22], several drawbacks, such as the strictly required registration initialization and the high computational cost of calculating DRRs and of the iterations during optimization, limit the widespread use of DFIS [23].

    With the rapid development of machine learning [24,25] in recent years, several learning-based methods have been developed to measure joint kinematics, offering computational efficiency and an enhanced capture range compared with optimization-based methods [21,26–28]. However, these methods are usually trained with synthetic X-ray images (i.e., DRRs) because training such models with a large amount of authentic labeled data is impractical. Even so, a considerable number of authentic images is still necessary to ensure the robustness of registration [22,27]. Another consideration is the discrepancy between DRRs and X-ray images. Compared with DRRs, fluoroscopic images show blurred edges, geometric distortion, and nonuniform intensity [29,30]; therefore, networks trained on DRRs do not generalize well to fluoroscopic images [22]. Previous studies have established various physical models to generate more realistic DRRs through additional measurements of X-ray quality [31,32]. Recently, a survey by Haskins et al. [24] has shown the potential of transfer learning in such cross-modal registration, which may save the effort of building complicated DRR models or collecting authentic clinical images.

    In our work, we developed a pseudo-Siamese multi-view point-based registration framework to address the problem of a limited number of real fluoroscopic images. The proposed method is a combination of a pseudo-Siamese point-tracking network and a feature-transfer network. The pose of the knee joint was estimated by tracking selected points on the knee joint with the multi-view point-based registration network, paired DRRs, and fluoroscopy. A feature extractor was trained by the feature-learning network with pairs of DRRs and fluoroscopic images. To overcome the limited number of authentic fluoroscopic images, we trained the multi-view point-based registration network with DRRs and pre-trained the feature-learning network on ImageNet.

    The remainder of this paper is organized as follows. Section 2 reviews deep-learning-based 2D–3D registration and domain adaptation. Section 3 presents the proposed learning-based 2D–3D registration method. Section 4 presents the experiments and results, and Section 5 concludes the paper.

    2. Related work

    2.1. Learning-based strategy

    To avoid the large computational cost of optimization-based registration, researchers have recently developed learning-based registration [24]. Considering the success of convolutional neural networks (CNNs), feature extraction from both DRRs and fluoroscopic images has been proposed, with the pose of the rigid object then estimated by a hierarchical regressor [33]. The CNN model improves the robustness of registration, but it is limited to objects with strong features, such as medical implants, and cannot perform the registration of native anatomic structures. Miao et al. [28] proposed a reinforcement-learning network to register X-ray and CT images of the spine with a Markov decision process. Although they improved the method with a multi-agent system, the proposed method may still fail because it cannot always converge during searching. Recently, several attempts have been made to register rigid objects with point-correspondence networks [27,34,35], which showed good results in both efficiency and accuracy on anatomic structures. This approach avoids costly and unreliable iterative pose searching and corrects out-of-plane errors with multiple views.

    2.2. Domain adaptation

    The discrepancy between synthetic data (i.e., DRRs) and authentic data (i.e., fluoroscopic images), also known as domain shift, is another challenge for learning-based registration strategies, in which the training data and future data must be in the same feature space and have the same distribution [36]. Compared with building complicated models for DRR generation, domain adaptation has emerged as a promising and relatively effortless strategy to account for the domain difference between image sources [37], and it has been applied in many medical applications, such as X-ray segmentation [38] and multi-modal image registration [21,22,39]. For 2D–3D registration, Zheng et al. [21] proposed integrating a pairwise domain-adaptation module into a pre-trained CNN that performs rigid registration using a limited amount of training data. The network was trained on DRRs and performed well on synthetic data; the authentic features were therefore transferred close to the synthetic features with domain adaptation. However, existing methods are still inappropriate for natural joints, such as knees and hips. Therefore, a registration approach designed for natural joints that does not require numerous clinical X-ray images for training is needed.

    3. Methods

    The aim of 2D–3D registration is to estimate the six degrees of freedom (6DOF) pose of 3D volume data from pairs of 2D multiview fluoroscopic images. In the following section, we begin with an overview of the tracking system and multi-view point-based 2D–3D registration (Section 3.1). Then, details of the two main components of our work are given in Section 3.2 and Section 3.3.

    3.1. Multi-view point-based registration

    3.1.1. 2D–3D rigid registration with 6DOF

    We consider the registration of each bone in the knee joint as a separate 2D–3D registration procedure. The pose of each bone is reproduced as the 3D alignment of the CT volume data V through a transformation matrix T4×4, which is parameterized by six elements of translation and rotation (x, y, z, γ, α, β) using Euler angles [40]. The transformation matrix T4×4 can be represented as a homogeneous 4 × 4 matrix, and the pose P can be derived as follows:
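The display equation did not survive extraction. Consistent with the parameterization described above, the homogeneous transformation and pose would take the standard form (a reconstruction, not necessarily the authors' exact typesetting):

```latex
T_{4\times 4} =
\begin{bmatrix}
R(\gamma,\alpha,\beta) & \mathbf{t} \\
\mathbf{0}^{\mathsf{T}} & 1
\end{bmatrix},
\qquad
P = \left(x,\, y,\, z,\, \gamma,\, \alpha,\, \beta\right)
```

where R(γ, α, β) is the 3 × 3 rotation matrix built from the three Euler angles and t = (x, y, z)ᵀ is the translation vector.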

    3.1.2. 3D projection geometry of X-ray imaging

    In the virtual DFIS, the four corners of each imaging plane and the location of the X-ray sources were used to establish the optical pinhole model during DRR generation (Fig. 1). After polynomial-based distortion correction and spatial calibration of the two-view fluoroscopy, DRRs were generated by the ray-casting algorithm [41] with segmented CT volume data using Amira software (Thermo Fisher Scientific, USA). Combining the transformation matrix T4×4, the final DRR IDRR can be computed as follows:
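The equation referenced here is missing from the extracted text. Given the where-clause that follows, the DRR intensity is plausibly the line integral of the attenuation along each casting ray through the transformed volume (a reconstruction):

```latex
I_{\mathrm{DRR}}(s) = \int_{l(p,\,s)} \mu\!\left(T_{4\times 4}\, p\right)\, \mathrm{d}p
```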

    where l(p, s) is the ray s connecting the X-ray source and the image plane in the X-ray imaging model, p is a point on the ray, and μ(·) represents the attenuation coefficient at a given point in the volume data.
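The ray-casting integration above can be sketched numerically: each detector pixel accumulates the attenuation μ sampled along the ray from the X-ray source to that pixel. This is an illustrative nearest-neighbour sampler, not the Amira implementation the authors used; the function name, its arguments, and the isotropic-voxel assumption are ours.

```python
import numpy as np

def render_drr_pixel(volume, spacing, source, pixel_center, n_samples=256):
    """Toy ray-casting for a single DRR pixel: integrate the attenuation
    mu along the ray from the X-ray source to the pixel center.

    volume       : (Z, Y, X) array of attenuation coefficients (from CT)
    spacing      : isotropic voxel size in mm (simplifying assumption)
    source, pixel_center : (3,) world coordinates (x, y, z) in mm
    """
    ts = np.linspace(0.0, 1.0, n_samples)
    points = source[None, :] + ts[:, None] * (pixel_center - source)[None, :]
    idx = np.round(points / spacing).astype(int)  # nearest-neighbour lookup
    # Keep only samples whose (x, y, z) indices fall inside the volume.
    inside = np.all((idx >= 0) & (idx < np.array(volume.shape)[::-1]), axis=1)
    mu = np.zeros(n_samples)
    mu[inside] = volume[idx[inside, 2], idx[inside, 1], idx[inside, 0]]
    step = np.linalg.norm(pixel_center - source) / (n_samples - 1)
    return float(mu.sum() * step)  # discretized line integral
```

For a uniform volume of attenuation 1, the integral reduces to the in-volume ray length, which is a convenient sanity check.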

    3.1.3. Registration by tracking multi-view points

    The final pose of each bone was reproduced with transformation matrix T.
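The transformation matrix T is determined from the reconstructed 3D points by Procrustes analysis (as stated in the Fig. 2 caption [44]). A minimal least-squares rigid fit (the Kabsch solution to the orthogonal Procrustes problem) between the model points and their reconstructed positions can be sketched as follows; the function name and interface are illustrative, not the authors' code:

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch / orthogonal Procrustes).

    src, dst : (N, 3) corresponding point sets (selected bone-model points
    and their reconstructed 3D positions).  Returns a 4x4 homogeneous
    matrix T such that dst_i ~= R @ src_i + t.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```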

    3.2. Pseudo-Siamese point tracking network

    In the proposed method, we used a pseudo-Siamese network to track points in each view. The pseudo-Siamese network has two branches: one is a visual geometry group (VGG) network [45] for extracting features from DRRs, and the other is a feature-transfer network, which transfers authentic features to synthetic features (Section 3.3). The overall workflow is shown in Fig. 3. The input of the network was unpaired DRRs and fluoroscopic images, and the output was the tracked points on the fluoroscopic images. In the upper branch of the network (Fig. 3), the exported features FDRR around each selected point have the size M × N × C, where M and N are respectively the width and height of the DRR, and C is the number of feature channels. In the lower branch of the network, the features of the fluoroscopic images, Ffluoro, were exported by the feature-transfer network without weight sharing. With the extracted features FDRR and Ffluoro, a convolutional layer was applied to quantify the similarity between the two feature maps [27]. The similarity is denoted as
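The similarity display is missing from the extraction. In point-tracking networks of this kind (cf. the POINT² formulation [27]), the weighted template features are cross-correlated with the fluoroscopic feature map by a convolutional layer; a plausible reconstruction, with ∗ denoting cross-correlation, is:

```latex
S = \left(W \cdot F_{\mathrm{DRR}}\right) \ast F_{\mathrm{fluoro}}
```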

    where W is a learned weighting factor for finding a better similarity for each selected point. The objective function to be minimized during training is the Euclidean loss (i.e., the registration loss), defined as
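The loss display is missing; a Euclidean (L2) registration loss over the selected points, consistent with the where-clause that follows, would read:

```latex
L_{\mathrm{reg}} = \sum_{i} \left\| p_{\mathrm{fluoro}}^{\,i} - p_{\mathrm{DRR}}^{\,i} \right\|_{2}^{2}
```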

    where pfluoro denotes the tracked 2D points and pDRR denotes the projected 2D points in the DRR with known locations. With the tracked 2D points from the different views, the 3D points were reconstructed using triangulation [43].
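The triangulation step [43] can be illustrated with a standard linear (direct linear transform, DLT) two-view reconstruction. This is a generic sketch, not the authors' code, and assumes calibrated 3 × 4 projection matrices for the two fluoroscopic views:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : (3, 4) projection matrices of the two fluoroscopic views
    x1, x2 : (2,) tracked 2D point in each view (image coordinates)
    Returns the reconstructed 3D point as a (3,) array.
    """
    # Each view contributes two linear constraints on the homogeneous 3D point.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: right singular vector of the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```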

    3.3. Feature transfer using domain adaption

    For feature extraction of fluoroscopic images, we proposed a transfer-learning-based method to reduce the domain difference between synthetic images (e.g., the DRRs) and authentic X-ray images (e.g., the fluoroscopic images) (Fig. 4).

    Fig. 2. The workflow of the multi-view point-based registration method. A set of points was selected on the bone surface, and their 2D projections were tracked from each view in the virtual DFIS to reconstruct their 3D positions. The final transformation matrix was determined by the reconstructed points using Procrustes analysis [44].

    Fig. 3. The framework of the point-tracking network. Pairs of DRRs and fluoroscopic images were imported into the network, and their features were extracted by a VGG network and a feature-transfer network, respectively. The selected points were tracked on the fluoroscopic images by searching for the most similar feature patch around the selected points in the DRRs. Conv: convolution layers.

    Fig. 4. Feature-transfer network with the paired synthetic image and authentic image. Synthetic images (i.e., DRRs) were generated at the pose after manual registration.

    To close the gap between the two domains, we used a domain-adaptation method. That is, an additional coupled VGG network with cosine similarity was set up during feature extraction of the fluoroscopic images (Fig. 5). Pairs of DRRs and fluoroscopic images, which share the same locations of the volume data through a model-based manual registration method [9], were used for training. We used cosine similarity as the cost function to measure the gap between the two domains. For the tracking problem, the cosine similarity can be stated as
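The cosine-similarity display is missing from the extraction; from the where-clause that follows, it is the standard form:

```latex
\mathrm{sim}\left(F_X, F_D\right) =
\frac{\left\langle F_X,\, F_D \right\rangle}{\left\| F_X \right\| \left\| F_D \right\|}
```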

    where ‖·‖ denotes the L2-norm, 〈·〉 denotes the dot product, and FX and FD are the feature maps. To improve the performance of the feature transfer, we optimized the proposed method with weights pre-trained on ImageNet.

    Fig. 5. The architecture of synthetic X-ray image feature extraction.

    4. Experiments and results

    4.1. Dataset

    In this institutional-review-board-approved study, we collected CT images of three subjects’ knees, and all subjects performed two or three motions that were captured by a bi-plane fluoroscopy system (BV Pulsera, Philips, the Netherlands) at a frame rate of 30 frames per second. CT scans (SOMATOM Definition AS; Siemens, Germany) of each knee, ranging approximately 30 cm proximal and distal to the knee joint line (thickness, 0.6 mm; resolution, 512 × 512 pixels), were obtained. The size of the fluoroscopic images was 1024 × 1024 pixels with a pixel spacing of 0.28 mm. Geometric parameters of the bi-plane fluoroscopy imaging model, such as the polynomial distortion-correction parameters [46] and the locations of the X-ray source and detector plane, were used to establish a virtual DFIS, in which the poses of each bone were reproduced manually [47]. In this study, 143 pairs of matched fluoroscopic images were used (Fig. 6), of which 91 pairs were used for training the feature-transfer network and the point-tracking network, and the remaining images were used as the testing set. Additionally, a three-fold validation was performed in the study. To evaluate the 2D–3D registration algorithm, a widely used 3D error measurement, the target registration error (TRE), was applied [48]. We computed the mean TRE (mTRE) to determine the 3D error. The average distance between the selected points defines the mTRE.
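The mTRE display is missing from the extraction; as the average distance between the selected points and the estimated points, it reads:

```latex
\mathrm{mTRE} = \frac{1}{N} \sum_{i=1}^{N}
\left\| P_{\mathrm{bone}}^{\,i} - P_{E}^{\,i} \right\|_{2}
```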

    where Pbone denotes the selected points and PE denotes the estimated points. The success rate was defined as the percentage of all test cases with an mTRE of less than 10 mm.
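The two evaluation metrics follow directly from their definitions above and can be sketched as simple helpers (illustrative names; the 10 mm threshold follows the text):

```python
import numpy as np

def mtre(p_bone, p_est):
    """Mean target registration error: average Euclidean distance between
    the selected bone points and their estimated positions, both (N, 3)."""
    return float(np.linalg.norm(p_bone - p_est, axis=1).mean())

def success_rate(mtre_values, threshold_mm=10.0):
    """Fraction of test cases whose mTRE falls below the threshold (mm)."""
    errors = np.asarray(mtre_values, dtype=float)
    return float((errors < threshold_mm).mean())
```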

    4.2. Loss selection in cross-domain feature extraction analysis

    We defined cosine similarity as the loss function for feature extraction on the authentic X-ray images. To find a better loss function, we also tested the mean squared error [22]. The position of the loss function may also affect the final performance of the feature-extraction layer. Thus, we first compared the effects of loss functions located at different convolution layers. To obtain the best cross-domain features from the real fluoroscopic images, we placed the defined loss function between the pairs of conv2 layers, conv3 layers, conv4 layers, and conv5 layers. In our data (Fig. 7), we preferred cosine similarity as the loss function because it yielded better performance in the final registration result for the entire knee joint. Cosine similarity showed the best performance between the conv5 layers (see details in Appendix A, Table S1).

    Fig. 6. Paired raw fluoroscopic images and the corresponding images after manual matching. The raw fluoroscopic images are (a) and (b), in which additional noise (wearable electromyography sensors) can be seen on the surface of the lower limb. As described in the previous study [6], manual registration was performed until the projections of the surface bone model matched the outlines of the fluoroscopic images; the matched results are shown in (c) and (d). Reproduced from Ref. [6] with permission of Elsevier Ltd., © 2011.

    4.3. With or without transfer training network analysis

    Fig. 7. The success rate using cosine similarity and mean squared error (MSE) at different convolutional layers.

    Fig. 8. Mean target registration error with different registration networks.

    To test the effects of the proposed feature-based transfer-learning method, we compared it with the Siamese registration network (i.e., the POINT² network) [27]. Moreover, fine-tuning, a widely used transfer-learning tool, was also compared in the current study to find a better way to reduce the differences between the fluoroscopic images and DRRs. The weights of the proposed method were pre-trained on the ImageNet database. The average performance over 10 tests for each method was used as the final performance. The mTRE results are reported in terms of the 10th, 25th, 50th, 75th, and 95th percentiles to demonstrate the robustness of the compared methods. The proposed feature-based transfer-learning method performed significantly better than the Siamese registration network (Fig. 8), and it also performed better than fine-tuning, whose success rate was almost zero (Table S2 in Appendix A).

    4.4. Three-fold cross-validation

    We used three-fold cross-validation in this study and compared the proposed pseudo-Siamese registration network with and without transfer learning. In each fold, two of the three subjects were used for training the system, and the remaining subject was used for validation. This approach was iterated ten times by shifting the test subject randomly. The performance (mTRE) was evaluated in each iteration. Finally, the performances recorded over all ten iterations were averaged to obtain a final mTRE. The mTRE results are reported in terms of the 10th, 25th, 50th, 75th, and 95th percentiles (Table 1). The final three-fold cross-validation showed that the proposed method also performed better with feature transfer.

    5. Conclusions

    To overcome the limited number of real fluoroscopic images available for learning-based 2D–3D rigid registration via DRRs, we proposed a pseudo-Siamese multi-view point-based registration framework. The proposed method can decrease the demand for real X-ray images. With the ability to transfer authentic features to synthetic features, the proposed method outperforms the fine-tuned pseudo-Siamese network. This study also evaluated the POINT² network with and without transfer learning. The results showed that the proposed pseudo-Siamese network has a better success rate and accuracy than the Siamese point-tracking network. With a small amount of training data, the proposed method can work as an initialization step for optimization-based registration to improve accuracy. However, there are several limitations to the current work. First, because our method is designed for at least two fluoroscopic views, multi-view data were required to reconstruct the knee poses; otherwise, the out-of-plane translation and rotation errors would be large because of the physical imaging model. Second, the proposed method cannot reach sub-millimeter accuracy, unlike an optimization-based strategy. Like other learning-based strategies, our method trades some accuracy for speed: it is much faster than the optimization-based method because no iterative step is needed during matching. In clinical orthopedic practice, accurate joint kinematics is essential for determining a rehabilitation scheme [5], surgical planning [1], and functional evaluation [47]. The proposed method alone is therefore inappropriate for in-vivo joint kinematics measurement, and a combination of our method with an optimization-based strategy would be a viable solution.

    Table 1 Three-fold cross-validation with and without transfer learning.

    Acknowledgements

    This project was sponsored by the National Natural Science Foundation of China (31771017, 31972924, and 81873997), the Science and Technology Commission of Shanghai Municipality (16441908700), the Innovation Research Plan supported by the Shanghai Municipal Education Commission (ZXWF082101), the National Key R&D Program of China (2017YFC0110700, 2018YFF0300504, and 2019YFC0120600), the Natural Science Foundation of Shanghai (18ZR1428600), and the Interdisciplinary Program of Shanghai Jiao Tong University (ZH2018QNA06 and YG2017MS09).

    Compliance with ethics guidelines

    Cong Wang, Shuaining Xie, Kang Li, Chongyang Wang, Xudong Liu, Liang Zhao, and Tsung-Yuan Tsai declare that they have no conflict of interest or financial conflicts to disclose.

    Appendix A. Supplementary data

    Supplementary data to this article can be found online at https://doi.org/10.1016/j.eng.2020.03.016.
