
    Multi-View Point-Based Registration for Native Knee Kinematics Measurement with Feature Transfer Learning

    Engineering, 2021, Issue 6

    Cong Wang, Shuaining Xie, Kang Li, Chongyang Wang, Xudong Liu*, Liang Zhao*, Tsung-Yuan Tsai a,c,*

    a Shanghai Key Laboratory of Orthopaedic Implants & Clinical Translational R&D Center of 3D Printing Technology, Department of Orthopaedic Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine; School of Biomedical Engineering & Med-X Research Institute, Shanghai Jiao Tong University, Shanghai 200030, China

    b SenseTime Research, Shanghai 200233, China

    c Engineering Research Center of Digital Medicine and Clinical Translation, Ministry of Education, Shanghai 200030, China

    d Department of Orthopaedics, New Jersey Medical School, Rutgers University, Newark, NJ 07103, USA

    e Department of Orthopaedics, Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Shanghai 200233, China

    Keywords:

    ABSTRACT Deep-learning methods provide a promising approach for measuring in-vivo knee joint motion through fast registration of two-dimensional (2D) to three-dimensional (3D) data with a broad capture range. However, a data-driven approach fails when training data are insufficient. We propose a feature-based transfer-learning method to extract features from fluoroscopic images. With three subjects and fewer than 100 pairs of real fluoroscopic images, we achieved a mean registration success rate of up to 40%. The proposed method provides a promising solution for applying learning-based registration when only a limited number of real fluoroscopic images is available.

    1. Introduction

    Accurate kinematics of the knee joint is critical in many orthopedic applications for understanding aspects such as the normal function of the joint [1], the development of knee osteoarthritis [2], the mechanisms of knee injuries [3], the optimization of prosthesis design [4], preoperative planning, and postoperative rehabilitation [5]. The measurement of knee kinematics is also essential for biomechanical studies of the musculoskeletal system. Given the significant clinical demand for kinematic data, an efficient and reliable method for measuring the dynamic motion of the joint is needed.

    Various measurement tools are now available for researchers to quantify three-dimensional (3D) knee kinematics, but only a few of them can provide millimeter-scale accuracy and rapid tracking velocity. Skin-marker-based optical tracking systems are widely used in the analysis of human motion, but their accuracy is affected by marker-associated soft-tissue artifacts, which can cause displacements of up to 40 mm [6]. Although several researchers have attempted to reduce the effects of soft-tissue artifacts by building mathematical models [7–9], the issue remains unsolved for any skin-marker-based motion-capture technique [10]. With the development of medical imaging, some techniques, such as real-time magnetic resonance (MR) tomography and computed tomography (CT) [11,12], can measure dynamic joint kinematics directly. However, the clinical adoption of these techniques is limited by low temporal resolution, a restricted range of motion (ROM), the need to control motion speed, low image quality, and nonnegligible amounts of radiation [13,14]. In the past decade, the dual-fluoroscopic imaging system (DFIS) has been widely used and well accepted for accurate in-vivo joint motion analysis because of its high accuracy [15], accessibility, sufficient ROM [16], and low radiation levels compared with traditional radiography (Fig. 1).

    Fig. 1. Virtual DFIS for measuring the dynamic motion of knee joints.

    To find the pose of the object (i.e., the native knee joint) in DFIS, two-dimensional (2D) to 3D registration, which aligns the volume data (e.g., CT) with fluoroscopy (continuous X-ray images), is applied in the measurement procedure. The 3D position of the CT volume is adjusted iteratively, and a large number of digitally reconstructed radiographs (DRRs) are generated until the DRR is most similar to the X-ray image [17]. With the increasing use of DFIS in clinical applications, researchers have attempted various automatic registration methods to accelerate the matching procedure. Optimization-based registration, which is composed of an optimizer and a similarity metric between images, has been investigated extensively [18,19]. Although the accuracy of optimization-based registration is high [20–22], several drawbacks, such as the strictly required registration initialization and the high computational cost of calculating DRRs and iterating during optimization, limit the widespread use of DFIS [23].

    With the rapid development of machine learning [24,25] in recent years, several learning-based methods have been developed to measure joint kinematics, offering computational efficiency and an enhanced capture range compared with optimization-based methods [21,26–28]. However, these methods are usually trained with synthetic X-ray images (i.e., DRRs), because training such models with a large amount of authentic labeled data is impractical. Even so, a considerable number of authentic images is still necessary to ensure the robustness of registration [22,27]. Another consideration is the discrepancy between DRRs and X-ray images. Compared with DRRs, fluoroscopic images show blurred edges, geometric distortion, and nonuniform intensity [29,30]; therefore, networks trained on DRRs do not generalize well to fluoroscopic images [22]. Previous studies have established various physical models to generate more realistic DRRs through additional measurements of X-ray quality [31,32]. Recently, a survey by Haskins et al. [24] has shown the potential of transfer learning in such cross-modal registration, which may save the effort of building complicated DRR models or collecting authentic clinical images.

    In our work, we developed a pseudo-Siamese multi-view point-based registration framework to address the problem of a limited number of real fluoroscopic images. The proposed method combines a pseudo-Siamese point-tracking network with a feature-transfer network. The pose of the knee joint was estimated by tracking selected points on the knee bones with the multi-view point-based registration network, using paired DRRs and fluoroscopy. A feature extractor was trained by the feature-learning network with pairs of DRRs and fluoroscopic images. To overcome the limited number of authentic fluoroscopic images, we trained the multi-view point-based registration network with DRRs and pre-trained the feature-learning network on ImageNet.

    The remainder of this paper is organized as follows. Section 2 reviews deep-learning-based 2D–3D registration and domain adaptation. Section 3 presents the proposed learning-based 2D–3D registration method. Section 4 presents the experiments and results, and Section 5 concludes the paper.

    2. Related work

    2.1. Learning-based strategy

    To avoid the large computational cost of optimization-based registration, researchers have recently developed learning-based registration [24]. Considering the success of convolutional neural networks (CNNs), feature extraction from both DRRs and fluoroscopic images has been proposed, with the pose of the rigid object then estimated by a hierarchical regressor [33]. The CNN model improves the robustness of registration, but it is limited to objects with strong features, such as medical implants, and cannot register native anatomic structures. Miao et al. [28] proposed a reinforcement-learning network to register X-ray and CT images of the spine with a Markov decision process. Although they improved the method with a multi-agent system, it may still fail because the search does not always converge. Recently, several attempts have been made to register rigid objects with point-correspondence networks [27,34,35], which showed good results in both efficiency and accuracy on anatomic structures. This approach avoids costly and unreliable iterative pose searching and corrects out-of-plane errors with multiple views.

    2.2. Domain adaptation

    The discrepancy between synthetic data (i.e., DRRs) and authentic data (i.e., fluoroscopic images), also known as domain shift, is another challenge for learning-based registration strategies, which assume that training data and future data lie in the same feature space and follow the same distribution [36]. Compared with building complicated models for DRR generation, domain adaptation has emerged as a promising and relatively effortless strategy to account for the difference between image sources [37], and it has been applied in many medical applications, such as X-ray segmentation [38] and multi-modal image registration [21,22,39]. For 2D–3D registration, Zheng et al. [21] proposed integrating a pairwise domain-adaptation module into a pre-trained CNN that performs rigid registration using a limited amount of training data. The network was trained on DRRs and performed well on synthetic data; the authentic features were then transferred close to the synthetic features with domain adaptation. However, existing methods are still inappropriate for natural joints, such as knees and hips. Therefore, a registration approach designed for natural joints that does not require numerous clinical X-ray images for training is needed.

    3. Methods

    The aim of 2D–3D registration is to estimate the six-degree-of-freedom (6DOF) pose of 3D volume data from pairs of 2D multi-view fluoroscopic images. In the following sections, we begin with an overview of the tracking system and multi-view point-based 2D–3D registration (Section 3.1). Then, details of the two main components of our work are given in Section 3.2 and Section 3.3.
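    To make the 6DOF parameterization concrete, here is a minimal sketch of building a homogeneous transform from the six pose parameters. This assumes NumPy and a Z-Y-X Euler convention, which the paper does not specify; the function name `pose_to_matrix` is my own, not from the authors.

    ```python
    import numpy as np

    def pose_to_matrix(x, y, z, gamma, alpha, beta):
        """Build a homogeneous 4x4 transform from a 6DOF pose (x, y, z, gamma, alpha, beta).

        The Z-Y-X rotation order used here is an assumption; the paper only
        states that the pose is parameterized by Euler angles [40].
        """
        cg, sg = np.cos(gamma), np.sin(gamma)
        ca, sa = np.cos(alpha), np.sin(alpha)
        cb, sb = np.cos(beta), np.sin(beta)
        Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
        Ry = np.array([[ca, 0, sa], [0, 1, 0], [-sa, 0, ca]])
        Rx = np.array([[1, 0, 0], [0, cb, -sb], [0, sb, cb]])
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx  # composed rotation
        T[:3, 3] = [x, y, z]      # translation
        return T
    ```

    Any fixed Euler convention yields a valid rigid transform; only the mapping from angles to matrix changes with the convention.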

    3.1. Multi-view point-based registration

    3.1.1. 2D–3D rigid registration with 6DOF

    We consider the registration of each bone in the knee joint as a separate 2D–3D registration procedure. Pose reproduction of each bone is denoted as the 3D alignment of the CT volume data V through a transformation matrix T4×4, which is parameterized by six elements of translation and rotation (x, y, z, γ, α, β) using Euler angles [40]. The transformation matrix T4×4 can be represented as a homogeneous 4 × 4 matrix, and the pose P can be derived as follows:
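    The display equation here did not survive extraction. A standard reconstruction, consistent with the six pose parameters defined above (the exact Euler rotation order is an assumption), is:

    ```latex
    \mathbf{T}_{4\times4} =
    \begin{bmatrix}
    \mathbf{R}_{3\times3}(\gamma,\alpha,\beta) & \mathbf{t}_{3\times1}\\
    \mathbf{0}_{1\times3} & 1
    \end{bmatrix},
    \qquad
    \mathbf{P} = (x, y, z, \gamma, \alpha, \beta)
    ```

    where t3×1 = (x, y, z)ᵀ and R3×3 is the rotation matrix composed from the Euler angles (γ, α, β).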

    3.1.2. 3D projection geometry of X-ray imaging

    In the virtual DFIS, the four corners of each imaging plane and the location of the X-ray sources were used to establish the optical pinhole model during DRR generation (Fig. 1). After a polynomial-based distortion correction and spatial calibration of the two-view fluoroscopy, DRRs were generated by the ray-casting algorithm [41] with segmented CT volume data using Amira software (Thermo Fisher Scientific, USA). Combining the transformation matrix T4×4, the final DRR IDRR can be computed as follows:
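    The DRR equation was lost in extraction. The standard ray-casting line integral, written with the symbols defined in the following line (a reconstruction, not necessarily the authors' exact notation), is:

    ```latex
    I_{\mathrm{DRR}}(s) = \int_{p \,\in\, l(p,s)} \mu\!\left(\mathbf{T}_{4\times4}\, p\right)\, \mathrm{d}p
    ```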

    where l(p,s) is the ray s connecting the X-ray source and the image plane in the X-ray imaging model, and p is a point on the ray. μ(·) represents the attenuation coefficient at a point in the volume data.

    3.1.3. Registration by tracking multi-view points

    The final pose of each bone was reproduced with transformation matrix T.

    3.2. Pseudo-Siamese point tracking network

    In the proposed method, we used a pseudo-Siamese network to track points in each view. The pseudo-Siamese network has two branches: one is a visual geometry group (VGG) network [45] for extracting features from DRRs, and the other is a feature-transfer network, which transfers authentic features to synthetic features (Section 3.3). The overall workflow is shown in Fig. 3. The input of the network was paired DRRs and fluoroscopic images, and the output was the tracked points on the fluoroscopic images. In the upper branch of the network (Fig. 3), the exported features FDRR around each selected point have the size M × N × C, where M and N are respectively the width and height of the DRR and C is the number of feature channels. In the lower branch of the network, the features of the fluoroscopic images, Ffluoro, were exported by the feature-transfer network without weight sharing. With the extracted features FDRR and Ffluoro, a convolutional layer was applied to quantify the similarity between the two feature maps [27]. The similarity is denoted as
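    The similarity expression itself was lost in extraction. One plausible reconstruction, consistent with the learned weighting factor W introduced in the following line (the exact operator is an assumption based on the cited point-tracking literature [27]), is:

    ```latex
    S = W \ast \left(F_{\mathrm{DRR}} \star F_{\mathrm{fluoro}}\right)
    ```

    where ⋆ denotes cross-correlation of the DRR feature patch over the fluoroscopic feature map and ∗ the learned convolution with weights W.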

    where W is a learned weighting factor for finding a better similarity for each selected point. The objective function minimized during training is the Euclidean loss (i.e., registration loss), defined as
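    The loss equation did not survive extraction. A standard Euclidean (L2) registration loss over the K selected points, consistent with the symbols defined in the following line, is:

    ```latex
    L_{\mathrm{reg}} = \frac{1}{K} \sum_{k=1}^{K} \left\lVert p_{\mathrm{fluoro}}^{(k)} - p_{\mathrm{drr}}^{(k)} \right\rVert_{2}^{2}
    ```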

    where pfluoro denotes the tracked 2D points and pdrr denotes the projected 2D points in the DRR with known locations. With the tracked 2D points from the different views, the 3D points were reconstructed using triangulation [43].
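    The triangulation step [43] can be sketched as a standard linear (direct linear transform, DLT) solve. This is an illustrative implementation under assumed 3 × 4 pinhole projection matrices, not the authors' code:

    ```python
    import numpy as np

    def triangulate(proj_mats, points_2d):
        """Linear (DLT) triangulation of one 3D point from two or more views.

        proj_mats: list of 3x4 camera projection matrices (one per fluoroscopic view).
        points_2d: list of (u, v) tracked pixel coordinates, one per view.
        Returns the 3D point minimizing the algebraic reprojection residual.
        """
        A = []
        for P, (u, v) in zip(proj_mats, points_2d):
            # Each view contributes two linear constraints on the homogeneous point.
            A.append(u * P[2] - P[0])
            A.append(v * P[2] - P[1])
        _, _, vt = np.linalg.svd(np.asarray(A))
        X = vt[-1]            # right singular vector of the smallest singular value
        return X[:3] / X[3]   # dehomogenize
    ```

    With two calibrated views the out-of-plane coordinate is constrained, which is why the paper emphasizes multi-view data.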

    3.3. Feature transfer using domain adaption

    For feature extraction of fluoroscopic images, we proposed a transfer-learning-based method to reduce the domain difference between synthetic images (e.g., the DRRs) and authentic X-ray images (e.g., the fluoroscopic images) (Fig. 4).

    Fig. 2. The workflow of the multi-view point-based registration method. A set of points was selected on the bone surface, and their 2D projections were tracked from each view in the virtual DFIS to reconstruct their 3D positions. The final transformation matrix was determined by the reconstructed points using Procrustes analysis [44].
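    The Procrustes step mentioned in the caption above has a closed-form solution (the Kabsch algorithm). A minimal NumPy illustration, not the authors' implementation (`rigid_fit` is a hypothetical name):

    ```python
    import numpy as np

    def rigid_fit(src, dst):
        """Least-squares rigid transform (Kabsch/Procrustes) mapping src -> dst.

        src, dst: (N, 3) arrays of corresponding 3D points
        (e.g., selected bone-surface points and their reconstructed positions).
        Returns a 4x4 homogeneous matrix T with dst ~= R @ src + t.
        """
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        cs, cd = src.mean(0), dst.mean(0)
        H = (src - cs).T @ (dst - cd)          # cross-covariance of centered points
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection solution
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = cd - R @ cs
        return T
    ```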

    Fig. 3. The framework of the point-tracking network. Pairs of DRRs and fluoroscopic images were imported into the network, and their features were extracted by a VGG and a feature-transfer network, respectively. The selected points were tracked on the fluoroscopic images by searching for the most similar feature patch around the selected points in the DRRs. Conv: convolution layers.

    Fig. 4. Feature-transfer network with the paired synthetic image and authentic image. Synthetic images (i.e., DRRs) were generated at the pose after manual registration.

    To close the gap between the two domains, we used a domain-adaptation method: an additional coupled VGG network with a cosine-similarity loss was set up during feature extraction from the fluoroscopic images (Fig. 5). Pairs of DRRs and fluoroscopic images, which share the same locations of the volume data through a model-based manual registration method [9], were used for training. We used cosine similarity as the cost function to measure the gap between the two domains. For the tracking problem, the cosine similarity can be stated as
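    The cosine-similarity expression was lost in extraction; from the definitions in the following line it reconstructs directly as:

    ```latex
    \cos\left(F_X, F_D\right) = \frac{\left\langle F_X, F_D \right\rangle}{\left\lVert F_X \right\rVert \, \left\lVert F_D \right\rVert}
    ```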

    where ‖·‖ denotes the L2-norm, 〈·〉 denotes the dot product, and FX and FD are the feature maps. To improve the performance of feature transfer, we initialized the proposed method with weights pre-trained on ImageNet.
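    As an illustration of the cosine-similarity cost described above, a minimal NumPy sketch (the function name is mine; a real training pipeline would use the deep-learning framework's built-in equivalent):

    ```python
    import numpy as np

    def cosine_loss(f_x, f_d):
        """Domain-adaptation cost: 1 - cosine similarity between flattened
        feature maps of a fluoroscopic image (f_x) and its paired DRR (f_d).
        Zero when the feature maps point in the same direction."""
        f_x = np.ravel(f_x).astype(float)
        f_d = np.ravel(f_d).astype(float)
        cos = f_x @ f_d / (np.linalg.norm(f_x) * np.linalg.norm(f_d))
        return 1.0 - cos
    ```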

    Fig. 5. The architecture of synthetic X-ray image feature extraction.

    4. Experiments and results

    4.1. Dataset

    In this institutional-review-board-approved study, we collected CT images of three subjects' knees; all subjects performed two or three motions that were captured by a bi-plane fluoroscopy system (BV Pulsera, Philips, the Netherlands) at 30 frames per second. CT scans (SOMATOM Definition AS; Siemens, Germany) of each knee, extending approximately 30 cm proximal and distal to the knee joint line (slice thickness, 0.6 mm; resolution, 512 × 512 pixels), were obtained. The size of the fluoroscopic images was 1024 × 1024 pixels with a pixel spacing of 0.28 mm. Geometric parameters of the bi-plane fluoroscopic imaging model, such as polynomial distortion-correction parameters [46] and the locations of the X-ray source and detector plane, were used to establish a virtual DFIS, in which the poses of each bone were reproduced manually [47]. In this study, 143 pairs of matched fluoroscopic images were used (Fig. 6), of which 91 pairs were used for training the feature-transfer network and the point-tracking network, and the remaining images were used as the testing set. Additionally, a three-fold validation was performed. To evaluate the 2D–3D registration algorithm, a widely used 3D error measurement, the target registration error (TRE), was applied [48]. We computed the mean TRE (mTRE) to quantify the 3D error. The average distance between the selected points defines the mTRE.
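    The mTRE formula did not survive extraction. From the definition above (the average distance over the N selected points) and the symbols in the following line, it reconstructs as:

    ```latex
    \mathrm{mTRE} = \frac{1}{N} \sum_{i=1}^{N} \left\lVert P_{\mathrm{bone}}^{(i)} - P_{\mathrm{E}}^{(i)} \right\rVert_{2}
    ```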

    where Pbone denotes the selected points and PE denotes the estimated points. The success rate was defined as the percentage of all test cases with an mTRE of less than 10 mm.
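    The two evaluation metrics can be sketched as follows (an illustrative NumPy version; the 10 mm threshold comes from the text, the function names are mine):

    ```python
    import numpy as np

    def mtre(p_bone, p_est):
        """Mean target registration error: average Euclidean distance between
        the selected bone points and their estimated positions (both (N, 3))."""
        diff = np.asarray(p_bone, float) - np.asarray(p_est, float)
        return float(np.mean(np.linalg.norm(diff, axis=1)))

    def success_rate(mtres, threshold=10.0):
        """Fraction of test cases whose mTRE falls below the threshold (10 mm)."""
        return float(np.mean(np.asarray(mtres, float) < threshold))
    ```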

    4.2. Loss selection in cross-domain feature extraction analysis

    We used cosine similarity as the loss function for feature extraction on the authentic X-ray images, and also evaluated the mean squared error as an alternative loss function [22]. The position of the loss function may also affect the final performance of feature extraction. Thus, we first compared the effects of loss functions placed at different convolution layers. To obtain the best cross-domain features from the real fluoroscopic images, we placed the loss function between the pairs of conv2, conv3, conv4, and conv5 layers. On our data (Fig. 7), we preferred cosine similarity as the loss function because it yielded a better final registration result for the entire knee joint. Cosine similarity performed best between the conv5 layers (see details in Appendix A, Table S1).

    Fig. 6. Paired raw fluoroscopic images and the corresponding images after manual matching. The raw fluoroscopic images are (a) and (b), in which additional noise (wearable electromyography sensors) can be seen on the surface of the lower limb. As described in the previous study [6], manual registration was performed until the projections of the surface bone model matched the outlines of the fluoroscopic images; the matched results are shown in (c) and (d). Reproduced from Ref. [6] with permission of Elsevier Ltd., © 2011.

    4.3. With or without transfer training network analysis

    Fig. 7. The success rate using cosine similarity and mean squared error (MSE) at different convolutional layers.

    Fig. 8. Mean target registration error with different registration networks.

    To test the effects of the proposed feature-based transfer-learning method, we compared it with the Siamese registration network (i.e., the POINT² network) [27]. Moreover, fine-tuning, a widely used transfer-learning tool, was also compared in the current study to find a better way to reduce the differences between fluoroscopic images and DRRs. The weights of the proposed method were pre-trained on the ImageNet database. The average performance over 10 tests for each method was taken as the final performance. The mTRE results are reported at the 10th, 25th, 50th, 75th, and 95th percentiles to demonstrate the robustness of the compared methods. The proposed feature-based transfer-learning method performed significantly better than the Siamese registration network (Fig. 8), and it also outperformed fine-tuning, whose success rate was almost zero (Table S2 in Appendix A).

    4.4. Three-fold cross-validation

    We used three-fold cross-validation in this study and compared the proposed pseudo-Siamese registration network with and without transfer learning. In each fold, two of the three subjects were used for training and the remaining subject was used for validation. This procedure was iterated ten times by shifting the test subject randomly. The performance (mTRE) was evaluated in each iteration, and the performances recorded over all ten iterations were averaged to obtain the final mTRE. The mTRE results are reported at the 10th, 25th, 50th, 75th, and 95th percentiles (Table 1). The three-fold cross-validation confirmed that the proposed method performs better with feature transfer.

    5. Conclusions

    To overcome the limited number of real fluoroscopic images in learning-based 2D–3D rigid registration via DRRs, we proposed a pseudo-Siamese multi-view point-based registration framework. The proposed method decreases the demand for real X-ray images. With the ability to transfer authentic features to synthetic features, the proposed method performs better than the fine-tuned pseudo-Siamese network. This study also evaluated the POINT² network with and without transfer learning. The results showed that the proposed pseudo-Siamese network has a better success rate and accuracy than the Siamese point-tracking network. With a small amount of training data, the proposed method can serve as an initialization step for optimization-based registration to improve accuracy. However, there are several limitations to the current work. First, because our method requires at least two fluoroscopic views, multi-view data were needed to reconstruct the knee poses; otherwise, the out-of-plane translation and rotation errors would be large because of the physical imaging model. Second, the proposed method cannot reach sub-millimeter accuracy, unlike an optimization-based strategy. Like other learning-based strategies, our method trades some accuracy for speed: it is much faster than optimization-based methods because no iterative step is needed during matching. In clinical orthopedic practice, accurate joint kinematics is essential for determining a rehabilitation scheme [5], surgical planning [1], and functional evaluation [47]. The proposed method alone is inappropriate for in-vivo joint kinematics measurement; therefore, combining our method with an optimization-based strategy would be a viable solution.

    Table 1 Three-fold cross-validation with and without transfer learning.

    Acknowledgements

    This project was sponsored by the National Natural Science Foundation of China (31771017, 31972924, and 81873997), the Science and Technology Commission of Shanghai Municipality (16441908700), the Innovation Research Plan supported by the Shanghai Municipal Education Commission (ZXWF082101), the National Key R&D Program of China (2017YFC0110700, 2018YFF0300504, and 2019YFC0120600), the Natural Science Foundation of Shanghai (18ZR1428600), and the Interdisciplinary Program of Shanghai Jiao Tong University (ZH2018QNA06 and YG2017MS09).

    Compliance with ethics guidelines

    Cong Wang, Shuaining Xie, Kang Li, Chongyang Wang, Xudong Liu, Liang Zhao, and Tsung-Yuan Tsai declare that they have no conflict of interest or financial conflicts to disclose.

    Appendix A. Supplementary data

    Supplementary data to this article can be found online at https://doi.org/10.1016/j.eng.2020.03.016.
