
    Multi-View Point-Based Registration for Native Knee Kinematics Measurement with Feature Transfer Learning

    Engineering, 2021, No. 6

    Cong Wang, Shuaining Xie, Kang Li, Chongyang Wang, Xudong Liu*, Liang Zhao*, Tsung-Yuan Tsai a,c,*

    a Shanghai Key Laboratory of Orthopaedic Implants & Clinical Translational R&D Center of 3D Printing Technology, Department of Orthopaedic Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine; School of Biomedical Engineering & Med-X Research Institute, Shanghai Jiao Tong University, Shanghai 200030, China

    b SenseTime Research, Shanghai 200233, China

    c Engineering Research Center of Digital Medicine and Clinical Translation, Ministry of Education, Shanghai 200030, China

    d Department of Orthopaedics, New Jersey Medical School, Rutgers University, Newark, NJ 07103, USA

    e Department of Orthopaedics, Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Shanghai 200233, China

    Keywords:

    ABSTRACT Deep-learning methods provide a promising approach for measuring in-vivo knee joint motion through fast registration of two-dimensional (2D) to three-dimensional (3D) data with a broad capture range. However, the data-driven approach fails when training data are insufficient. We propose a feature-based transfer-learning method to extract features from fluoroscopic images. With three subjects and fewer than 100 pairs of real fluoroscopic images, we achieved a mean registration success rate of up to 40%. The proposed method provides a promising solution for applying learning-based registration when only a limited number of real fluoroscopic images is available.

    1. Introduction

    Accurate kinematics of the knee joint is critical in many orthopedic applications for understanding aspects such as the normal function of the joint [1], the development of knee osteoarthritis [2], the mechanisms of knee injuries [3], the optimization of prosthesis design [4], preoperative planning, and postoperative rehabilitation [5]. The measurement of knee kinematics is also essential for biomechanical studies of the musculoskeletal system. Given the significant demand for kinematics in the clinical field, an efficient and reliable method to measure the dynamic motion of the joint is needed.

    Various measurement tools are now available for researchers to quantify three-dimensional (3D) knee kinematics, but only a few of them can provide millimeter-scale accuracy and rapid tracking velocity. Skin-marker-based optical tracking systems are widely used in the analysis of human motion, but their accuracy is affected by marker-associated soft-tissue artifacts, which can cause displacements of up to 40 mm [6]. Although several researchers have attempted to reduce the effects of soft-tissue artifacts by building mathematical models [7–9], the issue remains unsolved for any skin-marker-based motion-capture technique [10]. With the development of medical imaging, some techniques can measure dynamic joint kinematics directly, such as real-time magnetic resonance (MR) tomography and computed tomography (CT) [11,12]. However, the clinical adoption of these techniques has been limited by low temporal resolution, restricted range of motion (ROM), the need to control motion speed, low image quality, and nonnegligible amounts of radiation [13,14]. In the past decade, the dual-fluoroscopic imaging system (DFIS) has been widely used and well accepted for accurate in-vivo joint motion analysis because of its high accuracy [15], accessibility, sufficient ROM [16], and low radiation levels compared with traditional radiography (Fig. 1).

    Fig. 1. Virtual DFIS for measuring the dynamic motion of knee joints.

    To find the pose of the object (i.e., the native knee joint) in DFIS, two-dimensional (2D) to 3D registration, which aligns the volume data (e.g., CT) with fluoroscopy (continuous X-ray images), is applied in the measurement procedure. The 3D position of the CT volume is adjusted iteratively, and a large number of digitally reconstructed radiographs (DRRs) is generated simultaneously, until the DRR is most similar to the X-ray image [17]. With the increasing use of DFIS in clinical applications, researchers have attempted various automatic registration methods to accelerate the matching procedure. Optimization-based registration, which is composed of an optimizer and similarity metrics between images, has been investigated extensively [18,19]. Although the accuracy of optimization-based registration is high [20–22], several drawbacks, such as the strict requirement on registration initialization and the high computational cost of calculating DRRs and of the iterations during optimization, limit the widespread use of DFIS [23].
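The iterative adjust-render-compare loop described above can be sketched in a few lines. This is only an illustrative toy, not the cited optimizers: `render_drr` is a hypothetical renderer supplied by the caller, normalized cross-correlation stands in for the similarity metric, and a simple random search stands in for the optimizers of Refs. [18,19].

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images (higher = more similar)."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def register(volume, xray, render_drr, pose0, iters=200, step=1.0, seed=0):
    """Toy iterative 2D-3D registration: perturb the 6DOF pose at random and
    keep perturbations that make the rendered DRR more similar to the X-ray.
    `render_drr(volume, pose)` is a hypothetical DRR renderer."""
    rng = np.random.default_rng(seed)
    pose = np.asarray(pose0, dtype=float)
    best = ncc(render_drr(volume, pose), xray)
    for _ in range(iters):
        cand = pose + rng.normal(scale=step, size=6)
        score = ncc(render_drr(volume, cand), xray)
        if score > best:
            pose, best = cand, score
    return pose, best
```

The similarity score can only increase across iterations, which mirrors why such methods need a good initialization: a greedy search from a distant starting pose can stall in a local optimum.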

    With the rapid development of machine learning [24,25] in recent years, several learning-based methods have been developed to measure joint kinematics, with the advantages of computational efficiency and an enhanced capture range compared with optimization-based methods [21,26–28]. However, these methods are almost always trained with synthetic X-ray images (i.e., DRRs), because training such models with a large amount of authentic labeled data is impractical. Even so, a considerable number of authentic images is still necessary to ensure the robustness of registration [22,27]. Another consideration is the discrepancy between DRRs and X-ray images. Compared with DRRs, fluoroscopic images show blurred edges, geometric distortion, and nonuniform intensity [29,30]; therefore, networks trained on DRRs do not generalize ideally to fluoroscopic images [22]. Previous studies have established various physical models to generate more realistic DRRs through additional measurements of X-ray quality [31,32]. Recently, a survey by Haskins et al. [24] showed that transfer learning can be used in such cross-modal registration, which may save the effort of building complicated DRR models or collecting authentic clinical images.

    In our work, we developed a pseudo-Siamese multi-view point-based registration framework to address the problem of a limited number of real fluoroscopic images. The proposed method is a combination of a pseudo-Siamese point-tracking network and a feature-transfer network. The pose of the knee joints was estimated by tracking selected points on the knee joints with the multi-view point-based registration network, paired DRRs, and fluoroscopy. A feature extractor was trained by the feature-learning network with pairs of DRRs and fluoroscopic images. To overcome the limited number of authentic fluoroscopic images, we trained the multi-view point-based registration network with DRRs and pre-trained the feature-learning network on ImageNet.

    The remainder of this paper is organized as follows. Section 2 reviews deep-learning-based 2D–3D registration and domain adaptation. Section 3 presents the proposed learning-based 2D–3D registration method. Section 4 presents the experiments and results, and Section 5 concludes the paper.

    2. Related work

    2.1. Learning-based strategy

    To avoid the large computational costs of optimization-based registration, researchers have recently developed learning-based registration [24]. Considering the success of convolutional neural networks (CNNs), feature extraction from both DRRs and fluoroscopic images has been proposed. The pose of the rigid object was then estimated by a hierarchical regressor [33]. The CNN model improves the robustness of registration, but it is limited to objects with strong features, such as medical implants, and cannot perform the registration of native anatomic structures. Miao et al. [28] proposed a reinforcement learning network to register X-ray and CT images of the spine with a Markov decision process. Although they improved the method with a multi-agent system, the proposed method may still fail because it cannot converge during searching. Recently, several attempts have been made to register rigid objects with point-correspondence networks [27,34,35], which showed good results in both efficiency and accuracy on anatomic structures. Their method avoids costly and unreliable iterative pose searching and corrects the out-of-plane errors with multiple views.

    2.2. Domain adaptation

    The discrepancy between synthetic data (i.e., DRRs) and authentic data (i.e., fluoroscopic images), also known as domain shift, is another challenge for learning-based registration strategies, in which training data and future data must be in the same feature space and have the same distribution [36]. Compared with building complicated models for DRR generation, domain adaptation has emerged as a promising and relatively effortless strategy to account for the domain difference between different image sources [37], and it has been applied in many medical applications, such as X-ray segmentation [38] and multi-modal image registration [21,22,39]. For 2D–3D registration, Zheng et al. [21] proposed integrating a pairwise domain adaptation module into a pre-trained CNN that performs rigid registration using a limited amount of training data. The network was trained on DRRs and performed well on synthetic data; the authentic features were then transferred close to the synthetic features with domain adaptation. However, existing methods are still inappropriate for natural joints, such as knees and hips. Therefore, a registration approach designed for natural joints that does not require numerous clinical X-ray images for training is needed.

    3. Methods

    The aim of 2D–3D registration is to estimate the six degrees of freedom (6DOF) pose of 3D volume data from pairs of 2D multiview fluoroscopic images. In the following section, we begin with an overview of the tracking system and multi-view point-based 2D–3D registration (Section 3.1). Then, details of the two main components of our work are given in Section 3.2 and Section 3.3.

    3.1. Multi-view point-based registration

    3.1.1. 2D–3D rigid registration with 6DOF

    We consider the registration of each bone in the knee joint as a separate 2D–3D registration procedure. Pose reproduction of each bone is denoted as the 3D alignment of the CT volume data V through a transformation matrix T4×4, which is parameterized by six elements of translations and rotations (x, y, z, γ, α, β) using Euler angles [40]. The transformation matrix T4×4 can be represented as a homogeneous 4 × 4 matrix, and the pose P can be derived as follows:
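The equation itself is missing from this copy; a standard reconstruction, consistent with the six-parameter Euler-angle parameterization above (a 3 × 3 rotation R built from γ, α, β and a translation t built from x, y, z), is:

```latex
T_{4\times 4} =
\begin{bmatrix}
R_{3\times 3}(\gamma,\alpha,\beta) & \mathbf{t} \\
\mathbf{0}^{\top} & 1
\end{bmatrix},
\qquad
\mathbf{t} = (x, y, z)^{\top},
\qquad
P = (x, y, z, \gamma, \alpha, \beta)
```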

    3.1.2. 3D projection geometry of X-ray imaging

    In the virtual DFIS, the four corners of each imaging plane and the location of the X-ray sources were used to establish the optical pinhole model during DRR generation (Fig. 1). After a polynomial-based distortion correction and spatial calibration of the two-view fluoroscopy, DRRs were generated by the ray-casting algorithm [41] with segmented CT volume data using Amira software (Thermo Fisher Scientific, USA). Combining the transformation matrix T4×4, the final DRR IDRR can be computed as follows:
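The formula is missing from this copy; a reconstruction in the standard ray-casting line-integral form, matching the symbol definitions in the following sentence, is:

```latex
I_{\mathrm{DRR}}(s) = \int_{p \in l(p,s)} \mu(p)\,\mathrm{d}p
```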

    where l(p,s) is the ray s connecting the X-ray source and the image plane in the X-ray imaging model, p is a point on the ray, and μ(·) represents the attenuation coefficient at that point in the volume data.

    3.1.3. Registration by tracking multi-view points

    The final pose of each bone was reproduced with transformation matrix T.

    3.2. Pseudo-Siamese point tracking network

    In the proposed method, we used a pseudo-Siamese network to track points from each view. The pseudo-Siamese network has two branches: one is a visual geometry group (VGG) network [45] for extracting features from DRRs, and the other is a feature-transfer network, which transfers authentic features to synthetic features (Section 3.3). The overall workflow is shown in Fig. 3. The input of the network was unpaired DRRs and fluoroscopic images, and the output was the tracked points on the fluoroscopic images. In the upper branch of the network (Fig. 3), the extracted features FDRR around each selected point have the size M × N × C, where M and N are respectively the width and height of the DRR, and C is the number of feature channels. In the lower branch of the network, the features of the fluoroscopic images, Ffluoro, were extracted by the feature-transfer network without weight sharing. With the extracted features FDRR and Ffluoro, a convolutional layer was applied to quantify the similarity between the two feature maps [27]. The similarity is denoted as
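The similarity expression did not survive in this copy; a plausible reconstruction of a convolution-based similarity, with ⊛ denoting the correlation of the DRR feature patch over the fluoroscopic feature map, is:

```latex
S = W \cdot \left( F_{\mathrm{DRR}} \circledast F_{\mathrm{fluoro}} \right)
```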

    where W is a learned weighting factor for finding a better similarity for each selected point. The objective function to be minimized during training is the Euclidean loss (i.e., the registration loss), defined as
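The loss formula is likewise missing; a reconstruction of a Euclidean (squared-distance) loss over the N selected points, using the symbols defined in the next sentence, is:

```latex
\mathcal{L}_{\mathrm{reg}} = \frac{1}{N} \sum_{i=1}^{N}
\left\lVert p_{\mathrm{fluoro}}^{(i)} - p_{\mathrm{drr}}^{(i)} \right\rVert_2^2
```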

    where pfluoro denotes the tracked 2D points and pdrr denotes the projected 2D points in the DRR with known locations. With the tracked 2D points from different views, the 3D points were reconstructed using triangulation [43].
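The triangulation step can be illustrated with a minimal direct linear transform (DLT) sketch; this is a generic textbook method, not necessarily the exact variant of Ref. [43], and the 3 × 4 projection matrices are assumed to come from the DFIS calibration:

```python
import numpy as np

def triangulate(proj_mats, points_2d):
    """Linear (DLT) triangulation of one 3D point from two or more views.
    proj_mats: list of 3x4 camera projection matrices (one per view).
    points_2d: list of (u, v) image coordinates of the tracked point."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous 3D point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With exact, noise-free correspondences from two views, the reconstructed point matches the true 3D location; with noisy tracks from more views, the SVD gives the least-squares solution.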

    3.3. Feature transfer using domain adaption

    For feature extraction of fluoroscopic images, we proposed a transfer-learning-based method to reduce the domain difference between synthetic images (e.g., the DRRs) and authentic X-ray images (e.g., the fluoroscopic images) (Fig. 4).

    Fig. 2. The workflow of the multi-view point-based registration method. A set of points was selected on the bone surface, and their 2D projections were tracked from each view in the virtual DFIS to reconstruct their 3D positions. The final transformation matrix was determined by the reconstructed points using Procrustes analysis [44].

    Fig. 3. The framework of the point-tracking network. Pairs of DRRs and fluoroscopic images were imported into the network, and their features were extracted by a VGG and a feature-transfer network, respectively. The selected points were tracked on the fluoroscopic images by searching for the most similar feature patch around the selected points in the DRRs. Conv: convolution layers.

    Fig. 4. Feature-transfer network with the paired synthetic image and authentic image. Synthetic images (i.e., DRRs) were generated at the pose after manual registration.

    To close the gap between the two domains, we used a domain-adaptation method. That is, an additional coupled VGG network with cosine similarity was used during feature extraction of the fluoroscopic images (Fig. 5). Pairs of DRRs and fluoroscopic images, which share the same locations of the volume data via a model-based manual registration method [9], were used for training. We used the cosine similarity as the cost function to measure the gap between the two domains. For the tracking problem, the cosine similarity can be stated as
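The expression is missing from this copy; a reconstruction of the standard cosine similarity, consistent with the operator definitions in the following sentence (FX and FD being the two feature maps), is:

```latex
\mathrm{sim}(F_X, F_D) =
\frac{\langle F_X, F_D \rangle}{\lVert F_X \rVert \, \lVert F_D \rVert}
```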

    where ‖·‖ denotes the L2-norm, 〈·〉 denotes the dot product, and FX and FD are the feature maps. To improve the performance of feature transfer, we optimized the proposed method with weights pre-trained on ImageNet.

    Fig. 5. The architecture of synthetic X-ray image feature extraction.

    4. Experiments and results

    4.1. Dataset

    In this institutional-review-board-approved study, we collected CT images of three subjects’ knees, and all subjects performed two or three motions that were captured by a bi-plane fluoroscopy system (BV Pulsera, Philips, the Netherlands) at a frame rate of 30 frames per second. CT scans (SOMATOM Definition AS; Siemens, Germany) of each knee, extending approximately 30 cm proximal and distal to the knee joint line (thickness: 0.6 mm; resolution: 512 × 512 pixels), were obtained. The size of the fluoroscopic images was 1024 × 1024 pixels with a pixel spacing of 0.28 mm. Geometric parameters of the bi-plane fluoroscopy imaging model, such as the polynomial distortion-correction parameters [46] and the locations of the X-ray source and detector plane, were used to establish a virtual DFIS, in which the poses of each bone were reproduced manually [47]. In this study, 143 pairs of matched fluoroscopic images were used (Fig. 6), of which 91 pairs were used for training the feature-transfer network of fluoroscopic images and the point-tracking network, and the remaining images were used as the testing set. Additionally, a three-fold validation was performed in the study. To evaluate the 2D–3D registration algorithm, a widely used 3D error measurement, the target registration error (TRE), was applied [48]. We computed the mean TRE (mTRE) to determine the 3D error. The average distance between the selected points defines the mTRE.
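The mTRE formula is missing from this copy; the standard definition, the mean Euclidean distance over the N selected points, matching the symbols defined in the next sentence, is:

```latex
\mathrm{mTRE} = \frac{1}{N} \sum_{i=1}^{N}
\left\lVert P_{\mathrm{bone}}^{(i)} - P_{E}^{(i)} \right\rVert_2
```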

    where Pbone denotes the selected points and PE denotes the estimated points. The success rate was defined as the percentage of all test cases with an mTRE of less than 10 mm.
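The two evaluation metrics are simple enough to state directly in code; this is a sketch of the definitions above, assuming ground-truth and estimated points are given as N × 3 arrays:

```python
import numpy as np

def mtre(p_bone, p_est):
    """Mean target registration error: the average Euclidean distance between
    corresponding selected (ground-truth) and estimated 3D points."""
    return float(np.linalg.norm(np.asarray(p_bone) - np.asarray(p_est), axis=1).mean())

def success_rate(errors_mm, threshold_mm=10.0):
    """Fraction of test cases whose mTRE falls below the threshold (10 mm here)."""
    errors = np.asarray(errors_mm, dtype=float)
    return float((errors < threshold_mm).mean())
```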

    4.2. Loss selection in cross-domain feature extraction analysis

    We used the cosine similarity as the loss function for feature extraction on the authentic X-ray images. To find a better loss function, we also tested the mean squared error [22]. The position of the loss function may also affect the final performance of the feature-extraction layer. Thus, we first compared the effects of loss functions located at different convolution layers. To obtain the best cross-domain features from the real fluoroscopic images, we placed the defined loss function between the pairs of conv2, conv3, conv4, and conv5 layers. In our data (Fig. 7), we preferred the cosine similarity as the loss function because it performed better on the final registration result for the entire knee joint. Cosine similarity showed the best performance between the conv5 layers (see details in Appendix A, Table S1).

    Fig. 6. Paired raw fluoroscopic images and the corresponding images after manual matching. The raw fluoroscopic images are (a) and (b), in which additional noise (wearable electromyography sensors) can be found on the surface of the lower limb. As described in the previous study [6], manual registration was performed until the projections of the surface bone model matched the outlines of the fluoroscopic images; the matched results are shown in (c) and (d). Reproduced from Ref. [6] with permission of Elsevier Ltd., © 2011.

    4.3. With or without transfer training network analysis

    Fig. 7. The success rate using cosine similarity and mean squared error (MSE) at different convolutional layers.

    Fig. 8. Mean target registration error with different registration networks.

    To test the effects of the proposed feature-based transfer-learning method, we compared it with the Siamese registration network (i.e., the POINT2 network) [27]. Moreover, fine-tuning, a widely used transfer-learning tool, was also compared in the current study to find a better way to reduce the differences between the fluoroscopic images and DRRs. The weights of the proposed method were pre-trained on the ImageNet database. The average performance of 10 tests for each method was used as the final performance. The mTRE results are reported in terms of the 10th, 25th, 50th, 75th, and 95th percentiles to demonstrate the robustness of the compared methods. The proposed feature-based transfer-learning method performed significantly better than the Siamese registration network (Fig. 8); it also outperformed fine-tuning, whose success rate was almost zero (Table S2 in Appendix A).

    4.4. Three-fold cross-validation

    We used three-fold cross-validation in this study and compared the proposed pseudo-Siamese registration network with and without transfer learning. Two of the three subjects were used for training the system, and the remaining subject was used to validate it. This approach was iterated ten times by shifting the test subject randomly. The performance (mTRE) was evaluated in each iteration. Finally, the performances recorded in all ten iterations were averaged to obtain the final mTRE. The mTRE results are reported in terms of the 10th, 25th, 50th, 75th, and 95th percentiles (Table 1). The final three-fold cross-validation showed that the proposed method also performed better with feature transfer.

    5. Conclusions

    To overcome the limited number of real fluoroscopic images in learning-based 2D–3D rigid registration via DRRs, we proposed a pseudo-Siamese multi-view point-based registration framework. The proposed method can decrease the demand for real X-ray images. With the ability to transfer authentic features to synthetic features, the proposed method performs better than the fine-tuned pseudo-Siamese network. This study also evaluated the POINT2 network with and without transfer learning. The results showed that the proposed pseudo-Siamese network has a better success rate and accuracy than the Siamese point-tracking network. With a small amount of training data, the proposed method can work as an initialization step for the optimization-based registration method to improve accuracy. However, there are several limitations to the current work. First, because our method is designed for at least two fluoroscopic views, multi-view data were required to reconstruct the knee poses; otherwise, the out-of-plane translation and rotation errors would be large because of the physical imaging model. Second, the proposed method cannot reach sub-millimeter accuracy, unlike an optimization-based strategy. Like other learning-based strategies, our method does not provide comparable accuracy but is much faster than the optimization-based method, because no iterative step is needed during matching. In clinical orthopedic practice, accurate joint kinematics is essential for determining a rehabilitation scheme [5], surgical planning [1], and functional evaluation [47]. The proposed method alone is inappropriate for in-vivo joint kinematics measurement. Therefore, a combination of our method with an optimization-based strategy would be a viable solution.

    Table 1 Three-fold cross-validation with and without transfer learning.

    Acknowledgements

    This project was sponsored by the National Natural Science Foundation of China (31771017, 31972924, and 81873997), the Science and Technology Commission of Shanghai Municipality (16441908700), the Innovation Research Plan supported by the Shanghai Municipal Education Commission (ZXWF082101), the National Key R&D Program of China (2017YFC0110700, 2018YFF0300504, and 2019YFC0120600), the Natural Science Foundation of Shanghai (18ZR1428600), and the Interdisciplinary Program of Shanghai Jiao Tong University (ZH2018QNA06 and YG2017MS09).

    Compliance with ethics guidelines

    Cong Wang, Shuaining Xie, Kang Li, Chongyang Wang, Xudong Liu, Liang Zhao, and Tsung-Yuan Tsai declare that they have no conflict of interest or financial conflicts to disclose.

    Appendix A. Supplementary data

    Supplementary data to this article can be found online at https://doi.org/10.1016/j.eng.2020.03.016.
