
    Real-Time Hand Motion Parameter Estimation with Feature Point Detection Using Kinect


    Chun-Ming Chang, Che-Hao Chang, and Chung-Lin Huang


Abstract—This paper presents a real-time Kinect-based hand pose estimation method. Different from model-based and appearance-based approaches, our approach retrieves continuous hand motion parameters in real time. First, the hand region is segmented from the depth image. Then, some specific feature points on the hand are located by a random forest classifier, and the relative displacements of these feature points are transformed to a rotation-invariant feature vector. Finally, the system retrieves the hand joint parameters by applying regression functions to the feature vectors. Experimental results are compared with a ground truth dataset obtained by a data glove to show the effectiveness of our approach. The effects of different distances and different rotation angles on the estimation accuracy are also evaluated.

Index Terms—Hand motion, Kinect, parameter estimation, random forest, regression function.

    1. Introduction

Hand tracking has been applied in various human-computer interface (HCI) designs, such as sign language recognition, augmented reality, and virtual reality. Two major applications are hand gesture recognition and three-dimensional (3D) hand pose estimation. The former analyzes the hand shape and location to identify the hand gesture, which can be applied to sign language understanding. The latter estimates the hand parameters, such as the joint angles of each finger and the global orientation of the palm. The 3D hand pose estimation is quite challenging due to the lack of sufficient information and self-occlusion. It may be applied to virtual reality or robotic arm control.

There are two main approaches, appearance-based and model-based, for hand pose estimation. The appearance-based method establishes a large number of hand models pre-stored in a database. These hand models are generated by a synthesized virtual hand or constructed from a real hand. The sample in the database that best matches the current observation is retrieved. Romero et al. used locality sensitive hashing (LSH) to search the nearest neighborhood in a large database that contains over 100,000 hand poses with HOG (histogram of oriented gradient) features for real-time operation [1]. Miyamoto et al. designed a tree-structured classifier based on the typical hand poses and their variations [2].

The model-based method estimates the motion parameters based on the on-line construction of a 3D articulated hand model to fit the observations. By rendering the hand model with different parameters iteratively, the deviation between the hand model and the real observation converges, and the hand parameters can be obtained. Gorce et al. proposed to minimize the objective function between the observation and the model using the quasi-Newton method [3]. The objective function is measured by the difference of the texture and shading information. Oikonomidis et al. utilized particle swarm optimization (PSO) to solve the optimization problem of the matching process with a depth image [4], [5]. Hamer et al. created a hand model with 16 separate segments connected in a pairwise Markov random field, and adjusted the states of the segments by belief propagation based on both RGB (red, green, and blue) and depth images [6]. Some approaches evaluated the 3D human body joints and hand joints based on depth sensors without markers [7], [8]. Similar to [8], we perform a feature transformation in our method to estimate the hand motion parameters in real time.

This paper proposes a new approach to estimate hand parameters by analyzing the depth maps and applying regression functions. First, the hand depth map is segmented. Then, we apply a pixel-based classification to categorize each pixel in the hand map into 9 classes, which consist of 5 fingertips, 3 blocks on the palm, and one for the rest of the hand. The classifier is developed using a random forest. We extract the feature points from the depth maps, which are converted into a feature space for the regression functions. Based on the regression functions, the hand motion parameters can be obtained.

Our approach does not use gesture recognition or iterative approximation to develop real-time hand motion parameter estimation. Different from the method proposed by Półrola [8], we construct a reliable transform operation. The proposed method can retrieve the continuous hand motion parameters by using the regression functions with a low computation load. The virtual hand gestures are reconstructed from the obtained parameters. They are compared with the ground truth dataset to show the accuracy of the estimated hand motion parameters.

    2. Hand Extraction

To extract the hand features from a depth image, we take the following steps: a) remove the background which does not belong to a hand in the depth map, b) find the center of the hand and make it shift and rotation invariant, and c) resize the hand to achieve depth invariance.

    We assume that a hand is the object closest to the depth sensor. The pixels within a specified distance on the depth image are considered to be the hand region. A threshold is applied to remove the non-hand object and disturbance.

To find the center of a hand, we apply the distance transform to the hand image and generate a gray-level image in which the value of each pixel indicates its distance to the closest boundary. Finally, we compute the center of mass of the binary image, obtained by thresholding, as the centroid of the hand segment. Fig. 1 demonstrates the hand extracting procedure.

    Fig. 1. Centroid locating steps: (a) hand shape extraction, (b) distance transformation, and (c) threshold processing.
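A minimal OpenCV sketch of this centroid-locating procedure is given below; the distance band (near/far) and the threshold ratio dt_ratio are illustrative values, not the paper's settings.

```python
import cv2
import numpy as np

def locate_hand_centroid(depth, near=400, far=700, dt_ratio=0.7):
    """Segment the closest object as the hand and find its centroid.

    depth: uint16 depth map in millimetres (e.g., one Kinect frame).
    near/far and dt_ratio are assumed thresholds, not the paper's values.
    """
    # a) keep only pixels within the assumed hand distance band
    hand_mask = ((depth > near) & (depth < far)).astype(np.uint8) * 255

    # b) distance transform: each pixel's value is its distance to the
    #    nearest boundary, so the palm interior gets the largest values
    dist = cv2.distanceTransform(hand_mask, cv2.DIST_L2, 5)

    # c) threshold the transform and take the centre of mass of the
    #    surviving blob as the palm centroid
    core = (dist > dt_ratio * dist.max()).astype(np.uint8)
    m = cv2.moments(core, binaryImage=True)
    if m["m00"] == 0:
        return None, hand_mask
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return centroid, hand_mask
```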

However, the method may fail for self-occluded hand gestures such as the “okay sign” or “victory sign”, as shown in Fig. 2. Some undefined depth blocks in the depth maps need to be resolved before applying the distance transformation. To deal with the self-occlusion problem, we dilate the hand silhouette to differentiate the undefined blocks from the background. Then, a flood-fill operation is applied to remove these undefined blocks inside the hand region. Finally, we erode the image to eliminate the effect of the earlier dilation. This process can effectively remove the undefined blocks within the hand region while locating the hand centroid.
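The dilate/flood-fill/erode sequence can be sketched with OpenCV as follows; the kernel size is an assumption.

```python
import cv2
import numpy as np

def fill_undefined_blocks(hand_mask, ksize=5):
    """Close undefined (zero-depth) holes inside the hand silhouette.

    hand_mask: 8-bit binary hand silhouette. ksize is an assumed kernel size.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    # Dilate so thin gaps to the background are sealed off,
    # leaving the undefined blocks as enclosed holes.
    dilated = cv2.dilate(hand_mask, kernel)

    # Flood-fill from a background corner; whatever stays unfilled
    # is a hole inside the hand, which we then paint as foreground.
    h, w = dilated.shape
    flood = dilated.copy()
    ff_mask = np.zeros((h + 2, w + 2), np.uint8)
    cv2.floodFill(flood, ff_mask, (0, 0), 255)
    filled = dilated | cv2.bitwise_not(flood)

    # Erode to undo the earlier dilation.
    return cv2.erode(filled, kernel)
```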

Hand size is normalized to make it distance independent. Let d_avg denote the average depth of the segmented hand region. The hand size is rescaled by a constant proportional to d_avg; a short sketch of this rescaling follows Fig. 2.

    Fig. 2. Resolving the self-occlusion problem: (a) original image and (b) distance-transformed image.
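Since the paper only states that the scale factor is proportional to d_avg, the reference distance d_ref in the sketch below is an assumed constant.

```python
import cv2

def normalize_hand_size(hand_depth, d_ref=500.0):
    """Rescale the hand crop by a factor proportional to its average depth.

    d_ref is an assumed reference distance (mm) at which no rescaling
    is needed.
    """
    d_avg = hand_depth[hand_depth > 0].mean()
    scale = d_avg / d_ref  # a farther hand appears smaller, so upscale it
    return cv2.resize(hand_depth, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_NEAREST)
```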

    3. Finger Joint Locating

    The selected features in our approach are the relative distances in the 3D space of specific feature points on a hand. To find these feature points, a robust positioning method is needed.

    3.1 Per-Pixel Classification

Similar to [7] and [8], we apply a pixel-based classification method to handle the rotation case. However, a different split function, which has proved to be effective, is used. Taking the rotation case (e.g., waving hands) into consideration, the split criterion is based on the feature function defined as

$$f_{u,v}(I, \mathbf{x}) = d_I(\mathbf{x} \oplus \mathbf{u}) - d_I(\mathbf{x} \oplus \mathbf{v})$$

where $f_{u,v}(I, \mathbf{x})$ is the feature function, $d_I(\mathbf{x})$ is the depth of pixel $\mathbf{x}$ in image $I$, $\mathbf{u}$ and $\mathbf{v}$ are the offset vectors, and $\oplus$ denotes offsetting in polar coordinates. The location of pixel $\mathbf{x}$ and the vectors $\mathbf{u}$ and $\mathbf{v}$ are expressed in a polar coordinate system whose origin is the centroid of the palm. With such a feature function, the feature value remains the same after rotation, since the relative position between $\mathbf{x}$ and its offsets is rotation invariant, as shown in Fig. 3.

    Fig. 3. Rotation-invariant feature function.
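One plausible reading of the polar offset is sketched below: the offset angle is measured relative to the pixel's own angle about the centroid, which is what makes the probe rotate with the hand. The out-of-image sentinel value follows the convention of [7] and is an assumption here.

```python
import numpy as np

def polar_offset(x, centroid, offset):
    """Apply a polar offset (dr, dtheta) to pixel x about the palm centroid.

    x, centroid: (row, col) image positions. dtheta is relative to x's own
    angle about the centroid, so the probe rotates with the hand.
    """
    dy, dx = x[0] - centroid[0], x[1] - centroid[1]
    r, theta = np.hypot(dy, dx), np.arctan2(dy, dx)
    r2, t2 = r + offset[0], theta + offset[1]
    return (centroid[0] + r2 * np.sin(t2), centroid[1] + r2 * np.cos(t2))

def feature_value(depth, x, centroid, u, v):
    """f_{u,v}(I, x) = d_I(x (+) u) - d_I(x (+) v), with (+) the polar offset."""
    def sample(p):
        i, j = int(round(p[0])), int(round(p[1]))
        if 0 <= i < depth.shape[0] and 0 <= j < depth.shape[1]:
            return float(depth[i, j])
        return 1e6  # large constant for out-of-image probes, as in [7]
    return sample(polar_offset(x, centroid, u)) - sample(polar_offset(x, centroid, v))
```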

The per-pixel classification is done by the random forest in [7]. The training sample sets for decision tree training are randomly selected. Training samples are split into two groups by the split criterion:

$$Q_L(\phi) = \{(I, \mathbf{x}) \mid f_{u,v}(I, \mathbf{x}) < \tau\}, \qquad Q_R(\phi) = Q \setminus Q_L(\phi)$$

where $\tau$ is a threshold. In the training process, a gain function is defined to measure the effectiveness of the split function based on different split node parameters $\phi$:

$$G(\phi) = H(Q) - \sum_{s \in \{L, R\}} \frac{|Q_s(\phi)|}{|Q|} H(Q_s(\phi))$$

where $H(Q)$ is the Shannon entropy of the normalized class histogram of $Q$. For each split node parameter $\phi = (\theta, \tau)$, where $\theta = (\mathbf{u}, \mathbf{v})$, we may separate the training sample set and compute the gain function. The larger the gain function, the better the split function. In the training process, the split node parameters $\phi$ are randomly generated, and the best one is determined by

$$\phi^* = \arg\max_{\phi} G(\phi).$$

The two child nodes are determined by the two separated subsets $Q_L$ and $Q_R$. Split nodes are generated recursively until either of the following conditions is met: 1) the number of samples is insufficient, or 2) the entropy is too small, indicating that most of the samples in the set belong to the same class. Based on the labeled training samples in the leaf nodes, we may calculate the posterior of each class $C$ for the $t$-th decision tree as $P_t(C|\mathbf{x})$.
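A compact sketch of the split selection is given below; candidate generation and the surrounding tree construction are omitted.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy H(Q) of the class label histogram."""
    counts = np.bincount(labels)
    p = counts[counts > 0] / len(labels)
    return -np.sum(p * np.log2(p))

def gain(labels, feature_values, tau):
    """Information gain G(phi) of splitting Q into Q_L (f < tau) and Q_R."""
    left = labels[feature_values < tau]
    right = labels[feature_values >= tau]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    h_split = (len(left) * entropy(left)
               + len(right) * entropy(right)) / len(labels)
    return entropy(labels) - h_split

def best_split(labels, candidates):
    """Pick phi* = argmax_phi G(phi) over randomly generated candidates.

    candidates: list of (theta, feature_values, tau); theta stands for a
    sampled offset pair (u, v).
    """
    return max(candidates, key=lambda c: gain(labels, c[1], c[2]))
```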

    3.2 Joint Locating

After the per-pixel classification, we obtain a probability distribution map for each class. Next, we have to locate the center of each class except for the class that contains the rest of the hand. A Gaussian kernel of an appropriate size (16×16 pixels) is convolved with each distribution map, and the maximum of the smoothed map gives the position of the corresponding feature point on the hand, as shown in Fig. 4.

Fig. 4. Feature point detection results: (a) labeled image and (b) detected feature points.

The results from the previous steps are the 2-D locations of the feature points. The depth information of each feature point can then be determined as

$$d^{(i)} = \frac{\sum_{\mathbf{x}} G_{p^{(i)}}(\mathbf{x})\, P^{(i)}(\mathbf{x})\, I(\mathbf{x})}{\sum_{\mathbf{x}} G_{p^{(i)}}(\mathbf{x})\, P^{(i)}(\mathbf{x})}$$

where $i$ is the class index, $G_{p^{(i)}}$ is a Gaussian mask centered on feature point $p^{(i)}$, $P^{(i)}$ is the probability distribution map of class $i$, and $I$ is the depth image.
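A sketch of the joint locating step follows; the Gaussian sigma only approximates the 16×16 kernel, and the probability-weighted depth mirrors the equation above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def locate_joint(prob_map, depth, sigma=4.0):
    """Find one feature point and its depth from a per-class probability map.

    sigma approximates the paper's 16x16 Gaussian kernel; the exact
    parameters are assumptions.
    """
    # Smooth the distribution map and take the maximum as the 2-D location.
    smoothed = gaussian_filter(prob_map, sigma)
    p = np.unravel_index(np.argmax(smoothed), smoothed.shape)

    # Depth as a probability-weighted average under a Gaussian mask
    # centred on p, following the equation above.
    yy, xx = np.mgrid[0:prob_map.shape[0], 0:prob_map.shape[1]]
    mask = np.exp(-((yy - p[0])**2 + (xx - p[1])**2) / (2 * sigma**2))
    w = mask * prob_map
    d = np.sum(w * depth) / max(np.sum(w), 1e-9)
    return p, d
```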

    4. Feature Vector Formation

We use the 3D feature point locations to estimate the hand motion parameters. The feature vector X collects the relative displacements of the feature points, where each point x is a 3D location formed from its image coordinates and its depth value d. The feature points include five points at the fingertips (x_thumb, x_index, x_middle, x_ring, x_little), one point at the palm center, x_palm_c, and two feature points near the palm center (x_palm_1, x_palm_2), chosen so that they are not occluded by the bending fingers.
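Under the palm-relative-displacement reading above, feature formation reduces to a few lines; taking displacements with respect to the palm center is an assumption consistent with the relative-distance features described in Section 3.

```python
import numpy as np

def build_feature_vector(points3d):
    """Form the feature vector X from the detected 3-D feature points.

    points3d: dict mapping point names to (x, y, d) arrays for the five
    fingertips, the palm center, and the two auxiliary palm points.
    """
    order = ["thumb", "index", "middle", "ring", "little",
             "palm_1", "palm_2"]
    palm_c = np.asarray(points3d["palm_c"])
    rel = [np.asarray(points3d[k]) - palm_c for k in order]
    return np.concatenate(rel)  # X fed to the per-joint regression functions
```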

    5. Regression Function Training

The support vector machine (SVM) is a supervised learning model with associated algorithms that analyze data for classification and regression [9]. In classification, the training data is mapped into a high-dimensional space, and a hyperplane is constructed to separate the training data with the largest margin. Instead of minimizing the observed training error, support vector regression (SVR) attempts to minimize the generalization error bound so as to achieve generalized performance. SVR can be used to train the hyperplane for predicting the data distribution. Regression function training is based on the joint parameters and the input feature vectors. LIBSVM (library for support vector machines) [10], a popular open-source machine learning library, is applied to construct the regression functions for retrieving hand joint parameters from the feature vectors.

To train a regression function, we apply the training data with feature vectors and ground truth data. The feature vectors are obtained by our finger joint locating process, and the ground truth data are collected from the 5DT (Fifth Dimension Technologies) Data Glove. The data glove returns fourteen parameters, ten of which give the bending degrees of the fingers, while the other four indicate the joint angles between fingers. In this paper we only use the first 10 parameters to train the regression functions. The ground truth data are used to validate the results of the proposed method.
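A sketch of this training step using scikit-learn's SVR, which wraps the same LIBSVM backend; the kernel and hyperparameters below are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVR  # scikit-learn wraps the LIBSVM backend

def train_joint_regressors(X, Y):
    """Train one epsilon-SVR per joint parameter.

    X: (n_samples, n_features) feature vectors from the joint locator.
    Y: (n_samples, 10) data-glove bending degrees (ground truth).
    Kernel and hyperparameters are assumed values.
    """
    return [SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, Y[:, j])
            for j in range(Y.shape[1])]

def predict_hand_parameters(regressors, x):
    """Retrieve the 10 joint parameters for one feature vector x."""
    return np.array([r.predict(x.reshape(1, -1))[0] for r in regressors])
```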

    6. Experimental Results

The random forest classifier is trained on 500 labeled training images, and the training samples of each tree are around 500 pixels randomly selected from each training image. The maximum depth of each tree is set to 30 layers, and the random forest consists of 5 random decision trees. The regression functions are trained with 3000 pairs of training data. The experiments are conducted in the following environment: quad-core Intel i5 3450 CPU, 8 GB RAM, and NVIDIA GeForce GTX 660.

    Table 1: Time consumption in each step.

    6.1 Time Consumption

The computation time for each step is shown in Table 1. The system takes about 100 ms to render a hand model from a depth image. The most time-consuming parts are the per-pixel classification and the feature point locating. If the per-pixel classification is optimized via a parallel implementation, the time consumption can be shortened significantly. Another step slowing down the hand pose estimation is the feature point locating. A possible solution is to use a clustering algorithm instead of convolving with a Gaussian mask.

    6.2 Reliability Evaluation

Fig. 5 demonstrates 10 sample frames with different hand gestures and their corresponding estimated results. The average values of the ten parameters are illustrated in Fig. 6. The results of our approach using Kinect are compared with those of the ground truth data from the 5DT Data Glove. The line at the bottom shows the average error of each joint. Since the maximum bending degrees of the finger joints are not the same (e.g., 90° for the first joint of the thumb vs. 120° for the second joint of the index finger), all joint parameters are normalized to the range [0, 1]. The average error over these testing frames is 0.115, which is acceptable for real-time hand parameter estimation.

    Fig. 5. Sample frames with different hand gestures. Upper: RGB images; middle: detected feature joints; lower: estimated rendering results.

Fig. 6. Average hand parameters for the proposed method and the ground truth data.

Fig. 7. Influence of rotation on the proposed method.
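The error metric of Section 6.2 can be made concrete as below; the per-joint maximum bending degrees are assumed values used only for normalization.

```python
import numpy as np

def normalized_joint_error(estimated, ground_truth, max_angles):
    """Average per-joint error after normalizing each joint to [0, 1].

    max_angles: per-joint maximum bending degrees (e.g., 90 for the first
    thumb joint); these values are assumptions.
    """
    est = np.asarray(estimated) / np.asarray(max_angles)
    gt = np.asarray(ground_truth) / np.asarray(max_angles)
    return float(np.mean(np.abs(est - gt)))
```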

    6.3 Rotated Gestures Testing

We then demonstrate how the hand rotation influences the accuracy of hand pose estimation. The average error of the video stream is 0.174, which is larger than that of the regular hand gestures. Fig. 7 shows that the minimum error occurs when the rotation angle is 0°. As the rotation angle increases (or decreases), the estimation accuracy degrades slightly, due to the out-of-plane rotation. Since the training set contains only a few out-of-plane rotated cases, significant errors may emerge at large rotation angles. The rotation angles about the X-axis are calculated from the relative displacements of the fingertips and the palm center.

    6.4 Applied to Different Identities

The training images for the random forest classifier are all collected from the same subject. However, the joint detector built on this random forest may produce incorrect results for a different subject. Two additional individuals were recruited to test our system, wearing the data glove to collect the ground truth for evaluating the estimation accuracy. It turns out that the errors for the two subjects are 0.194 and 0.192, respectively.

There are two possible reasons. First, different hand shapes cause errors in the per-pixel classification, and hence wrong feature vectors are obtained, which results in large estimation errors. Second, even for the same gesture, the data glove may capture different response values due to slightly different hand shapes.

    7. Conclusions

In this paper, we propose a method to estimate the hand joint parameters by joint detection and regression functions using Kinect. The proposed method retrieves continuous results within a short period of time, whereas the model-based method needs much more time to obtain its estimates, and the appearance-based method yields only discrete results even though it works in real time. Our approach works quite well when the experimental data of the participating subject are included in the training data. Future work could lift this limitation and make the system user independent.

[1] J. Romero, H. Kjellström, and D. Kragic, “Hands in action: Real-time 3D reconstruction of hands in interaction with objects,” in Proc. of IEEE Int. Conf. on Robotics and Automation, 2010, pp. 458-463.

[2] S. Miyamoto, T. Matsuo, N. Shimada, and Y. Shirai, “Real-time and precise 3-D hand posture estimation based on classification tree trained with variations of appearances,” in Proc. of the 21st Int. Conf. on Pattern Recognition, 2012, pp. 453-456.

[3] M. Gorce, N. Paragios, and D. J. Fleet, “Model-based hand tracking with texture, shading and self-occlusions,” in Proc. of IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, 2008, pp. 1-8.

[4] I. Oikonomidis, N. Kyriazis, and A. A. Argyros, “Efficient model-based 3D tracking of hand articulations using Kinect,” presented at British Machine Vision Conf., 2011.

[5] I. Oikonomidis, N. Kyriazis, and A. A. Argyros, “Markerless and efficient 26-DOF hand pose recovery,” in Proc. of Asian Conf. on Computer Vision, 2010, pp. 744-757.

[6] H. Hamer, K. Schindler, E. Koller-Meier, and L. V. Gool, “Tracking a hand manipulating an object,” in Proc. of IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, 2009, pp. 1475-1482.

[7] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake, “Real-time human pose recognition in parts from single depth images,” in Proc. of IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, 2011, pp. 1297-1304.

[8] M. Półrola and A. Wojciechowski, “Real-time hand pose estimation using classifiers,” in Proc. of ICCVG, 2012, pp. 573-580.

[9] S. Theodoridis and K. Koutroumbas, Pattern Recognition, 3rd ed., Amsterdam: Elsevier/Academic Press, 2006.

[10] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Trans. on Intelligent Systems and Technology, vol. 2, no. 3, pp. 1-27, 2011.

    Chun-Ming Chang received the B.S. degree from National Cheng Kung University, Tainan in 1985 and the M.S. degree from National Tsing Hua University, Hsinchu in 1987, both in electrical engineering. He received the Ph.D. degree in electrical and computer engineering from University of Florida, Gainesville in 1997. From 1998 to 2002, Dr. Chang served as a senior technical staff member and a senior software engineer with two communication companies, respectively. He joined the faculty of Asia University, Taichung in 2002. His research interests include computer vision/image processing, video compression, virtual reality, computer networks, and robotics.

Che-Hao Chang received the B.S. and M.S. degrees from the Department of Electrical Engineering, National Tsing Hua University, Hsinchu in 2011 and 2013, respectively. He is currently serving his military obligation in the Army.

    Chung-Lin Huang received his Ph.D. degree in electrical engineering from the University of Florida, Gainesville in 1987. He was a professor with the Electrical Engineering Department, National Tsing Hua University, Hsinchu. Since August 2011, he has been with the Department of Informatics and Multimedia, Asia University, Taichung. His research interests are in the area of image processing, computer vision, and visual communication.

Manuscript received December 15, 2013; revised March 15, 2014. This work was supported by NSC under Grant No. 101-2221-E-468-030.

C.-M. Chang is with the Department of Applied Informatics and Multimedia, Asia University, Taichung (corresponding author, e-mail: cmchang@asia.edu.tw).

C.-L. Huang is with the Department of Applied Informatics and Multimedia, Asia University, Taichung (e-mail: huang.chunglin@gmail.com).

C.-H. Chang is with the Department of Informatics and Multimedia, Asia University, Taichung (e-mail: bbb00437@hotmail.com).

    Digital Object Identifier: 10.3969/j.issn.1674-862X.2014.04.017
