
Improving RGB-D SLAM through extracting static regions using a Delaunay triangle mesh


ZHANG Xiaoguo, ZHENG Bingqing, LIU Qihan, WANG Qin, YANG Yuan (School of Instrument Science and Engineering, Southeast University, Nanjing 210000, China)

Abstract: To solve the problem of simultaneous localization and mapping (SLAM) in the dynamic environments encountered in robot navigation, a real-time RGB-D SLAM approach that robustly handles high-dynamic environments is proposed. A novel static region extraction method is used to segment dynamic objects from the static background, and the feature points in the static region are integrated into the RANSAC method to estimate the camera trajectory. Dynamic entities are identified and isolated by discarding edges in the Delaunay triangle mesh of the current frame according to the distance-consistency principle of a rigid body. Combined with a weighted Bag-of-Words method, the system accuracy is further improved by reducing the weight of dynamic objects in the dynamic scene. Experimental results demonstrate that, compared with existing real-time SLAM methods, the proposed method improves accuracy by 81.37% on the high-dynamic sequences of the TUM RGB-D dataset, which significantly improves the accuracy of navigation and positioning of mobile robots in dynamic scenes.

Key words: SLAM; dynamic environments; static region extraction; Delaunay triangulation; loop detection

In recent years, approaches to Visual Simultaneous Localization and Mapping (VSLAM) have developed significantly. The state-of-the-art methods[1-4] are now capable of running in real time with robust performance. To simplify the problem formulation, most of these SLAM systems assume that the environment under observation is static, i.e. none of the observable objects in the scene changes its position or shape. However, dynamic objects, such as human beings and vehicles, exist in real environments. Assuming that everything is static in a dynamic environment leads to deterioration of the whole SLAM process due to data association errors, failures in the loop closure procedure, and erroneous state estimations[5]. Therefore, SLAM in dynamic environments has attracted the attention of many researchers in the past several years.

Most existing SLAM methods can be roughly categorized into two groups: sparse systems and dense systems. To handle dynamic objects, different strategies are used in the two groups. Sparse SLAM[2,4,6], as the most widely-used formulation, estimates 3D geometry from a set of keypoint matches, and the camera's egomotion is then obtained from the correspondences using a closed-form solution. Since the points from the static environment follow the same motion, RANSAC (Random Sample Consensus) regression is usually used to filter out dynamic objects in these systems[7-8]. Davison[9] introduced a real-time camera tracking system known as MonoSLAM (monocular Simultaneous Localization and Mapping), which used an extended Kalman filter (EKF) to estimate the camera pose. Later, in [10-11], Meilland and Newcombe maintained the assumption that the underlying environment is stationary and suggested discarding the points on dynamic elements by treating them as outliers to the system. However, the RANSAC algorithm lacks robustness in high-dynamic environments and can suffer reduced accuracy when the dynamic feature points outnumber the static ones[12]. To improve the system's robustness in dynamic environments, Masoud[5] used a multilevel-RANSAC (ML-RANSAC) algorithm to detect objects and classify them into stationary and moving ones; however, the use of Lidars limits its versatility. Kang[13] proposed a rotation-translation decoupling algorithm, applying "far points" to estimate the orientation of the visual system, while "near points" are used to estimate the translation of the camera under a RANSAC framework. The system can reduce the impact of nearby moving objects on the visual odometry, but the translation estimation can still be contaminated by dynamic objects. Azartash[14] used an image segmentation method to separate the moving part of the scene from the stationary part, performing motion estimation per segment and then optimizing the motion parameters by minimizing the residual error over the motion parameters of each segment; however, the segmentation takes too much time, which hinders real-time applicability.

Dense SLAM estimates 3D geometry in conjunction with a dense, regularized optical flow field, thereby combining a geometric error (deviation from the flow field) with a geometry prior (smoothness of the flow field)[1,3]. To improve the precision of the system, dynamic objects need to be found and excluded from the geometric optimization process. Zollhofer[15] proposed to reconstruct a non-rigidly deforming object in real time with a single RGB-D camera, where the non-rigid registration of RGB-D data to a template is performed using an extended non-linear As-Rigid-As-Possible (ARAP) framework implemented on an efficient GPU pipeline. Unfortunately, it requires an initial static model/template of the body that is later tracked and reconstructed, and the template is then deformed over time based on rigid registration and non-rigid fitting of points. Sun[16] used an intensity difference image to identify the boundaries of dynamic objects; dense dynamic points are then segmented using the quantized depth image. These methods[14-16] achieve stable performance in highly dynamic scenes, but as mentioned before, segmentation takes too much time and they can hardly run in real time. Keller[17] proposed a point-based fusion approach to reconstruct a dynamic scene in real time; the approach considers outliers from iterative closest point (ICP) as possible dynamic points and assigns a confidence value that later determines whether a point is static or dynamic. The dynamic points are used as seeds for a region-growing method in order to segment the entire dynamic object in its corresponding depth map. This kind of implementation has been proved to work effectively in indoor environments[14,16-17], but this also limits its application to small spaces, and it suffers from computation cost. Kim[18] proposed to compute the depth difference against multiple warped previous frames to build a static background model, but when a dynamic object moves parallel to the image plane, only the boundary of the dynamic object can be found due to the aperture problem[12].

To compensate for dynamic objects, all the above-mentioned methods require a correspondence matching step, where either dense or sparse correspondences are needed. Accurate dense correspondence matching is time-consuming[14,16-17], approximation[18] suffers from the aperture problem, and pretrained models[15] cannot be applied in large scenes. Accurate matching of 2D keypoints can be performed in real time[7-8], but sparse 2D keypoints can be distributed unevenly in the environment. If a dynamic object is highly textured or the dynamic keypoints outnumber the static keypoints, RANSAC regression can easily fail, especially when dynamic objects move in rigid motion, because many inappropriate matches will be used in the estimation. In this paper, we use a distance-preserving principle[19] to separate dynamic objects from the static background, i.e. two features on a rigid body should maintain their distance throughout the motion. Furthermore, by integrating an efficient loop closure detection procedure into the system, we achieve a real-time RGB-D SLAM system that handles dynamic environments efficiently.

    The main contributions of this paper are:

1. A novel, efficient static-region extraction method is proposed to segment dynamic objects from the static background. It calculates the maximum clique of the triangle mesh in the current frame, which is constructed according to the distance-preservation principle of 3D rigid motion.

2. A weighted Bag-of-Words method is introduced to decrease the influence of dynamic objects in loop detection; it downweights moving points while upweighting static feature points according to the static region extracted in each frame.

The effectiveness of this method against dynamic objects is tested on dynamic sequences from the TUM RGB-D dataset. The proposed method greatly reduces the tracking error and outperforms state-of-the-art SLAM systems on most dynamic sequences.

    1 Framework

The overview of the system is illustrated in Fig. 1. Similar to ORB-SLAM2[2], three threads run in parallel: tracking, local mapping and loop closure, to simultaneously estimate the scene structure and the camera motion (6 DOF) without any prior knowledge of the scene. The tracking thread localizes the camera with every frame, the local mapping thread manages and optimizes the local map, and the loop closure thread detects large loops and corrects the accumulated drift. Different from ORB-SLAM, only the matched feature points distributed in the static background are used to track the camera, by minimizing the reprojection error defined between two static point sets. Furthermore, a weighted Bag-of-Words (BoW) method is employed to decrease the influence of dynamic objects in loop detection, integrating a point-weight adjustment strategy into the original BoW method to generate more appropriate visual words.

Fig. 1 Framework of the proposed RGB-D system

We first initialize the structure and motion with the first frame, which is also the first keyframe: we set its pose to the origin and create an initial map from all its keypoints. The camera is then tracked and the pose optimized in every frame with the improved RANSAC method, combined with the static-region extraction algorithm. After that, a keyframe is inserted if the current frame satisfies the following conditions: 1) the current pose is estimated successfully; 2) the current frame tracks less than 80% of the points of the existing keyframe; 3) the current frame can contribute more than 40 new points. A covisibility graph is maintained in the system to optimize the reconstructed map and the camera trajectory in a local covisible area. Meanwhile, loop closure is integrated to compensate for drift using the partial BoW representation: the visual words of keyframes are compared and several tests are executed to confirm a correct loop, after which global bundle adjustment is performed to achieve global consistency.
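To make the keyframe policy concrete, the following is a minimal sketch of the three conditions above; the function and argument names are illustrative, not taken from the original implementation.

def should_insert_keyframe(pose_ok, n_tracked, n_ref_tracked, n_new_points):
    """Keyframe test following the three conditions in the text.

    pose_ok        -- whether the current pose estimate succeeded
    n_tracked      -- points tracked in the current frame
    n_ref_tracked  -- points tracked in the existing (reference) keyframe
    n_new_points   -- points the current frame would newly contribute
    """
    return (pose_ok
            and n_tracked < 0.8 * n_ref_tracked   # condition 2: < 80% of keyframe points
            and n_new_points > 40)                # condition 3: > 40 new points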

    2 Methods

    2.1 Tracking with static region extraction method

The overview of the proposed static region extraction method is illustrated in Fig. 2. For each incoming frame, a group of matched points with the latest keyframe is first obtained via the BoW-accelerated matching process. Then the Delaunay triangle mesh is constructed, where each pair of matched features is a vertex and an edge is formed between two such pairs of matched features. After that, a distance consistency check decides whether to keep each edge in the mesh, depending on the difference between the 3D distances of the features in the current frame and in the keyframe. Lastly, the maximal clique of the remaining mesh is extracted according to a hybrid principle.

After the maximal clique is extracted, the relative transformation from the keyframe to the current frame is estimated using the PnP algorithm with RANSAC, where only the static points in the maximal clique are used in order to reduce the effect of dynamic objects on the transformation estimate. Furthermore, the static weights of the keyframe's points are updated based on the estimated motion, and a more accurate pose is obtained by bundle adjustment in a local area.
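As an illustration of this step, the hedged sketch below uses OpenCV's solvePnPRansac as a stand-in for the PnP-with-RANSAC estimation described above; the function name and arguments are assumptions, not the paper's code.

import numpy as np
import cv2

def estimate_relative_pose(static_pts_3d, static_pts_2d, K):
    """Estimate keyframe-to-frame motion from static-clique matches only.

    static_pts_3d -- Nx3 array of 3D points from the keyframe (static clique)
    static_pts_2d -- Nx2 array of their 2D observations in the current frame
    K             -- 3x3 camera intrinsic matrix
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        static_pts_3d.astype(np.float64),
        static_pts_2d.astype(np.float64),
        K, distCoeffs=None,
        reprojectionError=3.0, confidence=0.99)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)          # rotation vector -> 3x3 matrix
    T = np.eye(4)                       # assemble a 4x4 homogeneous transform
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T, inliers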

Fig. 2 Flow chart of the static region extraction algorithm

    2.1.1 Delaunay triangulation with distance consistency test

The proposed system embeds a bag-of-words place recognition module[2] based on DBoW2, improved with the point-weight adjustment strategy; it not only detects loops and decreases the accumulated drift but also accelerates the matching process in the tracking thread. When adding an image to the system after initialization, we store a list of pairs of nodes and features in the direct index of the feature words. To obtain correspondences between the current frame and the keyframe, we look the frame up in the direct index and perform comparisons only between features associated with the same nodes in the vocabulary tree, instead of an exhaustive brute-force search.

Let V be the set of distinct points in the Euclidean camera plane of the current frame that were matched properly with the keyframe in the previous matching step, and let E be the set of edges between vertices in V. A triangulation of V is a planar straight-line graph G(V, E') for which E' is a maximal subset of E such that no two edges of E' properly intersect, i.e. no two edges in E' intersect at a point other than their endpoints. The Delaunay triangulation is constructed in the camera plane; it maximizes the minimum angle over all angles of the triangles in the triangulation, and thus represents the distance relations of map points well. In this paper, the divide-and-conquer algorithm is used to construct the triangulation, and it runs in real time[20].
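The sketch below shows how such an edge set E' could be built; it uses SciPy's Qhull-based Delaunay routine as a convenient stand-in for the divide-and-conquer implementation the paper employs.

import numpy as np
from scipy.spatial import Delaunay

def build_mesh_edges(keypoints_2d):
    """Construct the Delaunay mesh over matched keypoints in the image plane
    and return its edge set E' as pairs of vertex indices.

    keypoints_2d -- Nx2 array of pixel coordinates of matched features
    """
    tri = Delaunay(np.asarray(keypoints_2d))   # Qhull-based triangulation
    edges = set()
    for a, b, c in tri.simplices:              # each triangle contributes 3 edges
        for i, j in ((a, b), (b, c), (a, c)):
            edges.add((min(i, j), max(i, j)))  # undirected, deduplicated
    return edges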

Given E' in the current frame, and given the set of edges of graph G from the previous step, we check the distance consistency of the two endpoints of every edge in E' and decide whether to discard it. This principle is motivated by the maximum clique method of Howard, but we improve it by considering the distance relationship of inliers while determining the best hypothesis. Note that the Mahalanobis distance is used instead of the normal Euclidean distance here; the mathematical model is described in Section 2.1.3.

We compare the 3D distance between two feature points in the current frame with that between the two matched points in the keyframe. Rigid body motions are distance-preserving operations, which means two features on a rigid body should maintain their distance throughout the runtime. Thus, an edge is kept if the 3D distance between its features does not change substantially from the keyframe to the current frame; we check the distance difference between the frames and delete the edge if the difference is above a threshold. Finally, the set of static inliers makes up the graph G(V', E''), where E'' is the set of remaining edges and V' is the set of all endpoints of edges in E''.

Algorithm 1: Distance consistency check for the current frame
Input:
- a set of edges E' between vertices in V
- a target cloud P_tgt and a matched-feature point set B_tgt of the current frame
- a source cloud P_src of the keyframe
- corresponding feature point indices for the source cloud {c(i)}, i ∈ B_tgt
Output:
- a graph G(V', E'')
set V', E'' empty
for every edge in E' do
    Find the indices of the two endpoints p1, p2 in B_tgt, and name them i and j respectively
    Calculate the 3D distance d1 between P_tgt(i) and P_tgt(j), and the 3D distance d2 between P_src(c(i)) and P_src(c(j))
    Calculate the absolute difference Δd = |d1 − d2|
    if Δd < threshold then
        Push the current edge into E''
        Push p1, p2 into V'
    end if
end for
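A runnable Python rendering of Algorithm 1 might look as follows; for brevity it uses the Euclidean distance, whereas the paper substitutes the Mahalanobis distance of Section 2.1.3.

import numpy as np

def distance_consistency_check(edges, P_tgt, P_src, c, threshold=0.1):
    """Keep an edge only if the 3D distance between its endpoints is
    (nearly) the same in the current frame and the keyframe.

    edges     -- set of (i, j) vertex-index pairs from the Delaunay mesh
    P_tgt     -- mapping: index -> 3D point in the current frame
    P_src     -- mapping: index -> 3D point in the keyframe
    c         -- mapping from current-frame index to keyframe index
    threshold -- indoor default from Section 3.1 (0.1 m)
    """
    E_kept, V_kept = set(), set()
    for i, j in edges:
        d1 = np.linalg.norm(P_tgt[i] - P_tgt[j])        # distance now
        d2 = np.linalg.norm(P_src[c[i]] - P_src[c[j]])  # distance in keyframe
        if abs(d1 - d2) < threshold:                    # rigid bodies preserve it
            E_kept.add((i, j))
            V_kept.update((i, j))
    return V_kept, E_kept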

    2.1.2 Region extraction principle

Given a graph G(V', E''), pairs of matched features across frames form the vertices of the graph, and an edge connects two such pairs of matched features if the 3D distance between the features satisfies the aforementioned condition. In Fig. 3, the current frame contains two dynamic objects, one moving left and the other moving down-right. The solid lines in Fig. 3 represent edges whose distance change remains under the threshold, while the dotted lines do not. Only the points {P1', P2'} belong to the static background, while {P3', P4'} and {P5', P6'} lie on the dynamic objects but still satisfy the distance-preservation principle among themselves. Almost all edges are deleted by the check because their distance change is above the threshold, except for P1P2, P3P4 and P5P6; the surviving edges are grouped as {P1, P2}, {P3, P4} and {P5, P6} according to the principle that connected edges belong to the same group. Note that the edges connecting the background and the dynamic objects are discarded, but edges with both endpoints on the same dynamic object are kept under this condition. The static background is extracted based on these groups, and only the matched feature points {P1~P1', P2~P2'} are used to estimate the camera pose with the PnP algorithm.

To select the optimal group, all groups are formed and the boundary lines of the mesh are regenerated for each group to avoid incomplete shapes; we then count the feature points and measure the corresponding Delaunay triangulation mesh area of each group. We compute a score for each group as in Eq. (1), choose the best group, and use its members to track the camera:

score = W1 · N/N_all + W2 · S/S_all    (1)

where W1 and W2 are the weights of the number of points N and the corresponding projected area S of the mesh in the graph (W1 and W2 are set to 0.5 and 1 respectively), and N_all and S_all are the total number of feature points and the gross mesh area in graph G, respectively.
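A minimal sketch of this scoring step, assuming the form of Eq. (1) reconstructed above from the surrounding definitions; the group bookkeeping names are illustrative.

def group_score(n_points, area, n_all, area_all, w1=0.5, w2=1.0):
    """Hybrid score of Eq. (1): weight a group by both its share of the
    feature points and its share of the projected mesh area."""
    return w1 * n_points / n_all + w2 * area / area_all

# the static background is taken as the highest-scoring group, e.g.:
# best = max(groups, key=lambda g: group_score(g.n, g.area, n_all, s_all))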

Fig. 3 An example of the edge distance consistency check

This hybrid principle is based on the observation that static background features are usually distributed evenly, whereas dynamic foreground points may aggregate in a few small textured areas; we therefore consider not only the number of points but also the area when determining the best hypothesis. Furthermore, a coarse-to-fine strategy is implemented in the process: an initial camera pose is calculated for each group, and groups with similar poses and a close distance are merged into a bigger one; this clustering continues until no two groups are eligible. This strategy avoids uniformly low scores for the static background caused by isolated small groups. Some experimental screenshots from the TUM RGB-D sequence "fr3/walking-static" are shown in Fig. 4.

Fig. 4 Two screenshots from the TUM RGB-D sequence "fr3/walking-static", after the Delaunay triangulation operation and after the distance checking and clustering operations. (a) The frame after the Delaunay triangular mesh construction (the edges of the mesh represent the connection relationship between two points; some points cannot meet the requirements of the depth uncertainty, so they are isolated); (b) the frame after the distance-consistency checking and clustering operations (the points in the frame are grouped into two groups: the members of the first group lie in the static background, and those of the other group on the moving person)

    2.1.3 Distance uncertainty model

In the distance measurement of an RGB-D system, the uncertainty of 3D points must be considered because of incorrect measurements, e.g. at object boundaries. The Mahalanobis distance is used instead of the normal Euclidean distance; for two feature points p_i and p_j with measurement covariances Σ_i and Σ_j, it takes the standard pooled-covariance form

d(p_i, p_j) = sqrt((p_i − p_j)ᵀ (Σ_i + Σ_j)⁻¹ (p_i − p_j))
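A small sketch of this distance, assuming the standard pooled-covariance form given above; the covariance inputs would come from the depth uncertainty model described next.

import numpy as np

def mahalanobis_3d(p_i, p_j, cov_i, cov_j):
    """Mahalanobis distance between two 3D feature points, pooling their
    measurement covariances (a sketch of the Section 2.1.3 model).

    p_i, p_j     -- 3D points as length-3 arrays
    cov_i, cov_j -- 3x3 covariance matrices of the two measurements
    """
    diff = np.asarray(p_i) - np.asarray(p_j)
    sigma = cov_i + cov_j                     # combined uncertainty
    return float(np.sqrt(diff @ np.linalg.solve(sigma, diff)))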

Because the depth measurements of neighboring pixels influence one another, a model is incorporated into the formula to obtain an accurate uncertainty for each depth measurement. The representation of depth is thereby improved: each depth is treated as an independent Gaussian random variable with an appropriately modeled variance.

To extend the applicability of the proposed method, we give the corresponding parameters for a stereo system here. The fundamental model of a stereo system is similar to that of RGB-D, meaning the definitions of d and Σ′ are the same; the difference lies in the source of error: the error terms of a stereo system come mainly from the calibration deviation of the angle between the baseline and the optical axis and from the field angle, whereas in RGB-D the measurement error comes from the infrared speckle technology. The horizontal and vertical field-of-view parameters, ω and φ respectively, are assumed to be independent Gaussian random variables. The variance parameters of the stereo system are then expressed in terms of the baseline B, the standard deviations σ_ω1 and σ_ω2 of the horizontal field angle, and the standard deviations σ_φ1 and σ_φ2 of the vertical field angle; the relative pose of the cameras in the stereo system is assumed to be invariant.

    2.2 Loop closure with weighted descriptors

A pure visual odometry system usually suffers from drift because the current absolute pose is obtained by accumulating previous ego-motion estimates, which also accumulates their errors. To compensate for drift in dynamic environments, the point-weight adjustment strategy is integrated into the Bag-of-Words method in the proposed system, and we use the points that are more likely to be static to generate visual words. Before setting a new keyframe, the visual word v_t of the current keyframe K_t is formally described in descriptor space as a set of ORB descriptors generated in the tracking thread. Note that only qualified feature points, whose weights are above the static threshold, are used to generate visual words in the keyframes; the point-weight adjustment procedure is given in Algorithm 2 (θ in Algorithm 2 and the static threshold are set to 0.1 and 1 respectively, and w_i^src is 1 initially). Then the new keyframe is set and loop closure is checked with a covisibility graph. A loop closure is detected between K_t and K_t1 when the following conditions are fulfilled.

1) Covisibility: we query the keyframe database, which stores the visual words of each keyframe, resulting in a list of matching candidates 〈v_t, v_t1〉, 〈v_t, v_t2〉, … together with their scores. We keep the candidates whose scores are above s_min, the lowest score among the neighbors of K_t in the covisibility graph, and discard candidates directly connected to K_t. This method gives a good trade-off between computation cost and correctness. Consecutive frames among the candidates are pushed into the same group; the one with the highest score is chosen from each group and the others are discarded.

2) Temporal consistency: this step checks the temporal consistency of previous queries. A match 〈v_t, v_t1〉 must be consistent with the k previous matches (k is set to 5 in the experiments), which means s(v_{t−Δt}, v_{t1−Δt}), …, s(v_{t−kΔt}, v_{t1−kΔt}) must all be above the threshold (s_min × 0.8).

3) Geometric consistency: this step ensures that the two candidate matching keyframes are not too far away from each other, i.e. ‖dis(T_t) − dis(T_t1)‖ < τ_dis, where dis(T) extracts the translation vector from the transformation matrix T, and τ_dis is set to 1.5 m and 3 m in indoor and outdoor environments respectively.
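A compact sketch of this test, assuming the inequality reconstructed above; the poses are taken to be 4×4 homogeneous transforms.

import numpy as np

def geometric_consistency(T_t, T_t1, tau_dis=1.5):
    """Reject loop candidates whose keyframes are too far apart.

    T_t, T_t1 -- 4x4 poses of the two candidate keyframes
    tau_dis   -- 1.5 m indoors, 3 m outdoors (Section 2.2)
    """
    dis = lambda T: T[:3, 3]                 # extract the translation vector
    return np.linalg.norm(dis(T_t) - dis(T_t1)) < tau_dis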

If the keyframe pair 〈K_t, K_t1〉 satisfies all these conditions, a new relative pose constraint between them is generated, which means a new loop is detected. The loop fusion module is then activated to correct the trajectory as in [2].

Algorithm 2: Point-weight adjustment procedure
Input:
- a source cloud P_src of the keyframe
- N groups of target clouds P_tgt_j and matched-feature point sets B_tgt_j from every frame corresponding to the keyframe in the tracking thread (0 < j ≤ N)
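The body of Algorithm 2 is not reproduced in the source; the following is a hypothetical reconstruction of the point-weight adjustment from the description in Section 2.2 (θ = 0.1, static threshold 1, initial weight 1). The update rule itself is an assumption, not the paper's exact procedure.

THETA = 0.1           # adjustment step θ from Section 2.2
STATIC_THRESHOLD = 1  # minimum weight for a point to generate visual words

def adjust_point_weights(weights, static_sets):
    """Hypothetical point-weight adjustment: each tracked frame votes on the
    keyframe's points, raising the weight of points seen inside the extracted
    static region and lowering the rest.

    weights     -- dict: keyframe point index -> current static weight (init 1)
    static_sets -- per-frame sets of keyframe point indices matched inside
                   the extracted static region
    Returns the indices of qualified (static) points.
    """
    for static in static_sets:
        for idx in weights:
            weights[idx] += THETA if idx in static else -THETA
    return {i for i, w in weights.items() if w >= STATIC_THRESHOLD}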

    3 Experiments

The whole system is tested on the TUM RGB-D dataset. To test the performance of the proposed method in different environments, various kinds of sequences are used: not only sequences captured in a static environment, but also low-dynamic and high-dynamic sequences. In the high-dynamic sequences, more than half of the images contain dynamic objects, and the dynamic parts account for more than 30% of the picture. In these sequences, people move around the environment randomly, while the camera also moves with different patterns (static, xyz, rpy and halfsphere). All experiments are performed on a desktop computer with an Intel Core i7-7700HQ CPU (2.8 GHz) and 32 GB RAM.

    3.1 Parameters setting

PnP and RANSAC are used to estimate the camera pose when tracking the current frame, and the number of iterations k in RANSAC is calculated as

k = log(1 − p) / log(1 − r^N)

where p represents the confidence probability, r represents the inlier ratio, and N is the number of points in the calculation. We set the confidence probability p to 0.99, and initialize r with r′, the inlier ratio of the last frame. In the coarse-to-fine process of the static region extraction algorithm, we first perform 3k iterations for the initial pose estimation, then merge any eligible group pair that satisfies the following conditions: 1) the two groups have similar initial pose estimates: we obtain the pose result of every group and compute the absolute error of every pair, and those below one-fifth of the maximal error are regarded as similar; 2) the two groups are close to each other.

The minimal boundary distance of every pair is calculated in the pixel plane, and pairs below 10 pixels are considered close in a typical 640×480 frame. The clustering process is repeated until no two groups are eligible. Lastly, the static points in the extracted region are used to calculate the camera trajectory, with 10k RANSAC iterations to ensure accurate results. Furthermore, we recommend setting the distance-difference threshold of the distance consistency check to 0.1 m for indoor sequences and 0.5 m for outdoor sequences for the best effect. These parameters can be adjusted for different application environments.
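A minimal sketch of the iteration-count calculation, assuming the standard RANSAC formula reconstructed above; the minimal sample size n used here is an illustrative assumption.

import math

def ransac_iterations(p=0.99, r=0.5, n=4):
    """Standard RANSAC iteration count: k = log(1 - p) / log(1 - r**n).

    p -- confidence probability (0.99 in Section 3.1)
    r -- inlier ratio, initialized from the previous frame's r'
    n -- number of points per minimal sample (assumed here)
    """
    return math.ceil(math.log(1 - p) / math.log(1 - r ** n))

# the coarse pass uses 3k iterations per group; the final static-region
# estimate uses 10k iterations (Section 3.1)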

    3.2 Contrast experiments

The Absolute Trajectory Error (ATE) metric is used to evaluate the system. We compare the results with ORB-SLAM2[2] and BaMVO[18]. ORB-SLAM2 is a state-of-the-art SLAM method for static environments, which can only handle a small number of dynamic objects. BaMVO is specially designed to handle dynamic environments with RGB-D.

The improvements of the proposed system come from two sources: the static-region extraction method finds correct correspondences efficiently, where a higher ratio of correct correspondences yields a more accurate transformation estimate; and, using these correspondences, the static weighting strategy reduces the influence of dynamic objects on loop closure. In contrast, ORB-SLAM2 adopts the static-environment assumption in its formulation and cannot perform normally on highly dynamic sequences, while BaMVO simply approximates points at the same image coordinates as correspondences, causing transformation estimation errors.

The comparison results are shown in Table 1: the proposed method outperforms the other two systems on almost all dynamic sequences, especially in high-dynamic environments. For the high-dynamic sequences, the improvement is 81.37% over ORB-SLAM2 and 51.25% over BaMVO. For a more detailed view of the proposed system, the estimated trajectories are compared with the ground truth, and some results are shown in Fig. 5 and Fig. 6.

Fig. 5 Some results of the estimated trajectories from ORB-SLAM2 and the proposed system

Fig. 6 Translation error comparison between ORB-SLAM2 and the proposed method on "fr3/walking_xyz"

Tab. 1 RMSE comparison of absolute trajectory error on the TUM RGB-D benchmark

In the first row of Fig. 5, the trajectories are estimated by ORB-SLAM; in the second row, they are estimated with the proposed strategy. Notably, for the low-dynamic sequence "fr3/sitting_xyz" the improvement of the proposed system is small, but for the high-dynamic sequences the trajectory error is reduced greatly. We compare the quantitative translation error on the sequence "fr3/walking_xyz" in Fig. 6, where the blue star symbols represent ORB-SLAM2 and the red round symbols represent the proposed method. Fig. 6 shows that the proposed system is more stable and accurate than ORB-SLAM2 on this high-dynamic sequence.

With multi-threaded programming, the frame rate of the proposed system reaches around 36 fps, including loop closure and visual odometry estimation, so the system satisfies the requirements of real-time operation.

    4 Conclusions

In this paper, we present a real-time RGB-D SLAM system that handles highly dynamic environments robustly. The system uses a static region extraction method to segment dynamic objects from the static background, and the feature points in the static region are integrated into the RANSAC algorithm to estimate the camera trajectory. Furthermore, an improved Bag-of-Words method detects loops even in high-dynamic scenes, compensating for the accumulated drift. The proposed method is evaluated on the dynamic sequences of the TUM dataset. Compared with state-of-the-art real-time methods, in terms of absolute trajectory error per second, the proposed method improves the accuracy by 81.37% on challenging sequences.

In future work, we want to investigate how to deal with possible failures in featureless environments and how to provide more accurate camera poses when the dynamic foreground points aggregate in a few small textured areas. A preliminary plan is to combine the direct method with the feature-point method in the tracking thread and to integrate semantic information into the system, which can provide more analyzable scene information and help discriminate entire dynamic objects. Furthermore, we want to extend our principle to stereo systems using the corresponding parameters we have already provided.

Acknowledgments: The authors would like to thank the support from the Jiangsu Overseas Visiting Scholar Program for University Prominent Young & Middle-aged Teachers and Presidents.
