
    Mobile robot localization algorithm based on multi-sensor information fusion

2020-04-28

    WANG Ming-yi, HE Li-le, LI Yu, SUO Chao

    (College of Mechanical & Electrical Engineering, Xi’an University of Architecture & Technology, Xi’an 710055, China)

Abstract: In order to effectively reduce the uncertainty error of single-sensor mobile robot localization and improve the accuracy and robustness of robot localization and mapping, a mobile robot localization algorithm based on multi-sensor information fusion (MSIF) was proposed. In this paper, simultaneous localization and mapping (SLAM) was realized on the basis of the laser Rao-Blackwellized particle filter (RBPF)-SLAM algorithm, and graph-based optimization theory was used to constrain and optimize the pose estimates of Monte Carlo localization. To address the heavy computation and low tracking accuracy of visual information processing, feature point extraction and a quadrilateral closed-loop matching algorithm based on oriented FAST and rotated BRIEF (ORB) were improved by means of the three-dimensional (3D) point features of the binocular visual reconstruction of the environment. A factor graph model was used to fuse the laser RBPF-SLAM localization and the binocular visual localization under the maximum a posteriori probability criterion. The simulation and experimental results indicate that the localization accuracy of the proposed method is higher than that of the traditional RBPF-SLAM algorithm and of common improved algorithms, which verifies the effectiveness and usefulness of the proposed method.

    Key words: mobile robot; simultaneous localization and mapping (SLAM); graph-based optimization; sensor fusion

    0 Introduction

The self-localization of a mobile robot from sensor measurements is the basis and premise of autonomous navigation and the intelligent completion of specific tasks. Self-localization relies on a known environment map model, while generating that map model in turn requires the robot to confirm its pose while sensing the unknown environment, which leads to the simultaneous localization and mapping (SLAM) problem. In this process, the estimates of the robot pose and the environment map model are interdependent and iterative. SLAM is essentially a state estimation problem in which the data acquired by the sensors are processed mainly by filtering or graph-based optimization. In Ref.[1], the Rao-Blackwellized particle filter (RBPF) algorithm was proposed to solve SLAM effectively. This algorithm approximates well the joint probability density of the mobile robot pose and the environment map model. It is still widely used in laser SLAM systems, but it suffers from high algorithmic complexity and large memory consumption. The algorithms proposed in Refs.[2-5] improve execution efficiency through the proposal distribution or the resampling mechanism of the particles. With the gradual maturation of visual SLAM technology, the parallel tracking and mapping (PTAM) method proposed in Ref.[6] adopted key-frame-based nonlinear optimization, parallelized tracking and mapping, and obtained better estimation results than traditional filters, but it has limitations such as small application scenarios and easy tracking loss. Refs.[7-8] inherited the main idea of PTAM and proposed ORB-SLAM and its improvement ORB-SLAM2. These algorithms use the ORB feature points of the image [9] for matching, which greatly improves computational efficiency, but relying on a visual sensor alone may cause high dependence on the visual features of the scene.
In Ref.[10], a graph-based optimization strategy was used to create an environment map while the Harris operator was used to design a quadrilateral closed-loop matching algorithm; the localization accuracy and algorithm efficiency were thus improved, but the environmental adaptability of this algorithm is poor.

Since laser or vision alone is limited, multi-sensor information fusion (MSIF) can compensate for the disadvantages of the individual sensors and improve the localization accuracy of the robot. In Ref.[11], the SLAM of laser and vision fusion was studied: a probabilistic heuristic model was used to extract the beam and project it onto a raster map to realize feature fusion. Although this method can effectively reduce the uncertainty of the data, it suffers from heavy computation and poor real-time performance. In Ref.[4], laser and visual observations were used to improve the importance distribution of particle sampling on the basis of RBPF-SLAM; the robot localization results are better than those of traditional RBPF-SLAM. Nevertheless, the information fusion was carried out by an extended Kalman filter, so good results can be obtained only when the system is nearly linear and continuous. As a novel nonlinear optimization method, the factor graph (FG) has the advantages of smooth growth and global optimization [12] and is gradually being applied to sensor fusion. In Ref.[13], pre-integration was carried out on the inertial measurement unit (IMU), and the efficient fusion of the IMU and monocular vision was realized in the FG framework. This algorithm is more suitable for unmanned aerial vehicle (UAV) application scenes with large acceleration variation, while it performs poorly on ground mobile robots. Aiming at the above problems, in this paper, laser/vision fusion based on the FG is studied on the basis of laser RBPF-SLAM localization and an FG fusion model, and the ORB feature point processing is improved to adapt to the construction of binocular vision factor nodes.

    1 Monte Carlo localization method based on laser scanning

    1.1 RBPF-SLAM algorithm

The core idea of Monte Carlo localization is to represent the belief of the robot pose in the environment with a set of random samples with corresponding weights (i.e. "particles"); particle resampling based on importance weights is required during localization, so this method is also known as the "particle filter". By RBPF [1], the joint posterior probability distribution of SLAM is factored into the estimation of the robot pose trajectory and, conditioned on that trajectory, the estimation of the landmark map,

p(x1:t, m|z1:t, u1:t-1) = p(x1:t|z1:t, u1:t-1)·p(m|x1:t, z1:t),

(1)

where x1:t is the real-time pose state of the robot movement; m represents the global landmark map; z1:t is the systematic observation and represents the observation data of the laser ranging sensor; u1:t-1 is the system control quantity and represents the mileage data of the coded odometer. The schematic diagram for SLAM in the RBPF-SLAM framework is shown in Fig.1.

    Fig.1 Schematic diagram for SLAM

    The steps of traditional RBPF-SLAM are as follows:

3) Calculate the weight of each particle to compensate for the difference between the importance distribution and the target distribution according to the importance sampling principle.

    (2)

The likelihood field (LF) is used in the laser SLAM system to describe the probability that the laser ranging sensor detects an environmental obstacle. The laser-based robot pose observation model can be obtained by calculating the distance between each current laser scanning point and its nearest neighboring point [14], as

    (3)

where N′ represents the number of laser scanning points at time t, and dj is the distance between the current laser scanning point and its nearest neighboring point.
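Eq.(3) is not reproduced in this copy, but under the common Gaussian likelihood-field assumption each scan point contributes an independent term in its nearest-obstacle distance dj. A minimal sketch of the resulting particle weight follows; the noise scale sigma is an assumed parameter, not a value from the paper:

```python
import math

def likelihood_field_weight(distances, sigma=0.1):
    """Particle weight under a Gaussian likelihood-field model: each scan
    point's distance d_j to the nearest map obstacle contributes a Gaussian
    term; the product is accumulated in log space for numerical stability."""
    log_w = 0.0
    for d in distances:
        log_w += -0.5 * (d / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
    return math.exp(log_w)
```

A particle whose predicted scan aligns well with the map (small dj) thus receives a larger weight than one whose scan points fall far from obstacles.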

4) Select particles according to the weights calculated in the previous step, eliminate particles with low weights, and resample when the number of effective particles is smaller than the preset threshold Nth, namely

    (4)
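Since Eq.(4) is not reproduced here, the step above is sketched with the standard choices: the effective-particle criterion Neff = 1/Σwi² and low-variance (systematic) resampling. Both are assumptions about the concrete formulas, not quotations from the paper:

```python
import random

def effective_particles(weights):
    # N_eff = 1 / sum(w_i^2) for normalized weights; a common degeneracy measure
    s = sum(weights)
    w = [x / s for x in weights]
    return 1.0 / sum(x * x for x in w)

def resample_if_needed(particles, weights, n_th):
    """Low-variance (systematic) resampling, triggered when N_eff < N_th."""
    if effective_particles(weights) >= n_th:
        return particles, weights
    n = len(particles)
    s = sum(weights)
    w = [x / s for x in weights]
    step = 1.0 / n
    r = random.uniform(0.0, step)   # single random offset for all n draws
    out, c, i = [], w[0], 0
    for m in range(n):
        u = r + m * step
        while u > c:                # advance to the particle covering u
            i += 1
            c += w[i]
        out.append(particles[i])
    return out, [step] * n          # weights reset to uniform after resampling
```

After resampling, high-weight particles are duplicated, low-weight ones are dropped, and the weights are reset to uniform.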

    1.2 Improved RBPF-SLAM algorithm

In the Monte Carlo localization process within the RBPF-SLAM framework, the Markov property is assumed, that is, the state of the robot at time t is only related to the state at t-1 and is unrelated to the states before t-1. Affected by factors such as the mechanical error of the robot itself and the measurement accuracy of the laser sensor, long-term operation of the system causes a large cumulative error. A nonlinear optimization method is therefore used to take the states and observations before t-1 into account: graph-based optimization theory [15] is used to constrain and optimize the real-time pose of the robot.

    (5)

where xi represents the robot pose at the i-th vertex; zij represents the edge between vertex i and vertex j, i.e. the constraint relation between the two vertices, as shown in Fig.2. After obtaining the real-time robot pose by RBPF-SLAM, the translational-rotational matrix Lij between laser frames is obtained by the iterative closest point (ICP) algorithm [16]. The accuracy of the information matrix Ωij is positively correlated with that of the matrix Lij. e(xi, xj, zij) is the error function for the matching degree between the poses xi, xj and the constraint edge zij.

    (6)

    (7)

    (8)

The optimal estimate x* of the robot pose, minimizing F(x), can be obtained by the Gauss-Newton method, the Levenberg-Marquardt method or other gradient descent methods.

    Fig.2 Graph-based optimization of mobile robot pose
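As a toy illustration of graph-based pose optimization, the following sketch runs Gauss-Newton on a 1-D pose graph with relative-motion constraints. The scalar poses, the edge list and the anchoring of the first pose are illustrative assumptions, not the paper's SE(2) setup:

```python
import numpy as np

def optimize_1d_pose_graph(x, edges, iters=10):
    """Toy 1-D pose graph: x are scalar poses, edges are (i, j, z_ij, omega)
    with error e = (x[j] - x[i]) - z_ij. Pose 0 is held fixed as the anchor.
    Gauss-Newton: solve H dx = -b with H = J^T Omega J, b = J^T Omega e."""
    x = np.array(x, dtype=float)
    n = len(x)
    for _ in range(iters):
        H = np.zeros((n, n))
        b = np.zeros(n)
        for i, j, z, om in edges:
            e = (x[j] - x[i]) - z
            # Jacobian of e w.r.t. (x_i, x_j) is (-1, +1)
            H[i, i] += om; H[j, j] += om
            H[i, j] -= om; H[j, i] -= om
            b[i] += -om * e
            b[j] += om * e
        dx = np.zeros(n)
        dx[1:] = np.linalg.solve(H[1:, 1:], -b[1:])  # anchor: pose 0 not updated
        x += dx
        if np.linalg.norm(dx) < 1e-9:
            break
    return x
```

With consistent odometry-like constraints, the optimizer distributes the accumulated error over the trajectory, which is exactly the role graph optimization plays behind RBPF-SLAM here.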

    2 Binocular vision ORB feature point processing

A single-line laser ranging sensor can quickly and accurately obtain the depth information of its scanning plane and has strong anti-interference capability. However, the environmental information contained in the laser point cloud is relatively limited. In this paper, the fusion of a single-line laser sensor and a binocular vision sensor improves the accuracy and robustness of scene representation and robot localization.

In order to use the binocular vision camera to restore the 3D information of the environment and realize camera pose estimation, the rectification, filtering and other pre-processing of the left and right frame images should first be completed according to the calibration results. Feature point extraction and matching is the process of finding corresponding points of the same scene in the preprocessed images.

    2.1 Feature point extraction

Feature points are extracted by features from accelerated segment test (FAST) corner detection in the oriented FAST and rotated binary robust independent elementary features (BRIEF) (ORB) algorithm. The gray levels of the pixels around a candidate point are examined: if a sufficient number of pixels in the neighborhood differ from the candidate point in gray level by more than a threshold, the candidate point is determined to be a feature point.

In order to describe the attributes of the feature point, the ORB algorithm makes the FAST corner detection directional by the gray-scale centroid method and uses a rotation matrix and machine learning to improve the rotation invariance of the BRIEF descriptor. A subsequent improvement gives the ORB algorithm good scale consistency by building an image pyramid. The feature point extraction speed of this algorithm is much higher than that of scale-invariant feature transform (SIFT), speeded-up robust features (SURF) and other algorithms, which better meets the real-time requirements of robot localization [17].

    2.2 Feature point matching

The key to determining whether two ORB feature points in a pair of frames form a matching pair is to check whether the Hamming distance between their descriptors reaches the similarity threshold. In order to solve the rotation matrix and translation vector of the binocular camera, the pose change of a feature point in the next frame is generally tracked and matched after obtaining the 3D information of the feature point by stereo matching. In traditional matching algorithms, the stereo matching and tracking matching processes are relatively independent [18]. In order to further improve the accuracy of tracking matching, the quadrilateral closed-loop matching structure shown in Fig.3 is built for adjacent binocular frames.
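The Hamming-distance test can be sketched as brute-force nearest-neighbour matching of binary descriptors; the similarity threshold max_dist is an assumed parameter:

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors (bytes objects)."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def match_descriptors(des1, des2, max_dist=64):
    """Brute-force nearest-neighbour matching on Hamming distance, keeping
    only the pairs whose best distance is within the similarity threshold."""
    matches = []
    for i, d1 in enumerate(des1):
        best_j, best_d = -1, max_dist + 1
        for j, d2 in enumerate(des2):
            d = hamming(d1, d2)
            if d < best_d:
                best_j, best_d = j, d
        if best_j >= 0 and best_d <= max_dist:
            matches.append((i, best_j, best_d))
    return matches
```

Because ORB descriptors are binary, this XOR-and-popcount comparison is what makes their matching much cheaper than the floating-point distances used by SIFT or SURF.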

    Fig.3 Quadrilateral closed loop matching structure

    In Fig.3, the horizontal lines represent the stereo matching process of left and right frame images at the same time, and the tilt lines represent the tracking matching process of double-frame images on the same side at adjacent moments.

A quadrilateral closed-loop constraint can be established according to the structural characteristics of the binocular vision camera during movement. In tracking matching, the changes in the pixel coordinates of the same feature point are the same: in Fig.3, the slopes of p1p3 and p2p4 are almost equal. The following constraint rules are established in combination with the epipolar constraint and the parallax range constraint.

    (9)

where K is the slope test threshold and dmax is the maximum parallax constraint. In the quadrilateral closed-loop matching of adjacent frames, the feature points that satisfy the above constraints are retained. In order to optimize the matching results, random sample consensus (RANSAC) [19] is introduced for iterative screening. In actual matching, in consideration of the calibration error of the binocular vision camera, the epipolar constraint is allowed a fluctuation range of three pixels around the theoretical pixel. Test results show that when K=0.35, the matching success rate and accuracy of the ORB feature point extraction and matching algorithm can be ensured.
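A minimal sketch of these constraint rules follows, assuming the slope test with threshold K, the three-pixel epipolar row tolerance and the parallax bound dmax described above; the concrete quadruple layout and the dmax value are illustrative since Eq.(9) is not reproduced:

```python
def quad_loop_ok(p1, p2, p3, p4, k_th=0.35, d_max=100.0, row_tol=3.0):
    """Check the quadrilateral closed-loop constraints for one candidate
    match (p1/p2: left/right image at time t, p3/p4: left/right at t+1).
    Points are (u, v) pixel coordinates; d_max and row_tol are assumed."""
    # epipolar constraint: stereo pairs lie on (nearly) the same image row
    if abs(p1[1] - p2[1]) > row_tol or abs(p3[1] - p4[1]) > row_tol:
        return False
    # parallax range constraint: disparity must satisfy 0 < d <= d_max
    d_t, d_t1 = p1[0] - p2[0], p3[0] - p4[0]
    if not (0 < d_t <= d_max and 0 < d_t1 <= d_max):
        return False
    # tracking constraint: slopes of p1->p3 and p2->p4 nearly equal
    def slope(a, b):
        dx = b[0] - a[0]
        return (b[1] - a[1]) / dx if abs(dx) > 1e-9 else float("inf")
    return abs(slope(p1, p3) - slope(p2, p4)) <= k_th
```

Only the quadruples that pass all three tests are handed to RANSAC for the final iterative screening.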

    3 Laser/Vision fusion algorithm based on FG

During robot movement, the binocular images at adjacent moments are highly similar. If all collected frames are processed, the information redundancy increases rapidly, and with it the computational load. Information redundancy and the consumption of computing resources can be effectively balanced through a suitable key frame selection strategy. As a result, the binocular vision localization rate is low, while the update rate of laser RBPF-SLAM is relatively high. To handle this asynchrony of sensor rates, a method based on the FG is adopted in this paper to realize MSIF.

    3.1 FG fusion model

The FG is a probabilistic graph model expressing the joint probability distribution of random variables and is composed of state nodes, factor nodes and edges. When the state xi is associated with the factor fi, a constraint edge exists between the state and the factor. The measurement model in robot localization can be expressed as

zi = hi(xi) + vi,

    (10)

where zi is the measurement of the state xi by the sensor with the measurement function hi(xi), and vi is the measurement noise. The optimization objective of a single factor is obtained by

fi(xi) = d(hi(xi) - zi).

    (11)

For estimation problems whose error obeys a Gaussian distribution, the Mahalanobis distance is usually chosen as the cost function: d(e) = e^TΩe, where Ω is the inverse of the measurement noise covariance matrix. The whole FG model is the product of the individual factors,

    (12)

where each factor fi(xi) represents an error function to be optimized; the optimal estimate x* of the robot pose for information fusion can be obtained by minimizing FFG(x).
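Minimizing FFG(x) is equivalent to minimizing the negative log of the factor product, i.e. the sum of the Mahalanobis terms e^TΩe. A minimal sketch, with the factor list as an assumed representation:

```python
import numpy as np

def factor_cost(x, factors):
    """Negative-log cost of a factor graph: sum of Mahalanobis terms
    d(e) = e^T Omega e over all factors. Each factor is (h, z, omega):
    measurement function, measurement, information matrix (inverse of the
    measurement noise covariance)."""
    total = 0.0
    for h, z, omega in factors:
        e = h(x) - z                    # residual of this factor
        total += float(e.T @ omega @ e)  # Mahalanobis distance
    return total
```

A nonlinear least-squares solver then searches for the state x that drives this total cost to its minimum.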

The laser/vision fusion model proposed herein is shown in Fig.4. The state output of laser RBPF-SLAM runs throughout the entire localization phase. After binocular vision observations are received, the factor node fvision is defined to expand the FG, so as to correct the localization results of laser RBPF-SLAM. The FG nodes of laser RBPF-SLAM and binocular vision are now described, respectively.

    Fig.4 FG fusion model

    3.1.1 Laser RBPF-SLAM node

According to the idea of improving the RBPF-SLAM algorithm by nonlinear optimization in Section 1.2, the error function e(xi, xj, zij) can be expressed as

    eij=h(xi,xj)-zij.

    (13)

    In order to facilitate optimization and solution, the relation between two adjacent state nodes is expressed as follows according to Lie group

    (14)

where Lij is the translational-rotational matrix obtained by iterative closest point on the two-frame laser point clouds, and Ti and Tj are the poses of the current state nodes xi and xj. The laser RBPF-SLAM node is constructed as

    (15)

    3.1.2 Binocular vision node

    It is assumed that the matching point set of adjacent key frames obtained by feature point extraction and matching is

    (16)

The error term of the i-th corresponding point is

    (17)

Then a least squares problem is constructed, and singular value decomposition (SVD) is used to solve for the pose that minimizes the sum of squared errors

    (18)

The translational-rotational matrix Vij between adjacent key frames can be obtained by the inverse transformation of the optimized R and t, and the binocular vision node is constructed as

    (19)
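The SVD solution of the least-squares alignment in Eqs.(17)-(18) is commonly the closed form attributed to Kabsch; a sketch under that assumption:

```python
import numpy as np

def estimate_rt(P, Q):
    """Closed-form least-squares rigid transform (R, t) with Q ~= R P + t,
    via SVD of the covariance of the centered point sets (Kabsch method).
    P, Q: (N, 3) arrays of matched 3D points from adjacent key frames."""
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # fix reflection to a proper rotation
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = q0 - R @ p0
    return R, t
```

Given the matched 3D points of two key frames, this yields the relative camera motion from which the binocular vision node's constraint is formed.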

    3.2 Incremental optimization

The essence of FG optimization is iteratively solving a least squares problem: given an initial value x0, find the increment Δx that minimizes FFG(x). In the Gauss-Newton method, f(x) is expanded in a first-order Taylor series, so the increment Δx is converted to

    (20)

    The incremental equation is obtained by derivation

J(x)^T J(x)Δx = -J(x)^T f(x),

    (21)

where J(x) is the Jacobian matrix of f(x) at point x. The matrix H composed of J(x)^T J(x) is sparse, so the computation can be accelerated by Schur elimination or Cholesky decomposition [20].
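One Gauss-Newton increment of Eq.(21), solving the normal equations by Cholesky factorization as mentioned above, can be sketched as follows; this is a minimal dense version, whereas a real implementation would exploit the sparsity of H:

```python
import numpy as np

def gn_step(J, f, omega=None):
    """One Gauss-Newton increment: solve J^T Omega J dx = -J^T Omega f
    via Cholesky factorization of the SPD matrix H = J^T Omega J."""
    if omega is None:
        omega = np.eye(len(f))
    H = J.T @ omega @ J
    b = J.T @ omega @ f
    L = np.linalg.cholesky(H)          # H = L L^T
    y = np.linalg.solve(L, -b)         # forward substitution
    return np.linalg.solve(L.T, y)     # back substitution gives dx
```

For a residual that is already linear in x, a single step lands exactly on the least-squares solution; for the nonlinear factors here, the step is repeated until Δx is negligible.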

As the factor nodes in the FG model gradually increase, all node updates would have to be recalculated whenever a new node is generated, which leads to heavy computation and is not conducive to real-time performance. By comparing the Jacobian matrices at different moments, a newly added factor node can be approximated as affecting only its adjacent nodes. In this way, only the affected nodes need to be considered whenever a node update is calculated, instead of optimizing the entire graph. On the basis of this idea, Kaess et al. processed the FG more elaborately, proposed incremental FG optimization, and improved the optimization process [21].

    4 Experiment and analysis

In order to verify the effectiveness of the proposed MSIF algorithm, a simulation test and an experimental verification were carried out. In the simulation test, the measurement processes of the laser and vision sensors were simulated, and the robot localization results were compared with the reference trajectory to quantitatively evaluate the localization error of the proposed method. In addition, the experiment verified the influence of the MSIF algorithm on robot localization and back-end mapping accuracy on a real mobile robot platform.

    4.1 Simulation test

MATLAB was used to establish the motion model of a two-wheel differential-drive mobile robot, and the robot trajectory is shown in Fig.5. The robot started from the origin of the coordinates, and its movement was restricted to a 30 m×30 m square region centered at the origin. The odometer error was 0.05 m.

    Fig.5 Robot trajectory set in simulation test

Observation markers were provided for the sensors by placing several stationary landmarks at random known positions within the movement region. The measurement errors of the laser and vision sensors were set to 0.1 m and 0.15 m, and the sampling periods to 0.1 s and 0.5 s, respectively. The sensor range was set to 10 m.

The particles representing robot poses were initialized near the origin of the coordinates. 1 000 time steps were simulated for the traditional RBPF-SLAM algorithm and the proposed MSIF algorithm. The localization error curves of the robot pose x1:1 000 (x, y, θ) are shown in Fig.6. Compared with the ideal robot pose in the simulation environment, the two groups differed little in orientation error (θ error). However, in position error (x error and y error), the localization results obtained by the proposed algorithm were significantly better, with errors closer to 0.

    Fig.6 Localization error curve of robot

    4.2 Experimental verification

    The crawler-type mobile robot platform used in the experiment is shown in Fig.7.

The robot body is driven by two drive-wheel motors equipped with rotary encoders. The single-line laser ranging sensor is a HOKUYO URG-04LX; its distance measuring range is 20-4 000 mm with a measuring error of ±1%, and its angular range is 240° with a measuring error of 0.36°. The binocular vision camera is a Point Grey Bumblebee2, and the resolution of a single-frame image is 640 px×480 px.

    Fig.7 Crawler-type mobile robot platform

The experimental environment is shown in Fig.8. In the experiment, the robot trajectory was first set through the movement control software; the drive-wheel motors drove the robot after receiving the corresponding instructions. Due to the control error of the motors themselves and the cumulative error of dead reckoning, it was difficult to obtain the true value of the robot trajectory. From the mutually iterative relation between robot localization and mapping, the experimental group with more accurate localization results under the same conditions should also have better mapping results. Therefore, the relative merits of different localization algorithms can be assessed by comparing the environment description accuracy of the map models.

    Fig.8 Experimental environment

In order to make the map model more intuitive, the 3D point cloud map of the experimental environment was constructed by referring to the mapping method in Ref.[10] and relying on the robot localization results of the different algorithms. The control group adopted the laser/vision fusion algorithm proposed in Ref.[4]. The robot trajectories obtained by the different localization algorithms are shown in Fig.9, and the 3D point cloud maps built are shown in Fig.10.

    Fig.9 Robot trajectory in experimental scene

    Fig.10 Visualization results of 3D point cloud map

In order to analyze the accuracy of the environmental description of the map models, the 3D point clouds were projected onto the coordinate axes of the environmental coordinate system (x, y, z). The results in Table 1 were obtained by measuring the typical spatial characteristics of the 3D map models and comparing them with the measured values.

    Table 1 Accuracy analysis of 3D point cloud map

There was little difference in shape and dimensional scale between the 3D map model built herein and the actual experimental environment. The average error between the typical spatial characteristics of the map model and the measured values was only 1.27%, and the accuracy was higher than that of the algorithm proposed in Ref.[4]. Besides, the algorithm herein has a smaller standard deviation (SD) of the average error and more stable accuracy. Both algorithms separate the front end from the back end of localization and mapping and use the same point cloud generation method; therefore, the robot localization accuracy of the algorithm herein can be deemed higher than that of the algorithm proposed in Ref.[4].

    5 Conclusion

This paper combined laser RBPF-SLAM and binocular vision, adopted FG nodes to describe the state recursion and update of the robot system, and proposed a laser/vision fusion localization algorithm based on the FG. The simulation results show that the proposed algorithm has higher position accuracy. The localization and mapping experiments in an actual scene verify that this algorithm achieves higher localization accuracy than general fusion localization algorithms, improves the mapping accuracy and has practical value.
