
C-LOG: A Chamfer distance based algorithm for localisation in occupancy grid-maps

Centre for Autonomous Systems, University of Technology, Sydney, NSW, Australia. Available online 19 October 2016

Lakshitha Dantanarayana*, Gamini Dissanayake, Ravindra Ranasinghe

A novel algorithm for localising a robot in a known two-dimensional environment is presented in this paper. An occupancy grid representing the environment is first converted to a distance function that encodes the distance to the nearest obstacle from any given location. A Chamfer distance based sensor model that associates observations from a laser range finder with the map of the environment, without the need for ray tracing, data association, or feature extraction, is presented. It is shown that the robot can be localised by solving a non-linear optimisation problem formulated to minimise the Chamfer distance with respect to the robot location. The proposed algorithm performs well even when robot odometry is unavailable and requires only a single tuning parameter to operate, even in highly dynamic environments. As such, it is superior to the state-of-the-art particle filter based solutions for robot localisation in occupancy grids, provided that an approximate initial location of the robot is available. Experimental results based on simulated and public domain datasets, as well as data collected by the authors, are used to demonstrate the effectiveness of the proposed algorithm.

Robot localisation; Distance functions; Chamfer distance; Optimisation; Sensor models; Environment representation

1. Introduction

Localisation, or determining the pose (position and orientation) of a robot on a given map, is a prime requirement for a robot operating autonomously in an environment. In situations where absolute tracking systems such as the Global Positioning System (GPS) are not available, measurements obtained from sensors mounted on the robot are used for localisation. The algorithm used for localisation depends on the method used for representing the map of the environment.

When the map can be represented using geometric primitives such as points or line segments, extended Kalman filter (EKF) based algorithms are capable of efficiently estimating the robot pose within the map by fusing information gathered from robot odometry and observations of these primitives [2]. EKF based methods are generally computationally efficient but require an initial guess of the prior location of the robot. Therefore, such methods are incapable of solving problems such as the kidnapped robot problem, where the initial location of the robot is unknown. Furthermore, these methods exploit only a small proportion of the information available from sensors such as laser range scanners or RGB-D cameras, due to the dimensionality reduction performed at the feature extraction step of the algorithm. EKF based methods usually require deriving motion and observation models and their Jacobians. These are specific to a given robot/sensor combination. Furthermore, extracting, defining, and explicitly associating geometric features and landmarks from the environment to the observations is also sensor specific, presenting an additional challenge.

An occupancy grid that classifies an environment into cells that are either occupied or free is one of the earliest and most commonly used approaches for representing a metric map. When the environment description is available in the form of an occupancy grid map, particle filter based approaches are the preferred choice [3] for robot localisation due to their ability to exploit all the measurements available in a range scan. The particle filter based approaches use a sensor model and a set of particles representing hypothesised robot locations to estimate the true pose of the robot. A sufficiently large number of particles, adequate to describe the probability density function of the robot pose, is required in order to generate location estimates with acceptable accuracy. Particle filters are relatively easy to implement and are capable of global localisation: the ability to deal with the situation when a suitable initial estimate for the robot pose is unavailable. The widely used adaptive Monte-Carlo localisation (AMCL) [4-9], which is also available as a part of the popular Robot Operating System (ROS) [10], is a particle filter based approach for localisation. Within the particle filter framework, it is not straightforward to identify outliers or dynamic objects. In order to address this problem, AMCL uses a “mixture-model” which categorises the range readings by statistically analysing the probable causes of such outliers and penalising these observations during the particle update step. However, Thrun et al. [7] caution that this method would only work in certain limited situations and that the categories should be analysed according to the environment.

Particle filters for localisation can be easily adapted to operate under a wide range of sensors and robot kinematic models. However, to be effective, particle filter implementations need to be tuned using a range of user defined parameters. As the computational burden of a particle filter is proportional to the number of particles used, further tuning is required to dynamically maintain the number of particles at an optimum level. The latest ROS implementation of the particle filter consists of 24 tunable parameters [11].

Optimisation based methods have also been proposed for robot localisation in the literature. These methods predominantly focus on feature based maps rather than on occupancy grid maps. In [12] a genetic optimisation algorithm is used to localise a mobile robot on a map consisting of geometric beacons. Genetic algorithms are also used in [13] for localising on a satellite image geo-map of an outdoor environment using a laser range finder. Kwok et al. [14] propose the use of evolutionary computing techniques, including genetic algorithms, particle swarm optimisation, and the ant system, for feature based localisation and demonstrate their effectiveness and robustness to noise and dynamic environments. Localisation of nodes in a wireless sensor network is a prominent application that relies heavily on optimisation based methods. Mao et al. [15] explain how different techniques are applied to this unique problem and how optimisation based methods can solve the wireless sensor network localisation problem.

Scan matching is another popular method for robot localisation, where an optimisation strategy that minimises the misalignment between observations from a sensor, typically a scan from the laser range finder, and a map is used to estimate the robot location. Algorithms for scan matching proposed in the literature include Iterative Closest Point (ICP) [16,17], Iterative Closest Line (ICL) [18] or Iterative Closest Surface (ICS), and probabilistic likelihood methods [7,19]. In ICP, each laser endpoint in the query scan is associated with a point, line, or surface in the reference scan (or the map, in the case of localisation) using a distance metric such as the Euclidean distance, after which a rigid body transformation [20] is used to compute the best alignment. A new set of data associations using the computed rigid body transformation is then used to repeat this process until convergence. In probabilistic scan matching methods, the sensor error, which is the difference between the actual sensor measurement and the predicted sensor reading, is used to update the likelihood of a given hypothesised robot pose. The predicted reading is estimated by algorithms such as ray-casting, which are computationally expensive, or likelihood fields [7], for which environment dependent tuning is essential as they are an approximation to ray-casting.

Distance function based maps are increasingly being utilised to capture the geometry of environments [21-25]. The distance function not only encodes the occupied regions of the environment, but also provides a continuous measure of distance, making it a much richer representation in comparison to an occupancy grid map. In KinectFusion, Newcombe et al. [22] extend the representation method proposed by Curless & Levoy [21], which uses Signed Distance Functions (SDFs) to encapsulate three-dimensional (3D) surfaces that are incrementally constructed with the use of range readings from a depth sensor. In contrast to 3D occupancy grid maps, which do not have a clear notion of where the surfaces of an environment are or how surfaces can be extracted, the work by Carrillo et al. [24] makes it apparent that there is a clear mathematical strategy for extracting surfaces in environments that are represented by SDFs. Work by Mullen et al. [26] and Chazal et al. [27] uses unsigned distance functions for 3D surface reconstruction, and they point out that unsigned distance functions are much more robust to noise and outliers than SDFs.

In this paper, we propose distance functions as a means of representing two-dimensional environments. We present a sensor model based on the Chamfer distance [28] that can relate measurements from sensors such as laser range finders to a distance function based map. Preliminary results of this work were initially published in Ref. [1]. The proposed sensor model does not rely on feature extraction, data association, or ray tracing operations. We use this sensor model in an optimisation based strategy that minimises the Chamfer distance to provide an efficient means to localise a mobile robot when the initial pose of the robot is approximately known.

This paper is organised as follows: Section 2 introduces the distance function based environment representation and the Chamfer distance based sensor model. Section 3 formulates the optimisation problem that is used to localise the robot on the map as one of minimising the Chamfer distance, explores the properties of the optimisation problem, and presents a strategy for solving it to find the robot pose. An experimental evaluation of the proposed algorithm based on simulation, public domain datasets, as well as data collected in dynamic environments, demonstrating the robustness of the proposed algorithm, is presented in Section 4. Section 5 provides conclusions and future work.

2. Environment representation and sensor model

    In this section we develop a distance function based approach for representing the environment and derive the corresponding sensor model for a laser range finder.

2.1. Distance functions for environment representation

For a given environment populated with objects, a distance transform or distance field is a map of the environment where any point of the field holds the shortest distance to the closest object boundary. The Euclidean distance is commonly used as the distance measure, while other simple metrics such as the city-block distance, chessboard distance, and quasi-Euclidean distance, or complex metrics such as the Wasserstein metric used for 3D image reconstruction [27], are used as alternatives depending on the application and the need for computational efficiency.

When V is the set of occupied space in an environment, the Euclidean distance function at any given point x in space can be expressed by (1):

DF(\mathbf{x}) = \min_{\mathbf{v} \in V} \lVert \mathbf{x} - \mathbf{v} \rVert   (1)

In Equation (1), the distance function is unsigned. However, when representing environments with closed shaped objects, the sign of the distance function can be set to be either positive or negative depending on whether the query point x is outside or inside the closed contour.

Distance functions can be computed on demand if the environment consists of geometric primitives. Alternatively, it is possible to pre-compute distance function values at discrete intervals in space by quantising the environment into pixels (2D environment) or voxels (3D environment).

The operator used to generate a discrete distance function from an occupancy grid map is commonly known as a distance transform. When naïvely implemented, the distance transform is an exhaustive search whose cost depends on many factors, including the resolution of quantisation. However, the algorithm proposed by Rosenfeld & Pfaltz [29] computes the distance transform efficiently with only two passes over any given two-dimensional environment. Furthermore, there are numerous algorithms, some of which rely on graphical processing units, that can compute distance transforms in real time [30-33].
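As a concrete illustration, efficient distance transforms are available off the shelf; the sketch below uses SciPy's exact Euclidean distance transform (a stand-in for illustration, not the implementation used in the paper) on a toy occupancy grid:

```python
import numpy as np
from scipy import ndimage

# Hypothetical 2D occupancy grid: 1 = occupied, 0 = free.
grid = np.zeros((8, 8), dtype=np.uint8)
grid[3, 4] = 1
grid[6, 1] = 1

# distance_transform_edt measures distance to the nearest zero element,
# so pass the *free* mask: occupied cells become the zero set.
dist = ndimage.distance_transform_edt(grid == 0)

print(dist[3, 4])  # 0.0 at an occupied cell
print(dist[3, 5])  # 1.0, one cell away from the obstacle
```

Each cell of `dist` now holds the Euclidean distance to the nearest occupied cell, which is exactly the discrete form of Equation (1).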

Fig. 1 represents an occupancy grid map as a binary image and its distance function, where the grey level of the image is used to represent the distance value.

2.2. Formulation of the sensor model

A sensor is the device through which the robot “sees” the world. It measures a physical quantity and converts the measurement into a tractable signal for the robot to use. The sensors enable the robot to be aware of its environment and to perform tasks reliably and accurately. The information (measurements of physical quantities) gathered by the sensors is known as sensor measurements or observations. Sensors that are commonly used in robots include: (i) contact sensors such as bump sensors and wheel encoders, (ii) inertial sensors such as accelerometers, gyroscopes and magnetometers (compasses), (iii) proximity sensors such as infra-red sensors, sonar sensors, radars, and laser range finders, (iv) visual sensors such as cameras and depth cameras, and (v) absolute sensors such as GPS and visual tracking systems. In practice, it is quite common to use multiple sensors on a robot, as they can complement each other to improve the overall accuracy and facilitate fault detection.

The selection of a sensor predominantly depends on the accuracy required by the task, the suitability of the sensor for the operating environment of the robot, and affordability. For example, even though a GPS based sensor is suitable for outdoor navigation, it cannot be used in indoor environments where the satellite reception is poor and subject to interference.

Sensors such as laser range finders have high accuracy and can be deployed in a wide range of environments. In the past, the high cost of these sensors has limited their use, but with the recent growth of robotics applications in the community, laser range finders with acceptable accuracy are now available at affordable prices.

Fig. 1. (a) An occupancy grid map as a binary image and (b) its distance function.

The observations captured by a sensor are associated with a sensor model, which is an abstract representation of a physical sensor, together with how the observations captured by the sensor are processed, interpreted, and associated with the internal representation of the environment maintained by the robot.

Consider a laser range finder mounted on a robot placed in an environment that is represented using a distance function, as shown in Fig. 2. Observations corresponding to a single laser scan consisting of n range readings r_i at given bearings θ_i can be projected from a given robot pose x = (x, y, φ)^T using Equation (2), as shown in Fig. 2a, to obtain the observation vector in Cartesian space x_o:

\mathbf{x}_{o_i} = \begin{bmatrix} x + r_i \cos(\varphi + \theta_i) \\ y + r_i \sin(\varphi + \theta_i) \end{bmatrix}   (2)

Now a vector of distance readings can be extracted from the distance function at the points x_{o_i}:

\mathbf{d}_{DF} = \left[ DF(\mathbf{x}_{o_1}), DF(\mathbf{x}_{o_2}), \ldots, DF(\mathbf{x}_{o_n}) \right]^{T}   (3)
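A minimal sketch of the projection in Equation (2), with invented pose and scan values (the function name `project_scan` is our own, not from the paper):

```python
import numpy as np

def project_scan(pose, ranges, bearings):
    """Project laser ranges r_i at bearings theta_i from pose (x, y, phi)
    into Cartesian map coordinates, following Equation (2)."""
    x, y, phi = pose
    xs = x + ranges * np.cos(phi + bearings)
    ys = y + ranges * np.sin(phi + bearings)
    return np.stack([xs, ys], axis=1)

# Toy example: two beams from a robot at the origin facing along +x.
pose = (0.0, 0.0, 0.0)
ranges = np.array([1.0, 2.0])
bearings = np.array([0.0, np.pi / 2])
pts = project_scan(pose, ranges, bearings)
# pts ≈ [[1, 0], [0, 2]]
```

Sampling the precomputed distance field at `pts` then yields the vector d_DF of Equation (3).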

The covariance of this vector for a given robot pose, Σ_DF, is a diagonal matrix which can be written as

\Sigma_{DF}(i,i) = J_{DF,\mathbf{x}_{o_i}} \, R \, J_{DF,\mathbf{x}_{o_i}}^{T}   (4)

where J_{DF,x_{o_i}} is the Jacobian of the distance function at the query points x_{o_i} and the sensor noise is represented by R. Furthermore, R can be simplified provided that the only contributing factor to the sensor noise is the laser range noise.

We note here that the “likelihood range finder model”, or the End Point (EP) model, proposed by Thrun et al. [7] is a sensor model that uses a distance function based environment representation with a laser range finder. However, it is an empirical approximation, using an empirical mixture model introduced to cope with the high computational expense associated with the ray-casting process.

The vector d_DF in Equation (3) is a measure of disparity between a map and a sensor measurement. A scalar measure of disparity has clear computational advantages in the process of computing the measurement likelihood. In the context of image processing and computer vision, the literature is abundant with scalar measures of disparity between distance functions and binary images.

The Chamfer distance is one of many such distance metrics that does not require defining explicit corresponding point pairings. The Hausdorff distance [34,35], another popular method used in many applications, captures the single point with the worst mismatch from a set of points, as opposed to the Chamfer distance, which captures the average mismatch of all given points. First introduced by Barrow et al. [28] in 1977, Chamfer distance based template matching has gone through many implementations, improvements, and value additions over the years, which include making it robust to rotation (i.e. minor orientation changes) [36], scale changes [37], and resolution changes, and even robust in high clutter [38].

In computer vision literature, the Chamfer distance is defined and used for template matching with binary images, where a semblance of the binary query shape is located within a larger reference image. Let U = {u_i} and V = {v_j} be the sets of query and reference image points respectively. The Chamfer distance between U and V is given by the average of distances between each point u_i ∈ U, n(U) = n, and its nearest edge in V:

d_{CD}(U, V) = \frac{1}{n} \sum_{u_i \in U} \min_{v_j \in V} \lVert u_i - v_j \rVert

    Here n is the number of points in U.

With the use of a distance function, it is possible to reduce the cost functions (7) and (8) so that they can be evaluated in linear time, O(n) [38].
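The linear-time evaluation amounts to a lookup of precomputed distance-field values at the query points; the function name `chamfer_distance` and the toy map below are our own illustrative choices:

```python
import numpy as np
from scipy import ndimage

def chamfer_distance(dist_field, points, resolution=1.0):
    """Average distance-field value at the (already projected) query points:
    an O(n) evaluation of the Chamfer cost, given a precomputed field."""
    # Convert metric (x, y) coordinates to grid indices (nearest cell).
    idx = np.round(np.asarray(points) / resolution).astype(int)
    rows = np.clip(idx[:, 1], 0, dist_field.shape[0] - 1)
    cols = np.clip(idx[:, 0], 0, dist_field.shape[1] - 1)
    return dist_field[rows, cols].mean()

# Toy map with a single occupied cell at (col=2, row=3).
grid = np.zeros((6, 6), dtype=np.uint8)
grid[3, 2] = 1
df = ndimage.distance_transform_edt(grid == 0)

# A perfectly aligned "scan" point hits the obstacle: Chamfer distance 0.
print(chamfer_distance(df, [(2.0, 3.0)]))  # 0.0
# A point misaligned by one cell scores 1.
print(chamfer_distance(df, [(2.0, 4.0)]))  # 1.0
```

Because the field is computed once per map, each cost evaluation is a single array lookup per scan point.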

Fig. 2. Projection of the laser scan from an estimated robot pose, (a) on the binary occupancy grid map and (b) on the distance function.

Fig. 3. Variation of the Chamfer distance against robot location (x, y) at two different locations of the Intel research labs dataset, in the vicinity of the true robot pose. The corresponding laser scan (not to scale) is given above the contour plots. φ is set to its true value.

    The Chamfer distance is a sum of positive distances and is defined for unsigned distance functions.

In the case of two-dimensional template matching using the Chamfer distance, the reference image and the template are both binary edge images, which can be obtained using an edge filter on the original images. The highest computational complexity in this context lies in the distance transform process that creates the distance function from the reference edge image, which must be done for every image frame. However, as discussed before, recent high-speed implementations of the distance transform enable faster execution and have even made it possible to use the Chamfer distance for people recognition and tracking on surveillance footage in real time [33]. It is important to note that when the distance function is used to represent a static environment map, its calculation is a one-off process and therefore does not impact the computational cost of the localisation process described in this paper.

Using Equation (8), the Chamfer distance for a laser scan obtained from a robot operating in an environment that is represented with the unsigned distance function DF can be written as shown in Equation (9):

d_{CD} = \frac{1}{n} \sum_{i=1}^{n} DF(\mathbf{x}_{o_i})   (9)

Fig. 3a and b present the variation of the Chamfer distance relative to a hypothesised robot location (x, y) that varies in the vicinity of the true pose, with approximate true pose coordinates of (1.1, 1.1) m and (0.45, 0.45) m respectively. If there is no measurement noise, the minimum Chamfer distance, which will be equal to zero, is obtained when the robot is placed at its true pose on the map and the laser scan is perfectly aligned. Fig. 4 shows the variation of the Chamfer distance when x and y are kept at their true values and the orientation φ is varied between ±0.4 radians for the robot's true location used in Fig. 3b.

Partial derivatives of the Chamfer distance can be deduced with the use of the partial derivatives of DF, as shown in Equation (11).
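The chain rule underlying these derivatives can be sketched as follows (a reconstruction from the surrounding definitions, not the paper's exact notation):

```latex
% Sketch: gradient of the Chamfer cost via the chain rule,
% combining the gradient of DF with the derivative of Equation (2).
\frac{\partial d_{CD}}{\partial \mathbf{x}}
  = \frac{1}{n} \sum_{i=1}^{n}
    \nabla DF(\mathbf{x}_{o_i})^{T} \,
    \frac{\partial \mathbf{x}_{o_i}}{\partial \mathbf{x}},
\qquad
\frac{\partial \mathbf{x}_{o_i}}{\partial \mathbf{x}}
  = \begin{bmatrix}
      1 & 0 & -r_i \sin(\varphi + \theta_i) \\
      0 & 1 & \phantom{-} r_i \cos(\varphi + \theta_i)
    \end{bmatrix}
```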

Fig. 4. Chamfer distance variation in the vicinity of the true robot pose, with x and y at their true values and the orientation φ varied.

Table 1. Sensor properties and noise parameters for Dataset 1.

As with DF, the partial derivatives can also be pre-computed and stored.

We note here that if a distance transform is used to obtain the distance function from an occupancy grid map, a continuous approximation such as a cubic spline is needed to interpolate the distance function values in order to estimate distances and derivatives in continuous space. The derivatives of the distance transform are discontinuous at boundaries between occupied and unoccupied space, as well as at cut loci [39]. Using an appropriate spline approximation, the impact of these discontinuities on gradient based optimisation algorithms can be avoided. Apart from splines, Gaussian processes have also been suggested as smoothing functions for the distance function [25], but these incur a heavy computational cost in the application presented in this paper.
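The spline smoothing step might look as follows; this uses SciPy's `RectBivariateSpline` as a stand-in for whatever spline implementation the authors used, on a toy field with a single obstacle:

```python
import numpy as np
from scipy import ndimage
from scipy.interpolate import RectBivariateSpline

# Discrete distance transform of a toy map with one obstacle at (10, 10).
grid = np.zeros((20, 20), dtype=np.uint8)
grid[10, 10] = 1
df = ndimage.distance_transform_edt(grid == 0)

rows = np.arange(df.shape[0])
cols = np.arange(df.shape[1])
# Cubic spline (kx = ky = 3) gives a smooth, differentiable approximation
# between the grid samples.
spline = RectBivariateSpline(rows, cols, df, kx=3, ky=3)

# Value and a partial derivative at a continuous (off-grid) query point.
val = spline(10.0, 13.5)[0, 0]            # distance, roughly 3.5 here
d_dcol = spline(10.0, 13.5, dy=1)[0, 0]   # ∂DF/∂x (column direction)
```

Away from the obstacle boundary and cut loci the spline tracks the discrete field closely, and its analytic derivatives feed directly into gradient based optimisation.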

3. Localisation algorithm

This section describes a method for localising a robot on a two-dimensional map using information gathered by a laser range finder mounted on the robot. It uses the distance function based representation and the Chamfer distance based sensor model presented in Section 2.2.

Fig. 5. Estimated robot location and the ground-truth for Dataset 1. (a) Optimisation algorithm and (b) AMCL particle filter algorithm with beam range finder model.

Fig. 6. Errors in the location estimate for (a) the proposed C-LOG algorithm and (b) the AMCL particle filter algorithm with beam range finder model.

Table 2. Pose errors for Dataset 1.

The robot localisation problem can be solved by finding the robot pose that minimises a cost function C, defined as a measure of mismatch between a set of sensor readings z and the map m. The sensor model described in Section 2.2 essentially defines such a cost function in the vicinity of the true pose. The robot localisation problem can therefore be expressed as

\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \; d_{CD}\!\left( DF, \mathbf{x}_{o}(\mathbf{x}, \mathbf{z}) \right)   (12)

where DF is the distance function of the occupancy grid map of the environment m and x_o is the template generated using the laser scan z from Equation (2) with the potential robot pose x.

Given that the objective function in Equation (12) is twice differentiable when a cubic spline approximation is used, this unconstrained non-linear optimisation problem can be solved using a variety of gradient based techniques. In the experiments presented in Section 4, the Matlab implementation of the trust-region algorithm was used. The partial derivatives of the objective function with respect to the robot pose x, which are required for solving the optimisation problem described by Equation (12), are given in Equations (10) and (11).
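An end-to-end toy sketch of the optimisation in Equation (12): the paper uses Matlab's trust-region solver, but here Nelder-Mead is substituted purely for illustration, and the L-shaped map, synthetic scan, and perturbation values are all invented:

```python
import numpy as np
from scipy import ndimage, optimize
from scipy.interpolate import RectBivariateSpline

# Toy L-shaped map: one horizontal and one vertical wall.
grid = np.zeros((30, 30), dtype=np.uint8)
grid[10, 5:25] = 1          # horizontal wall: row 10, cols 5..24
grid[10:26, 5] = 1          # vertical wall:   col 5, rows 10..25
df = ndimage.distance_transform_edt(grid == 0)
spline = RectBivariateSpline(np.arange(30), np.arange(30), df, kx=3, ky=3)

def chamfer_cost(pose, r, theta):
    """Mean distance-field value at the projected scan endpoints (Eq. (2))."""
    x, y, phi = pose
    px = x + r * np.cos(phi + theta)
    py = y + r * np.sin(phi + theta)
    return np.mean(spline.ev(py, px))   # field is indexed (row=y, col=x)

# Synthesise a noiseless "scan" of points on both walls from the true pose.
true_pose = np.array([15.0, 18.0, 0.0])
wall = np.array([(c, 10.0) for c in range(8, 21)] +
                [(5.0, r) for r in range(12, 23)])
dx, dy = wall[:, 0] - true_pose[0], wall[:, 1] - true_pose[1]
r = np.hypot(dx, dy)
theta = np.arctan2(dy, dx) - true_pose[2]

# Minimise the Chamfer cost starting from a perturbed initial guess.
guess = true_pose + np.array([0.8, -0.6, 0.06])
res = optimize.minimize(chamfer_cost, guess, args=(r, theta),
                        method="Nelder-Mead")
```

With a noiseless scan the cost is zero at the true pose, so the solver recovers it from a nearby initial guess, mirroring the paper's requirement of an approximate initial location.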

Fig. 7. Trajectory of the robot in the Intel Dataset, using the C-LOG algorithm. (a) GMapping poses and results obtained from the optimisation based localisation strategy. (b) Complete trajectory of the robot. (c) Projection of the laser scan from the estimated pose.

A gate that admits only the values that are smaller than a maximum error, as shown in Equation (13), can be used to eliminate obvious outliers from the laser range finder measurements,

where Δx, Δy, and Δφ are the maximum expected errors in the initial guess. In the experiments, 0.15 m was used for Δx and Δy, while Δφ was set to 0.05 rad. This is the only tuning parameter required for this algorithm, and it is clearly relatively easy to establish.
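One plausible form of such a gate can be sketched as follows; the threshold formula (a translation term plus a lever-arm term from the orientation error) is our assumption for illustration, not necessarily the exact Equation (13):

```python
import numpy as np

def gate_readings(df_values, ranges, dx=0.15, dy=0.15, dphi=0.05):
    """Admit only readings whose distance-function value is below the
    worst-case displacement implied by the initial-guess error bounds.
    (A hypothetical form of the gate; dx, dy, dphi follow the paper.)"""
    d_max = np.hypot(dx, dy) + ranges * dphi  # translation + lever-arm term
    return df_values < d_max

vals = np.array([0.05, 0.9, 0.1])   # distance-field values at endpoints
rng = np.array([2.0, 2.0, 10.0])    # corresponding range readings
mask = gate_readings(vals, rng)
# → [True, False, True]: the 0.9 m mismatch at 2 m range is rejected
```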

4. Experimental results

    We use experiments conducted on three datasets to illustrate the capabilities of the proposed localisation algorithm.

Dataset 1 is based on a simulation conducted in the ROS Stage environment so that ground truth is available for the evaluation. The robot in this simulation is equipped with a Hokuyo laser that provides laser scans. Table 1 presents the sensor properties and the parameters used in the simulation.

Dataset 2 is a publicly available dataset from the Intel research laboratories, Seattle, USA. In this dataset the robot travels three loops in an office building. A map of the environment and the ground truth are not available. Therefore, the laser range scans gathered during the third loop are used with the GMapping [40] algorithm to generate the occupancy grid map for evaluation.

Fig. 8. Trajectory of the robot in Dataset 2. (a) GMapping poses and results obtained from AMCL, using the beam based likelihood model. (b) Complete trajectory of the robot. (c) Projection of the laser scan from the estimated pose.

Fig. 9. A sparse illustration of crowd movement during collection of Dataset 3 over 29.54 min.

Dataset 3 was collected at the Broadway Shopping Centre, Sydney, Australia. The data was collected during normal operating hours of the shopping centre, and the environment was therefore cluttered and crowded. The robot was equipped with a Hokuyo UTM-30LX laser range finder. Odometry was not used.

4.1. Accuracy of robot pose estimates

As the ground-truth is available in Dataset 1, we can quantitatively compare the output of the proposed localisation algorithm against it. As a comparison, we use the particle filter implementation AMCL, available for ROS, with its beam range finder model. Fig. 5 illustrates location estimates obtained with the proposed optimisation algorithm and AMCL, while Fig. 6 presents the errors of the estimates along the entire robot trajectory against the ground-truth. The average pose errors for the proposed algorithm and AMCL are shown in Table 2.

Figs. 7 and 8 qualitatively compare the results from the proposed algorithm and AMCL using Dataset 2. As can be seen in Fig. 7c, the map recovered by projecting laser scans from the poses estimated by the proposed algorithm has well defined walls that align with the original map, as opposed to the map recovered from AMCL illustrated in Fig. 8c. This indicates that the poses produced by the proposed algorithm are more accurate.

4.2. Performance in dynamic environments

As previously mentioned, Dataset 3 was collected under natural conditions in a crowded environment. Therefore, in this dataset, the laser observations are mostly corrupted by people. Fig. 9 is a sparse illustration of the crowd movement during data collection, obtained by projecting all the readings from the laser range finder from the estimated robot poses. Fig. 10 shows the estimated robot poses obtained from the proposed C-LOG algorithm. As ground truth is not available, the poses obtained using the SLAM algorithm that was used to construct the map of the environment are shown for comparison. It is clear that C-LOG performs well even in the presence of significant people movement.

Fig. 10. Trajectory of the robot using C-LOG on Dataset 3.

To further evaluate the performance of the proposed algorithm under dynamic scenarios, we performed a simulation experiment using Dataset 1. In this experiment, we artificially corrupted a percentage of the input laser scans with a uniform random distribution U(0, r_i). Fig. 11 shows the root-mean-square (RMS) error at different degrees of corruption. The C-LOG algorithm continues to localise without losing track even with up to 60% of the input sensor measurements corrupted by dynamic objects, while AMCL fails at 40% corruption.
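The corruption step of this experiment can be reproduced in outline as follows; the scan length matches the 1081-beam laser of Dataset 1, while the seed, helper name, and nominal 5 m range are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)

def corrupt_scan(ranges, fraction):
    """Replace a given fraction of range readings with draws from U(0, r_i),
    simulating dynamic objects occluding the beams."""
    ranges = ranges.copy()
    idx = rng.choice(len(ranges), size=int(fraction * len(ranges)),
                     replace=False)
    ranges[idx] = rng.uniform(0.0, ranges[idx])  # uniform on [0, r_i)
    return ranges

scan = np.full(1081, 5.0)          # idealised scan: every beam reads 5 m
noisy = corrupt_scan(scan, 0.6)    # 60% of beams corrupted
```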

Situations where the optimisation algorithm fails to converge were dealt with by processing the next laser scan with the current best estimate of the robot pose.

4.3. Computational cost

Fig. 12 compares the computational cost of the proposed algorithm with that of AMCL, as available in ROS. Dataset 1 was used for obtaining the average time to process one laser scan. The laser range finder in Dataset 1 produces 1081 laser readings per scan. For AMCL, the number of particles was fixed at 5000. Execution times for both the standard beam-based and likelihood-field sensor models of AMCL are shown. It can be seen that AMCL with the beam based likelihood function takes the longest time, due to the complexity of the ray-casting process. It should be noted that the code distributed with ROS may not be the most efficient implementation of AMCL.

5. Discussion and conclusion

In this paper, we presented a novel localisation algorithm for robots equipped with a laser range finder operating in a two-dimensional environment. The proposed algorithm uses a distance function based method for representing the environment and a sensor model that uses the Chamfer distance to relate range measurements from the sensor to the map of the environment. The sensor model does not require explicit data association or extraction of features from the sensor readings. An optimisation algorithm that minimises the Chamfer distance with respect to the robot pose is shown to be effective in obtaining an estimate for the robot location on the map, provided an approximate initial location of the robot is known. The proposed algorithm does not require odometry measurements.

Fig. 11. The RMS errors of the proposed algorithm and AMCL (beam model) when input sensor measurements are artificially corrupted to simulate dynamic objects in the environment.

Fig. 12. Per-scan execution time for localisation algorithms.

We used multiple experiments based on a simulation, a public domain dataset, and data collected in a crowded environment to demonstrate the effectiveness of the algorithm. This algorithm was shown to be more accurate and computationally efficient than the widely used particle filter based algorithm AMCL. Experimental results demonstrate that the optimisation based technique proposed in this paper provides a competitive solution to the problem of robot localisation within an occupancy grid. One of the main advantages observed is that the algorithm does not require tuning parameters, except for a relatively large gate for filtering outliers from the laser range data. This is due to the fact that models of process and observation uncertainty are not used within the optimisation algorithm. Future work includes using these models to compute the robot pose uncertainty and to fuse odometry observations, if available. The impact of the map resolution on the localisation accuracy, as well as possibilities for using a continuous representation of the distance function, as opposed to a cubic spline approximation of the discrete distance transform, are under investigation.


    Lakshitha Dantanarayana is a Ph.D. student at the Centre for Autonomous Systems, University of Technology, Sydney (UTS), Australia. He obtained his B.Sc. Eng. (Hons.) degree in Electronics and Telecommunications Engineering from the University of Moratuwa, Sri Lanka, in 2010. His research focuses on localisation, mapping, SLAM, and navigation for assistive robotics.

    Gamini Dissanayake is the James N. Kirby Professor of Mechanical and Mechatronic Engineering at the University of Technology, Sydney (UTS). He graduated in Mechanical and Production Engineering from the University of Peradeniya. He taught at the University of Peradeniya, the National University of Singapore, and the University of Sydney before joining UTS in 2002. He founded the UTS Centre for Autonomous Systems, a team of fifty staff and students working in robotics. His work on robot navigation has resulted in one of the most cited journal publications in robotics. He has been involved in developing robots for a range of industry applications, including cargo handling, mining, infrastructure maintenance, and aged care.

    Ravindra Ranasinghe received his B.Sc. Eng. (Hons.) degree in 1995, specialising in Computer Science and Engineering, from the University of Moratuwa, Sri Lanka. He received his Ph.D. in wireless communication protocols from the University of Melbourne, Australia. Before joining the Centre for Autonomous Systems at the University of Technology, Sydney (UTS), he worked in several technology start-up companies in the USA, Australia, and Sri Lanka. He is currently a Senior Research Fellow at the Centre for Autonomous Systems, University of Technology, Sydney. He has been a member of the IEEE since 1997. His research interests include mobile robotics, wireless sensor networks, assistive robotics, active object recognition, and machine learning techniques for condition assessment.

    *Corresponding author.

    E-mail address: lakshitha.dantanarayana@uts.edu.au (L. Dantanarayana). Peer review under responsibility of Chongqing University of Technology.

    http://dx.doi.org/10.1016/j.trit.2016.10.003

    2468-2322/Copyright © 2016, Chongqing University of Technology. Production and hosting by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

    MSC: 00-01; 49-01