
    Feature-based RGB-D camera pose optimization for real-time 3D reconstruction

    Chao Wang1, Xiaohu Guo1

    Computational Visual Media, 2017, Issue 2

    In this paper we present a novel feature-based RGB-D camera pose optimization algorithm for real-time 3D reconstruction systems. During camera pose estimation, current online methods either struggle with fast-scanned RGB-D data or generate inaccurate relative transformations between consecutive frames. Our approach improves on current methods by utilizing matched features across all frames, and is robust to RGB-D data with large shifts between consecutive frames. We directly estimate the camera pose of each frame by efficiently solving a quadratic minimization problem that maximizes the consistency, across frames, of the global 3D positions corresponding to matched feature points. We have implemented our method within two state-of-the-art online 3D reconstruction platforms. Experimental results confirm that our method is efficient and reliable in estimating camera poses for RGB-D data with large shifts.

    camera pose optimization; feature matching; real-time 3D reconstruction; feature correspondence

    1 Introduction

    Real-time 3D scanning and reconstruction techniques have been applied to many areas in recent years with the prevalence of inexpensive consumer depth cameras. The sale of millions of such devices makes it desirable for users to scan and reconstruct dense models of their surrounding environments by themselves. Online reconstruction techniques have various popular applications, e.g., in augmented reality (AR) to fuse supplemented elements with the real-world environment, in virtual reality (VR) to provide users with reliable environment perception and feedback, and in simultaneous localization and mapping (SLAM) for robots to automatically navigate in complex environments [1–3].

    One of the earliest and most notable methods among RGB-D based online 3D reconstruction techniques is KinectFusion [4], which enables a user holding and moving a standard depth camera such as a Microsoft Kinect to rapidly create detailed 3D reconstructions of a static scene. However, a major limitation of KinectFusion is that camera pose estimation is performed by frame-to-model registration using an iterative closest point (ICP) algorithm based on geometric data, which is only reliable for RGB-D data with small shifts between consecutive frames acquired by high-frame-rate depth cameras [4, 5].

    To address this limitation, a common strategy adopted by most subsequent online reconstruction methods is to introduce photometric data into the ICP-based framework and estimate camera poses by maximizing the consistency of geometric as well as color information between two adjacent frames [2, 5–11]. However, even though an ICP-based framework can effectively deal with RGB-D data with small shifts, it solves a non-linear minimization problem and always converges to a local minimum near the initial input because of the small-angle assumption [4]. This means that pose estimation accuracy relies strongly on a good initial guess, which is unlikely to be available if the camera moves rapidly or is shifted suddenly by the user. For the same reason, ICP-based online reconstruction methods tend to produce drift and distortion for scenes with large planar regions such as walls, ceilings, and floors, even if consecutive frames only contain small shifts. Figure 1 illustrates this shortcoming for several current online methods using an ICP-based framework, and also shows the advantage of our method on RGB-D data with large shifts on a planar region.

    Another strategy to improve the robustness of camera tracking is to introduce RGB features into camera pose estimation by maximizing the 3D position consistency of corresponding feature points between frames [12–14]. These feature-based methods handle RGB-D data with large shifts better than ICP-based ones, since they simply solve a quadratic minimization problem to directly compute the relative transformation between two consecutive frames [13, 14]. However, unlike ICP-based methods using frame-to-model registration, current feature-based methods estimate camera pose based only on pairs of consecutive frames, which usually introduces errors and accumulates drift in reconstructions of RGB-D data with sudden changes. Moreover, current feature-based methods often estimate camera pose inaccurately because of unreliable feature extraction and matching. In practice, the inaccurate camera poses are not used directly in reconstruction, but are passed to an offline backend post-process to improve their reliability, such as global pose graph optimization [12, 15] or bundle adjustment [13, 14]. For this reason, most current feature-based reconstruction methods are strictly offline.

    In this paper, we combine the advantages of the two above strategies and propose a novel feature-based camera pose optimization algorithm for online 3D reconstruction systems. To overcome the limitation that the ICP-based framework always converges to a local minimum near the initial input, our approach estimates the global camera poses directly by efficiently solving a quadratic minimization problem that maximizes the consistency of matched feature points across frames, without any initial guess. This makes our method robust when dealing with RGB-D data with large shifts. Meanwhile, unlike current feature-based methods which only consider pairs of consecutive frames, our method utilizes matched features from all previous frames to reduce the impact of bad features and accumulated camera pose error during scanning. This is achieved by keeping track of the RGB features' 3D point information from all frames in a structure called the feature correspondence list.

    Our algorithm can be directly integrated into current online reconstruction pipelines. We have implemented our method within two state-of-the-art online 3D reconstruction platforms. Experimental results confirm that our approach is efficient and improves upon current methods in estimating camera poses on RGB-D data with large shifts.

    2 Related work

    Following KinectFusion, many variants and other brand-new methods have been proposed to overcome its limitations and achieve more accurate reconstruction results. Here we mainly consider camera pose estimation methods in online and offline reconstruction techniques, and briefly introduce camera pose optimization in some other relevant areas.

    Fig. 1 Camera pose estimation comparison between methods. Top: four real input point clouds scanned from different views of a white wall with a painting. Bottom: results of stitching using camera poses provided by the Lucas–Kanade method [6], voxel-hashing [2], ElasticFusion [9], and our method.

    2.1 Online RGB-D reconstruction

    A typical online 3D reconstruction process takes RGB-D data as input and fuses the dense overlapping depth frames into one reconstructed model using some specific representation, of which the two most important categories are volume-based fusion [2, 4, 5, 10, 16, 17] and point/surfel-based fusion [1, 9]. Volume-based methods are very common since they can directly generate models with connected surfaces, and are also efficient in data retrieval and use of the GPU. While KinectFusion is limited to a small fixed-size scene, several subsequent methods introduce different data processing techniques to extend the original volume structure, such as moving volume [16, 18], octree-based volume [17], patch volume [19], or hierarchical volume [20]. However, these online methods simply inherit the same ICP framework from KinectFusion to estimate camera pose.

    In order to handle dense depth data and stitch frames in real time, most online reconstruction methods prefer an ICP-based framework, which is efficient and reliable if the depth data has small shifts. While KinectFusion runs a frame-to-model ICP process with vertex correspondences obtained by projective data association, Peasley and Birchfield [6] improved it by providing ICP with a better initial guess and correspondences based on a warp transformation between consecutive RGB images. However, this warp transformation is only reliable for images with very small shifts, just like the ICP-based framework. Nießner et al. [2] introduced a voxel-hashing technique into volumetric fusion to reconstruct scenes at large scale efficiently, and used color-ICP to maintain geometric as well as color consistency of all corresponding vertices. Steinbrücker et al. [21] proposed an octree-based multi-resolution online reconstruction system which estimates relative camera poses between frames by stitching their photometric and geometric data together as closely as possible. Whelan et al.'s method [10] and a variant [5] both utilize a volume-shifting fusion technique to handle large-scale RGB-D data, while Whelan et al.'s ElasticFusion approach [9] extends it to a surfel-based fusion framework. They introduce local loop closure detection to adjust camera poses at any time during reconstruction. Nonetheless, these methods still rely on an ICP-based framework to determine a single joint pose constraint and therefore remain reliable only on RGB-D data with small shifts. Figure 1 gives a comparison between our method and these current methods on a rapidly scanned wall. In Section 4 we compare voxel-hashing [2], ElasticFusion [9], and our method on an RGB-D benchmark [22] and a real scene.

    Feature-based online reconstruction methods are much rarer than ICP-based ones, since camera poses estimated only from features are usually unreliable due to noisy RGB-D data and must be subsequently post-processed. Huang et al. [13] proposed one of the earliest SLAM systems, which estimates an initial camera pose in real time for each frame by utilizing FAST feature correspondences between consecutive frames, and sends all poses to a post-process for global bundle adjustment before reconstruction, which makes the method less efficient and not strictly an online reconstruction technique. Endres et al. [12] considered different feature extractors and estimated camera pose by simply computing the transformation between consecutive frames using a RANSAC algorithm based on feature correspondences. Xiao et al. [14] provided an RGB-D database with full 3D space views and used SIFT features to construct the transformation between consecutive frames, followed by bundle adjustment to globally improve pose estimates. In summary, current feature-based methods utilize feature correspondences only between pairs of consecutive frames to estimate the relative transformation between them. Unlike such methods, our method utilizes the feature-matching information from all previous frames by keeping track of it in a feature correspondence list. Section 4.4 compares our method and current feature-based frameworks utilizing only pairs of consecutive frames.

    2.2 Offline RGB-D reconstruction

    The typical and most common scheme for offline reconstruction methods is to take advantage of some global optimization technique to determine consistent camera poses for all frames, such as bundle adjustment [13, 14], pose graph optimization [5, 14, 23], and deformation graph optimization with loop closure detection [9]. Some offline works utilize similar strategies to online methods [2, 5, 9] by introducing feature correspondences into an ICP-based framework. They maximize the consistency of both dense geometric data and sparse image features, such as one of the first reconstruction systems, proposed by Henry et al. [7], which uses SIFT features.

    Other work introduces various special points of interest into camera pose estimation and RGB-D reconstruction. Zhou and Koltun [24] proposed an impressive offline 3D reconstruction method which focuses on preserving details of points of interest with high density values across RGB-D frames, and runs pose graph optimization to obtain globally consistent pose estimates for these points. Two other works, by Zhou et al. [25] and Choi et al. [26], both detect smooth fragments as point-of-interest zones and attempt to maximize the consistency of corresponding points in fragments across frames using global optimization.

    2.3 Camera pose optimization in other areas

    Camera pose optimization is also very common in many other areas besides RGB-D reconstruction. Zhou and Koltun [3] presented a color mapping optimization algorithm for 3D reconstruction which optimizes camera poses by maximizing the color agreement of 3D points' 2D projections in all RGB images. Huang et al. [13] proposed an autonomous flight control and navigation method utilizing feature correspondences to estimate relative transformations between consecutive frames in real time. Steinbrücker et al. [27] presented a real-time visual odometry method which estimates camera poses by maximizing photo-consistency between consecutive images.

    3 Camera pose estimation

    Our camera pose optimization method attempts to maximize the consistency of matched features' corresponding 3D points in global space across frames. In this section we start with a brief overview of the algorithmic framework, and then describe the details of each step.

    3.1 Overall scheme

    The pipeline is illustrated in Fig. 2. For each input RGB-D frame, we extract RGB features in the first step (see Section 3.2), and then generate a good feature match with a correspondence-check (see Section 3.3). Next, we maintain and update a data structure called the feature correspondence list to store matched features and corresponding 3D points in the camera's local coordinate space across frames (see Section 3.4). Finally, we estimate the camera pose by minimizing the difference between matched features' 3D positions in global space (see Section 3.5).

    3.2 Feature extraction

    2D feature points can be used to reduce the amount of data needed to evaluate the similarity between two RGB images while preserving the accuracy of the result. In order to estimate camera pose efficiently in real time while guaranteeing reconstruction reliability, we need to select a feature extraction method with a good balance between feature accuracy and speed. We avoid corner-based feature detectors such as BRIEF and FAST, since the depth data from consumer depth cameras always contains much noise around object contours due to the cameras' working principles [28]. Instead, we simply use a SURF detector to extract and describe RGB features, for two main reasons. Firstly, SURF is robust, stable, and scale and rotation invariant [29], which is important for establishing reliable feature correspondences between images. Secondly, existing methods can efficiently compute SURF in parallel on the GPU [30].
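    For illustration only (the paper's implementation computes OpenSURF on the GPU via OpenCL [30]), a minimal CPU-side sketch of this step using OpenCV might look as follows; the opencv-contrib dependency and the Hessian threshold value are our assumptions, not taken from the paper:

        import cv2

        def extract_surf_features(rgb_image):
            # Convert to grayscale and run the SURF detector/descriptor.
            # Requires opencv-contrib-python; hessianThreshold=400 is illustrative.
            gray = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
            surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
            keypoints, descriptors = surf.detectAndCompute(gray, None)
            return keypoints, descriptors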

    3.3 Feature matching

    Fig. 2 Algorithm overview.

    Using the feature descriptors, a feature match can be obtained easily, but it usually contains many mismatched pairs. To remove as many outliers as possible, we run a RANSAC-based correspondence-check based on the 2D homography and relative transformation between pairs of frames.

    For two consecutive frames i−1 and i with RGB images and corresponding 3D points in the camera's local coordinate space, we first obtain an initial feature match between 2D features based on their descriptors. Next, we run a number of iterations, and in each iteration we randomly select 4 feature pairs to estimate the 2D homography H_z using the direct linear transformation algorithm [31] and the 3D relative transformation T_z between the corresponding 3D points. The H_z and T_z with the lowest re-projection errors amongst all feature pairs are selected as the final ones to determine the outliers. After the iterations, feature pairs with a 2D re-projection error larger than a threshold σ_H or a 3D re-projection error larger than a threshold σ_T are treated as outliers and removed from the initial feature match.

    During the correspondence-check, we only select feature pairs with valid depth values. Meanwhile, in order to reduce noise in the depth data, we pre-smooth the depth image with a bilateral filter before computing 3D points from 2D features. After the correspondence-check, if the number of valid matched features is too small, the camera pose estimated from them will be unreliable. Therefore, we abandon all subsequent steps after feature matching and fall back to a traditional ICP-based framework if the number of validly matched features is smaller than a threshold σ_F. In our experiments, we empirically choose σ_H = 3, σ_T = 0.05, and σ_F = 10.
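    A simplified sketch of this correspondence-check (our reading of the procedure, not the authors' code) is given below. It delegates the 4-point homography to OpenCV's findHomography and fits the 3D rigid transformation with a standard SVD-based solver; how the 2D and 3D errors are combined into a single model score is an assumption.

        import numpy as np
        import cv2

        def rigid_fit(P, Q):
            # Least-squares rigid transform (R, t) mapping local points P onto Q (both Nx3).
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            return R, cq - R @ cp

        def correspondence_check(uv0, uv1, P0, P1, iters=200,
                                 sigma_h=3.0, sigma_t=0.05, sigma_f=10):
            # uv0, uv1: Nx2 matched pixels; P0, P1: Nx3 local 3D points (valid depth only).
            best = None
            for _ in range(iters):
                idx = np.random.choice(len(uv0), 4, replace=False)
                H, _ = cv2.findHomography(uv0[idx].astype(np.float64),
                                          uv1[idx].astype(np.float64))
                if H is None:
                    continue
                R, t = rigid_fit(P0[idx], P1[idx])
                proj = cv2.perspectiveTransform(
                    uv0.reshape(-1, 1, 2).astype(np.float64), H).reshape(-1, 2)
                err2d = np.linalg.norm(proj - uv1, axis=1)          # 2D re-projection error
                err3d = np.linalg.norm(P0 @ R.T + t - P1, axis=1)   # 3D re-projection error
                score = err2d.mean() + err3d.mean()                 # combined score (assumption)
                if best is None or score < best[0]:
                    best = (score, err2d, err3d)
            inliers = (best[1] < sigma_h) & (best[2] < sigma_t)
            return inliers if inliers.sum() >= sigma_f else None    # None: fall back to ICP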

    Figure 3 shows a feature matching comparison before and after the correspondence-check for two consecutive images captured by a fast-moving camera. The blue circles are feature points, while the green circles and lines are matched feature pairs. Note that almost all poorly matched correspondence pairs are removed.

    3.4 Feature correspondence list construction

    Fig. 3 Two original images (top), and feature matching before (middle) and after (bottom) correspondence checking.

    In order to estimate the camera pose by maximizing the consistency of the global positions of matched features in all frames, we establish and update a feature correspondence list (FCL) to keep track of matched features in both the spatial and temporal domains. The FCL is composed of 3D point sets, each of which denotes a series of 3D points in the camera's local coordinate space whose corresponding 2D pixels are matched features across frames. Thus, the FCL in frame i is denoted by L = {S_j | j = 0, ..., m_i − 1}, where each S_j contains 3D points whose corresponding 2D points are matched features, j is the point set index, and m_i is the number of point sets in the FCL in frame i. The FCL can be constructed simply: Fig. 4 illustrates the process used to construct the FCL for two consecutive frames.

    By keeping track of all RGB features' 3D positions in each camera's local space, we can estimate camera poses by maximizing the consistency of all these 3D points' global positions. By utilizing feature information from all frames instead of just two consecutive frames, we aim to reduce the impact of possible bad features, such as incorrectly matched features or features from ill-scanned RGB-D frames. Moreover, this also avoids the accumulation of camera pose error from previous frames.
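    A minimal sketch of how such a list could be maintained (our interpretation of Section 3.4 and Fig. 4, not the authors' code) is shown below; each point set stores, per frame, the 3D point in that frame's local camera coordinate space.

        from dataclasses import dataclass, field

        @dataclass
        class PointSet:
            frames: list = field(default_factory=list)   # frame indices n_j, ..., i
            points: list = field(default_factory=list)   # local-space 3D point per frame

        def update_fcl(fcl, matches, points_prev, points_cur, frame_idx):
            # fcl maps the previous frame's feature id -> PointSet.
            # matches: (prev_feature_id, cur_feature_id) pairs surviving the check.
            new_fcl = {}
            for f_prev, f_cur in matches:
                s = fcl.get(f_prev)
                if s is None:                             # untracked feature: start a new set
                    s = PointSet()
                    s.frames.append(frame_idx - 1)
                    s.points.append(points_prev[f_prev])
                s.frames.append(frame_idx)                # extend with the current observation
                s.points.append(points_cur[f_cur])
                new_fcl[f_cur] = s                        # re-index by current feature ids
            return new_fcl                                # unmatched point sets are dropped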

    3.5 Camera pose optimization

    Fig. 4 Feature correspondence lists for frame 1 (left) and frame 2 (right). To construct the FCL for frame 2, we remove point sets with unmatched features (green), add matched points whose corresponding features are in the previous frame's FCL into the corresponding point sets (red), and add new point sets for matched features whose corresponding features are not in the previous frame's FCL (blue). Finally we re-index all points in the FCL. The number of point sets in the two FCLs is the same; here m_1 = m_2 = 2.

    For the 3D points in each point set in the FCL, their corresponding RGB features can be regarded as 2D projections of a single 3D point in the real world onto the RGB images of a continuous series of frames. For these 3D points in camera coordinate space, we aim to ensure that their corresponding 3D points in world space are as close as possible.

    Given the FCL L = {S_j | j = 0, ..., m_i − 1} in frame i, for each 3D point p_ij ∈ S_j our objective is to maximize the agreement between p_ij and its target position in world space with respect to a rigid transformation. Specifically, we seek a rotation R_i and translation vector t_i that minimize the following energy function:
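    (The displayed equation has not survived in this version; based on the surrounding description, with rotation R_i, translation t_i, weights w_j, and target positions q_j, it presumably reads:)

        E_i(R_i, t_i) = \sum_{j=0}^{m_i - 1} w_j \| R_i p_{ij} + t_i - q_j \|^2        (1)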

    where w_j is a weight to distinguish the importance of points, and q_j is the target position in world space of p_ij after transformation. In our method we initially set:
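    (The original formula is missing here; given the sentence that follows, the initial target is presumably the average world-space position of all points in S_j other than p_ij:)

        q_j = \frac{1}{|S_j| - 1} \sum_{k=n_j}^{i-1} ( R_k p_{kj} + t_k )        (2)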

    which is the average position in the world frame of the 3D points obtained from all points in S_j except for p_ij itself, where n_j is the frame index of S_j's first point. Intuitively, the more frequently a 3D global point appears in frames, the more reliable this point's measured data will be for the estimation of camera pose. Therefore, we use w_j = |S_j| to balance the importance of points. q_j in Eq. (2) can easily be computed from the information stored in frame i's FCL.

    The energy function E_i(R_i, t_i) in Eq. (1) is a quadratic least-squares objective and can be minimized by Arun et al.'s method [32]:
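    (The closed-form solution is not reproduced in this version; Arun et al.'s weighted least-squares solution presumably gives:)

        R_i = V D U^T        (3)
        t_i = q̄ - R_i p̄        (4)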

    Here D = diag(1, 1, det(VU^T)) ensures that R_i is a rotation matrix without reflection. U and V are the 3×3 matrices from the singular value decomposition (SVD) S = UΣV^T of the matrix S, which is constructed as S = XWY^T, where:
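    (The definitions themselves are missing; based on the following sentence, Eqs. (5)–(8) presumably take the form:)

        X = [ p_{i0} - p̄, ..., p_{i(m_i - 1)} - p̄ ]        (5)
        Y = [ q_0 - q̄, ..., q_{m_i - 1} - q̄ ]        (6)
        p̄ = \sum_j w_j p_{ij} / \sum_j w_j        (7)
        q̄ = \sum_j w_j q_j / \sum_j w_j        (8)

    with W = diag(w_0, ..., w_{m_i - 1}).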

    Here X and Y are both 3 × m_i matrices, W is a diagonal matrix of weight values, and p̄ and q̄ are the mass centers of all p_ij and q_j in frame i, respectively. In general, by minimizing the energy function in Eq. (1), we seek a rigid transformation which makes each 3D point's global position in world space as close as possible to the average position of all its corresponding 3D points from all previous frames.

    After solving Eq. (1) for the current frame i, each p_ij's target position q_j in Eq. (2) can be updated by
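    (The update formula is missing; following the explanation below, it is presumably the mean over all points now in S_j, including p_ij under the new transformation:)

        q_j ← \frac{1}{|S_j|} \sum_{k=n_j}^{i} ( R_k p_{kj} + t_k )        (9)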

    This is simply done by putting p_ij and the newly obtained transformation R_i and t_i into Eq. (2), and estimating q_j as the average center of all points in S_j. Note that we can utilize the new q_j in Eq. (9) to further decrease the energy in Eq. (1) and obtain another new transformation, which can be utilized again to update q_j in turn. Therefore, an iterative optimization process updating q_j and minimizing the energy E_i can be repeated to optimize the transformation until the energy converges.

    Furthermore, the aforementioned iterative process can also be run on previous frames to further maximize the consistency of matched 3D points' global positions between frames. If an online reconstruction system contains techniques to update the previously reconstructed data, then the further optimized poses in previous frames can be used to improve the reconstruction quality further. In fact, we only need to optimize poses between frames r and i, where r is the earliest frame index among all points in frame i's FCL. A common case during online scanning and reconstruction is that the camera stays steady on the same scene for a long time. Then the correspondence list will keep many old, redundant matched features from very early frames, which greatly increases the computation cost of optimization. To avoid this, we check the gap between r and i for every frame i. If i − r is larger than a threshold δ, we only run the optimization between frames i − δ and i. In the experiments, we use δ = 50.

    In particular, minimizing each energy E_k (r ≤ k ≤ i) is equivalent to minimizing the sum of the energies over these frames:
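    (The displayed sum is missing; it is presumably:)

        E = \sum_{k=r}^{i} E_k(R_k, t_k)        (10)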

    According to the solutions in Eqs. (5)–(8), the computation of each transformation R_k and t_k in Eq. (10) is independent of that in other frames. The total energy E is evaluated at each iteration of the optimization process to determine whether the convergence condition is satisfied.

    Algorithm 1 describes the entire iterative camera pose optimization process in our method. In the experiments we set the energy threshold ε = 0.01. Our optimization method is very efficient in that it only takes O(m_i) multiplications and additions, as well as a few SVD processes on 3×3 matrices.
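    The listing of Algorithm 1 is not reproduced in this version of the text; the sketch below follows the description above and is our reconstruction, not the authors' code. solve_pose() is the weighted SVD solver of Eqs. (3)–(8), and each point set is given as a list of (frame index, local 3D point) pairs.

        import numpy as np

        def solve_pose(P, Q, w):
            # Weighted least-squares rigid transform taking local points P (Nx3) to targets Q (Nx3).
            w = w / w.sum()
            cp, cq = w @ P, w @ Q                                   # weighted centroids
            S = (P - cp).T @ (np.diag(w) @ (Q - cq))
            U, _, Vt = np.linalg.svd(S)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            return R, cq - R @ cp

        def optimize_poses(fcl, poses, r, i, eps=0.01, max_iters=20):
            # fcl[j]: list of (frame k, local point p_kj); poses[k]: current estimate (R_k, t_k).
            prev_energy = np.inf
            for _ in range(max_iters):
                # Step 1: target q_j = average world-space position of all points in S_j.
                targets = {j: np.mean([poses[k][0] @ p + poses[k][1] for k, p in s], axis=0)
                           for j, s in fcl.items()}
                energy = 0.0
                # Step 2: re-solve every pose in the window independently.
                for k in range(r, i + 1):
                    js = [j for j, s in fcl.items() if any(f == k for f, _ in s)]
                    if not js:
                        continue
                    P = np.array([dict(fcl[j])[k] for j in js])
                    Q = np.array([targets[j] for j in js])
                    w = np.array([float(len(fcl[j])) for j in js])  # w_j = |S_j|
                    poses[k] = solve_pose(P, Q, w)
                    R, t = poses[k]
                    energy += float(np.sum(w * np.linalg.norm(P @ R.T + t - Q, axis=1) ** 2))
                # Step 3: stop once the change in total energy falls below the threshold.
                if abs(prev_energy - energy) < eps:
                    break
                prev_energy = energy
            return poses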

    4 Experimental results

    To assess the capabilities of our camera pose estimation method, we embedded it within two state-of-the-art platforms: a volume-based method based on voxel-hashing [2] and a surfel-based method, ElasticFusion [9]. In the implementation, we first estimate camera poses using our method, and then regard them as good initial guesses for the original ICP-based framework in each platform. The reason is that the reconstruction quality may be low if the online system does not run a frame-to-model framework to stitch dense data from the current frame with the previous model during reconstruction [5]. Note that for each frame, even though our method optimizes camera poses over all relevant frames, we only use the optimized pose of the current frame in the frame-to-model framework to update the reconstruction; the optimized poses of previous frames are only utilized to estimate the camera poses of future frames.

    Algorithm 1 Camera pose optimization

    4.1 Trajectory estimation

    We first compare our method with both voxel-hashing [2] and ElasticFusion [9], evaluating trajectory estimation performance using several datasets from the RGB-D benchmark [22]. In order to compare with ElasticFusion [9], we use the same error metric as in their work, the absolute trajectory root-mean-square error (ATE), which measures the root-mean-square of the Euclidean distances between estimated camera poses and ground-truth poses associated by timestamp [9, 22].
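    As a small illustration of the metric (not the benchmark's own evaluation script), ATE-RMSE over timestamp-associated, pre-aligned trajectories can be computed as:

        import numpy as np

        def ate_rmse(est_positions, gt_positions):
            # Both arrays are Nx3 camera positions, already associated by timestamp
            # and rigidly aligned; returns the root-mean-square Euclidean distance.
            d = np.linalg.norm(est_positions - gt_positions, axis=1)
            return float(np.sqrt(np.mean(d ** 2)))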

    Table 1 shows the results from each method with and without our improvement. The smallest error for each dataset is shown in bold. Here "dif1" and "dif5" denote the frame difference used for each dataset during reconstruction. In other words, for "dif5" we only use the first frame of every 5 consecutive frames in each original dataset and omit the other 4 intermediate frames, in order to evaluate trajectories on RGB-D data with large shifts, while for "dif1" we just use the original dataset. Note that our results differ between the two platforms even for the same dataset. This is because, firstly, the two online platforms utilize different data processing and representation techniques, and different frame-to-model frameworks during reconstruction. Secondly, the voxel-hashing platform does not contain any optimization technique to modify previously constructed models and camera poses, while ElasticFusion utilizes both local and global loop closure detection in conjunction with global optimization techniques to optimize previous data and generate a globally consistent reconstruction [9]. The results in Table 1 show that our method improves upon the other two methods in trajectory estimation, especially on large planar regions such as fr1/floor and fr3/ntf which both contain textured floors. Furthermore, our method also estimates trajectories better than the other methods when the shifts between RGB-D frames are large.

    Table 1 Trajectory estimation comparison of methods using ATE metric

    4.2 Pose estimation

    To assess pose estimation performance, we compared our method with the same two methods on the same benchmark using the relative pose error (RPE) [22], which measures the relative pose difference between each estimated camera pose and the corresponding ground truth. Table 2 gives the results, which show that our method improves camera pose estimation on datasets with large shifts, even though our results are only on a par with the others on the original datasets with small shifts between consecutive frames.

    4.3 Surface reconstruction

    In order to compare the influence of the computed camera poses on the final reconstructed models for our method and the others, we first compute camera poses with each method on its corresponding platform, and then use all the poses on the same voxel-hashing platform to generate reconstructed models. Here our method runs on the voxel-hashing platform. Figure 5 gives the reconstruction results of the different methods on the fr1/floor dataset from the same benchmark, with frame difference 5. The figure shows that our method improves the reconstructed surface by producing good camera poses for RGB-D data with large shifts.

    To test our method with a fast-moving camera on a real scene, we fixed an Asus XTion depth camera on a tripod with a motor to rotate the camera at a controlled speed. With this device, we first scanned a room by rotating the camera only around its axis (the y-axis in the camera's local coordinate frame) for several rotations at a fixed speed, and selected the RGB-D data for exactly one rotation for the test. This dataset contains 235 RGB-D frames; most of the RGB images are blurred, since it took the camera only about 5 seconds to finish the rotation. Figure 6 gives an example showing two blurred images from this RGB-D dataset. Note that our feature matching method can still match features very well.

    Table 2 Pose estimation comparison of methods using RPE metric

    Fig. 5 Reconstruction results of different methods on fr1/floor from the RGB-D benchmark [22] with frame difference 5.

    Fig. 6 Two blurred images (top) and feature matching result (bottom) from our scanned RGB-D data of a real scene using a fast-moving camera.

    Figure 7 gives the reconstruction results produced by the different methods on this dataset. As in Fig. 5, all reconstruction results here are obtained using the voxel-hashing platform with camera poses pre-computed by the different methods on their corresponding platforms; again our method ran on the voxel-hashing platform. For the ground-truth camera poses, since we scan the scene at a fixed rotation speed, we simply compute the ground-truth camera pose for each frame i (0 ≤ i < 235) as R_i = R_y(θ_i) with θ_i = (360(i − 1)/235)° and t_i = 0, where R_y(θ_i) is a rotation around the y-axis by an angle θ_i. Moreover, note that ElasticFusion [9] utilizes loop closure detection and deformation graph optimization to globally optimize camera poses and global point positions in the final model. To make the comparison fairer, we introduce the same loop closure detection as in ElasticFusion [9] into our method, and use a pose graph optimization tool [15] to globally optimize camera poses for all frames efficiently. Figure 7 shows that our optimized camera poses capture the structure of the reconstructed model very well for real-scene data captured by a fast-moving camera.
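    For reference, a short sketch of the ground-truth poses assumed above (pure rotation about the camera's local y-axis at constant speed) is:

        import numpy as np

        def ground_truth_pose(i, n_frames=235):
            # R_i = R_y(theta_i), t_i = 0, with theta_i = (360(i - 1)/n_frames) degrees as in the text.
            theta = np.deg2rad(360.0 * (i - 1) / n_frames)
            c, s = np.cos(theta), np.sin(theta)
            R = np.array([[c, 0.0, s],
                          [0.0, 1.0, 0.0],
                          [-s, 0.0, c]])
            t = np.zeros(3)
            return R, t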

    4.4 Justification of feature correspondence list

    Fig. 7 Reconstruction results of different methods on room data captured by a speed-controlled fast-moving camera.

    In our method we utilize the FCL in order to reduce the impact of bad features on camera pose estimation, and also to avoid accumulating camera pose error during scanning. Current feature-based methods always estimate the relative transformation between the current frame and the previous one using only the matched features in these two consecutive frames [12–14]; here we call this strategy consecutive-feature estimation.

    In our framework, consecutive-feature estimation can easily be implemented by only using steps (1) and (2) (lines 1 and 2) in Algorithm 1, with each q_j = p_(i−1)j, which is p_ij's matched 3D point in the previous frame. Figure 9 gives the ATE and RPE errors for our method utilizing FCLs and for the consecutive-feature method on fr1/floor, for increasing frame differences. Clearly our method with FCLs outperforms the consecutive-feature method in determining camera poses for RGB-D data with large shifts.

    4.5 Performance

    Fig. 9 Comparison between our method and the consecutive-feature method on fr1/floor for varying frame difference.

    We have tested our method on the voxel-hashing platform on a laptop running Microsoft Windows 8.1 with an Intel Core i7-4710HQ CPU at 2.5 GHz, 12 GB RAM, and an NVIDIA GeForce GTX 860M GPU with 4 GB memory. We used the OpenSURF library with OpenCL [30] to extract SURF features on each down-sampled 320×240 RGB image. For each frame, our camera pose optimization pipeline takes about 10 ms to extract features and finish feature matching, 1–2 ms for FCL construction, and only 5–8 ms for the camera pose optimization step, including the iterative optimization of camera poses over all relevant frames. Therefore, our method is efficient enough to run in real time. We also note that the offline pose graph optimization tool [15] used for the RGB-D data described in Section 4.3 takes only 10 ms for global pose optimization of all frames.

    5 Conclusions and future work

    This paper has proposed a novel feature-based camera pose optimization algorithm which efficiently and robustly estimates camera poses in online RGB-D reconstruction systems. Our approach utilizes the feature correspondences from all previous frames and optimizes camera poses across frames. We have implemented our method within two state-of-the-art online RGB-D reconstruction platforms. Experimental results verify that our method improves upon current online systems, estimating more accurate camera poses and generating more reliable reconstructions for RGB-D data with large shifts between consecutive frames.

    Considering that our camera pose optimization method is only part of the RGB-D reconstruction system pipeline, we aim to develop a new RGB-D reconstruction system built around our camera pose optimization framework. Moreover, we will also explore utilizing our optimized camera poses of previous frames to update the previously reconstructed model in the online system.

    [1] Keller, M.; Lefloch, D.; Lambers, M.; Izadi, S.; Weyrich, T.; Kolb, A. Real-time 3D reconstruction in dynamic scenes using point-based fusion. In: Proceedings of the International Conference on 3D Vision, 1–8, 2013.

    [2] Nießner, M.; Zollhöfer, M.; Izadi, S.; Stamminger, M. Real-time 3D reconstruction at scale using voxel hashing. ACM Transactions on Graphics Vol. 32, No. 6, Article No. 169, 2013.

    [3] Zhou, Q.-Y.; Koltun, V. Color map optimization for 3D reconstruction with consumer depth cameras. ACM Transactions on Graphics Vol. 33, No. 4, Article No. 155, 2014.

    [4] Newcombe, R. A.; Izadi, S.; Hilliges, O.; Molyneaux, D.; Kim, D.; Davison, A. J.; Kohli, P.; Shotton, J.; Hodges, S.; Fitzgibbon, A. KinectFusion: Real-time dense surface mapping and tracking. In: Proceedings of the 10th IEEE International Symposium on Mixed and Augmented Reality, 127–136, 2011.

    [5] Whelan, T.; Kaess, M.; Johannsson, H.; Fallon, M.; Leonard, J.; McDonald, J. Real-time large-scale dense RGB-D SLAM with volumetric fusion. The International Journal of Robotics Research Vol. 34, Nos. 4–5, 598–626, 2015.

    [6] Peasley, B.; Birchfield, S. Replacing projective data association with Lucas–Kanade for KinectFusion. In: Proceedings of the IEEE International Conference on Robotics and Automation, 638–645, 2013.

    [7] Henry, P.; Krainin, M.; Herbst, E.; Ren, X.; Fox, D. RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments. The International Journal of Robotics Research Vol. 31, No. 5, 647–663, 2012.

    [8] Newcombe, R. A.; Lovegrove, S. J.; Davison, A. J. DTAM: Dense tracking and mapping in real-time. In: Proceedings of the IEEE International Conference on Computer Vision, 2320–2327, 2011.

    [9] Whelan, T.; Leutenegger, S.; Salas-Moreno, R.; Glocker, B.; Davison, A. ElasticFusion: Dense SLAM without a pose graph. In: Proceedings of Robotics: Science and Systems, 11, 2015.

    [10] Whelan, T.; Johannsson, H.; Kaess, M.; Leonard, J. J.; McDonald, J. Robust real-time visual odometry for dense RGB-D mapping. In: Proceedings of the IEEE International Conference on Robotics and Automation, 5724–5731, 2013.

    [11] Zhang, K.; Zheng, S.; Yu, W.; Li, X. A depth-incorporated 2D descriptor for robust and efficient 3D environment reconstruction. In: Proceedings of the 10th International Conference on Computer Science & Education, 691–696, 2015.

    [12] Endres, F.; Hess, J.; Engelhard, N.; Sturm, J.; Cremers, D.; Burgard, W. An evaluation of the RGB-D SLAM system. In: Proceedings of the IEEE International Conference on Robotics and Automation, 1691–1696, 2012.

    [13] Huang, A. S.; Bachrach, A.; Henry, P.; Krainin, M.; Maturana, D.; Fox, D.; Roy, N. Visual odometry and mapping for autonomous flight using an RGB-D camera. In: Robotics Research. Christensen, H. I.; Khatib, O.; Eds. Springer International Publishing, 235–252, 2011.

    [14] Xiao, J.; Owens, A.; Torralba, A. SUN3D: A database of big spaces reconstructed using SfM and object labels. In: Proceedings of the IEEE International Conference on Computer Vision, 1625–1632, 2013.

    [15] Kümmerle, R.; Grisetti, G.; Strasdat, H.; Konolige, K.; Burgard, W. G2o: A general framework for graph optimization. In: Proceedings of the IEEE International Conference on Robotics and Automation, 3607–3613, 2011.

    [16] Roth, H.; Vona, M. Moving volume KinectFusion. In: Proceedings of the British Machine Vision Conference, 1–11, 2012.

    [17] Zeng, M.; Zhao, F.; Zheng, J.; Liu, X. Octree-based fusion for realtime 3D reconstruction. Graphical Models Vol. 75, No. 3, 126–136, 2013.

    [18] Whelan, T.; Johannsson, H.; Kaess, M.; Leonard, J. J.; McDonald, J. Robust tracking for real-time dense RGB-D mapping with Kintinuous. Computer Science and Artificial Intelligence Laboratory Technical Report, MIT-CSAIL-TR-2012-031, 2012.

    [19] Henry, P.; Fox, D.; Bhowmik, A.; Mongia, R. Patch volumes: Segmentation-based consistent mapping with RGB-D cameras. In: Proceedings of the International Conference on 3D Vision, 398–405, 2013.

    [20] Chen, J.; Bautembach, D.; Izadi, S. Scalable real-time volumetric surface reconstruction. ACM Transactions on Graphics Vol. 32, No. 4, Article No. 113, 2013.

    [21] Steinbrücker, F.; Kerl, C.; Cremers, D. Large-scale multi-resolution surface reconstruction from RGB-D sequences. In: Proceedings of the IEEE International Conference on Computer Vision, 3264–3271, 2013.

    [22] Sturm, J.; Engelhard, N.; Endres, F.; Burgard, W.; Cremers, D. A benchmark for the evaluation of RGB-D SLAM systems. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 573–580, 2012.

    [23] Stückler, J.; Behnke, S. Multi-resolution surfel maps for efficient dense 3D modeling and tracking. Journal of Visual Communication and Image Representation Vol. 25, No. 1, 137–147, 2014.

    [24] Zhou, Q.-Y.; Koltun, V. Dense scene reconstruction with points of interest. ACM Transactions on Graphics Vol. 32, No. 4, Article No. 112, 2013.

    [25] Zhou, Q.-Y.; Miller, S.; Koltun, V. Elastic fragments for dense scene reconstruction. In: Proceedings of the IEEE International Conference on Computer Vision, 473–480, 2013.

    [26] Choi, S.; Zhou, Q.-Y.; Koltun, V. Robust reconstruction of indoor scenes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5556–5565, 2015.

    [27] Steinbrücker, F.; Sturm, J.; Cremers, D. Real-time visual odometry from dense RGB-D images. In: Proceedings of the IEEE International Conference on Computer Vision Workshops, 719–722, 2011.

    [28] Hänsch, R.; Weber, T.; Hellwich, O. Comparison of 3D interest point detectors and descriptors for point cloud fusion. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences Vol. 2, No. 3, 57, 2014.

    [29] Juan, L.; Gwun, O. A comparison of SIFT, PCA-SIFT and SURF. International Journal of Image Processing Vol. 3, No. 4, 143–152, 2009.

    [30] Yan, W.; Shi, X.; Yan, X.; Wan, L. Computing OpenSURF on OpenCL and general purpose GPU. International Journal of Advanced Robotic Systems Vol. 10, No. 10, 375, 2013.

    [31] Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.

    [32] Arun, K. S.; Huang, T. S.; Blostein, S. D. Least-squares fitting of two 3-D point sets. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. PAMI-9, No. 5, 698–700, 1987.

    Chao Wang is currently a Ph.D. candidate in the Department of Computer Science at the University of Texas at Dallas. Before that, he received his M.S. degree in computer science in 2012, and B.S. degree in automation in 2009, both from Tsinghua University. His research interests include geometric modeling, spectral geometric analysis, and 3D reconstruction of indoor environments.

    Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

    Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.

    Xiaohu Guo received his Ph.D. degree in computer science from Stony Brook University in 2006. He is currently an associate professor of computer science at the University of Texas at Dallas. His research interests include computer graphics and animation, with an emphasis on geometric modeling and processing, mesh generation, centroidal Voronoi tessellation, spectral geometric analysis, deformable models, 3D and 4D medical image analysis, etc. He received a prestigious National Science Foundation CAREER Award in 2012.

    1 University of Texas at Dallas, Richardson, Texas, USA. E-mail: C. Wang, chao.wang3@utdallas.edu; X. Guo, xguo@utdallas.edu.

    Manuscript received: 2016-09-09; accepted: 2016-12-20.
