
    An improved SLAM based on RK-VIF: Vision and inertial information fusion via Runge-Kutta method

Defence Technology, 2023, No. 3

Jia-shan Cui, Fang-rui Zhang, Dong-zhu Feng *, Cong Li, Fei Li, Qi-hen Tian

    a School of Aerospace Science and Technology, Xidian University, Xi'an, 710126, China

b China Academy of Space Technology, Xi'an Branch, Xi'an, 710100, China

c China Academy of Launch Vehicle Technology, Beijing, 100076, China

    d Aerospace Information Research Institute, Chinese Academy of Sciences, Jinan, 250100, China

Keywords: SLAM; Visual-inertial positioning; Sensor fusion; Unmanned system; Runge-Kutta

ABSTRACT Simultaneous Localization and Mapping (SLAM) is the foundation of autonomous navigation for unmanned systems. Existing SLAM solutions are mainly divided into visual SLAM (vSLAM), which uses cameras, and lidar SLAM, which uses lidar. Because pure visual SLAM suffers from shortcomings such as low positioning accuracy, this paper proposes a visual-inertial information fusion SLAM based on Runge-Kutta improved pre-integration. First, the Inertial Measurement Unit (IMU) information between two adjacent keyframes is pre-integrated at the front-end to provide IMU constraints for visual-inertial information fusion. In particular, to improve pre-integration accuracy, the paper uses the Runge-Kutta algorithm instead of the Euler integral to calculate the pre-integration value at the next moment. Then, the IMU pre-integration value is used as the initial value of the system state at the current frame time. We combine the visual reprojection error and the IMU pre-integration error to optimize state variables such as speed and pose, and recover the three-dimensional coordinates of map points. Finally, we set up a sliding window to optimize the map point coordinates and state variables. The experimental part is divided into a dataset experiment and a complex indoor-environment experiment. The results show that, compared with pure visual SLAM and existing visual-inertial fusion SLAM, our method has higher positioning accuracy.

    1. Introduction

Recently, autonomous navigation of unmanned systems has gradually come into use in various fields. SLAM is the foundation and core of autonomous navigation [1]. SLAM refers to an unmanned system building a map of an unknown environment and estimating its own trajectory during movement [2]. Vision sensors rely on the captured image sequence to estimate the pose and create an environmental map. Still, when the movement is too fast or environmental features are missing, their accuracy and robustness decrease sharply, sometimes even leading to positioning failure [3,4].

With the continuous advancement of hardware technology, low-cost IMUs have become ubiquitous [5,6]. The IMU and vision sensors are strongly complementary in performance. Therefore, the Visual-Inertial Navigation System (VINS) has gradually become a hot topic in the current SLAM field [7].

The main goal of VINS is to better couple IMU measurements and image information. According to the coupling scheme, VINS can be divided into loosely coupled and tightly coupled systems. Loose coupling estimates the system's motion separately with the camera and the IMU and then combines the estimation results. In contrast, tight coupling establishes a unified objective function over the camera and IMU state information and then estimates the system's motion.

According to the data processing method, VINS can be divided into filtering-based and optimization-based methods. Filtering-based methods generally adopt the EKF framework, which has two stages: prediction and update. The prediction stage uses IMU information to predict the system pose; in the update stage, the system state is corrected and optimized according to the image information. The optimization-based method considers the data of a past period and iteratively optimizes the system state [8,9]. Because the filtering-based method only considers data at the current moment and only uses image information when updating the state, its accuracy is limited; therefore, current mainstream VINS choose the optimization-based method.

Limited by processor performance in the early days, VINS research was mainly based on filtering. Mourikis et al. proposed a filter-based tight-coupling scheme, MSCKF, which was the first relatively mature scheme in the VINS field [10]. MSCKF uses a feature point to constrain multiple camera motion states at different moments and uses a multi-state constrained extended Kalman filter as the back-end. At the same time, it adopts a sliding window strategy to ensure the real-time performance of the system. However, since MSCKF uses only a monocular camera, the accuracy of its visual-inertial odometry (VIO) is insufficient compared with a stereo camera. The SLAM problem for autonomous underwater vehicles (AUVs) was addressed in Ref. [11]. Aiming at the nonlinear models and non-Gaussian noise in AUV motion, the authors propose an improved variance-reduction FastSLAM with simulated annealing to solve the particle degradation, depletion, and loss problems of traditional FastSLAM. Bloesch et al. proposed the robust visual-inertial odometry (ROVIO), which also uses a filter-based tight-coupling solution. The ROVIO front-end adopts the direct method to track camera movement by minimizing the photometric error of image patches, and the extended Kalman filter at the back-end tracks the image patch features and three-dimensional landmark points at the same time [12]. However, ROVIO has no loop detection module; it is only an odometry and accumulates a sizeable error in the long term.

In recent years, with the continuous improvement of processor performance, optimization-based VINS has gradually become the current hot spot. Leutenegger et al. proposed keyframe-based visual-inertial SLAM using nonlinear optimization (OKVIS) [13]. Its front-end uses multi-scale Harris corner detection and BRISK descriptors. OKVIS pre-integrates the IMU information between keyframes in the back-end and then estimates the system motion from the vision and IMU information. However, the scheme has no loop detection or pose graph optimization, which inevitably causes error accumulation. Mur-Artal et al. proposed VIORB-SLAM [14,15] by fusing IMU information into ORB-SLAM; it also adopts an optimization-based tight-coupling scheme. Its front-end uses ORB features, pre-integrates the IMU data, and then uses the g2o library to fuse the two kinds of information to estimate the system motion. The accuracy of VIORB-SLAM is much better than that of monocular ORB-SLAM. Still, the code is not open-source, and it only supports monocular cameras, which require complex initialization before use [16].

Shen Shaojie et al. proposed the robust and versatile monocular visual-inertial state estimator (VINS-Mono), which also uses an optimization-based tight-coupling scheme. However, VINS-Mono only supports monocular cameras. Its front-end uses the KLT optical flow method to track Harris corners. It also uses the pre-integration idea to process IMU data and then fuses the two kinds of sensor information through iterative optimization [17]. In addition, VINS-Mono can be initialized in an unknown environment and has a loop detection module [18,19], but its accuracy needs further improvement. Later, the Shen Shaojie team released VINS-Fusion based on VINS-Mono. VINS-Fusion retains the optimization-based tightly coupled fusion scheme and supports multiple sensor combinations (stereo only, stereo + IMU, and stereo + IMU + GPS). Compared with VINS-Mono, it can be initialized statically, and the scale information does not rely entirely on the IMU, which makes it more robust. However, because of the mismatch problem of stereo vision, its accuracy is not even as good as VINS-Mono in some situations.

Although new visual-inertial information fusion SLAM systems keep emerging, visual and IMU information still cannot be integrated effectively. Resolving the frequency inconsistency between the camera and the IMU and fully integrating the visual and IMU information remain urgent problems for VINS. A suitable SLAM should fully integrate visual and IMU information, especially to improve the positioning accuracy and robustness of the system in extreme situations such as a lack of environmental information or rapid carrier movement. Therefore, this paper proposes a visual-inertial information fusion SLAM based on Runge-Kutta improved pre-integration, aiming to improve the accuracy and robustness of the system.

The sections of our paper are arranged as follows: the second section introduces the visual-inertial information fusion framework. The third section introduces the SLAM method of visual-inertial information fusion based on fourth-order Runge-Kutta improved pre-integration. The fourth section introduces the experimental part of our paper, including dataset experiments and experiments in a complex indoor environment. Finally, the fifth section summarizes the paper and gives conclusions.

    2. RK-VIF: system overview

Pure visual SLAM can estimate the pose of an unmanned system and create a feature map in an unknown environment. Visual information has no data drift, and positioning accuracy is higher in areas rich in image features; however, if image features are scarce or the movement is too fast, it is difficult for pure visual SLAM to work robustly, and positioning may even fail. The IMU does not depend on environmental characteristics, and its measurement frequency is higher than the camera's. IMU relative displacement data are highly accurate over a short time, which can make up for visual blurring due to fast motion; however, prolonged reliance on the IMU causes errors to accumulate over time and produces severe drift. Conversely, visual SLAM can constrain the long-term divergence of the IMU. Thus, the IMU and camera are highly complementary in performance. Therefore, this paper fuses IMU information into visual SLAM and studies visual-inertial information fusion SLAM to improve the system's positioning accuracy. As shown in Fig. 1, the visual-inertial information fusion SLAM framework of this paper mainly includes four modules: measurement data processing, system initialization, joint state estimation with sliding-window local optimization, and loop detection.

    3. Measurement process

    3.1. Feature extraction and matching

In visual SLAM based on the feature point method, feature point extraction and matching are required first. The purpose is to determine the projections of the same spatial point in different images according to the extracted features. Feature extraction generally includes two parts: feature point detection and description. First, the salient information in the image is detected and then represented by feature descriptors. Then, by comparing the distances between feature descriptors, corresponding feature points in different images can be matched. Usually, there are many mismatched points among the matches, which affect the accuracy of the subsequent pose estimation. Therefore, a mismatch elimination algorithm is needed to remove the outliers.
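As a concrete illustration of this pipeline, the sketch below detects ORB features, matches them by Hamming distance, and applies Lowe's ratio test as a first, descriptor-level mismatch filter; it assumes OpenCV is available, and the image paths are placeholders.

```python
import cv2

# Detect ORB keypoints and compute binary descriptors in both images.
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match binary descriptors by Hamming distance, keeping the two best
# candidates per feature so Lowe's ratio test can reject ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = matcher.knnMatch(des1, des2, k=2)
good = [p[0] for p in pairs
        if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
print(len(good), "tentative correspondences survive the ratio test")
```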

Current visual SLAM methods usually use the Random Sample Consensus (RANSAC) algorithm to eliminate mismatched points. Although RANSAC can effectively eliminate mismatches, the randomness of the algorithm makes the number of iterations unstable, which in turn causes problems such as low elimination efficiency. Therefore, this paper introduces the Progressive Sample Consensus (PROSAC) algorithm [20] to eliminate mismatched points. PROSAC solves the unstable iteration count and poor robustness of RANSAC, effectively improves efficiency, and helps guarantee the real-time performance of SLAM.

PROSAC is based on the assumption that "points with good quality are more likely to be inliers" [21]. Therefore, when selecting sample points, the points with good quality are chosen first. Specifically, PROSAC first uses certain criteria to evaluate the quality of all sample points and then sorts them by the evaluation results; in each iteration, only high-quality sample points are selected to estimate the matching model. Through this sorting strategy, the randomness of sampling is reduced, the probability of obtaining the correct model is improved, and the number of iterations is reduced.
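The sketch below illustrates the sorting idea in simplified form (it is not the full PROSAC of Ref. [20]; in particular, the exact pool-growth schedule is replaced by a simple heuristic). The callbacks fit_model and count_inliers are hypothetical and would be supplied by the pose-estimation code.

```python
import random

def prosac_like(matches, quality, fit_model, count_inliers,
                sample_size=4, max_iters=500):
    """Draw minimal samples from a progressively growing pool of the
    best-ranked correspondences, instead of from all of them at once."""
    order = sorted(range(len(matches)), key=lambda i: quality[i], reverse=True)
    pool = sample_size + 1                     # start with the very best points
    best_model, best_inliers = None, -1
    for it in range(max_iters):
        # Enlarge the pool slowly, so early iterations sample only
        # high-quality correspondences (the PROSAC assumption).
        if it % 10 == 0:
            pool = min(len(matches), pool + 1)
        idx = random.sample(order[:pool], sample_size)
        model = fit_model([matches[i] for i in idx])
        inliers = count_inliers(model, matches)
        if inliers > best_inliers:
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```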

    The steps of the PROSAC algorithm are shown in Fig. 2:

    3.2. IMU pre-integration based on Runge-Kutta

In a VINS, the camera is an external sensor that collects environmental information and determines the system's pose during movement; the IMU is an internal sensor that provides the system's motion information and recovers the gravity direction for visual SLAM. IMU measurements are affected by Gaussian noise and zero bias. To simplify the model, it is assumed that each axis of the accelerometer and gyroscope is measured independently, and the influence of the earth's rotation is not considered. The measurement process can be modeled as follows [22,23]:
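A standard continuous-time form of this model, as used in the pre-integration literature [22,23] and consistent with the symbols below, is

$$\tilde{\omega}_B(t) = \omega_{WB|B}(t) + b^g(t) + \eta^g(t), \qquad \tilde{a}_B(t) = R_{WB}^{T}(t)\left(a_{B|W}(t) - g_W\right) + b^a(t) + \eta^a(t)$$

where $b^g$ and $b^a$ are the slowly varying zero biases and $\eta^g$ and $\eta^a$ are the Gaussian measurement noises.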

According to the rigid body kinematics model, the motion state of the body frame in the world coordinate system satisfies the following differential formula:
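In the notation of Refs. [22,23], a standard rendition of these kinematics is

$$\dot{R}_{WB} = R_{WB}\,[\omega_{WB|B}]_{\times}, \qquad \dot{v}_{B|W} = a_{B|W}, \qquad \dot{p}_{B|W} = v_{B|W}$$

with $[\cdot]_{\times}$ the skew-symmetric matrix of a vector.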

In the formula, $a_{B|W}$ represents the acceleration of the body frame in the world frame, and $v_{B|W}$ represents the velocity of the body frame in the world frame.

Fig. 1. Visual-inertial information fusion framework.

Suppose the time difference between two IMU measurements is $\Delta t$. Assuming that the acceleration $a_{B|W}$ and the angular velocity $\omega_{WB|B}$ remain constant within $[t, t+\Delta t]$, substituting the IMU measurement model of formula (1) into formula (2) gives:
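Under this zero-order-hold assumption, the Euler-discretized update commonly written for formula (3) in the pre-integration literature is (a standard rendition, not a verbatim reproduction):

$$\begin{aligned}
R_{WB}(t+\Delta t) &= R_{WB}(t)\,\mathrm{Exp}\!\left(\left(\tilde{\omega}(t) - b^g(t) - \eta^{gd}(t)\right)\Delta t\right)\\
v(t+\Delta t) &= v(t) + g_W\,\Delta t + R_{WB}(t)\left(\tilde{a}(t) - b^a(t) - \eta^{ad}(t)\right)\Delta t\\
p(t+\Delta t) &= p(t) + v(t)\,\Delta t + \tfrac{1}{2}\,g_W\,\Delta t^2 + \tfrac{1}{2}\,R_{WB}(t)\left(\tilde{a}(t) - b^a(t) - \eta^{ad}(t)\right)\Delta t^2
\end{aligned}$$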

In the formula, $\eta^{gd}(t)$ and $\eta^{ad}(t)$ are the discrete values of the random noise $\eta^{g}(t)$ and $\eta^{a}(t)$, respectively.

Traditional pre-integration uses the Euler integral to discretize the IMU motion formula and then derives the IMU pre-integration formula. Our paper uses the Runge-Kutta method to discretize the IMU motion formula to improve the pre-integration accuracy. The basic idea of the Runge-Kutta method is to estimate the derivatives at multiple points in the integration interval, take the weighted average of these derivatives as the average derivative, and then use the average derivative to estimate the result at the end of the interval. The higher the order of the Runge-Kutta algorithm, the smaller the truncation error and the higher the accuracy. In practical applications, the fourth-order Runge-Kutta algorithm already meets most accuracy requirements. Considering the processor's performance, the paper uses the fourth-order Runge-Kutta algorithm.

Given the following differential formula:

$$\dot{y}(t) = f(t, y(t)), \qquad y(t_n) = y_n$$

The Euler integral uses the derivative $f(t_n, y_n)$ at point $t_n$ as the derivative over the whole interval $[t_n, t_{n+1}]$ and updates from the state $y_n$ at point $t_n$:

$$y_{n+1} = y_n + f(t_n, y_n)\,\Delta t$$

In the formula, $\Delta t = t_{n+1} - t_n$. The Euler integral only uses the derivative information at point $t_n$, so it has a higher estimation error.

The Runge-Kutta algorithm is an improvement of the Euler integral. It selects four points in the interval $[t_n, t_{n+1}]$, calculates the derivatives $k_1$ to $k_4$ at these points iteratively, and then takes their weighted average to calculate the next state $y_{n+1}$. The formula of the fourth-order Runge-Kutta algorithm is as follows:

$$y_{n+1} = y_n + \frac{\Delta t}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)$$

where

$$k_1 = f(t_n, y_n), \quad k_2 = f\!\left(t_n + \tfrac{\Delta t}{2},\; y_n + \tfrac{\Delta t}{2}k_1\right), \quad k_3 = f\!\left(t_n + \tfrac{\Delta t}{2},\; y_n + \tfrac{\Delta t}{2}k_2\right), \quad k_4 = f\!\left(t_n + \Delta t,\; y_n + k_3\,\Delta t\right)$$

In the formula, $k_1$ is the derivative at the beginning of the interval, $(t_n, y_n)$; $\left(t_n + \tfrac{\Delta t}{2},\, y_n + \tfrac{\Delta t}{2}k_1\right)$ is the midpoint of the interval estimated by the Euler method using $k_1$ as the average derivative, and $k_2$ is its derivative; $\left(t_n + \tfrac{\Delta t}{2},\, y_n + \tfrac{\Delta t}{2}k_2\right)$ is the midpoint estimated using $k_2$ as the average derivative, and $k_3$ is its derivative; $\left(t_n + \Delta t,\, y_n + k_3\,\Delta t\right)$ is the end point of the interval estimated using $k_3$ as the average derivative, and $k_4$ is its derivative. From formula (7), it can be seen that the algorithm combines the derivative information of the four points, with the derivatives at the two midpoints given a higher weight.
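A minimal, self-contained sketch of the two integrators on a scalar test equation with a known solution, which makes the accuracy gap visible:

```python
import math

def euler_step(f, t, y, dt):
    # Single Euler step: derivative at the left endpoint only.
    return y + dt * f(t, y)

def rk4_step(f, t, y, dt):
    # Four derivative evaluations; the two midpoint slopes get double weight.
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Test on dy/dt = y with y(0) = 1, whose exact solution is e^t.
f = lambda t, y: y
t, y_euler, y_rk4 = 0.0, 1.0, 1.0
for _ in range(10):                  # integrate to t = 1 with dt = 0.1
    y_euler = euler_step(f, t, y_euler, 0.1)
    y_rk4 = rk4_step(f, t, y_rk4, 0.1)
    t += 0.1
print(abs(y_euler - math.e))         # ~1.2e-1
print(abs(y_rk4 - math.e))           # ~2.1e-6
```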

Our paper uses the Runge-Kutta method to discretize the IMU motion formula in formula (3); the update takes the same form as formula (3), with the instantaneous measurements replaced by Runge-Kutta weighted averages:

$\omega_{WB\text{-}RK|B}$ and $a_{B\text{-}RK}$ are the weighted averages of the system's angular velocity and acceleration obtained by the Runge-Kutta algorithm within $[k, k+1]$. Compared with using only the angular velocity $\omega^{k}_{WB|B}$ and acceleration $a^{k}_{B}$ at moment $k$ to recursively calculate the system state at moment $k+1$, $\omega_{WB\text{-}RK|B}$ and $a_{B\text{-}RK}$ have higher accuracy.

As shown in Fig. 3, the IMU frequency is much higher than the camera frequency, and there are many IMU measurements between two adjacent keyframes [24,25]. Therefore, the IMU data cannot be fused with the camera data directly. The IMU data between the two frames must be integrated to align the frequencies of the two sensors.
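A sketch of the bookkeeping this implies, with hypothetical containers: IMU samples are buffered, and the slice falling between two keyframe timestamps is handed to the pre-integrator.

```python
from bisect import bisect_left

def imu_between(imu_stamps, imu_samples, t_i, t_j):
    """Return the IMU samples with timestamps in [t_i, t_j).

    imu_stamps: sorted timestamps of the IMU stream (e.g. 200 Hz);
    imu_samples: measurements aligned with imu_stamps;
    t_i, t_j: timestamps of two adjacent keyframes (e.g. a 20 Hz camera).
    """
    lo = bisect_left(imu_stamps, t_i)
    hi = bisect_left(imu_stamps, t_j)
    return imu_samples[lo:hi]

# Each returned slice is then integrated into one relative-motion
# constraint, aligning the fast IMU stream with the slow camera stream.
```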

    Fig. 2. The process of PROSAC algorithm.

    Fig. 3. Sensor frequency.

Assuming that the IMU information and camera images are time-synchronized and differ only in frequency, we accumulate the IMU information from moment i to moment j. The state variables such as the rotation, speed, and translation of the system at moment i can then be directly propagated to moment j. Combining formula (8), the numerical integration of IMU information between adjacent keyframes i and j satisfies the following formula:

Formula (9) describes the state estimation obtained from the IMU information between moments i and j. But if $R_i^{WB}$ changes during the optimization process, all other rotations $R_k^{WB}$ ($k = i, \ldots, j-1$) change, and all integrals must be recalculated. To avoid repeated integration, we use the pre-integration form to define a set of variables that are unrelated to the initial pose and velocity:

In the formula, $\Delta R_{i,k} = R_i^{BW} R_k^{WB}$ and $\Delta v_{i,k} = R_i^{BW}\left(v_k^{B|W} - v_i^{B|W} - g_W\,\Delta t_{i,k}\right)$. The left side of formula (10) is the relative motion increment between adjacent keyframes i and j, and the right side is the IMU pre-integration measurement model. The definition makes the right side of the formula independent of the state and gravity at moment i. Repeated integration during optimization is thus avoided, and computation is significantly reduced.
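For reference, a standard rendition of the pre-integrated rotation, velocity, and position measurements (noise terms omitted), in the style of the on-manifold pre-integration literature [22,23] and consistent with the definitions above:

$$\begin{aligned}
\Delta \tilde{R}_{ij} &= \prod_{k=i}^{j-1} \mathrm{Exp}\!\left((\tilde{\omega}_k - b^g_i)\,\Delta t\right)\\
\Delta \tilde{v}_{ij} &= \sum_{k=i}^{j-1} \Delta \tilde{R}_{ik}\,(\tilde{a}_k - b^a_i)\,\Delta t\\
\Delta \tilde{p}_{ij} &= \sum_{k=i}^{j-1} \left(\Delta \tilde{v}_{ik}\,\Delta t + \tfrac{1}{2}\,\Delta \tilde{R}_{ik}\,(\tilde{a}_k - b^a_i)\,\Delta t^2\right)
\end{aligned}$$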

    3.3. System initialization

VINS initialization is a loosely coupled process of visual estimation and IMU estimation [26]. First, visual initialization is performed, and the initial pose of the system is estimated from the matched images. Then the zero biases of the gyroscope and accelerometer are initialized from the estimation results, and the IMU velocity is calculated.

Assuming that the gyroscope bias remains constant between two consecutive keyframes, we minimize the difference between the gyroscope pre-integrated measurement and the visually estimated relative rotation to calculate the gyroscope bias:
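A standard rendition of this least-squares problem (formula (11)), in the form popularized by visual-inertial ORB-SLAM [14,15], with $J^{g}_{\Delta R}$ denoting the Jacobian of the pre-integrated rotation with respect to the gyroscope bias:

$$b_g^{*} = \arg\min_{b_g} \sum_{i=1}^{N-1} \left\| \mathrm{Log}\!\left( \left( \Delta \tilde{R}_{i,i+1}\, \mathrm{Exp}\!\left(J^{g}_{\Delta R}\, b_g\right) \right)^{T} R_i^{BW} R_{i+1}^{WB} \right) \right\|^{2}$$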

N is the number of keyframes used for initialization; $\Delta \tilde{R}_{i,i+1}$ is the pre-integrated measurement of rotation between adjacent keyframes, and $R_i^{BW} R_{i+1}^{WB}$ is the relative rotation estimated by vision. Formula (11) is a least-squares problem, and the Gauss-Newton method can find the zero bias of the gyroscope. After calculating the gyroscope zero bias, the pre-integration must be recomputed on the IMU data to eliminate the influence of the bias.

To obtain the accelerometer zero bias, we first estimate the gravitational acceleration from the poses of visual SLAM and then calculate the zero bias [27]. The conversion between the body coordinate system and the camera coordinate system in the world coordinate system satisfies:

$$p_{B|W} = R_{WC}\, p_{B|C} + p_{C|W}$$

$p_{B|W}$ and $p_{C|W}$ respectively represent the positions of the IMU and the camera in the world coordinate system, and $R_{WC}$ is the rotation matrix of the camera coordinate system relative to the world coordinate system. Meanwhile, $p_{B|C}$ represents the position of the IMU in the camera coordinate system.

Introducing the constraint of the gravitational acceleration magnitude $G \approx 9.8\ \mathrm{m/s^2}$, suppose a body coordinate system I that shares its origin with the world coordinate system. The rotation matrix between this body coordinate system and the world coordinate system is $R_{WI}$. The direction of gravitational acceleration in the coordinate system I is $\hat{g}_I = [0, 0, 1]^T$, and the direction of gravitational acceleration in the world coordinate system is $\hat{g}_W = g^{*}_W / \lVert g^{*}_W \rVert$, so the angle between these two direction vectors can be used to calculate $R_{WI}$.

At this time, the gravitational acceleration $g_W$ can be expressed as:

$$g_W = R_{WI}\, \hat{g}_I\, G$$

$R_{WI}$ will change after the accelerometer zero bias is introduced. We use a perturbation $\delta\theta$ to describe the change and perform a first-order approximation:

$$g_W \approx R_{WI}\left(I + [\delta\theta]_{\times}\right)\hat{g}_I\, G$$

Combining formula (9) and formula (12), considering the effect of the accelerometer zero bias on $\Delta p_{ij}$, and using the constraints between three consecutive keyframes to eliminate the velocity term, we obtain formula (15): a linear system in the accelerometer zero bias and the gravity correction.

    3.4. Joint estimation and local optimization

The initial velocity, gravitational acceleration, and the zero biases of the gyroscope and accelerometer are solved through IMU initialization. Then we use image and IMU information to estimate and optimize the system's pose. First, joint estimation calculates the system's pose at the current keyframe by minimizing the visual reprojection error and the IMU pre-integration error. Then, the system's pose and the map points' locations are optimized in the sliding window.

We use Singular Value Decomposition (SVD) to solve formula (15); the accelerometer zero bias $b_a$ and the refined gravitational acceleration are thus obtained.
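As a minimal numerical illustration of this step, assume the linear system of formula (15) has been stacked into a matrix A and a vector b (the shapes below are hypothetical):

```python
import numpy as np

def solve_svd(A, b):
    """Least-squares solution of A x = b via the thin SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((U.T @ b) / s)

# Example: an overdetermined system with 6 unknowns, mirroring the
# initialization above (gravity correction plus accelerometer zero bias);
# sizes and noise level are hypothetical.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 6))
x_true = rng.standard_normal(6)
b = A @ x_true + 1e-3 * rng.standard_normal(30)
print(np.allclose(solve_svd(A, b), x_true, atol=1e-2))  # True
```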

From the calculated zero biases of the gyroscope and accelerometer, combined with formula (9), the system speed at each keyframe moment can be further obtained. The system speed at the last keyframe is:

    The system speed at other keyframe moments is:

    3.4.1. State and error

The state variables of the VINS include the rotation matrix, translation, and velocity. The IMU bias is a random-walk process, which also needs to be estimated. Define the state of the ith frame as $\Phi_i = \{R_i, p_i, v_i, b_{gi}, b_{ai}\}$. In the subsequent local optimization, the map point locations in the sliding window are jointly optimized as part of the state.

    As shown in Fig. 4, the VINS includes three constraints:

    Fig. 4. VINS state and constraints.

●The constraint between keyframes and map points given by the positions of image feature points;

    ●The constraint of IMU pre-integration measurement between adjacent keyframes;

    ●The constraint on the random walk model of the zero bias between adjacent keyframes.

    According to the above constraints, we define the visual reprojection error and IMU measurement error. The two types of errors are introduced below.

The visual error is defined by the reprojection error of the map points. Assuming that m map points are observed at the time of the ith frame, the visual error at that time is

$$E_{proj}(i) = \sum_{k=1}^{m} \rho\!\left(\left(x_k^{uv} - \pi\!\left(X_k^C\right)\right)^T \Sigma_k^{-1} \left(x_k^{uv} - \pi\!\left(X_k^C\right)\right)\right)$$

In the formula, $\rho$ is the robust kernel function and $\Sigma_k$ is the covariance matrix of the reprojection error. Meanwhile, $x_k^{uv} = (u_k^L, v_k^L, u_k^R)$ is the projected coordinate of the kth landmark point observed by the stereo camera, $X_k^C = [X, Y, Z]$ is the landmark in camera coordinates, and $\pi$ is the projection formula:

$$\pi\!\left([X, Y, Z]^T\right) = \begin{bmatrix} f_x X/Z + c_u \\ f_y Y/Z + c_v \\ f_x (X - b_l)/Z + c_u \end{bmatrix}$$

$f_x$, $f_y$, $c_u$, and $c_v$ are the camera intrinsic parameters, and $b_l$ is the stereo baseline. The IMU error comprehensively considers two constraints: the IMU pre-integration measurement and the zero-bias random walk. The IMU error between frames i and j is defined as:

$$E_{IMU}(i, j) = \rho\!\left(e_{IMU}^T\, \Sigma_{IMU}^{-1}\, e_{IMU}\right) + \rho\!\left(e_b^T\, \Sigma_b^{-1}\, e_b\right)$$

In the formula, $\Sigma_{IMU}$ is the pre-integration measurement covariance matrix and $\Sigma_b$ is the zero-bias random-walk covariance matrix; $e_{IMU}$ is the IMU pre-integration measurement residual, stacking the rotation, velocity, and position pre-integration residuals, and $e_b$ is the IMU zero-bias random-walk residual, the difference between the biases of the two frames.

    3.4.2. Joint state estimation

Joint state estimation measures the system pose in real time. First, the IMU provides the system with a relatively credible initial estimate, and the map points observed in the previous keyframe are reprojected into the current keyframe. Then, the system calculates the visual reprojection error for all projected points and their matches in the two adjacent keyframes. Finally, it minimizes the sum of the visual reprojection error and the IMU pre-integration error. Formula (19) defines the visual error $E_{proj}(j)$, and formula (21) defines the IMU error $E_{IMU}(i,j)$. Therefore, the objective function is defined as:

$$\Phi^{*} = \arg\min_{\Phi}\ \left\{ E_{proj}(j) + E_{IMU}(i, j) \right\}$$

The above problem is a least-squares problem. Therefore, the optimal state $\Phi^{*}$ can be solved by minimizing the sum of the visual reprojection error and the IMU error.

    3.4.3. Local sliding window optimization

The state information of the latest keyframe is estimated through the joint state estimation module, and the observed landmarks are added to the local map as map points. Next, we optimize the system poses and map point locations in the local map.

In the VINS, since the IMU provides constraints between two adjacent keyframes, the keyframes contained in the local map must be consecutive before they can be used for optimization. At the same time, the computational complexity after fusion grows rapidly because of the high IMU frequency. To balance accuracy and computational efficiency, we choose the sliding window strategy to optimize the local map. As shown in Fig. 5, the consecutive keyframes in the most recent period constitute a sliding window. During optimization, only the keyframes in the sliding window and the map points observed by these keyframes are optimized. In our paper, the sliding window length in the local map is set to 10 to ensure reliability. This ensures that the IMU pre-integration can provide effective constraints during local optimization (see Fig. 6).

    Fig. 5. Sliding window optimization range.

    Fig. 6. Loop detection process.

    Fig. 7. Globally optimized pose.

The state at this moment is defined as the set of keyframe poses in the sliding window together with the map points they observe: k denotes the keyframe's index in the sliding window, $T_k^{WB}$ the system pose in the world coordinate system at the kth keyframe, and $X_i^W$ the ith landmark's coordinate in the world coordinate system.

We combine the reprojection errors and the IMU pre-integration errors of all keyframes in the sliding window to construct the objective function:

Formula (26) is still a least-squares problem, and it is again solved by the Gauss-Newton method in g2o.

The local sliding window optimization comprehensively considers the state information of N consecutive keyframes and updates the system poses in the sliding window at each iteration. After local optimization is completed, the loop detection module performs loop detection and global pose graph optimization on the latest keyframes to eliminate accumulated errors and optimize the system pose.

    3.5. Back-end optimization

Unmanned systems explore unknown environments, and there is always error in estimating their pose. As the exploration area expands, earlier errors gradually accumulate into later moments. Therefore, correcting the pose in time is particularly important for the long-term robust running of the unmanned system. A feasible idea for solving this problem is to examine the image information of the scene and determine whether the current position has been reached before. If the information is found to be similar, a loop is formed at the repeated position. Then the system's poses in the loop are re-optimized to eliminate accumulated errors and obtain a globally consistent pose graph.

Loop detection based on the scene's appearance is essentially an image similarity detection problem. However, if all historical frames in the map were matched against the current frame, the computation would be too large and would seriously affect real-time detection. In this paper, we use the Bag of Words (BoW) model to accomplish loop detection.

The words in the BoW model are the average expression of similar features of a specific type, and the dictionary is the collection of all words. We use the ORB features extracted by the front-end. For each frame, the ORB features are compared against the dictionary elements to convert the frame into a bag-of-words vector. In loop detection, comparing the bag-of-words vectors of two frames determines whether they constitute a loop, which speeds up detection. The detection process includes three parts: (1) building a dictionary; (2) describing frames with bag-of-words vectors; (3) comparing bag-of-words vectors to determine the similarity of two frames.
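A toy sketch of steps (2) and (3), assuming a flat vocabulary of quantized ORB descriptors (production systems such as DBoW use a hierarchical vocabulary with TF-IDF weighting):

```python
import numpy as np

def bow_vector(word_ids, vocab_size):
    """L2-normalized histogram of visual-word occurrences in one frame."""
    v = np.bincount(word_ids, minlength=vocab_size).astype(float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def similarity(v1, v2):
    """Cosine similarity of two normalized bag-of-words vectors."""
    return float(v1 @ v2)

# Two frames whose ORB descriptors were quantized to word ids in [0, 1000).
frame_a = bow_vector(np.array([3, 3, 17, 250, 999]), 1000)
frame_b = bow_vector(np.array([3, 17, 17, 251, 999]), 1000)
if similarity(frame_a, frame_b) > 0.7:  # hypothetical loop threshold
    print("loop candidate:", round(similarity(frame_a, frame_b), 3))
```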

After the system successfully detects a loop, it establishes the constraint relationship between the two frames through feature matching. First, we fuse the map points matched by the loop and eliminate redundant map points. Then we calculate the relative pose between the two keyframes with the Perspective-n-Point (PnP) algorithm and correct the current keyframe's pose. In addition, the other keyframes in the loop also have accumulated errors to different degrees and need to be globally optimized.

Considering that the number of map points is much larger than the number of keyframe poses, we only optimize the keyframe poses in the global optimization to reduce computation. After the pose optimization is completed, we use the optimized poses to adjust the map points.

In pose graph optimization, the nodes represent camera poses, and the edges represent the pose transformations between adjacent nodes. Suppose the camera poses of two adjacent nodes in the loop are $T_i$ and $T_j$, the pose transformation matrix is $T_{i,j}$, and the corresponding Lie algebra elements are $\xi_i$, $\xi_j$, and $\xi_{i,j}$, respectively. According to the relationship of pose transformation:

$$T_{i,j} = T_i^{-1}\, T_j$$

Due to pose estimation errors, the above formula will not hold exactly. Pose estimation for unmanned systems iteratively solves for an optimal transformation matrix $T \in SE(3)$ that minimizes the system error. To simplify the iterative solution, the Lie algebra is introduced, so matrix derivatives can be transformed into products with the vector corresponding to an anti-symmetric matrix. We stipulate that the symbol '∨' converts an anti-symmetric matrix to its corresponding vector, and the symbol '∧' converts a vector to its corresponding anti-symmetric matrix. Thus we construct the error term by moving $T_{i,j}$ to the right side of formula (27):

$$e_{i,j} = \ln\!\left(T_{i,j}^{-1}\, T_i^{-1}\, T_j\right)^{\vee}$$

Formula (28) describes the error term between two adjacent pose nodes. Considering the error terms formed by all pose nodes in the pose graph, we build a least-squares problem:

$$\xi^{*} = \arg\min_{\xi}\ \frac{1}{2} \sum_{(i,j) \in \varepsilon} e_{i,j}^{T}\, \Sigma_{i,j}^{-1}\, e_{i,j}$$

where ξ is the set of all nodes in the pose graph, ε is the set of all edges, and $\Sigma_{i,j}$ is the information matrix representing the weight of the error term $e_{i,j}$.

Finally, we use g2o to solve the above least-squares problem; the poses of all keyframes in the loop are corrected, and the positions of all map points are re-adjusted. Fig. 7 shows the pose graph. Due to the accumulation of errors, the unmanned system's motion trajectory does not close when a previous position is revisited. Pose optimization aligns the system poses at the same place and corrects the global motion trajectory.

    4. Experiment

We evaluate the proposed visual-inertial information fusion algorithm in two parts: a dataset experiment and a real-environment experiment. In the first experiment, we compare the proposed algorithm with other state-of-the-art algorithms on a public dataset; the comparison of experimental data demonstrates the positioning accuracy and robustness of our algorithm in detail. Then, we test the algorithm in an indoor environment to evaluate its performance in real scenarios.

    4.1. Dataset experiment

The experimental platform configuration is shown in Table 1. The open-source EuRoC dataset provided by ETH Zurich is used to verify the SLAM algorithm. The EuRoC dataset was collected on a micro-drone carrying a 20 Hz stereo camera (MT9V034), a synchronized 200 Hz IMU (ADIS16448), and ground-truth trajectories from a motion capture system. Therefore, it is very suitable for verifying visual-inertial information fusion SLAM algorithms.

    Table 1 Experimental platform configuration.

The dataset includes 11 data sequences collected during the flight of the micro-drone shown in Fig. 8(a) in an industrial plant and two rooms. According to each sequence's exposure change, feature count, motion speed, and degree of motion blur, the EuRoC dataset divides the 11 sequences into three levels: easy, medium, and difficult. Among them, the maximum speed of 7 sequences exceeds 2 m/s, with a peak of 2.87 m/s, and the average speed of 3 sequences exceeds 0.9 m/s. Fig. 8(b) shows images collected by the micro-drone in the industrial plant. The environment in the plant is relatively complex and can fully simulate the actual settings of unmanned systems.

To verify the performance of the algorithm, we conducted extensive comparative experiments. To simplify notation, we use RK-VIF (the SLAM method for visual-inertial information fusion via Runge-Kutta) to denote our algorithm and compare it with the current state-of-the-art visual-inertial fusion systems OKVIS, ROVIO, and VINS-Mono. OKVIS is an advanced VIO algorithm developed by the Autonomous Systems Lab of ETH Zurich and can be used with monocular or stereo cameras; VINS-Mono is a tightly coupled monocular visual-inertial system developed by Shaojie Shen's team at the Hong Kong University of Science and Technology; ROVIO is a tightly coupled monocular visual-inertial system developed at ETH Zurich. This part details the results on the MH_05_difficult and V1_02_medium sequences of the EuRoC dataset.

For sequences MH_05_difficult and V1_02_medium, the running trajectories of ROVIO, OKVIS, RK-VIF, and the ground truth are shown in Fig. 9.

Through multiple comparison experiments, we report the Root Mean Square Error (RMSE) of each algorithm on each sequence of the EuRoC dataset to evaluate their performance.
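For reference, the RMSE of the absolute trajectory error is computed in the usual way: after time-association and alignment of the estimated trajectory to the ground truth, it is the root of the mean squared translational error over all poses. A minimal sketch, with alignment assumed done beforehand:

```python
import numpy as np

def rmse_ate(est_xyz, gt_xyz):
    """RMSE of the absolute trajectory error.

    est_xyz, gt_xyz: (N, 3) arrays of estimated and ground-truth positions,
    assumed time-associated and aligned (e.g. by a Horn/Umeyama fit).
    """
    err = np.linalg.norm(est_xyz - gt_xyz, axis=1)  # per-pose error in meters
    return float(np.sqrt(np.mean(err ** 2)))
```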

The RMSE on 10 sequences of the EuRoC dataset is shown in Table 2. Compared with the other algorithms, RK-VIF has the smallest RMSE on most sequences, indicating that it has the highest positioning accuracy and the best overall performance. However, OKVIS holds a slight advantage on the sequence V2_01_easy. After comparative analysis, we believe RK-VIF incurs a certain positioning error on V2_01_easy: the unmanned aerial vehicle (UAV) moving too fast and the resulting motion blur are the main reasons for the increased RMSE. Moreover, on our test platform, OKVIS cannot run in real time. Fig. 10 shows the running time of the algorithms on the EuRoC dataset. The blue column represents the ground truth, i.e., the actual recording time of each sequence. It can be seen that our algorithm runs faster than ROVIO. However, compared with VINS-Mono, its running time is longer on certain sequences, because the algorithm spends more time matching feature points and using Runge-Kutta for pre-integration. OKVIS has the longest running time; a possible reason is that it spends a long time on image feature extraction and global pose optimization. In summary, our experiments verify the performance of our algorithm in terms of both accuracy and running speed. Our algorithm supports long-term, stable operation of the SLAM system.

Table 2 The RMSE in the EuRoC dataset (unit: m).

    Fig. 8. EuRoC dataset.

    Fig. 9. Dataset experiment.

    4.2. Real environment experiment

In the indoor experiment, the experimental environment is shown in Fig. 11(a). Our article uses indoor GPS as the ground truth. We compare RK-VIF with OKVIS, VINS-Mono, and ROVIO to verify the performance of the algorithms.

The experimental platform is shown in Fig. 11(b). A TX2 development board is mounted on a Mecanum-wheel mobile robot. Its computing resources mainly include a CPU (HMP dual Denver 2/2 MB L2 + quad ARM A57/2 MB L2) and a GPU (NVIDIA Pascal, 256 CUDA cores). It is connected to a ZED stereo camera and an STM32 driver board equipped with an MPU-6050 IMU sensor.

In this experiment, we tested the positioning accuracy of ROVIO, OKVIS, VINS-Mono, and RK-VIF. The laptop and the TX2 are connected to the same local area network (LAN). We use the laptop to control the TX2 mobile robot and drive three laps along the same trajectory in the indoor experimental environment, as shown in Fig. 12. Indoor GPS provides the ground truth, and the total track length is 72.72 m. Fig. 13 and Fig. 14 respectively show the translation and rotation of the mobile robot at each moment and the corresponding errors.

Fig. 12. Running trajectory in the indoor environment.

Fig. 10. Running time in the EuRoC dataset (unit: s).

Fig. 11. Test environment and equipment.

RK-VIF, ROVIO, OKVIS, and VINS-Mono are compared and analyzed in the indoor environment experiment. Compared against the ground truth generated by indoor GPS in the X and Y directions, the RK-VIF algorithm has the smallest trajectory error and the highest positioning accuracy, with errors between −0.2 m and +0.2 m. VINS-Mono is second, and OKVIS third. ROVIO is the worst, with a trajectory error of ±0.3 m. In the Z direction, because the mobile robot always stays on the same horizontal plane, the position at each moment fluctuates around 0, which is consistent with the actual motion.

Considering that the indoor experimental site is flat with no slope, we exclude the roll and pitch angles from the attitude-error comparison; only the yaw angle is included in the attitude error analysis.

Fig. 13. The positions and corresponding errors of the algorithms.

As shown in Fig. 14, comparing the four algorithms RK-VIF, OKVIS, VINS-Mono, and ROVIO, the attitude error of RK-VIF is significantly smaller than that of the other three, lying between −4° and +4°. OKVIS and VINS-Mono have similar attitude errors, and ROVIO has the largest error, reaching ±6°.

Figs. 13 and 14 analyze the positioning errors of the four algorithms under the same experimental conditions and scene in the form of relationship diagrams. To demonstrate the advantages of our algorithm more intuitively, and analogously to the dataset experiment in Section 4.1, we conduct five experiments for each algorithm and report the best RMSE in the indoor environment in Table 3. It can be seen from the table that RK-VIF has smaller positioning errors and better robustness in actual scenes. Thus, combining the dataset and real-scene experiments, our algorithm balances positioning accuracy and running time, providing an excellent solution for unmanned systems to explore unknown environments robustly.

Fig. 14. The attitudes and corresponding errors of the algorithms.

Table 3 The average RMSE in the indoor environment (unit: m).

    5. Conclusion

This paper proposes a visual-inertial information fusion SLAM based on Runge-Kutta improved pre-integration, aiming at the low positioning accuracy of pure visual SLAM under fast motion and in complex indoor environments. In the process of visual-inertial information fusion, the IMU pre-integration value is used as a constraint between two adjacent frames. To calculate the pre-integration value, the paper uses the Runge-Kutta method instead of the Euler integral to discretize the IMU motion equation, avoiding the error caused by the first-order approximation. The experiments show that RK-VIF can effectively perform positioning in environments of different difficulty levels. Furthermore, compared with other state-of-the-art open-source algorithms, our method shows higher positioning accuracy. Nevertheless, extensive further research is still needed to improve the accuracy and robustness of the system.

    Declaration of competing interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    Acknowledgements

This work was supported by the China Postdoctoral Science Foundation under Grant 2019M653870XB; the Natural Science Foundation of Shaanxi Province under Grants 2020GY-003 and 2021GY-036; the National Natural Science Foundation of China under Grant 62001340; and the Fundamental Research Funds for the Central Universities, China, under Grants XJS211306 and JC2007.
