
    Research on DSO vision positioning technology based on binocular stereo panoramic vision system

Defence Technology, 2022, Issue 4

Xiao-dong Guo, Zhou-bo Wang, Wei Zhu, Guang He, Hong-bin Deng, Cai-xia Lv, Zhen-hai Zhang *

    a School of Mechatronical Engineering, Beijing Institute of Technology, Beijing,100081, China

    b Department of Architectural Engineering Technology, Vocational and Technical College of Inner Mongolia Agricultural University, Baotou, 014109, China

Keywords: Panoramic vision; DSO; Visual positioning

ABSTRACT In the visual positioning of Unmanned Ground Vehicles (UGV), the visual odometer based on the direct sparse method (DSO) has the advantages of a small amount of calculation, high real-time performance and high robustness, so it is more widely used than visual odometers based on the feature point method. Ordinary vision sensors have a narrower viewing angle than panoramic vision sensors, and there are fewer road signs in a single frame of image, resulting in poor road sign tracking and positioning capabilities, which severely restricts the development of visual odometry. Based on these considerations, this paper proposes a binocular stereo panoramic vision positioning algorithm based on extended DSO, which solves these problems well. The experimental results show that the binocular stereo panoramic vision positioning algorithm based on the extended DSO can directly obtain the panoramic depth image around the UGV, which greatly improves the accuracy and robustness of visual positioning compared with other ordinary visual odometers. It will have wide application prospects in the UGV field in the future.

    1. Introduction

The key technologies of safe UGV driving [1] can be divided into four parts: environment perception, precise positioning, decision and planning, and control and execution [2]. Among them, precise positioning is the core of the entire UGV technology and one of the key tasks of environment perception [3], playing a vital role in the safe driving of a UGV. A UGV rarely works in a static environment; most of its work takes place in complex and highly dynamic environments.

Vehicle self-positioning can be achieved in several ways. The first is signal-based positioning [4], whose typical representative is GNSS, the global navigation satellite system. The second is dead reckoning with an inertial measurement unit (IMU), which infers the current position and azimuth angle from the position and azimuth angle of the previous moment. The third is environment feature matching based on lidar/stereo vision positioning: the observed features are matched against features stored in a database to obtain the current position and attitude of the vehicle.

With the rapid development of computer vision and image processing technology, stereo vision has been widely used in robot navigation, intelligent manufacturing, intelligent robots and other fields [5-7]. As a necessary and cost-effective passive sensor system, the ordinary vision system is widely used in UGVs. However, limited by the optical principle of the perspective lens, only a local area of the environment can be observed. Therefore, the amount of environmental information obtained by the vehicle is very limited, and the depth information of the environment cannot be obtained by a single ordinary vehicle vision system [8]. With the rapid development of UGV technology, a single vehicle vision system can no longer meet the needs of large-scale, large field-of-view, integrated imaging. Compared with the ordinary vehicle vision system, the panoramic vision system can effectively expand the perception range of the vision system [9], which is common in many optical applications. Consequently, the panoramic vision system [10,11] has gradually appeared in the field of UGV technology [12].

Visual odometry can be considered part of visual SLAM technology. Current research can be classified by the front-end method and by the density of the recovered map points, as shown in Table 1.

From the back-end implementation, methods can be divided into filter-based methods and nonlinear optimization methods. Filter-based methods are generally sparse, such as SVO, MSCKF and ROVIO; nonlinear optimization methods include OKVIS, VINS and ORB-SLAM. In the field of computer vision, the most classic algorithms for the front end of visual SLAM are the feature point method and the direct method.

The feature point method [13] matches feature points by computing key points and descriptors of the features extracted from two frames of images, thereby obtaining the pose change between the two frames, and constructs the residual term by reprojecting the feature points. To solve the pose transformation, the feature point method first calculates the relationship between the camera and the map points, and then calculates the camera pose from this relationship. The direct method [14] bypasses the extraction and computation of feature points and constructs the residual term directly from the pixel intensities (photometry) of the two frames; the change of camera pose is then solved as a nonlinear optimization problem over the map points and the camera pose simultaneously.
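
To make the contrast concrete, here is a minimal sketch of how a direct-method residual is formed for a single pixel. This is illustrative only: the pinhole warp and the function name `photometric_residual` are our own, not the paper's.

```python
import numpy as np

def photometric_residual(I_i, I_j, p, inv_depth, K, R, t):
    """Direct-method residual for one pixel: warp p from frame i into
    frame j with the current pose guess and compare raw intensities,
    with no feature extraction or descriptor matching involved."""
    u, v = p
    # Back-project pixel (u, v) to a 3D point using its inverse depth.
    X = np.linalg.inv(K) @ np.array([u, v, 1.0]) / inv_depth
    # Transform the point into frame j and project it back to pixels.
    x_j = K @ (R @ X + t)
    u_j, v_j = int(round(x_j[0] / x_j[2])), int(round(x_j[1] / x_j[2]))
    # Photometric error: plain intensity difference.
    return float(I_j[v_j, u_j]) - float(I_i[v, u])
```

The feature point method would instead detect keypoints in both images, match descriptors, and build a geometric reprojection error from the matched pairs.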

Whether the direct method or the feature point method is used, an ordinary perspective camera is usually employed to obtain images and perform the calculations. The advantage of a perspective camera is that the related camera models and distortion correction have been studied extensively, the images are easier to process with various computer vision algorithms, and the front-end part is less difficult to implement. However, perspective cameras also have the following disadvantages:

(1) The range over which information about the surrounding environment can be obtained is limited, generally concentrated within 90° of horizontal field of view and rarely exceeding 180°; with special lens designs it can reach at most about 250°, but this is constrained by optical principles and lens design, and brings large image distortion and a complex imaging model. A panoramic vision system can easily achieve 360° horizontal surround coverage; its vertical field of view can exceed 100° and can be controlled by designing the shape of the mirror.

(2) The limited horizontal field of view also causes image quality to degrade under fast motion when the vehicle is turning or in other rotating scenes. For a panoramic vision system, as long as the vehicle does not rotate quickly in place, there will always be areas of the surround-view image with a relatively low rotation speed and better image quality. This characteristic also helps improve the accuracy and robustness of the algorithm.

(3) During rapid movement of the vehicle, the extracted image information points move very fast, and the same environmental point appears only a few times in the image sequence. In the joint optimization of multiple frames of images, the related geometric constraints are weakened, and the optimization results become worse.

    Table 1 General visual SLAM method classification.

Since the ordinary perspective camera originated earlier than the panoramic camera, current visual odometry research focuses mainly on ordinary cameras [15], while research on monocular and binocular stereo panoramic visual odometers is scarce. With the rapid development of unmanned vehicle technology, the narrow viewing angle of ordinary vision cameras means there are fewer road signs in a single frame of image, resulting in poor road sign tracking and positioning capabilities, which severely restricts the development of visual odometry [16]. Panoramic vision, by contrast, has a natural 360° large field of view and integrated imaging, which makes it possible to obtain a large amount of rich and complete environmental information at once and greatly improves road sign tracking and positioning capabilities. Therefore, the binocular stereo panoramic visual odometer [17] will be the main technology that UGVs rely on for precise positioning in the future.

In visual odometry based on the feature point method, the corner points in the image are often used as feature points because of their good feature invariance. In a panoramic vision image, however, corner points are often distorted and deformed and lose their geometric invariance. Therefore, the feature extraction and matching algorithms used in the feature point method struggle to extract and match correct feature point pairs in panoramic vision images. The direct method, which does not rely on feature matching, adapts much better to panoramic images.

In recent years, many single-sensor and multi-sensor fusion visual odometers have emerged. Cai et al. [18] used RGB-D cameras to study direct sparse algorithms. Shin et al. [19] combined lidar and a vision camera to complete direct-method SLAM. Zhang and Zhao [20] combined vision with IMU measurement data to construct a monocular visual odometry method. Ban et al. [21] used deep learning to complete visual odometry. Lu et al. [22] and Lange et al. [23] completed visual odometry by extracting line features from the image. Luo et al. [24] constructed visual odometry by extracting point features and line features from the image. Xu et al. [25] constructed visual odometry with a method that separates static and dynamic features. Meng and Sun [26] proposed a visual odometer, but it was only run on KITTI data and was not tested on a real unmanned vehicle, which limits its perception of the environment.

The visual odometer offers high cost-performance and is the single-sensor odometer that scholars have studied most; a great deal of purely visual odometry work has been done. Chen et al. [27,28] and Won et al. [29] used multi-camera stitching for 3D perception and reconstruction, but stitching multiple cameras produces a huge amount of data, which greatly affects the real-time performance of the system. Förster et al. [30] proposed a semi-direct visual odometry, but the algorithm is only suitable for monocular vision and multi-camera stitched omnidirectional vision models. J. Engel, V. Koltun and D. Cremers [31] studied DSO based on ordinary vision, but this DSO is only suitable for ordinary monocular vision, not for binocular ordinary or panoramic vision. Wang et al. [32] proposed a visual tracking method based on sparse representation, but this research is only applicable to ordinary visual tracking. Matsuki et al. [33] proposed a monocular visual odometry based on a fisheye camera, but this method is only suitable for monocular fisheye cameras.

Previous visual odometers have mainly focused on ordinary cameras and multi-camera stitching, and multi-camera stitching generates a relatively large amount of data from the multiple cameras, which greatly affects the real-time positioning of unmanned vehicles. As a result, there is little research on catadioptric panoramic visual odometry for unmanned vehicles (unless otherwise specified below, "panoramic visual odometer" refers to the catadioptric panoramic visual odometer). Building on ordinary monocular vision DSO, this paper completes a binocular stereo panoramic vision DSO for real-time positioning of unmanned vehicles.

    2. Methods

The most classic direct-method visual odometer is the DSO algorithm proposed by Jakob Engel et al. [31] of the Computer Vision Group of the Technical University of Munich in 2016. In terms of classification, DSO is a sparse direct method with nonlinear optimization. This paper adopts DSO as the calculation method of the visual odometer and improves the traditional DSO: by adding the model of the panoramic vision system, monocular ordinary vision DSO is first extended to monocular panoramic vision DSO, and then to binocular stereoscopic panoramic vision DSO.

The FullSystem part contains most of the core algorithms of DSO. Within it, the CoarseTracker is responsible for matching against the latest keyframe and calculating the residuals of the current points after matching. The CoarseInitializer is responsible for initializing the whole system, including the initialization of residual terms, the calculation of the Hessian matrix blocks, the derivation of the photometric error, and the judgment of whether initialization has succeeded. The Hessian block handles the creation and release of the FrameHessian structure; the coordinate transformation matrix and the photometric affine function of points are calculated from two FrameHessian structures. ImmaturePoint is responsible for maintaining map points and determining inverse depth during initialization; during motion, immature map points are tracked and converted into mature map points. PixelSelector is responsible for pixel selection. Residual mainly covers the calculation of the residual term between two frames and the Jacobian J. The Optimization Backend is the back-end optimization part, and AccumulateSCHessian computes the Hessian matrix. Utils is mainly used to read data sets and correct image distortion. IOWrapper is responsible for displaying and outputting results.
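
The module responsibilities above can be summarized with a simplified skeleton. This is a sketch in Python for readability (the real DSO codebase is C++), and the member names merely mirror the modules the text lists.

```python
class CoarseTracker:
    """Matches the newest frame against the latest keyframe and
    computes the residuals of the current points after matching."""
    def track(self, frame, keyframe): ...

class CoarseInitializer:
    """Whole-system initialization: residual terms, Hessian blocks,
    photometric-error derivatives, and the success check."""
    def try_initialize(self, frame): ...

class FullSystem:
    """Top-level pipeline: initialize first, then track each new frame
    and promote immature points to mature map points as they converge."""
    def __init__(self):
        self.tracker = CoarseTracker()
        self.initializer = CoarseInitializer()
        self.keyframes = []        # sliding optimization window
        self.immature_points = []  # inverse depth not yet converged

    def add_frame(self, frame):
        if not self.keyframes:
            kf = self.initializer.try_initialize(frame)
            if kf is not None:
                self.keyframes.append(kf)
        else:
            self.tracker.track(frame, self.keyframes[-1])
```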

The code framework and algorithm flow of monocular DSO are shown in Fig. 1 and Algorithm 1. The initialization flow of Algorithm 1 is explained separately in Algorithm 2.

In monocular DSO, the energy function of a point $\mathbf{p}$, hosted in frame $i$ and observed in frame $j$, is as follows:

$$E_{\mathbf{p}j}=\sum_{\mathbf{p}\in \mathcal{N}_{\mathbf{p}}} w_{\mathbf{p}}\left\|\left(I_j[\mathbf{p}']-b_j\right)-\frac{t_j e^{a_j}}{t_i e^{a_i}}\left(I_i[\mathbf{p}]-b_i\right)\right\|_{\gamma} \tag{1}$$

where $I_i$ and $I_j$ are the images after photometric calibration [34]; $\mathbf{p}'$ is the point $\mathbf{p}$ projected into frame $j$ according to its inverse depth $d_{\mathbf{p}}$; $\|\cdot\|_{\gamma}$ is the Huber loss function, which prevents the residual energy from growing too fast with the photometric error; $\mathcal{N}_{\mathbf{p}}$ is the neighborhood pattern around $\mathbf{p}$ over which the weighted sum of squared errors is computed (see Fig. 2 for its layout); $t_i$ and $t_j$ are the exposure times of $I_i$ and $I_j$; $a_i$, $b_i$, $a_j$, $b_j$ are the affine brightness parameters of the two frames; and $w_{\mathbf{p}}$ is the residual weight, used to weight the residual terms constructed from pixels at different positions. With $\nabla I_i(\mathbf{p})$ the pixel gradient and $c$ a constant, the weight is given by Eq. (2):

$$w_{\mathbf{p}}=\frac{c^2}{c^2+\left\|\nabla I_i(\mathbf{p})\right\|^2} \tag{2}$$
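
As a concrete reading of Eqs. (1) and (2), the following sketch evaluates the energy of one host point. The `warp` callback standing in for the projection of Eq. (5), and the threshold and constant values, are placeholders we introduce here.

```python
import numpy as np

def huber(r, k=9.0):
    """Huber norm ||.||_gamma of Eq. (1); k is a placeholder threshold."""
    a = abs(r)
    return 0.5 * r * r if a <= k else k * (a - 0.5 * k)

def point_energy(I_i, I_j, pattern, warp, t_i, t_j, a_i, b_i, a_j, b_j,
                 grad_i, c=50.0):
    """Weighted Huber sum over the neighborhood pattern N_p (Eq. (1))
    with the gradient-dependent weight of Eq. (2)."""
    ratio = (t_j * np.exp(a_j)) / (t_i * np.exp(a_i))  # exposure/affine factor
    E = 0.0
    for (u, v) in pattern:
        u2, v2 = warp(u, v)                            # p -> p' via Eq. (5)
        r = (I_j[v2, u2] - b_j) - ratio * (I_i[v, u] - b_i)
        w = c ** 2 / (c ** 2 + grad_i[v, u] ** 2)      # Eq. (2)
        E += w * huber(r)
    return E
```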

    Fig.1. Monocular DSO code framework.

The photometric calibration model is

$$I_i(\mathbf{x})=G\!\left(t_i\,V(\mathbf{x})\,B_i(\mathbf{x})\right) \tag{3}$$

where $B_i$ and $I_i$ are the radiant intensity and pixel intensity of the $i$-th frame image, and $t_i$ is the exposure time of the $i$-th frame. The model parameters $G:\mathbb{R}\to[0,255]$ (camera response) and $V:\Omega\to[0,1]$ (vignette) of the photometric calibration can be calibrated by Eq. (4).
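
At run time, the calibrated model is inverted to obtain the photometrically corrected images used in Eq. (1). A sketch of that inversion, where the lookup-table representation of $G^{-1}$ is our assumption:

```python
import numpy as np

def photometric_correction(I, G_inv, V, t):
    """Undo Eq. (3): recover the radiance-proportional image B(x) from
    the raw 8-bit image I, given the calibrated inverse response G_inv
    (a 256-entry lookup table) and the vignette map V; t is exposure."""
    return G_inv[I.astype(np.uint8)] / (t * V)
```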

In Eq. (1), $\mathbf{p}$ and $\mathbf{p}'$ are corresponding points between adjacent frames $i$ and $j$, related as follows:

$$\mathbf{p}'=\Pi_c\!\left(\mathbf{R}\,\Pi_c^{-1}\!\left(\mathbf{p},d_{\mathbf{p}}\right)+\mathbf{t}\right) \tag{5}$$

$$\begin{bmatrix}\mathbf{R}&\mathbf{t}\\ \mathbf{0}&1\end{bmatrix}=\mathbf{T}_j\,\mathbf{T}_i^{-1} \tag{6}$$

where $\Pi_c$ and $\Pi_c^{-1}$ are the camera projection and back-projection, and $\mathbf{T}_i$, $\mathbf{T}_j$ are the poses of frames $i$ and $j$.

Fig. 2. $\mathcal{N}_{\mathbf{p}}$ pattern diagram.

The overall photometric energy sums these terms over the whole window:

$$E_{\mathrm{photo}}=\sum_{i\in \mathcal{F}}\;\sum_{\mathbf{p}\in P_i}\;\sum_{j\in \mathrm{obs}(\mathbf{p})}E_{\mathbf{p}j} \tag{7}$$

where $\mathcal{F}$ is the set of all images, $P_i$ is the set of points hosted in frame $i$, and $\mathrm{obs}(\mathbf{p})$ is the set of images in which point $\mathbf{p}$ is observed. In monocular DSO, multiple energy functions of the form of Eq. (7) constitute a window. Generally, a fixed number of keyframes (6-7) is maintained in the window, and the back end optimizes the keyframes within it. As the algorithm runs, keyframes are continuously added to and removed from the window; the removal is known as marginalization. Fig. 3 is a factor graph between the keyframes in the window and the observed map points.

In Fig. 3 (map optimization), Pt1: $d_1$ in the blue ellipse represents the inverse depth of the first point, and so on. E2 in the orange rectangle represents the residual term formed by projecting a map point hosted in the first image into the second image, the following numbers following by analogy. KF1 in the purple rectangle represents the first keyframe, where $T_1$ is the pose transformation matrix of the first keyframe and $a_1$ and $b_1$ are its affine brightness parameters, the following numbers again following by analogy. The example shows four keyframes and four points: one hosted in KF1, two in KF2, and one in KF4. Each energy term (defined in Eq. (1)) depends on the point's host frame (blue), the frame the point is observed in (red), and the point's inverse depth (black). Further, all terms depend on the global camera intrinsic vector c, which is not shown.

    Fig. 3. Monocular DSO factor diagram.

After the global energy function is obtained, the Gauss-Newton method is used for nonlinear optimization. In the optimization process, $H=J^{\top}WJ$ is used as the approximation of the second-order Hessian matrix. Evaluating the global energy function therefore requires computing the Jacobian of the energy function.
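
A minimal sketch of the Gauss-Newton update described here, assuming the stacked residual vector r, its Jacobian J, and the weight matrix W have already been assembled:

```python
import numpy as np

def gauss_newton_step(J, W, r):
    """One Gauss-Newton iteration: H = J^T W J approximates the
    second-order Hessian, b = -J^T W r, and H dx = b gives the update."""
    H = J.T @ W @ J
    b = -(J.T @ W @ r)
    return np.linalg.solve(H, b)  # state increment dx
```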

Binocular DSO is an extension of monocular DSO. The main difference lies in initialization: binocular DSO does not need the random depth initialization of monocular DSO, because the initial depth value is obtained directly by binocular stereo matching. When a point in a keyframe is added to the window for optimization, its inverse depth must be updated continuously, so a good initial depth becomes increasingly important. In monocular DSO, the randomness of the initial depth value inevitably leads to a very large variance [35,36]. In binocular DSO, a better depth estimate is obtained directly, which improves tracking accuracy. At the same time, monocular DSO cannot estimate the scale of the whole system, whereas binocular DSO recovers the absolute scale through the fixed baseline length, which also makes the visual odometer meaningful.
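
The stereo initialization the text describes reduces to the standard rectified-stereo relation (a sketch; the variable names are ours):

```python
def stereo_initial_depth(f_px, baseline_m, disparity_px):
    """Initial depth from rectified stereo matching: depth = f * B / d.
    This replaces monocular DSO's random inverse-depth initialization
    and fixes the absolute scale through the known baseline B."""
    if disparity_px <= 0.0:
        return None  # no valid correspondence
    return f_px * baseline_m / disparity_px

# e.g. a 0.3 m baseline, 800 px focal length, 16 px disparity -> 15 m
```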

Since binocular DSO is based on monocular DSO, its basic principle and formula derivation are essentially the same. The difference is that binocular stereo matching is added, so the whole energy function gains a binocular matching error term relative to monocular DSO.

Fig. 4 is the factor graph of binocular DSO. The left boxes represent the 4 keyframes, the blue ellipses represent the 5 observed map points, and the orange rectangles represent the individual energy functions, each associated with 1 map point and 2 keyframes. The blue lines represent the constraints from the keyframes, the red lines represent the constraints between the left and right cameras, and the gray lines represent the constraints from the map points observed in the corresponding keyframes.

The binocular stereo panoramic DSO only needs to extend the perspective camera model of the binocular DSO to the panoramic model. Photometric calibration has nothing to do with the camera model, only with the position of the pixels in the image, so photometric calibration is performed directly on the panoramic image, and the corrected image is passed to the back end.

In the binocular stereo panoramic DSO, the energy function of each frame keeps the form of Eq. (1), with the point correspondence given by the panoramic projection:

$$\mathbf{p}'=\Lambda\!\left(\mathbf{R}\,\Omega\!\left(\mathbf{p},d_{\mathbf{p}}\right)+\mathbf{t}\right)$$

where the projection transformation of the points $\mathbf{p}$ and $\mathbf{p}'$ is that of the panoramic vision system: $\Omega(\cdot)$ is the forward transformation from the vision camera to the map point, $\Lambda(\cdot)$ is the reverse transformation from the map point to the vision camera, and $\mathbf{R}$ and $\mathbf{t}$ are the rotation matrix and translation vector from frame $i$ to frame $j$, respectively.
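
The paper does not spell out the concrete forms of $\Omega(\cdot)$ and $\Lambda(\cdot)$. As an illustration, here is how they look under the unified catadioptric (unit-sphere) model, a common choice for mirror-based panoramic cameras; the mirror parameter `xi` and all names are assumptions of this sketch.

```python
import numpy as np

def Omega(uv, inv_depth, xi, K):
    """Pixel -> map point (forward transform): back-project onto the
    unit sphere of the unified model, then scale by depth 1/inv_depth."""
    mx, my, _ = np.linalg.solve(K, np.array([uv[0], uv[1], 1.0]))
    r2 = mx * mx + my * my
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    Xs = np.array([eta * mx, eta * my, eta - xi])  # unit-sphere point
    return Xs / inv_depth

def Lambda(X, xi, K):
    """Map point -> pixel (reverse transform): project via the unit
    sphere, offset by the mirror parameter xi, then apply K."""
    Xs = X / np.linalg.norm(X)
    m = np.array([Xs[0] / (Xs[2] + xi), Xs[1] / (Xs[2] + xi), 1.0])
    return (K @ m)[:2]
```

With these two maps, the point correspondence above is simply `Lambda(R @ Omega(p, d_p, xi, K) + t, xi, K)`.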

The overall energy function of the binocular stereo panoramic DSO follows from the above equations, as shown in Eq. (14), and the final pose transformation is obtained by optimizing it [40]. Over time, the algorithm jointly optimizes a limited number (5-7) of key frames in the sliding window and finally outputs the optimized trajectory.

    Fig. 4. Binocular DSO factor graph.

$$E_{\mathrm{total}}=E_{\mathrm{DSO}}+\lambda\,E_{\mathrm{stereo}} \tag{14}$$

where $\lambda$ is the balance coefficient, used to balance the energy function of the binocular DSO against the energy of the binocular stereo matching error residual term.

    3. Experiment verification

    3.1. Experimental setup

(1) Experimental location: Beijing Institute of Technology experimental area (Nanyuan).

(2) Experimental equipment: UGV perception platform, binocular stereo panoramic vision system, binocular stereo vision system, polarized vision system, ZED vision system, integrated inertial navigation system, LIDAR, industrial computer, and other equipment.

(3) Experimental evaluation index: whether the binocular stereoscopic panoramic vision system achieves higher robustness and positioning accuracy than the binocular stereoscopic vision system in terms of perception and positioning.

(4) Experimental method: Let the UGV drive on the preset route while the binocular stereo vision system, the binocular stereo panoramic vision system, the integrated inertial navigation system, and the LIDAR collect and store data simultaneously. Replay all experimental data offline at the original rate, run the algorithms to be tested on the data, and calculate the UGV operating trajectory; then compare each trajectory with the high-precision trajectory generated by the fusion of the integrated inertial navigation system and the LIDAR, analyze the comparison results, and draw experimental conclusions.

During the experiment, the UGV drove along the preset route in Fig. 6 while collecting and storing data. After completing each driving route, it returned to the starting point and repeated the experiment, for a total of 5 consecutive runs. The driving speeds were about 10 km/h, 10 km/h, 15 km/h, 25 km/h, and 30 km/h.

The experimental equipment is configured as follows. The integrated inertial navigation system includes an OXTS unit and a BEIDOU satellite receiver; the OXTS is set to the external-receiver state, the positioning signal of the BEIDOU receiver is used as the external positioning data source, and the GPS antenna placed on the roof is used for heading estimation. Combined with the accelerometer and gyroscope of the OXTS built-in IMU, a 24th-order EKF is used for fusion estimation, and all navigation information, including the ground unmanned platform odometry and longitude/latitude, is output at a rate of 100 Hz. In the integrated inertial navigation system, the time synchronization signal of the sensors comes from the satellite positioning timestamp of the BEIDOU satellite receiver. The LIDAR is a Velodyne 64-line sensor with a data update rate of 10 Hz and about 3 million laser points sampled per second; the effective measurement distance is set to 50 m to prevent distant data noise from affecting the algorithm. At the same time, the positioning information of the BEIDOU satellite receiver is connected to the LIDAR, and the LIDAR data packets are synchronized with the satellite positioning timestamp.

The binocular stereo vision system uses two AD130GE cameras. The cameras were pre-calibrated, automatic exposure was set, and the frame rate is 30 Hz; the two cameras use external triggering for frame synchronization, and the synchronization timestamp again uses the timestamp of the BEIDOU satellite receiver. The binocular stereo panoramic vision system uses two GX2750 cameras with automatic exposure and a frame rate of 30 Hz, driven by the same external trigger signal as the AD130GE cameras, ensuring that the images from the binocular stereo vision system and the binocular stereo panoramic vision system are all triggered synchronously.

    Fig. 5. UGV experimental vehicle.

    Fig. 6. The preset route of the experiment.

The experimental data of the 5 consecutive recordings are shown in Fig. 7. /oxts_gnss is the inertial navigation data, including navigation information such as latitude and longitude; /left/image/compressed and /right/image/compressed are the image data of the binocular stereo vision system; /panoramic_camera/left/image_raw and /panoramic_camera/right/image_raw are the image data of the binocular stereo panoramic vision system; /velodyne_points is the point cloud data of the LIDAR.

After data acquisition, the recorded data is played back offline. The binocular DSO algorithm is run on the images of the binocular stereo vision system, and the trajectory output by the algorithm is recorded. For the images of the binocular stereo panoramic system, the binocular DSO algorithm extended with the panoramic vision system model is run, and the output trajectory is recorded.

The HDL-Graph-SLAM algorithm is used to process the data of the LIDAR and the integrated inertial navigation system. Considering the difficulty of processing the experimental data and the error of the output navigation data, a Ground Truth [41] track is formed from the five experimental tracks after algorithm optimization, as shown in Fig. 8 and Fig. 9.

The output Ground Truth trajectory is then converted to latitude and longitude coordinates, saved as a CSV file, and imported into Google Earth, as shown in Fig. 10. It is basically consistent with the preset experimental route, which shows that the Ground Truth trajectory is reliable.
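
A sketch of that conversion step, using a flat-earth approximation around a reference point (the reference coordinates and file name are placeholders):

```python
import csv
import math

R_EARTH = 6378137.0  # WGS-84 equatorial radius, meters

def enu_to_latlon(e, n, lat0, lon0):
    """Convert local east/north offsets (m) to latitude/longitude around
    the reference (lat0, lon0); adequate for a route of a few hundred meters."""
    lat = lat0 + math.degrees(n / R_EARTH)
    lon = lon0 + math.degrees(e / (R_EARTH * math.cos(math.radians(lat0))))
    return lat, lon

def export_trajectory_csv(traj_enu, lat0, lon0, path="ground_truth.csv"):
    """Write the converted trajectory to a CSV importable by Google Earth."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["latitude", "longitude"])
        for e, n in traj_enu:
            writer.writerow(enu_to_latlon(e, n, lat0, lon0))
```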

Running the binocular DSO on the experimental data of the ordinary binocular stereo vision system, the five trajectories output by the algorithm are drawn together with the Ground Truth track (Fig. 11 and Fig. 12).

    Fig. 7. Data of 5 consecutive experiments.

    Fig. 8. Ground Truth trajectory diagram.

    Fig. 9. Ground Truth XYZ direction trajectory diagram.

    Fig.10. Ground Truth satellite map.

In Fig. 11 and Fig. 12, oxts-gt is the Ground Truth, represented by a gray dashed line. stereo-10 and stereo-10a represent the first and second experiments at a speed of 10 km/h, and so on; stereo-15 is the 15 km/h run. For the binocular stereo vision system, a large rotation error is generated at the turns. In the lower-speed test routes of 10 km/h and 15 km/h, a large rotation error occurs at the first turn, causing a deviation in the subsequent route, but the basic trajectory shape is consistent with the Ground Truth, and the consistency of the three experiments is good. After the speed reached 25 km/h, because the turning speed was too fast, a large rotation error was generated at each turn; however, in the straight driving sections the overall trajectory did not deviate because of the speed. At 30 km/h, tracking is quickly lost because the speed is too high, and the computed driving trajectory of the UGV is completely wrong.

    3.2. Experimental results and analysis

Experiments show that for ordinary binocular vision, the highly dynamic scene of turning easily makes the difference between consecutive frames too large. As shown in Fig. 13, while the UGV is turning, just as it enters the intersection the tree in the blue box moves to the center of the image and disappears as the UGV is about to leave the intersection, even though the UGV translates only a small distance during the turn. Such image sequences with excessive rotational change have a great impact on DSO and will produce large errors or even tracking loss.

Similarly, it can be seen from the trajectory diagrams in the XYZ directions that among the five experiments only the two lower-speed runs are close to the Ground Truth. The errors of the third and fourth runs are both large; in the fifth run, tracking is lost midway and positioning fails.

Running the binocular stereo panoramic DSO algorithm on the binocular stereo panoramic image data, the 5 experimental trajectories obtained by the algorithm are drawn together with the Ground Truth trajectory (Fig. 14 and Fig. 15):

The five solid lines in Fig. 14 are the UGV trajectories obtained by the algorithm in the five experiments, and the dashed line is the Ground Truth trajectory. The yellow and purple trajectories were obtained by the visual odometry when the vehicle traveled at 10 km/h; among the five trajectories, they are the closest to the Ground Truth. Red, blue, and green are the trajectories at 15 km/h, 25 km/h, and 30 km/h, respectively. The red 15 km/h track and the blue 25 km/h track basically coincide, but they deviate more from the Ground Truth than the lower-speed 10 km/h trajectories. In addition, the 25 km/h trajectory produced data jitter at one of the turns due to excessive speed, resulting in a large deviation. In the 30 km/h experiment, the trajectory deviates considerably in the right loop and in the x direction, but the overall tracking of the map points is not lost and the calculation remains relatively stable. Compared with the fifth experiment of ordinary binocular vision, the binocular stereo panoramic vision system shows better performance.

    Fig.11. The first to fifth trajectory diagrams of the binocular stereo vision system.

    Fig.12. The 1st to 5th trajectory diagram of the binocular stereo camera-XYZ direction.

    Fig.13. Normal camera image during turning.

    Fig. 14. The 1st to 5th trajectory diagram of the binocular stereo panoramic vision system.

    Fig. 15. The 1st to 5th trajectory diagram of the binocular stereo panoramic vision system-XYZ direction.

Obviously, comparison with the binocular stereo vision system shows that the experimental tracks of the binocular stereo panoramic vision system greatly improve the accuracy of translation and rotation. In particular, the quick-turning problem that the binocular stereo camera cannot solve does not exist for the panoramic vision system, as shown in Fig. 16. The panoramic vision system can observe most of the surrounding environment points while the UGV is turning, so it can always track the map points in the image, and tracking accuracy will not decline or fail because the speed is too fast and the image information between frames changes too much.

    The absolute error analysis of the trajectory of the binocular stereo vision system and the binocular stereo panoramic vision system is shown in Table 4 and Table 5.

    Fig.16. Binocular stereo panoramic vision system image during turning.

    Table 2 Algorithm flow chart of monocular DSO.

Table 3 Initialization flow chart of monocular DSO.

Table 4 and Table 5 record the absolute errors between the Ground Truth and the trajectories of the five experiments of the binocular stereo vision system and the binocular stereo panoramic vision system. The maximum error of the binocular stereo vision system varies greatly with speed, from about 52 m to 238 m, because of the influence of speed. For the binocular stereo panoramic vision system, the maximum error is kept between 10 and 30 m. In terms of average error, the average error of the binocular stereo vision system at every speed is greater than that of the binocular stereo panoramic vision system. The minimum error value is accidental: a crossing with the Ground Truth also produces a very small minimum error, so it cannot be used as a basis for analysis.

The error data is analyzed in further detail: the root mean square error (RMSE), the sum of squared errors (SSE), and the standard deviation (STD) [42] of each experimental trajectory are calculated and analyzed from a statistical perspective. RMSE takes the squared difference between each trajectory point and the Ground Truth over the entire trajectory, averages them, and then takes the square root; it reflects the deviation between the data points and the true value. In the binocular stereo vision experiments, the RMSE increases with speed, and the deviation between the generated trajectory and the Ground Truth grows larger and larger. Across the five experiments of the binocular stereo panoramic vision system, the RMSE changes little as speed increases, and its values are also lower than the corresponding values of the binocular stereo vision system, indicating that binocular stereo panoramic vision can still maintain a relatively stable trajectory output as speed increases. SSE is the sum of squared errors between each trajectory point and the Ground Truth over the entire trajectory; it reflects the overall deviation of the experimental trajectory from the Ground Truth. The SSE of the binocular stereo panoramic vision system is one to two orders of magnitude lower than that of the binocular stereo vision system. STD is the standard deviation, reflecting how far the data deviate from their own mean. Compared with the RMSE, the STD of the binocular stereo vision system is relatively stable, while that of the binocular stereo panoramic vision system is very stable and much smaller. This shows that for both systems, although the magnitudes of the errors differ considerably, the stability of the data is still guaranteed, indicating that the algorithm produces a relatively smooth trajectory output.
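
The three statistics can be reproduced directly from the associated trajectory points (a sketch; alignment and association with the Ground Truth are assumed to be done already):

```python
import numpy as np

def trajectory_statistics(traj, gt):
    """traj, gt: (N, 3) arrays of associated positions. Returns the
    RMSE (root of the mean squared error), SSE (sum of squared errors),
    and STD (spread of the errors around their own mean)."""
    err = np.linalg.norm(traj - gt, axis=1)  # absolute error per point
    rmse = np.sqrt(np.mean(err ** 2))
    sse = np.sum(err ** 2)
    std = np.std(err)
    return rmse, sse, std
```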

    Table 4 The absolute error of the trajectory of the binocular stereo vision system.

    Table 5 The absolute error of the trajectory of the binocular stereo panoramic vision system.

Based on the improved DSO algorithm, the binocular stereo vision system and the binocular stereo panoramic vision system were tested on actual roads. The test results show that, for the same algorithm, the vision system with a large field of view performs better than the one with a narrow field of view.

    4. Conclusions

This paper proposes a binocular stereo panoramic vision positioning algorithm based on extended DSO. The results show that the extended DSO algorithm can effectively avoid the resource consumption of high-resolution image feature extraction, as well as the difficulty of feature point extraction caused by large image distortion. The experimental results show that the binocular stereo panoramic vision system based on the extended DSO algorithm has higher positioning accuracy and stronger robustness than the ordinary binocular stereo vision system. To sum up, the binocular stereo panoramic vision system and the extended DSO positioning algorithm proposed in this paper have good application prospects and explore a new way for the SLAM direction of unmanned-vehicle panoramic vision.

This paper focuses on the panoramic vision system and the visual odometer, analyzing and extending the existing DSO algorithm so that it can adapt to the binocular stereo panoramic vision system. To verify the practical effect of the binocular stereoscopic vision system in unmanned-vehicle environment perception, experimental data were collected under the same outdoor conditions using an ordinary binocular stereo vision system and a binocular stereo panoramic vision system simultaneously in a real environment. The extended DSO algorithm was then used to calculate the vehicle trajectory, which was compared with the Ground Truth trajectory obtained from the lidar and the integrated navigation system. The results show that the binocular stereo panoramic vision system based on the extended DSO algorithm has higher positioning accuracy and stronger robustness than the ordinary binocular stereo vision system. In summary, the extended-DSO binocular panoramic vision positioning algorithm proposed in this paper has good application prospects and explores a new way for unmanned-vehicle panoramic vision SLAM.

    5. Future work and challenges

Future work will address the shortcomings of DSO: no loop detection, no map reuse, no relocation after tracking loss, and slow initialization. Experiments will also be carried out in complex, feature-rich scenes, or in scenes where highly recognizable artificial markers are placed, to achieve better experimental results.

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

The authors would like to thank Xu Chen for assistance with the SLAM algorithm. The authors would like to acknowledge the National Natural Science Foundation of China (Grant No. 61773059) and the National Defense Technology Foundation Program of China (Grant No. 20230028) for providing funds for conducting the experiments.
