
    Perpendicular-Cutdepth: Perpendicular Direction Depth Cutting Data Augmentation Method

    Computers, Materials & Continua, 2024, Issue 4

    Le Zou, Linsong Hu, Yifan Wang, Zhize Wu and Xiaofeng Wang*

    1Anhui Provincial Engineering Laboratory of Big Data Technology Application for Urban Infrastructure, School of Artificial Intelligence and Big Data, Hefei University, Hefei, 230601, China

    2Institute of Applied Optimization, School of Artificial Intelligence and Big Data, Hefei University, Hefei, 230601, China

    ABSTRACT Depth estimation is an important task in computer vision. Collecting data at scale for monocular depth estimation is challenging, as this task requires simultaneously capturing RGB images and depth information. Therefore, data augmentation is crucial for this task. Existing data augmentation methods often employ pixel-wise transformations, which may inadvertently disrupt edge features. In this paper, we propose a data augmentation method for monocular depth estimation, which we refer to as the Perpendicular-Cutdepth method. This method involves cutting real-world depth maps along perpendicular directions and pasting them onto input images, thereby diversifying the data without compromising edge features. To validate the effectiveness of the algorithm, we compared it against current mainstream data augmentation algorithms on an existing convolutional neural network (CNN). Additionally, to verify the algorithm's applicability to Transformer networks, we designed a Transformer-based encoder-decoder network to assess the generalization of our proposed algorithm. Experimental results demonstrate that, in the field of monocular depth estimation, our proposed Perpendicular-Cutdepth outperforms traditional data augmentation methods. On the indoor NYU dataset, our method increases accuracy from 0.900 to 0.907 and reduces the error rate from 0.357 to 0.351. On the outdoor KITTI dataset, our method improves accuracy from 0.9638 to 0.9642 and decreases the error rate from 0.060 to 0.0598.

    KEYWORDS Perpendicular; depth estimation; data augmentation

    1 Introduction

    Computer vision, as a pivotal branch of modern technology, spans a diverse array of applications [1–5]. Nevertheless, as tasks grow more complex, training deep learning models faces numerous challenges, most prominently limited data, overfitting, and the need for models to generalize across diverse scenarios. In this context, data augmentation emerges as a pivotal strategy to counter these challenges. By introducing diversity, data augmentation exposes the model to a broader spectrum of scenes and variations during training, thereby augmenting its ability to generalize to unseen data. By learning representations adaptable to various scenarios, the model becomes more adept at accommodating novel, real-world inputs. Simultaneously, data augmentation mitigates the risk of overfitting: because the model experiences a more diverse set of inputs during training, it relies less on specific data distributions. This enhances the model's resilience when confronted with unknown data, ensuring robust performance. Moreover, data augmentation can introduce various transformations such as rotation, scaling, flipping, and optical transformations, imparting greater robustness to the neural network model. This robustness means the model is more resistant to subtle changes and noise in the input, contributing to more reliable execution of tasks in the real world.

    These data augmentation methods have been widely applied in research on advanced tasks such as anomaly detection [1], personalized diagnosis [3–7], and simulation enlargement and transfer learning combined with fault-sample augmentation [8]. In the field of anomaly detection and diagnosis, real fault samples can be challenging to obtain or limited in availability. Data augmentation can generate more diverse fault samples through rotations, scalings, and other transformations, aiding the model in learning and understanding different types of faults. For personalized diagnosis, differences exist between individuals, necessitating more samples to better adapt to personalized requirements. Data augmentation can generate additional samples for personalized scenarios, helping the model adapt to individual differences and improving the accuracy of personalized diagnosis.

    However, there has been relatively little research on low-level tasks, particularly those involving pixel-wise transformations, such as monocular depth estimation. Effective data augmentation methods for these lower-level tasks have not received sufficient attention or in-depth investigation. The challenges of data augmentation [9–13] in pixel-wise tasks are more intricate, given the need to maintain precise pixel-level label information.

    Monocular depth estimation is a critical research focus in computer vision, primarily aiming to predict the depth of objects in a scene from a single image. It finds extensive applications in areas such as 3D reconstruction, virtual reality, and autonomous driving. The input for monocular depth estimation typically consists of a set of images along with their corresponding depth maps. Depth maps are commonly acquired with depth cameras and laser scanners. However, obtaining accurate depth information can be challenging in certain scenarios, such as underwater environments or objects with transparent, glass-like properties. In such cases, data augmentation becomes an indispensable step in monocular depth estimation tasks. Currently, widely used data augmentation methods include random rotation [14], random cropping [15], and optical transformations [16] (color and brightness variations), among others. Random rotation rotates the image by a certain angle to simulate different capture perspectives. Random cropping selects a region of the image as input, mimicking different viewpoints. Optical transformations alter the brightness and contrast of the input image, enhancing data diversity. These augmentation techniques contribute to the robustness and generalization ability of monocular depth estimation models, enabling them to handle diverse and challenging real-world scenarios effectively.

    Although these methods improve the generalization ability of neural networks, they mainly alter the global environment rather than the geometric structure within the scene. Many studies have attempted to modify the geometric structure within the scene to encourage the network to learn more complex scenes and thereby improve model accuracy [14–16]. Ishii et al. [15] observed similarities in edge positions between depth and RGB images, especially in low-level features. They introduced the Cutdepth algorithm for monocular depth estimation networks, aiming to normalize the images using the provided depth information and reduce the gap between RGB images and depth maps in the latent space. This not only increases visual diversity but also restricts excessive geometric changes within the scene, causing the network to focus more on high-frequency regions. Dijk et al. [17] investigated how neural networks perceive depth from single images, finding that they primarily use the vertical position of objects in the image. In response, Kim et al. [16] argued that the vertical viewpoint in a single image is more important than the horizontal viewpoint, and proposed a variant of Cutdepth called Vertical-Cutdepth. This algorithm performs Cutdepth cuts along the vertical direction of the input images, encouraging the network to capture vertical long-range correlations. However, Vertical-Cutdepth overlooks the importance of both horizontal and vertical correlations for depth information: in human vision, the positional structure of an object is determined by the intersection of the horizontal and vertical directions. A vertical correlation alone can determine the height of an object, but not its width.

    To encourage the network to focus on the correlation between the horizontal and vertical directions, we propose "Perpendicular-Cutdepth". This method aims to simultaneously reduce the horizontal and vertical distances between RGB images and their corresponding depth maps in the latent space, enhancing the network's ability to learn from both directions within the scene. Perpendicular-Cutdepth randomly crops horizontal and vertical regions from real depth maps and replaces the corresponding areas in the RGB images, effectively promoting the learning of both horizontal and vertical correlations in the scene. We conducted extensive quantitative and qualitative experiments on publicly available datasets, including the indoor dataset NYU [18] and the outdoor dataset KITTI [19], to validate the effectiveness of the proposed Perpendicular-Cutdepth. We provide a detailed introduction to our method in Section 3.

    The contributions of our work are as follows:

    • We compared the impact of data augmentation methods with different geometric structures on the network.

    • We propose a new data augmentation method to improve model performance.

    • Compared to previous data augmentation methods, our proposed method improves depth estimation performance in both indoor and outdoor scenes.

    2 Related Work

    2.1 Monocular Depth Estimation

    Depth estimation, a critical problem in computer vision, has demonstrated vast potential in various applications. With the decreasing cost and widespread availability of monocular cameras, researchers have increasingly turned their attention to monocular depth estimation methods for their simplicity and practicality. Traditional geometry-based methods rely on texture, corner, and edge information in images to compute depth. These approaches often require additional sensors or strict scene assumptions, limiting their applicability in complex environments. In recent years, with the advancement of deep learning, learning-based methods have made significant strides in this field. In monocular depth estimation, neural network-based approaches have proven capable of producing satisfactory depth estimates in many scenarios [20–23]. Common architectures for depth estimation networks include convolutional neural networks (CNNs) [24–27] and Transformers [28–31]. For instance, Lee et al. [20] introduced the concept of mask3D to predict local normals for obtaining depth information, encouraging the network to learn structural information within the scene. Li et al. [21] convert a 360° image into perspective patches with low distortion, obtain patch-wise predictions using a CNN, and finally merge them into the final prediction, addressing the difficulty CNN structures have with spherical distortions. Wang et al. [23] proposed Probability and Geometric Depth (PGD), which estimates depth by exploiting probabilistic depth uncertainty and geometric relationships between instances. Patil et al. [27] designed a network with two heads: the first head outputs pixel-level plane coefficients, while the second outputs a dense offset vector field that identifies the positions of seed pixels. The vector field then uses the sparsity of the seed pixels' planes to predict the depth at each position, and this result is fused with the first head's initial prediction through learned confidence adaptation. Bhat et al. [28] employed a CNN as an encoder, introduced the adaptive regression unit AdaBins, and used a Transformer module to capture global information. Kim et al. used SegFormer [29] as a feature extractor and proposed a selective local and global fusion network to enhance feature fusion. Bhat et al. [31] proposed a new architecture (LocalBins) for depth estimation from a single image, based on the popular encoder-decoder design. First, the network predicts the depth distribution of the local neighborhood of each pixel rather than a global depth distribution. Second, the depth distributions are predicted not only at the end of the decoder but across all layers of the decoder. Agarwal et al. [32] extended AdaBins with Transbins to incorporate global information, yielding more detailed depth maps. Jun et al. [33] introduced a monocular depth estimation algorithm that decomposes depth maps into normalized depth maps and scale features; this method can exploit datasets without depth labels to improve monocular depth estimation performance.

    2.2 Data Augmentation

    When neural networks reach a performance bottleneck, data augmentation is an effective way to improve performance without introducing additional computational burden. In computer vision, several data augmentation techniques have been developed. As mentioned in the introduction, common methods such as rotation, cropping, and optical transformations primarily alter the overall scene environment, which inherently limits their ability to boost network performance. To address this issue, some studies modify the geometric structure of input images to further enhance network generalization [9,11–14]. Fig. 1 shows several data augmentation methods, where Figs. 1a and 1b are RGB images and the corresponding depth maps. As shown in Fig. 1c, Devries et al. [12] introduced a regularization method called CutOut to prevent CNN overfitting: during training, CutOut randomly selects a region of the input image and sets the pixel values within that region to 0 or adds random noise. Zhong et al. [13] introduced a lightweight data augmentation method called random erasing, shown in Fig. 1d, which randomly selects a rectangular region and erases the pixel values within it using random values. Yoo et al. [9] proposed CutBlur, a data augmentation method designed specifically for image restoration tasks; it cuts out low-resolution regions and pastes them onto the corresponding high-resolution regions, teaching the model not only how to reconstruct but also where to reconstruct. Ghiasi et al. [14] presented a simple yet efficient copy-paste data augmentation method that improves the accuracy of instance segmentation; the authors argue that this technique encourages the network to use information from the entire image rather than relying on specific small regions. Yun et al. [11] improved on CutOut and proposed CutMix, which fills the cut-out portion with parts of another image. This retains the advantages of CutOut, allowing the model to learn features from different parts of the target, including less discriminative areas, and it is more efficient than CutOut because the model simultaneously learns features from two targets. The specific procedure for CutMix is illustrated in Fig. 1e.

    Figure 1: Examples of data augmentation
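    For concreteness, the following is a minimal sketch of the CutOut idea described above; the patch size and the zero fill value are illustrative assumptions, not values prescribed by the cited work.

```python
import numpy as np

def cutout(image, size=50, rng=None):
    """Zero out a randomly placed square patch (the CutOut idea [12]).

    The patch size and the constant zero fill are illustrative choices;
    the original method may also fill the region with random noise.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    cy, cx = int(rng.integers(0, h)), int(rng.integers(0, w))
    y0, y1 = max(0, cy - size // 2), min(h, cy + size // 2)
    x0, x1 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = image.copy()
    out[y0:y1, x0:x1] = 0  # erase the selected region
    return out
```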

    In the field of monocular depth estimation, Ishii et al. [15] introduced the Cutdepth method to address geometric variations in scenes. This method replaces a portion of the RGB image with ground-truth depth information, enhancing visual diversity while suppressing irrelevant geometric features in the image. Building on this concept, Kim et al. [16] proposed a variant of Cutdepth called Vertical-Cutdepth, which strengthens the network's ability to capture depth cues by preserving the vertical information in the image. These two methods are depicted in Figs. 1f and 1g. As mentioned in the introduction, although Vertical-Cutdepth motivates the network to learn vertical cues, it fails to establish the correlation between the horizontal and vertical directions. To alleviate this problem, we propose Perpendicular-Cutdepth, as shown in Fig. 1h. We present the specific algorithm in Section 3.2.

    3 Method

    3.1 Motivation

    Our main motivation comes from how neural networks understand depth in scenes. Real-world scenes contain abundant texture information within a single plane, such as patterns and wall paintings, which is independent of depth. The boundary information in a scene, however, is crucial for understanding depth. The network treats areas where color changes occur as object boundaries, but such areas also include texture. Our goal is therefore to randomly reduce texture information while retaining useful boundary information during learning, exploiting the fact that an RGB image and its corresponding depth map share similar edge information. Our idea is to replace parts of the real-world scene with the corresponding depth scene; the main question is how to perform the replacement. In previous work, Cutdepth [15] randomly cropped a rectangular area of the depth image and pasted it at the corresponding position of the RGB image. However, horizontal and vertical information should not be treated as equally important. Dijk et al. [17] found that, when estimating depth, neural networks ignore the apparent size of known obstacles and instead rely on their vertical position in the image; that is, the network only needs to know where an object contacts the ground to infer approximate depth. Kim et al. [16] proposed an improved method (Vertical-Cutdepth), which encourages the network to focus on the vertical geometric information in the scene. Although this method does improve accuracy, we believe that focusing only on vertical information is far from enough: using Vertical-Cutdepth alone can easily leave object planes incomplete, and both the planar integrity of objects and the correlation between the horizontal and vertical directions are crucial [34,35]. We therefore propose perpendicular cutting, which guides the network to attend to horizontal as well as vertical information in the scene, further deepening its understanding of the planes of objects in the scene.

    3.2 Algorithm

    Our method is applied during data preprocessing, but not to the entire dataset: to enhance the generalization ability of the network, we randomly select scenes for augmentation. Specifically, for a selected RGB image and its corresponding depth map, we randomly select a coordinate (l, u) in the image and then crop a cross-shaped region spanning the image. This enables the network to simultaneously consider correlations in both the horizontal and vertical directions while preserving the vertical geometric structure of the image. For a given set of RGB images and their corresponding depth maps, the specific procedure is shown in Algorithm 1, and Fig. 2 illustrates data augmentation using the Perpendicular-Cutdepth method.

    Figure 2: Data augmentation using Perpendicular-Cutdepth

    In Algorithm 1, alpha and beta are random numbers ranging from 0 to 1, and p denotes a specified hyperparameter. Essentially, we randomly select a subset of the training data for augmentation. For each selected sample, we ensure that a region at least one pixel wide is cropped both horizontally and vertically; the start and end points of this region are chosen randomly to increase the generalization ability of the network.
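    Since Algorithm 1 is not reproduced here, the following is a minimal sketch of the procedure as described above, assuming full-width and full-height strips that intersect at the random anchor (l, u) and using alpha and beta to size the strips; the exact strip extents in the paper's Algorithm 1 may differ.

```python
import numpy as np

def perpendicular_cutdepth(rgb, depth, p=0.75, rng=None):
    """Sketch of Perpendicular-Cutdepth: paste a cross-shaped region of the
    depth map onto the RGB image.

    rgb is (H, W, 3), depth is (H, W) and aligned with rgb; p is the
    augmentation probability (the hyperparameter from Algorithm 1).
    """
    rng = rng or np.random.default_rng()
    if rng.random() > p:                 # augment only a random subset of samples
        return rgb
    h, w = depth.shape
    l, u = int(rng.integers(0, w)), int(rng.integers(0, h))  # random anchor (l, u)
    alpha, beta = rng.random(), rng.random()                 # random numbers in [0, 1]
    half_w = max(1, int(alpha * w) // 2)                     # strips >= 1 px wide
    half_h = max(1, int(beta * h) // 2)
    x0, x1 = max(0, l - half_w), min(w, l + half_w)
    y0, y1 = max(0, u - half_h), min(h, u + half_h)
    out = rgb.copy()
    # Scale depth to the image value range and broadcast to three channels.
    d = (depth / max(float(depth.max()), 1e-6) * 255).astype(rgb.dtype)
    d3 = np.repeat(d[:, :, None], 3, axis=2)
    out[:, x0:x1] = d3[:, x0:x1]   # full-height vertical strip
    out[y0:y1, :] = d3[y0:y1, :]   # full-width horizontal strip
    return out
```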

    3.3 Network Architecture

    To validate the effectiveness of the algorithm, we constructed a simple network architecture, TransUnet. A Transformer serves as the encoder of the network, and we stack several upsampling layers with layer-wise concatenation as the decoder. As shown in Fig. 3, we apply various data augmentation methods before input to the network and compare the final prediction results.

    Figure 3: TransUnet network architecture
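    The paper does not list the layer configuration, so the following is a minimal PyTorch sketch of the TransUnet-style structure described above: a hierarchical Transformer encoder producing multi-scale features, and a decoder that repeatedly upsamples and concatenates encoder features layer by layer. The channel widths and the four-stage encoder interface are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """Upsample 2x, concatenate a skip feature, and fuse with a 3x3 conv."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.fuse = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True)
        )

    def forward(self, x, skip):
        return self.fuse(torch.cat([self.up(x), skip], dim=1))

class TransUnet(nn.Module):
    """Encoder-decoder sketch: the encoder (e.g., a hierarchical Transformer)
    is assumed to return 4 feature maps at strides 4/8/16/32; the decoder
    upsamples and concatenates them layer-wise before a depth head."""
    def __init__(self, encoder, chs=(64, 128, 320, 512)):
        super().__init__()
        self.encoder = encoder
        self.up3 = UpBlock(chs[3], chs[2], 256)
        self.up2 = UpBlock(256, chs[1], 128)
        self.up1 = UpBlock(128, chs[0], 64)
        self.head = nn.Conv2d(64, 1, 3, padding=1)  # per-pixel depth prediction

    def forward(self, x):
        f1, f2, f3, f4 = self.encoder(x)  # fine-to-coarse encoder features
        d = self.up3(f4, f3)
        d = self.up2(d, f2)
        d = self.up1(d, f1)
        return self.head(d)
```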

    3.4 Accuracy Measures for Depth Estimation

    We use RMSE, REL, Log10, and δ_a as metrics to evaluate depth estimation. We denote by d_i and g_i the predicted and ground-truth values at pixel i, respectively, and n represents the total number of valid pixels.

    RMSE: Root mean square error. Lower is better.

    REL: Mean relative error. Lower is better.

    Log10: Mean log10 error. Lower is better.

    δ_a: Accuracy under threshold. We use a ∈ {1, 2, 3}. Higher is better.
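    The paper does not print the formulas; for reference, the standard definitions of these metrics, in the notation above, are:

$$
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(d_i - g_i)^2}, \qquad
\mathrm{REL} = \frac{1}{n}\sum_{i=1}^{n}\frac{|d_i - g_i|}{g_i},
$$
$$
\mathrm{Log10} = \frac{1}{n}\sum_{i=1}^{n}\bigl|\log_{10} d_i - \log_{10} g_i\bigr|, \qquad
\delta_a = \frac{1}{n}\,\Bigl|\Bigl\{\, i : \max\Bigl(\frac{d_i}{g_i}, \frac{g_i}{d_i}\Bigr) < 1.25^{a} \Bigr\}\Bigr|.
$$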

    4 Experiments

    4.1 Experimental Setting

    We employed a Transformer [29] and DenseNet161 [36] as the backbones for our experiments, both pretrained on ImageNet. During training, we used the PyTorch framework and selected Adam as the optimizer for our network. The learning rate was decayed with a polynomial decay strategy, starting at 1e-4 and gradually decreasing to 1e-5. We set β_1 and β_2 to 0.9 and 0.999, respectively. The experiments were conducted over 20 epochs with a batch size of 12, all on a single NVIDIA RTX 3090 GPU.
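    For reproducibility, a minimal sketch of the optimizer and schedule just described follows; the polynomial decay power (0.9) and the stand-in model are assumptions, since the paper specifies only the start and end learning rates.

```python
import torch
import torch.nn as nn

def poly_lr(step, total_steps, base_lr=1e-4, end_lr=1e-5, power=0.9):
    """Polynomial decay from 1e-4 to 1e-5; the decay power is an assumption."""
    frac = min(step / max(total_steps, 1), 1.0)
    return (base_lr - end_lr) * (1.0 - frac) ** power + end_lr

model = nn.Conv2d(3, 1, 3, padding=1)  # stand-in for the depth network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))

total_steps = 20 * (20000 // 12)  # 20 epochs, 20k samples, batch size 12
for step in range(total_steps):
    for group in optimizer.param_groups:
        group["lr"] = poly_lr(step, total_steps)
    # ... forward pass, loss computation, optimizer.step() ...
```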

    Our experiments were conducted on two public datasets, NYU [18] and KITTI [19]. The NYU dataset comprises color images, each with a corresponding depth map, captured in 464 indoor scenes; the valid depth range is 0.5 to 10 m. To train on NYU, we used 20 k samples as the training set and 654 image-depth pairs for testing, randomly cropping images to 576×448 pixels during training. The KITTI dataset contains images and corresponding depth maps captured with LiDAR sensors; it includes 61 outdoor scenes with distances ranging from 50 to 80 m. Similarly, we used a training set of 20 k samples and randomly cropped the data to 375×1241 pixels. For evaluation, we employed the official 697 images provided by KITTI for depth assessment.

    4.2 Comparison with the State of the Art

    Table 1 shows our comparison with different state-of-the-art models. From the table, we find that our proposed algorithm yields limited improvement on the latest model, AdaBins [28]; this is because current model performance has reached a bottleneck, and a data augmentation algorithm alone is not enough to push performance further. However, for networks with weaker generalization ability, such as BTS [20] and the Transformer network we constructed, our algorithm significantly improves performance, which verifies its effectiveness for such networks.

    Table 1: Results of different networks on the NYU dataset

    4.3 Comparative Experiments

    To validate the impact of different data augmentation methods on network performance, we followed the protocol of Cutdepth [15] and experimented on the NYU indoor dataset [18] using BTS [20] as the backbone; BTS was pretrained on ImageNet and uses DenseNet161 [36] as the feature extractor. Table 2 displays the experimental results for CutOut [12], RE [13], CutMix [11], Cutdepth [15], and our proposed Perpendicular-Cutdepth under the same backbone. From the results, all these methods show varying degrees of improvement in the final depth evaluation metrics over the baseline. Among them, our Perpendicular-Cutdepth has an REL (mean absolute relative error) only 0.001 away from the best Cutdepth result, while our RMSE (root mean square error) and accuracy δ_1 improve by 2.2% and 0.2%, respectively, over the best Cutdepth results. We believe this trade-off of one metric for improvements in the others is worthwhile, and it directly validates the effectiveness and superiority of our method. In addition, we observed that network performance did not increase with the hyperparameter p, indicating that our method has relatively low dependence on hyperparameters.

    4.4 The Impact of Geometric Structures on Network Performance in Data Augmentation

    Fig. 4 shows different cut shapes for depth maps, where Figs. 4a and 4b are an RGB image and the corresponding depth map, and Fig. 4c shows Cutdepth, which randomly cuts a rectangular part of the depth map and pastes it at the corresponding position in the RGB image. Kim et al. [16] discovered that replacing vertical regions of images with the corresponding depth-map regions effectively improves the performance of network models, and introduced a variant of Cutdepth [15] called Vertical-Cutdepth (V-Cutdepth), shown in Fig. 4d. However, we questioned whether the vertical region is the most appropriate segmentation shape. To explore the impact of the geometric shape of the replaced depth-map region on the network, we propose two segmentation-shape algorithms: Horizontal-Cutdepth (H-Cutdepth) and Perpendicular-Cutdepth (P-Cutdepth). H-Cutdepth, shown in Fig. 4e, selects a whole horizontal region for depth replacement. P-Cutdepth (Fig. 4f) simultaneously selects both horizontal and vertical regions for depth replacement in the corresponding areas of the real image. We adopt the network architecture designed in Fig. 3 as the main structure, where the backbone uses a Transformer instead of a CNN, and the decoding part is the decoder in Fig. 3. The experimental results on the NYU indoor dataset, shown in Table 3, demonstrate that our proposed P-Cutdepth outperforms existing depth estimation data augmentation methods. The data analysis makes it evident that merely changing the geometric structure can indeed improve network performance. Under the same hyperparameters, P-Cutdepth shows better accuracy and error rate: our method reduces the RMSE from 0.357 to 0.351, a decrease of 1.6%, while V-Cutdepth decreases it by 1.1% and H-Cutdepth and Cutdepth each by 0.84%. The accuracy δ_1 increases from 0.900 to 0.907, an improvement of 0.7%, while the other three methods each improve it by 0.3%. This demonstrates the importance of geometric structure in data augmentation and highlights the superiority of our method. Fig. 5 compares depth map predictions between our method and Cutdepth.

    Table 3: Experimental results of the network on the NYU dataset with different geometric structures of Cutdepth, with the best performance in bold. Here, 'p' refers to the hyperparameter, and the prefixes V, H, and P denote the vertical, horizontal, and perpendicular variants, respectively

    Figure 4: Cutdepth and some variants of Cutdepth. We use the prefixes V-Cutdepth, H-Cutdepth, and P-Cutdepth to denote the vertical, horizontal, and perpendicular variants, respectively

    Figure 5: Results visualized on the NYU dataset, from left to right: RGB, depth, result of Cutdepth, and ours

    To further compare our method with Cutdepth, we conducted comparative experiments on the KITTI outdoor dataset. The experimental results are shown in Table 4. We found that using the Cutdepth data augmentation method in outdoor environments did not lead to any further improvement in network performance; in fact, all metrics decreased. In contrast, our algorithm improved both the accuracy and the error-rate metrics. Therefore, compared to Cutdepth, our method can improve network performance not only in indoor environments but also in outdoor environments, demonstrating its stronger generalization capability. Fig. 6 compares depth map predictions between our method and Cutdepth.

    Table 4: Experimental results of the network using Cutdepth and Perpendicular-Cutdepth on the KITTI dataset, with the best performance in bold. Here, 'p' refers to the hyperparameter

    4.5 Ablation Experiments

    To illustrate the effectiveness of our proposed method more intuitively, we conducted ablation experiments on the NYU dataset using both the CNN and the Transformer network structure designed in Fig. 3. From Table 5, our method shows improvement regardless of the value of the hyperparameter p. Moreover, the results do not grow with increasing p, indicating the stability of our method. We also conducted ablation experiments on KITTI, and the results in Table 6 demonstrate that our method enhances the network to a certain extent in both indoor and outdoor environments.

    Table 5: Ablation experiments on the NYU dataset

    Table 6: Ablation experiments on the KITTI dataset

    Figure 6: Results visualized on the KITTI dataset, from left to right: RGB, depth, result of Cutdepth, and ours

    5 Conclusion

    In this paper, we have introduced a novel data augmentation method for depth estimation. In contrast to traditional methods, our approach replaces horizontal and vertical regions of RGB images with the corresponding depth regions, enhancing the network's ability to extract features in both directions. Through extensive experiments, we have not only confirmed that altering geometric structures can improve model performance, but also demonstrated the superiority of our proposed Perpendicular-Cutdepth over traditional data augmentation methods. In future work, we will validate the effectiveness of the proposed method in other domains.

    Acknowledgement: The authors would like to thank the editors and reviewers for their valuable work, as well as their supervisors and families for their valuable support during the research process.

    Funding Statement: This work was supported by the Grant of Program for Scientific Research Innovation Team in Colleges and Universities of Anhui Province (2022AH010095); the Grant of Scientific Research and Talent Development Foundation of Hefei University (No. 21-22RC15); the Key Research Plan of Anhui Province (No. 2022k07020011); the Grant of Anhui Provincial Natural Science Foundation (No. 2308085MF213); the Open Fund of Information Materials and Intelligent Sensing Laboratory of Anhui Province (IMIS202205); as well as the AI General Computing Platform of Hefei University.

    Author Contributions: The authors confirm their contributions to the paper as follows: Le Zou: methodology, investigation, funding. Linsong Hu: investigation, writing (review and editing), writing (original draft), and methodology. Yifan Wang: resources, validation. Zhize Wu and Xiaofeng Wang: writing (review and editing), funding. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: The data that support the findings of this study are openly available at: https://drive.google.com/file/d/1AysroWpfISmm-yRFGBgFTrLy6FjQwvwP/view?usp=sharing and https://www.cvlibs.net/datasets/kitti/.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
