
    Self-Supervised Monocular Depth Estimation via Discrete Strategy and Uncertainty

2022-07-18 06:17:30
IEEE/CAA Journal of Automatica Sinica, 2022, Issue 7

    Zhenyu Li, Junjun Jiang, and Xianming Liu

    Dear Editor,

This letter is concerned with self-supervised monocular depth estimation. To estimate uncertainty simultaneously, we propose a simple yet effective strategy to learn the uncertainty for self-supervised monocular depth estimation with a discrete strategy that explicitly associates the prediction and the uncertainty when training the networks. Furthermore, we propose an uncertainty-guided feature fusion module to fully utilize the uncertainty information. Code will be available at https://github.com/zhyever/Monocular-Depth-Estimation-Toolbox.

Self-supervised monocular depth estimation methods have become a promising alternative, offering a favorable trade-off between training cost and inference performance. However, compound losses that couple the depth and the pose make uncertainty estimation difficult, which is crucial for safety-critical systems. To solve this issue, we propose a simple yet effective strategy to learn the uncertainty for self-supervised monocular depth estimation using discrete bins that explicitly associate the prediction and the uncertainty when training the networks. This strategy is pluggable, requires no changes to the self-supervised training losses, and improves model performance. Second, to further exploit the uncertainty information, we propose an uncertainty-guided feature fusion module to refine the depth estimation. The uncertainty maps serve as an attention source to guide the fusion of decoder features and skip-connection features. Experimental results on the KITTI and Make3D datasets show that the proposed methods achieve satisfying results compared to the baseline methods.

Estimating depth plays an important role in perceiving the 3D real world, and it is pivotal to many other tasks such as autonomous driving, planning, and assistive navigation [1]–[3]. Self-supervised methods trained on monocular videos have emerged as an alternative for depth estimation [4]–[6], since ground-truth RGB-D data is costly. These methods cast depth estimation as a novel-view-synthesis problem by training a network to predict target images from other viewpoints. In general, the framework consists of a depth network that predicts image depth and a pose network that predicts the camera ego-motion between successive image pairs, and it aims to minimize the photometric reprojection loss during training. Moreover, smoothness regularization [5], [6] and masking strategies [4], [5], [7] are commonly included in the self-supervised loss for sharper estimation results.

However, complex self-supervised training losses that couple the depth and the pose lead to a dilemma in uncertainty estimation [8], which is extremely vital in safety-critical systems, as it allows an agent to identify unknowns in an environment and reach optimal decisions [6]. The popular log-likelihood maximization strategy proposed in [9] causes sub-optimal modeling and fails to work beneficially in the self-supervised setting [8]. This strategy needs to re-weight all the loss terms during training to obtain reasonable uncertainty predictions, leading to the plight of re-balancing the delicately designed loss terms in self-supervised depth estimation. In contrast, we aim for a pluggable uncertainty prediction strategy that leaves the weights of the loss terms untouched.

In this paper, instead of pre-training a teacher network to decouple the depth and the pose in the losses [8], which doubles the training time and the parameters, we aim to learn the uncertainty with a single model in an end-to-end fashion without any additional modifications to the self-supervised loss terms. To this end, we apply the discrete strategy [10]. Following [9], we train the network to infer the mean and variance of a Gaussian distribution, which can be treated as the prediction and the uncertainty, respectively. After that, we divide the continuous interval into discrete bins and calculate the probability of each bin based on the mean and the variance. A weighted sum of the normalized probabilities then yields the expected prediction. Such a strategy explicitly associates the prediction and the uncertainty before calculating the losses. After self-supervised training with only a simple additional L1 uncertainty loss, our method masters the capability to predict the uncertainty. It is more pluggable for self-supervised methods and improves model performance in addition. Furthermore, our method also guarantees a Gaussian probability distribution over the discrete bins, which yields more reasonable and sharper uncertainty results compared to the standard-deviation method proposed in [6].

Moreover, to make full use of the uncertainty information, based on the U-Net multi-scale prediction backbone [5], we propose an uncertainty-guided feature fusion module to refine the depth estimation. It helps the model pay closer attention to high-uncertainty regions and refine the depth estimation more effectively. Extensive experiments on the KITTI dataset [11] and the Make3D dataset [12] demonstrate the effectiveness and generalization of our proposed methods.

Our contributions are three-fold: 1) We propose a strategy to learn uncertainty for self-supervised monocular depth estimation utilizing discrete bins. 2) We design an uncertainty-guided feature fusion module in the decoder to make full use of the uncertainty. 3) Extensive experiments on the KITTI and Make3D datasets demonstrate the effectiveness of our proposed methods.

Methods: In this section, we present the two main contributions of this letter: 1) a pluggable strategy to learn the depth uncertainty without additional modification to the self-supervised loss terms; and 2) an uncertainty-guided feature fusion module. We use Monodepth2 [5] as our baseline. The framework with our improvements is shown in Fig. 1.

Depth and uncertainty: Following [9], we simultaneously estimate the mean and the variance of a Gaussian distribution, which respectively represent the mean estimated depth and the measurement uncertainty. It is formulated as

$$(D, U) = f_D(I) \tag{1}$$

where $I$ is the input RGB image, $D$ is the mean estimated depth map, $U$ is the uncertainty map, and $f_D$ represents the depth estimation network.
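To make (1) concrete, the sketch below shows a minimal PyTorch-style output head that predicts the mean depth and the uncertainty from a shared decoder feature. The two-branch layout, the sigmoid depth scaling, the softplus activation, and the depth range are illustrative assumptions rather than the exact network design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthUncertaintyHead(nn.Module):
    """Hypothetical head realizing (D, U) = f_D(I): one branch for the mean
    depth D and one for the uncertainty (variance) U."""

    def __init__(self, in_channels: int, min_depth: float = 0.1, max_depth: float = 100.0):
        super().__init__()
        self.depth_conv = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)
        self.uncert_conv = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)
        self.min_depth, self.max_depth = min_depth, max_depth

    def forward(self, feat: torch.Tensor):
        # Mean depth D, squashed into [min_depth, max_depth] (assumed sigmoid scaling).
        d = torch.sigmoid(self.depth_conv(feat))
        depth = self.min_depth + (self.max_depth - self.min_depth) * d
        # Uncertainty U kept strictly positive with softplus (assumed activation).
        uncertainty = F.softplus(self.uncert_conv(feat))
        return depth, uncertainty
```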

In general, two key points are involved in the design of loss terms to make the uncertainty reasonable: 1) adding an L1 loss to force the model to predict depth more confidently; and 2) re-weighting the loss terms according to the uncertainty, so that prediction errors at pixels with lower uncertainty are punished more harshly. However, complex loss terms make 2) much harder [9]. To this end, we combine the prediction and the uncertainty before computing the losses.
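For concreteness, the re-weighting in 2) usually takes the heteroscedastic form popularized by [9], sketched here generically (the exact terms in [9] and in self-supervised pipelines differ):

$$\mathcal{L} = \frac{1}{|I|}\sum_{i,j}\left(\frac{r^{\,i,j}}{\sigma^{i,j}} + \log \sigma^{i,j}\right)$$

where $r^{i,j}$ is a per-pixel residual (e.g., the photometric error) and $\sigma^{i,j}$ is the predicted uncertainty. A small $\sigma$ amplifies the residual, which is exactly the harsher punishment described above, but applying such re-weighting to every term of a compound self-supervised loss is what makes 2) difficult.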

To be specific, we divide the depth range into discrete bins, compute the approximate probability of each bin, and normalize the probabilities:

$$p_k = f_{cd}(D, U, d_{k+1}) - f_{cd}(D, U, d_k), \qquad q_k = \frac{p_k}{\sum_{n=1}^{N} p_n} \tag{2}$$

Fig. 1. Overview of our proposed methods. In part (a), the framework is based on Monodepth2 [5] and contains a U-Net-based depth network and a pose network. We simply extend the depth network in Monodepth2 to estimate depth and uncertainty at the same time. (b) shows more details of the modified multi-scale decoder. Successive uncertainty-guided feature fusion modules refine the depth estimation. Our strategy is performed at each output level to achieve multi-scale predictions. In (c), we illustrate details of the uncertainty-guided feature fusion module. It makes full use of the uncertainty information, containing two convolutions to extract useful information and an identity feature mapping to facilitate gradient back-propagation and preserve semantic cues.

Fig. 2. Visualization example. Given the input RGB image (a), (b) and (c) show the depth and the uncertainty prediction, respectively. (d) shows the depth probability distributions for the three selected points in the picture. The blue and orange points have sharp peaks, indicating low uncertainty. The red point has a flatter probability distribution, which means high uncertainty.

where $i$ and $j$ denote the location of a pixel on the image. To keep the notation concise and avoid misunderstanding, we omit the subscripts $i$ and $j$ in (2) and thereafter. $f_{cd}(D, U, \cdot)$ is the cumulative distribution function of the normal distribution whose mean and variance are $D$ and $U$, respectively, $d_k$ is the $k$-th split point of the depth range, $N$ is the number of bins, and $q_k$ is the probability after normalization.

Finally, we calculate the expected depth as follows:

$$E = \sum_{k=1}^{N} q_k \, d(k) \tag{3}$$

where $d(k)$ represents the depth of the $k$-th bin and $E$ is the expected depth. We use the expected depth to train our models, like other discrete-bin-based methods [6].

Notably, the expected depth is not equal to the predicted mean depth thanks to the discrete strategy. Therefore, we combine the mean and the variance explicitly before the loss calculation. From a mathematical viewpoint, a smaller variance leads to a relatively higher lower bound on the self-supervised losses. Models are thus forced to predict more precise depth at pixels with smaller variance, thereby predicting reasonable uncertainty. Such a strategy avoids complicating the self-supervised losses.
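The following PyTorch-style sketch puts (1)–(3) together: it builds the bin probabilities from the predicted mean and variance via the normal CDF, normalizes them, and takes the expectation over the bin depths. The uniform bin layout, the depth range, the use of bin centers for $d(k)$, and the clamping constants are illustrative assumptions.

```python
import torch

def expected_depth(mean: torch.Tensor, var: torch.Tensor,
                   d_min: float = 0.1, d_max: float = 100.0, num_bins: int = 128):
    """Discrete strategy sketch. mean/var: (B, 1, H, W) Gaussian parameters D and U.
    Returns the normalized bin probabilities q_k and the expected depth E."""
    # Split points d_1 < ... < d_{N+1} of the depth range (uniform split assumed).
    splits = torch.linspace(d_min, d_max, num_bins + 1, device=mean.device)
    normal = torch.distributions.Normal(mean, var.clamp(min=1e-6).sqrt())
    # Probability mass of each bin: f_cd(D, U, d_{k+1}) - f_cd(D, U, d_k), as in (2).
    cdf = normal.cdf(splits.view(1, -1, 1, 1))        # (B, N+1, H, W)
    p = cdf[:, 1:] - cdf[:, :-1]                      # (B, N, H, W)
    q = p / p.sum(dim=1, keepdim=True).clamp(min=1e-12)
    # Expected depth E = sum_k q_k * d(k), with d(k) taken as the bin center (assumed).
    centers = 0.5 * (splits[1:] + splits[:-1]).view(1, -1, 1, 1)
    return q, (q * centers).sum(dim=1, keepdim=True)
```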

In the training stage, we apply the minimum per-pixel photometric reprojection error $l_p$, the auto-masking strategy, and the edge-aware smoothness loss $l_s$ proposed in our baseline to train our model. Due to space limits, we refer the reader to [5] for all the details.

Additionally, we also want the model to provide more confident results with less uncertainty, so we add an uncertainty loss $l_u$ following [9]:

$$l_u = \sum_{s \in S} h_s \left\lVert U_s \right\rVert_1$$

where $h_s$ denotes the hyperparameter factor for the multi-scale outputs $S = \{1, 1/2, 1/4, 1/8\}$. The scale factors $h_s$ are set to 1, 1/2, 1/4, and 1/8 to force the model to decrease uncertainty (i.e., increase the penalty on uncertainty) during the depth refinement process. The total loss can be written as

$$L_{total} = l_p + \lambda_1 l_s + \lambda_2 l_u \tag{6}$$

where $\lambda_1$ and $\lambda_2$ are the hyperparameters weighing the importance of the smoothness loss and the proposed uncertainty loss. Both the pose model and the depth model are trained jointly with this loss. The hyperparameter $\lambda_1$ follows the setting in the original paper [5].
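A minimal sketch of how the multi-scale uncertainty penalty and the total objective could be assembled, assuming the per-scale L1 form above and hypothetical scalar losses l_p and l_s computed as in [5]:

```python
import torch

def uncertainty_loss(uncertainties, scale_factors=(1.0, 0.5, 0.25, 0.125)):
    """Assumed L1-style penalty on the multi-scale uncertainty maps U_s,
    weighted by the factors h_s."""
    return sum(h * u.abs().mean() for h, u in zip(scale_factors, uncertainties))

def total_loss(l_p: torch.Tensor, l_s: torch.Tensor, l_u: torch.Tensor,
               lam1: float = 1e-3, lam2: float = 1e-2):
    """Total objective L = l_p + lambda_1 * l_s + lambda_2 * l_u, as in (6)."""
    return l_p + lam1 * l_s + lam2 * l_u
```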

Examples of the probabilities over the bins are shown in Fig. 2. We can see a sharper peak at a low-uncertainty point (the blue and orange points), which means the model is more confident about the estimation. A higher-uncertainty point has a flatter probability distribution (the red point), indicating that the model is uncertain about the prediction.

Fusion module: Uncertainty maps provide information about how confident the depth estimation is, which can help the depth refinement process focus on areas with high uncertainty [13].

Therefore, we propose the uncertainty-guided feature fusion module to refine the depth estimation. The proposed uncertainty-guided feature fusion module contains three main components, as shown in Fig. 1(c): two 3×3 convolution layers and an identity feature mapping. Specifically, the first concatenation and convolution layer is used to extract low-uncertainty information and filter high-uncertainty features so that the model pays closer attention to high-uncertainty areas. The output is then concatenated with the skip-connection feature and the uncertainty map, and they are fed into the second convolution layer. This allows effective feature selection between the feature maps. Finally, an identity mapping is used to facilitate gradient back-propagation and preserve high-level semantic cues [14].

The fusion module utilizes the predicted uncertainty $U$ to fuse the upsampled features $F_u$ and the skip-connection features $F_s$.

In the multi-scale depth decoder, the uncertainty-guided feature fusion module is applied repeatedly in the gradual feature fusion procedure to refine the depth estimation. It helps the model pay closer attention to areas with higher uncertainty and refine the depth estimation more effectively.
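A minimal PyTorch-style sketch of the module described above; the channel widths, ELU activations, 1×1 projection, and the choice of the upsampled feature as the identity-mapping source are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UncertaintyGuidedFusion(nn.Module):
    """Sketch of the uncertainty-guided feature fusion module (Fig. 1(c))."""

    def __init__(self, up_channels: int, skip_channels: int, out_channels: int):
        super().__init__()
        # First 3x3 conv: filter high-uncertainty content from the upsampled feature.
        self.conv1 = nn.Sequential(
            nn.Conv2d(up_channels + 1, up_channels, kernel_size=3, padding=1),
            nn.ELU(inplace=True))
        # Second 3x3 conv: fuse with the skip-connection feature under uncertainty guidance.
        self.conv2 = nn.Sequential(
            nn.Conv2d(up_channels + skip_channels + 1, out_channels, kernel_size=3, padding=1),
            nn.ELU(inplace=True))
        # 1x1 projection so the identity branch matches the output width (assumption).
        self.proj = nn.Conv2d(up_channels, out_channels, kernel_size=1)

    def forward(self, f_up: torch.Tensor, f_skip: torch.Tensor, u: torch.Tensor):
        x = self.conv1(torch.cat([f_up, u], dim=1))
        x = self.conv2(torch.cat([x, f_skip, u], dim=1))
        # Identity feature mapping to ease gradient flow and keep semantic cues.
        return x + self.proj(f_up)
```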

    Experiments:

Datasets: We conduct a series of experiments on the KITTI dataset [11] and the Make3D dataset [12] to prove the effectiveness of the proposed methods. KITTI contains 39 810 monocular triplets for training and 4424 for validation. After training our models on the KITTI dataset, we evaluate them on the Make3D dataset without further fine-tuning.

Implementation details: We jointly train the pose and depth networks with the Adam optimizer ($\beta_1 = 0.9$, $\beta_2 = 0.999$) for 25 epochs. The initial learning rate is set to $1\times10^{-4}$. We use a multi-step learning rate decay that drops the rate to $1\times10^{-5}$ after 15 epochs and to $1\times10^{-6}$ after 20 epochs. Following [6], we include a context module. Through sufficient experiments, the weights in (6) are empirically set to $\lambda_1 = 1\times10^{-3}$ and $\lambda_2 = 1\times10^{-2}$, which give the best results.
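A short sketch of this optimizer and learning-rate schedule; the networks are passed in as arguments, and MultiStepLR with gamma = 0.1 reproduces the 1e-4 → 1e-5 → 1e-6 decay at epochs 15 and 20.

```python
import torch

def build_optimizer(depth_net: torch.nn.Module, pose_net: torch.nn.Module):
    """Adam with beta1=0.9, beta2=0.999, lr=1e-4, decayed to 1e-5 after epoch 15
    and 1e-6 after epoch 20 (call scheduler.step() once per epoch)."""
    params = list(depth_net.parameters()) + list(pose_net.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-4, betas=(0.9, 0.999))
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[15, 20], gamma=0.1)
    return optimizer, scheduler
```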

Evaluation metrics: For the quantitative evaluation, several typical metrics [15] are employed in our experiments.
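For reference, the metrics of [15] are conventionally defined as follows over the set $T$ of evaluated pixels, with $d$ the predicted and $d^*$ the ground-truth depth (listed here for completeness; the exact subset reported in Table 1 follows the original paper):

$$\text{Abs Rel} = \frac{1}{|T|}\sum_{d \in T} \frac{|d - d^*|}{d^*}, \qquad \text{Sq Rel} = \frac{1}{|T|}\sum_{d \in T} \frac{(d - d^*)^2}{d^*}$$

$$\text{RMSE} = \sqrt{\frac{1}{|T|}\sum_{d \in T} (d - d^*)^2}, \qquad \text{RMSE log} = \sqrt{\frac{1}{|T|}\sum_{d \in T} \left(\log d - \log d^*\right)^2}$$

and the accuracy $\delta_t$ is the percentage of pixels satisfying $\max(d/d^*,\, d^*/d) < 1.25^{t}$ for $t \in \{1, 2, 3\}$.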

Performance comparison: We first evaluate our models on the KITTI dataset. The quantitative results compared with other methods are shown in Table 1. With the proposed methods, our models further decrease the evaluation error and achieve higher accuracy. We also provide some qualitative results in Fig. 3. Our models provide sharper and more accurate results on object boundaries such as signboards, lamp posts, and background buildings. Furthermore, the uncertainty maps also provide useful information. As shown in the first sub-figure of Fig. 3, the depth estimation of the closer round signboard lacks clear and accurate boundaries, and on the uncertainty map such inaccurate areas have higher uncertainty. We then compare the uncertainty maps with other methods. As shown in Fig. 4, the uncertainty maps we provide are more reasonable, without artifacts from close to far, and have more detailed information than the results in [6].

Ablation study: Table 1 also shows the quantitative results of the ablation study. For low-resolution images (640×192), based on Monodepth2 (Baseline), we can observe better performance in almost all the evaluation measures with the discrete strategy (+ DS). The uncertainty-guided feature fusion module (+ UGF) also provides a satisfying improvement. The ablation study on high-resolution images (1024×320) also shows the effectiveness of our proposed methods.

Table 1. Quantitative Results. Comparison of Existing Methods to Ours on the KITTI 2015 [11] Using the Eigen Split [15]. The Best and Second Best Results are Presented in Bold and Underline for Each Category. The Upper/Lower Part is the Low/High Resolution Result (640×192/1024×320). DS: Discrete Strategy. UGF: Uncertainty-Guided Fusion Module. DDVO: Differentiable Direct Visual Odometry.

Fig. 3. Qualitative results on the KITTI Eigen split. Our model produces sharper depth maps than the baseline Monodepth2 (MO2), which is reflected in the superior quantitative results in Table 1. At the same time, uncertainty maps are provided for high-level applications.

We also provide some qualitative ablation study results in Fig. 4. Comparing the depth estimation results, the model with the uncertainty-guided feature fusion module provides sharper and more accurate results. Furthermore, there is a more prominent deep-blue (lower uncertainty) area in the uncertainty results produced by the model with the uncertainty-guided feature fusion module, which indicates that the module can further reduce the uncertainty of the depth estimations.

Generalization test: To further evaluate the generalization of our proposed methods, we test our model on the Make3D dataset without fine-tuning. The quantitative comparison results are tabulated in Table 2, which shows that our proposed method outperforms the baseline method by a significant margin. Qualitative results can be seen in Fig. 5. Our method produces sharper and more accurate depth maps and reasonable uncertainty estimations.

Result analysis and future work: As seen in the qualitative results, the most uncertain areas are located at object edges, which may be caused by the smoothness loss that blurs object edges due to the lack of prior object information and occlusion. Therefore, designing a more effective smoothness regularization term, introducing object edge information, and adopting more effective masking strategies would help the training procedure and reduce uncertainty. Additionally, smooth areas with little texture (heavy shadows and the sky) show the lowest uncertainty. This indicates that the photometric loss may not be helpful enough to train the model in this kind of area. Although our model can precisely estimate the depth in these areas, it is essential to develop a more effective loss to supervise these areas better.

While we have achieved more reasonable uncertainty maps, when we concatenate the uncertainty maps along the time axis, we find fluctuations in different areas of the image. They hurt the algorithm's robustness, especially for systems that require temporally smooth predictions. In the future, we will try to incorporate filtering methods or explore more temporal constraints to make the predictions smoother and more stable, which is meaningful future work.

    Fig. 4. Comparison examples. Ours (w/o) represents our method without the UGF. MO2 is the baseline. Discrete disparity volume (DDV) shows the uncertainty results from [6].

Table 2. Quantitative Results of the Make3D Dataset

Fig. 5. Qualitative results on the Make3D dataset. Our method shows better depth estimation results and can also provide uncertainty maps.

Conclusion: This paper proposes a simple yet effective strategy to learn the uncertainty for self-supervised monocular depth estimation with a discrete strategy that explicitly associates the prediction and the uncertainty when training the networks. Furthermore, we propose an uncertainty-guided feature fusion module to fully utilize the uncertainty information. It helps the model pay closer attention to high-uncertainty regions and refine the depth estimation more effectively. Extensive experimental results on the KITTI dataset and the Make3D dataset indicate that the proposed algorithm achieves satisfying results compared to the baseline methods.

Acknowledgments: This work was supported in part by the National Natural Science Foundation of China (61971165), in part by the Fundamental Research Funds for the Central Universities (FRFCU 5710050119), and in part by the Natural Science Foundation of Heilongjiang Province (YQ2020F004).
