
    Monocular Depth Estimation with Sharp Boundary


Xin Yang, Qingling Chang, Shiting Xu, Xinlin Liu and Yan Cui*

1 Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, 529000, China

2 China-Germany (Jiangmen) Artificial Intelligence Institute, Jiangmen, 529000, China

3 Zhuhai 4DAGE Network Technology, Zhuhai, 519000, China

ABSTRACT Monocular depth estimation is a basic task in computer vision. Its accuracy has improved tremendously over the past decade with the development of deep learning. However, blurry boundaries in the depth map remain a serious problem. Researchers find that the blurry boundary is mainly caused by two factors. First, the low-level features, which contain boundary and structure information, may be lost in deep networks during the convolution process. Second, during backpropagation the model ignores the errors introduced by the boundary area because the boundary occupies only a small portion of the whole image. Focusing on these factors, two countermeasures are proposed to mitigate the boundary blur problem. First, we design a scene understanding module and a scale transform module to build a lightweight fused feature pyramid, which deals with low-level feature loss effectively. Second, we propose a boundary-aware depth loss function that pays attention to the depth values in the boundary area. Extensive experiments show that our method can predict depth maps with clearer boundaries, and its depth accuracy on NYU-Depth V2, SUN RGB-D, and iBims-1 is competitive.

KEYWORDS Monocular depth estimation; object boundary; blurry boundary; scene global information; feature fusion; scale transform; boundary aware

    1 Introduction

Monocular depth estimation is a basic vision task in computer vision. It is widely used in autonomous driving, height measurement, SLAM (Simultaneous Localization and Mapping), AR (Augmented Reality), etc. Monocular depth estimation is denser and cheaper than traditional sensors that obtain depth directly. Furthermore, it has the advantages of low price, rich information content, and small size compared with binocular systems, which are limited by the baseline length, resulting in a poor match between the equipment volume and the vehicle platform. Estimating depth from a monocular camera has therefore become one of the research hotspots in computer vision. Monocular depth estimation refers to transforming a 2D RGB image into a 2.5D depth map, relying on "Shape From X" methods to obtain scene depth information from an RGB image. It is an ill-posed problem because a single image lacks the geometrical information required to infer depth. Compared with traditional methods [1,2] that use artificially designed cues, learning-based monocular depth estimation methods use Convolutional Neural Networks (CNNs) to extract features from images and build the mapping between features and depth. Eigen et al. [3] proposed the first monocular depth estimation method based on deep learning, which showed surprising performance compared with earlier works [1,2]. Then, many excellent deep learning based works were proposed, such as [4-14]. However, monocular depth estimation methods still suffer from the boundary blur challenge, especially in indoor scenes, which have complex structures and many objects. Fig. 1 shows the estimated depth results from existing works in indoor scenes. From the red box in Fig. 1, we can see that Hu et al. [15] achieve a significant improvement in depth accuracy compared with Eigen et al. [3], and the object structure in their depth maps is clearer. But there is still an obvious boundary blur phenomenon, especially for complex object structures, and some objects are given a wrong depth value that is close to the background. The blurry boundary not only increases prediction errors in the depth estimation but also causes "flying pixels" [16]. Some cases of "flying pixels" in point clouds can be seen in Fig. 2. In the first group of Fig. 2, the point clouds from Hu et al. [15] and Chen et al. [17] have serious "flying pixels" at the human head, and the point clouds are discontinuous. In the second group, the screens are warped. The point clouds projected from the depth maps are discontinuous, especially at object boundaries. A blurry boundary makes the depth values of boundary and non-boundary pixels differ, and pixels with different depth values are projected onto different planes, which separates the boundary and non-boundary areas of the same object, as shown in Fig. 2.
Studies find that the boundary blur problem is mainly brought about by two factors in CNN frameworks. The first is the loss of low-level features in the encoding phase; the low-level features include scene structure and object information, and their loss leads to depth maps with unclear and blurry object boundaries. However, a deep network is still needed to improve the spatial expressive ability and receptive field of the features, because a deeper network can extract the high-level, abstract information of images such as depth. The second is "boundary smoothing" during model training: the loss caused by the boundary area is ignored because the boundary occupies only a small proportion of the image. Although the gradient at the boundary is larger, it contributes little to the training loss, so the model processes the boundary like the non-boundary with a small gradient, and the boundary is not sharp or clear enough, as in methods [15,17] in Fig. 1. In this paper, we propose two solutions for these two factors, respectively. First, to mitigate the low-level information loss, we propose a Scene Understanding module (SU) and a Scale Transform module (ST). Second, to solve the problem caused by "boundary smoothing", we pay attention to the loss introduced by the boundary area and design a novel depth loss function, the Boundary Aware Depth loss (BAD).

SU and ST. Multi-scale feature fusion is an effective way to deal with low-level feature loss. The Fused Feature Pyramid (FFP) is a common way to aggregate features of different scales, as in Chen et al. [17] and Yang et al. [18], but these models have many parameters because the features are sampled too many times while building the FFP. To reduce the parameters, SU and ST are designed in this work. SU aggregates the features of all scales extracted in the encoding phase and learns the global scene information, which includes rich scene structure and boundary information. Then, ST transforms the global scene information to different scales to build a lightweight FFP. Each feature in the FFP is connected to the corresponding scale of the decoder.

BAD. BAD guides the model to focus on the boundary area during training. BAD introduces boundary weights into the depth loss function; the weight is composed of multiple items to ensure that it is usable in most cases. BAD enforces the model to pay attention to the loss caused by the boundary area.

Figure 1: Predicted depth maps from other methods. From left to right: the input image, the ground truth, and the predictions from Eigen et al. [3] and Hu et al. [15]

Figure 2: The flying pixels phenomenon in the point clouds projected from depth maps. From left to right: input RGB images, ground-truth depth maps, and results of Hu et al. [15] and Chen et al. [17], respectively

    Contributions:

1. We propose a Scene Understanding module (SU) to aggregate the multi-scale features of the encoder and learn the global scene information. Furthermore, we design a Scale Transform module (ST) to transform the global information to different scales to build a lightweight FFP that deals with low-level feature loss effectively.

2. We propose a novel Boundary Aware Depth loss (BAD). BAD introduces a boundary aware weight into the depth loss and guides the model to be aware of the pixels that have high edge gradients.

3. Extensive experimental results show that our model can predict depth maps with clearer boundaries, which effectively alleviates "flying pixels", and achieves competitive depth accuracy on the NYU-Depth V2 [19], SUN RGB-D [20], and iBims-1 [21] datasets.

    2 Related Work

Monocular depth estimation is an important task in computer vision. Learning-based monocular depth estimation began with Eigen et al. [3], who predicted depth from a single RGB image with CNNs and showed a great improvement over previous works [1,2]. Based on this work, Eigen et al. [4] proposed a universal multi-task framework to predict depth, surface normals, and segmentation from a single image. After that, monocular depth estimation made great progress. Some researchers proposed fusing CRFs (Conditional Random Fields) with deep learning [22-25]; combining CRFs and CNNs makes up for the shortcomings of CNNs and improves the accuracy of depth estimation models. In addition, Fu et al. [26-28] proposed using classification to deal with monocular depth estimation: they divide the depth range of the image into intervals, determine the interval corresponding to each pixel, and use the depth value of that interval to express the depth of each pixel. These works show great performance in depth prediction but ignore the structure information in the depth map, which degrades reconstruction or obstacle detection with point clouds projected from a depth map that carries little structure information. This flaw is fatal, especially in complex scenes such as indoor scenes, which have complicated structures and a mass of objects. A clear object boundary not only improves the accuracy of depth estimation but also keeps the point cloud in good shape, which benefits downstream work such as scene reconstruction and object detection. To deal with the blurry boundary, Hu et al. [15] proposed a fusion model to fuse multi-scale features and a compound loss function to make the boundary clearer. Based on this excellent work, Chen et al. [17] proposed a Fused Feature Pyramid (FFP) and a residual pyramid to predict depth maps. Yang et al. [18] built an FFP and used an ASFF (Adaptively Spatial Feature Fusion) structure [29] to fuse depth maps of different scales to keep the structure information. Although Chen et al. [17] and Yang et al. [18] showed great performance, their models have many parameters for building the FFP. Based on these works, we propose an SU module to fuse the multi-scale features and learn the global scene information, and then use an ST module to transform the global scene information to build a more lightweight FFP than previous works. Furthermore, to predict depth with clearer object boundaries, we propose a novel depth loss, BAD, which enforces the network to penalize depth errors in the boundary region.

    3 Methods

In this section, we first introduce the overall framework, and then we describe the Scene Understanding module (SU), the Scale Transform module (ST), and the Boundary Aware Depth loss (BAD) in detail.

    3.1 Overall Framework

A blurry boundary not only introduces errors in depth maps but also causes "flying pixels" in point clouds. To alleviate the boundary blur problem, we design a framework, shown in Fig. 3 (with structural details in Table 1, where H is height, W is width, and C is the number of channels), to predict depth maps with a clear boundary. This framework uses an encoder-decoder as the base architecture and selects SENet-154 [30] as the backbone. Besides the base encoder-decoder architecture, we design the SU module to aggregate all extracted features and learn the global scene information, and the ST module to transform the global information to different scales. The global information of each scale is sent to the corresponding step of the decoder. The decoder uses a U-Net style architecture: the backbone features are compressed to half of their original channels and participate in decoding as skip connections. In the decoder, the decoding result of each layer is compressed, upsampled to the corresponding resolution with bilinear interpolation, and fed into the next layer, as shown in Fig. 3 and Table 1. For model training, we propose a novel loss function, the Boundary Aware Depth loss (BAD), to enforce the model to focus on object boundaries during training rather than ignore them.

Figure 3: The architecture of our framework. SU is the Scene Understanding module and ST is the Scale Transform module

Table 1: Sizes of output features and input/output channels of each layer when using SeNet154 as the encoder

Table 1 (continued)

Module    Layers                  Input          Output
          Layer4                  H*W*64         H*W*128
3+B/R     Layer1                  H*W*C          H*W*C/2
Decoder   Layer1                  8*10*1152      8*10*512
          bilinear interpolation  8*10*512       15*19*512
          Concatenate             15*19*512      15*19*1152
          Layer2                  15*19*1152     15*19*256
          bilinear interpolation  15*19*256      29*38*256
          Concatenate             29*38*256      29*38*640
          Layer3                  29*38*640      29*38*128
          bilinear interpolation  29*38*128      58*76*128
          Concatenate             58*76*128      58*76*384
          Layer4                  58*76*384      58*76*64
          bilinear interpolation  58*76*64       114*152*64
          Concatenate             114*152*64     114*152*256
          Layer5                  114*152*256    114*152*1
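To make the data flow of Table 1 concrete, the PyTorch sketch below shows one decoder step. It assumes that each decoder "LayerX" is a 3 * 3 convolution and that the skip tensor already concatenates the compressed backbone feature with the ST output; the channel numbers in the comments are taken from Table 1, while the layer arrangement itself is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderStep(nn.Module):
    """One decoder step of the framework in Fig. 3 / Table 1 (sketch):
    reduce channels, upsample with bilinear interpolation, and concatenate
    with the skip feature for the next step."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.conv(x)                            # e.g. 8x10x1152 -> 8x10x512 (Decoder Layer1)
        x = F.interpolate(x, size=skip.shape[-2:],  # bilinear upsampling, e.g. 8x10 -> 15x19
                          mode="bilinear", align_corners=False)
        return torch.cat([x, skip], dim=1)          # e.g. 512 + 640 skip channels -> 15x19x1152
```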

3.2 Scene Understanding Module (SU) and Scale Transform Module (ST)

Scene Understanding module (SU). In a deep network, low-level features are often lost, which leads to blurry boundaries. Monocular depth estimation is a dense prediction task that needs deep CNNs to extract high-level features and establish the mapping between the RGB domain and the depth domain. To deal with this problem, we propose the SU to aggregate and learn the global scene information, containing both low- and high-level features. The architecture of SU is illustrated in Fig. 4. First, to reduce the model parameters, we use two convolution layers with 3 * 3 kernels to compress the feature maps extracted from the backbone to 64 channels. Second, we use bilinear interpolation to sample these feature maps to the second-scale resolution (57 * 76). Finally, we concatenate the feature maps and use a fusion layer, consisting of two convolution layers with 5 * 5 and 3 * 3 kernels respectively, to fuse them and compress them to 128 channels. The SU outputs a feature map with global scene information that has only 128 channels. The global scene information provides the decoder with additional detail features to enrich the detail in the depth maps. To meet the decoding needs of each stage, we need multi-scale global scene information, but frequent feature fusion would introduce a large number of parameters. The ST is therefore proposed to deal with this difficulty.
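The following PyTorch sketch illustrates the SU structure just described (64-channel compression of each backbone scale, bilinear resampling to a common resolution, and a 5 * 5 + 3 * 3 fusion block that outputs 128 channels). The backbone stage channel counts `in_channels` and the exact layer arrangement are assumptions; only the channel and kernel numbers come from the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SceneUnderstanding(nn.Module):
    """Sketch of the Scene Understanding (SU) module of Section 3.2."""

    def __init__(self, in_channels, out_size=(57, 76)):
        super().__init__()
        self.out_size = out_size
        # Compress every backbone scale to 64 channels with two 3x3 convolutions.
        self.compress = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(c, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            )
            for c in in_channels
        ])
        # Fusion layer: a 5x5 then a 3x3 convolution, producing the 128-channel global feature.
        self.fuse = nn.Sequential(
            nn.Conv2d(64 * len(in_channels), 128, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, features):
        # Resample every compressed feature map to the common resolution, concatenate, fuse.
        resized = [
            F.interpolate(conv(f), size=self.out_size, mode="bilinear", align_corners=False)
            for conv, f in zip(self.compress, features)
        ]
        return self.fuse(torch.cat(resized, dim=1))
```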

Scale Transform module (ST). To obtain multi-scale global scene information with fewer parameters, we designed the ST to transform the global scene information to different scales instead of fusing features again. The architecture of the ST is shown in Fig. 5. This module mainly uses channel attention to assign different weights to feature channels so that the feature adapts to different scales. First, we use bilinear interpolation to sample the feature map containing the global scene information to a different scale. Then, we compress the feature to 64 channels and apply average pooling to obtain a 1 * 1 * 64 descriptor. Third, we use a convolution layer to compress the pooled feature to 32 channels and activate it with the ReLU function. After that, we use a convolution to recover the feature from 32 to 64 channels and use the sigmoid as the activation function. Finally, we multiply the recovered weights with the feature before pooling and use a convolution layer to recover the feature to 128 channels. The processed feature maps are transformed to the scale of each phase of the decoder and sent to the corresponding decoding step as a skip connection. The SU learns the global information of the scene, and the ST transforms the global information to each scale. The ST not only changes the resolution of the global information but also adaptively learns which features are needed at each decoding scale and adjusts the feature accordingly. Building the pyramid with ST requires far fewer model parameters than previous works such as Chen et al. [17] and Yang et al. [18]. The comparison of the parameters of several models can be seen in the experiment section.
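A minimal PyTorch sketch of the ST follows, assuming an SE-style channel attention consistent with the description (128 → 64 channels, global average pooling, a 64 → 32 → 64 path with ReLU and sigmoid, channel re-weighting, and recovery to 128 channels); the kernel sizes of the reduce/expand convolutions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleTransform(nn.Module):
    """Sketch of the Scale Transform (ST) module of Section 3.2."""

    def __init__(self, channels=128, mid_channels=64, squeeze=32):
        super().__init__()
        self.reduce = nn.Conv2d(channels, mid_channels, kernel_size=3, padding=1)
        # Squeeze-and-excitation style weighting on the 64-channel feature.
        self.fc1 = nn.Conv2d(mid_channels, squeeze, kernel_size=1)
        self.fc2 = nn.Conv2d(squeeze, mid_channels, kernel_size=1)
        self.expand = nn.Conv2d(mid_channels, channels, kernel_size=3, padding=1)

    def forward(self, x, out_size):
        # Resample the global scene feature to the target decoder resolution.
        x = F.interpolate(x, size=out_size, mode="bilinear", align_corners=False)
        x = self.reduce(x)                                # 128 -> 64 channels
        w = F.adaptive_avg_pool2d(x, 1)                   # 1x1x64 channel descriptor
        w = torch.sigmoid(self.fc2(F.relu(self.fc1(w))))  # per-channel weights in (0, 1)
        x = x * w                                         # re-weight the feature before pooling
        return self.expand(x)                             # recover 128 channels
```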

Figure 4: The architecture of the Scene Understanding module. The sample module is used to sample each scale feature to the same scale, and the fusion module is used to adaptively fuse the multi-scale features to learn the whole-scene feature

Figure 5: The architecture of the Scale Transform module

3.3 Boundary Aware Depth Loss (BAD)

In this section, we propose a novel depth loss function, the Boundary Aware Depth loss (BAD), to pay attention to the loss caused by the boundary, which is usually ignored in training. Generally, the depth on a plane is continuous with a smooth gradient, while the depth at a boundary is discontinuous with a large gradient. The boundary area occupies only a small proportion of the image even though its gradient is larger, so it causes little loss in training and is easily ignored. This is especially true in indoor scenes, where there are more planes than in other scenes due to walls, ceilings, tables, beds, etc. Therefore, models tend to predict the depth of the entire scene as equally smooth, which aggravates the boundary blur problem. To deal with the ignored loss caused by the boundary, Hu et al. [15] added a gradient-of-depth term to the total loss to guide the model to predict depth maps with accurate boundary gradients. However, the gradient loss term does not play an ideal role when the depths predicted for the background and the boundary are both too large or too small at the same time. In this paper, based on Hu et al. [15], we propose a novel depth loss function, BAD, that pays attention to the boundary depth to improve the depth accuracy at object boundaries. BAD guides the training process by setting a boundary aware weight for each pixel. The BAD is defined as:

L_BAD contains two items: the boundary aware weight and the depth prediction item, where w is the boundary aware weight, d is the true depth, and d̂ is the predicted depth. α is an aware factor, and we set α = 0.3 in this paper. Pixels with a large boundary aware weight receive more focus. The boundary aware weight w is defined as:

where g_x and g_y are the gradients of the ground truth in the x and y directions, ĝ_x and ĝ_y are the gradients of the predicted depth map in the x and y directions, and N is the total number of pixels. We use the Sobel operator [31] to extract the gradients in this paper. The weight w includes a true item and an error item; w enforces the model to focus on the boundary area by setting different weights for these pixels. The true item becomes large when a pixel has a large gradient in the ground truth. The error item plays its role when there is a large gradient prediction error. When the depths predicted for the background and the boundary are both too large or too small at the same time, the true item and the depth error item in (1) become large, which guides the model to focus on these pixels even though the gradient error is small. The error item guides the model to focus on the boundary regions where the gradient error is large. The depth loss is large when the true item and the error item are large at the same time. To ensure that our model is more aware of object boundaries and point cloud quality, we retain the edge loss item and the normal loss item proposed by [15]. The total loss is defined as:
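Since the display forms of Eqs. (1)-(2) are not reproduced above, the PyTorch sketch below shows one way to realize the described composition: Sobel gradients of the ground truth and the prediction, a boundary aware weight built from a "true" item (ground-truth gradient magnitude) and an "error" item (gradient prediction error) scaled by the aware factor α = 0.3, and a weighted per-pixel depth error averaged over the N pixels. The exact algebraic form, and how the retained edge and normal terms from [15] are weighted in the total loss, are assumptions and not the paper's equations.

```python
import torch
import torch.nn.functional as F

# Sobel kernels used to extract x/y depth gradients (the paper uses the Sobel operator [31]).
_SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)

def _gradients(depth):
    """Return (gx, gy) Sobel gradients of a (B, 1, H, W) depth map."""
    gx = F.conv2d(depth, _SOBEL_X.to(depth), padding=1)
    gy = F.conv2d(depth, _SOBEL_Y.to(depth), padding=1)
    return gx, gy

def bad_loss(pred, gt, alpha=0.3):
    """Sketch of the Boundary Aware Depth loss (BAD); the exact form is assumed."""
    gx, gy = _gradients(gt)
    gx_hat, gy_hat = _gradients(pred)
    true_item = gx.abs() + gy.abs()                         # large on ground-truth boundaries
    error_item = (gx - gx_hat).abs() + (gy - gy_hat).abs()  # large where gradients are mispredicted
    weight = 1.0 + alpha * (true_item + error_item)         # boundary aware weight w
    return (weight * (pred - gt).abs()).mean()              # weighted depth error over N pixels
```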

    Extensive experiments show that BAD can improve the accuracy of the boundary and depth.

    4 Experiments

In this section, we first introduce the evaluation indicators used in our experiments. Then we introduce the datasets, and finally we conduct various experiments on the datasets to prove the effectiveness of the proposed modules and loss function.

    4.1 Quantitative Evaluation Indexes

This paper follows [3] in using the following metrics to evaluate the proposed model's performance. These metrics are defined as:

Root mean squared error (RMSE):

Absolute relative difference (AbsRel):

log10:

Threshold (δ):

where thr = 1.25, 1.25², 1.25³, g_i is the ground-truth depth, d̂_i is the predicted depth value, and T is the set of valid pixels in the ground truth.
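The display equations for these metrics were not reproduced above; for reference, a sketch of the standard definitions following Eigen et al. [3], written with the symbols used here, is:

```latex
\begin{align*}
\mathrm{RMSE}   &= \sqrt{\frac{1}{|T|}\sum_{i \in T}\bigl(g_i - \hat{d}_i\bigr)^2}\\
\mathrm{AbsRel} &= \frac{1}{|T|}\sum_{i \in T}\frac{\bigl|g_i - \hat{d}_i\bigr|}{g_i}\\
\log_{10}       &= \frac{1}{|T|}\sum_{i \in T}\bigl|\log_{10} g_i - \log_{10} \hat{d}_i\bigr|\\
\delta          &= \frac{\bigl|\{\, i \in T : \max\!\bigl(g_i/\hat{d}_i,\; \hat{d}_i/g_i\bigr) < thr \,\}\bigr|}{|T|}
\end{align*}
```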

    4.2 Datasets and Experimental Setting

This paper focuses on indoor scenes, which have complex structures and a large number of objects, so we mainly trained and evaluated our model on NYU-Depth V2 [19]. The NYU-Depth V2 [19] dataset is the most popular indoor dataset for monocular depth estimation and semantic segmentation. It was captured with a Kinect depth camera [32], mainly for scene understanding. It contains 1449 RGB-D image pairs with a resolution of 640*480 from 464 different indoor scenes in 3 cities. These image pairs are divided into two parts: 795 image pairs captured in 249 scenes are used as the training set, and 654 image pairs from 215 scenes are used as the test set. In addition, the dataset also contains the corresponding semantic segmentation labels. In our experiments, we use the training set of 50K RGB-D images preprocessed by Hu et al. [15].

In this paper, we use PyTorch [33] to implement our model. In the encoder, we use SENet-154 [30] as our backbone, initialized with a model pre-trained on ImageNet [34]. We use the Adam optimizer with a learning rate of 0.0001, β1 = 0.9, β2 = 0.999, and a weight decay of 10⁻⁵, and apply a decay policy that reduces the learning rate to 10% every 5 epochs. We train for 10 epochs. Our model was trained and evaluated on two Tesla V100 GPUs (32 GB version) with a batch size of 16.
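A minimal PyTorch sketch of this training configuration is shown below; the scheduler choice (StepLR) is an assumption that realizes "reduce the learning rate to 10% every 5 epochs".

```python
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

def build_optimizer(model):
    """Optimizer and schedule following the hyper-parameters of Section 4.2."""
    optimizer = Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=1e-5)
    scheduler = StepLR(optimizer, step_size=5, gamma=0.1)  # lr <- 0.1 * lr every 5 epochs
    return optimizer, scheduler
```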

    4.3 Performance Comparisons

In this section, we evaluate our model qualitatively and quantitatively on the NYU-Depth V2 dataset [19]. First, we compare different state-of-the-art models with our model on the NYU-Depth V2 test set [19] using the common indicators; the results are shown in Table 2. From Table 2 we can see that our model achieves the state of the art in δ2 and δ3, and we also obtain the second-best results in δ1 and Rel. Although our accuracy is slightly behind, Reynolds et al. [16] built an FFP to fuse the features of each scale to improve prediction accuracy, which makes the model large. In contrast, we fuse features only once and use the ST to transform the fused feature to different scales. Furthermore, the output of the SU has only 128 channels, which also reduces the parameters. Moreover, the main purpose of this paper is not to improve the depth accuracy but to obtain clear boundaries in the depth maps. To demonstrate our contribution to predicting clearer boundaries, we provide the qualitative results of our model in Fig. 6, which includes five groups of results. From Fig. 6, we can see that our model predicts clearer structures than the other models, sometimes even better than the ground truth, as in the black boxes of the first group. The ground truth is captured by the Kinect [32] using infrared light, but glass reflects the rays, which leads to missing depth on the glass, whereas monocular depth estimation can still predict the depth of the glass.

Table 2: Evaluation results of depth estimation on the NYU-Depth V2 test set. The best results are boldfaced, and the second-best ones are underlined. The shown values of the evaluated methods are those reported by the authors in their papers

In the second group, our model predicts the clearest structure of the bed with a sharp boundary. Although Chen et al. [17] and Hu et al. [15] keep the whole structure of the bed, they suffer from serious boundary smoothing, which makes some structures indistinguishable from the background. The same situation appears in the third group, where Chen et al. [17] and Hu et al. [15] cannot predict the toy with a clear boundary, so the toy and the sofa are mixed and the toy is completely invisible in the depth map. By contrast, our model predicts the toy with an obvious boundary, so the toy and the sofa can be distinguished. In the fourth group, we can see that the other methods predict fuzzy table boundaries, especially in the area selected by the black box. Although Chen et al. [17] and Hu et al. [15] also predict the overall structure of the chair, the predicted edges of the backrest are very blurry and there is a serious smoothing phenomenon. Additionally, in the black box on the wall, we can see a chair placed against the wall in the ground truth. Our method not only successfully predicts the chair with sharp edges but also suppresses the boundary smoothing phenomenon, so the chair and the wall are distinguished. The other algorithms do not deal with the boundary smoothing phenomenon, so the wall and the chair are indistinguishable. The same situation also occurs in the fifth group, in which the other algorithms do not predict clear boundaries between the cabinet and the table. Because the cabinet backs against the wall, the predicted depth of the display is very close to the depth of the wall. As a comparison, our model not only predicts clear boundaries but is also more discriminative in the depth prediction of the display. In general, our algorithm preserves the overall structure of the scene better than the other algorithms and can effectively distinguish objects from the background thanks to depth maps with clear boundaries. In addition, the clear boundary information also helps us obtain more accurate depth predictions when the background and object depths are similar. To prove that our method estimates depth more accurately at edges, we evaluate the depth accuracy in the edge region and compare it with others. The results are shown in Table 3 (edge threshold = 0.5, the boundary gradient threshold proposed in Hu et al. [15]; a pixel is regarded as a boundary pixel if its gradient is larger than the threshold). Ours shows the best performance in δ2 and δ3.

Figure 6: Qualitative results on the NYU-Depth V2 test set. From left to right: input RGB images, ground-truth depth maps, results of Eigen et al. [3], Laina et al. [8], Hu et al. [15], Chen et al. [17], and our method, respectively

    4.4 Models Test in Other Dataset

To evaluate our model more thoroughly, we test our pre-trained model directly on SUN RGB-D [20]. SUN RGB-D [20] is a scene understanding benchmark that includes three datasets: NYU-Depth V2 [19], B3DO [37], and SUN3D [38]. To ensure a fair comparison, we choose Hu et al. [15], Chen et al. [17], and Yang et al. [18] as the comparison models, all trained on the same dataset [19]. The comparison results are shown in Fig. 7. We select five sets of depth maps predicted from different scenes as a comparison. In the first group, the comparison models suffer from blurry boundaries. In the second group, from the black boxes, we can see that our model predicts a desktop with a clearly preserved object structure, and our model's object boundaries are the clearest (tables and chairs in the black box). The missing depth in the ground truth is due to specular reflections in the scene, which make it difficult to obtain depth in the corresponding area; this is a limitation of RGB-D cameras and LiDAR. This phenomenon can also be found at the boundaries in the second and third groups. What's more, we also predict the outline of the faucet, while the other algorithms do not produce a clear outline and it is very hard to distinguish the faucet from the background. In the third group, we can also see that a clearer object structure is preserved by ours than by the others. From the fourth group, we can see that our model shows an obvious advantage over other algorithms in structure preservation: it retains the complete object structure while also producing sharp boundaries, so the object can be clearly distinguished from the background. In the other algorithms, it is sometimes difficult to distinguish the object from the background due to the blurry structure and smooth edges; moreover, an object is easily mistaken for the background when its edges are blurry.

Table 3: Evaluation results of edge depth estimation on the NYU-Depth V2 test set. The best results are boldfaced, and the second-best ones are underlined

As can be seen in the last group, our model predicts sharper table legs than the other models. This experiment demonstrates that our model can predict depth maps with clear boundaries.

Furthermore, without a clear boundary, an object tends to be given a depth value similar to the background, resulting in large errors. To explore the generalization of our model, we test our pre-trained model on the iBims-1 dataset [21] and compare it with others. The results are shown in Table 4. We can see that our model achieves the state of the art in δ1, δ2, δ3, rel, and log10. The results show that the model proposed in this paper generalizes well.

We show the visual results in Fig. 8, which includes four comparison groups. In the first group, our method preserves the clearest outline of the lamp. In the second group, ours maintains the structure of the windows, which the others miss. In the third group, ours keeps more detail in the plant, and its boundaries are clearer than those of the other methods. In the last group, our prediction keeps the structure of the windows and shelves that the others cannot keep. As mentioned above, our method shows excellent performance in preserving object detail and predicting clearer boundaries.

Figure 7: Qualitative results of the test on SUN RGB-D. From left to right: input RGB images, ground-truth depth maps, results of Hu et al. [15], Chen et al. [17], Yang et al. [18], and our method, respectively

Table 4: Evaluation results of depth estimation on the iBims-1 dataset [21]. The best results are boldfaced, and the second-best ones are underlined

Figure 8: Qualitative results of the test on the iBims-1 dataset. From left to right: input RGB images, ground-truth depth maps, results of Hu et al. [15], Chen et al. [17], Yang et al. [18], and our method, respectively

    4.5 Boundary Accuracy Comparisons

To prove that our model can predict more accurate boundaries than others, we compare them with specific evaluation indicators. We follow Hu et al. [15] in using Precision, Recall, and F1 scores to evaluate performance; the results are shown in Table 5. The threshold is the boundary gradient threshold proposed in Hu et al. [15]: a pixel is regarded as a boundary pixel if its gradient is larger than the threshold. We can see that our model achieves 3 best and 5 second-best results across the 3 indicators under 3 different thresholds. The results prove that our model performs at the state of the art in edge accuracy.

Table 5: Accuracy of recovered edge pixels in depth maps under different thresholds on the NYU-Depth V2 test set. The best results are boldfaced, and the second-best ones are underlined (Thres is the boundary gradient threshold proposed in Hu et al. [15])

Table 5 (continued)

Thres   Method             Prec     Recall   F1
        Chen et al. [17]   0.663    0.523    0.578
        Ours               0.680    0.520    0.582
1       Laina et al. [8]   0.670    0.479    0.548
        Xu et al. [24]     0.794    0.407    0.525
        Fu et al. [26]     0.483    0.512    0.485
        Hu et al. [15]     0.759    0.540    0.623
        Yang et al. [18]   0.774    0.544    0.631
        Chen et al. [17]   0.749    0.554    0.630
        Ours               0.770    0.553    0.635
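The sketch below illustrates the edge-accuracy protocol of Section 4.5, assuming Sobel gradient magnitudes (consistent with Section 3.3) and simple binary boundary masks; the exact edge extraction used by Hu et al. [15] may differ in detail.

```python
import torch
import torch.nn.functional as F

def sobel_magnitude(depth):
    """L1 Sobel gradient magnitude of a (B, 1, H, W) depth map."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=depth.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    return F.conv2d(depth, kx, padding=1).abs() + F.conv2d(depth, ky, padding=1).abs()

def edge_prf(pred, gt, thres=0.5):
    """Precision/Recall/F1 between predicted and ground-truth boundary masks,
    where a pixel counts as a boundary pixel if its gradient exceeds the threshold."""
    pred_edge = sobel_magnitude(pred) > thres
    gt_edge = sobel_magnitude(gt) > thres
    tp = (pred_edge & gt_edge).sum().item()
    precision = tp / max(pred_edge.sum().item(), 1)
    recall = tp / max(gt_edge.sum().item(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-8)
    return precision, recall, f1
```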

    4.6 Generating Point Cloud from Depth Maps

As mentioned in the previous sections, sharp boundaries can effectively suppress "flying pixels" in the point clouds projected from depth maps. To verify this, we projected the predicted depth maps into 3D point clouds and rendered them from novel views using OpenCV. The results are presented in Fig. 9. In the first group, the other algorithms show serious pixel drift at the man's head in the red block; our algorithm suppresses this phenomenon well and the point cloud at the head is continuous. The same situation appears in the second group. The projection of Hu et al. [15] is relatively good, but it still shows serious distortions at the upper boundary of the screen. Our model also has distortions at the bottom edge of the screen, but the overall structure is better than the others. In the third group, we can see that all methods preserve the overall structure of the scene well. However, by changing the viewpoint, we find that the TV screen predicted by the other methods has serious "flying pixels" and the screen is distorted with a curved surface. Although ours suffers slightly from flying pixels at the top of the screen, the overall screen is not distorted. Through the point cloud comparison experiments, we show that our algorithm can effectively suppress the "flying pixels" phenomenon, and we confirm that accurate edge information helps improve the quality of point clouds projected from depth maps.
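For reference, back-projecting a depth map into a point cloud uses the pinhole camera model; the NumPy sketch below shows this projection step, with the camera intrinsics left as parameters because they are not listed in the paper (for NYU-Depth V2 they would come from the Kinect calibration). Rendering the cloud from novel views, as done here with OpenCV, is a separate step.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map to an (N, 3) point cloud
    with the pinhole camera model; invalid (zero-depth) pixels are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]
```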

    4.7 The Comparison of Model’s Params

To prove that the ST performs well in reducing the number of parameters, we compare our model with others; the results can be seen in Fig. 10. From Fig. 10, we find that our model has slightly more parameters than Hu et al. [15]. That is because [15] only fuses the multi-scale features once at a single scale and does not transform the fused feature to different scales, while our model not only fuses the features of all scales but also transforms the fused feature into 5 different scales. What's more, compared with Chen et al. [17] and Yang et al. [18], who built FFPs, our model contains only two-thirds of the parameters of these FFP models.

Figure 9: The result of comparing the point clouds projected from ours and other methods. From left to right: input RGB images, ground-truth depth maps, results of Hu et al. [15], Chen et al. [17], Yang et al. [18], and our method, respectively

Figure 10: Comparison of the number of model parameters. From left to right: results of Hu et al. [15], Chen et al. [17], Yang et al. [18], and our method, respectively

    4.8 Ablation Studies

To explore our model in detail, we designed corresponding ablation experiments for the proposed method. The results are shown in Tables 6 and 7, based on a threshold of 0.5. The first group mainly shows the performance of the baseline, used as the benchmark for subsequent comparisons. For the baseline, we use SENet-154 [30] as the backbone and use the composite loss function proposed by Hu et al. [15] to supervise training. From the results of the second group, we can see that directly adding BAD to supervise model training effectively improves the prediction accuracy of the model. Comparing the third group with the baseline, we can see that SU+ST greatly enhances the model performance. To confirm the effectiveness of ST, in the fourth group we use bilinear interpolation to sample the output of the SU to different scales directly. Comparing the fourth group with the fifth group, we find that the fifth group, which uses the ST, is more accurate than the fourth group, which uses bilinear interpolation directly. The comparison between the second group and the fourth group proves that SU can effectively improve the model, with a particularly large improvement in edge accuracy. Comparing the third group with the fifth group, we can see that BAD greatly promotes the model's performance.

Table 6: The depth accuracy comparison, with and without our modules. The best results are boldfaced, and the second-best ones are underlined

Table 7: The edge accuracy comparison, with and without our modules. The best results are boldfaced, and the second-best ones are underlined

    5 Conclusion

To deal with the blurry boundary caused by low-level information loss during feature extraction and by boundary smoothing during training, a Scene Understanding module, a Scale Transform module, and a Boundary Aware Depth loss function were proposed. The Scene Understanding module and the Scale Transform module focus on the information loss: the Scene Understanding module learns the global information of the scene, and the Scale Transform module transforms the global scene information to multiple scales to build a feature pyramid with few additional parameters. The Boundary Aware Depth loss was designed to enforce the model to focus on the depth in the boundary region during training. Extensive experiments show that our modules and the novel loss function enable our model to predict depth maps with clearer object boundaries than others. Most importantly, without a clear boundary an object tends to be predicted with the same depth value as the background, which means that boundary information influences depth prediction. Some problems still exist. For example, although our model recovers boundaries very well, the point clouds are not always good enough; some planes are not smooth. Another problem is that the time complexity of our model is high. In future work, we will concentrate on improving the accuracy of depth prediction and reducing the time complexity, and we will further explore the influence of object boundaries on depth prediction.

    Funding Statement:This work was supported in part by School Research Projects of Wuyi University(No.5041700175).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
