
    Stereo Matching Method Based on Space-Aware Network Model

    2021-04-27 10:29:16

    Jilong Bian and Jinfeng Li

    1College of Information & Computer Engineering, Northeast Forestry University, Harbin, 150040, China

    2College of Computer & Information Technology, Mudanjiang Normal University, Mudanjiang, 157011, China

    ABSTRACT A stereo matching method based on a space-aware network is proposed, which divides the network into three sections: basic layer, scaling layer, and decision layer. This division makes it possible to integrate residual networks and dense networks into the space-aware network model. A vertical splitting method for computing matching cost with the space-aware network is proposed to overcome the limitation of GPU RAM. Moreover, a hybrid loss is introduced to boost the performance of the proposed deep network. In the proposed stereo matching method, the space-aware network is used to calculate the matching cost, and then cross-based cost aggregation and semi-global matching are employed to compute a disparity map. Finally, disparity post-processing methods such as sub-pixel interpolation, median filtering, and bilateral filtering are applied. The experimental results show that this method performs well in both running time and accuracy, with a percentage of erroneous pixels of 1.23% on KITTI 2012 and 1.94% on KITTI 2015.

    KEYWORDS Deep learning; stereo matching; space-aware network; hybrid loss

    1 Introduction

    Stereo matching is an important research topic in the field of computer vision. It is widely used in three-dimensional reconstruction [1], autonomous navigation [2,3], and augmented reality [4]. The input of stereo matching consists of two epipolar-rectified images taken from different points of view, one of which serves as the reference image and the other as the matching image. For each pixel (x, y) in the reference image, stereo matching identifies a pixel (x − d, y) in the matching image corresponding to the same point in the scene, where d is the disparity of the pixel (x, y). According to the principle of triangulation, the depth of the pixel (x, y) can be calculated as Z = fB/d, where f is the focal length and B is the baseline length.
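As a quick numeric check of the triangulation relation Z = fB/d, the sketch below converts a disparity to a depth; the focal length and baseline are made-up illustrative numbers, not values from the paper:

```python
def depth_from_disparity(d, f, B):
    # triangulation: depth Z = f * B / d (f in pixels, B in metres)
    return f * B / d

# hypothetical KITTI-like calibration: f = 721 px, B = 0.54 m
z = depth_from_disparity(50.0, 721.0, 0.54)
print(round(z, 4))  # → 7.7868: a 50-pixel disparity maps to ~7.79 m
```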

    The stereo matching process is divided into four steps: cost calculation, cost aggregation, disparity calculation, and disparity refinement [5]. Cost calculation is the first step in the stereo matching process, and its quality largely affects the accuracy of stereo matching. For the past few years, deep learning has made great progress and is widely applied in intelligent traffic [6–9], network security [10–14], privacy protection [15–18], and natural language processing [19–21]. Recently, deep learning has also been applied to stereo matching to calculate matching cost because of its powerful feature representation ability. It can improve the robustness of matching cost to radiometric differences and geometric distortion and enhance matching accuracy. Lecun et al. [22,23] first employed a Siamese network structure [24] to calculate matching cost: the matching cost is aggregated by the cross-based cost aggregation method, a disparity map is produced by the semi-global matching method [25], and finally the disparity map is refined using disparity post-processing methods. Subsequently, Zagoruyko et al. [26] extended the Siamese network structure and proposed three network structures, which were applied to stereo matching to calculate matching cost. Chen et al. [27] put forward a deep embedding model similar to the central-surrounding two-stream network [26]. Luo et al. [28] proposed an efficient deep learning model, which takes image patches of different sizes as input of the left and right branch networks; the right image patch is larger than the left image patch and contains all disparities. Deep learning used in this way to calculate matching cost has achieved good matching results. However, this kind of method ties the depth of the network model to the size of the training patches: the network depth depends on the size of a training image patch. As a result, it is impossible to increase the network depth to achieve higher matching accuracy without changing the size of training image patches, which makes this method unable to effectively use excellent deep network structures such as residual networks [29] and dense networks [30].

    To increase the depth of the network and improve matching accuracy, we propose a stereo matching method based on the space-aware network. Firstly, matching cost is calculated using a deep network; then the matching cost is aggregated by the cross-based cost aggregation method, and a disparity map is computed by the semi-global method. Finally, the disparity map is further refined by disparity post-processing methods. The main contributions of this paper are as follows. Firstly, we propose a space-aware network model, which is able to integrate many popular network models. Secondly, a hybrid loss function is designed to enhance network performance. Finally, a vertical splitting method is proposed to calculate feature maps for a whole image while reducing GPU memory consumption.

    2 Space-Aware Network Model

    2.1 Basic Model

    Deep learning has been applied to the calculation of matching cost and can produce good matching results [23]. The deep network is called a Siamese network, which consists of two parts, a feature layer and a decision layer; its structure is shown in Fig. 1. The feature layer is composed of two branches with the same structure and weights, each of which receives an image patch. The two image patches are fed through convolution layers, ReLU layers, and max-pooling layers. Each time they pass through a convolution layer, their size decreases. Finally, each branch yields a one-dimensional feature vector, and these two feature vectors are concatenated and fed into a decision layer. The decision layer consists of a linear fully connected layer followed by a ReLU layer and outputs a scalar value, a probability denoting whether the left and right image patches are similar. Fig. 1 shows a deep network including 4 convolution layers with convolution kernels of size 3×3, so the depth of this network model determines its input size of 9×9. In other words, the size of the training image patches is determined by the number of convolution layers: when the kernel size is fixed, the more convolution layers there are, the larger the image patch must be. This characteristic of the network model limits its depth and the application of ResNet [29] and DenseNet [30]. If the network is deepened, the size of the training image patches must inevitably increase, which causes over-fitting.

    Figure 1:Deep network for stereo matching
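The size constraint described above can be verified with a tiny numpy sketch: four "valid" 3×3 convolutions (each followed by ReLU) collapse a 9×9 patch to a single value, so an n-layer stack of 3×3 kernels forces training patches of size (2n+1)×(2n+1). This is an illustrative toy, not the paper's Torch7 implementation:

```python
import numpy as np

def conv2d_valid(x, k):
    # naive single-channel 'valid' 2-D convolution: each 3x3 layer
    # shrinks the patch by 2 pixels per spatial dimension
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.random((9, 9))                 # a 9x9 training patch
for _ in range(4):                     # four 3x3 conv layers
    x = np.maximum(conv2d_valid(x, rng.random((3, 3))), 0)  # conv + ReLU
print(x.shape)  # → (1, 1): the 9x9 patch collapses to one feature
```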

    2.2 Residual Model

    He et al. [29] proposed the residual network model, which has been applied to image classification and achieved very good results. Up to now, it is still a popular network model with many variants. The basic idea of this model is to add an identity shortcut connection to the network, which skips several convolution layers at a time. A residual block structure is shown in Fig. 2. A residual block can be expressed as H(X) = F(X) + X and is composed of two parts: the residual F(X) and the identity mapping X. In general, a residual block consists of two or three convolution layers, and these residual blocks are stacked to form a residual network.

    Figure 2:Residual block
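The relation H(X) = F(X) + X can be sketched in a few lines; the two linear maps standing in for the block's convolution layers are a simplification for illustration:

```python
import numpy as np

def residual_block(x, w1, w2):
    # H(x) = F(x) + x: a two-layer residual F (linear, ReLU, linear)
    # plus the identity shortcut connection
    f = np.maximum(x @ w1, 0) @ w2
    return f + x

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1 = rng.standard_normal((8, 8))
w2 = rng.standard_normal((8, 8))
y = residual_block(x, w1, w2)

# with all-zero weights the residual F vanishes and the block
# reduces to the identity mapping
assert np.allclose(residual_block(x, np.zeros((8, 8)), np.zeros((8, 8))), x)
```

The identity check makes the design motivation concrete: a residual block can always fall back to passing its input through unchanged, which is what makes very deep stacks trainable.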

    The idea of residual connections was extended to propose the densely connected network [30]. As shown in Fig. 3, each convolution layer in a dense block has an identity shortcut connection to the convolution layers coming before it. The input of each convolution layer is the concatenation, along the feature dimension, of the feature maps of all preceding convolution layers. A layer in a dense block can be denoted as X_ℓ = H_ℓ([X_0, X_1, ..., X_{ℓ−1}]), where X_ℓ is the output of layer ℓ, [X_0, X_1, ..., X_{ℓ−1}] represents the concatenation of feature maps, and H_ℓ denotes a composite function of three consecutive operations: batch normalization (BN), followed by a rectified linear unit (ReLU) and a 3×3 convolution. A DenseNet consists of these dense blocks followed by transition layers. A transition layer is mainly composed of normalization layers, 1×1 convolution layers, and pooling layers.

    Figure 3:Dense block
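The recurrence X_ℓ = H_ℓ([X_0, ..., X_{ℓ−1}]) can be sketched as follows; the toy H_ℓ is a linear map plus ReLU with a fixed growth rate, and batch normalization is omitted for brevity:

```python
import numpy as np

def dense_block(x0, layers):
    # each layer receives the concatenation of all preceding outputs,
    # and the block emits the concatenation of everything it produced
    feats = [x0]
    for h in layers:
        feats.append(h(np.concatenate(feats)))
    return np.concatenate(feats)

rng = np.random.default_rng(1)

def make_layer(in_dim, growth=4):
    # toy H_l: ReLU(x @ W), always emitting `growth` new features
    w = rng.standard_normal((in_dim, growth))
    return lambda x: np.maximum(x @ w, 0)

x0 = rng.standard_normal(4)
# input widths grow by 4 per layer: 4, then 4+4, then 4+4+4
layers = [make_layer(4), make_layer(8), make_layer(12)]
out = dense_block(x0, layers)
print(out.shape)  # → (16,): 4 input features + 3 layers x growth 4
```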

    2.3 Space-Aware Network Model

    For each pixel p in a reference image, stereo matching first calculates the matching cost c(p, d), which forms a cost volume. Then a series of steps such as cost aggregation, disparity calculation, and disparity refinement is performed, and finally a disparity map is obtained. In general, absolute gray-level differences, the normalized cross-correlation function, and similar measures are used to calculate matching cost. This paper presents a method for computing matching cost using deep learning. At present, the size of a training image patch depends on the number of convolution layers in deep learning-based methods for computing matching cost. If the number of convolution layers is increased to obtain a more accurate matching cost, the training image patches become large, which results in over-fitting and reduces matching accuracy.

    To solve this problem and use more advanced network models to calculate the stereo matching cost, we propose a space-aware network model. The main characteristic of this model is that the feature layer is divided into two parts: a basic layer and a scaling layer. The purpose of the basic layer is to extract features, and it can use advanced network models such as residual networks and dense networks. Fig. 4 shows the overall structure of the space-aware network model. The input of the basic layer is a pair of image patches PatchL(p) and PatchR(p − d) of size 9×9, and the output of the basic layer is feature maps of size 9×9; in the basic layer, the spatial size of the feature maps is the same as that of the input. The purpose of the scaling layer, by contrast, is to reduce the spatial size of the feature maps to 1×1. Our proposed scaling layer consists of only one convolution layer, whose kernel size equals the size of the image patches. We do not choose a max-pooling layer or an average-pooling layer, because the scaling convolution layer acts like filter-based cost aggregation and thus can gather more spatial information to learn more discriminative features. For instance, when training image patches of size 9×9 are taken as input of the network model, the filter kernel size of the convolution layer in the scaling layer is selected as 9×9. When the training image patches of size 9×9 are fed into the basic layer, feature maps of size 9×9 are produced. These feature maps are then fed into the scaling layer, which outputs feature maps of size 1×1. Finally, the feature maps of the left and right image patches are concatenated to form a one-dimensional vector, which is taken as input to the decision layer, whose output is a probability denoting the similarity between the left and right image patches.

    Figure 4:Space-aware network
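The scaling layer can be viewed as one convolution whose kernel covers the entire 9×9 feature map, so each output channel collapses to a single 1×1 response. A minimal numpy sketch (the channel counts are made up for illustration):

```python
import numpy as np

def scaling_layer(feat, kernels):
    # feat: (C_in, 9, 9) basic-layer output for one patch
    # kernels: (C_out, C_in, 9, 9) -- each kernel spans the whole map,
    # so every output channel is a single scalar (a 1x1 response)
    return np.einsum('chw,ochw->o', feat, kernels)

rng = np.random.default_rng(2)
feat_l = rng.random((16, 9, 9))      # left-patch features
feat_r = rng.random((16, 9, 9))      # right-patch features
k = rng.random((32, 16, 9, 9))       # shared scaling-layer weights

vl = scaling_layer(feat_l, k)
vr = scaling_layer(feat_r, k)
v = np.concatenate([vl, vr])         # vector fed to the decision layer
print(vl.shape, v.shape)             # → (32,) (64,)
```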

    3 Hybrid Loss

    Because the proposed network model consists of three parts (basic layer, scaling layer, and decision layer), we combine the outputs of these three parts to define a hybrid loss function. The outputs of the two branches of the basic layer are flattened to one-dimensional vectors, and then the cosine similarity is calculated using an inner product layer:

    where uL and uR denote the outputs of the left and right branches of the basic layer. The output of each branch of the scaling layer is already a one-dimensional vector, so we do not need to flatten it and can directly compute the cosine similarity. Because ReLU is used as the activation function, the outputs of the network are non-negative, and the output range of the inner product layer is [0, 1]. For these two outputs, a hinge loss is used, where s1+ and s1− denote the output of the basic layer for positive and negative samples, s2+ and s2− represent the output of the scaling layer for positive and negative samples, and m is a constant set to 0.2 during training. For the output of the decision layer, a cross-entropy loss is used:

    where v+ and v− are the outputs of the decision layer for positive and negative samples. Finally, the total loss is:

    where θ is a constant and is set to 0.3.
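A possible reading of the hybrid loss is sketched below. The exact weighting of the three terms is not spelled out in the text, so combining them as cross-entropy + θ·(hinge₁ + hinge₂) is our assumption; m = 0.2 and θ = 0.3 follow the stated settings:

```python
import numpy as np

def cosine(u, v):
    # cosine similarity produced by the inner product layer
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def hinge(s_pos, s_neg, m=0.2):
    # pushes the positive similarity above the negative one by margin m
    return max(0.0, s_neg + m - s_pos)

def cross_entropy(v_pos, v_neg):
    # binary cross-entropy on the decision-layer probabilities
    return -np.log(v_pos) - np.log(1.0 - v_neg)

def hybrid_loss(s1, s2, v, theta=0.3):
    # s1, s2: (pos, neg) similarities from the basic and scaling layers
    # v: (pos, neg) decision-layer outputs
    # NOTE: this particular combination of the three terms is an
    # assumption; the paper only gives m = 0.2 and theta = 0.3
    return cross_entropy(*v) + theta * (hinge(*s1) + hinge(*s2))

loss = hybrid_loss((0.9, 0.2), (0.8, 0.3), (0.95, 0.05))
assert loss > 0.0
```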

    4 Vertical Splitting Method

    During training, the proposed deep network produces three outputs: one output of the decision layer and two outputs of the inner product layers, but only the output of the decision layer is used as the matching cost to compute disparities:

    where PatchL(·) and PatchR(·) denote left and right image patches respectively, and network(·) denotes the output of the decision layer. To calculate an initial cost volume with the space-aware network, it is necessary to feed the network the left and right image patches for every pixel in the reference image and the corresponding pixel in the matching image at all possible disparities. The advantage of this patch-wise method is that it reduces GPU memory consumption, but it greatly increases the running time. An alternative method [23] uses a whole image as input to calculate the matching cost. This method computes the feature maps of the left and right images only once, so it greatly decreases the running time and improves efficiency, but it requires more GPU memory.

    Therefore, we propose a vertical splitting method to reduce GPU memory consumption. The main idea of this method is to divide the left and right images vertically into several patches:

    where py denotes the vertical coordinate of p and K denotes the patch height. Then an initial cost volume is produced for each pair of vertical patches using the space-aware network:

    where IiL(·) and IiR(·) denote the ith patch of the left and right images, respectively. Finally, these sub-cost volumes CiCNN(p, d) are concatenated vertically to form the complete cost volume CCNN(p, d).
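The splitting idea can be illustrated with a toy cost function; here a simple absolute-difference cost stands in for the network, in which case strip-wise computation reproduces the full volume exactly (a real CNN has a spatial receptive field, so an actual implementation would need overlapping strips at the borders):

```python
import numpy as np

def cost_volume(left, right, max_disp):
    # stand-in for the network: absolute-difference matching cost;
    # pixels with no valid correspondence keep an infinite cost
    h, w = left.shape
    c = np.full((h, w, max_disp), np.inf)
    for d in range(max_disp):
        c[:, d:, d] = np.abs(left[:, d:] - right[:, :w - d])
    return c

def vertical_split_cost(left, right, max_disp, K):
    # process K-row strips independently, then stack them vertically
    parts = [cost_volume(left[i:i + K], right[i:i + K], max_disp)
             for i in range(0, left.shape[0], K)]
    return np.concatenate(parts, axis=0)

rng = np.random.default_rng(3)
L = rng.random((12, 20))
R = rng.random((12, 20))
full = cost_volume(L, R, 5)
split = vertical_split_cost(L, R, 5, K=4)
# same result as the whole-image pass, but with smaller peak memory
assert np.allclose(full, split)
```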

    5 Disparity Calculation

    The output of the space-aware network is an initial 3D cost volume. Cost aggregation, disparity calculation, and disparity post-processing are used to obtain a more accurate disparity map. Firstly, cross-based cost aggregation is employed; secondly, semi-global matching is adopted. Finally, a series of disparity post-processing methods is applied, such as left-right consistency check, sub-pixel enhancement, median filtering, and bilateral filtering.

    5.1 Cross-Based Cost Aggregation

    The cross-based cost aggregation method [31] first constructs a cross arm for each pixel and then uses the cross arms to define a supporting area for each pixel. The left arm of a pixel p can be defined as follows, where I(·) represents a gray value, and α and β are a predefined gray threshold and a predefined distance threshold, respectively. Eq. (8) shows that the pixel p is taken as a starting point and the arm is extended continuously to the left under the constraints of the predefined gray threshold and distance threshold. The arms right(p), top(p), and bottom(p) of the pixel p are constructed in the same way. After these arms are defined, the supporting area support(p) can be defined as:

    Then cost aggregation is carried out over the supporting area support(p):

    where CCNN(·,·) denotes the initial matching cost.
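Arm construction under the gray threshold α and distance threshold β can be sketched as follows; α = 4 and β = 5 are illustrative values (the paper later uses α = 4 in its experiments), and the image is a toy example:

```python
import numpy as np

def arm_length(I, p, step, alpha=4, beta=5):
    # extend from pixel p in direction `step` while the gray difference
    # to p stays below alpha and the arm stays shorter than beta
    y, x = p
    n = 0
    while n + 1 <= beta:
        ny, nx = y + step[0] * (n + 1), x + step[1] * (n + 1)
        if not (0 <= ny < I.shape[0] and 0 <= nx < I.shape[1]):
            break                               # hit the image border
        if abs(int(I[ny, nx]) - int(I[y, x])) >= alpha:
            break                               # gray threshold violated
        n += 1
    return n

I = np.array([[10, 11, 12, 30, 31],
              [10, 10, 11, 11, 40]])
# from (0, 0): the rightward arm stops before the jump to gray 30,
# and the leftward arm is 0 because of the image border
assert arm_length(I, (0, 0), (0, 1)) == 2
assert arm_length(I, (0, 0), (0, -1)) == 0
```

The four arms of p then define a cross; support(p) is the union of the horizontal arms of every pixel on p's vertical arms, and aggregation averages CCNN over that region.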

    5.2 Semi-Global Matching

    Disparity calculation methods are usually classified into local optimization methods and global optimization methods. Global optimization methods, including dynamic programming, belief propagation, and graph-cut optimization, generally obtain a more accurate disparity map. A global optimization method transforms the stereo matching problem into an energy function minimization problem:

    where D denotes a disparity map, N(p) the neighborhood of pixel p, P1 and P2 constant penalties, and Dq the disparity of point q. The semi-global method [25] approximately solves the energy function by dynamic programming along multiple directions:

    where r denotes a direction and Cr(p, d) is the cost volume in direction r. The final matching cost is the sum of the matching costs over all directions:

    Then, disparities are calculated by the “winner-takes-all” method:
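A single left-to-right scanline of the semi-global recursion, followed by winner-takes-all, might look like the sketch below; P1 = 1 and P2 = 32 follow the experimental settings, and subtracting the previous minimum is the usual normalization that keeps the aggregated cost bounded:

```python
import numpy as np

def sgm_scanline(cost, P1=1.0, P2=32.0):
    # cost: (positions, disparities) along one row; this is the DP for
    # a single direction r (left to right)
    n, D = cost.shape
    Lr = np.empty_like(cost)
    Lr[0] = cost[0]
    for x in range(1, n):
        prev = Lr[x - 1]
        best_prev = prev.min()
        for d in range(D):
            candidates = [prev[d],                                  # same disparity
                          prev[d - 1] + P1 if d > 0 else np.inf,    # +-1 change: small penalty
                          prev[d + 1] + P1 if d < D - 1 else np.inf,
                          best_prev + P2]                           # larger jump: big penalty
            Lr[x, d] = cost[x, d] + min(candidates) - best_prev
    return Lr

rng = np.random.default_rng(4)
c = rng.random((8, 6))
L = sgm_scanline(c)
disp = L.argmin(axis=1)   # winner-takes-all over the aggregated cost
print(disp.shape)         # → (8,): one disparity per scanline position
```

The full method repeats this recursion in several directions and sums the resulting Lr volumes before the winner-takes-all step.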

    5.3 Disparity Post Processing

    To improve the accuracy of stereo matching, we use disparity post-processing methods such as left-right consistency check, sub-pixel enhancement, median filtering, and bilateral filtering. There are inevitably erroneous disparities in a disparity map, which may be caused by textureless areas and occlusion. These erroneous disparities can be detected by a left-right consistency check between the left and right disparity maps, and each pixel is marked by the following rules:

    where DL(p) is the left disparity map and DR(p) is the right disparity map. Background disparities are used to fill occlusions, and correct disparities in the neighborhood are used to replace erroneous matches.
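The consistency rule can be sketched as: a pixel p is kept when its left disparity agrees with the right disparity at p − DL(p). The one-pixel tolerance below is a common choice, not a value stated in the paper:

```python
import numpy as np

def lr_check(DL, DR, tol=1):
    # mark p valid when DL(p) == DR(p - DL(p)) within tol pixels;
    # disagreements indicate occlusions or mismatches
    h, w = DL.shape
    valid = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = DL[y, x]
            if x - d >= 0 and abs(d - DR[y, x - d]) <= tol:
                valid[y, x] = True
    return valid

DL = np.full((1, 4), 1, dtype=int)
DR = np.full((1, 4), 1, dtype=int)
v = lr_check(DL, DR)
# the leftmost pixel has no correspondence (x - d < 0); the rest agree
print(v)  # → [[False  True  True  True]]
```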

    Sub-pixel refinement can further improve matching accuracy. We use a sub-pixel refinement method based on quadratic curve fitting in the cost domain, which uses the optimal matching cost and its immediate left and right neighbors to fit a quadratic curve and obtain a sub-pixel-level disparity:

    where C− = CSGM(p, d − 1), C+ = CSGM(p, d + 1), and C = CSGM(p, d).
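Fitting a parabola through (d − 1, C−), (d, C), (d + 1, C+) and taking the abscissa of its minimum gives the sub-pixel offset (C− − C+) / (2(C− − 2C + C+)), which the following sketch implements:

```python
def subpixel(d, c_minus, c, c_plus):
    # vertex of the parabola through (d-1, C-), (d, C), (d+1, C+)
    denom = 2.0 * (c_minus - 2.0 * c + c_plus)
    if denom == 0:
        return float(d)          # degenerate fit: keep the integer disparity
    return d + (c_minus - c_plus) / denom

# symmetric neighbours: the integer disparity is already optimal
assert subpixel(10, 5.0, 1.0, 5.0) == 10.0
# a cheaper cost on the right shifts the estimate toward d + 1
assert 10.0 < subpixel(10, 5.0, 1.0, 3.0) < 11.0
```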

    The final step of stereo matching applies a 5×5 median filter and a bilateral filter:

    where g(·) is a Gaussian function, the leading factor denotes a normalization constant, and γ is a predefined threshold.
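A minimal 5×5 median filter illustrating the outlier-removal step (the bilateral filter is omitted for brevity, and border pixels are simply left unchanged):

```python
import numpy as np

def median5(D):
    # 5x5 median filter on a disparity map; removes isolated spikes
    # while preserving piecewise-constant regions
    out = D.copy()
    for y in range(2, D.shape[0] - 2):
        for x in range(2, D.shape[1] - 2):
            out[y, x] = np.median(D[y - 2:y + 3, x - 2:x + 3])
    return out

D = np.ones((7, 7))
D[3, 3] = 50.0                 # an isolated outlier disparity
F = median5(D)
print(F[3, 3])                 # → 1.0: the spike is suppressed
```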

    6 Experimental Analysis

    We use Lua and Torch7 to implement the proposed stereo matching method based on the deep space-aware network, and the network is trained on a GeForce GTX 1080Ti GPU. The experimental parameters are set as α = 4, β = 0.00442, P1 = 1, P2 = 32, and γ = 5. The experimental datasets are KITTI 2012 and KITTI 2015. The KITTI 2012 stereo dataset contains 194 training images and 195 test images, while the KITTI 2015 stereo dataset contains 200 training images and 200 test images. The training set is generated according to [6,17] and is composed of positive and negative samples. Positive samples are defined as matching image patches, while negative samples are defined as unmatched image patches. The number of positive samples equals the number of negative samples, which prevents the loss of accuracy caused by imbalanced samples.

    6.1 Training Strategy

    The choice of training strategy is very important in deep learning; a good training strategy can accelerate convergence and improve accuracy. In our experiments, the SGD optimizer is adopted with momentum set to 0.9. The learning rate is adjusted by OneCycleLR with a cosine annealing strategy, and the initial learning rate is set to 0.003. The learning rate curve is shown in Fig. 5. The space-aware network is trained for 14 epochs.

    Figure 5:Learning rate curve

    6.2 Classification Accuracy Analysis

    The computation of matching cost with deep learning is a binary classification problem. The higher the classification accuracy, the more accurate the matching cost, and the higher the matching accuracy. In this experiment, 80% of the training data is used for training and 20% for validation to analyze the classification accuracy of the proposed network model. The comparison of classification accuracy is shown in Fig. 6a, which indicates that the validation accuracy increases steadily as training epochs increase; our classification accuracy is 98.40%. As epochs increase, the classification accuracy of [22] shows slight fluctuation, and its classification accuracy is 94.25%. We also compare training loss and validation loss. The comparison of training loss is shown in Fig. 6b, which indicates that our proposed method obtains a lower training loss and better convergence. Fig. 6c shows the validation loss. As epochs increase, the validation loss of our proposed method gradually decreases and remains lower than that of [22], which shows that our network model is better than [22].

    Figure 6:Classification accuracy and loss for the proposed method and Ref.[22].(a) Classification accuracy (b) Training loss (c) Validation loss

    6.3 Matching Accuracy Analysis

    In this paper, we implement two space-aware network models. In the first network, called SADenseNet, a DenseNet with 20 dense blocks is selected as the basic layer, the scaling layer consists of a convolution layer with a filter kernel of size 11×11, and the decision layer comprises four fully connected layers. We train SADenseNet on the KITTI 2012 and KITTI 2015 datasets, respectively. Then 40 stereo image pairs are extracted from each dataset to calculate the average percentage of bad pixels at a 3-pixel threshold. The experimental results are shown in Tabs. 1 and 2. The second network is called SAResNet. In this network, the basic layer is composed of a residual network of 18 residual blocks, the scaling layer is one convolution layer of size 11×11, and the decision layer consists of four fully connected layers. The experimental results are also shown in Tabs. 1 and 2. Disparity maps for the two proposed networks are shown in Fig. 7, in which the first row is the left image and the ground truth; the second row is the disparity map and error map computed by SAResNet, in which green denotes correct disparities and red denotes erroneous disparities, with a 3-pixel error percentage of 0.16%; the third row shows the disparity map and error map for SADenseNet, with a 3-pixel error percentage of 0.46%. It can be observed that the errors inside the blue rectangle are clearly reduced in the disparity map calculated by SAResNet.

    Table 1:Comparison result for 2012 dataset

    Table 2:Comparison result for 2015 dataset

    Figure 7: Comparison of disparity maps for the two proposed deep networks

    6.4 Comparison of Experimental Results

    In this section, we first use the KITTI 2012 dataset to test the performance of our proposed stereo matching method based on the space-aware network model and compare it with other methods. We randomly select 40 stereo pairs and compute the average 3-pixel error percentage. The comparison results are shown in Tab. 1. Our proposed method gives good performance, with an average error percentage of 1.23%. Fig. 8 shows the disparity maps calculated by the four methods with the lowest error percentages. Fig. 8a shows the left and right images and the ground truth; Fig. 8b shows the calculated disparity and error map for SAResNet, with an error percentage of 0.65%; Fig. 8c for SADenseNet, with an error percentage of 1.2%; Fig. 8d for [32], with an error percentage of 4.97%; and Fig. 8e for [22], with an error percentage of 5.90%. From these error maps, it can be observed that our method clearly reduces the erroneous pixels in the blue rectangle.

    Figure 8: Results of disparity estimation for KITTI 2012. (a) The left and right images and the ground truth (b) SAResNet (c) SADenseNet (d) [32] (e) [22]

    We then use the KITTI 2015 dataset to test the performance of our space-aware network and compare it with other methods using the same metric as for KITTI 2012. The comparison results are shown in Tab. 2. Fig. 9 shows the disparity maps calculated by the four methods with the lowest error percentages. Fig. 9a shows the left and right images and the ground truth; Fig. 9b shows the calculated disparity map and error map for SAResNet, with a 3-pixel error percentage of 0.64%; Fig. 9c for SADenseNet, with 1.30%; Fig. 9d for [32], with 2.80%; and Fig. 9e for [22], with 2.90%. These experimental results show that our proposed method gives more accurate disparities. The blue rectangles in the error maps show that fewer pixels are marked as erroneous (red) than with the other methods.

    Figure 9: Results of disparity estimation for KITTI 2015. (a) The left and right images and the ground truth (b) SAResNet (c) SADenseNet (d) [32] (e) [22]

    7 Conclusion

    In this paper, a stereo matching method based on the space-aware network is proposed, which combines advanced network models with our network model, solves the GPU memory limitation through a vertical splitting method, and further improves network performance through a hybrid loss. Our proposed method is trained on the KITTI 2012 and KITTI 2015 datasets and compared with other methods. The experimental results show that the proposed method gives better performance, with an error rate of 1.23% on KITTI 2012 and 1.94% on KITTI 2015.

    Funding Statement: This work was supported in part by the Heilongjiang Provincial Natural Science Foundation of China under Grant F2018002, the Research Funds for the Central Universities under Grants 2572016BB11 and 2572016BB12, and the Foundation of Heilongjiang Education Department under Grant 1354MSYYB003.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
