
    Hybrid tree guided PatchMatch and quantizing acceleration for multiple views disparity estimation

    2021-06-17 08:40:04

    ZHANG Jiguang, XU Shibiao, ZHANG Xiaopeng

    (National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China)

    Abstract: Existing stereo matching methods cannot guarantee both computational accuracy and efficiency for the disparity estimation of large-scale or multi-view images. The hybrid tree method obtains a disparity estimate quickly but with relatively low accuracy, while PatchMatch gives high-precision disparity values at a relatively high computational cost. In this work, we propose Hybrid Tree Guided PatchMatch, which calculates disparity quickly and accurately. Firstly, an initial disparity map is estimated by hybrid tree cost aggregation and used to constrain the label searching range of PatchMatch. Furthermore, a reliable normal searching range for each normal vector defined on the initial disparity map is calculated to refine PatchMatch. Finally, an effective quantizing acceleration strategy is designed to decrease the matching cost computation for continuous disparity. Experimental results demonstrate that disparity estimation based on our algorithm performs better on binocular image benchmarks such as Middlebury and KITTI. We also provide disparity estimation results for multi-view stereo in real scenes.

    Key words: stereo matching; multiple views; disparity estimation; hybrid tree; PatchMatch

    0 Introduction

    Stereo matching, or multiple-view matching, has always been one of the most important research problems in computer vision because disparity (depth) estimation plays a crucial role in most computer vision applications, including depth-of-field rendering, consistent object segmentation, and multiple-view stereo (MVS). However, none of the current global or local stereo matching algorithms [1-3] achieves both matching accuracy and calculation efficiency during matching, which prevents numerous stereo matching-based applications from reaching their desired performance. Most current stereo matching-based multiple-view reconstruction methods [4-6] suffer from low accuracy, model incompleteness, and long running times. Although many algorithms have been developed to balance processing precision and speed, several challenges remain.

    1 Related Work

    In general, the traditional stereo matching process includes four main parts [7]: matching cost computation, cost aggregation, disparity estimation, and disparity refinement (optional). Among them, matching cost computation and cost aggregation are the core stages. Firstly, according to the differences between pixels, matching cost computation builds a cost volume (disparity space image, DSI), which stores a similarity score for each pixel at each disparity level. Then, cost aggregation is viewed as filtering over the cost volume, which plays the key role of denoising and refining the DSI. Finally, the disparities are computed with a local or global optimizer. Clearly, the quality of the matching cost computation and aggregation methods has a significant impact on the success of stereo matching algorithms.
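    The four-stage pipeline above can be sketched with the simplest possible choice at each stage. This is an illustrative toy (absolute-difference cost, horizontal box-filter aggregation, winner-takes-all), not the paper's hybrid tree method:

```python
import numpy as np

def disparity_wta(left, right, max_disp, win=2):
    """Toy local stereo pipeline: (1) matching cost computation builds the
    cost volume D(d, y, x) (the DSI), (2) cost aggregation box-filters each
    disparity slice along the scanline, (3) winner-takes-all picks the
    disparity with the minimum aggregated cost per pixel."""
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), 255.0)  # invalid positions get max cost
    for d in range(max_disp + 1):
        # pixel p in the left image is compared with p - (d, 0) in the right image
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    # horizontal box filter of width 2*win+1 via cumulative sums (edge-padded)
    k = 2 * win + 1
    pad = np.pad(cost, ((0, 0), (0, 0), (win + 1, win)), mode='edge')
    csum = np.cumsum(pad, axis=2)
    agg = (csum[:, :, k:] - csum[:, :, :-k]) / k
    return np.argmin(agg, axis=0)  # integer-valued disparity map
```

Real methods differ mainly in steps (1) and (2); the hybrid tree replaces the local window with tree-structured aggregation over the whole image.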

    Recently, Yang et al. [8] proposed a novel bilateral filtering method for cost aggregation to estimate depth information, which is very effective for high-quality local stereo matching. However, because the cost is aggregated within a local support window, the local minimum problem cannot be avoided. Yang et al. [9-10] further addressed this problem with a series of linear-complexity non-local cost aggregation methods, which extend the support window to the whole image by constructing a minimum spanning tree (MST). By incorporating the influence of all pixels, this approach considerably improves both computation speed and accuracy. In addition, Vu et al. [11] extended the MST-based cost aggregation method and proposed a hybrid tree algorithm that estimates depth information using pixel- and region-level MSTs, which avoids MST construction errors in texture-rich regions and improves the accuracy of MST-based cost aggregation.

    In contrast to the preceding stereo correspondence algorithms, which operate at one-pixel precision, the recently proposed PatchMatch stereo [12] can directly estimate highly slanted planes and recover notable disparity details with sub-pixel precision. Given these quality advantages, this kind of patch-based method has been extended under different considerations and requirements [13-15]. Furthermore, Shen [16] utilized the precise PatchMatch stereo to compute individual depth maps, achieving depth-map merging-based multi-view stereo (MVS) reconstruction for large-scale scenes. Shen [17] extended this work and proposed a PatchMatch-based MVS method for large-scale scenes: the PatchMatch process generates a depth map for each image with acceptable errors, and consistency is then enforced over neighboring views to refine the disparity values. The experimental results proved that multiple-view reconstruction using PatchMatch stereo is significantly more accurate than other methods. Besse [18] proposed a PatchMatch belief propagation stereo matching algorithm, which improved the estimation accuracy of PatchMatch with a global optimization strategy and achieved MVS matching in a 2-D flow field. Unfortunately, these strategies often involve a large label space for continuous disparity estimation, making it difficult to balance computational efficiency against estimation accuracy.

    More recently, deep learning methods have demonstrated their convenience, efficiency (excluding network training time), and academic potential in disparity (depth) estimation. The convolutional neural network (CNN) based pixel-level depth estimation method [34] achieves monocular depth estimation by supervised training on ground-truth data collected from depth sensors. By using multi-level contextual and structural information [35] in the CNN, depth estimation breaks free of traditional geometry-based algorithms. However, the feature extractors of such methods need repeated pooling, which reduces the spatial resolution of the features and hurts the accuracy of depth estimation. To address this issue, Hu et al. [36] combined multiple kinds of information from a single image to estimate depth. However, most of these methods consider only local geometric constraints, which causes discontinuous depth-map surfaces and blurred boundaries. Chen et al. [37] proposed a spatial attention-based network whose depth estimation performance is improved by a global focal relative loss and an edge-aware consistency module. Nevertheless, the lack of real scale cannot be avoided in monocular depth estimation. On the other hand, a considerable literature has grown up around CNN-based stereo matching for disparity estimation. MatchNet [19] obtains high-level feature maps through two CNNs and learns the correspondence between binocular images. MTCNN [20] transforms stereo matching into a binary classification problem, after which a classical stereo matching pipeline is still required to recover the final disparity. Evidently, most deep learning-based methods perform disparity estimation by combining learned features with traditional stereo matching strategies, but supervised training and data-set generation consume substantial resources and time. End-to-end training strategies reduce the time complexity of deep learning algorithms to some extent: DispNet [21] applies the end-to-end optical flow network FlowNet to improve the efficiency of binocular disparity estimation, and GCNet [22] generates matching costs with 3-D convolutions and end-to-end training to accelerate computation. However, most of these networks require a large amount of memory, which is unsuitable for large images.

    Although CNNs are the future trend of stereo matching, large training sets are required to improve the accuracy of disparity estimation, which is a labor-intensive and time-consuming requirement. At the same time, existing deep learning-based stereo matching methods generalize poorly and are not robust enough for real-scene reconstruction. It is worth noting that traditional stereo matching rests on strong mathematical foundations (such as photogrammetric geometry and epipolar geometry), conforms to natural imaging rules, and produces results closer to the real physical environment. Therefore, deep learning methods cannot completely replace classical stereo matching; integrating the two is a more reasonable way to advance the field. Hence, deeper mining and development of traditional stereo matching theory still deserves further study.

    1.1 Differences and Contributions

    In this paper, we extend our preliminary work [23] by applying hybrid tree-guided PatchMatch stereo and quantizing acceleration to a continuous disparity estimation framework. The proposed disparity acquisition algorithm builds on the recently popular hybrid tree cost aggregation algorithm [11] and the accurate PatchMatch stereo algorithm [12]. Other advanced stereo matching methods (not limited to the ones we use) at the one- and sub-pixel levels (sub-pixel and one-pixel denote the precision of the disparity: float-valued and integer-valued disparity, respectively) can also be integrated into our framework, which means two independent algorithms can be seamlessly merged to efficiently address the problem of large label spaces while maintaining or even improving the solution quality.

    Our obvious differences and contributions include:

    - Two independent algorithms (hybrid tree cost aggregation and PatchMatch stereo) can be seamlessly merged to achieve MVS matching applications while maintaining or even improving the solution quality.

    - An initial one-pixel-level disparity map is generated by hybrid tree cost aggregation to constrain the label searching range of the PatchMatch stereo algorithm. This map not only significantly accelerates PatchMatch but also improves the disparity from one-pixel to sub-pixel accuracy.

    - Instead of stepping through a large label space for similarity computation, an effective quantizing acceleration strategy is proposed that linearly interpolates the matching cost between the two closest disparity values, yielding high efficiency in cost computation.

    - Empirical experiments on the Middlebury and KITTI benchmarks show that our algorithm not only provides faster and more (or competitively) accurate disparity results than both original binocular methods [11, 12] but is also suitable for multiple-view reconstruction.

    2 Proposed Method

    In this section, we further explain our method for acquiring disparity information from a calibrated stereo image pair or multiple views. Our goal is to obtain a dense and high-quality disparity map while balancing accuracy and efficiency.

    According to the computation principle of the original PatchMatch stereo algorithm [12], an individual 3-D slanted plane at each pixel is used to overcome the bias toward fronto-parallel surfaces, and the method finds an approximate nearest neighbor among all possible planes. These properties allow it to achieve remarkable disparity details with sub-pixel precision. However, the random initialization assumes that every disparity plane in the image receives at least one good guess. This assumption often fails in real scenes, especially for middle- or low-resolution images, in which each 3-D plane contains only a few pixels and therefore receives too few guesses. Furthermore, iteratively searching from a random initialization to the final disparity of each pixel is time consuming.
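    The random initialization step can be made concrete: each pixel draws a random disparity and a random unit normal, which are converted to the plane coefficients used later in Equation (5). A minimal sketch (the clamping of near-horizontal normals is our own safeguard, not from the original paper):

```python
import numpy as np

def random_plane(px, py, max_disp, rng):
    """Sample a random slanted plane at pixel p = (px, py): draw a disparity
    d and a unit normal n, then convert (p, d, n) to coefficients (af, bf, cf)
    such that the plane's disparity at (x, y) is af*x + bf*y + cf."""
    d = rng.uniform(0.0, max_disp)
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)
    if n[2] < 0:
        n = -n                      # orient the normal toward the camera
    n[2] = max(n[2], 1e-3)          # avoid near-degenerate (vertical) planes
    af = -n[0] / n[2]
    bf = -n[1] / n[2]
    cf = (n[0] * px + n[1] * py + n[2] * d) / n[2]
    return af, bf, cf
```

By construction, evaluating the plane at p itself reproduces the sampled disparity d, so every pixel starts with a valid guess.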

    In the framework of our proposed algorithm, as shown in Fig. 1, pixels from the stereo image pairs are first grouped into super-pixel regions. Then an initial pixel-level disparity is generated by the recent hybrid tree cost aggregation algorithm [11]. Next, a novel iterative plane refinement strategy inspired by the PatchMatch algorithm [12] is applied, in which both the label searching range and the normal vector are constrained by the initial disparity values, to calculate the final sub-pixel-level disparity. During the PatchMatch computation, a quantizing acceleration strategy for continuous disparity cost computation is also proposed, which significantly improves efficiency by reducing the number of labels in sub-pixel disparity estimation. We explain each core process in Algorithm 1.

    Fig.1 Overall framework of the proposed method.

    2.1 Initial Disparity Set from Hybrid Tree

    Given a pair of images I0 and I1, we denote I0(p) as the intensity of pixel p in image I0 and I1(pd) as the intensity of pixel pd in image I1. For stereo matching, pd can be easily computed as pd = p - (d, 0), with d being the corresponding disparity value. For multiple-view matching, pd is represented as pd = H·[px, py, d]^T, where the homography H between images I0 and I1 is computed from their camera intrinsic and extrinsic parameters. We provide more details on calculating the homography below.

    With a set of images from multiple views, selecting a suitable reference image from the input set is necessary to form a stereo pair for disparity estimation. We utilize a method similar to [24] to select eligible stereo pairs, obtaining a reference image with a viewing direction similar to the target image while avoiding extremely short or long baselines. Assume the camera parameters of the image pair are {Ki, Ri, Ti} and {Kj, Rj, Tj}, where Ki and Kj are the camera intrinsic matrices [25], and R and T are the camera rotation and translation relative to the world coordinate system, respectively. The induced homography is then:

    Hij = Kj [Ri Rj^-1 + (Ti - Ri Rj^-1 Tj)] Ki^-1

    (1)
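    Equation (1) can be sketched in code. Note that, as printed, the translation term lacks the plane-dependent rank-1 factor of the standard plane-induced homography H = Kj (R + t·n^T / d) Ki^-1; the sketch below uses that standard form, with the plane normal n and depth d made explicit (our assumption, following the usual convention x_cam = R·x_world + T):

```python
import numpy as np

def induced_homography(Ki, Ri, Ti, Kj, Rj, Tj, n, d):
    """Homography mapping pixels of view i to view j for the 3-D plane with
    unit normal n and depth d, expressed in camera i's frame. Assumes the
    camera model x_cam = R @ x_world + T."""
    R_rel = Rj @ Ri.T                 # rotation from camera i to camera j
    t_rel = Tj - R_rel @ Ti           # translation from camera i to camera j
    H = Kj @ (R_rel + np.outer(t_rel, n) / d) @ np.linalg.inv(Ki)
    return H
```

For any point X on the plane (n^T X = d in camera i's frame), H maps its projection in view i to its projection in view j up to scale.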

    We calculate the pixel- and region-level dissimilarity costs for the hybrid tree cost aggregation [11]. A 3-D cost volume [25] is defined as D(x, y, d), where x and y are the current pixel coordinates and d is the user-given disparity level; it stores the matching costs between the input images at each discrete, integer-valued disparity level. The pixel-level matching cost Cd(p) is defined as the dissimilarity between pixel p and pixel pd, given by a linear combination of the color dissimilarity and the gradient difference:

    Cd(p) = (1 - α)·min(‖I0(p) - I1(pd)‖, τ1) + α·min(‖∇I0(p) - ∇I1(pd)‖, τ2)

    (2)
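    Equation (2) can be sketched for rectified grayscale images at a single integer disparity. The parameter values follow the paper's settings {α, τ1, τ2} = {0.9, 10, 2}; the horizontal finite-difference gradient is our own choice:

```python
import numpy as np

def matching_cost(I0, I1, d, alpha=0.9, tau1=10.0, tau2=2.0):
    """Eq. (2): truncated linear combination of color dissimilarity and
    gradient difference between pixel p in I0 and pd = p - (d, 0) in I1.
    Columns with no valid correspondence receive the maximum (truncated) cost."""
    h, w = I0.shape
    g0 = np.gradient(I0, axis=1)     # horizontal intensity gradient of I0
    g1 = np.gradient(I1, axis=1)
    cost = np.full((h, w), (1.0 - alpha) * tau1 + alpha * tau2)
    color = np.minimum(np.abs(I0[:, d:] - I1[:, :w - d]), tau1)
    grad = np.minimum(np.abs(g0[:, d:] - g1[:, :w - d]), tau2)
    cost[:, d:] = (1.0 - alpha) * color + alpha * grad
    return cost
```

The truncation thresholds τ1 and τ2 cap the influence of occlusions and outliers, so no single bad pixel can dominate the aggregated cost.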

    2.2 PatchMatch with label Constraints

    Although we have obtained an optimal initial disparity set, we cannot yet treat it as the final result. All of the preceding processing aims to obtain a label (disparity and normal) guessing range that serves the global PatchMatch label refinement described in this section. The goal of this stage is to refine the disparity results so as to further reduce the matching cost.

    For each pixel p of the image pair, we seek a plane fp whose aggregated matching cost m(p, fp), defined in Equation (4), is minimal among all possible planes in the reliable range:

    fp = arg min_{f∈F} m(p, f)

    (3)

    where F denotes the candidate set of all planes for each pixel p and is defined based on the reliable disparity and normal values mentioned above. We use the α-expansion algorithm [27] for the whole-image label optimization.

    In the traditional PatchMatch stereo algorithm [12], the aggregated matching cost of pixel p over its square support window Wp, according to plane f, is calculated as

    m(p, f) = Σ_{q∈Wp} ωpix(p, q)·ρ(q, q - (af·qx + bf·qy + cf))

    (4)

    where ωpix(p, q) = e^(-|I(p)-I(q)|/γ) is an adaptive weight function that alleviates the edge-fattening problem, computed from the color difference of pixels p and q. The dissimilarity function ρ is defined similarly to Equation (2). Based on any given plane f, we can calculate the corresponding sub-pixel disparity of the current pixel as

    dp = af·px + bf·py + cf

    (5)

    where af, bf, and cf are the three parameters of plane f, and px and py denote the coordinates of pixel p. Notably, this continuous and varied label space prevents rapid checking of all possible labels during label refinement, because a brute-force approach would perform a matching cost computation for every continuous disparity. In the next section, we provide an effective quantizing acceleration strategy for the matching cost computation of continuous disparities.
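    The per-pixel pieces of Equations (4) and (5) are straightforward to make concrete. A minimal sketch (grayscale intensities; γ = 10 as in the paper's parameter settings):

```python
import numpy as np

def plane_disparity(af, bf, cf, px, py):
    """Eq. (5): sub-pixel disparity induced at pixel (px, py) by plane f."""
    return af * px + bf * py + cf

def adaptive_weight(Ip, Iq, gamma=10.0):
    """Eq. (4)'s weight w_pix(p, q) = exp(-|I(p) - I(q)| / gamma): neighbors q
    whose intensity differs from p contribute less, limiting edge fattening."""
    return np.exp(-np.abs(Ip - Iq) / gamma)
```

A fronto-parallel surface is simply the special case af = bf = 0, where every pixel of the window shares the disparity cf.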

    2.3 Quantizing Acceleration

    Labels usually correspond to integer-valued disparities in standard stereo matching. Therefore, WTA (winner-takes-all) [7] can easily find the true minimum cost and obtain the optimal initial integer-valued disparity for each pixel. For sub-pixel disparity estimation such as PatchMatch stereo, however, the number of labels becomes infinite. Stepping through such a large label space of continuous disparities is difficult and entails numerous matching cost calculations in every label optimization step.

    Based on the preceding analysis, we discretize the continuous disparity into a set of values and compute a linear filter for each value. The final output is then a linear interpolation of the matching cost between the two closest disparity values. In practice, the disparity for each pixel is discretized as D = {1, 2, …, Lk, Lk+1, …, LN}, where LN is the maximum disparity. Given the disparity d ∈ [Lk, Lk+1] of pixel p, the dissimilarity function ρ(p, p-d) in Equation (4) can be expressed as:

    ρ(p, p-d) = (Lk+1 - d)·ρ(p, p-Lk) + (d - Lk)·ρ(p, p-Lk+1)

    (6)

    Instead of directly computing the matching cost for a continuous disparity d, this quantization is easy to implement and improves computing performance (5 to 6 times faster on average) with the same output accuracy, because the cost volume on the discretized disparity set D is pre-computed only once and each lookup takes constant time.
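    With the slices pre-computed, every continuous-disparity query reduces to one lookup and one linear interpolation, as in Equation (6). A sketch assuming the slices cover consecutive integer disparities starting at 0:

```python
import numpy as np

def interp_cost(cost_volume, d):
    """Eq. (6): matching cost at continuous disparity d as a linear
    interpolation between the two nearest pre-computed integer-disparity
    slices of the cost volume (shape: levels x height x width)."""
    k = int(np.floor(d))
    k = min(max(k, 0), cost_volume.shape[0] - 2)   # clamp to a valid slice pair
    t = d - k                                       # (d - Lk), with Lk+1 - Lk = 1
    return (1.0 - t) * cost_volume[k] + t * cost_volume[k + 1]
```

Because the slices are computed once, each query during PatchMatch plane refinement costs O(1) per pixel instead of a fresh window aggregation, which is where the reported 5 to 6 times speed-up comes from.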

    3 Experiments and Evaluation

    All experiments are implemented on a PC with an Intel i7 3.60 GHz CPU, 16 GB of memory, and an NVIDIA GeForce GTX 980 GPU. We use the same parameters as the original PatchMatch stereo [12] and hybrid tree [11] algorithms, namely {γ, α, τcol, τgrad} = {10, 0.9, 10, 2}, with a large patch size of 51 pixels.

    Post-refinement is often used to conceal disparity estimation errors after stereo matching. To reflect the real performance of each algorithm, we test ours without any disparity post-refinement against the two constituent stereo matching methods (HybridTree and PatchMatch) and six related mainstream methods [30-35] on the second and third versions of the Middlebury benchmark [7, 28] and on the KITTI dataset [29]. We compare the disparity results of every method against the ground-truth disparity values provided by the Middlebury and KITTI benchmarks under a given threshold: if a pixel's estimation deviation exceeds the threshold, it is considered an error pixel and marked in red (see Fig. 2 to Fig. 5), which vividly conveys the quality of the disparity estimation. It is worth noting that the KITTI dataset is mostly used to evaluate autonomous driving algorithms; its images are mainly complex real urban traffic scenes whose ground truth is obtained from depth sensors. Testing on such data is therefore not only a great challenge but also demonstrates the robustness of an algorithm in practical applications.

    Fig.2 Visual comparison with other methods for disparity results on the 3rd Middlebury benchmark (red marks pixels with erroneous disparity estimates). From left to right: the original stereo image pair, HybridTree results [11], PatchMatch results [12], and our results. As seen in (b) and (c), there are obvious estimation errors in areas with low texture and complex edges (such as the floor and background). Our method has fewer error pixels than the first two algorithms.

    Fig.3 Visual comparison with other methods for disparity results on the 2nd Middlebury benchmark (red marks pixels with erroneous disparity estimates). As seen in (b) and (c), there are obvious estimation errors in areas with repeated textures and unclear edges (such as fences and face masks). Our method handles these areas more accurately.

    Fig.4 Visual comparison with other methods for disparity results of a street scene on the KITTI benchmark (error threshold: three pixels). From left to right: the original stereo image pair, HybridTree results [11] (error rate: 14.58%), PatchMatch results [12] (error rate: 9.21%), and our results (error rate: 9.36%).

    Fig.5 Visual comparison with other methods for disparity results of a building scene on the KITTI benchmark (error threshold: three pixels). From left to right: the original stereo image pair, HybridTree results [11] (error rate: 17.58%), PatchMatch results [12] (error rate: 17.38%), and our results (error rate: 12.26%).

    Table 1 Performance Evaluation of 3rd Middlebury Benchmark

    Table 2 Evaluation of 2nd Middlebury Benchmark in sub-pixel accuracy

    Table 1 mainly illustrates that our integration strategy offers clear advantages in running efficiency while ensuring high-accuracy disparity estimation, especially for stereo matching of large-scale, high-resolution images. Since our method is derived from the fusion of the hybrid tree and PatchMatch, we compare these two constituent algorithms against ours in the accuracy and time-complexity evaluation on the 3rd Middlebury benchmark, which shows that our strategy does not sacrifice efficiency to improve estimation accuracy. As Table 1 shows, we report disparity results for our method both with and without cost aggregation to indicate the benefit of the aggregation step. The results clearly show that the proposed method seamlessly merges two independent algorithms (hybrid tree cost aggregation and PatchMatch-based label search) to efficiently address the problem of large label spaces while maintaining or even improving the solution quality. Our method clearly achieves higher overall performance than the two other methods [11, 12].

    Table 2 provides a sub-pixel accuracy evaluation of eight mainstream algorithms and ours on the second Middlebury benchmark. Because they all borrow patch matching or tree filtering strategies in the cost aggregation stage, they are directly comparable with our algorithm. Notably, "Tsukuba" is omitted because its ground truth is quantized to integer values, which is unsuitable for sub-pixel comparison. On the three remaining views, the average error rate is 7.58% for PatchMatch versus 6.83% for our method, a significant improvement over the PatchMatch-based methods. Considering the disparity estimation results over all images, our method also attains the lowest average error rate among the eight competitors, showing that it provides higher-precision estimates.

    Finally, we also applied our method to an MVS application to demonstrate its practicality. Figures 6 and 7 show our depth estimation results (without any depth post-refinement) on images of realistic, complex scenes ("statue" and "tree"). Reconstructing a statue or a tree realistically is difficult because of the statue's non-uniform color distribution and the inherent geometric complexity of trees. Using the proposed method for statue and tree reconstruction demonstrates its strong potential for multiple-view reconstruction in real scenes.

    Fig.6 Our estimated multi-view depth results from the statue images. Rows 1 and 3 are the input images captured from four different viewpoints; rows 2 and 4 are the corresponding depth results.

    Fig.7 Our estimated multi-view depth results from the tree images. Rows 1 and 3 are the input images captured from four different viewpoints; rows 2 and 4 are the corresponding depth results.

    4 Conclusion

    In this paper, we presented a hybrid tree-guided PatchMatch and quantizing acceleration algorithm for stereo. In our method, hybrid tree cost aggregation and PatchMatch label search complement each other: the disparity search begins from the integer-level disparity of hybrid tree cost aggregation and is then refined by a PatchMatch optimizer to enhance accuracy. Experiments show that our approach is more robust, more accurate, and faster in disparity estimation than the two comparison methods. In the future, we plan to integrate our algorithm into more global PatchMatch methods, including PMBP [15] and PM-Huber [14].
