
Computational Visual Media, 2016, Issue 2

    Research Article

    Accurate disparity estimation in light field using ground control points

Hao Zhu1, Qing Wang1 ()
© The Author(s) 2016. This article is published with open access at Springerlink.com

1 School of Computer Science, Northwestern Polytechnic University, Xi'an 710072, China. E-mail: qwang@nwpu.edu.cn ().

Manuscript received: 2015-12-01; accepted: 2016-04-01

Abstract  The recent development of light field cameras has received growing interest, as their rich angular information has potential benefits for many computer vision tasks. In this paper, we introduce a novel method to obtain a dense disparity map by use of ground control points (GCPs) in the light field. Previous work optimizes the disparity map by local estimation, which includes both reliable points and unreliable points. To reduce the negative effect of the unreliable points, we predict the disparity at non-GCPs from GCPs. Our method performs more robustly in shadow areas than previous GCP-based methods, since we combine color information and local disparity. Experiments and comparisons on a public dataset demonstrate the effectiveness of the proposed method.

Keywords  disparity estimation; ground control points (GCPs); light field; global optimization

    1 Introduction

With the rapid development of computational photography, many computational imaging devices have been invented, based on, e.g., coded apertures [1], focal sweep [2], and light fields [3,4], examples of the latter being the Lytro (https://lytro.com) and Raytrix (http://www.raytrix.de) cameras. As a light field camera captures both spatial and angular information about the distribution of light rays in space, it provides a potential basis for many fundamental operations in computer vision [5-8]. In this paper, we focus on accurate disparity estimation using the light field.

Disparity estimation in stereo [9-11] is a long-standing problem in computer vision, and it plays an important role in many tasks. Given two or more images captured from different viewpoints, the fundamental problem is to find the optimal correspondence between these images. Unlike the point information captured by a traditional camera, a light field camera captures information about the light rays themselves, providing a full description of the real world via a 4D function. We can obtain so-called sub-aperture images [12], and synthesize a multi-view representation of the real world. Unlike the wide baseline used in traditional multi-view stereo, the multi-view representation synthesized from the light field has a narrow baseline, which enables more accurate sub-pixel disparity estimation.

Previous work [13,14] has computed an optimized disparity map based on local estimation, which includes both reliable and unreliable information. Undoubtedly, reliable points have a positive effect on the global disparity map, but the effect of unreliable points may be detrimental. To reduce the negative effects of unreliable points, we propose to obtain a dense disparity map from certain reliably estimated points in the light field called ground control points (GCPs) [15]. GCPs are sparse points which can be matched reliably in stereo, and are often obtained by stable matching or laser scanning. We determine GCPs by the reliability of the structure tensor, and construct the GCP spread function based on color similarity and local disparity similarity. By combining local disparities, the proposed method performs more robustly than the previous GCP method in shadow areas: experimental results on the LFBD dataset [16] show that our method performs more robustly than the original light field method [14] and the original GCP method [15].

The rest of this paper is organized as follows. In Section 2, we review the background and prior work on depth estimation from a light field and the GCP method. Section 3 describes our algorithm. We give the experimental results in Section 4, and our conclusions and suggestions for future work are provided in Section 5.

    2 Background and related work

A light field [17] is represented by a 4D function f(x, y, u, v), where the dimensions (x, y) describe the spatial distribution of light, and the dimensions (u, v) describe the angular distribution of light. When we fix one spatial dimension y* and one angular dimension v*, we get the epipolar plane image (EPI) [18] (see Fig. 1).

In a rectified EPI, the slope k of a line has a linear relationship with the depth d (see Fig. 2): the larger the depth, the larger the slope, with d = fsk/c, where f is the focal length of the lens, s is the baseline between the two views, and c is the pixel size. Given the EPI structure, the depth estimation problem is converted into a slope detection problem. As light field cameras capture angular information, analyzing the EPI is a suitable way to estimate depth using the light field.
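As a small numeric illustration of this linear relationship, a depth value can be recovered from a detected slope. The helper function and all numeric values below are illustrative, not calibration data from the paper:

```python
def depth_from_slope(k, f, s, c):
    """Depth from an EPI line slope via d = f*s*k/c.

    k: detected line slope in the EPI
    f: focal length of the lens
    s: baseline between the two views
    c: pixel size
    (Hypothetical helper; units and values are illustrative.)
    """
    return f * s * k / c

# Example: with f = 2.0, s = 1.0, c = 0.5, a slope k = 3.0 gives depth 12.0.
```

Doubling the detected slope doubles the estimated depth, which is the linearity the text describes.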

Fig. 1 The epipolar plane image obtained by fixing one spatial dimension and one angular dimension.

Fig. 2 Left: epipolar plane image. Note that the slope is the reciprocal of the disparity. Right: the inverse relationship between disparity and depth. The relationship between slope and depth is linear.

Structure tensor (ST). Wanner and Goldluecke [14] proposed to analyze the EPI by the structure tensor, obtaining the local disparity and the corresponding reliability at the same time. They then optimized the local disparity map by a global energy function based on variational regularization. However, as they pointed out, the sampling of viewpoints must be dense enough to guarantee that the disparity between two views is less than 2 pixels; otherwise a straight line becomes a broken line in the EPI, leading to inaccurate local estimation. This issue is discussed further in Section 3.1.

Lisad transform. Tosic and Berkner [19] proposed the light field scale and depth (Lisad) transform. They initially detected the slope of the epitube [20] in Lisad-2 space, and then repaired depth-discontinuous areas using the slope of the epistrip [20] detected in Lisad-1 space.

Multiple cues. Tao et al. [21] proposed a method to estimate depth by combining multiple cues, including correspondence cues and defocus cues. The advantages of using different cues in different areas were exploited, and they obtained good results for real scenes.

RPCA (robust principal component analysis) matching. Heber and Pock [22] considered the highly redundant nature of the light field, and proposed a new global matching term based on low-rank minimization. Their method achieves the best results on synthetic datasets.

GCP method. Ground control points are sparse points which can be matched reliably in stereo. Wang and Yang [15] proposed to predict the disparity of non-GCPs from GCPs, and optimized the initial disparity map using a Markov random field (MRF) energy function containing three terms: a data term, a smoothness term, and a GCP term. The top-ranking results achieved on the Middlebury dataset prove that it is a good approach to stereo matching. However, it performs poorly in shadow (see Fig. 5), as it assumes that two points with similar colors should have similar disparities; yet the colors may differ between a center point and its neighbors while their disparities are similar.

    3 Theory and algorithm

3.1 Algorithm overview

Our approach is given in Algorithm 1. The input is the 4D light field L(x, y, u, v) and the reference view index r. The output is the disparity map D. The algorithm consists of five steps:

1. Local estimation. The local disparity map and its reliability map are obtained using the structure tensor method (see Section 3.2).

2. GCP detection. The most credible points, as determined by the reliability map, are selected as the GCPs (see Section 3.2).

3. GCP optimization. The GCP spread function is built using the local disparity map and the set of GCPs. Using this function, intermediate results are obtained (see Section 3.3).

4. Building the energy function. The intermediate results from GCP optimization are combined with traditional stereo matching terms (see Section 3.4).

5. Final optimization. The final disparity map is obtained by minimizing the energy function (see Section 3.4).

Algorithm 1: Our GCP algorithm

3.2 Local disparity and GCPs

We use the structure tensor to detect the slopes of lines in the EPI:

J = Gσ * [[Sx², SxSy], [SxSy, Sy²]] = [[Jxx, Jxy], [Jxy, Jyy]]    (1)

where Gσ represents a Gaussian smoothing operator, and Sx, Sy represent the gradients of the EPI in the x and y directions respectively.

Then, the direction of the line is

φ = (1/2) arctan( 2Jxy / (Jxx − Jyy) )    (2)

and its reliability is measured by the coherence of the structure tensor:

r = ( (Jyy − Jxx)² + 4Jxy² ) / ( Jxx + Jyy )²    (3)
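The local slope and reliability computations can be sketched with standard image-processing primitives. The Gaussian width, the Sobel gradient operator, and the orientation formula below are conventional structure-tensor choices, not code from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def epi_structure_tensor(epi, sigma=1.0):
    """Local line direction and coherence of a 2D EPI.

    Smooths the gradient outer products with a Gaussian (the G_sigma of
    the text), then derives the dominant orientation and a coherence
    value in [0, 1] used as the reliability measure.
    """
    sx = sobel(epi, axis=1)  # gradient along the x (horizontal) axis
    sy = sobel(epi, axis=0)  # gradient along the y (vertical) axis
    jxx = gaussian_filter(sx * sx, sigma)
    jxy = gaussian_filter(sx * sy, sigma)
    jyy = gaussian_filter(sy * sy, sigma)
    phi = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)  # line direction
    coherence = ((jyy - jxx) ** 2 + 4.0 * jxy ** 2) / (jxx + jyy + 1e-12) ** 2
    return phi, coherence
```

Coherence approaches 1 where the EPI has a single dominant orientation (a clean line) and drops toward 0 in flat or corner-like regions, which is why it serves as a reliability map.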

It is worth noting that the structure tensor performs poorly in the near field (see Fig. 4), which corresponds to the low-slope area in the EPI; in other words, the disparity between two views is larger than 2 pixels in these areas. We adopt the simplest method to solve this issue: scaling up the EPI along the y axis in these areas, to convert the low-slope area into a high-slope area. We use bicubic interpolation to scale up the EPI. After calculating the slope of the scaled EPI, we can recover the slope of the original EPI by dividing by the scaling factor. An original EPI and the scaled-up EPI can be seen in Fig. 3. The improvement from our method can be seen in Fig. 4.
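The rescaling step can be sketched as follows, with SciPy's cubic `zoom` standing in for bicubic interpolation; the scaling factor is illustrative:

```python
import numpy as np
from scipy.ndimage import zoom

def upscale_epi(epi, scale=2.0):
    """Scale the EPI up along the y axis with cubic interpolation,
    converting a low-slope area into a high-slope one."""
    return zoom(epi, (scale, 1.0), order=3)

def recover_slope(slope_in_scaled_epi, scale):
    """The slope measured in the scaled EPI, divided by the scaling
    factor, gives back the slope of the original EPI."""
    return slope_in_scaled_epi / scale
```

Stretching only the y axis multiplies every line's dy/dx slope by the scaling factor, which is exactly why dividing the measured slope by that factor recovers the original value.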

Fig. 3 Top: an original EPI. Bottom: the scaled-up EPI. Clearly, the slopes of the lines are greater in the latter.

Fig. 4 Disparity and error maps obtained by the basic structure tensor method and our method for the Buddha dataset. (a) Disparity map obtained by the basic structure tensor method. (b) Disparity map obtained by our improved method. The basic structure tensor method performs poorly at close range. (c) and (d) Relative depth error maps for the above two methods. Relative depth errors of more than 1% are indicated in red. The error rates of these two methods are 7.87% and 3.97%, respectively.

Having obtained the coarse disparity map D0 and its reliability map R0 from the structure tensor, it is unnecessary to do stable matching or laser scanning to obtain GCPs; doing so would be time-consuming and laborious. Instead, we obtain the set of GCPs G directly from the reliability map R0.

3.3 GCP spread function

The inputs to our GCP spread function are the reference view image Ir, the local disparity map D0, and the set of GCPs G.

Unlike Wang and Yang's method [15], which constrains similar colors to have similar disparities, we use the constraint that two neighboring pixels p and q should have similar disparities if not only their colors but also their local disparities are similar. This constraint can be described by minimizing the difference between the disparity of p and a weighted combination of its neighbors' disparities. The global cost function is given by

E(D̂) = Σp ( D̂(p) − Σ q∈N(p) αpq D̂(q) )²,  with  αpq ∝ exp( −ΔCpq/γc − ΔD0,pq/γd ) + ε    (4)

where ΔCpq and ΔD0,pq are respectively the Euclidean distances between pixels p and q in RGB color space and in local disparity (D0) space. Parameters γc, γd, and ε control the sharpness of the function. To ensure that the disparity term and the color term have the same order of magnitude, the parameter γd is not independent, and is calculated from the following equation:

γd = sγd · γc · (dmax − dmin) / ( √NC · (Cmax − Cmin) )    (5)

where Cmax and Cmin are the maximum and minimum values in color space, 255 and 0 respectively. NC is the number of color channels, here 3. Parameters dmax and dmin are the maximum and minimum values in the hypothetical disparity space, respectively. sγd is a weight to control the strength of the disparity term. We suggest that the disparity term should be larger than the color term, so sγd < 1.
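One way to realize this coupling in code is sketched below. The √NC·(Cmax − Cmin) normalization is our reconstruction of the scaling, not taken verbatim from the paper, and the default argument values echo the parameters reported in Section 4:

```python
import math

def compute_gamma_d(gamma_c, d_max, d_min,
                    s_gamma_d=0.25, c_max=255.0, c_min=0.0, n_c=3):
    """Tie gamma_d to gamma_c so the disparity term and the color term
    have the same order of magnitude. s_gamma_d < 1 strengthens the
    disparity term relative to the color term. The exact normalization
    here is our assumption, not code from the paper."""
    color_range = math.sqrt(n_c) * (c_max - c_min)  # max RGB distance
    return s_gamma_d * gamma_c * (d_max - d_min) / color_range
```

A smaller γd makes the ΔD0,pq/γd exponent larger, so shrinking sγd below 1 does make the disparity term dominate, as the text suggests.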

Equation (4) is a quadratic function. After taking its derivative to find the extremum, we find that the disparity of the center point p has a linear relationship with the disparities of its neighbors:

D̂(p) = Σ q∈N(p) αpq D̂(q)    (6)

Given the linear relationship above, we can derive the disparity of non-GCPs from GCPs by solving a system of sparse linear equations

(I − A)x = b    (7)

where I is an identity matrix, and A is an N×N matrix (N is the number of points in the image). In each row of A, if the corresponding pixel p belongs to the GCPs, all elements are zero; otherwise, only the 8-connected neighbor points q have non-zero values, equal to their weights αpq. Similarly, among the elements of vector b, only the pixels belonging to GCPs have non-zero values, equal to their initial disparities. x is the optimal disparity map that we hope to obtain.
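A minimal sketch of this sparse solve with SciPy follows. The `weights` callback is a hypothetical stand-in for the normalized weights αpq of the spread function, and the double loop is written for clarity rather than speed:

```python
import numpy as np
from scipy.sparse import identity, lil_matrix
from scipy.sparse.linalg import spsolve

def spread_gcps(d0, gcp_mask, weights):
    """Propagate GCP disparities to non-GCPs by solving (I - A)x = b.

    d0:       (H, W) initial disparity map
    gcp_mask: (H, W) boolean array, True at GCPs
    weights:  callable (p, q) -> weight for 8-connected neighbours q of p
    GCP rows of A are zero and b holds their disparities, so x equals d0
    at GCPs; elsewhere x is a weighted average of its neighbours.
    """
    h, w = d0.shape
    n = h * w
    a = lil_matrix((n, n))
    b = np.zeros(n)
    for i in range(h):
        for j in range(w):
            p = i * w + j
            if gcp_mask[i, j]:
                b[p] = d0[i, j]                      # GCP row: x_p = d0_p
                continue
            nbrs = [(i + di, j + dj)
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0)
                    and 0 <= i + di < h and 0 <= j + dj < w]
            ws = np.array([weights((i, j), q) for q in nbrs], dtype=float)
            ws /= ws.sum()                           # normalise the row
            for (qi, qj), aw in zip(nbrs, ws):
                a[p, qi * w + qj] = aw               # x_p = sum alpha_pq x_q
    x = spsolve((identity(n, format='csr') - a.tocsr()).tocsc(), b)
    return x.reshape(h, w)
```

With all GCP disparities equal, the interpolated interior values equal that constant, which is a quick sanity check on the system's structure.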

3.4 Energy functions with the GCPs

Our energy function contains three terms: a data term, a smoothness term, and the GCP term.

As the baseline between two views in a light field camera is narrow, we combine the basic sum of squared differences (SSD) and the sum of squared gradient differences (SSGD) as our data term to ensure good matching:

Edata(D) = Σp [ (1 − α) CSSD(p, D(p)) + α CSSGD(p, D(p)) ]    (8)

and

CSSD(p, d) = Σ i=1..V Σ q∈N(p) ( Ii(q + (i − r)d) − Ir(q) )²,
CSSGD(p, d) = Σ i=1..V Σ q∈N(p) ( Ii,gx(q + (i − r)d) − Ir,gx(q) )² + ( Ii,gy(q + (i − r)d) − Ir,gy(q) )²    (9)

Here, D is the global optimal disparity map, D(p) is the disparity value of point p, V is the number of views in the light field, N(p) is an image patch centered at pixel p, Ir is the reference view in the light field, and Ii,gx and Ii,gy are the gradients of the i-th view image in the x and y directions, respectively.

We select the widely used linear model based on a 4-connected neighborhood system as our smoothness term, which is expressed as

Esmooth(D) = λs Σ (p,q)∈N4 ωpq · ΔDpq    (10)

where ΔDpq is the difference in disparity between pixels p and q, λs is a smoothness coefficient that controls the strength of smoothing, and ωpq is a weight based on the distance between p and q in RGB color space:

ωpq = exp( −ΔCpq / γc ) + ε    (11)

Given the optimal disparity D̂p obtained by the GCP spread function in Eq. (7), we can now employ the robust penalty function [15]:

EGCP(D) = λr Σp min( (D(p) − D̂p)² / γd , η )    (12)

where λr is a regularization coefficient that controls the strength of the GCP energy; γd and η control the sharpness and upper bound of the penalty function.
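The per-pixel truncated penalty described here can be sketched as below; the exact clipping form is our reconstruction of a robust upper-bounded penalty, with the defaults echoing the parameter values reported in Section 4:

```python
def gcp_penalty(d, d_hat, gamma_d=2.0, eta=0.005):
    """Robust GCP penalty: grows with (d - d_hat)^2 at a rate set by the
    sharpness gamma_d, and is clipped at the upper bound eta. A sketch
    of the truncated penalty described in the text, not its exact form."""
    return min((d - d_hat) ** 2 / gamma_d, eta)
```

The cap at η keeps a single badly spread GCP estimate from dominating the energy, which is what makes the penalty robust.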

    4 Experimental results

4.1 Parameters and dataset

In our experiments, we have found that the structure tensor method performs poorly for lines in the EPI with slope larger than 0.65. So, we first analyze the EPI by the structure tensor; then, if the slopes of 50% of the points are larger than 0.65, we scale this EPI from 9 to 17 pixels in the y direction and reanalyze the EPI.

We set two thresholds to select GCPs, using absolute reliability and relative reliability. The absolute reliability threshold is set to 0.99 in our implementation: if the reliability of a point is larger than 0.99, it is classified as a GCP. If only this criterion is used, GCPs may be too sparse in some datasets to reliably determine the disparity map calculated from the GCPs. We thus also consider relative reliability: if the fraction of GCPs obtained by considering absolute reliability is smaller than a set percentage (20%), we select the 20% most reliable points as GCPs.
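The two-threshold selection can be sketched as below; the quantile-based fallback is our reading of "select the 20% most reliable points", and the parameter names are ours:

```python
import numpy as np

def select_gcps(reliability, abs_thresh=0.99, min_fraction=0.20):
    """Select GCPs by absolute reliability first; if fewer than
    min_fraction of the pixels qualify, fall back to keeping the
    top min_fraction most reliable pixels instead."""
    mask = reliability > abs_thresh                  # absolute criterion
    if mask.mean() < min_fraction:                   # too sparse?
        cutoff = np.quantile(reliability, 1.0 - min_fraction)
        mask = reliability >= cutoff                 # relative criterion
    return mask
```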

For the disparity map obtained from GCPs, the parameters γc and ε are set to 30 and 0, respectively. The weight parameter sγd is set to 0.25. Note that if γc or γd is unsuitable, the sparse linear system may be singular.

In the data term, the parameter α is set to 0.5, and the size of image patches is 7×7. The parameters γc and ε in the smoothness term are set to 3.6 and 0.3, respectively, and the parameters η and γd are set to 0.005 and 2 following Wang and Yang [15]. In the final energy function, the smoothness coefficient λs and the GCP coefficient λr are set to 1.67 and 4.67, respectively. We divide the disparity into 120 levels, and use linear interpolation to get floating-point values for use in the data term. We use GCO-v3 [23-25] to optimize our energy function.

We tested our method on the HCI LFBD [16], which contains synthetically generated light fields, each represented by 9×9 sub-aperture images. The dataset was rendered with Blender (http://www.blender.org), and includes ground truth.

Our algorithm was implemented in MATLAB, on Mac OS X 10.11.1 with 8 GB RAM and a 2.7 GHz processor. The running time for a 9×9×768×768×3 light field is 60 minutes. Most of the time is spent building two large sparse matrices for the smoothness term and the GCP term, and could be greatly reduced by reimplementation in C.

4.2 Comparison with previous work

We compare our GCP propagation method with the basic structure tensor method and Wang and Yang's method [15] in Fig. 5. Our method performs more robustly than Wang and Yang's in shadows and in color-discontinuous areas.

We also compare our results with Wanner and Goldluecke's [14] and Wang and Yang's [15] results, as shown in Fig. 6. Note that there is much noise in the initial map; the noise is reduced after the GCP spread. A quantitative comparison is given in Table 1. We select the relative depth error as our criterion (we used 0.2% here). Our method performs better than Wanner and Goldluecke's on 4 datasets, but worse on 3 datasets. On analyzing these 7 datasets we find that there are few transparent materials in the Buddha, StillLife, Horses, and Medieval datasets, but more transparent materials, shadows, lamps, and mirrors in the Buddha2, Mona, and Papillon datasets. Stereo matching outperforms the structure tensor method in a Lambertian environment, whilst the structure tensor performs better in non-Lambertian environments.

Fig. 6 Results of our method on the Buddha, StillLife, Mona, and Horses datasets from LFBD. Top: initial disparity map D0. 2nd row: results D̂ from the GCP spread function. 3rd row: final results of our method. 4th row: ground truth. Bottom: error map. Relative depth errors of more than 0.2% are shown in red.

Our method performs better than Wang and Yang's method, especially on the Papillon, StillLife, and Horses datasets. Note that there exist some shadows in these datasets; the disparities of shadow areas ought to be the same as those of their neighborhoods. As local disparity information is combined in our GCP spread function, our method outperforms Wang and Yang's.

Table 1 Percentage of pixels with depth error more than 0.2% for LFBD [16]. The results of Wanner and Goldluecke's method were obtained by running their public source code. The results of Wang and Yang's method were obtained by our implementation. (Unit: %)

Fig. 5 Comparison. (a) Input dataset; there are two colors in the butterfly and a shadow on the leaves to the left of the butterfly. (b) Basic structure tensor result, showing many points with wrong values. (c) Wang and Yang's propagation strategy assigns wrong disparities to shadow areas and divides the butterfly into 2 parts along the color boundary at its tail. (d) Result of our GCP propagation method, with few wrong values and good performance in shadow areas.

    5 Conclusions and future work

In this paper, we have proposed determining GCPs by using the reliability of the structure tensor. We also give a more robust GCP spread function than Wang and Yang's [15] to propagate disparity from GCPs to non-GCPs. Experimental results on LFBD show that our method performs more robustly than Wanner and Goldluecke's and Wang and Yang's methods. However, our method performs less well in strong light and complicated shadow areas; this is a problem for most stereo methods. In the future, we will continue to investigate stereo in light fields, and take advantage of more features of the light field to solve these problems.

    Acknowledgements

The work was supported by the National Natural Science Foundation of China (Nos. 61272287, 61531014) and a research grant from the State Key Laboratory of Virtual Reality Technology and Systems (No. BUAA-VR-15KF-10).

    References

[1] Levin, A. Analyzing depth from coded aperture sets. In: Lecture Notes in Computer Science, Vol. 6311. Daniilidis, K.; Maragos, P.; Paragios, N. Eds. Springer Berlin Heidelberg, 214-227, 2010.

[2] Zhou, C.; Miau, D.; Nayar, S. K. Focal sweep camera for space-time refocusing. Columbia University Academic Commons, 2012. Available at http://hdl.handle.net/10022/AC:P:15386.

[3] Lumsdaine, A.; Georgiev, T. The focused plenoptic camera. In: Proceedings of IEEE International Conference on Computational Photography, 1-8, 2009.

[4] Ng, R.; Levoy, M.; Brédif, M.; Duval, G.; Horowitz, M.; Hanrahan, P. Light field photography with a hand-held plenoptic camera. Stanford Tech Report CTSR, 2005.

[5] Birklbauer, C.; Bimber, O. Panorama light-field imaging. Computer Graphics Forum Vol. 33, No. 2, 43-52, 2014.

[6] Dansereau, D. G.; Pizarro, O.; Williams, S. B. Linear volumetric focus for light field cameras. ACM Transactions on Graphics Vol. 34, No. 2, Article No. 15, 2015.

[7] Li, N.; Ye, J.; Ji, Y.; Ling, H.; Yu, J. Saliency detection on light field. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2806-2813, 2014.

[8] Raghavendra, R.; Raja, K. B.; Busch, C. Presentation attack detection for face recognition using light field camera. IEEE Transactions on Image Processing Vol. 24, No. 3, 1060-1075, 2015.

[9] Scharstein, D.; Szeliski, R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision Vol. 47, No. 1, 7-42, 2002.

[10] Du, S.-P.; Masia, B.; Hu, S.-M.; Gutierrez, D. A metric of visual comfort for stereoscopic motion. ACM Transactions on Graphics Vol. 32, No. 6, Article No. 222, 2013.

[11] Wang, M.; Zhang, X.-J.; Liang, J.-B.; Zhang, S.-H.; Martin, R. R. Comfort-driven disparity adjustment for stereoscopic video. Computational Visual Media Vol. 2, No. 1, 3-17, 2016.

[12] Wanner, S.; Fehr, J.; Jähne, B. Generating EPI representations of 4D light fields with a single lens focused plenoptic camera. In: Lecture Notes in Computer Science, Vol. 6938. Bebis, G.; Boyle, R.; Parvin, B. et al. Eds. Springer Berlin Heidelberg, 90-101, 2011.

[13] Wanner, S.; Goldluecke, B. Globally consistent depth labeling of 4D light fields. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 41-48, 2012.

[14] Wanner, S.; Goldluecke, B. Variational light field analysis for disparity estimation and super-resolution. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 36, No. 3, 606-619, 2014.

[15] Wang, L.; Yang, R. Global stereo matching leveraged by sparse ground control points. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 3033-3040, 2011.

[16] Wanner, S.; Meister, S.; Goldluecke, B. Datasets and benchmarks for densely sampled 4D light fields. In: Proceedings of Annual Workshop on Vision, Modeling and Visualization, 225-226, 2013.

[17] Levoy, M.; Hanrahan, P. Light field rendering. In: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, 31-42, 1996.

[18] Bolles, R. C.; Baker, H. H.; Marimont, D. H. Epipolar-plane image analysis: An approach to determining structure from motion. International Journal of Computer Vision Vol. 1, No. 1, 7-55, 1987.

[19] Tosic, I.; Berkner, K. Light field scale-depth space transform for dense depth estimation. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops, 441-448, 2014.

[20] Criminisi, A.; Kang, S. B.; Swaminathan, R.; Szeliski, R.; Anandan, P. Extracting layers and analyzing their specular properties using epipolar-plane-image analysis. Computer Vision and Image Understanding Vol. 97, No. 1, 51-85, 2005.

[21] Tao, M. W.; Hadap, S.; Malik, J.; Ramamoorthi, R. Depth from combining defocus and correspondence using light-field cameras. In: Proceedings of IEEE International Conference on Computer Vision, 673-680, 2013.

[22] Heber, S.; Pock, T. Shape from light field meets robust PCA. In: Lecture Notes in Computer Science, Vol. 8694. Fleet, D.; Pajdla, T.; Schiele, B.; Tuytelaars, T. Eds. Springer International Publishing, 751-767, 2014.

[23] Boykov, Y.; Kolmogorov, V. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. In: Lecture Notes in Computer Science, Vol. 2134. Figueiredo, M.; Zerubia, J.; Jain, A. K. Eds. Springer Berlin Heidelberg, 359-374, 2001.

[24] Boykov, Y.; Veksler, O.; Zabih, R. Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 23, No. 11, 1222-1239, 2001.

[25] Kolmogorov, V.; Zabih, R. What energy functions can be minimized via graph cuts? IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 26, No. 2, 147-159, 2004.

Hao Zhu received his B.E. degree from the School of Computer Science, Northwestern Polytechnic University, in 2014. He is now a Ph.D. candidate in the School of Computer Science, Northwestern Polytechnic University. His research interests include computational photography and computer vision.

Qing Wang is a professor and Ph.D. tutor in the School of Computer Science, Northwestern Polytechnic University. He graduated from the Department of Mathematics, Peking University, in 1991. He then joined Northwestern Polytechnic University as a lecturer. In 1997 and 2000 he obtained his master and Ph.D. degrees from the Department of Computer Science and Engineering, Northwestern Polytechnic University, respectively. In 2006, he was recognized by the Outstanding Talent of the New Century Program of the Ministry of Education, China. He is a member of IEEE and ACM. He is also a senior member of the China Computer Federation (CCF).

He worked as a research assistant and research scientist in the Department of Electronic and Information Engineering, Hong Kong Polytechnic University, from 1999 to 2002. He also worked as a visiting scholar in the School of Information Engineering, University of Sydney, in 2003 and 2004. In 2009 and 2012, he visited the Human Computer Interaction Institute, Carnegie Mellon University, for six months, and the Department of Computer Science, University of Delaware, for one month, respectively. Prof. Wang's research interests include computer vision and computational photography, including 3D structure and shape reconstruction, object detection, tracking and recognition in dynamic environments, and light field imaging and processing. He has published more than 100 papers in international journals and conferences.

Open Access  The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
