
    An effective graph and depth layer based RGB-D image foreground object extraction method

Computational Visual Media, 2017, Issue 4

Zhiguang Xiao1, Hui Chen1, Changhe Tu2, and Reinhard Klette3

© The Author(s) 2017. This article is published with open access at Springerlink.com

We consider the extraction of accurate silhouettes of foreground objects in combined color image and depth map data. This is of relevance for applications such as altering the contents of a scene, or changing the depths of contents for display purposes in 3DTV, object detection, or scene understanding. To identify foreground objects and their silhouettes in a scene, it is necessary to segment the image in order to distinguish foreground regions from the rest of the image, the background. In general, image data properties such as noise, color similarity, or lightness make it difficult to obtain satisfactory segmentation results. Depth provides an additional category of properties.

    1 Proposed method

Our approach includes four steps: see Fig. 1. Firstly, graphs are built independently from color and depth information such that each node represents a pixel, and an edge $e_{p,q} \in E$ connects nodes $p$ and $q$. We transform the color space into CIELUV space to measure differences between pixels, and use the following region merging predicate: two regions are merged if and only if they are clustered in both the color and depth graphs, which provides more accurate over-segmentation results. Secondly, depth maps are partitioned into layers using a multi-threshold method. In this step, objects belonging to different depth ranges are segmented into different layers. Thirdly, seed points are specified manually to locate the foreground objects, and to decide which depth layer they belong to. Finally, we merge the over-segmented scene according to cues obtained in the previous three steps, to extract foreground objects with their accurate silhouettes from both color and depth scenes.

    1.1 Improved graph-based algorithm with depth

Although there have been related studies over the past 20 years, image segmentation is still a challenging task. To obtain foreground objects, the first step of our approach is to obtain an over-segmented scene. We improve upon the graph-based approach of Ref. [1] in the following two ways:

Selection of color space. The first improvement concerns the color space used. RGB color space is often used because of its compatibility with additive color reproduction systems. In our approach, dissimilarity between pixels is measured by edge weights, which are calculated using Euclidean distance in CIELUV color space. Color differences measured by Euclidean distances in RGB color space are not proportional to human visual perception; CIELUV color space is considered to be perceptually more uniform than other color spaces.
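
A minimal sketch of this edge-weight computation, assuming a float RGB image in [0, 1] and using scikit-image for the color space transform:

```python
import numpy as np
from skimage import color

def luv_edge_weights(rgb_image, edges):
    """Edge weights as Euclidean distances in CIELUV.

    rgb_image: float RGB image in [0, 1]; edges: list of ((r1, c1), (r2, c2))
    pixel-coordinate pairs (e.g., 4- or 8-adjacent neighbors).
    """
    luv = color.rgb2luv(rgb_image)  # convert once, reuse for all edges
    return [float(np.linalg.norm(luv[p] - luv[q])) for p, q in edges]
```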

Fusion of color and depth. The second important aspect is that we combine color and depth information to provide more accurate over-segmented results. In Ref. [1], the merging predicate is defined for regions $R_1$ and $R_2$ as

$$d_b(R_1, R_2) \le d_i(R_1, R_2) \qquad (1)$$

where the minimum internal difference $d_i$ is defined by

$$d_i(R_1, R_2) = \min\{ w(R_1) + \tau(R_1),\ w(R_2) + \tau(R_2) \} \qquad (2)$$

Fig. 1 (a) Input color scene, (b) input depth map, (c) over-segmentation result, (d) selection of seed points (red line), (e) selected depth layer, and (f, g) extracted foreground object in color and depth.

Here, $d_b$ and $w$ are the between-region difference and the within-region maximum weight, respectively, and $\tau(R)$ is a threshold function based on the area of region $R$; $d_b$ and $w$ are defined as follows:

$$d_b(R_1, R_2) = \min_{p \in R_1,\, q \in R_2,\, e_{p,q} \in E} \omega(e_{p,q}) \qquad (3)$$

$$w(R) = \max_{e \in E_R} \omega(e) \qquad (4)$$

where the edge weight $\omega(e)$ is a measure of dissimilarity between the two pixels connected by edge $e$, and an edge $e \in E_R$ connects two pixels in region $R$.

Exclusive use of color information is very likely to lead to under-segmentation, and this needs to be avoided. Conversely, depth information may provide additional clues for producing more accurate silhouettes of objects. Thus, we build a separate graph based on depth information. During the segmentation process, two regions are clustered if and only if they are allowed to cluster in both the color image graph and the depth map graph.
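
A sketch of this joint predicate, assuming the threshold function τ(R) = k/|R| used in Ref. [1]; the per-region statistics (between-region difference, within-region maximum weight, and area) are taken as inputs:

```python
def tau(area, k=300.0):
    # Threshold function of Ref. [1]: larger regions are harder to merge.
    return k / area

def merge_allowed(d_b, w1, area1, w2, area2, k=300.0):
    # Eqs. (1)-(2): merge iff the between-region difference does not exceed
    # the minimum internal difference of the two regions.
    d_i = min(w1 + tau(area1, k), w2 + tau(area2, k))
    return d_b <= d_i

def merge_allowed_rgbd(color_stats, depth_stats):
    # Joint predicate: two regions cluster only if merging is allowed in
    # BOTH the color graph and the depth graph. Each stats tuple holds
    # (d_b, w(R1), |R1|, w(R2), |R2|) computed on the corresponding graph.
    return merge_allowed(*color_stats) and merge_allowed(*depth_stats)
```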

    1.2 Seed point specification

Seed points are used to locate the foreground objects in both the color image and the depth map. Our approach allows a user to specify an individual object to be extracted as a foreground object by roughly drawing a stroke on the object. We sample points on the trajectory of the stroke as seed points.

Typically, our approach can extract the specified object by indicating seed points in this way only once, but in some cases repeated interaction might be needed to obtain a satisfactory result. Therefore we define two kinds of seed points, those inside and outside an intended object, which we call positive and negative seed points (PSPs and NSPs), respectively.

Regions containing positive seed points are called positive seed regions. When we unify an over-segmented color image, we remove regions which contain negative seed points (negative seed regions) to break the continuity of regions which are connected under constraints defined by depth layers; we maintain positive seed regions as well as regions which are connected to them. Therefore, for each extraction task, a user may draw one or more strokes inside the foreground object for extraction, and pixels from the stroke are used as PSPs. Next, our approach provides an extraction result. If the result contains regions which should not be merged, the user may draw an NSP stroke in the joined regions to separate them, like using a knife to cut off redundant parts.
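
A small illustrative helper (names are assumptions) mapping stroke pixels to seed regions, given the integer region-label image produced by the over-segmentation step:

```python
def classify_seed_regions(labels, psps, nsps):
    """Collect positive and negative seed regions from stroke pixels.

    labels: integer region-label image; psps/nsps: lists of (row, col)
    coordinates sampled from the user's strokes.
    """
    positive = {int(labels[r, c]) for r, c in psps}
    negative = {int(labels[r, c]) for r, c in nsps} - positive
    return positive, negative
```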

    1.3 Depth layering

Given a depth map of a 3D scene, the purpose of depth layering is to segment the continuous depth map into several depth layers. We assume in this paper that depth values for a single indoor object are only distributed within a small range. This assumption may not be true in general, but appears to be acceptable in our application.

We partition the depth map into depth layers in the form of binary images. A depth layer contains pixels in a range of depth values, and we consider these pixels to be the foreground region (white) of the chosen depth layer. One depth layer is used to extract one foreground object. Therefore, the specified foreground object for extraction should lie inside the foreground region of the selected depth layer, as our approach merges an over-segmented scene based on the selected depth layer. If the depth values of an object do not lie in a small range, then the depth value interval of this object exceeds the range covered by one depth layer, and the integral object is divided into more than one depth layer. In such a case, our approach is unable to select a proper depth layer to extract the integral object.

Inpainting for depth map. Before depth layering, we do some preprocessing of the depth map, called inpainting, to reduce artefacts caused by the capturing procedure.

Time-of-flight cameras or structured lighting methods (as used by Kinect) cannot obtain depth information in over- or under-exposed areas, and are often inaccurate at silhouettes of objects. This results in an incomplete depth map that has inaccurate depth for some pixels, and no depth at all for others. Estimating the depth for these missing regions is an ill-posed problem since there is very little information to use. Recovering the true depth would only be possible with very detailed prior knowledge of the scene (or by using an improved depth sensor, such as replacing one stereo matcher by another one).

We use an intensive bilateral filter, proposed in Ref. [2], to repair the depth map. The depth of each pixel with missing depth information is determined by searching the k×k neighboring pixels for ones with a similar color in the color image and a non-zero depth value. The search range is varied until a fixed number of satisfactory neighboring pixels is reached. After completing all depth values of such pixels, a median filter is applied to the depth map to smooth outliers.
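
A simplified sketch of this hole-filling step; the growing-window search follows the description above, but the median fill value and all parameter defaults are assumptions rather than the exact filter of Ref. [2]:

```python
import numpy as np
from scipy.ndimage import median_filter

def fill_depth_holes(depth, rgb, k=5, needed=8, color_tol=20.0, max_k=31):
    """Fill zero-depth pixels from color-similar, valid-depth neighbors."""
    filled = depth.astype(np.float64).copy()
    h, w = depth.shape
    for r, c in np.argwhere(depth == 0):
        radius = k // 2
        while radius <= max_k // 2:
            r0, r1 = max(0, r - radius), min(h, r + radius + 1)
            c0, c1 = max(0, c - radius), min(w, c + radius + 1)
            patch_d = depth[r0:r1, c0:c1]
            patch_rgb = rgb[r0:r1, c0:c1].astype(np.float64)
            diff = np.linalg.norm(patch_rgb - rgb[r, c].astype(np.float64), axis=-1)
            cand = patch_d[(patch_d > 0) & (diff < color_tol)]
            if cand.size >= needed:   # enough satisfactory neighbors found
                filled[r, c] = np.median(cand)
                break
            radius += 2               # otherwise enlarge the search range
    # Final pass, as in the paper: smooth remaining outliers in the depth map.
    return median_filter(filled, size=3)
```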

Segmentation of depth map. After inpainting the depth map, we next segment it into different layers such that each layer contains pixels in a range of depth values. The problem here is to decide how many layers should be used. If too few layers are used, many over-segmented regions produced in the previous step will probably fall in the same layer. Conversely, if too many layers are used, a single over-segmented region will probably be spread across more than one layer. Either case makes it difficult to decide whether a region belongs to a foreground object or not.

Our goal in this step is that regions which overlap the seed points specified by the user should be contained in the same layer. This agrees with the assumption that the user will usually specify most of the regions that belong to a foreground object. With this constraint, we segment the depth map into layers using an extended multi-threshold algorithm as proposed in Ref. [3]. Equation (5) outlines how to segment a depth map into a given number $n$ of layers:

$$L_m(i, j) = \begin{cases} 1, & T_{m-1} < D(i, j) \le T_m \\ 0, & \text{otherwise} \end{cases} \qquad (5)$$

where $D(i, j)$ is the depth value at pixel $(i, j)$, $L_m$ is the binary image of the $m$th layer, and $T_m$, for $0 \le m \le n-1$, is the $m$th threshold computed by an extended Otsu's multi-threshold method (with $T_{-1}$ taken as the minimum depth value).
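
A sketch of this layering step, using scikit-image's multi-Otsu thresholding as a stand-in for the extended multi-threshold method of Ref. [3]:

```python
import numpy as np
from skimage.filters import threshold_multiotsu

def depth_layers(depth, n):
    """Split an integer depth map into n binary layers, in the spirit of Eq. (5).

    Returns a list of n boolean images; layer m holds the pixels whose depth
    lies between consecutive thresholds.
    """
    thresholds = threshold_multiotsu(depth, classes=n)  # n - 1 thresholds
    bounds = np.concatenate(([depth.min()], thresholds, [depth.max() + 1]))
    return [(depth >= bounds[m]) & (depth < bounds[m + 1]) for m in range(n)]
```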

We propose a method to find a proper depth layer automatically for a foreground object in a given depth map. First, we initialise $n$ as the maximum number of layers sufficient for any 3D scene considered. Thus, for any given depth map, the proper number of layers should be in the range 2 to $n$. We then split the depth map repeatedly into 2 to $n$ layers, in an ordered way, and obtain a series of segmented layers, each represented by one binary image. Thus, for any given depth map, an optimal layer for a specified object should be in this set of segmented layers.

We define the pixels with value 1 in one binary image as the foreground pixels; they comprise the foreground region of this layer. A layer is defined to be a valid layer if and only if all the positive seed points are in the foreground region of its corresponding binary image.

We sort all valid layers according to the total number of foreground pixels in the binary image. Our experimental results indicate that choosing the middle valid layer from this sequence is a good choice, yielding a proper depth layer for the specified foreground object.
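
Combining the validity test and the middle-layer rule, a sketch of the automatic layer selection (candidate layers are the binary images gathered from splitting the depth map into 2, 3, ..., n layers):

```python
def select_depth_layer(candidate_layers, psps):
    """Pick the middle valid layer.

    candidate_layers: boolean layer images from all 2..n splits;
    psps: (row, col) positive seed points. A layer is valid iff every
    PSP lies in its foreground region.
    """
    valid = [L for L in candidate_layers if all(L[r, c] for r, c in psps)]
    if not valid:
        return None  # the user needs to adjust the seed points
    valid.sort(key=lambda L: int(L.sum()))  # order by foreground area
    return valid[len(valid) // 2]           # middle of the sorted sequence
```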

    1.4 Merging over-segmented color regions

In practice, one object often contains a variety of colors while connecting to the background region at the same depth. Therefore, we propose to group the regions on the basis of regional continuity, which is established under the constraint of depth layers. Our regional continuity function is defined as follows:

$$C(k) = \begin{cases} 1, & A_d(k) / A_c(k) > T_A \\ 0, & \text{otherwise} \end{cases} \qquad (6)$$

where $A_d(k)$ is the area of overlap of the foreground of the selected depth layer with region $k$, $A_c(k)$ is the total area of region $k$, and $T_A$ is an adjustable coefficient.
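
Eq. (6) reads directly as code; the overlap is computed from the region-label image and the selected layer's foreground mask, and the default value of T_A is an assumption:

```python
import numpy as np

def regional_continuity(labels, k, layer_fg, T_A=0.5):
    """Eq. (6): region k passes if its overlap ratio with the selected
    depth layer's foreground exceeds T_A."""
    mask = labels == k
    A_c = int(mask.sum())               # total area of region k
    A_d = int((mask & layer_fg).sum())  # overlap with the layer foreground
    return A_c > 0 and A_d / A_c > T_A
```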

Based on this criterion, the region merging step starts with region labeling, to distinguish regions and count the area of each. Firstly, each region is relabeled (approximately) for initialization. Secondly, for each pixel $p$, we find those pixels among its 8-adjacent neighbors which belong to the same region as $p$. We then update $p$ by assigning the minimum label among those of the detected 8-adjacent neighbors and $p$ itself. We repeat this procedure until no update occurs. After that, each region has a unique label, and the area of each region, as well as the area of the region overlapping the foreground region of the selected depth layer, can be determined by counting. Next, regional continuity is computed on the basis of Eq. (6). We modify the regional continuity to remove mis-connected regions: negative seed regions, and regions that are connected to positive seed regions via negative seed regions, should be disconnected from positive seed regions. Finally, semantically meaningful object results are obtained by merging positive seed regions and regions connected to them.
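
A minimal, unoptimized sketch of the min-label propagation described above (a scan-based or union-find implementation would be preferable in practice):

```python
import numpy as np

def relabel_regions(region_ids):
    """Connected-component relabeling by 8-neighbor min-label propagation.

    region_ids: integer image giving each pixel's region from the
    over-segmentation step; returns per-pixel labels that are unique
    per connected region after convergence.
    """
    h, w = region_ids.shape
    labels = np.arange(h * w, dtype=np.int64).reshape(h, w)  # initial labels
    changed = True
    while changed:
        changed = False
        for r in range(h):
            for c in range(w):
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < h and 0 <= cc < w
                                and region_ids[rr, cc] == region_ids[r, c]
                                and labels[rr, cc] < labels[r, c]):
                            labels[r, c] = labels[rr, cc]
                            changed = True
    return labels
```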

    2 Experimental evaluation

    2.1 Qualitative analysis

Our approach was evaluated mainly on a large-scale hierarchical multi-view RGB-D object dataset collected using a Kinect device. A recently published dataset, the RGB-D Scenes Dataset v2 (RGB-D v2), includes 14 scenes which cover common indoor environments. Depth maps for this dataset were recovered by the intensive bilateral filter described in Section 1.3 before the depth-layering step. The MSR 3D Video Dataset (MSR 3D), and more complex RGB-D images used by Zeng et al. [4], were also employed to test our approach.

Objects and their silhouettes extracted by our approach are shown in Fig. 2. Although Kinect devices provide depth maps with large holes and significant noise, a well-restored depth map and the segmented results demonstrate the robustness of our algorithm to noise in the depth images. From our results we conclude that our approach is able (for the test data used) to extract foreground objects from the different background scenes.

    2.2 Quantitative analysis

Metrics including precision, recall, and F-measure (see Eqs. (7)–(9)) were also computed and interpreted to analyze our results quantitatively:

$$\text{precision} = \frac{T_p}{T_p + F_p} \qquad (7)$$

$$\text{recall} = \frac{T_p}{T_p + F_n} \qquad (8)$$

$$F_\beta = \frac{(1 + \beta^2)\,\text{precision} \cdot \text{recall}}{\beta^2\,\text{precision} + \text{recall}} \qquad (9)$$

Fig. 2 Extraction results for scenes from different datasets. (A, B) Extracted silhouettes in color and depth images. (C, D) Extracted foreground objects in color and depth images.

Fig. 3 Quantitative analysis of different methods. MW: magic wand, GC: grab-cut [6], BL: graph-based on color information only (i.e., baseline method), FM: the fast mode of Ref. [5] with depth layers, MS: mean-shift color-depth with depth layers, Our: our approach. The horizontal axis represents different datasets.

where $T_p$ is the number of correctly detected foreground object pixels, $F_p$ is the number of non-foreground object pixels detected as foreground object pixels, and $F_n$ is the number of undetected foreground object pixels.
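
The three measures follow directly from these pixel counts:

```python
def precision_recall_f(T_p, F_p, F_n, beta=1.0):
    """Eqs. (7)-(9): precision, recall, and F-measure from pixel counts."""
    precision = T_p / (T_p + F_p)
    recall = T_p / (T_p + F_n)
    f_beta = ((1 + beta**2) * precision * recall
              / (beta**2 * precision + recall))
    return precision, recall, f_beta

# Example: beta = 1 weights recall as heavily as precision, as in Section 2.2.
```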

We extract ground truth manually to evaluate the results, and set β = 1 to calculate the F-measure, as we consider the recall rate to be as important as precision. Performance measures were computed to evaluate the approach's effectiveness on the different datasets. See Fig. 3 for quantitative analysis results for our approach (yellow).

    2.3 Comparison with other methods

For a comparative evaluation of our approach, we also tested five other methods designed for extracting objects from scenes in the datasets used above. They are the magic wand in Photoshop, grab-cut, the original graph-based algorithm of Ref. [1] with depth layers, a multistage, hierarchical graph-based approach [5] with depth layers, and an improved mean-shift algorithm with depth layers. See Fig. 4 for comparative results.

Qualitative results. Compared to the magic wand, shown in Fig. 4(A), our approach (Fig. 4(F)) is able to reduce the amount of user interaction considerably, with only a single initialisation needed to complete an extraction task.

Grab-cut [6], in Fig. 4(B), is excellent in terms of simplicity of the user input, but for colorful scenes the extraction process is difficult and more interactions are needed to indicate the foreground and the background. Moreover, the results lack discrimination in the presence of color similarity.

The above methods only use color information to extract foreground objects. To further illustrate the performance of our approach, extraction results provided by methods in which color and depth are both applied are also compared with ours. First, we take the original graph-based algorithm [1] with depth layers as a baseline method in our experiments: see Fig. 4(C). The graph-based algorithm generates over-segmented results. Then, regions are merged based on depth layer constraints and seed points. Comparing results shows the effectiveness of our improved graph-based method.

We also compare with results obtained using the algorithm published in Ref. [5], which combines depth, color, and temporal information, and uses a multistage, hierarchical graph-based approach for segmenting 3D RGB-D point clouds. Because the scenes in our applications are static, we are able to use the fast mode (i.e., removing temporal information) of the algorithm of Ref. [5] to provide over-segmented results. The 3D point cloud data, as generated from the color scene and depth map, are used as input for this method. The foreground objects are extracted based on the previous result, seed points, and the depth layers. See Fig. 4(D) for the results of this method following Ref. [5].

An improved mean-shift algorithm with depth layers, shown in Fig. 4(E), is another candidate used for testing. Depth information is first added to amend the mean-shift vector to over-segment the color scene. The over-segmented results are merged based on the seed points and depth layers.

Quantitative results. Figure 3 presents the precision rate, recall rate, and F-measure for the above methods on three different datasets. One of the merging constraints of our approach is based on the depth layer, and as the edges of objects in the depth map are not very accurate (usually lying slightly outside the objects compared to the ground truth), our approach may merge some pixels in the extraction results that do not belong to the ground truth. Some methods achieve higher precision because their extraction results are not integral and are almost entirely contained within the ground truth. Thus, the precision rate of our approach is lower than that of some other methods. However, our approach offers more integral extraction results, which makes its recall rate higher than that of the others. The F-measure with β = 1 demonstrates that our approach performs better overall.

Fig. 4 Foreground objects and silhouettes extracted by different methods in both color and depth. (A) Magic wand, (B) grab-cut [6], (C) graph-based on color information only (i.e., baseline method), (D) the fast mode of Ref. [5] with depth layers, (E) mean-shift color-depth with depth layers, and (F) our approach. (a) Interactions, (b, c) extraction results in color and depth.

Amount of interaction. Figure 4 shows the interaction needed by each method for each scene. For the magic wand, the red spots show the seed points specified by the users. The sizes and locations of the red spots should be chosen according to the different foregrounds.

In grab-cut, a rectangular box is drawn around the foreground object. Red lines are seed points in the foreground, and blue lines are seed points in the background. When applying the grab-cut method to colorful scenes, for example the scenes used by Zeng et al., more iterations and seed points are needed. We do not show all of the iterations of the grab-cut method on a scene used by Zeng et al. in Fig. 4; it is difficult to follow them visually. Seed points for the other four methods are specified by roughly drawing a stroke on the foreground. Red lines represent the seed points for the foreground, and blue lines represent the background.

There is no limitation on seed points in our method; we usually draw a stroke around the center of the specified foreground object, but this is not necessary. If the automatically selected depth layer is appropriate for extracting foreground objects, then no further seed points are needed. If not, then more positive seed points are required to specify other positions to be extracted as parts of foreground objects. Positions can be located according to the previously selected depth layer; a user can therefore coarsely add positive seed points around the located positions to obtain a proper depth layer. The user is able to obtain the expected results by applying positive and negative seed points flexibly.

The extraction results of our approach remain fairly robust: the integrity of the objects is mostly retained, while silhouettes are better preserved. In general, our approach outperforms the other approaches regarding the quality of results, with a reduced need for interaction.

    Acknowledgements

The authors thank the editors and reviewers for their insightful comments. This work is supported by Key Project No. 61332015 of the National Natural Science Foundation of China, and Project Nos. ZR2013FM302 and ZR2017MF057 of the Natural Science Foundation of Shandong.

[1] Felzenszwalb, P. F.; Huttenlocher, D. P. Efficient graph-based image segmentation. International Journal of Computer Vision Vol. 59, No. 2, 167–181, 2004.

[2] Li, Y.; Feng, J.; Zhang, H.; Li, C. New algorithm of depth hole filling based on intensive bilateral filter. Industrial Control Computer Vol. 26, No. 11, 105–106, 2013.

[3] Otsu, N. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics Vol. SMC-9, No. 1, 62–66, 1979.

[4] Zeng, Q.; Chen, W.; Wang, H.; Tu, C.; Cohen-Or, D.; Lischinski, D.; Chen, B. Hallucinating stereoscopy from a single image. Computer Graphics Forum Vol. 34, No. 2, 1–12, 2015.

[5] Hickson, S.; Birchfield, S.; Essa, I.; Christensen, H. Efficient hierarchical graph-based segmentation of RGBD videos. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 344–351, 2014.

[6] Rother, C.; Kolmogorov, V.; Blake, A. “GrabCut”: Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics Vol. 23, No. 3, 309–314, 2004.

1 School of Information Science and Engineering, Shandong University, Jinan 250100, China. E-mail: Z. Xiao, xiaozhg@live.com; H. Chen, huichen@sdu.edu.cn.

2 School of Computer Science and Technology, Shandong University, Jinan 250100, China. E-mail: chtu@sdu.edu.cn.

3 School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1142, New Zealand. E-mail: rklette@aut.ac.nz.

Manuscript received: 2017-02-17; accepted: 2017-07-12

Zhiguang Xiao is a postgraduate at the School of Information Science and Engineering, Shandong University. He received his B.E. degree in electronics and information engineering from the College of Electronics and Information Engineering, Sichuan University. His research interests are in graph algorithms, computer stereo vision, and image segmentation.

Hui Chen is a professor at the School of Information Science and Engineering, Shandong University. She received her Ph.D. degree in computer science from the University of Hong Kong, and her bachelor and master degrees in electronics engineering from Shandong University. Her research interests include computer vision, 3D morphing, and virtual reality.

Changhe Tu is currently a professor and the associate dean at the School of Computer Science and Technology, Shandong University. He obtained his bachelor, master, and Ph.D. degrees, all from Shandong University. His research interests include geometric modelling and processing, computational geometry, and data-driven visual computing. He has published papers in venues including SIGGRAPH, Eurographics, ACM TOG, IEEE TVCG, and CAGD.

Reinhard Klette is a Fellow of the Royal Society of New Zealand and a professor at Auckland University of Technology. He was on the editorial board of the International Journal of Computer Vision (2007–2014), the founding Editor-in-Chief of the Journal of Control Engineering and Technology (2011–2013), and an Associate Editor of IEEE PAMI (2001–2008). He has (co-)authored more than 300 publications in peer-reviewed journals or conferences, and books on computer vision, image processing, geometric algorithms, and panoramic imaging. He has presented more than 20 keynotes at international conferences. Springer London published his book entitled Concise Computer Vision in 2014.

Open Access. The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
