
    A method for estimating the errors in many-light rendering with supersampling

    2019-08-05 01:45:12  Hirokazu Sakai, Kosuke Nabata, Shinya Yasuaki, and Kei Iwasaki
    Computational Visual Media, 2019, No. 2

    Hirokazu Sakai, Kosuke Nabata, Shinya Yasuaki, and Kei Iwasaki

    Abstract  In many-light rendering, a variety of visual and illumination effects, including anti-aliasing, depth of field, volumetric scattering, and subsurface scattering, are handled by creating a number of virtual point lights (VPLs); this simplifies computation of the resulting illumination. Naive approaches that sum the direct illumination from many VPLs are computationally expensive; scalable methods compute this sum more efficiently by clustering VPLs and then estimating it by sampling a small number of VPLs. Although significant speed-ups have been achieved using scalable methods, clustering leads to uncontrollable errors, resulting in noise in the rendered images. In this paper, we propose a method to improve the estimation accuracy of many-light rendering involving such visual and illumination effects. We demonstrate that our method can improve the estimation accuracy by a factor of 2.3 over the previous method.

    Keywords anti-aliasing; depth of field; many-light rendering; participating media

    1 Introduction

    Many-light rendering methods simplify the complex computation of global illumination into a simple summation of the contributions from many virtual point lights (VPLs) [1]. In many-light rendering, the incident radiance at a point to be shaded (the shading point) is calculated using VPLs. Because the number of VPLs used is generally quite large, previous methods [2-4] have clustered VPLs to streamline the computation. These methods, however, cannot control the errors produced by clustering, resulting in noise in the rendered images. To eliminate noise in such systems, users must adjust clustering parameters in tedious trial-and-error processes.

    To address this problem, Nabata et al. [5] proposed a method to estimate errors in the pixel values due to VPL clustering. This method, however, estimates each pixel value using a single shading point and cannot be applied to visual effects such as anti-aliasing and depth of field (DOF), which require multiple shading points to estimate the pixel value. Walter et al. [6] proposed a method called multidimensional lightcuts (MDLC), which attempts to control the clustering errors in many-light rendering of various visual effects. This method, however, does not estimate the error in each pixel value, which appears as noise in the rendered images. This paper proposes a method to improve the accuracy of pixel values for visual effects such as anti-aliasing, DOF, and volumetric effects, as shown in Fig. 1. Our method automatically partitions clusters and continues sampling them until the relative errors in the pixel values are smaller than a user-specified threshold.

    2 Related work

    2.1 Scalable VPL rendering

    Recent advances in many-light rendering have demonstrated that global illumination effects can be more than adequately approximated using many virtual lights [1, 7]. Keller [8] introduced the instant radiosity method, which calculates the indirect illumination from virtual point lights (VPLs). Walter et al. [2, 6] proposed a scalable solution to many-light methods using a hierarchical representation of VPLs, called cuts. Hašan et al. [9] represented many-light rendering via a large matrix, and explored the matrix structure by using row-column sampling. Ou and Pellacini [3] clustered the shading points into groups called slices and performed matrix row-column sampling for each slice. Georgiev et al. [10] proposed an importance sampling method for VPLs by recording the contributions of the VPLs at each cache point. Yoshida et al. [11] proposed an adaptive cache insertion method to improve VPL sampling. Wu and Chuang [12] proposed the VisibilityCluster algorithm, which approximates the visibilities between each cluster of VPLs and those of the shading points by estimating the averaged visibility. Huo et al. [4] proposed a matrix sampling-and-recovery method to efficiently gather contributions from the VPLs by sampling a small number of them. Although these methods can significantly accelerate many-light rendering, the results rely on user-specified parameters, and finding the optimal parameter values remains a challenging task.

    Fig. 1 Rendering with anti-aliasing (256 spp): (a) reference, (b) our method, (c) multidimensional lightcuts (MDLC). (b) and (c) are rendered so that the relative errors are less than 2%. (d) and (e) indicate relative errors (error color-code shown at far right). Values in (d) and (e) give the percentages of pixels with relative errors of less than 2%. Our method improves the estimation accuracy of pixel values by approximately 50% compared to MDLC. The low estimation accuracy of MDLC results in noise as shown in (c).

    2.2 Participating media

    VPL rendering suffers from splotches that stem from the singularity of the geometry term relating shading points and VPLs. Clamping is often used to avoid these artifacts. Engelhardt et al. [13] proposed a method to compensate for energy loss due to clamping and proposed a rendering method for participating media using VPLs. Novák et al. [14] proposed virtual ray lights (VRLs) to alleviate this singularity, and several groups have proposed acceleration methods [15, 16] that use VPLs and VRLs. These methods, however, do not estimate the errors due to clustering of the VPLs and VRLs.

    2.3 Subsurface scattering

    Arbree et al. [17] proposed a scalable rendering method for translucent materials using VPLs. Walter et al. [18] proposed bidirectional lightcuts, which support complex illumination effects, including subsurface scattering. Wu et al. [19] formulated a radiance computation method for subsurface scattering by sampling light-to-surface and surface-to-camera matrices. Although these methods can render translucent materials efficiently by clustering VPLs, they cannot control the errors due to clustering.

    To address this problem, we provide an error estimation method that can handle a wide variety of illumination and visual effects. After specifying the relative error tolerance ε and the probability α, our method stochastically estimates the relative errors with probability α; i.e., using our method, the relative errors of a proportion α of the pixels in the rendered image are likely to be less than ε.

    3 Background

    In many-light rendering, the outgoing radiance L(x, x_v) at shading point x toward the viewpoint x_v is calculated by

    L(x, x_v) = Σ_{y∈L} I(y) f(y, x, x_v) G(y, x) V(y, x)    (1)

    where L is the set of VPLs, I(y) is the intensity of VPL y, and f(y, x, x_v) is the material function that encompasses the bidirectional reflectance distribution function (BRDF) on the surface and the phase function within the volume. V(y, x) and G(y, x) are the visibility and geometry terms [1, 7] relating y and x, respectively. To estimate the pixel value for the rendering of participating media, anti-aliasing, and DOF, the outgoing radiances from multiple shading points are required, as shown in Fig. 2. For example, when multiple shading points are generated via supersampling for anti-aliasing, the pixel values are calculated using the weighted sum of the outgoing radiances from the shading points. The pixel value I produced by supersampling is calculated as

    I = Σ_{x∈G} W(x) L(x, x_v)    (2)

    where G is the set of shading points, and W is the weighting function. The weighting function W and material function f depend on the visual effects and the rendered objects. Details of W and f are provided in Sections 4.4-4.6.

    The computational cost of Eq. (2) is proportional to the product of the number of VPLs |L| and the number of shading points |G|. Since, in general, many VPLs are used, the computational cost of Eq. (2) is prohibitive. MDLC is an efficient scalable many-light rendering method that clusters VPLs and shading points [6]. MDLC implicitly represents the hierarchy of clusters with a product graph that consists of pairs of clusters for VPLs and shading points. MDLC estimates the upper bound for clustering error, but does not estimate the error in the pixel value (i.e., the sum of the errors due to clustering). We improve the estimation accuracy of the pixel value by combining a method for estimating errors [5] with MDLC [6].
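    The naive summation that MDLC avoids can be sketched directly; the following toy example makes the O(|L|·|G|) cost per pixel explicit. All scene data and the unit geometry/visibility terms are hypothetical stand-ins, and the viewpoint dependence x_v is folded into f for brevity.

```python
import math

# Naive many-light evaluation: Eq. (1) summed over all VPLs, then Eq. (2)
# summed over all shading points, i.e., O(|L| * |G|) work per pixel.
# All terms below are toy stand-ins, not a real renderer.

def outgoing_radiance(x, vpls, f, G, V):
    """Eq. (1): L(x) = sum over VPLs y of I(y) f(y, x) G(y, x) V(y, x)."""
    return sum(I_y * f(y, x) * G(y, x) * V(y, x) for y, I_y in vpls)

def pixel_value(shading_points, vpls, W, f, G, V):
    """Eq. (2): I = sum over shading points x of W(x) L(x)."""
    return sum(W(x) * outgoing_radiance(x, vpls, f, G, V)
               for x in shading_points)

# Toy data: 3 VPLs as (position, intensity) and 2 shading points.
vpls = [(0.0, 1.0), (1.0, 2.0), (2.0, 0.5)]
pts = [0.25, 0.75]
val = pixel_value(pts, vpls,
                  W=lambda x: 0.5,               # box filter, N_spp = 2
                  f=lambda y, x: 1.0 / math.pi,  # diffuse BRDF
                  G=lambda y, x: 1.0,            # unit geometry term
                  V=lambda y, x: 1.0)            # fully visible
```

    Every VPL is touched for every shading point, which is exactly the cost that clustering and sampling are introduced to avoid.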

    Fig. 2 Pixel value estimation using multiple shading points.

    4 Proposed method

    4.1 Overview

    Figure 3 shows an overview of our method, which estimates image pixel values by clustering VPLs and shading points. Clusters of VPLs and shading points are referred to as VPL clusters and shading clusters, respectively. VPL clusters and shading clusters are represented by binary trees, whose leaf nodes correspond to VPLs (or shading points), and whose inner nodes correspond to clusters of VPLs (or shading points). As shown in Fig. 3(b), we refer to the binary trees for the VPL clusters and the shading clusters as the light tree and the shading tree, respectively. Following MDLC, we denote a cluster pair consisting of a VPL cluster C_L and a shading cluster C_G by (C_L, C_G).

    We create the VPL clusters and build the light tree, used for all pixels, in a preprocess. For each pixel, we first generate shading points and build the shading tree by clustering the shading points. We prepare a priority queue Q that stores the cluster pairs in descending order of the standard deviation of each pair. Q is initialized with the pair (C_Lr, C_Gr) of root nodes of the light tree and the shading tree. Using this pair, the estimate ^I of the pixel value and the estimated error Δ^I = |^I − I| are initialized, and the standard deviation σ is calculated (see Section 4.2). Then we repeat the following processes:

    Fig. 3 Overview of our method, explained for rendering translucent materials. (a) Subsurface scattering of light at x_j is calculated by generating shading points x_i, which are importance sampled. (b) A VPL cluster C_Lr and shading cluster C_Gr are constructed. (c) Two VPLs and shading points from each cluster are sampled from the pair (C_Lr, C_Gr), then the pixel value ^I, error Δ^I, and standard deviation are estimated. (d) One of the clusters is subdivided if the estimated error Δ^I is greater than the tolerance ε^I. A cluster is selected based on the length of the diagonal of its bounding box weighted by the numbers of VPLs and shading points in each cluster. Here, the VPL cluster C_Lr is selected and replaced with two child nodes C_L1 and C_L2; two new pairs (C_L1, C_Gr) and (C_L2, C_Gr) are created. The pixel value is estimated from the two pairs (C_L1, C_Gr) and (C_L2, C_Gr).

    1. Pop the cluster pair with the largest standard deviation from Q.

    2. Subdivide one of the two clusters, C_L or C_G, of that pair and move down one step in the corresponding binary tree. Cluster selection is described in Section 4.3. Replace the selected cluster with two child nodes and create two new pairs.

    3. Update ^I and Δ^I, and calculate the standard deviations for the two new pairs. Push the two new pairs into Q.

    4. Terminate the process if the estimated error Δ^I is smaller than the tolerance ε^I, where ε is the relative error tolerance. Otherwise, return to Step 1.
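    The steps above can be sketched as a priority-queue refinement loop. This is an illustrative reduction, not the paper's implementation: `estimate` and `subdivide` are hypothetical stand-ins for the per-pair sampling of Section 4.2 and the cluster selection of Section 4.3, and the error is tracked here as a plain sum of per-pair deviations rather than the t-distribution bound the paper actually uses.

```python
import heapq
import itertools

def refine(root_pair, estimate, subdivide, eps, max_iters=10_000):
    """Refine cluster pairs until the tracked error drops below eps * I_hat."""
    tie = itertools.count()                    # tie-breaker for the heap
    contrib, sigma = estimate(root_pair)
    I_hat, err = contrib, sigma
    queue = [(-sigma, next(tie), root_pair, contrib, sigma)]  # max-heap
    for _ in range(max_iters):
        if err <= eps * I_hat or not queue:    # Step 4: terminate
            break
        _, _, pair, c, s = heapq.heappop(queue)  # Step 1: worst pair
        children = subdivide(pair)             # Step 2: split one cluster
        if not children:                       # leaf pair: nothing to refine
            continue
        I_hat -= c                             # Step 3: replace the pair's
        err -= s                               # contribution by its children
        for child in children:
            cc, cs = estimate(child)
            I_hat += cc
            err += cs
            heapq.heappush(queue, (-cs, next(tie), child, cc, cs))
    return I_hat, err
```

    As a sanity check, refining a list of numbers with an `estimate` that returns (sum, spread) and a halving `subdivide` converges to the exact sum with zero residual error.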

    4.2 Estimating I and ΔI

    The outgoing radiance I_i due to the shading cluster C_Gi illuminated by the VPL cluster C_Li is calculated by

    I_i = Σ_{x∈C_Gi} Σ_{y∈C_Li} W(x) I(y) f(y, x, x_v) G(y, x) V(y, x)    (3)

    As summing over all the VPLs and shading points in clusters C_Li and C_Gi is computationally expensive, our method estimates I_i by sampling a small number of VPLs and shading points as follows:

    ^I_i = (1/K) Σ_{k=1}^{K} [ W(x_k) I(y_k) f(y_k, x_k, x_v) G(y_k, x_k) V(y_k, x_k) ] / [ p_G(x_k) p_L(y_k) ]    (4)

    where x_k and y_k are the k-th samples of the shading point and VPL, respectively, K is the number of samples, and p_G and p_L are probability mass functions for sampling, calculated as follows:

    p_G(x) = W(x) / Σ_{x'∈C_Gi} W(x'),    p_L(y) = I(y) / Σ_{y'∈C_Li} I(y')    (5)

    Substituting these functions into Eq. (4), ^I_i is calculated from:

    ^I_i = ( Σ_{x∈C_Gi} W(x) ) ( Σ_{y∈C_Li} I(y) ) (1/K) Σ_{k=1}^{K} f(y_k, x_k, x_v) G(y_k, x_k) V(y_k, x_k)

    Computing the error ΔI = |^I − I| necessitates knowing the true value of I in Eq. (2), but the computational cost of obtaining this value is high. Therefore, our method estimates Δ^I using the following equation [5]:

    Δ^I = t_α sqrt( Σ_{i=1}^{N} s_i² / 2 )    (6)

    where t_α is the α quantile of the t-distribution, N is the number of pairs, and s_i² is the sample variance of I_i for the i-th pair (C_Li, C_Gi); Eq. (6) is derived in the Appendix.

    To select a pair of clusters to be subdivided, the standard deviation σ_i for each pair (C_Li, C_Gi) is required. Although the sample variance s_i² can be used to estimate σ_i, the accuracy of this estimate is low, since our method estimates ^I_i with K = 2 samples, as does the previous method [5]. Instead of using the sample variance, our method calculates the standard deviation σ_i for the i-th pair using the upper bounds of f and G, as follows:

    σ_i = ( Σ_{x∈C_Gi} W(x) ) ( Σ_{y∈C_Li} I(y) ) f_ub G_ub σ_V    (7)

    where f_ub and G_ub are the upper bounds of f and G, respectively, within the VPL cluster C_Li and shading cluster C_Gi, and σ_V is the standard deviation of the visibility function, for which the maximum value of 0.5 is used. f_ub and G_ub for C_Li and C_Gi are calculated in a similar way to those in MDLC.
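    The error estimate of Eq. (6) is easy to sketch once K = 2 is fixed, since each pair then contributes one two-sample variance. The quantile t_alpha is assumed to be supplied precomputed (e.g., from a t-table or scipy.stats.t.ppf); this sketch does not compute it.

```python
import math
import statistics

# Sketch of the error estimate of [5] as used here, assuming K = 2 samples
# per cluster pair, so n_i = 2 and the pooled term is sum_i s_i^2 / 2.

def estimated_error(samples_per_pair, t_alpha):
    """samples_per_pair: one (a, b) tuple of sampled radiances per pair."""
    pooled = sum(statistics.variance(pair) / 2.0 for pair in samples_per_pair)
    return t_alpha * math.sqrt(pooled)
```

    `statistics.variance` applies Bessel's correction, so for a two-sample pair (a, b) it evaluates to (a − b)²/2, matching the sample variance s_i² used above.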

    4.3 Cluster selection

    We now determine which of the two clusters in the pair with maximum standard deviation is to be subdivided. In MDLC, the cluster is chosen by a refinement heuristic; it essentially selects the cluster with the larger bounding box. Although this works well in some cases, it predominantly selects the VPL cluster C_L, since VPLs are usually distributed throughout a scene, whereas the shading points are distributed locally, as shown in Fig. 2. In addition, the numbers of VPLs and shading points are often substantially different (e.g., the number of VPLs in Fig. 1 is 35.1k while the number of shading points is only 256). These facts cause unequal subdivisions between VPL clusters and shading clusters, which results in lower estimation accuracy. To address this problem, we propose a cluster selection method that accounts for the size difference between the bounding boxes of the root nodes of the VPL cluster and the shading cluster. We also consider the numbers of VPLs and shading points when selecting the cluster to be subdivided. Specifically, the bounding box for each shading cluster is scaled using the following two coefficients:

    where C_Lr and C_Gr are the root nodes of the VPL and shading clusters, respectively, l_CLr and l_CGr are the diagonal lengths of the bounding boxes for C_Lr and C_Gr, respectively, and |L| and |G| are the numbers of VPLs and shading points, respectively.
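    A minimal sketch of the resulting selection rule, assuming the scaling coefficient c (one of c_l or c_d above) has already been computed from the root-node quantities; the function simply compares the VPL cluster's diagonal against the scaled shading-cluster diagonal. The names are hypothetical.

```python
# Hypothetical sketch: choose which cluster of a pair to subdivide by
# comparing bounding-box diagonals, with the shading cluster's diagonal
# scaled by a precomputed coefficient c (c_l or c_d in Section 4.3).

def select_cluster(diag_vpl, diag_shading, c):
    """Return 'L' to subdivide the VPL cluster, 'G' for the shading cluster."""
    return 'L' if diag_vpl >= c * diag_shading else 'G'
```

    With c = 1 this degenerates to MDLC's largest-bounding-box heuristic; a larger c compensates for the VPL clusters' typically much larger extent and count, so shading clusters get subdivided more often.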

    4.4 Anti-aliasing and DOF

    In many-light rendering with anti-aliasing or DOF, N_spp viewing rays are generated via sampling on the screen pixel or camera lens, and the point of intersection between each viewing ray and the surface in the scene is calculated. The material function f at each shading point x_i is represented by a BRDF f_r. As we use a simple box filter, the weighting function is calculated by W(x_i) = 1/N_spp; other filters (e.g., Gaussian filters) can also be applied.
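    With the box filter above, the pixel value reduces to a plain average of the per-ray radiances. A minimal sketch (the radiance values in the test are toy data standing in for L(x_i, x_v)):

```python
# Box-filter supersampling (Section 4.4): W(x_i) = 1/N_spp, so the pixel
# value is the average of the per-ray outgoing radiances.

def box_filter_pixel(radiances):
    n_spp = len(radiances)
    return sum(r / n_spp for r in radiances)
```

    Swapping in a Gaussian filter would only change the per-ray weights; the weighted-sum structure of Eq. (2) stays the same.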

    Here, for simplicity, we describe methods for rendering translucent materials and participating media with a single ray per pixel; however, with simple modifications, our method can render these materials using multiple rays, as shown in Fig. 8.

    4.5 Rendering translucent materials

    The outgoing radiance at x_o on the surface of a translucent material in viewing direction ω_o due to multiply scattered light within the translucent material is calculated using the diffuse bidirectional scattering surface reflectance distribution function (BSSRDF) R_d [20]: subsurface scattering of light at x_j in viewing direction ω_j is given by

    where T_η is the Fresnel transmittance, x_i and n_i are a point on and the normal to the surface of the translucent material at x_i, respectively, r_ij = ‖x_i − x_j‖, A is the set of points on the surface of the translucent material, and Ω is the set of directions over the hemisphere.

    Our method traces N_spp rays through each pixel and defines the point of intersection between the j-th ray and the surface of the translucent material as x_j.

    The pixel value due to the subsurface scattering of light is calculated by

    By comparing Eqs. (11) and (2), the weighting function W and material function f are represented as follows:

    4.6 Rendering participating media

    Like MDLC, our method assumes homogeneous participating media and an isotropic phase function. To calculate a pixel value in the presence of homogeneous participating media, the scattered radiances are integrated along the viewing ray, as shown in Fig. 2(b). Let x_o be the point of intersection of the viewing ray and the surface in the scene. Thus, the outgoing radiance from x_o is calculated using the sum of the reflected radiance L_s at x_o and the scattered radiance L_m along the viewing ray, as follows:

    where τ(x_o, x_v) = exp(−σ_t‖x_o − x_v‖) is the transmittance, and σ_t and σ_s are the extinction and scattering coefficients, respectively. x(t) is the point on the viewing ray parameterized by the distance t from the viewpoint x_v. L(x_o, x_v) in Eq. (12) is calculated using Eq. (1). The weighting function W and material function f for L_s are represented by W(x_o) = τ(x_o, x_v) and the BRDF f_r(y, x_o, x_v), respectively. The scattered radiance L(x(t), x_v) at x(t) is calculated using:

    where f_p is the (isotropic) phase function.

    To compute the integral in Eq. (13), shading points x_i are generated by uniformly subdividing the viewing ray with step size Δt; L_m is calculated by summing over the shading points and VPLs as follows:

    L_m = Σ_{x_i} σ_s τ(x_i, x_v) Δt Σ_{y∈L} I(y) f_p(y, x_i, x_v) G(y, x_i) V(y, x_i)    (14)

    Comparing this equation with Eq. (2), the material function f is represented by the phase function f_p, and the weighting function W(x_i) is calculated using W(x_i) = σ_s τ(x_i, x_v) Δt.
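    The ray-marched volumetric sum can be sketched as follows; the VPL list and the geometry/visibility callbacks are toy stand-ins, and the shading points are placed at segment midpoints as one reasonable discretization choice (the paper only specifies a uniform step Δt).

```python
import math

# Ray-marching sketch of Section 4.6: shading points x_i at uniform steps
# dt along the viewing ray, W(x_i) = sigma_s * tau(x_i, x_v) * dt, and the
# isotropic phase function f_p = 1 / (4 * pi) as the material function.

def transmittance(dist, sigma_t):
    """tau = exp(-sigma_t * distance) for a homogeneous medium."""
    return math.exp(-sigma_t * dist)

def media_radiance(ray_len, dt, sigma_s, sigma_t, vpls, G, V):
    f_p = 1.0 / (4.0 * math.pi)
    L_m = 0.0
    t = dt / 2.0                      # sample at segment midpoints
    while t < ray_len:
        w = sigma_s * transmittance(t, sigma_t) * dt          # W(x_i)
        L_m += w * sum(I_y * f_p * G(y, t) * V(y, t) for y, I_y in vpls)
        t += dt
    return L_m
```

    For a unit-length ray with sigma_s = sigma_t = 1, unit geometry/visibility terms, and a single unit-intensity VPL, this approaches (1 − e⁻¹)/(4π) ≈ 0.05 as dt shrinks.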

    5 Results

    Figures 1 and 4-8 show our results. Table 1 shows the numbers of VPLs and shading points, as well as the computational time for our method and MDLC, measured using a PC with an Intel Xeon E5-2650 v4 2.20 GHz CPU. In all the calculations, the relative tolerance ε was set to 2%, and the α quantile for Eq. (6) was set to 95%. To measure the estimation accuracy, we defined R_ε as the percentage of pixels satisfying |^I − I| < εI; the estimation is accurate when R_ε is close to α (95% in our experiments).
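    The accuracy metric R_ε is straightforward to compute from an estimated image and a reference image. In this sketch the images are flat lists of toy per-pixel values:

```python
# R_eps (Section 5): percentage of pixels whose estimate ^I satisfies
# |^I - I| < eps * I against a reference image I.

def r_eps(estimates, references, eps):
    ok = sum(1 for ih, i in zip(estimates, references)
             if abs(ih - i) < eps * abs(i))
    return 100.0 * ok / len(references)
```

    When the error estimate of Eq. (6) is well calibrated, R_ε should land close to the chosen α (here 95%).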

    Figure 1 shows results of our rendering method with anti-aliasing, where the insets compare (a) the reference solution, (b) our method, and (c) MDLC. The reference solution is rendered by summing all the contributions from the VPLs at all the shading points to calculate the pixel values. Insets (d) and (e) illustrate the relative errors and R_ε for our method and MDLC. This figure demonstrates that our method can improve upon the estimation accuracy achieved using MDLC, with corresponding improvements in noise, as shown in Fig. 1(c). Figure 4 shows a San Miguel scene rendered with DOF. Figure 5 shows a Cornell box scene filled with homogeneous participating media. As shown in insets (c) of Figs. 4 and 5, the stochastic noise is perceptible when using MDLC, whereas it is imperceptible using our error estimation method (see insets (b)). Figure 6 shows a Sponza scene filled with homogeneous participating media. Figure 7 shows a Cornell box scene where the two boxes consist of translucent material. Figure 8 shows a human head model rendered with a diffuse BSSRDF and anti-aliasing. In all of these experiments, our method showed improved estimation accuracy R_ε over MDLC.

    Table 1 Statistics for our method and MDLC

    Fig. 4 San Miguel scene with DOF.

    Fig. 5 Cornell box scene with participating media.

    Fig. 6 Sponza scene with participating media.

    Fig. 7 Cornell box scene with two boxes of translucent materials.

    Fig. 8 Human head scene (subsurface scattering with anti-aliasing(8 spp)).

    In Figs. 9 and 10, cluster selections without scaling (the second column), with scaling by c_l (the third column), and with scaling by c_d (the fourth column) are compared. Scaling by c_d yields the best estimation accuracy R_ε for five out of six of these scenes. Moreover, comparing results for these scenes without scaling with those with scaling demonstrates that scaling by c_d consistently improves the estimation accuracy. In Fig. 9(a), the first row of the San Miguel scene with anti-aliasing shows relative errors greater than 5%, especially around the ivy-covered wall in the first column, but these are reduced in the third column. In Fig. 9(d), the fourth row of the kitchen scene with DOF, relative errors greater than 5% can be seen, especially around the shadow boundaries and outlines. We think that these low estimation accuracies arise from the uneven cluster subdivisions (i.e., shading clusters are not subdivided, whereas VPL clusters are predominantly subdivided). Comparing scaling by c_l and c_d shows that the estimation accuracies in three scenes deteriorate when using c_l, although c_l yields the best estimation accuracy in the San Miguel scene with DOF (c). Based on these experiments, we used the scaling coefficient c_d to select the clusters.

    Fig. 9 Cluster selection without scaling (the second column), scaling by c_l (the third column), and scaling by c_d (the fourth column). The left column shows the reference solutions and the other images show the relative errors in false color. The values in each relative error image show R_ε. Scaling using c_d yields better estimation accuracies than not scaling or scaling by c_l.

    Fig. 10 Cluster selection without scaling (the second column), with scaling by c_l (the third column), and scaling by c_d (the fourth column) for subsurface scattering (above) and participating media (below). Although scaling by c_l provides better estimation accuracy R_ε than not scaling in the human head scene (above), relative errors greater than 5% can be seen in the neck. Scaling using c_d provides the best estimation accuracy, and relative errors greater than 5% are not seen. In the Cornell box scene with homogeneous participating media (below), since VPLs and shading points are distributed throughout the scene, the improvements due to the use of c_d are subtle, but its use does not impair the estimation accuracy.

    While our method outperformed MDLC in estimation accuracy, its computational time was from 1.11 to 8.55 times greater. We attempted to adjust the parameter ε so that the estimation accuracy of MDLC was similar to our result in Fig. 5. After several trial-and-error processes and re-renderings, R_ε of MDLC became 91.2% with ε = 0.5%. The computational time for MDLC with ε = 0.5% was 33.5 s, which is comparable to our result (35.6 s). However, our method does not require the tedious trial-and-error processes needed for MDLC.

    6 Conclusions and future work

    We have presented a scalable many-light rendering method that can improve the estimation accuracy for various visual and illumination effects. Our method automatically partitions VPL and shading clusters so that pixel errors are smaller than the relative tolerance ε. Currently, our method is limited to homogeneous participating media with isotropic scattering. We would like to lift this limitation in future work. Moreover, we intend to apply our method to motion blur.

    Acknowledgements

    This work was partially supported by JSPS KAKENHI 15H05924 and 18H03348.

    Appendix Derivation of the error estimate ΔI

    If the samples in each pair follow a normal distribution and Neyman allocation is used for the number of samples for each pair, the statistic T follows a t-distribution with (n − N) degrees of freedom:

    T = (^I − I) / sqrt( Σ_{i=1}^{N} s_i² / n_i )    (15)

    where N is the number of pairs, n_i is the number of samples for the i-th pair (C_Li, C_Gi), n is the total number of samples (i.e., n = Σ_{i=1}^{N} n_i), and s_i² is the sample variance of the i-th pair. The error ΔI = ^I − I is calculated using the α quantile of the t-distribution, t_α, as

    ΔI = t_α sqrt( Σ_{i=1}^{N} s_i² / n_i )

    To estimate the pixel value, it is known to be more efficient to use a large number of pairs with a small number of samples each than a small number of pairs with many samples. Since at least two samples per pair are required to calculate the sample variance s_i², we sample two VPLs and two shading points from each pair (K = 2 in Eq. (4)). By substituting n_i = 2 and n = 2N in Eq. (15), the error ΔI can be simplified to give Eq. (6).
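    Written out, the substitution is the following one-line simplification (a reconstruction consistent with the definitions above):

```latex
\Delta I \;=\; t_\alpha \sqrt{\sum_{i=1}^{N} \frac{s_i^2}{n_i}}
\;\;\overset{n_i = 2}{=}\;\; t_\alpha \sqrt{\frac{1}{2}\sum_{i=1}^{N} s_i^2},
\qquad n - N \;=\; 2N - N \;=\; N \text{ degrees of freedom.}
```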
