
    A method for estimating the errors in many-light rendering with supersampling

    2019-08-05 01:45:12  Hirokazu Sakai, Kosuke Nabata, Shinya Yasuaki, and Kei Iwasaki
    Computational Visual Media, 2019, Issue 2

    Hirokazu Sakai, Kosuke Nabata, Shinya Yasuaki, and Kei Iwasaki

    Abstract In many-light rendering, a variety of visual and illumination effects, including anti-aliasing, depth of field, volumetric scattering, and subsurface scattering, are handled by creating a large number of virtual point lights (VPLs), which simplifies computation of the resulting illumination. Naive approaches that sum the direct illumination from many VPLs are computationally expensive; scalable methods compute the result more efficiently by clustering VPLs and then estimating their sum by sampling a small number of VPLs. Although significant speed-ups have been achieved using scalable methods, clustering leads to uncontrollable errors, resulting in noise in the rendered images. In this paper, we propose a method to improve the estimation accuracy of many-light rendering involving such visual and illumination effects. We demonstrate that our method can improve the estimation accuracy by a factor of 2.3 over the previous method.

    Keywords anti-aliasing; depth of field; many-light rendering; participating media

    1 Introduction

    Many-light rendering methods simplify the complex computation of global illumination into a simple summation of the contributions from many virtual point lights (VPLs) [1]. In many-light rendering, the incident radiance at a point to be shaded (the shading point) is calculated using VPLs. Because the number of VPLs used is generally quite large, previous methods [2-4] have clustered VPLs to streamline the computation. These methods, however, cannot control the errors produced by clustering, resulting in noise in the rendered images. To eliminate noise in such systems, users must adjust clustering parameters in tedious trial-and-error processes.

    To address this problem, Nabata et al. [5] proposed a method to estimate errors in the pixel values due to VPL clustering. This method, however, estimates each pixel value using a single shading point and cannot be applied to visual effects such as anti-aliasing and depth-of-field (DOF), which require multiple shading points to estimate the pixel value. Walter et al. [6] proposed a method called multidimensional lightcuts (MDLC), which attempts to control the clustering errors in many-light rendering of various visual effects. This method, however, does not estimate the error in each pixel value, which appears as noise in the rendered images. This paper proposes a method to improve the accuracy of pixel values for visual effects such as anti-aliasing, DOF, and volumetric effects, as shown in Fig. 1. Our method automatically partitions clusters and continues sampling them until the relative errors in the pixel values are smaller than a user-specified threshold.

    2 Related work

    2.1 Scalable VPL rendering

    Recent advances in many-light rendering have demonstrated that global illumination effects can be more than adequately approximated using many virtual lights [1, 7]. Keller [8] introduced the instant radiosity method, which calculates the indirect illumination from virtual point lights (VPLs). Walter et al. [2, 6] proposed a scalable solution to many-light methods using a hierarchical representation of VPLs, called cuts. Hašan et al. [9] represented many-light rendering via a large matrix, and explored the matrix structure by using row-column sampling. Ou and Pellacini [3] clustered the shading points into groups called slices and performed matrix row-column sampling for each slice. Georgiev et al. [10] proposed an importance sampling method for VPLs by recording the contributions of the VPLs at each cache point. Yoshida et al. [11] proposed an adaptive cache insertion method to improve VPL sampling. Wu and Chuang [12] proposed the VisibilityCluster algorithm, which approximates the visibilities between each cluster of VPLs and those of the shading points by estimating the averaged visibility. Huo et al. [4] proposed a matrix sampling-and-recovery method to efficiently gather contributions from the VPLs by sampling a small number of them. Although these methods can significantly accelerate many-light rendering, the results rely on user-specified parameters, and finding the optimal parameter values remains a challenging task.

    Fig. 1 Rendering with anti-aliasing (256 spp): (a) reference, (b) our method, (c) multidimensional lightcuts (MDLC). (b) and (c) are rendered so that the relative errors are less than 2%. (d) and (e) indicate relative errors (error color-code shown at far right). Values in (d) and (e) give the percentages of pixels with relative errors of less than 2%. Our method improves the estimation accuracy of pixel values by approximately 50% compared to MDLC. The low estimation accuracy of MDLC results in noise as shown in (c).

    2.2 Participating media

    VPL rendering suffers from splotches that stem from the singularity of the geometry term relating shading points and VPLs. Clamping is often used to avoid these artifacts. Engelhardt et al. [13] proposed a method to compensate for energy loss due to clamping and proposed a rendering method for participating media using VPLs. Novák et al. [14] proposed virtual ray lights (VRLs) to alleviate this singularity, and several groups have proposed acceleration methods [15, 16] that use VPLs and VRLs. These methods, however, do not estimate the errors due to clustering of the VPLs and VRLs.

    2.3 Subsurface scattering

    Arbree et al. [17] proposed a scalable rendering method for translucent materials using VPLs. Walter et al. [18] proposed bidirectional lightcuts, which support complex illumination effects, including subsurface scattering. Wu et al. [19] formulated a radiance computation method for subsurface scattering by sampling light-to-surface and surface-to-camera matrices. Although these methods can render translucent materials efficiently by clustering VPLs, they cannot control the errors due to clustering.

    To address this problem, we provide an error estimation method that can handle a wide variety of illumination and visual effects. After specifying the relative error tolerance ε and the probability α, our method stochastically estimates the relative errors with probability α; i.e., using our method, the relative errors of a proportion α of the pixels in the rendered image are likely to be less than ε.

    3 Background

    In many-light rendering, the outgoing radiance L(x, x_v) at shading point x toward the viewpoint x_v is calculated by

        L(x, x_v) = Σ_{y∈L} I(y) f(y, x, x_v) V(y, x) G(y, x)    (1)

    where L is the set of VPLs, I(y) is the intensity of VPL y, and f(y, x, x_v) is the material function that encompasses the bidirectional reflectance distribution function (BRDF) on the surface and the phase function within the volume. V(y, x) and G(y, x) are the visibility and geometry terms [1, 7] relating y and x, respectively. To estimate the pixel value for the rendering of participating media, anti-aliasing, and DOF, the outgoing radiances from multiple shading points are required, as shown in Fig. 2. For example, when multiple shading points are generated via supersampling for anti-aliasing, the pixel values are calculated using the weighted sum of the outgoing radiances from the shading points. The pixel value I produced by supersampling is calculated as

        I = Σ_{x∈G} W(x) Σ_{y∈L} I(y) f(y, x, x_v) V(y, x) G(y, x)    (2)

    where G is the set of shading points, and W is the weighting function. The weighting function W and material function f depend on the visual effects and the rendered objects. Details of W and f are provided in Sections 4.4-4.6.
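    To make the cost structure of Eq. (2) concrete, the weighted double sum over shading points and VPLs can be written directly as a naive reference implementation. The sketch below is illustrative only; the minimal types and the simplified geometry term (cosine factors omitted) are hypothetical stand-ins for a real renderer's data.

    ```python
    from dataclasses import dataclass

    @dataclass
    class VPL:
        intensity: float          # I(y)
        position: tuple

    @dataclass
    class ShadingPoint:
        weight: float             # W(x), e.g. 1/N_spp for a box filter
        position: tuple

    def geometry_term(y, x):
        # Simplified inverse-squared-distance geometry term (cosine factors omitted).
        d2 = sum((a - b) ** 2 for a, b in zip(y.position, x.position))
        return 1.0 / max(d2, 1e-8)

    def pixel_value(shading_points, vpls,
                    f=lambda y, x: 1.0,        # material function (BRDF / phase function)
                    V=lambda y, x: 1.0):       # visibility term
        """Naive O(|L|*|G|) evaluation of the weighted double sum in Eq. (2)."""
        total = 0.0
        for x in shading_points:
            for y in vpls:
                total += x.weight * y.intensity * f(y, x) * V(y, x) * geometry_term(y, x)
        return total
    ```

    For |L| VPLs and |G| shading points this performs |L|·|G| shading evaluations, which is exactly the cost that the scalable clustering methods discussed below are designed to avoid.
    
    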

    The computational cost of Eq. (2) is proportional to the product of the number of VPLs |L| and the number of shading points |G|. Since, in general, many VPLs are used, the computational cost of Eq. (2) is prohibitive. MDLC is an efficient scalable many-light rendering method that clusters VPLs and shading points [6]. MDLC implicitly represents the hierarchy of clusters with a product graph that consists of pairs of clusters for VPLs and shading points. MDLC estimates the upper bound for clustering error, but does not estimate the error in the pixel value (i.e., the sum of the errors due to clustering). We improve the estimation accuracy of the pixel value by combining a method for estimating errors [5] with MDLC [6].

    Fig. 2 Pixel value estimation using multiple shading points.

    4 Proposed method

    4.1 Overview

    Figure 3 shows an overview of our method, which estimates image pixel values by clustering VPLs and shading points. Clusters of VPLs and shading points are referred to as VPL clusters and shading clusters, respectively. VPL clusters and shading clusters are represented by binary trees, whose leaf nodes correspond to VPLs (or shading points), and whose inner nodes correspond to clusters of VPLs (or shading points). As shown in Fig. 3(b), we refer to the binary trees for the VPL clusters and the shading clusters as the light tree and the shading tree, respectively. Following MDLC, we denote a cluster pair consisting of a VPL cluster C_L and a shading cluster C_G by (C_L, C_G).

    We create the VPL clusters and build the light tree, used for all pixels, in a preprocess. For each pixel, we first generate shading points and build the shading tree by clustering the shading points. We prepare a priority queue Q that stores the cluster pairs in descending order of the standard deviation of each pair. Q is initialized with the pair (C_Lr, C_Gr) of root nodes of the light tree and the shading tree. Using this pair, the estimate ^I of the pixel value and the estimated error Δ^I = |^I - I| are initialized, and the standard deviation σ is calculated (see Section 4.2). Then we repeat the following processes:

    Fig. 3 Overview of our method, explained for rendering translucent materials. (a) Subsurface scattering of light at x_j is calculated by generating shading points x_i, which are importance sampled. (b) A VPL cluster C_Lr and shading cluster C_Gr are constructed. (c) Two VPLs and shading points from each cluster are sampled from the pair (C_Lr, C_Gr); then the pixel value ^I, error Δ^I, and standard deviation are estimated. (d) One of the clusters is subdivided if the estimated error Δ^I is greater than the tolerance ε^I. A cluster is selected based on the length of the diagonal of its bounding box weighted by the numbers of VPLs and shading points in each cluster. Here, the VPL cluster C_Lr is selected and replaced with two child nodes C_L1 and C_L2; two new pairs (C_L1, C_Gr) and (C_L2, C_Gr) are created. The pixel value is estimated from the two pairs (C_L1, C_Gr) and (C_L2, C_Gr).

    1. Pop from Q the cluster pair with the largest standard deviation.

    2. Subdivide one of the two clusters, C_L or C_G, and move down one step in the corresponding binary tree. Cluster selection is described in Section 4.3. Replace the selected cluster with two child nodes and create two new pairs.

    3. Update ^I and Δ^I, and calculate the standard deviations for the two new pairs. Push the two new pairs into Q.

    4. Terminate the process if the estimated error, Δ^I, is smaller than the tolerance, ε^I, where ε is the relative error tolerance. Otherwise, return to Step 1.
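    The loop above can be sketched with a max-priority queue. In the sketch below, sample_pair and children are hypothetical callbacks standing in for the per-pair estimation (Section 4.2) and cluster subdivision (Section 4.3); per-pair errors are combined additively here for simplicity, whereas the paper combines per-pair sample variances statistically through the t-distribution.

    ```python
    import heapq
    import itertools

    def refine_pixel(root_pair, sample_pair, children, epsilon):
        """Adaptive cluster-pair refinement (schematic).

        sample_pair(pair) -> (estimate, error, sigma) for the pair's
            contribution (Section 4.2).
        children(pair) -> new pairs after subdividing one of the pair's two
            clusters (Section 4.3), or [] if neither cluster is divisible.
        """
        tie = itertools.count()                  # tie-breaker for equal sigmas
        est, err, sigma = sample_pair(root_pair)
        I_hat, dI_hat = est, err                 # running pixel value and error
        # heapq is a min-heap, so sigma is negated to pop the largest first.
        queue = [(-sigma, next(tie), root_pair, est, err)]
        while queue and dI_hat > epsilon * abs(I_hat):       # Step 4: stop when error is small
            _, _, pair, p_est, p_err = heapq.heappop(queue)  # Step 1: max-sigma pair
            subpairs = children(pair)                        # Step 2: subdivide one cluster
            if not subpairs:                                 # leaf pair: contribution is final
                continue
            I_hat -= p_est                       # Step 3: replace the pair's contribution
            dI_hat -= p_err
            for sub in subpairs:
                s_est, s_err, s_sigma = sample_pair(sub)
                I_hat += s_est
                dI_hat += s_err
                heapq.heappush(queue, (-s_sigma, next(tie), sub, s_est, s_err))
        return I_hat, dI_hat
    ```

    Because each subdivision replaces a coarse estimate with finer ones, the estimate converges to the exact sum once every pair reaches the leaves, and typically terminates much earlier once Δ^I falls below ε^I.
    
    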

    4.2 Estimating I and ΔI

    The outgoing radiance I_i due to the shading cluster C_Gi illuminated by the VPL cluster C_Li is calculated by

        I_i = Σ_{x∈C_Gi} W(x) Σ_{y∈C_Li} I(y) f(y, x, x_v) V(y, x) G(y, x)    (3)

    As summing over all the VPLs and shading points in clusters C_Li and C_Gi is computationally expensive, our method estimates I_i by sampling a small number of VPLs and shading points as follows:

        ^I_i = (1/K) Σ_{k=1}^{K} W(x_k) I(y_k) f(y_k, x_k, x_v) V(y_k, x_k) G(y_k, x_k) / (p_G(x_k) p_L(y_k))    (4)

    where x_k and y_k are the k-th samples of the shading point and VPL, respectively, K is the number of samples, and p_G and p_L are probability mass functions for sampling, calculated as follows:

        p_G(x) = W(x) / Σ_{x'∈C_Gi} W(x'),   p_L(y) = I(y) / Σ_{y'∈C_Li} I(y')    (5)

    Substituting these functions into Eq. (4), ^I_i is calculated from:

        ^I_i = (S_Wi S_Ii / K) Σ_{k=1}^{K} f(y_k, x_k, x_v) V(y_k, x_k) G(y_k, x_k)

    where S_Wi = Σ_{x∈C_Gi} W(x) and S_Ii = Σ_{y∈C_Li} I(y).

    Computing the error ΔI = |^I - I| necessitates knowing the true value of I in Eq. (2), but the computational cost of obtaining this value is high. Therefore, our method estimates Δ^I using the following equation [5]:

        Δ^I = t_α sqrt( Σ_{i=1}^{N} s_i²/2 )    (6)

    where t_α is the α quantile of the t-distribution, N is the number of pairs, and s_i² is the sample variance of I_i for the i-th pair (C_Li, C_Gi); Eq. (6) is derived in the Appendix.

    To select a pair of clusters to be subdivided, the standard deviation σ_i for each pair (C_Li, C_Gi) is required. Although the sample variance s_i² can be used to estimate σ_i, the accuracy of this estimate is low, since our method estimates ^I_i with K = 2 samples, as does the previous method [5]. Instead of using the sample variance, our method calculates the standard deviation σ_i for the i-th pair using the upper bounds of f and G, as follows:

        σ_i = S_Wi S_Ii f_ub G_ub σ_V

    where S_Wi and S_Ii are the sums of the weights and the VPL intensities within C_Gi and C_Li, respectively, f_ub and G_ub are the upper bounds of f and G, respectively, within the VPL cluster C_Li and shading cluster C_Gi, and σ_V is the standard deviation of the visibility function, for which a maximum value of 0.5 is used. f_ub and G_ub for C_Li and C_Gi are calculated in a similar way to those in MDLC.

    4.3 Cluster selection

    We now determine which cluster of the two clusters in the pair with maximum standard deviation is to be subdivided. In MDLC, the cluster is chosen by a refinement heuristic; it basically selects the cluster with the largest bounding box. Although this works well in some cases, it predominantly selects the VPL cluster C_L, since VPLs are usually distributed throughout a scene, whereas the shading points are distributed locally, as shown by Fig. 2. In addition, the numbers of VPLs and shading points are often substantially different (e.g., the number of VPLs in Fig. 1 is 35.1k while the number of shading points is only 256). These facts cause unequal subdivisions between VPL clusters and shading clusters, which results in lower estimation accuracy. To address this problem, we propose a cluster selection method that accounts for the size difference between the bounding boxes of the root nodes of the VPL cluster and the shading cluster. We also consider the numbers of VPLs and shading points when selecting the cluster to be subdivided. Specifically, the bounding box for each shading cluster is scaled using the following two coefficients:

    where C_Lr and C_Gr are the root nodes of the VPL and shading clusters, respectively, l_CLr and l_CGr are the diagonal lengths of the bounding boxes for C_Lr and C_Gr, respectively, and |L| and |G| are the numbers of VPLs and shading points, respectively.
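    As an illustration of the selection rule, the sketch below compares scaled bounding-box diagonals. The coefficient c stands in for c_l or c_d, whose exact expressions are given in the text; all names here are hypothetical.

    ```python
    import math

    def diagonal(bbox):
        """Length of the diagonal of an axis-aligned bounding box ((min), (max))."""
        lo, hi = bbox
        return math.sqrt(sum((b - a) ** 2 for a, b in zip(lo, hi)))

    def select_cluster(vpl_bbox, shading_bbox, c):
        """Return 'L' to subdivide the VPL cluster or 'G' for the shading cluster.

        c is the scaling coefficient applied to the shading cluster's diagonal
        (c_l or c_d in the text); c > 1 counteracts the tendency to always pick
        the spatially larger VPL cluster.
        """
        if diagonal(vpl_bbox) > c * diagonal(shading_bbox):
            return 'L'
        return 'G'
    ```

    With c = 1 this degenerates to the MDLC-style "largest bounding box" heuristic; larger c makes shading-cluster subdivision more likely, balancing the refinement between the two trees.
    
    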

    4.4 Anti-aliasing and DOF

    In many-light rendering with anti-aliasing or DOF, N_spp viewing rays are generated via sampling on the screen pixel or camera lens, and the point of intersection between each viewing ray and the surface in the scene is calculated. The material function f at each shading point x_i is represented by a BRDF f_r. As we use a simple box filter, the weighting function is calculated by W(x_i) = 1/N_spp; other filters (e.g., Gaussian filters) can also be applied.
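    For a box filter, the supersampled pixel value is simply the average of the per-ray radiances; a normalized Gaussian filter only changes the weights W(x_i). A small sketch (function names and the default sigma are hypothetical):

    ```python
    import math

    def box_weights(n_spp):
        # Box filter: every ray contributes equally, W(x_i) = 1/N_spp.
        return [1.0 / n_spp] * n_spp

    def gaussian_weights(offsets, sigma=0.5):
        # Normalized Gaussian filter over sample offsets from the pixel center.
        w = [math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma)) for dx, dy in offsets]
        total = sum(w)
        return [wi / total for wi in w]

    def filtered_pixel(radiances, weights):
        # Weighted sum of per-ray radiances: the outer sum over G in Eq. (2).
        return sum(w * l for w, l in zip(weights, radiances))
    ```

    In either case the weights sum to one, so the filter only redistributes how much each viewing ray contributes to the pixel.
    
    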

    Here, for simplicity, we describe methods for rendering translucent materials and participating media with a single ray per pixel; however, with simple modifications, our method can render these materials using multiple rays, as shown in Fig. 8.

    4.5 Rendering translucent materials

    The outgoing radiance at x_o on the surface of a translucent material in viewing direction ω_o due to multiply scattered light within the translucent material is calculated using the diffuse bidirectional scattering surface reflectance distribution function (BSSRDF) R_d [20]: subsurface scattering of light at x_j in viewing direction ω_j is given by

    where T_η is the Fresnel transmittance, x_i and n_i are a point on and the normal to the surface of the translucent material at x_i, respectively, r_ij = ‖x_i - x_j‖, A is the set of points on the surface of the translucent material, and Ω is the set of directions over the hemisphere.

    Our method traces N_spp rays through each pixel and defines the point of intersection between the j-th ray and the surface of the translucent material as x_j.

    The pixel value due to the subsurface scattering of light is calculated by

    By comparing Eqs. (11) and (2), the weighting function W and material function f are represented as follows:

    4.6 Rendering participating media

    Like MDLC, our method assumes homogeneous participating media and an isotropic phase function. To calculate a pixel value in the presence of homogeneous participating media, the scattered radiances are integrated along the viewing ray, as shown in Fig. 2(b). Let x_o be the point of intersection of the viewing ray and the surface in the scene. The outgoing radiance from x_o is then calculated using the sum of the reflected radiance L_s at x_o and the scattered radiance L_m along the viewing ray, as follows:

    where τ(x_o, x_v) = exp(-σ_t ‖x_o - x_v‖) is the transmittance, and σ_t and σ_s are the extinction and scattering coefficients, respectively. x(t) is the point on the viewing ray parameterized by the distance t from viewpoint x_v. L(x_o, x_v) in Eq. (12) is calculated using Eq. (1). The weighting function W and material function f for L_s are represented by W(x_o) = τ(x_o, x_v) and the BRDF f_r(y, x_o, x_v), respectively. The scattered radiance L(x(t), x_v) at x(t) is calculated using:

    where f_p is the (isotropic) phase function.

    To compute the integral in Eq. (13), shading points x_i are generated by uniformly subdividing the viewing ray with step size Δt; L_m is calculated by summing over the shading points and VPLs as follows:

        L_m = Σ_{x_i∈G} σ_s τ(x_i, x_v) Δt Σ_{y∈L} I(y) f_p(y, x_i, x_v) V(y, x_i) G(y, x_i)

    Comparing this equation with Eq. (2), the material function f is represented by the phase function f_p, and the weighting function W(x_i) is calculated using W(x_i) = σ_s τ(x_i, x_v) Δt.
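    The uniform subdivision fixes the per-point weight W(x_i) = σ_s τ(x_i, x_v) Δt. A small sketch of generating the shading-point distances and weights along one viewing ray through a homogeneous medium (function and parameter names are hypothetical):

    ```python
    import math

    def march_weights(t_max, dt, sigma_t, sigma_s):
        """Shading-point distances t_i and weights W(x_i) = sigma_s * tau(x_i, x_v) * dt
        along a viewing ray through a homogeneous medium (midpoint rule)."""
        n = int(round(t_max / dt))           # number of uniform ray segments
        points = []
        for i in range(n):
            t = (i + 0.5) * dt               # midpoint of the i-th segment
            tau = math.exp(-sigma_t * t)     # transmittance from the viewpoint x_v
            points.append((t, sigma_s * tau * dt))
        return points
    ```

    Summing the weights approximates σ_s ∫₀^{t_max} τ(x(t), x_v) dt = (σ_s/σ_t)(1 - exp(-σ_t t_max)), which gives a quick sanity check on the choice of step size Δt.
    
    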

    5 Results

    Figures 1 and 4-8 show our results. Table 1 shows the numbers of VPLs and shading points, as well as the computational time for our method and MDLC, measured using a PC with an Intel Xeon E5-2650 v4 2.20 GHz CPU. In all the calculations, the relative tolerance ε was set to 2%, and the α quantile for Eq. (6) was set to 95%. To measure the estimation accuracy, we defined R_ε as the percentage of pixels satisfying |^I - I| < εI; the estimation is accurate when R_ε is close to α (95% in our experiments).

    Figure 1 shows results of our rendering method with anti-aliasing, where the insets compare (a) the reference solution, (b) our method, and (c) MDLC. The reference solution is rendered by summing up all the contributions from the VPLs at all the shading points to calculate the pixel values. Insets (d) and (e) illustrate the relative errors and R_ε for our method and MDLC. This figure demonstrates that our method can improve upon the estimation accuracy achieved using MDLC, with corresponding improvements in noise, as shown in Fig. 1(c). Figure 4 shows a San Miguel scene rendered with DOF. Figure 5 shows a Cornell box scene filled with homogeneous participating media. As shown in insets (c) of Figs. 4 and 5, the stochastic noise is perceptible when using MDLC, whereas it is imperceptible using our error estimation method (see insets (b)). Figure 6 shows a Sponza scene filled with homogeneous participating media. Figure 7 shows a Cornell box scene where the two boxes consist of translucent material. Figure 8 shows a human head model rendered with a diffuse BSSRDF and anti-aliasing. In all of these experiments, our method showed improved estimation accuracy R_ε over MDLC.

    Table 1 Statistics for our method and MDLC

    Fig. 4 San Miguel scene with DOF.

    Fig. 5 Cornell box scene with participating media.

    Fig. 6 Sponza scene with participating media.

    Fig. 7 Cornell box scene with two boxes of translucent materials.

    Fig. 8 Human head scene (subsurface scattering with anti-aliasing(8 spp)).

    In Figs. 9 and 10, cluster selections without scaling (the second column), scaling by c_l (the third column), and scaling by c_d (the fourth column) are compared. Scaling by c_d yields the best estimation accuracy R_ε for five out of six of these scenes. Moreover, comparing results for these scenes without scaling with those with scaling demonstrates that scaling by c_d consistently improves the estimation accuracy. In Fig. 9(a), the first row of the San Miguel scene with anti-aliasing shows relative errors greater than 5%, especially around the ivy-covered wall in the first column, but these are reduced in the third column. In Fig. 9(d), the fourth row of the kitchen scene with DOF, relative errors greater than 5% can be seen, especially around the shadow boundaries and outlines. We think that these low estimation accuracies arise from the uneven cluster subdivisions (i.e., shading clusters are not subdivided, whereas VPL clusters are predominantly subdivided). Comparing scalings by c_l and c_d shows that estimation accuracies in the three scenes deteriorate when using c_l, although c_l yields the best estimation accuracy in the San Miguel scene with DOF (c). Based on these experiments, we used the scaling coefficient c_d to select the clusters.

    Fig. 9 Cluster selection without scaling (the second column), scaling by c_l (the third column), and scaling by c_d (the fourth column). The left column shows the reference solutions and the other images show the relative errors in false color. The values in each relative error image show R_ε. Scaling using c_d yields better estimation accuracies than not scaling or scaling by c_l.

    Fig. 10 Cluster selection without scaling (the second column), with scaling by c_l (the third column), and scaling by c_d (the fourth column) for subsurface scattering (above) and participating media (below). Although scaling by c_l provides better estimation accuracy R_ε than not scaling in the human head scene (above), relative errors greater than 5% can be seen in the neck. Scaling using c_d provides the best estimation accuracy, and relative errors greater than 5% are not seen. In the Cornell box scene with homogeneous participating media (below), since VPLs and shading points are distributed throughout the scene, the improvements due to the use of c_d are subtle, but its use does not impair the estimation accuracy.

    While our method outperformed MDLC in estimation accuracy, its computational time was from 1.11 to 8.55 times greater. We attempted to adjust the parameter ε so that the estimation accuracy of MDLC was similar to our result in Fig. 5. After several trial-and-error processes and re-renderings, R_ε of MDLC became 91.2% with ε = 0.5%. The computational time for MDLC with ε = 0.5% was 33.5 s, which is comparable to our result (35.6 s). However, our method does not require the tedious trial-and-error processes needed for MDLC.

    6 Conclusions and future work

    We have presented a scalable many-light rendering method that can improve the estimation accuracy for various visual and illumination effects. Our method automatically partitions VPL and shading clusters so that pixel errors are smaller than the relative tolerance ε. Currently, our method is limited to homogeneous participating media with isotropic scattering. We would like to lift this limitation in future work. Moreover, we intend to apply our method to motion blur.

    Acknowledgements

    This work was partially supported by JSPS KAKENHI 15H05924 and 18H03348.

    Appendix Derivation of the error estimate ΔI

    If the samples in each pair follow a normal distribution and Neyman allocation is used for the number of samples for each pair, their statistic T follows a t-distribution with (n - N) degrees of freedom:

        T = (^I - I) / sqrt( Σ_{i=1}^{N} s_i²/n_i )

    where N is the number of pairs, n_i is the number of samples for the i-th pair (C_Li, C_Gi), n is the total number of samples (i.e., n = Σ_{i=1}^{N} n_i), and s_i² is the sample variance of the i-th pair. The error ΔI = ^I - I is calculated using the α quantile of the t-distribution, t_α, as

        |ΔI| ≤ t_α sqrt( Σ_{i=1}^{N} s_i²/n_i )    (15)

    To estimate the pixel value, it is known to be more efficient to use a large number of pairs with a small number of samples than a small number of pairs with many samples. Since at least two samples per pair are required to calculate the sample variance s_i², we sample two VPLs and two shading points from each pair (K = 2 in Eq. (4)). By substituting n_i = 2 and n = 2N into Eq. (15), the error ΔI can be simplified to give Eq. (6).
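    With K = 2, the per-pair sample variance has a closed form: for samples a and b with mean m = (a + b)/2, s² = (a - m)² + (b - m)² = (a - b)²/2 (unbiased, with n_i - 1 = 1 in the denominator). The sketch below assumes the per-pair terms are combined as Σ_i s_i²/n_i with n_i = 2, and takes the quantile t_α as an input rather than computing it; all names are illustrative.

    ```python
    import math

    def two_sample_variance(a, b):
        # Unbiased sample variance of two samples: s^2 = (a - b)^2 / 2.
        return (a - b) ** 2 / 2.0

    def error_estimate(sample_pairs, t_alpha):
        """Error bound t_alpha * sqrt(sum_i s_i^2 / n_i), assuming n_i = 2
        samples per cluster pair."""
        var_sum = sum(two_sample_variance(a, b) / 2.0 for a, b in sample_pairs)
        return t_alpha * math.sqrt(var_sum)
    ```

    This makes the cost of the error estimate negligible: each refined pair contributes one squared difference, and the bound is a single square root over their sum.
    
    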
