
Estimating reflectance and shape of objects from a single cartoon-shaded image

Hideki Todo1, Yasushi Yamaguchi2

Computational Visual Media, 2017, Issue 1

Although many photorealistic relighting methods provide a way to change the illumination of objects in a digital photograph, it is currently difficult to relight digital illustrations having a cartoon shading style. The main difference between photorealistic and cartoon shading styles is that cartoon shading is characterized by soft color quantization and nonlinear color variations, which cause noticeable reconstruction errors under a physical reflectance assumption such as Lambertian reflection. To handle this non-photorealistic shading property, we focus on shading analysis of the most fundamental cartoon shading technique. Based on the color map shading representation, we propose a simple method to interpret the input shading as that of a smooth shape with a nonlinear reflectance property. We have conducted simple ground-truth evaluations to compare our results to those obtained by other approaches.

non-photorealistic rendering; cartoon shading; relighting; quantization

    1 Introduction

Despite recent progress in 3D computer graphics techniques, traditional cartoon shading styles remain popular for 2D digital art. Artists can use a variety of commercial software (e.g., Photoshop, Painter) to design their own expressive shading styles. Although the design principle used roughly follows a physical illumination model, editing is restricted to 2D drawing operations. We are interested in exploring new interactions which allow relighting of a painted shading style given a single input image.

Reconstructing surface shape and reflectance from a single image is known as the shape-from-shading problem [1]. Based on this fundamental problem setting, most relighting approaches assume shading follows a Lambertian model [2–4]. Although these approaches work well for photorealistic images, they often fail to interpret cartoon shading styles in digital illustrations.

The main difference between photorealistic and cartoon shading styles is that cartoon shading is characterized by nonlinear color variation with soft quantization. The designed shading is typically more quantized than the inherent surface shape and its illumination. This assumption is common in many 3D stylized rendering techniques that use a color map representation [5–7], which simply converts smooth 3D illumination into an artistic shading style. As shown in Fig. 1, this simple mechanism can produce a variety of shading styles with different quantization effects. However, such stylization makes it more difficult for shading analysis to reconstruct a surface shape and reflectance from the resulting shading.

    Fig.1 Stylized shading styles obtained by color map representation.

In this paper, we propose a simple shading analysis method to recover a reasonable shading representation from the input quantized shading. As a first step, we focus on the most fundamental cartoon shading [6]. Our primary assumption is that the main nonlinear factor in the final shading can be encoded by a color map function. With this in mind, we aim to reconstruct a smooth surface field and a nonlinear reflectance property from the input shading. Using these estimated data, our method provides a way to change the illumination of the input image while retaining its quantized shading style. To evaluate our approach, we conducted a simple pilot study using a prepared set of 3D models and color maps covering a variety of stylization inputs. The proposed method was quantitatively compared to related approaches, which provided several key insights regarding relighting of stylized shading.

    2 Related work

Color mapping is a common approach used to generate stylized appearances in comics or illustrations. In stylized rendering of a 3D scene, the color map representation is used to convert smooth 3D illumination into quantized nonlinear shading effects [5–7]. Similar conversion techniques are used in 2D image abstraction methods for photorealistic images or videos [8–11]. As a starting point, our work follows the basic assumption that stylized shading appearance is based on a smooth surface shape.

Previous shape reconstruction methods for painted illustrations also attempt to recover a smooth surface shape from the limited information provided by feature lines. Lumo [12] generates an approximate normal field by interpolating normals on region boundaries and interior contours. Sýkora et al. [13] extended this approach with a simple set of user annotations to recover full 3D shape for global illumination rendering. CrossShade [14] enables the user to design cross-section curves for better control of the constructed normal field. The CrossShade technique was extended by Iarussi et al. [15] to construct generalized bend fields from rough sketches in bitmap form. However, these approaches focus only on shape modeling from boundary constraints. The recently proposed inverse toon shading [16] modeling framework also follows the strategy of modeling normal fields by designing isophote curves. In that work, the interpolation scheme requires manual editing to design two sets of isophotes with different illumination conditions for robust interpolation, and reliable isophote values are also assumed. In contrast, our objective is to use a single cartoon-shaded image to recover a shading representation that contains both a shape and a nonlinear color map reflectance.

An entire-image illumination constraint is considered in the well-known shape-from-shading (SFS) problem [1] for photorealistic images. Since the problem is severely ill-posed, accurate surface reconstruction requires skilled user interaction [3, 4, 17]: the user must specify shape constraints to reduce the solution space of the SFS problem. To reduce user burden, another class of approaches settles for a rough approximation from luminance gradients [2, 18] that can be tolerated by human perception. However, such approaches assume a photorealistic reflectance model, which often results in large reconstruction errors for the nonlinear shading in digital illustrations.

Motivated by these considerations, we attempt to leverage the limited cartoon shading information to model a smooth surface shape and a nonlinear reflectance that reproduce the original shading appearance.

3 Problem definition

    3.1 Shading model assumptions

As proposed in the cartoon shading technique [6], we assume that a color map representation is used to reproduce the artist's nonlinear shading effects. Figure 2 illustrates the basic cartoon shading process. In this model, the shading color c ∈ R³ is computed as follows:

    Fig.2 Cartoon shading process.

c = M(I)

where I ∈ R is the luminance value of the illumination, and M : R → R³ is a 1D color map function which converts the luminance value to the final shading color. For a diffuse shading material, we set I = L·N, where L is a light vector and N is the surface normal vector. We are interested in manipulating L to L′ to produce a new lighting result, i.e., c′ = M(L′·N).
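As a concrete illustration of this model (our own sketch, not code from the paper), the following numpy snippet evaluates c = M(L·N) per pixel with a discretized 1D color map; the function name `cartoon_shade` and the array layouts are assumptions.

```python
import numpy as np

def cartoon_shade(N, L, color_map):
    """Evaluate c = M(L.N) per pixel.

    N         : (H, W, 3) unit normal field
    L         : (3,) unit light vector
    color_map : (K, 3) discretized 1D color map M, sampled over I in [0, 1]
    """
    I = np.clip(np.einsum('hwc,c->hw', N, L), 0.0, 1.0)   # diffuse luminance I = L.N
    idx = np.round(I * (len(color_map) - 1)).astype(int)  # nearest color map entry
    return color_map[idx]                                  # (H, W, 3) shaded colors
```

Replacing `color_map` with a smooth ramp reproduces Lambertian-like shading, while a few flat color bands give the quantized styles of Fig. 1.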

However, the inverse problem is ill-posed if only the shading color c is available. The primary idea of this paper is to limit the solution space for the other factors while preserving the final shading appearance. The basic assumptions made in this paper are as follows.

• Smooth shape and illumination. We assume that the surface shape N and the illumination I are smooth and follow a linear relationship. The only nonlinear factor is the color map function M, which is used to produce the stylized shading appearance.

• Monotonic function for the color map. For the color map function M, we assume a monotonic relation between image luminance Ic (obtained from c) and surface illumination I. This assumption is important to simplify our problem definition as a variation of the photorealistic relighting problem.

• Diffuse lighting for illumination. We analyze all shading effects as due to diffuse lighting. We do not explicitly model specular reflections and shadows in our shading analysis experiments.

    4 Methods

Figure 3 illustrates the main process of the proposed shading analysis and relighting approach. Here we give the primary objective of each step and summarize them below.

• Initial normal estimation. First, an initial normal field N0 is required as input for the reflectance estimation and normal refinement steps. Since the reflectance property is not available, we simply approximate a smooth rounded normal field from the silhouette.

• Reflectance estimation. Given the initial normal field N0, we estimate a key light direction L and a color map function M which best fit c = M(L·N0). This decomposition result roughly matches the original shading c for the given N0.

• Normal refinement. Since the estimated decomposition does not exactly satisfy c = M(L·N0), we refine the surface normal field from N0 to N to reproduce the original shading c.

Fig. 3 Method overview. (a) Initial normal estimation to approximate a smooth rounded normal field. (b) Reflectance estimation to obtain a light direction and a color map. (c) Normal refinement to modify the initial normals by fitting the shading appearance. (d) Relighting to provide lighting interactions based on the shading analysis data.

• Relighting. Based on the above analysis results, the proposed method can relight the given input illustration. We change the light vector L to L′ to obtain the final shading color c′ = M(L′·N).

In the following sections, each step of the proposed shading analysis and relighting approach is described in detail.

    4.1 Initial normal estimation

For the target region Ω, we obtain a rounded normal field N0 from the silhouette inflation constraints [12, 13]:

where N∂Ω = (N∂Ω,x, N∂Ω,y, 0) is the normal constraint from the silhouette ∂Ω. These normals are propagated to the interior of Ω using a diffusion method [19]. As shown in Fig. 4, we obtain a smooth initial normal field N0 describing a rounded shape.
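A minimal sketch of this step, under our own assumptions (a boolean region mask as input, simple Jacobi iterations in place of the diffusion solver of Ref. [19], and hypothetical names such as `initial_normals`):

```python
import numpy as np
from scipy import ndimage

def initial_normals(mask, iters=2000):
    """Approximate a rounded normal field N0 inside the region `mask` (H, W bool).

    Boundary normals (nx, ny, 0) point outward along the silhouette; their x/y
    components are diffused into the interior (Laplace smoothing), and nz is
    then recovered so that each normal has unit length ("inflation").
    """
    # Outward 2D direction: the gradient of a blurred mask points inward, so negate it.
    gy, gx = np.gradient(ndimage.gaussian_filter(mask.astype(float), 2.0))
    norm = np.hypot(gx, gy) + 1e-8
    bx, by = -gx / norm, -gy / norm

    boundary = mask & ~ndimage.binary_erosion(mask)
    interior = mask & ~boundary
    nx = np.where(boundary, bx, 0.0)
    ny = np.where(boundary, by, 0.0)

    for _ in range(iters):                       # Jacobi iterations of the Laplace equation
        for comp in (nx, ny):
            avg = 0.25 * (np.roll(comp, 1, 0) + np.roll(comp, -1, 0) +
                          np.roll(comp, 1, 1) + np.roll(comp, -1, 1))
            comp[interior] = avg[interior]

    nz = np.sqrt(np.clip(1.0 - nx**2 - ny**2, 0.0, 1.0))
    N0 = np.dstack([nx, ny, nz])
    return N0 / (np.linalg.norm(N0, axis=2, keepdims=True) + 1e-8)
```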

    4.2 Ref l ectance estimation

Once the initial normal field N0 has been obtained, our system estimates reflectance factors based on the cartoon shading representation c = M(L·N).

The reflectance estimation process takes the original color c and the initial normal field N0 as inputs, and estimates the light direction L and the color map function M. We assume that the scene is illuminated by a single key light direction (i.e., L is the same for the entire image). The color map function M is estimated separately for each target object.

In the early stages of our experiments, we observed that the key light estimation step was significantly affected by the input material style and shape. This simple experiment is summarized in the Appendix. Since L is a key factor in the subsequent estimation steps, we assume that a reliable light direction is provided by the user. In our evaluation, we used a predefined ground-truth light direction Lt to observe the errors caused by the other estimation steps.

Fig. 4 Initial normal field obtained by silhouette inflation.

Color map estimation. Given the smooth illumination result I0 = L·N0, we estimate a color map function M to fit c = M(I0).

As shown in Fig. 5, isophote pixels of I0 do not all have the same color in c. Therefore, a straightforward minimization of the per-pixel fitting error produces a blurred color map M.

To avoid this invalid correspondence between I0 and c, we enforce monotonicity by sorting the target pixels in dark-to-bright order, as shown in Fig. 6. From the sorted pixels, we obtain a valid correspondence between each luminance range [Ii, Ii+1] and each shading color ci in the same luminance order. As a result, the color map function M is recovered as a lookup table that returns ci for luminance values in [Ii, Ii+1]. We also construct the corresponding inverse map M⁻¹, an additional lookup table that retrieves the luminance range [Ii, Ii+1] from a shading color ci.
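The lookup-table construction can be sketched as follows; this is our own simplified reading of the procedure, assuming exactly quantized input colors (a real implementation would cluster similar colors) and hypothetical names such as `estimate_color_map`:

```python
import numpy as np

def estimate_color_map(I0, c, mask):
    """Estimate the color map M (as a lookup table) from illumination I0 and colors c.

    I0   : (H, W) initial illumination L.N0
    c    : (H, W, 3) input shading colors, assumed exactly quantized
    mask : (H, W) bool, target region

    Returns a list of (I_lo, I_hi, color): pixels whose illumination falls in
    [I_lo, I_hi] take `color`; read backwards, the same table acts as M^-1.
    """
    I_sorted = np.sort(I0[mask])
    colors = c[mask].reshape(-1, 3)

    # Distinct quantized colors, ordered dark to bright by image luminance.
    uniq, inv = np.unique(colors, axis=0, return_inverse=True)
    order = np.argsort(uniq @ np.array([0.299, 0.587, 0.114]))

    # Each color gets a contiguous illumination range whose width is proportional
    # to its pixel count (same dark-to-bright order on both sides).
    counts = np.bincount(inv, minlength=len(uniq))[order]
    bounds = np.concatenate([[0], np.cumsum(counts)])
    return [(I_sorted[bounds[k]], I_sorted[bounds[k + 1] - 1], uniq[ci])
            for k, ci in enumerate(order)]
```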

    4.3 Normal ref i nement

As shown in the right image of Fig. 6, the shading result M(L·N0) does not match c perfectly. We therefore consider refining the normal field N0 to reproduce the original color c by minimizing the following objective function:

Fig. 5 Invalid correspondence between the initial illumination I0 and the input shading c.

Fig. 6 Color map estimation. Given the set of illumination values L·N0 and original colors c, a color map function M is estimated by matching the ranges in luminance order.

To address this issue, we introduce the following objective function, complementary to Eq. (3):

Figure 7 illustrates the illumination constraints for the normal refinement process. From the color map estimation process described in Section 4.2, the luminance range [Ii, Ii+1] is known for each shading color ci. Therefore, the illumination is restricted by the following conditions:

where Ci := {p ∈ Ω | c(p) = ci} is the quantized color area, and the illumination L·N(p) is constrained to lie in [Ii, Ii+1].

    We solve the problem by minimizing the following energy:

Fig. 7 Illumination constraints for normal refinement. The initial illumination result is modified by luminance range constraints derived from M⁻¹.

The normal field N is updated iteratively from the estimated initial normal field N0 using Gauss–Seidel iterations. Here we chose λ = 1.5 to obtain the refinement results. Compared to the initial normal field N0, the refined normal field N better fits the original color c.
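The following sketch shows one possible iteration in the spirit of this refinement; it is not the authors' exact energy or solver — the per-pixel bound lookup, the update step, and the way λ trades the data term against neighborhood smoothing are our own simplifications:

```python
import numpy as np

def refine_normals(N0, L, c, table, iters=200, lam=1.5):
    """Iteratively refine normals so that L.N stays in each color's luminance range.

    N0    : (H, W, 3) initial normals,  L : (3,) unit light vector
    c     : (H, W, 3) input shading,    table : (I_lo, I_hi, color) entries (M / M^-1)
    lam   : weight of the data (appearance) term against neighborhood smoothing
    """
    N = N0.copy()
    H, W, _ = N.shape

    # Per-pixel luminance bounds looked up through the inverse color map M^-1.
    I_lo, I_hi = np.zeros((H, W)), np.ones((H, W))
    for lo, hi, col in table:
        match = np.all(np.isclose(c, col, atol=1e-3), axis=2)
        I_lo[match], I_hi[match] = lo, hi

    for _ in range(iters):
        I = np.einsum('hwc,c->hw', N, L)
        I_target = np.clip(I, I_lo, I_hi)                 # illumination range constraint
        N = N + (I_target - I)[..., None] * L             # push L.N toward its range
        smooth = 0.25 * (np.roll(N, 1, 0) + np.roll(N, -1, 0) +
                         np.roll(N, 1, 1) + np.roll(N, -1, 1))
        N = (lam * N + smooth) / (lam + 1.0)              # blend with 4-neighbour average
        N /= np.linalg.norm(N, axis=2, keepdims=True) + 1e-8
    return N
```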

    4.4 Relighting

Based on the cartoon shading representation c = M(L·N), our system enables lighting interactions with the input illustration. We obtain a relighting result c′ by changing the light vector L to L′ as follows:

where the estimated factors M and N are preserved during the relighting process.
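Putting the sketches above together, a hypothetical relighting pass only swaps the light direction and re-evaluates the lookup table (here `mask`, `c`, and `L` are assumed to be the segmented region, the input shading, and the given key light):

```python
import numpy as np

# Hypothetical end-to-end usage of the sketches above.
N0 = initial_normals(mask)
table = estimate_color_map(np.einsum('hwc,c->hw', N0, L), c, mask)
N = refine_normals(N0, L, c, table)

# Relight: keep M (the lookup table) and N fixed, change only the light direction.
L_new = np.array([0.5, 0.3, 0.81])
L_new /= np.linalg.norm(L_new)
I_new = np.clip(np.einsum('hwc,c->hw', N, L_new), 0.0, 1.0)
cuts = np.array([lo for lo, hi, col in table][1:])   # lower bounds of ranges 1..K-1
colors = np.array([col for lo, hi, col in table])
c_relit = colors[np.searchsorted(cuts, I_new)]       # c' = M(L'.N) via the lookup table
```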

    5 Evaluation of shading analysis

To evaluate our shading analysis approach, we conducted a simple pilot study via a ground-truth comparison. We compare our estimated results with those of several existing approaches and with ground-truth inputs.

    5.1 Experimental design

To generate a variety of stylized appearances, we first prepared shape and color map datasets (see Fig. 8).

Shape dataset. We prepared 20 ground-truth 3D models of varying shape complexity and recognizability. This dataset includes 7 simple primitive shapes and 13 other shapes from 3D shape repositories. Each ground-truth model is rendered from a specific viewpoint to generate a 512×512 normal field.

    Fig.8 20 ground-truth 3D shapes and 24 color maps in our datasets.

Color map dataset. To better reflect real situations, we extracted color maps from existing digital illustrations. We selected a small portion of a material area with a stroke, then simply sorted the selected pixels in luminance order to obtain a color map. We extracted more than 100 material areas from different digital illustration sources. From the extracted color maps, we selected 24 distinctive color maps with different quantization effects.

Given a ground-truth normal field Nt and color map Mt, a final input image was obtained as ct = Mt(Lt·Nt). Note that we also provide the ground-truth light direction Lt in our evaluation process.

    5.2 Comparison of ref l ectance models

We first compared the visual difference between our target cartoon shading model and a common photorealistic Lambertian model, as shown in Fig. 9. To obtain an ambient color ka and a diffuse reflectance color kd for the Lambertian shading representation c = ka + kd·I, we minimized ‖M(I) − (ka + kd·I)‖ with the input color map function M. The color difference suggests that cartoon shading includes nonlinear parts which cannot be described by a simple Lambertian model. We discuss below how this nonlinear reflectance property affects the estimation results.
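The fit itself is an ordinary least-squares problem; a sketch under our assumptions (a discretized color map sampled uniformly over I in [0, 1], hypothetical name `fit_lambertian`):

```python
import numpy as np

def fit_lambertian(color_map):
    """Least-squares fit of c = ka + kd * I to a discretized color map M.

    color_map : (K, 3) samples of M over I in [0, 1]
    Returns ka, kd (each an RGB vector) and the per-sample color difference.
    """
    K = len(color_map)
    I = np.linspace(0.0, 1.0, K)
    A = np.stack([np.ones(K), I], axis=1)              # design matrix [1, I]
    (ka, kd), *_ = np.linalg.lstsq(A, color_map, rcond=None)
    residual = color_map - (ka + np.outer(I, kd))      # nonlinear part not captured
    return ka, kd, np.linalg.norm(residual, axis=1)
```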

Fig. 9 Comparison of reflectance models. Top: color map materials selected from our dataset. Middle: Lambertian materials fitted to the corresponding color maps. Bottom: color difference between the color map materials and the Lambertian materials. The materials are ordered by color difference.

    5.3 Shading analysis

Figure 10 compares our estimation results with those from Lumo [12] and the Lambertian assumption [4]. To simulate Lumo, we used the silhouette inflation constraints of the initial normal estimation in Eq. (2). For the Lambertian assumption, we used the illumination constraint in Eq. (5) with a small value λ = 1.0 to fit the input image luminance Ic. In all examples, we used our color map estimation method (Section 4.2) to reproduce the original shading appearance.

As shown in Fig. 10, Lumo cannot reproduce the details of the illumination due to the lack of inner shading constraints. The Lambertian assumption recovers the original shading appearance well; however, the estimated normal field is overfitted to the quantized illumination. Although our method distributes certain shading errors near the boundaries of the color areas, it produces a relatively smooth normal field and illumination that are both similar to the ground truth.

Figure 11 summarizes the shading analysis results for different material settings. Although our method cannot recover the same shape from different quantization styles, the estimated normal field is smoother than the input shading.

We also compute the mean squared error (MSE) to compare the estimated results quantitatively (see Figs. 12–15). In each comparison, we used the same shape and varied the materials when computing the shape estimation errors.

Fig. 10 Comparison of shading analysis results with Lumo [12] and the Lambertian assumption [4]. The proposed method reproduces the original shading appearance as well as the Lambertian assumption does, while keeping a smooth normal field as in Lumo.

Fig. 11 Shading analysis results for different color map materials.

Fig. 12 Errors of the estimated shape depending on input material (simple shape Three Box).

Fig. 13 Errors of the estimated shape depending on input material (medium-complexity shape Fertility).

Note that our method tends to produce smaller errors for simple rounded shapes, but its errors become larger than those of the Lambertian assumption for more complex shapes. For a complex shape like the Pulley shown in Fig. 15, even the Lambertian assumption results in large errors. Since initial normal estimation errors become large in such cases, our method fails to recover a valid shape when minimizing only the appearance error. We further discuss initial normal estimation errors in Section 7.

Fig. 14 Errors of the estimated shape depending on input material (medium-complexity shape Venus).

Fig. 15 Errors of the estimated shape depending on input material (complex shape Pulley).

Though the estimated shape may not be accurate, our method successfully reduces the influence of material differences in all comparisons. Thanks to the proposed shading analysis based on the cartoon shading model assumption, our method regularizes the estimated reflectance properties across various quantization settings.

    5.4 Relighting

Fig. 16 Comparison of our relighting results with those from Lumo [12] and from the Lambertian assumption in Ref. [4]. The shading analysis column shows the estimated shading results for the input ground-truth light direction and shading; these analysis data are used to produce the subsequent relighting results. Our method can produce dynamic illumination changes in response to the input light directions, as Lumo does, whereas such changes are less noticeable under the Lambertian assumption. The details of the shapes are also preserved by our method.

Figure 16 and the supplementary videos in the Electronic Supplementary Material (ESM) compare our relighting results with those from Lumo [12] and from the Lambertian assumption in Ref. [4]. In all examples, we first estimate the shading representation in the shading analysis step, and then use the analysis data to produce the relighting results.

As in the previous evaluation of the shading analysis, the proposed method and the Lambertian assumption both preserve the original shading appearance in the shading analysis step. However, the Lambertian assumption tends to be strongly affected by the initial input illumination, so dynamic illumination changes in response to the input light directions are less noticeable in its relighting results. In contrast, the proposed method and Lumo produce dynamic illumination changes similar to the ground-truth relighting results. The proposed method cannot fully recover the details of the ground-truth shape; however, our shading decomposition provides both dynamic illumination changes and the details of the target shape.

    6 Real illustration examples

We have tested our shading analysis approach on different shading styles using three real illustrations. Figure 17 shows relighting results for one of them; the others are included in the supplementary videos in the ESM. The material regions are relatively simple, but each material region is painted with different quantization effects.

To apply our shading analysis and relighting methods, we first manually segmented the material regions of the target illustration. We also provide a key light direction L for the target illustration, which is needed for our reflectance estimation step.

Fig. 17 Relighting sequence using the proposed method. Non-diffuse parts are limited to static transitions via a simple residual representation.

Fig. 18 Reflectance and shape estimation results for a real illustration. Non-diffuse parts are encoded as residual shading.

Figure 18 shows the reflectance and shape estimation results for the illustration. Compared to the ideal cartoon shading in our evaluations, a material region in the real examples may include non-diffuse parts. Following a photorealistic illumination estimation method [20], we encode such specular and shadow effects as residual differences Δc = c − M(L·N) from our assumed shading representation c = M(L·N). Finally, we obtain relighting results as c = M(L′·N) + Δc by changing the light direction to L′.
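A sketch of this residual scheme under our assumptions (`eval_M` stands for applying the estimated color map per pixel; names are hypothetical):

```python
import numpy as np

def relight_with_residual(c, N, L, L_new, eval_M):
    """Relight a real illustration, carrying non-diffuse effects as a residual.

    c, N     : (H, W, 3) original shading and estimated normals
    L, L_new : (3,) original and new unit light directions
    eval_M   : callable mapping an (H, W) illumination image to (H, W, 3) colors,
               i.e., the estimated color map M applied per pixel
    """
    diffuse = lambda light: eval_M(np.clip(np.einsum('hwc,c->hw', N, light), 0.0, 1.0))
    delta_c = c - diffuse(L)          # residual: specular highlights, shadows, etc.
    return diffuse(L_new) + delta_c   # c' = M(L'.N) + delta_c
```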

As shown in Fig. 17 and the supplementary videos in the ESM, the residual representation can recover the appearance of the original shading. We also note that our initial experiment produced plausible shading transitions for diffuse lighting, while specular and shadow effects remain relatively static.

    7 Discussion and future work

In this paper, we have demonstrated a new shading analysis framework for cartoon-shaded objects. The visual appearance of the relighting results is improved by the proposed shading analysis. We incorporate the color map shading representation in our shading analysis approach, which enables shading decomposition into a smooth normal field and a nonlinear color map reflectance. We have introduced a new way to provide lighting interaction with digital illustrations; however, several challenges remain.

Firstly, our method requires a reliable light direction provided by the user. Since the light estimation method in the Appendix is significantly affected by the input shading, more user-friendly and robust light estimation approaches for cartoon shading are needed. We consider that a perceptually motivated approach [21] might be suitable.

Secondly, the method minimizes the appearance error, because a shading image is the only input. This makes estimating both shape and reflectance an under-constrained problem. In practice, our method achieves almost the same appearance as the input. As shown in Fig. 19, the proposed method cannot recover the input shape even when the material has Lambertian reflectance with full illumination constraints. Although the recovered shape satisfies appearance similarity under the color map estimated in advance, we need a better solution space to obtain a plausible shape. Since the desirable shape typically differs between users, we plan to integrate user constraints [3, 4, 14] into the normal refinement. More robust iterated refinement cycles of shape and reflectance estimation are also desirable.

Fig. 19 Shape analysis results for Lambertian reflectance. Blob (top): small errors in shape and shading. Pulley (middle): large errors in shape. Lucy (bottom): large errors in shading.

Another limitation is that our initial normal field approximation assumes the shape to be convex. This causes noticeable errors for complex shapes such as the Pulley, as shown in Fig. 19. We plan to incorporate interior contours as concavity constraints, as suggested by Lumo [12]. Even though this requires a robust edge detection process to define suitable normal constraints for various illustration styles, it is a promising direction for future work that may yield a more plausible initial normal field.

Although large collections of 2D digital illustrations are available online, we cannot directly apply our method to them since it requires manual segmentation. A crucial area of future research is to automate albedo estimation, as suggested by intrinsic image methods [22, 23]. While our initial experiments with manual segmentation produced plausible shading transitions under the diffuse shading assumption, our method cannot fully encode additional specular and shadow effects. Therefore, incorporating such specular and shadow models is important future work for more practical situations. Such shading effects are often designed using non-photorealistic principles; nevertheless, we hope that our approach provides a promising direction for new 2.5D image representations of digital illustrations.

    Appendix Light estimation

In the early stages of our experiments, we tried to estimate the key light direction L from the input shading c and the estimated initial normal field N0.

As suggested by Ref. [4], we approximate the problem using Lambertian reflectance Ic = kd·L·N0, where the diffuse term L·N0 is simply scaled by the diffuse constant kd. For the input illumination Ic, we compute the luminance value from the original color c as the L component in Lab color space. We estimate the light vector L by minimizing the following energy:

where L′ is given by L′ = kd·L. We finally obtain the unit light vector by normalizing L′. The diffuse reflectance constant kd is optionally computed as kd = ‖L′‖.
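This least-squares problem can be sketched as follows (our own illustration, assuming a per-region boolean mask and the hypothetical name `estimate_light`):

```python
import numpy as np

def estimate_light(Ic, N0, mask):
    """Least-squares estimate of L' = kd * L from Ic ~ L'.N0 (Lambertian proxy).

    Ic   : (H, W) image luminance (e.g., the L channel of Lab)
    N0   : (H, W, 3) initial normals
    mask : (H, W) bool, target region
    """
    A = N0[mask]                                   # (P, 3) normals of masked pixels
    b = Ic[mask]                                   # (P,) observed luminance
    L_prime, *_ = np.linalg.lstsq(A, b, rcond=None)
    kd = np.linalg.norm(L_prime)                   # optional diffuse constant, kd = ||L'||
    return L_prime / (kd + 1e-8), kd
```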

Figure 20 summarizes our light estimation experiment. In this experiment, we give a single ground-truth light direction Lt (top left) to generate the input cartoon-shaded image ct, and then estimate a key light direction L by solving Eq. (10).

The estimated results are consistent for near-Lambertian materials (the left 3 maps) but inconsistent for more stylized materials (the right 3 maps). Another important factor is shape complexity: the estimated light direction is relatively consistent for rounded smooth shapes, but the light estimation error becomes quite large when the input model contains many crease edges, especially around the silhouette.

Fig. 20 Light estimation error. Top left: input ground-truth light direction Lt. Top row: input color map materials shaded using Lt; the left 3 maps have small average errors, the right 3 maps large average errors. Left column: input 3D models; the top 3 models have small average errors, the bottom 3 models large average errors.

These results suggest that additional constraints are required to improve light estimation. In this paper, we simply provide a ground-truth light direction for evaluation, or a user-given reliable light direction when relighting real illustration examples.

    Acknowledgements

We would like to thank the anonymous reviewers for their constructive comments. We are also grateful to Tatsuya Yatagawa, Hiromu Ozaki, Tomohiro Tachi, and Takashi Kanai for their valuable discussions and suggestions. Additional thanks go to the AIM@SHAPE Shape Repository and Keenan's 3D Model Repository for 3D models, and to Makoto Nakajima and www.piapro.net for 2D illustrations used in this work. This work was supported in part by the Japan Science and Technology Agency CREST project and the Japan Society for the Promotion of Science KAKENHI Grant No. JP15H05924.

Electronic Supplementary Material Supplementary material is available in the online version of this article at http://dx.doi.org/10.1007/s41095-016-0066-0.

References

[1] Horn, B. K. P.; Brooks, M. J. Shape from Shading. Cambridge, MA, USA: MIT Press, 1989.

[2] Khan, E. A.; Reinhard, E.; Fleming, R. W.; Bülthoff, H. H. Image-based material editing. ACM Transactions on Graphics Vol. 25, No. 3, 654–663, 2006.

[3] Okabe, M.; Zeng, G.; Matsushita, Y.; Igarashi, T.; Quan, L.; Shum, H.-Y. Single-view relighting with normal map painting. In: Proceedings of Pacific Graphics, 27–34, 2006.

[4] Wu, T.-P.; Sun, J.; Tang, C.-K.; Shum, H.-Y. Interactive normal reconstruction from a single image. ACM Transactions on Graphics Vol. 27, No. 5, Article No. 119, 2008.

[5] Barla, P.; Thollot, J.; Markosian, L. X-toon: An extended toon shader. In: Proceedings of the 4th International Symposium on Non-Photorealistic Animation and Rendering, 127–132, 2006.

[6] Lake, A.; Marshall, C.; Harris, M.; Blackstein, M. Stylized rendering techniques for scalable real-time 3D animation. In: Proceedings of the 1st International Symposium on Non-Photorealistic Animation and Rendering, 13–20, 2000.

[7] Mitchell, J.; Francke, M.; Eng, D. Illustrative rendering in Team Fortress 2. In: Proceedings of the 5th International Symposium on Non-Photorealistic Animation and Rendering, 71–76, 2007.

[8] DeCarlo, D.; Santella, A. Stylization and abstraction of photographs. ACM Transactions on Graphics Vol. 21, No. 3, 769–776, 2002.

[9] Kang, H.; Lee, S.; Chui, C. K. Flow-based image abstraction. IEEE Transactions on Visualization and Computer Graphics Vol. 15, No. 1, 62–76, 2009.

[10] Kyprianidis, J. E.; Döllner, J. Image abstraction by structure adaptive filtering. In: Proceedings of EG UK Theory and Practice of Computer Graphics, 51–58, 2008.

[11] Winnemöller, H.; Olsen, S. C.; Gooch, B. Real-time video abstraction. ACM Transactions on Graphics Vol. 25, No. 3, 1221–1226, 2006.

[12] Johnston, S. F. Lumo: Illumination for cel animation. In: Proceedings of the 2nd International Symposium on Non-Photorealistic Animation and Rendering, 45–52, 2002.

[13] Sýkora, D.; Kavan, L.; Čadík, M.; Jamriška, O.; Jacobson, A.; Whited, B.; Simmons, M.; Sorkine-Hornung, O. Ink-and-ray: Bas-relief meshes for adding global illumination effects to hand-drawn characters. ACM Transactions on Graphics Vol. 33, No. 2, Article No. 16, 2014.

[14] Shao, C.; Bousseau, A.; Sheffer, A.; Singh, K. CrossShade: Shading concept sketches using cross-section curves. ACM Transactions on Graphics Vol. 31, No. 4, Article No. 45, 2012.

[15] Iarussi, E.; Bommes, D.; Bousseau, A. BendFields: Regularized curvature fields from rough concept sketches. ACM Transactions on Graphics Vol. 34, No. 3, Article No. 24, 2015.

[16] Xu, Q.; Gingold, Y.; Singh, K. Inverse toon shading: Interactive normal field modeling with isophotes. In: Proceedings of the Workshop on Sketch-Based Interfaces and Modeling, 15–25, 2015.

[17] Wu, T.-P.; Tang, C.-K.; Brown, M. S.; Shum, H.-Y. ShapePalettes: Interactive normal transfer via sketching. ACM Transactions on Graphics Vol. 26, No. 3, Article No. 44, 2007.

[18] Lopez-Moreno, J.; Jimenez, J.; Hadap, S.; Reinhard, E.; Anjyo, K.; Gutierrez, D. Stylized depiction of images based on depth perception. In: Proceedings of the 8th International Symposium on Non-Photorealistic Animation and Rendering, 109–118, 2010.

[19] Orzan, A.; Bousseau, A.; Barla, P.; Winnemöller, H.; Thollot, J.; Salesin, D. Diffusion curves: A vector representation for smooth-shaded images. Communications of the ACM Vol. 56, No. 7, 101–108, 2013.

[20] Kholgade, N.; Simon, T.; Efros, A.; Sheikh, Y. 3D object manipulation in a single photograph using stock 3D models. ACM Transactions on Graphics Vol. 33, No. 4, Article No. 127, 2014.

[21] Lopez-Moreno, J.; Garces, E.; Hadap, S.; Reinhard, E.; Gutierrez, D. Multiple light source estimation in a single image. Computer Graphics Forum Vol. 32, No. 8, 170–182, 2013.

[22] Grosse, R.; Johnson, M. K.; Adelson, E. H.; Freeman, W. T. Ground truth dataset and baseline evaluations for intrinsic image algorithms. In: Proceedings of IEEE 12th International Conference on Computer Vision, 2335–2342, 2009.

[23] Rother, C.; Kiefel, M.; Zhang, L.; Schölkopf, B.; Gehler, P. V. Recovering intrinsic images with a global sparsity prior on reflectance. In: Proceedings of Advances in Neural Information Processing Systems 24, 765–773, 2011.

Hideki Todo is an assistant professor in the School of Media Science at Tokyo University of Technology. He received his Ph.D. degree in information science and technology from the University of Tokyo in 2013. His research interests lie in the field of computer graphics in general, particularly non-photorealistic rendering.

Yasushi Yamaguchi, Dr. Eng., is a professor in the Graduate School of Arts and Sciences at the University of Tokyo. His research interests lie in image processing, computer graphics, and visual illusion, including visual cryptography, computer-aided geometric design, volume visualization, and painterly rendering. He has served as president of the Japan Society for Graphic Science and as vice president of the International Society for Geometry and Graphics.

Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.

1 Tokyo University of Technology, Tokyo 192-0982, Japan. E-mail: toudouhk@stf.teu.ac.jp (corresponding author).

2 The University of Tokyo, Tokyo 153-8902, Japan. E-mail: yama@graco.c.u-tokyo.ac.jp.

Manuscript received: 2016-08-30; accepted: 2016-11-10.
