
    Lighting transfer across multiple views through local color transforms

Computational Visual Media, 2017, Issue 4


© The Author(s) 2017. This article is published with open access at Springerlink.com


Qian Zhang1, Pierre-Yves Laffont2, and Terence Sim3


We present a method for transferring lighting between photographs of a static scene. Our method takes as input a photo collection depicting a scene with varying viewpoints and lighting conditions. We cast lighting transfer as an edit propagation problem, where the transfer of local illumination across images is guided by sparse correspondences obtained through multi-view stereo. Instead of directly propagating color, we learn local color transforms from corresponding patches in pairs of images and propagate these transforms in an edge-aware manner to regions with no correspondences. Our color transforms model the large variability of appearance changes in local regions of the scene, and are robust to missing or inaccurate correspondences. The method is fully automatic and can transfer strong shadows between images. We show applications of our image relighting method for enhancing photographs, browsing photo collections with harmonized lighting, and generating synthetic time-lapse sequences.

Keywords: relighting; photo collection; time-lapse; image editing

    1 Introduction

If there is one thing that can make or break a photograph, it is lighting. This is especially true for outdoor photography, as the appearance of a scene changes dramatically with the time of day. In order to capture the short, transient moments of interest, photographers have to wait at the right place for the perfect time of day. A majority of photographs taken by casual users are captured in the middle of the day, when lighting is not ideal. While photo retouching software such as Adobe Photoshop and Lightroom enables after-the-fact editing to some extent, achieving convincing manipulations such as drastic changes in lighting requires significant time and effort even for talented artists.

In this paper, we propose an automatic technique for transferring lighting across photographs, given a photo collection depicting the same scene under varying viewpoint and illumination, as shown in Fig. 1. There are millions of photographs of famous landmarks on online photo-sharing websites, providing rich information for lighting transfer. For a pair of source and target images chosen by a user from the photo collection, our method modifies the source image by transferring the desired lighting from the target image. We model the large variability of appearance changes for different parts of the scene with local color transforms. The transforms are learned from sparse geometric correspondences, which we obtain from the photo collection through multi-view stereo. For regions without correspondences, we propagate the transforms in an edge-aware manner. Compared to direct color propagation, our propagation technique is robust to missing or inaccurate correspondences. Our main contributions are as follows:

• We cast lighting transfer as an edit propagation problem, learning local color transforms from sparse geometric correspondences and propagating the transforms in an edge-aware manner.

• We introduce a confidence map to indicate the reliability of propagated transforms, which helps to preserve the color of pixels with transform outliers.

• We extend our method to transfer lighting based on multiple target images, exploiting the information from different viewpoints.

Fig. 1 Given a photo collection of a landmark scene under varying lighting, our method transfers the illumination between images from different viewpoints, synthesizing images with new combinations of viewpoint and time-of-day.

We have run our method on 6 scenes, including 5 Internet photo collections and a synthetic benchmark with ground truth images, allowing us to give a quantitative evaluation. We also show comparisons with baselines and previous approaches. Our image relighting method enables enhancement of photographs, photo collection browsing with harmonized lighting, and synthetic time-lapse generation.

    2 Related work

    2.1 Color transfer and correction

Lighting transfer mainly concerns color. Approaches for color transfer manipulate color distributions. Example-based transfer methods such as those in Refs. [1–3] reshape the color distribution of the input image so that it approaches the statistical color properties of the example image. Huang et al. [4] recolor a photo by learning from database correlations between color property distributions and geometric features of regions. Li et al. [5] recolor images using geodesic distance based on harmonization. More recently, Luan et al. [6] propose a deep learning approach for photographic style transfer. These methods produce visually pleasing recolored images but cannot change local lighting. Color transfer methods can also be used for tone adjustment and correction. Park et al. [7] recover sparse pixel correspondences and compute color correction parameters with a low-rank matrix factorization technique. Spatio-temporal correspondences are also used in Refs. [8, 9] for multi-view color correction. These methods work well for optimizing color consistency for image collections or videos, but they are not intended to transfer spatially-varying lighting. In our case, we use local transforms to model the large variability of appearance changes in local regions of the scene, and so can transfer strong shadows.

    2.2 Image relighting

A number of image relighting methods have been proposed by various researchers over the years, such as those in Refs. [10–13]. These sophisticated systems make use of detailed geometric models and require registration or non-linear fitting. Laffont et al. [14] show that intrinsic image decomposition can be used for illumination transfer, but the extraction of consistent reflectance and illumination layers is a challenging and computationally expensive problem. Alternatively, some methods transfer an image by learning color changes from correspondences of image pairs. HaCohen et al. [15] compute a parametric color model based on dense correspondences, but do not take into account local color changes. Shih et al. [16] successfully synthesize images at different times of day by learning color transformations from time-lapse videos. A similar approach by Laffont et al. [17] enables appearance transfer of time-of-day, weather, or season by observing color changes in a webcam database. However, both methods rely on the availability of images with different appearance from the same webcam. While these image pairs may be available for some scenes with a static camera, this data does not exist in many cases. More recently, Martin-Brualla et al. [18] use a simple but effective new temporal filtering approach to stabilize appearance. In work developed concurrently, Shen et al. [19] propose regional foremost matching for image morphing and time-lapse sequence generation. In our system, we target a more general case that does not need highly accurate geometry, time-lapse sequences from a static viewpoint, or densely computed correspondences. Our method relies on the vast numbers of available images of the same scene in various online photo communities, and on sparse geometric correspondences.

    2.3 Edit propagation

Also related are edit propagation methods, which propagate user-specified edits under the guidance of image gradients. Levin et al. [20] first introduce a framework for colorization, a computer-assisted process for adding color to a monochrome image or movie. They use manually specified color scribbles and propagate the colors in an edge-aware manner. Liu et al. [21] decompose images into illumination and reflectance layers, and transfer color to grayscale reflectance images using a similar color propagation scheme. Lischinski et al. [22] extend the framework for image tone manipulation, propagating user constraints with edge-preserving optimization. A similar method is used in Ref. [23], which propagates coarse user edits for spatially-varying image editing. Chen et al. [24] propose a manifold-preserving edit propagation algorithm for video object recoloring and grayscale image colorization. Inspired by these approaches, we propagate local color transforms for lighting transfer. Edge-aware propagation originates at sparse correspondences obtained from a pair of images. A key difference between our method and previous approaches is that we propagate transforms rather than simply color, which allows us to preserve texture in the source image.

    3 Method

We propose a method for transferring lighting between photographs of a static scene. Our method takes as input a landmark scene photo collection, which includes images from multiple viewpoints and under different lighting conditions. The user chooses from the photo collection a source image to be edited, and a target image with the desired lighting condition. We cast lighting transfer as an edit propagation problem. We use local color transforms to model the large variability of lighting changes in different parts of the scene. The transforms are learned from paired sparse correspondences between source and target images. Then, we propagate these transforms to relight the source image in an image-guided manner, and output a result image. The process is fully automatic.

    Figure 2 shows an overview of the pipeline of our approach,which consists of three main steps:

Fig. 2 Given a pair of source and target images from a photo collection, our method uses sparse correspondences (a) to learn local color transforms (b), which are then propagated in an image-guided manner to regions with no correspondences, generating a relit image (c).

(1) Extracting sparse correspondences from a photo collection (see Section 3.1).

(2) Learning local color transforms from paired sparse correspondences (see Section 3.2).

(3) Propagating local color transforms and relighting the source image (see Section 3.3).

To be robust to missing or inaccurate correspondences, we introduce a confidence map to detect potentially unreliable transforms in Section 3.4. We further extend our method for relighting based on multi-view target images in Section 3.5. Further results and comparisons are presented in Section 4.

    3.1 Sparse correspondences from a photo collection

We take as input a photo collection, consisting of images of the same scene with different viewpoints and lighting conditions. There are two reasons why we utilize photo collections. Photo-sharing websites contain millions of photographs of famous landmarks, and these collections of scenes with varying illumination provide rich information for lighting transfer. Moreover, we can reconstruct a sparse point cloud from multi-view photos and find correspondences between images, which allows local analysis of lighting changes. We use off-the-shelf VisualSfM [25]: we first apply structure from motion [26] to estimate the parameters of the cameras, and then use patch-based multi-view stereo [27] to generate a 3D point cloud of the scene. For each point, the algorithm also estimates a list of images in which it appears. The visible 3D points are projected into each image to obtain paired correspondences.
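The projection step above can be sketched as follows; `project` and the toy camera matrix are illustrative names, not part of the released pipeline. A 3D point that is visible in both the source and target image projects to one pixel in each, and that pixel pair is a correspondence.

```python
import numpy as np

def project(P, X):
    """Project Nx3 world points X into an image with a 3x4 camera matrix P."""
    Xh = np.hstack([X, np.ones((len(X), 1))])  # homogeneous coordinates
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]                # perspective divide

# Toy camera: identity rotation, zero translation, unit focal length.
P = np.hstack([np.eye(3), np.zeros((3, 1))])
pts = np.array([[0.0, 0.0, 2.0], [2.0, 4.0, 2.0]])
uv = project(P, pts)  # pixel coordinates of the two 3D points in this view
```

Running the same projection with the source and target cameras on the points visible in both views yields the paired correspondences used below.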

    3.2 Learning local color transforms

We learn the lighting changes from sparse correspondences between the source image S and target image T. These correspondences can be represented by three-dimensional points in a given color space. We estimate transformations for corresponding pixel pairs to represent the color changes in a local neighborhood. The local color transforms [16] model color variations between a pair of images under varying lighting. Let k denote a correspondence in the source image. We express the transform T_k for k as a linear matrix that maps the color of a pixel in the source image S to the color of the corresponding pixel in the target image T. We learn the local transforms as linear models [17] in RGB color space. The local color transforms are modeled as the solutions to an optimization problem:

G = argmin_M Σ_k || M v_k(S) − v_k(T) ||²_F + γ || M − I ||²_F    (1)

T_k = argmin_M || M v_k(S) − v_k(T) ||²_F + γ || M − G ||²_F    (2)

The obtained linear transform T_k is represented by a 3×3 matrix. We denote by v_k(S) the patch centered on the correspondence pixel in the source image and by v_k(T) the corresponding patch in the target image. Both are represented as 3×P matrices in RGB color space, where P = 5×5 is the number of pixels in the patch. G is a global linear matrix estimated on the entire image (γ = 0.01), used for regularization, and I is a 3×3 identity matrix. Trying different color spaces, e.g., HSV, CIELAB, and RGB, a visual comparison of results in Fig. S1 in the Electronic Supplementary Material (ESM) shows that local transforms work slightly better in RGB space.
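This regularized least-squares problem has a closed-form solution obtained by setting the gradient to zero. A minimal sketch, with `local_color_transform` as a hypothetical helper name:

```python
import numpy as np

def local_color_transform(vS, vT, G, gamma=0.01):
    """Solve min_T ||T vS - vT||_F^2 + gamma ||T - G||_F^2 in closed form.

    vS, vT: 3xP patches around a correspondence; G: 3x3 global transform.
    """
    # Normal equations: T (vS vS^T + gamma I) = vT vS^T + gamma G
    A = vS @ vS.T + gamma * np.eye(3)
    B = vT @ vS.T + gamma * G
    return B @ np.linalg.inv(A)
```

When the patch covers a well-conditioned set of colors, the data term dominates and T recovers the actual mapping; for degenerate patches (e.g., uniform color), the regularizer pulls T toward the global transform G instead of producing an arbitrary solution.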

    3.3 Propagation of local color transforms

We then propagate the transforms learned from correspondences to other regions of the source image. Inspired by the work of Levin et al. [20] and other edit propagation methods, we use an image-guided propagation algorithm. Instead of propagating RGB pixel values, we propagate the color transforms estimated in the previous section.

Our propagation algorithm builds on the assumption that in a very small neighborhood, two pixels with similar colors are likely to have similar transforms. We sample every pixel i in the source image, and assign a weight w for each pixel j in the 3×3 sampling window. We wish to minimize the difference between the transform at pixel i and the weighted average of the transforms at neighboring pixels. We assign w = 1 for the center pixel i. If i has a correspondence in the target image and thus a precomputed transform, we set the weights of its neighbors to zero. Otherwise, the weights are calculated from Euclidean distances between colors. The weight is large when the colors of pixels j and i are similar, and small when they are different. We express the weighting function in the equation below. For each j in the sampling window D_i:

w_ij ∝ exp( −|| c_i − c_j ||² / (2σ_i²) )    (3)

where c_i denotes the color of pixel i and σ_i² is the variance of the colors in the sampling window. These weights are then used as constraints and guidance when propagating transforms. Given a sparse set of pixels k with precomputed transforms (from Eq. (2)), the set of local transforms for all pixels in regions with no correspondences can be obtained by solving:

argmin_{T_i} Σ_i || T_i − Σ_{j∈D_i} w_ij T_j ||²    (4)

where T_k is fixed to its precomputed value for all correspondences k. We can rewrite Eq. (4) in the form of a matrix product, and formalize it as a global optimization problem:

argmin_X || (I_N − W) X ||²  subject to  x_k = t_k for all correspondences k    (5)

where W is an N×N sparse matrix, whose (i, j)-th entry is w_ij, and N = width × height is the number of pixels in the source image. The constraint matrix fixes the rows x_k of X at pixels with precomputed transforms t_k, and X is the matrix of vectorized transforms to be found, with one row per pixel. This large, sparse system of linear equations can be solved by standard methods. We use the backslash operator in MATLAB. All the transforms T_i are optimized simultaneously. This allows us to propagate the learned sparse transforms to all pixels without correspondences.
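The structure of this linear system can be illustrated on a tiny 1D example; the variable names are illustrative, and uniform weights stand in for the color-similarity weights of Eq. (3). Constrained pixels get identity rows, while every other pixel is tied to the weighted average of its neighbors.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Propagate one scalar channel of the vectorized transforms over a 1D "image"
# of 5 pixels, where pixels 0 and 4 have precomputed values (correspondences).
n = 5
known = {0: 1.0, 4: 3.0}

rows, cols, vals = [], [], []
b = np.zeros(n)
for i in range(n):
    rows.append(i); cols.append(i); vals.append(1.0)
    if i in known:
        b[i] = known[i]           # constraint row: x_i = precomputed value
    else:
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
        w = 1.0 / len(nbrs)       # uniform weights for this sketch
        for j in nbrs:
            rows.append(i); cols.append(j); vals.append(-w)

A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
x = spsolve(A, b)                 # values interpolate smoothly between constraints
```

In the real 2D problem the window D_i is 3×3, the weights come from Eq. (3), and one such system is solved per transform entry (all simultaneously, since the matrix A is shared).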

    3.4 Detecting transform outliers

For pixels without correspondences, the color transforms obtained by propagation may not be accurate, especially when these pixels have colors very different from those of the correspondences. We show an example of this situation in Fig. 3. The paired pixel correspondences between source image (a) and target image (b) are on the building, where the pixel colors are different from the colors of people's clothes and the green leaves. The propagated transforms in these regions are thus inaccurate, and will transfer the source image wrongly, as indicated by the red rectangle in the naive output (c). To detect regions where transforms are potentially less reliable, we introduce a confidence map. The idea is that if a source pixel's color is not similar to any of the correspondences in the source image, the computed transform of that pixel is less reliable, as the propagation of transforms is based on color similarities.

Fig. 3 Unreliable transforms result in distorted colors in the naive output (c). We compute a confidence map (d) to detect transform outliers after propagation. By removing the transforms with low confidence values (e) and leaving the associated source pixels' colors unchanged, we obtain an output with correct colors (f), e.g., people and leaves (in the red rectangle).

For each pixel p in the source image, we calculate its color differences with all correspondences q in this image. A pixel only needs a few neighboring constraints to get an appropriate transform, so we sum up the smallest m differences and use the negative natural logarithm of the sum as a confidence factor C(p). All factors are then normalized to [0, 1], with a small value when the transforms are unreliable. We use m = 10 and set a threshold to detect possibly wrong transforms. Such transforms are removed when applying color transformations, and the associated pixels retain the same colors as in the source image. As shown in Fig. 3, while there are color artifacts in the naive result (c), the leaves remain green and people's clothes seem more natural in the corrected output image (f).
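The confidence computation can be sketched as follows; `confidence_map` is a hypothetical helper, and the small epsilon terms for numerical safety are our own additions, not from the paper.

```python
import numpy as np

def confidence_map(pixels, corr_colors, m=10):
    """Confidence C(p) per source pixel, from Section 3.4.

    pixels: (N, 3) source pixel colors; corr_colors: (K, 3) colors of the
    correspondences in the source image.
    """
    # Color distance from every pixel to every correspondence.
    d = np.linalg.norm(pixels[:, None, :] - corr_colors[None, :, :], axis=2)
    s = np.sort(d, axis=1)[:, :m].sum(axis=1)       # sum of m smallest diffs
    c = -np.log(s + 1e-8)                           # negative natural log
    c = (c - c.min()) / (c.max() - c.min() + 1e-8)  # normalize to [0, 1]
    return c
```

Pixels whose colors match some correspondences get confidence near 1; pixels unlike every correspondence (e.g., the clothes and leaves in Fig. 3) get confidence near 0 and are left untransformed after thresholding.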

    3.5 Extension to multiple targets

If the viewpoints of the source and target images are drastically different, there are fewer correspondences. This makes it difficult to transfer lighting properly. To alleviate this issue, we extend our method by combining multiple target images with similar illumination conditions for the relighting of a source image.

Multiple target images provide more correspondences from different viewpoints. Here, we demonstrate the method using two target images with similar lighting. We learn the local color transforms from correspondences using the same method described in the previous sections, but combine the transforms before propagation. For pixels in the source image that have correspondences in both target images, the learned transforms are combined by calculating their arithmetic mean. Figure 4 shows that with the help of target images from different viewpoints, appropriate local lighting is transferred to the source image (see the regions highlighted by red rectangles). We further evaluate our method on a synthetic dataset, and make a comparison between the single-target-image method and the extended multiple-target-image one. The results are shown in the next section.

    4 Results and comparisons

We apply our method to two types of data. First, we show results of our method for photo collections from online photo-sharing websites. We also apply our method to a synthetic dataset, which allows a comparison to ground truth.

    4.1 Internet photo collections

We utilize the datasets in Ref. [14]. When applying transforms directly to the source image, noise in the image may be magnified. We use bilateral filtering [28] to decompose the source image into a detail layer and a base layer, and learn and propagate the transforms based on the base layer. We then apply the linear transforms to the base layer and add back the detail layer to obtain the final result. A similar method is used in Ref. [16].
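The base/detail split can be sketched as below. The paper uses a bilateral filter [28]; here a Gaussian blur stands in so the example stays dependency-light, and `relight_with_detail` is an illustrative name, not the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def relight_with_detail(src, transform_fn, sigma=3.0):
    """Apply a color transform to the base layer only, then restore detail.

    src: HxWx3 float image; transform_fn: maps a base layer to a relit base.
    """
    # Smooth each color channel (sigma 0 on the channel axis) to get the base.
    base = gaussian_filter(src.astype(np.float64), sigma=(sigma, sigma, 0))
    detail = src - base           # high-frequency residual (noise, texture)
    return transform_fn(base) + detail
```

Because the transforms only touch the low-frequency base layer, sensor noise and fine texture in the detail layer pass through unamplified.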

Our method enables dramatic lighting transfer between images. Figure 5 illustrates our results for several scenes, namely St. Basil, Manarola, and Rizzi Haus. We compare to two baselines: an image warping method based on homography, and direct propagation of pixel colors. While the image warping method distorts the image, and propagating pixel colors blurs image details, our method successfully relights the source images. We estimate the homography based on pixel correspondences, using linear least squares in MATLAB. The propagation of pixel colors uses code from Ref. [20]. Propagating colors produces blurred results, especially for regions with no correspondences and thus no guidance from "color scribbles".
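The homography baseline can be estimated from pixel correspondences with the direct linear transform (DLT); this is our own sketch of a least-squares estimate, not the paper's MATLAB code, and `fit_homography` is an illustrative name.

```python
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Least-squares homography from >= 4 point pairs via the DLT and SVD."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence contributes two rows of the 2N x 9 system A h = 0.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)      # right singular vector of smallest value
    return H / H[2, 2]            # normalize so H[2,2] = 1
```

Warping the whole source image with a single H assumes a planar scene, which is why this baseline distorts images of landmarks with real depth variation.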

    4.2 Synthetic scene

We evaluate the effectiveness of our method on the synthetic dataset of St. Basil [29], which contains rendered images from 3 different viewpoints and under 30 lighting conditions. We compare the result of our lighting transfer to the ground truth rendering from the same viewpoint with the same lighting conditions. Quantitative evaluation of absolute differences between relit images and ground truth in Fig. 6 shows that the method using multiple target images produces a more plausible result.

    4.3 Comparisons

In order to further evaluate our lighting transfer method, we show a comparison with previous approaches in Fig. 7. Reinhard et al.'s method [3], which computes a global color mapping, gives the overall image a warm tone. Pitie et al.'s method [2] produces a tone more similar to the target, but neither of them properly transfers local lighting. In the result of Laffont et al.'s method [14], which uses intrinsic image decomposition, the regions in shadow are washed out, and there are artifacts around the boundaries of the sky and buildings. In contrast, our result has lighting similar to that of the target image, and people and objects in shadow still retain their colors. We show more comparison results with Shih et al. [16] and deep photo style transfer [6] in Fig. S4 and Fig. S5 in the ESM.

Fig. 5 We compare our method to image warping by homography and naive propagation of color. While the image warping method based on homography (c) distorts the image, and direct propagation of pixel colors (d) blurs image details, our method (e) successfully relights the source images.

Fig. 6 We test our lighting transfer methods on a synthetic dataset, and show results of a quantitative evaluation. Compared with the output using only the left-view target image (c), the output image produced with both target images (f) looks more similar to the ground truth (b) and has smaller residuals.

Fig. 7 We compare the global color transfer method [3], intrinsic image decomposition [14], and our lighting transfer method. While the results of other methods have either wrong color tone (c) or artifacts (d), our result has appropriate lighting similar to the target image (b).

    4.4 Applications

In Fig. 8, we show that our method can be used for harmonized multi-view image collection browsing and time-lapse hallucination of single-view scenery. We refer to the supplementary video in the ESM for results for these two applications. We show image-based view transitions [30] with harmonized photographs. Our method produces stable transitions between views, and can transfer or remove strong shadows in the original images that could not be handled by simple color compensation. We also show time-lapse sequences synthesized by transferring all illumination conditions to a single viewpoint. In addition, we show a side-by-side comparison with the results of Laffont et al. [14].

We also include relighting results where a person is present in the landmark photos and occupies a significant part of the scene. Though people in the scene do not have any correspondences with the target images, Fig. 9 shows that our transform propagation method can produce a plausible result.

Fig. 8 Our method can be used for harmonizing a photo collection with multi-view images (b) and hallucinating time-lapses (d). The insets represent source images in (b) and target images with desired lighting in (d). Additional results are available in the supplementary video in the ESM.

Fig. 9 Image relighting with people present in the landmark photos. Our method produces plausible results for scenes with strong local lighting. The background scene has proper local lighting transferred from the target, and people have a similar color to the scene.


    4.5 Performance

We use a 3.6 GHz Intel Core i7 CPU for all experiments in this paper. All images are resized to a width of 640 pixels. Our MATLAB implementation takes approximately 7 s for learning and applying the color transforms, and 23 s for propagating the transforms.

    4.6 Limitations

Like all example-based techniques, our method has limitations. Processing images from varying viewpoints and under dramatically different illumination conditions can be challenging, as the multi-view stereo method may not find sufficient correspondences between images. Picking a target image with more correspondences, or several targets with similar illumination, may help produce better results. Another challenging case is a scene region with similar texture but distinct target lighting at a different depth. The propagation of transforms guided by the source image would be the same, and thus the generated output would not be as desired. For high-quality results in a small region, an RGB-D camera in the scene may greatly increase the number of correspondences and allow more accurate analysis of the spatially-varying lighting.

    5 Conclusions

The novelty of this paper is that we cast lighting transfer as an edit propagation problem. We learn local color transforms from sparse correspondences reconstructed by multi-view stereo, and propagate them in an image-guided manner. Compared to previous image relighting methods, our approach does not rely on highly accurate geometry, time-lapse videos from static viewpoints, or densely computed correspondences. The color transforms model the large variability of local lighting changes between images in different parts of the scene. We demonstrate that our method can be used for enhancing photographs, harmonizing image collections of multiple viewpoints, and hallucinating time-lapse sequences.

    Acknowledgements

We would like to thank all reviewers for their comments and suggestions. The first author carried out the earlier phase of the research at the National University of Singapore with support from the School of Computing. This research is supported by the BeingThere Centre, a collaboration between Nanyang Technological University Singapore, Eidgenössische Technische Hochschule Zürich, and the University of North Carolina at Chapel Hill. The BeingThere Centre is supported by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and is administered by the Interactive Digital Media Programme Office.

Electronic Supplementary Material: Supplementary materials (a supplementary document and video with further results) are available in the online version of this article at https://doi.org/10.1007/s41095-017-0085-5.

[1] Pouli, T.; Reinhard, E. Progressive color transfer for images of arbitrary dynamic range. Computers & Graphics Vol. 35, No. 1, 67–80, 2011.

[2] Pitie, F.; Kokaram, A. C.; Dahyot, R. N-dimensional probability density function transfer and its application to color transfer. In: Proceedings of the 10th IEEE International Conference on Computer Vision, Vol. 2, 1434–1439, 2005.

[3] Reinhard, E.; Ashikhmin, M.; Gooch, B.; Shirley, P. Color transfer between images. IEEE Computer Graphics and Applications Vol. 21, No. 5, 34–41, 2001.

[4] Huang, H.-Z.; Zhang, S.-H.; Martin, R. R.; Hu, S.-M. Learning natural colors for image recoloring. Computer Graphics Forum Vol. 33, No. 7, 299–308, 2014.

[5] Li, X.; Zhao, H.; Nie, G.; Huang, H. Image recoloring using geodesic distance based color harmonization. Computational Visual Media Vol. 1, No. 2, 143–155, 2015.

[6] Luan, F.; Paris, S.; Shechtman, E.; Bala, K. Deep photo style transfer. arXiv preprint arXiv:1703.07511, 2017.

[7] Park, J.; Tai, Y.-W.; Sinha, S. N.; Kweon, I. S. Efficient and robust color consistency for community photo collections. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 430–438, 2016.

[8] Ye, S.; Lu, S.-P.; Munteanu, A. Color correction for large-baseline multiview video. Signal Processing: Image Communication Vol. 53, 40–50, 2017.

[9] Lu, S.-P.; Ceulemans, B.; Munteanu, A.; Schelkens, P. Spatio-temporally consistent color and structure optimization for multiview video color correction. IEEE Transactions on Multimedia Vol. 17, No. 5, 577–590, 2015.

[10] Yu, Y.; Debevec, P.; Malik, J.; Hawkins, T. Inverse global illumination: Recovering reflectance models of real scenes from photographs. In: Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, 215–224, 1999.

[11] Debevec, P.; Tchou, C.; Gardner, A.; Hawkins, T.; Poullis, C.; Stumpfel, J.; Jones, A.; Yun, N.; Einarsson, P.; Lundgren, T.; Fajardo, M.; Martinez, P. Estimating surface reflectance properties of a complex scene under captured natural illumination. USC ICT Technical Report ICT-TR-06, 2004.

[12] Kopf, J.; Neubert, B.; Chen, B.; Cohen, M.; Cohen-Or, D.; Deussen, O.; Uyttendaele, M.; Lischinski, D. Deep photo: Model-based photograph enhancement and viewing. ACM Transactions on Graphics Vol. 27, No. 5, Article No. 116, 2008.

[13] Yu, Y.; Malik, J. Recovering photometric properties of architectural scenes from photographs. In: Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, 207–217, 1998.

[14] Laffont, P.-Y.; Bousseau, A.; Paris, S.; Durand, F.; Drettakis, G. Coherent intrinsic images from photo collections. ACM Transactions on Graphics Vol. 31, No. 6, Article No. 202, 2012.

[15] HaCohen, Y.; Shechtman, E.; Goldman, D. B.; Lischinski, D. Non-rigid dense correspondence with applications for image enhancement. ACM Transactions on Graphics Vol. 30, No. 4, Article No. 70, 2011.

[16] Shih, Y.; Paris, S.; Durand, F.; Freeman, W. T. Data-driven hallucination of different times of day from a single outdoor photo. ACM Transactions on Graphics Vol. 32, No. 6, Article No. 200, 2013.

[17] Laffont, P.-Y.; Ren, Z.; Tao, X.; Qian, C.; Hays, J. Transient attributes for high-level understanding and editing of outdoor scenes. ACM Transactions on Graphics Vol. 33, No. 4, Article No. 145, 2014.

[18] Martin-Brualla, R.; Gallup, D.; Seitz, S. M. Time-lapse mining from internet photos. ACM Transactions on Graphics Vol. 34, No. 4, Article No. 62, 2015.

[19] Shen, X.; Tao, X.; Zhou, C.; Gao, H.; Jia, J. Regional foremost matching for internet scene images. ACM Transactions on Graphics Vol. 35, No. 6, Article No. 178, 2016.

[20] Levin, A.; Lischinski, D.; Weiss, Y. Colorization using optimization. ACM Transactions on Graphics Vol. 23, No. 3, 689–694, 2004.

[21] Liu, X.; Wan, L.; Qu, Y.; Wong, T.-T.; Lin, S.; Leung, C.-S.; Heng, P.-A. Intrinsic colorization. ACM Transactions on Graphics Vol. 27, No. 5, Article No. 152, 2008.

[22] Lischinski, D.; Farbman, Z.; Uyttendaele, M.; Szeliski, R. Interactive local adjustment of tonal values. ACM Transactions on Graphics Vol. 25, No. 3, 646–653, 2006.

[23] An, X.; Pellacini, F. AppProp: All-pairs appearance-space edit propagation. ACM Transactions on Graphics Vol. 27, No. 3, Article No. 40, 2008.

[24] Chen, X.; Zou, D.; Zhao, Q.; Tan, P. Manifold preserving edit propagation. ACM Transactions on Graphics Vol. 31, No. 6, Article No. 132, 2012.

[25] Wu, C. VisualSFM: A visual structure from motion system. 2011. Available at http://ccwu.me/vsfm/.

[26] Wu, C.; Agarwal, S.; Curless, B.; Seitz, S. M. Multicore bundle adjustment. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3057–3064, 2011.

[27] Furukawa, Y.; Ponce, J. Accurate, dense, and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 32, No. 8, 1362–1376, 2010.

[28] Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In: Proceedings of the 6th International Conference on Computer Vision, 839–846, 1998.

[29] Laffont, P.-Y.; Bazin, J.-C. Intrinsic decomposition of image sequences from local temporal variations. In: Proceedings of the IEEE International Conference on Computer Vision, 433–441, 2015.

[30] Roberts, D. A. PixelStruct, an open-source tool for visualizing 3D scenes reconstructed from photographs. 2009. Available at https://github.com/davidar/pixelstruct.

1 Nanyang Technological University, 639798, Singapore. E-mail: zhangqian@ntu.edu.sg

2 ETH Zurich, 8092 Zurich, Switzerland.

3 National University of Singapore, 119077, Singapore.

Manuscript received: 2017-03-31; accepted: 2017-05-26

    Qian Zhangis a research assistant at Nanyang Technological University,Singapore.Her research interests include image processing, computational photography, and image-based rendering. Qian Zhang has her B.S.degree in electronics and information engineering from Huazhong University of Science and Technology,China.

    Pierre-Yves Laffont is the CEO and co-founder of Lemnis Technologies. During this research, he was a postdoctoral researcher at ETH Zurich and a visiting researcher at Nanyang Technological University. His research interests include intrinsic image decomposition, example-based appearance transfer, and image-based rendering and relighting. He received his Ph.D. degree in computer science from Inria Sophia-Antipolis.

    Terence Sim is an associate professor at the School of Computing, National University of Singapore, where he also serves as an assistant dean of corporate relations. His research focuses primarily on facial image analysis, biometrics, and computational photography. He is also interested in computer vision problems in general, such as shape-from-shading, photometric stereo, and object recognition. From 2014 to 2016, Dr. Sim served as president of the Pattern Recognition and Machine Intelligence Association (PREMIA), a national professional body for pattern recognition affiliated with the International Association for Pattern Recognition (IAPR).

    Open Access  The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

    Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
