
    Research Article

    Decoding and calibration method on focused plenoptic camera

Chunping Zhang1 (✉), Zhe Ji1, and Qing Wang1
© The Author(s) 2016. This article is published with open access at Springerlink.com

Abstract  The light-gathering ability of the plenoptic camera opens up new opportunities for a wide range of computer vision applications. An efficient and accurate method to calibrate the plenoptic camera is crucial for its development. This paper describes a 10-intrinsic-parameter model for the focused plenoptic camera with misalignment. By exploiting the relationship between the raw image features and the depth-scale information in the scene, we propose to estimate the intrinsic parameters from raw images directly, with a parallel biplanar board which provides a depth prior. The proposed method enables an accurate decoding of the light field in both angular and positional information, and geometrically guarantees a unique solution for the 10 intrinsic parameters. Experiments on both simulation and real scene data validate the performance of the proposed calibration method.

Keywords  calibration; focused plenoptic camera; depth prior; intrinsic parameters

1 School of Computer Science, Northwestern Polytechnical University, Xi'an 710072, China. E-mail: C. Zhang, 724495506@qq.com (✉); Z. Ji, 1277440141@qq.com; Q. Wang, qwang@nwpu.edu.cn.

Manuscript received: 2015-12-01; accepted: 2016-01-13

    1 Introduction

Light field cameras, including the plenoptic camera designed by Ng [1, 2] and the focused plenoptic camera designed by Georgiev [3-5], capture both angular and spatial information of rays in space. With the micro-lens array between the image sensor and the main lens, the rays from the same point in the scene fall on different locations of the image sensor. With a particular camera model, the 2D raw image can be decoded into a 4D light field [6, 7], which allows applications such as refocusing, multiview imaging, depth estimation, and so on [1, 8-10]. To support these applications, an accurate calibration method for the light field camera is necessary.

Prior work in this area has dealt with the calibration of the plenoptic camera and the focused plenoptic camera by projecting images into the 3D world, but their camera models are still improvable. These methods assume that the geometric center of a micro-lens image lies on the optical axis of its corresponding micro-lens, and do not consider the constraints on the high-dimensional features of light fields. In this paper, we concentrate on the focused plenoptic camera and analyze the variance and invariance between the distribution of rays inside the camera and in the real world scene, namely the relationship between the raw image features and the depth-scale information. We fully take into account the misalignment of the micro-lens array, and propose a 10-intrinsic-parameter light field camera model to relate the raw image and the 4D light field by ray tracing. Furthermore, to improve calibration accuracy, instead of a single planar board, we design a parallel biplanar board to provide depth and scale priors. The method is verified on simulated data and a physical focused plenoptic camera. The effects of different intrinsic parameters on the rendered images are compared.

In summary, our main contributions are listed as follows:

(1) A full light field camera model that takes into account the geometric relationship between the center of a micro-lens image and the optical center of the micro-lens, which is ignored in most of the literature.

(2) A loop-locked algorithm which is capable of exploiting the 3D scene prior to estimate the intrinsic parameters in one shot, with good stability and low computational complexity.

The remainder of this paper is organized as follows. Section 2 summarizes related work on light field camera models and on decoding and calibration methods. Section 3 describes the ideal model for a traditional camera or a focused plenoptic camera, and presents three theorems we utilize for intrinsic parameter estimation. In Section 4, we propose a more complete model for a focused plenoptic camera. Section 5 presents our calibration algorithm. In Section 6, we evaluate our method on both simulated and real data. Finally, Section 7 concludes with a summary and future work.

    2 Related work

A light field camera captures a light field in a single exposure. The 4D light field data is rearranged on a 2D image sensor in accordance with the optical design. Moreover, the distribution of the raw image depends on the relative position of the focused point inside the camera and the optical center of the micro-lens, as shown in Fig. 1. Figure 1(a) shows the design of Ng's plenoptic camera, where the micro-lens array is on the image plane of the main lens and the rays from the focused point almost fall on the same micro-lens image. Figures 1(b) and 1(c) show the design of Georgiev's focused plenoptic camera, with a micro-lens array focused on the image plane of the main lens; here the rays from the focused point fall on different micro-lenses.

Fig. 1 Different designs of light field camera and raw images that consist of many closely packed micro-lens images.

Decoding a light field is equivalent to computing multiview images in two perpendicular directions. Multiview images are reorganized by selecting a contiguous set of pixels from each micro-lens image, for example, one pixel for the plenoptic camera [2] and a patch for the focused plenoptic camera [3, 10]. However, for a focused plenoptic camera, the patch size influences the focus depth of the rendered image. Such a decoding method causes discontinuity in out-of-focus areas and results in aliasing artifacts.

For decoding a 2D raw image into a 4D light field representation, a common assumption is that, in ideal circumstances, the center of each micro-lens image lies on the optical axis of its corresponding micro-lens [7, 11, 12]. Perwaß et al. [7] synthesized refocused images at different depths by searching pixels from multiple micro-lens images. Georgiev et al. [13] decoded into a light field using ray transfer matrix analysis. Under this assumption, the deviation in a ray's original direction has little effect on rendering a traditional image. However, the directions of the decoded rays are crucial for an accurate estimation of camera intrinsic parameters, which is particularly important for absolute depth estimation [14] or light field reparameterization for cameras in different poses [15].

The calibration of a physical light field camera aims to decode rays more accurately. Several methods have been proposed for the plenoptic camera. Dansereau et al. [6] presented a 15-parameter plenoptic camera model to relate pixels to rays in 3D space, which provides theoretical support for light field panoramas [15]. The parameters are initialized using traditional camera calibration techniques. Bok et al. [16] formulated a geometric projection model to estimate intrinsic and extrinsic parameters by utilizing raw images directly, including an analytical solution and non-linear optimization. Thomason et al. [17] concentrated on the misalignment of the micro-lens array and estimated its position and orientation. In this work, the directions of rays may deviate due to an inaccurate solution of the installation distances among the main lens, the micro-lens array, and the image sensor. On the other hand, Johannsen et al. [12] estimated the intrinsic and extrinsic parameters of a focused plenoptic camera by reconstructing a grid pattern from the raw image directly. The depth distortion caused by the main lens was taken into account in their method. More importantly, except for Ref. [17], these methods do not consider the deviation between the image center and the optical center of each micro-lens, which tends to cause inaccuracy in the decoded light field.

    3 The world in camera

The distribution of rays refracted by a camera lens is different from the original light field. In this section, we first discuss the corresponding relationship between points in the scene and points inside a camera modelled as a thin lens. Then we analyze the invariance in an ideal focused plenoptic camera, based on a thin lens model for the main lens and a pinhole model for each micro-lens. Finally we conclude the relationship between the raw image features and the depth-scale information in the scene. Our analysis is conducted in the non-homogeneous coordinate system.

3.1 Thin lens model

As shown in Fig. 2, the rays emitted from the scene point (x_obj, y_obj, z_obj)^T in different directions are refracted through the lens aperture and brought to a single convergence point (x_in, y_in, z_in)^T if z_obj > F, where F denotes the focal length of the thin lens. The relationship between the two points is described as follows:
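From the definitions above, Eqs. (1) and (2) presumably take the standard thin-lens form (a reconstruction from the surrounding text, not the authors' verbatim formulas):

$$\frac{1}{z_{\mathrm{obj}}} + \frac{1}{z_{\mathrm{in}}} = \frac{1}{F} \qquad (1)$$

$$\begin{pmatrix} x_{\mathrm{in}} \\ y_{\mathrm{in}} \end{pmatrix} = -\frac{z_{\mathrm{in}}}{z_{\mathrm{obj}}} \begin{pmatrix} x_{\mathrm{obj}} \\ y_{\mathrm{obj}} \end{pmatrix}, \qquad z_{\mathrm{in}} = \frac{F\, z_{\mathrm{obj}}}{z_{\mathrm{obj}} - F} \qquad (2)$$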

Equation (2) shows that the ratio between the coordinates of the two points changes with z_obj. Furthermore, there is a projective relationship between the coordinates inside and outside the camera. For example, as shown in Fig. 3, objects with the same size at different depths in the scene correspond to objects with different sizes inside the camera. The relationship can be described as

    where the focal length F satisfies:

Fig. 2 Thin lens model.

3.2 Ideal focused plenoptic camera model

As shown in Fig. 1, there are two optical designs of the focused plenoptic camera. In this paper, we only consider the design in Fig. 1(b). The case in Fig. 1(c) is similar to the former, differing only in the relative position of the focus point and the optical center of the micro-lens.

In this section, the main lens and the micro-lens array are described by a thin lens and a pinhole model respectively. As shown in Fig. 4, the main lens, the micro-lens array, and the image sensor are parallel to each other and all perpendicular to the optical axis. The optical center of the main lens lies on the optical axis.

Let d_img and d_lens be the distance between the geometric centers of two arbitrary adjacent micro-lens images and the diameter of a micro-lens respectively, as shown in Fig. 4(a). The ratio between them is
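Since each micro-lens image center is the projection of its micro-lens through the optical center of the main lens onto the sensor, Eq. (5) presumably reads (a hedged reconstruction)

$$\frac{d_{\mathrm{img}}}{d_{\mathrm{lens}}} = \frac{L + l}{L} \qquad (5)$$

Rearranged, this gives L/l = d_lens/(d_img − d_lens), consistent with the statement below that L/l is determined by the raw image and the micro-lens diameter.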

Fig. 3 Two objects with the same size T in the scene at different depths focus inside a camera with focal length F.

where L is the distance between the main lens and the micro-lens array, and l is the distance between the micro-lens array and the image sensor. We can find that the ratio L/l depends only on the raw image and the diameter of the micro-lens, which is useful for our calibration model in Section 5. Moreover, there is a deviation between the optical center of a micro-lens and the geometric center of its image, and d_img is constant in the same plenoptic camera.

Let d_lens,scene and d_img,scene be the sizes of a micro-lens and of its image refracted through the main lens into the scene respectively (Fig. 4(b)). Combining Eqs. (2) and (5), the ratio between them satisfies:

Fig. 4 Ideal focused plenoptic camera in the telescopic case.

Equation (6) shows that although the rays are refracted through the main lens, the deviation between the geometric center of a micro-lens image and the optical center of the micro-lens still cannot be ignored. The effect of these deviations on the rendered images will be demonstrated and discussed in the experiments.

In Fig. 4(b), A′ and B′ are the focus points of two scene points A and B respectively. The rays emitted from each focus point fall on multiple micro-lenses and focus on the image sensor, resulting in multiple images A′_i and B′_i. The distance d_A′ between two adjacent sensor points of the same focus point satisfies Eq. (7).
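Treating two adjacent micro-lenses as pinholes separated by d_lens, with the sensor at distance l behind the array, similar triangles presumably give (a reconstruction from the definitions that follow)

$$d_{A'} = d_{\mathrm{lens}} \cdot \frac{\left| L_{A'} + l \right|}{\left| L_{A'} \right|} \qquad (7)$$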

where L_A′ is the distance between the focus point A′ and the micro-lens array, and |·| denotes the absolute value operator. Equation (7) indicates that the distance between any two adjacent sensor points of the same focus point inside the camera depends only on intrinsic parameters. Once the raw image is shot (thus d_A′ is determined), L_A′ depends only on l and d_lens. According to triangle similarity, we can get the coordinates of the focus point:

Based on Eq. (7), we can simplify Eq. (8) as

According to Eq. (9), once a raw image is shot and d_lens is given, x_A′ and y_A′ can be calculated, and they are independent of the other intrinsic parameters. Furthermore, the length of AB can be calculated using only the raw image and d_lens.

Imagine that there are two objects of equal size in the scene, as shown in Fig. 3. The distance between each focus point and the micro-lens array can be calculated via Eq. (7). Replacing b_1 and b_2 in Eq. (4) and simplifying via Eqs. (5) and (7), we get the relationship:

where S_1, S_2, L_I′1, and L_I′2 depend on only three factors: the raw image, d_lens, and l. Equation (10) shows that the value of F can be calculated uniquely once the other intrinsic parameters are determined.

In the same manner, Eq. (3) can be simplified as

From Eq. (11), the size of an object in the scene is independent of l. Therefore the size of an object reconstructed from the raw image cannot be taken as a cost function to constrain l.

In summary, given the coordinates of the micro-lenses and the raw image, three theorems can be concluded as follows:

(1) The size of a reconstructed object inside the camera and its distance to the micro-lens array are constant (Eq. (9)).

(2) The unique F can be determined by the prior of the scene (Eq. (10)).

(3) The size of the reconstructed object in the scene is constant with changing L (Eq. (11)).

    4 Micro-lens-based camera model

In this section we present a more complete model for a focused plenoptic camera with misalignment of the micro-lens array [17], which is capable of decoding a more accurate light field. In total there are 10 intrinsic parameters: the distance between the main lens and the micro-lens array, L; the distance between the micro-lens array and the image sensor, l; the misalignment of the micro-lens array, x_m, y_m, (θ, β, γ); the focal length of the main lens, F; and the shift of the image coordinate, (u_0, v_0).

4.1 Distribution of micro-lens image

As shown in Fig. 5(a), each micro-lens, with its unique coordinate (x_i, y_i, 0)^T, is tangent to its neighbors. In addition, (x_i, y_i, 0)^T depends only on d_lens. To simplify the discussion, we assume the layout of the micro-lens array is square-like. For a hexagon-like configuration, it is easy to partition the whole array into two square-like ones. With the transformation shown in Fig. 5(b), the coordinate of the optical center of a micro-lens is represented as
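From the definitions of R and t given immediately below, Eq. (12) is presumably the rigid transformation

$$\begin{pmatrix} x_c \\ y_c \\ z_c \end{pmatrix} = R \begin{pmatrix} x_i \\ y_i \\ 0 \end{pmatrix} + t \qquad (12)$$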

where t = (x_m, y_m, L)^T and R is the rotation matrix with three degrees of freedom, i.e., the rotations (θ, β, γ) about the three coordinate axes, similar to the traditional camera calibration model [18].

Although the main lens and the image sensor are parallel, the micro-lens array is not necessarily parallel to the image sensor (Fig. 5(c)). The geometric center of each micro-lens image is represented as
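Assuming that the geometric center of a micro-lens image is where the line from the main-lens optical center (the origin) through the micro-lens optical center meets the sensor plane z = L + l (an assumption on our part), Eq. (13) presumably reads

$$\begin{pmatrix} x_{\mathrm{img}} \\ y_{\mathrm{img}} \end{pmatrix} = \frac{L + l}{z_c} \begin{pmatrix} x_c \\ y_c \end{pmatrix} \qquad (13)$$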

Fig. 5 The coordinate system of a focused plenoptic camera.

4.2 Projections from the raw image

Once the coordinate of a micro-lens's optical center (x_c, y_c, z_c)^T and its image point (x_img, y_img, L+l)^T are calculated, we can get a unique ray r_i represented as
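A natural parameterization of the ray through these two points (a reconstruction, not necessarily the authors' exact form of Eq. (14)) is

$$r_i(\tau) = \begin{pmatrix} x_{\mathrm{img}} \\ y_{\mathrm{img}} \\ L + l \end{pmatrix} + \tau \left[ \begin{pmatrix} x_c \\ y_c \\ z_c \end{pmatrix} - \begin{pmatrix} x_{\mathrm{img}} \\ y_{\mathrm{img}} \\ L + l \end{pmatrix} \right], \quad \tau \in \mathbb{R} \qquad (14)$$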

As shown in Fig. 4(b), the multiple images on the image sensor from the same focus point A′ can be located if a proper pattern is shot, such as a grid-array pattern [12]. Thus the multiple rays emitted from point A′ through the different optical centers of the micro-lenses are collected to calculate the coordinate of point A′:
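Given the L2 norm defined in the next line, Eq. (15) is presumably the least-squares intersection of the collected rays (a reconstruction):

$$\hat{A}' = \arg\min_{A'} \sum_i \left\| d(A', r_i) \right\|_2^2$$

where d(A′, r_i) denotes the perpendicular offset from the candidate point to ray r_i.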

where ‖·‖_2 represents the L2 norm. Up to this point, we have accomplished the decoding of the light field inside the camera. To obtain the light field data in the scene, combining the depth-dependent scaling ratio described in Eq. (2), the representation of the focused points Ã′ can easily be transformed using the focal length F.
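As an illustration of this decoding step, the sketch below solves the least-squares ray intersection in closed form. It is a minimal Python reconstruction of Eq. (15), not the authors' code; the function name and array layout are our own.

```python
import numpy as np

def triangulate_point(origins, directions):
    """Least-squares point closest to a bundle of rays (cf. Eq. (15)).

    Each ray is origins[i] + t * directions[i]; minimizing the summed
    squared distances to all rays yields a 3x3 linear system.
    origins, directions: (N, 3) arrays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        # (I - d d^T) projects onto the plane orthogonal to the ray.
        M = np.eye(3) - np.outer(d, d)
        A += M
        b += M @ o
    return np.linalg.solve(A, b)
```

With the rays of Eq. (14) expressed as origin-direction pairs, the returned point is the focus point A′ up to noise in the located sensor points.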

    5 Calibration

Compared to the ideal focused plenoptic camera model, the shift caused by the rotations of the micro-lenses is far less than l and the difference in the numerical calculation is trivial; therefore the three theorems concluded for an ideal focused plenoptic camera still hold for our proposed model with misalignment. More importantly, when there is zero machining error, the diameter of the micro-lens d_lens is fixed and does not need to be estimated during the calibration. Consequently, the unique solution of the intrinsic parameters P = (θ, β, γ, x_m, y_m, L, l, u_0, v_0)^T and F can be estimated using the two steps described in the following.

5.1 Decoding by micro-lens optical center

To locate the centers of the micro-lens images, we shoot a white scene [19, 20]. A template of proper size is then cut out from the white image and its similarity with the original white image is calculated via normalized cross-correlation (NCC). To find the locations with subpixel accuracy, a threshold is placed on the similarity map such that all values less than 50% of the maximum intensity are set to zero. Then we take the filtered similarity map as a weight and calculate the weighted coordinate of every small region. The results are shown in Fig. 6.

Fig. 6 The template (top-left), crops of the similarity map (top-right), the filtered similarity map (bottom-left), and the final locations of the micro-lens image centers (bottom-right).
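A minimal sketch of this center-location procedure follows, assuming OpenCV is available; the template-matching mode, function name, and the connected-component grouping are our choices rather than the paper's specification.

```python
import numpy as np
import cv2  # assumed available for template matching

def locate_centers(white_img, template, thresh_ratio=0.5):
    """Sub-pixel micro-lens image centers from a white image (Section 5.1)."""
    sim = cv2.matchTemplate(white_img.astype(np.float32),
                            template.astype(np.float32),
                            cv2.TM_CCOEFF_NORMED)  # zero-mean NCC variant
    # Zero out everything below 50% of the maximum similarity.
    sim[sim < thresh_ratio * sim.max()] = 0.0
    # Group surviving responses and take similarity-weighted centroids.
    n, labels = cv2.connectedComponents((sim > 0).astype(np.uint8))
    ys, xs = np.mgrid[0:sim.shape[0], 0:sim.shape[1]]
    centers = []
    for k in range(1, n):  # label 0 is the background
        w = sim * (labels == k)
        s = w.sum()
        # Coordinates are in similarity-map space, i.e. offset from the
        # image by half the template size.
        centers.append(((xs * w).sum() / s, (ys * w).sum() / s))
    return centers
```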

To estimate the parameters P, we minimize the cost function:
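Judging from the description below, the cost presumably penalizes the discrepancy between the micro-lens image centers predicted by Eq. (13), shifted by the image offset, and the centers located above; a hedged reconstruction is

$$\hat{P} = \arg\min_{P} \sum_i \left\| \begin{pmatrix} x_{\mathrm{img},i} \\ y_{\mathrm{img},i} \end{pmatrix} + \begin{pmatrix} u_0 \\ v_0 \end{pmatrix} - \hat{c}_i \right\|_2^2$$

where the located centers ĉ_i are our notation, not the paper's.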

where (u_0, v_0) is the offset between the camera coordinate and the image coordinate. After this optimization, P is used to calculate the micro-lens optical centers and reconstruct the calibration points. Then the rays are obtained via Eq. (14).

According to Eq. (5), the solution of Eq. (17) is not unique: it changes with the initial value of L. Moreover, the ratio L/l is almost constant with a changing initial value of P. Although there are differences between the models described in Section 3.2 and Section 4, the theorems still hold since the shift caused by the rotations can be ignored. This observation will be verified in the experiments later.

In addition, the value of l influences the direction of the decoded rays. Due to the coupling of angle and depth, either of them can be used as the prior introduced to estimate the unique P.

5.2 Reconstruction of calibration points

To reconstruct a plane in the scene, we shoot a certain pattern in order to recognize multiple images of different scene points. A crop of the calibration board and the raw image we shoot are shown in Fig. 7. To locate the multiple images of every point on the calibration board, we preprocess the grid image by adding the inverse color of the white image to the grid image (Fig. 7). Then one of the sensor points corresponding to the focus point A′ is located by the same method described in Section 5.1. Consequently, the plane we shoot in the scene, denoted by $\tilde{\Pi} = \{A_i \mid i = 1, \cdots, n\}$, is easy to reconstruct using Eqs. (2) and (14).

As shown in Fig. 8, we design a parallel biplanar board with a known distance between the two parallel planes and a known distance between adjacent grids, which provide the depth prior Pr_dp and the scale prior Pr_sc. Equivalently, we can shoot a single-plane board twice while moving the camera on a guide rail by a fixed distance.

Fig. 7 A crop of the calibration board, its raw image, and the image preprocessed with the white image.

Note that if the value of L or l is incorrect, the distance between the reconstructed planes $\tilde{\Pi}_1$ and $\tilde{\Pi}_2$ is not equal to the prior distance. Therefore we take the distance between the reconstructed planes as the cost term:
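Based on the expression Section 5.3 refers back to, the cost (presumably Eq. (19)) is

$$\left\| \mathrm{dis}(\tilde{\Pi}_1, \tilde{\Pi}_2) - Pr_{\mathrm{dp}} \right\|_2 \qquad (19)$$

minimized over the loop of L.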

Fig. 8 The parallel biplanar board we designed to provide the depth prior for calibration.

where dis(·,·) represents the distance between two parallel planes. In practice, we take the mean distance of the reconstructed points on $\tilde{\Pi}_1$ to the plane $\tilde{\Pi}_2$ as the value of dis.

Moreover, $\tilde{T}_1$ and $\tilde{T}_2$ may not be equal to Pr_sc due to possible calculation error, so we must refine the value of the depth prior to ensure the correct ratio of scale and depth.

5.3 Algorithm summary

The complete algorithm is summarized in Algorithm 1.

To make the algorithm more efficient, the search step of the loop over L should change with the value of $\|\mathrm{dis}(\tilde{\Pi}_1, \tilde{\Pi}_2) - Pr_{\mathrm{dp}}\|_2$ in Eq. (19). The same principle applies to the search step of F. In addition, because $\|\mathrm{dis}(\tilde{\Pi}_1, \tilde{\Pi}_2) - Pr_{\mathrm{dp}}\|_2$ is monotonic in L, and F is monotonic in the reconstructed scale, we can use bisection to find an accurate value more efficiently.

Algorithm 1: Calibration method for a focused plenoptic camera with a parallel calibration board
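Algorithm 1 itself is not reproduced here. The sketch below illustrates only the bisection search described above, assuming a signed residual dis(Π̃1, Π̃2) − Pr_dp that is monotonic in the searched intrinsic; the function and parameter names are hypothetical.

```python
def bisect_intrinsic(residual, lo, hi, tol=1e-6):
    """Bisection over one intrinsic (e.g. L), cf. Section 5.3.

    residual(v) reconstructs the calibration planes for candidate value v
    and returns dis(Pi1, Pi2) - Pr_dp, assumed monotonic in v;
    lo and hi must bracket a sign change.
    """
    f_lo = residual(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = residual(mid)
        if (f_mid < 0) == (f_lo < 0):
            lo, f_lo = mid, f_mid  # zero crossing lies in the upper half
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The same routine can be reused for F against the scale residual.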

    6 Experimental results

In the experiments, we apply our calibration method to simulated and real scene data. We capture three datasets of white images and grid images using a self-assembled focused plenoptic camera (Fig. 9). The camera includes a GigE camera with a CCD image sensor whose resolution is 4008×2672 pixels (9 μm pixel pitch), an F-mount Nikon lens with 50 mm focal length, and a micro-lens array with 300 μm lens diameter and negligible error in a hexagonal layout.

We use the function "fminunc" in MATLAB for the non-linear optimization in Eqs. (15), (17), and (18). The initial parameters are set to the installation parameters, and θ, β, γ, x_m, y_m are set to zero.

Fig. 9 The focused plenoptic camera we assembled and its micro-lens array inside the camera.

    Table 1 The parameters we estimated and the ground truth

6.1 Simulated data

First we verify the calibration method on simulated images rendered in MATLAB, as shown in Fig. 1. The ground truth and the calibrated parameters are shown in Table 1. We compare the estimated angles of the rays passing through each micro-lens optical center and through the main lens to the ground truth, as shown in Fig. 10. The differences are less than 1.992×10^-3 rad.

We compare the geometric centers of the micro-lens images we locate with those obtained after optimization. The error maps of the 84×107 geometric centers optimized with different L are shown in Fig. 11(a). From Fig. 11(b), we find that 96.53% of the centers have an error of less than 0.1 pixel; these are the input for the following projection step.

The comparison of the locations of the micro-lens optical centers with different L is illustrated in Fig. 12. The difference in the x- and y-coordinates of the optical centers is trivial with changing L. The maximal difference is 4.2282×10^-6 mm when L changes from 55 to 84 mm, which supports our observation in Section 5.1.

Fig. 10 The histogram of the deviation between the estimated ray angles and the ground truth.

Fig. 11 The results of optimization on the geometric centers of the micro-lens images on simulated data.

Fig. 12 The comparison of the locations of the micro-lens optical centers with different L from 55 to 84 mm on simulated data. The value in row i and column j represents the difference in x- and y-coordinates between the results optimized with L_i and L_j.

Fig. 13 The values of F, dis($\tilde{\Pi}_1$, $\tilde{\Pi}_2$), $\tilde{S}_1$, $\tilde{S}_2$, and $\tilde{T}$ ($\tilde{T}_2 = \tilde{T}_1$) with different L on simulated data.

Fig. 14 The relationship of $\tilde{T}_1$, $\tilde{T}_2$, and F when L = 67.3129 mm.

6.2 Physical camera

We then verify the calibration method on the physical focused plenoptic camera. To obtain data equivalent to the parallel biplanar board, we shoot a single-plane board twice while moving the camera on a guide rail by an accurately fixed distance, as shown in Fig. 9. The depth prior Pr_dp is precisely controlled to be 80.80 mm and the scale prior Pr_sc is 28.57 mm. The calibration results are shown in Fig. 15.

As shown in Fig. 15(a), there is an obvious error between the computed geometric centers and the located centers at the edge of the error map, which may result from lens distortion or the machining error of the micro-lenses. However, we find that 73.00% of the centers have an error of less than 0.6 pixel, as shown in Fig. 15(b). The mean difference of the geometric centers of the micro-lens images optimized with different L is 1.89×10^-4 pixel (Fig. 15(c)). The results for F, dis($\tilde{\Pi}_1$, $\tilde{\Pi}_2$), $\tilde{S}_1$, $\tilde{S}_2$, $\tilde{T}$ ($\tilde{T}_1 = \tilde{T}_2$) with different L are similar to the results on simulated data.

Finally, to verify the stability of our algorithm, we calibrate the intrinsic parameters with different poses of the calibration board. The corresponding results are shown in Table 2.

Fig. 15 The results of optimization on the geometric centers of the micro-lens images on physical data.

Table 2 Parameters estimated with the calibration board in different poses. The third parameter is the angle between the calibration board and the optical axis.

Fig. 16 The rendered images from simulated data.

6.3 Rendering

We render the focused image with deviations between the optical center of the micro-lens and the geometric center of the micro-lens image.

We shoot a resolution test chart at the same depth for the simulated data (Fig. 16), which indicates that the deviation surely affects the accuracy of the decoded light field. Then we shoot a chess board for the simulated data to evaluate the width of every grid in the rendered images. We resize the images by setting the mean width of the grids to 100 pixels. Then we calculate the range and the standard deviation of the grid width. The results are shown in Table 3, which indicates that the calibration contributes to a uniform scale at the same depth and reduces the distortion caused by incorrect deviations. The results for the physical camera are shown in Table 4 and Fig. 17. The light field decoded with the estimated intrinsic parameters leads to a more accurate refocus distance [14], which is equivalent to a correct ratio of scale and depth.

Table 3 The range and variance of the rendered chess board on simulated data

Fig. 17 The image rendered from the physical camera.

Table 4 The range and variance of the rendered chess board on the physical camera

    7 Conclusions and future work

In this paper we present a 10-intrinsic-parameter model to describe a focused plenoptic camera with misalignment. To estimate the intrinsic parameters, we propose a calibration method based on the relationship between the raw image features and the depth-scale information in the real world scene. To provide depth and scale priors to constrain the intrinsic parameters, we design a parallel biplanar board with grids. The calibration approach is evaluated on simulated as well as real data. Experimental results show that our proposed method is capable of decoding a more accurate light field for the focused plenoptic camera.

Future work includes modelling the distortion caused by the micro-lenses and the main lens, optimization of the extrinsic parameters, and the reparameterization and re-sampling of light field data from multiple cameras with different poses.

    Acknowledgements

The work is supported by the National Natural Science Foundation of China (Nos. 61272287 and 61531014) and the research grant of the State Key Laboratory of Virtual Reality Technology and Systems (No. BUAAVR-15KF-10).

    References

[1] Ng, R. Digital light field photography. Ph.D. Thesis. Stanford University, 2006.

[2] Ng, R.; Levoy, M.; Bredif, M.; Duval, G.; Horowitz, M.; Hanrahan, P. Light field photography with a hand-held plenoptic camera. Stanford University Computer Science Tech Report CSTR 2005-02, 2005.

[3] Georgiev, T. G.; Lumsdaine, A. Focused plenoptic camera and rendering. Journal of Electronic Imaging Vol. 19, No. 2, 021106, 2010.

[4] Lumsdaine, A.; Georgiev, T. Full resolution lightfield rendering. Technical Report. Indiana University and Adobe Systems, 2008.

[5] Lumsdaine, A.; Georgiev, T. The focused plenoptic camera. In: Proceedings of IEEE International Conference on Computational Photography, 1-8, 2009.

[6] Dansereau, D. G.; Pizarro, O.; Williams, S. B. Decoding, calibration and rectification for lenselet-based plenoptic cameras. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1027-1034, 2013.

[7] Perwaß, C.; Wietzke, L. Single lens 3D-camera with extended depth-of-field. In: Proceedings of SPIE 8291, Human Vision and Electronic Imaging XVII, 829108, 2012.

[8] Bishop, T. E.; Favaro, P. Plenoptic depth estimation from multiple aliased views. In: Proceedings of IEEE 12th International Conference on Computer Vision Workshops, 1622-1629, 2009.

[9] Levoy, M.; Hanrahan, P. Light field rendering. In: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, 31-42, 1996.

[10] Wanner, S.; Fehr, J.; Jähne, B. Generating EPI representations of 4D light fields with a single lens focused plenoptic camera. In: Lecture Notes in Computer Science, Vol. 6938. Bebis, G.; Boyle, R.; Parvin, B. et al. Eds. Springer Berlin Heidelberg, 90-101, 2011.

[11] Hahne, C.; Aggoun, A.; Haxha, S.; Velisavljevic, V.; Fernández, J. C. J. Light field geometry of a standard plenoptic camera. Optics Express Vol. 22, No. 22, 26659-26673, 2014.

[12] Johannsen, O.; Heinze, C.; Goldluecke, B.; Perwaß, C. On the calibration of focused plenoptic cameras. In: Lecture Notes in Computer Science, Vol. 8200. Grzegorzek, M.; Theobalt, C.; Koch, R.; Kolb, A. Eds. Springer Berlin Heidelberg, 302-317, 2013.

[13] Georgiev, T.; Lumsdaine, A.; Goma, S. Plenoptic principal planes. In: Imaging Systems and Applications, OSA Technical Digest (CD), paper JTuD3, 2011.

[14] Hahne, C.; Aggoun, A.; Velisavljevic, V. The refocusing distance of a standard plenoptic photograph. In: Proceedings of 3DTV-Conference: The True Vision—Capture, Transmission and Display of 3D Video, 1-4, 2015.

[15] Birklbauer, C.; Bimber, O. Panorama light-field imaging. Computer Graphics Forum Vol. 33, No. 2, 43-52, 2014.

[16] Bok, Y.; Jeon, H.-G.; Kweon, I. S. Geometric calibration of micro-lens-based light-field cameras using line features. In: Lecture Notes in Computer Science, Vol. 8694. Fleet, D.; Pajdla, T.; Schiele, B.; Tuytelaars, T. Eds. Springer International Publishing, 47-61, 2014.

[17] Thomason, C. M.; Thurow, B. S.; Fahringer, T. W. Calibration of a microlens array for a plenoptic camera. In: Proceedings of the 52nd Aerospace Sciences Meeting, AIAA SciTech, AIAA 2014-0396, 2014.

[18] Zhang, Z. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 22, No. 11, 1330-1334, 2000.

[19] Cho, D.; Lee, M.; Kim, S.; Tai, Y.-W. Modeling the calibration pipeline of the Lytro camera for high-quality light-field image reconstruction. In: Proceedings of IEEE International Conference on Computer Vision, 3280-3287, 2013.

[20] Sabater, N.; Drazic, V.; Seifi, M.; Sandri, G.; Perez, P. Light-field demultiplexing and disparity estimation. 2014. Available at https://hal.archives-ouvertes.fr/hal-00925652/document.

Chunping Zhang received her B.E. degree from the School of Computer Science, Northwestern Polytechnical University, in 2014. She is now a master student at the School of Computer Science, Northwestern Polytechnical University. Her research interests include computational photography, and light field computing theory and application.

Zhe Ji received her B.E. degree in technology and computer science from Northwestern Polytechnical University in 2015. She is now a master student at the School of Computer Science, Northwestern Polytechnical University. Her current research interests are computational photography, and light field computing theory and application.

Qing Wang is now a professor and Ph.D. tutor at the School of Computer Science, Northwestern Polytechnical University. He graduated from the Department of Mathematics, Peking University, in 1991. He then joined Northwestern Polytechnical University as a lecturer. In 1997 and 2000, he obtained his master and Ph.D. degrees from the Department of Computer Science and Engineering, Northwestern Polytechnical University, respectively. In 2006, he was awarded the Program for New Century Excellent Talents in University of the Ministry of Education, China. He is a member of IEEE and ACM, and a senior member of the China Computer Federation (CCF).

He worked as a research assistant and research scientist in the Department of Electronic and Information Engineering, the Hong Kong Polytechnic University, from 1999 to 2002. He also worked as a visiting scholar at the School of Information Engineering, the University of Sydney, Australia, in 2003 and 2004. In 2009 and 2012, he visited the Human Computer Interaction Institute, Carnegie Mellon University, for six months and the Department of Computer Science, University of Delaware, for one month, respectively.

Professor Wang's research interests include computer vision and computational photography, such as 3D structure and shape reconstruction, object detection, tracking and recognition in dynamic environments, and light field imaging and processing. He has published more than 100 papers in international journals and conferences.

Open Access  The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.

Xiaole Zhao1 (✉), Yadong Wu1, Jinsha Tian1, and Hongying Zhang2
© The Author(s) 2016. This article is published with open access at Springerlink.com

Abstract  It has been widely acknowledged that learning-based super-resolution (SR) methods are effective at recovering a high resolution (HR) image from a single low resolution (LR) input image. However, there currently exist two main challenges in learning-based SR methods: the quality of the training samples and the demand for computation. We propose a novel framework for single image SR tasks aimed at these issues, which consists of blind blurring kernel estimation (BKE) and SR recovery with anchored space mapping (ASM). BKE is realized by iteratively minimizing the cross-scale dissimilarity of the image, and SR recovery with ASM is performed based on the iterative least squares dictionary learning algorithm (ILS-DLA). BKE is capable of effectively improving the compatibility of training samples and testing samples, and ASM can radically reduce the time consumed during SR recovery. Moreover, a selective patch processing (SPP) strategy, measured by the average gradient amplitude |grad| of a patch, is adopted to accelerate the BKE process. The experimental results show that our method outruns several typical blind and non-blind algorithms under equal conditions.

Keywords  super-resolution (SR); blurring kernel estimation (BKE); anchored space mapping (ASM); dictionary learning; average gradient amplitude

1 School of Computer Science and Technology, Southwest University of Science and Technology, Mianyang 621010, China. E-mail: X. Zhao, zxlation@foxmail.com (✉); Y. Wu, wyd028@163.com.

2 School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China. E-mail: zhy0838@163.com.

Manuscript received: 2015-11-20; accepted: 2016-02-02

    1 Introduction

Single image super-resolution has become a hotspot in the super-resolution area for digital images, because it is generally not easy to obtain an adequate number of LR observations for SR recovery in many practical applications. In order to improve image SR performance and reduce time consumption so that it can be applied more effectively in practice, this kind of technology has attracted great attention in recent years.

Single image super-resolution is essentially a severely ill-posed problem, which needs adequate priors to be solved. Existing super-resolution technologies can be roughly divided into three categories: traditional interpolation methods, reconstruction methods, and machine-learning (ML) based methods. Interpolation methods usually assume that image data is a continuous and band-limited smooth signal. However, there are many discontinuous features in natural images, such as edges and corners, which usually makes the images recovered by traditional interpolation methods suffer from low quality [1]. Reconstruction based methods apply certain prior knowledge, such as the total variation (TV) prior [2-4] and the gradient profile (GP) prior [5], to well-pose the SR problem. The reconstructed image is required to be consistent with the LR input via back-projection. But a certain prior is typically only propitious to specific images. Besides, these methods produce worse results with larger magnification factors.

Relatively speaking, machine-learning based methods are a promising technology, and they have become the most popular topic in the single image SR field. The first ML method was proposed by Freeman et al. [6], and is called example-based learning. This method predicts HR patches from LR patches by solving a Markov random field (MRF) model with the belief propagation algorithm. Then, Sun et al. [7] enhanced discontinuous features (such as edges and corners) by primal sketch priors. These methods need an external database consisting of abundant HR/LR patch pairs, and time consumption hinders their application. Chang et al. [8] proposed a nearest neighbor embedding (NNE) method motivated by the philosophy of locally linear embedding (LLE) [9]. They assumed LR patches and HR patches have similar space structures, so the LR patch coefficients can be solved through a least squares problem for a fixed number of nearest neighbors (NNs). These coefficients are then used directly on the HR patch NNs. However, the fixed number of NNs can easily cause over-fitting and/or under-fitting [10]. Yang et al. [11] proposed an effective sparse representation approach and addressed the fitting problems by selecting the number of NNs adaptively.

However, ML methods are still exposed to two main issues: the compatibility between training and testing samples (affected by lighting conditions, defocus, noise, etc.), and the mapping relation between the LR and HR feature spaces (requiring numerous calculations). Glasner et al. [12] exploited image patch non-local self-similarity (i.e., patch recurrence) within an image scale and across scales for single image SR tasks, which makes an effective solution to the compatibility problem. The mapping relation involves the learning process of the LR/HR dictionaries. Actually, the LR and HR feature spaces are tied by some mapping function, which could be unknown and not necessarily linear [13]. Therefore, the original direct mapping mode [11] may not reflect this unknown non-linear relation correctly. Yang et al. [14] proposed another joint dictionary training approach to learn the duality relation between the LR/HR patch spaces. The method essentially concatenates the two feature spaces and converts the problem to standard sparse representation. Further, they explicitly learned the sparse coding problem across different feature spaces in Ref. [13], in the so-called coupled dictionary learning (CDL) algorithm. He et al. [15] proposed another beta process joint dictionary learning (BPJDL) algorithm for CDL based on a Bayesian method using a beta process prior. However, the above-mentioned dictionary learning approaches did not take the features of the training samples into account for better performance. Actually, it is not easy to find the complicated relation between the LR and HR feature spaces directly.

In this paper we present a novel single image super-resolution method considering both the SR result and the acceleration of execution. The proposed approach first estimates the true blur kernel based on the philosophy of minimizing the dissimilarity between cross-scale patches [16]. LR/HR dictionaries are then trained via the input image itself down-sampled by the estimated blur kernel. The BKE processing is adopted to improve the quality of the training samples. Then, L2 norm regularization is used to substitute the L0/L1 norm constraint so that a latent HR patch can be mapped from an LR patch directly through a mapping matrix computed from the LR/HR dictionaries. This strategy is similar to ANR [17], but we employ a different dictionary learning approach, i.e., ILS-DLA, to train the LR/HR dictionaries. In fact, ILS-DLA unifies the optimization principle of the whole SR process and produces better results compared with the K-SVD used by ANR.

The remainder of the paper is organized as follows: Section 2 briefly reviews the work related to this paper. The proposed approach is described in detail in Section 3. Section 4 presents the experimental results and comparisons with other typical blind and non-blind SR methods. Section 5 concludes the paper.

    2 Related work

2.1 Internal statistics in natural images

Glasner et al. [12] exploited an important internal statistical attribute of natural image patches named patch recurrence, which is also known as image patch redundancy or non-local self-similarity (NLSS). NLSS has been employed in a lot of computer vision fields such as super-resolution [12, 18-21], denoising [22], deblurring [23], and inpainting [24]. Further, Zontak and Irani [18] quantified this property by relating it to the spatial distance from the patch and the mean gradient magnitude |grad| of a patch. Three main conclusions can be drawn according to Ref. [18]: (i) smooth patches recur very frequently, whereas highly structured patches recur much less frequently; (ii) a small patch tends to recur densely in its vicinity and the frequency of recurrence decays rapidly as the distance from the patch increases; (iii) patches of different gradient content need to search for nearest neighbors at different distances. These conclusions form the theoretical basis of using the mean gradient magnitude |grad| as the metric for discriminatively choosing different patches when estimating the blurring kernels.

2.2 Cross-scale blur kernel estimation

For a more detailed elaboration, we still need to briefly review cross-scale BKE and introduce our previous work [16] on this issue, although part of it is the same as before. We will illustrate the detailed differences in Section 3.1. Because of camera shake, defocus, and various kinds of noise, the blur kernels of different images may be entirely different. Michaeli and Irani [25] utilized the non-local self-similarity property to estimate the optimal blur kernel by iteratively maximizing the cross-scale patch redundancy, relying on the observation that HR images possess more patch recurrence than LR images. They assumed the initial kernel is a delta function used to down-sample the input image. A few NNs of each small patch are found in the down-sampled version of the input image. Each NN corresponds to a large patch in the original scale image, and these patch pairs construct a set of linear equations which can be solved using weighted least squares. The root mean squared error (RMSE) between cross-scale patches is employed as the iteration criterion. Figure 1 shows the main process of cross-scale BKE in Ref. [25]. We follow the same idea with a more careful observation: the effect of the convolution on smooth patches is obviously smaller than that on structured patches (refer to Fig. 2). This phenomenon can be explained easily according to the definition of convolution. Moreover, the mean gradient magnitude |grad| is more expressive than the variance of a patch on the basis of the conclusions in Ref. [18].

Fig. 1 Description of cross-scale patch redundancy. For each small patch p_i in Y, its NNs are found in the down-sampled version Y_s; each NN corresponds to a large patch q_ij in Y. The patch pairs of all NNs construct a set of linear equations which is solved using weighted least squares to obtain an updated kernel.

Fig. 2 Blurring effect on non-smooth and smooth areas. Black boxes indicate structured areas, and red boxes indicate smooth areas. (a) Clean patches. It can be clearly seen that the structure of the non-smooth patch is distinct. (b) Blurred patches corresponding to (a). The detail of the non-smooth patch is obviously blurry.

2.3 ILS-DLA and ANR

ILS-DLA is a typical dictionary learning method. It adopts an overall optimization strategy based on least squares (LS) to update the dictionary when the weight matrix is fixed, so ILS-DLA [26] is usually faster than K-SVD [17, 27]. Besides, ANR just adjusts the objective function slightly, and the SR reconstruction process is theoretically based on the least squares method.

Suppose we have two coupled feature sets F_L and F_H with sizes n_L×L and n_H×L, and the number of atoms in the LR dictionary D_L and the HR dictionary D_H is K. The training process for D_L can be described as
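From the constraint and tradeoff terms defined below, Eq. (1) is presumably the usual sparse dictionary-learning objective (a reconstruction):

$$\{\hat{D}_L, \hat{W}\} = \arg\min_{D_L, W} \; \left\| F_L - D_L W \right\|_F^2 + \lambda \sum_i \left\| w_i \right\|_p, \quad p \in \{0, 1\} \qquad (1)$$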

where d_i^L is an atom in D_L and w_i is a column vector in W. p ∈ {0, 1} is the constraint on the coefficient vector w_i, and λ is a tradeoff parameter. Equation (1) is usually solved by optimizing one variable while keeping the other fixed. In the ILS-DLA case, the least squares method is used to update D_L while W is fixed. Once D_L and W are obtained, we can compute D_H according to the same LS rule:
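The least-squares solution of $\min_{D_H} \|F_H - D_H W\|_F^2$ is

$$D_H = F_H W^{\mathrm{T}} \left( W W^{\mathrm{T}} \right)^{-1} \qquad (2)$$

which is presumably what Eq. (2) states here.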

According to the philosophy of ANR, a mapping matrix can be calculated from the weight matrix and the two dictionaries. It is then used to project LR feature patches onto HR feature patches directly. Thus, the L0/L1 norm constrained optimization problem degenerates to a matter of matrix multiplication.

    3 Proposed approach

3.1 Improved blur kernel estimation

Referring to Fig. 1, we use Y to represent the input LR image, and X the latent HR image. Michaeli and Irani [25] estimated the blur kernel by maximizing the cross-scale NLSS directly, while we minimize the dissimilarity between cross-scale patches. Although these two ideas intuitively look the same, they are slightly different and lead to considerably different performance [16]. While Ref. [16] has introduced this content in detail, we present the key component of the improved blur kernel estimation here for an integrated elaboration. The following objective function reflects the idea of minimizing the dissimilarity between cross-scale patches:
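From the terms described below, Eq. (3) presumably takes the form (a reconstruction, not the authors' verbatim formula)

$$\hat{k} = \arg\min_k \; \sum_{i=1}^{N} \sum_{j} z_{ij} \left\| p_i - R_{ij} k \right\|_2^2 + \eta \left\| C k \right\|_2^2 \qquad (3)$$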

where N is the number of query patches in Y. The matrix R_ij corresponds to the operations of convolving with q_ij and down-sampling by s. C is a matrix used as the penalty on a non-smooth kernel. The second term of Eq. (3) is the kernel prior, and η is the balance parameter trading off the error term against the kernel prior. For the calculation of the weight z_ij, we can find M_i NNs in the down-sampled version Y_s for each small patch p_i (i = 1, 2, ..., N) in the input image Y. The "parent" patches right above the NNs are viewed as the candidate parent patches of p_i. Then the weight z_ij can be calculated as follows:
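Following the Gaussian weighting used for this purpose by Michaeli and Irani [25] (an assumption on our part), Eq. (4) presumably normalizes the per-neighbor fit:

$$z_{ij} = \frac{\exp\left( -\left\| p_i - R_{ij} k \right\|_2^2 / 2\sigma^2 \right)}{\sum_{j=1}^{M_i} \exp\left( -\left\| p_i - R_{ij} k \right\|_2^2 / 2\sigma^2 \right)} \qquad (4)$$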

where M_i is the number of NNs in Y_s of each small patch p_i in Y, and σ is the standard deviation of the noise added on p_i. s is the scale factor (see Fig. 1). Note that we use the same symbol to express the column vector corresponding to a patch here. Setting the gradient of the objective function in Eq. (3) to zero gives the update formula for k:
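Setting the gradient of the reconstructed Eq. (3) to zero gives the ridge-regression update, which is presumably the content of Eq. (5):

$$k = \left( \sum_{i,j} z_{ij} R_{ij}^{\mathrm{T}} R_{ij} + \eta\, C^{\mathrm{T}} C \right)^{-1} \sum_{i,j} z_{ij} R_{ij}^{\mathrm{T}} p_i \qquad (5)$$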

Equation (5) is very similar to the result of Ref. [25], which can be interpreted as maximum a posteriori (MAP) estimation of k. However, there are at least three essential differences with respect to Ref. [25]. Firstly, the motivation is different: Ref. [25] tends to maximize the cross-scale similarity according to NLSS [12], while we minimize the dissimilarity directly according to Ref. [18]. This may not be easy to understand; however, the former leads Michaeli and Irani [25] to form their kernel update formula from a physical analysis and interpretation of the "optimal kernel", while the latter leads us to obtain the kernel update formula by quantifying the cross-scale patch dissimilarity and directly minimizing it via ridge regression [16]. Secondly, selective patch processing measured by the average gradient amplitude |grad| is adopted to improve the result of blind BKE. Finally, the number of NNs of each small patch p_i is not fixed, which provides more flexibility when solving the least squares problem. Accordingly, the termination criterion cannot be the totality of NNs. We use the average patch dissimilarity (APD) as the termination condition of the iteration:

It is worth noting that selective patch processing is used to eliminate the effect of smooth patches on BKE; we selectively employ structured patches to calculate the blur kernel. Specifically, if the average gradient magnitude |grad| of a query patch is smaller than a threshold, we abandon it. Otherwise, we use it to estimate the blur kernel according to Eq. (5). We typically search the entire image according to Ref. [18], but this does not consume too much time because a lot of smooth patches are filtered out.
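A minimal sketch of this selection step follows; the function name, array layout, and default threshold are ours, with the threshold range taken from Section 4.1.

```python
import numpy as np

def select_structured_patches(patches, grad_thresh=20.0):
    """Selective patch processing: keep only structured query patches.

    patches: (N, h, w) stack of grayscale patches. A patch is kept when
    its mean gradient magnitude |grad| exceeds the threshold (the paper
    reports thresholds of roughly 10-30 depending on the image).
    """
    kept = []
    for p in patches:
        gy, gx = np.gradient(p.astype(np.float64))
        if np.mean(np.hypot(gx, gy)) > grad_thresh:
            kept.append(p)
    return np.stack(kept) if kept else np.empty((0,) + patches.shape[1:])
```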

3.2 Feature extraction strategy

There is a data preparation stage before dictionary learning when using sparse representation for the SR task: it is necessary to extract training features from the given input data, because different feature extraction strategies cause very different SR results. The mainstream feature extraction strategies include the raw data of an image patch, the gradients of an image patch in the x and y directions, and the mean-removed patch. We adopt the back-projection residuals (BPR) model presented in Ref. [28] for feature extraction (see Fig. 3).

Firstly, we convolve Y with the estimated kernel k, and down-sample it by s. From the viewpoint of minimizing the cross-scale patch dissimilarity, the estimated blur kernel gives us a more accurate down-sampled version of Y. In order to make the feature extraction more accurate, we consider the enhanced interpolation of Y′, which forms the source of the LR feature space F_L. The enhanced interpolation is the result of an iterative back-projection (IBP) operation [29, 30]:
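The IBP formula is presumably the standard iterative back-projection update (a reconstruction), with Y′ initialized by bicubic interpolation of Y_s and k′ the back-projection filter mentioned in Section 4.1:

$$Y'_{t+1} = Y'_t + k' \ast \left( \left( Y_s - (Y'_t \ast k) \downarrow_s \right) \uparrow_s \right)$$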

Fig. 3 Feature extraction strategy. e(·) represents the enhanced interpolation operation. The down-sampled version Y_s is obtained by convolving with the estimated kernel k̂ and down-sampling by s. The LR feature set consists of normalized gradient feature patches extracted from Y′; the HR feature set is made up of the raw patches extracted from the BPR image Y − Y′.

3.3 SR recovery via ASM

Yang et al. [13] accelerated the SR process in two directions: reducing the number of patches and finding a fast solver for the L1 norm minimization problem. We adopt a similar manner for the first direction, i.e., a selective patch processing (SPP) strategy. However, in order to be consistent with BKE, the criterion for selecting patches is the gradient magnitude |grad| instead of the variance. The second direction Yang et al. headed in is learning a feed-forward neural network model to find an approximate solution to L1 norm sparse encoding. We employ ASM to accelerate the algorithm, similar to Ref. [17]. It requires us to reformulate the L1 norm minimization problem as a least squares regression regularized by the L2 norm of the sparse representation coefficients, and to adopt ridge regression (RR) to relieve the computationally demanding L1 norm optimization. The problem then becomes
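From the description, Eq. (8) is presumably the standard ridge regression (a reconstruction):

$$\hat{w} = \arg\min_w \; \left\| y - D_L w \right\|_2^2 + \mu \left\| w \right\|_2^2 \qquad (8)$$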

where the parameter μ alleviates the ill-posedness of the problem and stabilizes the solution. y corresponds to a testing patch extracted from the enhanced interpolation version of the input image. D_L is the LR dictionary trained by ILS-DLA. The algebraic solution of Eq. (8) is given by setting the gradient of the objective function to zero, which gives:
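The closed-form ridge-regression solution, presumably Eq. (9), is

$$w = \left( D_L^{\mathrm{T}} D_L + \mu I \right)^{-1} D_L^{\mathrm{T}} y \qquad (9)$$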

where I is an identity matrix. Then, the same coefficients are used in the HR feature space to compute the latent HR patches, i.e., x = D_H w. Combined with Eq. (9):
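Substituting Eq. (9) into x = D_H w gives what is presumably Eq. (10):

$$x = D_H \left( D_L^{\mathrm{T}} D_L + \mu I \right)^{-1} D_L^{\mathrm{T}} y = P_M \, y \qquad (10)$$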

where the mapping matrix P_M can be computed offline and D_H is computed by Eq. (2). Equation (10) means an HR feature patch can be obtained by directly multiplying the LR patch with a projection matrix, which reduces the time consumption tremendously in practice. Moreover, the number of feature patches that need to be mapped to HR features via P_M is further reduced due to SPP. Although the optimization problem constrained by the L2 norm usually leads to a more relaxed solution, it still yields very accurate SR results because of the cross-scale BKE.
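The offline computation of P_M and its per-patch use can be sketched in a few lines; this is an illustration under the reconstructed Eqs. (9) and (10), with our own function name and the μ = 0.01 setting from Section 4.1.

```python
import numpy as np

def anchored_space_mapping(D_L, D_H, mu=0.01):
    """Precompute the projection matrix P_M of Eq. (10) offline.

    D_L: (n_L, K) LR dictionary, D_H: (n_H, K) HR dictionary (ILS-DLA).
    """
    K = D_L.shape[1]
    gram = D_L.T @ D_L + mu * np.eye(K)  # D_L^T D_L + mu I
    return D_H @ np.linalg.solve(gram, D_L.T)

# At test time each selected LR feature patch y is mapped to an HR
# feature patch with a single matrix-vector product, x = P_M @ y,
# which is where the radical speed-up over L0/L1 sparse coding comes from.
```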

    4 Experimental results

All the following experiments are performed on the same platform, i.e., a Philips 64-bit PC with 8.0 GB memory, running a single core of an Intel Xeon 2.53 GHz CPU. The core differences between the proposed method and Ref. [16] are the feature extraction and the SR recovery. The former mainly aims at reducing the local projection error and further improving the quality of the training samples. The latter is primarily used to accelerate the reconstruction of the latent HR image.

4.1 Experiment settings

We typically perform ×2 and ×3 SR in our experiments on blind BKE. The parameter settings in the BKE stage are partially the same as in Refs. [16] and [25]: when the scale factor s = 2, the sizes of the small query patches p_i and the candidate NN patches are typically set to 5×5, while the sizes of the "parent" patches q_ij are set to 9×9 and 11×11; when performing ×3 SR, the query patches and candidate patches do not change size but the "parent" patches are set to 13×13. The noise standard deviation σ is assumed to be 5. The parameter η in Eq. (3) is set to 0.25, and the matrix C is chosen to be the derivative matrix corresponding to the x and y directions of the "parent" patches. The threshold on the gradient magnitude |grad| for selecting query patches varies from 10 to 30 according to the image. In the feature extraction process, the enhanced interpolation starts with bicubic interpolation, the down-sampling operation is performed by convolving with the estimated blur kernel k, and the back-projection filter k′ is set to be a Gaussian kernel with the same size as k. The tradeoff parameter μ in Eq. (8) is set to 0.01 and the number of iterations for dictionary training is 20.

4.2 Analysis of the metric in blind BKE

Comparisons for blind BKE usually cover the accuracy of the estimated kernels and the efficiency; both have been presented in detail in our previous work [16]. Here we intend to analyze the impact on blind BKE of discriminating the query patches, instead of simply comparing final results with some related works. Repeated conclusions are omitted. We collected patches from three natural image sets (Set2, Set5, and Set14) and found that the values of |grad| and variance mostly fall into the range [0, 100]. So the entire statistical range is set to [0, 100] and the statistical interval for |grad| and variance is typically set to 10.

We densely sampled the 500×400 "baboon" image and got 236,096 query patches, and 235,928 patches from the 540×300 "high-street" image. It is distinctly observed in Fig. 4 that the statistical characteristics of |grad| and variance are very similar to each other. The query patches with threshold ≤ 30 account for the largest proportion for both |grad| and variance, and we get similar conclusions from other images. However, the relative relation between them reverses around 30 (the value may differ between images, but the reversal definitely exists). This is an intuitive presentation of why we adopt |grad| instead of variance as the metric for selecting patches, based on the philosophy of dropping as many useless smooth patches as possible while keeping as many structured patches as possible. More systematic theoretical explanations can be found in Ref. [18].

Moreover, the performance of blind BKE is obviously affected by the threshold on |grad|. The optimal kernel is pinned beside the threshold in Fig. 4. We can see that the kernel estimated by our method does not get arbitrarily close to the ground-truth one as the threshold increases, because the useful structured patches are reduced as well. Usually, the kernel estimated at the threshold of the "turning point" is closest to the ground truth. When the threshold is set to 0, the method actually degenerates to the algorithm of Ref. [25], which does not give the best result in most instances. In the general case, the quality of recovery declines as the |grad| threshold increases, as in the second illustration in Fig. 4. But there indeed exist special cases, like the first illustration in Fig. 4, where the PSNRs of the recovered images first rise and then fall as the threshold increases.

Fig. 4 Comparisons between statistical characteristics and the effect of the threshold on estimated kernels. The testing images are "baboon" from Set14 and "high-street" from Set2, and the blur kernels are a Gaussian with hsize = 5 and sigma = 1.0 (9×9) and a motion kernel with length = 5, theta = 135 (11×11) respectively. We only display estimated kernels for thresholds on |grad| ≤ 50.

4.3 Comparisons for SR recovery

Compared with several recently proposed methods (such as A+ANR [27] and SRCNN [31]), the reconstruction efficiency of our method is sometimes slightly lower but almost of the same magnitude. Due to the anchored space mapping, the proposed method is accelerated substantially with respect to some typical sparse representation algorithms like Refs. [11] and [32]. Table 1 and Table 2 present quantitative comparisons using PSNRs and SR recovery times to compare the objective index and SR efficiency. Four recently proposed methods, including Ref. [32], A+ANR (adjusted anchored neighborhood regression) [27], SRCNN (super-resolution convolutional neural network) [31], and JOR (jointly optimized regressors) [33], are picked out as representative non-blind methods (presented in Table 1), and three blind methods, NBS (nonparametric blind super-resolution) [25], SAR-NBS (simple, accurate, and robust nonparametric blind super-resolution) [34], and Ref. [16], are listed in Table 2 together with the proposed method. It is worth noting that PSNR needs reference images as a baseline. Because the input images are blurred by different blurring kernels, the observed data is seriously degenerated and non-blind methods usually give very bad results in this case; we therefore referenced the recovered images to the blurred input images. The average PSNRs and running times are collected (×2 and ×3) over four image sets (Set2, Set5, Set14, and B100). Besides, we set the threshold on |grad| adaptively, around the "turning point", for the best BKE estimation instead of pinning it to a fixed number (e.g., 10 in Ref. [16]). The methods listed in Table 1 and Table 2 are identical to the methods presented in Figs. 5-8.

As shown in Table 1 and Table 2, the proposed algorithm obtains higher objective scores than the other blind and non-blind algorithms in both the s=2 and s=3 cases. For fairness, the time of preparing data and training dictionaries is excluded for all methods. First, the four non-blind methods in Table 1 fail to recover real images when the inputs are seriously degraded, even though some of them are very fast. Moreover, neither the accuracy nor the efficiency of kernel estimation via Michaeli et al. [25] is high enough, as illustrated in Ref. [16], and the SR recovery performed by Ref. [11] is very time-consuming. Although Ref. [16] executes the same BKE process as ours (with a fixed threshold on |grad|), its SR reconstruction with SPP is still essentially inefficient. The proposed method adopts an adaptive |grad| threshold to improve the quality of BKE, and the enhanced interpolation on input images further reduces the mapping errors introduced by the estimated kernels. On the other hand, ASM fundamentally increases the speed of the algorithm, mainly due to the adjustment of the objective function and the constraint conversion from the L0/L1 norm to the L2 norm. Actually, the improvement of our method is reflected not only in the SR recovery stage, but also in BKE (through SPP) and dictionary training (through ILS-DLA), which are usually not the main concern of most researchers. These preprocessing procedures nevertheless make a substantial difference when large amounts of data need to be processed.
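
The speedup from the L0/L1-to-L2 conversion can be illustrated with the standard anchored-regression formulation: each anchor atom caches a closed-form ridge projection, so recovery is a nearest-anchor lookup plus one matrix multiply. This is a sketch under the assumption that ASM follows the ANR-style closed form of Ref. [17]; the matrix names are ours.

```python
import numpy as np

def precompute_projections(neighborhoods_lr, neighborhoods_hr, lam=0.1):
    # For each anchor atom, cache the ridge-regression projection
    # P = Nh (Nl^T Nl + lam*I)^(-1) Nl^T, mapping an LR feature straight
    # to an HR patch with a single matrix multiply at test time.
    projections = []
    for Nl, Nh in zip(neighborhoods_lr, neighborhoods_hr):
        gram = Nl.T @ Nl + lam * np.eye(Nl.shape[1])
        projections.append(Nh @ np.linalg.solve(gram, Nl.T))
    return projections

def map_patch(feature_lr, anchors_lr, projections):
    # Pick the anchor most correlated with the LR feature, then apply its
    # cached L2 projection -- no L1 sparse coding is needed at test time.
    idx = int(np.argmax(anchors_lr.T @ feature_lr))
    return projections[idx] @ feature_lr
```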

Table 1 Performance of several non-blind methods (without estimating the blur kernel)

Figures 5-8 show visual comparisons between several typical SR algorithms and our method. For layout purposes, all images are reduced in size when inserted in the paper. Note again that the input images of all algorithms are obtained by blurring the reference images with different blur kernels; that is, the input image data is deliberately of low quality in our experiments to simulate many practical application scenarios. Although the non-blind SR algorithms in these illustrations are known to be effective for many SR tasks, they fail to offset the blurring in the testing images without a more precise blur kernel. There are also significant differences in the estimated kernels and reconstruction results among the blind algorithms. The BKE process of Ref. [25] is in fact close to our method when the threshold on |grad| is 0 and the iteration criterion is MSE. Shao et al. [34] solve the blur kernel and HR image simultaneously by minimizing a bi-L0-L2-norm regularized optimization problem with respect to both an intermediate super-resolved image and a blur kernel, which is extraordinarily time-consuming (and therefore not shown in Table 1). More importantly, the fitting problem caused by useless smooth patches still exists in these methods. Although the idea of our method is simple, it avoids the fitting problem by reasonably dropping smooth query patches according to the internal statistics of a single natural image.

Figures 9 and 10 present SR recovery results on two other real images (“fence” and “building”), which were captured by our cell phone with slight shake and motion. It is easily noticed that all blind methods produce better results than the non-blind methods, which cannot even compensate for the basic motion deviation. Comparing Fig. 9(f)-Fig. 9(i) and Fig. 10(f)-Fig. 10(i), we can see visible differences in the estimated kernels and recovered results produced by the different blind methods. In particular, SAR-NBS tends to over-sharpen high-frequency regions and causes obvious distortion in the final images. The results of Zhao et al. [16] look more realistic, but the reconstruction accuracy is not as high as that of our approach.

Fig. 5 Visual comparisons of SR recovery with the low-quality “butterfly” image from Set5 (×2). The ground-truth kernel is a 9×9 Gaussian kernel with hsize=5 and sigma=1.25, and the threshold on |grad| is 18.

    5 Conclusions

In this paper we proposed a novel single image SR framework aiming at improving SR quality while reducing time consumption. The proposed algorithm consists mainly of blind blur kernel estimation and SR recovery. The former is based on minimizing the dissimilarity of cross-scale image patches, which leads to a kernel update formula obtained by quantifying cross-scale patch dissimilarity and minimizing it directly with the least squares method. The reduction of SR time relies mainly on an ASM process with LR/HR dictionaries trained by the ILS-DLA algorithm, together with a selective patch processing strategy measured by |grad|. The SR quality is thus mainly guaranteed by improving the quality of the training samples, and the efficiency of SR recovery is mainly guaranteed by anchored space mapping and selective patch processing, which improve time performance by reducing the number of query patches and translating the L1-norm constrained optimization problem into an L2-norm constrained anchor mapping process. Under equal conditions, all of the above make our SR algorithm achieve better results than several outstanding blind and non-blind SR approaches, at a much higher speed.

Fig. 6 Visual comparisons of SR recovery with the low-quality “high-street” image from Set2 (×3). The ground-truth kernel is a 13×13 motion kernel with len=5 and theta=45, and the threshold on |grad| is 24.

    Acknowledgements

We would like to thank the authors of Ref. [34], Mr. Michael Elad and Mr. Wen-Ze Shao, for their kind help in running their blind SR method [34], which enabled an effective comparison with their method. This work is partially supported by the National Natural Science Foundation of China (Grant No. 61303127), the Western Light Talent Culture Project of the Chinese Academy of Sciences (Grant No. 13ZS0106), the Project of the Science and Technology Department of Sichuan Province (Grant Nos. 2014SZ0223 and 2015GZ0212), the Key Program of the Education Department of Sichuan Province (Grant Nos. 11ZA130 and 13ZA0169), and the Innovation Funds of Southwest University of Science and Technology (Grant No. 15ycx053).

    References

[1] Freedman, G.; Fattal, R. Image and video upscaling from local self-examples. ACM Transactions on Graphics Vol. 30, No. 2, Article No. 12, 2011.

[2] Babacan, S. D.; Molina, R.; Katsaggelos, A. K. Parameter estimation in TV image restoration using variational distribution approximation. IEEE Transactions on Image Processing Vol. 17, No. 3, 326-339, 2008.

Fig. 7 Visual comparisons of SR recovery with the low-quality “zebra” image from Set14 (×3). The ground-truth kernel is a 13×13 Gaussian kernel with hsize=5 and sigma=1.25, and the threshold on |grad| is 16.

[3] Babacan, S. D.; Molina, R.; Katsaggelos, A. K. Total variation super resolution using a variational approach. In: Proceedings of the 15th IEEE International Conference on Image Processing, 641-644, 2008.

[4] Babacan, S. D.; Molina, R.; Katsaggelos, A. K. Variational Bayesian super resolution. IEEE Transactions on Image Processing Vol. 20, No. 4, 984-999, 2011.

[5] Sun, J.; Xu, Z.; Shum, H.-Y. Image super-resolution using gradient profile prior. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1-8, 2008.

[6] Freeman, W. T.; Jones, T. R.; Pasztor, E. C. Example-based super-resolution. IEEE Computer Graphics and Applications Vol. 22, No. 2, 56-65, 2002.

[7] Sun, J.; Zheng, N.-N.; Tao, H.; Shum, H.-Y. Image hallucination with primal sketch priors. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 2, 729-736, 2003.

[8] Chang, H.; Yeung, D. Y.; Xiong, Y. Super-resolution through neighbor embedding. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 275-282, 2004.

[9] Roweis, S. T.; Saul, L. K. Nonlinear dimensionality reduction by locally linear embedding. Science Vol. 290, No. 5500, 2323-2326, 2000.

[10] Bevilacqua, M.; Roumy, A.; Guillemot, C.; Morel, M.-L. A. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In: Proceedings of the 23rd British Machine Vision Conference, 135.1-135.10, 2012.

[11] Yang, J.; Wright, J.; Huang, T.; Ma, Y. Image super-resolution as sparse representation of raw image patches. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1-8, 2008.

[12] Glasner, D.; Bagon, S.; Irani, M. Super-resolution from a single image. In: Proceedings of IEEE 12th International Conference on Computer Vision, 349-356, 2009.

[13] Yang, J.; Wang, Z.; Lin, Z.; Cohen, S.; Huang, T. Coupled dictionary training for image super-resolution. IEEE Transactions on Image Processing Vol. 21, No. 8, 3467-3478, 2012.

[14] Yang, J.; Wright, J.; Huang, T.; Ma, Y. Image super-resolution via sparse representation. IEEE Transactions on Image Processing Vol. 19, No. 11, 2861-2873, 2010.

[15] He, L.; Qi, H.; Zaretzki, R. Beta process joint dictionary learning for coupled feature spaces with application to single image super-resolution. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 345-352, 2013.

[16] Zhao, X.; Wu, Y.; Tian, J.; Zhang, H. Single image super-resolution via blind blurring estimation and dictionary learning. In: Communications in Computer and Information Science, Vol. 546. Zha, H.; Chen, X.; Wang, L.; Miao, Q. Eds. Springer Berlin Heidelberg, 22-33, 2015.

[17] Timofte, R.; De Smet, V.; Van Gool, L. Anchored neighborhood regression for fast example-based super-resolution. In: Proceedings of IEEE International Conference on Computer Vision, 1920-1927, 2013.

[18] Zontak, M.; Irani, M. Internal statistics of a single natural image. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 977-984, 2011.

[19] Yang, C.-Y.; Huang, J.-B.; Yang, M.-H. Exploiting self-similarities for single frame super-resolution. In: Lecture Notes in Computer Science, Vol. 6594. Kimmel, R.; Klette, R.; Sugimoto, A. Eds. Springer Berlin Heidelberg, 497-510, 2010.

[20] Zoran, D.; Weiss, Y. From learning models of natural image patches to whole image restoration. In: Proceedings of IEEE International Conference on Computer Vision, 479-486, 2011.

[21] Hu, J.; Luo, Y. Single-image superresolution based on local regression and nonlocal self-similarity. Journal of Electronic Imaging Vol. 23, No. 3, 033014, 2014.

Fig. 8 Visual comparisons of SR recovery with the low-quality “tower” image from B100 (×2). The ground-truth kernel is an 11×11 motion kernel with len=5 and theta=45, and the threshold on |grad| is 17.

Fig. 9 Visual comparisons of SR recovery with the real low-quality image “fence”, captured with slight shake (×2). The threshold on |grad| is 21.

[22] Zhang, Y.; Liu, J.; Yang, S.; Guo, Z. Joint image denoising using self-similarity based low-rank approximations. In: Proceedings of Visual Communications and Image Processing, 1-6, 2013.

[23] Michaeli, T.; Irani, M. Blind deblurring using internal patch recurrence. In: Lecture Notes in Computer Science, Vol. 8691. Fleet, D.; Pajdla, T.; Schiele, B.; Tuytelaars, T. Eds. Springer International Publishing, 783-798, 2014.

[24] Guillemot, C.; Le Meur, O. Image inpainting: Overview and recent advances. IEEE Signal Processing Magazine Vol. 31, No. 1, 127-144, 2014.

[25] Michaeli, T.; Irani, M. Nonparametric blind super-resolution. In: Proceedings of IEEE International Conference on Computer Vision, 945-952, 2013.

[26] Engan, K.; Skretting, K.; Husøy, J. H. Family of iterative LS-based dictionary learning algorithms, ILS-DLA, for sparse signal representation. Digital Signal Processing Vol. 17, No. 1, 32-49, 2007.

[27] Timofte, R.; De Smet, V.; Van Gool, L. A+: Adjusted anchored neighborhood regression for fast super-resolution. In: Lecture Notes in Computer Science, Vol. 9006. Cremers, D.; Reid, I.; Saito, H.; Yang, M.-H. Eds. Springer International Publishing, 111-126, 2014.

[28] Bevilacqua, M.; Roumy, A.; Guillemot, C.; Morel, M.-L. A. Super-resolution using neighbor embedding of back-projection residuals. In: Proceedings of the 18th International Conference on Digital Signal Processing, 1-8, 2013.

[29] Irani, M.; Peleg, S. Motion analysis for image enhancement: Resolution, occlusion, and transparency. Journal of Visual Communication and Image Representation Vol. 4, No. 4, 324-335, 1993.

[30] Irani, M.; Peleg, S. Improving resolution by image registration. CVGIP: Graphical Models and Image Processing Vol. 53, No. 3, 231-239, 1991.

Fig. 10 Visual comparisons of SR recovery with the real low-quality image “building”, captured with slight motion (×2). The threshold on |grad| is 26.

[31] Dong, C.; Chen, C. L.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In: Lecture Notes in Computer Science, Vol. 8692. Fleet, D.; Pajdla, T.; Schiele, B.; Tuytelaars, T. Eds. Springer International Publishing, 184-199, 2014.

[32] Zeyde, R.; Elad, M.; Protter, M. On single image scale-up using sparse-representations. In: Lecture Notes in Computer Science, Vol. 6920. Boissonnat, J.-D.; Chenin, P.; Cohen, A. et al. Eds. Springer Berlin Heidelberg, 711-730, 2010.

[33] Dai, D.; Timofte, R.; Van Gool, L. Jointly optimized regressors for image super-resolution. Computer Graphics Forum Vol. 34, No. 2, 95-104, 2015.

[34] Shao, W.-Z.; Elad, M. Simple, accurate, and robust nonparametric blind super-resolution. In: Lecture Notes in Computer Science, Vol. 9219. Zhang, Y.-J. Ed. Springer International Publishing, 333-348, 2015.

Xiaole Zhao received his B.S. degree in computer science from the School of Computer Science and Technology, Southwest University of Science and Technology (SWUST), China, in 2013. He is now studying in the School of Computer Science and Technology, SWUST, for his master degree. His main research interests include digital image processing, machine learning, and data mining.

Yadong Wu is now a full professor with the School of Computer Science and Technology, Southwest University of Science and Technology (SWUST), China. He received his B.S. degree in computer science from Zhengzhou University, China, in 2000, and M.S. degree in control theory and control engineering from SWUST in 2003. He got his Ph.D. degree in computer application from the University of Electronic Science and Technology of China. His research interests include image processing and visualization.

Jinsha Tian received her B.S. degree from Hebei University of Science and Technology, China, in 2013. She is now studying in the School of Computer Science and Technology, Southwest University of Science and Technology (SWUST), China, for her master degree. Her main research interests include digital image processing and machine learning.

Hongying Zhang is now a full professor with the School of Information Engineering, Southwest University of Science and Technology (SWUST), China. She received her B.S. degree in applied mathematics from Northeast University, China, in 2000, and M.S. degree in control theory and control engineering from SWUST in 2003. She got her Ph.D. degree in signal and information processing from the University of Electronic Science and Technology of China, in 2006. Her research interests include image processing and biometric recognition.

Open Access  The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
