
    Comfort-driven disparity adjustment for stereoscopic video

    Computational Visual Media, 2016, Issue 1


    Research Article

    Comfort-driven disparity adjustment for stereoscopic video

    Miao Wang1 (), Xi-Jin Zhang1, Jun-Bang Liang1, Song-Hai Zhang1, and Ralph R. Martin2
    © The Author(s) 2016. This article is published with open access at Springerlink.com

    1 Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China. E-mail: M. Wang, wangmiao11@mails.tsinghua.edu.cn (); X.-J. Zhang, zhangxijin91@gmail.com; J.-B. Liang, williamm2006@126.com; S.-H. Zhang, shz@tsinghua.edu.cn.

    2 Cardiff University, Cardiff, CF24 3AA, UK. E-mail: ralph@cs.cf.ac.uk.

    Manuscript received: 2015-12-02; accepted: 2015-12-09

    Abstract Pixel disparity, the offset of corresponding pixels between left and right views, is a crucial parameter in stereoscopic three-dimensional (S3D) video, as it determines the depth perceived by the human visual system (HVS). An unsuitable pixel disparity distribution throughout an S3D video may lead to visual discomfort. We present a unified and extensible stereoscopic video disparity adjustment framework which improves the viewing experience for an S3D video by keeping the perceived 3D appearance as unchanged as possible while minimizing discomfort. We first analyse disparity and motion attributes of S3D video in general, then derive a wide-ranging visual discomfort metric from existing perceptual comfort models. An objective function based on this metric is used as the basis of a hierarchical optimisation method to find a disparity mapping function for each input video frame. Warping-based disparity manipulation is then applied to the input video to generate the output video, using the desired disparity mappings as constraints. Our comfort metric takes into account disparity range, motion, and stereoscopic window violation; the framework could easily be extended to use further visual comfort models. We demonstrate the power of our approach using both animated cartoons and real S3D videos.

    Keywords stereoscopic video editing; video enhancement; perceptual visual computing; video manipulation

    1 Introduction

    With the recent worldwide increase in stereoscopic display hardware, there has been great interest in both academia and industry in stereoscopic three-dimensional (S3D) movie production, for instance, glasses-free multi-view display technology [1, 2] and perceptual disparity models [3, 4]. Viewing the 3D world through a display screen differs from natural viewing: it introduces vergence-accommodation conflict [5, 6]. As a result, poor scene design in S3D movies can lead to visual fatigue. In addition to vergence-accommodation conflict, other factors such as motion and luminance also affect the human visual system (HVS), and may make the viewer feel uncomfortable. Most of these factors have a close relationship to binocular disparity, the difference in an object's location on the left and right retinas [7]. The brain uses binocular disparity to extract depth information via a process of stereopsis.

    The goal of making a movie stereoscopic is to add realism by providing a feeling of depth, but care must be taken to avoid visual discomfort. It is thus a tedious process to tune the perceptual depth of S3D videos during shooting, even for professionals with years of experience [8]. Existing S3D video post-processing technology [9, 10] helps to manipulate the original disparity of S3D images and videos. Given the desired disparity mapping, these methods manipulate the original disparity to meet the requirements. Unfortunately, such approaches require manually input disparity targets or manipulation operators for guidance. A general, content-driven solution for ensuring the comfort of S3D video is still lacking.

    In this paper, we provide an automatic solution to the disparity tuning problem using a unified and extensible comfort-driven framework. Unlike previous works that focus on user-guided S3D video disparity retargeting [9, 10], we automatically manipulate the disparity of an original S3D video to improve visual comfort while maintaining satisfactory parts of the original content whenever possible. The challenge of this problem is to build a bridge between S3D visual comfort and the automatic manipulation of video content. By taking advantage of existing S3D visual comfort models, we derive a general discomfort metric which we use to evaluate and predict the discomfort level. We build on this metric to define an objective function for use in optimising disparity mapping functions. Our metric may be further extended if needed, to incorporate further visual comfort models. We optimise the mappings over the whole video, using a hierarchical solution based on a genetic algorithm. The output video is generated by applying the disparity mappings to the original video using a warping-based technology (Fig. 1). To our knowledge, our framework is the first system which can automatically improve visual comfort by means of comfort-driven disparity adjustment.

    The major contributions of our work are thus:

    · A unified S3D video post-processing framework that automatically reduces visual discomfort by disparity adjustment.

    · A discomfort metric that combines several key visual comfort models; it could easily be extended to incorporate others too if desired. It provides a basis for an objective function used to optimise disparity.

    · A hierarchical optimisation method for computing a disparity mapping for each video frame.

    Fig. 1 Inputs and outputs: given an input stereoscopic 3D video (sample frames (a) and (c)), our framework automatically determines comfort-driven disparity mappings (b) and (d) for every frame. Output video frames (e) and (g) are produced by applying these mappings to the input video frames, improving visual comfort. (f) and (h) show close-ups of frames before and after manipulation (© Blender Foundation).

    2 Related work

    Causes of visual discomfort experienced when watching S3D movies have been investigated, with a view to improving such movies. Mendiburu [8] qualitatively determined various factors, such as excessive depth and discontinuous depth changes, that contribute to visual fatigue. Liu et al. [11] summarized several principles, and applied them to photo slideshows and video stabilization.

    Various mathematical models have also been proposed to quantitatively evaluate discomfort experienced by the HVS. Besides viewing configurations such as viewing distance [12], time [13], and display screen type, effects particular to stereoscopic content have also been widely investigated [3, 4, 14-18], which we now consider in turn.

    Vergence-accommodation conflict is widely accepted to be a key factor in visual discomfort. These ideas may be used to quantitatively determine a comfort zone within which little discomfort arises [12]. Stereoscopic fusion disparity range is modeled in Ref. [19], based on viewing distance and display sizes. Didyk et al. [3] modeled perceptual disparity based on experiments with sinusoidal stimuli; the ideas can be used to produce backward-compatible stereo and personalized stereo. This work was later extended to incorporate the influence of luminance contrast [4]. Our metric includes a disparity range term, based on the comfort zone model in Ref. [12]. It allows us to decide whether the disparity of a given point lies within the comfort zone.

    Motion is another important factor in perceptual discomfort [14, 15, 20]. In Ref. [20], the contribution of the velocity of moving objects to visual discomfort is considered. Jung et al. [15] gave a visual comfort metric based on salient object motion. Cho and Kang [13] conducted experiments measuring visual discomfort for various combinations of disparity, viewing time, and motion-in-depth. Du et al. [14] proposed a comfort metric for motion which takes into account the interaction of motion components in multiple directions, and depths. Such literature shows that visual comfort is improved when objects move at lower velocities or lie closer to the screen plane. Movements perpendicular to the screen (along the z-axis) play a more powerful role in comfort than movements parallel to the screen plane (the x-y plane).

    Abrupt depth changes at scene discontinuities may also induce discomfort: for instance, a sudden jump from a shot focused in the distance to an extreme close-up can be disorienting. Disparity-response time models [17, 18] have been determined by a series of user-experience experiments. To reduce discomfort caused by depth changes, depths in shots should change smoothly.

    Stereoscopic window violation describes a situation in which an object with negative disparity (in front of the screen) touches the left or right screen boundary. Part of the object may be seen by one eye but hidden from the other, leading to confusion by the viewer as to the object's actual position; this too causes fatigue [21]. Yet further factors are discussed in a recent survey [22]. As our approach provides a post-processing tool, we consider factors related to scene layout rather than camera parameters. These factors are disparity range, motion, stereoscopic window violation, and depth continuity; they are meant to cover the major causes of discomfort, but our approach could be extended to include others too.

    Use of post-processing technology has increased in recent years, helping amateurs to create S3D content and directors to improve S3D movie appearance. Lo et al. [23] showed how to perform copy & paste for S3D, to create new stereoscopic photos from old ones; constraints must be carefully chosen. Later, Tong et al. [24] extended this work to allow pasting of 2D images into stereoscopic images. Kim et al. [25] provided a method to create S3D line drawings from 3D shapes. Niu et al. [26] gave a warping-based method for stereoscopic image retargeting. Lang et al. [10] provided a disparity manipulation framework which applies desired disparity mapping operators to the original video using image warping. Kellnhofer et al. [9] optimised the depth trajectories of objects in an S3D video, providing smoother motion. Kim et al. [27] computed multi-perspective stereoscopic images from a light field, meeting users' artistic control requirements. Masia et al. [28] proposed a light field retargeting method that preserves perceptual depth on a variety of display types. Koppal et al. [29] provided an editor for interactively tuning camera and viewing parameters. Manually tuned camera parameters are applied to the video; the results are then immediately fed back to the user. However, there is presently a gap between mathematical comfort models and post-processing applications: few technologies automatically work in a comfort-driven manner.

    In a similar vein to our work, the OSCAM system [30] automatically optimises the camera convergence and interaxial separation to ensure that 3D scene contents are within a comfortable depth range. However, this work is limited to processing virtual scenes with known camera settings. Tseng et al. [31] automatically optimised parameters of S3D cameras, taking into account the depth range and stereoscopic window violation. The major differences between their work and ours are, firstly, that they optimise the camera separation and convergence, while our system automatically generates an output video with a better viewing experience. Secondly, their objective functions are derived from either a simple depth range or a few general principles, while ours relies on mathematical models. We build upon existing S3D post-processing approaches, especially warping-based ones, to build a bridge between comfort models and a practical tool.

    3 Overview

    In this section,we explain our notation,and then sketch our proposed framework.

    We adapt the measure of binocular disparity from Ref. [14], expressed as angular disparity. Assuming the viewer focuses on the display screen with a vergence angle θ′, the angular disparity at a 3D point P with a vergence angle θ is measured as the difference of vergence angles θ′ − θ (see Fig. 2(a)). We also use the concept of pixel disparity in the metric and disparity mapping optimisation. The pixel disparity of a feature point f_L in the left view L is defined as an integer offset f_R − f_L, where f_R is the corresponding feature location in the right view R (see Fig. 2(b)). Given these definitions, both the angular disparity and pixel disparity are negative when the 3D point P is in front of the screen and positive when it is behind the screen. A disparity mapping is a function φ(d) that, given an input disparity value d, returns a new output disparity value d′. In this paper, φ is represented in discrete form: given a set of τ different disparity values D_in = {d_1, ..., d_τ} and a corresponding set of output disparity values D_out = {d′_1, ..., d′_τ}, we regard φ: D_in → D_out as a disparity mapping, where d′_i = φ(d_i).
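As a concrete illustration (our own sketch, not the authors' code), a discrete disparity mapping can be represented as a lookup table from input to output pixel disparity values; the names make_mapping and apply_mapping are ours:

```python
# Illustrative sketch of a discrete disparity mapping phi: D_in -> D_out,
# where d'_i = phi(d_i). Not the authors' implementation.

def make_mapping(d_in, d_out):
    """Build phi as a dict pairing each input disparity with its output."""
    assert len(d_in) == len(d_out)
    return dict(zip(d_in, d_out))

def apply_mapping(phi, disparity):
    """Look up the new disparity for an input pixel disparity value."""
    return phi[disparity]

# Example: compress a [-4, 4] pixel-disparity range by half (non-decreasing,
# as required later in Section 5 to avoid depth reversal).
d_in = list(range(-4, 5))
phi = make_mapping(d_in, [d // 2 for d in d_in])
```

Applying such a mapping per disparity value is all the warping stage (Section 5.3) needs as a constraint.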

    Fig. 2 Definitions: (a) angular disparity and (b) pixel disparity.

    As explained in Section 1, our comfort-driven disparity mapping framework automatically adjusts the disparity in an S3D video to improve visual comfort. Given an input S3D video to be optimised, we first evaluate the discomfort level of every frame using the proposed metric, then determine time intervals which cause discomfort, and key frames inside each interval, throughout the video (see Section 4). Next, based on the key frames, we optimise a disparity mapping φ for every frame using a hierarchical optimisation method (see Section 5), with an objective function derived from the discomfort metric. Finally, the mappings are applied to the original video by video warping. The pipeline is illustrated in Fig. 3.

    Fig. 3 Pipeline. The input of our system is a stereoscopic 3D video. The discomfort level of every frame is evaluated using the proposed metric. Discomfort intervals, and key frames inside each interval, are determined. A disparity mapping for every frame is optimised, based on the key frames, using a hierarchical optimisation method. Finally, the output video is generated by applying the mappings to the original video by warping.

    4 Discomfort metric

    An objective function measuring discomfort level is essential for automatic S3D video comfort optimisation. In this section, we present a general discomfort metric which is used to determine the objective function for disparity mapping optimisation. The metric takes into account disparity range, motion, stereoscopic window violation, and temporal smoothness, all of which have been shown to have a major impact on the HVS. Each factor is formulated as a cost function. The temporal smoothness term relates pairs of successive frames (so is a binary term) while the others depend only on one frame (so are unary terms). The wide-ranging nature of this metric enables us to evaluate the discomfort level in the round. The disparity range term measures the severity of vergence-accommodation conflict. The motion term evaluates discomfort brought about by eye movements. Retinal rivalry arises from inconsistent screen boundary occlusions, and is assessed by the stereoscopic window violation term. Flickering resulting from temporal inconsistency is evaluated by the temporal smoothness term. We now discuss each term individually and then explain how they are combined.

    4.1 Individual terms

    Disparity range. Excessive disparity leads to strong adverse reactions in the HVS due to vergence-accommodation conflict [5, 6]. To reduce the resulting discomfort, one intuitive approach is to compress the disparity range, but severe compression makes an S3D video appear flat, and ultimately imperceptibly different from 2D. Instead, we evaluate how far each 3D point is from the screen plane and penalize points that fall outside the comfort zone. In Ref. [12], the far and near comfort zone boundaries B_far and B_near are introduced. In terms of angular disparity, these may be written as

    where the constants in their model are m_far = 1.129, m_near = 1.035, T_far = 0.442, and T_near = −0.626. d_a is the angular disparity (in degrees) of a pixel and d_f is the viewing distance (in metres), which, in our viewing configuration, is set to 0.55 m.

    In this formulation, whether the angular disparity d_a(p) of a pixel p is within the comfort zone range is determined by

    The fraction of pixels in frame f whose disparity is outside the comfort zone is computed, and used to define the disparity range penalty term E_d(f) for frame f:

    where N is the number of pixels in frame f. Figure 4 shows examples where disparities of certain pixels lie beyond the comfort zone.
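The disparity range term reduces to counting pixels outside a comfort interval. A minimal sketch (ours, not the authors' code; the bounds b_near and b_far stand in for the comfort zone boundaries of Ref. [12], whose formula we do not reproduce here):

```python
import numpy as np

# Sketch of E_d(f): the fraction of pixels whose angular disparity falls
# outside the comfort zone [b_near, b_far]. The bounds are placeholders for
# B_near and B_far from the model of Ref. [12].

def disparity_range_term(angular_disparity, b_near, b_far):
    """E_d(f) = (# pixels outside [b_near, b_far]) / N."""
    d = np.asarray(angular_disparity, dtype=float)
    outside = (d < b_near) | (d > b_far)
    return outside.sum() / d.size

# Toy 2x2 frame of angular disparities (degrees); two of four pixels
# lie outside the assumed comfort zone [-1, 1].
frame = np.array([[-1.5, 0.0], [0.3, 2.0]])
e_d = disparity_range_term(frame, -1.0, 1.0)
```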

    Motion is an important source of visual discomfort [15, 20]. In Ref. [14], a novel visual comfort metric for S3D motion is proposed. This metric is a function of both the combination of velocity and depth, and luminance frequency. It returns a comfort value from 1 to 5 (the higher, the more comfortable). We adopt this model in our metric and assign to every video frame a motion discomfort value. We first assign a motion discomfort value V_c(p) = ω_n(5 − M_p(p)) to every pixel p, where ω_n is a coefficient normalising V_c(p) to [0, 1), set to 0.25. M_p(p) is the pixel-wise motion comfort value calculated as in Ref. [14], using a model of motion comfort based on planar velocity v_xy, spatial velocity v_z, angular disparity d, and luminance frequency. L_k(p) is the contrast value of the (2^(k+1)+1)-neighborhood at p at the k-th Laplacian level of the Laplacian pyramid of the luminance; see Ref. [14] for further details.

    Fig. 4 Comfort zone. Left: anaglyph 3D images. Right: disparities beyond the comfort zone shown in blue.

    After computing a discomfort value for every pixel, we determine the motion discomfort for the whole frame. In Ref. [14], average motion comfort values are calculated for individual saliency-based segments [32], assigning an importance value to every segment. The segments are obtained by graph-based segmentation [33]. They assumed that the most uncomfortable region in a frame dictates the discomfort of the whole frame. However, we find that calculating the most salient and uncomfortable region in separate images without considering temporal coherence can lead to motion comfort instability. Instead, we modify their approach to perform SLIC superpixel segmentation [34], consider multiple discomfort-causing segments, and regard every segment as having the same importance. We take the average of the top-K (K = 20 by default) segment discomfort values as the motion penalty. The motion discomfort E_m(f) for the whole frame f is
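The top-K segment averaging can be sketched as follows (our illustration; the per-pixel discomfort values V_c(p) and the superpixel labels are assumed given by the model of Ref. [14] and SLIC segmentation respectively):

```python
import numpy as np

# Sketch of the motion term: average per-pixel motion discomfort within each
# superpixel segment, then take the mean of the K largest segment averages
# as the frame's motion penalty.

def motion_term(v_c, labels, k=20):
    """Mean of the K largest per-segment average discomfort values."""
    v_c, labels = np.ravel(v_c), np.ravel(labels)
    seg_means = [v_c[labels == s].mean() for s in np.unique(labels)]
    top_k = sorted(seg_means, reverse=True)[:k]
    return float(np.mean(top_k))

# Toy example: six pixels in three segments; with k=2 the penalty is the
# mean of the two most uncomfortable segments.
v_c = np.array([0.8, 0.8, 0.2, 0.2, 0.5, 0.5])   # per-pixel V_c in [0, 1)
labels = np.array([0, 0, 1, 1, 2, 2])
e_m = motion_term(v_c, labels, k=2)
```

Treating all segments with equal importance, as above, avoids the temporal instability of chasing a single most-salient region across frames.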

    Stereoscopic window violation occurs when an object is virtually located in front of the screen (i.e., has negative disparity) but is occluded by the screen boundary. This is confusing, as a nearer object appears to be occluded by a further one, causing retinal rivalry [8]. If this happens, only one eye can see part of the object, leading to visual inconsistency and hence discomfort (see Fig. 6). One practical way to alleviate this is to trim off the offending part.

    To measure stereoscopic window violation (SWV), we use a term E_v(f) for frame f. We first detect violations by checking pixels near the left and right boundaries: if pixels touching the frame boundaries have negative disparity, they violate the stereoscopic window. The SWV penalty for frame f is then defined by counting the number of pixels included in violating objects:

    Fig. 5 Motion discomfort estimation: (a) anaglyph 3D frames (© Blender Foundation); (b) estimated discomfort caused by motion.

    Fig. 6 Stereoscopic window violation (SWV). Left: a toy example illustrating SWV. The part of the green object falling in the light blue region can only be seen by the left eye. Right: a real S3D photo showing SWV (© KUK Filmproduktion GmbH). There is inconsistent content in the leftmost part of the photo, leading to viewer discomfort.

    where s ranges over image segments extracted as before, and R_b is an approximation of the violating objects in the form of segments; every segment in R_b has a negative average disparity. R_b is initially set to those boundary segments with a negative average disparity, and is then iteratively augmented by adding new neighbouring segments with negative average disparity until no new segments with negative average disparity are found. n(s) is the number of pixels in segment s and N is the number of pixels in frame f.
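The iterative augmentation of R_b is a region growing over the segment adjacency graph. A sketch under our own naming (segment adjacency and per-segment average disparities are assumed precomputed):

```python
# Sketch of building R_b for the SWV term: start from boundary segments with
# negative average disparity and iteratively absorb neighbouring segments
# whose average disparity is also negative.

def grow_violating_region(boundary_segs, neighbours, avg_disp):
    """Return the set R_b of violating segment ids."""
    r_b = {s for s in boundary_segs if avg_disp[s] < 0}
    frontier = set(r_b)
    while frontier:
        nxt = set()
        for s in frontier:
            for n in neighbours.get(s, ()):
                if n not in r_b and avg_disp[n] < 0:
                    r_b.add(n)
                    nxt.add(n)
        frontier = nxt
    return r_b

# Toy example: segment 0 touches the frame boundary; segments 0-1-2 form a
# chain, and segment 2 lies behind the screen (positive disparity).
neighbours = {0: [1], 1: [0, 2], 2: [1]}
avg_disp = {0: -3.0, 1: -1.0, 2: 4.0}
r_b = grow_violating_region([0], neighbours, avg_disp)
```

E_v(f) is then the total pixel count of the segments in R_b divided by N.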

    Temporal smoothness. To avoid sudden depth changes, the disparity should vary smoothly and slowly, as needed. In Ref. [11], the importance of temporal smoothness in 3D cinematography is emphasised; they suggested that the disparity range of successive frames should vary smoothly. Following the definition of disparity map similarity in Ref. [11], we define the similarity of disparity between neighbouring frames f and f′ using the Jensen-Shannon divergence [35]:

    where Ψ(f) is a pixel disparity histogram for frame f with d_max − d_min + 1 bins; d_max is the largest integer pixel disparity value in f and d_min is the smallest. H(X) is the Shannon entropy for distribution X. Intuitively, the more unlike the disparity histograms are, the higher the value of E_s.
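The Jensen-Shannon divergence of two normalised histograms P and Q is H((P+Q)/2) − (H(P)+H(Q))/2, which is zero for identical distributions and at most 1 bit. A self-contained sketch (ours, with base-2 entropy):

```python
import math

# Sketch of the temporal smoothness term: Jensen-Shannon divergence between
# the normalised disparity histograms of two neighbouring frames.

def entropy(p):
    """Shannon entropy H(p) in bits; zero-probability bins contribute 0."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def jensen_shannon(p, q):
    """JS(p, q) = H((p+q)/2) - (H(p) + H(q)) / 2."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return entropy(m) - (entropy(p) + entropy(q)) / 2

# Identical histograms give 0; disjoint ones give the maximum, 1 bit.
same = jensen_shannon([0.5, 0.5], [0.5, 0.5])
disjoint = jensen_shannon([1.0, 0.0], [0.0, 1.0])
```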

    4.2 Discomfort metric

    Our general discomfort metric for a set of successive frames F in an S3D video is formulated as a linear combination of the above terms in Eqs. (3)-(6), summed over the frames:

    where f′ is the successor frame to f in F. λ_d, λ_m, λ_v, and λ_s are weights balancing the penalty terms, set to 1, 0.4, 10, and 0.1 respectively. The weights were determined via experiments. We performed a small-scale perceptual study with 10 subjects and 10 input videos: we enumerated every weight from 0 to 20 in steps of 0.1, generating 1.6 × 10^9 possible combinations of weights. After calculating the corresponding general metrics for the input videos based on each group of weights, we asked 5 subjects to view each video and evaluate its comfort level by assigning integer scores from 1 to 5. We finally selected the group of weights for which the metric score best reflected the subjects' comfort feelings. The weights were further validated by the other 5 subjects' evaluations. This metric can be used to predict the overall discomfort level of part or all of an S3D video. An S3D video frame is predicted to be visually comfortable if the discomfort value E_c < 0.3. Figure 7(b) shows example frames with their corresponding discomfort values.
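Structurally, the metric is a weighted sum of three unary terms per frame plus the binary temporal term between successive frames. A sketch (ours; the term functions are stand-ins, and the weights are those given above):

```python
# Sketch of the combined discomfort metric E_c over a frame set F:
# unary terms E_d, E_m, E_v per frame, plus the binary temporal term E_s
# between successive frames, combined with the paper's default weights.

WEIGHTS = {"d": 1.0, "m": 0.4, "v": 10.0, "s": 0.1}

def discomfort(frames, e_d, e_m, e_v, e_s, w=WEIGHTS):
    total = 0.0
    for i, f in enumerate(frames):
        total += w["d"] * e_d(f) + w["m"] * e_m(f) + w["v"] * e_v(f)
        if i + 1 < len(frames):                      # binary term
            total += w["s"] * e_s(f, frames[i + 1])
    return total

# Toy example with constant per-frame term values:
e = discomfort([1, 2, 3],
               e_d=lambda f: 0.1, e_m=lambda f: 0.0,
               e_v=lambda f: 0.01, e_s=lambda f, g: 0.2)
```

Because each term enters only through its weight, dropping a term (weight 0) or adding a new penalty is a local change, which is what makes the metric extensible.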

    The metric has a general form, with a default set of weights balancing the penalty terms. If considered unimportant, certain terms can be ignored by setting their corresponding weights to 0. Alternatively, additional terms of a similar kind could also be included with a proper weight configuration (as an example, we present a variation of the metric, driving perceptual depth enhancement, by adding another unary term to each frame in the video (see Section 6)). We intentionally do not include all factors that cause visual fatigue; there are many such factors. Instead, we claim that the above metric includes many of the most significant factors, and the way we have formulated it allows ready extension to include other comfort models using additional penalty terms. The ideas in the rest of the paper do not depend on the precise form of this metric, only on the fact that such a metric can be formulated. We next show how to use this metric to define the objective function used to optimise pixel disparity mapping.

    Fig. 7 Typical frames (© Blender Foundation) and discomfort scores. (a) Discomfort scores for frames in an S3D video clip. The discomfort interval is marked in blue. Key frames selected by our algorithm are highlighted in red. (b) Three frames and corresponding discomfort scores from (a).

    5 Optimisation of pixel disparity mapping

    Based on the above visual discomfort metric, we next derive the objective function used for disparity mapping optimisation. A genetic algorithm is used in a hierarchical approach to optimise the disparity mapping: given a set of input disparity values, we compute a corresponding target output disparity for each value.

    5.1 Objective function

    The visual discomfort metric E_c measures the discomfort level of S3D video frames. However, directly using it as an objective function in an optimisation process leads to unsatisfactory results: clearly, mapping all disparity values to zero would minimise E_c, making it equal to zero at all times. Also, making large changes to the disparity without scaling the sizes of objects changes the perceived size of the original content. We thus add an additional unary term E_n(φ, f) to every frame f with the intent that optimisation should change the original disparities as little as possible. E_n(φ, f) measures differences between new and original disparities:

    where N is the number of pixels in frame f, d is the integer pixel disparity value, and Ψ_d(f) is the disparity histogram count for disparity d in frame f, as in Eq. (6). φ(d, f) is the disparity mapping for disparity d in frame f. This formulation gives a cost for the mapping φ, punishing large changes from the original disparity distribution. This additional term allows us to find a suitable disparity mapping for each video frame that improves visual comfort while also preserving the original appearance.
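A histogram-weighted preservation penalty of this kind can be sketched as follows (our illustration; since Eq. (8) is not reproduced in this extract, a squared difference is assumed for the per-disparity penalty):

```python
# Sketch of the appearance preservation term E_n(phi, f): penalise mapped
# disparities that depart from the originals, weighted by how many pixels
# carry each disparity value, and normalised by the pixel count N.
# The squared-difference penalty is an assumption for illustration.

def preservation_term(phi, hist, n_pixels):
    """Sum over disparities d of hist[d] * (phi(d) - d)^2, divided by N."""
    return sum(c * (phi(d) - d) ** 2 for d, c in hist.items()) / n_pixels

# Toy disparity histogram for a 100-pixel frame; halving the disparity range
# moves only the 20 pixels at d = +/-2, so the cost stays small.
hist = {-2: 10, 0: 80, 2: 10}
e_n = preservation_term(lambda d: d // 2, hist, 100)
```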

    We denote the objective function for optimising a sequence of mappings Φ for a sequence of frames F in an S3D video as E(Φ); it is defined as

    where f′ is the successor frame to f, and Γ_φf(f) is a function which applies the mapping operator φ_f to frame f to produce a new frame with the desired new pixel disparities. λ_n is a further weight, set to 0.01 by default.

    5.2Hierarchical optimisation

    The objective function in Eq. (9) is complex; we use an efficient hierarchical approach to optimise it in a coarse-to-fine manner along the time-line. We note that in S3D movies, frames causing discomfort usually appear together, forming discomfort intervals. Thus, we first extract discomfort intervals for the whole video: we manipulate the disparity only for frames which cause discomfort, and leave the others unchanged. The discomfort intervals are determined using Eq. (7): a discomfort interval is a set of continuous frames from starting frame f_s to ending frame f_e, within which the discomfort metric E_c({f, f′}) is above a threshold α = 0.3, where f and f′ are consecutive frames inside the interval. During optimisation, at coarser levels, inside every discomfort interval we determine key frames where the disparity changes drastically or there is a local maximum of discomfort. Frames at discomfort interval boundaries are also taken as key frames, having a fixed identity pixel disparity mapping (φ(d) = d). Next, we use a genetic algorithm to optimise the pixel disparity mappings of the key frames, treating the key frames as neighbours. After optimising the key frames at this hierarchy level, we fix the disparity mappings of the current key frames, and continue to seek new key frames for finer intervals between any two successive key frames at the current level. The mappings of the current key frames are used as boundary conditions for the next level. This process is performed recursively until fewer than ten frames exist between each neighbouring pair of key frames. Finally, the disparity mapping for the remaining frames between these key frames is interpolated. We now give further details of the key steps.
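Extracting discomfort intervals amounts to grouping consecutive frame pairs whose pairwise discomfort exceeds the threshold α into maximal runs. A sketch (ours; pair_costs stands in for the precomputed values E_c({f, f′})):

```python
# Sketch of discomfort interval extraction: consecutive frame pairs whose
# discomfort E_c({f, f'}) exceeds alpha = 0.3 are merged into maximal
# intervals; frames outside all intervals are left unchanged.

def discomfort_intervals(pair_costs, alpha=0.3):
    """pair_costs[i] = E_c for frames (i, i+1); returns (start, end) pairs."""
    intervals, start = [], None
    for i, c in enumerate(pair_costs):
        if c > alpha and start is None:
            start = i                         # interval opens
        elif c <= alpha and start is not None:
            intervals.append((start, i))      # interval closes
            start = None
    if start is not None:
        intervals.append((start, len(pair_costs)))
    return intervals

costs = [0.1, 0.5, 0.6, 0.2, 0.4]
ivs = discomfort_intervals(costs)
```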

    5.2.1 Key frame determination

    Key frame determination is a crucial step in the hierarchical disparity mapping optimisation. Since the optimisation is performed in a coarse-to-fine manner, at coarser levels, key frames should provide a story-line overview of frames at finer levels, especially in terms of disparity. Motivated by this requirement, inside each discomfort interval we mark a frame as a key frame when there is a sudden depth change or the discomfort metric reaches a local maximum within a window of Υ_l frames for each level l. By default, Υ_l at level l is set to a quarter of the interval length between the two boundary key frames. Specifically, we use the inequality E_s(f, f′) > β to determine whether frame f is a key frame at a drastic depth change; by default β = 0.5. After optimising key frames at level l, new key frames at level l + 1 are determined recursively, by seeking new key frames at level l + 1 between every adjacent pair of key frames at level l. We stop when fewer than ten frames exist between each neighbouring pair of key frames.
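The two key frame criteria, a drastic depth change (E_s above β) or a local maximum of discomfort within a window, can be sketched as follows (our illustration, with hypothetical per-frame score arrays):

```python
# Sketch of key frame selection inside one discomfort interval: a frame is a
# key frame if the depth-change measure E_s(f, f') exceeds beta = 0.5, or if
# its discomfort score is a local maximum within a window of half-width w.

def key_frames(discomfort, depth_change, w=2, beta=0.5):
    keys = []
    for i, c in enumerate(discomfort):
        window = discomfort[max(0, i - w): i + w + 1]
        local_max = c == max(window)
        drastic = i + 1 < len(discomfort) and depth_change[i] > beta
        if local_max or drastic:
            keys.append(i)
    return keys

discomfort = [0.2, 0.9, 0.4, 0.3, 0.35]     # per-frame discomfort scores
depth_change = [0.1, 0.1, 0.6, 0.1, 0.0]    # E_s to the next frame
kf = key_frames(discomfort, depth_change)
```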

    5.2.2 Heuristic optimisation using a genetic algorithm

    After finding the key frame set F at level l, we use a heuristic algorithm to optimise the disparity mappings of these key frames. Without loss of generality, assume we are optimising a discomfort interval with t detected key frames F = {f_1, ..., f_t}. Including the additional key frames at the discomfort interval boundaries, the augmented key frame set becomes {f_s, f_1, ..., f_t, f_e}, with fixed identity disparity mappings for f_s and f_e as boundary conditions. We regard every successive pair of frames in the augmented set along the time-line as neighbours in a coarse view. We optimise the key frame mappings Φ = {φ_1, ..., φ_t} at coarser levels using the objective function adapted from Eq. (9):

    where f′ is the successor frame to f in the augmented key frame set. This objective function is used for fitness assessment in the genetic algorithm.

    A genetic algorithm (GA) is used to optimise the disparity mapping φ for each frame f, using this objective function as a fitness function. We use the GAlib implementation of a steady-state genetic algorithm [36]; 50% of the population is replaced in each generation. The genome for each individual is a vector of real numbers, used to store target disparity mapping values (with a conversion between integer and real numbers). Uniform crossover [37] is used with Gaussian mutation [38], which adds a random value from a Gaussian distribution to each element of an individual's state vector, to create offspring.

    Genome of individuals. The genome representation needs to be carefully designed; a poor choice can lead to GA divergence. The target output disparity mapping values D_φf of the mapping function φ for a frame f form the elementary unit of each individual's genome. The disparity mapping φ(x) should be a non-decreasing function, i.e., if x_1 < x_2, then φ(x_1) ≤ φ(x_2), to avoid depth reversal artifacts in the output video. We enforce this requirement by using an increment-based representation. We represent the mapping values as D_φf = {φ(d_min), Δ_1, ..., Δ_(τ−1)}, where d ranges over all integer pixel disparity values between d_min and d_max, and Δ_i = φ(d_(i+1)) − φ(d_i) is a non-negative mapping value increment. Obviously, we can recover D_φf from the relationship D_φf[i] = φ(d_min) + Σ_(j<i) Δ_j. The non-negativity of Δ_i is guaranteed by an additional upper bound b_i and a lower bound of 0 on each increment.
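The increment-based encoding guarantees monotonicity by construction. A sketch (ours, not GAlib code; clamping the increments at zero stands in for the bound constraints on the genome):

```python
# Sketch of the increment-based genome: a mapping is stored as its first
# value phi(d_min) followed by non-negative increments, so the decoded
# mapping is always non-decreasing (no depth reversal).

def decode(genome):
    """genome = [phi(d_min), inc_1, inc_2, ...] -> mapping values."""
    values = [genome[0]]
    for inc in genome[1:]:
        values.append(values[-1] + max(0.0, inc))   # clamp keeps monotonicity
    return values

def encode(values):
    """Inverse: store the first value and successive differences."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

vals = decode([-3.0, 1.0, 0.0, 2.0])
```

Crossover and mutation operate freely on the increment vector; any offspring still decodes to a valid non-decreasing mapping.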

    Evolution. The state of every individual in the first generation is initialised using random mappings. The objective function in Eq. (10) is used for individual fitness assessment. The uniform crossover probability is p_c = 0.7 and the Gaussian mutation probability is p_m = 0.05. The population size n_p is set to 100 and the GA terminates after n_g = 50 generations. As a steady-state GA is used, the last generation includes the best solution found for the desired mappings Φ′. Figure 8 illustrates the mappings corresponding to the best solution in different generations.

    Fig. 8 Best disparity mapping solutions for improving the comfort level of the frame shown in Fig. 1(c), at various generations of the genetic algorithm. The frame set F contains four key frames. During optimisation, the discomfort cost E_c of F is reduced.

    5.3Warping-based manipulation

    After optimising the pixel disparity mappings for each frame of the video, we have to adjust the input video using these mappings. In Ref. [10], a warping-based method is given to adjust disparity to match desired disparity mappings. Their approach first extracts sparse stereo correspondences, then warps the left and right frames respectively, with constraints applied to the vertices of a mesh grid placed over each frame. The output is thus a deformed mesh as well as the warped frame. We use this technology to generate the output video.

    6 Results

    The experiments were carried out on a computer with an Intel Core i7-4790K CPU and 32 GB RAM. All videos were uniformly scaled to fit the screen size (1920×1080 pixels) to the extent possible before calculation. We calculate dense pixel correspondences between the left view and right view to estimate the pixel disparity in S3D videos using optical flow [39]. Motion in the x-y plane is also estimated using this method, between consecutive frames in the left view. Calculating the discomfort metric for one S3D video frame of size 1920×1080 takes less than 0.2 s. The most time-consuming part is hierarchical optimisation, but the time taken is variable. It is dominated by the key frame determination step; it takes up to 15 min to optimise ten key frames together in our implementation, using a single core.

    We have tested our approach on S3D video clips whose lengths are less than one minute.As explained in Ref.[14],the proposed motion comfort metric was derived from experiments on short videos.All of the results were obtained using default parameters. Extensive experiments showed that our system is insensitive to parameter settings.

    Our method provides smooth scene transitions between successive shots. Representative frames of a video clip with shot cuts are shown in Fig. 9(a). Boundary frames 1 and 40 do not cause discomfort, so are fixed to retain their original disparities. Our algorithm detects drastic disparity changes between these boundary frames and automatically adjusts disparities to provide smoother depth transitions by finding suitable disparity mappings (see Fig. 9(b)). In this example, frames where shot cuts occur are detected as key frames, because the values of the motion term and the temporal smoothness term reach a local maximum within a window. As can be seen in Fig. 9(c), after manipulating the video, the depth storyboard suffers less from sudden jumps in disparity. While the last part of the video initially has a constant disparity range, which after processing becomes a slowly increasing disparity range, this does not lead to any perceptual artifacts: (i) slow transitions in disparity are often used to control disparity at shot cuts, (ii) the rate of disparity change is small, and (iii) the warping provides a smooth solution.

    Fig.9 Representative anaglyph frames of our results,with a fluent depth storyboard.(a)Sample input and output frames(frame 1 and frame 40 are fixed to their original disparities).(b)Pixel disparity mappings along the time-line(colour encodes output disparity value).(c)Depth storyboard before and after the manipulation,with colour encoding frequency of the occurrence of disparity values.

    Figure 10 gives an example of automatic correction of excessive disparity range. The ball popping out towards the viewer in the center of the frame makes it difficult for the viewer to comfortably perceive the depth. Our correction pushes the ball a little closer to the screen. Pushing the ball back into the screen too far would change the content too much, in disagreement with the film maker's intent. The deformed meshes of the left and right views used for the warping-based disparity manipulation are also shown. Discomfort scores in our metric before and after manipulation are 0.58 and 0.22, respectively.

    Fig. 10 Disparity mapping. (a) An input S3D frame and corresponding output frame. The "ball" outside the comfort zone is pushed back towards the screen. (b) Disparity mapping generated by our algorithm. (c) Deformed meshes for left and right views, indicating the warping effect.

    Figure 11 gives an example of eliminating stereoscopic window violation. In the original input frame, the front of the car appears in front of the screen plane, but is occluded by the picture boundary. This causes the leftmost part of the image to be seen only by the left eye. Such inconsistent content gives an unpleasant viewing experience. Our approach detects such violations automatically and eliminates them by pushing the car behind the screen.

    We further tested our framework using videos from a consumer stereoscopic camera(Fuji FinePix REAL 3D W3).Typical frames from one video are shown in Fig.12.The perceptual depth range is excessive,making it hard to fuse the scene.In the result,the depth of the building in the far distance is reduced,while the disparity values of the flowers are subtly changed.

    Perceptual depth enhancement. In Sections 4 and 5, we presented an extensible framework that optimises disparity mappings driven by comfort. A variation of this framework can be used to drive disparity manipulation to provide depth enhancement (while not introducing visual discomfort), for a greater feeling of depth. This goal can be accomplished by introducing to the objective function an additional unary term Ea(φ, f) for each frame with weight λa = 1, with the aim of penalising small disparities after applying the mapping φ to the video; here N is the number of pixels in frame f, d is the integer pixel disparity value, and Ψd(f) is the disparity histogram count for disparity d in frame f, as in Eq. (8). This change allows the perceived depths to be amplified, as shown in Fig. 13.
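One plausible form of such a term can be sketched as a histogram-weighted penalty that decays as the mapped disparity magnitude grows. Note this specific formula is an assumption of ours for illustration; it is not the paper's exact Ea:

```python
import math

def depth_enhancement_term(hist, phi, n_pixels):
    """Illustrative (assumed) penalty on small output disparities:
    pixels whose mapped disparity |phi(d)| is near zero contribute the most.
    `hist` maps integer disparity d -> pixel count Psi_d(f); `n_pixels` is N."""
    return sum(count * math.exp(-abs(phi(d)))
               for d, count in hist.items()) / n_pixels
```

Minimising such a term pushes the optimiser toward mappings with larger absolute disparities, which is the amplification effect shown in Fig. 13.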

    Fig. 11 Eliminating stereoscopic window violation. Left: input frames with SWV (© KUK Filmproduktion GmbH). Right: in the manipulation result, the popped out parts are pushed back towards the screen.

    Fig.12 Processing a video taken by a consumer S3D camera.The depth of the scene is reduced to facilitate stereoscopic fusion.

    Fig. 13 Enhancing perceptual depth. Left: input and output frames (© Blender Foundation). After enhancement, the head of the man looks more angular. Right: the generated disparity mapping.

    6.1 User study

    We conducted two user studies with 20 subjects aged from 18 to 32, to further assess the performance of our proposed comfort-driven disparity adjustment method. The primary aims of the two user studies were to test whether the framework can produce artifact-free results, and its ability to improve visual comfort. Subjects participated by watching S3D videos and filling in questionnaires.

    We used a 23-inch interleaved 3D display (1920×1080 pixels, 400 cd/m² brightness), with passive polarized glasses. The viewing distance was set to 55 cm, as assumed in the proposed metric. All subjects had normal or corrected-to-normal vision, and were assessed to ensure they had no difficulty in stereoscopic fusion. Videos were displayed at full screen size.

    We prepared ten pairs of S3D videos including animated cartoons and real S3D videos. Both videos in a pair had the same content, one being the original and the other being modified by our system. Each pair was shown in a random order, displayed three times in succession. Subjects were allowed to pause and carefully examine the content at any time.

    In the first user study, we evaluated whether our output video provides greater visual comfort than the original. After watching each video, each subject was asked to rate the comfort level of their viewing experience, in terms of ease of fusing the scene, causing fewer or more severe headaches, and other feelings of discomfort. Five ratings were used, from 1 to 5: very uncomfortable, uncomfortable, mildly comfortable, comfortable, very comfortable. In all ten pairs of test videos, our results achieved on average a higher comfort score than the original video. The differences in average score in each pair varied from 0.3 to 1.35. A right-tailed paired-sample hypothesis test was conducted, with null hypothesis H0: there is no significant difference between the comfort scores of our outputs and the original videos, and alternative hypothesis HA: the comfort scores of our results are significantly higher than those for the original videos, at significance level α = 0.05 with n = 200 samples. The one-tailed critical value was t = 1.653, while the test statistic was t* = 9.905. Since t* > t, the null hypothesis was rejected, indicating that the differences were statistically significant: our approach provides an improved stereoscopic video viewing experience.

    The second user study aimed to assess artifacts in our results. Before undertaking the user study, subjects were told to note any disturbing perceptual depth artifacts (e.g., depth reversals or unsuitable depth changes) that caused confusion. After watching each video, the subject was asked to rate both videos for unsuitable perceived depths, scored as follows: 4 = many strong artifacts, 2 = few strong/many weak artifacts, 1 = few weak artifacts, and 0 = no artifacts. The results showed that 8 out of 20 subjects did not notice artifacts in any video, 2 subjects only saw artifacts in our results, and 2 subjects only saw artifacts in the original videos. The other 8 subjects noticed artifacts in both our results and the original videos. The worst score for both our results and the original videos was 2 (few strong/many weak artifacts). To further test whether the two sets of scores are statistically indistinguishable, a two-tailed paired-sample hypothesis test was conducted, with null hypothesis H0: there is no significant difference between the artifact scores of our outputs and the original videos, and alternative hypothesis HA: the artifact scores of our results and the original videos differ, at significance level α = 0.05 with n = 200 samples. The two-tailed critical value was t = 1.972, while the test statistic was t* = 1.236. This time, the null hypothesis was not rejected, as |t*| ≤ |t|. We conclude that there is no significant difference in the perceived level of artifacts in the original videos and our results. Indeed, viewers are fairly insensitive to artifacts in these videos. Full statistics of the user studies are provided in the Electronic Supplementary Material (ESM) of this paper.

    Limitations. Our approach has limitations. First, as optimisation is based on a genetic algorithm, it may only find a local optimum. However, tests in which the genetic algorithm was initialized with differing initial populations led to quite similar output mappings. Secondly, existing individual comfort models work well only for viewers with normal stereoscopic fusion ability, and give an average comfort evaluation. Thus using the discomfort metric with default parameters may not give an accurate comfort evaluation for every individual, especially for those with poor stereoscopic fusion ability. Across individuals, there may well be differences in which aspects of an S3D video cause most discomfort. Moreover, our system cannot predict directors' intentions: shots intentionally causing discomfort for artistic visual impact would unfortunately be eliminated by our system.

    7 Conclusions

    We have suggested a general framework for automatic comfort-driven disparity adjustment, together with an S3D discomfort metric. The metric combines several key factors, and could be of general benefit to S3D movie makers by giving an objective, all-round visual comfort evaluation. It underpins our automatic disparity adjustment approach, which is based on disparity mapping optimisation. Our results demonstrate the effectiveness and uses of our approach.

    Our work is among the first attempts to tackle this challenging problem, and leaves room for improvement. In our framework, the disparity mapping is automatically determined using a heuristic method; a closed-form solution for this problem is desirable.

    Acknowledgements

    This work was supported by the National High-tech R&D Program of China (Project No. 2013AA013903), the National Natural Science Foundation of China (Project Nos. 61272226 and 61133008), and a Research Grant of Beijing Higher Institution Engineering Research Center.

    Electronic Supplementary MaterialSupplementary material is available in the online version of this article at http://dx.doi.org/10.1007/s41095-016-0037-5.

    References

    [1]Maimone,A.;Wetzstein,G.;Hirsch,M.;Lanman,D.;Raskar,R.;Fuchs,H.Focus 3D:Compressive accommodationdisplay.ACMTransactionson Graphics Vol.32,No.5,Article No.153,2013.

    [2]Wetzstein,G.;Lanman,D.;Heidrich,W.;Raskar,R.Layered 3D:Tomographic image synthesis for attenuation-based light field and high dynamic range displays.ACM Transactions on Graphics Vol.30,No. 4,Article No.95,2011.

    [3]Didyk,P.;Ritschel,T.;Eisemann,E.;Myszkowski,K.;Seidel,H.-P.A perceptual model for disparity.ACM Transactions on Graphics Vol.30,No.4,Article No. 96,2011.

    [4]Didyk,P.;Ritschel,T.;Eisemann,E.;Myszkowski,K.; Seidel,H.-P.; Matusik,W.Aluminancecontrast-aware disparity model and applications.ACM Transactions on Graphics Vol.31,No.6,Article No. 184,2012.

    [5] Hoffman, D. M.; Girshick, A. R.; Akeley, K.; Banks, M. S. Vergence–accommodation conflicts hinder visual performance and cause visual fatigue. Journal of Vision Vol. 8, No. 3, 33, 2008.

    [6]Howard,I.P.;Rogers,B.J.Perceiving in Depth,Vol. 2:Stereoscopic Vision.New York:Oxford University Press,2012.

    [7]Palmer, S.E.VisionScience:Photonsto Phenomenology.Cambridge, MA, USA:MIT Press,1999.

    [8]Mendiburu,B.3D Movie Making:Stereoscopic Digital Cinema from Script to Screen.Oxon,UK:Focal Press,2009.

    [9]Kellnhofer,P.;Ritschel,T.;Myszkowski,K.;Seidel,H.-P.Optimizingdisparityformotionindepth. Computer Graphics Forum Vol.32,No.4,143-152,2013.

    [10]Lang,M.;Hornung,A.;Wang,O.;Poulakos,S.;Smolic,A.;Gross,M.Nonlinear disparity mapping for stereoscopic 3D.ACM Transactions on Graphics Vol. 29,No.4,Article No.75,2010.

    [11] Liu, C.-W.; Huang, T.-H.; Chang, M.-H.; Lee, K.-Y.; Liang, C.-K.; Chuang, Y.-Y. 3D cinematography principles and their applications to stereoscopic media processing. In: Proceedings of the 19th ACM International Conference on Multimedia, 253–262, 2011.

    [12]Shibata,T.;Kim,J.;Hoffman,D.M.;Banks,M.S. The zone of comfort:Predicting visual discomfort with stereo displays.Journal of Vision Vol.11,No.8,11,2011.

    [13]Cho,S.-H.;Kang,H.-B.Subjective evaluation of visual discomfort caused from stereoscopic 3D video using perceptual importance map.In:Proceedings of IEEE Region 10 Conference,1-6,2012.

    [14]Du,S.-P.;Masia,B.;Hu,S.-M.;Gutierrez,D.A metric of visual comfort for stereoscopic motion.ACM Transactions on Graphics Vol.32,No.6,Article No. 222,2013.

    [15]Jung,Y.J.;Lee,S.-i.;Sohn,H.;Park,H.W.;Ro,Y.M.Visual comfort assessment metric based on salient object motion information in stereoscopic video.Journal of Electronic Imaging Vol.21,No.1,011008,2012.

    [16] Kooi, F. L.; Toet, A. Visual comfort of binocular and 3D displays. Displays Vol. 25, Nos. 2–3, 99–108, 2004.

    [17] Mu, T.-J.; Sun, J.-J.; Martin, R. R.; Hu, S.-M. A response time model for abrupt changes in binocular disparity. The Visual Computer Vol. 31, No. 5, 675–687, 2015.

    [18]Templin,K.;Didyk,P.;Myszkowski,K.;Hefeeda,M.M.;Seidel,H.-P.;Matusik,W.Modeling and optimizing eye vergence response to stereoscopic cuts. ACM Transactions on Graphics Vol.33,No.4,Article No.145,2014.

    [19]Jin,E.W.; Miller,M.E.; Endrikhovski,S.;Cerosaletti,C.D.Creating a comfortable stereoscopic viewing experience:Effects of viewing distance and field of view on fusional range.In:Proceedings of SPIE 5664,Stereoscopic Displays and Virtual Reality Systems XII,10,2005.

    [20]Ukai,K.;Howarth,P.A.Visual fatigue caused by viewing stereoscopic motion images:Background,theories,and observations.Displays Vol.29,No.2,106-116,2008.

    [21] Zilly, F.; Müller, M.; Eisert, P.; Kauff, P. The stereoscopic analyzer—An image-based assistance tool for stereo shooting and 3D production. In: Proceedings of the 17th IEEE International Conference on Image Processing, 4029–4032, 2010.

    [22]Masia,B.;Wetzstein,G.;Didyk,P.;Gutierrez,D. A survey on computational displays:Pushing the boundaries of optics,computation,and perception. Computers&Graphics Vol.37,No.8,1012-1038,2013.

    [23]Lo,W.-Y.;van Baar,J.;Knaus,C.;Zwicker,M.;Gross,M.H.Stereoscopic 3D copy&paste.ACM Transactions on Graphics Vol.29,No.6,Article No. 147,2010.

    [24] Tong, R.-F.; Zhang, Y.; Cheng, K.-L. StereoPasting: Interactive composition in stereoscopic images. IEEE Transactions on Visualization and Computer Graphics Vol. 19, No. 8, 1375–1385, 2013.

    [25]Kim,Y.;Lee,Y.;Kang,H.;Lee,S.Stereoscopic 3D line drawing.ACM Transactions on Graphics Vol.32,No.4,Article No.57,2013.

    [26] Niu, Y.; Feng, W.-C.; Liu, F. Enabling warping on stereoscopic images. ACM Transactions on Graphics Vol. 31, No. 6, Article No. 183, 2012.

    [27]Kim,C.;Hornung,A.;Heinzle,S.;Matusik,W.;Gross,M.Multi-perspective stereoscopy from light fields.ACM Transactions on Graphics Vol.30,No.6,Article No.190,2011.

    [28] Masia, B.; Wetzstein, G.; Aliaga, C.; Raskar, R.; Gutierrez, D. Display adaptive 3D content remapping. Computers & Graphics Vol. 37, No. 8, 983–996, 2013.

    [29] Koppal, S. J.; Zitnick, C. L.; Cohen, M.; Kang, S. B.; Ressler, B.; Colburn, A. A viewer-centric editor for 3D movies. IEEE Computer Graphics and Applications Vol. 31, No. 1, 20–35, 2011.

    [30]Oskam,T.;Hornung,A.;Bowles,H.;Mitchell,K.;Gross,M.OSCAM-optimized stereoscopic camera control for interactive 3D.ACM Transactions on Graphics Vol.30,No.6,Article No.189,2011.

    [31]Tseng,K.-L.;Huang,W.-J.;Luo,A.-C.;Huang,W.-H.;Yeh,Y.-C.;Chen,W.-C.Automatically optimizing stereo camera system based on 3D cinematography principles.In:Proceedings of 3DTV-Conference:The True Vision—Capture,Transmission and Display of 3D Video,1-4,2012

    [32]Cheng,M.-M.;Mitra,N.J.;Huang,X.;Torr,P.H. S.;Hu,S.-M.Global contrast based salient region detection.IEEE Transactions on Pattern Analysis and Machine Intelligence Vol.37,No.3,569-582,2015.

    [33] Felzenszwalb, P. F.; Huttenlocher, D. P. Efficient graph-based image segmentation. International Journal of Computer Vision Vol. 59, No. 2, 167–181, 2004.

    [34] Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 34, No. 11, 2274–2282, 2012.

    [35] Manning, C. D.; Schütze, H. Foundations of Statistical Natural Language Processing. Cambridge, MA, USA: MIT Press, 1999.

    [36]Syswerda,G.A study of reproduction in generational and steady state genetic algorithms.Foundation of Genetic Algorithms Vol.2,94-101,1991.

    [37]Syswerda,G.Uniform crossover in genetic algorithms. In:Proceedings of the 3rd International Conference on Genetic Algorithms,2-9,1989.

    [38]Higashi,N.;Iba,H.Particle swarm optimization with Gaussian mutation.In:Proceedings of the IEEE Swarm Intelligence Symposium,72-79,2003.

    [39]Brox,T.;Malik,J.Large displacement optical flow: Descriptor matching in variational motion estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol.33,No.3,500-513,2011.

    Miao Wang is currently a Ph.D. candidate at Tsinghua University, Beijing, China. He received his B.S. degree from Xidian University in 2011. His research interests include computer graphics, image processing, and computer vision.

    Xi-Jin Zhang is currently a Ph.D. candidate at Tsinghua University, Beijing, China. He received his B.S. degree from Xidian University in 2014. His research interests include image and video processing, computer vision, and machine learning.

    Jun-Bang Liang is currently an undergraduate student at Tsinghua University, Beijing, China. His research interests include computer vision and computer graphics.

    Song-Hai Zhang received his Ph.D. degree in 2007 from Tsinghua University. He is currently an associate professor in the Department of Computer Science and Technology of Tsinghua University, Beijing, China. His research interests include image and video processing, and geometric computing.

    Ralph R. Martin is currently a professor at Cardiff University. He obtained his Ph.D. degree in 1983 from Cambridge University. He has published about 300 papers and 14 books, covering such topics as solid and surface modeling, intelligent sketch input, geometric reasoning, reverse engineering, and various aspects of computer graphics. He is a Fellow of the Learned Society of Wales, the Institute of Mathematics and its Applications, and the British Computer Society. He is on the editorial boards of Computer-Aided Design, Computer Aided Geometric Design, Geometric Models, and Computers and Graphics.

    Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

    Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript,please go to https://www. editorialmanager.com/cvmj.
