
    Surface tracking assessment and interaction in texture space

Computational Visual Media, 2018, Issue 1

Johannes Furch, Anna Hilsmann, and Peter Eisert

© The Author(s) 2017. This article is published with open access at Springerlink.com.

    1 Introduction

Elaborate video manipulation in post-production often requires means to move image overlays or corrections along with the apparent motion of an image sequence, a task termed match moving in the visual effects community. The most common commercial tracking tools available to extract the apparent motion include manual keyframe warpers, point trackers, dense vector generators (for optical flow), planar trackers, keypoint match-based camera solvers (for rigid motion), and 3D model-based trackers [1–3]. Many of these tools allow for some kind of user interaction to guide, assist, or improve automatically generated results. However, while increasingly being discussed in the research community [4–6], visual effects artists have not yet adopted user interaction with dense optical flow based estimation methods. We believe this is due to the technical aims of most proposed tools, their relative complexity of usage, and the difficulty of assessing tracking quality in established result visualizations.

In this paper, we introduce the concept of assessment and interaction in texture space for surface tracking applications. We believe the quality of a tracking result can best be assessed on footage mapped to a common reference space. In such a common space, perfect motion estimation is reflected by a perfectly static sequence, while any apparent motion suggests errors in the underlying tracking. Furthermore, this kind of representation allows for the design of tools that are much simpler to use, since even in case of errors, visually related content is usually mapped in close spatial proximity throughout the sequence. Interacting with the tracking algorithms directly and improving the tracking results instead of adjusting the overlay data have the clear advantage of decoupling technical aspects from artistic expression.

    2 Related work and contribution

Today, many commercial tools exist for motion extraction. These tools often allow user interaction to guide, assist, or improve automatically generated results. However, many commercial implementations are limited to simple pre- and post-processing of input and output respectively [7, 8]. Also, the motion estimation is often based on keypoint trackers. These methods allow for the estimation of rigid, planar, or coarse deformable motion only, as they are based on sparse feature points and merely contribute a limited number of constraints to the optimization framework. In contrast, dense or optical flow-based methods use information from all pixels in a region of interest and therefore allow for much more complex motions. However, user interaction has not yet been integrated into dense optical flow-based estimation methods in commercial tools.

In the research community, a variety of user interaction tools for dense tracking and depth estimation have been proposed in recent years. One possibility is to manually correct the output of automatic processing and then to retrain the algorithm, as is done in Ref. [9] for face tracking, for example. To avoid tedious manual work when designing user interaction tools, one important aspect is to find a way to also integrate inaccurate user hints directly into the optimization framework that is used for motion or depth estimation. Inspired by scribble-based approaches for object segmentation [10], recent works on stereo depth estimation have combined intuitive user interaction with dense stereo reconstruction. While Zhang et al. [6] work directly on the maps by letting the user correct existing disparity maps on key frames, other approaches work in the image domain and use sparse scribbles on the 2D images to define depth layers, using them as soft constraints in a global optimization framework which propagates them into per-pixel depth maps through the whole image or video sequence [11, 12]. Similarly, other approaches use simple paint strokes to let the user set smoothness, discontinuity, and depth ordering constraints in a variational optimization framework [4, 5, 13].

In this work, we address user-assisted deformable video tracking based on mesh-based warps in combination with an optical flow-based cost function. Mesh-based warps and dense intensity-based cost functions have already been applied to various image registration problems, e.g., in Refs. [14, 15], and have been extended by several authors to non-rigid surface tracking in monocular video sequences [16, 17]. These approaches can estimate complex motions and deformations but often fail in certain situations, like large scale motion, motion discontinuity, correspondence ambiguity, etc. Here, user hints can help to guide the optimization. In our approach, we integrate user interaction tools directly into an optimization framework, similar to the approach in Ref. [16], which not only estimates geometric warps between images but also photometric ones in order to account for lighting changes. Our contribution lies, on one hand, in illustrating how texture space in combination with a variety of change inspection tools provides a much more natural visualization environment for tracking result assessment. On the other hand, we show how tools similar to those other authors have introduced can be redesigned and adapted to create powerful editing instruments to interact with the tracking results and algorithms directly in texture space. Finally, we introduce an implementation of our texture space assessment and interaction framework.

    3 Surface tracking

    3.1 Model

Given a sequence of images I_0, ..., I_N, we assume without loss of generality that I_0 is the reference frame in which a region of interest R_0 is defined. Furthermore, it is assumed that the image content inside this reference region represents a continuous surface. The objective is to extract the apparent motion of the content inside R_0 for each frame. We determine this motion by estimating a bijective warping function W_0i(x_0; θ_0i) that maps 2D image coordinates x_0 ∈ R_0 to x_i in a region R_i in I_i, based on a parameter vector θ_0i describing the warp. The inverse of this function, W_0i^−1 = W_i0, is defined for the mapped region R_i. As the indices of x and θ can be deduced from W, they will be omitted in the following.

We design the bijective warping function based on deforming 3D meshes M(V, T). The meshes consist of a consistent triangle topology T and frame-dependent vertex positions v ∈ V. Coordinates are mapped from R_i to R_j based on barycentric interpolation of the offsets Δv = v_i − v_j between the meshes M_i and M_j covering the regions:

    W_ij(x; θ) = x + Σ_l β_l Δv_l = x + B(x) θ        (1)

where B(x) ∈ R^(2×2|V|) is a matrix representation of the barycentric coordinates β, v_l are the vertices of the triangle containing x, and θ ∈ R^(2|V|×1) is the parameter vector containing all vertex offsets in the x and y directions.
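To make the mapping concrete, the following minimal numpy sketch evaluates this warp for a single point. The triangle container and function names are illustrative assumptions, not the authors' implementation; the sign convention follows the offsets Δv = v_i − v_j stated above.

```python
import numpy as np

def barycentric_coords(x, tri):
    """Barycentric coordinates beta of a 2D point x inside triangle tri (3x2)."""
    a, b, c = tri
    m = np.column_stack([b - a, c - a])       # 2x2 edge matrix
    s, t = np.linalg.solve(m, x - a)          # local coordinates in the triangle
    return np.array([1.0 - s - t, s, t])

def warp_point(x, tri_idx, verts_i, verts_j):
    """Eq.(1) for one point: W(x; theta) = x + sum_l beta_l * dv_l, where the
    offsets dv = v_i - v_j are interpolated with the barycentric weights of x
    in the containing triangle of mesh M_i (tri_idx: its 3 vertex indices)."""
    beta = barycentric_coords(x, verts_i[tri_idx])
    dv = verts_i[tri_idx] - verts_j[tri_idx]  # per-vertex offsets, as in the text
    return x + beta @ dv
```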

The mapping defined in Eq.(1) reflects the rendering of object mesh M_i into image I_i based on texture coordinates defined by the vertices of M_j for the object texture I_j, i.e., I_i(x) = I_j(W_ij(x; θ)). Therefore, the objective can be reformulated as the recovery of model parameters from a rendered sequence in which I_0 represents the texture and the vertices of M_0 represent the texture coordinates.

The sequence tracking problem can be interpreted as the requirement to find a set of mesh deformations M_1, ..., M_N of a reference mesh M_0 that minimizes each difference I_0(W_i0(x; θ)) − I_i(x) for coordinates x ∈ R_i. The free parameters in this equation are the vertex offsets that can be changed by adapting the positions of the meshes M_i. Note that the motion vectors for pixel positions in R_0 are implicitly estimated, since the inverse warping function W_0i can be constructed by swapping the two meshes.

A warping function that maps image I_j to image I_i can be found by minimizing the following objective:

    E_D(θ) = (1 / |R_i|) Σ_{x∈R_i} ψ( I_j(W_ij(x; θ)) − I_i(x) )        (2)

where ψ is a norm-like function (e.g., SSD, Huber, or Charbonnier). The pixel difference is normalized by the pixel count |R_i|, so the function cannot be minimized by shrinking the region. In addition, the function can be used across different scales of a Gaussian pyramid. Motion blur can also be explicitly considered by adding motion-dependent blurring kernels to the data term [18].
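The norm-like functions ψ are standard robust penalties. The sketch below (function names and the Charbonnier ε are assumptions) shows how the normalized data term could be evaluated once the texture has been resampled through the current warp:

```python
import numpy as np

# Standard choices for the norm-like function psi.
def psi_ssd(r):
    return r ** 2

def psi_huber(r, k=1.345):
    a = np.abs(r)
    return np.where(a <= k, 0.5 * r ** 2, k * (a - 0.5 * k))

def psi_charbonnier(r, eps=1e-3):
    return np.sqrt(r ** 2 + eps ** 2)        # smooth approximation of |r|

def data_term(warped_ref, current, mask, psi=psi_charbonnier):
    """E_D: penalized pixel difference, normalized by the region's pixel
    count |R_i| so the energy cannot be lowered by shrinking the region."""
    residual = (warped_ref - current)[mask]
    return psi(residual).sum() / mask.sum()
```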

To tackle noisy image data and to propagate motion information for textureless areas, we constrain the permitted deformation of the mesh by introducing a uniform mesh Laplacian L as a smoothing regularizer based on the mesh topology, and include it in our objective as an additional term E_L(θ). The final nonlinear optimization problem is as follows:

    θ̂ = argmin_θ  E_D(θ) + λ_L E_L(θ)        (3)

where λ_L balances the influence of the terms involved and is set to a multiple of |V| / |R_j|, so the influence of the Laplace term is scaled by the average amount of per-triangle image data.

Using the Gauss–Newton algorithm, the parameter update θ_{k+1} = θ_k + Δθ used to iteratively find θ̂ is determined by solving equations that require the Jacobian of the residual term. The Jacobian J_D ∈ R^(|R|×2|V|) of the data term is

    J_D = ∇I B

where ∇I ∈ R^(|R|×2) is the spatial image gradient in the x and y directions.
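A single Gauss–Newton step then solves the normal equations built from this Jacobian. The sketch below uses assumed callback signatures and adds a small diagonal damping term for numerical stability, which is our addition rather than part of the paper:

```python
import numpy as np

def gauss_newton_step(theta, residuals, jacobian, damping=1e-6):
    """One Gauss-Newton update: solve (J^T J) dtheta = -J^T r.
    `residuals(theta)` returns the stacked residual vector (data term plus
    lambda_L-weighted Laplacian rows); `jacobian(theta)` returns the
    corresponding stacked Jacobian, whose data rows are grad(I) * B."""
    r = residuals(theta)
    J = jacobian(theta)
    H = J.T @ J + damping * np.eye(J.shape[1])  # damping: our stability assumption
    return theta + np.linalg.solve(H, -J.T @ r)
```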

For a more detailed discussion of the theory behind image registration using mesh warps, we refer to Ref. [19].

    3.2 Photometric registration

The tracking method described above makes use of the brightness constancy assumption, explaining all changes between two images by the pure geometric warp in Eq.(1). Varying illumination and view-dependent surface reflection cannot be described by this model. In order to deal with such effects as well, we add a photometric warp:

    W_p(x; θ_p) = B_p(x) θ_p

that models spatially varying intensity scaling of the image. ρ_l is the scaling factor corresponding to vertex v_l, which is related to the scaling of pixel x via the barycentric coordinates stored in B_p. This photometric warp, represented by parameters θ_p, is multiplicatively included in the data term in Eq.(2), leading to

    E_D(θ_g, θ_p) = (1 / |R_i|) Σ_{x∈R_i} ψ( W_p(x; θ_p) I_j(W_ij(x; θ_g)) − I_i(x) )

This data term is solved jointly for the geometric and photometric parameters θ_g, θ_p in a Gauss–Newton framework [16]. As for the geometric term, shading variations over the surface are constrained by a uniform Laplacian on the photometric warp.
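A sketch of the photometrically augmented residual, under the assumption, consistent with the equation above, that the per-pixel scale factors are interpolated from the per-vertex values ρ_l and multiply the warped texture:

```python
import numpy as np

def photometric_residual(warped_ref, current, Bp, theta_p):
    """Residual of the augmented data term: each warped reference pixel is
    scaled by W_p = Bp @ theta_p before comparison. Bp is a
    (num_pixels x |V|) matrix of barycentric weights; theta_p holds the
    per-vertex scale factors rho_l."""
    scale = Bp @ theta_p                      # one intensity scale per pixel
    return scale * warped_ref.ravel() - current.ravel()
```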

    3.3 Expected problems

To design meaningful interaction tools, it is necessary to understand what problems are to be expected from a purely automatic solution for determining the meshes M_1, ..., M_N. There are two distinct sources of error:

· The assumption that change can be modeled by geometric displacement (and smooth photometric adjustment) does not hold for most real-world scenarios. Since the appearance of the content in R_0 might vary significantly throughout the sequence (e.g., reflections, shadows, ...), the minimum of the objective function may not be close to zero.

· Every automated algorithmic solution has its own inherent problems. In our case, the optimization is sensitive to the initialization of the meshes M_i, and while being easy to implement, a global Laplacian term that assumes constant smoothness inside the region of interest cannot model complex motion properties of a surface.

We use a number of heuristics to address these anticipated problems. First and foremost, we make use of the a priori knowledge that visual, and therefore geometric, change between adjacent frames is small. Therefore, starting at M_1, we iteratively determine M_i in the sequence using M_{i−1} as initialization for the optimization. Furthermore, assuming that M_{i−1} describes an almost perfect warping function to the reference frame, we use I_{i−1} (and therefore M_{i−1}) rather than I_0 as an initial image reference for optimizing M_i. However, to avoid error propagation (i.e., drift), we optimize with reference to I_0 (and therefore M_0) in a second pass, using the result of the first pass as initialization. To deal with large frame-to-frame offsets, we run the optimization on a Gaussian image pyramid, starting at low resolution. This problem can also be addressed by incorporating keypoint or region correspondences into the initialization or the optimization term [20, 21], an approach we adopt in a variety of ways for user interaction below. We address the problem of noisy data and model deviations by applying robust norms in E_D and E_L. Those problems have also been addressed by other authors by introducing a data-based adaption of the smoothness term to rigid motion [22]. In some cases, violations of the brightness constancy constraint can be handled effectively by introducing gradient constancy into E_D [23].
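These heuristics translate into a simple two-pass driver loop. The sketch below assumes a hypothetical track_frame helper that runs the coarse-to-fine optimization of Section 3 and returns the optimized mesh:

```python
def track_sequence(frames, mesh0, track_frame, n_scales=4):
    """Two-pass sequential tracking. `track_frame(ref_img, cur_img, mesh_init,
    n_scales)` is an assumed helper performing the Gauss-Newton optimization
    on a Gaussian pyramid with n_scales levels."""
    meshes = [mesh0]
    # Pass 1: frame-to-frame tracking. Using the previous frame as reference
    # exploits small inter-frame motion, but accumulates drift.
    for i in range(1, len(frames)):
        meshes.append(track_frame(frames[i - 1], frames[i], meshes[i - 1], n_scales))
    # Pass 2: re-register every frame against the reference frame I_0,
    # initialized with the pass-1 result, to remove accumulated drift.
    for i in range(1, len(frames)):
        meshes[i] = track_frame(frames[0], frames[i], meshes[i], n_scales)
    return meshes
```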

    4 Assessment and interaction tools

While the above optimization scheme generally yields satisfactory results, sometimes the global adjustment of parameters leaves tracking errors in a subset of frames. As our framework iteratively determines meshes M_i, it allows online assessment of the results. Therefore, whenever a problem is apparent, the user can stop the process and interact directly with the algorithms using the tools described below. The optimization for a frame can be iteratively rerun based on additional input until a desired solution is reached. The user can thus also decide what level of quality is needed and only initiate interaction if the currently determined solution is insufficient. Although each mesh M_i is ultimately registered to the reference image, reoptimization based on user input can lead to sudden jumps in the tracking. Such interruptions can easily be detected in texture space, and can usually be dealt with by back-propagating the improved result and reoptimizing.

To be able to make use of established post-production tools, we have implemented our tracking framework as a plugin for the industry-standard compositing software NUKE [2]. For illustrations in this section we use the public Face Capture dataset [24], while additional results on other sequences are presented in Section 5 and in the accompanying video in the Electronic Supplementary Material (ESM). Assessment is best done by playing back the sequences.

    4.1 Parameter adjustment

A number of concepts introduced in the previous section can be fine-tuned by the user by adjusting a number of settings. While some parameters need to be fixed before tracking starts (e.g., the topology of the 2D mesh), most of them can be adjusted individually per frame. This includes the choice of the norm-like function, the λ parameters of the objective function in Eq.(3), the scales of the image pyramid, the images used as reference, and the mesh data to be propagated. This per-frame application implies that readjustment of a single frame with different parameter settings is possible, making the parameter adjustment truly interactive. In this context, the data propagation mode is an essential parameter: while the default mode is to propagate tracking data from the previous frame (i.e., to use M_{i−1} as initialization for M_i), if results from previous iterations are to be refined, M_i itself is used as initialization. Given the implementation in a post-production framework, keyframe animation of the parameters using a number of interpolation schemes, and linking them to other parameters, are useful mechanisms. A possible application of this feature would be to link the motion of a known camera to the bottom scale of the image pyramid.

    4.2 Texture space assessment

We call the deformation of image content in R_i to the corresponding position in R_0 the texture unwrap of R_i. Consequently, we say that the image information deformed in this way is represented in texture space and that an unwrapped sequence consists of a texture unwrap of all frames in the sequence (see rows 2–4 of Fig.1). This terminology is derived from the assumption that the input sequence can be seen as a rendering of textured objects and that the reference frame provides a direct view onto the object of interest, so that image coordinates are interpreted as the coordinates of the texture. While the reference frame is usually chosen to provide good visualization, any mapping of those coordinates can also be used as texture space. Conversely, we say that image information (e.g., an overlay) that is mapped from R_0 to R_i is match moved (see row 5 in Fig.1).
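Operationally, unwrapping a frame is backward resampling through the estimated warp. A minimal numpy/scipy sketch, assuming warp_0i maps an (N, 2) array of reference coordinates (row, col) into the current frame:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unwrap_to_texture_space(current, warp_0i, ref_shape):
    """Texture unwrap: for every pixel x0 of the reference region, sample the
    current frame at W_0i(x0), so that a perfect track yields a static image."""
    h, w = ref_shape
    rr, cc = np.mgrid[0:h, 0:w]
    coords0 = np.column_stack([rr.ravel(), cc.ravel()]).astype(float)
    coords_i = warp_0i(coords0)                  # positions in the current frame
    samples = map_coordinates(current, coords_i.T, order=1, mode='nearest')
    return samples.reshape(h, w)
```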

Traditionally, results are evaluated by watching a composited sequence incorporating match-moved overlays, like the content of the reference frame, a checkerboard, or even the final overlay. In a way, this approach makes sense, since the result is judged by applying it to its ultimate purpose. However, since it is hard to visually separate the underlying scene motion from the tracking, it is hard for a user to localize, quantify, and correct an error even if it can be seen that "something is off". So while viewing the final composite is a good way to judge whether the tracking quality is sufficient, it is not a good reference to assess or improve the quantitative tracking result: if presented with the match-moved content in row 5 of Fig.1 in a playback of the whole sequence, an untrained observer would find it hard to point out possible errors. Note that the content of the reference region is moving and deforming considerably, making the chosen framing the smallest possible that includes all motion.


Fig.1 Visualizations of tracking results. First row: samples from the public Face Capture sequence [24]. Rows 2–4: the unwrapped texture with and without shading compensation, and composited onto the reference frame. Bottom row: a match-moved semi-transparent checkerboard overlay.

The main benefit of assessment in texture space is the static appearance of correct results. When playing back an unwrapped sequence, the user can zoom in and focus on a region of interest in texture space, and does not have to follow the underlying motion of the object in the scene. In this way, any change can easily be localized and quantified even by an untrained observer. Rows 2–4 of Fig.1 illustrate different visualizations of the unwrapping space. The influence of photometric adjustment (estimated as part of our optimization) becomes very clear when comparing rows 2 and 3. Row 4 shows how layering the unwrapped texture atop the reference frame can help to detect continuity issues in regions bordering the reference region (e.g., on the right side of frame 200).

Fig.2 Assessment tools. Top: frame 156, bottom: frame 200 of Fig.1.

While a side-by-side comparison is not particularly well suited for assessment, errors are highlighted very clearly in Fig.2. The depicted visualizations make use of a variety of tools available in established post-production software for assessing change between images, mainly designed for color grading, sequence alignment, and stereo film production. The first three columns show comparisons between the reference image and the texture unwrap of the current image. For the shifted difference, we used the shading-compensated unwrap to better highlight the geometric tracking issues. This illustration shows the difference between the two images with a median grey offset, highlighting both negative and positive outliers. This is particularly useful, as these positive and negative regions must be aligned to yield the correct tracking result. Being part of our objective function, image differences are a perfect way of visualizing change. Furthermore, basic image analysis instruments like histograms and waveform diagrams can provide useful additional visualization to detect deviations in a difference image. A wipe allows the user to cut between the images at arbitrary positions, showing jumps if they are not perfectly aligned. Blending the same two images should result in an exact copy of the input; therefore, if the blending factor is modulated, a semi-transparent warping effect indicating the apparent motion between the two images can be observed. The last column in Fig.2 illustrates a reference point assessment tool implemented as part of the correspondence tool introduced below. The user can specify the position of a distinct point x_ref in the reference frame, which is then marked by a white point. As the apparent position of any texture unwrap of the corresponding image data should fall in exactly the same location, visualizing this position as a point overlay throughout the sequence is very helpful for detecting deviations. It can also be used in combination with any of the other assessment tools. If a user detects a deviation, any of the tools below can be applied to correct the error by aligning the content with the overlaid point, without the need to revisit the actual reference image data.
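The shifted difference is the simplest of these visualizations; a minimal sketch, assuming float images in [0, 1]:

```python
import numpy as np

def shifted_difference(a, b):
    """Difference image with a mid-grey offset: identical pixels map to 0.5,
    while positive and negative deviations brighten or darken symmetrically."""
    return np.clip(0.5 + 0.5 * (a - b), 0.0, 1.0)
```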

In the following discussion of interaction tools, it is in some cases required to transform directional vectors from coordinates in texture space to those in the current frame. As the warping function is a nonlinear mapping, this transformation is achieved by mapping the endpoints of the directional vector:

    d_i = W_0i(x_0 + d_0; θ) − W_0i(x_0; θ)        (5)
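In code, Eq.(5) is a one-liner; warp is assumed to implement W_0i for single points:

```python
def transform_direction(warp, x0, d0):
    """Eq.(5): map both endpoints of a directional vector from texture space
    into the current frame and take their difference."""
    return warp(x0 + d0) - warp(x0)
```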

    4.3 Adjustment tool

Fig.3 Adjustment tool. The user drags the content to the correct location in texture space. For each mouse-move event, real-time optimization is triggered and the result is updated. The radius of influence (i.e., the affected region) is marked in red.

The adjustment tool is an interactive user interface to correct an erroneous tracking result M_i for a single frame (see Fig.3). The tool produces results in real time, and any of the assessment tools introduced above can be used for visualization. To initiate a correction, the user clicks on misplaced image content x_start in the unwrapped texture and drags it to the correct position x_end in the reference frame. Note that both of these coordinates are defined in texture space. Using the mouse wheel, the user can define an influence radius r, visualized by a translucent circle around the cursor, to determine the area that is influenced by the local adjustment. Whenever a mouse-move or release event is triggered, the current position is set to be x_end and the mesh, and therefore the assessment visualization, is updated, so the user can observe the correction in real time. This interactive method is well suited to correcting large scale deviations from the desired tracking result, e.g., if the optimization is stuck in a local minimum. However, as it does not incorporate the image data, fine details are best left to the data-based optimization. So, while this corrected result could be kept as it is, it makes sense to use it as initialization for another data-based optimization pass.

For the algorithmic correction of the mesh coordinates M_i of the current frame, the points x_start and x_end are transformed for processing using the warping function W_0i that is based on M_i at the time the correction is initiated. As x_start is the position of the misplaced image data in the texture unwrap and x_end is the position of the image data in the reference frame, correspondence of the relevant vertices in M_i can be established via Eq.(5). To achieve the transformation, the vertex positions V_i of mesh M_i are adjusted by solving a set of linear equations. The parameters to be found are again the offsets from the initial to the modified mesh vertices, θ = Δv, as defined by the modification of the mesh. The adjustment term consists of a single equation for the two coordinate directions:

    B(x_start) θ = x_end − x_start

where B contains barycentric coordinates, and the propagation of the adjustment is facilitated by applying the uniform Laplacian L as defined above. The radius of influence is modeled using a damping identity matrix scaled by an inverse Gaussian G whose standard deviation is set according to the influence radius r. With these three terms, the new vertex positions can be obtained by solving:

    [ B(x_start) ]        [ x_end − x_start ]
    [   λ_L L    ]  θ  =  [        0        ]
    [     G      ]        [        0        ]

These 4|V| + 2 linear equations are independent of the image data and can be solved in real time. Note that the equations in x and y are independent of each other and can be solved separately.
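The stacked system can be solved as ordinary linear least squares, one solve per coordinate direction. In the sketch below, the exact falloff used for the damping matrix G is an assumption; the paper only states that it is an identity matrix scaled by an inverse Gaussian:

```python
import numpy as np

def adjust_mesh(B_start, L, G, delta, lambda_L=1.0):
    """Solve the adjustment-tool system, one least-squares problem per
    coordinate direction:
        B_start @ dv = delta_k    (move the clicked content)
        lambda_L * L @ dv = 0     (Laplacian propagation over the mesh)
        G @ dv = 0                (damping outside the influence radius)
    B_start: (1 x |V|) barycentric weights of x_start; L: uniform mesh
    Laplacian (|V| x |V|); G: diagonal damping, e.g.
    np.diag(1 - np.exp(-dist2 / (2 * r**2))) -- an assumed falloff shape."""
    n = L.shape[0]
    A = np.vstack([B_start, lambda_L * L, G])
    offsets = []
    for d in delta:                  # delta: desired 2D displacement, per axis
        rhs = np.concatenate([[d], np.zeros(2 * n)])
        dv, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        offsets.append(dv)
    return np.column_stack(offsets)  # |V| x 2 vertex offsets
```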

The main benefit of using this tool in texture space is that assessment and interaction can be performed locally. Only small cursor movements are required to correct erroneous drift, and iterative fine-tuning can easily be performed in combination with the tools shown in Fig.2.

    4.4 Correspondence tool

The correspondence tool lets a user mark the location of a distinct point x_ref ∈ R_0 inside the reference region (the white points in Figs. 2 and 4). As mentioned above, the visualization of this location stays static in texture space; it has proven to be a very powerful assessment tool. A correspondence is established by marking the correct position x_cur of the feature in the texture unwrap of the current frame I_i (the green point in Fig.4). Translated to the adjustment tool, x_cur is the data found in a wrong location (i.e., x_start) and x_ref is the position where it should be moved to (i.e., x_end). It should be noted that x_cur marks the location of image data inside I_i, rather than a position in texture space. So whenever M_i changes for any reason, the location of x_cur has to be adapted. An arbitrary number of correspondences between the reference frame and the current frame can be set. To avoid confusion, the visualizations of corresponding points are connected by a green line. Note that as the sparse correspondences represent static image locations and are therefore independent of tracking results, they can also be derived from an external source, e.g., by using a point tracker in a host application. Naturally, they can only be visualized in texture space if tracking data is available.

Fig.4 Correspondence tool. Top: tool applied to the first row of Fig.2. Bottom: result of data-based optimization incorporating the correspondence.

The alignment based on those correspondences extends the adjustment term introduced for the adjustment tool to incorporate multiple equations for the correspondence vectors pointing from x_ref to x_cur.

Finding a purely geometric solution is again possible and can make sense for single frames containing very unreliable data (e.g., strong motion blur). However, in most cases a more elegant approach is to include the correspondences as additional constraints directly in the image data-based optimization. As the mesh changes in each iteration, the correspondence vectors have to be updated each time using Eq.(5). However, as mentioned above, the location W_0i(x_cur) is constant in the current frame and is therefore only calculated before the first iteration, based on the initial mesh M_i. The correspondence term E_C is added to the objective function defined in Eq.(3):

    θ̂ = argmin_θ  E_D(θ) + λ_L E_L(θ) + λ_C E_C(θ)
The parameter λ_C can be used to communicate the accuracy of the provided correspondence. For the results in Fig.4, the confidence in this accuracy was set low, giving more relevance to the underlying data. This is reflected by the slight misalignment of the input points, but correct alignment of the data.
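A sketch of how such soft constraints could enter the optimization; the residual form, one 2-vector per correspondence weighted by λ_C, is an assumption consistent with the description above:

```python
import numpy as np

def correspondence_residuals(warp_0i, x_refs, p_curs, lam_C=0.1):
    """E_C residuals: each user hint asks that the warp maps the reference
    point x_ref onto the marked image location p_cur (computed once from the
    initial mesh). A small lam_C treats the hints as rough guidance; a large
    lam_C enforces them almost as hard constraints."""
    res = [warp_0i(xr) - pc for xr, pc in zip(x_refs, p_curs)]
    return np.sqrt(lam_C) * np.concatenate(res)
```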

The main benefit of applying this tool in texture space is again that assessment and interaction can be performed locally. Communicating drift by specifying the location of content deviating from a reference location has proven to be a very natural process that only requires small cursor movements.

    4.5 Influence and smoothness brushes

The influence and smoothness brushes are both painting tools that allow the user to specify characteristics of image regions in the sequence. The influence brush provides a per-pixel (i.e., per-equation) scaling λ_Dk of the data term, while the smoothness brush represents a per-vertex (i.e., per-equation) scaling λ_Lk of the Laplacian term. In both cases this can be seen as an amplification (>1) or weakening (between 0 and 1) of the respective λ parameter for the specific equation. Visualization is based on a simple color scheme: green stands for amplification, magenta represents weakening, and the transparency determines the magnitude. For users who prefer to create the weights independently of the plugin, e.g., by using tools inside the host application, an interface for influence and smoothness maps containing the values for λ_Dk and λ_Lk, respectively, is provided.
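In a least-squares formulation, such per-equation λ values amount to row-wise scaling of the corresponding system. A minimal sketch; the square root appears because each weight should scale the squared residual exactly once:

```python
import numpy as np

def apply_brush_weights(J, r, weights):
    """Scale each equation of a least-squares system J dtheta = -r by a
    painted weight (per-pixel for the data term, per-vertex for the Laplacian
    term): values > 1 amplify a constraint, values in (0, 1) weaken it."""
    w = np.sqrt(np.asarray(weights, dtype=float))
    return J * w[:, None], r * w
```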

The main application of the influence brush is to weaken the influence of data in subregions that are very unreliable or erroneous. This can be a surface characteristic, e.g., the blinking eye in Fig.5, or a temporary external disturbance like occluding objects or reflections. The smoothness brush can be used to model varying surface characteristics or to amplify regularization for low-texture areas. A typical application for varying dynamics is the bones and joints of an articulated object.

If actual surface properties are to be modeled, or an expected disturbance occurs in the same part of the surface throughout the sequence, those characteristics can be set in I_0 and propagated throughout the tracking process. The idea behind the propagation is that a "verified" tracking result exists up to the frame preceding the currently processed one. Therefore, mapping the brush information from the reference frame to the previous one naturally propagates the previously determined surface characteristics to the correct location. Figure 5 illustrates how the influence brush can be applied in texture space to tackle a surface disturbance caused by a blinking eye.

For occluding objects, vanishing surfaces, or temporary disturbances (e.g., motion blur or highlights), the brushes can be set for individual frames. Generally, propagation does not work for these use cases, since the disturbance is not bound to the surface. However, in texture space the actual motion of a disturbance is usually very restricted. Therefore, propagation in combination with slight user adjustments creates a very efficient workflow. Established brush tools in combination with keyframing, or even tracking of the overlaying object in the host application, can also be used for this kind of correction.

Fig.5 Influence brush. Left to right: the reference region, an erroneous result, application of the influence brush to weaken the influence of the image data, and the corrected frame.

    5 Results

In a first experiment, we evaluated the capability of the proposed tools to correct tracking errors caused by large frame-to-frame motion. To do so, we increased the displacements by dropping frames from the 720×480 pixel Face Capture sequence [24], originally designed for post-production students to master their skills on material that is representative of real-world challenges. While the original sequence was tracked correctly, tracking breaks down at displacements of around 50 pixels. Figure 6 illustrates for one frame how the correspondence tool can be used to correct such tracking errors with minimal intervention. In this example, our automatic approach using default parameters can track from frame 1 directly to frames 2–12. However, trying to directly track to frame 13 fails. A single manual approximate correspondence provided by the user effectively solves the problem.

Fig.6 Correcting tracking errors caused by large displacements. Top left to bottom right: part of reference frame 1 with the tracking region marked, frame 12 with tracking from the reference still working, frame 13 with tracking directly from frame 1 failing, provision of a single correspondence as a hint (green), correct tracking with the additional hint, and the estimated displacement vector for the corrected point.

The remaining results in this section were created using production-quality 4K footage that we are releasing as open test material alongside this publication. The sailor sequence depicted in Fig.7 shows the flexing upper arm of a man. The post-production task we defined was to stick a temporary tattoo onto the skin of the arm. A closeup of the effect is depicted below the samples. Note the strongly non-rigid deformation of the skin and therefore of the anchor overlay. Also note the change in shading on the skin; it is estimated and applied to the tattoo. The texture unwrap (without photometric compensation) in Fig.8 highlights how the complex lighting and surface characteristics lead to very different appearances of the skin throughout the sequence. While geometric alignment and photometric properties were estimated fairly accurately, the shifted difference images depicted in Fig.9 show considerable texture differences.

Fig.7 Samples from the sailor sequence (100 frames) and a closeup of the same samples including the visual effect.

    Fig.8 Texture unwrap of the same samples as in Fig.7.

Fig.9 Shifted differences between the unwrapped samples of Fig.7 and the reference region.

Fig.10 Problematic region in texture space and a high-contrast closeup for better assessment.

    Fig.11 Closeup of a problematic tracking region in the final composite.

The reason is that the estimation of photometric parameters is limited to smooth, low-frequency shading properties and cannot capture fine details like the shadows of the bulging skin pores in this example. However, on playing back the unwrapped sailor sequence, it can be observed that the overall surface stays fairly static, suggesting that the tracking result is adequate. Nevertheless, there is a distinct disturbance for a few frames in a confined region of the image. Figure 10 shows, from left to right, the reference image, the unwrap just before the disturbance starts, the unwrapped frame of maximum drift, and the corrected version. The drifting structure is a vertical dark ridge passing through the overlaid reference point. The ridge drifts about 5 pixels to the right. As this feature cannot be distinguished in the reference image, we use an adjacent frame as reference for the correction. The issue can be solved with both the smoothness brush and the adjustment tool. Applying the smoothness brush is particularly easy, as the problematic region is very confined and can simply be covered by a small static matte in texture space. The adjustment tool can also easily be applied in a single frame in texture space, and the corrected result can be propagated to eliminate the drift in all affected frames. As there is considerable global motion in the sequence and the issue is very confined and subtle, we found that assessment and interaction in texture space is the only effective and efficient way to detect, quantify, and solve the problem.

The wife sequence depicted in Fig.12 shows a woman lifting her head and wiping hair out of her face. The post-production task we defined was to age her by painting wrinkles on her face. The final effect is depicted below the samples. Note the opening of the mouth and eyes, the occlusion by the arm, and the change in facial expression. Good initial results can be achieved for the tracking of the skin. However, the opening of the mouth, the blinking of the eyes, and the motion of the arm create considerable problems. Due to the confinement of the disturbance, both the influence and the smoothness brush can be applied for the mouth and the eyes; see Fig.5 for a similar use case. In this specific case, an adjustment of the global smoothness parameter λ_L adequately solved the issue. One distinct problem to be solved is the major occlusion by the wife's arm where she is wiping the hair out of her face. To have an unoccluded reference texture, tracking was started at the last frame and performed backwards. Figure 13 highlights that while most of the sequence tracks perfectly well, major disturbances occur at the end of the sequence. To solve this problem, the influence brush is applied in texture space. For this, we used the built-in Roto tool in NUKE with only 5 keyframes. The resulting matte can be reused in compositing to limit the painted overlay to the surface of the face. Figure 14 illustrates the application of the influence brush to two problematic frames. Note the improvements around the mouth and on the cheek.

Fig.12 Samples from the wife sequence (frames 100, 8, and 1) and the same samples including the visual effect.

Fig.13 Unwrapped samples of the wife sequence (frames 100, 50, and 1).

In order to evaluate the effectiveness of the proposed tools and workflows, post-production companies compared the NUKE plugin with other existing commercial tools in real-world scenarios. Different usability criteria were rated on a 5-point scale and passed back together with additional comments. The resulting feedback showed that the proposed method was rated superior to the other tools. Most criteria were judged slightly better, while "usefulness" and "overall satisfaction" were rated clearly higher, indicating that consideration of user hints in deformable tracking can enhance real visual effects workflows.

Fig.14 Texture unwrap of samples 8 and 1 of the wife sequence, application of the influence brush in texture space, and the resulting tracking improvement.

    6 Conclusions

We have introduced a novel way of assessing and interacting with surface tracking results and algorithms based on unwrapping a sequence to texture space. To prove applicability to the relevant use cases, we have implemented our approach as a plugin for an established post-production platform. Assessing the quality of tracking results in texture space is equivalent to detecting geometric (and photometric) changes in a played-back sequence. We found that this is a simple task even for an untrained casual observer, and that established post-production tools can help to pinpoint even minimal errors. Therefore, assessment has proven to be very effective. The application of user interaction tools directly in texture space, in combination with iterative re-optimization of the result, has proven to be intuitive and effective. The most striking benefits of applying tools in texture space are that interaction can be focused on a very localized area and that only small cursor movements are required to correct errors. We believe that there is high potential in pursuing both research and development in texture space assessment and user interaction for tracking applications.

    Acknowledgements

This work was partially funded by the German Science Foundation (Grant No. DFG EI524/2-1) and by the European Commission (Grant Nos. FP7-288238 SCENE and H2020-644629 AutoPost).

Electronic Supplementary Material Supplementary material is available in the online version of this article at http://dx.doi.org/10.1007/s41095-017-0089-1.

    References

[1] Imagineer Systems. mocha Pro. 2016. Available at http://www.imagineersystems.com/products/mochapro.

[2] Foundry. NUKE. 2016. Available at https://www.foundry.com/products/nuke.

[3] The Pixelfarm. PFTrack. 2016. Available at http://www.thepixelfarm.co.uk/pftrack/.

[4] Klose, F.; Ruhl, K.; Lipski, C.; Magnor, M. Flowlab—An interactive tool for editing dense image correspondences. In: Proceedings of the Conference for Visual Media Production, 59–66, 2011.

[5] Ruhl, K.; Eisemann, M.; Hilsmann, A.; Eisert, P.; Magnor, M. Interactive scene flow editing for improved image-based rendering and virtual spacetime navigation. In: Proceedings of the 23rd ACM International Conference on Multimedia, 631–640, 2015.

[6] Zhang, C.; Price, B.; Cohen, S.; Yang, R. High-quality stereo video matching via user interaction and space–time propagation. In: Proceedings of the International Conference on 3D Vision, 71–78, 2013.

[7] Re:Vision Effects. Twixtor. 2016. Available at http://revisionfx.com/products/twixtor/.

[8] Wilkes, L. The role of Ocula in stereo post production. Technical Report. The Foundry, 2009.

[9] Chrysos, G. G.; Antonakos, E.; Zafeiriou, S.; Snape, P. Offline deformable face tracking in arbitrary videos. In: Proceedings of the IEEE International Conference on Computer Vision Workshops, 1–9, 2015.

[10] Rother, C.; Kolmogorov, V.; Blake, A. "GrabCut": Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics Vol. 23, No. 3, 309–314, 2004.

[11] Liao, M.; Gao, J.; Yang, R.; Gong, M. Video stereolization: Combining motion analysis with user interaction. IEEE Transactions on Visualization & Computer Graphics Vol. 18, No. 7, 1079–1088, 2012.

[12] Wang, O.; Lang, M.; Frei, M.; Hornung, A.; Smolic, A.; Gross, M. StereoBrush: Interactive 2D to 3D conversion using discontinuous warps. In: Proceedings of the 8th Eurographics Symposium on Sketch-Based Interfaces and Modeling, 47–54, 2011.

[13] Doron, Y.; Campbell, N. D. F.; Starck, J.; Kautz, J. User directed multi-view-stereo. In: Computer Vision–ACCV 2014 Workshops. Jawahar, C.; Shan, S. Eds. Springer Cham, 299–313, 2014.

[14] Bartoli, A.; Zisserman, A. Direct estimation of non-rigid registrations. In: Proceedings of the 15th British Machine Vision Conference, Vol. 2, 899–908, 2004.

[15] Zhu, J.; Van Gool, L.; Hoi, S. C. H. Unsupervised face alignment by robust nonrigid mapping. In: Proceedings of the IEEE 12th International Conference on Computer Vision, 1265–1272, 2009.

[16] Hilsmann, A.; Eisert, P. Joint estimation of deformable motion and photometric parameters in single view videos. In: Proceedings of the IEEE 12th International Conference on Computer Vision Workshops, 390–397, 2009.

[17] Gay-Bellile, V.; Bartoli, A.; Sayd, P. Direct estimation of nonrigid registrations with image-based self-occlusion reasoning. IEEE Transactions on Pattern Analysis & Machine Intelligence Vol. 32, No. 1, 87–104, 2010.

[18] Seibold, C.; Hilsmann, A.; Eisert, P. Model-based motion blur estimation for the improvement of motion tracking. Computer Vision and Image Understanding, DOI: 10.1016/j.cviu.2017.03.005, 2017.

[19] Hilsmann, A.; Schneider, D. C.; Eisert, P. Image-based tracking of deformable surfaces. In: Object Tracking. InTech, 245–266, 2011.

[20] Pilet, J.; Lepetit, V.; Fua, P. Real-time nonrigid surface detection. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1, 822–828, 2005.

[21] Brox, T.; Malik, J. Large displacement optical flow: Descriptor matching in variational motion estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 33, No. 3, 500–513, 2011.

[22] Wedel, A.; Cremers, D.; Pock, T.; Bischof, H. Structure- and motion-adaptive regularization for high accuracy optic flow. In: Proceedings of the IEEE 12th International Conference on Computer Vision, 1663–1668, 2009.

[23] Brox, T.; Bruhn, A.; Papenberg, N.; Weickert, J. High accuracy optical flow estimation based on a theory for warping. In: Computer Vision–ECCV 2004. Pajdla, T.; Matas, J. Eds. Springer Berlin Heidelberg, 25–36, 2004.

[24] Hollywood Camera Work. Face Capture dataset. 2016. Available at https://www.hollywoodcamerawork.com/tracking-plates.html.
