
3D modeling and motion parallax for improved videoconferencing

Computational Visual Media, 2016, Issue 2



    Research Article


Zhe Zhu1, Ralph R. Martin2, Robert Pepperell3, and Alistair Burleigh3
© The Author(s) 2016. This article is published with open access at Springerlink.com

Abstract  We consider a face-to-face videoconferencing system that uses a Kinect camera at each end of the link for 3D modeling and an ordinary 2D display for output. The Kinect camera allows a 3D model of each participant to be transmitted; the (assumed static) background is sent separately. Furthermore, the Kinect tracks the receiver's head, allowing our system to render a view of the sender depending on the receiver's viewpoint. The resulting motion parallax gives receivers a strong impression of 3D viewing as they move, yet the system only needs an ordinary 2D display. This is cheaper than a full 3D system, and avoids disadvantages such as the need to wear shutter glasses or VR headsets, or to sit in the particular position required by an autostereo display. Perceptual studies show that users experience a greater sensation of depth with our system compared to a typical 2D videoconferencing system.

Keywords  naked-eye 3D; motion parallax; videoconferencing; real-time 3D modeling

1 TNList, Tsinghua University, Beijing 100084, China. E-mail: ajex1988@gmail.com.

2 School of Computer Science & Informatics, Cardiff University, UK. E-mail: ralph.martin@cs.cardiff.ac.uk.

3 Cardiff School of Art & Design, Cardiff Metropolitan University, UK. E-mail: R. Pepperell, rpepperell@cardiffmet.ac.uk; A. Burleigh, aburleigh@cardiffmet.ac.uk.

Manuscript received: 2015-11-17; accepted: 2015-12-15

    1 Introduction

The way people communicate remotely has evolved as technology has developed. The telegraph and later the telephone allowed information to be transmitted electronically instead of by a physical written letter, and also allowed remote communication in real time. Modern tools such as Microsoft Skype and Apple FaceTime further improve telepresence for remote communication, allowing both voice and video so that remote participants can hear and see each other.

The history of videoconferencing dates back to the 1930s, when the German Reichspostzentralamt video telephone network connected Berlin and several other German cities via coaxial cables. Rosenthal's very early work [1] already considered the issue of transmitting eye contact during video broadcast. Various works have also described multiparty videoconferencing [2-6], in which it is important to preserve gaze directional cues to see who is speaking.

Humans have long attempted to record their visual experience of three-dimensional space on a flat pictorial plane, from early cave art, through centuries of painting and drawing, to photography and high-definition digital media. Although most pictures are presented on a two-dimensional surface, they are full of differing visual cues that allow us to infer depth [7]. Occlusion, lighting, object shading, stereopsis, and parallax are all used by the visual system to perceive depth in the real world, and many of these can be replicated in pictures to create the illusion of spatial depth on a flat surface [8].

Artists at Cardiff School of Art & Design have been exploring new methods of generating depth cues within the context of digital media, some of which are based on discoveries made by earlier artists about the nature of visual perception and how to depict it [9]. By observing fundamental features of visual experience, such as the size, shape, and distribution of objects in the visual field, they have established that pictures generated by artistic methods can outperform ones generated by conventional geometric techniques in terms of representational accuracy [10].

Since the development of linear perspective in the 15th century, artists have sought ways to create greater depth in their work [11]. Most imaging technology today uses standard principles of linear perspective to represent space on a flat picture surface [12]. Videoconferencing solutions are no exception, with images of the participants normally presented on flat monitors in geometrical perspective. However, linear perspective images are normally generated from a fixed, monocular viewpoint, while natural vision is normally experienced with two mobile eyes [13]. The development of new sensing technologies presents an opportunity to enhance the sense of space in flat images by integrating more naturalistic cues into them. This paper concerns the use of real-time, user-responsive motion parallax for videoconferencing, combined with simple 3D modeling, with the goal of improving the sense of immersion and the quality of the user experience. Other work has also considered using motion parallax cues, and we discuss it further in Section 2.

An alternative way of providing 3D cues for the user on a flat 2D display is stereopsis. However, many stereopsis systems require users to wear shutter glasses, which may be acceptable when watching 3D movies, but not in videoconferencing, as participants rely on seeing each other's faces unobstructed for full communication. Alternatively, autostereo displays may be used, but these require the user to sit in a fairly precisely controlled location. While this may be achievable for videoconferencing, as head motion is usually limited, such systems are still costly.

Our system is intended for two-person face-to-face videoconferencing, so we need not consider the gaze direction problem present in multiparticipant systems [3, 5]. Each end of the link uses a Kinect camera for data acquisition, an ordinary 2D display for output, and a commodity PC. The Kinect camera allows a 3D model of each sender's head and shoulders to be transmitted; the background is sent separately. Furthermore, the Kinect tracks each receiver's head, allowing the system to render a view of the sender according to the receiver's viewpoint.

We assume that users only make small movements during videoconferencing, such as slight swaying of the body and shaking of the head. We are only interested in transmitting the head and shoulders, and do not consider hand or other body movements. We also assume that the background is static, allowing us to model foreground and background separately, and to ignore any changes to the background after the initial setup.

A key idea is that we do not aim to model the foreground and background accurately in 3D, which would lead to high computational costs in both time and space, and is also unlikely to be robust. Instead, we aim to model the foreground and background with sufficient realism to convey a more convincing sense of depth. We do not just layer the foreground and background as in Refs. [14, 15], as such models are too flat. Neither do we use KinectFusion [16, 17] to do the modeling, even though at first it might seem suitable, for two reasons. Firstly, models generated by KinectFusion are noisy, with gaps in the surface and edges that are not smooth (see the top row of Fig. 1). Secondly, the resulting models are large and would place a heavy burden on the network; the amount of data to be transmitted should be kept as small as reasonably possible. Instead, we use a robust, realistic, but lightweight parameterized model customized to each participant. Our model typically has fewer than 1000 vertices. Compared to Ref. [18], which transmits whole depth frames, our model requires much less network bandwidth.

Fig. 1  Top: KinectFusion modeling result, from various viewpoints. The model is very noisy and is unsuited to videoconferencing. Bottom: our modeling result. Our smoother parametric model is better suited to videoconferencing.

The main technical contribution of our work, other than a demonstration of the advantages of using motion parallax for videoconferencing, is a practical system for doing so. It is based on a parametric model of the head and shoulders and allows videoconferencing based on commodity hardware. The model can cope with the high levels of noise in Kinect data, and is lightweight yet sufficiently realistic. Our approach allows our system to be more robust to noise than other generic models, while providing more realistic results than simply layering the foreground and background.

    2 Related work

2.1 Motion parallax and its application in videoconferencing

Motion parallax is an important kinetic monocular depth cue that provides the visual system with information about the configuration of space and objects in the surrounding physical environment [8]. Motion parallax works by comparing the relative movement of objects in space; e.g., as a viewer's head rotates or moves through space, objects that are closer appear to move more quickly than objects that are further away. This allows the viewer to form accurate judgements about both their current position in the world and the relative locations of objects around them.

Lee [19] devised a system which tracked the user's head position with a Nintendo Wii remote to determine a suitable camera position for a 3D scene in real time. The resulting shift of the digital space in response to the user's head position produces a powerful depth illusion for the viewer, which in Lee's words "effectively transforms your display into a portal to a virtual environment".

Apple's iOS 7 and later operating systems include a motion parallax effect that moves the icons and tabs on the screen very slightly in response to phone or tablet motion from the user [20]. This synthetic motion parallax again creates an enhanced feeling of digital space as the layers move separately.

Applying the same kind of depth separation and 3D modeling approach to a videoconferencing application is potentially promising. However, the complexity of modeling objects in depth in a real-time application, and with sufficient quality to be visually believable (including moving facial features), raises complex technical issues.

Harrison and Hudson [14] proposed a pseudo-3D videoconferencing system based on a commodity webcam. They initially capture a background image and then extract the foreground sender in real time during conferencing. The sender and background are layered at different depths, and a virtual camera is placed at a 2D position corresponding to the x-y tracked position of the receiver's head. To overcome imperfections in the edges of the foreground, simple Gaussian blurring is used along the composition boundary. The system provides some motion parallax but is not particularly realistic, as it gives the appearance of two planes in relative motion.

Zhang et al. [15] proposed a similar system, using a feature-based face-tracking algorithm to robustly estimate the position and scale of the face. A time-of-flight camera is used to improve the segmentation of background and foreground, and a matting strategy [21] is used to improve the composition result. Although this provides more accurate face tracking and higher quality foreground/background composition, there is still a lack of realism due to the planar modeling of the foreground.

Kim et al. [18] described TeleHuman, a cylindrical 3D display portal for life-size human telepresence. Their system relies on 10 Kinects to capture 360° 3D video; each frame contains an image and a depth map. Their system supports both motion parallax and stereoscopy. Nevertheless, as the Kinect depth stream is noisy, the 3D images are of low quality. The cylindrical display and the need for 10 Kinect devices also make it unsuitable for general use in the home and office.

Our system provides a 3D model of the sender's head, and tracks the 3D position of the receiver's head, and so can generate more realistic motion parallax than these earlier systems. At the same time, it only needs an ordinary 2D display and a single low-cost Kinect camera.

2.2 Modeling

2.2.1 Parameterized facial models

Many works have considered parameterized face models; CANDIDE-type models are widely used for modeling the human face. These are predefined triangle meshes whose shape can be adjusted by animation units and shape units. The animation unit parameters represent facial expression, while the shape units tailor the proportions of the face to a particular individual. The initial version of CANDIDE [22] contained 75 vertices and 100 triangles. Since the mouth and eyes are crudely represented, this version of the model is unrealistic and so is rarely used. Welsh [23] produced an improved CANDIDE model with 160 vertices and 238 triangles, covering the entire frontal head and shoulders. However, using a fixed number of vertices to model the shoulders does not lead to good results. The most popular version, CANDIDE-3 [24], provides more details for the mouth, cheeks, nose, and eyes, using 113 vertices and 168 triangles. This version is much improved and is used in the Microsoft Kinect SDK. The most obvious drawback of such models is that they only represent the frontal face, so look like a mask when rendered. In videoconferencing, this presents problems if the sender turns their head too far to one side.

2.2.2 Generic real-time 3D modeling

Making 3D models from data is a fundamental problem in computer graphics and computer vision, with much research. Balancing speed and accuracy is a key issue. Rusinkiewicz et al. [25] pioneered the real-time modeling of objects from depth data. Their approach uses a 60 Hz structured-light rangefinder; the user rotates an object in front of it to obtain a continuously updated model. However, this procedure is unsuited to human body capture, since any non-rigid movement of the body leads to inaccurate modeling results. While commercial systems exist for dynamic face and body capture, such as those produced by 3dMD [26], they are far too expensive for home and office use. Based on the much lower-priced Kinect, KinectFusion [16, 17] provides a real-time, robust, room-scale GPU-based modeling technique, as part of the Microsoft Kinect SDK. It uses a volume representation in which each voxel contains color information. Models can be updated at an interactive rate. By using the human body detection module provided in the Microsoft Kinect SDK, KinectFusion can reconstruct the body even in the presence of non-rigid movement. However, KinectFusion has two obvious drawbacks. Firstly, it is memory intensive. Chen et al. [27] showed how to use a fast and compact hierarchical GPU data structure instead of a regular 3D voxel grid to save an order of magnitude of memory. Secondly, the modeling result is noisy, mainly due to the noisy depth data provided by the Kinect itself. This could be overcome to some extent by hardware improvements.

In our low-cost system, we use a parameterized approach to model the body, which is robust, fast, and provides good quality. It can model more of the body than CANDIDE-type approaches, but with much lower noise than approaches that directly use KinectFusion.

    3 System overview

Our system is intended for one-to-one videoconferencing. We assume the users are indoors and the background is static. The hardware needed by our system is cheap and readily available, comprising a Kinect, a commodity PC, and a standard 2D display for each participant. When our system is started, it initially models the background (at each end) while the sender stands to one side, outside the view of the Kinect. A 2D background image is captured, and is texture mapped to a plane whose depth is set to the average distance in the depth image. Our justification for using such a simple model for the background is that the users of a videoconferencing system spend nearly all of their time looking at the other person, and only peripherally observe the background. An alternative approach to prior background capture would be to use an image completion approach [28] to fill background gaps resulting from foreground movement. Apart from the extra computational effort needed, a further disadvantage is that such completed backgrounds always have undesirable artifacts in practice. Since the background is static, it only needs to be transmitted once at the start of the session.
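For concreteness, the following is a minimal sketch of such a single-plane background model, assuming a hypothetical depth array from the Kinect and simple pinhole geometry; it illustrates the idea rather than the authors' exact implementation.

```python
import numpy as np

def make_background_plane(depth_mm, fov_y_deg=45.0, aspect=640.0 / 480.0):
    """Build a single textured quad at the average background depth.

    depth_mm: HxW array of Kinect depth values in millimetres (0 = invalid).
    Returns (vertices, tex_coords): four 3D corners and their texture coordinates.
    """
    valid = depth_mm > 0                        # ignore pixels with no depth reading
    z = float(depth_mm[valid].mean()) / 1000.0  # average background distance, metres

    # Half-extents of the plane so that it exactly fills the camera's view at depth z.
    half_h = z * np.tan(np.radians(fov_y_deg) / 2.0)
    half_w = half_h * aspect

    vertices = np.array([[-half_w, -half_h, z],
                         [ half_w, -half_h, z],
                         [ half_w,  half_h, z],
                         [-half_w,  half_h, z]])
    tex_coords = np.array([[0.0, 1.0], [1.0, 1.0], [1.0, 0.0], [0.0, 0.0]])
    return vertices, tex_coords
```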

After background capture, the user then sits in front of the system, which builds a model of the front of his or her head, neck, and shoulders in real time; at this stage, the user should also turn his or her head to the left and right to allow modeling of the sides of the head. Since the Kinect is located above the top of the display, it can also capture much of the top of the head. We assume that the bottom part of the head (under the chin) always remains unseen, and that users do not significantly tilt their heads up and down. The user is given feedback in real time to allow verification that the constructed model is satisfactory. The model we produce is a 3D mesh model with a corresponding image texture: the color image provided by the Kinect is mapped via texture coordinates to the 3D vertices of the mesh model.

After model acquisition is finished, the two users are connected to each other. The background model is transmitted first. After that, the foreground model is transmitted in real time for each frame. In particular, the location of each mesh vertex and its texture coordinates are sent, together with the current texture image. While the connection is active, each receiver sees the sender as a 3D model, rendered according to the receiver's viewpoint. Thus, as the receiver's head moves, the viewpoint used for rendering changes, and the resulting motion parallax and 3D modeling give the receiver a sense of 3D. We illustrate our system in Fig. 2.
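As an illustration of the kind of per-frame payload this implies, the sketch below packs the mesh and texture coordinates alongside the encoded color frame; the field names are hypothetical and not the authors' wire format.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ForegroundFrame:
    """One frame of the transmitted foreground model (hypothetical layout)."""
    vertices: np.ndarray     # (N, 3) float32 vertex positions, N < ~1000
    tex_coords: np.ndarray   # (N, 2) float32 texture coordinates into the color frame
    color_frame: bytes       # 640x480 color image, compressed by a video codec

def mesh_payload_bytes(frame: ForegroundFrame) -> int:
    """Rough size of the mesh data sent on top of the color stream."""
    return frame.vertices.nbytes + frame.tex_coords.nbytes

# With fewer than 1000 vertices, the mesh adds only about 20 kB per frame
# (1000 * (3 + 2) * 4 bytes), small compared to the encoded color stream.
frame = ForegroundFrame(np.zeros((1000, 3), np.float32),
                        np.zeros((1000, 2), np.float32), b"")
print(mesh_payload_bytes(frame))   # 20000
```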

Subsequent sections now give further details: Section 4 discusses our parameterized model of the upper part of the human body, while Section 5 explains how we construct the virtual scene. We evaluate our system in Section 6 and conclude our work in Section 7.

    4 Real-time modeling of the upper part of the body

For videoconferencing, we wish to model the upper part of the body, including the head, neck, and shoulders. During videoconferencing, the front of the head always faces the camera, so it is modeled separately in greater detail. Looking down from above, this frontal model encompasses 180° as seen from the front. Horizontal movement and rotation of the head may occur. Thus, we must also model the sides and back of the head, which we do using separate low-detail models for the left-back and right-back. These left and right back parts each provide a further 90° to give a 360° head model. The top of each part of the head is modeled along with each of these three parts (we assume vertical movement does not occur in practice).

For the front of the head, we use the CANDIDE-3 model, based on parameters representing individual shape and facial expression. A mesh model based on a quarter ellipsoid, but which does not allow for changes in expression, is used for each of the left-back and right-back of the head. These are joined with appropriate continuity to the front of the head and to each other to complete the head. Further similar expressionless models are used for the neck and shoulders. Each model is based on a standard template, appropriately deformed to suit a particular individual, with further transformations that may represent rotation and translation. The position and orientation of the model are continuously updated to capture the movement of the user.

The model parameters are of two types: those that are constant for an individual, and those that vary from frame to frame (e.g., representing facial expression). Thus, our model building process extracts individual body features in an initial step before the conversation begins, to determine the parameters describing the body shape of a particular person. The textures of the left-back and right-back of the head are also captured at this stage, and are transmitted with the texture coordinates of the corresponding vertices just once at the start of the session; these are assumed to be relatively unimportant and can be considered unchanging. Then, as the conversation proceeds, feature tracking is used to acquire the dynamic parameters. The textures for the front of the head, neck, and shoulders are also captured for each frame to allow for changes in expression and hence facial appearance, as well as for minor body movements. The vertex positions of the head, neck, and shoulders, their texture coordinates, and the current image are transmitted for each frame.

4.1 The parameterized model

We now consider the models in more detail. The front, left-back, and right-back of the head are modeled separately and seamlessly joined together. The upper end of the neck is inserted into the head while the lower end of the neck is inserted into the shoulders, connecting the three parts as a whole.

Fig. 2  System overview.

4.1.1 The head

The head model comprises three parts: front, left-back, and right-back. The frontal head uses the CANDIDE-3 model [24], which can be written as

M_fh = R(σ(S_f) + A) + t

where M_fh represents the 3D model of the frontal face in terms of a 3N-dimensional vector containing the (x, y, z) coordinates of the vertices (h denotes head, f denotes front). S_f is a predefined standard face model, representing standard positions on a standard face, connected into a triangulation with known topology. σ deforms the standard face to match a specific face, and is derived from the shape units describing a particular individual's face, e.g., the height of the head and the width of the chin. A encapsulates animation units (AUs), which describe expression changes from a neutral facial expression. Note that σ is invariant over time but A varies. R is a rotation matrix and t is a translation to allow for head movements.
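To make the role of each parameter concrete, here is a minimal numpy sketch of evaluating a CANDIDE-type mesh under an assumed formulation of this kind; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def evaluate_frontal_head(S_f, shape_offset, anim_offset, R, t):
    """Evaluate a CANDIDE-type frontal head mesh (assumed formulation).

    S_f          : (N, 3) standard (neutral, average-shape) face vertices.
    shape_offset : (N, 3) per-vertex displacement from the shape units (sigma);
                   fixed for an individual.
    anim_offset  : (N, 3) per-vertex displacement from the animation units (A);
                   varies per frame with expression.
    R            : (3, 3) head rotation matrix.
    t            : (3,)   head translation.
    Returns the deformed, posed vertices as an (N, 3) array.
    """
    neutral_shape = S_f + shape_offset          # individual-specific face shape
    expressed = neutral_shape + anim_offset     # add the current expression
    return expressed @ R.T + t                  # pose the head in camera space

# Tiny usage example with a 3-vertex "mesh".
S_f = np.zeros((3, 3))
verts = evaluate_frontal_head(S_f, np.zeros((3, 3)), np.zeros((3, 3)),
                              np.eye(3), np.array([0.0, 0.0, 1.5]))
```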

The left-back of the head is defined as

M_lh = R(ω(S_l)) + t

where S_l is a predefined left-back of the head model, containing 3 triangle strips making an arched shape; each strip has 9 vertices in total. We do not model the shape of the ears, as they typically occupy a tiny area in videoconferencing, and furthermore their geometry is complicated and hard to model robustly; texture mapping to a curved surface suffices for our application. ω deforms the template to a specific head. We illustrate the left-back of the head model in Fig. 3(a). The right-back of the head model is symmetrically defined as

M_rh = R(ω(S_r)) + t

where S_r is the mirror image of S_l.

To seamlessly connect the different parts of the head model, we ensure that appropriate triangles in each part share vertices. In reality, these parts of the head undergo only very limited deformation due to changes in expression, and for simplicity in this application we assume they are rigid.

Fig. 3  Parameterized models: (a) left-back of the head, (b) neck, (c) shoulders.

Thus, the parameters for the head model are of two kinds: unchanging ones specific to an individual, {σ, ω}, and those which depend on head pose and facial expression, {A, R, t}.

4.1.2 The neck

The neck occupies a relatively small area of the field of view, and is not the focus of attention. Thus, it suffices to model it using a single triangle strip:

M_n = μ(S_n) + t

where S_n is a triangle strip forming a forward-facing semi-cylinder, and μ is a deformation to match a particular individual. We assume that even if the head rotates, the neck more or less remains fixed, so we need not add a rotation term to the neck model. Figure 3(b) illustrates a deformed neck model.

Thus, the parameters for the neck model are of two kinds: an unchanging one specific to an individual, μ, and one which depends on neck position, t.

4.1.3 The shoulders

The shoulders (and the associated part of the chest) are more difficult to model than the head and the neck. Unlike the head, they have no stable feature points, making it harder to define a template based on feature points. The shoulders occupy a much larger part of the image than the neck, and their shape varies significantly between different individuals. We also note that human observers are more sensitive to the shape of the shoulders than to their texture or appearance. Our main goal in modeling the shoulders is to smoothly approximate their silhouette. We thus define them as

M_s = β(α(S_s)) + t_s

where S_s is a standard shoulder template.

To form the shoulder template, we first define edge vertices. These are divided into two sets: those lying on the more or less vertical sides of the shoulders (i.e., the arms), and those lying on the more or less horizontal top of the shoulders. See Fig. 4: "vertical" edge vertices are marked with triangles and "horizontal" ones are marked with stars. The vertical edge vertices are used to separate the shoulder model into layers; left and right vertical vertices sharing the same y value are connected by a curve. To define this curve, we add another auxiliary vertex with the same y value and whose x coordinate is the average of their x coordinates. Its z coordinate is closer to the viewer by a distance of 1.2 times the radius of the neck. These three vertices determine a circular arc, which is uniformly sampled by N_v vertices (N_v = 40 in our implementation), as sketched below. Horizontal vertices share the same z values as the vertical edge vertices, and are connected to the first layer of vertical edge vertices, as illustrated in Fig. 3(c). α and β are deformations in the vertical and horizontal directions respectively, which we explain later. t_s is the translation of the shoulders. Thus, the parameters for the shoulder model are of two kinds: unchanging ones specific to an individual, {α, β}, and one which depends on shoulder position, t_s.

Fig. 4  Shoulder edge point detection. The black circle is the corner point found after several iterations. It is then snapped to the vertical edge, and vertical edge points (triangles) are detected by downwards search. After horizontal snapping, horizontal edge points (stars) are detected by horizontal search.
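The circular-arc construction described above can be stated directly; the following numpy sketch (a hypothetical helper, not the authors' code) builds one such layer from a pair of vertical edge vertices, assuming the camera looks along +z so that "closer to the viewer" means a smaller z value.

```python
import numpy as np

def sample_shoulder_arc(p_left, p_right, neck_radius, n_v=40):
    """Sample one horizontal layer of the shoulder template as a circular arc.

    p_left, p_right : (x, y, z) vertical edge vertices sharing the same y value.
    The auxiliary middle vertex lies midway in x and 1.2 * neck_radius closer to
    the viewer. Returns an (n_v, 3) array of arc points (points must not be collinear).
    """
    y = p_left[1]
    p_mid = ((p_left[0] + p_right[0]) / 2.0, y, p_left[2] - 1.2 * neck_radius)

    # Circumcenter of the three points in the x-z plane (constant y).
    a = np.array([p_left[0], p_left[2]])
    b = np.array([p_mid[0], p_mid[2]])
    c = np.array([p_right[0], p_right[2]])
    d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a @ a) * (b[1] - c[1]) + (b @ b) * (c[1] - a[1]) + (c @ c) * (a[1] - b[1])) / d
    uz = ((a @ a) * (c[0] - b[0]) + (b @ b) * (a[0] - c[0]) + (c @ c) * (b[0] - a[0])) / d
    radius = np.hypot(a[0] - ux, a[1] - uz)

    # Sweep from the left end to the right end along the arc containing p_mid.
    ang = lambda p: np.arctan2(p[1] - uz, p[0] - ux)
    two_pi = 2.0 * np.pi
    t0 = ang(a)
    d_end = (ang(c) - t0) % two_pi
    d_mid = (ang(b) - t0) % two_pi
    sweep = d_end if d_mid <= d_end else d_end - two_pi
    thetas = t0 + np.linspace(0.0, sweep, n_v)

    return np.stack([ux + radius * np.cos(thetas),
                     np.full(n_v, y),
                     uz + radius * np.sin(thetas)], axis=1)
```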

4.2 Parameter determination

We now explain how we determine the various parameters. The overall set of parameters describing the model is

{σ, ω, α, β, μ, A, R, t, t_s}

These parameters fall into two categories: {σ, ω, α, β, μ} are unchanging in time and describe the body shape of an individual, while {R, t, t_s, A} change over time, describing expression, position, and orientation.

4.2.1 Offline parameter calculation

We initially determine each of the unchanging parameters. σ can be calculated from the 11 shape units as explained in Ref. [24], while ω can be calculated from the distance between the cheek bone and the ear; the necessary information in both cases can be obtained using the Kinect SDK. μ can be calculated from the width and height of the neck. We approximate the width of the neck as the x distance between the left/right jawbone points of the face provided by the Kinect SDK: such facial feature points provide a more stable solution than determining the neck location from the 2D image. The length of the neck is determined as the y distance between the skeleton joint of the head and the center of the shoulders, provided by the Kinect skeleton stream. α and β can be calculated from the vertical and horizontal edge points on the shoulders, respectively. α comes from corresponding pairs of vertical edge vertices, which define a deformation for each horizontal strip. β defines how horizontal edge vertices are translated from their initial positions to their current positions.
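As a small illustration of the neck measurement just described, the sketch below assumes hypothetical 3D points already obtained from the face tracking and skeleton streams; the function name is illustrative.

```python
import numpy as np

def neck_dimensions(jaw_left, jaw_right, head_joint, shoulder_center):
    """Estimate neck width and length from tracked 3D points (x, y, z arrays).

    jaw_left, jaw_right : left/right jawbone points from face tracking.
    head_joint          : head joint from the skeleton stream.
    shoulder_center     : shoulder-center joint from the skeleton stream.
    """
    width = abs(jaw_right[0] - jaw_left[0])           # x distance between jawbones
    length = abs(head_joint[1] - shoulder_center[1])  # y distance head -> shoulder center
    return width, length

w, l = neck_dimensions(np.array([-0.06, 0.05, 1.2]), np.array([0.06, 0.05, 1.2]),
                       np.array([0.0, 0.10, 1.2]), np.array([0.0, -0.15, 1.2]))
```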

4.2.2 Online parameter calculation

During real-time transmission, the changing parameters must be determined. A can be calculated using the MPEG-4 face animation parameters, again provided by the Kinect SDK. R and t can be straightforwardly calculated from the face tracking output, also provided by the Kinect SDK. To determine t_s, we average the x centers of all vertical edge vertex pairs, and the y centers of all horizontal edge vertex pairs. Finding these edge vertices depends on fast and robust edge point extraction. Our approach is based on edge point detection and edge point filtering, which are explained below.

4.2.3 Edge point detection

First, we must search for the shoulder edge points. The Kinect provides three different data streams: a color stream, a depth stream, and a skeleton stream. The skeleton stream provides robust tracking information for twenty joints of the user's body. We use the left and right shoulder joints as initial shoulder corner points, at which we switch from horizontal to vertical edge points.

Since the Kinect depth stream is noisy, we do not perform the search on the depth image. Instead, we make use of the player ID information also contained in the depth stream. Each pixel is allocated a positive number indicating the player ID if the pixel belongs to a person, or 0 if it lies outside any person. As the player ID image is less noisy than the depth image (see Fig. 5), we use the player ID information to determine which pixels belong to the sender.

Starting from the initial corner points, we first search for more accurate shoulder corner points, located on the person's outline, and then find vertical and horizontal edge points. An iterative approach is used to find the accurate corner points. Starting from the initial left corner point, we find the first pixel going vertically upwards that is outside the person; we also perform a similar horizontal search from right to left. The midpoint of the two pixels found gives a more accurate corner point. This process is repeated until the distance between the two points is under 2 pixels.
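A minimal sketch of this corner refinement on the player-ID mask is given below, assuming a boolean is_person array and image coordinates with y increasing downwards; the helper name and iteration cap are illustrative.

```python
import numpy as np

def refine_left_corner(is_person, corner, max_iters=20):
    """Iteratively refine the left shoulder corner on the player-ID mask.

    is_person : HxW boolean array (True where the pixel belongs to the sender).
    corner    : (x, y) initial corner from the skeleton's left shoulder joint.
    """
    x, y = corner
    for _ in range(max_iters):
        # First non-person pixel going vertically upwards from (x, y) ...
        up_y = next((yy for yy in range(y, -1, -1) if not is_person[yy, x]), 0)
        # ... and first non-person pixel searching horizontally from right to left.
        left_x = next((xx for xx in range(x, -1, -1) if not is_person[y, xx]), 0)
        # Distance between the two boundary pixels found in this iteration.
        dist = np.hypot(x - left_x, y - up_y)
        # Their midpoint becomes the refined corner estimate.
        x, y = (x + left_x) // 2, (y + up_y) // 2
        if dist < 2:
            break
    return x, y
```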

Fig. 5  Raw depth data and player ID data.

Using this accurate corner point, we follow the edge downwards to find successive vertical edge points until reaching the bottom of the frame. For horizontal edge points, we search from the corner rightwards until reaching the neck. We consider the neck to start when the edge slope exceeds 45°. Edge points are sampled 5 pixels apart.

4.2.4 Edge point filtering

We next stabilize this edge point data. We have two sets of vertical and horizontal edge points (one on either side of the body). As these have been determined from player ID data that still has a somewhat noisy boundary, we need to smooth them both within each individual frame and between frames. Within each frame, each row and column of points is filtered using a Gaussian filter with radius 4 pixels. To help alleviate jitter in the player ID image, we calculate the average position for each row and column of points, and if the change between frame i+1 and frame i is more than 5 times the change between frames i and i-1, we regard frame i+1 as having significant jitter, and keep the positions of the points unchanged from frame i. Within a frame, if the change in any one row (or column) is more than twice as big as that of its neighbours, we again assume this is due to jitter and keep the positions of that row (or column) the same as in the previous frame.
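A compact sketch of the inter-frame jitter test described above follows; the thresholds mirror the text, while the function and variable names are hypothetical.

```python
import numpy as np

def suppress_jitter(prev2, prev1, current, ratio=5.0):
    """Inter-frame jitter rejection for one row/column of edge-point positions.

    prev2, prev1, current : average positions of the row (or column) of edge
                            points in frames i-1, i, and i+1 respectively.
    If the frame-to-frame change suddenly grows by more than `ratio` times,
    treat frame i+1 as jitter and keep the positions from frame i.
    """
    change_prev = abs(prev1 - prev2)
    change_now = abs(current - prev1)
    if change_now > ratio * max(change_prev, 1e-6):
        return prev1        # significant jitter: reuse the previous frame's position
    return current

print(suppress_jitter(100.0, 101.0, 120.0))  # -> 101.0 (jitter suppressed)
print(suppress_jitter(100.0, 101.0, 102.5))  # -> 102.5 (accepted)
```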

4.3 Model part connection

The parameter values and standard models provide a description of each part. We next need to consider how to connect them into a whole.

It is common to add geometric constraints when assembling parts to make a model [29-32]. These approaches usually optimize an energy function which satisfies connectivity constraints, while ensuring the positions of the vertices after optimization have texture coordinates as close as possible to the correct ones. Concentricity, coplanarity, parallelism, etc. are commonly used constraints, and are useful for such things as mechanical parts, but are not particularly useful for organic objects such as the human body, which has many non-rigidly deformable parts whose connections can be hard to precisely define.

Instead, we opt for a simpler approach, and add softer geometric constraints. We only adjust the z values of the head and shoulders, and the z and y values for the neck. Three principles are used to provide a simple modeling approach:

• Boundary vertices of the shoulders all share the same z value, and are located on the same plane as the edges of the semi-cylindrical neck.

• The (vertical) axis of the neck semi-cylinder has the same z value as the midpoint of the left and right jawbones.

• Since the neck is thinner than the head and shoulders, and behind them, it can be made a little longer (at both ends) than it is in reality, as a way of ensuring connectivity.

To meet these requirements, we determine the z depth of the head first, based on the depth values. Then we adjust the depths of the neck and shoulders according to the first two principles above. Next, we connect the top two vertices on the back of the neck to two key points on the head located under the ears. No special steps are needed to join the neck and the shoulders due to the extra length of the neck; the shoulders cover its end. This simple approach avoids solving any optimization problem and is very fast.

    5 Scene rendering

    We now consider how the scene is rendered on the receiver’s side.

At setup time, the background texture image, the background model and its texture, and the texture images of the left-back and right-back of the head are transmitted to the receiver just once.

During videoconferencing, the color image of each frame is sent as a texture image together with the foreground model, as triangle meshes and vertex texture coordinates. The resolution of the color image is 640×480, with 8 bits per channel. The frame rate of our system is 30 fps. The color information is sent using a video codec, while the foreground model typically has fewer than 1000 vertices, which requires little extra bandwidth over that needed by the color stream.

On the receiver's side, the head of the receiver is tracked during model building, and the received scene models are rendered taking into account the position and orientation of the tracked head. Our goal is to give the receiver a realistic impression of parallax. Our basis for rendering is that the receiver's attention is assumed to be fixed on the sender's face, at the midpoint of the sender's eyes. Thus, we render the scene so that the sender appears at a fixed location on the screen. Most of the parallax is seen in the relative motion of the background; slight changes to the sender's appearance are also seen as the receiver moves their head relative to the position of the sender's head: as the receiver moves more to one side, more of that side of the sender's head is seen. Detailed rendering parameters are determined according to the position of the receiver's head, using a predetermined scene layout which simulates real face-to-face communication. We now give the details.

5.1 Scene layout

We must consider two problems when arranging the scene to be rendered. The first is that the positions of the models transmitted to the receiver's side are determined by the relative positions of the sender's Kinect and the sender. Suppose the distances between the Kinect and the foreground and background on the sender's side are D_f and D_b respectively. Since the simulated distance between the receiver and the sender is arbitrary, we simply assume that the receiver sits at the position of the Kinect on the sender's side. Supposing the rendering distance between the receiver and the sender is d_f, and that between the receiver and the sender's background is d_b, we thus have:

d_f = D_f,  d_b = D_b

However, the receiver may move backwards and forwards to a limited extent. Moving backwards would cause the receiver to see unmodeled parts of the sender's scene, losing realism. To prevent this problem arising, we slightly reduce the angle of view relative to the sender's side. If the angle of view at the sender's side is ψ_s and that at the receiver's side is ψ_r, we set ψ_r to

ψ_r = ρ ψ_s

In our implementation we set ψ_s to 45° and ρ to 0.9.
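Putting these relations together, the following is a small sketch of the layout computation under the assumptions above; it is a hypothetical helper rather than the authors' code.

```python
def scene_layout(D_f, D_b, psi_s_deg=45.0, rho=0.9):
    """Place the sender's foreground and background for rendering.

    D_f, D_b : measured Kinect-to-foreground and Kinect-to-background distances
               on the sender's side.
    Returns the rendering distances (d_f, d_b) and the receiver-side angle of
    view psi_r, slightly narrowed so that backwards movement does not reveal
    unmodeled parts of the sender's scene.
    """
    d_f, d_b = D_f, D_b          # receiver sits where the sender's Kinect was
    psi_r = rho * psi_s_deg      # reduced angle of view on the receiver's side
    return d_f, d_b, psi_r

print(scene_layout(0.8, 2.5))    # (0.8, 2.5, 40.5)
```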

5.2 Camera position

We assume that the receiver's gaze is fixed at the midpoint of the sender's eyes. If the receiver always rotated his head accordingly in compensation while moving it, it would be straightforward to perform rendering based on this new viewing position and direction, using the head tracking information. In practice, however, the receiver may often just rotate his eyes as he moves his head, and such eye movement cannot be readily determined. Thus, rather than using the measured rotation of the head as a basis for rendering, for simplicity we model the situation as if his eyes were fixed in his head to look forwards, and work out how much he would have to rotate his head to keep looking at the same point.

Thus, we separate the movement of the receiver's head into two parts: translation, and consequent head rotation. The tracked midpoint of the receiver's eyes provides changes in position. For each frame, the change in position relative to the previous frame is used to update the camera position. Camera rotation based on modeled head rotation is assumed to occur in two orthogonal directions, through small angles θ about the y axis and φ about the x axis. If the distance between the camera and the sender along the z axis is D_s, and the offsets relative to the original locations in the x and y directions are D_x and D_y respectively, the changes in rotation angles are simply given by

θ = arctan(D_x / D_s),  φ = arctan(D_y / D_s)

    The camera position and orientation are accordingly updated in each frame.
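A minimal per-frame camera update following the relations above is sketched below; the names are illustrative and the original camera position is assumed to be the x-y origin.

```python
import numpy as np

def update_camera(eye_midpoint, prev_eye_midpoint, camera_pos, D_s):
    """Update the rendering camera from the receiver's tracked eye midpoint.

    eye_midpoint, prev_eye_midpoint : 3D midpoints of the receiver's eyes in the
                                      current and previous frames (numpy arrays).
    camera_pos : current virtual camera position (numpy array).
    D_s        : distance along z between the camera and the sender.
    Returns the new camera position and the small rotation angles (theta, phi)
    about the y and x axes that keep the camera pointed at the sender's face.
    """
    delta = eye_midpoint - prev_eye_midpoint
    camera_pos = camera_pos + delta             # translation follows the head

    D_x, D_y = camera_pos[0], camera_pos[1]     # offsets from the original location
    theta = np.arctan2(D_x, D_s)                # yaw about the y axis
    phi = np.arctan2(D_y, D_s)                  # pitch about the x axis
    return camera_pos, theta, phi
```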

    6 Experiments and evaluation

Our system has been implemented in C# using the Microsoft Kinect SDK v1.8, on a PC with an Intel Core i7 3770 3.40 GHz CPU, an Nvidia GTX 780 GPU, and a first generation Kinect. We illustrate the results of our system in Fig. 6.

We have performed an initial user study; a much fuller and more carefully designed perceptual study is also planned. We invited 10 participants to take part in the user study; they were Ph.D. students in computer graphics and computer vision, whose background might perhaps make them more critical than the general public. Each participant took part in videoconferencing using our system, and using Microsoft Skype as a typical 2D videoconferencing system as a basis for comparison.

We were interested in particular in how much our system gave an enhanced impression of depth during videoconferencing. We thus specifically asked participants to evaluate their experience of depth when using our system compared to the typical 2D videoconferencing system. Five subjective scores could be chosen, ranging from -2 to +2, where -2 meant our system gave much less sensation of depth, -1 meant somewhat less sensation of depth, 0 meant both systems gave more or less equal sensations of depth, +1 meant our system gave somewhat more sensation of depth, and +2 meant much more sensation of depth. Furthermore, to gain further insight, we asked participants to give a short written comment justifying their evaluation.

Eight out of the ten participants gave our system a score of +2, while two of them gave a score of +1. These initial results clearly show that our approach leads to a greater sensation of depth during videoconferencing. Of the two participants who gave a score of +1, one justified his score on the basis that the background seemed like a flat plane, when the subject could see it was actually composed of two orthogonal walls. The other participant who gave a lower score said that the edge of the head did not seem sufficiently smooth, and the lack of realism caused him to keep looking at the edge, distracting him. Since we made the assumption that the receiver would fixate on the midpoint of the sender's eyes, staring at the edge of the sender violates this assumption, perhaps leading to the less-than-perfect satisfaction with the depth realism.

These comments will be used to inform future improvements of our system, along with those from the eight participants who gave the highest score. Their most frequent comments can be summarized as "I observed the motion parallax between the foreground and background" and "the perspective of the scene is very consistent with my viewpoint".

Fig. 6  Four frames selected from 2 scenes. In scene 1, the receiver tilted his head to the left in frame 1, while in frame 100 he tilted his head to the right. The viewpoints used for rendering the sender's scene changed with the head's position. In scene 2, the receiver sat upright in frame 1, while in frame 100 he leaned forward.

    7 Conclusions

In this paper, we proposed a videoconferencing system based on 3D modeling and motion parallax to give an improved sensation of depth. We use a parameterized model of the sender, and position a synthetic camera based on tracking the receiver's head position. Initial experimental results show that users feel that our system gives a greater sensation of depth than a typical 2D videoconferencing system. Further, fuller perceptual testing is planned for the future.

Our system has some limitations. It does not support hand gestures or large movements, e.g., standing up or large shoulder rotations, as these are harder to track and would need more complete models. It also assumes there is only one person in the field of view; the Kinect depth stream is noisier when there are multiple persons, which makes it hard to produce a visually pleasing modeling result.

We hope to improve our system in the future by using a more detailed model of the sender based on more vertices; newer Kinect-like devices will also help to make improved models. We will also make more complex models of the background; this can be done readily, even if a little slowly, as part of the offline modeling before videoconferencing begins.

    Acknowledgements

This work was supported by the National High-tech R&D Program of China (Project No. 2013AA013903), the National Natural Science Foundation of China (Project Nos. 61133008 and 61272226), a Research Grant of Beijing Higher Institution Engineering Research Center, an EPSRC Travel Grant, and the Research and Enterprise Investment Fund of Cardiff Metropolitan University.

    References

[1] Rosenthal, A. H. Two-way television communication unit. US Patent 2420198, 1947.

[2] Okada, K.-I.; Maeda, F.; Ichikawa, Y.; Matsushita, Y. Multiparty videoconferencing at virtual social distance: MAJIC design. In: Proceedings of ACM Conference on Computer Supported Cooperative Work, 385-393, 1994.

[3] Sellen, A.; Buxton, B.; Arnott, J. Using spatial cues to improve videoconferencing. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 651-652, 1992.

[4] Tang, J. C.; Minneman, S. VideoWhiteboard: Video shadows to support remote collaboration. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 315-322, 1991.

[5] Vertegaal, R. The GAZE groupware system: Mediating joint attention in multiparty communication and collaboration. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 294-301, 1999.

[6] Vertegaal, R.; Ding, Y. Explaining effects of eye gaze on mediated group conversations: Amount or synchronization? In: Proceedings of ACM Conference on Computer Supported Cooperative Work, 41-48, 2002.

[7] Pirenne, M. H. Optics, Painting and Photography. Cambridge, UK: Cambridge University Press, 1970.

[8] Solso, R. L. Cognition and the Visual Arts. Cambridge, MA, USA: MIT Press, 1996.

[9] Pepperell, R.; Haertel, M. Do artists use linear perspective to depict visual space? Perception Vol. 43, No. 5, 395-416, 2014.

[10] Baldwin, J.; Burleigh, A.; Pepperell, R. Comparing artistic and geometrical perspective depictions of space in the visual field. i-Perception Vol. 5, No. 6, 536-547, 2014.

[11] Kemp, M. The Science of Art: Optical Themes in Western Art from Brunelleschi to Seurat. New Haven, CT, USA: Yale University Press, 1990.

[12] Kingslake, R. Optics in Photography. Bellingham, WA, USA: SPIE Publications, 1992.

[13] Ogle, K. N. Research in Binocular Vision, 2nd edn. New York: Hafner Publishing Company, 1964.

[14] Harrison, C.; Hudson, S. E. Pseudo-3D video conferencing with a generic webcam. In: Proceedings of the 10th IEEE International Symposium on Multimedia, 236-241, 2008.

[15] Zhang, C.; Yin, Z.; Florencio, D. Improving depth perception with motion parallax and its application in teleconferencing. In: Proceedings of IEEE International Workshop on Multimedia Signal Processing, 1-6, 2009.

[16] Izadi, S.; Kim, D.; Hilliges, O.; Molyneaux, D.; Newcombe, R.; Kohli, P.; Shotton, J.; Hodges, S.; Freeman, D.; Davison, A.; Fitzgibbon, A. KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera. In: Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, 559-568, 2011.

[17] Newcombe, R. A.; Izadi, S.; Hilliges, O.; Molyneaux, D.; Kim, D.; Davison, A. J.; Kohli, P.; Shotton, J.; Hodges, S.; Fitzgibbon, A. KinectFusion: Real-time dense surface mapping and tracking. In: Proceedings of the 10th IEEE International Symposium on Mixed and Augmented Reality, 127-136, 2011.

[18] Kim, K.; Bolton, J.; Girouard, A.; Cooperstock, J.; Vertegaal, R. TeleHuman: Effects of 3D perspective on gaze and pose estimation with a life-size cylindrical telepresence pod. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2531-2540, 2012.

[19] Lee, J. C. Head tracking for desktop VR displays using the Wii remote. Available at http://johnnylee.net/projects/wii/.

[20] iPhone User Guide For iOS 8.1 Software. Apple Inc., 2014.

[21] Levin, A.; Lischinski, D.; Weiss, Y. A closed form solution to natural image matting. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 30, No. 2, 228-242, 2008.

[22] Rydfalk, M. CANDIDE, a parameterized face. Technical Report LiTH-ISY-I-866. Linköping University, 1987.

[23] Welsh, B. Model-based coding of images. Ph.D. Thesis. British Telecom Research Lab, 1991.

[24] Ahlberg, J. CANDIDE-3: An updated parameterised face. Technical Report LiTH-ISY-R-2326. Linköping University, 2001.

[25] Rusinkiewicz, S.; Hall-Holt, O.; Levoy, M. Real-time 3D model acquisition. ACM Transactions on Graphics Vol. 21, No. 3, 438-446, 2002.

[26] 3dMD Static Systems. Available at http://www.3dmd.com/3dMD-systems/.

[27] Chen, J.; Bautembach, D.; Izadi, S. Scalable real-time volumetric surface reconstruction. ACM Transactions on Graphics Vol. 32, No. 4, Article No. 113, 2013.

[28] Wexler, Y.; Shechtman, E.; Irani, M. Space-time completion of video. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 29, No. 3, 463-476, 2007.

[29] Chen, T.; Zhu, Z.; Shamir, A.; Hu, S.-M.; Cohen-Or, D. 3-Sweep: Extracting editable objects from a single photo. ACM Transactions on Graphics Vol. 32, No. 6, Article No. 195, 2013.

[30] Gal, R.; Sorkine, O.; Mitra, N. J.; Cohen-Or, D. iWIRES: An analyze-and-edit approach to shape manipulation. ACM Transactions on Graphics Vol. 28, No. 3, Article No. 33, 2009.

[31] Schulz, A.; Shamir, A.; Levin, D. I. W.; Sitthi-amorn, P.; Matusik, W. Design and fabrication by example. ACM Transactions on Graphics Vol. 33, No. 4, Article No. 62, 2014.

[32] Zheng, Y.; Fu, H.; Cohen-Or, D.; Au, O. K.-C.; Tai, C.-L. Component-wise controllers for structure-preserving shape manipulation. Computer Graphics Forum Vol. 30, No. 2, 563-572, 2011.

Zhe Zhu is a Ph.D. candidate in the Department of Computer Science and Technology, Tsinghua University. He received his bachelor degree from Wuhan University in 2011. His research interests are computer vision and computer graphics.

Ralph R. Martin is currently a professor at Cardiff University. He obtained his Ph.D. degree in 1983 from Cambridge University. He has published more than 250 papers and 14 books, covering such topics as solid and surface modeling, intelligent sketch input, geometric reasoning, reverse engineering, and various aspects of computer graphics. He is a Fellow of the Learned Society of Wales, the Institute of Mathematics and its Applications, and the British Computer Society. He is on the editorial boards of Computer-Aided Design, Computer Aided Geometric Design, Geometric Models, the International Journal of Shape Modeling, CAD and Applications, and the International Journal of CADCAM. He was recently awarded a Friendship Award, China's highest honor for foreigners.

Robert Pepperell is an artist who studied at the Slade School of Art, London, and has exhibited widely. He has published several books and numerous academic papers, and is a professor of fine art at Cardiff School of Art & Design in the UK. He specialises in research that combines art practice with scientific experimentation and philosophical inquiry.

Alistair Burleigh has a background in the conception and development of new creative digital ideas and technology for commercial application. He studied fine art: interactive media at Newport School of Art and went on to work in lead roles on creative digital projects for a wide range of functions and prestige clients on a global basis. He is now a researcher and technical director working at Cardiff School of Art & Design, UK.

Open Access  The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
