
    VoxLink-Combining sparse volumetric data and geometry for efficient rendering

Computational Visual Media, 2016, Issue 1



    Research Article


Daniel Kauker1, Martin Falk2(), Guido Reina1, Anders Ynnerman2, and Thomas Ertl1
© The Author(s) 2016. This article is published with open access at Springerlink.com

Abstract  Processing and visualizing large-scale volumetric and geometric datasets is mission critical in an increasing number of applications in academic research as well as in commercial enterprise. Often the datasets are, or can be processed to become, sparse. In this paper, we present VoxLink, a novel approach to render sparse volume data in a memory-efficient manner, enabling interactive rendering on common, off-the-shelf graphics hardware. Our approach utilizes current GPU architectures for voxelizing, storing, and visualizing such datasets. It is based on the idea of per-pixel linked lists (ppLL), an A-buffer implementation for order-independent transparency rendering. The method supports voxelization and rendering of dense semi-transparent geometry, sparse volume data, and implicit surface representations with a unified data structure. The proposed data structure also enables efficient simulation of global lighting effects such as reflection, refraction, and shadow ray evaluation.

Keywords  ray tracing; voxelization; sparse volumes; GPGPU; generic rendering

    1 Introduction

Nowadays, datasets obtained from measurements, modeling, simulations, or other sources grow larger and larger in size. Regardless of their origin, these large datasets have to be processed and visualized, pushing the limits of available hardware. In most

1 VISUS, University of Stuttgart, 70569 Stuttgart, Germany. E-mail: D. Kauker, kauker@visus.uni-stuttgart.de; G. Reina, reina@visus.uni-stuttgart.de; T. Ertl, ertl@visus.uni-stuttgart.de.

2 Immersive Visualization Group, Linköping University, 601 74 Norrköping, Sweden. E-mail: M. Falk, martin.falk@liu.se (); A. Ynnerman, anders.ynnerman@liu.se.

Manuscript received: 2015-12-01; accepted: 2015-12-09

cases, however, the original raw data contains much information which is of no interest in the subsequent processing or visualization steps. This uninteresting data can be filtered out beforehand in the pre-processing step, for example by applying a transfer function or threshold filters.

In this paper, we present our research on rendering and storing sparse data with the VoxLink approach: a spatial data structure based on linked voxels. We extend the concept of per-pixel linked lists (ppLL) [1], using it for voxelization, voxel-based rendering, and the visualization of sparse volume data. Mesh objects are voxelized by intersecting the voxel position with the triangles, and implicit representations like spheres are voxelized by evaluating samples at the voxel position [2]. In addition, continuous volumetric data can be sampled as well. In Fig. 1, four exemplary scenarios are depicted in which our method can be utilized.

To summarize our contribution, our method is able to render voxelized scenes, including global rendering effects, at interactive frame rates. For sparse volume data, we are able to reduce the required memory footprint, allowing the inspection of sparse high-resolution volumes even on low-end graphics devices.

In contrast to, and as an extension of, existing approaches, VoxLink can render meshes, volumes, and implicit surfaces by storing the voxels internally in linked lists. It displays volumes combined with rasterized data and uses ray tracing for rendering and visualizing global lighting effects.

    2 Related work

Order-independent transparency (OIT). When rendering a scene containing semi-transparent objects, the correct result requires a consistent depth ordering of all objects. Depth peeling [3] utilizes the rendering pipeline of the GPU for correct sorting but requires multiple rendering passes to do so. The A-buffer presented by Carpenter avoids the problem of multiple rendering passes by storing all potentially visible fragments per pixel during rendering and then sorting and blending them [4]. In 2014, Lindholm et al. [5] presented a hybrid volume-geometry rendering algorithm and two optimizations for current A-buffer implementations. Yang et al. [1] introduced per-pixel linked lists (ppLL) as an efficient GPU-based implementation of the A-buffer. The linked list technique allows for OIT rendering, constructive solid geometry (CSG) effects, depth-of-field effects [6], and even distributed rendering with semi-transparent models and object-space decomposition [7]. We extend the linked list approach of the A-buffer in our work so it not only contains the information of a particular view but comprises the entire scene.
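The ppLL mechanics referenced above can be illustrated with a small CPU-side sketch (all names hypothetical; the actual implementation of Yang et al. runs in fragment shaders using atomic counters and image load/store): a per-pixel head pointer and a shared node pool, resolved by a depth sort and front-to-back blend.

```python
# CPU-side sketch of a per-pixel linked list (ppLL) A-buffer.
head = {}    # pixel -> index of the first node in its list, or absent
nodes = []   # shared node pool: (depth, rgba, next_index)

def insert_fragment(pixel, depth, rgba):
    """Prepend a fragment to the pixel's list; order does not matter yet."""
    nodes.append((depth, rgba, head.get(pixel, -1)))
    head[pixel] = len(nodes) - 1

def resolve(pixel):
    """Sort the pixel's fragments by depth and blend front-to-back."""
    frags, i = [], head.get(pixel, -1)
    while i != -1:
        depth, rgba, i = nodes[i]
        frags.append((depth, rgba))
    frags.sort(key=lambda f: f[0])          # nearest fragment first
    color, alpha = [0.0, 0.0, 0.0], 0.0
    for _, (r, g, b, a) in frags:
        w = (1.0 - alpha) * a               # remaining transmittance
        color = [c + w * s for c, s in zip(color, (r, g, b))]
        alpha += w
    return color, alpha
```

Inserting is order-independent (a prepend), which is what makes the GPU version race-free with a single atomic counter; all ordering work is deferred to the resolve pass.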

Fig. 1 Example applications of our VoxLink approach: sparse volume representation of data from geostatistics and CT scans (top row), voxelized triangle meshes (bottom left), and global lighting effects for combined voxel data, implicit geometry, and volume data (bottom right).

Voxelization. The method of voxelization has long been used for voxel-based graphics systems [8] and to speed up ray tracing [9]. Karabassi et al. [10] utilized the depth buffer of the GPU to voxelize non-convex objects and their surfaces. In 2000, Fang and Liao [11] presented a voxelization approach for CSG models, evaluating multiple slices along the view direction. Eisemann et al. [12] voxelized polyhedral objects in a single rendering pass on the GPU. GigaVoxels [13] is a voxel-based framework for real-time rendering of large and highly detailed volumetric scenes. These works are specialized to voxelizing either volumes or geometric objects while VoxLink can voxelize and visualize both data types. Kämpe et al. [14] evaluated directed acyclic graphs instead of octrees to store the voxels in hierarchical levels for polygonal models.

Sparse volume rendering. Volume ray casting was first presented in 1988 [15-17]. In recent GPU-based approaches, the volumetric data is stored in a 3D texture and the volume rendering is performed within a single pass [18]. For large, sparse datasets, naive storage often exceeds available GPU memory, so advanced schemes exploiting sparsity have to be used. VDB (volumetric, dynamic grid) [19] is a framework for sparse, time-varying volumetric data that is discretized on a grid. This kind of data is used for animations and special effects in movies. Teitzel et al. [20] presented an approach to render sparse grid data by interpolating the sparse grid and in fact turning it into a volume. Köhler et al. [21] used adaptive hierarchical methods to render sparse volumes, effectively partitioning the whole volume into smaller volumes which are then used for volume ray casting. Gobbetti et al. [22] presented an alternative single-pass approach for rendering out-of-core scalar volumes. For additional state of the art, we refer to Balsa et al. [23] and Beyer et al. [24]. Other methods for storing sparse data are the compressed row storage (CRS) or compressed sparse row (CSR) patterns [25]. Here, sparse matrices are compressed by removing the entries which contain zeros. Our approach adopts these notions of sparse matrices and extends them to 3D by relying on linked lists. Instead of storing all volume data, we pre-classify the volume data with a given transfer function and store only non-zero voxels.

Nießner et al. [26] used layered depth images (LDIs) [27], a concept similar to the VoxLink approach. Instead of using three orthographic projections and linked lists of voxels, they used n orthographic LDIs to store the scene's geometric data. Frey et al. [28] extended the LDI approach to volumetric depth images, generating proxy data for displaying volume datasets. Bürger et al. [29] proposed orthogonal frame buffers as an extension to LDIs to allow surface operations like recoloring or particle flow on rendered meshes. Reichl et al. [30] used a hybrid of rasterization and ray tracing technologies to render meshes.

3 From pixel to voxel: per-voxel linked lists

Rendering semi-transparent scenes in a single rendering pass is possible when employing the A-buffer approach [4]. The A-buffer is aligned with the screen, and each rendered fragment is stored in a list at its respective pixel location on the screen. To improve memory efficiency and account for local variations in depth complexity, ppLLs [1] can be used instead of pre-allocated memory blocks. After all fragments have been gathered in lists, the lists are sorted according to fragment depth. The final image is obtained by blending the sorted fragments based on their opacity. Since the A-buffer is created for a particular camera setting, its contents have to be updated when the viewpoint is changed. The A-buffer also reflects the camera projection. Thus, the contents of the A-buffer describe only parts of the scene.

In contrast, we use orthographic projections along the three coordinate axes to capture the entire scene in a view-independent manner. We extend the 2D ppLL concept to per-voxel linked lists, which allows us to combine the results of the three individual views in a single buffer: the VoxLink buffer. This buffer stores the voxelized scene in a memory-efficient way which can be reconstructed in the subsequent rendering step. Since the VoxLink buffer is not view-dependent, it is created in a pre-processing step (Fig. 2) and has to be updated only when the scene changes. By using three orthogonal cameras we are able to rasterize the entire scene. This is due to the fact that if an object is not visible in one view, e.g., a plane parallel to the viewing direction yields no fragments for that particular direction, it will be visible in one of the other views. Thus, the scene is fully rasterized and the final voxel volume will include each rendered fragment. Please note that the depth test is disabled to capture all depth layers of the scene.

From a conceptual point of view, we define the bounding volume with a given volume resolution to enclose the scene via an axis-aligned bounding box. One axis of the bounding volume is chosen to be the predominant axis for storing our voxel linked lists; without loss of generality, let it be the z-axis. We further refer to this as VoxLink space. Using an orthographic camera along one of the major coordinate axes, we can transform each rendered fragment into VoxLink space by a simple coordinate swizzle. The bounding volume itself is subdivided into multiple bricks. For reasons of simplicity, the brick resolution is set to 1×1×1 unless otherwise noted. The voxels represent spatially extended pixels which can either be filled or empty. Our VoxLink approach only stores filled voxels by utilizing the linked-list concept.
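The coordinate swizzle described above can be sketched as a simple component permutation. The exact axis mapping below is an assumption for illustration (the paper only states that each view's depth axis is mapped onto the predominant z-axis of VoxLink space):

```python
def to_voxlink_space(frag_xyz, view_axis):
    """Swizzle a fragment rendered in one of the three orthographic views
    into VoxLink space, whose per-voxel lists grow along the z-axis.
    view_axis is the axis the orthographic camera looks along."""
    x, y, z = frag_xyz
    if view_axis == 'z':   # already aligned with the predominant axis
        return (x, y, z)
    if view_axis == 'x':   # depth along x becomes the list direction
        return (z, y, x)
    if view_axis == 'y':   # depth along y becomes the list direction
        return (x, z, y)
    raise ValueError(view_axis)
```

Because the mapping is a pure permutation, fragments from all three views land in one consistent coordinate system and can share a single VoxLink buffer.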

Fig. 2 The pre-processing step for creating the sparse VoxLink buffer. Geometry and implicit surfaces are voxelized for each of the three coordinate axes, whereas the per-voxel buffer is set up in one pass for sparse volumes. The final rendering stage is identical.

3.1 Algorithm

Our VoxLink algorithm consists of two stages: pre-processing and rendering. During setup, the necessary buffers are created and initialized. For both geometric and volumetric data, a single VoxLink buffer, i.e., one A-buffer based on per-voxel linked lists, is sufficient.

3.1.1 Geometry

Geometric objects, i.e., mesh-based models, are voxelized in three steps. First, we render an orthographic projection of the scene for each of the major coordinate axes. During the rendering of one view, all fragments are transformed into a common coordinate system, VoxLink space, to match the predominant axis of the VoxLink buffer: the growth direction of the buffer. Each fragment is inserted into the respective per-voxel list. If a voxel entry does not exist for that position, the entry is created and added to the A-buffer, the fragment data, i.e., surface normal and material ID, is stored, and a pointer links the entry to the fragment data. In case the entry already exists, the fragment data is appended to the fragment list of the voxel.

After rendering the scene into the VoxLink buffer, the buffer contents have to be sorted. As in the ppLL approach, each list is sorted independently according to depth along the predominant axis.

3.1.2 Implicit geometry

Besides rasterized polygons, voxelization can also process implicit object representations that are typically used for ray casting, e.g., to visualize particle datasets [31]. Parameters like sphere center and radius are embedded into the bounding volume, and intersecting voxels are updated with a reference to the respective object [2].

3.1.3 Volume data

Volume data can be transformed into a VoxLink buffer. The volume is processed along the predominant axis in slices that are one voxel thick to keep the memory requirements down. On the GPU, we perform a pre-classification of the volume slice by applying the transfer function for alpha-thresholding, thereby discarding transparent voxels. The remaining voxels are added to the respective voxel lists. Since the volume is sliced along the same axis we use for the VoxLink buffer, a sorting step is not necessary. The scalar value as well as the volume gradient is stored in the fragment data of each voxel. The gradient of the density volume is computed by means of central differences. The transfer function can be changed interactively by the user, affecting the data stored in the VoxLink buffer during rendering. Only if adjustments to the transfer function affect discarded data, i.e., by assigning a non-zero opacity to previously transparent voxels, is the VoxLink buffer updated; the pre-classification is performed within 3-5 seconds.
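The per-slice pre-classification amounts to evaluating the transfer function and discarding fully transparent voxels. A minimal CPU-side sketch (function names hypothetical; the paper performs this per slice on the GPU):

```python
def preclassify_slice(slice_scalars, transfer_fn, alpha_threshold=0.0):
    """Pre-classify one voxel-thick volume slice: apply the transfer
    function and keep only voxels whose opacity exceeds the threshold.
    slice_scalars: {(x, y): scalar}; transfer_fn: scalar -> (r, g, b, a)."""
    kept = []
    for (x, y), s in sorted(slice_scalars.items()):
        r, g, b, a = transfer_fn(s)
        if a > alpha_threshold:
            kept.append(((x, y), s))  # would be appended to list (x, y)
    return kept
```

Since slices are processed in order along the predominant axis, the surviving voxels arrive in depth order and no sorting pass is needed, exactly as stated above.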

In the last step of VoxLink buffer generation, the brick states are updated. A brick is marked as occupied if it contains at least one filled voxel. Brick occupancy is used during the rendering stage to support empty-space skipping. Note that the VoxLink buffer holds only individual non-empty bricks, and information on empty space is encoded indirectly.

3.1.4 Voxel ray tracing

The final rendering is obtained by voxel ray tracing within the bounding volume. Initial rays starting at the bounding volume are created for each pixel. Figure 3 shows a scene containing a semi-transparent bunny and exemplary primary rays (orange) and secondary rays (teal). A lookup in the brick volume determines whether the local area is completely empty (white cells) or whether there is a populated voxel (blue cells). In the first case, the ray can fast-forward to the next brick (indicated by dashed lines). Otherwise, each voxel within this brick has to be traversed to compute its contribution. The contribution of a voxel, i.e., its color, is obtained by applying the transfer function to volumetric data and evaluating the illumination model using material IDs and normals for meshes and implicit geometry.

Rays are traversed by projecting the direction vector onto the voxel boundary, thereby entering the next voxel. Similarly, empty-space skipping is performed by projecting the direction vector onto the next non-empty brick. Within a brick, we employ a generic ray-voxel traversal with a slight modification. Finding the next voxel along the principal direction of the VoxLink space is straightforward, i.e., it is the next list element. To find a voxel in a neighboring list, we first identify the respective list and traverse it from its beginning until we arrive at the correct depth. If no stored information is available at this position, the voxel is empty and we can proceed with the next one. Otherwise, the color value is computed for contributing fragments using the stored normal. The color value is then blended with the respective screen pixel utilizing front-to-back blending. Secondary rays can be generated and traversed as well to achieve shadowing, reflection, or refraction. Early-ray termination is carried out once a ray leaves the bounding volume or the accumulated opacity exceeds a threshold.
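The brick-accelerated traversal can be sketched as follows. This simplified version uses fixed-step sampling instead of projecting the ray onto exact brick and voxel boundaries as the paper does, so it is an illustrative approximation, not the authors' traversal:

```python
def march_ray(origin, direction, occupancy, brick_size, vol_res, step=0.5):
    """Sketch of brick-accelerated ray marching: sample along the ray and
    only visit voxels whose enclosing brick is occupied; empty bricks
    contribute nothing. occupancy: set of (bx, by, bz) non-empty bricks."""
    visited = []
    t = 0.0
    while True:
        p = [o + t * d for o, d in zip(origin, direction)]
        if any(c < 0 or c >= vol_res for c in p):
            break                                   # left the bounding volume
        brick = tuple(int(c) // brick_size for c in p)
        if brick in occupancy:                      # populated brick: traverse
            visited.append(tuple(int(c) for c in p))
        t += step                                   # real code projects onto
                                                    # the next boundary instead
    return visited
```

The occupancy test is the cheap part that makes sparse scenes fast: entire bricks of empty space cost one set lookup per sample instead of a per-voxel list walk.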

    4 Implementation details

    We utilize the OpenGL rendering pipeline and GLSL shaders for voxelization and subsequent ray tracing. The data layout of the lists is adjusted to benefit from cache coherency during ray tracing.

4.1 Data structures

Our proposed data structure is depicted in Fig. 4. The top part shows the bricking of the physical space and the linked voxels. In the bottom part, the data structures and buffers are shown; these are stored in GPU memory. All links are realized as indices into particular buffers, i.e., pointers referring to the respective linked lists. The global header, a 2D buffer, covers the front face of the bounding volume, which is perpendicular to the predominant axis. The voxels are stored in the voxel buffer as linked lists, and each element consists of a fragment index, the fragment depth transformed to the predominant axis, and a pointer to the next element.

The fragment index refers to the fragment buffer, which holds material IDs and normals used for deferred shading. The material ID is used for lookup in the material buffer holding all materials. Both mesh normals and volume gradients are stored in 16-bit polar coordinates.

To support empty-space skipping, we subdivide the bounding volume of the scene into multiple bricks of size n. Each brick has a unique brick header representing n² entry indices. The combined brick headers of the first layer represent exactly the same information as the global header. Thus, the global header can be discarded during rendering.
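The buffer layout of Section 4.1 can be mirrored in a few lines of host code. This is a structural sketch only (field names hypothetical); on the GPU these are flat buffers indexed by integers, not Python objects:

```python
from dataclasses import dataclass, field

@dataclass
class VoxelNode:
    """One element of a per-voxel linked list: fragment index, depth
    along the predominant axis, and a pointer to the next element."""
    fragment: int
    depth: int
    next: int = -1          # -1 terminates the list

@dataclass
class VoxLinkBuffer:
    """Sketch of the buffer layout; all links are indices into buffers."""
    header: dict = field(default_factory=dict)     # (x, y) -> first voxel idx
    voxels: list = field(default_factory=list)     # VoxelNode pool
    fragments: list = field(default_factory=list)  # (material ID, normal)

    def insert(self, x, y, depth, frag):
        """Store fragment data and prepend a voxel node to list (x, y)."""
        self.fragments.append(frag)
        node = VoxelNode(len(self.fragments) - 1, depth,
                         self.header.get((x, y), -1))
        self.voxels.append(node)
        self.header[(x, y)] = len(self.voxels) - 1
```

Replacing the `header` dict with per-brick header arrays of n² entries gives the double-layer scheme described above, which is why the global header becomes redundant at render time.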

    5 Rendering

After the VoxLink buffer has been created by either voxelizing geometric objects or embedding sparse volume data, the final image is obtained by ray tracing. In the following, we point out some data-dependent peculiarities.

5.1 Sparse volume rendering

Depending on the volume data and the associated transfer function, typically only a low percentage of voxels is of interest (see Table 3). All other density values can be omitted since their content is invisible, e.g., surrounding air.

The rendering of the sparse volume data itself maps well to the proposed rendering algorithm. The main difference is, however, that changes of the transfer function are somewhat limited. Since we rely on a pre-classification, the transfer function can only be adjusted interactively for densities still present in the sparse volume. Voxels which were discarded during voxelization cannot be made visible without a re-computation of the sparse volume.

5.2 Global effects rendering for geometry

With the availability of an in-memory representation of the whole scene, additional global effects are feasible. Adding support for secondary rays enables shadows, refraction, and reflection. We extend the algorithm described in Section 3.1 by performing the ray traversal in a loop (see Fig. 2, dashed line) and adding an additional ray buffer for dynamically managing rays generated during the rendering phase. Besides ray origin and direction, the ray buffer holds the accumulated opacity and the pixel position of the primary ray from the camera. The explicitly stored pixel position allows us to process primary and secondary rays independently of the final pixel position while contributing to the correct pixel. To account for correct blending, the accumulated opacity is propagated as well.

5.2.1 Opacity transport

Figure 5 depicts an opacity-value transportation scenario where voxel A is refractive as well as reflective. The refraction ray hits voxel B and the reflection ray hits voxel C. The standard color blending equation for front-to-back compositing is not applicable since rays can split. However, if we split the contribution of the rays and propagate the alpha value with them, the equation can be rewritten, yielding the correct color as

Fig. 5 Correct color blending for split rays. The ray is split in voxel A and directly traversed to voxel B, while the ray going to voxel C is stored for the next iteration.

C = α_A C_A + (1 − α_A) [(1 − β_A) C_B + β_A C_C]

where C_B and C_C are the colors accumulated along the refraction and reflection rays, and β_A denotes the reflection coefficient of A. This principle is applied at each ray split by propagating the new opacity, and hence yields proper compositing of the final image.
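The split-ray compositing can be checked numerically. The sketch below assumes the transmitted fraction (1 − α_A) is distributed between the refraction and reflection rays according to the reflection coefficient β_A, with colors reduced to scalar intensities for brevity; this weighting is an interpretation of the compositing rule, not the authors' exact formula:

```python
def split_blend(cA, aA, bA, cB, cC):
    """Composite a ray that splits at voxel A (color cA, opacity aA,
    reflection coefficient bA) into a refraction ray reaching B (cB)
    and a reflection ray reaching C (cC)."""
    transmitted = 1.0 - aA
    return aA * cA + transmitted * ((1.0 - bA) * cB + bA * cC)
```

With β_A = 0 (no reflection) this reduces to ordinary front-to-back blending of A over B, which is the consistency check one would expect.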

5.2.2 Ray traversal

The ray traversal step of the algorithm is adjusted for the iterative ray storage to take care of potential ray splits, depending on the material. Storing the rays causes memory writes using atomic counter operations; after each pass, the CPU starts the next ray batch, with the number of shader invocations derived from the atomic counter holding the number of active rays. If the material at the intersection point has either a reflective component or a refractive component, but not both, the current ray direction is adjusted accordingly and no split occurs. In the case of both reflection and refraction, one ray is continued directly and the other is stored in the ray buffer for the next iteration. Whenever a ray hits a fragment, a new shadow ray is created which targets a light source.

To keep the shader executions from diverging too much, a ray is only iterated for a fixed number of steps n. After all rays have either finished or reached their iteration count in the current pass, the active and newly created rays are traced in a subsequent ray traversal pass during the current frame.
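The batched ray loop described above can be sketched in host code (names hypothetical; on the GPU the batch size comes from the atomic counter and each ray is one shader invocation):

```python
def trace_frame(rays, trace_fn, max_steps=512):
    """Sketch of the batched ray-traversal loop: each pass advances every
    active ray by at most max_steps to limit divergence; rays that are not
    finished, plus rays spawned during the pass (splits, shadow rays),
    form the next batch. trace_fn(ray, n) -> (finished, spawned_rays)."""
    passes = 0
    while rays:
        passes += 1
        next_batch = []
        for ray in rays:
            finished, spawned = trace_fn(ray, max_steps)
            if not finished:
                next_batch.append(ray)   # continue in the next pass
            next_batch.extend(spawned)   # newly created rays join it too
        rays = next_batch
    return passes
```

The frame ends when the batch is empty, mirroring the paper's per-frame loop over ray traversal passes.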

    6 Results and discussion

A performance evaluation was carried out on an Intel Xeon E5-2637 machine with 128 GB of RAM and an NVIDIA Quadro K6000 graphics card. A viewport of size 1000×1000 was used for all measurements. The frame rates were determined by averaging the frame rates of a 360° rotation around the y-axis followed by a 360° rotation around the x-axis. With these two rotations, the entire scene is covered. Since the per-voxel linked lists are generated along the z-axis, a rotation around this axis has only minor effects on performance due to rather similar bounding volume traversals for this direction.

Figure 6 depicts the three scenes used in the evaluation. The dragon model features a non-reflective, non-refractive material. The window panes in the house scene are refractive. The bunny scene contains a refractive Stanford bunny and a reflective implicit sphere. Performance was measured for a plain rendering of the voxelized dragon and with global effects enabled for all scenes (denoted by ray tracing). Results are shown in Table 1. Please note that measurements in Table 1 include only the rendering; the one-time costs of voxelization are omitted. Information on voxelization can be found in Section 6.1. For all scenes, rendering is possible at highly interactive frame rates.

To investigate the influence the view direction has on voxel traversal, we set up a scene containing a single sphere. Again, the frame rate was recorded while the scene was rotated around the y-axis and the x-axis. The results for different bounding volume resolutions as well as varying brick sizes are shown in Fig. 7. Since the sphere has the same appearance from every view direction, we can conclude that the variations are due to voxel lookup. The evaluation shows that while a brick size of 8 works best for bounding volumes of size 256 and 512, for a volume resolution of 1024, a brick size of 16 yields the best performance. All lines share the same features with slight variations. At y rotations of 90° and 270°, for example, the spikes indicate that the highest performance is achieved for that particular view of the scene, as this allows for the most performance-friendly traversal of the per-voxel linked lists.

Fig. 6 Dragon, house, and bunny scenes as used in the evaluation. Performance measurements are shown in Table 1.

Table 1 Results of the performance measurements for our test scenes. Columns denote the resolution of the bounding volume with brick size in brackets, number of non-empty voxels, percentage of non-empty bricks, memory footprint, and frame rate (frames per second, FPS)

Fig. 7 Ray-cast sphere rotated around the y-axis and x-axis for 360° at different bounding volume resolutions and brick sizes (denoted in brackets).

6.1 Voxelization

As mentioned before, the voxelization of static scenes has to be done only once, in the beginning. If we consider dynamic scenes, however, the per-voxel linked lists have to be recomputed continuously. Table 2 shows the combined results of voxelization and rendering for our three test scenes when voxelization is performed every frame. This demonstrates that our approach is capable of voxelizing and rendering a scene at interactive frame rates except in the cases which combine high volume resolutions with global effects.

Table 2 Voxelization combined with rendering for different volume resolutions and brick sizes. Measurements given in FPS

While the approach performs well for the dragon, even at large sizes, the performance for the house and bunny scenes drops dramatically. This is due to the high depth complexity of the scenes, particularly in the xz-plane, and the related costs for voxelization caused by axis-aligned floors and walls. In all cases, similar frame rates were obtained for brick sizes of 8 and 16. Although this seems contradictory to our findings in Fig. 7 at first, the explanation is rather straightforward. The rendering process does not have too big an impact on the overall performance of one frame, and the benefits of using the optimal brick size for rendering are canceled out by the additional costs of voxelization.

6.2 Sparse volumes

To illustrate the applicability of our approach in combination with volume rendering, we chose two large-scale CT datasets of animals and two large datasets from the field of geostatistics. In Fig. 8, the volume rendering results are shown for these datasets. The datasets are not inherently sparse by themselves, but they all contain large regions which are not of particular interest. For the CT scans of the animals (Figs. 8(b) and 8(c)), this applies to the air surrounding the specimen. Other volume densities representing, e.g., tissue and bone structures are conserved and separated by applying a simple transfer function. The resolution of the animal datasets is 1024³ at 8 bits per data point.

The geostatistics data comprises a volume dataset which represents scattered measuring points interpolated into a volume dataset through a geostatistical method called Kriging [32]. It is in a data-sparse format through low-rank Kronecker representations [33]. A domain size of 1024³ and double precision were used in the simulation for computing the Kronecker volumes. We converted the data to 32-bit floating point values before voxelization and upload into GPU memory to match our data structures. Despite the conversion, no significant loss in the dynamic range could be detected when compared with the original results. Although the entire domain of the dataset actually contains data, domain experts are only interested in the data outside the 95% confidence interval of the standard normal distribution N(μ = 0, σ² = 1). This turns the volume datasets into sparsely occupied volumes. Figure 8(a) shows the data outside the confidence interval, thereby illustrating the sparsity of the data itself. The depiction in Fig. 1 (top left), in contrast, shows the rendering of one Kronecker dataset with low opacities, generating a fluffy surface.

    Fig.8 Datasets used in the evaluation of sparse volume rendering and their respective transfer functions.

Table 3 Results for the volume datasets regarding memory usage and rendering performance. The columns denote the resolution of the bounding volume with brick size in brackets, percentage of non-empty voxels, percentage of non-empty bricks, total amount of memory, number of voxels stored per byte, and the frame rate

The generation of the per-voxel linked lists is carried out on the GPU for all volume datasets as described in Section 3.1. The computation is non-interactive but takes only 2.6 s for a 1024³ volume dataset. Table 3 shows the results of the volume benchmarks. The interesting parts in the Kronecker dataset still occupy a large part, 34%, of the volume, resulting in a comparatively large memory footprint. The volume occupancies for the CT scans are in a range of 0.5% to 1.8%, excluding the air surrounding the specimen (60% to 80%) and some of the internal tissue while keeping the main features visible. Our approach delivers interactive frame rates for most of the test cases. Naturally, higher numbers of voxels result in lower average frame rates. The number of voxels also directly impacts the memory footprint, which nonetheless remains low compared to the original dataset.

6.3 Global effects rendering

In Table 1, rendering performance is shown for scenes with global effects enabled. Since the inclusion of refraction and reflection requires an additional ray buffer, the frame rate for the dragon scene drops to a mere 15% and 35% for bounding volume resolutions of 256 and 1024, respectively. At higher volume resolutions, the actual costs of ray traversal outweigh the impact of the ray buffer.

With shadow rays enabled, the performance drops to about a third due to the increase in the number of rays as well as the cost of storing the shadow rays in the additional ray buffer. We also tested different upper limits n of ray steps per rendering iteration without noticing a significant difference in performance between 256, 512, and 1024 steps.

6.4 Comparison with existing systems

As stated in the related work in Section 2, other systems with similar capabilities have been presented before. Table 4 compares recent work in this area to VoxLink. In contrast to the other systems, VoxLink natively supports both volumes and surface representations. Each system uses a different memory scheme to store the depth images or voxels. Here, we use the ppLL concept and extend it to voxels in a double-layered scheme for fast empty-space skipping. The octrees and directed acyclic graphs used in GigaVoxels [13] and voxel DAGs [14] could be incorporated into our scheme in future work for a further speed-up. In contrast to the rasterization-based mechanism of VoxLink, GigaVoxels uses a direct voxelization scheme, and voxel DAGs are built from sparse voxel octrees created by depth peeling. Apart from VDI [28], which uses proxy geometry for rendering, all schemes use some sort of ray casting or ray tracing. The methods based on the orthogonal fragment buffer [29, 30] also create the OFBs by rasterization and can display them via ray tracing, but can render mesh-based models only.

6.5 Limitations

In its current state, our implementation has several shortcomings, which are partly due to the inherent nature of voxels and partly due to unoptimized code. For voxelization, the biggest performance issue is the slow speed of per-frame updates for some scenes (see Table 2). This, however, could partly be overcome by inserting the fragments immediately into distinct bricks instead of the global buffer. Then, only the fragments within individual bricks would have to be sorted, which should improve overall performance. In principle, our approach could easily be extended to handle explicit triangle meshes instead of voxelized geometry by referencing the triangle vertices per voxel, but this would result in a larger memory footprint.

Another limitation is that reflection, refraction, and shadow generation for geometry rely on the voxel representation. Thus, VoxLink cannot keep up with other ray-tracing programs, e.g., NVIDIA OptiX, either in terms of computation time or in terms of image quality.

    Table 4 Comparison of our VoxLink approach and similar works in terms of renderable data,storage,and rendering technique

    7 Future work

For future research, we want to optimize ray traversal further by employing adaptive space subdivision methods like octrees, BSP trees, or kd-trees instead of uniform bricks. This might also lead to a reduced memory footprint, thus making the approach more suitable for even larger volumes. Additionally, the implementation of proper level-of-detail mechanisms could improve the performance for large proxy geometries. By achieving a higher performance in the rendering phase, the number of secondary rays could be increased, thus enabling effects like caustics, sub-surface scattering, or diffuse mirroring. The memory footprint could be reduced further by using a tighter bounding box scheme which is aligned and fitted to the sparse data, eliminating empty space within the bricks.

    In their work, Kauker et al. [7] used per-pixel linked lists for distributed and remote rendering. As our approach uses similar data structures, it could also be extended to make use of multiple distributed rendering engines. Just as their approach compared different instances of a molecule, ours could be used to compare different volumes.

    8 Conclusions

    In this paper, we presented VoxLink, an approach that uses per-voxel linked lists for rendering sparse volume data as well as for voxelizing and rendering geometry, including reflection and refraction, at interactive frame rates. Our method adapts the ppLL approach and extends it to store the entire scene in three dimensions.
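The extension of per-pixel linked lists to three dimensions can be illustrated with a minimal CPU sketch: a head-pointer entry per voxel indexes into a shared node buffer, exactly as in the A-buffer-style ppLL, but keyed by voxel instead of pixel. The `Node` layout and class names are illustrative assumptions; a GPU implementation would use an atomic exchange on the head pointers.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// One list entry in the shared node buffer.
struct Node {
    uint32_t rgba;  // packed voxel color
    int32_t next;   // index of the next node; -1 terminates the list
};

// Per-voxel linked list: head pointers per voxel, nodes in one pool.
class VoxelList {
public:
    explicit VoxelList(int dim) : dim_(dim), head_(dim * dim * dim, -1) {}

    // Prepend a new entry to the voxel's list (a GPU version would
    // atomically exchange the head pointer here).
    void insert(int x, int y, int z, uint32_t rgba) {
        const int idx = x + dim_ * (y + dim_ * z);
        nodes_.push_back({rgba, head_[idx]});
        head_[idx] = static_cast<int32_t>(nodes_.size()) - 1;
    }

    // Walk the list of one voxel and count its entries.
    int count(int x, int y, int z) const {
        int n = 0;
        for (int32_t i = head_[x + dim_ * (y + dim_ * z)]; i != -1; i = nodes_[i].next)
            ++n;
        return n;
    }

private:
    int dim_;
    std::vector<int32_t> head_;  // -1 means the voxel is empty
    std::vector<Node> nodes_;    // shared pool for all voxels
};
```

Because empty voxels cost only a single head-pointer entry, sparse data stays compact while occupied voxels can hold arbitrarily many entries.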

    Experimental results show that VoxLink reduces the memory footprint of sparse volume data while still providing interactive performance. To our knowledge, VoxLink is the only approach that can handle arbitrary representations using voxelization.

    Acknowledgements

    The environment maps used are the work of Emil Persson and are licensed under the Creative Commons Attribution 3.0 Unported License. This work is partially funded by Deutsche Forschungsgemeinschaft (DFG) as part of SFB 716 project D.3, the Excellence Center at Linköping and Lund in Information Technology (ELLIIT), and the Swedish e-Science Research Centre (SeRC).

    References

    [1] Yang, J. C.; Hensley, J.; Grün, H.; Thibieroz, N. Real-time concurrent linked list construction on the GPU. Computer Graphics Forum Vol. 29, No. 4, 1297-1304, 2010.

    [2] Falk, M.; Krone, M.; Ertl, T. Atomistic visualization of mesoscopic whole-cell simulations using ray-casted instancing. Computer Graphics Forum Vol. 32, No. 8, 195-206, 2013.

    [3] Everitt, C. Interactive order-independent transparency. Technical Report. NVIDIA Corporation, 2001.

    [4] Carpenter, L. The A-buffer, an antialiased hidden surface method. In: Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques, 103-108, 1984.

    [5] Lindholm, S.; Falk, M.; Sundén, E.; Bock, A.; Ynnerman, A.; Ropinski, T. Hybrid data visualization based on depth complexity histogram analysis. Computer Graphics Forum Vol. 34, No. 1, 74-85, 2014.

    [6] Kauker, D.; Krone, M.; Panagiotidis, A.; Reina, G.; Ertl, T. Rendering molecular surfaces using order-independent transparency. In: Proceedings of the 13th Eurographics Symposium on Parallel Graphics and Visualization, 33-40, 2013.

    [7] Kauker, D.; Krone, M.; Panagiotidis, A.; Reina, G.; Ertl, T. Evaluation of per-pixel linked lists for distributed rendering and comparative analysis. Computing and Visualization in Science Vol. 15, No. 3, 111-121, 2012.

    [8] Kaufman, A.; Shimony, E. 3D scan-conversion algorithms for voxel-based graphics. In: Proceedings of the 1986 Workshop on Interactive 3D Graphics, 45-75, 1987.

    [9] Yagel, R.; Cohen, D.; Kaufman, A. Discrete ray tracing. IEEE Computer Graphics and Applications Vol. 12, No. 5, 19-28, 1992.

    [10] Karabassi, E.-A.; Papaioannou, G.; Theoharis, T. A fast depth-buffer-based voxelization algorithm. Journal of Graphics Tools Vol. 4, No. 4, 5-10, 1999.

    [11] Liao, D.; Fang, S. Fast CSG voxelization by frame buffer pixel mapping. In: Proceedings of IEEE Symposium on Volume Visualization, 43-48, 2000.

    [12] Eisemann, E.; Décoret, X. Single-pass GPU solid voxelization for real-time applications. In: Proceedings of Graphics Interface, 73-80, 2008.

    [13] Crassin, C. GigaVoxels: A voxel-based rendering pipeline for efficient exploration of large and detailed scenes. Ph.D. Thesis. Université de Grenoble, 2011.

    [14] Kämpe, V.; Sintorn, E.; Assarsson, U. High resolution sparse voxel DAGs. ACM Transactions on Graphics Vol. 32, No. 4, Article No. 101, 2013.

    [15] Drebin, R. A.; Carpenter, L.; Hanrahan, P. Volume rendering. In: Proceedings of the 15th Annual Conference on Computer Graphics and Interactive Techniques, 65-74, 1988.

    [16] Levoy, M. Display of surfaces from volume data. IEEE Computer Graphics and Applications Vol. 8, No. 3, 29-37, 1988.

    [17] Sabella, P. A rendering algorithm for visualizing 3D scalar fields. In: Proceedings of the 15th Annual Conference on Computer Graphics and Interactive Techniques, 51-58, 1988.

    [18] Stegmaier, S.; Strengert, M.; Klein, T.; Ertl, T. A simple and flexible volume rendering framework for graphics-hardware-based raycasting. In: Proceedings of the 4th Eurographics/IEEE VGTC Conference on Volume Graphics, 187-195, 2005.

    [19] Museth, K. VDB: High-resolution sparse volumes with dynamic topology. ACM Transactions on Graphics Vol. 32, No. 3, Article No. 27, 2013.

    [20] Teitzel, C.; Hopf, M.; Grosso, R.; Ertl, T. Volume visualization on sparse grids. Computing and Visualization in Science Vol. 2, No. 1, 47-59, 1999.

    [21] Kähler, R.; Simon, M.; Hege, H.-C. Interactive volume rendering of large sparse datasets using adaptive mesh refinement hierarchies. IEEE Transactions on Visualization and Computer Graphics Vol. 9, No. 3, 341-351, 2003.

    [22] Gobbetti, E.; Marton, F.; Guitián, J. A. I. A single-pass GPU ray casting framework for interactive out-of-core rendering of massive volumetric datasets. The Visual Computer Vol. 24, No. 7, 797-806, 2008.

    [23] Rodríguez, M. B.; Gobbetti, E.; Guitián, J. A. I.; Makhinya, M.; Marton, F.; Pajarola, R.; Suter, S. K. State-of-the-art in compressed GPU-based direct volume rendering. Computer Graphics Forum Vol. 33, No. 6, 77-100, 2014.

    [24] Beyer, J.; Hadwiger, M.; Pfister, H. A survey of GPU-based large-scale volume visualization. In: Proceedings of Eurographics Conference on Visualization, 2014. Available at http://vcg.seas.harvard.edu/files/pfister/files/paper107camera_ready.pdf?m=1397512314.

    [25] Koza, Z.; Matyka, M.; Szkoda, S.; Miroslaw, L. Compressed multirow storage format for sparse matrices on graphics processing units. SIAM Journal on Scientific Computing Vol. 36, No. 2, C219-C239, 2014.

    [26] Nießner, M.; Schäfer, H.; Stamminger, M. Fast indirect illumination using layered depth images. The Visual Computer Vol. 26, No. 6, 679-686, 2010.

    [27] Shade, J.; Gortler, S.; He, L.-w.; Szeliski, R. Layered depth images. In: Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, 231-242, 1998.

    [28] Frey, S.; Sadlo, F.; Ertl, T. Explorable volumetric depth images from raycasting. In: Proceedings of the 26th Conference on Graphics, Patterns and Images, 123-130, 2013.

    [29] Bürger, K.; Krüger, J.; Westermann, R. Sample-based surface coloring. IEEE Transactions on Visualization and Computer Graphics Vol. 16, No. 5, 763-776, 2010.

    [30] Reichl, F.; Chajdas, M. G.; Bürger, K.; Westermann, R. Hybrid sample-based surface rendering. In: Proceedings of Vision, Modeling and Visualization, 47-54, 2012.

    [31] Reina, G. Visualization of uncorrelated point data. Ph.D. Thesis. Visualization Research Center, University of Stuttgart, 2008.

    [32] Kitanidis, P. K. Introduction to Geostatistics: Applications in Hydrogeology. Cambridge University Press, 1997.

    [33] Nowak, W.; Litvinenko, A. Kriging and spatial design accelerated by orders of magnitude: Combining low-rank covariance approximations with FFT-techniques. Mathematical Geosciences Vol. 45, No. 4, 411-435, 2013.

    Daniel Kauker received his Ph.D. degree (Dr. rer. nat.) from the University of Stuttgart in 2015. His research interests are distributed computation and visualization, generic rendering approaches, and GPU-based methods.

    Martin Falk is a postdoctoral researcher in the Immersive Visualization Group at Linköping University. He received his Ph.D. degree (Dr. rer. nat.) from the University of Stuttgart in 2013. His research interests are volume rendering, visualizations in the context of systems biology, large spatio-temporal data, glyph-based rendering, and GPU-based simulations.

    GuidoReinaisapostdoctoral researcher at the Visualization Research Center of the University of Stuttgart (VISUS).He received his Ph.D.degree in computer science(Dr.rer.nat.)in 2008 from the University of Stuttgart,Germany.His research interests include large displays,particle-based rendering,and GPU-based methods in general. A

    nders Ynnerman received his Ph.D. degreeinphysicsfromGothenburg University in 1992.Since 1999 he has held a chair in scientific visualization at Link¨oping University and is the director of the Norrk¨oping Visualization Center-C.He is a member of the Swedish Royal Academy of Engineering Sciences and aboard member of the Foundation for Strategic Research. He currently chairs the Eurographics Association and is an associate editor of IEEE TVCG.His research interests include large-scale datasets in visualization and computer graphics,direct volume rendering including data reduction and volumetric lighting techniques,besides immersive visualization techniques.

    Thomas Ertl is a full professor of computer science at the University of Stuttgart, Germany, and the head of the Visualization and Interactive Systems Institute (VIS) and the Visualization Research Center (VISUS). He received his M.S. degree in computer science from the University of Colorado at Boulder and Ph.D. degree in theoretical astrophysics from the University of Tübingen. His research interests include visualization, computer graphics, and human-computer interaction. He has served on and chaired numerous committees and boards in the field.

    Open Access  The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

    Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
