
    Spatio-Temporal Context-Guided Algorithm for Lossless Point Cloud Geometry Compression

    2024-01-12 14:48:28
    ZTE Communications, 2023, Issue 4

    ZHANG Huiran , DONG Zhen, WANG Mingsheng

    (1. Guangzhou Urban Planning and Design Survey Research Institute, Guangzhou 510060, China; 2. Guangdong Enterprise Key Laboratory for Urban Sensing, Monitoring and Early Warning, Guangzhou 510060, China; 3. State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China)

    Abstract: Point cloud compression is critical to deploying 3D representations of the physical world, such as 3D immersive telepresence, autonomous driving, and cultural heritage preservation. However, point cloud data are distributed irregularly and discontinuously in the spatial and temporal domains, where redundant unoccupied voxels and weak correlations in 3D space make efficient compression a challenging problem. In this paper, we propose a spatio-temporal context-guided algorithm for lossless point cloud geometry compression. The proposed scheme starts by dividing the point cloud into sliced layers of unit thickness along the longest axis. It then introduces a prediction method applicable where both intra-frame and inter-frame point clouds are available, by determining correspondences between adjacent layers and estimating the shortest path using the travelling salesman algorithm. Finally, the small prediction residuals are efficiently compressed with optimal context guidance and adaptive fast-mode arithmetic coding techniques. Experiments prove that the proposed method can effectively achieve low-bit-rate lossless compression of point cloud geometric information and is suitable for 3D point cloud compression across various types of scenes.

    Keywords: point cloud geometry compression; single-frame point clouds; multi-frame point clouds; predictive coding; arithmetic coding

    1 Introduction

    With the improvement of multi-platform and multi-resolution acquisition equipment, light detection and ranging (LiDAR) technology can efficiently represent 3D objects or scenes with massive point sets. Compared with traditional multimedia data, point cloud data contain more physical measurement information, representing objects from free viewpoints and even scenes with complex topological structures. This results in strong interactive and immersive effects that provide users with a vivid and realistic visualization experience. Additionally, point cloud data have stronger anti-noise ability and parallel processing capability, which has gained attention from industry and academia, notably for application domains such as cultural heritage preservation, 3D immersive telepresence and automatic driving[1-2].

    However, point cloud data usually contain millions to billions of points in spatial domains, bringing burdens and challenges to storage capacity and network transmission bandwidth. For instance, a common dynamic point cloud utilized for entertainment usually comprises roughly one million points per frame, which, at 30 frames per second, amounts to a total bandwidth of 3.6 Gbit/s if left uncompressed[3]. Therefore, research on high-efficiency geometry compression algorithms for point clouds has important theoretical and practical value.
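    As a back-of-the-envelope check of the bandwidth figure above (assuming roughly 15 bytes per point, e.g., three 4-byte coordinates plus a few bytes of attributes; the exact per-point size is not specified in the source):

```python
# Rough bandwidth estimate for an uncompressed dynamic point cloud.
# bytes_per_point is an assumption; actual size varies by format.
points_per_frame = 1_000_000
frames_per_second = 30
bytes_per_point = 15  # e.g., 3 x 4-byte coordinates + 3 bytes of color

bits_per_second = points_per_frame * frames_per_second * bytes_per_point * 8
gbit_per_second = bits_per_second / 1e9
print(f"{gbit_per_second:.1f} Gbit/s")  # 3.6 Gbit/s, matching the cited figure
```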

    Prior work tackled this problem by directly building grids or down-sampling on demand, due to limitations in computing power and point cloud collection efficiency, which resulted in low spatio-temporal compression performance and loss of geometric attribute information. Recent studies are mainly based on computer graphics and digital signal processing techniques to implement block operations on point cloud data[4-5], or combine video coding technology[6-7] for optimization. In 2017, the Moving Picture Experts Group (MPEG) solicited proposals for point cloud compression and conducted subsequent discussions on how to compress this type of data. With an increasing number of approaches to point cloud compression available, two point cloud compression frameworks, TMC13 and TMC2, were issued in 2018. The research above shows that remarkable progress has been made in point cloud compression technology. However, prior work mostly dealt with the spatial and temporal correlations of point clouds separately, and these correlations have not yet been exploited to their full potential in point cloud compression.

    To address the aforementioned challenges, we introduce a spatio-temporal context-guided method for lossless point cloud geometry compression. We first divide point clouds into unit layers along the main axis. We then design a prediction mode via a travelling salesman algorithm, exploiting spatio-temporal correlation. Finally, the residuals are written into bitstreams with a context-adaptive arithmetic encoder. Our main contributions are as follows.

    1) We design a prediction mode applicable to both intra-frame and inter-frame point clouds, via the extended travelling salesman problem (TSP). By leveraging both the spatial and temporal redundancies of point clouds, the geometry prediction makes better use of spatial correlation and therefore supports various types of scenes.

    2) We present an adaptive arithmetic encoder with fast context update, which selects the optimal 3D context from the context dictionary, and suppresses the increase of entropy estimation. As a result, it enhances the probability calculation efficiency of entropy encoders and yields significant compression results.

    The rest of this paper is structured as follows. Section 2 gives an outline of related work on point cloud geometry compression. Section 3 first presents an overview of the proposed framework and then describes the proposed method in detail. Experimental results and conclusions are presented in Sections 4 and 5, respectively.

    2 Related Work

    There have been many point cloud geometry compression algorithms proposed in the literature. CAO et al.[8] and GRAZIOSI et al.[9] conduct investigations and summaries of current point cloud compression methods, focusing on spatial dimension compression technology and MPEG standardization frameworks, respectively. We provide a brief review of recent developments in two categories: single-frame point cloud compression and multi-frame point cloud compression.

    2.1 Single-Frame Point Cloud Compression

    Single-frame point clouds are widely used in engineering surveys, cultural heritage preservation, geographic information systems, and other scenarios. The octree is a widely used data structure to efficiently represent point clouds, which can be compressed by recording information through the occupied nodes. HUANG et al.[10] propose an octree-based method that recursively subdivides the point cloud into nodes whose positions are represented by the geometric center of each unit. FAN et al.[11] further improve this method by introducing cluster analysis to generate a level-of-detail (LOD) hierarchy and encoding it in breadth-first order. However, these methods can cause distortion due to the approximation of the original model during the iterative process.
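    The occupancy-byte idea behind octree coding can be sketched as follows; this is a minimal illustration of the general technique, not the exact method of Refs. [10-11]. Each non-empty node emits one byte whose bits mark which of its eight octants contain points:

```python
def encode_octree(points, origin, size, depth, stream):
    """Recursively emit one occupancy byte per non-empty node.

    points: list of (x, y, z) tuples inside the cube [origin, origin+size)^3.
    """
    if depth == 0:
        return
    half = size / 2
    children = [[] for _ in range(8)]
    for x, y, z in points:
        # Bit index encodes which octant the point falls into.
        idx = (((x >= origin[0] + half) << 2) |
               ((y >= origin[1] + half) << 1) |
               (z >= origin[2] + half))
        children[idx].append((x, y, z))
    occupancy = sum(1 << i for i, c in enumerate(children) if c)
    stream.append(occupancy)  # one byte describing the eight octants
    for i, child in enumerate(children):
        if child:
            child_origin = (origin[0] + half * ((i >> 2) & 1),
                            origin[1] + half * ((i >> 1) & 1),
                            origin[2] + half * (i & 1))
            encode_octree(child, child_origin, half, depth - 1, stream)

stream = []
encode_octree([(0, 0, 0), (7, 7, 7)], (0, 0, 0), 8, 3, stream)
print(stream)  # [129, 1, 1, 128, 128]: one byte per subdivided node
```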

    To address these limitations, scholars have introduced geometric structure features, such as the triangular surface model[12], the planar surface model[13-14], and the clustering algorithm[15], for inter-layer prediction and residual calculation. RENTE et al.[16] propose a progressive layered compression concept that first uses the octree structure for coarse-grained encoding and then uses the graph Fourier transform for compression and reconstruction of point cloud details. In 2019, MPEG released the geometry-based point cloud compression (G-PCC) technology for both static and dynamic point clouds, which is implemented step by step through coordinate transformation, voxelization, geometric structure analysis, and arithmetic coding[17].

    Since certain octants within an octree may be sparsely populated or even empty, some methods have been proposed to optimize the tree structure by pruning sub-nodes and thereby conserve memory allocation. For example, DRICOT et al.[18] propose an inferred direct coding mode (IDCM) for terminating the octree partition based on predefined conditions of sparsity analysis, which involves pruning the octree structure to save bits allocated to child nodes. ZHANG et al.[19] suggest subdividing the point cloud space along principal components and adapting the partition method among the binary tree, quadtree and octree. Compared with traditional octree partitioning, the hybrid models mentioned above can effectively reduce the number of bits used to represent sparse points, thereby saving nodes that need to be encoded. However, complex hyperparameter conditions and mode determination are required in the process, making it difficult to meet the requirements of self-adaptation and low complexity.

    With deep neural networks making significant strides in image and video compression, researchers have explored ways to further reduce bit rates by leveraging super-prior guidance and the redundancy of latent space representations during the compression process. QUACH et al.[20] and HUANG et al.[21] propose methods that incorporate these concepts. GUARDA et al. combine convolutional neural networks and autoencoders to exploit redundancy between adjacent points and enhance coding adaptability in Ref. [22]. Recently, WANG et al.[23] proposed a point cloud compression method based on the variational auto-encoder, which improves the compression ratio by learning a hyperprior and reducing the memory consumption of arithmetic coding. The aforementioned methods use neural network encoders to capture high-order latent vectors of the point cloud, whose entropy model probabilities fit the marginal probabilities better, thus reducing the memory consumption of arithmetic coding.

    Generally speaking, research on single-frame point cloud geometric compression is relatively mature, but two challenges remain. Spatial correlation has not been utilized effectively, as most methods do not code the correlation of point cloud data thoroughly and efficiently. Besides, the calculation of the probability model for entropy coding is long and arduous due to the massive number of contexts.

    2.2 Multi-Frame Point Cloud Compression

    Multi-frame point clouds are commonly used in scenarios such as real-time 3D immersive telepresence, interactive VR, 3D free-viewpoint broadcasting and automatic driving. Unlike single-frame point cloud compression, multi-frame point cloud compression prioritizes the use of temporal correlation, as well as motion estimation and compensation. The existing methods for multi-frame point cloud compression can be divided into two categories: 2D projection and 3D decorrelation.

    The field of image and video compression is extensive and has been well explored over the past few decades. Various algorithms convert point clouds into images and then compress them straightforwardly with FFmpeg, H.265 encoders, etc. AINALA et al.[24] introduce a planar projection approximate encoding mode that encodes both geometry and color attributes through raster scanning on the plane. However, this method causes changes in the target shape during the mapping process, making accurate inter-prediction difficult. Therefore, SCHWARZ et al.[25] and SEVOM et al.[26] suggest rotated planar projection, cube projection, and patch-based projection methods to convert point clouds into 2D videos, respectively. By placing similar projections from adjacent frames at the same location in adjacent images, the video compressor can fully remove temporal correlation. In Ref. [27], inter-geometry prediction is conducted via the TSP, which computes the one-to-one correspondence of adjacent intra-blocks by searching for the block with the closest average value. MPEG released the video-based point cloud compression (V-PCC) technology for dynamic point clouds in 2019[28]. This framework divides the input point cloud into small blocks with similar normal vectors and continuous space, then converts them to a planar surface through cubes to record the occupancy image and auxiliary information. All resulting images are compressed by mature video codecs, and all bitstreams are assembled into a single output file. Other attempts have been made to improve the effectiveness of these methods. COSTA et al.[29] exploit several new patch packing strategies from the perspective of optimizing the packing algorithm, data packing links, related sorting, and positioning indicators. Furthermore, PARK et al.[30] design a data-adaptive packing method that adaptively groups adjacent frames into the same group according to structural similarity without affecting the performance of the V-PCC stream.

    Due to the inevitable information loss caused by point cloud projection, scholars have developed effective techniques to compress point cloud sequences of consecutive frames using motion compensation technology based on 3D space. KAMMERL et al.[31] propose an octree-based geometric encoding method, which achieves high compression efficiency by computing exclusive OR (XOR) differences between adjacent frames. This method has not only been adopted in the popular Point Cloud Library (PCL)[32] but is also widely used for further algorithm research. Other inter-frame approaches convert the 3D motion estimation problem into a feature matching problem[33] or use reconstructed geometric information[34] to predict motion vectors and identify the corresponding relationship between adjacent frames accurately. Recent studies[35-36] have shown that learned video compression offers better rate-distortion performance than traditional methods, providing a significant reference for point cloud compression. ZHAO et al.[37] introduce a bi-directional inter-frame prediction network to perform inter-frame prediction and make effective use of relevant information in the spatial and temporal dimensions. KAYA et al.[38] design a new paradigm for encoding geometric features of dense point cloud sequences, optimizing a CNN that estimates the encoding distribution to realize lossless compression of dense point clouds.
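    The XOR differencing used by KAMMERL et al.[31] can be illustrated with a minimal sketch (assuming, for simplicity, that both frames are serialized over the same octree structure):

```python
# Illustrative XOR differencing of two frames' octree occupancy bytes.
# The byte values here are made up for the example.
frame_prev = bytes([0b10000001, 0b00000001, 0b10000000])
frame_curr = bytes([0b10000001, 0b00000011, 0b10000000])

# Unchanged nodes XOR to zero, so the diff is mostly zero bytes
# and therefore cheap to entropy-code.
diff = bytes(a ^ b for a, b in zip(frame_prev, frame_curr))
print(diff)

# The decoder recovers the current frame from the previous frame + diff.
recovered = bytes(a ^ d for a, d in zip(frame_prev, diff))
assert recovered == frame_curr
```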

    Despite progress in the compression coding technology of multi-frame point cloud models, two problems persist. The existing multi-frame point cloud compression approaches mainly rely on video coding and motion compensation, which inevitably involve information loss or distortion caused by mapping and block edge discontinuity. In addition, predictive coding exhibits low applicability due to the inconsistency of inter-frame point cloud geometry: the apparent offset of points between frames and unavoidable noise increase the difficulty of effectively using predictive coding in inter-frame compression.

    3 Proposed Spatio-Temporal Context-Guided Lossless Geometry Point Cloud Compression Method

    3.1 Overview

    The overall pipeline of our spatio-temporal context-guided algorithm is shown in Fig. 1. First, we preprocess the input point cloud by applying voxelization and scale transformation. Then, the point cloud is divided into sliced layers of unit thickness along the main axis. Next, we design a prediction mode that makes full use of the temporal and spatial correlation information both intra-frame and inter-frame. We calculate the shortest path through the points of the reference layers (R-layers) via the travelling salesman algorithm, and the results of the R-layers are then used to spatio-temporally predict and encode the rest of the point clouds, namely the predicted layers (P-layers). Finally, the improved entropy coding algorithms are adopted to obtain the compressed binary file.

    ▲Figure 1. Proposed framework for spatio-temporal context-guided lossless point cloud geometry compression

    3.2 Sliced-Layer-Based Hierarchical Division

    1) Pre-processing

    The pre-processing module includes voxelization and scale transformation, for better indexing of each point. In voxelization, we divide the space into cubes of size N, which corresponds to the actual resolution of the point cloud. Each point is assigned a unique voxel based on its position. A voxel is recorded as 1 if it is occupied, and as 0 otherwise.

    Scale transformation can reduce sparsity for better compression by zooming out the point cloud, so that the distance between points gets smaller. We scale the point cloud coordinates (x, y, z) using a scaling factor s.

    To ensure lossless compression, the scaling factor s must not cause geometry loss, and it needs to be recorded in the header file.
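    A minimal sketch of the pre-processing step (the paper does not give the exact scaling formula; here we assume integer voxel coordinates that are exact multiples of s, so the transform is invertible and hence lossless):

```python
def preprocess(points, s):
    """Scale voxelized integer coordinates down by s.

    Lossless only when every coordinate is a multiple of s, so the
    decoder can recover positions exactly by multiplying back.
    (Assumption for this sketch; the paper's exact condition may differ.)
    """
    for p in points:
        if any(c % s for c in p):
            raise ValueError("scaling factor s would cause geometry loss")
    return [(x // s, y // s, z // s) for (x, y, z) in points]

pts = [(0, 0, 0), (2, 4, 6), (10, 12, 14)]
scaled = preprocess(pts, 2)
print(scaled)  # [(0, 0, 0), (1, 2, 3), (5, 6, 7)]
# Invertibility check: multiplying back by s restores the input exactly.
assert [(x * 2, y * 2, z * 2) for (x, y, z) in scaled] == pts
```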

    2) Sliced-layer division

    This module divides the 3D point cloud along one of its axes, creating several unit sliced layers containing only occupancy information, which can be further compressed using a predictive encoder and an arithmetic coder. The function is defined as:

    where G refers to the input point cloud coordinate matrix, axis refers to the selected dimension, and S(a, b) is the 2D slice extracted at each layer.

    In general, we conduct experiments on a large number of test sequences, and the results suggest that division along the axis of longest spatial variation of the point cloud yields the lowest bit rate.
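    The sliced-layer division can be sketched as follows (a hypothetical helper, assuming integer voxel coordinates, that picks the longest axis and buckets points into unit-thickness slices):

```python
def slice_layers(points):
    """Divide a point cloud into unit-thickness slices along its longest axis."""
    mins = [min(p[a] for p in points) for a in range(3)]
    maxs = [max(p[a] for p in points) for a in range(3)]
    axis = max(range(3), key=lambda a: maxs[a] - mins[a])  # longest extent
    layers = {}
    for p in points:
        layers.setdefault(p[axis], []).append(p)  # one layer per unit slab
    return axis, layers

pts = [(0, 0, 0), (0, 5, 1), (1, 9, 0), (2, 9, 1)]
axis, layers = slice_layers(pts)
print(axis, sorted(layers))  # 1 [0, 5, 9]: y spans 0..9, the longest axis
```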

    3) Minimum bounding box extraction

    In most cases, non-occupied voxels are unavoidable and greatly outnumber occupied voxels. As a result, processing and encoding both types of voxels simultaneously burdens the computational complexity and encoding speed of the compression algorithm. Therefore, we adopt the oriented bounding box (OBB)[39] to calculate the minimum bounding box for each sliced layer, ensuring that the directions of the bounding boxes are consistent across layers. In subsequent processing, only the voxels located within the restricted rectangle are compressed.
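    As a simplified stand-in for this step, the sketch below restricts each 2D slice to its occupied rectangle; the paper uses an OBB[39], but an axis-aligned box is used here only to keep the illustration short:

```python
def bounding_rect(slice_points):
    """Axis-aligned occupied rectangle of one 2D slice.

    Simplified stand-in: the paper computes an oriented bounding box (OBB)
    so that box directions stay consistent across layers.
    """
    xs = [a for a, b in slice_points]
    ys = [b for a, b in slice_points]
    return (min(xs), min(ys)), (max(xs), max(ys))

lo, hi = bounding_rect([(3, 7), (5, 2), (9, 4)])
print(lo, hi)  # (3, 2) (9, 7): only voxels inside this rectangle are coded
```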

    3.3 Spatial Context-Guided Predictive Encoding

    The goal of spatial context-guided predictive encoding is to encode all the points layer by layer. Inspired by the TSP, we design a prediction mode to explore the potential orders and correlation within each sliced layer. This module consists of partition and the shortest path calculation.

    At first, we partition the sliced layers and determine the R-layer and P-layers for each group. We traverse the point cloud layer by layer along the selected axis. When the principal-axis lengths of the minimum bounding boxes of adjacent layers differ by no more than a specified unit length, the layers are recorded in the same group. Otherwise, the layer is used as the reference layer of the next group, and each sliced layer in the following group uses the same shortest path. In this paper, we set the first layer of each group as the R-layer and the others as P-layers. We also carry out experiments on a large number of test sequences and recommend setting this specified parameter to 3 units to obtain the best compression.

    Afterwards, we conduct the shortest path calculation on the R-layers and record the residuals of the P-layers. According to the distribution regulation of the point cloud in each sliced layer, we optimally arrange the irregular points of each sliced layer based on the TSP algorithm. This allows us to efficiently compute the shortest path through the point cloud of the R-layers, and then record the residuals of the corresponding prediction layers. Algorithm 1 shows the pseudo-code of the prediction procedure.

    Firstly, we define the distance calculation rule between points in the local area and initialize the path state with a randomly selected point pc1. In each iteration, whenever a new point pci is added, the permutation is dynamically updated through the state transition equation path(P−i, i) until all added points are recorded in P in the order of the shortest path. This process is modified gradually based on the minimal distance criterion. After all iterations of the total shortest path are completed, we calculate min dist(pci, pcj) in each of the R-layers and return the shortest path record table of the point cloud in each of the R-layers. For further compression, we calculate the deviation of the P-layers from the shortest path of the R-layer within the same group and record it as predictive residuals. Finally, the shortest path of the R-layer and the residuals of each group are output and passed to the entropy encoder, which compresses the prediction residuals further.
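    The ordering and residual steps above can be sketched with a greedy nearest-neighbour heuristic (the paper's TSP state-transition formulation is stronger; this only illustrates the idea on 2D slice coordinates):

```python
def order_points(points):
    """Greedy nearest-neighbour ordering of one slice's 2D points."""
    remaining = list(points[1:])
    path = [points[0]]  # start from the first (in the paper: random) point
    while remaining:
        last = path[-1]
        nxt = min(remaining,
                  key=lambda p: (p[0] - last[0]) ** 2 + (p[1] - last[1]) ** 2)
        remaining.remove(nxt)
        path.append(nxt)
    return path

r_layer = [(0, 0), (5, 5), (1, 0), (1, 1)]  # made-up slice coordinates
p_layer = [(0, 1), (1, 1), (2, 2), (5, 6)]

path_r = order_points(r_layer)
path_p = order_points(p_layer)
# P-layer residuals against the R-layer path: small values, cheap to code.
residuals = [(a[0] - b[0], a[1] - b[1]) for a, b in zip(path_p, path_r)]
print(path_r)     # [(0, 0), (1, 0), (1, 1), (5, 5)]
print(residuals)  # [(0, 1), (0, 1), (1, 1), (0, 1)]
```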

    3.4 Spatio-Temporal Context-Guided Predictive Encoding

    The spatial context-guided prediction mode encodes single-frame point clouds individually. However, applying spatial encoding to each single-frame point cloud separately misses the opportunities offered by the temporal correlations across multi-frame point clouds. Considering that multi-frame point clouds share large chunks of overlap, we focus on using temporal redundancy to further enhance compression efficiency. Hence, based on the proposed spatial context-guided prediction mode, we can compress multi-frame point clouds by identifying a correspondence between adjacent layers across frames.

    1) Inter-frame partition

    To enhance the effectiveness of inter-frame prediction mode, it is crucial to ensure adequate similarity between adjacent layers of frames. As a result, we need to partition the groups between adjacent frames and determine the R-layers and P-layers across frames. By estimating the shortest path of the P-layers based on the shortest path of the R-layers, we record the prediction residuals and further compress them through the entropy encoder. Algorithm 2 shows the pseudocode of the inter-frame partition.

    Algorithm 2. Inter-frame partition
    1: Input: point cloud sliced layers S1, S2, …, Sn with principal axis lengths hi of Si; inter-frame point cloud sliced layers SS1, SS2, …, SSn with principal axis lengths hhi of SSi
    2: Output: correspondence and partition of the adjacent layers' relationship
    3: Initialization: set S1 and SS1 as corresponding layers
    4: for each new Si and SSi do
    5:     coarse partition: set Si and SSi as corresponding layers
    6:     if |hi - hhi| ≤ 3
    7:         fine partition: set Si and SSi as corresponding layers
    8:     else if
    9:         compare |hi - hhi|, |hi - hh(i-1)|, and |hi - hh(i+1)|, and pick the minimum
    10:        set the slice layer corresponding to the minimum and SSi as corresponding layers
    11:    else
    12:        set as a single layer
    13: end for

    Based on sliced-layer orientation alignment, we realize coarse partition and fine partition successively. For the coarse partition, we sort the sliced layers of each frame by the coordinates of the division axis, from small to large. As a result, each sliced layer of each frame has a unique layer number, allowing us to coarsely pair the sliced layers with the same number between adjacent frames. Afterward, we compute the difference between the principal-axis lengths of the minimum bounding boxes of adjacent layers with the same number. If this value is less than or equal to a specified length unit, the layers are partitioned into the same group. Otherwise, we compare the principal-axis length differences against the layers immediately before and after the corresponding layer number in the adjacent frame, and the layer with the smallest difference is partitioned into the same group. This ensures a fine partition of the adjacency relationship between adjacent layers.
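    The coarse/fine partition of Algorithm 2 can be sketched as follows (a hypothetical helper using the 3-unit threshold recommended above; h[i] and hh[i] are the principal-axis lengths of slice i in the two adjacent frames):

```python
def partition(h, hh):
    """Match slice i of the current frame to a slice of the previous frame.

    h[i], hh[i]: principal-axis lengths of slice i in adjacent frames.
    Falls back to neighbours i-1 / i+1 when lengths differ by more than 3.
    """
    pairs = []
    for i in range(min(len(h), len(hh))):
        if abs(h[i] - hh[i]) <= 3:
            pairs.append((i, i))                       # fine partition
        else:
            candidates = [j for j in (i - 1, i, i + 1) if 0 <= j < len(hh)]
            j = min(candidates, key=lambda j: abs(h[i] - hh[j]))
            pairs.append((i, j))                       # closest neighbour
    return pairs

# Slice 1 is much longer than its counterpart, so it pairs with slice 0.
print(partition([10, 20, 9], [11, 9, 10]))  # [(0, 0), (1, 0), (2, 2)]
```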

    2) Spatio-temporal context-guided prediction mode

    Based on the partition, we apply and expand the prediction mode described in Section 3.3. We incorporate inter-frame context in the process, meaning that the first layer of each group, which serves as the R-layer, may not necessarily yield the best prediction result. To fully explore the potential correlation between adjacent layers, we need to determine the optimal prediction mode.

    Firstly, we calculate the prediction residuals for each sliced layer in the current group when it is used as the R-layer. By comparing the prediction residuals in all cases, we select the R-layer with the smallest absolute residual value as the best prediction mode. For the R-layer shortest path calculation, we use the travelling salesman algorithm to compute the shortest path of the R-layers under the best prediction mode. Moreover, we calculate the prediction residuals for each group under its respective best prediction mode. We also record the occupancy length and R-layer information of each group for further compression in subsequent processing.
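    The best-prediction-mode search can be sketched as follows (a simplified illustration in which each layer is reduced to a list of scalar values; the actual method compares full per-point residuals):

```python
def best_r_layer(group):
    """Try each layer of a group as the R-layer; keep the one that
    minimizes the total absolute residual over the whole group."""
    def cost(r):
        ref = group[r]
        return sum(abs(v - rv) for layer in group for v, rv in zip(layer, ref))
    return min(range(len(group)), key=cost)

# Three layers with slowly drifting values: the middle layer is the best
# reference because the other two deviate from it symmetrically.
group = [[0, 10], [1, 11], [2, 12]]
print(best_r_layer(group))  # 1
```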

    In the follow-up operation, we use arithmetic coding based on the best context selection for the above information to complete the entire process of the multi-frame point cloud geometry compression algorithm.

    3.5 Arithmetic Coding Based on Context Dictionary

    The massive amount of context in a point cloud significantly burdens the overall compression scheme in terms of arithmetic coding computational complexity. We improve arithmetic coding in the following two modules: 1) we set up a context dictionary, and select and update the global optimal value according to the entropy estimate; 2) we adopt adaptive encoders to efficiently calculate the upper and lower bounds of probabilities.

    1) Context dictionary construction

    We construct a context dictionary represented as a triple queue, consisting of the coordinates of the point cloud at each sliced layer and the integer representation of its corresponding non-empty context. Thus, we associate the voxels contained in the minimum bounding box of each layer with their non-empty contexts. To illustrate the construction of the triple queue array of the context dictionary clearly, we give an intuitive explanation in Fig. 2.

    For the two shaded squares in Fig. 2, only the context map positions pc1 and pc2 are considered. The context contributions along the x-axis and the y-axis are recorded in the two queues QX and QY, respectively. Thus the context dictionary consists of QX and QY. Queue elements with the same coordinates are integrated into a triplet, whose context integer representation is computed as the sum of the context contributions of the merged triplet.

    Therefore, the context of each voxel can be computed as the sum of the independent contributions of occupied voxels in its context dictionary. This structure helps determine whether a voxel should be added to the context dictionary without tedious matrix lookups, resulting in a significant reduction in computational complexity and runtime.
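    The merging of same-coordinate queue entries can be sketched as follows (a hypothetical helper; the triple values are illustrative, not taken from Fig. 2):

```python
def build_context_dictionary(qx, qy):
    """Merge the x-axis and y-axis queues into the triple-queue dictionary.

    Each entry is (x, y, context_integer); entries sharing coordinates are
    merged by summing their context contributions.
    """
    merged = {}
    for x, y, c in qx + qy:
        merged[(x, y)] = merged.get((x, y), 0) + c
    return [(x, y, c) for (x, y), c in sorted(merged.items())]

QX = [(1, 2, 4), (2, 2, 1)]  # illustrative x-axis contributions
QY = [(1, 2, 2)]             # illustrative y-axis contributions
print(build_context_dictionary(QX, QY))  # [(1, 2, 6), (2, 2, 1)]
```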

    2) Probability calculation

    To calculate entropy probability, both the length of the sequence and the context of its constituent voxels must be taken into account. In this module, we design an adaptive encoder that first estimates the upper and lower cumulative probability bounds for each group from the context dictionary, and then encodes it subsequently.

    First of all, we construct a binary tree based on the Markov chain model. By traversing the occupancy of voxels, we assign values of 1 and 0 to occupied and empty voxels, respectively, and calculate the probability based on the tree structure. Starting from the root node, when a voxel is occupied, we record the left child node as 1. Otherwise, we mark the right child node as 0 and proceed to the next step of judgment and division. The calculation formula for the run probability of occupied voxels is given in Eq. (4).

    where l is the length of the run ending at the occupied voxel.

    ▲Figure 2. Construction of the context dictionary

    For run lengths less than or equal to n, there may be 2^n tree nodes representing the occupancy states of voxels. Therefore, the probability of any occupied voxel is represented by the independent joint probability of traversing all states starting at the root and ending at any childless node of the tree.

    Based on Eq. (4), to perform arithmetic encoding on the occupancy of the voxel sequence, we need the cumulative upper and lower probabilities of the sequence, as shown in Eq. (5).

    Employing this approach, we can utilize the adaptive properties of arithmetic coding to adjust the probability estimation value of each symbol based on the optimized probability estimation model and the frequency of each symbol in the current symbol sequence. This allows us to calculate the upper and lower bounds of the cumulative probability of occupied voxels and complete the encoding process.
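    The adaptive probability bounds can be sketched with a generic frequency-counting model (not the paper's exact context-dictionary-driven model): for each symbol it returns the cumulative interval [low, high) that an arithmetic coder would narrow to, then updates the counts:

```python
def interval(counts, symbol):
    """Cumulative probability interval [low, high) of `symbol`
    under the current frequency counts."""
    total = sum(counts.values())
    low = sum(c for s, c in sorted(counts.items()) if s < symbol) / total
    high = low + counts[symbol] / total
    return low, high

counts = {0: 1, 1: 1}            # start from uniform pseudo-counts
for bit in [1, 1, 0, 1]:
    lo, hi = interval(counts, bit)
    print(bit, round(lo, 3), round(hi, 3))
    counts[bit] += 1             # adapt: frequent symbols get wider intervals
```

After seeing mostly 1-bits, the interval for symbol 1 widens, so encoding further 1-bits costs fewer bits, which is the adaptivity the text describes.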

    4 Experiment

    4.1 Implementation Details

    1) Dataset. To verify the performance of our proposed method, extensive experiments were conducted over 16 point cloud datasets that can be downloaded from Ref. [40], as shown in Fig. 3, in which Figs. 3(a)-3(l) are portraits with dense points, and Figs. 3(m)-3(p) are architecture with sparse points. Figs. 3(a)-3(h) are voxelized upper body point cloud data sequences at two spatial resolutions obtained from Microsoft. Figs. 3(i)-3(l) are chosen from the 8i voxelized full bodies point cloud data sequences. The remaining large-scale sparse point clouds in Figs. 3(m)-3(p) are static facade and architecture datasets.

    2) Evaluation metrics. The performance of the proposed method is evaluated in terms of bits per point (BPP). The BPP refers to the number of bits occupied by the coordinate information per point. The lower the value, the better the performance.

    That is, BPP = Size_dig / k, where Size_dig represents the number of bits occupied by the coordinate information of the point cloud data, and k refers to the number of points in the original point cloud.
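    The metric is straightforward to compute (a trivial sketch with made-up numbers):

```python
def bpp(size_bits, num_points):
    """Bits per point: compressed geometry size divided by point count."""
    return size_bits / num_points

# Example values are illustrative, not measurements from the paper.
print(bpp(1_500_000, 1_000_000))  # 1.5
```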

    3) Benchmarks. We mainly compare our method with other baseline algorithms, including: PCL-PCC, the octree-based compression in PCL; G-PCC (the MPEG intra-coder test model) and interEM (the MPEG inter-coder test model), which target single-frame and multi-frame point cloud compression, respectively; and Silhouette 3D (S3D)[41] and Silhouette 4D (S4D)[42], which also target single-frame and multi-frame point cloud compression, respectively. For PCL, we use the octree point cloud compression approach in PCL-v1.8.1 for geometry compression only, setting the octree resolution parameters from the point precision and voxel resolution. For G-PCC (TM13-v11.0), we choose the lossless-geometry, lossless-attributes condition in octree-predictive mode, leaving the other parameters at their defaults. For interEM (tmc3v3.0), we use the experimental results under lossless-geometry and lossless-attributes conditions as a comparison[43]. For S3D and S4D, we follow the default conditions and parameters.

    ▲Figure 3. Point cloud sequences used in experiments: (a) Andrew_vox09, (b) Andrew_vox10, (c) David_vox09, (d) David_vox10, (e) Ricardo_vox09, (f) Ricardo_vox10, (g) Sarah_vox09, (h) Sarah_vox10, (i) Longdress_vox10, (j) Loot_vox10, (k) Redandblack_vox10, (l) Soldier_vox10, (m) Facade_00009_vox12, (n) Facade_00015_vox14, (o) Arco_Valentino_Dense_vox12, and (p) Palazzo_Carignano_Dense_vox14

    4) Hardware. The proposed algorithm is implemented in Matlab and C++, using some functions of PCL-v1.8.1. All experiments were run on a laptop with an Intel Core i7-8750 CPU @ 2.20 GHz and 8 GB of memory.

    4.2 Results of Single-Frame Point Cloud Compression

    1) Compression results of portraits of dense point cloud data sequences

    Table 1 shows the performance of our spatial context-guided lossless point cloud geometry compression algorithm compared with the PCL-PCC, G-PCC and S3D methods on portraits of dense point cloud data sequences.

    It can be seen from Table 1 that for all point clouds of the tested sequences, the proposed method achieves the lowest BPP among the compared methods. Our algorithm offers average gains ranging from -1.56% to -0.02% against S3D, and shows a more obvious advantage against G-PCC, with gains ranging from -10.62% to -1.45%. Against PCL-PCC, the proposed algorithm achieves a nearly doubled gain on all sequences, ranging from -154.43% to -85.39%.

    2) Compression results of large-scale sparse point cloud data

    Because S3D cannot handle this case, we only compare our spatial context-guided lossless point cloud geometry compression algorithm with the PCL-PCC and G-PCC methods on large-scale sparse point cloud data.

    Again, our algorithm achieves considerable gains over G-PCC and PCL-PCC, as shown in Table 1. The results show average BPP gains ranging from -8.84% to -4.35% compared with G-PCC. Against PCL-PCC, our proposed algorithm shows an even more obvious advantage, with gains ranging from -34.69% to -23.94%.

    3) Summary

    To provide a more comprehensible comparison of the single-frame point cloud compression results, Table 2 presents the average results of our spatial context-guided compression method and the other state-of-the-art benchmark methods. Compared with S3D, our proposed method shows average gains ranging from -0.58% to -3.43%. As for G-PCC and PCL-PCC, the average gains reach at least -3.43% and -95.03%, respectively.
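As a rough illustration, the negative gains reported throughout this section are consistent with measuring each baseline's BPP relative to our own, which is why values below -100% (e.g. -154.43% against PCL-PCC) are possible. A minimal sketch of this assumed convention (the function name and formula are our illustration, not taken from the paper):

```python
def bpp_gain_percent(ours_bpp: float, baseline_bpp: float) -> float:
    """Relative gain over a baseline: the baseline's extra bits
    expressed as a fraction of our bit rate. Negative values mean
    our method spends fewer bits per point."""
    return (ours_bpp - baseline_bpp) / ours_bpp * 100.0

# A baseline needing twice our bit rate yields a -100% gain,
# matching the "nearly doubled" reading of gains around -100%.
print(bpp_gain_percent(1.0, 2.0))  # -100.0
```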

    Experimental analysis reveals that our spatial context-guided compression method outperforms the current S3D, G-PCC and PCL-PCC methods by a significant margin. It can therefore satisfy the lossless compression requirements of point cloud geometry for various scene types, e.g., dense or sparse distributions, and its effectiveness remains consistent.

    4.3 Results of Multi-frame Point Cloud Compression

    We evaluate our proposed spatio-temporal context-guided point cloud geometry compression algorithm against existing compression algorithms, namely S4D, PCL-PCC, G-PCC and interEM. Only portrait dense point cloud data sequences are used in this experiment. The results are illustrated in Table 3. As we can see, after the optimizations in the prediction mode and the arithmetic encoder, the proposed algorithm shows superiority on all test sequences. Specifically, compared with interEM and G-PCC, the proposed algorithm shows significant gains ranging from -51.94% to -17.13% and from -46.62% to -5.7%, respectively. Compared with S4D, the proposed algorithm shows a robust improvement ranging from -12.18% to -0.33%. As for PCL-PCC, our proposed algorithm nearly halves the bit rate on all test sequences.

    ▼Table 1. BPP comparisons of our spatial context-guided compression algorithm and the baseline methods

    ▼Table 2. BPP comparison with state-of-the-art algorithms on single-frame point cloud data

    Furthermore, we summarize the compression results and gains of the proposed method on the portrait dense point cloud data sequences in Table 4. On average, it delivers gains between -11.5% and -2.59% compared with the spatial context-guided point cloud geometry compression algorithm proposed above. Moreover, it shows a superior average gain of -19% compared with G-PCC, and achieves an average coding gain of -24.55% compared with interEM. Additionally, compared with S3D and S4D, it gains more than -6.11% and -3.64% on average, respectively.

    The overall experimental analysis shows that the spatio-temporal context-guided point cloud compression method can make full use of both the spatial and temporal correlations of adjacent layers within intra-frames and across inter-frames. We also improve the global context selection and the probability model of the arithmetic encoder to obtain a lower bit rate. The proposed method surpasses the performance of state-of-the-art algorithms and thus meets the requirements of lossless point cloud geometry compression in multimedia application scenarios such as dynamic portraits.

    ▼Table 3. BPP comparisons of our spatio-temporal context-guided compression algorithm and the baseline methods

    ▼Table 4. BPP comparison with state-of-the-art algorithms on multi-frame point cloud data

    4.4 Ablation Study

    We perform ablation studies on predictive encoding over the 8i voxelized full-body point cloud data sequences to demonstrate the effectiveness of the partition. It can be seen from Table 5 that the improvement shows a stable gain of -70% on multi-frame point cloud compression and -60% on single-frame point cloud compression against non-partition predictive coding.

    Next, we perform an ablation experiment on arithmetic coding to demonstrate the effectiveness of the context dictionary. As shown in Table 6, our method achieves a robust improvement of -33% on multi-frame point cloud compression and of -41% on single-frame point cloud compression against arithmetic coding without the context dictionary.

    ▼Table 5. Ablation study on predictive encoding

    ▼Table 6. Ablation study on arithmetic coding

    4.5 Time Consumption

    We test the time consumption to evaluate the algorithm complexity and compare the proposed methods with the others. The complexity is analyzed for encoders and decoders independently, as listed in Table 7. As we can see, G-PCC, interEM and PCL-PCC achieve an encoding time of less than 10 s and a decoding time of less than 5 s for portrait dense point cloud data, and they also perform well on large-scale sparse point cloud data. Our proposed algorithms take around 60 s to encode and 15 s to decode portrait sequences, and even longer on facade and architecture point cloud data; there is a trade-off between bit rate and compression speed. Nevertheless, compared with S3D and S4D, which take hundreds of seconds to encode, our method still shows an advantage in speed.

    In summary, the time consumption of our proposed methods is medium among all the compared algorithms, but it still needs further improvement.

    5 Conclusions

    In this paper, we propose a spatio-temporal context-guided method for lossless point cloud geometry compression. We take sliced point cloud layers of unit thickness as the input unit and adopt a geometry predictive coding mode based on the travelling salesman algorithm, which applies to both intra-frame and inter-frame prediction. Moreover, we make full use of global context information and an adaptive arithmetic encoder with fast context update to achieve lossless compression and decompression of point clouds. Experimental results demonstrate the effectiveness of our methods and their superiority over previous studies. For future work, we plan to further study the overall complexity of the algorithm and reduce it to achieve high-speed compression at a low bit rate, since a low-bit-rate, real-time/low-delay method is highly desired in various types of scenes.
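The first step of the pipeline, slicing the cloud into unit-thickness layers along the longest axis, can be sketched as follows. This is a simplified illustration assuming integer voxel coordinates; the actual codec is implemented in Matlab/C++ and the function name is ours:

```python
from collections import defaultdict

def slice_into_layers(points):
    """Partition voxelized points (integer (x, y, z) tuples) into
    unit-thickness layers along the axis with the largest extent."""
    mins = [min(p[i] for p in points) for i in range(3)]
    maxs = [max(p[i] for p in points) for i in range(3)]
    # Pick the longest axis; each distinct coordinate value on that
    # axis defines one unit-thickness layer.
    axis = max(range(3), key=lambda i: maxs[i] - mins[i])
    layers = defaultdict(list)
    for p in points:
        layers[p[axis]].append(p)
    return axis, dict(layers)

axis, layers = slice_into_layers([(0, 0, 0), (0, 5, 1), (0, 9, 1)])
print(axis, sorted(layers))  # 1 [0, 5, 9]
```

Prediction then proceeds layer by layer, matching points between adjacent layers (and between co-located layers of adjacent frames in the inter-frame case).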

    ▼Table 7. Time consumption comparison with state-of-the-art algorithms in encoding and decoding
