
    CFSA-Net: Efficient Large-Scale Point Cloud Semantic Segmentation Based on Cross-Fusion Self-Attention

    Computers, Materials & Continua, 2023, Issue 12

    Jun Shu, Shuai Wang, Shiqi Yu and Jie Zhang

    1 School of Electrical and Engineering, Hubei University of Technology, Wuhan 430068, China

    2 Hubei Key Laboratory for High-Efficiency Utilization of Solar Energy and Operation Control of Energy Storage System, Hubei University of Technology, Wuhan 430068, China

    3 School of Mechanical and Electrical Engineering, Wuhan Donghu University, Wuhan 430212, China

    ABSTRACT Traditional models for semantic segmentation in point clouds primarily focus on smaller scales. However, in real-world applications, point clouds often exhibit larger scales, leading to heavy computational and memory requirements. The key to handling large-scale point clouds lies in leveraging random sampling, which offers higher computational efficiency and lower memory consumption compared to other sampling methods. Nevertheless, the use of random sampling can potentially result in the loss of crucial points during the encoding stage. To address these issues, this paper proposes the cross-fusion self-attention network (CFSA-Net), a lightweight and efficient network architecture specifically designed for directly processing large-scale point clouds. At the core of this network is the incorporation of random sampling alongside a local feature extraction module based on cross-fusion self-attention (CFSA). This module effectively integrates long-range contextual dependencies between points by employing hierarchical position coding (HPC). Furthermore, it enhances the interaction between each point's coordinates and feature information through cross-fusion self-attention pooling, enabling the acquisition of more comprehensive geometric information. Finally, a residual optimization (RO) structure is introduced to extend the receptive field of individual points by stacking hierarchical position coding and cross-fusion self-attention pooling, thereby reducing the impact of information loss caused by random sampling. Experimental results on the Stanford Large-Scale 3D Indoor Spaces (S3DIS), Semantic3D, and SemanticKITTI datasets demonstrate the superiority of this algorithm over advanced approaches such as RandLA-Net and KPConv. These findings underscore the excellent performance of CFSA-Net in large-scale 3D semantic segmentation.

    KEYWORDS Semantic segmentation; large-scale point cloud; random sampling; cross-fusion self-attention

    1 Introduction

    Large-scale semantic segmentation of point clouds has significant practical applications in real-time intelligent systems such as autonomous driving and remote sensing. However, because large-scale point cloud datasets often exceed millions of points, efficiently conducting semantic segmentation at such a scale poses a formidable challenge. Furthermore, compared to two-dimensional image data, three-dimensional point cloud data is unordered and unstructured. Designing a deep neural network tailored to the semantic segmentation of three-dimensional point clouds, one that exploits the underlying data structure of point clouds, is therefore a demanding research endeavor.

    In addressing the challenges of point cloud semantic segmentation, researchers have devoted substantial effort to deep learning-based approaches for 3D point cloud semantic segmentation. Over the past years, a growing number of deep learning frameworks have been proposed to tackle this task. Notably, Qi et al. introduced the groundbreaking PointNet [1] network, the first model capable of directly processing point cloud data with neural networks and without additional operations. However, PointNet did not account for local feature extraction, prompting subsequent studies to propose various methods to address this limitation. These methods [2–4] not only rely on individual points for feature extraction but also aggregate local geometric information to capture the point cloud's structural features. Additionally, graph-based [5–7] and kernel-based [8–10] convolution techniques, which have demonstrated significant advances in image processing, have been introduced to capture relationships between different local structures through convolutional neural networks. While these algorithms have achieved noteworthy results in point cloud processing, for efficiency they often partition the point cloud into small, independent blocks, such as 1 × 1 × 1-meter blocks each containing 1024 points. This partitioning proves impractical for large-scale point clouds, as it disrupts the inherent three-dimensional object structure and incurs high computational costs. There are two primary reasons for the low efficiency of semantic segmentation in large-scale point clouds. 1) These methods often employ complex point sampling strategies to ensure a uniform distribution of points; however, such strategies are either computationally intensive or memory-inefficient. 2) Previous research has typically treated feature information and coordinate information separately during local feature aggregation, simply concatenating the three-dimensional raw coordinates with the feature information and overlooking the comprehensive modeling of geometric information.

    There are also existing approaches that can directly handle large-scale point clouds. For instance, SPG [11] preprocesses point cloud data into superpoint graphs and then employs neural networks for semantic segmentation. RangeNet++ [12] and PCT [13] utilize projection-based and voxel-based methods to handle large-scale point clouds. However, these methods either entail computationally intensive and time-consuming preprocessing steps or require partitioning the point clouds into smaller blocks for learning, resulting in suboptimal overall performance.

    To tackle the aforementioned issues, this paper designs a new large-scale point cloud semantic segmentation framework. The framework uses a random down-sampling strategy to process large amounts of point cloud data with fewer computing resources. Furthermore, this paper introduces a robust module for extracting local features, enhancing the network's capacity to describe fine-grained local features and to model geometric information more comprehensively. To this end, this paper first establishes the efficacy of random sampling and subsequently emphasizes the necessity of designing a feature extraction module that comprehensively captures geometric information.

    The down-sampling of point clouds is a vital component of point cloud semantic segmentation networks. This step selects a representative subset of points from the point cloud, for which Farthest Point Sampling (FPS) [2] and Inverse Density Importance Sub-Sampling (IDIS) [14] are commonly used methods. For a point cloud with N points, farthest point sampling has a computational complexity of roughly O(N²), while inverse density importance sub-sampling has a computational complexity of O(N). There also exist other learning-based sampling methods [15–18], although they are not specifically discussed in this paper. In contrast, Random Sampling (RS) has a computational complexity of only O(1) per sampled point, making it an efficient option when dealing with large-scale point clouds. However, while random sampling offers efficiency advantages, it comes with associated costs: it may yield a sample set that is not representative and may lose crucial structural information in the point cloud, as depicted in Fig. 1. To overcome these potential drawbacks of random sampling, this paper proposes a local feature extraction module based on Cross-Fusion Self-Attention (CFSA), which effectively captures intricate local structures.
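    The complexity gap is easiest to see in code. The sketch below contrasts random sampling with a naive farthest point sampling implementation; it is a minimal NumPy illustration, and the function names, the 6-channel toy cloud, and the 25% keep ratio are only examples, not part of CFSA-Net.

```python
import numpy as np

def random_sample(points: np.ndarray, m: int) -> np.ndarray:
    """Random sampling: each pick is O(1), independent of the cloud size."""
    idx = np.random.choice(points.shape[0], m, replace=False)
    return points[idx]

def farthest_point_sample(points: np.ndarray, m: int) -> np.ndarray:
    """Naive FPS: every new pick scans all N points, so selecting m points costs O(N * m)."""
    n = points.shape[0]
    selected = np.zeros(m, dtype=np.int64)
    dist = np.full(n, np.inf)
    selected[0] = np.random.randint(n)
    for i in range(1, m):
        # distance of every point to its nearest already-selected point
        diff = points[:, :3] - points[selected[i - 1], :3]
        dist = np.minimum(dist, np.einsum('ij,ij->i', diff, diff))
        selected[i] = np.argmax(dist)  # pick the point farthest from the current sample set
    return points[selected]

# toy usage: sub-sample a million-point cloud (xyz + rgb) down to 25%
cloud = np.random.rand(1_000_000, 6).astype(np.float32)
subset_rs = random_sample(cloud, 250_000)              # fast
# subset_fps = farthest_point_sample(cloud, 250_000)   # orders of magnitude slower
```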

    Figure 1:Sampling effect of different sampling methods under the same sampling ratio

    The local feature extraction module based on cross-fusion self-attention consists of three pivotal components. First, this paper proposes a hierarchical position coding module that performs hierarchical sampling and relative position coding for each query point, effectively addressing long-range dependencies between points. Second, this study presents a cross-fusion self-attention pooling module, which enables the interactive fusion of feature and coordinate information within the point cloud. The CFSA pooling module dynamically enhances the expressive capacity of features and coordinates, thereby preserving intricate local geometric structure. Last, this paper introduces a residual optimization module, which enhances feature extraction by stacking the hierarchical position coding module and the cross-fusion self-attention pooling module. This integration increases the depth of the network and expands the receptive field of each point, further improving the efficacy of feature extraction.

    This paper makes significant contributions in the following aspects:

    1. Through meticulous analysis and comparison of existing sampling methods, this paper chooses random sampling as the down-sampling strategy to efficiently process large-scale point cloud data.

    2. This paper proposes a local feature extraction module based on cross-fusion self-attention, which better integrates the long-range contextual dependencies of points, interactively enhances the coordinate and feature information of points, and expands the receptive field of each point to model more complete geometric information.

    3. Building upon the aforementioned contributions, this paper proposes CFSA-Net, a powerful network designed to effectively tackle the segmentation task of large-scale point clouds. Notably, CFSA-Net achieves competitive results on three mainstream datasets: S3DIS [19], Semantic3D [20], and SemanticKITTI [21].

    The remainder of this paper is organized as follows: Section 2 provides a detailed overview of classical approaches to point cloud semantic segmentation. Section 3 presents an elaborate description of the proposed methodology. Section 4 evaluates the performance of the proposed method through comparative experiments and ablation studies. Finally, Section 5 concludes the paper with an objective summary.

    2 Related Work

    Projection-based and voxel-based methods: Methodologies based on projection and voxelization entail specific preprocessing steps for the raw point cloud. The projection-based [22–25] approach projects the 3D point cloud onto a 2D plane, enabling the direct application of conventional 2D Convolutional Neural Networks (CNN). By leveraging the powerful capabilities of 2D CNNs [26], semantic segmentation can be performed on the projected image information. The voxel-based [27–29] approach, in turn, transforms the 3D point cloud into a regular 3D grid or voxel representation, facilitating processing through 3D CNNs and capturing the spatial relationships between voxels through 3D convolutions. However, projection-based methods may suffer from information loss during projection and may struggle to capture fine-grained geometric details, while voxel-based methods often face challenges in handling high-resolution data due to memory constraints, represent sparse point clouds inefficiently, and exhibit significant drawbacks when dealing with large-scale point clouds.

    Point-based methods: Point-based methodologies operate directly on point cloud data, performing semantic segmentation by assigning each point in the point cloud to its corresponding semantic class. Drawing inspiration from the groundbreaking work of PointNet [1], researchers have proposed a series of neural network models that directly process raw point cloud data. For instance, Qi et al. introduced PointNet++ [2], which integrates a sophisticated multi-level local feature aggregation module, thereby facilitating enhanced aggregation of local features. Thomas et al. proposed KPConv [30], which introduces the concept of kernel points and adaptively selects certain points in the point cloud as templates for convolutional kernels. Li et al. introduced PSNet [31], which provides a rapid data-structuring approach for simultaneous point sampling and grouping. Ibrahim et al. proposed SAT3D [32], the first technique based on the Slot Attention Transformer to effectively model object-centric features in point cloud data. Point-based methods exhibit remarkable performance on irregular and sparse point clouds because they directly capture the local geometric attributes of each point, and these networks demonstrate promising results on small-scale point clouds. However, due to their high computational and memory costs, most such networks cannot scale directly to larger scenes, which hinders their ability to model large-scale point clouds.

    Large-scale point cloud semantic segmentation: Recently, various models have been introduced to address the challenge of large-scale point cloud semantic segmentation. Among them, Landrieu et al. introduced SPG [11], which leverages the concept of a superpoint graph to transform point cloud data into a graph structure and utilizes graph neural networks for semantic segmentation. Additionally, to improve computational efficiency, some models convert 3D point clouds into 2D representations, enabling the use of efficient 2D convolutions for semantic segmentation. For example, Tatarchenko et al. [33] projected the local surface geometry of the point cloud onto the tangent plane of each point and processed it using 2D convolutions. Wu et al. [24] employed spherical projection to transform point cloud data into a format compatible with mature 2D image processing techniques. Moreover, some methods operate directly on points to handle large-scale point clouds. Zhang et al. proposed PointCCR [34], which improves efficiency through random sampling while leveraging the local structure of the point cloud and expanding the receptive field of individual points. Although the aforementioned methods have achieved significant results, their preprocessing steps involve substantial computational complexity, and the projections disrupt the 3D geometric structure of the point cloud. Motivated by these approaches, and to balance efficiency with preserving the original 3D geometric relationships, we propose CFSA-Net, an end-to-end efficient network specifically designed for large-scale point cloud semantic segmentation.

    Self-attention mechanism: The self-attention mechanism was initially introduced in natural language processing and 2D image processing [35], and it has garnered considerable attention in current research due to its remarkable ability to model contextual information. In recent years, researchers have applied this mechanism to point cloud processing tasks to further enhance the processing of point cloud data, and several self-attention-based point cloud processing methods have been proposed. For instance, Fu et al. introduced FFANet [36], which effectively captures the contextual information of each point using the self-attention mechanism. Chen et al. introduced GAPNet [37], which integrates graph attention mechanisms into a series of stacked Multi-Layer Perceptron (MLP) layers to effectively learn the local features of input point clouds. Guo et al. proposed PCT [13], which adopts the self-attention mechanism from Transformers to capture the relationships between points in point cloud data, enabling better modeling of fine-grained details. Ren et al. proposed PA-Net [38], which designs two parallel self-attention mechanisms that simultaneously attend to coordinate and feature information. Previous works have primarily handled coordinate and feature information separately. In contrast, our network employs a cross-fusion self-attention mechanism, which interactively captures and integrates coordinate and feature information while considering the relative positional relations of the point cloud, thereby modeling more comprehensive geometric information.

    3 Methodology

    3.1 Overview

    The model, as illustrated in Fig. 2, utilizes an encoder-decoder architecture with skip connections to process a point cloud of N points, where each point carries xyz coordinates and feature attributes (e.g., color, normal vectors) as inputs. To capture the intricate characteristics of each point, the input point cloud passes through a series of five encoding and decoding layers. During the encoding phase, the point cloud scale is reduced through random sampling. By incorporating the Local Feature Extraction (LFE) module, the model enriches the coordinate information, enhances the interaction between coordinate and feature attributes, and expands the receptive field of each point. In the decoding phase, each point uses the K-Nearest Neighbor (KNN) approach to identify its nearest neighboring point, and Up-Sampling (US) is performed via linear interpolation to restore the point cloud to its original scale. The up-sampled features and the skip-connected features from the encoding phase are combined through summation and then fed into a shared Multi-Layer Perceptron (MLP) to reduce the feature dimensionality. This process is repeated across the decoding layers to obtain the final segmentation result.
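    As a rough illustration of this encoder-decoder flow, the sketch below down-samples randomly on the way down and up-samples on the way up, with a plain nearest-neighbour feature copy standing in for the interpolation and an identity function standing in for the LFE module; the layer count, down-sampling ratio, and helper names are illustrative assumptions rather than the exact CFSA-Net configuration.

```python
import numpy as np
from scipy.spatial import cKDTree

def encode_decode(xyz, feats, lfe, num_layers=5, ratio=4):
    """Random down-sampling on the way down, nearest-neighbour feature copy on the way up."""
    skips, coords = [], []
    # encoder: local feature extraction, then random sub-sampling at every layer
    for _ in range(num_layers):
        feats = lfe(xyz, feats)                        # stand-in for the LFE module
        skips.append(feats)
        coords.append(xyz)
        keep = np.random.choice(len(xyz), len(xyz) // ratio, replace=False)
        xyz, feats = xyz[keep], feats[keep]
    # decoder: propagate each coarse feature to its nearest fine-level point, fuse skip by summation
    for lvl in reversed(range(num_layers)):
        fine_xyz = coords[lvl]
        nn = cKDTree(xyz).query(fine_xyz, k=1)[1]      # nearest coarse neighbour of every fine point
        feats = feats[nn] + skips[lvl]                 # up-sample + skip connection (summation)
        xyz = fine_xyz
    return feats                                       # per-point features before the shared MLP head

# toy usage with an identity stand-in for LFE (the real LFE changes the feature dimension per layer)
pts = np.random.rand(4096, 3).astype(np.float32)
out = encode_decode(pts, np.ones((4096, 8), np.float32), lfe=lambda p, f: f)
```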

    Figure 2:Network structural diagram

    3.2 Local Feature Extraction Based on Cross-Fusion Self-Attention Mechanism

    Local Feature Extraction (LFE) constitutes the core of the encoding layer and is composed of three primary components: Hierarchical Position Coding (HPC), the Cross-Fusion Self-Attention (CFSA) pooling module, and the Residual Optimization (RO) structure.

    3.2.1 Hierarchical Position Coding(HPC)

    The module comprises hierarchical sampling and relative position coding. The first step is sampling. Common sampling methods usually perform only KNN-based sampling of neighboring points. However, this limits the receptive field of each query point and hinders the establishment of long-range contextual dependencies. A straightforward remedy is to increase the sampling radius, but this increases computational memory requirements. To aggregate distant contextual dependencies at lower memory cost, a hierarchical sampling strategy is introduced, as illustrated in Fig. 3. The specific strategy is defined as follows:

    Figure 3:Hierarchical positional coding module

    Given an input point set P = {p_i, f_i | i = 1, 2, 3, ..., n}, where n is the total number of points in the point cloud, p_i is the positional information (x, y, z), and f_i is the feature information (e.g., color, normal vectors, etc.), the following procedure is applied to each query point. Initially, a dense selection of K neighboring points is performed using the KNN method, yielding the set K1. Subsequently, a sparser selection of K neighboring points is obtained by applying the FPS method within a larger radius, forming the set K2. Finally, the two sets K1 and K2 are merged and duplicate points are removed, yielding the final set of neighboring points K3.
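    A minimal sketch of this neighbour selection is given below; it uses SciPy's KD-tree for the dense KNN query and a small farthest-point loop over the points inside the wider radius, and the values of k and the radius are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def hierarchical_neighbors(points, query, k=16, radius=0.5):
    """K1: k dense KNN neighbours; K2: k sparser FPS picks within a larger radius; K3 = K1 union K2."""
    tree = cKDTree(points)
    k1 = tree.query(query, k=k)[1]                              # dense nearby neighbours
    ball = np.asarray(tree.query_ball_point(query, r=radius))   # candidates inside the wider radius
    # farthest-point selection of up to k sparse candidates inside the ball
    k2 = [ball[0]]
    dist = np.linalg.norm(points[ball] - points[ball[0]], axis=1)
    for _ in range(min(k, len(ball)) - 1):
        nxt = int(np.argmax(dist))
        k2.append(ball[nxt])
        dist = np.minimum(dist, np.linalg.norm(points[ball] - points[ball[nxt]], axis=1))
    return np.unique(np.concatenate([k1, np.asarray(k2)]))      # K3 with duplicates removed

# toy usage: neighbour set of one query point in a random cloud
cloud = np.random.rand(2048, 3).astype(np.float32)
idx_k3 = hierarchical_neighbors(cloud, cloud[0], k=16, radius=0.3)
```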

    Then relative position coding is performed on the neighbor point set K3. The coding process is defined as follows:

    $$r_i^k = \mathrm{MLP}\big(p_i \oplus p_i^k \oplus (p_i - p_i^k) \oplus \lVert p_i - p_i^k \rVert\big),\qquad k = 1,\ldots,K'$$

    where K′ is the number of points in the set K3; r_i^k is the result of the spatial position encoding of the points; p_i is the coordinate of the query point; p_i^k are the coordinates of the K′ adjacent points; p_i − p_i^k is the relative coordinate between the query point and an adjacent point; ‖p_i − p_i^k‖ is the Euclidean distance between the query point and the adjacent point; ⊕ denotes the concatenation operation, which connects the above relative position information; and the MLP lifts the concatenated relative position information to the same dimension as f_i.

    As depicted in Fig. 3, the matrix \hat{f}_i of dimensions (K′ × d) gathers the feature information of the K′ neighboring points in the set K3; note that this matrix does not include coordinate information.

    Ultimately, the HPC module outputs the original feature information of the K′ nearest neighbor points together with the corresponding relative spatial position information, which has the same dimension as the original features. Compared with conventional sampling methods, this approach spends additional computation on the sparse distant neighbors and thereby addresses long-range dependency issues; because those distant neighbors are sparse, it does not consume excessive computational memory.
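    The encoding step itself is a concatenation followed by a shared MLP, as in the sketch below; this is a minimal TensorFlow illustration (the paper's framework), and the output dimension and layer configuration are illustrative assumptions.

```python
import tensorflow as tf

def relative_position_encoding(p_i, p_k, d_out):
    """Encode each (query, neighbour) pair as [p_i, p_k, p_i - p_k, ||p_i - p_k||], then lift with a shared MLP.

    p_i: (B, N, 3) query coordinates; p_k: (B, N, K, 3) neighbour coordinates.
    """
    p_i_exp = tf.broadcast_to(p_i[:, :, None, :], tf.shape(p_k))      # (B, N, K, 3)
    rel = p_i_exp - p_k                                               # relative coordinates
    dist = tf.norm(rel, axis=-1, keepdims=True)                       # Euclidean distance
    encoded = tf.concat([p_i_exp, p_k, rel, dist], axis=-1)           # (B, N, K, 10)
    return tf.keras.layers.Dense(d_out, activation='relu')(encoded)   # shared MLP -> (B, N, K, d_out)

# toy usage: 1024 query points with 16 neighbours each, encoded to 32 dimensions
r_hat = relative_position_encoding(tf.random.normal([1, 1024, 3]),
                                   tf.random.normal([1, 1024, 16, 3]), d_out=32)
```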

    3.2.2 Cross-Fusion Self-Attention(CFSA)Pooling

    The CFSA pooling module uses a self-attention mechanism to interactively enhance local coordinate and feature information. Its input is the output of the HPC module, namely the encoded relative position information and the feature information produced by HPC. The specific structure of the module is illustrated in Fig. 4.

    Figure 4:Cross-fusing self-attention pooling module

    The input to the upper branch is the relative position encoding produced by the HPC module, and a linear transformation of it yields the three feature descriptions h_q, h_k, and h_v. Similarly, f_q, f_k, and f_v are obtained by linearly transforming the feature information input to the lower branch. The linear transformations can be described as follows:

    $$h_q = \hat{r}_i W_q^h,\quad h_k = \hat{r}_i W_k^h,\quad h_v = \hat{r}_i W_v^h;\qquad f_q = \hat{f}_i W_q^f,\quad f_k = \hat{f}_i W_k^f,\quad f_v = \hat{f}_i W_v^f$$

    where \hat{r}_i denotes the matrix of encoded relative positions r_i^k, \hat{f}_i denotes the matrix of neighbor features output by HPC, and the W terms are learnable weight matrices.

    Some of the above elements are cross-fused to obtain the outputs h_o and f_o after the self-attention calculation:

    $$h_o = f_a \otimes h_v,\qquad f_o = h_a \otimes f_v \qquad (4)$$

    where ⊗ represents matrix multiplication. As can be seen from Eq. (4), the coordinate and feature information mutually enhance each other. The terms h_a and f_a in the above equation are obtained by query and key weighting, as follows:

    $$h_a = \mathrm{softmax}\big(S(h_q \otimes h_k^{\top})\big),\qquad f_a = \mathrm{softmax}\big(S(f_q \otimes f_k^{\top})\big)$$

    where ⊗ again represents matrix multiplication, S(·) adds the first row of the result of ⊗ to each subsequent row, and the weights are finally assigned through a softmax.

    Compared with traditional self-attention mechanisms, the cross-fusion self-attention mechanism allows the coordinate and feature information produced by HPC to enhance each other. Finally, the new feature description F_out of the query point is obtained after sum pooling and an MLP.
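    A compact sketch of one plausible reading of Fig. 4 is given below: each branch forms query/key/value projections, the attention map of one branch re-weights the values of the other, and the fused result is sum-pooled over the neighbours and passed through an MLP. It uses standard softmax attention (the row-sum weighting described above is omitted), and all layer sizes are illustrative assumptions rather than the exact CFSA-Net configuration.

```python
import tensorflow as tf

def cfsa_pooling(r_hat, f_hat, d_out):
    """Cross-fusion self-attention pooling sketch.

    r_hat: (B, N, K, d) encoded relative positions; f_hat: (B, N, K, d) neighbour features.
    """
    d = r_hat.shape[-1]
    proj = lambda x: tf.keras.layers.Dense(d)(x)         # a fresh linear projection per call
    hq, hk, hv = proj(r_hat), proj(r_hat), proj(r_hat)   # coordinate branch
    fq, fk, fv = proj(f_hat), proj(f_hat), proj(f_hat)   # feature branch
    ha = tf.nn.softmax(tf.matmul(hq, hk, transpose_b=True), axis=-1)   # (B, N, K, K)
    fa = tf.nn.softmax(tf.matmul(fq, fk, transpose_b=True), axis=-1)
    h_o = tf.matmul(fa, hv)   # feature attention re-weights coordinate values
    f_o = tf.matmul(ha, fv)   # coordinate attention re-weights feature values
    pooled = tf.reduce_sum(h_o + f_o, axis=2)                          # sum pooling over the K neighbours
    return tf.keras.layers.Dense(d_out, activation='relu')(pooled)     # F_out: (B, N, d_out)

# toy usage: 1024 query points, 16 neighbours, 32-d inputs, 64-d output
F_out = cfsa_pooling(tf.random.normal([1, 1024, 16, 32]),
                     tf.random.normal([1, 1024, 16, 32]), d_out=64)
```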

    3.2.3 Residual Optimization(RO)

    In this study, the residual optimization module stacks the HPC module and the CFSA pooling module to enlarge the receptive field of individual points and mitigate the potential loss of key-point information caused by random sampling. In principle, stacking more HPC modules and CFSA pooling modules extends the receptive field more effectively; however, computational efficiency and module transferability must also be considered. The residual optimization structure in this paper therefore consists of two stacked HPC modules and CFSA pooling modules, complemented by residual connections. Additionally, a multilayer perceptron is placed before the input and after the output to obtain the required feature dimensions. Finally, the stacked output features are added to the input point cloud features processed by a shared MLP to obtain the final aggregated features. The specific structure is illustrated in Fig. 5.

    Figure 5:Residual optimization module

    After the first stacking operation, the receptive field of the query point covers K′ points; after the second stacking operation, it is raised to approximately K′² points. The receptive field expansion is shown in Fig. 6.
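    The structure can be summarised as a residual block wrapped around two HPC + CFSA passes, as in the sketch below; the stand-in lfe_unit, the activation, and the layer widths are illustrative assumptions, not the exact CFSA-Net configuration.

```python
import tensorflow as tf

def residual_optimization(features, lfe_unit, d_out):
    """Residual optimization sketch: two stacked HPC + CFSA passes wrapped in a residual connection.

    `lfe_unit` stands in for one HPC + CFSA pooling pass and maps (B, N, d) -> (B, N, d).
    """
    x = tf.keras.layers.Dense(d_out // 2, activation='relu')(features)  # MLP before the stack
    x = lfe_unit(x)                                                     # first pass: receptive field ~ K'
    x = lfe_unit(x)                                                     # second pass: receptive field ~ K'^2
    x = tf.keras.layers.Dense(d_out)(x)                                 # MLP after the stack
    shortcut = tf.keras.layers.Dense(d_out)(features)                   # shared MLP on the shortcut branch
    return tf.nn.leaky_relu(x + shortcut)                               # residual addition

# toy usage with an identity stand-in for the HPC + CFSA unit
out = residual_optimization(tf.random.normal([1, 1024, 32]), lfe_unit=lambda t: t, d_out=64)
```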

    Figure 6:Receptive field expansion diagram

    4 Performance Analysis

    In this section, the proposed network is evaluated on three mainstream semantic segmentation datasets (S3DIS, Semantic3D, and SemanticKITTI). In addition, related ablation experiments, including network structure analysis and self-attention mechanism selection, have been carried out to verify the proposed modules.

    4.1 Data Set Introduction

    This study primarily conducts evaluations on three datasets, namely S3DIS, Semantic3D, and SemanticKITTI. S3DIS is a dataset of indoor scenes, Semantic3D is a dataset of outdoor scenes, and SemanticKITTI is a dataset of autonomous driving scenarios. Each dataset has distinct point counts and features. A detailed introduction to each dataset is provided below.

    S3DIS is a comprehensive dataset of indoor scenes, comprising six educational and office regions with a total of 271 rooms and 13 distinct categories. Each point in S3DIS is described by nine features, encompassing coordinate information and color information, along with three corresponding normal vectors.

    The Semantic3D dataset provides a vast collection of natural-scene point clouds, exceeding 4 billion points in total. It covers a diverse range of urban scenes, including churches, streets, railways, squares, villages, football fields, and castles. Each point is characterized by seven features, encompassing coordinate information (x, y, z), reflectance intensity, and color information (R, G, B).

    SemanticKITTI is an authoritative dataset in the field of autonomous driving. It incorporates categories such as pedestrians, vehicles, and other traffic participants, along with ground facilities like parking lots and sidewalks. Each point in the SemanticKITTI dataset has four features, namely coordinate information (x, y, z) and reflectance intensity.

    4.2 Experimental Environment

    The experimental parameters are set as follows: the computations are performed on Ubuntu 20.04 using the TensorFlow 2.6.0 framework, accelerated by an NVIDIA Quadro P6000 GPU. The Adam optimizer is employed, and the batch sizes for the three datasets are set to 6, 3, and 3, respectively. The initial learning rate is uniformly set to 0.01, and the maximum number of iterations for all datasets is 100.
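    For reference, the reported settings map onto a TensorFlow training step roughly as sketched below; the loss function, the @tf.function training step, and the model itself are illustrative assumptions not specified in the paper.

```python
import tensorflow as tf

# Settings reported in Section 4.2; the loss function and the step itself are assumptions.
BATCH_SIZE = {"S3DIS": 6, "Semantic3D": 3, "SemanticKITTI": 3}
MAX_EPOCHS = 100

optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)            # initial learning rate 0.01
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

@tf.function
def train_step(model, points, labels):
    """One optimisation step over a batch of point clouds."""
    with tf.GradientTape() as tape:
        logits = model(points, training=True)                        # (B, N, num_classes)
        loss = loss_fn(labels, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```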

    4.3 Comparative Experiments and Results Analysis

    4.3.1 Experimental Results Evaluation of S3DIS Dataset

    This study uses the S3DIS dataset, which partitions 271 rooms into 6 regions, and evaluates the proposed algorithm through 6-fold cross-validation over these regions. The quantitative comparison of the proposed algorithm with other algorithms across the 6 regions is presented in Table 1, with the best results highlighted in bold. Our algorithm outperforms the others on three metrics, Overall Accuracy (OA), Mean Accuracy (mAcc), and Mean Intersection over Union (mIoU), achieving 87.6%, 82.3%, and 71.2%, respectively. The categories floor, pillar, chair, whiteboard, and clutter achieve the best per-class IoU, with improvements of 0.9%, 0.7%, 1.8%, 0.8%, and 0.5%, respectively, over the best results of the other algorithms in the table. The segmentation accuracy is equally impressive for categories such as windows and doors.
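    The three metrics are standard and can be computed from a class confusion matrix, as in the short sketch below (the 13-class toy matrix is only an example).

```python
import numpy as np

def segmentation_metrics(conf):
    """OA, mAcc and mIoU from a (C x C) confusion matrix with rows = ground truth, columns = prediction."""
    tp = np.diag(conf).astype(float)
    gt, pred = conf.sum(axis=1), conf.sum(axis=0)
    oa = tp.sum() / conf.sum()                            # Overall Accuracy
    macc = np.mean(tp / np.maximum(gt, 1))                # Mean per-class Accuracy
    miou = np.mean(tp / np.maximum(gt + pred - tp, 1))    # Mean Intersection over Union
    return oa, macc, miou

# toy usage with a random 13-class confusion matrix (S3DIS has 13 categories)
oa, macc, miou = segmentation_metrics(np.random.randint(0, 100, size=(13, 13)))
```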

    Table 1:Quantitative results of semantic segmentation of S3DIS dataset

    Next, we compare the proposed algorithm with PointNet++ and RandLA-Net and provide visual comparisons to demonstrate its advantages. As shown in Fig. 7, the first column shows a hallway scene, the second a conference room scene, and the third an office scene. Each scene includes the ground-truth labels and the predictions of PointNet++, RandLA-Net, and our algorithm. The algorithm presented in this study accurately predicts the contours of visually similar objects, the edges of small-scale objects, and the contours of embedded objects. For instance, it captures the intricate geometric shapes of objects such as pillars, beams, and wall corners, which share similar geometry. It also identifies the boundaries of small objects such as bookshelves holding books and miscellaneous items, and accurately outlines embedded objects such as blackboards on walls. This is attributed to the local coordinate encoding module and the cross-attention interaction module: the former preserves rich local geometric information, while the latter enhances the learning of coordinate-feature interactions.

    Figure 7:S3DIS dataset semantic segmentation visualization

    4.3.2 Experimental Results Evaluation of Semantic3D Dataset

    The experimental evaluation was performed on the reduced-8 subset of the Semantic3D dataset, which comprises training point clouds from 15 regions and test point clouds from 4 regions. The quantitative results are presented in Table 2. Our proposed algorithm surpasses the comparison algorithms in both mIoU and OA on the Semantic3D dataset, achieving an mIoU of 78.2% and an OA of 94.9%. Particularly noteworthy is its performance on architecture (structures such as churches, town halls, and stations), hard landscapes (a diverse category encompassing elements such as garden walls, fountains, and banks), and automobiles; compared with the best results obtained by the comparison algorithms in this paper, our algorithm improves these categories by 0.2%, 1.1%, and 0.4%, respectively. It also achieves commendable results on classes such as artificial terrain and natural terrain.

    Table 2:Quantitative results of semantic segmentation of Semantic3D dataset

    The visualized test results are depicted in Fig. 8. Because the ground-truth labels of this dataset's test set are unavailable, the images from left to right show the input point cloud and the predicted labels, respectively. On the whole, our proposed algorithm exhibits remarkable segmentation performance, effectively discerning the boundaries of buildings, roads, and other target objects. It is worth noting that the hard landscape category is unevenly distributed and characterized by substantial variations in shape and structure, and its internal geometric shapes, colors, and texture features also change with the environmental context. Nonetheless, our proposed algorithm achieves the best segmentation performance even in such complex scenarios. The data analysis and result visualization show that the algorithm can identify intricate details and complex components within the point cloud structure, accurately distinguishing the features and nuances of different targets. These findings confirm the network's capabilities in feature extraction, spatial information aggregation, and precise segmentation, thereby comprehensively verifying the effectiveness of the feature extraction module.

    4.3.3 Experimental Results Evaluation of SemanticKITTI Dataset

    The SemanticKITTI dataset is an extension of the KITTI dataset, and Table 3 provides a quantitative comparison of our algorithm with several classical algorithms on it. The results indicate the superiority of our algorithm over the majority of existing approaches, achieving an mIoU of 55.4%. Notably, our algorithm demonstrates outstanding segmentation performance in the vehicle, vegetation, and terrain categories, surpassing the other methods. Our algorithm shows clear advantages among point-based approaches and also holds up well against projection-based and voxel-based methods, ranking second only to the SalsaNext algorithm.

    The segmentation results of our algorithm on the SemanticKITTI dataset are visualized in Fig. 9. From left to right, the images correspond to the ground-truth labels and the predictions of SqueezeSegV2, RandLA-Net, and our algorithm. It is evident from the figure that our algorithm achieves the closest approximation to the ground-truth labels in vehicle predictions, while also demonstrating excellent segmentation performance in vegetation areas and along terrain edges. The visual analysis shows that even on large-scale outdoor scene datasets with sparse point cloud densities, our algorithm consistently achieves favorable segmentation results, effectively demonstrating the network's feature extraction capabilities.

    4.3.4 Discussion

    S3DIS, Semantic3D, and SemanticKITTI are all point cloud datasets collected from the real world. S3DIS focuses on indoor scenes, Semantic3D covers large-scale outdoor scenes in urban, rural, and natural environments, and SemanticKITTI focuses specifically on autonomous driving scenarios. These three datasets differ significantly in scale and scene type. Nevertheless, the proposed model achieves competitive results on all three datasets, demonstrating its strong generalization ability. In future work, we plan to further improve the model's robustness to input data by introducing data augmentation techniques such as rotation and translation during training.

    Figure 9:Visualization results of semantic segmentation of SemanticKITTI dataset

    4.4 Ablation Experiments

    4.4.1 Efficiency Analysis of Sampling Method

    This study aims to address the challenge of semantic segmentation in large-scale point clouds. We analyze existing semantic segmentation network models under large-scale point cloud conditions and find that the choice of sampling method significantly impacts both training time and memory consumption, which necessitates an effective down-sampling strategy. Such a strategy should enable the rational processing of large-scale point clouds and enhance the overall efficiency of the network. In this regard, we analyze five distinct sampling methods: Random Sampling (RS), Farthest Point Sampling (FPS), Generator-Based Sampling (GS), Policy Gradient-Based Sampling (PGS), and Inverse Density Importance Sampling (IDIS).

    Fig. 10 presents the experimental comparison of the sampling methods' efficiency on point clouds of different scales, with the number of points on the x-axis and memory consumption and processing time on the y-axes. For smaller point cloud sizes, all of the above sampling methods exhibit similar time and memory consumption, suggesting minimal computational burden. However, as the number of points increases, FPS, GS, PGS, and IDIS become either highly time-consuming or memory-intensive. In contrast, random sampling performs relatively favorably in both time and memory consumption. This outcome indicates that most existing semantic segmentation network models perform well only on small-scale point clouds, largely because of the limitations imposed by the sampling methods they employ. In summary, considering the analysis of the five sampling methods discussed above, random sampling exhibits distinct advantages in time and memory consumption; consequently, this study employs the random sampling algorithm for processing large-scale point cloud data.

    Figure 10:Comparison of sampling effect

    4.4.2 Network Structure Analysis

    To validate the effectiveness of the proposed HPC and CFSA pooling modules, as shown in Table 4, we conducted tests by systematically adjusting each module within the same network architecture and evaluated the performance on the S3DIS dataset. Without any added modules, the mIoU was only 68.1%. When the HPC and CFSA pooling modules were employed individually, the mIoU improved by 1.1% and 2.1%, respectively, reaching values of 70.1% and 69.2%. Furthermore, when both modules were introduced and used jointly, the mIoU improved by 3.1%, reaching 71.2%. These ablation results demonstrate the pivotal role of the proposed modules in feature extraction.

    Table 4:Analysis of experimental results of network structure

    4.4.3 Selection of Self-Attention Mechanism

    Table 5 presents the results of ablation experiments on the S3DIS dataset examining the impact of different self-attention mechanisms within the constructed local feature extraction module. The evaluated mechanisms include channel self-attention (CSA), spatial self-attention (SSA), dual-channel self-attention (DCSA) with parallel spatial and channel interactions, and the proposed CFSA mechanism. These experiments assess the influence of these self-attention mechanisms on point cloud semantic segmentation performance. The results in the table show that the CFSA mechanism achieves the most favorable outcomes, substantiating the effectiveness of this approach.

    Table 5:Experimental results of different self-attention mechanisms

    5 Conclusions

    This paper presents CFSA-Net, a novel network designed for large-scale semantic segmentation of point clouds. The framework adopts a memory-efficient and computationally economical random sampling strategy. Furthermore, to mitigate the potential drawbacks associated with random sampling, this paper introduces a local feature extraction module based on cross-fusion self-attention, enabling more comprehensive modeling of geometric information. The network has exhibited exceptional performance in large-scale point cloud semantic segmentation tasks, as evidenced by comprehensive experiments on the public S3DIS, Semantic3D, and SemanticKITTI datasets. The visualized prediction results clearly illustrate the network's ability to adapt to variations in the shape, structure, and appearance of targets, demonstrating its robust adaptability and generalization capabilities.

    The primary limitation of this study is its reliance on point-wise class annotations under the fully supervised learning paradigm, which is highly demanding for large-scale point clouds. In future research, we will concentrate on weakly and semi-supervised segmentation methods specifically tailored for large-scale point clouds, to alleviate the burden and cost of manual annotation. The algorithm proposed in this paper could also be combined with the multi-innovation theory and the hierarchical identification principle [39–42] to enhance computational efficiency and accuracy.

    Acknowledgement: The authors would like to express their gratitude for the valuable feedback and suggestions provided by all the anonymous reviewers and the editorial team.

    Funding Statement: This study was funded by the National Natural Science Foundation of China Youth Project (61603127).

    Author Contributions: Conceptualization, Jun Shu and Jie Zhang; Data curation, Jie Zhang; Formal analysis, Shiqi Yu and Jie Zhang; Investigation, Shiqi Yu; Methodology, Jun Shu, Shuai Wang and Jie Zhang; Software, Jun Shu and Shiqi Yu; Validation, Jun Shu and Shuai Wang; Visualization, Shuai Wang; Writing-original draft, Shuai Wang and Jie Zhang. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: The training data used in this paper were obtained from S3DIS, Semantic3D, and SemanticKITTI, available online at http://buildingparser.stanford.edu/dataset.html, http://semantic3d.net/, and http://semantic-kitti.org/, respectively.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
