
    Point Cloud Classification Using Content-Based Transformer via Clustering in Feature Space

2024-01-27 06:49:04 · Yahui Liu, Bin Tian, Yisheng Lv, Lingxi Li, and Fei-Yue Wang
IEEE/CAA Journal of Automatica Sinica, January 2024

Yahui Liu, Bin Tian, Yisheng Lv, Lingxi Li, and Fei-Yue Wang

Abstract—Recently, there have been some attempts to apply Transformers to 3D point cloud classification. To reduce computations, most existing methods focus on local spatial attention, but they ignore point content and fail to establish relationships between distant but relevant points. To overcome this limitation of local spatial attention, we propose a point content-based Transformer architecture, called PointConT for short. It exploits the locality of points in the feature space (content-based): it clusters the sampled points with similar features into the same class and computes the self-attention within each class, thus enabling an effective trade-off between capturing long-range dependencies and computational complexity. We further introduce an inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in separate branches. Extensive experiments show that our PointConT model achieves remarkable performance on point cloud shape classification. In particular, our method reaches 90.3% Top-1 accuracy on the hardest setting of ScanObjectNN. The source code of this paper is available at https://github.com/yahuiliu99/PointConT.

I. INTRODUCTION

3D point cloud analysis has gained tremendous popularity in many fields, including scene understanding [1]–[3], robotics, and self-driving vehicles [4]–[6]. Compared with 2D images, 3D point clouds provide richer spatial and geometric information, but they are not arranged in any particular order. Due to this irregular structure, convolutional neural networks cannot be directly applied to point cloud processing, whereas the Transformer [7] architecture is inherently permutation-invariant and naturally suited for point cloud learning.

Recently, some explorations have been made of the Transformer architecture in point cloud analysis [8]–[14]. However, a common downside of these models, the high computational cost, has caught the attention of researchers and motivated them to consider the trade-off between accuracy and inference speed. The two main approaches to reducing the computational complexity are point downsampling and local self-attention [8], [11]–[13]. Point downsampling algorithms, such as farthest point sampling (FPS) [15], provide uniform coverage of the entire point cloud. Local self-attention computes relationships within a subset of points (a patch or cubic window) partitioned in 3D space. Although local spatial attention significantly improves efficiency, it still has difficulty capturing interactions among distant but similar points.

Therefore, we propose a simple yet powerful architecture for point cloud classification, named point content-based Transformer (PointConT), which exploits local self-attention in the feature space (content-based) instead of the 3D space, as visualized in Fig. 1. Starting from the content of the point cloud, we cluster the sampled points into classes based on their similarity and compute the self-attention within each class, which preserves the ability of the global self-attention mechanism to capture long-range feature dependencies while significantly reducing computational complexity. Specifically, it dynamically divides all queries into multiple clusters according to their contents (i.e., features) in each block, and selects the corresponding keys and values to compute local self-attention. The clustering varies at each stage and each head in the Transformer, adequately reflecting the content dynamics. Note that unlike the K-nearest neighborhood (KNN), the clusters are non-overlapping, which further reduces the computational complexity.
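As a concrete illustration of this idea, the following NumPy sketch clusters a set of point features by content and computes self-attention within each cluster. It is a toy, not the authors' implementation: the cluster assignment here is a plain k-means-style loop (PointConT uses balanced binary clustering, detailed in Section III), and the Q/K/V projections are identities for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def content_based_attention(X, num_clusters=4, iters=10, seed=0):
    """Cluster tokens by feature similarity, then attend within each cluster.

    X: (S, d) array of point features. Distant points with similar content
    end up in the same cluster and can attend to each other.
    """
    S, d = X.shape
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(S, num_clusters, replace=False)].copy()
    for _ in range(iters):  # a few Lloyd iterations for the toy assignment
        assign = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for c in range(num_clusters):
            members = X[assign == c]
            if len(members):
                centroids[c] = members.mean(0)
    Y = np.empty_like(X)
    for c in range(num_clusters):
        idx = np.where(assign == c)[0]
        if idx.size == 0:
            continue
        Q = K = V = X[idx]                     # identity projections for brevity
        A = softmax(Q @ K.T / np.sqrt(d))      # attention restricted to the cluster
        Y[idx] = A @ V                         # outputs keep their original order
    return Y, assign
```

Because each token attends only within its cluster, the quadratic attention cost is paid per cluster rather than over all S tokens.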

Moreover, we complement point cloud feature aggregation from a frequency standpoint. Recent studies [16], [17] found that max-pooling amplifies high-frequency features while average pooling and Transformers reduce high-frequency components, which also accords with the observations in our ablation experiments. In order to aggregate high-frequency and low-frequency features, we design an Inception feature aggregator composed of two branches, where the name "inception" is derived from the Inception module [18], [19]. The high-frequency aggregation branch consists of a max-pooling operation and a residual multi-layer perceptron (MLP) module, while the low-frequency aggregation branch is implemented by an average pooling operation and the content-based Transformer block.

Fig. 1. Comparison between 3D space locality and content-based locality. The red point denotes the sampled center point, and the blue points denote the neighborhood or cluster. In content-based attention, points are clustered into multiple clusters based on their feature similarity.

    The main contributions of this paper are summarized as follows.

1) We propose the point content-based Transformer (PointConT) to cluster points according to their content and compute self-attention within each cluster, establishing long-range feature dependencies while significantly reducing computations.

    2) We design an Inception feature aggregator for point cloud classification, using parallel structures to aggregate high-frequency and low-frequency information in each branch separately.

3) Experiments show the competitiveness of our model on the ModelNet40 [20] and ScanObjectNN [21] datasets. Extensive ablation studies verify the benefits of each component in the PointConT design.

II. RELATED WORK

    A. Point Cloud Processing

There are mainly two branches of methods for processing point clouds. One is to convert point clouds into a regular grid structure that can be directly consumed by convolutional neural networks, such as volumetric representations [22]–[24] (through voxelization) or images [25], [26] (through projection or rendering). The other is point-based modeling, where raw point clouds are directly fed into deep networks without any conversion. This paper focuses on point-based methods.

PointNet is a pioneering work that successfully applies a deep architecture to raw point sets [27]. It is constructed as a symmetric function using shared MLPs and max-pooling, which guarantees its permutation-invariance. However, PointNet only learns either single-point or global features, and thus is limited in capturing interactions among points. PointNet++ is built on top of PointNet; it learns hierarchical point cloud features and is able to aggregate features from local geometric neighborhoods using set abstraction [15].

Following them, some works have extended point-based methods with various local aggregation operators. These explorations can be categorized into three groups: convolution-based [28]–[35], graph-based [36]–[39], and attention-based [8], [9], [40]–[42] methods.

1) Convolution-Based Methods: References [31] and [32] learn the kernel within a local region through predefined geometric priors. Another type of point convolution, KPConv [34], relates the weight matrices to predefined kernel points in 3D space. However, the fixed kernel points may not be optimal for modeling complicated 3D position variations. PAConv constructs a position-adaptive convolution operator with a dynamic kernel [35], which assembles basic weight matrices from a Weight Bank; the assembling coefficients are learned from relative point positions by MLPs.

2) Graph-Based Methods: The rise of graph-based methods began with the dynamic graph convolutional neural network (DGCNN) [37], which learns on graphs dynamically updated at each layer. It proposes a local feature aggregation operator, named EdgeConv, which generates edge features that describe the semantic relationships between key points and their neighbors in the feature space. Besides, CurveNet explores geometric information by taking guided walks to group contiguous segments of points as curves [39].

3) Attention-Based Methods: Point cloud Transformer designs offset attention for extracting global features and uses a neighbor embedding strategy to augment local feature representation [9]. Point Transformer proposes a modified Transformer architecture that aggregates local features with vector attention and relative position encoding [8]. Stratified Transformer [12], inspired by Swin Transformer [43], partitions the 3D space into non-overlapping cubic windows and proposes a stratified strategy for sampling keys.

In addition, PointASNL [44] leverages a non-local network [45] and an adaptive sampling module to enhance long-range dependency learning. Recently, PointNeXt [46] explores more advanced training and data augmentation strategies with the PointNet++ backbone to further improve accuracy and efficiency.

    B. Vision Transformer

Fig. 2. Overall architecture of the point content-based Transformer (PointConT). The network is composed of a stack of inception feature aggregator blocks.

In recent years, compared to the familiar convolutional networks, Transformer architectures have shown great success in 2D image understanding. Vision Transformer (ViT) [47] is the first work to successfully apply a Transformer encoder to images. It divides an image into non-overlapping patches (tokens), which are then linearly embedded. Further, pyramid ViT (PVT) [48], [49] introduces a hierarchical structure into the Transformer framework. Transformer in Transformer (TNT) [50] extends the ViT baseline with sub-patch-wise attention within patches. More recently, the methods of [43], [51]–[53] compute attention within local windows. Swin [43] is a representative approach, which employs two key concepts to improve the original ViT: hierarchical feature maps and shifted window attention. Beyond hand-crafted image-space windows, dynamic group Transformer (DGT) [54] and bilateral local attention vision Transformer (BOAT) [55] exploit feature-space locality. DGT [54] dynamically divides queries into multiple groups and selects the most relevant keys/values for each group to compute the attention. BOAT [55] supplements existing window-based local attention with a feature-space local attention module, which significantly improves the ability to model long-range feature dependencies.

Although the Transformer is highly capable of establishing long-range dependencies, recent studies present intuitive visual explanations showing that it lacks the ability to capture the high frequencies that predominantly convey local information [16], [17]. In other words, the Transformer is a low-pass filter. To address this issue, inception Transformer (iFormer) [17] designs a channel splitting mechanism that adopts a parallel convolution path and self-attention path as high-frequency and low-frequency mixers.

Inspired by the concepts of feature-space local attention and features at different frequencies, we adopt a content-based Transformer and an Inception feature aggregator for 3D point cloud classification.

III. METHODOLOGY

    A. Overall Architecture

An overview of the proposed PointConT architecture is shown in Fig. 2. The backbone consists of five hierarchical stages of Inception feature aggregator blocks.

Given an input point cloud p ∈ R^{N×3} containing N points in 3-dimensional space, the "Stage 1" inception feature aggregator block partitions the point cloud into overlapping patches and then embeds the input coordinates into a new feature space (with dimension denoted as C). The network halves the number of points and doubles the number of feature dimensions stage by stage. Consequently, the output contains N/2^m points and 2^(m−1)C feature dimensions at the m-th stage. For classification, the final classifier head is a global max-pooling followed by two linear layers.
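The stage-wise shape bookkeeping above can be made concrete with a small helper (the function name `stage_shapes` is illustrative, not from the released code):

```python
def stage_shapes(N, C, num_stages=5):
    """At the m-th stage the output has N / 2**m points and 2**(m-1) * C channels."""
    return [(N // 2**m, 2**(m - 1) * C) for m in range(1, num_stages + 1)]

# With 1024 input points and an embedding width of C = 64:
shapes = stage_shapes(1024, 64)
# → [(512, 64), (256, 128), (128, 256), (64, 512), (32, 1024)]
```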

    B. Inception Feature Aggregator

Fig. 3. The details of the inception feature aggregator.

where f_j − f_i denotes subtracting the centroid feature f_i from the neighbor feature f_j to obtain the neighboring features relative to the centroid i, ∥ · ∥ is the concatenation operation, and MLP is a simple network that includes a pointwise convolutional layer, a batch normalization layer, and an activation function. Note that unlike DGCNN, which defines its KNN in the feature space, we adopt the neighbor search in 3D space.
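The relative-feature construction can be sketched in NumPy as follows. Pairing f_j − f_i with the center feature f_i mirrors DGCNN's EdgeConv and is an assumption here, as is the helper name; the subsequent MLP is omitted:

```python
import numpy as np

def local_patch_features(xyz, f, k=4):
    """Build [f_j - f_i ‖ f_i] edge features over k nearest neighbors in 3D space.

    xyz: (N, 3) coordinates, f: (N, C) point features. The neighbor search
    uses 3D coordinates (not feature space), as described in the text.
    """
    d2 = ((xyz[:, None] - xyz[None]) ** 2).sum(-1)   # (N, N) pairwise squared distances
    nn = np.argsort(d2, axis=1)[:, :k]               # k nearest (includes the point itself)
    rel = f[nn] - f[:, None]                         # f_j - f_i, shape (N, k, C)
    ctr = np.broadcast_to(f[:, None], rel.shape)     # center feature f_i repeated per neighbor
    return np.concatenate([rel, ctr], axis=-1)       # (N, k, 2C), then fed to the MLP
```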

Next, we propose a mix pooling strategy to aggregate the features of local patches. In most previous works, max-pooling has been verified as effective in aggregating local features, because it captures the high frequencies that predominantly convey local information. Instead of directly combining max-pooling and the Transformer block in a serial manner, our PointConT uses a parallel structure composed of a high-frequency aggregation branch and a low-frequency aggregation branch. The max-pooling operation aggregates high-frequency signals, while the average pooling operation filters low-frequency representations.

1) High-Frequency Aggregation Branch: This branch can be defined as

f_high = ResMLP(MaxPool(f))

where MaxPool and ResMLP denote the max-pooling operation and the residual MLP block, respectively.

2) Low-Frequency Aggregation Branch: We simply apply an average pooling layer (AvgPool) before the content-based Transformer (ConT); this design allows the content-based Transformer to focus on embedding low-frequency information. This branch can be defined as

f_low = ConT(AvgPool(f))

In the end, we concatenate the features from the high-frequency aggregation branch and the low-frequency aggregation branch, and then feed them to an MLP block to produce the Inception aggregator output features f′.
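Putting the two branches together, a minimal NumPy sketch of the parallel aggregation might look like this. A single weight matrix stands in for the residual MLP, and the content-based Transformer is omitted from the low-frequency path for brevity, so this only shows the data flow, not the full block:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def inception_aggregate(patch_feats, W_res, W_out):
    """Parallel high/low-frequency aggregation over grouped patch features.

    patch_feats: (N, k, C) neighbor features per patch. W_res stands in for
    the residual MLP; W_out for the fusion MLP.
    """
    hi = patch_feats.max(axis=1)                # high-frequency branch: max-pooling
    hi = hi + relu(hi @ W_res)                  # residual MLP stand-in
    lo = patch_feats.mean(axis=1)               # low-frequency branch: average pooling
    # (the content-based Transformer block would further refine `lo` here)
    return np.concatenate([hi, lo], axis=-1) @ W_out   # fuse branches (MLP stand-in)
```

The key design point is that the two pooling paths run in parallel and are fused at the end, rather than stacking max-pooling and attention serially.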

    C. Content-Based Transformer

Unlike Point Transformer [8], which computes self-attention among local spatial neighbors, we propose content-based attention, as visualized in Fig. 4. It dynamically divides all queries into multiple clusters according to their content (i.e., features) at each block, and selects the corresponding keys and values to compute local self-attention.

Fig. 4. Illustration of content-based attention. It dynamically clusters all queries into multiple groups and computes the self-attention within each group.

Let X ∈ R^{S×d} be a set of feature vectors, where d is the feature space dimension and S denotes the number of feature vectors. We then compute the embeddings Q = XW_Q, K = XW_K, and V = XW_V to represent the queries, keys, and values, respectively.

Then we use a clustering algorithm to scatter the queries into different clusters. The K-means algorithm is a classic method for clustering problems. However, K-means generally yields clusters containing varying numbers of queries, and therefore cannot be implemented in a parallel way on GPUs. To address this issue, we adopt the balanced binary clustering algorithm proposed in BOAT [55], which hierarchically divides a set of queries into two equal-size clusters.

Within each subset, self-attention is computed as Y_i = SA(Q_i, K_i, V_i), where Y_i is the output of each subset. Lastly, all subsets are merged into the output Y ∈ R^{S×d} in keeping with their original order.

The multi-head configuration is standard practice in Transformers; we expand multiple heads, and each head performs its query/key/value embeddings and clustering independently. This setting brings considerable clustering diversity, as visualized in Fig. 5.

1) Hierarchical Binary Clustering: Similar to K-means clustering, in which cluster assignment relies on the distances between all cluster centroids and each sample, our binary clustering starts with a random division of the queries into two clusters and then calculates the two cluster centroids, denoted as c1 and c2, respectively. After that, we compute the distance ratio to perform the hard assignment. The above operations can be summarized as

Fig. 5. Visualization of point clustering in different heads.

where dist denotes the Euclidean distance in feature space, and C1 and C2 represent the two equal-size clusters produced by balanced binary clustering. After performing n iterations (n = log2 L) of binary clustering, we obtain L subsets of the same size.
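A minimal sketch of the balanced binary clustering, assuming a single centroid/assignment round per split (the original algorithm may iterate the centroid update) and hypothetical helper names:

```python
import numpy as np

def balanced_binary_split(Q, idx, rng):
    """Split index set `idx` into two equal halves by distance-ratio ranking."""
    half = len(idx) // 2
    part = rng.permutation(idx)                       # random initial division
    c1 = Q[part[:half]].mean(0)                       # centroid of the first half
    c2 = Q[part[half:]].mean(0)                       # centroid of the second half
    d1 = np.linalg.norm(Q[idx] - c1, axis=1)          # Euclidean distances in feature space
    d2 = np.linalg.norm(Q[idx] - c2, axis=1)
    order = np.argsort(d1 / (d2 + 1e-9))              # hard assignment by distance ratio
    return idx[order[:half]], idx[order[half:]]       # two equal-size clusters

def balanced_clusters(Q, L, seed=0):
    """log2(L) rounds of binary splitting produce L equal-size clusters."""
    rng = np.random.default_rng(seed)
    groups = [np.arange(len(Q))]
    for _ in range(int(np.log2(L))):
        groups = [g for idx in groups for g in balanced_binary_split(Q, idx, rng)]
    return groups
```

Equal cluster sizes are what make the per-cluster attention batchable on a GPU, at the cost of occasionally splitting naturally uneven groups.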

2) Choice of SA: As in Point Transformer, the choice of self-attention has a crucial influence on the properties of the Transformer block. One choice of SA is the standard scalar attention

SA(Q, K, V) = softmax(QK^T / √d) V

Another choice of SA is the vector attention [8], [56] that we adopt in this paper

y_i = Σ_j ρ(γ(q_i − k_j)) ⊙ v_j

where γ is a mapping function (an MLP), ρ is the softmax normalization over the keys, and ⊙ denotes element-wise multiplication.
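The two attention choices can be contrasted in a few lines of NumPy. This sketch omits the learned mapping γ (replaced by the identity) and any position encoding, so it only illustrates the shapes of the weights, not the full Point Transformer formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scalar_attention(Q, K, V):
    """One scalar weight per (query, key) pair, shared across all channels."""
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

def vector_attention(Q, K, V):
    """One weight per (query, key, channel); gamma is the identity here."""
    w = softmax(Q[:, None, :] - K[None, :, :], axis=1)  # (S, S, d), normalized over keys
    return (w * V[None, :, :]).sum(axis=1)              # channel-wise weighted sum of values
```

Scalar attention modulates whole value vectors with one weight each; vector attention modulates each feature channel separately, which is the property highlighted in the ablation of Section IV.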

3) Complexity Analysis: Given a set of feature vectors of size S × d, for standard multi-head self-attention (MSA), the computational complexity of a local MSA module is O(S × (4kd² + 2k²d)) = 4Skd² + 2Sk²d, where k is the number of points in a local neighborhood. Unlike scalar attention, vector attention further reduces the complexity to O(4Skd² + 2Skd). In our PointConT, the hierarchical binary clustering algorithm divides the feature vectors in a non-overlapping manner, dramatically reducing the complexity to O(4Sd² + 2Sd).
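These operation counts are easy to compare numerically (the helper names are illustrative):

```python
def local_msa_cost(S, d, k):
    """Scalar local MSA: 4Skd^2 + 2Sk^2d (projections + attention maps)."""
    return 4 * S * k * d**2 + 2 * S * k**2 * d

def vector_attn_cost(S, d, k):
    """Local vector attention: 4Skd^2 + 2Skd."""
    return 4 * S * k * d**2 + 2 * S * k * d

def pointcont_cost(S, d):
    """Non-overlapping clusters: each point is projected exactly once, 4Sd^2 + 2Sd."""
    return 4 * S * d**2 + 2 * S * d
```

For example, with S = 1024, d = 64, k = 16, the clustered cost is roughly k times smaller than the local MSA cost, because overlapping neighborhoods force each point through the projections k times.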

IV. EXPERIMENTS

In this section, we show experimental results of the proposed model on the shape classification task. All experiments are performed on one Tesla V100 GPU.

Implementation Details: We implement PointConT in the PyTorch framework and train the network using the SGD (stochastic gradient descent) optimizer (momentum and weight decay set to 0.9 and 0.0001, respectively), a cosine learning rate schedule starting at 0.001 (warm-up steps set to 10), and cross-entropy loss with label smoothing. We fix the random seed in all experiments to eliminate the influence of randomness.
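The learning rate schedule can be sketched as follows; the linear warm-up shape and the zero floor are assumptions, since the text only states a cosine schedule starting at 0.001 with 10 warm-up steps:

```python
import math

def lr_at(step, total_steps, base_lr=0.001, warmup=10):
    """Cosine learning rate schedule with linear warm-up (warm-up shape assumed)."""
    if step < warmup:
        return base_lr * (step + 1) / warmup          # ramp up to base_lr
    t = (step - warmup) / max(1, total_steps - warmup)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t))  # cosine decay toward zero
```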

For shape classification training, we use only 1024 uniformly sampled points as network inputs. Moreover, we use RSMix [57] in addition to random scaling and translation for data augmentation. We train PointConT on ModelNet40 and ScanObjectNN with batch sizes of 32 and 64 for 300 and 400 epochs, respectively. For testing, the batch size is set to 16 and 32 on ModelNet40 and ScanObjectNN, respectively.

    A. Classification on ModelNet40

We evaluate the model on the ModelNet40 [20] shape classification benchmark. It contains 12 308 computer-aided design (CAD) models of point clouds from 40 common categories. The dataset is divided into 9840 models for training and 2468 models for testing.

The results are presented in Table I. The overall accuracy of PointConT on ModelNet40 is 93.5%, which is a competitive result among attention-based models. Besides, PointConT offers high inference speed (166 samples/second in training and 279 samples/second in testing), which is 3.5× faster than the original PointMLP [58] and 1.4× faster than the lightweight version PointMLP-elite. We visualize the clustering results at each stage in Fig. 6. The clusters are able to cover long-range dependencies.

    TABLE I SHAPE CLASSIFICATION RESULTS ON THE MODELNET40 DATASET (OA: OVERALL ACCURACY)

    B. Classification on ScanObjectNN

We furthermore perform experiments on a recent real-world point cloud classification dataset, ScanObjectNN [21], which consists of about 15 000 objects from 15 categories. We only report results on PB_T50_RS, the hardest variant with heavy perturbations from rigid transformations. Unlike the sampled virtual point clouds in ModelNet40, objects in ScanObjectNN are obtained from real-world scans. Therefore, the point clouds in ScanObjectNN are noisy (background, occlusions) and not axis-aligned, which poses a significant challenge to existing point cloud analysis methods.

Table II shows the classification results on ScanObjectNN. PointConT outperforms prior models with 88.0% overall accuracy without voting [31] and reaches 90.3% Top-1 accuracy when averaging 10 prediction votes. This suggests that PointConT is effective on real-world point clouds.

    C. Ablation Study

We perform ablation studies for the key designs of our method on the shape classification task. All experiments are conducted under the same training settings.

Fig. 6. Visualizations of clustering results at each stage (points of the same cluster are plotted in the same color; different clusters are distinguished by random colors).

TABLE II SHAPE CLASSIFICATION RESULTS ON THE SCANOBJECTNN DATASET (* DENOTES METHOD EVALUATED WITH VOTING STRATEGY [31]. MACC: MEAN CLASS ACCURACY; OA: OVERALL ACCURACY)

1) Component Ablation Study: Table III reports the classification results of removing each component of PointConT. Comparing Exp. III and Exp. IV, we notice that with the content-based Transformer, the model improves by 0.6% on ModelNet40 and 1.7% on ScanObjectNN. This demonstrates that the content-based Transformer can enhance the representation power of point clouds. Notably, the result of Exp. V drops considerably. In the absence of average pooling, Exp. V amounts to placing the content-based Transformer after max-pooling and the residual MLP in a serial manner, which indicates that the mix pooling strategy plays an important role in PointConT. By combining all these components, we obtain the best results on ModelNet40 and ScanObjectNN, which implies the effectiveness of the content-based Transformer and the Inception feature aggregator in point cloud classification.

2) The Number of Stages: In Table IV, we ablate different numbers of stages in PointConT. We gradually increase the depth of PointConT on the ModelNet40 and ScanObjectNN datasets to test the effectiveness of greater depth. We find that five stages are sufficient for full exploitation; a deeper model introduces redundant information and a performance decline.

TABLE III ABLATION STUDY (MP: MAX-POOLING; RES: RESIDUAL MLP; AP: AVERAGE POOLING; CONT: CONTENT-BASED TRANSFORMER. METRIC: OA)

    TABLE IV ABLATION STUDY: THE NUMBER OF STAGES

3) Local Cluster Size: We investigate the setting of the local cluster size and show the results in Table V. The best performance is achieved when the local cluster size is set to 16.

4) Similarity Metric: We compare two important measures of similarity for clustering: cosine similarity and Euclidean distance. The cosine similarity is the dot product of two normalized vectors; vectors with a high cosine similarity lie in similar directions from the origin. The Euclidean distance, by contrast, corresponds to the L2-norm of the difference between vectors; vectors with a small Euclidean distance are located in a close region of the vector space. The results in Table VI show that clustering by Euclidean distance is better than by cosine similarity in the classification task.
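The difference between the two metrics is easy to see on a toy pair of vectors that share a direction but differ in magnitude:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity: dot product of the two vectors after normalization."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two vectors pointing the same way but with very different magnitudes:
a = np.array([1.0, 0.0])
b = np.array([10.0, 0.0])
cos_ab = cosine_sim(a, b)              # maximal: direction is identical
euc_ab = float(np.linalg.norm(a - b))  # large: magnitudes differ greatly
```

Euclidean clustering would separate a and b, while cosine clustering would group them; for point features, the magnitude information evidently matters here.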

5) Attention Type: Finally, we compare the scalar attention and the vector attention introduced in Section III-C. The results are shown in Table VII. The attention module is clearly more effective than no attention, and vector attention slightly outperforms scalar attention. As described in Point Transformer, vector attention supports adaptive modulation of individual feature channels, rather than just the entire feature vector, which can be beneficial in 3D point cloud analysis.

    TABLE V ABLATION STUDY: LOCAL CLUSTER SIZE

    TABLE VI ABLATION STUDY: SIMILARITY METRIC

    TABLE VII ABLATION STUDY: ATTENTION TYPE

V. CONCLUSION

In this paper, we propose the point content-based Transformer (PointConT), a simple yet powerful architecture that clusters sampled points with similar features into the same class and computes self-attention within each class. Compared with local spatial attention, the content-based Transformer can establish long-range feature dependencies. Moreover, we design an Inception feature aggregator to combine high-frequency and low-frequency information in a parallel manner: the max-pooling operation aggregates high-frequency signals, while the average pooling operation and the Transformer filter low-frequency representations. We hope that this study will provide valuable insights into point cloud Transformer designs.

We note that the balanced clustering algorithm generates clusters of the same size, which limits the generality and flexibility of the proposed PointConT. Advanced clustering approaches (e.g., [63], [64]) and CUDA implementations of cluster-wise matrix multiplication can be explored in future work.
