
    A spatial attentive and temporal dilated (SATD) GCN for skeleton-based action recognition


    Jiaxu Zhang|Gaoxiang Ye|Zhigang Tu|Yongtao Qin|Qianqing Qin|Jinlu Zhang|Jun Liu

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China

2 State Grid Wuhan Power Supply Company, Wuhan, China

3 Shenzhen Infinova Ltd. Company, Shenzhen, China

4 Information Systems Technology and Design Pillar, Singapore University of Technology and Design, Singapore

Abstract Current studies have shown that the spatial-temporal graph convolutional network (ST-GCN) is effective for skeleton-based action recognition. However, in existing ST-GCN-based methods, the temporal kernel size is usually fixed over all layers, which prevents them from fully exploiting the temporal dependency between discontinuous frames and from adapting to different sequence lengths. Besides, most of these methods use average pooling to obtain the global graph feature from vertex features, losing much fine-grained information for action classification. To address these issues, in this work, the authors propose a novel spatial attentive and temporal dilated graph convolutional network (SATD-GCN). It contains two important components, that is, a spatial attention pooling module (SAP) and a temporal dilated graph convolution module (TDGC). Specifically, the SAP module selects the human body joints which are beneficial for action recognition via a self-attention mechanism and alleviates the influence of data redundancy and noise. The TDGC module can effectively extract temporal features at different time scales, which improves the temporal perception field and enhances the robustness of the model to different motion speeds and sequence lengths. Importantly, both the SAP module and the TDGC module can be easily integrated into ST-GCN-based models and significantly improve their performance. Extensive experiments on two large-scale benchmark datasets, that is, NTU-RGB + D and Kinetics-Skeleton, demonstrate that the authors' method achieves state-of-the-art performance for skeleton-based action recognition.

    1|INTRODUCTION

Human action recognition, which has a wide range of applications in intelligent video surveillance, human-machine interaction, medical service, and so forth, is still a challenging and unsolved problem [1-4]. Human action recognition based on RGB appearance is easily affected by complex backgrounds, illumination changes, occlusion, and other factors. In recent years, more and more research has been carried out on skeleton-based action recognition, as it is robust against changes in motion speed, body scale, camera viewpoint, and background interference. Moreover, an increasing amount of human skeleton data is being collected by depth cameras and human pose estimation algorithms [5, 6], which provides sufficient data for the study and application of skeleton-based action recognition. The skeleton data represents a human action as a sequence of 2D or 3D coordinates of the major body joints, so it is crucial to extract discriminative features in both the spatial and temporal domains for action recognition.

The earliest attempts at skeleton-based action recognition treat all the body joints in a sequence as a feature vector and use a classifier such as an SVM to classify it [7]. These methods rarely explore the spatial and temporal dependencies of the skeleton sequence and cannot capture the fine-grained information of human action. Owing to the rapid progress of deep learning, models based on convolutional neural networks (CNN) and recurrent neural networks (RNN) have become the mainstream, which normally treat the coordinates of human joints as pseudo-images or vector sequences [8-11]. Although these methods have the ability to exploit spatial-temporal information, they are only suitable for dealing with regular data in Euclidean space and are not suitable for handling graph data in non-Euclidean space. The skeleton is naturally structured as a graph in a non-Euclidean space, with the joints as vertexes and their natural connections as edges.

To better leverage the data in the non-Euclidean space, some works process data directly on the graph structure [12, 13]. Yan et al. first applied the graph convolutional network (GCN) to skeleton-based action recognition [14]. They proposed a spatial-temporal graph convolutional network (ST-GCN), which constructs a spatial graph based on the natural connections of joints in the human body and adds temporal edges between corresponding joints in consecutive frames. ST-GCN can aggregate the information of graph vertexes in both the spatial and temporal domains to obtain a discriminative feature representation of the vertexes, and then uses an average pooling layer over both the spatial and temporal domains to get the feature of the spatial-temporal graph for action classification. Based on ST-GCN, many variants were explored [17, 29, 31], which typically introduce incremental modules, for example the adaptive adjacency matrix [17], the actional-structural module [29], and the variable temporal module [31], to enhance the network capacity. However, there are two drawbacks of the ST-GCN based methods: (1) they only consider the temporal dependency between adjacent frames in the time sequence, so they cannot fully exploit the temporal dependency between frames over a multi-scale time span. Besides, the pose variation between adjacent frames is small, which usually cannot reflect the motion information of a human action. (2) ST-GCN-based methods simply use average pooling to obtain the global graph feature representation from the vertex feature representations, without paying attention to key joints and key frames in the skeleton sequence, thus losing a lot of fine-grained information for action classification. For example, we should pay more attention to the long-term variations of the human hands and upper limbs for the actions 'reading' and 'writing', because in the process of reading or writing, the human body mainly moves its upper limbs at a slow speed. In contrast, for 'running' and 'hopping', we should pay more attention to the instantaneous movement of the human lower limbs. In other words, for different actions, different parts of the human body have different degrees of importance, and their movement speeds also differ greatly. Therefore, how to fully exploit the attentive multi-scale spatial-temporal dependency of human body joints is one of the crucial problems in skeleton-based action recognition.

To address this issue, a novel spatial attentive and temporal dilated graph convolutional network (SATD-GCN) is proposed in this work. Specifically, in the spatial domain, we propose a spatial attention pooling (SAP) module, which uses the self-attention mechanism to pick important vertexes and remove unimportant vertexes in the graph. In this way, it carries out spatial attention pooling in the process of spatial graph convolution, which avoids the loss of fine-grained information and reduces the impact of noise caused by average pooling. It should be noted that although unimportant vertexes are removed, their useful information is preserved, because before pooling, their useful features have already been aggregated onto other vertexes by the spatial graph convolution. In the temporal domain, to give the network a multi-scale temporal perception field, we propose a temporal dilated graph convolution (TDGC) module. Similar to dilated convolution, TDGC processes non-adjacent graph frames at multi-scale intervals to expand the temporal receptive field. Both the SAP module and the TDGC module can be easily embedded into spatial-temporal graph convolution networks and significantly improve the performance (see Section 5). Although the latest research on skeleton-based action recognition also uses a spatial-temporal attention mechanism to refine the extracted features [15, 16], in contrast, the proposed SAP module can not only refine features but also reduce the number of graph vertexes properly and alleviate the influence of data redundancy and noise. Moreover, following the work of 2s-AGCN [17], we also use the length and direction of bones as second-order information to construct a two-stream (i.e. joint stream and bone stream) SATD-GCN to boost the accuracy.

The main contributions of this work are threefold:

● A spatial attention pooling module is designed to adaptively capture important vertexes and remove unimportant vertexes in the graph, which effectively reduces the number of graph vertexes and enhances the extraction of discriminative vertex features.

● A temporal dilated graph convolution module is exploited to expand the receptive field of the temporal graph convolution, which can adapt to the different speeds of joint movement in different actions and learn temporal features hierarchically, from subtle motion to large-scale motion.

● A two-stream spatial attentive and temporal dilated graph convolutional network is constructed by combining the SAP module and the TDGC module, which outperforms the state-of-the-art skeleton-based action recognition methods.

    2|RELATED WORK

    2.1|Skeleton-based action recognition

Conventional skeleton-based action recognition methods usually extracted handcrafted features, that is, relative positions of joints [7] or rotations and translations between body parts [18], etc., to represent human motion. However, these methods cannot effectively extract the spatial-temporal correlations of a skeleton sequence over a wide range, so the performance of these handcrafted-feature-based methods is unsatisfactory. As the collection of skeleton data has become easier and deep learning technology has developed, using deep networks for data-driven feature learning has become the mainstream for skeleton-based action recognition. Shahroudy et al. [19] treat the 3D coordinates of all joints of the human body in a time sequence as a vector sequence and then use an RNN to extract the temporal information. Similar to [19], many RNN-based methods have been proposed and have obtained good results [10, 11, 20-22]. However, in these RNN-based methods, the graph structure of human body joints is directly regarded as vectors, so the spatial structure information of the human body is ignored. To solve this problem, CNN-based methods have been studied that model the skeleton data as a pseudo-image based on manually designed transformation rules [8, 9, 23-26]; these methods do not directly process the graph-structured skeleton data in the non-Euclidean space and introduce a large amount of redundant computation.

Recently, GCN-based methods have promoted the performance of skeleton-based action recognition to a higher level [14, 17, 27-31]. They construct a skeleton graph whose vertices are joints and edges are bones and apply a GCN to extract correlated features. The existing GCN-based methods can be roughly divided into two categories. The first type of approach leverages a GCN to extract the spatial correlation of the skeleton graph and then uses an RNN to capture the temporal correlation [16, 30]. The second type of approach uses a spatial-temporal GCN to process the graph sequence directly [14, 17, 29, 31], which adapts well to non-Euclidean data in a time sequence and achieves state-of-the-art performance. Yan et al. [14] first proposed ST-GCN, in which each ST-GCN layer constructs the spatial characteristic with a graph convolutional operation and models the temporal dynamics with a temporal convolutional operation. Li et al. [29] introduced an encoder-decoder structure to capture richer joint correlations and action-specific latent vertex dependencies. Wen et al. [31] explored a motif-based graph convolution to encode the hierarchical spatial structure and applied a variable temporal dense block to exploit local temporal information over different ranges of human skeleton sequences. Although these studies optimize the extraction of spatial-temporal features of the skeleton graph sequence, the pooling method they use in both the spatial domain and the temporal domain is simple, so some important features cannot be effectively retained and the models are vulnerable to noise. Besides, these methods do not have a multi-scale temporal perception field, so they are unable to deal well with graph sequences of different lengths and human body movements at different speeds. Following the GCN-based methods, our model combines the proposed TDGC module and SAP module to extract spatial-temporal features more effectively.

    2.2|Graph convolutional network

In the real world, much data lies in irregular non-Euclidean spaces, such as molecular structures [32], transportation networks [33], knowledge graphs [46], and the skeleton graph [14]. Therefore, how to improve the feature extraction ability of deep models in non-Euclidean space is a pressing research topic. Scarselli et al. [34] first proposed the graph neural network (GNN) to handle graph-structured data; a GNN is a trainable model which aggregates vertex information according to manually designed rules in the graph structure. Defferrard et al. [35] used the Fourier transform of graph-structured data to extend the convolution operation into non-Euclidean space and proposed a graph convolutional network (GCN) for graph classification. Kipf et al. [36] applied the GCN to semi-supervised learning and verified the validity of GCN. However, these deep learning methods operate on graph-structured data in the spectral domain, so they are computationally inefficient. Monti et al. [37] modified the spectral-domain GCN to construct a more effective spatial-domain GCN, which operates directly on the graph vertexes and avoids complex steps such as the Fourier transform and the Chebyshev polynomial approximation. Our work also uses GCN to handle skeleton-based action recognition, and we follow the work of ST-GCN [14] to extract the features of human body joints from both the spatial and temporal dimensions.

    3|BACKGROUND

    In this section, we introduce the basic background knowledge of this work.

    3.1|Notations

We use G = (V, E) to represent the skeleton graph, where V is the set of n body joints and E is the set of m bones. We consider the adjacency matrix of the skeleton graph as A ∈ {0, 1}^{n×n}, where A_{i,j} = 1 if the i-th and the j-th joints are connected and 0 otherwise. Let D ∈ ℝ^{n×n} be the diagonal degree matrix, where D_{i,i} = Σ_j A_{i,j}. Following the work of ST-GCN [14], we divide one root vertex and its one-order neighbours into three sets, including (1) the root vertex itself, (2) the centripetal group, which is closer to the body barycentre than the root, and (3) the centrifugal group, which is farther away from the body barycentre than the root. In this way, A is accordingly partitioned into A_root, A_centripetal and A_centrifugal, which can better express the structural information of the skeleton graph. We denote the partition group set as P = {root, centripetal, centrifugal}. Let X ∈ ℝ^{n×3×T} be the 3D joint positions across T frames, let X_t = X_{:,:,t} ∈ ℝ^{n×3} be the 3D joint positions at the t-th frame (which slices the t-th frame in the last dimension of X), and let x_{i,t} = X_{i,:,t} ∈ ℝ^3 be the position of the i-th joint at the t-th frame.
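
A minimal NumPy sketch of the notation above may help: it builds the adjacency matrix A, the degree matrix D, and the three-part partition {root, centripetal, centrifugal}. The 5-joint chain skeleton and the choice of joint 0 as the body barycentre are illustrative assumptions, not the actual NTU-RGB + D joint layout.

import numpy as np

n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]          # toy bone list (i-th joint, j-th joint)

A = np.zeros((n, n), dtype=np.float32)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0                       # A_ij = 1 if joints i and j are connected

D = np.diag(A.sum(axis=1))                        # diagonal degree matrix, D_ii = sum_j A_ij

# Hop distance of every joint to the assumed barycentre joint (joint 0 here), used to
# split each root's one-order neighbourhood into the three partition groups.
dist_to_center = np.array([0, 1, 2, 3, 4])

A_root        = np.eye(n, dtype=np.float32)       # the root vertex itself
A_centripetal = np.zeros_like(A)                  # neighbours closer to the barycentre
A_centrifugal = np.zeros_like(A)                  # neighbours farther from the barycentre
for i in range(n):
    for j in range(n):
        if A[i, j] > 0:
            if dist_to_center[j] < dist_to_center[i]:
                A_centripetal[i, j] = 1.0
            elif dist_to_center[j] > dist_to_center[i]:
                A_centrifugal[i, j] = 1.0

# Sanity check: the partition groups plus self-loops cover the original graph.
assert np.allclose(A_root + A_centripetal + A_centrifugal, A + np.eye(n))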

    3.2|Spatial-temporal GCN

ST-GCN [14] consists of a series of ST-GCN blocks. Each block contains a spatial GCN layer followed by a temporal GCN layer, which extract the spatial and temporal features alternately. In the spatial dimension, the convolution operation on the skeleton graph is:

X_out = Σ_{p∈P} Ā_p X_in W_p,    (1)

where X_in ∈ ℝ^{n×d_in} and X_out ∈ ℝ^{n×d_out} are the input and output features of all joints in one frame, respectively, and d_in and d_out are their channel dimensions. Ā_p is the normalized adjacency matrix for each partition. W_p ∈ ℝ^{d_in×d_out} are the trainable weights for each partition in the spatial GCN. In ST-GCN, the adjacency matrix A is manually defined according to the physical structure of the human body, so it cannot adaptively represent the interdependence of different parts of the human body in different actions. For example, when clapping hands, there is a dependency between the two hands, although they are not physically connected. Following the work of 2s-AGCN [17], we change Equation (1) to the following form:

X_out = Σ_{p∈P} (Ā_p + B_p + C_p) X_in W_p,    (2)

where B_p ∈ ℝ^{n×n} is a trainable adjacency matrix that is optimized together with the other parameters during training. There are no constraints on B_p, which means that this part of the graph is learned entirely from the training data. C_p is a vertex-dependent adjacency matrix which determines whether there is a connection between two vertexes and how strong the connection is. We calculate C_p as follows:

C_p = softmax((X_in W_θ)(X_in W_φ)^T),    (3)

where W_θ ∈ ℝ^{d_in×n} and W_φ ∈ ℝ^{d_in×n} are the trainable parameters of the embedding functions. The softmax function operates on each row of the matrix.
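
A minimal PyTorch sketch of the adaptive spatial graph convolution of Equation (2), in the 2s-AGCN style the text builds on, is given below. The (N, C, T, V) tensor layout and the embedding size are assumptions made for illustration, not the authors' exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveSpatialGCN(nn.Module):
    def __init__(self, in_channels, out_channels, A_parts):
        super().__init__()
        # A_parts: (P, V, V) stack of the normalised partition matrices \bar{A}_p
        self.register_buffer('A', A_parts)
        P, V, _ = A_parts.shape
        # B_p: fully trainable adjacency, initialised near zero
        self.B = nn.Parameter(torch.zeros(P, V, V))
        # theta / phi embeddings used to compute the data-dependent C_p of Equation (3)
        self.theta = nn.ModuleList(nn.Conv2d(in_channels, 16, 1) for _ in range(P))
        self.phi   = nn.ModuleList(nn.Conv2d(in_channels, 16, 1) for _ in range(P))
        # W_p: 1x1 convolutions playing the role of the per-partition weights
        self.W = nn.ModuleList(nn.Conv2d(in_channels, out_channels, 1) for _ in range(P))

    def forward(self, x):                        # x: (N, C_in, T, V)
        N, C, T, V = x.shape
        out = 0
        for p in range(self.A.shape[0]):
            # C_p: row-wise softmax of the embedded feature similarity, (N, V, V)
            q = self.theta[p](x).permute(0, 3, 1, 2).reshape(N, V, -1)
            k = self.phi[p](x).permute(0, 3, 1, 2).reshape(N, V, -1)
            C_p = F.softmax(torch.bmm(q, k.transpose(1, 2)), dim=-1)
            adj = self.A[p] + self.B[p] + C_p                       # (N, V, V)
            # aggregate neighbour features: (A + B + C) X, then apply W_p
            agg = torch.einsum('nctv,nwv->nctw', x, adj)
            out = out + self.W[p](agg)
        return out

# Example: 25 joints, 3 partition groups, random stand-ins for the normalised adjacency.
layer = AdaptiveSpatialGCN(64, 128, torch.rand(3, 25, 25))
y = layer(torch.randn(4, 64, 300, 25))           # -> (4, 128, 300, 25)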

For the temporal dimension, since the corresponding vertexes in consecutive graph frames form linear structures, it is straightforward to perform the temporal graph convolution in a way similar to the classical convolution operation. Concretely, we perform a 2D convolution with a K_t × 1 kernel on the output feature map calculated by the spatial convolution, where K_t is the kernel size in the temporal dimension.
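
Because the corresponding vertices of consecutive frames form a regular grid over (T, V), the temporal graph convolution reduces to an ordinary 2D convolution with a (K_t, 1) kernel, as the short sketch below illustrates; the channel sizes are chosen only for illustration.

import torch
import torch.nn as nn

K_t = 9
temporal_conv = nn.Conv2d(64, 64, kernel_size=(K_t, 1), padding=((K_t - 1) // 2, 0))

x = torch.randn(8, 64, 300, 25)     # (batch, channels, frames T, joints V)
y = temporal_conv(x)                # (8, 64, 300, 25): each joint is convolved over time only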

    4|SPATIAL ATTENTIVE AND TEMPORAL DILATED GCN

In this section, we introduce the components of our proposed spatial attentive and temporal dilated graph convolutional network (SATD-GCN) in detail.

    4.1|Model architecture

Our model consists of two streams, that is, a joint stream and a bone stream. The joint stream takes the human body joints as graph vertexes and the bones as graph edges to construct the skeleton graph sequence, and the initial feature of a vertex is the 3D coordinate of its corresponding human body joint. The bone stream takes the human bones as graph vertexes and the joints as graph edges, and the initial feature of a bone is the coordinate of its target joint minus the coordinate of its source joint. We define the joint that is closer to the centre of gravity of the skeleton as the source joint, and the joint that is farther away from the centre of gravity as the target joint. For example, given a bone with its source joint v1 = (x1, y1, z1) and its target joint v2 = (x2, y2, z2), the initial feature of the bone is calculated as v2 − v1 = (x2 − x1, y2 − y1, z2 − z1). The overall architecture of the SATD-GCN is shown in Figure 1. Given a sample, we first calculate the bone data from the joint data. Then, the joint data and the bone data are fed into the joint stream and the bone stream, respectively. In each stream, we first apply five ST-GCN blocks to extract low-level features of the vertexes. Then, we apply two spatial and temporal dilated graph convolution (S-TD-GCN) blocks with a dilation rate of 1, followed by an ST-GCN block, to extract high-level features of the vertexes. Next, we use two S-TD-GCN blocks with a dilation rate of 2, followed by a spatial attention pooling (SAP) block with a down-sampling rate of 2, to further extract high-level features and capture the important vertexes while removing the unimportant ones. Finally, we apply average pooling to the few remaining important vertexes over both the spatial and temporal domains. The softmax scores of the two streams are combined to obtain the final score for the action label prediction.
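
A minimal sketch of the bone-stream input construction described above: each bone feature is the target-joint coordinate minus the source-joint coordinate. The two-entry bone_pairs list is a toy example, not the full NTU-RGB + D bone list.

import torch

def joints_to_bones(joints, bone_pairs):
    """joints: (N, 3, T, V) joint coordinates; returns (N, 3, T, len(bone_pairs)) bone vectors."""
    bones = []
    for source, target in bone_pairs:            # source joint is nearer the body barycentre
        bones.append(joints[:, :, :, target] - joints[:, :, :, source])
    return torch.stack(bones, dim=-1)

joint_data = torch.randn(8, 3, 300, 25)          # (batch, xyz, frames, joints)
bone_pairs = [(0, 1), (1, 20)]                   # hypothetical (source, target) joint indices
bone_data = joints_to_bones(joint_data, bone_pairs)   # (8, 3, 300, 2)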

    4.2|Temporal dilated graph convolution module

The spatial-temporal GCN first aggregates vertex information in the spatial domain based on the spatial adjacency of the skeleton graph. With the help of multiple adaptive adjacency matrices and the vertex subset partition, ST-GCN can adapt to different spatial correlations of human body joints. However, in the temporal domain, ST-GCN does not have the ability to extract multi-scale correlations between non-adjacent frames. This disadvantage means that ST-GCN cannot adapt to the various human actions which usually have different speeds and time spans. In order to explore the time sequence information and human motion features more effectively, we propose a novel temporal dilated graph convolution module (TDGC module). As shown in Figure 2, we use a temporal convolution with a continuous kernel to extract low-level features. When extracting high-level features, we let the temporal kernels have gaps, and we call the size of this gap the 'dilation rate'. By using the temporal dilated graph convolution, our model can learn the dependence between non-adjacent frames and significantly expands the temporal perception field. In addition, by gradually increasing the dilation rate, our model is able to perceive the motion of the human body at different time scales. It should be noted that the TDGC module does not increase the number of parameters, and it can be easily combined with ST-GCN based models. In our SATD-GCN model, as shown in Figure 1, we apply two spatial-temporal dilated graph convolution (S-TD-GCN) blocks with a dilation rate of 1 after five ST-GCN blocks. We then add two S-TD-GCN blocks with a dilation rate of 2 after one ST-GCN block. Experiments show that this kind of structure can extract temporal features hierarchically, from subtle motion to large-scale motion.
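
A minimal sketch of the temporal dilated convolution idea: the same (K_t, 1) temporal kernel, but with gaps of size `dilation` between taps, so the temporal receptive field grows without adding parameters. The channel sizes are illustrative.

import torch
import torch.nn as nn

def temporal_conv(channels, K_t=9, dilation=1):
    pad = (K_t - 1) // 2 * dilation              # keep the output length equal to T
    return nn.Conv2d(channels, channels, kernel_size=(K_t, 1),
                     padding=(pad, 0), dilation=(dilation, 1))

tdgc_d1 = temporal_conv(256, dilation=1)         # as used in the first two S-TD-GCN blocks
tdgc_d2 = temporal_conv(256, dilation=2)         # as used in the last two S-TD-GCN blocks

x = torch.randn(4, 256, 300, 25)
assert tdgc_d1(x).shape == tdgc_d2(x).shape == x.shape
# Same parameter count for both dilation rates, as noted in the text.
assert sum(p.numel() for p in tdgc_d1.parameters()) == \
       sum(p.numel() for p in tdgc_d2.parameters())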

    F I G U R E 1 The overall architecture of the proposed SATD-GCN

    F I G U R E 2 The temporal dilated graph convolution

    4.3|Spatial attention pooling module

In large-scale skeleton datasets such as NTU-RGB+D or Kinetics-Skeleton, there are some joints with little information, such as the left and right ears, which contribute little to action recognition. On the other hand, there are some relatively important joints; for example, most actions involve movement of the human body's left and right hands or feet. Previous methods usually compress vertex features by means of average pooling in both the temporal domain and the spatial domain [14, 17, 29], which inevitably loses important spatial-temporal information. To solve this problem, we propose a spatial attention pooling module (SAP module). As can be seen from Figure 3, the SAP module uses a self-attention mechanism to select the important vertexes in the graph and remove the unimportant vertexes. At the same time, before filtering the vertexes, the SAP module also utilizes the attention map to enhance the vertex features (element-wise multiplication). More specifically, because the SAP module works on the spatial dimension, we first use temporal average pooling (T-AvgPool) on the skeleton graph sequence to reduce the temporal dimension to one and obtain a feature map of dimension n × d. Furthermore, we use a fully connected layer (FC) followed by a sigmoid function on the feature map to generate an attention map over the vertexes of the graph, which can be interpreted as the relative importance given to each vertex in the current graph. The feature of each original graph vertex is multiplied by its attention value to enhance the feature. We rank the attention values from large to small and filter row vectors of the adjacency matrix according to the attention map with a down-sampling rate α. In this way, the original n × n adjacency matrix becomes (n/α) × n. In the SAP module, the physical connection structure of the human body no longer exists, so we remove the physical structure adjacency matrix A and the trainable adjacency matrix B, and only retain the vertex-dependent adjacency matrix C in Equation (2). Finally, we perform the ST-GCN operation with the new adjacency matrix on the skeleton graph, so the number of vertexes in the skeleton graph is reduced to n/α. It should be noted that we do not directly filter the important vertexes, but indirectly select the vertexes by filtering the adjacency matrix, which alleviates the non-differentiability caused by the selection operation and makes the model easy to train. In our SATD-GCN, as shown in Figure 1, we apply one SAP block with a down-sampling rate of 2 at the end of each of the two streams.
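
A minimal PyTorch sketch of the SAP idea described above: temporal average pooling, a per-vertex attention score from an FC layer plus sigmoid, feature re-weighting, and keeping the top n/α vertexes by filtering rows of the adjacency matrix. This is an illustrative reading of the module under our own assumptions, not the authors' exact code (in particular, the final spatial aggregation is shown without the per-partition weights).

import torch
import torch.nn as nn

class SpatialAttentionPooling(nn.Module):
    def __init__(self, channels, alpha=2):
        super().__init__()
        self.alpha = alpha
        self.fc = nn.Linear(channels, 1)          # one attention score per vertex

    def forward(self, x, C):
        # x: (N, d, T, V) vertex features; C: (N, V, V) vertex-dependent adjacency of Eq. (2)
        N, d, T, V = x.shape
        pooled = x.mean(dim=2).transpose(1, 2)    # T-AvgPool -> (N, V, d)
        att = torch.sigmoid(self.fc(pooled))      # attention map, (N, V, 1)
        x = x * att.transpose(1, 2).unsqueeze(2)  # element-wise feature enhancement
        # keep the V/alpha rows of C with the largest attention scores
        keep = att.squeeze(-1).topk(V // self.alpha, dim=1).indices        # (N, V/alpha)
        C_kept = torch.gather(C, 1, keep.unsqueeze(-1).expand(-1, -1, V))  # (N, V/alpha, V)
        # one spatial aggregation with the filtered adjacency: the graph now has V/alpha vertexes
        x_out = torch.einsum('ndtv,nwv->ndtw', x, C_kept)
        return x_out, att

sap = SpatialAttentionPooling(channels=256, alpha=2)
x = torch.randn(4, 256, 75, 25)
C = torch.softmax(torch.randn(4, 25, 25), dim=-1)
y, attention = sap(x, C)                          # y: (4, 256, 75, 12)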

    F I G U R E 3 The architecture of the spatial attention pooling module

    5|EXPERIMENT

Extensive experiments are conducted and analysed in this section. Firstly, we introduce two large-scale skeleton datasets, namely NTU-RGB + D [19] and Kinetics-Skeleton [14]. Secondly, our model implementation details and the training details are discussed. Thirdly, we perform an ablation study of each component. Finally, our model is evaluated on these two datasets and compared with state-of-the-art methods.

    5.1|Datasets

    5.1.1|NTU-RGB + D

NTU-RGB + D is a large indoor-captured dataset with annotated 3D joint coordinates for the human action recognition task [19, 47]. NTU-RGB + D contains 56,000 action videos in 60 action classes, which were captured from 40 volunteers in different age groups ranging from 10 to 35. Each action is captured by three cameras at the same height from different viewpoints, and the provided annotations are given in the camera coordinate system. There are 25 joints for each subject in the skeleton sequences, and each action video has no more than two subjects. It includes two settings: (1) the Cross-Subject (CS) benchmark, which contains 40,320 videos for training and 16,560 for evaluation. In this setting, the training set comes from a subset of 20 subjects, and the model is validated on sequences from the remaining 20 subjects. (2) The Cross-View (CV) benchmark, which includes 37,920 videos for training and 18,960 videos for evaluation. The training samples in this setting come from camera views 2 and 3, and the evaluation samples are all from camera view 1. We follow the conventional settings and report the top-1 accuracy on both benchmarks.

    5.1.2|Kinetics-skeleton

Kinetics [38] consists of 300,000 video clips in 400 action classes. The video clips of Kinetics are sourced from YouTube and have great variety, but the dataset only provides raw video clips without skeleton information. Yan et al. [14] estimated the locations of 18 joints on every frame of the clips using the publicly available OpenPose [39] toolbox and released the Kinetics-Skeleton dataset. In Kinetics-Skeleton, all videos are resized to a resolution of 340 × 256 and converted to a frame rate of 30 fps. The toolbox generates 2D coordinates and confidence scores for a total of 18 joints from the resized videos. For multi-person clips, two people are selected based on the average joint confidence. Each joint is represented as a three-element feature vector that contains the 2D coordinate and the confidence score. Following the evaluation method of Yan et al. [14], we train the models on the training set and report the top-1 and top-5 accuracies on the validation set. Because the videos in the Kinetics dataset are captured from the real world, this experiment better reflects the performance of the model in real-world situations.
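
A minimal sketch of the multi-person selection described above: for each clip, the two people with the highest average joint confidence are kept. The (people, T, V, 3) array layout, with the last channel holding (x, y, confidence), is a hypothetical choice for illustration.

import numpy as np

def select_two_people(poses):
    avg_conf = poses[..., 2].mean(axis=(1, 2))    # mean joint confidence per person
    keep = np.argsort(avg_conf)[::-1][:2]         # indices of the two most confident people
    return poses[keep]

clip = np.random.rand(5, 150, 18, 3)              # 5 detected people in a clip
selected = select_two_people(clip)                # (2, 150, 18, 3)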

    5.2|Implementation details

Our SATD-GCN model has a total of 12 blocks. In each block, we add a residual connection [40], which enables the model to learn features more effectively and prevents overfitting. The output channels of the blocks are 64, 64, 64, 128, 128, 128, 256, 256, 256, 256, and 256. We set the down-sampling rate α = 2 and the temporal kernel size K_t = 9. A data BN layer is added at the beginning to normalize the input data. The final output is sent to a softmax classifier to obtain the action prediction.

We implement our SATD-GCN model based on the PyTorch deep learning framework [41]. We apply the stochastic gradient descent (SGD) algorithm with Nesterov momentum (0.9) as the optimizer. The weight decay is set to 0.0001. We use a Titan XP GPU for the model training and the batch size is set to 16.
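
A minimal sketch of this optimisation setup (SGD with Nesterov momentum 0.9 and weight decay 1e-4); the Linear layer is only a stand-in for the full SATD-GCN, and the initial learning rate of 0.1 is taken from the dataset-specific schedules described next.

import torch
import torch.nn as nn

model = nn.Linear(75, 60)                        # stand-in for the full SATD-GCN
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            nesterov=True, weight_decay=0.0001)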

For the NTU-RGB + D dataset, the maximum number of frames in each sample is 300. If a sample has fewer than 300 frames, we repeat it until it reaches 300 frames. There are at most two human bodies in each sample. If the number of bodies in a sample is less than 2, we pad the second body with 0. The number of training epochs is set to 55 and the initial learning rate is set to 0.1. The learning rate is decayed by a factor of 0.1 at the 30th, 40th, and 50th epochs.
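
A minimal sketch of this NTU-RGB + D sample preparation: sequences shorter than 300 frames are repeated until they reach 300 frames, and a missing second body is zero-padded. The (C, T, V, M) array layout is an assumption made for illustration.

import numpy as np

MAX_FRAMES, MAX_BODIES = 300, 2

def pad_sample(sample):
    """sample: (C, T, V, M_present) -> (C, MAX_FRAMES, V, MAX_BODIES)"""
    C, T, V, M = sample.shape
    repeats = int(np.ceil(MAX_FRAMES / T))
    sample = np.tile(sample, (1, repeats, 1, 1))[:, :MAX_FRAMES]      # repeat frames to 300
    if M < MAX_BODIES:                                                # pad the second body with 0
        pad = np.zeros((C, MAX_FRAMES, V, MAX_BODIES - M), dtype=sample.dtype)
        sample = np.concatenate([sample, pad], axis=-1)
    return sample

x = pad_sample(np.random.randn(3, 103, 25, 1))
assert x.shape == (3, 300, 25, 2)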

For the Kinetics-Skeleton dataset, there are 150 frames in each sample and two bodies in each frame. We randomly choose 150 frames from the input skeleton sequence and slightly disturb the joint coordinates with randomly chosen rotations and translations for data augmentation. The number of training epochs is set to 70 and the initial learning rate is set to 0.1. The learning rate is decayed by a factor of 0.1 at the 45th, 55th, and 65th epochs.
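
A minimal sketch of the Kinetics-Skeleton augmentation described above: a small random rotation and translation applied to the 2D joint coordinates, leaving the confidence channel untouched. The angle and shift ranges are illustrative assumptions, as the paper does not specify them.

import numpy as np

def augment_2d(sample, max_angle_deg=10.0, max_shift=0.05):
    """sample: (3, T, V, M) with channels (x, y, confidence)."""
    theta = np.deg2rad(np.random.uniform(-max_angle_deg, max_angle_deg))
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    shift = np.random.uniform(-max_shift, max_shift, size=(2, 1, 1, 1))
    xy = sample[:2]                                          # (2, T, V, M)
    xy = np.einsum('ij,jtvm->itvm', R, xy) + shift           # rotate, then translate
    return np.concatenate([xy, sample[2:]], axis=0)

augmented = augment_2d(np.random.randn(3, 150, 18, 2))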

TA B L E 1 Comparison of the validation accuracy of the joint stream SATD-GCN with or without the TDGC module and the SAP module on the NTU-RGB + D Cross-View benchmark. (w/o means without)

    5.3|Ablation study

We test the effectiveness of the components of our SATD-GCN with the Cross-View benchmark on the NTU-RGB + D dataset. We only test one stream (the joint stream) of our model, as the bone stream can be evaluated in the same way. From Table 1, we can see that the original performance of the one-stream ST-GCN [14] on the NTU-RGB + D Cross-View benchmark is 88.3%. By applying the adaptive adjacency matrix and specially designed data pre-processing methods, Shi et al. designed AGCN [17], whose performance improves to 93.4%. We use AGCN as the baseline in this work.

To further boost the performance of ST-GCN, we propose two novel modules to effectively learn the temporal and spatial features in the skeleton data, that is, the TDGC module and the SAP module. The results in the third and fourth rows of Table 1 show that either the TDGC module or the SAP module alone is beneficial for action recognition. Specifically, compared to the baseline AGCN, the TDGC module boosts the performance by 0.6% (94.0% vs. 93.4%) and the SAP module enhances the performance by 0.5% (93.9% vs. 93.4%), respectively. When integrating these two modules together, the joint stream SATD-GCN obtains the best performance, improving the accuracy by 1.0% (94.4% vs. 93.4%). These experiments demonstrate that both the TDGC module and the SAP module are effective. They help the network learn multi-scale and discriminative spatial-temporal features, so as to improve the accuracy of action classification. Besides, compared to AGCN, the increase in the parameters and training/inference time of our model is negligible (less than 10%).

Figure 4 visualizes the attention map in the SAP module. The skeleton graph is plotted based on the physical connections of the human body. Each circle represents one joint, and the radius represents the weight of the joint. It can be seen that for the action 'throw', the model pays more attention to the movement of the human hands, while for 'kicking something', the joints of the feet are given higher weights. For the actions 'pick up' and 'stand up', the joints of the upper body contain more information and are selected by the SAP module.

F I G U R E 4 Visualization of the attention map in the SAP module. The radius of each circle represents the weight of the joint

TA B L E 2 Comparison of the validation accuracy of the joint stream SATD-GCN with different configurations on the NTU-RGB+D Cross-View benchmark

    TA B L E 3 Comparison of the validation accuracy with state-of-the-art methods on the NTU-RGB + D dataset

Table 2 shows the effect of different dilation rates in the TDGC module and different down-sampling rates in the SAP module. For the dilation rate, increasing it gradually enables the model to extract temporal features hierarchically and effectively, from subtle motion to large-scale motion. The joint stream SATD-GCN model, which has two ST-GCN blocks followed by two S-TD-GCN blocks with dilation rates of 1 and 2, respectively, obtains the best performance. For the down-sampling rate α, the SAP module with α = 2 is the best configuration in our experiments. It should be noted that if the dilation rate and the down-sampling rate are too large, the performance of the model is damaged, because the padding in the TDGC module and the deletion of vertexes in the SAP module cause the model to lose some useful information. When the dilation rates of the two S-TD-GCN blocks are set to 2 and 3, respectively, the accuracy is even lower than the baseline (92.8% vs. 93.4%), because the dependence between two frames with a very large time span is weak, and the TDGC module then destroys the fine-grained temporal features instead.

    TA B L E 4 Comparison of the validation accuracy with state-of-the-art methods on the Kinetics-Skeleton dataset

    5.4|Comparisons to the state-of-the-art

We compare the proposed SATD-GCN model (two-stream) with the state-of-the-art skeleton-based action recognition methods on both the NTU-RGB + D dataset and the Kinetics-Skeleton dataset. The methods selected for comparison include handcrafted-feature-based methods [18, 42], RNN-based methods [10, 19-22, 43, 44], CNN-based methods [8, 9, 23-26], and GCN-based methods [14, 17, 29, 31, 44]. Results on the NTU-RGB + D dataset are shown in Table 3. Our SATD-GCN outperforms the handcrafted-feature-based, RNN-based, and CNN-based methods by more than 4% on both the Cross-Subject and the Cross-View benchmarks, which shows that GCNs have great advantages in dealing with skeleton data. Among the GCN-based methods, our SATD-GCN also achieves state-of-the-art performance. Compared to ST-GCN [14], the improvements of our method reach 7.8% (89.3% vs. 81.5%) and 7.2% (95.5% vs. 88.3%) on the Cross-Subject benchmark and the Cross-View benchmark, respectively. Compared with the most related work, 2s-AGCN [17], our results outperform it by 1.1% (89.3% vs. 88.2%) on the Cross-Subject benchmark and 0.6% (95.5% vs. 94.9%) on the Cross-View benchmark. These results reveal that our SATD-GCN can better classify a variety of human actions by combining the TDGC module and the SAP module.

Table 4 shows the results on the Kinetics-Skeleton dataset, where we compare the proposed SATD-GCN with six state-of-the-art approaches. We can see that our SATD-GCN outperforms the other competitive methods in both top-1 and top-5 accuracy. Compared to ST-GCN [14], the improvements of our method reach 5.9% (36.6% vs. 30.7%) and 7.0% (59.8% vs. 52.8%) in top-1 accuracy and top-5 accuracy, respectively. Compared with the most related work, 2s-AGCN [17], our results outperform it by 0.7% (36.6% vs. 35.9%) in top-1 accuracy and 1.2% (59.8% vs. 58.6%) in top-5 accuracy.

    6|CONCLUSIONS

In this article, we propose a novel SATD-GCN, which contains a TDGC module and an SAP module, for skeleton-based action recognition. The TDGC module can effectively extract temporal features at different time scales, improve the perception field in the temporal domain, and maintain robustness to different motion speeds and sequence lengths. The SAP module can select the human body joints which are beneficial for action recognition via a self-attention mechanism and alleviate the influence of data redundancy and noise. In addition, both the TDGC module and the SAP module can be easily incorporated into ST-GCN and significantly improve its performance. Owing to the contribution of these two modules, our SATD-GCN obtains state-of-the-art performance on two large-scale action recognition benchmark datasets.

    ACKNOWLEDGEMENTS

The work is supported by the National Key Research and Development Program of China (No. 2018YFB1600600).

    ORCID

Jiaxu Zhang https://orcid.org/0000-0002-9551-2708
