
    Skeleton Split Strategies for Spatial Temporal Graph Convolution Networks

2022-08-23 02:16:46 Motasem Alsawadi and Miguel Rio
Computers Materials & Continua, 2022, Issue 6

Motasem S. Alsawadi and Miguel Rio

Electronic and Electrical Engineering Department, University College London, London, WC1E 7JE, England

Abstract: Action recognition has been recognized as an activity in which individuals' behaviour can be observed. Assembling profiles of regular activities, such as activities of daily living, can support identifying trends in the data during critical events. A skeleton representation of the human body has been proven to be effective for this task. The skeletons are presented in a graph-like form. However, the topology of a graph is not structured like Euclidean-based data. Therefore, a new set of methods to perform the convolution operation upon the skeleton graph is proposed. Our proposal is based on the Spatial Temporal Graph Convolutional Network (ST-GCN) framework. In this study, we propose an improved set of label mapping methods for the ST-GCN framework. We introduce three split techniques (full distance split, connection split, and index split) as an alternative approach for the convolution operation. The experiments presented in this study have been trained on two benchmark datasets, NTU-RGB+D and Kinetics, to evaluate performance. Our results indicate that our split techniques outperform the previous partition strategies and are more stable during training without the additional edge importance weighting training parameter. Therefore, our proposal can provide a more realistic solution for real-time applications centred on recognition systems of activities of daily living in indoor environments.

Keywords: Skeleton split strategies; spatial temporal graph convolutional neural networks; skeleton joints; action recognition

    1 Introduction

Action recognition (AR) has been recognized as an activity in which individuals' behaviour can be observed. Assembling profiles of regular activities such as activities of daily living (ADL) can support identifying trends in the data during critical events. These include actions that might compromise a person's life. For that reason, human AR has become an active research area. Generally, human activity is captured through different approaches. Among these, wearable sensor-based recognition systems have become one of the most utilized. In this kind of system, the input data comes from a sensor or a network of sensors [1]. These sensors are worn by the person performing the action. In general, a sensor-based recognition system consists of a set of sensors and a central node [2]. The aim of this node is to compute the action representation and perform the action recognition. However, these sensor devices are seldom ergonomic. Hence, the discomfort of wearing an external device on a daily basis prevails. As a consequence, the monitored person often forgets to wear the sensor device, which renders the recognition system nonfunctional [3].

Other AR solutions include computer vision-based systems such as optical flows, appearance, and body skeletons [4–6]. Dynamic human skeletons (DHS) usually carry vital information that complements other modalities. One of the main benefits of this approach is that it removes the need for wearing sensors. Therefore, to collect the data, surveillance cameras can be mounted on the ceiling or walls of the environment of interest, ensuring an efficient indoor monitoring system [7]. However, DHS modelling has not yet been fully explored.

A performed action is typically described by a time series of the 2D or 3D coordinates of human joint positions [6,8]. The action is then recognized by examining the motion patterns. A skeleton representation of the human body has been proven to be effective for this task. It is robust to noise and is considered a computation- and storage-efficient solution [8]. Additionally, it provides a background-free data representation to the classification algorithms. This allows the algorithms to focus only on human body pattern recognition without being concerned with the surrounding environment in which the action is performed. This work aims to develop a unique and efficient approach for modelling the DHS for human AR.

1.1 OpenPose

There are multiple sources of camera-based skeleton data. Recently, Cao et al. [9] released the open-source library OpenPose, which allows real-time skeleton-based human detection. Their algorithm outputs the skeleton graph represented as an array of 2D or 3D coordinates: 18 tuples with values (X, Y, C) for 2D and (X, Y, Z, C) for 3D, where C is the confidence score of the detected joint, and X, Y, and Z represent the coordinates on the X-axis, Y-axis, and Z-axis of the video frame, respectively.
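For concreteness, a minimal sketch (illustrative code, not the OpenPose API itself) of how such a per-frame 2D output can be unpacked into per-joint coordinates and confidences:

```python
import numpy as np

# Hypothetical example: a flat OpenPose-style 2D output for one person,
# 18 keypoints x (x, y, confidence) = 54 values per frame.
flat = np.random.rand(54)  # stand-in for real detector output

keypoints = flat.reshape(18, 3)   # rows: joints; cols: (x, y, c)
xy = keypoints[:, :2]             # 2D coordinates per joint
confidence = keypoints[:, 2]      # detection confidence per joint

# Joints with low confidence are often masked out before modelling.
reliable = confidence > 0.1
print(f"{reliable.sum()} of 18 joints detected reliably")
```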

    1.2 Spatial Temporal Graph Neural Network

New techniques have been proposed recently to exploit the connections between the joints of a skeleton. Among these, Convolutional Neural Networks (CNNs) are used to address human action modelling tasks due to their ability to automatically capture the patterns contained in the spatial configuration of the joints and their temporal dynamics [10]. However, the skeletons are presented in a graph-like form, making it difficult to use conventional CNNs to model the dynamics of human actions. Thanks to the recent evolution of Graph Convolutional Neural Networks (GCNNs), it is possible to analyse non-structured data in an end-to-end manner. These techniques generalize CNNs to graph structures [11]. It has been proven that GCNNs are highly capable of solving computer vision problems and have demonstrated superior performance compared to CNN approaches [12]. The remarkable success of GCNNs is based on their locally connected configurations and collective aggregation upon graphical structures. Moreover, GCNNs operate on each node separately regardless of the input sequence, meaning that, unlike CNNs, the outcome of GCNNs is robust to changes in the order of the input node information [13].
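To make the aggregation idea concrete, the following is a minimal sketch of a single graph-convolution step using one common symmetric normalization (a simplified form of the propagation rule; the exact variant differs between GCNN formulations):

```python
import numpy as np

def graph_conv(X, A, W):
    """One simplified graph-convolution step: aggregate each node's
    neighbourhood (including itself), normalize by degree, then apply
    a weight matrix shared across all nodes."""
    A_hat = A + np.eye(A.shape[0])        # add self-connections
    deg = A_hat.sum(axis=1)               # node degrees
    D_inv_sqrt = np.diag(deg ** -0.5)     # symmetric degree normalization
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W

# Toy 4-node graph with 3-dim features mapped to a 2-dim output.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
X = np.random.rand(4, 3)
W = np.random.rand(3, 2)
print(graph_conv(X, A, W).shape)  # (4, 2)
```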

In order to achieve accurate ADL recognition, the temporal dimension must be considered. An action can be considered a time-dependent pattern of a set of joints in motion [8]. A graph offers a more intuitive representation of a skeleton by presenting the bones as edges and the joints as vertices [14]. Given the advantages of GCNNs mentioned previously, numerous approaches for skeleton-based action recognition using this architecture have been proposed. The first GCNN-based solution for action recognition using skeleton data was presented by Yan et al. [6]. They considered both the spatial and temporal dimensions of skeleton joint movements at the modelling stage. This approach is called the Spatial Temporal Graph Convolutional Network (ST-GCN) model. In the ST-GCN model, every joint has a set of edges for the spatial and temporal dimensions independently, as illustrated in Fig. 1. Given a sequence of frames with skeleton joint coordinates, the spatial edges connect each joint with its neighbourhood within a frame. The temporal edges, on the other hand, connect each joint with the joint at the same location in the consecutive frame; the temporal edge set thus represents the joint trajectory over time [6]. However, the topology of the graph is not structured like Euclidean-based data; for instance, different nodes have different numbers of neighbours. Therefore, multiple strategies for applying the convolution operation upon skeleton joints have been proposed.

Figure 1: Spatiotemporal graph representation of a skeleton

In their work, Yan et al. [6] presented multiple solutions to perform the convolution operation over the skeleton graph. They first divided the skeleton graph into fixed subsets of nodes (the skeleton joints) that they called neighbour sets. Every neighbour set has a central node (the root node) and its adjacent nodes. Subsequently, the neighbour set is partitioned into a fixed number $K$ of subsets, and a numeric label (which we call priority) is assigned to each of them. Formally, each adjacent node $u_{ti}$ in the neighbour set $B(u_{tj})$ of a root node $u_{tj}$ is mapped to a label $l_{ti}$. Correspondingly, each filter of the CNN has $K$ subsets of values, and each subset of values of a filter performs the convolution operation upon the feature vector of its corresponding node. Given that the skeleton data has been obtained using the OpenPose toolbox [9], each feature vector consists of the 2D coordinates of a joint together with a confidence value $C$. These ideas are illustrated in Fig. 2.

In Fig. 2, two of the neighbour sets are shown with an orange background. The features of each node $(x, y, c)$ are then concatenated into a feature matrix. The position of each feature vector in the final matrix is defined by the partition strategy being used. Amidst the skeleton partitioning strategies to perform the label mapping presented in [6], the Spatial Configuration Strategy served as a reference for the techniques proposed in the present study.
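A minimal sketch of how the $K$ labelled subsets can drive the convolution, assuming one adjacency mask per label and one weight slice per label (names and shapes are illustrative, not the authors' implementation):

```python
import numpy as np

def partitioned_graph_conv(X, A_parts, W_parts):
    """Sum of per-partition graph convolutions: each label k has its own
    adjacency mask A_k (which neighbours carry that label) and its own
    slice of filter weights W_k, mirroring the K subsets of each kernel."""
    out = np.zeros((X.shape[0], W_parts[0].shape[1]))
    for A_k, W_k in zip(A_parts, W_parts):
        out += A_k @ X @ W_k
    return out

# Toy example: 4 joints, 3-dim features (x, y, c), K = 3 partitions.
X = np.random.rand(4, 3)
A_parts = [np.eye(4)] + [np.random.randint(0, 2, (4, 4)).astype(float)
                         for _ in range(2)]
W_parts = [np.random.rand(3, 8) for _ in range(3)]
print(partitioned_graph_conv(X, A_parts, W_parts).shape)  # (4, 8)
```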

Figure 2: Skeleton components

    1.2.1 Spatial Configuration Partitioning Strategy

In this strategy, the partitioning for the label mapping is performed according to the distance of each node in the neighbour set with respect to the centre of gravity $cg$ of the skeleton graph. The $cg$ is defined as the average of the coordinates of all the joints of the skeleton in a single video frame [6]. According to [6], each neighbour set is divided into three subsets (filter size $K = 3$). Therefore, each kernel has three subsets of values: one for the root node, one for the joints closer to $cg$, and another for the joints located farther from $cg$. As can be seen in Fig. 3, each filter with three subsets of values is applied to the node feature vectors in order to create the output feature map.

Figure 3: Spatial configuration partitioning

In this technique, the filter size is $K = 3$, and the mapping is defined as follows [6]:

$$l_{ti} = \begin{cases} 0 & \text{if } r_i = r_j \\ 1 & \text{if } r_i < r_j \\ 2 & \text{if } r_i > r_j \end{cases}$$

where $l_{ti}$ represents the label map for each joint $i$ in the neighbour set of the root node $u_{tj}$, $r_j$ is the average distance from $cg$ to the root node $u_{tj}$ over each frame, and $r_i$ is the average distance from $cg$ to the $i$th joint over each frame across the whole training set. Once the label of each node in the neighbour set has been set, the convolution operation is performed to produce the output feature maps, as shown in Fig. 3.
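A sketch of this labelling rule in code, assuming precomputed average distances to $cg$ (names are illustrative):

```python
import numpy as np

def spatial_config_labels(neighbour_ids, root_id, r):
    """Label each joint in a neighbour set by comparing its average
    distance to the centre of gravity against the root's distance:
    0 = root itself, 1 = closer to cg than the root, 2 = farther."""
    labels = {}
    for i in neighbour_ids:
        if i == root_id:
            labels[i] = 0
        elif r[i] < r[root_id]:
            labels[i] = 1
        else:
            labels[i] = 2
    return labels

# Hypothetical average distances to cg for joints 0..3.
r = np.array([0.55, 0.35, 0.62, 0.50])
print(spatial_config_labels([0, 1, 2, 3], root_id=3, r=r))
# {0: 2, 1: 1, 2: 2, 3: 0}
```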

    1.3 Learnable Edge Importance Weighting

It is important to note that complex movements can be inferred from a small set of representative "bright spots" on the joints of the human body [15]. However, not all the joints provide the same quality and quantity of information about the movement performed. Therefore, it is intuitive to assign a different level of importance to every joint in the skeleton.

In the ST-GCN framework proposed by Yan et al. [6], the authors added a mask M (or M-mask) to each layer of the GCNN to express the importance of each joint. The mask scales the contribution of each joint of the skeleton according to the learned weights of the spatial graph network. The proposed M-mask considerably improves the architecture's performance; therefore, the M-mask is applied to the ST-GCN network throughout their experiments.
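A minimal PyTorch-style sketch of such a learnable mask (illustrative, not the authors' code): the mask is multiplied elementwise with the adjacency matrix and learned jointly with the network weights.

```python
import torch
import torch.nn as nn

class EdgeImportance(nn.Module):
    """Learnable M-mask: scales each edge of the skeleton adjacency
    matrix so that informative joints contribute more to the convolution."""
    def __init__(self, num_joints):
        super().__init__()
        # Initialized to ones so training starts from the plain graph.
        self.M = nn.Parameter(torch.ones(num_joints, num_joints))

    def forward(self, A):
        return A * self.M  # elementwise importance weighting

A = torch.rand(18, 18)          # toy adjacency for 18 OpenPose joints
masked = EdgeImportance(18)(A)  # gradients flow into M during training
print(masked.shape)             # torch.Size([18, 18])
```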

    1.4 Our Contribution

This work proposes an improved set of label mapping methods for the ST-GCN framework by introducing three split techniques (full distance split, connection split, and index split) as an alternative approach for the convolution operation. It is based upon the ST-GCN framework proposed by Yan et al. [6]. Our results indicate that all our proposed split strategies outperform the baseline model. Furthermore, the proposed frameworks are more stable during training. Finally, our proposals do not require the additional training parameters of the edge importance weighting applied by the ST-GCN model. This shows that our proposal can provide a more suitable solution for real-time applications focused on ADL recognition systems for indoor environments.

    The contributions are summarized below:

I: We present an improved set of label mapping methods for the ST-GCN framework by introducing three split techniques (full distance split, connection split, and index split) as an alternative approach for the convolution operation.

II: Instead of the traditional way of extracting information from the skeleton without considering the relations between the joints, we exploit the relationships between the joints during the action execution to provide valuable and accurate information about the action performed.

III: We find that an extensive analysis of the inner skeleton joint information, obtained by partitioning the skeleton graph into as many subsets as possible, yields more accurate recognition.

IV: We propose split strategies that focus on capturing the patterns in the relationships between the skeleton joints by carefully analysing the partition strategies used to perform the movement modelling within the ST-GCN framework.

The rest of the paper is structured as follows: Section 2 presents a state-of-the-art review of previous skeleton graph-based action recognition approaches. The details of the proposed skeleton partition strategies are presented in Section 3. Section 4 discusses the experimental settings we use to obtain the results. The results and discussion are presented in Section 5. Finally, Section 6 concludes the paper.

    2 Related Literature

There has been previous work on AR upon skeleton data. Due to the emergence of low-cost depth cameras, access to skeleton data has become relatively easy [16]. Therefore, there has been increasing interest in using skeleton representations to recognize human activity in general. For the sake of conciseness, only the most recent and relevant works are mentioned. Zhang et al. [17] combined skeleton data with machine learning methods (such as logistic regression) upon benchmark datasets. They demonstrated that skeleton representations provide better performance in terms of accuracy than other forms of motion representation. In order to model the dependencies between joints and bones, Shi et al. [14] presented a variety of graph network denominated Directed Acyclic Graph (DAG). Later, Cheng et al. [18] presented a shift-CNN-inspired method called Shift-GCN. Their approach aims to reduce the computational complexity of previous ST-GCN-based methods; the results showed 10× less computational complexity. However, to the best of our knowledge, no unique partition strategies have been proposed to enhance the performance of AR using the ST-GCN model presented in [6].

    3 Proposed Split Strategies

In this section, we present a new set of techniques to create the label mapping for the nodes in the neighbour sets of the skeleton graph. The techniques are modifications of the previously proposed spatial configuration partitioning presented in [6].

As in the baseline model, a maximum distance of one node with respect to the root node defines the neighbour sets in the skeleton graph. However, in every strategy presented in this section, every node in the neighbour set is labelled separately. Therefore, in every proposed approach, the filter size is $K = 4$. When a neighbour set has fewer nodes than subsets, the unused subsets of the kernel are set to zero; for instance, for a neighbour set consisting only of the root node and a single adjacent node, the third and fourth subsets of values of the kernel are set to zero. Each of the proposed split strategies is computed on each frame of a training video sample individually.

Fig. 4 illustrates our proposed partitioning strategy. As can be seen, a different label mapping is assigned to each node in the neighbour set. Therefore, a different subset of values of each filter is applied to each joint feature vector. The key issue, however, is defining the order of each node (the split criterion) in the neighbour set. We propose three different approaches to address this issue: full distance split, connection split, and index split. These proposals are shown in Fig. 5 and are explained in the following sections.

Figure 4: Proposed partition strategy

Figure 5: Proposed split techniques; (i) full distance split, (ii) connection split, (iii) index split

    3.1 Full Distance Split

In this method, the partitioning for the label mapping is performed according to the distance of each node in the neighbour set with respect to $cg$. As can be noticed, this solution is similar to the spatial configuration partitioning approach explained previously. However, here we consider the distance of every node in the neighbour set; thus, this solution is named the full distance split technique. Depending on the neighbour set in the skeleton, each kernel can therefore have up to four subsets of values. Fig. 5i shows that each filter with four subsets of values is applied to the node feature vectors; the order is defined by their relative distances with respect to $cg$ to create the output feature map. To explain this strategy, we define the set $F$ of the Euclidean distances of the $i$th adjacent nodes $u_{ti}$ (of the root node $u_{tj}$) with respect to $cg$, sorted in ascending order:

$$F = \{f_1, f_2, \ldots, f_N \mid f_1 \le f_2 \le \cdots \le f_N\}$$

where $N$ is the number of adjacent nodes of the root node $u_{tj}$. For instance, $f_1$ and $f_N$ have the minimum and maximum values in $F$, respectively. In this strategy, the label mapping is given by:

$$l_{ti} = \begin{cases} 0 & \text{if } d(u_{ti}, cg) = x_r \\ k & \text{if } d(u_{ti}, cg) = f_k, \quad k = 1, \ldots, N \end{cases}$$

where $l_{ti}$ represents the label map for each joint $i$ in the neighbour set of the root node $u_{tj}$, and $x_r$ is the Euclidean distance from the root node $u_{tj}$ to $cg$.
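A sketch of the full distance split labelling for a single frame, assuming 2D joint coordinates (helper names are illustrative):

```python
import numpy as np

def full_distance_split_labels(coords, neighbour_ids, root_id):
    """Label the root 0, then rank its adjacent joints by Euclidean
    distance to the skeleton's centre of gravity (ascending): the k-th
    closest neighbour receives label k."""
    cg = coords.mean(axis=0)  # centre of gravity of all joints in the frame
    adj = [i for i in neighbour_ids if i != root_id]
    order = sorted(adj, key=lambda i: np.linalg.norm(coords[i] - cg))
    labels = {root_id: 0}
    labels.update({i: k + 1 for k, i in enumerate(order)})
    return labels

coords = np.random.rand(18, 2)  # toy frame: 18 joints, (x, y) each
print(full_distance_split_labels(coords, [1, 0, 2, 5], root_id=1))
```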

    3.2 Connection Split

In this approach, the number of joints adjacent to each joint (i.e., the joint degree) represents the split criterion in the neighbour set. Thus, the more connections a joint has, the higher the priority assigned to it.

Fig. 5ii shows that the joint with label A represents the root node, and B is the joint with the highest priority since it has three adjacent joints connected. We observe that joints C and D both have two connections; hence, the priority for these nodes is set randomly. Once the joint priorities have been set, the convolution operation is performed with a subset of values of each filter for every joint in the neighbour set independently.

To define the label mapping in this approach, we first define the neighbour set of a root node $u_{tj}$ with $N$ adjacent nodes as $B(u_{tj})$ [6], and we define the degree matrix of $B(u_{tj})$ as $D \in \mathbb{R}^{N \times N}$. The value at position $d_{ii}$ of $D$ contains the degree value $d(u_{ti})$ of each of the adjacent nodes of the root node $u_{tj}$. Similarly, we define a set $C$ of the degree values $d(u_{ti})$ of each of the $N$ adjacent nodes of the root node, sorted in descending order:

$$C = \{c_1, c_2, \ldots, c_N \mid c_1 \ge c_2 \ge \cdots \ge c_N\}$$

For instance, $c_1$ and $c_N$ have the maximum and minimum values of $C$, respectively. Finally, the label mapping is defined as:

$$l_{ti} = \begin{cases} 0 & \text{if } d(u_{ti}) = d_r \\ k & \text{if } d(u_{ti}) = c_k, \quad k = 1, \ldots, N \end{cases}$$

where $l_{ti}$ represents the label map for each joint $i$ adjacent to the root node $u_{tj}$ in the neighbour set, and $d_r$ is the degree of the root node $u_{tj}$.
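In code, the connection split reduces to a degree sort with random tie-breaking, as sketched below (illustrative names; the shuffle mirrors the random priority for equal degrees noted above):

```python
import random

def connection_split_labels(adjacency, neighbour_ids, root_id, seed=None):
    """Label the root 0, then rank its adjacent joints by degree
    (descending); joints with equal degree are ordered randomly."""
    rng = random.Random(seed)
    adj = [i for i in neighbour_ids if i != root_id]
    rng.shuffle(adj)  # random order breaks ties between equal degrees
    order = sorted(adj, key=lambda i: -len(adjacency[i]))
    return {root_id: 0, **{i: k + 1 for k, i in enumerate(order)}}

# Toy skeleton as an adjacency list: joint -> connected joints.
adjacency = {0: [1], 1: [0, 2, 5, 8], 2: [1, 3], 3: [2],
             5: [1, 6], 6: [5], 8: [1]}
print(connection_split_labels(adjacency, [1, 0, 2, 5], root_id=1, seed=0))
```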

    3.3 Index Split

The skeleton data utilized for our study is gathered using the OpenPose [9] library. According to the library documentation, the output file with the skeleton information consists of key points. The output skeleton provided by the OpenPose toolbox is shown in Fig. 6.

Figure 6: OpenPose output key points

In this approach, the value of the index of each key point defines the priority criterion in the neighbour set. An illustrative example is shown in Fig. 5iii. Joint B is assigned the highest priority since it has a key point index value of 1; C is the joint with the second priority since it has a key point index value of 3. Finally, D is the joint with the least priority since it has a key point index value of 8.

Therefore, we define the set $P$ of the key point indexes $ind(u_{ti})$ of the $i$th adjacent nodes $u_{ti}$ (of the root node $u_{tj}$), sorted in ascending order:

$$P = \{p_1, p_2, \ldots, p_N \mid p_1 \le p_2 \le \cdots \le p_N\}$$

where $N$ is the number of adjacent nodes of the root node $u_{tj}$. For instance, $p_1$ and $p_N$ have the minimum and maximum values of $P$, respectively. The label mapping is therefore defined as:

$$l_{ti} = \begin{cases} 0 & \text{if } ind(u_{ti}) = in_r \\ k & \text{if } ind(u_{ti}) = p_k, \quad k = 1, \ldots, N \end{cases}$$

where $l_{ti}$ represents the label map for each joint $i$ in the neighbour set of the root node $u_{tj}$, and $in_r$ is the index of the key point corresponding to the root node $u_{tj}$.
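The index split is the simplest of the three to implement, since the priority is just the OpenPose key point index; a minimal sketch:

```python
def index_split_labels(neighbour_ids, root_id):
    """Label the root 0, then rank its adjacent joints by their
    OpenPose key point index in ascending order."""
    order = sorted(i for i in neighbour_ids if i != root_id)
    return {root_id: 0, **{i: k + 1 for k, i in enumerate(order)}}

# Example matching Fig. 5iii: a root with neighbours at indices 1, 3, 8.
print(index_split_labels([2, 1, 3, 8], root_id=2))
# {2: 0, 1: 1, 3: 2, 8: 3}
```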

    4 Experiments

    4.1 Datasets

To evaluate the performance of our proposed partitioning techniques, we train our models on two benchmark datasets: the NTU-RGB+D [19] and the Kinetics [20] datasets. These two datasets were considered in order to provide a valid comparison with the original ST-GCN framework.

    4.1.1 NTU-RGB+D

To date, the NTU-RGB+D is known to be the most extensive dataset with 3D joint annotations for human AR tasks [6]. The samples have been recorded using the Microsoft Kinect V2 camera. To take full advantage of the chosen camera device, each action sample consists of a depth map modality, 3D joint information, RGB frames, and infrared sequences. The skeleton information provided by this dataset consists of the three-dimensional locations of the 25 main joints of the human body.

In their study, Shahroudy et al. [19] proposed two evaluation criteria for the NTU-RGB+D dataset: the Cross-Subject (X-sub) and the Cross-View (X-view) evaluations. In the first approach, the train/test split is based upon the groups of subjects performing the action: the data corresponding to 20 participants is used for training and the remaining samples for testing. The X-view evaluation, on the other hand, uses the camera view as the criterion for the train/test split: the data collected by camera 1 is used for testing, and the data collected by the other two cameras is used for training.
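As an illustrative sketch (the metadata fields and ID sets are placeholders, not the official protocol constants), both evaluation splits reduce to filtering samples by recording metadata:

```python
def split_ntu(samples, mode, train_subjects=None):
    """Split NTU-RGB+D samples for evaluation.
    X-sub: train on a fixed group of subjects, test on the rest.
    X-view: test on camera 1, train on cameras 2 and 3."""
    train, test = [], []
    for s in samples:  # each sample: {'subject': int, 'camera': int, ...}
        if mode == 'xsub':
            in_train = s['subject'] in train_subjects
        else:  # 'xview'
            in_train = s['camera'] != 1
        (train if in_train else test).append(s)
    return train, test

samples = [{'subject': 1, 'camera': 1}, {'subject': 3, 'camera': 2}]
print(split_ntu(samples, 'xsub', train_subjects={1, 2}))
```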

The NTU-RGB+D dataset provides a total of 56,880 action clips covering 60 different actions classified into three major groups: daily actions, health-related actions, and mutual actions. Forty participants performed the action samples. Each sample has been captured by 3 different cameras simultaneously, located at the same height but at different angles. Later, this dataset was extended to twice its size by adding 60 more classes and another 57,600 video samples [19]. This extended version is called NTU RGB+D 120 (120-class NTU RGB+D dataset). By considering only the 3D skeleton modality of the NTU-RGB+D dataset, the storage requirement is reduced from 136 GB to 5.8 GB; consequently, the computational cost is reduced considerably.

    4.1.2 Kinetics

While the NTU-RGB+D dataset is widely known as the largest in-house captured AR dataset, the DeepMind Kinetics human action dataset is the largest set of unconstrained AR samples.

The 306,245 videos provided by the Kinetics dataset are obtained from YouTube. Each video sample is supplied with no previous editing, so resolution and frame rate vary, and is classified into one of 400 different action classes.

Due to the vast number of classes, one video sample can be classified into more than one cluster. For instance, a video sample of a person texting while driving a car could be labelled "texting" or "driving a car". Therefore, the authors in [20] suggest considering a top-5 performance evaluation rather than a top-1 approach. That is, a labelled sample is considered a true positive if its ground-truth label appears within the 5 classes with the highest scores predicted by the model (top-5), in contrast to considering only the predicted class with the highest score (top-1).
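A top-k evaluation can be sketched in a few lines (illustrative code):

```python
import numpy as np

def topk_accuracy(scores, labels, k=5):
    """Fraction of samples whose true label appears among the k
    highest-scoring predicted classes."""
    topk = np.argsort(scores, axis=1)[:, -k:]  # indices of top-k classes
    hits = [labels[n] in topk[n] for n in range(len(labels))]
    return float(np.mean(hits))

scores = np.random.rand(8, 400)  # toy logits: 8 samples, 400 Kinetics classes
labels = np.random.randint(0, 400, size=8)
print(topk_accuracy(scores, labels, k=1), topk_accuracy(scores, labels, k=5))
```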

The Kinetics dataset provides raw RGB-format videos; therefore, the skeleton information must be extracted from the sample videos. Accordingly, we use the Kinetics-skeleton dataset provided by Yan et al. [6] in our experiments.

    4.2 Model Implementation

The experimental process comprises three stages: Data Splitting, ST-GCN Model Setup, and Model Training. These stages are explained as follows:

    4.2.1 Data Splitting

The datasets are divided into two subsets: the training and the validation sets. In our experiments, we consider a 3:1 ratio for the training and validation split, respectively.

    4.2.2 ST-GCN Model Setup

The ST-GCN model uses the baseline architecture: a stack of 9 layers organized into 3 blocks of 3 layers each. The layers of the first block have 64 output channels each; the second and third blocks have 128 and 256 output channels, respectively. Finally, the 256-feature vector output by the last layer is fed into a softmax classifier to predict the performed action [6].
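A schematic PyTorch sketch of this layout, abstracting each spatial-temporal graph convolution as a plain linear layer purely to show the 64/128/256 block structure (a shape-level sketch, not the ST-GCN layer itself):

```python
import torch
import torch.nn as nn

class STGCNBackbone(nn.Module):
    """Schematic stack: 9 layers in three blocks of three, with 64, 128,
    and 256 output channels, then a softmax classifier. The per-layer
    spatial-temporal graph convolution is abstracted as a Linear here."""
    def __init__(self, in_channels=3, num_classes=60):
        super().__init__()
        layers, c_in = [], in_channels
        for c_out in (64, 64, 64, 128, 128, 128, 256, 256, 256):
            layers += [nn.Linear(c_in, c_out), nn.ReLU()]
            c_in = c_out
        self.backbone = nn.Sequential(*layers)
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, x):                    # x: (batch, joints, features)
        h = self.backbone(x).mean(dim=1)     # global pool over joints
        return self.classifier(h).softmax(dim=-1)

model = STGCNBackbone()
print(model(torch.rand(2, 18, 3)).shape)     # torch.Size([2, 60])
```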

    4.2.3 Model Training

The ST-GCN model is implemented in the PyTorch deep learning framework [21]. The models are trained using stochastic gradient descent (SGD) with learning rate decay as the optimization algorithm. The initial learning rate is 0.1. The number of epochs and the decay schedule vary depending on the dataset used. For the NTU-RGB+D dataset, we train the models for 80 epochs, and the learning rate decays by a factor of 0.1 at the 10th and 50th epochs. For the Kinetics dataset, we train the models for 50 epochs, and the learning rate decays by a factor of 0.1 every 10th epoch. Similarly, the batch size varies according to the dataset: for the NTU-RGB+D dataset, the batch sizes for training and testing were 32 and 64, respectively; for the Kinetics dataset, they were 128 and 256, respectively. To avoid overfitting, a weight decay value of 0.0001 has been used. Additionally, a dropout value of 0.5 has been set for the NTU-RGB+D experiments.
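These hyperparameters map directly onto standard PyTorch components; a sketch of the NTU-RGB+D schedule (the model here is a stand-in):

```python
import torch

model = torch.nn.Linear(3, 60)  # stand-in for the ST-GCN network above

# SGD with the paper's initial learning rate and weight decay.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=0.0001)

# NTU-RGB+D schedule: decay the learning rate by 0.1 at epochs 10 and 50.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[10, 50], gamma=0.1)

for epoch in range(80):
    # ... one pass over the training set (batch size 32) would go here ...
    scheduler.step()
```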

    To provide a valid comparison with the baseline model,an M-mask implementation is considered in the experiments presented in this study.

    5 Experimental Results and Discussion

This section discusses the performance of our proposals against the benchmark ST-GCN models based on [6] using the spatial configuration partition approach. This strategy provided the best performance in terms of accuracy in [6]; therefore, it has been chosen as the baseline against which to prove the effectiveness of the partition strategies introduced in this study.

    5.1 Results Evaluation on NTU-RGB+D

Note that we aim to recognize ADL in an indoor environment. Therefore, the NTU-RGB+D dataset serves as a more accurate reference than the Kinetics dataset, since it was recorded under the same conditions; hence, we focus on the results obtained with this dataset. We use the 3D joint information provided in [19] in our experiments. Tab. 1 shows the performance comparison of our proposals and the state-of-the-art ST-GCN framework. It can be observed that all our partition strategies outperform the spatial configuration strategy of the ST-GCN. For the X-sub benchmark, the connection split achieves the highest performance with 82.6% accuracy, more than 1% higher than the ST-GCN performance. On the other hand, the index split outperforms the rest of the strategies with 90.5% accuracy on the X-view benchmark, more than 2% higher than the ST-GCN performance.

    Table 1: NTU-RGB+D performance

Figs. 7–10 show the training behaviour of the models using the spatial configuration partitioning of the ST-GCN framework and the proposed connection split on both the X-sub and X-view benchmarks without the M-mask implementation. The blue and orange plots show the performance of the models on the training and validation sets, respectively. The training score plots show that the learning performance of the proposed connection split stabilizes while increasing over time compared with the ST-GCN outcome. Our proposals provide a considerable advantage over the benchmark framework because they demonstrate that the M-mask is not required to yield satisfactory performance. Omitting the M-mask reduces computational complexity; hence, our proposal can provide a more suitable solution for real-time applications. Moreover, given its superiority in accuracy and time consumption, our proposed method offers a practical solution for an ADL recognition system.

Figure 7: Spatial C.P. X-sub training scores

Figure 8: Connection split X-sub training scores

Figure 9: Spatial C.P. X-view training scores

Figure 10: Connection split X-view training scores

    5.2 Performance on the Kinetics Dataset

The recognition performance has been evaluated using the top-1 and top-5 criteria on the Kinetics dataset. We compare the performance of our proposed techniques against the ST-GCN framework, as shown in Tab. 2.

Table 2: Performance on the Kinetics dataset

As the results indicate, all our partition strategies outperform the spatial configuration strategy of the ST-GCN under the top-5 criterion. We observe that 54.5% accuracy is achieved using the full distance split approach, which is 2% higher than the performance obtained with the baseline model. Under the top-1 evaluation criterion, our proposals perform on par with the ST-GCN model; the highest performance, 31.7% accuracy, is again achieved by the full distance split approach, a margin of about 1% over the result obtained with the ST-GCN model.

Therefore, we can conclude that the performance metrics presented in Tab. 2 validate the superiority of the proposed full distance split method on the Kinetics dataset.

    6 Conclusion

In this work, we proposed an improved set of label mapping methods for the ST-GCN framework (full distance split, connection split, and index split) as an alternative approach for the convolution operation. Our results indicate that all our split techniques outperform the previous partitioning strategies for the ST-GCN framework. Moreover, they prove to be more stable during training without using the additional edge importance weighting training parameter applied by the baseline model. Therefore, the results obtained with our split proposals can provide a more suitable solution than the baseline ST-GCN strategies for real-time applications focused on ADL recognition systems in indoor environments.

A significant computational effort is involved in using heterogeneous methods to calculate the distances between the joints and the $cg$ for each frame of the video sample in the full distance split and spatial configuration partitioning. It would be computationally less demanding to use a homogeneous technique to calculate the distance between the joints and the $cg$ for both splitting strategies. Furthermore, while our current methodology considers a one-node distance from the root node to perform the skeleton partitioning, additional flexibility could be achieved by increasing the number of joints per neighbour set. This may make it possible to cover larger body sections (such as limbs) and to find more complex relationships between the joints during the execution of the actions.

Acknowledgement: The authors acknowledge the support of King Abdulaziz City for Science and Technology.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
