
    Skeleton Split Strategies for Spatial Temporal Graph Convolution Networks

Computers Materials & Continua, 2022, Issue 6

Motasem S. Alsawadi and Miguel Rio

Electronic and Electrical Engineering Department, University College London, London, WC1E 7JE, England

Abstract: Action recognition has been recognized as an activity in which individuals' behaviour can be observed. Assembling profiles of regular activities such as activities of daily living can support identifying trends in the data during critical events. A skeleton representation of the human body has been proven to be effective for this task. The skeletons are presented in a graph-like form. However, the topology of a graph is not structured like Euclidean-based data. Therefore, a new set of methods to perform the convolution operation upon the skeleton graph is proposed. Our proposal is based on the Spatial Temporal-Graph Convolutional Network (ST-GCN) framework. In this study, we propose an improved set of label mapping methods for the ST-GCN framework. We introduce three split techniques (full distance split, connection split, and index split) as alternative approaches for the convolution operation. The experiments presented in this study have been trained on two benchmark datasets, NTU-RGB+D and Kinetics, to evaluate the performance. Our results indicate that our split techniques outperform the previous partition strategies and are more stable during training without using the additional edge importance weighting training parameter. Therefore, our proposal can provide a more realistic solution for real-time applications centred on activities of daily living recognition systems for indoor environments.

Keywords: Skeleton split strategies; spatial temporal graph convolutional neural networks; skeleton joints; action recognition

    1 Introduction

Action recognition (AR) has been recognized as an activity in which individuals' behaviour can be observed. Assembling profiles of regular activities such as activities of daily living (ADL) can support identifying trends in the data during critical events. These include actions that might compromise a person's life. For that reason, human AR has become an active research area. Generally, human activity can be captured through several modalities. Amidst these modalities, wearable sensor-based recognition systems have become one of the most utilized approaches. In this kind of system, the input data comes from a sensor or a network of sensors [1]. These sensors are worn by the person performing the action. In general, a sensor-based recognition system consists of a set of sensors and a central node [2]. The aim of this node is to compute the action representation and perform the action recognition. However, these sensor devices are seldom ergonomic. Hence, the discomfort of wearing an external device on a daily basis prevails. As a consequence, the monitored person often forgets to wear the sensor device, which renders the recognition system non-functional [3].

Other AR solutions include computer vision-based systems built upon optical flow, appearance, and body skeletons [4–6]. Dynamic human skeletons (DHS) usually carry vital information that complements other modalities. One of the main benefits of this approach is that it removes the need for wearing sensors. To collect the data, surveillance cameras can be mounted on the ceiling or walls of the environment of interest, ensuring an efficient indoor monitoring system [7]. However, DHS modelling has not yet been fully explored.

A performed action is typically described by a time series of the 2D or 3D coordinates of the human joint positions [6,8], and the action is recognized by examining the motion patterns. A skeleton representation of the human body has been proven to be effective for this task. It is robust to noise, and it is considered a computation- and storage-efficient representation [8]. Additionally, it provides a background-free data representation to the classification algorithms. This allows the algorithms to focus only on the human body pattern recognition without being concerned about the surrounding environment in which the action is performed. This work aims to develop a unique and efficient approach for modelling the DHS for human AR.

1.1 OpenPose

There are multiple sources of camera-based skeleton data. Recently, Cao et al. [9] released the open-source library OpenPose, which allows real-time skeleton-based human detection. Their algorithm outputs the skeleton graph represented as an array with the 2D or the 3D coordinates. The array consists of 18 tuples with values (X, Y, C) for 2D and (X, Y, Z, C) for 3D, where C is the confidence score of the detected joint, and X, Y and Z represent the coordinates on the X-axis, Y-axis and Z-axis of the video frame, respectively.
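
For illustration, the following minimal sketch parses one frame of OpenPose's COCO-model JSON output into such an array. The file layout assumed here (a "people" list carrying a flattened "pose_keypoints_2d" field) follows the library's documented 2D output; the function name and the use of NumPy are our own choices.

```python
import json
import numpy as np

def load_openpose_frame(json_path):
    """Parse one OpenPose COCO-model JSON frame into a (people, 18, 3) array,
    where each row holds (X, Y, C) for one detected joint."""
    with open(json_path) as f:
        frame = json.load(f)
    people = [np.asarray(p["pose_keypoints_2d"]).reshape(18, 3)
              for p in frame.get("people", [])]
    return np.stack(people) if people else np.empty((0, 18, 3))
```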

    1.2 Spatial Temporal Graph Neural Network

New techniques have been proposed recently to exploit the connections between the joints of a skeleton. Among these, Convolutional Neural Networks (CNNs) are used to address human action modelling tasks due to their ability to automatically capture the patterns contained in the spatial configuration of the joints and their temporal dynamics [10]. However, the skeletons are presented in a graph-like form, making it difficult to use conventional CNNs to model the dynamics of human actions. Thanks to the recent evolution of Graph Convolutional Neural Networks (GCNNs), it is possible to analyse such non-structured data in an end-to-end manner. These techniques generalize CNNs to graph structures [11]. It has been proven that GCNNs are highly capable of solving computer vision problems and have demonstrated superior performance compared to CNN approaches [12]. The remarkable success of GCNNs is based on their locally connected configurations and collective aggregation upon graphical structures. Moreover, GCNNs operate on each node separately regardless of the input sequence, meaning that, unlike CNNs, the outcome of GCNNs is robust to changes in the input node ordering [13].

In order to achieve accurate ADL recognition, the temporal dimension must be considered. An action can be considered a time-dependent pattern of a set of joints in motion [8]. A graph offers a more intuitive representation of a skeleton by presenting the bones as edges and the joints as vertices [14]. Given the advantages of GCNNs mentioned previously, numerous approaches for skeleton-based action recognition using this architecture have been proposed. The first GCNN-based solution for action recognition using skeleton data was presented by Yan et al. [6]. They considered both the spatial and temporal dimensions of skeleton joint movements at the modelling stage. This approach is called the Spatiotemporal Graph Convolutional Network (ST-GCN) model. In the ST-GCN model, every joint has a set of edges for the spatial and temporal dimensions independently, as illustrated in Fig. 1. Given a sequence of frames with skeleton joint coordinates, the spatial edges connect each joint with its neighbourhood within a frame, while the temporal edges connect each joint with the joint at the same location in the consecutive frame. In other words, the temporal edge set represents the joint trajectory over time [6]. However, the topology of the graph is not implicitly structured like Euclidean-based data; for instance, most of the nodes have different numbers of neighbours. Therefore, multiple strategies for applying the convolution operation upon skeleton joints have been proposed.

Figure 1: Spatiotemporal graph representation of a skeleton

In their work, Yan et al. [6] presented multiple solutions to perform the convolution operation over the skeleton graph. They first divided the skeleton graph into fixed subsets of nodes (the skeleton joints) they called neighbour sets. Every neighbour set has a central node (the root node) and its adjacent nodes. Subsequently, each neighbour set is partitioned into a fixed number K of subsets, and a numeric label (which we call priority) is assigned to each of them. Formally, each adjacent node u_ti in the neighbour set B(u_tj) of a root node u_tj is mapped to a label l_ti. On the other hand, each filter of the CNN has K subsets of values. Therefore, each subset of values of a filter performs the convolution operation upon the feature vector of its corresponding node. Given that the skeleton data has been obtained using the OpenPose toolbox [9], each feature vector consists of the 2D coordinates of the joint together with its confidence value C. These ideas are illustrated in Fig. 2.

In Fig. 2, two of the neighbour sets are shown with an orange background. The features of each of the nodes (x, y, c) are concatenated into a feature matrix. The criteria used to define the position of each feature vector in the final matrix are determined by the partition strategy utilized. Amidst the skeleton partitioning strategies for label mapping presented in [6], the Spatial Configuration Strategy served as the reference for the techniques proposed in the present study.

Figure 2: Skeleton components

    1.2.1 Spatial Configuration Partitioning Strategy

In this strategy, the partitioning for the label mapping is performed according to the distance of each node in the neighbour set with respect to the centre of gravity cg of the skeleton graph. The cg is defined as the average of the coordinates of all the joints of the skeleton in a single video frame [6]. According to [6], each neighbour set is divided into three subsets (filter size K = 3). Therefore, each kernel has three subsets of values: one for the root node, one for the joints closer to cg, and another one for the joints located farther with respect to cg. As can be seen in Fig. 3, each filter with three subsets of values is applied to the node feature vectors in order to create the output feature map.

Figure 3: Spatial configuration partitioning

In this technique, the filter size K = 3, and the mapping is defined by the following [6]:

$$l_{ti}=\begin{cases}0, & \text{if } r_i = r_r\\ 1, & \text{if } r_i < r_r\\ 2, & \text{if } r_i > r_r\end{cases}$$

where l_ti represents the label map for each joint i in the neighbour set of the root node u_tj, r_r is the average distance from cg to the root node u_tj over each frame, and r_i is the average distance from cg to the ith joint over each frame across all the training set. Once the labelling of each node in the neighbour set has been set, the convolution operation is performed to produce the output feature maps, as shown in Fig. 3.
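
As a concrete reference, below is a minimal single-frame sketch of this label mapping. Averaging the distances over the training set is omitted, and the function and argument names are ours; only the 0/1/2 labelling rule comes from [6].

```python
import numpy as np

def spatial_configuration_labels(joints, root, neighbours):
    """Spatial configuration partitioning for one frame (K = 3).

    joints:     (V, 2) array of 2D joint coordinates
    root:       index of the root node of the neighbour set
    neighbours: indices of the nodes adjacent to the root
    Returns {joint: label} with 0 = root, 1 = closer to cg, 2 = farther.
    """
    cg = joints.mean(axis=0)                      # centre of gravity
    r_root = np.linalg.norm(joints[root] - cg)
    labels = {root: 0}
    for i in neighbours:
        r_i = np.linalg.norm(joints[i] - cg)
        labels[i] = 1 if r_i < r_root else 2
    return labels
```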

    1.3 Learnable Edge Importance Weighting

It is important to note that complex movements can be inferred from a small set of representative 'bright spots' on the joints of the human body [15]. However, not all the joints provide the same quality and quantity of information regarding the movement performed. Therefore, it is intuitive to assign a different level of importance to every joint in the skeleton.

In the ST-GCN framework proposed by Yan et al. [6], the authors added a mask M (or M-mask) to each layer of the GCNN to express the importance of each joint. The applied mask scales the contribution of each joint of the skeleton according to the learned weights of the spatial graph network. According to the authors, the proposed M-mask considerably improves the architecture's performance; therefore, the M-mask is applied to the ST-GCN network throughout their experiments.

    1.4 Our Contribution

This work proposes an improved set of label mapping methods for the ST-GCN framework by introducing three split techniques (full distance split, connection split, and index split) as alternative approaches for the convolution operation. It is based upon the ST-GCN framework proposed by Yan et al. [6]. Our results indicate that all our proposed split strategies outperform the baseline model. Furthermore, the proposed frameworks are more stable during training. Finally, our proposals do not require the additional training parameters of the edge importance weighting applied by the ST-GCN model. This shows that our proposal can provide a more suitable solution for real-time applications focused on ADL recognition systems for indoor environments.

    The contributions are summarized below:

    I: We present an improved set of label mapping methods for the ST-GCN framework by introducing three split techniques (full distance split, connection split, and index split) as an alternative approach for the convolution operation.

II: Instead of the traditional way of extracting information from the skeleton without considering the relations between the joints, we exploit the relationship between the joints during the action execution to provide valuable and accurate information about the action performed.

III: We find that an extensive analysis of the inner skeleton joint information, obtained by partitioning the skeleton graph into as many pieces as possible, results in more accurate data.

IV: We propose split strategies that focus on capturing the patterns in the relationship between the skeleton joints by carefully analysing the partition strategies utilized to perform the movement modelling using the ST-GCN framework.

The rest of the paper is structured as follows: Section 2 presents a state-of-the-art review of previous skeleton graph-based action recognition approaches. The details of the proposed skeleton partition strategies are presented in Section 3. Section 4 discusses the experimental settings we use to obtain the results. The results and discussion are presented in Section 5. Finally, Section 6 concludes the paper.

    2 Related Literature

There has been previous work on AR upon skeleton data. Due to the emergence of low-cost depth cameras, access to skeleton data has become relatively easy [16]. Therefore, there has been an increasing interest in using skeleton representations to recognize human activity in general. For the sake of conciseness, only the most recent and relevant works are mentioned. Zhang et al. [17] combined skeleton data with machine learning methods (such as logistic regression) upon benchmark datasets. They demonstrated that skeleton representations provide better performance in terms of accuracy than other forms of motion representation. In order to model the dependencies between joints and bones, Shi et al. [14] presented a variety of graph networks denominated Directed Acyclic Graphs (DAGs). Later, Cheng et al. [18] presented a shift-CNN-inspired method called Shift-GCN. Their approach aims to reduce the computational complexity of previous ST-GCN-based methods, and their results showed 10× lower computational complexity. However, to the best of our knowledge, no unique partition strategies have been proposed to enhance the performance of AR using the ST-GCN model presented in [6].

    3 Proposed Split Strategies

In this section, we present a new set of techniques to create the label mapping for the nodes in the neighbour sets of the skeleton graph. The techniques are modifications of the previously proposed spatial configuration partitioning presented in [6].

As in the baseline model, a maximum distance of one node with respect to the root node defines the neighbour sets in the skeleton graph. However, every node in the neighbour set is labelled separately in every strategy presented in this section. Therefore, in every proposed approach, the filter size K = 4. For instance, consider a neighbour set consisting only of the root node with a single adjacent node; in this case, the third and fourth subsets of values of the kernel are set to zeros. Each of the split strategies proposed is computed for each frame of a training video sample individually.
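
To make the role of the K = 4 kernel subsets concrete, the following hedged sketch performs one spatial graph convolution in the familiar partitioned form: each label subset gets its own subset of filter values, and the filtered features are aggregated through the corresponding adjacency subset. The tensor shapes, the 1×1 convolutions standing in for the filter subsets, and the all-zero adjacency subsets playing the role of the zero-filled kernel values are our assumptions, not the exact ST-GCN implementation.

```python
import torch
import torch.nn as nn

def partitioned_graph_conv(x, adj_subsets, filters):
    """One spatial graph convolution with K label subsets (K = 4 here).

    x:           (batch, C_in, V) per-joint feature vectors
    adj_subsets: (K, V, V) adjacency split by the label mapping; subsets with
                 no assigned joints are all-zero, playing the same role as the
                 zero-filled kernel subsets described above
    filters:     K 1x1 convolutions, one subset of filter values each
    """
    out = 0
    for a_k, w_k in zip(adj_subsets, filters):
        # apply the k-th subset of filter values, then aggregate along A_k
        out = out + torch.einsum("ncv,vw->ncw", w_k(x), a_k)
    return out

# Example: V = 18 joints, 3-channel (x, y, c) features, K = 4 subsets.
filters = nn.ModuleList(nn.Conv1d(3, 64, kernel_size=1) for _ in range(4))
x = torch.randn(8, 3, 18)
adj = torch.zeros(4, 18, 18)                  # filled from the label maps
y = partitioned_graph_conv(x, adj, filters)   # -> (8, 64, 18)
```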

Fig. 4 illustrates our proposed partitioning strategy. As can be seen, a different label mapping is assigned to each node in the neighbour set. Therefore, a different subset of values of each filter is applied to each joint feature vector. However, the bottleneck is defining each node's order (split criterion) in the neighbour set. We propose three different approaches to address this issue: full distance split, connection split, and index split. These proposals are shown in Fig. 5 and will be explained in the following sections.

Figure 4: Proposed partition strategy

Figure 5: Proposed split techniques: (i) full distance split, (ii) connection split, (iii) index split

    3.1 Full Distance Split

In this method, the partitioning for the label mapping is performed according to the distance of each node in the neighbour set with respect to cg. As can be noticed, this solution is similar to the spatial configuration partitioning approach previously explained. However, here we consider the distance of every node in the neighbour set; thus, this solution is named the full distance split technique. Therefore, depending on the neighbour set in the skeleton, each kernel can have up to four subsets of values. Fig. 5i shows that each filter with four subsets of values is applied to the node feature vectors, where the order is defined by their relative distances with respect to cg, to create the output feature map. To explain this strategy, we define the set F as the Euclidean distances of the ith adjacent nodes u_ti (of the root node u_tj) with respect to cg, sorted in ascending order:

$$F = \{f_1, f_2, \ldots, f_N \mid f_1 \le f_2 \le \cdots \le f_N\}$$

where N is the number of adjacent nodes to the root node u_tj. For instance, f_1 and f_N have the minimum and maximum values in F, respectively. In this strategy, the label mapping is given by:

$$l_{ti}=\begin{cases}0, & \text{if } x_i = x_r\\ k, & \text{if } x_i = f_k\end{cases}$$

where l_ti represents the label map for each joint i in the neighbour set of the root node u_tj, x_i is the Euclidean distance from the node u_ti to cg, and x_r is the Euclidean distance from the root node u_tj to cg.
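
A minimal sketch of the full distance split labelling for one frame could look as follows; the function and argument names are ours, and ties in distance are left to the sort order.

```python
import numpy as np

def full_distance_split_labels(joints, root, neighbours, cg):
    """Full distance split: label 0 for the root node, then 1..N following
    the ascending distances f_1 <= ... <= f_N of the neighbours to cg.

    joints: (V, 2) joint coordinates; cg: (2,) centre of gravity.
    """
    labels = {root: 0}
    ranked = sorted(neighbours, key=lambda i: np.linalg.norm(joints[i] - cg))
    for k, i in enumerate(ranked, start=1):
        labels[i] = k
    return labels
```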

    3.2 Connection Split

In this approach, the number of adjacent joints of each joint (i.e., the joint degree) represents the split criterion in the neighbour set. Thus, the more connections a joint has, the higher the priority assigned to it.

Fig. 5ii shows that the joint with label A represents the root node, and B is the joint with the highest priority since it has three adjacent joints connected. We observe that both the C and D joints have two connections; hence, the priority for these nodes is set randomly. Once the joint priorities have been set, the convolution operation is performed with a subset of values of each filter for every joint in the neighbour set independently.

To define the label mapping in this approach, we first define the neighbour set of a root node u_tj and its N adjacent nodes as B(u_tj) [6], and we also define the degree matrix of B(u_tj) as D ∈ R^{N×N}, where the value at the d_ii position of D contains the degree value d(u_ti) of each of the adjacent nodes of the root node u_tj. Similarly, we define a set C as the degree values d(u_ti) of each of the N adjacent nodes of the root node, sorted in descending order:

$$C = \{c_1, c_2, \ldots, c_N \mid c_1 \ge c_2 \ge \cdots \ge c_N\}$$

For instance, c_1 and c_N have the maximum and minimum values of C, respectively. Finally, the label mapping is thus defined as:

$$l_{ti}=\begin{cases}0, & \text{if } d(u_{ti}) = d_r\\ k, & \text{if } d(u_{ti}) = c_k\end{cases}$$

where l_ti represents the label map for each adjacent joint i to the root node u_tj in the neighbour set, and d_r is the degree corresponding to the root node u_tj.
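
The corresponding sketch for the connection split is shown below. The adjacency matrix is assumed to be the binary skeleton connectivity, and ties in degree fall back to an arbitrary order, mirroring the random priority described above.

```python
import numpy as np

def connection_split_labels(adjacency, root, neighbours):
    """Connection split: label 0 for the root node, then 1..N following the
    descending degrees c_1 >= ... >= c_N of the neighbours.

    adjacency: (V, V) binary skeleton connectivity matrix.
    """
    degree = adjacency.sum(axis=1)                # d(u_ti) for every joint
    labels = {root: 0}
    ranked = sorted(neighbours, key=lambda i: -degree[i])  # ties: arbitrary
    for k, i in enumerate(ranked, start=1):
        labels[i] = k
    return labels
```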

    3.3 Index Split

The skeleton data utilized for our study is gathered using the OpenPose [9] library. According to the library documentation, the output file with the skeleton information consists of key points. The output skeleton provided by the OpenPose toolbox is shown in Fig. 6.

Figure 6: OpenPose output key points

In this approach, the value of the index of each key point defines the priority criterion within the neighbour set. An illustrative example is shown in Fig. 5iii. Joint B is assigned the highest priority since it has a key point index value of 1, and C is the joint with the second priority since it has a key point index value of 3. Finally, D is the joint with the least priority since it has a key point index value of 8.

Therefore, we define the set P as the indexes of the key points ind(u_ti) of the ith adjacent nodes u_ti (of the root node u_tj), sorted in ascending order:

$$P = \{p_1, p_2, \ldots, p_N \mid p_1 \le p_2 \le \cdots \le p_N\}$$

where N is the number of adjacent nodes to the root node u_tj. For instance, p_1 and p_N have the minimum and maximum values of P, respectively. The label mapping is therefore defined as:

$$l_{ti}=\begin{cases}0, & \text{if } ind(u_{ti}) = in_r\\ k, & \text{if } ind(u_{ti}) = p_k\end{cases}$$

where l_ti represents the label map for each joint i in the neighbour set of the root node u_tj, and in_r is the index of the key point corresponding to the root node u_tj.
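
Because the key point index already identifies each node in the OpenPose output, the index split reduces to sorting the neighbour indices. The sketch below assumes the array index of each joint equals its OpenPose key point index.

```python
def index_split_labels(root, neighbours):
    """Index split: label 0 for the root node, then 1..N following the
    ascending key point indexes p_1 <= ... <= p_N of the neighbours."""
    labels = {root: 0}
    for k, i in enumerate(sorted(neighbours), start=1):
        labels[i] = k
    return labels
```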

    4 Experiments

    4.1 Datasets

To evaluate the performance of our proposed partitioning techniques, we train our models on two benchmark datasets: the NTU-RGB+D [19] and the Kinetics [20] datasets. These two datasets were considered in order to provide a valid comparison with the original ST-GCN framework.

    4.1.1 NTU-RGB+D

To date, the NTU-RGB+D is known to be the most extensive dataset with 3D joint annotations for human AR tasks [6]. The samples have been recorded using the Microsoft Kinect V2 camera. In order to take the most advantage of the chosen camera device, each action sample consists of a depth map modality, 3D joint information, RGB frames, and infrared sequences. The information provided by this dataset consists of the three-dimensional locations of the 25 main joints of the human body.

In their study, Shahroudy et al. [19] proposed two evaluation criteria for the NTU-RGB+D dataset: the Cross-Subject (X-sub) and the Cross-View (X-view) evaluations. In the first approach, the train/test split is based upon the groups of subjects performing the actions; the data corresponding to 20 participants is used for training and the remaining samples for testing. On the other hand, the X-view evaluation approach considers the camera view as the criterion for the train/test split: the data collected by camera 1 is used for testing and the data collected by the other two cameras is used for training.

The NTU-RGB+D dataset provides a total of 56,880 action clips covering 60 different actions classified into three major groups: daily actions, health-related actions, and mutual actions. Forty participants performed the action samples. Each sample has been captured with 3 different cameras simultaneously, located at the same height but at different angles. Later, this dataset was extended to twice its size by adding 60 more classes and another 57,600 video samples [19]. This extended version is called NTU RGB+D 120 (the 120-class NTU RGB+D dataset). By considering only the 3D skeleton modality of the NTU-RGB+D dataset, the storage is reduced from 136 GB to 5.8 GB; therefore, the computational load is reduced considerably.

    4.1.2 Kinetics

While the NTU-RGB+D dataset is widely known to be the largest in-house captured AR dataset, the DeepMind Kinetics human action dataset is the largest set with unconstrained AR samples.

The 306,245 videos provided by the Kinetics dataset are obtained from YouTube. Each video sample is supplied without previous editing, and therefore has variable resolution and frame rate, and is classified into 400 different action classes.

Due to the vast number of classes, one video sample can be classified into more than one cluster. For instance, a video sample of a person texting while driving a car can be classified with either the "texting" label or the "driving a car" label. Therefore, the authors in [20] suggest considering a top-5 performance evaluation rather than a top-1 approach. That is, a labelled sample is considered a true positive if its ground truth label appears within the 5 classes with the highest scores predicted by the model (top-5), in contrast to considering only the predicted class with the highest score (top-1).
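
For reference, a top-k accuracy computation of this kind can be sketched as follows (top-1 and top-5 are the k = 1 and k = 5 cases); the function name and tensor shapes are our own.

```python
import torch

def top_k_accuracy(scores, labels, k=5):
    """Share of samples whose ground-truth class is among the k classes with
    the highest predicted scores (top-1 for k=1, top-5 for k=5).

    scores: (N, num_classes) model outputs; labels: (N,) ground-truth ids.
    """
    topk = scores.topk(k, dim=1).indices          # (N, k) best class ids
    hits = (topk == labels.unsqueeze(1)).any(dim=1)
    return hits.float().mean().item()
```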

The Kinetics dataset provides the videos in raw RGB format; therefore, the skeleton information needs to be extracted from the sample videos. Accordingly, we use the dataset containing the Kinetics-skeleton information provided by Yan et al. [6] for our experiments.

    4.2 Model Implementation

The experiment process comprises three stages: Data Splitting, ST-GCN Model Setup, and Model Training. These stages are explained as follows:

    4.2.1 Data Splitting

The datasets are divided into two subsets: the training and the validation sets. In our experiments, we consider a 3:1 ratio for the training and validation split, respectively.
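
A minimal sketch of such a split, assuming a random shuffle with a fixed seed (neither of which is specified above), could be:

```python
import torch
from torch.utils.data import Dataset, random_split

def split_3_to_1(dataset: Dataset, seed: int = 0):
    """Split a dataset into training/validation subsets at a 3:1 ratio."""
    n_train = int(0.75 * len(dataset))            # 3 parts training, 1 validation
    generator = torch.Generator().manual_seed(seed)
    return random_split(dataset, [n_train, len(dataset) - n_train],
                        generator=generator)
```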

    4.2.2 ST-GCN Model Setup

The ST-GCN model uses the baseline architecture: a stack of 9 layers arranged as 3 blocks of 3 layers each. The layers of the first block have 64 output channels each, while the layers of the second and third blocks have 128 and 256 output channels, respectively. Finally, the 256-dimensional feature vector output by the last layer is fed into a softmax classifier to predict the performed action [6].
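
The channel layout can be sketched as follows. Note that plain 1×1 convolutions stand in for the actual ST-GCN units (which interleave spatial graph convolutions and temporal convolutions), so only the 3×64, 3×128, 3×256 progression and the final classifier are illustrated; the class name is our own.

```python
import torch
import torch.nn as nn

class STGCNStack(nn.Module):
    """Baseline channel layout: 3 blocks of 3 layers (64 -> 128 -> 256)."""

    def __init__(self, in_channels=3, num_classes=60):
        super().__init__()
        layers, prev = [], in_channels
        for out_ch in [64, 64, 64, 128, 128, 128, 256, 256, 256]:
            layers += [nn.Conv2d(prev, out_ch, kernel_size=1), nn.ReLU()]
            prev = out_ch
        self.stack = nn.Sequential(*layers)
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, x):                     # x: (batch, C, frames, joints)
        x = self.stack(x)
        x = x.mean(dim=(2, 3))                # pool over time and joints
        return self.classifier(x)             # softmax is applied in the loss
```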

    4.2.3 Model Training

The ST-GCN model is implemented on the PyTorch framework for deep learning modelling [21]. The models are trained using stochastic gradient descent with learning rate decay as the optimization algorithm, with an initial learning rate of 0.1. The number of epochs and the decay schedule vary depending on the dataset used. For the NTU-RGB+D dataset, we train the models for 80 epochs, and the learning rate decays by a factor of 0.1 at the 10th and the 50th epochs. For the Kinetics dataset, we train the models for 50 epochs, and the learning rate decays by a factor of 0.1 every 10th epoch. Similarly, the batch size also varies according to the dataset utilized: for the NTU-RGB+D dataset, the batch sizes for training and testing are 32 and 64, respectively; for the Kinetics dataset, they are 128 and 256, respectively. To avoid overfitting, a weight decay value of 0.0001 has been considered. Additionally, a dropout value of 0.5 has been set for the NTU-RGB+D dataset experiments.
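
A hedged sketch of this optimization setup in PyTorch is given below; the momentum value is not reported above and is therefore omitted, and a plain linear layer stands in for the network.

```python
import torch
import torch.nn as nn

model = nn.Linear(256, 60)                    # stand-in for the ST-GCN network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            weight_decay=1e-4)  # momentum not reported; omitted

# NTU-RGB+D schedule: 80 epochs, decay by 0.1 at the 10th and 50th epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[10, 50], gamma=0.1)
# Kinetics schedule (50 epochs, decay every 10th epoch) would instead use:
# torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

for epoch in range(80):
    # ... one training pass over the 32-sample batches goes here ...
    scheduler.step()                          # advance the decay schedule
```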

To provide a valid comparison with the baseline model, an M-mask implementation is considered in the experiments presented in this study.

    5 Experimental Results and Discussion

This section discusses the performance of our proposals against the benchmark ST-GCN models based on [6] using the spatial configuration partition approach. This strategy provides the best performance in terms of accuracy in [6]; therefore, it has been chosen as the baseline to prove the effectiveness of the partition strategies introduced in this study.

    5.1 Results Evaluation on NTU-RGB+D

Note that we aim to recognize ADL in an indoor environment. Therefore, the NTU-RGB+D dataset serves as a more accurate reference than the Kinetics dataset since it was recorded under the same kind of conditions; hence, we focus on the results obtained with this dataset. We use the 3D joint information provided in [19] in our experiments. Tab. 1 shows the performance comparison of our proposals and the state-of-the-art ST-GCN framework. It can be observed that all our partition strategies outperform the spatial configuration strategy of the ST-GCN. For the X-sub benchmark, the connection split achieves the highest performance with 82.6% accuracy, more than 1% higher than the ST-GCN performance. On the other hand, the index split outperforms the rest of the strategies with 90.5% accuracy on the X-view benchmark, more than 2% higher than the ST-GCN performance.

    Table 1: NTU-RGB+D performance

Figs. 7–10 show the training behaviour of the models using the spatial configuration partitioning of the ST-GCN framework and the proposed connection split on both the X-sub and X-view benchmarks without the M-mask implementation. The blue and orange plots show the performance of the models on the training and validation sets, respectively. The training score plots show that the learning performance of the proposed connection split stabilizes while increasing over time compared with the ST-GCN outcome. Our proposals provide a considerable advantage over the benchmark framework because they demonstrate that the M-mask is not required to yield satisfactory performance. The omission of the M-mask results in a reduction of computational complexity. Hence, our proposal can provide a more suitable solution for real-time applications. Moreover, given its superiority in accuracy and time consumption, our proposed method offers a practical solution for an ADL recognition system.

Figure 7: Spatial C.P. X-sub training scores

Figure 8: Connection split X-sub training scores

Figure 9: Spatial C.P. X-view training scores

Figure 10: Connection split X-view training scores

    5.2 Performance on the Kinetics Dataset

The recognition performance on the Kinetics dataset has been evaluated using the top-1 and top-5 criteria. We compare the performance of our proposed techniques with the ST-GCN framework, as shown in Tab. 2.

Table 2: Performance on the Kinetics dataset

As the results indicate, all our partition strategies outperform the spatial configuration strategy of the ST-GCN under the top-5 criterion. We observe that a 54.5% accuracy is achieved using the full distance split approach, which is 2% higher than the performance obtained with the baseline model. Under the top-1 evaluation criterion, our proposals achieve performance comparable to the ST-GCN model; the highest performance is a 31.7% accuracy using the full distance split approach, a margin of about 1% above the result obtained with the ST-GCN model.

Therefore, we can conclude that the performance metrics presented in Tab. 2 validate the superiority of the proposed full distance split method on the Kinetics dataset.

    6 Conclusion

In this work, we propose an improved set of label mapping methods for the ST-GCN framework (full distance split, connection split, and index split) as alternative approaches for the convolution operation. Our results indicate that all our split techniques outperform the previous partitioning strategies for the ST-GCN framework. Moreover, they prove to be more stable during training without using the additional training parameter of the edge importance weighting applied by the baseline model. Therefore, the results obtained with our current split proposals can provide a more suitable solution than the baseline strategies for the ST-GCN framework for real-time applications focused on ADL recognition systems in indoor environments.

A significant computational effort is involved in using heterogeneous methods to calculate the distances between the joints and the cg for each frame of the video sample in the full distance split and spatial configuration partitioning. It would be computationally less demanding to use a homogeneous technique to calculate the distance between the joints and the cg for both splitting strategies. Furthermore, while our current methodology considers only the immediate neighbours of the root node to perform the skeleton partitioning, additional flexibility could be achieved by considering greater distances from the root node, increasing the number of joints per neighbour set. This may give room to cover larger body sections (such as limbs), making it possible to find more complex relationships between the joints during the execution of the actions.

Acknowledgement: The authors acknowledge the support of King Abdulaziz City for Science and Technology.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
