
    Cascade Human Activity Recognition Based on Simple Computations Incorporating Appropriate Prior Knowledge

Computers, Materials & Continua, 2023, Issue 10

Jianguo Wang, Kuan Zhang*, Yuesheng Zhao, Xiaoling Wang and Muhammad Shamrooz Aslam

1 School of Biomedical Engineering, Capital Medical University, Beijing 100054, China

2 School of Automation, Guangxi University of Science and Technology, Liuzhou 545006, China

ABSTRACT The purpose of Human Activities Recognition (HAR) is to recognize human activities with sensors such as accelerometers and gyroscopes. The usual research strategy is to obtain better HAR results by finding more efficient eigenvalues and classification algorithms. In this paper, we experimentally validate the HAR process and its various algorithms independently. On this basis, it is further proposed that, in addition to the necessary eigenvalues and intelligent algorithms, correct prior knowledge is even more critical. The prior knowledge mentioned here mainly refers to the physical understanding of the analyzed object, the sampling process, the sampling data, the HAR algorithm, etc. Thus, a solution is presented under the guidance of correct prior knowledge, using Back-Propagation neural networks (BP networks) and simple Convolutional Neural Networks (CNN). The results show that HAR can be achieved with 90%–100% accuracy. Further analysis shows that intelligent algorithms for pattern recognition and classification problems, typically represented by HAR, require correct prior knowledge to work effectively.

KEYWORDS Human activities recognition; prior knowledge; physical understanding; sensors; HAR algorithms

    1 Introduction

In recent years, Human Activities Recognition (HAR) has received tremendous attention from scholars due to its widespread applications, such as rehabilitation training and competitive sports. The purpose of HAR is to unambiguously identify human poses and behaviors using strap-down sampled data from triaxial accelerometers, triaxial gyroscopes, and other sensors.

Currently, HAR as discussed in the academic community mainly refers to the recognition of human activities including sitting, standing, lying, walking, running, going upstairs and going downstairs. The task of HAR is to identify each of these types of human activity and conclude what type of activity the human body is currently performing. These body activities are usually measured by sensors such as triaxial accelerometers and triaxial gyroscopes, based on which the recognition results are given by an AI algorithm.

The processes of human activity involve a wide range of complex movements such as triaxial acceleration, deceleration, bending, lifting, lowering and twisting of the upper limbs, chest, waist, hips, lower limbs and other parts of the human body. Correspondingly, measuring devices may be strapped down and fixed to different parts of the human body, or may be placed randomly in a pocket on the surface of the body. A considerable number of methods have been employed by scholars to analyze and utilize measured samples of human activity.

Li et al. [1] proposed and implemented a detection algorithm for the user's behavior based on the mobile device. After improving the directional independence and the stride handling, the proposed algorithm enhances the adaptability of the HAR algorithm based on the frequency-domain and time-domain features of the accelerometer samples. A recognition accuracy of 95.13% was achieved when analyzing several sets of motion samples.

Based on accelerometers and neural networks, Zhang et al. [2] proposed a HAR method. Human activities including walking, sitting, lying, standing and falling were classified with 100% accuracy.

The study conducted by Liu et al. [3] was based on the data analysis of the triaxial accelerometer and gyroscope of smartphones in order to extract the eigenvalue vector of human activities. It also selects four typical statistical methods to create HAR models separately. Model decisions are used to find the optimal HAR model, which achieves an average recognition rate of 92% across six activities, including standing, sitting, going upstairs and going downstairs.

Zhou et al. proposed a multi-sensor-based HAR system in [4]. An algorithm is designed to identify eight common human activities using two levels of classification based on a decision tree. The identification rate averaged 93.12 percent.

Based on Coordinate Transformation and Principal Component Analysis (CT-PCA), Chen et al. [5] developed a robust HAR system. The Online Support Vector Machine (OSVM) achieved a recognition accuracy of 88.74% under variations of orientation, placement, and subject.

Using smartphones in different positions, Yang et al. [6] studied HAR with smartphones and proposed Parameter Adjustment Corresponding to Smartphone Position (PACP), a novel position-independent method that improves HAR performance. PACP achieves significantly higher accuracy than previous methods, with over 91 percent accuracy.

Bandar et al. [7] developed a CNN model aimed at an effective smartphone-based HAR system. Two public datasets collected by smartphones, University of California, Irvine (UCI) and Wireless Sensor Data Mining (WISDM), were used to assess the performance of the proposed method. Its performance is evaluated by the F1-score, which reaches 97.73% and 94.47% on the UCI and WISDM datasets, respectively.

Andrey [8] developed an approach based on a user-independent deep learning algorithm for online HAR. It is shown that the proposed algorithm exhibits high performance at low computational cost and does not require manual selection of feature values. The recognition rate of the proposed algorithm is 97.62% and 93.32% on the UCI and WISDM datasets, respectively.

Bhat et al. [9] presented the wearable HAR system (w-HAR) and a HAR framework, with w-HAR containing labeled data from seven activities performed by 22 volunteers. This framework achieves 95 percent accuracy, and the online system can improve HAR accuracy by up to 40 percent.

Ronao et al. [10] used smartphone sensors to exploit the inherent features of human activities. A deep CNN is proposed to provide an approach to extract robust eigenvalues automatically and adaptively from one-dimensional (1D) time-series raw data. Based on a benchmark dataset collected from 30 volunteers, the proposed CNN outperforms other HAR techniques, exhibiting a combined performance of 94.79% on the raw dataset. Since HAR processes 1D time-series data, the CNNs used in HAR are mainly 1D-CNNs rather than the 2D-CNNs commonly used in image recognition.

Based on the above studies, it can be seen that the accelerometer used in HAR is usually the triaxial accelerometer in smartphones. The algorithms used in HAR include Decision Trees (DT), Naive Bayesian Networks, Ada-Boost, Principal Component Analysis (PCA), Support Vector Machines (SVM), K-Nearest Neighbor (KNN), Convolutional Neural Networks (CNN) and so on. More and more scholars have adopted deep learning as the main HAR algorithm, and an increasing number of innovative ideas are being introduced, as in [11–25].

Therefore, many HAR researchers are trying to find better eigenvalues and more efficient algorithms to implement HAR. Currently, the dominant HAR research strategy is to solve the complex multi-class classification task of human activities in a single step.

In this paper, we experimentally validate the process of HAR and the various algorithms independently and discuss the key role of prior knowledge step by step. Theoretical analysis and experimental validation show that the single-step multi-class classification task for complex human activities is difficult to implement and difficult to understand, while the solution of the proposed cascade structure model is clear to understand and easy to implement. The essential difference between the two HAR strategies is whether the HAR is based on the correct prior knowledge for the intelligent algorithm. Prior knowledge, which involves a physical understanding of human activity and the HAR algorithm, plays an important role in HAR. Our study shows that the recognition of nine daily activities from each other can be made clear and simple. The main reason for this effect is that the correct prior knowledge is incorporated into the proposed solution, which simplifies and improves HAR.

Section 2 of this paper presents the background of our confirmatory experiments. The results of our confirmatory experiments are presented in Section 3 and discussed in Section 4. Finally, the conclusions are drawn in Section 5.

    2 Confirmatory Experiments

In this work, the HAR process and various HAR algorithms are tested to refresh our understanding of current developments in HAR. First, some measurement devices are used to obtain raw HAR data in the HAR experiment.

2.1 Measurement Devices, HAR Experiments, and Raw Sampling Data

With respect to measuring equipment, specialized devices were designed and produced. Inside each device are triaxial accelerometers, gyroscopes and magnetometers. As shown in Fig. 1, two such devices were fitted to the lateral sides of the volunteer's ankles.

The activities performed by the volunteers included sitting, standing, lying on the right and left sides, supine and prone positions, going upstairs and going downstairs. The raw sampling data from these HAR devices were then processed using a variety of intelligent algorithms.

The experiment was also repeated using three smartphones: two smartphones were attached to the volunteers' ankles, and a third was attached to the forehead. Furthermore, we used additional public datasets, including UCI [26] and WISDM [27].

The UCI dataset contains inertial measurements generated from 30 volunteers aged between 19 and 48 while performing daily activities: standing, sitting, lying, walking, going upstairs and going downstairs. Smartphones (Samsung Galaxy SII) fixed on the volunteers' waists were used to collect triaxial acceleration and angular velocity signals, which were produced by the triaxial accelerometer and the triaxial gyroscope built into the smartphones. The sampling frequency of these signals is 50 Hz.

The WISDM dataset contains triaxial acceleration data from 29 volunteers performing six daily activities: sitting, standing, walking, jogging, walking upstairs and walking downstairs. While collecting the data, the volunteers kept their smartphones in their front trouser pockets. The triaxial acceleration signal of the smartphone is collected at a sampling frequency of 20 Hz.

2.2 Data Pre-Processing: Bad Data Identification and Data Marking

Raw sampling data can be used directly once a product is in its final form, but not during the research phase. Identifying bad data and labeling the data with the corresponding human activity is crucial in research.

In practice, measurement experiments are never perfect, and thus bad data are frequently encountered. Causes of bad data include device power failure, device slippage, slippage of the person, etc. For example, volunteers did not rigorously complete all activities in a standard manner, and the equipment did not always work smoothly and properly. In addition, various noises and disturbances are constantly present during the experiments.

In our practice, about 10% of the raw sampled data can be visually identified as bad data and can be manually removed, as shown in Fig. 2. Further, it is reasonable to assume that the fraction of unseen and unrecognized bad data is not less than 10%, since the fraction of visible and identifiable bad data is approximately 10%.

Figure 2: Examples of visible bad data
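The bad data above are removed by hand. As a minimal sketch of how such screening could be partially automated, the Python snippet below flags windows whose acceleration magnitude is physically implausible or essentially constant (e.g., a detached or powered-off sensor); the thresholds and function name are illustrative assumptions, not the procedure used in this work.

```python
import numpy as np

def flag_bad_windows(acc_xyz, win=30, mag_low=0.2, mag_high=6.0, flat_std=1e-3):
    """Flag windows of triaxial acceleration (in g) that look implausible.

    acc_xyz : (N, 3) array. A window is flagged when its mean magnitude is
    outside a plausible range (e.g., sensor saturated or detached) or when the
    signal is essentially flat (e.g., device frozen or powered off).
    All thresholds here are illustrative assumptions only.
    """
    n_win = len(acc_xyz) // win
    bad = np.zeros(n_win, dtype=bool)
    for i in range(n_win):
        seg = np.asarray(acc_xyz[i * win:(i + 1) * win], dtype=float)
        mag = np.linalg.norm(seg, axis=1)          # vector sum of the three axes
        bad[i] = not (mag_low < mag.mean() < mag_high) or mag.std() < flat_std
    return bad
```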

After removing the visible bad data, it is necessary to label the remaining 90% of the raw sampled data. This means annotating which data are samples of walking and which are samples of sitting, standing, running, going upstairs, going downstairs, etc.

During the data annotation phase, any error decreases the HAR accuracy. It is reasonable to conjecture that the maximum achievable accuracy of HAR is close to 80 percent or even lower for raw data, since the fraction of valid data in the total raw data is close to 80 percent.
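The 80 percent figure follows from simple arithmetic on the two fractions stated above; a minimal sketch of this bound:

```python
# Rough bound implied above: if ~10% of the raw data is visibly bad and an
# assumed further ~10% is bad but unrecognized, then at most ~80% of the raw
# samples are both valid and correctly labeled.
visible_bad = 0.10
assumed_hidden_bad = 0.10
max_usable_fraction = 1.0 - visible_bad - assumed_hidden_bad
print(f"upper bound on usable raw data: {max_usable_fraction:.0%}")   # 80%
```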

    The subsequent calculations in this paper use the remaining data after the bad data has been manually removed.

    2.3 Validation of the HAR Algorithm

A number of HAR algorithms were analyzed and tested in our study. As described in the references, these algorithms include Naive Bayesian Networks (NBN), Decision Trees (DT), Convolutional Neural Networks (CNN), Principal Component Analysis (PCA), Support Vector Machines (SVM), K-Nearest Neighbor algorithms (KNN) and Ada-Boost.

After a comprehensive comparison, BP neural networks and 1D-CNNs are used for HAR in this paper. The pseudo-code of cascade HAR for the 1D-CNNs and BP neural networks used in this paper is shown in Fig. 3, and the algorithm parameters for the 1D-CNNs and BP neural networks are shown in Tables 1 and 2. Compared to other intelligent algorithms such as Decision Trees, we can achieve HAR accuracy of 90.5%–100% using simple BP neural networks or convolutional neural networks. The main difference between our solution and other HAR solutions is that it adopts a cascade identification structure, which is the result of a deep understanding of human activity.

    3 Validation of HAR Algorithms

Derived from the motion analysis of the HAR process, either "the vector sum of triaxial acceleration" or "the X-axis acceleration" should be used according to the particular HAR need. That is, the acceleration samples in the vertical direction of the human torso and in the forward direction of the human torso are the key data for HAR. This is an important starting point of the prior knowledge in this paper.
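The "vector sum of triaxial acceleration" referred to here is the Euclidean magnitude of the three axis readings, which is independent of device orientation. A minimal computation, assuming the samples are stored as an (N, 3) array:

```python
import numpy as np

def acceleration_magnitude(acc_xyz):
    """Vector sum (Euclidean norm) of triaxial acceleration samples.

    acc_xyz : (N, 3) array with columns a_x, a_y, a_z. Returns an (N,) array;
    for a body at rest the magnitude stays close to 1 g regardless of how the
    device is oriented, which is why it is a robust HAR input.
    """
    return np.linalg.norm(np.asarray(acc_xyz, dtype=float), axis=1)
```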

Table 1: Recognition algorithms and their parameters used in this paper

Figure 3: Pseudo-code of cascade HAR with CNNs or BP neural networks in this paper

Based on the correct prior knowledge, we transform the complex single-step multi-class classification task into a set of hierarchical binary classification tasks, as shown in Fig. 4, and adopt the following Steps 1–5 to validate the HAR of BP neural networks and Step 6 to validate the HAR of CNNs, respectively.

Figure 4: Binary classification cascade structure model for HAR
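As an illustration of how the cascade of Fig. 4 turns a chain of binary decisions into a final activity label, the sketch below wires five generic binary classifiers together. The classifier objects, the 0/1 label convention and the function name are illustrative placeholders rather than the exact configuration used in this paper.

```python
def cascade_predict(features, clf_dyn, clf_run, clf_stairs, clf_updown, clf_lying):
    """Chain of binary decisions mirroring Steps 1-5 of the cascade model.

    Each clf_* is any object with a predict(...) method that returns 1 for
    target class 1 of its step and 0 for target class 2 (an assumed convention).
    """
    x = [features]
    if clf_dyn.predict(x)[0] == 1:                 # Step 1: dynamic vs. static
        if clf_run.predict(x)[0] == 0:             # Step 2: running vs. the rest
            return "running"
        if clf_stairs.predict(x)[0] == 1:          # Step 3: stairs vs. walking
            if clf_updown.predict(x)[0] == 1:      # Step 4: upstairs vs. downstairs
                return "going upstairs"
            return "going downstairs"
        return "walking"
    if clf_lying.predict(x)[0] == 0:               # Step 5: lying vs. sitting/standing
        return "lying"
    return "sitting/standing"
```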

In order to verify the HAR effect of the Back-Propagation (BP) neural networks through Steps 1–5, the samples from the left accelerometer are separated into 466 groups of 30 samples each, corresponding to 3 s of data at a sampling frequency of 10 Hz. The eigenvalues of each group are then calculated, including the maximum, minimum, mean, standard deviation, variance, skewness, kurtosis, and range of the data in each group. These eigenvalues are separated into training, validation, and test sets for the two-layer BP neural network, which is then trained, validated and tested on them. As for the convolutional neural networks in Step 6, the sampled data only need to be labeled with the corresponding activities; the eigenvalue calculation for the 466 data groups does not need to be performed as in Steps 1–5.
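The eight per-group statistics described above are straightforward to compute. A minimal sketch, assuming the acceleration signal (e.g., the vector-sum magnitude) is a 1-D array that is cut into non-overlapping 30-sample windows:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def window_eigenvalues(signal, win=30):
    """Cut a 1-D signal into non-overlapping windows of `win` samples
    (30 samples = 3 s at 10 Hz) and compute the eight statistics used as
    eigenvalues: max, min, mean, standard deviation, variance, skewness,
    kurtosis and range."""
    n_win = len(signal) // win
    feats = []
    for i in range(n_win):
        seg = np.asarray(signal[i * win:(i + 1) * win], dtype=float)
        feats.append([
            seg.max(), seg.min(), seg.mean(), seg.std(), seg.var(),
            skew(seg), kurtosis(seg), seg.max() - seg.min(),
        ])
    return np.array(feats)          # shape: (n_win, 8)
```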

Step 1 distinguishes between walking/running and sitting/lying, which means identifying the dynamic activities of a human body from its static attitudes. The above sets of eigenvalues are sent to the BP neural networks for network training and model validation. In these datasets, dynamic data are placed in target class 1, and static data are placed in target class 2. A comparison of the HAR accuracy on the training dataset, validation dataset, test dataset, and all data is shown in Fig. 5. All accuracy values are 100%. This suggests that the dynamic activities of the subjects can be clearly identified from their static attitudes.

Figure 5: Step 1, binary classification task of static body attitudes and dynamic activities using a BP neural network
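A two-layer BP network for such a binary step can be sketched with an off-the-shelf multilayer perceptron; the hidden-layer size, split fractions and seed below are illustrative assumptions rather than the exact settings listed in Table 1.

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def train_binary_step(features, labels, hidden=10, seed=0):
    """Train one binary BP-style classifier of the cascade (e.g., Step 1,
    with dynamic windows labeled 1 and static windows labeled 0).

    features : (n_windows, 8) eigenvalue matrix; labels : (n_windows,) of 0/1.
    """
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.3, stratify=labels, random_state=seed)
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=2000,
                        random_state=seed)
    clf.fit(X_tr, y_tr)
    print("train accuracy:", clf.score(X_tr, y_tr))
    print("test accuracy: ", clf.score(X_te, y_te))
    return clf
```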

Step 2 distinguishes between walking/going upstairs/going downstairs and running. The above eigenvalue sets are sent to the BP neural networks for network training and model validation. In these datasets, the data of walking/going upstairs/going downstairs are placed in target class 1, and the running data are placed in target class 2. A comparison of the HAR accuracy on the training dataset, validation dataset, test dataset, and all data is shown in Fig. 6. All accuracy values are 100%. This shows that the activities of walking/going upstairs/going downstairs can be accurately distinguished from the activity of running.

Figure 6: Step 2, the binary classification task of running and walking/going upstairs/going downstairs using a BP neural network

In Step 2, the total numbers of data for target class 1 and class 2 are significantly different: the data for the three activities walking/going upstairs/going downstairs are placed in target class 1, while only the running data are placed in target class 2. However, this class imbalance does not noticeably affect the identification rate.

Step 3 distinguishes between walking and going upstairs/downstairs. The above eigenvalue sets are sent to the BP neural networks for network training and model validation. In these datasets, the data of going upstairs/going downstairs are placed in target class 1, and the walking data are placed in target class 2. A comparison of the HAR accuracy on the training dataset, validation dataset, test dataset, and all data is shown in Fig. 7. Accuracy values of 91.9%, 90.5%, 90.5%, and 91.5% are obtained. Given these high values, the going upstairs and downstairs activities of the subjects can be clearly distinguished from their walking activities.

Step 4 distinguishes between going upstairs and going downstairs. The above eigenvalue sets are sent to the BP neural networks for network training and model validation. Among these datasets, the going upstairs data are placed in target class 1, and the going downstairs data are placed in target class 2. A comparison of the HAR accuracy on the training dataset, validation dataset, test dataset, and all data is shown in Fig. 8. Since all results are 100% correct, there is no error in distinguishing between the upstairs and downstairs activities of the subjects.

Figure 7: Step 3, the binary classification task of walking and going upstairs/going downstairs using a BP neural network

Figure 8: Step 4, the binary classification task of going upstairs and downstairs using a BP neural network

As shown in Fig. 9, Step 5 distinguishes between sitting/standing and lying. As physically understood, accelerometers measure only the acceleration of gravity when the body is at rest, not the acceleration of other body movements. Since the gravity directions of sitting/standing and lying lie in the longitudinal and transverse directions of the human body, respectively, these activities can be identified by using "the X-axis accelerometer samples" in the vertical direction of the human body.
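The physical idea can be illustrated with a simple threshold on the mean X-axis reading of a static window: at rest the axis aligned with the body reads close to 1 g when sitting or standing and close to 0 g when lying. The BP network of Fig. 9 learns this boundary from data; the threshold and function name below are only an illustrative assumption.

```python
import numpy as np

def static_pose_from_gravity(acc_x_window, threshold_g=0.5):
    """Illustration of the gravity argument behind Step 5.

    acc_x_window : X-axis acceleration samples (in g) of a window known to be
    static. When the body is at rest, the accelerometer senses only gravity,
    so the mean X-axis value is near 1 g for sitting/standing and near 0 g for
    lying. The 0.5 g threshold is an assumption, not a tuned parameter.
    """
    return "sitting/standing" if np.mean(acc_x_window) > threshold_g else "lying"
```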

Figure 9: Step 5, the binary classification task of lying and sitting/standing using a BP neural network

In addition, since the measuring devices used here were fixed separately on both ankles of the subjects, the position and orientation of the lower leg are likely to be the same for the two poses of sitting and standing. Therefore, for such mounting positions, the sitting and standing poses cannot be separated by any recognition algorithm.

In the same way as Steps 1–4 above, the eigenvalues of the left accelerometer (sitting/standing data are placed in target class 1 and lying data in target class 2) are sent to the BP neural networks for network training and model validation. Fig. 9 shows the HAR accuracy for the training, validation, test, and all data. Since all results are 100%, there is no error in distinguishing between lying and sitting/standing.

The total recognition rate of the cascade model for HAR is calculated from the recognition rates of the BP neural network models at each layer, as shown in Table 3. The recognition rate for walking, going upstairs and going downstairs is as low as 91.5%, which is caused by the recognition in the third step and needs to be improved.

Table 3: Comprehensive recognition rate of human activities using BP neural networks
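The comprehensive rates follow from multiplying the per-layer rates along the path an activity takes through the cascade; an activity is recognized only if every layer on its path decides correctly. A minimal sketch of this arithmetic, using the per-layer values reported above (the path definitions are a reconstruction for illustration, not a table copied from the paper):

```python
# Per-layer recognition rates (all data) from Steps 1-4 as reported above.
layer_acc = {"step1": 1.000, "step2": 1.000, "step3": 0.915, "step4": 1.000}

# Assumed paths through the cascade: an activity is recognized only if every
# layer along its path classifies it correctly, so the layer rates multiply.
paths = {
    "running": ["step1", "step2"],
    "walking": ["step1", "step2", "step3"],
    "going upstairs/downstairs": ["step1", "step2", "step3", "step4"],
}
for activity, steps in paths.items():
    rate = 1.0
    for s in steps:
        rate *= layer_acc[s]
    print(f"{activity}: {rate:.1%}")   # walking and the stair activities land near 91.5%
```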

As shown in Table 4 and Figs. 10–13, Step 6 validates the intelligent algorithms of the CNNs for HAR. Compared with the two-layer BP neural networks in the previous Steps 1–5, CNNs have no need for "feature engineering"; that is, the measured data can be fed directly into the CNN model, without calculating the eigenvalues of these data in advance.

Table 4: Step 6, comprehensive recognition accuracy with 1D-CNN

Figure 10: Identifying dynamic and static human activities with a CNN. The HAR accuracy is 99.35% and 100% for the test and training datasets, respectively

Figure 11: Identifying running and walking/going upstairs/going downstairs. The HAR accuracy is 94.44% and 97.56% for the test and training datasets, respectively

Figure 12: Identifying walking and going upstairs/downstairs. The HAR accuracy is 100% on both test and training datasets

Figure 13: Identifying lying and sitting/standing. The HAR accuracy is 99.0% and 99.14% for the test and training datasets, respectively

A CNN is tested to distinguish between dynamic and static human activities, as shown in Fig. 10. The HAR accuracy is 99.35% and 100% for the test and training datasets, respectively. Another CNN is used to distinguish between running and walking/going upstairs/going downstairs, as shown in Fig. 11. The HAR accuracy is 94.44% and 97.56% for the test and training datasets, respectively. A third CNN is used to distinguish between walking and going upstairs/downstairs, as shown in Fig. 12. The HAR accuracy is 100% on both test and training datasets. A fourth CNN is used to distinguish between lying and sitting/standing, as shown in Fig. 13. The HAR accuracy is 99.0% and 99.14% for the test and training datasets, respectively.
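A minimal sketch of a 1D-CNN of the kind used in these binary steps is given below: raw windows of triaxial samples are fed in directly, with no hand-computed eigenvalues. The channel counts, kernel sizes and window length are illustrative assumptions, not the exact parameters of Table 2.

```python
import torch
import torch.nn as nn

class Simple1DCNN(nn.Module):
    """Illustrative 1D-CNN for one binary HAR step on raw 30-sample windows
    of triaxial data; layer sizes are assumptions, not the paper's settings."""

    def __init__(self, in_channels=3, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),                      # 30 -> 15 time steps
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),                      # 15 -> 7 time steps
        )
        self.classifier = nn.Linear(32 * 7, n_classes)

    def forward(self, x):                         # x: (batch, 3, 30)
        return self.classifier(self.features(x).flatten(1))

# Smoke test on a random batch of 4 windows (3 axes, 30 samples each).
logits = Simple1DCNN()(torch.randn(4, 3, 30))
print(logits.shape)                               # torch.Size([4, 2])
```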

HAR with 1D-CNNs would fail on the single-step multi-class classification task of distinguishing sitting/lying/standing/walking/running/going upstairs/going downstairs simultaneously. The solution proposed in this paper is the hierarchical "cascade layered" recognition model. In this hierarchical recognition model, a two-layer BP neural network with the same structure is used for each layer in Steps 1–5, and a slightly different 1D-CNN is used for each layer in Step 6.

In Step 6, the training dataset accounts for 70% of all data, and the test dataset accounts for the remaining 30%. Of the training dataset, 20% is used as a validation dataset. Table 4 shows the overall recognition rate with 1D-CNNs. The static behaviors of sitting, standing and lying were recognized with 98.36% accuracy, and the dynamic human activities including walking, running, going upstairs and going downstairs were recognized with 93.83% accuracy.
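The 70/30 split with a further 20% of the training portion held out for validation can be reproduced with two successive stratified splits; a minimal sketch (the random seed is arbitrary):

```python
from sklearn.model_selection import train_test_split

def split_70_30_with_val(X, y, seed=0):
    """70% train / 30% test, then 20% of the training part as validation,
    matching the proportions described for Step 6."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.30, stratify=y, random_state=seed)
    X_tr, X_val, y_tr, y_val = train_test_split(
        X_tr, y_tr, test_size=0.20, stratify=y_tr, random_state=seed)
    return (X_tr, y_tr), (X_val, y_val), (X_te, y_te)
```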

The accuracy of dynamic recognition in Table 4 is as low as 93.83%, while the accuracy of static recognition is 98.36%. Why? The immediate reason is that the HAR accuracy between running and walking/going upstairs/going downstairs is only 94.44%, and the main reason for this 94.44% accuracy is that there are too few sampled data for running compared to the other activities. The theoretical basis for this conjecture is that the classification performance of CNNs is related to the total amount of data: the larger the amount of data, the higher the classification accuracy that can be achieved.

    4 Discussion about Confirmation Experiments

In this paper, some typical human activities are measured and identified using specially customized measuring devices and three smartphones. Our confirmatory experiments found that standing/sitting/lying/walking/running/going upstairs and going downstairs can be recognized with 90.5%–100% accuracy with only a simple BP neural network and basic CNNs. This result follows from the physical understanding of the sampling device, the sampling process, the sampling data, and the HAR algorithm. This is the prior knowledge that this paper focuses on, which leads to adequate data preparation and algorithm design, especially the cascade model of "hierarchical recognition".

In the first step of our HAR, it is necessary to distinguish between static poses and dynamic activities of the human body. It is easy to separate static poses from dynamic activities, since one is static in nature and the other is dynamic. Indeed, a variety of simple algorithms can perform this binary classification task accurately.

In the second step of our HAR, running and walking (including going upstairs and downstairs) need to be distinguished. Since walking and running are movements of the same nature and similar scale, both BP neural networks and CNNs have the classification ability to distinguish them.

In each additional recognition step, we likewise perform a binary classification to distinguish one type of human activity from another. The two types of human activity being separated are either of a different nature, or of the same nature and on a similar scale.

Evidently, BP neural networks and 1D-CNNs cannot, in a single step, both distinguish human activities of the same nature and similar scale and recognize human activities of a different nature.

The point we want to make here is that correct prior knowledge, including an understanding of the capabilities and characteristics of the algorithm, is essential. We believe that it is difficult to accurately identify human static postures (sitting, standing and lying), going upstairs and downstairs, walking, and running all at the same time using a single-step multi-classification model, because these classification objects are of different natures and different scales.

Essentially, single-step multi-class recognition is the research strategy adopted by numerous HAR scholars, but this strategy ignores the understanding of both the algorithms and human activity.

Different human activities have different physical properties and observational scales. For example, walking, running, going upstairs and going downstairs are all periodic behaviors. The main difference lies in the frequency and amplitude of these periodic steps, which have a timescale of seconds.

Static behaviors and transient events are perfectly distinct from walking, running, going upstairs and going downstairs. The main difference between static poses, such as standing and lying, is that gravity is applied in a different direction relative to the human body, with a timescale that can exceed 10 s. Transient events, such as falls and collisions, differ mainly in the direction and value of the acceleration generated by the event, and may have a timescale of milliseconds.

The three types of typical human activities, poses, and events mentioned above have perfectly different physical properties and observational scales, and therefore their key characteristics are different. Compared to single-step multi-class recognition, a chain of binary classification steps is significantly simpler and more accurate.

The prior knowledge contained in this point is analogous to the relationship between a telescope and a microscope. Although the optical principles of telescopes and microscopes are the same, it is unlikely that they could be integrated into a single instrument, a "micro-telescope", that can resolve distant stars while also accurately observing different microscopic particles of matter. Similarly, although various classification algorithms have different degrees of classification power, increasing their classification accuracy inevitably decreases their generalization power. Likewise, recognition tools or recognition algorithms can identify objects with the same properties at different scales, or objects with different properties at the same scale, but they typically cannot handle variation in both scale and properties at once.

Therefore, simple algorithms can be used to achieve the discriminative effect possessed by complex algorithms. This observation offers appealing insights for generalized classification problems, which we will discuss further in a future study.

    5 Conclusions

The HAR procedure and various HAR algorithms have been independently validated experimentally in this paper, so that the key role of prior knowledge could be discussed step by step. Theoretical analysis and experimental validation show that the single-step multi-class classification task for common human activities is difficult and complex, whereas the solution of the proposed cascade structure model is relatively simple and easy. The essential difference between the two HAR strategies lies in the different understanding of intelligent algorithms.

Prior knowledge plays an essential role in HAR. The study presented in this paper demonstrates that the identification of nine common human activities from each other can be made relatively clear and simple. The main reason for this effect is that proper prior knowledge is incorporated into the solution, which simplifies and improves HAR.

The task of HAR is to identify certain common human activities from each other. These human activities include standing, sitting, walking, running, lying on the right and left sides, prone, supine, going upstairs and downstairs, etc. Activity can be measured using sensors such as accelerometers and gyroscopes. We implement HAR on the measured data using BP neural networks and deep learning networks in a cascaded structure.

Numerous existing algorithms are capable of recognizing common human activities, poses, and events. These common algorithms include Principal Component Analysis (PCA), BP neural networks, the Ada-Boost algorithm, Random Forest (RF), Decision Trees (DT), Support Vector Machines (SVM), the K-Nearest Neighbor algorithm, Convolutional Neural Networks (CNN), etc. However, correct prior knowledge is crucial for the proper use of general algorithms. For example, BP neural networks and basic CNNs can perform accurate HAR in the cascade models but cannot in non-cascade models, because a series of cascaded binary classification tasks can fully exploit the recognition capabilities of BP neural networks and basic CNNs, while other HAR solutions require considerably more complex models and algorithms to achieve similar accuracy.

This can be likened to a piece of general prior knowledge: at the level of contemporary science and technology, it is difficult to integrate microscopes and astronomical telescopes into a single instrument. Similarly, existing general-purpose algorithms find it difficult to identify, in a single step, human activities with different natures, different scales, and different poses. While such a piece of prior knowledge is easy to understand, it is difficult to apply flexibly for HAR improvement.

In fact, a thorough knowledge of the subject under study is always required. Additional comparative experiments will be presented in our future research to explore the importance of prior knowledge for HAR and pattern recognition.

Acknowledgement: The authors are deeply grateful to Dr. Jiao Wenhua for his great help.

Funding Statement: This work was supported by the Guangxi University of Science and Technology, Liuzhou, China, sponsored by the Researchers Supporting Project (No. XiaoKeBo21Z27, The Construction of Electronic Information Team Supported by Artificial Intelligence Theory and Three-Dimensional Visual Technology, Yuesheng Zhao). This work was supported by the Key Laboratory for Space-based Integrated Information Systems 2022 Laboratory Funding Program (No. Space-InfoNet20221120, Research on the Key Technologies of Intelligent Spatio-Temporal Data Engine Based on Space-Based Information Network, Yuesheng Zhao). This work was supported by the 2023 Guangxi University Young and Middle-Aged Teachers' Basic Scientific Research Ability Improvement Project (No. 2023KY0352, Research on the Recognition of Psychological Abnormalities in College Students Based on the Fusion of Pulse and EEG Techniques, Yutong Lu).

Author Contributions: Study conception and design: Jianguo Wang, Kuan Zhang, Yuesheng Zhao; Data collection: Jianguo Wang, Xiaoling Wang; Analysis and interpretation of results: Yuesheng Zhao, Xiaoling Wang, Kuan Zhang; Draft manuscript preparation: Yuesheng Zhao, Xiaoling Wang, Muhammad Shamrooz Aslam. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The public datasets used in this study, UCI and WISDM, are accessible as described in references [26] and [27]. Our self-built dataset can be accessed at: https://github.com/NBcaixukun/Human_pose_data.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
