
    A Template Matching Based Feature Extraction for Activity Recognition

    Computers, Materials & Continua, 2022, Issue 7

    Muhammad Hameed Siddiqi, Helal Alshammari, Amjad Ali, Madallah Alruwaili, Yousef Alhwaiti, Saad Alanazi and M. M. Kamruzzaman

    1College of Computer and Information Sciences, Jouf University, Sakaka, Aljouf, 2014, Kingdom of Saudi Arabia

    2Department of Computer Science, COMSATS University Islamabad, Lahore Campus, Pakistan

    Abstract: Human activity recognition (HAR) can play a vital role in the monitoring of human activities, particularly for health-conscious individuals. The accuracy of HAR systems is completely reliant on the extraction of prominent features. Existing methods find it very challenging to extract optimal features due to the dynamic nature of activities, thereby reducing recognition performance. In this paper, we propose a robust feature extraction method for HAR systems based on template matching. Essentially, in this method, we associate with an activity frame a template, i.e., a sub-frame comprising the corresponding silhouette. The template is placed over the frame pixels, and the number of template pixels that match those in the frame is counted. This process is repeated over the whole frame, and the pixel yielding the optimum match, i.e., the best count, is estimated to be the position where the silhouette (provided via the template) is present inside the frame. In this way, the feature vector is generated. After feature extraction, a classifier is utilized to label the incoming activity. We utilized different publicly available standard datasets for the experiments. The proposed method achieved the best accuracy against existing state-of-the-art systems.

    Keywords: Activity recognition; feature extraction; template matching; video surveillance

    1 Introduction

    Human activity recognition has a significant role in many applications such as telemedicine and healthcare, neuroscience, and crime detection. Most of these applications need additional degrees of freedom such as rotation or orientation, scale or size, and viewpoint distortions. Rotation might be handled by spinning the template or by utilizing polar coordinates; scale invariance might be attained using templates of various sizes. Having additional parameters of interest implies that the accumulator space becomes bigger; its dimensionality rises by one for every extra parameter of interest. Position-invariant template matching implies a 2D parameter space, while the extension to scale- and position-invariant template matching needs a 3D parameter space [1].

    Human activity recognition (HAR) systems try to automatically recognize and examine human activities by acquiring data from different sensors [2]. HAR is frequently associated with the procedure of detecting and naming actions using sensory observations [3]. Generally, a human activity denotes the movement of one or many parts of the human body, which might be static or composed of numerous primitive actions accomplished in some successive order. Hence, HAR should permit classifying the same activity with the same label even when accomplished by various persons under various dynamics [2].

    There are various types of audio and video sensors that can be employed in HAR systems. However, most of them have their own limitations. In audio-sensor-based data collection, we may lose data because of utilizing GPRS to transmit it; this is one of the main disadvantages of audio-based data collection. Therefore, in this work, we use a video sensor (a 2D RGB camera). A HAR system has three basic stages. In the first stage, noise and environmental distortion are diminished in the video frame; in this stage, we also segment the human body. In the second stage, we extract the best and most informative features from the segmented body. In the last stage, a classifier is employed to categorize the incoming activities, as shown in Fig. 1.
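    The three-stage flow described above can be sketched as a simple function pipeline. All stage functions below are illustrative placeholders, not the paper's actual implementations:

```python
# A schematic of the three-stage HAR pipeline: preprocessing/segmentation,
# feature extraction, then classification. Every stage here is a toy stand-in.
def har_pipeline(frame, denoise, segment, extract_features, classify):
    clean = denoise(frame)             # stage 1a: noise reduction
    body = segment(clean)              # stage 1b: human body segmentation
    features = extract_features(body)  # stage 2: feature extraction
    return classify(features)          # stage 3: activity labeling

# Toy stand-ins that only demonstrate the data flow between stages:
label = har_pipeline(
    [3, 1, 2],
    denoise=sorted,
    segment=lambda f: f[-1],
    extract_features=lambda b: [b],
    classify=lambda v: "walk" if v[0] > 2 else "bend",
)
print(label)  # walk
```

    The point of this sketch is only that each stage consumes the previous stage's output, matching the flow of Fig. 1.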

    Figure 1: General flow diagram of a HAR system

    Commonly, classification has two types: the first is frame-based classification, while the second is sequence-based classification. In frame-based classification, only the present frame is employed, with or without a reference frame, in order to categorize the human actions in the arriving videos. On the other hand, in sequence-based classification, the temporal movement of the feature pixels between the present frame and the preceding frames is considered. Frame-based classification therefore lacks the ability to classify human activities in such domains; hence, the concentration of this work is sequence-based classification [4].

    Accordingly, some recent works have been developed for sequence-based HAR systems that showed significant performance in various dynamic scenarios. A state-of-the-art system was proposed by [5-8] that is based on the extraction of the individual person's scene from the sequence of frames. Then, a 3D convolutional neural network was utilized in order to detect and classify the corresponding activities of every sequence of frames. Activity-based video summarization is accomplished by saving every person's activity at every time instant of the incoming video. Similarly, another sequence-based HAR system was proposed by [9] for the identification of humans in healthcare domains. This system takes video frames of COVID-19 patients and then searches for a match within the given frames. In this system, the Gabor filter is utilized for feature extraction, where a personal sample generation formula along with the Gabor filter is applied to the input frame in order to collect optimal and non-redundant Gabor features. Further, deep learning models are employed for matching the human activities with the input frame. Furthermore, a robust sequence-based HAR system was proposed by [10] that was assessed on the Weizmann and KTH action datasets. In the pre-processing step of this system, the authors extracted the initial frames from input videos and resized them. Then, frame by frame, the region of interest was obtained by employing a blob detection technique, and tracking was done with the help of a Kalman filter. Furthermore, an ensembled method (a group of various techniques such as bidimensional empirical mode decomposition, scale-invariant feature transform, and wavelet transform) was employed for feature extraction, which extracts the features from the moving object. Similarly, this method was also utilized on pre-processed frames in order to extract the best features from multi-scaled frames. Finally, a convolutional neural network was employed for activity classification. Most of these systems suffer from their own limitations, such as the degradation of accuracy in dynamic and naturalistic environments.

    Therefore, in this work, we propose an adaptive feature extraction method. Essentially, in this method, we associate with an activity frame a template, i.e., a sub-frame comprising the silhouette we are searching for. We center the template on the frame pixels and count the template pixels that match those in the frame. This process is repeated over the whole frame, and the pixel yielding the optimum match, i.e., the best count, is estimated to be the position where the silhouette (provided via the template) is present inside the frame. For the experiments, we utilized various publicly available standard datasets: the Weizmann dataset [11], the KTH action dataset [12], the UCF sports dataset [13], and the IXMAS action dataset [14]. The proposed technique showed the best performance against existing works.

    The remaining article is organized as follows: Section 2 reviews recent literature on sequence-based human activity classification systems. Section 3 presents a detailed description of the proposed feature extraction method. Section 4 describes the utilized action datasets. Section 5 describes the experimental setup, while Section 6 explains the results along with the discussion. Lastly, Section 7 summarizes the proposed HAR system along with some future directions.

    2 Related Work

    A human activity denotes the movement of one or many parts of the human body, which might be static or composed of numerous primitive actions accomplished in some successive order. Many state-of-the-art methods have been proposed for HAR systems; however, most of them have their own limitations. The authors of [15] developed a state-of-the-art system that is based on a deep learning architecture and Inception-v4 in order to classify the incoming activities. However, deep learning lacks common sense, which makes the corresponding systems brittle, and the errors might be very large when they occur [16]. Moreover, due to the larger number of layers, the step time of Inception-v4 is significantly slower in practice [17].

    Similarly, an HAR system was proposed by [18] that is based on dissimilarity in body shape, which has been divided into five parts that correspond to five fractional occupancy regions. For every frame, the region ratios are calculated, which are further employed for classification. For classification, they utilized the AdaBoost algorithm, which has greater discriminative capacity. However, the AdaBoost algorithm cannot be parallelized, since every predictor might only be trained after the preceding one has been trained and assessed [19]. A novel ensembled model was proposed by [20] for HAR systems, where they utilized a multimodal sensor dataset. They proposed a new data pre-processing method in order to permit context-dependent feature extraction from the corresponding dataset, to be employed through various machine learning techniques such as linear discriminant analysis, decision trees, kNN, cubic SVM, DNN, and bagged trees. However, each of these algorithms has its own limitations; for instance, kNN, SVM and DNN are frame-based classifiers that do not have the ability to accurately recognize human activities from incoming sequences of video frames [21].

    A new HAR approach was introduced by [22] which is based on entropy-skewness and a dimension reduction technique in order to obtain condensed features. These features are then transformed into a codebook through serial-based fusion. In order to select the prominent and best features, a genetic algorithm is applied to the created feature codebooks, and for classification, a multi-class SVM is employed. However, a well-known limitation of the genetic algorithm is that it does not guarantee any diversity amongst the attained solutions [23]. Moreover, SVM does not have the capability to correctly classify human activities from incoming sequences of video frames [21]. A naturalistic HAR system was proposed by [24] in which human behavior is modeled as a stochastic sequence of activities. Activities are represented by a feature vector including both trajectory data, such as position and velocity, and a group of local motion descriptors. Activities are classified through probabilistic search of frame feature records representing formerly seen activities. A hidden Markov model (HMM) was employed for activity classification from incoming videos. However, the local descriptors have one main limitation: their results might not be directly transferred to pixel descriptors, which cannot then be further utilized for classification [25].

    A motion-based feature extraction was proposed by [26] for HAR systems. They employed context information from various resources to enhance recognition. For that purpose, they presented scene context features which represent the situation of the subject at various levels. Then, for classification, a deep neural network structure was utilized in order to obtain a higher-level representation of human actions, which was further combined with the context features and motion features. However, deep neural networks have major limitations such as limited transparency and interpretability, and they require a huge amount of data [27]. Moreover, the motion features are very scant if the human or background comprises non-discriminative features, and sometimes the extracted features are defective and vanish in succeeding frames [28]. A very recent system was proposed by [29] that is based on various machine learning techniques such as spatio-temporal interest points, histogram of oriented gradients, Gabor filter, and Harris filter coupled with a support vector machine, and they claimed the best accuracy. However, the aforementioned techniques have major limitations; for example, the high-frequency response of the Gabor filter produces a ringing effect close to the edges, which may degrade accuracy [30]. Moreover, the Harris filter requires much time for feature extraction and much space to store the features, which might not be suitable for naturalistic domains [31].

    On the other hand, an automatic sequence-based HAR system was proposed by [32], which groups features with high associations into category feature vectors. Every action is then classified through a combination of Gaussian mixture models. However, the Gaussian mixture model is a frame-based classifier that does not have the ability to accurately classify video-based activities. Another sequence-based HAR system was designed by [33] that was based on neural networks. The corresponding networks created a feature database of various activities that were extracted and selected from sequences of frames. Finally, a multi-layer feed-forward perceptron network was utilized in order to classify the incoming activities. However, the neural network is a vector-based classifier that has low performance on sequences of frames [21]. Similarly, a multi-viewpoint HAR system was proposed by [34] that was based on two-stream convolutional neural networks integrated with a temporal pooling scheme (which builds non-linear feature subspace representations). However, its accuracy was very low in naturalistic domains. Moreover, the temporal pooling scheme suffers shortcomings in performance generalization, as described in [35], which calls into question the benefit of learned features over handcrafted ones [36].

    A multimodal scheme was proposed for human action recognition [37]. This system was based on ascribing importance to the semantic material of label texts instead of just mapping them into numbers. After this step, they modeled a learning framework that reinforces the video description with additional semantic language supervision and allows the proposed model to perform activity recognition without additional required parameters. However, semantic information has some major issues such as dimension explosion, data sparseness, and incomplete generalization capacity [38].

    Accordingly, this work presents an accurate, robust and dynamic feature extraction method that has the ability to extract the best features from a sequence of video frames. In this method, we associate with an activity frame a template, i.e., a sub-frame comprising the silhouette we are searching for. We center the template on the frame pixels and count the template pixels that match those in the frame. This process is repeated over the whole frame, and the pixel yielding the optimum match, i.e., the best count, is estimated to be the position where the silhouette (provided via the template) is present inside the frame. In this way, the feature vector is generated. After feature extraction, a hidden Markov model (HMM) is utilized in order to label the incoming activities.

    3 Proposed Feature Extraction Method

    In a typical human activity recognition system, the accuracy completely relies on the feature extraction module. Therefore, we propose a robust and naturalistic method for the feature extraction module. In this method, we associate with an activity frame a template, i.e., a sub-frame comprising the corresponding silhouette. We center the template on the frame pixels and count the template pixels that correspond to those in the frame. This process is repeated over the whole frame, and the pixel yielding the optimum match, i.e., the best count, is estimated to be the position where the silhouette (provided via the template) lies inside the frame.

    Generally, template matching might be explained as an algorithm of parameter estimation. The parameters define the location of the template in the image, which might be described as a discrete function $F_{i,j}$ that takes values over the coordinates of the pixels $(i,j) \in S$ in a frame. For instance, the set of points of a 2×2 template may be defined as $S = \{(0,0),\ (0,1),\ (1,0),\ (1,1)\}$.
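    As a concrete illustration, the coordinate set $S$ of a small rectangular template can be enumerated as follows (a minimal sketch; the helper name is ours, not the paper's):

```python
# Enumerate the coordinate set S = {(i, j)} of an h x w template,
# matching the 2x2 example in the text.
def template_coords(h, w):
    return [(i, j) for i in range(h) for j in range(w)]

print(template_coords(2, 2))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```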

    Let us assume that every pixel in the activity frame $Im_{i,j}$ is disturbed by additive Gaussian noise with zero mean and an unidentified standard deviation σ. Hence, the probability that a pixel of the template positioned at the coordinates $(x,y)$ matches the equivalent pixel at location $(i,j) \in S$ is given by the normal distribution

    $$p_{x,y}(i,j)=\frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{1}{2}\left(\frac{Im_{i+x,j+y}-F_{i,j}}{\sigma}\right)^{2}} \qquad (1)$$

    where σ indicates the standard deviation of the Gaussian distribution and π is the usual mathematical constant. Since the noise affecting every pixel is independent, the probability of the template being at location $(x,y)$ is the joint probability of every pixel that is covered by the template, i.e.,

    $$L_{x,y}=\prod_{(i,j)\in S}p_{x,y}(i,j) \qquad (2)$$

    Substituting Eq. (1) into Eq. (2), we have

    $$L_{x,y}=\left(\frac{1}{\sqrt{2\pi}\,\sigma}\right)^{k}e^{-\frac{1}{2}\sum_{(i,j)\in S}\left(\frac{Im_{i+x,j+y}-F_{i,j}}{\sigma}\right)^{2}} \qquad (3)$$

    where $k$ represents the number of points in the corresponding template; Eq. (3) is known as the likelihood function. Commonly, for simpler analysis, this function is expressed in logarithmic form. It should be noticed that the monotonic logarithm does not modify the location of the maximum likelihood. Hence, the likelihood function under the logarithm is shown below:

    $$\ln L_{x,y}=k\,\ln\!\left(\frac{1}{\sqrt{2\pi}\,\sigma}\right)-\frac{1}{2}\sum_{(i,j)\in S}\left(\frac{Im_{i+x,j+y}-F_{i,j}}{\sigma}\right)^{2} \qquad (4)$$

    To select the parameters which maximize the likelihood function, we estimate the maximum likelihood by setting the rate of change of the objective function to zero:

    $$\frac{\partial \ln L_{x,y}}{\partial x}=0, \qquad \frac{\partial \ln L_{x,y}}{\partial y}=0 \qquad (5)$$

    Hence, the aforementioned equations also provide the solution of a minimization problem, which is given as

    $$\min_{x,y}\ e(x,y) \qquad (6)$$

    with the squared-error criterion

    $$e(x,y)=\sum_{(i,j)\in S}\left(Im_{i+x,j+y}-F_{i,j}\right)^{2} \qquad (7)$$

    Here, the maximum likelihood estimate is equal to picking the location of the template which minimizes the squared errors. The location where the template best matches the frame is the estimated location of the template inside the frame. Hence, the maximum likelihood solution amounts to selecting the match under the criterion of squared error. This indicates that the result attained via template matching is optimal for frames that are corrupted by Gaussian noise. It should be noted that practically observed noise can be presumed to be Gaussian based on the central limit theorem, though many frames seem to deny this presumption. Alternatively, other error criteria, such as the absolute difference $\sum_{(i,j)\in S}\left|Im_{i+x,j+y}-F_{i,j}\right|$, can be used instead of the squared difference.

    An alternative form of the squared-error criterion can be derived by expanding Eq. (7), which can be written as:

    $$e(x,y)=\sum_{(i,j)\in S}Im_{i+x,j+y}^{2}-2\sum_{(i,j)\in S}Im_{i+x,j+y}F_{i,j}+\sum_{(i,j)\in S}F_{i,j}^{2} \qquad (8)$$

    The final term of Eq. (8), $\sum_{(i,j)\in S}F_{i,j}^{2}$, does not depend on the location of the template $(x,y)$. Intrinsically, it is constant and cannot be minimized. Hence, the optimum in Eq. (8) can be obtained by minimizing

    $$\min_{x,y}\left(\sum_{(i,j)\in S}Im_{i+x,j+y}^{2}-2\sum_{(i,j)\in S}Im_{i+x,j+y}F_{i,j}\right) \qquad (9)$$

    If the first term

    $$\sum_{(i,j)\in S}Im_{i+x,j+y}^{2} \qquad (10)$$

    is almost constant, then the remaining term gives a measure of the similarity between the template and the frame. Specifically, we might maximize the cross-correlation between the frame and the template. Hence, the best location might be calculated as

    $$(\hat{x},\hat{y})=\underset{x,y}{\arg\max}\ \sum_{(i,j)\in S}Im_{i+x,j+y}F_{i,j} \qquad (11)$$

    However, the squared term in Eq. (10) may change with location, so the match defined through Eq. (11) might be poor. Similarly, the range of the cross-correlation is dependent on the template size and is not invariant to changes in environmental conditions such as illumination. Hence, it is more feasible to utilize either Eq. (7) or Eq. (9) in an implementation.
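    A minimal sketch of matching by the squared-error criterion of Eq. (7) follows: an exhaustive scan over all template placements returns the location with the smallest error. The function and variable names are illustrative, and NumPy is assumed:

```python
import numpy as np

def match_ssd(frame, template):
    """Return the (x, y) placement minimizing the squared error of Eq. (7)."""
    fh, fw = frame.shape
    th, tw = template.shape
    best_err, best_xy = None, None
    for x in range(fh - th + 1):
        for y in range(fw - tw + 1):
            window = frame[x:x + th, y:y + tw]
            err = np.sum((window - template) ** 2)   # e(x, y) of Eq. (7)
            if best_err is None or err < best_err:
                best_err, best_xy = err, (x, y)
    return best_xy

frame = np.zeros((6, 6))
frame[2:4, 3:5] = 1.0          # a 2x2 "silhouette" placed at (2, 3)
template = np.ones((2, 2))
print(match_ssd(frame, template))  # (2, 3)
```

    The exhaustive scan makes the cost proportional to the number of placements times the template size, which is why practical implementations often restrict the search region.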

    On the other hand, in order to normalize the cross-correlation, Eq. (8) can be rewritten for unit-energy windows as below:

    $$e(x,y)=\sum_{(i,j)\in S}\left(\frac{Im_{i+x,j+y}}{\sqrt{\sum_{(i,j)\in S}Im_{i+x,j+y}^{2}}}-\frac{F_{i,j}}{\sqrt{\sum_{(i,j)\in S}F_{i,j}^{2}}}\right)^{2} \qquad (12)$$

    Accordingly, the first part is constant, and hence, the optimal value might be attained as

    $$(\hat{x},\hat{y})=\underset{x,y}{\arg\max}\ \frac{\sum_{(i,j)\in S}Im_{i+x,j+y}F_{i,j}}{\sqrt{\sum_{(i,j)\in S}Im_{i+x,j+y}^{2}}\sqrt{\sum_{(i,j)\in S}F_{i,j}^{2}}} \qquad (13)$$

    Generally, it is also feasible to stabilize (zero-mean) the window of every activity frame against the template. So,

    $$(\hat{x},\hat{y})=\underset{x,y}{\arg\max}\ \frac{\sum_{(i,j)\in S}\left(Im_{i+x,j+y}-\overline{Im}_{x,y}\right)\left(F_{i,j}-\bar{F}\right)}{\sqrt{\sum_{(i,j)\in S}\left(Im_{i+x,j+y}-\overline{Im}_{x,y}\right)^{2}}\sqrt{\sum_{(i,j)\in S}\left(F_{i,j}-\bar{F}\right)^{2}}} \qquad (14)$$

    where $\overline{Im}_{x,y}$ is the average of the pixels $Im_{i+x,j+y}$ for points within the window (i.e., $(i,j)\in S$) and $\bar{F}$ indicates the average of the pixels in the corresponding template. The normalized cross-correlation presented by Eq. (14) does not modify the location of the optimum and admits an interpretation as the inner product of normalized vectors.
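    The zero-mean normalized score inside Eq. (14) can be sketched for a single window as follows (a hedged illustration with our own helper name; NumPy assumed):

```python
import numpy as np

def ncc(window, template):
    """Zero-mean normalized cross-correlation of one window, as in Eq. (14)."""
    w = window - window.mean()       # subtract the window mean
    t = template - template.mean()   # subtract the template mean
    denom = np.sqrt(np.sum(w ** 2) * np.sum(t ** 2))
    return 0.0 if denom == 0 else float(np.sum(w * t) / denom)

t = np.array([[0.0, 1.0], [1.0, 0.0]])
print(ncc(t, t))      # 1.0: identical patterns correlate perfectly
print(ncc(1 - t, t))  # -1.0: an inverted pattern anti-correlates
```

    Because both window and template are mean-subtracted and scaled to unit norm, the score lies in [-1, 1] and is insensitive to uniform brightness and contrast changes.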

    If the activity frame and the corresponding template are binary, then this type of formulation for template matching is especially beneficial, as it can represent regions in the frame or comprise the edges. The overall flowchart of the proposed approach is presented in Fig. 2.

    Figure 2: The flowchart of the proposed feature extraction approach

    4 Utilized Action Datasets

    The proposed feature extraction technique has been tested and validated on four publicly available standard action datasets, namely the Weizmann dataset, KTH action dataset, UCF sports dataset, and IXMAS action dataset. Each action dataset is explained below.

    4.1 Weizmann Dataset

    In this dataset, there are ten different activities performed by nine different subjects. The corresponding activities are skip, bend, walk, run, jumping jack, side changing, place jumping, forward jumping, one-hand waving (Wave-1) and two-hand waving (Wave-2). The dataset has a total of 90 activity clips with approximately 15 frames per activity. In order to normalize the frames of the dataset, we resized them to 280×340.

    4.2 KTH Action Dataset

    This dataset was created by 25 subjects who performed a total of six activities, namely walking, boxing, running, clapping, jogging, and waving, in various dynamic situations. The dataset was recorded with a static camera against a consistent background. It contains a total of 2,391 sequences, which were taken with a frame size of 280×320.

    4.3 UCF Sports Dataset

    This dataset contains a total of 182 videos collected from television channels and assessed through an n-fold cross-validation scheme. It was created from various sports persons performing different sports. Moreover, all activities were collected with a static camera. Some of the classes have high intra-class resemblance. There are a total of nine activities: diving, running, lifting, skating, golf swinging, kicking, walking, baseball swinging, and horseback riding. Each activity frame has a size of 280×320.

    4.4 IXMAS (INRIA Xmas Motion Acquisition Sequences) Action Dataset

    In this dataset, there are a total of thirteen activities performed by eleven subjects. Each actor selected a free angle and location, and for each subject the corresponding silhouettes are provided in the dataset. We chose eight activity classes: cross arms, walk, turn around, punch, wave, sit down, kick, and get up. This dataset targets view-invariant HAR; the size of each activity frame is 280×320 in our experiments. The dataset suffers from high occlusion, which may reduce the performance of the proposed approach; therefore, we employed one of our previous methods [39] to normalize the occlusion concern.

    5 Experiments Setup

    The proposed method was assessed and validated against the following set of experiments.

    5.1 First Experiment

    This experiment presents the accuracy of the HAR system in the presence of the proposed feature extraction technique. For that purpose, we performed four sub-experiments, one on each dataset, in order to show the significance and robustness of the proposed technique.

    5.2 Second Experiment

    This experiment indicates the role and importance of the designed approach in a typical HAR system. We utilized an inclusive set of sub-experiments for this purpose, in which we employed various state-of-the-art feature extraction methods instead of the proposed technique.

    5.3 Third Experiment

    Finally, in this experiment, we compared the accuracy of the proposed method against state-of-the-art systems.

    6 Results and Discussions

    6.1 First Experiment

    In this sub-experiment, we present the performance of the proposed feature extraction technique on each dataset. For each dataset, we utilized an n-fold cross-validation structure, which means that every activity is used for both training and testing. The overall results of the proposed method are shown in Tab. 1 (Weizmann dataset), Tab. 2 (KTH action dataset), Tab. 3 (UCF dataset), and Tab. 4 (IXMAS dataset).
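    The n-fold protocol described above can be sketched as follows. The data and the nearest-mean stand-in classifier are purely illustrative (the paper's actual classifier is an HMM); NumPy is assumed:

```python
import numpy as np

def kfold_accuracy(X, y, n_folds, classify):
    """Hold each fold out once, train on the rest, and return overall accuracy."""
    idx = np.arange(len(X))
    folds = np.array_split(idx, n_folds)
    correct = 0
    for fold in folds:
        train = np.setdiff1d(idx, fold)      # all samples outside the held-out fold
        for i in fold:
            correct += int(classify(X[train], y[train], X[i]) == y[i])
    return correct / len(X)

def nearest_mean(X_train, y_train, x):
    """Toy stand-in classifier: assign x to the class with the nearest mean."""
    means = {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

X = np.array([[0.0], [1.0], [0.1], [1.1]])   # two well-separated toy classes
y = np.array([0, 1, 0, 1])
print(kfold_accuracy(X, y, 2, nearest_mean))  # 1.0
```

    The key property of the protocol is that every sample is scored exactly once by a model that never saw it during training.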

    Table 1: Analysis of the proposed approach on Weizmann dataset

    Table 2: Analysis of the proposed approach on KTH action dataset

    It should be noted from Tabs. 1-4 that the common HAR system, together with the proposed feature extraction method, achieved significant accuracy on every dataset. From these results, we observed that the proposed method is robust: it did not achieve the best accuracy on only one dataset, but showed significant performance across all datasets. This is because the averaging inherent in the proposed feature extraction method reduces the vulnerability to noise, and the maximization stage diminishes the susceptibility to occlusion.

    Table 3: Analysis of the proposed approach on UCF dataset, where GoS is Golf Swinging, HoBR is Horseback Riding, and BaS is Baseball Swinging

    Table 4: Analysis of the proposed approach on IXMAS dataset, where CrA is Cross Arm, SiD is Sit Down, GeU is Get Up, TuA is Turn Around

    6.2 Second Experiment

    For this experiment, we performed a group of sub-experiments in order to show the performance of the proposed HAR system. All sub-experiments were performed on every dataset in the absence of the proposed feature extraction method. For these sub-experiments, we utilized recent well-known feature extraction techniques such as the wavelet transform [4], Curvelet transform [40], local binary pattern (LBP) [41], local directional pattern (LDP) [42], and stepwise linear discriminant analysis (SWLDA) [43]. The overall results of the sub-experiments are presented in Tabs. 5-24 for the Weizmann, KTH, UCF, and IXMAS datasets across the various activities.

    Table 5: Analysis of a common HAR system along with existing wavelet transform (without employing the proposed approach) on Weizmann dataset

    Table 6: Analysis of a common HAR system along with existing Curvelet transform (without employing the proposed approach) on Weizmann dataset

    Table 7: Analysis of a common HAR system along with existing local binary pattern (LBP) (without employing the proposed approach) on Weizmann dataset


    Table 8: Analysis of a common HAR system along with existing local directional pattern (LDP) (without employing the proposed approach) on Weizmann dataset

    Table 9: Analysis of a common HAR system along with existing stepwise linear discriminant analysis (SWLDA) (without employing the proposed approach) on Weizmann dataset

    Table 10: Analysis of a common HAR system along with existing wavelet transform (without employing the proposed approach) on KTH action dataset

    Table 11: Analysis of a common HAR system along with existing Curvelet transform (without employing the proposed approach) on KTH action dataset

    Table 12: Analysis of a common HAR system along with existing local binary pattern (LBP) (without employing the proposed approach) on KTH action dataset

    Table 13: Analysis of a common HAR system along with existing local directional pattern (LDP) (without employing the proposed approach) on KTH action dataset

    Table 14: Analysis of a common HAR system along with existing stepwise linear discriminant analysis (SWLDA) (without employing the proposed approach) on KTH action dataset

    Table 15: Analysis of a common HAR system along with existing wavelet transform (without employing the proposed approach) on UCF dataset, where GoS is Golf Swinging, HoBR is Horseback Riding, and BaS is Baseball Swinging

    Table 16: Analysis of a common HAR system along with existing Curvelet transform (without employing the proposed approach) on UCF dataset, where GoS is Golf Swinging, HoBR is Horseback Riding, and BaS is Baseball Swinging

    Table 17: Analysis of a common HAR system along with existing local binary pattern (LBP) (without employing the proposed approach) on UCF dataset, where GoS is Golf Swinging, HoBR is Horseback Riding, and BaS is Baseball Swinging

    As can be seen from Tabs. 5-24, in the absence of the proposed feature extraction technique the HAR system did not achieve the best accuracy. This is because insensitivity to noise and occlusion are the main benefits of the proposed feature extraction technique. Noise may occur in any frame of the incoming video. Similarly, there might be low noise in digital photographs; however, in image processing it is worsened by edge detection owing to the differencing operations involved. Furthermore, shapes might simply be obstructed or hidden; for instance, a person may walk behind a streetlamp, or illumination may be one of the reasons that creates occlusion. The averaging inherent in the proposed feature extraction method reduces the vulnerability to noise, and the maximization stage diminishes the susceptibility to occlusion.

    Table 18: Analysis of a common HAR system along with existing local directional pattern (LDP) (without employing the proposed approach) on UCF dataset, where GoS is Golf Swinging, HoBR is Horseback Riding, and BaS is Baseball Swinging

    Table 19: Analysis of a common HAR system along with existing stepwise linear discriminant analysis (SWLDA) (without employing the proposed approach) on UCF dataset, where GoS is Golf Swinging, HoBR is Horseback Riding, and BaS is Baseball Swinging

    Table 20: Analysis of a common HAR system along with existing wavelet transform (without employing the proposed approach) on IXMAS dataset, where CrA is Cross Arm, SiD is Sit Down, GeU is Get Up, TuA is Turn Around

    Table 21: Analysis of a common HAR system along with existing Curvelet transform (without employing the proposed approach) on IXMAS dataset, where CrA is Cross Arm, SiD is Sit Down, GeU is Get Up, TuA is Turn Around

    Table 22: Analysis of a common HAR system along with existing local binary pattern (LBP) (without employing the proposed approach) on IXMAS dataset, where CrA is Cross Arm, SiD is Sit Down, GeU is Get Up, TuA is Turn Around


    Table 23: Analysis of a common HAR system along with existing local directional pattern (LDP) (without employing the proposed approach) on IXMAS dataset, where CrA is Cross Arm, SiD is Sit Down, GeU is Get Up, TuA is Turn Around

    Table 24: Analysis of a common HAR system along with existing stepwise linear discriminant analysis (SWLDA) (without employing the proposed approach) on IXMAS dataset, where CrA is Cross Arm, SiD is Sit Down, GeU is Get Up, TuA is Turn Around

    6.3 Third Experiment

    Finally, in this group of experiments, we compared the recognition rate of the proposed approach against the latest HAR systems. For some systems, we borrowed their implementation code, while for the remaining systems we present the accuracies as described in their respective articles. All systems were evaluated under the exact settings indicated in their respective articles. For comparison, we also utilized the UCF50 dataset [44] and HMDB51 dataset [45]. The comparison results are presented in Tab. 25.

    Table 25: Performance of the proposed approach against recent HAR systems

    It is evident from Tab. 25 that the proposed approach achieved the best weighted average classification accuracy against state-of-the-art works. The reason is that the proposed technique has the capacity to extract prominent features from the action frames in the presence of occlusion, illumination variation, background clutter, and scale changes. Moreover, the proposed approach extracts the best features from various resources such as shapes, textures, and colors in order to build the feature vector that serves as input to a classifier.
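As a point of reference, the weighted average classification accuracy cited here weights each class's accuracy by its share of the test samples. A minimal sketch of the computation (the class labels and counts below are illustrative, not figures from the paper):

```python
# Weighted average accuracy: per-class accuracy weighted by class support.
# All labels and numbers are hypothetical, chosen only to illustrate the formula.
per_class_accuracy = {"GoS": 0.95, "HoBR": 0.90, "BaS": 0.80}
support = {"GoS": 50, "HoBR": 30, "BaS": 20}  # number of test samples per class

total = sum(support.values())
weighted_avg = sum(per_class_accuracy[c] * support[c] for c in support) / total
print(round(weighted_avg, 3))  # 0.905
```

Unlike a plain (macro) average, this measure is not distorted when the test classes have very different numbers of samples.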

    7 Conclusions

    Human activity recognition (HAR) plays a fascinating role in our daily life. HAR can be applied in healthcare domains to monitor patients' daily routines, and it also has a significant role in other applications such as crime control, sports, and defense. There are many sensing resources for HAR systems; among them, the video camera is one of the best candidates. The accuracy of such systems completely depends upon the extraction and selection of the best features from the activity frames. Accordingly, in this work, we have proposed a new feature extraction technique based on template matching. In the proposed approach, we match a template of an activity frame, or a sub-frame, which comprises the corresponding silhouette. The template is placed on the frame pixels to calculate the number of pixels in the template that match those in the frame. The proposed approach was assessed on four publicly available standard activity datasets and showed the best performance against existing recent HAR systems. The averaging intrinsic to the proposed approach reduces vulnerability to noise, and the maximization stage diminishes susceptibility to occlusion. Moreover, the proposed algorithm has the capacity to extract prominent features from activity frames in the presence of occlusion, illumination variation, background clutter, and scale changes. Also, the proposed approach extracts the best features from various resources such as shapes, textures, and colors for building the feature vector that serves as input to a classifier.
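The matching step summarized above can be sketched as follows. This is a minimal illustration of counting the pixels that agree at every template offset and taking the best count as the match, not the authors' exact implementation; the binary frame, the template, and the function name are all hypothetical:

```python
import numpy as np

def template_match_feature(frame, template):
    """Slide a binary silhouette template over a binary frame, count the
    pixels that agree at each offset, and return the best-match position
    together with its count (illustrative sketch, not the paper's code)."""
    fh, fw = frame.shape
    th, tw = template.shape
    scores = np.zeros((fh - th + 1, fw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = frame[y:y + th, x:x + tw]
            scores[y, x] = np.sum(patch == template)  # equivalent-pixel count
    best = np.unravel_index(np.argmax(scores), scores.shape)
    return np.array([best[0], best[1], scores[best]])

# Hypothetical 8x8 binary frame containing a 3x3 silhouette at rows 2-4, cols 3-5.
frame = np.zeros((8, 8), dtype=int)
frame[2:5, 3:6] = 1
template = np.ones((3, 3), dtype=int)
feature = template_match_feature(frame, template)
print(feature)  # [2. 3. 9.] -> best match at (2, 3), all 9 template pixels matching
```

Repeating this over the frames of an activity sequence, and concatenating the resulting position/score triples, yields a feature vector of the kind the paper feeds to a classifier.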

    In the future, we will implement and deploy the proposed HAR system, with the proposed feature extraction, in healthcare, which will enable physicians to remotely monitor the daily exercises of patients and easily provide them with appropriate recommendations. This approach may also help patients sufficiently improve their quality of life in healthcare and telemedicine.

    Funding Statement: The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this work through the Project Number “375213500”. Also, the authors would like to extend their sincere appreciation to the central laboratory at Jouf University for supporting this study.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
