
    A Novel Human Interaction Framework Using Quadratic Discriminant Analysis with HMM

    Computers, Materials & Continua, 2023, Issue 11

    Tanvir Fatima Naik Bukht, Naif Al Mudawi, Saud S. Alotaibi, Abdulwahab Alazeb, Mohammed Alonazi, Aisha Ahmed AlArfaj, Ahmad Jalal and Jaekwang Kim

    1Department of Computer Science, Air University, Islamabad, 44000, Pakistan

    2Department of Computer Science, College of Computer Science and Information System, Najran University, Najran, 55461, Saudi Arabia

    3Information Systems Department, Umm Al-Qura University, Makkah, Saudi Arabia

    4Department of Information Systems, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, 16273, Saudi Arabia

    5Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia

    6Convergence Program for Social Innovation, Sungkyunkwan University, Suwon, 03063, Korea

    ABSTRACT Human-human interaction recognition is crucial in computer vision fields like surveillance, human-computer interaction, and social robotics; it enhances systems' ability to interpret and respond to human behavior precisely. This research focuses on recognizing human interaction behaviors from a static image, which is challenging due to the complexity of diverse actions. The overall purpose of this study is to develop a robust and accurate system for human interaction recognition. We present a novel image-based human interaction recognition method using a Hidden Markov Model (HMM). The technique employs hue, saturation, and intensity (HSI) color transformation to enhance colors in video frames, making them more vibrant and visually appealing, especially in low-contrast or washed-out scenes. Gaussian filters reduce noise and smooth imperfections, followed by silhouette extraction using a statistical method. Feature extraction uses the Features from Accelerated Segment Test (FAST) and Oriented FAST and Rotated BRIEF (ORB) techniques. Quadratic Discriminant Analysis (QDA) is applied for feature fusion and discrimination, which enables high-dimensional data to be effectively analyzed and further enhances the classification process; it ensures that the final features loaded into the HMM classifier accurately represent the relevant human activities. The accuracy rates of 93% and 94.6% achieved on the BIT-Interaction and UT-Interaction datasets, respectively, highlight the success and reliability of the proposed technique. The proposed approach addresses challenges in various domains by focusing on frame improvement, silhouette and feature extraction, feature fusion, and HMM classification, improving data quality, accuracy, adaptability, and reliability while reducing errors.

    KEYWORDS Human interaction recognition; HMM classification; quadratic discriminant analysis; dimensionality reduction

    1 Introduction

    Human interaction recognition (HIR) in computer vision refers to a system's ability to detect and interpret the gestures, expressions, and movements of humans engaged in face-to-face communication. This technology enables computers to comprehend and respond to human behavior, allowing for more natural and intuitive interactions between humans and machines. Applications of human-to-human interaction in computer vision include video conferencing, virtual reality, and gaming. By allowing computers to understand human behavior, this technology could revolutionize how we connect with technology and one another.

    HIR is a challenging computer vision problem that seeks to comprehend human behavior by analyzing visual data such as photos and videos. The goal is to recognize complex human-to-human interactions; however, this is difficult due to challenges such as viewpoint fluctuation, occlusion, ambiguity, data scarcity, and interaction complexity. As a result, the performance and applicability of most existing HIR approaches are limited. Advances in HIR could enable applications such as better video/image surveillance, improved human-computer interaction, and safer intelligent modes of transport. Image-based interaction recognition is more challenging than video-based action detection for several reasons: limited data, blurry image backgrounds, ambiguity (similar-looking interactions may have different meanings), the difficulty of annotating large amounts of training data, and the complexity of the interactions themselves. These challenges, together with diverse applications such as human-machine interaction, behavioral biometrics, surveillance systems, environmental intelligence, assisted living, and human-computer interaction, have driven interest in human interaction recognition. This research aims to develop a novel image-based human interaction recognition method that improves performance compared to existing methods. It is particularly significant because it can contribute to computer vision applications such as video surveillance, human-computer interaction, and intelligent transportation systems.

    Several methods have been proposed for recognizing human interactions, including histograms of oriented gradients (HOG), local binary patterns (LBP), and deep neural networks (DNNs) [1-6]. However, these methods have limitations and may not always provide optimal results. HOG and LBP rely on handcrafted features that may not generalize to a wide range of human interaction contexts; the features must be carefully designed and optimized for each new dataset. DNNs require a huge amount of labeled training data for human interaction recognition, which can be costly and time-consuming to acquire, and they have considerable computing requirements during training and inference, which limits their use in resource-constrained settings. Moreover, HOG/LBP and standard DNNs do not explicitly model the temporal dynamics of human interactions, which are essential for successful recognition.

    To address these limitations, we present a model that represents the developmental process of human interactions using Hidden Markov Models. Compared to other methods, HMMs require fewer parameters to train and can obtain good results with smaller training sets. Because HMMs have modest computing needs, they are appropriate for real-world applications, and they can model temporal interconnectedness and the development of human interactions through time. This study offers a unique human interaction recognition method that combines the strengths of QDA and the HMM. QDA is an effective feature fusion and discrimination tool for high-dimensional data analysis; the HMM is a statistical model for classifying sequential data that has been successfully applied to human interaction recognition. Our approach first pre-processes the images with an HSI color transformation to boost contrast and Gaussian filters to reduce noise. Statistical approaches are then utilized to extract the person's silhouette, and the FAST and ORB approaches are used to extract features. These features are passed through QDA for feature fusion and discrimination, and finally an HMM classifies them into the appropriate human activities. The proposed method achieved an accuracy of 93% on a dataset of human activities and shows potential for improving human interaction recognition performance. The HIR system can recognize complex human activities such as shaking hands, hugging, kicking, patting, pushing, high-fiving, bending, and boxing. The main contributions of our proposed model are:

    • Frame enhancement and extraction using HSI transformation and a Gaussian filter.

    • Silhouette extraction using statistical methods.

    • Feature extraction using FAST and ORB.

    • Quadratic Discriminant Analysis applied for feature fusion and discrimination, with the fused features classified by a Hidden Markov Model.

    The suggested framework's compact design makes it suitable for deployment on any edge device; everything can be done in real time with no waiting or extra processing. Nevertheless, complexity and computational requirements may limit the approach's suitability for very large datasets, noise or missing values in the input data may affect its accuracy and dependability, and significant domain-specific knowledge and experience may be required to implement and configure the algorithm.

    The research paper is organized into the following sections: Section 2 provides a detailed review of the related work in the field of HIR. Section 3 focuses on the design and structure of the proposed system, explaining the methodology and techniques used for image dataset pre-processing, feature extraction, and classification. Section 4 presents the experimental analysis and results of the proposed method, including the system's accuracy and efficiency in recognizing different human activities. Finally, Section 5 concludes the research paper by summarizing its findings.

    2 Related Work

    Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have recently emerged as promising candidates for human action recognition thanks to advancements in deep learning [7]. These methods can learn hierarchical representations of images and videos that can be used to recognize activities. CNN-based methods have shown impressive performance on various datasets and benchmarks but require a large amount of labeled data for training and have a high computational cost. Other handcrafted feature extraction techniques such as Motion History Image (MHI) [8] and Optical Flow, as well as 3D convolutional neural networks (3DCNNs), have also been proposed for HIR. Furthermore, certain hybrid approaches have been presented in the literature to enhance the performance of human action identification, such as integrating a CNN with an HMM [9]. In this setup, the CNN extracts features from the image, while the HMM models the temporal information of the activities.

    2.1 Image-Based HIR

    Human interaction recognition identifies and classifies human activities from visual data such as images or videos. For recognizing human-human interactions (HHI) in computer vision, researchers have proposed various methods that use machine learning classifiers such as Random Forest, Support Vector Machine (SVM), Decision Trees, and HMM to recognize and categorize interactions such as shake_hands, hug, and hifi into distinct classes [7,8]. These methods extract handcrafted elements, including spatial-temporal information, posture, and gesture, from images or videos and use classifiers to recognize and classify interactions. Human-object interaction (HOI) recognition, by contrast, analyzes human-object relationships in images and videos, recognizing and classifying activities such as carrying a bag or sitting on a chair. Researchers employ the same families of classifiers for these interactions by extracting information such as object location, object size, and object-human interaction from images and videos.

    Histograms of oriented gradients are one of the most popular approaches to human action recognition in images [9-11]; they extract features from static images that are stable against modifications such as brightness and contrast changes. Following feature collection, a classifier such as an SVM or linear discriminant analysis (LDA) is trained to recognize the activities from the collected data. HOG-based methods effectively recognize simple and repetitive activities but may not always provide optimal performance for more complex and varied activities. Another popular method for human interaction recognition is based on LBP [12,13], which encodes the local texture of the image. LBP-based methods have also been effective for recognizing simple and repetitive activities, but they may not be as robust to changes in lighting and scale as HOG-based methods.

    2.2 Markers-Based HIR

    Markers-based Human Interaction Recognition (HIR) is a computer vision technique that uses physical markers, typically placed on specific body joints or landmarks, to identify and track human movements and interactions in real time [14]. Markers accurately capture motion data, which algorithms may evaluate to recognize human activities and interactions. Virtual reality, gaming, sports, and rehabilitation use marker-based HIR for accurate motion analysis and intuitive user interfaces [15]. Although marker-based HIR produces high-quality data, it can be limited by the need for hardware, which may hinder the natural movements of the tracked subjects [16].

    Numerous studies have examined marker-based HIR methods. Hidden Markov Models are popular for modeling and recognizing human behavior [17]. HMMs are statistical models that can represent complex temporal patterns and have been effectively used in human activity recognition and motion analysis [18]; for example, HMMs were employed to recognize tennis actions from marker-based motion capture data [19,20]. Deep learning techniques, such as CNNs and RNNs, have been investigated by other researchers for marker-based HIR [21]. These methods recognize complicated activities and interactions better but need substantial training datasets and computing power. Overall, HMMs, deep learning algorithms, and other models have been studied to improve marker-based HIR recognition accuracy and efficiency.

    3 Structure of Designed System

    This research paper presents a novel approach to HIR. The suggested system consists of five steps. Frame contrast stretching occurs during pre-processing. The subsequent process extracts the person's silhouette. In the third phase, two sets of features are extracted using FAST and ORB and combined. The fourth step analyzes the high-dimensional features using the Quadratic Discriminant Analysis technique for feature fusion and discrimination. These features are subsequently used to train the HMM for the final stage of interaction recognition. The suggested architecture is displayed in Fig. 1.

    Figure 1: Flow diagram of the proposed HIR approach

    3.1 Pre-Processing

    Pre-processing helps avoid incorrect estimates of human behavior, which is why removing noise from the input frames is a crucial step before extracting essential features. In this study, we employ a simplified two-step pre-processing solution: (a) HSI color transformation and (b) selecting the optimal channel and applying a Gaussian filter.

    3.1.1 Selecting the Optimal Channel

    We apply the HSI transformation to the input video frame. The transformation decomposes the source frame β(x, y) into three channels: hue, saturation, and intensity. The red, green, and blue channels, represented by β_r(x, y), β_g(x, y), and β_b(x, y) respectively, are first normalized by dividing each by the sum of the three channels. The HSI transformation is then calculated using the following equations:

    H(x, y) = θ if β_b ≤ β_g, otherwise 2π − θ,  where θ = arccos( [(β_r − β_g) + (β_r − β_b)] / ( 2 √((β_r − β_g)² + (β_r − β_b)(β_g − β_b)) ) )

    S(x, y) = 1 − 3 · min(β_r, β_g, β_b) / (β_r + β_g + β_b)

    I(x, y) = (β_r + β_g + β_b) / 3

    Fig. 2 represents the HSI decomposition of the input image. The hue, saturation, and intensity channels are displayed as grayscale images, showing the original image's color information, vividness, and brightness. The figure highlights each channel's key features and the importance of considering different color channels in image processing and analysis.

    Figure 2: Hue-Saturation-Intensity (HSI) representation of the original image
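    As a concrete illustration, the channel decomposition described above can be sketched in NumPy. This is a minimal sketch of the standard RGB-to-HSI conversion, not the authors' exact implementation; the function name `rgb_to_hsi` and the `eps` guard against division by zero are our own choices.

```python
import numpy as np

def rgb_to_hsi(frame):
    """Convert an RGB frame (H x W x 3, floats in [0, 1]) to HSI channels."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    eps = 1e-8  # avoids division by zero on black pixels
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
    # hue from the arccos form of the HSI transform
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b <= g, theta, 2 * np.pi - theta) / (2 * np.pi)  # normalised to [0, 1]
    return hue, saturation, intensity
```

    For a pure-red frame this yields hue ≈ 0, saturation 1, and intensity 1/3, as expected from the equations above.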

    3.1.2 Gaussian Filter

    The Gaussian filter is a widely used image-processing technique in computer vision applications for smoothing or blurring images [19-24]. It is based on the Gaussian function, a bell-shaped curve that weights pixels in an image. To pre-process our image data, we used the Gaussian filter in conjunction with the HSI transformation: after the transformation divides the original image into hue, saturation, and intensity channels, we applied the Gaussian filter to each channel separately. This removes noise and high-frequency content from the images while retaining edges and other critical elements. Furthermore, because of its low-pass character, the Gaussian filter also lessens the amount of aliasing in an image.

    The Gaussian filter can be represented by Eq. (6):

    G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))    (6)

    where G(x, y) is the Gaussian filter, e is Euler's number, and σ is the standard deviation of the Gaussian function. The standard deviation determines the spread of the Gaussian function and directly controls the amount of smoothing or blurring applied to the image. Gaussian filtering results are shown in Fig. 3.

    Figure 3: Gaussian filtering applied, highlighting the improved contrast and reduced noise; (a) hug, (b) kick, and (c) shake_hands

    We have used this filter because it is a simple and effective technique for reducing image noise while preserving important features, which is crucial for accurately recognizing human activities in our research. By applying it separately to each channel after the HSI transformation, we can improve the performance of our image-based human interaction recognition method.
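    A minimal per-channel smoothing pass might look as follows. Since the 2-D Gaussian separates into two 1-D passes, the sketch filters rows and then columns; the function names and the reflect-padding choice are our own assumptions, not details given in the paper.

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius=None):
    """Normalised 1-D Gaussian kernel, truncated at ~3 sigma by default."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_smooth(channel, sigma=1.0):
    """Separable Gaussian blur of a single 2-D channel (reflect padding)."""
    k = gaussian_kernel_1d(sigma)
    r = len(k) // 2
    padded = np.pad(channel, r, mode='reflect')
    # filter along rows, then along columns
    rows = np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), 1, padded)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), 0, rows)
```

    Because the kernel is normalised, a constant channel passes through unchanged and the total energy of an isolated impulse is preserved.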

    3.2 Silhouette Extraction

    Silhouette extraction is crucial to object recognition, tracking, and segmentation in computer vision, and statistical silhouette extraction is robust and accurate. Models such as the Gaussian Mixture Model, the Expectation-Maximization algorithm, K-Means clustering, Mean-Shift, and Spectral Clustering are used for image segmentation, object recognition, and data analysis [25-28]. These models use statistical properties of the data to derive insights and perform various tasks.

    In this work, we suggest using the Gaussian Mixture Model (GMM) to extract silhouettes. Our method performs thresholding and inverse thresholding on the image to produce a binary mask. After applying the mask, the GMM pulls the silhouette out of the image. The resulting silhouette is a two-dimensional binary image, with foreground pixels denoting the subject and background pixels indicating the background. The output is represented by the silhouette superimposed on the original color image against a black backdrop. The results are displayed in Fig. 4, demonstrating the accuracy and efficiency with which our method achieves silhouette extraction.

    The GMM density is given by Eq. (7):

    p(x) = Σ_{i=1}^{k} w_i φ(x | μ_i, Σ_i)    (7)

    where p(x) is the probability density function, w_i is the weight of the i-th Gaussian component, φ is the Gaussian distribution function, k is the number of Gaussian components, μ_i is the mean vector, and Σ_i is the covariance matrix.

    Figure 4: Silhouette extraction using a statistical method; (a) hug, (b) kick, and (c) shake_hands

    The algorithm extracts an object's silhouette from an input image. It converts the image to grayscale, applies a GMM background subtractor, and thresholds the foreground mask; if the mean value of the mask exceeds a certain threshold, the mask is inverted to produce the binary image. The silhouette is shown on a black backdrop, with the original image overlaid by the silhouette.
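    To make the statistical idea concrete, a two-component 1-D Gaussian mixture fitted by EM can separate foreground from background intensities in a single grayscale image. This is a simplified sketch under our own assumptions (the paper's GMM background subtractor works on frame sequences, and the "brighter component is foreground" rule is purely illustrative); all names are ours.

```python
import numpy as np

def fit_gmm_1d(x, n_iter=50):
    """Two-component 1-D Gaussian mixture fitted with EM."""
    mu = np.array([x.min(), x.max()], dtype=float)   # spread the initial means
    var = np.array([x.var() + 1e-6] * 2)
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities under p(x) = sum_i w_i * N(x | mu_i, var_i)
        pdf = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = pdf / (pdf.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: re-estimate weights, means, variances
        nk = resp.sum(axis=0) + 1e-12
        w, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var

def silhouette_mask(gray):
    """Binary silhouette: pixels assigned to the brighter mixture component."""
    x = gray.ravel().astype(float)
    w, mu, var = fit_gmm_1d(x)
    fg = int(np.argmax(mu))  # illustrative: treat the brighter component as foreground
    pdf = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return (pdf.argmax(axis=1) == fg).reshape(gray.shape)
```

    On a dark frame containing a bright subject, the mask recovers the subject region as foreground.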

    3.3 Feature Extraction

    Despite its ubiquity and effectiveness, the FAST algorithm has certain shortcomings; a key limitation is its sensitivity to noise and to complexity in images. The combined use of the FAST and ORB feature extractors is well recognized as an excellent method for image feature extraction. The FAST detector is known for its great computational speed and repeatability, while the ORB descriptor is known for its resistance to scale and rotation changes. Combining these two strategies yields a robust feature extraction strategy that can handle many tough conditions: it encapsulates high-speed computation and resistance to diverse transformations. Furthermore, utilizing ORB descriptors helps reduce the number of false matches in the matching stage, enhancing accuracy and efficiency.

    3.3.1 Features from Accelerated Segment Test(FAST)

    The FAST algorithm is a popular technique for detecting and extracting key characteristics from digital images. Many computer vision applications, such as object recognition, tracking, and registration, rely on feature detection, and extracting features from images accurately and effectively is critical in many computer vision systems. Feature detection algorithms generally seek out identifiable and recurrent structures in images. These keypoint structures define distinct points in the image that can be used to identify and track objects across multiple frames or images.

    The FAST algorithm works by comparing a pixel's intensity value with the values of its surrounding pixels in a circular pattern. If a certain number of contiguous pixels have intensities that are either higher or lower than the central pixel, the central pixel is marked as a corner. First, the algorithm calculates the difference between the intensity of the center pixel and each of its surrounding pixels on a circle of radius 3 pixels:

    d_i = | I(p_i) − I(p_c) |,  i = 1, ..., 16

    where p_c is the center pixel and p_i are the surrounding pixels.

    Next, the algorithm selects a threshold value T, and a pixel p_c is considered a corner if there exist n contiguous pixels on the circle around p_c whose intensities are all greater than I(p_c) + T or all less than I(p_c) − T:

    I(p_i) > I(p_c) + T for all i in the segment,  or  I(p_i) < I(p_c) − T for all i in the segment

    Finally, to speed up the algorithm, the threshold value T is calculated as a fraction k of the maximum intensity range:

    T = k · (max(I) − min(I))

    where I is the image and k is a defined constant.

    The results presented in Fig. 5 were obtained after extracting features from the images using FAST. The detected features were then used for subsequent analysis and evaluation.

    Figure 5: Feature extraction using FAST; (a) hug, (b) kick, and (c) shake_hands
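    The segment test described above can be sketched directly. This naive implementation is for illustration only (production code would use an optimized detector with the high-speed test and non-maximum suppression); the specific values of `k` and `n` are assumptions.

```python
import numpy as np

# 16-pixel Bresenham circle of radius 3 used by the FAST segment test
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def fast_corners(img, k=0.1, n=9):
    """Naive FAST: p_c is a corner if n contiguous circle pixels are all
    brighter than I(p_c) + T or all darker than I(p_c) - T, T = k * range."""
    T = k * (img.max() - img.min())
    h, w = img.shape
    corners = []
    for y in range(3, h - 3):
        for x in range(3, w - 3):
            pc = img[y, x]
            ring = np.array([img[y + dy, x + dx] for dy, dx in CIRCLE])
            for sign in (ring > pc + T, ring < pc - T):
                wrapped = np.concatenate([sign, sign])  # handle wrap-around runs
                run = best = 0
                for v in wrapped:
                    run = run + 1 if v else 0
                    best = max(best, run)
                if best >= n:
                    corners.append((y, x))
                    break
    return corners
```

    On a synthetic bright square, the square's corners satisfy the segment test while pixels deep inside the square do not.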

    3.3.2 Oriented FAST and Rotated BRIEF

    This study employed the FAST and ORB algorithms for feature extraction on the BIT-Interaction and UT-Interaction datasets, which focus on human-human interactions. ORB combines the FAST keypoint detector with the rotated BRIEF descriptor, resulting in an efficient algorithm suitable for real-time applications. The algorithm first detects key points in the image using FAST, which selects points with a large intensity difference from their neighboring pixels. It then computes the orientation of each keypoint using the intensity distribution around it. Finally, the ORB descriptor is calculated for each keypoint: a binary string that encodes the relative intensities of pixel pairs around the keypoint. The resulting descriptors are robust to scale, rotation, and illumination changes, making them suitable for human-human interaction recognition.

    Our experiments on the BIT-Interaction dataset demonstrated the ORB algorithm's effectiveness in recognizing various human-human interactions. The ORB features were extracted from the images and used to train a machine-learning model, which achieved an accuracy of over 93% on BIT-Interaction and 94.6% on UT-Interaction. The speed and accuracy of ORB make it an ideal feature extraction method for real-time applications such as human-human interaction recognition in video surveillance systems.

    The binary tests that form the descriptor are:

    f_d = τ(p_i, p_j),  with τ(p_i, p_j) = 1 if I(p_i) < I(p_j) and 0 otherwise

    where p_i and p_j are pairs of points sampled from a circular region around the keypoint, and d denotes the descriptor index.

    Computing the Hamming distance between two ORB descriptors:

    D(f_1, f_2) = Σ_{i=1}^{N} f_{1,i} ⊕ f_{2,i}

    where N is the number of elements in the ORB descriptor, ⊕ denotes the bitwise XOR operation, and f_{1,i} and f_{2,i} are the i-th elements of the two descriptors being compared.

    These equations are used in various stages of the ORB algorithm, such as keypoint detection, descriptor computation, and feature matching; results are shown in Fig. 6.

    Figure 6: Feature extraction using ORB; (a) hug, (b) kick, and (c) shake_hands
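    The Hamming-distance matching used with ORB-style binary descriptors can be sketched as follows. The brute-force matcher and its `max_dist` threshold are illustrative assumptions; descriptors are represented here as 0/1 arrays rather than packed bytes.

```python
import numpy as np

def hamming_distance(f1, f2):
    """Hamming distance between two binary descriptors: D = sum_i f1_i XOR f2_i."""
    return int(np.count_nonzero(f1 != f2))

def match_descriptors(desc_a, desc_b, max_dist=64):
    """Brute-force matching: for each descriptor in A, the nearest in B by
    Hamming distance, kept only if the distance is at most max_dist."""
    matches = []
    for i, fa in enumerate(desc_a):
        dists = [hamming_distance(fa, fb) for fb in desc_b]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j, dists[j]))
    return matches
```

    Thresholding on the distance is what discards the false matches mentioned above: a candidate pair that differs in too many bits is simply dropped.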

    3.4 Feature Fusion and Discrimination

    In recent years, feature extraction has become integral to many computer vision applications, including object recognition, image matching, and scene reconstruction. One of the main challenges is achieving high accuracy and robustness, which requires combining multiple feature descriptors. In this paper, we explore feature fusion and discrimination to improve performance on the BIT-Interaction and UT-Interaction datasets. We extracted features using the FAST and ORB feature detectors and saved them to a CSV file. The next step fuses these features into a more comprehensive representation of the dataset; to achieve this, we explore several fusion methods, including feature concatenation, feature averaging, and feature weighting. Once the fused feature representation is obtained, we use QDA to discriminate between the different classes in the dataset. This approach helps overcome the limitations of using a single feature descriptor, improving performance and accuracy in computer vision tasks. Our experimental results demonstrate that the proposed feature fusion and discrimination approach outperforms the individual feature descriptors in discrimination accuracy, robustness, and speed.

    Quadratic Discriminant Analysis (QDA) formula:

    f_k(x) = −(1/2) ln|Σ_k| − (1/2) (x − μ_k)^T Σ_k^{−1} (x − μ_k) + ln P(Y = k)

    In this formula, f_k(x) is the discriminant function that scores an observation x for class k, μ_k is the mean vector of the features, Σ_k is the covariance matrix of the features, |Σ_k| denotes the determinant of Σ_k, and P(Y = k) is the prior probability of class k.

    Weighted feature fusion formula:

    F_fuse = Σ_{i=1}^{n} w_i F_i

    In this formula, F_fuse is the fused feature representation, F_i are the individual feature descriptors, w_i are the weights assigned to each feature descriptor, and n is the total number of feature descriptors.

    Feature discrimination formula:

    d_ij = (μ_ij − μ_ik)² / (σ²_ij + σ²_ik)

    where d_ij is the discriminant value of feature i for classes j and k, μ_ij is the mean of feature i in class j, μ_ik is the mean of feature i in class k, and σ²_ij and σ²_ik are the variances of feature i in classes j and k, respectively. Fig. 7 represents the feature fusion and discrimination results using QDA.

    Figure 7: Feature fusion and discrimination results using QDA
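    A minimal NumPy version of the QDA discriminant described above might look like this. It is a sketch, not the paper's implementation; the small ridge term added to each covariance matrix is our own numerical safeguard.

```python
import numpy as np

def qda_fit(X, y):
    """Per-class mean, covariance, and prior, as required by the QDA discriminant."""
    params = {}
    for k in np.unique(y):
        Xk = X[y == k]
        params[k] = (Xk.mean(axis=0),
                     np.cov(Xk, rowvar=False) + 1e-6 * np.eye(X.shape[1]),  # ridge for stability
                     len(Xk) / len(X))
    return params

def qda_predict(X, params):
    """Assign each sample to the class maximising
    f_k(x) = -1/2 ln|S_k| - 1/2 (x - mu_k)^T S_k^-1 (x - mu_k) + ln P(Y = k)."""
    classes = sorted(params)
    scores = []
    for k in classes:
        mu, cov, prior = params[k]
        diff = X - mu
        maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        scores.append(-0.5 * np.log(np.linalg.det(cov)) - 0.5 * maha + np.log(prior))
    return np.array(classes)[np.argmax(scores, axis=0)]
```

    Because each class keeps its own covariance matrix, QDA produces quadratic decision boundaries, which is what lets it discriminate fused high-dimensional features that LDA's shared covariance would blur together.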

    3.5 Hidden Markov Models

    Hidden Markov Models (HMMs) are a class of probabilistic graphical models that capture the underlying dynamics of a system with hidden (unobservable) states [19-21]. These models have been widely used in speech recognition, natural language processing, bioinformatics, and finance. In this research, we employ an HMM to model the hidden states and transitions of an 8-class dataset.

    The following components define an HMM, also illustrated in Fig. 8:

    A set of N hidden states, S = {s1, s2, ..., sN}.

    A set of M observable states, O = {o1, o2, ..., oM}.

    Transition probabilities between hidden states, A = {a_ij}, where a_ij = P(s_j | s_i).

    Emission probabilities of observable states given hidden states, B = {b_j(k)}, where b_j(k) = P(o_k | s_j).

    Initial state probabilities, π = {π_i}, where π_i = P(s_i).

    The HMM can be represented as a tuple λ = (A, B, π).

    Figure 8: A simple illustration of a Hidden Markov Model. The circles represent hidden and observable states, while the arrows show the possible transitions between states

    There are various methods to estimate the HMM parameters, including Maximum Likelihood Estimation (MLE) and the Expectation-Maximization (EM) algorithm, also known as the Baum-Welch algorithm. The MLE of the initial state probabilities π can be computed as:

    π̂_i = γ_1(i)

    where γ_1(i) is the probability of being in state i at time 1, given the observations.

    The MLE of the transition probabilities A can be computed as:

    â_ij = Σ_{t=1}^{T−1} ξ_t(i, j) / Σ_{t=1}^{T−1} γ_t(i)

    where ξ_t(i, j) is the joint probability of being in states i and j at times t and t+1, respectively, given the observations, and γ_t(i) is the probability of being in state i at time t, given the observations.

    The MLE of the emission probabilities B can be computed as:

    b̂_j(k) = Σ_{t=1}^{T} γ_t(j) · 1_{o_t = k} / Σ_{t=1}^{T} γ_t(j)

    where 1_{o_t = k} is an indicator function that is equal to 1 if o_t = k and 0 otherwise.

    Maximum Likelihood Estimation of the HMM parameters involves finding the parameters that maximize the likelihood of observing the given sequence of observations O. The likelihood of the observations can be expressed as:

    P(O | λ) = Σ_{S} π_{s_1} b_{s_1}(o_1) Π_{t=2}^{T} a_{s_{t−1} s_t} b_{s_t}(o_t)

    where λ = (A, B, π) is the set of HMM parameters and the sum is taken over all possible hidden state sequences S that could have generated the observed sequence. Computing this sum directly is infeasible for large state spaces, but it can be done efficiently using the Forward-Backward algorithm.
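    The forward recursion that makes this likelihood tractable can be sketched in a few lines. This toy NumPy version assumes discrete observations given as indices into the columns of B; it is an illustration of the standard algorithm, not the authors' implementation.

```python
import numpy as np

def forward(A, B, pi, obs):
    """Forward algorithm: P(O | lambda) for an HMM (A, B, pi) and an
    observation index sequence obs, summing over all hidden state paths."""
    alpha = pi * B[:, obs[0]]          # alpha_1(i) = pi_i * b_i(o_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # alpha_t(j) = sum_i alpha_{t-1}(i) a_ij * b_j(o_t)
    return alpha.sum()
```

    The recursion costs O(N²T) instead of the O(N^T) cost of enumerating every hidden sequence, yet it computes exactly the same sum.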

    4 Experimental Analysis and Results

    This study uses Hidden Markov Models (HMMs) as a classifier to analyze our proposed approach's performance. The experimental process was conducted with great attention to detail, and the numerical results were thoroughly analyzed. Our approach was evaluated using measures such as precision, recall, F1-score, and support, which were all calculated to give a comprehensive understanding of the classifier's performance. The results showed that our HMM-based approach achieved an accuracy of 93%, demonstrating the potential of the proposed method for real-world applications.

    4.1 BIT/UT Interaction Dataset

    The BIT/UT-Interaction dataset [27] was used in this work. Our proposed solution was implemented in Visual Studio Code; we took a dataset of human interaction frames and extracted features. The dataset was randomly divided into training and testing sets, with a 70% training size and a 30% testing size. We employed numerous measures to evaluate the performance of the suggested technique, including precision, recall, and F1-score, and we also report the support for each class. BIT contains video recordings of human interactions from eight classes: shake_hands, hug, kick, pat, push, hifi, bend, and box. The dataset is of exceptional quality, with a resolution of 640 × 480 pixels and a total size of 4.4 GB. The videos were shot with a high-quality camera and show various people engaging in natural interactions. The dataset was pre-processed to extract the features essential for our HMM-based classifier, and the resulting dataset was used to test how well the proposed method works.

    4.2 Performance Measures

    This section provides an in-depth review of the recognition results generated with our suggested HMM-based technique. HMMs are a class of probabilistic models widely employed in pattern recognition and speech recognition tasks. HMMs represent a series of observations through hidden states that are not immediately observable but may be deduced from the observations. We used HMMs as a classifier to recognize human interactions based on sequences of extracted features; the HMMs were trained on the BIT/UT-Interaction dataset to classify new instances. HMMs have several advantages, including the capacity to model temporal dependencies and the flexibility to handle variable sequence lengths. These characteristics make HMMs well-suited for recognizing human interactions, which frequently entail complicated and diverse movement sequences. Our results show the efficacy and resilience of the recommended approach for detecting HHI in real-world contexts.

    We tested our approach on the BIT-Interaction dataset, on which it performed admirably, achieving 93% accuracy overall. Support in machine learning refers to the number of occurrences of each class in the dataset; in Table 1, the "Support" column reflects the number of cases from each class used to train and evaluate the approach. High recognition rates and reliability were attained across all eight interaction classes, with F1-scores between 0.82 and 1.00. Specifically, our approach achieved scores of 0.90 for shake_hands, 1.00 for hug, 0.96 for kick, 1.00 for pat, 0.90 for push, 0.92 for hifi, 0.82 for bend, and 0.88 for box.
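    The per-class measures reported here (precision, recall, F1, and support) all follow from the confusion matrix, and can be computed as below. This is a generic sketch, not the evaluation code used in the paper; the function name is our own.

```python
import numpy as np

def per_class_metrics(y_true, y_pred, n_classes):
    """Precision, recall, F1, and support per class from a confusion matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                      # rows = true class, columns = predicted
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)   # TP / predicted positives
    recall = tp / np.maximum(cm.sum(axis=1), 1)      # TP / actual positives
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    support = cm.sum(axis=1)               # number of true instances per class
    return precision, recall, f1, support
```

    Support here is exactly the row sum of the confusion matrix, matching the "Support" column of Table 1.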

    Table 1: Performance measures of proposed HMM-based approach for recognizing human interactions

    Fig. 9 shows the classification results of our HMM-based approach to recognizing human interactions. The confusion matrix is essential for evaluating and improving classifiers; it shows that our method correctly classified most interaction types, with only a few exceptions. This confirms that our method for identifying human interactions in real-world environments works well and is resilient. Table 2 presents a comparison of HIR accuracy across different techniques.
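    A confusion matrix such as the one in Fig. 9 is built by tallying predicted against true labels; its diagonal counts the correct classifications. The labels below are illustrative, not the reported results.

```python
# Building a confusion matrix from true vs. predicted labels (toy data).
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1  # rows: true class, columns: predicted class
    return cm

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)
print(cm.trace(), "/", len(y_true))  # 4 of 6 correct -> ~67% accuracy
```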

    Table 2: Comparison of human interaction recognition accuracy using different techniques on the selected dataset

    Figure 9: Confusion matrix of the proposed HMM-based approach for recognizing human interactions: (a) displays BIT-Interaction results, while (b) displays UT-Interaction results

    5 Conclusion

    This study presents a novel HMM-based method for recognizing human interactions in images. The method achieves a high accuracy of 93% on the BIT-Interaction dataset and 94.6% on the UT-Interaction dataset. The suggested approach includes several crucial phases: frame improvement and extraction, silhouette extraction, feature extraction, feature fusion and discrimination, and classification using an HMM. Our method is also computationally efficient, making it appropriate for real-time edge-device applications. This study contributes to the expanding field of computer vision and pattern recognition and has real-world applications in biometrics, surveillance, and human-computer interaction. Future improvements could incorporate deep learning techniques such as CNNs and RNNs for feature extraction and classification. Testing the proposed method on larger datasets and in more complex environments could also assess its generalizability and effectiveness.

    Acknowledgement: None.

    Funding Statement: The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Group Funding Program Grant Code (NU/RG/SERC/12/6). This study is supported via funding from Prince Sattam bin Abdulaziz University Project Number (PSAU/2023/R/1444), and Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R348), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. This work was also supported by the Ministry of Science and ICT (MSIT), South Korea, through the ICT Creative Consilience Program supervised by the Institute for Information and Communications Technology Planning and Evaluation (IITP) under Grant IITP-2023-2020-0-01821.

    Author Contributions: Study conception and design: Tanvir Fatima Naik Bukht, Jaekwang Kim; data collection: Naif Al Mudawi, Saud S. Alotaibi and Aisha Ahmed AlArfaj; analysis and interpretation of results: Tanvir Fatima Naik Bukht, Abdulwahab Alazeb and Mohammed Alonazi; draft manuscript preparation: Tanvir Fatima Naik Bukht and Ahmad Jalal. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials:All publicly available datasets are used in the study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
