
    Behavior Recognition of the Elderly in Indoor Environment Based on Feature Fusion of Wi-Fi Perception and Videos

2023-05-13 09:24:54

    Yuebin Song, Chunling Fan

Abstract: With the intensifying aging of the population, the phenomenon of the elderly living alone is also increasing. Therefore, using modern internet of things technology to monitor the daily indoor behavior of the elderly is a meaningful study. Video-based action recognition is easily affected by object occlusion and weak ambient light, resulting in poor recognition performance. Therefore, this paper proposes an indoor human behavior recognition method based on the fusion of wireless fidelity (Wi-Fi) perception and video features, exploiting the ability of Wi-Fi signals to carry environmental information during propagation. The public WiFi-based activity recognition dataset (WIAR), which contains Wi-Fi channel state information and the corresponding action videos, is used; video feature vectors and Wi-Fi signal feature vectors are extracted with a two-stream convolutional neural network and standard statistical algorithms, respectively. The two sets of feature vectors are then fused, and finally action classification and recognition are performed by a support vector machine (SVM). The experiments compare the two-stream network model with the proposed method under three different environments, and the accuracy of action recognition after adding Wi-Fi signal feature fusion is improved by 10% on average.

Keywords: human behavior recognition; two-stream convolutional neural network; channel state information; feature fusion; support vector machine (SVM)

    1 Introduction

With the intensifying aging of the population, the phenomenon of the elderly living alone has also gradually increased. At the same time, with the growth of age, the body functions of the elderly begin to degenerate, and the probability of accidents also gradually increases. Surveys show that the probability of the elderly getting help after indoor accidents is only about 65%, among which falls and collisions account for 80% of accident types [1, 2]. They are also the main causes of injury or death, owing largely to the lack of health supervision for the elderly [3]. Therefore, it is a meaningful study to improve indoor detection of elderly activities and reduce the occurrence of accidents by utilizing modern artificial intelligence and internet of things technologies.

With the development of machine learning and artificial intelligence, computer vision has made good progress in human behavior recognition [4, 5], video event retrieval [6], abnormal behavior detection [7–9], and other video tasks. Research on human behavior recognition is also of great significance to the development of these video recognition tasks. With the in-depth study of deep learning and the continuous expansion of public datasets, research on human behavior recognition has diversified, among which video-based human behavior recognition is the most widely used [10]. However, while most videos in public training datasets have good quality and prominent motion targets, high-quality action videos are often difficult to obtain in daily life because of diverse and complex living environments. This makes video-based action recognition vulnerable to environmental factors such as varying target angles, camera shake, background clutter, and target occlusion, resulting in the loss of action information and degraded recognition performance [11, 12]. Therefore, improving the performance of human behavior recognition under such environmental interference is a research topic of great significance.

There are many deep learning methods for the human behavior recognition task [13–15], including traditional convolutional neural networks (CNN), the 3 dimensional convolutional neural network (3DCNN), time series models based on the long short-term memory network, the two-stream neural network, etc. Among them, Tran et al. proposed the convolutional 3D network (C3D) [16], in which 3D convolution and 3D pooling operations can simultaneously obtain temporal and spatial feature information from continuous video frames, so as to realize behavior recognition of objects in video. Kiwon Rhee et al. used depth visual guidance to apply electromyography (EMG) signals to gesture recognition tasks [17]. This method uses information other than video to complete gesture recognition, opening the way for more multi-feature fusion recognition tasks. Hochreiter et al. proposed the long short-term memory (LSTM) [18], which uses three different gates to preserve and forget information, making up for the gradient explosion and vanishing problems of the original recurrent neural network (RNN). Wen Qi et al. used recurrent neural networks to study multi-sensor-guided gesture recognition [19]. Based on a multi-sensor fusion model, a multilayer RNN consisting of an LSTM module and a dropout layer (LSTM-RNN) is presented for multi-gesture classification and shows strong anti-interference ability. Ref. [20] used a multi-modal wearable remote control robot to complete the task of breath pattern detection. Simonyan et al. proposed the two-stream convolutional network for human behavior recognition tasks [21]. The network consists of spatial and temporal convolutional networks that do not interfere with each other; the two networks extract their own features, which are fused in a certain way before classification and recognition are performed. The network makes full use of the spatiotemporal information in the action video and improves the performance of video-based action recognition. However, in practical applications, the action video used to extract optical flow information in the two-stream convolutional neural network is easily affected by environmental factors, resulting in the loss of action information, incomplete optical flow extraction, and degraded action recognition performance.

With the application of orthogonal frequency division multiplexing (OFDM) [22] technology, researchers have found that channel state information (CSI) signals have a higher sensitivity to the environment during propagation and can provide more specific, fine-grained information [23, 24]. In 2014, Halperin et al. released a CSI measurement tool based on the Intel-5300 network adapter (CSI-Tool) [25], and a large number of research works on CSI-based perception have since emerged. Among them, Wang et al. took CSI as the detection signal of human fall behavior [26], which provides a reference for using CSI signals in human behavior recognition tasks. In [27], Guo et al. constructed a WiFi-based activity recognition dataset (WIAR) and proposed a Human Activity (HuAc) system, which introduces a subcarrier selection mechanism based on the sensitivity of subcarriers to human behavior and establishes a relationship library between CSI and human joint activity to achieve pattern matching. Ref. [28] presented a Wi-Fi-enabled gesture recognition using dual-attention network (WIGRUNT) model based on the residual network (ResNet), which uses a dual-attention mechanism to distinguish whether there are gesture movements and to identify action categories by dynamically focusing on gesture movements in the time dimension and evaluating the correlation of CSI sequences.

The main contributions and innovations of this paper are as follows. This paper proposes using the action information carried by wireless fidelity (Wi-Fi) channel state information to compensate for the action information lost when the video action recognition task is affected by environmental factors. It designs an indoor human behavior recognition method based on the fusion of Wi-Fi signal perception and video features. The method extracts Wi-Fi signal features with a standard statistical algorithm and video action features with a two-stream convolutional neural network. The two groups of feature vectors are fused [29, 30] and input into a support vector machine (SVM) [31, 32] for classification training. Finally, comparison experiments in three different environments verify that the recognition model after feature fusion has higher accuracy.

    2 Overall Scheme Design

The indoor human behavior recognition method based on Wi-Fi signal perception and video feature fusion is mainly divided into three parts: video feature extraction, Wi-Fi signal CSI feature extraction, and feature fusion and classification. Fig.1 shows the workflow of the method. In the video feature extraction part, a two-stream convolutional neural network is used to extract the action features in the video. A standard statistical algorithm is used to extract the Wi-Fi signal features from the acquired Wi-Fi signals. Finally, the two groups of feature vectors are fused and input into the SVM for classification training.

    Fig.1 Human behavior recognition process design based on feature fusion of Wi-Fi perception and video

    3 Methodology

    3.1 Video Feature Extraction

The information contained in a video can be divided into two dimensions: space and time. The spatial information mainly includes the background and object information in the video. The temporal information refers to the relative displacement of the object's position along the time axis across continuous video frames. The two-stream network-based human behavior recognition framework proposed by Simonyan et al. makes full use of the spatiotemporal information of action videos [21]. The framework includes two independent convolutional networks. One is used to extract the features of a single video image, that is, the spatial information, and is called the spatial convolutional network. The other is used to extract the optical flow information of the video, that is, the temporal information, and is called the temporal convolutional network. The two networks are trained on single frames of the sample video and on optical flow images extracted from continuous video frames, respectively. Finally, the feature vectors extracted by the two networks are fused through model fusion to obtain the global feature representation of the sample. Fig.2 shows the basic structure of a two-stream network.

    Fig.2 Two-stream convolution neural network framework

The upper part of Fig.2 is the structure of the spatial convolutional network. The spatial convolutional network is a classification model trained on the target features of single video frames. When extracting spatial features, the extracted video frames need to be pre-processed. First, all video frames are rescaled to 256×256, and 25 frames are selected at equal spacing. Then, 224×224 sub-images are cropped from each frame; starting from the top left corner and repeating for the other three corners and the center, cropping and flipping yield 5 sub-images per frame. Finally, the sub-images are input into the spatial network to extract features. The spatial network in the figure has 8 layers in total, covering five convolutional layers, three pooling layers, two fully connected layers, and a classification layer (softmax), in which the size and stride of the convolution kernels are set per layer. For example, in the first layer, the convolution kernel size is 7×7 and the number of kernels is 96. The pooling layers are of size 2×2, two fully connected layers follow the last pooling layer, and a feature vector of 2 048 dimensions is output at the seventh layer. Few training samples often lead to overfitting, i.e., a high recognition rate during training due to the small training set but a low recognition rate during testing. To reduce overfitting, the dropout function is added to the fully connected layers so that some neuron features do not participate in training while their feature values are retained in the model, which improves the robustness of the neural network. The values of the two dropout layers in this paper are 0.5 and 0.9, respectively.
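The frame preprocessing described above can be sketched as follows. This is a minimal illustration, assuming frames are NumPy arrays already rescaled so that 224×224 crops fit; the function names are ours, not from the paper.

```python
import numpy as np

def five_crops(frame, size=224):
    """Crop the four corners and the center of a rescaled frame,
    giving the 5 sub-images per frame described in the text."""
    h, w = frame.shape[:2]
    coords = [(0, 0), (0, w - size), (h - size, 0),
              (h - size, w - size), ((h - size) // 2, (w - size) // 2)]
    return [frame[y:y + size, x:x + size] for y, x in coords]

def sample_indices(n_frames, n_samples=25):
    """Select 25 frame indices at equal spacing across the video."""
    return np.linspace(0, n_frames - 1, n_samples).astype(int)
```

Horizontal flips of the crops can be added in the same way for further augmentation.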

The lower part of Fig.2 shows the temporal convolutional network structure. When extracting temporal features, video frames are converted into optical flow maps. To reduce storage space, the dense optical flow fields are compressed like red, green and blue (RGB) images: all rescaled optical flow values are saved as integers in [0, 255] and stored as JPEG pictures. In this way, each optical flow map takes little more than 10 kbit, and the storage space is greatly reduced. The resulting optical flow maps are then rotated and cropped, similar to the video frame preprocessing in the spatial network. Finally, the 224×224×2L optical flow information for each sub-image is computed and fed into the temporal convolutional network to extract temporal features. The temporal convolutional network calculates the displacement of pixels along the time axis in continuous video frames. The displacements of all pixels form an optical flow field, which is decomposed into horizontal and vertical components; the vector information of these two directions constitutes the channel input of the convolutional network, and temporal features are extracted by convolution. Concretely, the optical flow is regarded as a set of displacement vectors d_t between adjacent video frames t and (t+1). For a pixel (u, v), the vector d_t(u, v) describes its displacement from frame t to frame (t+1). The flow field is decomposed into a horizontal component d_t^x and a vertical component d_t^y, each of which can be regarded as an input channel of the image. To represent movement across continuous video frames, the temporal network stacks the flow fields d_t^x and d_t^y of L adjacent frames, forming 2L input channels. Let the width and height of the video be w and h, respectively. For any frame τ, the input volume I_τ(u, v, c) ∈ R^(w×h×2L) of the convolutional network is constructed as

I_τ(u, v, 2k−1) = d_{τ+k−1}^x(u, v),
I_τ(u, v, 2k) = d_{τ+k−1}^y(u, v), u ∈ [1; w], v ∈ [1; h], k ∈ [1; L].
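The stacking of L pairs of horizontal and vertical flow fields into a 2L-channel input can be sketched as below, assuming the per-frame flow fields are already computed as NumPy arrays (the function name is ours):

```python
import numpy as np

def stack_flow(flows_x, flows_y):
    """Interleave L horizontal (d^x) and vertical (d^y) flow fields
    of shape (h, w) into one (h, w, 2L) input volume, in the channel
    order (d^x_t, d^y_t, d^x_{t+1}, d^y_{t+1}, ...)."""
    channels = []
    for dx, dy in zip(flows_x, flows_y):
        channels.append(dx)
        channels.append(dy)
    return np.stack(channels, axis=-1)
```

With L = 10 adjacent frames, this yields the 224×224×20 input described above.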

Finally, the two features are fused through model fusion, as shown on the right side of Fig.2. The feature vectors extracted from the spatial and temporal convolutional networks are arranged in a particular order to obtain the global features of the samples. This paper uses the two-stream convolutional neural network framework displayed in Tab.1. In this framework, features of 2 048 dimensions are extracted from the spatial and temporal networks, respectively. After classification by the softmax layer, the classification results are fused, and finally a vector descriptor with a video feature of 4 096 dimensions is obtained.

    Tab.1 Two-stream convolution network parameters

    3.2 Wi-Fi Signal Feature Extraction

CSI is a kind of information that carries the variation characteristics of its transmission communication link. It can measure the variation of the channel state and the degree of attenuation of the Wi-Fi signal over multiple transmission paths.

The CSI measures the channel by portraying each multipath component (multipath transmission) with time and frequency domain information. In the time domain, CSI uses the channel impulse response to represent the energy of the signal arriving at the receiver. CSI is more sensitive to the environment and better portrays the environmental changes caused by actions, as it can describe channel changes in both the time and frequency domains through the subcarriers. The principle of using the CSI of Wi-Fi signals for action recognition is that Wi-Fi signals form different multipath reflections when they encounter moving targets during propagation, which changes the CSI parameters at the receiving end and thus forms different CSI waveforms. Therefore, different movements can be identified according to the different CSI waveforms.

CSI signals are generally described by the channel impulse response (CIR). Applying the inverse fast Fourier transform (IFFT) to the frequency domain expression of the CSI signal yields the CIR.

CIR can be expressed as

h(t) = Σ_{i=1}^{N} α_i e^{−jθ_i} δ(t − τ_i)

where α_i is the amplitude attenuation of the ith path, θ_i is the phase offset of the ith path, τ_i is the time delay of the ith path, N is the total number of paths, and δ(t − τ_i) is the Dirac δ function.

After the Fourier transform, H(f_i) is the CSI response value at center frequency f_i, where |H(f_i)| is the amplitude and ∠H(f_i) is the phase, i.e., H(f_i) = |H(f_i)| e^{j∠H(f_i)}.
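The decomposition of a complex CSI value into amplitude and phase can be illustrated directly; the array below stands in for one received packet (3 antennas × 30 subcarriers) and is synthetic:

```python
import numpy as np

# Hypothetical complex CSI values for one packet: 3 antennas x 30 subcarriers.
rng = np.random.default_rng(0)
csi = rng.normal(size=(3, 30)) + 1j * rng.normal(size=(3, 30))

amplitude = np.abs(csi)    # |H(f_i)|
phase = np.angle(csi)      # angle of H(f_i)

# H(f_i) can be reassembled from its polar form:
reconstructed = amplitude * np.exp(1j * phase)
```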

In the acquisition process, each packet sent contains a group of subcarriers, and its expression can be written as

CSI = [H(f_1), H(f_2), …, H(f_i)], collected over N packets and S antennas,

where i represents the number of subcarriers, N represents the number of data packets received by each antenna, and S represents the number of antennas capable of receiving data.

After multipath transmission, the received signal at the receiver is expressed as

Y(f, t) = H(f, t) × X(f, t) + N_noise

where Y(f, t) is the frequency domain representation of the received signal, X(f, t) is the frequency domain representation of the transmitted signal, H(f, t) is the channel frequency response (CFR) at time t, f is the center frequency of the subcarrier, and N_noise is the environmental noise picked up during propagation. Thus, the representation of the CSI signal is obtained. For details, please refer to reference [33].

As a fine-grained physical layer signal, CSI is more sensitive to the environment and can carry environmental information, so it is often used in research on human behavior recognition. The data values of CSI are the amplitude and phase of each subcarrier in the frequency domain after OFDM decoding; these values carry the action information together with some noise picked up during propagation. OFDM enables Wi-Fi signals to be transmitted in parallel through multiple carrier channels, significantly improving communication efficiency, and is widely used in Wi-Fi wireless devices. The main working principle of OFDM is to divide the Wi-Fi signal into several mutually orthogonal subcarriers and then modulate the subcarriers onto sub-channels for parallel low-speed data stream transmission. The orthogonality of the subcarriers reduces the interference among transmission channels. Fig.3 shows how it works.

    Fig.3 Working principle of OFDM system

Fig.4 is a schematic diagram of action acquisition. The transmitter of the signal (the left part of Fig.4) is a Wi-Fi router; the receiver (the right part of Fig.4) is a computer equipped with an Intel-5300 network interface card (NIC); the middle part is the active area of the moving target. The sender transmits the signal, which undergoes multipath reflection off the mover and static surfaces such as the ground during propagation, forming different propagation paths before finally being collected by the receiver.

    Fig.4 Schematic diagram of CSI signal acquisition

In this paper, the public dataset WIAR is used to extract CSI signal features for classification training. It contains 16 sets of CSI action data, such as standing, squatting, and sitting, completed by three volunteers. The data is in the form of ".dat" files, which can be processed directly in matrix laboratory (MATLAB). During the action recognition and detection experiment, a Wi-Fi signal sending and receiving environment is established in the room. The Wi-Fi router serves as the Wi-Fi signal sending terminal. The acquisition program is run first, and the router starts to send data. The movement process causes changes in the transmission path of the Wi-Fi signal, so the movement information is carried in the transmission process. The computer equipped with the Intel-5300 NIC receives the Wi-Fi signals, and the CSI-Tool saves them as ".dat" files. Because the receiver of the Intel-5300 network adapter is equipped with three antennas and each antenna receives 30 subcarriers at a time, each data packet is a 3 × 30 data matrix. The collected CSI waveforms are shown in Fig.5.
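The packet layout above can be sketched as follows. This assumes the ".dat" file has already been parsed (e.g. with the CSI-Tool's MATLAB reader and exported); the synthetic array below stands in for N packets of 3 × 30 complex CSI:

```python
import numpy as np

# Assumed already-parsed CSI: N packets, each a 3 x 30 complex matrix
# (3 antennas x 30 subcarriers), here filled with synthetic values.
n_packets = 500
rng = np.random.default_rng(0)
csi = rng.normal(size=(n_packets, 3, 30)) + 1j * rng.normal(size=(n_packets, 3, 30))

# Flatten into 90 amplitude streams, one time series per
# antenna/subcarrier pair, for the processing steps that follow:
streams = np.abs(csi).reshape(n_packets, 90).T   # shape (90, n_packets)
```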

The collected CSI data needs to be preprocessed, as abnormal sample points inevitably appear during data collection. This paper uses Hampel outlier detection to eliminate data values with large deviations. An outlier detection program was written in MATLAB: for the 30 subcarriers of an input data packet, the median of the corresponding sample points and the standard deviation with respect to that median are calculated. If a sample point deviates from the median by three or more standard deviations, it is regarded as an outlier and replaced by the median.
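A minimal sketch of Hampel-style outlier replacement on one subcarrier stream is shown below. It estimates the local standard deviation from the median absolute deviation (the usual 1.4826 scaling); window size and threshold are assumptions, not values from the paper:

```python
import numpy as np

def hampel(x, k=3, n_sigma=3.0):
    """Replace points that deviate from the local median by more than
    n_sigma estimated standard deviations (1.4826 * MAD) with that
    median, as the text describes."""
    x = x.astype(float).copy()
    for i in range(len(x)):
        lo, hi = max(0, i - k), min(len(x), i + k + 1)
        window = x[lo:hi]
        med = np.median(window)
        sigma = 1.4826 * np.median(np.abs(window - med))
        if np.abs(x[i] - med) > n_sigma * sigma:
            x[i] = med
    return x
```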

    Fig.5 CSI waveform diagram: (a) standing; (b) sitting; (c) jumping; (d) walking

Since Wi-Fi signals are susceptible to the indoor environment, changes in that environment (such as unintentional actions of collectors, ambient temperature, etc.) affect the information carried by the channel state information. Therefore, Wi-Fi signals inevitably contain noise, which makes it impossible to extract Wi-Fi signal features directly. The CSI waveform changes caused by moving targets lie in the low-frequency band, whereas the environmental noise lies in the high-frequency band. Therefore, low-pass filtering is adopted in this paper to reduce noise after outliers are removed, and principal component analysis (PCA) is used to extract the CSI waveform features.
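The denoising and PCA step can be sketched as below. A simple moving average stands in for the low-pass filter (the paper does not specify the filter design), and PCA is done via SVD of the mean-centered streams:

```python
import numpy as np

def lowpass_ma(x, win=7):
    """Moving-average low-pass filter (a stand-in for e.g. a Butterworth
    design): motion lies in the low-frequency band, so smoothing
    suppresses the high-frequency environmental noise."""
    kernel = np.ones(win) / win
    return np.convolve(x, kernel, mode="same")

# streams: 90 subcarrier amplitude streams x n_packets (synthetic here)
rng = np.random.default_rng(0)
streams = rng.normal(size=(90, 400))
filtered = np.array([lowpass_ma(s) for s in streams])

# PCA via SVD: the leading components concentrate the motion-induced
# variation shared across the 90 subcarrier streams.
centered = filtered - filtered.mean(axis=1, keepdims=True)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:5]          # first 5 principal-component time series
```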

Finally, the waveform is interpolated. Interpolation is the numerical estimation of points where no data record exists, based on a known data sequence and according to some law. Since the CSI signal suffers some loss on certain links due to absorption by furniture, equipment and walls, the collected packets may exhibit small losses. Therefore, to ensure the integrity of the experiment, the amplitude of each subcarrier stream is interpolated according to the actual waveform to offset the loss. Because only a small amount of data needs to be estimated, the computational effort is small, so linear interpolation is chosen. In other words, the missing packet is estimated numerically from the two data points adjacent to it on the left and right in the sequence, and the signal segments containing action information are marked out to reduce interference for the subsequent feature fusion and action classification.
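The linear interpolation of lost packets can be sketched with `np.interp`, which fills each missing point from its two nearest received neighbours exactly as described; the toy stream below is ours:

```python
import numpy as np

def fill_missing(timestamps, values, grid):
    """Linearly interpolate a subcarrier amplitude stream onto a uniform
    packet grid, filling lost packets from the two adjacent samples."""
    return np.interp(grid, timestamps, values)

# Example: packets 3 and 7 were lost out of packet indices 0..9.
received_t = np.array([0, 1, 2, 4, 5, 6, 8, 9], dtype=float)
received_v = received_t * 2.0          # toy amplitude values
full = fill_missing(received_t, received_v, np.arange(10, dtype=float))
```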

Finally, the standard statistical algorithm is used to extract features from the waveform data. This paper selects eight feature values: the average value, maximum value, minimum value, standard deviation, amplitude, average absolute deviation, variance, and mode. There are three antennas at the signal receiver, each antenna carries 30 subcarriers at a time, and each waveform yields eight feature values. Therefore, a feature vector of 720 dimensions is finally obtained and saved in a ".mat" file, namely the feature vector of the Wi-Fi signal.
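The 720-dimensional CSI feature vector (90 streams × 8 statistics) can be sketched as follows; computing the mode of continuous amplitudes requires a binning choice, so rounding to one decimal here is our assumption:

```python
import numpy as np

def eight_features(x):
    """The eight per-waveform statistics named in the text: mean, max,
    min, standard deviation, amplitude (max - min), mean absolute
    deviation, variance, and mode (most frequent rounded value)."""
    vals, counts = np.unique(np.round(x, 1), return_counts=True)
    mode = vals[np.argmax(counts)]
    return np.array([
        x.mean(), x.max(), x.min(), x.std(),
        x.max() - x.min(),
        np.mean(np.abs(x - x.mean())),
        x.var(),
        mode,
    ])

# 3 antennas x 30 subcarriers = 90 streams, 8 features each -> 720 dims
rng = np.random.default_rng(0)
streams = rng.normal(size=(90, 400))
csi_feature = np.concatenate([eight_features(s) for s in streams])
```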

    3.3 Feature Fusion and Classification

Feature fusion is the method of extracting feature information of the same research object in different ways and fusing it to obtain new features. Feature fusion can compensate for the shortcomings of different techniques and make them complement each other. This paper adopts early fusion, which is commonly used in traditional machine learning and is relatively simple and convenient. The feature information extracted in different ways is spliced into a new feature vector whose length is the sum of the individual dimensions, serving as the final feature representation of the research object [34]. In multi-feature fusion, it is unavoidable that the dimensions of the feature data are not uniform; the eigenvectors can then be dimensionally adjusted or reduced [35], and the corresponding elements can be fused into a new feature vector through accumulation and multiplication. Finally, classification learning completes the recognition task.

In this paper, the video feature vector and the CSI feature vector undergo early fusion: the two groups of feature vectors are directly spliced together. The video feature vector has 4 096 dimensions, the CSI signal feature vector has 720 dimensions, and each sample feature vector after fusion has 4 816 dimensions. Fig.6 shows the implementation process of early fusion. As can be seen from Fig.6, after adjusting the dimensions of the feature vectors, the CSI feature values are directly spliced onto the video feature vector, ensuring that all feature values participate equally in the classification calculation and recognition.
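The splicing step reduces to a single concatenation; the vectors below are synthetic placeholders for the two-stream and CSI features, and the L2 normalization is our assumption for keeping the modalities on comparable scales, not something the paper states:

```python
import numpy as np

# Hypothetical per-sample features: 4 096-dim video vector from the
# two-stream network and 720-dim CSI statistics vector.
rng = np.random.default_rng(0)
video_feat = rng.normal(size=4096)
csi_feat = rng.normal(size=720)

# Early fusion: direct concatenation into one 4 816-dim descriptor.
fused = np.concatenate([video_feat, csi_feat])

# Optional (an assumption): L2-normalize each modality before splicing.
fused_norm = np.concatenate([
    video_feat / np.linalg.norm(video_feat),
    csi_feat / np.linalg.norm(csi_feat),
])
```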

    Fig.6 Schematic diagram of feature fusion

When the new feature vector after feature fusion is obtained, it can be input into the classifier for action classification and recognition. A commonly used classifier is the SVM algorithm [36].

At first, SVM was mainly used to solve binary classification problems; it was then gradually extended to multi-classification tasks and became a mainstream algorithm in traditional machine learning. After the known feature information samples are input into the SVM, the relationship between the feature data and the sample labels is found through training, and a function model for classification is finally generated. The generated model is used to classify and predict unknown feature information. According to the classification method, SVMs can be divided into linear and nonlinear classification. In this paper, the linear function is selected as the kernel function, and its expression is

f(x, x_i) = x · x_i

where x is the sample to be identified, x_i is a training sample, and f(x, x_i) is used to calculate the similarity between sample x and training sample x_i.

Its linear classification principle is shown in Fig.7. The essence of SVM's classification task is to find an optimal classification decision hyperplane for similar features by repeatedly training on the features in the dataset. When new feature information is input, it can be accurately classified according to the linear kernel function model corresponding to the hyperplane. As can be seen from Fig.7, the circles and crosses represent two different categories of feature data, namely the samples. The two classes are separated by a straight line, so this classification is linearly separable. The distance d between the circles and the crosses is called the margin. SVM obtains the classification decision hyperplane by maximizing the margin d, which accurately separates the sample points of different categories.

    Fig.7 Linear separable optimal hyperplane

Assume the hyperplane classifier expression is f(x) = ωx + b (ω is the parameter matrix, b is the intercept), which can separate the two kinds of feature information thoroughly. When f(x) = 0, the sample x lies on the hyperplane. When f(x) < 0, the category label of the sample is −1, and when f(x) > 0, the sample label is 1, which realizes the recognition of samples.
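The decision rule f(x) = ω·x + b and the linear kernel can be illustrated with a toy example; the parameter values below are made up, standing in for those LibSVM would learn from the fused features:

```python
import numpy as np

# Made-up hyperplane parameters (LibSVM would learn these in practice).
w = np.array([1.0, -1.0])
b = 0.0

def classify(x):
    """Sign of f(x) = w . x + b: label +1 if f(x) > 0, else -1."""
    fx = np.dot(w, x) + b
    return 1 if fx > 0 else -1

def linear_kernel(x, xi):
    """Linear kernel f(x, x_i) = x . x_i, the similarity used above."""
    return float(np.dot(x, xi))
```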

According to the experimental environment designed in this paper, the Library for Support Vector Machines (LibSVM) is used as the classifier for the fused feature vectors. LibSVM is an open-source SVM software package that can be compiled and used directly in the MATLAB environment. The package provides many parameter settings for the function model, and different settings of these parameters can solve various classification problems. When facing other classification problems, one only needs to set the required parameters of the selected kernel function model, which reduces the repeated training of feature information and the learning difficulty.

LibSVM provides multiple learning methods, such as one-vs-one and one-vs-rest, which are suitable for many binary or multi-classification tasks. In this paper, cross-validation is used to evaluate classification accuracy. The specific implementation steps are as follows.

    1) Firstly, the training dataset is divided into five action sub-datasets.

2) Then, the sub-dataset of one action is selected as one class, and the four remaining action sub-datasets are selected as the other class. This is equivalent to constructing a binary classifier.

3) Repeat this five times to construct 5 classifiers, and record the recognition accuracy of each classifier.

    4) Finally, the average value of the results of the five classifiers is calculated as the final recognition accuracy.
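The steps above can be sketched as a one-vs-rest accuracy average; `predictions` would come from the five trained LibSVM models, and the arrays here are toy stand-ins:

```python
import numpy as np

def one_vs_rest_accuracy(labels, predictions, classes):
    """For each of the action classes, build the binary (one-vs-rest)
    view of the results, record its accuracy, and average the
    per-class accuracies, mirroring steps 1)-4) above."""
    accs = []
    for c in classes:
        y_true = (labels == c).astype(int)
        y_pred = (predictions == c).astype(int)
        accs.append(np.mean(y_true == y_pred))
    return float(np.mean(accs))
```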

    4 Experimental Design and Analysis of Results

    4.1 Experimental Environment

    According to the experimental requirements, the testing equipment used in this paper is as follows.

    1) Hardware

A TP-Link Wi-Fi router, a camera, and a laptop computer equipped with an Intel-5300 NIC.

    2) Software

MATLAB 2019, PyCharm 2020, LibSVM.

    3) Experimental Environment

During the experiment, a quiet and closed environment was maintained to minimize the interference of environmental noise. The Wi-Fi router is the transmitter of Wi-Fi signals, and the Intel-5300 NIC is the receiver. The distance between the transmitter and receiver is 3 m, and the motion range of the mover lies in between. In front of the mover there is a camera to record the action video.

In order to verify the change in action recognition performance after feature fusion, the experiment is divided into three parts. The two-stream network framework and the feature fusion model are used to conduct comparison tests in a normal environment, a dark environment, and a partially occluded environment.

    4.2 Analysis of Experimental Data

During the experiment, the five actions were recognized and detected by the two-stream convolutional neural network and the proposed feature fusion method in three environments, namely a normal environment, dim light, and partial occlusion. The recognition results are statistically analyzed in the following section.

Tab.2 shows the statistics of the action recognition results in the comparison test under a normal environment. As can be seen from Tab.2, under normal circumstances, both the two-stream convolutional network and the feature fusion method proposed in this paper perform well, and the recognition rate of the five types of movements is above 80%, with the recognition accuracy of standing and walking reaching 100% after adding CSI feature fusion, while the accuracy of the other actions remains the same. The comparatively lower recognition rate of jumping is due to the different jumping heights and postures of different moving targets.

It is not difficult to see from Tab.2 that human behavior recognition based on the two-stream network framework performs well in a typical environment. Meanwhile, fusing Wi-Fi signal features only improves the recognition accuracy of standing and walking, actions with a small range of motion that vary little across targets.

    Tab.2 Data statistics under normal environment

Tab.3 shows the statistical results of action recognition in the comparison test under a low-light environment. As Tab.3 shows, when the ambient light dims, the optical flow information changes, the performance of activity recognition based on the two-stream convolutional network degrades, and the recognition accuracy of all five actions drops. Although the performance of activity recognition based on video and CSI feature fusion also falls slightly, its recognition accuracy remains higher than that of the two-stream network under the same conditions. This demonstrates that motion recognition incorporating Wi-Fi signal fusion can overcome the environmental interference caused by light changes.

    Tab.3 Data statistics in dark environment

In this experiment, the occluded area was set below the knee of the moving target. As can be seen from Tab.4, because the moving target in the video is partially blocked, the recognition performance of the two-stream network decreases significantly, and squatting and walking, whose leg movements are most visible, are the actions most affected by the occlusion. Recognition based on video and Wi-Fi signal feature fusion is affected little, thanks to the compensation provided by the CSI features. However, because the posture and amplitude of motion differ across targets, and because the Wi-Fi signal carries environmental noise that cannot be eliminated, the recognition accuracy of squatting and walking still drops slightly. Tab.4 shows that local occlusion greatly degrades the performance of action recognition based on a two-stream network; after adding Wi-Fi signal feature fusion, performance improves because of the action information carried by the Wi-Fi signal.

    Tab.4 Data statistics under partial occlusion

The comparison tests in the three environments show that environmental factors strongly affect the performance of action recognition based on the two-stream network. With CSI feature fusion, although environmental interference still causes a slight drop in accuracy, the overall recognition accuracy is significantly higher than that of the two-stream network alone. The experiments thus verify that the proposed indoor human behavior recognition method based on Wi-Fi perception and video feature fusion improves the performance of two-stream-network-based behavior recognition under environmental interference.

    5 Conclusion

Video-based action recognition is easily affected by environmental factors such as lighting and background. To improve recognition performance under such conditions, this paper proposes a human behavior recognition method that uses the information features carried by Wi-Fi signals to compensate for the information lost to environmental factors. An indoor human behavior recognition method based on Wi-Fi perception and video feature fusion is designed: the extracted video feature vectors and Wi-Fi channel state information feature vectors are fused and then input into a support vector machine to complete classification and recognition.
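The fusion-and-classification step can be sketched as follows. This is a minimal illustration assuming concatenation (early) fusion and an RBF-kernel SVM via scikit-learn's `SVC`, which wraps LibSVM; the feature dimensions, random data, and hyperparameters are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.svm import SVC  # scikit-learn's SVC is backed by LibSVM

rng = np.random.default_rng(42)
n_samples, n_classes = 200, 5  # five actions, as in the experiments

# Stand-ins for real features: e.g. a 128-d two-stream video descriptor
# and a 6-d statistical CSI descriptor per action sample.
video_feat = rng.normal(size=(n_samples, 128))
csi_feat = rng.normal(size=(n_samples, 6))
labels = rng.integers(0, n_classes, size=n_samples)

# Early fusion: concatenate the two feature vectors for each sample.
fused = np.concatenate([video_feat, csi_feat], axis=1)

# Train the SVM classifier on the fused features and predict.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(fused, labels)
pred = clf.predict(fused)
print(fused.shape)  # (200, 134)
```

In a real pipeline, the features would come from the trained two-stream network and the CSI statistics, and the SVM would be evaluated on a held-out test split rather than the training data.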

Finally, comparison experiments between the two-stream convolutional network and the feature fusion system were conducted on five actions in three different environments. The experimental data show that after adding Wi-Fi signal feature fusion, the recognition accuracy of all five human actions improves, and the method can overcome certain environmental interference. Considering that video and Wi-Fi signals each have advantages and disadvantages, weighted feature fusion of video and Wi-Fi signals is a promising direction for future research: such fusion can overcome environmental interference and effectively improve the accuracy of indoor video-based recognition tasks.
