
    Gait Recognition via Cross Walking Condition Constraint

2021-12-14 06:04:00 Runsheng Wang, Hefei Ling, Ping Li, Yuxuan Shi, Lei Wu and Jialie Shen
Computers, Materials & Continua, 2021, Issue 9

Runsheng Wang, Hefei Ling*, Ping Li, Yuxuan Shi, Lei Wu and Jialie Shen

1 Huazhong University of Science and Technology, Wuhan, 430074, China

2 Queen’s University, Belfast, BT7 1NN, UK

Abstract: Gait recognition is a biometric technique that captures human walking patterns using gait silhouettes as input and can be used for long-term recognition. Recently proposed video-based methods achieve high performance. However, gait covariates or walking conditions, i.e., bag carrying and clothing, make the recognition of intra-class gait samples hard. Advanced methods simply use triplet loss for metric learning, which does not take the gait covariates into account. To alleviate the adverse influence of gait covariates, we propose the cross walking condition constraint to explicitly consider the gait covariates. Specifically, this approach designs center-based and pair-wise loss functions to decrease the discrepancy of intra-class gait samples under different walking conditions and enlarge the distance of inter-class gait samples under the same walking condition. Besides, we also propose a video-based strong baseline model of high performance by applying simple yet effective tricks, which have been validated in other individual recognition fields. With the proposed baseline model and loss functions, our method achieves state-of-the-art performance.

Keywords: Gait recognition; metric learning; cross walking condition constraint; gait covariates

    1 Introduction

Gait, as a type of effective biometric feature, can be used to identify persons at a distance. Since gait is an unconscious behavior, it can be recognized without the cooperation of subjects. Therefore, besides person re-identification approaches [1,2], gait recognition methods have extensive deployment prospects in surveillance video and public security. In recent years, gait recognition has attracted the attention of many researchers. The past few years have witnessed the rapid development of deep learning in image recognition and retrieval [3-5]. With the development of deep learning, many neural-network-based gait recognition methods have been proposed. Typically, gait recognition can be divided into image-based methods and video-based methods.

Image-based methods [6-8] take Gait Energy Images (GEI) as input and use a CNN to judge whether the input GEI pair belongs to the same identity. Reference [9] introduces a GAN to get rid of the adverse influence of viewpoints. However, GEI-based methods, which take a single image as input, cannot capture spatial-temporal gait information. Video-based methods capture the motion pattern from a gait sequence. Reference [10] uses LSTM to extract features from poses estimated by OpenPose to get rid of the adverse influence of bag-carrying and clothing. Recently proposed video-based methods focus on aggregating frame-level features from silhouette sequences via temporal pooling, such as GaitSet [11] and GaitPart [12], and use triplet loss as the supervision signal. These methods achieve state-of-the-art performance.

Nevertheless, one issue of gait recognition is that the variance of walking conditions or gait covariates, i.e., normal walking (NM), bag carrying (BG) and clothing (CL), changes the appearance of gait silhouettes. As shown in Fig. 1, the visual differences of cross walking condition intra-class GEI pairs ((a) and (b), (a) and (c)) are large, while the visual difference of the inter-class GEI pair from the same walking condition ((a) and (d)) is small.

Figure 1: Gait covariates change the appearance of GEI (synthesized from a sequence of silhouettes). (a) is the GEI of NM. (b) and (c) are the GEIs of BG and CL, respectively. (a)-(c) belong to the same identity. (d) is the GEI of NM from another identity

Thus, the issue of variance of walking conditions results in a large distance of positive pairs from different walking conditions, as well as a small distance of negative pairs from the same walking condition. Unfortunately, this issue is ignored by the state-of-the-art video-based methods, which only employ triplet loss [13] and do not explicitly take walking conditions into account, resulting in large intra-class distance and small inter-class distance. We attach more importance to the cross-walking-condition samples, and aim to devise loss functions that explicitly reduce the distance of intra-class gait samples under different walking conditions and enlarge the distance of inter-class gait samples under the same walking condition in the feature space. This approach is referred to as the cross walking condition constraint. Practically, we design center-based and pair-based loss functions.

The center-based loss is named the cross walking condition center loss (XCenter loss). Specifically, this loss contracts the intra-class centers of different walking conditions as well as repulses the inter-class centers of the same walking condition. The pair-based loss, named the cross walking condition pair-wise loss (XPair loss), focuses on local pair-wise similarity; it intends to decrease the distance of cross walking condition positive pairs, as well as enlarge the distance of same walking condition negative pairs.

Secondly, we propose a strong baseline model of high performance for video-based gait recognition by applying simple yet effective tricks, which have been validated in other individual recognition fields [14]. Specifically, we involve batch normalization (BN) layers in our model to mitigate the covariate shift issue and make the model easier to train, and combine identification loss (ID loss) and metric learning as the training signal. We also use second-order pooling for frame-level part feature extraction. With these simple tricks, our baseline model achieves high performance.

    Our contributions can be summarized as follows:

· We propose the cross walking condition embedding constraint to explicitly constrain the distance between gait samples under different walking conditions, and enlarge the distance of inter-class samples under the same walking condition.

· We explore tricks that are beneficial for the training of the model. With these tricks, we devise a strong video-based gait recognition baseline model of high performance. The baseline model can be further used in future research.

· Compared with other existing methods, we achieve a new state-of-the-art performance of cross-view gait recognition on the CASIA-B and OU-MVLP datasets. We further validate the proposed methods by ablation experiments.

    2 Related Work

    2.1 Video-Based Gait Recognition

Video-based methods take a sequence of gait silhouettes as input and aggregate frame-level features into a video-level feature. Reference [15] uses LSTM and CNN to extract spatial and temporal gait features. Reference [16] applies 3D convolution operations on the feature maps of frames. GaitNet [17] disentangles gait features from colored images via novel losses and uses LSTM to extract temporal gait information. Recently, GaitSet and GaitPart, as video-based methods, focus on aggregating features from gait silhouettes via spatial pooling and temporal pooling. GaitSet [11] extracts frame-level features by CNN and then proposes Set Pooling (SP), which is practically an order-less temporal max pooling, to generate the video-level feature map. GaitPart [12] captures temporal information by a short-term motion capture module. These video methods focus on capturing discriminative spatial-temporal information, yet do not explicitly consider the issue of gait covariates. Our method is closely related to GaitSet [11] and GaitPart [12], both of which achieve state-of-the-art performance, but focuses more on cross walking condition gait recognition.

    2.2 GEI-Based Cross Walking Condition Gait Recognition

In real situations, the gait representation can be interfered with by bag-carrying or clothing change (referred to as the variance of walking condition), since the real shape of the human body and the motion pattern of the limbs are invisible or occluded by clothes. Many GEI-based methods strive for cross walking condition gait recognition. Early works [6,18] design networks to learn the similarity of cross walking condition GEI pairs. Reference [7] learns the similarities of GEI pairs in a metric learning manner. Some works devise Generative Adversarial Network (GAN) based methods to solve this issue. Generative methods [9,19] use GAN-based methods to overcome the influence of the variance of views. References [9,20] generate GEI images of the normal walking condition. Reference [21] uses an autoencoder-based network to disentangle gait features from GEIs of different walking conditions to get rid of the influence of clothing and bag-carrying. Reference [22] designs a visual-attention-based network to focus on limbs that are invariant to clothing change. However, these GEI-based methods fail to capture dynamic motion information, since they only take one image as input, and cannot take advantage of the recently proposed video-based models, which achieve state-of-the-art performance.

    3 Proposed Method

In this section, we first introduce the loss functions designed for the cross walking condition constraint, i.e., the XCenter and XPair losses, in Sections 3.1 and 3.2. Then, we introduce the framework of the proposed baseline model and the simple yet effective tricks involved in the framework.

3.1 Cross Walking Condition Center Loss (XCenter)

In this section, we present our cross-walking-condition center loss, named XCenter loss. As discussed in Section 1, the variance of walking conditions results in large intra-class discrepancy and small inter-class discrepancy. Two manipulations of centers are proposed. The first is the Center Contraction Loss (CCL), which intends to decrease the distance of intra-class centers to reduce the discrepancy of the intra-class distribution, while the second is the Center Repulsion Loss (CRL), which manages to repulse the inter-class centers of the same walking condition to enlarge the inter-class distance.

Computation of Centers: We compute the centers of samples under each walking condition for each identity. Taking the $i$-th identity sampled in a mini-batch, the centers of the three walking conditions, i.e., normal walking (NM), bag carrying (BG) and clothing (CL), are computed as:

$$c_i^t = \frac{1}{|S_i^t|}\sum_{f_j \in S_i^t} f_j,\quad t \in \{nm, bg, cl\}$$

Here, $S_i^{nm}$, $S_i^{bg}$ and $S_i^{cl}$ are the three sets of samples of the $i$-th identity under NM, BG and CL, respectively. $c_i^{nm}$, $c_i^{bg}$ and $c_i^{cl}$ denote the three centers of the $i$-th identity under the three walking conditions, which are computed by averaging the features of the corresponding walking condition (denoted as $f_j$ in the above equation). Note that the computation of centers is conducted within a mini-batch.
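For concreteness, below is a minimal PyTorch sketch of this within-batch center computation (our own illustration, not the authors' released code); the tensor names `features`, `labels` and `conditions` are assumptions.

```python
import torch

def compute_condition_centers(features, labels, conditions):
    """Average the features of each identity under each walking condition
    within a mini-batch, i.e., c_i^t = mean of f_j over S_i^t.

    features:   (B, D) tensor of gait features f_j
    labels:     (B,) integer identity labels
    conditions: (B,) integer walking-condition labels (0=NM, 1=BG, 2=CL)
    Returns a dict mapping (identity, condition) -> (D,) center tensor.
    """
    centers = {}
    for i in labels.unique():
        for t in (0, 1, 2):
            mask = (labels == i) & (conditions == t)
            if mask.any():
                centers[(int(i), t)] = features[mask].mean(dim=0)
    return centers
```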

Center Contraction Loss (CCL): To reduce the intra-class discrepancy, we propose a loss named Center Contraction Loss (CCL) that helps the intra-class centers contract. Since the gait samples of NM are not interfered with by other gait covariates (clothing and bag-carrying), they represent the real gait information of humans. Therefore, as shown in Fig. 2a, we intend to decrease the distance between the center of NM and the intra-class centers of the other two walking conditions.

Figure 2: A diagram of the XCenter loss. (a) Center contraction loss (CCL). (b) Center repulsion loss (CRL)

Points of different colors represent samples of different identities. Squares, circles and triangles denote samples of NM, BG and CL, respectively. The solid stars enclosed by samples denote the centers of the corresponding samples. Fig. 2a is the diagram of CCL, where intra-class centers of different walking conditions (stars of the same color yet enclosed by points of different shapes) are pulled closer. Fig. 2b is the diagram of CRL, where the inter-class centers of the same walking condition (stars of different colors yet enclosed by points of the same shape) are repulsed. Thus, CCL can be represented as:

$$L_{con} = \frac{1}{K}\sum_{i=1}^{K}\left[d(c_i^{nm}, c_i^{bg}) + d(c_i^{nm}, c_i^{cl})\right]$$

where $K$ is the number of identities in a mini-batch, and $d(\cdot,\cdot)$ measures the Euclidean distance of two given centers. $d(c_i^{nm}, c_i^{bg})$ and $d(c_i^{nm}, c_i^{cl})$ denote the Euclidean distance between the center of NM and the center of BG, and between the center of NM and the center of CL, respectively.

Center Repulsion Loss (CRL): We design a center-based repulsion loss to enlarge the discrepancy of inter-class samples under the same walking condition. As shown in Fig. 2b, CRL repulses the inter-class centers under the same walking condition away from each other. CRL can be expressed as follows:

$$L_{rep} = \frac{1}{K}\sum_{i=1}^{K}\sum_{t\in\{nm,bg,cl\}}\left[m_c - \min_{j\neq i} d(c_i^t, c_j^t)\right]_+$$

Here, subscript $i$ and superscript $t$ are the indicators of identities and walking conditions ($i\in\{1,2,\ldots,K\}$ and $t\in\{nm,bg,cl\}$), respectively. $j$ is the indicator of negative identities, $m_c$ is the margin, and $[\cdot]_+$ denotes the hinge function. This loss enlarges the distance of the hardest inter-class centers of the same walking condition. The XCenter loss can be represented as:

$$L_{xcen} = L_{con} + L_{rep}$$
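The following sketch combines CCL and CRL as reconstructed above, reusing the centers from the previous snippet; the loop-based implementation and the normalization over the number of terms are assumptions for illustration (Section 4.1 sets the CRL margin to 0.5).

```python
import torch

def xcenter_loss(centers, margin=0.5):
    """CCL pulls the BG/CL centers of each identity toward its NM center;
    CRL pushes apart the hardest (closest) inter-class centers of the same
    walking condition with a hinge margin.
    `centers` is the dict built by compute_condition_centers above."""
    ids = {i for (i, _) in centers}
    ccl = crl = 0.0
    n_ccl = n_crl = 0
    for i in ids:
        # CCL terms: d(c_i^nm, c_i^bg) and d(c_i^nm, c_i^cl)
        if (i, 0) in centers:
            for t in (1, 2):
                if (i, t) in centers:
                    ccl = ccl + torch.dist(centers[(i, 0)], centers[(i, t)])
                    n_ccl += 1
        # CRL term: hinge on the closest inter-class center of the same condition
        for t in (0, 1, 2):
            if (i, t) not in centers:
                continue
            dists = [torch.dist(centers[(i, t)], centers[(j, t)])
                     for j in ids if j != i and (j, t) in centers]
            if dists:
                crl = crl + torch.clamp(margin - torch.stack(dists).min(), min=0)
                n_crl += 1
    return ccl / max(n_ccl, 1) + crl / max(n_crl, 1)
```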

3.2 Cross Walking Condition Pair-Wise (XPair) Loss

As shown in Fig. 3, we also design a pair-wise loss function which focuses on local sample pairs. Intuitively, the dissimilarity of cross walking condition positive (Xpos) pairs should be decreased, while the distance of same walking condition negative (Sneg) pairs should be enlarged. Thus, the XPair loss consists of two loss functions.

Figure 3: The diagram of the XPair loss

We reduce the distance of sample pairs from the same identity (same color) yet different walking conditions (different shapes), and enlarge the distance of pairs from different identities (different colors) yet the same walking condition (same shape).

Xpos Pair Loss: This loss intends to decrease the dissimilarity of cross walking condition positive (Xpos) pairs. Similar to Section 3.1, we intend to minimize the distance between samples of NM and samples of the other two walking conditions. Two corresponding sorts of cross walking condition pairs are selected:

$$L_{xpos} = \frac{1}{N_a}\sum_{a}\left[\max_{p} d(f_a^{nm}, f_p^{bg}) + \max_{p} d(f_a^{nm}, f_p^{cl})\right]$$

Here, $f_a^{nm}$ is the anchor feature of NM, and $N_a$ is the number of anchors. $f_p^{bg}$ and $f_p^{cl}$ are the positive features of BG and CL, respectively. This loss decreases the dissimilarity of the two kinds of cross walking condition hardest sample pairs.

Sneg Pair Loss: This loss intends to enlarge the distance of negative pairs of the same walking status (Sneg pairs). Practically, the hardest negative pairs of NM, which are of the smallest dissimilarity, are selected:

$$L_{sneg} = \frac{1}{N_a}\sum_{a}\left[m - \min_{n} d(f_a^{nm}, f_n^{nm})\right]_+$$

Here, $n$ is the indicator of negative samples of anchor $a$, $f_n^{nm}$ is the negative feature of NM, and $m$ is the margin for Sneg pairs. The XPair loss consists of the above two loss functions and can be represented as:

$$L_{xpair} = L_{xpos} + L_{sneg}$$
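A mini-batch sketch of the XPair loss under the same assumed tensor layout; the hardest-pair mining follows the equations above, and the margin value here is illustrative.

```python
import torch

def xpair_loss(features, labels, conditions, margin=0.2):
    """Xpos: pull each NM anchor toward its hardest (farthest) intra-class
    BG and CL samples; Sneg: push it away from its hardest (closest)
    inter-class NM sample with a hinge margin."""
    dist = torch.cdist(features, features)          # (B, B) pairwise distances
    xpos = sneg = 0.0
    n_pos = n_neg = 0
    for a in range(features.size(0)):
        if conditions[a] != 0:                      # anchors are NM samples
            continue
        same_id = labels == labels[a]
        for t in (1, 2):                            # hardest BG / CL positive
            pos = same_id & (conditions == t)
            if pos.any():
                xpos = xpos + dist[a][pos].max()
                n_pos += 1
        neg = (~same_id) & (conditions == 0)        # hardest NM negative
        if neg.any():
            sneg = sneg + torch.clamp(margin - dist[a][neg].min(), min=0)
            n_neg += 1
    return xpos / max(n_pos, 1) + sneg / max(n_neg, 1)
```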

    3.3 Framework with Effective Tricks

A typical video-based gait recognition framework includes a frame-level feature extractor, aggregation of the video-level feature, horizontal mapping, and part-level feature learning. The framework of our model, as shown in Fig. 4, also consists of the above components. The framework takes a sequence of gait images, the length of which is $T$, as input. In the following, we introduce the details of all the components and the proposed tricks.

The frame-level feature extractor generates a matrix of temporal part features, which represents features of $P$ parts and $T$ frames. SP represents Set Pooling. BNHM denotes Horizontal Mapping with BN layers. SP and BNHM are applied on each part to generate the final video-level part features $f_1, f_2, \ldots, f_P$. Then ID loss with BNNeck, triplet loss and the proposed loss functions are used for supervision.

Frame-Level Feature Extractor with PSP: As shown in Fig. 5, a base CNN network is used to extract feature maps for frames. For the $i$-th frame, the extraction of the base network is:

$$X_i = F(I_i)$$

Here, $I_i$ denotes the $i$-th gait image, $X_i$ is the feature map of $I_i$, and $F$ represents the base convolutional neural network. Then, $X_i$ is partitioned into horizontal part-level feature maps.

Figure 4: The overall framework of our method

We also use second-order pooling to generate features for different parts, which is called Part-based Second-order Pooling (PSP) and is introduced in Section 3.4:

$$z_{p,i} = \mathrm{PSP}(X_{p,i})$$

Here, $P$ is the number of parts and $p\in\{1,2,\ldots,P\}$. $X_{p,i}$ is the $p$-th horizontal part of $X_i$, and $z_{p,i}$ represents the feature of the $p$-th part of the $i$-th frame. As shown in Fig. 5, parallel PSP blocks produce features for horizontal parts.

Figure 5: Frame-level feature extractor

Aggregation of Video-Level Feature: As shown in Fig. 4, given $T$ frames, the PSP blocks produce the matrix of part temporal features $\{z_{p,i}\}$, which represents features of $P$ parts and $T$ frames. Previous work [12] also produces a similar feature matrix. The temporal features of each part are aggregated into a video-level part feature by Set Pooling (SP) [11]. Taking the $p$-th part as an example, the $p$-th part video-level feature $m_p$ generated by SP can be expressed as:

$$m_p = \mathrm{SP}(z_{p,1}, z_{p,2}, \ldots, z_{p,T})$$
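Since SP is instantiated as order-less max pooling over frames (see the implementation details in Section 3.6), the aggregation reduces to a single max reduction along the temporal axis; a short sketch with assumed tensor sizes:

```python
import torch

# z holds the part temporal feature matrix of one sequence: (T, P, d).
# With SP set to order-less max pooling, m_p = max over the frame axis.
z = torch.randn(30, 16, 256)   # assumed sizes: T=30 frames, P=16 parts, d=256
m = z.max(dim=0).values        # (P, d) video-level part features m_1..m_P
```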

Usage of BN (BNHM and BNNeck): Since the gait dataset has many different types of gait samples, it is hard to sample all types of data in a mini-batch. This causes the issue of covariate shift. Thus, we involve BN layers in our framework. First, horizontal mapping uses part-independent FC layers to project the part video-level features into a discriminative space. We combine horizontal mapping with BN layers, which is named BNHM. The $p$-th part BNHM, which generates the $p$-th part video-level feature $f_p$, can be denoted as $f_p = \mathrm{FC}_p(\mathrm{BN}(m_p))$. Secondly, we also involve identification loss (ID loss) in the training process with BNNeck [14]. Practically, the part feature $f_p$ first goes through a BN layer, i.e., $f_p^{bn} = \mathrm{BN}(f_p)$, and ID loss takes $f_p^{bn}$ as input.
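Below is a hedged PyTorch sketch of BNHM and BNNeck as described: part-independent BN + FC for horizontal mapping, and a BN layer followed by an FC classifier for the ID loss. The module layout and the defaults (P = 16 parts; 74 training identities on CASIA-B) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BNHM(nn.Module):
    """Part-independent BN + FC horizontal mapping: f_p = FC_p(BN(m_p))."""
    def __init__(self, num_parts=16, in_dim=256, out_dim=256):
        super().__init__()
        self.bns = nn.ModuleList(nn.BatchNorm1d(in_dim) for _ in range(num_parts))
        self.fcs = nn.ModuleList(nn.Linear(in_dim, out_dim, bias=False)
                                 for _ in range(num_parts))

    def forward(self, m):                      # m: (B, P, in_dim)
        return torch.stack([fc(bn(m[:, p]))
                            for p, (bn, fc) in enumerate(zip(self.bns, self.fcs))],
                           dim=1)              # (B, P, out_dim)

class BNNeck(nn.Module):
    """BNNeck [14]: f_p^{bn} = BN(f_p); an FC classifier on f_p^{bn} yields
    the identity logits consumed by the ID loss."""
    def __init__(self, dim=256, num_ids=74):   # 74 training identities on CASIA-B
        super().__init__()
        self.bn = nn.BatchNorm1d(dim)
        self.fc = nn.Linear(dim, num_ids, bias=False)

    def forward(self, f_p):                    # f_p: (B, dim)
        f_bn = self.bn(f_p)
        return f_bn, self.fc(f_bn)             # f_p^{bn} and identity logits
```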

Part-Level Feature Learning: For the baseline model, only ID loss and triplet loss are involved. For our model, as shown in Fig. 4, ID loss, triplet loss and the proposed XCenter and XPair losses are applied separately on each part, where ID loss is applied with BNNeck while the other loss functions are applied directly on the video-level part features.

3.4 Part-Based Second-Order Pooling (PSP)

We use part-based second-order pooling to extract discriminative frame-level part features, since second-order pooling increases the non-linearity of features and is able to capture discriminative high-order information [23,24].

Suppose that the frame-level part feature map $X_{p,i}$ defined in Section 3.3 is of $c \times hw$ dimensions (denoting channel, height and width, as shown in Fig. 6). For simplicity, the subscripts $p,i$ are omitted below. Typically, second-order pooling of $X$ (denoted as $B(X)$) generates an image representation by computing the channel-wise covariance matrix:

$$B(X) = \mathrm{vec}(XX^{\top})$$

Here, $\mathrm{vec}$ represents vectorization, and $B(X)\in\mathbb{R}^{c^2}$, which is of high dimensions. In recent years, many works [25,26] have focused on reducing the computational cost and memory requirements of second-order pooling. We also formulate a lightweight second-order pooling module.

Figure 6: Structure of part-based second-order pooling (PSP)

As shown in Fig. 6, we replace $X^{\top}$ with $W^{\top}$ in the above computation. Thus, $B(X) = \mathrm{vec}(XW^{\top})$, where $W$ is another part-level feature map generated by a convolutional layer, and the dimension of $W$ is $c'\times hw$, where $c'<c$. The dimension of $B(X)$ is reduced and $B(X)\in\mathbb{R}^{cc'}$. To further reduce the dimension of the matrix, an FC layer follows to generate the final frame-level part feature of $d$ dimensions. Thus, PSP, which generates the frame-level part feature $z$, can be expressed as:

$$z = \mathrm{FC}(\mathrm{vec}(XW^{\top}))$$

Note that the PSP blocks are applied on horizontal parts and the parameters of the FC layers of PSP are part-independent.
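A sketch of a PSP block under the stated dimensions (c = 128 backbone output channels, c' = 32, d = 256, per Section 3.6); the 1x1 convolution producing W and the exact layer arrangement are our assumptions.

```python
import torch
import torch.nn as nn

class PSP(nn.Module):
    """z = FC(vec(X W^T)) for one horizontal part, with W produced by an
    extra convolution reducing c channels to c' < c."""
    def __init__(self, c=128, c_prime=32, d=256):
        super().__init__()
        self.reduce = nn.Conv2d(c, c_prime, kernel_size=1)  # generates W
        self.fc = nn.Linear(c * c_prime, d)                 # part-independent FC

    def forward(self, x):                 # x: (B, c, h, w) part feature map
        X = x.flatten(2)                  # (B, c, hw)
        W = self.reduce(x).flatten(2)     # (B, c', hw)
        bilinear = torch.bmm(X, W.transpose(1, 2))  # (B, c, c') ~ X W^T
        return self.fc(bilinear.flatten(1))         # (B, d) part feature z
```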

    3.5 Overall Loss Function

In this part, we first introduce the base loss function, which consists of triplet loss and identification loss. Then the overall loss is presented.

Base Loss: Identification loss and triplet loss are involved in the training process, and they are separately applied on each part. The triplet hard loss can be represented as:

$$L_{tri} = \frac{1}{N_a}\sum_{a}\left[m_{tri} + \max_{p\in P(a)} d(f_a, f_p) - \min_{n\in N(a)} d(f_a, f_n)\right]_+$$

where $a$, $p$ and $n$ represent the anchor, the corresponding positive and the negative sample, respectively. $P(a)$ and $N(a)$ represent the sets of positive samples and negative samples of the given anchor, and $m_{tri}$ is the margin. Different from previous works [11,12], we also incorporate identification loss during training. The features go through a BN layer and an FC layer (BNNeck [14]) to generate the classification scores. Thus, the identification loss can be denoted as:

$$L_{id} = -\frac{1}{B}\sum_{k=1}^{B}\log\frac{\exp(W_{y_k}^{\top} f_k^{bn})}{\sum_{c=1}^{N}\exp(W_c^{\top} f_k^{bn})}$$

Here, superscript $bn$ denotes the features generated by the BN layer, and subscript $k$ denotes the $k$-th of the $B$ samples in the mini-batch. $N$ is the number of identities in the training set. $W_c$ denotes the weight vector of the $c$-th class, and $W_{y_k}$ is the weight vector of the ground-truth identity of the $k$-th sample. The combination of $L_{tri}$ and $L_{id}$ is referred to as the base loss function: $L_b = L_{tri} + L_{id}$.
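For reference, a compact sketch of the base loss $L_b$: a batch-hard triplet term (margin 0.2, per Section 4.1) plus cross-entropy over the BNNeck logits; applying it per part is left out for brevity.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet(features, labels, margin=0.2):
    """For each anchor: take the farthest positive and the closest negative
    in the mini-batch, then apply the hinge with the margin."""
    dist = torch.cdist(features, features)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    hardest_pos = dist.masked_fill(~same | eye, 0).max(dim=1).values
    hardest_neg = dist.masked_fill(same, float('inf')).min(dim=1).values
    return torch.clamp(margin + hardest_pos - hardest_neg, min=0).mean()

def id_loss(logits, labels):
    """ID loss: cross-entropy over the BNNeck classifier logits."""
    return F.cross_entropy(logits, labels)

# Base loss L_b = L_tri + L_id, applied separately on each part feature.
```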

Overall Loss: The overall loss includes the hard triplet loss, identification loss, XCenter loss and XPair loss. The overall loss function can be expressed as:

$$L = L_{tri} + L_{id} + \lambda_{xcen}L_{xcen} + \lambda_{xpair}L_{xpair}$$

where $\lambda_{xcen}$ and $\lambda_{xpair}$ control the importance of the XCenter loss and the XPair loss, respectively.

    3.6 Implementation Details

Experiments are implemented in PyTorch with an Nvidia RTX 2080Ti GPU. In this part, we introduce the configuration and details of our network. The input silhouettes, whose channel number is set as 1, are cropped into 64×44 in all experiments. For fair comparison, we adopt the same backbone used in the previous video-based model [11]. The output channels of each layer in the backbone are 32, 32, 64, 64, 128, 128. As for the PSP used in the frame-level feature extractor, $W$ mentioned in Section 3.4 is generated by an extra convolutional layer. The channel of $W$ (defined as $c'$ in Section 3.4) is set as 32. The dimension of the frame-level part feature, i.e., $d$ defined in Section 3.4, is set as 256. The Set Pooling is set as max pooling, since previous works [11,12] validate that this setting achieves better performance. The dimension of the final video-level part feature $f_p$ is set as 256.

    4 Experiment

Two prevailing gait recognition benchmarks, CASIA-B and OU-MVLP, are included in our experiments. In this section, we first introduce the two datasets, and then comparative and ablative results are given. In the comparison experiments, we report the results of state-of-the-art models and the proposed method on the two datasets. We also visualize the gait features to validate whether the proposed loss functions decrease the intra-class discrepancy.

    4.1 Datasets

CASIA-B [27] contains 124 identities. Although the number of subjects is limited, each subject has 110 samples of 11 different views and 10 walking types, where the 10 walking types consist of 6 types of the normal walking condition (indexed as nm-01 to nm-06), 2 types of bag carrying (BG) (indexed as bg-01, bg-02) and 2 types of clothing (CL) (indexed as cl-01, cl-02). Thus, the dataset contains samples for cross-view and cross-condition evaluation. During training, the samples of the first 74 subjects are taken as training data. During testing, the samples of the remaining subjects are involved. Concretely, the samples from nm-01 to nm-04 are taken as probes. The samples of the other types are taken as the gallery.

OU-MVLP [28] is the gait dataset with the largest population in the world. It contains 10307 persons: 5153 persons constitute the training set and the other 5154 persons constitute the testing set. Each person has image sequences of 14 views. The views consist of two groups, (0°, 15°, ..., 90°) and (180°, 195°, ..., 270°), and each view of one person has two gait sequences, where the sequences indexed 01 are used as probes and the sequences indexed 02 are used as the gallery during testing.

Evaluation Protocol: For fair comparison, we use the cross-view evaluation protocol employed in previous works to measure the performance of our model. During evaluation, the probes are used to retrieve the gallery of different views, and the mean rank-1 accuracy over the galleries of the other views is reported. Besides cross-view evaluation, cross-walking-condition evaluations are considered on CASIA-B, which use probes to retrieve the galleries of different walking conditions in the cross-view manner.

Training Parameters: During training, the Adam optimizer is employed in all experiments, where the momentum is 0.9 and the learning rate is 1e-4. The margin of the triplet loss is set as 0.2. The margin of CRL is set as 0.5. The batch size can be denoted as (p, k), where p represents the number of subjects and k represents the number of samples selected from each subject. The batch size of the experiments implemented on CASIA-B is (4, 16). We train our model for 15K iterations; it is notable that our model converges significantly faster than previous state-of-the-art models [11,12] during training. In the experiment on OU-MVLP, the batch size is set as (32, 4). We train our model on OU-MVLP for 150K iterations. The learning rate decays to 1e-5 in the last 50K iterations. Since OU-MVLP only contains gait sequences of the normal walking condition, the proposed loss functions ($L_{xcen}$ and $L_{xpair}$) are not involved in this experiment.
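As an illustration of the training setup just described, the snippet below sketches the (p, k) batch construction and the optimizer; we read "momentum 0.9" as Adam's beta1, and `ids_to_samples` is an assumed helper structure mapping each identity to its sample indices.

```python
import random
import torch

def pk_batch(ids_to_samples, p=4, k=16):
    """Draw p subjects, then k sequences per subject ((4, 16) on CASIA-B)."""
    subjects = random.sample(list(ids_to_samples), p)
    return [s for i in subjects for s in random.sample(ids_to_samples[i], k)]

model = torch.nn.Linear(256, 256)  # stand-in for the gait model
# "momentum 0.9" interpreted as Adam's beta1; learning rate 1e-4 as stated above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
```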

    4.2 Comparison Experiment

Comparative results on CASIA-B and OU-MVLP are given in Tabs. 1 and 2, respectively.

CASIA-B: Tab. 1 demonstrates the cross-view and cross walking condition recognition results. As shown in the table, our method achieves the state-of-the-art result. For the three walking conditions, we report the rank-1 accuracy of different probe views and the average rank-1 accuracy for each walking condition. Our model achieves 97% and 80.2% rank-1 accuracy under NM and CL, respectively. To the best of our knowledge, this performance surpasses most cross-view gait recognition methods. Several conclusions can be observed: 1) Compared with CNN-LB, which takes GEI as input, our method and other video-based methods perform better. This further demonstrates the superiority of video-based methods [11,12], which aggregate frame-level features via temporal pooling or set pooling. 2) Compared with GaitNet [17], our method achieves better results. Both our method and GaitNet intend to mitigate the adverse impact of the variance of walking conditions on the extraction of gait features. GaitNet introduces LSTM and autoencoder-based disentanglement learning to extract walking condition invariant gait features, while our method applies simple yet effective loss functions to alleviate the discrepancy of the gait features from different walking conditions. 3) Our method is better than GaitSet [11] and GaitPart [12], which are so far the state-of-the-art approaches. Specifically, the two cross walking condition recognition performances (reported by the rows of BG and CL in Tab. 1) surpass [11,12] by a large margin. We believe the reason is that the proposed loss functions focus more on cross walking condition gait recognition, while GaitSet and GaitPart simply use the BA+ triplet loss [13] and do not take the variance of walking conditions into account.

Table 1: Performance of advanced methods

OU-MVLP: Since this dataset is so far the largest gait dataset, we implement experiments on it to further validate our method. Tab. 2 reports the performance of our method and other advanced methods under the cross-view evaluation protocol. Since the proposed loss functions focus on clothing and object-carrying invariant gait recognition, and this dataset does not contain corresponding samples, we only report the performance of the proposed baseline model without the XCenter and XPair loss functions. It can be observed that our method performs better than previous methods. Time consumption is also tested on this dataset. During evaluation, which is implemented with one RTX 2080Ti GPU, GaitSet costs 17 min while ours costs 10 min. Note that since the hardware setting in our experiment is different from that in [11], the evaluation time of GaitSet reported in our implementation differs from that given in [11].

    4.3 Ablation Study on Involved Tricks of Framework

In Tab. 3, we validate several options that benefit the proposed framework, including the PSP block, BNHM, and BNNeck. The results of four models are given.

Model-a replaces the PSP with max pooling and an FC layer for fair comparison, while model-b removes the BN layers in BNHM, which is thus turned into the ordinary horizontal mapping [11]. Model-c removes BNNeck. Model-d is the strong baseline model trained with all the proposed tricks. All the above models are trained with the base loss function $L_b$. The following points can be observed: 1) Effectiveness of PSP: We compare model-a with model-d. It can be seen that model-d with the PSP block surpasses model-a with max pooling (first-order pooling). This indicates that the proposed lightweight second-order pooling is better for extracting local frame-level features from gait silhouettes. 2) Effectiveness of BNHM: Model-b removes the BN layer before horizontal mapping. The obvious performance drop proves the necessity of BNHM. We believe that since the variance of walking conditions causes the discrepancy of gait features, the BN layer is beneficial for horizontal mapping. 3) Effectiveness of BNNeck: Model-c removes BNNeck and degrades in performance. This proves the effectiveness of BNNeck used in our framework.

Table 2: Performance of advanced methods on OU-MVLP

Table 3: Results of the ablation study on the proposed framework

The three tricks are simple and effective. Furthermore, they make the model easier to train. Our baseline model can converge after 15K iterations, while GaitSet converges after 80K iterations.

    4.4 Ablation Study on Loss Functions

In Tab. 4, we report the ablative results of the proposed loss functions. Five rows of results are given. The first row is the baseline model trained with the base loss function $L_b$. The second row gives the result of the model trained with $L_b$ and the center contraction loss $L_{con}$. The third row gives the result of the model trained with $L_b$ and the XCenter loss $L_{xcen}$. The fourth row shows the result of the model trained with $L_b$ and the XPair loss $L_{xpair}$. The last row gives the performance of the model trained with $L_b$, $L_{xcen}$ and $L_{xpair}$.

The columns of BG and CL in Tab. 4 report the accuracy of using NM probes to retrieve the BG and CL galleries, respectively. Thus, the two columns report the performance of cross walking condition recognition. The second row is the model trained with $L_b$ and $L_{con}$ (i.e., the XCenter loss without $L_{rep}$). Thus, the comparison between the third row and the second row proves the effectiveness of $L_{rep}$. From the third and fourth rows, we can observe that both loss functions improve the accuracy of cross walking condition gait recognition. The last row shows that joint training with the two loss functions is effective for both cross-view and cross walking condition recognition. Consequently, we believe the proposed loss functions are able to reduce the intra-class discrepancy caused by gait covariates. We also test $\lambda_{xcen}$ and $\lambda_{xpair}$. In the experiments, $\lambda_{xcen}$ is set from 0.1 to 0.5 and $\lambda_{xpair}$ is set from 0.01 to 0.05. We find that the best $\lambda_{xcen}$ is 0.1 and the best $\lambda_{xpair}$ is 0.02 for the joint training of the XCenter and XPair losses.

Table 4: Ablation study on the proposed loss functions

    4.5 Analysis of Gait Features

The features are visualized by t-SNE [30] in Fig. 7, where Fig. 7a is the visualization result of the features from the model trained with the proposed losses and Fig. 7b is the result of the features generated by the baseline model. It can be seen from Fig. 7b that the features of CL (triangle-shaped points) are separable from the other features that belong to the same person, since the triangle points can be easily circled out by the red circles. However, features from the same subject tend to stay together in Fig. 7a. It can be concluded that the intra-class divergence is decreased by the constraint of the proposed methods.

We select several identities to visualize their samples, where squares, circles and triangles represent the features of NM, BG and CL, respectively. Points of different colors represent features from different identities. Fig. 7a visualizes the features generated by the model trained with the proposed loss functions. Fig. 7b visualizes the features produced by the baseline model.

We also present the statistical result of the distances of cross walking condition positive (Xpos) pairs in Fig. 8. The blue curve is the distribution of Xpos pairs computed from the baseline model, while the red curve is the distribution of Xpos pairs generated from the model trained with the constraint of the proposed loss functions. It can be seen that with the constraint of $L_{xcen}$ and $L_{xpair}$, the distribution shifts left, which means the discrepancy of Xpos pairs decreases.

Figure 7: Visualization of features from CASIA-B by t-SNE. (a) With the proposed constraint. (b) Baseline model

Figure 8: Distribution of distances of cross walking condition positive (Xpos) pairs

    5 Conclusion

In this paper, we propose the cross walking condition constraint, which specifically contains center-based and pair-wise losses and manages to constrain the cross walking condition intra-class discrepancy as well as enlarge the inter-class discrepancy of the same walking condition. We also present a more effective video-based gait recognition model, which utilizes simple yet effective tricks such as part-based second-order pooling, usage of BN layers and joint training with ID loss, as a strong baseline model. The proposed method achieves a new state-of-the-art performance.

Funding Statement: This work was supported in part by the Natural Science Foundation of China under Grants 61972169 and U1536203, in part by the National Key Research and Development Program of China (2016QY01W0200), and in part by the Major Scientific and Technological Project of Hubei Province (2018AAA068 and 2019AAA051).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
