
    Original article

Online RGB-D person re-identification based on metric model update

Hong Liu a, Liang Hu a,*, Liqian Ma b

a The Engineering Lab on Intelligent Perception for Internet of Things (ELIP), Shenzhen Graduate School, Peking University, Shenzhen 518055, China

b VISICS, ESAT, KU Leuven, Kasteelpark Arenberg 10, Heverlee 3001, Belgium

    A R T I C L E I N F O

    Article history:

    Received 18 March 2017

    Accepted 1 April 2017

    Available online 20 April 2017

Keywords:

Person re-identification

Online metric model update

Face information

Skeleton information

Person re-identification (re-id) on a robot platform is an important application for human-robot interaction (HRI), which aims at making the robot recognize the persons around it in varying scenes. Although many effective methods have been proposed for surveillance re-id in recent years, re-id on a robot platform is still a novel, unsolved problem. Most existing methods adopt supervised metric learning offline to improve accuracy. However, these methods cannot adapt to unknown scenes. To solve this problem, an online re-id framework is proposed. Considering that robots can afford to use high-resolution RGB-D sensors and clear human faces may be captured, face information is used to update the metric model. Firstly, the metric model is pre-trained offline using labeled data. Then, during the online stage, face information is used to mine incorrect body matching pairs, which are collected to update the metric model online. In addition, to make full use of both the appearance and skeleton information provided by RGB-D sensors, a novel feature funnel model (FFM) is proposed. Comparison studies show our approach is more effective and adaptable to varying environments.

© 2017 Chongqing University of Technology. Production and hosting by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

1. Introduction

The essence of person re-id is associating a person across different cameras over time. It is a key technology for service robots, especially for modeling long-term human activities across different scenes to provide friendly human-computer interaction.

In recent years, many re-id methods for video surveillance applications have been proposed. Generally, they can be classified into two categories [1]: feature representation and matching strategy. For feature representation, researchers mainly focus on generating robust and efficient body appearance representations using information such as color [2] and texture [3], since in surveillance environments the captured RGB images are usually of low resolution. For matching strategy, metric learning [4] and learning-to-rank methods have been explored extensively, and methods based on metric learning have achieved good performance.

As high-resolution RGB-D sensors mature, robots can afford to use them to improve re-id performance. Compared with low-resolution person RGB images, RGB-D data provide not only appearance information but also skeleton, depth, and even clear face information. Taking advantage of RGB-D data, some researchers combine skeleton or face information with color or texture features to obtain a multi-modal system that is more robust to pose changes and complex backgrounds [5-7].

Although these multi-modal methods achieve good performance on public datasets, they may perform poorly in varying environments for two reasons. On one hand, most conventional approaches learn the similarity metric model offline, so they cannot adapt to new scenes that differ significantly from the training data due to varying illumination, camera views, and backgrounds. On the other hand, blindly combining a bunch of features to calculate person similarity is prone to error accumulation and also brings unnecessary computational cost.

To overcome the drawbacks of the above methods, an effective online re-id framework is proposed in this paper. It is based on two observations: clear face images contain more reliable and distinguishing information but may be difficult to obtain in many situations, such as when a person has his or her back to the camera, whereas body images are more ambiguous but are usually easy to capture. Therefore, face and body images are complementary.

Firstly, each person is described by appearance-based and geometric features using skeleton information, following [8]. Secondly, the metric model is learned offline using labeled training data. Then, face information is utilized to update the metric model online. Finally, the feature similarities are fused by the feature funnel model, which is based on the degree of feature reliability.

Our contributions are mainly threefold. First, we propose a novel online re-id framework which is robust to changing environments. Second, a fusion strategy named the feature funnel model is proposed to fuse multiple features effectively. Third, a novel publicly available RGB-D re-id dataset named the RobotPKU RGBD-ID dataset is collected, which contains 180 video sequences of 90 persons captured with a Kinect.

The rest of this paper is organized as follows. In Section 2, related works are reviewed. In Section 3, our method is presented in its main components: feature extraction, online metric model update, and the feature funnel model. In Section 4, experiments run to assess the performance and compare our method to state-of-the-art methods are discussed. Finally, conclusions are drawn in Section 5.

2. Related works

2.1. Depth-based re-identification

With the appearance of cheap depth-sensing devices on the market, computer vision for robotics was revolutionized, and several RGB-D based re-id approaches have been proposed. These methods can be divided into two categories. The first type is appearance-based methods, which integrate appearance and depth information together [9-11]. Munaro et al. propose a re-id approach based on skeletal information [11]: feature descriptors are extracted around the person's skeletal joints, and the final person signature is obtained by concatenating these descriptors. The second type is based on geometric features: in Ref. [12], re-id is performed by matching body shapes in terms of whole point clouds warped to a standard pose. Munaro et al. [8] adopt the anthropometric measure method for re-id: they use the 3D locations of body joints provided by the skeletal tracker (see Fig. 2) to compute geometric features, such as limb lengths and ratios.

2.2. Multimodal person re-identification

Appearance-based methods are easily affected by varying environments, such as illumination changes, and geometric methods have low inter-class variability. Thus, re-identifying persons relying on only a single source of biometric information can be difficult. For this reason, multimodal biometric systems are adopted to make re-id more reliable.

Many multimodal systems have been proposed, and they can be classified into two categories [13]. One approach is to fuse information at the feature level [14,15] by concatenating the feature vectors into a final feature; however, this often overlooks the differences in reliability between features. The other is to fuse information at the score level [6,7], i.e., combining the scores of different sub-systems; our feature funnel model belongs to this type.

2.3. Online learning

Most re-id methods learn the similarity metric model offline [4], but such methods cannot adapt to unknown scenes. Recently, semi-supervised learning methods have become increasingly popular in many computer vision applications [16], since they can utilize both labeled and unlabeled data in classifier training [17] to improve performance with unlabeled data. P-N learning [18] is an effective semi-supervised learning method which is guided by positive (P) and negative (N) constraints restricting the labeling procedure of the unlabeled set. In this paper, we adopt P-N learning in our online re-id framework, using clear face images to improve the metric model online. Through face information, incorrectly measured examples are screened out to retrain the metric model and make it adaptive to new environments.

3. Framework description

In this section, an overview of the system framework is provided. As depicted in Fig. 3, our framework contains three parts: feature extraction, online metric model update, and the feature funnel model.

First, each person is described by appearance and geometric features utilizing skeleton information. Then metric models are updated for each feature modality. The concrete steps of online metric learning are as follows: (I) train the initial metric model offline using labeled data; (II) measure the similarities of unlabeled data pairs obtained online with the metric model; (III) label the unlabeled data pairs whose measured results are inconsistent with reliable face verification results; (IV) extend the training set with the newly labeled data; (V) retrain the metric model. Finally, the feature similarity is obtained by our feature funnel model, which is explained in Section 3.3.
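As an illustration only, the outline below mirrors steps (I)-(V); the callables (train_metric, rank_with_metric, label_by_face) are placeholders for the components detailed in Sections 3.1-3.3, not the authors' implementation.

```python
# Minimal outline of steps (I)-(V); the callables are placeholders for the
# components described in Sections 3.1-3.3, not the authors' code.

def online_reid(train_metric, rank_with_metric, label_by_face,
                labeled_pairs, online_batches):
    metric = train_metric(labeled_pairs)                    # (I)  offline initialisation
    training_set = list(labeled_pairs)
    for probe, gallery in online_batches:
        ranking = rank_with_metric(metric, probe, gallery)  # (II) measure and rank unlabeled pairs
        # (III) keep only pairs whose ranking contradicts reliable face verification
        new_pairs = label_by_face(probe, gallery, ranking)
        if new_pairs:
            training_set += new_pairs                       # (IV) extend the training set
            metric = train_metric(training_set)             # (V)  retrain the metric model
    return metric
```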

3.1. Feature extraction

The first step of our framework is to extract features which are robust to varying illumination and pose. Inspired by Ref. [11], we extract features around the person's skeletal joints to overcome pose variation. To cope with illumination variation, we adopt noise-insensitive appearance-based features and illumination-invariant geometric features.

3.1.1. Appearance-based features

The person's appearance-based feature is an intuitive and effective representation for re-id. In order to reduce the impact of pose variation, features extracted around skeleton joints are used to describe person appearance [11]. As shown in Fig. 2, 20 keypoints are obtained by the Microsoft Kinect SDK for each person. Through these keypoints, the real joints of the person's body can be located and the local information can be described accurately. To describe local appearance information, color and texture features are extracted.

The feature extracted from a skeleton joint is represented as $f_{p,i}^{A}$, where $(p, i)$ denotes the $i$-th skeleton joint on the $p$-th person image and $A$ denotes the appearance-based modality. The appearance feature of the $p$-th person image is defined as:

$X_p^A = \left[ f_{p,1}^A, f_{p,2}^A, \ldots, f_{p,20}^A \right] \quad (1)$

In order to extract features that are robust to illumination variation, the color feature employs 8×8×8-bin HSV histograms. To describe texture, Scale Invariant Local Ternary Pattern (SILTP) [19] histograms are extracted around each joint. SILTP is an improved operator over the Local Binary Pattern (LBP) [20]. LBP is a gray-scale invariant texture feature but is susceptible to noise. To overcome this drawback, SILTP introduces a scale-invariant local comparison tolerance, achieving invariance to intensity scale changes and insensitivity to noise.
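To make the appearance descriptor concrete, the sketch below computes an 8×8×8-bin HSV histogram in a small patch around each tracked joint and concatenates the per-joint histograms; the patch size, the OpenCV calls, and the function names are our own assumptions rather than the authors' implementation, and the SILTP part is omitted for brevity.

```python
import numpy as np
import cv2  # assumed available; any RGB-to-HSV conversion would do


def joint_hsv_histogram(image_bgr, joint_xy, patch=24, bins=8):
    """8x8x8 HSV histogram of a square patch centred on one skeleton joint."""
    h, w = image_bgr.shape[:2]
    x, y = int(joint_xy[0]), int(joint_xy[1])
    x0, x1 = max(0, x - patch // 2), min(w, x + patch // 2)
    y0, y1 = max(0, y - patch // 2), min(h, y + patch // 2)
    if x0 >= x1 or y0 >= y1:                      # joint falls outside the image
        return np.zeros(bins ** 3)
    hsv = cv2.cvtColor(image_bgr[y0:y1, x0:x1], cv2.COLOR_BGR2HSV)
    hist, _ = np.histogramdd(hsv.reshape(-1, 3).astype(float),
                             bins=(bins, bins, bins),
                             range=((0, 181), (0, 256), (0, 256)))
    hist = hist.ravel()
    return hist / max(hist.sum(), 1.0)            # L1-normalised 512-d descriptor


def appearance_feature(image_bgr, joints_xy):
    """Concatenate the per-joint colour histograms into X_p^A (cf. Eq. (1))."""
    return np.concatenate([joint_hsv_histogram(image_bgr, j) for j in joints_xy])
```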

3.1.2. Geometric features

In addition to the appearance-based description, we also apply the anthropometric measure method for geometric description. Although the discriminative power of geometric features is not as good as that of appearance-based features, they have the desirable property of being insensitive to varying illumination and dim lighting conditions, as shown in Fig. 1(b). When appearance information is ambiguous, geometric features can provide supplemental information. In order to describe body geometry, the following anthropometric measures from Ref. [8] are chosen:

a) head height,

b) neck height,

c) distance between neck and left shoulder,

d) distance between neck and right shoulder,

e) distance between torso and right shoulder,

f) length of the right arm,

g) length of the left arm,

h) length of the right upper leg,

i) length of the left upper leg,

j) length of the torso,

k) distance between right hip and left hip,

l) ratio between torso and right upper leg length (j/h),

m) ratio between torso and left upper leg length (j/i).

These distances are shown in Fig. 2. Our geometric feature is composed of these distances and ratios, defined as:

$X_p^G = \left[ g_{p,1}, g_{p,2}, \ldots, g_{p,13} \right] \quad (2)$

where $g_{p,j}$ denotes the $j$-th measure in the list (a)-(m) for the $p$-th person.
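For illustration, the following sketch computes the thirteen measures from 3D joint positions returned by a skeletal tracker; the joint names, the use of the y-coordinate as height above the floor, and the arm/torso segment definitions are assumptions on our part.

```python
import numpy as np


def geometric_feature(j):
    """Anthropometric measures (a)-(m); `j` maps joint names to 3-D points (metres)."""
    d = lambda a, b: float(np.linalg.norm(np.asarray(j[a]) - np.asarray(j[b])))
    head_h  = float(j['head'][1])                 # (a) assumed: y-coordinate = height above floor
    neck_h  = float(j['neck'][1])                 # (b)
    n_ls    = d('neck', 'shoulder_left')          # (c)
    n_rs    = d('neck', 'shoulder_right')         # (d)
    t_rs    = d('torso', 'shoulder_right')        # (e)
    arm_r   = d('shoulder_right', 'elbow_right') + d('elbow_right', 'hand_right')   # (f)
    arm_l   = d('shoulder_left', 'elbow_left') + d('elbow_left', 'hand_left')       # (g)
    uleg_r  = d('hip_right', 'knee_right')        # (h)
    uleg_l  = d('hip_left', 'knee_left')          # (i)
    torso   = d('neck', 'torso')                  # (j) assumed segment
    hips    = d('hip_right', 'hip_left')          # (k)
    return np.array([head_h, neck_h, n_ls, n_rs, t_rs, arm_r, arm_l,
                     uleg_r, uleg_l, torso, hips,
                     torso / uleg_r,              # (l) ratio j/h
                     torso / uleg_l])             # (m) ratio j/i
```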

3.2. Online metric model update

In order to evaluate the similarity of features $(X_p, Y_q)$ between probe person $p$ and gallery person $q$, the Mahalanobis distance is adopted as the similarity distance $S(\cdot)$:

$S(X_p, Y_q) = (X_p - Y_q)^{\top} \mathbf{M} (X_p - Y_q) \quad (3)$

where $\mathbf{M}$ is the Mahalanobis distance matrix.
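Eq. (3) translates directly into code; the sketch below assumes the feature vectors and the learned matrix M are NumPy arrays.

```python
import numpy as np


def mahalanobis_similarity(x_p, y_q, M):
    """S(X_p, Y_q) = (X_p - Y_q)^T M (X_p - Y_q), Eq. (3); smaller means more similar."""
    diff = np.asarray(x_p) - np.asarray(y_q)
    return float(diff @ M @ diff)
```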

In the initialization phase, $\mathbf{M}$ is first learned offline with labeled data.

However, due to environmental changes, the metric model trained offline may fail in real scenes. In the human-robot-interaction scenario, the distance between the robot and the human is usually small, so clear face images, which are very helpful for re-id, can sometimes be obtained. In our online re-id framework, we use reliable face information to label person image pairs online, and these pairs are used to update our metric model.

Fig. 2. Twenty body joints tracked by the Microsoft Kinect SDK.

3.2.1. Metric learning

An effective and efficient metric learning method, XQDA [21], is applied to learn the Mahalanobis distance matrix $\mathbf{M}$. Unlike other methods, which reduce the feature dimension and learn the distance matrix separately, XQDA learns a discriminant subspace together with the matrix. From a statistical inference point of view, XQDA defines the Mahalanobis distance matrix $\mathbf{M}$ by:

$\mathbf{M} = W \left( \Sigma_I'^{-1} - \Sigma_E'^{-1} \right) W^{\top} \quad (4)$

where $W$ is the learned discriminant subspace, and $\Sigma_I' = W^{\top}\Sigma_I W$ and $\Sigma_E' = W^{\top}\Sigma_E W$ are the intra-personal and extra-personal covariance matrices projected onto that subspace [21].
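The following is a simplified sketch in the spirit of XQDA [21]: it estimates intra- and extra-personal covariances from labeled probe/gallery features, obtains a discriminant subspace W from a generalized eigenvalue problem, and forms M as in Eq. (4). The regularization, subspace dimension, and pairing scheme are our own simplifications, not the reference implementation.

```python
import numpy as np
from scipy.linalg import eigh


def xqda_like(X, Y, labels_x, labels_y, dim=64, reg=1e-3):
    """Learn a Mahalanobis matrix M from labeled probe (X) and gallery (Y) features."""
    n_feat = X.shape[1]
    diffs = X[:, None, :] - Y[None, :, :]              # all probe-gallery difference vectors
    same = np.asarray(labels_x)[:, None] == np.asarray(labels_y)[None, :]
    d_I, d_E = diffs[same], diffs[~same]               # intra- / extra-personal differences
    S_I = d_I.T @ d_I / len(d_I) + reg * np.eye(n_feat)
    S_E = d_E.T @ d_E / len(d_E) + reg * np.eye(n_feat)
    # Discriminant subspace: generalized eigenproblem S_E w = lambda S_I w, top-`dim` directions.
    vals, vecs = eigh(S_E, S_I)
    W = vecs[:, np.argsort(vals)[::-1][:dim]]
    Sp_I, Sp_E = W.T @ S_I @ W, W.T @ S_E @ W           # covariances projected into the subspace
    return W @ (np.linalg.inv(Sp_I) - np.linalg.inv(Sp_E)) @ W.T   # M as in Eq. (4)
```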

Fig. 1. Some example image pairs from the datasets. (a) Example image pairs from the newly proposed RobotPKU RGBD-ID dataset. (b) Example image pairs from the proposed dataset with strong illumination changes.

Fig. 3. The proposed framework contains three major modules, shown in three different colors. The features of each person are extracted from labeled and unlabeled images. Then, metric models are updated for each feature. Finally, the similarity is obtained by the feature funnel model. Best viewed in color.

3.2.2. Online metric model update strategy

The face contains more powerful information for person identification than body appearance. Besides, face verification research has achieved highly reliable performance on the large unconstrained LFW dataset [22]. Additionally, for re-id on a robot platform, the captured face images usually have higher resolution than in surveillance environments. Thus, we use face information to update the metric model online.

After initializing the metric model with labeled data, the metric model is then updated iteratively with unlabeled data during the online stage.

Firstly, the similarities $S(X_p, Y_q)$ $(q = 1, \ldots, M)$ between the probe image feature $X_p$ and the gallery image features $Y_q$ are calculated.

Secondly, the matching pairs $S(X_p, Y_q)$ $(q = 1, \ldots, M)$ are ranked according to their similarities. Let $R(X_p, Y_q)$ denote the ranking result.

Thirdly, mature face detection and recognition algorithms are utilized to obtain the face similarity score $\theta$ ($0 \le \theta \le 1$). Since face detection and recognition research is very mature and is not the focus of this paper, we simply adopt the Face++ SDK (http://www.faceplusplus.com) to detect faces and calculate the similarity score $\theta$ between two faces. In Fig. 5, the distributions of the face similarity score $\theta$ for positive and negative sample pairs are shown. The confidence of a face pair $F(p, q)$ is defined as the face similarity score between the two detected faces, i.e., $F(p, q) = \theta$ when reliable faces are detected in both images.

Then, we verify whether the similarities calculated by the metric model are consistent with the face verification result $F(p, q)$. Two types of error samples are selected:

·Error positive samples: positive samples refer to samples from the same person. When reliable face images are captured, the similarity score $\theta$ between positive samples is usually high, as shown in Fig. 5. Therefore, a threshold $\theta_1$ is selected to choose the positive samples with $F(p, q) \ge \theta_1$, and such a pair is selected as an error positive sample if its ranking result $R(X_p, Y_q) > E$, where $E$ is a threshold on the ranking result;

·Error negative samples: negative samples refer to samples from different persons. When reliable face images are captured, the similarity score $\theta$ between negative samples is usually low, as shown in Fig. 5. Therefore, a threshold $\theta_2$ is selected to choose the negative samples with $F(p, q) < \theta_2$, and such a pair is selected as an error negative sample if $R(X_p, Y_q) < E$.

Finally, the two types of error samples are added to the training set to retrain the matrix $\mathbf{M}$ by Eq. (4).

    The procedure is shown in Algorithm 1.
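Algorithm 1 is not reproduced here, so the sketch below is a hedged reconstruction of the update strategy described above. The threshold values are the ones reported in Section 4.3, while the data layout and the assumption that missing faces are passed as None are ours.

```python
import numpy as np

THETA1, THETA2, E = 0.8, 0.5, 10      # thresholds reported in Section 4.3


def mine_error_samples(probe_feat, gallery_feats, face_scores, M):
    """Collect error positive / error negative pairs for retraining (Section 3.2.2).

    face_scores[q] is the face similarity theta between the probe and gallery item q,
    or None when no reliable face pair could be detected.
    """
    dist = np.array([float((probe_feat - g) @ M @ (probe_feat - g)) for g in gallery_feats])
    rank = np.argsort(np.argsort(dist)) + 1            # ranking result R(X_p, Y_q); 1 = best match

    error_pos, error_neg = [], []
    for q, theta in enumerate(face_scores):
        if theta is None:
            continue
        if theta >= THETA1 and rank[q] > E:            # same face, but ranked low by the metric
            error_pos.append((probe_feat, gallery_feats[q], 1))
        elif theta < THETA2 and rank[q] < E:           # different face, but ranked high by the metric
            error_neg.append((probe_feat, gallery_feats[q], 0))
    # Both lists are added to the training set, and M is re-learned via Eq. (4).
    return error_pos, error_neg
```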

3.3. Feature funnel model

In the feature fusion stage, blindly fusing multiple types of features sometimes does not increase discriminative power, since different features have different reliability. To obtain an effective fusion strategy, the feature funnel model is proposed.

Firstly, a feature space $F$ is constructed which contains the different types of features $\{f_k\}$ used to describe a person. Based on the feature space $F$, the similarity $\mathrm{Sim}(p, q \mid F)$ between probe $p$ and gallery $q$ is defined by fusing the metric similarity scores of the features in $F$.

Secondly, $F$ is rebuilt into $K$ levels according to the following rules:

1) The first level of the feature space, $F_1$, includes the most reliable feature $f_1$ in $F$.

2) The $k$-th level of the feature space, $F_k$, includes all features in $F_{k-1}$ plus the most reliable feature $f_k$ in $F - F_{k-1}$.

Then, the rebuilt feature spaces $\{F_1, F_2, \ldots, F_K\}$ are used to filter the gallery set $G$ for probe $p$. Based on the first feature space $F_1$, we calculate the similarity $\mathrm{Sim}(p, q \mid F_1)$ between probe $p$ and the gallery set $G$ to find the minimum similarity $\mathrm{Min}_1$.

At the $k$-th level ($k = 1, 2, \ldots, K-1$), a gallery image $q$ is passed from the gallery set $G_k$ to $G_{k+1}$ at the $(k+1)$-th level if $\mathrm{Sim}(p, q \mid F_k) < \alpha_k \cdot \mathrm{Min}_k$, and the remaining distractors are discarded.

Finally, the similarity $\mathrm{Sim}(p, q \mid F_K)$ at the last feature level $F_K$ is used to re-identify the probe $p$.
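A sketch of the funnel filtering is given below. The ordering of features from most to least reliable and the α_k thresholds follow the text; treating Sim(p, q | F_k) as the accumulated sum of per-feature metric scores is our own reading of the score-level fusion mentioned in Section 2.2.

```python
import numpy as np


def funnel_rank(per_feature_dists, alphas):
    """Feature funnel model (Section 3.3).

    per_feature_dists: one 1-D array of gallery distances per feature, ordered from the
    most to the least reliable feature, so level k uses features 1..k.
    alphas: the filtering thresholds alpha_k (assumed >= 1) for levels 1..K-1.
    Returns the index of the best-matching gallery item under the last level F_K.
    """
    candidates = np.arange(len(per_feature_dists[0]))   # gallery set G_1
    cum = np.zeros(len(candidates), dtype=float)        # Sim(p, q | F_k) accumulated over levels
    for k, dists in enumerate(per_feature_dists):
        cum = cum + np.asarray(dists, dtype=float)
        if k == len(per_feature_dists) - 1:
            break                                       # last level: rank only, no more filtering
        min_k = cum[candidates].min()                   # Min_k over the surviving gallery G_k
        candidates = candidates[cum[candidates] < alphas[k] * min_k]   # G_{k+1}; rest discarded
    return int(candidates[np.argmin(cum[candidates])])
```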

4. Experiments and analysis

In this section, some re-id experiments are presented to demonstrate the effectiveness of the method.

4.1. Datasets and evaluation protocol

Our approach is evaluated on three RGB-D re-id datasets: BIWI RGBD-ID, IAS-Lab RGBD-ID, and the newly proposed RobotPKU RGBD-ID dataset. Results are reported in the form of the average Cumulated Matching Characteristic (CMC) [1], which is commonly used for the re-id problem.

4.1.1. BIWI RGBD-ID dataset [8]

This dataset (http://robotics.dei.unipd.it/reid) is collected with Kinect sensors using the Microsoft Kinect SDK. It contains a training set and two testing sets. The training set includes video sequences of 45 persons. The testing sets contain 56 sequences in total, with one Still sequence and one Walking sequence per person. People wear different clothes in their training video with respect to their two testing sequences, but wear the same clothes in the Still and Walking sequences. This dataset includes RGB images, depth images, persons' segmentation maps, and skeletal data.

4.1.2. IAS-Lab RGBD-ID dataset

This dataset (http://robotics.dei.unipd.it/reid) is acquired using the OpenNI SDK and the NST tracker. It contains 33 sequences of 11 people. Unlike BIWI RGBD-ID, the Training and TestingB sequences of this dataset exhibit strong illumination variation because of the different auto-exposure levels of the Kinect in the two rooms.

Fig. 4. The performance of metric learning methods with different features on the RobotPKU RGBD-ID dataset.

4.1.3. RobotPKU RGBD-ID dataset

To perform more extensive experiments on a larger amount of data, we collected our own RGB-D dataset, called the RobotPKU RGBD-ID dataset (https://github.com/lianghu56/RobotPKU-RGBD-ID-dataset). It is collected with Kinect sensors using the Microsoft Kinect SDK and contains 180 video sequences of 90 persons; for each person, the Still and Walking sequences were collected in two different rooms. The dataset includes RGB images, depth images, persons' segmentation maps, and skeletal data.

4.2. Evaluation of online metric update

The effectiveness of XQDA is first evaluated on the RobotPKU RGBD-ID dataset, in comparison with some state-of-the-art metric learning methods, as shown in Fig. 4. The dataset is split into two parts, both consisting of 50 individuals, one for training and the other for testing. The results indicate that the metric learning methods (XQDA and KISSME [23]) can learn more information and perform better than the conventional Euclidean distance [11]. In particular, XQDA achieves improvements of 4.97% for HSV and 5.85% for SILTP over KISSME.

Since the face provides important information in the update stage, we analyze the face reliability calculation procedure. According to Table 1, 28.26% of positive pairs and 28.36% of negative pairs yield two valid face detections. Fig. 5 shows the distributions of the similarity score $\theta$ for positive and negative sample pairs; the two distributions are largely separable. The mean (standard deviation) of the positive and negative samples are 0.7397 (0.0997) and 0.5540 (0.0924), respectively. Thus, when $\theta > 0.8$, the two face images are considered a positive pair, i.e., from the same person; conversely, when $\theta < 0.5$, they are considered a negative pair.
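The threshold choice can be reproduced from the reported statistics; the snippet below simply summarizes assumed arrays of positive- and negative-pair face scores and is not tied to the authors' data.

```python
import numpy as np


def summarize_face_scores(pos_scores, neg_scores):
    """Print the statistics used to motivate theta_1 and theta_2 (cf. Fig. 5)."""
    for name, s in (("positive", pos_scores), ("negative", neg_scores)):
        s = np.asarray(s, dtype=float)
        print(f"{name} pairs: mean = {s.mean():.4f}, std = {s.std():.4f}")

# With the reported means 0.7397 (positive) and 0.5540 (negative), theta_1 = 0.8 sits
# above the positive mean and theta_2 = 0.5 below the negative mean, so only the most
# confident face decisions are used to mine error samples.
```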

To validate the performance of the online metric model update strategy, we compare the online update strategy with the offline strategy using different features and metric learning methods. As shown in Fig. 6, there are four sets of comparative experiments. The results show that the rank-1 rate of the online update strategy is higher than that of the offline strategy, but the improvement is limited. A reasonable interpretation is that the online update strategy can correct its mistakes by adding newly labeled data, i.e., the error positive and error negative samples coming from new scenes; however, here the initial training data and the test data are drawn from the same environment, where the online learning strategy helps little.

Table 1. The results of face detection and verification.

Fig. 5. The distributions of the face similarity score $\theta$ for positive and negative sample pairs.

When we initially train the metric model on the RobotPKU RGBD-ID dataset and test it on the IAS-Lab RGBD-ID dataset, the rank-1 rates of the offline HSV+XQDA and offline HSV+KISSME methods are 18.35% and 12.15%, respectively (see Fig. 7). The offline SILTP+XQDA and offline SILTP+KISSME methods achieve 61.45% and 59.45%. The performance of HSV is poorer than that of SILTP because the IAS-Lab RGBD-ID dataset has strong illumination changes, which greatly affect the color feature (see Fig. 1(a) and (b)), whereas varying illumination has much less influence on SILTP. Comparing the online methods with the offline ones, the online update strategy increases performance markedly, particularly for the HSV+XQDA method, with an improvement of 16.10%. The performance improvement on IAS-Lab is more obvious than on the RobotPKU RGBD-ID dataset because of the obvious scenery variation; it is less obvious for SILTP because SILTP is insensitive to illumination variation. In general, the online update strategy adapts to new environments better than the offline strategy.

Fig. 6. The results of different features and metric methods on the BIWI RGBD-ID dataset.

Fig. 7. The results of different features and metric methods on the IAS-Lab RGBD-ID dataset.

4.3. Evaluation of the feature funnel model

Based on the discussion in Section 4.2, two face images are considered a positive pair when $\theta > 0.8$ and a negative pair when $\theta < 0.5$. Therefore, $\theta_1 = 0.8$, $\theta_2 = 0.5$, and $E = 10$ are set for this dataset.

Fig. 8 shows the results of different feature fusion methods. In the experiments, the concatenation algorithm denotes the method in which the features are concatenated to obtain the final person feature, and score-level fusion denotes the method that sums the scores of the different features. In addition, the single-feature methods used within the fusion methods are also shown. As can be seen, the feature funnel model achieves a 77.94% rank-1 identification rate, which is better than all single-feature methods and an improvement of 3.01% over the score-level fusion method. This is because different features have different reliability, so directly concatenating features or summing their similarity scores introduces unnecessary errors. By using the more stable features to filter out unreliable candidates first, the funnel model achieves better performance than the other fusion methods.
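For reference, the two baseline fusion schemes compared in Fig. 8 amount to the following; any score normalization before summing is an assumption, as it is not detailed in the text.

```python
import numpy as np


def concatenation_fusion(features):
    """Feature-level fusion: concatenate the per-modality feature vectors."""
    return np.concatenate([np.asarray(f).ravel() for f in features])


def score_level_fusion(scores):
    """Score-level fusion: sum the (comparably scaled) per-modality similarity scores."""
    return float(np.sum(scores))
```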

Fig. 8. The results of different features and fusion methods on the RobotPKU RGBD-ID dataset.

Table 2. BIWI RGBD-ID dataset: comparison of state-of-the-art rank-1 matching rates.

4.4. Evaluation of the cross-dataset system

The proposed framework is also compared with the state of the art on the BIWI RGBD-ID dataset. In accordance with most re-id methods, we assume that the same person wears the same clothes, so only the Still and Walking sequences are used, and frames with missing skeleton joints are discarded. In our approach, the metric model is initially trained on the RobotPKU RGBD-ID dataset and tested on the BIWI RGBD-ID dataset, while the experiments for the other methods are conducted on the BIWI RGBD-ID dataset directly. The rank-1 identification rates of the various algorithms are shown in Table 2. As expected, combining extra information, such as face and skeleton, brings a certain improvement in accuracy (rows 6 and 7). Accordingly, our methods (rows 1 and 2), which combine appearance-based and body geometric information, also achieve good performance. Our proposed method achieves a 91.6% rank-1 identification rate, an improvement of 5.4% over the offline strategy, because the online update strategy can quickly adapt to a new environment through the addition of typical training data.

5. Conclusions and future work

In this paper, we present an online re-id learning framework. To overcome the drawback of offline training, namely that the metric model cannot adapt well to changing environments, face information is utilized to update the metric model online. In particular, face information is used to find incorrectly measured examples, and these examples are then added to the training set to update the metric model. In addition, the feature funnel model is proposed to fuse the similarity scores of different feature representations.

To validate the effectiveness of the method, experiments are conducted on two public datasets, BIWI and IAS-Lab. In addition, a larger dataset named RobotPKU RGBD-ID is collected to perform more extensive experiments. The results demonstrate that our method makes the metric model adapt to environmental changes with strong transferability, and that the feature funnel model makes full use of the feature information to improve the recognition rate. The proposed method is therefore well suited to robotic applications involving human-robot interaction.

In future work, we plan to extend this work to address the occlusion problem, which may lead to missing skeleton joints and poor re-id performance.

    Acknowledgment

This work is supported by the National Natural Science Foundation of China (NSFC, no. 61340046), the National High Technology Research and Development Programme of China (863 Programme, no. 2006AA04Z247), the Scientific and Technical Innovation Commission of Shenzhen Municipality (no. JCYJ20130331144631730), and the Specialized Research Fund for the Doctoral Programme of Higher Education (SRFDP, no. 20130001110011).

References

[1] S. Gong, M. Cristani, S. Yan, C.C. Loy, Person Re-identification, 2014, pp. 1-20.

[2] Y. Yang, Salient color names for person re-identification, in: European Conference on Computer Vision, 2014.

[3] B. Ma, Y. Su, F. Jurie, Local descriptors encoded by Fisher vectors for person re-identification, in: International Conference on Computer Vision, 2012, pp. 413-422.

[4] X. Xu, Distance metric learning using privileged information for face verification and person re-identification, Trans. Neural Netw. Learn. Syst. (2015) 1.

[5] A. Mogelmose, T.B. Moeslund, K. Nasrollahi, Multimodal person re-identification using RGB-D sensors and a transient identification database, in: International Workshop on Biometrics and Forensics, 2013, pp. 1-4.

[6] A. Mogelmose, C. Bahnsen, T.B. Moeslund, A. Clapes, S. Escalera, Tri-modal person re-identification with RGB, depth and thermal features, in: Computer Vision and Pattern Recognition Workshops, 2013, pp. 301-307.

[7] R. Kawai, Y. Makihara, C. Hua, H. Iwama, Y. Yagi, Person re-identification using view-dependent score-level fusion of gait and color features, in: International Conference on Pattern Recognition, 2012, pp. 2694-2697.

[8] M. Munaro, A. Fossati, A. Basso, E. Menegatti, L. Van Gool, One-Shot Person Re-identification with a Consumer Depth Camera, 2014.

[9] D. Baltieri, R. Vezzani, R. Cucchiara, A. Utasi, Multi-view people surveillance using 3D information, in: International Conference on Computer Vision Workshops, 2011, pp. 1817-1824.

[10] J. Oliver, A. Albiol, A. Albiol, 3D descriptor for people re-identification, in: International Conference on Pattern Recognition, 2011, pp. 1395-1398.

[11] M. Munaro, S. Ghidoni, D.T. Dizmen, E. Menegatti, A feature-based approach to people re-identification using skeleton keypoints, in: International Conference on Robotics and Automation, 2014, pp. 5644-5651.

[12] M. Munaro, A. Basso, A. Fossati, L. Van Gool, E. Menegatti, 3D Reconstruction of Freely Moving Persons for Re-identification with a Depth Sensor, 2014, pp. 4512-4519.

[13] A. Ross, A. Jain, Information fusion in biometrics, Pattern Recognit. Lett. (2010) 2115-2125.

[14] F. Pala, R. Satta, G. Fumera, F. Roli, Multi-modal person re-identification using RGB-D cameras, Trans. Circuits Syst. Video Technol. (2015) 1-1.

[15] R. Satta, G. Fumera, F. Roli, Fast person re-identification based on dissimilarity representations, Pattern Recognit. Lett. (2012) 1838-1848.

[16] N. Noceti, F. Odone, Semi-supervised learning of sparse representations to recognize people spatial orientation, in: International Conference on Image Processing, 2014, pp. 3382-3386.

[17] C. Chen, J. Odobez, We are not contortionists: coupled adaptive learning for head and body orientation estimation in surveillance video, in: Computer Vision and Pattern Recognition, 2012, pp. 1544-1551.

[18] Z. Kalal, J. Matas, K. Mikolajczyk, P-N learning: bootstrapping binary classifiers by structural constraints, in: Computer Vision and Pattern Recognition, 2010, pp. 49-56.

[19] S. Liao, G. Zhao, V. Kellokumpu, M. Pietikainen, Modeling pixel process with scale invariant local patterns for background subtraction in complex scenes, in: Computer Vision and Pattern Recognition, 2010, pp. 1301-1306.

[20] T. Ojala, M. Pietikäinen, D. Harwood, A comparative study of texture measures with classification based on featured distributions, Pattern Recognit. (1996) 51-59.

[21] S. Liao, Y. Hu, X. Zhu, S.Z. Li, Person re-identification by local maximal occurrence representation and metric learning, in: Computer Vision and Pattern Recognition, 2015, pp. 2197-2206.

[22] Y. Sun, X. Wang, X. Tang, Deep learning face representation by joint identification-verification, Adv. Neural Inf. Process. Syst. (2014) 1988-1996.

[23] P.M. Roth, P. Wohlhart, M. Hirzer, M. Kostinger, H. Bischof, Large scale metric learning from equivalence constraints, in: Computer Vision and Pattern Recognition, 2012, pp. 2288-2295.

    *Corresponding author.

E-mail address: lianghu@sz.pku.edu.cn (L. Hu).

    Peer review under responsibility of Chongqing University of Technology.

    http://dx.doi.org/10.1016/j.trit.2017.04.001

2468-2322/© 2017 Chongqing University of Technology. Production and hosting by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

