
    Weighted average integration of sparse representation and collaborative representation for robust face recognition

    2016-12-14 08:06:13 · Shaoning Zeng · Yang Xiong
    Computational Visual Media, Issue 4, 2016


    Research Article


    Shaoning Zeng1, Yang Xiong1

    Sparse representation is a significant method of image classification for face recognition. Sparsity of the image representation is the key factor in robust image classification. As an improvement to sparse representation-based classification, collaborative representation is a newer method for robust image classification; in it, the training samples of all classes collaboratively contribute to represent one single test sample. The ways of representing a test sample in sparse representation and collaborative representation are very different, so we propose a novel method to integrate both sparse and collaborative representations to provide improved results for robust face recognition. The method first computes a weighted average of the representation coefficients obtained from the two conventional algorithms, and then uses it for classification. Experiments on several benchmark face databases show that our algorithm outperforms both sparse and collaborative representation-based classification algorithms, providing at least a 10% improvement in recognition accuracy.

    sparse representation; collaborative representation; image classification; face recognition

    1 Introduction

    Feature extraction and classification are two key steps in face recognition [1, 2]. Extracted features form the basis of the mathematical calculations performed by classification methods. Only if sufficient and proper features are extracted can a classification method produce good recognition results. One prevailing paradigm is to use statistical learning approaches based on training data to determine which features to extract and how to construct classification engines. Nowadays, many successful algorithms for face detection, alignment, and matching are learning-based. Representation-based classification methods (RBCM), such as PCA [3, 4] and LDA [5, 6], have significantly improved face recognition techniques. Such linear methods can be extended by nonlinear kernel techniques (kernel PCA [7] and kernel LDA [8]). The basic process in these methods is as follows: first, all training samples are coded to obtain a representation matrix; then this matrix is used to evaluate each test sample and determine new lower-dimensional representation coefficients; finally, classification is performed based on these coefficients [2, 9]. Therefore, the robustness of face recognition is determined by the suitability of the representation coefficients.

    1 Huizhou University, Guangdong 516007, China. E-mail: S. Zeng, zsn@outlook.com; Y. Xiong, xyang.2010@hzu.edu.cn.

    Manuscript received: 2016-05-05; accepted: 2016-09-22

    Sparse coding or representation has recently been proposed as an optimal representation of image samples. Sparse representation-based classification (SRC) for face recognition [2, 9, 10] first codes the test sample as a linear combination of the training samples, and then determines the differences between the test sample and all training samples using the representation coefficients. Consequently, the test sample can be classified as belonging to the class with minimal distance. SRC has been widely applied to face recognition [11–13], image categorization [14, 15], and image super-resolution [9, 16]. Indeed, SRC can be viewed as a global representation method [17], because it uses all training samples to represent the test sample. In contrast, collaborative representation-based classification (CRC), proposed as an improvement to SRC, considers the local features common to each class in its representation. The training samples as a whole are used to determine the representation coefficients of a test sample. CRC regards the collaboration between all classes in the representation as the underlying reason it is possible to build a powerful image classification method [18–20]. However, we believe the collaborative contribution from local classes can also be used to refine the sparse representation, and that it is possible to improve the robustness of image classification by integrating both types of representation. Zhang et al. [17] integrated the globality of SRC with the locality of CRC for robust representation-based classification, and Li et al. [21] combined sparse and collaborative representations for hyperspectral target detection with a linear operation. Further similar integrative methods have been proposed for other domains.

    In this paper, we propose using a slightly more sophisticated mathematical operation, a weighted average of the sparse and collaborative representations, for classification; we call the resulting method WASCRC. Firstly, it determines the sparse representation coefficients β for the test sample via l1-norm minimization over all training samples. Secondly, it determines the collaborative representation coefficients α for the same test sample via l2-norm minimization over all training samples. Thirdly, it calculates the new representation coefficients as a weighted average of these two groups of coefficients: β' = aα + bβ. Finally, the distance between the test sample and each class is determined from these averaged coefficients, allowing the test sample to be classified as belonging to the nearest class. Usually, we let a = 1 for simplicity and vary b to suit the specific application. We conducted various experiments on several benchmark face databases, which showed that our WASCRC algorithm could decrease the classification failure rate by up to 17% and 26% relative to SRC and CRC respectively.

    The rest of this paper is organized as follows. Section 2 introduces related work on sparse representation for robust face recognition. Section 3 describes our proposed algorithm and the rationale behind it. Section 4 presents experimental results on several benchmark face databases. Section 5 gives our conclusions.

    2 Related work

    2.1 Sparse representation

    The sparse representation-based classification (SRC) algorithm was proposed by Wright et al. [2]. The basic procedure involves two steps: first representing the test sample as a linear combination of all training samples, and then identifying the closest class based on the minimal deviation.

    Assume that there are C subjects or pattern classes, with n training samples x_1, x_2, ..., x_n in total, and that the test sample is y. Let the matrix X_i = [x_{i,1}, x_{i,2}, ..., x_{i,n_i}] denote the n_i training samples from the i-th class. By stacking all columns of a w×h gray-scale image, we obtain the vector for this image: x ∈ I^m (m = w×h). Each column of X_i thus represents a training image of the i-th subject. Any test sample y ∈ I^m from the same class can be described by the linear formula

    y = a_{i,1} x_{i,1} + a_{i,2} x_{i,2} + ··· + a_{i,n_i} x_{i,n_i}    (1)

    where a_{i,j} ∈ I, j = 1, 2, ..., n_i.

    The n training samples of all C subjects can be denoted by a new matrix X = [X_1, X_2, ..., X_C]. Thus, Eq. (1) can be rewritten more simply as

    y = Xβ ∈ I^m    (2)

    where β = [0, ..., 0, a_{i,1}, a_{i,2}, ..., a_{i,n_i}, 0, ..., 0]^T is the sparse coefficient vector, in which only the entries for the i-th class are non-zero. This vector of coefficients is the key factor affecting the robustness of classification. Note that SRC uses the entire set of training samples to find these coefficients. This is a significant difference from one-sample-at-a-time or one-class-at-a-time methods such as the nearest neighbor (NN) [22] and nearest subspace (NS) [23] algorithms. Unlike these local methods, the global representation used by SRC can both identify objects represented in the training set and reject samples that do not belong to any of the classes present in it.
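To make this data layout concrete, the following sketch (toy random arrays standing in for face images; all sizes and names are hypothetical, not the authors' code) stacks the columns of each w×h image into a vector, collects the vectors as the columns of X, and builds a class-sparse coefficient vector β of the form above:

```python
import numpy as np

rng = np.random.default_rng(0)
C, n_i, h, w = 3, 2, 4, 3            # 3 classes, 2 toy "images" each, 4x3 pixels
images = rng.random((C, n_i, h, w))  # stand-ins for gray-scale face images

# Stack the columns of each image into a vector x in I^m (m = w*h), and
# collect all n = C * n_i training vectors as the columns of X = [X_1, X_2, X_3].
X = np.column_stack([img.reshape(-1, order="F")  # order="F" stacks columns
                     for cls in images for img in cls])

# beta is "class-sparse": only the entries for class i = 1 (columns 2-3 of X)
# are non-zero, so y = X @ beta lies in the span of class 1's samples.
beta = np.zeros(C * n_i)
beta[2:4] = [0.7, 0.3]
y = X @ beta
```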

    The next step in SRC is to solve an l1-norm minimization problem to find the sparsest solution to Eq. (2); this solution is then used to identify the class of the test sample y. Here we use

    β* = arg min_β ||β||_1  subject to  y = Xβ    (3)

    Next, SRC computes the residual of this representative coefficient vector for the i-th class, using only the coefficients β*_i associated with that class:

    res_SRC,i = ||y − X_i β*_i||_2    (4)

    Finally, the identity of y is output as

    identity(y) = arg min_i {res_SRC,i}    (5)

    There are five prevailing fast l1-minimization approaches: gradient projection, homotopy, iterative shrinkage-thresholding, proximal gradient, and augmented Lagrange multipliers (ALM) [15]. First-order l1-minimization techniques, e.g., SpaRSA [9], FISTA [24], and ALM [13], are more efficient for noisy data, while homotopy [25], ALM, and l1_ls [26] are more suitable for face recognition because of their accuracy and speed. Other SRC algorithms are implemented using l0-norm, lp-norm (0<p<1), or even l2-norm minimization. Xu et al. [26] exploited l1/2-norm minimization to constrain the sparsity of representation coefficients; further descriptions of various norm minimizations can be found in Ref. [22]. Yang et al. [13] proposed fast l1-minimization algorithms, called augmented Lagrangian methods (ALM), for robust face recognition. Furthermore, many researchers have proposed different SRC implementations and improvements, such as kernel sparse representation by Gao et al. [15], an algorithm by Yang and Zhang [27] that uses a Gabor occlusion dictionary to significantly reduce the computational cost when dealing with face occlusion, l1-graphs for image classification by Cheng et al. [28], sparsity preserving projections by Qiao et al. [29], the combination of sparse coding with linear spatial pyramid matching by Yang et al. [30], and a prototype-plus-variation model for sparsity-based face recognition [31]. Classification accuracy can be further improved by using virtual samples [32–34]. All these methods attempt to improve the robustness of image classification for face recognition; it is clear that sparsity plays a paramount role in robust classification for face recognition.
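As an illustrative sketch only (not the authors' implementation), the SRC pipeline can be written with a minimal iterative shrinkage-thresholding (ISTA) solver, one of the first-order approaches named above, standing in for the l1-minimization step; the solver, data, and parameter values here are our own assumptions:

```python
import numpy as np

def ista(X, y, lam=0.01, n_iter=1000):
    """Minimize 0.5*||y - X b||_2^2 + lam*||b||_1 by iterative
    shrinkage-thresholding (a simple first-order l1 solver)."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = b - X.T @ (X @ b - y) / L      # gradient step
        b = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return b

def src_classify(X, y, labels):
    """SRC: sparse-code y over all training samples, then assign y to the
    class whose own samples reconstruct it with the smallest l2 residual."""
    b = ista(X, y)
    classes = np.unique(labels)
    res = [np.linalg.norm(y - X[:, labels == c] @ b[labels == c])
           for c in classes]
    return classes[int(np.argmin(res))]
```

The columns of X are assumed l2-normalized beforehand, as is customary for representation-based classifiers.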

    2.2 Collaborative representation

    Collaborative representation-based classification (CRC) was proposed as an improvement to and replacement for SRC by Zhang et al. [18, 19] and Chen and Ramadge [20]. Much of the literature on SRC, including Ref. [2], overemphasizes the significance of l1-norm sparsity in image classification, while the role of collaborative representation (CR) is downplayed [18]. CR involves contributions from every training sample in representing the test sample y, because different face images share certain common features helpful for classification. It is thus based on nonlocal samples. CRC can use this nonlocal strategy to output more robust face recognition results.

    Using a regularized least square approach [35], we can collaboratively represent the test sample over X with a low computational burden:

    α* = arg min_α { ||y − Xα||_2^2 + λ ||α||_2^2 }    (6)

    where λ is a regularization parameter, which makes the least square solution stable and introduces a certain amount of sparsity into the solution, albeit weaker than that of l1-norm minimization. The collaborative representation then has the closed-form solution

    α* = (X^T X + λ I)^{-1} X^T y    (7)

    Let P = (X^T X + λ I)^{-1} X^T. Since P does not depend on y, it can be precomputed, and we then simply project the test sample y onto P:

    α* = P y    (8)

    Partition the coefficients by class as

    α* = [α*_1; α*_2; ...; α*_C]    (9)

    where α*_i collects the entries associated with the i-th class. We may then compute the regularized residuals by

    res_CRC,i = ||y − X_i α*_i||_2 / ||α*_i||_2    (10)

    Finally,we can output the identity of the test sample y as

    identity(y) = arg min_i {res_CRC,i}    (11)

    In this way, CRC involves all training samples in representing the test sample. We consider this collaboration an effective approach that gives a better representation result.
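Since CRC reduces to a few lines of linear algebra, the steps above can be sketched as follows (toy data; the regularization value is a hypothetical choice, and the small epsilon guards against division by a near-zero coefficient norm):

```python
import numpy as np

def crc_classify(X, y, labels, lam=0.5):
    """CRC with regularized least squares: alpha = P y, then pick the class
    with the smallest regularized residual ||y - X_i a_i|| / ||a_i||."""
    n = X.shape[1]
    # P = (X^T X + lam*I)^(-1) X^T depends only on the training set,
    # so in practice it is computed once and reused for every test sample.
    P = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T)
    a = P @ y
    classes = np.unique(labels)
    res = []
    for c in classes:
        a_c = a[labels == c]
        res.append(np.linalg.norm(y - X[:, labels == c] @ a_c)
                   / (np.linalg.norm(a_c) + 1e-12))
    return classes[int(np.argmin(res))]
```

Precomputing P is what gives CRC its low per-test-sample cost: classification is a single matrix-vector product plus per-class norms.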

    3 Our method

    We believe that sparse representation (SR) still makes a significant contribution to robust classification, while the real importance of collaborative representation (CR) is to refine the sparse representation, not to negate it. Recent literature has proposed novel approaches which integrate both algorithms in pursuit of more robust results. Zhang et al. [17] integrated the globality of SRC with the locality of CRC for robust representation-based classification. In that method, integration was performed in the residual calculation of the representation, rather than in the sparse vector for the test sample. Li et al. [21] proposed a method to combine sparse and collaborative representations for hyperspectral target detection. This combination also happened at the step of computing the distance, after the sparse vector had been determined. In contrast, we compute a weighted average of the representation coefficients produced by the SRC and CRC algorithms themselves, as well as adapting the computation of residuals, in an approach we call WASCRC. WASCRC works as follows. In the first stage, we obtain two kinds of coefficients from SRC and CRC: we use a conventional SRC algorithm to find the sparse representation coefficients β for the test sample, using Eq. (3), and a conventional CRC algorithm to find the collaborative representation coefficients α, as in Eq. (8).

    Next, we integrate them by means of a weighted average, of the general form (a x_1 + b x_2)/(a + b). Our algorithm obtains the new coefficients by imposing different weights on the two kinds of coefficients found by the two algorithms as follows:

    β'=aα+bβ (12)

    where a and b indicate the weights given to the two algorithms.

    Finally, we compute the residual between the test sample and each class with an l2-norm operation. Unlike conventional SRC and CRC, after forming β' we also need to divide by the sum of the two weights, so that the combined coefficients form a true weighted average:

    res_WASCRC,i = ||y − X_i β'_i / (a + b)||_2    (13)

    where β'_i denotes the entries of β' associated with the i-th class.

    In this way, the new residual incorporates the weighted average, producing a refined solution. We can use it to identify the test sample y as

    identity(y)=arg mini{resWASCRC,i} (14)

    In practice, we use a = 1 for simplicity and vary b to adjust the relative contribution of the two algorithms. We used two values, b = 4 and b = 300, in our experiments.
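Putting the pieces together, a minimal WASCRC sketch might look as follows. This is our own assumption-laden rendering, not the authors' code: a simple ISTA solver stands in for a conventional SRC algorithm, the ridge closed form stands in for CRC, and the division by a+b is folded into the averaged coefficients (dividing the residual value itself by the constant a+b would not change the arg min):

```python
import numpy as np

def ista(X, y, lam=0.01, n_iter=1000):
    """Toy l1 solver standing in for the SRC coding step."""
    L = np.linalg.norm(X, 2) ** 2
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = b - X.T @ (X @ b - y) / L
        b = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return b

def wascrc_classify(X, y, labels, a=1.0, b=4.0, lam=0.5):
    beta = ista(X, y)                      # sparse coefficients (SRC step)
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                            X.T @ y)       # collaborative coefficients (CRC step)
    w = (a * alpha + b * beta) / (a + b)   # weighted average of the coefficients
    classes = np.unique(labels)
    res = [np.linalg.norm(y - X[:, labels == c] @ w[labels == c])
           for c in classes]               # per-class reconstruction residual
    return classes[int(np.argmin(res))]    # nearest class wins
```

With a = 1, raising b pushes the averaged coefficients toward the sparse solution, while small b keeps them close to the collaborative one.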

    4 Results

    We conducted comprehensive experiments on several mainstream benchmark face databases to compare the robustness of our WASCRC algorithm with that of the conventional SRC and CRC algorithms. The chosen benchmarks include the ORL [36], Georgia Tech [37], and FERET [38] face databases. We ran experiments with different numbers of training samples for each face database. We now explain the samples, steps, and results for each experiment, as well as an analysis and comparison of the results. The experimental results indicate that WASCRC produces a lower classification failure rate than the SRC and CRC algorithms, reaching a 10% improvement in some cases.

    4.1 Experiments on the ORL face database

    The ORL face database [36] is a small database that includes only 400 face images: 10 images of each of 40 distinct subjects. The images of each subject were captured under different conditions, at varying times, with varying lighting, facial expressions (open or closed eyes, smiling or not smiling), and facial details (glasses or no glasses). They were captured against a dark homogeneous background with the subjects in an upright, frontal position. To reduce the computational complexity, we resized all face images to 56×46 pixels. Figure 1 presents the first 20 images from the ORL database.

    We calculated the improvements provided by WASCRC over both SRC and CRC in each case, setting the weighted average factor b=1 in these experiments. The best case relative to SRC used two training samples, in which WASCRC reduced the classification failure rate by 27%. The best case relative to CRC reached a 23% reduction in failures when using five training samples. On average, the improvements of WASCRC over SRC and CRC were 17% and 18% respectively. In the one-training-sample case, which is typical of real applications, WASCRC gained 1% and 17% in accuracy, respectively.

    Fig.1 The first two subjects in the ORL face database.

    Fig.2 Classification results for all 3 algorithms for the ORL face database.

    Fig.3 Coefficients determined by the 3 algorithms for the ORL face database.

    To further understand the cause of these improvements, we added a step to analyze the change in representation coefficients in all three algorithms. We picked a single test sample that WASCRC classified correctly but that both SRC and CRC misclassified. We selected the first two samples for all 40 subjects as training samples, so that 80 training samples in total were used to determine the representation coefficients for the test sample. In our experiment, we found that the 214th test sample, the 6th sample for the 26th subject, was not recognized correctly by either SRC or CRC, while WASCRC succeeded in classifying it. We therefore carefully analyzed the representation coefficients of this test sample, shown in Fig. 3. It is clear that every single coefficient used by WASCRC (green) lay between the values used by SRC (pink) and CRC (yellow): the new coefficients were smoother than the original ones, due to the weighted average calculation. The distance between the test sample and each class is calculated from the sum of the entries for all training samples belonging to that class. We believe that if the curve is smoother, meaning the values are relatively smaller and closer to zero, the resulting distances will be closer to zero and show smaller differences. On one hand, more entries close to zero produce a sparser representation vector; on the other hand, smaller differences help output a more precise comparison. This had a positive effect on the representation and produced a better sparse representation than conventional representation-based methods, leading to higher classification accuracy.

    4.2 Experiments on the Georgia Tech face database

    The next group of experiments used the Georgia Tech face database [37, 39]. This database has 750 face images captured from 50 individuals in several sessions; all images are color JPEG files, 15 per subject. Each subject shows frontal and/or tilted faces as well as different facial expressions, lighting conditions, and scale. The original face images all have cluttered backgrounds and a resolution of 640×480 pixels, which is too large for efficient representation. For the experiments, we programmatically removed the background and resized the images to 30×40 pixels to reduce the computing load; this image preprocessing did not negatively affect our results. Figure 4 shows the first subject, with 15 samples, in the face database. It is not necessary to use all three color channels of these images: we used only single-channel gray-scale data derived from the RGB images. Again, this did not affect our experimental results.

    We successively picked the first 1 to 10 face images as training samples and the rest as test samples. In this case, we set the weighted average factor b=300 and recorded the classification results for all test samples given by all three algorithms. The resulting failure rates are shown in Fig. 5. The SRC algorithm (blue) unexpectedly outperformed the CRC algorithm (green), while our WASCRC algorithm (yellow) unsurprisingly gave the best results.

    Furthermore, as the number of training samples increased, the failure rates dropped dramatically.

    Fig.4 The first subject,with 15 samples in the Georgia Tech face database.

    Fig.5 Classification results for all 3 algorithms for the Georgia Tech face database.

    We again determined the improvements of WASCRC over SRC and CRC; both were slightly lower than for the ORL face database. The conventional SRC algorithm still performed well, and the best case over SRC yielded only a 4% improvement, when using 7 training samples. As the conventional CRC algorithm underachieved, WASCRC outperformed it by up to 20% when using 8 training samples. On average, WASCRC outperformed SRC and CRC by 2% and 11% respectively, and outperformed CRC by 9% in the one-training-sample case.

    4.3 Experiments on the FERET face database

    The last group of experiments was performed on one of the largest public benchmark face databases, the FERET database [38]. It is much bigger than the Georgia Tech and ORL face databases. Each subject has five to eleven images, with two frontal views (fa and fb) and one more frontal image with a different facial expression. Our experiments used 200 subjects in total, with 7 samples each. Figure 7 shows the first three subjects in the database; images 1–7 belong to the first subject, images 8–14 belong to the second subject, and image 15 belongs to the third subject (who has 6 more images not shown in the figure).

    We used images 1–5 as training samples and the remaining images as test samples, and again set the weighted average factor b=300. The resulting classification failure rates for all three algorithms are shown in Fig. 8. As in the experiments on the other databases, our WASCRC algorithm (yellow) outperformed both the SRC (blue) and CRC (green) algorithms in all test cases. Even in the one-training-sample case, WASCRC produced the highest classification accuracy.

    WASCRC outperformed SRC and CRC by up to 26% and 49% respectively when using 5 training samples. On average, WASCRC outperformed SRC and CRC by 12% and 26% respectively, and by 5% and 10% in the one-training-sample case.

    Fig.6 Coefficients determined by the 3 algorithms for the Georgia Tech face database.

    Fig.7 The first fifteen face images from the FERET face database.

    Fig.8 Classification results for all 3 algorithms for the FERET face database.

    Fig.9 Coefficients determined by the 3 algorithms for the FERET face database.

    The representation coefficients used by WASCRC were always smoother, as shown in Fig. 9. The weighted average operation worked well, as expected. Figure 9 shows only the first half of all 200 training samples, for one test sample classified correctly by WASCRC and incorrectly by both SRC and CRC. This result validates that our proposed algorithm is a more robust classifier.

    5 Conclusions

    Sparsity of a representation is the key to successful sparse representation-based classification, while collaboration from all classes in the representation is the key to promising collaborative representation-based classification. We have shown how to integrate these approaches in a method that performs a weighted average operation on sparse and collaborative representations for robust face recognition. Such integration can lower the failure rate in face recognition. Our experiments demonstrated that our new approach can outperform both sparse and collaborative representation-based classification algorithms for face recognition, decreasing the recognition failure rate by about 10%. Higher accuracy still is possible in some specific cases by altering the factor used for weighted averaging.

    Acknowledgements

    This work was supported in part by the National Natural Science Foundation of China (Grant No. 61502208), the Natural Science Foundation of Jiangsu Province of China (Grant No. BK20150522), the Scientific and Technical Program of the City of Huizhou (Grant No. 2012-21), the Research Foundation of the Education Bureau of Guangdong Province of China (Grant No. A314.0116), and the Scientific Research Starting Foundation for Ph.D. in Huizhou University (Grant No. C510.0210).

    [1]Brunelli,R.;Poggio,T.Face recognition:Features versus templates.IEEE Transactions on Pattern Analysis and Machine Intelligence Vol.15,No.10, 1042–1052,1993.

    [2]Wright,J.;Yang,A.Y.;Ganesh,A.;Sastry,S.S.;Ma, Y.Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol.31,No.2,210–227,2009.

    [3]Xu,Y.;Zhang,D.;Yang,J.;Yang,J.-Y.An approach for directly extracting features from matrix data and its application in face recognition.Neurocomputing Vol.71,Nos.10–12,1857–1865,2008.

    [4]Turk,M.;Pentland,A.Eigenfaces for recognition. Journal of Cognitive Neuroscience Vol.3,No.1,71–86, 1991.

    [5]Park,S.W.;Savvides,M.A multifactor extension of linear discriminant analysis for face recognition under varying pose and illumination.EURASIP Journal on Advances in Signal Processing Vol.2010,158395,2010.

    [6]Lu,J.;Plataniotis,K.N.;Venetsanopoulos,A.N. Face recognition using LDA-based algorithms.IEEE Transactions on Neural Networks Vol.14,No.1,195–200,2003.

    [7]Debruyne,M.;Verdonck,T.Robust kernel principal component analysis and classification.Advances in Data Analysis and Classification Vol.4,No.2,151–167,2010.

    [8]Muller,K.-R.;Mika,S.;Ratsch,G.;Tsuda,K.;Scholkopf,B.An introduction to kernel-based learning algorithms.IEEE Transactions on Neural Networks Vol.12,No.2,181–201,2001.

    [9]Yang,J.;Wright,J.;Huang,T.S.;Ma,Y. Image super-resolution via sparse representation. IEEE Transactions on Image Processing Vol.19,No. 11,2861–2873,2010.

    [10]Xu,Y.;Zhang,D.;Yang,J.;Yang,J.-Y.A two-phase test sample sparse representation method for use with face recognition.IEEE Transactions on Circuits and Systems for Video Technology Vol.21,No.9,1255–1262,2011.

    [11]Zhong,D.;Zhu,P.;Han,J.;Li,S.An improved robust sparse coding for face recognition with disguise. International Journal of Advanced Robotic Systems Vol.9,126,2012.

    [12]Xu,Y.;Zhu,Q.;Zhang,D.Combine crossing matching scores with conventional matching scores for bimodal biometrics and face and palmprint recognition experiments.Neurocomputing Vol.74,No.18,3946–3952,2011.

    [13]Yang,A.Y.;Zhou,Z.;Balasubramanian,A.G.;Sastry,S.S.;Ma,Y.Fast l1-minimization algorithms for robust face recognition.IEEE Transactions on Image Processing Vol.22,No.8,3234–3246,2013.

    [14]Mairal,J.;Bach,F.;Ponce,J.;Sapiro,G.;Zisserman, A.Discriminative learned dictionaries for local image analysis.In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition,1–8,2008.

    [15]Gao,S.;Tsang,I.W.-H.;Chia,L.-T.Kernel sparse representation for image classification and face recognition.In:Computer Vision—ECCV 2010. Daniilidis,K.;Maragos,P.;Paragios,N.Eds.Springer Berlin Heidelberg,1–14,2010.

    [16]Yang,J.;Wright,J.;Huang,T.;Ma,Y.Image super-resolution as sparse representation of raw image patches.In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition,1–8,2008.

    [17]Zhang,Z.;Li,Z.;Xie,B.;Wang,L.;Chen, Y.Integrating globality and locality for robust representation based classification. Mathematical Problems in Engineering Vol.2014,Article No. 415856,2014.

    [18]Zhang,L.;Yang,M.;Feng,X.Sparse representation or collaborative representation: Which helps face recognition? In:Proceedings of IEEE International Conference on Computer Vision,471–478,2011.

    [19]Zhang,L.;Yang,M.;Feng,X.;Ma,Y.;Zhang, D.Collaborative representation based classification for face recognition.arXiv preprint arXiv:1204.2358, 2012.

    [20]Chen, X.; Ramadge, P. J. Collaborative representation,sparsity or nonlinearity:What is key to dictionary based classification?In:Proceedings of IEEE International Conference on Acoustics,Speech and Signal Processing,5227–5231,2014.

    [21]Li,W.;Du,Q.;Zhang,B.Combined sparse and collaborative representation for hyperspectral target detection.Pattern Recognition Vol.48,No.12,3904–3916,2015.

    [22]Feng,Q.;Pan,J.-S.;Yan,L.Restricted nearest feature line with ellipse for face recognition.Journal of Information Hiding and Multimedia Signal Processing Vol.3,No.3,297–305,2012.

    [23]Elhamifar,E.;Vidal,R.Sparse subspace clustering: Algorithm, theory, and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol.35,No.11,2765–2781,2013.

    [24]Beck,A.;Teboulle,M.A fast iterative shrinkagethresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences Vol.2,No.1,183–202,2009.

    [25]Zhang,Z.;Xu,Y.;Yang,J.;Li,X.;Zhang,D.A survey of sparse representation:Algorithms and applications. IEEE Access Vol.3,490–530,2015.

    [26]Xu,Z.;Zhang,H.;Wang,Y.;Chang,X.;Liang, Y.L1/2regularization.Science China Information Sciences Vol.53,No.6,1159–1169,2010.

    [27]Yang,M.;Zhang,L.Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary.In:Computer Vision—ECCV 2010.Daniilidis,K.;Maragos,P.;Paragios,N.Eds. Springer Berlin Heidelberg,448–461,2010.

    [28]Cheng,B.;Yang,J.;Yan,S.;Fu,Y.;Huang,T. S.Learning with l1-graph for image analysis.IEEE Transactions on Image Processing Vol.19,No.4,858–866,2010.

    [29]Qiao,L.;Chen,S.;Tan,X.Sparsity preserving projections with applications to face recognition. Pattern Recognition Vol.43,No.1,331–341,2010.

    [30]Yang,J.;Yu,K.;Gong,Y.;Huang,T.Linear spatial pyramid matching using sparse coding for image classification.In:Proceedings of IEEE Conference on Computer Vision and Pattern Recognition,1794–1801, 2009.

    [31]Deng,W.;Hu,J.;Guo,J.In defense of sparsity based face recognition.In:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,399–406,2013.

    [32]Xu,Y.;Zhu,X.;Li,Z.;Liu,G.;Lu,Y.;Liu,H. Using the original and‘symmetrical face’training samples to perform representation based two-step face recognition.Pattern Recognition Vol.46,No.4,1151–1158,2013.

    [33]Xu,Y.;Zhang,Z.;Lu,G.;Yang,J.Approximately symmetrical face images for image preprocessing in face recognition and sparse representation based classification.Pattern Recognition Vol.54,68–82, 2016.

    [34]Liu,Z.;Song,X.;Tang,Z.Fusing hierarchical multiscale local binary patterns and virtual mirror samples to perform face recognition.Neural Computing and Applications Vol.26,No.8,2013–2026,2015.

    [35]Lee,H.;Battle,A.;Raina,R.;Ng,A.Y.Efficient sparse coding algorithms.In:Proceedings of Advances in Neural Information Processing Systems,801–808, 2006.

    [36]AT&T Laboratories Cambridge.The database of faces. 2002.Available at http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html.

    [37]Computer Vision Online.Georgia Tech face database. 2015.Available at http://www.computervisiononline. com/dataset/1105138700.

    [38]NIST Information Technology Laboratory.Color FERET database.2016.Available at https://www. nist.gov/itl/iad/image-group/color-feret-database.

    [39]Xu,Y.;Zhang,B.;Zhong,Z.Multiple representations and sparse representation for image classification. Pattern Recognition Letters Vol.68,9–14,2015.

    Shaoning Zeng received his M.S. degree in software engineering from Beihang University, China, in 2007. Since 2009, he has been a lecturer at Huizhou University, China. His current research interests include pattern recognition, sparse representation, image recognition, and neural networks.

    Yang Xiong received his B.S. degree in computer science and technology from Hubei Normal University, China, in 2002. He received his M.S. degree in computer science from Central China Normal University, China, in 2005, and his Ph.D. degree from the Institute for Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology, China, in 2010. Since 2010, he has been teaching in the Department of Computer Science and Technology, Huizhou University, China. His current research interests include pattern recognition and machine learning.

    Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

    Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.

    © The Author(s) 2016. This article is published with open access at Springerlink.com

看免费成人av毛片| 在线观看免费高清a一片| 狠狠婷婷综合久久久久久88av| 亚洲色图综合在线观看| 精品国产一区二区三区久久久樱花| 国产亚洲精品久久久com| 18在线观看网站| 18禁动态无遮挡网站| 日韩制服骚丝袜av| 在线观看免费视频网站a站| 两个人看的免费小视频| 波野结衣二区三区在线| 久久久久精品性色| 在线观看免费日韩欧美大片| 男人舔女人的私密视频| 啦啦啦在线观看免费高清www| 蜜桃在线观看..| 啦啦啦视频在线资源免费观看| 中文字幕人妻丝袜制服| 久久人人爽人人片av| 亚洲av成人精品一二三区| 久久国产精品男人的天堂亚洲 | 国产欧美亚洲国产| 国产精品嫩草影院av在线观看| 丝袜人妻中文字幕| 高清毛片免费看| 啦啦啦在线观看免费高清www| 欧美+日韩+精品| 好男人视频免费观看在线| a级毛片在线看网站| 啦啦啦啦在线视频资源| 26uuu在线亚洲综合色| 午夜影院在线不卡| 久久热在线av| 美女脱内裤让男人舔精品视频| 熟女人妻精品中文字幕| 久久精品国产自在天天线| 一级爰片在线观看| 黄片播放在线免费| 美女脱内裤让男人舔精品视频| 一级黄片播放器| www.色视频.com| 一区二区日韩欧美中文字幕 | 亚洲第一av免费看| 亚洲情色 制服丝袜| 成人亚洲欧美一区二区av| 99热6这里只有精品| 国产国语露脸激情在线看| 久久久久久人人人人人| 美女主播在线视频| 国产在线一区二区三区精| 91久久精品国产一区二区三区| 色5月婷婷丁香| 久久影院123| 色5月婷婷丁香| 中文字幕精品免费在线观看视频 | 国产深夜福利视频在线观看| 精品久久国产蜜桃| 黄色配什么色好看| www日本在线高清视频| 91aial.com中文字幕在线观看| 丁香六月天网| 青青草视频在线视频观看| 欧美xxⅹ黑人| 亚洲av综合色区一区| 亚洲精品乱久久久久久| 十八禁网站网址无遮挡| a级毛片黄视频| 亚洲天堂av无毛| 美女中出高潮动态图| 久久精品久久精品一区二区三区| 久久人妻熟女aⅴ| 热re99久久精品国产66热6| 最近最新中文字幕免费大全7| 赤兔流量卡办理| 最近中文字幕高清免费大全6| 波野结衣二区三区在线| 97精品久久久久久久久久精品| 亚洲综合色惰| 高清视频免费观看一区二区| 高清欧美精品videossex| 街头女战士在线观看网站| 丝袜喷水一区| 综合色丁香网| 亚洲精品aⅴ在线观看| 中文字幕制服av| 午夜久久久在线观看| 国产一区二区激情短视频 | 国产片特级美女逼逼视频| 亚洲熟女精品中文字幕| 成年动漫av网址| 一二三四中文在线观看免费高清| 中文天堂在线官网| 免费黄网站久久成人精品| 97在线视频观看| 国产精品.久久久| 99热全是精品| 十分钟在线观看高清视频www| 久久av网站| 中文乱码字字幕精品一区二区三区| 国产熟女午夜一区二区三区| 亚洲国产看品久久| 啦啦啦视频在线资源免费观看| 亚洲精品自拍成人| av黄色大香蕉| 国产精品女同一区二区软件| 永久网站在线| 亚洲国产欧美日韩在线播放| 一边亲一边摸免费视频| 伦理电影大哥的女人| 亚洲精品成人av观看孕妇| 99久久中文字幕三级久久日本| 高清视频免费观看一区二区| 亚洲欧美中文字幕日韩二区| 亚洲av成人精品一二三区| 亚洲国产精品999| av不卡在线播放| 欧美人与性动交α欧美精品济南到 | 国产综合精华液| 成人毛片a级毛片在线播放| 欧美成人午夜免费资源| 美女xxoo啪啪120秒动态图| 免费观看av网站的网址| av女优亚洲男人天堂| 亚洲av成人精品一二三区| 国产成人一区二区在线| 成年人午夜在线观看视频| 亚洲欧洲国产日韩|