

    Integrating absolute distances in collaborative representation for robust image classification

Shaoning Zeng a,*, Xiong Yang a, Jianping Gou b, Jiajun Wen c

a Department of Computer Science, Huizhou University, 46 Yanda Road, Huizhou, Guangdong, China

b College of Computer Science and Communication Engineering, Jiangsu University, 301 Xuefu Road, Zhenjiang, Jiangsu, China

c Institute of Textiles and Clothing, Hong Kong Polytechnic University, Room QT715, Q Core, 7/F, Hong Kong

    Available online 13 October 2016

Conventional sparse representation based classification (SRC) represents a test sample with coefficients solved over the training samples of all classes. As a special version and improvement of SRC, collaborative representation based classification (CRC) obtains the representation with contributions from all training samples and produces more promising results on facial image classification. When solving the representation coefficients, CRC considers the original values of the contributions from all samples. However, one prevalent practice in this kind of distance-based method is to consider only the absolute value of the distance rather than both positive and negative values. In this paper, we propose a novel method to improve collaborative representation based classification, which integrates an absolute distance vector into the residuals solved by collaborative representation; we name it AbsCRC. The key step in the AbsCRC method is to use factors a and b as weights to combine the CRC residuals $res_{crc}$ with the absolute distance vector $dis_{abs}$ and generate a new deviation $r = a \cdot res_{crc} - b \cdot dis_{abs}$, which is in turn used to perform classification. Because the two residuals have opposite effects in classification, the method uses a subtraction to perform the fusion. We conducted extensive experiments to evaluate our method for image classification with different instantiations. The experimental results indicate that it produces more promising classification results on both facial and non-facial images than the original CRC method.

Keywords: Sparse representation; Collaborative representation; Integration; Image classification; Face recognition

1. Introduction

Image classification is a crucial technique in biometrics such as face recognition [1,2], and one of the most significant steps in image classification is to represent or code the images. A proper description or representation of images is the basis of achieving robust image classification results [3,4]. Only when a subject is well represented in the form of an image can it be easily distinguished from the others. The basic process of representation-based classification is to first represent the target sample as a linear combination of training samples and then evaluate the dissimilarity to classify the test sample into the closest class. Representation-based classification algorithms play a significant role in face recognition. Among the various representation-based classification methods [5-7], sparse representation (SR) and collaborative representation (CR) based classifications are two of the most crucial methods and have drawn wide attention [8,9].

Although face recognition is a convenient biometric technology and has been widely studied, there are still many challenges in this area. First, face images may be captured under severe variations of pose, illumination and facial expression. Consequently, even images of the same face may differ significantly, which is likely to corrupt the discrimination. Furthermore, the lack of enough training samples is another big problem for robust face recognition. Some researchers have proposed various methods to create more representations of one face to improve the accuracy of face recognition. Gao et al. proposed virtual face image generation for robust face recognition [10], and Thian et al. proposed using visual virtual samples to improve face authentication [11]. Recently, Xu et al. proposed to reprocess images with symmetrical samples in sparse representation based image classification [12]. The combination of multiple image classification methods is effective for improving classification accuracy [13]. How to obtain competitive and complementary contributions among multiple descriptions of images is a hot topic. Furthermore, even sparse representation and collaborative representation can be combined for classification [14]. So integrating multiple classifiers is an effective approach to pursue robust image classification.

This paper proposes a novel method that integrates an absolute distance vector with the coefficients solved by CRC to improve image classification. The basic idea of our proposed method is to calculate an absolute distance vector between the query sample and the training samples when solving the collaborative coefficients, and then integrate the absolute distance vector $dis_{abs}$ for the query sample with the collaborative residuals $res_{crc}$ solved by CRC, using a pair of tuned fusion factors a and b. A new fused residual vector is then obtained as $r = a \cdot res_{crc} - b \cdot dis_{abs}$, which is finally used to perform classification. We tested the proposed method on a number of facial and non-facial datasets and found that it achieved higher accuracy than conventional CRC. The paper makes the following main contributions to image classification. First, it proposes a novel fusion method to improve CRC. Second, it analyzes and implements a reverse integration of multiple classifiers. Third, it demonstrates an experimental way to find tuned factors for the integration of multiple classifiers.

The structure of the rest of this paper is as follows. The related work on SRC and CRC is introduced in Section 2. In Section 3, we describe our proposed method that integrates absolute distances into collaborative representation based classification (AbsCRC). In Section 4, we analyze the selection of the fusion factors a and b, as well as some classification examples from the experiments. Section 5 presents our experiments on several popular benchmark datasets, and Section 6 concludes the paper.

2. Related work

Our work improves CRC with a novel fusion method. CRC was proposed as an improvement to SRC, so we first review the work related to conventional SRC before digging into CRC.

2.1. Sparse representation based classification

The sparse representation based classification (SRC) algorithm was proposed by J. Wright et al. [8]. The basic procedure of classification based on sparse representation involves two steps: first representing the test sample as a linear combination of all training samples, and then identifying the closest class based on the minimal deviation.

Assume that there are C subjects or pattern classes with N training samples $x_1, x_2, \ldots, x_N$, and that the test sample is $y$. Let the matrix $X_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,n_i}] \in \mathbb{R}^{m \times n_i}$ denote the $n_i$ training samples of the $i$th class. By stacking all columns of a $w \times h$ gray-scale image, we obtain the vector that identifies this image: $x \in \mathbb{R}^m$ ($m = w \times h$). Each column of $X_i$ thus represents one training image of the $i$th subject. So any test sample $y \in \mathbb{R}^m$ from the same class can be denoted by the linear formula

$$y = a_{i,1} x_{i,1} + a_{i,2} x_{i,2} + \cdots + a_{i,n_i} x_{i,n_i}, \tag{1}$$

where $a_{i,j} \in \mathbb{R}$, $j = 1, 2, \ldots, n_i$.

The N training samples of the C subjects can then be denoted by a new matrix $X = [X_1, X_2, \ldots, X_C]$, so (1) can be rewritten in the simpler form

$$y = X\alpha, \tag{2}$$

where $\alpha = [0, \ldots, 0, a_{i,1}, a_{i,2}, \ldots, a_{i,n_i}, 0, \ldots, 0]^T$ is the sparse coefficient vector in which only the entries related to the $i$th class are nonzero. This coefficient vector is the key factor affecting the robustness of classification. Note that SRC uses the entire set of training samples to solve for the coefficient.

The next step in SRC is to solve an $l_1$-norm minimization problem to pursue the sparsest solution to (2), and this result is used to identify the class of the test sample $y$. Here we use

$$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_1 \quad \text{subject to} \quad \|y - X\alpha\|_2 \le \varepsilon. \tag{3}$$

Next, SRC computes the residuals with the entries of this representation coefficient vector associated with the $i$th class, that is,

$$r_i(y) = \|y - X_i \hat{\alpha}_i\|_2, \quad i = 1, 2, \ldots, C, \tag{4}$$

where $\hat{\alpha}_i$ denotes the entries of $\hat{\alpha}$ associated with the $i$th class.

Finally, it outputs the identity of $y$ as

$$\text{identity}(y) = \arg\min_i r_i(y). \tag{5}$$

Some SRC algorithms are also implemented with the $l_0$-norm or the $l_p$-norm ($0 < p < 1$).
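To make the SRC pipeline in (3)-(5) concrete, here is a minimal sketch in Python/NumPy. It is not the authors' implementation: it uses scikit-learn's Lasso as an $l_1$-regularized surrogate for the minimization in (3), and the function and parameter names (e.g. `src_classify`, `l1_weight`) are ours for illustration.

```python
# A hedged sketch of the SRC steps (3)-(5), not the paper's own code.
# Training samples are the columns of X; `labels` gives the class of each column.
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(X, labels, y, l1_weight=0.01):
    """X: (m, N) training matrix, labels: length-N class ids, y: (m,) test sample."""
    labels = np.asarray(labels)
    # l1-regularized least squares as a surrogate for the l1 minimization in Eq. (3).
    solver = Lasso(alpha=l1_weight, fit_intercept=False, max_iter=10000)
    solver.fit(X, y)
    alpha = solver.coef_                        # sparse coefficient vector, length N

    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        # Class-specific reconstruction residual, Eq. (4).
        residuals[c] = np.linalg.norm(y - X[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)    # Eq. (5): class with minimal residual
```

In practice, implementations of this kind usually normalize the columns of X to unit $l_2$-norm before solving, so that no training sample dominates the representation.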

2.2. Collaborative representation based classification

Collaborative representation based classification (CRC) was proposed by Lei Zhang et al. as an improvement on and replacement of SRC [9,24,25]. It is argued that much of the literature on SRC, including [8], may over-emphasize the significance of the $l_1$-norm sparsity in image classification, while the role of collaborative representation (CR) is somewhat ignored [9]. CR uses the contributions from every single training sample to represent the test sample $y$. Different face images share some common features that are helpful for classification, the so-called non-local samples, and CRC can exploit this non-local strategy to produce more robust face recognition.

Let $X = [X_1, X_2, \ldots, X_C] \in \mathbb{R}^{m \times N}$; then the test sample $y \in \mathbb{R}^m$ can be represented as

$$y = X\rho. \tag{6}$$

The regularized least squares method is then used to collaboratively represent the test sample over $X$ with a low computational burden, that is,

$$\hat{\rho} = \arg\min_{\rho} \left\{ \|y - X\rho\|_2^2 + \lambda \|\rho\|_2^2 \right\}, \tag{7}$$

where λ is a regularization parameter, which makes the least-squares solution stable and introduces a certain amount of sparsity into the solution, though weaker than that of the $l_1$-norm. The solution of the CR problem in (7) can be derived as

$$\hat{\rho} = (X^T X + \lambda I)^{-1} X^T y. \tag{8}$$

Let $P = (X^T X + \lambda I)^{-1} X^T$; then we can simply project the test sample $y$ onto $P$ to get

$$\hat{\rho} = P y. \tag{9}$$

At this step, classification is performed based on the coefficient $\hat{\rho}$ together with the class-specific representation residual. Here $\hat{\rho}_i$ is the coefficient vector related to class $i$, i.e. the entries of $\hat{\rho}$ associated with the training samples of that class:

$$\hat{\rho}_i = [\hat{\rho}_{i,1}, \hat{\rho}_{i,2}, \ldots, \hat{\rho}_{i,n_i}]^T. \tag{10}$$

It then computes the regularized residuals by

$$r_i = \frac{\|y - X_i \hat{\rho}_i\|_2}{\|\hat{\rho}_i\|_2}, \quad i = 1, 2, \ldots, C. \tag{11}$$

Finally, it outputs the identity of the test sample $y$ as

$$\text{identity}(y) = \arg\min_i r_i. \tag{12}$$
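As a rough illustration of how (8)-(12) fit together, the following sketch precomputes the projection matrix P once and reuses it for every test sample. Names such as `crc_train` and `crc_classify` and the default λ are ours; this is a sketch of the CRC-RLS scheme described above rather than a reference implementation.

```python
# A minimal sketch of the CRC-RLS steps (8)-(12); names and defaults are illustrative.
import numpy as np

def crc_train(X, lam=0.001):
    """X: (m, N) matrix whose columns are training samples; lam: regularization parameter."""
    N = X.shape[1]
    # P = (X^T X + lam * I)^{-1} X^T, Eq. (9); depends only on the training data.
    return np.linalg.inv(X.T @ X + lam * np.eye(N)) @ X.T

def crc_classify(X, labels, P, y):
    labels = np.asarray(labels)
    rho = P @ y                                  # collaborative coefficients, Eq. (9)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        rho_c = rho[mask]
        # Regularized class-specific residual, Eq. (11).
        residuals[c] = np.linalg.norm(y - X[:, mask] @ rho_c) / (np.linalg.norm(rho_c) + 1e-12)
    return min(residuals, key=residuals.get)     # Eq. (12)
```

Precomputing P is what gives CRC its low computational burden: classifying a new sample only costs one matrix-vector product plus the per-class residuals.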

In this way, CRC involves all training samples in representing the test sample, which is considered an improvement over conventional SRC [9,24,25]. A host of methods have also been proposed to optimize CRC. Zhang et al. proposed integrating globality from other samples with locality in the current sample to generate robust classification [26]. Xu et al. applied a transfer learning algorithm to sparse representation [27]. Fusion of multiple classifiers has also been applied in CRC [28], and recently CRC was reinterpreted with a probabilistic theory [29]. CRC still has large room for improvement, especially regarding the collaborative coefficients for the test sample.

3. Our method

Based on the nearest feature line (NFL) and nearest feature plane (NFP) [30,31], we can calculate the sum of the representation coefficients of all samples in one class and use it to represent the weight of that class. The test sample is then classified into the class with the maximal weight value. The greater the sum is, the more contribution is produced by that class.

From the procedures of SRC and CRC, we can infer that the $l_2$-norm sparse coefficients contain crucial discrimination clues for classification. To generate a more promising result, this is probably the part on which we should spend more effort. This leads to our proposed method: first use the absolute values of the coefficients instead of the original values to obtain the distance between the test sample and each class, and then integrate this distance vector with the one from CRC for classification. The scheme of the proposed AbsCRC is presented below.

3.1. Solving distances with absolute values

Instead of directly solving the coefficient $\hat{\rho}_i$ for each class from the contributions of all samples with (10), we sum the absolute values of all its coefficients to calculate the overall distance between the test sample and the class:

$$d_i = \sum_{j=1}^{n_i} \left| \hat{\rho}_{i,j} \right|, \quad i = 1, 2, \ldots, C. \tag{13}$$

This distance vector can be used to identify the class most relevant to the test sample. In this distance vector, however, a larger value indicates that the test sample is more relevant to the class represented by those training samples. So the role of this absolute distance vector $d_i$ is opposite to that of the collaborative representation residuals $r_i$.

For comparison, here we use the maximal value in this vector to identify the class most relevant to the test sample $y$:

$$\text{identity}_{abs}(y) = \arg\max_i d_i. \tag{14}$$
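The ABS baseline used later in the experiments can be sketched directly from (13)-(14): sum the absolute coefficient values per class and pick the class with the largest sum. The helper names below are ours, and the sketch reuses the collaborative coefficients from the CRC sketch above.

```python
# A sketch of the absolute-distance classifier (ABS), Eqs. (13)-(14); it reuses the
# collaborative coefficients rho = P @ y from the CRC sketch above.
import numpy as np

def abs_distances(rho, labels):
    labels = np.asarray(labels)
    classes = np.unique(labels)
    # d_i = sum_j |rho_{i,j}|, Eq. (13): one value per class.
    d = np.array([np.abs(rho[labels == c]).sum() for c in classes])
    return classes, d

def abs_classify(rho, labels):
    classes, d = abs_distances(rho, labels)
    return classes[np.argmax(d)]                 # Eq. (14): larger distance, more relevant class
```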

However, when these absolute distances are used directly to perform classification, robust classification cannot be obtained. This was demonstrated in our experiments, as shown in Section 5.

3.2. Integrating absolute residuals with original ones

While the absolute residuals alone cannot produce classification comparable to the original residuals in CRC, integrating the absolute residuals with the original ones brings a more promising result. In the integration, the residuals from CRC are combined with the absolute distance vector using weights a and b, respectively. We thus obtain the new residuals

$$\tilde{r}_i = a \cdot r_i - b \cdot d_i, \quad i = 1, 2, \ldots, C. \tag{15}$$

Note that a can usually be assigned the value 1, i.e. a = 1, for simplicity; varying b alone then still reflects the weight of the absolute distance. Furthermore, since the absolute distance plays an opposite role in classification, (15) uses subtraction to combine it.

Finally, the identity of the test sample $y$ is output using the new residuals:

$$\text{identity}(y) = \arg\min_i \tilde{r}_i. \tag{16}$$
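Putting the two parts together, a minimal sketch of the AbsCRC decision rule in (15)-(16) might look as follows. It reuses `crc_train` and `abs_distances` from the earlier sketches, and the default values of a and b are placeholders only (the experiments in Section 5 tune b).

```python
# A sketch of the AbsCRC fusion, Eqs. (15)-(16); reuses crc_train/abs_distances above.
import numpy as np

def abscrc_classify(X, labels, P, y, a=1.0, b=0.1):
    labels = np.asarray(labels)
    classes = np.unique(labels)
    rho = P @ y                                  # collaborative coefficients, Eq. (9)

    res_crc = np.empty(len(classes))
    for k, c in enumerate(classes):
        mask = labels == c
        rho_c = rho[mask]
        # CRC residual per class, Eq. (11).
        res_crc[k] = np.linalg.norm(y - X[:, mask] @ rho_c) / (np.linalg.norm(rho_c) + 1e-12)

    _, dis_abs = abs_distances(rho, labels)      # absolute distances, Eq. (13)
    r = a * res_crc - b * dis_abs                # fused deviation, Eq. (15)
    return classes[np.argmin(r)]                 # Eq. (16)
```

The subtraction reflects the opposite roles of the two terms: a good class should have a small CRC residual and a large absolute distance, so both effects push its fused deviation down.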

4. Analysis

In this section, we use some experimental cases to demonstrate the rationale and effects of our proposed AbsCRC method. Indeed, the absolute distance vector alone does not provide enough help for image classification or face recognition; its classification results cannot match those of conventional CRC in most cases. However, when the absolute distance vector is integrated with the residuals from CRC, the fused residuals can produce outstanding classification results.

The absolute distance vector may help stabilize the representation coefficients obtained by CRC. This is the most crucial contribution of our AbsCRC method. On the other hand, the fusion process is affected by the selection of the weighting factor b, so our second effort is to find optimized weighting factors for robust image classification or face recognition.

Fig. 1 shows the CRC residuals, absolute distances and fused residuals in one experimental case, run on the ORL face database with the first 6 images of each subject as training samples and the rest as test samples (see Section 5.3). This group of residuals is for the test sample at position 113, which is the first test sample of the twenty-ninth class (4 × 28 + 1 = 113). We can see from Fig. 1 that the fused residuals (green) are affected by the absolute distance vector and are slightly flatter than the original residuals of conventional CRC (yellow).

In this experimental case, with a factor of b = 0.1 (see Table 3), both CRC and ABS failed to classify test sample number 113 into the right class, while only AbsCRC produced the right answer, as shown in Fig. 2.

Consequently, our experiments take the weighting factor b into account for different classification cases. Across a range of benchmark datasets, we managed to choose a group of parameters that help AbsCRC generate an optimized result. Section 5 presents all the experimental results.

5. Experimental results

In this section, we present our experimental results on some popular visual benchmark datasets. Extensive experiments were conducted on these datasets to evaluate the classification accuracy of conventional CRC, the absolute distance alone (ABS) and our AbsCRC method, as well as the selection of the fusion factors a and b. The chosen benchmark datasets include Caltech Faces [32], Caltech Leaves [32], ORL [33], FERET [34], CMU Faces [35], and the Senthil IRTT Face Database [36].

Fig. 1. Residuals for a test sample in the ORL face database.

Fig. 2. Labels assigned to a test sample in the ORL face database.

On each benchmark database, we ran experiments with different numbers of training samples as well as different integration factors a and b. For simplicity, we keep a = 1 and use different values of b to reflect the relative weights of the two coefficients. In our experiments, we found that when CRC outperforms ABS, it is better to assign b a value less than one (b < 1); on the contrary, b > 1 usually produces a better result when ABS outperforms CRC. However, there are still some exceptional cases, so our experiments also sought an optimal fusion factor b, as sketched below. The following subsections present the samples, steps, factors and results in every experimental case, along with our discussion of the results. The experimental results indicate that in most cases AbsCRC produces higher classification accuracy than CRC.
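The selection of b described above amounts to a simple grid search with a fixed at 1. The sketch below illustrates one way this could be done; the candidate grid and the evaluation on a held-out split are our illustrative choices rather than the paper's exact protocol, and it reuses `crc_train` and `abscrc_classify` from the earlier sketches.

```python
# A sketch of tuning the fusion factor b by grid search with a = 1; the candidate
# values are illustrative. Reuses crc_train and abscrc_classify from above.
import numpy as np

def tune_b(X_train, train_labels, X_test, test_labels, lam=0.001,
           candidates=(0.1, 0.2, 0.4, 0.7, 1.0, 1.3)):
    P = crc_train(X_train, lam)
    best_b, best_err = None, np.inf
    for b in candidates:
        preds = [abscrc_classify(X_train, train_labels, P, y, a=1.0, b=b)
                 for y in X_test.T]              # each column of X_test is one test sample
        err = np.mean(np.asarray(preds) != np.asarray(test_labels))
        if err < best_err:
            best_b, best_err = b, err
    return best_b, best_err
```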

5.1. Experiments on the Caltech Faces dataset

The Caltech Faces dataset is a frontal face dataset collected by Markus Weber at the California Institute of Technology [32]. There are 450 facial images in this dataset, all of size 896 × 592 pixels and in JPEG format. The pictures were taken of 27 or so unique people under different lighting, expressions and backgrounds. We resized each image to a half-scale of 488 × 296 pixels to reduce computational complexity. Furthermore, we selected only the 19 subjects with more than 10 samples each, to fulfill the experimental requirement of at least 8 training samples per subject. In our experiments, however, it is not necessary to use all three color channels of these images, so we converted the original color images to gray scale before running our tests.
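As a sketch of this preprocessing, and assuming the Pillow imaging library, each color image can be converted to gray scale, resized, and flattened into a column vector; the path and target size below are placeholders for the dataset files.

```python
# A sketch of the preprocessing described above, assuming Pillow is installed.
import numpy as np
from PIL import Image

def load_sample(path, size=(488, 296)):
    """Load one image as a gray-scale column vector; `size` is (width, height)."""
    img = Image.open(path).convert("L")          # drop the color channels
    img = img.resize(size)                       # the half-scale size used for Caltech Faces
    return np.asarray(img, dtype=np.float64).flatten()
```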

For each subject, we successively took 1 to 8 face images as training samples and the rest as test samples. We evaluated the misclassification rates of the CRC, ABS and AbsCRC algorithms under different weighting factors b. The classification results are shown in Table 1. In most experimental cases, AbsCRC outperformed CRC on this dataset. The error rates of ABS alone are also listed in the table for comparison. The most promising case uses 7 training samples and b = 0.2, in which AbsCRC outperforms both CRC and ABS and the error rate drops to 33.33%. On the whole, the experimental results on this dataset demonstrate that our AbsCRC gains an excellent improvement over conventional CRC in image classification.

Table 1. Improvements to CRC on the Caltech Faces dataset.

5.2. Experiments on the Caltech Leaves dataset

The Caltech Leaves dataset [32] was also collected around Caltech by Markus Weber of the California Institute of Technology. There are 186 images of leaves against different backgrounds with approximate scale normalization. All images are also in JPEG format and 896 × 592 pixels in size, and we again resized them to half scale. This time, we selected only the 7 subjects with more than 10 samples, so that there are at least 8 training samples in every subject. Because it is not necessary to use all three color channels, we converted the original color images to gray scale before running our tests.

The results of this group of experiments are shown in Table 2. On this non-facial dataset, AbsCRC again generated strong results. Since CRC is a method designed for face recognition, it fell behind ABS in almost all cases. However, AbsCRC manages to produce better accuracy in many cases. The most promising case uses 8 training samples and b = 0.2, in which AbsCRC outperforms both CRC and ABS by 16.67% and the error rate drops to the lowest level of 35.71% (see the last row with the star mark). This group of experiments demonstrated that AbsCRC works well in non-facial image classification.

5.3. Experiments on the ORL face database

The ORL face database [33] is a small database that includes only 400 facial images of 40 subjects, with 10 distinct face images per class. The facial images were captured under different conditions for every subject: times, lighting, facial expressions (open or closed eyes, smiling or not smiling), and facial details (glasses or no glasses). Besides, these images were taken against a dark homogeneous background while the subjects were in an upright, frontal position. For simplicity, we resized all face images to 56 × 46 pixels. We deliberately renamed all image files with 3-digit ordered numbers, which conveniently reflect the correct class position in the experiments.

We took the first 1 to 8 picture(s) of each subject as training samples and used the remaining face images as test samples, and evaluated the classification failure rates of all algorithms. The classification results are outstanding, with very low error rates. Table 3 shows the detailed error rates as well as the improvements of the three algorithms. The most promising result for AbsCRC was obtained with 8 training samples, in which AbsCRC outperformed CRC by up to 50.00% when b = 1.3, and the classification accuracy reached 95.00%. Furthermore, AbsCRC produces higher accuracy in all cases with at least 4 training samples.

Table 2. Improvements to CRC and ABS on the Caltech Leaves dataset.

Table 3. Improvements to CRC and ABS on the ORL face database.

5.4. Experiments on the FERET face database

The FERET benchmark database [34] is one of the biggest visual databases. In the FERET database, each subject has a group of five to eleven images, including two frontal views (fa and fb) and one more frontal image with a different facial expression. We chose to test on 200 subjects from the database, which means this group of experiments ran on 1400 face images with seven samples per subject. In our experiments, all images were renamed with ordered number filenames, from which the ground-truth class of each sample can easily be determined in the classification algorithm.

Since there are only 7 samples per subject, we used the first 1 to 5 images as training samples and the remaining images as test samples. This group of experiments generated pleasing classification results. Though the improvement by the new algorithm is not as outstanding as on the other databases, AbsCRC still slightly outperforms conventional CRC. The detailed improvements by AbsCRC are shown in Table 4. We can see that AbsCRC still outperforms CRC by up to 7.14% when using 5 training samples with b = 0.4, and the classification error rate is at a very low level of 29.25%.

5.5. Experiments on the CMU face images

The CMU face images dataset [35] consists of 640 black and white face images of people taken with varying pose (straight, left, right, up), expression (neutral, happy, sad, angry), eyes (wearing sunglasses or not), and size. All images are in PGM format and grouped by the name of the specific subject. There are 20 subjects in total, with up to 96 images for some subjects, while other subjects contain fewer images. So we chose 54 images that exist for all subjects as experimental samples, which means 20 × 54 = 1080 images were used in this group of experiments.

Table 4. Improvements to CRC and ABS on the FERET face database.

In this group of experimental cases, we again took the first 1 to 8 images as training samples and the remaining images as test samples. Table 5 shows the detailed classification results. The most promising case is the one with 7 training samples and b = 0.1. Though AbsCRC only outperforms CRC by 4.55%, the classification accuracy reaches 91.06%. We can see that ABS alone did not perform as well as CRC, but a simple fusion pushes CRC to a higher level.

5.6. Experiments on the Senthil IRTT face database

The Senthil IRTT Face Database Version 1.2 [36] contains both color and gray-scale faces of IRTT students. There are 100 facial images of 10 young female IRTT students around 23-24 years old, each with 10 facial samples. The color images, including background, are captured at a pixel resolution of 480 × 640, and the faces are cropped to 100 × 100 pixels. All facial images are labeled with the subject and sample number. This database is relatively small compared with the others, so the experiments run fast.

Using 1 to 8 training samples, the experiments ran quickly and smoothly. The most promising case for AbsCRC uses 6 training samples with b = 0.1: the improvement rate of AbsCRC over CRC and ABS reaches 20%, and the classification accuracy reaches a high level of 90.00%. Again, though not performing as well as CRC on its own, ABS pushes CRC to a higher level through fusion.

5.7. Discussion

Among the 6 datasets, there are 5 facial datasets and one non-facial dataset. The experiments showed that integrating ABS into the face recognition method CRC helps improve the classification robustness of CRC, and this is effective in both facial and non-facial image classification. Besides, we can find some other useful hints for image classification when applying absolute distances in collaborative representation.

Absolute distance based classification may not be stable enough for face recognition. As shown in the detailed results in Tables 1 and 3-6, the results of ABS in most cases did not match those of the original CRC, while Table 2 demonstrated that ABS works better than CRC on non-facial image classification.

Table 5. Improvements to CRC and ABS on the CMU face images.

Table 6. Improvements to CRC and ABS on the Senthil IRTT face database.

The number of training samples still matters. As shown in Tables 1-6, the more training samples we used in classification, the higher the classification accuracy we obtained. This holds true for both facial and non-facial datasets.

The one-training-sample issue exists as usual. In almost all face databases, the results when using only one training sample have the lowest accuracy, and some of them show zero improvement. Such an unstable single-training-sample case is a common issue in face recognition, but it can be mitigated with a host of methods, such as [22,37,38].

6. Conclusion

This paper proposed a novel absolute collaborative representation based classification (AbsCRC) method for robust image classification. When solving the representation coefficients in CRC, we simultaneously calculate the sum of the absolute distances between the test sample and the training samples of each class. This absolute distance vector is then integrated with the original collaborative coefficients to generate a more promising classification. In the fusion, a tuned factor b is used to adjust the weights of the two distance vectors to output the best classification. Extensive experiments were conducted on a number of facial and non-facial benchmark databases, and the results demonstrate that AbsCRC outperforms the state-of-the-art CRC in most cases.

Acknowledgment

This work was supported in part by the Research Foundation of the Education Bureau of Guangdong Province of China (Grant No. A314.0116), the Scientific Research Starting Foundation for Ph.D. in Huizhou University (Grant No. C510.0210), the National Natural Science Foundation of China (Grant No. 61502208) and the Natural Science Foundation of Jiangsu Province of China (Grant No. BK20150522).

    [1]S.Z.Li,Encyclopedia of Biometrics:I-Z,vol.1,Springer Science& Business Media,2009.

    [2]Z.Fan,Y.Xu,D.Zhang,Neural Netw.IEEE Trans.22(7)(2011) 1119-1132.

[3] J. Chen, S. Shan, C. He, G. Zhao, M. Pietikäinen, X. Chen, W. Gao, Pattern Anal. Mach. Intell. IEEE Trans. 32 (9) (2010) 1705-1720.

    [4]X.Hong,G.Zhao,M.Pietikainen,X.Chen,Image Process.IEEE Trans. 23(6)(2014)2557-2568.

    [5]Y.Xu,X.Li,J.Yang,Z.Lai,D.Zhang,Cybern.IEEE Trans.44(10) (2014)1738-1746.

    [6]Y.Pang,X.Li,Y.Yuan,D.Tao,J.Pan,Inf.Forensics Secur.IEEE Trans. 4(3)(2009)441-450.

    [7]Y.Xu,Q.Zhu,Z.Fan,D.Zhang,J.Mi,Z.Lai,Inf.Sci.238(2013) 138-148.

    [8]J.Wright,A.Y.Yang,A.Ganesh,S.S.Sastry,Y.Ma,Pattern Anal Mach. Intell.IEEE Trans.31(2)(2009)210-227.

    [9]L.Zhang,M.Yang,X.Feng,Sparse representation or collaborative representation:which helps face recognition?,in:Computer Vision (ICCV),2011 IEEE International Conference on,IEEE,2011,pp. 471-478.

    [10]W.Gao,S.Shan,X.Chai,X.Fu,Virtual face image generation for illumination and pose insensitive face recognition,in:Multimedia and Expo,2003.ICME'03.Proceedings.2003 International Conference on, Vol.3,IEEE,2003.III-149.

    [11]N.P.H.Thian,S.Marcel,S.Bengio,Improving face authentication using virtual samples,in:Acoustics,Speech,and Signal Processing,2003. Proceedings.(ICASSP'03).2003 IEEE International Conference on,Vol. 3,IEEE,2003.III-233.

    [12]Y.Xu,Z.Zhang,G.Lu,J.Yang,Pattern Recognit.54(2016)68-82.

    [13]Y.Xu,B.Zhang,Z.Zhong,Pattern Recognit.Lett.68(2015)9-14.

    [14]W.Li,Q.Du,B.Zhang,Pattern Recognit.48(12)(2015)3904-3916.

    [15]Z.Xu,H.Zhang,Y.Wang,X.Chang,Y.Liang,Sci.China Inf.Sci.53(6) (2010)1159-1169.

    [16]A.Y.Yang,Z.Zhou,A.G.Balasubramanian,S.S.Sastry,Y.Ma,Image Process.IEEE Trans.22(8)(2013)3234-3246.

    [17]S.Gao,I.W.-H.Tsang,L.-T.Chia,Kernel sparse representation for image classification and face recognition,in:Computer Vision-ECCV 2010, Springer,2010,pp.1-14.

    [18]M.Yang,L.Zhang,Gabor feature based sparse representation for face recognition with gabor occlusion dictionary,in:Computer Vision--ECCV 2010,Springer,2010,pp.448-461.

    [19]B.Cheng,J.Yang,S.Yan,Y.Fu,T.S.Huang,Image Process.IEEE Trans.19(4)(2010)858-866.

    [20]L.Qiao,S.Chen,X.Tan,Pattern Recognit.43(1)(2010)331-341.

    [21]W.Deng,J.Hu,J.Guo,In defense of sparsity based face recognition,in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2013,pp.399-406.

    [22]Y.Xu,X.Zhu,Z.Li,G.Liu,Y.Lu,H.Liu,Pattern Recognit.46(4) (2013)1151-1158.

    [23]Z.Liu,X.Song,Z.Tang,Neural Comput.Appl.26(8)(2015) 2013-2026.

    [24]L.Zhang,M.Yang,X.Feng,Y.Ma,D.Zhang,Collaborative representation based classification for face recognition,arXiv preprint arXiv: 1204.2358.

    [25]X.Chen,P.J.Ramadge,Collaborative representation,sparsity or nonlinearity:what is key to dictionary based classification?,in:Acoustics,Speech and Signal Processing(ICASSP),2014 IEEE International Conference on,IEEE,2014,pp.5227-5231.

    [26]Z.Zhang,Z.Li,B.Xie,L.Wang,Y.Chen,Math.Probl Eng.(2014).

    [27]Y.Xu,J.Wu,X.Li,D.Zhang,et al.,Image Process.IEEE Trans.25(2).

    [28]Y.Peng,Z.Pan,Z.Zheng,X.Li,Int.J.Database Theory Appl.9(2) (2016)183-192.

    [29]S.Cai,L.Zhang,W.Zuo,X.Feng,A probabilistic collaborative representation based approach for pattern classification,in:IEEE Conference on Computer Vision and Pattern Recognition,CVPR,2016.

    [30]Q.Feng,J.-S.Pan,L.Yan,J.Inf.Hiding Multimed.Signal Process 3(3) (2012)297-305.

    [31]Q.Feng,C.-T.Huang,L.Yan,J.Inf.Hiding Multimed.Signal Process 4 (3)(2013)178-191.

[32] M. Weber, Caltech datasets, http://www.vision.caltech.edu/html-files/archive.html, online (Accessed 7 June 2016).

[33] A.L. Cambridge, The ORL database of faces, http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html, online (Accessed 17 May 2016).

[34] The National Institute of Standards and Technology (NIST), The color FERET database, http://www.nist.gov/itl/iad/ig/colorferet.cfm, online (Accessed 17 May 2016).

[35] T. Mitchell, CMU face images, https://archive.ics.uci.edu/ml/machinelearning-databases/faces-mld/faces.html, online (Accessed 9 June 2016).

[36] Senthilkumar, Senthil IRTT face database version 1.2, http://www.geocities.ws/senthilirtt/Senthil%20IRTT%20Face%20Database%20Version%201.2, online (Accessed 17 May 2016).

    [37]D.Beymer,T.Poggio,Face recognition from one example view,in: Computer Vision,1995.Proceedings.,Fifth International Conference on, IEEE,1995,pp.500-507.

    [38]T.Vetter,Int.J.Comput.Vis.28(2)(1998)103-116.

Mr. Shaoning Zeng received his M.S. degree in Software Engineering from Beihang University, Beijing, PR China, in 2007. Since 2009, he has been a lecturer at Huizhou University, PR China. His current research interests include pattern recognition, sparse representation, image recognition and neural networks.

Dr. Xiong Yang received his B.S. degree in Computer Science and Technology from Hubei Normal University, PR China, in 2002. He received the M.S. degree in Computer Science from Central China Normal University, PR China, in 2005 and the Ph.D. degree at the Institute for Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology, PR China, in 2010. Since 2010, he has been teaching in the Department of Computer Science and Technology, Huizhou University, PR China. His current research interests include pattern recognition and machine learning.

Dr. Jianping Gou received the BS degree in computer science from Beifang University of Nationalities, China in 2005, the MS degree in computer science from Southwest Jiaotong University, China in 2008, and the PhD degree in computer science from the University of Electronic Science and Technology of China, China in 2012. He is currently a lecturer in the School of Computer Science and Telecommunication Engineering, Jiangsu University, China. His current research interests include pattern classification and machine learning. He has published over 20 technical articles.

Jiajun Wen received the Ph.D. degree in computer science and technology from Harbin Institute of Technology, China, in 2015. He has been a Research Associate with the Hong Kong Polytechnic University, Hong Kong, since 2013. He is currently a Postdoctoral Fellow with the College of Computer Science & Software Engineering, Shenzhen University, Shenzhen, China. His research interests include pattern recognition and video analysis.

    *Corresponding author.

E-mail addresses: zxn@outlook.com (S. Zeng), xyang.2010@hzu.edu.cn (X. Yang), goujianping@ujs.edu.cn (J. Gou), jiajun.wen@polyu.edu.hk (J. Wen).

    Peer review under responsibility of Chongqing University of Technology.

    http://dx.doi.org/10.1016/j.trit.2016.09.002

2468-2322/Copyright © 2016, Chongqing University of Technology. Production and hosting by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

