
Semi-supervised dictionary learning with label propagation for image classification

Lin Chen1, Meng Yang1,2,3()

Computational Visual Media, 2017, No. 1

Sparse coding and supervised dictionary learning have rapidly developed in recent years, and have achieved impressive performance in image classification. However, there is usually a limited number of labeled training samples and a huge amount of unlabeled data in practical image classification, which degrades the discrimination of the learned dictionary. How to effectively utilize unlabeled training data and explore the information hidden in it has drawn much attention from researchers. In this paper, we propose a novel discriminative semi-supervised dictionary learning method using label propagation (SSD-LP). Specifically, we utilize a label propagation algorithm based on class-specific reconstruction errors to accurately estimate the identities of unlabeled training samples, and develop an algorithm for optimizing the discriminative dictionary and discriminative coding vectors simultaneously. Extensive experiments on face recognition, digit recognition, and texture classification demonstrate the effectiveness of the proposed method.

semi-supervised learning; dictionary learning; label propagation; image classification

    1 Introduction

In recent years, sparse representation has gained much interest in the computer vision field [1, 2] and has been widely applied to image restoration [3, 4], image compression [5, 6], and image classification [7–11]. The success of sparse representation is partially because natural images can be generally and sparsely coded by structural primitives (e.g., edges and line segments), and because images or signals can be represented sparsely by dictionary atoms from the same class.

In the task of image classification based on sparse representation, signals need to be encoded over a dictionary (i.e., a set of representation bases) with some sparsity constraint. The dictionary, which encodes the testing sample, can directly consist of the training samples themselves. For example, Wright et al. [12] first constructed a dictionary by using the training samples of all classes, then coded the test sample with this dictionary, and finally classified the test sample into the class with the minimal class-specific representation residual. This so-called sparse representation based classification (SRC) [12] has achieved impressive performance in face recognition. However, the number of dictionary atoms used in SRC can be quite high, resulting in a large computational burden when calculating the coding vector. Moreover, the discriminative information hidden in the training samples cannot be fully exploited. To overcome these problems, the question of how to learn an effective dictionary from training data has been widely studied.
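To make the SRC scheme above concrete, the following minimal sketch codes a test sample over a dictionary built by stacking the training samples of all classes and picks the class with the smallest class-specific residual. The helper names (ista, src_classify) and the plain ISTA solver are illustrative choices, not the solver used in the original SRC work.

```python
import numpy as np

def ista(D, b, lam=0.1, n_iter=200):
    """Minimize 0.5*||b - D x||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x -= D.T @ (D @ x - b) / L                 # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

def src_classify(train_per_class, b):
    """train_per_class: list of (d, n_i) arrays, one per class; b: (d,) test sample."""
    D = np.hstack(train_per_class)
    D = D / np.linalg.norm(D, axis=0)              # unit-norm atoms
    x = ista(D, b)
    residuals, start = [], 0
    for A_i in train_per_class:
        k = A_i.shape[1]
        x_i = np.zeros_like(x)
        x_i[start:start + k] = x[start:start + k]  # keep only class-i coefficients
        residuals.append(np.linalg.norm(b - D @ x_i))
        start += k
    return int(np.argmin(residuals))               # class with minimal residual
```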

Dictionary learning methods can be divided into three main categories: unsupervised [13], supervised [14–17], and semi-supervised [11, 18–23]. K-SVD [13] is a representative unsupervised dictionary learning model, which is widely applied to image restoration tasks. Since no label information is exploited in the dictionary learning phase, unsupervised dictionary learning methods are useful for data reconstruction, but not advantageous for classification tasks.

Based on the relationship between dictionary atoms and class labels, prevailing supervised dictionary learning methods can be divided into three categories: shared, class-specific, and hybrid. In the first case, the discrimination provided by shared dictionary learning is typically explored by jointly learning a dictionary and a classifier over the coding coefficients [9, 10]. Using the learned shared dictionary, the generated coding coefficients, which are expected to be discriminative, are used for classification. In class-specific dictionary learning, each dictionary atom is predefined to correspond to a unique class label so that the class-specific reconstruction error can be used for classification [14, 24]. However, the learned dictionary can be very large when there are many classes. In order to take advantage of the powerful class-specific representation ability, and to reduce the coherence between different sub-dictionaries, hybrid dictionary learning [15, 25, 26] combines shared dictionary atoms and class-specific dictionary atoms.

Sufficient labeled training data and high-quality training images are necessary for good performance of supervised dictionary learning algorithms. However, labeled training data is expensive and difficult to obtain due to the vast human effort involved. On the other hand, abundant unlabeled images can easily be collected from public image datasets. Therefore, semi-supervised dictionary learning, which effectively utilizes such unlabeled samples to enhance dictionary learning, has attracted extensive research.

In recent years, semi-supervised learning methods have been widely studied [27–31]. One classical semi-supervised learning method is co-training [29], which utilizes multi-view features to retrain the classifiers to obtain better performance. In co-training, the multi-view features need to be conditionally independent so that one classifier can confidently select unlabeled samples for the other classifier. Another important family is graph-based methods [27]. In classification, graph-based semi-supervised learning methods can readily explore the class information in unlabeled training data via a small amount of labeled data. A representative graph-based method is label propagation (LP), which has been widely used in image classification and ranking. Label propagation algorithms [27, 28, 33–35] estimate the classes of unlabeled samples by propagating label information from labeled data to unlabeled data. This is done by constructing a weight matrix (or affinity matrix) based on the distance between any two samples. The basic assumption of LP algorithms is that if the weight linking two samples is high, they are likely to belong to the same class.
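As a point of reference for the improved variant developed later in this paper, here is a minimal sketch of classical graph-based label propagation in the spirit of Refs. [27, 28]: a Gaussian affinity over raw sample distances is row-normalized into a transition matrix, and labeled rows are clamped back to their one-hot labels after every propagation step. All parameter names here are illustrative assumptions.

```python
import numpy as np

def label_propagation(X, Y_l, n_labeled, sigma=1.0, n_iter=100):
    """X: (n, d) samples with the labeled ones first; Y_l: (n_labeled, C) one-hot labels."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.exp(-d2 / (2 * sigma ** 2))                   # Gaussian affinity matrix
    T = W / W.sum(axis=1, keepdims=True)                 # row-normalized transition matrix
    P = np.zeros((X.shape[0], Y_l.shape[1]))
    P[:n_labeled] = Y_l
    for _ in range(n_iter):
        P = T @ P                                        # propagate one step over the graph
        P[:n_labeled] = Y_l                              # clamp the labeled samples
        P /= P.sum(axis=1, keepdims=True) + 1e-12        # keep rows as distributions
    return P
```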

Semi-supervised dictionary learning [11, 18, 19, 21–23, 36] has gained considerable interest in the past several years. In semi-supervised dictionary learning, whether the identities of the unlabeled samples can be accurately estimated, so that they may be used as labeled samples for training, is a very important issue. For instance, a shared dictionary and a classifier may be jointly learned by estimating the class confidence of unlabeled samples [18]. In Ref. [23], the unlabeled samples are utilized to learn a discriminative dictionary by preserving the geometrical structure of both the labeled and unlabeled data. However, with a shared dictionary the class-specific reconstruction error, which carries strong discriminative ability, cannot be utilized to estimate the identities of unlabeled samples. A semi-supervised class-specific dictionary has also been learned in Ref. [19]; however, its model is somewhat complex due to its many regularization terms.

By combining the label information of the labeled samples and the reconstruction errors of the unlabeled samples over all classes, the identities of the unlabeled training samples can be estimated more accurately. In this paper, we propose a novel semi-supervised dictionary model with label propagation. In our proposed model, we design an improved label propagation algorithm to evaluate the probabilities of unlabeled data belonging to specific classes. Specifically, the proposed label propagation algorithm is based on the powerful class-specific representation provided by the reconstruction error of each unlabeled sample on each sub-dictionary. Simultaneously, the label information of the labeled data can be better utilized by this graph-based method via label propagation. We also fully exploit the discrimination provided by the labeled training data in dictionary learning by minimizing the within-class variance. We have conducted several experiments on face recognition, digit recognition, and texture classification, which show the advantage of our proposed SSD-LP approach.

    Our main contributions are summarized as follows:

1. We propose a novel discriminative semi-supervised dictionary learning method which can effectively utilize the discriminative information hidden in both unlabeled and labeled training data.

2. By using label propagation, we estimate a more accurate relationship between unlabeled training data and classes, and enhance exploration of the discrimination provided by the unlabeled training data.

3. The discrimination provided by the labeled training data is explored during semi-supervised dictionary learning by minimizing the within-class variance.

4. Experimental results show that our method has significantly better discrimination ability using unlabeled training data in dictionary learning.

The rest of this paper is organized as follows. In Section 2, we briefly introduce related work on semi-supervised dictionary learning. Our model is presented in Section 3, and Section 4 describes the optimization procedure. Section 5 presents experimental results, and Section 6 concludes the paper with a brief summary and discussion.

    2 Related work

Based on the predefined relationship between dictionary atoms and class labels, semi-supervised dictionary learning approaches can be divided into two main categories: discriminative class-specific dictionary learning and discriminative shared dictionary learning.

Motivated by Ref. [24], Shrivastava et al. [19] learnt a class-specific dictionary by using Fisher discriminant analysis on the coding vectors of the labeled data. However, their model is complex: the training data is represented by a combination of all class-specific dictionaries, and the coding coefficients are regularized by both intra-class and inter-class constraints.

Another approach to semi-supervised dictionary learning is to learn a shared dictionary. Pham and Venkatesh [11] took into account the representation errors of both labeled and unlabeled data. In addition, the classification errors of labeled data were incorporated into a joint objective function. One major drawback of this approach is that it may fall into a local minimum due to the coupled dictionary construction and classifier design. Wang et al. [18] utilized an artificially designed penalty function to assign weights to the unlabeled data, greatly suppressing the unlabeled data with low confidence. Zhang et al. [22] proposed an online semi-supervised dictionary learning framework which integrates the reconstruction error of both labeled and unlabeled data, label consistency, and the classification error into one objective function. Babagholami-Mohamadabadi et al. [23] integrated dictionary learning and classifier training into one objective function, and preserved the geometrical structure of both labeled and unlabeled data. Recently, Wang et al. [21] utilized the structural sparse relationships between the labeled and unlabeled samples to learn a discriminative dictionary in which the unlabeled samples are automatically grouped with different labeled samples. Although a shared dictionary usually has a compact size, the discrimination provided by class-specific reconstruction residuals cannot be used.

3 Semi-supervised dictionary learning with label propagation (SSD-LP)

Although several semi-supervised dictionary learning approaches have been proposed, some issues remain to be solved, such as how to build a discriminative dictionary by using unlabeled data, how to utilize the representation ability of a class-specific dictionary, and how to estimate the class probabilities of the unlabeled data. In this section, we propose a discriminative semi-supervised dictionary learning method using label propagation (SSD-LP) to address these issues.

    3.1 SSD-LP model

Let A = [A_1, ..., A_i, ..., A_C] be the labeled training data, where A_i is the i-th-class training data and each column of A_i is a training sample, and let B = [b_1, ..., b_j, ..., b_N] be the unlabeled training data with unknown labels from 1 to C, where N is the number of unlabeled training samples. Here, as in prevailing semi-supervised dictionary methods [11, 18, 19, 21–23, 36], we assume that each unlabeled training sample belongs to some class of the training set.

In our proposed model, the dictionary to be learnt is D = [D_1, ..., D_i, ..., D_C], where D_i is the class-specific sub-dictionary associated with class i; it is required to represent the i-th-class data well but to have poor representation ability for all other classes. In general, we make each column of D_i a unit vector. We write X_i, the representation coefficient matrix of A_i over D, as X_i = [X_i^1; ...; X_i^j; ...; X_i^C], where X_i^j is the coding coefficient matrix of A_i on the sub-dictionary D_j. Further, y_j^i is the coding coefficient vector of the unlabeled sample b_j on the class-specific sub-dictionary D_i.

Apart from requiring the coding coefficients to be sparse, for the labeled training data we also minimize the within-class scatter of the coding coefficients, \|X_i^i - M_i\|_F^2, to make the training samples from the same class have similar coding coefficients, where M_i is the mean coefficient matrix with the same size as X_i^i that takes the mean column vector of X_i^i as each of its column vectors.

We define a latent variable P_{i,j}, which represents the probability that the j-th unlabeled training sample belongs to the i-th class. P_{i,j} satisfies 0 ≤ P_{i,j} ≤ 1 and \sum_{i=1}^{C} P_{i,j} = 1. If the labeled training sample k belongs to class j, then P_{j,k} = 1 and P_{i,k} = 0 for i ≠ j.

Our proposed SSD-LP method can now be formulated as

\min_{D,X,y,P} \sum_{i=1}^{C} \Big( \|A_i - D_i X_i^i\|_F^2 + \gamma \|X_i^i - M_i\|_F^2 + \lambda \|X_i\|_1 \Big) + \sum_{j=1}^{N} \sum_{i=1}^{C} P_{i,j} \Big( \|b_j - D_i y_j^i\|_2^2 + \lambda \|y_j^i\|_1 \Big)    (1)

For the labeled training data, a discriminative representation term, i.e., \|A_i - D_i X_i^i\|_F^2, and a discriminative coefficient term, i.e., \|X_i^i - M_i\|_F^2, are introduced. Since D_i is associated with the i-th class, it is expected that A_i should be well represented by D_i but not by D_j, j ≠ i. This implies that X_i^i should have some significant coefficients such that \|A_i - D_i X_i^i\|_F^2 is small, while X_i^j (j ≠ i) should have nearly zero coefficients. Thus the term \|A_i - D X_i\|_F^2 is eliminated, as shown in Eq. (1).

For the unlabeled training data, the probability that each sample belongs to each class is required. For instance, P_{i,j} = 1 indicates that the j-th unlabeled training sample comes from the i-th class, and the class-specific sub-dictionary D_i should then represent the j-th unlabeled training sample well, in that \|b_j - D_i y_j^i\|_2^2 is small.

Due to the good performance of graph-based label propagation on semi-supervised classification tasks, we utilize it to select the unlabeled samples with high confidence and assign them high weights, as explained in detail in Section 4.1.

3.2 Classification scheme

Once the dictionary D = [D_1, ..., D_i, ..., D_C] has been learned, a testing sample can be classified by coding it over the learned dictionary. Although the learned dictionary is class-specific, the testing sample is not always coded on each sub-dictionary corresponding to each class. As discussed in Ref. [24], there are two methods of coding the testing sample.

When the number of training samples in each class is relatively small, the sample sub-space of class i cannot be fully supported by the learned sub-dictionary D_i. Thus the testing sample b_t is represented by a collaborative combination of all class-specific sub-dictionaries. In this case, the sparse coding vector of the testing sample is found by solving

\hat{x} = \arg\min_{x} \|b_t - D x\|_2^2 + \lambda \|x\|_1    (2)

When the number of training samples in each class is relatively large, the sub-dictionary D_i, which has enough discrimination ability, can support the sample sub-space of class i. Thus, we can directly code the testing sample b_t on each sub-dictionary:

\hat{y}^i = \arg\min_{y^i} \|b_t - D_i y^i\|_2^2 + \lambda \|y^i\|_1    (3)

The class of the testing sample b_t is then predicted by

\mathrm{identity}(b_t) = \arg\min_i e_i    (4)

where e_i = \|b_t - D_i \hat{x}_i\|_2^2 in the collaborative case (\hat{x}_i denoting the entries of \hat{x} associated with D_i) and e_i = \|b_t - D_i \hat{y}^i\|_2^2 in the local case.
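The two coding routes can be sketched as below, assuming sub_dicts holds the learned class-specific sub-dictionaries D_i with unit-norm columns and reusing the same illustrative ista solver (not the paper's actual optimizer):

```python
import numpy as np

def ista(D, b, lam=0.01, n_iter=200):
    """Minimize 0.5*||b - D x||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x -= D.T @ (D @ x - b) / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x

def classify_collaborative(sub_dicts, b):
    """Few training samples per class: code b over all sub-dictionaries jointly, Eq. (2)."""
    D = np.hstack(sub_dicts)
    x = ista(D, b)
    errs, start = [], 0
    for D_i in sub_dicts:
        k = D_i.shape[1]
        errs.append(np.linalg.norm(b - D_i @ x[start:start + k]))  # class-i residual
        start += k
    return int(np.argmin(errs))

def classify_local(sub_dicts, b):
    """Many training samples per class: code b on each sub-dictionary separately, Eq. (3)."""
    errs = [np.linalg.norm(b - D_i @ ista(D_i, b)) for D_i in sub_dicts]
    return int(np.argmin(errs))
```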

    4 Optimization of SSD-LP

The SSD-LP objective function is not convex in the joint variables {D, X, P, y}, but it is convex in each variable when the others are fixed. Optimization of Eq. (1) can thus be divided into three sub-problems: updating P with D, X, y fixed; updating X and y with P, D fixed; and updating D with P, X fixed.

    4.1 Updating P by improved label propagation

Unlike the approach used in Ref. [28] to construct the weight matrix, our weight matrix is constructed from the reconstruction errors of the unlabeled samples over all classes rather than from the distances between pairs of samples. Intuitively, since sub-dictionary D_i is good at representing the i-th class but poor at representing other classes, a pair of samples is likely to belong to the same class if they achieve their minimum reconstruction errors in the same class.

Specifically, to compute the weight value w_{ij} (if w_{ij} is large, then samples b_i and b_j are likely to have the same class), we first compute the reconstruction errors of both training samples b_i and b_j over all classes. This gives e_i = [e_{i1}; ...; e_{ik}; ...; e_{iC}] and e_j = [e_{j1}; ...; e_{jk}; ...; e_{jC}], where e_{ik} = \|b_i - D_k y_i^k\|_2^2 is the reconstruction error of sample b_i on class k and y_i^k is its coding coefficient vector for class k.

After obtaining e_i and e_j, we compute the distance d(e_i, e_j) between them:

d(e_i, e_j) = \|e_i - e_j\|_2    (5)

Finally, the weight linking samples b_i and b_j is

w_{ij} = \exp\big(-d(e_i, e_j)^2 / (2\sigma^2)\big)    (6)

where σ is a constant. After finding the weight values for every pair of samples, we obtain the transition matrix T by normalizing the weight matrix:

T_{ij} = w_{ij} \Big/ \sum_{k=1}^{n} w_{ik}    (7)
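A sketch of this construction follows, assuming err stacks the class-wise reconstruction-error vectors e_i of all training samples (computed by coding each sample on every sub-dictionary); the function name and the 2σ² scaling follow the form of Eq. (6) above:

```python
import numpy as np

def transition_matrix(err, sigma=1.0):
    """err: (n, C) reconstruction errors of n samples over C classes; returns (n, n) T."""
    d2 = ((err[:, None, :] - err[None, :, :]) ** 2).sum(-1)  # ||e_i - e_j||_2^2
    W = np.exp(-d2 / (2 * sigma ** 2))                       # weight (affinity) matrix
    return W / W.sum(axis=1, keepdims=True)                  # rows normalized to sum to 1
```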

Let n = n_l + n_u, where n_l and n_u are the total numbers of labeled and unlabeled training samples respectively. For the multi-class problem, the probability matrix is P = [P_l; P_u] ∈ R^{n×C}, where C is the number of classes, P_l is the probability matrix for the labeled samples, and P_u is the probability matrix for the unlabeled samples. We set P_l(i, k) = 1 if sample b_i is a labeled sample of class k, and 0 otherwise. We initialize the probability matrix as P^0 = [P_l; 0], i.e., the probabilities for the unlabeled training samples are set to zero. The improved label propagation algorithm for updating P is presented in Algorithm 1; its convergence can be seen by referring to Ref. [27]. P^{t+1} denotes the next iterate after P^t. Please note that step 3.b is crucial, as it ensures that the label information of the labeled samples is preserved.
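Since Algorithm 1 itself is not reproduced here, the following sketch shows one plausible reading of it under the definitions above: the transition matrix T built from reconstruction errors drives the propagation, and step 3.b is realized by resetting the labeled block P_l after every step. Names and the stopping rule are illustrative assumptions.

```python
import numpy as np

def improved_lp(T, P_l, n_iter=50, tol=1e-6):
    """T: (n, n) transition matrix; P_l: (n_l, C) one-hot labels; labeled samples come first."""
    n, C = T.shape[0], P_l.shape[1]
    n_l = P_l.shape[0]
    P = np.zeros((n, C))
    P[:n_l] = P_l                                    # P^0: unlabeled rows start at zero
    for _ in range(n_iter):
        P_next = T @ P                               # propagate labels through the graph
        P_next[:n_l] = P_l                           # step 3.b: preserve the label information
        P_next /= P_next.sum(axis=1, keepdims=True) + 1e-12
        if np.abs(P_next - P).max() < tol:           # stop once P stabilizes
            return P_next
        P = P_next
    return P
```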

Compared with using a weight matrix based on the distances between the original samples, our method has two main advantages. On one hand, the original way of constructing the weight matrix is a kind of single-track feedback mechanism, in which the update of the probability matrix P can affect the dictionary update, but the update of the latter cannot affect the former because the distances between the original samples do not change. On the other hand, a weight matrix based on reconstruction errors over all classes more realistically reflects the similarity between two samples, which is helpful in estimating the class labels of the unlabeled data.

    4.2 Updating X and y

By fixing the estimated class probabilities of the unlabeled training data (i.e., P), the discriminative dictionary (i.e., D) and the coding coefficients (i.e., X and y) can now be updated.

When the dictionary D is fixed, the coding coefficients of the labeled training data can be easily updated. The objective function in Eq. (1) then reduces to

\min_{X} \sum_{i=1}^{C} \Big( \|A_i - D_i X_i^i\|_F^2 + \gamma \|X_i^i - M_i\|_F^2 + \lambda \|X_i\|_1 \Big)    (8)

Algorithm 1: Improved label propagation based on reconstruction error

As discussed in Section 3.2, when the number of training samples in each class is relatively small, updating the coding coefficients of the unlabeled training data using a collaborative representation achieves better classification performance; conversely, we choose a local representation when there are sufficient training samples in each class. Thus, for the unlabeled training data, two coding strategies, i.e., collaborative representation and local representation, are used. In the collaborative representation, the coding coefficients are solved via

\hat{y}_j = \arg\min_{y_j} \|b_j - D y_j\|_2^2 + \lambda \|y_j\|_1    (9)

where D = [D_1, ..., D_i, ..., D_C] and y_j = [y_j^1; ...; y_j^i; ...; y_j^C]; here y_j^i is the coding vector of the unlabeled sample b_j on the sub-dictionary D_i. The different class-specific sub-dictionaries D_i compete with each other to represent b_j. To ensure fair competition between the different class-specific sub-dictionaries, the encoding phase of the collaborative representation ignores P.

In the local representation, the SSD-LP model associated with y_j^i changes to

\min_{y_j^i} P_{i,j} \|b_j - D_i y_j^i\|_2^2 + \lambda \|y_j^i\|_1    (10)

    which is a standard sparse coding problem.
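Both strategies for the unlabeled data can be sketched with the same illustrative ista solver; in the local case the weight P_{i,j} is folded into the regularization strength, since minimizing P\|b - D_i y\|^2 + λ\|y\|_1 has the same minimizer as 0.5\|b - D_i y\|^2 + (λ/(2P))\|y\|_1:

```python
import numpy as np

def ista(D, b, lam=0.01, n_iter=200):
    """Minimize 0.5*||b - D x||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x -= D.T @ (D @ x - b) / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x

def code_collaborative(sub_dicts, b_j, lam=0.01):
    """Eq. (9): code b_j over the concatenation of all sub-dictionaries, ignoring P."""
    return ista(np.hstack(sub_dicts), b_j, lam)

def code_local(sub_dicts, b_j, P_col, lam=0.01, eps=1e-8):
    """Eq. (10): one weighted sparse coding problem per class; P_col is the column P[:, j]."""
    return [ista(D_i, b_j, lam / (2.0 * max(p, eps)))   # weight folded into lam
            for D_i, p in zip(sub_dicts, P_col)]
```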

    4.3 Updating D

After updating P, further unlabeled training samples are selected to train our model. If we fix the size of the learnt dictionary, the discrimination of our dictionary cannot improve. Thus, after updating the probability matrix P, we increase the size of each sub-dictionary to explore the discriminative information hidden in the unlabeled samples (i.e., an additional dictionary atom E_i is initialized and added to sub-dictionary D_i).

Since the unlabeled samples provide more discrimination, E_i is initialized using the unlabeled data, by fitting the P-weighted representation residuals of the unlabeled samples on sub-dictionary D_i:

\min_{E_i, z_j} \sum_{j=1}^{N} P_{i,j} \|b_j - D_i y_j^i - E_i z_j\|_2^2    (11)

We update E_i class by class, collecting the weighted residuals of class i into the matrix

R_i = \big[\sqrt{P_{i,1}}\,(b_1 - D_i y_1^i), ..., \sqrt{P_{i,N}}\,(b_N - D_i y_N^i)\big]    (12)

Then we combine all terms into Eq. (13):

\min_{E_i, Z} \|R_i - E_i Z\|_F^2    (13)

Since we require the coding coefficients to be sparse, we compute the extended dictionary by singular value decomposition (SVD):

U \Sigma V^{\top} = \mathrm{SVD}(R_i)    (14)

The extended dictionary is defined such that

E_i = [u_1, ..., u_n]    (15)

where u_k is the k-th column of U and n is the number of atoms in the extended dictionary. In all experiments shown in this paper, we set n = 1, i.e., each sub-dictionary adds a single dictionary atom after each update of the probability matrix P.
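Under the weighted-residual reading of Eqs. (11)–(15) above, the extension step can be sketched as follows; R_i is the matrix of √P-weighted residuals and the top-n left singular vectors become the new unit-norm atoms (n = 1 in the paper). The function name and argument layout are assumptions:

```python
import numpy as np

def extend_subdict(D_i, B, Y_i, p_i, n=1):
    """B: (d, N) unlabeled samples; Y_i: (k, N) their codes on D_i; p_i: (N,) row P[i, :]."""
    R_i = (B - D_i @ Y_i) * np.sqrt(p_i)[None, :]      # weighted residual matrix, Eq. (12)
    U, _, _ = np.linalg.svd(R_i, full_matrices=False)  # Eq. (14)
    E_i = U[:, :n]                                     # n new unit-norm atoms, Eq. (15)
    return np.hstack([D_i, E_i])                       # enlarged sub-dictionary [D_i, E_i]
```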

The new sub-dictionary for class i is then initialized as D_i = [D_i, E_i]. By fixing the coding coefficients X and the probability matrix P, the problem in Eq. (1) reduces to

\min_{D} \sum_{i=1}^{C} \|A_i - D_i X_i^i\|_F^2 + \sum_{j=1}^{N} \sum_{i=1}^{C} P_{i,j} \|b_j - D_i y_j^i\|_2^2    (16)

The dictionary can then be updated atom by atom using Metaface learning [8]. After updating the extended dictionary E, several further iterations are needed to update the dictionary and the coefficients to guarantee convergence of the discriminative dictionary. In our experiments, the number of additional iterations is set to 5.
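A minimal sketch of such an atom-by-atom update, in the spirit of Metaface learning [8] (the exact update in the paper is not reproduced here): with the codes fixed, each atom is refit by least squares to the residual it is responsible for and renormalized to unit length.

```python
import numpy as np

def update_dictionary(D, A, X):
    """D: (d, k) sub-dictionary; A: (d, n) data it represents; X: (k, n) fixed codes."""
    for m in range(D.shape[1]):
        others = [a for a in range(D.shape[1]) if a != m]
        R = A - D[:, others] @ X[others, :]        # residual with atom m removed
        x_m = X[m, :]
        if x_m @ x_m > 1e-12:
            d_m = R @ x_m / (x_m @ x_m)            # least-squares refit of atom m
            D[:, m] = d_m / (np.linalg.norm(d_m) + 1e-12)  # keep unit l2-norm
    return D
```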

The whole procedure of the proposed semi-supervised dictionary learning is summarized in Algorithm 2. The algorithm converges since the total objective function value of Eq. (1) decreases in each iteration. Figure 1 shows the total objective function value for the AR dataset [37]. In all the experiments in this paper, our algorithm converges in fewer than 10 iterations.

5 Experimental results

We have performed experiments and corresponding analysis to verify the performance of our method for image classification. We evaluate our approach on two face databases, the Extended YaleB database [38] and the AR face database [37]; two handwritten digit datasets, MNIST [39] and USPS [40]; and an object category dataset, Texture-25 [41]. We compare our method with SRC [12], MSVM [17], FDDL [24], DKSVD [10], LC-KSVD [16], SVGDL [42], S2D2 [19], JDL [11], OSSDL [22], SSR-D [36], and the recently proposed USSDL [18] and SSP-DL [21] algorithms. The last six methods (S2D2, JDL, OSSDL, SSR-D, USSDL, and SSP-DL) are semi-supervised dictionary learning models; the others are supervised dictionary learning methods.

Algorithm 2: Semi-supervised dictionary learning with label propagation (SSD-LP)

Fig. 1 Total objective function value on the AR database [37] versus number of iterations.

We repeated each experiment 10 times with different random splits of the datasets and report the average classification accuracy together with the standard deviation; the best classification results are shown in boldface. For all approaches, we report the best results obtained after tuning their parameters.

    5.1 Parameter selection and comparison with original label propagation

In all our experiments, the parameters of SSD-LP are fixed to γ = 0.001 and λ = 0.01. The number of additional iterations is set to 5 in step 3.2 of Algorithm 2. Since the sub-dictionary D_i is initialized using the i-th-class labeled samples, the number of atoms of D_i equals the number of labeled samples of the i-th class (e.g., in the AR database it is 2, 3, or 5, as there are 2, 3, or 5 labeled samples respectively). After each update of the probability matrix P, each sub-dictionary adds one additional dictionary atom (the number of atoms of each sub-dictionary stops increasing once the number of iterations exceeds the number of unlabeled training examples).

To show the effectiveness of our algorithm, a test was conducted on the Extended YaleB dataset. As shown in Fig. 2, the face recognition rate improves significantly with iteration number.

Fig. 2 Recognition rate versus iteration number for the Extended YaleB database with five labeled training samples per class.

We also compared our proposed improved label propagation method with the original label propagation method (LP). As Fig. 3 shows, SSD-LP gives at least a 10% improvement over classifying the images directly with the original label propagation method. With an increasing number of iterations, the recognition rate of our method grows, while the performance of the original label propagation algorithm is essentially unchanged. This is because the original label propagation depends on the distribution structure of the input data, which does not change as the dictionary is updated; this is the kind of single-track feedback mechanism between the original label propagation and dictionary learning explained in Section 4.1.

We also compared the running time of our improved LP and the original LP using MATLAB 2015a on an Intel i7-3770 3.40 GHz machine with 16.0 GB RAM. The running times of the improved LP and the original LP are 11.21 s and 7.54 s respectively for two labeled training samples per person (see Fig. 3, top), and 11.73 s and 7.75 s respectively for five labeled training samples per person (see Fig. 3, bottom). The running times of the improved LP and the original LP are thus comparable.

Fig. 3 Recognition rate versus iteration number for the Extended YaleB database with two labeled training samples per person (top) and five labeled training samples per person (bottom).

    5.2 Face recognition

In this section, we evaluate our method on face recognition, for both the AR and Extended YaleB databases, using the same experimental setting as Ref. [18]. In both face recognition experiments, the image samples are reduced to 300 dimensions by PCA.

The AR database consists of over 4000 images of 126 individuals. In the experiment we chose a subset of 50 male and 50 female subjects. Focusing on illumination and expression changes, for each subject we chose 7 images from Session 1 for training and 7 images from Session 2 for testing. We randomly selected {2, 3, 5} samples from each class in the training set as the labeled samples, and took the remainder as the unlabeled samples. Five independent evaluations were conducted for each number of labeled training samples.

As shown in Table 1, when the number of labeled samples was small (2 or 3), our algorithm performed better than all other methods, especially the supervised dictionary learning models. This is because supervised dictionary methods cannot utilize the discriminative information hidden in the unlabeled training samples. The semi-supervised dictionary learning methods usually perform better than the supervised ones: for instance, USSDL performs second best. From Table 1, we can see that USSDL gives results very close to SSD-LP, but we should note that USSDL needs more information in the dictionary learning task, including classifier learning on the coding vectors. In addition, the optimization procedure of USSDL is more complex than that of SSD-LP.

We also evaluated our approach on the Extended YaleB database. The database consists of 2414 frontal face images of 38 individuals. Each individual has 64 images; we randomly selected 20 images as the training set and used the rest as the testing set. We randomly selected {2, 5, 10} samples from each class in the training set as the labeled samples, and used the remainder as the unlabeled samples. The classification results are shown in Table 2.

It is clear that our proposed method provides better classification performance than the other dictionary learning methods. Especially when a small number of labeled samples is involved, SSD-LP performs significantly better than the supervised dictionary learning methods, which depend on the number of labeled samples. It can also be seen that SSD-LP improves by at least 1.5% over the other semi-supervised dictionary learning methods. When the number of labeled samples is small, the improvement is more obvious. That is mainly because our method has a strong capability to utilize the unlabeled samples, by accurately determining their labels and using them as labeled samples to train the discriminative dictionary.

5.3 Digit classification

Next, we evaluated the performance of our method on the MNIST and USPS datasets, with the same experimental setting as Ref. [21]. The MNIST dataset has 10 classes. The training set has 60,000 handwritten digit images and the test set has about 10,000 images. The dimension of each digit image is 784. We randomly selected 200 samples from each class, using 20 images as the labeled samples, 80 as the unlabeled samples, and the rest for testing.

The USPS dataset has 9298 digit images in 10 classes. We randomly selected 110 images from each class, using 20 as the labeled samples, 40 as the unlabeled samples, and 50 as the testing samples. We used the whole image as the feature vector, and normalized the vector to have unit l2-norm.

Table 2 Recognition rate for various methods, for different numbers of labeled training samples, for the Extended YaleB database (Unit: %)

The results of the ten independent tests are combined in Table 3. It can be seen that our proposed SSD-LP method can effectively utilize information from the unlabeled samples, achieving a classification accuracy clearly higher than those of the other dictionary methods. Using the additional unlabeled training samples, the size of the dictionary is enlarged adaptively to better utilize the discrimination provided by the unlabeled samples, which is why we achieve better performance than the other semi-supervised dictionary methods in Table 3.

5.4 Object classification

In this experiment we used the Texture-25 dataset, which contains 25 texture categories with 40 samples each. We used low-level features [43, 44], including PHOG [32], GIST [45], and LBP [46]. Following the experimental setting in Ref. [18], PHOG was computed with a 2-layer pyramid in 8 directions, and GIST was computed on rescaled images of 256×256 pixels, in 4, 8, and 8 orientations at 3 scales from coarse to fine. Uniform LBP features were used. All features were concatenated into a single 119-dimensional vector. In this experiment, 13 images of each class were randomly selected for testing, and we randomly selected {2, 5, 10, 15} samples from each class in the training set as labeled samples. The average accuracies together with the standard deviations over five independent tests are presented in Table 4.

It can be seen that SSD-LP improves by at least 3% over supervised dictionary learning when the number of labeled samples is 2 or 5. As the number of labeled samples increases, the effect is clearly enhanced, by about 10%. Table 4 shows that our method also gives better results than the other three semi-supervised dictionary methods. That is because, as more samples are used for training, the estimates of the labels of the unlabeled training data become more accurate. The results fully demonstrate the classification effectiveness of label propagation based on reconstruction error. In addition, adaptively adding dictionary atoms makes our learnt dictionary more discriminative. JDL, which only uses the reconstruction error of both labeled and unlabeled data, does not work well.

Table 3 Recognition rate for various methods, for the digit databases USPS and MNIST (Unit: %)

Table 4 Recognition rate for various methods, for different numbers of labeled training samples, for the Texture-25 database (Unit: %)

    6 Conclusions

This paper has proposed a discriminative semi-supervised dictionary learning model. By integrating label propagation with the class-specific reconstruction error of each unlabeled training sample, we can more accurately estimate the classes of the unlabeled samples used to train our model. The discriminative property of the labeled training data is also well explored by using a discriminative representation term and minimizing the within-class scatter of the coding coefficients. Several experiments, including applications to face recognition, digit recognition, and texture classification, have shown the advantage of our method over supervised and other semi-supervised dictionary learning approaches. In the future, we will explore more classification questions, e.g., the case in which the training samples may not belong to any known class.

    Acknowledgements

This work was partially supported by the National Natural Science Foundation for Young Scientists of China (No. 61402289), and the National Science Foundation of Guangdong Province (No. 2014A030313558).

References

[1] Elad, M.; Figueiredo, M. A. T.; Ma, Y. On the role of sparse and redundant representations in image processing. Proceedings of the IEEE Vol. 98, No. 6, 972–982, 2010.

[2] Wright, J.; Ma, Y.; Mairal, J.; Sapiro, G.; Huang, T. S.; Yan, S. Sparse representation for computer vision and pattern recognition. Proceedings of the IEEE Vol. 98, No. 6, 1031–1044, 2010.

[3] Chen, Y.-C.; Patel, V. M.; Phillips, P. J.; Chellappa, R. Dictionary-based face recognition from video. In: Computer Vision – ECCV 2012. Fitzgibbon, A.; Lazebnik, S.; Perona, P.; Sato, Y.; Schmid, C. Eds. Springer Berlin Heidelberg, 766–779, 2012.

[4] Mairal, J.; Elad, M.; Sapiro, G. Sparse representation for color image restoration. IEEE Transactions on Image Processing Vol. 17, No. 1, 53–69, 2008.

[5] Bryt, O.; Elad, M. Compression of facial images using the K-SVD algorithm. Journal of Visual Communication and Image Representation Vol. 19, No. 4, 270–282, 2008.

[6] Bryt, O.; Elad, M. Improving the K-SVD facial image compression using a linear deblocking method. In: Proceedings of the IEEE 25th Convention of Electrical and Electronics Engineers in Israel, 533–537, 2008.

[7] Yang, J.; Yu, K.; Huang, T. Supervised translation-invariant sparse coding. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 3517–3524, 2010.

[8] Yang, M.; Zhang, L.; Yang, J.; Zhang, D. Metaface learning for sparse representation based face recognition. In: Proceedings of the IEEE International Conference on Image Processing, 1601–1604, 2010.

[9] Mairal, J.; Ponce, J.; Sapiro, G.; Zisserman, A.; Bach, F. R. Supervised dictionary learning. In: Proceedings of the Advances in Neural Information Processing Systems, 1033–1040, 2009.

[10] Zhang, Q.; Li, B. Discriminative K-SVD for dictionary learning in face recognition. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2691–2698, 2010.

[11] Pham, D.-S.; Venkatesh, S. Joint learning and dictionary construction for pattern recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1–8, 2008.

[12] Wright, J.; Yang, A. Y.; Ganesh, A.; Sastry, S. S.; Ma, Y. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 31, No. 2, 210–227, 2009.

[13] Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing Vol. 54, No. 11, 4311–4322, 2006.

[14] Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Discriminative learned dictionaries for local image analysis. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1–8, 2008.

[15] Yang, M.; Dai, D.; Shen, L.; Van Gool, L. Latent dictionary learning for sparse representation based classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4138–4145, 2014.

[16] Jiang, Z.; Lin, Z.; Davis, L. S. Learning a discriminative dictionary for sparse coding via label consistent K-SVD. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1697–1704, 2011.

[17] Yang, J.; Yu, K.; Gong, Y.; Huang, T. Linear spatial pyramid matching using sparse coding for image classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1794–1801, 2009.

[18] Wang, X.; Guo, X.; Li, S. Z. Adaptively unified semi-supervised dictionary learning with active points. In: Proceedings of the IEEE International Conference on Computer Vision, 1787–1795, 2015.

[19] Shrivastava, A.; Pillai, J. K.; Patel, V. M.; Chellappa, R. Learning discriminative dictionaries with partially labeled data. In: Proceedings of the 19th IEEE International Conference on Image Processing, 3113–3116, 2012.

[20] Jian, M.; Jung, C. Semi-supervised bi-dictionary learning for image classification with smooth representation-based label propagation. IEEE Transactions on Multimedia Vol. 18, No. 3, 458–473, 2016.

[21] Wang, D.; Zhang, X.; Fan, M.; Ye, X. Semi-supervised dictionary learning via structural sparse preserving. In: Proceedings of the 30th AAAI Conference on Artificial Intelligence, 2137–2144, 2016.

[22] Zhang, G.; Jiang, Z.; Davis, L. S. Online semi-supervised discriminative dictionary learning for sparse representation. In: Computer Vision – ACCV 2012. Lee, K. M.; Matsushita, Y.; Rehg, J. M.; Hu, Z. Eds. Springer Berlin Heidelberg, 259–273, 2012.

[23] Babagholami-Mohamadabadi, B.; Zarghami, A.; Zolfaghari, M.; Baghshah, M. S. PSSDL: Probabilistic semi-supervised dictionary learning. In: Machine Learning and Knowledge Discovery in Databases. Blockeel, H.; Kersting, K.; Nijssen, S.; Železný, F. Eds. Springer Berlin Heidelberg, 192–207, 2013.

[24] Yang, M.; Zhang, L.; Feng, X.; Zhang, D. Fisher discrimination dictionary learning for sparse representation. In: Proceedings of the IEEE International Conference on Computer Vision, 543–550, 2011.

[25] Zhou, N.; Shen, Y.; Peng, J.; Fan, J. Learning inter-related visual dictionary for object recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3490–3497, 2012.

[26] Deng, W.; Hu, J.; Guo, J. Extended SRC: Undersampled face recognition via intraclass variant dictionary. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 34, No. 9, 1864–1870, 2012.

[27] Zhu, X.; Lafferty, J.; Rosenfeld, R. Semi-supervised learning with graphs. Carnegie Mellon University, Language Technologies Institute, School of Computer Science, 2005.

[28] Wang, B.; Tu, Z.; Tsotsos, J. K. Dynamic label propagation for semi-supervised multi-class multi-label classification. In: Proceedings of the IEEE International Conference on Computer Vision, 425–432, 2013.

[29] Blum, A.; Mitchell, T. Combining labeled and unlabeled data with co-training. In: Proceedings of the 11th Annual Conference on Computational Learning Theory, 92–100, 1998.

[30] Mallapragada, P. K.; Jin, R.; Jain, A. K.; Liu, Y. SemiBoost: Boosting for semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 31, No. 11, 2000–2014, 2009.

[31] Gong, C.; Tao, D.; Maybank, S. J.; Liu, W.; Kang, G.; Yang, J. Multi-modal curriculum learning for semi-supervised image classification. IEEE Transactions on Image Processing Vol. 25, No. 7, 3249–3260, 2016.

[32] Bosch, A.; Zisserman, A.; Munoz, X. Image classification using random forests and ferns. In: Proceedings of the IEEE 11th International Conference on Computer Vision, 1–8, 2007.

[33] Xiong, C.; Kim, T.-K. Set-based label propagation of face images. In: Proceedings of the 19th IEEE International Conference on Image Processing, 1433–1436, 2012.

[34] Cheng, H.; Liu, Z.; Yang, J. Sparsity induced similarity measure for label propagation. In: Proceedings of the IEEE 12th International Conference on Computer Vision, 317–324, 2009.

[35] Kang, F.; Jin, R.; Sukthankar, R. Correlated label propagation with application to multi-label learning. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1719–1726, 2006.

[36] Wang, H.; Nie, F.; Cai, W.; Huang, H. Semi-supervised robust dictionary learning via efficient ℓ-norms minimization. In: Proceedings of the IEEE International Conference on Computer Vision, 1145–1152, 2013.

[37] Martinez, A. M. The AR face database. CVC Technical Report 24, 1998.

[38] Lee, K.-C.; Ho, J.; Kriegman, D. J. Acquiring linear subspaces for face recognition under variable lighting. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 27, No. 5, 684–698, 2005.

[39] LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE Vol. 86, No. 11, 2278–2324, 1998.

[40] Hull, J. J. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 16, No. 5, 550–554, 1994.

[41] Lazebnik, S.; Schmid, C.; Ponce, J. A sparse texture representation using local affine regions. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 27, No. 8, 1265–1278, 2005.

[42] Cai, S.; Zuo, W.; Zhang, L.; Feng, X.; Wang, P. Support vector guided dictionary learning. In: Computer Vision – ECCV 2014. Fleet, D.; Pajdla, T.; Schiele, B.; Tuytelaars, T. Eds. Springer International Publishing, 624–639, 2014.

[43] Boix, X.; Roig, G.; Van Gool, L. Comment on "Ensemble projection for semi-supervised image classification". arXiv preprint arXiv:1408.6963, 2014.

[44] Dai, D.; Van Gool, L. Ensemble projection for semi-supervised image classification. In: Proceedings of the IEEE International Conference on Computer Vision, 2072–2079, 2013.

[45] Oliva, A.; Torralba, A. Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision Vol. 42, No. 3, 145–175, 2001.

[46] Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 24, No. 7, 971–987, 2002.

Meng Yang is currently an associate professor in the School of Computer Science & Software Engineering, Shenzhen University, Shenzhen, China. He received his Ph.D. degree from Hong Kong Polytechnic University, Hong Kong, China, in 2012. Before joining Shenzhen University, he worked as a postdoctoral fellow in the Computer Vision Lab of ETH Zurich. His research interests include sparse coding, dictionary learning, object recognition, and machine learning. He has published 10 AAAI/CVPR/ICCV/ICML/ECCV papers, and several IJCV, IEEE TNNLS, and TIP journal papers.

Open Access: The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.

Lin Chen received his B.S. degree in computer science and technology from Shenzhen University, Shenzhen, China, in 2015. He is currently pursuing his M.S. degree in the School of Computer Science & Software Engineering, Shenzhen University.

1 College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China. E-mail: L. Chen, chen.lin@email.szu.edu.cn; M. Yang, yang.meng@szu.edu.cn ().

2 School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China.

3 Key Laboratory of Machine Intelligence and Advanced Computing (Sun Yat-sen University), Ministry of Education, China.

Manuscript received: 2016-09-04; accepted: 2016-12-20
