
    Semi-supervised learning based probabilistic latent semantic analysis for automatic image annotation①

High Technology Letters, 2017, No. 4

Tian Dongping (田東平)②

    (*Institute of Computer Software, Baoji University of Arts and Sciences, Baoji 721007, P.R.China) (**Institute of Computational Information Science, Baoji University of Arts and Sciences, Baoji 721007, P.R.China)


In recent years, the multimedia annotation problem has attracted significant research attention in the multimedia and computer vision communities, especially automatic image annotation, whose purpose is to provide an efficient and effective searching environment for users to query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labeled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Moreover, image features with different magnitudes can lead to very different annotation performance. To this end, a Gaussian normalization method is utilized to normalize the different features extracted from effective image regions segmented by the normalized cuts algorithm, so as to preserve the intrinsic content of the images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model significantly improves on the performance of traditional PLSA for the task of automatic image annotation.

    automatic image annotation, semi-supervised learning, probabilistic latent semantic analysis (PLSA), transductive support vector machine (TSVM), image segmentation, image retrieval

    0 Introduction

Probabilistic models with hidden topic variables, originally developed for statistical text modeling of large document collections, such as latent semantic analysis (LSA), probabilistic latent semantic analysis (PLSA)[1], latent Dirichlet allocation (LDA)[2] and the correlated topic model[3], have recently become an active topic of research in computer vision and pattern recognition. Probabilistic topic models originate from modeling large databases of text documents. When applied to images instead of documents, each topic can be thought of as a certain object type contained in an image, and the topic distribution then refers to the degree to which a certain object or scene type is contained in the image. In the ideal case, this gives rise to a low dimensional description of the coarse image content and thus enables retrieval in very large databases. Another advantage of such models is that topics are learned automatically without requiring any labeled training data. However, the performance of these models usually rests on an inappropriate assumption[1-3], i.e., that all the topics are independent of each other, which inevitably undermines the performance of multimedia processing tasks such as object recognition, image annotation, scene classification and automatic segmentation. Besides, the main drawback of these approaches is that they do not allow exploiting the huge amount of un-annotated data and consequently are affected by the small sample size problem. Apart from the merits and demerits of the probabilistic topic models mentioned above, it should also be noted that most of the existing work associated with PLSA has focused on its improvement and application, with almost no consideration given to the construction of its training set or to ways of normalizing features, especially in the case of different image features with different magnitudes.
Based on this recognition, a semi-supervised learning based probabilistic latent semantic analysis (abbreviated as SSPLSA) model is presented for the task of automatic image annotation. First of all, TSVM, one of the most often used semi-supervised learning methods, is exploited to boost the quality of the training image data with the help of unlabeled data in the presence of the small sample size problem. Then the Gaussian normalization method (GNM) is applied to normalize image features with different magnitudes so as to preserve the intrinsic content of the images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the EM algorithm to predict a candidate set of annotations with confidence scores. A series of experiments on the standard Corel5k dataset shows the effectiveness and efficiency of SSPLSA.

The rest of this paper is organized as follows. Section 1 summarizes the related work, especially PLSA and several semi-supervised learning methods applied in the field of automatic image annotation. Section 2 elaborates the proposed SSPLSA model from four aspects: PLSA, the transductive support vector machine, the Gaussian normalization method and the implementation of SSPLSA. Section 3 reports experimental results based on the standard Corel5k image dataset. Concluding remarks and future work are discussed in Section 4.

    1 Related work

Automatic image annotation (AIA) is a promising methodology for image retrieval. However, it is still in its infancy and is not sophisticated enough to extract perfect semantic concepts from image low-level features, often producing noisy keywords irrelevant to image semantics, which significantly hinders the task of image retrieval. As one of the representative probabilistic topic models, PLSA has been extensively applied in a variety of image annotation and retrieval tasks. Monay et al. presented a series of PLSA models for AIA[4,5], among which PLSA-MIXED[4] learned a standard PLSA on a concatenated representation of the textual and visual features, while PLSA-WORDS and PLSA-FEATURES[5] modeled an image as a mixture of latent aspects defined either by its text captions or by its visual features, for which the conditional distributions over aspects were estimated from one of the two modalities only. Peng et al.[6] employed the PLSA model to discover the latent topics existing in audio clips and further carried out concept classification with an SVM. In order to extract effective features that reflect the intrinsic content of images, Zhang et al.[7] proposed a multi-feature PLSA that combines low-level visual features for image region annotation, handling data from two different visual feature domains. In Ref.[8], a supervised PLSA model was constructed to improve image segmentation by using the classification results. Besides, the standard PLSA was extended to higher order for image indexing by treating images, visual features and tags as three observable variables of an aspect model[9]. In more recent work[10], Tian et al. integrated PLSA with multiple Markov random fields (MRF) for AIA; in particular, MRF was used to fuse the contextual information of images.
Alternatively, as for the PLSA model itself, it can be improved in four respects: its initialization, visual words, hidden layers and integration with other models. As representative work, Wang et al.[11] proposed building an effective visual vocabulary by using a hierarchical Gaussian mixture model instead of traditional clustering methods to improve the visual words. Lu et al.[12] exploited the rival penalized competitive learning method to initialize the model so as to enhance the performance of PLSA. In addition, Lienhart et al.[13] extended the standard single-layer PLSA to multiple multimodal layers consisting of two leaf PLSAs and a single top-level PLSA node merging the two leaf PLSAs. Meanwhile, a correlated PLSA model[14] was constructed by introducing a correlation layer between images and latent topics to incorporate image correlations.

Semi-supervised learning (SSL), which aims at learning from labeled and unlabeled data, typically a small amount of labeled data together with a large amount of unlabeled data, has aroused considerable interest in the data mining and machine learning fields, since it is usually hard to collect enough labeled data in practical applications. SSL falls between unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data), and aims to achieve good classification performance with the help of unlabeled data in the presence of the small sample size problem. As representative work on semi-supervised learning for AIA, Li et al.[15] formulated automatic image annotation as a joint classification task based on 2D conditional random fields, in which the SSL technique was utilized to exploit the unlabeled data for improving the joint classification performance. In Ref.[16], a semi-supervised ensemble of classifiers, viz. weighted semi-supervised AdaBoost (WSA), was constructed for AIA. Note that WSA is able to incorporate unlabeled instances that are annotated based on the classifier from the previous stage and then used to train the next classifier. Zhu et al.[17] proposed annotating images based on a progressive model to obtain the candidate annotations of unlabeled images. In addition, Yuan et al.[18] exploited semi-supervised cross-domain learning with group sparsity to boost the performance of automatic image annotation.

    2 Proposed SSPLSA model

    Fig.1 illustrates the framework of the proposed SSPLSA model, which mainly includes two stages, viz. feature extraction and PLSA modeling. Details of SSPLSA will be elaborated in the following subsections.

    Fig.1 Framework of the SSPLSA model

    2.1 PLSA model

PLSA[1] is a statistical latent aspect model which introduces a hidden variable (latent aspect) z_k in the generative process of each element w_j in document d_i. Given the unobservable variable z_k, each occurrence w_j is independent of the document it belongs to, which corresponds to the following joint probability:

P(d_i, w_j) = P(d_i) Σ_{k=1}^{K} P(w_j|z_k) P(z_k|d_i)        (1)

The model parameters of PLSA are the two conditional distributions P(w_j|z_k) and P(z_k|d_i). P(w_j|z_k) characterizes each aspect and remains valid for documents outside the training set, while P(z_k|d_i) is document-specific and cannot carry any prior information to an unseen document. The EM algorithm is usually utilized to estimate the parameters by maximizing the log-likelihood of the observed data:

L = Σ_{i=1}^{N} Σ_{j=1}^{M} n(d_i, w_j) log P(d_i, w_j)        (2)

where n(d_i, w_j) denotes the number of times term w_j occurs in document d_i. The steps of the EM algorithm can be succinctly described as follows.

E-step. The conditional distribution P(z_k|d_i, w_j) is computed from the previous estimate of the parameters:

P(z_k|d_i, w_j) = P(w_j|z_k) P(z_k|d_i) / Σ_{l=1}^{K} P(w_j|z_l) P(z_l|d_i)        (3)

M-step. The parameters P(w_j|z_k) and P(z_k|d_i) are updated with the new expected values P(z_k|d_i, w_j):

P(w_j|z_k) = Σ_{i=1}^{N} n(d_i, w_j) P(z_k|d_i, w_j) / Σ_{m=1}^{M} Σ_{i=1}^{N} n(d_i, w_m) P(z_k|d_i, w_m)        (4)

P(z_k|d_i) = Σ_{j=1}^{M} n(d_i, w_j) P(z_k|d_i, w_j) / n(d_i),  with n(d_i) = Σ_{j=1}^{M} n(d_i, w_j)        (5)

If one of the parameters is known, the other can be inferred with the fold-in method, which updates the unknown parameter while keeping the known one fixed, so as to maximize the likelihood with respect to the previously trained parameters. Given the visual features v(d_new) of a new image, the conditional probability distribution P(z_k|d_new) can be inferred with the previously estimated model parameters P(v|z_k); the posterior probabilities of keywords can then be computed by the following formula:

P(w_j|d_new) = Σ_{k=1}^{K} P(w_j|z_k) P(z_k|d_new)        (6)

    From Eq.(6), the top n words can be selected as the annotations for the new image.
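The EM iterations of Eqs.(3)-(5) and the ranking of Eq.(6) can be sketched compactly in NumPy. This is a minimal sketch under our own assumptions: the function names, the documents × words count-matrix layout and the fixed iteration count are illustrative, not details specified in the paper.

```python
import numpy as np

def plsa_em(n_dw, K, n_iter=50, seed=0):
    """Fit PLSA to a D x W document-term count matrix n_dw with K latent
    aspects via the EM updates of Eqs.(3)-(5).  Returns P(w|z) (K x W)
    and P(z|d) (D x K)."""
    rng = np.random.default_rng(seed)
    D, W = n_dw.shape
    # random initialization of the two conditional distributions
    p_w_z = rng.random((K, W)); p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = rng.random((D, K)); p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step, Eq.(3): P(z|d,w) proportional to P(w|z) P(z|d)
        p_z_dw = p_z_d[:, :, None] * p_w_z[None, :, :]          # D x K x W
        p_z_dw /= p_z_dw.sum(axis=1, keepdims=True) + 1e-12
        # M-step, Eqs.(4)-(5): reweight the posteriors by the counts n(d,w)
        weighted = n_dw[:, None, :] * p_z_dw                    # D x K x W
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True)
        p_z_d = weighted.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    return p_w_z, p_z_d

def annotate(p_w_z, p_z_dnew, top_n=5):
    """Eq.(6): rank words by P(w|d_new) = sum_k P(w|z_k) P(z_k|d_new)."""
    p_w_dnew = p_z_dnew @ p_w_z            # length-W vector of word posteriors
    return np.argsort(p_w_dnew)[::-1][:top_n]
```

In the asymmetric setting of the paper, fold-in simply reruns the same loop with `p_w_z` frozen and only `p_z_d` updated for the new image.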

    2.2 TSVM algorithm

The semi-supervised learning (SSL) problem has recently drawn considerable attention in the fields of machine learning and pattern recognition, mainly due to its significant importance in practical applications. Concretely, SSL is a family of algorithms that takes advantage of both labeled and unlabeled data and has been extensively studied for many years. Among them, the transductive support vector machine (TSVM), also called the semi-supervised support vector machine, which sits between supervised learning with fully labeled training data and unsupervised learning without any labeled training data, is a promising way to find the underlying relevant data among the unlabeled data. TSVM mines relevant image regions as follows. Given a keyword w, several labeled regions are taken as the relevant examples and the initial non-relevant examples are randomly sampled from the remaining regions. A two-class SVM classifier is first trained. Then, based on the learnt SVM classifier, the most confidently relevant regions and the most confidently non-relevant ones are added to the relevant and non-relevant training sets respectively. With the expanded training set, the SVM classifier is re-trained until the maximum number of iterations is reached. Finally, an expanded set of labeled regions is obtained, which benefits the modeling of the visual feature distribution of the keyword w. Thus in this paper, TSVM is adopted to explore more relevant image regions to boost the performance of the PLSA model, as described in Algorithm 1.

Algorithm 1  Pseudocode of TSVM for mining relevant regions
Input: R0_L and R0_U denote the sets of labeled and unlabeled regions for the keyword w; S is an SVM classifier; m, n and K denote control parameters
Process:
1.  for k = 1 to K do
2.    Learn an SVM classifier S from Rk_L;
3.    Use S to classify the regions in Rk_U;
4.    Select the m most confidently predicted regions from Rk_U, labeled as relevant examples;
5.    Select the n most confidently predicted regions from Rk_U, labeled as non-relevant examples;
6.    Add the m+n regions with their corresponding labels into Rk_L;
7.    Remove these m+n regions from Rk_U;
8.  end for
Output: Rk_L, an expanded set of labeled regions
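The loop of Algorithm 1 can be sketched as a simple self-training procedure. The use of scikit-learn's SVC and of its signed decision values as the confidence measure are our own illustrative assumptions; the paper does not specify the SVM implementation or kernel.

```python
import numpy as np
from sklearn.svm import SVC

def tsvm_mine_regions(X_l, y_l, X_u, m=3, n=5, K=10):
    """Self-training loop of Algorithm 1: at each of K rounds, train an SVM
    on the labeled regions, score the unlabeled ones, move the m most
    confidently relevant and the n most confidently non-relevant regions
    into the labeled set, and retrain."""
    X_l, y_l, X_u = X_l.copy(), y_l.copy(), X_u.copy()
    for _ in range(K):
        if len(X_u) < m + n:                       # not enough regions left
            break
        clf = SVC(kernel="rbf").fit(X_l, y_l)
        scores = clf.decision_function(X_u)        # signed margin distances
        order = np.argsort(scores)
        pos, neg = order[-m:], order[:n]           # most relevant / non-relevant
        picked = np.concatenate([pos, neg])
        X_l = np.vstack([X_l, X_u[picked]])
        y_l = np.concatenate([y_l, np.ones(m), -np.ones(n)])
        X_u = np.delete(X_u, picked, axis=0)
    return X_l, y_l                                # expanded labeled set
```

One such expanded labeled set is produced per keyword w, and then feeds the PLSA training of Section 2.4.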

    2.3 Feature normalization

During the course of image feature extraction, different kinds of features may have different magnitudes, and how to normalize them appropriately plays a crucial role in the subsequent image processing. Based on this recognition, the Gaussian normalization method (GNM) is used for image feature normalization[19]. Let F_i = (f_i1, …, f_ik, …, f_iq) be the feature vector representing the i-th image region. The mean μ_k and standard deviation σ_k of the k-th feature dimension can be easily calculated. Subsequently, the feature vectors can be normalized according to:

f'_{ik} = (f_{ik} - μ_k) / (3σ_k)        (7)

In Eq.(7), each feature is assumed to be normally distributed and the denominator scales by three standard deviations. According to the 3-σ rule, the probability of a normalized value falling in the range [-1, 1] is approximately 99%. The simple additional shift defined in Eq.(8) then guarantees that about 99% of the feature values lie within [0, 1]:

f''_{ik} = (f'_{ik} + 1) / 2        (8)
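Eqs.(7) and (8) amount to a two-line per-dimension transformation. The following sketch applies them to a regions × features matrix; the function name and the guard against constant feature dimensions are our own additions.

```python
import numpy as np

def gaussian_normalize(F):
    """Apply Eq.(7), f' = (f - mu_k) / (3 * sigma_k), per feature dimension
    of the regions x features matrix F, then the shift of Eq.(8),
    f'' = (f' + 1) / 2, so that roughly 99% of values fall in [0, 1]."""
    mu = F.mean(axis=0)
    sigma = F.std(axis=0)
    sigma[sigma == 0] = 1.0             # guard: constant feature dimensions
    F_prime = (F - mu) / (3.0 * sigma)  # Eq.(7)
    return (F_prime + 1.0) / 2.0        # Eq.(8)
```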

    2.4 Implementation of SSPLSA

    Based on the contents aforementioned, the SSPLSA model can be summarized as follows. Note that Algorithm 2 is utilized to estimate the parameters of SSPLSA while Algorithm 3 is applied to calculate the annotations of the test image.

Algorithm 2  Estimation of the SSPLSA parameters P(w|z)
Input: visual features v_n and textual words w_m of training images
Process:
1.  Randomly initialize the probability tables P(z_k|d_i) and P(w_j|z_k);
2.  while increase in the likelihood of validation data ΔL_s > T_s do
    {E-step}
3.    for k ∈ 1,…,K and all (d_i, w_j) pairs in training documents do
4.      Compute P(z_k|d_i, w_j) with Eq.(3);
5.    end for
    {M-step}
6.    for k ∈ 1,…,K and j ∈ 1,…,M do
7.      Compute P(w_j|z_k) with Eq.(4);
8.    end for
9.    for k ∈ 1,…,K and i ∈ 1,…,N do
10.     Compute P(z_k|d_i) with Eq.(5);
11.   end for
12.   Compute the likelihood of validation data L_s with Eq.(2);
13. end while
Output: θ_k = {P(w_1|z_k), P(w_2|z_k), …, P(w_M|z_k)}, k ∈ 1,…,K.

Algorithm 3  Annotation estimation for the testing images
Input: model parameters θ_k, visual features f of the testing image d_new
Process:
1.  Randomly initialize the P(v|z) probability table;
2.  while increase in the likelihood of validation data ΔL_f > T_f do
    {E-step}
3.    for k ∈ 1,…,K and all (d_i, v_j) pairs in training documents do
4.      Compute P(z_k|d_i, v_j) with Eq.(3);
5.    end for
    {Partial M-step}
6.    for k ∈ 1,…,K and j ∈ 1,…,M do
7.      Compute P(v_j|z_k) with Eq.(4);
8.    end for
9.    Compute the likelihood of validation data L_f from P(v|z) and P(z|d) from the previous modality with Eq.(2);
10. end while
11. Save η_k = {P(v_1|z_k), P(v_2|z_k), …, P(v_M|z_k)};
12. Randomly initialize the P(z|d) probability table;
13. while increase in the likelihood of validation data ΔL_s > T_s do
    {E-step}
14.   for k ∈ 1,…,K and all (d_i, v_j) pairs in training documents do
15.     Compute P(z_k|d_i, v_j) with Eq.(3);
16.   end for
    {Partial M-step}
17.   for k ∈ 1,…,K and i ∈ 1,…,N do
18.     Compute P(z_k|d_i) with Eq.(5);
19.   end for
20.   Compute the likelihood of validation data L_s from P(z|d) and P(v|z) from the previous modality with Eq.(2);
21. end while
22. Compute P(w_j|d_new) with Eq.(6);
23. Select the top-n words with highest P(w_j|d_new) as the annotations.
Output: annotation results of d_new, L = {w_1, …, w_l}.

    3 Experimental results and analysis

    3.1 Dataset and evaluation measures

To evaluate the performance of the SSPLSA model, it is tested on the Corel5k dataset[20], which consists of 5000 images from 50 Corel Stock Photo CDs. Each CD contains 100 images with a certain theme, of which 90 are designated to the training set and 10 to the test set, resulting in 4500 training images and a balanced 500-image test collection. Note that the dictionary contains 260 words that appear in both the training and testing sets, and the normalized cuts (Ncuts) algorithm[21] is applied to segment images into a number of meaningful regions. For each image, at most the 10 largest regions are selected and 809-dimensional visual features (color, texture, shape and saliency) are extracted for each region, comprising 81-dim grid color moment features, 59-dim local binary pattern texture features, 120-dim Gabor wavelet texture features, 37-dim edge orientation histogram features and 512-dim GIST features. All of the extracted features are subsequently normalized by the GNM described in subsection 2.3 and further utilized to train the PLSA model based on the EM algorithm. It should be noted that the keywords of the top-five ranked regions with the largest area are used to annotate the test image. In our case, 5% of all the relevant regions are labeled for each keyword and taken as the initial positive examples for training the SVM. Besides, the parameters are set as K=10, m=3 and n=5; in particular, m indicates that 3 relevant regions and n that 5 non-relevant regions are added to the training set for re-training the SVM classifier during each round of iteration. In addition, to make a fair comparison with other AIA methods, the most commonly used metrics, precision and recall, are calculated for every word in the test set, and the means of these values are used to summarize the model performance.
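The evaluation protocol above (mean per-word precision, mean per-word recall and the number of words with non-zero recall) can be computed as follows; the dictionary-of-keyword-sets interface is an illustrative assumption of this sketch, not an interface from the paper.

```python
from collections import defaultdict

def per_word_metrics(ground_truth, predicted):
    """Mean per-word precision, mean per-word recall and the number of
    words with non-zero recall.  Both arguments map a test-image id to a
    set of keywords; the word list is taken from the ground truth."""
    tp = defaultdict(int)          # correct predictions per word
    gt_count = defaultdict(int)    # ground-truth occurrences per word
    pred_count = defaultdict(int)  # predictions per word
    for img, truth in ground_truth.items():
        pred = predicted.get(img, set())
        for w in truth:
            gt_count[w] += 1
        for w in pred:
            pred_count[w] += 1
            if w in truth:
                tp[w] += 1
    words = list(gt_count)
    recall = {w: tp[w] / gt_count[w] for w in words}
    precision = {w: tp[w] / pred_count[w] if pred_count[w] else 0.0
                 for w in words}
    mean_p = sum(precision.values()) / len(words)
    mean_r = sum(recall.values()) / len(words)
    nonzero = sum(1 for w in words if recall[w] > 0)
    return mean_p, mean_r, nonzero
```

For example, with ground truth {1: {"sky", "tree"}, 2: {"sky"}} and predictions {1: {"sky"}, 2: {"tree"}}, the function returns mean precision 0.5, mean recall 0.25 and 1 word with non-zero recall.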

    3.2 Results of automatic image annotation

MATLAB 7.0 is used to implement the proposed SSPLSA model. Specifically, the experiments are carried out on a personal computer with a 3.30GHz Intel Core i5 CPU and 4GB of memory running Windows 7 Ultimate. To systematically verify the effectiveness of SSPLSA, thorough experiments are performed on the Corel5k dataset and compared with several previous approaches[5,20,22-24]. Table 1 reports the experimental results on two sets of words: the subset of the 49 best words and the complete set of all 260 words that occur in the training set. From Table 1, it can be clearly observed that our model markedly outperforms all the others, especially the first three approaches. Meanwhile, it is also superior to MBRM, PLSA-WORDS and CRMR by gains of 16, 20 and 6 words with non-zero recall; by 12%, 7% and 1% in mean per-word recall and 14%, 30% and 1% in mean per-word precision on the set of 49 words; and by 25%, 25% and 9% in mean per-word recall together with 21%, 64% and 5% in mean per-word precision on the set of 260 words, respectively. Compared to PLSA-WORDS, the significant performance improvement is largely ascribed to the application of TSVM to enhance the quality of the training image data and of GNM to normalize image features with different magnitudes. Furthermore, it is argued that combining these techniques allows them to benefit from each other and yields considerable advantages in terms of annotation accuracy and ease of use of the model. Note that CRMR listed in Table 1 denotes CRM with rectangular regions as input[24].

    Table 1 Performance comparison on Corel5k dataset

Table 2 shows some annotation results (only four cases are listed here due to limited space) yielded by PLSA-WORDS and SSPLSA respectively. It can be clearly observed that our model generates more accurate annotation results compared with the original annotations as well as the ones provided in the literature[5]. Note that the enriched and re-ranked annotations, compared to those of the ground truth and PLSA-WORDS, are underlined and italicized respectively. Taking the first image for example, there exist four tags in the original annotation list. However, after annotation by SSPLSA, its annotation is enriched by the additional keyword “leaves”, which is very appropriate and reasonable for describing the visual content of the image. Similarly, the labels “farm” and “plants” are added for the second image, “trees” for the third image and “ice” for the fourth image.

    Table 2 Annotation comparison between PLSA-WORDS and SSPLSA

    Fig.2 illustrates the precision-recall curves of PLSA-WORDS and SSPLSA models based on the Corel5k dataset, with the number of annotations from 2 to 10. It is easy to see that the performance of the proposed model is evidently superior to that of the PLSA-WORDS.

To further illustrate the effect of the SSPLSA model for automatic image annotation, Fig.3 displays the average annotation precision of 10 selected words, “mountain”, “snow”, “tree”, “building”, “water”, “beach”, “bear”, “sky”, “cat” and “house”, based on the PLSA-WORDS and SSPLSA models respectively. As shown in Fig.3, the average precision of the proposed model is consistently higher than that of PLSA-WORDS. In addition, as for the complexity of SSPLSA, assume that there are D training images and each image produces R visual feature vectors; then the complexity of the model is O(DR).

    Fig.2 Precision-recall curves of PLSA-WORDS and SSPLSA

    Fig.3 Average precision based on PLSA-WORDS and SSPLSA

    4 Conclusions and future work

In this paper, a semi-supervised learning based PLSA is presented for automatic image annotation. First, the widely used TSVM is applied to enhance the quality of the training image data with the help of unlabeled data in the presence of the small sample size problem. Second, GNM is used to normalize image features with different magnitudes so as to preserve the intrinsic content of the images as completely as possible. Third, a PLSA model with asymmetric modalities is constructed to predict a candidate set of annotations. Extensive experiments on Corel5k validate that the SSPLSA model outperforms peer methods in the literature in terms of accuracy, efficiency and robustness. In future work, more complicated real-world image datasets, such as NUS-WIDE and MIRFLICKR, will be used to further evaluate the scalability of the SSPLSA model. Undoubtedly, inaccurate image segmentation makes the region-based image feature representation imprecise and therefore undermines the performance of PLSA based approaches, so exploring more efficient image segmentation methods would help boost the annotation performance. Furthermore, image segmentation itself is a worthy research direction in the field of computer vision and pattern recognition.

    [ 1] Hofmann T. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 2001, 42(1): 177-196

    [ 2] Blei D, Ng A, Jordan M. Latent Dirichlet allocation. Journal of Machine Learning Research, 2003, 3(4): 993-1022

[ 3] Blei D, Lafferty J. Correlated topic models. Annals of Applied Statistics, 2007, 1(1): 17-35

    [ 4] Monay F, Gatica-Perez D. On image auto-annotation with latent space models. In: Proceedings of the 11th International Conference on Multimedia, Berkeley, USA, 2003. 275-278

    [ 5] Monay F, Gatica-Perez D. Modeling semantic aspects for cross-media image indexing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(10): 1802-1817

    [ 6] Peng Y, Lu Z, Xiao J. Semantic concept annotation based on audio PLSA model. In: Proceedings of the 17th International Conference on Multimedia, Beijing, China, 2009. 841-844

    [ 7] Zhang R, Guan L, Zhang L, et al. Multi-feature PLSA for combining visual features in image annotation. In: Proceedings of the 19th International Conference on Multimedia, Scottsdale, USA, 2011. 1513-1516

    [ 8] Guo Q, Li N, Yang Y, et al. Integrating image segmentation and annotation using supervised PLSA. In: Proceedings of the 20th International Conference on Image Processing, Melbourne, Australia, 2013. 3800-3804

    [ 9] Nikolopoulos S, Zafeiriou S, Patras I, et al. High order PLSA for indexing tagged images. Signal Processing, 2013, 93(8): 2212-2228

    [10] Tian D, Zhao X, Shi Z. Fusing PLSA model and Markov random fields for automatic image annotation. High Technology Letters, 2014, 20(4): 409-414

    [11] Wang Z, Yi H, Wang J, et al. Hierarchical Gaussian mixture model for image annotation via PLSA. In: Proceedings of the 5th International Conference on Image and Graphics, Xi’an, China, 2009. 384-389

    [12] Lu Z, Peng Y, Ip H. Image categorization via robust PLSA. Pattern Recognition Letters, 2010, 31(1): 36-43

    [13] Romberg S, Lienhart R, Horster E. Multimodal image retrieval: fusing modalities with multilayer multimodal PLSA. International Journal of Multimedia Information Retrieval, 2012, 1(1): 31-44

    [14] Li P, Cheng J, Li Z, et al. Correlated PLSA for image clustering. In: Proceedings of the 17th International Conference on Multimedia Modeling, Taipei, China, 2011. 307-316

    [15] Li W, Sun M. Semi-supervised learning for image annotation based on conditional random fields. In: Proceedings of the 5th International Conference on Image and Video Retrieval, Tempe, USA, 2006. 463-472

    [16] Marin-Castro H, Sucar E, Morales E. Automatic image annotation using a semi-supervised ensemble of classifiers. In: Proceedings of the 12th Iberoamerican Conference on Progress in Pattern Recognition, Image Analysis and Applications, Valparaiso, Chile, 2007. 487-495

    [17] Zhu S, Liu Y. Semi-supervised learning model based efficient image annotation. IEEE Signal Processing Letters, 2009, 16(11): 989-992

    [18] Yuan Y, Wu F, Shao J, et al. Image annotation by semi-supervised cross-domain learning with group sparsity. Journal of Visual Communication and Image Representation, 2013, 24(2): 95-102

    [19] Rui Y, Huang T, Ortega M, et al. Relevance feedback: a power tool for interactive content-based image retrieval. IEEE Transaction on Circuits and Systems for Video Technology, 1998, 8(5): 644-655

[20] Duygulu P, Barnard K, de Freitas N, et al. Object recognition as machine translation: learning a lexicon for a fixed image vocabulary. In: Proceedings of the 7th European Conference on Computer Vision, Copenhagen, Denmark, 2002. 97-112

    [21] Shi J, Malik J. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(8): 888-905

[22] Jeon J, Lavrenko V, Manmatha R. Automatic image annotation and retrieval using cross-media relevance models. In: Proceedings of the 26th International ACM SIGIR Conference on Research and Development in Information Retrieval, Toronto, Canada, 2003. 119-126

    [23] Lavrenko V, Manmatha R, Jeon J. A model for learning the semantics of pictures. In: Advances in Neural Information Processing Systems 16, Vancouver, Canada, 2003. 553-560

    [24] Feng S, Manmatha R, Lavrenko V. Multiple Bernoulli relevance models for image and video annotation. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, Washington, USA, 2004. 1002-1009

    10.3772/j.issn.1006-6748.2017.04.004

    ①To whom correspondence should be addressed. E-mail: tdp211@163.com, tiandp@ics.ict.ac.cn

Received on Sep. 30, 2016

    ②Supported by the National Program on Key Basic Research Project (No. 2013CB329502), the National Natural Science Foundation of China (No. 61202212), the Special Research Project of the Educational Department of Shaanxi Province of China (No. 15JK1038) and the Key Research Project of Baoji University of Arts and Sciences (No. ZK16047).

Tian Dongping received his M.Sc. and Ph.D. degrees in computer science from Shanghai Normal University and the Institute of Computing Technology, Chinese Academy of Sciences in 2007 and 2014, respectively. His research interests include computer vision, machine learning and evolutionary computation.

久热久热在线精品观看| 大话2 男鬼变身卡| 视频中文字幕在线观看| 国产日韩欧美在线精品| 免费播放大片免费观看视频在线观看| 99热6这里只有精品| 熟女电影av网| 人人妻人人添人人爽欧美一区卜| 国产免费又黄又爽又色| 亚洲欧美成人综合另类久久久| 麻豆精品久久久久久蜜桃| 天堂中文最新版在线下载| 一区二区三区四区激情视频| 国产又色又爽无遮挡免| 亚洲伊人久久精品综合| 国产精品一二三区在线看| 高清黄色对白视频在线免费看| 大片免费播放器 马上看| 欧美老熟妇乱子伦牲交| 日本午夜av视频| 午夜激情av网站| 黑丝袜美女国产一区| 久久人人爽av亚洲精品天堂| 在线观看一区二区三区激情| 免费观看a级毛片全部| 亚洲精品成人av观看孕妇| 伊人亚洲综合成人网| 午夜91福利影院| 久久精品人人爽人人爽视色| 成人漫画全彩无遮挡| 精品亚洲乱码少妇综合久久| 黄片无遮挡物在线观看| a 毛片基地| 永久免费av网站大全| 日韩免费高清中文字幕av| 伦精品一区二区三区| 99九九线精品视频在线观看视频| 国产淫语在线视频| 一本—道久久a久久精品蜜桃钙片| 51国产日韩欧美| 国产有黄有色有爽视频| 天堂8中文在线网| 久久毛片免费看一区二区三区| 爱豆传媒免费全集在线观看| 中文字幕亚洲精品专区| 久久热精品热| 午夜老司机福利剧场| 国产成人一区二区在线| 五月玫瑰六月丁香| 飞空精品影院首页| 久久久亚洲精品成人影院| av卡一久久| 男女无遮挡免费网站观看| 精品一区二区免费观看| 人妻系列 视频| 母亲3免费完整高清在线观看 | 国产无遮挡羞羞视频在线观看| 免费观看av网站的网址| 99久久综合免费| 亚洲欧美成人精品一区二区| 久久人人爽人人爽人人片va| tube8黄色片| 免费播放大片免费观看视频在线观看| 丝袜在线中文字幕| 男女啪啪激烈高潮av片| 一区二区三区乱码不卡18| 国产av码专区亚洲av| 美女内射精品一级片tv| 免费大片黄手机在线观看| 一级二级三级毛片免费看| 亚洲人成网站在线观看播放| 久久久国产一区二区| 日本91视频免费播放| www.色视频.com| 亚洲精品乱码久久久v下载方式| av免费观看日本| 国产精品久久久久久久久免| 在线观看免费日韩欧美大片 | 精品国产露脸久久av麻豆| 久久国产亚洲av麻豆专区| 国产欧美亚洲国产| 国产成人av激情在线播放 | 日韩一本色道免费dvd| 我要看黄色一级片免费的| 波野结衣二区三区在线| 18禁在线无遮挡免费观看视频| 女性生殖器流出的白浆| 久久精品久久精品一区二区三区| 欧美一级a爱片免费观看看| 七月丁香在线播放| 熟妇人妻不卡中文字幕| 亚洲欧洲国产日韩| 久久狼人影院| 亚洲一级一片aⅴ在线观看| 高清在线视频一区二区三区| 亚洲美女视频黄频| 国产精品蜜桃在线观看| 国产 一区精品| 亚洲精品中文字幕在线视频| 91国产中文字幕| 99热这里只有是精品在线观看| 欧美日韩成人在线一区二区| 99热6这里只有精品| 亚洲av国产av综合av卡| 老司机亚洲免费影院| 制服丝袜香蕉在线| 欧美 亚洲 国产 日韩一| 国产有黄有色有爽视频| 免费观看无遮挡的男女| 大香蕉久久网| 亚洲色图综合在线观看| 欧美日本中文国产一区发布| 嫩草影院入口| 久久99一区二区三区| 久久久精品区二区三区| 在线观看免费日韩欧美大片 | 亚洲精品日韩在线中文字幕| 日韩一区二区三区影片| 国产精品国产三级专区第一集| 欧美日韩av久久| 这个男人来自地球电影免费观看 | 中文字幕免费在线视频6| 亚洲国产av新网站| 高清在线视频一区二区三区| 亚洲国产欧美在线一区| 久热这里只有精品99| 激情五月婷婷亚洲| 国产一区二区三区综合在线观看 | 国产精品国产三级国产专区5o| 久久精品久久久久久噜噜老黄| 夜夜爽夜夜爽视频| 青春草亚洲视频在线观看| 国产探花极品一区二区| 久久午夜福利片| 成人亚洲欧美一区二区av| 午夜激情av网站| 26uuu在线亚洲综合色| 在线免费观看不下载黄p国产| 视频在线观看一区二区三区|