
    Semantic image annotation based on GMM and random walk model①

High Technology Letters, 2017, Issue 2

Tian Dongping (田東平)②***

    (*Institute of Computer Software, Baoji University of Arts and Sciences, Baoji 721007, P.R.China) (**Institute of Computational Information Science, Baoji University of Arts and Sciences, Baoji 721007, P.R.China)


Automatic image annotation has been an active research topic in computer vision and pattern recognition for decades. A two-stage automatic image annotation method based on the Gaussian mixture model (GMM) and a random walk model (abbreviated as GMM-RW) is presented. To start with, a GMM fitted by the rival penalized expectation maximization (RPEM) algorithm is employed to estimate the posterior probability of each annotation keyword. Subsequently, a random walk over a constructed label similarity graph is implemented to further mine the potential correlations of the candidate annotations and refine the results, which plays a crucial role in semantic based image retrieval. The contributions of this work are threefold. First, GMM is exploited to capture the initial semantic annotations; in particular, the RPEM algorithm is utilized to train the model, which allows the number of components in the GMM to be determined automatically. Second, a label similarity graph is constructed by a weighted linear combination of the label similarity and the visual similarity of the images associated with the corresponding labels, which efficiently avoids the phenomena of polysemy and synonymy during the annotation process. Third, a random walk is implemented over the constructed label graph to further refine the candidate set of annotations generated by the GMM. Experiments on the standard Corel5k dataset demonstrate that GMM-RW is significantly more effective and efficient than several state-of-the-art approaches in the task of automatic image annotation.

    semantic image annotation, Gaussian mixture model (GMM), random walk, rival penalized expectation maximization (RPEM), image retrieval

    0 Introduction

With the advent and popularity of the World Wide Web, the number of accessible digital images for various purposes is growing at an exponential speed. To make the best use of these resources, people need an efficient and effective tool to manage them. In this context, content-based image retrieval (CBIR) was introduced in the early 1990s; it depends heavily on low-level features to find images relevant to the query concept represented by a query example provided by the user. However, in the field of computer vision and multimedia processing, the semantic gap between low-level visual features and high-level semantic concepts is a major obstacle to CBIR. As a result, automatic image annotation (AIA) has emerged and become an active research topic in computer vision, owing to its potentially large impact on both image understanding and web image search[1]. To be specific, AIA refers to the process of automatically generating textual words to describe the content of a given image, which plays a crucial role in semantic based image retrieval. As can be seen from the literature, research on AIA has mainly proceeded along two lines. The first poses image annotation as a supervised classification problem, which treats each semantic keyword or concept as an independent class and assigns each keyword or concept one classifier. More specifically, such approaches predict the annotations for a new image by computing the similarity at the visual level and propagating the corresponding keywords. Representative work includes automatic linguistic indexing of pictures[2] and the supervised formulation for semantic image annotation and retrieval[3]. In contrast, the second category treats the words and visual tokens in each image as equivalent features in different modalities.
Image annotation is then formalized by modeling the joint distribution of visual and textual features on the training data and predicting the missing textual features for a new image. Representative research includes the translation model (TM)[4], cross-media relevance model (CMRM)[5], continuous space relevance model (CRM)[6], multiple Bernoulli relevance model (MBRM)[7], probabilistic latent semantic analysis (PLSA)[8] and the correlated topic model[9], etc. By comparison, the former approach is relatively direct and natural to understand, but its performance deteriorates as the number of semantic concepts and the volume of multimedia data on the web grow. On the other hand, the latter often requires a large number of parameters to be estimated, and its accuracy is strongly affected by the quantity and quality of the available training data.

    The content of this paper is structured as follows. Section 1 summarizes the related work, particularly GMM applied in the fields of automatic image annotation and retrieval. Section 2 elaborates the proposed GMM-RW model, including its parameter estimation, label similarity graph and refining annotation based on the random walk. In Section 3, conducted experiments are reported and analyzed based on the standard Corel5k dataset. Finally, some concluding remarks and potential research directions of GMM in the future are given in Section 4.

    1 Related work

Gaussian mixture model (GMM), as another kind of supervised learning method, has been extensively applied in machine learning and pattern recognition. As representative work using GMM for automatic image annotation, Yang, et al.[10] formulate AIA as a supervised multi-class labeling problem. They employ color and texture features to form two separate vectors, for which two independent Gaussian mixture models are estimated from the training set as the class densities by means of the EM algorithm in conjunction with a denoising technique. In Ref.[11], an effective visual vocabulary was constructed by applying hierarchical GMM instead of traditional clustering methods. Meanwhile, PLSA was utilized to explore semantic aspects of visual concepts and to discover topic clusters among documents and visual words, so that every image could be projected onto a lower dimensional topic space for more efficient annotation. Besides, Wang, et al.[12] adapted the conventional GMM to a global one estimated from all patches of the training images, along with an image-specific GMM obtained by adapting the mean vectors of the global GMM while retaining the mixture weights and covariance matrices. Afterwards, GMM was embedded into the max-min posterior pseudo-probabilities framework for AIA, in which concept-specific visual vocabularies are generated by assuming that the localized features of images with a specific concept follow a GMM distribution[13]. It is generally believed that the spatial relation among objects is very important for image understanding and recognition. In more recent work[14], a new method for automatic image annotation based on GMM with region-based color and coordinate matching is proposed to take this factor into account. Specifically, this method first partitions images into disjoint, connected regions using color features and x-y coordinates, while the training dataset is modeled through GMM to obtain a stable annotation result in the later phase.

As representative work for CBIR, Sahbi[15] proposed a GMM for clustering and its application to image retrieval. In particular, each cluster of data, modeled as a GMM in an input space, is interpreted as a hyperplane in a high dimensional mapping space, where the underlying coefficients are found by solving a quadratic programming problem. In Ref.[16], GMM was leveraged to work on color histograms built with weights delivered by the bilateral filter scheme, which enabled the retrieval system not only to consider the global distribution of the color image pixels but also to take into account their spatial arrangement. In the work of Sayad, et al.[17], a new method was introduced that uses multilayer PLSA for image retrieval, which could effectively eliminate the noisiest words generated by the vocabulary building process. Meanwhile, an edge context descriptor is extracted by GMM, and a spatial weighting scheme is constructed based on GMM to reflect information about the spatial structure of the images. At the same time, Raju, et al.[18] presented a method for CBIR by making use of the generalized GMM. Wan, et al.[19] proposed a clustering based indexing approach called GMM cluster forest to support multi-feature based similarity search in high-dimensional spaces. In addition, GMM has also been successfully applied in other multimedia related fields[20-24].

As briefly reviewed above, most of these GMM related models achieve encouraging performance and motivate us to explore better image annotation methods with the help of their experience and knowledge. Hence, in this paper, a two-stage automatic image annotation method based on the Gaussian mixture model and a random walk is proposed. First, a GMM is learned by the rival penalized expectation maximization algorithm to estimate the posterior probability of each annotation keyword. In other words, GMM is exploited to capture the initial semantic annotations, which can be seen as the first stage of AIA. Second, a label similarity graph is constructed by a weighted linear combination of label similarity and visual similarity of the images associated with the corresponding labels, which can efficiently avoid the phenomena of polysemy and synonymy. Third, a random walk is implemented over the constructed label graph to further refine the candidate set of annotations generated by the GMM, which can be viewed as the second stage of image annotation. Finally, extensive experiments on the Corel5k dataset validate the effectiveness and efficiency of the proposed model.

    2 Proposed GMM-RW

In this section, the scheme of the GMM-RW model proposed in this study is first described (as depicted in Fig.1). Subsequently, GMM-RW is elaborated on from three aspects: GMM and its parameter estimation, construction of the label similarity graph, and annotation refinement based on the random walk.

    Fig.1 Scheme of the proposed GMM-RW model

    2.1 GMM and its parameter estimation

A Gaussian mixture model is a parametric probability density function represented as a weighted sum of Gaussian component densities. GMMs are commonly used as parametric models of the probability distribution of continuous measurements. More formally, a GMM is a weighted sum of M component Gaussian densities, as given by the following equation:

p(x|λ) = ∑_{i=1}^{M} w_i g(x|μ_i, Σ_i)    (1)

where x is a D-dimensional continuous-valued data vector, w_i (i = 1, 2, …, M) are the mixture weights, and g(x|μ_i, Σ_i), i = 1, 2, …, M, are the component Gaussian densities. Each component density is a D-variate Gaussian function as follows:

g(x|μ_i, Σ_i) = (2π)^{−D/2} |Σ_i|^{−1/2} exp{−(1/2)(x − μ_i)^T Σ_i^{−1} (x − μ_i)}    (2)

with mean vector μ_i and covariance matrix Σ_i. The mixture weights satisfy the constraint ∑_{i=1}^{M} w_i = 1. The complete GMM is parameterized by the mean vectors, covariance matrices and mixture weights of all the component densities, collectively represented by the notation λ = {w_i, μ_i, Σ_i}, i = 1, 2, …, M.
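In code, Eqs (1) and (2) amount to evaluating each component Gaussian and summing them under the mixture weights. A minimal numpy sketch (the two-component parameters below are illustrative, not from the paper):

```python
import numpy as np

def gaussian_density(x, mu, cov):
    """D-variate Gaussian density g(x | mu, cov) of Eq.(2)."""
    D = len(mu)
    diff = x - mu
    norm = (2 * np.pi) ** (D / 2) * np.sqrt(np.linalg.det(cov))
    return np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff) / norm

def gmm_density(x, weights, mus, covs):
    """Mixture density p(x | lambda) of Eq.(1): weighted sum of components."""
    return sum(w * gaussian_density(x, mu, cov)
               for w, mu, cov in zip(weights, mus, covs))

# Two-component toy mixture in 2-D (illustrative parameters).
weights = [0.6, 0.4]
mus = [np.zeros(2), np.ones(2) * 3]
covs = [np.eye(2), np.eye(2) * 2]
p = gmm_density(np.array([0.5, 0.5]), weights, mus, covs)
```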

There are several techniques available for estimating the parameters of a GMM. By far the most popular and well-established one is maximum likelihood (ML) estimation, which aims to find the model parameters that maximize the likelihood of the GMM given the training data. In general, the expectation-maximization (EM) algorithm is employed to fit a GMM, because direct maximization of the likelihood is infeasible. However, EM imposes no penalty on redundant mixture components, which means that the number of components in a GMM cannot be determined automatically and has to be assigned in advance. To this end, the rival penalized expectation maximization (RPEM) algorithm[25] is leveraged to determine the number of components as well as to estimate the model parameters. Since RPEM introduces unequal weights into the conventional likelihood, the weighted likelihood can be written as below:

Q(λ; X) = ∑_{i=1}^{N} l(x_i; λ)    (3)

    with

l(x_i; λ) = ∑_{j=1}^{M} g(j|x_i, λ) ln[w_j p(x_i|μ_j, Σ_j)]    (4)

where h(j|x_i, λ) = w_j p(x_i|μ_j, Σ_j)/p(x_i|λ) is the posterior probability that x_i belongs to the j-th component of the mixture, and g(j|x_i, λ), j = 1, 2, …, M, are designable weight functions satisfying the following constraints:

∑_{j=1}^{M} g(j|x_i, λ) = 1    (5)

In Ref.[25], these weight functions are constructed as follows:

g(j|x_i, λ) = (1 + ε_i) I(j|x_i, λ) − ε_i h(j|x_i, λ)    (6)

    (6)

where I(j|x_i, λ) equals 1 if j = argmax_{1≤k≤M} h(k|x_i, λ) and 0 otherwise, and ε_i is a small positive quantity. The major steps of the RPEM algorithm are summarized below:

Algorithm 1: The RPEM algorithm for GMM modeling
Input: feature vectors x, the number of components M, the learning rate η, the maximum number of epochs epoch_max; initialize λ as λ(0).
Process:
1. epoch_count = 0, m = 0;
2. while epoch_count ≤ epoch_max do
3.   for i = 1 to N do
4.     Given λ(m), calculate h(j|x_i, λ(m)) to obtain g(j|x_i, λ(m)) by Eq.(6);
5.     λ(m+1) = λ(m) + Δλ = λ(m) + η ∂l(x_i; λ)/∂λ |_{λ=λ(m)};
6.     m = m + 1;
7.   end for
8.   epoch_count = epoch_count + 1;
9. end while
Output: the converged λ for GMM.
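Step 4 of Algorithm 1, computing the posterior h(j|x_i, λ) and the rival-penalizing weight g(j|x_i, λ) of Eq.(6), can be sketched as follows. This is a 1-D toy mixture with illustrative parameters; the gradient update of step 5 is omitted:

```python
import numpy as np

def posteriors(x, weights, mus, sigmas):
    """h(j | x, lambda): posterior that x belongs to component j (1-D GMM)."""
    dens = np.array([w * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
                     for w, m, s in zip(weights, mus, sigmas)])
    return dens / dens.sum()

def rpem_weights(h, eps=0.1):
    """g(j | x, lambda) of Eq.(6): reward the winning component, penalize rivals."""
    winner = np.argmax(h)
    I = np.zeros_like(h)
    I[winner] = 1.0
    return (1 + eps) * I - eps * h

h = posteriors(0.2, [0.5, 0.5], [0.0, 3.0], [1.0, 1.0])
g = rpem_weights(h)
```

Note that the rival component receives a small negative weight, which is what gradually drives redundant components out of the mixture.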

Based on the Gaussian mixture model and the RPEM algorithm described above, a GMM can be trained and utilized to characterize the semantic model of the given concepts by Eq.(1). Assume that a training image is represented by both a visual feature set X = {x_1, x_2, …, x_m} and a keyword list W = {w_1, w_2, …, w_n}, where x_i (i = 1, 2, …, m) denotes the visual feature of region i and w_j (j = 1, 2, …, n) is the j-th keyword in the annotation. For a test image I represented by its visual feature vectors X = {x_1, x_2, …, x_m}, according to the Bayesian rule, the posterior probability p(w_i|I) can be calculated from the conditional probability p(I|w_i) and the prior probability p(w_i) as follows:

p(w_i|I) = p(I|w_i) p(w_i) / p(I)    (7)

From Eq.(7), the top n keywords can be selected as the initial annotations for image I.
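As a sketch of this selection step, the posterior of Eq.(7) can be computed from per-word likelihoods p(I|w) and priors p(w) and the top n words kept. The likelihoods and priors below are hypothetical stand-ins for the scores a trained per-word GMM would produce:

```python
import numpy as np

def annotate(cond, prior, n=3):
    """Rank keywords by p(w|I) = p(I|w) p(w) / p(I) (Eq.(7)) and keep the top n."""
    words = list(cond)
    scores = np.array([cond[w] * prior[w] for w in words])
    post = scores / scores.sum()            # normalizing by p(I)
    order = np.argsort(post)[::-1][:n]
    return [(words[i], float(post[i])) for i in order]

# Illustrative likelihoods p(I|w) and word priors p(w).
cond = {"sky": 0.30, "water": 0.20, "tree": 0.05, "cat": 0.01}
prior = {"sky": 0.4, "water": 0.3, "tree": 0.2, "cat": 0.1}
top = annotate(cond, prior, n=3)
```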

    2.2 Construction of the label similarity graph

In the process of automatic image annotation, at least three kinds of relations are involved over the two modalities of data: image-to-image, image-to-word and word-to-word relations. How to reasonably reflect these cross-modal relations between images and words plays a critical role in the task of AIA. The most common approaches rely on WordNet[26] and the normalized Google distance (NGD)[27]. From their definitions, it can be observed that NGD is actually a measure of contextual relatedness, while WordNet focuses on the semantic meaning of the keyword itself. Moreover, both of them build word correlations only from textual descriptions, whereas the visual information of the images in the dataset is not considered at all, which can easily lead to different images with the same candidate annotations obtaining identical annotation results after the refining process. For this reason, an effective pairwise similarity strategy is devised by calculating a weighted linear combination of label similarity and visual similarity of the images associated with the corresponding labels, in which the label similarity between words w_i and w_j is defined as

s_l(w_i, w_j) = exp(−d(w_i, w_j))    (8)

    (8)

where d(w_i, w_j) represents the distance between the two words w_i and w_j, defined similarly to NGD as below:

d(w_i, w_j) = [max(log f(w_i), log f(w_j)) − log f(w_i, w_j)] / [log G − min(log f(w_i), log f(w_j))]    (9)

where f(w_i) and f(w_j) denote the numbers of images containing words w_i and w_j respectively, f(w_i, w_j) is the number of images containing both w_i and w_j, and G is the total number of images in the dataset.
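A minimal sketch of Eqs (8) and (9), computing s_l directly from co-occurrence counts (the counts below are illustrative, assuming an NGD-style distance as described above):

```python
import math

def label_similarity(f_i, f_j, f_ij, G):
    """s_l(wi, wj) = exp(-d(wi, wj)) with an NGD-style distance computed from
    image counts: f_i and f_j images contain each word, f_ij contain both,
    and G is the total number of images in the dataset."""
    d = ((max(math.log(f_i), math.log(f_j)) - math.log(f_ij))
         / (math.log(G) - min(math.log(f_i), math.log(f_j))))
    return math.exp(-d)

# Illustrative counts on a 5000-image collection.
s_close = label_similarity(400, 300, 250, 5000)   # frequently co-occurring pair
s_far = label_similarity(400, 300, 5, 5000)       # rarely co-occurring pair
```

As expected, frequently co-occurring word pairs receive a similarity close to 1, while rarely co-occurring pairs are pushed toward 0.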

Similar to Ref.[28], for a label w associated with image x, the K nearest neighbors are collected from the images containing w, and these images can be regarded as the exemplars of label w with respect to x. Thus, from the point of view of the labels associated with an image, the visual similarity between labels w_i and w_j is given as follows:

s_v(w_i, w_j) = (1/(|Γ_{w_i}| |Γ_{w_j}|)) ∑_{x∈Γ_{w_i}} ∑_{y∈Γ_{w_j}} exp(−‖x − y‖²/σ²)    (10)

where Γ_w is the representative image collection of word w, x and y denote image features drawn from the respective image collections of words w_i and w_j, and σ is the user-defined radius parameter of the Gaussian kernel function. To benefit from both of the two similarities described above, a weighted linear combination of label similarity and visual similarity is defined as below:

s_{ij} = s(w_i, w_j) = λ s_l(w_i, w_j) + (1 − λ) s_v(w_i, w_j)    (11)

    (11)

where λ ∈ [0, 1] is utilized to control the weight of each measurement.
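Under one plausible reading of Eq.(10) — averaging the Gaussian-kernel affinity over the exemplar image sets of the two labels — the combined similarity of Eq.(11) can be sketched as follows. The feature vectors, label similarity value and λ below are illustrative:

```python
import numpy as np

def visual_similarity(feats_i, feats_j, sigma=1.0):
    """s_v(wi, wj): average Gaussian-kernel affinity between the exemplar
    image features of the two labels (one reading of Eq.(10))."""
    total = 0.0
    for x in feats_i:
        for y in feats_j:
            total += np.exp(-np.sum((x - y) ** 2) / sigma ** 2)
    return total / (len(feats_i) * len(feats_j))

def combined_similarity(s_l, s_v, lam=0.6):
    """Eq.(11): weighted linear combination of label and visual similarity."""
    return lam * s_l + (1 - lam) * s_v

# Illustrative exemplar features for two labels.
feats_sky = [np.array([0.1, 0.2]), np.array([0.2, 0.1])]
feats_clouds = [np.array([0.15, 0.18])]
s_v = visual_similarity(feats_sky, feats_clouds)
s = combined_similarity(0.8, s_v)
```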

    2.3 Refining annotation based on random walk

In the following, the refining image annotation stage is elaborated based on the initial annotations generated by the GMM and the random walk model. Given the label graph constructed in subsection 2.2 with n nodes, r_k(i) is used to denote the relevance score of node i at iteration k, and P denotes an n-by-n transition matrix whose element p_{ij} indicates the probability of a transition from node i to node j, computed as

p_{ij} = s_{ij} / ∑_{k=1}^{n} s_{ik}    (12)

where s_{ij} is the pairwise label similarity (defined by Eq.(11)) between nodes i and j. Then the random walk process can be formulated as

r_k(j) = α ∑_{i} r_{k−1}(i) p_{ij} + (1 − α) v_j    (13)

where α ∈ (0, 1) is a weight parameter to be determined and v_j denotes the initial annotation probabilistic score calculated by the GMM. In the process of refining image annotation, the random walk proceeds until it reaches the steady-state probability distribution, and then the top several candidates with the highest probabilities are taken as the final refined image annotation results.
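The refinement loop of Eqs (12) and (13) can be sketched as below; the 3×3 similarity matrix and initial GMM scores are illustrative. Iterating r_k = α P^T r_{k−1} + (1 − α) v until convergence lets labels that are strongly connected in the graph reinforce each other:

```python
import numpy as np

def random_walk_refine(S, v, alpha=0.5, tol=1e-10, max_iter=1000):
    """Iterate Eq.(13) until the relevance scores reach their steady state;
    S is the pairwise label similarity matrix, v the initial GMM scores."""
    P = S / S.sum(axis=1, keepdims=True)    # row-normalized transitions, Eq.(12)
    r = v.copy()
    for _ in range(max_iter):
        r_new = alpha * P.T @ r + (1 - alpha) * v
        if np.abs(r_new - r).max() < tol:
            break
        r = r_new
    return r

# Illustrative 3-label graph: labels 0 and 1 are strongly related.
S = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
v = np.array([0.5, 0.1, 0.4])               # initial scores from the GMM
r = random_walk_refine(S, v)
```

Label 1 starts with a low GMM score but gains relevance from its strong neighbor, label 0, which is exactly the correlation-mining effect the refinement stage relies on.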

    3 Experimental results and analysis

    3.1 Dataset and evaluation measures

The proposed GMM-RW is tested on the Corel5k image dataset obtained from Ref.[4]. Corel5k consists of 5,000 images from 50 Corel Stock Photo CDs. Each CD contains 100 images on a certain theme (e.g. polar bears), of which 90 are designated for the training set and 10 for the test set, resulting in 4,500 training images and a balanced 500-image test collection. For the sake of fair comparison, features similar to those of Ref.[7] are extracted. First, images are simply decomposed into a set of 32×32-sized blocks, and a 36-dim feature vector is computed for each block, consisting of 24 color features (auto-correlogram computed over 8 quantized colors and 3 Manhattan distances) and 12 texture features (Gabor filters computed over 3 scales and 4 orientations). As a result, each block is represented as a 36-dim feature vector, and each image is represented as a bag of such vectors. These features are subsequently employed to train the GMM based on the RPEM algorithm. In addition, the value of λ in Eq.(11) is set to 0.6 and the value of α in Eq.(13) is set to 0.5 by trial and error. Without loss of generality, the commonly used metrics of precision and recall are calculated for every word in the test set, and the means of these values are used to summarize the performance.
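The evaluation protocol — mean per-word precision, mean per-word recall and the number of words with non-zero recall — can be sketched with a toy ground truth (all data below are illustrative):

```python
def per_word_metrics(ground_truth, predictions, vocab):
    """Mean per-word precision/recall and the number of words with non-zero
    recall, as used on Corel5k; ground_truth and predictions map an image
    id to its set of annotation words."""
    precisions, recalls, nonzero = [], [], 0
    for w in vocab:
        relevant = {i for i, ws in ground_truth.items() if w in ws}
        retrieved = {i for i, ws in predictions.items() if w in ws}
        if not relevant:
            continue                      # word absent from the test set
        correct = len(relevant & retrieved)
        recalls.append(correct / len(relevant))
        precisions.append(correct / len(retrieved) if retrieved else 0.0)
        if correct:
            nonzero += 1
    mean_p = sum(precisions) / len(precisions)
    mean_r = sum(recalls) / len(recalls)
    return mean_p, mean_r, nonzero

# Toy annotations: three images, three vocabulary words.
gt = {1: {"sky", "tree"}, 2: {"sky"}, 3: {"water"}}
pred = {1: {"sky"}, 2: {"sky", "water"}, 3: {"tree"}}
p, r, nz = per_word_metrics(gt, pred, ["sky", "tree", "water"])
```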

    3.2 Results of automatic image annotation

Matlab 7.0 is applied to implement the proposed GMM-RW model. Specifically, the experiments are carried out on a personal computer with a 1.80GHz Intel Core Duo CPU and 2.0GB memory running Microsoft Windows XP Professional. To verify the effectiveness of the proposed model, it is compared with several previous approaches[4-8]. Table 1 reports the experimental results on two sets of words: the subset of the 49 best words and the complete set of all 260 words occurring in the training set. From Table 1, it is clear that the model markedly outperforms all the others, especially the first three approaches. Meanwhile, it is also superior to PLSA-WORDS and MBRM by gains of 21 and 4 words with non-zero recall, 30% and 4% in mean per-word recall, and 79% and 4% in mean per-word precision on the set of 260 words, respectively. In addition, compared with MBRM on the set of 49 best words, an improvement in mean per-word precision is obtained even though the mean per-word recall of GMM-RW is the same as that of MBRM.

    Table 1 Performance comparison on Corel5k dataset

To further illustrate the effect of the GMM-RW model for automatic image annotation, Fig.2 displays the average annotation precision of the 10 selected words “flowers”, “mountain”, “snow”, “tree”, “building”, “beach”, “water”, “sky”, “bear” and “cat” based on the GMM and GMM-RW models, respectively. As shown in Fig.2, the average precision of the model is clearly higher than that of GMM alone. The reason is that, in addition to profiting from the calculation strategy for cross-modal relations between images and words, GMM-RW benefits to a large extent from the random walk process, which further mines the correlations of the candidate annotations.

    Fig.2 Average precision based on GMM and GMM-RW

Alternatively, Table 2 shows some examples of image annotations (only eight cases are listed here due to limited space) produced by PLSA-WORDS and GMM-RW, respectively. It can be clearly observed that the model is able to generate more accurate annotation results compared with the original annotations as well as those provided in Ref.[8]. Taking the first image in the first row as an example, there are four tags in its original annotation. After annotation by GMM-RW, however, it is enriched with the additional keyword “grass”, which is appropriate and reasonable for describing the visual content of the image. On the other hand, it is important to note that the ranking of the annotation keywords is more reasonable than that generated by PLSA-WORDS, which plays a crucial role in semantic based image retrieval. In addition, as for the complexity of GMM-RW, assuming that there are D training images and each image produces R visual feature vectors, the complexity of our model is O(DR), which is similar to the classic CRM and MBRM models mentioned in Ref.[3].

    Table 2 Annotation comparison with PLSA-WORDS and GMM-RW

    4 Conclusions and future work

In this paper, a two-stage automatic image annotation method based on GMM and a random walk model is presented. First, a GMM fitted by the rival penalized expectation maximization algorithm is applied to estimate the posterior probability of each annotation keyword. Then a random walk process over the constructed label similarity graph is implemented to further mine the correlations of the candidate annotations so as to refine the results. In particular, the label similarity graph is constructed by a weighted linear combination of label similarity and visual similarity of the images associated with the corresponding labels, which can efficiently avoid the phenomena of polysemy and synonymy in the course of automatic image annotation. Extensive experiments on the general-purpose Corel5k dataset validate the feasibility and utility of the proposed GMM-RW model.

As for future work, a plan is made to explore more powerful GMM related models for automatic image annotation from the following aspects. First, since the classic GMM is limited in its modeling ability because all data points of an object are required to be generated from a pool of mixtures with the same set of mixture weights, how to determine the weight factors of GMM more appropriately is well worth exploring. Second, how to speed up GMM estimation with the EM algorithm is also important for large-scale multimedia processing; in other words, the choice of alternative techniques for estimating GMM parameters could be very valuable. Third, how to introduce semi-supervised learning into the proposed approach to utilize labeled and unlabeled data simultaneously is a worthy research direction. At the same time, work on web image annotation will be continued by refining more relevant semantic information from web pages and building more suitable connections between image content features and the available semantic information. Last but not least, GMM-RW is expected to be applied in wider ranges to deal with more multimedia related tasks, such as speech recognition, video recognition and other multimedia event detection tasks.

[ 1] Tian D P. Exploiting PLSA model and conditional random field for refining image annotation. High Technology Letters, 2015, 21(1):78-84

[ 2] Li J, Wang J. Automatic linguistic indexing of pictures by a statistical modeling approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(9):1075-1088

[ 3] Carneiro G, Chan A, Moreno P, et al. Supervised learning of semantic classes for image annotation and retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(3):394-410

[ 4] Duygulu P, Barnard K, de Freitas N, et al. Object recognition as machine translation: learning a lexicon for a fixed image vocabulary. In: Proceedings of the 7th European Conference on Computer Vision, Copenhagen, Denmark, 2002. 97-112

[ 5] Jeon J, Lavrenko V, Manmatha R. Automatic image annotation and retrieval using cross-media relevance models. In: Proceedings of the 26th International ACM SIGIR Conference on Research and Development in Information Retrieval, Toronto, Canada, 2003. 119-126

    [ 6] Lavrenko V, Manmatha R, Jeon J. A model for learning the semantics of pictures. In: Proceedings of the Advances in Neural Information Processing Systems 16, Vancouver, Canada, 2003. 553-560

    [ 7] Feng S, Manmatha R, Lavrenko V. Multiple Bernoulli relevance models for image and video annotation. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, Washington, USA, 2004. 1002-1009

[ 8] Monay F, Gatica-Perez D. Modeling semantic aspects for cross-media image indexing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(10):1802-1817

[ 9] Blei D, Lafferty J. Correlated topic models. Annals of Applied Statistics, 2007, 1(1):17-35

    [10] Yang F, Shi F, Wang Z. An improved GMM-based method for supervised semantic image annotation. In: Proceedings of the International Conference on Intelligent Computing and Intelligent Systems, Shanghai, China, 2009. 506-510

    [11] Wang Z, Yi H, Wang J, et al. Hierarchical Gaussian mixture model for image annotation via PLSA. In: Proceedings of the 5th International Conference on Image and Graphics, Xi’an, China, 2009. 384-389

    [12] Wang C, Yan S, Zhang L, et al. Multi-label sparse coding for automatic image annotation. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, Miami, USA, 2009. 1643-1650

    [13] Wang Y, Liu X, Jia Y. Automatic image annotation with cooperation of concept-specific and universal visual vocabularies. In: Proceedings of the 16th International Conference on Multimedia Modeling, Chongqing, China, 2010. 262-272

    [14] Luo X, Kita K. Region-based image annotation using Gaussian mixture model. In: Proceedings of the 2nd International Conference on Information Technology and Software Engineering, Beijing, China, 2013. 503-510

[15] Sahbi H. A particular Gaussian mixture model for clustering and its application to image retrieval. Soft Computing, 2008, 12(7):667-676

    [16] Luszczkiewicz M, Smolka B. Application of bilateral filtering and Gaussian mixture modeling for the retrieval of paintings. In: Proceedings of the 16th International Conference on Image Processing, Cairo, Egypt, 2009. 77-80

[17] Sayad I, Martinet J, Urruty T, et al. Toward a higher-level visual representation for content-based image retrieval. Multimedia Tools and Applications, 2012, 60(2):455-482

[18] Raju L, Vasantha K, Srinivas Y. Content based image retrievals based on generalization of GMM. International Journal of Computer Science and Information Technologies, 2012, 3(6):5326-5330

    [19] Wan Y, Liu X, Tong K, et al. GMM-ClusterForest: a novel indexing approach for multi-features based similarity search in high-dimensional spaces. In: Proceedings of the 19th International Conference on Neural Information Processing, Doha, Qatar, 2012. 210-217

    [20] Dixit M, Rasiwasia N, Vasconcelos N. Adapted Gaussian models for image classification. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, Providence, USA, 2011. 937-943

[21] Celik T. Image change detection using Gaussian mixture model and genetic algorithm. Journal of Visual Communication and Image Representation, 2010, 21(8):965-974

    [22] Beecks C, Ivanescu A, Kirchhoff S, et al. Modeling image similarity by Gaussian mixture models and the signature quadratic form distance. In: Proceedings of the 13th International Conference on Computer Vision, Barcelona, Spain, 2011. 1754-1761

[23] Wang Y, Chen W, Zhang J, et al. Efficient volume exploration using the Gaussian mixture model. IEEE Transactions on Visualization and Computer Graphics, 2011, 17(11):1560-1573

[24] Inoue N, Shinoda K. A fast and accurate video semantic-indexing system using fast MAP adaptation and GMM super-vectors. IEEE Transactions on Multimedia, 2012, 14(4):1196-1205

[25] Cheung Y. Maximum weighted likelihood via rival penalized EM for density mixture clustering with automatic model selection. IEEE Transactions on Knowledge and Data Engineering, 2005, 17(6):750-761

    [26] Fellbaum C. WordNet. Theory and Applications of Ontology: Computer Applications, 2010. 231-243

[27] Cilibrasi R, Vitanyi P M. The Google similarity distance. IEEE Transactions on Knowledge and Data Engineering, 2007, 19(3):370-383

    [28] Liu D, Hua X, Yang L, et al. Tag ranking. In: Proceedings of the 18th International Conference on World Wide Web, Madrid, Spain, 2009. 351-360

    10.3772/j.issn.1006-6748.2017.02.015

    ①Supported by the National Basic Research Program of China (No.2013CB329502), the National Natural Science Foundation of China (No.61202212), the Special Research Project of the Educational Department of Shaanxi Province of China (No.15JK1038) and the Key Research Project of Baoji University of Arts and Sciences (No.ZK16047).

    ②To whom correspondence should be addressed. E-mail: tdp211@163.com

Received on May 25, 2016

Tian Dongping, born in 1981. He received his M.Sc. and Ph.D. degrees in computer science from Shanghai Normal University and the Institute of Computing Technology, Chinese Academy of Sciences in 2007 and 2014, respectively. His research interests include computer vision, machine learning and evolutionary computation.

女人被狂操c到高潮| 中文天堂在线官网| 一边摸一边抽搐一进一小说| 久久亚洲国产成人精品v| 国产精品国产三级国产av玫瑰| a级毛色黄片| av在线蜜桃| 天天躁日日操中文字幕| 特级一级黄色大片| 99在线人妻在线中文字幕| 别揉我奶头 嗯啊视频| 亚洲精品日韩av片在线观看| 国产人妻一区二区三区在| 一级黄色大片毛片| 日本av手机在线免费观看| 一级毛片aaaaaa免费看小| 久久精品久久久久久噜噜老黄 | av福利片在线观看| 亚洲最大成人av| 国产又黄又爽又无遮挡在线| 少妇被粗大猛烈的视频| 国产成人aa在线观看| 欧美区成人在线视频| 欧美一区二区亚洲| 一级爰片在线观看| 精品国产三级普通话版| 国产69精品久久久久777片| 日韩人妻高清精品专区| 男人的好看免费观看在线视频| 亚洲欧美日韩东京热| a级毛片免费高清观看在线播放| 久久99精品国语久久久| 一区二区三区免费毛片| 日韩成人伦理影院| 欧美性猛交黑人性爽| 啦啦啦韩国在线观看视频| 97超视频在线观看视频| 直男gayav资源| 少妇被粗大猛烈的视频| 久久人人爽人人片av| 人妻夜夜爽99麻豆av| 日韩高清综合在线| 搞女人的毛片| 欧美zozozo另类| 成人美女网站在线观看视频| 久久亚洲精品不卡| 婷婷色av中文字幕| 日韩一本色道免费dvd| 精品久久久久久久末码| 精品少妇黑人巨大在线播放 | 国产免费又黄又爽又色| 亚洲国产精品专区欧美| 成人二区视频| 国产三级在线视频| 国产精品国产三级国产专区5o | 热99re8久久精品国产| 日本黄大片高清| 日韩高清综合在线| 国产老妇女一区| 欧美成人一区二区免费高清观看| 国产精品一区二区三区四区免费观看| 青春草亚洲视频在线观看| 赤兔流量卡办理| 综合色av麻豆| 少妇的逼水好多| 有码 亚洲区| 日韩视频在线欧美| 国产成人福利小说| 国产欧美日韩精品一区二区| 国产高清视频在线观看网站| 国产又黄又爽又无遮挡在线| 搡女人真爽免费视频火全软件| 桃色一区二区三区在线观看| 91久久精品电影网| 一本一本综合久久| 精品熟女少妇av免费看| 毛片女人毛片| 人人妻人人澡人人爽人人夜夜 | 日本wwww免费看| 精品久久久久久久久久久久久| 99久久精品热视频| 精品久久久久久久久久久久久| 久久久久久伊人网av| 国产乱人偷精品视频| 久久久久久伊人网av| 高清在线视频一区二区三区 | 一边亲一边摸免费视频| 午夜久久久久精精品| 色噜噜av男人的天堂激情| 亚洲国产成人一精品久久久| 97人妻精品一区二区三区麻豆| 欧美3d第一页| 少妇人妻一区二区三区视频| 蜜桃亚洲精品一区二区三区| 3wmmmm亚洲av在线观看| 国产91av在线免费观看| 五月伊人婷婷丁香| 极品教师在线视频| 精华霜和精华液先用哪个| 欧美成人免费av一区二区三区| 哪个播放器可以免费观看大片| 欧美极品一区二区三区四区| 最新中文字幕久久久久| 免费黄网站久久成人精品| 亚洲精品成人久久久久久| 欧美成人免费av一区二区三区| 精品国产三级普通话版| 亚洲国产精品国产精品| 国产私拍福利视频在线观看| 亚洲天堂国产精品一区在线| 美女黄网站色视频| 夜夜爽夜夜爽视频| 床上黄色一级片| 日韩视频在线欧美| 床上黄色一级片| 国产又色又爽无遮挡免| 免费看美女性在线毛片视频| 日韩,欧美,国产一区二区三区 | 国产精品人妻久久久久久| 久久久精品欧美日韩精品| 两性午夜刺激爽爽歪歪视频在线观看| 久久6这里有精品| av福利片在线观看| 18禁裸乳无遮挡免费网站照片| 国产女主播在线喷水免费视频网站 | 免费电影在线观看免费观看| 啦啦啦观看免费观看视频高清| 色哟哟·www| 在线播放无遮挡| 日日干狠狠操夜夜爽| 人人妻人人澡人人爽人人夜夜 | 久久久久久久久久黄片| 日韩一区二区视频免费看| 日韩强制内射视频| 看免费成人av毛片|