
    Exploiting PLSA model and conditional random field for refining image annotation*

High Technology Letters, 2015, Issue 1

    Tian Dongping(田東平)

(*Institute of Computer Software, Baoji University of Arts and Sciences, Baoji 721007, P.R.China) (**Institute of Computational Information Science, Baoji University of Arts and Sciences, Baoji 721007, P.R.China)


This paper presents a new method for refining image annotation by integrating probabilistic latent semantic analysis (PLSA) with conditional random field (CRF). First, a PLSA model with asymmetric modalities is constructed to predict a candidate set of annotations with confidence scores, and then the semantic relationships among the candidate annotations are modeled by a conditional random field. In the CRF, the confidence scores generated by the PLSA model and the Flickr distance between pairwise candidate annotations are taken as local evidence and contextual potentials, respectively. The novelty of our method mainly lies in two aspects: exploiting PLSA to predict a candidate set of annotations with confidence scores, and using CRF to further explore the semantic context among the candidate annotations for precise image annotation. To demonstrate the effectiveness of the proposed method, experiments are conducted on the standard Corel dataset, and the results compare favorably with several state-of-the-art approaches.

automatic image annotation, probabilistic latent semantic analysis (PLSA), expectation-maximization, conditional random field (CRF), Flickr distance, image retrieval

    0 Introduction

With the prevalence of digital imaging devices such as webcams, phone cameras and digital cameras, the number of accessible images is growing at an exponential speed. Thus, how to make the best use of these resources becomes a pressing problem. An ideal image retrieval system would establish an exact correspondence between image visual content and semantic description, so that many well-developed text retrieval approaches could be used directly to retrieve images by ranking the connection between image annotations and text queries. Therefore, efficient image annotation is a key problem for image retrieval. The traditional method is to let people manually annotate images with keywords. However, this method is labor-intensive and time-consuming. Furthermore, the annotation result is subjective, varying across annotators, and is difficult to extend to large image datasets. To address these limitations, automatic image annotation (AIA) has emerged as an important topic and has become an active research area in recent years. Its goal is to automatically assign to an image keywords that well describe the content contained in it.

In recent years, many methods have been developed for AIA, and most of them can be roughly divided into two categories, viz., classification-based methods and probabilistic modeling methods. Representative work in the former includes automatic linguistic indexing of pictures[1], a content-based annotation method with SVM[2], and an asymmetrical support vector machine-based MIL algorithm[3]. The probabilistic modeling methods include the translation model (TM)[4], the cross-media relevance model (CMRM)[5], the continuous relevance model (CRM)[6], the multiple-Bernoulli relevance model (MBRM)[7] and the latent aspect model PLSA[8,9]. All of the aforementioned annotation methods achieve a certain degree of success compared to manual annotation, but they are still far from satisfactory because they make little effort to exploit the semantic context and correlations among annotation keywords. Recently, some researchers have proposed refining image annotation by taking word correlation into account. Jin, et al.[10] carried out pioneering work on annotation refinement based on the knowledge of WordNet. This method, however, can only achieve limited success as it entirely ignores the visual content of images. In Ref.[11], Wang, et al. apply a random walk with restarts model to refine candidate annotations by integrating word correlations with the original candidate annotation confidences. They subsequently propose a content-based approach that formulates annotation refinement as a Markov process[12]. In addition, Wang, et al.[13] employ conditional random field to refine image annotation by incorporating semantic relations between annotation words. More recently, Liu, et al.[14] rank image tags according to their relevance with respect to the associated images, using tag similarity and image similarity in a random walk model. Xu, et al.[15] come up with a new graphical model termed regularized latent Dirichlet allocation (rLDA) for tag refinement.
Zhu, et al.[16] put forward an efficient iterative approach to image tag refinement that pursues low rank, content consistency, tag correlation and error sparsity by solving a constrained yet convex optimization problem. Besides, several nearest-neighbor-based methods have also been proposed for refining image annotation in recent years[17,18].

As briefly reviewed above, most of these approaches achieve encouraging performance and motivate us to explore better image annotation methods building on their experience and insights. Hence, in this paper a new method for refining image annotation is presented based on a fusion of probabilistic latent semantic analysis and conditional random field (PLSA-CRF). For a given image, a PLSA model with asymmetric modalities is first constructed to predict a candidate set of annotations with confidence scores, and then the semantic relationships between these keywords are modeled with a conditional random field (CRF), in which each vertex indicates the final decision (true/false) on a candidate annotation and the refined annotation is given by inferring the most likely states of these vertices. The method is evaluated on the standard Corel dataset, and the experimental results are superior or highly competitive to several state-of-the-art approaches. To the best of our knowledge, this study is the first attempt to integrate PLSA with conditional random field in the task of refining image auto-annotation.

The rest of the paper is organized as follows. Section 1 presents how PLSA is applied to predict a candidate set of annotations with confidence scores. Section 2 elaborates the PLSA-CRF model, in which the confidence scores generated by the PLSA model and the concept similarity between pairwise candidate annotations are taken as local evidence and contextual potentials, respectively. Experimental results on the standard Corel dataset are reported and analyzed in Section 3. Finally, the paper concludes with some important conclusions and future work in Section 4.

    1 PLSA model

PLSA[19] is a statistical latent class model which introduces a hidden variable (latent aspect) z_k into the generative process of each element x_j in a document d_i. Given this unobservable variable z_k, each occurrence x_j is independent of the document it belongs to, which corresponds to the following joint probability:

P(d_i, x_j) = P(d_i) Σ_k P(x_j | z_k) P(z_k | d_i)    (1)

The model parameters of PLSA are the two conditional distributions P(x_j|z_k) and P(z_k|d_i). P(x_j|z_k) characterizes each aspect and remains valid for documents outside the training set. On the other hand, P(z_k|d_i) relates only to the specific documents and cannot carry any prior information over to an unseen document. An EM algorithm is used to estimate the parameters by maximizing the log-likelihood of the observed data:

L = Σ_i Σ_j n(d_i, x_j) log P(d_i, x_j)    (2)

where n(d_i, x_j) is the count of element x_j in document d_i. The steps of the EM algorithm can be succinctly described as follows.

E-step. The conditional distribution P(z_k|d_i, x_j) is computed from the previous estimate of the parameters:

P(z_k | d_i, x_j) = P(x_j | z_k) P(z_k | d_i) / Σ_l P(x_j | z_l) P(z_l | d_i)    (3)

M-step. The parameters P(x_j|z_k) and P(z_k|d_i) are updated with the new expected values P(z_k|d_i, x_j):

P(x_j | z_k) = Σ_i n(d_i, x_j) P(z_k | d_i, x_j) / Σ_m Σ_i n(d_i, x_m) P(z_k | d_i, x_m)    (4)

P(z_k | d_i) = Σ_j n(d_i, x_j) P(z_k | d_i, x_j) / n(d_i)    (5)

If one of the parameters (P(x_j|z_k) or P(z_k|d_i)) is known, the other can be inferred by the folding-in method, which updates the unknown parameters while keeping the known parameters fixed, so as to maximize the likelihood with respect to the previously trained parameters. Given the visual features v(d_new) of a new image, the conditional probability distribution P(z_k|d_new) can be inferred with the previously estimated model parameters P(v|z_k), and the posterior probability of words can then be computed by

P(w | d_new) = Σ_k P(w | z_k) P(z_k | d_new)    (6)

    From Eq.(6), a candidate set of annotations with confidence scores (i.e., the posterior probabilities of words) can be easily obtained.
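The EM procedure of Eqs.(3)-(5) is compact enough to sketch directly. The following is a minimal NumPy illustration of the iteration, not the authors' implementation; the matrix shapes, iteration count and the small smoothing constant are our own choices:

```python
import numpy as np

def plsa_em(counts, K, n_iter=100, seed=0):
    """EM for PLSA: counts is the D x X matrix n(d_i, x_j);
    returns P(x|z) of shape K x X and P(z|d) of shape D x K."""
    rng = np.random.default_rng(seed)
    D, X = counts.shape
    # random normalized initialization of the two conditional distributions
    p_x_z = rng.random((K, X)); p_x_z /= p_x_z.sum(1, keepdims=True)
    p_z_d = rng.random((D, K)); p_z_d /= p_z_d.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step (Eq.3): P(z|d,x) proportional to P(x|z) P(z|d)
        post = p_z_d[:, :, None] * p_x_z[None, :, :]        # D x K x X
        post /= post.sum(1, keepdims=True) + 1e-12
        # M-step (Eqs.4-5): reweight the posteriors by the counts n(d,x)
        w = counts[:, None, :] * post
        p_x_z = w.sum(0); p_x_z /= p_x_z.sum(1, keepdims=True) + 1e-12
        p_z_d = w.sum(2); p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
    return p_x_z, p_z_d
```

Folding in a new image then amounts to running the same iteration with p_x_z held fixed and only p_z_d updated.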

    2 PLSA-CRF model for refining image annotation

    2.1 Concept similarity measure

Measuring the similarity between pairwise concepts related to an image from the viewpoint of computer vision is, in fact, still a difficult problem in multimedia information processing. The commonly used methods include WordNet[20] and the normalized Google distance (NGD)[21]. Comparing their definitions, it is easy to see that NGD emphasizes the measurement of contextual relations, while WordNet focuses on the semantic meaning of the concept itself. Moreover, both of them build word correlations only from the textual descriptions of images and do not fully take into account the visual information of the corresponding images, which also plays a crucial role in precise image auto-annotation. So in this paper, the simple yet very efficient Flickr distance (FD)[22] is adopted to measure the similarity between two concepts C1 and C2, calculated as the average square root of the Jensen-Shannon (JS) divergence between the corresponding visual language models as follows (K denotes the total number of latent topics).

    (7)

    (8)
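The quantity at the heart of the Flickr distance is the Jensen-Shannon divergence between two visual language models. As a hedged illustration, here is a sketch of JS divergence over plain discrete distributions (not the authors' code; base-2 logarithm is our choice, giving values in [0, 1]):

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions,
    the quantity averaged (via its square root) in the Flickr distance."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)                      # mixture distribution
    def kl(a, b):
        mask = a > 0                       # 0 * log(0) terms contribute 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

JS divergence is symmetric and bounded, which is what makes the square-root form usable as a distance between concepts.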

2.2 Conditional random field

Conditional random field (CRF) has been widely used in the computer vision community[23-25]. A CRF can be viewed as an undirected graphical model in which each vertex represents a random variable whose distribution is to be inferred, and each edge represents a dependency between two random variables. In a CRF, the distribution of each discrete random variable y_i in the graph is conditioned on an input sequence x. Mathematically, the conditional probability of y = (y_1, y_2, …, y_n) given x is formulated as

P(y | x) = (1/Z(x)) exp( Σ_i ω1(y_i, x) + Σ_{i,j} ω2(y_i, y_j) )    (9)

    where

    (10)

    (11)

where ω1 indicates the local evidence of the state of y_i, which depends on the image observation x, and ω2 is a prior parameter that indicates the contextual potential between the states of the two variables y_i and y_j. Here, the local evidence is taken as the logarithm of the confidence scores provided by the PLSA model.

    (12)

    (13)

In this simplified CRF model, the weight parameters Φ = {α1, α2} to be learned are utilized to control the balance between local evidence and contextual potential.

2.3 Parameter estimation and refining image annotation

As we know, the task of the CRF for image annotation is to infer the most probable labels given an input image and the model parameters learned from the training set. As can be seen from the above description, it is difficult to choose the weight parameters manually since the local evidence and the contextual potential come from different sources. Following the work in Ref.[13], a similar learning algorithm is adopted to estimate the parameters. The whole process for refining image annotation by fusing PLSA with conditional random field, as well as the weight parameter estimation, is described in Algorithm 1.

Algorithm 1  PLSA-CRF for refining image annotation
Training
1. Input: the training image set T and validation image set V
2. Train the PLSA model on T
3. Select candidate annotations with the top confidence scores generated by the trained PLSA on V
4. Construct the indicator vector y for all images in V
5. Compute the local evidence by Eq.(12) as well as the contextual potentials of the CRF by Eq.(13)
6. Learn Φ by maximizing the log posterior of the following equation with the steepest gradient descent algorithm: L(Φ) = Σ_k log P(y^k | x^k) − α1^2/(2σ^2) − α2^2/(2σ^2)
Testing
1. Input: a test image I
2. Generate candidate annotations of I by the trained PLSA
3. Construct the corresponding indicator vector
4. Infer the indicator variables y*_i = argmax_{y_i} P(y_i | x; Φ*), y_i ∈ {0, 1}
5. Output: refined annotation results

Note that the indicator vector in the pseudo-code described above is constructed in such a way that the variable y_i is true if the corresponding concept appears among the keywords with the top 10 confidence scores and also in the ground-truth labels; otherwise it is false.
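Because only about ten candidate annotations are kept per image, MAP inference over the binary indicator vector can even be carried out exhaustively. The sketch below assumes hypothetical inputs of our own naming: local stands in for the PLSA log-confidence evidence, pairwise for the FD-based contextual potentials, and a1, a2 for the learned weights Φ = {α1, α2}:

```python
import itertools

def map_inference(local, pairwise, a1=1.0, a2=1.0):
    """Exhaustive MAP inference over the binary indicator vector:
    local[i] is the evidence for selecting candidate i (e.g. a
    log-confidence), pairwise[i][j] a potential rewarding the joint
    selection of compatible candidates i and j."""
    n = len(local)
    best, best_score = None, float("-inf")
    for y in itertools.product([0, 1], repeat=n):
        # unary term: evidence of the selected candidates
        score = a1 * sum(local[i] for i in range(n) if y[i])
        # pairwise term: contextual potential between selected pairs
        score += a2 * sum(pairwise[i][j]
                          for i in range(n) for j in range(i + 1, n)
                          if y[i] and y[j])
        if score > best_score:
            best, best_score = y, score
    return best
```

The 2^n enumeration is only feasible because n is small; a general CRF would require belief propagation or graph cuts instead.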

    3 Experimental results and analysis

In this section, experimental results and analysis of the proposed PLSA-CRF are reported. The experiment is conducted on the Corel5k dataset comprising 5000 images, of which 4500 images are used as the training set and the remaining 500 as the testing set. Features similar to Ref.[6] are used, since the focus of this paper is not on image feature selection. For fair comparison, each image is divided into a set of 32×32 blocks, and a 36-dimensional feature vector is extracted for each block, consisting of 24-dimensional color features computed over 8 quantized colors and 3 Manhattan distances, and 12-dimensional texture features computed over 3 scales and 4 orientations. As a result, each image is represented as a bag of features, i.e., a set of 36-dimensional vectors. These block features are then clustered by the k-means algorithm into discrete clusters, which are regarded as visual words. The clustering process thus generates a visual-word vocabulary describing different local patches in images, whose size is determined by the number of clusters. By mapping all blocks to visual words, each image can be represented as a bag-of-visual-words (or bag-of-visterms). Similar to Ref.[8], the size of the bag-of-visual-words vocabulary is set to 1000 in our experiment.
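The block-quantization step described above can be sketched as follows, assuming a codebook of centroids has already been fitted by k-means over all training blocks (the function and variable names here are ours, for illustration):

```python
import numpy as np

def bag_of_visual_words(block_features, codebook):
    """Quantize an image's block features (n_blocks x d) against a
    visual-word codebook (n_words x d centroids, e.g. 1000 x 36 from
    k-means) and return the bag-of-visual-words histogram."""
    # squared Euclidean distance from every block to every centroid
    d2 = ((block_features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    # each block is assigned to its nearest visual word
    words = d2.argmin(axis=1)
    return np.bincount(words, minlength=len(codebook))
```

The resulting histogram is the fixed-length image representation fed to the PLSA model.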

In addition, the visual language model is constructed to calculate the Flickr distance so as to measure the semantic correlation between annotation keywords. Without loss of generality, precision and recall metrics are utilized to evaluate the image annotation results. Furthermore, the top_N precision and coverage rate[26] are adopted to measure annotation performance: the top_N precision measures the precision of the top_N ranked annotations for one image, whereas the top_N coverage rate is defined as the percentage of images that are correctly annotated by at least one word among the first N ranked annotations. They are defined as

top_N precision = Σ_{i∈T} precision(i, N) / (N · |T|)    (14)

top_N coverage = Σ_{i∈T} coverage(i, N) / |T|    (15)

where precision(i, N) denotes the number of correct annotations among the top_N ranked annotations for image i, T is the test image set, and |T| denotes the size of T. coverage(i, N) judges whether image i contains correct annotations among the top_N ranked ones: if at least one correct annotation of image i belongs to the top_N annotations, then coverage(i, N) is set to 1, otherwise to 0. To evaluate the performance of the final annotations, the precision and coverage rate are adopted together in our experiment.
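Eqs.(14) and (15) translate directly into code. A small sketch with hypothetical input structures of our own choosing (a dict mapping each image id to its ranked annotation list, and a dict mapping it to the set of ground-truth keywords):

```python
def topn_precision_and_coverage(ranked, ground_truth, N):
    """Mean top-N precision (Eq.14) and coverage rate (Eq.15) over a
    test set. ranked: image id -> ranked keyword list;
    ground_truth: image id -> set of correct keywords."""
    T = list(ranked)
    # Eq.(14): correct annotations in the top N, averaged over N*|T|
    prec = sum(len(set(ranked[i][:N]) & ground_truth[i])
               for i in T) / (N * len(T))
    # Eq.(15): fraction of images with at least one correct top-N word
    cov = sum(1 for i in T
              if set(ranked[i][:N]) & ground_truth[i]) / len(T)
    return prec, cov
```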

    3.1 Comparison of different measurements

To demonstrate the advantage of the Flickr distance in the CRF model over WordNet and the normalized Google distance (NGD), we use WordNet, NGD and FD respectively to define the contextual potentials for the proposed conditional random field; the corresponding results based on the complete set of all 260 words are illustrated in Fig.1. It is easy to see in Fig.1(a) that the top_N precision descends gradually as N increases for all three measures. To be specific, PLSA-CRF based on the Flickr distance obtains precision improvements of 8%, 23%, 14%, 24%, 22%, 16% and 18% over the WordNet-based variant, and of 1%, 17%, 9%, 11%, 15%, 13% and 15% over the NGD-based variant. Meanwhile, it is worth noting that the top_N coverage rate displayed in Fig.1(b) increases gradually as N varies from 3 to 9 for the three approaches. These facts suggest that the conditional random field based on the Flickr distance is clearly superior to the other two methods. The reason is that FD is more precise for visual-domain concepts: it captures the visual relationship between concepts rather than their co-occurrence in text search results.

Fig.1 Performance comparison of top_N precision and coverage rate

    3.2 Comparison with state-of-the-art results

We use MATLAB 7.0 to implement the proposed PLSA-CRF model. The experiments are carried out on a personal computer with a 1.80GHz Intel Core Duo CPU and 2.0GB memory running Microsoft Windows XP Professional. To validate the effectiveness of PLSA-CRF, we make a direct comparison with several previous approaches[4-9], excluding RVM-CRF[13] because its experimental results cannot be obtained directly from the literature. Similarly, we compute the recall and precision of every word in the test set and use the mean of these values to summarize model performance. The experimental results listed in Table 1 are based on two sets of words: the subset of the 49 best words and the complete set of all 260 words that occur in the training set. From Table 1, it is easy to see that our PLSA-CRF model outperforms all the others, especially the first three approaches.

    Table 1 Performance comparison of AIA on Corel5k dataset

In addition, Table 2 presents some examples of the annotations (only four cases are listed here due to limited space) generated by RVM-CRF and PLSA-CRF respectively. As can be seen from Table 2, the performance of PLSA-CRF is superior or highly competitive to that of RVM-CRF, which further demonstrates the effectiveness of the proposed method.

    Table 2 Illustration of some annotation results obtained by RVM-CRF and PLSA-CRF

To further illustrate the effect of PLSA-CRF, mean average precision (mAP) is also applied as a metric to evaluate the performance of single-word retrieval. Here, we only compare our model with CMRM, CRM, MBRM and PLSA-FUSION, because the mAP of the other methods cannot be obtained directly. As shown in Table 3, our model is clearly superior to CMRM, CRM and PLSA-FUSION. Compared with MBRM, it also obtains 7% and 3% improvements on all 260 words and on the words with positive recall, respectively.

    Table 3 Ranked retrieval results based on one word queries

    4 Conclusion

This paper presents a novel method for refining image annotation by integrating probabilistic latent semantic analysis with conditional random field. In particular, the confidence scores generated by the PLSA model and the Flickr distance (rather than WordNet or NGD) between two candidate annotations are applied to define the unary and binary potential functions of the CRF, respectively. The experimental results on the Corel5k dataset show that our model is superior or highly competitive to several state-of-the-art approaches. In the future, we plan to introduce semi-supervised learning into our approach to utilize labeled and unlabeled data simultaneously. In addition, we intend to employ other, more complicated real-world image datasets, such as NUS-WIDE and MIRFLICKR, to further evaluate the scalability and robustness of PLSA-CRF comprehensively.

[1] Li J, Wang J. Automatic linguistic indexing of pictures by a statistical modeling approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(9): 1075-1088

    [2] Cusano C, Ciocca G, Schettini R. Image annotation using SVM. In: Proceedings of the International Society for Optical Engineering, California, USA, 2003. 330-338

    [3] Yang C, Dong M, Hua J. Region-based image annotation using asymmetrical support vector machine-based multiple-instance learning. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, New York, USA, 2006. 2057-2063

    [4] Duygulu P, Barnard K, De Freitas J, et al. Object recognition as machine translation: learning a lexicon for a fixed image vocabulary. In: Proceedings of the 7th European Conference on Computer Vision, Copenhagen, Denmark, 2002. 97-112

    [5] Jeon L, Lavrenko V, Manmantha R. Automatic image annotation and retrieval using cross-media relevance models. In: Proceedings of the 26th International ACM SIGIR Conference on Research and Development in Information Retrieval, Toronto, Canada, 2003. 119-126

    [6] Lavrenko V, Manmatha R, Jeon J. A model for learning the semantics of pictures. In: Proceedings of the 17th International Conference on the Advances in Neural Information Processing Systems, Vancouver, Canada, 2003. 553-560

    [7] Feng S, Manmatha R, Lavrenko V. Multiple Bernoulli relevance models for image and video annotation. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, Washington, USA, 2004. 1002-1009

[8] Monay F, Gatica-Perez D. Modeling semantic aspects for cross-media image indexing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(10): 1802-1817

[9] Li Z, Shi Z, Liu X, et al. Fusing semantic aspects for image annotation and retrieval. Journal of Visual Communication and Image Representation, 2010, 21(8): 798-805

    [10] Jin Y, Khan L, Wang L, et al. Image annotations by combining multiple evidence and wordnet. In: Proceedings of the 13th International Conference on Multimedia, Singapore, 2005. 706-715

    [11] Wang C, Jing F, Zhang L, et al. Image annotation refinement using random walk with restarts. In: Proceedings of the 14th International Conference on Multimedia, California, USA, 2006. 647-650

    [12] Wang C, Jing F, Zhang L, et al. Content-based image annotation refinement. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, Minnesota, USA, 2007. 1-8

    [13] Wang Y, Gong S. Refining image annotation using contextual relations between words. In: Proceedings of the 6th International Conference on Image and Video Retrieval, Amsterdam, Netherlands, 2007. 425-432

    [14] Liu D, Hua X, Yang L, et al. Tag ranking. In: Proceedings of the 18th International Conference on World Wide Web, Madrid, Spain, 2009. 351-360

    [15] Xu H, Wang J, Hua X, et al. Tag refinement by regularized LDA. In: Proceedings of the 17th International Conference on Multimedia, Beijing, China, 2009. 573-576

    [16] Zhu G, Yan S, Ma Y. Image tag refinement towards low-rank, content-tag prior and error sparsity. In: Proceedings of the 18th International Conference on Multimedia, Firenze, Italy, 2010. 461-470

    [17] Makadia A, Pavlovic V, Kumar S. A new baseline for image annotation. In: Proceedings of the 10th European Conference on Computer Vision, Marseille, France, 2008. 316-329

    [18] Guillaumin M, Mensink T, Verbeek J, et al. TagProp: discriminative metric learning in nearest neighbor models for image auto-annotation. In: Proceedings of the 12th International Conference on Computer Vision, Kyoto, Japan, 2009. 309-316

[19] Hofmann T. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 2001, 42(1-2): 177-196

    [20] Miller G, Fellbaum C. WordNet: An electronic lexical database. Cambridge: MIT press, 1998

[21] Cilibrasi R, Vitanyi P. The Google similarity distance. IEEE Transactions on Knowledge and Data Engineering, 2007, 19(3): 370-383

    [22] Wu L, Hua X, Yu N, et al. Flickr distance. In: Proceedings of the 16th International Conference on Multimedia, Vancouver, Canada, 2008. 31-40

    [23] Li W, Sun M. Semi-supervised learning for image annotation based on conditional random fields. In: Proceedings of the 5th International Conference on Image and Video Retrieval, Arizona, USA, 2006. 463-472

    [24] Xu X, Jiang Y, Peng L, et al. Ensemble approach based on conditional random field for multi-label image and video annotation. In: Proceedings of the 19th International Conference on Multimedia, Arizona, USA, 2011. 1377-1380

    [25] Huang Q, Han M, Wu B, et al. A hierarchical conditional random field model for labeling and segmenting images of street scenes. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, Colorado, USA, 2011. 1953-1960

    [26] Li J, Wang J. Real-time computerized annotation of pictures. In: Proceedings of the 14th International Conference on Multimedia, California, USA, 2006. 911-920

    Tian Dongping, born in 1981. He received his M.S. and Ph.D. degrees from Shanghai Normal University and Institute of Computing Technology, Chinese Academy of Sciences in 2007 and 2013, respectively. His main research interests include computer vision and machine learning.

    10.3772/j.issn.1006-6748.2015.01.011

    *Supported by the National Basic Research Priorities Programme (No. 2013CB329502), the National High Technology Research and Development Programme of China (No. 2012AA011003), the Natural Science Basic Research Plan in Shanxi Province of China (No. 2014JQ2-6036) and the Science and Technology R&D Program of Baoji City (No. 203020013, 2013R2-2).

*To whom correspondence should be addressed. E-mail: tdp211@163.com, tiandp@ics.ict.ac.cn. Received on Nov. 4, 2013
