
    Density peaks clustering based integrate framework for multi-document summarization

Baoyan Wang a, Jian Zhang b,e,*, Yi Liu c,d, Yuexian Zou a

    aADSPLAB,School of ECE,Peking University,Shenzhen,518055,China

    bShenzhen Raisound Technologies,Co.,Ltd,China

    cPKU Shenzhen Institute,China

    dPKU-HKUST Shenzhen-Hong Kong Institute,China

    eSchool of Computer Science and Network Security Dongguan University of Technology,China


    A R T I C L E I N F O

    Article history:

    Received 14 October 2016

    Accepted 25 December 2016

    Available online 20 February 2017

Keywords: Multi-document summarization; Integrated score framework; Density peaks clustering; Sentence ranking

We present a novel unsupervised integrated score framework that generates generic extractive multi-document summaries by ranking sentences with a dynamic programming (DP) strategy. Whereas the cluster-based methods proposed by other researchers tend to ignore the informativeness of words when generating summaries, our framework comprehensively considers the relevance, diversity, informativeness, and length constraint of sentences. We apply Density Peaks Clustering (DPC) to obtain relevance and diversity scores of sentences simultaneously. Our framework produces the best performance on DUC2004, with a ROUGE-1 score of 0.396, a ROUGE-2 score of 0.094, and a ROUGE-SU4 score of 0.143, outperforming a series of popular baselines such as DUC Best, FGB [7], and BSTM [10].

© 2017 Production and hosting by Elsevier B.V. on behalf of Chongqing University of Technology. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

    1.Introduction

With the explosive growth of information on the Internet, consumers are flooded with all kinds of electronic documents, e.g., news, emails, tweets, and blogs. Now more than ever, there is an urgent demand for multi-document summarization (MDS), which aims at generating a concise and informative version of a large collection of documents, helping consumers grasp the comprehensive information of the original documents quickly. Most existing studies are extractive methods, which focus on extracting salient sentences directly from the given materials without any modification and simply combining them to form a summary of the multi-document set. In this article, we study generic extractive summarization of multiple documents. An effective summarization method properly considers four important issues [1,2]:

· Relevance: a good summary should be as relevant as possible to the primary themes of the given documents.

· Diversity: a good summary should be minimally redundant.

· Informativeness: the sentences of a good summary should carry as much information as possible.

· Length constraint: the summary should be extracted within a length limit.

Extractive summarization methods fall into two categories: supervised methods that rely on provided document-summary pairs, and unsupervised ones based on properties derived from document clusters. Supervised methods treat multi-document summarization as a classification/regression problem [3]. They require a huge amount of annotated data, which is costly and time-consuming to obtain. Unsupervised approaches, in contrast, are enticing: they tend to score sentences based on semantic groupings extracted from the original documents. Researchers often select linguistic and statistical features to estimate the importance of the original sentences and then rank them.

Inspired by the success of cluster-based methods, especially the density peaks clustering (DPC) algorithm in bioinformatics, bibliometrics, and pattern recognition [4], in this article we propose a novel method that ranks sentences with DPC and extracts those with higher relevance, more informativeness, and better diversity under a length limit. First, thanks to DPC, it is not necessary to provide the number of clusters in advance or to run a post-processing step to remove redundancy. Second, we put forward an integrated score framework to rank sentences and employ a dynamic programming solution to select salient sentences.

This article is organized as follows: Section 2 describes related work and our motivation in detail. Section 3 presents our proposed multi-document summarization framework and the summary generation process based on dynamic programming. Sections 4 and 5 evaluate the algorithm on the benchmark dataset DUC2004 for the multi-document summarization task. We then conclude and give some directions for future research.

    2.Related work

Various extractive multi-document summarization methods have been proposed. For supervised summarization, different models have been trained for the task, such as hidden Markov models, conditional random fields, and REGSUM [5]. Sparse coding [2] was introduced into document summarization owing to its success in image processing. These supervised methods require a large amount of labeled data as a precondition. The annotated data is chiefly available for documents that closely match the trained summarization model, so the trained model need not generate a satisfactory summary when the input documents do not parallel its training data. Furthermore, when consumers change the aim of summarization or the characteristics of the documents, the training data must be reconstructed and the model retrained.

There are also numerous unsupervised extraction-based summarization methods in the literature. Most of them calculate salience scores for the sentences of the original documents, rank the sentences by those scores, and use the top-scoring sentences to generate the final summary. Since clustering is the most essential unsupervised partitioning method, it is natural to apply clustering algorithms to multi-document summarization. Cluster-based methods tend to group sentences and then rank them by their salience scores; many combine clustering with other ranking algorithms. Wan et al. [6] clustered sentences first, applied the HITS algorithm with clusters as hubs and sentences as authorities, and then ranked and selected salient sentences by the resulting authority scores. Wang et al. [7] cast cluster-based summarization as minimizing the Kullback-Leibler divergence between the original documents and the terms reconstructed by the model. Cai et al. [8] ranked and clustered sentences simultaneously so that the two tasks enhance each other. Other typical methods include graph-based ranking, LSA-based, NMF-based, submodular-function-based, and LDA-based methods. Wang et al. [9] used symmetric non-negative matrix factorization (SNMF) to softly cluster the sentences of the documents into groups and selected salient sentences from each cluster to generate the summary. Wang et al. [10] used a generative model and provided an efficient way to model the Bayesian probability of selecting salient sentences given themes. Wang et al. [11] combined summarization results from different single-summarization systems. Besides, some papers considered reducing redundancy in the summary, e.g., MMR [12]. To eliminate redundancy among sentences, some systems select the most important sentences first, calculate the similarity between the previously selected sentences and the next candidate, and add the candidate to the summary only if it contributes sufficient new information.

We follow the cluster-based idea in this article. Different from previous work, we propose an integrated weighted score framework that orders sentences by evaluating their salience scores while removing redundancy from the summary, and we use a dynamic programming solution for optimal salient-sentence selection.

    3.Proposed method

In this section, we outline our proposed method as illustrated in Fig. 1. We show a novel way of handling the multi-document summarization task using the DPC algorithm. All documents are first represented as a set of sentences, the raw input of the framework. After the corpus is preprocessed, DPC is employed to obtain relevance scores and diversity scores of sentences simultaneously. Meanwhile, the number of effective words is used to obtain the informativeness score of each sentence, and a length constraint ensures that the extracted sentences have a proper length. In the end, an integrated scoring framework ranks the sentences, and the summary is generated with a dynamic programming algorithm. The DPC-based summarization method mainly includes the following steps:

    3.1.Pre-processing

Before applying our method to the text data, a preprocessing module is indispensable. Given a corpus of English documents C_corpus = {d_1, d_2, …, d_i, …, d_cor}, where d_i denotes the i-th document in C_corpus and the documents share the same or similar topics, the corpus is split into individual sentences S = {s_1, s_2, …, s_i, …, s_sen}, where s_i denotes the i-th sentence in C_corpus. We then use a predefined stop-word list to remove all stop words and Porter's stemming algorithm to stem the remaining words.
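As a concrete illustration (our own sketch, not the authors' code), the preprocessing step can be approximated as follows. The tiny stop-word list and the `crude_stem` suffix-stripper are placeholders; a real pipeline would use the paper's predefined stop-word list and an off-the-shelf Porter stemmer.

```python
import re

# Hypothetical, abbreviated stop-word list (the paper uses a predefined one).
STOP_WORDS = {"the", "a", "an", "of", "in", "is", "are", "to", "and"}

def crude_stem(word):
    # Placeholder for Porter's algorithm: strip a few common suffixes.
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(document):
    # Split into sentences, lowercase, drop stop words, stem the rest.
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    processed = []
    for s in sentences:
        tokens = re.findall(r"[a-z]+", s.lower())
        processed.append([crude_stem(t) for t in tokens if t not in STOP_WORDS])
    return processed
```

Each document in C_corpus would be passed through `preprocess` before building the sentence similarity matrix.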

    3.2.Sentence estimation factors

    3.2.1.Relevance score

Fig. 1. The outline of our proposed framework.

In this section, we present a relevance score measuring the extent to which a sentence is relevant to the remaining sentences in the documents. One underlying assumption of DPC is that cluster centers are characterized by a higher density than their neighbors. Inspired by this assumption, we deem a sentence more relevant and more representative when it possesses a higher density, i.e., owns more similar sentences. Since the input of the DPC algorithm is a similarity matrix over sentences, sentences are first represented in a bag-of-words vector space model, and the cosine similarity formula is applied to compute the similarity between sentences. Terms are weighted with a binary scheme, in which the term weight W_ij is set to 1 if term t_j appears at least once in the sentence, because terms repeat far less within sentences than within documents. We thus define the relevance score SC_rele(i) for each sentence s_i as follows:
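The equation itself is missing from this excerpt. Following the DPC density definition of Rodriguez and Laio [4] and the variable descriptions that follow, a plausible reconstruction (the 1/K normalization is our assumption) is:

```latex
SC_{rele}(i) = \frac{1}{K}\sum_{\substack{j=1 \\ j \neq i}}^{K} \chi\!\left(Sim_{ij} - \omega\right),
\qquad
\chi(x) = \begin{cases} 1, & x > 0 \\ 0, & x \le 0 \end{cases}
```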

where Sim_ij represents the cosine similarity between the i-th and j-th sentences, K denotes the total number of sentences in the documents, and T denotes the total number of terms in the documents. ω represents the predefined density threshold. SC_rele(i) should be normalized in order to fit the comprehensive scoring model.

The density threshold ω is determined following the study [4] so as to exclude sentences that hold lower similarity values with the others.

    3.2.2.Diversity score

In this section, a diversity score is presented, arguing that a good summary should not include analogous sentences. A document set usually contains one core topic and some subtopics. In addition to the most evident topic, it is also necessary to capture the subtopics so as to better understand the whole corpus. In other words, the sentences of the summary should overlap as little as possible, eliminating redundancy. Maximal Marginal Relevance (MMR), one of the typical redundancy-reduction methods, uses a greedy approach to sentence selection that combines criteria of query relevance and novelty of information. Another hypothesis of DPC is that cluster centers are characterized by a relatively large distance from points with higher densities, which ensures that similar sentences receive very different scores. Therefore, by comparison against all other sentences of the corpus, sentences with higher scores can be extracted while guaranteeing diversity globally. The diversity score SC_div(i) is defined as the following function:
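The defining equation is missing here. By analogy with the DPC separation quantity δ_i [4] (the distance to the nearest point of higher density), a plausible reconstruction in similarity terms is:

```latex
SC_{div}(i) = \min_{j:\; SC_{rele}(j) > SC_{rele}(i)} \left(1 - Sim_{ij}\right)
```

with the sentence of highest density assigned 1 by convention, as the text notes.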

Note that the diversity score of the sentence with the highest density is conventionally assigned 1.
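To make the two DPC-derived quantities concrete, here is a minimal Python sketch (our own illustration, not the authors' code) that computes a density-style relevance score and a separation-style diversity score from a precomputed binary-cosine similarity matrix, following the DPC definitions in [4]. The function names, the normalization by the maximum, and the use of 1 − Sim as dissimilarity are our assumptions.

```python
def relevance_scores(sim, omega):
    # Density: count neighbours whose similarity exceeds the threshold omega.
    k = len(sim)
    raw = [sum(1 for j in range(k) if j != i and sim[i][j] > omega)
           for i in range(k)]
    top = max(raw) or 1
    return [r / top for r in raw]  # normalised to [0, 1]

def diversity_scores(sim, rele):
    # Separation: dissimilarity to the nearest sentence of higher density;
    # the highest-density sentence gets 1 by convention.
    k = len(sim)
    div = []
    for i in range(k):
        higher = [1 - sim[i][j] for j in range(k) if rele[j] > rele[i]]
        div.append(min(higher) if higher else 1.0)
    return div
```

For a toy 3-sentence corpus whose first two sentences are near-duplicates, the near-duplicates receive high relevance, while the outlier receives a high diversity score relative to its density.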

    3.2.3.Informativeness score

The relevance score and diversity score measure relationships between sentences. In this section, informative content words are employed to calculate the internal informativeness of a sentence. Informative content words are the non-stop words whose parts of speech are nouns, verbs, and adjectives.

It is also necessary to normalize the informativeness score as follows:
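The two equations are absent from this excerpt. Since informativeness is based on counting informative content words, one plausible form, writing n_i for the number of informative content words in sentence s_i (our notation), is the count together with a max-normalization:

```latex
SC_{info}(i) = n_i,
\qquad
\widehat{SC}_{info}(i) = \frac{n_i}{\max_{j} n_j}
```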

    3.2.4.Length constraint

The longer a sentence is, the more information it carries, which makes long sentences tend to be extracted. Since the total number of words in the summary is limited, the longer the selected sentences are, the fewer of them can be chosen. Therefore, a length constraint is requisite. Sentence lengths l_i range over a large scope, so we introduce a smoothing method to handle the problem; taking the logarithm is a widely used smoothing approach. Thus the length constraint is defined as follows in (7):

It needs to be normalized like the previous scores:
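Equation (7) and its normalization are missing here. Since the text describes the constraint as a logarithmic smoothing of the sentence length l_i, a plausible reconstruction is:

```latex
SC_{len}(i) = \log(l_i),
\qquad
\widehat{SC}_{len}(i) = \frac{\log(l_i)}{\max_{j}\log(l_j)}
```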

    3.3.Integrated score framework

The ultimate goal of our method is to select sentences with higher relevance, more informativeness, and better diversity under the length limit. We define a function comprehensively considering the above purposes as follows:

To simplify computation, the scoring framework is then rewritten as:
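Both equations are missing from this excerpt. Given that exactly three weights α, β, and γ are tuned, one plausible form of the simplified framework is a weighted combination of the normalized scores, e.g.:

```latex
SC(i) = \alpha\, SC_{rele}(i) + \beta\, SC_{div}(i) + \gamma\, SC_{info}(i) + SC_{len}(i)
```

Whether the length term enters the score itself or acts only through the knapsack constraint below cannot be determined from this excerpt; this reconstruction is a guess.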

Note that to determine how to tune the parameters α, β, and γ of the integrated score framework, we carried out a set of experiments on the development dataset. The values of α, β, and γ were varied from 0 to 1.5, and we chose the values with which the method performs best.
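The paper does not specify the search procedure; the tuning described above could be realized by an exhaustive sweep like the sketch below. The `evaluate` callback, assumed to return a ROUGE-like quality score for a weight setting on the development set, is hypothetical.

```python
import itertools

def tune_weights(candidates, evaluate):
    # Exhaustive search over (alpha, beta, gamma) triples drawn from
    # `candidates`; keeps the triple that maximizes `evaluate`.
    best, best_score = None, float("-inf")
    for a, b, g in itertools.product(candidates, repeat=3):
        score = evaluate(a, b, g)
        if score > best_score:
            best, best_score = (a, b, g), score
    return best, best_score
```

With candidate values stepping through [0, 1.5], this reproduces the kind of grid search implied by the tuning range in the text.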

    3.4.Summary generation process

The summary generation is regarded as a 0-1 knapsack problem:
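The formulation itself is missing from this excerpt. The standard statement, writing x_i ∈ {0,1} for whether sentence s_i is selected, l_i for its length, L for the length budget, and SC(i) for the integrated score, would be:

```latex
\max_{x} \sum_{i=1}^{K} x_i\, SC(i)
\quad \text{s.t.} \quad \sum_{i=1}^{K} x_i\, l_i \le L,
\qquad x_i \in \{0, 1\}
```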

The 0-1 knapsack problem is NP-hard. To alleviate this, we utilize a dynamic programming solution to select sentences until the expected summary length is reached, shown as follows.
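The recurrence is missing here. The standard 0-1 knapsack dynamic program, consistent with the description of S[i][l] that follows, is:

```latex
S[i][l] = \begin{cases}
S[i-1][l], & l_i > l \\
\max\!\left(S[i-1][l],\; S[i-1][l - l_i] + SC(i)\right), & l_i \le l
\end{cases}
```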

where S[i][l] stands for the highest score of a summary that can contain only sentences from the set {s_1, s_2, …, s_i} under the limit of exact length l.
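A minimal Python sketch of this selection step (our own illustration; the function name and the traceback are not from the paper) fills the S[i][l] table and then recovers which sentences were chosen:

```python
def select_sentences(scores, lengths, budget):
    # 0-1 knapsack by dynamic programming: S[i][l] is the best total
    # score achievable using only the first i sentences within l words.
    n = len(scores)
    S = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for l in range(budget + 1):
            S[i][l] = S[i - 1][l]  # skip sentence i
            if lengths[i - 1] <= l:  # or take it, if it fits
                cand = S[i - 1][l - lengths[i - 1]] + scores[i - 1]
                if cand > S[i][l]:
                    S[i][l] = cand
    # Trace back which sentences were chosen.
    chosen, l = [], budget
    for i in range(n, 0, -1):
        if S[i][l] != S[i - 1][l]:
            chosen.append(i - 1)
            l -= lengths[i - 1]
    return sorted(chosen), S[n][budget]
```

For example, with scores [3.0, 4.0, 5.0], lengths [2, 3, 4], and a 5-word budget, the first two sentences (total score 7.0) beat the single highest-scoring sentence.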

    4.Experimental setup

    4.1.Datasets and evaluation metrics

We evaluate our approach on the open benchmark datasets DUC2004 and DUC2007 from the Document Understanding Conference (DUC) summarization tasks. Table 1 gives a brief description of the datasets. Four human-generated summaries, in which every sentence is either selected in its entirety or not at all, are provided as the ground truth of the evaluation for each document set.

DUC2007 is used as our development set to investigate how α, β, and γ affect the integrated score framework. The ROUGE 1.5.5 toolkit [13], widely used in automatic document summarization research, is applied to evaluate the performance of our summarization method. Among the evaluation methods implemented in ROUGE, ROUGE-1 focuses on the occurrence of the same words in the generated and reference summaries, while ROUGE-2 and ROUGE-SU4 concern more the readability of the generated summary. We report the mean recall over all topics for these three metrics.

    4.2.Baselines

We compare our proposed method with the following generic summarization baselines, which are widely applied in research or recently released in the literature:

1: DUC Best: the best participating system in DUC2004;

2: Cluster-based methods: KM [10], FGB [7], ClusterHITS [6], NMF [14], RTC [8];

3: Other state-of-the-art MDS methods: Centroid [15], LexPageRank [16], BSTM [10], WCS [11].

    5.Experimental results

We evaluate our method on the DUC2004 data with α = 0.77, β = 0.63, and γ = 0.92, the setting that performed best in our experiments on the development data DUC2007. The results are listed in Table 2. Fig. 2 visually compares our method with the baselines to better demonstrate the results: we subtract the KM score from the scores of the remaining methods and then add 0.01 in the figure, so that the distinctions among the methods can be observed more clearly. Table 2 reports ROUGE-1, ROUGE-2, and ROUGE-SU recall measures.

    Table 1 Description of the dataset.

Table 2 Overall performance comparison on the DUC2004 dataset using the ROUGE evaluation tool. Remark: "-" indicates that the corresponding method has not authoritatively released results.

From Table 2 and Fig. 2, we make the following observations. Our result approaches the human-annotated result, and our method clearly outperforms the DUC2004 best team. Our method also significantly outperforms most rivals on the ROUGE-1 and ROUGE-SU metrics. Compared with WCS, our result is slightly worse, possibly because of the aggregation strategy used by WCS, which combines various summarization systems to produce better summaries. Compared with other cluster-based methods, ours considers the informativeness of sentences and does not need a preset number of clusters. Removing any one of the four scores from the integrated score framework reduces the effectiveness of the method; in other words, all four scores contribute to the summarization task. In a word, our proposed method handles the MDS task effectively.

    6.Conclusion

Fig. 2. Comparison of the methods in terms of ROUGE-1, ROUGE-2, and ROUGE-SU recall measures.

In this paper, we proposed a novel unsupervised method to handle the task of multi-document summarization. For ranking sentences, we proposed an integrated score framework: informative content words are used to obtain informativeness, while DPC measures the relevance and diversity of sentences simultaneously. We combined these scores with a length constraint and selected sentences via dynamic programming. Extensive experiments on standard datasets show that our method is quite effective for multi-document summarization.

In the future, we will introduce external resources such as WordNet and Wikipedia to calculate sentence semantic similarity, which can address the problems of synonyms and polysemous words. We will then apply our proposed method to topic-focused and update summarization, toward which summarization tasks have turned.

    Acknowledgments

This work is partially supported by NSFC (No. 61271309, No. 61300197) and Shenzhen Science & Research projects (No. CXZZ20140509093608290).

References

[1] T. Ma, X. Wan, Multi-document summarization using minimum distortion, in: 2010 IEEE International Conference on Data Mining, IEEE, 2010, pp. 354-363.

[2] H. Liu, H. Yu, Z.-H. Deng, Multi-document summarization based on two-level sparse representation model, in: AAAI, 2015, pp. 196-202.

[3] Z. Cao, F. Wei, L. Dong, S. Li, M. Zhou, Ranking with recursive neural networks and its application to multi-document summarization, in: AAAI, 2015, pp. 2153-2159.

[4] A. Rodriguez, A. Laio, Clustering by fast search and find of density peaks, Science 344 (6191) (2014) 1492-1496.

[5] K. Hong, A. Nenkova, Improving the estimation of word importance for news multi-document summarization, in: EACL, 2014, pp. 712-721.

[6] X. Wan, J. Yang, Multi-document summarization using cluster-based link analysis, in: Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, 2008, pp. 299-306.

[7] D. Wang, S. Zhu, T. Li, Y. Chi, Y. Gong, Integrating document clustering and multi-document summarization, ACM Trans. Knowl. Discov. Data (TKDD) 5 (3) (2011) 14.

[8] X. Cai, W. Li, Ranking through clustering: an integrated approach to multi-document summarization, IEEE Trans. Audio, Speech, Lang. Process. 21 (7) (2013) 1424-1433.

[9] D. Wang, T. Li, S. Zhu, C. Ding, Multi-document summarization via sentence-level semantic analysis and symmetric matrix factorization, in: Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, 2008, pp. 307-314.

[10] D. Wang, S. Zhu, T. Li, Y. Gong, Multi-document summarization using sentence-based topic models, in: Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, ACL, 2009, pp. 297-300.

[11] D. Wang, T. Li, Weighted consensus multi-document summarization, Inf. Process. Manag. 48 (3) (2012) 513-523.

[12] J. Goldstein, V. Mittal, J. Carbonell, M. Kantrowitz, Multi-document summarization by sentence extraction, in: Proceedings of the 2000 NAACL-ANLP Workshop on Automatic Summarization - Volume 4, ACL, 2000, pp. 40-48.

[13] P. Over, J. Yen, Introduction to DUC-2001: an intrinsic evaluation of generic news text summarization systems, in: Proceedings of DUC 2004 Document Understanding Workshop, Boston, 2004.

[14] D. Wang, T. Li, C. Ding, Weighted feature subset non-negative matrix factorization and its applications to document understanding, in: 2010 IEEE International Conference on Data Mining, IEEE, 2010, pp. 541-550.

[15] D.R. Radev, H. Jing, M. Styś, D. Tam, Centroid-based summarization of multiple documents, Inf. Process. Manag. 40 (6) (2004) 919-938.

[16] Q. Mei, J. Guo, D. Radev, DivRank: the interplay of prestige and diversity in information networks, in: Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2010, pp. 1009-1018.

* Corresponding author. Shenzhen Raisound Technologies, Co., Ltd, China.

    E-mail address:13925876721@163.com(J.Zhang).

    Peer review under responsibility of Chongqing University of Technology.

    http://dx.doi.org/10.1016/j.trit.2016.12.005

2468-2322/© 2017 Production and hosting by Elsevier B.V. on behalf of Chongqing University of Technology. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

