
    Unsupervised Graph-Based Tibetan Multi-Document Summarization

    Computers, Materials & Continua, 2022, Issue 10

    Xiaodong Yan, Yiqin Wang, Wei Song*, Xiaobing Zhao, A. Run and Yang Yanxing

    1 School of Information and Engineering, Minzu University of China, Beijing 100081, China

    2 National Language Resource Monitoring & Research Center, Minority Languages Branch, Beijing 100081, China

    3 University of California, Irvine, California 92617, USA

    4 Department of Physics, New Jersey Institute of Technology, Newark, New Jersey 07102-1982, USA

    Abstract: Text summarization creates a subset that represents the most important or relevant information in the original content, which effectively reduces information redundancy. Recently, neural network methods have achieved good results on text summarization in both Chinese and English, but research on text summarization in low-resource languages is still at an exploratory stage, especially for Tibetan. Moreover, there is no large-scale annotated corpus for Tibetan text summarization; the lack of datasets severely limits the development of low-resource text summarization. In this situation, unsupervised learning approaches are more appealing for low-resource languages because they do not require labeled data. In this paper, we propose an unsupervised graph-based Tibetan multi-document summarization method, which divides a large number of Tibetan news documents into topics and extracts a summary for each topic. Summaries obtained with traditional graph-based methods are highly redundant, and the division of document topics is not detailed enough. For topic division, we adopt a two-level clustering method that converts the original documents into document-level and sentence-level graphs; we take both linguistic and deep representations into account and integrate an external corpus into the graph to obtain semantic sentence clusters. This improves on the shortcomings of the traditional K-Means clustering method and performs a more detailed clustering of the documents. We then model the sentence clusters as graphs and re-measure the sentence nodes based on the topic semantic information and the impact of topic features on sentences, so that a summary with higher topic relevance is extracted. To promote the development of Tibetan text summarization, and to meet the need of researchers for high-quality Tibetan text summarization datasets, this paper manually constructs a Tibetan summarization dataset and carries out relevant experiments. The experimental results show that our method can effectively improve the quality of summaries and is competitive with previous unsupervised methods.

    Keywords: Multi-document summarization; text clustering; topic feature fusion; graph model

    1 Introduction

    With the development of mobile Internet media platforms, the amount of information on the Internet has exploded. This massive amount of information provides users with abundant content but also creates huge reading barriers. To meet users' need to quickly obtain effective information, text summarization technology has emerged. Automatic text summarization uses computer technology to extract content from texts and generate summaries, helping people obtain information quickly.

    Although recent years have witnessed an increasing number of summarization systems [1,2], most of those systems target high-resource languages; summarization for low-resource languages is still in its infancy. Tibetan is an official language of the Tibet Autonomous Region in China; outside China, Tibetan is also spoken in Nepal, Bhutan, and India. The lack of data severely limits the development of Tibetan text summarization. This paper concentrates on the study and realization of Tibetan text summarization.

    The structure of this paper is as follows: Section 2 presents related work. Section 3 presents our model architecture. Section 4 gives a detailed description of the dataset construction. Section 5 gives a step-by-step description of the proposed technique with tables and figures. Section 6 implements and evaluates the proposed method. Section 7 concludes the work.

    2 Related Work

    Multi-document summarization (MDS) is an effective tool for information aggregation that generates an informative and concise summary from a cluster of topic-related documents [3]. In general, there are two approaches to MDS. The extractive approach identifies words, phrases, or sentences as salient pieces of text and reassembles them as the summary, without generating new text. The abstractive approach does not simply copy important phrases from the source text but may also come up with new phrases, which can be seen as paraphrasing. In recent years, variants of neural sequence-to-sequence models have been particularly successful in summarization tasks [4]. Despite the huge effort invested in deep neural models for summarization, they often require large-scale parallel corpora of input texts paired with their corresponding output summaries for direct supervision [5]. Obtaining training data for MDS is time-consuming and resource-intensive; therefore, low-resource languages mostly use unsupervised methods to generate text summaries. According to the selection of features, unsupervised document summarization methods can be divided into three approaches: statistics-based, topic-based, and graph-based.

    The statistics-based approach was first applied to document summarization; it calculates the importance of sentences based on the statistical characteristics of the text in order to extract summaries. Loret et al. measure the weight of a sentence based on word frequency and the length of its gerund phrases. The statistics-based approach lacks an understanding of the deep semantic relations between sentences, which leads to problems such as difficulty in expressing the topic of the document and missing logical order in summaries [6]. The topic-based approach selects sentences that represent the subject of the document by mining the underlying semantic information of the text. Chang et al. considered the relationships between words, sentences, topics, and documents, and proposed a method that measures weight through the KL divergence between the sentence distribution model and the document distribution model [7]. Balaji et al. proposed a method to identify key topics and extract a summary from multiple documents [8]. Alrumiah et al. proposed a summarization method using Latent Dirichlet Allocation (LDA) and length enhancement [9]. The topic-based approach solves the problem of missing summary semantics to a certain extent, but it still lacks information about document structure. The graph-based approach transforms the traditional extraction step into graph construction and the calculation and sorting of nodes [10]. Graph-based approaches are widely used in the field of text summarization; the earliest work can be found in [11]. In terms of graph sorting, the classic sorting algorithms include TextRank [12], HITS [13], etc. Most existing systems make corresponding improvements on the text graph constructed by the TextRank or HITS algorithms. Li Wei et al. incorporated external corpus information into TextRank in the form of word vectors and used K-means at the sentence level to cluster documents [14]. Saeed et al. proposed an abstractive summarization technique that generates variable-length keywords according to document diversity instead of selecting fixed-length keywords for each document, improving the metadata similarity to the original text [15]. Hu et al. proposed an automatic text summarization technology based on affinity graphs combined with topic information to extract highly informative and highly unique sentences [16].

    3 Model Architecture

    Since the documents come from different sources, the opinions expressed are usually redundant and repetitive. Therefore, a two-level clustering method is adopted. When constructing the sentence graph, we consider the deep representation of the language together with word embeddings. After that, we apply spectral clustering to obtain sentence clusters, and then perform topic feature fusion on each cluster to generate the Tibetan summary. In this paper, we propose an unsupervised graph-based Tibetan multi-document summarization method, as shown in Fig. 1. In summary, the contributions of this paper are threefold, as described below:

    1. We introduce a Tibetan multi-document clustering algorithm based on a graph model, in which two-level clustering is performed at the document level and the sentence level. Two-level clustering can effectively reduce the drop in efficiency caused by directly constructing a sentence graph.

    2. We adopt a spectral clustering method at the sentence level. We define a feature vector for each sentence and then use these features to cluster the sentences. This improves on the shortcomings of the traditional K-Means clustering method, dividing multiple documents into finer topics.

    3. We adopt a topic feature fusion method to generate Tibetan text summaries, addressing the fact that traditional graph models for text summarization lack the mining and utilization of deep topic semantic features. According to the text spans and their relevance to the input "manual features", we reset the weights of the nodes in the graph model and select the top K nodes with the highest weights as the summary.

    4 Dataset Construction

    The construction of datasets is an important task in text summarization. At present, deep learning algorithms have achieved impressive performance on High-Resource Language (HRL) datasets such as the DUC, Gigaword, and CNN/DailyMail datasets [17,18], sometimes even surpassing human performance. But for low-resource languages, the task of text summarization is still in its infancy due to the lack of corresponding datasets. To promote the development of Tibetan text summarization, and to meet the need of researchers for high-quality Tibetan summarization datasets, we manually construct a Tibetan text summarization dataset, which contains 1,000 parallel pairs of news content with their corresponding summaries and more than 3,500 keywords.

    4.1 Construction Process

    All news in this dataset comes from the "Public Opinion Convergence and Analysis" project of the Natural Language Processing Laboratory of Minzu University of China. First, we select the original news and delete articles that are too long or too short; then we clean those texts. Participants in the construction are divided into two groups. One group is responsible for manually constructing summaries on the cleaned dataset; the other group is responsible for verifying the quality of the summaries, reviewing the initial summaries, and deleting or rebuilding summaries that fall below the standard.

    4.2 News Selection

    We adopt 5,000 news articles as the initial dataset. These articles were crawled from websites such as People's Daily Online, Yunzang Net, and Xinhua Net in the "Public Opinion Convergence and Analysis" project of the Natural Language Processing Laboratory of Minzu University of China, covering categories such as politics, science and technology, society, economy, art, and sports. Regular expressions are used to clean the text and remove non-text data such as images, tables, website links, and article sources. To improve the quality of the summarization dataset, we discard news texts with fewer than 1,000 words or more than 400 sentences, and finally selected 1,000 news articles for the construction of the Tibetan summarization dataset.

    4.3 Summary Construction

    The construction of the summaries is in the charge of Tibetan language and literature students from Minzu University of China. Tibetan is their native language, and they also have the basic literacy skills of their major, so they are fully competent in Tibetan summary writing. The summaries are constructed according to the following requirements: briefly explain the material, highlight the key points of the news, and abandon content unrelated to the topic; a rigorous sequential structure and clear hierarchy are necessary. In addition, to further improve the quality of the dataset, cross-validation is used to select the constructed summaries.

    4.4 Cross-validation

    After obtaining the initial summaries, their quality needs to be verified. The verification group scores the initial summaries on the fluency of sentences, the completeness of semantics, and the coverage of the news, and then eliminates low-quality summaries. The scoring rules are shown in Tab. 1. Summaries with an average score of less than 3.5 are removed or rewritten. Eventually, 1,000 news and summary pairs were manually proofread. Examples of manually constructed summaries are shown in Appendix A.

    Table 1:Manual summarization scoring rules

    5 Model Details

    5.1 Graph-based Clustering Algorithm

    Text clustering is the application of cluster analysis to text documents. It uses machine learning and natural language processing (NLP) to understand and categorize unstructured textual data. Through text clustering, texts in the same cluster are more similar to each other than to those in other clusters, so sets with higher internal similarity can be found and the redundancy of text summaries reduced. Text clustering should fully reflect the characteristics of high cohesion and low coupling. Through a text clustering algorithm, sentences on the same topic can be grouped into the same cluster.

    Most existing clustering methods directly use K-Means to cluster documents at the sentence level. However, due to the large number of sentences in multiple documents, building a sentence-level graph model directly leads to a decrease in efficiency, and the topics obtained by directly applying K-Means are not detailed enough. Therefore, we adopt a two-level text clustering algorithm, at the document level and the sentence level, to reduce the efficiency drop caused by directly constructing a sentence graph. First, we construct a document-level graph model and perform text clustering. Second, we construct a sentence graph for the sentences of the documents in the obtained clusters, assign a feature vector to each sentence, and then cluster these feature vectors.

    5.1.1 Document-level Clustering

    The news corpus first needs to be clustered by topic in order to generate multi-document summaries for the documents under each topic. The typical clustering algorithm is the K-Means clustering algorithm [19]. Since K-Means is an unsupervised algorithm and does not require a training set, it can effectively save clustering costs, so it has become one of the most widely used clustering algorithms [20]. We use K-Means combined with a semantic similarity method for document clustering. Based on the text vector space model, cosine similarity is used to calculate the similarity between two documents, and the document-level graph model is constructed according to the similarity threshold of each document.
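
    A minimal sketch of this stage, assuming TF-IDF vectors as the vector space model and scikit-learn's KMeans; the similarity threshold and cluster count below are illustrative values, not the paper's settings:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans

def document_level_clustering(documents, n_topics=5, sim_threshold=0.3):
    """Cluster documents into topics and build the thresholded document graph."""
    doc_vectors = TfidfVectorizer().fit_transform(documents)
    # Pairwise cosine similarity over the vector space model.
    sim = cosine_similarity(doc_vectors)
    # Document-level graph: edge (i, j) exists when similarity exceeds the threshold.
    adjacency = (sim > sim_threshold).astype(int)
    np.fill_diagonal(adjacency, 0)
    # K-Means assigns each document to a topic cluster.
    topic_labels = KMeans(n_clusters=n_topics, n_init=10).fit_predict(doc_vectors)
    return adjacency, topic_labels
```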

    5.1.2 Sentence-level Clustering

    After the document-level graph model is constructed, the document-level text clustering algorithm is used to obtain document clusters with high similarity, which is the discovery process of subtopics. To divide a topic in more detail, the sentences under each subtopic are clustered.

    In terms of sentence graph construction, which differs from document graph construction, we construct a sentence graph based on the Approximate Discourse Graph (ADG) [21]. Specifically, we build a graph (V, E), where each node v_i ∈ V represents a sentence, and nodes v_i and v_j (i ≠ j) are connected, i.e., their edge e_{i,j} = 1, if the similarity between the sentences is greater than the threshold. A schematic diagram of sentence-level clustering is shown in Fig. 2.

    K-Means is sensitive to the initial clustering centers, and its division of dense datasets such as text is not detailed enough [22]. For sentence-level clustering, we use the spectral clustering method, as sketched below. The clusters obtained by spectral clustering have small intra-cluster distances and large inter-cluster distances, enabling a more detailed topic division of the documents.
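
    A minimal sketch of the sentence-level stage, taking precomputed sentence embeddings (e.g., averaged word vectors) as input; the edge rule e_{i,j} = 1 above a similarity threshold follows the ADG-style construction described above, and the threshold is an illustrative value:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import SpectralClustering

def sentence_level_clustering(sentence_embeddings, n_clusters=3, sim_threshold=0.5):
    """Build the sentence graph (V, E) and spectrally cluster its nodes."""
    sim = cosine_similarity(sentence_embeddings)
    # e_ij = 1 when the similarity of sentences i and j exceeds the threshold.
    adjacency = (sim > sim_threshold).astype(float)
    np.fill_diagonal(adjacency, 0.0)
    # Spectral clustering on the precomputed affinity (adjacency) matrix yields
    # clusters with small intra-cluster and large inter-cluster distances.
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed").fit_predict(adjacency)
    return adjacency, labels
```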

    5.2 Tibetan Text Summarization Combining Topic Feature

    Our Tibetan text summarization method combining topic features is a graph-based content extraction method inspired by the TextRank algorithm. We use the manually assigned keywords, which give a general description of the document, as the topic of the news. We reassign the random-restart probabilities of the graph nodes based on their relevance to the topic of the news, so that sentences related to the topic receive higher scores.

    5.2.1 TextRank Node Scoring

    In the original TextRank algorithm, the strengths and weaknesses of text spans are reflected through links extracted directly from the original text. TextRank treats each sentence in the text as a graph node v_i, and the edges between nodes have a weight w_{i,j}, calculated from the similarity between sentence nodes. The sentence similarity score is the cosine similarity of two sentence vectors, where a sentence vector is obtained by averaging all the word vectors of the sentence. The TextRank node score is shown in Eq. (1).
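
    Eq. (1) is the standard TextRank recurrence from [12], where d is the damping factor, In(v_i) is the set of nodes linking to v_i, and Out(v_j) is the set of nodes that v_j links to:

    WS(v_i) = (1 - d) + d \sum_{v_j \in In(v_i)} \frac{w_{j,i}}{\sum_{v_k \in Out(v_j)} w_{j,k}} \, WS(v_j)    (1)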

    5.2.2 Feature Combining Node Scoring

    In the traditional TextRank algorithm, each node has an equal random-restart probability, so all nodes are treated equally during the application of the algorithm. However, we hope that the higher the relevance of a sentence to the topic of the document, the higher the probability that the sentence is selected. We reset the node score based on Biased-TextRank [23] combined with the topic feature. The TextRank node score combined with the topic feature is shown in Eq. (2).
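
    Consistent with Biased-TextRank [23], a plausible form of Eq. (2) replaces the uniform restart term of Eq. (1) with a per-node topic-relevance feature f(v_i) (the exact notation here is an assumption):

    WS(v_i) = (1 - d) \, f(v_i) + d \sum_{v_j \in In(v_i)} \frac{w_{j,i}}{\sum_{v_k \in Out(v_j)} w_{j,k}} \, WS(v_j)    (2)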

    Here, the feature value is set to reflect the relevance of the current sentence node to the topic keywords, and the damping factor d is set to 0.85 as described above. We use multiple keywords extracted from the description content to determine the similarity between nodes and the topic: the keyword information is converted into a fixed-length embedding vector, and the similarity with each node is calculated. The higher the similarity between a node and the embedding vector, the higher the restart probability assigned to the node. Finally, the top K nodes with the highest weights are selected as the summary sentences, with K set to 20% of the length of the article.
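
    A minimal sketch of this scoring step, assuming precomputed sentence embeddings and a topic embedding built from the keywords; the iteration count is an illustrative choice, while d = 0.85 and K = 20% follow the text:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def biased_textrank(sent_embeddings, topic_embedding, d=0.85,
                    iterations=50, top_ratio=0.2):
    """Score sentences with a topic-biased random restart, then pick top K."""
    n = len(sent_embeddings)
    # Edge weights: pairwise cosine similarity between sentence nodes.
    weights = cosine_similarity(sent_embeddings)
    np.fill_diagonal(weights, 0.0)
    # Row-normalize so each node distributes its score over its neighbors.
    transition = weights / np.maximum(weights.sum(axis=1, keepdims=True), 1e-12)
    # Restart probabilities: similarity of each node to the topic embedding.
    bias = cosine_similarity(sent_embeddings, topic_embedding.reshape(1, -1)).ravel()
    bias = np.clip(bias, 0.0, None)
    bias = bias / max(bias.sum(), 1e-12)
    scores = np.full(n, 1.0 / n)
    for _ in range(iterations):
        scores = (1 - d) * bias + d * transition.T @ scores
    # Select the top K nodes (K = 20% of the sentences) as the summary.
    k = max(1, int(top_ratio * n))
    return np.argsort(scores)[::-1][:k]
```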

    6 Experiments

    6.1 Data Preprocessing

    We use the Tibetan border character to separate sentences, then remove stop words and punctuation using a Tibetan stop-word list, and use the TIP-LAS [24] tool to segment words. Considering that sentences that are too long or too short are not suitable as candidate sentences for the summary, such sentences are removed.
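
    A minimal sketch of this preprocessing, assuming the Tibetan shad (།) as the sentence border character and an illustrative stop-word set and length band; real word segmentation would be done with TIP-LAS [24], which is not reproduced here:

```python
import re

TIBETAN_SHAD = "།"  # Tibetan sentence border character

def preprocess(text, stop_words, min_tokens=5, max_tokens=60):
    """Split on the shad, drop stop words, and filter extreme-length sentences."""
    sentences = [s.strip() for s in text.split(TIBETAN_SHAD) if s.strip()]
    cleaned = []
    for sent in sentences:
        # Crude syllable split on the tsheg ("་"); a real pipeline would run
        # TIP-LAS word segmentation here instead.
        tokens = [t for t in re.split(r"[་\s]+", sent)
                  if t and t not in stop_words]
        # Sentences that are too long or too short are poor summary candidates.
        if min_tokens <= len(tokens) <= max_tokens:
            cleaned.append(tokens)
    return cleaned
```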

    6.2 Evaluation Method and Dataset

    Evaluation methods are a key part of the text summarization task. They can be roughly divided into two categories: intrinsic methods and extrinsic methods. Intrinsic methods provide a reference summary and evaluate the quality of the system summary by its degree of agreement with the reference summary. Extrinsic methods do not provide a reference summary and are generally applied to specific tasks, such as document retrieval, document clustering, or document classification, evaluating the quality of a summary by whether it improves application performance. The intrinsic method is the most commonly used summary evaluation method in academia, and comparing system summaries with expert summaries is one of the most common evaluation approaches at present, with expert summaries used as reference summaries to evaluate the quality of system summaries. Lin et al. [25] proposed the ROUGE automatic summary evaluation method, based on BLEU, an automatic evaluation method for machine translation; ROUGE is now widely used in summary evaluation tasks. ROUGE compares the expert summary with the system summary, counts the overlapping basic units, and evaluates the quality of the system summary; it has become one of the general standards for summary evaluation. ROUGE is an evaluation method based on the recall of n-grams, calculated as shown in Eq. (3).
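
    Eq. (3) is the standard n-gram recall formula of ROUGE-N from [25]:

    \mathrm{ROUGE\text{-}N} = \frac{\sum_{S \in RefSummaries} \sum_{gram_n \in S} Count_{match}(gram_n)}{\sum_{S \in RefSummaries} \sum_{gram_n \in S} Count(gram_n)}    (3)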

    where RefSummaries represents the reference summaries, i.e., the expert summaries obtained in advance, Count_match(n-gram) represents the number of co-occurring n-grams between the system summary and the reference summary, and Count(n-gram) represents the number of n-grams that appear in the reference summary.
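
    A minimal sketch of this recall computation for a single reference summary, assuming whitespace-tokenized input (for Tibetan, tokens would come from the segmentation step in Section 6.1):

```python
from collections import Counter

def rouge_n(system_summary, reference_summary, n=1):
    """ROUGE-N recall: clipped n-gram overlap over the reference n-gram count."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    sys_counts = ngrams(system_summary.split(), n)
    ref_counts = ngrams(reference_summary.split(), n)
    # Count_match: co-occurring n-grams, clipped by the reference count.
    overlap = sum(min(c, sys_counts[g]) for g, c in ref_counts.items())
    total = sum(ref_counts.values())
    return overlap / total if total else 0.0
```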

    Since there is no general dataset in the field of Tibetan text summarization research, we first use the news as the corpus and the title as the reference summary for evaluation [26]. To verify the performance of the system summary against expert summaries, we then use the Tibetan summarization dataset, with the news as the corpus and the expert summary as the reference summary, and compare the evaluation results of the two settings.

    6.3 Experimental Results and Analysis

    The ROUGE evaluation results with the news title as the reference summary are shown in Tab. 2. The key sentences extracted by our method achieve the best ROUGE scores when the title is used as the reference summary. However, in this setting the ROUGE score of every method is relatively low. This is because a news headline usually consists of only one or two sentences, so it summarizes only part of the news content and lacks a comprehensive description of the news events; it cannot provide a complete summary. Therefore, using the title as the reference summary cannot evaluate the comprehensiveness of the system summary. For this reason, the Tibetan summarization dataset was constructed, and the system summaries were evaluated on it.

    Table 2:Rouge evaluation results of title reference summary


    The Tibetan summarization dataset constructed in this paper provides concise and comprehensive summaries of the news content. We use this dataset as the reference summaries, use ROUGE as the evaluation metric, and conduct the following experiments:

    Lead-3 + K-Means: We use K-Means combined with Lead-3 as the baseline of the experiment.

    TextRank + K-Means: We use K-Means combined with TextRank to extract summaries. A word-frequency co-occurrence matrix is used to calculate similarity.

    Lead-3 + two-level clustering: We use two-level clustering combined with Lead-3 to extract summaries, to verify the effectiveness of two-level clustering.

    TextRank + two-level clustering: We use two-level clustering combined with the TextRank method to extract summaries. The word-frequency co-occurrence matrix is used to calculate similarity.

    The expert summaries are used as the reference summaries for evaluation. The ROUGE-1, ROUGE-2, and ROUGE-L results are shown in Tab. 3.

    Table 3:Rouge evaluation results of manual reference summary

    With the Lead-3 method as the experimental baseline, our method improves ROUGE-1 by 16.8%, ROUGE-2 by 17%, and ROUGE-L by 17.2% over the baseline. Compared with the K-Means + TextRank method, our ROUGE-1 score increases by 14.4%, ROUGE-2 by 17.9%, and ROUGE-L by 14.4%, which demonstrates the effectiveness of two-level clustering and topic feature fusion. Compared with the two-level clustering + TextRank method, the ROUGE-1 score increases by 7.3%, ROUGE-2 by 18.7%, and ROUGE-L by 11.3%, which verifies that the topic feature combination method can generate a summary more in line with the topic.

    As shown in Fig. 3, the ranking of the various methods under the title reference summary is basically the same as under the expert reference summary. However, the ROUGE scores improve on the expert reference summaries, because the Tibetan summarization dataset enables a comprehensive and focused evaluation of summaries and comprehensively considers the results of the system summary. Our method is better optimized than traditional algorithms for Tibetan multi-news summarization. Using the two-level graph model for multi-text clustering, the clustering complexity when processing high-dimensional data such as vectors is better than that of traditional clustering algorithms. The topic-feature summarization method can select candidate sentences that are more relevant to the topic. Through graph-model clustering and topic feature fusion, the obtained summary can describe the news content comprehensively.

    7 Conclusions

    We propose an unsupervised Tibetan multi-document summarization method based on a graph model. The two-level clustering effectively improves the efficiency of the algorithm, and the generated summaries are more hierarchical. Based on the topic semantic information, which reflects the main idea of the news, and the impact of topic features on sentences, the values of the sentence nodes in the graph are re-measured, and a method combining topic features with summary extraction is proposed. We manually construct a Tibetan summarization dataset; experiments on extracting Tibetan summaries on this dataset achieve good results and verify the effectiveness of the proposed Tibetan summarization method. Since the graph-model method used in this paper is an unsupervised algorithm, we did not use a large-scale corpus in the experiments, which has certain limitations. In the next step, we will expand the scale of the Tibetan summarization dataset and try to generate abstractive summaries on the large-scale dataset.

    Funding Statement: This work was supported in part by the National Science Foundation Project of P.R. China under Grant No. 52071349, and partially supported by the Young and Middle-aged Talents Project of the State Ethnic Affairs Commission.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.

    Appendix A.

    Examples of the Tibetan summarization dataset

    Original Tibetan text [image]: ...... (By 2020, China's gross domestic product (GDP) exceeded 100 trillion yuan for the first time, reaching 101.5986 trillion yuan; calculated at comparable prices, it grew by 2.3% over the previous year. GDP exceeded 100 trillion yuan; development is shared and integrated, and everyone has income. Yunnan proposed that the entire Lisu, Dulong, Nu, and Pumi populations be lifted out of poverty, and that all 88 impoverished counties exit poverty. Guangxi required that 8 impoverished counties, including Rongshui Miao Autonomous County, have their designations removed, together with 54 other impoverished counties. Guizhou Province announced that nine poverty-stricken counties in the province had exited the list of poverty-stricken counties. China's economy has reached a new level: according to preliminary calculations, China's GDP exceeded 100 trillion yuan for the first time in 2020, reaching 101.5986 trillion yuan, an increase of 2.3% over the previous year at comparable prices. GDP exceeding 100 trillion yuan was hard-won and very rare, reflecting the central government's ability to judge and make decisions. In 2020, an epidemic unseen in a century came suddenly, and the world economy suffered the most severe turbulence since the end of World War II. Facing the sudden shock, China's GDP fell by 6.8% year-on-year in the first quarter, the first negative growth since quarterly statistics began. "Only by fully drawing on the huge potential and powerful driving force of our country's development can we achieve the goals and tasks of this year's economic and social development.") ......

    Summarization [image]: The Chinese economy has reached a new level. In 2020, China's gross domestic product (GDP) exceeded 100 trillion yuan for the first time, reaching 101.5986 trillion yuan, an increase of 2.3% over the previous year at comparable prices. China became the only major economy with positive growth, and its share of the world economy rose from 16.3% in 2019 to around 17%, a record high. In an extraordinary year in the history of New China, the people were satisfied, the world paid attention, and an answer sheet worthy of the annals of history was handed over.
