
    Tibetan Question Generation Based on Sequence to Sequence Model

    Computers, Materials & Continua, 2021, Issue 9

    Yuan Sun, Chaofan Chen, Andong Chen and Xiaobing Zhao

    1School of Information Engineering, Minzu University of China, Beijing, 100081, China

    2Minority Languages Branch, National Language Resource and Monitoring Research Center

    3Queen Mary University of London, London, E1 4NS, UK

    Abstract: As the dual task of question answering, question generation (QG) is a significant and challenging task that aims to generate valid and fluent questions from a given paragraph. The QG task is of great significance to question answering systems, conversational systems, and machine reading comprehension systems. Recent sequence to sequence neural models have achieved outstanding performance in English and Chinese QG tasks. However, the task of Tibetan QG is rarely mentioned. The key factor impeding its development is the lack of a public Tibetan QG dataset. Faced with this challenge, this paper first collects 425 articles from the Tibetan Wikipedia website and constructs 7,234 question-answer pairs through crowdsourcing. Next, we propose a Tibetan QG model based on the sequence to sequence framework to generate Tibetan questions from given paragraphs. Then, in order to generate answer-aware questions, we introduce an attention mechanism that captures the key semantic information related to the answer. Meanwhile, we adopt a copy mechanism that copies words from the paragraph to avoid generating unknown or rare words in the question. Finally, experiments show that our model achieves higher performance than baseline models. We further explore the attention and copy mechanisms, and prove their effectiveness through experiments.

    Keywords: Tibetan; question generation; copy mechanism; attention

    1 Introduction

    Aiming at generating specific answer-related questions from a given text, question generation (QG) is a basic and important task in the natural language generation community. With a wide range of applications, QG has attracted much attention from both industrial and academic communities. The QG task can not only extend the corpora of question answering systems, but can also be applied in many fields such as dialogue systems, expert systems, and machine reading comprehension [1]. Based on the form of the input text, QG can be categorized into two types: sentence-level QG and paragraph-level QG. The input of the former is a sentence, while that of the latter is a paragraph. Compared with sentence-level QG, paragraph-level QG involves much richer text information.

    Early works on QG tasks were mainly based on templates and rules [2,3]. These works transformed the input text into a question through complex question templates and linguistic rules crafted by humans. However, such methods rely heavily on manual rules and professional linguistic knowledge, and thus cannot be directly adapted to other domains. Recently, with the advancement of deep learning, researchers have begun to adopt neural network approaches to solve this problem [4-6]. Different from template-based or rule-based methods, deep learning methods do not require complex question templates or rules. Therefore, researchers no longer focus on designing question rules and templates, but instead concentrate on training an end-to-end neural network. At the same time, models based on deep learning generalize well and can be easily applied to different domains. However, most QG work has targeted English or Chinese tasks. For low-resource languages such as Tibetan, few relevant experiments have been conducted. Two main factors hinder the development of Tibetan QG: (1) there is no large-scale public Tibetan QG dataset; and (2) Tibetan grammar is complex, so most QG models have difficulty understanding Tibetan sentences. Thus, it is difficult to achieve satisfactory results.

    To address these issues, this study proposes a Tibetan QG model to generate valid and fluent question sentences. Our model takes a paragraph and a target answer as input. Different from other text generation tasks, our model aims to (1) generate questions that can be answered from the paragraph, and (2) generate fluent questions. Specifically, to achieve the first goal, our model needs to incorporate the target answer's position information and capture the key information in the paragraph. To achieve the second goal, we need to avoid generating rare or unknown words. Considering both goals, we adopt an attention mechanism and a copy mechanism. Our contributions to the literature are three-fold:

    (1) To solve the problem of the Tibetan QG corpus shortage, we construct a high-quality Tibetan QG dataset.

    (2) To generate answer-aware questions, we adopt an attention mechanism to help the model encode the answer-aware paragraph.

    (3) To avoid generating unknown/rare words, we adopt a copy mechanism to copy some words from the paragraph.

    2 Related Work

    Question generation requires the system to generate fluent questions when given a paragraph and a target answer. Early works focused on template-based methods and mostly adopted linguistic rules. Under this scenario, researchers spent a considerable amount of time designing question rules and templates, and then transformed the input text information into a question. Heilman et al. [7] proposed a method to generate a large number of candidate questions. They matched the text with complex question templates and rules designed by humans. Finally, they ranked the generated questions and selected the top questions. However, designing the question rules and templates was time-consuming. To solve this problem, Mitkov et al. [8] proposed a semi-automatic method: they began by using some simple natural language processing tools, such as a shallow semantic analysis tool, a named entity recognition tool, and a part-of-speech tagging tool, to better understand the input text. Their work shortened the development time, but the natural language processing tools introduced some errors. Therefore, the quality of the questions generated by their system could not be guaranteed. To generate high-quality questions, Mostow et al. [9] expanded the question templates and rules by summarizing many question structures. At the same time, Labutov et al. [10] regarded the QG task as an "ontology crowd-relevance" workflow. They first represented the original text in a low-dimensional ontological space, and then generated questions by aligning question templates and rules. The aforementioned works all converted paragraphs into questions through question templates and rules. Moreover, these works all followed three steps: sentence processing, template and rule matching, and question ranking. Thus, the performance of these systems heavily depends on the quality of the rules and templates. Additionally, these rule-based methods are difficult to apply in other domains.

    Recently, deep learning methods have achieved remarkable performance on QG tasks. Deep learning methods do not require researchers to design complex question templates and rules. However, they do require a large amount of labeled data to train the model. Existing English and Chinese QG models are based on the sequence to sequence framework [11-14]. Serban et al. [15] and Du et al. [16] used two different recurrent neural networks (RNNs) to encode and decode the input text. However, they did not consider that some sentences would not yield valuable questions. Thus, Du et al. [17] proposed a higher-level neural network to identify which sentences in the paragraph were worth asking questions about. Finally, they used these sentences to generate more questions, and the F1 value reached 79.8%. However, the aforementioned works did not place any restriction on the generated questions, so the questions might not be answerable after reading the paragraph. To generate answer-aware questions, Zhou et al. [18] proposed a method to integrate answer position information. Their model not only took the original text as input but also considered the position information of the answer in the text. The experimental results proved the effectiveness of their model: they achieved 13.29% BLEU-4. Kim et al. [19] found that the generated questions would sometimes contain the answers. They inferred that the sequence to sequence model would over-learn some paragraph information, and thus proposed an answer masking method, which replaced the answer with a special label in the original text. They obtained 43.96% ROUGE-L on the Stanford Question Answering Dataset (SQuAD) [20]. However, their method required a large-scale training corpus. To address this, Song et al. [21] proposed a matching strategy to enhance the performance of the model. They evaluated three different strategies (full matching, attention matching, and maximum matching) on the SQuAD dataset, and found that the full matching attention mechanism reached 42.72% ROUGE-L, outperforming the other strategies. Subsequently, Zhao et al. [22] adopted gated attention and a maxout pointer network in the encoder stage, and achieved 44.48% ROUGE-L. Note that the above works all addressed the English QG task.

    As a minority language in China, Tibetan has a complicated grammatical structure. Tibetan is a phonetic language, and the smallest unit of a Tibetan word is the syllable. Some words also undergo syllable changes, which means that a QG model must have a deep understanding of the Tibetan text. Owing to the particularities of minority languages, there are few studies on Tibetan automatic question generation. Ban et al. [23] analyzed Tibetan interrogative questions from a linguistic point of view. Similarly, Sun [24] analyzed the difference between questions and paragraphs. While these works were not related to automatic question generation, they did provide the correct forms of Tibetan questions. In an attempt at Tibetan sentence-level question generation, Xia et al. [25] proposed a semi-automated method to generate questions; however, their work required human participation. To generate questions automatically, Sun et al. [26] combined generative adversarial networks (GANs) [27] with reinforcement learning to automatically generate Tibetan questions. Their BLEU-2 score was 6.7% higher than that of the baseline methods, but their QG model took a long time to train.

    3 Model Structure

    3.1 Task Definition

    For each sample in our dataset, our goal is to generate answer-aware questions from a paragraph. Given a paragraph and a target answer, we first embed the paragraph and answer into two sequences P = {p_1, p_2, p_3, ..., p_n} and A = {a_1, a_2, a_3, ..., a_m}, respectively. Next, the QG task can be defined as in Eq. (1):

    Q* = argmax_Q Prob(Q | P, A)   (1)

    Therefore, our goal is to find the best question Q*. Different from the traditional sequence to sequence model, the words in the generated question come not only from the dictionary but also from the paragraph sequence P.

    3.2 The Overall Framework

    In this section, we briefly introduce our model for Tibetan QG. The overall framework, which can be roughly divided into two parts, is shown in Fig. 1.

    Figure 1:Framework of proposed model

    (1) Encoding stage. In this stage, the Tibetan paragraph and the target answer are segmented into word sequences by a word segmentation tool [28], and these sequences are fed into the neural network to be encoded. To ensure that the generated question is related to the given answer, an attention mechanism is used to aggregate the key information related to the target answer. Finally, we obtain the answer-aware paragraph representation.

    (2) Decoding stage. The decoder is used to decode the answer-aware paragraph representation and generate questions. To avoid generating unknown or rare words in the question, we adopt a copy mechanism to copy some words from the paragraph.
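    As a concrete illustration of the preprocessing in the encoding stage, the sketch below marks each paragraph token as inside or outside the answer span. It assumes the paragraph and answer have already been segmented into token lists (the actual Tibetan word segmentation tool [28] is not reproduced here), and the function name tag_answer_positions is our own.

    ```python
    # Illustrative answer-position tagging: a token gets 1 if it falls inside the first
    # occurrence of the answer span in the paragraph, and 0 otherwise.
    def tag_answer_positions(paragraph_tokens, answer_tokens):
        tags = [0] * len(paragraph_tokens)
        n, m = len(paragraph_tokens), len(answer_tokens)
        for start in range(n - m + 1):
            if paragraph_tokens[start:start + m] == answer_tokens:
                tags[start:start + m] = [1] * m
                break
        return tags

    # Toy usage with placeholder tokens standing in for segmented Tibetan words:
    print(tag_answer_positions(["w1", "w2", "w3", "w4", "w5"], ["w3", "w4"]))  # [0, 0, 1, 1, 0]
    ```

    These tags serve as the answer-position features consumed by the encoder described in Section 4.1.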

    4 Model Details

    4.1 Paragraph and Answer Encoding

    Different from the traditional sequence to sequence model, we use two different neural networks to separately encode the paragraph and the answer. In order to obtain context information, this study adopts a bi-directional long short-term memory network (BiLSTM) [29] to encode paragraphs and answers. Assuming that there is a paragraph sequence q = {q_1, q_2, q_3, ..., q_n} with n words, the paragraph embedding can be calculated as follows:

    v_t = BiLSTM([q_t, p_t], v_{t-1})   (2)

    where v_t represents the RNN hidden state at time step t, q_t is the word embedding of the t-th word in the paragraph, and v_{t-1} is the previous RNN hidden state. Because it is important to generate an answer-aware question, we need to incorporate the answer's position information. p_t is the answer tag indicating whether the word q_t is inside or outside the answer, and [q_t, p_t] represents the concatenation of the word embedding and the answer tagging embedding. Finally, each word in the paragraph can be represented by v = {v_1, v_2, ..., v_n}.

    To avoid the influence of irrelevant information when encoding the answer sequence, we use another BiLSTM to encode the target answer sequence:

    u_t = BiLSTM(a_t, u_{t-1})   (3)

    where u_{t-1} is the RNN hidden state at time step t-1, and a_t is the word embedding of the t-th word in the answer A. Finally, the answer is represented by u = {u_1, u_2, ..., u_m}. In this study, we use GloVe to embed the original text [30].
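    To make the dual-encoder setup concrete, the following is a minimal PyTorch-style sketch of Eqs. (2) and (3). The embedding sizes, hidden sizes, and names such as encode_paragraph and encode_answer are our own assumptions, not the authors' released implementation.

    ```python
    # Illustrative BiLSTM encoders for the paragraph (with answer-position tags) and the answer.
    import torch
    import torch.nn as nn

    EMB_DIM, TAG_DIM, HID = 300, 16, 256

    word_emb = nn.Embedding(50000, EMB_DIM)     # could be initialized from GloVe vectors [30]
    tag_emb = nn.Embedding(2, TAG_DIM)          # 0 = outside the answer, 1 = inside the answer
    paragraph_bilstm = nn.LSTM(EMB_DIM + TAG_DIM, HID, bidirectional=True, batch_first=True)
    answer_bilstm = nn.LSTM(EMB_DIM, HID, bidirectional=True, batch_first=True)

    def encode_paragraph(word_ids, answer_tags):
        # Eq. (2): v_t = BiLSTM([q_t, p_t], v_{t-1}), concatenating word and tag embeddings.
        x = torch.cat([word_emb(word_ids), tag_emb(answer_tags)], dim=-1)
        v, _ = paragraph_bilstm(x)               # (batch, n_words, 2 * HID)
        return v

    def encode_answer(answer_ids):
        # Eq. (3): u_t = BiLSTM(a_t, u_{t-1})
        u, _ = answer_bilstm(word_emb(answer_ids))   # (batch, m_words, 2 * HID)
        return u

    # Toy usage: one paragraph of 5 tokens and one answer of 2 tokens.
    v = encode_paragraph(torch.randint(0, 50000, (1, 5)), torch.tensor([[0, 0, 1, 1, 0]]))
    u = encode_answer(torch.randint(0, 50000, (1, 2)))
    ```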

    4.2 Interaction of Questions and Answers

    Attention mechanisms are widely used in natural language processing tasks [31,32]. They allow the model to dynamically assign weights to different information, so that the model pays more attention to words with greater weights. For the QG task, a paragraph contains a lot of information, but it may also contain noise. Therefore, the model must filter out the noisy information. We introduce an attention mechanism to determine which words are important, and the answer-aware paragraph encoding can be described as follows:

    S^u_t = V^T tanh(W_1 v_t + W_2 u)   (4)

    where V^T, W_1, and W_2 are weight matrices, and S^u is the weight of each word in the paragraph. Next, we normalize S^u by using the softmax function, as shown in Eq. (5):

    a^u = softmax(S^u)   (5)

    Next, we calculate the weighted representation of every word in the paragraph as follows:

    M_t = a^u_t v_t   (6)

    where a^u represents the weight factor, and M_t is the final paragraph context embedding.
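    As an illustration of Eqs. (4)-(6), a minimal additive-attention sketch could look like the following. It assumes PyTorch and pools the answer states into a single vector u by mean pooling; this pooling choice and the class name AnswerAwareAttention are our assumptions, and the exact parameterization in the paper may differ.

    ```python
    # Illustrative answer-aware attention: score each paragraph word against the answer
    # vector, normalize with softmax, and re-weight the paragraph states.
    import torch
    import torch.nn as nn

    class AnswerAwareAttention(nn.Module):
        def __init__(self, hid=512):
            super().__init__()
            self.W1 = nn.Linear(hid, hid, bias=False)   # projects paragraph states v_t
            self.W2 = nn.Linear(hid, hid, bias=False)   # projects the answer vector u
            self.v = nn.Linear(hid, 1, bias=False)      # plays the role of V^T in Eq. (4)

        def forward(self, paragraph_states, answer_states):
            # paragraph_states: (batch, n, hid); answer_states: (batch, m, hid)
            u = answer_states.mean(dim=1, keepdim=True)                            # pooled answer
            scores = self.v(torch.tanh(self.W1(paragraph_states) + self.W2(u)))    # Eq. (4)
            weights = torch.softmax(scores, dim=1)                                  # Eq. (5)
            return weights * paragraph_states                                       # Eq. (6): M

    attn = AnswerAwareAttention(hid=512)
    M = attn(torch.randn(1, 5, 512), torch.randn(1, 2, 512))    # toy usage, M: (1, 5, 512)
    ```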

    4.3 Decoder

    After the encoding stage, we obtain a rich semantic paragraph representation. Next, similar to other sequence to sequence models, we use another LSTM network to decode the hidden state. Additionally, to avoid generating unknown or rare words, we allow the decoder to copy some words from the paragraph. We use an LSTM network to decode the hidden vector, as shown in Eq. (7):

    S_t = LSTM(w_{t-1}, S_{t-1})   (7)

    Here, S_t represents the current hidden state at time t, S_{t-1} is the previous hidden state, and w_{t-1} is the previously generated word. Next, we employ the attention mechanism over the paragraph: we need to calculate the weight distribution between the paragraph and the hidden state. This step is illustrated in Eqs. (8) and (9):

    e^t_i = S_t^T W_3 M_i   (8)

    α^t = softmax(e^t)   (9)

    Here, W_3 represents a weight matrix that can be trained by the network, and α^t is the attention distribution over the paragraph words. According to the attention distribution, we can obtain the contextual representation of the paragraph by

    h*_t = Σ_i α^t_i M_i   (10)

    where α^t_i represents the attention score between the i-th word in the paragraph and the answer at time t, and M_i is the i-th word representation in the paragraph, i.e., the output of the encoder. Next, the probability of the word generated by the decoder can be calculated as follows:

    P_vocab = softmax(W_4 [S_t, h*_t])   (11)

    where W_4 is a trainable weight matrix. We find that the question often contains some words from the paragraph. Motivated by this observation, we adopt a copy mechanism to copy some words from the paragraph. Next, we need to calculate the probability of copying a specific word. The copying probability can be calculated as follows:

    g_t = σ(W_5 h*_t + W_6 S_t)   (12)

    where W_5 and W_6 are trainable weight matrices, h*_t is the context vector, S_t is the decoder hidden state vector, and σ is the sigmoid function that transforms a real number into a probability value. Then, the probability distribution over the words in the paragraph can be calculated by summing the attention weights of all positions where a word occurs, as shown in Eq. (13):

    P_copy(w) = Σ_{i: q_i = w} α^t_i   (13)

    Finally, the probability of the generated word is a weighted sum of the two distributions:

    P(w) = g_t P_copy(w) + (1 - g_t) P_vocab(w)   (14)
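    A single decoding step with the copy mechanism can be sketched roughly as below, in a pointer-generator style that mirrors Eqs. (7)-(14). The function name decode_step, the bilinear attention form, and all sizes are our illustrative assumptions rather than the authors' exact implementation.

    ```python
    # Illustrative decoding step: LSTM state update, attention over the paragraph, and a
    # gated mixture of the vocabulary distribution and the copy distribution.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    VOCAB, HID = 50000, 512
    decoder_cell = nn.LSTMCell(HID, HID)
    W3 = nn.Linear(HID, HID, bias=False)     # attention weight matrix (Eq. (8))
    W4 = nn.Linear(2 * HID, VOCAB)           # vocabulary projection (Eq. (11))
    copy_gate = nn.Linear(2 * HID, 1)        # combines h*_t and S_t, i.e., W5 and W6 (Eq. (12))

    def decode_step(prev_word_emb, prev_state, M, paragraph_ids):
        """One step for batch size 1. prev_word_emb: (1, HID); prev_state: (h, c), each (1, HID);
        M: encoder outputs (n, HID); paragraph_ids: (n,) word ids of the paragraph tokens."""
        s_t, c_t = decoder_cell(prev_word_emb, prev_state)                       # Eq. (7)
        scores = M @ W3(s_t).squeeze(0)                                          # Eq. (8)
        alpha = F.softmax(scores, dim=0)                                         # Eq. (9)
        h_star = alpha @ M                                                       # Eq. (10)
        p_vocab = F.softmax(W4(torch.cat([s_t.squeeze(0), h_star])), dim=0)      # Eq. (11)
        g = torch.sigmoid(copy_gate(torch.cat([h_star, s_t.squeeze(0)])))        # Eq. (12)
        p_copy = torch.zeros(VOCAB).scatter_add(0, paragraph_ids, alpha)         # Eq. (13)
        return g * p_copy + (1 - g) * p_vocab, (s_t, c_t)                        # Eq. (14)
    ```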

    5 Experiments and Analysis

    5.1 Dataset

    Considering the lack of a public Tibetan QG corpus, we collected a large number of Tibetan Wikipedia knowledge articles. These articles cover numerous domains such as science, literature, biology, and education. Next, we invited native Tibetan speakers to write down questions and corresponding answers after reading a paragraph. Finally, we obtained 7,234 Tibetan question-and-answer pairs. The specific format of the Tibetan data is shown in Tab. 1; the UUID of each question-and-answer pair is unique.

    Table 1:Sample from the Tibetan QG corpus
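    Since only the caption of Tab. 1 is reproduced here, the snippet below gives a hypothetical illustration of what one record in such a corpus could look like; the field names (uuid, paragraph, question, answer) are our assumptions rather than the published schema, and each record carries a unique UUID as described above.

    ```python
    # Hypothetical example of one Tibetan question-answer record (schema assumed for illustration).
    sample_record = {
        "uuid": "b3f9c2e4-1a7d-4c3e-9f2a-6d8e5b7a0c1d",   # unique per question-answer pair
        "paragraph": "<Tibetan paragraph text>",
        "question": "<Tibetan question written by a native speaker>",
        "answer": "<answer span taken from the paragraph>",
    }
    ```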

    5.2 Evaluation

    To better evaluate the performance of our model, this paper uses two metrics: BLEU-2 and ROUGE-L. BLEU-2 adopts a 2-gram language model to match the gold question. The 2-gram precision can be calculated as follows:

    P_n = Σ_j Σ_k min(h_k(c_j), max_i h_k(s_ij)) / Σ_j Σ_k h_k(c_j)   (15)

    where s_ij represents the i-th gold question for the j-th sample, c_j represents the question generated by the model, h_k(c_j) is the occurrence frequency of the n-gram W_k in the generated question, and h_k(s_ij) is the occurrence frequency of W_k in the gold question s_ij. However, the n-gram precision alone may encourage the generation of short sentences. In order to solve this problem, a length penalty factor is introduced:

    BP = 1 if l_c > l_s; BP = exp(1 - l_s / l_c) if l_c ≤ l_s   (16)

    where l_c represents the length of the generated question, and l_s is the effective length of the gold question. Finally, BLEU-2 is calculated as expressed in Eq. (17):

    BLEU-2 = BP × exp((log P_1 + log P_2) / 2)   (17)
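    A simple sketch of this BLEU-2 computation for a single generated/gold question pair, assuming pre-segmented token lists and our own function names, could be:

    ```python
    # Illustrative BLEU-2 following Eqs. (15)-(17): clipped n-gram precision plus a brevity penalty.
    import math
    from collections import Counter

    def ngram_precision(candidate, reference, n):
        cand = Counter(zip(*[candidate[i:] for i in range(n)]))
        ref = Counter(zip(*[reference[i:] for i in range(n)]))
        clipped = sum(min(count, ref[gram]) for gram, count in cand.items())
        return clipped / max(sum(cand.values()), 1)

    def bleu2(candidate, reference):
        p1 = ngram_precision(candidate, reference, 1)
        p2 = ngram_precision(candidate, reference, 2)
        if p1 == 0 or p2 == 0:
            return 0.0
        bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / len(candidate))
        return bp * math.exp((math.log(p1) + math.log(p2)) / 2)

    # Toy usage with already-segmented (space-split) token lists:
    print(bleu2("what is the capital".split(), "what is the capital city".split()))
    ```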

    ROUGE-L is a metric based on the longest common subsequence (LCS). It can be calculated by Eqs. (18)-(20):

    R_lcs = LCS(X, Y) / len(Y)   (18)

    P_lcs = LCS(X, Y) / len(X)   (19)

    F_lcs = (1 + β^2) R_lcs P_lcs / (R_lcs + β^2 P_lcs)   (20)

    Here, X is the question sentence generated by the model, and Y is the gold question. LCS(X, Y) is the length of the longest common subsequence between the gold question sentence and the generated question sentence. β is a fixed parameter; in this paper, we set β to 1.2.
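    Correspondingly, a compact ROUGE-L sketch (LCS length via dynamic programming, with β = 1.2 as stated above; names are our own) is:

    ```python
    # Illustrative ROUGE-L following Eqs. (18)-(20): recall and precision derived from the LCS
    # length and combined into an F-score with beta = 1.2.
    def lcs_length(x, y):
        dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
        for i, xi in enumerate(x, 1):
            for j, yj in enumerate(y, 1):
                dp[i][j] = dp[i - 1][j - 1] + 1 if xi == yj else max(dp[i - 1][j], dp[i][j - 1])
        return dp[len(x)][len(y)]

    def rouge_l(generated, gold, beta=1.2):
        lcs = lcs_length(generated, gold)
        if lcs == 0:
            return 0.0
        recall = lcs / len(gold)           # Eq. (18)
        precision = lcs / len(generated)   # Eq. (19)
        return (1 + beta ** 2) * recall * precision / (recall + beta ** 2 * precision)  # Eq. (20)

    print(rouge_l("what is the capital".split(), "what is the capital city".split()))
    ```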

    5.3 Results Analysis

    To prove the effectiveness of our model, we compare it with several algorithms based on the sequence to sequence model.

    (1) Seq2Seq: This is the baseline model. To generate questions, this model directly uses two LSTM networks to encode and decode the input text.

    (2) Du et al. [17] model: This is an attention-based sequence learning model that generates questions from a paragraph. However, the authors did not consider whether the generated question was related to the answer. Therefore, in the present paper we concatenate the target answer embedding with the paragraph embedding to generate answer-aware questions.

    (3) NQG [18]: This work is based on the sequence to sequence model and provides feature-rich encoding. In the encoding stage, NQG adopts rich lexical features to encode the paragraph. In the decoding stage, it adopts a copy mechanism to generate questions.

    From Tab. 2, it can be seen that our model exhibits better performance on Tibetan QG. Our model achieves 25.34 BLEU-2 and 36.47 ROUGE-L. The baseline model (Seq2Seq) reaches 16.42 BLEU-2 and 27.13 ROUGE-L. The Du et al. model achieves 31.26 ROUGE-L, and NQG reaches 21.72 BLEU-2 and 32.71 ROUGE-L. Compared with the basic Seq2Seq, our model demonstrates improvements of 8.92 BLEU-2 and 9.34 ROUGE-L. Compared with Du et al. [17], our model demonstrates improvements of 4.73 BLEU-2 and 5.21 ROUGE-L. Lastly, compared with NQG, our model shows improvements of 3.62 BLEU-2 and 3.76 ROUGE-L. To further verify our model, we also conduct some ablation experiments, and the results are shown in Tab. 3.

    Table 2:Experimental results of different models

    Table 3:Model ablation results on Tibetan QG task

    Tab. 3 shows significant improvements in performance when the attention and copy mechanisms are added. BLEU-2 improves by +5.3 when the attention mechanism is added. The performance of the sequence to sequence model also increases greatly when the copy mechanism is added. These results show that the attention mechanism and the copy mechanism both help improve model performance.

    6 Conclusion

    In this paper, we propose a Tibetan QG model based on the sequence to sequence framework. Faced with the problem of a lacking Tibetan QG corpus, we construct a medium-sized Tibetan QG dataset to explore the Tibetan question generation task. Furthermore, our QG model adopts an attention mechanism to obtain an answer-aware paragraph embedding, and introduces a copy mechanism to copy some words from the paragraph when generating the question. The copy mechanism enables us to avoid generating unknown or rare words. Experimental results demonstrate the effectiveness of our model. However, the performance of our model still needs to be improved. In the future, we will introduce linguistic knowledge and common sense into the model to improve its performance.

    Acknowledgement:We thank LetPub (www.letpub.com) for its linguistic assistance during the preparation of this manuscript.

    Funding Statement: This work is supported by the National Natural Science Foundation of China (No. 61972436).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
