
    QGAE: an end-to-end answer-agnostic question generation model for generating question-answer pairs

Journal of University of Science and Technology of China, 2024, Issue 1

Linfeng Li, Licheng Zhang, Chiwei Zhu, and Zhendong Mao

1 School of Cyber Science and Technology, University of Science and Technology of China, Hefei 230027, China;

2 School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China

Abstract: Question generation aims to generate meaningful and fluent questions, and it can address the lack of question-answer annotated corpora by augmenting the available data. Taking unannotated text with optional answers as input, question generation can be divided into two types based on whether answers are provided: answer-aware and answer-agnostic. While generating questions with provided answers is already challenging, generating high-quality questions without provided answers is even more difficult, for both humans and machines. To address this issue, we propose a novel end-to-end model called question generation with answer extractor (QGAE), which transforms answer-agnostic question generation into answer-aware question generation by directly extracting candidate answers. This approach effectively utilizes unlabeled data to generate high-quality question-answer pairs, and its end-to-end design makes it more convenient than multi-stage methods that require at least two pre-trained models. Moreover, our model achieves better average scores and greater diversity. Our experiments show that QGAE achieves significant improvements in generating question-answer pairs, making it a promising approach for question generation.

Keywords: deep learning; natural language processing; answer-agnostic question generation; answer extraction

    1 Introduction

Question generation (QG)[1,2] is defined as the task of automatically generating fluent, meaningful questions from texts with optional answers, so it can be divided into two main streams: answer-aware QG[3], which requires answers, and answer-agnostic QG[4], which does not. QG is the reverse task of question answering (QA), a long-standing and valuable task that helps computers achieve machine reading comprehension[5,6] and dates back to the 1960s[7]. As with many other supervised learning[8,9] tasks, QA suffers from a lack of annotated data, even though annotated data is often the most essential part of the whole work.

QG is a popular choice for data augmentation in QA to alleviate insufficient labeled data. With the continuous development of Internet technology, it is becoming increasingly easy to obtain valuable data from the Internet. However, question-answer pairs (as shown in Table 1) remain an expensive kind of corpus that typically requires manual annotation by crowdsourcing before it can be used for supervised learning on QA and QG tasks. To alleviate the high cost of producing question-answer pairs, it is natural to consider answer-agnostic QG, since its only input is raw text.

Table 1. A case of QA pairs generated by our QGAE model: the model accepts unannotated text as input, extracts the highlighted phrase "Lorentz's law" as an answer, and then uses this answer to perform question generation.

Although labeled answers are not necessary, answer-agnostic QG still faces a great challenge. Most previous works focused on providing additional information to their models by leveraging named entity recognition (NER)[10] to obtain extra linguistic features, adding answer position features[11], using knowledge graphs[12], and other methods to improve generation quality. These methods effectively improve the fluency and accuracy of generated text, but answer-agnostic QG still performs worse than answer-aware QG. Thus, answer-aware QG plays an irreplaceable role, and turning answer-agnostic QG into answer-aware QG is a good choice. Apart from this, there is another obstacle to generating question-answer pairs: answer-agnostic QG cannot generate answers. To address this issue, researchers often add an additional step for question-answer pair generation: answer extraction. Compared with generating an answer, extracting an exact span from the context is much simpler.

Explicitly extracting candidate answers not only resolves the lack of answers but also transforms answer-agnostic QG into answer-aware QG. As shown in Fig. 1, some works such as RGF[13] (retrieve-generate-filter) proposed multi-stage pipeline methods to handle the problem. A multi-stage pipeline is often complex, consisting of several parts, each of which may need different inputs. Some early RNN-based[14-17] works optimized pipeline methods in an end-to-end way, which makes the overall structure lighter and faster. Although pre-trained language models (PLMs) now dominate both natural language generation and understanding, there is still no end-to-end work that uses pre-trained models to generate question-answer pairs. We believe PLMs have ample potential for this task.

Fig. 1. The difference between multi-stage methods and end-to-end models: a multi-stage method usually involves more than one model in the whole workflow, and at each stage it may need to deal with different inputs and outputs, whereas an end-to-end model only needs a single kind of input.

In this study, motivated by the weak performance of answer-agnostic QG compared to answer-aware QG, and inspired by the combination of the QG and AE tasks, we propose an answer-agnostic question generation model called question generation with answer extractor (QGAE) to alleviate the high demand for large-scale QA pairs. QGAE is a multi-task model that requires only raw text as input and achieves two tasks: answer extraction and question generation. We design our model based on the PLM BART[18]; it has dual encoders and one decoder to extract answers and generate questions in parallel. In our study, question generation is the main task and the most challenging part, as with all generation tasks, owing to the high syntactic diversity and semantic substitutability of generated text, so we pay more attention and assign a higher weight to the corresponding module; answer extraction is therefore treated as an auxiliary task. This design not only makes it feasible to turn answer-agnostic question generation into answer-aware question generation but also enables the model to generate question-answer pairs. The contributions of this paper are summarized as follows:

● We are the first to propose an end-to-end PLM-based model, called QGAE, for answer-agnostic question generation.

    ● The QGAE model generates question-answer pairs from unannotated texts without requiring any additional information.

● Our model achieves state-of-the-art performance in generating high-quality question-answer pairs, outperforming existing methods by a significant margin.

The rest of this paper is organized as follows. In Section 2, we review related works on question generation and answer extraction. In Section 3, we formulate the QG and AE tasks. In Section 4, we describe each module of our QGAE model. In Section 5, we introduce our experiments in detail. In Section 6, we conclude this work and give a detailed analysis.

    2 Related works

    2.1 Question generation

The QG field has attracted great interest from researchers for its potential benefits; it has therefore made great progress in application scenarios such as data augmentation[19], chatbots[20], machine reading comprehension[21], and intelligent tutoring[22].

In the neural model age, Du et al.[4] proposed the first neural QG model, focused on answer-agnostic QG. They investigated the effect of encoding sentence- vs. paragraph-level information using an attention-based model and found that as the size of the input text increased, the evaluation score of the output decreased. To deal with rare or unknown words, Gulcehre et al.[23] proposed a copy mechanism, first used in neural machine translation[24] to solve the out-of-vocabulary problem; this mechanism was absorbed into the QG task and is widely used. Drawing on earlier experience with rule-based QG[25], Wu et al.[26] suggested two new strategies for this task: question type prediction and a copy loss mechanism. Du et al.[15] combined answer extraction and question generation in an LSTM[27] model with answer feature embeddings, denoting answer spans with the usual BIO tagging scheme[28].

In the transformer-based[29] PLM era, auto-regressive[30] models, rather than auto-encoder models, are widely picked as baselines for the QG task. Laban et al.[20] fine-tuned GPT-2[31] as the base of a question-driven news chatbot. Wang et al.[32] leveraged BART to propose QAGS (question answering and generation for summarization) to evaluate automatic summarization. Bhambhoria et al.[33] leveraged T5[34] to generate QA pairs for COVID-19 literature. Paranjape et al.[13] developed a retrieve-generate-filter (RGF) technique to create counterfactual evaluation and training data with minimal human supervision, which is a multi-stage pipeline.

The works above motivated us to explicitly infer candidate answers so as to transform answer-agnostic QG into answer-aware QG. Meanwhile, fine-tuned PLMs have achieved SOTA in many NLP fields, becoming benchmarks that are hard to surpass. In multi-stage work, researchers choose different PLMs for different stages of question-answer pair generation, which is effective but heavy. There is still no end-to-end work that handles the whole task. Therefore, we combine answer extraction and question generation using PLMs and propose an end-to-end model that extracts answers and generates questions in parallel.

    2.2 Answer extraction

Information extraction[35,36] (IE) is basically defined as the task of turning the unstructured information expressed in natural language text into a structured 3-tuple representation (NE1; R; NE2). Thus, answer extraction can be seen as a sub-field of IE that aims to pick the most valuable phrase from such tuples, regardless of whether it is a named entity, a relation, or their combination: an event. Many IE systems have been proposed for open domains. Yahya et al.[37] described ReNoun, an open information extraction system that complements previous efforts relying on big knowledge bases by focusing on nominal attributes and on the long tail. Del Corro and Gemulla[38] proposed ClausIE, a novel clause-based approach to open information extraction that extracts relations and their arguments from natural language text. Additionally, some rule-based systems using hand-crafted extraction rules have been proposed, including verb-based rules[39], semantic role labeling[40], and dependency parse trees[41].

In the era of pre-trained models, auto-encoder[42] models such as BERT[43] have made great progress in natural language understanding (NLU) tasks. BERT achieved SOTA on the GLUE[44] benchmark, a multi-task benchmark that includes named entity recognition, a sign that large PLMs are blossoming in the IE field and will take the place of traditional methods.

    3 Task definition

Answer-agnostic question generation. This task aims to generate fluent, meaningful questions $Q=\{q_1,q_2,\cdots,q_n\}$ from an unlabeled input context $C=\{c_1,c_2,\cdots,c_m\}$ without a specific answer, where $n$ is the length of the question sequence and $m$ is the length of the context sequence. During training, the task aims to maximize the conditional probability of $Q$, with all relevant parameters of the model denoted by $\theta$:

$$P(Q \mid C;\theta)=\prod_{t=1}^{n}P(q_t \mid q_{<t},C;\theta), \tag{1}$$

where the probability of each $q_t$ is predicted based on all the words generated previously (i.e., $q_{<t}$) and the context $C$.

In our work, following early works, we split traditional answer-agnostic question generation into two sub-tasks: answer extraction and answer-aware question generation.

Answer extraction. This task supposes there is at least one question-worthy candidate answer in the input context $C=\{c_1,c_2,\cdots,c_m\}$ and returns an answer $A=\{a_i,a_{i+1},\cdots,a_j\}$, where $A$'s span is limited by $C$, so that $1 \le i \le j \le m$.

Answer-aware question generation. This task is similar to answer-agnostic question generation, except that an additional answer $A=\{a_1,a_2,\cdots,a_l\}$ is provided, where $l$ is the length of the answer:

$$P(Q \mid C,A;\theta)=\prod_{t=1}^{n}P(q_t \mid q_{<t},C,A;\theta). \tag{2}$$

    4 Model

    4.1 Foundation model

We choose BART (bidirectional and auto-regressive transformer) as our foundation model. BART is a sequence-to-sequence model that uses a standard transformer-based encoder-decoder architecture, inheriting its encoder from BERT's bidirectional encoder and its decoder from GPT's left-to-right decoder, and it is particularly effective for text generation as well as reading comprehension tasks. One limitation of BART is that it cannot simultaneously perform NLU and NLG (natural language generation) tasks: it excels at tasks such as text generation and reading comprehension individually, but integrating these tasks in a single model remains a challenge. With its strong foundation, however, we believe that BART has the potential to be extended to handle such combined tasks effectively.
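For reference, the snippet below loads a BART checkpoint through the HuggingFace Transformers interface; the `facebook/bart-base` checkpoint name and the toy input are illustrative assumptions, not the authors' exact setup.

```python
# A minimal sketch of loading BART via HuggingFace Transformers; the
# checkpoint name and the sample text are assumptions for illustration.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# BART pairs a BERT-style bidirectional encoder with a GPT-style
# left-to-right decoder, exposed here through the generate() interface.
inputs = tokenizer("The magnetic force on a moving charge follows Lorentz's law.",
                   return_tensors="pt")
ids = model.generate(**inputs, num_beams=4, max_length=20)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```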

    4.2 QGAE

QGAE is a sequence-to-sequence model, as shown in Fig. 2, which mainly adopts BART's architecture while adding an additional encoder, so there are two encoders and one decoder. The model first extracts the phrase with the highest probability as the answer $A$ and rebuilds the input $C$ into $(A, C)$. The model returns the rebuilt input $(A, C)$ and the generated question $Q$.

Fig. 2. The architecture of QGAE consists of two encoders and one decoder, which take raw text as input and generate question-answer pairs.
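To make the flow inside the single model concrete, the sketch below traces one forward pass under the architecture just described; all names (`ae_encoder`, `qg_model`, `extract_best_span`) are hypothetical stand-ins rather than the authors' code.

```python
# A hedged sketch of one QGAE pass: encoder 1 extracts a candidate answer A
# from the raw context C, the input is rebuilt as (A, C), and the second
# encoder plus the decoder generate the question Q. Module and method
# names are illustrative assumptions.
def qgae_generate(context, ae_encoder, qg_model, tokenizer):
    # Stage 1: the answer-extractor encoder picks the highest-scoring span.
    answer = ae_encoder.extract_best_span(context)
    # Stage 2: rebuild <s>C</s> into <s>A</s></s>C</s> and generate Q.
    rebuilt = f"<s>{answer}</s></s>{context}</s>"
    input_ids = tokenizer(rebuilt, add_special_tokens=False,
                          return_tensors="pt").input_ids
    question_ids = qg_model.generate(input_ids, num_beams=10, max_length=20)
    question = tokenizer.decode(question_ids[0], skip_special_tokens=True)
    return answer, question
```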

    4.2.1 Answer extractor encoder

The answer extractor encoder is the first encoder; it is inherited from BART (and is thus similar to BERT) and is used to understand the input context and extract the most valuable phrase. We leverage this encoder by appending an extra linear layer as a classifier to predict the most probable answer span position. Because BART supports at most a pair of sequences as input, we choose the highest-scoring answer among all predictions as the candidate answer. This module handles the first task, answer extraction (AE).
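A minimal sketch of such a span-classification head is given below, assuming the common start/end-logit formulation over the encoder's hidden states; the class name and hidden size are assumptions.

```python
# A sketch of the extra linear classifier described above: it scores each
# encoder position as a potential answer-span start or end. The
# start/end-logit formulation and the hidden size are assumptions.
import torch
import torch.nn as nn

class AnswerSpanHead(nn.Module):
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, 2)  # start and end logits

    def forward(self, encoder_hidden: torch.Tensor):
        # encoder_hidden: (batch, seq_len, hidden_size)
        logits = self.classifier(encoder_hidden)
        start_logits, end_logits = logits.split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)
```

Only the single highest-scoring span would then be kept as the candidate answer, since BART accepts at most one extra input sequence.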

We select cross entropy to calculate the loss of the AE task, where $K$ is the number of classes. In this task, a class $k$ is a position of the answer span in the input paragraph, in the range $[0, m-1]$, where $m$ is the input context length; $x_{i,k}$ indicates that the $i$th sample belongs to the $k$th category. $p$ is the probability distribution of the annotated data, while $q$ is the probability distribution of the predicted data:

$$H(p,q)=-\sum_{k=1}^{K}p_k\log q_k. \tag{3}$$

Concretely, substituting the specific answer into Eq. (3), the equation becomes:

$$\mathcal{L}_{\mathrm{AE}}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K}t_{i,k}\log P(\hat{a}_{i,k}), \tag{4}$$

where $a$ is the labeled answer span used as ground truth, $\hat{a}$ is the predicted candidate answer span, and $N$ is the data size. $t_{i,k}$ indicates whether the true label of the $i$th answer is the $k$th category, and can only take the value 0 or 1.

    4.2.2 Question generation encoder-decoder

The question generation encoder-decoder is mainly derived from BART but adds a unique function: it leverages the candidate answer extracted by the first encoder to rebuild the input 〈s〉C〈/s〉 into the traditional QG input 〈s〉A〈/s〉〈/s〉C〈/s〉. The module then uses the rebuilt input to generate text as BART does. This module handles the second task, question generation (QG).

The loss of the QG task is also cross entropy; the only differences are that we use the labeled questions $q$ as ground truth and the predicted questions $\hat{q}$ as predictions, and that the number of classes $K$ is the vocabulary size of the model:

$$\mathcal{L}_{\mathrm{QG}}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K}t_{i,k}\log P(\hat{q}_{i,k}). \tag{5}$$

    4.2.3 QGAE loss

The QGAE loss is the loss of the multi-task model; in this work, it is the weighted sum of the answer extraction loss and the question generation loss:

$$\mathcal{L}_{\mathrm{QGAE}}=\alpha\,\mathcal{L}_{\mathrm{AE}}+(1-\alpha)\,\mathcal{L}_{\mathrm{QG}}, \tag{6}$$

where $\alpha$ is the weight of the AE task, a hyper-parameter.
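The sketch below spells out this objective in code, assuming the weighted combination in Eq. (6) with the coefficients reported in Section 5.2 (0.3 for AE, 0.7 for QG); the function signature is illustrative.

```python
# A sketch of the multi-task objective in Eq. (6), assuming the span head
# produces start/end logits and the decoder produces vocabulary logits.
import torch.nn.functional as F

def qgae_loss(start_logits, end_logits, start_pos, end_pos,
              qg_logits, question_ids, alpha: float = 0.3):
    # Answer extraction: cross entropy over the span start/end positions.
    loss_ae = 0.5 * (F.cross_entropy(start_logits, start_pos) +
                     F.cross_entropy(end_logits, end_pos))
    # Question generation: token-level cross entropy over the vocabulary.
    loss_qg = F.cross_entropy(qg_logits.view(-1, qg_logits.size(-1)),
                              question_ids.view(-1))
    return alpha * loss_ae + (1.0 - alpha) * loss_qg
```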

    5 Experiments

    5.1 Dataset

The Stanford question answering dataset (SQuAD) is the most famous reading comprehension dataset for the two mutually inverse tasks of question answering and question generation. As Table 2 shows, it has two versions, SQuAD1.1[45] and SQuAD2.0[46], consisting of questions posed by crowdworkers on a set of Wikipedia articles. Each article has several paragraphs, and each paragraph is paired with a set of questions and answers, where the answer to every question is a segment of text, or span, from the corresponding reading passage. In SQuAD2.0, because a percentage of unanswerable questions was added to the dataset, some answers may be null.

Table 2. Statistics of the SQuAD1.1 and SQuAD2.0 datasets. In both datasets, an example consists of a context, a question, and an optional answer. The term "negative example" refers to a context passage paired with an unanswerable question, which is intended to help models learn to identify when a question cannot be answered based on the given context.
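For reference, SQuAD2.0 can be loaded directly from the HuggingFace hub; the dataset id below is the public hub name and is an assumption about the authors' exact data pipeline.

```python
# A minimal sketch of loading SQuAD2.0 with the HuggingFace datasets
# library; the hub id "squad_v2" is the public name, an assumption about
# the authors' setup. Unanswerable questions carry an empty answers list.
from datasets import load_dataset

squad = load_dataset("squad_v2")
example = squad["train"][0]
print(example["context"][:80])
print(example["question"], example["answers"])
```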

    5.2 Experiments settings

We implement our models in the HuggingFace[47] framework and fine-tune them on V100 32 GB GPUs. We first fine-tune BART-base on SQuAD2.0 for 2 epochs to obtain the checkpoint BART-base-SQuAD2.0-2epoch (BbS2). We then use BbS2 to initialize our QGAE model; more specifically, both of QGAE's encoders are initialized from BbS2's encoder, and the linear layers that exist in QGAE but not in BbS2 are initialized randomly. We set the batch size to 20, the number of epochs to 3, the learning rate to 0.00002, the dropout to 0.2, the beam search size to 10, the maximum input length to 1024, the maximum question length to 20, and the minimum question length to 3. We perform gradient descent with the Adam optimizer[48]. The coefficient α of task 1, answer extraction, is 0.3, while the coefficient of the question generation task is 0.7.
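Collected as a plain configuration dict for reference (key names are illustrative, not from the authors' code):

```python
# The Section 5.2 hyper-parameters gathered in one place; key names are
# illustrative. The QG task weight is 1 - alpha = 0.7.
config = {
    "batch_size": 20,
    "epochs": 3,
    "learning_rate": 2e-5,
    "dropout": 0.2,
    "beam_size": 10,
    "max_input_length": 1024,
    "max_question_length": 20,
    "min_question_length": 3,
    "optimizer": "Adam",
    "alpha_answer_extraction": 0.3,
}
```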

    5.3 Evaluation

We report evaluation results with four metrics: BLEU, METEOR, ROUGE-L, and exact match (EM).

BLEU. BLEU is an algorithm originally designed for evaluating machine-translated text between natural languages and later adopted for text generation tasks. BLEU compares the n-grams appearing in candidates and references and punishes overly short sentences with a brevity penalty.

ROUGE. ROUGE is a set of metrics including ROUGE-N, ROUGE-L, and ROUGE-W. In this work, we mainly use ROUGE-L, a statistic based on the longest common sub-sequence (LCS). LCS naturally takes sentence-level structural similarity into account and automatically identifies the longest co-occurring in-sequence n-grams.

METEOR. METEOR is a metric based on the harmonic mean of unigram precision and recall, with recall weighted higher than precision.

Exact match. Exact match measures the percentage of predictions that match any one of the ground-truth answers exactly.

As each paragraph in the SQuAD dataset may have several question-answer pairs, we use paragraphs as input, compare the outputs against the group of question-answer pairs, and keep the highest score, with BLEU-4 as the main indicator.
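A hedged sketch of this best-match scoring is shown below, using NLTK's sentence-level BLEU as a stand-in for the authors' exact scorer.

```python
# A sketch of the paragraph-level scoring described above: a generated
# question is compared against every reference question for the paragraph
# and the best BLEU-4 score is kept. NLTK's sentence_bleu is a stand-in.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def best_bleu4(prediction, references):
    smooth = SmoothingFunction().method1
    hypothesis = prediction.split()
    return max(sentence_bleu([ref.split()], hypothesis,
                             weights=(0.25, 0.25, 0.25, 0.25),
                             smoothing_function=smooth)
               for ref in references)

print(best_bleu4("What law describes the magnetic force?",
                 ["Which law gives the magnetic force on a charge?",
                  "What law describes the magnetic force on a moving charge?"]))
```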

    6 Results and discussion

    6.1 Results

In Table 3, we compare our proposed end-to-end QGAE model with three earlier types of work: standalone answer extraction, standalone answer-agnostic question generation, and multi-stage QA-pair generation pipelines. All baseline results in the experiments are taken from the corresponding papers.

Table 3. Comparison of method performance on the major metrics (including QG metrics and the AE metric) on the SQuAD dataset. These methods are divided into four types according to their primary research fields. The first two categories each focus on their own independent task, while the latter two can accomplish both tasks at the same time.

    (Ⅰ) Standalone answer extraction

KPE. Key phrase extraction (KPE)[49] is part of a neural question-answer pair generation system. It has two approaches: KPE-class and KPE-Gen.

    (Ⅱ) Standalone answer-agnostic question generation

Attention LSTM. Attention LSTM was proposed by Du et al.[4] and was the first work to focus on answer-agnostic QG.

Self-attention transformers. Self-attention transformers[50] explore how transformers can be adapted to neural question generation without constraining the model to focus on a specific answer passage.

Question-driven LSTM. Question-driven LSTM[26] proposed two new strategies, question type prediction and a copy loss mechanism, to address the task.

    (Ⅲ) Multi-stage QA-pair generation pipeline

MCF. Wang et al.[51] proposed a multi-stage framework that can extract question-worthy phrases and improve the performance of question generation. We chose this framework as the baseline for the specific task of generating QA pairs and used it to evaluate performance.

    6.2 Discussion

The results show that our end-to-end QGAE model not only achieves SOTA in the answer extraction task but also greatly improves answer-agnostic question generation compared with traditional encoder-decoder architectures. Even though the multi-stage method MCF has a much more complex workflow, it shows weaker overall performance than our work. What is more, QGAE is lighter, more convenient, and more portable, since it only requires fine-tuning one pre-trained model, whereas multi-stage methods need at least two models for the AE and QG stages.

Although great progress has been made in the EM score, reaching 53.82%, there is still much room for improvement in extraction accuracy. Our model may extract candidate answers that are not the ground truth but are still meaningful, while extraction accuracy is judged by, and limited to, the labeled data. Specifically, the range of candidate answers is very wide, from named entities to relations to events, yet only a small percentage of key phrases are included in the training dataset while others are out of range. Candidate answers beyond the confines of the dataset may steer the subsequent question generation task in the wrong direction, leading to worse scores under traditional machine-translation evaluation metrics. Despite this, predictions that are not in the ground truth are still valuable and reasonable. The high diversity of generated sentences is, to a certain extent, an advantage that makes our model competitive for data augmentation in different scenarios.

It can therefore be concluded that, compared to the baseline model, we have expanded our model's function from generating questions alone to generating QA pairs, performing better than previous work, which shows that our model is both diverse and efficient.

    7 Conclusions

In this paper, our focus is answer-agnostic question generation, which can be extended to question-answer pair generation. This task can be divided into two sub-tasks: answer extraction and question generation. We proposed an end-to-end model called question generation with answer extractor (QGAE) that uses raw text without requiring any additional information and can generate question-answer pairs in parallel. Compared to multi-stage question-answer generation methods, QGAE has several advantages. First, QGAE generates question-answer pairs in parallel, whereas a multi-stage method requires multiple rounds of generation and refinement. Second, it is lighter, more convenient, and more portable to train than multi-stage methods, which reduces the complexity of the overall system. Third, our model achieves a better average score and greater diversity. Overall, QGAE is a more efficient and versatile approach to answer-agnostic question generation, with potential applications in various natural language processing tasks.

In future work, we will try to compile more datasets into one ensemble to improve the accuracy of answer extraction. We will also try to shift our main task toward information retrieval to optimize answer extraction, as the different weights assigned to the sub-tasks lead to an imbalance in the model's focus between them. All in all, this remains pioneering work in adapting pre-trained language models to question-answer pair generation.

    Acknowledgements

This work was supported by the Fundamental Research Funds for the Central Universities (WK3480000010, WK3480000008).

    Conflict of interest

    The authors declare that they have no conflict of interest.

    Biographies

Linfeng Li is currently pursuing a master's degree at the School of Cyber Science and Technology, University of Science and Technology of China. His research interest is natural language processing.

Zhendong Mao received his Ph.D. degree in Computer Application Technology from the Institute of Computing Technology, Chinese Academy of Sciences (CAS), in 2014. From 2014 to 2018, he was an Assistant Professor at the Institute of Information Engineering, CAS. He is currently a Professor at the School of Cyber Science and Technology, University of Science and Technology of China. His research interests include computer vision, natural language processing, and cross-modal understanding.
