
    An Improved End-to-End Memory Network for QA Tasks

    Aziguli Wulamu, Zhenqi Sun, Yonghong Xie, Cong Xu and Alan Yang

    Computers, Materials & Continua, 2019, Issue 9

    Abstract: At present, the End-to-End trainable Memory Network (MemN2N) has proven promising in many deep learning fields, especially on simple natural-language reasoning question and answer (QA) tasks. However, it remains challenging on subtasks such as basic induction, path finding or time reasoning, because of its limited ability to learn useful information between memory and query. In this paper, motivated by the success of attention mechanisms in neural machine translation, we propose MemN2N-GL, a novel end-to-end memory network based on gated linear units (GLU) and local attention. It shows an improved ability to capture complex memory-query relations and works better on some subtasks; it is an improved end-to-end memory network for QA tasks. We demonstrate the effectiveness of these approaches on the bAbI dataset, which includes 20 challenging tasks, without the use of any domain knowledge. Our project is open source on GitHub.

    Keywords: QA system, memory network, local attention, gated linear unit.

    1 Introduction

    The QA problem has been around for a long time. As early as 1950, the British mathematician A. M. Turing put forward in his paper a method to determine whether a machine can think, the Turing Test, which is seen as the blueprint for the QA system [Turing (1950)]. The first generation of intelligent QA systems converted simple natural language questions into pre-set single or multiple keywords and queried a domain-specific database to obtain answers. Their earliest appearance can be traced back to the 1950s and 1960s, when the computer was born. Representative systems include two well-known QA systems, BASEBALL [Green Jr, Wolf, Chomsky et al. (1961)] and LUNAR [Woods and Kaplan (1977)]. They have a database in the background that holds the various data the system can provide. When the user asks a question, the system converts the user's question into a SQL query statement and returns the data retrieved from the database to the user.

    SHRDLU was a highly successful QA program developed by Terry Winograd in the late 1960s and early 1970s [Winograd (1972)]; it simulated the operation of a robot in a toy world (the "blocks world"). The reason for its success was the choice of a specific domain whose physical rules were easily written as programs. The travel information consultation system GUS, developed by Bobrow et al. in 1977, is another successful QA system [Bobrow, Kaplan, Norman et al. (1977)]. In the 1990s, with the development of the Internet, a second generation of question and answer systems emerged. They extract answers from large-scale text or web-based libraries using information retrieval techniques and shallow NLP techniques [Srihari and Li (2000); Voorhees (1999); Zheng (2002)]. A representative example is START (1993), the world's first web-based question answering system, developed by the MIT Artificial Intelligence Lab [Katz (1997)]. In 1999, TREC (the Text REtrieval Conference) began the evaluation of question and answer systems. In October 2000, ACL (the Association for Computational Linguistics) took the open-domain question and answer system as a topic, which promoted the rapid development of question and answer systems. With the rise of Web 2.0 technology, a third generation of question answering systems has developed [Tapeh and Rahgozar (2008)]. It is characterized by high-quality knowledge resources and deep NLP technology. Up to now, in addition to the "Cortana" of Microsoft [Young (2019)], the "Dumi" of Baidu [Zhu, Huang, Chen et al. (2018)] and the "Siri" of Apple [Hoy (2018)], many companies and research groups have also made breakthroughs in this field [Becker and Troendle (2018); Zhou, Gao, Li et al. (2018)].

    Table 1: Samples of three types of tasks

    In such a situation, end-to-end learning frameworks have shown promising performance because of their applicability in real environments and their efficiency in model updating [Shi and Yu (2018); Madotto, Wu and Fung (2018); Li, Wang, Sun et al. (2018); Liu, Tur, Hakkani-Tur et al. (2018)]. In end-to-end dialog systems, the End-to-End Memory Network (MemN2N) and its variants have always been hot research topics [Perez and Liu (2018); Ganhotra (2018)], in light of their powerful ability to describe long-term dependencies [Huang, Qi, Huang et al. (2017)] and their flexibility in implementation.

    Although MemN2N has achieved good performance on the bAbI tasks, where the memory components effectively work as a representation of the context and play a good role in inference, there are still many tasks on which it is not very satisfactory [Shi and Yu (2018)]. In order to find out the reasons for this, we made a careful comparison. We found that the tasks on which MemN2N performs poorly (such as the "3 supporting facts" task in Tab. 1) have in common that they contain many more contextual sentences than the tasks on which it performs well (such as the "yes/no questions" task in Tab. 1). Moreover, when calculating the relevance of the memory and the query, MemN2N attends to all sentences on the memory side [Sukhbaatar, Szlam, Weston et al. (2015)], which is expensive and can potentially render it impractical. Inspired by the field of machine translation [Minh Thang and Hieu Pham (2015)], we introduce a local-attention mechanism when calculating the correlation between memory and query: we do not consider all the information in memory, but a subset of sentences that are more relevant to the query. At the same time, inspired by gated convolutional networks (GCN) [Dauphin, Fan, Auli et al. (2017)], we incorporate the idea of gated linear units (GLU) into MemN2N to update the intermediate state between layers. The purpose of these two improvements is the same: to appropriately reduce the complexity of the model and let it pay more attention to useful information during training.

    We compare our two improved methods to MemN2N on the bAbI tasks and analyze the number of successful tasks and the error rates on the individual tasks. We also analyze them from the perspective of training speed and attention-weight visualization. Experimentally, we demonstrate that both of our approaches are effective.

    In the following sections, we first introduce the applications of the MemN2N model and the innovations of our MemN2N-GL model in the second section; in the third section, we introduce the implementation of our model in detail, including local-attention matching and GLU mapping; then, we show our experimental results in the fourth section and make a comparative analysis against the baseline; finally, in the fifth section, we present our conclusions and plans for future work.

    Figure 1: A single-layer version of MemN2N

    2 Model

    The MemN2N architecture, introduced by Sukhbaatar et al. [Sukhbaatar, Szlam, Weston et al. (2015)], is a hot research topic in the field of QA systems [Ganhotra and Polymenakos (2018); Perez and Liu (2018)]. It is a form of Memory Network (MemNN) [Weston, Chopra and Bordes (2014)], but unlike the model in that work, it is trained end-to-end, which makes it easier to apply in practical situations [Sukhbaatar, Szlam, Weston et al. (2015)]. Compared with the traditional MemNN, MemN2N gets less supervision during training, which reduces some complexity but means it may not be able to fully capture the context information.

    Because of these good characteristics, MemN2N has been used in a wide range of tasks in recent years. Pan et al. introduced a novel neural network architecture called Multi-layer Embedding with Memory Network for the machine reading task, in which a memory network performs full-orientation matching of the query and passage to catch more pivotal information [Pan, Li, Zhao et al. (2017)]. Ghazvininejad et al. presented a novel, fully data-driven and knowledge-grounded neural conversation model aimed at producing more contentful responses [Ghazvininejad, Brockett, Chang et al. (2018)]. In the field of computer vision, Wu et al. proposed a long-term feature bank, which extracts supportive information over the entire span of a video to augment state-of-the-art video models [Wu, Feichtenhofer, Fan et al. (2018)]. Although MemN2N has been widely used in many fields and has achieved good results, it may not scale well to cases where a larger memory is required [Sukhbaatar, Szlam, Weston et al. (2015)].

    In this paper, inspired by the attention mechanism and its many variants in the field of deep learning [Shen, Zhou, Long et al. (2018); Zhang, Goodfellow, Metaxas et al. (2018); Shen, He and Zhang (2018)], we introduce a local-attention mechanism into MemN2N to improve the model. Compared to the global matching between u and each memory m_i used in MemN2N (Eq. (4)), the local-attention mechanism pays more attention to the local information in memory related to the question state u. We also optimize the updating of the hidden state u between layers of MemN2N (Fig. 1). The original method uses a linear mapping H (Eq. (10)), while we draw on the gated linear unit (GLU) proposed by Dauphin et al. [Dauphin, Fan, Auli et al. (2017)]. We compare the model improved on these two points with MemN2N on the same data sets. As a result, our model performs better on more complex QA tasks.

    3 Methods

    In this section, we introduce our proposed model, MemN2N-GL. Our model aims to extract more useful interactions between memory and query to improve the accuracy of MemN2N. Similar to MemN2N, our MemN2N-GL consists of three main components: input memory representation, output memory representation and final answer prediction.

    In the input memory representation, an input set x_1, ..., x_i is converted into memory vectors {m_i} and {c_i} of dimension d in a continuous space, using embedding matrices A and C, both of size d×V, where d is the embedding size and V is the vocabulary size. Similarly, the query q is also embedded (by a matrix B) to an internal state u. We use position encoding (PE) [Sukhbaatar, Szlam, Weston et al. (2015)] to convert word vectors into sentence vectors. This takes the form:
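    Following the position-encoding formulation in Sukhbaatar et al. [Sukhbaatar, Szlam, Weston et al. (2015)], this is presumably

        m_i = Σ_j l_j ⊙ (A x_ij),

    with ⊙ denoting element-wise multiplication and x_ij the j-th word of sentence x_i.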

    where l_j is a column vector with the structure:
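    In the cited formulation, the k-th element of l_j is presumably

        l_kj = (1 - j/J) - (k/d)(1 - 2j/J).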

    J is the number of words in the sentence, and d is the dimension of the embedding. In this way, the position information of the words is taken into account when generating the sentence vector. Questions, memory inputs and memory outputs all use the same representation. In order to give the memory contextual temporal information, we also modify the memory vector by:
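    Combining this with the position encoding above, the memory vector presumably becomes

        m_i = Σ_j l_j ⊙ (A x_ij) + T_A(i),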

    where T_A(i) is the i-th row of a special matrix T_A that encodes temporal information, and T_A is learned during training.

    Then, in the embedding space, MemN2N calculates the relevance score between u and each memory m_i by means of a dot-product match [Minh-Thang Luong (2015)], followed by a softmax:
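    As in the original MemN2N, this score is

        p_i = Softmax(u^T m_i),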

    where the softmax is computed as Softmax(z_i) = e^{z_i} / Σ_j e^{z_j}.

    After applying the softmax function, each component of the resulting vector p will be in the interval (0, 1), and the components will add up to 1, so that they can be interpreted as probabilities. Furthermore, larger input components u^T m_i will correspond to larger probabilities.

    By contrast, we develop a local-attention mechanism to calculate the correlation between u and {m_i} while filtering out irrelevant information. Compared with the attention mechanism used in MemN2N, our model does not focus on the relevance of the global memory to the query, but on the local memory associated with the query.

    3.1 Local-attention matching

    As mentioned before, local-attention matching chooses to focus only on a small subset of the memory, the part more relevant to the query q (Fig. 2). Concretely, the model first generates an aligned position p_u for the query q in the memory embedding A:
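    Following the predictive alignment of Luong et al. [Minh Thang and Hieu Pham (2015)], this presumably takes the form

        p_u = S · δ(v_p^T tanh(W_a u)),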

    where v_p and W_a are model parameters that are learned to predict positions, S is the memory size, δ is the activation function, and p_u ∈ [0, S].

    Then the relevance score between u and each memory m_i is defined as:
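    Adapting the Gaussian window of Luong et al., who set the standard deviation to half the window size, a form consistent with the description below is

        p_i ← p_i · exp(-(i - p_u)^2 / (2σ^2)),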

    where p_i is the original score (Eq. (4)), σ is the standard deviation and D is the window size of the memory subset. Finally, we use the new relevance score p_i to calculate the output memory representation in Fig. 2. The response o from the output memory is a sum of the memory vectors {c_i}, weighted by the input probability vector:
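    As in the original MemN2N, this response is

        o = Σ_i p_i c_i.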

    Finally, in the final answer prediction, the predicted answer distribution â is produced from the sum of the output vector o and the input embedding u, which is then passed through a final weight matrix W (of size V×d) and a softmax:
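    In the single-layer case this is

        â = Softmax(W(o + u)).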

    Figure 2: Local-attention matching

    The above describes the single-layer structure (Fig. 2). For many different types of difficult tasks, the model can be extended to a multi-layer memory structure (Fig. 3), where each memory layer is called a hop; in MemN2N, the (K+1)-th hop's state u^{K+1} is calculated by:
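    With the linear mapping H used in MemN2N, this is

        u^{K+1} = H u^K + o^K.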

    By contrast, we utilize gated linear units (GLU) in our MemN2N-GL. Compared to the linear mapping H or the frequently used nonlinear mapping functions, GLU effectively reduces the vanishing gradient problem while retaining nonlinearity. Our experiments show that it is better suited to MemN2N.

    3.2 Gated linear units mapping

    In our MemN2N-GL, we use the layer-wise form [Sukhbaatar, Szlam, Weston et al. (2015)] (where the memory embeddings are the same across different layers, i.e., A^1 = ... = A^K and C^1 = ... = C^K) to expand the model from a single-layer to a multi-layer structure. In this case, we apply a GLU mapping to the update of u between layers:
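    The gated linear unit of Dauphin et al. [Dauphin, Fan, Auli et al. (2017)] has the general form

        GLU(x) = (W x + b) ⊗ σ(V x + c),

    where ⊗ is element-wise multiplication and σ is the sigmoid function; in our model it plays the role of the linear mapping H in the hop update.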

    where W, V ∈ ℝ^{m×m} and b, c ∈ ℝ^{m×1} are learned parameters, m is the embedding size, and o^K ∈ ℝ^{m×1} is the output of the K-th layer (Fig. 1).

    Then, the predicted answer distribution â is the combination of the input and the output of the top memory layer:

    Figure 3: A K-layer version of MemN2N

    Finally, at the top of the network, we adopt the original approach of combining the input u^K and the output o^K of the top memory layer:
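    As in the original MemN2N, the prediction is

        â = Softmax(W(o^K + u^K)),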

    where W is a parameter matrix learned during training.

    Compared with MemN2N, our model performs better on complex QA problems, which is confirmed by the experimental results in the next section. We believe that the local-attention mechanism removes redundant memory when calculating the correlation between memory and query, so the weight vector obtained is "purer" and contains more useful information. Besides, compared with a linear mapping, the GLU mapping is nonlinear, which gives the model stronger learning ability in the update of u between layers.
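    To make the two modifications concrete, the following is a minimal NumPy sketch of the local-attention re-scoring and the GLU hop update (our own illustration, not the released implementation; the sigmoid form of δ, σ = D/2, and the additive combination of u^K and o^K inside the GLU are assumptions):

        import numpy as np

        def softmax(z):
            z = z - z.max()
            e = np.exp(z)
            return e / e.sum()

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def local_attention_scores(u, M, v_p, W_a, D):
            """Re-weight the memory/query relevance around a predicted position p_u."""
            S = M.shape[0]                              # number of memory slots
            p = softmax(M @ u)                          # global dot-product scores p_i
            p_u = S * sigmoid(v_p @ np.tanh(W_a @ u))   # aligned position in [0, S]; delta = sigmoid (assumed)
            sigma = D / 2.0                             # Luong-style standard deviation (assumed)
            pos = np.arange(S)
            # Gaussian window centred at p_u; scores are used without re-normalization in this sketch
            return p * np.exp(-((pos - p_u) ** 2) / (2.0 * sigma ** 2))

        def glu_update(u_k, o_k, W, V, b, c):
            """GLU mapping between hops; combining u_k and o_k by addition is an assumption."""
            x = u_k + o_k
            return (W @ x + b) * sigmoid(V @ x + c)

        if __name__ == "__main__":
            d, S, D = 4, 6, 4
            rng = np.random.default_rng(0)
            u, M, C = rng.normal(size=d), rng.normal(size=(S, d)), rng.normal(size=(S, d))
            p = local_attention_scores(u, M, rng.normal(size=d), rng.normal(size=(d, d)), D)
            o = p @ C                                   # response: weighted sum of output memories c_i
            u_next = glu_update(u, o, rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                                rng.normal(size=d), rng.normal(size=d))
            print(u_next.shape)                         # (4,)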

    4 Experiments and results

    We perform experiments on the bAbI dataset [Weston, Bordes, Chopra et al. (2015)], which contains 20 subtasks. Each subtask consists of three parts: the context statements of the problem, the question, and the correct answer. Samples of three of the tasks are shown in Tab. 1. For each question, only certain subsets of the statements contain the information needed for the answer, while the other statements are basically unrelated interference. The difficulty of the various subtasks differs, which is reflected in the number of interference statements. During training, we use the 10K dataset; our goal is to improve the ability of the model to answer questions correctly based on context.

    4.1 Training details

    We perform our experiments with the following hyper-parameter values: embedding dimension embed_size = 128, learning rate λ = 0.01, batch size batch_size = 32, number of layers K = 3, memory capacity memory_size = 50 and max gradient norm for clipping max_clip = 40.0. We also applied some tricks during training; for example, the learning rate of our model is adjusted automatically according to the change of the loss: if the loss value does not decrease but increases between adjacent training epochs, the learning rate is reduced to 2/3 of its current value. Training terminates when the loss value is less than a certain threshold (0.001 in our experiments), or when the number of training epochs reaches the upper limit. The training time varies with the difficulty of the different subtasks, but all of them finish within one day.
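    A minimal Python sketch of this schedule, assuming a helper train_one_epoch(lr) that runs one epoch and returns its loss (the epoch cap of 200 is a placeholder, not a value from the paper):

        def fit(train_one_epoch, lr=0.01, loss_threshold=0.001, max_epochs=200):
            """Loss-driven learning-rate schedule described above."""
            prev_loss = float("inf")
            for _ in range(max_epochs):
                loss = train_one_epoch(lr)
                if loss > prev_loss:          # loss increased between adjacent epochs
                    lr *= 2.0 / 3.0           # cut the learning rate to 2/3 of its current value
                if loss < loss_threshold:     # terminate once the loss drops below the threshold
                    break
                prev_loss = loss
            return lr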

    Table 2: Test error rates (%) on the 20 QA tasks

    Task                           MemN2N   MemN2N (Local-Attention)   MemN2N (GLU)   MemN2N-GL
    5: 3 argument relations          9.5            11.2                   3.2            0.9
    6: yes/no questions             50.0            50.0                  25.2           48.2
    7: counting                     50.3            11.5                  16.1           10.9
    8: lists/sets                    8.7             7.3                   6.0            5.6
    9: simple negation              12.3            35.2                  13.1            4.4
    10: indefinite knowledge        11.3             2.6                   3.9           13.9
    11: basic coreference           15.9            18.0                  40.9            9.0
    12: conjunction                  0.0             0.0                   2.8            0.0
    13: compound coreference        46.1            21.3                  18.8            1.4
    14: time reasoning               6.9             5.8                   4.4           10.3
    15: basic deduction             43.7            75.8                   2.4            0.0
    16: basic induction             53.3            52.5                  57.8           53.1
    17: positional reasoning        46.4            50.8                  48.6           46.2
    18: size reasoning               9.7             7.4                  13.6           12.9
    19: path finding                89.1            89.3                  14.5           24.7
    20: agent's motivation           0.6             0.0                   1.7            0.0
    Mean error (%)                  30.2            29.2                  21.2           19.0
    Successful tasks (err < 5%)      4               5                     8              8

    Table 3: The visualized attention weights of the layers

    4.2 Results and analysis

    Our baseline is MemN2N. We try three different combinations of improvements and compare their performance on the different subtasks (as shown in Tab. 2). MemN2N (Local-Attention) and MemN2N (GLU) indicate models that only add the local-attention mechanism and the GLU mechanism, respectively; MemN2N-GL means that both improvements are present simultaneously. The numbers in the table are the error rates on each subtask, and the bold numbers mark the best-performing model on the same subtask. In the last two rows of the table, we report the mean error rates of all models and the number of successful tasks (subtasks with error rates less than 5%).

    In terms of results, MemN2N-GL achieves the best results both in mean error rate and in the number of successful tasks. Compared with MemN2N, the mean error rate is reduced by 37.09%, and the number of successful tasks doubles from four to eight. MemN2N (Local-Attention) and MemN2N (GLU) have their own advantages and disadvantages, but both perform better than MemN2N.

    4.3 Related tasks

    In addition to comparing the results of each model on the different tasks in Tab. 2, we use a specific example to analyze the result through the visualized weights of the layers. As shown in Tab. 3, the memory sentence most relevant to the query "Who did Fred give the apple to?" is the first memory sentence: "Fred gave the apple to Bill". After training, MemN2N does not focus on the memory sentences of greater relevance, which leads to a wrong answer. Our model, by contrast, pays close attention to the contextual information that is highly relevant to the query, which is reflected in the size of the correlation weights at each layer: the darker the color, the greater the weight.

    5 Conclusion and future work

    In this paper we proposed two improvements to the MemN2N model for the QA problem and performed an empirical evaluation on the bAbI dataset. The experimental results show that our improved model performs better than the original model, which strongly confirms our conjecture that the model should pay more attention to useful information during training. In the future, we plan to further improve the ability of the model to handle complex tasks. At the same time, we are going to test our model on more datasets. We also intend to combine our model with recent research results such as BERT (Bidirectional Encoder Representations from Transformers) and use our model as a downstream component to see whether it achieves better results.

    Acknowledgement: This work is supported by the National Key Research and Development Program of China under Grant 2017YFB1002304 and the National Natural Science Foundation of China (No. 61672178).
